
Merge tag 'drm-misc-next-2025-11-05-1' of https://gitlab.freedesktop.org/drm/misc/kernel into drm-next

drm-misc-next for v6.19-rc1:

UAPI Changes:
- Add userptr support to ivpu.
- Add IOCTLs for resource and telemetry data in amdxdna.

Core Changes:
- Improve handling of atomic state checks.
- drm/client updates.
- Use forward declarations instead of including drm_print.h
- Use allocation flags in ttm_pool/device_init, allow specifying a maximum
  useful pool size, and propagate ENOSPC.
- Updates and fixes to scheduler and bridge code.
- Add support for quirking DisplayID checksum errors.

Driver Changes:
- Assorted cleanups and fixes in rcar-du, accel/ivpu, panel/nv3052cf,
  sti, imxm, accel/qaic, accel/amdxdna, imagination, tidss, panthor,
  vkms.
- Add Samsung S6E3FC2X01 DDIC (AMS641RW), Synaptics TDDI series DSI, and
  Tianma TL121BVMS07-00 (Ilitek IL79900A) panels.
- Add Mali GPU support for the MediaTek MT8196 SoC.
- Add etnaviv GC8000 Nano Ultra VIP r6205 support.
- Document PowerVR GE7800 support in the devicetree bindings.
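
The new panel and GPU devicetree bindings in this pull can be validated locally with the kernel's standard binding check; a minimal sketch, assuming a configured kernel tree with the dtschema tools installed (the schema path below is one of the files added by this series):

```shell
# Validate a single new binding schema and its example nodes.
# DT_SCHEMA_FILES restricts the check to the named schema instead
# of the whole bindings tree.
make dt_binding_check \
    DT_SCHEMA_FILES=Documentation/devicetree/bindings/display/panel/samsung,s6e3fc2x01.yaml
```

The same invocation works for the other schemas touched here (ilitek,il79900a.yaml, synaptics,td4300-panel.yaml, arm,mali-valhall-csf.yaml) by swapping the path.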

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patch.msgid.link/5afae707-c9aa-4a47-b726-5e1f1aa7a106@linux.intel.com
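
For local testing, the tag named above can be fetched directly from the listed remote; a minimal sketch (the remote name `drm-misc` is an arbitrary local choice):

```shell
# Add the drm-misc tree as a remote and fetch just this tag.
git remote add drm-misc https://gitlab.freedesktop.org/drm/misc/kernel
git fetch drm-misc tag drm-misc-next-2025-11-05-1

# Check out the tagged state for building and testing.
git checkout drm-misc-next-2025-11-05-1
```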

+3740 -1377
+4 -4
Documentation/accel/qaic/qaic.rst
··· 36 36 This mitigation in QAIC is very effective. The same lprnet usecase that 37 37 generates 100k IRQs per second (per /proc/interrupts) is reduced to roughly 64 38 38 IRQs over 5 minutes while keeping the host system stable, and having the same 39 - workload throughput performance (within run to run noise variation). 39 + workload throughput performance (within run-to-run noise variation). 40 40 41 41 Single MSI Mode 42 42 --------------- ··· 49 49 To support this fallback, we allow the case where only one MSI is able to be 50 50 allocated, and share that one MSI between MHI and the DBCs. The device detects 51 51 when only one MSI has been configured and directs the interrupts for the DBCs 52 - to the interrupt normally used for MHI. Unfortunately this means that the 52 + to the interrupt normally used for MHI. Unfortunately, this means that the 53 53 interrupt handlers for every DBC and MHI wake up for every interrupt that 54 54 arrives; however, the DBC threaded irq handlers only are started when work to be 55 55 done is detected (MHI will always start its threaded handler). ··· 62 62 Neural Network Control (NNC) Protocol 63 63 ===================================== 64 64 65 - The implementation of NNC is split between the KMD (QAIC) and UMD. In general 65 + The implementation of NNC is split between the KMD (QAIC) and UMD. In general, 66 66 QAIC understands how to encode/decode NNC wire protocol, and elements of the 67 - protocol which require kernel space knowledge to process (for example, mapping 67 + protocol which requires kernel space knowledge to process (for example, mapping 68 68 host memory to device IOVAs). QAIC understands the structure of a message, and 69 69 all of the transactions. QAIC does not understand commands (the payload of a 70 70 passthrough transaction).
+2 -1
Documentation/devicetree/bindings/display/bridge/renesas,dsi-csi2-tx.yaml
··· 157 157 158 158 panel@0 { 159 159 reg = <0>; 160 - compatible = "raspberrypi,dsi-7inch"; 160 + compatible = "raspberrypi,dsi-7inch", "ilitek,ili9881c"; 161 + power-supply = <&vcc_lcd_reg>; 161 162 162 163 port { 163 164 panel_in: endpoint {
+68
Documentation/devicetree/bindings/display/panel/ilitek,il79900a.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/panel/ilitek,il79900a.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Ilitek IL79900a based MIPI-DSI panels 8 + 9 + maintainers: 10 + - Langyan Ye <yelangyan@huaqin.corp-partner.google.com> 11 + 12 + allOf: 13 + - $ref: panel-common.yaml# 14 + 15 + properties: 16 + compatible: 17 + items: 18 + - enum: 19 + - tianma,tl121bvms07-00 20 + - const: ilitek,il79900a 21 + 22 + reg: 23 + maxItems: 1 24 + description: DSI virtual channel used by the panel 25 + 26 + enable-gpios: 27 + maxItems: 1 28 + description: GPIO specifier for the enable pin 29 + 30 + avdd-supply: 31 + description: Positive analog voltage supply (AVDD) 32 + 33 + avee-supply: 34 + description: Negative analog voltage supply (AVEE) 35 + 36 + pp1800-supply: 37 + description: 1.8V logic voltage supply 38 + 39 + backlight: true 40 + 41 + required: 42 + - compatible 43 + - reg 44 + - enable-gpios 45 + - avdd-supply 46 + - avee-supply 47 + - pp1800-supply 48 + 49 + additionalProperties: false 50 + 51 + examples: 52 + - | 53 + dsi { 54 + #address-cells = <1>; 55 + #size-cells = <0>; 56 + 57 + panel@0 { 58 + compatible = "tianma,tl121bvms07-00", "ilitek,il79900a"; 59 + reg = <0>; 60 + enable-gpios = <&pio 25 0>; 61 + avdd-supply = <&reg_avdd>; 62 + avee-supply = <&reg_avee>; 63 + pp1800-supply = <&reg_pp1800>; 64 + backlight = <&backlight>; 65 + }; 66 + }; 67 + 68 + ...
-3
Documentation/devicetree/bindings/display/panel/panel-simple-dsi.yaml
··· 56 56 - panasonic,vvx10f034n00 57 57 # Samsung s6e3fa7 1080x2220 based AMS559NK06 AMOLED panel 58 58 - samsung,s6e3fa7-ams559nk06 59 - # Samsung s6e3fc2x01 1080x2340 AMOLED panel 60 - - samsung,s6e3fc2x01 61 59 # Samsung sofef00 1080x2280 AMOLED panel 62 60 - samsung,sofef00 63 61 # Shangai Top Display Optoelectronics 7" TL070WSH30 1024x600 TFT LCD panel ··· 78 80 properties: 79 81 compatible: 80 82 enum: 81 - - samsung,s6e3fc2x01 82 83 - samsung,sofef00 83 84 then: 84 85 properties:
+2
Documentation/devicetree/bindings/display/panel/samsung,atna33xc20.yaml
··· 33 33 - samsung,atna45dc02 34 34 # Samsung 15.6" 3K (2880x1620 pixels) eDP AMOLED panel 35 35 - samsung,atna56ac03 36 + # Samsung 16.0" 3K (2880x1800 pixels) eDP AMOLED panel 37 + - samsung,atna60cl08 36 38 - const: samsung,atna33xc20 37 39 38 40 enable-gpios: true
+81
Documentation/devicetree/bindings/display/panel/samsung,s6e3fc2x01.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/panel/samsung,s6e3fc2x01.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Samsung S6E3FC2X01 AMOLED DDIC 8 + 9 + description: The S6E3FC2X01 is display driver IC with connected panel. 10 + 11 + maintainers: 12 + - David Heidelberg <david@ixit.cz> 13 + 14 + allOf: 15 + - $ref: panel-common.yaml# 16 + 17 + properties: 18 + compatible: 19 + items: 20 + - enum: 21 + # Samsung 6.41 inch, 1080x2340 pixels, 19.5:9 ratio 22 + - samsung,s6e3fc2x01-ams641rw 23 + - const: samsung,s6e3fc2x01 24 + 25 + reg: 26 + maxItems: 1 27 + 28 + reset-gpios: true 29 + 30 + port: true 31 + 32 + vddio-supply: 33 + description: VDD regulator 34 + 35 + vci-supply: 36 + description: VCI regulator 37 + 38 + poc-supply: 39 + description: POC regulator 40 + 41 + required: 42 + - compatible 43 + - reset-gpios 44 + - vddio-supply 45 + - vci-supply 46 + - poc-supply 47 + 48 + unevaluatedProperties: false 49 + 50 + examples: 51 + - | 52 + #include <dt-bindings/gpio/gpio.h> 53 + 54 + dsi { 55 + #address-cells = <1>; 56 + #size-cells = <0>; 57 + 58 + panel@0 { 59 + compatible = "samsung,s6e3fc2x01-ams641rw", "samsung,s6e3fc2x01"; 60 + reg = <0>; 61 + 62 + vddio-supply = <&vreg_l14a_1p88>; 63 + vci-supply = <&s2dos05_buck1>; 64 + poc-supply = <&s2dos05_ldo1>; 65 + 66 + te-gpios = <&tlmm 10 GPIO_ACTIVE_HIGH>; 67 + reset-gpios = <&tlmm 6 GPIO_ACTIVE_HIGH>; 68 + 69 + pinctrl-0 = <&sde_dsi_active &sde_te_active_sleep>; 70 + pinctrl-1 = <&sde_dsi_suspend &sde_te_active_sleep>; 71 + pinctrl-names = "default", "sleep"; 72 + 73 + port { 74 + panel_in: endpoint { 75 + remote-endpoint = <&mdss_dsi0_out>; 76 + }; 77 + }; 78 + }; 79 + }; 80 + 81 + ...
+89
Documentation/devicetree/bindings/display/panel/synaptics,td4300-panel.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/panel/synaptics,td4300-panel.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Synaptics TDDI Display Panel Controller 8 + 9 + maintainers: 10 + - Kaustabh Chakraborty <kauschluss@disroot.org> 11 + 12 + allOf: 13 + - $ref: panel-common.yaml# 14 + 15 + properties: 16 + compatible: 17 + enum: 18 + - syna,td4101-panel 19 + - syna,td4300-panel 20 + 21 + reg: 22 + maxItems: 1 23 + 24 + vio-supply: 25 + description: core I/O voltage supply 26 + 27 + vsn-supply: 28 + description: negative voltage supply for analog circuits 29 + 30 + vsp-supply: 31 + description: positive voltage supply for analog circuits 32 + 33 + backlight-gpios: 34 + maxItems: 1 35 + description: backlight enable GPIO 36 + 37 + reset-gpios: true 38 + width-mm: true 39 + height-mm: true 40 + panel-timing: true 41 + 42 + required: 43 + - compatible 44 + - reg 45 + - width-mm 46 + - height-mm 47 + - panel-timing 48 + 49 + additionalProperties: false 50 + 51 + examples: 52 + - | 53 + #include <dt-bindings/gpio/gpio.h> 54 + 55 + dsi { 56 + #address-cells = <1>; 57 + #size-cells = <0>; 58 + 59 + panel@0 { 60 + compatible = "syna,td4300-panel"; 61 + reg = <0>; 62 + 63 + vio-supply = <&panel_vio_reg>; 64 + vsn-supply = <&panel_vsn_reg>; 65 + vsp-supply = <&panel_vsp_reg>; 66 + 67 + backlight-gpios = <&gpd3 5 GPIO_ACTIVE_LOW>; 68 + reset-gpios = <&gpd3 4 GPIO_ACTIVE_LOW>; 69 + 70 + width-mm = <68>; 71 + height-mm = <121>; 72 + 73 + panel-timing { 74 + clock-frequency = <144389520>; 75 + 76 + hactive = <1080>; 77 + hsync-len = <4>; 78 + hfront-porch = <120>; 79 + hback-porch = <32>; 80 + 81 + vactive = <1920>; 82 + vsync-len = <2>; 83 + vfront-porch = <21>; 84 + vback-porch = <4>; 85 + }; 86 + }; 87 + }; 88 + 89 + ...
+37 -1
Documentation/devicetree/bindings/gpu/arm,mali-valhall-csf.yaml
··· 19 19 - items: 20 20 - enum: 21 21 - mediatek,mt8196-mali 22 + - nxp,imx95-mali # G310 22 23 - rockchip,rk3588-mali 23 24 - const: arm,mali-valhall-csf # Mali Valhall GPU model/revision is fully discoverable 24 25 ··· 46 45 minItems: 1 47 46 items: 48 47 - const: core 49 - - const: coregroup 48 + - enum: 49 + - coregroup 50 + - stacks 50 51 - const: stacks 51 52 52 53 mali-supply: true ··· 113 110 power-domain-names: false 114 111 required: 115 112 - mali-supply 113 + - if: 114 + properties: 115 + compatible: 116 + contains: 117 + const: mediatek,mt8196-mali 118 + then: 119 + properties: 120 + mali-supply: false 121 + sram-supply: false 122 + operating-points-v2: false 123 + power-domains: 124 + maxItems: 1 125 + power-domain-names: false 126 + clocks: 127 + maxItems: 2 128 + clock-names: 129 + items: 130 + - const: core 131 + - const: stacks 132 + required: 133 + - power-domains 116 134 117 135 examples: 118 136 - | ··· 168 144 opp-microvolt = <675000 675000 850000>; 169 145 }; 170 146 }; 147 + }; 148 + - | 149 + gpu@48000000 { 150 + compatible = "mediatek,mt8196-mali", "arm,mali-valhall-csf"; 151 + reg = <0x48000000 0x480000>; 152 + clocks = <&gpufreq 0>, <&gpufreq 1>; 153 + clock-names = "core", "stacks"; 154 + interrupts = <GIC_SPI 606 IRQ_TYPE_LEVEL_HIGH 0>, 155 + <GIC_SPI 605 IRQ_TYPE_LEVEL_HIGH 0>, 156 + <GIC_SPI 604 IRQ_TYPE_LEVEL_HIGH 0>; 157 + interrupt-names = "job", "mmu", "gpu"; 158 + power-domains = <&gpufreq>; 171 159 }; 172 160 173 161 ...
+7 -2
Documentation/devicetree/bindings/gpu/img,powervr-rogue.yaml
··· 20 20 - const: img,img-gx6250 21 21 - const: img,img-rogue 22 22 - items: 23 + - const: renesas,r8a77965-gpu 24 + - const: img,img-ge7800 25 + - const: img,img-rogue 26 + - items: 23 27 - enum: 24 28 - ti,am62-gpu 25 29 - const: img,img-axe-1-16m ··· 104 100 clocks: 105 101 maxItems: 1 106 102 107 - 108 103 - if: 109 104 properties: 110 105 compatible: 111 106 contains: 112 107 enum: 108 + - img,img-ge7800 113 109 - img,img-gx6250 114 110 - thead,th1520-gpu 115 111 then: ··· 139 135 compatible: 140 136 contains: 141 137 enum: 142 - - img,img-gx6250 143 138 - img,img-bxs-4-64 139 + - img,img-ge7800 140 + - img,img-gx6250 144 141 then: 145 142 properties: 146 143 power-domains:
+8 -11
Documentation/gpu/vkms.rst
··· 159 159 160 160 sudo systemctl isolate graphical.target 161 161 162 - Once you are in text only mode, you can run tests using the --device switch 163 - or IGT_DEVICE variable to specify the device filter for the driver we want 164 - to test. IGT_DEVICE can also be used with the run-test.sh script to run the 162 + Once you are in text only mode, you can run tests using the IGT_FORCE_DRIVER 163 + variable to specify the device filter for the driver we want to test. 164 + IGT_FORCE_DRIVER can also be used with the run-tests.sh script to run the 165 165 tests for a specific driver:: 166 166 167 - sudo ./build/tests/<name of test> --device "sys:/sys/devices/platform/vkms" 168 - sudo IGT_DEVICE="sys:/sys/devices/platform/vkms" ./build/tests/<name of test> 169 - sudo IGT_DEVICE="sys:/sys/devices/platform/vkms" ./scripts/run-tests.sh -t <name of test> 167 + sudo IGT_FORCE_DRIVER="vkms" ./build/tests/<name of test> 168 + sudo IGT_FORCE_DRIVER="vkms" ./scripts/run-tests.sh -t <name of test> 170 169 171 170 For example, to test the functionality of the writeback library, 172 171 we can run the kms_writeback test:: 173 172 174 - sudo ./build/tests/kms_writeback --device "sys:/sys/devices/platform/vkms" 175 - sudo IGT_DEVICE="sys:/sys/devices/platform/vkms" ./build/tests/kms_writeback 176 - sudo IGT_DEVICE="sys:/sys/devices/platform/vkms" ./scripts/run-tests.sh -t kms_writeback 173 + sudo IGT_FORCE_DRIVER="vkms" ./build/tests/kms_writeback 174 + sudo IGT_FORCE_DRIVER="vkms" ./scripts/run-tests.sh -t kms_writeback 177 175 178 176 You can also run subtests if you do not want to run the entire test:: 179 177 180 - sudo ./build/tests/kms_flip --run-subtest basic-plain-flip --device "sys:/sys/devices/platform/vkms" 181 - sudo IGT_DEVICE="sys:/sys/devices/platform/vkms" ./build/tests/kms_flip --run-subtest basic-plain-flip 178 + sudo IGT_FORCE_DRIVER="vkms" ./build/tests/kms_flip --run-subtest basic-plain-flip 182 179 183 180 Testing With KUnit 184 181 ==================
+6
MAINTAINERS
··· 8068 8068 F: Documentation/devicetree/bindings/display/panel/samsung,s6d7aa0.yaml 8069 8069 F: drivers/gpu/drm/panel/panel-samsung-s6d7aa0.c 8070 8070 8071 + DRM DRIVER FOR SAMSUNG S6E3FC2X01 DDIC 8072 + M: David Heidelberg <david@ixit.cz> 8073 + S: Maintained 8074 + F: Documentation/devicetree/bindings/display/panel/samsung,s6e3fc2x01.yaml 8075 + F: drivers/gpu/drm/panel/panel-samsung-s6e3fc2x01.c 8076 + 8071 8077 DRM DRIVER FOR SAMSUNG S6E3HA8 PANELS 8072 8078 M: Dzmitry Sankouski <dsankouski@gmail.com> 8073 8079 S: Maintained
+187 -129
arch/arm/boot/dts/st/stih410.dtsi
··· 34 34 status = "disabled"; 35 35 }; 36 36 37 + display-subsystem { 38 + compatible = "st,sti-display-subsystem"; 39 + ports = <&compositor>, <&hqvdp>, <&tvout>, <&sti_hdmi>; 40 + 41 + assigned-clocks = <&clk_s_d2_quadfs 0>, 42 + <&clk_s_d2_quadfs 1>, 43 + <&clk_s_c0_pll1 0>, 44 + <&clk_s_c0_flexgen CLK_COMPO_DVP>, 45 + <&clk_s_c0_flexgen CLK_MAIN_DISP>, 46 + <&clk_s_d2_flexgen CLK_PIX_MAIN_DISP>, 47 + <&clk_s_d2_flexgen CLK_PIX_AUX_DISP>, 48 + <&clk_s_d2_flexgen CLK_PIX_GDP1>, 49 + <&clk_s_d2_flexgen CLK_PIX_GDP2>, 50 + <&clk_s_d2_flexgen CLK_PIX_GDP3>, 51 + <&clk_s_d2_flexgen CLK_PIX_GDP4>; 52 + 53 + assigned-clock-parents = <0>, 54 + <0>, 55 + <0>, 56 + <&clk_s_c0_pll1 0>, 57 + <&clk_s_c0_pll1 0>, 58 + <&clk_s_d2_quadfs 0>, 59 + <&clk_s_d2_quadfs 1>, 60 + <&clk_s_d2_quadfs 0>, 61 + <&clk_s_d2_quadfs 0>, 62 + <&clk_s_d2_quadfs 0>, 63 + <&clk_s_d2_quadfs 0>; 64 + 65 + assigned-clock-rates = <297000000>, 66 + <297000000>, 67 + <0>, 68 + <400000000>, 69 + <400000000>; 70 + }; 71 + 37 72 soc { 38 73 ohci0: usb@9a03c00 { 39 74 compatible = "st,st-ohci-300x"; ··· 134 99 status = "disabled"; 135 100 }; 136 101 137 - sti-display-subsystem@0 { 138 - compatible = "st,sti-display-subsystem"; 139 - #address-cells = <1>; 140 - #size-cells = <1>; 102 + compositor: display-controller@9d11000 { 103 + compatible = "st,stih407-compositor"; 104 + reg = <0x9d11000 0x1000>; 141 105 142 - reg = <0 0>; 143 - assigned-clocks = <&clk_s_d2_quadfs 0>, 144 - <&clk_s_d2_quadfs 1>, 145 - <&clk_s_c0_pll1 0>, 146 - <&clk_s_c0_flexgen CLK_COMPO_DVP>, 147 - <&clk_s_c0_flexgen CLK_MAIN_DISP>, 148 - <&clk_s_d2_flexgen CLK_PIX_MAIN_DISP>, 149 - <&clk_s_d2_flexgen CLK_PIX_AUX_DISP>, 150 - <&clk_s_d2_flexgen CLK_PIX_GDP1>, 151 - <&clk_s_d2_flexgen CLK_PIX_GDP2>, 152 - <&clk_s_d2_flexgen CLK_PIX_GDP3>, 153 - <&clk_s_d2_flexgen CLK_PIX_GDP4>; 106 + clock-names = "compo_main", 107 + "compo_aux", 108 + "pix_main", 109 + "pix_aux", 110 + "pix_gdp1", 111 + "pix_gdp2", 112 + "pix_gdp3", 113 + "pix_gdp4", 
114 + "main_parent", 115 + "aux_parent"; 154 116 155 - assigned-clock-parents = <0>, 156 - <0>, 157 - <0>, 158 - <&clk_s_c0_pll1 0>, 159 - <&clk_s_c0_pll1 0>, 117 + clocks = <&clk_s_c0_flexgen CLK_COMPO_DVP>, 118 + <&clk_s_c0_flexgen CLK_COMPO_DVP>, 119 + <&clk_s_d2_flexgen CLK_PIX_MAIN_DISP>, 120 + <&clk_s_d2_flexgen CLK_PIX_AUX_DISP>, 121 + <&clk_s_d2_flexgen CLK_PIX_GDP1>, 122 + <&clk_s_d2_flexgen CLK_PIX_GDP2>, 123 + <&clk_s_d2_flexgen CLK_PIX_GDP3>, 124 + <&clk_s_d2_flexgen CLK_PIX_GDP4>, 125 + <&clk_s_d2_quadfs 0>, 126 + <&clk_s_d2_quadfs 1>; 127 + 128 + reset-names = "compo-main", "compo-aux"; 129 + resets = <&softreset STIH407_COMPO_SOFTRESET>, 130 + <&softreset STIH407_COMPO_SOFTRESET>; 131 + st,vtg = <&vtg_main>, <&vtg_aux>; 132 + 133 + ports { 134 + #address-cells = <1>; 135 + #size-cells = <0>; 136 + 137 + port@0 { 138 + reg = <0>; 139 + compo_main_out: endpoint { 140 + remote-endpoint = <&tvout_in0>; 141 + }; 142 + }; 143 + 144 + port@1 { 145 + reg = <1>; 146 + compo_aux_out: endpoint { 147 + remote-endpoint = <&tvout_in1>; 148 + }; 149 + }; 150 + }; 151 + }; 152 + 153 + tvout: encoder@8d08000 { 154 + compatible = "st,stih407-tvout"; 155 + reg = <0x8d08000 0x1000>; 156 + reg-names = "tvout-reg"; 157 + reset-names = "tvout"; 158 + resets = <&softreset STIH407_HDTVOUT_SOFTRESET>; 159 + assigned-clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>, 160 + <&clk_s_d2_flexgen CLK_TMDS_HDMI>, 161 + <&clk_s_d2_flexgen CLK_REF_HDMIPHY>, 162 + <&clk_s_d0_flexgen CLK_PCM_0>, 163 + <&clk_s_d2_flexgen CLK_PIX_HDDAC>, 164 + <&clk_s_d2_flexgen CLK_HDDAC>; 165 + 166 + assigned-clock-parents = <&clk_s_d2_quadfs 0>, 167 + <&clk_tmdsout_hdmi>, 160 168 <&clk_s_d2_quadfs 0>, 161 - <&clk_s_d2_quadfs 1>, 162 - <&clk_s_d2_quadfs 0>, 163 - <&clk_s_d2_quadfs 0>, 169 + <&clk_s_d0_quadfs 0>, 164 170 <&clk_s_d2_quadfs 0>, 165 171 <&clk_s_d2_quadfs 0>; 166 172 167 - assigned-clock-rates = <297000000>, 168 - <297000000>, 169 - <0>, 170 - <400000000>, 171 - <400000000>; 172 - 173 - ranges; 174 
- 175 - sti-compositor@9d11000 { 176 - compatible = "st,stih407-compositor"; 177 - reg = <0x9d11000 0x1000>; 178 - 179 - clock-names = "compo_main", 180 - "compo_aux", 181 - "pix_main", 182 - "pix_aux", 183 - "pix_gdp1", 184 - "pix_gdp2", 185 - "pix_gdp3", 186 - "pix_gdp4", 187 - "main_parent", 188 - "aux_parent"; 189 - 190 - clocks = <&clk_s_c0_flexgen CLK_COMPO_DVP>, 191 - <&clk_s_c0_flexgen CLK_COMPO_DVP>, 192 - <&clk_s_d2_flexgen CLK_PIX_MAIN_DISP>, 193 - <&clk_s_d2_flexgen CLK_PIX_AUX_DISP>, 194 - <&clk_s_d2_flexgen CLK_PIX_GDP1>, 195 - <&clk_s_d2_flexgen CLK_PIX_GDP2>, 196 - <&clk_s_d2_flexgen CLK_PIX_GDP3>, 197 - <&clk_s_d2_flexgen CLK_PIX_GDP4>, 198 - <&clk_s_d2_quadfs 0>, 199 - <&clk_s_d2_quadfs 1>; 200 - 201 - reset-names = "compo-main", "compo-aux"; 202 - resets = <&softreset STIH407_COMPO_SOFTRESET>, 203 - <&softreset STIH407_COMPO_SOFTRESET>; 204 - st,vtg = <&vtg_main>, <&vtg_aux>; 205 - }; 206 - 207 - sti-tvout@8d08000 { 208 - compatible = "st,stih407-tvout"; 209 - reg = <0x8d08000 0x1000>; 210 - reg-names = "tvout-reg"; 211 - reset-names = "tvout"; 212 - resets = <&softreset STIH407_HDTVOUT_SOFTRESET>; 173 + ports { 213 174 #address-cells = <1>; 214 - #size-cells = <1>; 215 - assigned-clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>, 216 - <&clk_s_d2_flexgen CLK_TMDS_HDMI>, 217 - <&clk_s_d2_flexgen CLK_REF_HDMIPHY>, 218 - <&clk_s_d0_flexgen CLK_PCM_0>, 219 - <&clk_s_d2_flexgen CLK_PIX_HDDAC>, 220 - <&clk_s_d2_flexgen CLK_HDDAC>; 175 + #size-cells = <0>; 221 176 222 - assigned-clock-parents = <&clk_s_d2_quadfs 0>, 223 - <&clk_tmdsout_hdmi>, 224 - <&clk_s_d2_quadfs 0>, 225 - <&clk_s_d0_quadfs 0>, 226 - <&clk_s_d2_quadfs 0>, 227 - <&clk_s_d2_quadfs 0>; 177 + port@0 { 178 + reg = <0>; 179 + tvout_in0: endpoint { 180 + remote-endpoint = <&compo_main_out>; 181 + }; 182 + }; 183 + 184 + port@1 { 185 + reg = <1>; 186 + tvout_in1: endpoint { 187 + remote-endpoint = <&compo_aux_out>; 188 + }; 189 + }; 190 + 191 + port@2 { 192 + reg = <2>; 193 + tvout_out0: endpoint { 
194 + remote-endpoint = <&hdmi_in>; 195 + }; 196 + }; 197 + 198 + port@3 { 199 + reg = <3>; 200 + tvout_out1: endpoint { 201 + remote-endpoint = <&hda_in>; 202 + }; 203 + }; 228 204 }; 205 + }; 229 206 230 - sti_hdmi: sti-hdmi@8d04000 { 231 - compatible = "st,stih407-hdmi"; 232 - reg = <0x8d04000 0x1000>; 233 - reg-names = "hdmi-reg"; 234 - #sound-dai-cells = <0>; 235 - interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>; 236 - interrupt-names = "irq"; 237 - clock-names = "pix", 238 - "tmds", 239 - "phy", 240 - "audio", 241 - "main_parent", 242 - "aux_parent"; 207 + sti_hdmi: hdmi@8d04000 { 208 + compatible = "st,stih407-hdmi"; 209 + reg = <0x8d04000 0x1000>; 210 + reg-names = "hdmi-reg"; 211 + #sound-dai-cells = <0>; 212 + interrupts = <GIC_SPI 106 IRQ_TYPE_LEVEL_HIGH>; 213 + interrupt-names = "irq"; 214 + clock-names = "pix", 215 + "tmds", 216 + "phy", 217 + "audio", 218 + "main_parent", 219 + "aux_parent"; 243 220 244 - clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>, 245 - <&clk_s_d2_flexgen CLK_TMDS_HDMI>, 246 - <&clk_s_d2_flexgen CLK_REF_HDMIPHY>, 247 - <&clk_s_d0_flexgen CLK_PCM_0>, 248 - <&clk_s_d2_quadfs 0>, 249 - <&clk_s_d2_quadfs 1>; 221 + clocks = <&clk_s_d2_flexgen CLK_PIX_HDMI>, 222 + <&clk_s_d2_flexgen CLK_TMDS_HDMI>, 223 + <&clk_s_d2_flexgen CLK_REF_HDMIPHY>, 224 + <&clk_s_d0_flexgen CLK_PCM_0>, 225 + <&clk_s_d2_quadfs 0>, 226 + <&clk_s_d2_quadfs 1>; 250 227 251 - hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>; 252 - reset-names = "hdmi"; 253 - resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>; 254 - ddc = <&hdmiddc>; 228 + hdmi,hpd-gpio = <&pio5 3 GPIO_ACTIVE_LOW>; 229 + reset-names = "hdmi"; 230 + resets = <&softreset STIH407_HDMI_TX_PHY_SOFTRESET>; 231 + ddc = <&hdmiddc>; 232 + 233 + port { 234 + hdmi_in: endpoint { 235 + remote-endpoint = <&tvout_out0>; 236 + }; 255 237 }; 238 + }; 256 239 257 - sti-hda@8d02000 { 258 - compatible = "st,stih407-hda"; 259 - status = "disabled"; 260 - reg = <0x8d02000 0x400>, <0x92b0120 0x4>; 261 - reg-names = "hda-reg", 
"video-dacs-ctrl"; 262 - clock-names = "pix", 263 - "hddac", 264 - "main_parent", 265 - "aux_parent"; 266 - clocks = <&clk_s_d2_flexgen CLK_PIX_HDDAC>, 267 - <&clk_s_d2_flexgen CLK_HDDAC>, 268 - <&clk_s_d2_quadfs 0>, 269 - <&clk_s_d2_quadfs 1>; 270 - }; 240 + analog@8d02000 { 241 + compatible = "st,stih407-hda"; 242 + status = "disabled"; 243 + reg = <0x8d02000 0x400>, <0x92b0120 0x4>; 244 + reg-names = "hda-reg", "video-dacs-ctrl"; 245 + clock-names = "pix", 246 + "hddac", 247 + "main_parent", 248 + "aux_parent"; 249 + clocks = <&clk_s_d2_flexgen CLK_PIX_HDDAC>, 250 + <&clk_s_d2_flexgen CLK_HDDAC>, 251 + <&clk_s_d2_quadfs 0>, 252 + <&clk_s_d2_quadfs 1>; 271 253 272 - sti-hqvdp@9c00000 { 273 - compatible = "st,stih407-hqvdp"; 274 - reg = <0x9C00000 0x100000>; 275 - clock-names = "hqvdp", "pix_main"; 276 - clocks = <&clk_s_c0_flexgen CLK_MAIN_DISP>, 277 - <&clk_s_d2_flexgen CLK_PIX_MAIN_DISP>; 278 - reset-names = "hqvdp"; 279 - resets = <&softreset STIH407_HDQVDP_SOFTRESET>; 280 - st,vtg = <&vtg_main>; 254 + port { 255 + hda_in: endpoint { 256 + remote-endpoint = <&tvout_out1>; 257 + }; 281 258 }; 259 + }; 260 + 261 + hqvdp: plane@9c00000 { 262 + compatible = "st,stih407-hqvdp"; 263 + reg = <0x9C00000 0x100000>; 264 + clock-names = "hqvdp", "pix_main"; 265 + clocks = <&clk_s_c0_flexgen CLK_MAIN_DISP>, 266 + <&clk_s_d2_flexgen CLK_PIX_MAIN_DISP>; 267 + reset-names = "hqvdp"; 268 + resets = <&softreset STIH407_HDQVDP_SOFTRESET>; 269 + st,vtg = <&vtg_main>; 282 270 }; 283 271 284 272 bdisp0:bdisp@9f10000 {
+37
arch/arm64/boot/dts/freescale/imx95.dtsi
··· 250 250 clock-output-names = "dummy"; 251 251 }; 252 252 253 + gpu_opp_table: opp-table { 254 + compatible = "operating-points-v2"; 255 + 256 + opp-500000000 { 257 + opp-hz = /bits/ 64 <500000000>; 258 + opp-hz-real = /bits/ 64 <500000000>; 259 + opp-microvolt = <920000>; 260 + }; 261 + 262 + opp-800000000 { 263 + opp-hz = /bits/ 64 <800000000>; 264 + opp-hz-real = /bits/ 64 <800000000>; 265 + opp-microvolt = <920000>; 266 + }; 267 + 268 + opp-1000000000 { 269 + opp-hz = /bits/ 64 <1000000000>; 270 + opp-hz-real = /bits/ 64 <1000000000>; 271 + opp-microvolt = <920000>; 272 + }; 273 + }; 274 + 253 275 clk_ext1: clock-ext1 { 254 276 compatible = "fixed-clock"; 255 277 #clock-cells = <0>; ··· 2158 2136 status = "disabled"; 2159 2137 }; 2160 2138 }; 2139 + }; 2140 + 2141 + gpu: gpu@4d900000 { 2142 + compatible = "nxp,imx95-mali", "arm,mali-valhall-csf"; 2143 + reg = <0 0x4d900000 0 0x480000>; 2144 + clocks = <&scmi_clk IMX95_CLK_GPU>, <&scmi_clk IMX95_CLK_GPUAPB>; 2145 + clock-names = "core", "coregroup"; 2146 + interrupts = <GIC_SPI 289 IRQ_TYPE_LEVEL_HIGH>, 2147 + <GIC_SPI 290 IRQ_TYPE_LEVEL_HIGH>, 2148 + <GIC_SPI 288 IRQ_TYPE_LEVEL_HIGH>; 2149 + interrupt-names = "job", "mmu", "gpu"; 2150 + operating-points-v2 = <&gpu_opp_table>; 2151 + power-domains = <&scmi_devpd IMX95_PD_GPU>; 2152 + #cooling-cells = <2>; 2153 + dynamic-power-coefficient = <1013>; 2161 2154 }; 2162 2155 2163 2156 ddr-pmu@4e090dc0 {
+13 -8
drivers/accel/amdxdna/aie2_ctx.c
··· 204 204 205 205 cmd_abo = job->cmd_bo; 206 206 207 - if (unlikely(!data)) 207 + if (unlikely(job->job_timeout)) { 208 + amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT); 209 + ret = -EINVAL; 208 210 goto out; 211 + } 209 212 210 - if (unlikely(size != sizeof(u32))) { 213 + if (unlikely(!data) || unlikely(size != sizeof(u32))) { 211 214 amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 212 215 ret = -EINVAL; 213 216 goto out; ··· 261 258 int ret = 0; 262 259 263 260 cmd_abo = job->cmd_bo; 261 + 262 + if (unlikely(job->job_timeout)) { 263 + amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_TIMEOUT); 264 + ret = -EINVAL; 265 + goto out; 266 + } 267 + 264 268 if (unlikely(!data) || unlikely(size != sizeof(u32) * 3)) { 265 269 amdxdna_cmd_set_state(cmd_abo, ERT_CMD_STATE_ABORT); 266 270 ret = -EINVAL; ··· 380 370 381 371 xdna = hwctx->client->xdna; 382 372 trace_xdna_job(sched_job, hwctx->name, "job timedout", job->seq); 373 + job->job_timeout = true; 383 374 mutex_lock(&xdna->dev_lock); 384 375 aie2_hwctx_stop(xdna, hwctx, sched_job); 385 376 ··· 556 545 struct drm_gpu_scheduler *sched; 557 546 struct amdxdna_hwctx_priv *priv; 558 547 struct amdxdna_gem_obj *heap; 559 - struct amdxdna_dev_hdl *ndev; 560 548 int i, ret; 561 549 562 550 priv = kzalloc(sizeof(*hwctx->priv), GFP_KERNEL); ··· 653 643 amdxdna_pm_suspend_put(xdna); 654 644 655 645 hwctx->status = HWCTX_STAT_INIT; 656 - ndev = xdna->dev_handle; 657 - ndev->hwctx_num++; 658 646 init_waitqueue_head(&priv->job_free_wq); 659 647 660 648 XDNA_DBG(xdna, "hwctx %s init completed", hwctx->name); ··· 685 677 686 678 void aie2_hwctx_fini(struct amdxdna_hwctx *hwctx) 687 679 { 688 - struct amdxdna_dev_hdl *ndev; 689 680 struct amdxdna_dev *xdna; 690 681 int idx; 691 682 692 683 xdna = hwctx->client->xdna; 693 - ndev = xdna->dev_handle; 694 - ndev->hwctx_num--; 695 684 696 685 XDNA_DBG(xdna, "%s sequence number %lld", hwctx->name, hwctx->priv->seq); 697 686 drm_sched_entity_destroy(&hwctx->priv->entity);
+380 -205
drivers/accel/amdxdna/aie2_message.c
··· 27 27 #define DECLARE_AIE2_MSG(name, op) \ 28 28 DECLARE_XDNA_MSG_COMMON(name, op, MAX_AIE2_STATUS_CODE) 29 29 30 + #define EXEC_MSG_OPS(xdna) ((xdna)->dev_handle->exec_msg_ops) 31 + 30 32 static int aie2_send_mgmt_msg_wait(struct amdxdna_dev_hdl *ndev, 31 33 struct xdna_mailbox_msg *msg) 32 34 { ··· 47 45 ndev->mgmt_chann = NULL; 48 46 } 49 47 50 - if (!ret && *hdl->data != AIE2_STATUS_SUCCESS) { 48 + if (!ret && *hdl->status != AIE2_STATUS_SUCCESS) { 51 49 XDNA_ERR(xdna, "command opcode 0x%x failed, status 0x%x", 52 50 msg->opcode, *hdl->data); 53 51 ret = -EINVAL; ··· 235 233 ret = -EINVAL; 236 234 goto out_destroy_context; 237 235 } 236 + ndev->hwctx_num++; 238 237 239 238 XDNA_DBG(xdna, "%s mailbox channel irq: %d, msix_id: %d", 240 239 hwctx->name, ret, resp.msix_id); ··· 270 267 hwctx->fw_ctx_id); 271 268 hwctx->priv->mbox_chann = NULL; 272 269 hwctx->fw_ctx_id = -1; 270 + ndev->hwctx_num--; 273 271 274 272 return ret; 275 273 } ··· 336 332 goto fail; 337 333 } 338 334 339 - if (resp.status != AIE2_STATUS_SUCCESS) { 340 - XDNA_ERR(xdna, "Query NPU status failed, status 0x%x", resp.status); 341 - ret = -EINVAL; 342 - goto fail; 343 - } 344 335 XDNA_DBG(xdna, "Query NPU status completed"); 345 336 346 337 if (size < resp.size) { ··· 354 355 355 356 fail: 356 357 dma_free_noncoherent(xdna->ddev.dev, size, buff_addr, dma_addr, DMA_FROM_DEVICE); 358 + return ret; 359 + } 360 + 361 + int aie2_query_telemetry(struct amdxdna_dev_hdl *ndev, 362 + char __user *buf, u32 size, 363 + struct amdxdna_drm_query_telemetry_header *header) 364 + { 365 + DECLARE_AIE2_MSG(get_telemetry, MSG_OP_GET_TELEMETRY); 366 + struct amdxdna_dev *xdna = ndev->xdna; 367 + dma_addr_t dma_addr; 368 + u8 *addr; 369 + int ret; 370 + 371 + if (header->type >= MAX_TELEMETRY_TYPE) 372 + return -EINVAL; 373 + 374 + addr = dma_alloc_noncoherent(xdna->ddev.dev, size, &dma_addr, 375 + DMA_FROM_DEVICE, GFP_KERNEL); 376 + if (!addr) 377 + return -ENOMEM; 378 + 379 + req.buf_addr = dma_addr; 380 + 
req.buf_size = size; 381 + req.type = header->type; 382 + 383 + drm_clflush_virt_range(addr, size); /* device can access */ 384 + ret = aie2_send_mgmt_msg_wait(ndev, &msg); 385 + if (ret) { 386 + XDNA_ERR(xdna, "Query telemetry failed, status %d", ret); 387 + goto free_buf; 388 + } 389 + 390 + if (size < resp.size) { 391 + ret = -EINVAL; 392 + XDNA_ERR(xdna, "Bad buffer size. Available: %u. Needs: %u", size, resp.size); 393 + goto free_buf; 394 + } 395 + 396 + if (copy_to_user(buf, addr, resp.size)) { 397 + ret = -EFAULT; 398 + XDNA_ERR(xdna, "Failed to copy telemetry to user space"); 399 + goto free_buf; 400 + } 401 + 402 + header->major = resp.major; 403 + header->minor = resp.minor; 404 + 405 + free_buf: 406 + dma_free_noncoherent(xdna->ddev.dev, size, addr, dma_addr, DMA_FROM_DEVICE); 357 407 return ret; 358 408 } 359 409 ··· 481 433 return xdna_mailbox_send_msg(chann, &msg, TX_TIMEOUT); 482 434 } 483 435 436 + static int aie2_init_exec_cu_req(struct amdxdna_gem_obj *cmd_bo, void *req, 437 + size_t *size, u32 *msg_op) 438 + { 439 + struct execute_buffer_req *cu_req = req; 440 + u32 cmd_len; 441 + void *cmd; 442 + 443 + cmd = amdxdna_cmd_get_payload(cmd_bo, &cmd_len); 444 + if (cmd_len > sizeof(cu_req->payload)) 445 + return -EINVAL; 446 + 447 + cu_req->cu_idx = amdxdna_cmd_get_cu_idx(cmd_bo); 448 + if (cu_req->cu_idx == INVALID_CU_IDX) 449 + return -EINVAL; 450 + 451 + memcpy(cu_req->payload, cmd, cmd_len); 452 + 453 + *size = sizeof(*cu_req); 454 + *msg_op = MSG_OP_EXECUTE_BUFFER_CF; 455 + return 0; 456 + } 457 + 458 + static int aie2_init_exec_dpu_req(struct amdxdna_gem_obj *cmd_bo, void *req, 459 + size_t *size, u32 *msg_op) 460 + { 461 + struct exec_dpu_req *dpu_req = req; 462 + struct amdxdna_cmd_start_npu *sn; 463 + u32 cmd_len; 464 + 465 + sn = amdxdna_cmd_get_payload(cmd_bo, &cmd_len); 466 + if (cmd_len - sizeof(*sn) > sizeof(dpu_req->payload)) 467 + return -EINVAL; 468 + 469 + dpu_req->cu_idx = amdxdna_cmd_get_cu_idx(cmd_bo); 470 + if (dpu_req->cu_idx 
== INVALID_CU_IDX) 471 + return -EINVAL; 472 + 473 + dpu_req->inst_buf_addr = sn->buffer; 474 + dpu_req->inst_size = sn->buffer_size; 475 + dpu_req->inst_prop_cnt = sn->prop_count; 476 + memcpy(dpu_req->payload, sn->prop_args, cmd_len - sizeof(*sn)); 477 + 478 + *size = sizeof(*dpu_req); 479 + *msg_op = MSG_OP_EXEC_DPU; 480 + return 0; 481 + } 482 + 483 + static void aie2_init_exec_chain_req(void *req, u64 slot_addr, size_t size, u32 cmd_cnt) 484 + { 485 + struct cmd_chain_req *chain_req = req; 486 + 487 + chain_req->buf_addr = slot_addr; 488 + chain_req->buf_size = size; 489 + chain_req->count = cmd_cnt; 490 + } 491 + 492 + static void aie2_init_npu_chain_req(void *req, u64 slot_addr, size_t size, u32 cmd_cnt) 493 + { 494 + struct cmd_chain_npu_req *npu_chain_req = req; 495 + 496 + npu_chain_req->flags = 0; 497 + npu_chain_req->reserved = 0; 498 + npu_chain_req->buf_addr = slot_addr; 499 + npu_chain_req->buf_size = size; 500 + npu_chain_req->count = cmd_cnt; 501 + } 502 + 503 + static int 504 + aie2_cmdlist_fill_cf(struct amdxdna_gem_obj *cmd_bo, void *slot, size_t *size) 505 + { 506 + struct cmd_chain_slot_execbuf_cf *cf_slot = slot; 507 + u32 cmd_len; 508 + void *cmd; 509 + 510 + cmd = amdxdna_cmd_get_payload(cmd_bo, &cmd_len); 511 + if (*size < sizeof(*cf_slot) + cmd_len) 512 + return -EINVAL; 513 + 514 + cf_slot->cu_idx = amdxdna_cmd_get_cu_idx(cmd_bo); 515 + if (cf_slot->cu_idx == INVALID_CU_IDX) 516 + return -EINVAL; 517 + 518 + cf_slot->arg_cnt = cmd_len / sizeof(u32); 519 + memcpy(cf_slot->args, cmd, cmd_len); 520 + /* Accurate slot size to hint firmware to do necessary copy */ 521 + *size = sizeof(*cf_slot) + cmd_len; 522 + return 0; 523 + } 524 + 525 + static int 526 + aie2_cmdlist_fill_dpu(struct amdxdna_gem_obj *cmd_bo, void *slot, size_t *size) 527 + { 528 + struct cmd_chain_slot_dpu *dpu_slot = slot; 529 + struct amdxdna_cmd_start_npu *sn; 530 + u32 cmd_len; 531 + u32 arg_sz; 532 + 533 + sn = amdxdna_cmd_get_payload(cmd_bo, &cmd_len); 534 + arg_sz = 
cmd_len - sizeof(*sn); 535 + if (cmd_len < sizeof(*sn) || arg_sz > MAX_DPU_ARGS_SIZE) 536 + return -EINVAL; 537 + 538 + if (*size < sizeof(*dpu_slot) + arg_sz) 539 + return -EINVAL; 540 + 541 + dpu_slot->cu_idx = amdxdna_cmd_get_cu_idx(cmd_bo); 542 + if (dpu_slot->cu_idx == INVALID_CU_IDX) 543 + return -EINVAL; 544 + 545 + dpu_slot->inst_buf_addr = sn->buffer; 546 + dpu_slot->inst_size = sn->buffer_size; 547 + dpu_slot->inst_prop_cnt = sn->prop_count; 548 + dpu_slot->arg_cnt = arg_sz / sizeof(u32); 549 + memcpy(dpu_slot->args, sn->prop_args, arg_sz); 550 + 551 + /* Accurate slot size to hint firmware to do necessary copy */ 552 + *size = sizeof(*dpu_slot) + arg_sz; 553 + return 0; 554 + } 555 + 556 + static u32 aie2_get_chain_msg_op(u32 cmd_op) 557 + { 558 + switch (cmd_op) { 559 + case ERT_START_CU: 560 + return MSG_OP_CHAIN_EXEC_BUFFER_CF; 561 + case ERT_START_NPU: 562 + return MSG_OP_CHAIN_EXEC_DPU; 563 + default: 564 + break; 565 + } 566 + 567 + return MSG_OP_MAX_OPCODE; 568 + } 569 + 570 + static struct aie2_exec_msg_ops legacy_exec_message_ops = { 571 + .init_cu_req = aie2_init_exec_cu_req, 572 + .init_dpu_req = aie2_init_exec_dpu_req, 573 + .init_chain_req = aie2_init_exec_chain_req, 574 + .fill_cf_slot = aie2_cmdlist_fill_cf, 575 + .fill_dpu_slot = aie2_cmdlist_fill_dpu, 576 + .get_chain_msg_op = aie2_get_chain_msg_op, 577 + }; 578 + 579 + static int 580 + aie2_cmdlist_fill_npu_cf(struct amdxdna_gem_obj *cmd_bo, void *slot, size_t *size) 581 + { 582 + struct cmd_chain_slot_npu *npu_slot = slot; 583 + u32 cmd_len; 584 + void *cmd; 585 + 586 + cmd = amdxdna_cmd_get_payload(cmd_bo, &cmd_len); 587 + if (*size < sizeof(*npu_slot) + cmd_len) 588 + return -EINVAL; 589 + 590 + memset(npu_slot, 0, sizeof(*npu_slot)); 591 + npu_slot->cu_idx = amdxdna_cmd_get_cu_idx(cmd_bo); 592 + if (npu_slot->cu_idx == INVALID_CU_IDX) 593 + return -EINVAL; 594 + 595 + npu_slot->type = EXEC_NPU_TYPE_NON_ELF; 596 + npu_slot->arg_cnt = cmd_len / sizeof(u32); 597 + 
memcpy(npu_slot->args, cmd, cmd_len); 598 + 599 + *size = sizeof(*npu_slot) + cmd_len; 600 + return 0; 601 + } 602 + 603 + static int 604 + aie2_cmdlist_fill_npu_dpu(struct amdxdna_gem_obj *cmd_bo, void *slot, size_t *size) 605 + { 606 + struct cmd_chain_slot_npu *npu_slot = slot; 607 + struct amdxdna_cmd_start_npu *sn; 608 + u32 cmd_len; 609 + u32 arg_sz; 610 + 611 + sn = amdxdna_cmd_get_payload(cmd_bo, &cmd_len); 612 + arg_sz = cmd_len - sizeof(*sn); 613 + if (cmd_len < sizeof(*sn) || arg_sz > MAX_NPU_ARGS_SIZE) 614 + return -EINVAL; 615 + 616 + if (*size < sizeof(*npu_slot) + arg_sz) 617 + return -EINVAL; 618 + 619 + memset(npu_slot, 0, sizeof(*npu_slot)); 620 + npu_slot->cu_idx = amdxdna_cmd_get_cu_idx(cmd_bo); 621 + if (npu_slot->cu_idx == INVALID_CU_IDX) 622 + return -EINVAL; 623 + 624 + npu_slot->type = EXEC_NPU_TYPE_PARTIAL_ELF; 625 + npu_slot->inst_buf_addr = sn->buffer; 626 + npu_slot->inst_size = sn->buffer_size; 627 + npu_slot->inst_prop_cnt = sn->prop_count; 628 + npu_slot->arg_cnt = arg_sz / sizeof(u32); 629 + memcpy(npu_slot->args, sn->prop_args, arg_sz); 630 + 631 + *size = sizeof(*npu_slot) + arg_sz; 632 + return 0; 633 + } 634 + 635 + static u32 aie2_get_npu_chain_msg_op(u32 cmd_op) 636 + { 637 + return MSG_OP_CHAIN_EXEC_NPU; 638 + } 639 + 640 + static struct aie2_exec_msg_ops npu_exec_message_ops = { 641 + .init_cu_req = aie2_init_exec_cu_req, 642 + .init_dpu_req = aie2_init_exec_dpu_req, 643 + .init_chain_req = aie2_init_npu_chain_req, 644 + .fill_cf_slot = aie2_cmdlist_fill_npu_cf, 645 + .fill_dpu_slot = aie2_cmdlist_fill_npu_dpu, 646 + .get_chain_msg_op = aie2_get_npu_chain_msg_op, 647 + }; 648 + 649 + static int aie2_init_exec_req(void *req, struct amdxdna_gem_obj *cmd_abo, 650 + size_t *size, u32 *msg_op) 651 + { 652 + struct amdxdna_dev *xdna = cmd_abo->client->xdna; 653 + int ret; 654 + u32 op; 655 + 656 + 657 + op = amdxdna_cmd_get_op(cmd_abo); 658 + switch (op) { 659 + case ERT_START_CU: 660 + ret = 
EXEC_MSG_OPS(xdna)->init_cu_req(cmd_abo, req, size, msg_op); 661 + if (ret) { 662 + XDNA_DBG(xdna, "Init CU req failed ret %d", ret); 663 + return ret; 664 + } 665 + break; 666 + case ERT_START_NPU: 667 + ret = EXEC_MSG_OPS(xdna)->init_dpu_req(cmd_abo, req, size, msg_op); 668 + if (ret) { 669 + XDNA_DBG(xdna, "Init DPU req failed ret %d", ret); 670 + return ret; 671 + } 672 + 673 + break; 674 + default: 675 + XDNA_ERR(xdna, "Unsupported op %d", op); 676 + ret = -EOPNOTSUPP; 677 + break; 678 + } 679 + 680 + return ret; 681 + } 682 + 683 + static int 684 + aie2_cmdlist_fill_slot(void *slot, struct amdxdna_gem_obj *cmd_abo, 685 + size_t *size, u32 *cmd_op) 686 + { 687 + struct amdxdna_dev *xdna = cmd_abo->client->xdna; 688 + int ret; 689 + u32 op; 690 + 691 + op = amdxdna_cmd_get_op(cmd_abo); 692 + if (*cmd_op == ERT_INVALID_CMD) 693 + *cmd_op = op; 694 + else if (op != *cmd_op) 695 + return -EINVAL; 696 + 697 + switch (op) { 698 + case ERT_START_CU: 699 + ret = EXEC_MSG_OPS(xdna)->fill_cf_slot(cmd_abo, slot, size); 700 + break; 701 + case ERT_START_NPU: 702 + ret = EXEC_MSG_OPS(xdna)->fill_dpu_slot(cmd_abo, slot, size); 703 + break; 704 + default: 705 + XDNA_INFO(xdna, "Unsupported op %d", op); 706 + ret = -EOPNOTSUPP; 707 + break; 708 + } 709 + 710 + return ret; 711 + } 712 + 713 + void aie2_msg_init(struct amdxdna_dev_hdl *ndev) 714 + { 715 + if (AIE2_FEATURE_ON(ndev, AIE2_NPU_COMMAND)) 716 + ndev->exec_msg_ops = &npu_exec_message_ops; 717 + else 718 + ndev->exec_msg_ops = &legacy_exec_message_ops; 719 + } 720 + 721 + static inline struct amdxdna_gem_obj * 722 + aie2_cmdlist_get_cmd_buf(struct amdxdna_sched_job *job) 723 + { 724 + int idx = get_job_idx(job->seq); 725 + 726 + return job->hwctx->priv->cmd_buf[idx]; 727 + } 728 + 484 729 int aie2_execbuf(struct amdxdna_hwctx *hwctx, struct amdxdna_sched_job *job, 485 730 int (*notify_cb)(void *, void __iomem *, size_t)) 486 731 { 487 732 struct mailbox_channel *chann = hwctx->priv->mbox_chann; 488 733 struct 
amdxdna_dev *xdna = hwctx->client->xdna; 489 734 struct amdxdna_gem_obj *cmd_abo = job->cmd_bo; 490 - union { 491 - struct execute_buffer_req ebuf; 492 - struct exec_dpu_req dpu; 493 - } req; 494 735 struct xdna_mailbox_msg msg; 495 - u32 payload_len; 496 - void *payload; 497 - int cu_idx; 736 + union exec_req req; 498 737 int ret; 499 - u32 op; 500 738 501 739 if (!chann) 502 740 return -ENODEV; 503 741 504 - payload = amdxdna_cmd_get_payload(cmd_abo, &payload_len); 505 - if (!payload) { 506 - XDNA_ERR(xdna, "Invalid command, cannot get payload"); 507 - return -EINVAL; 508 - } 742 + ret = aie2_init_exec_req(&req, cmd_abo, &msg.send_size, &msg.opcode); 743 + if (ret) 744 + return ret; 509 745 510 - cu_idx = amdxdna_cmd_get_cu_idx(cmd_abo); 511 - if (cu_idx < 0) { 512 - XDNA_DBG(xdna, "Invalid cu idx"); 513 - return -EINVAL; 514 - } 515 - 516 - op = amdxdna_cmd_get_op(cmd_abo); 517 - switch (op) { 518 - case ERT_START_CU: 519 - if (unlikely(payload_len > sizeof(req.ebuf.payload))) 520 - XDNA_DBG(xdna, "Invalid ebuf payload len: %d", payload_len); 521 - req.ebuf.cu_idx = cu_idx; 522 - memcpy(req.ebuf.payload, payload, sizeof(req.ebuf.payload)); 523 - msg.send_size = sizeof(req.ebuf); 524 - msg.opcode = MSG_OP_EXECUTE_BUFFER_CF; 525 - break; 526 - case ERT_START_NPU: { 527 - struct amdxdna_cmd_start_npu *sn = payload; 528 - 529 - if (unlikely(payload_len - sizeof(*sn) > sizeof(req.dpu.payload))) 530 - XDNA_DBG(xdna, "Invalid dpu payload len: %d", payload_len); 531 - req.dpu.inst_buf_addr = sn->buffer; 532 - req.dpu.inst_size = sn->buffer_size; 533 - req.dpu.inst_prop_cnt = sn->prop_count; 534 - req.dpu.cu_idx = cu_idx; 535 - memcpy(req.dpu.payload, sn->prop_args, sizeof(req.dpu.payload)); 536 - msg.send_size = sizeof(req.dpu); 537 - msg.opcode = MSG_OP_EXEC_DPU; 538 - break; 539 - } 540 - default: 541 - XDNA_DBG(xdna, "Invalid ERT cmd op code: %d", op); 542 - return -EINVAL; 543 - } 544 746 msg.handle = job; 545 747 msg.notify_cb = notify_cb; 546 748 msg.send_data = 
(u8 *)&req; ··· 806 508 return 0; 807 509 } 808 510 809 - static int 810 - aie2_cmdlist_fill_one_slot_cf(void *cmd_buf, u32 offset, 811 - struct amdxdna_gem_obj *abo, u32 *size) 812 - { 813 - struct cmd_chain_slot_execbuf_cf *buf = cmd_buf + offset; 814 - int cu_idx = amdxdna_cmd_get_cu_idx(abo); 815 - u32 payload_len; 816 - void *payload; 817 - 818 - if (cu_idx < 0) 819 - return -EINVAL; 820 - 821 - payload = amdxdna_cmd_get_payload(abo, &payload_len); 822 - if (!payload) 823 - return -EINVAL; 824 - 825 - if (!slot_has_space(*buf, offset, payload_len)) 826 - return -ENOSPC; 827 - 828 - buf->cu_idx = cu_idx; 829 - buf->arg_cnt = payload_len / sizeof(u32); 830 - memcpy(buf->args, payload, payload_len); 831 - /* Accurate buf size to hint firmware to do necessary copy */ 832 - *size = sizeof(*buf) + payload_len; 833 - return 0; 834 - } 835 - 836 - static int 837 - aie2_cmdlist_fill_one_slot_dpu(void *cmd_buf, u32 offset, 838 - struct amdxdna_gem_obj *abo, u32 *size) 839 - { 840 - struct cmd_chain_slot_dpu *buf = cmd_buf + offset; 841 - int cu_idx = amdxdna_cmd_get_cu_idx(abo); 842 - struct amdxdna_cmd_start_npu *sn; 843 - u32 payload_len; 844 - void *payload; 845 - u32 arg_sz; 846 - 847 - if (cu_idx < 0) 848 - return -EINVAL; 849 - 850 - payload = amdxdna_cmd_get_payload(abo, &payload_len); 851 - if (!payload) 852 - return -EINVAL; 853 - sn = payload; 854 - arg_sz = payload_len - sizeof(*sn); 855 - if (payload_len < sizeof(*sn) || arg_sz > MAX_DPU_ARGS_SIZE) 856 - return -EINVAL; 857 - 858 - if (!slot_has_space(*buf, offset, arg_sz)) 859 - return -ENOSPC; 860 - 861 - buf->inst_buf_addr = sn->buffer; 862 - buf->inst_size = sn->buffer_size; 863 - buf->inst_prop_cnt = sn->prop_count; 864 - buf->cu_idx = cu_idx; 865 - buf->arg_cnt = arg_sz / sizeof(u32); 866 - memcpy(buf->args, sn->prop_args, arg_sz); 867 - 868 - /* Accurate buf size to hint firmware to do necessary copy */ 869 - *size = sizeof(*buf) + arg_sz; 870 - return 0; 871 - } 872 - 873 - static int 874 - 
aie2_cmdlist_fill_one_slot(u32 op, struct amdxdna_gem_obj *cmdbuf_abo, u32 offset, 875 - struct amdxdna_gem_obj *abo, u32 *size) 876 - { 877 - u32 this_op = amdxdna_cmd_get_op(abo); 878 - void *cmd_buf = cmdbuf_abo->mem.kva; 879 - int ret; 880 - 881 - if (this_op != op) { 882 - ret = -EINVAL; 883 - goto done; 884 - } 885 - 886 - switch (op) { 887 - case ERT_START_CU: 888 - ret = aie2_cmdlist_fill_one_slot_cf(cmd_buf, offset, abo, size); 889 - break; 890 - case ERT_START_NPU: 891 - ret = aie2_cmdlist_fill_one_slot_dpu(cmd_buf, offset, abo, size); 892 - break; 893 - default: 894 - ret = -EOPNOTSUPP; 895 - } 896 - 897 - done: 898 - if (ret) { 899 - XDNA_ERR(abo->client->xdna, "Can't fill slot for cmd op %d ret %d", 900 - op, ret); 901 - } 902 - return ret; 903 - } 904 - 905 - static inline struct amdxdna_gem_obj * 906 - aie2_cmdlist_get_cmd_buf(struct amdxdna_sched_job *job) 907 - { 908 - int idx = get_job_idx(job->seq); 909 - 910 - return job->hwctx->priv->cmd_buf[idx]; 911 - } 912 - 913 - static void 914 - aie2_cmdlist_prepare_request(struct cmd_chain_req *req, 915 - struct amdxdna_gem_obj *cmdbuf_abo, u32 size, u32 cnt) 916 - { 917 - req->buf_addr = cmdbuf_abo->mem.dev_addr; 918 - req->buf_size = size; 919 - req->count = cnt; 920 - drm_clflush_virt_range(cmdbuf_abo->mem.kva, size); 921 - XDNA_DBG(cmdbuf_abo->client->xdna, "Command buf addr 0x%llx size 0x%x count %d", 922 - req->buf_addr, size, cnt); 923 - } 924 - 925 - static inline u32 926 - aie2_cmd_op_to_msg_op(u32 op) 927 - { 928 - switch (op) { 929 - case ERT_START_CU: 930 - return MSG_OP_CHAIN_EXEC_BUFFER_CF; 931 - case ERT_START_NPU: 932 - return MSG_OP_CHAIN_EXEC_DPU; 933 - default: 934 - return MSG_OP_MAX_OPCODE; 935 - } 936 - } 937 - 938 511 int aie2_cmdlist_multi_execbuf(struct amdxdna_hwctx *hwctx, 939 512 struct amdxdna_sched_job *job, 940 513 int (*notify_cb)(void *, void __iomem *, size_t)) ··· 814 645 struct mailbox_channel *chann = hwctx->priv->mbox_chann; 815 646 struct amdxdna_client *client = 
hwctx->client; 816 647 struct amdxdna_gem_obj *cmd_abo = job->cmd_bo; 648 + struct amdxdna_dev *xdna = client->xdna; 817 649 struct amdxdna_cmd_chain *payload; 818 650 struct xdna_mailbox_msg msg; 819 - struct cmd_chain_req req; 651 + union exec_chain_req req; 820 652 u32 payload_len; 821 653 u32 offset = 0; 822 - u32 size; 654 + size_t size; 823 655 int ret; 824 656 u32 op; 825 657 u32 i; ··· 831 661 payload_len < struct_size(payload, data, payload->command_count)) 832 662 return -EINVAL; 833 663 664 + op = ERT_INVALID_CMD; 834 665 for (i = 0; i < payload->command_count; i++) { 835 666 u32 boh = (u32)(payload->data[i]); 836 667 struct amdxdna_gem_obj *abo; 837 668 838 669 abo = amdxdna_gem_get_obj(client, boh, AMDXDNA_BO_CMD); 839 670 if (!abo) { 840 - XDNA_ERR(client->xdna, "Failed to find cmd BO %d", boh); 671 + XDNA_ERR(xdna, "Failed to find cmd BO %d", boh); 841 672 return -ENOENT; 842 673 } 843 674 844 - /* All sub-cmd should have same op, use the first one. */ 845 - if (i == 0) 846 - op = amdxdna_cmd_get_op(abo); 847 - 848 - ret = aie2_cmdlist_fill_one_slot(op, cmdbuf_abo, offset, abo, &size); 675 + size = cmdbuf_abo->mem.size - offset; 676 + ret = aie2_cmdlist_fill_slot(cmdbuf_abo->mem.kva + offset, 677 + abo, &size, &op); 849 678 amdxdna_gem_put_obj(abo); 850 679 if (ret) 851 - return -EINVAL; 680 + return ret; 852 681 853 682 offset += size; 854 683 } 855 - 856 - /* The offset is the accumulated total size of the cmd buffer */ 857 - aie2_cmdlist_prepare_request(&req, cmdbuf_abo, offset, payload->command_count); 858 - 859 - msg.opcode = aie2_cmd_op_to_msg_op(op); 684 + msg.opcode = EXEC_MSG_OPS(xdna)->get_chain_msg_op(op); 860 685 if (msg.opcode == MSG_OP_MAX_OPCODE) 861 686 return -EOPNOTSUPP; 687 + 688 + /* The offset is the accumulated total size of the cmd buffer */ 689 + EXEC_MSG_OPS(xdna)->init_chain_req(&req, cmdbuf_abo->mem.dev_addr, 690 + offset, payload->command_count); 691 + drm_clflush_virt_range(cmdbuf_abo->mem.kva, offset); 692 + 862 693 
msg.handle = job; 863 694 msg.notify_cb = notify_cb; 864 695 msg.send_data = (u8 *)&req; 865 696 msg.send_size = sizeof(req); 866 697 ret = xdna_mailbox_send_msg(chann, &msg, TX_TIMEOUT); 867 698 if (ret) { 868 - XDNA_ERR(hwctx->client->xdna, "Send message failed"); 699 + XDNA_ERR(xdna, "Send message failed"); 869 700 return ret; 870 701 } 871 702 ··· 879 708 { 880 709 struct amdxdna_gem_obj *cmdbuf_abo = aie2_cmdlist_get_cmd_buf(job); 881 710 struct mailbox_channel *chann = hwctx->priv->mbox_chann; 711 + struct amdxdna_dev *xdna = hwctx->client->xdna; 882 712 struct amdxdna_gem_obj *cmd_abo = job->cmd_bo; 883 713 struct xdna_mailbox_msg msg; 884 - struct cmd_chain_req req; 885 - u32 size; 714 + union exec_chain_req req; 715 + u32 op = ERT_INVALID_CMD; 716 + size_t size; 886 717 int ret; 887 - u32 op; 888 718 889 - op = amdxdna_cmd_get_op(cmd_abo); 890 - ret = aie2_cmdlist_fill_one_slot(op, cmdbuf_abo, 0, cmd_abo, &size); 719 + size = cmdbuf_abo->mem.size; 720 + ret = aie2_cmdlist_fill_slot(cmdbuf_abo->mem.kva, cmd_abo, &size, &op); 891 721 if (ret) 892 722 return ret; 893 723 894 - aie2_cmdlist_prepare_request(&req, cmdbuf_abo, size, 1); 895 - 896 - msg.opcode = aie2_cmd_op_to_msg_op(op); 724 + msg.opcode = EXEC_MSG_OPS(xdna)->get_chain_msg_op(op); 897 725 if (msg.opcode == MSG_OP_MAX_OPCODE) 898 726 return -EOPNOTSUPP; 727 + 728 + EXEC_MSG_OPS(xdna)->init_chain_req(&req, cmdbuf_abo->mem.dev_addr, 729 + size, 1); 730 + drm_clflush_virt_range(cmdbuf_abo->mem.kva, size); 731 + 899 732 msg.handle = job; 900 733 msg.notify_cb = notify_cb; 901 734 msg.send_data = (u8 *)&req;
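The `exec_msg_ops` indirection in the hunk above — one ops table per firmware command format, selected once by `aie2_msg_init()` — can be sketched in standalone form. All names and opcode values below are illustrative stand-ins, not the driver's real definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the chain-exec message opcodes. */
enum { OP_CHAIN_CF = 0x12, OP_CHAIN_NPU = 0x18 };

struct exec_msg_ops {
	unsigned (*get_chain_msg_op)(void);
};

static unsigned legacy_chain_op(void) { return OP_CHAIN_CF; }
static unsigned npu_chain_op(void)    { return OP_CHAIN_NPU; }

static const struct exec_msg_ops legacy_ops = { .get_chain_msg_op = legacy_chain_op };
static const struct exec_msg_ops npu_ops    = { .get_chain_msg_op = npu_chain_op };

/* Like aie2_msg_init(): pick the ops table once, based on a feature bit,
 * so the submit paths never branch on the firmware flavor again. */
static const struct exec_msg_ops *msg_init(int npu_command_supported)
{
	return npu_command_supported ? &npu_ops : &legacy_ops;
}
```

The callers then dispatch through the table (as `EXEC_MSG_OPS(xdna)->...` does in the real code) instead of switching on the firmware version at every submission.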
+63 -4
drivers/accel/amdxdna/aie2_msg_priv.h
··· 9 9 enum aie2_msg_opcode { 10 10 MSG_OP_CREATE_CONTEXT = 0x2, 11 11 MSG_OP_DESTROY_CONTEXT = 0x3, 12 - MSG_OP_SYNC_BO = 0x7, 12 + MSG_OP_GET_TELEMETRY = 0x4, 13 + MSG_OP_SYNC_BO = 0x7, 13 14 MSG_OP_EXECUTE_BUFFER_CF = 0xC, 14 15 MSG_OP_QUERY_COL_STATUS = 0xD, 15 16 MSG_OP_QUERY_AIE_TILE_INFO = 0xE, ··· 20 19 MSG_OP_CHAIN_EXEC_BUFFER_CF = 0x12, 21 20 MSG_OP_CHAIN_EXEC_DPU = 0x13, 22 21 MSG_OP_CONFIG_DEBUG_BO = 0x14, 22 + MSG_OP_CHAIN_EXEC_NPU = 0x18, 23 23 MSG_OP_MAX_XRT_OPCODE, 24 24 MSG_OP_SUSPEND = 0x101, 25 25 MSG_OP_RESUME = 0x102, ··· 138 136 enum aie2_msg_status status; 139 137 } __packed; 140 138 139 + enum telemetry_type { 140 + TELEMETRY_TYPE_DISABLED, 141 + TELEMETRY_TYPE_HEALTH, 142 + TELEMETRY_TYPE_ERROR_INFO, 143 + TELEMETRY_TYPE_PROFILING, 144 + TELEMETRY_TYPE_DEBUG, 145 + MAX_TELEMETRY_TYPE 146 + }; 147 + 148 + struct get_telemetry_req { 149 + enum telemetry_type type; 150 + __u64 buf_addr; 151 + __u32 buf_size; 152 + } __packed; 153 + 154 + struct get_telemetry_resp { 155 + __u32 major; 156 + __u32 minor; 157 + __u32 size; 158 + enum aie2_msg_status status; 159 + } __packed; 160 + 141 161 struct execute_buffer_req { 142 162 __u32 cu_idx; 143 163 __u32 payload[19]; ··· 172 148 __u32 cu_idx; 173 149 __u32 payload[35]; 174 150 } __packed; 151 + 152 + enum exec_npu_type { 153 + EXEC_NPU_TYPE_NON_ELF = 0x1, 154 + EXEC_NPU_TYPE_PARTIAL_ELF = 0x2, 155 + }; 156 + 157 + union exec_req { 158 + struct execute_buffer_req ebuf; 159 + struct exec_dpu_req dpu_req; 160 + }; 175 161 176 162 struct execute_buffer_resp { 177 163 enum aie2_msg_status status; ··· 354 320 } __packed; 355 321 356 322 #define MAX_CHAIN_CMDBUF_SIZE SZ_4K 357 - #define slot_has_space(slot, offset, payload_size) \ 358 - (MAX_CHAIN_CMDBUF_SIZE >= (offset) + (payload_size) + \ 359 - sizeof(typeof(slot))) 360 323 361 324 struct cmd_chain_slot_execbuf_cf { 362 325 __u32 cu_idx; ··· 371 340 __u32 args[] __counted_by(arg_cnt); 372 341 }; 373 342 343 + #define MAX_NPU_ARGS_SIZE (26 * 
sizeof(__u32)) 344 + struct cmd_chain_slot_npu { 345 + enum exec_npu_type type; 346 + u64 inst_buf_addr; 347 + u64 save_buf_addr; 348 + u64 restore_buf_addr; 349 + u32 inst_size; 350 + u32 save_size; 351 + u32 restore_size; 352 + u32 inst_prop_cnt; 353 + u32 cu_idx; 354 + u32 arg_cnt; 355 + u32 args[] __counted_by(arg_cnt); 356 + } __packed; 357 + 374 358 struct cmd_chain_req { 375 359 __u64 buf_addr; 376 360 __u32 buf_size; 377 361 __u32 count; 378 362 } __packed; 363 + 364 + struct cmd_chain_npu_req { 365 + u32 flags; 366 + u32 reserved; 367 + u64 buf_addr; 368 + u32 buf_size; 369 + u32 count; 370 + } __packed; 371 + 372 + union exec_chain_req { 373 + struct cmd_chain_npu_req npu_req; 374 + struct cmd_chain_req req; 375 + }; 379 376 380 377 struct cmd_chain_resp { 381 378 enum aie2_msg_status status;
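The header wraps the two chain-request layouts in `union exec_chain_req` so one send path can declare a single request variable and `sizeof()` it, always reserving the larger of the two wire formats. A minimal sketch with simplified fields (not the driver's exact layouts):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for cmd_chain_req / cmd_chain_npu_req. */
struct chain_req {
	uint64_t buf_addr;
	uint32_t buf_size;
	uint32_t count;
} __attribute__((packed));          /* 16 bytes */

struct chain_npu_req {
	uint32_t flags;
	uint32_t reserved;
	uint64_t buf_addr;
	uint32_t buf_size;
	uint32_t count;
} __attribute__((packed));          /* 24 bytes */

/* The union is as large as its biggest member, so filling either
 * member and sending sizeof(union) is always safe. */
union chain_any_req {
	struct chain_req req;
	struct chain_npu_req npu_req;
};
```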
+113
drivers/accel/amdxdna/aie2_pci.c
··· 55 55 56 56 static int aie2_check_protocol(struct amdxdna_dev_hdl *ndev, u32 fw_major, u32 fw_minor) 57 57 { 58 + const struct aie2_fw_feature_tbl *feature; 58 59 struct amdxdna_dev *xdna = ndev->xdna; 59 60 60 61 /* ··· 79 78 XDNA_ERR(xdna, "Firmware minor version smaller than supported"); 80 79 return -EINVAL; 81 80 } 81 + 82 + for (feature = ndev->priv->fw_feature_tbl; feature && feature->min_minor; 83 + feature++) { 84 + if (fw_minor < feature->min_minor) 85 + continue; 86 + if (feature->max_minor > 0 && fw_minor > feature->max_minor) 87 + continue; 88 + 89 + set_bit(feature->feature, &ndev->feature_mask); 90 + } 91 + 82 92 return 0; 83 93 } 84 94 ··· 599 587 } 600 588 601 589 release_firmware(fw); 590 + aie2_msg_init(ndev); 602 591 amdxdna_pm_init(xdna); 603 592 return 0; 604 593 ··· 838 825 return 0; 839 826 } 840 827 828 + static int aie2_query_resource_info(struct amdxdna_client *client, 829 + struct amdxdna_drm_get_info *args) 830 + { 831 + struct amdxdna_drm_get_resource_info res_info; 832 + const struct amdxdna_dev_priv *priv; 833 + struct amdxdna_dev_hdl *ndev; 834 + struct amdxdna_dev *xdna; 835 + 836 + xdna = client->xdna; 837 + ndev = xdna->dev_handle; 838 + priv = ndev->priv; 839 + 840 + res_info.npu_clk_max = priv->dpm_clk_tbl[ndev->max_dpm_level].hclk; 841 + res_info.npu_tops_max = ndev->max_tops; 842 + res_info.npu_task_max = priv->hwctx_limit; 843 + res_info.npu_tops_curr = ndev->curr_tops; 844 + res_info.npu_task_curr = ndev->hwctx_num; 845 + 846 + if (copy_to_user(u64_to_user_ptr(args->buffer), &res_info, sizeof(res_info))) 847 + return -EFAULT; 848 + 849 + return 0; 850 + } 851 + 852 + static int aie2_fill_hwctx_map(struct amdxdna_hwctx *hwctx, void *arg) 853 + { 854 + struct amdxdna_dev *xdna = hwctx->client->xdna; 855 + u32 *map = arg; 856 + 857 + if (hwctx->fw_ctx_id >= xdna->dev_handle->priv->hwctx_limit) { 858 + XDNA_ERR(xdna, "Invalid fw ctx id %d/%d ", hwctx->fw_ctx_id, 859 + xdna->dev_handle->priv->hwctx_limit); 860 + return 
-EINVAL; 861 + } 862 + 863 + map[hwctx->fw_ctx_id] = hwctx->id; 864 + return 0; 865 + } 866 + 867 + static int aie2_get_telemetry(struct amdxdna_client *client, 868 + struct amdxdna_drm_get_info *args) 869 + { 870 + struct amdxdna_drm_query_telemetry_header *header __free(kfree) = NULL; 871 + u32 telemetry_data_sz, header_sz, elem_num; 872 + struct amdxdna_dev *xdna = client->xdna; 873 + struct amdxdna_client *tmp_client; 874 + int ret; 875 + 876 + elem_num = xdna->dev_handle->priv->hwctx_limit; 877 + header_sz = struct_size(header, map, elem_num); 878 + if (args->buffer_size <= header_sz) { 879 + XDNA_ERR(xdna, "Invalid buffer size"); 880 + return -EINVAL; 881 + } 882 + 883 + telemetry_data_sz = args->buffer_size - header_sz; 884 + if (telemetry_data_sz > SZ_4M) { 885 + XDNA_ERR(xdna, "Buffer size is too big, %d", telemetry_data_sz); 886 + return -EINVAL; 887 + } 888 + 889 + header = kzalloc(header_sz, GFP_KERNEL); 890 + if (!header) 891 + return -ENOMEM; 892 + 893 + if (copy_from_user(header, u64_to_user_ptr(args->buffer), sizeof(*header))) { 894 + XDNA_ERR(xdna, "Failed to copy telemetry header from user"); 895 + return -EFAULT; 896 + } 897 + 898 + header->map_num_elements = elem_num; 899 + list_for_each_entry(tmp_client, &xdna->client_list, node) { 900 + ret = amdxdna_hwctx_walk(tmp_client, &header->map, 901 + aie2_fill_hwctx_map); 902 + if (ret) 903 + return ret; 904 + } 905 + 906 + ret = aie2_query_telemetry(xdna->dev_handle, 907 + u64_to_user_ptr(args->buffer + header_sz), 908 + telemetry_data_sz, header); 909 + if (ret) { 910 + XDNA_ERR(xdna, "Query telemetry failed ret %d", ret); 911 + return ret; 912 + } 913 + 914 + if (copy_to_user(u64_to_user_ptr(args->buffer), header, header_sz)) { 915 + XDNA_ERR(xdna, "Copy header failed"); 916 + return -EFAULT; 917 + } 918 + 919 + return 0; 920 + } 921 + 841 922 static int aie2_get_info(struct amdxdna_client *client, struct amdxdna_drm_get_info *args) 842 923 { 843 924 struct amdxdna_dev *xdna = client->xdna; ··· 965 
858 break; 966 859 case DRM_AMDXDNA_GET_POWER_MODE: 967 860 ret = aie2_get_power_mode(client, args); 861 + break; 862 + case DRM_AMDXDNA_QUERY_TELEMETRY: 863 + ret = aie2_get_telemetry(client, args); 864 + break; 865 + case DRM_AMDXDNA_QUERY_RESOURCE_INFO: 866 + ret = aie2_query_resource_info(client, args); 968 867 break; 969 868 default: 970 869 XDNA_ERR(xdna, "Not supported request parameter %u", args->param);
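The feature-table walk added to `aie2_check_protocol()` above enables a feature when the firmware minor version falls inside the entry's `[min_minor, max_minor]` window (an entry with `max_minor == 0` is open-ended, and a zero `min_minor` terminates the table). A standalone restatement of that matching logic, with made-up table entries:

```c
#include <assert.h>

struct fw_feature {
	unsigned bit;        /* which feature bit to set */
	unsigned max_minor;  /* 0 means no upper bound */
	unsigned min_minor;  /* 0 terminates the table */
};

/* Mirrors the loop in aie2_check_protocol(): collect a feature mask
 * from the table entries that match the firmware minor version. */
static unsigned long match_features(const struct fw_feature *tbl, unsigned fw_minor)
{
	unsigned long mask = 0;

	for (; tbl && tbl->min_minor; tbl++) {
		if (fw_minor < tbl->min_minor)
			continue;
		if (tbl->max_minor > 0 && fw_minor > tbl->max_minor)
			continue;
		mask |= 1UL << tbl->bit;
	}
	return mask;
}

/* Demo table: bit 0 behaves like AIE2_NPU_COMMAND on NPU4 (min minor 15);
 * bit 1 shows a bounded window. */
static const struct fw_feature demo_tbl[] = {
	{ .bit = 0, .min_minor = 15 },
	{ .bit = 1, .min_minor = 3, .max_minor = 9 },
	{ 0 }
};
```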
+35
drivers/accel/amdxdna/aie2_pci.h
··· 156 156 AIE2_DEV_START, 157 157 }; 158 158 159 + struct aie2_exec_msg_ops { 160 + int (*init_cu_req)(struct amdxdna_gem_obj *cmd_bo, void *req, 161 + size_t *size, u32 *msg_op); 162 + int (*init_dpu_req)(struct amdxdna_gem_obj *cmd_bo, void *req, 163 + size_t *size, u32 *msg_op); 164 + void (*init_chain_req)(void *req, u64 slot_addr, size_t size, u32 cmd_cnt); 165 + int (*fill_cf_slot)(struct amdxdna_gem_obj *cmd_bo, void *slot, size_t *size); 166 + int (*fill_dpu_slot)(struct amdxdna_gem_obj *cmd_bo, void *slot, size_t *size); 167 + u32 (*get_chain_msg_op)(u32 cmd_op); 168 + }; 169 + 159 170 struct amdxdna_dev_hdl { 160 171 struct amdxdna_dev *xdna; 161 172 const struct amdxdna_dev_priv *priv; ··· 184 173 u32 total_col; 185 174 struct aie_version version; 186 175 struct aie_metadata metadata; 176 + unsigned long feature_mask; 177 + struct aie2_exec_msg_ops *exec_msg_ops; 187 178 188 179 /* power management and clock*/ 189 180 enum amdxdna_power_mode_type pw_mode; ··· 195 182 u32 clk_gating; 196 183 u32 npuclk_freq; 197 184 u32 hclk_freq; 185 + u32 max_tops; 186 + u32 curr_tops; 198 187 199 188 /* Mailbox and the management channel */ 200 189 struct mailbox *mbox; ··· 221 206 int (*set_dpm)(struct amdxdna_dev_hdl *ndev, u32 dpm_level); 222 207 }; 223 208 209 + enum aie2_fw_feature { 210 + AIE2_NPU_COMMAND, 211 + AIE2_FEATURE_MAX 212 + }; 213 + 214 + struct aie2_fw_feature_tbl { 215 + enum aie2_fw_feature feature; 216 + u32 max_minor; 217 + u32 min_minor; 218 + }; 219 + 220 + #define AIE2_FEATURE_ON(ndev, feature) test_bit(feature, &(ndev)->feature_mask) 221 + 224 222 struct amdxdna_dev_priv { 225 223 const char *fw_path; 226 224 u64 protocol_major; 227 225 u64 protocol_minor; 228 226 const struct rt_config *rt_config; 229 227 const struct dpm_clk_freq *dpm_clk_tbl; 228 + const struct aie2_fw_feature_tbl *fw_feature_tbl; 230 229 231 230 #define COL_ALIGN_NONE 0 232 231 #define COL_ALIGN_NATURE 1 ··· 248 219 u32 mbox_dev_addr; 249 220 /* If mbox_size is 0, use 
BAR size. See MBOX_SIZE macro */ 250 221 u32 mbox_size; 222 + u32 hwctx_limit; 251 223 u32 sram_dev_addr; 252 224 struct aie2_bar_off_pair sram_offs[SRAM_MAX_INDEX]; 253 225 struct aie2_bar_off_pair psp_regs_off[PSP_MAX_REGS]; ··· 266 236 extern const struct dpm_clk_freq npu4_dpm_clk_table[]; 267 237 extern const struct rt_config npu1_default_rt_cfg[]; 268 238 extern const struct rt_config npu4_default_rt_cfg[]; 239 + extern const struct aie2_fw_feature_tbl npu4_fw_feature_table[]; 269 240 270 241 /* aie2_smu.c */ 271 242 int aie2_smu_init(struct amdxdna_dev_hdl *ndev); ··· 291 260 struct amdxdna_drm_get_array *args); 292 261 293 262 /* aie2_message.c */ 263 + void aie2_msg_init(struct amdxdna_dev_hdl *ndev); 294 264 int aie2_suspend_fw(struct amdxdna_dev_hdl *ndev); 295 265 int aie2_resume_fw(struct amdxdna_dev_hdl *ndev); 296 266 int aie2_set_runtime_cfg(struct amdxdna_dev_hdl *ndev, u32 type, u64 value); ··· 305 273 int aie2_destroy_context(struct amdxdna_dev_hdl *ndev, struct amdxdna_hwctx *hwctx); 306 274 int aie2_map_host_buf(struct amdxdna_dev_hdl *ndev, u32 context_id, u64 addr, u64 size); 307 275 int aie2_query_status(struct amdxdna_dev_hdl *ndev, char __user *buf, u32 size, u32 *cols_filled); 276 + int aie2_query_telemetry(struct amdxdna_dev_hdl *ndev, 277 + char __user *buf, u32 size, 278 + struct amdxdna_drm_query_telemetry_header *header); 308 279 int aie2_register_asyn_event_msg(struct amdxdna_dev_hdl *ndev, dma_addr_t addr, u32 size, 309 280 void *handle, int (*cb)(void*, void __iomem *, size_t)); 310 281 int aie2_config_cu(struct amdxdna_hwctx *hwctx,
+11
drivers/accel/amdxdna/aie2_smu.c
··· 23 23 #define AIE2_SMU_SET_SOFT_DPMLEVEL 0x7 24 24 #define AIE2_SMU_SET_HARD_DPMLEVEL 0x8 25 25 26 + #define NPU4_DPM_TOPS(ndev, dpm_level) \ 27 + ({ \ 28 + typeof(ndev) _ndev = ndev; \ 29 + (4096 * (_ndev)->total_col * \ 30 + (_ndev)->priv->dpm_clk_tbl[dpm_level].hclk / 1000000); \ 31 + }) 32 + 26 33 static int aie2_smu_exec(struct amdxdna_dev_hdl *ndev, u32 reg_cmd, 27 34 u32 reg_arg, u32 *out) 28 35 { ··· 91 84 amdxdna_pm_suspend_put(ndev->xdna); 92 85 ndev->hclk_freq = freq; 93 86 ndev->dpm_level = dpm_level; 87 + ndev->max_tops = 2 * ndev->total_col; 88 + ndev->curr_tops = ndev->max_tops * freq / 1028; 94 89 95 90 XDNA_DBG(ndev->xdna, "MP-NPU clock %d, H clock %d\n", 96 91 ndev->npuclk_freq, ndev->hclk_freq); ··· 130 121 ndev->npuclk_freq = ndev->priv->dpm_clk_tbl[dpm_level].npuclk; 131 122 ndev->hclk_freq = ndev->priv->dpm_clk_tbl[dpm_level].hclk; 132 123 ndev->dpm_level = dpm_level; 124 + ndev->max_tops = NPU4_DPM_TOPS(ndev, ndev->max_dpm_level); 125 + ndev->curr_tops = NPU4_DPM_TOPS(ndev, dpm_level); 133 126 134 127 XDNA_DBG(ndev->xdna, "MP-NPU clock %d, H clock %d\n", 135 128 ndev->npuclk_freq, ndev->hclk_freq);
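The `NPU4_DPM_TOPS()` macro above scales a fixed per-column factor of 4096 by the column count and the hclk frequency (in MHz), then divides by 1e6 to land in TOPS. The arithmetic, restated as a plain function with made-up demo values (the 4096 factor and units are taken from the macro itself; the hardware meaning of the constant is not stated in the patch):

```c
#include <assert.h>

/* Same arithmetic as NPU4_DPM_TOPS(): 4096 * columns * hclk_mhz / 1e6.
 * Integer division, so the result truncates toward zero. */
static unsigned npu4_dpm_tops(unsigned total_col, unsigned hclk_mhz)
{
	return 4096u * total_col * hclk_mhz / 1000000u;
}
```

For example, 4 columns at 1810 MHz gives 16384 * 1810 / 1e6 = 29 TOPS after truncation.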
+3 -3
drivers/accel/amdxdna/amdxdna_ctx.c
··· 113 113 return &cmd->data[num_masks]; 114 114 } 115 115 116 - int amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo) 116 + u32 amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo) 117 117 { 118 118 struct amdxdna_cmd *cmd = abo->mem.kva; 119 119 u32 num_masks, i; 120 120 u32 *cu_mask; 121 121 122 122 if (amdxdna_cmd_get_op(abo) == ERT_CMD_CHAIN) 123 - return -1; 123 + return INVALID_CU_IDX; 124 124 125 125 num_masks = 1 + FIELD_GET(AMDXDNA_CMD_EXTRA_CU_MASK, cmd->header); 126 126 cu_mask = cmd->data; ··· 129 129 return ffs(cu_mask[i]) - 1; 130 130 } 131 131 132 - return -1; 132 + return INVALID_CU_IDX; 133 133 } 134 134 135 135 /*
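`amdxdna_cmd_get_cu_idx()` above scans the command's CU mask words and returns `ffs() - 1` of the first non-zero word, or the new `INVALID_CU_IDX` sentinel (`~0U`, which is why the return type changed from `int` to `u32`). A self-contained sketch of that scan, mirroring the upstream behavior of reporting the bit position within the matching word:

```c
#include <assert.h>
#include <strings.h>   /* ffs() */

#define INVALID_IDX (~0u)

/* Return ffs()-1 of the first non-zero mask word, i.e. the index of the
 * lowest set bit there; no bit set anywhere means no CU is named. */
static unsigned first_cu_idx(const unsigned *cu_mask, unsigned num_masks)
{
	unsigned i;

	for (i = 0; i < num_masks; i++) {
		if (cu_mask[i])
			return ffs(cu_mask[i]) - 1;
	}
	return INVALID_IDX;
}
```

Using an unsigned sentinel lets callers compare with `== INVALID_CU_IDX` instead of the old `< 0` check, which is what the `aie2_cmdlist_fill_*` helpers in this series do.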
+8 -4
drivers/accel/amdxdna/amdxdna_ctx.h
··· 13 13 struct amdxdna_hwctx_priv; 14 14 15 15 enum ert_cmd_opcode { 16 - ERT_START_CU = 0, 17 - ERT_CMD_CHAIN = 19, 18 - ERT_START_NPU = 20, 16 + ERT_START_CU = 0, 17 + ERT_CMD_CHAIN = 19, 18 + ERT_START_NPU = 20, 19 + ERT_INVALID_CMD = ~0U, 19 20 }; 20 21 21 22 enum ert_cmd_state { ··· 64 63 u32 header; 65 64 u32 data[]; 66 65 }; 66 + 67 + #define INVALID_CU_IDX (~0U) 67 68 68 69 struct amdxdna_hwctx { 69 70 struct amdxdna_client *client; ··· 119 116 /* user can wait on this fence */ 120 117 struct dma_fence *out_fence; 121 118 bool job_done; 119 + bool job_timeout; 122 120 u64 seq; 123 121 struct amdxdna_drv_cmd *drv_cmd; 124 122 struct amdxdna_gem_obj *cmd_bo; ··· 153 149 } 154 150 155 151 void *amdxdna_cmd_get_payload(struct amdxdna_gem_obj *abo, u32 *size); 156 - int amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo); 152 + u32 amdxdna_cmd_get_cu_idx(struct amdxdna_gem_obj *abo); 157 153 158 154 void amdxdna_sched_job_cleanup(struct amdxdna_sched_job *job); 159 155 void amdxdna_hwctx_remove_all(struct amdxdna_client *client);
+1
drivers/accel/amdxdna/amdxdna_gem.c
··· 8 8 #include <drm/drm_device.h> 9 9 #include <drm/drm_gem.h> 10 10 #include <drm/drm_gem_shmem_helper.h> 11 + #include <drm/drm_print.h> 11 12 #include <drm/gpu_scheduler.h> 12 13 #include <linux/dma-buf.h> 13 14 #include <linux/dma-direct.h>
+4 -2
drivers/accel/amdxdna/amdxdna_mailbox_helper.h
··· 16 16 u32 *data; 17 17 size_t size; 18 18 int error; 19 + u32 *status; 19 20 }; 20 21 21 - #define DECLARE_XDNA_MSG_COMMON(name, op, status) \ 22 + #define DECLARE_XDNA_MSG_COMMON(name, op, s) \ 22 23 struct name##_req req = { 0 }; \ 23 - struct name##_resp resp = { status }; \ 24 + struct name##_resp resp = { .status = s }; \ 24 25 struct xdna_notify hdl = { \ 25 26 .error = 0, \ 26 27 .data = (u32 *)&resp, \ 27 28 .size = sizeof(resp), \ 28 29 .comp = COMPLETION_INITIALIZER_ONSTACK(hdl.comp), \ 30 + .status = (u32 *)&resp.status, \ 29 31 }; \ 30 32 struct xdna_mailbox_msg msg = { \ 31 33 .send_data = (u8 *)&req, \
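The macro change above, from `{ status }` to `{ .status = s }`, matters because the new `get_telemetry_resp` puts `status` last rather than first: a positional initializer would silently land the value in the first member. A simplified demonstration with hypothetical structs:

```c
#include <assert.h>

/* In old responses status was the first member, so { s } happened to work. */
struct old_resp { unsigned status; };

/* In a telemetry-style response status comes last. */
struct new_resp { unsigned major, minor, size, status; };

static unsigned status_positional(unsigned s)
{
	struct new_resp r = { s };          /* initializes .major, not .status */
	return r.status;
}

static unsigned status_designated(unsigned s)
{
	struct new_resp r = { .status = s }; /* robust to member ordering */
	return r.status;
}
```

Designated initializers keep the macro correct for any response layout, which is why the diff also renames the parameter to `s` to avoid shadowing the member name.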
+3 -1
drivers/accel/amdxdna/amdxdna_pci_drv.c
··· 29 29 * 0.1: Support getting all hardware contexts by DRM_IOCTL_AMDXDNA_GET_ARRAY 30 30 * 0.2: Support getting last error hardware error 31 31 * 0.3: Support firmware debug buffer 32 + * 0.4: Support getting resource information 33 + * 0.5: Support getting telemetry data 32 34 */ 33 35 #define AMDXDNA_DRIVER_MAJOR 0 34 - #define AMDXDNA_DRIVER_MINOR 3 36 + #define AMDXDNA_DRIVER_MINOR 5 35 37 36 38 /* 37 39 * Bind the driver base on (vendor_id, device_id) pair and later use the
+7
drivers/accel/amdxdna/npu1_regs.c
··· 63 63 { 0 } 64 64 }; 65 65 66 + static const struct aie2_fw_feature_tbl npu1_fw_feature_table[] = { 67 + { .feature = AIE2_NPU_COMMAND, .min_minor = 8 }, 68 + { 0 } 69 + }; 70 + 66 71 static const struct amdxdna_dev_priv npu1_dev_priv = { 67 72 .fw_path = "amdnpu/1502_00/npu.sbin", 68 73 .protocol_major = 0x5, 69 74 .protocol_minor = 0x7, 70 75 .rt_config = npu1_default_rt_cfg, 71 76 .dpm_clk_tbl = npu1_dpm_clk_table, 77 + .fw_feature_tbl = npu1_fw_feature_table, 72 78 .col_align = COL_ALIGN_NONE, 73 79 .mbox_dev_addr = NPU1_MBOX_BAR_BASE, 74 80 .mbox_size = 0, /* Use BAR size */ 75 81 .sram_dev_addr = NPU1_SRAM_BAR_BASE, 82 + .hwctx_limit = 6, 76 83 .sram_offs = { 77 84 DEFINE_BAR_OFFSET(MBOX_CHANN_OFF, NPU1_SRAM, MPNPU_SRAM_X2I_MAILBOX_0), 78 85 DEFINE_BAR_OFFSET(FW_ALIVE_OFF, NPU1_SRAM, MPNPU_SRAM_I2X_MAILBOX_15),
+2
drivers/accel/amdxdna/npu2_regs.c
··· 67 67 .protocol_minor = 0x6, 68 68 .rt_config = npu4_default_rt_cfg, 69 69 .dpm_clk_tbl = npu4_dpm_clk_table, 70 + .fw_feature_tbl = npu4_fw_feature_table, 70 71 .col_align = COL_ALIGN_NATURE, 71 72 .mbox_dev_addr = NPU2_MBOX_BAR_BASE, 72 73 .mbox_size = 0, /* Use BAR size */ 73 74 .sram_dev_addr = NPU2_SRAM_BAR_BASE, 75 + .hwctx_limit = 16, 74 76 .sram_offs = { 75 77 DEFINE_BAR_OFFSET(MBOX_CHANN_OFF, NPU2_SRAM, MPNPU_SRAM_X2I_MAILBOX_0), 76 78 DEFINE_BAR_OFFSET(FW_ALIVE_OFF, NPU2_SRAM, MPNPU_SRAM_X2I_MAILBOX_15),
+7
drivers/accel/amdxdna/npu4_regs.c
··· 83 83 { 0 } 84 84 }; 85 85 86 + const struct aie2_fw_feature_tbl npu4_fw_feature_table[] = { 87 + { .feature = AIE2_NPU_COMMAND, .min_minor = 15 }, 88 + { 0 } 89 + }; 90 + 86 91 static const struct amdxdna_dev_priv npu4_dev_priv = { 87 92 .fw_path = "amdnpu/17f0_10/npu.sbin", 88 93 .protocol_major = 0x6, 89 94 .protocol_minor = 12, 90 95 .rt_config = npu4_default_rt_cfg, 91 96 .dpm_clk_tbl = npu4_dpm_clk_table, 97 + .fw_feature_tbl = npu4_fw_feature_table, 92 98 .col_align = COL_ALIGN_NATURE, 93 99 .mbox_dev_addr = NPU4_MBOX_BAR_BASE, 94 100 .mbox_size = 0, /* Use BAR size */ 95 101 .sram_dev_addr = NPU4_SRAM_BAR_BASE, 102 + .hwctx_limit = 16, 96 103 .sram_offs = { 97 104 DEFINE_BAR_OFFSET(MBOX_CHANN_OFF, NPU4_SRAM, MPNPU_SRAM_X2I_MAILBOX_0), 98 105 DEFINE_BAR_OFFSET(FW_ALIVE_OFF, NPU4_SRAM, MPNPU_SRAM_X2I_MAILBOX_15),
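The npu4 hunk introduces a firmware feature table: a zero-terminated array of entries gating each feature on a minimum minor protocol version, shared by the npu4/npu5/npu6 variants. A hedged userspace sketch of that sentinel-terminated lookup pattern (feature IDs and names below are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdbool.h>

/* Mirrors the aie2_fw_feature_tbl shape: entries end at a zeroed
 * sentinel, so feature IDs must be nonzero. */
struct fw_feature_tbl {
	unsigned int feature;   /* 0 acts as the table terminator */
	unsigned int min_minor;
};

/* Returns true when 'feature' is listed and the firmware's minor
 * protocol version meets the table's minimum for it. */
static bool fw_feature_supported(const struct fw_feature_tbl *tbl,
				 unsigned int feature, unsigned int fw_minor)
{
	for (; tbl->feature; tbl++) {
		if (tbl->feature == feature)
			return fw_minor >= tbl->min_minor;
	}
	return false;
}
```

This is why npu1 and npu4 can share one mechanism with different thresholds (min_minor 8 vs 15): only the table differs per generation.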
+2
drivers/accel/amdxdna/npu5_regs.c
··· 67 67 .protocol_minor = 12, 68 68 .rt_config = npu4_default_rt_cfg, 69 69 .dpm_clk_tbl = npu4_dpm_clk_table, 70 + .fw_feature_tbl = npu4_fw_feature_table, 70 71 .col_align = COL_ALIGN_NATURE, 71 72 .mbox_dev_addr = NPU5_MBOX_BAR_BASE, 72 73 .mbox_size = 0, /* Use BAR size */ 73 74 .sram_dev_addr = NPU5_SRAM_BAR_BASE, 75 + .hwctx_limit = 16, 74 76 .sram_offs = { 75 77 DEFINE_BAR_OFFSET(MBOX_CHANN_OFF, NPU5_SRAM, MPNPU_SRAM_X2I_MAILBOX_0), 76 78 DEFINE_BAR_OFFSET(FW_ALIVE_OFF, NPU5_SRAM, MPNPU_SRAM_X2I_MAILBOX_15),
+2
drivers/accel/amdxdna/npu6_regs.c
··· 67 67 .protocol_minor = 12, 68 68 .rt_config = npu4_default_rt_cfg, 69 69 .dpm_clk_tbl = npu4_dpm_clk_table, 70 + .fw_feature_tbl = npu4_fw_feature_table, 70 71 .col_align = COL_ALIGN_NATURE, 71 72 .mbox_dev_addr = NPU6_MBOX_BAR_BASE, 72 73 .mbox_size = 0, /* Use BAR size */ 73 74 .sram_dev_addr = NPU6_SRAM_BAR_BASE, 75 + .hwctx_limit = 16, 74 76 .sram_offs = { 75 77 DEFINE_BAR_OFFSET(MBOX_CHANN_OFF, NPU6_SRAM, MPNPU_SRAM_X2I_MAILBOX_0), 76 78 DEFINE_BAR_OFFSET(FW_ALIVE_OFF, NPU6_SRAM, MPNPU_SRAM_X2I_MAILBOX_15),
+1
drivers/accel/ethosu/ethosu_job.c
··· 12 12 #include <drm/drm_file.h> 13 13 #include <drm/drm_gem.h> 14 14 #include <drm/drm_gem_dma_helper.h> 15 + #include <drm/drm_print.h> 15 16 #include <drm/ethosu_accel.h> 16 17 17 18 #include "ethosu_device.h"
+1
drivers/accel/ivpu/Makefile
··· 6 6 ivpu_fw.o \ 7 7 ivpu_fw_log.o \ 8 8 ivpu_gem.o \ 9 + ivpu_gem_userptr.o \ 9 10 ivpu_hw.o \ 10 11 ivpu_hw_btrs.o \ 11 12 ivpu_hw_ip.o \
+4 -1
drivers/accel/ivpu/ivpu_drv.c
··· 57 57 58 58 int ivpu_sched_mode = IVPU_SCHED_MODE_AUTO; 59 59 module_param_named(sched_mode, ivpu_sched_mode, int, 0444); 60 - MODULE_PARM_DESC(sched_mode, "Scheduler mode: -1 - Use default scheduler, 0 - Use OS scheduler, 1 - Use HW scheduler"); 60 + MODULE_PARM_DESC(sched_mode, "Scheduler mode: -1 - Use default scheduler, 0 - Use OS scheduler (supported on 27XX - 50XX), 1 - Use HW scheduler"); 61 61 62 62 bool ivpu_disable_mmu_cont_pages; 63 63 module_param_named(disable_mmu_cont_pages, ivpu_disable_mmu_cont_pages, bool, 0444); ··· 133 133 case DRM_IVPU_CAP_METRIC_STREAMER: 134 134 return true; 135 135 case DRM_IVPU_CAP_DMA_MEMORY_RANGE: 136 + return true; 137 + case DRM_IVPU_CAP_BO_CREATE_FROM_USERPTR: 136 138 return true; 137 139 case DRM_IVPU_CAP_MANAGE_CMDQ: 138 140 return vdev->fw->sched_mode == VPU_SCHEDULING_MODE_HW; ··· 315 313 DRM_IOCTL_DEF_DRV(IVPU_CMDQ_CREATE, ivpu_cmdq_create_ioctl, 0), 316 314 DRM_IOCTL_DEF_DRV(IVPU_CMDQ_DESTROY, ivpu_cmdq_destroy_ioctl, 0), 317 315 DRM_IOCTL_DEF_DRV(IVPU_CMDQ_SUBMIT, ivpu_cmdq_submit_ioctl, 0), 316 + DRM_IOCTL_DEF_DRV(IVPU_BO_CREATE_FROM_USERPTR, ivpu_bo_create_from_userptr_ioctl, 0), 318 317 }; 319 318 320 319 static int ivpu_wait_for_ready(struct ivpu_device *vdev)
+1
drivers/accel/ivpu/ivpu_drv.h
··· 79 79 #define IVPU_DBG_KREF BIT(11) 80 80 #define IVPU_DBG_RPM BIT(12) 81 81 #define IVPU_DBG_MMU_MAP BIT(13) 82 + #define IVPU_DBG_IOCTL BIT(14) 82 83 83 84 #define ivpu_err(vdev, fmt, ...) \ 84 85 drm_err(&(vdev)->drm, "%s(): " fmt, __func__, ##__VA_ARGS__)
+6
drivers/accel/ivpu/ivpu_fw.c
··· 144 144 static u32 145 145 ivpu_fw_sched_mode_select(struct ivpu_device *vdev, const struct vpu_firmware_header *fw_hdr) 146 146 { 147 + if (ivpu_hw_ip_gen(vdev) >= IVPU_HW_IP_60XX && 148 + ivpu_sched_mode == VPU_SCHEDULING_MODE_OS) { 149 + ivpu_warn(vdev, "OS sched mode is not supported, using HW mode\n"); 150 + return VPU_SCHEDULING_MODE_HW; 151 + } 152 + 147 153 if (ivpu_sched_mode != IVPU_SCHED_MODE_AUTO) 148 154 return ivpu_sched_mode; 149 155
+18 -15
drivers/accel/ivpu/ivpu_gem.c
··· 96 96 if (!bo->mmu_mapped) { 97 97 drm_WARN_ON(&vdev->drm, !bo->ctx); 98 98 ret = ivpu_mmu_context_map_sgt(vdev, bo->ctx, bo->vpu_addr, sgt, 99 - ivpu_bo_is_snooped(bo)); 99 + ivpu_bo_is_snooped(bo), ivpu_bo_is_read_only(bo)); 100 100 if (ret) { 101 101 ivpu_err(vdev, "Failed to map BO in MMU: %d\n", ret); 102 102 goto unlock; ··· 128 128 bo->ctx_id = ctx->id; 129 129 bo->vpu_addr = bo->mm_node.start; 130 130 ivpu_dbg_bo(vdev, bo, "vaddr"); 131 - } else { 132 - ivpu_err(vdev, "Failed to add BO to context %u: %d\n", ctx->id, ret); 133 131 } 134 132 135 133 ivpu_bo_unlock(bo); ··· 155 157 ivpu_mmu_context_remove_node(bo->ctx, &bo->mm_node); 156 158 bo->ctx = NULL; 157 159 } 158 - 159 - if (drm_gem_is_imported(&bo->base.base)) 160 - return; 161 160 162 161 if (bo->base.sgt) { 163 162 if (bo->base.base.import_attach) { ··· 287 292 struct ivpu_addr_range *range; 288 293 289 294 if (bo->ctx) { 290 - ivpu_warn(vdev, "Can't add BO to ctx %u: already in ctx %u\n", 291 - file_priv->ctx.id, bo->ctx->id); 295 + ivpu_dbg(vdev, IOCTL, "Can't add BO %pe to ctx %u: already in ctx %u\n", 296 + bo, file_priv->ctx.id, bo->ctx->id); 292 297 return -EALREADY; 293 298 } 294 299 ··· 313 318 314 319 mutex_lock(&vdev->bo_list_lock); 315 320 list_del(&bo->bo_list_node); 316 - mutex_unlock(&vdev->bo_list_lock); 317 321 318 322 drm_WARN_ON(&vdev->drm, !drm_gem_is_imported(&bo->base.base) && 319 323 !dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ)); ··· 322 328 ivpu_bo_lock(bo); 323 329 ivpu_bo_unbind_locked(bo); 324 330 ivpu_bo_unlock(bo); 331 + 332 + mutex_unlock(&vdev->bo_list_lock); 325 333 326 334 drm_WARN_ON(&vdev->drm, bo->mmu_mapped); 327 335 drm_WARN_ON(&vdev->drm, bo->ctx); ··· 355 359 struct ivpu_bo *bo; 356 360 int ret; 357 361 358 - if (args->flags & ~DRM_IVPU_BO_FLAGS) 362 + if (args->flags & ~DRM_IVPU_BO_FLAGS) { 363 + ivpu_dbg(vdev, IOCTL, "Invalid BO flags 0x%x\n", args->flags); 359 364 return -EINVAL; 365 + } 360 366 361 - if (size == 0) 367 + if (size == 0) { 368 
+ ivpu_dbg(vdev, IOCTL, "Invalid BO size %llu\n", args->size); 362 369 return -EINVAL; 370 + } 363 371 364 372 bo = ivpu_bo_alloc(vdev, size, args->flags); 365 373 if (IS_ERR(bo)) { 366 - ivpu_err(vdev, "Failed to allocate BO: %pe (ctx %u size %llu flags 0x%x)", 374 + ivpu_dbg(vdev, IOCTL, "Failed to allocate BO: %pe ctx %u size %llu flags 0x%x\n", 367 375 bo, file_priv->ctx.id, args->size, args->flags); 368 376 return PTR_ERR(bo); 369 377 } ··· 376 376 377 377 ret = drm_gem_handle_create(file, &bo->base.base, &args->handle); 378 378 if (ret) { 379 - ivpu_err(vdev, "Failed to create handle for BO: %pe (ctx %u size %llu flags 0x%x)", 379 + ivpu_dbg(vdev, IOCTL, "Failed to create handle for BO: %pe ctx %u size %llu flags 0x%x\n", 380 380 bo, file_priv->ctx.id, args->size, args->flags); 381 381 } else { 382 382 args->vpu_addr = bo->vpu_addr; ··· 405 405 406 406 bo = ivpu_bo_alloc(vdev, size, flags); 407 407 if (IS_ERR(bo)) { 408 - ivpu_err(vdev, "Failed to allocate BO: %pe (vpu_addr 0x%llx size %llu flags 0x%x)", 408 + ivpu_err(vdev, "Failed to allocate BO: %pe vpu_addr 0x%llx size %llu flags 0x%x\n", 409 409 bo, range->start, size, flags); 410 410 return NULL; 411 411 } 412 412 413 413 ret = ivpu_bo_alloc_vpu_addr(bo, ctx, range); 414 - if (ret) 414 + if (ret) { 415 + ivpu_err(vdev, "Failed to allocate NPU address for BO: %pe ctx %u size %llu: %d\n", 416 + bo, ctx->id, size, ret); 415 417 goto err_put; 418 + } 416 419 417 420 ret = ivpu_bo_bind(bo); 418 421 if (ret)
+7
drivers/accel/ivpu/ivpu_gem.h
··· 38 38 int ivpu_bo_create_ioctl(struct drm_device *dev, void *data, struct drm_file *file); 39 39 int ivpu_bo_info_ioctl(struct drm_device *dev, void *data, struct drm_file *file); 40 40 int ivpu_bo_wait_ioctl(struct drm_device *dev, void *data, struct drm_file *file); 41 + int ivpu_bo_create_from_userptr_ioctl(struct drm_device *dev, void *data, 42 + struct drm_file *file); 41 43 42 44 void ivpu_bo_list(struct drm_device *dev, struct drm_printer *p); 43 45 void ivpu_bo_list_print(struct drm_device *dev); ··· 75 73 return true; 76 74 77 75 return ivpu_bo_cache_mode(bo) == DRM_IVPU_BO_CACHED; 76 + } 77 + 78 + static inline bool ivpu_bo_is_read_only(struct ivpu_bo *bo) 79 + { 80 + return bo->flags & DRM_IVPU_BO_READ_ONLY; 78 81 } 79 82 80 83 static inline void *ivpu_to_cpu_addr(struct ivpu_bo *bo, u32 vpu_addr)
+213
drivers/accel/ivpu/ivpu_gem_userptr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2020-2025 Intel Corporation 4 + */ 5 + 6 + #include <linux/dma-buf.h> 7 + #include <linux/err.h> 8 + #include <linux/highmem.h> 9 + #include <linux/mm.h> 10 + #include <linux/mman.h> 11 + #include <linux/scatterlist.h> 12 + #include <linux/slab.h> 13 + #include <linux/capability.h> 14 + 15 + #include <drm/drm_device.h> 16 + #include <drm/drm_file.h> 17 + #include <drm/drm_gem.h> 18 + 19 + #include "ivpu_drv.h" 20 + #include "ivpu_gem.h" 21 + 22 + static struct sg_table * 23 + ivpu_gem_userptr_dmabuf_map(struct dma_buf_attachment *attachment, 24 + enum dma_data_direction direction) 25 + { 26 + struct sg_table *sgt = attachment->dmabuf->priv; 27 + int ret; 28 + 29 + ret = dma_map_sgtable(attachment->dev, sgt, direction, DMA_ATTR_SKIP_CPU_SYNC); 30 + if (ret) 31 + return ERR_PTR(ret); 32 + 33 + return sgt; 34 + } 35 + 36 + static void ivpu_gem_userptr_dmabuf_unmap(struct dma_buf_attachment *attachment, 37 + struct sg_table *sgt, 38 + enum dma_data_direction direction) 39 + { 40 + dma_unmap_sgtable(attachment->dev, sgt, direction, DMA_ATTR_SKIP_CPU_SYNC); 41 + } 42 + 43 + static void ivpu_gem_userptr_dmabuf_release(struct dma_buf *dma_buf) 44 + { 45 + struct sg_table *sgt = dma_buf->priv; 46 + struct sg_page_iter page_iter; 47 + struct page *page; 48 + 49 + for_each_sgtable_page(sgt, &page_iter, 0) { 50 + page = sg_page_iter_page(&page_iter); 51 + unpin_user_page(page); 52 + } 53 + 54 + sg_free_table(sgt); 55 + kfree(sgt); 56 + } 57 + 58 + static const struct dma_buf_ops ivpu_gem_userptr_dmabuf_ops = { 59 + .map_dma_buf = ivpu_gem_userptr_dmabuf_map, 60 + .unmap_dma_buf = ivpu_gem_userptr_dmabuf_unmap, 61 + .release = ivpu_gem_userptr_dmabuf_release, 62 + }; 63 + 64 + static struct dma_buf * 65 + ivpu_create_userptr_dmabuf(struct ivpu_device *vdev, void __user *user_ptr, 66 + size_t size, uint32_t flags) 67 + { 68 + struct dma_buf_export_info exp_info = {}; 69 + struct dma_buf *dma_buf; 70 + 
struct sg_table *sgt; 71 + struct page **pages; 72 + unsigned long nr_pages = size >> PAGE_SHIFT; 73 + unsigned int gup_flags = FOLL_LONGTERM; 74 + int ret, i, pinned; 75 + 76 + /* Add FOLL_WRITE only if the BO is not read-only */ 77 + if (!(flags & DRM_IVPU_BO_READ_ONLY)) 78 + gup_flags |= FOLL_WRITE; 79 + 80 + pages = kvmalloc_array(nr_pages, sizeof(*pages), GFP_KERNEL); 81 + if (!pages) 82 + return ERR_PTR(-ENOMEM); 83 + 84 + pinned = pin_user_pages_fast((unsigned long)user_ptr, nr_pages, gup_flags, pages); 85 + if (pinned < 0) { 86 + ret = pinned; 87 + ivpu_dbg(vdev, IOCTL, "Failed to pin user pages: %d\n", ret); 88 + goto free_pages_array; 89 + } 90 + 91 + if (pinned != nr_pages) { 92 + ivpu_dbg(vdev, IOCTL, "Pinned %d pages, expected %lu\n", pinned, nr_pages); 93 + ret = -EFAULT; 94 + goto unpin_pages; 95 + } 96 + 97 + sgt = kmalloc(sizeof(*sgt), GFP_KERNEL); 98 + if (!sgt) { 99 + ret = -ENOMEM; 100 + goto unpin_pages; 101 + } 102 + 103 + ret = sg_alloc_table_from_pages(sgt, pages, nr_pages, 0, size, GFP_KERNEL); 104 + if (ret) { 105 + ivpu_dbg(vdev, IOCTL, "Failed to create sg table: %d\n", ret); 106 + goto free_sgt; 107 + } 108 + 109 + exp_info.exp_name = "ivpu_userptr_dmabuf"; 110 + exp_info.owner = THIS_MODULE; 111 + exp_info.ops = &ivpu_gem_userptr_dmabuf_ops; 112 + exp_info.size = size; 113 + exp_info.flags = O_RDWR | O_CLOEXEC; 114 + exp_info.priv = sgt; 115 + 116 + dma_buf = dma_buf_export(&exp_info); 117 + if (IS_ERR(dma_buf)) { 118 + ret = PTR_ERR(dma_buf); 119 + ivpu_dbg(vdev, IOCTL, "Failed to export userptr dma-buf: %d\n", ret); 120 + goto free_sg_table; 121 + } 122 + 123 + kvfree(pages); 124 + return dma_buf; 125 + 126 + free_sg_table: 127 + sg_free_table(sgt); 128 + free_sgt: 129 + kfree(sgt); 130 + unpin_pages: 131 + for (i = 0; i < pinned; i++) 132 + unpin_user_page(pages[i]); 133 + free_pages_array: 134 + kvfree(pages); 135 + return ERR_PTR(ret); 136 + } 137 + 138 + static struct ivpu_bo * 139 + ivpu_bo_create_from_userptr(struct ivpu_device 
*vdev, void __user *user_ptr, 140 + size_t size, uint32_t flags) 141 + { 142 + struct dma_buf *dma_buf; 143 + struct drm_gem_object *obj; 144 + struct ivpu_bo *bo; 145 + 146 + dma_buf = ivpu_create_userptr_dmabuf(vdev, user_ptr, size, flags); 147 + if (IS_ERR(dma_buf)) 148 + return ERR_CAST(dma_buf); 149 + 150 + obj = ivpu_gem_prime_import(&vdev->drm, dma_buf); 151 + if (IS_ERR(obj)) { 152 + dma_buf_put(dma_buf); 153 + return ERR_CAST(obj); 154 + } 155 + 156 + dma_buf_put(dma_buf); 157 + 158 + bo = to_ivpu_bo(obj); 159 + bo->flags = flags; 160 + 161 + return bo; 162 + } 163 + 164 + int ivpu_bo_create_from_userptr_ioctl(struct drm_device *dev, void *data, struct drm_file *file) 165 + { 166 + struct drm_ivpu_bo_create_from_userptr *args = data; 167 + struct ivpu_file_priv *file_priv = file->driver_priv; 168 + struct ivpu_device *vdev = to_ivpu_device(dev); 169 + void __user *user_ptr = u64_to_user_ptr(args->user_ptr); 170 + struct ivpu_bo *bo; 171 + int ret; 172 + 173 + if (args->flags & ~(DRM_IVPU_BO_HIGH_MEM | DRM_IVPU_BO_DMA_MEM | DRM_IVPU_BO_READ_ONLY)) { 174 + ivpu_dbg(vdev, IOCTL, "Invalid BO flags: 0x%x\n", args->flags); 175 + return -EINVAL; 176 + } 177 + 178 + if (!args->user_ptr || !args->size) { 179 + ivpu_dbg(vdev, IOCTL, "Userptr or size are zero: ptr %llx size %llu\n", 180 + args->user_ptr, args->size); 181 + return -EINVAL; 182 + } 183 + 184 + if (!PAGE_ALIGNED(args->user_ptr) || !PAGE_ALIGNED(args->size)) { 185 + ivpu_dbg(vdev, IOCTL, "Userptr or size not page aligned: ptr %llx size %llu\n", 186 + args->user_ptr, args->size); 187 + return -EINVAL; 188 + } 189 + 190 + if (!access_ok(user_ptr, args->size)) { 191 + ivpu_dbg(vdev, IOCTL, "Userptr is not accessible: ptr %llx size %llu\n", 192 + args->user_ptr, args->size); 193 + return -EFAULT; 194 + } 195 + 196 + bo = ivpu_bo_create_from_userptr(vdev, user_ptr, args->size, args->flags); 197 + if (IS_ERR(bo)) 198 + return PTR_ERR(bo); 199 + 200 + ret = drm_gem_handle_create(file, &bo->base.base, 
&args->handle); 201 + if (ret) { 202 + ivpu_dbg(vdev, IOCTL, "Failed to create handle for BO: %pe ctx %u size %llu flags 0x%x\n", 203 + bo, file_priv->ctx.id, args->size, args->flags); 204 + } else { 205 + ivpu_dbg(vdev, BO, "Created userptr BO: handle=%u vpu_addr=0x%llx size=%llu flags=0x%x\n", 206 + args->handle, bo->vpu_addr, args->size, bo->flags); 207 + args->vpu_addr = bo->vpu_addr; 208 + } 209 + 210 + drm_gem_object_put(&bo->base.base); 211 + 212 + return ret; 213 + }
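The new ivpu_bo_create_from_userptr_ioctl() rejects bad arguments before pinning any pages: unknown flag bits, a zero pointer or size, and ranges that are not page aligned. Those up-front checks can be sketched in plain C (the constants and flag values below are stand-ins, not the real DRM_IVPU_BO_* UAPI values):

```c
#include <stdint.h>

#define DEMO_PAGE_SIZE 4096u

/* Hypothetical flag bits mirroring the ioctl's whitelist check. */
#define DEMO_BO_HIGH_MEM	0x1u
#define DEMO_BO_DMA_MEM		0x2u
#define DEMO_BO_READ_ONLY	0x4u
#define DEMO_BO_VALID_FLAGS \
	(DEMO_BO_HIGH_MEM | DEMO_BO_DMA_MEM | DEMO_BO_READ_ONLY)

/* Mirrors the ioctl's ordering: flags restricted to a whitelist, then
 * nonzero pointer/size, then page alignment. Returns 0 or -EINVAL (-22). */
static int demo_userptr_args_ok(uint64_t user_ptr, uint64_t size, uint32_t flags)
{
	if (flags & ~DEMO_BO_VALID_FLAGS)
		return -22;
	if (!user_ptr || !size)
		return -22;
	if ((user_ptr | size) & (DEMO_PAGE_SIZE - 1))
		return -22;
	return 0;
}
```

Validating before calling pin_user_pages_fast() keeps the failure paths cheap: nothing has to be unpinned or freed when userspace passes garbage.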
+17 -1
drivers/accel/ivpu/ivpu_hw_btrs.c
··· 321 321 return REGB_POLL_FLD(VPU_HW_BTRS_MTL_PLL_STATUS, LOCK, exp_val, PLL_TIMEOUT_US); 322 322 } 323 323 324 + static int wait_for_cdyn_deassert(struct ivpu_device *vdev) 325 + { 326 + if (ivpu_hw_btrs_gen(vdev) == IVPU_HW_BTRS_MTL) 327 + return 0; 328 + 329 + return REGB_POLL_FLD(VPU_HW_BTRS_LNL_CDYN, CDYN, 0, PLL_TIMEOUT_US); 330 + } 331 + 324 332 int ivpu_hw_btrs_wp_drive(struct ivpu_device *vdev, bool enable) 325 333 { 326 334 struct wp_request wp; ··· 360 352 if (ret) { 361 353 ivpu_err(vdev, "Timed out waiting for NPU ready status\n"); 362 354 return ret; 355 + } 356 + 357 + if (!enable) { 358 + ret = wait_for_cdyn_deassert(vdev); 359 + if (ret) { 360 + ivpu_err(vdev, "Timed out waiting for CDYN deassert\n"); 361 + return ret; 362 + } 363 363 } 364 364 365 365 return 0; ··· 689 673 690 674 if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, SURV_ERR, status)) { 691 675 ivpu_dbg(vdev, IRQ, "Survivability IRQ\n"); 692 - queue_work(system_wq, &vdev->irq_dct_work); 676 + queue_work(system_percpu_wq, &vdev->irq_dct_work); 693 677 } 694 678 695 679 if (REG_TEST_FLD(VPU_HW_BTRS_LNL_INTERRUPT_STAT, FREQ_CHANGE, status)) {
+3
drivers/accel/ivpu/ivpu_hw_btrs_lnl_reg.h
··· 74 74 #define VPU_HW_BTRS_LNL_PLL_FREQ 0x00000148u 75 75 #define VPU_HW_BTRS_LNL_PLL_FREQ_RATIO_MASK GENMASK(15, 0) 76 76 77 + #define VPU_HW_BTRS_LNL_CDYN 0x0000014cu 78 + #define VPU_HW_BTRS_LNL_CDYN_CDYN_MASK GENMASK(15, 0) 79 + 77 80 #define VPU_HW_BTRS_LNL_TILE_FUSE 0x00000150u 78 81 #define VPU_HW_BTRS_LNL_TILE_FUSE_VALID_MASK BIT_MASK(0) 79 82 #define VPU_HW_BTRS_LNL_TILE_FUSE_CONFIG_MASK GENMASK(6, 1)
+1 -1
drivers/accel/ivpu/ivpu_ipc.c
··· 459 459 } 460 460 } 461 461 462 - queue_work(system_wq, &vdev->irq_ipc_work); 462 + queue_work(system_percpu_wq, &vdev->irq_ipc_work); 463 463 } 464 464 465 465 void ivpu_ipc_irq_work_fn(struct work_struct *work)
+66 -32
drivers/accel/ivpu/ivpu_job.c
··· 348 348 349 349 cmdq = xa_load(&file_priv->cmdq_xa, cmdq_id); 350 350 if (!cmdq) { 351 - ivpu_warn_ratelimited(vdev, "Failed to find command queue with ID: %u\n", cmdq_id); 351 + ivpu_dbg(vdev, IOCTL, "Failed to find command queue with ID: %u\n", cmdq_id); 352 352 return NULL; 353 353 } 354 354 ··· 534 534 job->bo_count = bo_count; 535 535 job->done_fence = ivpu_fence_create(vdev); 536 536 if (!job->done_fence) { 537 - ivpu_warn_ratelimited(vdev, "Failed to create a fence\n"); 537 + ivpu_err(vdev, "Failed to create a fence\n"); 538 538 goto err_free_job; 539 539 } 540 540 ··· 591 591 * status and ensure both are handled in the same way 592 592 */ 593 593 job->file_priv->has_mmu_faults = true; 594 - queue_work(system_wq, &vdev->context_abort_work); 594 + queue_work(system_percpu_wq, &vdev->context_abort_work); 595 595 return true; 596 596 } 597 597 default: ··· 687 687 else 688 688 cmdq = ivpu_cmdq_acquire(file_priv, cmdq_id); 689 689 if (!cmdq) { 690 - ivpu_warn_ratelimited(vdev, "Failed to get job queue, ctx %d\n", file_priv->ctx.id); 691 690 ret = -EINVAL; 692 691 goto err_unlock; 693 692 } ··· 770 771 for (i = 0; i < buf_count; i++) { 771 772 struct drm_gem_object *obj = drm_gem_object_lookup(file, buf_handles[i]); 772 773 773 - if (!obj) 774 + if (!obj) { 775 + ivpu_dbg(vdev, IOCTL, "Failed to lookup GEM object with handle %u\n", 776 + buf_handles[i]); 774 777 return -ENOENT; 778 + } 775 779 776 780 job->bos[i] = to_ivpu_bo(obj); 777 781 ··· 785 783 786 784 bo = job->bos[CMD_BUF_IDX]; 787 785 if (!dma_resv_test_signaled(bo->base.base.resv, DMA_RESV_USAGE_READ)) { 788 - ivpu_warn(vdev, "Buffer is already in use\n"); 786 + ivpu_dbg(vdev, IOCTL, "Buffer is already in use by another job\n"); 789 787 return -EBUSY; 790 788 } 791 789 792 790 if (commands_offset >= ivpu_bo_size(bo)) { 793 - ivpu_warn(vdev, "Invalid command buffer offset %u\n", commands_offset); 791 + ivpu_dbg(vdev, IOCTL, "Invalid commands offset %u for buffer size %zu\n", 792 + commands_offset, 
ivpu_bo_size(bo)); 794 793 return -EINVAL; 795 794 } 796 795 ··· 801 798 struct ivpu_bo *preempt_bo = job->bos[preempt_buffer_index]; 802 799 803 800 if (ivpu_bo_size(preempt_bo) < ivpu_fw_preempt_buf_size(vdev)) { 804 - ivpu_warn(vdev, "Preemption buffer is too small\n"); 801 + ivpu_dbg(vdev, IOCTL, "Preemption buffer is too small\n"); 805 802 return -EINVAL; 806 803 } 807 804 if (ivpu_bo_is_mappable(preempt_bo)) { 808 - ivpu_warn(vdev, "Preemption buffer cannot be mappable\n"); 805 + ivpu_dbg(vdev, IOCTL, "Preemption buffer cannot be mappable\n"); 809 806 return -EINVAL; 810 807 } 811 808 job->primary_preempt_buf = preempt_bo; ··· 814 811 ret = drm_gem_lock_reservations((struct drm_gem_object **)job->bos, buf_count, 815 812 &acquire_ctx); 816 813 if (ret) { 817 - ivpu_warn(vdev, "Failed to lock reservations: %d\n", ret); 814 + ivpu_warn_ratelimited(vdev, "Failed to lock reservations: %d\n", ret); 818 815 return ret; 819 816 } 820 817 821 818 for (i = 0; i < buf_count; i++) { 822 819 ret = dma_resv_reserve_fences(job->bos[i]->base.base.resv, 1); 823 820 if (ret) { 824 - ivpu_warn(vdev, "Failed to reserve fences: %d\n", ret); 821 + ivpu_warn_ratelimited(vdev, "Failed to reserve fences: %d\n", ret); 825 822 goto unlock_reservations; 826 823 } 827 824 } ··· 868 865 869 866 job = ivpu_job_create(file_priv, engine, buffer_count); 870 867 if (!job) { 871 - ivpu_err(vdev, "Failed to create job\n"); 872 868 ret = -ENOMEM; 873 869 goto err_exit_dev; 874 870 } 875 871 876 872 ret = ivpu_job_prepare_bos_for_submit(file, job, buf_handles, buffer_count, cmds_offset, 877 873 preempt_buffer_index); 878 - if (ret) { 879 - ivpu_err(vdev, "Failed to prepare job: %d\n", ret); 874 + if (ret) 880 875 goto err_destroy_job; 881 - } 882 876 883 877 down_read(&vdev->pm->reset_lock); 884 878 ret = ivpu_job_submit(job, priority, cmdq_id); ··· 901 901 int ivpu_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file) 902 902 { 903 903 struct ivpu_file_priv *file_priv = 
file->driver_priv; 904 + struct ivpu_device *vdev = file_priv->vdev; 904 905 struct drm_ivpu_submit *args = data; 905 906 u8 priority; 906 907 907 - if (args->engine != DRM_IVPU_ENGINE_COMPUTE) 908 + if (args->engine != DRM_IVPU_ENGINE_COMPUTE) { 909 + ivpu_dbg(vdev, IOCTL, "Invalid engine %d\n", args->engine); 908 910 return -EINVAL; 911 + } 909 912 910 - if (args->priority > DRM_IVPU_JOB_PRIORITY_REALTIME) 913 + if (args->priority > DRM_IVPU_JOB_PRIORITY_REALTIME) { 914 + ivpu_dbg(vdev, IOCTL, "Invalid priority %d\n", args->priority); 911 915 return -EINVAL; 916 + } 912 917 913 - if (args->buffer_count == 0 || args->buffer_count > JOB_MAX_BUFFER_COUNT) 918 + if (args->buffer_count == 0 || args->buffer_count > JOB_MAX_BUFFER_COUNT) { 919 + ivpu_dbg(vdev, IOCTL, "Invalid buffer count %u\n", args->buffer_count); 914 920 return -EINVAL; 921 + } 915 922 916 - if (!IS_ALIGNED(args->commands_offset, 8)) 923 + if (!IS_ALIGNED(args->commands_offset, 8)) { 924 + ivpu_dbg(vdev, IOCTL, "Invalid commands offset %u\n", args->commands_offset); 917 925 return -EINVAL; 926 + } 918 927 919 - if (!file_priv->ctx.id) 928 + if (!file_priv->ctx.id) { 929 + ivpu_dbg(vdev, IOCTL, "Context not initialized\n"); 920 930 return -EINVAL; 931 + } 921 932 922 - if (file_priv->has_mmu_faults) 933 + if (file_priv->has_mmu_faults) { 934 + ivpu_dbg(vdev, IOCTL, "Context %u has MMU faults\n", file_priv->ctx.id); 923 935 return -EBADFD; 936 + } 924 937 925 938 priority = ivpu_job_to_jsm_priority(args->priority); 926 939 ··· 944 931 int ivpu_cmdq_submit_ioctl(struct drm_device *dev, void *data, struct drm_file *file) 945 932 { 946 933 struct ivpu_file_priv *file_priv = file->driver_priv; 934 + struct ivpu_device *vdev = file_priv->vdev; 947 935 struct drm_ivpu_cmdq_submit *args = data; 948 936 949 - if (!ivpu_is_capable(file_priv->vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) 937 + if (!ivpu_is_capable(file_priv->vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) { 938 + ivpu_dbg(vdev, IOCTL, "Command queue management not 
supported\n"); 950 939 return -ENODEV; 940 + } 951 941 952 - if (args->cmdq_id < IVPU_CMDQ_MIN_ID || args->cmdq_id > IVPU_CMDQ_MAX_ID) 942 + if (args->cmdq_id < IVPU_CMDQ_MIN_ID || args->cmdq_id > IVPU_CMDQ_MAX_ID) { 943 + ivpu_dbg(vdev, IOCTL, "Invalid command queue ID %u\n", args->cmdq_id); 953 944 return -EINVAL; 945 + } 954 946 955 - if (args->buffer_count == 0 || args->buffer_count > JOB_MAX_BUFFER_COUNT) 947 + if (args->buffer_count == 0 || args->buffer_count > JOB_MAX_BUFFER_COUNT) { 948 + ivpu_dbg(vdev, IOCTL, "Invalid buffer count %u\n", args->buffer_count); 956 949 return -EINVAL; 950 + } 957 951 958 - if (args->preempt_buffer_index >= args->buffer_count) 952 + if (args->preempt_buffer_index >= args->buffer_count) { 953 + ivpu_dbg(vdev, IOCTL, "Invalid preemption buffer index %u\n", 954 + args->preempt_buffer_index); 959 955 return -EINVAL; 956 + } 960 957 961 - if (!IS_ALIGNED(args->commands_offset, 8)) 958 + if (!IS_ALIGNED(args->commands_offset, 8)) { 959 + ivpu_dbg(vdev, IOCTL, "Invalid commands offset %u\n", args->commands_offset); 962 960 return -EINVAL; 961 + } 963 962 964 - if (!file_priv->ctx.id) 963 + if (!file_priv->ctx.id) { 964 + ivpu_dbg(vdev, IOCTL, "Context not initialized\n"); 965 965 return -EINVAL; 966 + } 966 967 967 - if (file_priv->has_mmu_faults) 968 + if (file_priv->has_mmu_faults) { 969 + ivpu_dbg(vdev, IOCTL, "Context %u has MMU faults\n", file_priv->ctx.id); 968 970 return -EBADFD; 971 + } 969 972 970 973 return ivpu_submit(file, file_priv, args->cmdq_id, args->buffer_count, VPU_ENGINE_COMPUTE, 971 974 (void __user *)args->buffers_ptr, args->commands_offset, ··· 996 967 struct ivpu_cmdq *cmdq; 997 968 int ret; 998 969 999 - if (!ivpu_is_capable(vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) 970 + if (!ivpu_is_capable(vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) { 971 + ivpu_dbg(vdev, IOCTL, "Command queue management not supported\n"); 1000 972 return -ENODEV; 973 + } 1001 974 1002 - if (args->priority > DRM_IVPU_JOB_PRIORITY_REALTIME) 975 + if 
(args->priority > DRM_IVPU_JOB_PRIORITY_REALTIME) { 976 + ivpu_dbg(vdev, IOCTL, "Invalid priority %d\n", args->priority); 1003 977 return -EINVAL; 978 + } 1004 979 1005 980 ret = ivpu_rpm_get(vdev); 1006 981 if (ret < 0) ··· 1032 999 u32 cmdq_id = 0; 1033 1000 int ret; 1034 1001 1035 - if (!ivpu_is_capable(vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) 1002 + if (!ivpu_is_capable(vdev, DRM_IVPU_CAP_MANAGE_CMDQ)) { 1003 + ivpu_dbg(vdev, IOCTL, "Command queue management not supported\n"); 1036 1004 return -ENODEV; 1005 + } 1037 1006 1038 1007 ret = ivpu_rpm_get(vdev); 1039 1008 if (ret < 0) ··· 1149 1114 mutex_unlock(&vdev->submitted_jobs_lock); 1150 1115 1151 1116 runtime_put: 1152 - pm_runtime_mark_last_busy(vdev->drm.dev); 1153 1117 pm_runtime_put_autosuspend(vdev->drm.dev); 1154 1118 }
+1 -1
drivers/accel/ivpu/ivpu_mmu.c
··· 970 970 } 971 971 } 972 972 973 - queue_work(system_wq, &vdev->context_abort_work); 973 + queue_work(system_percpu_wq, &vdev->context_abort_work); 974 974 } 975 975 976 976 void ivpu_mmu_evtq_dump(struct ivpu_device *vdev)
+5 -2
drivers/accel/ivpu/ivpu_mmu_context.c
··· 430 430 431 431 int 432 432 ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, 433 - u64 vpu_addr, struct sg_table *sgt, bool llc_coherent) 433 + u64 vpu_addr, struct sg_table *sgt, bool llc_coherent, bool read_only) 434 434 { 435 435 size_t start_vpu_addr = vpu_addr; 436 436 struct scatterlist *sg; ··· 450 450 prot = IVPU_MMU_ENTRY_MAPPED; 451 451 if (llc_coherent) 452 452 prot |= IVPU_MMU_ENTRY_FLAG_LLC_COHERENT; 453 + if (read_only) 454 + prot |= IVPU_MMU_ENTRY_FLAG_RO; 453 455 454 456 mutex_lock(&ctx->lock); 455 457 ··· 529 527 530 528 ret = ivpu_mmu_invalidate_tlb(vdev, ctx->id); 531 529 if (ret) 532 - ivpu_warn(vdev, "Failed to invalidate TLB for ctx %u: %d\n", ctx->id, ret); 530 + ivpu_warn_ratelimited(vdev, "Failed to invalidate TLB for ctx %u: %d\n", 531 + ctx->id, ret); 533 532 } 534 533 535 534 int
+1 -1
drivers/accel/ivpu/ivpu_mmu_context.h
··· 42 42 void ivpu_mmu_context_remove_node(struct ivpu_mmu_context *ctx, struct drm_mm_node *node); 43 43 44 44 int ivpu_mmu_context_map_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, 45 - u64 vpu_addr, struct sg_table *sgt, bool llc_coherent); 45 + u64 vpu_addr, struct sg_table *sgt, bool llc_coherent, bool read_only); 46 46 void ivpu_mmu_context_unmap_sgt(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx, 47 47 u64 vpu_addr, struct sg_table *sgt); 48 48 int ivpu_mmu_context_set_pages_ro(struct ivpu_device *vdev, struct ivpu_mmu_context *ctx,
+17 -8
drivers/accel/ivpu/ivpu_ms.c
··· 8 8 9 9 #include "ivpu_drv.h" 10 10 #include "ivpu_gem.h" 11 + #include "ivpu_hw.h" 11 12 #include "ivpu_jsm_msg.h" 12 13 #include "ivpu_ms.h" 13 14 #include "ivpu_pm.h" ··· 38 37 struct drm_ivpu_metric_streamer_start *args = data; 39 38 struct ivpu_device *vdev = file_priv->vdev; 40 39 struct ivpu_ms_instance *ms; 41 - u64 single_buff_size; 42 40 u32 sample_size; 41 + u64 buf_size; 43 42 int ret; 44 43 45 44 if (!args->metric_group_mask || !args->read_period_samples || ··· 53 52 mutex_lock(&file_priv->ms_lock); 54 53 55 54 if (get_instance_by_mask(file_priv, args->metric_group_mask)) { 56 - ivpu_err(vdev, "Instance already exists (mask %#llx)\n", args->metric_group_mask); 55 + ivpu_dbg(vdev, IOCTL, "Instance already exists (mask %#llx)\n", 56 + args->metric_group_mask); 57 57 ret = -EALREADY; 58 58 goto unlock; 59 59 } ··· 71 69 if (ret) 72 70 goto err_free_ms; 73 71 74 - single_buff_size = sample_size * 75 - ((u64)args->read_period_samples * MS_READ_PERIOD_MULTIPLIER); 76 - ms->bo = ivpu_bo_create_global(vdev, PAGE_ALIGN(single_buff_size * MS_NUM_BUFFERS), 77 - DRM_IVPU_BO_CACHED | DRM_IVPU_BO_MAPPABLE); 72 + buf_size = PAGE_ALIGN((u64)args->read_period_samples * sample_size * 73 + MS_READ_PERIOD_MULTIPLIER * MS_NUM_BUFFERS); 74 + if (buf_size > ivpu_hw_range_size(&vdev->hw->ranges.global)) { 75 + ivpu_dbg(vdev, IOCTL, "Requested MS buffer size %llu exceeds range size %llu\n", 76 + buf_size, ivpu_hw_range_size(&vdev->hw->ranges.global)); 77 + ret = -EINVAL; 78 + goto err_free_ms; 79 + } 80 + 81 + ms->bo = ivpu_bo_create_global(vdev, buf_size, DRM_IVPU_BO_CACHED | DRM_IVPU_BO_MAPPABLE); 78 82 if (!ms->bo) { 79 - ivpu_err(vdev, "Failed to allocate MS buffer (size %llu)\n", single_buff_size); 83 + ivpu_dbg(vdev, IOCTL, "Failed to allocate MS buffer (size %llu)\n", buf_size); 80 84 ret = -ENOMEM; 81 85 goto err_free_ms; 82 86 } ··· 183 175 184 176 ms = get_instance_by_mask(file_priv, args->metric_group_mask); 185 177 if (!ms) { 186 - ivpu_err(vdev, "Instance 
doesn't exist for mask: %#llx\n", args->metric_group_mask); 178 + ivpu_dbg(vdev, IOCTL, "Instance doesn't exist for mask: %#llx\n", 179 + args->metric_group_mask); 187 180 ret = -EINVAL; 188 181 goto unlock; 189 182 }
+3 -4
drivers/accel/ivpu/ivpu_pm.c
··· 186 186 if (atomic_cmpxchg(&vdev->pm->reset_pending, 0, 1) == 0) { 187 187 ivpu_hw_diagnose_failure(vdev); 188 188 ivpu_hw_irq_disable(vdev); /* Disable IRQ early to protect from IRQ storm */ 189 - queue_work(system_unbound_wq, &vdev->pm->recovery_work); 189 + queue_work(system_dfl_wq, &vdev->pm->recovery_work); 190 190 } 191 191 } 192 192 ··· 226 226 unsigned long timeout_ms = ivpu_tdr_timeout_ms ? ivpu_tdr_timeout_ms : vdev->timeout.tdr; 227 227 228 228 /* No-op if already queued */ 229 - queue_delayed_work(system_wq, &vdev->pm->job_timeout_work, msecs_to_jiffies(timeout_ms)); 229 + queue_delayed_work(system_percpu_wq, &vdev->pm->job_timeout_work, 230 + msecs_to_jiffies(timeout_ms)); 230 231 } 231 232 232 233 void ivpu_stop_job_timeout_detection(struct ivpu_device *vdev) ··· 360 359 361 360 void ivpu_rpm_put(struct ivpu_device *vdev) 362 361 { 363 - pm_runtime_mark_last_busy(vdev->drm.dev); 364 362 pm_runtime_put_autosuspend(vdev->drm.dev); 365 363 } 366 364 ··· 428 428 struct device *dev = vdev->drm.dev; 429 429 430 430 pm_runtime_allow(dev); 431 - pm_runtime_mark_last_busy(dev); 432 431 pm_runtime_put_autosuspend(dev); 433 432 } 434 433
+1
drivers/accel/rocket/rocket_gem.c
··· 2 2 /* Copyright 2024-2025 Tomeu Vizoso <tomeu@tomeuvizoso.net> */ 3 3 4 4 #include <drm/drm_device.h> 5 + #include <drm/drm_print.h> 5 6 #include <drm/drm_utils.h> 6 7 #include <drm/rocket_accel.h> 7 8 #include <linux/dma-mapping.h>
+1
drivers/gpu/drm/adp/adp_drv.c
··· 16 16 #include <drm/drm_gem_dma_helper.h> 17 17 #include <drm/drm_gem_framebuffer_helper.h> 18 18 #include <drm/drm_of.h> 19 + #include <drm/drm_print.h> 19 20 #include <drm/drm_probe_helper.h> 20 21 #include <drm/drm_vblank.h> 21 22
+6 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
··· 1798 1798 for (i = 0; i < adev->gmc.num_mem_partitions; i++) { 1799 1799 ttm_pool_init(&adev->mman.ttm_pools[i], adev->dev, 1800 1800 adev->gmc.mem_partitions[i].numa.node, 1801 - false, false); 1801 + TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M))); 1802 1802 } 1803 1803 return 0; 1804 1804 } ··· 1891 1891 r = ttm_device_init(&adev->mman.bdev, &amdgpu_bo_driver, adev->dev, 1892 1892 adev_to_drm(adev)->anon_inode->i_mapping, 1893 1893 adev_to_drm(adev)->vma_offset_manager, 1894 - adev->need_swiotlb, 1895 - dma_addressing_limited(adev->dev)); 1894 + (adev->need_swiotlb ? 1895 + TTM_ALLOCATION_POOL_USE_DMA_ALLOC : 0) | 1896 + (dma_addressing_limited(adev->dev) ? 1897 + TTM_ALLOCATION_POOL_USE_DMA32 : 0) | 1898 + TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(get_order(SZ_2M))); 1896 1899 if (r) { 1897 1900 dev_err(adev->dev, 1898 1901 "failed initializing buffer object driver(%d).\n", r);
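The amdgpu hunk replaces ttm_device_init()'s boolean use_dma_alloc/use_dma32 parameters with a single flags word that also encodes a beneficial allocation order (order 9 corresponds to 2 MiB with 4 KiB pages). A small sketch of that flag-composition pattern, with made-up bit positions rather than the real TTM_ALLOCATION_POOL_* values:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins: two condition bits replace the old positional
 * bools, and the low byte carries the beneficial order. */
#define DEMO_POOL_USE_DMA_ALLOC	(1u << 8)
#define DEMO_POOL_USE_DMA32	(1u << 9)
#define DEMO_POOL_ORDER_MASK	0xffu
#define DEMO_POOL_ORDER(o)	((uint32_t)(o) & DEMO_POOL_ORDER_MASK)

/* Builds the flags word the way the hunk does: each condition
 * contributes a named bit instead of a positional bool argument. */
static uint32_t demo_pool_flags(bool need_swiotlb, bool dma_limited,
				unsigned int order)
{
	return (need_swiotlb ? DEMO_POOL_USE_DMA_ALLOC : 0) |
	       (dma_limited ? DEMO_POOL_USE_DMA32 : 0) |
	       DEMO_POOL_ORDER(order);
}
```

A flags word scales better than a growing list of bools: call sites stay readable, and new options (like the max pool size mentioned in the changelog) can be added without touching every caller's argument order.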
+1
drivers/gpu/drm/arm/display/komeda/komeda_framebuffer.c
··· 9 9 #include <drm/drm_gem.h> 10 10 #include <drm/drm_gem_dma_helper.h> 11 11 #include <drm/drm_gem_framebuffer_helper.h> 12 + #include <drm/drm_print.h> 12 13 13 14 #include "komeda_framebuffer.h" 14 15 #include "komeda_dev.h"
+1
drivers/gpu/drm/arm/hdlcd_crtc.c
··· 22 22 #include <drm/drm_framebuffer.h> 23 23 #include <drm/drm_gem_dma_helper.h> 24 24 #include <drm/drm_of.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_probe_helper.h> 26 27 #include <drm/drm_vblank.h> 27 28
+1
drivers/gpu/drm/arm/hdlcd_drv.c
··· 33 33 #include <drm/drm_modeset_helper.h> 34 34 #include <drm/drm_module.h> 35 35 #include <drm/drm_of.h> 36 + #include <drm/drm_print.h> 36 37 #include <drm/drm_probe_helper.h> 37 38 #include <drm/drm_vblank.h> 38 39
+1
drivers/gpu/drm/arm/malidp_drv.c
··· 29 29 #include <drm/drm_modeset_helper.h> 30 30 #include <drm/drm_module.h> 31 31 #include <drm/drm_of.h> 32 + #include <drm/drm_print.h> 32 33 #include <drm/drm_probe_helper.h> 33 34 #include <drm/drm_vblank.h> 34 35
+1
drivers/gpu/drm/arm/malidp_mw.c
··· 14 14 #include <drm/drm_fourcc.h> 15 15 #include <drm/drm_framebuffer.h> 16 16 #include <drm/drm_gem_dma_helper.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_probe_helper.h> 18 19 #include <drm/drm_writeback.h> 19 20
+1
drivers/gpu/drm/armada/armada_crtc.c
··· 13 13 14 14 #include <drm/drm_atomic.h> 15 15 #include <drm/drm_atomic_helper.h> 16 + #include <drm/drm_print.h> 16 17 #include <drm/drm_probe_helper.h> 17 18 #include <drm/drm_vblank.h> 18 19
+1
drivers/gpu/drm/armada/armada_debugfs.c
··· 12 12 13 13 #include <drm/drm_debugfs.h> 14 14 #include <drm/drm_file.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "armada_crtc.h" 17 18 #include "armada_drm.h"
+1
drivers/gpu/drm/armada/armada_fb.c
··· 6 6 #include <drm/drm_modeset_helper.h> 7 7 #include <drm/drm_fourcc.h> 8 8 #include <drm/drm_gem_framebuffer_helper.h> 9 + #include <drm/drm_print.h> 9 10 10 11 #include "armada_drm.h" 11 12 #include "armada_fb.h"
+1
drivers/gpu/drm/armada/armada_fbdev.c
··· 13 13 #include <drm/drm_drv.h> 14 14 #include <drm/drm_fb_helper.h> 15 15 #include <drm/drm_fourcc.h> 16 + #include <drm/drm_print.h> 16 17 17 18 #include "armada_crtc.h" 18 19 #include "armada_drm.h"
+1
drivers/gpu/drm/armada/armada_gem.c
··· 10 10 11 11 #include <drm/armada_drm.h> 12 12 #include <drm/drm_prime.h> 13 + #include <drm/drm_print.h> 13 14 14 15 #include "armada_drm.h" 15 16 #include "armada_gem.h"
+1
drivers/gpu/drm/armada/armada_overlay.c
··· 12 12 #include <drm/drm_atomic_uapi.h> 13 13 #include <drm/drm_fourcc.h> 14 14 #include <drm/drm_plane_helper.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "armada_crtc.h" 17 18 #include "armada_drm.h"
+1
drivers/gpu/drm/armada/armada_plane.c
··· 8 8 #include <drm/drm_atomic_helper.h> 9 9 #include <drm/drm_fourcc.h> 10 10 #include <drm/drm_plane_helper.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "armada_crtc.h" 13 14 #include "armada_drm.h"
+1
drivers/gpu/drm/ast/ast_mode.c
··· 43 43 #include <drm/drm_gem_shmem_helper.h> 44 44 #include <drm/drm_managed.h> 45 45 #include <drm/drm_panic.h> 46 + #include <drm/drm_print.h> 46 47 #include <drm/drm_probe_helper.h> 47 48 48 49 #include "ast_drv.h"
+1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_dc.c
··· 25 25 #include <drm/drm_gem_dma_helper.h> 26 26 #include <drm/drm_gem_framebuffer_helper.h> 27 27 #include <drm/drm_module.h> 28 + #include <drm/drm_print.h> 28 29 #include <drm/drm_probe_helper.h> 29 30 #include <drm/drm_vblank.h> 30 31
+1
drivers/gpu/drm/atmel-hlcdc/atmel_hlcdc_plane.c
··· 16 16 #include <drm/drm_fourcc.h> 17 17 #include <drm/drm_framebuffer.h> 18 18 #include <drm/drm_gem_dma_helper.h> 19 + #include <drm/drm_print.h> 19 20 20 21 #include "atmel_hlcdc_dc.h" 21 22
+2
drivers/gpu/drm/bridge/synopsys/dw-dp.c
··· 2049 2049 bridge->type = DRM_MODE_CONNECTOR_DisplayPort; 2050 2050 bridge->ycbcr_420_allowed = true; 2051 2051 2052 + devm_drm_bridge_add(dev, bridge); 2053 + 2052 2054 dp->aux.dev = dev; 2053 2055 dp->aux.drm_dev = encoder->dev; 2054 2056 dp->aux.name = dev_name(dev);
+5 -5
drivers/gpu/drm/clients/drm_log.c
··· 100 100 return; 101 101 iosys_map_memset(&map, r.y1 * fb->pitches[0], 0, height * fb->pitches[0]); 102 102 drm_client_buffer_vunmap_local(scanout->buffer); 103 - drm_client_framebuffer_flush(scanout->buffer, &r); 103 + drm_client_buffer_flush(scanout->buffer, &r); 104 104 } 105 105 106 106 static void drm_log_draw_line(struct drm_log_scanout *scanout, const char *s, ··· 133 133 if (scanout->line >= scanout->rows) 134 134 scanout->line = 0; 135 135 drm_client_buffer_vunmap_local(scanout->buffer); 136 - drm_client_framebuffer_flush(scanout->buffer, &r); 136 + drm_client_buffer_flush(scanout->buffer, &r); 137 137 } 138 138 139 139 static void drm_log_draw_new_line(struct drm_log_scanout *scanout, ··· 204 204 if (format == DRM_FORMAT_INVALID) 205 205 return -EINVAL; 206 206 207 - scanout->buffer = drm_client_framebuffer_create(client, width, height, format); 207 + scanout->buffer = drm_client_buffer_create_dumb(client, width, height, format); 208 208 if (IS_ERR(scanout->buffer)) { 209 209 drm_warn(client->dev, "drm_log can't create framebuffer %d %d %p4cc\n", 210 210 width, height, &format); ··· 272 272 273 273 err_failed_commit: 274 274 for (i = 0; i < n_modeset; i++) 275 - drm_client_framebuffer_delete(dlog->scanout[i].buffer); 275 + drm_client_buffer_delete(dlog->scanout[i].buffer); 276 276 277 277 err_nomodeset: 278 278 kfree(dlog->scanout); ··· 286 286 287 287 if (dlog->n_scanout) { 288 288 for (i = 0; i < dlog->n_scanout; i++) 289 - drm_client_framebuffer_delete(dlog->scanout[i].buffer); 289 + drm_client_buffer_delete(dlog->scanout[i].buffer); 290 290 dlog->n_scanout = 0; 291 291 kfree(dlog->scanout); 292 292 dlog->scanout = NULL;
+39 -53
drivers/gpu/drm/display/drm_bridge_connector.c
··· 652 652 struct drm_bridge_connector *bridge_connector; 653 653 struct drm_connector *connector; 654 654 struct i2c_adapter *ddc = NULL; 655 - struct drm_bridge *panel_bridge __free(drm_bridge_put) = NULL; 656 - struct drm_bridge *bridge_edid __free(drm_bridge_put) = NULL; 657 - struct drm_bridge *bridge_hpd __free(drm_bridge_put) = NULL; 658 - struct drm_bridge *bridge_detect __free(drm_bridge_put) = NULL; 659 - struct drm_bridge *bridge_modes __free(drm_bridge_put) = NULL; 660 - struct drm_bridge *bridge_hdmi __free(drm_bridge_put) = NULL; 661 - struct drm_bridge *bridge_hdmi_audio __free(drm_bridge_put) = NULL; 662 - struct drm_bridge *bridge_dp_audio __free(drm_bridge_put) = NULL; 663 - struct drm_bridge *bridge_hdmi_cec __free(drm_bridge_put) = NULL; 655 + struct drm_bridge *panel_bridge __free(drm_bridge_put) = NULL; 664 656 unsigned int supported_formats = BIT(HDMI_COLORSPACE_RGB); 665 657 unsigned int max_bpc = 8; 666 658 bool support_hdcp = false; ··· 691 699 connector->ycbcr_420_allowed = false; 692 700 693 701 if (bridge->ops & DRM_BRIDGE_OP_EDID) { 694 - drm_bridge_put(bridge_edid); 695 - bridge_edid = drm_bridge_get(bridge); 702 + drm_bridge_put(bridge_connector->bridge_edid); 703 + bridge_connector->bridge_edid = drm_bridge_get(bridge); 696 704 } 697 705 if (bridge->ops & DRM_BRIDGE_OP_HPD) { 698 - drm_bridge_put(bridge_hpd); 699 - bridge_hpd = drm_bridge_get(bridge); 706 + drm_bridge_put(bridge_connector->bridge_hpd); 707 + bridge_connector->bridge_hpd = drm_bridge_get(bridge); 700 708 } 701 709 if (bridge->ops & DRM_BRIDGE_OP_DETECT) { 702 - drm_bridge_put(bridge_detect); 703 - bridge_detect = drm_bridge_get(bridge); 710 + drm_bridge_put(bridge_connector->bridge_detect); 711 + bridge_connector->bridge_detect = drm_bridge_get(bridge); 704 712 } 705 713 if (bridge->ops & DRM_BRIDGE_OP_MODES) { 706 - drm_bridge_put(bridge_modes); 707 - bridge_modes = drm_bridge_get(bridge); 714 + drm_bridge_put(bridge_connector->bridge_modes); 715 + bridge_connector->bridge_modes = drm_bridge_get(bridge); 708 716 } 709 717 if (bridge->ops & DRM_BRIDGE_OP_HDMI) { 710 - if (bridge_hdmi) 718 + if (bridge_connector->bridge_hdmi) 711 719 return ERR_PTR(-EBUSY); 712 720 if (!bridge->funcs->hdmi_write_infoframe || 713 721 !bridge->funcs->hdmi_clear_infoframe) 714 722 return ERR_PTR(-EINVAL); 715 723 716 - bridge_hdmi = drm_bridge_get(bridge); 724 + bridge_connector->bridge_hdmi = drm_bridge_get(bridge); 717 725 718 726 if (bridge->supported_formats) 719 727 supported_formats = bridge->supported_formats; ··· 722 730 } 723 731 724 732 if (bridge->ops & DRM_BRIDGE_OP_HDMI_AUDIO) { 725 - if (bridge_hdmi_audio) 733 + if (bridge_connector->bridge_hdmi_audio) 726 734 return ERR_PTR(-EBUSY); 727 735 728 - if (bridge_dp_audio) 736 + if (bridge_connector->bridge_dp_audio) 729 737 return ERR_PTR(-EBUSY); 730 738 731 739 if (!bridge->hdmi_audio_max_i2s_playback_channels && ··· 736 744 !bridge->funcs->hdmi_audio_shutdown) 737 745 return ERR_PTR(-EINVAL); 738 746 739 - bridge_hdmi_audio = drm_bridge_get(bridge); 747 + bridge_connector->bridge_hdmi_audio = drm_bridge_get(bridge); 740 748 } 741 749 742 750 if (bridge->ops & DRM_BRIDGE_OP_DP_AUDIO) { 743 - if (bridge_dp_audio) 751 + if (bridge_connector->bridge_dp_audio) 744 752 return ERR_PTR(-EBUSY); 745 753 746 - if (bridge_hdmi_audio) 754 + if (bridge_connector->bridge_hdmi_audio) 747 755 return ERR_PTR(-EBUSY); 748 756 749 757 if (!bridge->hdmi_audio_max_i2s_playback_channels && ··· 754 762 !bridge->funcs->dp_audio_shutdown) 755 763 return ERR_PTR(-EINVAL); 756 764 757 - bridge_dp_audio = drm_bridge_get(bridge); 765 + bridge_connector->bridge_dp_audio = drm_bridge_get(bridge); 758 766 } 759 767 760 768 if (bridge->ops & DRM_BRIDGE_OP_HDMI_CEC_NOTIFIER) { 761 769 if (bridge_connector->bridge_hdmi_cec) 762 770 return ERR_PTR(-EBUSY); 763 771 764 - bridge_connector->bridge_hdmi_cec = bridge; 772 + bridge_connector->bridge_hdmi_cec = drm_bridge_get(bridge); 765 773 } 766 774 767 775 if (bridge->ops & DRM_BRIDGE_OP_HDMI_CEC_ADAPTER) { 768 - if (bridge_hdmi_cec) 776 + if (bridge_connector->bridge_hdmi_cec) 769 777 return ERR_PTR(-EBUSY); 770 778 771 - bridge_hdmi_cec = drm_bridge_get(bridge); 779 + bridge_connector->bridge_hdmi_cec = drm_bridge_get(bridge); 772 780 773 781 if (!bridge->funcs->hdmi_cec_enable || 774 782 !bridge->funcs->hdmi_cec_log_addr || ··· 787 795 if (bridge->ddc) 788 796 ddc = bridge->ddc; 789 797 790 - if (drm_bridge_is_panel(bridge)) 798 + if (drm_bridge_is_panel(bridge)) { 799 + drm_bridge_put(panel_bridge); 791 800 panel_bridge = drm_bridge_get(bridge); 801 + } 792 802 793 803 if (bridge->support_hdcp) 794 804 support_hdcp = true; ··· 799 805 if (connector_type == DRM_MODE_CONNECTOR_Unknown) 800 806 return ERR_PTR(-EINVAL); 801 807 802 - if (bridge_hdmi) { 808 + if (bridge_connector->bridge_hdmi) { 803 809 if (!connector->ycbcr_420_allowed) 804 810 supported_formats &= ~BIT(HDMI_COLORSPACE_YUV420); 805 811 806 812 ret = drmm_connector_hdmi_init(drm, connector, 807 - bridge_hdmi->vendor, 808 - bridge_hdmi->product, 813 + bridge_connector->bridge_hdmi->vendor, 814 + bridge_connector->bridge_hdmi->product, 809 815 &drm_bridge_connector_funcs, 810 816 &drm_bridge_connector_hdmi_funcs, 811 817 connector_type, ddc, ··· 821 827 return ERR_PTR(ret); 822 828 } 823 829 824 - if (bridge_hdmi_audio || bridge_dp_audio) { 830 + if (bridge_connector->bridge_hdmi_audio || 831 + bridge_connector->bridge_dp_audio) { 825 832 struct device *dev; 826 833 struct drm_bridge *bridge; 827 834 828 - if (bridge_hdmi_audio) 829 - bridge = bridge_hdmi_audio; 835 + if (bridge_connector->bridge_hdmi_audio) 836 + bridge = bridge_connector->bridge_hdmi_audio; 830 837 else 831 - bridge = bridge_dp_audio; 838 + bridge = bridge_connector->bridge_dp_audio; 832 839 833 840 dev = bridge->hdmi_audio_dev; 834 841 ··· 843 848 return ERR_PTR(ret); 844 849 } 845 850 846 - if (bridge_hdmi_cec && 847 - bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_NOTIFIER) { 848 - struct drm_bridge *bridge = bridge_hdmi_cec; 851 + if (bridge_connector->bridge_hdmi_cec && 852 + bridge_connector->bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_NOTIFIER) { 853 + struct drm_bridge *bridge = bridge_connector->bridge_hdmi_cec; 849 854 850 855 ret = drmm_connector_hdmi_cec_notifier_register(connector, 851 856 NULL, ··· 854 859 return ERR_PTR(ret); 855 860 } 856 861 857 - if (bridge_hdmi_cec && 858 - bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_ADAPTER) { 859 - struct drm_bridge *bridge = bridge_hdmi_cec; 862 + if (bridge_connector->bridge_hdmi_cec && 863 + bridge_connector->bridge_hdmi_cec->ops & DRM_BRIDGE_OP_HDMI_CEC_ADAPTER) { 864 + struct drm_bridge *bridge = bridge_connector->bridge_hdmi_cec; 860 865 861 866 ret = drmm_connector_hdmi_cec_register(connector, 862 867 &drm_bridge_connector_hdmi_cec_funcs, ··· 869 874 870 875 drm_connector_helper_add(connector, &drm_bridge_connector_helper_funcs); 871 876 872 - if (bridge_hpd) 877 + if (bridge_connector->bridge_hpd) 873 878 connector->polled = DRM_CONNECTOR_POLL_HPD; 874 - else if (bridge_detect) 879 + else if (bridge_connector->bridge_detect) 875 880 connector->polled = DRM_CONNECTOR_POLL_CONNECT 876 881 | DRM_CONNECTOR_POLL_DISCONNECT; 877 882 ··· 881 886 if (support_hdcp && IS_REACHABLE(CONFIG_DRM_DISPLAY_HELPER) && 882 887 IS_ENABLED(CONFIG_DRM_DISPLAY_HDCP_HELPER)) 883 888 drm_connector_attach_content_protection_property(connector, true); 884 - 885 - bridge_connector->bridge_edid = drm_bridge_get(bridge_edid); 886 - bridge_connector->bridge_hpd = drm_bridge_get(bridge_hpd); 887 - bridge_connector->bridge_detect = drm_bridge_get(bridge_detect); 888 - bridge_connector->bridge_modes = drm_bridge_get(bridge_modes); 889 - bridge_connector->bridge_hdmi = drm_bridge_get(bridge_hdmi); 890 - bridge_connector->bridge_hdmi_audio = drm_bridge_get(bridge_hdmi_audio); 891 - bridge_connector->bridge_dp_audio = drm_bridge_get(bridge_dp_audio); 892 - bridge_connector->bridge_hdmi_cec = drm_bridge_get(bridge_hdmi_cec); 893 889 894 890 return connector; 895 891 }
+10
drivers/gpu/drm/drm_atomic.c
··· 200 200 201 201 drm_dbg_atomic(dev, "Clearing atomic state %p\n", state); 202 202 203 + state->checked = false; 204 + 203 205 for (i = 0; i < state->num_connector; i++) { 204 206 struct drm_connector *connector = state->connectors[i].ptr; 205 207 ··· 350 348 struct drm_crtc_state *crtc_state; 351 349 352 350 WARN_ON(!state->acquire_ctx); 351 + drm_WARN_ON(state->dev, state->checked); 353 352 354 353 crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 355 354 if (crtc_state) ··· 531 528 struct drm_plane_state *plane_state; 532 529 533 530 WARN_ON(!state->acquire_ctx); 531 + drm_WARN_ON(state->dev, state->checked); 534 532 535 533 /* the legacy pointers should never be set */ 536 534 WARN_ON(plane->fb); ··· 840 836 struct __drm_private_objs_state *arr; 841 837 struct drm_private_state *obj_state; 842 838 839 + WARN_ON(!state->acquire_ctx); 840 + drm_WARN_ON(state->dev, state->checked); 841 + 843 842 obj_state = drm_atomic_get_new_private_obj_state(state, obj); 844 843 if (obj_state) 845 844 return obj_state; ··· 1136 1129 struct drm_connector_state *connector_state; 1137 1130 1138 1131 WARN_ON(!state->acquire_ctx); 1132 + drm_WARN_ON(state->dev, state->checked); 1139 1133 1140 1134 ret = drm_modeset_lock(&config->connection_mutex, state->acquire_ctx); 1141 1135 if (ret) ··· 1548 1540 WARN(!state->allow_modeset, "adding CRTC not allowed without modesets: requested 0x%x, affected 0x%0x\n", 1549 1541 requested_crtc, affected_crtc); 1550 1542 } 1543 + 1544 + state->checked = true; 1551 1545 1552 1546 return 0; 1553 1547 }
+9
drivers/gpu/drm/drm_bridge.c
··· 422 422 * If non-NULL the previous bridge must be already attached by a call to this 423 423 * function. 424 424 * 425 + * The bridge to be attached must have been previously added by 426 + * drm_bridge_add(). 427 + * 425 428 * Note that bridges attached to encoders are auto-detached during encoder 426 429 * cleanup in drm_encoder_cleanup(), so drm_bridge_attach() should generally 427 430 * *not* be balanced with a drm_bridge_detach() in driver code. ··· 440 437 441 438 if (!encoder || !bridge) 442 439 return -EINVAL; 440 + 441 + if (!bridge->container) 442 + DRM_WARN("DRM bridge corrupted or not allocated by devm_drm_bridge_alloc()\n"); 443 + 444 + if (list_empty(&bridge->list)) 445 + DRM_WARN("Missing drm_bridge_add() before attach\n"); 443 446 444 447 drm_bridge_get(bridge); 445 448
+1
drivers/gpu/drm/drm_buddy.c
··· 11 11 #include <linux/sizes.h> 12 12 13 13 #include <drm/drm_buddy.h> 14 + #include <drm/drm_print.h> 14 15 15 16 enum drm_buddy_free_tree { 16 17 DRM_BUDDY_CLEAR_TREE = 0,
+92 -103
drivers/gpu/drm/drm_client.c
··· 17 17 #include <drm/drm_fourcc.h> 18 18 #include <drm/drm_framebuffer.h> 19 19 #include <drm/drm_gem.h> 20 + #include <drm/drm_gem_framebuffer_helper.h> 20 21 #include <drm/drm_mode.h> 21 22 #include <drm/drm_print.h> 22 23 ··· 177 176 } 178 177 EXPORT_SYMBOL(drm_client_release); 179 178 180 - static void drm_client_buffer_delete(struct drm_client_buffer *buffer) 179 + /** 180 + * drm_client_buffer_delete - Delete a client buffer 181 + * @buffer: DRM client buffer 182 + */ 183 + void drm_client_buffer_delete(struct drm_client_buffer *buffer) 181 184 { 182 - if (buffer->gem) { 183 - drm_gem_vunmap(buffer->gem, &buffer->map); 184 - drm_gem_object_put(buffer->gem); 185 - } 185 + struct drm_gem_object *gem; 186 + int ret; 187 + 188 + if (!buffer) 189 + return; 190 + 191 + gem = buffer->fb->obj[0]; 192 + drm_gem_vunmap(gem, &buffer->map); 193 + 194 + ret = drm_mode_rmfb(buffer->client->dev, buffer->fb->base.id, buffer->client->file); 195 + if (ret) 196 + drm_err(buffer->client->dev, 197 + "Error removing FB:%u (%d)\n", buffer->fb->base.id, ret); 198 + 199 + drm_gem_object_put(buffer->gem); 186 200 187 201 kfree(buffer); 188 202 } 203 + EXPORT_SYMBOL(drm_client_buffer_delete); 189 204 190 205 static struct drm_client_buffer * 191 206 drm_client_buffer_create(struct drm_client_dev *client, u32 width, u32 height, 192 - u32 format, u32 *handle) 207 + u32 format, u32 handle, u32 pitch) 193 208 { 194 - const struct drm_format_info *info = drm_format_info(format); 195 - struct drm_mode_create_dumb dumb_args = { }; 209 + struct drm_mode_fb_cmd2 fb_req = { 210 + .width = width, 211 + .height = height, 212 + .pixel_format = format, 213 + .handles = { 214 + handle, 215 + }, 216 + .pitches = { 217 + pitch, 218 + }, 219 + }; 196 220 struct drm_device *dev = client->dev; 197 221 struct drm_client_buffer *buffer; 198 222 struct drm_gem_object *obj; 223 + struct drm_framebuffer *fb; 199 224 int ret; 200 225 201 226 buffer = kzalloc(sizeof(*buffer), GFP_KERNEL); ··· 230 203 231 204 buffer->client = client; 232 205 233 - dumb_args.width = width; 234 - dumb_args.height = height; 235 - dumb_args.bpp = drm_format_info_bpp(info, 0); 236 - ret = drm_mode_create_dumb(dev, &dumb_args, client->file); 237 - if (ret) 238 - goto err_delete; 239 - 240 - obj = drm_gem_object_lookup(client->file, dumb_args.handle); 206 + obj = drm_gem_object_lookup(client->file, handle); 241 207 if (!obj) { 242 208 ret = -ENOENT; 243 209 goto err_delete; 244 210 } 245 211 246 - buffer->pitch = dumb_args.pitch; 212 + ret = drm_mode_addfb2(dev, &fb_req, client->file); 213 + if (ret) 214 + goto err_drm_gem_object_put; 215 + 216 + fb = drm_framebuffer_lookup(dev, client->file, fb_req.fb_id); 217 + if (drm_WARN_ON(dev, !fb)) { 218 + ret = -ENOENT; 219 + goto err_drm_mode_rmfb; 220 + } 221 + 222 + /* drop the reference we picked up in framebuffer lookup */ 223 + drm_framebuffer_put(fb); 224 + 225 + strscpy(fb->comm, client->name, TASK_COMM_LEN); 226 + 247 227 buffer->gem = obj; 228 + buffer->fb = fb; 249 229 250 230 return buffer; 251 231 232 + err_drm_mode_rmfb: 233 + drm_mode_rmfb(dev, fb_req.fb_id, client->file); 234 + err_drm_gem_object_put: 235 + drm_gem_object_put(obj); 252 236 err_delete: 253 - drm_client_buffer_delete(buffer); 254 - 237 + kfree(buffer); 255 238 return ERR_PTR(ret); 256 239 } 257 240 ··· 288 251 int drm_client_buffer_vmap_local(struct drm_client_buffer *buffer, 289 252 struct iosys_map *map_copy) 290 253 { 291 - struct drm_gem_object *gem = buffer->gem; 254 + struct drm_gem_object *gem = buffer->fb->obj[0]; 292 255 struct iosys_map *map = &buffer->map; 293 256 int ret; 294 257 ··· 317 280 */ 318 281 void drm_client_buffer_vunmap_local(struct drm_client_buffer *buffer) 319 282 { 320 - struct drm_gem_object *gem = buffer->gem; 283 + struct drm_gem_object *gem = buffer->fb->obj[0]; 321 284 struct iosys_map *map = &buffer->map; 322 285 323 286 drm_gem_vunmap_locked(gem, map); ··· 348 311 int drm_client_buffer_vmap(struct drm_client_buffer *buffer, 349 312 struct iosys_map *map_copy) 350 313 { 314 + struct drm_gem_object *gem = buffer->fb->obj[0]; 351 315 int ret; 352 316 353 - ret = drm_gem_vmap(buffer->gem, &buffer->map); 317 + ret = drm_gem_vmap(gem, &buffer->map); 354 318 if (ret) 355 319 return ret; 356 320 *map_copy = buffer->map; ··· 370 332 */ 371 333 void drm_client_buffer_vunmap(struct drm_client_buffer *buffer) 372 334 { 373 - drm_gem_vunmap(buffer->gem, &buffer->map); 335 + struct drm_gem_object *gem = buffer->fb->obj[0]; 336 + 337 + drm_gem_vunmap(gem, &buffer->map); 374 338 } 375 339 EXPORT_SYMBOL(drm_client_buffer_vunmap); 376 340 377 - static void drm_client_buffer_rmfb(struct drm_client_buffer *buffer) 378 - { 379 - int ret; 380 - 381 - if (!buffer->fb) 382 - return; 383 - 384 - ret = drm_mode_rmfb(buffer->client->dev, buffer->fb->base.id, buffer->client->file); 385 - if (ret) 386 - drm_err(buffer->client->dev, 387 - "Error removing FB:%u (%d)\n", buffer->fb->base.id, ret); 388 - 389 - buffer->fb = NULL; 390 - } 391 - 392 - static int drm_client_buffer_addfb(struct drm_client_buffer *buffer, 393 - u32 width, u32 height, u32 format, 394 - u32 handle) 395 - { 396 - struct drm_client_dev *client = buffer->client; 397 - struct drm_mode_fb_cmd2 fb_req = { }; 398 - int ret; 399 - 400 - fb_req.width = width; 401 - fb_req.height = height; 402 - fb_req.pixel_format = format; 403 - fb_req.handles[0] = handle; 404 - fb_req.pitches[0] = buffer->pitch; 405 - 406 - ret = drm_mode_addfb2(client->dev, &fb_req, client->file); 407 - if (ret) 408 - return ret; 409 - 410 - buffer->fb = drm_framebuffer_lookup(client->dev, buffer->client->file, fb_req.fb_id); 411 - if (WARN_ON(!buffer->fb)) 412 - return -ENOENT; 413 - 414 - /* drop the reference we picked up in framebuffer lookup */ 415 - drm_framebuffer_put(buffer->fb); 416 - 417 - strscpy(buffer->fb->comm, client->name, TASK_COMM_LEN); 418 - 419 - return 0; 420 - } 421 - 422 341 /** 423 - * drm_client_framebuffer_create - Create a client framebuffer 342 + * drm_client_buffer_create_dumb - Create a client buffer backed by a dumb buffer 424 343 * @client: DRM client 425 344 * @width: Framebuffer width 426 345 * @height: Framebuffer height ··· 385 390 * 386 391 * This function creates a &drm_client_buffer which consists of a 387 392 * &drm_framebuffer backed by a dumb buffer. 388 - * Call drm_client_framebuffer_delete() to free the buffer. 393 + * Call drm_client_buffer_delete() to free the buffer. 389 394 * 390 395 * Returns: 391 396 * Pointer to a client buffer or an error pointer on failure. 392 397 */ 393 398 struct drm_client_buffer * 394 - drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format) 399 + drm_client_buffer_create_dumb(struct drm_client_dev *client, u32 width, u32 height, u32 format) 395 400 { 401 + const struct drm_format_info *info = drm_format_info(format); 402 + struct drm_device *dev = client->dev; 403 + struct drm_mode_create_dumb dumb_args = { }; 396 404 struct drm_client_buffer *buffer; 397 - u32 handle; 398 405 int ret; 399 406 400 - buffer = drm_client_buffer_create(client, width, height, format, 401 - &handle); 402 - if (IS_ERR(buffer)) 403 - return buffer; 407 + dumb_args.width = width; 408 + dumb_args.height = height; 409 + dumb_args.bpp = drm_format_info_bpp(info, 0); 410 + ret = drm_mode_create_dumb(dev, &dumb_args, client->file); 411 + if (ret) 412 + return ERR_PTR(ret); 404 413 405 - ret = drm_client_buffer_addfb(buffer, width, height, format, handle); 414 + buffer = drm_client_buffer_create(client, width, height, format, 415 + dumb_args.handle, dumb_args.pitch); 416 + if (IS_ERR(buffer)) { 417 + ret = PTR_ERR(buffer); 418 + goto err_drm_mode_destroy_dumb; 419 + } 406 420 407 421 /* 408 422 * The handle is only needed for creating the framebuffer, destroy it ··· 419 415 * object as DMA-buf. The framebuffer and our buffer structure are still 420 416 * holding references to the GEM object to prevent its destruction. 421 417 */ 422 - drm_mode_destroy_dumb(client->dev, handle, client->file); 423 - 424 - if (ret) { 425 - drm_client_buffer_delete(buffer); 426 - return ERR_PTR(ret); 427 - } 418 + drm_mode_destroy_dumb(client->dev, dumb_args.handle, client->file); 428 419 429 420 return buffer; 421 + 422 + err_drm_mode_destroy_dumb: 423 + drm_mode_destroy_dumb(client->dev, dumb_args.handle, client->file); 424 + return ERR_PTR(ret); 430 425 } 431 - EXPORT_SYMBOL(drm_client_framebuffer_create); 426 + EXPORT_SYMBOL(drm_client_buffer_create_dumb); 432 427 433 428 /** 434 - * drm_client_framebuffer_delete - Delete a client framebuffer 435 - * @buffer: DRM client buffer (can be NULL) 436 - */ 437 - void drm_client_framebuffer_delete(struct drm_client_buffer *buffer) 438 - { 439 - if (!buffer) 440 - return; 441 - 442 - drm_client_buffer_rmfb(buffer); 443 - drm_client_buffer_delete(buffer); 444 - } 445 - EXPORT_SYMBOL(drm_client_framebuffer_delete); 446 - 447 - /** 448 - * drm_client_framebuffer_flush - Manually flush client framebuffer 449 - * @buffer: DRM client buffer (can be NULL) 429 + * drm_client_buffer_flush - Manually flush client buffer 430 + * @buffer: DRM client buffer 450 431 * @rect: Damage rectangle (if NULL flushes all) 451 432 * 452 433 * This calls &drm_framebuffer_funcs->dirty (if present) to flush buffer changes ··· 440 451 * Returns: 441 452 * Zero on success or negative error code on failure. 442 453 */ 443 - int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect) 454 + int drm_client_buffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect) 444 455 { 445 456 if (!buffer || !buffer->fb || !buffer->fb->funcs->dirty) 446 457 return 0; ··· 460 471 return buffer->fb->funcs->dirty(buffer->fb, buffer->client->file, 461 472 0, 0, NULL, 0); 462 473 } 463 474 EXPORT_SYMBOL(drm_client_buffer_flush);
+43 -15
drivers/gpu/drm/drm_displayid.c
··· 9 9 #include "drm_crtc_internal.h" 10 10 #include "drm_displayid_internal.h" 11 11 12 + enum { 13 + QUIRK_IGNORE_CHECKSUM, 14 + }; 15 + 16 + struct displayid_quirk { 17 + const struct drm_edid_ident ident; 18 + u8 quirks; 19 + }; 20 + 21 + static const struct displayid_quirk quirks[] = { 22 + { 23 + .ident = DRM_EDID_IDENT_INIT('C', 'S', 'O', 5142, "MNE007ZA1-5"), 24 + .quirks = BIT(QUIRK_IGNORE_CHECKSUM), 25 + }, 26 + }; 27 + 28 + static u8 get_quirks(const struct drm_edid *drm_edid) 29 + { 30 + int i; 31 + 32 + for (i = 0; i < ARRAY_SIZE(quirks); i++) { 33 + if (drm_edid_match(drm_edid, &quirks[i].ident)) 34 + return quirks[i].quirks; 35 + } 36 + 37 + return 0; 38 + } 39 + 12 40 static const struct displayid_header * 13 41 displayid_get_header(const u8 *displayid, int length, int index) 14 42 { ··· 51 23 } 52 24 53 25 static const struct displayid_header * 54 - validate_displayid(const u8 *displayid, int length, int idx) 26 + validate_displayid(const u8 *displayid, int length, int idx, bool ignore_checksum) 55 27 { 56 28 int i, dispid_length; 57 29 u8 csum = 0; ··· 69 41 for (i = 0; i < dispid_length; i++) 70 42 csum += displayid[idx + i]; 71 43 if (csum) { 72 - DRM_NOTE("DisplayID checksum invalid, remainder is %d\n", csum); 73 - return ERR_PTR(-EINVAL); 44 + DRM_NOTE("DisplayID checksum invalid, remainder is %d%s\n", csum, 45 + ignore_checksum ? " (ignoring)" : ""); 46 + 47 + if (!ignore_checksum) 48 + return ERR_PTR(-EINVAL); 74 49 } 75 50 76 51 return base; 77 52 } 78 53 79 - static const u8 *drm_find_displayid_extension(const struct drm_edid *drm_edid, 80 - int *length, int *idx, 81 - int *ext_index) 54 + static const u8 *find_next_displayid_extension(struct displayid_iter *iter) 82 55 { 83 56 const struct displayid_header *base; 84 57 const u8 *displayid; 58 + bool ignore_checksum = iter->quirks & BIT(QUIRK_IGNORE_CHECKSUM); 85 59 86 - displayid = drm_edid_find_extension(drm_edid, DISPLAYID_EXT, ext_index); 60 + displayid = drm_edid_find_extension(iter->drm_edid, DISPLAYID_EXT, &iter->ext_index); 87 61 if (!displayid) 88 62 return NULL; 89 63 90 64 /* EDID extensions block checksum isn't for us */ 91 - *length = EDID_LENGTH - 1; 92 - *idx = 1; 65 + iter->length = EDID_LENGTH - 1; 66 + iter->idx = 1; 93 67 94 - base = validate_displayid(displayid, *length, *idx); 68 + base = validate_displayid(displayid, iter->length, iter->idx, ignore_checksum); 95 69 if (IS_ERR(base)) 96 70 return NULL; 97 71 98 - *length = *idx + sizeof(*base) + base->bytes; 72 + iter->length = iter->idx + sizeof(*base) + base->bytes; 99 73 100 74 return displayid; 101 75 } ··· 108 78 memset(iter, 0, sizeof(*iter)); 109 79 110 80 iter->drm_edid = drm_edid; 81 + iter->quirks = get_quirks(drm_edid); 111 82 } 112 83 113 84 static const struct displayid_block * ··· 157 126 /* The first section we encounter is the base section */ 158 127 bool base_section = !iter->section; 159 128 160 - iter->section = drm_find_displayid_extension(iter->drm_edid, 161 - &iter->length, 162 - &iter->idx, 163 - &iter->ext_index); 129 + iter->section = find_next_displayid_extension(iter); 164 130 if (!iter->section) { 165 131 iter->drm_edid = NULL; 166 132 return NULL;
+2
drivers/gpu/drm/drm_displayid_internal.h
··· 167 167 168 168 u8 version; 169 169 u8 primary_use; 170 + 171 + u8 quirks; 170 172 }; 171 173 172 174 void displayid_iter_edid_begin(const struct drm_edid *drm_edid,
+1
drivers/gpu/drm/drm_dumb_buffers.c
··· 29 29 #include <drm/drm_fourcc.h> 30 30 #include <drm/drm_gem.h> 31 31 #include <drm/drm_mode.h> 32 + #include <drm/drm_print.h> 32 33 33 34 #include "drm_crtc_internal.h" 34 35 #include "drm_internal.h"
+5 -4
drivers/gpu/drm/drm_fbdev_dma.c
··· 10 10 #include <drm/drm_fb_helper.h> 11 11 #include <drm/drm_framebuffer.h> 12 12 #include <drm/drm_gem_dma_helper.h> 13 + #include <drm/drm_print.h> 13 14 14 15 /* 15 16 * struct fb_ops ··· 56 55 drm_fb_helper_fini(fb_helper); 57 56 58 57 drm_client_buffer_vunmap(fb_helper->buffer); 59 - drm_client_framebuffer_delete(fb_helper->buffer); 58 + drm_client_buffer_delete(fb_helper->buffer); 60 59 drm_client_release(&fb_helper->client); 61 60 } 62 61 ··· 89 88 vfree(shadow); 90 89 91 90 drm_client_buffer_vunmap(fb_helper->buffer); 92 - drm_client_framebuffer_delete(fb_helper->buffer); 91 + drm_client_buffer_delete(fb_helper->buffer); 93 92 drm_client_release(&fb_helper->client); 94 93 } 95 94 ··· 282 281 283 282 format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp, 284 283 sizes->surface_depth); 285 - buffer = drm_client_framebuffer_create(client, sizes->surface_width, 284 + buffer = drm_client_buffer_create_dumb(client, sizes->surface_width, 286 285 sizes->surface_height, format); 287 286 if (IS_ERR(buffer)) 288 287 return PTR_ERR(buffer); ··· 325 324 fb_helper->buffer = NULL; 326 325 drm_client_buffer_vunmap(buffer); 327 326 err_drm_client_buffer_delete: 328 - drm_client_framebuffer_delete(buffer); 327 + drm_client_buffer_delete(buffer); 329 328 return ret; 330 329 } 331 330 EXPORT_SYMBOL(drm_fbdev_dma_driver_fbdev_probe);
+4 -3
drivers/gpu/drm/drm_fbdev_shmem.c
··· 9 9 #include <drm/drm_framebuffer.h> 10 10 #include <drm/drm_gem_framebuffer_helper.h> 11 11 #include <drm/drm_gem_shmem_helper.h> 12 + #include <drm/drm_print.h> 12 13 13 14 /* 14 15 * struct fb_ops ··· 64 63 drm_fb_helper_fini(fb_helper); 65 64 66 65 drm_client_buffer_vunmap(fb_helper->buffer); 67 - drm_client_framebuffer_delete(fb_helper->buffer); 66 + drm_client_buffer_delete(fb_helper->buffer); 68 67 drm_client_release(&fb_helper->client); 69 68 } 70 69 ··· 148 147 sizes->surface_bpp); 149 148 150 149 format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp, sizes->surface_depth); 151 - buffer = drm_client_framebuffer_create(client, sizes->surface_width, 150 + buffer = drm_client_buffer_create_dumb(client, sizes->surface_width, 152 151 sizes->surface_height, format); 153 152 if (IS_ERR(buffer)) 154 153 return PTR_ERR(buffer); ··· 205 204 fb_helper->buffer = NULL; 206 205 drm_client_buffer_vunmap(buffer); 207 206 err_drm_client_buffer_delete: 208 - drm_client_framebuffer_delete(buffer); 207 + drm_client_buffer_delete(buffer); 209 208 return ret; 210 209 } 211 210 EXPORT_SYMBOL(drm_fbdev_shmem_driver_fbdev_probe);
+5 -5
drivers/gpu/drm/drm_fbdev_ttm.c
··· 50 50 fb_deferred_io_cleanup(info); 51 51 drm_fb_helper_fini(fb_helper); 52 52 vfree(shadow); 53 - drm_client_framebuffer_delete(fb_helper->buffer); 53 + drm_client_buffer_delete(fb_helper->buffer); 54 54 55 55 drm_client_release(&fb_helper->client); 56 56 } ··· 187 187 188 188 format = drm_driver_legacy_fb_format(dev, sizes->surface_bpp, 189 189 sizes->surface_depth); 190 - buffer = drm_client_framebuffer_create(client, sizes->surface_width, 190 + buffer = drm_client_buffer_create_dumb(client, sizes->surface_width, 191 191 sizes->surface_height, format); 192 192 if (IS_ERR(buffer)) 193 193 return PTR_ERR(buffer); ··· 200 200 screen_buffer = vzalloc(screen_size); 201 201 if (!screen_buffer) { 202 202 ret = -ENOMEM; 203 - goto err_drm_client_framebuffer_delete; 203 + goto err_drm_client_buffer_delete; 204 204 } 205 205 206 206 info = drm_fb_helper_alloc_info(fb_helper); ··· 233 233 drm_fb_helper_release_info(fb_helper); 234 234 err_vfree: 235 235 vfree(screen_buffer); 236 - err_drm_client_framebuffer_delete: 236 + err_drm_client_buffer_delete: 237 237 fb_helper->fb = NULL; 238 238 fb_helper->buffer = NULL; 239 - drm_client_framebuffer_delete(buffer); 239 + drm_client_buffer_delete(buffer); 240 240 return ret; 241 241 } 242 242 EXPORT_SYMBOL(drm_fbdev_ttm_driver_fbdev_probe);
+1
drivers/gpu/drm/drm_gem_dma_helper.c
··· 22 22 #include <drm/drm_drv.h> 23 23 #include <drm/drm_dumb_buffers.h> 24 24 #include <drm/drm_gem_dma_helper.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_vma_manager.h> 26 27 27 28 /**
+1
drivers/gpu/drm/drm_gem_framebuffer_helper.c
··· 16 16 #include <drm/drm_gem.h> 17 17 #include <drm/drm_gem_framebuffer_helper.h> 18 18 #include <drm/drm_modeset_helper.h> 19 + #include <drm/drm_print.h> 19 20 20 21 #include "drm_internal.h" 21 22
+1
drivers/gpu/drm/drm_gem_ttm_helper.c
··· 4 4 #include <linux/module.h> 5 5 6 6 #include <drm/drm_gem_ttm_helper.h> 7 + #include <drm/drm_print.h> 7 8 #include <drm/ttm/ttm_placement.h> 8 9 #include <drm/ttm/ttm_tt.h> 9 10
+2 -1
drivers/gpu/drm/drm_gem_vram_helper.c
··· 17 17 #include <drm/drm_mode.h> 18 18 #include <drm/drm_plane.h> 19 19 #include <drm/drm_prime.h> 20 + #include <drm/drm_print.h> 20 21 21 22 #include <drm/ttm/ttm_range_manager.h> 22 23 #include <drm/ttm/ttm_tt.h> ··· 860 859 ret = ttm_device_init(&vmm->bdev, &bo_driver, dev->dev, 861 860 dev->anon_inode->i_mapping, 862 861 dev->vma_offset_manager, 863 - false, true); 862 + TTM_ALLOCATION_POOL_USE_DMA32); 864 863 if (ret) 865 864 return ret; 866 865
+1
drivers/gpu/drm/drm_gpuvm.c
··· 26 26 */ 27 27 28 28 #include <drm/drm_gpuvm.h> 29 + #include <drm/drm_print.h> 29 30 30 31 #include <linux/export.h> 31 32 #include <linux/interval_tree_generic.h>
+1
drivers/gpu/drm/drm_mipi_dbi.c
··· 26 26 #include <drm/drm_gem_framebuffer_helper.h> 27 27 #include <drm/drm_mipi_dbi.h> 28 28 #include <drm/drm_modes.h> 29 + #include <drm/drm_print.h> 29 30 #include <drm/drm_probe_helper.h> 30 31 #include <drm/drm_rect.h> 31 32 #include <video/mipi_display.h>
+1
drivers/gpu/drm/drm_mm.c
··· 49 49 #include <linux/stacktrace.h> 50 50 51 51 #include <drm/drm_mm.h> 52 + #include <drm/drm_print.h> 52 53 53 54 /** 54 55 * DOC: Overview
+1
drivers/gpu/drm/drm_prime.c
··· 37 37 #include <drm/drm_framebuffer.h> 38 38 #include <drm/drm_gem.h> 39 39 #include <drm/drm_prime.h> 40 + #include <drm/drm_print.h> 40 41 41 42 #include "drm_internal.h" 42 43
+1
drivers/gpu/drm/etnaviv/etnaviv_buffer.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_drv.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "etnaviv_cmdbuf.h" 9 10 #include "etnaviv_gpu.h"
+1
drivers/gpu/drm/etnaviv/etnaviv_drv.c
··· 17 17 #include <drm/drm_ioctl.h> 18 18 #include <drm/drm_of.h> 19 19 #include <drm/drm_prime.h> 20 + #include <drm/drm_print.h> 20 21 21 22 #include "etnaviv_cmdbuf.h" 22 23 #include "etnaviv_drv.h"
+1
drivers/gpu/drm/etnaviv/etnaviv_gem.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_prime.h> 7 + #include <drm/drm_print.h> 7 8 #include <linux/dma-mapping.h> 8 9 #include <linux/shmem_fs.h> 9 10 #include <linux/spinlock.h>
+1
drivers/gpu/drm/etnaviv/etnaviv_gem_submit.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_file.h> 7 + #include <drm/drm_print.h> 7 8 #include <linux/dma-fence-array.h> 8 9 #include <linux/file.h> 9 10 #include <linux/dma-resv.h>
+2
drivers/gpu/drm/etnaviv/etnaviv_gpu.c
··· 16 16 #include <linux/reset.h> 17 17 #include <linux/thermal.h> 18 18 19 + #include <drm/drm_print.h> 20 + 19 21 #include "etnaviv_cmdbuf.h" 20 22 #include "etnaviv_dump.h" 21 23 #include "etnaviv_gpu.h"
+32
drivers/gpu/drm/etnaviv/etnaviv_hwdb.c
··· 198 198 }, 199 199 { 200 200 .model = 0x8000, 201 + .revision = 0x6205, 202 + .product_id = 0x80003, 203 + .customer_id = 0x15, 204 + .eco_id = 0, 205 + .stream_count = 16, 206 + .register_max = 64, 207 + .thread_count = 512, 208 + .shader_core_count = 2, 209 + .nn_core_count = 2, 210 + .vertex_cache_size = 16, 211 + .vertex_output_buffer_size = 1024, 212 + .pixel_pipes = 1, 213 + .instruction_count = 512, 214 + .num_constants = 320, 215 + .buffer_size = 0, 216 + .varyings_count = 16, 217 + .features = 0xe0287c8d, 218 + .minor_features0 = 0xc1799eff, 219 + .minor_features1 = 0xfefbfad9, 220 + .minor_features2 = 0xeb9d4fbf, 221 + .minor_features3 = 0xedfffced, 222 + .minor_features4 = 0xdb0dafc7, 223 + .minor_features5 = 0x7b5ac333, 224 + .minor_features6 = 0xfcce6000, 225 + .minor_features7 = 0x03fbfa6f, 226 + .minor_features8 = 0x00ef0ef0, 227 + .minor_features9 = 0x0eca703c, 228 + .minor_features10 = 0x898048f0, 229 + .minor_features11 = 0x00000034, 230 + }, 231 + { 232 + .model = 0x8000, 201 233 .revision = 0x7120, 202 234 .product_id = 0x45080009, 203 235 .customer_id = 0x88,
+2
drivers/gpu/drm/etnaviv/etnaviv_mmu.c
··· 6 6 #include <linux/dma-mapping.h> 7 7 #include <linux/scatterlist.h> 8 8 9 + #include <drm/drm_print.h> 10 + 9 11 #include "common.xml.h" 10 12 #include "etnaviv_cmdbuf.h" 11 13 #include "etnaviv_drv.h"
+1
drivers/gpu/drm/exynos/exynos5433_drm_decon.c
··· 20 20 #include <drm/drm_blend.h> 21 21 #include <drm/drm_fourcc.h> 22 22 #include <drm/drm_framebuffer.h> 23 + #include <drm/drm_print.h> 23 24 #include <drm/drm_vblank.h> 24 25 25 26 #include "exynos_drm_crtc.h"
+1
drivers/gpu/drm/exynos/exynos7_drm_decon.c
··· 20 20 21 21 #include <drm/drm_fourcc.h> 22 22 #include <drm/drm_framebuffer.h> 23 + #include <drm/drm_print.h> 23 24 #include <drm/drm_vblank.h> 24 25 #include <drm/exynos_drm.h> 25 26
+1
drivers/gpu/drm/exynos/exynos_drm_fb.c
··· 14 14 #include <drm/drm_framebuffer.h> 15 15 #include <drm/drm_fourcc.h> 16 16 #include <drm/drm_gem_framebuffer_helper.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_probe_helper.h> 18 19 #include <drm/exynos_drm.h> 19 20
+1
drivers/gpu/drm/exynos/exynos_drm_fbdev.c
··· 16 16 #include <drm/drm_framebuffer.h> 17 17 #include <drm/drm_gem_framebuffer_helper.h> 18 18 #include <drm/drm_prime.h> 19 + #include <drm/drm_print.h> 19 20 #include <drm/exynos_drm.h> 20 21 21 22 #include "exynos_drm_drv.h"
+1
drivers/gpu/drm/exynos/exynos_drm_fimd.c
··· 23 23 #include <drm/drm_blend.h> 24 24 #include <drm/drm_fourcc.h> 25 25 #include <drm/drm_framebuffer.h> 26 + #include <drm/drm_print.h> 26 27 #include <drm/drm_vblank.h> 27 28 #include <drm/exynos_drm.h> 28 29
+1
drivers/gpu/drm/exynos/exynos_drm_g2d.c
··· 21 21 #include <linux/workqueue.h> 22 22 23 23 #include <drm/drm_file.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/exynos_drm.h> 25 26 26 27 #include "exynos_drm_drv.h"
+1
drivers/gpu/drm/exynos/exynos_drm_gem.c
··· 12 12 13 13 #include <drm/drm_dumb_buffers.h> 14 14 #include <drm/drm_prime.h> 15 + #include <drm/drm_print.h> 15 16 #include <drm/drm_vma_manager.h> 16 17 #include <drm/exynos_drm.h> 17 18
+1
drivers/gpu/drm/exynos/exynos_drm_ipp.c
··· 22 22 #include <drm/drm_file.h> 23 23 #include <drm/drm_fourcc.h> 24 24 #include <drm/drm_mode.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/exynos_drm.h> 26 27 27 28 #include "exynos_drm_drv.h"
+1
drivers/gpu/drm/exynos/exynos_drm_plane.c
··· 9 9 #include <drm/drm_atomic_helper.h> 10 10 #include <drm/drm_blend.h> 11 11 #include <drm/drm_framebuffer.h> 12 + #include <drm/drm_print.h> 12 13 #include <drm/exynos_drm.h> 13 14 14 15 #include "exynos_drm_crtc.h"
+1
drivers/gpu/drm/exynos/exynos_drm_vidi.c
··· 14 14 #include <drm/drm_atomic_helper.h> 15 15 #include <drm/drm_edid.h> 16 16 #include <drm/drm_framebuffer.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_probe_helper.h> 18 19 #include <drm/drm_simple_kms_helper.h> 19 20 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/exynos/exynos_mixer.c
··· 28 28 #include <drm/drm_edid.h> 29 29 #include <drm/drm_fourcc.h> 30 30 #include <drm/drm_framebuffer.h> 31 + #include <drm/drm_print.h> 31 32 #include <drm/drm_vblank.h> 32 33 #include <drm/exynos_drm.h> 33 34
+1
drivers/gpu/drm/fsl-dcu/fsl_dcu_drm_plane.c
··· 15 15 #include <drm/drm_framebuffer.h> 16 16 #include <drm/drm_gem_dma_helper.h> 17 17 #include <drm/drm_plane_helper.h> 18 + #include <drm/drm_print.h> 18 19 #include <drm/drm_probe_helper.h> 19 20 20 21 #include "fsl_dcu_drm_drv.h"
+2
drivers/gpu/drm/gma500/backlight.c
··· 11 11 12 12 #include <acpi/video.h> 13 13 14 + #include <drm/drm_print.h> 15 + 14 16 #include "psb_drv.h" 15 17 #include "psb_intel_reg.h" 16 18 #include "psb_intel_drv.h"
+1
drivers/gpu/drm/gma500/cdv_device.c
··· 9 9 10 10 #include <drm/drm.h> 11 11 #include <drm/drm_crtc_helper.h> 12 + #include <drm/drm_print.h> 12 13 13 14 #include "cdv_device.h" 14 15 #include "gma_device.h"
+1
drivers/gpu/drm/gma500/cdv_intel_display.c
··· 11 11 12 12 #include <drm/drm_crtc.h> 13 13 #include <drm/drm_modeset_helper_vtables.h> 14 + #include <drm/drm_print.h> 14 15 15 16 #include "cdv_device.h" 16 17 #include "framebuffer.h"
+1
drivers/gpu/drm/gma500/cdv_intel_dp.c
··· 34 34 #include <drm/drm_crtc_helper.h> 35 35 #include <drm/drm_edid.h> 36 36 #include <drm/drm_modeset_helper_vtables.h> 37 + #include <drm/drm_print.h> 37 38 #include <drm/drm_simple_kms_helper.h> 38 39 39 40 #include "gma_display.h"
+1
drivers/gpu/drm/gma500/cdv_intel_hdmi.c
··· 31 31 #include <drm/drm_crtc_helper.h> 32 32 #include <drm/drm_edid.h> 33 33 #include <drm/drm_modeset_helper_vtables.h> 34 + #include <drm/drm_print.h> 34 35 #include <drm/drm_simple_kms_helper.h> 35 36 36 37 #include "cdv_device.h"
+1
drivers/gpu/drm/gma500/cdv_intel_lvds.c
··· 14 14 15 15 #include <drm/drm_crtc_helper.h> 16 16 #include <drm/drm_modeset_helper_vtables.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_simple_kms_helper.h> 18 19 19 20 #include "cdv_device.h"
+1
drivers/gpu/drm/gma500/gem.c
··· 16 16 #include <asm/set_memory.h> 17 17 18 18 #include <drm/drm.h> 19 + #include <drm/drm_print.h> 19 20 #include <drm/drm_vma_manager.h> 20 21 21 22 #include "gem.h"
+1
drivers/gpu/drm/gma500/intel_bios.c
··· 8 8 9 9 #include <drm/display/drm_dp_helper.h> 10 10 #include <drm/drm.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "intel_bios.h" 13 14 #include "psb_drv.h"
+2
drivers/gpu/drm/gma500/intel_gmbus.c
··· 32 32 #include <linux/i2c.h> 33 33 #include <linux/module.h> 34 34 35 + #include <drm/drm_print.h> 36 + 35 37 #include "psb_drv.h" 36 38 #include "psb_intel_drv.h" 37 39 #include "psb_intel_reg.h"
+1
drivers/gpu/drm/gma500/mid_bios.c
··· 12 12 */ 13 13 14 14 #include <drm/drm.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "mid_bios.h" 17 18 #include "psb_drv.h"
+1
drivers/gpu/drm/gma500/oaktrail_crtc.c
··· 10 10 #include <drm/drm_fourcc.h> 11 11 #include <drm/drm_framebuffer.h> 12 12 #include <drm/drm_modeset_helper_vtables.h> 13 + #include <drm/drm_print.h> 13 14 14 15 #include "framebuffer.h" 15 16 #include "gem.h"
+1
drivers/gpu/drm/gma500/oaktrail_hdmi.c
··· 30 30 #include <drm/drm_crtc_helper.h> 31 31 #include <drm/drm_edid.h> 32 32 #include <drm/drm_modeset_helper_vtables.h> 33 + #include <drm/drm_print.h> 33 34 #include <drm/drm_simple_kms_helper.h> 34 35 35 36 #include "psb_drv.h"
+3
drivers/gpu/drm/gma500/oaktrail_hdmi_i2c.c
··· 30 30 #include <linux/i2c.h> 31 31 #include <linux/interrupt.h> 32 32 #include <linux/delay.h> 33 + 34 + #include <drm/drm_print.h> 35 + 33 36 #include "psb_drv.h" 34 37 35 38 #define HDMI_READ(reg) readl(hdmi_dev->regs + (reg))
+1
drivers/gpu/drm/gma500/oaktrail_lvds.c
··· 13 13 14 14 #include <drm/drm_edid.h> 15 15 #include <drm/drm_modeset_helper_vtables.h> 16 + #include <drm/drm_print.h> 16 17 #include <drm/drm_simple_kms_helper.h> 17 18 18 19 #include "intel_bios.h"
+3
drivers/gpu/drm/gma500/opregion.c
··· 22 22 * 23 23 */ 24 24 #include <linux/acpi.h> 25 + 26 + #include <drm/drm_print.h> 27 + 25 28 #include "psb_drv.h" 26 29 #include "psb_irq.h" 27 30 #include "psb_intel_reg.h"
+1
drivers/gpu/drm/gma500/psb_drv.c
··· 25 25 #include <drm/drm_file.h> 26 26 #include <drm/drm_ioctl.h> 27 27 #include <drm/drm_pciids.h> 28 + #include <drm/drm_print.h> 28 29 #include <drm/drm_vblank.h> 29 30 30 31 #include "framebuffer.h"
+1
drivers/gpu/drm/gma500/psb_intel_display.c
··· 11 11 12 12 #include <drm/drm_modeset_helper.h> 13 13 #include <drm/drm_modeset_helper_vtables.h> 14 + #include <drm/drm_print.h> 14 15 15 16 #include "framebuffer.h" 16 17 #include "gem.h"
+1
drivers/gpu/drm/gma500/psb_intel_lvds.c
··· 13 13 14 14 #include <drm/drm_crtc_helper.h> 15 15 #include <drm/drm_modeset_helper_vtables.h> 16 + #include <drm/drm_print.h> 16 17 #include <drm/drm_simple_kms_helper.h> 17 18 18 19 #include "intel_bios.h"
+1
drivers/gpu/drm/gma500/psb_intel_sdvo.c
··· 36 36 #include <drm/drm_crtc_helper.h> 37 37 #include <drm/drm_edid.h> 38 38 #include <drm/drm_modeset_helper_vtables.h> 39 + #include <drm/drm_print.h> 39 40 40 41 #include "psb_drv.h" 41 42 #include "psb_intel_drv.h"
+1
drivers/gpu/drm/gma500/psb_irq.c
··· 9 9 **************************************************************************/ 10 10 11 11 #include <drm/drm_drv.h> 12 + #include <drm/drm_print.h> 12 13 #include <drm/drm_vblank.h> 13 14 14 15 #include "power.h"
+1
drivers/gpu/drm/hisilicon/kirin/kirin_drm_ade.c
··· 29 29 #include <drm/drm_fourcc.h> 30 30 #include <drm/drm_framebuffer.h> 31 31 #include <drm/drm_gem_dma_helper.h> 32 + #include <drm/drm_print.h> 32 33 #include <drm/drm_probe_helper.h> 33 34 #include <drm/drm_vblank.h> 34 35 #include <drm/drm_gem_framebuffer_helper.h>
+1
drivers/gpu/drm/hisilicon/kirin/kirin_drm_drv.c
··· 24 24 #include <drm/drm_gem_framebuffer_helper.h> 25 25 #include <drm/drm_module.h> 26 26 #include <drm/drm_of.h> 27 + #include <drm/drm_print.h> 27 28 #include <drm/drm_probe_helper.h> 28 29 #include <drm/drm_vblank.h> 29 30
+1
drivers/gpu/drm/hyperv/hyperv_drm_drv.c
··· 14 14 #include <drm/drm_drv.h> 15 15 #include <drm/drm_fbdev_shmem.h> 16 16 #include <drm/drm_gem_shmem_helper.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_simple_kms_helper.h> 18 19 19 20 #include "hyperv_drm.h"
+1
drivers/gpu/drm/hyperv/hyperv_drm_modeset.c
··· 19 19 #include <drm/drm_probe_helper.h> 20 20 #include <drm/drm_panic.h> 21 21 #include <drm/drm_plane.h> 22 + #include <drm/drm_print.h> 22 23 #include <drm/drm_vblank.h> 23 24 #include <drm/drm_vblank_helper.h> 24 25
+2
drivers/gpu/drm/i915/display/i9xx_wm.c
··· 5 5 6 6 #include <linux/iopoll.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "soc/intel_dram.h" 9 11 10 12 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/display/intel_bios.c
··· 32 32 #include <drm/display/drm_dsc_helper.h> 33 33 #include <drm/drm_edid.h> 34 34 #include <drm/drm_fixed.h> 35 + #include <drm/drm_print.h> 35 36 36 37 #include "soc/intel_rom.h" 37 38
+3
drivers/gpu/drm/i915/display/intel_bw.c
··· 3 3 * Copyright © 2019 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_atomic_state_helper.h> 7 + #include <drm/drm_print.h> 8 + 6 9 #include "soc/intel_dram.h" 7 10 8 11 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/display/intel_cdclk.c
··· 26 26 #include <linux/time.h> 27 27 28 28 #include <drm/drm_fixed.h> 29 + #include <drm/drm_print.h> 29 30 30 31 #include "soc/intel_dram.h" 31 32
+1
drivers/gpu/drm/i915/display/intel_connector.c
··· 28 28 29 29 #include <drm/drm_atomic_helper.h> 30 30 #include <drm/drm_edid.h> 31 + #include <drm/drm_print.h> 31 32 #include <drm/drm_probe_helper.h> 32 33 33 34 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/display/intel_crtc.c
··· 9 9 #include <drm/drm_atomic_helper.h> 10 10 #include <drm/drm_fourcc.h> 11 11 #include <drm/drm_plane.h> 12 + #include <drm/drm_print.h> 12 13 #include <drm/drm_vblank.h> 13 14 #include <drm/drm_vblank_work.h> 14 15
+1
drivers/gpu/drm/i915/display/intel_display.c
··· 41 41 #include <drm/drm_edid.h> 42 42 #include <drm/drm_fixed.h> 43 43 #include <drm/drm_fourcc.h> 44 + #include <drm/drm_print.h> 44 45 #include <drm/drm_probe_helper.h> 45 46 #include <drm/drm_rect.h> 46 47 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/i915/display/intel_display_debugfs.c
··· 12 12 #include <drm/drm_edid.h> 13 13 #include <drm/drm_file.h> 14 14 #include <drm/drm_fourcc.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "hsw_ips.h" 17 18 #include "i915_reg.h"
+1
drivers/gpu/drm/i915/display/intel_display_driver.c
··· 14 14 #include <drm/drm_client_event.h> 15 15 #include <drm/drm_mode_config.h> 16 16 #include <drm/drm_privacy_screen_consumer.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_probe_helper.h> 18 19 #include <drm/drm_vblank.h> 19 20
+1
drivers/gpu/drm/i915/display/intel_display_irq.c
··· 3 3 * Copyright © 2023 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 6 7 #include <drm/drm_vblank.h> 7 8 8 9 #include "i915_drv.h"
+2
drivers/gpu/drm/i915/display/intel_display_power.c
··· 6 6 #include <linux/iopoll.h> 7 7 #include <linux/string_helpers.h> 8 8 9 + #include <drm/drm_print.h> 10 + 9 11 #include "soc/intel_dram.h" 10 12 11 13 #include "i915_drv.h"
+2
drivers/gpu/drm/i915/display/intel_display_power_well.c
··· 5 5 6 6 #include <linux/iopoll.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "i915_drv.h" 9 11 #include "i915_irq.h" 10 12 #include "i915_reg.h"
+1
drivers/gpu/drm/i915/display/intel_display_reset.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_atomic_helper.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "i915_drv.h" 9 10 #include "intel_clock_gating.h"
+2
drivers/gpu/drm/i915/display/intel_dpt.c
··· 3 3 * Copyright © 2021 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "gem/i915_gem_domain.h" 7 9 #include "gem/i915_gem_internal.h" 8 10 #include "gem/i915_gem_lmem.h"
+1
drivers/gpu/drm/i915/display/intel_fb.c
··· 9 9 #include <drm/drm_blend.h> 10 10 #include <drm/drm_gem.h> 11 11 #include <drm/drm_modeset_helper.h> 12 + #include <drm/drm_print.h> 12 13 13 14 #include "intel_bo.h" 14 15 #include "intel_display.h"
+1
drivers/gpu/drm/i915/display/intel_fb_bo.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_framebuffer.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "gem/i915_gem_object.h" 9 10
+2
drivers/gpu/drm/i915/display/intel_fb_pin.c
··· 7 7 * DOC: display pinning helpers 8 8 */ 9 9 10 + #include <drm/drm_print.h> 11 + 10 12 #include "gem/i915_gem_domain.h" 11 13 #include "gem/i915_gem_object.h" 12 14
+1
drivers/gpu/drm/i915/display/intel_fbc.c
··· 43 43 44 44 #include <drm/drm_blend.h> 45 45 #include <drm/drm_fourcc.h> 46 + #include <drm/drm_print.h> 46 47 47 48 #include "gem/i915_gem_stolen.h" 48 49
+2
drivers/gpu/drm/i915/display/intel_fbdev_fb.c
··· 5 5 6 6 #include <linux/fb.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "gem/i915_gem_lmem.h" 9 11 10 12 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/display/intel_frontbuffer.c
··· 56 56 */ 57 57 58 58 #include <drm/drm_gem.h> 59 + #include <drm/drm_print.h> 59 60 60 61 #include "i915_active.h" 61 62 #include "i915_vma.h"
+1
drivers/gpu/drm/i915/display/intel_gmbus.c
··· 32 32 #include <linux/i2c.h> 33 33 #include <linux/iopoll.h> 34 34 35 + #include <drm/drm_print.h> 35 36 #include <drm/display/drm_hdcp_helper.h> 36 37 37 38 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/display/intel_hdcp_gsc.c
··· 3 3 * Copyright 2023, Intel Corporation. 4 4 */ 5 5 6 + #include <drm/drm_print.h> 6 7 #include <drm/intel/i915_hdcp_interface.h> 7 8 8 9 #include "gem/i915_gem_region.h"
+1
drivers/gpu/drm/i915/display/intel_hotplug.c
··· 24 24 #include <linux/debugfs.h> 25 25 #include <linux/kernel.h> 26 26 27 + #include <drm/drm_print.h> 27 28 #include <drm/drm_probe_helper.h> 28 29 29 30 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/display/intel_overlay.c
··· 27 27 */ 28 28 29 29 #include <drm/drm_fourcc.h> 30 + #include <drm/drm_print.h> 30 31 31 32 #include "gem/i915_gem_internal.h" 32 33 #include "gem/i915_gem_object_frontbuffer.h"
+2
drivers/gpu/drm/i915/display/intel_pipe_crc.c
··· 28 28 #include <linux/debugfs.h> 29 29 #include <linux/seq_file.h> 30 30 31 + #include <drm/drm_print.h> 32 + 31 33 #include "i915_drv.h" 32 34 #include "i915_irq.h" 33 35 #include "intel_atomic.h"
+1
drivers/gpu/drm/i915/display/intel_plane.c
··· 43 43 #include <drm/drm_gem.h> 44 44 #include <drm/drm_gem_atomic_helper.h> 45 45 #include <drm/drm_panic.h> 46 + #include <drm/drm_print.h> 46 47 47 48 #include "gem/i915_gem_object.h" 48 49 #include "i9xx_plane_regs.h"
+2
drivers/gpu/drm/i915/display/intel_plane_initial.c
··· 3 3 * Copyright © 2021 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "gem/i915_gem_lmem.h" 7 9 #include "gem/i915_gem_region.h" 8 10 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/display/intel_psr.c
··· 26 26 #include <drm/drm_atomic_helper.h> 27 27 #include <drm/drm_damage_helper.h> 28 28 #include <drm/drm_debugfs.h> 29 + #include <drm/drm_print.h> 29 30 #include <drm/drm_vblank.h> 30 31 31 32 #include "i915_reg.h"
+1
drivers/gpu/drm/i915/display/intel_vblank.c
··· 5 5 6 6 #include <linux/iopoll.h> 7 7 8 + #include <drm/drm_print.h> 8 9 #include <drm/drm_vblank.h> 9 10 10 11 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/gem/i915_gem_context.c
··· 68 68 #include <linux/nospec.h> 69 69 70 70 #include <drm/drm_cache.h> 71 + #include <drm/drm_print.h> 71 72 #include <drm/drm_syncobj.h> 72 73 73 74 #include "gt/gen6_ppgtt.h"
+1
drivers/gpu/drm/i915/gem/i915_gem_create.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_fourcc.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "display/intel_display.h" 9 10 #include "gem/i915_gem_ioctls.h"
+1
drivers/gpu/drm/i915/gem/i915_gem_execbuffer.c
··· 9 9 #include <linux/uaccess.h> 10 10 11 11 #include <drm/drm_auth.h> 12 + #include <drm/drm_print.h> 12 13 #include <drm/drm_syncobj.h> 13 14 14 15 #include "gem/i915_gem_ioctls.h"
+1
drivers/gpu/drm/i915/gem/i915_gem_object.c
··· 27 27 #include <linux/sched/mm.h> 28 28 29 29 #include <drm/drm_cache.h> 30 + #include <drm/drm_print.h> 30 31 31 32 #include "display/intel_frontbuffer.h" 32 33 #include "pxp/intel_pxp.h"
+3 -1
drivers/gpu/drm/i915/gem/i915_gem_pages.c
··· 3 3 * Copyright © 2014-2016 Intel Corporation 4 4 */ 5 5 6 + #include <linux/vmalloc.h> 7 + 6 8 #include <drm/drm_cache.h> 7 9 #include <drm/drm_panic.h> 8 - #include <linux/vmalloc.h> 10 + #include <drm/drm_print.h> 9 11 10 12 #include "display/intel_fb.h" 11 13 #include "display/intel_display_types.h"
+1
drivers/gpu/drm/i915/gem/i915_gem_phys.c
··· 8 8 #include <linux/swap.h> 9 9 10 10 #include <drm/drm_cache.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "gt/intel_gt.h" 13 14 #include "i915_drv.h"
+2
drivers/gpu/drm/i915/gem/i915_gem_shrinker.c
··· 12 12 #include <linux/dma-buf.h> 13 13 #include <linux/vmalloc.h> 14 14 15 + #include <drm/drm_print.h> 16 + 15 17 #include "gt/intel_gt_requests.h" 16 18 #include "gt/intel_gt.h" 17 19
+1
drivers/gpu/drm/i915/gem/i915_gem_stolen.c
··· 7 7 #include <linux/mutex.h> 8 8 9 9 #include <drm/drm_mm.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/intel/i915_drm.h> 11 12 12 13 #include "gem/i915_gem_lmem.h"
+2 -1
drivers/gpu/drm/i915/gem/i915_gem_ttm.c
··· 5 5 6 6 #include <linux/shmem_fs.h> 7 7 8 + #include <drm/drm_buddy.h> 9 + #include <drm/drm_print.h> 8 10 #include <drm/ttm/ttm_placement.h> 9 11 #include <drm/ttm/ttm_tt.h> 10 - #include <drm/drm_buddy.h> 11 12 12 13 #include "i915_drv.h" 13 14 #include "i915_jiffies.h"
+1
drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c
··· 3 3 * Copyright © 2021 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 6 7 #include <drm/ttm/ttm_placement.h> 7 8 #include <drm/ttm/ttm_tt.h> 8 9
+2
drivers/gpu/drm/i915/gem/i915_gem_userptr.c
··· 38 38 #include <linux/swap.h> 39 39 #include <linux/sched/mm.h> 40 40 41 + #include <drm/drm_print.h> 42 + 41 43 #include "i915_drv.h" 42 44 #include "i915_gem_ioctls.h" 43 45 #include "i915_gem_object.h"
+2
drivers/gpu/drm/i915/gem/i915_gemfs.c
··· 7 7 #include <linux/mount.h> 8 8 #include <linux/fs_context.h> 9 9 10 + #include <drm/drm_print.h> 11 + 10 12 #include "i915_drv.h" 11 13 #include "i915_gemfs.h" 12 14 #include "i915_utils.h"
+2
drivers/gpu/drm/i915/gem/selftests/i915_gem_client_blt.c
··· 3 3 * Copyright © 2019 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_selftest.h" 7 9 8 10 #include "display/intel_display_device.h"
+2
drivers/gpu/drm/i915/gem/selftests/i915_gem_mman.c
··· 7 7 #include <linux/highmem.h> 8 8 #include <linux/prime_numbers.h> 9 9 10 + #include <drm/drm_print.h> 11 + 10 12 #include "gem/i915_gem_internal.h" 11 13 #include "gem/i915_gem_lmem.h" 12 14 #include "gem/i915_gem_region.h"
+2
drivers/gpu/drm/i915/gt/gen8_engine_cs.c
··· 3 3 * Copyright © 2014 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "gen8_engine_cs.h" 7 9 #include "intel_engine_regs.h" 8 10 #include "intel_gpu_commands.h"
+2
drivers/gpu/drm/i915/gt/intel_breadcrumbs.c
··· 8 8 #include <trace/events/dma_fence.h> 9 9 #include <uapi/linux/sched/types.h> 10 10 11 + #include <drm/drm_print.h> 12 + 11 13 #include "i915_drv.h" 12 14 #include "i915_trace.h" 13 15 #include "intel_breadcrumbs.h"
+2
drivers/gpu/drm/i915/gt/intel_engine_heartbeat.c
··· 3 3 * Copyright © 2019 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 #include "i915_jiffies.h" 8 10 #include "i915_request.h"
+2
drivers/gpu/drm/i915/gt/intel_engine_user.c
··· 7 7 #include <linux/list_sort.h> 8 8 #include <linux/llist.h> 9 9 10 + #include <drm/drm_print.h> 11 + 10 12 #include "i915_drv.h" 11 13 #include "intel_engine.h" 12 14 #include "intel_engine_user.h"
+2
drivers/gpu/drm/i915/gt/intel_execlists_submission.c
··· 110 110 #include <linux/interrupt.h> 111 111 #include <linux/string_helpers.h> 112 112 113 + #include <drm/drm_print.h> 114 + 113 115 #include "gen8_engine_cs.h" 114 116 #include "i915_drv.h" 115 117 #include "i915_list_util.h"
+1
drivers/gpu/drm/i915/gt/intel_ggtt.c
··· 9 9 #include <linux/stop_machine.h> 10 10 11 11 #include <drm/drm_managed.h> 12 + #include <drm/drm_print.h> 12 13 #include <drm/intel/i915_drm.h> 13 14 #include <drm/intel/intel-gtt.h> 14 15
+2
drivers/gpu/drm/i915/gt/intel_ggtt_fencing.c
··· 5 5 6 6 #include <linux/highmem.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "display/intel_display.h" 9 11 #include "i915_drv.h" 10 12 #include "i915_reg.h"
+1
drivers/gpu/drm/i915/gt/intel_ggtt_gmch.c
··· 5 5 6 6 #include "intel_ggtt_gmch.h" 7 7 8 + #include <drm/drm_print.h> 8 9 #include <drm/intel/intel-gtt.h> 9 10 10 11 #include <linux/agp_backend.h>
+2
drivers/gpu/drm/i915/gt/intel_gt_debugfs.c
··· 5 5 6 6 #include <linux/debugfs.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "i915_drv.h" 9 11 #include "intel_gt.h" 10 12 #include "intel_gt_debugfs.h"
+2
drivers/gpu/drm/i915/gt/intel_gt_pm_debugfs.c
··· 7 7 #include <linux/seq_file.h> 8 8 #include <linux/string_helpers.h> 9 9 10 + #include <drm/drm_print.h> 11 + 10 12 #include "i915_drv.h" 11 13 #include "i915_reg.h" 12 14 #include "intel_gt.h"
+2
drivers/gpu/drm/i915/gt/intel_lrc.c
··· 3 3 * Copyright © 2014 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "gem/i915_gem_lmem.h" 7 9 8 10 #include "gen8_engine_cs.h"
+2
drivers/gpu/drm/i915/gt/intel_mocs.c
··· 3 3 * Copyright © 2015 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 8 10 #include "intel_engine.h"
+2
drivers/gpu/drm/i915/gt/intel_rc6.c
··· 6 6 #include <linux/pm_runtime.h> 7 7 #include <linux/string_helpers.h> 8 8 9 + #include <drm/drm_print.h> 10 + 9 11 #include "display/vlv_clock.h" 10 12 #include "gem/i915_gem_region.h" 11 13 #include "i915_drv.h"
+2
drivers/gpu/drm/i915/gt/intel_region_lmem.c
··· 3 3 * Copyright © 2019 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 #include "i915_pci.h" 8 10 #include "i915_reg.h"
+2
drivers/gpu/drm/i915/gt/intel_renderstate.c
··· 3 3 * Copyright © 2014 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "gem/i915_gem_internal.h" 7 9 8 10 #include "i915_drv.h"
+1
drivers/gpu/drm/i915/gt/intel_sa_media.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_managed.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "i915_drv.h" 9 10 #include "gt/intel_gt.h"
+2
drivers/gpu/drm/i915/gt/intel_sseu.c
··· 5 5 6 6 #include <linux/string_helpers.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "i915_drv.h" 9 11 #include "i915_perf_types.h" 10 12 #include "intel_engine_regs.h"
+2
drivers/gpu/drm/i915/gt/intel_sseu_debugfs.c
··· 7 7 #include <linux/bitmap.h> 8 8 #include <linux/string_helpers.h> 9 9 10 + #include <drm/drm_print.h> 11 + 10 12 #include "i915_drv.h" 11 13 #include "intel_gt_debugfs.h" 12 14 #include "intel_gt_regs.h"
+1
drivers/gpu/drm/i915/gt/intel_timeline.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_cache.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "gem/i915_gem_internal.h" 9 10
+2
drivers/gpu/drm/i915/gt/intel_wopcm.c
··· 3 3 * Copyright © 2017-2019 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "intel_wopcm.h" 7 9 #include "i915_drv.h" 8 10
+2
drivers/gpu/drm/i915/gt/selftest_context.c
··· 3 3 * Copyright © 2019 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_selftest.h" 7 9 #include "intel_engine_heartbeat.h" 8 10 #include "intel_engine_pm.h"
+2
drivers/gpu/drm/i915/gt/selftest_execlists.c
··· 5 5 6 6 #include <linux/prime_numbers.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "gem/i915_gem_internal.h" 9 11 #include "gem/i915_gem_pm.h" 10 12 #include "gt/intel_engine_heartbeat.h"
+2
drivers/gpu/drm/i915/gt/uc/intel_gsc_uc_heci_cmd_submit.c
··· 3 3 * Copyright © 2023 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "gt/intel_context.h" 7 9 #include "gt/intel_engine_pm.h" 8 10 #include "gt/intel_gpu_commands.h"
+2
drivers/gpu/drm/i915/gvt/aperture_gm.c
··· 34 34 * 35 35 */ 36 36 37 + #include <drm/drm_print.h> 38 + 37 39 #include "i915_drv.h" 38 40 #include "i915_reg.h" 39 41 #include "gt/intel_ggtt_fencing.h"
+2
drivers/gpu/drm/i915/gvt/cfg_space.c
··· 31 31 * 32 32 */ 33 33 34 + #include <drm/drm_print.h> 35 + 34 36 #include "i915_drv.h" 35 37 #include "gvt.h" 36 38 #include "intel_pci_config.h"
+2
drivers/gpu/drm/i915/gvt/cmd_parser.c
··· 36 36 37 37 #include <linux/slab.h> 38 38 39 + #include <drm/drm_print.h> 40 + 39 41 #include "i915_drv.h" 40 42 #include "i915_reg.h" 41 43 #include "display/intel_display_regs.h"
+1
drivers/gpu/drm/i915/gvt/display.c
··· 33 33 */ 34 34 35 35 #include <drm/display/drm_dp.h> 36 + #include <drm/drm_print.h> 36 37 37 38 #include "i915_drv.h" 38 39 #include "i915_reg.h"
+1
drivers/gpu/drm/i915/gvt/dmabuf.c
··· 33 33 34 34 #include <drm/drm_fourcc.h> 35 35 #include <drm/drm_plane.h> 36 + #include <drm/drm_print.h> 36 37 37 38 #include "gem/i915_gem_dmabuf.h" 38 39
+1
drivers/gpu/drm/i915/gvt/edid.c
··· 33 33 */ 34 34 35 35 #include <drm/display/drm_dp.h> 36 + #include <drm/drm_print.h> 36 37 37 38 #include "display/intel_dp_aux_regs.h" 38 39 #include "display/intel_gmbus.h"
+2
drivers/gpu/drm/i915/gvt/gtt.c
··· 33 33 * 34 34 */ 35 35 36 + #include <drm/drm_print.h> 37 + 36 38 #include "i915_drv.h" 37 39 #include "gvt.h" 38 40 #include "i915_pvinfo.h"
+1
drivers/gpu/drm/i915/gvt/handlers.c
··· 37 37 */ 38 38 39 39 #include <drm/display/drm_dp.h> 40 + #include <drm/drm_print.h> 40 41 41 42 #include "i915_drv.h" 42 43 #include "i915_reg.h"
+2
drivers/gpu/drm/i915/gvt/interrupt.c
··· 31 31 32 32 #include <linux/eventfd.h> 33 33 34 + #include <drm/drm_print.h> 35 + 34 36 #include "i915_drv.h" 35 37 #include "i915_reg.h" 36 38 #include "display/intel_display_regs.h"
+1
drivers/gpu/drm/i915/gvt/kvmgt.c
··· 48 48 #include <linux/nospec.h> 49 49 50 50 #include <drm/drm_edid.h> 51 + #include <drm/drm_print.h> 51 52 52 53 #include "i915_drv.h" 53 54 #include "intel_gvt.h"
+3
drivers/gpu/drm/i915/gvt/mmio.c
··· 34 34 */ 35 35 36 36 #include <linux/vmalloc.h> 37 + 38 + #include <drm/drm_print.h> 39 + 37 40 #include "i915_drv.h" 38 41 #include "i915_reg.h" 39 42 #include "display/intel_display_regs.h"
+2
drivers/gpu/drm/i915/gvt/mmio_context.c
··· 33 33 * 34 34 */ 35 35 36 + #include <drm/drm_print.h> 37 + 36 38 #include "gt/intel_context.h" 37 39 #include "gt/intel_engine_regs.h" 38 40 #include "gt/intel_gpu_commands.h"
+2
drivers/gpu/drm/i915/gvt/scheduler.c
··· 35 35 36 36 #include <linux/kthread.h> 37 37 38 + #include <drm/drm_print.h> 39 + 38 40 #include "gem/i915_gem_pm.h" 39 41 #include "gt/intel_context.h" 40 42 #include "gt/intel_execlists_submission.h"
+2
drivers/gpu/drm/i915/gvt/vgpu.c
··· 31 31 * 32 32 */ 33 33 34 + #include <drm/drm_print.h> 35 + 34 36 #include "i915_drv.h" 35 37 #include "gvt.h" 36 38 #include "i915_pvinfo.h"
+1
drivers/gpu/drm/i915/i915_cmd_parser.c
··· 28 28 #include <linux/highmem.h> 29 29 30 30 #include <drm/drm_cache.h> 31 + #include <drm/drm_print.h> 31 32 32 33 #include "gt/intel_engine.h" 33 34 #include "gt/intel_engine_regs.h"
+1
drivers/gpu/drm/i915/i915_debugfs.c
··· 32 32 #include <linux/string_helpers.h> 33 33 34 34 #include <drm/drm_debugfs.h> 35 + #include <drm/drm_print.h> 35 36 36 37 #include "gem/i915_gem_context.h" 37 38 #include "gt/intel_gt.h"
+1
drivers/gpu/drm/i915/i915_gem.c
··· 37 37 #include <linux/mman.h> 38 38 39 39 #include <drm/drm_cache.h> 40 + #include <drm/drm_print.h> 40 41 #include <drm/drm_vma_manager.h> 41 42 42 43 #include "gem/i915_gem_clflush.h"
+2
drivers/gpu/drm/i915/i915_getparam.c
··· 2 2 * SPDX-License-Identifier: MIT 3 3 */ 4 4 5 + #include <drm/drm_print.h> 6 + 5 7 #include "display/intel_overlay.h" 6 8 #include "gem/i915_gem_mman.h" 7 9 #include "gt/intel_engine_user.h"
+1
drivers/gpu/drm/i915/i915_irq.c
··· 32 32 #include <linux/sysrq.h> 33 33 34 34 #include <drm/drm_drv.h> 35 + #include <drm/drm_print.h> 35 36 36 37 #include "display/intel_display_irq.h" 37 38 #include "display/intel_hotplug.h"
+1
drivers/gpu/drm/i915/i915_module.c
··· 5 5 */ 6 6 7 7 #include <drm/drm_drv.h> 8 + #include <drm/drm_print.h> 8 9 9 10 #include "gem/i915_gem_context.h" 10 11 #include "gem/i915_gem_object.h"
+2
drivers/gpu/drm/i915/i915_pmu.c
··· 6 6 7 7 #include <linux/pm_runtime.h> 8 8 9 + #include <drm/drm_print.h> 10 + 9 11 #include "gt/intel_engine.h" 10 12 #include "gt/intel_engine_pm.h" 11 13 #include "gt/intel_engine_regs.h"
+2
drivers/gpu/drm/i915/i915_query.c
··· 6 6 7 7 #include <linux/nospec.h> 8 8 9 + #include <drm/drm_print.h> 10 + 9 11 #include "i915_drv.h" 10 12 #include "i915_perf.h" 11 13 #include "i915_query.h"
+2
drivers/gpu/drm/i915/i915_request.c
··· 31 31 #include <linux/sched/signal.h> 32 32 #include <linux/sched/mm.h> 33 33 34 + #include <drm/drm_print.h> 35 + 34 36 #include "gem/i915_gem_context.h" 35 37 #include "gt/intel_breadcrumbs.h" 36 38 #include "gt/intel_context.h"
+2
drivers/gpu/drm/i915/i915_switcheroo.c
··· 5 5 6 6 #include <linux/vga_switcheroo.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "display/intel_display_device.h" 9 11 10 12 #include "i915_driver.h"
+2
drivers/gpu/drm/i915/i915_sysfs.c
··· 30 30 #include <linux/stat.h> 31 31 #include <linux/sysfs.h> 32 32 33 + #include <drm/drm_print.h> 34 + 33 35 #include "gt/intel_gt_regs.h" 34 36 #include "gt/intel_rc6.h" 35 37 #include "gt/intel_rps.h"
+2 -2
drivers/gpu/drm/i915/i915_ttm_buddy_manager.c
··· 5 5 6 6 #include <linux/slab.h> 7 7 8 + #include <drm/drm_buddy.h> 9 + #include <drm/drm_print.h> 8 10 #include <drm/ttm/ttm_placement.h> 9 11 #include <drm/ttm/ttm_bo.h> 10 - 11 - #include <drm/drm_buddy.h> 12 12 13 13 #include "i915_ttm_buddy_manager.h" 14 14
+1
drivers/gpu/drm/i915/i915_utils.c
··· 6 6 #include <linux/device.h> 7 7 8 8 #include <drm/drm_drv.h> 9 + #include <drm/drm_print.h> 9 10 10 11 #include "i915_drv.h" 11 12 #include "i915_reg.h"
+2
drivers/gpu/drm/i915/i915_vgpu.c
··· 21 21 * SOFTWARE. 22 22 */ 23 23 24 + #include <drm/drm_print.h> 25 + 24 26 #include "i915_drv.h" 25 27 #include "i915_pvinfo.h" 26 28 #include "i915_vgpu.h"
+2
drivers/gpu/drm/i915/i915_vma.c
··· 24 24 25 25 #include <linux/sched/mm.h> 26 26 #include <linux/dma-fence-array.h> 27 + 27 28 #include <drm/drm_gem.h> 29 + #include <drm/drm_print.h> 28 30 29 31 #include "display/intel_fb.h" 30 32 #include "display/intel_frontbuffer.h"
+2
drivers/gpu/drm/i915/intel_clock_gating.c
··· 25 25 * 26 26 */ 27 27 28 + #include <drm/drm_print.h> 29 + 28 30 #include "display/i9xx_plane_regs.h" 29 31 #include "display/intel_display.h" 30 32 #include "display/intel_display_core.h"
+2
drivers/gpu/drm/i915/intel_gvt.c
··· 21 21 * SOFTWARE. 22 22 */ 23 23 24 + #include <drm/drm_print.h> 25 + 24 26 #include "i915_drv.h" 25 27 #include "i915_vgpu.h" 26 28 #include "intel_gvt.h"
+1
drivers/gpu/drm/i915/intel_memory_region.c
··· 5 5 6 6 #include <linux/prandom.h> 7 7 8 + #include <drm/drm_print.h> 8 9 #include <uapi/drm/i915_drm.h> 9 10 10 11 #include "intel_memory_region.h"
+2
drivers/gpu/drm/i915/intel_pcode.c
··· 3 3 * Copyright © 2013-2021 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 #include "i915_reg.h" 8 10 #include "i915_wait_util.h"
+1 -1
drivers/gpu/drm/i915/intel_region_ttm.c
··· 34 34 35 35 return ttm_device_init(&dev_priv->bdev, i915_ttm_driver(), 36 36 drm->dev, drm->anon_inode->i_mapping, 37 - drm->vma_offset_manager, false, false); 37 + drm->vma_offset_manager, 0); 38 38 } 39 39 40 40 /**
+2
drivers/gpu/drm/i915/intel_step.c
··· 3 3 * Copyright © 2020,2021 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 #include "intel_step.h" 8 10
+1
drivers/gpu/drm/i915/intel_uncore.c
··· 24 24 #include <linux/pm_runtime.h> 25 25 26 26 #include <drm/drm_managed.h> 27 + #include <drm/drm_print.h> 27 28 28 29 #include "display/intel_display_core.h" 29 30 #include "gt/intel_engine_regs.h"
+2
drivers/gpu/drm/i915/intel_wakeref.c
··· 6 6 7 7 #include <linux/wait_bit.h> 8 8 9 + #include <drm/drm_print.h> 10 + 9 11 #include "intel_runtime_pm.h" 10 12 #include "intel_wakeref.h" 11 13 #include "i915_drv.h"
+2
drivers/gpu/drm/i915/pxp/intel_pxp.c
··· 5 5 6 6 #include <linux/workqueue.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "gem/i915_gem_context.h" 9 11 #include "gt/intel_context.h" 10 12 #include "gt/intel_gt.h"
+2
drivers/gpu/drm/i915/pxp/intel_pxp_gsccs.c
··· 3 3 * Copyright(c) 2023 Intel Corporation. 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "gem/i915_gem_internal.h" 7 9 8 10 #include "gt/intel_context.h"
+2
drivers/gpu/drm/i915/pxp/intel_pxp_huc.c
··· 3 3 * Copyright(c) 2021-2022, Intel Corporation. All rights reserved. 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 8 10 #include "gem/i915_gem_region.h"
+2
drivers/gpu/drm/i915/pxp/intel_pxp_session.c
··· 3 3 * Copyright(c) 2020, Intel Corporation. All rights reserved. 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 8 10 #include "intel_pxp.h"
+2
drivers/gpu/drm/i915/selftests/i915_active.c
··· 7 7 #include <linux/kref.h> 8 8 #include <linux/string_helpers.h> 9 9 10 + #include <drm/drm_print.h> 11 + 10 12 #include "gem/i915_gem_pm.h" 11 13 #include "gt/intel_gt.h" 12 14
+2
drivers/gpu/drm/i915/selftests/i915_request.c
··· 26 26 #include <linux/prime_numbers.h> 27 27 #include <linux/sort.h> 28 28 29 + #include <drm/drm_print.h> 30 + 29 31 #include "gem/i915_gem_internal.h" 30 32 #include "gem/i915_gem_pm.h" 31 33 #include "gem/selftests/mock_context.h"
+1
drivers/gpu/drm/i915/soc/intel_dram.c
··· 6 6 #include <linux/string_helpers.h> 7 7 8 8 #include <drm/drm_managed.h> 9 + #include <drm/drm_print.h> 9 10 10 11 #include "../display/intel_display_core.h" /* FIXME */ 11 12
+1
drivers/gpu/drm/i915/soc/intel_gmch.c
··· 8 8 #include <linux/vgaarb.h> 9 9 10 10 #include <drm/drm_managed.h> 11 + #include <drm/drm_print.h> 11 12 #include <drm/intel/i915_drm.h> 12 13 13 14 #include "../display/intel_display_core.h" /* FIXME */
+2
drivers/gpu/drm/i915/vlv_iosf_sb.c
··· 3 3 * Copyright © 2013-2021 Intel Corporation 4 4 */ 5 5 6 + #include <drm/drm_print.h> 7 + 6 8 #include "i915_drv.h" 7 9 #include "i915_iosf_mbi.h" 8 10 #include "i915_reg.h"
+1
drivers/gpu/drm/imagination/pvr_ccb.c
··· 10 10 #include "pvr_power.h" 11 11 12 12 #include <drm/drm_managed.h> 13 + #include <drm/drm_print.h> 13 14 #include <linux/compiler.h> 14 15 #include <linux/delay.h> 15 16 #include <linux/jiffies.h>
+1 -1
drivers/gpu/drm/imagination/pvr_device.c
··· 48 48 * 49 49 * Return: 50 50 * * 0 on success, or 51 - * * Any error returned by devm_platform_ioremap_resource(). 51 + * * Any error returned by devm_platform_get_and_ioremap_resource(). 52 52 */ 53 53 static int 54 54 pvr_device_reg_init(struct pvr_device *pvr_dev)
+1
drivers/gpu/drm/imagination/pvr_fw.c
··· 17 17 #include <drm/drm_drv.h> 18 18 #include <drm/drm_managed.h> 19 19 #include <drm/drm_mm.h> 20 + #include <drm/drm_print.h> 20 21 #include <linux/clk.h> 21 22 #include <linux/firmware.h> 22 23 #include <linux/math.h>
+2
drivers/gpu/drm/imagination/pvr_fw_meta.c
··· 16 16 #include <linux/ktime.h> 17 17 #include <linux/types.h> 18 18 19 + #include <drm/drm_print.h> 20 + 19 21 #define ROGUE_FW_HEAP_META_SHIFT 25 /* 32 MB */ 20 22 21 23 #define POLL_TIMEOUT_USEC 1000000
+1
drivers/gpu/drm/imagination/pvr_fw_trace.c
··· 9 9 10 10 #include <drm/drm_drv.h> 11 11 #include <drm/drm_file.h> 12 + #include <drm/drm_print.h> 12 13 13 14 #include <linux/build_bug.h> 14 15 #include <linux/dcache.h>
+1
drivers/gpu/drm/imagination/pvr_power.c
··· 10 10 11 11 #include <drm/drm_drv.h> 12 12 #include <drm/drm_managed.h> 13 + #include <drm/drm_print.h> 13 14 #include <linux/cleanup.h> 14 15 #include <linux/clk.h> 15 16 #include <linux/interrupt.h>
+1
drivers/gpu/drm/imagination/pvr_vm.c
··· 13 13 #include <drm/drm_exec.h> 14 14 #include <drm/drm_gem.h> 15 15 #include <drm/drm_gpuvm.h> 16 + #include <drm/drm_print.h> 16 17 17 18 #include <linux/bug.h> 18 19 #include <linux/container_of.h>
+1
drivers/gpu/drm/imx/dcss/dcss-plane.c
··· 10 10 #include <drm/drm_framebuffer.h> 11 11 #include <drm/drm_gem_atomic_helper.h> 12 12 #include <drm/drm_gem_dma_helper.h> 13 + #include <drm/drm_print.h> 13 14 14 15 #include "dcss-dev.h" 15 16 #include "dcss-kms.h"
-1
drivers/gpu/drm/imx/ipuv3/dw_hdmi-imx.c
··· 278 278 MODULE_AUTHOR("Yakir Yang <ykk@rock-chips.com>"); 279 279 MODULE_DESCRIPTION("IMX6 Specific DW-HDMI Driver Extension"); 280 280 MODULE_LICENSE("GPL"); 281 - MODULE_ALIAS("platform:dwhdmi-imx");
-1
drivers/gpu/drm/imx/ipuv3/imx-ldb.c
··· 644 644 MODULE_DESCRIPTION("i.MX LVDS driver"); 645 645 MODULE_AUTHOR("Sascha Hauer, Pengutronix"); 646 646 MODULE_LICENSE("GPL"); 647 - MODULE_ALIAS("platform:" DRIVER_NAME);
-1
drivers/gpu/drm/imx/ipuv3/imx-tve.c
··· 677 677 MODULE_DESCRIPTION("i.MX Television Encoder driver"); 678 678 MODULE_AUTHOR("Philipp Zabel, Pengutronix"); 679 679 MODULE_LICENSE("GPL"); 680 - MODULE_ALIAS("platform:imx-tve");
+1
drivers/gpu/drm/imx/ipuv3/ipuv3-plane.c
··· 14 14 #include <drm/drm_gem_atomic_helper.h> 15 15 #include <drm/drm_gem_dma_helper.h> 16 16 #include <drm/drm_managed.h> 17 + #include <drm/drm_print.h> 17 18 18 19 #include <video/imx-ipu-v3.h> 19 20
-1
drivers/gpu/drm/imx/ipuv3/parallel-display.c
··· 286 286 MODULE_DESCRIPTION("i.MX parallel display driver"); 287 287 MODULE_AUTHOR("Sascha Hauer, Pengutronix"); 288 288 MODULE_LICENSE("GPL"); 289 - MODULE_ALIAS("platform:imx-parallel-display");
+1
drivers/gpu/drm/imx/lcdc/imx-lcdc.c
··· 14 14 #include <drm/drm_gem_dma_helper.h> 15 15 #include <drm/drm_gem_framebuffer_helper.h> 16 16 #include <drm/drm_of.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_probe_helper.h> 18 19 #include <drm/drm_simple_kms_helper.h> 19 20 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/kmb/kmb_drv.c
··· 20 20 #include <drm/drm_gem_dma_helper.h> 21 21 #include <drm/drm_gem_framebuffer_helper.h> 22 22 #include <drm/drm_module.h> 23 + #include <drm/drm_print.h> 23 24 #include <drm/drm_probe_helper.h> 24 25 #include <drm/drm_vblank.h> 25 26
+1
drivers/gpu/drm/kmb/kmb_plane.c
··· 12 12 #include <drm/drm_framebuffer.h> 13 13 #include <drm/drm_gem_dma_helper.h> 14 14 #include <drm/drm_managed.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "kmb_drv.h" 17 18 #include "kmb_plane.h"
+2
drivers/gpu/drm/lima/lima_sched.c
··· 8 8 #include <linux/vmalloc.h> 9 9 #include <linux/pm_runtime.h> 10 10 11 + #include <drm/drm_print.h> 12 + 11 13 #include "lima_devfreq.h" 12 14 #include "lima_drv.h" 13 15 #include "lima_sched.h"
+1
drivers/gpu/drm/loongson/lsdc_benchmark.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_debugfs.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "lsdc_benchmark.h" 9 10 #include "lsdc_drv.h"
+1
drivers/gpu/drm/loongson/lsdc_crtc.c
··· 9 9 #include <drm/drm_atomic.h> 10 10 #include <drm/drm_atomic_helper.h> 11 11 #include <drm/drm_debugfs.h> 12 + #include <drm/drm_print.h> 12 13 #include <drm/drm_vblank.h> 13 14 14 15 #include "lsdc_drv.h"
+1
drivers/gpu/drm/loongson/lsdc_debugfs.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_debugfs.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "lsdc_benchmark.h" 9 10 #include "lsdc_drv.h"
+1
drivers/gpu/drm/loongson/lsdc_drv.c
··· 15 15 #include <drm/drm_gem_framebuffer_helper.h> 16 16 #include <drm/drm_ioctl.h> 17 17 #include <drm/drm_modeset_helper.h> 18 + #include <drm/drm_print.h> 18 19 #include <drm/drm_probe_helper.h> 19 20 #include <drm/drm_vblank.h> 20 21
+1
drivers/gpu/drm/loongson/lsdc_gem.c
··· 10 10 #include <drm/drm_file.h> 11 11 #include <drm/drm_gem.h> 12 12 #include <drm/drm_prime.h> 13 + #include <drm/drm_print.h> 13 14 14 15 #include "lsdc_drv.h" 15 16 #include "lsdc_gem.h"
+1
drivers/gpu/drm/loongson/lsdc_i2c.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_managed.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "lsdc_drv.h" 9 10 #include "lsdc_output.h"
+1
drivers/gpu/drm/loongson/lsdc_irq.c
··· 3 3 * Copyright (C) 2023 Loongson Technology Corporation Limited 4 4 */ 5 5 6 + #include <drm/drm_print.h> 6 7 #include <drm/drm_vblank.h> 7 8 8 9 #include "lsdc_irq.h"
+1
drivers/gpu/drm/loongson/lsdc_output_7a1000.c
··· 5 5 6 6 #include <drm/drm_atomic_helper.h> 7 7 #include <drm/drm_edid.h> 8 + #include <drm/drm_print.h> 8 9 #include <drm/drm_probe_helper.h> 9 10 10 11 #include "lsdc_drv.h"
+1
drivers/gpu/drm/loongson/lsdc_output_7a2000.c
··· 8 8 #include <drm/drm_atomic_helper.h> 9 9 #include <drm/drm_debugfs.h> 10 10 #include <drm/drm_edid.h> 11 + #include <drm/drm_print.h> 11 12 #include <drm/drm_probe_helper.h> 12 13 13 14 #include "lsdc_drv.h"
+1
drivers/gpu/drm/loongson/lsdc_pixpll.c
··· 6 6 #include <linux/delay.h> 7 7 8 8 #include <drm/drm_managed.h> 9 + #include <drm/drm_print.h> 9 10 10 11 #include "lsdc_drv.h" 11 12
+1
drivers/gpu/drm/loongson/lsdc_plane.c
··· 9 9 #include <drm/drm_atomic_helper.h> 10 10 #include <drm/drm_framebuffer.h> 11 11 #include <drm/drm_gem_atomic_helper.h> 12 + #include <drm/drm_print.h> 12 13 13 14 #include "lsdc_drv.h" 14 15 #include "lsdc_regs.h"
+3 -1
drivers/gpu/drm/loongson/lsdc_ttm.c
··· 8 8 #include <drm/drm_gem.h> 9 9 #include <drm/drm_managed.h> 10 10 #include <drm/drm_prime.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "lsdc_drv.h" 13 14 #include "lsdc_ttm.h" ··· 545 544 546 545 ret = ttm_device_init(&ldev->bdev, &lsdc_bo_driver, ddev->dev, 547 546 ddev->anon_inode->i_mapping, 548 - ddev->vma_offset_manager, false, true); 547 + ddev->vma_offset_manager, 548 + TTM_ALLOCATION_POOL_USE_DMA32); 549 549 if (ret) 550 550 return ret; 551 551
+1
drivers/gpu/drm/mcde/mcde_display.c
··· 17 17 #include <drm/drm_gem_atomic_helper.h> 18 18 #include <drm/drm_gem_dma_helper.h> 19 19 #include <drm/drm_mipi_dsi.h> 20 + #include <drm/drm_print.h> 20 21 #include <drm/drm_simple_kms_helper.h> 21 22 #include <drm/drm_bridge.h> 22 23 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/mediatek/mtk_crtc.c
··· 16 16 17 17 #include <drm/drm_atomic.h> 18 18 #include <drm/drm_atomic_helper.h> 19 + #include <drm/drm_print.h> 19 20 #include <drm/drm_probe_helper.h> 20 21 #include <drm/drm_vblank.h> 21 22
+1
drivers/gpu/drm/mediatek/mtk_gem.c
··· 11 11 #include <drm/drm_gem.h> 12 12 #include <drm/drm_gem_dma_helper.h> 13 13 #include <drm/drm_prime.h> 14 + #include <drm/drm_print.h> 14 15 15 16 #include "mtk_drm_drv.h" 16 17 #include "mtk_gem.h"
+1
drivers/gpu/drm/mediatek/mtk_plane.c
··· 11 11 #include <drm/drm_fourcc.h> 12 12 #include <drm/drm_framebuffer.h> 13 13 #include <drm/drm_gem_atomic_helper.h> 14 + #include <drm/drm_print.h> 14 15 #include <linux/align.h> 15 16 16 17 #include "mtk_crtc.h"
+1
drivers/gpu/drm/meson/meson_overlay.c
··· 16 16 #include <drm/drm_framebuffer.h> 17 17 #include <drm/drm_gem_atomic_helper.h> 18 18 #include <drm/drm_gem_dma_helper.h> 19 + #include <drm/drm_print.h> 19 20 20 21 #include "meson_overlay.h" 21 22 #include "meson_registers.h"
+1
drivers/gpu/drm/meson/meson_plane.c
··· 20 20 #include <drm/drm_framebuffer.h> 21 21 #include <drm/drm_gem_atomic_helper.h> 22 22 #include <drm/drm_gem_dma_helper.h> 23 + #include <drm/drm_print.h> 23 24 24 25 #include "meson_plane.h" 25 26 #include "meson_registers.h"
+1
drivers/gpu/drm/mgag200/mgag200_drv.c
··· 20 20 #include <drm/drm_managed.h> 21 21 #include <drm/drm_module.h> 22 22 #include <drm/drm_pciids.h> 23 + #include <drm/drm_print.h> 23 24 24 25 #include "mgag200_drv.h" 25 26
+1
drivers/gpu/drm/mgag200/mgag200_g200.c
··· 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/drm_probe_helper.h> 11 12 12 13 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200eh.c
··· 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/drm_probe_helper.h> 11 12 12 13 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200eh3.c
··· 6 6 #include <drm/drm_atomic_helper.h> 7 7 #include <drm/drm_drv.h> 8 8 #include <drm/drm_gem_atomic_helper.h> 9 + #include <drm/drm_print.h> 9 10 #include <drm/drm_probe_helper.h> 10 11 11 12 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200eh5.c
··· 8 8 #include <drm/drm_atomic_helper.h> 9 9 #include <drm/drm_drv.h> 10 10 #include <drm/drm_gem_atomic_helper.h> 11 + #include <drm/drm_print.h> 11 12 #include <drm/drm_probe_helper.h> 12 13 13 14 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200er.c
··· 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/drm_probe_helper.h> 11 12 12 13 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200ev.c
··· 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/drm_probe_helper.h> 11 12 12 13 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200ew3.c
··· 6 6 #include <drm/drm_atomic_helper.h> 7 7 #include <drm/drm_drv.h> 8 8 #include <drm/drm_gem_atomic_helper.h> 9 + #include <drm/drm_print.h> 9 10 #include <drm/drm_probe_helper.h> 10 11 11 12 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200se.c
··· 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/drm_probe_helper.h> 11 12 12 13 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_g200wb.c
··· 7 7 #include <drm/drm_atomic_helper.h> 8 8 #include <drm/drm_drv.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/drm_probe_helper.h> 11 12 12 13 #include "mgag200_drv.h"
+1
drivers/gpu/drm/mgag200/mgag200_vga.c
··· 2 2 3 3 #include <drm/drm_atomic_helper.h> 4 4 #include <drm/drm_modeset_helper_vtables.h> 5 + #include <drm/drm_print.h> 5 6 #include <drm/drm_probe_helper.h> 6 7 7 8 #include "mgag200_ddc.h"
+1
drivers/gpu/drm/mgag200/mgag200_vga_bmc.c
··· 3 3 #include <drm/drm_atomic_helper.h> 4 4 #include <drm/drm_edid.h> 5 5 #include <drm/drm_modeset_helper_vtables.h> 6 + #include <drm/drm_print.h> 6 7 #include <drm/drm_probe_helper.h> 7 8 8 9 #include "mgag200_ddc.h"
+1
drivers/gpu/drm/mxsfb/lcdif_kms.c
··· 26 26 #include <drm/drm_gem_atomic_helper.h> 27 27 #include <drm/drm_gem_dma_helper.h> 28 28 #include <drm/drm_plane.h> 29 + #include <drm/drm_print.h> 29 30 #include <drm/drm_vblank.h> 30 31 31 32 #include "lcdif_drv.h"
+1
drivers/gpu/drm/mxsfb/mxsfb_kms.c
··· 26 26 #include <drm/drm_gem_atomic_helper.h> 27 27 #include <drm/drm_gem_dma_helper.h> 28 28 #include <drm/drm_plane.h> 29 + #include <drm/drm_print.h> 29 30 #include <drm/drm_vblank.h> 30 31 31 32 #include "mxsfb_drv.h"
+1
drivers/gpu/drm/nouveau/nouveau_drv.h
··· 49 49 #include <drm/drm_device.h> 50 50 #include <drm/drm_drv.h> 51 51 #include <drm/drm_file.h> 52 + #include <drm/drm_print.h> 52 53 53 54 #include <drm/ttm/ttm_bo.h> 54 55 #include <drm/ttm/ttm_placement.h>
+4 -2
drivers/gpu/drm/nouveau/nouveau_ttm.c
··· 302 302 ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev, 303 303 dev->anon_inode->i_mapping, 304 304 dev->vma_offset_manager, 305 - drm_need_swiotlb(drm->client.mmu.dmabits), 306 - drm->client.mmu.dmabits <= 32); 305 + (drm_need_swiotlb(drm->client.mmu.dmabits) ? 306 + TTM_ALLOCATION_POOL_USE_DMA_ALLOC : 0) | 307 + (drm->client.mmu.dmabits <= 32 ? 308 + TTM_ALLOCATION_POOL_USE_DMA32 : 0)); 307 309 if (ret) { 308 310 NV_ERROR(drm, "error initialising bo driver, %d\n", ret); 309 311 return ret;
+1
drivers/gpu/drm/omapdrm/omap_crtc.c
··· 10 10 #include <drm/drm_atomic_helper.h> 11 11 #include <drm/drm_crtc.h> 12 12 #include <drm/drm_mode.h> 13 + #include <drm/drm_print.h> 13 14 #include <drm/drm_vblank.h> 14 15 15 16 #include "omap_drv.h"
+1
drivers/gpu/drm/omapdrm/omap_debugfs.c
··· 11 11 #include <drm/drm_file.h> 12 12 #include <drm/drm_fb_helper.h> 13 13 #include <drm/drm_framebuffer.h> 14 + #include <drm/drm_print.h> 14 15 15 16 #include "omap_drv.h" 16 17 #include "omap_dmm_tiler.h"
+2
drivers/gpu/drm/omapdrm/omap_dmm_tiler.c
··· 26 26 #include <linux/vmalloc.h> 27 27 #include <linux/wait.h> 28 28 29 + #include <drm/drm_print.h> 30 + 29 31 #include "omap_dmm_tiler.h" 30 32 #include "omap_dmm_priv.h" 31 33
+1
drivers/gpu/drm/omapdrm/omap_drv.c
··· 19 19 #include <drm/drm_ioctl.h> 20 20 #include <drm/drm_panel.h> 21 21 #include <drm/drm_prime.h> 22 + #include <drm/drm_print.h> 22 23 #include <drm/drm_probe_helper.h> 23 24 #include <drm/drm_vblank.h> 24 25
+1
drivers/gpu/drm/omapdrm/omap_fb.c
··· 12 12 #include <drm/drm_fourcc.h> 13 13 #include <drm/drm_framebuffer.h> 14 14 #include <drm/drm_gem_framebuffer_helper.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "omap_dmm_tiler.h" 17 18 #include "omap_drv.h"
+1
drivers/gpu/drm/omapdrm/omap_fbdev.c
··· 15 15 #include <drm/drm_framebuffer.h> 16 16 #include <drm/drm_gem_framebuffer_helper.h> 17 17 #include <drm/drm_managed.h> 18 + #include <drm/drm_print.h> 18 19 #include <drm/drm_util.h> 19 20 20 21 #include "omap_drv.h"
+1
drivers/gpu/drm/omapdrm/omap_gem.c
··· 12 12 13 13 #include <drm/drm_dumb_buffers.h> 14 14 #include <drm/drm_prime.h> 15 + #include <drm/drm_print.h> 15 16 #include <drm/drm_vma_manager.h> 16 17 17 18 #include "omap_drv.h"
+1
drivers/gpu/drm/omapdrm/omap_irq.c
··· 5 5 */ 6 6 7 7 #include <drm/drm_vblank.h> 8 + #include <drm/drm_print.h> 8 9 9 10 #include "omap_drv.h" 10 11
+1
drivers/gpu/drm/omapdrm/omap_overlay.c
··· 6 6 7 7 #include <drm/drm_atomic.h> 8 8 #include <drm/drm_atomic_helper.h> 9 + #include <drm/drm_print.h> 9 10 10 11 #include "omap_dmm_tiler.h" 11 12 #include "omap_drv.h"
+1
drivers/gpu/drm/omapdrm/omap_plane.c
··· 10 10 #include <drm/drm_gem_atomic_helper.h> 11 11 #include <drm/drm_fourcc.h> 12 12 #include <drm/drm_framebuffer.h> 13 + #include <drm/drm_print.h> 13 14 14 15 #include "omap_dmm_tiler.h" 15 16 #include "omap_drv.h"
+24
drivers/gpu/drm/panel/Kconfig
··· 801 801 select DRM_MIPI_DSI 802 802 select VIDEOMODE_HELPERS 803 803 804 + config DRM_PANEL_SAMSUNG_S6E3FC2X01 805 + tristate "Samsung S6E3FC2X01 DSI panel controller" 806 + depends on OF 807 + depends on DRM_MIPI_DSI 808 + depends on BACKLIGHT_CLASS_DEVICE 809 + select VIDEOMODE_HELPERS 810 + help 811 + Say Y or M here if you want to enable support for the 812 + Samsung S6E3FC2 DDIC and connected MIPI DSI panel. 813 + Currently supported panels: 814 + 815 + Samsung AMS641RW (found in the OnePlus 6T smartphone) 816 + 804 817 config DRM_PANEL_SAMSUNG_S6E3HA2 805 818 tristate "Samsung S6E3HA2 DSI video mode panel" 806 819 depends on OF ··· 1072 1059 help 1073 1060 Say Y if you want to enable support for panels based on the 1074 1061 Synaptics R63353 controller. 1062 + 1063 + config DRM_PANEL_SYNAPTICS_TDDI 1064 + tristate "Synaptics TDDI display panels" 1065 + depends on OF 1066 + depends on DRM_MIPI_DSI 1067 + depends on BACKLIGHT_CLASS_DEVICE 1068 + help 1069 + Say Y if you want to enable support for the Synaptics TDDI display 1070 + panels. There are multiple MIPI DSI panels manufactured under the TDDI 1071 + namesake, with varying resolutions and data lanes. They also have a 1072 + built-in LED backlight and a touch controller. 1075 1073 1076 1074 config DRM_PANEL_TDO_TL070WSH30 1077 1075 tristate "TDO TL070WSH30 DSI panel"
+2
drivers/gpu/drm/panel/Makefile
··· 79 79 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6D27A1) += panel-samsung-s6d27a1.o 80 80 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6D7AA0) += panel-samsung-s6d7aa0.o 81 81 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3FA7) += panel-samsung-s6e3fa7.o 82 + obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3FC2X01) += panel-samsung-s6e3fc2x01.o 82 83 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3HA2) += panel-samsung-s6e3ha2.o 83 84 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E3HA8) += panel-samsung-s6e3ha8.o 84 85 obj-$(CONFIG_DRM_PANEL_SAMSUNG_S6E63J0X03) += panel-samsung-s6e63j0x03.o ··· 102 101 obj-$(CONFIG_DRM_PANEL_SITRONIX_ST7789V) += panel-sitronix-st7789v.o 103 102 obj-$(CONFIG_DRM_PANEL_SUMMIT) += panel-summit.o 104 103 obj-$(CONFIG_DRM_PANEL_SYNAPTICS_R63353) += panel-synaptics-r63353.o 104 + obj-$(CONFIG_DRM_PANEL_SYNAPTICS_TDDI) += panel-synaptics-tddi.o 105 105 obj-$(CONFIG_DRM_PANEL_SONY_ACX565AKM) += panel-sony-acx565akm.o 106 106 obj-$(CONFIG_DRM_PANEL_SONY_TD4353_JDI) += panel-sony-td4353-jdi.o 107 107 obj-$(CONFIG_DRM_PANEL_SONY_TULIP_TRULY_NT35521) += panel-sony-tulip-truly-nt35521.o
+69
drivers/gpu/drm/panel/panel-ilitek-ili9882t.c
··· 61 61 mipi_dsi_dcs_write_seq_multi(ctx, ILI9882T_DCS_SWITCH_PAGE, \
62 62 0x98, 0x82, (page))
63 63 
64 + /* IL79900A-specific commands, add new commands as you decode them */
65 + #define IL79900A_DCS_SWITCH_PAGE 0xFF
66 + 
67 + #define il79900a_switch_page(ctx, page) \
68 + mipi_dsi_dcs_write_seq_multi(ctx, IL79900A_DCS_SWITCH_PAGE, \
69 + 0x5a, 0xa5, (page))
70 + 
64 71 static int starry_ili9882t_init(struct ili9882t *ili)
65 72 {
66 73 struct mipi_dsi_multi_context ctx = { .dsi = ili->dsi };
··· 420 413 return ctx.accum_err;
421 414 };
422 415 
416 + static int tianma_il79900a_init(struct ili9882t *ili)
417 + {
418 + struct mipi_dsi_multi_context ctx = { .dsi = ili->dsi };
419 + 
420 + mipi_dsi_usleep_range(&ctx, 5000, 5100);
421 + 
422 + il79900a_switch_page(&ctx, 0x06);
423 + mipi_dsi_dcs_write_seq_multi(&ctx, 0x3e, 0x62);
424 + 
425 + il79900a_switch_page(&ctx, 0x02);
426 + mipi_dsi_dcs_write_seq_multi(&ctx, 0x1b, 0x20);
427 + mipi_dsi_dcs_write_seq_multi(&ctx, 0x5d, 0x00);
428 + mipi_dsi_dcs_write_seq_multi(&ctx, 0x5e, 0x40);
429 + 
430 + il79900a_switch_page(&ctx, 0x07);
431 + mipi_dsi_dcs_write_seq_multi(&ctx, 0X29, 0x00);
432 + 
433 + il79900a_switch_page(&ctx, 0x06);
434 + mipi_dsi_dcs_write_seq_multi(&ctx, 0x92, 0x22);
435 + 
436 + il79900a_switch_page(&ctx, 0x00);
437 + mipi_dsi_dcs_exit_sleep_mode_multi(&ctx);
438 + 
439 + mipi_dsi_msleep(&ctx, 120);
440 + 
441 + mipi_dsi_dcs_set_display_on_multi(&ctx);
442 + 
443 + mipi_dsi_msleep(&ctx, 80);
444 + 
445 + return 0;
446 + };
447 + 
423 448 static inline struct ili9882t *to_ili9882t(struct drm_panel *panel)
424 449 {
425 450 return container_of(panel, struct ili9882t, base);
··· 568 529 .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
569 530 };
570 531 
532 + static const struct drm_display_mode tianma_il79900a_default_mode = {
533 + .clock = 264355,
534 + .hdisplay = 1600,
535 + .hsync_start = 1600 + 20,
536 + .hsync_end = 1600 + 20 + 4,
537 + .htotal = 1600 + 20 + 4 + 20,
538 + .vdisplay = 2560,
539 + .vsync_start = 2560 + 82,
540 + .vsync_end = 2560 + 82 + 2,
541 + .vtotal = 2560 + 82 + 2 + 36,
542 + .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED,
543 + };
544 + 
571 545 static const struct panel_desc starry_ili9882t_desc = {
572 546 .modes = &starry_ili9882t_default_mode,
573 547 .bpc = 8,
··· 593 541 .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
594 542 MIPI_DSI_MODE_LPM,
595 543 .init = starry_ili9882t_init,
544 + };
545 + 
546 + static const struct panel_desc tianma_tl121bvms07_desc = {
547 + .modes = &tianma_il79900a_default_mode,
548 + .bpc = 8,
549 + .size = {
550 + .width_mm = 163,
551 + .height_mm = 260,
552 + },
553 + .lanes = 3,
554 + .format = MIPI_DSI_FMT_RGB888,
555 + .mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_SYNC_PULSE |
556 + MIPI_DSI_MODE_LPM,
557 + .init = tianma_il79900a_init,
596 558 };
597 559 
598 560 static int ili9882t_get_modes(struct drm_panel *panel,
··· 745 679 static const struct of_device_id ili9882t_of_match[] = {
746 680 { .compatible = "starry,ili9882t",
747 681 .data = &starry_ili9882t_desc
682 + },
683 + { .compatible = "tianma,tl121bvms07-00",
684 + .data = &tianma_tl121bvms07_desc
748 685 },
749 686 { /* sentinel */ }
750 687 };
+74 -334
drivers/gpu/drm/panel/panel-newvision-nv3052c.c
··· 43 43 struct gpio_desc *reset_gpio;
44 44 };
45 45 
46 - static const struct nv3052c_reg ltk035c5444t_panel_regs[] = {
47 - // EXTC Command set enable, select page 1
48 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x01 },
49 - // Mostly unknown registers
50 - { 0xe3, 0x00 },
51 - { 0x40, 0x00 },
52 - { 0x03, 0x40 },
53 - { 0x04, 0x00 },
54 - { 0x05, 0x03 },
55 - { 0x08, 0x00 },
56 - { 0x09, 0x07 },
57 - { 0x0a, 0x01 },
58 - { 0x0b, 0x32 },
59 - { 0x0c, 0x32 },
60 - { 0x0d, 0x0b },
61 - { 0x0e, 0x00 },
62 - { 0x23, 0xa0 },
63 - { 0x24, 0x0c },
64 - { 0x25, 0x06 },
65 - { 0x26, 0x14 },
66 - { 0x27, 0x14 },
67 - { 0x38, 0xcc }, // VCOM_ADJ1
68 - { 0x39, 0xd7 }, // VCOM_ADJ2
69 - { 0x3a, 0x4a }, // VCOM_ADJ3
70 - { 0x28, 0x40 },
71 - { 0x29, 0x01 },
72 - { 0x2a, 0xdf },
73 - { 0x49, 0x3c },
74 - { 0x91, 0x77 }, // EXTPW_CTRL2
75 - { 0x92, 0x77 }, // EXTPW_CTRL3
76 - { 0xa0, 0x55 },
77 - { 0xa1, 0x50 },
78 - { 0xa4, 0x9c },
79 - { 0xa7, 0x02 },
80 - { 0xa8, 0x01 },
81 - { 0xa9, 0x01 },
82 - { 0xaa, 0xfc },
83 - { 0xab, 0x28 },
84 - { 0xac, 0x06 },
85 - { 0xad, 0x06 },
86 - { 0xae, 0x06 },
87 - { 0xaf, 0x03 },
88 - { 0xb0, 0x08 },
89 - { 0xb1, 0x26 },
90 - { 0xb2, 0x28 },
91 - { 0xb3, 0x28 },
92 - { 0xb4, 0x33 },
93 - { 0xb5, 0x08 },
94 - { 0xb6, 0x26 },
95 - { 0xb7, 0x08 },
96 - { 0xb8, 0x26 },
97 - { 0xf0, 0x00 },
98 - { 0xf6, 0xc0 },
46 + /*
47 + * Common initialization registers for all currently
48 + * supported displays. Mostly seem to be related
49 + * to Gamma correction curves and output pad mappings.
50 + */
51 + static const struct nv3052c_reg common_init_regs[] = {
99 52 // EXTC Command set enable, select page 2
100 53 { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x02 },
// Set gray scale voltage to adjust gamma
··· 168 215 { 0xa0, 0x01 }, // PANELU2D33
169 216 // EXTC Command set enable, select page 2
170 217 { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x02 },
171 - // Unknown registers
218 + // Page 2 register values (0x01..0x10) are same for nv3051d and nv3052c
172 219 { 0x01, 0x01 },
173 220 { 0x02, 0xda },
174 221 { 0x03, 0xba },
··· 189 236 { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x00 },
190 237 // Display Access Control
191 238 { 0x36, 0x0a }, // bgr = 1, ss = 1, gs = 0
239 + 
240 + };
241 + 
242 + static const struct nv3052c_reg ltk035c5444t_panel_regs[] = {
243 + // EXTC Command set enable, select page 1
244 + { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x01 },
245 + // Mostly unknown registers
246 + { 0xe3, 0x00 },
247 + { 0x40, 0x00 },
248 + { 0x03, 0x40 },
249 + { 0x04, 0x00 },
250 + { 0x05, 0x03 },
251 + { 0x08, 0x00 },
252 + { 0x09, 0x07 },
253 + { 0x0a, 0x01 },
254 + { 0x0b, 0x32 },
255 + { 0x0c, 0x32 },
256 + { 0x0d, 0x0b },
257 + { 0x0e, 0x00 },
258 + { 0x23, 0xa0 },
259 + { 0x24, 0x0c },
260 + { 0x25, 0x06 },
261 + { 0x26, 0x14 },
262 + { 0x27, 0x14 },
263 + { 0x38, 0xcc }, // VCOM_ADJ1
264 + { 0x39, 0xd7 }, // VCOM_ADJ2
265 + { 0x3a, 0x4a }, // VCOM_ADJ3
266 + { 0x28, 0x40 },
267 + { 0x29, 0x01 },
268 + { 0x2a, 0xdf },
269 + { 0x49, 0x3c },
270 + { 0x91, 0x77 }, // EXTPW_CTRL2
271 + { 0x92, 0x77 }, // EXTPW_CTRL3
272 + { 0xa0, 0x55 },
273 + { 0xa1, 0x50 },
274 + { 0xa4, 0x9c },
275 + { 0xa7, 0x02 },
276 + { 0xa8, 0x01 },
277 + { 0xa9, 0x01 },
278 + { 0xaa, 0xfc },
279 + { 0xab, 0x28 },
280 + { 0xac, 0x06 },
281 + { 0xad, 0x06 },
282 + { 0xae, 0x06 },
283 + { 0xaf, 0x03 },
284 + { 0xb0, 0x08 },
285 + { 0xb1, 0x26 },
286 + { 0xb2, 0x28 },
287 + { 0xb3, 0x28 },
288 + { 0xb4, 0x33 },
289 + { 0xb5, 0x08 },
290 + { 0xb6, 0x26 },
291 + { 0xb7, 0x08 },
292 + { 0xb8, 0x26 },
293 + { 0xf0, 0x00 },
294 + { 0xf6, 0xc0 },
192 295 };
193 296 
194 297 static const struct nv3052c_reg fs035vg158_panel_regs[] = {
··· 300 291 { 0xb8, 0x26 },
301 292 { 0xf0, 0x00 },
302 293 { 0xf6, 0xc0 },
303 - // EXTC Command set enable, select page 0
304 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x02 },
305 - // Set gray scale voltage to adjust gamma
306 - { 0xb0, 0x0b }, // PGAMVR0
307 - { 0xb1, 0x16 }, // PGAMVR1
308 - { 0xb2, 0x17 }, // PGAMVR2
309 - { 0xb3, 0x2c }, // PGAMVR3
310 - { 0xb4, 0x32 }, // PGAMVR4
311 - { 0xb5, 0x3b }, // PGAMVR5
312 - { 0xb6, 0x29 }, // PGAMPR0
313 - { 0xb7, 0x40 }, // PGAMPR1
314 - { 0xb8, 0x0d }, // PGAMPK0
315 - { 0xb9, 0x05 }, // PGAMPK1
316 - { 0xba, 0x12 }, // PGAMPK2
317 - { 0xbb, 0x10 }, // PGAMPK3
318 - { 0xbc, 0x12 }, // PGAMPK4
319 - { 0xbd, 0x15 }, // PGAMPK5
320 - { 0xbe, 0x19 }, // PGAMPK6
321 - { 0xbf, 0x0e }, // PGAMPK7
322 - { 0xc0, 0x16 }, // PGAMPK8
323 - { 0xc1, 0x0a }, // PGAMPK9
324 - // Set gray scale voltage to adjust gamma
325 - { 0xd0, 0x0c }, // NGAMVR0
326 - { 0xd1, 0x17 }, // NGAMVR0
327 - { 0xd2, 0x14 }, // NGAMVR1
328 - { 0xd3, 0x2e }, // NGAMVR2
329 - { 0xd4, 0x32 }, // NGAMVR3
330 - { 0xd5, 0x3c }, // NGAMVR4
331 - { 0xd6, 0x22 }, // NGAMPR0
332 - { 0xd7, 0x3d }, // NGAMPR1
333 - { 0xd8, 0x0d }, // NGAMPK0
334 - { 0xd9, 0x07 }, // NGAMPK1
335 - { 0xda, 0x13 }, // NGAMPK2
336 - { 0xdb, 0x13 }, // NGAMPK3
337 - { 0xdc, 0x11 }, // NGAMPK4
338 - { 0xdd, 0x15 }, // NGAMPK5
339 - { 0xde, 0x19 }, // NGAMPK6
340 - { 0xdf, 0x10 }, // NGAMPK7
341 - { 0xe0, 0x17 }, // NGAMPK8
342 - { 0xe1, 0x0a }, // NGAMPK9
343 - // EXTC Command set enable, select page 3
344 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x03 },
345 - // Set various timing settings
346 - { 0x00, 0x2a }, // GIP_VST_1
347 - { 0x01, 0x2a }, // GIP_VST_2
348 - { 0x02, 0x2a }, // GIP_VST_3
349 - { 0x03, 0x2a }, // GIP_VST_4
350 - { 0x04, 0x61 }, // GIP_VST_5
351 - { 0x05, 0x80 }, // GIP_VST_6
352 - { 0x06, 0xc7 }, // GIP_VST_7
353
- { 0x07, 0x01 }, // GIP_VST_8 354 - { 0x08, 0x03 }, // GIP_VST_9 355 - { 0x09, 0x04 }, // GIP_VST_10 356 - { 0x70, 0x22 }, // GIP_ECLK1 357 - { 0x71, 0x80 }, // GIP_ECLK2 358 - { 0x30, 0x2a }, // GIP_CLK_1 359 - { 0x31, 0x2a }, // GIP_CLK_2 360 - { 0x32, 0x2a }, // GIP_CLK_3 361 - { 0x33, 0x2a }, // GIP_CLK_4 362 - { 0x34, 0x61 }, // GIP_CLK_5 363 - { 0x35, 0xc5 }, // GIP_CLK_6 364 - { 0x36, 0x80 }, // GIP_CLK_7 365 - { 0x37, 0x23 }, // GIP_CLK_8 366 - { 0x40, 0x03 }, // GIP_CLKA_1 367 - { 0x41, 0x04 }, // GIP_CLKA_2 368 - { 0x42, 0x05 }, // GIP_CLKA_3 369 - { 0x43, 0x06 }, // GIP_CLKA_4 370 - { 0x44, 0x11 }, // GIP_CLKA_5 371 - { 0x45, 0xe8 }, // GIP_CLKA_6 372 - { 0x46, 0xe9 }, // GIP_CLKA_7 373 - { 0x47, 0x11 }, // GIP_CLKA_8 374 - { 0x48, 0xea }, // GIP_CLKA_9 375 - { 0x49, 0xeb }, // GIP_CLKA_10 376 - { 0x50, 0x07 }, // GIP_CLKB_1 377 - { 0x51, 0x08 }, // GIP_CLKB_2 378 - { 0x52, 0x09 }, // GIP_CLKB_3 379 - { 0x53, 0x0a }, // GIP_CLKB_4 380 - { 0x54, 0x11 }, // GIP_CLKB_5 381 - { 0x55, 0xec }, // GIP_CLKB_6 382 - { 0x56, 0xed }, // GIP_CLKB_7 383 - { 0x57, 0x11 }, // GIP_CLKB_8 384 - { 0x58, 0xef }, // GIP_CLKB_9 385 - { 0x59, 0xf0 }, // GIP_CLKB_10 386 - // Map internal GOA signals to GOA output pad 387 - { 0xb1, 0x01 }, // PANELD2U2 388 - { 0xb4, 0x15 }, // PANELD2U5 389 - { 0xb5, 0x16 }, // PANELD2U6 390 - { 0xb6, 0x09 }, // PANELD2U7 391 - { 0xb7, 0x0f }, // PANELD2U8 392 - { 0xb8, 0x0d }, // PANELD2U9 393 - { 0xb9, 0x0b }, // PANELD2U10 394 - { 0xba, 0x00 }, // PANELD2U11 395 - { 0xc7, 0x02 }, // PANELD2U24 396 - { 0xca, 0x17 }, // PANELD2U27 397 - { 0xcb, 0x18 }, // PANELD2U28 398 - { 0xcc, 0x0a }, // PANELD2U29 399 - { 0xcd, 0x10 }, // PANELD2U30 400 - { 0xce, 0x0e }, // PANELD2U31 401 - { 0xcf, 0x0c }, // PANELD2U32 402 - { 0xd0, 0x00 }, // PANELD2U33 403 - // Map internal GOA signals to GOA output pad 404 - { 0x81, 0x00 }, // PANELU2D2 405 - { 0x84, 0x15 }, // PANELU2D5 406 - { 0x85, 0x16 }, // PANELU2D6 407 - { 0x86, 0x10 }, // PANELU2D7 408 - { 
0x87, 0x0a }, // PANELU2D8 409 - { 0x88, 0x0c }, // PANELU2D9 410 - { 0x89, 0x0e }, // PANELU2D10 411 - { 0x8a, 0x02 }, // PANELU2D11 412 - { 0x97, 0x00 }, // PANELU2D24 413 - { 0x9a, 0x17 }, // PANELU2D27 414 - { 0x9b, 0x18 }, // PANELU2D28 415 - { 0x9c, 0x0f }, // PANELU2D29 416 - { 0x9d, 0x09 }, // PANELU2D30 417 - { 0x9e, 0x0b }, // PANELU2D31 418 - { 0x9f, 0x0d }, // PANELU2D32 419 - { 0xa0, 0x01 }, // PANELU2D33 420 - // EXTC Command set enable, select page 2 421 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x02 }, 422 - // Unknown registers 423 - { 0x01, 0x01 }, 424 - { 0x02, 0xda }, 425 - { 0x03, 0xba }, 426 - { 0x04, 0xa8 }, 427 - { 0x05, 0x9a }, 428 - { 0x06, 0x70 }, 429 - { 0x07, 0xff }, 430 - { 0x08, 0x91 }, 431 - { 0x09, 0x90 }, 432 - { 0x0a, 0xff }, 433 - { 0x0b, 0x8f }, 434 - { 0x0c, 0x60 }, 435 - { 0x0d, 0x58 }, 436 - { 0x0e, 0x48 }, 437 - { 0x0f, 0x38 }, 438 - { 0x10, 0x2b }, 439 - // EXTC Command set enable, select page 0 440 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x00 }, 441 - // Display Access Control 442 - { 0x36, 0x0a }, // bgr = 1, ss = 1, gs = 0 443 294 }; 444 295 445 296 ··· 356 487 { 0xb8, 0x26 }, 357 488 { 0xf0, 0x00 }, 358 489 { 0xf6, 0xc0 }, 359 - // EXTC Command set enable, select page 2 360 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x02 }, 361 - // Set gray scale voltage to adjust gamma 362 - { 0xb0, 0x0b }, // PGAMVR0 363 - { 0xb1, 0x16 }, // PGAMVR1 364 - { 0xb2, 0x17 }, // PGAMVR2 365 - { 0xb3, 0x2c }, // PGAMVR3 366 - { 0xb4, 0x32 }, // PGAMVR4 367 - { 0xb5, 0x3b }, // PGAMVR5 368 - { 0xb6, 0x29 }, // PGAMPR0 369 - { 0xb7, 0x40 }, // PGAMPR1 370 - { 0xb8, 0x0d }, // PGAMPK0 371 - { 0xb9, 0x05 }, // PGAMPK1 372 - { 0xba, 0x12 }, // PGAMPK2 373 - { 0xbb, 0x10 }, // PGAMPK3 374 - { 0xbc, 0x12 }, // PGAMPK4 375 - { 0xbd, 0x15 }, // PGAMPK5 376 - { 0xbe, 0x19 }, // PGAMPK6 377 - { 0xbf, 0x0e }, // PGAMPK7 378 - { 0xc0, 0x16 }, // PGAMPK8 379 - { 0xc1, 0x0a }, // PGAMPK9 380 - // Set gray scale voltage to adjust gamma 381 - { 0xd0, 0x0c 
}, // NGAMVR0 382 - { 0xd1, 0x17 }, // NGAMVR0 383 - { 0xd2, 0x14 }, // NGAMVR1 384 - { 0xd3, 0x2e }, // NGAMVR2 385 - { 0xd4, 0x32 }, // NGAMVR3 386 - { 0xd5, 0x3c }, // NGAMVR4 387 - { 0xd6, 0x22 }, // NGAMPR0 388 - { 0xd7, 0x3d }, // NGAMPR1 389 - { 0xd8, 0x0d }, // NGAMPK0 390 - { 0xd9, 0x07 }, // NGAMPK1 391 - { 0xda, 0x13 }, // NGAMPK2 392 - { 0xdb, 0x13 }, // NGAMPK3 393 - { 0xdc, 0x11 }, // NGAMPK4 394 - { 0xdd, 0x15 }, // NGAMPK5 395 - { 0xde, 0x19 }, // NGAMPK6 396 - { 0xdf, 0x10 }, // NGAMPK7 397 - { 0xe0, 0x17 }, // NGAMPK8 398 - { 0xe1, 0x0a }, // NGAMPK9 399 - // EXTC Command set enable, select page 3 400 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x03 }, 401 - // Set various timing settings 402 - { 0x00, 0x2a }, // GIP_VST_1 403 - { 0x01, 0x2a }, // GIP_VST_2 404 - { 0x02, 0x2a }, // GIP_VST_3 405 - { 0x03, 0x2a }, // GIP_VST_4 406 - { 0x04, 0x61 }, // GIP_VST_5 407 - { 0x05, 0x80 }, // GIP_VST_6 408 - { 0x06, 0xc7 }, // GIP_VST_7 409 - { 0x07, 0x01 }, // GIP_VST_8 410 - { 0x08, 0x03 }, // GIP_VST_9 411 - { 0x09, 0x04 }, // GIP_VST_10 412 - { 0x70, 0x22 }, // GIP_ECLK1 413 - { 0x71, 0x80 }, // GIP_ECLK2 414 - { 0x30, 0x2a }, // GIP_CLK_1 415 - { 0x31, 0x2a }, // GIP_CLK_2 416 - { 0x32, 0x2a }, // GIP_CLK_3 417 - { 0x33, 0x2a }, // GIP_CLK_4 418 - { 0x34, 0x61 }, // GIP_CLK_5 419 - { 0x35, 0xc5 }, // GIP_CLK_6 420 - { 0x36, 0x80 }, // GIP_CLK_7 421 - { 0x37, 0x23 }, // GIP_CLK_8 422 - { 0x40, 0x03 }, // GIP_CLKA_1 423 - { 0x41, 0x04 }, // GIP_CLKA_2 424 - { 0x42, 0x05 }, // GIP_CLKA_3 425 - { 0x43, 0x06 }, // GIP_CLKA_4 426 - { 0x44, 0x11 }, // GIP_CLKA_5 427 - { 0x45, 0xe8 }, // GIP_CLKA_6 428 - { 0x46, 0xe9 }, // GIP_CLKA_7 429 - { 0x47, 0x11 }, // GIP_CLKA_8 430 - { 0x48, 0xea }, // GIP_CLKA_9 431 - { 0x49, 0xeb }, // GIP_CLKA_10 432 - { 0x50, 0x07 }, // GIP_CLKB_1 433 - { 0x51, 0x08 }, // GIP_CLKB_2 434 - { 0x52, 0x09 }, // GIP_CLKB_3 435 - { 0x53, 0x0a }, // GIP_CLKB_4 436 - { 0x54, 0x11 }, // GIP_CLKB_5 437 - { 0x55, 0xec }, // GIP_CLKB_6 438 - { 
0x56, 0xed }, // GIP_CLKB_7 439 - { 0x57, 0x11 }, // GIP_CLKB_8 440 - { 0x58, 0xef }, // GIP_CLKB_9 441 - { 0x59, 0xf0 }, // GIP_CLKB_10 442 - // Map internal GOA signals to GOA output pad 443 - { 0xb1, 0x01 }, // PANELD2U2 444 - { 0xb4, 0x15 }, // PANELD2U5 445 - { 0xb5, 0x16 }, // PANELD2U6 446 - { 0xb6, 0x09 }, // PANELD2U7 447 - { 0xb7, 0x0f }, // PANELD2U8 448 - { 0xb8, 0x0d }, // PANELD2U9 449 - { 0xb9, 0x0b }, // PANELD2U10 450 - { 0xba, 0x00 }, // PANELD2U11 451 - { 0xc7, 0x02 }, // PANELD2U24 452 - { 0xca, 0x17 }, // PANELD2U27 453 - { 0xcb, 0x18 }, // PANELD2U28 454 - { 0xcc, 0x0a }, // PANELD2U29 455 - { 0xcd, 0x10 }, // PANELD2U30 456 - { 0xce, 0x0e }, // PANELD2U31 457 - { 0xcf, 0x0c }, // PANELD2U32 458 - { 0xd0, 0x00 }, // PANELD2U33 459 - // Map internal GOA signals to GOA output pad 460 - { 0x81, 0x00 }, // PANELU2D2 461 - { 0x84, 0x15 }, // PANELU2D5 462 - { 0x85, 0x16 }, // PANELU2D6 463 - { 0x86, 0x10 }, // PANELU2D7 464 - { 0x87, 0x0a }, // PANELU2D8 465 - { 0x88, 0x0c }, // PANELU2D9 466 - { 0x89, 0x0e }, // PANELU2D10 467 - { 0x8a, 0x02 }, // PANELU2D11 468 - { 0x97, 0x00 }, // PANELU2D24 469 - { 0x9a, 0x17 }, // PANELU2D27 470 - { 0x9b, 0x18 }, // PANELU2D28 471 - { 0x9c, 0x0f }, // PANELU2D29 472 - { 0x9d, 0x09 }, // PANELU2D30 473 - { 0x9e, 0x0b }, // PANELU2D31 474 - { 0x9f, 0x0d }, // PANELU2D32 475 - { 0xa0, 0x01 }, // PANELU2D33 476 - // EXTC Command set enable, select page 2 477 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x02 }, 478 - // Unknown registers 479 - { 0x01, 0x01 }, 480 - { 0x02, 0xda }, 481 - { 0x03, 0xba }, 482 - { 0x04, 0xa8 }, 483 - { 0x05, 0x9a }, 484 - { 0x06, 0x70 }, 485 - { 0x07, 0xff }, 486 - { 0x08, 0x91 }, 487 - { 0x09, 0x90 }, 488 - { 0x0a, 0xff }, 489 - { 0x0b, 0x8f }, 490 - { 0x0c, 0x60 }, 491 - { 0x0d, 0x58 }, 492 - { 0x0e, 0x48 }, 493 - { 0x0f, 0x38 }, 494 - { 0x10, 0x2b }, 495 - // EXTC Command set enable, select page 0 496 - { 0xff, 0x30 }, { 0xff, 0x52 }, { 0xff, 0x00 }, 497 - // Display Access Control 498 
- { 0x36, 0x0a }, // bgr = 1, ss = 1, gs = 0 499 490 }; 500 491 501 492 static inline struct nv3052c *to_nv3052c(struct drm_panel *panel) ··· 384 655 gpiod_set_value_cansleep(priv->reset_gpio, 0); 385 656 usleep_range(5000, 20000); 386 657 658 + /* Apply panel-specific initialization registers */ 387 659 for (i = 0; i < panel_regs_len; i++) { 388 660 err = mipi_dbi_command(dbi, panel_regs[i].cmd, 389 661 panel_regs[i].val); 390 662 663 + if (err) { 664 + dev_err(priv->dev, "Unable to set register: %d\n", err); 665 + goto err_disable_regulator; 666 + } 667 + } 668 + 669 + /* Apply common initialization registers */ 670 + for (i = 0; i < ARRAY_SIZE(common_init_regs); i++) { 671 + err = mipi_dbi_command(dbi, common_init_regs[i].cmd, 672 + common_init_regs[i].val); 391 673 if (err) { 392 674 dev_err(priv->dev, "Unable to set register: %d\n", err); 393 675 goto err_disable_regulator;
+385
drivers/gpu/drm/panel/panel-samsung-s6e3fc2x01.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2022 Nia Espera <a5b6@riseup.net> 4 + * Copyright (c) 2025 David Heidelberg <david@ixit.cz> 5 + */ 6 + 7 + #include <linux/delay.h> 8 + #include <linux/gpio/consumer.h> 9 + #include <linux/module.h> 10 + #include <linux/of.h> 11 + #include <linux/of_device.h> 12 + #include <linux/regulator/consumer.h> 13 + #include <linux/swab.h> 14 + #include <linux/backlight.h> 15 + 16 + #include <video/mipi_display.h> 17 + 18 + #include <drm/drm_mipi_dsi.h> 19 + #include <drm/drm_modes.h> 20 + #include <drm/drm_panel.h> 21 + #include <drm/drm_probe_helper.h> 22 + 23 + #define MCS_ELVSS_ON 0xb1 24 + 25 + struct samsung_s6e3fc2x01 { 26 + struct drm_panel panel; 27 + struct mipi_dsi_device *dsi; 28 + struct regulator_bulk_data *supplies; 29 + struct gpio_desc *reset_gpio; 30 + }; 31 + 32 + static const struct regulator_bulk_data s6e3fc2x01_supplies[] = { 33 + { .supply = "vddio" }, 34 + { .supply = "vci" }, 35 + { .supply = "poc" }, 36 + }; 37 + 38 + static inline 39 + struct samsung_s6e3fc2x01 *to_samsung_s6e3fc2x01(struct drm_panel *panel) 40 + { 41 + return container_of(panel, struct samsung_s6e3fc2x01, panel); 42 + } 43 + 44 + #define s6e3fc2x01_test_key_on_lvl1(ctx) \ 45 + mipi_dsi_dcs_write_seq_multi(ctx, 0x9f, 0xa5, 0xa5) 46 + #define s6e3fc2x01_test_key_off_lvl1(ctx) \ 47 + mipi_dsi_dcs_write_seq_multi(ctx, 0x9f, 0x5a, 0x5a) 48 + #define s6e3fc2x01_test_key_on_lvl2(ctx) \ 49 + mipi_dsi_dcs_write_seq_multi(ctx, 0xf0, 0x5a, 0x5a) 50 + #define s6e3fc2x01_test_key_off_lvl2(ctx) \ 51 + mipi_dsi_dcs_write_seq_multi(ctx, 0xf0, 0xa5, 0xa5) 52 + #define s6e3fc2x01_test_key_on_lvl3(ctx) \ 53 + mipi_dsi_dcs_write_seq_multi(ctx, 0xfc, 0x5a, 0x5a) 54 + #define s6e3fc2x01_test_key_off_lvl3(ctx) \ 55 + mipi_dsi_dcs_write_seq_multi(ctx, 0xfc, 0xa5, 0xa5) 56 + 57 + static void s6e3fc2x01_reset(struct samsung_s6e3fc2x01 *ctx) 58 + { 59 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 60 + usleep_range(5000, 6000); 61 
+ gpiod_set_value_cansleep(ctx->reset_gpio, 0); 62 + usleep_range(5000, 6000); 63 + } 64 + 65 + static int s6e3fc2x01_on(struct samsung_s6e3fc2x01 *ctx) 66 + { 67 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = ctx->dsi }; 68 + 69 + s6e3fc2x01_test_key_on_lvl1(&dsi_ctx); 70 + 71 + mipi_dsi_dcs_exit_sleep_mode_multi(&dsi_ctx); 72 + 73 + mipi_dsi_usleep_range(&dsi_ctx, 10000, 11000); 74 + 75 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xff, 0x0a); 76 + mipi_dsi_usleep_range(&dsi_ctx, 10000, 11000); 77 + 78 + s6e3fc2x01_test_key_off_lvl1(&dsi_ctx); 79 + 80 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 81 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x01); 82 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xcd, 0x01); 83 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 84 + 85 + mipi_dsi_usleep_range(&dsi_ctx, 15000, 16000); 86 + 87 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xff, 0x0f); 88 + mipi_dsi_usleep_range(&dsi_ctx, 10000, 11000); 89 + 90 + s6e3fc2x01_test_key_on_lvl1(&dsi_ctx); 91 + mipi_dsi_dcs_set_tear_on_multi(&dsi_ctx, MIPI_DSI_DCS_TEAR_MODE_VBLANK); 92 + s6e3fc2x01_test_key_off_lvl1(&dsi_ctx); 93 + 94 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 95 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xeb, 0x17, 96 + 0x41, 0x92, 97 + 0x0e, 0x10, 98 + 0x82, 0x5a); 99 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 100 + 101 + /* Column & Page Address Setting */ 102 + mipi_dsi_dcs_set_column_address_multi(&dsi_ctx, 0x0000, 0x0437); 103 + mipi_dsi_dcs_set_page_address_multi(&dsi_ctx, 0x0000, 0x0923); 104 + 105 + /* Horizontal & Vertical sync Setting */ 106 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 107 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x09); 108 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe8, 0x10, 0x30); 109 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 110 + 111 + s6e3fc2x01_test_key_on_lvl3(&dsi_ctx); 112 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x01); 113 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xe3, 0x88); 114 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x07); 115 + 
mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xed, 0x67); 116 + s6e3fc2x01_test_key_off_lvl3(&dsi_ctx); 117 + 118 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 119 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x07); 120 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb7, 0x01); 121 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x08); 122 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb7, 0x12); 123 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 124 + 125 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MIPI_DCS_WRITE_CONTROL_DISPLAY, 0x20); 126 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MIPI_DCS_WRITE_POWER_SAVE, 0x00); 127 + mipi_dsi_usleep_range(&dsi_ctx, 1000, 2000); 128 + 129 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 130 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ELVSS_ON, 0x00, 0x01); 131 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 132 + 133 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 134 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb3, 0x00, 0xc1); 135 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 136 + 137 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xff, 0x78); 138 + mipi_dsi_usleep_range(&dsi_ctx, 10000, 11000); 139 + 140 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0x81, 0x90); 141 + mipi_dsi_usleep_range(&dsi_ctx, 10000, 11000); 142 + 143 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 144 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x02); 145 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ELVSS_ON, 0xc6, 0x00, 0x00, 146 + 0x21, 0xed, 0x02, 0x08, 0x06, 0xc1, 0x27, 147 + 0xfc, 0xdc, 0xe4, 0x00, 0xd9, 0xe6, 0xe7, 148 + 0x00, 0xfc, 0xff, 0xea); 149 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, MCS_ELVSS_ON, 0x00, 0x00); 150 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 151 + 152 + mipi_dsi_usleep_range(&dsi_ctx, 10000, 11000); 153 + 154 + return dsi_ctx.accum_err; 155 + } 156 + 157 + static int s6e3fc2x01_enable(struct drm_panel *panel) 158 + { 159 + struct samsung_s6e3fc2x01 *ctx = to_samsung_s6e3fc2x01(panel); 160 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = ctx->dsi }; 161 + 162 + 
s6e3fc2x01_test_key_on_lvl1(&dsi_ctx); 163 + mipi_dsi_dcs_set_display_on_multi(&dsi_ctx); 164 + s6e3fc2x01_test_key_off_lvl1(&dsi_ctx); 165 + 166 + return dsi_ctx.accum_err; 167 + } 168 + 169 + static int s6e3fc2x01_off(struct samsung_s6e3fc2x01 *ctx) 170 + { 171 + struct mipi_dsi_multi_context dsi_ctx = { .dsi = ctx->dsi }; 172 + 173 + s6e3fc2x01_test_key_on_lvl1(&dsi_ctx); 174 + 175 + mipi_dsi_dcs_set_display_off_multi(&dsi_ctx); 176 + 177 + mipi_dsi_usleep_range(&dsi_ctx, 10000, 11000); 178 + 179 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 180 + mipi_dsi_usleep_range(&dsi_ctx, 16000, 17000); 181 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x50); 182 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb9, 0x82); 183 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 184 + mipi_dsi_usleep_range(&dsi_ctx, 16000, 17000); 185 + 186 + mipi_dsi_dcs_enter_sleep_mode_multi(&dsi_ctx); 187 + 188 + s6e3fc2x01_test_key_off_lvl1(&dsi_ctx); 189 + 190 + s6e3fc2x01_test_key_on_lvl2(&dsi_ctx); 191 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xb0, 0x05); 192 + mipi_dsi_dcs_write_seq_multi(&dsi_ctx, 0xf4, 0x01); 193 + s6e3fc2x01_test_key_off_lvl2(&dsi_ctx); 194 + mipi_dsi_msleep(&dsi_ctx, 160); 195 + 196 + return dsi_ctx.accum_err; 197 + } 198 + 199 + static int s6e3fc2x01_disable(struct drm_panel *panel) 200 + { 201 + struct samsung_s6e3fc2x01 *ctx = to_samsung_s6e3fc2x01(panel); 202 + 203 + s6e3fc2x01_off(ctx); 204 + 205 + return 0; 206 + } 207 + 208 + static int s6e3fc2x01_prepare(struct drm_panel *panel) 209 + { 210 + struct samsung_s6e3fc2x01 *ctx = to_samsung_s6e3fc2x01(panel); 211 + int ret; 212 + 213 + ret = regulator_bulk_enable(ARRAY_SIZE(s6e3fc2x01_supplies), ctx->supplies); 214 + if (ret < 0) 215 + return ret; 216 + 217 + s6e3fc2x01_reset(ctx); 218 + 219 + ret = s6e3fc2x01_on(ctx); 220 + if (ret < 0) { 221 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 222 + regulator_bulk_disable(ARRAY_SIZE(s6e3fc2x01_supplies), ctx->supplies); 223 + return ret; 224 + } 225 + 226 + return 0; 227 + } 228 + 
229 + static int s6e3fc2x01_unprepare(struct drm_panel *panel) 230 + { 231 + struct samsung_s6e3fc2x01 *ctx = to_samsung_s6e3fc2x01(panel); 232 + 233 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 234 + regulator_bulk_disable(ARRAY_SIZE(s6e3fc2x01_supplies), ctx->supplies); 235 + 236 + return 0; 237 + } 238 + 239 + static const struct drm_display_mode ams641rw_mode = { 240 + .clock = (1080 + 72 + 16 + 36) * (2340 + 32 + 4 + 18) * 60 / 1000, 241 + .hdisplay = 1080, 242 + .hsync_start = 1080 + 72, 243 + .hsync_end = 1080 + 72 + 16, 244 + .htotal = 1080 + 72 + 16 + 36, 245 + .vdisplay = 2340, 246 + .vsync_start = 2340 + 32, 247 + .vsync_end = 2340 + 32 + 4, 248 + .vtotal = 2340 + 32 + 4 + 18, 249 + .width_mm = 68, 250 + .height_mm = 145, 251 + }; 252 + 253 + static int s6e3fc2x01_get_modes(struct drm_panel *panel, 254 + struct drm_connector *connector) 255 + { 256 + return drm_connector_helper_get_modes_fixed(connector, &ams641rw_mode); 257 + } 258 + 259 + static const struct drm_panel_funcs samsung_s6e3fc2x01_panel_funcs = { 260 + .prepare = s6e3fc2x01_prepare, 261 + .enable = s6e3fc2x01_enable, 262 + .disable = s6e3fc2x01_disable, 263 + .unprepare = s6e3fc2x01_unprepare, 264 + .get_modes = s6e3fc2x01_get_modes, 265 + }; 266 + 267 + static int s6e3fc2x01_panel_bl_update_status(struct backlight_device *bl) 268 + { 269 + struct mipi_dsi_device *dsi = bl_get_data(bl); 270 + u16 brightness = backlight_get_brightness(bl); 271 + int err; 272 + 273 + dsi->mode_flags &= ~MIPI_DSI_MODE_LPM; 274 + 275 + err = mipi_dsi_dcs_set_display_brightness_large(dsi, brightness); 276 + if (err < 0) 277 + return err; 278 + 279 + dsi->mode_flags |= MIPI_DSI_MODE_LPM; 280 + 281 + return 0; 282 + } 283 + 284 + static const struct backlight_ops s6e3fc2x01_panel_bl_ops = { 285 + .update_status = s6e3fc2x01_panel_bl_update_status, 286 + }; 287 + 288 + static struct backlight_device * 289 + s6e3fc2x01_create_backlight(struct mipi_dsi_device *dsi) 290 + { 291 + struct device *dev = &dsi->dev; 292 
+ const struct backlight_properties props = { 293 + .type = BACKLIGHT_PLATFORM, 294 + .brightness = 512, 295 + .max_brightness = 1023, 296 + }; 297 + 298 + return devm_backlight_device_register(dev, dev_name(dev), dev, dsi, 299 + &s6e3fc2x01_panel_bl_ops, &props); 300 + } 301 + 302 + static int s6e3fc2x01_probe(struct mipi_dsi_device *dsi) 303 + { 304 + struct device *dev = &dsi->dev; 305 + struct samsung_s6e3fc2x01 *ctx; 306 + int ret; 307 + 308 + ctx = devm_drm_panel_alloc(dev, struct samsung_s6e3fc2x01, panel, 309 + &samsung_s6e3fc2x01_panel_funcs, 310 + DRM_MODE_CONNECTOR_DSI); 311 + if (IS_ERR(ctx)) 312 + return PTR_ERR(ctx); 313 + 314 + ret = devm_regulator_bulk_get_const(dev, 315 + ARRAY_SIZE(s6e3fc2x01_supplies), 316 + s6e3fc2x01_supplies, 317 + &ctx->supplies); 318 + if (ret) 319 + return dev_err_probe(dev, ret, "Failed to get regulators\n"); 320 + 321 + 322 + /* keep the display on for flicker-free experience */ 323 + ctx->reset_gpio = devm_gpiod_get(dev, "reset", GPIOD_OUT_LOW); 324 + if (IS_ERR(ctx->reset_gpio)) 325 + return dev_err_probe(dev, PTR_ERR(ctx->reset_gpio), 326 + "Failed to get reset-gpios\n"); 327 + 328 + ctx->dsi = dsi; 329 + mipi_dsi_set_drvdata(dsi, ctx); 330 + 331 + dsi->lanes = 4; 332 + dsi->format = MIPI_DSI_FMT_RGB888; 333 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO_BURST | 334 + MIPI_DSI_CLOCK_NON_CONTINUOUS | MIPI_DSI_MODE_LPM; 335 + 336 + ctx->panel.prepare_prev_first = true; 337 + 338 + ctx->panel.backlight = s6e3fc2x01_create_backlight(dsi); 339 + if (IS_ERR(ctx->panel.backlight)) 340 + return dev_err_probe(dev, PTR_ERR(ctx->panel.backlight), 341 + "Failed to create backlight\n"); 342 + 343 + drm_panel_add(&ctx->panel); 344 + 345 + ret = mipi_dsi_attach(dsi); 346 + if (ret < 0) { 347 + dev_err(dev, "Failed to attach to DSI host: %d\n", ret); 348 + drm_panel_remove(&ctx->panel); 349 + return ret; 350 + } 351 + 352 + return 0; 353 + } 354 + 355 + static void s6e3fc2x01_remove(struct mipi_dsi_device *dsi) 356 + { 357 + struct 
samsung_s6e3fc2x01 *ctx = mipi_dsi_get_drvdata(dsi); 358 + int ret; 359 + 360 + ret = mipi_dsi_detach(dsi); 361 + if (ret < 0) 362 + dev_err(&dsi->dev, "Failed to detach from DSI host: %d\n", ret); 363 + 364 + drm_panel_remove(&ctx->panel); 365 + } 366 + 367 + static const struct of_device_id s6e3fc2x01_of_match[] = { 368 + { .compatible = "samsung,s6e3fc2x01-ams641rw", .data = &ams641rw_mode }, 369 + { /* sentinel */ } 370 + }; 371 + MODULE_DEVICE_TABLE(of, s6e3fc2x01_of_match); 372 + 373 + static struct mipi_dsi_driver s6e3fc2x01_driver = { 374 + .probe = s6e3fc2x01_probe, 375 + .remove = s6e3fc2x01_remove, 376 + .driver = { 377 + .name = "panel-samsung-s6e3fc2x01", 378 + .of_match_table = s6e3fc2x01_of_match, 379 + }, 380 + }; 381 + module_mipi_dsi_driver(s6e3fc2x01_driver); 382 + 383 + MODULE_AUTHOR("David Heidelberg <david@ixit.cz>"); 384 + MODULE_DESCRIPTION("DRM driver for Samsung S6E3FC2X01 DDIC"); 385 + MODULE_LICENSE("GPL");
+277
drivers/gpu/drm/panel/panel-synaptics-tddi.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Synaptics TDDI display panel driver. 4 + * 5 + * Copyright (C) 2025 Kaustabh Chakraborty <kauschluss@disroot.org> 6 + */ 7 + 8 + #include <linux/backlight.h> 9 + #include <linux/gpio/consumer.h> 10 + #include <linux/module.h> 11 + #include <linux/of.h> 12 + #include <linux/regulator/consumer.h> 13 + 14 + #include <video/mipi_display.h> 15 + 16 + #include <drm/drm_mipi_dsi.h> 17 + #include <drm/drm_modes.h> 18 + #include <drm/drm_panel.h> 19 + #include <drm/drm_probe_helper.h> 20 + 21 + struct tddi_panel_data { 22 + u8 lanes; 23 + /* wait timings for panel enable */ 24 + u8 delay_ms_sleep_exit; 25 + u8 delay_ms_display_on; 26 + /* wait timings for panel disable */ 27 + u8 delay_ms_display_off; 28 + u8 delay_ms_sleep_enter; 29 + }; 30 + 31 + struct tddi_ctx { 32 + struct drm_panel panel; 33 + struct mipi_dsi_device *dsi; 34 + struct drm_display_mode mode; 35 + struct backlight_device *backlight; 36 + const struct tddi_panel_data *data; 37 + struct regulator_bulk_data *supplies; 38 + struct gpio_desc *reset_gpio; 39 + struct gpio_desc *backlight_gpio; 40 + }; 41 + 42 + static const struct regulator_bulk_data tddi_supplies[] = { 43 + { .supply = "vio" }, 44 + { .supply = "vsn" }, 45 + { .supply = "vsp" }, 46 + }; 47 + 48 + static inline struct tddi_ctx *to_tddi_ctx(struct drm_panel *panel) 49 + { 50 + return container_of(panel, struct tddi_ctx, panel); 51 + } 52 + 53 + static int tddi_update_status(struct backlight_device *backlight) 54 + { 55 + struct tddi_ctx *ctx = bl_get_data(backlight); 56 + struct mipi_dsi_multi_context dsi = { .dsi = ctx->dsi }; 57 + u8 brightness = backlight_get_brightness(backlight); 58 + 59 + if (!ctx->panel.enabled) 60 + return 0; 61 + 62 + mipi_dsi_dcs_set_display_brightness_multi(&dsi, brightness); 63 + 64 + return dsi.accum_err; 65 + } 66 + 67 + static int tddi_prepare(struct drm_panel *panel) 68 + { 69 + struct tddi_ctx *ctx = to_tddi_ctx(panel); 70 + struct device *dev = 
&ctx->dsi->dev; 71 + int ret; 72 + 73 + ret = regulator_bulk_enable(ARRAY_SIZE(tddi_supplies), ctx->supplies); 74 + if (ret < 0) { 75 + dev_err(dev, "failed to enable regulators: %d\n", ret); 76 + return ret; 77 + } 78 + 79 + gpiod_set_value_cansleep(ctx->reset_gpio, 0); 80 + usleep_range(5000, 6000); 81 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 82 + usleep_range(5000, 6000); 83 + gpiod_set_value_cansleep(ctx->reset_gpio, 0); 84 + usleep_range(10000, 11000); 85 + 86 + gpiod_set_value_cansleep(ctx->backlight_gpio, 0); 87 + usleep_range(5000, 6000); 88 + 89 + return 0; 90 + } 91 + 92 + static int tddi_unprepare(struct drm_panel *panel) 93 + { 94 + struct tddi_ctx *ctx = to_tddi_ctx(panel); 95 + 96 + gpiod_set_value_cansleep(ctx->backlight_gpio, 1); 97 + usleep_range(5000, 6000); 98 + 99 + gpiod_set_value_cansleep(ctx->reset_gpio, 1); 100 + usleep_range(5000, 6000); 101 + 102 + regulator_bulk_disable(ARRAY_SIZE(tddi_supplies), ctx->supplies); 103 + 104 + return 0; 105 + } 106 + 107 + static int tddi_enable(struct drm_panel *panel) 108 + { 109 + struct tddi_ctx *ctx = to_tddi_ctx(panel); 110 + struct mipi_dsi_multi_context dsi = { .dsi = ctx->dsi }; 111 + u8 brightness = ctx->backlight->props.brightness; 112 + 113 + mipi_dsi_dcs_write_seq_multi(&dsi, MIPI_DCS_WRITE_POWER_SAVE, 0x00); 114 + mipi_dsi_dcs_write_seq_multi(&dsi, MIPI_DCS_WRITE_CONTROL_DISPLAY, 0x0c); 115 + 116 + mipi_dsi_dcs_exit_sleep_mode_multi(&dsi); 117 + mipi_dsi_msleep(&dsi, ctx->data->delay_ms_sleep_exit); 118 + 119 + /* sync the panel with the backlight's brightness level */ 120 + mipi_dsi_dcs_set_display_brightness_multi(&dsi, brightness); 121 + 122 + mipi_dsi_dcs_set_display_on_multi(&dsi); 123 + mipi_dsi_msleep(&dsi, ctx->data->delay_ms_display_on); 124 + 125 + return dsi.accum_err; 126 + }; 127 + 128 + static int tddi_disable(struct drm_panel *panel) 129 + { 130 + struct tddi_ctx *ctx = to_tddi_ctx(panel); 131 + struct mipi_dsi_multi_context dsi = { .dsi = ctx->dsi }; 132 + 133 + 
mipi_dsi_dcs_set_display_off_multi(&dsi); 134 + mipi_dsi_msleep(&dsi, ctx->data->delay_ms_display_off); 135 + 136 + mipi_dsi_dcs_enter_sleep_mode_multi(&dsi); 137 + mipi_dsi_msleep(&dsi, ctx->data->delay_ms_sleep_enter); 138 + 139 + return dsi.accum_err; 140 + } 141 + 142 + static int tddi_get_modes(struct drm_panel *panel, 143 + struct drm_connector *connector) 144 + { 145 + struct tddi_ctx *ctx = to_tddi_ctx(panel); 146 + 147 + return drm_connector_helper_get_modes_fixed(connector, &ctx->mode); 148 + } 149 + 150 + static const struct backlight_ops tddi_bl_ops = { 151 + .update_status = tddi_update_status, 152 + }; 153 + 154 + static const struct backlight_properties tddi_bl_props = { 155 + .type = BACKLIGHT_PLATFORM, 156 + .brightness = 255, 157 + .max_brightness = 255, 158 + }; 159 + 160 + static const struct drm_panel_funcs tddi_drm_panel_funcs = { 161 + .prepare = tddi_prepare, 162 + .unprepare = tddi_unprepare, 163 + .enable = tddi_enable, 164 + .disable = tddi_disable, 165 + .get_modes = tddi_get_modes, 166 + }; 167 + 168 + static int tddi_probe(struct mipi_dsi_device *dsi) 169 + { 170 + struct device *dev = &dsi->dev; 171 + struct tddi_ctx *ctx; 172 + int ret; 173 + 174 + ctx = devm_drm_panel_alloc(dev, struct tddi_ctx, panel, 175 + &tddi_drm_panel_funcs, DRM_MODE_CONNECTOR_DSI); 176 + if (IS_ERR(ctx)) 177 + return PTR_ERR(ctx); 178 + 179 + ctx->data = of_device_get_match_data(dev); 180 + 181 + ctx->dsi = dsi; 182 + mipi_dsi_set_drvdata(dsi, ctx); 183 + 184 + ret = devm_regulator_bulk_get_const(dev, ARRAY_SIZE(tddi_supplies), 185 + tddi_supplies, &ctx->supplies); 186 + if (ret < 0) 187 + return dev_err_probe(dev, ret, "failed to get regulators\n"); 188 + 189 + ctx->backlight_gpio = devm_gpiod_get_optional(dev, "backlight", GPIOD_ASIS); 190 + if (IS_ERR(ctx->backlight_gpio)) 191 + return dev_err_probe(dev, PTR_ERR(ctx->backlight_gpio), 192 + "failed to get backlight-gpios\n"); 193 + 194 + ctx->reset_gpio = devm_gpiod_get_optional(dev, "reset", GPIOD_ASIS); 
195 + if (IS_ERR(ctx->reset_gpio)) 196 + return dev_err_probe(dev, PTR_ERR(ctx->reset_gpio), 197 + "failed to get reset-gpios\n"); 198 + 199 + ret = of_get_drm_panel_display_mode(dev->of_node, &ctx->mode, NULL); 200 + if (ret < 0) 201 + return dev_err_probe(dev, ret, "failed to get panel timings\n"); 202 + 203 + ctx->backlight = devm_backlight_device_register(dev, dev_name(dev), dev, 204 + ctx, &tddi_bl_ops, 205 + &tddi_bl_props); 206 + if (IS_ERR(ctx->backlight)) 207 + return dev_err_probe(dev, PTR_ERR(ctx->backlight), 208 + "failed to register backlight device"); 209 + 210 + dsi->lanes = ctx->data->lanes; 211 + dsi->format = MIPI_DSI_FMT_RGB888; 212 + dsi->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | 213 + MIPI_DSI_MODE_VIDEO_NO_HFP; 214 + 215 + ctx->panel.prepare_prev_first = true; 216 + drm_panel_add(&ctx->panel); 217 + 218 + ret = devm_mipi_dsi_attach(dev, dsi); 219 + if (ret < 0) { 220 + drm_panel_remove(&ctx->panel); 221 + return dev_err_probe(dev, ret, "failed to attach to DSI host\n"); 222 + } 223 + 224 + return 0; 225 + } 226 + 227 + static void tddi_remove(struct mipi_dsi_device *dsi) 228 + { 229 + struct tddi_ctx *ctx = mipi_dsi_get_drvdata(dsi); 230 + 231 + drm_panel_remove(&ctx->panel); 232 + } 233 + 234 + static const struct tddi_panel_data td4101_panel_data = { 235 + .lanes = 2, 236 + /* wait timings for panel enable */ 237 + .delay_ms_sleep_exit = 100, 238 + .delay_ms_display_on = 0, 239 + /* wait timings for panel disable */ 240 + .delay_ms_display_off = 20, 241 + .delay_ms_sleep_enter = 90, 242 + }; 243 + 244 + static const struct tddi_panel_data td4300_panel_data = { 245 + .lanes = 4, 246 + /* wait timings for panel enable */ 247 + .delay_ms_sleep_exit = 100, 248 + .delay_ms_display_on = 0, 249 + /* wait timings for panel disable */ 250 + .delay_ms_display_off = 0, 251 + .delay_ms_sleep_enter = 0, 252 + }; 253 + 254 + static const struct of_device_id tddi_of_device_id[] = { 255 + { 256 + .compatible = "syna,td4101-panel", 257 + 
.data = &td4101_panel_data, 258 + }, { 259 + .compatible = "syna,td4300-panel", 260 + .data = &td4300_panel_data, 261 + }, { } 262 + }; 263 + MODULE_DEVICE_TABLE(of, tddi_of_device_id); 264 + 265 + static struct mipi_dsi_driver tddi_dsi_driver = { 266 + .probe = tddi_probe, 267 + .remove = tddi_remove, 268 + .driver = { 269 + .name = "panel-synaptics-tddi", 270 + .of_match_table = tddi_of_device_id, 271 + }, 272 + }; 273 + module_mipi_dsi_driver(tddi_dsi_driver); 274 + 275 + MODULE_AUTHOR("Kaustabh Chakraborty <kauschluss@disroot.org>"); 276 + MODULE_DESCRIPTION("Synaptics TDDI Display Panel Driver"); 277 + MODULE_LICENSE("GPL");
+2
drivers/gpu/drm/panfrost/panfrost_devfreq.c
··· 8 8 #include <linux/platform_device.h> 9 9 #include <linux/pm_opp.h> 10 10 11 + #include <drm/drm_print.h> 12 + 11 13 #include "panfrost_device.h" 12 14 #include "panfrost_devfreq.h" 13 15
+1
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 16 16 #include <drm/drm_debugfs.h> 17 17 #include <drm/drm_drv.h> 18 18 #include <drm/drm_ioctl.h> 19 + #include <drm/drm_print.h> 19 20 #include <drm/drm_syncobj.h> 20 21 #include <drm/drm_utils.h> 21 22
+1
drivers/gpu/drm/panfrost/panfrost_gem.c
··· 8 8 #include <linux/dma-mapping.h> 9 9 10 10 #include <drm/panfrost_drm.h> 11 + #include <drm/drm_print.h> 11 12 #include "panfrost_device.h" 12 13 #include "panfrost_gem.h" 13 14 #include "panfrost_mmu.h"
+2
drivers/gpu/drm/panfrost/panfrost_gpu.c
··· 12 12 #include <linux/platform_device.h> 13 13 #include <linux/pm_runtime.h> 14 14 15 + #include <drm/drm_print.h> 16 + 15 17 #include "panfrost_device.h" 16 18 #include "panfrost_features.h" 17 19 #include "panfrost_issues.h"
+1
drivers/gpu/drm/panfrost/panfrost_mmu.c
··· 2 2 /* Copyright 2019 Linaro, Ltd, Rob Herring <robh@kernel.org> */ 3 3 4 4 #include <drm/panfrost_drm.h> 5 + #include <drm/drm_print.h> 5 6 6 7 #include <linux/atomic.h> 7 8 #include <linux/bitfield.h>
+50 -13
drivers/gpu/drm/panthor/panthor_devfreq.c
··· 8 8 #include <linux/pm_opp.h> 9 9 10 10 #include <drm/drm_managed.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "panthor_devfreq.h" 13 14 #include "panthor_device.h" ··· 63 62 static int panthor_devfreq_target(struct device *dev, unsigned long *freq, 64 63 u32 flags) 65 64 { 66 - struct panthor_device *ptdev = dev_get_drvdata(dev); 67 65 struct dev_pm_opp *opp; 68 66 int err; 69 67 ··· 72 72 dev_pm_opp_put(opp); 73 73 74 74 err = dev_pm_opp_set_rate(dev, *freq); 75 - if (!err) 76 - ptdev->current_frequency = *freq; 77 75 78 76 return err; 79 77 } ··· 113 115 return 0; 114 116 } 115 117 118 + static int panthor_devfreq_get_cur_freq(struct device *dev, unsigned long *freq) 119 + { 120 + struct panthor_device *ptdev = dev_get_drvdata(dev); 121 + 122 + *freq = clk_get_rate(ptdev->clks.core); 123 + 124 + return 0; 125 + } 126 + 116 127 static struct devfreq_dev_profile panthor_devfreq_profile = { 117 128 .timer = DEVFREQ_TIMER_DELAYED, 118 129 .polling_ms = 50, /* ~3 frames */ 119 130 .target = panthor_devfreq_target, 120 131 .get_dev_status = panthor_devfreq_get_dev_status, 132 + .get_cur_freq = panthor_devfreq_get_cur_freq, 121 133 }; 122 134 123 135 int panthor_devfreq_init(struct panthor_device *ptdev) ··· 142 134 struct thermal_cooling_device *cooling; 143 135 struct device *dev = ptdev->base.dev; 144 136 struct panthor_devfreq *pdevfreq; 137 + struct opp_table *table; 145 138 struct dev_pm_opp *opp; 146 139 unsigned long cur_freq; 147 140 unsigned long freq = ULONG_MAX; ··· 154 145 155 146 ptdev->devfreq = pdevfreq; 156 147 157 - ret = devm_pm_opp_set_regulators(dev, reg_names); 158 - if (ret && ret != -ENODEV) { 159 - if (ret != -EPROBE_DEFER) 160 - DRM_DEV_ERROR(dev, "Couldn't set OPP regulators\n"); 161 - return ret; 162 - } 148 + /* 149 + * The power domain associated with the GPU may have already added an 150 + * OPP table, complete with OPPs, as part of the platform bus 151 + * initialization. 
If this is the case, the power domain is in charge of 152 + * also controlling the performance, with a set_performance callback. 153 + * Only add a new OPP table from DT if there isn't such a table present 154 + * already. 155 + */ 156 + table = dev_pm_opp_get_opp_table(dev); 157 + if (IS_ERR_OR_NULL(table)) { 158 + ret = devm_pm_opp_set_regulators(dev, reg_names); 159 + if (ret && ret != -ENODEV) { 160 + if (ret != -EPROBE_DEFER) 161 + DRM_DEV_ERROR(dev, "Couldn't set OPP regulators\n"); 162 + return ret; 163 + } 163 164 164 - ret = devm_pm_opp_of_add_table(dev); 165 - if (ret) 166 - return ret; 165 + ret = devm_pm_opp_of_add_table(dev); 166 + if (ret) 167 + return ret; 168 + } else { 169 + dev_pm_opp_put_opp_table(table); 170 + } 167 171 168 172 spin_lock_init(&pdevfreq->lock); 169 173 ··· 219 197 return PTR_ERR(opp); 220 198 221 199 panthor_devfreq_profile.initial_freq = cur_freq; 222 - ptdev->current_frequency = cur_freq; 223 200 224 201 /* 225 202 * Set the recommend OPP this will enable and configure the regulator ··· 315 294 pdevfreq->last_busy_state = false; 316 295 317 296 spin_unlock_irqrestore(&pdevfreq->lock, irqflags); 297 + } 298 + 299 + unsigned long panthor_devfreq_get_freq(struct panthor_device *ptdev) 300 + { 301 + struct panthor_devfreq *pdevfreq = ptdev->devfreq; 302 + unsigned long freq = 0; 303 + int ret; 304 + 305 + if (!pdevfreq->devfreq) 306 + return 0; 307 + 308 + ret = pdevfreq->devfreq->profile->get_cur_freq(ptdev->base.dev, &freq); 309 + if (ret) 310 + return 0; 311 + 312 + return freq; 318 313 }
+2
drivers/gpu/drm/panthor/panthor_devfreq.h
··· 18 18 void panthor_devfreq_record_busy(struct panthor_device *ptdev); 19 19 void panthor_devfreq_record_idle(struct panthor_device *ptdev); 20 20 21 + unsigned long panthor_devfreq_get_freq(struct panthor_device *ptdev); 22 + 21 23 #endif /* __PANTHOR_DEVFREQ_H__ */
+20 -3
drivers/gpu/drm/panthor/panthor_device.c
··· 13 13 14 14 #include <drm/drm_drv.h> 15 15 #include <drm/drm_managed.h> 16 + #include <drm/drm_print.h> 16 17 17 18 #include "panthor_devfreq.h" 18 19 #include "panthor_device.h" ··· 66 65 return 0; 67 66 } 68 67 68 + static int panthor_init_power(struct device *dev) 69 + { 70 + struct dev_pm_domain_list *pd_list = NULL; 71 + 72 + if (dev->pm_domain) 73 + return 0; 74 + 75 + return devm_pm_domain_attach_list(dev, NULL, &pd_list); 76 + } 77 + 69 78 void panthor_device_unplug(struct panthor_device *ptdev) 70 79 { 71 80 /* This function can be called from two different path: the reset work ··· 94 83 return; 95 84 } 96 85 86 + drm_WARN_ON(&ptdev->base, pm_runtime_get_sync(ptdev->base.dev) < 0); 87 + 97 88 /* Call drm_dev_unplug() so any access to HW blocks happening after 98 89 * that point get rejected. 99 90 */ ··· 105 92 * future callers will wait on ptdev->unplug.done anyway. 106 93 */ 107 94 mutex_unlock(&ptdev->unplug.lock); 108 - 109 - drm_WARN_ON(&ptdev->base, pm_runtime_get_sync(ptdev->base.dev) < 0); 110 95 111 96 /* Now, try to cleanly shutdown the GPU before the device resources 112 97 * get reclaimed. ··· 131 120 { 132 121 struct panthor_device *ptdev = container_of(ddev, struct panthor_device, base); 133 122 134 - cancel_work_sync(&ptdev->reset.work); 123 + disable_work_sync(&ptdev->reset.work); 135 124 destroy_workqueue(ptdev->reset.wq); 136 125 } 137 126 ··· 231 220 ret = panthor_clk_init(ptdev); 232 221 if (ret) 233 222 return ret; 223 + 224 + ret = panthor_init_power(ptdev->base.dev); 225 + if (ret < 0) { 226 + drm_err(&ptdev->base, "init power domains failed, ret=%d", ret); 227 + return ret; 228 + } 234 229 235 230 ret = panthor_devfreq_init(ptdev); 236 231 if (ret)
-3
drivers/gpu/drm/panthor/panthor_device.h
··· 214 214 /** @profile_mask: User-set profiling flags for job accounting. */ 215 215 u32 profile_mask; 216 216 217 - /** @current_frequency: Device clock frequency at present. Set by DVFS*/ 218 - unsigned long current_frequency; 219 - 220 217 /** @fast_rate: Maximum device clock frequency. Set by DVFS */ 221 218 unsigned long fast_rate; 222 219
+4 -1
drivers/gpu/drm/panthor/panthor_drv.c
··· 20 20 #include <drm/drm_drv.h> 21 21 #include <drm/drm_exec.h> 22 22 #include <drm/drm_ioctl.h> 23 + #include <drm/drm_print.h> 23 24 #include <drm/drm_syncobj.h> 24 25 #include <drm/drm_utils.h> 25 26 #include <drm/gpu_scheduler.h> 26 27 #include <drm/panthor_drm.h> 27 28 29 + #include "panthor_devfreq.h" 28 30 #include "panthor_device.h" 29 31 #include "panthor_fw.h" 30 32 #include "panthor_gem.h" ··· 1521 1519 drm_printf(p, "drm-cycles-panthor:\t%llu\n", pfile->stats.cycles); 1522 1520 1523 1521 drm_printf(p, "drm-maxfreq-panthor:\t%lu Hz\n", ptdev->fast_rate); 1524 - drm_printf(p, "drm-curfreq-panthor:\t%lu Hz\n", ptdev->current_frequency); 1522 + drm_printf(p, "drm-curfreq-panthor:\t%lu Hz\n", 1523 + panthor_devfreq_get_freq(ptdev)); 1525 1524 } 1526 1525 1527 1526 static void panthor_show_internal_memory_stats(struct drm_printer *p, struct drm_file *file)
+2 -1
drivers/gpu/drm/panthor/panthor_fw.c
··· 16 16 17 17 #include <drm/drm_drv.h> 18 18 #include <drm/drm_managed.h> 19 + #include <drm/drm_print.h> 19 20 20 21 #include "panthor_device.h" 21 22 #include "panthor_fw.h" ··· 1164 1163 { 1165 1164 struct panthor_fw_section *section; 1166 1165 1167 - cancel_delayed_work_sync(&ptdev->fw->watchdog.ping_work); 1166 + disable_delayed_work_sync(&ptdev->fw->watchdog.ping_work); 1168 1167 1169 1168 if (!IS_ENABLED(CONFIG_PM) || pm_runtime_active(ptdev->base.dev)) { 1170 1169 /* Make sure the IRQ handler cannot be called after that point. */
+4 -11
drivers/gpu/drm/panthor/panthor_gem.c
··· 8 8 #include <linux/err.h> 9 9 #include <linux/slab.h> 10 10 11 + #include <drm/drm_print.h> 11 12 #include <drm/panthor_drm.h> 12 13 13 14 #include "panthor_device.h" ··· 87 86 void panthor_kernel_bo_destroy(struct panthor_kernel_bo *bo) 88 87 { 89 88 struct panthor_vm *vm; 90 - int ret; 91 89 92 90 if (IS_ERR_OR_NULL(bo)) 93 91 return; ··· 94 94 vm = bo->vm; 95 95 panthor_kernel_bo_vunmap(bo); 96 96 97 - if (drm_WARN_ON(bo->obj->dev, 98 - to_panthor_bo(bo->obj)->exclusive_vm_root_gem != panthor_vm_root_gem(vm))) 99 - goto out_free_bo; 100 - 101 - ret = panthor_vm_unmap_range(vm, bo->va_node.start, bo->va_node.size); 102 - if (ret) 103 - goto out_free_bo; 104 - 97 + drm_WARN_ON(bo->obj->dev, 98 + to_panthor_bo(bo->obj)->exclusive_vm_root_gem != panthor_vm_root_gem(vm)); 99 + panthor_vm_unmap_range(vm, bo->va_node.start, bo->va_node.size); 105 100 panthor_vm_free_va(vm, &bo->va_node); 106 101 drm_gem_object_put(bo->obj); 107 - 108 - out_free_bo: 109 102 panthor_vm_put(vm); 110 103 kfree(bo); 111 104 }
+1
drivers/gpu/drm/panthor/panthor_gpu.c
··· 15 15 16 16 #include <drm/drm_drv.h> 17 17 #include <drm/drm_managed.h> 18 + #include <drm/drm_print.h> 18 19 19 20 #include "panthor_device.h" 20 21 #include "panthor_gpu.h"
+1
drivers/gpu/drm/panthor/panthor_heap.c
··· 4 4 #include <linux/iosys-map.h> 5 5 #include <linux/rwsem.h> 6 6 7 + #include <drm/drm_print.h> 7 8 #include <drm/panthor_drm.h> 8 9 9 10 #include "panthor_device.h"
+2
drivers/gpu/drm/panthor/panthor_hw.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 or MIT 2 2 /* Copyright 2025 ARM Limited. All rights reserved. */ 3 3 4 + #include <drm/drm_print.h> 5 + 4 6 #include "panthor_device.h" 5 7 #include "panthor_hw.h" 6 8 #include "panthor_regs.h"
+1
drivers/gpu/drm/panthor/panthor_mmu.c
··· 7 7 #include <drm/drm_exec.h> 8 8 #include <drm/drm_gpuvm.h> 9 9 #include <drm/drm_managed.h> 10 + #include <drm/drm_print.h> 10 11 #include <drm/gpu_scheduler.h> 11 12 #include <drm/panthor_drm.h> 12 13
+8 -4
drivers/gpu/drm/panthor/panthor_sched.c
··· 5 5 #include <drm/drm_exec.h> 6 6 #include <drm/drm_gem_shmem_helper.h> 7 7 #include <drm/drm_managed.h> 8 + #include <drm/drm_print.h> 8 9 #include <drm/gpu_scheduler.h> 9 10 #include <drm/panthor_drm.h> 10 11 ··· 899 898 if (IS_ERR_OR_NULL(queue)) 900 899 return; 901 900 902 - drm_sched_entity_destroy(&queue->entity); 901 + if (queue->entity.fence_context) 902 + drm_sched_entity_destroy(&queue->entity); 903 903 904 904 if (queue->scheduler.ops) 905 905 drm_sched_fini(&queue->scheduler); ··· 3419 3417 3420 3418 drm_sched = &queue->scheduler; 3421 3419 ret = drm_sched_entity_init(&queue->entity, 0, &drm_sched, 1, NULL); 3420 + if (ret) 3421 + goto err_free_queue; 3422 3422 3423 3423 return queue; 3424 3424 ··· 3877 3873 { 3878 3874 struct panthor_scheduler *sched = ptdev->scheduler; 3879 3875 3880 - cancel_delayed_work_sync(&sched->tick_work); 3876 + disable_delayed_work_sync(&sched->tick_work); 3877 + disable_work_sync(&sched->fw_events_work); 3878 + disable_work_sync(&sched->sync_upd_work); 3881 3879 3882 3880 mutex_lock(&sched->lock); 3883 3881 if (sched->pm.has_ref) { ··· 3896 3890 3897 3891 if (!sched || !sched->csg_slot_count) 3898 3892 return; 3899 - 3900 - cancel_delayed_work_sync(&sched->tick_work); 3901 3893 3902 3894 if (sched->wq) 3903 3895 destroy_workqueue(sched->wq);
+1
drivers/gpu/drm/pl111/pl111_display.c
··· 20 20 #include <drm/drm_framebuffer.h> 21 21 #include <drm/drm_gem_atomic_helper.h> 22 22 #include <drm/drm_gem_dma_helper.h> 23 + #include <drm/drm_print.h> 23 24 #include <drm/drm_vblank.h> 24 25 25 26 #include "pl111_drm.h"
+1
drivers/gpu/drm/qxl/qxl_cmd.c
··· 27 27 28 28 #include <linux/delay.h> 29 29 30 + #include <drm/drm_print.h> 30 31 #include <drm/drm_util.h> 31 32 32 33 #include "qxl_drv.h"
+1
drivers/gpu/drm/qxl/qxl_debugfs.c
··· 30 30 31 31 #include <drm/drm_debugfs.h> 32 32 #include <drm/drm_file.h> 33 + #include <drm/drm_print.h> 33 34 34 35 #include "qxl_drv.h" 35 36 #include "qxl_object.h"
+1
drivers/gpu/drm/qxl/qxl_display.c
··· 34 34 #include <drm/drm_framebuffer.h> 35 35 #include <drm/drm_gem_framebuffer_helper.h> 36 36 #include <drm/drm_plane_helper.h> 37 + #include <drm/drm_print.h> 37 38 #include <drm/drm_probe_helper.h> 38 39 #include <drm/drm_simple_kms_helper.h> 39 40 #include <drm/drm_gem_atomic_helper.h>
+1
drivers/gpu/drm/qxl/qxl_drv.c
··· 44 44 #include <drm/drm_module.h> 45 45 #include <drm/drm_modeset_helper.h> 46 46 #include <drm/drm_prime.h> 47 + #include <drm/drm_print.h> 47 48 #include <drm/drm_probe_helper.h> 48 49 49 50 #include "qxl_object.h"
+1
drivers/gpu/drm/qxl/qxl_gem.c
··· 24 24 */ 25 25 26 26 #include <drm/drm.h> 27 + #include <drm/drm_print.h> 27 28 28 29 #include "qxl_drv.h" 29 30 #include "qxl_object.h"
+2
drivers/gpu/drm/qxl/qxl_image.c
··· 26 26 #include <linux/gfp.h> 27 27 #include <linux/slab.h> 28 28 29 + #include <drm/drm_print.h> 30 + 29 31 #include "qxl_drv.h" 30 32 #include "qxl_object.h" 31 33
+2
drivers/gpu/drm/qxl/qxl_ioctl.c
··· 26 26 #include <linux/pci.h> 27 27 #include <linux/uaccess.h> 28 28 29 + #include <drm/drm_print.h> 30 + 29 31 #include "qxl_drv.h" 30 32 #include "qxl_object.h" 31 33
+1
drivers/gpu/drm/qxl/qxl_irq.c
··· 26 26 #include <linux/pci.h> 27 27 28 28 #include <drm/drm_drv.h> 29 + #include <drm/drm_print.h> 29 30 30 31 #include "qxl_drv.h" 31 32
+1
drivers/gpu/drm/qxl/qxl_kms.c
··· 28 28 29 29 #include <drm/drm_drv.h> 30 30 #include <drm/drm_managed.h> 31 + #include <drm/drm_print.h> 31 32 #include <drm/drm_probe_helper.h> 32 33 33 34 #include "qxl_drv.h"
+2
drivers/gpu/drm/qxl/qxl_release.c
··· 22 22 23 23 #include <linux/delay.h> 24 24 25 + #include <drm/drm_print.h> 26 + 25 27 #include <trace/events/dma_fence.h> 26 28 27 29 #include "qxl_drv.h"
+2 -1
drivers/gpu/drm/qxl/qxl_ttm.c
··· 28 28 #include <drm/drm.h> 29 29 #include <drm/drm_file.h> 30 30 #include <drm/drm_debugfs.h> 31 + #include <drm/drm_print.h> 31 32 #include <drm/qxl_drm.h> 32 33 #include <drm/ttm/ttm_bo.h> 33 34 #include <drm/ttm/ttm_placement.h> ··· 197 196 r = ttm_device_init(&qdev->mman.bdev, &qxl_bo_driver, NULL, 198 197 qdev->ddev.anon_inode->i_mapping, 199 198 qdev->ddev.vma_offset_manager, 200 - false, false); 199 + 0); 201 200 if (r) { 202 201 DRM_ERROR("failed initializing buffer object driver(%d).\n", r); 203 202 return r;
+1
drivers/gpu/drm/radeon/radeon.h
··· 80 80 #include <drm/drm_gem.h> 81 81 #include <drm/drm_audio_component.h> 82 82 #include <drm/drm_suballoc.h> 83 + #include <drm/drm_print.h> 83 84 84 85 #include "radeon_family.h" 85 86 #include "radeon_mode.h"
+4 -2
drivers/gpu/drm/radeon/radeon_ttm.c
··· 683 683 r = ttm_device_init(&rdev->mman.bdev, &radeon_bo_driver, rdev->dev, 684 684 rdev_to_drm(rdev)->anon_inode->i_mapping, 685 685 rdev_to_drm(rdev)->vma_offset_manager, 686 - rdev->need_swiotlb, 687 - dma_addressing_limited(&rdev->pdev->dev)); 686 + (rdev->need_swiotlb ? 687 + TTM_ALLOCATION_POOL_USE_DMA_ALLOC : 0) | 688 + (dma_addressing_limited(&rdev->pdev->dev) ? 689 + TTM_ALLOCATION_POOL_USE_DMA32 : 0)); 688 690 if (r) { 689 691 DRM_ERROR("failed initializing buffer object driver(%d).\n", r); 690 692 return r;
+2 -1
drivers/gpu/drm/renesas/rcar-du/rcar_du_crtc.c
··· 17 17 #include <drm/drm_crtc.h> 18 18 #include <drm/drm_device.h> 19 19 #include <drm/drm_gem_dma_helper.h> 20 + #include <drm/drm_print.h> 20 21 #include <drm/drm_vblank.h> 21 22 22 23 #include "rcar_cmm.h" ··· 994 993 995 994 rcar_du_crtc_crc_cleanup(rcrtc); 996 995 997 - return drm_crtc_cleanup(crtc); 996 + drm_crtc_cleanup(crtc); 998 997 } 999 998 1000 999 static void rcar_du_crtc_reset(struct drm_crtc *crtc)
+1
drivers/gpu/drm/renesas/rcar-du/rcar_du_drv.c
··· 24 24 #include <drm/drm_fbdev_dma.h> 25 25 #include <drm/drm_gem_dma_helper.h> 26 26 #include <drm/drm_managed.h> 27 + #include <drm/drm_print.h> 27 28 #include <drm/drm_probe_helper.h> 28 29 29 30 #include "rcar_du_drv.h"
+34 -16
drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi.c
··· 5 5 * Copyright (C) 2020 Renesas Electronics Corporation 6 6 */ 7 7 8 + #include <linux/bitfield.h> 8 9 #include <linux/clk.h> 9 10 #include <linux/delay.h> 10 11 #include <linux/io.h> ··· 72 71 } clocks; 73 72 74 73 enum mipi_dsi_pixel_format format; 74 + unsigned long mode_flags; 75 75 unsigned int num_data_lanes; 76 76 unsigned int lanes; 77 77 }; ··· 318 316 WRITE_PHTW(0x01020100, 0x00000180); 319 317 320 318 ret = read_poll_timeout(rcar_mipi_dsi_read, status, 321 - status & PHTR_TEST, 2000, 10000, false, 322 - dsi, PHTR); 319 + status & PHTR_TESTDOUT_TEST, 320 + 2000, 10000, false, dsi, PHTR); 323 321 if (ret < 0) { 324 322 dev_err(dsi->dev, "failed to test PHTR\n"); 325 323 return ret; ··· 459 457 u32 vprmset4r; 460 458 461 459 /* Configuration for Pixel Stream and Packet Header */ 462 - if (mipi_dsi_pixel_format_to_bpp(dsi->format) == 24) 460 + switch (mipi_dsi_pixel_format_to_bpp(dsi->format)) { 461 + case 24: 463 462 rcar_mipi_dsi_write(dsi, TXVMPSPHSETR, TXVMPSPHSETR_DT_RGB24); 464 - else if (mipi_dsi_pixel_format_to_bpp(dsi->format) == 18) 463 + break; 464 + case 18: 465 465 rcar_mipi_dsi_write(dsi, TXVMPSPHSETR, TXVMPSPHSETR_DT_RGB18); 466 - else if (mipi_dsi_pixel_format_to_bpp(dsi->format) == 16) 466 + break; 467 + case 16: 467 468 rcar_mipi_dsi_write(dsi, TXVMPSPHSETR, TXVMPSPHSETR_DT_RGB16); 468 - else { 469 + break; 470 + default: 469 471 dev_warn(dsi->dev, "unsupported format"); 470 472 return; 471 473 } 472 474 473 475 /* Configuration for Blanking sequence and Input Pixel */ 474 - setr = TXVMSETR_HSABPEN_EN | TXVMSETR_HBPBPEN_EN 475 - | TXVMSETR_HFPBPEN_EN | TXVMSETR_SYNSEQ_PULSES 476 - | TXVMSETR_PIXWDTH | TXVMSETR_VSTPM; 476 + setr = TXVMSETR_PIXWDTH | TXVMSETR_VSTPM; 477 + 478 + if (dsi->mode_flags & MIPI_DSI_MODE_VIDEO) { 479 + if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_SYNC_PULSE)) 480 + setr |= TXVMSETR_SYNSEQ_EVENTS; 481 + if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HFP)) 482 + setr |= TXVMSETR_HFPBPEN; 483 + if (!(dsi->mode_flags 
& MIPI_DSI_MODE_VIDEO_NO_HBP)) 484 + setr |= TXVMSETR_HBPBPEN; 485 + if (!(dsi->mode_flags & MIPI_DSI_MODE_VIDEO_NO_HSA)) 486 + setr |= TXVMSETR_HSABPEN; 487 + } 488 + 477 489 rcar_mipi_dsi_write(dsi, TXVMSETR, setr); 478 490 479 - /* Configuration for Video Parameters */ 480 - vprmset0r = (mode->flags & DRM_MODE_FLAG_PVSYNC ? 481 - TXVMVPRMSET0R_VSPOL_HIG : TXVMVPRMSET0R_VSPOL_LOW) 482 - | (mode->flags & DRM_MODE_FLAG_PHSYNC ? 483 - TXVMVPRMSET0R_HSPOL_HIG : TXVMVPRMSET0R_HSPOL_LOW) 484 - | TXVMVPRMSET0R_CSPC_RGB | TXVMVPRMSET0R_BPP_24; 491 + /* Configuration for Video Parameters, input is always RGB888 */ 492 + vprmset0r = TXVMVPRMSET0R_BPP_24; 493 + if (mode->flags & DRM_MODE_FLAG_NVSYNC) 494 + vprmset0r |= TXVMVPRMSET0R_VSPOL_LOW; 495 + if (mode->flags & DRM_MODE_FLAG_NHSYNC) 496 + vprmset0r |= TXVMVPRMSET0R_HSPOL_LOW; 485 497 486 498 vprmset1r = TXVMVPRMSET1R_VACTIVE(mode->vdisplay) 487 499 | TXVMVPRMSET1R_VSA(mode->vsync_end - mode->vsync_start); ··· 636 620 vclkset = VCLKSET_CKEN; 637 621 rcar_mipi_dsi_write(dsi, VCLKSET, vclkset); 638 622 623 + /* Output is always RGB, never YCbCr */ 639 624 if (dsi_format == 24) 640 625 vclkset |= VCLKSET_BPP_24; 641 626 else if (dsi_format == 18) ··· 648 631 return -EINVAL; 649 632 } 650 633 651 - vclkset |= VCLKSET_COLOR_RGB | VCLKSET_LANE(dsi->lanes - 1); 634 + vclkset |= VCLKSET_LANE(dsi->lanes - 1); 652 635 653 636 switch (dsi->info->model) { 654 637 case RCAR_DSI_V3U: ··· 928 911 929 912 dsi->lanes = device->lanes; 930 913 dsi->format = device->format; 914 + dsi->mode_flags = device->mode_flags; 931 915 932 916 dsi->next_bridge = devm_drm_of_get_bridge(dsi->dev, dsi->dev->of_node, 933 917 1, 0);
+203 -184
drivers/gpu/drm/renesas/rcar-du/rcar_mipi_dsi_regs.h
··· 9 9 #define __RCAR_MIPI_DSI_REGS_H__ 10 10 11 11 #define LINKSR 0x010 12 - #define LINKSR_LPBUSY (1 << 1) 13 - #define LINKSR_HSBUSY (1 << 0) 12 + #define LINKSR_LPBUSY BIT_U32(1) 13 + #define LINKSR_HSBUSY BIT_U32(0) 14 14 15 15 #define TXSETR 0x100 16 - #define TXSETR_LANECNT_MASK (0x3 << 0) 16 + #define TXSETR_LANECNT_MASK GENMASK_U32(1, 0) 17 17 18 18 /* 19 19 * DSI Command Transfer Registers 20 20 */ 21 21 #define TXCMSETR 0x110 22 - #define TXCMSETR_SPDTYP (1 << 8) /* 0:HS 1:LP */ 23 - #define TXCMSETR_LPPDACC (1 << 0) 22 + #define TXCMSETR_SPDTYP BIT_U32(8) /* 0:HS 1:LP */ 23 + #define TXCMSETR_LPPDACC BIT_U32(0) 24 24 #define TXCMCR 0x120 25 - #define TXCMCR_BTATYP (1 << 2) 26 - #define TXCMCR_BTAREQ (1 << 1) 27 - #define TXCMCR_TXREQ (1 << 0) 25 + #define TXCMCR_BTATYP BIT_U32(2) 26 + #define TXCMCR_BTAREQ BIT_U32(1) 27 + #define TXCMCR_TXREQ BIT_U32(0) 28 28 #define TXCMSR 0x130 29 - #define TXCMSR_CLSNERR (1 << 18) 30 - #define TXCMSR_AXIERR (1 << 16) 31 - #define TXCMSR_TXREQEND (1 << 0) 29 + #define TXCMSR_CLSNERR BIT_U32(18) 30 + #define TXCMSR_AXIERR BIT_U32(16) 31 + #define TXCMSR_TXREQEND BIT_U32(0) 32 32 #define TXCMSCR 0x134 33 - #define TXCMSCR_CLSNERR (1 << 18) 34 - #define TXCMSCR_AXIERR (1 << 16) 35 - #define TXCMSCR_TXREQEND (1 << 0) 33 + #define TXCMSCR_CLSNERR BIT_U32(18) 34 + #define TXCMSCR_AXIERR BIT_U32(16) 35 + #define TXCMSCR_TXREQEND BIT_U32(0) 36 36 #define TXCMIER 0x138 37 - #define TXCMIER_CLSNERR (1 << 18) 38 - #define TXCMIER_AXIERR (1 << 16) 39 - #define TXCMIER_TXREQEND (1 << 0) 37 + #define TXCMIER_CLSNERR BIT_U32(18) 38 + #define TXCMIER_AXIERR BIT_U32(16) 39 + #define TXCMIER_TXREQEND BIT_U32(0) 40 40 #define TXCMADDRSET0R 0x140 41 41 #define TXCMPHDR 0x150 42 - #define TXCMPHDR_FMT (1 << 24) /* 0:SP 1:LP */ 43 - #define TXCMPHDR_VC(n) (((n) & 0x3) << 22) 44 - #define TXCMPHDR_DT(n) (((n) & 0x3f) << 16) 45 - #define TXCMPHDR_DATA1(n) (((n) & 0xff) << 8) 46 - #define TXCMPHDR_DATA0(n) (((n) & 0xff) << 0) 42 + #define 
TXCMPHDR_FMT BIT_U32(24) /* 0:SP 1:LP */ 43 + #define TXCMPHDR_VC_MASK GENMASK_U32(23, 22) 44 + #define TXCMPHDR_VC(n) FIELD_PREP(TXCMPHDR_VC_MASK, (n)) 45 + #define TXCMPHDR_DT_MASK GENMASK_U32(21, 16) 46 + #define TXCMPHDR_DT(n) FIELD_PREP(TXCMPHDR_DT_MASK, (n)) 47 + #define TXCMPHDR_DATA1_MASK GENMASK_U32(15, 8) 48 + #define TXCMPHDR_DATA1(n) FIELD_PREP(TXCMPHDR_DATA1_MASK, (n)) 49 + #define TXCMPHDR_DATA0_MASK GENMASK_U32(7, 0) 50 + #define TXCMPHDR_DATA0(n) FIELD_PREP(TXCMPHDR_DATA0_MASK, (n)) 47 51 #define TXCMPPD0R 0x160 48 52 #define TXCMPPD1R 0x164 49 53 #define TXCMPPD2R 0x168 50 54 #define TXCMPPD3R 0x16c 51 55 52 56 #define RXSETR 0x200 53 - #define RXSETR_CRCEN (((n) & 0xf) << 24) 54 - #define RXSETR_ECCEN (((n) & 0xf) << 16) 57 + #define RXSETR_CRCEN_MASK GENMASK_U32(27, 24) 58 + #define RXSETR_ECCEN_MASK GENMASK_U32(19, 16) 55 59 #define RXPSETR 0x210 56 - #define RXPSETR_LPPDACC (1 << 0) 60 + #define RXPSETR_LPPDACC BIT_U32(0) 57 61 #define RXPSR 0x220 58 - #define RXPSR_ECCERR1B (1 << 28) 59 - #define RXPSR_UEXTRGERR (1 << 25) 60 - #define RXPSR_RESPTOERR (1 << 24) 61 - #define RXPSR_OVRERR (1 << 23) 62 - #define RXPSR_AXIERR (1 << 22) 63 - #define RXPSR_CRCERR (1 << 21) 64 - #define RXPSR_WCERR (1 << 20) 65 - #define RXPSR_UEXDTERR (1 << 19) 66 - #define RXPSR_UEXPKTERR (1 << 18) 67 - #define RXPSR_ECCERR (1 << 17) 68 - #define RXPSR_MLFERR (1 << 16) 69 - #define RXPSR_RCVACK (1 << 14) 70 - #define RXPSR_RCVEOT (1 << 10) 71 - #define RXPSR_RCVAKE (1 << 9) 72 - #define RXPSR_RCVRESP (1 << 8) 73 - #define RXPSR_BTAREQEND (1 << 0) 62 + #define RXPSR_ECCERR1B BIT_U32(28) 63 + #define RXPSR_UEXTRGERR BIT_U32(25) 64 + #define RXPSR_RESPTOERR BIT_U32(24) 65 + #define RXPSR_OVRERR BIT_U32(23) 66 + #define RXPSR_AXIERR BIT_U32(22) 67 + #define RXPSR_CRCERR BIT_U32(21) 68 + #define RXPSR_WCERR BIT_U32(20) 69 + #define RXPSR_UEXDTERR BIT_U32(19) 70 + #define RXPSR_UEXPKTERR BIT_U32(18) 71 + #define RXPSR_ECCERR BIT_U32(17) 72 + #define RXPSR_MLFERR 
BIT_U32(16) 73 + #define RXPSR_RCVACK BIT_U32(14) 74 + #define RXPSR_RCVEOT BIT_U32(10) 75 + #define RXPSR_RCVAKE BIT_U32(9) 76 + #define RXPSR_RCVRESP BIT_U32(8) 77 + #define RXPSR_BTAREQEND BIT_U32(0) 74 78 #define RXPSCR 0x224 75 - #define RXPSCR_ECCERR1B (1 << 28) 76 - #define RXPSCR_UEXTRGERR (1 << 25) 77 - #define RXPSCR_RESPTOERR (1 << 24) 78 - #define RXPSCR_OVRERR (1 << 23) 79 - #define RXPSCR_AXIERR (1 << 22) 80 - #define RXPSCR_CRCERR (1 << 21) 81 - #define RXPSCR_WCERR (1 << 20) 82 - #define RXPSCR_UEXDTERR (1 << 19) 83 - #define RXPSCR_UEXPKTERR (1 << 18) 84 - #define RXPSCR_ECCERR (1 << 17) 85 - #define RXPSCR_MLFERR (1 << 16) 86 - #define RXPSCR_RCVACK (1 << 14) 87 - #define RXPSCR_RCVEOT (1 << 10) 88 - #define RXPSCR_RCVAKE (1 << 9) 89 - #define RXPSCR_RCVRESP (1 << 8) 90 - #define RXPSCR_BTAREQEND (1 << 0) 79 + #define RXPSCR_ECCERR1B BIT_U32(28) 80 + #define RXPSCR_UEXTRGERR BIT_U32(25) 81 + #define RXPSCR_RESPTOERR BIT_U32(24) 82 + #define RXPSCR_OVRERR BIT_U32(23) 83 + #define RXPSCR_AXIERR BIT_U32(22) 84 + #define RXPSCR_CRCERR BIT_U32(21) 85 + #define RXPSCR_WCERR BIT_U32(20) 86 + #define RXPSCR_UEXDTERR BIT_U32(19) 87 + #define RXPSCR_UEXPKTERR BIT_U32(18) 88 + #define RXPSCR_ECCERR BIT_U32(17) 89 + #define RXPSCR_MLFERR BIT_U32(16) 90 + #define RXPSCR_RCVACK BIT_U32(14) 91 + #define RXPSCR_RCVEOT BIT_U32(10) 92 + #define RXPSCR_RCVAKE BIT_U32(9) 93 + #define RXPSCR_RCVRESP BIT_U32(8) 94 + #define RXPSCR_BTAREQEND BIT_U32(0) 91 95 #define RXPIER 0x228 92 - #define RXPIER_ECCERR1B (1 << 28) 93 - #define RXPIER_UEXTRGERR (1 << 25) 94 - #define RXPIER_RESPTOERR (1 << 24) 95 - #define RXPIER_OVRERR (1 << 23) 96 - #define RXPIER_AXIERR (1 << 22) 97 - #define RXPIER_CRCERR (1 << 21) 98 - #define RXPIER_WCERR (1 << 20) 99 - #define RXPIER_UEXDTERR (1 << 19) 100 - #define RXPIER_UEXPKTERR (1 << 18) 101 - #define RXPIER_ECCERR (1 << 17) 102 - #define RXPIER_MLFERR (1 << 16) 103 - #define RXPIER_RCVACK (1 << 14) 104 - #define RXPIER_RCVEOT (1 << 10) 
105 - #define RXPIER_RCVAKE (1 << 9) 106 - #define RXPIER_RCVRESP (1 << 8) 107 - #define RXPIER_BTAREQEND (1 << 0) 96 + #define RXPIER_ECCERR1B BIT_U32(28) 97 + #define RXPIER_UEXTRGERR BIT_U32(25) 98 + #define RXPIER_RESPTOERR BIT_U32(24) 99 + #define RXPIER_OVRERR BIT_U32(23) 100 + #define RXPIER_AXIERR BIT_U32(22) 101 + #define RXPIER_CRCERR BIT_U32(21) 102 + #define RXPIER_WCERR BIT_U32(20) 103 + #define RXPIER_UEXDTERR BIT_U32(19) 104 + #define RXPIER_UEXPKTERR BIT_U32(18) 105 + #define RXPIER_ECCERR BIT_U32(17) 106 + #define RXPIER_MLFERR BIT_U32(16) 107 + #define RXPIER_RCVACK BIT_U32(14) 108 + #define RXPIER_RCVEOT BIT_U32(10) 109 + #define RXPIER_RCVAKE BIT_U32(9) 110 + #define RXPIER_RCVRESP BIT_U32(8) 111 + #define RXPIER_BTAREQEND BIT_U32(0) 108 112 #define RXPADDRSET0R 0x230 109 113 #define RXPSIZESETR 0x238 110 - #define RXPSIZESETR_SIZE(n) (((n) & 0xf) << 3) 114 + #define RXPSIZESETR_SIZE_MASK GENMASK_U32(6, 3) 111 115 #define RXPHDR 0x240 112 - #define RXPHDR_FMT (1 << 24) /* 0:SP 1:LP */ 113 - #define RXPHDR_VC(n) (((n) & 0x3) << 22) 114 - #define RXPHDR_DT(n) (((n) & 0x3f) << 16) 115 - #define RXPHDR_DATA1(n) (((n) & 0xff) << 8) 116 - #define RXPHDR_DATA0(n) (((n) & 0xff) << 0) 116 + #define RXPHDR_FMT BIT_U32(24) /* 0:SP 1:LP */ 117 + #define RXPHDR_VC_MASK GENMASK_U32(23, 22) 118 + #define RXPHDR_DT_MASK GENMASK_U32(21, 16) 119 + #define RXPHDR_DATA1_MASK GENMASK_U32(15, 8) 120 + #define RXPHDR_DATA0_MASK GENMASK_U32(7, 0) 117 121 #define RXPPD0R 0x250 118 122 #define RXPPD1R 0x254 119 123 #define RXPPD2R 0x258 120 124 #define RXPPD3R 0x25c 121 125 #define AKEPR 0x300 122 - #define AKEPR_VC(n) (((n) & 0x3) << 22) 123 - #define AKEPR_DT(n) (((n) & 0x3f) << 16) 124 - #define AKEPR_ERRRPT(n) (((n) & 0xffff) << 0) 126 + #define AKEPR_VC_MASK GENMASK_U32(23, 22) 127 + #define AKEPR_DT_MASK GENMASK_U32(21, 16) 128 + #define AKEPR_ERRRPT_MASK GENMASK_U32(15, 0) 125 129 #define RXRESPTOSETR 0x400 126 130 #define TACR 0x500 127 131 #define TASR 0x510 128 
132 #define TASCR 0x514
129 133 #define TAIER 0x518
130 134 #define TOSR 0x610
131 - #define TOSR_TATO (1 << 2)
132 - #define TOSR_LRXHTO (1 << 1)
133 - #define TOSR_HRXTO (1 << 0)
135 + #define TOSR_TATO BIT_U32(2)
136 + #define TOSR_LRXHTO BIT_U32(1)
137 + #define TOSR_HRXTO BIT_U32(0)
134 138 #define TOSCR 0x614
135 - #define TOSCR_TATO (1 << 2)
136 - #define TOSCR_LRXHTO (1 << 1)
137 - #define TOSCR_HRXTO (1 << 0)
139 + #define TOSCR_TATO BIT_U32(2)
140 + #define TOSCR_LRXHTO BIT_U32(1)
141 + #define TOSCR_HRXTO BIT_U32(0)
138 142
139 143 /*
140 144 * Video Mode Register
141 145 */
142 146 #define TXVMSETR 0x180
143 - #define TXVMSETR_SYNSEQ_PULSES (0 << 16)
144 - #define TXVMSETR_SYNSEQ_EVENTS (1 << 16)
145 - #define TXVMSETR_VSTPM (1 << 15)
146 - #define TXVMSETR_PIXWDTH (1 << 8)
147 - #define TXVMSETR_VSEN_EN (1 << 4)
148 - #define TXVMSETR_VSEN_DIS (0 << 4)
149 - #define TXVMSETR_HFPBPEN_EN (1 << 2)
150 - #define TXVMSETR_HFPBPEN_DIS (0 << 2)
151 - #define TXVMSETR_HBPBPEN_EN (1 << 1)
152 - #define TXVMSETR_HBPBPEN_DIS (0 << 1)
153 - #define TXVMSETR_HSABPEN_EN (1 << 0)
154 - #define TXVMSETR_HSABPEN_DIS (0 << 0)
147 + #define TXVMSETR_SYNSEQ_EVENTS BIT_U32(16) /* 0:Pulses 1:Events */
148 + #define TXVMSETR_VSTPM BIT_U32(15)
149 + #define TXVMSETR_PIXWDTH_MASK GENMASK_U32(10, 8)
150 + #define TXVMSETR_PIXWDTH BIT_U32(8) /* Only allowed value */
151 + #define TXVMSETR_VSEN BIT_U32(4)
152 + #define TXVMSETR_HFPBPEN BIT_U32(2)
153 + #define TXVMSETR_HBPBPEN BIT_U32(1)
154 + #define TXVMSETR_HSABPEN BIT_U32(0)
155 155
156 156 #define TXVMCR 0x190
157 - #define TXVMCR_VFCLR (1 << 12)
158 - #define TXVMCR_EN_VIDEO (1 << 0)
157 + #define TXVMCR_VFCLR BIT_U32(12)
158 + #define TXVMCR_EN_VIDEO BIT_U32(0)
159 159
160 160 #define TXVMSR 0x1a0
161 - #define TXVMSR_STR (1 << 16)
162 - #define TXVMSR_VFRDY (1 << 12)
163 - #define TXVMSR_ACT (1 << 8)
164 - #define TXVMSR_RDY (1 << 0)
161 + #define TXVMSR_STR BIT_U32(16)
162 + #define TXVMSR_VFRDY BIT_U32(12)
163 + #define TXVMSR_ACT BIT_U32(8)
164 + #define TXVMSR_RDY BIT_U32(0)
165 165
166 166 #define TXVMSCR 0x1a4
167 - #define TXVMSCR_STR (1 << 16)
167 + #define TXVMSCR_STR BIT_U32(16)
168 168
169 169 #define TXVMPSPHSETR 0x1c0
170 - #define TXVMPSPHSETR_DT_RGB16 (0x0e << 16)
171 - #define TXVMPSPHSETR_DT_RGB18 (0x1e << 16)
172 - #define TXVMPSPHSETR_DT_RGB18_LS (0x2e << 16)
173 - #define TXVMPSPHSETR_DT_RGB24 (0x3e << 16)
174 - #define TXVMPSPHSETR_DT_YCBCR16 (0x2c << 16)
170 + #define TXVMPSPHSETR_DT_MASK (0x3f << 16)
171 + #define TXVMPSPHSETR_DT_RGB16 FIELD_PREP(TXVMPSPHSETR_DT_MASK, 0x0e)
172 + #define TXVMPSPHSETR_DT_RGB18 FIELD_PREP(TXVMPSPHSETR_DT_MASK, 0x1e)
173 + #define TXVMPSPHSETR_DT_RGB18_LS FIELD_PREP(TXVMPSPHSETR_DT_MASK, 0x2e)
174 + #define TXVMPSPHSETR_DT_RGB24 FIELD_PREP(TXVMPSPHSETR_DT_MASK, 0x3e)
175 + #define TXVMPSPHSETR_DT_YCBCR16 FIELD_PREP(TXVMPSPHSETR_DT_MASK, 0x2c)
175 176
176 177 #define TXVMVPRMSET0R 0x1d0
177 - #define TXVMVPRMSET0R_HSPOL_HIG (0 << 17)
178 - #define TXVMVPRMSET0R_HSPOL_LOW (1 << 17)
179 - #define TXVMVPRMSET0R_VSPOL_HIG (0 << 16)
180 - #define TXVMVPRMSET0R_VSPOL_LOW (1 << 16)
181 - #define TXVMVPRMSET0R_CSPC_RGB (0 << 4)
182 - #define TXVMVPRMSET0R_CSPC_YCbCr (1 << 4)
183 - #define TXVMVPRMSET0R_BPP_16 (0 << 0)
184 - #define TXVMVPRMSET0R_BPP_18 (1 << 0)
185 - #define TXVMVPRMSET0R_BPP_24 (2 << 0)
178 + #define TXVMVPRMSET0R_HSPOL_LOW BIT_U32(17) /* 0:High 1:Low */
179 + #define TXVMVPRMSET0R_VSPOL_LOW BIT_U32(16) /* 0:High 1:Low */
180 + #define TXVMVPRMSET0R_CSPC_YCbCr BIT_U32(4) /* 0:RGB 1:YCbCr */
181 + #define TXVMVPRMSET0R_BPP_MASK GENMASK_U32(2, 0)
182 + #define TXVMVPRMSET0R_BPP_16 FIELD_PREP(TXVMVPRMSET0R_BPP_MASK, 0)
183 + #define TXVMVPRMSET0R_BPP_18 FIELD_PREP(TXVMVPRMSET0R_BPP_MASK, 1)
184 + #define TXVMVPRMSET0R_BPP_24 FIELD_PREP(TXVMVPRMSET0R_BPP_MASK, 2)
186 185
187 186 #define TXVMVPRMSET1R 0x1d4
188 - #define TXVMVPRMSET1R_VACTIVE(x) (((x) & 0x7fff) << 16)
189 - #define TXVMVPRMSET1R_VSA(x) (((x) & 0xfff) << 0)
187 + #define TXVMVPRMSET1R_VACTIVE_MASK GENMASK_U32(30, 16)
188 + #define TXVMVPRMSET1R_VACTIVE(n) FIELD_PREP(TXVMVPRMSET1R_VACTIVE_MASK, (n))
189 + #define TXVMVPRMSET1R_VSA_MASK GENMASK_U32(11, 0)
190 + #define TXVMVPRMSET1R_VSA(n) FIELD_PREP(TXVMVPRMSET1R_VSA_MASK, (n))
190 191
191 192 #define TXVMVPRMSET2R 0x1d8
192 - #define TXVMVPRMSET2R_VFP(x) (((x) & 0x1fff) << 16)
193 - #define TXVMVPRMSET2R_VBP(x) (((x) & 0x1fff) << 0)
193 + #define TXVMVPRMSET2R_VFP_MASK GENMASK_U32(28, 16)
194 + #define TXVMVPRMSET2R_VFP(n) FIELD_PREP(TXVMVPRMSET2R_VFP_MASK, (n))
195 + #define TXVMVPRMSET2R_VBP_MASK GENMASK_U32(12, 0)
196 + #define TXVMVPRMSET2R_VBP(n) FIELD_PREP(TXVMVPRMSET2R_VBP_MASK, (n))
194 197
195 198 #define TXVMVPRMSET3R 0x1dc
196 - #define TXVMVPRMSET3R_HACTIVE(x) (((x) & 0x7fff) << 16)
197 - #define TXVMVPRMSET3R_HSA(x) (((x) & 0xfff) << 0)
199 + #define TXVMVPRMSET3R_HACTIVE_MASK GENMASK_U32(30, 16)
200 + #define TXVMVPRMSET3R_HACTIVE(n) FIELD_PREP(TXVMVPRMSET3R_HACTIVE_MASK, (n))
201 + #define TXVMVPRMSET3R_HSA_MASK GENMASK_U32(11, 0)
202 + #define TXVMVPRMSET3R_HSA(n) FIELD_PREP(TXVMVPRMSET3R_HSA_MASK, (n))
198 203
199 204 #define TXVMVPRMSET4R 0x1e0
200 - #define TXVMVPRMSET4R_HFP(x) (((x) & 0x1fff) << 16)
201 - #define TXVMVPRMSET4R_HBP(x) (((x) & 0x1fff) << 0)
205 + #define TXVMVPRMSET4R_HFP_MASK GENMASK_U32(28, 16)
206 + #define TXVMVPRMSET4R_HFP(n) FIELD_PREP(TXVMVPRMSET4R_HFP_MASK, (n))
207 + #define TXVMVPRMSET4R_HBP_MASK GENMASK_U32(12, 0)
208 + #define TXVMVPRMSET4R_HBP(n) FIELD_PREP(TXVMVPRMSET4R_HBP_MASK, (n))
202 209
203 210 /*
204 211 * PHY-Protocol Interface (PPI) Registers
205 212 */
206 213 #define PPISETR 0x700
207 - #define PPISETR_DLEN_MASK (0xf << 0)
208 - #define PPISETR_CLEN (1 << 8)
214 + #define PPISETR_DLEN_MASK GENMASK_U32(3, 0)
215 + #define PPISETR_CLEN BIT_U32(8)
209 216
210 217 #define PPICLCR 0x710
211 - #define PPICLCR_TXREQHS (1 << 8)
212 - #define PPICLCR_TXULPSEXT (1 << 1)
213 - #define PPICLCR_TXULPSCLK (1 << 0)
218 + #define PPICLCR_TXREQHS BIT_U32(8)
219 + #define PPICLCR_TXULPSEXT BIT_U32(1)
220 + #define PPICLCR_TXULPSCLK BIT_U32(0)
214 221
215 222 #define PPICLSR 0x720
216 - #define PPICLSR_HSTOLP (1 << 27)
217 - #define PPICLSR_TOHS (1 << 26)
218 - #define PPICLSR_STPST (1 << 0)
223 + #define PPICLSR_HSTOLP BIT_U32(27)
224 + #define PPICLSR_TOHS BIT_U32(26)
225 + #define PPICLSR_STPST BIT_U32(0)
219 226
220 227 #define PPICLSCR 0x724
221 - #define PPICLSCR_HSTOLP (1 << 27)
222 - #define PPICLSCR_TOHS (1 << 26)
228 + #define PPICLSCR_HSTOLP BIT_U32(27)
229 + #define PPICLSCR_TOHS BIT_U32(26)
223 230
224 231 #define PPIDL0SR 0x740
225 - #define PPIDL0SR_DIR (1 << 10)
226 - #define PPIDL0SR_STPST (1 << 6)
232 + #define PPIDL0SR_DIR BIT_U32(10)
233 + #define PPIDL0SR_STPST BIT_U32(6)
227 234
228 235 #define PPIDLSR 0x760
229 - #define PPIDLSR_STPST (0xf << 0)
236 + #define PPIDLSR_STPST GENMASK_U32(3, 0)
230 237
231 238 /*
232 239 * Clocks registers
233 240 */
234 241 #define LPCLKSET 0x1000
235 - #define LPCLKSET_CKEN (1 << 8)
236 - #define LPCLKSET_LPCLKDIV(x) (((x) & 0x3f) << 0)
242 + #define LPCLKSET_CKEN BIT_U32(8)
243 + #define LPCLKSET_LPCLKDIV_MASK GENMASK_U32(5, 0)
237 244
238 245 #define CFGCLKSET 0x1004
239 - #define CFGCLKSET_CKEN (1 << 8)
240 - #define CFGCLKSET_CFGCLKDIV(x) (((x) & 0x3f) << 0)
246 + #define CFGCLKSET_CKEN BIT_U32(8)
247 + #define CFGCLKSET_CFGCLKDIV_MASK GENMASK_U32(5, 0)
241 248
242 249 #define DOTCLKDIV 0x1008
243 - #define DOTCLKDIV_CKEN (1 << 8)
244 - #define DOTCLKDIV_DOTCLKDIV(x) (((x) & 0x3f) << 0)
250 + #define DOTCLKDIV_CKEN BIT_U32(8)
251 + #define DOTCLKDIV_DOTCLKDIV_MASK GENMASK_U32(5, 0)
245 252
246 253 #define VCLKSET 0x100c
247 - #define VCLKSET_CKEN (1 << 16)
248 - #define VCLKSET_COLOR_RGB (0 << 8)
249 - #define VCLKSET_COLOR_YCC (1 << 8)
250 - #define VCLKSET_DIV_V3U(x) (((x) & 0x3) << 4)
251 - #define VCLKSET_DIV_V4H(x) (((x) & 0x7) << 4)
252 - #define VCLKSET_BPP_16 (0 << 2)
253 - #define VCLKSET_BPP_18 (1 << 2)
254 - #define VCLKSET_BPP_18L (2 << 2)
255 - #define VCLKSET_BPP_24 (3 << 2)
256 - #define VCLKSET_LANE(x) (((x) & 0x3) << 0)
254 + #define VCLKSET_CKEN BIT_U32(16)
255 + #define VCLKSET_COLOR_YCC BIT_U32(8) /* 0:RGB 1:YCbCr */
256 + #define VCLKSET_DIV_V3U_MASK GENMASK_U32(5, 4)
257 + #define VCLKSET_DIV_V3U(n) FIELD_PREP(VCLKSET_DIV_V3U_MASK, (n))
258 + #define VCLKSET_DIV_V4H_MASK GENMASK_U32(6, 4)
259 + #define VCLKSET_DIV_V4H(n) FIELD_PREP(VCLKSET_DIV_V4H_MASK, (n))
260 + #define VCLKSET_BPP_MASK GENMASK_U32(3, 2)
261 + #define VCLKSET_BPP_16 FIELD_PREP(VCLKSET_BPP_MASK, 0)
262 + #define VCLKSET_BPP_18 FIELD_PREP(VCLKSET_BPP_MASK, 1)
263 + #define VCLKSET_BPP_18L FIELD_PREP(VCLKSET_BPP_MASK, 2)
264 + #define VCLKSET_BPP_24 FIELD_PREP(VCLKSET_BPP_MASK, 3)
265 + #define VCLKSET_LANE_MASK GENMASK_U32(1, 0)
266 + #define VCLKSET_LANE(n) FIELD_PREP(VCLKSET_LANE_MASK, (n))
257 267
258 268 #define VCLKEN 0x1010
259 - #define VCLKEN_CKEN (1 << 0)
269 + #define VCLKEN_CKEN BIT_U32(0)
260 270
261 271 #define PHYSETUP 0x1014
262 - #define PHYSETUP_HSFREQRANGE(x) (((x) & 0x7f) << 16)
263 - #define PHYSETUP_HSFREQRANGE_MASK (0x7f << 16)
264 - #define PHYSETUP_CFGCLKFREQRANGE(x) (((x) & 0x3f) << 8)
265 - #define PHYSETUP_SHUTDOWNZ (1 << 1)
266 - #define PHYSETUP_RSTZ (1 << 0)
272 + #define PHYSETUP_HSFREQRANGE_MASK GENMASK_U32(22, 16)
273 + #define PHYSETUP_HSFREQRANGE(n) FIELD_PREP(PHYSETUP_HSFREQRANGE_MASK, (n))
274 + #define PHYSETUP_CFGCLKFREQRANGE_MASK GENMASK_U32(13, 8)
275 + #define PHYSETUP_SHUTDOWNZ BIT_U32(1)
276 + #define PHYSETUP_RSTZ BIT_U32(0)
267 277
268 278 #define CLOCKSET1 0x101c
269 - #define CLOCKSET1_LOCK_PHY (1 << 17)
270 - #define CLOCKSET1_CLKSEL (1 << 8)
271 - #define CLOCKSET1_CLKINSEL_EXTAL (0 << 2)
272 - #define CLOCKSET1_CLKINSEL_DIG (1 << 2)
273 - #define CLOCKSET1_CLKINSEL_DU (1 << 3)
274 - #define CLOCKSET1_SHADOW_CLEAR (1 << 1)
275 - #define CLOCKSET1_UPDATEPLL (1 << 0)
279 + #define CLOCKSET1_LOCK_PHY BIT_U32(17)
280 + #define CLOCKSET1_CLKSEL BIT_U32(8)
281 + #define CLOCKSET1_CLKINSEL_MASK GENMASK_U32(3, 2)
282 + #define CLOCKSET1_CLKINSEL_EXTAL FIELD_PREP(CLOCKSET1_CLKINSEL_MASK, 0)
283 + #define CLOCKSET1_CLKINSEL_DIG FIELD_PREP(CLOCKSET1_CLKINSEL_MASK, 1)
284 + #define CLOCKSET1_CLKINSEL_DU FIELD_PREP(CLOCKSET1_CLKINSEL_MASK, 2)
285 + #define CLOCKSET1_SHADOW_CLEAR BIT_U32(1)
286 + #define CLOCKSET1_UPDATEPLL BIT_U32(0)
276 287
277 288 #define CLOCKSET2 0x1020
278 - #define CLOCKSET2_M(x) (((x) & 0xfff) << 16)
279 - #define CLOCKSET2_VCO_CNTRL(x) (((x) & 0x3f) << 8)
280 - #define CLOCKSET2_N(x) (((x) & 0xf) << 0)
289 + #define CLOCKSET2_M_MASK GENMASK_U32(27, 16)
290 + #define CLOCKSET2_M(n) FIELD_PREP(CLOCKSET2_M_MASK, (n))
291 + #define CLOCKSET2_VCO_CNTRL_MASK GENMASK_U32(13, 8)
292 + #define CLOCKSET2_VCO_CNTRL(n) FIELD_PREP(CLOCKSET2_VCO_CNTRL_MASK, (n))
293 + #define CLOCKSET2_N_MASK GENMASK_U32(3, 0)
294 + #define CLOCKSET2_N(n) FIELD_PREP(CLOCKSET2_N_MASK, (n))
281 295
282 296 #define CLOCKSET3 0x1024
283 - #define CLOCKSET3_PROP_CNTRL(x) (((x) & 0x3f) << 24)
284 - #define CLOCKSET3_INT_CNTRL(x) (((x) & 0x3f) << 16)
285 - #define CLOCKSET3_CPBIAS_CNTRL(x) (((x) & 0x7f) << 8)
286 - #define CLOCKSET3_GMP_CNTRL(x) (((x) & 0x3) << 0)
297 + #define CLOCKSET3_PROP_CNTRL_MASK GENMASK_U32(29, 24)
298 + #define CLOCKSET3_PROP_CNTRL(n) FIELD_PREP(CLOCKSET3_PROP_CNTRL_MASK, (n))
299 + #define CLOCKSET3_INT_CNTRL_MASK GENMASK_U32(21, 16)
300 + #define CLOCKSET3_INT_CNTRL(n) FIELD_PREP(CLOCKSET3_INT_CNTRL_MASK, (n))
301 + #define CLOCKSET3_CPBIAS_CNTRL_MASK GENMASK_U32(14, 8)
302 + #define CLOCKSET3_CPBIAS_CNTRL(n) FIELD_PREP(CLOCKSET3_CPBIAS_CNTRL_MASK, (n))
303 + #define CLOCKSET3_GMP_CNTRL_MASK GENMASK_U32(1, 0)
304 + #define CLOCKSET3_GMP_CNTRL(n) FIELD_PREP(CLOCKSET3_GMP_CNTRL_MASK, (n))
287 305
288 306 #define PHTW 0x1034
289 - #define PHTW_DWEN (1 << 24)
290 - #define PHTW_TESTDIN_DATA(x) (((x) & 0xff) << 16)
291 - #define PHTW_CWEN (1 << 8)
292 - #define PHTW_TESTDIN_CODE(x) (((x) & 0xff) << 0)
307 + #define PHTW_DWEN BIT_U32(24)
308 + #define PHTW_TESTDIN_DATA_MASK GENMASK_U32(23, 16)
309 + #define PHTW_CWEN BIT_U32(8)
310 + #define PHTW_TESTDIN_CODE_MASK GENMASK_U32(7, 0)
293 311
294 312 #define PHTR 0x1038
295 - #define PHTR_TEST (1 << 16)
313 + #define PHTR_TESTDOUT GENMASK_U32(23, 16)
314 + #define PHTR_TESTDOUT_TEST BIT_U32(16)
296 315
297 316 #define PHTC 0x103c
298 - #define PHTC_TESTCLR (1 << 0)
317 + #define PHTC_TESTCLR BIT_U32(0)
299 318
300 319 #endif /* __RCAR_MIPI_DSI_REGS_H__ */
+1
drivers/gpu/drm/renesas/rz-du/rzg2l_du_drv.c
··· 17 17 #include <drm/drm_drv.h> 18 18 #include <drm/drm_fbdev_dma.h> 19 19 #include <drm/drm_gem_dma_helper.h> 20 + #include <drm/drm_print.h> 20 21 #include <drm/drm_probe_helper.h> 21 22 22 23 #include "rzg2l_du_drv.h"
+1
drivers/gpu/drm/rockchip/analogix_dp-rockchip.c
··· 28 28 #include <drm/bridge/analogix_dp.h> 29 29 #include <drm/drm_of.h> 30 30 #include <drm/drm_panel.h> 31 + #include <drm/drm_print.h> 31 32 #include <drm/drm_probe_helper.h> 32 33 #include <drm/drm_simple_kms_helper.h> 33 34
+1
drivers/gpu/drm/rockchip/cdn-dp-core.c
··· 21 21 #include <drm/drm_bridge_connector.h> 22 22 #include <drm/drm_edid.h> 23 23 #include <drm/drm_of.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_probe_helper.h> 25 26 #include <drm/drm_simple_kms_helper.h> 26 27
+2
drivers/gpu/drm/rockchip/cdn-dp-reg.c
··· 11 11 #include <linux/iopoll.h> 12 12 #include <linux/reset.h> 13 13 14 + #include <drm/drm_print.h> 15 + 14 16 #include "cdn-dp-core.h" 15 17 #include "cdn-dp-reg.h" 16 18
+1
drivers/gpu/drm/rockchip/dw-mipi-dsi-rockchip.c
··· 24 24 #include <drm/bridge/dw_mipi_dsi.h> 25 25 #include <drm/drm_mipi_dsi.h> 26 26 #include <drm/drm_of.h> 27 + #include <drm/drm_print.h> 27 28 #include <drm/drm_simple_kms_helper.h> 28 29 29 30 #include "rockchip_drm_drv.h"
+1
drivers/gpu/drm/rockchip/inno_hdmi.c
··· 22 22 #include <drm/drm_atomic_helper.h> 23 23 #include <drm/drm_edid.h> 24 24 #include <drm/drm_of.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_probe_helper.h> 26 27 #include <drm/drm_simple_kms_helper.h> 27 28
+1
drivers/gpu/drm/rockchip/rk3066_hdmi.c
··· 10 10 #include <drm/display/drm_hdmi_state_helper.h> 11 11 #include <drm/drm_edid.h> 12 12 #include <drm/drm_of.h> 13 + #include <drm/drm_print.h> 13 14 #include <drm/drm_probe_helper.h> 14 15 #include <drm/drm_simple_kms_helper.h> 15 16
+1
drivers/gpu/drm/rockchip/rockchip_drm_drv.c
··· 22 22 #include <drm/drm_fbdev_dma.h> 23 23 #include <drm/drm_gem_dma_helper.h> 24 24 #include <drm/drm_of.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_probe_helper.h> 26 27 #include <drm/drm_vblank.h> 27 28
+1
drivers/gpu/drm/rockchip/rockchip_drm_gem.c
··· 14 14 #include <drm/drm_gem.h> 15 15 #include <drm/drm_gem_dma_helper.h> 16 16 #include <drm/drm_prime.h> 17 + #include <drm/drm_print.h> 17 18 #include <drm/drm_vma_manager.h> 18 19 19 20 #include "rockchip_drm_drv.h"
+1
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
··· 27 27 #include <drm/drm_framebuffer.h> 28 28 #include <drm/drm_gem_atomic_helper.h> 29 29 #include <drm/drm_gem_framebuffer_helper.h> 30 + #include <drm/drm_print.h> 30 31 #include <drm/drm_probe_helper.h> 31 32 #include <drm/drm_self_refresh_helper.h> 32 33 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
··· 29 29 #include <drm/drm_flip_work.h> 30 30 #include <drm/drm_framebuffer.h> 31 31 #include <drm/drm_gem_framebuffer_helper.h> 32 + #include <drm/drm_print.h> 32 33 #include <drm/drm_probe_helper.h> 33 34 #include <drm/drm_vblank.h> 34 35
+1
drivers/gpu/drm/rockchip/rockchip_lvds.c
··· 22 22 #include <drm/drm_bridge_connector.h> 23 23 #include <drm/drm_of.h> 24 24 #include <drm/drm_panel.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_probe_helper.h> 26 27 #include <drm/drm_simple_kms_helper.h> 27 28
+1
drivers/gpu/drm/rockchip/rockchip_rgb.c
··· 15 15 #include <drm/drm_bridge_connector.h> 16 16 #include <drm/drm_of.h> 17 17 #include <drm/drm_panel.h> 18 + #include <drm/drm_print.h> 18 19 #include <drm/drm_probe_helper.h> 19 20 #include <drm/drm_simple_kms_helper.h> 20 21
+15 -3
drivers/gpu/drm/scheduler/sched_main.c
···
1237 1237
1238 1238 /* Find entity with a ready job */
1239 1239 entity = drm_sched_select_entity(sched);
1240 - if (!entity)
1241 - return; /* No more work */
1240 + if (!entity) {
1241 + /*
1242 + * Either no more work to do, or the next ready job needs more
1243 + * credits than the scheduler has currently available.
1244 + */
1245 + return;
1246 + }
1242 1247
1243 1248 sched_job = drm_sched_entity_pop_job(entity);
1244 1249 if (!sched_job) {
···
1425 1420 struct drm_sched_rq *rq = sched->sched_rq[i];
1426 1421
1427 1422 spin_lock(&rq->lock);
1428 - list_for_each_entry(s_entity, &rq->entities, list)
1423 + list_for_each_entry(s_entity, &rq->entities, list) {
1429 1424 /*
1430 1425 * Prevents reinsertion and marks job_queue as idle,
1431 1426 * it will be removed from the rq in drm_sched_entity_fini()
···
1446 1441 * For now, this remains a potential race in all
1447 1442 * drivers that keep entities alive for longer than
1448 1443 * the scheduler.
1444 + *
1445 + * The READ_ONCE() is there to make the lockless read
1446 + * (warning about the lockless write below) slightly
1447 + * less broken...
1449 1448 */
1449 + if (!READ_ONCE(s_entity->stopped))
1450 + dev_warn(sched->dev, "Tearing down scheduler with active entities!\n");
1450 1451 s_entity->stopped = true;
1452 + }
1451 1453 spin_unlock(&rq->lock);
1452 1454 kfree(sched->sched_rq[i]);
1453 1455 }
+1
drivers/gpu/drm/sitronix/st7586.c
··· 25 25 #include <drm/drm_gem_framebuffer_helper.h> 26 26 #include <drm/drm_managed.h> 27 27 #include <drm/drm_mipi_dbi.h> 28 + #include <drm/drm_print.h> 28 29 #include <drm/drm_rect.h> 29 30 30 31 /* controller-specific commands */
+1
drivers/gpu/drm/sitronix/st7735r.c
··· 24 24 #include <drm/drm_gem_dma_helper.h> 25 25 #include <drm/drm_managed.h> 26 26 #include <drm/drm_mipi_dbi.h> 27 + #include <drm/drm_print.h> 27 28 28 29 #define ST7735R_FRMCTR1 0xb1 29 30 #define ST7735R_FRMCTR2 0xb2
+1
drivers/gpu/drm/solomon/ssd130x.c
··· 33 33 #include <drm/drm_managed.h> 34 34 #include <drm/drm_modes.h> 35 35 #include <drm/drm_rect.h> 36 + #include <drm/drm_print.h> 36 37 #include <drm/drm_probe_helper.h> 37 38 38 39 #include "ssd130x.h"
+1
drivers/gpu/drm/sti/sti_cursor.c
··· 14 14 #include <drm/drm_fb_dma_helper.h> 15 15 #include <drm/drm_framebuffer.h> 16 16 #include <drm/drm_gem_dma_helper.h> 17 + #include <drm/drm_print.h> 17 18 18 19 #include "sti_compositor.h" 19 20 #include "sti_cursor.h"
+6 -13
drivers/gpu/drm/sti/sti_drv.c
···
22 22 #include <drm/drm_gem_dma_helper.h>
23 23 #include <drm/drm_gem_framebuffer_helper.h>
24 24 #include <drm/drm_of.h>
25 + #include <drm/drm_print.h>
25 26 #include <drm/drm_probe_helper.h>
26 27
27 28 #include "sti_drv.h"
···
232 231 static int sti_platform_probe(struct platform_device *pdev)
233 232 {
234 233 struct device *dev = &pdev->dev;
235 - struct device_node *node = dev->of_node;
236 - struct device_node *child_np;
237 - struct component_match *match = NULL;
234 + int ret;
238 235
239 - dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
236 + ret = dma_set_coherent_mask(dev, DMA_BIT_MASK(32));
237 + if (ret)
238 + return ret;
240 239
241 240 devm_of_platform_populate(dev);
242 241
243 - child_np = of_get_next_available_child(node, NULL);
244 -
245 - while (child_np) {
246 - drm_of_component_match_add(dev, &match, component_compare_of,
247 - child_np);
248 - child_np = of_get_next_available_child(node, child_np);
249 - }
250 -
251 - return component_master_add_with_match(dev, &sti_ops, match);
242 + return drm_of_component_probe(dev, component_compare_of, &sti_ops);
252 243 }
253 244
254 245 static void sti_platform_remove(struct platform_device *pdev)
+1
drivers/gpu/drm/sti/sti_gdp.c
··· 16 16 #include <drm/drm_fourcc.h> 17 17 #include <drm/drm_framebuffer.h> 18 18 #include <drm/drm_gem_dma_helper.h> 19 + #include <drm/drm_print.h> 19 20 20 21 #include "sti_compositor.h" 21 22 #include "sti_gdp.h"
+5
drivers/gpu/drm/sti/sti_hda.c
···
779 779 return PTR_ERR(hda->clk_hddac);
780 780 }
781 781
782 + drm_bridge_add(&hda->bridge);
783 +
782 784 platform_set_drvdata(pdev, hda);
783 785
784 786 return component_add(&pdev->dev, &sti_hda_ops);
···
788 786
789 787 static void sti_hda_remove(struct platform_device *pdev)
790 788 {
789 + struct sti_hda *hda = platform_get_drvdata(pdev);
790 +
791 791 component_del(&pdev->dev, &sti_hda_ops);
792 + drm_bridge_remove(&hda->bridge);
792 793 }
793 794
794 795 static const struct of_device_id hda_of_match[] = {
+2
drivers/gpu/drm/sti/sti_hdmi.c
···
1459 1459
1460 1460 platform_set_drvdata(pdev, hdmi);
1461 1461
1462 + drm_bridge_add(&hdmi->bridge);
1462 1463 return component_add(&pdev->dev, &sti_hdmi_ops);
1463 1464
1464 1465 release_adapter:
···
1476 1475 if (hdmi->audio_pdev)
1477 1476 platform_device_unregister(hdmi->audio_pdev);
1478 1477 component_del(&pdev->dev, &sti_hdmi_ops);
1478 + drm_bridge_remove(&hdmi->bridge);
1479 1479 }
1480 1480
1481 1481 struct platform_driver sti_hdmi_driver = {
+1
drivers/gpu/drm/sti/sti_hqvdp.c
··· 20 20 #include <drm/drm_fourcc.h> 21 21 #include <drm/drm_framebuffer.h> 22 22 #include <drm/drm_gem_dma_helper.h> 23 + #include <drm/drm_print.h> 23 24 24 25 #include "sti_compositor.h" 25 26 #include "sti_drv.h"
+1
drivers/gpu/drm/sti/sti_plane.c
··· 12 12 #include <drm/drm_fourcc.h> 13 13 #include <drm/drm_framebuffer.h> 14 14 #include <drm/drm_gem_dma_helper.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "sti_compositor.h" 17 18 #include "sti_drv.h"
+1
drivers/gpu/drm/stm/drv.c
··· 25 25 #include <drm/drm_gem_dma_helper.h> 26 26 #include <drm/drm_gem_framebuffer_helper.h> 27 27 #include <drm/drm_module.h> 28 + #include <drm/drm_print.h> 28 29 #include <drm/drm_probe_helper.h> 29 30 #include <drm/drm_vblank.h> 30 31 #include <drm/drm_managed.h>
+1
drivers/gpu/drm/stm/ltdc.c
··· 34 34 #include <drm/drm_gem_atomic_helper.h> 35 35 #include <drm/drm_gem_dma_helper.h> 36 36 #include <drm/drm_of.h> 37 + #include <drm/drm_print.h> 37 38 #include <drm/drm_probe_helper.h> 38 39 #include <drm/drm_simple_kms_helper.h> 39 40 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/sun4i/sun4i_backend.c
··· 23 23 #include <drm/drm_fourcc.h> 24 24 #include <drm/drm_framebuffer.h> 25 25 #include <drm/drm_gem_dma_helper.h> 26 + #include <drm/drm_print.h> 26 27 #include <drm/drm_probe_helper.h> 27 28 28 29 #include "sun4i_backend.h"
+1
drivers/gpu/drm/sun4i/sun4i_drv.c
··· 22 22 #include <drm/drm_gem_dma_helper.h> 23 23 #include <drm/drm_module.h> 24 24 #include <drm/drm_of.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_probe_helper.h> 26 27 #include <drm/drm_vblank.h> 27 28
+1
drivers/gpu/drm/sun4i/sun4i_frontend.c
··· 19 19 #include <drm/drm_framebuffer.h> 20 20 #include <drm/drm_gem_dma_helper.h> 21 21 #include <drm/drm_plane.h> 22 + #include <drm/drm_print.h> 22 23 23 24 #include "sun4i_drv.h" 24 25 #include "sun4i_frontend.h"
+1
drivers/gpu/drm/sun4i/sun8i_mixer.c
··· 21 21 #include <drm/drm_crtc.h> 22 22 #include <drm/drm_framebuffer.h> 23 23 #include <drm/drm_gem_dma_helper.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_probe_helper.h> 25 26 26 27 #include "sun4i_drv.h"
+1
drivers/gpu/drm/sun4i/sun8i_ui_layer.c
··· 18 18 #include <drm/drm_framebuffer.h> 19 19 #include <drm/drm_gem_atomic_helper.h> 20 20 #include <drm/drm_gem_dma_helper.h> 21 + #include <drm/drm_print.h> 21 22 #include <drm/drm_probe_helper.h> 22 23 23 24 #include "sun8i_mixer.h"
+1
drivers/gpu/drm/sun4i/sun8i_vi_layer.c
··· 11 11 #include <drm/drm_framebuffer.h> 12 12 #include <drm/drm_gem_atomic_helper.h> 13 13 #include <drm/drm_gem_dma_helper.h> 14 + #include <drm/drm_print.h> 14 15 #include <drm/drm_probe_helper.h> 15 16 16 17 #include "sun8i_csc.h"
+1
drivers/gpu/drm/sysfb/efidrm.c
··· 21 21 #include <drm/drm_gem_shmem_helper.h> 22 22 #include <drm/drm_managed.h> 23 23 #include <drm/drm_modeset_helper_vtables.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_probe_helper.h> 25 26 26 27 #include <video/edid.h>
+1
drivers/gpu/drm/sysfb/ofdrm.c
··· 21 21 #include <drm/drm_gem_shmem_helper.h> 22 22 #include <drm/drm_managed.h> 23 23 #include <drm/drm_modeset_helper_vtables.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_probe_helper.h> 25 26 26 27 #include "drm_sysfb_helper.h"
+1
drivers/gpu/drm/sysfb/simpledrm.c
··· 25 25 #include <drm/drm_gem_shmem_helper.h> 26 26 #include <drm/drm_managed.h> 27 27 #include <drm/drm_modeset_helper_vtables.h> 28 + #include <drm/drm_print.h> 28 29 #include <drm/drm_probe_helper.h> 29 30 30 31 #include "drm_sysfb_helper.h"
+1
drivers/gpu/drm/sysfb/vesadrm.c
··· 22 22 #include <drm/drm_gem_shmem_helper.h> 23 23 #include <drm/drm_managed.h> 24 24 #include <drm/drm_modeset_helper_vtables.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_probe_helper.h> 26 27 27 28 #include <video/edid.h>
+1
drivers/gpu/drm/tegra/dc.c
··· 27 27 #include <drm/drm_debugfs.h> 28 28 #include <drm/drm_fourcc.h> 29 29 #include <drm/drm_framebuffer.h> 30 + #include <drm/drm_print.h> 30 31 #include <drm/drm_vblank.h> 31 32 32 33 #include "dc.h"
+1
drivers/gpu/drm/tegra/drm.c
··· 22 22 #include <drm/drm_framebuffer.h> 23 23 #include <drm/drm_ioctl.h> 24 24 #include <drm/drm_prime.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_vblank.h> 26 27 27 28 #if IS_ENABLED(CONFIG_ARM_DMA_USE_IOMMU)
+1
drivers/gpu/drm/tegra/dsi.c
··· 22 22 #include <drm/drm_file.h> 23 23 #include <drm/drm_mipi_dsi.h> 24 24 #include <drm/drm_panel.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_simple_kms_helper.h> 26 27 27 28 #include "dc.h"
+1
drivers/gpu/drm/tegra/fb.c
··· 13 13 #include <drm/drm_framebuffer.h> 14 14 #include <drm/drm_gem_framebuffer_helper.h> 15 15 #include <drm/drm_modeset_helper.h> 16 + #include <drm/drm_print.h> 16 17 17 18 #include "drm.h" 18 19 #include "gem.h"
+1
drivers/gpu/drm/tegra/hdmi.c
··· 28 28 #include <drm/drm_eld.h> 29 29 #include <drm/drm_file.h> 30 30 #include <drm/drm_fourcc.h> 31 + #include <drm/drm_print.h> 31 32 #include <drm/drm_probe_helper.h> 32 33 #include <drm/drm_simple_kms_helper.h> 33 34
+1
drivers/gpu/drm/tegra/hub.c
··· 20 20 #include <drm/drm_blend.h> 21 21 #include <drm/drm_fourcc.h> 22 22 #include <drm/drm_framebuffer.h> 23 + #include <drm/drm_print.h> 23 24 #include <drm/drm_probe_helper.h> 24 25 25 26 #include "drm.h"
+1
drivers/gpu/drm/tegra/sor.c
··· 24 24 #include <drm/drm_eld.h> 25 25 #include <drm/drm_file.h> 26 26 #include <drm/drm_panel.h> 27 + #include <drm/drm_print.h> 27 28 #include <drm/drm_simple_kms_helper.h> 28 29 29 30 #include "dc.h"
+1
drivers/gpu/drm/tests/drm_mm_test.c
··· 14 14 #include <linux/ktime.h> 15 15 16 16 #include <drm/drm_mm.h> 17 + #include <drm/drm_print.h> 17 18 18 19 #include "../lib/drm_random.h" 19 20
+8 -2
drivers/gpu/drm/tidss/tidss_crtc.c
···
8 8 #include <drm/drm_atomic_helper.h>
9 9 #include <drm/drm_crtc.h>
10 10 #include <drm/drm_gem_dma_helper.h>
11 + #include <drm/drm_print.h>
11 12 #include <drm/drm_vblank.h>
12 13
13 14 #include "tidss_crtc.h"
···
243 242
244 243 dispc_vp_prepare(tidss->dispc, tcrtc->hw_videoport, crtc->state);
245 244
246 - dispc_vp_enable(tidss->dispc, tcrtc->hw_videoport, crtc->state);
247 -
248 245 spin_lock_irqsave(&ddev->event_lock, flags);
249 246
247 + dispc_vp_enable(tidss->dispc, tcrtc->hw_videoport);
248 +
250 249 if (crtc->state->event) {
250 + unsigned int pipe = drm_crtc_index(crtc);
251 + struct drm_vblank_crtc *vblank = &ddev->vblank[pipe];
252 +
253 + vblank->time = ktime_get();
254 +
251 255 drm_crtc_send_vblank_event(crtc, crtc->state->event);
252 256 crtc->state->event = NULL;
253 257 }
+7 -16
drivers/gpu/drm/tidss/tidss_dispc.c
···
27 27 #include <drm/drm_framebuffer.h>
28 28 #include <drm/drm_gem_dma_helper.h>
29 29 #include <drm/drm_panel.h>
30 + #include <drm/drm_print.h>
30 31
31 32 #include "tidss_crtc.h"
32 33 #include "tidss_dispc.h"
···
1164 1163 {
1165 1164 const struct tidss_crtc_state *tstate = to_tidss_crtc_state(state);
1166 1165 const struct dispc_bus_format *fmt;
1166 + const struct drm_display_mode *mode = &state->adjusted_mode;
1167 + bool align, onoff, rf, ieo, ipc, ihs, ivs;
1168 + u32 hsw, hfp, hbp, vsw, vfp, vbp;
1167 1169
1168 1170 fmt = dispc_vp_find_bus_fmt(dispc, hw_videoport, tstate->bus_format,
1169 1171 tstate->bus_flags);
···
1179 1175
1180 1176 dispc_enable_am65x_oldi(dispc, hw_videoport, fmt);
1181 1177 }
1182 - }
1183 -
1184 - void dispc_vp_enable(struct dispc_device *dispc, u32 hw_videoport,
1185 - const struct drm_crtc_state *state)
1186 - {
1187 - const struct drm_display_mode *mode = &state->adjusted_mode;
1188 - const struct tidss_crtc_state *tstate = to_tidss_crtc_state(state);
1189 - bool align, onoff, rf, ieo, ipc, ihs, ivs;
1190 - const struct dispc_bus_format *fmt;
1191 - u32 hsw, hfp, hbp, vsw, vfp, vbp;
1192 -
1193 - fmt = dispc_vp_find_bus_fmt(dispc, hw_videoport, tstate->bus_format,
1194 - tstate->bus_flags);
1195 -
1196 - if (WARN_ON(!fmt))
1197 - return;
1198 1178
1199 1179 dispc_set_num_datalines(dispc, hw_videoport, fmt->data_width);
···
1234 1246 mode->crtc_hdisplay - 1) |
1235 1247 FIELD_PREP(DISPC_VP_SIZE_SCREEN_VDISPLAY_MASK,
1236 1248 mode->crtc_vdisplay - 1));
1249 + }
1237 1250
1251 + void dispc_vp_enable(struct dispc_device *dispc, u32 hw_videoport)
1252 + {
1238 1253 VP_REG_FLD_MOD(dispc, hw_videoport, DISPC_VP_CONTROL, 1,
1239 1254 DISPC_VP_CONTROL_ENABLE_MASK);
1240 1255 }
+1 -2
drivers/gpu/drm/tidss/tidss_dispc.h
···
119 119
120 120 void dispc_vp_prepare(struct dispc_device *dispc, u32 hw_videoport,
121 121 const struct drm_crtc_state *state);
122 - void dispc_vp_enable(struct dispc_device *dispc, u32 hw_videoport,
123 - const struct drm_crtc_state *state);
122 + void dispc_vp_enable(struct dispc_device *dispc, u32 hw_videoport);
124 123 void dispc_vp_disable(struct dispc_device *dispc, u32 hw_videoport);
125 124 void dispc_vp_unprepare(struct dispc_device *dispc, u32 hw_videoport);
126 125 bool dispc_vp_go_busy(struct dispc_device *dispc, u32 hw_videoport);
+1
drivers/gpu/drm/tiny/bochs.c
··· 21 21 #include <drm/drm_module.h> 22 22 #include <drm/drm_panic.h> 23 23 #include <drm/drm_plane_helper.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_probe_helper.h> 25 26 #include <drm/drm_vblank.h> 26 27 #include <drm/drm_vblank_helper.h>
+1
drivers/gpu/drm/tiny/cirrus-qemu.c
··· 44 44 #include <drm/drm_managed.h> 45 45 #include <drm/drm_modeset_helper_vtables.h> 46 46 #include <drm/drm_module.h> 47 + #include <drm/drm_print.h> 47 48 #include <drm/drm_probe_helper.h> 48 49 #include <drm/drm_vblank.h> 49 50 #include <drm/drm_vblank_helper.h>
+1
drivers/gpu/drm/tiny/gm12u320.c
··· 25 25 #include <drm/drm_ioctl.h> 26 26 #include <drm/drm_managed.h> 27 27 #include <drm/drm_modeset_helper_vtables.h> 28 + #include <drm/drm_print.h> 28 29 #include <drm/drm_probe_helper.h> 29 30 #include <drm/drm_simple_kms_helper.h> 30 31
+1
drivers/gpu/drm/tiny/hx8357d.c
··· 25 25 #include <drm/drm_managed.h> 26 26 #include <drm/drm_mipi_dbi.h> 27 27 #include <drm/drm_modeset_helper.h> 28 + #include <drm/drm_print.h> 28 29 #include <video/mipi_display.h> 29 30 30 31 #define HX8357D_SETOSC 0xb0
+1
drivers/gpu/drm/tiny/ili9163.c
··· 15 15 #include <drm/drm_gem_dma_helper.h> 16 16 #include <drm/drm_mipi_dbi.h> 17 17 #include <drm/drm_modeset_helper.h> 18 + #include <drm/drm_print.h> 18 19 19 20 #include <video/mipi_display.h> 20 21
+1
drivers/gpu/drm/tiny/ili9225.c
··· 29 29 #include <drm/drm_gem_framebuffer_helper.h> 30 30 #include <drm/drm_managed.h> 31 31 #include <drm/drm_mipi_dbi.h> 32 + #include <drm/drm_print.h> 32 33 #include <drm/drm_rect.h> 33 34 34 35 #define ILI9225_DRIVER_READ_CODE 0x00
+1
drivers/gpu/drm/tiny/ili9341.c
··· 24 24 #include <drm/drm_managed.h> 25 25 #include <drm/drm_mipi_dbi.h> 26 26 #include <drm/drm_modeset_helper.h> 27 + #include <drm/drm_print.h> 27 28 #include <video/mipi_display.h> 28 29 29 30 #define ILI9341_FRMCTR1 0xb1
+1
drivers/gpu/drm/tiny/ili9486.c
··· 23 23 #include <drm/drm_managed.h> 24 24 #include <drm/drm_mipi_dbi.h> 25 25 #include <drm/drm_modeset_helper.h> 26 + #include <drm/drm_print.h> 26 27 27 28 #define ILI9486_ITFCTR1 0xb0 28 29 #define ILI9486_PWCTRL1 0xc2
+1
drivers/gpu/drm/tiny/mi0283qt.c
··· 22 22 #include <drm/drm_managed.h> 23 23 #include <drm/drm_mipi_dbi.h> 24 24 #include <drm/drm_modeset_helper.h> 25 + #include <drm/drm_print.h> 25 26 #include <video/mipi_display.h> 26 27 27 28 #define ILI9341_FRMCTR1 0xb1
+1
drivers/gpu/drm/tiny/panel-mipi-dbi.c
··· 25 25 #include <drm/drm_mipi_dbi.h> 26 26 #include <drm/drm_modes.h> 27 27 #include <drm/drm_modeset_helper.h> 28 + #include <drm/drm_print.h> 28 29 29 30 #include <video/mipi_display.h> 30 31
+1
drivers/gpu/drm/tiny/pixpaper.c
··· 17 17 #include <drm/drm_gem_atomic_helper.h> 18 18 #include <drm/drm_gem_shmem_helper.h> 19 19 #include <drm/drm_gem_framebuffer_helper.h> 20 + #include <drm/drm_print.h> 20 21 #include <drm/drm_probe_helper.h> 21 22 22 23 /*
+1
drivers/gpu/drm/tiny/repaper.c
··· 36 36 #include <drm/drm_managed.h> 37 37 #include <drm/drm_modes.h> 38 38 #include <drm/drm_rect.h> 39 + #include <drm/drm_print.h> 39 40 #include <drm/drm_probe_helper.h> 40 41 #include <drm/drm_simple_kms_helper.h> 41 42
+8 -8
drivers/gpu/drm/ttm/tests/ttm_bo_test.c
···
251 251 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
252 252 KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
253 253
254 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
254 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
255 255 KUNIT_ASSERT_EQ(test, err, 0);
256 256 priv->ttm_dev = ttm_dev;
257 257
···
290 290 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
291 291 KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
292 292
293 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
293 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
294 294 KUNIT_ASSERT_EQ(test, err, 0);
295 295 priv->ttm_dev = ttm_dev;
296 296
···
342 342 resv = kunit_kzalloc(test, sizeof(*resv), GFP_KERNEL);
343 343 KUNIT_ASSERT_NOT_NULL(test, resv);
344 344
345 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
345 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
346 346 KUNIT_ASSERT_EQ(test, err, 0);
347 347 priv->ttm_dev = ttm_dev;
348 348
···
394 394 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
395 395 KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
396 396
397 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
397 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
398 398 KUNIT_ASSERT_EQ(test, err, 0);
399 399 priv->ttm_dev = ttm_dev;
400 400
···
437 437 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
438 438 KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
439 439
440 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
440 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
441 441 KUNIT_ASSERT_EQ(test, err, 0);
442 442 priv->ttm_dev = ttm_dev;
443 443
···
477 477 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
478 478 KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
479 479
480 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
480 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
481 481 KUNIT_ASSERT_EQ(test, err, 0);
482 482 priv->ttm_dev = ttm_dev;
483 483
···
512 512 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
513 513 KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
514 514
515 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
515 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
516 516 KUNIT_ASSERT_EQ(test, err, 0);
517 517 priv->ttm_dev = ttm_dev;
518 518
···
563 563 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL);
564 564 KUNIT_ASSERT_NOT_NULL(test, ttm_dev);
565 565
566 - err = ttm_device_kunit_init(priv, ttm_dev, false, false);
566 + err = ttm_device_kunit_init(priv, ttm_dev, 0);
567 567 KUNIT_ASSERT_EQ(test, err, 0);
568 568 priv->ttm_dev = ttm_dev;
569 569
+1 -1
drivers/gpu/drm/ttm/tests/ttm_bo_validate_test.c
···
995 995 */
996 996 ttm_device_fini(priv->ttm_dev);
997 997
998 - err = ttm_device_kunit_init_bad_evict(test->priv, priv->ttm_dev, false, false);
998 + err = ttm_device_kunit_init_bad_evict(test->priv, priv->ttm_dev);
999 999 KUNIT_ASSERT_EQ(test, err, 0);
1000 1000
1001 1001 ttm_mock_manager_init(priv->ttm_dev, mem_type, MANAGER_SIZE);
+13 -20
drivers/gpu/drm/ttm/tests/ttm_device_test.c
··· 7 7 #include <drm/ttm/ttm_placement.h> 8 8 9 9 #include "ttm_kunit_helpers.h" 10 + #include "../ttm_pool_internal.h" 10 11 11 12 struct ttm_device_test_case { 12 13 const char *description; 13 - bool use_dma_alloc; 14 - bool use_dma32; 14 + unsigned int alloc_flags; 15 15 bool pools_init_expected; 16 16 }; 17 17 ··· 25 25 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 26 26 KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 27 27 28 - err = ttm_device_kunit_init(priv, ttm_dev, false, false); 28 + err = ttm_device_kunit_init(priv, ttm_dev, 0); 29 29 KUNIT_ASSERT_EQ(test, err, 0); 30 30 31 31 KUNIT_EXPECT_PTR_EQ(test, ttm_dev->funcs, &ttm_dev_funcs); ··· 55 55 KUNIT_ASSERT_NOT_NULL(test, ttm_devs); 56 56 57 57 for (i = 0; i < num_dev; i++) { 58 - err = ttm_device_kunit_init(priv, &ttm_devs[i], false, false); 58 + err = ttm_device_kunit_init(priv, &ttm_devs[i], 0); 59 59 KUNIT_ASSERT_EQ(test, err, 0); 60 60 61 61 KUNIT_EXPECT_PTR_EQ(test, ttm_devs[i].dev_mapping, ··· 81 81 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 82 82 KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 83 83 84 - err = ttm_device_kunit_init(priv, ttm_dev, false, false); 84 + err = ttm_device_kunit_init(priv, ttm_dev, 0); 85 85 KUNIT_ASSERT_EQ(test, err, 0); 86 86 87 87 man = ttm_manager_type(ttm_dev, TTM_PL_SYSTEM); ··· 109 109 vma_man = drm->vma_offset_manager; 110 110 drm->vma_offset_manager = NULL; 111 111 112 - err = ttm_device_kunit_init(priv, ttm_dev, false, false); 112 + err = ttm_device_kunit_init(priv, ttm_dev, 0); 113 113 KUNIT_EXPECT_EQ(test, err, -EINVAL); 114 114 115 115 /* Bring the manager back for a graceful cleanup */ ··· 119 119 static const struct ttm_device_test_case ttm_device_cases[] = { 120 120 { 121 121 .description = "No DMA allocations, no DMA32 required", 122 - .use_dma_alloc = false, 123 - .use_dma32 = false, 124 122 .pools_init_expected = false, 125 123 }, 126 124 { 127 125 .description = "DMA allocations, DMA32 required", 128 - .use_dma_alloc = true, 129 - 
.use_dma32 = true, 126 + .alloc_flags = TTM_ALLOCATION_POOL_USE_DMA_ALLOC | 127 + TTM_ALLOCATION_POOL_USE_DMA32, 130 128 .pools_init_expected = true, 131 129 }, 132 130 { 133 131 .description = "No DMA allocations, DMA32 required", 134 - .use_dma_alloc = false, 135 - .use_dma32 = true, 132 + .alloc_flags = TTM_ALLOCATION_POOL_USE_DMA32, 136 133 .pools_init_expected = false, 137 134 }, 138 135 { 139 136 .description = "DMA allocations, no DMA32 required", 140 - .use_dma_alloc = true, 141 - .use_dma32 = false, 137 + .alloc_flags = TTM_ALLOCATION_POOL_USE_DMA_ALLOC, 142 138 .pools_init_expected = true, 143 139 }, 144 140 }; ··· 158 162 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 159 163 KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 160 164 161 - err = ttm_device_kunit_init(priv, ttm_dev, 162 - params->use_dma_alloc, 163 - params->use_dma32); 165 + err = ttm_device_kunit_init(priv, ttm_dev, params->alloc_flags); 164 166 KUNIT_ASSERT_EQ(test, err, 0); 165 167 166 168 pool = &ttm_dev->pool; 167 169 KUNIT_ASSERT_NOT_NULL(test, pool); 168 170 KUNIT_EXPECT_PTR_EQ(test, pool->dev, priv->dev); 169 - KUNIT_EXPECT_EQ(test, pool->use_dma_alloc, params->use_dma_alloc); 170 - KUNIT_EXPECT_EQ(test, pool->use_dma32, params->use_dma32); 171 + KUNIT_EXPECT_EQ(test, pool->alloc_flags, params->alloc_flags); 171 172 172 173 if (params->pools_init_expected) { 173 174 for (int i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { ··· 174 181 KUNIT_EXPECT_EQ(test, pt.caching, i); 175 182 KUNIT_EXPECT_EQ(test, pt.order, j); 176 183 177 - if (params->use_dma_alloc) 184 + if (ttm_pool_uses_dma_alloc(pool)) 178 185 KUNIT_ASSERT_FALSE(test, 179 186 list_empty(&pt.pages)); 180 187 }
+9 -13
drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.c
··· 117 117 118 118 static int ttm_device_kunit_init_with_funcs(struct ttm_test_devices *priv, 119 119 struct ttm_device *ttm, 120 - bool use_dma_alloc, 121 - bool use_dma32, 120 + unsigned int alloc_flags, 122 121 struct ttm_device_funcs *funcs) 123 122 { 124 123 struct drm_device *drm = priv->drm; ··· 126 127 err = ttm_device_init(ttm, funcs, drm->dev, 127 128 drm->anon_inode->i_mapping, 128 129 drm->vma_offset_manager, 129 - use_dma_alloc, use_dma32); 130 + alloc_flags); 130 131 131 132 return err; 132 133 } ··· 142 143 143 144 int ttm_device_kunit_init(struct ttm_test_devices *priv, 144 145 struct ttm_device *ttm, 145 - bool use_dma_alloc, 146 - bool use_dma32) 146 + unsigned int alloc_flags) 147 147 { 148 - return ttm_device_kunit_init_with_funcs(priv, ttm, use_dma_alloc, 149 - use_dma32, &ttm_dev_funcs); 148 + return ttm_device_kunit_init_with_funcs(priv, ttm, alloc_flags, 149 + &ttm_dev_funcs); 150 150 } 151 151 EXPORT_SYMBOL_GPL(ttm_device_kunit_init); 152 152 ··· 159 161 EXPORT_SYMBOL_GPL(ttm_dev_funcs_bad_evict); 160 162 161 163 int ttm_device_kunit_init_bad_evict(struct ttm_test_devices *priv, 162 - struct ttm_device *ttm, 163 - bool use_dma_alloc, 164 - bool use_dma32) 164 + struct ttm_device *ttm) 165 165 { 166 - return ttm_device_kunit_init_with_funcs(priv, ttm, use_dma_alloc, 167 - use_dma32, &ttm_dev_funcs_bad_evict); 166 + return ttm_device_kunit_init_with_funcs(priv, ttm, 0, 167 + &ttm_dev_funcs_bad_evict); 168 168 } 169 169 EXPORT_SYMBOL_GPL(ttm_device_kunit_init_bad_evict); 170 170 ··· 248 252 ttm_dev = kunit_kzalloc(test, sizeof(*ttm_dev), GFP_KERNEL); 249 253 KUNIT_ASSERT_NOT_NULL(test, ttm_dev); 250 254 251 - err = ttm_device_kunit_init(devs, ttm_dev, false, false); 255 + err = ttm_device_kunit_init(devs, ttm_dev, 0); 252 256 KUNIT_ASSERT_EQ(test, err, 0); 253 257 254 258 devs->ttm_dev = ttm_dev;
+2 -5
drivers/gpu/drm/ttm/tests/ttm_kunit_helpers.h
··· 28 28 /* Building blocks for test-specific init functions */ 29 29 int ttm_device_kunit_init(struct ttm_test_devices *priv, 30 30 struct ttm_device *ttm, 31 - bool use_dma_alloc, 32 - bool use_dma32); 31 + unsigned int alloc_flags); 33 32 int ttm_device_kunit_init_bad_evict(struct ttm_test_devices *priv, 34 - struct ttm_device *ttm, 35 - bool use_dma_alloc, 36 - bool use_dma32); 33 + struct ttm_device *ttm); 37 34 struct ttm_buffer_object *ttm_bo_kunit_init(struct kunit *test, 38 35 struct ttm_test_devices *devs, 39 36 size_t size,
+1
drivers/gpu/drm/ttm/tests/ttm_mock_manager.c
··· 4 4 */ 5 5 6 6 #include <linux/export.h> 7 + #include <linux/module.h> 7 8 8 9 #include <drm/ttm/ttm_resource.h> 9 10 #include <drm/ttm/ttm_device.h>
+12 -12
drivers/gpu/drm/ttm/tests/ttm_pool_test.c
··· 8 8 #include <drm/ttm/ttm_pool.h> 9 9 10 10 #include "ttm_kunit_helpers.h" 11 + #include "../ttm_pool_internal.h" 11 12 12 13 struct ttm_pool_test_case { 13 14 const char *description; 14 15 unsigned int order; 15 - bool use_dma_alloc; 16 + unsigned int alloc_flags; 16 17 }; 17 18 18 19 struct ttm_pool_test_priv { ··· 87 86 pool = kunit_kzalloc(test, sizeof(*pool), GFP_KERNEL); 88 87 KUNIT_ASSERT_NOT_NULL(test, pool); 89 88 90 - ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, true, false); 89 + ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, TTM_ALLOCATION_POOL_USE_DMA_ALLOC); 91 90 92 91 err = ttm_pool_alloc(pool, tt, &simple_ctx); 93 92 KUNIT_ASSERT_EQ(test, err, 0); ··· 114 113 { 115 114 .description = "One page, with coherent DMA mappings enabled", 116 115 .order = 0, 117 - .use_dma_alloc = true, 116 + .alloc_flags = TTM_ALLOCATION_POOL_USE_DMA_ALLOC, 118 117 }, 119 118 { 120 119 .description = "Above the allocation limit, with coherent DMA mappings enabled", 121 120 .order = MAX_PAGE_ORDER + 1, 122 - .use_dma_alloc = true, 121 + .alloc_flags = TTM_ALLOCATION_POOL_USE_DMA_ALLOC, 123 122 }, 124 123 }; 125 124 ··· 151 150 pool = kunit_kzalloc(test, sizeof(*pool), GFP_KERNEL); 152 151 KUNIT_ASSERT_NOT_NULL(test, pool); 153 152 154 - ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, params->use_dma_alloc, 155 - false); 153 + ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, params->alloc_flags); 156 154 157 155 KUNIT_ASSERT_PTR_EQ(test, pool->dev, devs->dev); 158 156 KUNIT_ASSERT_EQ(test, pool->nid, NUMA_NO_NODE); 159 - KUNIT_ASSERT_EQ(test, pool->use_dma_alloc, params->use_dma_alloc); 157 + KUNIT_ASSERT_EQ(test, pool->alloc_flags, params->alloc_flags); 160 158 161 159 err = ttm_pool_alloc(pool, tt, &simple_ctx); 162 160 KUNIT_ASSERT_EQ(test, err, 0); ··· 165 165 last_page = tt->pages[tt->num_pages - 1]; 166 166 167 167 if (params->order <= MAX_PAGE_ORDER) { 168 - if (params->use_dma_alloc) { 168 + if (ttm_pool_uses_dma_alloc(pool)) { 169 169 KUNIT_ASSERT_NOT_NULL(test, (void 
*)fst_page->private); 170 170 KUNIT_ASSERT_NOT_NULL(test, (void *)last_page->private); 171 171 } else { 172 172 KUNIT_ASSERT_EQ(test, fst_page->private, params->order); 173 173 } 174 174 } else { 175 - if (params->use_dma_alloc) { 175 + if (ttm_pool_uses_dma_alloc(pool)) { 176 176 KUNIT_ASSERT_NOT_NULL(test, (void *)fst_page->private); 177 177 KUNIT_ASSERT_NULL(test, (void *)last_page->private); 178 178 } else { ··· 218 218 pool = kunit_kzalloc(test, sizeof(*pool), GFP_KERNEL); 219 219 KUNIT_ASSERT_NOT_NULL(test, pool); 220 220 221 - ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, true, false); 221 + ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, TTM_ALLOCATION_POOL_USE_DMA_ALLOC); 222 222 223 223 err = ttm_pool_alloc(pool, tt, &simple_ctx); 224 224 KUNIT_ASSERT_EQ(test, err, 0); ··· 348 348 pool = kunit_kzalloc(test, sizeof(*pool), GFP_KERNEL); 349 349 KUNIT_ASSERT_NOT_NULL(test, pool); 350 350 351 - ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, true, false); 351 + ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, TTM_ALLOCATION_POOL_USE_DMA_ALLOC); 352 352 ttm_pool_alloc(pool, tt, &simple_ctx); 353 353 354 354 pt = &pool->caching[caching].orders[order]; ··· 379 379 pool = kunit_kzalloc(test, sizeof(*pool), GFP_KERNEL); 380 380 KUNIT_ASSERT_NOT_NULL(test, pool); 381 381 382 - ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, false, false); 382 + ttm_pool_init(pool, devs->dev, NUMA_NO_NODE, 0); 383 383 ttm_pool_alloc(pool, tt, &simple_ctx); 384 384 385 385 pt = &pool->caching[caching].orders[order];
+4 -1
drivers/gpu/drm/ttm/ttm_bo.c
··· 31 31 32 32 #define pr_fmt(fmt) "[TTM] " fmt 33 33 34 + #include <drm/drm_print.h> 35 + #include <drm/ttm/ttm_allocation.h> 34 36 #include <drm/ttm/ttm_bo.h> 35 37 #include <drm/ttm/ttm_placement.h> 36 38 #include <drm/ttm/ttm_tt.h> ··· 879 877 880 878 /* For backward compatibility with userspace */ 881 879 if (ret == -ENOSPC) 882 - return -ENOMEM; 880 + return bo->bdev->alloc_flags & TTM_ALLOCATION_PROPAGATE_ENOSPC ? 881 + ret : -ENOMEM; 883 882 884 883 /* 885 884 * We might need to add a TTM.
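The ttm_bo.c hunk above relaxes the historical behaviour where -ENOSPC was always rewritten to -ENOMEM for userspace backward compatibility: drivers that set the propagate flag now see the original error. A minimal standalone sketch of that decision, with the kernel logic extracted into a plain function (the flag's bit value here is an assumption for illustration; the real definition lives in <drm/ttm/ttm_allocation.h>):

```c
#include <errno.h>

/* Hypothetical bit value for illustration only; the real flag is
 * defined in <drm/ttm/ttm_allocation.h>. */
#define TTM_ALLOCATION_PROPAGATE_ENOSPC (1u << 0)

/* Mirrors the error rewrite above: without the opt-in flag, -ENOSPC
 * is still reported as -ENOMEM to keep legacy userspace working. */
static int ttm_filter_alloc_error(unsigned int alloc_flags, int ret)
{
	if (ret == -ENOSPC)
		return (alloc_flags & TTM_ALLOCATION_PROPAGATE_ENOSPC) ?
			ret : -ENOMEM;
	return ret;
}
```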
+5 -4
drivers/gpu/drm/ttm/ttm_device.c
··· 31 31 #include <linux/export.h> 32 32 #include <linux/mm.h> 33 33 34 + #include <drm/ttm/ttm_allocation.h> 34 35 #include <drm/ttm/ttm_bo.h> 35 36 #include <drm/ttm/ttm_device.h> 36 37 #include <drm/ttm/ttm_tt.h> ··· 199 198 * @dev: The core kernel device pointer for DMA mappings and allocations. 200 199 * @mapping: The address space to use for this bo. 201 200 * @vma_manager: A pointer to a vma manager. 202 - * @use_dma_alloc: If coherent DMA allocation API should be used. 203 - * @use_dma32: If we should use GFP_DMA32 for device memory allocations. 201 + * @alloc_flags: TTM_ALLOCATION_ flags. 204 202 * 205 203 * Initializes a struct ttm_device: 206 204 * Returns: ··· 208 208 int ttm_device_init(struct ttm_device *bdev, const struct ttm_device_funcs *funcs, 209 209 struct device *dev, struct address_space *mapping, 210 210 struct drm_vma_offset_manager *vma_manager, 211 - bool use_dma_alloc, bool use_dma32) 211 + unsigned int alloc_flags) 212 212 { 213 213 struct ttm_global *glob = &ttm_glob; 214 214 int ret, nid; ··· 227 227 return -ENOMEM; 228 228 } 229 229 230 + bdev->alloc_flags = alloc_flags; 230 231 bdev->funcs = funcs; 231 232 232 233 ttm_sys_man_init(bdev); ··· 237 236 else 238 237 nid = NUMA_NO_NODE; 239 238 240 - ttm_pool_init(&bdev->pool, dev, nid, use_dma_alloc, use_dma32); 239 + ttm_pool_init(&bdev->pool, dev, nid, alloc_flags); 241 240 242 241 bdev->vma_manager = vma_manager; 243 242 spin_lock_init(&bdev->lru_lock);
+26 -19
drivers/gpu/drm/ttm/ttm_pool.c
··· 48 48 #include <drm/ttm/ttm_bo.h> 49 49 50 50 #include "ttm_module.h" 51 + #include "ttm_pool_internal.h" 51 52 52 53 #ifdef CONFIG_FAULT_INJECTION 53 54 #include <linux/fault-inject.h> ··· 136 135 static struct page *ttm_pool_alloc_page(struct ttm_pool *pool, gfp_t gfp_flags, 137 136 unsigned int order) 138 137 { 138 + const unsigned int beneficial_order = ttm_pool_beneficial_order(pool); 139 139 unsigned long attr = DMA_ATTR_FORCE_CONTIGUOUS; 140 140 struct ttm_pool_dma *dma; 141 141 struct page *p; ··· 150 148 gfp_flags |= __GFP_NOMEMALLOC | __GFP_NORETRY | __GFP_NOWARN | 151 149 __GFP_THISNODE; 152 150 153 - if (!pool->use_dma_alloc) { 151 + /* 152 + * Do not add latency to the allocation path for allocation orders 153 + * the device told us bring no additional performance gains. 154 + */ 155 + if (beneficial_order && order > beneficial_order) 156 + gfp_flags &= ~__GFP_DIRECT_RECLAIM; 157 + 158 + if (!ttm_pool_uses_dma_alloc(pool)) { 154 159 p = alloc_pages_node(pool->nid, gfp_flags, order); 155 160 if (p) 156 161 p->private = order; ··· 209 200 set_pages_wb(p, 1 << order); 210 201 #endif 211 202 212 - if (!pool || !pool->use_dma_alloc) { 203 + if (!pool || !ttm_pool_uses_dma_alloc(pool)) { 213 204 __free_pages(p, order); 214 205 return; 215 206 } ··· 252 243 { 253 244 dma_addr_t addr; 254 245 255 - if (pool->use_dma_alloc) { 246 + if (ttm_pool_uses_dma_alloc(pool)) { 256 247 struct ttm_pool_dma *dma = (void *)p->private; 257 248 258 249 addr = dma->addr; ··· 274 265 unsigned int num_pages) 275 266 { 276 267 /* Unmapped while freeing the page */ 277 - if (pool->use_dma_alloc) 268 + if (ttm_pool_uses_dma_alloc(pool)) 278 269 return; 279 270 280 271 dma_unmap_page(pool->dev, dma_addr, (long)num_pages << PAGE_SHIFT, ··· 348 339 enum ttm_caching caching, 349 340 unsigned int order) 350 341 { 351 - if (pool->use_dma_alloc) 342 + if (ttm_pool_uses_dma_alloc(pool)) 352 343 return &pool->caching[caching].orders[order]; 353 344 354 345 #ifdef CONFIG_X86 ···
357 348 if (pool->nid != NUMA_NO_NODE) 358 349 return &pool->caching[caching].orders[order]; 359 350 360 - if (pool->use_dma32) 351 + if (ttm_pool_uses_dma32(pool)) 361 352 return &global_dma32_write_combined[order]; 362 353 363 354 return &global_write_combined[order]; ··· 365 356 if (pool->nid != NUMA_NO_NODE) 366 357 return &pool->caching[caching].orders[order]; 367 358 368 - if (pool->use_dma32) 359 + if (ttm_pool_uses_dma32(pool)) 369 360 return &global_dma32_uncached[order]; 370 361 371 362 return &global_uncached[order]; ··· 405 396 /* Return the allocation order based for a page */ 406 397 static unsigned int ttm_pool_page_order(struct ttm_pool *pool, struct page *p) 407 398 { 408 - if (pool->use_dma_alloc) { 399 + if (ttm_pool_uses_dma_alloc(pool)) { 409 400 struct ttm_pool_dma *dma = (void *)p->private; 410 401 411 402 return dma->vaddr & ~PAGE_MASK; ··· 728 719 if (ctx->gfp_retry_mayfail) 729 720 gfp_flags |= __GFP_RETRY_MAYFAIL; 730 721 731 - if (pool->use_dma32) 722 + if (ttm_pool_uses_dma32(pool)) 732 723 gfp_flags |= GFP_DMA32; 733 724 else 734 725 gfp_flags |= GFP_HIGHUSER; ··· 986 977 return -EINVAL; 987 978 988 979 if ((!ttm_backup_bytes_avail() && !flags->purge) || 989 - pool->use_dma_alloc || ttm_tt_is_backed_up(tt)) 980 + ttm_pool_uses_dma_alloc(pool) || ttm_tt_is_backed_up(tt)) 990 981 return -EBUSY; 991 982 992 983 #ifdef CONFIG_X86 ··· 1023 1014 if (flags->purge) 1024 1015 return shrunken; 1025 1016 1026 - if (pool->use_dma32) 1017 + if (ttm_pool_uses_dma32(pool)) 1027 1018 gfp = GFP_DMA32; 1028 1019 else 1029 1020 gfp = GFP_HIGHUSER; ··· 1067 1058 * @pool: the pool to initialize 1068 1059 * @dev: device for DMA allocations and mappings 1069 1060 * @nid: NUMA node to use for allocations 1070 - * @use_dma_alloc: true if coherent DMA alloc should be used 1071 - * @use_dma32: true if GFP_DMA32 should be used 1061 + * @alloc_flags: TTM_ALLOCATION_POOL_ flags 1072 1062 * 1073 1063 * Initialize the pool and its pool types. 
1074 1064 */ 1075 1065 void ttm_pool_init(struct ttm_pool *pool, struct device *dev, 1076 - int nid, bool use_dma_alloc, bool use_dma32) 1066 + int nid, unsigned int alloc_flags) 1077 1067 { 1078 1068 unsigned int i, j; 1079 1069 1080 - WARN_ON(!dev && use_dma_alloc); 1070 + WARN_ON(!dev && (alloc_flags & TTM_ALLOCATION_POOL_USE_DMA_ALLOC)); 1081 1071 1082 1072 pool->dev = dev; 1083 1073 pool->nid = nid; 1084 - pool->use_dma_alloc = use_dma_alloc; 1085 - pool->use_dma32 = use_dma32; 1074 + pool->alloc_flags = alloc_flags; 1086 1075 1087 1076 for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { 1088 1077 for (j = 0; j < NR_PAGE_ORDERS; ++j) { ··· 1246 1239 { 1247 1240 unsigned int i; 1248 1241 1249 - if (!pool->use_dma_alloc && pool->nid == NUMA_NO_NODE) { 1242 + if (!ttm_pool_uses_dma_alloc(pool) && pool->nid == NUMA_NO_NODE) { 1250 1243 seq_puts(m, "unused\n"); 1251 1244 return 0; 1252 1245 } ··· 1257 1250 for (i = 0; i < TTM_NUM_CACHING_TYPES; ++i) { 1258 1251 if (!ttm_pool_select_type(pool, i, 0)) 1259 1252 continue; 1260 - if (pool->use_dma_alloc) 1253 + if (ttm_pool_uses_dma_alloc(pool)) 1261 1254 seq_puts(m, "DMA "); 1262 1255 else 1263 1256 seq_printf(m, "N%d ", pool->nid);
+25
drivers/gpu/drm/ttm/ttm_pool_internal.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 + /* Copyright (c) 2025 Valve Corporation */ 3 + 4 + #ifndef _TTM_POOL_INTERNAL_H_ 5 + #define _TTM_POOL_INTERNAL_H_ 6 + 7 + #include <drm/ttm/ttm_allocation.h> 8 + #include <drm/ttm/ttm_pool.h> 9 + 10 + static inline bool ttm_pool_uses_dma_alloc(struct ttm_pool *pool) 11 + { 12 + return pool->alloc_flags & TTM_ALLOCATION_POOL_USE_DMA_ALLOC; 13 + } 14 + 15 + static inline bool ttm_pool_uses_dma32(struct ttm_pool *pool) 16 + { 17 + return pool->alloc_flags & TTM_ALLOCATION_POOL_USE_DMA32; 18 + } 19 + 20 + static inline unsigned int ttm_pool_beneficial_order(struct ttm_pool *pool) 21 + { 22 + return pool->alloc_flags & 0xff; 23 + } 24 + 25 + #endif
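The new header above folds the old `use_dma_alloc`/`use_dma32` booleans into a single `alloc_flags` word; the `& 0xff` mask suggests the low byte carries the "beneficial order" hint, so the boolean flags must sit above bit 7. A standalone userspace sketch of that packing (the bit positions are assumptions for illustration; the real values live in <drm/ttm/ttm_allocation.h>):

```c
#include <stdbool.h>

/* Assumed encoding, mirroring the helpers in ttm_pool_internal.h:
 * low byte = beneficial allocation order, higher bits = flags. */
#define POOL_USE_DMA_ALLOC (1u << 8)
#define POOL_USE_DMA32     (1u << 9)

struct pool_flags { unsigned int alloc_flags; };

static bool pool_uses_dma_alloc(const struct pool_flags *p)
{
	return p->alloc_flags & POOL_USE_DMA_ALLOC;
}

static bool pool_uses_dma32(const struct pool_flags *p)
{
	return p->alloc_flags & POOL_USE_DMA32;
}

/* Largest allocation order the device reports as beneficial. */
static unsigned int pool_beneficial_order(const struct pool_flags *p)
{
	return p->alloc_flags & 0xff;
}
```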
+1
drivers/gpu/drm/ttm/ttm_resource.c
··· 34 34 #include <drm/ttm/ttm_resource.h> 35 35 #include <drm/ttm/ttm_tt.h> 36 36 37 + #include <drm/drm_print.h> 37 38 #include <drm/drm_util.h> 38 39 39 40 /* Detach the cursor from the bulk move list*/
+7 -4
drivers/gpu/drm/ttm/ttm_tt.c
··· 40 40 #include <linux/shmem_fs.h> 41 41 #include <drm/drm_cache.h> 42 42 #include <drm/drm_device.h> 43 + #include <drm/drm_print.h> 43 44 #include <drm/drm_util.h> 44 45 #include <drm/ttm/ttm_backup.h> 45 46 #include <drm/ttm/ttm_bo.h> 46 47 #include <drm/ttm/ttm_tt.h> 47 48 48 49 #include "ttm_module.h" 50 + #include "ttm_pool_internal.h" 49 51 50 52 static unsigned long ttm_pages_limit; 51 53 ··· 95 93 * mapped TT pages need to be decrypted or otherwise the drivers 96 94 * will end up sending encrypted mem to the gpu. 97 95 */ 98 - if (bdev->pool.use_dma_alloc && cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) { 96 + if (ttm_pool_uses_dma_alloc(&bdev->pool) && 97 + cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT)) { 99 98 page_flags |= TTM_TT_FLAG_DECRYPTED; 100 99 drm_info_once(ddev, "TT memory decryption enabled."); 101 100 } ··· 381 378 382 379 if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL)) { 383 380 atomic_long_add(ttm->num_pages, &ttm_pages_allocated); 384 - if (bdev->pool.use_dma32) 381 + if (ttm_pool_uses_dma32(&bdev->pool)) 385 382 atomic_long_add(ttm->num_pages, 386 383 &ttm_dma32_pages_allocated); 387 384 } ··· 419 416 error: 420 417 if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL)) { 421 418 atomic_long_sub(ttm->num_pages, &ttm_pages_allocated); 422 - if (bdev->pool.use_dma32) 419 + if (ttm_pool_uses_dma32(&bdev->pool)) 423 420 atomic_long_sub(ttm->num_pages, 424 421 &ttm_dma32_pages_allocated); 425 422 } ··· 442 439 443 440 if (!(ttm->page_flags & TTM_TT_FLAG_EXTERNAL)) { 444 441 atomic_long_sub(ttm->num_pages, &ttm_pages_allocated); 445 - if (bdev->pool.use_dma32) 442 + if (ttm_pool_uses_dma32(&bdev->pool)) 446 443 atomic_long_sub(ttm->num_pages, 447 444 &ttm_dma32_pages_allocated); 448 445 }
+1
drivers/gpu/drm/tve200/tve200_display.c
··· 21 21 #include <drm/drm_gem_atomic_helper.h> 22 22 #include <drm/drm_gem_dma_helper.h> 23 23 #include <drm/drm_panel.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_vblank.h> 25 26 26 27 #include "tve200_drm.h"
+1
drivers/gpu/drm/udl/udl_edid.c
··· 4 4 5 5 #include <drm/drm_drv.h> 6 6 #include <drm/drm_edid.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include "udl_drv.h" 9 10 #include "udl_edid.h"
+2
drivers/gpu/drm/v3d/v3d_bo.c
··· 18 18 #include <linux/dma-buf.h> 19 19 #include <linux/vmalloc.h> 20 20 21 + #include <drm/drm_print.h> 22 + 21 23 #include "v3d_drv.h" 22 24 #include "uapi/drm/v3d_drm.h" 23 25
+1
drivers/gpu/drm/v3d/v3d_debugfs.c
··· 8 8 #include <linux/string_helpers.h> 9 9 10 10 #include <drm/drm_debugfs.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "v3d_drv.h" 13 14 #include "v3d_regs.h"
+1
drivers/gpu/drm/v3d/v3d_drv.c
··· 25 25 26 26 #include <drm/drm_drv.h> 27 27 #include <drm/drm_managed.h> 28 + #include <drm/drm_print.h> 28 29 #include <uapi/drm/v3d_drm.h> 29 30 30 31 #include "v3d_drv.h"
+1
drivers/gpu/drm/v3d/v3d_gem.c
··· 11 11 #include <linux/uaccess.h> 12 12 13 13 #include <drm/drm_managed.h> 14 + #include <drm/drm_print.h> 14 15 15 16 #include "v3d_drv.h" 16 17 #include "v3d_regs.h"
+2
drivers/gpu/drm/v3d/v3d_gemfs.c
··· 5 5 #include <linux/mount.h> 6 6 #include <linux/fs_context.h> 7 7 8 + #include <drm/drm_print.h> 9 + 8 10 #include "v3d_drv.h" 9 11 10 12 void v3d_gemfs_init(struct v3d_dev *v3d)
+2
drivers/gpu/drm/v3d/v3d_irq.c
··· 16 16 #include <linux/platform_device.h> 17 17 #include <linux/sched/clock.h> 18 18 19 + #include <drm/drm_print.h> 20 + 19 21 #include "v3d_drv.h" 20 22 #include "v3d_regs.h" 21 23 #include "v3d_trace.h"
+1
drivers/gpu/drm/v3d/v3d_sched.c
··· 21 21 #include <linux/sched/clock.h> 22 22 #include <linux/kthread.h> 23 23 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_syncobj.h> 25 26 26 27 #include "v3d_drv.h"
+1
drivers/gpu/drm/v3d/v3d_submit.c
··· 4 4 * Copyright (C) 2023 Raspberry Pi 5 5 */ 6 6 7 + #include <drm/drm_print.h> 7 8 #include <drm/drm_syncobj.h> 8 9 9 10 #include "v3d_drv.h"
+1
drivers/gpu/drm/vboxvideo/vbox_irq.c
··· 12 12 #include <linux/pci.h> 13 13 14 14 #include <drm/drm_drv.h> 15 + #include <drm/drm_print.h> 15 16 #include <drm/drm_probe_helper.h> 16 17 17 18 #include "vbox_drv.h"
+1
drivers/gpu/drm/vboxvideo/vbox_main.c
··· 12 12 #include <linux/vbox_err.h> 13 13 14 14 #include <drm/drm_damage_helper.h> 15 + #include <drm/drm_print.h> 15 16 16 17 #include "vbox_drv.h" 17 18 #include "vboxvideo_guest.h"
+1
drivers/gpu/drm/vboxvideo/vbox_mode.c
··· 22 22 #include <drm/drm_gem_atomic_helper.h> 23 23 #include <drm/drm_gem_framebuffer_helper.h> 24 24 #include <drm/drm_plane_helper.h> 25 + #include <drm/drm_print.h> 25 26 #include <drm/drm_probe_helper.h> 26 27 27 28 #include "hgsmi_channels.h"
+1
drivers/gpu/drm/vboxvideo/vbox_ttm.c
··· 8 8 */ 9 9 #include <linux/pci.h> 10 10 #include <drm/drm_file.h> 11 + #include <drm/drm_print.h> 11 12 #include "vbox_drv.h" 12 13 13 14 int vbox_mm_init(struct vbox_private *vbox)
+1
drivers/gpu/drm/vc4/vc4_bo.c
··· 19 19 #include <linux/dma-buf.h> 20 20 21 21 #include <drm/drm_fourcc.h> 22 + #include <drm/drm_print.h> 22 23 23 24 #include "vc4_drv.h" 24 25 #include "uapi/drm/vc4_drm.h"
+1
drivers/gpu/drm/vc4/vc4_debugfs.c
··· 4 4 */ 5 5 6 6 #include <drm/drm_drv.h> 7 + #include <drm/drm_print.h> 7 8 8 9 #include <linux/seq_file.h> 9 10 #include <linux/circ_buf.h>
+1
drivers/gpu/drm/vc4/vc4_dpi.c
··· 17 17 #include <drm/drm_edid.h> 18 18 #include <drm/drm_of.h> 19 19 #include <drm/drm_panel.h> 20 + #include <drm/drm_print.h> 20 21 #include <drm/drm_probe_helper.h> 21 22 #include <drm/drm_simple_kms_helper.h> 22 23 #include <linux/clk.h>
+1
drivers/gpu/drm/vc4/vc4_drv.c
··· 36 36 #include <drm/drm_drv.h> 37 37 #include <drm/drm_fbdev_dma.h> 38 38 #include <drm/drm_fourcc.h> 39 + #include <drm/drm_print.h> 39 40 #include <drm/drm_vblank.h> 40 41 41 42 #include <soc/bcm2835/raspberrypi-firmware.h>
+1
drivers/gpu/drm/vc4/vc4_dsi.c
··· 36 36 #include <drm/drm_mipi_dsi.h> 37 37 #include <drm/drm_of.h> 38 38 #include <drm/drm_panel.h> 39 + #include <drm/drm_print.h> 39 40 #include <drm/drm_probe_helper.h> 40 41 #include <drm/drm_simple_kms_helper.h> 41 42
+1
drivers/gpu/drm/vc4/vc4_gem.c
··· 30 30 #include <linux/dma-fence-array.h> 31 31 32 32 #include <drm/drm_exec.h> 33 + #include <drm/drm_print.h> 33 34 #include <drm/drm_syncobj.h> 34 35 35 36 #include "uapi/drm/vc4_drm.h"
+1
drivers/gpu/drm/vc4/vc4_hdmi.c
··· 39 39 #include <drm/drm_atomic_helper.h> 40 40 #include <drm/drm_drv.h> 41 41 #include <drm/drm_edid.h> 42 + #include <drm/drm_print.h> 42 43 #include <drm/drm_probe_helper.h> 43 44 #include <drm/drm_simple_kms_helper.h> 44 45 #include <linux/clk.h>
+1
drivers/gpu/drm/vc4/vc4_hvs.c
··· 26 26 27 27 #include <drm/drm_atomic_helper.h> 28 28 #include <drm/drm_drv.h> 29 + #include <drm/drm_print.h> 29 30 #include <drm/drm_vblank.h> 30 31 31 32 #include <soc/bcm2835/raspberrypi-firmware.h>
+1
drivers/gpu/drm/vc4/vc4_irq.c
··· 48 48 #include <linux/platform_device.h> 49 49 50 50 #include <drm/drm_drv.h> 51 + #include <drm/drm_print.h> 51 52 52 53 #include "vc4_drv.h" 53 54 #include "vc4_regs.h"
+1
drivers/gpu/drm/vc4/vc4_kms.c
··· 19 19 #include <drm/drm_crtc.h> 20 20 #include <drm/drm_fourcc.h> 21 21 #include <drm/drm_gem_framebuffer_helper.h> 22 + #include <drm/drm_print.h> 22 23 #include <drm/drm_probe_helper.h> 23 24 #include <drm/drm_vblank.h> 24 25
+2
drivers/gpu/drm/vc4/vc4_perfmon.c
··· 9 9 * The V3D block provides 16 hardware counters which can count various events. 10 10 */ 11 11 12 + #include <drm/drm_print.h> 13 + 12 14 #include "vc4_drv.h" 13 15 #include "vc4_regs.h" 14 16
+1
drivers/gpu/drm/vc4/vc4_plane.c
··· 24 24 #include <drm/drm_fourcc.h> 25 25 #include <drm/drm_framebuffer.h> 26 26 #include <drm/drm_gem_atomic_helper.h> 27 + #include <drm/drm_print.h> 27 28 28 29 #include "uapi/drm/vc4_drm.h" 29 30
+2
drivers/gpu/drm/vc4/vc4_render_cl.c
··· 35 35 * actually fairly low. 36 36 */ 37 37 38 + #include <drm/drm_print.h> 39 + 38 40 #include "uapi/drm/vc4_drm.h" 39 41 #include "vc4_drv.h" 40 42 #include "vc4_packet.h"
+1
drivers/gpu/drm/vc4/vc4_txp.c
··· 21 21 #include <drm/drm_fourcc.h> 22 22 #include <drm/drm_framebuffer.h> 23 23 #include <drm/drm_panel.h> 24 + #include <drm/drm_print.h> 24 25 #include <drm/drm_probe_helper.h> 25 26 #include <drm/drm_vblank.h> 26 27 #include <drm/drm_writeback.h>
+2
drivers/gpu/drm/vc4/vc4_v3d.c
··· 10 10 #include <linux/platform_device.h> 11 11 #include <linux/pm_runtime.h> 12 12 13 + #include <drm/drm_print.h> 14 + 13 15 #include "vc4_drv.h" 14 16 #include "vc4_regs.h" 15 17
+2
drivers/gpu/drm/vc4/vc4_validate.c
··· 43 43 * to use) happens. 44 44 */ 45 45 46 + #include <drm/drm_print.h> 47 + 46 48 #include "uapi/drm/vc4_drm.h" 47 49 #include "vc4_drv.h" 48 50 #include "vc4_packet.h"
+2
drivers/gpu/drm/vc4/vc4_validate_shaders.c
··· 41 41 * this validation is only performed at BO creation time. 42 42 */ 43 43 44 + #include <drm/drm_print.h> 45 + 44 46 #include "vc4_drv.h" 45 47 #include "vc4_qpu_defines.h" 46 48
+1
drivers/gpu/drm/vc4/vc4_vec.c
··· 17 17 #include <drm/drm_drv.h> 18 18 #include <drm/drm_edid.h> 19 19 #include <drm/drm_panel.h> 20 + #include <drm/drm_print.h> 20 21 #include <drm/drm_probe_helper.h> 21 22 #include <drm/drm_simple_kms_helper.h> 22 23 #include <linux/clk.h>
+1
drivers/gpu/drm/virtio/virtgpu_debugfs.c
··· 27 27 28 28 #include <drm/drm_debugfs.h> 29 29 #include <drm/drm_file.h> 30 + #include <drm/drm_print.h> 30 31 31 32 #include "virtgpu_drv.h" 32 33
+1
drivers/gpu/drm/virtio/virtgpu_display.c
··· 30 30 #include <drm/drm_edid.h> 31 31 #include <drm/drm_fourcc.h> 32 32 #include <drm/drm_gem_framebuffer_helper.h> 33 + #include <drm/drm_print.h> 33 34 #include <drm/drm_probe_helper.h> 34 35 #include <drm/drm_simple_kms_helper.h> 35 36 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/virtio/virtgpu_drv.c
··· 39 39 #include <drm/drm_drv.h> 40 40 #include <drm/drm_fbdev_shmem.h> 41 41 #include <drm/drm_file.h> 42 + #include <drm/drm_print.h> 42 43 43 44 #include "virtgpu_drv.h" 44 45
+1
drivers/gpu/drm/virtio/virtgpu_kms.c
··· 29 29 30 30 #include <drm/drm_file.h> 31 31 #include <drm/drm_managed.h> 32 + #include <drm/drm_print.h> 32 33 33 34 #include "virtgpu_drv.h" 34 35
+2
drivers/gpu/drm/virtio/virtgpu_object.c
··· 26 26 #include <linux/dma-mapping.h> 27 27 #include <linux/moduleparam.h> 28 28 29 + #include <drm/drm_print.h> 30 + 29 31 #include "virtgpu_drv.h" 30 32 31 33 static int virtio_gpu_virglrenderer_workaround = 1;
+1
drivers/gpu/drm/virtio/virtgpu_plane.c
··· 30 30 #include <linux/virtio_dma_buf.h> 31 31 #include <drm/drm_managed.h> 32 32 #include <drm/drm_panic.h> 33 + #include <drm/drm_print.h> 33 34 34 35 #include "virtgpu_drv.h" 35 36
+1
drivers/gpu/drm/virtio/virtgpu_vq.c
··· 32 32 #include <linux/virtio_ring.h> 33 33 34 34 #include <drm/drm_edid.h> 35 + #include <drm/drm_print.h> 35 36 36 37 #include "virtgpu_drv.h" 37 38 #include "virtgpu_trace.h"
+1
drivers/gpu/drm/vkms/vkms_composer.c
··· 8 8 #include <drm/drm_fourcc.h> 9 9 #include <drm/drm_fixed.h> 10 10 #include <drm/drm_gem_framebuffer_helper.h> 11 + #include <drm/drm_print.h> 11 12 #include <drm/drm_vblank.h> 12 13 #include <linux/minmax.h> 13 14
+15 -5
drivers/gpu/drm/vkms/vkms_configfs.c
··· 204 204 { 205 205 struct vkms_configfs_device *dev; 206 206 struct vkms_configfs_crtc *crtc; 207 + int ret; 207 208 208 209 dev = child_group_to_vkms_configfs_device(group); 209 210 ··· 220 219 221 220 crtc->config = vkms_config_create_crtc(dev->config); 222 221 if (IS_ERR(crtc->config)) { 222 + ret = PTR_ERR(crtc->config); 223 223 kfree(crtc); 224 - return ERR_CAST(crtc->config); 224 + return ERR_PTR(ret); 225 225 } 226 226 227 227 config_group_init_type_name(&crtc->group, name, &crtc_item_type); ··· 360 358 { 361 359 struct vkms_configfs_device *dev; 362 360 struct vkms_configfs_plane *plane; 361 + int ret; 363 362 364 363 dev = child_group_to_vkms_configfs_device(group); 365 364 ··· 376 373 377 374 plane->config = vkms_config_create_plane(dev->config); 378 375 if (IS_ERR(plane->config)) { 376 + ret = PTR_ERR(plane->config); 379 377 kfree(plane); 380 - return ERR_CAST(plane->config); 378 + return ERR_PTR(ret); 381 379 } 382 380 383 381 config_group_init_type_name(&plane->group, name, &plane_item_type); ··· 476 472 { 477 473 struct vkms_configfs_device *dev; 478 474 struct vkms_configfs_encoder *encoder; 475 + int ret; 479 476 480 477 dev = child_group_to_vkms_configfs_device(group); 481 478 ··· 492 487 493 488 encoder->config = vkms_config_create_encoder(dev->config); 494 489 if (IS_ERR(encoder->config)) { 490 + ret = PTR_ERR(encoder->config); 495 491 kfree(encoder); 496 - return ERR_CAST(encoder->config); 492 + return ERR_PTR(ret); 497 493 } 498 494 499 495 config_group_init_type_name(&encoder->group, name, ··· 643 637 { 644 638 struct vkms_configfs_device *dev; 645 639 struct vkms_configfs_connector *connector; 640 + int ret; 646 641 647 642 dev = child_group_to_vkms_configfs_device(group); 648 643 ··· 659 652 660 653 connector->config = vkms_config_create_connector(dev->config); 661 654 if (IS_ERR(connector->config)) { 655 + ret = PTR_ERR(connector->config); 662 656 kfree(connector); 663 - return ERR_CAST(connector->config); 657 + return ERR_PTR(ret); 664 
658 } 665 659 666 660 config_group_init_type_name(&connector->group, name, ··· 764 756 const char *name) 765 757 { 766 758 struct vkms_configfs_device *dev; 759 + int ret; 767 760 768 761 if (strcmp(name, DEFAULT_DEVICE_NAME) == 0) 769 762 return ERR_PTR(-EINVAL); ··· 775 766 776 767 dev->config = vkms_config_create(name); 777 768 if (IS_ERR(dev->config)) { 769 + ret = PTR_ERR(dev->config); 778 770 kfree(dev); 779 - return ERR_CAST(dev->config); 771 + return ERR_PTR(ret); 780 772 } 781 773 782 774 config_group_init_type_name(&dev->group, name, &device_item_type);
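Each vkms_configfs hunk above fixes the same latent use-after-free: the old code called `kfree()` on the container and then read the embedded error pointer back out of freed memory via `ERR_CAST(obj->config)`; the fix captures the error code first. A standalone userspace sketch of the corrected pattern, with toy stand-ins for the kernel's `ERR_PTR()` machinery:

```c
#include <errno.h>
#include <stdlib.h>

/* Toy stand-ins for the kernel's <linux/err.h> helpers, for
 * illustration only. */
static void *ERR_PTR(long err) { return (void *)err; }
static long PTR_ERR(const void *ptr) { return (long)ptr; }
static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-4095;
}

struct child { void *config; };

/* The pattern the fix applies: read the error code out of the
 * embedded pointer *before* freeing the container, since reading
 * c->config after free(c) touches freed memory. */
static void *make_child(void *config_or_err)
{
	struct child *c = malloc(sizeof(*c));
	long ret;

	if (!c)
		return ERR_PTR(-ENOMEM);

	c->config = config_or_err;
	if (IS_ERR(c->config)) {
		ret = PTR_ERR(c->config);
		free(c);
		return ERR_PTR(ret);
	}
	return c;
}
```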
+1
drivers/gpu/drm/vkms/vkms_crtc.c
··· 5 5 #include <drm/drm_atomic.h> 6 6 #include <drm/drm_atomic_helper.h> 7 7 #include <drm/drm_managed.h> 8 + #include <drm/drm_print.h> 8 9 #include <drm/drm_probe_helper.h> 9 10 #include <drm/drm_vblank.h> 10 11 #include <drm/drm_vblank_helper.h>
+1
drivers/gpu/drm/vkms/vkms_drv.c
··· 23 23 #include <drm/drm_gem_framebuffer_helper.h> 24 24 #include <drm/drm_ioctl.h> 25 25 #include <drm/drm_managed.h> 26 + #include <drm/drm_print.h> 26 27 #include <drm/drm_probe_helper.h> 27 28 #include <drm/drm_gem_shmem_helper.h> 28 29 #include <drm/drm_vblank.h>
+1
drivers/gpu/drm/vkms/vkms_output.c
··· 4 4 #include "vkms_connector.h" 5 5 #include "vkms_drv.h" 6 6 #include <drm/drm_managed.h> 7 + #include <drm/drm_print.h> 7 8 8 9 int vkms_output_init(struct vkms_device *vkmsdev) 9 10 {
+1
drivers/gpu/drm/vkms/vkms_plane.c
··· 8 8 #include <drm/drm_fourcc.h> 9 9 #include <drm/drm_gem_atomic_helper.h> 10 10 #include <drm/drm_gem_framebuffer_helper.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "vkms_drv.h" 13 14 #include "vkms_formats.h"
+1
drivers/gpu/drm/vkms/vkms_writeback.c
··· 6 6 #include <drm/drm_edid.h> 7 7 #include <drm/drm_fourcc.h> 8 8 #include <drm/drm_writeback.h> 9 + #include <drm/drm_print.h> 9 10 #include <drm/drm_probe_helper.h> 10 11 #include <drm/drm_atomic_helper.h> 11 12 #include <drm/drm_gem_framebuffer_helper.h>
+2 -2
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 1023 1023 dev_priv->drm.dev, 1024 1024 dev_priv->drm.anon_inode->i_mapping, 1025 1025 dev_priv->drm.vma_offset_manager, 1026 - dev_priv->map_mode == vmw_dma_alloc_coherent, 1027 - false); 1026 + (dev_priv->map_mode == vmw_dma_alloc_coherent) ? 1027 + TTM_ALLOCATION_POOL_USE_DMA_ALLOC : 0); 1028 1028 if (unlikely(ret != 0)) { 1029 1029 drm_err(&dev_priv->drm, 1030 1030 "Failed initializing TTM buffer object driver.\n");
+1
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 16 16 #include <drm/drm_auth.h> 17 17 #include <drm/drm_device.h> 18 18 #include <drm/drm_file.h> 19 + #include <drm/drm_print.h> 19 20 #include <drm/drm_rect.h> 20 21 21 22 #include <drm/ttm/ttm_execbuf_util.h>
+1 -1
drivers/gpu/drm/xe/xe_device.c
··· 437 437 438 438 err = ttm_device_init(&xe->ttm, &xe_ttm_funcs, xe->drm.dev, 439 439 xe->drm.anon_inode->i_mapping, 440 - xe->drm.vma_offset_manager, false, false); 440 + xe->drm.vma_offset_manager, 0); 441 441 if (WARN_ON(err)) 442 442 goto err; 443 443
+2
drivers/gpu/drm/xe/xe_heci_gsc.c
··· 8 8 #include <linux/pci.h> 9 9 #include <linux/sizes.h> 10 10 11 + #include <drm/drm_print.h> 12 + 11 13 #include "xe_device_types.h" 12 14 #include "xe_drv.h" 13 15 #include "xe_heci_gsc.h"
+1
drivers/gpu/drm/xe/xe_tuning.c
··· 8 8 #include <kunit/visibility.h> 9 9 10 10 #include <drm/drm_managed.h> 11 + #include <drm/drm_print.h> 11 12 12 13 #include "regs/xe_gt_regs.h" 13 14 #include "xe_gt_types.h"
+1
drivers/gpu/drm/xen/xen_drm_front.c
··· 18 18 #include <drm/drm_probe_helper.h> 19 19 #include <drm/drm_file.h> 20 20 #include <drm/drm_gem.h> 21 + #include <drm/drm_print.h> 21 22 22 23 #include <xen/platform_pci.h> 23 24 #include <xen/xen.h>
+1
drivers/gpu/drm/xen/xen_drm_front_gem.c
··· 15 15 16 16 #include <drm/drm_gem.h> 17 17 #include <drm/drm_prime.h> 18 + #include <drm/drm_print.h> 18 19 #include <drm/drm_probe_helper.h> 19 20 20 21 #include <xen/balloon.h>
+1
drivers/gpu/drm/xen/xen_drm_front_kms.c
··· 16 16 #include <drm/drm_gem.h> 17 17 #include <drm/drm_gem_atomic_helper.h> 18 18 #include <drm/drm_gem_framebuffer_helper.h> 19 + #include <drm/drm_print.h> 19 20 #include <drm/drm_probe_helper.h> 20 21 #include <drm/drm_vblank.h> 21 22
+8
include/drm/drm_atomic.h
··· 524 524 bool duplicated : 1; 525 525 526 526 /** 527 + * @checked: 528 + * 529 + * Indicates the state has been checked and thus must no longer 530 + * be mutated. For internal use only, do not consult from drivers. 531 + */ 532 + bool checked : 1; 533 + 534 + /** 527 535 * @planes: 528 536 * 529 537 * Pointer to array of @drm_plane and @drm_plane_state part of this
+1 -1
include/drm/drm_buddy.h
··· 12 12 #include <linux/sched.h> 13 13 #include <linux/rbtree.h> 14 14 15 - #include <drm/drm_print.h> 15 + struct drm_printer; 16 16 17 17 #define DRM_BUDDY_RANGE_ALLOCATION BIT(0) 18 18 #define DRM_BUDDY_TOPDOWN_ALLOCATION BIT(1)
+6 -14
include/drm/drm_client.h
··· 174 174 struct drm_client_dev *client; 175 175 176 176 /** 177 - * @pitch: Buffer pitch 178 - */ 179 - u32 pitch; 180 - 181 - /** 182 177 * @gem: GEM object backing this buffer 183 178 * 184 - * FIXME: The dependency on GEM here isn't required, we could 185 - * convert the driver handle to a dma-buf instead and use the 186 - * backend-agnostic dma-buf vmap support instead. This would 187 - * require that the handle2fd prime ioctl is reworked to pull the 188 - * fd_install step out of the driver backend hooks, to make that 189 - * final step optional for internal users. 179 + * FIXME: The DRM framebuffer holds a reference on its GEM 180 + * buffer objects. Do not use this field in new code and 181 + * update existing users. 190 182 */ 191 183 struct drm_gem_object *gem; 192 184 ··· 194 202 }; 195 203 196 204 struct drm_client_buffer * 197 - drm_client_framebuffer_create(struct drm_client_dev *client, u32 width, u32 height, u32 format); 198 - void drm_client_framebuffer_delete(struct drm_client_buffer *buffer); 199 - int drm_client_framebuffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); 205 + drm_client_buffer_create_dumb(struct drm_client_dev *client, u32 width, u32 height, u32 format); 206 + void drm_client_buffer_delete(struct drm_client_buffer *buffer); 207 + int drm_client_buffer_flush(struct drm_client_buffer *buffer, struct drm_rect *rect); 200 208 int drm_client_buffer_vmap_local(struct drm_client_buffer *buffer, 201 209 struct iosys_map *map_copy); 202 210 void drm_client_buffer_vunmap_local(struct drm_client_buffer *buffer);
+6
include/drm/drm_edid.h
··· 340 340 const char *name; 341 341 }; 342 342 343 + #define DRM_EDID_IDENT_INIT(_vend_chr_0, _vend_chr_1, _vend_chr_2, _product_id, _name) \ 344 + { \ 345 + .panel_id = drm_edid_encode_panel_id(_vend_chr_0, _vend_chr_1, _vend_chr_2, _product_id), \ 346 + .name = _name, \ 347 + } 348 + 343 349 #define EDID_PRODUCT_ID(e) ((e)->prod_code[0] | ((e)->prod_code[1] << 8)) 344 350 345 351 /* Short Audio Descriptor */
+1 -1
include/drm/drm_mm.h
··· 48 48 #endif 49 49 #include <linux/types.h> 50 50 51 - #include <drm/drm_print.h> 51 + struct drm_printer; 52 52 53 53 #ifdef CONFIG_DRM_DEBUG_MM 54 54 #define DRM_MM_BUG_ON(expr) BUG_ON(expr)
+12
include/drm/ttm/ttm_allocation.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 + /* Copyright (c) 2025 Valve Corporation */ 3 + 4 + #ifndef _TTM_ALLOCATION_H_ 5 + #define _TTM_ALLOCATION_H_ 6 + 7 + #define TTM_ALLOCATION_POOL_BENEFICIAL_ORDER(n) ((n) & 0xff) /* Max order which caller can benefit from */ 8 + #define TTM_ALLOCATION_POOL_USE_DMA_ALLOC BIT(8) /* Use coherent DMA allocations. */ 9 + #define TTM_ALLOCATION_POOL_USE_DMA32 BIT(9) /* Use GFP_DMA32 allocations. */ 10 + #define TTM_ALLOCATION_PROPAGATE_ENOSPC BIT(10) /* Do not convert ENOSPC from resource managers to ENOMEM. */ 11 + 12 + #endif
+7 -1
include/drm/ttm/ttm_device.h
··· 27 27 28 28 #include <linux/types.h> 29 29 #include <linux/workqueue.h> 30 + #include <drm/ttm/ttm_allocation.h> 30 31 #include <drm/ttm/ttm_resource.h> 31 32 #include <drm/ttm/ttm_pool.h> 32 33 ··· 221 220 struct list_head device_list; 222 221 223 222 /** 223 + * @alloc_flags: TTM_ALLOCATION_ flags. 224 + */ 225 + unsigned int alloc_flags; 226 + 227 + /** 224 228 * @funcs: Function table for the device. 225 229 * Constant after bo device init 226 230 */ ··· 298 292 int ttm_device_init(struct ttm_device *bdev, const struct ttm_device_funcs *funcs, 299 293 struct device *dev, struct address_space *mapping, 300 294 struct drm_vma_offset_manager *vma_manager, 301 - bool use_dma_alloc, bool use_dma32); 295 + unsigned int alloc_flags); 302 296 void ttm_device_fini(struct ttm_device *bdev); 303 297 void ttm_device_clear_dma_mappings(struct ttm_device *bdev); 304 298
+3 -5
include/drm/ttm/ttm_pool.h
··· 64 64 * 65 65 * @dev: the device we allocate pages for 66 66 * @nid: which numa node to use 67 - * @use_dma_alloc: if coherent DMA allocations should be used 68 - * @use_dma32: if GFP_DMA32 should be used 67 + * @alloc_flags: TTM_ALLOCATION_POOL_ flags 69 68 * @caching: pools for each caching/order 70 69 */ 71 70 struct ttm_pool { 72 71 struct device *dev; 73 72 int nid; 74 73 75 - bool use_dma_alloc; 76 - bool use_dma32; 74 + unsigned int alloc_flags; 77 75 78 76 struct { 79 77 struct ttm_pool_type orders[NR_PAGE_ORDERS]; ··· 83 85 void ttm_pool_free(struct ttm_pool *pool, struct ttm_tt *tt); 84 86 85 87 void ttm_pool_init(struct ttm_pool *pool, struct device *dev, 86 - int nid, bool use_dma_alloc, bool use_dma32); 88 + int nid, unsigned int alloc_flags); 87 89 void ttm_pool_fini(struct ttm_pool *pool); 88 90 89 91 int ttm_pool_debugfs(struct ttm_pool *pool, struct seq_file *m);
+2 -1
include/drm/ttm/ttm_resource.h
··· 31 31 #include <linux/iosys-map.h> 32 32 #include <linux/dma-fence.h> 33 33 34 - #include <drm/drm_print.h> 35 34 #include <drm/ttm/ttm_caching.h> 36 35 #include <drm/ttm/ttm_kmap_iter.h> 37 36 38 37 #define TTM_MAX_BO_PRIORITY 4U 39 38 #define TTM_NUM_MEM_TYPES 9 40 39 40 + struct dentry; 41 41 struct dmem_cgroup_device; 42 + struct drm_printer; 42 43 struct ttm_device; 43 44 struct ttm_resource_manager; 44 45 struct ttm_resource;
+34
include/uapi/drm/amdxdna_accel.h
··· 442 442 DRM_AMDXDNA_QUERY_HW_CONTEXTS, 443 443 DRM_AMDXDNA_QUERY_FIRMWARE_VERSION = 8, 444 444 DRM_AMDXDNA_GET_POWER_MODE, 445 + DRM_AMDXDNA_QUERY_TELEMETRY, 446 + DRM_AMDXDNA_QUERY_RESOURCE_INFO = 12, 447 + }; 448 + 449 + /** 450 + * struct amdxdna_drm_get_resource_info - Get resource information 451 + */ 452 + struct amdxdna_drm_get_resource_info { 453 + /** @npu_clk_max: max H-Clocks */ 454 + __u64 npu_clk_max; 455 + /** @npu_tops_max: max TOPs */ 456 + __u64 npu_tops_max; 457 + /** @npu_task_max: max number of tasks */ 458 + __u64 npu_task_max; 459 + /** @npu_tops_curr: current TOPs */ 460 + __u64 npu_tops_curr; 461 + /** @npu_task_curr: current number of tasks */ 462 + __u64 npu_task_curr; 463 + }; 464 + 465 + /** 466 + * struct amdxdna_drm_query_telemetry_header - Telemetry data header 467 + */ 468 + struct amdxdna_drm_query_telemetry_header { 469 + /** @major: Firmware telemetry interface major version number */ 470 + __u32 major; 471 + /** @minor: Firmware telemetry interface minor version number */ 472 + __u32 minor; 473 + /** @type: Telemetry query type */ 474 + __u32 type; 475 + /** @map_num_elements: Total number of elements in the map table */ 476 + __u32 map_num_elements; 477 + /** @map: Element map */ 478 + __u32 map[]; 445 479 }; 446 480 447 481 /**
+52
include/uapi/drm/ivpu_accel.h
··· 25 25 #define DRM_IVPU_CMDQ_CREATE 0x0b 26 26 #define DRM_IVPU_CMDQ_DESTROY 0x0c 27 27 #define DRM_IVPU_CMDQ_SUBMIT 0x0d 28 + #define DRM_IVPU_BO_CREATE_FROM_USERPTR 0x0e 28 29 29 30 #define DRM_IOCTL_IVPU_GET_PARAM \ 30 31 DRM_IOWR(DRM_COMMAND_BASE + DRM_IVPU_GET_PARAM, struct drm_ivpu_param) ··· 69 68 70 69 #define DRM_IOCTL_IVPU_CMDQ_SUBMIT \ 71 70 DRM_IOW(DRM_COMMAND_BASE + DRM_IVPU_CMDQ_SUBMIT, struct drm_ivpu_cmdq_submit) 71 + 72 + #define DRM_IOCTL_IVPU_BO_CREATE_FROM_USERPTR \ 73 + DRM_IOWR(DRM_COMMAND_BASE + DRM_IVPU_BO_CREATE_FROM_USERPTR, \ 74 + struct drm_ivpu_bo_create_from_userptr) 72 75 73 76 /** 74 77 * DOC: contexts ··· 132 127 * command queue destroy and submit job on specific command queue. 133 128 */ 134 129 #define DRM_IVPU_CAP_MANAGE_CMDQ 3 130 + /** 131 + * DRM_IVPU_CAP_BO_CREATE_FROM_USERPTR 132 + * 133 + * Driver supports creating buffer objects from user space memory pointers. 134 + * This allows creating GEM buffers from existing user memory regions. 135 + */ 136 + #define DRM_IVPU_CAP_BO_CREATE_FROM_USERPTR 4 135 137 136 138 /** 137 139 * struct drm_ivpu_param - Get/Set VPU parameters ··· 206 194 #define DRM_IVPU_BO_HIGH_MEM DRM_IVPU_BO_SHAVE_MEM 207 195 #define DRM_IVPU_BO_MAPPABLE 0x00000002 208 196 #define DRM_IVPU_BO_DMA_MEM 0x00000004 197 + #define DRM_IVPU_BO_READ_ONLY 0x00000008 209 198 210 199 #define DRM_IVPU_BO_CACHED 0x00000000 211 200 #define DRM_IVPU_BO_UNCACHED 0x00010000 ··· 217 204 (DRM_IVPU_BO_HIGH_MEM | \ 218 205 DRM_IVPU_BO_MAPPABLE | \ 219 206 DRM_IVPU_BO_DMA_MEM | \ 207 + DRM_IVPU_BO_READ_ONLY | \ 220 208 DRM_IVPU_BO_CACHE_MASK) 221 209 222 210 /** ··· 259 245 * 260 246 * Allocated BO will use write combining buffer for writes but reads will be 261 247 * uncached. 
248 + */ 249 + __u32 flags; 250 + 251 + /** @handle: Returned GEM object handle */ 252 + __u32 handle; 253 + 254 + /** @vpu_addr: Returned VPU virtual address */ 255 + __u64 vpu_addr; 256 + }; 257 + 258 + /** 259 + * struct drm_ivpu_bo_create_from_userptr - Create dma-buf from user pointer 260 + * 261 + * Create a GEM buffer object from a user pointer to a memory region. 262 + */ 263 + struct drm_ivpu_bo_create_from_userptr { 264 + /** @user_ptr: User pointer to memory region (must be page aligned) */ 265 + __u64 user_ptr; 266 + 267 + /** @size: Size of the memory region in bytes (must be page aligned) */ 268 + __u64 size; 269 + 270 + /** 271 + * @flags: 272 + * 273 + * Supported flags: 274 + * 275 + * %DRM_IVPU_BO_HIGH_MEM: 276 + * 277 + * Allocate VPU address from >4GB range. 278 + * 279 + * %DRM_IVPU_BO_DMA_MEM: 280 + * 281 + * Allocate from DMA memory range accessible by hardware DMA. 282 + * 283 + * %DRM_IVPU_BO_READ_ONLY: 284 + * 285 + * Allocate as a read-only buffer object. 262 286 */ 263 287 __u32 flags; 264 288