Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'drm-misc-next-2023-03-07' of git://anongit.freedesktop.org/drm/drm-misc into drm-next

drm-misc-next for v6.4-rc1:

Note: Only changes since pull request from 2023-02-23 are included here.

UAPI Changes:
- Convert rockchip bindings to YAML.
- Constify kobj_type structure in dma-buf.
- FBDEV cmdline parser fixes, and other small fbdev fixes for mode
parsing.

Cross-subsystem Changes:
- Add Neil Armstrong (Linaro) as DRM panel drivers maintainer.
- Actually signal the private stub dma-fence.

Core Changes:
- Add function for adding syncobj dep to sched_job and use it in panfrost, v3d.
- Improve DisplayID 2.0 topology parsing and EDID parsing in general.
- Add a gem eviction function and callback for generic GEM shrinker
purposes.
- Prepare to convert the shmem helper to use the GEM reservation lock instead
of its own locking. (The actual commit was reverted for now.)
- Move the suballocator from radeon and amdgpu drivers to core in preparation
for Xe.
- Assorted small fixes and documentation.
- Fixes to HPD polling.
- Assorted small fixes in simpledrm, bridge, accel, shmem-helper,
and the selftest of format-helper.
- Remove dummy resource when ttm bo is created, and during pipelined
gutting. Fix all drivers to accept a NULL ttm_bo->resource.
- Handle pinned BO moving prevention in ttm core.
- Set drm panel-bridge orientation before connector is registered.
- Remove dumb_destroy callback.
- Add documentation for the GEM_CLOSE, PRIME_HANDLE_TO_FD, PRIME_FD_TO_HANDLE, and GETFB2 ioctls.
- Add atomic enable_plane callback, use it in ast, mgag200, tidss.
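The syncobj dependency helper mentioned above can be sketched as follows. This is a non-compilable illustration of how a driver's submit path might use the new helper, in the spirit of the panfrost/v3d conversions; the `my_submit_add_in_sync` wrapper and its parameters are made up for illustration, only `drm_sched_job_add_syncobj_dependency()` is the helper this pull introduces:

```c
/* Hedged sketch: wiring a userspace-supplied syncobj handle into a
 * scheduler job as a dependency. The surrounding submit code is
 * hypothetical. */
static int my_submit_add_in_sync(struct drm_sched_job *job,
				 struct drm_file *file_priv,
				 u32 in_sync_handle)
{
	if (!in_sync_handle)
		return 0;

	/* Looks up the syncobj, grabs its fence and records it as a
	 * dependency the scheduler waits on before running the job
	 * (point 0 for a binary syncobj). */
	return drm_sched_job_add_syncobj_dependency(job, file_priv,
						    in_sync_handle, 0);
}
```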

Driver Changes:
- Use drm_gem_objects_lookup in vc4.
- Assorted small fixes to virtio, ast, bridge/tc358762, meson, nouveau.
- Allow virtio KMS to be disabled and compiled out.
- Add Radxa 8HD/10HD and Samsung AMS495QA01 panels.
- Fix ivpu compiler errors.
- Assorted fixes to drm/panel, malidp, rockchip, ivpu, amdgpu, vgem,
nouveau, vc4.
- Assorted cleanups, simplifications and fixes to vmwgfx.

Signed-off-by: Dave Airlie <airlied@redhat.com>

From: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/ac1f5186-54bb-02f4-ac56-907f5b76f3de@linux.intel.com

+4430 -3332
+63
Documentation/devicetree/bindings/display/bridge/analogix,dp.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/bridge/analogix,dp.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Analogix Display Port bridge 8 + 9 + maintainers: 10 + - Rob Herring <robh@kernel.org> 11 + 12 + properties: 13 + reg: 14 + maxItems: 1 15 + 16 + interrupts: 17 + maxItems: 1 18 + 19 + clocks: true 20 + 21 + clock-names: true 22 + 23 + phys: true 24 + 25 + phy-names: 26 + const: dp 27 + 28 + force-hpd: 29 + description: 30 + Indicate driver need force hpd when hpd detect failed, this 31 + is used for some eDP screen which don not have a hpd signal. 32 + 33 + hpd-gpios: 34 + description: 35 + Hotplug detect GPIO. 36 + Indicates which GPIO should be used for hotplug detection 37 + 38 + ports: 39 + $ref: /schemas/graph.yaml#/properties/ports 40 + 41 + properties: 42 + port@0: 43 + $ref: /schemas/graph.yaml#/properties/port 44 + description: 45 + Input node to receive pixel data. 46 + 47 + port@1: 48 + $ref: /schemas/graph.yaml#/properties/port 49 + description: 50 + Port node with one endpoint connected to a dp-connector node. 51 + 52 + required: 53 + - port@0 54 + - port@1 55 + 56 + required: 57 + - reg 58 + - interrupts 59 + - clock-names 60 + - clocks 61 + - ports 62 + 63 + additionalProperties: true
-51
Documentation/devicetree/bindings/display/bridge/analogix_dp.txt
··· 1 - Analogix Display Port bridge bindings 2 - 3 - Required properties for dp-controller: 4 - -compatible: 5 - platform specific such as: 6 - * "samsung,exynos5-dp" 7 - * "rockchip,rk3288-dp" 8 - * "rockchip,rk3399-edp" 9 - -reg: 10 - physical base address of the controller and length 11 - of memory mapped region. 12 - -interrupts: 13 - interrupt combiner values. 14 - -clocks: 15 - from common clock binding: handle to dp clock. 16 - -clock-names: 17 - from common clock binding: Shall be "dp". 18 - -phys: 19 - from general PHY binding: the phandle for the PHY device. 20 - -phy-names: 21 - from general PHY binding: Should be "dp". 22 - 23 - Optional properties for dp-controller: 24 - -force-hpd: 25 - Indicate driver need force hpd when hpd detect failed, this 26 - is used for some eDP screen which don't have hpd signal. 27 - -hpd-gpios: 28 - Hotplug detect GPIO. 29 - Indicates which GPIO should be used for hotplug detection 30 - -port@[X]: SoC specific port nodes with endpoint definitions as defined 31 - in Documentation/devicetree/bindings/media/video-interfaces.txt, 32 - please refer to the SoC specific binding document: 33 - * Documentation/devicetree/bindings/display/exynos/exynos_dp.txt 34 - * Documentation/devicetree/bindings/display/rockchip/analogix_dp-rockchip.txt 35 - 36 - [1]: Documentation/devicetree/bindings/media/video-interfaces.txt 37 - ------------------------------------------------------------------------------- 38 - 39 - Example: 40 - 41 - dp-controller { 42 - compatible = "samsung,exynos5-dp"; 43 - reg = <0x145b0000 0x10000>; 44 - interrupts = <10 3>; 45 - interrupt-parent = <&combiner>; 46 - clocks = <&clock 342>; 47 - clock-names = "dp"; 48 - 49 - phys = <&dp_phy>; 50 - phy-names = "dp"; 51 - };
+2 -12
Documentation/devicetree/bindings/display/bridge/snps,dw-mipi-dsi.yaml
··· 26 26 reg: 27 27 maxItems: 1 28 28 29 - clocks: 30 - items: 31 - - description: Module clock 32 - - description: DSI bus clock for either AHB and APB 33 - - description: Pixel clock for the DPI/RGB input 34 - minItems: 2 29 + clocks: true 35 30 36 - clock-names: 37 - items: 38 - - const: ref 39 - - const: pclk 40 - - const: px_clk 41 - minItems: 2 31 + clock-names: true 42 32 43 33 resets: 44 34 maxItems: 1
+9 -9
Documentation/devicetree/bindings/display/dsi-controller.yaml
··· 30 30 $nodename: 31 31 pattern: "^dsi(@.*)?$" 32 32 33 + clock-master: 34 + type: boolean 35 + description: 36 + Should be enabled if the host is being used in conjunction with 37 + another DSI host to drive the same peripheral. Hardware supporting 38 + such a configuration generally requires the data on both the busses 39 + to be driven by the same clock. Only the DSI host instance 40 + controlling this clock should contain this property. 41 + 33 42 "#address-cells": 34 43 const: 1 35 44 ··· 60 51 peripherals respond to more than a single virtual channel. In that 61 52 case the reg property can take multiple entries, one for each virtual 62 53 channel that the peripheral responds to. 63 - 64 - clock-master: 65 - type: boolean 66 - description: 67 - Should be enabled if the host is being used in conjunction with 68 - another DSI host to drive the same peripheral. Hardware supporting 69 - such a configuration generally requires the data on both the busses 70 - to be driven by the same clock. Only the DSI host instance 71 - controlling this clock should contain this property. 72 54 73 55 enforce-video-mode: 74 56 type: boolean
+1 -1
Documentation/devicetree/bindings/display/exynos/exynos_dp.txt
··· 50 50 Documentation/devicetree/bindings/display/panel/display-timing.txt 51 51 52 52 For the below properties, please refer to Analogix DP binding document: 53 - * Documentation/devicetree/bindings/display/bridge/analogix_dp.txt 53 + * Documentation/devicetree/bindings/display/bridge/analogix,dp.yaml 54 54 -phys (required) 55 55 -phy-names (required) 56 56 -hpd-gpios (optional)
+2
Documentation/devicetree/bindings/display/panel/jadard,jd9365da-h3.yaml
··· 17 17 items: 18 18 - enum: 19 19 - chongzhou,cz101b4001 20 + - radxa,display-10hd-ad001 21 + - radxa,display-8hd-ad002 20 22 - const: jadard,jd9365da-h3 21 23 22 24 reg: true
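A hedged devicetree usage sketch for the new Radxa compatibles added above. The bus address, regulator and GPIO phandles are placeholders; the property names follow the existing jadard,jd9365da-h3 binding:

```dts
dsi {
	#address-cells = <1>;
	#size-cells = <0>;

	panel@0 {
		compatible = "radxa,display-8hd-ad002", "jadard,jd9365da-h3";
		reg = <0>;
		vdd-supply = <&vcc_3v3>;	/* placeholder regulator */
		vccio-supply = <&vcc_1v8>;	/* placeholder regulator */
		reset-gpios = <&gpio1 1 GPIO_ACTIVE_LOW>; /* placeholder */
		backlight = <&backlight>;
	};
};
```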
+57
Documentation/devicetree/bindings/display/panel/samsung,ams495qa01.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/panel/samsung,ams495qa01.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Samsung AMS495QA01 panel with Magnachip D53E6EA8966 controller 8 + 9 + maintainers: 10 + - Chris Morgan <macromorgan@hotmail.com> 11 + 12 + allOf: 13 + - $ref: panel-common.yaml# 14 + 15 + properties: 16 + compatible: 17 + const: samsung,ams495qa01 18 + 19 + reg: true 20 + reset-gpios: 21 + description: reset gpio, must be GPIO_ACTIVE_LOW 22 + elvdd-supply: 23 + description: regulator that supplies voltage to the panel display 24 + enable-gpios: true 25 + port: true 26 + vdd-supply: 27 + description: regulator that supplies voltage to panel logic 28 + 29 + required: 30 + - compatible 31 + - reg 32 + - reset-gpios 33 + - vdd-supply 34 + 35 + additionalProperties: false 36 + 37 + examples: 38 + - | 39 + #include <dt-bindings/gpio/gpio.h> 40 + spi { 41 + #address-cells = <1>; 42 + #size-cells = <0>; 43 + panel@0 { 44 + compatible = "samsung,ams495qa01"; 45 + reg = <0>; 46 + reset-gpios = <&gpio4 0 GPIO_ACTIVE_LOW>; 47 + vdd-supply = <&vcc_3v3>; 48 + 49 + port { 50 + mipi_in_panel: endpoint { 51 + remote-endpoint = <&mipi_out_panel>; 52 + }; 53 + }; 54 + }; 55 + }; 56 + 57 + ...
-98
Documentation/devicetree/bindings/display/rockchip/analogix_dp-rockchip.txt
··· 1 - Rockchip RK3288 specific extensions to the Analogix Display Port 2 - ================================ 3 - 4 - Required properties: 5 - - compatible: "rockchip,rk3288-dp", 6 - "rockchip,rk3399-edp"; 7 - 8 - - reg: physical base address of the controller and length 9 - 10 - - clocks: from common clock binding: handle to dp clock. 11 - of memory mapped region. 12 - 13 - - clock-names: from common clock binding: 14 - Required elements: "dp" "pclk" 15 - 16 - - resets: Must contain an entry for each entry in reset-names. 17 - See ../reset/reset.txt for details. 18 - 19 - - pinctrl-names: Names corresponding to the chip hotplug pinctrl states. 20 - - pinctrl-0: pin-control mode. should be <&edp_hpd> 21 - 22 - - reset-names: Must include the name "dp" 23 - 24 - - rockchip,grf: this soc should set GRF regs, so need get grf here. 25 - 26 - - ports: there are 2 port nodes with endpoint definitions as defined in 27 - Documentation/devicetree/bindings/media/video-interfaces.txt. 28 - Port 0: contained 2 endpoints, connecting to the output of vop. 29 - Port 1: contained 1 endpoint, connecting to the input of panel. 30 - 31 - Optional property for different chips: 32 - - clocks: from common clock binding: handle to grf_vio clock. 
33 - 34 - - clock-names: from common clock binding: 35 - Required elements: "grf" 36 - 37 - For the below properties, please refer to Analogix DP binding document: 38 - * Documentation/devicetree/bindings/display/bridge/analogix_dp.txt 39 - - phys (required) 40 - - phy-names (required) 41 - - hpd-gpios (optional) 42 - - force-hpd (optional) 43 - ------------------------------------------------------------------------------- 44 - 45 - Example: 46 - dp-controller: dp@ff970000 { 47 - compatible = "rockchip,rk3288-dp"; 48 - reg = <0xff970000 0x4000>; 49 - interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>; 50 - clocks = <&cru SCLK_EDP>, <&cru PCLK_EDP_CTRL>; 51 - clock-names = "dp", "pclk"; 52 - phys = <&dp_phy>; 53 - phy-names = "dp"; 54 - 55 - rockchip,grf = <&grf>; 56 - resets = <&cru 111>; 57 - reset-names = "dp"; 58 - 59 - pinctrl-names = "default"; 60 - pinctrl-0 = <&edp_hpd>; 61 - 62 - 63 - ports { 64 - #address-cells = <1>; 65 - #size-cells = <0>; 66 - edp_in: port@0 { 67 - reg = <0>; 68 - #address-cells = <1>; 69 - #size-cells = <0>; 70 - edp_in_vopb: endpoint@0 { 71 - reg = <0>; 72 - remote-endpoint = <&vopb_out_edp>; 73 - }; 74 - edp_in_vopl: endpoint@1 { 75 - reg = <1>; 76 - remote-endpoint = <&vopl_out_edp>; 77 - }; 78 - }; 79 - 80 - edp_out: port@1 { 81 - reg = <1>; 82 - #address-cells = <1>; 83 - #size-cells = <0>; 84 - edp_out_panel: endpoint { 85 - reg = <0>; 86 - remote-endpoint = <&panel_in_edp> 87 - }; 88 - }; 89 - }; 90 - }; 91 - 92 - pinctrl { 93 - edp { 94 - edp_hpd: edp-hpd { 95 - rockchip,pins = <7 11 RK_FUNC_2 &pcfg_pull_none>; 96 - }; 97 - }; 98 - };
-94
Documentation/devicetree/bindings/display/rockchip/dw_mipi_dsi_rockchip.txt
··· 1 - Rockchip specific extensions to the Synopsys Designware MIPI DSI 2 - ================================ 3 - 4 - Required properties: 5 - - #address-cells: Should be <1>. 6 - - #size-cells: Should be <0>. 7 - - compatible: one of 8 - "rockchip,px30-mipi-dsi", "snps,dw-mipi-dsi" 9 - "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi" 10 - "rockchip,rk3399-mipi-dsi", "snps,dw-mipi-dsi" 11 - "rockchip,rk3568-mipi-dsi", "snps,dw-mipi-dsi" 12 - - reg: Represent the physical address range of the controller. 13 - - interrupts: Represent the controller's interrupt to the CPU(s). 14 - - clocks, clock-names: Phandles to the controller's pll reference 15 - clock(ref) when using an internal dphy and APB clock(pclk). 16 - For RK3399, a phy config clock (phy_cfg) and a grf clock(grf) 17 - are required. As described in [1]. 18 - - rockchip,grf: this soc should set GRF regs to mux vopl/vopb. 19 - - ports: contain a port node with endpoint definitions as defined in [2]. 20 - For vopb,set the reg = <0> and set the reg = <1> for vopl. 21 - - video port 0 for the VOP input, the remote endpoint maybe vopb or vopl 22 - - video port 1 for either a panel or subsequent encoder 23 - 24 - Optional properties: 25 - - phys: from general PHY binding: the phandle for the PHY device. 26 - - phy-names: Should be "dphy" if phys references an external phy. 27 - - #phy-cells: Defined when used as ISP phy, should be 0. 28 - - power-domains: a phandle to mipi dsi power domain node. 29 - - resets: list of phandle + reset specifier pairs, as described in [3]. 30 - - reset-names: string reset name, must be "apb". 
31 - 32 - [1] Documentation/devicetree/bindings/clock/clock-bindings.txt 33 - [2] Documentation/devicetree/bindings/media/video-interfaces.txt 34 - [3] Documentation/devicetree/bindings/reset/reset.txt 35 - 36 - Example: 37 - mipi_dsi: mipi@ff960000 { 38 - #address-cells = <1>; 39 - #size-cells = <0>; 40 - compatible = "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi"; 41 - reg = <0xff960000 0x4000>; 42 - interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>; 43 - clocks = <&cru SCLK_MIPI_24M>, <&cru PCLK_MIPI_DSI0>; 44 - clock-names = "ref", "pclk"; 45 - resets = <&cru SRST_MIPIDSI0>; 46 - reset-names = "apb"; 47 - rockchip,grf = <&grf>; 48 - 49 - ports { 50 - #address-cells = <1>; 51 - #size-cells = <0>; 52 - 53 - mipi_in: port@0 { 54 - reg = <0>; 55 - #address-cells = <1>; 56 - #size-cells = <0>; 57 - 58 - mipi_in_vopb: endpoint@0 { 59 - reg = <0>; 60 - remote-endpoint = <&vopb_out_mipi>; 61 - }; 62 - mipi_in_vopl: endpoint@1 { 63 - reg = <1>; 64 - remote-endpoint = <&vopl_out_mipi>; 65 - }; 66 - }; 67 - 68 - mipi_out: port@1 { 69 - reg = <1>; 70 - #address-cells = <1>; 71 - #size-cells = <0>; 72 - 73 - mipi_out_panel: endpoint { 74 - remote-endpoint = <&panel_in_mipi>; 75 - }; 76 - }; 77 - }; 78 - 79 - panel { 80 - compatible ="boe,tv080wum-nl0"; 81 - reg = <0>; 82 - 83 - enable-gpios = <&gpio7 3 GPIO_ACTIVE_HIGH>; 84 - pinctrl-names = "default"; 85 - pinctrl-0 = <&lcd_en>; 86 - backlight = <&backlight>; 87 - 88 - port { 89 - panel_in_mipi: endpoint { 90 - remote-endpoint = <&mipi_out_panel>; 91 - }; 92 - }; 93 - }; 94 - };
+103
Documentation/devicetree/bindings/display/rockchip/rockchip,analogix-dp.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/rockchip/rockchip,analogix-dp.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Rockchip specific extensions to the Analogix Display Port 8 + 9 + maintainers: 10 + - Sandy Huang <hjc@rock-chips.com> 11 + - Heiko Stuebner <heiko@sntech.de> 12 + 13 + properties: 14 + compatible: 15 + enum: 16 + - rockchip,rk3288-dp 17 + - rockchip,rk3399-edp 18 + 19 + clocks: 20 + minItems: 2 21 + maxItems: 3 22 + 23 + clock-names: 24 + minItems: 2 25 + items: 26 + - const: dp 27 + - const: pclk 28 + - const: grf 29 + 30 + power-domains: 31 + maxItems: 1 32 + 33 + resets: 34 + maxItems: 1 35 + 36 + reset-names: 37 + const: dp 38 + 39 + rockchip,grf: 40 + $ref: /schemas/types.yaml#/definitions/phandle 41 + description: 42 + This SoC makes use of GRF regs. 43 + 44 + required: 45 + - compatible 46 + - clocks 47 + - clock-names 48 + - resets 49 + - reset-names 50 + - rockchip,grf 51 + 52 + allOf: 53 + - $ref: /schemas/display/bridge/analogix,dp.yaml# 54 + 55 + unevaluatedProperties: false 56 + 57 + examples: 58 + - | 59 + #include <dt-bindings/clock/rk3288-cru.h> 60 + #include <dt-bindings/interrupt-controller/arm-gic.h> 61 + #include <dt-bindings/interrupt-controller/irq.h> 62 + dp@ff970000 { 63 + compatible = "rockchip,rk3288-dp"; 64 + reg = <0xff970000 0x4000>; 65 + interrupts = <GIC_SPI 98 IRQ_TYPE_LEVEL_HIGH>; 66 + clocks = <&cru SCLK_EDP>, <&cru PCLK_EDP_CTRL>; 67 + clock-names = "dp", "pclk"; 68 + phys = <&dp_phy>; 69 + phy-names = "dp"; 70 + resets = <&cru 111>; 71 + reset-names = "dp"; 72 + rockchip,grf = <&grf>; 73 + pinctrl-0 = <&edp_hpd>; 74 + pinctrl-names = "default"; 75 + 76 + ports { 77 + #address-cells = <1>; 78 + #size-cells = <0>; 79 + 80 + edp_in: port@0 { 81 + reg = <0>; 82 + #address-cells = <1>; 83 + #size-cells = <0>; 84 + 85 + edp_in_vopb: endpoint@0 { 86 + reg = <0>; 87 + remote-endpoint = <&vopb_out_edp>; 88 + }; 89 + 
edp_in_vopl: endpoint@1 { 90 + reg = <1>; 91 + remote-endpoint = <&vopl_out_edp>; 92 + }; 93 + }; 94 + 95 + edp_out: port@1 { 96 + reg = <1>; 97 + 98 + edp_out_panel: endpoint { 99 + remote-endpoint = <&panel_in_edp>; 100 + }; 101 + }; 102 + }; 103 + };
+166
Documentation/devicetree/bindings/display/rockchip/rockchip,dw-mipi-dsi.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/rockchip/rockchip,dw-mipi-dsi.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Rockchip specific extensions to the Synopsys Designware MIPI DSI 8 + 9 + maintainers: 10 + - Sandy Huang <hjc@rock-chips.com> 11 + - Heiko Stuebner <heiko@sntech.de> 12 + 13 + properties: 14 + compatible: 15 + items: 16 + - enum: 17 + - rockchip,px30-mipi-dsi 18 + - rockchip,rk3288-mipi-dsi 19 + - rockchip,rk3399-mipi-dsi 20 + - rockchip,rk3568-mipi-dsi 21 + - const: snps,dw-mipi-dsi 22 + 23 + interrupts: 24 + maxItems: 1 25 + 26 + clocks: 27 + minItems: 1 28 + maxItems: 4 29 + 30 + clock-names: 31 + oneOf: 32 + - minItems: 2 33 + items: 34 + - const: ref 35 + - const: pclk 36 + - const: phy_cfg 37 + - const: grf 38 + - const: pclk 39 + 40 + rockchip,grf: 41 + $ref: /schemas/types.yaml#/definitions/phandle 42 + description: 43 + This SoC uses GRF regs to switch between vopl/vopb. 44 + 45 + phys: 46 + maxItems: 1 47 + 48 + phy-names: 49 + const: dphy 50 + 51 + "#phy-cells": 52 + const: 0 53 + description: 54 + Defined when in use as ISP phy. 
55 + 56 + power-domains: 57 + maxItems: 1 58 + 59 + "#address-cells": 60 + const: 1 61 + 62 + "#size-cells": 63 + const: 0 64 + 65 + required: 66 + - compatible 67 + - clocks 68 + - clock-names 69 + - rockchip,grf 70 + 71 + allOf: 72 + - $ref: /schemas/display/bridge/snps,dw-mipi-dsi.yaml# 73 + - if: 74 + properties: 75 + compatible: 76 + contains: 77 + enum: 78 + - rockchip,px30-mipi-dsi 79 + - rockchip,rk3568-mipi-dsi 80 + 81 + then: 82 + properties: 83 + clocks: 84 + maxItems: 1 85 + 86 + clock-names: 87 + maxItems: 1 88 + 89 + required: 90 + - phys 91 + - phy-names 92 + 93 + - if: 94 + properties: 95 + compatible: 96 + contains: 97 + const: rockchip,rk3288-mipi-dsi 98 + 99 + then: 100 + properties: 101 + clocks: 102 + maxItems: 2 103 + 104 + clock-names: 105 + maxItems: 2 106 + 107 + - if: 108 + properties: 109 + compatible: 110 + contains: 111 + const: rockchip,rk3399-mipi-dsi 112 + 113 + then: 114 + properties: 115 + clocks: 116 + minItems: 4 117 + 118 + clock-names: 119 + minItems: 4 120 + 121 + unevaluatedProperties: false 122 + 123 + examples: 124 + - | 125 + #include <dt-bindings/clock/rk3288-cru.h> 126 + #include <dt-bindings/interrupt-controller/arm-gic.h> 127 + #include <dt-bindings/interrupt-controller/irq.h> 128 + 129 + mipi_dsi: dsi@ff960000 { 130 + compatible = "rockchip,rk3288-mipi-dsi", "snps,dw-mipi-dsi"; 131 + reg = <0xff960000 0x4000>; 132 + interrupts = <GIC_SPI 83 IRQ_TYPE_LEVEL_HIGH>; 133 + clocks = <&cru SCLK_MIPIDSI_24M>, <&cru PCLK_MIPI_DSI0>; 134 + clock-names = "ref", "pclk"; 135 + resets = <&cru SRST_MIPIDSI0>; 136 + reset-names = "apb"; 137 + rockchip,grf = <&grf>; 138 + 139 + ports { 140 + #address-cells = <1>; 141 + #size-cells = <0>; 142 + 143 + mipi_in: port@0 { 144 + reg = <0>; 145 + #address-cells = <1>; 146 + #size-cells = <0>; 147 + 148 + mipi_in_vopb: endpoint@0 { 149 + reg = <0>; 150 + remote-endpoint = <&vopb_out_mipi>; 151 + }; 152 + mipi_in_vopl: endpoint@1 { 153 + reg = <1>; 154 + remote-endpoint = <&vopl_out_mipi>; 155 
+ }; 156 + }; 157 + 158 + mipi_out: port@1 { 159 + reg = <1>; 160 + 161 + mipi_out_panel: endpoint { 162 + remote-endpoint = <&panel_in_mipi>; 163 + }; 164 + }; 165 + }; 166 + };
+170
Documentation/devicetree/bindings/display/rockchip/rockchip,lvds.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/display/rockchip/rockchip,lvds.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Rockchip low-voltage differential signal (LVDS) transmitter 8 + 9 + maintainers: 10 + - Sandy Huang <hjc@rock-chips.com> 11 + - Heiko Stuebner <heiko@sntech.de> 12 + 13 + properties: 14 + compatible: 15 + enum: 16 + - rockchip,px30-lvds 17 + - rockchip,rk3288-lvds 18 + 19 + reg: 20 + maxItems: 1 21 + 22 + clocks: 23 + maxItems: 1 24 + 25 + clock-names: 26 + const: pclk_lvds 27 + 28 + avdd1v0-supply: 29 + description: 1.0V analog power. 30 + 31 + avdd1v8-supply: 32 + description: 1.8V analog power. 33 + 34 + avdd3v3-supply: 35 + description: 3.3V analog power. 36 + 37 + rockchip,grf: 38 + $ref: /schemas/types.yaml#/definitions/phandle 39 + description: Phandle to the general register files syscon. 40 + 41 + rockchip,output: 42 + $ref: /schemas/types.yaml#/definitions/string 43 + enum: [rgb, lvds, duallvds] 44 + description: This describes the output interface. 45 + 46 + phys: 47 + maxItems: 1 48 + 49 + phy-names: 50 + const: dphy 51 + 52 + pinctrl-names: 53 + const: lcdc 54 + 55 + pinctrl-0: true 56 + 57 + power-domains: 58 + maxItems: 1 59 + 60 + ports: 61 + $ref: /schemas/graph.yaml#/properties/ports 62 + 63 + properties: 64 + port@0: 65 + $ref: /schemas/graph.yaml#/properties/port 66 + description: 67 + Video port 0 for the VOP input. 68 + The remote endpoint maybe vopb or vopl. 69 + 70 + port@1: 71 + $ref: /schemas/graph.yaml#/properties/port 72 + description: 73 + Video port 1 for either a panel or subsequent encoder. 
74 + 75 + required: 76 + - port@0 77 + - port@1 78 + 79 + required: 80 + - compatible 81 + - rockchip,grf 82 + - rockchip,output 83 + - ports 84 + 85 + allOf: 86 + - if: 87 + properties: 88 + compatible: 89 + contains: 90 + const: rockchip,px30-lvds 91 + 92 + then: 93 + properties: 94 + reg: false 95 + clocks: false 96 + clock-names: false 97 + avdd1v0-supply: false 98 + avdd1v8-supply: false 99 + avdd3v3-supply: false 100 + 101 + required: 102 + - phys 103 + - phy-names 104 + 105 + - if: 106 + properties: 107 + compatible: 108 + contains: 109 + const: rockchip,rk3288-lvds 110 + 111 + then: 112 + properties: 113 + phys: false 114 + phy-names: false 115 + 116 + required: 117 + - reg 118 + - clocks 119 + - clock-names 120 + - avdd1v0-supply 121 + - avdd1v8-supply 122 + - avdd3v3-supply 123 + 124 + additionalProperties: false 125 + 126 + examples: 127 + - | 128 + #include <dt-bindings/clock/rk3288-cru.h> 129 + 130 + lvds: lvds@ff96c000 { 131 + compatible = "rockchip,rk3288-lvds"; 132 + reg = <0xff96c000 0x4000>; 133 + clocks = <&cru PCLK_LVDS_PHY>; 134 + clock-names = "pclk_lvds"; 135 + avdd1v0-supply = <&vdd10_lcd>; 136 + avdd1v8-supply = <&vcc18_lcd>; 137 + avdd3v3-supply = <&vcca_33>; 138 + pinctrl-names = "lcdc"; 139 + pinctrl-0 = <&lcdc_ctl>; 140 + rockchip,grf = <&grf>; 141 + rockchip,output = "rgb"; 142 + 143 + ports { 144 + #address-cells = <1>; 145 + #size-cells = <0>; 146 + 147 + lvds_in: port@0 { 148 + reg = <0>; 149 + #address-cells = <1>; 150 + #size-cells = <0>; 151 + 152 + lvds_in_vopb: endpoint@0 { 153 + reg = <0>; 154 + remote-endpoint = <&vopb_out_lvds>; 155 + }; 156 + lvds_in_vopl: endpoint@1 { 157 + reg = <1>; 158 + remote-endpoint = <&vopl_out_lvds>; 159 + }; 160 + }; 161 + 162 + lvds_out: port@1 { 163 + reg = <1>; 164 + 165 + lvds_out_panel: endpoint { 166 + remote-endpoint = <&panel_in_lvds>; 167 + }; 168 + }; 169 + }; 170 + };
-92
Documentation/devicetree/bindings/display/rockchip/rockchip-lvds.txt
··· 1 - Rockchip RK3288 LVDS interface 2 - ================================ 3 - 4 - Required properties: 5 - - compatible: matching the soc type, one of 6 - - "rockchip,rk3288-lvds"; 7 - - "rockchip,px30-lvds"; 8 - 9 - - reg: physical base address of the controller and length 10 - of memory mapped region. 11 - - clocks: must include clock specifiers corresponding to entries in the 12 - clock-names property. 13 - - clock-names: must contain "pclk_lvds" 14 - 15 - - avdd1v0-supply: regulator phandle for 1.0V analog power 16 - - avdd1v8-supply: regulator phandle for 1.8V analog power 17 - - avdd3v3-supply: regulator phandle for 3.3V analog power 18 - 19 - - rockchip,grf: phandle to the general register files syscon 20 - - rockchip,output: "rgb", "lvds" or "duallvds", This describes the output interface 21 - 22 - - phys: LVDS/DSI DPHY (px30 only) 23 - - phy-names: name of the PHY, must be "dphy" (px30 only) 24 - 25 - Optional properties: 26 - - pinctrl-names: must contain a "lcdc" entry. 27 - - pinctrl-0: pin control group to be used for this controller. 28 - 29 - Required nodes: 30 - 31 - The lvds has two video ports as described by 32 - Documentation/devicetree/bindings/media/video-interfaces.txt 33 - Their connections are modeled using the OF graph bindings specified in 34 - Documentation/devicetree/bindings/graph.txt. 
35 - 36 - - video port 0 for the VOP input, the remote endpoint maybe vopb or vopl 37 - - video port 1 for either a panel or subsequent encoder 38 - 39 - Example: 40 - 41 - lvds_panel: lvds-panel { 42 - compatible = "auo,b101ean01"; 43 - enable-gpios = <&gpio7 21 GPIO_ACTIVE_HIGH>; 44 - data-mapping = "jeida-24"; 45 - 46 - ports { 47 - panel_in_lvds: endpoint { 48 - remote-endpoint = <&lvds_out_panel>; 49 - }; 50 - }; 51 - }; 52 - 53 - For Rockchip RK3288: 54 - 55 - lvds: lvds@ff96c000 { 56 - compatible = "rockchip,rk3288-lvds"; 57 - rockchip,grf = <&grf>; 58 - reg = <0xff96c000 0x4000>; 59 - clocks = <&cru PCLK_LVDS_PHY>; 60 - clock-names = "pclk_lvds"; 61 - pinctrl-names = "lcdc"; 62 - pinctrl-0 = <&lcdc_ctl>; 63 - avdd1v0-supply = <&vdd10_lcd>; 64 - avdd1v8-supply = <&vcc18_lcd>; 65 - avdd3v3-supply = <&vcca_33>; 66 - rockchip,output = "rgb"; 67 - ports { 68 - #address-cells = <1>; 69 - #size-cells = <0>; 70 - 71 - lvds_in: port@0 { 72 - reg = <0>; 73 - 74 - lvds_in_vopb: endpoint@0 { 75 - reg = <0>; 76 - remote-endpoint = <&vopb_out_lvds>; 77 - }; 78 - lvds_in_vopl: endpoint@1 { 79 - reg = <1>; 80 - remote-endpoint = <&vopl_out_lvds>; 81 - }; 82 - }; 83 - 84 - lvds_out: port@1 { 85 - reg = <1>; 86 - 87 - lvds_out_panel: endpoint { 88 - remote-endpoint = <&panel_in_lvds>; 89 - }; 90 - }; 91 - }; 92 - };
+9
Documentation/devicetree/bindings/display/simple-framebuffer.yaml
··· 26 26 over control to a driver for the real hardware. The bindings for the 27 27 hw nodes must specify which node is considered the primary node. 28 28 29 + If a panel node is given, then the driver uses this to configure the 30 + physical width and height of the display. If no panel node is given, 31 + then the driver uses the width and height properties of the simplefb 32 + node to estimate it. 33 + 29 34 It is advised to add display# aliases to help the OS determine how 30 35 to number things. If display# aliases are used, then if the simplefb 31 36 node contains a display property then the /aliases/display# path ··· 121 116 display: 122 117 $ref: /schemas/types.yaml#/definitions/phandle 123 118 description: Primary display hardware node 119 + 120 + panel: 121 + $ref: /schemas/types.yaml#/definitions/phandle 122 + description: Display panel node 124 123 125 124 allwinner,pipeline: 126 125 description: Pipeline used by the framebuffer on Allwinner SoCs
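A hedged example of the new `panel` property in a simple-framebuffer node; the address, mode values and panel label are placeholders:

```dts
framebuffer@3e000000 {
	compatible = "simple-framebuffer";
	reg = <0x3e000000 0x7e9000>;
	width = <1024>;
	height = <768>;
	stride = <2048>;
	format = "r5g6b5";
	/* New: the physical display size is taken from the referenced
	 * panel node instead of being estimated from width/height. */
	panel = <&lcd_panel>;
};
```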
+7 -3
Documentation/devicetree/bindings/soc/rockchip/grf.yaml
··· 80 80 properties: 81 81 compatible: 82 82 contains: 83 - const: rockchip,px30-grf 83 + enum: 84 + - rockchip,px30-grf 84 85 85 86 then: 86 87 properties: 87 88 lvds: 88 - description: 89 - Documentation/devicetree/bindings/display/rockchip/rockchip-lvds.txt 89 + type: object 90 + 91 + $ref: /schemas/display/rockchip/rockchip,lvds.yaml# 92 + 93 + unevaluatedProperties: false 90 94 91 95 - if: 92 96 properties:
+1 -1
MAINTAINERS
··· 7044 7044 F: drivers/gpu/drm/xlnx/ 7045 7045 7046 7046 DRM PANEL DRIVERS 7047 - M: Thierry Reding <thierry.reding@gmail.com> 7047 + M: Neil Armstrong <neil.armstrong@linaro.org> 7048 7048 R: Sam Ravnborg <sam@ravnborg.org> 7049 7049 L: dri-devel@lists.freedesktop.org 7050 7050 S: Maintained
-10
drivers/accel/ivpu/ivpu_pm.c
··· 237 237 { 238 238 int ret; 239 239 240 - ivpu_dbg(vdev, RPM, "rpm_get count %d\n", atomic_read(&vdev->drm.dev->power.usage_count)); 241 - 242 240 ret = pm_runtime_resume_and_get(vdev->drm.dev); 243 241 if (!drm_WARN_ON(&vdev->drm, ret < 0)) 244 242 vdev->pm->suspend_reschedule_counter = PM_RESCHEDULE_LIMIT; ··· 246 248 247 249 void ivpu_rpm_put(struct ivpu_device *vdev) 248 250 { 249 - ivpu_dbg(vdev, RPM, "rpm_put count %d\n", atomic_read(&vdev->drm.dev->power.usage_count)); 250 - 251 251 pm_runtime_mark_last_busy(vdev->drm.dev); 252 252 pm_runtime_put_autosuspend(vdev->drm.dev); 253 253 } ··· 310 314 pm_runtime_allow(dev); 311 315 pm_runtime_mark_last_busy(dev); 312 316 pm_runtime_put_autosuspend(dev); 313 - 314 - ivpu_dbg(vdev, RPM, "Enable RPM count %d\n", atomic_read(&dev->power.usage_count)); 315 317 } 316 318 317 319 void ivpu_pm_disable(struct ivpu_device *vdev) 318 320 { 319 - struct device *dev = vdev->drm.dev; 320 - 321 - ivpu_dbg(vdev, RPM, "Disable RPM count %d\n", atomic_read(&dev->power.usage_count)); 322 - 323 321 pm_runtime_get_noresume(vdev->drm.dev); 324 322 pm_runtime_forbid(vdev->drm.dev); 325 323 }
+1 -1
drivers/dma-buf/dma-buf.c
··· 828 828 * - dma_buf_attach() 829 829 * - dma_buf_dynamic_attach() 830 830 * - dma_buf_detach() 831 - * - dma_buf_export( 831 + * - dma_buf_export() 832 832 * - dma_buf_fd() 833 833 * - dma_buf_get() 834 834 * - dma_buf_put()
+5 -1
drivers/gpu/drm/Kconfig
··· 10 10 depends on (AGP || AGP=n) && !EMULATED_CMPXCHG && HAS_DMA 11 11 select DRM_PANEL_ORIENTATION_QUIRKS 12 12 select HDMI 13 - select FB_CMDLINE 14 13 select I2C 15 14 select DMA_SHARED_BUFFER 16 15 select SYNC_FILE 17 16 # gallium uses SYS_kcmp for os_same_file_description() to de-duplicate 18 17 # device and dmabuf fd. Let's make sure that is available for our userspace. 19 18 select KCMP 19 + select VIDEO_CMDLINE 20 20 select VIDEO_NOMODESET 21 21 help 22 22 Kernel-level support for the Direct Rendering Infrastructure (DRI) ··· 231 231 depends on DRM && MMU 232 232 help 233 233 Choose this if you need the GEM shmem helper functions 234 + 235 + config DRM_SUBALLOC_HELPER 236 + tristate 237 + depends on DRM 234 238 235 239 config DRM_SCHED 236 240 tristate
+3
drivers/gpu/drm/Makefile
··· 88 88 drm_shmem_helper-y := drm_gem_shmem_helper.o 89 89 obj-$(CONFIG_DRM_GEM_SHMEM_HELPER) += drm_shmem_helper.o 90 90 91 + drm_suballoc_helper-y := drm_suballoc.o 92 + obj-$(CONFIG_DRM_SUBALLOC_HELPER) += drm_suballoc_helper.o 93 + 91 94 drm_vram_helper-y := drm_gem_vram_helper.o 92 95 obj-$(CONFIG_DRM_VRAM_HELPER) += drm_vram_helper.o 93 96
+1
drivers/gpu/drm/amd/amdgpu/Kconfig
···
19 19     select BACKLIGHT_CLASS_DEVICE
20 20     select INTERVAL_TREE
21 21     select DRM_BUDDY
22 +     select DRM_SUBALLOC_HELPER
22 23     # amdgpu depends on ACPI_VIDEO when ACPI is enabled, for select to work
23 24     # ACPI_VIDEO's dependencies must also be selected.
24 25     select INPUT if ACPI
+4 -22
drivers/gpu/drm/amd/amdgpu/amdgpu.h
···
424 424  * alignment).
425 425  */
426 426 
427 - #define AMDGPU_SA_NUM_FENCE_LISTS 32
428 - 
429 427 struct amdgpu_sa_manager {
430 -     wait_queue_head_t wq;
431 -     struct amdgpu_bo *bo;
432 -     struct list_head *hole;
433 -     struct list_head flist[AMDGPU_SA_NUM_FENCE_LISTS];
434 -     struct list_head olist;
435 -     unsigned size;
436 -     uint64_t gpu_addr;
437 -     void *cpu_ptr;
438 -     uint32_t domain;
439 -     uint32_t align;
440 - };
441 - 
442 - /* sub-allocation buffer */
443 - struct amdgpu_sa_bo {
444 -     struct list_head olist;
445 -     struct list_head flist;
446 -     struct amdgpu_sa_manager *manager;
447 -     unsigned soffset;
448 -     unsigned eoffset;
449 -     struct dma_fence *fence;
428 +     struct drm_suballoc_manager base;
429 +     struct amdgpu_bo *bo;
430 +     uint64_t gpu_addr;
431 +     void *cpu_ptr;
450 432 };
451 433 
452 434 int amdgpu_fence_slab_init(void);
+2 -3
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
···
69 69 
70 70     if (size) {
71 71         r = amdgpu_sa_bo_new(&adev->ib_pools[pool_type],
72 -                              &ib->sa_bo, size, 256);
72 +                              &ib->sa_bo, size);
73 73         if (r) {
74 74             dev_err(adev->dev, "failed to get a new IB (%d)\n", r);
75 75             return r;
···
309 309 
310 310     for (i = 0; i < AMDGPU_IB_POOL_MAX; i++) {
311 311         r = amdgpu_sa_bo_manager_init(adev, &adev->ib_pools[i],
312 -                                       AMDGPU_IB_POOL_SIZE,
313 -                                       AMDGPU_GPU_PAGE_SIZE,
312 +                                       AMDGPU_IB_POOL_SIZE, 256,
314 313                                       AMDGPU_GEM_DOMAIN_GTT);
315 314         if (r)
316 315             goto error;
+3 -6
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c
···
600 600 
601 601     if (!amdgpu_gmc_vram_full_visible(&adev->gmc) &&
602 602         bo->tbo.resource->mem_type == TTM_PL_VRAM &&
603 -         bo->tbo.resource->start < adev->gmc.visible_vram_size >> PAGE_SHIFT)
603 +         amdgpu_bo_in_cpu_visible_vram(bo))
604 604         amdgpu_cs_report_moved_bytes(adev, ctx.bytes_moved,
605 605                                      ctx.bytes_moved);
606 606     else
···
1346 1346     struct amdgpu_device *adev = amdgpu_ttm_adev(bo->bdev);
1347 1347     struct ttm_operation_ctx ctx = { false, false };
1348 1348     struct amdgpu_bo *abo = ttm_to_amdgpu_bo(bo);
1349 -     unsigned long offset;
1350 1349     int r;
1351 1350 
1352 1351     /* Remember that this BO was accessed by the CPU */
···
1354 1355     if (bo->resource->mem_type != TTM_PL_VRAM)
1355 1356         return 0;
1356 1357 
1357 -     offset = bo->resource->start << PAGE_SHIFT;
1358 -     if ((offset + bo->base.size) <= adev->gmc.visible_vram_size)
1358 +     if (amdgpu_bo_in_cpu_visible_vram(abo))
1359 1359         return 0;
1360 1360 
1361 1361     /* Can't move a pinned BO to visible VRAM */
···
1376 1378     else if (unlikely(r))
1377 1379         return VM_FAULT_SIGBUS;
1378 1380 
1379 -     offset = bo->resource->start << PAGE_SHIFT;
1380 1381     /* this should never happen */
1381 1382     if (bo->resource->mem_type == TTM_PL_VRAM &&
1382 -         (offset + bo->base.size) > adev->gmc.visible_vram_size)
1383 +         !amdgpu_bo_in_cpu_visible_vram(abo))
1383 1384         return VM_FAULT_SIGBUS;
1384 1385 
1385 1386     ttm_bo_move_to_lru_tail_unlocked(bo);
+16 -9
drivers/gpu/drm/amd/amdgpu/amdgpu_object.h
···
336 336 /*
337 337  * sub allocation
338 338  */
339 - 
340 - static inline uint64_t amdgpu_sa_bo_gpu_addr(struct amdgpu_sa_bo *sa_bo)
339 + static inline struct amdgpu_sa_manager *
340 + to_amdgpu_sa_manager(struct drm_suballoc_manager *manager)
341 341 {
342 -     return sa_bo->manager->gpu_addr + sa_bo->soffset;
342 +     return container_of(manager, struct amdgpu_sa_manager, base);
343 343 }
344 344 
345 - static inline void * amdgpu_sa_bo_cpu_addr(struct amdgpu_sa_bo *sa_bo)
345 + static inline uint64_t amdgpu_sa_bo_gpu_addr(struct drm_suballoc *sa_bo)
346 346 {
347 -     return sa_bo->manager->cpu_ptr + sa_bo->soffset;
347 +     return to_amdgpu_sa_manager(sa_bo->manager)->gpu_addr +
348 +         drm_suballoc_soffset(sa_bo);
349 + }
350 + 
351 + static inline void *amdgpu_sa_bo_cpu_addr(struct drm_suballoc *sa_bo)
352 + {
353 +     return to_amdgpu_sa_manager(sa_bo->manager)->cpu_ptr +
354 +         drm_suballoc_soffset(sa_bo);
348 355 }
349 356 
350 357 int amdgpu_sa_bo_manager_init(struct amdgpu_device *adev,
···
362 355 int amdgpu_sa_bo_manager_start(struct amdgpu_device *adev,
363 356                                struct amdgpu_sa_manager *sa_manager);
364 357 int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager,
365 -                      struct amdgpu_sa_bo **sa_bo,
366 -                      unsigned size, unsigned align);
358 +                      struct drm_suballoc **sa_bo,
359 +                      unsigned int size);
367 360 void amdgpu_sa_bo_free(struct amdgpu_device *adev,
368 -                        struct amdgpu_sa_bo **sa_bo,
369 -                        struct dma_fence *fence);
361 +                        struct drm_suballoc **sa_bo,
362 +                        struct dma_fence *fence);
370 363 #if defined(CONFIG_DEBUG_FS)
371 364 void amdgpu_sa_bo_dump_debug_info(struct amdgpu_sa_manager *sa_manager,
372 365                                   struct seq_file *m);
+2 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_ring.h
···
27 27 #include <drm/amdgpu_drm.h>
28 28 #include <drm/gpu_scheduler.h>
29 29 #include <drm/drm_print.h>
30 + #include <drm/drm_suballoc.h>
30 31 
31 32 struct amdgpu_device;
32 33 struct amdgpu_ring;
···
93 92 };
94 93 
95 94 struct amdgpu_ib {
96 -     struct amdgpu_sa_bo *sa_bo;
95 +     struct drm_suballoc *sa_bo;
97 96     uint32_t length_dw;
98 97     uint64_t gpu_addr;
99 98     uint32_t *ptr;
+22 -304
drivers/gpu/drm/amd/amdgpu/amdgpu_sa.c
···
44 44 
45 45 #include "amdgpu.h"
46 46 
47 - static void amdgpu_sa_bo_remove_locked(struct amdgpu_sa_bo *sa_bo);
48 - static void amdgpu_sa_bo_try_free(struct amdgpu_sa_manager *sa_manager);
49 - 
50 47 int amdgpu_sa_bo_manager_init(struct amdgpu_device *adev,
51 48                               struct amdgpu_sa_manager *sa_manager,
52 -                               unsigned size, u32 align, u32 domain)
49 +                               unsigned int size, u32 suballoc_align, u32 domain)
53 50 {
54 -     int i, r;
51 +     int r;
55 52 
56 -     init_waitqueue_head(&sa_manager->wq);
57 -     sa_manager->bo = NULL;
58 -     sa_manager->size = size;
59 -     sa_manager->domain = domain;
60 -     sa_manager->align = align;
61 -     sa_manager->hole = &sa_manager->olist;
62 -     INIT_LIST_HEAD(&sa_manager->olist);
63 -     for (i = 0; i < AMDGPU_SA_NUM_FENCE_LISTS; ++i)
64 -         INIT_LIST_HEAD(&sa_manager->flist[i]);
65 - 
66 -     r = amdgpu_bo_create_kernel(adev, size, align, domain, &sa_manager->bo,
67 -                                 &sa_manager->gpu_addr, &sa_manager->cpu_ptr);
53 +     r = amdgpu_bo_create_kernel(adev, size, AMDGPU_GPU_PAGE_SIZE, domain,
54 +                                 &sa_manager->bo, &sa_manager->gpu_addr,
55 +                                 &sa_manager->cpu_ptr);
68 56     if (r) {
69 57         dev_err(adev->dev, "(%d) failed to allocate bo for manager\n", r);
70 58         return r;
71 59     }
72 60 
73 -     memset(sa_manager->cpu_ptr, 0, sa_manager->size);
61 +     memset(sa_manager->cpu_ptr, 0, size);
62 +     drm_suballoc_manager_init(&sa_manager->base, size, suballoc_align);
74 63     return r;
75 64 }
76 65 
77 66 void amdgpu_sa_bo_manager_fini(struct amdgpu_device *adev,
78 67                                struct amdgpu_sa_manager *sa_manager)
79 68 {
80 -     struct amdgpu_sa_bo *sa_bo, *tmp;
81 - 
82 69     if (sa_manager->bo == NULL) {
83 70         dev_err(adev->dev, "no bo for sa manager\n");
84 71         return;
85 72     }
86 73 
87 -     if (!list_empty(&sa_manager->olist)) {
88 -         sa_manager->hole = &sa_manager->olist,
89 -         amdgpu_sa_bo_try_free(sa_manager);
90 -         if (!list_empty(&sa_manager->olist)) {
91 -             dev_err(adev->dev, "sa_manager is not empty, clearing anyway\n");
92 -         }
93 -     }
94 -     list_for_each_entry_safe(sa_bo, tmp, &sa_manager->olist, olist) {
95 -         amdgpu_sa_bo_remove_locked(sa_bo);
96 -     }
74 +     drm_suballoc_manager_fini(&sa_manager->base);
97 75 
98 76     amdgpu_bo_free_kernel(&sa_manager->bo, &sa_manager->gpu_addr, &sa_manager->cpu_ptr);
99 -     sa_manager->size = 0;
100 - }
101 - 
102 - static void amdgpu_sa_bo_remove_locked(struct amdgpu_sa_bo *sa_bo)
103 - {
104 -     struct amdgpu_sa_manager *sa_manager = sa_bo->manager;
105 -     if (sa_manager->hole == &sa_bo->olist) {
106 -         sa_manager->hole = sa_bo->olist.prev;
107 -     }
108 -     list_del_init(&sa_bo->olist);
109 -     list_del_init(&sa_bo->flist);
110 -     dma_fence_put(sa_bo->fence);
111 -     kfree(sa_bo);
112 - }
113 - 
114 - static void amdgpu_sa_bo_try_free(struct amdgpu_sa_manager *sa_manager)
115 - {
116 -     struct amdgpu_sa_bo *sa_bo, *tmp;
117 - 
118 -     if (sa_manager->hole->next == &sa_manager->olist)
119 -         return;
120 - 
121 -     sa_bo = list_entry(sa_manager->hole->next, struct amdgpu_sa_bo, olist);
122 -     list_for_each_entry_safe_from(sa_bo, tmp, &sa_manager->olist, olist) {
123 -         if (sa_bo->fence == NULL ||
124 -             !dma_fence_is_signaled(sa_bo->fence)) {
125 -             return;
126 -         }
127 -         amdgpu_sa_bo_remove_locked(sa_bo);
128 -     }
129 - }
130 - 
131 - static inline unsigned amdgpu_sa_bo_hole_soffset(struct amdgpu_sa_manager *sa_manager)
132 - {
133 -     struct list_head *hole = sa_manager->hole;
134 - 
135 -     if (hole != &sa_manager->olist) {
136 -         return list_entry(hole, struct amdgpu_sa_bo, olist)->eoffset;
137 -     }
138 -     return 0;
139 - }
140 - 
141 - static inline unsigned amdgpu_sa_bo_hole_eoffset(struct amdgpu_sa_manager *sa_manager)
142 - {
143 -     struct list_head *hole = sa_manager->hole;
144 - 
145 -     if (hole->next != &sa_manager->olist) {
146 -         return list_entry(hole->next, struct amdgpu_sa_bo, olist)->soffset;
147 -     }
148 -     return sa_manager->size;
149 - }
150 - 
151 - static bool amdgpu_sa_bo_try_alloc(struct amdgpu_sa_manager *sa_manager,
152 -                                    struct amdgpu_sa_bo *sa_bo,
153 -                                    unsigned size, unsigned align)
154 - {
155 -     unsigned soffset, eoffset, wasted;
156 - 
157 -     soffset = amdgpu_sa_bo_hole_soffset(sa_manager);
158 -     eoffset = amdgpu_sa_bo_hole_eoffset(sa_manager);
159 -     wasted = (align - (soffset % align)) % align;
160 - 
161 -     if ((eoffset - soffset) >= (size + wasted)) {
162 -         soffset += wasted;
163 - 
164 -         sa_bo->manager = sa_manager;
165 -         sa_bo->soffset = soffset;
166 -         sa_bo->eoffset = soffset + size;
167 -         list_add(&sa_bo->olist, sa_manager->hole);
168 -         INIT_LIST_HEAD(&sa_bo->flist);
169 -         sa_manager->hole = &sa_bo->olist;
170 -         return true;
171 -     }
172 -     return false;
173 - }
174 - 
175 - /**
176 -  * amdgpu_sa_event - Check if we can stop waiting
177 -  *
178 -  * @sa_manager: pointer to the sa_manager
179 -  * @size: number of bytes we want to allocate
180 -  * @align: alignment we need to match
181 -  *
182 -  * Check if either there is a fence we can wait for or
183 -  * enough free memory to satisfy the allocation directly
184 -  */
185 - static bool amdgpu_sa_event(struct amdgpu_sa_manager *sa_manager,
186 -                             unsigned size, unsigned align)
187 - {
188 -     unsigned soffset, eoffset, wasted;
189 -     int i;
190 - 
191 -     for (i = 0; i < AMDGPU_SA_NUM_FENCE_LISTS; ++i)
192 -         if (!list_empty(&sa_manager->flist[i]))
193 -             return true;
194 - 
195 -     soffset = amdgpu_sa_bo_hole_soffset(sa_manager);
196 -     eoffset = amdgpu_sa_bo_hole_eoffset(sa_manager);
197 -     wasted = (align - (soffset % align)) % align;
198 - 
199 -     if ((eoffset - soffset) >= (size + wasted)) {
200 -         return true;
201 -     }
202 - 
203 -     return false;
204 - }
205 - 
206 - static bool amdgpu_sa_bo_next_hole(struct amdgpu_sa_manager *sa_manager,
207 -                                    struct dma_fence **fences,
208 -                                    unsigned *tries)
209 - {
210 -     struct amdgpu_sa_bo *best_bo = NULL;
211 -     unsigned i, soffset, best, tmp;
212 - 
213 -     /* if hole points to the end of the buffer */
214 -     if (sa_manager->hole->next == &sa_manager->olist) {
215 -         /* try again with its beginning */
216 -         sa_manager->hole = &sa_manager->olist;
217 -         return true;
218 -     }
219 - 
220 -     soffset = amdgpu_sa_bo_hole_soffset(sa_manager);
221 -     /* to handle wrap around we add sa_manager->size */
222 -     best = sa_manager->size * 2;
223 -     /* go over all fence list and try to find the closest sa_bo
224 -      * of the current last
225 -      */
226 -     for (i = 0; i < AMDGPU_SA_NUM_FENCE_LISTS; ++i) {
227 -         struct amdgpu_sa_bo *sa_bo;
228 - 
229 -         fences[i] = NULL;
230 - 
231 -         if (list_empty(&sa_manager->flist[i]))
232 -             continue;
233 - 
234 -         sa_bo = list_first_entry(&sa_manager->flist[i],
235 -                                  struct amdgpu_sa_bo, flist);
236 - 
237 -         if (!dma_fence_is_signaled(sa_bo->fence)) {
238 -             fences[i] = sa_bo->fence;
239 -             continue;
240 -         }
241 - 
242 -         /* limit the number of tries each ring gets */
243 -         if (tries[i] > 2) {
244 -             continue;
245 -         }
246 - 
247 -         tmp = sa_bo->soffset;
248 -         if (tmp < soffset) {
249 -             /* wrap around, pretend it's after */
250 -             tmp += sa_manager->size;
251 -         }
252 -         tmp -= soffset;
253 -         if (tmp < best) {
254 -             /* this sa bo is the closest one */
255 -             best = tmp;
256 -             best_bo = sa_bo;
257 -         }
258 -     }
259 - 
260 -     if (best_bo) {
261 -         uint32_t idx = best_bo->fence->context;
262 - 
263 -         idx %= AMDGPU_SA_NUM_FENCE_LISTS;
264 -         ++tries[idx];
265 -         sa_manager->hole = best_bo->olist.prev;
266 - 
267 -         /* we knew that this one is signaled,
268 -            so it's save to remote it */
269 -         amdgpu_sa_bo_remove_locked(best_bo);
270 -         return true;
271 -     }
272 -     return false;
273 77 }
274 78 
275 79 int amdgpu_sa_bo_new(struct amdgpu_sa_manager *sa_manager,
276 -                      struct amdgpu_sa_bo **sa_bo,
277 -                      unsigned size, unsigned align)
80 +                      struct drm_suballoc **sa_bo,
81 +                      unsigned int size)
278 82 {
279 -     struct dma_fence *fences[AMDGPU_SA_NUM_FENCE_LISTS];
280 -     unsigned tries[AMDGPU_SA_NUM_FENCE_LISTS];
281 -     unsigned count;
282 -     int i, r;
283 -     signed long t;
83 +     struct drm_suballoc *sa = drm_suballoc_new(&sa_manager->base, size,
84 +                                                GFP_KERNEL, true, 0);
284 85 
285 -     if (WARN_ON_ONCE(align > sa_manager->align))
286 -         return -EINVAL;
86 +     if (IS_ERR(sa)) {
87 +         *sa_bo = NULL;
287 88 
288 -     if (WARN_ON_ONCE(size > sa_manager->size))
289 -         return -EINVAL;
89 +         return PTR_ERR(sa);
90 +     }
290 91 
291 -     *sa_bo = kmalloc(sizeof(struct amdgpu_sa_bo), GFP_KERNEL);
292 -     if (!(*sa_bo))
293 -         return -ENOMEM;
294 -     (*sa_bo)->manager = sa_manager;
295 -     (*sa_bo)->fence = NULL;
296 -     INIT_LIST_HEAD(&(*sa_bo)->olist);
297 -     INIT_LIST_HEAD(&(*sa_bo)->flist);
298 - 
299 -     spin_lock(&sa_manager->wq.lock);
300 -     do {
301 -         for (i = 0; i < AMDGPU_SA_NUM_FENCE_LISTS; ++i)
302 -             tries[i] = 0;
303 - 
304 -         do {
305 -             amdgpu_sa_bo_try_free(sa_manager);
306 - 
307 -             if (amdgpu_sa_bo_try_alloc(sa_manager, *sa_bo,
308 -                                        size, align)) {
309 -                 spin_unlock(&sa_manager->wq.lock);
310 -                 return 0;
311 -             }
312 - 
313 -             /* see if we can skip over some allocations */
314 -         } while (amdgpu_sa_bo_next_hole(sa_manager, fences, tries));
315 - 
316 -         for (i = 0, count = 0; i < AMDGPU_SA_NUM_FENCE_LISTS; ++i)
317 -             if (fences[i])
318 -                 fences[count++] = dma_fence_get(fences[i]);
319 - 
320 -         if (count) {
321 -             spin_unlock(&sa_manager->wq.lock);
322 -             t = dma_fence_wait_any_timeout(fences, count, false,
323 -                                            MAX_SCHEDULE_TIMEOUT,
324 -                                            NULL);
325 -             for (i = 0; i < count; ++i)
326 -                 dma_fence_put(fences[i]);
327 - 
328 -             r = (t > 0) ? 0 : t;
329 -             spin_lock(&sa_manager->wq.lock);
330 -         } else {
331 -             /* if we have nothing to wait for block */
332 -             r = wait_event_interruptible_locked(
333 -                 sa_manager->wq,
334 -                 amdgpu_sa_event(sa_manager, size, align)
335 -             );
336 -         }
337 - 
338 -     } while (!r);
339 - 
340 -     spin_unlock(&sa_manager->wq.lock);
341 -     kfree(*sa_bo);
342 -     *sa_bo = NULL;
343 -     return r;
92 +     *sa_bo = sa;
93 +     return 0;
344 94 }
345 95 
346 - void amdgpu_sa_bo_free(struct amdgpu_device *adev, struct amdgpu_sa_bo **sa_bo,
96 + void amdgpu_sa_bo_free(struct amdgpu_device *adev, struct drm_suballoc **sa_bo,
347 97                        struct dma_fence *fence)
348 98 {
349 -     struct amdgpu_sa_manager *sa_manager;
350 - 
351 99     if (sa_bo == NULL || *sa_bo == NULL) {
352 100         return;
353 101     }
354 102 
355 -     sa_manager = (*sa_bo)->manager;
356 -     spin_lock(&sa_manager->wq.lock);
357 -     if (fence && !dma_fence_is_signaled(fence)) {
358 -         uint32_t idx;
359 - 
360 -         (*sa_bo)->fence = dma_fence_get(fence);
361 -         idx = fence->context % AMDGPU_SA_NUM_FENCE_LISTS;
362 -         list_add_tail(&(*sa_bo)->flist, &sa_manager->flist[idx]);
363 -     } else {
364 -         amdgpu_sa_bo_remove_locked(*sa_bo);
365 -     }
366 -     wake_up_all_locked(&sa_manager->wq);
367 -     spin_unlock(&sa_manager->wq.lock);
103 +     drm_suballoc_free(*sa_bo, fence);
368 104     *sa_bo = NULL;
369 105 }
370 106 
···
109 373 void amdgpu_sa_bo_dump_debug_info(struct amdgpu_sa_manager *sa_manager,
110 374                                   struct seq_file *m)
111 375 {
112 -     struct amdgpu_sa_bo *i;
376 +     struct drm_printer p = drm_seq_file_printer(m);
113 377 
114 -     spin_lock(&sa_manager->wq.lock);
115 -     list_for_each_entry(i, &sa_manager->olist, olist) {
116 -         uint64_t soffset = i->soffset + sa_manager->gpu_addr;
117 -         uint64_t eoffset = i->eoffset + sa_manager->gpu_addr;
118 -         if (&i->olist == sa_manager->hole) {
119 -             seq_printf(m, ">");
120 -         } else {
121 -             seq_printf(m, " ");
122 -         }
123 -         seq_printf(m, "[0x%010llx 0x%010llx] size %8lld",
124 -                    soffset, eoffset, eoffset - soffset);
125 - 
126 -         if (i->fence)
127 -             seq_printf(m, " protected by 0x%016llx on context %llu",
128 -                        i->fence->seqno, i->fence->context);
129 - 
130 -         seq_printf(m, "\n");
131 -     }
132 -     spin_unlock(&sa_manager->wq.lock);
378 +     drm_suballoc_dump_debug_info(&sa_manager->base, &p, sa_manager->gpu_addr);
133 379 }
134 380 #endif
-4
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
···
466 466         return r;
467 467     }
468 468 
469 -     /* Can't move a pinned BO */
470 469     abo = ttm_to_amdgpu_bo(bo);
471 -     if (WARN_ON_ONCE(abo->tbo.pin_count > 0))
472 -         return -EINVAL;
473 - 
474 470     adev = amdgpu_ttm_adev(bo->bdev);
475 471 
476 472     if (!old_mem || (old_mem->mem_type == TTM_PL_SYSTEM &&
+1 -1
drivers/gpu/drm/arm/malidp_drv.c
···
649 649     struct drm_device *drm = dev_get_drvdata(dev);
650 650     struct malidp_drm *malidp = drm_to_malidp(drm);
651 651 
652 -     return snprintf(buf, PAGE_SIZE, "%08x\n", malidp->core_id);
652 +     return sysfs_emit(buf, "%08x\n", malidp->core_id);
653 653 }
654 654 
655 655 static DEVICE_ATTR_RO(core_id);
+5 -5
drivers/gpu/drm/ast/ast_dp.c
···
9 9 
10 10 int ast_astdp_read_edid(struct drm_device *dev, u8 *ediddata)
11 11 {
12 -     struct ast_private *ast = to_ast_private(dev);
12 +     struct ast_device *ast = to_ast_device(dev);
13 13     u8 i = 0, j = 0;
14 14 
15 15     /*
···
125 125     u8 bDPTX = 0;
126 126     u8 bDPExecute = 1;
127 127 
128 -     struct ast_private *ast = to_ast_private(dev);
128 +     struct ast_device *ast = to_ast_device(dev);
129 129     // S3 come back, need more time to wait BMC ready.
130 130     if (bPower)
131 131         WaitCount = 300;
···
172 172 
173 173 void ast_dp_power_on_off(struct drm_device *dev, bool on)
174 174 {
175 -     struct ast_private *ast = to_ast_private(dev);
175 +     struct ast_device *ast = to_ast_device(dev);
176 176     // Read and Turn off DP PHY sleep
177 177     u8 bE3 = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xE3, AST_DP_VIDEO_ENABLE);
178 178 
···
188 188 
189 189 void ast_dp_set_on_off(struct drm_device *dev, bool on)
190 190 {
191 -     struct ast_private *ast = to_ast_private(dev);
191 +     struct ast_device *ast = to_ast_device(dev);
192 192     u8 video_on_off = on;
193 193 
194 194     // Video On/Off
···
208 208 
209 209 void ast_dp_set_mode(struct drm_crtc *crtc, struct ast_vbios_mode_info *vbios_mode)
210 210 {
211 -     struct ast_private *ast = to_ast_private(crtc->dev);
211 +     struct ast_device *ast = to_ast_device(crtc->dev);
212 212 
213 213     u32 ulRefreshRateIndex;
214 214     u8 ModeIdx;
+20 -20
drivers/gpu/drm/ast/ast_dp501.c
···
10 10 
11 11 static void ast_release_firmware(void *data)
12 12 {
13 -     struct ast_private *ast = data;
13 +     struct ast_device *ast = data;
14 14 
15 15     release_firmware(ast->dp501_fw);
16 16     ast->dp501_fw = NULL;
···
18 18 
19 19 static int ast_load_dp501_microcode(struct drm_device *dev)
20 20 {
21 -     struct ast_private *ast = to_ast_private(dev);
21 +     struct ast_device *ast = to_ast_device(dev);
22 22     int ret;
23 23 
24 24     ret = request_firmware(&ast->dp501_fw, "ast_dp501_fw.bin", dev->dev);
···
28 28     return devm_add_action_or_reset(dev->dev, ast_release_firmware, ast);
29 29 }
30 30 
31 - static void send_ack(struct ast_private *ast)
31 + static void send_ack(struct ast_device *ast)
32 32 {
33 33     u8 sendack;
34 34     sendack = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x9b, 0xff);
···
36 36     ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x9b, 0x00, sendack);
37 37 }
38 38 
39 - static void send_nack(struct ast_private *ast)
39 + static void send_nack(struct ast_device *ast)
40 40 {
41 41     u8 sendack;
42 42     sendack = ast_get_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x9b, 0xff);
···
44 44     ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x9b, 0x00, sendack);
45 45 }
46 46 
47 - static bool wait_ack(struct ast_private *ast)
47 + static bool wait_ack(struct ast_device *ast)
48 48 {
49 49     u8 waitack;
50 50     u32 retry = 0;
···
60 60     return false;
61 61 }
62 62 
63 - static bool wait_nack(struct ast_private *ast)
63 + static bool wait_nack(struct ast_device *ast)
64 64 {
65 65     u8 waitack;
66 66     u32 retry = 0;
···
76 76     return false;
77 77 }
78 78 
79 - static void set_cmd_trigger(struct ast_private *ast)
79 + static void set_cmd_trigger(struct ast_device *ast)
80 80 {
81 81     ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x9b, ~0x40, 0x40);
82 82 }
83 83 
84 - static void clear_cmd_trigger(struct ast_private *ast)
84 + static void clear_cmd_trigger(struct ast_device *ast)
85 85 {
86 86     ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x9b, ~0x40, 0x00);
87 87 }
88 88 
89 89 #if 0
90 - static bool wait_fw_ready(struct ast_private *ast)
90 + static bool wait_fw_ready(struct ast_device *ast)
91 91 {
92 92     u8 waitready;
93 93     u32 retry = 0;
···
106 106 
107 107 static bool ast_write_cmd(struct drm_device *dev, u8 data)
108 108 {
109 -     struct ast_private *ast = to_ast_private(dev);
109 +     struct ast_device *ast = to_ast_device(dev);
110 110     int retry = 0;
111 111     if (wait_nack(ast)) {
112 112         send_nack(ast);
···
128 128 
129 129 static bool ast_write_data(struct drm_device *dev, u8 data)
130 130 {
131 -     struct ast_private *ast = to_ast_private(dev);
131 +     struct ast_device *ast = to_ast_device(dev);
132 132 
133 133     if (wait_nack(ast)) {
134 134         send_nack(ast);
···
146 146 #if 0
147 147 static bool ast_read_data(struct drm_device *dev, u8 *data)
148 148 {
149 -     struct ast_private *ast = to_ast_private(dev);
149 +     struct ast_device *ast = to_ast_device(dev);
150 150     u8 tmp;
151 151 
152 152     *data = 0;
···
163 163     return true;
164 164 }
165 165 
166 - static void clear_cmd(struct ast_private *ast)
166 + static void clear_cmd(struct ast_device *ast)
167 167 {
168 168     send_nack(ast);
169 169     ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x9a, 0x00, 0x00);
···
178 178     msleep(10);
179 179 }
180 180 
181 - static u32 get_fw_base(struct ast_private *ast)
181 + static u32 get_fw_base(struct ast_device *ast)
182 182 {
183 183     return ast_mindwm(ast, 0x1e6e2104) & 0x7fffffff;
184 184 }
185 185 
186 186 bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size)
187 187 {
188 -     struct ast_private *ast = to_ast_private(dev);
188 +     struct ast_device *ast = to_ast_device(dev);
189 189     u32 i, data;
190 190     u32 boot_address;
191 191 
···
204 204 
205 205 static bool ast_launch_m68k(struct drm_device *dev)
206 206 {
207 -     struct ast_private *ast = to_ast_private(dev);
207 +     struct ast_device *ast = to_ast_device(dev);
208 208     u32 i, data, len = 0;
209 209     u32 boot_address;
210 210     u8 *fw_addr = NULL;
···
274 274 
275 275 bool ast_dp501_read_edid(struct drm_device *dev, u8 *ediddata)
276 276 {
277 -     struct ast_private *ast = to_ast_private(dev);
277 +     struct ast_device *ast = to_ast_device(dev);
278 278     u32 i, boot_address, offset, data;
279 279     u32 *pEDIDidx;
280 280 
···
334 334 
335 335 static bool ast_init_dvo(struct drm_device *dev)
336 336 {
337 -     struct ast_private *ast = to_ast_private(dev);
337 +     struct ast_device *ast = to_ast_device(dev);
338 338     u8 jreg;
339 339     u32 data;
340 340     ast_write32(ast, 0xf004, 0x1e6e0000);
···
407 407 
408 408 static void ast_init_analog(struct drm_device *dev)
409 409 {
410 -     struct ast_private *ast = to_ast_private(dev);
410 +     struct ast_device *ast = to_ast_device(dev);
411 411     u32 data;
412 412 
413 413     /*
···
434 434 
435 435 void ast_init_3rdtx(struct drm_device *dev)
436 436 {
437 -     struct ast_private *ast = to_ast_private(dev);
437 +     struct ast_device *ast = to_ast_device(dev);
438 438     u8 jreg;
439 439 
440 440     if (ast->chip == AST2300 || ast->chip == AST2400) {
+1 -1
drivers/gpu/drm/ast/ast_drv.c
···
105 105 
106 106 static int ast_pci_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
107 107 {
108 -     struct ast_private *ast;
108 +     struct ast_device *ast;
109 109     struct drm_device *dev;
110 110     int ret;
111 111 
+33 -51
drivers/gpu/drm/ast/ast_drv.h
···
157 157  * Device
158 158  */
159 159 
160 - struct ast_private {
160 + struct ast_device {
161 161     struct drm_device base;
162 162 
163 163     struct mutex ioregs_lock; /* Protects access to I/O registers in ioregs */
···
210 210     const struct firmware *dp501_fw; /* dp501 fw */
211 211 };
212 212 
213 - static inline struct ast_private *to_ast_private(struct drm_device *dev)
213 + static inline struct ast_device *to_ast_device(struct drm_device *dev)
214 214 {
215 -     return container_of(dev, struct ast_private, base);
215 +     return container_of(dev, struct ast_device, base);
216 216 }
217 217 
218 - struct ast_private *ast_device_create(const struct drm_driver *drv,
219 -                                       struct pci_dev *pdev,
220 -                                       unsigned long flags);
218 + struct ast_device *ast_device_create(const struct drm_driver *drv,
219 +                                      struct pci_dev *pdev,
220 +                                      unsigned long flags);
221 221 
222 222 #define AST_IO_AR_PORT_WRITE (0x40)
223 223 #define AST_IO_MISC_PORT_WRITE (0x42)
···
238 238 #define AST_IO_VGACRCB_HWC_ENABLED BIT(1)
239 239 #define AST_IO_VGACRCB_HWC_16BPP BIT(0) /* set: ARGB4444, cleared: 2bpp palette */
240 240 
241 - #define __ast_read(x) \
242 - static inline u##x ast_read##x(struct ast_private *ast, u32 reg) { \
243 -     u##x val = 0;\
244 -     val = ioread##x(ast->regs + reg); \
245 -     return val;\
241 + static inline u32 ast_read32(struct ast_device *ast, u32 reg)
242 + {
243 +     return ioread32(ast->regs + reg);
246 244 }
247 245 
248 - __ast_read(8);
249 - __ast_read(16);
250 - __ast_read(32)
251 - 
252 - #define __ast_io_read(x) \
253 - static inline u##x ast_io_read##x(struct ast_private *ast, u32 reg) { \
254 -     u##x val = 0;\
255 -     val = ioread##x(ast->ioregs + reg); \
256 -     return val;\
246 + static inline void ast_write32(struct ast_device *ast, u32 reg, u32 val)
247 + {
248 +     iowrite32(val, ast->regs + reg);
257 249 }
258 250 
259 - __ast_io_read(8);
260 - __ast_io_read(16);
261 - __ast_io_read(32);
251 + static inline u8 ast_io_read8(struct ast_device *ast, u32 reg)
252 + {
253 +     return ioread8(ast->ioregs + reg);
254 + }
262 255 
263 - #define __ast_write(x) \
264 - static inline void ast_write##x(struct ast_private *ast, u32 reg, u##x val) {\
265 -     iowrite##x(val, ast->regs + reg);\
266 - }
256 + static inline void ast_io_write8(struct ast_device *ast, u32 reg, u8 val)
257 + {
258 +     iowrite8(val, ast->ioregs + reg);
259 + }
267 260 
268 - __ast_write(8);
269 - __ast_write(16);
270 - __ast_write(32);
271 - 
272 - #define __ast_io_write(x) \
273 - static inline void ast_io_write##x(struct ast_private *ast, u32 reg, u##x val) {\
274 -     iowrite##x(val, ast->ioregs + reg);\
275 - }
276 - 
277 - __ast_io_write(8);
278 - __ast_io_write(16);
279 - #undef __ast_io_write
280 - 
281 - static inline void ast_set_index_reg(struct ast_private *ast,
261 + static inline void ast_set_index_reg(struct ast_device *ast,
282 262                                      uint32_t base, uint8_t index,
283 263                                      uint8_t val)
284 264 {
285 -     ast_io_write16(ast, base, ((u16)val << 8) | index);
265 +     ast_io_write8(ast, base, index);
266 +     ++base;
267 +     ast_io_write8(ast, base, val);
286 268 }
287 269 
288 - void ast_set_index_reg_mask(struct ast_private *ast,
270 + void ast_set_index_reg_mask(struct ast_device *ast,
289 271                             uint32_t base, uint8_t index,
290 272                             uint8_t mask, uint8_t val);
291 - uint8_t ast_get_index_reg(struct ast_private *ast,
273 + uint8_t ast_get_index_reg(struct ast_device *ast,
292 274                           uint32_t base, uint8_t index);
293 - uint8_t ast_get_index_reg_mask(struct ast_private *ast,
275 + uint8_t ast_get_index_reg_mask(struct ast_device *ast,
294 276                                uint32_t base, uint8_t index, uint8_t mask);
295 277 
296 - static inline void ast_open_key(struct ast_private *ast)
278 + static inline void ast_open_key(struct ast_device *ast)
297 279 {
298 280     ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0x80, 0xA8);
299 281 }
···
334 352 
335 353 #define to_ast_crtc_state(state) container_of(state, struct ast_crtc_state, base)
336 354 
337 - int ast_mode_config_init(struct ast_private *ast);
355 + int ast_mode_config_init(struct ast_device *ast);
338 356 
339 357 #define AST_MM_ALIGN_SHIFT 4
340 358 #define AST_MM_ALIGN_MASK ((1 << AST_MM_ALIGN_SHIFT) - 1)
···
458 476 #define ASTDP_1366x768_60 0x1E
459 477 #define ASTDP_1152x864_75 0x1F
460 478 
461 - int ast_mm_init(struct ast_private *ast);
479 + int ast_mm_init(struct ast_device *ast);
462 480 
463 481 /* ast post */
464 482 void ast_enable_vga(struct drm_device *dev);
465 483 void ast_enable_mmio(struct drm_device *dev);
466 484 bool ast_is_vga_enabled(struct drm_device *dev);
467 485 void ast_post_gpu(struct drm_device *dev);
468 - u32 ast_mindwm(struct ast_private *ast, u32 r);
469 - void ast_moutdwm(struct ast_private *ast, u32 r, u32 v);
470 - void ast_patch_ahb_2500(struct ast_private *ast);
486 + u32 ast_mindwm(struct ast_device *ast, u32 r);
487 + void ast_moutdwm(struct ast_device *ast, u32 r, u32 v);
488 + void ast_patch_ahb_2500(struct ast_device *ast);
471 489 /* ast dp501 */
472 490 void ast_set_dp501_video_output(struct drm_device *dev, u8 mode);
473 491 bool ast_backup_fw(struct drm_device *dev, u8 *addr, u32 size);
+4 -4
drivers/gpu/drm/ast/ast_i2c.c
···
29 29 static void ast_i2c_setsda(void *i2c_priv, int data)
30 30 {
31 31     struct ast_i2c_chan *i2c = i2c_priv;
32 -     struct ast_private *ast = to_ast_private(i2c->dev);
32 +     struct ast_device *ast = to_ast_device(i2c->dev);
33 33     int i;
34 34     u8 ujcrb7, jtemp;
35 35 
···
45 45 static void ast_i2c_setscl(void *i2c_priv, int clock)
46 46 {
47 47     struct ast_i2c_chan *i2c = i2c_priv;
48 -     struct ast_private *ast = to_ast_private(i2c->dev);
48 +     struct ast_device *ast = to_ast_device(i2c->dev);
49 49     int i;
50 50     u8 ujcrb7, jtemp;
51 51 
···
61 61 static int ast_i2c_getsda(void *i2c_priv)
62 62 {
63 63     struct ast_i2c_chan *i2c = i2c_priv;
64 -     struct ast_private *ast = to_ast_private(i2c->dev);
64 +     struct ast_device *ast = to_ast_device(i2c->dev);
65 65     uint32_t val, val2, count, pass;
66 66 
67 67     count = 0;
···
83 83 static int ast_i2c_getscl(void *i2c_priv)
84 84 {
85 85     struct ast_i2c_chan *i2c = i2c_priv;
86 -     struct ast_private *ast = to_ast_private(i2c->dev);
86 +     struct ast_device *ast = to_ast_device(i2c->dev);
87 87     uint32_t val, val2, count, pass;
88 88 
89 89     count = 0;
+12 -12
drivers/gpu/drm/ast/ast_main.c
···
35 35 
36 36 #include "ast_drv.h"
37 37 
38 - void ast_set_index_reg_mask(struct ast_private *ast,
38 + void ast_set_index_reg_mask(struct ast_device *ast,
39 39                             uint32_t base, uint8_t index,
40 40                             uint8_t mask, uint8_t val)
41 41 {
···
45 45     ast_set_index_reg(ast, base, index, tmp);
46 46 }
47 47 
48 - uint8_t ast_get_index_reg(struct ast_private *ast,
48 + uint8_t ast_get_index_reg(struct ast_device *ast,
49 49                           uint32_t base, uint8_t index)
50 50 {
51 51     uint8_t ret;
···
54 54     return ret;
55 55 }
56 56 
57 - uint8_t ast_get_index_reg_mask(struct ast_private *ast,
57 + uint8_t ast_get_index_reg_mask(struct ast_device *ast,
58 58                                uint32_t base, uint8_t index, uint8_t mask)
59 59 {
60 60     uint8_t ret;
···
66 66 static void ast_detect_config_mode(struct drm_device *dev, u32 *scu_rev)
67 67 {
68 68     struct device_node *np = dev->dev->of_node;
69 -     struct ast_private *ast = to_ast_private(dev);
69 +     struct ast_device *ast = to_ast_device(dev);
70 70     struct pci_dev *pdev = to_pci_dev(dev->dev);
71 71     uint32_t data, jregd0, jregd1;
72 72 
···
122 122 
123 123 static int ast_detect_chip(struct drm_device *dev, bool *need_post)
124 124 {
125 -     struct ast_private *ast = to_ast_private(dev);
125 +     struct ast_device *ast = to_ast_device(dev);
126 126     struct pci_dev *pdev = to_pci_dev(dev->dev);
127 127     uint32_t jreg, scu_rev;
128 128 
···
271 271 static int ast_get_dram_info(struct drm_device *dev)
272 272 {
273 273     struct device_node *np = dev->dev->of_node;
274 -     struct ast_private *ast = to_ast_private(dev);
274 +     struct ast_device *ast = to_ast_device(dev);
275 275     uint32_t mcr_cfg, mcr_scu_mpll, mcr_scu_strap;
276 276     uint32_t denum, num, div, ref_pll, dsel;
277 277 
···
394 394  */
395 395 static void ast_device_release(void *data)
396 396 {
397 -     struct ast_private *ast = data;
397 +     struct ast_device *ast = data;
398 398 
399 399     /* enable standard VGA decode */
400 400     ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x04);
401 401 }
402 402 
403 - struct ast_private *ast_device_create(const struct drm_driver *drv,
404 -                                       struct pci_dev *pdev,
405 -                                       unsigned long flags)
403 + struct ast_device *ast_device_create(const struct drm_driver *drv,
404 +                                      struct pci_dev *pdev,
405 +                                      unsigned long flags)
406 406 {
407 407     struct drm_device *dev;
408 -     struct ast_private *ast;
408 +     struct ast_device *ast;
409 409     bool need_post;
410 410     int ret = 0;
411 411 
412 -     ast = devm_drm_dev_alloc(&pdev->dev, drv, struct ast_private, base);
412 +     ast = devm_drm_dev_alloc(&pdev->dev, drv, struct ast_device, base);
413 413     if (IS_ERR(ast))
414 414         return ast;
415 415     dev = &ast->base;
+2 -2
drivers/gpu/drm/ast/ast_mm.c
··· 33 33 34 34 #include "ast_drv.h" 35 35 36 - static u32 ast_get_vram_size(struct ast_private *ast) 36 + static u32 ast_get_vram_size(struct ast_device *ast) 37 37 { 38 38 u8 jreg; 39 39 u32 vram_size; ··· 73 73 return vram_size; 74 74 } 75 75 76 - int ast_mm_init(struct ast_private *ast) 76 + int ast_mm_init(struct ast_device *ast) 77 77 { 78 78 struct drm_device *dev = &ast->base; 79 79 struct pci_dev *pdev = to_pci_dev(dev->dev);
+58 -46
drivers/gpu/drm/ast/ast_mode.c
··· 51 51 52 52 #define AST_LUT_SIZE 256 53 53 54 - static inline void ast_load_palette_index(struct ast_private *ast, 54 + static inline void ast_load_palette_index(struct ast_device *ast, 55 55 u8 index, u8 red, u8 green, 56 56 u8 blue) 57 57 { ··· 65 65 ast_io_read8(ast, AST_IO_SEQ_PORT); 66 66 } 67 67 68 - static void ast_crtc_set_gamma_linear(struct ast_private *ast, 68 + static void ast_crtc_set_gamma_linear(struct ast_device *ast, 69 69 const struct drm_format_info *format) 70 70 { 71 71 int i; ··· 84 84 } 85 85 } 86 86 87 - static void ast_crtc_set_gamma(struct ast_private *ast, 87 + static void ast_crtc_set_gamma(struct ast_device *ast, 88 88 const struct drm_format_info *format, 89 89 struct drm_color_lut *lut) 90 90 { ··· 232 232 return true; 233 233 } 234 234 235 - static void ast_set_vbios_color_reg(struct ast_private *ast, 235 + static void ast_set_vbios_color_reg(struct ast_device *ast, 236 236 const struct drm_format_info *format, 237 237 const struct ast_vbios_mode_info *vbios_mode) 238 238 { ··· 263 263 } 264 264 } 265 265 266 - static void ast_set_vbios_mode_reg(struct ast_private *ast, 266 + static void ast_set_vbios_mode_reg(struct ast_device *ast, 267 267 const struct drm_display_mode *adjusted_mode, 268 268 const struct ast_vbios_mode_info *vbios_mode) 269 269 { ··· 287 287 } 288 288 } 289 289 290 - static void ast_set_std_reg(struct ast_private *ast, 290 + static void ast_set_std_reg(struct ast_device *ast, 291 291 struct drm_display_mode *mode, 292 292 struct ast_vbios_mode_info *vbios_mode) 293 293 { ··· 335 335 ast_set_index_reg(ast, AST_IO_GR_PORT, i, stdtable->gr[i]); 336 336 } 337 337 338 - static void ast_set_crtc_reg(struct ast_private *ast, 338 + static void ast_set_crtc_reg(struct ast_device *ast, 339 339 struct drm_display_mode *mode, 340 340 struct ast_vbios_mode_info *vbios_mode) 341 341 { ··· 450 450 ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0x11, 0x7f, 0x80); 451 451 } 452 452 453 - static void ast_set_offset_reg(struct 
ast_private *ast, 453 + static void ast_set_offset_reg(struct ast_device *ast, 454 454 struct drm_framebuffer *fb) 455 455 { 456 456 u16 offset; ··· 460 460 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xb0, (offset >> 8) & 0x3f); 461 461 } 462 462 463 - static void ast_set_dclk_reg(struct ast_private *ast, 463 + static void ast_set_dclk_reg(struct ast_device *ast, 464 464 struct drm_display_mode *mode, 465 465 struct ast_vbios_mode_info *vbios_mode) 466 466 { ··· 478 478 ((clk_info->param3 & 0x3) << 4)); 479 479 } 480 480 481 - static void ast_set_color_reg(struct ast_private *ast, 481 + static void ast_set_color_reg(struct ast_device *ast, 482 482 const struct drm_format_info *format) 483 483 { 484 484 u8 jregA0 = 0, jregA3 = 0, jregA8 = 0; ··· 507 507 ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xa8, 0xfd, jregA8); 508 508 } 509 509 510 - static void ast_set_crtthd_reg(struct ast_private *ast) 510 + static void ast_set_crtthd_reg(struct ast_device *ast) 511 511 { 512 512 /* Set Threshold */ 513 513 if (ast->chip == AST2600) { ··· 529 529 } 530 530 } 531 531 532 - static void ast_set_sync_reg(struct ast_private *ast, 532 + static void ast_set_sync_reg(struct ast_device *ast, 533 533 struct drm_display_mode *mode, 534 534 struct ast_vbios_mode_info *vbios_mode) 535 535 { ··· 544 544 ast_io_write8(ast, AST_IO_MISC_PORT_WRITE, jreg); 545 545 } 546 546 547 - static void ast_set_start_address_crt1(struct ast_private *ast, 547 + static void ast_set_start_address_crt1(struct ast_device *ast, 548 548 unsigned int offset) 549 549 { 550 550 u32 addr; ··· 556 556 557 557 } 558 558 559 - static void ast_wait_for_vretrace(struct ast_private *ast) 559 + static void ast_wait_for_vretrace(struct ast_device *ast) 560 560 { 561 561 unsigned long timeout = jiffies + HZ; 562 562 u8 vgair1; ··· 645 645 struct drm_atomic_state *state) 646 646 { 647 647 struct drm_device *dev = plane->dev; 648 - struct ast_private *ast = to_ast_private(dev); 648 + struct ast_device *ast = 
to_ast_device(dev); 649 649 struct drm_plane_state *plane_state = drm_atomic_get_new_plane_state(state, plane); 650 650 struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 651 651 struct drm_framebuffer *fb = plane_state->fb; ··· 672 672 673 673 /* 674 674 * Some BMCs stop scanning out the video signal after the driver 675 - * reprogrammed the offset or scanout address. This stalls display 676 - * output for several seconds and makes the display unusable. 677 - * Therefore only update the offset if it changes and reprogram the 678 - * address after enabling the plane. 675 + * reprogrammed the offset. This stalls display output for several 676 + * seconds and makes the display unusable. Therefore only update 677 + * the offset if it changes. 679 678 */ 680 679 if (!old_fb || old_fb->pitches[0] != fb->pitches[0]) 681 680 ast_set_offset_reg(ast, fb); 682 - if (!old_fb) { 683 - ast_set_start_address_crt1(ast, (u32)ast_plane->offset); 684 - ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT, 0x1, 0xdf, 0x00); 685 - } 681 + } 682 + 683 + static void ast_primary_plane_helper_atomic_enable(struct drm_plane *plane, 684 + struct drm_atomic_state *state) 685 + { 686 + struct ast_device *ast = to_ast_device(plane->dev); 687 + struct ast_plane *ast_plane = to_ast_plane(plane); 688 + 689 + /* 690 + * Some BMCs stop scanning out the video signal after the driver 691 + * reprogrammed the scanout address. This stalls display 692 + * output for several seconds and makes the display unusable. 693 + * Therefore only reprogram the address after enabling the plane. 
694 + */ 695 + ast_set_start_address_crt1(ast, (u32)ast_plane->offset); 696 + ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT, 0x1, 0xdf, 0x00); 686 697 } 687 698 688 699 static void ast_primary_plane_helper_atomic_disable(struct drm_plane *plane, 689 700 struct drm_atomic_state *state) 690 701 { 691 - struct ast_private *ast = to_ast_private(plane->dev); 702 + struct ast_device *ast = to_ast_device(plane->dev); 692 703 693 704 ast_set_index_reg_mask(ast, AST_IO_SEQ_PORT, 0x1, 0xdf, 0x20); 694 705 } ··· 708 697 DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, 709 698 .atomic_check = ast_primary_plane_helper_atomic_check, 710 699 .atomic_update = ast_primary_plane_helper_atomic_update, 700 + .atomic_enable = ast_primary_plane_helper_atomic_enable, 711 701 .atomic_disable = ast_primary_plane_helper_atomic_disable, 712 702 }; 713 703 ··· 719 707 DRM_GEM_SHADOW_PLANE_FUNCS, 720 708 }; 721 709 722 - static int ast_primary_plane_init(struct ast_private *ast) 710 + static int ast_primary_plane_init(struct ast_device *ast) 723 711 { 724 712 struct drm_device *dev = &ast->base; 725 713 struct ast_plane *ast_primary_plane = &ast->primary_plane; ··· 812 800 writel(0, dst + AST_HWC_SIGNATURE_HOTSPOTY); 813 801 } 814 802 815 - static void ast_set_cursor_base(struct ast_private *ast, u64 address) 803 + static void ast_set_cursor_base(struct ast_device *ast, u64 address) 816 804 { 817 805 u8 addr0 = (address >> 3) & 0xff; 818 806 u8 addr1 = (address >> 11) & 0xff; ··· 823 811 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xca, addr2); 824 812 } 825 813 826 - static void ast_set_cursor_location(struct ast_private *ast, u16 x, u16 y, 814 + static void ast_set_cursor_location(struct ast_device *ast, u16 x, u16 y, 827 815 u8 x_offset, u8 y_offset) 828 816 { 829 817 u8 x0 = (x & 0x00ff); ··· 839 827 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xc7, y1); 840 828 } 841 829 842 - static void ast_set_cursor_enabled(struct ast_private *ast, bool enabled) 830 + static void ast_set_cursor_enabled(struct ast_device 
*ast, bool enabled) 843 831 { 844 832 static const u8 mask = (u8)~(AST_IO_VGACRCB_HWC_16BPP | 845 833 AST_IO_VGACRCB_HWC_ENABLED); ··· 888 876 struct drm_shadow_plane_state *shadow_plane_state = to_drm_shadow_plane_state(plane_state); 889 877 struct drm_framebuffer *fb = plane_state->fb; 890 878 struct drm_plane_state *old_plane_state = drm_atomic_get_old_plane_state(state, plane); 891 - struct ast_private *ast = to_ast_private(plane->dev); 879 + struct ast_device *ast = to_ast_device(plane->dev); 892 880 struct iosys_map src_map = shadow_plane_state->data[0]; 893 881 struct drm_rect damage; 894 882 const u8 *src = src_map.vaddr; /* TODO: Use mapping abstraction properly */ ··· 943 931 static void ast_cursor_plane_helper_atomic_disable(struct drm_plane *plane, 944 932 struct drm_atomic_state *state) 945 933 { 946 - struct ast_private *ast = to_ast_private(plane->dev); 934 + struct ast_device *ast = to_ast_device(plane->dev); 947 935 948 936 ast_set_cursor_enabled(ast, false); 949 937 } ··· 962 950 DRM_GEM_SHADOW_PLANE_FUNCS, 963 951 }; 964 952 965 - static int ast_cursor_plane_init(struct ast_private *ast) 953 + static int ast_cursor_plane_init(struct ast_device *ast) 966 954 { 967 955 struct drm_device *dev = &ast->base; 968 956 struct ast_plane *ast_cursor_plane = &ast->cursor_plane; ··· 1007 995 1008 996 static void ast_crtc_dpms(struct drm_crtc *crtc, int mode) 1009 997 { 1010 - struct ast_private *ast = to_ast_private(crtc->dev); 998 + struct ast_device *ast = to_ast_device(crtc->dev); 1011 999 u8 ch = AST_DPMS_VSYNC_OFF | AST_DPMS_HSYNC_OFF; 1012 1000 struct ast_crtc_state *ast_state; 1013 1001 const struct drm_format_info *format; ··· 1064 1052 static enum drm_mode_status 1065 1053 ast_crtc_helper_mode_valid(struct drm_crtc *crtc, const struct drm_display_mode *mode) 1066 1054 { 1067 - struct ast_private *ast = to_ast_private(crtc->dev); 1055 + struct ast_device *ast = to_ast_device(crtc->dev); 1068 1056 enum drm_mode_status status; 1069 1057 uint32_t jtemp; 
1070 1058 ··· 1189 1177 struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, 1190 1178 crtc); 1191 1179 struct drm_device *dev = crtc->dev; 1192 - struct ast_private *ast = to_ast_private(dev); 1180 + struct ast_device *ast = to_ast_device(dev); 1193 1181 struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state); 1194 1182 struct ast_vbios_mode_info *vbios_mode_info = &ast_crtc_state->vbios_mode_info; 1195 1183 ··· 1214 1202 static void ast_crtc_helper_atomic_enable(struct drm_crtc *crtc, struct drm_atomic_state *state) 1215 1203 { 1216 1204 struct drm_device *dev = crtc->dev; 1217 - struct ast_private *ast = to_ast_private(dev); 1205 + struct ast_device *ast = to_ast_device(dev); 1218 1206 struct drm_crtc_state *crtc_state = drm_atomic_get_new_crtc_state(state, crtc); 1219 1207 struct ast_crtc_state *ast_crtc_state = to_ast_crtc_state(crtc_state); 1220 1208 struct ast_vbios_mode_info *vbios_mode_info = ··· 1236 1224 { 1237 1225 struct drm_crtc_state *old_crtc_state = drm_atomic_get_old_crtc_state(state, crtc); 1238 1226 struct drm_device *dev = crtc->dev; 1239 - struct ast_private *ast = to_ast_private(dev); 1227 + struct ast_device *ast = to_ast_device(dev); 1240 1228 1241 1229 ast_crtc_dpms(crtc, DRM_MODE_DPMS_OFF); 1242 1230 ··· 1324 1312 1325 1313 static int ast_crtc_init(struct drm_device *dev) 1326 1314 { 1327 - struct ast_private *ast = to_ast_private(dev); 1315 + struct ast_device *ast = to_ast_device(dev); 1328 1316 struct drm_crtc *crtc = &ast->crtc; 1329 1317 int ret; 1330 1318 ··· 1350 1338 { 1351 1339 struct ast_vga_connector *ast_vga_connector = to_ast_vga_connector(connector); 1352 1340 struct drm_device *dev = connector->dev; 1353 - struct ast_private *ast = to_ast_private(dev); 1341 + struct ast_device *ast = to_ast_device(dev); 1354 1342 struct edid *edid; 1355 1343 int count; 1356 1344 ··· 1423 1411 return 0; 1424 1412 } 1425 1413 1426 - static int ast_vga_output_init(struct ast_private *ast) 1414 + static int 
ast_vga_output_init(struct ast_device *ast) 1427 1415 { 1428 1416 struct drm_device *dev = &ast->base; 1429 1417 struct drm_crtc *crtc = &ast->crtc; ··· 1456 1444 { 1457 1445 struct ast_sil164_connector *ast_sil164_connector = to_ast_sil164_connector(connector); 1458 1446 struct drm_device *dev = connector->dev; 1459 - struct ast_private *ast = to_ast_private(dev); 1447 + struct ast_device *ast = to_ast_device(dev); 1460 1448 struct edid *edid; 1461 1449 int count; 1462 1450 ··· 1529 1517 return 0; 1530 1518 } 1531 1519 1532 - static int ast_sil164_output_init(struct ast_private *ast) 1520 + static int ast_sil164_output_init(struct ast_device *ast) 1533 1521 { 1534 1522 struct drm_device *dev = &ast->base; 1535 1523 struct drm_crtc *crtc = &ast->crtc; ··· 1616 1604 return 0; 1617 1605 } 1618 1606 1619 - static int ast_dp501_output_init(struct ast_private *ast) 1607 + static int ast_dp501_output_init(struct ast_device *ast) 1620 1608 { 1621 1609 struct drm_device *dev = &ast->base; 1622 1610 struct drm_crtc *crtc = &ast->crtc; ··· 1703 1691 return 0; 1704 1692 } 1705 1693 1706 - static int ast_astdp_output_init(struct ast_private *ast) 1694 + static int ast_astdp_output_init(struct ast_device *ast) 1707 1695 { 1708 1696 struct drm_device *dev = &ast->base; 1709 1697 struct drm_crtc *crtc = &ast->crtc; ··· 1733 1721 1734 1722 static void ast_mode_config_helper_atomic_commit_tail(struct drm_atomic_state *state) 1735 1723 { 1736 - struct ast_private *ast = to_ast_private(state->dev); 1724 + struct ast_device *ast = to_ast_device(state->dev); 1737 1725 1738 1726 /* 1739 1727 * Concurrent operations could possibly trigger a call to ··· 1754 1742 const struct drm_display_mode *mode) 1755 1743 { 1756 1744 static const unsigned long max_bpp = 4; /* DRM_FORMAT_XRGB8888 */ 1757 - struct ast_private *ast = to_ast_private(dev); 1745 + struct ast_device *ast = to_ast_device(dev); 1758 1746 unsigned long fbsize, fbpages, max_fbpages; 1759 1747 1760 1748 max_fbpages = 
(ast->vram_fb_available) >> PAGE_SHIFT; ··· 1775 1763 .atomic_commit = drm_atomic_helper_commit, 1776 1764 }; 1777 1765 1778 - int ast_mode_config_init(struct ast_private *ast) 1766 + int ast_mode_config_init(struct ast_device *ast) 1779 1767 { 1780 1768 struct drm_device *dev = &ast->base; 1781 1769 int ret;
+47 -47
drivers/gpu/drm/ast/ast_post.c
··· 39 39 40 40 void ast_enable_vga(struct drm_device *dev) 41 41 { 42 - struct ast_private *ast = to_ast_private(dev); 42 + struct ast_device *ast = to_ast_device(dev); 43 43 44 44 ast_io_write8(ast, AST_IO_VGA_ENABLE_PORT, 0x01); 45 45 ast_io_write8(ast, AST_IO_MISC_PORT_WRITE, 0x01); ··· 47 47 48 48 void ast_enable_mmio(struct drm_device *dev) 49 49 { 50 - struct ast_private *ast = to_ast_private(dev); 50 + struct ast_device *ast = to_ast_device(dev); 51 51 52 52 ast_set_index_reg(ast, AST_IO_CRTC_PORT, 0xa1, 0x06); 53 53 } ··· 55 55 56 56 bool ast_is_vga_enabled(struct drm_device *dev) 57 57 { 58 - struct ast_private *ast = to_ast_private(dev); 58 + struct ast_device *ast = to_ast_device(dev); 59 59 u8 ch; 60 60 61 61 ch = ast_io_read8(ast, AST_IO_VGA_ENABLE_PORT); ··· 70 70 static void 71 71 ast_set_def_ext_reg(struct drm_device *dev) 72 72 { 73 - struct ast_private *ast = to_ast_private(dev); 73 + struct ast_device *ast = to_ast_device(dev); 74 74 struct pci_dev *pdev = to_pci_dev(dev->dev); 75 75 u8 i, index, reg; 76 76 const u8 *ext_reg_info; ··· 110 110 ast_set_index_reg_mask(ast, AST_IO_CRTC_PORT, 0xb6, 0xff, reg); 111 111 } 112 112 113 - u32 ast_mindwm(struct ast_private *ast, u32 r) 113 + u32 ast_mindwm(struct ast_device *ast, u32 r) 114 114 { 115 115 uint32_t data; 116 116 ··· 123 123 return ast_read32(ast, 0x10000 + (r & 0x0000ffff)); 124 124 } 125 125 126 - void ast_moutdwm(struct ast_private *ast, u32 r, u32 v) 126 + void ast_moutdwm(struct ast_device *ast, u32 r, u32 v) 127 127 { 128 128 uint32_t data; 129 129 ast_write32(ast, 0xf004, r & 0xffff0000); ··· 162 162 0x20F050E0 163 163 }; 164 164 165 - static u32 mmctestburst2_ast2150(struct ast_private *ast, u32 datagen) 165 + static u32 mmctestburst2_ast2150(struct ast_device *ast, u32 datagen) 166 166 { 167 167 u32 data, timeout; 168 168 ··· 192 192 } 193 193 194 194 #if 0 /* unused in DDX driver - here for completeness */ 195 - static u32 mmctestsingle2_ast2150(struct ast_private *ast, u32 datagen) 
195 + static u32 mmctestsingle2_ast2150(struct ast_device *ast, u32 datagen) 196 196 { 197 197 u32 data, timeout; 198 198 ··· 212 212 } 213 213 #endif 214 214 215 - static int cbrtest_ast2150(struct ast_private *ast) 215 + static int cbrtest_ast2150(struct ast_device *ast) 216 216 { 217 217 int i; 218 218 ··· 222 222 return 1; 223 223 } 224 224 225 - static int cbrscan_ast2150(struct ast_private *ast, int busw) 225 + static int cbrscan_ast2150(struct ast_device *ast, int busw) 226 226 { 227 227 u32 patcnt, loop; 228 228 ··· 239 239 } 240 240 241 241 242 - static void cbrdlli_ast2150(struct ast_private *ast, int busw) 242 + static void cbrdlli_ast2150(struct ast_device *ast, int busw) 243 243 { 244 244 u32 dll_min[4], dll_max[4], dlli, data, passcnt; 245 245 ··· 273 273 274 274 static void ast_init_dram_reg(struct drm_device *dev) 275 275 { 276 - struct ast_private *ast = to_ast_private(dev); 276 + struct ast_device *ast = to_ast_device(dev); 277 277 u8 j; 278 278 u32 data, temp, i; 279 279 const struct ast_dramstruct *dram_reg_info; ··· 366 366 367 367 void ast_post_gpu(struct drm_device *dev) 368 368 { 369 - struct ast_private *ast = to_ast_private(dev); 369 + struct ast_device *ast = to_ast_device(dev); 370 370 struct pci_dev *pdev = to_pci_dev(dev->dev); 371 371 u32 reg; 372 372 ··· 449 449 0x7C61D253 450 450 }; 451 451 452 - static bool mmc_test(struct ast_private *ast, u32 datagen, u8 test_ctl) 452 + static bool mmc_test(struct ast_device *ast, u32 datagen, u8 test_ctl) 453 453 { 454 454 u32 data, timeout; 455 455 ··· 469 469 return true; 470 470 } 471 471 472 - static u32 mmc_test2(struct ast_private *ast, u32 datagen, u8 test_ctl) 472 + static u32 mmc_test2(struct ast_device *ast, u32 datagen, u8 test_ctl) 473 473 { 474 474 u32 data, timeout; 475 475 ··· 490 490 } 491 491 492 492 493 - static bool mmc_test_burst(struct ast_private *ast, u32 datagen) 493 + static bool mmc_test_burst(struct ast_device *ast, u32 datagen) 494 494 { 495 495 return mmc_test(ast, 
datagen, 0xc1); 496 496 } 497 497 498 - static u32 mmc_test_burst2(struct ast_private *ast, u32 datagen) 498 + static u32 mmc_test_burst2(struct ast_device *ast, u32 datagen) 499 499 { 500 500 return mmc_test2(ast, datagen, 0x41); 501 501 } 502 502 503 - static bool mmc_test_single(struct ast_private *ast, u32 datagen) 503 + static bool mmc_test_single(struct ast_device *ast, u32 datagen) 504 504 { 505 505 return mmc_test(ast, datagen, 0xc5); 506 506 } 507 507 508 - static u32 mmc_test_single2(struct ast_private *ast, u32 datagen) 508 + static u32 mmc_test_single2(struct ast_device *ast, u32 datagen) 509 509 { 510 510 return mmc_test2(ast, datagen, 0x05); 511 511 } 512 512 513 - static bool mmc_test_single_2500(struct ast_private *ast, u32 datagen) 513 + static bool mmc_test_single_2500(struct ast_device *ast, u32 datagen) 514 514 { 515 515 return mmc_test(ast, datagen, 0x85); 516 516 } 517 517 518 - static int cbr_test(struct ast_private *ast) 518 + static int cbr_test(struct ast_device *ast) 519 519 { 520 520 u32 data; 521 521 int i; ··· 534 534 return 1; 535 535 } 536 536 537 - static int cbr_scan(struct ast_private *ast) 537 + static int cbr_scan(struct ast_device *ast) 538 538 { 539 539 u32 data, data2, patcnt, loop; 540 540 ··· 555 555 return data2; 556 556 } 557 557 558 - static u32 cbr_test2(struct ast_private *ast) 558 + static u32 cbr_test2(struct ast_device *ast) 559 559 { 560 560 u32 data; 561 561 ··· 569 569 return ~data & 0xffff; 570 570 } 571 571 572 - static u32 cbr_scan2(struct ast_private *ast) 572 + static u32 cbr_scan2(struct ast_device *ast) 573 573 { 574 574 u32 data, data2, patcnt, loop; 575 575 ··· 590 590 return data2; 591 591 } 592 592 593 - static bool cbr_test3(struct ast_private *ast) 593 + static bool cbr_test3(struct ast_device *ast) 594 594 { 595 595 if (!mmc_test_burst(ast, 0)) 596 596 return false; ··· 599 599 return true; 600 600 } 601 601 602 - static bool cbr_scan3(struct ast_private *ast) 602 + static bool cbr_scan3(struct 
ast_device *ast) 603 603 { 604 604 u32 patcnt, loop; 605 605 ··· 615 615 return true; 616 616 } 617 617 618 - static bool finetuneDQI_L(struct ast_private *ast, struct ast2300_dram_param *param) 618 + static bool finetuneDQI_L(struct ast_device *ast, struct ast2300_dram_param *param) 619 619 { 620 620 u32 gold_sadj[2], dllmin[16], dllmax[16], dlli, data, cnt, mask, passcnt, retry = 0; 621 621 bool status = false; ··· 714 714 return status; 715 715 } /* finetuneDQI_L */ 716 716 717 - static void finetuneDQSI(struct ast_private *ast) 717 + static void finetuneDQSI(struct ast_device *ast) 718 718 { 719 719 u32 dlli, dqsip, dqidly; 720 720 u32 reg_mcr18, reg_mcr0c, passcnt[2], diff; ··· 804 804 ast_moutdwm(ast, 0x1E6E0018, reg_mcr18); 805 805 806 806 } 807 - static bool cbr_dll2(struct ast_private *ast, struct ast2300_dram_param *param) 807 + static bool cbr_dll2(struct ast_device *ast, struct ast2300_dram_param *param) 808 808 { 809 809 u32 dllmin[2], dllmax[2], dlli, data, passcnt, retry = 0; 810 810 bool status = false; ··· 860 860 return status; 861 861 } /* CBRDLL2 */ 862 862 863 - static void get_ddr3_info(struct ast_private *ast, struct ast2300_dram_param *param) 863 + static void get_ddr3_info(struct ast_device *ast, struct ast2300_dram_param *param) 864 864 { 865 865 u32 trap, trap_AC2, trap_MRS; 866 866 ··· 1102 1102 1103 1103 } 1104 1104 1105 - static void ddr3_init(struct ast_private *ast, struct ast2300_dram_param *param) 1105 + static void ddr3_init(struct ast_device *ast, struct ast2300_dram_param *param) 1106 1106 { 1107 1107 u32 data, data2, retry = 0; 1108 1108 ··· 1225 1225 1226 1226 } 1227 1227 1228 - static void get_ddr2_info(struct ast_private *ast, struct ast2300_dram_param *param) 1228 + static void get_ddr2_info(struct ast_device *ast, struct ast2300_dram_param *param) 1229 1229 { 1230 1230 u32 trap, trap_AC2, trap_MRS; 1231 1231 ··· 1472 1472 } 1473 1473 } 1474 1474 1475 - static void ddr2_init(struct ast_private *ast, struct 
ast2300_dram_param *param) 1475 + static void ddr2_init(struct ast_device *ast, struct ast2300_dram_param *param) 1476 1476 { 1477 1477 u32 data, data2, retry = 0; 1478 1478 ··· 1600 1600 1601 1601 static void ast_post_chip_2300(struct drm_device *dev) 1602 1602 { 1603 - struct ast_private *ast = to_ast_private(dev); 1603 + struct ast_device *ast = to_ast_device(dev); 1604 1604 struct ast2300_dram_param param; 1605 1605 u32 temp; 1606 1606 u8 reg; ··· 1681 1681 } while ((reg & 0x40) == 0); 1682 1682 } 1683 1683 1684 - static bool cbr_test_2500(struct ast_private *ast) 1684 + static bool cbr_test_2500(struct ast_device *ast) 1685 1685 { 1686 1686 ast_moutdwm(ast, 0x1E6E0074, 0x0000FFFF); 1687 1687 ast_moutdwm(ast, 0x1E6E007C, 0xFF00FF00); ··· 1692 1692 return true; 1693 1693 } 1694 1694 1695 - static bool ddr_test_2500(struct ast_private *ast) 1695 + static bool ddr_test_2500(struct ast_device *ast) 1696 1696 { 1697 1697 ast_moutdwm(ast, 0x1E6E0074, 0x0000FFFF); 1698 1698 ast_moutdwm(ast, 0x1E6E007C, 0xFF00FF00); ··· 1709 1709 return true; 1710 1710 } 1711 1711 1712 - static void ddr_init_common_2500(struct ast_private *ast) 1712 + static void ddr_init_common_2500(struct ast_device *ast) 1713 1713 { 1714 1714 ast_moutdwm(ast, 0x1E6E0034, 0x00020080); 1715 1715 ast_moutdwm(ast, 0x1E6E0008, 0x2003000F); ··· 1732 1732 ast_moutdwm(ast, 0x1E6E024C, 0x80808080); 1733 1733 } 1734 1734 1735 - static void ddr_phy_init_2500(struct ast_private *ast) 1735 + static void ddr_phy_init_2500(struct ast_device *ast) 1736 1736 { 1737 1737 u32 data, pass, timecnt; 1738 1738 ··· 1766 1766 * 4Gb : 0x80000000 ~ 0x9FFFFFFF 1767 1767 * 8Gb : 0x80000000 ~ 0xBFFFFFFF 1768 1768 */ 1769 - static void check_dram_size_2500(struct ast_private *ast, u32 tRFC) 1769 + static void check_dram_size_2500(struct ast_device *ast, u32 tRFC) 1770 1770 { 1771 1771 u32 reg_04, reg_14; 1772 1772 ··· 1797 1797 ast_moutdwm(ast, 0x1E6E0014, reg_14); 1798 1798 } 1799 1799 1800 - static void enable_cache_2500(struct 
ast_private *ast) 1800 + static void enable_cache_2500(struct ast_device *ast) 1801 1801 { 1802 1802 u32 reg_04, data; 1803 1803 ··· 1810 1810 ast_moutdwm(ast, 0x1E6E0004, reg_04 | 0x400); 1811 1811 } 1812 1812 1813 - static void set_mpll_2500(struct ast_private *ast) 1813 + static void set_mpll_2500(struct ast_device *ast) 1814 1814 { 1815 1815 u32 addr, data, param; 1816 1816 ··· 1837 1837 udelay(100); 1838 1838 } 1839 1839 1840 - static void reset_mmc_2500(struct ast_private *ast) 1840 + static void reset_mmc_2500(struct ast_device *ast) 1841 1841 { 1842 1842 ast_moutdwm(ast, 0x1E78505C, 0x00000004); 1843 1843 ast_moutdwm(ast, 0x1E785044, 0x00000001); ··· 1848 1848 ast_moutdwm(ast, 0x1E6E0000, 0xFC600309); 1849 1849 } 1850 1850 1851 - static void ddr3_init_2500(struct ast_private *ast, const u32 *ddr_table) 1851 + static void ddr3_init_2500(struct ast_device *ast, const u32 *ddr_table) 1852 1852 { 1853 1853 1854 1854 ast_moutdwm(ast, 0x1E6E0004, 0x00000303); ··· 1892 1892 ast_moutdwm(ast, 0x1E6E0038, 0xFFFFFF00); 1893 1893 } 1894 1894 1895 - static void ddr4_init_2500(struct ast_private *ast, const u32 *ddr_table) 1895 + static void ddr4_init_2500(struct ast_device *ast, const u32 *ddr_table) 1896 1896 { 1897 1897 u32 data, data2, pass, retrycnt; 1898 1898 u32 ddr_vref, phy_vref; ··· 2002 2002 ast_moutdwm(ast, 0x1E6E0038, 0xFFFFFF00); 2003 2003 } 2004 2004 2005 - static bool ast_dram_init_2500(struct ast_private *ast) 2005 + static bool ast_dram_init_2500(struct ast_device *ast) 2006 2006 { 2007 2007 u32 data; 2008 2008 u32 max_tries = 5; ··· 2030 2030 return true; 2031 2031 } 2032 2032 2033 - void ast_patch_ahb_2500(struct ast_private *ast) 2033 + void ast_patch_ahb_2500(struct ast_device *ast) 2034 2034 { 2035 2035 u32 data; 2036 2036 ··· 2066 2066 2067 2067 void ast_post_chip_2500(struct drm_device *dev) 2068 2068 { 2069 - struct ast_private *ast = to_ast_private(dev); 2069 + struct ast_device *ast = to_ast_device(dev); 2070 2070 u32 temp; 2071 2071 u8 reg; 
+1 -1
drivers/gpu/drm/bridge/Kconfig
··· 326 326 input that produces a DMD output in RGB565, RGB666, RGB888 327 327 formats. 328 328 329 - It supports upto 720p resolution with 60 and 120 Hz refresh 329 + It supports up to 720p resolution with 60 and 120 Hz refresh 330 330 rates. 331 331 332 332 config DRM_TI_TFP410
+2
drivers/gpu/drm/bridge/panel.c
··· 81 81 return ret; 82 82 } 83 83 84 + drm_panel_bridge_set_orientation(connector, bridge); 85 + 84 86 drm_connector_attach_encoder(&panel_bridge->connector, 85 87 bridge->encoder); 86 88
+1
drivers/gpu/drm/bridge/tc358762.c
··· 229 229 ctx->bridge.funcs = &tc358762_bridge_funcs; 230 230 ctx->bridge.type = DRM_MODE_CONNECTOR_DPI; 231 231 ctx->bridge.of_node = dev->of_node; 232 + ctx->bridge.pre_enable_prev_first = true; 232 233 233 234 drm_bridge_add(&ctx->bridge); 234 235
+16 -4
drivers/gpu/drm/drm_atomic_helper.c
··· 2702 2702 funcs->atomic_disable(plane, old_state); 2703 2703 } else if (new_plane_state->crtc || disabling) { 2704 2704 funcs->atomic_update(plane, old_state); 2705 + 2706 + if (!disabling && funcs->atomic_enable) { 2707 + if (drm_atomic_plane_enabling(old_plane_state, new_plane_state)) 2708 + funcs->atomic_enable(plane, old_state); 2709 + } 2705 2710 } 2706 2711 } 2707 2712 ··· 2767 2762 struct drm_plane_state *new_plane_state = 2768 2763 drm_atomic_get_new_plane_state(old_state, plane); 2769 2764 const struct drm_plane_helper_funcs *plane_funcs; 2765 + bool disabling; 2770 2766 2771 2767 plane_funcs = plane->helper_private; 2772 2768 ··· 2777 2771 WARN_ON(new_plane_state->crtc && 2778 2772 new_plane_state->crtc != crtc); 2779 2773 2780 - if (drm_atomic_plane_disabling(old_plane_state, new_plane_state) && 2781 - plane_funcs->atomic_disable) 2774 + disabling = drm_atomic_plane_disabling(old_plane_state, new_plane_state); 2775 + 2776 + if (disabling && plane_funcs->atomic_disable) { 2782 2777 plane_funcs->atomic_disable(plane, old_state); 2783 - else if (new_plane_state->crtc || 2784 - drm_atomic_plane_disabling(old_plane_state, new_plane_state)) 2778 + } else if (new_plane_state->crtc || disabling) { 2785 2779 plane_funcs->atomic_update(plane, old_state); 2780 + 2781 + if (!disabling && plane_funcs->atomic_enable) { 2782 + if (drm_atomic_plane_enabling(old_plane_state, new_plane_state)) 2783 + plane_funcs->atomic_enable(plane, old_state); 2784 + } 2785 + } 2786 2786 } 2787 2787 2788 2788 if (crtc_funcs && crtc_funcs->atomic_flush)
+20 -8
drivers/gpu/drm/drm_connector.c
··· 33 33 #include <drm/drm_sysfs.h> 34 34 #include <drm/drm_utils.h> 35 35 36 - #include <linux/fb.h> 36 + #include <linux/property.h> 37 37 #include <linux/uaccess.h> 38 + 39 + #include <video/cmdline.h> 38 40 39 41 #include "drm_crtc_internal.h" 40 42 #include "drm_internal.h" ··· 156 154 static void drm_connector_get_cmdline_mode(struct drm_connector *connector) 157 155 { 158 156 struct drm_cmdline_mode *mode = &connector->cmdline_mode; 159 - char *option = NULL; 157 + const char *option; 160 158 161 - if (fb_get_options(connector->name, &option)) 159 + option = video_get_options(connector->name); 160 + if (!option) 162 161 return; 163 162 164 163 if (!drm_mode_parse_command_line_for_connector(option, ··· 1449 1446 * a firmware handled hotkey. Therefor userspace must not include the 1450 1447 * privacy-screen sw-state in an atomic commit unless it wants to change 1451 1448 * its value. 1449 + * 1450 + * left margin, right margin, top margin, bottom margin: 1451 + * Add margins to the connector's viewport. This is typically used to 1452 + * mitigate underscan on TVs. 1453 + * 1454 + * The value is the size in pixels of the black border which will be 1455 + * added. The attached CRTC's content will be scaled to fill the whole 1456 + * area inside the margin. 1457 + * 1458 + * The margins configuration might be sent to the sink, e.g. via HDMI AVI 1459 + * InfoFrames. 1460 + * 1461 + * Drivers can set up these properties by calling 1462 + * drm_mode_create_tv_margin_properties(). 
1452 1463 */ 1453 1464 1454 1465 int drm_connector_create_standard_properties(struct drm_device *dev) ··· 1607 1590 1608 1591 /* 1609 1592 * TODO: Document the properties: 1610 - * - left margin 1611 - * - right margin 1612 - * - top margin 1613 - * - bottom margin 1614 1593 * - brightness 1615 1594 * - contrast 1616 1595 * - flicker reduction ··· 1615 1602 * - overscan 1616 1603 * - saturation 1617 1604 * - select subconnector 1618 - * - subconnector 1619 1605 */ 1620 1606 /** 1621 1607 * DOC: Analog TV Connector Properties
+53 -9
drivers/gpu/drm/drm_displayid.c
··· 7 7 #include <drm/drm_edid.h> 8 8 #include <drm/drm_print.h> 9 9 10 - static int validate_displayid(const u8 *displayid, int length, int idx) 10 + static const struct displayid_header * 11 + displayid_get_header(const u8 *displayid, int length, int index) 12 + { 13 + const struct displayid_header *base; 14 + 15 + if (sizeof(*base) > length - index) 16 + return ERR_PTR(-EINVAL); 17 + 18 + base = (const struct displayid_header *)&displayid[index]; 19 + 20 + return base; 21 + } 22 + 23 + static const struct displayid_header * 24 + validate_displayid(const u8 *displayid, int length, int idx) 11 25 { 12 26 int i, dispid_length; 13 27 u8 csum = 0; 14 28 const struct displayid_header *base; 15 29 16 - base = (const struct displayid_header *)&displayid[idx]; 30 + base = displayid_get_header(displayid, length, idx); 31 + if (IS_ERR(base)) 32 + return base; 17 33 18 34 DRM_DEBUG_KMS("base revision 0x%x, length %d, %d %d\n", 19 35 base->rev, base->bytes, base->prod_id, base->ext_count); ··· 37 21 /* +1 for DispID checksum */ 38 22 dispid_length = sizeof(*base) + base->bytes + 1; 39 23 if (dispid_length > length - idx) 40 - return -EINVAL; 24 + return ERR_PTR(-EINVAL); 41 25 42 26 for (i = 0; i < dispid_length; i++) 43 27 csum += displayid[idx + i]; 44 28 if (csum) { 45 29 DRM_NOTE("DisplayID checksum invalid, remainder is %d\n", csum); 46 - return -EINVAL; 30 + return ERR_PTR(-EINVAL); 47 31 } 48 32 49 - return 0; 33 + return base; 50 34 } 51 35 52 36 static const u8 *drm_find_displayid_extension(const struct drm_edid *drm_edid, ··· 55 39 { 56 40 const u8 *displayid = drm_find_edid_extension(drm_edid, DISPLAYID_EXT, ext_index); 57 41 const struct displayid_header *base; 58 - int ret; 59 42 60 43 if (!displayid) 61 44 return NULL; ··· 63 48 *length = EDID_LENGTH - 1; 64 49 *idx = 1; 65 50 66 - ret = validate_displayid(displayid, *length, *idx); 67 - if (ret) 51 + base = validate_displayid(displayid, *length, *idx); 52 + if (IS_ERR(base)) 68 53 return NULL; 69 54 70 - base 
= (const struct displayid_header *)&displayid[*idx]; 71 55 *length = *idx + sizeof(*base) + base->bytes; 72 56 73 57 return displayid; ··· 123 109 } 124 110 125 111 for (;;) { 112 + /* The first section we encounter is the base section */ 113 + bool base_section = !iter->section; 114 + 126 115 iter->section = drm_find_displayid_extension(iter->drm_edid, 127 116 &iter->length, 128 117 &iter->idx, ··· 133 116 if (!iter->section) { 134 117 iter->drm_edid = NULL; 135 118 return NULL; 119 + } 120 + 121 + /* Save the structure version and primary use case. */ 122 + if (base_section) { 123 + const struct displayid_header *base; 124 + 125 + base = displayid_get_header(iter->section, iter->length, 126 + iter->idx); 127 + if (!IS_ERR(base)) { 128 + iter->version = base->rev; 129 + iter->primary_use = base->prod_id; 130 + } 136 131 } 137 132 138 133 iter->idx += sizeof(struct displayid_header); ··· 158 129 void displayid_iter_end(struct displayid_iter *iter) 159 130 { 160 131 memset(iter, 0, sizeof(*iter)); 132 + } 133 + 134 + /* DisplayID Structure Version/Revision from the Base Section. */ 135 + u8 displayid_version(const struct displayid_iter *iter) 136 + { 137 + return iter->version; 138 + } 139 + 140 + /* 141 + * DisplayID Primary Use Case (2.0+) or Product Type Identifier (1.0-1.3) from 142 + * the Base Section. 143 + */ 144 + u8 displayid_primary_use(const struct displayid_iter *iter) 145 + { 146 + return iter->primary_use; 161 147 }
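The reworked validate_displayid() above still enforces the DisplayID checksum rule: every byte of the section, including the trailing checksum byte, must sum to zero modulo 256. A minimal userspace sketch of that check (illustrative names, not the kernel API):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical userspace mirror of the checksum rule applied by
 * validate_displayid(): all dispid_length bytes, including the
 * trailing checksum byte, must sum to 0 modulo 256. */
static int displayid_csum_ok(const uint8_t *section, size_t dispid_length)
{
	uint8_t csum = 0;
	size_t i;

	for (i = 0; i < dispid_length; i++)
		csum += section[i];

	return csum == 0;
}
```

For a valid section the final byte is chosen so the running sum wraps to zero, which is why a single corrupted byte flips the result.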
+1 -4
drivers/gpu/drm/drm_dumb_buffers.c
··· 139 139 if (!dev->driver->dumb_create) 140 140 return -ENOSYS; 141 141 142 - if (dev->driver->dumb_destroy) 143 - return dev->driver->dumb_destroy(file_priv, dev, handle); 144 - else 145 - return drm_gem_dumb_destroy(file_priv, dev, handle); 142 + return drm_gem_handle_delete(file_priv, handle); 146 143 } 147 144 148 145 int drm_mode_destroy_dumb_ioctl(struct drm_device *dev,
+56 -9
drivers/gpu/drm/drm_edid.c
··· 3424 3424 connector->base.id, connector->name); 3425 3425 return NULL; 3426 3426 } 3427 - if (!(pt->misc & DRM_EDID_PT_SEPARATE_SYNC)) { 3428 - drm_dbg_kms(dev, "[CONNECTOR:%d:%s] Composite sync not supported\n", 3429 - connector->base.id, connector->name); 3430 - } 3431 3427 3432 3428 /* it is incorrect if hsync/vsync width is zero */ 3433 3429 if (!hsync_pulse_width || !vsync_pulse_width) { ··· 3470 3474 if (info->quirks & EDID_QUIRK_DETAILED_SYNC_PP) { 3471 3475 mode->flags |= DRM_MODE_FLAG_PHSYNC | DRM_MODE_FLAG_PVSYNC; 3472 3476 } else { 3473 - mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ? 3474 - DRM_MODE_FLAG_PHSYNC : DRM_MODE_FLAG_NHSYNC; 3475 - mode->flags |= (pt->misc & DRM_EDID_PT_VSYNC_POSITIVE) ? 3476 - DRM_MODE_FLAG_PVSYNC : DRM_MODE_FLAG_NVSYNC; 3477 + switch (pt->misc & DRM_EDID_PT_SYNC_MASK) { 3478 + case DRM_EDID_PT_ANALOG_CSYNC: 3479 + case DRM_EDID_PT_BIPOLAR_ANALOG_CSYNC: 3480 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] Analog composite sync!\n", 3481 + connector->base.id, connector->name); 3482 + mode->flags |= DRM_MODE_FLAG_CSYNC | DRM_MODE_FLAG_NCSYNC; 3483 + break; 3484 + case DRM_EDID_PT_DIGITAL_CSYNC: 3485 + drm_dbg_kms(dev, "[CONNECTOR:%d:%s] Digital composite sync!\n", 3486 + connector->base.id, connector->name); 3487 + mode->flags |= DRM_MODE_FLAG_CSYNC; 3488 + mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ? 3489 + DRM_MODE_FLAG_PCSYNC : DRM_MODE_FLAG_NCSYNC; 3490 + break; 3491 + case DRM_EDID_PT_DIGITAL_SEPARATE_SYNC: 3492 + mode->flags |= (pt->misc & DRM_EDID_PT_HSYNC_POSITIVE) ? 3493 + DRM_MODE_FLAG_PHSYNC : DRM_MODE_FLAG_NHSYNC; 3494 + mode->flags |= (pt->misc & DRM_EDID_PT_VSYNC_POSITIVE) ? 
3495 + DRM_MODE_FLAG_PVSYNC : DRM_MODE_FLAG_NVSYNC; 3496 + break; 3497 + } 3477 3498 } 3478 3499 3479 3500 set_size: ··· 6446 6433 info->quirks = 0; 6447 6434 } 6448 6435 6436 + static void update_displayid_info(struct drm_connector *connector, 6437 + const struct drm_edid *drm_edid) 6438 + { 6439 + struct drm_display_info *info = &connector->display_info; 6440 + const struct displayid_block *block; 6441 + struct displayid_iter iter; 6442 + 6443 + displayid_iter_edid_begin(drm_edid, &iter); 6444 + displayid_iter_for_each(block, &iter) { 6445 + if (displayid_version(&iter) == DISPLAY_ID_STRUCTURE_VER_20 && 6446 + (displayid_primary_use(&iter) == PRIMARY_USE_HEAD_MOUNTED_VR || 6447 + displayid_primary_use(&iter) == PRIMARY_USE_HEAD_MOUNTED_AR)) 6448 + info->non_desktop = true; 6449 + 6450 + /* 6451 + * We're only interested in the base section here, no need to 6452 + * iterate further. 6453 + */ 6454 + break; 6455 + } 6456 + displayid_iter_end(&iter); 6457 + } 6458 + 6449 6459 static void update_display_info(struct drm_connector *connector, 6450 6460 const struct drm_edid *drm_edid) 6451 6461 { ··· 6498 6462 6499 6463 info->color_formats |= DRM_COLOR_FORMAT_RGB444; 6500 6464 drm_parse_cea_ext(connector, drm_edid); 6465 + 6466 + update_displayid_info(connector, drm_edid); 6501 6467 6502 6468 /* 6503 6469 * Digital sink with "DFP 1.x compliant TMDS" according to EDID 1.3? 
··· 7280 7242 } 7281 7243 } 7282 7244 7245 + static bool displayid_is_tiled_block(const struct displayid_iter *iter, 7246 + const struct displayid_block *block) 7247 + { 7248 + return (displayid_version(iter) == DISPLAY_ID_STRUCTURE_VER_12 && 7249 + block->tag == DATA_BLOCK_TILED_DISPLAY) || 7250 + (displayid_version(iter) == DISPLAY_ID_STRUCTURE_VER_20 && 7251 + block->tag == DATA_BLOCK_2_TILED_DISPLAY_TOPOLOGY); 7252 + } 7253 + 7283 7254 static void _drm_update_tile_info(struct drm_connector *connector, 7284 7255 const struct drm_edid *drm_edid) 7285 7256 { ··· 7299 7252 7300 7253 displayid_iter_edid_begin(drm_edid, &iter); 7301 7254 displayid_iter_for_each(block, &iter) { 7302 - if (block->tag == DATA_BLOCK_TILED_DISPLAY) 7255 + if (displayid_is_tiled_block(&iter, block)) 7303 7256 drm_parse_tiled_block(connector, block); 7304 7257 } 7305 7258 displayid_iter_end(&iter);
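The drm_edid.c hunk above replaces the old two-flag decode with a full decode of the sync-type field: bits 4:3 of the detailed-timing "misc" byte select analog composite, bipolar analog composite, digital composite, or digital separate sync. A standalone sketch of that field extraction (the enum names here are illustrative stand-ins, not the kernel's DRM_EDID_PT_* macros):

```c
#include <stdint.h>

/* Standalone sketch of the sync-type decode added above: bits 4:3 of
 * the EDID detailed-timing "misc" flags byte select the sync scheme.
 * Names are illustrative, not the kernel's. */
enum edid_sync_type {
	EDID_SYNC_ANALOG_CSYNC         = 0x0,
	EDID_SYNC_BIPOLAR_ANALOG_CSYNC = 0x1,
	EDID_SYNC_DIGITAL_CSYNC        = 0x2,
	EDID_SYNC_DIGITAL_SEPARATE     = 0x3,
};

static enum edid_sync_type edid_sync_type(uint8_t misc)
{
	/* Shift the two-bit field down and mask it off. */
	return (enum edid_sync_type)((misc >> 3) & 0x3);
}
```

Only the digital-separate case carries independent hsync/vsync polarity bits; the composite cases map onto the CSYNC mode flags instead, as the switch in the hunk shows.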
+18 -7
drivers/gpu/drm/drm_gem.c
··· 336 336 } 337 337 EXPORT_SYMBOL_GPL(drm_gem_dumb_map_offset); 338 338 339 - int drm_gem_dumb_destroy(struct drm_file *file, 340 - struct drm_device *dev, 341 - u32 handle) 342 - { 343 - return drm_gem_handle_delete(file, handle); 344 - } 345 - 346 339 /** 347 340 * drm_gem_handle_create_tail - internal functions to create a handle 348 341 * @file_priv: drm file-private structure to register the handle for ··· 1459 1466 return freed; 1460 1467 } 1461 1468 EXPORT_SYMBOL(drm_gem_lru_scan); 1469 + 1470 + /** 1471 + * drm_gem_evict - helper to evict backing pages for a GEM object 1472 + * @obj: obj in question 1473 + */ 1474 + int drm_gem_evict(struct drm_gem_object *obj) 1475 + { 1476 + dma_resv_assert_held(obj->resv); 1477 + 1478 + if (!dma_resv_test_signaled(obj->resv, DMA_RESV_USAGE_READ)) 1479 + return -EBUSY; 1480 + 1481 + if (obj->funcs->evict) 1482 + return obj->funcs->evict(obj); 1483 + 1484 + return 0; 1485 + } 1486 + EXPORT_SYMBOL(drm_gem_evict);
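The drm_gem_evict() helper added above follows a simple contract: with the reservation lock held, refuse eviction while fences are still unsignaled, otherwise defer to the object's optional evict callback. A userspace sketch of that control flow, with hypothetical stand-in types replacing dma_resv and drm_gem_object:

```c
#include <errno.h>
#include <stddef.h>

/* Illustrative stand-ins for drm_gem_object and its funcs table. */
struct fake_gem_object;

struct fake_gem_funcs {
	int (*evict)(struct fake_gem_object *obj);
};

struct fake_gem_object {
	int busy; /* stands in for unsignaled reservation fences */
	const struct fake_gem_funcs *funcs;
};

/* Mirror of the drm_gem_evict() flow: busy objects return -EBUSY,
 * idle ones delegate to the optional per-object evict callback. */
static int fake_gem_evict(struct fake_gem_object *obj)
{
	if (obj->busy)
		return -EBUSY;

	if (obj->funcs && obj->funcs->evict)
		return obj->funcs->evict(obj);

	return 0;
}
```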
+36 -29
drivers/gpu/drm/drm_gem_shmem_helper.c
··· 141 141 { 142 142 struct drm_gem_object *obj = &shmem->base; 143 143 144 - WARN_ON(shmem->vmap_use_count); 144 + drm_WARN_ON(obj->dev, shmem->vmap_use_count); 145 145 146 146 if (obj->import_attach) { 147 147 drm_prime_gem_destroy(obj, shmem->sgt); ··· 156 156 drm_gem_shmem_put_pages(shmem); 157 157 } 158 158 159 - WARN_ON(shmem->pages_use_count); 159 + drm_WARN_ON(obj->dev, shmem->pages_use_count); 160 160 161 161 drm_gem_object_release(obj); 162 162 mutex_destroy(&shmem->pages_lock); ··· 175 175 176 176 pages = drm_gem_get_pages(obj); 177 177 if (IS_ERR(pages)) { 178 - DRM_DEBUG_KMS("Failed to get pages (%ld)\n", PTR_ERR(pages)); 178 + drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n", 179 + PTR_ERR(pages)); 179 180 shmem->pages_use_count = 0; 180 181 return PTR_ERR(pages); 181 182 } ··· 208 207 */ 209 208 int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem) 210 209 { 210 + struct drm_gem_object *obj = &shmem->base; 211 211 int ret; 212 212 213 - WARN_ON(shmem->base.import_attach); 213 + drm_WARN_ON(obj->dev, obj->import_attach); 214 214 215 215 ret = mutex_lock_interruptible(&shmem->pages_lock); 216 216 if (ret) ··· 227 225 { 228 226 struct drm_gem_object *obj = &shmem->base; 229 227 230 - if (WARN_ON_ONCE(!shmem->pages_use_count)) 228 + if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) 231 229 return; 232 230 233 231 if (--shmem->pages_use_count > 0) ··· 270 268 */ 271 269 int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem) 272 270 { 273 - WARN_ON(shmem->base.import_attach); 271 + struct drm_gem_object *obj = &shmem->base; 272 + 273 + drm_WARN_ON(obj->dev, obj->import_attach); 274 274 275 275 return drm_gem_shmem_get_pages(shmem); 276 276 } ··· 287 283 */ 288 284 void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem) 289 285 { 290 - WARN_ON(shmem->base.import_attach); 286 + struct drm_gem_object *obj = &shmem->base; 287 + 288 + drm_WARN_ON(obj->dev, obj->import_attach); 291 289 292 290 drm_gem_shmem_put_pages(shmem); 293 291 
} ··· 301 295 struct drm_gem_object *obj = &shmem->base; 302 296 int ret = 0; 303 297 304 - if (shmem->vmap_use_count++ > 0) { 305 - iosys_map_set_vaddr(map, shmem->vaddr); 306 - return 0; 307 - } 308 - 309 298 if (obj->import_attach) { 310 299 ret = dma_buf_vmap(obj->import_attach->dmabuf, map); 311 300 if (!ret) { 312 - if (WARN_ON(map->is_iomem)) { 301 + if (drm_WARN_ON(obj->dev, map->is_iomem)) { 313 302 dma_buf_vunmap(obj->import_attach->dmabuf, map); 314 - ret = -EIO; 315 - goto err_put_pages; 303 + return -EIO; 316 304 } 317 - shmem->vaddr = map->vaddr; 318 305 } 319 306 } else { 320 307 pgprot_t prot = PAGE_KERNEL; 308 + 309 + if (shmem->vmap_use_count++ > 0) { 310 + iosys_map_set_vaddr(map, shmem->vaddr); 311 + return 0; 312 + } 321 313 322 314 ret = drm_gem_shmem_get_pages(shmem); 323 315 if (ret) ··· 332 328 } 333 329 334 330 if (ret) { 335 - DRM_DEBUG_KMS("Failed to vmap pages, error %d\n", ret); 331 + drm_dbg_kms(obj->dev, "Failed to vmap pages, error %d\n", ret); 336 332 goto err_put_pages; 337 333 } 338 334 ··· 382 378 { 383 379 struct drm_gem_object *obj = &shmem->base; 384 380 385 - if (WARN_ON_ONCE(!shmem->vmap_use_count)) 386 - return; 387 - 388 - if (--shmem->vmap_use_count > 0) 389 - return; 390 - 391 381 if (obj->import_attach) { 392 382 dma_buf_vunmap(obj->import_attach->dmabuf, map); 393 383 } else { 384 + if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count)) 385 + return; 386 + 387 + if (--shmem->vmap_use_count > 0) 388 + return; 389 + 394 390 vunmap(shmem->vaddr); 395 391 drm_gem_shmem_put_pages(shmem); 396 392 } ··· 465 461 struct drm_gem_object *obj = &shmem->base; 466 462 struct drm_device *dev = obj->dev; 467 463 468 - WARN_ON(!drm_gem_shmem_is_purgeable(shmem)); 464 + drm_WARN_ON(obj->dev, !drm_gem_shmem_is_purgeable(shmem)); 469 465 470 466 dma_unmap_sgtable(dev->dev, shmem->sgt, DMA_BIDIRECTIONAL, 0); 471 467 sg_free_table(shmem->sgt); ··· 554 550 mutex_lock(&shmem->pages_lock); 555 551 556 552 if (page_offset >= num_pages || 557 
- WARN_ON_ONCE(!shmem->pages) || 553 + drm_WARN_ON_ONCE(obj->dev, !shmem->pages) || 558 554 shmem->madv < 0) { 559 555 ret = VM_FAULT_SIGBUS; 560 556 } else { ··· 573 569 struct drm_gem_object *obj = vma->vm_private_data; 574 570 struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj); 575 571 576 - WARN_ON(shmem->base.import_attach); 572 + drm_WARN_ON(obj->dev, obj->import_attach); 577 573 578 574 mutex_lock(&shmem->pages_lock); 579 575 ··· 582 578 * mmap'd, vm_open() just grabs an additional reference for the new 583 579 * mm the vma is getting copied into (ie. on fork()). 584 580 */ 585 - if (!WARN_ON_ONCE(!shmem->pages_use_count)) 581 + if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count)) 586 582 shmem->pages_use_count++; 587 583 588 584 mutex_unlock(&shmem->pages_lock); ··· 652 648 void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem, 653 649 struct drm_printer *p, unsigned int indent) 654 650 { 651 + if (shmem->base.import_attach) 652 + return; 653 + 655 654 drm_printf_indent(p, indent, "pages_use_count=%u\n", shmem->pages_use_count); 656 655 drm_printf_indent(p, indent, "vmap_use_count=%u\n", shmem->vmap_use_count); 657 656 drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr); ··· 679 672 { 680 673 struct drm_gem_object *obj = &shmem->base; 681 674 682 - WARN_ON(shmem->base.import_attach); 675 + drm_WARN_ON(obj->dev, obj->import_attach); 683 676 684 677 return drm_prime_pages_to_sg(obj->dev, shmem->pages, obj->size >> PAGE_SHIFT); 685 678 } ··· 694 687 if (shmem->sgt) 695 688 return shmem->sgt; 696 689 697 - WARN_ON(obj->import_attach); 690 + drm_WARN_ON(obj->dev, obj->import_attach); 698 691 699 692 ret = drm_gem_shmem_get_pages_locked(shmem); 700 693 if (ret)
+11
drivers/gpu/drm/drm_gem_vram_helper.c
··· 916 916 { 917 917 struct drm_gem_vram_object *gbo; 918 918 919 + if (!bo->resource) { 920 + if (new_mem->mem_type != TTM_PL_SYSTEM) { 921 + hop->mem_type = TTM_PL_SYSTEM; 922 + hop->flags = TTM_PL_FLAG_TEMPORARY; 923 + return -EMULTIHOP; 924 + } 925 + 926 + ttm_bo_move_null(bo, new_mem); 927 + return 0; 928 + } 929 + 919 930 gbo = drm_gem_vram_of_bo(bo); 920 931 921 932 return drm_gem_vram_bo_driver_move(gbo, evict, ctx, new_mem);
-3
drivers/gpu/drm/drm_internal.h
··· 178 178 int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map); 179 179 void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map); 180 180 181 - int drm_gem_dumb_destroy(struct drm_file *file, struct drm_device *dev, 182 - u32 handle); 183 - 184 181 /* drm_debugfs.c drm_debugfs_crc.c */ 185 182 #if defined(CONFIG_DEBUG_FS) 186 183 int drm_debugfs_init(struct drm_minor *minor, int minor_id,
+1 -2
drivers/gpu/drm/drm_modes.c
··· 2339 2339 * @mode: preallocated drm_cmdline_mode structure to fill out 2340 2340 * 2341 2341 * This parses @mode_option command line modeline for modes and options to 2342 - * configure the connector. If @mode_option is NULL the default command line 2343 - * modeline in fb_mode_option will be parsed instead. 2342 + * configure the connector. 2344 2343 * 2345 2344 * This uses the same parameters as the fb modedb.c, except for an extra 2346 2345 * force-enable, force-enable-digital and force-disable bit at the end::
+51
drivers/gpu/drm/drm_of.c
··· 10 10 #include <drm/drm_crtc.h> 11 11 #include <drm/drm_device.h> 12 12 #include <drm/drm_encoder.h> 13 + #include <drm/drm_mipi_dsi.h> 13 14 #include <drm/drm_of.h> 14 15 #include <drm/drm_panel.h> 15 16 ··· 494 493 return ret; 495 494 } 496 495 EXPORT_SYMBOL_GPL(drm_of_get_data_lanes_count_ep); 496 + 497 + #if IS_ENABLED(CONFIG_DRM_MIPI_DSI) 498 + 499 + /** 500 + * drm_of_get_dsi_bus - find the DSI bus for a given device 501 + * @dev: parent device of display (SPI, I2C) 502 + * 503 + * Gets parent DSI bus for a DSI device controlled through a bus other 504 + * than MIPI-DCS (SPI, I2C, etc.) using the Device Tree. 505 + * 506 + * Returns pointer to mipi_dsi_host if successful, -EINVAL if the 507 + * request is unsupported, -EPROBE_DEFER if the DSI host is found but 508 + * not available, or -ENODEV otherwise. 509 + */ 510 + struct mipi_dsi_host *drm_of_get_dsi_bus(struct device *dev) 511 + { 512 + struct mipi_dsi_host *dsi_host; 513 + struct device_node *endpoint, *dsi_host_node; 514 + 515 + /* 516 + * Get first endpoint child from device. 517 + */ 518 + endpoint = of_graph_get_next_endpoint(dev->of_node, NULL); 519 + if (!endpoint) 520 + return ERR_PTR(-ENODEV); 521 + 522 + /* 523 + * Follow the first endpoint to get the DSI host node and then 524 + * release the endpoint since we no longer need it. 525 + */ 526 + dsi_host_node = of_graph_get_remote_port_parent(endpoint); 527 + of_node_put(endpoint); 528 + if (!dsi_host_node) 529 + return ERR_PTR(-ENODEV); 530 + 531 + /* 532 + * Get the DSI host from the DSI host node. If we get an error 533 + * or the return is null assume we're not ready to probe just 534 + * yet. Release the DSI host node since we're done with it. 
535 + */ 536 + dsi_host = of_find_mipi_dsi_host_by_node(dsi_host_node); 537 + of_node_put(dsi_host_node); 538 + if (IS_ERR_OR_NULL(dsi_host)) 539 + return ERR_PTR(-EPROBE_DEFER); 540 + 541 + return dsi_host; 542 + } 543 + EXPORT_SYMBOL_GPL(drm_of_get_dsi_bus); 544 + 545 + #endif /* CONFIG_DRM_MIPI_DSI */
+3 -2
drivers/gpu/drm/drm_probe_helper.c
··· 590 590 */ 591 591 dev->mode_config.delayed_event = true; 592 592 if (dev->mode_config.poll_enabled) 593 - schedule_delayed_work(&dev->mode_config.output_poll_work, 594 - 0); 593 + mod_delayed_work(system_wq, 594 + &dev->mode_config.output_poll_work, 595 + 0); 595 596 } 596 597 597 598 /* Re-enable polling in case the global poll config changed. */
+457
drivers/gpu/drm/drm_suballoc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 OR MIT 2 + /* 3 + * Copyright 2011 Red Hat Inc. 4 + * Copyright 2023 Intel Corporation. 5 + * All Rights Reserved. 6 + * 7 + * Permission is hereby granted, free of charge, to any person obtaining a 8 + * copy of this software and associated documentation files (the 9 + * "Software"), to deal in the Software without restriction, including 10 + * without limitation the rights to use, copy, modify, merge, publish, 11 + * distribute, sub license, and/or sell copies of the Software, and to 12 + * permit persons to whom the Software is furnished to do so, subject to 13 + * the following conditions: 14 + * 15 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL 18 + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, 19 + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 20 + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE 21 + * USE OR OTHER DEALINGS IN THE SOFTWARE. 22 + * 23 + * The above copyright notice and this permission notice (including the 24 + * next paragraph) shall be included in all copies or substantial portions 25 + * of the Software. 26 + * 27 + */ 28 + /* Algorithm: 29 + * 30 + * We store the last allocated bo in "hole", we always try to allocate 31 + * after the last allocated bo. Principle is that in a linear GPU ring 32 + * progression what is after last is the oldest bo we allocated and thus 33 + * the first one that should no longer be in use by the GPU. 34 + * 35 + * If it's not the case we skip over the bo after last to the closest 36 + * done bo if such a bo exists. If none exists and we are not asked to 37 + * block we report failure to allocate. 38 + * 39 + * If we are asked to block we wait on the oldest fences of all the 40 + * rings.
We just wait for any of those fences to complete. 41 + */ 42 + 43 + #include <drm/drm_suballoc.h> 44 + #include <drm/drm_print.h> 45 + #include <linux/slab.h> 46 + #include <linux/sched.h> 47 + #include <linux/wait.h> 48 + #include <linux/dma-fence.h> 49 + 50 + static void drm_suballoc_remove_locked(struct drm_suballoc *sa); 51 + static void drm_suballoc_try_free(struct drm_suballoc_manager *sa_manager); 52 + 53 + /** 54 + * drm_suballoc_manager_init() - Initialise the drm_suballoc_manager 55 + * @sa_manager: pointer to the sa_manager 56 + * @size: number of bytes we want to suballocate 57 + * @align: alignment for each suballocated chunk 58 + * 59 + * Prepares the suballocation manager for suballocations. 60 + */ 61 + void drm_suballoc_manager_init(struct drm_suballoc_manager *sa_manager, 62 + size_t size, size_t align) 63 + { 64 + unsigned int i; 65 + 66 + BUILD_BUG_ON(!is_power_of_2(DRM_SUBALLOC_MAX_QUEUES)); 67 + 68 + if (!align) 69 + align = 1; 70 + 71 + /* alignment must be a power of 2 */ 72 + if (WARN_ON_ONCE(align & (align - 1))) 73 + align = roundup_pow_of_two(align); 74 + 75 + init_waitqueue_head(&sa_manager->wq); 76 + sa_manager->size = size; 77 + sa_manager->align = align; 78 + sa_manager->hole = &sa_manager->olist; 79 + INIT_LIST_HEAD(&sa_manager->olist); 80 + for (i = 0; i < DRM_SUBALLOC_MAX_QUEUES; ++i) 81 + INIT_LIST_HEAD(&sa_manager->flist[i]); 82 + } 83 + EXPORT_SYMBOL(drm_suballoc_manager_init); 84 + 85 + /** 86 + * drm_suballoc_manager_fini() - Destroy the drm_suballoc_manager 87 + * @sa_manager: pointer to the sa_manager 88 + * 89 + * Cleans up the suballocation manager after use. All fences added 90 + * with drm_suballoc_free() must be signaled, or we cannot clean up 91 + * the entire manager.
92 + */ 93 + void drm_suballoc_manager_fini(struct drm_suballoc_manager *sa_manager) 94 + { 95 + struct drm_suballoc *sa, *tmp; 96 + 97 + if (!sa_manager->size) 98 + return; 99 + 100 + if (!list_empty(&sa_manager->olist)) { 101 + sa_manager->hole = &sa_manager->olist; 102 + drm_suballoc_try_free(sa_manager); 103 + if (!list_empty(&sa_manager->olist)) 104 + DRM_ERROR("sa_manager is not empty, clearing anyway\n"); 105 + } 106 + list_for_each_entry_safe(sa, tmp, &sa_manager->olist, olist) { 107 + drm_suballoc_remove_locked(sa); 108 + } 109 + 110 + sa_manager->size = 0; 111 + } 112 + EXPORT_SYMBOL(drm_suballoc_manager_fini); 113 + 114 + static void drm_suballoc_remove_locked(struct drm_suballoc *sa) 115 + { 116 + struct drm_suballoc_manager *sa_manager = sa->manager; 117 + 118 + if (sa_manager->hole == &sa->olist) 119 + sa_manager->hole = sa->olist.prev; 120 + 121 + list_del_init(&sa->olist); 122 + list_del_init(&sa->flist); 123 + dma_fence_put(sa->fence); 124 + kfree(sa); 125 + } 126 + 127 + static void drm_suballoc_try_free(struct drm_suballoc_manager *sa_manager) 128 + { 129 + struct drm_suballoc *sa, *tmp; 130 + 131 + if (sa_manager->hole->next == &sa_manager->olist) 132 + return; 133 + 134 + sa = list_entry(sa_manager->hole->next, struct drm_suballoc, olist); 135 + list_for_each_entry_safe_from(sa, tmp, &sa_manager->olist, olist) { 136 + if (!sa->fence || !dma_fence_is_signaled(sa->fence)) 137 + return; 138 + 139 + drm_suballoc_remove_locked(sa); 140 + } 141 + } 142 + 143 + static size_t drm_suballoc_hole_soffset(struct drm_suballoc_manager *sa_manager) 144 + { 145 + struct list_head *hole = sa_manager->hole; 146 + 147 + if (hole != &sa_manager->olist) 148 + return list_entry(hole, struct drm_suballoc, olist)->eoffset; 149 + 150 + return 0; 151 + } 152 + 153 + static size_t drm_suballoc_hole_eoffset(struct drm_suballoc_manager *sa_manager) 154 + { 155 + struct list_head *hole = sa_manager->hole; 156 + 157 + if (hole->next != &sa_manager->olist) 158 + return 
list_entry(hole->next, struct drm_suballoc, olist)->soffset; 159 + return sa_manager->size; 160 + } 161 + 162 + static bool drm_suballoc_try_alloc(struct drm_suballoc_manager *sa_manager, 163 + struct drm_suballoc *sa, 164 + size_t size, size_t align) 165 + { 166 + size_t soffset, eoffset, wasted; 167 + 168 + soffset = drm_suballoc_hole_soffset(sa_manager); 169 + eoffset = drm_suballoc_hole_eoffset(sa_manager); 170 + wasted = round_up(soffset, align) - soffset; 171 + 172 + if ((eoffset - soffset) >= (size + wasted)) { 173 + soffset += wasted; 174 + 175 + sa->manager = sa_manager; 176 + sa->soffset = soffset; 177 + sa->eoffset = soffset + size; 178 + list_add(&sa->olist, sa_manager->hole); 179 + INIT_LIST_HEAD(&sa->flist); 180 + sa_manager->hole = &sa->olist; 181 + return true; 182 + } 183 + return false; 184 + } 185 + 186 + static bool __drm_suballoc_event(struct drm_suballoc_manager *sa_manager, 187 + size_t size, size_t align) 188 + { 189 + size_t soffset, eoffset, wasted; 190 + unsigned int i; 191 + 192 + for (i = 0; i < DRM_SUBALLOC_MAX_QUEUES; ++i) 193 + if (!list_empty(&sa_manager->flist[i])) 194 + return true; 195 + 196 + soffset = drm_suballoc_hole_soffset(sa_manager); 197 + eoffset = drm_suballoc_hole_eoffset(sa_manager); 198 + wasted = round_up(soffset, align) - soffset; 199 + 200 + return ((eoffset - soffset) >= (size + wasted)); 201 + } 202 + 203 + /** 204 + * drm_suballoc_event() - Check if we can stop waiting 205 + * @sa_manager: pointer to the sa_manager 206 + * @size: number of bytes we want to allocate 207 + * @align: alignment we need to match 208 + * 209 + * Return: true if either there is a fence we can wait for or 210 + * enough free memory to satisfy the allocation directly. 211 + * false otherwise. 
212 + */ 213 + static bool drm_suballoc_event(struct drm_suballoc_manager *sa_manager, 214 + size_t size, size_t align) 215 + { 216 + bool ret; 217 + 218 + spin_lock(&sa_manager->wq.lock); 219 + ret = __drm_suballoc_event(sa_manager, size, align); 220 + spin_unlock(&sa_manager->wq.lock); 221 + return ret; 222 + } 223 + 224 + static bool drm_suballoc_next_hole(struct drm_suballoc_manager *sa_manager, 225 + struct dma_fence **fences, 226 + unsigned int *tries) 227 + { 228 + struct drm_suballoc *best_bo = NULL; 229 + unsigned int i, best_idx; 230 + size_t soffset, best, tmp; 231 + 232 + /* if hole points to the end of the buffer */ 233 + if (sa_manager->hole->next == &sa_manager->olist) { 234 + /* try again with its beginning */ 235 + sa_manager->hole = &sa_manager->olist; 236 + return true; 237 + } 238 + 239 + soffset = drm_suballoc_hole_soffset(sa_manager); 240 + /* to handle wrap around we add sa_manager->size */ 241 + best = sa_manager->size * 2; 242 + /* go over all fence list and try to find the closest sa 243 + * of the current last 244 + */ 245 + for (i = 0; i < DRM_SUBALLOC_MAX_QUEUES; ++i) { 246 + struct drm_suballoc *sa; 247 + 248 + fences[i] = NULL; 249 + 250 + if (list_empty(&sa_manager->flist[i])) 251 + continue; 252 + 253 + sa = list_first_entry(&sa_manager->flist[i], 254 + struct drm_suballoc, flist); 255 + 256 + if (!dma_fence_is_signaled(sa->fence)) { 257 + fences[i] = sa->fence; 258 + continue; 259 + } 260 + 261 + /* limit the number of tries each freelist gets */ 262 + if (tries[i] > 2) 263 + continue; 264 + 265 + tmp = sa->soffset; 266 + if (tmp < soffset) { 267 + /* wrap around, pretend it's after */ 268 + tmp += sa_manager->size; 269 + } 270 + tmp -= soffset; 271 + if (tmp < best) { 272 + /* this sa bo is the closest one */ 273 + best = tmp; 274 + best_idx = i; 275 + best_bo = sa; 276 + } 277 + } 278 + 279 + if (best_bo) { 280 + ++tries[best_idx]; 281 + sa_manager->hole = best_bo->olist.prev; 282 + 283 + /* 284 + * We know that this one is 
signaled, 285 + * so it's safe to remove it. 286 + */ 287 + drm_suballoc_remove_locked(best_bo); 288 + return true; 289 + } 290 + return false; 291 + } 292 + 293 + /** 294 + * drm_suballoc_new() - Make a suballocation. 295 + * @sa_manager: pointer to the sa_manager 296 + * @size: number of bytes we want to suballocate. 297 + * @gfp: gfp flags used for memory allocation. Typically GFP_KERNEL but 298 + * the argument is provided for suballocations from reclaim context or 299 + * where the caller wants to avoid pipelining rather than wait for 300 + * reclaim. 301 + * @intr: Whether to perform waits interruptibly. This should typically 302 + * always be true, unless the caller needs to propagate a 303 + * non-interruptible context from above layers. 304 + * @align: Alignment. Must not exceed the default manager alignment. 305 + * If @align is zero, then the manager alignment is used. 306 + * 307 + * Try to make a suballocation of size @size, which will be rounded 308 + * up to the alignment specified in drm_suballoc_manager_init(). 309 + * 310 + * Return: a new suballocated bo, or an ERR_PTR.
311 + */ 312 + struct drm_suballoc * 313 + drm_suballoc_new(struct drm_suballoc_manager *sa_manager, size_t size, 314 + gfp_t gfp, bool intr, size_t align) 315 + { 316 + struct dma_fence *fences[DRM_SUBALLOC_MAX_QUEUES]; 317 + unsigned int tries[DRM_SUBALLOC_MAX_QUEUES]; 318 + unsigned int count; 319 + int i, r; 320 + struct drm_suballoc *sa; 321 + 322 + if (WARN_ON_ONCE(align > sa_manager->align)) 323 + return ERR_PTR(-EINVAL); 324 + if (WARN_ON_ONCE(size > sa_manager->size || !size)) 325 + return ERR_PTR(-EINVAL); 326 + 327 + if (!align) 328 + align = sa_manager->align; 329 + 330 + sa = kmalloc(sizeof(*sa), gfp); 331 + if (!sa) 332 + return ERR_PTR(-ENOMEM); 333 + sa->manager = sa_manager; 334 + sa->fence = NULL; 335 + INIT_LIST_HEAD(&sa->olist); 336 + INIT_LIST_HEAD(&sa->flist); 337 + 338 + spin_lock(&sa_manager->wq.lock); 339 + do { 340 + for (i = 0; i < DRM_SUBALLOC_MAX_QUEUES; ++i) 341 + tries[i] = 0; 342 + 343 + do { 344 + drm_suballoc_try_free(sa_manager); 345 + 346 + if (drm_suballoc_try_alloc(sa_manager, sa, 347 + size, align)) { 348 + spin_unlock(&sa_manager->wq.lock); 349 + return sa; 350 + } 351 + 352 + /* see if we can skip over some allocations */ 353 + } while (drm_suballoc_next_hole(sa_manager, fences, tries)); 354 + 355 + for (i = 0, count = 0; i < DRM_SUBALLOC_MAX_QUEUES; ++i) 356 + if (fences[i]) 357 + fences[count++] = dma_fence_get(fences[i]); 358 + 359 + if (count) { 360 + long t; 361 + 362 + spin_unlock(&sa_manager->wq.lock); 363 + t = dma_fence_wait_any_timeout(fences, count, intr, 364 + MAX_SCHEDULE_TIMEOUT, 365 + NULL); 366 + for (i = 0; i < count; ++i) 367 + dma_fence_put(fences[i]); 368 + 369 + r = (t > 0) ? 
0 : t; 370 + spin_lock(&sa_manager->wq.lock); 371 + } else if (intr) { 372 + /* if we have nothing to wait for block */ 373 + r = wait_event_interruptible_locked 374 + (sa_manager->wq, 375 + __drm_suballoc_event(sa_manager, size, align)); 376 + } else { 377 + spin_unlock(&sa_manager->wq.lock); 378 + wait_event(sa_manager->wq, 379 + drm_suballoc_event(sa_manager, size, align)); 380 + r = 0; 381 + spin_lock(&sa_manager->wq.lock); 382 + } 383 + } while (!r); 384 + 385 + spin_unlock(&sa_manager->wq.lock); 386 + kfree(sa); 387 + return ERR_PTR(r); 388 + } 389 + EXPORT_SYMBOL(drm_suballoc_new); 390 + 391 + /** 392 + * drm_suballoc_free - Free a suballocation 393 + * @suballoc: pointer to the suballocation 394 + * @fence: fence that signals when suballocation is idle 395 + * 396 + * Free the suballocation. The suballocation can be re-used after @fence signals. 397 + */ 398 + void drm_suballoc_free(struct drm_suballoc *suballoc, 399 + struct dma_fence *fence) 400 + { 401 + struct drm_suballoc_manager *sa_manager; 402 + 403 + if (!suballoc) 404 + return; 405 + 406 + sa_manager = suballoc->manager; 407 + 408 + spin_lock(&sa_manager->wq.lock); 409 + if (fence && !dma_fence_is_signaled(fence)) { 410 + u32 idx; 411 + 412 + suballoc->fence = dma_fence_get(fence); 413 + idx = fence->context & (DRM_SUBALLOC_MAX_QUEUES - 1); 414 + list_add_tail(&suballoc->flist, &sa_manager->flist[idx]); 415 + } else { 416 + drm_suballoc_remove_locked(suballoc); 417 + } 418 + wake_up_all_locked(&sa_manager->wq); 419 + spin_unlock(&sa_manager->wq.lock); 420 + } 421 + EXPORT_SYMBOL(drm_suballoc_free); 422 + 423 + #ifdef CONFIG_DEBUG_FS 424 + void drm_suballoc_dump_debug_info(struct drm_suballoc_manager *sa_manager, 425 + struct drm_printer *p, 426 + unsigned long long suballoc_base) 427 + { 428 + struct drm_suballoc *i; 429 + 430 + spin_lock(&sa_manager->wq.lock); 431 + list_for_each_entry(i, &sa_manager->olist, olist) { 432 + unsigned long long soffset = i->soffset; 433 + unsigned long long eoffset 
= i->eoffset; 434 + 435 + if (&i->olist == sa_manager->hole) 436 + drm_puts(p, ">"); 437 + else 438 + drm_puts(p, " "); 439 + 440 + drm_printf(p, "[0x%010llx 0x%010llx] size %8lld", 441 + suballoc_base + soffset, suballoc_base + eoffset, 442 + eoffset - soffset); 443 + 444 + if (i->fence) 445 + drm_printf(p, " protected by 0x%016llx on context %llu", 446 + (unsigned long long)i->fence->seqno, 447 + (unsigned long long)i->fence->context); 448 + 449 + drm_puts(p, "\n"); 450 + } 451 + spin_unlock(&sa_manager->wq.lock); 452 + } 453 + EXPORT_SYMBOL(drm_suballoc_dump_debug_info); 454 + #endif 455 + MODULE_AUTHOR("Multiple"); 456 + MODULE_DESCRIPTION("Range suballocator helper"); 457 + MODULE_LICENSE("Dual MIT/GPL");
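The fit test at the heart of drm_suballoc_try_alloc() above is plain offset arithmetic: round the hole start up to the requested alignment, count the padding as waste, and allocate only if the padded size still fits in the hole. A standalone sketch of that check (assumes a power-of-two @align, as drm_suballoc_manager_init() enforces):

```c
#include <stddef.h>

/* Standalone mirror of the hole-fit test in drm_suballoc_try_alloc():
 * align the hole start soffset up to @align, treat the padding as
 * wasted bytes, and succeed only if size + wasted still fits in the
 * hole [soffset, eoffset). Requires @align to be a power of two. */
static int hole_fits(size_t soffset, size_t eoffset, size_t size, size_t align)
{
	size_t aligned = (soffset + align - 1) & ~(align - 1);
	size_t wasted = aligned - soffset;

	return (eoffset - soffset) >= (size + wasted);
}
```

When the check fails at the current hole, the manager frees signaled suballocations, skips to the next hole, or waits on the per-queue fences, as the drm_suballoc_new() loop above shows.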
+11 -4
drivers/gpu/drm/i915/gem/i915_gem_ttm.c
··· 472 472 struct ttm_placement place = {}; 473 473 int ret; 474 474 475 - if (!bo->ttm || bo->resource->mem_type != TTM_PL_SYSTEM) 475 + if (!bo->ttm || i915_ttm_cpu_maps_iomem(bo->resource)) 476 476 return 0; 477 477 478 478 GEM_BUG_ON(!i915_tt->is_shmem); ··· 511 511 { 512 512 struct drm_i915_gem_object *obj = i915_ttm_to_gem(bo); 513 513 514 - if (bo->resource && !i915_ttm_is_ghost_object(bo)) { 514 + /* 515 + * This gets called twice by ttm, so long as we have a ttm resource or 516 + * ttm_tt then we can still safely call this. Due to pipeline-gutting, 517 + * we maybe have NULL bo->resource, but in that case we should always 518 + * have a ttm alive (like if the pages are swapped out). 519 + */ 520 + if ((bo->resource || bo->ttm) && !i915_ttm_is_ghost_object(bo)) { 515 521 __i915_gem_object_pages_fini(obj); 516 522 i915_ttm_free_cached_io_rsgt(obj); 517 523 } ··· 1073 1067 .interruptible = true, 1074 1068 .no_wait_gpu = true, /* should be idle already */ 1075 1069 }; 1070 + int err; 1076 1071 1077 1072 GEM_BUG_ON(!bo->ttm || !(bo->ttm->page_flags & TTM_TT_FLAG_SWAPPED)); 1078 1073 1079 - ret = ttm_bo_validate(bo, i915_ttm_sys_placement(), &ctx); 1080 - if (ret) { 1074 + err = ttm_bo_validate(bo, i915_ttm_sys_placement(), &ctx); 1075 + if (err) { 1081 1076 dma_resv_unlock(bo->base.resv); 1082 1077 return VM_FAULT_SIGBUS; 1083 1078 }
+1 -1
drivers/gpu/drm/i915/gem/i915_gem_ttm.h
··· 98 98 static inline bool i915_ttm_cpu_maps_iomem(struct ttm_resource *mem) 99 99 { 100 100 /* Once / if we support GGTT, this is also false for cached ttm_tts */ 101 - return mem->mem_type != I915_PL_SYSTEM; 101 + return mem && mem->mem_type != I915_PL_SYSTEM; 102 102 } 103 103 104 104 bool i915_ttm_resource_mappable(struct ttm_resource *res);
+4
drivers/gpu/drm/i915/gem/i915_gem_ttm_move.c
··· 711 711 712 712 assert_object_held(dst); 713 713 assert_object_held(src); 714 + 715 + if (GEM_WARN_ON(!src_bo->resource || !dst_bo->resource)) 716 + return -EINVAL; 717 + 714 718 i915_deps_init(&deps, GFP_KERNEL | __GFP_NORETRY | __GFP_NOWARN); 715 719 716 720 ret = dma_resv_reserve_fences(src_bo->base.resv, 1);
+5 -2
drivers/gpu/drm/i915/gem/i915_gem_ttm_pm.c
··· 53 53 unsigned int flags; 54 54 int err = 0; 55 55 56 - if (bo->resource->mem_type == I915_PL_SYSTEM || obj->ttm.backup) 56 + if (!i915_ttm_cpu_maps_iomem(bo->resource) || obj->ttm.backup) 57 57 return 0; 58 58 59 59 if (pm_apply->allow_gpu && i915_gem_object_evictable(obj)) ··· 187 187 return err; 188 188 189 189 /* Content may have been swapped. */ 190 - err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx); 190 + if (!backup_bo->resource) 191 + err = ttm_bo_validate(backup_bo, i915_ttm_sys_placement(), &ctx); 192 + if (!err) 193 + err = ttm_tt_populate(backup_bo->bdev, backup_bo->ttm, &ctx); 191 194 if (!err) { 192 195 err = i915_gem_obj_copy_ttm(obj, backup, pm_apply->allow_gpu, 193 196 false);
+2 -2
drivers/gpu/drm/meson/meson_venc.c
··· 866 866 DRM_MODE_FLAG_PVSYNC | DRM_MODE_FLAG_NVSYNC)) 867 867 return MODE_BAD; 868 868 869 - if (mode->hdisplay < 640 || mode->hdisplay > 1920) 869 + if (mode->hdisplay < 400 || mode->hdisplay > 1920) 870 870 return MODE_BAD_HVALUE; 871 871 872 - if (mode->vdisplay < 480 || mode->vdisplay > 1200) 872 + if (mode->vdisplay < 480 || mode->vdisplay > 1920) 873 873 return MODE_BAD_VVALUE; 874 874 875 875 return MODE_OK;
+3
drivers/gpu/drm/mgag200/mgag200_drv.h
··· 375 375 struct drm_atomic_state *new_state); 376 376 void mgag200_primary_plane_helper_atomic_update(struct drm_plane *plane, 377 377 struct drm_atomic_state *old_state); 378 + void mgag200_primary_plane_helper_atomic_enable(struct drm_plane *plane, 379 + struct drm_atomic_state *state); 378 380 void mgag200_primary_plane_helper_atomic_disable(struct drm_plane *plane, 379 381 struct drm_atomic_state *old_state); 380 382 #define MGAG200_PRIMARY_PLANE_HELPER_FUNCS \ 381 383 DRM_GEM_SHADOW_PLANE_HELPER_FUNCS, \ 382 384 .atomic_check = mgag200_primary_plane_helper_atomic_check, \ 383 385 .atomic_update = mgag200_primary_plane_helper_atomic_update, \ 386 + .atomic_enable = mgag200_primary_plane_helper_atomic_enable, \ 384 387 .atomic_disable = mgag200_primary_plane_helper_atomic_disable 385 388 386 389 #define MGAG200_PRIMARY_PLANE_FUNCS \
+12 -10
drivers/gpu/drm/mgag200/mgag200_mode.c
··· 501 501 struct drm_framebuffer *fb = plane_state->fb; 502 502 struct drm_atomic_helper_damage_iter iter; 503 503 struct drm_rect damage; 504 - u8 seq1; 505 - 506 - if (!fb) 507 - return; 508 504 509 505 drm_atomic_helper_damage_iter_init(&iter, old_plane_state, plane_state); 510 506 drm_atomic_for_each_plane_damage(&iter, &damage) { ··· 510 514 /* Always scanout image at VRAM offset 0 */ 511 515 mgag200_set_startadd(mdev, (u32)0); 512 516 mgag200_set_offset(mdev, fb); 517 + } 513 518 514 - if (!old_plane_state->crtc && plane_state->crtc) { // enabling 515 - RREG_SEQ(0x01, seq1); 516 - seq1 &= ~MGAREG_SEQ1_SCROFF; 517 - WREG_SEQ(0x01, seq1); 518 - msleep(20); 519 - } 519 + void mgag200_primary_plane_helper_atomic_enable(struct drm_plane *plane, 520 + struct drm_atomic_state *state) 521 + { 522 + struct drm_device *dev = plane->dev; 523 + struct mga_device *mdev = to_mga_device(dev); 524 + u8 seq1; 525 + 526 + RREG_SEQ(0x01, seq1); 527 + seq1 &= ~MGAREG_SEQ1_SCROFF; 528 + WREG_SEQ(0x01, seq1); 529 + msleep(20); 520 530 } 521 531 522 532 void mgag200_primary_plane_helper_atomic_disable(struct drm_plane *plane,
-3
drivers/gpu/drm/nouveau/nouveau_bo.c
··· 1015 1015 if (ret) 1016 1016 goto out_ntfy; 1017 1017 1018 - if (nvbo->bo.pin_count) 1019 - NV_WARN(drm, "Moving pinned object %p!\n", nvbo); 1020 - 1021 1018 if (drm->client.device.info.family < NV_DEVICE_INFO_V0_TESLA) { 1022 1019 ret = nouveau_bo_vm_bind(bo, new_reg, &new_tile); 1023 1020 if (ret)
+5 -5
drivers/gpu/drm/nouveau/nouveau_hwmon.c
··· 41 41 nouveau_hwmon_show_temp1_auto_point1_pwm(struct device *d, 42 42 struct device_attribute *a, char *buf) 43 43 { 44 - return snprintf(buf, PAGE_SIZE, "%d\n", 100); 44 + return sysfs_emit(buf, "%d\n", 100); 45 45 } 46 46 static SENSOR_DEVICE_ATTR(temp1_auto_point1_pwm, 0444, 47 47 nouveau_hwmon_show_temp1_auto_point1_pwm, NULL, 0); ··· 54 54 struct nouveau_drm *drm = nouveau_drm(dev); 55 55 struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 56 56 57 - return snprintf(buf, PAGE_SIZE, "%d\n", 58 - therm->attr_get(therm, NVKM_THERM_ATTR_THRS_FAN_BOOST) * 1000); 57 + return sysfs_emit(buf, "%d\n", 58 + therm->attr_get(therm, NVKM_THERM_ATTR_THRS_FAN_BOOST) * 1000); 59 59 } 60 60 static ssize_t 61 61 nouveau_hwmon_set_temp1_auto_point1_temp(struct device *d, ··· 87 87 struct nouveau_drm *drm = nouveau_drm(dev); 88 88 struct nvkm_therm *therm = nvxx_therm(&drm->client.device); 89 89 90 - return snprintf(buf, PAGE_SIZE, "%d\n", 91 - therm->attr_get(therm, NVKM_THERM_ATTR_THRS_FAN_BOOST_HYST) * 1000); 90 + return sysfs_emit(buf, "%d\n", 91 + therm->attr_get(therm, NVKM_THERM_ATTR_THRS_FAN_BOOST_HYST) * 1000); 92 92 } 93 93 static ssize_t 94 94 nouveau_hwmon_set_temp1_auto_point1_temp_hyst(struct device *d,
+1 -1
drivers/gpu/drm/nouveau/nouveau_led.h
··· 27 27 28 28 #include "nouveau_drv.h" 29 29 30 - struct led_classdev; 30 + #include <linux/leds.h> 31 31 32 32 struct nouveau_led { 33 33 struct drm_device *dev;
+11
drivers/gpu/drm/panel/Kconfig
··· 318 318 Say Y here if you want to enable support for LG4573 RGB panel. 319 319 To compile this driver as a module, choose M here. 320 320 321 + config DRM_PANEL_MAGNACHIP_D53E6EA8966 322 + tristate "Magnachip D53E6EA8966 DSI panel" 323 + depends on OF && SPI 324 + depends on DRM_MIPI_DSI 325 + depends on BACKLIGHT_CLASS_DEVICE 326 + select DRM_MIPI_DBI 327 + help 328 + DRM panel driver for the Samsung AMS495QA01 panel controlled 329 + with the Magnachip D53E6EA8966 panel IC. This panel receives 330 + video data via DSI but commands via 9-bit SPI using DBI. 331 + 321 332 config DRM_PANEL_NEC_NL8048HL11 322 333 tristate "NEC NL8048HL11 RGB panel" 323 334 depends on GPIOLIB && OF && SPI
+1
drivers/gpu/drm/panel/Makefile
··· 29 29 obj-$(CONFIG_DRM_PANEL_LEADTEK_LTK500HD1829) += panel-leadtek-ltk500hd1829.o 30 30 obj-$(CONFIG_DRM_PANEL_LG_LB035Q02) += panel-lg-lb035q02.o 31 31 obj-$(CONFIG_DRM_PANEL_LG_LG4573) += panel-lg-lg4573.o 32 + obj-$(CONFIG_DRM_PANEL_MAGNACHIP_D53E6EA8966) += panel-magnachip-d53e6ea8966.o 32 33 obj-$(CONFIG_DRM_PANEL_NEC_NL8048HL11) += panel-nec-nl8048hl11.o 33 34 obj-$(CONFIG_DRM_PANEL_NEWVISION_NV3051D) += panel-newvision-nv3051d.o 34 35 obj-$(CONFIG_DRM_PANEL_NEWVISION_NV3052C) += panel-newvision-nv3052c.o
+208 -1
drivers/gpu/drm/panel/panel-jadard-jd9365da-h3.c
··· 167 167 .get_modes = jadard_get_modes, 168 168 }; 169 169 170 + static const struct jadard_init_cmd radxa_display_8hd_ad002_init_cmds[] = { 171 + { .data = { 0xE0, 0x00 } }, 172 + { .data = { 0xE1, 0x93 } }, 173 + { .data = { 0xE2, 0x65 } }, 174 + { .data = { 0xE3, 0xF8 } }, 175 + { .data = { 0x80, 0x03 } }, 176 + { .data = { 0xE0, 0x01 } }, 177 + { .data = { 0x00, 0x00 } }, 178 + { .data = { 0x01, 0x7E } }, 179 + { .data = { 0x03, 0x00 } }, 180 + { .data = { 0x04, 0x65 } }, 181 + { .data = { 0x0C, 0x74 } }, 182 + { .data = { 0x17, 0x00 } }, 183 + { .data = { 0x18, 0xB7 } }, 184 + { .data = { 0x19, 0x00 } }, 185 + { .data = { 0x1A, 0x00 } }, 186 + { .data = { 0x1B, 0xB7 } }, 187 + { .data = { 0x1C, 0x00 } }, 188 + { .data = { 0x24, 0xFE } }, 189 + { .data = { 0x37, 0x19 } }, 190 + { .data = { 0x38, 0x05 } }, 191 + { .data = { 0x39, 0x00 } }, 192 + { .data = { 0x3A, 0x01 } }, 193 + { .data = { 0x3B, 0x01 } }, 194 + { .data = { 0x3C, 0x70 } }, 195 + { .data = { 0x3D, 0xFF } }, 196 + { .data = { 0x3E, 0xFF } }, 197 + { .data = { 0x3F, 0xFF } }, 198 + { .data = { 0x40, 0x06 } }, 199 + { .data = { 0x41, 0xA0 } }, 200 + { .data = { 0x43, 0x1E } }, 201 + { .data = { 0x44, 0x0F } }, 202 + { .data = { 0x45, 0x28 } }, 203 + { .data = { 0x4B, 0x04 } }, 204 + { .data = { 0x55, 0x02 } }, 205 + { .data = { 0x56, 0x01 } }, 206 + { .data = { 0x57, 0xA9 } }, 207 + { .data = { 0x58, 0x0A } }, 208 + { .data = { 0x59, 0x0A } }, 209 + { .data = { 0x5A, 0x37 } }, 210 + { .data = { 0x5B, 0x19 } }, 211 + { .data = { 0x5D, 0x78 } }, 212 + { .data = { 0x5E, 0x63 } }, 213 + { .data = { 0x5F, 0x54 } }, 214 + { .data = { 0x60, 0x49 } }, 215 + { .data = { 0x61, 0x45 } }, 216 + { .data = { 0x62, 0x38 } }, 217 + { .data = { 0x63, 0x3D } }, 218 + { .data = { 0x64, 0x28 } }, 219 + { .data = { 0x65, 0x43 } }, 220 + { .data = { 0x66, 0x41 } }, 221 + { .data = { 0x67, 0x43 } }, 222 + { .data = { 0x68, 0x62 } }, 223 + { .data = { 0x69, 0x50 } }, 224 + { .data = { 0x6A, 0x57 } }, 225 + { .data = { 
0x6B, 0x49 } }, 226 + { .data = { 0x6C, 0x44 } }, 227 + { .data = { 0x6D, 0x37 } }, 228 + { .data = { 0x6E, 0x23 } }, 229 + { .data = { 0x6F, 0x10 } }, 230 + { .data = { 0x70, 0x78 } }, 231 + { .data = { 0x71, 0x63 } }, 232 + { .data = { 0x72, 0x54 } }, 233 + { .data = { 0x73, 0x49 } }, 234 + { .data = { 0x74, 0x45 } }, 235 + { .data = { 0x75, 0x38 } }, 236 + { .data = { 0x76, 0x3D } }, 237 + { .data = { 0x77, 0x28 } }, 238 + { .data = { 0x78, 0x43 } }, 239 + { .data = { 0x79, 0x41 } }, 240 + { .data = { 0x7A, 0x43 } }, 241 + { .data = { 0x7B, 0x62 } }, 242 + { .data = { 0x7C, 0x50 } }, 243 + { .data = { 0x7D, 0x57 } }, 244 + { .data = { 0x7E, 0x49 } }, 245 + { .data = { 0x7F, 0x44 } }, 246 + { .data = { 0x80, 0x37 } }, 247 + { .data = { 0x81, 0x23 } }, 248 + { .data = { 0x82, 0x10 } }, 249 + { .data = { 0xE0, 0x02 } }, 250 + { .data = { 0x00, 0x47 } }, 251 + { .data = { 0x01, 0x47 } }, 252 + { .data = { 0x02, 0x45 } }, 253 + { .data = { 0x03, 0x45 } }, 254 + { .data = { 0x04, 0x4B } }, 255 + { .data = { 0x05, 0x4B } }, 256 + { .data = { 0x06, 0x49 } }, 257 + { .data = { 0x07, 0x49 } }, 258 + { .data = { 0x08, 0x41 } }, 259 + { .data = { 0x09, 0x1F } }, 260 + { .data = { 0x0A, 0x1F } }, 261 + { .data = { 0x0B, 0x1F } }, 262 + { .data = { 0x0C, 0x1F } }, 263 + { .data = { 0x0D, 0x1F } }, 264 + { .data = { 0x0E, 0x1F } }, 265 + { .data = { 0x0F, 0x5F } }, 266 + { .data = { 0x10, 0x5F } }, 267 + { .data = { 0x11, 0x57 } }, 268 + { .data = { 0x12, 0x77 } }, 269 + { .data = { 0x13, 0x35 } }, 270 + { .data = { 0x14, 0x1F } }, 271 + { .data = { 0x15, 0x1F } }, 272 + { .data = { 0x16, 0x46 } }, 273 + { .data = { 0x17, 0x46 } }, 274 + { .data = { 0x18, 0x44 } }, 275 + { .data = { 0x19, 0x44 } }, 276 + { .data = { 0x1A, 0x4A } }, 277 + { .data = { 0x1B, 0x4A } }, 278 + { .data = { 0x1C, 0x48 } }, 279 + { .data = { 0x1D, 0x48 } }, 280 + { .data = { 0x1E, 0x40 } }, 281 + { .data = { 0x1F, 0x1F } }, 282 + { .data = { 0x20, 0x1F } }, 283 + { .data = { 0x21, 0x1F } }, 284 + { 
.data = { 0x22, 0x1F } }, 285 + { .data = { 0x23, 0x1F } }, 286 + { .data = { 0x24, 0x1F } }, 287 + { .data = { 0x25, 0x5F } }, 288 + { .data = { 0x26, 0x5F } }, 289 + { .data = { 0x27, 0x57 } }, 290 + { .data = { 0x28, 0x77 } }, 291 + { .data = { 0x29, 0x35 } }, 292 + { .data = { 0x2A, 0x1F } }, 293 + { .data = { 0x2B, 0x1F } }, 294 + { .data = { 0x58, 0x40 } }, 295 + { .data = { 0x59, 0x00 } }, 296 + { .data = { 0x5A, 0x00 } }, 297 + { .data = { 0x5B, 0x10 } }, 298 + { .data = { 0x5C, 0x06 } }, 299 + { .data = { 0x5D, 0x40 } }, 300 + { .data = { 0x5E, 0x01 } }, 301 + { .data = { 0x5F, 0x02 } }, 302 + { .data = { 0x60, 0x30 } }, 303 + { .data = { 0x61, 0x01 } }, 304 + { .data = { 0x62, 0x02 } }, 305 + { .data = { 0x63, 0x03 } }, 306 + { .data = { 0x64, 0x6B } }, 307 + { .data = { 0x65, 0x05 } }, 308 + { .data = { 0x66, 0x0C } }, 309 + { .data = { 0x67, 0x73 } }, 310 + { .data = { 0x68, 0x09 } }, 311 + { .data = { 0x69, 0x03 } }, 312 + { .data = { 0x6A, 0x56 } }, 313 + { .data = { 0x6B, 0x08 } }, 314 + { .data = { 0x6C, 0x00 } }, 315 + { .data = { 0x6D, 0x04 } }, 316 + { .data = { 0x6E, 0x04 } }, 317 + { .data = { 0x6F, 0x88 } }, 318 + { .data = { 0x70, 0x00 } }, 319 + { .data = { 0x71, 0x00 } }, 320 + { .data = { 0x72, 0x06 } }, 321 + { .data = { 0x73, 0x7B } }, 322 + { .data = { 0x74, 0x00 } }, 323 + { .data = { 0x75, 0xF8 } }, 324 + { .data = { 0x76, 0x00 } }, 325 + { .data = { 0x77, 0xD5 } }, 326 + { .data = { 0x78, 0x2E } }, 327 + { .data = { 0x79, 0x12 } }, 328 + { .data = { 0x7A, 0x03 } }, 329 + { .data = { 0x7B, 0x00 } }, 330 + { .data = { 0x7C, 0x00 } }, 331 + { .data = { 0x7D, 0x03 } }, 332 + { .data = { 0x7E, 0x7B } }, 333 + { .data = { 0xE0, 0x04 } }, 334 + { .data = { 0x00, 0x0E } }, 335 + { .data = { 0x02, 0xB3 } }, 336 + { .data = { 0x09, 0x60 } }, 337 + { .data = { 0x0E, 0x2A } }, 338 + { .data = { 0x36, 0x59 } }, 339 + { .data = { 0xE0, 0x00 } }, 340 + }; 341 + 342 + static const struct jadard_panel_desc radxa_display_8hd_ad002_desc = { 343 + .mode 
= { 344 + .clock = 70000, 345 + 346 + .hdisplay = 800, 347 + .hsync_start = 800 + 40, 348 + .hsync_end = 800 + 40 + 18, 349 + .htotal = 800 + 40 + 18 + 20, 350 + 351 + .vdisplay = 1280, 352 + .vsync_start = 1280 + 20, 353 + .vsync_end = 1280 + 20 + 4, 354 + .vtotal = 1280 + 20 + 4 + 20, 355 + 356 + .width_mm = 127, 357 + .height_mm = 199, 358 + .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED, 359 + }, 360 + .lanes = 4, 361 + .format = MIPI_DSI_FMT_RGB888, 362 + .init_cmds = radxa_display_8hd_ad002_init_cmds, 363 + .num_init_cmds = ARRAY_SIZE(radxa_display_8hd_ad002_init_cmds), 364 + }; 365 + 170 366 static const struct jadard_init_cmd cz101b4001_init_cmds[] = { 171 367 { .data = { 0xE0, 0x00 } }, 172 368 { .data = { 0xE1, 0x93 } }, ··· 648 452 } 649 453 650 454 static const struct of_device_id jadard_of_match[] = { 651 - { .compatible = "chongzhou,cz101b4001", .data = &cz101b4001_desc }, 455 + { 456 + .compatible = "chongzhou,cz101b4001", 457 + .data = &cz101b4001_desc 458 + }, 459 + { 460 + .compatible = "radxa,display-10hd-ad001", 461 + .data = &cz101b4001_desc 462 + }, 463 + { 464 + .compatible = "radxa,display-8hd-ad002", 465 + .data = &radxa_display_8hd_ad002_desc 466 + }, 652 467 { /* sentinel */ } 653 468 }; 654 469 MODULE_DEVICE_TABLE(of, jadard_of_match);
+522
drivers/gpu/drm/panel/panel-magnachip-d53e6ea8966.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Magnachip d53e6ea8966 MIPI-DSI panel driver 4 + * Copyright (C) 2023 Chris Morgan 5 + */ 6 + 7 + #include <drm/drm_mipi_dbi.h> 8 + #include <drm/drm_mipi_dsi.h> 9 + #include <drm/drm_modes.h> 10 + #include <drm/drm_of.h> 11 + #include <drm/drm_panel.h> 12 + 13 + #include <linux/backlight.h> 14 + #include <linux/delay.h> 15 + #include <linux/gpio/consumer.h> 16 + #include <linux/init.h> 17 + #include <linux/kernel.h> 18 + #include <linux/media-bus-format.h> 19 + #include <linux/module.h> 20 + #include <linux/of.h> 21 + #include <linux/of_device.h> 22 + #include <linux/regulator/consumer.h> 23 + #include <linux/spi/spi.h> 24 + 25 + #include <video/mipi_display.h> 26 + 27 + /* Forward declaration for use in backlight function */ 28 + struct d53e6ea8966; 29 + 30 + /* Panel info, unique to each panel */ 31 + struct d53e6ea8966_panel_info { 32 + /** @display_modes: the supported display modes */ 33 + const struct drm_display_mode *display_modes; 34 + /** @num_modes: the number of supported display modes */ 35 + unsigned int num_modes; 36 + /** @width_mm: panel width in mm */ 37 + u16 width_mm; 38 + /** @height_mm: panel height in mm */ 39 + u16 height_mm; 40 + /** @bus_flags: drm bus flags for panel */ 41 + u32 bus_flags; 42 + /** @panel_init_seq: panel specific init sequence */ 43 + void (*panel_init_seq)(struct d53e6ea8966 *db); 44 + /** @backlight_register: panel backlight registration or NULL */ 45 + int (*backlight_register)(struct d53e6ea8966 *db); 46 + }; 47 + 48 + struct d53e6ea8966 { 49 + /** @dev: the container device */ 50 + struct device *dev; 51 + /** @dbi: the DBI bus abstraction handle */ 52 + struct mipi_dbi dbi; 53 + /** @panel: the DRM panel instance for this device */ 54 + struct drm_panel panel; 55 + /** @reset: reset GPIO line */ 56 + struct gpio_desc *reset; 57 + /** @enable: enable GPIO line */ 58 + struct gpio_desc *enable; 59 + /** @reg_vdd: VDD supply regulator for panel logic */ 60 + 
struct regulator *reg_vdd; 61 + /** @reg_elvdd: ELVDD supply regulator for panel display */ 62 + struct regulator *reg_elvdd; 63 + /** @dsi_dev: DSI child device (panel) */ 64 + struct mipi_dsi_device *dsi_dev; 65 + /** @bl_dev: pseudo-backlight device for oled panel */ 66 + struct backlight_device *bl_dev; 67 + /** @panel_info: struct containing panel timing and info */ 68 + const struct d53e6ea8966_panel_info *panel_info; 69 + }; 70 + 71 + #define NUM_GAMMA_LEVELS 16 72 + #define GAMMA_TABLE_COUNT 23 73 + #define MAX_BRIGHTNESS (NUM_GAMMA_LEVELS - 1) 74 + 75 + #define MCS_ELVSS_ON 0xb1 76 + #define MCS_TEMP_SWIRE 0xb2 77 + #define MCS_PASSWORD_0 0xf0 78 + #define MCS_PASSWORD_1 0xf1 79 + #define MCS_ANALOG_PWR_CTL_0 0xf4 80 + #define MCS_ANALOG_PWR_CTL_1 0xf5 81 + #define MCS_GTCON_SET 0xf7 82 + #define MCS_GATELESS_SIGNAL_SET 0xf8 83 + #define MCS_SET_GAMMA 0xf9 84 + 85 + static inline struct d53e6ea8966 *to_d53e6ea8966(struct drm_panel *panel) 86 + { 87 + return container_of(panel, struct d53e6ea8966, panel); 88 + } 89 + 90 + /* Table of gamma values provided in datasheet */ 91 + static u8 ams495qa01_gamma[NUM_GAMMA_LEVELS][GAMMA_TABLE_COUNT] = { 92 + {0x01, 0x79, 0x78, 0x8d, 0xd9, 0xdf, 0xd5, 0xcb, 0xcf, 0xc5, 93 + 0xe5, 0xe0, 0xe4, 0xdc, 0xb8, 0xd4, 0xfa, 0xed, 0xe6, 0x2f, 94 + 0x00, 0x2f}, 95 + {0x01, 0x7d, 0x7c, 0x92, 0xd7, 0xdd, 0xd2, 0xcb, 0xd0, 0xc6, 96 + 0xe5, 0xe1, 0xe3, 0xda, 0xbd, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 97 + 0x00, 0x2f}, 98 + {0x01, 0x7f, 0x7e, 0x95, 0xd7, 0xde, 0xd2, 0xcb, 0xcf, 0xc5, 99 + 0xe5, 0xe3, 0xe3, 0xda, 0xbf, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 100 + 0x00, 0x2f}, 101 + {0x01, 0x82, 0x81, 0x99, 0xd6, 0xdd, 0xd1, 0xca, 0xcf, 0xc3, 102 + 0xe4, 0xe3, 0xe3, 0xda, 0xc2, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 103 + 0x00, 0x2f}, 104 + {0x01, 0x84, 0x83, 0x9b, 0xd7, 0xde, 0xd2, 0xc8, 0xce, 0xc2, 105 + 0xe4, 0xe3, 0xe2, 0xd9, 0xc3, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 106 + 0x00, 0x2f}, 107 + {0x01, 0x87, 0x86, 0x9f, 0xd6, 0xdd, 0xd1, 0xc7, 0xce, 0xc1, 108 + 0xe4, 
0xe3, 0xe2, 0xd9, 0xc6, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 109 + 0x00, 0x2f}, 110 + {0x01, 0x89, 0x89, 0xa2, 0xd5, 0xdb, 0xcf, 0xc8, 0xcf, 0xc2, 111 + 0xe3, 0xe3, 0xe1, 0xd9, 0xc7, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 112 + 0x00, 0x2f}, 113 + {0x01, 0x8b, 0x8b, 0xa5, 0xd5, 0xdb, 0xcf, 0xc7, 0xce, 0xc0, 114 + 0xe3, 0xe3, 0xe1, 0xd8, 0xc7, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 115 + 0x00, 0x2f}, 116 + {0x01, 0x8d, 0x8d, 0xa7, 0xd5, 0xdb, 0xcf, 0xc6, 0xce, 0xc0, 117 + 0xe4, 0xe4, 0xe1, 0xd7, 0xc8, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 118 + 0x00, 0x2f}, 119 + {0x01, 0x8f, 0x8f, 0xaa, 0xd4, 0xdb, 0xce, 0xc6, 0xcd, 0xbf, 120 + 0xe3, 0xe3, 0xe1, 0xd7, 0xca, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 121 + 0x00, 0x2f}, 122 + {0x01, 0x91, 0x91, 0xac, 0xd3, 0xda, 0xce, 0xc5, 0xcd, 0xbe, 123 + 0xe3, 0xe3, 0xe0, 0xd7, 0xca, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 124 + 0x00, 0x2f}, 125 + {0x01, 0x93, 0x93, 0xaf, 0xd3, 0xda, 0xcd, 0xc5, 0xcd, 0xbe, 126 + 0xe2, 0xe3, 0xdf, 0xd6, 0xca, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 127 + 0x00, 0x2f}, 128 + {0x01, 0x95, 0x95, 0xb1, 0xd2, 0xd9, 0xcc, 0xc4, 0xcd, 0xbe, 129 + 0xe2, 0xe3, 0xdf, 0xd7, 0xcc, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 130 + 0x00, 0x2f}, 131 + {0x01, 0x99, 0x99, 0xb6, 0xd1, 0xd9, 0xcc, 0xc3, 0xcb, 0xbc, 132 + 0xe2, 0xe4, 0xdf, 0xd6, 0xcc, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 133 + 0x00, 0x2f}, 134 + {0x01, 0x9c, 0x9c, 0xba, 0xd0, 0xd8, 0xcb, 0xc3, 0xcb, 0xbb, 135 + 0xe2, 0xe4, 0xdf, 0xd6, 0xce, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 136 + 0x00, 0x2f}, 137 + {0x01, 0x9f, 0x9f, 0xbe, 0xcf, 0xd7, 0xc9, 0xc2, 0xcb, 0xbb, 138 + 0xe1, 0xe3, 0xde, 0xd6, 0xd0, 0xd3, 0xfa, 0xed, 0xe6, 0x2f, 139 + 0x00, 0x2f}, 140 + }; 141 + 142 + /* 143 + * Table of elvss values provided in datasheet and corresponds to 144 + * gamma values. 
145 + */ 146 + static u8 ams495qa01_elvss[NUM_GAMMA_LEVELS] = { 147 + 0x15, 0x15, 0x15, 0x15, 0x15, 0x15, 0x15, 0x15, 0x15, 0x15, 148 + 0x15, 0x15, 0x14, 0x14, 0x13, 0x12, 149 + }; 150 + 151 + static int ams495qa01_update_gamma(struct mipi_dbi *dbi, int brightness) 152 + { 153 + int tmp = brightness; 154 + 155 + mipi_dbi_command_buf(dbi, MCS_SET_GAMMA, ams495qa01_gamma[tmp], 156 + ARRAY_SIZE(ams495qa01_gamma[tmp])); 157 + mipi_dbi_command(dbi, MCS_SET_GAMMA, 0x00); 158 + 159 + /* Undocumented command */ 160 + mipi_dbi_command(dbi, 0x26, 0x00); 161 + 162 + mipi_dbi_command(dbi, MCS_TEMP_SWIRE, ams495qa01_elvss[tmp]); 163 + 164 + return 0; 165 + } 166 + 167 + static void ams495qa01_panel_init(struct d53e6ea8966 *db) 168 + { 169 + struct mipi_dbi *dbi = &db->dbi; 170 + 171 + mipi_dbi_command(dbi, MCS_PASSWORD_0, 0x5a, 0x5a); 172 + mipi_dbi_command(dbi, MCS_PASSWORD_1, 0x5a, 0x5a); 173 + 174 + /* Undocumented commands */ 175 + mipi_dbi_command(dbi, 0xb0, 0x02); 176 + mipi_dbi_command(dbi, 0xf3, 0x3b); 177 + 178 + mipi_dbi_command(dbi, MCS_ANALOG_PWR_CTL_0, 0x33, 0x42, 0x00, 0x08); 179 + mipi_dbi_command(dbi, MCS_ANALOG_PWR_CTL_1, 0x00, 0x06, 0x26, 0x35, 0x03); 180 + 181 + /* Undocumented commands */ 182 + mipi_dbi_command(dbi, 0xf6, 0x02); 183 + mipi_dbi_command(dbi, 0xc6, 0x0b, 0x00, 0x00, 0x3c, 0x00, 0x22, 184 + 0x00, 0x00, 0x00, 0x00); 185 + 186 + mipi_dbi_command(dbi, MCS_GTCON_SET, 0x20); 187 + mipi_dbi_command(dbi, MCS_TEMP_SWIRE, 0x06, 0x06, 0x06, 0x06); 188 + mipi_dbi_command(dbi, MCS_ELVSS_ON, 0x07, 0x00, 0x10); 189 + mipi_dbi_command(dbi, MCS_GATELESS_SIGNAL_SET, 0x7f, 0x7a, 190 + 0x89, 0x67, 0x26, 0x38, 0x00, 0x00, 0x09, 191 + 0x67, 0x70, 0x88, 0x7a, 0x76, 0x05, 0x09, 192 + 0x23, 0x23, 0x23); 193 + 194 + /* Undocumented commands */ 195 + mipi_dbi_command(dbi, 0xb5, 0xff, 0xef, 0x35, 0x42, 0x0d, 0xd7, 196 + 0xff, 0x07, 0xff, 0xff, 0xfd, 0x00, 0x01, 197 + 0xff, 0x05, 0x12, 0x0f, 0xff, 0xff, 0xff, 198 + 0xff); 199 + mipi_dbi_command(dbi, 0xb4, 0x15); 200 + 
mipi_dbi_command(dbi, 0xb3, 0x00); 201 + 202 + ams495qa01_update_gamma(dbi, MAX_BRIGHTNESS); 203 + } 204 + 205 + static int d53e6ea8966_prepare(struct drm_panel *panel) 206 + { 207 + struct d53e6ea8966 *db = to_d53e6ea8966(panel); 208 + int ret; 209 + 210 + /* Power up */ 211 + ret = regulator_enable(db->reg_vdd); 212 + if (ret) { 213 + dev_err(db->dev, "failed to enable vdd regulator: %d\n", ret); 214 + return ret; 215 + } 216 + 217 + if (db->reg_elvdd) { 218 + ret = regulator_enable(db->reg_elvdd); 219 + if (ret) { 220 + dev_err(db->dev, 221 + "failed to enable elvdd regulator: %d\n", ret); 222 + regulator_disable(db->reg_vdd); 223 + return ret; 224 + } 225 + } 226 + 227 + /* Enable */ 228 + if (db->enable) 229 + gpiod_set_value_cansleep(db->enable, 1); 230 + 231 + msleep(50); 232 + 233 + /* Reset */ 234 + gpiod_set_value_cansleep(db->reset, 1); 235 + usleep_range(1000, 5000); 236 + gpiod_set_value_cansleep(db->reset, 0); 237 + msleep(20); 238 + 239 + db->panel_info->panel_init_seq(db); 240 + 241 + return 0; 242 + } 243 + 244 + static int d53e6ea8966_enable(struct drm_panel *panel) 245 + { 246 + struct d53e6ea8966 *db = to_d53e6ea8966(panel); 247 + struct mipi_dbi *dbi = &db->dbi; 248 + 249 + mipi_dbi_command(dbi, MIPI_DCS_EXIT_SLEEP_MODE); 250 + msleep(200); 251 + mipi_dbi_command(dbi, MIPI_DCS_SET_DISPLAY_ON); 252 + usleep_range(10000, 15000); 253 + 254 + return 0; 255 + } 256 + 257 + static int d53e6ea8966_disable(struct drm_panel *panel) 258 + { 259 + struct d53e6ea8966 *db = to_d53e6ea8966(panel); 260 + struct mipi_dbi *dbi = &db->dbi; 261 + 262 + mipi_dbi_command(dbi, MIPI_DCS_SET_DISPLAY_OFF); 263 + msleep(20); 264 + mipi_dbi_command(dbi, MIPI_DCS_ENTER_SLEEP_MODE); 265 + msleep(100); 266 + 267 + return 0; 268 + } 269 + 270 + static int d53e6ea8966_unprepare(struct drm_panel *panel) 271 + { 272 + struct d53e6ea8966 *db = to_d53e6ea8966(panel); 273 + 274 + if (db->enable) 275 + gpiod_set_value_cansleep(db->enable, 0); 276 + 277 + 
gpiod_set_value_cansleep(db->reset, 1); 278 + 279 + if (db->reg_elvdd) 280 + regulator_disable(db->reg_elvdd); 281 + 282 + regulator_disable(db->reg_vdd); 283 + msleep(100); 284 + 285 + return 0; 286 + } 287 + 288 + static int d53e6ea8966_get_modes(struct drm_panel *panel, 289 + struct drm_connector *connector) 290 + { 291 + struct d53e6ea8966 *db = to_d53e6ea8966(panel); 292 + const struct d53e6ea8966_panel_info *panel_info = db->panel_info; 293 + struct drm_display_mode *mode; 294 + static const u32 bus_format = MEDIA_BUS_FMT_RGB888_1X24; 295 + unsigned int i; 296 + 297 + for (i = 0; i < panel_info->num_modes; i++) { 298 + mode = drm_mode_duplicate(connector->dev, 299 + &panel_info->display_modes[i]); 300 + if (!mode) 301 + return -ENOMEM; 302 + 303 + drm_mode_set_name(mode); 304 + drm_mode_probed_add(connector, mode); 305 + } 306 + 307 + connector->display_info.bpc = 8; 308 + connector->display_info.width_mm = panel_info->width_mm; 309 + connector->display_info.height_mm = panel_info->height_mm; 310 + connector->display_info.bus_flags = panel_info->bus_flags; 311 + 312 + drm_display_info_set_bus_formats(&connector->display_info, 313 + &bus_format, 1); 314 + 315 + return 1; 316 + } 317 + 318 + static const struct drm_panel_funcs d53e6ea8966_panel_funcs = { 319 + .disable = d53e6ea8966_disable, 320 + .enable = d53e6ea8966_enable, 321 + .get_modes = d53e6ea8966_get_modes, 322 + .prepare = d53e6ea8966_prepare, 323 + .unprepare = d53e6ea8966_unprepare, 324 + }; 325 + 326 + static int ams495qa01_set_brightness(struct backlight_device *bd) 327 + { 328 + struct d53e6ea8966 *db = bl_get_data(bd); 329 + struct mipi_dbi *dbi = &db->dbi; 330 + int brightness = backlight_get_brightness(bd); 331 + 332 + ams495qa01_update_gamma(dbi, brightness); 333 + 334 + return 0; 335 + } 336 + 337 + static const struct backlight_ops ams495qa01_backlight_ops = { 338 + .update_status = ams495qa01_set_brightness, 339 + }; 340 + 341 + static int ams495qa01_backlight_register(struct d53e6ea8966 
*db) 342 + { 343 + struct backlight_properties props = { 344 + .type = BACKLIGHT_RAW, 345 + .brightness = MAX_BRIGHTNESS, 346 + .max_brightness = MAX_BRIGHTNESS, 347 + }; 348 + struct device *dev = db->dev; 349 + int ret = 0; 350 + 351 + db->bl_dev = devm_backlight_device_register(dev, "panel", dev, db, 352 + &ams495qa01_backlight_ops, 353 + &props); 354 + if (IS_ERR(db->bl_dev)) { 355 + ret = PTR_ERR(db->bl_dev); 356 + dev_err(dev, "error registering backlight device (%d)\n", ret); 357 + } 358 + 359 + return ret; 360 + } 361 + 362 + static int d53e6ea8966_probe(struct spi_device *spi) 363 + { 364 + struct device *dev = &spi->dev; 365 + struct mipi_dsi_host *dsi_host; 366 + struct d53e6ea8966 *db; 367 + int ret; 368 + struct mipi_dsi_device_info info = { 369 + .type = "d53e6ea8966", 370 + .channel = 0, 371 + .node = NULL, 372 + }; 373 + 374 + db = devm_kzalloc(dev, sizeof(*db), GFP_KERNEL); 375 + if (!db) 376 + return -ENOMEM; 377 + 378 + spi_set_drvdata(spi, db); 379 + 380 + db->dev = dev; 381 + 382 + db->panel_info = of_device_get_match_data(dev); 383 + if (!db->panel_info) 384 + return -EINVAL; 385 + 386 + db->reg_vdd = devm_regulator_get(dev, "vdd"); 387 + if (IS_ERR(db->reg_vdd)) 388 + return dev_err_probe(dev, PTR_ERR(db->reg_vdd), 389 + "Failed to get vdd supply\n"); 390 + 391 + db->reg_elvdd = devm_regulator_get_optional(dev, "elvdd"); 392 + if (IS_ERR(db->reg_elvdd)) 393 + db->reg_elvdd = NULL; 394 + 395 + db->reset = devm_gpiod_get(dev, "reset", GPIOD_OUT_HIGH); 396 + if (IS_ERR(db->reset)) { 397 + ret = PTR_ERR(db->reset); 398 + return dev_err_probe(dev, ret, "no RESET GPIO\n"); 399 + } 400 + 401 + db->enable = devm_gpiod_get_optional(dev, "enable", GPIOD_OUT_LOW); 402 + if (IS_ERR(db->enable)) { 403 + ret = PTR_ERR(db->enable); 404 + return dev_err_probe(dev, ret, "cannot get ENABLE GPIO\n"); 405 + } 406 + 407 + ret = mipi_dbi_spi_init(spi, &db->dbi, NULL); 408 + if (ret) 409 + return dev_err_probe(dev, ret, "MIPI DBI init failed\n"); 410 + 411 + 
dsi_host = drm_of_get_dsi_bus(dev); 412 + if (IS_ERR(dsi_host)) { 413 + ret = PTR_ERR(dsi_host); 414 + return dev_err_probe(dev, ret, "Error attaching DSI bus\n"); 415 + } 416 + 417 + db->dsi_dev = devm_mipi_dsi_device_register_full(dev, dsi_host, &info); 418 + if (IS_ERR(db->dsi_dev)) { 419 + dev_err(dev, "failed to register dsi device: %ld\n", 420 + PTR_ERR(db->dsi_dev)); 421 + ret = PTR_ERR(db->dsi_dev); 422 + } 423 + 424 + db->dsi_dev->lanes = 2; 425 + db->dsi_dev->format = MIPI_DSI_FMT_RGB888; 426 + db->dsi_dev->mode_flags = MIPI_DSI_MODE_VIDEO | MIPI_DSI_MODE_VIDEO_BURST | 427 + MIPI_DSI_MODE_LPM | MIPI_DSI_MODE_NO_EOT_PACKET; 428 + 429 + drm_panel_init(&db->panel, dev, &d53e6ea8966_panel_funcs, 430 + DRM_MODE_CONNECTOR_DSI); 431 + 432 + if (db->panel_info->backlight_register) { 433 + ret = db->panel_info->backlight_register(db); 434 + if (ret < 0) 435 + return ret; 436 + db->panel.backlight = db->bl_dev; 437 + } 438 + 439 + drm_panel_add(&db->panel); 440 + 441 + ret = devm_mipi_dsi_attach(dev, db->dsi_dev); 442 + if (ret < 0) { 443 + dev_err(dev, "mipi_dsi_attach failed: %d\n", ret); 444 + drm_panel_remove(&db->panel); 445 + return ret; 446 + } 447 + 448 + return 0; 449 + } 450 + 451 + static void d53e6ea8966_remove(struct spi_device *spi) 452 + { 453 + struct d53e6ea8966 *db = spi_get_drvdata(spi); 454 + 455 + drm_panel_remove(&db->panel); 456 + } 457 + 458 + static const struct drm_display_mode ams495qa01_modes[] = { 459 + { /* 60hz */ 460 + .clock = 33500, 461 + .hdisplay = 960, 462 + .hsync_start = 960 + 10, 463 + .hsync_end = 960 + 10 + 2, 464 + .htotal = 960 + 10 + 2 + 10, 465 + .vdisplay = 544, 466 + .vsync_start = 544 + 10, 467 + .vsync_end = 544 + 10 + 2, 468 + .vtotal = 544 + 10 + 2 + 10, 469 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 470 + .type = DRM_MODE_TYPE_DRIVER | DRM_MODE_TYPE_PREFERRED, 471 + }, 472 + { /* 50hz */ 473 + .clock = 27800, 474 + .hdisplay = 960, 475 + .hsync_start = 960 + 10, 476 + .hsync_end = 960 + 10 + 2, 477 + 
.htotal = 960 + 10 + 2 + 10, 478 + .vdisplay = 544, 479 + .vsync_start = 544 + 10, 480 + .vsync_end = 544 + 10 + 2, 481 + .vtotal = 544 + 10 + 2 + 10, 482 + .flags = DRM_MODE_FLAG_NHSYNC | DRM_MODE_FLAG_NVSYNC, 483 + .type = DRM_MODE_TYPE_DRIVER, 484 + }, 485 + }; 486 + 487 + static const struct d53e6ea8966_panel_info ams495qa01_info = { 488 + .display_modes = ams495qa01_modes, 489 + .num_modes = ARRAY_SIZE(ams495qa01_modes), 490 + .width_mm = 117, 491 + .height_mm = 74, 492 + .bus_flags = DRM_BUS_FLAG_DE_LOW | DRM_BUS_FLAG_PIXDATA_DRIVE_NEGEDGE, 493 + .panel_init_seq = ams495qa01_panel_init, 494 + .backlight_register = ams495qa01_backlight_register, 495 + }; 496 + 497 + static const struct of_device_id d53e6ea8966_match[] = { 498 + { .compatible = "samsung,ams495qa01", .data = &ams495qa01_info }, 499 + { /* sentinel */ }, 500 + }; 501 + MODULE_DEVICE_TABLE(of, d53e6ea8966_match); 502 + 503 + static const struct spi_device_id d53e6ea8966_ids[] = { 504 + { "ams495qa01", 0 }, 505 + { /* sentinel */ }, 506 + }; 507 + MODULE_DEVICE_TABLE(spi, d53e6ea8966_ids); 508 + 509 + static struct spi_driver d53e6ea8966_driver = { 510 + .driver = { 511 + .name = "d53e6ea8966-panel", 512 + .of_match_table = d53e6ea8966_match, 513 + }, 514 + .id_table = d53e6ea8966_ids, 515 + .probe = d53e6ea8966_probe, 516 + .remove = d53e6ea8966_remove, 517 + }; 518 + module_spi_driver(d53e6ea8966_driver); 519 + 520 + MODULE_AUTHOR("Chris Morgan <macromorgan@hotmail.com>"); 521 + MODULE_DESCRIPTION("Magnachip d53e6ea8966 panel driver"); 522 + MODULE_LICENSE("GPL");
+2 -9
drivers/gpu/drm/panfrost/panfrost_drv.c
··· 220 220 } 221 221 222 222 for (i = 0; i < in_fence_count; i++) { 223 - struct dma_fence *fence; 224 - 225 - ret = drm_syncobj_find_fence(file_priv, handles[i], 0, 0, 226 - &fence); 227 - if (ret) 228 - goto fail; 229 - 230 - ret = drm_sched_job_add_dependency(&job->base, fence); 231 - 223 + ret = drm_sched_job_add_syncobj_dependency(&job->base, file_priv, 224 + handles[i], 0); 232 225 if (ret) 233 226 goto fail; 234 227 }
+11
drivers/gpu/drm/qxl/qxl_ttm.c
···
 	struct ttm_resource *old_mem = bo->resource;
 	int ret;
 
+	if (!old_mem) {
+		if (new_mem->mem_type != TTM_PL_SYSTEM) {
+			hop->mem_type = TTM_PL_SYSTEM;
+			hop->flags = TTM_PL_FLAG_TEMPORARY;
+			return -EMULTIHOP;
+		}
+
+		ttm_bo_move_null(bo, new_mem);
+		return 0;
+	}
+
 	qxl_bo_move_notify(bo, new_mem);
 
 	ret = ttm_bo_wait_ctx(bo, ctx);
+1
drivers/gpu/drm/radeon/Kconfig
···
 	select DRM_DISPLAY_DP_HELPER
 	select DRM_DISPLAY_HELPER
 	select DRM_KMS_HELPER
+	select DRM_SUBALLOC_HELPER
 	select DRM_TTM
 	select DRM_TTM_HELPER
 	select SND_HDA_COMPONENT if SND_HDA_CORE
+8 -47
drivers/gpu/drm/radeon/radeon.h
···
 
 #include <drm/drm_gem.h>
 #include <drm/drm_audio_component.h>
+#include <drm/drm_suballoc.h>
 
 #include "radeon_family.h"
 #include "radeon_mode.h"
···
 };
 #define gem_to_radeon_bo(gobj) container_of((gobj), struct radeon_bo, tbo.base)
 
-/* sub-allocation manager, it has to be protected by another lock.
- * By conception this is an helper for other part of the driver
- * like the indirect buffer or semaphore, which both have their
- * locking.
- *
- * Principe is simple, we keep a list of sub allocation in offset
- * order (first entry has offset == 0, last entry has the highest
- * offset).
- *
- * When allocating new object we first check if there is room at
- * the end total_size - (last_object_offset + last_object_size) >=
- * alloc_size. If so we allocate new object there.
- *
- * When there is not enough room at the end, we start waiting for
- * each sub object until we reach object_offset+object_size >=
- * alloc_size, this object then become the sub object we return.
- *
- * Alignment can't be bigger than page size.
- *
- * Hole are not considered for allocation to keep things simple.
- * Assumption is that there won't be hole (all object on same
- * alignment).
- */
 struct radeon_sa_manager {
-	wait_queue_head_t	wq;
-	struct radeon_bo	*bo;
-	struct list_head	*hole;
-	struct list_head	flist[RADEON_NUM_RINGS];
-	struct list_head	olist;
-	unsigned		size;
-	uint64_t		gpu_addr;
-	void			*cpu_ptr;
-	uint32_t		domain;
-	uint32_t		align;
-};
-
-struct radeon_sa_bo;
-
-/* sub-allocation buffer */
-struct radeon_sa_bo {
-	struct list_head	olist;
-	struct list_head	flist;
-	struct radeon_sa_manager	*manager;
-	unsigned		soffset;
-	unsigned		eoffset;
-	struct radeon_fence	*fence;
+	struct drm_suballoc_manager	base;
+	struct radeon_bo		*bo;
+	uint64_t			gpu_addr;
+	void				*cpu_ptr;
+	u32				domain;
 };
 
 /*
···
  * Semaphores.
  */
 struct radeon_semaphore {
-	struct radeon_sa_bo	*sa_bo;
+	struct drm_suballoc	*sa_bo;
 	signed			waiters;
 	uint64_t		gpu_addr;
 };
···
  */
 
 struct radeon_ib {
-	struct radeon_sa_bo	*sa_bo;
+	struct drm_suballoc	*sa_bo;
 	uint32_t		length_dw;
 	uint64_t		gpu_addr;
 	uint32_t		*ptr;
+5 -7
drivers/gpu/drm/radeon/radeon_ib.c
···
 {
 	int r;
 
-	r = radeon_sa_bo_new(rdev, &rdev->ring_tmp_bo, &ib->sa_bo, size, 256);
+	r = radeon_sa_bo_new(&rdev->ring_tmp_bo, &ib->sa_bo, size, 256);
 	if (r) {
 		dev_err(rdev->dev, "failed to get a new IB (%d)\n", r);
 		return r;
···
 		/* ib pool is bound at RADEON_VA_IB_OFFSET in virtual address
 		 * space and soffset is the offset inside the pool bo
 		 */
-		ib->gpu_addr = ib->sa_bo->soffset + RADEON_VA_IB_OFFSET;
+		ib->gpu_addr = drm_suballoc_soffset(ib->sa_bo) + RADEON_VA_IB_OFFSET;
 	} else {
 		ib->gpu_addr = radeon_sa_bo_gpu_addr(ib->sa_bo);
 	}
···
 void radeon_ib_free(struct radeon_device *rdev, struct radeon_ib *ib)
 {
 	radeon_sync_free(rdev, &ib->sync, ib->fence);
-	radeon_sa_bo_free(rdev, &ib->sa_bo, ib->fence);
+	radeon_sa_bo_free(&ib->sa_bo, ib->fence);
 	radeon_fence_unref(&ib->fence);
 }
···
 
 	if (rdev->family >= CHIP_BONAIRE) {
 		r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
-					      RADEON_IB_POOL_SIZE*64*1024,
-					      RADEON_GPU_PAGE_SIZE,
+					      RADEON_IB_POOL_SIZE*64*1024, 256,
 					      RADEON_GEM_DOMAIN_GTT,
 					      RADEON_GEM_GTT_WC);
 	} else {
···
 		 * to the command stream checking
 		 */
 		r = radeon_sa_bo_manager_init(rdev, &rdev->ring_tmp_bo,
-					      RADEON_IB_POOL_SIZE*64*1024,
-					      RADEON_GPU_PAGE_SIZE,
+					      RADEON_IB_POOL_SIZE*64*1024, 256,
 					      RADEON_GEM_DOMAIN_GTT, 0);
 	}
 	if (r) {
+16 -11
drivers/gpu/drm/radeon/radeon_object.h
···
 /*
  * sub allocation
  */
-
-static inline uint64_t radeon_sa_bo_gpu_addr(struct radeon_sa_bo *sa_bo)
+static inline struct radeon_sa_manager *
+to_radeon_sa_manager(struct drm_suballoc_manager *manager)
 {
-	return sa_bo->manager->gpu_addr + sa_bo->soffset;
+	return container_of(manager, struct radeon_sa_manager, base);
 }
 
-static inline void * radeon_sa_bo_cpu_addr(struct radeon_sa_bo *sa_bo)
+static inline uint64_t radeon_sa_bo_gpu_addr(struct drm_suballoc *sa_bo)
 {
-	return sa_bo->manager->cpu_ptr + sa_bo->soffset;
+	return to_radeon_sa_manager(sa_bo->manager)->gpu_addr +
+		drm_suballoc_soffset(sa_bo);
+}
+
+static inline void *radeon_sa_bo_cpu_addr(struct drm_suballoc *sa_bo)
+{
+	return to_radeon_sa_manager(sa_bo->manager)->cpu_ptr +
+		drm_suballoc_soffset(sa_bo);
 }
 
 extern int radeon_sa_bo_manager_init(struct radeon_device *rdev,
···
 				     struct radeon_sa_manager *sa_manager);
 extern int radeon_sa_bo_manager_suspend(struct radeon_device *rdev,
 					struct radeon_sa_manager *sa_manager);
-extern int radeon_sa_bo_new(struct radeon_device *rdev,
-			    struct radeon_sa_manager *sa_manager,
-			    struct radeon_sa_bo **sa_bo,
-			    unsigned size, unsigned align);
-extern void radeon_sa_bo_free(struct radeon_device *rdev,
-			      struct radeon_sa_bo **sa_bo,
+extern int radeon_sa_bo_new(struct radeon_sa_manager *sa_manager,
+			    struct drm_suballoc **sa_bo,
+			    unsigned int size, unsigned int align);
+extern void radeon_sa_bo_free(struct drm_suballoc **sa_bo,
 			      struct radeon_fence *fence);
 #if defined(CONFIG_DEBUG_FS)
 extern void radeon_sa_bo_dump_debug_info(struct radeon_sa_manager *sa_manager,
+27 -291
drivers/gpu/drm/radeon/radeon_sa.c
···
 
 #include "radeon.h"
 
-static void radeon_sa_bo_remove_locked(struct radeon_sa_bo *sa_bo);
-static void radeon_sa_bo_try_free(struct radeon_sa_manager *sa_manager);
-
 int radeon_sa_bo_manager_init(struct radeon_device *rdev,
 			      struct radeon_sa_manager *sa_manager,
-			      unsigned size, u32 align, u32 domain, u32 flags)
+			      unsigned int size, u32 sa_align, u32 domain,
+			      u32 flags)
 {
-	int i, r;
+	int r;
 
-	init_waitqueue_head(&sa_manager->wq);
-	sa_manager->bo = NULL;
-	sa_manager->size = size;
-	sa_manager->domain = domain;
-	sa_manager->align = align;
-	sa_manager->hole = &sa_manager->olist;
-	INIT_LIST_HEAD(&sa_manager->olist);
-	for (i = 0; i < RADEON_NUM_RINGS; ++i) {
-		INIT_LIST_HEAD(&sa_manager->flist[i]);
-	}
-
-	r = radeon_bo_create(rdev, size, align, true,
+	r = radeon_bo_create(rdev, size, RADEON_GPU_PAGE_SIZE, true,
 			     domain, flags, NULL, NULL, &sa_manager->bo);
 	if (r) {
 		dev_err(rdev->dev, "(%d) failed to allocate bo for manager\n", r);
 		return r;
 	}
+
+	sa_manager->domain = domain;
+
+	drm_suballoc_manager_init(&sa_manager->base, size, sa_align);
 
 	return r;
 }
···
 void radeon_sa_bo_manager_fini(struct radeon_device *rdev,
 			       struct radeon_sa_manager *sa_manager)
 {
-	struct radeon_sa_bo *sa_bo, *tmp;
-
-	if (!list_empty(&sa_manager->olist)) {
-		sa_manager->hole = &sa_manager->olist,
-		radeon_sa_bo_try_free(sa_manager);
-		if (!list_empty(&sa_manager->olist)) {
-			dev_err(rdev->dev, "sa_manager is not empty, clearing anyway\n");
-		}
-	}
-	list_for_each_entry_safe(sa_bo, tmp, &sa_manager->olist, olist) {
-		radeon_sa_bo_remove_locked(sa_bo);
-	}
+	drm_suballoc_manager_fini(&sa_manager->base);
 	radeon_bo_unref(&sa_manager->bo);
-	sa_manager->size = 0;
 }
 
 int radeon_sa_bo_manager_start(struct radeon_device *rdev,
···
 	return r;
 }
 
-static void radeon_sa_bo_remove_locked(struct radeon_sa_bo *sa_bo)
+int radeon_sa_bo_new(struct radeon_sa_manager *sa_manager,
+		     struct drm_suballoc **sa_bo,
+		     unsigned int size, unsigned int align)
 {
-	struct radeon_sa_manager *sa_manager = sa_bo->manager;
-	if (sa_manager->hole == &sa_bo->olist) {
-		sa_manager->hole = sa_bo->olist.prev;
+	struct drm_suballoc *sa = drm_suballoc_new(&sa_manager->base, size,
+						   GFP_KERNEL, true, align);
+
+	if (IS_ERR(sa)) {
+		*sa_bo = NULL;
+		return PTR_ERR(sa);
 	}
-	list_del_init(&sa_bo->olist);
-	list_del_init(&sa_bo->flist);
-	radeon_fence_unref(&sa_bo->fence);
-	kfree(sa_bo);
-}
 
-static void radeon_sa_bo_try_free(struct radeon_sa_manager *sa_manager)
-{
-	struct radeon_sa_bo *sa_bo, *tmp;
-
-	if (sa_manager->hole->next == &sa_manager->olist)
-		return;
-
-	sa_bo = list_entry(sa_manager->hole->next, struct radeon_sa_bo, olist);
-	list_for_each_entry_safe_from(sa_bo, tmp, &sa_manager->olist, olist) {
-		if (sa_bo->fence == NULL || !radeon_fence_signaled(sa_bo->fence)) {
-			return;
-		}
-		radeon_sa_bo_remove_locked(sa_bo);
-	}
-}
-
-static inline unsigned radeon_sa_bo_hole_soffset(struct radeon_sa_manager *sa_manager)
-{
-	struct list_head *hole = sa_manager->hole;
-
-	if (hole != &sa_manager->olist) {
-		return list_entry(hole, struct radeon_sa_bo, olist)->eoffset;
-	}
+	*sa_bo = sa;
 	return 0;
 }
 
-static inline unsigned radeon_sa_bo_hole_eoffset(struct radeon_sa_manager *sa_manager)
-{
-	struct list_head *hole = sa_manager->hole;
-
-	if (hole->next != &sa_manager->olist) {
-		return list_entry(hole->next, struct radeon_sa_bo, olist)->soffset;
-	}
-	return sa_manager->size;
-}
-
-static bool radeon_sa_bo_try_alloc(struct radeon_sa_manager *sa_manager,
-				   struct radeon_sa_bo *sa_bo,
-				   unsigned size, unsigned align)
-{
-	unsigned soffset, eoffset, wasted;
-
-	soffset = radeon_sa_bo_hole_soffset(sa_manager);
-	eoffset = radeon_sa_bo_hole_eoffset(sa_manager);
-	wasted = (align - (soffset % align)) % align;
-
-	if ((eoffset - soffset) >= (size + wasted)) {
-		soffset += wasted;
-
-		sa_bo->manager = sa_manager;
-		sa_bo->soffset = soffset;
-		sa_bo->eoffset = soffset + size;
-		list_add(&sa_bo->olist, sa_manager->hole);
-		INIT_LIST_HEAD(&sa_bo->flist);
-		sa_manager->hole = &sa_bo->olist;
-		return true;
-	}
-	return false;
-}
-
-/**
- * radeon_sa_event - Check if we can stop waiting
- *
- * @sa_manager: pointer to the sa_manager
- * @size: number of bytes we want to allocate
- * @align: alignment we need to match
- *
- * Check if either there is a fence we can wait for or
- * enough free memory to satisfy the allocation directly
- */
-static bool radeon_sa_event(struct radeon_sa_manager *sa_manager,
-			    unsigned size, unsigned align)
-{
-	unsigned soffset, eoffset, wasted;
-	int i;
-
-	for (i = 0; i < RADEON_NUM_RINGS; ++i) {
-		if (!list_empty(&sa_manager->flist[i])) {
-			return true;
-		}
-	}
-
-	soffset = radeon_sa_bo_hole_soffset(sa_manager);
-	eoffset = radeon_sa_bo_hole_eoffset(sa_manager);
-	wasted = (align - (soffset % align)) % align;
-
-	if ((eoffset - soffset) >= (size + wasted)) {
-		return true;
-	}
-
-	return false;
-}
-
-static bool radeon_sa_bo_next_hole(struct radeon_sa_manager *sa_manager,
-				   struct radeon_fence **fences,
-				   unsigned *tries)
-{
-	struct radeon_sa_bo *best_bo = NULL;
-	unsigned i, soffset, best, tmp;
-
-	/* if hole points to the end of the buffer */
-	if (sa_manager->hole->next == &sa_manager->olist) {
-		/* try again with its beginning */
-		sa_manager->hole = &sa_manager->olist;
-		return true;
-	}
-
-	soffset = radeon_sa_bo_hole_soffset(sa_manager);
-	/* to handle wrap around we add sa_manager->size */
-	best = sa_manager->size * 2;
-	/* go over all fence list and try to find the closest sa_bo
-	 * of the current last
-	 */
-	for (i = 0; i < RADEON_NUM_RINGS; ++i) {
-		struct radeon_sa_bo *sa_bo;
-
-		fences[i] = NULL;
-
-		if (list_empty(&sa_manager->flist[i])) {
-			continue;
-		}
-
-		sa_bo = list_first_entry(&sa_manager->flist[i],
-					 struct radeon_sa_bo, flist);
-
-		if (!radeon_fence_signaled(sa_bo->fence)) {
-			fences[i] = sa_bo->fence;
-			continue;
-		}
-
-		/* limit the number of tries each ring gets */
-		if (tries[i] > 2) {
-			continue;
-		}
-
-		tmp = sa_bo->soffset;
-		if (tmp < soffset) {
-			/* wrap around, pretend it's after */
-			tmp += sa_manager->size;
-		}
-		tmp -= soffset;
-		if (tmp < best) {
-			/* this sa bo is the closest one */
-			best = tmp;
-			best_bo = sa_bo;
-		}
-	}
-
-	if (best_bo) {
-		++tries[best_bo->fence->ring];
-		sa_manager->hole = best_bo->olist.prev;
-
-		/* we knew that this one is signaled,
-		   so it's save to remote it */
-		radeon_sa_bo_remove_locked(best_bo);
-		return true;
-	}
-	return false;
-}
-
-int radeon_sa_bo_new(struct radeon_device *rdev,
-		     struct radeon_sa_manager *sa_manager,
-		     struct radeon_sa_bo **sa_bo,
-		     unsigned size, unsigned align)
-{
-	struct radeon_fence *fences[RADEON_NUM_RINGS];
-	unsigned tries[RADEON_NUM_RINGS];
-	int i, r;
-
-	BUG_ON(align > sa_manager->align);
-	BUG_ON(size > sa_manager->size);
-
-	*sa_bo = kmalloc(sizeof(struct radeon_sa_bo), GFP_KERNEL);
-	if ((*sa_bo) == NULL) {
-		return -ENOMEM;
-	}
-	(*sa_bo)->manager = sa_manager;
-	(*sa_bo)->fence = NULL;
-	INIT_LIST_HEAD(&(*sa_bo)->olist);
-	INIT_LIST_HEAD(&(*sa_bo)->flist);
-
-	spin_lock(&sa_manager->wq.lock);
-	do {
-		for (i = 0; i < RADEON_NUM_RINGS; ++i)
-			tries[i] = 0;
-
-		do {
-			radeon_sa_bo_try_free(sa_manager);
-
-			if (radeon_sa_bo_try_alloc(sa_manager, *sa_bo,
-						   size, align)) {
-				spin_unlock(&sa_manager->wq.lock);
-				return 0;
-			}
-
-			/* see if we can skip over some allocations */
-		} while (radeon_sa_bo_next_hole(sa_manager, fences, tries));
-
-		for (i = 0; i < RADEON_NUM_RINGS; ++i)
-			radeon_fence_ref(fences[i]);
-
-		spin_unlock(&sa_manager->wq.lock);
-		r = radeon_fence_wait_any(rdev, fences, false);
-		for (i = 0; i < RADEON_NUM_RINGS; ++i)
-			radeon_fence_unref(&fences[i]);
-		spin_lock(&sa_manager->wq.lock);
-		/* if we have nothing to wait for block */
-		if (r == -ENOENT) {
-			r = wait_event_interruptible_locked(
-				sa_manager->wq,
-				radeon_sa_event(sa_manager, size, align)
-			);
-		}
-
-	} while (!r);
-
-	spin_unlock(&sa_manager->wq.lock);
-	kfree(*sa_bo);
-	*sa_bo = NULL;
-	return r;
-}
-
-void radeon_sa_bo_free(struct radeon_device *rdev, struct radeon_sa_bo **sa_bo,
+void radeon_sa_bo_free(struct drm_suballoc **sa_bo,
 		       struct radeon_fence *fence)
 {
-	struct radeon_sa_manager *sa_manager;
-
 	if (sa_bo == NULL || *sa_bo == NULL) {
 		return;
 	}
 
-	sa_manager = (*sa_bo)->manager;
-	spin_lock(&sa_manager->wq.lock);
-	if (fence && !radeon_fence_signaled(fence)) {
-		(*sa_bo)->fence = radeon_fence_ref(fence);
-		list_add_tail(&(*sa_bo)->flist,
-			      &sa_manager->flist[fence->ring]);
-	} else {
-		radeon_sa_bo_remove_locked(*sa_bo);
-	}
-	wake_up_all_locked(&sa_manager->wq);
-	spin_unlock(&sa_manager->wq.lock);
+	if (fence)
+		drm_suballoc_free(*sa_bo, &fence->base);
+	else
+		drm_suballoc_free(*sa_bo, NULL);
+
 	*sa_bo = NULL;
 }
···
 void radeon_sa_bo_dump_debug_info(struct radeon_sa_manager *sa_manager,
 				  struct seq_file *m)
 {
-	struct radeon_sa_bo *i;
+	struct drm_printer p = drm_seq_file_printer(m);
 
-	spin_lock(&sa_manager->wq.lock);
-	list_for_each_entry(i, &sa_manager->olist, olist) {
-		uint64_t soffset = i->soffset + sa_manager->gpu_addr;
-		uint64_t eoffset = i->eoffset + sa_manager->gpu_addr;
-		if (&i->olist == sa_manager->hole) {
-			seq_printf(m, ">");
-		} else {
-			seq_printf(m, " ");
-		}
-		seq_printf(m, "[0x%010llx 0x%010llx] size %8lld",
-			   soffset, eoffset, eoffset - soffset);
-		if (i->fence) {
-			seq_printf(m, " protected by 0x%016llx on ring %d",
-				   i->fence->seq, i->fence->ring);
-		}
-		seq_printf(m, "\n");
-	}
-	spin_unlock(&sa_manager->wq.lock);
+	drm_suballoc_dump_debug_info(&sa_manager->base, &p, sa_manager->gpu_addr);
 }
 #endif
+2 -2
drivers/gpu/drm/radeon/radeon_semaphore.c
···
 	if (*semaphore == NULL) {
 		return -ENOMEM;
 	}
-	r = radeon_sa_bo_new(rdev, &rdev->ring_tmp_bo,
+	r = radeon_sa_bo_new(&rdev->ring_tmp_bo,
 			     &(*semaphore)->sa_bo, 8, 8);
 	if (r) {
 		kfree(*semaphore);
···
 		dev_err(rdev->dev, "semaphore %p has more waiters than signalers,"
 			" hardware lockup imminent!\n", *semaphore);
 	}
-	radeon_sa_bo_free(rdev, &(*semaphore)->sa_bo, fence);
+	radeon_sa_bo_free(&(*semaphore)->sa_bo, fence);
 	kfree(*semaphore);
 	*semaphore = NULL;
 }
+2 -5
drivers/gpu/drm/radeon/radeon_ttm.c
···
 	if (r)
 		return r;
 
-	/* Can't move a pinned BO */
 	rbo = container_of(bo, struct radeon_bo, tbo);
-	if (WARN_ON_ONCE(rbo->tbo.pin_count > 0))
-		return -EINVAL;
-
 	rdev = radeon_get_rdev(bo->bdev);
-	if (old_mem->mem_type == TTM_PL_SYSTEM && bo->ttm == NULL) {
+	if (!old_mem || (old_mem->mem_type == TTM_PL_SYSTEM &&
+			 bo->ttm == NULL)) {
 		ttm_bo_move_null(bo, new_mem);
 		goto out;
 	}
+10 -6
drivers/gpu/drm/rockchip/rockchip_drm_gem.c
···
 	else
 		ret = rockchip_drm_gem_object_mmap_dma(obj, vma);
 
-	if (ret)
-		drm_gem_vm_close(vma);
-
 	return ret;
 }
···
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
 	if (rk_obj->pages) {
-		void *vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
-				   pgprot_writecombine(PAGE_KERNEL));
+		void *vaddr;
+
+		if (rk_obj->kvaddr)
+			vaddr = rk_obj->kvaddr;
+		else
+			vaddr = vmap(rk_obj->pages, rk_obj->num_pages, VM_MAP,
+				     pgprot_writecombine(PAGE_KERNEL));
+
 		if (!vaddr)
 			return -ENOMEM;
 		iosys_map_set_vaddr(map, vaddr);
···
 	struct rockchip_gem_object *rk_obj = to_rockchip_obj(obj);
 
 	if (rk_obj->pages) {
-		vunmap(map->vaddr);
+		if (map->vaddr != rk_obj->kvaddr)
+			vunmap(map->vaddr);
 		return;
 	}
 
+2 -5
drivers/gpu/drm/rockchip/rockchip_drm_vop.c
···
 	case DRM_FORMAT_RGB565:
 	case DRM_FORMAT_BGR565:
 		return AFBC_FMT_RGB565;
-	/* either of the below should not be reachable */
 	default:
-		DRM_WARN_ONCE("unsupported AFBC format[%08x]\n", format);
+		DRM_DEBUG_KMS("unsupported AFBC format[%08x]\n", format);
 		return -EINVAL;
 	}
-
-	return -EINVAL;
 }
 
 static uint16_t scl_vop_cal_scale(enum scale_mode mode, uint32_t src,
···
 		goto err_disable_pm_runtime;
 
 	if (vop->data->feature & VOP_FEATURE_INTERNAL_RGB) {
-		vop->rgb = rockchip_rgb_init(dev, &vop->crtc, vop->drm_dev);
+		vop->rgb = rockchip_rgb_init(dev, &vop->crtc, vop->drm_dev, 0);
 		if (IS_ERR(vop->rgb)) {
 			ret = PTR_ERR(vop->rgb);
 			goto err_disable_pm_runtime;
+63 -17
drivers/gpu/drm/rockchip/rockchip_drm_vop2.c
···
 #include "rockchip_drm_gem.h"
 #include "rockchip_drm_fb.h"
 #include "rockchip_drm_vop2.h"
+#include "rockchip_rgb.h"
 
 /*
  * VOP2 architecture
···
 	unsigned int enable_count;
 	struct clk *hclk;
 	struct clk *aclk;
+
+	/* optional internal rgb encoder */
+	struct rockchip_rgb *rgb;
 
 	/* must be put at the end of the struct */
 	struct vop2_win win[];
···
 
 #define NR_LAYERS 6
 
-static int vop2_create_crtc(struct vop2 *vop2)
+static int vop2_create_crtcs(struct vop2 *vop2)
 {
 	const struct vop2_data *vop2_data = vop2->data;
 	struct drm_device *drm = vop2->drm;
···
 			/* change the unused primary window to overlay window */
 			win->type = DRM_PLANE_TYPE_OVERLAY;
 		}
-	}
-
-	if (win->type == DRM_PLANE_TYPE_OVERLAY)
+	} else if (win->type == DRM_PLANE_TYPE_OVERLAY) {
 		possible_crtcs = (1 << nvps) - 1;
+	} else {
+		possible_crtcs = 0;
+	}
 
 	ret = vop2_plane_init(vop2, win, possible_crtcs);
 	if (ret) {
···
 	return 0;
 }
 
-static void vop2_destroy_crtc(struct drm_crtc *crtc)
+static void vop2_destroy_crtcs(struct vop2 *vop2)
 {
-	of_node_put(crtc->port);
+	struct drm_device *drm = vop2->drm;
+	struct list_head *crtc_list = &drm->mode_config.crtc_list;
+	struct list_head *plane_list = &drm->mode_config.plane_list;
+	struct drm_crtc *crtc, *tmpc;
+	struct drm_plane *plane, *tmpp;
+
+	list_for_each_entry_safe(plane, tmpp, plane_list, head)
+		drm_plane_cleanup(plane);
 
 	/*
 	 * Destroy CRTC after vop2_plane_destroy() since vop2_disable_plane()
 	 * references the CRTC.
 	 */
-	drm_crtc_cleanup(crtc);
+	list_for_each_entry_safe(crtc, tmpc, crtc_list, head) {
+		of_node_put(crtc->port);
+		drm_crtc_cleanup(crtc);
+	}
+}
+
+static int vop2_find_rgb_encoder(struct vop2 *vop2)
+{
+	struct device_node *node = vop2->dev->of_node;
+	struct device_node *endpoint;
+	int i;
+
+	for (i = 0; i < vop2->data->nr_vps; i++) {
+		endpoint = of_graph_get_endpoint_by_regs(node, i,
+							 ROCKCHIP_VOP2_EP_RGB0);
+		if (!endpoint)
+			continue;
+
+		of_node_put(endpoint);
+		return i;
+	}
+
+	return -ENOENT;
 }
 
 static struct reg_field vop2_cluster_regs[VOP2_WIN_MAX_REG] = {
···
 	if (ret)
 		return ret;
 
-	ret = vop2_create_crtc(vop2);
+	ret = vop2_create_crtcs(vop2);
 	if (ret)
 		return ret;
+
+	ret = vop2_find_rgb_encoder(vop2);
+	if (ret >= 0) {
+		vop2->rgb = rockchip_rgb_init(dev, &vop2->vps[ret].crtc,
+					      vop2->drm, ret);
+		if (IS_ERR(vop2->rgb)) {
+			if (PTR_ERR(vop2->rgb) == -EPROBE_DEFER) {
+				ret = PTR_ERR(vop2->rgb);
+				goto err_crtcs;
+			}
+			vop2->rgb = NULL;
+		}
+	}
 
 	rockchip_drm_dma_init_device(vop2->drm, vop2->dev);
 
 	pm_runtime_enable(&pdev->dev);
 
 	return 0;
+
+err_crtcs:
+	vop2_destroy_crtcs(vop2);
+
+	return ret;
 }
 
 static void vop2_unbind(struct device *dev, struct device *master, void *data)
 {
 	struct vop2 *vop2 = dev_get_drvdata(dev);
-	struct drm_device *drm = vop2->drm;
-	struct list_head *plane_list = &drm->mode_config.plane_list;
-	struct list_head *crtc_list = &drm->mode_config.crtc_list;
-	struct drm_crtc *crtc, *tmpc;
-	struct drm_plane *plane, *tmpp;
 
 	pm_runtime_disable(dev);
 
-	list_for_each_entry_safe(plane, tmpp, plane_list, head)
-		drm_plane_cleanup(plane);
+	if (vop2->rgb)
+		rockchip_rgb_fini(vop2->rgb);
 
-	list_for_each_entry_safe(crtc, tmpc, crtc_list, head)
-		vop2_destroy_crtc(crtc);
+	vop2_destroy_crtcs(vop2);
 }
 
 const struct component_ops vop2_component_ops = {
+10 -9
drivers/gpu/drm/rockchip/rockchip_rgb.c
···
 #include "rockchip_drm_vop.h"
 #include "rockchip_rgb.h"
 
-#define encoder_to_rgb(c) container_of(c, struct rockchip_rgb, encoder)
-
 struct rockchip_rgb {
 	struct device *dev;
 	struct drm_device *drm_dev;
 	struct drm_bridge *bridge;
-	struct drm_encoder encoder;
+	struct rockchip_encoder encoder;
 	struct drm_connector connector;
 	int output_mode;
 };
···
 
 struct rockchip_rgb *rockchip_rgb_init(struct device *dev,
 				       struct drm_crtc *crtc,
-				       struct drm_device *drm_dev)
+				       struct drm_device *drm_dev,
+				       int video_port)
 {
 	struct rockchip_rgb *rgb;
 	struct drm_encoder *encoder;
···
 	rgb->dev = dev;
 	rgb->drm_dev = drm_dev;
 
-	port = of_graph_get_port_by_id(dev->of_node, 0);
+	port = of_graph_get_port_by_id(dev->of_node, video_port);
 	if (!port)
 		return ERR_PTR(-EINVAL);
 
···
 			continue;
 
 		child_count++;
-		ret = drm_of_find_panel_or_bridge(dev->of_node, 0, endpoint_id,
-						  &panel, &bridge);
+		ret = drm_of_find_panel_or_bridge(dev->of_node, video_port,
+						  endpoint_id, &panel, &bridge);
 		if (!ret) {
 			of_node_put(endpoint);
 			break;
···
 		return ERR_PTR(ret);
 	}
 
-	encoder = &rgb->encoder;
+	encoder = &rgb->encoder.encoder;
 	encoder->possible_crtcs = drm_crtc_mask(crtc);
 
 	ret = drm_simple_encoder_init(drm_dev, encoder, DRM_MODE_ENCODER_NONE);
···
 		goto err_free_encoder;
 	}
 
+	rgb->encoder.crtc_endpoint_id = endpoint_id;
+
 	ret = drm_connector_attach_encoder(connector, encoder);
 	if (ret < 0) {
 		DRM_DEV_ERROR(drm_dev->dev,
···
 {
 	drm_panel_bridge_remove(rgb->bridge);
 	drm_connector_cleanup(&rgb->connector);
-	drm_encoder_cleanup(&rgb->encoder);
+	drm_encoder_cleanup(&rgb->encoder.encoder);
 }
 EXPORT_SYMBOL_GPL(rockchip_rgb_fini);
+4 -2
drivers/gpu/drm/rockchip/rockchip_rgb.h
···
 #ifdef CONFIG_ROCKCHIP_RGB
 struct rockchip_rgb *rockchip_rgb_init(struct device *dev,
 				       struct drm_crtc *crtc,
-				       struct drm_device *drm_dev);
+				       struct drm_device *drm_dev,
+				       int video_port);
 void rockchip_rgb_fini(struct rockchip_rgb *rgb);
 #else
 static inline struct rockchip_rgb *rockchip_rgb_init(struct device *dev,
 						     struct drm_crtc *crtc,
-						     struct drm_device *drm_dev)
+						     struct drm_device *drm_dev,
+						     int video_port)
 {
 	return NULL;
 }
+29
drivers/gpu/drm/scheduler/sched_main.c
···
 
 #include <drm/drm_print.h>
 #include <drm/drm_gem.h>
+#include <drm/drm_syncobj.h>
 #include <drm/gpu_scheduler.h>
 #include <drm/spsc_queue.h>
 
···
 	return ret;
 }
 EXPORT_SYMBOL(drm_sched_job_add_dependency);
+
+/**
+ * drm_sched_job_add_syncobj_dependency - adds a syncobj's fence as a job dependency
+ * @job: scheduler job to add the dependencies to
+ * @file_private: drm file private pointer
+ * @handle: syncobj handle to lookup
+ * @point: timeline point
+ *
+ * This adds the fence matching the given syncobj to @job.
+ *
+ * Returns:
+ * 0 on success, or an error on failing to expand the array.
+ */
+int drm_sched_job_add_syncobj_dependency(struct drm_sched_job *job,
+					 struct drm_file *file,
+					 u32 handle,
+					 u32 point)
+{
+	struct dma_fence *fence;
+	int ret;
+
+	ret = drm_syncobj_find_fence(file, handle, point, 0, &fence);
+	if (ret)
+		return ret;
+
+	return drm_sched_job_add_dependency(job, fence);
+}
+EXPORT_SYMBOL(drm_sched_job_add_syncobj_dependency);
 
 /**
  * drm_sched_job_add_resv_dependencies - add all fences from the resv to the job
+5 -5
drivers/gpu/drm/tests/drm_format_helper_test.c
···
 
 	drm_fb_xrgb8888_to_xrgb1555(&dst, &result->dst_pitch, &src, &fb, &params->clip);
 	buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 
 static void drm_test_fb_xrgb8888_to_argb1555(struct kunit *test)
···
 
 	drm_fb_xrgb8888_to_argb1555(&dst, &result->dst_pitch, &src, &fb, &params->clip);
 	buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 
 static void drm_test_fb_xrgb8888_to_rgba5551(struct kunit *test)
···
 
 	drm_fb_xrgb8888_to_rgba5551(&dst, &result->dst_pitch, &src, &fb, &params->clip);
 	buf = le16buf_to_cpu(test, (__force const __le16 *)buf, dst_size / sizeof(__le16));
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 
 static void drm_test_fb_xrgb8888_to_rgb888(struct kunit *test)
···
 
 	drm_fb_xrgb8888_to_argb8888(&dst, &result->dst_pitch, &src, &fb, &params->clip);
 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 
 static void drm_test_fb_xrgb8888_to_xrgb2101010(struct kunit *test)
···
 
 	drm_fb_xrgb8888_to_argb2101010(&dst, &result->dst_pitch, &src, &fb, &params->clip);
 	buf = le32buf_to_cpu(test, (__force const __le32 *)buf, dst_size / sizeof(u32));
-	KUNIT_EXPECT_EQ(test, memcmp(buf, result->expected, dst_size), 0);
+	KUNIT_EXPECT_MEMEQ(test, buf, result->expected, dst_size);
 }
 
 static struct kunit_case drm_format_helper_test_cases[] = {
+4 -8
drivers/gpu/drm/tidss/tidss_dispc.c
···
 		(y * fb->pitches[1] / fb->format->vsub);
 }
 
-int dispc_plane_setup(struct dispc_device *dispc, u32 hw_plane,
-		      const struct drm_plane_state *state,
-		      u32 hw_videoport)
+void dispc_plane_setup(struct dispc_device *dispc, u32 hw_plane,
+		       const struct drm_plane_state *state,
+		       u32 hw_videoport)
 {
 	bool lite = dispc->feat->vid_lite[hw_plane];
 	u32 fourcc = state->fb->format->format;
···
 	else
 		VID_REG_FLD_MOD(dispc, hw_plane, DISPC_VID_ATTRIBUTES, 0,
 				28, 28);
-
-	return 0;
 }
 
-int dispc_plane_enable(struct dispc_device *dispc, u32 hw_plane, bool enable)
+void dispc_plane_enable(struct dispc_device *dispc, u32 hw_plane, bool enable)
 {
 	VID_REG_FLD_MOD(dispc, hw_plane, DISPC_VID_ATTRIBUTES, !!enable, 0, 0);
-
-	return 0;
 }
 
 static u32 dispc_vid_get_fifo_size(struct dispc_device *dispc, u32 hw_plane)
+4 -4
drivers/gpu/drm/tidss/tidss_dispc.h
···
 int dispc_plane_check(struct dispc_device *dispc, u32 hw_plane,
 		      const struct drm_plane_state *state,
 		      u32 hw_videoport);
-int dispc_plane_setup(struct dispc_device *dispc, u32 hw_plane,
-		      const struct drm_plane_state *state,
-		      u32 hw_videoport);
-int dispc_plane_enable(struct dispc_device *dispc, u32 hw_plane, bool enable);
+void dispc_plane_setup(struct dispc_device *dispc, u32 hw_plane,
+		       const struct drm_plane_state *state,
+		       u32 hw_videoport);
+void dispc_plane_enable(struct dispc_device *dispc, u32 hw_plane, bool enable);
 const u32 *dispc_plane_formats(struct dispc_device *dispc, unsigned int *len);
 
 int dispc_init(struct tidss_device *tidss);
+11 -9
drivers/gpu/drm/tidss/tidss_plane.c
···
 	struct drm_plane_state *new_state = drm_atomic_get_new_plane_state(state,
 									   plane);
 	u32 hw_videoport;
-	int ret;
 
 	dev_dbg(ddev->dev, "%s\n", __func__);
···
 
 	hw_videoport = to_tidss_crtc(new_state->crtc)->hw_videoport;
 
-	ret = dispc_plane_setup(tidss->dispc, tplane->hw_plane_id,
-				new_state, hw_videoport);
+	dispc_plane_setup(tidss->dispc, tplane->hw_plane_id, new_state, hw_videoport);
+}
 
-	if (ret) {
-		dev_err(plane->dev->dev, "%s: Failed to setup plane %d\n",
-			__func__, tplane->hw_plane_id);
-		dispc_plane_enable(tidss->dispc, tplane->hw_plane_id, false);
-		return;
-	}
+static void tidss_plane_atomic_enable(struct drm_plane *plane,
+				      struct drm_atomic_state *state)
+{
+	struct drm_device *ddev = plane->dev;
+	struct tidss_device *tidss = to_tidss(ddev);
+	struct tidss_plane *tplane = to_tidss_plane(plane);
+
+	dev_dbg(ddev->dev, "%s\n", __func__);
 
 	dispc_plane_enable(tidss->dispc, tplane->hw_plane_id, true);
 }
···
 static const struct drm_plane_helper_funcs tidss_plane_helper_funcs = {
 	.atomic_check = tidss_plane_atomic_check,
 	.atomic_update = tidss_plane_atomic_update,
+	.atomic_enable = tidss_plane_atomic_enable,
 	.atomic_disable = tidss_plane_atomic_disable,
 };
+22 -9
drivers/gpu/drm/tiny/simpledrm.c
···
  */
 
 static struct drm_display_mode simpledrm_mode(unsigned int width,
-					      unsigned int height)
+					      unsigned int height,
+					      unsigned int width_mm,
+					      unsigned int height_mm)
 {
-	/*
-	 * Assume a monitor resolution of 96 dpi to
-	 * get a somewhat reasonable screen size.
-	 */
 	const struct drm_display_mode mode = {
-		DRM_MODE_INIT(60, width, height,
-			      DRM_MODE_RES_MM(width, 96ul),
-			      DRM_MODE_RES_MM(height, 96ul))
+		DRM_MODE_INIT(60, width, height, width_mm, height_mm)
 	};
 
 	return mode;
···
 	struct simpledrm_device *sdev;
 	struct drm_device *dev;
 	int width, height, stride;
+	int width_mm = 0, height_mm = 0;
+	struct device_node *panel_node;
 	const struct drm_format_info *format;
 	struct resource *res, *mem = NULL;
 	struct drm_plane *primary_plane;
···
 		mem = simplefb_get_memory_of(dev, of_node);
 		if (IS_ERR(mem))
 			return ERR_CAST(mem);
+		panel_node = of_parse_phandle(of_node, "panel", 0);
+		if (panel_node) {
+			simplefb_read_u32_of(dev, panel_node, "width-mm", &width_mm);
+			simplefb_read_u32_of(dev, panel_node, "height-mm", &height_mm);
+			of_node_put(panel_node);
+		}
 	} else {
 		drm_err(dev, "no simplefb configuration found\n");
 		return ERR_PTR(-ENODEV);
···
 		return ERR_PTR(-EINVAL);
 	}
 
-	sdev->mode = simpledrm_mode(width, height);
+	/*
+	 * Assume a monitor resolution of 96 dpi if physical dimensions
+	 * are not specified to get a somewhat reasonable screen size.
+	 */
+	if (!width_mm)
+		width_mm = DRM_MODE_RES_MM(width, 96ul);
+	if (!height_mm)
+		height_mm = DRM_MODE_RES_MM(height, 96ul);
+
+	sdev->mode = simpledrm_mode(width, height, width_mm, height_mm);
 	sdev->format = format;
 	sdev->pitch = stride;
+13 -17
drivers/gpu/drm/ttm/ttm_bo.c
···
 	bool old_use_tt, new_use_tt;
 	int ret;
 
-	old_use_tt = bo->resource &&
-		     ttm_manager_type(bdev, bo->resource->mem_type)->use_tt;
+	old_use_tt = !bo->resource || ttm_manager_type(bdev, bo->resource->mem_type)->use_tt;
 	new_use_tt = ttm_manager_type(bdev, mem->mem_type)->use_tt;
 
 	ttm_bo_unmap_virtual(bo);
···
 	if (!placement->num_placement && !placement->num_busy_placement)
 		return ttm_bo_pipeline_gutting(bo);
 
-	/*
-	 * Check whether we need to move buffer.
-	 */
-	if (!bo->resource || !ttm_resource_compat(bo->resource, placement)) {
-		ret = ttm_bo_move_buffer(bo, placement, ctx);
-		if (ret)
-			return ret;
-	}
+	/* Check whether we need to move buffer. */
+	if (bo->resource && ttm_resource_compat(bo->resource, placement))
+		return 0;
+
+	/* Moving of pinned BOs is forbidden */
+	if (bo->pin_count)
+		return -EINVAL;
+
+	ret = ttm_bo_move_buffer(bo, placement, ctx);
+	if (ret)
+		return ret;
+
 	/*
 	 * We might need to add a TTM.
 	 */
···
 		 struct sg_table *sg, struct dma_resv *resv,
 		 void (*destroy) (struct ttm_buffer_object *))
 {
-	static const struct ttm_place sys_mem = { .mem_type = TTM_PL_SYSTEM };
 	int ret;
 
 	kref_init(&bo->kref);
···
 	else
 		bo->base.resv = &bo->base._resv;
 	atomic_inc(&ttm_glob.bo_count);
-
-	ret = ttm_resource_alloc(bo, &sys_mem, &bo->resource);
-	if (unlikely(ret)) {
-		ttm_bo_put(bo);
-		return ret;
-	}
 
 	/*
 	 * For ttm_bo_type_device buffers, allocate
+4 -15
drivers/gpu/drm/ttm/ttm_bo_util.c
···
 	bool clear;
 	int ret = 0;
 
-	if (!src_mem)
-		return 0;
+	if (WARN_ON(!src_mem))
+		return -EINVAL;
 
 	src_man = ttm_manager_type(bdev, src_mem->mem_type);
 	if (ttm && ((ttm->page_flags & TTM_TT_FLAG_SWAPPED) ||
···
  */
 int ttm_bo_pipeline_gutting(struct ttm_buffer_object *bo)
 {
-	static const struct ttm_place sys_mem = { .mem_type = TTM_PL_SYSTEM };
 	struct ttm_buffer_object *ghost;
-	struct ttm_resource *sys_res;
 	struct ttm_tt *ttm;
 	int ret;
-
-	ret = ttm_resource_alloc(bo, &sys_mem, &sys_res);
-	if (ret)
-		return ret;
 
 	/* If already idle, no need for ghost object dance. */
 	if (dma_resv_test_signaled(bo->base.resv, DMA_RESV_USAGE_BOOKKEEP)) {
···
 		/* See comment below about clearing. */
 		ret = ttm_tt_create(bo, true);
 		if (ret)
-			goto error_free_sys_mem;
+			return ret;
 	} else {
 		ttm_tt_unpopulate(bo->bdev, bo->ttm);
 		if (bo->type == ttm_bo_type_device)
 			ttm_tt_mark_for_clear(bo->ttm);
 	}
 	ttm_resource_free(bo, &bo->resource);
-	ttm_bo_assign_mem(bo, sys_res);
 	return 0;
 }
···
 	ret = ttm_tt_create(bo, true);
 	swap(bo->ttm, ttm);
 	if (ret)
-		goto error_free_sys_mem;
+		return ret;
 
 	ret = ttm_buffer_object_transfer(bo, &ghost);
 	if (ret)
···
 	dma_resv_unlock(&ghost->base._resv);
 	ttm_bo_put(ghost);
 	bo->ttm = ttm;
-	ttm_bo_assign_mem(bo, sys_res);
 	return 0;
 
 error_destroy_tt:
 	ttm_tt_destroy(bo->bdev, ttm);
-
-error_free_sys_mem:
-	ttm_resource_free(bo, &sys_res);
 	return ret;
 }
-1
drivers/gpu/drm/ttm/ttm_resource.c
···
 
 	return false;
 }
-EXPORT_SYMBOL(ttm_resource_compat);
 
 void ttm_resource_set_bo(struct ttm_resource *res,
 			 struct ttm_buffer_object *bo)
+8 -18
drivers/gpu/drm/v3d/v3d_gem.c
···
 }
 
 static int
-v3d_job_add_deps(struct drm_file *file_priv, struct v3d_job *job,
-		 u32 in_sync, u32 point)
-{
-	struct dma_fence *in_fence = NULL;
-	int ret;
-
-	ret = drm_syncobj_find_fence(file_priv, in_sync, point, 0, &in_fence);
-	if (ret == -EINVAL)
-		return ret;
-
-	return drm_sched_job_add_dependency(&job->base, in_fence);
-}
-
-static int
 v3d_job_init(struct v3d_dev *v3d, struct drm_file *file_priv,
 	     void **container, size_t size, void (*free)(struct kref *ref),
 	     u32 in_sync, struct v3d_submit_ext *se, enum v3d_queue queue)
···
 					DRM_DEBUG("Failed to copy wait dep handle.\n");
 					goto fail_deps;
 				}
-				ret = v3d_job_add_deps(file_priv, job, in.handle, 0);
-				if (ret)
+				ret = drm_sched_job_add_syncobj_dependency(&job->base, file_priv, in.handle, 0);
+
+				// TODO: Investigate why this was filtered out for the IOCTL.
+				if (ret && ret != -ENOENT)
 					goto fail_deps;
 			}
 		}
 	} else {
-		ret = v3d_job_add_deps(file_priv, job, in_sync, 0);
-		if (ret)
+		ret = drm_sched_job_add_syncobj_dependency(&job->base, file_priv, in_sync, 0);
+
+		// TODO: Investigate why this was filtered out for the IOCTL.
+		if (ret && ret != -ENOENT)
 			goto fail_deps;
 	}
+1 -1
drivers/gpu/drm/vc4/vc4_drv.h
···
 	/* This is the array of BOs that were looked up at the start of exec.
 	 * Command validation will use indices into this array.
 	 */
-	struct drm_gem_dma_object **bo;
+	struct drm_gem_object **bo;
 	uint32_t bo_count;
 
 	/* List of BOs that are being written by the RCL. Other than
+18 -60
drivers/gpu/drm/vc4/vc4_gem.c
···
 			continue;
 
 		for (j = 0; j < exec[i]->bo_count; j++) {
-			bo = to_vc4_bo(&exec[i]->bo[j]->base);
+			bo = to_vc4_bo(exec[i]->bo[j]);
 
 			/* Retain BOs just in case they were marked purgeable.
 			 * This prevents the BO from being purged before
···
 			 */
 			WARN_ON(!refcount_read(&bo->usecnt));
 			refcount_inc(&bo->usecnt);
-			drm_gem_object_get(&exec[i]->bo[j]->base);
-			kernel_state->bo[k++] = &exec[i]->bo[j]->base;
+			drm_gem_object_get(exec[i]->bo[j]);
+			kernel_state->bo[k++] = exec[i]->bo[j];
 		}
 
 		list_for_each_entry(bo, &exec[i]->unref_list, unref_head) {
···
 	unsigned i;
 
 	for (i = 0; i < exec->bo_count; i++) {
-		bo = to_vc4_bo(&exec->bo[i]->base);
+		bo = to_vc4_bo(exec->bo[i]);
 		bo->seqno = seqno;
 
 		dma_resv_add_fence(bo->base.base.resv, exec->fence,
···
 {
 	int i;
 
-	for (i = 0; i < exec->bo_count; i++) {
-		struct drm_gem_object *bo = &exec->bo[i]->base;
-
-		dma_resv_unlock(bo->resv);
-	}
+	for (i = 0; i < exec->bo_count; i++)
+		dma_resv_unlock(exec->bo[i]->resv);
 
 	ww_acquire_fini(acquire_ctx);
 }
···
 
retry:
 	if (contended_lock != -1) {
-		bo = &exec->bo[contended_lock]->base;
+		bo = exec->bo[contended_lock];
 		ret = dma_resv_lock_slow_interruptible(bo->resv, acquire_ctx);
 		if (ret) {
 			ww_acquire_done(acquire_ctx);
···
 		if (i == contended_lock)
 			continue;
 
-		bo = &exec->bo[i]->base;
+		bo = exec->bo[i];
 
 		ret = dma_resv_lock_interruptible(bo->resv, acquire_ctx);
 		if (ret) {
 			int j;
 
 			for (j = 0; j < i; j++) {
-				bo = &exec->bo[j]->base;
+				bo = exec->bo[j];
 				dma_resv_unlock(bo->resv);
 			}
 
 			if (contended_lock != -1 && contended_lock >= i) {
-				bo = &exec->bo[contended_lock]->base;
+				bo = exec->bo[contended_lock];
 
 				dma_resv_unlock(bo->resv);
 			}
···
 	 * before we commit the CL to the hardware.
 	 */
 	for (i = 0; i < exec->bo_count; i++) {
-		bo = &exec->bo[i]->base;
+		bo = exec->bo[i];
 
 		ret = dma_resv_reserve_fences(bo->resv, 1);
 		if (ret) {
···
 		  struct vc4_exec_info *exec)
 {
 	struct drm_vc4_submit_cl *args = exec->args;
-	uint32_t *handles;
 	int ret = 0;
 	int i;
 
···
 		return -EINVAL;
 	}
 
-	exec->bo = kvmalloc_array(exec->bo_count,
-				  sizeof(struct drm_gem_dma_object *),
-				  GFP_KERNEL | __GFP_ZERO);
-	if (!exec->bo) {
-		DRM_ERROR("Failed to allocate validated BO pointers\n");
-		return -ENOMEM;
-	}
-
-	handles = kvmalloc_array(exec->bo_count, sizeof(uint32_t), GFP_KERNEL);
-	if (!handles) {
-		ret = -ENOMEM;
-		DRM_ERROR("Failed to allocate incoming GEM handles\n");
-		goto fail;
-	}
-
-	if (copy_from_user(handles, u64_to_user_ptr(args->bo_handles),
-			   exec->bo_count * sizeof(uint32_t))) {
-		ret = -EFAULT;
-		DRM_ERROR("Failed to copy in GEM handles\n");
-		goto fail;
-	}
-
-	spin_lock(&file_priv->table_lock);
-	for (i = 0; i < exec->bo_count; i++) {
-		struct drm_gem_object *bo = idr_find(&file_priv->object_idr,
-						     handles[i]);
-		if (!bo) {
-			DRM_DEBUG("Failed to look up GEM BO %d: %d\n",
-				  i, handles[i]);
-			ret = -EINVAL;
-			break;
-		}
-
-		drm_gem_object_get(bo);
-		exec->bo[i] = (struct drm_gem_dma_object *)bo;
-	}
-	spin_unlock(&file_priv->table_lock);
+	ret = drm_gem_objects_lookup(file_priv, u64_to_user_ptr(args->bo_handles),
+				     exec->bo_count, &exec->bo);
 
 	if (ret)
 		goto fail_put_bo;
 
 	for (i = 0; i < exec->bo_count; i++) {
-		ret = vc4_bo_inc_usecnt(to_vc4_bo(&exec->bo[i]->base));
+		ret = vc4_bo_inc_usecnt(to_vc4_bo(exec->bo[i]));
 		if (ret)
 			goto fail_dec_usecnt;
 	}
 
-	kvfree(handles);
 	return 0;
 
fail_dec_usecnt:
···
 	 * step.
 	 */
 	for (i-- ; i >= 0; i--)
-		vc4_bo_dec_usecnt(to_vc4_bo(&exec->bo[i]->base));
+		vc4_bo_dec_usecnt(to_vc4_bo(exec->bo[i]));
 
fail_put_bo:
 	/* Release any reference to acquired objects. */
 	for (i = 0; i < exec->bo_count && exec->bo[i]; i++)
-		drm_gem_object_put(&exec->bo[i]->base);
+		drm_gem_object_put(exec->bo[i]);
 
-fail:
-	kvfree(handles);
 	kvfree(exec->bo);
 	exec->bo = NULL;
 	return ret;
···
 
 	if (exec->bo) {
 		for (i = 0; i < exec->bo_count; i++) {
-			struct vc4_bo *bo = to_vc4_bo(&exec->bo[i]->base);
+			struct vc4_bo *bo = to_vc4_bo(exec->bo[i]);
 
 			vc4_bo_dec_usecnt(bo);
-			drm_gem_object_put(&exec->bo[i]->base);
+			drm_gem_object_put(exec->bo[i]);
 		}
 		kvfree(exec->bo);
 	}
+13 -33
drivers/gpu/drm/vc4/vc4_hdmi.c
···
 	if (!drm_dev_enter(drm, &idx))
 		goto out;
 
+	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
+	if (ret < 0) {
+		DRM_ERROR("Failed to retain power domain: %d\n", ret);
+		goto err_dev_exit;
+	}
+
 	/*
 	 * As stated in RPi's vc4 firmware "HDMI state machine (HSM) clock must
 	 * be faster than pixel clock, infinitesimally faster, tested in
···
 	 * Additionally, the AXI clock needs to be at least 25% of
 	 * pixel clock, but HSM ends up being the limiting factor.
 	 */
-	hsm_rate = max_t(unsigned long, 120000000, (tmds_char_rate / 100) * 101);
+	hsm_rate = max_t(unsigned long,
+			 HSM_MIN_CLOCK_FREQ,
+			 (tmds_char_rate / 100) * 101);
 	ret = clk_set_min_rate(vc4_hdmi->hsm_clock, hsm_rate);
 	if (ret) {
 		DRM_ERROR("Failed to set HSM clock rate: %d\n", ret);
-		goto err_dev_exit;
-	}
-
-	ret = pm_runtime_resume_and_get(&vc4_hdmi->pdev->dev);
-	if (ret < 0) {
-		DRM_ERROR("Failed to retain power domain: %d\n", ret);
-		goto err_dev_exit;
+		goto err_put_runtime_pm;
 	}
 
 	ret = clk_set_rate(vc4_hdmi->pixel_clock, tmds_char_rate);
···
 		DRM_ERROR("Failed to get HDMI state machine clock\n");
 		return PTR_ERR(vc4_hdmi->hsm_clock);
 	}
-
 	vc4_hdmi->audio_clock = vc4_hdmi->hsm_clock;
 	vc4_hdmi->cec_clock = vc4_hdmi->hsm_clock;
-
-	vc4_hdmi->hsm_rpm_clock = devm_clk_get(dev, "hdmi");
-	if (IS_ERR(vc4_hdmi->hsm_rpm_clock)) {
-		DRM_ERROR("Failed to get HDMI state machine clock\n");
-		return PTR_ERR(vc4_hdmi->hsm_rpm_clock);
-	}
 
 	return 0;
 }
···
 		return PTR_ERR(vc4_hdmi->hsm_clock);
 	}
 
-	vc4_hdmi->hsm_rpm_clock = devm_clk_get(dev, "hdmi");
-	if (IS_ERR(vc4_hdmi->hsm_rpm_clock)) {
-		DRM_ERROR("Failed to get HDMI state machine clock\n");
-		return PTR_ERR(vc4_hdmi->hsm_rpm_clock);
-	}
-
 	vc4_hdmi->pixel_bvb_clock = devm_clk_get(dev, "bvb");
 	if (IS_ERR(vc4_hdmi->pixel_bvb_clock)) {
 		DRM_ERROR("Failed to get pixel bvb clock\n");
···
 {
 	struct vc4_hdmi *vc4_hdmi = dev_get_drvdata(dev);
 
-	clk_disable_unprepare(vc4_hdmi->hsm_rpm_clock);
+	clk_disable_unprepare(vc4_hdmi->hsm_clock);
 
 	return 0;
 }
···
 	unsigned long rate;
 	int ret;
 
-	/*
-	 * The HSM clock is in the HDMI power domain, so we need to set
-	 * its frequency while the power domain is active so that it
-	 * keeps its rate.
-	 */
-	ret = clk_set_min_rate(vc4_hdmi->hsm_rpm_clock, HSM_MIN_CLOCK_FREQ);
-	if (ret)
-		return ret;
-
-	ret = clk_prepare_enable(vc4_hdmi->hsm_rpm_clock);
+	ret = clk_prepare_enable(vc4_hdmi->hsm_clock);
 	if (ret)
 		return ret;
···
 	 * case, it will lead to a silent CPU stall. Let's make sure we
 	 * prevent such a case.
 	 */
-	rate = clk_get_rate(vc4_hdmi->hsm_rpm_clock);
+	rate = clk_get_rate(vc4_hdmi->hsm_clock);
 	if (!rate) {
 		ret = -EINVAL;
 		goto err_disable_clk;
-1
drivers/gpu/drm/vc4/vc4_hdmi.h
···
 	struct clk *cec_clock;
 	struct clk *pixel_clock;
 	struct clk *hsm_clock;
-	struct clk *hsm_rpm_clock;
 	struct clk *audio_clock;
 	struct clk *pixel_bvb_clock;
+2 -2
drivers/gpu/drm/vc4/vc4_validate.c
···
 			  hindex, exec->bo_count);
 		return NULL;
 	}
-	obj = exec->bo[hindex];
+	obj = to_drm_gem_dma_obj(exec->bo[hindex]);
 	bo = to_vc4_bo(&obj->base);
 
 	if (bo->validated_shader) {
···
 			return -EINVAL;
 		}
 
-		bo[i] = exec->bo[src_handles[i]];
+		bo[i] = to_drm_gem_dma_obj(exec->bo[src_handles[i]]);
 		if (!bo[i])
 			return -EINVAL;
 	}
+1
drivers/gpu/drm/vgem/vgem_fence.c
···
 {
 	idr_for_each(&vfile->fence_idr, __vgem_fence_idr_fini, vfile);
 	idr_destroy(&vfile->fence_idr);
+	mutex_destroy(&vfile->fence_mutex);
 }
+11
drivers/gpu/drm/virtio/Kconfig
···
 	  QEMU based VMMs (like KVM or Xen).
 
 	  If unsure say M.
+
+config DRM_VIRTIO_GPU_KMS
+	bool "Virtio GPU driver modesetting support"
+	depends on DRM_VIRTIO_GPU
+	default y
+	help
+	  Enable modesetting support for virtio GPU driver. This can be
+	  disabled in cases where only "headless" usage of the GPU is
+	  required.
+
+	  If unsure, say Y.
+6
drivers/gpu/drm/virtio/virtgpu_display.c
···
 {
 	int i, ret;
 
+	if (!vgdev->num_scanouts)
+		return 0;
+
 	ret = drmm_mode_config_init(vgdev->ddev);
 	if (ret)
 		return ret;
···
 void virtio_gpu_modeset_fini(struct virtio_gpu_device *vgdev)
 {
 	int i;
+
+	if (!vgdev->num_scanouts)
+		return;
 
 	for (i = 0 ; i < vgdev->num_scanouts; ++i)
 		kfree(vgdev->outputs[i].edid);
+4
drivers/gpu/drm/virtio/virtgpu_drv.c
···
 DEFINE_DRM_GEM_FOPS(virtio_gpu_driver_fops);
 
 static const struct drm_driver driver = {
+	/*
+	 * If KMS is disabled DRIVER_MODESET and DRIVER_ATOMIC are masked
+	 * out via drm_device::driver_features:
+	 */
 	.driver_features = DRIVER_MODESET | DRIVER_GEM | DRIVER_RENDER | DRIVER_ATOMIC,
 	.open = virtio_gpu_driver_open,
 	.postclose = virtio_gpu_driver_postclose,
+23 -16
drivers/gpu/drm/virtio/virtgpu_kms.c
···
 	virtio_cread_le(vgdev->vdev, struct virtio_gpu_config,
 			events_read, &events_read);
 	if (events_read & VIRTIO_GPU_EVENT_DISPLAY) {
-		if (vgdev->has_edid)
-			virtio_gpu_cmd_get_edids(vgdev);
-		virtio_gpu_cmd_get_display_info(vgdev);
-		virtio_gpu_notify(vgdev);
-		drm_helper_hpd_irq_event(vgdev->ddev);
+		if (vgdev->num_scanouts) {
+			if (vgdev->has_edid)
+				virtio_gpu_cmd_get_edids(vgdev);
+			virtio_gpu_cmd_get_display_info(vgdev);
+			virtio_gpu_notify(vgdev);
+			drm_helper_hpd_irq_event(vgdev->ddev);
+		}
 		events_clear |= VIRTIO_GPU_EVENT_DISPLAY;
 	}
 	virtio_cwrite_le(vgdev->vdev, struct virtio_gpu_config,
···
 			num_scanouts, &num_scanouts);
 	vgdev->num_scanouts = min_t(uint32_t, num_scanouts,
 				    VIRTIO_GPU_MAX_SCANOUTS);
-	if (!vgdev->num_scanouts) {
-		DRM_ERROR("num_scanouts is zero\n");
-		ret = -EINVAL;
-		goto err_scanouts;
+
+	if (!IS_ENABLED(CONFIG_DRM_VIRTIO_GPU_KMS) || !vgdev->num_scanouts) {
+		DRM_INFO("KMS disabled\n");
+		vgdev->num_scanouts = 0;
+		vgdev->has_edid = false;
+		dev->driver_features &= ~(DRIVER_MODESET | DRIVER_ATOMIC);
+	} else {
+		DRM_INFO("number of scanouts: %d\n", num_scanouts);
 	}
-	DRM_INFO("number of scanouts: %d\n", num_scanouts);
 
 	virtio_cread_le(vgdev->vdev, struct virtio_gpu_config,
 			num_capsets, &num_capsets);
···
 
 	if (num_capsets)
 		virtio_gpu_get_capsets(vgdev, num_capsets);
-	if (vgdev->has_edid)
-		virtio_gpu_cmd_get_edids(vgdev);
-	virtio_gpu_cmd_get_display_info(vgdev);
-	virtio_gpu_notify(vgdev);
-	wait_event_timeout(vgdev->resp_wq, !vgdev->display_info_pending,
-			   5 * HZ);
+	if (vgdev->num_scanouts) {
+		if (vgdev->has_edid)
+			virtio_gpu_cmd_get_edids(vgdev);
+		virtio_gpu_cmd_get_display_info(vgdev);
+		virtio_gpu_notify(vgdev);
+		wait_event_timeout(vgdev->resp_wq, !vgdev->display_info_pending,
+				   5 * HZ);
+	}
 	return 0;
 
err_scanouts:
+1 -2
drivers/gpu/drm/virtio/virtgpu_vq.c
···
 	cmd_p->hdr.ctx_id = cpu_to_le32(id);
 	cmd_p->nlen = cpu_to_le32(nlen);
 	cmd_p->context_init = cpu_to_le32(context_init);
-	strncpy(cmd_p->debug_name, name, sizeof(cmd_p->debug_name) - 1);
-	cmd_p->debug_name[sizeof(cmd_p->debug_name) - 1] = 0;
+	strscpy(cmd_p->debug_name, name, sizeof(cmd_p->debug_name));
 	virtio_gpu_queue_ctrl_buffer(vgdev, vbuf);
 }
+1 -1
drivers/gpu/drm/vmwgfx/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 vmwgfx-y := vmwgfx_execbuf.o vmwgfx_gmr.o vmwgfx_kms.o vmwgfx_drv.o \
 	    vmwgfx_ioctl.o vmwgfx_resource.o vmwgfx_ttm_buffer.o \
-	    vmwgfx_cmd.o vmwgfx_irq.o vmwgfx_ldu.o vmwgfx_ttm_glue.o \
+	    vmwgfx_cmd.o vmwgfx_irq.o vmwgfx_ldu.o \
 	    vmwgfx_overlay.o vmwgfx_gmrid_manager.o vmwgfx_fence.o \
 	    vmwgfx_bo.o vmwgfx_scrn.o vmwgfx_context.o \
 	    vmwgfx_surface.o vmwgfx_prime.o vmwgfx_mob.o vmwgfx_shader.o \
+197 -212
drivers/gpu/drm/vmwgfx/vmwgfx_bo.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright © 2011-2018 VMware, Inc., Palo Alto, CA., USA
+ * Copyright © 2011-2023 VMware, Inc., Palo Alto, CA., USA
  * All Rights Reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
···
  *
  **************************************************************************/
 
+#include "vmwgfx_bo.h"
+#include "vmwgfx_drv.h"
+
+
 #include <drm/ttm/ttm_placement.h>
 
-#include "vmwgfx_drv.h"
-#include "ttm_object.h"
-
-
-/**
- * vmw_buffer_object - Convert a struct ttm_buffer_object to a struct
- * vmw_buffer_object.
- *
- * @bo: Pointer to the TTM buffer object.
- * Return: Pointer to the struct vmw_buffer_object embedding the
- * TTM buffer object.
- */
-static struct vmw_buffer_object *
-vmw_buffer_object(struct ttm_buffer_object *bo)
+static void vmw_bo_release(struct vmw_bo *vbo)
 {
-	return container_of(bo, struct vmw_buffer_object, base);
+	vmw_bo_unmap(vbo);
+	drm_gem_object_release(&vbo->tbo.base);
 }
 
 /**
- * bo_is_vmw - check if the buffer object is a &vmw_buffer_object
- * @bo: ttm buffer object to be checked
+ * vmw_bo_free - vmw_bo destructor
  *
- * Uses destroy function associated with the object to determine if this is
- * a &vmw_buffer_object.
- *
- * Returns:
- * true if the object is of &vmw_buffer_object type, false if not.
+ * @bo: Pointer to the embedded struct ttm_buffer_object
  */
-static bool bo_is_vmw(struct ttm_buffer_object *bo)
+static void vmw_bo_free(struct ttm_buffer_object *bo)
 {
-	return bo->destroy == &vmw_bo_bo_free ||
-	       bo->destroy == &vmw_gem_destroy;
+	struct vmw_bo *vbo = to_vmw_bo(&bo->base);
+
+	WARN_ON(vbo->dirty);
+	WARN_ON(!RB_EMPTY_ROOT(&vbo->res_tree));
+	vmw_bo_release(vbo);
+	kfree(vbo);
 }
 
 /**
···
  * Return: Zero on success, Negative error code on failure. In particular
  * -ERESTARTSYS if interrupted by a signal
  */
-int vmw_bo_pin_in_placement(struct vmw_private *dev_priv,
-			    struct vmw_buffer_object *buf,
-			    struct ttm_placement *placement,
-			    bool interruptible)
+static int vmw_bo_pin_in_placement(struct vmw_private *dev_priv,
+				   struct vmw_bo *buf,
+				   struct ttm_placement *placement,
+				   bool interruptible)
 {
 	struct ttm_operation_ctx ctx = {interruptible, false };
-	struct ttm_buffer_object *bo = &buf->base;
+	struct ttm_buffer_object *bo = &buf->tbo;
 	int ret;
 
 	vmw_execbuf_release_pinned_bo(dev_priv);
···
 	if (unlikely(ret != 0))
 		goto err;
 
-	if (buf->base.pin_count > 0)
-		ret = ttm_resource_compat(bo->resource, placement)
-			? 0 : -EINVAL;
-	else
-		ret = ttm_bo_validate(bo, placement, &ctx);
-
+	ret = ttm_bo_validate(bo, placement, &ctx);
 	if (!ret)
 		vmw_bo_pin_reserved(buf, true);
 
···
  * -ERESTARTSYS if interrupted by a signal
  */
 int vmw_bo_pin_in_vram_or_gmr(struct vmw_private *dev_priv,
-			      struct vmw_buffer_object *buf,
+			      struct vmw_bo *buf,
 			      bool interruptible)
 {
 	struct ttm_operation_ctx ctx = {interruptible, false };
-	struct ttm_buffer_object *bo = &buf->base;
+	struct ttm_buffer_object *bo = &buf->tbo;
 	int ret;
 
 	vmw_execbuf_release_pinned_bo(dev_priv);
···
 	if (unlikely(ret != 0))
 		goto err;
 
-	if (buf->base.pin_count > 0) {
-		ret = ttm_resource_compat(bo->resource, &vmw_vram_gmr_placement)
-			? 0 : -EINVAL;
-		goto out_unreserve;
-	}
-
-	ret = ttm_bo_validate(bo, &vmw_vram_gmr_placement, &ctx);
+	vmw_bo_placement_set(buf,
+			     VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM,
+			     VMW_BO_DOMAIN_GMR);
+	ret = ttm_bo_validate(bo, &buf->placement, &ctx);
 	if (likely(ret == 0) || ret == -ERESTARTSYS)
 		goto out_unreserve;
 
-	ret = ttm_bo_validate(bo, &vmw_vram_placement, &ctx);
+	vmw_bo_placement_set(buf,
+			     VMW_BO_DOMAIN_VRAM,
+			     VMW_BO_DOMAIN_VRAM);
+	ret = ttm_bo_validate(bo, &buf->placement, &ctx);
 
 out_unreserve:
 	if (!ret)
···
  * -ERESTARTSYS if interrupted by a signal
  */
 int vmw_bo_pin_in_vram(struct vmw_private *dev_priv,
-		       struct vmw_buffer_object *buf,
+		       struct vmw_bo *buf,
 		       bool interruptible)
 {
 	return vmw_bo_pin_in_placement(dev_priv, buf, &vmw_vram_placement,
···
  * -ERESTARTSYS if interrupted by a signal
  */
 int vmw_bo_pin_in_start_of_vram(struct vmw_private *dev_priv,
-				struct vmw_buffer_object *buf,
+				struct vmw_bo *buf,
 				bool interruptible)
 {
 	struct ttm_operation_ctx ctx = {interruptible, false };
-	struct ttm_buffer_object *bo = &buf->base;
-	struct ttm_placement placement;
-	struct ttm_place place;
+	struct ttm_buffer_object *bo = &buf->tbo;
 	int ret = 0;
-
-	place = vmw_vram_placement.placement[0];
-	place.lpfn = PFN_UP(bo->resource->size);
-	placement.num_placement = 1;
-	placement.placement = &place;
-	placement.num_busy_placement = 1;
-	placement.busy_placement = &place;
 
 	vmw_execbuf_release_pinned_bo(dev_priv);
 	ret = ttm_bo_reserve(bo, interruptible, false, NULL);
···
 	if (bo->resource->mem_type == TTM_PL_VRAM &&
 	    bo->resource->start < PFN_UP(bo->resource->size) &&
 	    bo->resource->start > 0 &&
-	    buf->base.pin_count == 0) {
+	    buf->tbo.pin_count == 0) {
 		ctx.interruptible = false;
-		(void) ttm_bo_validate(bo, &vmw_sys_placement, &ctx);
+		vmw_bo_placement_set(buf,
+				     VMW_BO_DOMAIN_SYS,
+				     VMW_BO_DOMAIN_SYS);
+		(void)ttm_bo_validate(bo, &buf->placement, &ctx);
 	}
 
-	if (buf->base.pin_count > 0)
-		ret = ttm_resource_compat(bo->resource, &placement)
-			? 0 : -EINVAL;
-	else
-		ret = ttm_bo_validate(bo, &placement, &ctx);
+	vmw_bo_placement_set(buf,
+			     VMW_BO_DOMAIN_VRAM,
+			     VMW_BO_DOMAIN_VRAM);
+	buf->places[0].lpfn = PFN_UP(bo->resource->size);
+	ret = ttm_bo_validate(bo, &buf->placement, &ctx);
 
 	/* For some reason we didn't end up at the start of vram */
 	WARN_ON(ret == 0 && bo->resource->start != 0);
···
  * -ERESTARTSYS if interrupted by a signal
  */
 int vmw_bo_unpin(struct vmw_private *dev_priv,
-		 struct vmw_buffer_object *buf,
+		 struct vmw_bo *buf,
 		 bool interruptible)
 {
-	struct ttm_buffer_object *bo = &buf->base;
+	struct ttm_buffer_object *bo = &buf->tbo;
 	int ret;
 
 	ret = ttm_bo_reserve(bo, interruptible, false, NULL);
···
  * @pin: Whether to pin or unpin.
  *
  */
-void vmw_bo_pin_reserved(struct vmw_buffer_object *vbo, bool pin)
+void vmw_bo_pin_reserved(struct vmw_bo *vbo, bool pin)
 {
 	struct ttm_operation_ctx ctx = { false, true };
 	struct ttm_place pl;
 	struct ttm_placement placement;
-	struct ttm_buffer_object *bo = &vbo->base;
+	struct ttm_buffer_object *bo = &vbo->tbo;
 	uint32_t old_mem_type = bo->resource->mem_type;
 	int ret;
 
···
  * 3) Buffer object destruction
  *
  */
-void *vmw_bo_map_and_cache(struct vmw_buffer_object *vbo)
+void *vmw_bo_map_and_cache(struct vmw_bo *vbo)
 {
-	struct ttm_buffer_object *bo = &vbo->base;
+	struct ttm_buffer_object *bo = &vbo->tbo;
 	bool not_used;
 	void *virtual;
 	int ret;
···
  * @vbo: The buffer object whose map we are tearing down.
  *
  * This function tears down a cached map set up using
- * vmw_buffer_object_map_and_cache().
+ * vmw_bo_map_and_cache().
  */
-void vmw_bo_unmap(struct vmw_buffer_object *vbo)
+void vmw_bo_unmap(struct vmw_bo *vbo)
 {
 	if (vbo->map.bo == NULL)
 		return;
 
 	ttm_bo_kunmap(&vbo->map);
+	vbo->map.bo = NULL;
 }
 
 
 /**
- * vmw_bo_bo_free - vmw buffer object destructor
- *
- * @bo: Pointer to the embedded struct ttm_buffer_object
- */
-void vmw_bo_bo_free(struct ttm_buffer_object *bo)
-{
-	struct vmw_buffer_object *vmw_bo = vmw_buffer_object(bo);
-
-	WARN_ON(vmw_bo->dirty);
-	WARN_ON(!RB_EMPTY_ROOT(&vmw_bo->res_tree));
-	vmw_bo_unmap(vmw_bo);
-	drm_gem_object_release(&bo->base);
-	kfree(vmw_bo);
-}
-
-/* default destructor */
-static void vmw_bo_default_destroy(struct ttm_buffer_object *bo)
-{
-	kfree(bo);
-}
-
-/**
- * vmw_bo_create_kernel - Create a pinned BO for internal kernel use.
+ * vmw_bo_init - Initialize a vmw buffer object
  *
  * @dev_priv: Pointer to the device private struct
- * @size: size of the BO we need
- * @placement: where to put it
- * @p_bo: resulting BO
+ * @vmw_bo: Buffer object to initialize
+ * @params: Parameters used to initialize the buffer object
+ * @destroy: The function used to delete the buffer object
+ * Returns: Zero on success, negative error code on error.
  *
- * Creates and pin a simple BO for in kernel use.
  */
-int vmw_bo_create_kernel(struct vmw_private *dev_priv, unsigned long size,
-			 struct ttm_placement *placement,
-			 struct ttm_buffer_object **p_bo)
+static int vmw_bo_init(struct vmw_private *dev_priv,
+		       struct vmw_bo *vmw_bo,
+		       struct vmw_bo_params *params,
+		       void (*destroy)(struct ttm_buffer_object *))
 {
 	struct ttm_operation_ctx ctx = {
-		.interruptible = false,
+		.interruptible = params->bo_type != ttm_bo_type_kernel,
 		.no_wait_gpu = false
 	};
-	struct ttm_buffer_object *bo;
+	struct ttm_device *bdev = &dev_priv->bdev;
 	struct drm_device *vdev = &dev_priv->drm;
 	int ret;
 
-	bo = kzalloc(sizeof(*bo), GFP_KERNEL);
-	if (unlikely(!bo))
-		return -ENOMEM;
+	memset(vmw_bo, 0, sizeof(*vmw_bo));
 
-	size = ALIGN(size, PAGE_SIZE);
+	BUILD_BUG_ON(TTM_MAX_BO_PRIORITY <= 3);
+	vmw_bo->tbo.priority = 3;
+	vmw_bo->res_tree = RB_ROOT;
 
-	drm_gem_private_object_init(vdev, &bo->base, size);
+	params->size = ALIGN(params->size, PAGE_SIZE);
+	drm_gem_private_object_init(vdev, &vmw_bo->tbo.base, params->size);
 
-	ret = ttm_bo_init_reserved(&dev_priv->bdev, bo, ttm_bo_type_kernel,
-				   placement, 0, &ctx, NULL, NULL,
-				   vmw_bo_default_destroy);
+	vmw_bo_placement_set(vmw_bo, params->domain, params->busy_domain);
+	ret = ttm_bo_init_reserved(bdev, &vmw_bo->tbo, params->bo_type,
+				   &vmw_bo->placement, 0, &ctx, NULL,
+				   NULL, destroy);
 	if (unlikely(ret))
-		goto error_free;
+		return ret;
 
-	ttm_bo_pin(bo);
-	ttm_bo_unreserve(bo);
-	*p_bo = bo;
+	if (params->pin)
+		ttm_bo_pin(&vmw_bo->tbo);
+	ttm_bo_unreserve(&vmw_bo->tbo);
 
 	return 0;
-
-error_free:
-	kfree(bo);
-	return ret;
 }
 
 int vmw_bo_create(struct vmw_private *vmw,
-		  size_t size, struct ttm_placement *placement,
-		  bool interruptible, bool pin,
-		  void (*bo_free)(struct ttm_buffer_object *bo),
-		  struct vmw_buffer_object **p_bo)
+		  struct vmw_bo_params *params,
+		  struct vmw_bo **p_bo)
 {
 	int ret;
-
-	BUG_ON(!bo_free);
 
 	*p_bo = kmalloc(sizeof(**p_bo), GFP_KERNEL);
 	if (unlikely(!*p_bo)) {
···
 	/*
 	 * vmw_bo_init will delete the *p_bo object if it fails
 	 */
-	ret = vmw_bo_init(vmw, *p_bo, size,
-			  placement, interruptible, pin,
-			  bo_free);
+	ret = vmw_bo_init(vmw, *p_bo, params, vmw_bo_free);
 	if (unlikely(ret != 0))
 		goto out_error;
 
···
 }
 
 /**
- * vmw_bo_init - Initialize a vmw buffer object
- *
- * @dev_priv: Pointer to the device private struct
- * @vmw_bo: Pointer to the struct vmw_buffer_object to initialize.
- * @size: Buffer object size in bytes.
- * @placement: Initial placement.
- * @interruptible: Whether waits should be performed interruptible.
- * @pin: If the BO should be created pinned at a fixed location.
- * @bo_free: The buffer object destructor.
- * Returns: Zero on success, negative error code on error.
- *
- * Note that on error, the code will free the buffer object.
445 - */ 446 - int vmw_bo_init(struct vmw_private *dev_priv, 447 - struct vmw_buffer_object *vmw_bo, 448 - size_t size, struct ttm_placement *placement, 449 - bool interruptible, bool pin, 450 - void (*bo_free)(struct ttm_buffer_object *bo)) 451 - { 452 - struct ttm_operation_ctx ctx = { 453 - .interruptible = interruptible, 454 - .no_wait_gpu = false 455 - }; 456 - struct ttm_device *bdev = &dev_priv->bdev; 457 - struct drm_device *vdev = &dev_priv->drm; 458 - int ret; 459 - 460 - WARN_ON_ONCE(!bo_free); 461 - memset(vmw_bo, 0, sizeof(*vmw_bo)); 462 - BUILD_BUG_ON(TTM_MAX_BO_PRIORITY <= 3); 463 - vmw_bo->base.priority = 3; 464 - vmw_bo->res_tree = RB_ROOT; 465 - 466 - size = ALIGN(size, PAGE_SIZE); 467 - drm_gem_private_object_init(vdev, &vmw_bo->base.base, size); 468 - 469 - ret = ttm_bo_init_reserved(bdev, &vmw_bo->base, ttm_bo_type_device, 470 - placement, 0, &ctx, NULL, NULL, bo_free); 471 - if (unlikely(ret)) { 472 - return ret; 473 - } 474 - 475 - if (pin) 476 - ttm_bo_pin(&vmw_bo->base); 477 - ttm_bo_unreserve(&vmw_bo->base); 478 - 479 - return 0; 480 - } 481 - 482 - /** 483 - * vmw_user_bo_synccpu_grab - Grab a struct vmw_buffer_object for cpu 481 + * vmw_user_bo_synccpu_grab - Grab a struct vmw_bo for cpu 484 482 * access, idling previous GPU operations on the buffer and optionally 485 483 * blocking it for further command submissions. 486 484 * ··· 443 541 * 444 542 * A blocking grab will be automatically released when @tfile is closed. 
445 543 */ 446 - static int vmw_user_bo_synccpu_grab(struct vmw_buffer_object *vmw_bo, 544 + static int vmw_user_bo_synccpu_grab(struct vmw_bo *vmw_bo, 447 545 uint32_t flags) 448 546 { 449 547 bool nonblock = !!(flags & drm_vmw_synccpu_dontblock); 450 - struct ttm_buffer_object *bo = &vmw_bo->base; 548 + struct ttm_buffer_object *bo = &vmw_bo->tbo; 451 549 int ret; 452 550 453 551 if (flags & drm_vmw_synccpu_allow_cs) { ··· 490 588 uint32_t handle, 491 589 uint32_t flags) 492 590 { 493 - struct vmw_buffer_object *vmw_bo; 591 + struct vmw_bo *vmw_bo; 494 592 int ret = vmw_user_bo_lookup(filp, handle, &vmw_bo); 495 593 496 594 if (!ret) { 497 595 if (!(flags & drm_vmw_synccpu_allow_cs)) { 498 596 atomic_dec(&vmw_bo->cpu_writers); 499 597 } 500 - ttm_bo_put(&vmw_bo->base); 598 + ttm_bo_put(&vmw_bo->tbo); 501 599 } 502 600 503 - drm_gem_object_put(&vmw_bo->base.base); 601 + drm_gem_object_put(&vmw_bo->tbo.base); 504 602 return ret; 505 603 } 506 604 ··· 522 620 { 523 621 struct drm_vmw_synccpu_arg *arg = 524 622 (struct drm_vmw_synccpu_arg *) data; 525 - struct vmw_buffer_object *vbo; 623 + struct vmw_bo *vbo; 526 624 int ret; 527 625 528 626 if ((arg->flags & (drm_vmw_synccpu_read | drm_vmw_synccpu_write)) == 0 ··· 541 639 542 640 ret = vmw_user_bo_synccpu_grab(vbo, arg->flags); 543 641 vmw_bo_unreference(&vbo); 544 - drm_gem_object_put(&vbo->base.base); 642 + drm_gem_object_put(&vbo->tbo.base); 545 643 if (unlikely(ret != 0)) { 546 644 if (ret == -ERESTARTSYS || ret == -EBUSY) 547 645 return -EBUSY; ··· 585 683 struct drm_vmw_unref_dmabuf_arg *arg = 586 684 (struct drm_vmw_unref_dmabuf_arg *)data; 587 685 588 - drm_gem_handle_delete(file_priv, arg->handle); 589 - return 0; 686 + return drm_gem_handle_delete(file_priv, arg->handle); 590 687 } 591 688 592 689 ··· 595 694 * @filp: The file the handle is registered with. 
596 695 * @handle: The user buffer object handle 597 696 * @out: Pointer to a where a pointer to the embedded 598 - * struct vmw_buffer_object should be placed. 697 + * struct vmw_bo should be placed. 599 698 * Return: Zero on success, Negative error code on error. 600 699 * 601 700 * The vmw buffer object pointer will be refcounted (both ttm and gem) 602 701 */ 603 702 int vmw_user_bo_lookup(struct drm_file *filp, 604 - uint32_t handle, 605 - struct vmw_buffer_object **out) 703 + u32 handle, 704 + struct vmw_bo **out) 606 705 { 607 706 struct drm_gem_object *gobj; 608 707 ··· 613 712 return -ESRCH; 614 713 } 615 714 616 - *out = gem_to_vmw_bo(gobj); 617 - ttm_bo_get(&(*out)->base); 715 + *out = to_vmw_bo(gobj); 716 + ttm_bo_get(&(*out)->tbo); 618 717 619 718 return 0; 620 719 } ··· 635 734 struct vmw_fence_obj *fence) 636 735 { 637 736 struct ttm_device *bdev = bo->bdev; 638 - struct vmw_private *dev_priv = 639 - container_of(bdev, struct vmw_private, bdev); 737 + struct vmw_private *dev_priv = vmw_priv_from_ttm(bdev); 640 738 int ret; 641 739 642 740 if (fence == NULL) ··· 671 771 struct drm_mode_create_dumb *args) 672 772 { 673 773 struct vmw_private *dev_priv = vmw_priv(dev); 674 - struct vmw_buffer_object *vbo; 774 + struct vmw_bo *vbo; 675 775 int cpp = DIV_ROUND_UP(args->bpp, 8); 676 776 int ret; 677 777 ··· 695 795 args->size, &args->handle, 696 796 &vbo); 697 797 /* drop reference from allocate - handle holds it now */ 698 - drm_gem_object_put(&vbo->base.base); 798 + drm_gem_object_put(&vbo->tbo.base); 699 799 return ret; 700 800 } 701 801 ··· 706 806 */ 707 807 void vmw_bo_swap_notify(struct ttm_buffer_object *bo) 708 808 { 709 - /* Is @bo embedded in a struct vmw_buffer_object? 
*/ 710 - if (!bo_is_vmw(bo)) 711 - return; 712 - 713 809 /* Kill any cached kernel maps before swapout */ 714 - vmw_bo_unmap(vmw_buffer_object(bo)); 810 + vmw_bo_unmap(to_vmw_bo(&bo->base)); 715 811 } 716 812 717 813 ··· 724 828 void vmw_bo_move_notify(struct ttm_buffer_object *bo, 725 829 struct ttm_resource *mem) 726 830 { 727 - struct vmw_buffer_object *vbo; 728 - 729 - /* Make sure @bo is embedded in a struct vmw_buffer_object? */ 730 - if (!bo_is_vmw(bo)) 731 - return; 732 - 733 - vbo = container_of(bo, struct vmw_buffer_object, base); 831 + struct vmw_bo *vbo = to_vmw_bo(&bo->base); 734 832 735 833 /* 736 834 * Kill any cached kernel maps before move to or from VRAM. ··· 741 851 */ 742 852 if (mem->mem_type != VMW_PL_MOB && bo->resource->mem_type == VMW_PL_MOB) 743 853 vmw_resource_unbind_list(vbo); 854 + } 855 + 856 + static u32 857 + set_placement_list(struct ttm_place *pl, u32 domain) 858 + { 859 + u32 n = 0; 860 + 861 + /* 862 + * The placements are ordered according to our preferences 863 + */ 864 + if (domain & VMW_BO_DOMAIN_MOB) { 865 + pl[n].mem_type = VMW_PL_MOB; 866 + pl[n].flags = 0; 867 + pl[n].fpfn = 0; 868 + pl[n].lpfn = 0; 869 + n++; 870 + } 871 + if (domain & VMW_BO_DOMAIN_GMR) { 872 + pl[n].mem_type = VMW_PL_GMR; 873 + pl[n].flags = 0; 874 + pl[n].fpfn = 0; 875 + pl[n].lpfn = 0; 876 + n++; 877 + } 878 + if (domain & VMW_BO_DOMAIN_VRAM) { 879 + pl[n].mem_type = TTM_PL_VRAM; 880 + pl[n].flags = 0; 881 + pl[n].fpfn = 0; 882 + pl[n].lpfn = 0; 883 + n++; 884 + } 885 + if (domain & VMW_BO_DOMAIN_WAITABLE_SYS) { 886 + pl[n].mem_type = VMW_PL_SYSTEM; 887 + pl[n].flags = 0; 888 + pl[n].fpfn = 0; 889 + pl[n].lpfn = 0; 890 + n++; 891 + } 892 + if (domain & VMW_BO_DOMAIN_SYS) { 893 + pl[n].mem_type = TTM_PL_SYSTEM; 894 + pl[n].flags = 0; 895 + pl[n].fpfn = 0; 896 + pl[n].lpfn = 0; 897 + n++; 898 + } 899 + 900 + WARN_ON(!n); 901 + if (!n) { 902 + pl[n].mem_type = TTM_PL_SYSTEM; 903 + pl[n].flags = 0; 904 + pl[n].fpfn = 0; 905 + pl[n].lpfn = 0; 906 + n++; 
907 + } 908 + return n; 909 + } 910 + 911 + void vmw_bo_placement_set(struct vmw_bo *bo, u32 domain, u32 busy_domain) 912 + { 913 + struct ttm_device *bdev = bo->tbo.bdev; 914 + struct vmw_private *vmw = vmw_priv_from_ttm(bdev); 915 + struct ttm_placement *pl = &bo->placement; 916 + bool mem_compatible = false; 917 + u32 i; 918 + 919 + pl->placement = bo->places; 920 + pl->num_placement = set_placement_list(bo->places, domain); 921 + 922 + if (drm_debug_enabled(DRM_UT_DRIVER) && bo->tbo.resource) { 923 + for (i = 0; i < pl->num_placement; ++i) { 924 + if (bo->tbo.resource->mem_type == TTM_PL_SYSTEM || 925 + bo->tbo.resource->mem_type == pl->placement[i].mem_type) 926 + mem_compatible = true; 927 + } 928 + if (!mem_compatible) 929 + drm_warn(&vmw->drm, 930 + "%s: Incompatible transition from " 931 + "bo->base.resource->mem_type = %u to domain = %u\n", 932 + __func__, bo->tbo.resource->mem_type, domain); 933 + } 934 + 935 + pl->busy_placement = bo->busy_places; 936 + pl->num_busy_placement = set_placement_list(bo->busy_places, busy_domain); 937 + } 938 + 939 + void vmw_bo_placement_set_default_accelerated(struct vmw_bo *bo) 940 + { 941 + struct ttm_device *bdev = bo->tbo.bdev; 942 + struct vmw_private *vmw = vmw_priv_from_ttm(bdev); 943 + u32 domain = VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM; 944 + 945 + if (vmw->has_mob) 946 + domain = VMW_BO_DOMAIN_MOB; 947 + 948 + vmw_bo_placement_set(bo, domain, domain); 744 949 }
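The new placement interface replaces the per-case ttm_placement globals with a domain bitmask: set_placement_list() above emits TTM placement entries in a fixed preference order (MOB first, then GMR, VRAM, waitable system, plain system) and falls back to system memory when no bit is set. A minimal userspace sketch of that ordering logic, using stand-in constants rather than the real VMW_PL_*/TTM_PL_* values:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in domain bits, mirroring enum vmw_bo_domain. */
#define DOM_SYS          (1u << 0)
#define DOM_WAITABLE_SYS (1u << 1)
#define DOM_VRAM         (1u << 2)
#define DOM_GMR          (1u << 3)
#define DOM_MOB          (1u << 4)

/* Stand-in memory types; the driver uses TTM_PL_* / VMW_PL_* here. */
enum mem_type { MT_SYS, MT_WAITABLE_SYS, MT_VRAM, MT_GMR, MT_MOB };

/*
 * Fill pl[] in preference order (most desirable first) and return the
 * entry count, defaulting to system memory when no domain bit is set —
 * the same shape as set_placement_list() in vmwgfx_bo.c.
 */
static uint32_t set_placement_list(enum mem_type *pl, uint32_t domain)
{
	uint32_t n = 0;

	if (domain & DOM_MOB)
		pl[n++] = MT_MOB;
	if (domain & DOM_GMR)
		pl[n++] = MT_GMR;
	if (domain & DOM_VRAM)
		pl[n++] = MT_VRAM;
	if (domain & DOM_WAITABLE_SYS)
		pl[n++] = MT_WAITABLE_SYS;
	if (domain & DOM_SYS)
		pl[n++] = MT_SYS;
	if (!n)		/* fallback: plain system memory */
		pl[n++] = MT_SYS;
	return n;
}
```

For example, a GMR|VRAM request produces the two-entry list {GMR, VRAM}, so TTM tries GMR before spilling to VRAM.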
+203
drivers/gpu/drm/vmwgfx/vmwgfx_bo.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 + /************************************************************************** 3 + * 4 + * Copyright 2023 VMware, Inc., Palo Alto, CA., USA 5 + * 6 + * Permission is hereby granted, free of charge, to any person obtaining a 7 + * copy of this software and associated documentation files (the 8 + * "Software"), to deal in the Software without restriction, including 9 + * without limitation the rights to use, copy, modify, merge, publish, 10 + * distribute, sub license, and/or sell copies of the Software, and to 11 + * permit persons to whom the Software is furnished to do so, subject to 12 + * the following conditions: 13 + * 14 + * The above copyright notice and this permission notice (including the 15 + * next paragraph) shall be included in all copies or substantial portions 16 + * of the Software. 17 + * 18 + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 19 + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 20 + * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL 21 + * THE COPYRIGHT HOLDERS, AUTHORS AND/OR ITS SUPPLIERS BE LIABLE FOR ANY CLAIM, 22 + * DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR 23 + * OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE 24 + * USE OR OTHER DEALINGS IN THE SOFTWARE. 
25 + * 26 + **************************************************************************/ 27 + 28 + #ifndef VMWGFX_BO_H 29 + #define VMWGFX_BO_H 30 + 31 + #include "device_include/svga_reg.h" 32 + 33 + #include <drm/ttm/ttm_bo.h> 34 + #include <drm/ttm/ttm_placement.h> 35 + 36 + #include <linux/rbtree_types.h> 37 + #include <linux/types.h> 38 + 39 + struct vmw_bo_dirty; 40 + struct vmw_fence_obj; 41 + struct vmw_private; 42 + struct vmw_resource; 43 + 44 + enum vmw_bo_domain { 45 + VMW_BO_DOMAIN_SYS = BIT(0), 46 + VMW_BO_DOMAIN_WAITABLE_SYS = BIT(1), 47 + VMW_BO_DOMAIN_VRAM = BIT(2), 48 + VMW_BO_DOMAIN_GMR = BIT(3), 49 + VMW_BO_DOMAIN_MOB = BIT(4), 50 + }; 51 + 52 + struct vmw_bo_params { 53 + u32 domain; 54 + u32 busy_domain; 55 + enum ttm_bo_type bo_type; 56 + size_t size; 57 + bool pin; 58 + }; 59 + 60 + /** 61 + * struct vmw_bo - TTM buffer object with vmwgfx additions 62 + * @tbo: The TTM buffer object 63 + * @placement: The preferred placement for this buffer object 64 + * @places: The chosen places for the preferred placement. 65 + * @busy_places: Chosen busy places for the preferred placement 66 + * @map: Kmap object for semi-persistent mappings 67 + * @res_tree: RB tree of resources using this buffer object as a backing MOB 68 + * @res_prios: Eviction priority counts for attached resources 69 + * @cpu_writers: Number of synccpu write grabs. Protected by reservation when 70 + * increased. May be decreased without reservation. 71 + * @dx_query_ctx: DX context if this buffer object is used as a DX query MOB 72 + * @dirty: structure for user-space dirty-tracking 73 + */ 74 + struct vmw_bo { 75 + struct ttm_buffer_object tbo; 76 + 77 + struct ttm_placement placement; 78 + struct ttm_place places[5]; 79 + struct ttm_place busy_places[5]; 80 + 81 + /* Protected by reservation */ 82 + struct ttm_bo_kmap_obj map; 83 + 84 + struct rb_root res_tree; 85 + u32 res_prios[TTM_MAX_BO_PRIORITY]; 86 + 87 + atomic_t cpu_writers; 88 + /* Not ref-counted. 
Protected by binding_mutex */ 89 + struct vmw_resource *dx_query_ctx; 90 + struct vmw_bo_dirty *dirty; 91 + }; 92 + 93 + void vmw_bo_placement_set(struct vmw_bo *bo, u32 domain, u32 busy_domain); 94 + void vmw_bo_placement_set_default_accelerated(struct vmw_bo *bo); 95 + 96 + int vmw_bo_create(struct vmw_private *dev_priv, 97 + struct vmw_bo_params *params, 98 + struct vmw_bo **p_bo); 99 + 100 + int vmw_bo_unref_ioctl(struct drm_device *dev, void *data, 101 + struct drm_file *file_priv); 102 + 103 + int vmw_bo_pin_in_vram(struct vmw_private *dev_priv, 104 + struct vmw_bo *buf, 105 + bool interruptible); 106 + int vmw_bo_pin_in_vram_or_gmr(struct vmw_private *dev_priv, 107 + struct vmw_bo *buf, 108 + bool interruptible); 109 + int vmw_bo_pin_in_start_of_vram(struct vmw_private *vmw_priv, 110 + struct vmw_bo *bo, 111 + bool interruptible); 112 + void vmw_bo_pin_reserved(struct vmw_bo *bo, bool pin); 113 + int vmw_bo_unpin(struct vmw_private *vmw_priv, 114 + struct vmw_bo *bo, 115 + bool interruptible); 116 + 117 + void vmw_bo_get_guest_ptr(const struct ttm_buffer_object *buf, 118 + SVGAGuestPtr *ptr); 119 + int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data, 120 + struct drm_file *file_priv); 121 + void vmw_bo_fence_single(struct ttm_buffer_object *bo, 122 + struct vmw_fence_obj *fence); 123 + 124 + void *vmw_bo_map_and_cache(struct vmw_bo *vbo); 125 + void vmw_bo_unmap(struct vmw_bo *vbo); 126 + 127 + void vmw_bo_move_notify(struct ttm_buffer_object *bo, 128 + struct ttm_resource *mem); 129 + void vmw_bo_swap_notify(struct ttm_buffer_object *bo); 130 + 131 + int vmw_user_bo_lookup(struct drm_file *filp, 132 + u32 handle, 133 + struct vmw_bo **out); 134 + /** 135 + * vmw_bo_adjust_prio - Adjust the buffer object eviction priority 136 + * according to attached resources 137 + * @vbo: The struct vmw_bo 138 + */ 139 + static inline void vmw_bo_prio_adjust(struct vmw_bo *vbo) 140 + { 141 + int i = ARRAY_SIZE(vbo->res_prios); 142 + 143 + while (i--) { 144 + 
if (vbo->res_prios[i]) { 145 + vbo->tbo.priority = i; 146 + return; 147 + } 148 + } 149 + 150 + vbo->tbo.priority = 3; 151 + } 152 + 153 + /** 154 + * vmw_bo_prio_add - Notify a buffer object of a newly attached resource 155 + * eviction priority 156 + * @vbo: The struct vmw_bo 157 + * @prio: The resource priority 158 + * 159 + * After being notified, the code assigns the highest resource eviction priority 160 + * to the backing buffer object (mob). 161 + */ 162 + static inline void vmw_bo_prio_add(struct vmw_bo *vbo, int prio) 163 + { 164 + if (vbo->res_prios[prio]++ == 0) 165 + vmw_bo_prio_adjust(vbo); 166 + } 167 + 168 + /** 169 + * vmw_bo_used_prio_del - Notify a buffer object of a resource with a certain 170 + * priority being removed 171 + * @vbo: The struct vmw_bo 172 + * @prio: The resource priority 173 + * 174 + * After being notified, the code assigns the highest resource eviction priority 175 + * to the backing buffer object (mob). 176 + */ 177 + static inline void vmw_bo_prio_del(struct vmw_bo *vbo, int prio) 178 + { 179 + if (--vbo->res_prios[prio] == 0) 180 + vmw_bo_prio_adjust(vbo); 181 + } 182 + 183 + static inline void vmw_bo_unreference(struct vmw_bo **buf) 184 + { 185 + struct vmw_bo *tmp_buf = *buf; 186 + 187 + *buf = NULL; 188 + if (tmp_buf) 189 + ttm_bo_put(&tmp_buf->tbo); 190 + } 191 + 192 + static inline struct vmw_bo *vmw_bo_reference(struct vmw_bo *buf) 193 + { 194 + ttm_bo_get(&buf->tbo); 195 + return buf; 196 + } 197 + 198 + static inline struct vmw_bo *to_vmw_bo(struct drm_gem_object *gobj) 199 + { 200 + return container_of((gobj), struct vmw_bo, tbo.base); 201 + } 202 + 203 + #endif // VMWGFX_BO_H
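The inline helpers at the end of the header implement a small bookkeeping scheme: res_prios[] counts attached resources per eviction priority, and the buffer object always takes the highest non-empty priority, defaulting to 3 when nothing is attached. A self-contained model of that logic (struct and field names are stand-ins for the vmw_bo fields):

```c
#include <assert.h>

#define N_PRIOS 4	/* stand-in for the TTM priority buckets used here */

struct bo_model {
	int res_prios[N_PRIOS];	/* attached resources per priority */
	int priority;		/* current eviction priority */
};

/* Pick the highest non-empty bucket, default 3 (mirrors vmw_bo_prio_adjust). */
static void prio_adjust(struct bo_model *bo)
{
	int i = N_PRIOS;

	while (i--) {
		if (bo->res_prios[i]) {
			bo->priority = i;
			return;
		}
	}
	bo->priority = 3;
}

/* Resource attached: only re-scan when a bucket goes 0 -> 1. */
static void prio_add(struct bo_model *bo, int prio)
{
	if (bo->res_prios[prio]++ == 0)
		prio_adjust(bo);
}

/* Resource detached: only re-scan when a bucket goes 1 -> 0. */
static void prio_del(struct bo_model *bo, int prio)
{
	if (--bo->res_prios[prio] == 0)
		prio_adjust(bo);
}
```

The 0→1 and 1→0 guards mean the linear scan only runs when a bucket's emptiness actually changes, keeping add/del cheap on hot paths.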
+7 -7
drivers/gpu/drm/vmwgfx/vmwgfx_cmd.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2020 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 24 24 * USE OR OTHER DEALINGS IN THE SOFTWARE. 25 25 * 26 26 **************************************************************************/ 27 - 28 - #include <linux/sched/signal.h> 27 + #include "vmwgfx_bo.h" 28 + #include "vmwgfx_drv.h" 29 + #include "vmwgfx_devcaps.h" 29 30 30 31 #include <drm/ttm/ttm_placement.h> 31 32 32 - #include "vmwgfx_drv.h" 33 - #include "vmwgfx_devcaps.h" 33 + #include <linux/sched/signal.h> 34 34 35 35 bool vmw_supports_3d(struct vmw_private *dev_priv) 36 36 { ··· 567 567 * without writing to the query result structure. 568 568 */ 569 569 570 - struct ttm_buffer_object *bo = &dev_priv->dummy_query_bo->base; 570 + struct ttm_buffer_object *bo = &dev_priv->dummy_query_bo->tbo; 571 571 struct { 572 572 SVGA3dCmdHeader header; 573 573 SVGA3dCmdWaitForQuery body; ··· 613 613 * without writing to the query result structure. 614 614 */ 615 615 616 - struct ttm_buffer_object *bo = &dev_priv->dummy_query_bo->base; 616 + struct ttm_buffer_object *bo = &dev_priv->dummy_query_bo->tbo; 617 617 struct { 618 618 SVGA3dCmdHeader header; 619 619 SVGA3dCmdWaitForGBQuery body;
+20 -33
drivers/gpu/drm/vmwgfx/vmwgfx_cmdbuf.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2015-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 25 25 * 26 26 **************************************************************************/ 27 27 28 - #include <linux/dmapool.h> 29 - #include <linux/pci.h> 28 + #include "vmwgfx_bo.h" 29 + #include "vmwgfx_drv.h" 30 30 31 31 #include <drm/ttm/ttm_bo.h> 32 32 33 - #include "vmwgfx_drv.h" 33 + #include <linux/dmapool.h> 34 + #include <linux/pci.h> 34 35 35 36 /* 36 37 * Size of inline command buffers. Try to make sure that a page size is a ··· 80 79 * frees are protected by @lock. 81 80 * @cmd_space: Buffer object for the command buffer space, unless we were 82 81 * able to make a contiguous coherent DMA memory allocation, @handle. Immutable. 83 - * @map_obj: Mapping state for @cmd_space. Immutable. 84 82 * @map: Pointer to command buffer space. May be a mapped buffer object or 85 83 * a contiguous coherent DMA memory allocation. Immutable. 86 84 * @cur: Command buffer for small kernel command submissions. 
Protected by ··· 116 116 struct vmw_cmdbuf_context ctx[SVGA_CB_CONTEXT_MAX]; 117 117 struct list_head error; 118 118 struct drm_mm mm; 119 - struct ttm_buffer_object *cmd_space; 120 - struct ttm_bo_kmap_obj map_obj; 119 + struct vmw_bo *cmd_space; 121 120 u8 *map; 122 121 struct vmw_cmdbuf_header *cur; 123 122 size_t cur_pos; ··· 887 888 header->cmd = man->map + offset; 888 889 if (man->using_mob) { 889 890 cb_hdr->flags = SVGA_CB_FLAG_MOB; 890 - cb_hdr->ptr.mob.mobid = man->cmd_space->resource->start; 891 + cb_hdr->ptr.mob.mobid = man->cmd_space->tbo.resource->start; 891 892 cb_hdr->ptr.mob.mobOffset = offset; 892 893 } else { 893 894 cb_hdr->ptr.pa = (u64)man->handle + (u64)offset; ··· 1220 1221 int vmw_cmdbuf_set_pool_size(struct vmw_cmdbuf_man *man, size_t size) 1221 1222 { 1222 1223 struct vmw_private *dev_priv = man->dev_priv; 1223 - bool dummy; 1224 1224 int ret; 1225 1225 1226 1226 if (man->has_pool) ··· 1232 1234 if (man->map) { 1233 1235 man->using_mob = false; 1234 1236 } else { 1237 + struct vmw_bo_params bo_params = { 1238 + .domain = VMW_BO_DOMAIN_MOB, 1239 + .busy_domain = VMW_BO_DOMAIN_MOB, 1240 + .bo_type = ttm_bo_type_kernel, 1241 + .size = size, 1242 + .pin = true 1243 + }; 1235 1244 /* 1236 1245 * DMA memory failed. If we can have command buffers in a 1237 1246 * MOB, try to use that instead. 
Note that this will ··· 1249 1244 !dev_priv->has_mob) 1250 1245 return -ENOMEM; 1251 1246 1252 - ret = vmw_bo_create_kernel(dev_priv, size, 1253 - &vmw_mob_placement, 1254 - &man->cmd_space); 1247 + ret = vmw_bo_create(dev_priv, &bo_params, &man->cmd_space); 1255 1248 if (ret) 1256 1249 return ret; 1257 1250 1258 - man->using_mob = true; 1259 - ret = ttm_bo_kmap(man->cmd_space, 0, size >> PAGE_SHIFT, 1260 - &man->map_obj); 1261 - if (ret) 1262 - goto out_no_map; 1263 - 1264 - man->map = ttm_kmap_obj_virtual(&man->map_obj, &dummy); 1251 + man->map = vmw_bo_map_and_cache(man->cmd_space); 1252 + man->using_mob = man->map; 1265 1253 } 1266 1254 1267 1255 man->size = size; ··· 1274 1276 (man->using_mob) ? "MOB" : "DMA"); 1275 1277 1276 1278 return 0; 1277 - 1278 - out_no_map: 1279 - if (man->using_mob) { 1280 - ttm_bo_put(man->cmd_space); 1281 - man->cmd_space = NULL; 1282 - } 1283 - 1284 - return ret; 1285 1279 } 1286 1280 1287 1281 /** ··· 1372 1382 man->has_pool = false; 1373 1383 man->default_size = VMW_CMDBUF_INLINE_SIZE; 1374 1384 (void) vmw_cmdbuf_idle(man, false, 10*HZ); 1375 - if (man->using_mob) { 1376 - (void) ttm_bo_kunmap(&man->map_obj); 1377 - ttm_bo_put(man->cmd_space); 1378 - man->cmd_space = NULL; 1379 - } else { 1385 + if (man->using_mob) 1386 + vmw_bo_unreference(&man->cmd_space); 1387 + else 1380 1388 dma_free_coherent(man->dev_priv->drm.dev, 1381 1389 man->size, man->map, man->handle); 1382 - } 1383 1390 } 1384 1391 1385 1392 /**
+20 -16
drivers/gpu/drm/vmwgfx/vmwgfx_context.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 27 27 28 28 #include <drm/ttm/ttm_placement.h> 29 29 30 + #include "vmwgfx_binding.h" 31 + #include "vmwgfx_bo.h" 30 32 #include "vmwgfx_drv.h" 31 33 #include "vmwgfx_resource_priv.h" 32 - #include "vmwgfx_binding.h" 33 34 34 35 struct vmw_user_context { 35 36 struct ttm_base_object base; ··· 39 38 struct vmw_cmdbuf_res_manager *man; 40 39 struct vmw_resource *cotables[SVGA_COTABLE_MAX]; 41 40 spinlock_t cotable_lock; 42 - struct vmw_buffer_object *dx_query_mob; 41 + struct vmw_bo *dx_query_mob; 43 42 }; 44 43 45 44 static void vmw_user_context_free(struct vmw_resource *res); ··· 73 72 74 73 static const struct vmw_res_func vmw_legacy_context_func = { 75 74 .res_type = vmw_res_context, 76 - .needs_backup = false, 75 + .needs_guest_memory = false, 77 76 .may_evict = false, 78 77 .type_name = "legacy contexts", 79 - .backup_placement = NULL, 78 + .domain = VMW_BO_DOMAIN_SYS, 79 + .busy_domain = VMW_BO_DOMAIN_SYS, 80 80 .create = NULL, 81 81 .destroy = NULL, 82 82 .bind = NULL, ··· 86 84 87 85 static const struct vmw_res_func vmw_gb_context_func = { 88 86 .res_type = vmw_res_context, 89 - .needs_backup = true, 87 + .needs_guest_memory = true, 90 88 .may_evict = true, 91 89 .prio = 3, 92 90 .dirty_prio = 3, 93 91 .type_name = "guest backed contexts", 94 - .backup_placement = &vmw_mob_placement, 92 + .domain = VMW_BO_DOMAIN_MOB, 93 + .busy_domain = VMW_BO_DOMAIN_MOB, 95 94 .create = vmw_gb_context_create, 96 95 .destroy = vmw_gb_context_destroy, 97 96 .bind = vmw_gb_context_bind, ··· 101 98 102 99 static const struct vmw_res_func vmw_dx_context_func = { 103 100 
.res_type = vmw_res_dx_context, 104 - .needs_backup = true, 101 + .needs_guest_memory = true, 105 102 .may_evict = true, 106 103 .prio = 3, 107 104 .dirty_prio = 3, 108 105 .type_name = "dx contexts", 109 - .backup_placement = &vmw_mob_placement, 106 + .domain = VMW_BO_DOMAIN_MOB, 107 + .busy_domain = VMW_BO_DOMAIN_MOB, 110 108 .create = vmw_dx_context_create, 111 109 .destroy = vmw_dx_context_destroy, 112 110 .bind = vmw_dx_context_bind, ··· 186 182 struct vmw_user_context *uctx = 187 183 container_of(res, struct vmw_user_context, res); 188 184 189 - res->backup_size = (dx ? sizeof(SVGADXContextMobFormat) : 185 + res->guest_memory_size = (dx ? sizeof(SVGADXContextMobFormat) : 190 186 sizeof(SVGAGBContextData)); 191 187 ret = vmw_resource_init(dev_priv, res, true, 192 188 res_free, ··· 358 354 cmd->header.size = sizeof(cmd->body); 359 355 cmd->body.cid = res->id; 360 356 cmd->body.mobid = bo->resource->start; 361 - cmd->body.validContents = res->backup_dirty; 362 - res->backup_dirty = false; 357 + cmd->body.validContents = res->guest_memory_dirty; 358 + res->guest_memory_dirty = false; 363 359 vmw_cmd_commit(dev_priv, sizeof(*cmd)); 364 360 365 361 return 0; ··· 525 521 cmd->header.size = sizeof(cmd->body); 526 522 cmd->body.cid = res->id; 527 523 cmd->body.mobid = bo->resource->start; 528 - cmd->body.validContents = res->backup_dirty; 529 - res->backup_dirty = false; 524 + cmd->body.validContents = res->guest_memory_dirty; 525 + res->guest_memory_dirty = false; 530 526 vmw_cmd_commit(dev_priv, sizeof(*cmd)); 531 527 532 528 ··· 857 853 * specified in the parameter. 0 otherwise. 
858 854 */ 859 855 int vmw_context_bind_dx_query(struct vmw_resource *ctx_res, 860 - struct vmw_buffer_object *mob) 856 + struct vmw_bo *mob) 861 857 { 862 858 struct vmw_user_context *uctx = 863 859 container_of(ctx_res, struct vmw_user_context, res); ··· 889 885 * 890 886 * @ctx_res: The context resource 891 887 */ 892 - struct vmw_buffer_object * 888 + struct vmw_bo * 893 889 vmw_context_get_dx_query_mob(struct vmw_resource *ctx_res) 894 890 { 895 891 struct vmw_user_context *uctx =
+38 -27
drivers/gpu/drm/vmwgfx/vmwgfx_cotable.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2014-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2014-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 30 30 * whenever the backing MOB is evicted. 31 31 */ 32 32 33 - #include <drm/ttm/ttm_placement.h> 34 - 33 + #include "vmwgfx_bo.h" 35 34 #include "vmwgfx_drv.h" 36 35 #include "vmwgfx_mksstat.h" 37 36 #include "vmwgfx_resource_priv.h" 38 37 #include "vmwgfx_so.h" 38 + 39 + #include <drm/ttm/ttm_placement.h> 39 40 40 41 /** 41 42 * struct vmw_cotable - Context Object Table resource ··· 131 130 132 131 static const struct vmw_res_func vmw_cotable_func = { 133 132 .res_type = vmw_res_cotable, 134 - .needs_backup = true, 133 + .needs_guest_memory = true, 135 134 .may_evict = true, 136 135 .prio = 3, 137 136 .dirty_prio = 3, 138 137 .type_name = "context guest backed object tables", 139 - .backup_placement = &vmw_mob_placement, 138 + .domain = VMW_BO_DOMAIN_MOB, 139 + .busy_domain = VMW_BO_DOMAIN_MOB, 140 140 .create = vmw_cotable_create, 141 141 .destroy = vmw_cotable_destroy, 142 142 .bind = vmw_cotable_bind, ··· 182 180 { 183 181 struct vmw_cotable *vcotbl = vmw_cotable(res); 184 182 struct vmw_private *dev_priv = res->dev_priv; 185 - struct ttm_buffer_object *bo = &res->backup->base; 183 + struct ttm_buffer_object *bo = &res->guest_memory_bo->tbo; 186 184 struct { 187 185 SVGA3dCmdHeader header; 188 186 SVGA3dCmdDXSetCOTable body; ··· 230 228 * take the opportunity to correct the value here so that it's not 231 229 * misused in the future. 
232 230 */ 233 - val_buf->bo = &res->backup->base; 231 + val_buf->bo = &res->guest_memory_bo->tbo; 234 232 235 233 return vmw_cotable_unscrub(res); 236 234 } ··· 291 289 cmd0->body.cid = vcotbl->ctx->id; 292 290 cmd0->body.type = vcotbl->type; 293 291 cmd1 = (void *) &cmd0[1]; 294 - vcotbl->size_read_back = res->backup_size; 292 + vcotbl->size_read_back = res->guest_memory_size; 295 293 } 296 294 cmd1->header.id = SVGA_3D_CMD_DX_SET_COTABLE; 297 295 cmd1->header.size = sizeof(cmd1->body); ··· 373 371 cmd->header.size = sizeof(cmd->body); 374 372 cmd->body.cid = vcotbl->ctx->id; 375 373 cmd->body.type = vcotbl->type; 376 - vcotbl->size_read_back = res->backup_size; 374 + vcotbl->size_read_back = res->guest_memory_size; 377 375 vmw_cmd_commit(dev_priv, sizeof(*cmd)); 378 376 } 379 377 380 378 (void) vmw_execbuf_fence_commands(NULL, dev_priv, &fence, NULL); 381 - vmw_bo_fence_single(&res->backup->base, fence); 379 + vmw_bo_fence_single(&res->guest_memory_bo->tbo, fence); 382 380 vmw_fence_obj_unreference(&fence); 383 381 384 382 return 0; ··· 401 399 struct ttm_operation_ctx ctx = { false, false }; 402 400 struct vmw_private *dev_priv = res->dev_priv; 403 401 struct vmw_cotable *vcotbl = vmw_cotable(res); 404 - struct vmw_buffer_object *buf, *old_buf = res->backup; 405 - struct ttm_buffer_object *bo, *old_bo = &res->backup->base; 406 - size_t old_size = res->backup_size; 402 + struct vmw_bo *buf, *old_buf = res->guest_memory_bo; 403 + struct ttm_buffer_object *bo, *old_bo = &res->guest_memory_bo->tbo; 404 + size_t old_size = res->guest_memory_size; 407 405 size_t old_size_read_back = vcotbl->size_read_back; 408 406 size_t cur_size_read_back; 409 407 struct ttm_bo_kmap_obj old_map, new_map; 410 408 int ret; 411 409 size_t i; 410 + struct vmw_bo_params bo_params = { 411 + .domain = VMW_BO_DOMAIN_MOB, 412 + .busy_domain = VMW_BO_DOMAIN_MOB, 413 + .bo_type = ttm_bo_type_device, 414 + .size = new_size, 415 + .pin = true 416 + }; 412 417 413 418 
MKS_STAT_TIME_DECL(MKSSTAT_KERN_COTABLE_RESIZE); 414 419 MKS_STAT_TIME_PUSH(MKSSTAT_KERN_COTABLE_RESIZE); ··· 432 423 * for the new COTable. Initially pin the buffer object to make sure 433 424 * we can use tryreserve without failure. 434 425 */ 435 - ret = vmw_bo_create(dev_priv, new_size, &vmw_mob_placement, 436 - true, true, vmw_bo_bo_free, &buf); 426 + ret = vmw_bo_create(dev_priv, &bo_params, &buf); 437 427 if (ret) { 438 428 DRM_ERROR("Failed initializing new cotable MOB.\n"); 439 429 goto out_done; 440 430 } 441 431 442 - bo = &buf->base; 432 + bo = &buf->tbo; 443 433 WARN_ON_ONCE(ttm_bo_reserve(bo, false, true, NULL)); 444 434 445 435 ret = ttm_bo_wait(old_bo, false, false); ··· 472 464 } 473 465 474 466 /* Unpin new buffer, and switch backup buffers. */ 475 - ret = ttm_bo_validate(bo, &vmw_mob_placement, &ctx); 467 + vmw_bo_placement_set(buf, 468 + VMW_BO_DOMAIN_MOB, 469 + VMW_BO_DOMAIN_MOB); 470 + ret = ttm_bo_validate(bo, &buf->placement, &ctx); 476 471 if (unlikely(ret != 0)) { 477 472 DRM_ERROR("Failed validating new COTable backup buffer.\n"); 478 473 goto out_wait; 479 474 } 480 475 481 476 vmw_resource_mob_detach(res); 482 - res->backup = buf; 483 - res->backup_size = new_size; 477 + res->guest_memory_bo = buf; 478 + res->guest_memory_size = new_size; 484 479 vcotbl->size_read_back = cur_size_read_back; 485 480 486 481 /* ··· 493 482 ret = vmw_cotable_unscrub(res); 494 483 if (ret) { 495 484 DRM_ERROR("Failed switching COTable backup buffer.\n"); 496 - res->backup = old_buf; 497 - res->backup_size = old_size; 485 + res->guest_memory_bo = old_buf; 486 + res->guest_memory_size = old_size; 498 487 vcotbl->size_read_back = old_size_read_back; 499 488 vmw_resource_mob_attach(res); 500 489 goto out_wait; ··· 509 498 if (unlikely(ret)) 510 499 goto out_wait; 511 500 512 - /* Release the pin acquired in vmw_bo_init */ 501 + /* Release the pin acquired in vmw_bo_create */ 513 502 ttm_bo_unpin(bo); 514 503 515 504 
MKS_STAT_TIME_POP(MKSSTAT_KERN_COTABLE_RESIZE); ··· 544 533 static int vmw_cotable_create(struct vmw_resource *res) 545 534 { 546 535 struct vmw_cotable *vcotbl = vmw_cotable(res); 547 - size_t new_size = res->backup_size; 536 + size_t new_size = res->guest_memory_size; 548 537 size_t needed_size; 549 538 int ret; 550 539 ··· 553 542 while (needed_size > new_size) 554 543 new_size *= 2; 555 544 556 - if (likely(new_size <= res->backup_size)) { 545 + if (likely(new_size <= res->guest_memory_size)) { 557 546 if (vcotbl->scrubbed && vmw_resource_mob_attached(res)) { 558 547 ret = vmw_cotable_unscrub(res); 559 548 if (ret) ··· 617 606 618 607 INIT_LIST_HEAD(&vcotbl->resource_list); 619 608 vcotbl->res.id = type; 620 - vcotbl->res.backup_size = PAGE_SIZE; 609 + vcotbl->res.guest_memory_size = PAGE_SIZE; 621 610 num_entries = PAGE_SIZE / co_info[type].size; 622 611 if (num_entries < co_info[type].min_initial_entries) { 623 - vcotbl->res.backup_size = co_info[type].min_initial_entries * 612 + vcotbl->res.guest_memory_size = co_info[type].min_initial_entries * 624 613 co_info[type].size; 625 - vcotbl->res.backup_size = PFN_ALIGN(vcotbl->res.backup_size); 614 + vcotbl->res.guest_memory_size = PFN_ALIGN(vcotbl->res.guest_memory_size); 626 615 } 627 616 628 617 vcotbl->scrubbed = true;
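The `vmw_cotable_create()` hunk above keeps the existing grow policy: the COTable's guest memory size is doubled until it covers the needed size, and the resource is only resized when the result actually exceeds the current size. As a standalone sketch of that loop (hypothetical helper name, not the driver's API):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Illustrative rewrite of the doubling loop in vmw_cotable_create():
 *
 *     while (needed_size > new_size)
 *             new_size *= 2;
 *
 * cotable_grow_size() is a made-up name for this sketch.
 */
static size_t cotable_grow_size(size_t cur_size, size_t needed_size)
{
	size_t new_size = cur_size;

	while (needed_size > new_size)
		new_size *= 2;
	return new_size;
}
```

Because the size only ever doubles from its current value, a table that is already large enough is returned unchanged, which is what lets the driver skip the expensive resize in the common case.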
+16 -10
drivers/gpu/drm/vmwgfx/vmwgfx_drv.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 28 28 29 29 #include "vmwgfx_drv.h" 30 30 31 + #include "vmwgfx_bo.h" 32 + #include "vmwgfx_binding.h" 31 33 #include "vmwgfx_devcaps.h" 32 34 #include "vmwgfx_mksstat.h" 33 - #include "vmwgfx_binding.h" 34 35 #include "ttm_object.h" 35 36 36 37 #include <drm/drm_aperture.h> ··· 387 386 static int vmw_dummy_query_bo_create(struct vmw_private *dev_priv) 388 387 { 389 388 int ret; 390 - struct vmw_buffer_object *vbo; 389 + struct vmw_bo *vbo; 391 390 struct ttm_bo_kmap_obj map; 392 391 volatile SVGA3dQueryResult *result; 393 392 bool dummy; 393 + struct vmw_bo_params bo_params = { 394 + .domain = VMW_BO_DOMAIN_SYS, 395 + .busy_domain = VMW_BO_DOMAIN_SYS, 396 + .bo_type = ttm_bo_type_kernel, 397 + .size = PAGE_SIZE, 398 + .pin = true 399 + }; 394 400 395 401 /* 396 402 * Create the vbo as pinned, so that a tryreserve will 397 403 * immediately succeed. This is because we're the only 398 404 * user of the bo currently. 
399 405 */ 400 - ret = vmw_bo_create(dev_priv, PAGE_SIZE, 401 - &vmw_sys_placement, false, true, 402 - &vmw_bo_bo_free, &vbo); 406 + ret = vmw_bo_create(dev_priv, &bo_params, &vbo); 403 407 if (unlikely(ret != 0)) 404 408 return ret; 405 409 406 - ret = ttm_bo_reserve(&vbo->base, false, true, NULL); 410 + ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL); 407 411 BUG_ON(ret != 0); 408 412 vmw_bo_pin_reserved(vbo, true); 409 413 410 - ret = ttm_bo_kmap(&vbo->base, 0, 1, &map); 414 + ret = ttm_bo_kmap(&vbo->tbo, 0, 1, &map); 411 415 if (likely(ret == 0)) { 412 416 result = ttm_kmap_obj_virtual(&map, &dummy); 413 417 result->totalSize = sizeof(*result); ··· 421 415 ttm_bo_kunmap(&map); 422 416 } 423 417 vmw_bo_pin_reserved(vbo, false); 424 - ttm_bo_unreserve(&vbo->base); 418 + ttm_bo_unreserve(&vbo->tbo); 425 419 426 420 if (unlikely(ret != 0)) { 427 421 DRM_ERROR("Dummy query buffer map failed.\n"); ··· 1571 1565 .open = drm_open, 1572 1566 .release = drm_release, 1573 1567 .unlocked_ioctl = vmw_unlocked_ioctl, 1574 - .mmap = vmw_mmap, 1568 + .mmap = drm_gem_mmap, 1575 1569 .poll = drm_poll, 1576 1570 .read = drm_read, 1577 1571 #if defined(CONFIG_COMPAT)
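Both the cotable and drv hunks replace `vmw_bo_create()`'s long positional argument list with a single `struct vmw_bo_params` filled via C99 designated initializers. One property this buys (standard C semantics, shown here with an illustrative struct, not the driver's real layout) is that every field the caller omits is implicitly zeroed, so new members can be added without touching existing call sites:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for struct vmw_bo_params; field names mirror the
 * diff but the layout here is a sketch only. */
struct bo_params {
	unsigned int domain;
	unsigned int busy_domain;
	int bo_type;
	size_t size;
	bool pin;
};

/* Returns true when the members omitted from the initializer really did
 * default to zero, as C99 designated initialization guarantees. */
static bool unset_fields_are_zeroed(void)
{
	struct bo_params p = {
		.size = 4096,
		.pin = true,
	};

	return p.domain == 0 && p.busy_domain == 0 && p.bo_type == 0;
}
```

This is why the converted call sites can stay short: `vmw_dummy_query_bo_create()` names only the five fields it cares about and passes `&bo_params` instead of threading each value through the function signature.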
+48 -197
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 117 117 unsigned long key; 118 118 }; 119 119 120 - /** 121 - * struct vmw_buffer_object - TTM buffer object with vmwgfx additions 122 - * @base: The TTM buffer object 123 - * @res_tree: RB tree of resources using this buffer object as a backing MOB 124 - * @base_mapped_count: ttm BO mapping count; used by KMS atomic helpers. 125 - * @cpu_writers: Number of synccpu write grabs. Protected by reservation when 126 - * increased. May be decreased without reservation. 127 - * @dx_query_ctx: DX context if this buffer object is used as a DX query MOB 128 - * @map: Kmap object for semi-persistent mappings 129 - * @res_prios: Eviction priority counts for attached resources 130 - * @dirty: structure for user-space dirty-tracking 131 - */ 132 - struct vmw_buffer_object { 133 - struct ttm_buffer_object base; 134 - struct rb_root res_tree; 135 - /* For KMS atomic helpers: ttm bo mapping count */ 136 - atomic_t base_mapped_count; 137 - 138 - atomic_t cpu_writers; 139 - /* Not ref-counted. Protected by binding_mutex */ 140 - struct vmw_resource *dx_query_ctx; 141 - /* Protected by reservation */ 142 - struct ttm_bo_kmap_obj map; 143 - u32 res_prios[TTM_MAX_BO_PRIORITY]; 144 - struct vmw_bo_dirty *dirty; 145 - }; 146 120 147 121 /** 148 122 * struct vmw_validate_buffer - Carries validation info about buffers. ··· 142 168 * @kref: For refcounting. 143 169 * @dev_priv: Pointer to the device private for this resource. Immutable. 144 170 * @id: Device id. Protected by @dev_priv::resource_lock. 145 - * @backup_size: Backup buffer size. Immutable. 
146 - * @res_dirty: Resource contains data not yet in the backup buffer. Protected 147 - * by resource reserved. 148 - * @backup_dirty: Backup buffer contains data not yet in the HW resource. 171 + * @guest_memory_size: Guest memory buffer size. Immutable. 172 + * @res_dirty: Resource contains data not yet in the guest memory buffer. 149 173 * Protected by resource reserved. 174 + * @guest_memory_dirty: Guest memory buffer contains data not yet in the HW 175 + * resource. Protected by resource reserved. 150 176 * @coherent: Emulate coherency by tracking vm accesses. 151 - * @backup: The backup buffer if any. Protected by resource reserved. 152 - * @backup_offset: Offset into the backup buffer if any. Protected by resource 153 - * reserved. Note that only a few resource types can have a @backup_offset 154 - * different from zero. 177 + * @guest_memory_bo: The guest memory buffer if any. Protected by resource 178 + * reserved. 179 + * @guest_memory_offset: Offset into the guest memory buffer if any. Protected 180 + * by resource reserved. Note that only a few resource types can have a 181 + * @guest_memory_offset different from zero. 155 182 * @pin_count: The pin count for this resource. A pinned resource has a 156 183 * pin-count greater than zero. It is not on the resource LRU lists and its 157 - * backup buffer is pinned. Hence it can't be evicted. 184 + * guest memory buffer is pinned. Hence it can't be evicted. 158 185 * @func: Method vtable for this resource. Immutable. 159 - * @mob_node; Node for the MOB backup rbtree. Protected by @backup reserved. 186 + * @mob_node; Node for the MOB guest memory rbtree. Protected by 187 + * @guest_memory_bo reserved. 160 188 * @lru_head: List head for the LRU list. Protected by @dev_priv::resource_lock. 161 189 * @binding_head: List head for the context binding list. 
Protected by 162 190 * the @dev_priv::binding_mutex ··· 166 190 * @hw_destroy: Callback to destroy the resource on the device, as part of 167 191 * resource destruction. 168 192 */ 193 + struct vmw_bo; 194 + struct vmw_bo; 169 195 struct vmw_resource_dirty; 170 196 struct vmw_resource { 171 197 struct kref kref; 172 198 struct vmw_private *dev_priv; 173 199 int id; 174 200 u32 used_prio; 175 - unsigned long backup_size; 201 + unsigned long guest_memory_size; 176 202 u32 res_dirty : 1; 177 - u32 backup_dirty : 1; 203 + u32 guest_memory_dirty : 1; 178 204 u32 coherent : 1; 179 - struct vmw_buffer_object *backup; 180 - unsigned long backup_offset; 205 + struct vmw_bo *guest_memory_bo; 206 + unsigned long guest_memory_offset; 181 207 unsigned long pin_count; 182 208 const struct vmw_res_func *func; 183 209 struct rb_node mob_node; ··· 424 446 struct drm_file *filp; 425 447 uint32_t *cmd_bounce; 426 448 uint32_t cmd_bounce_size; 427 - struct vmw_buffer_object *cur_query_bo; 449 + struct vmw_bo *cur_query_bo; 428 450 struct list_head bo_relocations; 429 451 struct list_head res_relocations; 430 452 uint32_t *buf_start; ··· 436 458 struct list_head staged_cmd_res; 437 459 struct list_head ctx_list; 438 460 struct vmw_ctx_validation_info *dx_ctx_node; 439 - struct vmw_buffer_object *dx_query_mob; 461 + struct vmw_bo *dx_query_mob; 440 462 struct vmw_resource *dx_query_ctx; 441 463 struct vmw_cmdbuf_res_manager *man; 442 464 struct vmw_validation_context *ctx; ··· 470 492 unsigned num_otables; 471 493 struct vmw_otable *otables; 472 494 struct vmw_resource *context; 473 - struct ttm_buffer_object *otable_bo; 495 + struct vmw_bo *otable_bo; 474 496 }; 475 497 476 498 enum { ··· 610 632 * are protected by the cmdbuf mutex. 
611 633 */ 612 634 613 - struct vmw_buffer_object *dummy_query_bo; 614 - struct vmw_buffer_object *pinned_bo; 635 + struct vmw_bo *dummy_query_bo; 636 + struct vmw_bo *pinned_bo; 615 637 uint32_t query_cid; 616 638 uint32_t query_cid_valid; 617 639 bool dummy_query_bo_pinned; ··· 655 677 #endif 656 678 }; 657 679 658 - static inline struct vmw_buffer_object *gem_to_vmw_bo(struct drm_gem_object *gobj) 659 - { 660 - return container_of((gobj), struct vmw_buffer_object, base.base); 661 - } 662 - 663 680 static inline struct vmw_surface *vmw_res_to_srf(struct vmw_resource *res) 664 681 { 665 682 return container_of(res, struct vmw_surface, res); ··· 663 690 static inline struct vmw_private *vmw_priv(struct drm_device *dev) 664 691 { 665 692 return (struct vmw_private *)dev->dev_private; 693 + } 694 + 695 + static inline struct vmw_private *vmw_priv_from_ttm(struct ttm_device *bdev) 696 + { 697 + return container_of(bdev, struct vmw_private, bdev); 666 698 } 667 699 668 700 static inline struct vmw_fpriv *vmw_fpriv(struct drm_file *file_priv) ··· 803 825 struct drm_file *filp, 804 826 uint32_t handle, 805 827 struct vmw_surface **out_surf, 806 - struct vmw_buffer_object **out_buf); 828 + struct vmw_bo **out_buf); 807 829 extern int vmw_user_resource_lookup_handle( 808 830 struct vmw_private *dev_priv, 809 831 struct ttm_object_file *tfile, ··· 822 844 extern void vmw_resource_unreserve(struct vmw_resource *res, 823 845 bool dirty_set, 824 846 bool dirty, 825 - bool switch_backup, 826 - struct vmw_buffer_object *new_backup, 827 - unsigned long new_backup_offset); 847 + bool switch_guest_memory, 848 + struct vmw_bo *new_guest_memory, 849 + unsigned long new_guest_memory_offset); 828 850 extern void vmw_query_move_notify(struct ttm_buffer_object *bo, 829 851 struct ttm_resource *old_mem, 830 852 struct ttm_resource *new_mem); 831 - extern int vmw_query_readback_all(struct vmw_buffer_object *dx_query_mob); 832 - extern void vmw_resource_evict_all(struct vmw_private 
*dev_priv); 833 - extern void vmw_resource_unbind_list(struct vmw_buffer_object *vbo); 853 + int vmw_query_readback_all(struct vmw_bo *dx_query_mob); 854 + void vmw_resource_evict_all(struct vmw_private *dev_priv); 855 + void vmw_resource_unbind_list(struct vmw_bo *vbo); 834 856 void vmw_resource_mob_attach(struct vmw_resource *res); 835 857 void vmw_resource_mob_detach(struct vmw_resource *res); 836 858 void vmw_resource_dirty_update(struct vmw_resource *res, pgoff_t start, 837 859 pgoff_t end); 838 - int vmw_resources_clean(struct vmw_buffer_object *vbo, pgoff_t start, 860 + int vmw_resources_clean(struct vmw_bo *vbo, pgoff_t start, 839 861 pgoff_t end, pgoff_t *num_prefault); 840 862 841 863 /** ··· 850 872 } 851 873 852 874 /** 853 - * Buffer object helper functions - vmwgfx_bo.c 854 - */ 855 - extern int vmw_bo_pin_in_placement(struct vmw_private *vmw_priv, 856 - struct vmw_buffer_object *bo, 857 - struct ttm_placement *placement, 858 - bool interruptible); 859 - extern int vmw_bo_pin_in_vram(struct vmw_private *dev_priv, 860 - struct vmw_buffer_object *buf, 861 - bool interruptible); 862 - extern int vmw_bo_pin_in_vram_or_gmr(struct vmw_private *dev_priv, 863 - struct vmw_buffer_object *buf, 864 - bool interruptible); 865 - extern int vmw_bo_pin_in_start_of_vram(struct vmw_private *vmw_priv, 866 - struct vmw_buffer_object *bo, 867 - bool interruptible); 868 - extern int vmw_bo_unpin(struct vmw_private *vmw_priv, 869 - struct vmw_buffer_object *bo, 870 - bool interruptible); 871 - extern void vmw_bo_get_guest_ptr(const struct ttm_buffer_object *buf, 872 - SVGAGuestPtr *ptr); 873 - extern void vmw_bo_pin_reserved(struct vmw_buffer_object *bo, bool pin); 874 - extern void vmw_bo_bo_free(struct ttm_buffer_object *bo); 875 - extern int vmw_bo_create_kernel(struct vmw_private *dev_priv, 876 - unsigned long size, 877 - struct ttm_placement *placement, 878 - struct ttm_buffer_object **p_bo); 879 - extern int vmw_bo_create(struct vmw_private *dev_priv, 880 - size_t 
size, struct ttm_placement *placement, 881 - bool interruptible, bool pin, 882 - void (*bo_free)(struct ttm_buffer_object *bo), 883 - struct vmw_buffer_object **p_bo); 884 - extern int vmw_bo_init(struct vmw_private *dev_priv, 885 - struct vmw_buffer_object *vmw_bo, 886 - size_t size, struct ttm_placement *placement, 887 - bool interruptible, bool pin, 888 - void (*bo_free)(struct ttm_buffer_object *bo)); 889 - extern int vmw_bo_unref_ioctl(struct drm_device *dev, void *data, 890 - struct drm_file *file_priv); 891 - extern int vmw_user_bo_synccpu_ioctl(struct drm_device *dev, void *data, 892 - struct drm_file *file_priv); 893 - extern int vmw_user_bo_lookup(struct drm_file *filp, 894 - uint32_t handle, 895 - struct vmw_buffer_object **out); 896 - extern void vmw_bo_fence_single(struct ttm_buffer_object *bo, 897 - struct vmw_fence_obj *fence); 898 - extern void *vmw_bo_map_and_cache(struct vmw_buffer_object *vbo); 899 - extern void vmw_bo_unmap(struct vmw_buffer_object *vbo); 900 - extern void vmw_bo_move_notify(struct ttm_buffer_object *bo, 901 - struct ttm_resource *mem); 902 - extern void vmw_bo_swap_notify(struct ttm_buffer_object *bo); 903 - 904 - /** 905 - * vmw_bo_adjust_prio - Adjust the buffer object eviction priority 906 - * according to attached resources 907 - * @vbo: The struct vmw_buffer_object 908 - */ 909 - static inline void vmw_bo_prio_adjust(struct vmw_buffer_object *vbo) 910 - { 911 - int i = ARRAY_SIZE(vbo->res_prios); 912 - 913 - while (i--) { 914 - if (vbo->res_prios[i]) { 915 - vbo->base.priority = i; 916 - return; 917 - } 918 - } 919 - 920 - vbo->base.priority = 3; 921 - } 922 - 923 - /** 924 - * vmw_bo_prio_add - Notify a buffer object of a newly attached resource 925 - * eviction priority 926 - * @vbo: The struct vmw_buffer_object 927 - * @prio: The resource priority 928 - * 929 - * After being notified, the code assigns the highest resource eviction priority 930 - * to the backing buffer object (mob). 
931 - */ 932 - static inline void vmw_bo_prio_add(struct vmw_buffer_object *vbo, int prio) 933 - { 934 - if (vbo->res_prios[prio]++ == 0) 935 - vmw_bo_prio_adjust(vbo); 936 - } 937 - 938 - /** 939 - * vmw_bo_prio_del - Notify a buffer object of a resource with a certain 940 - * priority being removed 941 - * @vbo: The struct vmw_buffer_object 942 - * @prio: The resource priority 943 - * 944 - * After being notified, the code assigns the highest resource eviction priority 945 - * to the backing buffer object (mob). 946 - */ 947 - static inline void vmw_bo_prio_del(struct vmw_buffer_object *vbo, int prio) 948 - { 949 - if (--vbo->res_prios[prio] == 0) 950 - vmw_bo_prio_adjust(vbo); 951 - } 952 - 953 - /** 954 875 * GEM related functionality - vmwgfx_gem.c 955 876 */ 956 877 extern int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv, 957 878 struct drm_file *filp, 958 879 uint32_t size, 959 880 uint32_t *handle, 960 - struct vmw_buffer_object **p_vbo); 881 + struct vmw_bo **p_vbo); 961 882 extern int vmw_gem_object_create_ioctl(struct drm_device *dev, void *data, 962 883 struct drm_file *filp); 963 - extern void vmw_gem_destroy(struct ttm_buffer_object *bo); 964 884 extern void vmw_debugfs_gem_init(struct vmw_private *vdev); 965 885 966 886 /** ··· 932 1056 } 933 1057 934 1058 /** 935 - * TTM glue - vmwgfx_ttm_glue.c 936 - */ 937 - 938 - extern int vmw_mmap(struct file *filp, struct vm_area_struct *vma); 939 - 940 - /** 941 1059 * TTM buffer object driver - vmwgfx_ttm_buffer.c 942 1060 */ 943 1061 944 1062 extern const size_t vmw_tt_size; 945 1063 extern struct ttm_placement vmw_vram_placement; 946 - extern struct ttm_placement vmw_vram_sys_placement; 947 1064 extern struct ttm_placement vmw_vram_gmr_placement; 948 1065 extern struct ttm_placement vmw_sys_placement; 949 - extern struct ttm_placement vmw_srf_placement; 950 - extern struct ttm_placement vmw_mob_placement; 951 - extern struct ttm_placement vmw_nonfixed_placement; 952 1066 extern struct 
ttm_device_funcs vmw_bo_driver; 953 1067 extern const struct vmw_sg_table * 954 1068 vmw_bo_sg_table(struct ttm_buffer_object *bo); 955 - extern int vmw_bo_create_and_populate(struct vmw_private *dev_priv, 956 - unsigned long bo_size, 957 - struct ttm_buffer_object **bo_p); 1069 + int vmw_bo_create_and_populate(struct vmw_private *dev_priv, 1070 + size_t bo_size, 1071 + u32 domain, 1072 + struct vmw_bo **bo_p); 958 1073 959 1074 extern void vmw_piter_start(struct vmw_piter *viter, 960 1075 const struct vmw_sg_table *vsgt, ··· 1164 1297 extern void vmw_dx_context_scrub_cotables(struct vmw_resource *ctx, 1165 1298 bool readback); 1166 1299 extern int vmw_context_bind_dx_query(struct vmw_resource *ctx_res, 1167 - struct vmw_buffer_object *mob); 1168 - extern struct vmw_buffer_object * 1300 + struct vmw_bo *mob); 1301 + extern struct vmw_bo * 1169 1302 vmw_context_get_dx_query_mob(struct vmw_resource *ctx_res); 1170 1303 1171 1304 ··· 1390 1523 DRM_DEBUG_DRIVER(fmt, ##__VA_ARGS__) 1391 1524 1392 1525 /* Resource dirtying - vmwgfx_page_dirty.c */ 1393 - void vmw_bo_dirty_scan(struct vmw_buffer_object *vbo); 1394 - int vmw_bo_dirty_add(struct vmw_buffer_object *vbo); 1526 + void vmw_bo_dirty_scan(struct vmw_bo *vbo); 1527 + int vmw_bo_dirty_add(struct vmw_bo *vbo); 1395 1528 void vmw_bo_dirty_transfer_to_res(struct vmw_resource *res); 1396 1529 void vmw_bo_dirty_clear_res(struct vmw_resource *res); 1397 - void vmw_bo_dirty_release(struct vmw_buffer_object *vbo); 1398 - void vmw_bo_dirty_unmap(struct vmw_buffer_object *vbo, 1530 + void vmw_bo_dirty_release(struct vmw_bo *vbo); 1531 + void vmw_bo_dirty_unmap(struct vmw_bo *vbo, 1399 1532 pgoff_t start, pgoff_t end); 1400 1533 vm_fault_t vmw_bo_vm_fault(struct vm_fault *vmf); 1401 1534 vm_fault_t vmw_bo_vm_mkwrite(struct vm_fault *vmf); ··· 1426 1559 { 1427 1560 (void) vmw_resource_reference(&srf->res); 1428 1561 return srf; 1429 - } 1430 - 1431 - static inline void vmw_bo_unreference(struct vmw_buffer_object **buf) 1432 - 
{ 1433 - struct vmw_buffer_object *tmp_buf = *buf; 1434 - 1435 - *buf = NULL; 1436 - if (tmp_buf != NULL) 1437 - ttm_bo_put(&tmp_buf->base); 1438 - } 1439 - 1440 - static inline struct vmw_buffer_object * 1441 - vmw_bo_reference(struct vmw_buffer_object *buf) 1442 - { 1443 - ttm_bo_get(&buf->base); 1444 - return buf; 1445 1562 } 1446 1563 1447 1564 static inline void vmw_fifo_resource_inc(struct vmw_private *dev_priv)
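The `vmw_bo_prio_adjust()/add()/del()` inlines deleted from vmwgfx_drv.h above (moved out of this header by the series) implement a small piece of bookkeeping worth keeping in mind when reading the rest of the diff: the BO keeps a per-priority count of attached resources and always exposes the highest priority that still has a user, falling back to 3 when none remain. A self-contained rewrite of that logic, with illustrative names:

```c
#include <assert.h>

#define MAX_PRIO 4

/* Stand-in for the res_prios[] array and ttm priority field on the BO. */
struct prio_tracker {
	int counts[MAX_PRIO];
	int priority;
};

/* Pick the highest priority with a nonzero count; default to 3. */
static void prio_adjust(struct prio_tracker *t)
{
	int i = MAX_PRIO;

	while (i--) {
		if (t->counts[i]) {
			t->priority = i;
			return;
		}
	}
	t->priority = 3;
}

/* Re-derive the priority only on 0 -> 1 and 1 -> 0 count transitions,
 * exactly as the removed helpers did. */
static void prio_add(struct prio_tracker *t, int prio)
{
	if (t->counts[prio]++ == 0)
		prio_adjust(t);
}

static void prio_del(struct prio_tracker *t, int prio)
{
	if (--t->counts[prio] == 0)
		prio_adjust(t);
}
```

Adjusting only on the count transitions keeps attach/detach O(1) in the common case; the linear scan runs only when a priority bucket becomes empty or newly occupied.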
+58 -47
drivers/gpu/drm/vmwgfx/vmwgfx_execbuf.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009 - 2022 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009 - 2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 24 24 * USE OR OTHER DEALINGS IN THE SOFTWARE. 25 25 * 26 26 **************************************************************************/ 27 - #include <linux/sync_file.h> 28 - #include <linux/hashtable.h> 29 - 27 + #include "vmwgfx_binding.h" 28 + #include "vmwgfx_bo.h" 30 29 #include "vmwgfx_drv.h" 31 - #include "vmwgfx_reg.h" 30 + #include "vmwgfx_mksstat.h" 31 + #include "vmwgfx_so.h" 32 + 32 33 #include <drm/ttm/ttm_bo.h> 33 34 #include <drm/ttm/ttm_placement.h> 34 - #include "vmwgfx_so.h" 35 - #include "vmwgfx_binding.h" 36 - #include "vmwgfx_mksstat.h" 37 35 36 + #include <linux/sync_file.h> 37 + #include <linux/hashtable.h> 38 38 39 39 /* 40 40 * Helper macro to get dx_ctx_node if available otherwise print an error ··· 65 65 */ 66 66 struct vmw_relocation { 67 67 struct list_head head; 68 - struct vmw_buffer_object *vbo; 68 + struct vmw_bo *vbo; 69 69 union { 70 70 SVGAMobId *mob_loc; 71 71 SVGAGuestPtr *location; ··· 149 149 static int vmw_translate_mob_ptr(struct vmw_private *dev_priv, 150 150 struct vmw_sw_context *sw_context, 151 151 SVGAMobId *id, 152 - struct vmw_buffer_object **vmw_bo_p); 152 + struct vmw_bo **vmw_bo_p); 153 153 /** 154 154 * vmw_ptr_diff - Compute the offset from a to b in bytes 155 155 * ··· 475 475 476 476 if (has_sm4_context(dev_priv) && 477 477 vmw_res_type(ctx) == vmw_res_dx_context) { 478 - struct vmw_buffer_object *dx_query_mob; 478 + struct vmw_bo *dx_query_mob; 479 479 480 480 dx_query_mob = vmw_context_get_dx_query_mob(ctx); 481 - if (dx_query_mob) 481 + if (dx_query_mob) { 482 + vmw_bo_placement_set(dx_query_mob, 
483 + VMW_BO_DOMAIN_MOB, 484 + VMW_BO_DOMAIN_MOB); 482 485 ret = vmw_validation_add_bo(sw_context->ctx, 483 - dx_query_mob, true, false); 486 + dx_query_mob); 487 + } 484 488 } 485 489 486 490 mutex_unlock(&dev_priv->binding_mutex); ··· 600 596 return ret; 601 597 602 598 if (sw_context->dx_query_mob) { 603 - struct vmw_buffer_object *expected_dx_query_mob; 599 + struct vmw_bo *expected_dx_query_mob; 604 600 605 601 expected_dx_query_mob = 606 602 vmw_context_get_dx_query_mob(sw_context->dx_query_ctx); ··· 707 703 static int vmw_rebind_all_dx_query(struct vmw_resource *ctx_res) 708 704 { 709 705 struct vmw_private *dev_priv = ctx_res->dev_priv; 710 - struct vmw_buffer_object *dx_query_mob; 706 + struct vmw_bo *dx_query_mob; 711 707 VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXBindAllQuery); 712 708 713 709 dx_query_mob = vmw_context_get_dx_query_mob(ctx_res); ··· 722 718 cmd->header.id = SVGA_3D_CMD_DX_BIND_ALL_QUERY; 723 719 cmd->header.size = sizeof(cmd->body); 724 720 cmd->body.cid = ctx_res->id; 725 - cmd->body.mobid = dx_query_mob->base.resource->start; 721 + cmd->body.mobid = dx_query_mob->tbo.resource->start; 726 722 vmw_cmd_commit(dev_priv, sizeof(*cmd)); 727 723 728 724 vmw_context_bind_dx_query(ctx_res, dx_query_mob); ··· 1021 1017 * after successful submission of the current command batch. 
1022 1018 */ 1023 1019 static int vmw_query_bo_switch_prepare(struct vmw_private *dev_priv, 1024 - struct vmw_buffer_object *new_query_bo, 1020 + struct vmw_bo *new_query_bo, 1025 1021 struct vmw_sw_context *sw_context) 1026 1022 { 1027 1023 struct vmw_res_cache_entry *ctx_entry = ··· 1033 1029 1034 1030 if (unlikely(new_query_bo != sw_context->cur_query_bo)) { 1035 1031 1036 - if (unlikely(PFN_UP(new_query_bo->base.resource->size) > 4)) { 1032 + if (unlikely(PFN_UP(new_query_bo->tbo.resource->size) > 4)) { 1037 1033 VMW_DEBUG_USER("Query buffer too large.\n"); 1038 1034 return -EINVAL; 1039 1035 } 1040 1036 1041 1037 if (unlikely(sw_context->cur_query_bo != NULL)) { 1042 1038 sw_context->needs_post_query_barrier = true; 1039 + vmw_bo_placement_set_default_accelerated(sw_context->cur_query_bo); 1043 1040 ret = vmw_validation_add_bo(sw_context->ctx, 1044 - sw_context->cur_query_bo, 1045 - dev_priv->has_mob, false); 1041 + sw_context->cur_query_bo); 1046 1042 if (unlikely(ret != 0)) 1047 1043 return ret; 1048 1044 } 1049 1045 sw_context->cur_query_bo = new_query_bo; 1050 1046 1047 + vmw_bo_placement_set_default_accelerated(dev_priv->dummy_query_bo); 1051 1048 ret = vmw_validation_add_bo(sw_context->ctx, 1052 - dev_priv->dummy_query_bo, 1053 - dev_priv->has_mob, false); 1049 + dev_priv->dummy_query_bo); 1054 1050 if (unlikely(ret != 0)) 1055 1051 return ret; 1056 1052 } ··· 1149 1145 static int vmw_translate_mob_ptr(struct vmw_private *dev_priv, 1150 1146 struct vmw_sw_context *sw_context, 1151 1147 SVGAMobId *id, 1152 - struct vmw_buffer_object **vmw_bo_p) 1148 + struct vmw_bo **vmw_bo_p) 1153 1149 { 1154 - struct vmw_buffer_object *vmw_bo; 1150 + struct vmw_bo *vmw_bo; 1155 1151 uint32_t handle = *id; 1156 1152 struct vmw_relocation *reloc; 1157 1153 int ret; ··· 1162 1158 drm_dbg(&dev_priv->drm, "Could not find or use MOB buffer.\n"); 1163 1159 return PTR_ERR(vmw_bo); 1164 1160 } 1165 - ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, true, false); 1166 - 
ttm_bo_put(&vmw_bo->base); 1167 - drm_gem_object_put(&vmw_bo->base.base); 1161 + vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_MOB, VMW_BO_DOMAIN_MOB); 1162 + ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo); 1163 + ttm_bo_put(&vmw_bo->tbo); 1164 + drm_gem_object_put(&vmw_bo->tbo.base); 1168 1165 if (unlikely(ret != 0)) 1169 1166 return ret; 1170 1167 ··· 1205 1200 static int vmw_translate_guest_ptr(struct vmw_private *dev_priv, 1206 1201 struct vmw_sw_context *sw_context, 1207 1202 SVGAGuestPtr *ptr, 1208 - struct vmw_buffer_object **vmw_bo_p) 1203 + struct vmw_bo **vmw_bo_p) 1209 1204 { 1210 - struct vmw_buffer_object *vmw_bo; 1205 + struct vmw_bo *vmw_bo; 1211 1206 uint32_t handle = ptr->gmrId; 1212 1207 struct vmw_relocation *reloc; 1213 1208 int ret; ··· 1218 1213 drm_dbg(&dev_priv->drm, "Could not find or use GMR region.\n"); 1219 1214 return PTR_ERR(vmw_bo); 1220 1215 } 1221 - ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo, false, false); 1222 - ttm_bo_put(&vmw_bo->base); 1223 - drm_gem_object_put(&vmw_bo->base.base); 1216 + vmw_bo_placement_set(vmw_bo, VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM, 1217 + VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM); 1218 + ret = vmw_validation_add_bo(sw_context->ctx, vmw_bo); 1219 + ttm_bo_put(&vmw_bo->tbo); 1220 + drm_gem_object_put(&vmw_bo->tbo.base); 1224 1221 if (unlikely(ret != 0)) 1225 1222 return ret; 1226 1223 ··· 1287 1280 SVGA3dCmdHeader *header) 1288 1281 { 1289 1282 VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdDXBindQuery); 1290 - struct vmw_buffer_object *vmw_bo; 1283 + struct vmw_bo *vmw_bo; 1291 1284 int ret; 1292 1285 1293 1286 cmd = container_of(header, typeof(*cmd), header); ··· 1370 1363 struct vmw_sw_context *sw_context, 1371 1364 SVGA3dCmdHeader *header) 1372 1365 { 1373 - struct vmw_buffer_object *vmw_bo; 1366 + struct vmw_bo *vmw_bo; 1374 1367 VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdEndGBQuery); 1375 1368 int ret; 1376 1369 ··· 1400 1393 struct vmw_sw_context *sw_context, 1401 1394 SVGA3dCmdHeader *header) 1402 1395 { 
-	struct vmw_buffer_object *vmw_bo;
+	struct vmw_bo *vmw_bo;
 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdEndQuery);
 	int ret;
···
 		struct vmw_sw_context *sw_context,
 		SVGA3dCmdHeader *header)
 {
-	struct vmw_buffer_object *vmw_bo;
+	struct vmw_bo *vmw_bo;
 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdWaitForGBQuery);
 	int ret;
···
 		struct vmw_sw_context *sw_context,
 		SVGA3dCmdHeader *header)
 {
-	struct vmw_buffer_object *vmw_bo;
+	struct vmw_bo *vmw_bo;
 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdWaitForQuery);
 	int ret;
···
 		struct vmw_sw_context *sw_context,
 		SVGA3dCmdHeader *header)
 {
-	struct vmw_buffer_object *vmw_bo = NULL;
+	struct vmw_bo *vmw_bo = NULL;
 	struct vmw_surface *srf = NULL;
 	VMW_DECLARE_CMD_VAR(*cmd, SVGA3dCmdSurfaceDMA);
 	int ret;
···
 		return ret;

 	/* Make sure DMA doesn't cross BO boundaries. */
-	bo_size = vmw_bo->base.base.size;
+	bo_size = vmw_bo->tbo.base.size;
 	if (unlikely(cmd->body.guest.ptr.offset > bo_size)) {
 		VMW_DEBUG_USER("Invalid DMA offset.\n");
 		return -EINVAL;
···
 	srf = vmw_res_to_srf(sw_context->res_cache[vmw_res_surface].res);

-	vmw_kms_cursor_snoop(srf, sw_context->fp->tfile, &vmw_bo->base, header);
+	vmw_kms_cursor_snoop(srf, sw_context->fp->tfile, &vmw_bo->tbo, header);

 	return 0;
 }
···
 		struct vmw_sw_context *sw_context,
 		void *buf)
 {
-	struct vmw_buffer_object *vmw_bo;
+	struct vmw_bo *vmw_bo;

 	struct {
 		uint32_t header;
···
 		struct vmw_resource *res, uint32_t *buf_id,
 		unsigned long backup_offset)
 {
-	struct vmw_buffer_object *vbo;
+	struct vmw_bo *vbo;
 	void *info;
 	int ret;
···
 	struct ttm_buffer_object *bo;

 	list_for_each_entry(reloc, &sw_context->bo_relocations, head) {
-		bo = &reloc->vbo->base;
+		bo = &reloc->vbo->tbo;
 		switch (bo->resource->mem_type) {
 		case TTM_PL_VRAM:
 			reloc->location->offset += bo->resource->start << PAGE_SHIFT;
···
 	if (dev_priv->pinned_bo == NULL)
 		goto out_unlock;

-	ret = vmw_validation_add_bo(&val_ctx, dev_priv->pinned_bo, false,
-				    false);
+	vmw_bo_placement_set(dev_priv->pinned_bo,
+			     VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM,
+			     VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM);
+	ret = vmw_validation_add_bo(&val_ctx, dev_priv->pinned_bo);
 	if (ret)
 		goto out_no_reserve;

-	ret = vmw_validation_add_bo(&val_ctx, dev_priv->dummy_query_bo, false,
-				    false);
+	vmw_bo_placement_set(dev_priv->dummy_query_bo,
+			     VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM,
+			     VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM);
+	ret = vmw_validation_add_bo(&val_ctx, dev_priv->dummy_query_bo);
 	if (ret)
 		goto out_no_reserve;
+1 -1
drivers/gpu/drm/vmwgfx/vmwgfx_fence.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2011-2014 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2011-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
+35 -54
drivers/gpu/drm/vmwgfx/vmwgfx_gem.c
···
 /* SPDX-License-Identifier: GPL-2.0 OR MIT */
 /*
- * Copyright 2021 VMware, Inc.
+ * Copyright 2021-2023 VMware, Inc.
  *
  * Permission is hereby granted, free of charge, to any person
  * obtaining a copy of this software and associated documentation
···
  *
  */

+#include "vmwgfx_bo.h"
 #include "vmwgfx_drv.h"

 #include "drm/drm_prime.h"
 #include "drm/drm_gem_ttm_helper.h"

-/**
- * vmw_buffer_object - Convert a struct ttm_buffer_object to a struct
- * vmw_buffer_object.
- *
- * @bo: Pointer to the TTM buffer object.
- * Return: Pointer to the struct vmw_buffer_object embedding the
- * TTM buffer object.
- */
-static struct vmw_buffer_object *
-vmw_buffer_object(struct ttm_buffer_object *bo)
-{
-	return container_of(bo, struct vmw_buffer_object, base);
-}
-
 static void vmw_gem_object_free(struct drm_gem_object *gobj)
 {
 	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(gobj);
-	if (bo) {
+	if (bo)
 		ttm_bo_put(bo);
-	}
 }

 static int vmw_gem_object_open(struct drm_gem_object *obj,
···
 static int vmw_gem_pin_private(struct drm_gem_object *obj, bool do_pin)
 {
 	struct ttm_buffer_object *bo = drm_gem_ttm_of_gem(obj);
-	struct vmw_buffer_object *vbo = vmw_buffer_object(bo);
+	struct vmw_bo *vbo = to_vmw_bo(obj);
 	int ret;

 	ret = ttm_bo_reserve(bo, false, false, NULL);
···
 	return drm_prime_pages_to_sg(obj->dev, vmw_tt->dma_ttm.pages, vmw_tt->dma_ttm.num_pages);
 }

+static const struct vm_operations_struct vmw_vm_ops = {
+	.pfn_mkwrite = vmw_bo_vm_mkwrite,
+	.page_mkwrite = vmw_bo_vm_mkwrite,
+	.fault = vmw_bo_vm_fault,
+	.open = ttm_bo_vm_open,
+	.close = ttm_bo_vm_close,
+};

 static const struct drm_gem_object_funcs vmw_gem_object_funcs = {
 	.free = vmw_gem_object_free,
···
 	.vmap = drm_gem_ttm_vmap,
 	.vunmap = drm_gem_ttm_vunmap,
 	.mmap = drm_gem_ttm_mmap,
+	.vm_ops = &vmw_vm_ops,
 };
-
-/**
- * vmw_gem_destroy - vmw buffer object destructor
- *
- * @bo: Pointer to the embedded struct ttm_buffer_object
- */
-void vmw_gem_destroy(struct ttm_buffer_object *bo)
-{
-	struct vmw_buffer_object *vbo = vmw_buffer_object(bo);
-
-	WARN_ON(vbo->dirty);
-	WARN_ON(!RB_EMPTY_ROOT(&vbo->res_tree));
-	vmw_bo_unmap(vbo);
-	drm_gem_object_release(&vbo->base.base);
-	kfree(vbo);
-}

 int vmw_gem_object_create_with_handle(struct vmw_private *dev_priv,
 				      struct drm_file *filp,
 				      uint32_t size,
 				      uint32_t *handle,
-				      struct vmw_buffer_object **p_vbo)
+				      struct vmw_bo **p_vbo)
 {
 	int ret;
+	struct vmw_bo_params params = {
+		.domain = (dev_priv->has_mob) ? VMW_BO_DOMAIN_SYS : VMW_BO_DOMAIN_VRAM,
+		.busy_domain = VMW_BO_DOMAIN_SYS,
+		.bo_type = ttm_bo_type_device,
+		.size = size,
+		.pin = false
+	};

-	ret = vmw_bo_create(dev_priv, size,
-			    (dev_priv->has_mob) ?
-				    &vmw_sys_placement :
-				    &vmw_vram_sys_placement,
-			    true, false, &vmw_gem_destroy, p_vbo);
+	ret = vmw_bo_create(dev_priv, &params, p_vbo);
 	if (ret != 0)
 		goto out_no_bo;

-	(*p_vbo)->base.base.funcs = &vmw_gem_object_funcs;
+	(*p_vbo)->tbo.base.funcs = &vmw_gem_object_funcs;

-	ret = drm_gem_handle_create(filp, &(*p_vbo)->base.base, handle);
+	ret = drm_gem_handle_create(filp, &(*p_vbo)->tbo.base, handle);
 out_no_bo:
 	return ret;
 }
···
 		(union drm_vmw_alloc_dmabuf_arg *)data;
 	struct drm_vmw_alloc_dmabuf_req *req = &arg->req;
 	struct drm_vmw_dmabuf_rep *rep = &arg->rep;
-	struct vmw_buffer_object *vbo;
+	struct vmw_bo *vbo;
 	uint32_t handle;
 	int ret;
···
 		goto out_no_bo;

 	rep->handle = handle;
-	rep->map_handle = drm_vma_node_offset_addr(&vbo->base.base.vma_node);
+	rep->map_handle = drm_vma_node_offset_addr(&vbo->tbo.base.vma_node);
 	rep->cur_gmr_id = handle;
 	rep->cur_gmr_offset = 0;
 	/* drop reference from allocate - handle holds it now */
-	drm_gem_object_put(&vbo->base.base);
+	drm_gem_object_put(&vbo->tbo.base);
 out_no_bo:
 	return ret;
 }

 #if defined(CONFIG_DEBUG_FS)

-static void vmw_bo_print_info(int id, struct vmw_buffer_object *bo, struct seq_file *m)
+static void vmw_bo_print_info(int id, struct vmw_bo *bo, struct seq_file *m)
 {
 	const char *placement;
 	const char *type;

-	switch (bo->base.resource->mem_type) {
+	switch (bo->tbo.resource->mem_type) {
 	case TTM_PL_SYSTEM:
 		placement = " CPU";
 		break;
···
 		break;
 	}

-	switch (bo->base.type) {
+	switch (bo->tbo.type) {
 	case ttm_bo_type_device:
 		type = "device";
 		break;
···
 	}

 	seq_printf(m, "\t\t0x%08x: %12zu bytes %s, type = %s",
-		   id, bo->base.base.size, placement, type);
+		   id, bo->tbo.base.size, placement, type);
 	seq_printf(m, ", priority = %u, pin_count = %u, GEM refs = %d, TTM refs = %d",
-		   bo->base.priority,
-		   bo->base.pin_count,
-		   kref_read(&bo->base.base.refcount),
-		   kref_read(&bo->base.kref));
+		   bo->tbo.priority,
+		   bo->tbo.pin_count,
+		   kref_read(&bo->tbo.base.refcount),
+		   kref_read(&bo->tbo.kref));
 	seq_puts(m, "\n");
 }
···
 	spin_lock(&file->table_lock);
 	idr_for_each_entry(&file->object_idr, gobj, id) {
-		struct vmw_buffer_object *bo = gem_to_vmw_bo(gobj);
+		struct vmw_bo *bo = to_vmw_bo(gobj);

 		vmw_bo_print_info(id, bo, m);
 	}
+77 -155
drivers/gpu/drm/vmwgfx/vmwgfx_kms.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
  * USE OR OTHER DEALINGS IN THE SOFTWARE.
  *
  **************************************************************************/
-
 #include "vmwgfx_kms.h"
+
+#include "vmwgfx_bo.h"
 #include "vmw_surface_cache.h"

 #include <drm/drm_atomic.h>
···
 	SVGAGBCursorHeader *header;
 	SVGAGBAlphaCursorHeader *alpha_header;
 	const u32 image_size = width * height * sizeof(*image);
-	bool dummy;

-	header = ttm_kmap_obj_virtual(&vps->cursor.map, &dummy);
+	header = vmw_bo_map_and_cache(vps->cursor.bo);
 	alpha_header = &header->header.alphaHeader;

 	memset(header, 0, sizeof(*header));
···
 	memcpy(header + 1, image, image_size);
 	vmw_write(dev_priv, SVGA_REG_CURSOR_MOBID,
-		  vps->cursor.bo->resource->start);
+		  vps->cursor.bo->tbo.resource->start);
 }

···
  */
 static u32 *vmw_du_cursor_plane_acquire_image(struct vmw_plane_state *vps)
 {
-	bool dummy;
+	bool is_iomem;
 	if (vps->surf) {
 		if (vps->surf_mapped)
-			return vmw_bo_map_and_cache(vps->surf->res.backup);
+			return vmw_bo_map_and_cache(vps->surf->res.guest_memory_bo);
 		return vps->surf->snooper.image;
 	} else if (vps->bo)
-		return ttm_kmap_obj_virtual(&vps->bo->map, &dummy);
+		return ttm_kmap_obj_virtual(&vps->bo->map, &is_iomem);
 	return NULL;
 }
···
 	return changed;
 }

-static void vmw_du_destroy_cursor_mob(struct ttm_buffer_object **bo)
+static void vmw_du_destroy_cursor_mob(struct vmw_bo **vbo)
 {
-	if (!(*bo))
+	if (!(*vbo))
 		return;

-	ttm_bo_unpin(*bo);
-	ttm_bo_put(*bo);
-	kfree(*bo);
-	*bo = NULL;
+	ttm_bo_unpin(&(*vbo)->tbo);
+	vmw_bo_unreference(vbo);
 }

 static void vmw_du_put_cursor_mob(struct vmw_cursor_plane *vcp,
···
 	/* Cache is full: See if this mob is bigger than an existing mob. */
 	for (i = 0; i < ARRAY_SIZE(vcp->cursor_mobs); i++) {
-		if (vcp->cursor_mobs[i]->base.size <
-		    vps->cursor.bo->base.size) {
+		if (vcp->cursor_mobs[i]->tbo.base.size <
+		    vps->cursor.bo->tbo.base.size) {
 			vmw_du_destroy_cursor_mob(&vcp->cursor_mobs[i]);
 			vcp->cursor_mobs[i] = vps->cursor.bo;
 			vps->cursor.bo = NULL;
···
 		return -EINVAL;

 	if (vps->cursor.bo) {
-		if (vps->cursor.bo->base.size >= size)
+		if (vps->cursor.bo->tbo.base.size >= size)
 			return 0;
 		vmw_du_put_cursor_mob(vcp, vps);
 	}
···
 	/* Look for an unused mob in the cache. */
 	for (i = 0; i < ARRAY_SIZE(vcp->cursor_mobs); i++) {
 		if (vcp->cursor_mobs[i] &&
-		    vcp->cursor_mobs[i]->base.size >= size) {
+		    vcp->cursor_mobs[i]->tbo.base.size >= size) {
 			vps->cursor.bo = vcp->cursor_mobs[i];
 			vcp->cursor_mobs[i] = NULL;
 			return 0;
 		}
 	}

 	/* Create a new mob if we can't find an existing one. */
-	ret = vmw_bo_create_kernel(dev_priv, size, &vmw_mob_placement,
-				   &vps->cursor.bo);
+	ret = vmw_bo_create_and_populate(dev_priv, size,
+					 VMW_BO_DOMAIN_MOB,
+					 &vps->cursor.bo);

 	if (ret != 0)
 		return ret;

 	/* Fence the mob creation so we are guarateed to have the mob */
-	ret = ttm_bo_reserve(vps->cursor.bo, false, false, NULL);
+	ret = ttm_bo_reserve(&vps->cursor.bo->tbo, false, false, NULL);
 	if (ret != 0)
 		goto teardown;

-	vmw_bo_fence_single(vps->cursor.bo, NULL);
-	ttm_bo_unreserve(vps->cursor.bo);
+	vmw_bo_fence_single(&vps->cursor.bo->tbo, NULL);
+	ttm_bo_unreserve(&vps->cursor.bo->tbo);
 	return 0;

 teardown:
···
 	SVGA3dCopyBox *box;
 	unsigned box_count;
 	void *virtual;
-	bool dummy;
+	bool is_iomem;
 	struct vmw_dma_cmd {
 		SVGA3dCmdHeader header;
 		SVGA3dCmdSurfaceDMA dma;
···
 	if (unlikely(ret != 0))
 		goto err_unreserve;

-	virtual = ttm_kmap_obj_virtual(&map, &dummy);
+	virtual = ttm_kmap_obj_virtual(&map, &is_iomem);

 	if (box->w == VMW_CURSOR_SNOOP_WIDTH && cmd->dma.guest.pitch == image_pitch) {
 		memcpy(srf->snooper.image, virtual,
···
 {
 	int ret;
 	u32 size = vmw_du_cursor_mob_size(vps->base.crtc_w, vps->base.crtc_h);
-	struct ttm_buffer_object *bo = vps->cursor.bo;
+	struct ttm_buffer_object *bo;

-	if (!bo)
+	if (!vps->cursor.bo)
 		return -EINVAL;
+
+	bo = &vps->cursor.bo->tbo;

 	if (bo->base.size < size)
 		return -EINVAL;

-	if (vps->cursor.mapped)
+	if (vps->cursor.bo->map.virtual)
 		return 0;

 	ret = ttm_bo_reserve(bo, false, false, NULL);
-
 	if (unlikely(ret != 0))
 		return -ENOMEM;

-	ret = ttm_bo_kmap(bo, 0, PFN_UP(size), &vps->cursor.map);
-
-	/*
-	 * We just want to try to get mob bind to finish
-	 * so that the first write to SVGA_REG_CURSOR_MOBID
-	 * is done with a buffer that the device has already
-	 * seen
-	 */
-	(void) ttm_bo_wait(bo, false, false);
+	vmw_bo_map_and_cache(vps->cursor.bo);

 	ttm_bo_unreserve(bo);

 	if (unlikely(ret != 0))
 		return -ENOMEM;
-
-	vps->cursor.mapped = true;

 	return 0;
 }
···
 vmw_du_cursor_plane_unmap_cm(struct vmw_plane_state *vps)
 {
 	int ret = 0;
-	struct ttm_buffer_object *bo = vps->cursor.bo;
+	struct vmw_bo *vbo = vps->cursor.bo;

-	if (!vps->cursor.mapped)
+	if (!vbo || !vbo->map.virtual)
 		return 0;

-	if (!bo)
-		return 0;
-
-	ret = ttm_bo_reserve(bo, true, false, NULL);
+	ret = ttm_bo_reserve(&vbo->tbo, true, false, NULL);
 	if (likely(ret == 0)) {
-		ttm_bo_kunmap(&vps->cursor.map);
-		ttm_bo_unreserve(bo);
-		vps->cursor.mapped = false;
+		vmw_bo_unmap(vbo);
+		ttm_bo_unreserve(&vbo->tbo);
 	}

 	return ret;
···
 {
 	struct vmw_cursor_plane *vcp = vmw_plane_to_vcp(plane);
 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(old_state);
-	bool dummy;
+	bool is_iomem;

 	if (vps->surf_mapped) {
-		vmw_bo_unmap(vps->surf->res.backup);
+		vmw_bo_unmap(vps->surf->res.guest_memory_bo);
 		vps->surf_mapped = false;
 	}

-	if (vps->bo && ttm_kmap_obj_virtual(&vps->bo->map, &dummy)) {
-		const int ret = ttm_bo_reserve(&vps->bo->base, true, false, NULL);
+	if (vps->bo && ttm_kmap_obj_virtual(&vps->bo->map, &is_iomem)) {
+		const int ret = ttm_bo_reserve(&vps->bo->tbo, true, false, NULL);

 		if (likely(ret == 0)) {
-			if (atomic_read(&vps->bo->base_mapped_count) == 0)
-				ttm_bo_kunmap(&vps->bo->map);
-			ttm_bo_unreserve(&vps->bo->base);
+			ttm_bo_kunmap(&vps->bo->map);
+			ttm_bo_unreserve(&vps->bo->tbo);
 		}
 	}

···
 		 * reserve the ttm_buffer_object first which
 		 * vmw_bo_map_and_cache() omits.
 		 */
-		ret = ttm_bo_reserve(&vps->bo->base, true, false, NULL);
+		ret = ttm_bo_reserve(&vps->bo->tbo, true, false, NULL);

 		if (unlikely(ret != 0))
 			return -ENOMEM;

-		ret = ttm_bo_kmap(&vps->bo->base, 0, PFN_UP(size), &vps->bo->map);
+		ret = ttm_bo_kmap(&vps->bo->tbo, 0, PFN_UP(size), &vps->bo->map);

-		if (likely(ret == 0))
-			atomic_inc(&vps->bo->base_mapped_count);
-
-		ttm_bo_unreserve(&vps->bo->base);
+		ttm_bo_unreserve(&vps->bo->tbo);

 		if (unlikely(ret != 0))
 			return -ENOMEM;
-	} else if (vps->surf && !vps->bo && vps->surf->res.backup) {
+	} else if (vps->surf && !vps->bo && vps->surf->res.guest_memory_bo) {

 		WARN_ON(vps->surf->snooper.image);
-		ret = ttm_bo_reserve(&vps->surf->res.backup->base, true, false,
+		ret = ttm_bo_reserve(&vps->surf->res.guest_memory_bo->tbo, true, false,
 				     NULL);
 		if (unlikely(ret != 0))
 			return -ENOMEM;
-		vmw_bo_map_and_cache(vps->surf->res.backup);
-		ttm_bo_unreserve(&vps->surf->res.backup->base);
+		vmw_bo_map_and_cache(vps->surf->res.guest_memory_bo);
+		ttm_bo_unreserve(&vps->surf->res.guest_memory_bo->tbo);
 		vps->surf_mapped = true;
 	}

···
 	struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state);
 	struct vmw_plane_state *old_vps = vmw_plane_state_to_vps(old_state);
 	s32 hotspot_x, hotspot_y;
-	bool dummy;

 	hotspot_x = du->hotspot_x;
 	hotspot_y = du->hotspot_y;
···
 					  new_state->crtc_w,
 					  new_state->crtc_h,
 					  hotspot_x, hotspot_y);
-	}
-
-	if (vps->bo) {
-		if (ttm_kmap_obj_virtual(&vps->bo->map, &dummy))
-			atomic_dec(&vps->bo->base_mapped_count);
 	}

 	du->cursor_x = new_state->crtc_x + du->set_gui_x;
···
 	WARN_ON(!surface);

 	if (!surface ||
-	    (!surface->snooper.image && !surface->res.backup)) {
+	    (!surface->snooper.image && !surface->res.guest_memory_bo)) {
 		DRM_ERROR("surface not suitable for cursor\n");
 		return -EINVAL;
 	}
···
 						user_fence_rep, vclips, num_clips,
 						NULL);
 	case vmw_du_screen_target:
-		return vmw_kms_stdu_dma(dev_priv, file_priv, vfb,
-					user_fence_rep, NULL, vclips, num_clips,
-					1, false, true, NULL);
+		return vmw_kms_stdu_readback(dev_priv, file_priv, vfb,
+					     user_fence_rep, NULL, vclips, num_clips,
+					     1, NULL);
 	default:
 		WARN_ONCE(true,
 			  "Readback called with invalid display system.\n");
···
 	struct vmw_framebuffer_bo *vfbd =
 		vmw_framebuffer_to_vfbd(fb);

-	return drm_gem_handle_create(file_priv, &vfbd->buffer->base.base, handle);
+	return drm_gem_handle_create(file_priv, &vfbd->buffer->tbo.base, handle);
 }

 static void vmw_framebuffer_bo_destroy(struct drm_framebuffer *framebuffer)
···
 	.dirty = vmw_framebuffer_bo_dirty_ext,
 };

-/*
- * Pin the bofer in a location suitable for access by the
- * display system.
- */
-static int vmw_framebuffer_pin(struct vmw_framebuffer *vfb)
-{
-	struct vmw_private *dev_priv = vmw_priv(vfb->base.dev);
-	struct vmw_buffer_object *buf;
-	struct ttm_placement *placement;
-	int ret;
-
-	buf = vfb->bo ? vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
-		vmw_framebuffer_to_vfbs(&vfb->base)->surface->res.backup;
-
-	if (!buf)
-		return 0;
-
-	switch (dev_priv->active_display_unit) {
-	case vmw_du_legacy:
-		vmw_overlay_pause_all(dev_priv);
-		ret = vmw_bo_pin_in_start_of_vram(dev_priv, buf, false);
-		vmw_overlay_resume_all(dev_priv);
-		break;
-	case vmw_du_screen_object:
-	case vmw_du_screen_target:
-		if (vfb->bo) {
-			if (dev_priv->capabilities & SVGA_CAP_3D) {
-				/*
-				 * Use surface DMA to get content to
-				 * sreen target surface.
-				 */
-				placement = &vmw_vram_gmr_placement;
-			} else {
-				/* Use CPU blit. */
-				placement = &vmw_sys_placement;
-			}
-		} else {
-			/* Use surface / image update */
-			placement = &vmw_mob_placement;
-		}
-
-		return vmw_bo_pin_in_placement(dev_priv, buf, placement, false);
-	default:
-		return -EINVAL;
-	}
-
-	return ret;
-}
-
-static int vmw_framebuffer_unpin(struct vmw_framebuffer *vfb)
-{
-	struct vmw_private *dev_priv = vmw_priv(vfb->base.dev);
-	struct vmw_buffer_object *buf;
-
-	buf = vfb->bo ? vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
-		vmw_framebuffer_to_vfbs(&vfb->base)->surface->res.backup;
-
-	if (WARN_ON(!buf))
-		return 0;
-
-	return vmw_bo_unpin(dev_priv, buf, false);
-}
-
 /**
  * vmw_create_bo_proxy - create a proxy surface for the buffer object
  *
···
  */
 static int vmw_create_bo_proxy(struct drm_device *dev,
 			       const struct drm_mode_fb_cmd2 *mode_cmd,
-			       struct vmw_buffer_object *bo_mob,
+			       struct vmw_bo *bo_mob,
 			       struct vmw_surface **srf_out)
 {
 	struct vmw_surface_metadata metadata = {0};
···
 	/* Reserve and switch the backing mob. */
 	mutex_lock(&res->dev_priv->cmdbuf_mutex);
 	(void) vmw_resource_reserve(res, false, true);
-	vmw_bo_unreference(&res->backup);
-	res->backup = vmw_bo_reference(bo_mob);
-	res->backup_offset = 0;
+	vmw_bo_unreference(&res->guest_memory_bo);
+	res->guest_memory_bo = vmw_bo_reference(bo_mob);
+	res->guest_memory_offset = 0;
 	vmw_resource_unreserve(res, false, false, false, NULL, 0);
 	mutex_unlock(&res->dev_priv->cmdbuf_mutex);
···
 static int vmw_kms_new_framebuffer_bo(struct vmw_private *dev_priv,
-				      struct vmw_buffer_object *bo,
+				      struct vmw_bo *bo,
 				      struct vmw_framebuffer **out,
 				      const struct drm_mode_fb_cmd2
 				      *mode_cmd)
···
 	int ret;

 	requested_size = mode_cmd->height * mode_cmd->pitches[0];
-	if (unlikely(requested_size > bo->base.base.size)) {
+	if (unlikely(requested_size > bo->tbo.base.size)) {
 		DRM_ERROR("Screen buffer object size is too small "
 			  "for requested mode.\n");
 		return -EINVAL;
···
 		goto out_err1;
 	}

-	vfbd->base.base.obj[0] = &bo->base.base;
+	vfbd->base.base.obj[0] = &bo->tbo.base;
 	drm_helper_mode_fill_fb_struct(dev, &vfbd->base.base, mode_cmd);
 	vfbd->base.bo = true;
 	vfbd->buffer = vmw_bo_reference(bo);
···
  */
 struct vmw_framebuffer *
 vmw_kms_new_framebuffer(struct vmw_private *dev_priv,
-			struct vmw_buffer_object *bo,
+			struct vmw_bo *bo,
 			struct vmw_surface *surface,
 			bool only_2d,
 			const struct drm_mode_fb_cmd2 *mode_cmd)
···
 	if (ret)
 		return ERR_PTR(ret);

-	vfb->pin = vmw_framebuffer_pin;
-	vfb->unpin = vmw_framebuffer_unpin;
-
 	return vfb;
 }
···
 	struct vmw_private *dev_priv = vmw_priv(dev);
 	struct vmw_framebuffer *vfb = NULL;
 	struct vmw_surface *surface = NULL;
-	struct vmw_buffer_object *bo = NULL;
+	struct vmw_bo *bo = NULL;
 	int ret;

 	/* returns either a bo or surface */
···
 	/* vmw_user_lookup_handle takes one ref so does new_fb */
 	if (bo) {
 		vmw_bo_unreference(&bo);
-		drm_gem_object_put(&bo->base.base);
+		drm_gem_object_put(&bo->tbo.base);
 	}
 	if (surface)
 		vmw_surface_unreference(&surface);
···
 		struct vmw_framebuffer_bo *vfbbo =
 			container_of(update->vfb, typeof(*vfbbo), base);

-		ret = vmw_validation_add_bo(&val_ctx, vfbbo->buffer, false,
-					    update->cpu_blit);
+		/*
+		 * For screen targets we want a mappable bo, for everything else we want
+		 * accelerated i.e. host backed (vram or gmr) bo. If the display unit
+		 * is not screen target then mob's shouldn't be available.
+		 */
+		if (update->dev_priv->active_display_unit == vmw_du_screen_target) {
+			vmw_bo_placement_set(vfbbo->buffer,
+					     VMW_BO_DOMAIN_SYS | VMW_BO_DOMAIN_MOB | VMW_BO_DOMAIN_GMR,
+					     VMW_BO_DOMAIN_SYS | VMW_BO_DOMAIN_MOB | VMW_BO_DOMAIN_GMR);
+		} else {
+			WARN_ON(update->dev_priv->has_mob);
+			vmw_bo_placement_set_default_accelerated(vfbbo->buffer);
+		}
+		ret = vmw_validation_add_bo(&val_ctx, vfbbo->buffer);
 	} else {
 		struct vmw_framebuffer_surface *vfbs =
 			container_of(update->vfb, typeof(*vfbs), base);
+18 -25
drivers/gpu/drm/vmwgfx/vmwgfx_kms.h
···
 /* SPDX-License-Identifier: GPL-2.0 OR MIT */
 /**************************************************************************
  *
- * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
 	struct vmw_framebuffer *vfb;
 	struct vmw_fence_obj **out_fence;
 	struct mutex *mutex;
-	bool cpu_blit;
 	bool intr;
 };
···
  */
 struct vmw_framebuffer {
 	struct drm_framebuffer base;
-	int (*pin)(struct vmw_framebuffer *fb);
-	int (*unpin)(struct vmw_framebuffer *fb);
 	bool bo;
 	uint32_t user_handle;
 };
···
 struct vmw_framebuffer_surface {
 	struct vmw_framebuffer base;
 	struct vmw_surface *surface;
-	struct vmw_buffer_object *buffer;
+	struct vmw_bo *buffer;
 	struct list_head head;
 	bool is_bo_proxy;  /* true if this is proxy surface for DMA buf */
 };
···
 struct vmw_framebuffer_bo {
 	struct vmw_framebuffer base;
-	struct vmw_buffer_object *buffer;
+	struct vmw_bo *buffer;
 };
···
 };

 struct vmw_cursor_plane_state {
-	struct ttm_buffer_object *bo;
-	struct ttm_bo_kmap_obj map;
-	bool mapped;
+	struct vmw_bo *bo;
 	s32 hotspot_x;
 	s32 hotspot_y;
 };
···
 struct vmw_plane_state {
 	struct drm_plane_state base;
 	struct vmw_surface *surf;
-	struct vmw_buffer_object *bo;
+	struct vmw_bo *bo;

 	int content_fb_type;
 	unsigned long bo_size;
···
 struct vmw_cursor_plane {
 	struct drm_plane base;

-	struct ttm_buffer_object *cursor_mobs[3];
+	struct vmw_bo *cursor_mobs[3];
 };
···
 	struct vmw_cursor_plane cursor;

 	struct vmw_surface *cursor_surface;
-	struct vmw_buffer_object *cursor_bo;
+	struct vmw_bo *cursor_bo;
 	size_t cursor_age;

 	int cursor_x;
···
 struct vmw_validation_ctx {
 	struct vmw_resource *res;
-	struct vmw_buffer_object *buf;
+	struct vmw_bo *buf;
 };

 #define vmw_crtc_to_du(x) \
···
 		      uint32_t num_clips);
 struct vmw_framebuffer *
 vmw_kms_new_framebuffer(struct vmw_private *dev_priv,
-			struct vmw_buffer_object *bo,
+			struct vmw_bo *bo,
 			struct vmw_surface *surface,
 			bool only_2d,
 			const struct drm_mode_fb_cmd2 *mode_cmd);
···
 			     unsigned num_clips, int inc,
 			     struct vmw_fence_obj **out_fence,
 			     struct drm_crtc *crtc);
-int vmw_kms_stdu_dma(struct vmw_private *dev_priv,
-		     struct drm_file *file_priv,
-		     struct vmw_framebuffer *vfb,
-		     struct drm_vmw_fence_rep __user *user_fence_rep,
-		     struct drm_clip_rect *clips,
-		     struct drm_vmw_rect *vclips,
-		     uint32_t num_clips,
-		     int increment,
-		     bool to_surface,
-		     bool interruptible,
-		     struct drm_crtc *crtc);
+int vmw_kms_stdu_readback(struct vmw_private *dev_priv,
+			  struct drm_file *file_priv,
+			  struct vmw_framebuffer *vfb,
+			  struct drm_vmw_fence_rep __user *user_fence_rep,
+			  struct drm_clip_rect *clips,
+			  struct drm_vmw_rect *vclips,
+			  uint32_t num_clips,
+			  int increment,
+			  struct drm_crtc *crtc);

 int vmw_du_helper_plane_update(struct vmw_du_update_plane *update);
+49 -8
drivers/gpu/drm/vmwgfx/vmwgfx_ldu.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2009-2022 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
  *
  **************************************************************************/

+#include "vmwgfx_bo.h"
+#include "vmwgfx_kms.h"
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_fourcc.h>

-#include "vmwgfx_kms.h"

 #define vmw_crtc_to_ldu(x) \
 	container_of(x, struct vmw_legacy_display_unit, base.crtc)
···
 	return 0;
 }

+/*
+ * Pin the buffer in a location suitable for access by the
+ * display system.
+ */
+static int vmw_ldu_fb_pin(struct vmw_framebuffer *vfb)
+{
+	struct vmw_private *dev_priv = vmw_priv(vfb->base.dev);
+	struct vmw_bo *buf;
+	int ret;
+
+	buf = vfb->bo ? vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
+		vmw_framebuffer_to_vfbs(&vfb->base)->surface->res.guest_memory_bo;
+
+	if (!buf)
+		return 0;
+	WARN_ON(dev_priv->active_display_unit != vmw_du_legacy);
+
+	if (dev_priv->active_display_unit == vmw_du_legacy) {
+		vmw_overlay_pause_all(dev_priv);
+		ret = vmw_bo_pin_in_start_of_vram(dev_priv, buf, false);
+		vmw_overlay_resume_all(dev_priv);
+	} else
+		ret = -EINVAL;
+
+	return ret;
+}
+
+static int vmw_ldu_fb_unpin(struct vmw_framebuffer *vfb)
+{
+	struct vmw_private *dev_priv = vmw_priv(vfb->base.dev);
+	struct vmw_bo *buf;
+
+	buf = vfb->bo ? vmw_framebuffer_to_vfbd(&vfb->base)->buffer :
+		vmw_framebuffer_to_vfbs(&vfb->base)->surface->res.guest_memory_bo;
+
+	if (WARN_ON(!buf))
+		return 0;
+
+	return vmw_bo_unpin(dev_priv, buf, false);
+}
+
 static int vmw_ldu_del_active(struct vmw_private *vmw_priv,
 			      struct vmw_legacy_display_unit *ldu)
 {
···
 	list_del_init(&ldu->active);
 	if (--(ld->num_active) == 0) {
 		BUG_ON(!ld->fb);
-		if (ld->fb->unpin)
-			ld->fb->unpin(ld->fb);
+		WARN_ON(vmw_ldu_fb_unpin(ld->fb));
 		ld->fb = NULL;
 	}
···
 	BUG_ON(!ld->num_active && ld->fb);
 	if (vfb != ld->fb) {
-		if (ld->fb && ld->fb->unpin)
-			ld->fb->unpin(ld->fb);
+		if (ld->fb)
+			WARN_ON(vmw_ldu_fb_unpin(ld->fb));
 		vmw_svga_enable(vmw_priv);
-		if (vfb->pin)
-			vfb->pin(vfb);
+		WARN_ON(vmw_ldu_fb_pin(vfb));
 		ld->fb = vfb;
 	}
+24 -21
drivers/gpu/drm/vmwgfx/vmwgfx_mob.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2012-2021 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2012-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 25 25 * 26 26 **************************************************************************/ 27 27 28 - #include <linux/highmem.h> 29 - 28 + #include "vmwgfx_bo.h" 30 29 #include "vmwgfx_drv.h" 30 + 31 + #include <linux/highmem.h> 31 32 32 33 #ifdef CONFIG_64BIT 33 34 #define VMW_PPN_SIZE 8 ··· 51 50 * @pt_root_page DMA address of the level 0 page of the page table. 52 51 */ 53 52 struct vmw_mob { 54 - struct ttm_buffer_object *pt_bo; 53 + struct vmw_bo *pt_bo; 55 54 unsigned long num_pages; 56 55 unsigned pt_level; 57 56 dma_addr_t pt_root_page; ··· 204 203 if (otable->page_table == NULL) 205 204 return; 206 205 207 - bo = otable->page_table->pt_bo; 206 + bo = &otable->page_table->pt_bo->tbo; 208 207 cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd)); 209 208 if (unlikely(cmd == NULL)) 210 209 return; ··· 252 251 bo_size += otables[i].size; 253 252 } 254 253 255 - ret = vmw_bo_create_and_populate(dev_priv, bo_size, &batch->otable_bo); 254 + ret = vmw_bo_create_and_populate(dev_priv, bo_size, 255 + VMW_BO_DOMAIN_WAITABLE_SYS, 256 + &batch->otable_bo); 256 257 if (unlikely(ret != 0)) 257 258 return ret; 258 259 ··· 263 260 if (!batch->otables[i].enabled) 264 261 continue; 265 262 266 - ret = vmw_setup_otable_base(dev_priv, i, batch->otable_bo, 263 + ret = vmw_setup_otable_base(dev_priv, i, 264 + &batch->otable_bo->tbo, 267 265 offset, 268 266 &otables[i]); 269 267 if (unlikely(ret != 0)) ··· 281 277 &batch->otables[i]); 282 278 } 283 279 284 - vmw_bo_unpin_unlocked(batch->otable_bo); 285 - ttm_bo_put(batch->otable_bo); 280 + 
+	vmw_bo_unpin_unlocked(&batch->otable_bo->tbo);
+	ttm_bo_put(&batch->otable_bo->tbo);
 	batch->otable_bo = NULL;
 	return ret;
 }
···
 			       struct vmw_otable_batch *batch)
 {
 	SVGAOTableType i;
-	struct ttm_buffer_object *bo = batch->otable_bo;
+	struct ttm_buffer_object *bo = &batch->otable_bo->tbo;
 	int ret;
 
 	for (i = 0; i < batch->num_otables; ++i)
···
 	ttm_bo_unpin(bo);
 	ttm_bo_unreserve(bo);
 
-	ttm_bo_put(batch->otable_bo);
-	batch->otable_bo = NULL;
+	vmw_bo_unreference(&batch->otable_bo);
 }
 
 /*
···
 {
 	BUG_ON(mob->pt_bo != NULL);
 
-	return vmw_bo_create_and_populate(dev_priv, mob->num_pages * PAGE_SIZE, &mob->pt_bo);
+	return vmw_bo_create_and_populate(dev_priv, mob->num_pages * PAGE_SIZE,
+					  VMW_BO_DOMAIN_WAITABLE_SYS,
+					  &mob->pt_bo);
 }
 
 /**
···
 			 unsigned long num_data_pages)
 {
 	unsigned long num_pt_pages = 0;
-	struct ttm_buffer_object *bo = mob->pt_bo;
+	struct ttm_buffer_object *bo = &mob->pt_bo->tbo;
 	struct vmw_piter save_pt_iter = {0};
 	struct vmw_piter pt_iter;
 	const struct vmw_sg_table *vsgt;
···
 void vmw_mob_destroy(struct vmw_mob *mob)
 {
 	if (mob->pt_bo) {
-		vmw_bo_unpin_unlocked(mob->pt_bo);
-		ttm_bo_put(mob->pt_bo);
-		mob->pt_bo = NULL;
+		vmw_bo_unpin_unlocked(&mob->pt_bo->tbo);
+		vmw_bo_unreference(&mob->pt_bo);
 	}
 	kfree(mob);
 }
···
 		SVGA3dCmdDestroyGBMob body;
 	} *cmd;
 	int ret;
-	struct ttm_buffer_object *bo = mob->pt_bo;
+	struct ttm_buffer_object *bo = &mob->pt_bo->tbo;
 
 	if (bo) {
 		ret = ttm_bo_reserve(bo, false, true, NULL);
···
 out_no_cmd_space:
 	vmw_fifo_resource_dec(dev_priv);
 	if (pt_set_up) {
-		vmw_bo_unpin_unlocked(mob->pt_bo);
-		ttm_bo_put(mob->pt_bo);
-		mob->pt_bo = NULL;
+		vmw_bo_unpin_unlocked(&mob->pt_bo->tbo);
+		vmw_bo_unreference(&mob->pt_bo);
 	}
 
 	return -ENOMEM;
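The cleanup hunks above fold a raw `ttm_bo_put()` plus manual NULL-ing into a single `vmw_bo_unreference()` call on the typed pointer. The idiom can be sketched in plain userspace C; the `obj`/`obj_unreference` names here are illustrative stand-ins, not the driver's actual API:

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal refcounted object, standing in for struct vmw_bo. */
struct obj {
	int refcount;
};

static struct obj *obj_create(void)
{
	struct obj *o = malloc(sizeof(*o));

	o->refcount = 1;
	return o;
}

/*
 * Drop one reference, free on zero, and always NULL the caller's
 * pointer so a stale pointer cannot be used after the put.
 */
static void obj_unreference(struct obj **p)
{
	struct obj *o = *p;

	*p = NULL;
	if (o && --o->refcount == 0)
		free(o);
}
```

Taking a `struct obj **` rather than `struct obj *` is what lets the helper clear the caller's pointer, which is exactly the invariant the open-coded `ttm_bo_put(); x = NULL;` pairs were maintaining by hand.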
+11 -11
drivers/gpu/drm/vmwgfx/vmwgfx_overlay.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2009-2014 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
  * USE OR OTHER DEALINGS IN THE SOFTWARE.
  *
  **************************************************************************/
-
-#include <drm/ttm/ttm_placement.h>
+#include "vmwgfx_bo.h"
+#include "vmwgfx_drv.h"
 
 #include "device_include/svga_overlay.h"
 #include "device_include/svga_escape.h"
 
-#include "vmwgfx_drv.h"
+#include <drm/ttm/ttm_placement.h>
 
 #define VMW_MAX_NUM_STREAMS 1
 #define VMW_OVERLAY_CAP_MASK (SVGA_FIFO_CAP_VIDEO | SVGA_FIFO_CAP_ESCAPE)
 
 struct vmw_stream {
-	struct vmw_buffer_object *buf;
+	struct vmw_bo *buf;
 	bool claimed;
 	bool paused;
 	struct drm_vmw_control_stream_arg saved;
···
  * -ERESTARTSYS if interrupted by a signal.
  */
 static int vmw_overlay_send_put(struct vmw_private *dev_priv,
-				struct vmw_buffer_object *buf,
+				struct vmw_bo *buf,
 				struct drm_vmw_control_stream_arg *arg,
 				bool interruptible)
 {
···
 	for (i = 0; i < num_items; i++)
 		items[i].registerId = i;
 
-	vmw_bo_get_guest_ptr(&buf->base, &ptr);
+	vmw_bo_get_guest_ptr(&buf->tbo, &ptr);
 	ptr.offset += arg->offset;
 
 	items[SVGA_VIDEO_ENABLED].value = true;
···
  * used with GMRs instead of being locked to vram.
  */
 static int vmw_overlay_move_buffer(struct vmw_private *dev_priv,
-				   struct vmw_buffer_object *buf,
+				   struct vmw_bo *buf,
 				   bool pin, bool inter)
 {
 	if (!pin)
···
  * -ERESTARTSYS if interrupted.
  */
 static int vmw_overlay_update_stream(struct vmw_private *dev_priv,
-				     struct vmw_buffer_object *buf,
+				     struct vmw_bo *buf,
 				     struct drm_vmw_control_stream_arg *arg,
 				     bool interruptible)
 {
···
 	struct vmw_overlay *overlay = dev_priv->overlay_priv;
 	struct drm_vmw_control_stream_arg *arg =
 	    (struct drm_vmw_control_stream_arg *)data;
-	struct vmw_buffer_object *buf;
+	struct vmw_bo *buf;
 	struct vmw_resource *res;
 	int ret;
···
 	ret = vmw_overlay_update_stream(dev_priv, buf, arg, true);
 
 	vmw_bo_unreference(&buf);
-	drm_gem_object_put(&buf->base.base);
+	drm_gem_object_put(&buf->tbo.base);
 
 out_unlock:
 	mutex_unlock(&overlay->mutex);
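The `&buf->base` to `&buf->tbo` churn above comes from the rename of the embedded TTM object inside `struct vmw_bo`: pointer conversions between the outer driver type and the inner `ttm_buffer_object` rely on the embedded-struct layout and a `container_of`-style downcast. A minimal userspace sketch of that idiom (the type names are illustrative stand-ins):

```c
#include <assert.h>
#include <stddef.h>

/* Same shape as the kernel's container_of(). */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-ins for struct ttm_buffer_object and struct vmw_bo. */
struct base_obj {
	int id;
};

struct derived_obj {
	struct base_obj base;	/* the embedded member, like vmw_bo.tbo */
	int extra;
};

/* Map a pointer to the embedded member back to its containing object,
 * as a helper like to_vmw_bo() does. */
static struct derived_obj *to_derived(struct base_obj *b)
{
	return container_of(b, struct derived_obj, base);
}
```

A dedicated helper such as this keeps the member name in one place, which is what makes a rename like `base` to `tbo` mostly mechanical at the call sites.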
+33 -35
drivers/gpu/drm/vmwgfx/vmwgfx_page_dirty.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2019 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2019-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
  * USE OR OTHER DEALINGS IN THE SOFTWARE.
  *
  **************************************************************************/
+#include "vmwgfx_bo.h"
 #include "vmwgfx_drv.h"
 
 /*
···
  * dirty structure with the results. This function may change the
  * dirty-tracking method.
  */
-static void vmw_bo_dirty_scan_pagetable(struct vmw_buffer_object *vbo)
+static void vmw_bo_dirty_scan_pagetable(struct vmw_bo *vbo)
 {
 	struct vmw_bo_dirty *dirty = vbo->dirty;
-	pgoff_t offset = drm_vma_node_start(&vbo->base.base.vma_node);
-	struct address_space *mapping = vbo->base.bdev->dev_mapping;
+	pgoff_t offset = drm_vma_node_start(&vbo->tbo.base.vma_node);
+	struct address_space *mapping = vbo->tbo.bdev->dev_mapping;
 	pgoff_t num_marked;
 
 	num_marked = clean_record_shared_mapping_range
···
  *
  * This function may change the dirty-tracking method.
  */
-static void vmw_bo_dirty_scan_mkwrite(struct vmw_buffer_object *vbo)
+static void vmw_bo_dirty_scan_mkwrite(struct vmw_bo *vbo)
 {
 	struct vmw_bo_dirty *dirty = vbo->dirty;
-	unsigned long offset = drm_vma_node_start(&vbo->base.base.vma_node);
-	struct address_space *mapping = vbo->base.bdev->dev_mapping;
+	unsigned long offset = drm_vma_node_start(&vbo->tbo.base.vma_node);
+	struct address_space *mapping = vbo->tbo.bdev->dev_mapping;
 	pgoff_t num_marked;
 
 	if (dirty->end <= dirty->start)
 		return;
 
-	num_marked = wp_shared_mapping_range(vbo->base.bdev->dev_mapping,
-					     dirty->start + offset,
-					     dirty->end - dirty->start);
+	num_marked = wp_shared_mapping_range(vbo->tbo.bdev->dev_mapping,
+					     dirty->start + offset,
+					     dirty->end - dirty->start);
 
 	if (100UL * num_marked / dirty->bitmap_size >
-	    VMW_DIRTY_PERCENTAGE) {
+	    VMW_DIRTY_PERCENTAGE)
 		dirty->change_count++;
-	} else {
+	else
 		dirty->change_count = 0;
-	}
 
 	if (dirty->change_count > VMW_DIRTY_NUM_CHANGE_TRIGGERS) {
 		pgoff_t start = 0;
···
  *
  * This function may change the dirty tracking method.
  */
-void vmw_bo_dirty_scan(struct vmw_buffer_object *vbo)
+void vmw_bo_dirty_scan(struct vmw_bo *vbo)
 {
 	struct vmw_bo_dirty *dirty = vbo->dirty;
···
  * when calling unmap_mapping_range(). This function makes sure we pick
  * up all dirty pages.
  */
-static void vmw_bo_dirty_pre_unmap(struct vmw_buffer_object *vbo,
+static void vmw_bo_dirty_pre_unmap(struct vmw_bo *vbo,
 				   pgoff_t start, pgoff_t end)
 {
 	struct vmw_bo_dirty *dirty = vbo->dirty;
-	unsigned long offset = drm_vma_node_start(&vbo->base.base.vma_node);
-	struct address_space *mapping = vbo->base.bdev->dev_mapping;
+	unsigned long offset = drm_vma_node_start(&vbo->tbo.base.vma_node);
+	struct address_space *mapping = vbo->tbo.bdev->dev_mapping;
 
 	if (dirty->method != VMW_BO_DIRTY_PAGETABLE || start >= end)
 		return;
···
  *
  * This is similar to ttm_bo_unmap_virtual() except it takes a subrange.
  */
-void vmw_bo_dirty_unmap(struct vmw_buffer_object *vbo,
+void vmw_bo_dirty_unmap(struct vmw_bo *vbo,
 			pgoff_t start, pgoff_t end)
 {
-	unsigned long offset = drm_vma_node_start(&vbo->base.base.vma_node);
-	struct address_space *mapping = vbo->base.bdev->dev_mapping;
+	unsigned long offset = drm_vma_node_start(&vbo->tbo.base.vma_node);
+	struct address_space *mapping = vbo->tbo.bdev->dev_mapping;
 
 	vmw_bo_dirty_pre_unmap(vbo, start, end);
 	unmap_shared_mapping_range(mapping, (offset + start) << PAGE_SHIFT,
···
  *
  * Return: Zero on success, -ENOMEM on memory allocation failure.
  */
-int vmw_bo_dirty_add(struct vmw_buffer_object *vbo)
+int vmw_bo_dirty_add(struct vmw_bo *vbo)
 {
 	struct vmw_bo_dirty *dirty = vbo->dirty;
-	pgoff_t num_pages = PFN_UP(vbo->base.resource->size);
+	pgoff_t num_pages = PFN_UP(vbo->tbo.resource->size);
 	size_t size;
 	int ret;
···
 	if (num_pages < PAGE_SIZE / sizeof(pte_t)) {
 		dirty->method = VMW_BO_DIRTY_PAGETABLE;
 	} else {
-		struct address_space *mapping = vbo->base.bdev->dev_mapping;
-		pgoff_t offset = drm_vma_node_start(&vbo->base.base.vma_node);
+		struct address_space *mapping = vbo->tbo.bdev->dev_mapping;
+		pgoff_t offset = drm_vma_node_start(&vbo->tbo.base.vma_node);
 
 		dirty->method = VMW_BO_DIRTY_MKWRITE;
···
  *
  * Return: Zero on success, -ENOMEM on memory allocation failure.
  */
-void vmw_bo_dirty_release(struct vmw_buffer_object *vbo)
+void vmw_bo_dirty_release(struct vmw_bo *vbo)
 {
 	struct vmw_bo_dirty *dirty = vbo->dirty;
···
  */
 void vmw_bo_dirty_transfer_to_res(struct vmw_resource *res)
 {
-	struct vmw_buffer_object *vbo = res->backup;
+	struct vmw_bo *vbo = res->guest_memory_bo;
 	struct vmw_bo_dirty *dirty = vbo->dirty;
 	pgoff_t start, cur, end;
-	unsigned long res_start = res->backup_offset;
-	unsigned long res_end = res->backup_offset + res->backup_size;
+	unsigned long res_start = res->guest_memory_offset;
+	unsigned long res_end = res->guest_memory_offset + res->guest_memory_size;
 
 	WARN_ON_ONCE(res_start & ~PAGE_MASK);
 	res_start >>= PAGE_SHIFT;
···
  */
 void vmw_bo_dirty_clear_res(struct vmw_resource *res)
 {
-	unsigned long res_start = res->backup_offset;
-	unsigned long res_end = res->backup_offset + res->backup_size;
-	struct vmw_buffer_object *vbo = res->backup;
+	unsigned long res_start = res->guest_memory_offset;
+	unsigned long res_end = res->guest_memory_offset + res->guest_memory_size;
+	struct vmw_bo *vbo = res->guest_memory_bo;
 	struct vmw_bo_dirty *dirty = vbo->dirty;
 
 	res_start >>= PAGE_SHIFT;
···
 	vm_fault_t ret;
 	unsigned long page_offset;
 	unsigned int save_flags;
-	struct vmw_buffer_object *vbo =
-		container_of(bo, typeof(*vbo), base);
+	struct vmw_bo *vbo = to_vmw_bo(&bo->base);
 
 	/*
 	 * mkwrite() doesn't handle the VM_FAULT_RETRY return value correctly.
···
 	struct vm_area_struct *vma = vmf->vma;
 	struct ttm_buffer_object *bo = (struct ttm_buffer_object *)
 	    vma->vm_private_data;
-	struct vmw_buffer_object *vbo =
-		container_of(bo, struct vmw_buffer_object, base);
+	struct vmw_bo *vbo = to_vmw_bo(&bo->base);
 	pgoff_t num_prefault;
 	pgprot_t prot;
 	vm_fault_t ret;
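The `vmw_bo_dirty_scan_mkwrite()` hunk above keeps a `change_count` that is bumped whenever more than `VMW_DIRTY_PERCENTAGE` of the tracked pages were marked and reset otherwise; after `VMW_DIRTY_NUM_CHANGE_TRIGGERS` consecutive busy scans the dirty-tracking method is switched. A stripped-down sketch of that hysteresis, with illustrative constants rather than the driver's actual values:

```c
#include <assert.h>

#define DIRTY_PERCENTAGE	10	/* illustrative threshold */
#define NUM_CHANGE_TRIGGERS	4	/* illustrative trigger count */

enum dirty_method { DIRTY_PAGETABLE, DIRTY_MKWRITE };

struct dirty_state {
	enum dirty_method method;
	unsigned int change_count;
	unsigned long bitmap_size;	/* total pages tracked */
};

/*
 * One scan step: decide whether enough pages were marked to count as a
 * "busy" scan, and switch method after enough busy scans in a row.
 */
static void dirty_scan_step(struct dirty_state *d, unsigned long num_marked)
{
	if (100UL * num_marked / d->bitmap_size > DIRTY_PERCENTAGE)
		d->change_count++;
	else
		d->change_count = 0;

	if (d->change_count > NUM_CHANGE_TRIGGERS) {
		d->method = DIRTY_PAGETABLE;
		d->change_count = 0;
	}
}
```

Resetting the counter on every quiet scan means only sustained write activity flips the method, which avoids thrashing between the two tracking schemes on bursty workloads.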
+128 -118
drivers/gpu/drm/vmwgfx/vmwgfx_resource.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
 
 #include <drm/ttm/ttm_placement.h>
 
-#include "vmwgfx_resource_priv.h"
 #include "vmwgfx_binding.h"
+#include "vmwgfx_bo.h"
 #include "vmwgfx_drv.h"
+#include "vmwgfx_resource_priv.h"
 
 #define VMW_RES_EVICT_ERR_COUNT 10
 
···
  */
 void vmw_resource_mob_attach(struct vmw_resource *res)
 {
-	struct vmw_buffer_object *backup = res->backup;
-	struct rb_node **new = &backup->res_tree.rb_node, *parent = NULL;
+	struct vmw_bo *gbo = res->guest_memory_bo;
+	struct rb_node **new = &gbo->res_tree.rb_node, *parent = NULL;
 
-	dma_resv_assert_held(res->backup->base.base.resv);
+	dma_resv_assert_held(gbo->tbo.base.resv);
 	res->used_prio = (res->res_dirty) ? res->func->dirty_prio :
 		res->func->prio;
 
···
 			container_of(*new, struct vmw_resource, mob_node);
 
 		parent = *new;
-		new = (res->backup_offset < this->backup_offset) ?
+		new = (res->guest_memory_offset < this->guest_memory_offset) ?
 			&((*new)->rb_left) : &((*new)->rb_right);
 	}
 
 	rb_link_node(&res->mob_node, parent, new);
-	rb_insert_color(&res->mob_node, &backup->res_tree);
+	rb_insert_color(&res->mob_node, &gbo->res_tree);
 
-	vmw_bo_prio_add(backup, res->used_prio);
+	vmw_bo_prio_add(gbo, res->used_prio);
 }
 
 /**
···
  */
 void vmw_resource_mob_detach(struct vmw_resource *res)
 {
-	struct vmw_buffer_object *backup = res->backup;
+	struct vmw_bo *gbo = res->guest_memory_bo;
 
-	dma_resv_assert_held(backup->base.base.resv);
+	dma_resv_assert_held(gbo->tbo.base.resv);
 	if (vmw_resource_mob_attached(res)) {
-		rb_erase(&res->mob_node, &backup->res_tree);
+		rb_erase(&res->mob_node, &gbo->res_tree);
 		RB_CLEAR_NODE(&res->mob_node);
-		vmw_bo_prio_del(backup, res->used_prio);
+		vmw_bo_prio_del(gbo, res->used_prio);
 	}
 }
 
···
 	spin_lock(&dev_priv->resource_lock);
 	list_del_init(&res->lru_head);
 	spin_unlock(&dev_priv->resource_lock);
-	if (res->backup) {
-		struct ttm_buffer_object *bo = &res->backup->base;
+	if (res->guest_memory_bo) {
+		struct ttm_buffer_object *bo = &res->guest_memory_bo->tbo;
 
 		ret = ttm_bo_reserve(bo, false, false, NULL);
 		BUG_ON(ret);
···
 		val_buf.num_shared = 0;
 		res->func->unbind(res, false, &val_buf);
 	}
-	res->backup_dirty = false;
+	res->guest_memory_size = false;
 	vmw_resource_mob_detach(res);
 	if (res->dirty)
 		res->func->dirty_free(res);
 	if (res->coherent)
-		vmw_bo_dirty_release(res->backup);
+		vmw_bo_dirty_release(res->guest_memory_bo);
 	ttm_bo_unreserve(bo);
-	vmw_bo_unreference(&res->backup);
+	vmw_bo_unreference(&res->guest_memory_bo);
 	}
 
 	if (likely(res->hw_destroy != NULL)) {
···
 	INIT_LIST_HEAD(&res->lru_head);
 	INIT_LIST_HEAD(&res->binding_head);
 	res->id = -1;
-	res->backup = NULL;
-	res->backup_offset = 0;
-	res->backup_dirty = false;
+	res->guest_memory_bo = NULL;
+	res->guest_memory_offset = 0;
+	res->guest_memory_dirty = false;
 	res->res_dirty = false;
 	res->coherent = false;
 	res->used_prio = 3;
···
 	int ret = -EINVAL;
 
 	base = ttm_base_object_lookup(tfile, handle);
-	if (unlikely(base == NULL))
+	if (unlikely(!base))
 		return -EINVAL;
 
 	if (unlikely(ttm_base_object_type(base) != converter->object_type))
···
 			   struct drm_file *filp,
 			   uint32_t handle,
 			   struct vmw_surface **out_surf,
-			   struct vmw_buffer_object **out_buf)
+			   struct vmw_bo **out_buf)
 {
 	struct ttm_object_file *tfile = vmw_fpriv(filp)->tfile;
 	struct vmw_resource *res;
···
 }
 
 /**
- * vmw_resource_buf_alloc - Allocate a backup buffer for a resource.
+ * vmw_resource_buf_alloc - Allocate a guest memory buffer for a resource.
  *
- * @res: The resource for which to allocate a backup buffer.
+ * @res: The resource for which to allocate a gbo buffer.
  * @interruptible: Whether any sleeps during allocation should be
  * performed while interruptible.
  */
 static int vmw_resource_buf_alloc(struct vmw_resource *res,
 				  bool interruptible)
 {
-	unsigned long size = PFN_ALIGN(res->backup_size);
-	struct vmw_buffer_object *backup;
+	unsigned long size = PFN_ALIGN(res->guest_memory_size);
+	struct vmw_bo *gbo;
+	struct vmw_bo_params bo_params = {
+		.domain = res->func->domain,
+		.busy_domain = res->func->busy_domain,
+		.bo_type = ttm_bo_type_device,
+		.size = res->guest_memory_size,
+		.pin = false
+	};
 	int ret;
 
-	if (likely(res->backup)) {
-		BUG_ON(res->backup->base.base.size < size);
+	if (likely(res->guest_memory_bo)) {
+		BUG_ON(res->guest_memory_bo->tbo.base.size < size);
 		return 0;
 	}
 
-	ret = vmw_bo_create(res->dev_priv, res->backup_size,
-			    res->func->backup_placement,
-			    interruptible, false,
-			    &vmw_bo_bo_free, &backup);
+	ret = vmw_bo_create(res->dev_priv, &bo_params, &gbo);
 	if (unlikely(ret != 0))
 		goto out_no_bo;
 
-	res->backup = backup;
+	res->guest_memory_bo = gbo;
 
 out_no_bo:
 	return ret;
···
 	}
 
 	if (func->bind &&
-	    ((func->needs_backup && !vmw_resource_mob_attached(res) &&
-	      val_buf->bo != NULL) ||
-	     (!func->needs_backup && val_buf->bo != NULL))) {
+	    ((func->needs_guest_memory && !vmw_resource_mob_attached(res) &&
+	      val_buf->bo) ||
+	     (!func->needs_guest_memory && val_buf->bo))) {
 		ret = func->bind(res, val_buf);
 		if (unlikely(ret != 0))
 			goto out_bind_failed;
-		if (func->needs_backup)
+		if (func->needs_guest_memory)
 			vmw_resource_mob_attach(res);
 	}
 
···
 	 */
 	if (func->dirty_alloc && vmw_resource_mob_attached(res) &&
 	    !res->coherent) {
-		if (res->backup->dirty && !res->dirty) {
+		if (res->guest_memory_bo->dirty && !res->dirty) {
 			ret = func->dirty_alloc(res);
 			if (ret)
 				return ret;
-		} else if (!res->backup->dirty && res->dirty) {
+		} else if (!res->guest_memory_bo->dirty && res->dirty) {
 			func->dirty_free(res);
 		}
 	}
···
 	 */
 	if (res->dirty) {
 		if (dirtying && !res->res_dirty) {
-			pgoff_t start = res->backup_offset >> PAGE_SHIFT;
+			pgoff_t start = res->guest_memory_offset >> PAGE_SHIFT;
 			pgoff_t end = __KERNEL_DIV_ROUND_UP
-				(res->backup_offset + res->backup_size,
+				(res->guest_memory_offset + res->guest_memory_size,
 				 PAGE_SIZE);
 
-			vmw_bo_dirty_unmap(res->backup, start, end);
+			vmw_bo_dirty_unmap(res->guest_memory_bo, start, end);
 		}
 
 		vmw_bo_dirty_transfer_to_res(res);
···
  * @res: Pointer to the struct vmw_resource to unreserve.
  * @dirty_set: Change dirty status of the resource.
  * @dirty: When changing dirty status indicates the new status.
- * @switch_backup: Backup buffer has been switched.
- * @new_backup: Pointer to new backup buffer if command submission
+ * @switch_guest_memory: Guest memory buffer has been switched.
+ * @new_guest_memory_bo: Pointer to new guest memory buffer if command submission
  *                switched. May be NULL.
- * @new_backup_offset: New backup offset if @switch_backup is true.
+ * @new_guest_memory_offset: New gbo offset if @switch_guest_memory is true.
  *
  * Currently unreserving a resource means putting it back on the device's
  * resource lru list, so that it can be evicted if necessary.
···
 void vmw_resource_unreserve(struct vmw_resource *res,
 			    bool dirty_set,
 			    bool dirty,
-			    bool switch_backup,
-			    struct vmw_buffer_object *new_backup,
-			    unsigned long new_backup_offset)
+			    bool switch_guest_memory,
+			    struct vmw_bo *new_guest_memory_bo,
+			    unsigned long new_guest_memory_offset)
 {
 	struct vmw_private *dev_priv = res->dev_priv;
 
 	if (!list_empty(&res->lru_head))
 		return;
 
-	if (switch_backup && new_backup != res->backup) {
-		if (res->backup) {
+	if (switch_guest_memory && new_guest_memory_bo != res->guest_memory_bo) {
+		if (res->guest_memory_bo) {
 			vmw_resource_mob_detach(res);
 			if (res->coherent)
-				vmw_bo_dirty_release(res->backup);
-			vmw_bo_unreference(&res->backup);
+				vmw_bo_dirty_release(res->guest_memory_bo);
+			vmw_bo_unreference(&res->guest_memory_bo);
 		}
 
-		if (new_backup) {
-			res->backup = vmw_bo_reference(new_backup);
+		if (new_guest_memory_bo) {
+			res->guest_memory_bo = vmw_bo_reference(new_guest_memory_bo);
 
 			/*
 			 * The validation code should already have added a
 			 * dirty tracker here.
 			 */
-			WARN_ON(res->coherent && !new_backup->dirty);
+			WARN_ON(res->coherent && !new_guest_memory_bo->dirty);
 
 			vmw_resource_mob_attach(res);
 		} else {
-			res->backup = NULL;
+			res->guest_memory_bo = NULL;
 		}
-	} else if (switch_backup && res->coherent) {
-		vmw_bo_dirty_release(res->backup);
+	} else if (switch_guest_memory && res->coherent) {
+		vmw_bo_dirty_release(res->guest_memory_bo);
 	}
 
-	if (switch_backup)
-		res->backup_offset = new_backup_offset;
+	if (switch_guest_memory)
+		res->guest_memory_offset = new_guest_memory_offset;
 
 	if (dirty_set)
 		res->res_dirty = dirty;
···
 {
 	struct ttm_operation_ctx ctx = { true, false };
 	struct list_head val_list;
-	bool backup_dirty = false;
+	bool guest_memory_dirty = false;
 	int ret;
 
-	if (unlikely(res->backup == NULL)) {
+	if (unlikely(!res->guest_memory_bo)) {
 		ret = vmw_resource_buf_alloc(res, interruptible);
 		if (unlikely(ret != 0))
 			return ret;
 	}
 
 	INIT_LIST_HEAD(&val_list);
-	ttm_bo_get(&res->backup->base);
-	val_buf->bo = &res->backup->base;
+	ttm_bo_get(&res->guest_memory_bo->tbo);
+	val_buf->bo = &res->guest_memory_bo->tbo;
 	val_buf->num_shared = 0;
 	list_add_tail(&val_buf->head, &val_list);
 	ret = ttm_eu_reserve_buffers(ticket, &val_list, interruptible, NULL);
 	if (unlikely(ret != 0))
 		goto out_no_reserve;
 
-	if (res->func->needs_backup && !vmw_resource_mob_attached(res))
+	if (res->func->needs_guest_memory && !vmw_resource_mob_attached(res))
 		return 0;
 
-	backup_dirty = res->backup_dirty;
-	ret = ttm_bo_validate(&res->backup->base,
-			      res->func->backup_placement,
+	guest_memory_dirty = res->guest_memory_dirty;
+	vmw_bo_placement_set(res->guest_memory_bo, res->func->domain,
+			     res->func->busy_domain);
+	ret = ttm_bo_validate(&res->guest_memory_bo->tbo,
+			      &res->guest_memory_bo->placement,
 			      &ctx);
 
 	if (unlikely(ret != 0))
···
 out_no_reserve:
 	ttm_bo_put(val_buf->bo);
 	val_buf->bo = NULL;
-	if (backup_dirty)
-		vmw_bo_unreference(&res->backup);
+	if (guest_memory_dirty)
+		vmw_bo_unreference(&res->guest_memory_bo);
 
 	return ret;
 }
···
  * @res: The resource to reserve.
  *
  * This function takes the resource off the LRU list and make sure
- * a backup buffer is present for guest-backed resources. However,
- * the buffer may not be bound to the resource at this point.
+ * a guest memory buffer is present for guest-backed resources.
+ * However, the buffer may not be bound to the resource at this
+ * point.
  *
  */
 int vmw_resource_reserve(struct vmw_resource *res, bool interruptible,
-			 bool no_backup)
+			 bool no_guest_memory)
 {
 	struct vmw_private *dev_priv = res->dev_priv;
 	int ret;
···
 	list_del_init(&res->lru_head);
 	spin_unlock(&dev_priv->resource_lock);
 
-	if (res->func->needs_backup && res->backup == NULL &&
-	    !no_backup) {
+	if (res->func->needs_guest_memory && !res->guest_memory_bo &&
+	    !no_guest_memory) {
 		ret = vmw_resource_buf_alloc(res, interruptible);
 		if (unlikely(ret != 0)) {
-			DRM_ERROR("Failed to allocate a backup buffer "
+			DRM_ERROR("Failed to allocate a guest memory buffer "
 				  "of size %lu. bytes\n",
-				  (unsigned long) res->backup_size);
+				  (unsigned long) res->guest_memory_size);
 			return ret;
 		}
 	}
···
 
 /**
  * vmw_resource_backoff_reservation - Unreserve and unreference a
- * backup buffer
+ * guest memory buffer
  *.
  * @ticket: The ww acquire ctx used for reservation.
- * @val_buf: Backup buffer information.
+ * @val_buf: Guest memory buffer information.
  */
 static void
 vmw_resource_backoff_reservation(struct ww_acquire_ctx *ticket,
···
 		return ret;
 
 	if (unlikely(func->unbind != NULL &&
-		     (!func->needs_backup || vmw_resource_mob_attached(res)))) {
+		     (!func->needs_guest_memory || vmw_resource_mob_attached(res)))) {
 		ret = func->unbind(res, res->res_dirty, &val_buf);
 		if (unlikely(ret != 0))
 			goto out_no_unbind;
 		vmw_resource_mob_detach(res);
 	}
 	ret = func->destroy(res);
-	res->backup_dirty = true;
+	res->guest_memory_dirty = true;
 	res->res_dirty = false;
 out_no_unbind:
 	vmw_resource_backoff_reservation(ticket, &val_buf);
···
 
 	val_buf.bo = NULL;
 	val_buf.num_shared = 0;
-	if (res->backup)
-		val_buf.bo = &res->backup->base;
+	if (res->guest_memory_bo)
+		val_buf.bo = &res->guest_memory_bo->tbo;
 	do {
 		ret = vmw_resource_do_validate(res, &val_buf, dirtying);
 		if (likely(ret != -EBUSY))
···
 
 	if (unlikely(ret != 0))
 		goto out_no_validate;
-	else if (!res->func->needs_backup && res->backup) {
+	else if (!res->func->needs_guest_memory && res->guest_memory_bo) {
 		WARN_ON_ONCE(vmw_resource_mob_attached(res));
-		vmw_bo_unreference(&res->backup);
+		vmw_bo_unreference(&res->guest_memory_bo);
 	}
 
 	return 0;
···
 * validation code, since resource validation and eviction
 * both require the backup buffer to be reserved.
 */
-void vmw_resource_unbind_list(struct vmw_buffer_object *vbo)
+void vmw_resource_unbind_list(struct vmw_bo *vbo)
 {
 	struct ttm_validate_buffer val_buf = {
-		.bo = &vbo->base,
+		.bo = &vbo->tbo,
 		.num_shared = 0
 	};
 
-	dma_resv_assert_held(vbo->base.base.resv);
+	dma_resv_assert_held(vbo->tbo.base.resv);
 	while (!RB_EMPTY_ROOT(&vbo->res_tree)) {
 		struct rb_node *node = vbo->res_tree.rb_node;
 		struct vmw_resource *res =
···
 		if (!WARN_ON_ONCE(!res->func->unbind))
 			(void) res->func->unbind(res, res->res_dirty, &val_buf);
 
-		res->backup_dirty = true;
+		res->guest_memory_size = true;
 		res->res_dirty = false;
 		vmw_resource_mob_detach(res);
 	}
 
-	(void) ttm_bo_wait(&vbo->base, false, false);
+	(void) ttm_bo_wait(&vbo->tbo, false, false);
 }
 
 
···
 * Read back cached states from the device if they exist. This function
 * assumes binding_mutex is held.
 */
-int vmw_query_readback_all(struct vmw_buffer_object *dx_query_mob)
+int vmw_query_readback_all(struct vmw_bo *dx_query_mob)
 {
 	struct vmw_resource *dx_query_ctx;
 	struct vmw_private *dev_priv;
···
 			       struct ttm_resource *old_mem,
 			       struct ttm_resource *new_mem)
 {
-	struct vmw_buffer_object *dx_query_mob;
+	struct vmw_bo *dx_query_mob;
 	struct ttm_device *bdev = bo->bdev;
-	struct vmw_private *dev_priv;
-
-	dev_priv = container_of(bdev, struct vmw_private, bdev);
+	struct vmw_private *dev_priv = vmw_priv_from_ttm(bdev);
 
 	mutex_lock(&dev_priv->binding_mutex);
 
 	/* If BO is being moved from MOB to system memory */
-	if (new_mem->mem_type == TTM_PL_SYSTEM &&
+	if (old_mem &&
+	    new_mem->mem_type == TTM_PL_SYSTEM &&
 	    old_mem->mem_type == VMW_PL_MOB) {
 		struct vmw_fence_obj *fence;
 
-		dx_query_mob = container_of(bo, struct vmw_buffer_object, base);
+		dx_query_mob = to_vmw_bo(&bo->base);
 		if (!dx_query_mob || !dx_query_mob->dx_query_ctx) {
 			mutex_unlock(&dev_priv->binding_mutex);
 			return;
···
 */
 bool vmw_resource_needs_backup(const struct vmw_resource *res)
 {
-	return res->func->needs_backup;
+	return res->func->needs_guest_memory;
 }
 
 /**
···
 		goto out_no_reserve;
 
 	if (res->pin_count == 0) {
-		struct vmw_buffer_object *vbo = NULL;
+		struct vmw_bo *vbo = NULL;
 
-		if (res->backup) {
-			vbo = res->backup;
+		if (res->guest_memory_bo) {
+			vbo = res->guest_memory_bo;
 
-			ret = ttm_bo_reserve(&vbo->base, interruptible, false, NULL);
+			ret = ttm_bo_reserve(&vbo->tbo, interruptible, false, NULL);
 			if (ret)
 				goto out_no_validate;
-			if (!vbo->base.pin_count) {
+			if (!vbo->tbo.pin_count) {
+				vmw_bo_placement_set(vbo,
+						     res->func->domain,
+						     res->func->busy_domain);
 				ret = ttm_bo_validate
-					(&vbo->base,
-					 res->func->backup_placement,
+					(&vbo->tbo,
+					 &vbo->placement,
 					 &ctx);
 				if (ret) {
-					ttm_bo_unreserve(&vbo->base);
+					ttm_bo_unreserve(&vbo->tbo);
 					goto out_no_validate;
 				}
 			}
···
 		}
 		ret = vmw_resource_validate(res, interruptible, true);
 		if (vbo)
-			ttm_bo_unreserve(&vbo->base);
+			ttm_bo_unreserve(&vbo->tbo);
 		if (ret)
 			goto out_no_validate;
 	}
···
 	WARN_ON(ret);
 
 	WARN_ON(res->pin_count == 0);
-	if (--res->pin_count == 0 && res->backup) {
-		struct vmw_buffer_object *vbo = res->backup;
+	if (--res->pin_count == 0 && res->guest_memory_bo) {
+		struct vmw_bo *vbo = res->guest_memory_bo;
 
-		(void) ttm_bo_reserve(&vbo->base, false, false, NULL);
+		(void) ttm_bo_reserve(&vbo->tbo, false, false, NULL);
 		vmw_bo_pin_reserved(vbo, false);
-		ttm_bo_unreserve(&vbo->base);
+		ttm_bo_unreserve(&vbo->tbo);
 	}
 
 	vmw_resource_unreserve(res, false, false, false, NULL, 0UL);
···
 * @num_prefault: Returns how many pages including the first have been
 * cleaned and are ok to prefault
 */
-int vmw_resources_clean(struct vmw_buffer_object *vbo, pgoff_t start,
+int vmw_resources_clean(struct vmw_bo *vbo, pgoff_t start,
 			pgoff_t end, pgoff_t *num_prefault)
 {
 	struct rb_node *cur = vbo->res_tree.rb_node;
···
 		struct vmw_resource *cur_res =
 			container_of(cur, struct vmw_resource, mob_node);
 
-		if (cur_res->backup_offset >= res_end) {
+		if (cur_res->guest_memory_offset >= res_end) {
 			cur = cur->rb_left;
-		} else if (cur_res->backup_offset + cur_res->backup_size <=
+		} else if (cur_res->guest_memory_offset + cur_res->guest_memory_size <=
 			   res_start) {
 			cur = cur->rb_right;
 		} else {
···
 	}
 
 	/*
-	 * In order of increasing backup_offset, clean dirty resources
+	 * In order of increasing guest_memory_offset, clean dirty resources
 	 * intersecting the range.
 	 */
 	while (found) {
···
 
 			found->res_dirty = false;
 		}
-		last_cleaned = found->backup_offset + found->backup_size;
+		last_cleaned = found->guest_memory_offset + found->guest_memory_size;
 		cur = rb_next(&found->mob_node);
 		if (!cur)
 			break;
 
 		found = container_of(cur, struct vmw_resource, mob_node);
-		if (found->backup_offset >= res_end)
+		if (found->guest_memory_offset >= res_end)
 			break;
 	}
 
···
 	 */
 	*num_prefault = 1;
 	if (last_cleaned > res_start) {
-		struct ttm_buffer_object *bo = &vbo->base;
+		struct ttm_buffer_object *bo = &vbo->tbo;
 
 		*num_prefault = __KERNEL_DIV_ROUND_UP(last_cleaned - res_start,
 						      PAGE_SIZE);
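`vmw_resource_buf_alloc()` above shows the other API change in this series: `vmw_bo_create()` now takes a single `struct vmw_bo_params` filled with designated initializers instead of a long positional argument list. The shape of that pattern, with placeholder fields rather than the driver's real ones:

```c
#include <assert.h>
#include <stdlib.h>

/* Illustrative stand-in for struct vmw_bo_params. */
struct bo_params {
	unsigned int domain;
	unsigned int busy_domain;
	unsigned long size;
	int pin;
};

struct bo {
	struct bo_params params;
};

/*
 * Create a buffer object from a params block, as the new API does,
 * instead of threading each option through as a positional argument.
 */
static int bo_create(const struct bo_params *params, struct bo **out)
{
	struct bo *b;

	if (!params->size)
		return -1;	/* stand-in for -EINVAL */
	b = malloc(sizeof(*b));
	if (!b)
		return -2;	/* stand-in for -ENOMEM */
	b->params = *params;
	*out = b;
	return 0;
}
```

A params struct lets new options (here, the separate `domain` and `busy_domain`) be added without touching every call site, and designated initializers make each call self-documenting.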
+6 -4
drivers/gpu/drm/vmwgfx/vmwgfx_resource_priv.h
···
  * struct vmw_res_func - members and functions common for a resource type
  *
  * @res_type:          Enum that identifies the lru list to use for eviction.
- * @needs_backup:      Whether the resource is guest-backed and needs
+ * @needs_guest_memory:Whether the resource is guest-backed and needs
  *                     persistent buffer storage.
  * @type_name:         String that identifies the resource type.
- * @backup_placement:  TTM placement for backup buffers.
+ * @domain:            TTM placement for guest memory buffers.
+ * @busy_domain:       TTM busy placement for guest memory buffers.
  * @may_evict          Whether the resource may be evicted.
  * @create:            Create a hardware resource.
  * @destroy:           Destroy a hardware resource.
···
  */
 struct vmw_res_func {
 	enum vmw_res_type res_type;
-	bool needs_backup;
+	bool needs_guest_memory;
 	const char *type_name;
-	struct ttm_placement *backup_placement;
+	u32 domain;
+	u32 busy_domain;
 	bool may_evict;
 	u32 prio;
 	u32 dirty_prio;
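`vmw_res_func` now carries plain `u32` domain masks instead of a `struct ttm_placement *`, with `vmw_bo_placement_set()` expanding the masks into a concrete placement at validation time. A rough userspace sketch of that mask-to-list expansion, with illustrative domain bits rather than the real `VMW_BO_DOMAIN_*` values:

```c
#include <assert.h>

/* Illustrative domain bits, loosely modeled on VMW_BO_DOMAIN_*. */
#define DOMAIN_SYS	(1u << 0)
#define DOMAIN_VRAM	(1u << 1)
#define DOMAIN_GMR	(1u << 2)

struct placement {
	unsigned int num_places;
	unsigned int places[3];
};

/* Expand a domain bitmask into an ordered list of candidate placements. */
static void placement_set(struct placement *pl, unsigned int domain)
{
	unsigned int bit;

	pl->num_places = 0;
	for (bit = DOMAIN_SYS; bit <= DOMAIN_GMR; bit <<= 1)
		if (domain & bit)
			pl->places[pl->num_places++] = bit;
}
```

Storing a mask rather than a prebuilt placement keeps the per-resource-type table tiny and lets one buffer accept several memory types in preference order.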
+28 -25
drivers/gpu/drm/vmwgfx/vmwgfx_scrn.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2011-2022 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2011-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
  *
  **************************************************************************/

+#include "vmwgfx_bo.h"
+#include "vmwgfx_kms.h"
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_fourcc.h>
-
-#include "vmwgfx_kms.h"

 #define vmw_crtc_to_sou(x) \
	container_of(x, struct vmw_screen_object_unit, base.crtc)
···
	struct vmw_display_unit base;

	unsigned long buffer_size; /**< Size of allocated buffer */
-	struct vmw_buffer_object *buffer; /**< Backing store buffer */
+	struct vmw_bo *buffer; /**< Backing store buffer */

	bool defined;
 };
···
	sou->base.set_gui_y = cmd->obj.root.y;

	/* Ok to assume that buffer is pinned in vram */
-	vmw_bo_get_guest_ptr(&sou->buffer->base, &cmd->obj.backingStore.ptr);
+	vmw_bo_get_guest_ptr(&sou->buffer->tbo, &cmd->obj.backingStore.ptr);
	cmd->obj.backingStore.pitch = mode->hdisplay * 4;

	vmw_cmd_commit(dev_priv, fifo_size);
···
	struct drm_crtc *crtc = plane->state->crtc ?: new_state->crtc;
	struct vmw_plane_state *vps = vmw_plane_state_to_vps(new_state);
	struct vmw_private *dev_priv;
-	size_t size;
	int ret;
-
+	struct vmw_bo_params bo_params = {
+		.domain = VMW_BO_DOMAIN_VRAM,
+		.busy_domain = VMW_BO_DOMAIN_VRAM,
+		.bo_type = ttm_bo_type_device,
+		.pin = true
+	};

	if (!new_fb) {
		vmw_bo_unreference(&vps->bo);
···
		return 0;
	}

-	size = new_state->crtc_w * new_state->crtc_h * 4;
+	bo_params.size = new_state->crtc_w * new_state->crtc_h * 4;
	dev_priv = vmw_priv(crtc->dev);

	if (vps->bo) {
-		if (vps->bo_size == size) {
+		if (vps->bo_size == bo_params.size) {
			/*
			 * Note that this might temporarily up the pin-count
			 * to 2, until cleanup_fb() is called.
···
	 * resume the overlays, this is preferred to failing to alloc.
	 */
	vmw_overlay_pause_all(dev_priv);
-	ret = vmw_bo_create(dev_priv, size,
-			    &vmw_vram_placement,
-			    false, true, &vmw_bo_bo_free, &vps->bo);
+	ret = vmw_bo_create(dev_priv, &bo_params, &vps->bo);
	vmw_overlay_resume_all(dev_priv);
-	if (ret) {
-		vps->bo = NULL; /* vmw_bo_init frees on error */
+	if (ret)
		return ret;
-	}

-	vps->bo_size = size;
+	vps->bo_size = bo_params.size;

	/*
	 * TTM already thinks the buffer is pinned, but make sure the
···
	gmr->body.format.colorDepth = depth;
	gmr->body.format.reserved = 0;
	gmr->body.bytesPerLine = update->vfb->base.pitches[0];
-	vmw_bo_get_guest_ptr(&vfbbo->buffer->base, &gmr->body.ptr);
+	vmw_bo_get_guest_ptr(&vfbbo->buffer->tbo, &gmr->body.ptr);

	return sizeof(*gmr);
 }
···
	bo_update.base.vfb = vfb;
	bo_update.base.out_fence = out_fence;
	bo_update.base.mutex = NULL;
-	bo_update.base.cpu_blit = false;
	bo_update.base.intr = true;

	bo_update.base.calc_fifo_size = vmw_sou_bo_fifo_size;
···
	srf_update.base.vfb = vfb;
	srf_update.base.out_fence = out_fence;
	srf_update.base.mutex = &dev_priv->cmdbuf_mutex;
-	srf_update.base.cpu_blit = false;
	srf_update.base.intr = true;

	srf_update.base.calc_fifo_size = vmw_sou_surface_fifo_size;
···
 static int do_bo_define_gmrfb(struct vmw_private *dev_priv,
			       struct vmw_framebuffer *framebuffer)
 {
-	struct vmw_buffer_object *buf =
+	struct vmw_bo *buf =
		container_of(framebuffer, struct vmw_framebuffer_bo,
			     base)->buffer;
	int depth = framebuffer->base.format->depth;
···
	cmd->body.format.reserved = 0;
	cmd->body.bytesPerLine = framebuffer->base.pitches[0];
	/* Buffer is reserved in vram or GMR */
-	vmw_bo_get_guest_ptr(&buf->base, &cmd->body.ptr);
+	vmw_bo_get_guest_ptr(&buf->tbo, &cmd->body.ptr);
	vmw_cmd_commit(dev_priv, sizeof(*cmd));

	return 0;
···
			     struct vmw_fence_obj **out_fence,
			     struct drm_crtc *crtc)
 {
-	struct vmw_buffer_object *buf =
+	struct vmw_bo *buf =
		container_of(framebuffer, struct vmw_framebuffer_bo,
			     base)->buffer;
	struct vmw_kms_dirty dirty;
	DECLARE_VAL_CONTEXT(val_ctx, NULL, 0);
	int ret;

-	ret = vmw_validation_add_bo(&val_ctx, buf, false, false);
+	vmw_bo_placement_set(buf, VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM,
+			     VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM);
+	ret = vmw_validation_add_bo(&val_ctx, buf);
	if (ret)
		return ret;
···
			 uint32_t num_clips,
			 struct drm_crtc *crtc)
 {
-	struct vmw_buffer_object *buf =
+	struct vmw_bo *buf =
		container_of(vfb, struct vmw_framebuffer_bo, base)->buffer;
	struct vmw_kms_dirty dirty;
	DECLARE_VAL_CONTEXT(val_ctx, NULL, 0);
	int ret;

-	ret = vmw_validation_add_bo(&val_ctx, buf, false, false);
+	vmw_bo_placement_set(buf, VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM,
+			     VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM);
+	ret = vmw_validation_add_bo(&val_ctx, buf);
	if (ret)
		return ret;
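The vmwgfx_scrn.c hunks above switch `vmw_bo_create()` from a positional argument list to a single `vmw_bo_params` struct. A minimal sketch of that calling convention, using hypothetical stand-in types (`bo_params_sketch`, `bo_create_sketch` are illustrative, not driver code; only the field names follow the diff):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-in for vmw_bo_params: fixed fields are set with designated
 * initializers; .size can be computed later, before the create call. */
struct bo_params_sketch {
	uint32_t domain;
	uint32_t busy_domain;
	int bo_type;
	size_t size;
	bool pin;
};

struct bo_sketch {
	struct bo_params_sketch params;
};

/* Stand-in for vmw_bo_create(dev_priv, &params, &bo). */
static int bo_create_sketch(const struct bo_params_sketch *p,
			    struct bo_sketch *out)
{
	if (p->size == 0)
		return -22; /* stands in for -EINVAL */
	out->params = *p; /* the real driver allocates and places the BO here */
	return 0;
}
```

As in the hunk, a caller fills the fixed fields up front and computes `.size` from the mode (`crtc_w * crtc_h * 4`) just before creating, so adding a parameter no longer means touching every call site's argument list.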
+38 -29
drivers/gpu/drm/vmwgfx/vmwgfx_shader.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA
+ * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···

 #include <drm/ttm/ttm_placement.h>

+#include "vmwgfx_binding.h"
+#include "vmwgfx_bo.h"
 #include "vmwgfx_drv.h"
 #include "vmwgfx_resource_priv.h"
-#include "vmwgfx_binding.h"

 struct vmw_shader {
	struct vmw_resource res;
···

 static const struct vmw_res_func vmw_gb_shader_func = {
	.res_type = vmw_res_shader,
-	.needs_backup = true,
+	.needs_guest_memory = true,
	.may_evict = true,
	.prio = 3,
	.dirty_prio = 3,
	.type_name = "guest backed shaders",
-	.backup_placement = &vmw_mob_placement,
+	.domain = VMW_BO_DOMAIN_MOB,
+	.busy_domain = VMW_BO_DOMAIN_MOB,
	.create = vmw_gb_shader_create,
	.destroy = vmw_gb_shader_destroy,
	.bind = vmw_gb_shader_bind,
···

 static const struct vmw_res_func vmw_dx_shader_func = {
	.res_type = vmw_res_shader,
-	.needs_backup = true,
+	.needs_guest_memory = true,
	.may_evict = true,
	.prio = 3,
	.dirty_prio = 3,
	.type_name = "dx shaders",
-	.backup_placement = &vmw_mob_placement,
+	.domain = VMW_BO_DOMAIN_MOB,
+	.busy_domain = VMW_BO_DOMAIN_MOB,
	.create = vmw_dx_shader_create,
	/*
	 * The destroy callback is only called with a committed resource on
···
			  SVGA3dShaderType type,
			  uint8_t num_input_sig,
			  uint8_t num_output_sig,
-			  struct vmw_buffer_object *byte_code,
+			  struct vmw_bo *byte_code,
			  void (*res_free) (struct vmw_resource *res))
 {
	struct vmw_shader *shader = vmw_res_to_shader(res);
···
		return ret;
	}

-	res->backup_size = size;
+	res->guest_memory_size = size;
	if (byte_code) {
-		res->backup = vmw_bo_reference(byte_code);
-		res->backup_offset = offset;
+		res->guest_memory_bo = vmw_bo_reference(byte_code);
+		res->guest_memory_offset = offset;
	}
	shader->size = size;
	shader->type = type;
···
	cmd->header.size = sizeof(cmd->body);
	cmd->body.shid = res->id;
	cmd->body.mobid = bo->resource->start;
-	cmd->body.offsetInBytes = res->backup_offset;
-	res->backup_dirty = false;
+	cmd->body.offsetInBytes = res->guest_memory_offset;
+	res->guest_memory_dirty = false;
	vmw_cmd_commit(dev_priv, sizeof(*cmd));

	return 0;
···
	} *cmd;
	struct vmw_fence_obj *fence;

-	BUG_ON(res->backup->base.resource->mem_type != VMW_PL_MOB);
+	BUG_ON(res->guest_memory_bo->tbo.resource->mem_type != VMW_PL_MOB);

	cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd));
	if (unlikely(cmd == NULL))
···
	cmd->header.size = sizeof(cmd->body);
	cmd->body.cid = shader->ctx->id;
	cmd->body.shid = shader->id;
-	cmd->body.mobid = res->backup->base.resource->start;
-	cmd->body.offsetInBytes = res->backup_offset;
+	cmd->body.mobid = res->guest_memory_bo->tbo.resource->start;
+	cmd->body.offsetInBytes = res->guest_memory_offset;
	vmw_cmd_commit(dev_priv, sizeof(*cmd));

	vmw_cotable_add_resource(shader->cotable, &shader->cotable_head);
···
	struct vmw_fence_obj *fence;
	int ret;

-	BUG_ON(res->backup->base.resource->mem_type != VMW_PL_MOB);
+	BUG_ON(res->guest_memory_bo->tbo.resource->mem_type != VMW_PL_MOB);

	mutex_lock(&dev_priv->binding_mutex);
	ret = vmw_dx_shader_scrub(res);
···
 }

 static int vmw_user_shader_alloc(struct vmw_private *dev_priv,
-				 struct vmw_buffer_object *buffer,
+				 struct vmw_bo *buffer,
				 size_t shader_size,
				 size_t offset,
				 SVGA3dShaderType shader_type,
···


 static struct vmw_resource *vmw_shader_alloc(struct vmw_private *dev_priv,
-					     struct vmw_buffer_object *buffer,
+					     struct vmw_bo *buffer,
					     size_t shader_size,
					     size_t offset,
					     SVGA3dShaderType shader_type)
···
 {
	struct vmw_private *dev_priv = vmw_priv(dev);
	struct ttm_object_file *tfile = vmw_fpriv(file_priv)->tfile;
-	struct vmw_buffer_object *buffer = NULL;
+	struct vmw_bo *buffer = NULL;
	SVGA3dShaderType shader_type;
	int ret;
···
		return ret;
	}

-	if ((u64)buffer->base.base.size < (u64)size + (u64)offset) {
+	if ((u64)buffer->tbo.base.size < (u64)size + (u64)offset) {
		VMW_DEBUG_USER("Illegal buffer- or shader size.\n");
		ret = -EINVAL;
		goto out_bad_arg;
···
				    num_output_sig, tfile, shader_handle);
 out_bad_arg:
	vmw_bo_unreference(&buffer);
-	drm_gem_object_put(&buffer->base.base);
+	drm_gem_object_put(&buffer->tbo.base);
	return ret;
 }
···
			  struct list_head *list)
 {
	struct ttm_operation_ctx ctx = { false, true };
-	struct vmw_buffer_object *buf;
+	struct vmw_bo *buf;
	struct ttm_bo_kmap_obj map;
	bool is_iomem;
	int ret;
	struct vmw_resource *res;
+	struct vmw_bo_params bo_params = {
+		.domain = VMW_BO_DOMAIN_SYS,
+		.busy_domain = VMW_BO_DOMAIN_SYS,
+		.bo_type = ttm_bo_type_device,
+		.size = size,
+		.pin = true
+	};

	if (!vmw_shader_id_ok(user_key, shader_type))
		return -EINVAL;

-	ret = vmw_bo_create(dev_priv, size, &vmw_sys_placement,
-			    true, true, vmw_bo_bo_free, &buf);
+	ret = vmw_bo_create(dev_priv, &bo_params, &buf);
	if (unlikely(ret != 0))
		goto out;

-	ret = ttm_bo_reserve(&buf->base, false, true, NULL);
+	ret = ttm_bo_reserve(&buf->tbo, false, true, NULL);
	if (unlikely(ret != 0))
		goto no_reserve;

	/* Map and copy shader bytecode. */
-	ret = ttm_bo_kmap(&buf->base, 0, PFN_UP(size), &map);
+	ret = ttm_bo_kmap(&buf->tbo, 0, PFN_UP(size), &map);
	if (unlikely(ret != 0)) {
-		ttm_bo_unreserve(&buf->base);
+		ttm_bo_unreserve(&buf->tbo);
		goto no_reserve;
	}
···
	WARN_ON(is_iomem);

	ttm_bo_kunmap(&map);
-	ret = ttm_bo_validate(&buf->base, &vmw_sys_placement, &ctx);
+	ret = ttm_bo_validate(&buf->tbo, &buf->placement, &ctx);
	WARN_ON(ret != 0);
-	ttm_bo_unreserve(&buf->base);
+	ttm_bo_unreserve(&buf->tbo);

	res = vmw_shader_alloc(dev_priv, buf, size, 0, shader_type);
	if (unlikely(ret != 0))
+4 -2
drivers/gpu/drm/vmwgfx/vmwgfx_so.c
···
  *
  **************************************************************************/

+#include "vmwgfx_bo.h"
 #include "vmwgfx_drv.h"
 #include "vmwgfx_resource_priv.h"
 #include "vmwgfx_so.h"
···

 static const struct vmw_res_func vmw_view_func = {
	.res_type = vmw_res_view,
-	.needs_backup = false,
+	.needs_guest_memory = false,
	.may_evict = false,
	.type_name = "DX view",
-	.backup_placement = NULL,
+	.domain = VMW_BO_DOMAIN_SYS,
+	.busy_domain = VMW_BO_DOMAIN_SYS,
	.create = vmw_view_create,
	.commit_notify = vmw_view_commit_notify,
 };
+43 -276
drivers/gpu/drm/vmwgfx/vmwgfx_stdu.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /******************************************************************************
  *
- * COPYRIGHT (C) 2014-2022 VMware, Inc., Palo Alto, CA., USA
+ * COPYRIGHT (C) 2014-2023 VMware, Inc., Palo Alto, CA., USA
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
  * copy of this software and associated documentation files (the
···
  *
  ******************************************************************************/

+#include "vmwgfx_bo.h"
+#include "vmwgfx_kms.h"
+#include "vmw_surface_cache.h"
+
 #include <drm/drm_atomic.h>
 #include <drm/drm_atomic_helper.h>
 #include <drm/drm_damage_helper.h>
 #include <drm/drm_fourcc.h>
-
-#include "vmwgfx_kms.h"
-#include "vmw_surface_cache.h"

 #define vmw_crtc_to_stdu(x) \
	container_of(x, struct vmw_screen_target_display_unit, base.crtc)
···
  */
 struct vmw_stdu_dirty {
	struct vmw_kms_dirty base;
-	SVGA3dTransferType transfer;
	s32 left, right, top, bottom;
	s32 fb_left, fb_top;
	u32 pitch;
	union {
-		struct vmw_buffer_object *buf;
+		struct vmw_bo *buf;
		u32 sid;
	};
 };
···
 /******************************************************************************
  * Screen Target Display Unit CRTC Functions
  *****************************************************************************/
-
-static bool vmw_stdu_use_cpu_blit(const struct vmw_private *vmw)
-{
-	return !(vmw->capabilities & SVGA_CAP_3D) || vmw->vram_size < (32 * 1024 * 1024);
-}
-

 /**
  * vmw_stdu_crtc_destroy - cleans up the STDU
···
 }

 /**
- * vmw_stdu_bo_clip - Callback to encode a suface DMA command cliprect
- *
- * @dirty: The closure structure.
- *
- * Encodes a surface DMA command cliprect and updates the bounding box
- * for the DMA.
- */
-static void vmw_stdu_bo_clip(struct vmw_kms_dirty *dirty)
-{
-	struct vmw_stdu_dirty *ddirty =
-		container_of(dirty, struct vmw_stdu_dirty, base);
-	struct vmw_stdu_dma *cmd = dirty->cmd;
-	struct SVGA3dCopyBox *blit = (struct SVGA3dCopyBox *) &cmd[1];
-
-	blit += dirty->num_hits;
-	blit->srcx = dirty->fb_x;
-	blit->srcy = dirty->fb_y;
-	blit->x = dirty->unit_x1;
-	blit->y = dirty->unit_y1;
-	blit->d = 1;
-	blit->w = dirty->unit_x2 - dirty->unit_x1;
-	blit->h = dirty->unit_y2 - dirty->unit_y1;
-	dirty->num_hits++;
-
-	if (ddirty->transfer != SVGA3D_WRITE_HOST_VRAM)
-		return;
-
-	/* Destination bounding box */
-	ddirty->left = min_t(s32, ddirty->left, dirty->unit_x1);
-	ddirty->top = min_t(s32, ddirty->top, dirty->unit_y1);
-	ddirty->right = max_t(s32, ddirty->right, dirty->unit_x2);
-	ddirty->bottom = max_t(s32, ddirty->bottom, dirty->unit_y2);
-}
-
-/**
- * vmw_stdu_bo_fifo_commit - Callback to fill in and submit a DMA command.
- *
- * @dirty: The closure structure.
- *
- * Fills in the missing fields in a DMA command, and optionally encodes
- * a screen target update command, depending on transfer direction.
- */
-static void vmw_stdu_bo_fifo_commit(struct vmw_kms_dirty *dirty)
-{
-	struct vmw_stdu_dirty *ddirty =
-		container_of(dirty, struct vmw_stdu_dirty, base);
-	struct vmw_screen_target_display_unit *stdu =
-		container_of(dirty->unit, typeof(*stdu), base);
-	struct vmw_stdu_dma *cmd = dirty->cmd;
-	struct SVGA3dCopyBox *blit = (struct SVGA3dCopyBox *) &cmd[1];
-	SVGA3dCmdSurfaceDMASuffix *suffix =
-		(SVGA3dCmdSurfaceDMASuffix *) &blit[dirty->num_hits];
-	size_t blit_size = sizeof(*blit) * dirty->num_hits + sizeof(*suffix);
-
-	if (!dirty->num_hits) {
-		vmw_cmd_commit(dirty->dev_priv, 0);
-		return;
-	}
-
-	cmd->header.id = SVGA_3D_CMD_SURFACE_DMA;
-	cmd->header.size = sizeof(cmd->body) + blit_size;
-	vmw_bo_get_guest_ptr(&ddirty->buf->base, &cmd->body.guest.ptr);
-	cmd->body.guest.pitch = ddirty->pitch;
-	cmd->body.host.sid = stdu->display_srf->res.id;
-	cmd->body.host.face = 0;
-	cmd->body.host.mipmap = 0;
-	cmd->body.transfer = ddirty->transfer;
-	suffix->suffixSize = sizeof(*suffix);
-	suffix->maximumOffset = ddirty->buf->base.base.size;
-
-	if (ddirty->transfer == SVGA3D_WRITE_HOST_VRAM) {
-		blit_size += sizeof(struct vmw_stdu_update);
-
-		vmw_stdu_populate_update(&suffix[1], stdu->base.unit,
-					 ddirty->left, ddirty->right,
-					 ddirty->top, ddirty->bottom);
-	}
-
-	vmw_cmd_commit(dirty->dev_priv, sizeof(*cmd) + blit_size);
-
-	stdu->display_srf->res.res_dirty = true;
-	ddirty->left = ddirty->top = S32_MAX;
-	ddirty->right = ddirty->bottom = S32_MIN;
-}
-
-
-/**
  * vmw_stdu_bo_cpu_clip - Callback to encode a CPU blit
  *
  * @dirty: The closure structure.
···
		return;

	/* Assume we are blitting from Guest (bo) to Host (display_srf) */
-	dst_pitch = stdu->display_srf->metadata.base_size.width * stdu->cpp;
-	dst_bo = &stdu->display_srf->res.backup->base;
-	dst_offset = ddirty->top * dst_pitch + ddirty->left * stdu->cpp;
+	src_pitch = stdu->display_srf->metadata.base_size.width * stdu->cpp;
+	src_bo = &stdu->display_srf->res.guest_memory_bo->tbo;
+	src_offset = ddirty->top * dst_pitch + ddirty->left * stdu->cpp;

-	src_pitch = ddirty->pitch;
-	src_bo = &ddirty->buf->base;
-	src_offset = ddirty->fb_top * src_pitch + ddirty->fb_left * stdu->cpp;
-
-	/* Swap src and dst if the assumption was wrong. */
-	if (ddirty->transfer != SVGA3D_WRITE_HOST_VRAM) {
-		swap(dst_pitch, src_pitch);
-		swap(dst_bo, src_bo);
-		swap(src_offset, dst_offset);
-	}
+	dst_pitch = ddirty->pitch;
+	dst_bo = &ddirty->buf->tbo;
+	dst_offset = ddirty->fb_top * src_pitch + ddirty->fb_left * stdu->cpp;

	(void) vmw_bo_cpu_blit(dst_bo, dst_offset, dst_pitch,
			       src_bo, src_offset, src_pitch,
			       width * stdu->cpp, height, &diff);
-
-	if (ddirty->transfer == SVGA3D_WRITE_HOST_VRAM &&
-	    drm_rect_visible(&diff.rect)) {
-		struct vmw_private *dev_priv;
-		struct vmw_stdu_update *cmd;
-		struct drm_clip_rect region;
-		int ret;
-
-		/* We are updating the actual surface, not a proxy */
-		region.x1 = diff.rect.x1;
-		region.x2 = diff.rect.x2;
-		region.y1 = diff.rect.y1;
-		region.y2 = diff.rect.y2;
-		ret = vmw_kms_update_proxy(&stdu->display_srf->res, &region,
-					   1, 1);
-		if (ret)
-			goto out_cleanup;
-
-
-		dev_priv = vmw_priv(stdu->base.crtc.dev);
-		cmd = VMW_CMD_RESERVE(dev_priv, sizeof(*cmd));
-		if (!cmd)
-			goto out_cleanup;
-
-		vmw_stdu_populate_update(cmd, stdu->base.unit,
-					 region.x1, region.x2,
-					 region.y1, region.y2);
-
-		vmw_cmd_commit(dev_priv, sizeof(*cmd));
-	}
-
-out_cleanup:
-	ddirty->left = ddirty->top = ddirty->fb_left = ddirty->fb_top = S32_MAX;
-	ddirty->right = ddirty->bottom = S32_MIN;
 }

 /**
- * vmw_kms_stdu_dma - Perform a DMA transfer between a buffer-object backed
+ * vmw_kms_stdu_readback - Perform a readback from a buffer-object backed
  * framebuffer and the screen target system.
  *
  * @dev_priv: Pointer to the device private structure.
···
  * be NULL.
  * @num_clips: Number of clip rects in @clips or @vclips.
  * @increment: Increment to use when looping over @clips or @vclips.
- * @to_surface: Whether to DMA to the screen target system as opposed to
- * from the screen target system.
- * @interruptible: Whether to perform waits interruptible if possible.
  * @crtc: If crtc is passed, perform stdu dma on that crtc only.
  *
  * If DMA-ing till the screen target system, the function will also notify
···
  * Returns 0 on success, negative error code on failure. -ERESTARTSYS if
  * interrupted.
  */
-int vmw_kms_stdu_dma(struct vmw_private *dev_priv,
-		     struct drm_file *file_priv,
-		     struct vmw_framebuffer *vfb,
-		     struct drm_vmw_fence_rep __user *user_fence_rep,
-		     struct drm_clip_rect *clips,
-		     struct drm_vmw_rect *vclips,
-		     uint32_t num_clips,
-		     int increment,
-		     bool to_surface,
-		     bool interruptible,
-		     struct drm_crtc *crtc)
+int vmw_kms_stdu_readback(struct vmw_private *dev_priv,
+			  struct drm_file *file_priv,
+			  struct vmw_framebuffer *vfb,
+			  struct drm_vmw_fence_rep __user *user_fence_rep,
+			  struct drm_clip_rect *clips,
+			  struct drm_vmw_rect *vclips,
+			  uint32_t num_clips,
+			  int increment,
+			  struct drm_crtc *crtc)
 {
-	struct vmw_buffer_object *buf =
+	struct vmw_bo *buf =
		container_of(vfb, struct vmw_framebuffer_bo, base)->buffer;
	struct vmw_stdu_dirty ddirty;
	int ret;
-	bool cpu_blit = vmw_stdu_use_cpu_blit(dev_priv);
	DECLARE_VAL_CONTEXT(val_ctx, NULL, 0);

	/*
-	 * VMs without 3D support don't have the surface DMA command and
-	 * we'll be using a CPU blit, and the framebuffer should be moved out
-	 * of VRAM.
+	 * The GMR domain might seem confusing because it might seem like it should
+	 * never happen with screen targets but e.g. the xorg vmware driver issues
+	 * CMD_SURFACE_DMA for various pixmap updates which might transition our bo to
+	 * a GMR. Instead of forcing another transition we can optimize the readback
+	 * by reading directly from the GMR.
	 */
-	ret = vmw_validation_add_bo(&val_ctx, buf, false, cpu_blit);
+	vmw_bo_placement_set(buf,
+			     VMW_BO_DOMAIN_MOB | VMW_BO_DOMAIN_SYS | VMW_BO_DOMAIN_GMR,
+			     VMW_BO_DOMAIN_MOB | VMW_BO_DOMAIN_SYS | VMW_BO_DOMAIN_GMR);
+	ret = vmw_validation_add_bo(&val_ctx, buf);
	if (ret)
		return ret;

-	ret = vmw_validation_prepare(&val_ctx, NULL, interruptible);
+	ret = vmw_validation_prepare(&val_ctx, NULL, true);
	if (ret)
		goto out_unref;

-	ddirty.transfer = (to_surface) ? SVGA3D_WRITE_HOST_VRAM :
-			SVGA3D_READ_HOST_VRAM;
	ddirty.left = ddirty.top = S32_MAX;
	ddirty.right = ddirty.bottom = S32_MIN;
	ddirty.fb_left = ddirty.fb_top = S32_MAX;
	ddirty.pitch = vfb->base.pitches[0];
	ddirty.buf = buf;
-	ddirty.base.fifo_commit = vmw_stdu_bo_fifo_commit;
-	ddirty.base.clip = vmw_stdu_bo_clip;
-	ddirty.base.fifo_reserve_size = sizeof(struct vmw_stdu_dma) +
-		num_clips * sizeof(SVGA3dCopyBox) +
-		sizeof(SVGA3dCmdSurfaceDMASuffix);
-	if (to_surface)
-		ddirty.base.fifo_reserve_size += sizeof(struct vmw_stdu_update);
-
-
-	if (cpu_blit) {
-		ddirty.base.fifo_commit = vmw_stdu_bo_cpu_commit;
-		ddirty.base.clip = vmw_stdu_bo_cpu_clip;
-		ddirty.base.fifo_reserve_size = 0;
-	}
+	ddirty.base.fifo_commit = vmw_stdu_bo_cpu_commit;
+	ddirty.base.clip = vmw_stdu_bo_cpu_clip;
+	ddirty.base.fifo_reserve_size = 0;

	ddirty.base.crtc = crtc;
···
	/*
	 * This should only happen if the buffer object is too large to create a
	 * proxy surface for.
-	 * If we are a 2D VM with a buffer object then we have to use CPU blit
-	 * so cache these mappings
	 */
-	if (vps->content_fb_type == SEPARATE_BO &&
-	    vmw_stdu_use_cpu_blit(dev_priv))
+	if (vps->content_fb_type == SEPARATE_BO)
		vps->cpp = new_fb->pitches[0] / new_fb->width;

	return 0;
···
	return ret;
 }

-static uint32_t vmw_stdu_bo_fifo_size(struct vmw_du_update_plane *update,
-				      uint32_t num_hits)
-{
-	return sizeof(struct vmw_stdu_dma) + sizeof(SVGA3dCopyBox) * num_hits +
-		sizeof(SVGA3dCmdSurfaceDMASuffix) +
-		sizeof(struct vmw_stdu_update);
-}
-
 static uint32_t vmw_stdu_bo_fifo_size_cpu(struct vmw_du_update_plane *update,
					  uint32_t num_hits)
 {
	return sizeof(struct vmw_stdu_update_gb_image) +
		sizeof(struct vmw_stdu_update);
-}
-
-static uint32_t vmw_stdu_bo_populate_dma(struct vmw_du_update_plane *update,
-					 void *cmd, uint32_t num_hits)
-{
-	struct vmw_screen_target_display_unit *stdu;
-	struct vmw_framebuffer_bo *vfbbo;
-	struct vmw_stdu_dma *cmd_dma = cmd;
-
-	stdu = container_of(update->du, typeof(*stdu), base);
-	vfbbo = container_of(update->vfb, typeof(*vfbbo), base);
-
-	cmd_dma->header.id = SVGA_3D_CMD_SURFACE_DMA;
-	cmd_dma->header.size = sizeof(cmd_dma->body) +
-		sizeof(struct SVGA3dCopyBox) * num_hits +
-		sizeof(SVGA3dCmdSurfaceDMASuffix);
-	vmw_bo_get_guest_ptr(&vfbbo->buffer->base, &cmd_dma->body.guest.ptr);
-	cmd_dma->body.guest.pitch = update->vfb->base.pitches[0];
-	cmd_dma->body.host.sid = stdu->display_srf->res.id;
-	cmd_dma->body.host.face = 0;
-	cmd_dma->body.host.mipmap = 0;
-	cmd_dma->body.transfer = SVGA3D_WRITE_HOST_VRAM;
-
-	return sizeof(*cmd_dma);
-}
-
-static uint32_t vmw_stdu_bo_populate_clip(struct vmw_du_update_plane *update,
-					  void *cmd, struct drm_rect *clip,
-					  uint32_t fb_x, uint32_t fb_y)
-{
-	struct SVGA3dCopyBox *box = cmd;
-
-	box->srcx = fb_x;
-	box->srcy = fb_y;
-	box->srcz = 0;
-	box->x = clip->x1;
-	box->y = clip->y1;
-	box->z = 0;
-	box->w = drm_rect_width(clip);
-	box->h = drm_rect_height(clip);
-	box->d = 1;
-
-	return sizeof(*box);
-}
-
-static uint32_t vmw_stdu_bo_populate_update(struct vmw_du_update_plane *update,
-					    void *cmd, struct drm_rect *bb)
-{
-	struct vmw_screen_target_display_unit *stdu;
-	struct vmw_framebuffer_bo *vfbbo;
-	SVGA3dCmdSurfaceDMASuffix *suffix = cmd;
-
-	stdu = container_of(update->du, typeof(*stdu), base);
-	vfbbo = container_of(update->vfb, typeof(*vfbbo), base);
-
-	suffix->suffixSize = sizeof(*suffix);
-	suffix->maximumOffset = vfbbo->buffer->base.base.size;
-
-	vmw_stdu_populate_update(&suffix[1], stdu->base.unit, bb->x1, bb->x2,
-				 bb->y1, bb->y2);
-
-	return sizeof(*suffix) + sizeof(struct vmw_stdu_update);
 }

 static uint32_t vmw_stdu_bo_pre_clip_cpu(struct vmw_du_update_plane *update,
···
	diff.cpp = stdu->cpp;

-	dst_bo = &stdu->display_srf->res.backup->base;
+	dst_bo = &stdu->display_srf->res.guest_memory_bo->tbo;
	dst_pitch = stdu->display_srf->metadata.base_size.width * stdu->cpp;
	dst_offset = bb->y1 * dst_pitch + bb->x1 * stdu->cpp;

-	src_bo = &vfbbo->buffer->base;
+	src_bo = &vfbbo->buffer->tbo;
	src_pitch = update->vfb->base.pitches[0];
	src_offset = bo_update->fb_top * src_pitch + bo_update->fb_left *
		stdu->cpp;
···
	bo_update.base.vfb = vfb;
	bo_update.base.out_fence = out_fence;
	bo_update.base.mutex = NULL;
-	bo_update.base.cpu_blit = vmw_stdu_use_cpu_blit(dev_priv);
	bo_update.base.intr = false;

-	/*
-	 * VM without 3D support don't have surface DMA command and framebuffer
-	 * should be moved out of VRAM.
-	 */
-	if (bo_update.base.cpu_blit) {
-		bo_update.base.calc_fifo_size = vmw_stdu_bo_fifo_size_cpu;
-		bo_update.base.pre_clip = vmw_stdu_bo_pre_clip_cpu;
-		bo_update.base.clip = vmw_stdu_bo_clip_cpu;
-		bo_update.base.post_clip = vmw_stdu_bo_populate_update_cpu;
-	} else {
-		bo_update.base.calc_fifo_size = vmw_stdu_bo_fifo_size;
-		bo_update.base.pre_clip = vmw_stdu_bo_populate_dma;
-		bo_update.base.clip = vmw_stdu_bo_populate_clip;
-		bo_update.base.post_clip = vmw_stdu_bo_populate_update;
-	}
+	bo_update.base.calc_fifo_size = vmw_stdu_bo_fifo_size_cpu;
+	bo_update.base.pre_clip = vmw_stdu_bo_pre_clip_cpu;
+	bo_update.base.clip = vmw_stdu_bo_clip_cpu;
+	bo_update.base.post_clip = vmw_stdu_bo_populate_update_cpu;

	return vmw_du_helper_plane_update(&bo_update.base);
 }
···
	srf_update.vfb = vfb;
	srf_update.out_fence = out_fence;
	srf_update.mutex = &dev_priv->cmdbuf_mutex;
-	srf_update.cpu_blit = false;
	srf_update.intr = true;

	if (vfbs->is_bo_proxy)
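With the surface-DMA path removed above, buffer-object updates always go through `vmw_bo_cpu_blit`, which locates each clip rectangle in a linear buffer with "y * pitch + x * cpp" arithmetic (as in `vmw_stdu_bo_pre_clip_cpu`). A small sketch of that offset calculation, not driver code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Byte offset of pixel (x, y) in a linear buffer: the same
 * "top * pitch + left * cpp" arithmetic the CPU-blit path uses. */
static size_t blit_offset(uint32_t y, size_t pitch, uint32_t x, uint32_t cpp)
{
	/* Widen before multiplying so large surfaces cannot overflow u32. */
	return (size_t)y * pitch + (size_t)x * cpp;
}
```

For a 1024-pixel-wide, 4-bytes-per-pixel surface the pitch is 4096, so pixel (10, 2) starts at byte 2 * 4096 + 10 * 4 = 8232.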
+11 -9
drivers/gpu/drm/vmwgfx/vmwgfx_streamoutput.c
···
 // SPDX-License-Identifier: GPL-2.0 OR MIT
 /**************************************************************************
  *
- * Copyright © 2018-2019 VMware, Inc., Palo Alto, CA., USA
+ * Copyright © 2018-2023 VMware, Inc., Palo Alto, CA., USA
  * All Rights Reserved.
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
···
  *
  **************************************************************************/

-#include <drm/ttm/ttm_placement.h>
-
+#include "vmwgfx_binding.h"
+#include "vmwgfx_bo.h"
 #include "vmwgfx_drv.h"
 #include "vmwgfx_resource_priv.h"
-#include "vmwgfx_binding.h"
+
+#include <drm/ttm/ttm_placement.h>

 /**
  * struct vmw_dx_streamoutput - Streamoutput resource metadata.
···

 static const struct vmw_res_func vmw_dx_streamoutput_func = {
	.res_type = vmw_res_streamoutput,
-	.needs_backup = true,
+	.needs_guest_memory = true,
	.may_evict = false,
	.type_name = "DX streamoutput",
-	.backup_placement = &vmw_mob_placement,
+	.domain = VMW_BO_DOMAIN_MOB,
+	.busy_domain = VMW_BO_DOMAIN_MOB,
	.create = vmw_dx_streamoutput_create,
	.destroy = NULL, /* Command buffer managed resource. */
	.bind = vmw_dx_streamoutput_bind,
···
	cmd->header.id = SVGA_3D_CMD_DX_BIND_STREAMOUTPUT;
	cmd->header.size = sizeof(cmd->body);
	cmd->body.soid = so->id;
-	cmd->body.mobid = res->backup->base.resource->start;
-	cmd->body.offsetInBytes = res->backup_offset;
+	cmd->body.mobid = res->guest_memory_bo->tbo.resource->start;
+	cmd->body.offsetInBytes = res->guest_memory_offset;
	cmd->body.sizeInBytes = so->size;
	vmw_cmd_commit(dev_priv, sizeof(*cmd));
···
	struct vmw_fence_obj *fence;
	int ret;

-	if (WARN_ON(res->backup->base.resource->mem_type != VMW_PL_MOB))
+	if (WARN_ON(res->guest_memory_bo->tbo.resource->mem_type != VMW_PL_MOB))
		return -EINVAL;

	mutex_lock(&dev_priv->binding_mutex);
+55 -52
drivers/gpu/drm/vmwgfx/vmwgfx_surface.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 25 25 * 26 26 **************************************************************************/ 27 27 28 - #include <drm/ttm/ttm_placement.h> 29 - 28 + #include "vmwgfx_bo.h" 30 29 #include "vmwgfx_drv.h" 31 30 #include "vmwgfx_resource_priv.h" 32 31 #include "vmwgfx_so.h" 33 32 #include "vmwgfx_binding.h" 34 33 #include "vmw_surface_cache.h" 35 34 #include "device_include/svga3d_surfacedefs.h" 35 + 36 + #include <drm/ttm/ttm_placement.h> 36 37 37 38 #define SVGA3D_FLAGS_64(upper32, lower32) (((uint64_t)upper32 << 32) | lower32) 38 39 #define SVGA3D_FLAGS_UPPER_32(svga3d_flags) (svga3d_flags >> 32) ··· 126 125 127 126 static const struct vmw_res_func vmw_legacy_surface_func = { 128 127 .res_type = vmw_res_surface, 129 - .needs_backup = false, 128 + .needs_guest_memory = false, 130 129 .may_evict = true, 131 130 .prio = 1, 132 131 .dirty_prio = 1, 133 132 .type_name = "legacy surfaces", 134 - .backup_placement = &vmw_srf_placement, 133 + .domain = VMW_BO_DOMAIN_GMR, 134 + .busy_domain = VMW_BO_DOMAIN_GMR | VMW_BO_DOMAIN_VRAM, 135 135 .create = &vmw_legacy_srf_create, 136 136 .destroy = &vmw_legacy_srf_destroy, 137 137 .bind = &vmw_legacy_srf_bind, ··· 141 139 142 140 static const struct vmw_res_func vmw_gb_surface_func = { 143 141 .res_type = vmw_res_surface, 144 - .needs_backup = true, 142 + .needs_guest_memory = true, 145 143 .may_evict = true, 146 144 .prio = 1, 147 145 .dirty_prio = 2, 148 146 .type_name = "guest backed surfaces", 149 - .backup_placement = &vmw_mob_placement, 147 + .domain = VMW_BO_DOMAIN_MOB, 148 + .busy_domain = VMW_BO_DOMAIN_MOB, 150 149 
.create = vmw_gb_surface_create, 151 150 .destroy = vmw_gb_surface_destroy, 152 151 .bind = vmw_gb_surface_bind, ··· 382 379 */ 383 380 384 381 mutex_lock(&dev_priv->cmdbuf_mutex); 385 - dev_priv->used_memory_size -= res->backup_size; 382 + dev_priv->used_memory_size -= res->guest_memory_size; 386 383 mutex_unlock(&dev_priv->cmdbuf_mutex); 387 384 } 388 385 } ··· 412 409 return 0; 413 410 414 411 srf = vmw_res_to_srf(res); 415 - if (unlikely(dev_priv->used_memory_size + res->backup_size >= 412 + if (unlikely(dev_priv->used_memory_size + res->guest_memory_size >= 416 413 dev_priv->memory_size)) 417 414 return -EBUSY; 418 415 ··· 450 447 * Surface memory usage accounting. 451 448 */ 452 449 453 - dev_priv->used_memory_size += res->backup_size; 450 + dev_priv->used_memory_size += res->guest_memory_size; 454 451 return 0; 455 452 456 453 out_no_fifo: ··· 527 524 static int vmw_legacy_srf_bind(struct vmw_resource *res, 528 525 struct ttm_validate_buffer *val_buf) 529 526 { 530 - if (!res->backup_dirty) 527 + if (!res->guest_memory_dirty) 531 528 return 0; 532 529 533 530 return vmw_legacy_srf_dma(res, val_buf, true); ··· 586 583 * Surface memory usage accounting. 587 584 */ 588 585 589 - dev_priv->used_memory_size -= res->backup_size; 586 + dev_priv->used_memory_size -= res->guest_memory_size; 590 587 591 588 /* 592 589 * Release the surface ID. 
··· 686 683 container_of(base, struct vmw_user_surface, prime.base); 687 684 struct vmw_resource *res = &user_srf->srf.res; 688 685 689 - if (res && res->backup) 690 - drm_gem_object_put(&res->backup->base.base); 686 + if (res->guest_memory_bo) 687 + drm_gem_object_put(&res->guest_memory_bo->tbo.base); 691 688 692 689 *p_base = NULL; 693 690 vmw_resource_unreference(&res); ··· 815 812 ++cur_size; 816 813 } 817 814 } 818 - res->backup_size = cur_bo_offset; 815 + res->guest_memory_size = cur_bo_offset; 819 816 if (metadata->scanout && 820 817 metadata->num_sizes == 1 && 821 818 metadata->sizes[0].width == VMW_CURSOR_SNOOP_WIDTH && ··· 859 856 860 857 ret = vmw_gem_object_create_with_handle(dev_priv, 861 858 file_priv, 862 - res->backup_size, 859 + res->guest_memory_size, 863 860 &backup_handle, 864 - &res->backup); 861 + &res->guest_memory_bo); 865 862 if (unlikely(ret != 0)) { 866 863 vmw_resource_unreference(&res); 867 864 goto out_unlock; 868 865 } 869 - vmw_bo_reference(res->backup); 866 + vmw_bo_reference(res->guest_memory_bo); 870 867 /* 871 868 * We don't expose the handle to the userspace and surface 872 869 * already holds a gem reference ··· 875 872 } 876 873 877 874 tmp = vmw_resource_reference(&srf->res); 878 - ret = ttm_prime_object_init(tfile, res->backup_size, &user_srf->prime, 875 + ret = ttm_prime_object_init(tfile, res->guest_memory_size, &user_srf->prime, 879 876 req->shareable, VMW_RES_SURFACE, 880 877 &vmw_user_surface_base_release); 881 878 ··· 1189 1186 1190 1187 BUG_ON(bo->resource->mem_type != VMW_PL_MOB); 1191 1188 1192 - submit_size = sizeof(*cmd1) + (res->backup_dirty ? sizeof(*cmd2) : 0); 1189 + submit_size = sizeof(*cmd1) + (res->guest_memory_dirty ? 
sizeof(*cmd2) : 0); 1193 1190 1194 1191 cmd1 = VMW_CMD_RESERVE(dev_priv, submit_size); 1195 1192 if (unlikely(!cmd1)) ··· 1199 1196 cmd1->header.size = sizeof(cmd1->body); 1200 1197 cmd1->body.sid = res->id; 1201 1198 cmd1->body.mobid = bo->resource->start; 1202 - if (res->backup_dirty) { 1199 + if (res->guest_memory_dirty) { 1203 1200 cmd2 = (void *) &cmd1[1]; 1204 1201 cmd2->header.id = SVGA_3D_CMD_UPDATE_GB_SURFACE; 1205 1202 cmd2->header.size = sizeof(cmd2->body); ··· 1207 1204 } 1208 1205 vmw_cmd_commit(dev_priv, submit_size); 1209 1206 1210 - if (res->backup->dirty && res->backup_dirty) { 1207 + if (res->guest_memory_bo->dirty && res->guest_memory_dirty) { 1211 1208 /* We've just made a full upload. Cear dirty regions. */ 1212 1209 vmw_bo_dirty_clear_res(res); 1213 1210 } 1214 1211 1215 - res->backup_dirty = false; 1212 + res->guest_memory_dirty = false; 1216 1213 1217 1214 return 0; 1218 1215 } ··· 1508 1505 1509 1506 if (req->base.buffer_handle != SVGA3D_INVALID_ID) { 1510 1507 ret = vmw_user_bo_lookup(file_priv, req->base.buffer_handle, 1511 - &res->backup); 1508 + &res->guest_memory_bo); 1512 1509 if (ret == 0) { 1513 - if (res->backup->base.base.size < res->backup_size) { 1510 + if (res->guest_memory_bo->tbo.base.size < res->guest_memory_size) { 1514 1511 VMW_DEBUG_USER("Surface backup buffer too small.\n"); 1515 - vmw_bo_unreference(&res->backup); 1512 + vmw_bo_unreference(&res->guest_memory_bo); 1516 1513 ret = -EINVAL; 1517 1514 goto out_unlock; 1518 1515 } else { ··· 1523 1520 (drm_vmw_surface_flag_create_buffer | 1524 1521 drm_vmw_surface_flag_coherent)) { 1525 1522 ret = vmw_gem_object_create_with_handle(dev_priv, file_priv, 1526 - res->backup_size, 1523 + res->guest_memory_size, 1527 1524 &backup_handle, 1528 - &res->backup); 1525 + &res->guest_memory_bo); 1529 1526 if (ret == 0) 1530 - vmw_bo_reference(res->backup); 1527 + vmw_bo_reference(res->guest_memory_bo); 1531 1528 } 1532 1529 1533 1530 if (unlikely(ret != 0)) { ··· 1536 1533 } 1537 1534 
1538 1535 if (req->base.drm_surface_flags & drm_vmw_surface_flag_coherent) { 1539 - struct vmw_buffer_object *backup = res->backup; 1536 + struct vmw_bo *backup = res->guest_memory_bo; 1540 1537 1541 - ttm_bo_reserve(&backup->base, false, false, NULL); 1538 + ttm_bo_reserve(&backup->tbo, false, false, NULL); 1542 1539 if (!res->func->dirty_alloc) 1543 1540 ret = -EINVAL; 1544 1541 if (!ret) ··· 1547 1544 res->coherent = true; 1548 1545 ret = res->func->dirty_alloc(res); 1549 1546 } 1550 - ttm_bo_unreserve(&backup->base); 1547 + ttm_bo_unreserve(&backup->tbo); 1551 1548 if (ret) { 1552 1549 vmw_resource_unreference(&res); 1553 1550 goto out_unlock; ··· 1556 1553 } 1557 1554 1558 1555 tmp = vmw_resource_reference(res); 1559 - ret = ttm_prime_object_init(tfile, res->backup_size, &user_srf->prime, 1556 + ret = ttm_prime_object_init(tfile, res->guest_memory_size, &user_srf->prime, 1560 1557 req->base.drm_surface_flags & 1561 1558 drm_vmw_surface_flag_shareable, 1562 1559 VMW_RES_SURFACE, ··· 1569 1566 } 1570 1567 1571 1568 rep->handle = user_srf->prime.base.handle; 1572 - rep->backup_size = res->backup_size; 1573 - if (res->backup) { 1569 + rep->backup_size = res->guest_memory_size; 1570 + if (res->guest_memory_bo) { 1574 1571 rep->buffer_map_handle = 1575 - drm_vma_node_offset_addr(&res->backup->base.base.vma_node); 1576 - rep->buffer_size = res->backup->base.base.size; 1572 + drm_vma_node_offset_addr(&res->guest_memory_bo->tbo.base.vma_node); 1573 + rep->buffer_size = res->guest_memory_bo->tbo.base.size; 1577 1574 rep->buffer_handle = backup_handle; 1578 1575 } else { 1579 1576 rep->buffer_map_handle = 0; ··· 1616 1613 1617 1614 user_srf = container_of(base, struct vmw_user_surface, prime.base); 1618 1615 srf = &user_srf->srf; 1619 - if (!srf->res.backup) { 1616 + if (!srf->res.guest_memory_bo) { 1620 1617 DRM_ERROR("Shared GB surface is missing a backup buffer.\n"); 1621 1618 goto out_bad_resource; 1622 1619 } 1623 1620 metadata = &srf->metadata; 1624 1621 1625 1622 
mutex_lock(&dev_priv->cmdbuf_mutex); /* Protect res->backup */ 1626 - ret = drm_gem_handle_create(file_priv, &srf->res.backup->base.base, 1623 + ret = drm_gem_handle_create(file_priv, &srf->res.guest_memory_bo->tbo.base, 1627 1624 &backup_handle); 1628 1625 mutex_unlock(&dev_priv->cmdbuf_mutex); 1629 1626 if (ret != 0) { ··· 1642 1639 rep->creq.base.buffer_handle = backup_handle; 1643 1640 rep->creq.base.base_size = metadata->base_size; 1644 1641 rep->crep.handle = user_srf->prime.base.handle; 1645 - rep->crep.backup_size = srf->res.backup_size; 1642 + rep->crep.backup_size = srf->res.guest_memory_size; 1646 1643 rep->crep.buffer_handle = backup_handle; 1647 1644 rep->crep.buffer_map_handle = 1648 - drm_vma_node_offset_addr(&srf->res.backup->base.base.vma_node); 1649 - rep->crep.buffer_size = srf->res.backup->base.base.size; 1645 + drm_vma_node_offset_addr(&srf->res.guest_memory_bo->tbo.base.vma_node); 1646 + rep->crep.buffer_size = srf->res.guest_memory_bo->tbo.base.size; 1650 1647 1651 1648 rep->creq.version = drm_vmw_gb_surface_v1; 1652 1649 rep->creq.svga3d_flags_upper_32_bits = ··· 1745 1742 { 1746 1743 struct vmw_surface_dirty *dirty = 1747 1744 (struct vmw_surface_dirty *) res->dirty; 1748 - size_t backup_end = res->backup_offset + res->backup_size; 1745 + size_t backup_end = res->guest_memory_offset + res->guest_memory_size; 1749 1746 struct vmw_surface_loc loc1, loc2; 1750 1747 const struct vmw_surface_cache *cache; 1751 1748 1752 - start = max_t(size_t, start, res->backup_offset) - res->backup_offset; 1753 - end = min(end, backup_end) - res->backup_offset; 1749 + start = max_t(size_t, start, res->guest_memory_offset) - res->guest_memory_offset; 1750 + end = min(end, backup_end) - res->guest_memory_offset; 1754 1751 cache = &dirty->cache; 1755 1752 vmw_surface_get_loc(cache, &loc1, start); 1756 1753 vmw_surface_get_loc(cache, &loc2, end - 1); ··· 1797 1794 struct vmw_surface_dirty *dirty = 1798 1795 (struct vmw_surface_dirty *) res->dirty; 1799 1796 const 
struct vmw_surface_cache *cache = &dirty->cache; 1800 - size_t backup_end = res->backup_offset + cache->mip_chain_bytes; 1797 + size_t backup_end = res->guest_memory_offset + cache->mip_chain_bytes; 1801 1798 SVGA3dBox *box = &dirty->boxes[0]; 1802 1799 u32 box_c2; 1803 1800 1804 1801 box->h = box->d = 1; 1805 - start = max_t(size_t, start, res->backup_offset) - res->backup_offset; 1806 - end = min(end, backup_end) - res->backup_offset; 1802 + start = max_t(size_t, start, res->guest_memory_offset) - res->guest_memory_offset; 1803 + end = min(end, backup_end) - res->guest_memory_offset; 1807 1804 box_c2 = box->x + box->w; 1808 1805 if (box->w == 0 || box->x > start) 1809 1806 box->x = start; ··· 1819 1816 { 1820 1817 struct vmw_surface *srf = vmw_res_to_srf(res); 1821 1818 1822 - if (WARN_ON(end <= res->backup_offset || 1823 - start >= res->backup_offset + res->backup_size)) 1819 + if (WARN_ON(end <= res->guest_memory_offset || 1820 + start >= res->guest_memory_offset + res->guest_memory_size)) 1824 1821 return; 1825 1822 1826 1823 if (srf->metadata.format == SVGA3D_BUFFER) ··· 2077 2074 if (metadata->flags & SVGA3D_SURFACE_MULTISAMPLE) 2078 2075 sample_count = metadata->multisample_count; 2079 2076 2080 - srf->res.backup_size = 2077 + srf->res.guest_memory_size = 2081 2078 vmw_surface_get_serialized_size_extended( 2082 2079 metadata->format, 2083 2080 metadata->base_size, ··· 2086 2083 sample_count); 2087 2084 2088 2085 if (metadata->flags & SVGA3D_SURFACE_BIND_STREAM_OUTPUT) 2089 - srf->res.backup_size += sizeof(SVGA3dDXSOState); 2086 + srf->res.guest_memory_size += sizeof(SVGA3dDXSOState); 2090 2087 2091 2088 /* 2092 2089 * Don't set SVGA3D_SURFACE_SCREENTARGET flag for a scanout surface with
+34 -100
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright 2009-2015 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright 2009-2023 VMware, Inc., Palo Alto, CA., USA 5 5 * 6 6 * Permission is hereby granted, free of charge, to any person obtaining a 7 7 * copy of this software and associated documentation files (the ··· 25 25 * 26 26 **************************************************************************/ 27 27 28 + #include "vmwgfx_bo.h" 28 29 #include "vmwgfx_drv.h" 29 30 #include <drm/ttm/ttm_placement.h> 30 31 ··· 50 49 .flags = 0 51 50 }; 52 51 53 - static const struct ttm_place mob_placement_flags = { 54 - .fpfn = 0, 55 - .lpfn = 0, 56 - .mem_type = VMW_PL_MOB, 57 - .flags = 0 58 - }; 59 - 60 52 struct ttm_placement vmw_vram_placement = { 61 53 .num_placement = 1, 62 54 .placement = &vram_placement_flags, ··· 71 77 } 72 78 }; 73 79 74 - static const struct ttm_place gmr_vram_placement_flags[] = { 75 - { 76 - .fpfn = 0, 77 - .lpfn = 0, 78 - .mem_type = VMW_PL_GMR, 79 - .flags = 0 80 - }, { 81 - .fpfn = 0, 82 - .lpfn = 0, 83 - .mem_type = TTM_PL_VRAM, 84 - .flags = 0 85 - } 86 - }; 87 - 88 - static const struct ttm_place vmw_sys_placement_flags = { 89 - .fpfn = 0, 90 - .lpfn = 0, 91 - .mem_type = VMW_PL_SYSTEM, 92 - .flags = 0 93 - }; 94 - 95 80 struct ttm_placement vmw_vram_gmr_placement = { 96 81 .num_placement = 2, 97 82 .placement = vram_gmr_placement_flags, ··· 78 105 .busy_placement = &gmr_placement_flags 79 106 }; 80 107 81 - struct ttm_placement vmw_vram_sys_placement = { 82 - .num_placement = 1, 83 - .placement = &vram_placement_flags, 84 - .num_busy_placement = 1, 85 - .busy_placement = &sys_placement_flags 86 - }; 87 - 88 108 struct ttm_placement vmw_sys_placement = { 89 109 .num_placement = 1, 90 110 .placement = &sys_placement_flags, 91 - .num_busy_placement = 1, 92 - .busy_placement = &sys_placement_flags 93 - }; 94 - 95 - struct ttm_placement 
vmw_pt_sys_placement = { 96 - .num_placement = 1, 97 - .placement = &vmw_sys_placement_flags, 98 - .num_busy_placement = 1, 99 - .busy_placement = &vmw_sys_placement_flags 100 - }; 101 - 102 - static const struct ttm_place nonfixed_placement_flags[] = { 103 - { 104 - .fpfn = 0, 105 - .lpfn = 0, 106 - .mem_type = TTM_PL_SYSTEM, 107 - .flags = 0 108 - }, { 109 - .fpfn = 0, 110 - .lpfn = 0, 111 - .mem_type = VMW_PL_GMR, 112 - .flags = 0 113 - }, { 114 - .fpfn = 0, 115 - .lpfn = 0, 116 - .mem_type = VMW_PL_MOB, 117 - .flags = 0 118 - } 119 - }; 120 - 121 - struct ttm_placement vmw_srf_placement = { 122 - .num_placement = 1, 123 - .num_busy_placement = 2, 124 - .placement = &gmr_placement_flags, 125 - .busy_placement = gmr_vram_placement_flags 126 - }; 127 - 128 - struct ttm_placement vmw_mob_placement = { 129 - .num_placement = 1, 130 - .num_busy_placement = 1, 131 - .placement = &mob_placement_flags, 132 - .busy_placement = &mob_placement_flags 133 - }; 134 - 135 - struct ttm_placement vmw_nonfixed_placement = { 136 - .num_placement = 3, 137 - .placement = nonfixed_placement_flags, 138 111 .num_busy_placement = 1, 139 112 .busy_placement = &sys_placement_flags 140 113 }; ··· 427 508 if (!vmw_be) 428 509 return NULL; 429 510 430 - vmw_be->dev_priv = container_of(bo->bdev, struct vmw_private, bdev); 511 + vmw_be->dev_priv = vmw_priv_from_ttm(bo->bdev); 431 512 vmw_be->mob = NULL; 432 513 433 514 if (vmw_be->dev_priv->map_mode == vmw_dma_alloc_coherent) ··· 453 534 454 535 static int vmw_ttm_io_mem_reserve(struct ttm_device *bdev, struct ttm_resource *mem) 455 536 { 456 - struct vmw_private *dev_priv = container_of(bdev, struct vmw_private, bdev); 537 + struct vmw_private *dev_priv = vmw_priv_from_ttm(bdev); 457 538 458 539 switch (mem->mem_type) { 459 540 case TTM_PL_SYSTEM: ··· 515 596 struct ttm_resource *new_mem, 516 597 struct ttm_place *hop) 517 598 { 518 - struct ttm_resource_manager *old_man = ttm_manager_type(bo->bdev, bo->resource->mem_type); 519 - struct 
ttm_resource_manager *new_man = ttm_manager_type(bo->bdev, new_mem->mem_type); 520 - int ret; 599 + struct ttm_resource_manager *new_man; 600 + struct ttm_resource_manager *old_man = NULL; 601 + int ret = 0; 602 + 603 + new_man = ttm_manager_type(bo->bdev, new_mem->mem_type); 604 + if (bo->resource) 605 + old_man = ttm_manager_type(bo->bdev, bo->resource->mem_type); 521 606 522 607 if (new_man->use_tt && !vmw_memtype_is_system(new_mem->mem_type)) { 523 608 ret = vmw_ttm_bind(bo->bdev, bo->ttm, new_mem); ··· 529 606 return ret; 530 607 } 531 608 609 + if (!bo->resource || (bo->resource->mem_type == TTM_PL_SYSTEM && 610 + bo->ttm == NULL)) { 611 + ttm_bo_move_null(bo, new_mem); 612 + return 0; 613 + } 614 + 532 615 vmw_move_notify(bo, bo->resource, new_mem); 533 616 534 - if (old_man->use_tt && new_man->use_tt) { 617 + if (old_man && old_man->use_tt && new_man->use_tt) { 535 618 if (vmw_memtype_is_system(bo->resource->mem_type)) { 536 619 ttm_bo_move_null(bo, new_mem); 537 620 return 0; ··· 574 645 }; 575 646 576 647 int vmw_bo_create_and_populate(struct vmw_private *dev_priv, 577 - unsigned long bo_size, 578 - struct ttm_buffer_object **bo_p) 648 + size_t bo_size, u32 domain, 649 + struct vmw_bo **bo_p) 579 650 { 580 651 struct ttm_operation_ctx ctx = { 581 652 .interruptible = false, 582 653 .no_wait_gpu = false 583 654 }; 584 - struct ttm_buffer_object *bo; 655 + struct vmw_bo *vbo; 585 656 int ret; 657 + struct vmw_bo_params bo_params = { 658 + .domain = domain, 659 + .busy_domain = domain, 660 + .bo_type = ttm_bo_type_kernel, 661 + .size = bo_size, 662 + .pin = true 663 + }; 586 664 587 - ret = vmw_bo_create_kernel(dev_priv, bo_size, 588 - &vmw_pt_sys_placement, 589 - &bo); 665 + ret = vmw_bo_create(dev_priv, &bo_params, &vbo); 590 666 if (unlikely(ret != 0)) 591 667 return ret; 592 668 593 - ret = ttm_bo_reserve(bo, false, true, NULL); 669 + ret = ttm_bo_reserve(&vbo->tbo, false, true, NULL); 594 670 BUG_ON(ret != 0); 595 - ret = vmw_ttm_populate(bo->bdev, 
bo->ttm, &ctx); 671 + ret = vmw_ttm_populate(vbo->tbo.bdev, vbo->tbo.ttm, &ctx); 596 672 if (likely(ret == 0)) { 597 673 struct vmw_ttm_tt *vmw_tt = 598 - container_of(bo->ttm, struct vmw_ttm_tt, dma_ttm); 674 + container_of(vbo->tbo.ttm, struct vmw_ttm_tt, dma_ttm); 599 675 ret = vmw_ttm_map_dma(vmw_tt); 600 676 } 601 677 602 - ttm_bo_unreserve(bo); 678 + ttm_bo_unreserve(&vbo->tbo); 603 679 604 680 if (likely(ret == 0)) 605 - *bo_p = bo; 681 + *bo_p = vbo; 606 682 return ret; 607 683 }
+4 -2
drivers/gpu/drm/vmwgfx/vmwgfx_va.c
··· 25 25 * 26 26 **************************************************************************/ 27 27 28 + #include "vmwgfx_bo.h" 28 29 #include "vmwgfx_drv.h" 29 30 #include "vmwgfx_resource_priv.h" 30 31 ··· 81 80 static const struct vmw_simple_resource_func va_stream_func = { 82 81 .res_func = { 83 82 .res_type = vmw_res_stream, 84 - .needs_backup = false, 83 + .needs_guest_memory = false, 85 84 .may_evict = false, 86 85 .type_name = "overlay stream", 87 - .backup_placement = NULL, 86 + .domain = VMW_BO_DOMAIN_SYS, 87 + .busy_domain = VMW_BO_DOMAIN_SYS, 88 88 .create = NULL, 89 89 .destroy = NULL, 90 90 .bind = NULL,
+54 -92
drivers/gpu/drm/vmwgfx/vmwgfx_validation.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 OR MIT 2 2 /************************************************************************** 3 3 * 4 - * Copyright © 2018 - 2022 VMware, Inc., Palo Alto, CA., USA 4 + * Copyright © 2018 - 2023 VMware, Inc., Palo Alto, CA., USA 5 5 * All Rights Reserved. 6 6 * 7 7 * Permission is hereby granted, free of charge, to any person obtaining a ··· 25 25 * USE OR OTHER DEALINGS IN THE SOFTWARE. 26 26 * 27 27 **************************************************************************/ 28 - #include <linux/slab.h> 29 - #include "vmwgfx_validation.h" 28 + #include "vmwgfx_bo.h" 30 29 #include "vmwgfx_drv.h" 30 + #include "vmwgfx_resource_priv.h" 31 + #include "vmwgfx_validation.h" 32 + 33 + #include <linux/slab.h> 31 34 32 35 33 36 #define VMWGFX_VALIDATION_MEM_GRAN (16*PAGE_SIZE) ··· 41 38 * @hash: A hash entry used for the duplicate detection hash table. 42 39 * @coherent_count: If switching backup buffers, number of new coherent 43 40 * resources that will have this buffer as a backup buffer. 44 - * @as_mob: Validate as mob. 45 - * @cpu_blit: Validate for cpu blit access. 46 41 * 47 42 * Bit fields are used since these structures are allocated and freed in 48 43 * large numbers and space conservation is desired. ··· 49 48 struct ttm_validate_buffer base; 50 49 struct vmwgfx_hash_item hash; 51 50 unsigned int coherent_count; 52 - u32 as_mob : 1; 53 - u32 cpu_blit : 1; 54 51 }; 55 52 /** 56 53 * struct vmw_validation_res_node - Resource validation metadata. 57 54 * @head: List head for the resource validation list. 58 55 * @hash: A hash entry used for the duplicate detection hash table. 59 56 * @res: Reference counted resource pointer. 60 - * @new_backup: Non ref-counted pointer to new backup buffer to be assigned 61 - * to a resource. 62 - * @new_backup_offset: Offset into the new backup mob for resources that can 63 - * share MOBs. 
57 + * @new_guest_memory_bo: Non ref-counted pointer to new guest memory buffer 58 + * to be assigned to a resource. 59 + * @new_guest_memory_offset: Offset into the new backup mob for resources 60 + * that can share MOBs. 64 61 * @no_buffer_needed: Kernel does not need to allocate a MOB during validation, 65 62 * the command stream provides a mob bind operation. 66 - * @switching_backup: The validation process is switching backup MOB. 63 + * @switching_guest_memory_bo: The validation process is switching backup MOB. 67 64 * @first_usage: True iff the resource has been seen only once in the current 68 65 * validation batch. 69 66 * @reserved: Whether the resource is currently reserved by this process. ··· 76 77 struct list_head head; 77 78 struct vmwgfx_hash_item hash; 78 79 struct vmw_resource *res; 79 - struct vmw_buffer_object *new_backup; 80 - unsigned long new_backup_offset; 80 + struct vmw_bo *new_guest_memory_bo; 81 + unsigned long new_guest_memory_offset; 81 82 u32 no_buffer_needed : 1; 82 - u32 switching_backup : 1; 83 + u32 switching_guest_memory_bo : 1; 83 84 u32 first_usage : 1; 84 85 u32 reserved : 1; 85 86 u32 dirty : 1; ··· 172 173 */ 173 174 static struct vmw_validation_bo_node * 174 175 vmw_validation_find_bo_dup(struct vmw_validation_context *ctx, 175 - struct vmw_buffer_object *vbo) 176 + struct vmw_bo *vbo) 176 177 { 177 178 struct vmw_validation_bo_node *bo_node = NULL; 178 179 ··· 193 194 struct vmw_validation_bo_node *entry; 194 195 195 196 list_for_each_entry(entry, &ctx->bo_list, base.head) { 196 - if (entry->base.bo == &vbo->base) { 197 + if (entry->base.bo == &vbo->tbo) { 197 198 bo_node = entry; 198 199 break; 199 200 } ··· 257 258 * vmw_validation_add_bo - Add a buffer object to the validation context. 258 259 * @ctx: The validation context. 259 260 * @vbo: The buffer object. 260 - * @as_mob: Validate as mob, otherwise suitable for GMR operations. 261 - * @cpu_blit: Validate in a page-mappable location. 
262 261 * 263 262 * Return: Zero on success, negative error code otherwise. 264 263 */ 265 264 int vmw_validation_add_bo(struct vmw_validation_context *ctx, 266 - struct vmw_buffer_object *vbo, 267 - bool as_mob, 268 - bool cpu_blit) 265 + struct vmw_bo *vbo) 269 266 { 270 267 struct vmw_validation_bo_node *bo_node; 271 268 272 269 bo_node = vmw_validation_find_bo_dup(ctx, vbo); 273 - if (bo_node) { 274 - if (bo_node->as_mob != as_mob || 275 - bo_node->cpu_blit != cpu_blit) { 276 - DRM_ERROR("Inconsistent buffer usage.\n"); 277 - return -EINVAL; 278 - } 279 - } else { 270 + if (!bo_node) { 280 271 struct ttm_validate_buffer *val_buf; 281 272 282 273 bo_node = vmw_validation_mem_alloc(ctx, sizeof(*bo_node)); ··· 279 290 bo_node->hash.key); 280 291 } 281 292 val_buf = &bo_node->base; 282 - val_buf->bo = ttm_bo_get_unless_zero(&vbo->base); 293 + val_buf->bo = ttm_bo_get_unless_zero(&vbo->tbo); 283 294 if (!val_buf->bo) 284 295 return -ESRCH; 285 296 val_buf->num_shared = 0; 286 297 list_add_tail(&val_buf->head, &ctx->bo_list); 287 - bo_node->as_mob = as_mob; 288 - bo_node->cpu_blit = cpu_blit; 289 298 } 290 299 291 300 return 0; ··· 393 406 * the resource. 394 407 * @vbo: The new backup buffer object MOB. This buffer object needs to have 395 408 * already been registered with the validation context. 396 - * @backup_offset: Offset into the new backup MOB. 409 + * @guest_memory_offset: Offset into the new backup MOB. 
397 410 */ 398 411 void vmw_validation_res_switch_backup(struct vmw_validation_context *ctx, 399 412 void *val_private, 400 - struct vmw_buffer_object *vbo, 401 - unsigned long backup_offset) 413 + struct vmw_bo *vbo, 414 + unsigned long guest_memory_offset) 402 415 { 403 416 struct vmw_validation_res_node *val; 404 417 405 418 val = container_of(val_private, typeof(*val), private); 406 419 407 - val->switching_backup = 1; 420 + val->switching_guest_memory_bo = 1; 408 421 if (val->first_usage) 409 422 val->no_buffer_needed = 1; 410 423 411 - val->new_backup = vbo; 412 - val->new_backup_offset = backup_offset; 424 + val->new_guest_memory_bo = vbo; 425 + val->new_guest_memory_offset = guest_memory_offset; 413 426 } 414 427 415 428 /** ··· 437 450 goto out_unreserve; 438 451 439 452 val->reserved = 1; 440 - if (res->backup) { 441 - struct vmw_buffer_object *vbo = res->backup; 453 + if (res->guest_memory_bo) { 454 + struct vmw_bo *vbo = res->guest_memory_bo; 442 455 443 - ret = vmw_validation_add_bo 444 - (ctx, vbo, vmw_resource_needs_backup(res), 445 - false); 456 + vmw_bo_placement_set(vbo, 457 + res->func->domain, 458 + res->func->busy_domain); 459 + ret = vmw_validation_add_bo(ctx, vbo); 446 460 if (ret) 447 461 goto out_unreserve; 448 462 } 449 463 450 - if (val->switching_backup && val->new_backup && 464 + if (val->switching_guest_memory_bo && val->new_guest_memory_bo && 451 465 res->coherent) { 452 466 struct vmw_validation_bo_node *bo_node = 453 467 vmw_validation_find_bo_dup(ctx, 454 - val->new_backup); 468 + val->new_guest_memory_bo); 455 469 456 470 if (WARN_ON(!bo_node)) { 457 471 ret = -EINVAL; ··· 495 507 vmw_resource_unreserve(val->res, 496 508 val->dirty_set, 497 509 val->dirty, 498 - val->switching_backup, 499 - val->new_backup, 500 - val->new_backup_offset); 510 + val->switching_guest_memory_bo, 511 + val->new_guest_memory_bo, 512 + val->new_guest_memory_offset); 501 513 } 502 514 } 503 515 ··· 505 517 * vmw_validation_bo_validate_single - Validate a 
single buffer object. 506 518 * @bo: The TTM buffer object base. 507 519 * @interruptible: Whether to perform waits interruptible if possible. 508 - * @validate_as_mob: Whether to validate in MOB memory. 509 520 * 510 521 * Return: Zero on success, -ERESTARTSYS if interrupted. Negative error 511 522 * code on failure. 512 523 */ 513 - int vmw_validation_bo_validate_single(struct ttm_buffer_object *bo, 514 - bool interruptible, 515 - bool validate_as_mob) 524 + static int vmw_validation_bo_validate_single(struct ttm_buffer_object *bo, 525 + bool interruptible) 516 526 { 517 - struct vmw_buffer_object *vbo = 518 - container_of(bo, struct vmw_buffer_object, base); 527 + struct vmw_bo *vbo = to_vmw_bo(&bo->base); 519 528 struct ttm_operation_ctx ctx = { 520 529 .interruptible = interruptible, 521 530 .no_wait_gpu = false ··· 522 537 if (atomic_read(&vbo->cpu_writers)) 523 538 return -EBUSY; 524 539 525 - if (vbo->base.pin_count > 0) 540 + if (vbo->tbo.pin_count > 0) 526 541 return 0; 527 542 528 - if (validate_as_mob) 529 - return ttm_bo_validate(bo, &vmw_mob_placement, &ctx); 530 - 531 - /** 532 - * Put BO in VRAM if there is space, otherwise as a GMR. 533 - * If there is no space in VRAM and GMR ids are all used up, 534 - * start evicting GMRs to make room. If the DMA buffer can't be 535 - * used as a GMR, this will return -ENOMEM. 536 - */ 537 - 538 - ret = ttm_bo_validate(bo, &vmw_vram_gmr_placement, &ctx); 543 + ret = ttm_bo_validate(bo, &vbo->placement, &ctx); 539 544 if (ret == 0 || ret == -ERESTARTSYS) 540 545 return ret; 541 546 542 - /** 543 - * If that failed, try VRAM again, this time evicting 547 + /* 548 + * If that failed, try again, this time evicting 544 549 * previous contents. 
545 550 */ 551 + ctx.allow_res_evict = true; 546 552 547 - ret = ttm_bo_validate(bo, &vmw_vram_placement, &ctx); 548 - return ret; 553 + return ttm_bo_validate(bo, &vbo->placement, &ctx); 549 554 } 550 555 551 556 /** ··· 553 578 int ret; 554 579 555 580 list_for_each_entry(entry, &ctx->bo_list, base.head) { 556 - struct vmw_buffer_object *vbo = 557 - container_of(entry->base.bo, typeof(*vbo), base); 581 + struct vmw_bo *vbo = to_vmw_bo(&entry->base.bo->base); 558 582 559 - if (entry->cpu_blit) { 560 - struct ttm_operation_ctx ttm_ctx = { 561 - .interruptible = intr, 562 - .no_wait_gpu = false 563 - }; 583 + ret = vmw_validation_bo_validate_single(entry->base.bo, intr); 564 584 565 - ret = ttm_bo_validate(entry->base.bo, 566 - &vmw_nonfixed_placement, &ttm_ctx); 567 - } else { 568 - ret = vmw_validation_bo_validate_single 569 - (entry->base.bo, intr, entry->as_mob); 570 - } 571 585 if (ret) 572 586 return ret; 573 587 ··· 603 639 604 640 list_for_each_entry(val, &ctx->resource_list, head) { 605 641 struct vmw_resource *res = val->res; 606 - struct vmw_buffer_object *backup = res->backup; 642 + struct vmw_bo *backup = res->guest_memory_bo; 607 643 608 644 ret = vmw_resource_validate(res, intr, val->dirty_set && 609 645 val->dirty); ··· 614 650 } 615 651 616 652 /* Check if the resource switched backup buffer */ 617 - if (backup && res->backup && (backup != res->backup)) { 618 - struct vmw_buffer_object *vbo = res->backup; 653 + if (backup && res->guest_memory_bo && backup != res->guest_memory_bo) { 654 + struct vmw_bo *vbo = res->guest_memory_bo; 619 655 620 - ret = vmw_validation_add_bo 621 - (ctx, vbo, vmw_resource_needs_backup(res), 622 - false); 656 + vmw_bo_placement_set(vbo, res->func->domain, 657 + res->func->busy_domain); 658 + ret = vmw_validation_add_bo(ctx, vbo); 623 659 if (ret) 624 660 return ret; 625 661 } ··· 853 889 list_for_each_entry(entry, &ctx->bo_list, base.head) { 854 890 if (entry->coherent_count) { 855 891 unsigned int coherent_count = 
entry->coherent_count; 856 - struct vmw_buffer_object *vbo = 857 - container_of(entry->base.bo, typeof(*vbo), 858 - base); 892 + struct vmw_bo *vbo = to_vmw_bo(&entry->base.bo->base); 859 893 860 894 while (coherent_count--) 861 895 vmw_bo_dirty_release(vbo);
+3 -7
drivers/gpu/drm/vmwgfx/vmwgfx_validation.h
··· 73 73 size_t total_mem; 74 74 }; 75 75 76 - struct vmw_buffer_object; 76 + struct vmw_bo; 77 77 struct vmw_resource; 78 78 struct vmw_fence_obj; 79 79 ··· 159 159 } 160 160 161 161 int vmw_validation_add_bo(struct vmw_validation_context *ctx, 162 - struct vmw_buffer_object *vbo, 163 - bool as_mob, bool cpu_blit); 164 - int vmw_validation_bo_validate_single(struct ttm_buffer_object *bo, 165 - bool interruptible, 166 - bool validate_as_mob); 162 + struct vmw_bo *vbo); 167 163 int vmw_validation_bo_validate(struct vmw_validation_context *ctx, bool intr); 168 164 void vmw_validation_unref_lists(struct vmw_validation_context *ctx); 169 165 int vmw_validation_add_resource(struct vmw_validation_context *ctx, ··· 175 179 bool backoff); 176 180 void vmw_validation_res_switch_backup(struct vmw_validation_context *ctx, 177 181 void *val_private, 178 - struct vmw_buffer_object *vbo, 182 + struct vmw_bo *vbo, 179 183 unsigned long backup_offset); 180 184 int vmw_validation_res_validate(struct vmw_validation_context *ctx, bool intr); 181 185
+5 -4
drivers/ps3/ps3av.c
··· 11 11 #include <linux/delay.h> 12 12 #include <linux/notifier.h> 13 13 #include <linux/ioctl.h> 14 - #include <linux/fb.h> 15 14 #include <linux/slab.h> 16 15 17 16 #include <asm/firmware.h> 18 17 #include <asm/ps3av.h> 19 18 #include <asm/ps3.h> 19 + 20 + #include <video/cmdline.h> 20 21 21 22 #include "vuart.h" 22 23 ··· 922 921 923 922 static int ps3av_probe(struct ps3_system_bus_device *dev) 924 923 { 924 + const char *mode_option; 925 925 int res; 926 926 int id; 927 927 ··· 970 968 971 969 ps3av_get_hw_conf(ps3av); 972 970 973 - #ifdef CONFIG_FB 974 - if (fb_mode_option && !strcmp(fb_mode_option, "safe")) 971 + mode_option = video_get_options(NULL); 972 + if (mode_option && !strcmp(mode_option, "safe")) 975 973 safe_mode = 1; 976 - #endif /* CONFIG_FB */ 977 974 id = ps3av_auto_videomode(&ps3av->av_hw_conf); 978 975 if (id < 0) { 979 976 printk(KERN_ERR "%s: invalid id :%d\n", __func__, id);
+3
drivers/video/Kconfig
··· 11 11 Support tracking and hand-over of aperture ownership. Required 12 12 by graphics drivers for firmware-provided framebuffers. 13 13 14 + config VIDEO_CMDLINE 15 + bool 16 + 14 17 config VIDEO_NOMODESET 15 18 bool 16 19 default n
+1
drivers/video/Makefile
··· 2 2 3 3 obj-$(CONFIG_APERTURE_HELPERS) += aperture.o 4 4 obj-$(CONFIG_VGASTATE) += vgastate.o 5 + obj-$(CONFIG_VIDEO_CMDLINE) += cmdline.o 5 6 obj-$(CONFIG_VIDEO_NOMODESET) += nomodeset.o 6 7 obj-$(CONFIG_HDMI) += hdmi.o 7 8
+133
drivers/video/cmdline.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Based on the fbdev code in drivers/video/fbdev/core/fb_cmdline: 4 + * 5 + * Copyright (C) 2014 Intel Corp 6 + * Copyright (C) 1994 Martin Schaller 7 + * 8 + * 2001 - Documented with DocBook 9 + * - Brad Douglas <brad@neruo.com> 10 + * 11 + * This file is subject to the terms and conditions of the GNU General Public 12 + * License. See the file COPYING in the main directory of this archive 13 + * for more details. 14 + * 15 + * Authors: 16 + * Daniel Vetter <daniel.vetter@ffwll.ch> 17 + */ 18 + 19 + #include <linux/fb.h> /* for FB_MAX */ 20 + #include <linux/init.h> 21 + 22 + #include <video/cmdline.h> 23 + 24 + /* 25 + * FB_MAX is the maximum number of framebuffer devices and also 26 + * the maximum number of video= parameters. Although not directly 27 + * related to each other, it makes sense to keep it that way. 28 + */ 29 + static const char *video_options[FB_MAX] __read_mostly; 30 + static const char *video_option __read_mostly; 31 + static int video_of_only __read_mostly; 32 + 33 + static const char *__video_get_option_string(const char *name) 34 + { 35 + const char *options = NULL; 36 + size_t name_len = 0; 37 + 38 + if (name) 39 + name_len = strlen(name); 40 + 41 + if (name_len) { 42 + unsigned int i; 43 + const char *opt; 44 + 45 + for (i = 0; i < ARRAY_SIZE(video_options); ++i) { 46 + if (!video_options[i]) 47 + continue; 48 + if (video_options[i][0] == '\0') 49 + continue; 50 + opt = video_options[i]; 51 + if (!strncmp(opt, name, name_len) && opt[name_len] == ':') 52 + options = opt + name_len + 1; 53 + } 54 + } 55 + 56 + /* No match, return global options */ 57 + if (!options) 58 + options = video_option; 59 + 60 + return options; 61 + } 62 + 63 + /** 64 + * video_get_options - get kernel boot parameters 65 + * @name: name of the output as it would appear in the boot parameter 66 + * line (video=<name>:<options>) 67 + * 68 + * Looks up the video= options for the given name. 
Names are connector 69 + * names with DRM, or driver names with fbdev. If no video option for 70 + * the name has been specified, the function returns the global video= 71 + * setting. A @name of NULL always returns the global video setting. 72 + * 73 + * Returns: 74 + * The string of video options for the given name, or NULL if no video 75 + * option has been specified. 76 + */ 77 + const char *video_get_options(const char *name) 78 + { 79 + return __video_get_option_string(name); 80 + } 81 + EXPORT_SYMBOL(video_get_options); 82 + 83 + bool __video_get_options(const char *name, const char **options, bool is_of) 84 + { 85 + bool enabled = true; 86 + const char *opt = NULL; 87 + 88 + if (video_of_only && !is_of) 89 + enabled = false; 90 + 91 + opt = __video_get_option_string(name); 92 + 93 + if (options) 94 + *options = opt; 95 + 96 + return enabled; 97 + } 98 + EXPORT_SYMBOL(__video_get_options); 99 + 100 + /* 101 + * Process command line options for video adapters. This function is 102 + * a __setup and __init function. It only stores the options. Drivers 103 + * have to call video_get_options() as necessary. 104 + */ 105 + static int __init video_setup(char *options) 106 + { 107 + if (!options || !*options) 108 + goto out; 109 + 110 + if (!strncmp(options, "ofonly", 6)) { 111 + video_of_only = true; 112 + goto out; 113 + } 114 + 115 + if (strchr(options, ':')) { 116 + /* named */ 117 + size_t i; 118 + 119 + for (i = 0; i < ARRAY_SIZE(video_options); i++) { 120 + if (!video_options[i]) { 121 + video_options[i] = options; 122 + break; 123 + } 124 + } 125 + } else { 126 + /* global */ 127 + video_option = options; 128 + } 129 + 130 + out: 131 + return 1; 132 + } 133 + __setup("video=", video_setup);
+1 -4
drivers/video/fbdev/Kconfig
··· 3 3 # fbdev configuration 4 4 # 5 5 6 - config FB_CMDLINE 7 - bool 8 - 9 6 config FB_NOTIFY 10 7 bool 11 8 12 9 menuconfig FB 13 10 tristate "Support for frame buffer devices" 14 - select FB_CMDLINE 15 11 select FB_NOTIFY 12 + select VIDEO_CMDLINE 16 13 help 17 14 The frame buffer device provides an abstraction for the graphics 18 15 hardware. It represents the frame buffer of some video hardware and
+1 -2
drivers/video/fbdev/core/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - obj-$(CONFIG_FB_CMDLINE) += fb_cmdline.o 3 2 obj-$(CONFIG_FB_NOTIFY) += fb_notify.o 4 3 obj-$(CONFIG_FB) += fb.o 5 4 fb-y := fbmem.o fbmon.o fbcmap.o fbsysfs.o \ 6 - modedb.o fbcvt.o 5 + modedb.o fbcvt.o fb_cmdline.o 7 6 fb-$(CONFIG_FB_DEFERRED_IO) += fb_defio.o 8 7 9 8 ifeq ($(CONFIG_FRAMEBUFFER_CONSOLE),y)
+25 -71
drivers/video/fbdev/core/fb_cmdline.c
··· 12 12 * for more details. 13 13 * 14 14 * Authors: 15 - * Vetter <danie.vetter@ffwll.ch> 15 + * Daniel Vetter <daniel.vetter@ffwll.ch> 16 16 */ 17 - #include <linux/init.h> 17 + 18 + #include <linux/export.h> 18 19 #include <linux/fb.h> 20 + #include <linux/string.h> 19 21 20 - static char *video_options[FB_MAX] __read_mostly; 21 - static int ofonly __read_mostly; 22 - 23 - const char *fb_mode_option; 24 - EXPORT_SYMBOL_GPL(fb_mode_option); 22 + #include <video/cmdline.h> 25 23 26 24 /** 27 25 * fb_get_options - get kernel boot parameters ··· 28 30 * (video=<name>:<options>) 29 31 * @option: the option will be stored here 30 32 * 33 + * The caller owns the string returned in @option and is 34 + * responsible for releasing the memory. 35 + * 31 36 * NOTE: Needed to maintain backwards compatibility 32 37 */ 33 38 int fb_get_options(const char *name, char **option) 34 39 { 35 - char *opt, *options = NULL; 36 - int retval = 0; 37 - int name_len = strlen(name), i; 40 + const char *options = NULL; 41 + bool is_of = false; 42 + bool enabled; 38 43 39 - if (name_len && ofonly && strncmp(name, "offb", 4)) 40 - retval = 1; 44 + if (name) 45 + is_of = strncmp(name, "offb", 4); 41 46 42 - if (name_len && !retval) { 43 - for (i = 0; i < FB_MAX; i++) { 44 - if (video_options[i] == NULL) 45 - continue; 46 - if (!video_options[i][0]) 47 - continue; 48 - opt = video_options[i]; 49 - if (!strncmp(name, opt, name_len) && 50 - opt[name_len] == ':') 51 - options = opt + name_len + 1; 52 - } 47 + enabled = __video_get_options(name, &options, is_of); 48 + 49 + if (options) { 50 + if (!strncmp(options, "off", 3)) 51 + enabled = false; 53 52 } 54 - /* No match, pass global option */ 55 - if (!options && option && fb_mode_option) 56 - options = kstrdup(fb_mode_option, GFP_KERNEL); 57 - if (options && !strncmp(options, "off", 3)) 58 - retval = 1; 59 53 60 - if (option) 61 - *option = options; 54 + if (option) { 55 + if (options) 56 + *option = kstrdup(options, GFP_KERNEL); 57 + else 58 + 
*option = NULL; 59 + } 62 60 63 - return retval; 61 + return enabled ? 0 : 1; 64 62 } 65 63 EXPORT_SYMBOL(fb_get_options); 66 - 67 - /** 68 - * video_setup - process command line options 69 - * @options: string of options 70 - * 71 - * Process command line options for frame buffer subsystem. 72 - * 73 - * NOTE: This function is a __setup and __init function. 74 - * It only stores the options. Drivers have to call 75 - * fb_get_options() as necessary. 76 - */ 77 - static int __init video_setup(char *options) 78 - { 79 - if (!options || !*options) 80 - goto out; 81 - 82 - if (!strncmp(options, "ofonly", 6)) { 83 - ofonly = 1; 84 - goto out; 85 - } 86 - 87 - if (strchr(options, ':')) { 88 - /* named */ 89 - int i; 90 - 91 - for (i = 0; i < FB_MAX; i++) { 92 - if (video_options[i] == NULL) { 93 - video_options[i] = options; 94 - break; 95 - } 96 - } 97 - } else { 98 - /* global */ 99 - fb_mode_option = options; 100 - } 101 - 102 - out: 103 - return 1; 104 - } 105 - __setup("video=", video_setup);
+6 -2
drivers/video/fbdev/core/modedb.c
··· 620 620 const struct fb_videomode *default_mode, 621 621 unsigned int default_bpp) 622 622 { 623 + char *mode_option_buf = NULL; 623 624 int i; 624 625 625 626 /* Set up defaults */ ··· 636 635 default_bpp = 8; 637 636 638 637 /* Did the user specify a video mode? */ 639 - if (!mode_option) 640 - mode_option = fb_mode_option; 638 + if (!mode_option) { 639 + fb_get_options(NULL, &mode_option_buf); 640 + mode_option = mode_option_buf; 641 + } 641 642 if (mode_option) { 642 643 const char *name = mode_option; 643 644 unsigned int namelen = strlen(name); ··· 718 715 res_specified = 1; 719 716 } 720 717 done: 718 + kfree(mode_option_buf); 721 719 if (cvt) { 722 720 struct fb_videomode cvt_mode; 723 721 int ret;
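The `mode_option` string consumed by modedb follows the classic fbdev format. Below is a hypothetical userspace sketch of the core `<xres>x<yres>[-<bpp>][@<refresh>]` parsing; the kernel's `fb_find_mode()` additionally understands suffixes such as `M`, `R`, `i`, `m`, `e` and named modes, which this sketch omits.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Parsed fields of a video-mode option string (illustrative struct). */
struct mode_opt {
	unsigned long xres, yres, bpp, refresh;
};

/* Sketch of the core "<xres>x<yres>[-<bpp>][@<refresh>]" format only. */
bool parse_mode_option(const char *s, struct mode_opt *m)
{
	char *end;

	m->bpp = 0;
	m->refresh = 0;

	m->xres = strtoul(s, &end, 10);
	if (end == s || *end != 'x')
		return false;
	s = end + 1;
	m->yres = strtoul(s, &end, 10);
	if (end == s)
		return false;
	if (*end == '-') {		/* optional color depth */
		s = end + 1;
		m->bpp = strtoul(s, &end, 10);
	}
	if (*end == '@') {		/* optional refresh rate */
		s = end + 1;
		m->refresh = strtoul(s, &end, 10);
	}
	return *end == '\0';
}
```

So `video=1024x768-16@60` yields 1024x768 at 16 bpp and 60 Hz, while a bare `1920x1080` leaves bpp and refresh at their defaults.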
+26
include/drm/drm_atomic_helper.h
··· 210 210 plane))) 211 211 212 212 /** 213 + * drm_atomic_plane_enabling - check whether a plane is being enabled 214 + * @old_plane_state: old atomic plane state 215 + * @new_plane_state: new atomic plane state 216 + * 217 + * Checks the atomic state of a plane to determine whether it's being enabled 218 + * or not. This also WARNs if it detects an invalid state (both CRTC and FB 219 + * need to either both be NULL or both be non-NULL). 220 + * 221 + * RETURNS: 222 + * True if the plane is being enabled, false otherwise. 223 + */ 224 + static inline bool drm_atomic_plane_enabling(struct drm_plane_state *old_plane_state, 225 + struct drm_plane_state *new_plane_state) 226 + { 227 + /* 228 + * When enabling a plane, CRTC and FB should always be set together. 229 + * Anything else should be considered a bug in the atomic core, so we 230 + * gently warn about it. 231 + */ 232 + WARN_ON((!new_plane_state->crtc && new_plane_state->fb) || 233 + (new_plane_state->crtc && !new_plane_state->fb)); 234 + 235 + return !old_plane_state->crtc && new_plane_state->crtc; 236 + } 237 + 238 + /** 213 239 * drm_atomic_plane_disabling - check whether a plane is being disabled 214 240 * @old_plane_state: old atomic plane state 215 241 * @new_plane_state: new atomic plane state
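The new helper's decision reduces to a two-field check on the old and new plane state. A minimal userspace illustration, using hypothetical stand-in structs rather than the real DRM types:

```c
#include <stdbool.h>

/* Stand-ins for the two fields drm_atomic_plane_enabling() inspects;
 * hypothetical, not the real DRM state structs. */
struct plane_state {
	void *crtc;
	void *fb;
};

/* A plane is being enabled iff it had no CRTC before and has one now;
 * the kernel helper additionally WARNs when crtc and fb disagree. */
bool plane_enabling(const struct plane_state *old_state,
		    const struct plane_state *new_state)
{
	return !old_state->crtc && new_state->crtc;
}
```

A plane that stays bound, or one being disabled, both return false; only the off-to-on transition counts as enabling.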
+11 -1
include/drm/drm_displayid.h
··· 139 139 u8 mso; 140 140 } __packed; 141 141 142 - /* DisplayID iteration */ 142 + /* 143 + * DisplayID iteration. 144 + * 145 + * Do not access directly, this is private. 146 + */ 143 147 struct displayid_iter { 144 148 const struct drm_edid *drm_edid; 145 149 ··· 151 147 int length; 152 148 int idx; 153 149 int ext_index; 150 + 151 + u8 version; 152 + u8 primary_use; 154 153 }; 155 154 156 155 void displayid_iter_edid_begin(const struct drm_edid *drm_edid, ··· 163 156 #define displayid_iter_for_each(__block, __iter) \ 164 157 while (((__block) = __displayid_iter_next(__iter))) 165 158 void displayid_iter_end(struct displayid_iter *iter); 159 + 160 + u8 displayid_version(const struct displayid_iter *iter); 161 + u8 displayid_primary_use(const struct displayid_iter *iter); 166 162 167 163 #endif
-19
include/drm/drm_drv.h
··· 400 400 int (*dumb_map_offset)(struct drm_file *file_priv, 401 401 struct drm_device *dev, uint32_t handle, 402 402 uint64_t *offset); 403 - /** 404 - * @dumb_destroy: 405 - * 406 - * This destroys the userspace handle for the given dumb backing storage buffer. 407 - * Since buffer objects must be reference counted in the kernel a buffer object 408 - * won't be immediately freed if a framebuffer modeset object still uses it. 409 - * 410 - * Called by the user via ioctl. 411 - * 412 - * The default implementation is drm_gem_dumb_destroy(). GEM based drivers 413 - * must not overwrite this. 414 - * 415 - * Returns: 416 - * 417 - * Zero on success, negative errno on failure. 418 - */ 419 - int (*dumb_destroy)(struct drm_file *file_priv, 420 - struct drm_device *dev, 421 - uint32_t handle); 422 403 423 404 /** @major: driver major number */ 424 405 int major;
+9 -3
include/drm/drm_edid.h
··· 61 61 u8 vfreq_aspect; 62 62 } __attribute__((packed)); 63 63 64 - #define DRM_EDID_PT_HSYNC_POSITIVE (1 << 1) 65 - #define DRM_EDID_PT_VSYNC_POSITIVE (1 << 2) 66 - #define DRM_EDID_PT_SEPARATE_SYNC (3 << 3) 64 + #define DRM_EDID_PT_SYNC_MASK (3 << 3) 65 + # define DRM_EDID_PT_ANALOG_CSYNC (0 << 3) 66 + # define DRM_EDID_PT_BIPOLAR_ANALOG_CSYNC (1 << 3) 67 + # define DRM_EDID_PT_DIGITAL_CSYNC (2 << 3) 68 + # define DRM_EDID_PT_CSYNC_ON_RGB (1 << 1) /* analog csync only */ 69 + # define DRM_EDID_PT_CSYNC_SERRATE (1 << 2) 70 + # define DRM_EDID_PT_DIGITAL_SEPARATE_SYNC (3 << 3) 71 + # define DRM_EDID_PT_HSYNC_POSITIVE (1 << 1) /* also digital csync */ 72 + # define DRM_EDID_PT_VSYNC_POSITIVE (1 << 2) 67 73 #define DRM_EDID_PT_STEREO (1 << 5) 68 74 #define DRM_EDID_PT_INTERLACED (1 << 7) 69 75
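The reworked defines make explicit that bits 1-2 of the detailed-timing flags byte change meaning with the sync type in bits 3-4. A hypothetical userspace decode sketch (simplified copies of the defines above, not the kernel header):

```c
#include <stdbool.h>
#include <stdint.h>

/* Simplified copies of the EDID detailed-timing flag bits above. */
#define PT_SYNC_MASK			(3 << 3)
#define PT_DIGITAL_SEPARATE_SYNC	(3 << 3)
#define PT_HSYNC_POSITIVE		(1 << 1)
#define PT_VSYNC_POSITIVE		(1 << 2)

/* Bits 1-2 only carry hsync/vsync polarity when bits 3-4 select
 * digital separate sync; for analog composite sync the same bits
 * mean "csync on RGB" and "serrate" instead. */
bool pt_hsync_positive(uint8_t flags)
{
	return (flags & PT_SYNC_MASK) == PT_DIGITAL_SEPARATE_SYNC &&
	       (flags & PT_HSYNC_POSITIVE);
}
```

Checking the sync field first is exactly what the old flat `DRM_EDID_PT_HSYNC_POSITIVE` use could get wrong on analog-sync EDIDs.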
+12
include/drm/drm_gem.h
··· 165 165 int (*mmap)(struct drm_gem_object *obj, struct vm_area_struct *vma); 166 166 167 167 /** 168 + * @evict: 169 + * 170 + * Evicts gem object out from memory. Used by the drm_gem_object_evict() 171 + * helper. Returns 0 on success, -errno otherwise. 172 + * 173 + * This callback is optional. 174 + */ 175 + int (*evict)(struct drm_gem_object *obj); 176 + 177 + /** 168 178 * @vm_ops: 169 179 * 170 180 * Virtual memory operations used with mmap. ··· 488 478 void drm_gem_lru_move_tail(struct drm_gem_lru *lru, struct drm_gem_object *obj); 489 479 unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru, unsigned nr_to_scan, 490 480 bool (*shrink)(struct drm_gem_object *obj)); 481 + 482 + int drm_gem_evict(struct drm_gem_object *obj); 491 483 492 484 #endif /* __DRM_GEM_H__ */
+15 -15
include/drm/drm_gem_shmem_helper.h
··· 61 61 struct list_head madv_list; 62 62 63 63 /** 64 - * @pages_mark_dirty_on_put: 65 - * 66 - * Mark pages as dirty when they are put. 67 - */ 68 - unsigned int pages_mark_dirty_on_put : 1; 69 - 70 - /** 71 - * @pages_mark_accessed_on_put: 72 - * 73 - * Mark pages as accessed when they are put. 74 - */ 75 - unsigned int pages_mark_accessed_on_put : 1; 76 - 77 - /** 78 64 * @sgt: Scatter/gather table for imported PRIME buffers 79 65 */ 80 66 struct sg_table *sgt; ··· 84 98 unsigned int vmap_use_count; 85 99 86 100 /** 101 + * @pages_mark_dirty_on_put: 102 + * 103 + * Mark pages as dirty when they are put. 104 + */ 105 + bool pages_mark_dirty_on_put : 1; 106 + 107 + /** 108 + * @pages_mark_accessed_on_put: 109 + * 110 + * Mark pages as accessed when they are put. 111 + */ 112 + bool pages_mark_accessed_on_put : 1; 113 + 114 + /** 87 115 * @map_wc: map object write-combined (instead of using shmem defaults). 88 116 */ 89 - bool map_wc; 117 + bool map_wc : 1; 90 118 }; 91 119 92 120 #define to_drm_gem_shmem_obj(obj) \
+28 -1
include/drm/drm_modeset_helper_vtables.h
··· 1331 1331 */ 1332 1332 void (*atomic_update)(struct drm_plane *plane, 1333 1333 struct drm_atomic_state *state); 1334 + 1335 + /** 1336 + * @atomic_enable: 1337 + * 1338 + * Drivers should use this function to unconditionally enable a plane. 1339 + * This hook is called in-between the &drm_crtc_helper_funcs.atomic_begin 1340 + * and drm_crtc_helper_funcs.atomic_flush callbacks. It is called after 1341 + * @atomic_update, which will be called for all enabled planes. Drivers 1342 + * that use @atomic_enable should set up a plane in @atomic_update and 1343 + * afterwards enable the plane in @atomic_enable. If a plane needs to be 1344 + * enabled before installing the scanout buffer, drivers can still do 1345 + * so in @atomic_update. 1346 + * 1347 + * Note that the power state of the display pipe when this function is 1348 + * called depends upon the exact helpers and calling sequence the driver 1349 + * has picked. See drm_atomic_helper_commit_planes() for a discussion of 1350 + * the tradeoffs and variants of plane commit helpers. 1351 + * 1352 + * This callback is used by the atomic modeset helpers, but it is 1353 + * optional. If implemented, @atomic_enable should be the inverse of 1354 + * @atomic_disable. Drivers that don't want to use either can still 1355 + * implement the complete plane update in @atomic_update. 1356 + */ 1357 + void (*atomic_enable)(struct drm_plane *plane, 1358 + struct drm_atomic_state *state); 1359 + 1334 1360 /** 1335 1361 * @atomic_disable: 1336 1362 * ··· 1377 1351 * the tradeoffs and variants of plane commit helpers. 1378 1352 * 1379 1353 * This callback is used by the atomic modeset helpers and by the 1380 - * transitional plane helpers, but it is optional. 1354 + * transitional plane helpers, but it is optional. It's intended to 1355 + * reverse the effects of @atomic_enable. 1381 1356 */ 1382 1357 void (*atomic_disable)(struct drm_plane *plane, 1383 1358 struct drm_atomic_state *state);
+12
include/drm/drm_of.h
··· 15 15 struct drm_panel; 16 16 struct drm_bridge; 17 17 struct device_node; 18 + struct mipi_dsi_device_info; 19 + struct mipi_dsi_host; 18 20 19 21 /** 20 22 * enum drm_lvds_dual_link_pixels - Pixel order of an LVDS dual-link connection ··· 130 128 return -EINVAL; 131 129 } 132 130 #endif 131 + 132 + #if IS_ENABLED(CONFIG_OF) && IS_ENABLED(CONFIG_DRM_MIPI_DSI) 133 + struct mipi_dsi_host *drm_of_get_dsi_bus(struct device *dev); 134 + #else 135 + static inline struct 136 + mipi_dsi_host *drm_of_get_dsi_bus(struct device *dev) 137 + { 138 + return ERR_PTR(-EINVAL); 139 + } 140 + #endif /* CONFIG_OF && CONFIG_DRM_MIPI_DSI */ 133 141 134 142 /* 135 143 * drm_of_panel_bridge_remove - remove panel bridge
+108
include/drm/drm_suballoc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 OR MIT */ 2 + /* 3 + * Copyright 2011 Red Hat Inc. 4 + * Copyright © 2022 Intel Corporation 5 + */ 6 + #ifndef _DRM_SUBALLOC_H_ 7 + #define _DRM_SUBALLOC_H_ 8 + 9 + #include <drm/drm_mm.h> 10 + 11 + #include <linux/dma-fence.h> 12 + #include <linux/types.h> 13 + 14 + #define DRM_SUBALLOC_MAX_QUEUES 32 15 + /** 16 + * struct drm_suballoc_manager - fenced range allocations 17 + * @wq: Wait queue for sleeping allocations on contention. 18 + * @hole: Pointer to first hole node. 19 + * @olist: List of allocated ranges. 20 + * @flist: Array[fence context hash] of queues of fenced allocated ranges. 21 + * @size: Size of the managed range. 22 + * @align: Default alignment for the managed range. 23 + */ 24 + struct drm_suballoc_manager { 25 + wait_queue_head_t wq; 26 + struct list_head *hole; 27 + struct list_head olist; 28 + struct list_head flist[DRM_SUBALLOC_MAX_QUEUES]; 29 + size_t size; 30 + size_t align; 31 + }; 32 + 33 + /** 34 + * struct drm_suballoc - Sub-allocated range 35 + * @olist: List link for list of allocated ranges. 36 + * @flist: List link for the manager fenced allocated ranges queues. 37 + * @manager: The drm_suballoc_manager. 38 + * @soffset: Start offset. 39 + * @eoffset: End offset + 1 so that @eoffset - @soffset = size. 40 + * @fence: The fence protecting the allocation. 
41 + */ 42 + struct drm_suballoc { 43 + struct list_head olist; 44 + struct list_head flist; 45 + struct drm_suballoc_manager *manager; 46 + size_t soffset; 47 + size_t eoffset; 48 + struct dma_fence *fence; 49 + }; 50 + 51 + void drm_suballoc_manager_init(struct drm_suballoc_manager *sa_manager, 52 + size_t size, size_t align); 53 + 54 + void drm_suballoc_manager_fini(struct drm_suballoc_manager *sa_manager); 55 + 56 + struct drm_suballoc * 57 + drm_suballoc_new(struct drm_suballoc_manager *sa_manager, size_t size, 58 + gfp_t gfp, bool intr, size_t align); 59 + 60 + void drm_suballoc_free(struct drm_suballoc *sa, struct dma_fence *fence); 61 + 62 + /** 63 + * drm_suballoc_soffset - Range start. 64 + * @sa: The struct drm_suballoc. 65 + * 66 + * Return: The start of the allocated range. 67 + */ 68 + static inline size_t drm_suballoc_soffset(struct drm_suballoc *sa) 69 + { 70 + return sa->soffset; 71 + } 72 + 73 + /** 74 + * drm_suballoc_eoffset - Range end. 75 + * @sa: The struct drm_suballoc. 76 + * 77 + * Return: The end of the allocated range + 1. 78 + */ 79 + static inline size_t drm_suballoc_eoffset(struct drm_suballoc *sa) 80 + { 81 + return sa->eoffset; 82 + } 83 + 84 + /** 85 + * drm_suballoc_size - Range size. 86 + * @sa: The struct drm_suballoc. 87 + * 88 + * Return: The size of the allocated range. 89 + */ 90 + static inline size_t drm_suballoc_size(struct drm_suballoc *sa) 91 + { 92 + return sa->eoffset - sa->soffset; 93 + } 94 + 95 + #ifdef CONFIG_DEBUG_FS 96 + void drm_suballoc_dump_debug_info(struct drm_suballoc_manager *sa_manager, 97 + struct drm_printer *p, 98 + unsigned long long suballoc_base); 99 + #else 100 + static inline void 101 + drm_suballoc_dump_debug_info(struct drm_suballoc_manager *sa_manager, 102 + struct drm_printer *p, 103 + unsigned long long suballoc_base) 104 + { } 105 + 106 + #endif 107 + 108 + #endif /* _DRM_SUBALLOC_H_ */
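The half-open offset convention documented above (`eoffset` is end + 1) makes the size accessors plain subtractions. A userspace mirror of that invariant, using a hypothetical struct rather than the kernel one:

```c
#include <stddef.h>

/* Userspace mirror of the drm_suballoc offset convention:
 * eoffset is end + 1, so size falls out as a plain subtraction. */
struct range {
	size_t soffset;	/* start of the allocated range */
	size_t eoffset;	/* end of the range + 1 */
};

size_t range_size(const struct range *r)
{
	return r->eoffset - r->soffset;
}
```

This is the same arithmetic `drm_suballoc_size()` performs, with no off-by-one adjustment needed.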
+6
include/drm/gpu_scheduler.h
··· 48 48 struct drm_gpu_scheduler; 49 49 struct drm_sched_rq; 50 50 51 + struct drm_file; 52 + 51 53 /* These are often used as an (initial) index 52 54 * to an array, and as such should start at 0. 53 55 */ ··· 524 522 void drm_sched_job_arm(struct drm_sched_job *job); 525 523 int drm_sched_job_add_dependency(struct drm_sched_job *job, 526 524 struct dma_fence *fence); 525 + int drm_sched_job_add_syncobj_dependency(struct drm_sched_job *job, 526 + struct drm_file *file, 527 + u32 handle, 528 + u32 point); 527 529 int drm_sched_job_add_resv_dependencies(struct drm_sched_job *job, 528 530 struct dma_resv *resv, 529 531 enum dma_resv_usage usage);
+1 -1
include/drm/ttm/ttm_device.h
··· 141 141 * the graphics address space 142 142 * @ctx: context for this move with parameters 143 143 * @new_mem: the new memory region receiving the buffer 144 - @ @hop: placement for driver directed intermediate hop 144 + * @hop: placement for driver directed intermediate hop 145 145 * 146 146 * Move a buffer between two memory regions. 147 147 * Returns errno -EMULTIHOP if driver requests a hop
-1
include/linux/fb.h
··· 765 765 const struct fb_videomode *mode; 766 766 }; 767 767 768 - extern const char *fb_mode_option; 769 768 extern const struct fb_videomode vesa_modes[]; 770 769 extern const struct dmt_videomode dmt_modes[]; 771 770
+55 -2
include/uapi/drm/drm.h
··· 972 972 #define DRM_IOCTL_GET_STATS DRM_IOR( 0x06, struct drm_stats) 973 973 #define DRM_IOCTL_SET_VERSION DRM_IOWR(0x07, struct drm_set_version) 974 974 #define DRM_IOCTL_MODESET_CTL DRM_IOW(0x08, struct drm_modeset_ctl) 975 + /** 976 + * DRM_IOCTL_GEM_CLOSE - Close a GEM handle. 977 + * 978 + * GEM handles are not reference-counted by the kernel. User-space is 979 + * responsible for managing their lifetime. For example, if user-space imports 980 + * the same memory object twice on the same DRM file description, the same GEM 981 + * handle is returned by both imports, and user-space needs to ensure 982 + * &DRM_IOCTL_GEM_CLOSE is performed once only. The same situation can happen 983 + * when a memory object is allocated, then exported and imported again on the 984 + * same DRM file description. The &DRM_IOCTL_MODE_GETFB2 IOCTL is an exception 985 + * and always returns fresh new GEM handles even if an existing GEM handle 986 + * already refers to the same memory object before the IOCTL is performed. 987 + */ 975 988 #define DRM_IOCTL_GEM_CLOSE DRM_IOW (0x09, struct drm_gem_close) 976 989 #define DRM_IOCTL_GEM_FLINK DRM_IOWR(0x0a, struct drm_gem_flink) 977 990 #define DRM_IOCTL_GEM_OPEN DRM_IOWR(0x0b, struct drm_gem_open) ··· 1025 1012 #define DRM_IOCTL_UNLOCK DRM_IOW( 0x2b, struct drm_lock) 1026 1013 #define DRM_IOCTL_FINISH DRM_IOW( 0x2c, struct drm_lock) 1027 1014 1015 + /** 1016 + * DRM_IOCTL_PRIME_HANDLE_TO_FD - Convert a GEM handle to a DMA-BUF FD. 1017 + * 1018 + * User-space sets &drm_prime_handle.handle with the GEM handle to export and 1019 + * &drm_prime_handle.flags, and gets back a DMA-BUF file descriptor in 1020 + * &drm_prime_handle.fd. 1021 + * 1022 + * The export can fail for any driver-specific reason, e.g. because export is 1023 + * not supported for this specific GEM handle (but might be for others). 1024 + * 1025 + * Support for exporting DMA-BUFs is advertised via &DRM_PRIME_CAP_EXPORT. 
1026 + */ 1028 1027 #define DRM_IOCTL_PRIME_HANDLE_TO_FD DRM_IOWR(0x2d, struct drm_prime_handle) 1028 + /** 1029 + * DRM_IOCTL_PRIME_FD_TO_HANDLE - Convert a DMA-BUF FD to a GEM handle. 1030 + * 1031 + * User-space sets &drm_prime_handle.fd with a DMA-BUF file descriptor to 1032 + * import, and gets back a GEM handle in &drm_prime_handle.handle. 1033 + * &drm_prime_handle.flags is unused. 1034 + * 1035 + * If an existing GEM handle refers to the memory object backing the DMA-BUF, 1036 + * that GEM handle is returned. Therefore user-space which needs to handle 1037 + * arbitrary DMA-BUFs must have a user-space lookup data structure to manually 1038 + * reference-count duplicated GEM handles. For more information see 1039 + * &DRM_IOCTL_GEM_CLOSE. 1040 + * 1041 + * The import can fail for any driver-specific reason, e.g. because import is 1042 + * only supported for DMA-BUFs allocated on this DRM device. 1043 + * 1044 + * Support for importing DMA-BUFs is advertised via &DRM_PRIME_CAP_IMPORT. 1045 + */ 1029 1046 #define DRM_IOCTL_PRIME_FD_TO_HANDLE DRM_IOWR(0x2e, struct drm_prime_handle) 1030 1047 1031 1048 #define DRM_IOCTL_AGP_ACQUIRE DRM_IO( 0x30) ··· 1147 1104 * struct as the output. 1148 1105 * 1149 1106 * If the client is DRM master or has &CAP_SYS_ADMIN, &drm_mode_fb_cmd2.handles 1150 - * will be filled with GEM buffer handles. Planes are valid until one has a 1151 - * zero handle -- this can be used to compute the number of planes. 1107 + * will be filled with GEM buffer handles. Fresh new GEM handles are always 1108 + * returned, even if another GEM handle referring to the same memory object 1109 + * already exists on the DRM file description. The caller is responsible for 1110 + * removing the new handles, e.g. via the &DRM_IOCTL_GEM_CLOSE IOCTL. The same 1111 + * new handle will be returned for multiple planes in case they use the same 1112 + * memory object. 
Planes are valid until one has a zero handle -- this can be 1113 + * used to compute the number of planes. 1152 1114 * 1153 1115 * Otherwise, &drm_mode_fb_cmd2.handles will be zeroed and planes are valid 1154 1116 * until one has a zero &drm_mode_fb_cmd2.pitches. ··· 1161 1113 * If the framebuffer has a format modifier, &DRM_MODE_FB_MODIFIERS will be set 1162 1114 * in &drm_mode_fb_cmd2.flags and &drm_mode_fb_cmd2.modifier will contain the 1163 1115 * modifier. Otherwise, user-space must ignore &drm_mode_fb_cmd2.modifier. 1116 + * 1117 + * To obtain DMA-BUF FDs for each plane without leaking GEM handles, user-space 1118 + * can export each handle via &DRM_IOCTL_PRIME_HANDLE_TO_FD, then immediately 1119 + * close each unique handle via &DRM_IOCTL_GEM_CLOSE, making sure to not 1120 + * double-close handles which are specified multiple times in the array. 1164 1121 */ 1165 1122 #define DRM_IOCTL_MODE_GETFB2 DRM_IOWR(0xCE, struct drm_mode_fb_cmd2) 1166 1123
+20
include/video/cmdline.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 3 + #ifndef VIDEO_CMDLINE_H 4 + #define VIDEO_CMDLINE_H 5 + 6 + #include <linux/types.h> 7 + 8 + #if defined(CONFIG_VIDEO_CMDLINE) 9 + const char *video_get_options(const char *name); 10 + 11 + /* exported for compatibility with fbdev; don't use in new code */ 12 + bool __video_get_options(const char *name, const char **option, bool is_of); 13 + #else 14 + static inline const char *video_get_options(const char *name) 15 + { 16 + return NULL; 17 + } 18 + #endif 19 + 20 + #endif