Linux kernel mirror (for testing): git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'msm-next' of git://people.freedesktop.org/~robclark/linux into drm-next

This time, a bunch of cleanups and refactoring work so that we can get
dt bindings upstream. In general, we keep compatibility with existing
downstream bindings as much as possible, to make backports to device
kernels easier, but now we have cleaner upstream bindings so that we
can start landing gpu/display support in upstream dts files.

Plus shrinker and madvise support, which has been on my todo list for
a long time. And support for arbitrary # of cmd bufs in submit ioctl
(I've got libdrm+mesa userspace for this on branches) to enable some
of the mesa batch/reorder stuff I'm working on. Userspace decides
whether this is supported based on advertised driver version. For the
interesting userspace bits, see:

https://github.com/freedreno/libdrm/commit/1baf03ac6e77049d9c8be1e3d5164283ce82c9db

Plus support for ASoC hdmi audio codec, and a few other random
cleanups.

* 'msm-next' of git://people.freedesktop.org/~robclark/linux: (52 commits)
drm/msm: Delete an unnecessary check before drm_gem_object_unreference()
drm/msm: Delete unnecessary checks before drm_gem_object_unreference_unlocked()
drm/msm/hdmi: Delete an unnecessary check before the function call "kfree"
drm/msm: return -EFAULT instead of bytes remaining
drm/msm/hdmi: use PTR_ERR_OR_ZERO() to simplify the code
drm/msm: add missing of_node_put after calling of_parse_phandle
drm/msm: Replace drm_fb_get_bpp_depth() with drm_format_plane_cpp()
drm/msm/dsi: Fix return value check in msm_dsi_host_set_display_mode()
drm: msm: Add ASoC generic hdmi audio codec support.
drm/msm/rd: add module param to dump all bo's
drm/msm/rd: split out snapshot_buf helper
drm/msm: bump kernel api version
drm/msm: deal with arbitrary # of cmd buffers
drm/msm: wire up vmap shrinker
drm/msm: change gem->vmap() to get/put
drm/msm: shrinker support
drm/msm: add put_iova() helper
drm/msm: add madvise ioctl
drm/msm: use mutex_lock_interruptible for submit ioctl
dt-bindings: msm/mdp: Provide details on MDP interface ports
...

+1962 -743
+77 -40
Documentation/devicetree/bindings/display/msm/dsi.txt
··· 11 11 be 0 or 1, since we have 2 DSI controllers at most for now. 12 12 - interrupts: The interrupt signal from the DSI block. 13 13 - power-domains: Should be <&mmcc MDSS_GDSC>. 14 - - clocks: device clocks 15 - See Documentation/devicetree/bindings/clocks/clock-bindings.txt for details. 14 + - clocks: Phandles to device clocks. 16 15 - clock-names: the following clocks are required: 17 16 * "mdp_core_clk" 18 17 * "iface_clk" ··· 22 23 * "core_clk" 23 24 For DSIv2, we need an additional clock: 24 25 * "src_clk" 26 + - assigned-clocks: Parents of "byte_clk" and "pixel_clk" for the given platform. 27 + - assigned-clock-parents: The Byte clock and Pixel clock PLL outputs provided 28 + by a DSI PHY block. See [1] for details on clock bindings. 25 29 - vdd-supply: phandle to vdd regulator device node 26 30 - vddio-supply: phandle to vdd-io regulator device node 27 31 - vdda-supply: phandle to vdda regulator device node 28 - - qcom,dsi-phy: phandle to DSI PHY device node 32 + - phys: phandle to DSI PHY device node 33 + - phy-names: the name of the corresponding PHY device 29 34 - syscon-sfpb: A phandle to mmss_sfpb syscon node (only for DSIv2) 35 + - ports: Contains 2 DSI controller ports as child nodes. Each port contains 36 + an endpoint subnode as defined in [2] and [3]. 30 37 31 38 Optional properties: 32 39 - panel@0: Node of panel connected to this DSI controller. 33 - See files in Documentation/devicetree/bindings/display/panel/ for each supported 34 - panel. 40 + See files in [4] for each supported panel. 35 41 - qcom,dual-dsi-mode: Boolean value indicating if the DSI controller is 36 42 driving a panel which needs 2 DSI links. 37 43 - qcom,master-dsi: Boolean value indicating if the DSI controller is driving ··· 48 44 - pinctrl-names: the pin control state names; should contain "default" 49 45 - pinctrl-0: the default pinctrl state (active) 50 46 - pinctrl-n: the "sleep" pinctrl state 51 - - port: DSI controller output port, containing one endpoint subnode. 
47 + - ports: contains DSI controller input and output ports as children, each 48 + containing one endpoint subnode. 52 49 53 50 DSI Endpoint properties: 54 - - remote-endpoint: set to phandle of the connected panel's endpoint. 55 - See Documentation/devicetree/bindings/graph.txt for device graph info. 56 - - qcom,data-lane-map: this describes how the logical DSI lanes are mapped 57 - to the physical lanes on the given platform. The value contained in 58 - index n describes what logical data lane is mapped to the physical data 59 - lane n (DATAn, where n lies between 0 and 3). 51 + - remote-endpoint: For port@0, set to phandle of the connected panel/bridge's 52 + input endpoint. For port@1, set to the MDP interface output. See [2] for 53 + device graph info. 54 + 55 + - data-lanes: this describes how the physical DSI data lanes are mapped 56 + to the logical lanes on the given platform. The value contained in 57 + index n describes what physical lane is mapped to the logical lane n 58 + (DATAn, where n lies between 0 and 3). The clock lane position is fixed 59 + and can't be changed. Hence, they aren't a part of the DT bindings. See 60 + [3] for more info on the data-lanes property. 60 61 61 62 For example: 62 63 63 - qcom,data-lane-map = <3 0 1 2>; 64 + data-lanes = <3 0 1 2>; 64 65 65 - The above mapping describes that the logical data lane DATA3 is mapped to 66 - the physical data lane DATA0, logical DATA0 to physical DATA1, logic DATA1 67 - to phys DATA2 and logic DATA2 to phys DATA3. 66 + The above mapping describes that the logical data lane DATA0 is mapped to 67 + the physical data lane DATA3, logical DATA1 to physical DATA0, logic DATA2 68 + to phys DATA1 and logic DATA3 to phys DATA2. 
68 69 69 70 There are only a limited number of physical to logical mappings possible: 70 - 71 - "0123": Logic 0->Phys 0; Logic 1->Phys 1; Logic 2->Phys 2; Logic 3->Phys 3; 72 - "3012": Logic 3->Phys 0; Logic 0->Phys 1; Logic 1->Phys 2; Logic 2->Phys 3; 73 - "2301": Logic 2->Phys 0; Logic 3->Phys 1; Logic 0->Phys 2; Logic 1->Phys 3; 74 - "1230": Logic 1->Phys 0; Logic 2->Phys 1; Logic 3->Phys 2; Logic 0->Phys 3; 75 - "0321": Logic 0->Phys 0; Logic 3->Phys 1; Logic 2->Phys 2; Logic 1->Phys 3; 76 - "1032": Logic 1->Phys 0; Logic 0->Phys 1; Logic 3->Phys 2; Logic 2->Phys 3; 77 - "2103": Logic 2->Phys 0; Logic 1->Phys 1; Logic 0->Phys 2; Logic 3->Phys 3; 78 - "3210": Logic 3->Phys 0; Logic 2->Phys 1; Logic 1->Phys 2; Logic 0->Phys 3; 71 + <0 1 2 3> 72 + <1 2 3 0> 73 + <2 3 0 1> 74 + <3 0 1 2> 75 + <0 3 2 1> 76 + <1 0 3 2> 77 + <2 1 0 3> 78 + <3 2 1 0> 79 79 80 80 DSI PHY: 81 81 Required properties: ··· 94 86 * "dsi_pll" 95 87 * "dsi_phy" 96 88 * "dsi_phy_regulator" 89 + - clock-cells: Must be 1. The DSI PHY block acts as a clock provider, creating 90 + 2 clocks: A byte clock (index 0), and a pixel clock (index 1). 97 91 - qcom,dsi-phy-index: The ID of DSI PHY hardware instance. This should 98 92 be 0 or 1, since we have 2 DSI PHYs at most for now. 99 93 - power-domains: Should be <&mmcc MDSS_GDSC>. 100 - - clocks: device clocks 101 - See Documentation/devicetree/bindings/clocks/clock-bindings.txt for details. 94 + - clocks: Phandles to device clocks. See [1] for details on clock bindings. 102 95 - clock-names: the following clocks are required: 103 96 * "iface_clk" 104 97 - vddio-supply: phandle to vdd-io regulator device node ··· 108 99 - qcom,dsi-phy-regulator-ldo-mode: Boolean value indicating if the LDO mode PHY 109 100 regulator is wanted. 
110 101 102 + [1] Documentation/devicetree/bindings/clocks/clock-bindings.txt 103 + [2] Documentation/devicetree/bindings/graph.txt 104 + [3] Documentation/devicetree/bindings/media/video-interfaces.txt 105 + [4] Documentation/devicetree/bindings/display/panel/ 106 + 111 107 Example: 112 - mdss_dsi0: qcom,mdss_dsi@fd922800 { 108 + dsi0: dsi@fd922800 { 113 109 compatible = "qcom,mdss-dsi-ctrl"; 114 110 qcom,dsi-host-index = <0>; 115 - interrupt-parent = <&mdss_mdp>; 111 + interrupt-parent = <&mdp>; 116 112 interrupts = <4 0>; 117 113 reg-names = "dsi_ctrl"; 118 114 reg = <0xfd922800 0x200>; ··· 138 124 <&mmcc MDSS_AHB_CLK>, 139 125 <&mmcc MDSS_MDP_CLK>, 140 126 <&mmcc MDSS_PCLK0_CLK>; 127 + 128 + assigned-clocks = 129 + <&mmcc BYTE0_CLK_SRC>, 130 + <&mmcc PCLK0_CLK_SRC>; 131 + assigned-clock-parents = 132 + <&dsi_phy0 0>, 133 + <&dsi_phy0 1>; 134 + 141 135 vdda-supply = <&pma8084_l2>; 142 136 vdd-supply = <&pma8084_l22>; 143 137 vddio-supply = <&pma8084_l12>; 144 138 145 - qcom,dsi-phy = <&mdss_dsi_phy0>; 139 + phys = <&dsi_phy0>; 140 + phy-names ="dsi-phy"; 146 141 147 142 qcom,dual-dsi-mode; 148 143 qcom,master-dsi; 149 144 qcom,sync-dual-dsi; 150 145 151 146 pinctrl-names = "default", "sleep"; 152 - pinctrl-0 = <&mdss_dsi_active>; 153 - pinctrl-1 = <&mdss_dsi_suspend>; 147 + pinctrl-0 = <&dsi_active>; 148 + pinctrl-1 = <&dsi_suspend>; 149 + 150 + ports { 151 + #address-cells = <1>; 152 + #size-cells = <0>; 153 + 154 + port@0 { 155 + reg = <0>; 156 + dsi0_in: endpoint { 157 + remote-endpoint = <&mdp_intf1_out>; 158 + }; 159 + }; 160 + 161 + port@1 { 162 + reg = <1>; 163 + dsi0_out: endpoint { 164 + remote-endpoint = <&panel_in>; 165 + data-lanes = <0 1 2 3>; 166 + }; 167 + }; 168 + }; 154 169 155 170 panel: panel@0 { 156 171 compatible = "sharp,lq101r1sx01"; ··· 195 152 }; 196 153 }; 197 154 }; 198 - 199 - port { 200 - dsi0_out: endpoint { 201 - remote-endpoint = <&panel_in>; 202 - lanes = <0 1 2 3>; 203 - }; 204 - }; 205 155 }; 206 156 207 - mdss_dsi_phy0: 
qcom,mdss_dsi_phy@fd922a00 { 157 + dsi_phy0: dsi-phy@fd922a00 { 208 158 compatible = "qcom,dsi-phy-28nm-hpm"; 209 159 qcom,dsi-phy-index = <0>; 210 160 reg-names = ··· 209 173 <0xfd922d80 0x7b>; 210 174 clock-names = "iface_clk"; 211 175 clocks = <&mmcc MDSS_AHB_CLK>; 176 + #clock-cells = <1>; 212 177 vddio-supply = <&pma8084_l12>; 213 178 214 179 qcom,dsi-phy-regulator-ldo-mode;
-59
Documentation/devicetree/bindings/display/msm/mdp.txt
··· 1 - Qualcomm adreno/snapdragon display controller 2 - 3 - Required properties: 4 - - compatible: 5 - * "qcom,mdp4" - mdp4 6 - * "qcom,mdp5" - mdp5 7 - - reg: Physical base address and length of the controller's registers. 8 - - interrupts: The interrupt signal from the display controller. 9 - - connectors: array of phandles for output device(s) 10 - - clocks: device clocks 11 - See ../clocks/clock-bindings.txt for details. 12 - - clock-names: the following clocks are required. 13 - For MDP4: 14 - * "core_clk" 15 - * "iface_clk" 16 - * "lut_clk" 17 - * "src_clk" 18 - * "hdmi_clk" 19 - * "mdp_clk" 20 - For MDP5: 21 - * "bus_clk" 22 - * "iface_clk" 23 - * "core_clk_src" 24 - * "core_clk" 25 - * "lut_clk" (some MDP5 versions may not need this) 26 - * "vsync_clk" 27 - 28 - Optional properties: 29 - - gpus: phandle for gpu device 30 - - clock-names: the following clocks are optional: 31 - * "lut_clk" 32 - 33 - Example: 34 - 35 - / { 36 - ... 37 - 38 - mdp: qcom,mdp@5100000 { 39 - compatible = "qcom,mdp4"; 40 - reg = <0x05100000 0xf0000>; 41 - interrupts = <GIC_SPI 75 0>; 42 - connectors = <&hdmi>; 43 - gpus = <&gpu>; 44 - clock-names = 45 - "core_clk", 46 - "iface_clk", 47 - "lut_clk", 48 - "src_clk", 49 - "hdmi_clk", 50 - "mdp_clk"; 51 - clocks = 52 - <&mmcc MDP_SRC>, 53 - <&mmcc MDP_AHB_CLK>, 54 - <&mmcc MDP_LUT_CLK>, 55 - <&mmcc TV_SRC>, 56 - <&mmcc HDMI_TV_CLK>, 57 - <&mmcc MDP_TV_CLK>; 58 - }; 59 - };
+112
Documentation/devicetree/bindings/display/msm/mdp4.txt
··· 1 + Qualcomm adreno/snapdragon MDP4 display controller 2 + 3 + Description: 4 + 5 + This is the bindings documentation for the MDP4 display controller found in 6 + SoCs like MSM8960, APQ8064 and MSM8660. 7 + 8 + Required properties: 9 + - compatible: 10 + * "qcom,mdp4" - mdp4 11 + - reg: Physical base address and length of the controller's registers. 12 + - interrupts: The interrupt signal from the display controller. 13 + - clocks: device clocks 14 + See ../clocks/clock-bindings.txt for details. 15 + - clock-names: the following clocks are required. 16 + * "core_clk" 17 + * "iface_clk" 18 + * "bus_clk" 19 + * "lut_clk" 20 + * "hdmi_clk" 21 + * "tv_clk" 22 + - ports: contains the list of output ports from MDP. These connect to interfaces 23 + that are external to the MDP hardware, such as HDMI, DSI, EDP etc (LVDS is a 24 + special case since it is a part of the MDP block itself). 25 + 26 + Each output port contains an endpoint that describes how it is connected to an 27 + external interface. These are described by the standard properties documented 28 + here: 29 + Documentation/devicetree/bindings/graph.txt 30 + Documentation/devicetree/bindings/media/video-interfaces.txt 31 + 32 + The output port mappings are: 33 + Port 0 -> LCDC/LVDS 34 + Port 1 -> DSI1 Cmd/Video 35 + Port 2 -> DSI2 Cmd/Video 36 + Port 3 -> DTV 37 + 38 + Optional properties: 39 + - clock-names: the following clocks are optional: 40 + * "lut_clk" 41 + 42 + Example: 43 + 44 + / { 45 + ... 46 + 47 + hdmi: hdmi@4a00000 { 48 + ... 49 + ports { 50 + ... 51 + port@0 { 52 + reg = <0>; 53 + hdmi_in: endpoint { 54 + remote-endpoint = <&mdp_dtv_out>; 55 + }; 56 + }; 57 + ... 58 + }; 59 + ... 60 + }; 61 + 62 + ... 
63 + 64 + mdp: mdp@5100000 { 65 + compatible = "qcom,mdp4"; 66 + reg = <0x05100000 0xf0000>; 67 + interrupts = <GIC_SPI 75 0>; 68 + clock-names = 69 + "core_clk", 70 + "iface_clk", 71 + "lut_clk", 72 + "hdmi_clk", 73 + "tv_clk"; 74 + clocks = 75 + <&mmcc MDP_CLK>, 76 + <&mmcc MDP_AHB_CLK>, 77 + <&mmcc MDP_AXI_CLK>, 78 + <&mmcc MDP_LUT_CLK>, 79 + <&mmcc HDMI_TV_CLK>, 80 + <&mmcc MDP_TV_CLK>; 81 + 82 + ports { 83 + #address-cells = <1>; 84 + #size-cells = <0>; 85 + 86 + port@0 { 87 + reg = <0>; 88 + mdp_lvds_out: endpoint { 89 + }; 90 + }; 91 + 92 + port@1 { 93 + reg = <1>; 94 + mdp_dsi1_out: endpoint { 95 + }; 96 + }; 97 + 98 + port@2 { 99 + reg = <2>; 100 + mdp_dsi2_out: endpoint { 101 + }; 102 + }; 103 + 104 + port@3 { 105 + reg = <3>; 106 + mdp_dtv_out: endpoint { 107 + remote-endpoint = <&hdmi_in>; 108 + }; 109 + }; 110 + }; 111 + }; 112 + };
+160
Documentation/devicetree/bindings/display/msm/mdp5.txt
··· 1 + Qualcomm adreno/snapdragon MDP5 display controller 2 + 3 + Description: 4 + 5 + This is the bindings documentation for the Mobile Display Subsytem(MDSS) that 6 + encapsulates sub-blocks like MDP5, DSI, HDMI, eDP etc, and the MDP5 display 7 + controller found in SoCs like MSM8974, APQ8084, MSM8916, MSM8994 and MSM8996. 8 + 9 + MDSS: 10 + Required properties: 11 + - compatible: 12 + * "qcom,mdss" - MDSS 13 + - reg: Physical base address and length of the controller's registers. 14 + - reg-names: The names of register regions. The following regions are required: 15 + * "mdss_phys" 16 + * "vbif_phys" 17 + - interrupts: The interrupt signal from MDSS. 18 + - interrupt-controller: identifies the node as an interrupt controller. 19 + - #interrupt-cells: specifies the number of cells needed to encode an interrupt 20 + source, should be 1. 21 + - power-domains: a power domain consumer specifier according to 22 + Documentation/devicetree/bindings/power/power_domain.txt 23 + - clocks: device clocks. See ../clocks/clock-bindings.txt for details. 24 + - clock-names: the following clocks are required. 25 + * "iface_clk" 26 + * "bus_clk" 27 + * "vsync_clk" 28 + - #address-cells: number of address cells for the MDSS children. Should be 1. 29 + - #size-cells: Should be 1. 30 + - ranges: parent bus address space is the same as the child bus address space. 31 + 32 + Optional properties: 33 + - clock-names: the following clocks are optional: 34 + * "lut_clk" 35 + 36 + MDP5: 37 + Required properties: 38 + - compatible: 39 + * "qcom,mdp5" - MDP5 40 + - reg: Physical base address and length of the controller's registers. 41 + - reg-names: The names of register regions. The following regions are required: 42 + * "mdp_phys" 43 + - interrupts: Interrupt line from MDP5 to MDSS interrupt controller. 44 + - interrupt-parent: phandle to the MDSS block 45 + through MDP block 46 + - clocks: device clocks. See ../clocks/clock-bindings.txt for details. 
47 + - clock-names: the following clocks are required. 48 + - * "bus_clk" 49 + - * "iface_clk" 50 + - * "core_clk" 51 + - * "vsync_clk" 52 + - ports: contains the list of output ports from MDP. These connect to interfaces 53 + that are external to the MDP hardware, such as HDMI, DSI, EDP etc (LVDS is a 54 + special case since it is a part of the MDP block itself). 55 + 56 + Each output port contains an endpoint that describes how it is connected to an 57 + external interface. These are described by the standard properties documented 58 + here: 59 + Documentation/devicetree/bindings/graph.txt 60 + Documentation/devicetree/bindings/media/video-interfaces.txt 61 + 62 + The availability of output ports can vary across SoC revisions: 63 + 64 + For MSM8974 and APQ8084: 65 + Port 0 -> MDP_INTF0 (eDP) 66 + Port 1 -> MDP_INTF1 (DSI1) 67 + Port 2 -> MDP_INTF2 (DSI2) 68 + Port 3 -> MDP_INTF3 (HDMI) 69 + 70 + For MSM8916: 71 + Port 0 -> MDP_INTF1 (DSI1) 72 + 73 + For MSM8994 and MSM8996: 74 + Port 0 -> MDP_INTF1 (DSI1) 75 + Port 1 -> MDP_INTF2 (DSI2) 76 + Port 2 -> MDP_INTF3 (HDMI) 77 + 78 + Optional properties: 79 + - clock-names: the following clocks are optional: 80 + * "lut_clk" 81 + 82 + Example: 83 + 84 + / { 85 + ... 
86 + 87 + mdss: mdss@1a00000 { 88 + compatible = "qcom,mdss"; 89 + reg = <0x1a00000 0x1000>, 90 + <0x1ac8000 0x3000>; 91 + reg-names = "mdss_phys", "vbif_phys"; 92 + 93 + power-domains = <&gcc MDSS_GDSC>; 94 + 95 + clocks = <&gcc GCC_MDSS_AHB_CLK>, 96 + <&gcc GCC_MDSS_AXI_CLK>, 97 + <&gcc GCC_MDSS_VSYNC_CLK>; 98 + clock-names = "iface_clk", 99 + "bus_clk", 100 + "vsync_clk" 101 + 102 + interrupts = <0 72 0>; 103 + 104 + interrupt-controller; 105 + #interrupt-cells = <1>; 106 + 107 + #address-cells = <1>; 108 + #size-cells = <1>; 109 + ranges; 110 + 111 + mdp: mdp@1a01000 { 112 + compatible = "qcom,mdp5"; 113 + reg = <0x1a01000 0x90000>; 114 + reg-names = "mdp_phys"; 115 + 116 + interrupt-parent = <&mdss>; 117 + interrupts = <0 0>; 118 + 119 + clocks = <&gcc GCC_MDSS_AHB_CLK>, 120 + <&gcc GCC_MDSS_AXI_CLK>, 121 + <&gcc GCC_MDSS_MDP_CLK>, 122 + <&gcc GCC_MDSS_VSYNC_CLK>; 123 + clock-names = "iface_clk", 124 + "bus_clk", 125 + "core_clk", 126 + "vsync_clk"; 127 + 128 + ports { 129 + #address-cells = <1>; 130 + #size-cells = <0>; 131 + 132 + port@0 { 133 + reg = <0>; 134 + mdp5_intf1_out: endpoint { 135 + remote-endpoint = <&dsi0_in>; 136 + }; 137 + }; 138 + }; 139 + }; 140 + 141 + dsi0: dsi@1a98000 { 142 + ... 143 + ports { 144 + ... 145 + port@0 { 146 + reg = <0>; 147 + dsi0_in: endpoint { 148 + remote-endpoint = <&mdp5_intf1_out>; 149 + }; 150 + }; 151 + ... 152 + }; 153 + ... 154 + }; 155 + 156 + dsi_phy0: dsi-phy@1a98300 { 157 + ... 158 + }; 159 + }; 160 + };
+1
drivers/gpu/drm/msm/Kconfig
··· 10 10 select SHMEM 11 11 select TMPFS 12 12 select QCOM_SCM 13 + select SND_SOC_HDMI_CODEC if SND_SOC 13 14 default y 14 15 help 15 16 DRM/KMS driver for MSM/snapdragon.
+2
drivers/gpu/drm/msm/Makefile
··· 35 35 mdp/mdp5/mdp5_crtc.o \ 36 36 mdp/mdp5/mdp5_encoder.o \ 37 37 mdp/mdp5/mdp5_irq.o \ 38 + mdp/mdp5/mdp5_mdss.o \ 38 39 mdp/mdp5/mdp5_kms.o \ 39 40 mdp/mdp5/mdp5_plane.o \ 40 41 mdp/mdp5/mdp5_smp.o \ ··· 46 45 msm_fence.o \ 47 46 msm_gem.o \ 48 47 msm_gem_prime.o \ 48 + msm_gem_shrinker.o \ 49 49 msm_gem_submit.o \ 50 50 msm_gpu.o \ 51 51 msm_iommu.o \
+7 -10
drivers/gpu/drm/msm/adreno/adreno_gpu.c
··· 139 139 struct adreno_gpu *adreno_gpu = to_adreno_gpu(gpu); 140 140 struct msm_drm_private *priv = gpu->dev->dev_private; 141 141 struct msm_ringbuffer *ring = gpu->rb; 142 - unsigned i, ibs = 0; 142 + unsigned i; 143 143 144 144 for (i = 0; i < submit->nr_cmds; i++) { 145 145 switch (submit->cmd[i].type) { ··· 155 155 CP_INDIRECT_BUFFER_PFE : CP_INDIRECT_BUFFER_PFD, 2); 156 156 OUT_RING(ring, submit->cmd[i].iova); 157 157 OUT_RING(ring, submit->cmd[i].size); 158 - ibs++; 158 + OUT_PKT2(ring); 159 159 break; 160 160 } 161 161 } 162 - 163 - /* on a320, at least, we seem to need to pad things out to an 164 - * even number of qwords to avoid issue w/ CP hanging on wrap- 165 - * around: 166 - */ 167 - if (ibs % 2) 168 - OUT_PKT2(ring); 169 162 170 163 OUT_PKT0(ring, REG_AXXX_CP_SCRATCH_REG2, 1); 171 164 OUT_RING(ring, submit->fence->seqno); ··· 400 407 return ret; 401 408 } 402 409 403 - adreno_gpu->memptrs = msm_gem_vaddr(adreno_gpu->memptrs_bo); 410 + adreno_gpu->memptrs = msm_gem_get_vaddr(adreno_gpu->memptrs_bo); 404 411 if (IS_ERR(adreno_gpu->memptrs)) { 405 412 dev_err(drm->dev, "could not vmap memptrs\n"); 406 413 return -ENOMEM; ··· 419 426 void adreno_gpu_cleanup(struct adreno_gpu *gpu) 420 427 { 421 428 if (gpu->memptrs_bo) { 429 + if (gpu->memptrs) 430 + msm_gem_put_vaddr(gpu->memptrs_bo); 431 + 422 432 if (gpu->memptrs_iova) 423 433 msm_gem_put_iova(gpu->memptrs_bo, gpu->base.id); 434 + 424 435 drm_gem_object_unreference_unlocked(gpu->memptrs_bo); 425 436 } 426 437 release_firmware(gpu->pm4);
+1 -1
drivers/gpu/drm/msm/dsi/dsi.c
··· 29 29 struct platform_device *phy_pdev; 30 30 struct device_node *phy_node; 31 31 32 - phy_node = of_parse_phandle(pdev->dev.of_node, "qcom,dsi-phy", 0); 32 + phy_node = of_parse_phandle(pdev->dev.of_node, "phys", 0); 33 33 if (!phy_node) { 34 34 dev_err(&pdev->dev, "cannot find phy device\n"); 35 35 return -ENXIO;
+8
drivers/gpu/drm/msm/dsi/dsi_cfg.c
··· 29 29 }, 30 30 .bus_clk_names = dsi_v2_bus_clk_names, 31 31 .num_bus_clks = ARRAY_SIZE(dsi_v2_bus_clk_names), 32 + .io_start = { 0x4700000, 0x5800000 }, 33 + .num_dsi = 2, 32 34 }; 33 35 34 36 static const char * const dsi_6g_bus_clk_names[] = { ··· 50 48 }, 51 49 .bus_clk_names = dsi_6g_bus_clk_names, 52 50 .num_bus_clks = ARRAY_SIZE(dsi_6g_bus_clk_names), 51 + .io_start = { 0xfd922800, 0xfd922b00 }, 52 + .num_dsi = 2, 53 53 }; 54 54 55 55 static const char * const dsi_8916_bus_clk_names[] = { ··· 70 66 }, 71 67 .bus_clk_names = dsi_8916_bus_clk_names, 72 68 .num_bus_clks = ARRAY_SIZE(dsi_8916_bus_clk_names), 69 + .io_start = { 0x1a98000 }, 70 + .num_dsi = 1, 73 71 }; 74 72 75 73 static const struct msm_dsi_config msm8994_dsi_cfg = { ··· 90 84 }, 91 85 .bus_clk_names = dsi_6g_bus_clk_names, 92 86 .num_bus_clks = ARRAY_SIZE(dsi_6g_bus_clk_names), 87 + .io_start = { 0xfd998000, 0xfd9a0000 }, 88 + .num_dsi = 2, 93 89 }; 94 90 95 91 static const struct msm_dsi_cfg_handler dsi_cfg_handlers[] = {
+2
drivers/gpu/drm/msm/dsi/dsi_cfg.h
··· 34 34 struct dsi_reg_config reg_cfg; 35 35 const char * const *bus_clk_names; 36 36 const int num_bus_clks; 37 + const resource_size_t io_start[DSI_MAX]; 38 + const int num_dsi; 37 39 }; 38 40 39 41 struct msm_dsi_cfg_handler {
+51 -18
drivers/gpu/drm/msm/dsi/dsi_host.c
··· 1066 1066 } 1067 1067 1068 1068 if (cfg_hnd->major == MSM_DSI_VER_MAJOR_6G) { 1069 - data = msm_gem_vaddr(msm_host->tx_gem_obj); 1069 + data = msm_gem_get_vaddr(msm_host->tx_gem_obj); 1070 1070 if (IS_ERR(data)) { 1071 1071 ret = PTR_ERR(data); 1072 1072 pr_err("%s: get vaddr failed, %d\n", __func__, ret); ··· 1093 1093 /* Append 0xff to the end */ 1094 1094 if (packet.size < len) 1095 1095 memset(data + packet.size, 0xff, len - packet.size); 1096 + 1097 + if (cfg_hnd->major == MSM_DSI_VER_MAJOR_6G) 1098 + msm_gem_put_vaddr(msm_host->tx_gem_obj); 1096 1099 1097 1100 return len; 1098 1101 } ··· 1546 1543 u32 lane_map[4]; 1547 1544 int ret, i, len, num_lanes; 1548 1545 1549 - prop = of_find_property(ep, "qcom,data-lane-map", &len); 1546 + prop = of_find_property(ep, "data-lanes", &len); 1550 1547 if (!prop) { 1551 1548 dev_dbg(dev, "failed to find data lane mapping\n"); 1552 1549 return -EINVAL; ··· 1561 1558 1562 1559 msm_host->num_data_lanes = num_lanes; 1563 1560 1564 - ret = of_property_read_u32_array(ep, "qcom,data-lane-map", lane_map, 1561 + ret = of_property_read_u32_array(ep, "data-lanes", lane_map, 1565 1562 num_lanes); 1566 1563 if (ret) { 1567 1564 dev_err(dev, "failed to read lane data\n"); ··· 1576 1573 const int *swap = supported_data_lane_swaps[i]; 1577 1574 int j; 1578 1575 1576 + /* 1577 + * the data-lanes array we get from DT has a logical->physical 1578 + * mapping. The "data lane swap" register field represents 1579 + * supported configurations in a physical->logical mapping. 1580 + * Translate the DT mapping to what we understand and find a 1581 + * configuration that works. 
1582 + */ 1579 1583 for (j = 0; j < num_lanes; j++) { 1580 - if (swap[j] != lane_map[j]) 1584 + if (lane_map[j] < 0 || lane_map[j] > 3) 1585 + dev_err(dev, "bad physical lane entry %u\n", 1586 + lane_map[j]); 1587 + 1588 + if (swap[lane_map[j]] != j) 1581 1589 break; 1582 1590 } 1583 1591 ··· 1608 1594 struct device_node *endpoint, *device_node; 1609 1595 int ret; 1610 1596 1611 - ret = of_property_read_u32(np, "qcom,dsi-host-index", &msm_host->id); 1612 - if (ret) { 1613 - dev_err(dev, "%s: host index not specified, ret=%d\n", 1614 - __func__, ret); 1615 - return ret; 1616 - } 1617 - 1618 1597 /* 1619 - * Get the first endpoint node. In our case, dsi has one output port 1620 - * to which the panel is connected. Don't return an error if a port 1621 - * isn't defined. It's possible that there is nothing connected to 1622 - * the dsi output. 1598 + * Get the endpoint of the output port of the DSI host. In our case, 1599 + * this is mapped to port number with reg = 1. Don't return an error if 1600 + * the remote endpoint isn't defined. It's possible that there is 1601 + * nothing connected to the dsi output. 
1623 1602 */ 1624 - endpoint = of_graph_get_next_endpoint(np, NULL); 1603 + endpoint = of_graph_get_endpoint_by_regs(np, 1, -1); 1625 1604 if (!endpoint) { 1626 1605 dev_dbg(dev, "%s: no endpoint\n", __func__); 1627 1606 return 0; ··· 1655 1648 return ret; 1656 1649 } 1657 1650 1651 + static int dsi_host_get_id(struct msm_dsi_host *msm_host) 1652 + { 1653 + struct platform_device *pdev = msm_host->pdev; 1654 + const struct msm_dsi_config *cfg = msm_host->cfg_hnd->cfg; 1655 + struct resource *res; 1656 + int i; 1657 + 1658 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dsi_ctrl"); 1659 + if (!res) 1660 + return -EINVAL; 1661 + 1662 + for (i = 0; i < cfg->num_dsi; i++) { 1663 + if (cfg->io_start[i] == res->start) 1664 + return i; 1665 + } 1666 + 1667 + return -EINVAL; 1668 + } 1669 + 1658 1670 int msm_dsi_host_init(struct msm_dsi *msm_dsi) 1659 1671 { 1660 1672 struct msm_dsi_host *msm_host = NULL; ··· 1707 1681 if (!msm_host->cfg_hnd) { 1708 1682 ret = -EINVAL; 1709 1683 pr_err("%s: get config failed\n", __func__); 1684 + goto fail; 1685 + } 1686 + 1687 + msm_host->id = dsi_host_get_id(msm_host); 1688 + if (msm_host->id < 0) { 1689 + ret = msm_host->id; 1690 + pr_err("%s: unable to identify DSI host index\n", __func__); 1710 1691 goto fail; 1711 1692 } 1712 1693 ··· 2278 2245 } 2279 2246 2280 2247 msm_host->mode = drm_mode_duplicate(msm_host->dev, mode); 2281 - if (IS_ERR(msm_host->mode)) { 2248 + if (!msm_host->mode) { 2282 2249 pr_err("%s: cannot duplicate mode\n", __func__); 2283 - return PTR_ERR(msm_host->mode); 2250 + return -ENOMEM; 2284 2251 } 2285 2252 2286 2253 return 0;
+28 -4
drivers/gpu/drm/msm/dsi/phy/dsi_phy.c
··· 271 271 {} 272 272 }; 273 273 274 + /* 275 + * Currently, we only support one SoC for each PHY type. When we have multiple 276 + * SoCs for the same PHY, we can try to make the index searching a bit more 277 + * clever. 278 + */ 279 + static int dsi_phy_get_id(struct msm_dsi_phy *phy) 280 + { 281 + struct platform_device *pdev = phy->pdev; 282 + const struct msm_dsi_phy_cfg *cfg = phy->cfg; 283 + struct resource *res; 284 + int i; 285 + 286 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, "dsi_phy"); 287 + if (!res) 288 + return -EINVAL; 289 + 290 + for (i = 0; i < cfg->num_dsi_phy; i++) { 291 + if (cfg->io_start[i] == res->start) 292 + return i; 293 + } 294 + 295 + return -EINVAL; 296 + } 297 + 274 298 static int dsi_phy_driver_probe(struct platform_device *pdev) 275 299 { 276 300 struct msm_dsi_phy *phy; ··· 313 289 phy->cfg = match->data; 314 290 phy->pdev = pdev; 315 291 316 - ret = of_property_read_u32(dev->of_node, 317 - "qcom,dsi-phy-index", &phy->id); 318 - if (ret) { 319 - dev_err(dev, "%s: PHY index not specified, %d\n", 292 + phy->id = dsi_phy_get_id(phy); 293 + if (phy->id < 0) { 294 + ret = phy->id; 295 + dev_err(dev, "%s: couldn't identify PHY index, %d\n", 320 296 __func__, ret); 321 297 goto fail; 322 298 }
+2
drivers/gpu/drm/msm/dsi/phy/dsi_phy.h
··· 38 38 * Fill default H/W values in illegal cells, eg. cell {0, 1}. 39 39 */ 40 40 bool src_pll_truthtable[DSI_MAX][DSI_MAX]; 41 + const resource_size_t io_start[DSI_MAX]; 42 + const int num_dsi_phy; 41 43 }; 42 44 43 45 extern const struct msm_dsi_phy_cfg dsi_phy_28nm_hpm_cfgs;
+3 -1
drivers/gpu/drm/msm/dsi/phy/dsi_phy_20nm.c
··· 145 145 .ops = { 146 146 .enable = dsi_20nm_phy_enable, 147 147 .disable = dsi_20nm_phy_disable, 148 - } 148 + }, 149 + .io_start = { 0xfd998300, 0xfd9a0300 }, 150 + .num_dsi_phy = 2, 149 151 }; 150 152
+4
drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm.c
··· 145 145 .enable = dsi_28nm_phy_enable, 146 146 .disable = dsi_28nm_phy_disable, 147 147 }, 148 + .io_start = { 0xfd922b00, 0xfd923100 }, 149 + .num_dsi_phy = 2, 148 150 }; 149 151 150 152 const struct msm_dsi_phy_cfg dsi_phy_28nm_lp_cfgs = { ··· 162 160 .enable = dsi_28nm_phy_enable, 163 161 .disable = dsi_28nm_phy_disable, 164 162 }, 163 + .io_start = { 0x1a98500 }, 164 + .num_dsi_phy = 1, 165 165 }; 166 166
+2
drivers/gpu/drm/msm/dsi/phy/dsi_phy_28nm_8960.c
··· 192 192 .enable = dsi_28nm_phy_enable, 193 193 .disable = dsi_28nm_phy_disable, 194 194 }, 195 + .io_start = { 0x4700300, 0x5800300 }, 196 + .num_dsi_phy = 2, 195 197 };
+116 -1
drivers/gpu/drm/msm/hdmi/hdmi.c
···
19 19 #include <linux/of_irq.h>
20 20 #include <linux/of_gpio.h>
21 21 
22 + #include <sound/hdmi-codec.h>
22 23 #include "hdmi.h"
23 24 
24 25 void msm_hdmi_set_mode(struct hdmi *hdmi, bool power_on)
···
435 434 return gpio;
436 435 }
437 436 
437 + /*
438 + * HDMI audio codec callbacks
439 + */
440 + static int msm_hdmi_audio_hw_params(struct device *dev, void *data,
441 + struct hdmi_codec_daifmt *daifmt,
442 + struct hdmi_codec_params *params)
443 + {
444 + struct hdmi *hdmi = dev_get_drvdata(dev);
445 + unsigned int chan;
446 + unsigned int channel_allocation = 0;
447 + unsigned int rate;
448 + unsigned int level_shift = 0; /* 0dB */
449 + bool down_mix = false;
450 + 
451 + dev_dbg(dev, "%u Hz, %d bit, %d channels\n", params->sample_rate,
452 + params->sample_width, params->cea.channels);
453 + 
454 + switch (params->cea.channels) {
455 + case 2:
456 + /* FR and FL speakers */
457 + channel_allocation = 0;
458 + chan = MSM_HDMI_AUDIO_CHANNEL_2;
459 + break;
460 + case 4:
461 + /* FC, LFE, FR and FL speakers */
462 + channel_allocation = 0x3;
463 + chan = MSM_HDMI_AUDIO_CHANNEL_4;
464 + break;
465 + case 6:
466 + /* RR, RL, FC, LFE, FR and FL speakers */
467 + channel_allocation = 0x0B;
468 + chan = MSM_HDMI_AUDIO_CHANNEL_6;
469 + break;
470 + case 8:
471 + /* FRC, FLC, RR, RL, FC, LFE, FR and FL speakers */
472 + channel_allocation = 0x1F;
473 + chan = MSM_HDMI_AUDIO_CHANNEL_8;
474 + break;
475 + default:
476 + return -EINVAL;
477 + }
478 + 
479 + switch (params->sample_rate) {
480 + case 32000:
481 + rate = HDMI_SAMPLE_RATE_32KHZ;
482 + break;
483 + case 44100:
484 + rate = HDMI_SAMPLE_RATE_44_1KHZ;
485 + break;
486 + case 48000:
487 + rate = HDMI_SAMPLE_RATE_48KHZ;
488 + break;
489 + case 88200:
490 + rate = HDMI_SAMPLE_RATE_88_2KHZ;
491 + break;
492 + case 96000:
493 + rate = HDMI_SAMPLE_RATE_96KHZ;
494 + break;
495 + case 176400:
496 + rate = HDMI_SAMPLE_RATE_176_4KHZ;
497 + break;
498 + case 192000:
499 + rate = HDMI_SAMPLE_RATE_192KHZ;
500 + break;
501 + default:
502 + dev_err(dev, "rate[%d] not supported!\n",
503 + params->sample_rate);
504 + return -EINVAL;
505 + }
506 + 
507 + msm_hdmi_audio_set_sample_rate(hdmi, rate);
508 + msm_hdmi_audio_info_setup(hdmi, 1, chan, channel_allocation,
509 + level_shift, down_mix);
510 + 
511 + return 0;
512 + }
513 + 
514 + static void msm_hdmi_audio_shutdown(struct device *dev, void *data)
515 + {
516 + struct hdmi *hdmi = dev_get_drvdata(dev);
517 + 
518 + msm_hdmi_audio_info_setup(hdmi, 0, 0, 0, 0, 0);
519 + }
520 + 
521 + static const struct hdmi_codec_ops msm_hdmi_audio_codec_ops = {
522 + .hw_params = msm_hdmi_audio_hw_params,
523 + .audio_shutdown = msm_hdmi_audio_shutdown,
524 + };
525 + 
526 + static struct hdmi_codec_pdata codec_data = {
527 + .ops = &msm_hdmi_audio_codec_ops,
528 + .max_i2s_channels = 8,
529 + .i2s = 1,
530 + };
531 + 
532 + static int msm_hdmi_register_audio_driver(struct hdmi *hdmi, struct device *dev)
533 + {
534 + hdmi->audio_pdev = platform_device_register_data(dev,
535 + HDMI_CODEC_DRV_NAME,
536 + PLATFORM_DEVID_AUTO,
537 + &codec_data,
538 + sizeof(codec_data));
539 + return PTR_ERR_OR_ZERO(hdmi->audio_pdev);
540 + }
541 + 
438 542 static int msm_hdmi_bind(struct device *dev, struct device *master, void *data)
439 543 {
440 544 struct drm_device *drm = dev_get_drvdata(master);
···
547 441 static struct hdmi_platform_config *hdmi_cfg;
548 442 struct hdmi *hdmi;
549 443 struct device_node *of_node = dev->of_node;
550 - int i;
444 + int i, err;
551 445 
552 446 hdmi_cfg = (struct hdmi_platform_config *)
553 447 of_device_get_match_data(dev);
···
574 468 return PTR_ERR(hdmi);
575 469 priv->hdmi = hdmi;
576 470 
471 + err = msm_hdmi_register_audio_driver(hdmi, dev);
472 + if (err) {
473 + DRM_ERROR("Failed to attach an audio codec %d\n", err);
474 + hdmi->audio_pdev = NULL;
475 + }
476 + 
577 477 return 0;
578 478 }
579 479 
···
589 477 struct drm_device *drm = dev_get_drvdata(master);
590 478 struct msm_drm_private *priv = drm->dev_private;
591 479 if (priv->hdmi) {
480 + if (priv->hdmi->audio_pdev)
481 + platform_device_unregister(priv->hdmi->audio_pdev);
482 + 
592 483 msm_hdmi_destroy(priv->hdmi);
593 484 priv->hdmi = NULL;
594 485 }
+14
drivers/gpu/drm/msm/hdmi/hdmi.h
···
50 50 struct hdmi {
51 51 struct drm_device *dev;
52 52 struct platform_device *pdev;
53 + struct platform_device *audio_pdev;
53 54 
54 55 const struct hdmi_platform_config *config;
55 56 
···
211 210 /*
212 211 * audio:
213 212 */
213 + /* Supported HDMI Audio channels and rates */
214 + #define MSM_HDMI_AUDIO_CHANNEL_2 0
215 + #define MSM_HDMI_AUDIO_CHANNEL_4 1
216 + #define MSM_HDMI_AUDIO_CHANNEL_6 2
217 + #define MSM_HDMI_AUDIO_CHANNEL_8 3
218 + 
219 + #define HDMI_SAMPLE_RATE_32KHZ 0
220 + #define HDMI_SAMPLE_RATE_44_1KHZ 1
221 + #define HDMI_SAMPLE_RATE_48KHZ 2
222 + #define HDMI_SAMPLE_RATE_88_2KHZ 3
223 + #define HDMI_SAMPLE_RATE_96KHZ 4
224 + #define HDMI_SAMPLE_RATE_176_4KHZ 5
225 + #define HDMI_SAMPLE_RATE_192KHZ 6
214 226 
215 227 int msm_hdmi_audio_update(struct hdmi *hdmi);
216 228 int msm_hdmi_audio_info_setup(struct hdmi *hdmi, bool enabled,
+1 -1
drivers/gpu/drm/msm/hdmi/hdmi_hdcp.c
···
1430 1430 
1431 1431 void msm_hdmi_hdcp_destroy(struct hdmi *hdmi)
1432 1432 {
1433 - if (hdmi && hdmi->hdcp_ctrl) {
1433 + if (hdmi) {
1434 1434 kfree(hdmi->hdcp_ctrl);
1435 1435 hdmi->hdcp_ctrl = NULL;
1436 1436 }
+12 -19
drivers/gpu/drm/msm/mdp/mdp4/mdp4_dtv_encoder.c
···
23 23 
24 24 struct mdp4_dtv_encoder {
25 25 struct drm_encoder base;
26 - struct clk *src_clk;
27 26 struct clk *hdmi_clk;
28 27 struct clk *mdp_clk;
29 28 unsigned long int pixclock;
···
178 179 */
179 180 mdp_irq_wait(&mdp4_kms->base, MDP4_IRQ_EXTERNAL_VSYNC);
180 181 
181 - clk_disable_unprepare(mdp4_dtv_encoder->src_clk);
182 182 clk_disable_unprepare(mdp4_dtv_encoder->hdmi_clk);
183 183 clk_disable_unprepare(mdp4_dtv_encoder->mdp_clk);
184 184 
···
206 208 
207 209 bs_set(mdp4_dtv_encoder, 1);
208 210 
209 - DBG("setting src_clk=%lu", pc);
211 + DBG("setting mdp_clk=%lu", pc);
210 212 
211 - ret = clk_set_rate(mdp4_dtv_encoder->src_clk, pc);
213 + ret = clk_set_rate(mdp4_dtv_encoder->mdp_clk, pc);
212 214 if (ret)
213 - dev_err(dev->dev, "failed to set src_clk to %lu: %d\n", pc, ret);
214 - clk_prepare_enable(mdp4_dtv_encoder->src_clk);
215 - ret = clk_prepare_enable(mdp4_dtv_encoder->hdmi_clk);
216 - if (ret)
217 - dev_err(dev->dev, "failed to enable hdmi_clk: %d\n", ret);
215 + dev_err(dev->dev, "failed to set mdp_clk to %lu: %d\n",
216 + pc, ret);
217 + 
218 218 ret = clk_prepare_enable(mdp4_dtv_encoder->mdp_clk);
219 219 if (ret)
220 220 dev_err(dev->dev, "failed to enabled mdp_clk: %d\n", ret);
221 + 
222 + ret = clk_prepare_enable(mdp4_dtv_encoder->hdmi_clk);
223 + if (ret)
224 + dev_err(dev->dev, "failed to enable hdmi_clk: %d\n", ret);
221 225 
222 226 mdp4_write(mdp4_kms, REG_MDP4_DTV_ENABLE, 1);
223 227 
···
235 235 long mdp4_dtv_round_pixclk(struct drm_encoder *encoder, unsigned long rate)
236 236 {
237 237 struct mdp4_dtv_encoder *mdp4_dtv_encoder = to_mdp4_dtv_encoder(encoder);
238 - return clk_round_rate(mdp4_dtv_encoder->src_clk, rate);
238 + return clk_round_rate(mdp4_dtv_encoder->mdp_clk, rate);
239 239 }
240 240 
241 241 /* initialize encoder */
···
257 257 DRM_MODE_ENCODER_TMDS, NULL);
258 258 drm_encoder_helper_add(encoder, &mdp4_dtv_encoder_helper_funcs);
259 259 
260 - mdp4_dtv_encoder->src_clk = devm_clk_get(dev->dev, "src_clk");
261 - if (IS_ERR(mdp4_dtv_encoder->src_clk)) {
262 - dev_err(dev->dev, "failed to get src_clk\n");
263 - ret = PTR_ERR(mdp4_dtv_encoder->src_clk);
264 - goto fail;
265 - }
266 - 
267 260 mdp4_dtv_encoder->hdmi_clk = devm_clk_get(dev->dev, "hdmi_clk");
268 261 if (IS_ERR(mdp4_dtv_encoder->hdmi_clk)) {
269 262 dev_err(dev->dev, "failed to get hdmi_clk\n");
···
264 271 goto fail;
265 272 }
266 273 
267 - mdp4_dtv_encoder->mdp_clk = devm_clk_get(dev->dev, "mdp_clk");
274 + mdp4_dtv_encoder->mdp_clk = devm_clk_get(dev->dev, "tv_clk");
268 275 if (IS_ERR(mdp4_dtv_encoder->mdp_clk)) {
269 - dev_err(dev->dev, "failed to get mdp_clk\n");
276 + dev_err(dev->dev, "failed to get tv_clk\n");
270 277 ret = PTR_ERR(mdp4_dtv_encoder->mdp_clk);
271 278 goto fail;
272 279 }
+20 -4
drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.c
···
158 158 static void mdp4_destroy(struct msm_kms *kms)
159 159 {
160 160 struct mdp4_kms *mdp4_kms = to_mdp4_kms(to_mdp_kms(kms));
161 + struct device *dev = mdp4_kms->dev->dev;
161 162 struct msm_mmu *mmu = mdp4_kms->mmu;
162 163 
163 164 if (mmu) {
···
168 167 
169 168 if (mdp4_kms->blank_cursor_iova)
170 169 msm_gem_put_iova(mdp4_kms->blank_cursor_bo, mdp4_kms->id);
171 - if (mdp4_kms->blank_cursor_bo)
172 - drm_gem_object_unreference_unlocked(mdp4_kms->blank_cursor_bo);
170 + drm_gem_object_unreference_unlocked(mdp4_kms->blank_cursor_bo);
171 + 
172 + if (mdp4_kms->rpm_enabled)
173 + pm_runtime_disable(dev);
174 + 
173 175 kfree(mdp4_kms);
174 176 }
175 177 
···
440 436 struct mdp4_kms *mdp4_kms;
441 437 struct msm_kms *kms = NULL;
442 438 struct msm_mmu *mmu;
443 - int ret;
439 + int irq, ret;
444 440 
445 441 mdp4_kms = kzalloc(sizeof(*mdp4_kms), GFP_KERNEL);
446 442 if (!mdp4_kms) {
···
460 456 ret = PTR_ERR(mdp4_kms->mmio);
461 457 goto fail;
462 458 }
459 + 
460 + irq = platform_get_irq(pdev, 0);
461 + if (irq < 0) {
462 + ret = irq;
463 + dev_err(dev->dev, "failed to get irq: %d\n", ret);
464 + goto fail;
465 + }
466 + 
467 + kms->irq = irq;
463 468 
464 469 /* NOTE: driver for this regulator still missing upstream.. use
465 470 * _get_exclusive() and ignore the error if it does not exist
···
505 492 goto fail;
506 493 }
507 494 
508 - mdp4_kms->axi_clk = devm_clk_get(&pdev->dev, "mdp_axi_clk");
495 + mdp4_kms->axi_clk = devm_clk_get(&pdev->dev, "bus_clk");
509 496 if (IS_ERR(mdp4_kms->axi_clk)) {
510 497 dev_err(dev->dev, "failed to get axi_clk\n");
511 498 ret = PTR_ERR(mdp4_kms->axi_clk);
···
514 501 
515 502 clk_set_rate(mdp4_kms->clk, config->max_clk);
516 503 clk_set_rate(mdp4_kms->lut_clk, config->max_clk);
504 + 
505 + pm_runtime_enable(dev->dev);
506 + mdp4_kms->rpm_enabled = true;
517 507 
518 508 /* make sure things are off before attaching iommu (bootloader could
519 509 * have left things on, in which case we'll start getting faults if
+2
drivers/gpu/drm/msm/mdp/mdp4/mdp4_kms.h
···
47 47 
48 48 struct mdp_irq error_handler;
49 49 
50 + bool rpm_enabled;
51 + 
50 52 /* empty/blank cursor bo to use when cursor is "disabled" */
51 53 struct drm_gem_object *blank_cursor_bo;
52 54 uint32_t blank_cursor_iova;
+107 -124
drivers/gpu/drm/msm/mdp/mdp5/mdp5.xml.h
··· 8 8 git clone https://github.com/freedreno/envytools.git 9 9 10 10 The rules-ng-ng source files this header was generated from are: 11 - - /home/robclark/src/freedreno/envytools/rnndb/msm.xml ( 676 bytes, from 2015-05-20 20:03:14) 12 - - /home/robclark/src/freedreno/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-02-10 17:07:21) 13 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp4.xml ( 20915 bytes, from 2015-05-20 20:03:14) 14 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2015-09-18 12:07:28) 15 - - /home/robclark/src/freedreno/envytools/rnndb/mdp/mdp5.xml ( 37194 bytes, from 2015-09-18 12:07:28) 16 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/dsi.xml ( 27887 bytes, from 2015-10-22 16:34:52) 17 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/sfpb.xml ( 602 bytes, from 2015-10-22 16:35:02) 18 - - /home/robclark/src/freedreno/envytools/rnndb/dsi/mmss_cc.xml ( 1686 bytes, from 2015-05-20 20:03:14) 19 - - /home/robclark/src/freedreno/envytools/rnndb/hdmi/qfprom.xml ( 600 bytes, from 2015-05-20 20:03:07) 20 - - /home/robclark/src/freedreno/envytools/rnndb/hdmi/hdmi.xml ( 41472 bytes, from 2016-01-22 18:18:18) 21 - - /home/robclark/src/freedreno/envytools/rnndb/edp/edp.xml ( 10416 bytes, from 2015-05-20 20:03:14) 11 + - /local/mnt/workspace/source_trees/envytools/rnndb/../rnndb/mdp/mdp5.xml ( 36965 bytes, from 2016-05-10 05:06:30) 12 + - /local/mnt/workspace/source_trees/envytools/rnndb/freedreno_copyright.xml ( 1572 bytes, from 2016-05-09 06:32:54) 13 + - /local/mnt/workspace/source_trees/envytools/rnndb/mdp/mdp_common.xml ( 2849 bytes, from 2016-01-07 08:45:55) 22 14 23 - Copyright (C) 2013-2015 by the following authors: 15 + Copyright (C) 2013-2016 by the following authors: 24 16 - Rob Clark <robdclark@gmail.com> (robclark) 25 17 - Ilia Mirkin <imirkin@alum.mit.edu> (imirkin) 26 18 ··· 190 198 #define MDSS_HW_INTR_STATUS_INTR_HDMI 0x00000100 191 199 #define MDSS_HW_INTR_STATUS_INTR_EDP 
0x00001000 192 200 193 - static inline uint32_t __offset_MDP(uint32_t idx) 201 + #define REG_MDP5_HW_VERSION 0x00000000 202 + #define MDP5_HW_VERSION_STEP__MASK 0x0000ffff 203 + #define MDP5_HW_VERSION_STEP__SHIFT 0 204 + static inline uint32_t MDP5_HW_VERSION_STEP(uint32_t val) 194 205 { 195 - switch (idx) { 196 - case 0: return (mdp5_cfg->mdp.base[0]); 197 - default: return INVALID_IDX(idx); 198 - } 206 + return ((val) << MDP5_HW_VERSION_STEP__SHIFT) & MDP5_HW_VERSION_STEP__MASK; 199 207 } 200 - static inline uint32_t REG_MDP5_MDP(uint32_t i0) { return 0x00000000 + __offset_MDP(i0); } 201 - 202 - static inline uint32_t REG_MDP5_MDP_HW_VERSION(uint32_t i0) { return 0x00000000 + __offset_MDP(i0); } 203 - #define MDP5_MDP_HW_VERSION_STEP__MASK 0x0000ffff 204 - #define MDP5_MDP_HW_VERSION_STEP__SHIFT 0 205 - static inline uint32_t MDP5_MDP_HW_VERSION_STEP(uint32_t val) 208 + #define MDP5_HW_VERSION_MINOR__MASK 0x0fff0000 209 + #define MDP5_HW_VERSION_MINOR__SHIFT 16 210 + static inline uint32_t MDP5_HW_VERSION_MINOR(uint32_t val) 206 211 { 207 - return ((val) << MDP5_MDP_HW_VERSION_STEP__SHIFT) & MDP5_MDP_HW_VERSION_STEP__MASK; 212 + return ((val) << MDP5_HW_VERSION_MINOR__SHIFT) & MDP5_HW_VERSION_MINOR__MASK; 208 213 } 209 - #define MDP5_MDP_HW_VERSION_MINOR__MASK 0x0fff0000 210 - #define MDP5_MDP_HW_VERSION_MINOR__SHIFT 16 211 - static inline uint32_t MDP5_MDP_HW_VERSION_MINOR(uint32_t val) 214 + #define MDP5_HW_VERSION_MAJOR__MASK 0xf0000000 215 + #define MDP5_HW_VERSION_MAJOR__SHIFT 28 216 + static inline uint32_t MDP5_HW_VERSION_MAJOR(uint32_t val) 212 217 { 213 - return ((val) << MDP5_MDP_HW_VERSION_MINOR__SHIFT) & MDP5_MDP_HW_VERSION_MINOR__MASK; 214 - } 215 - #define MDP5_MDP_HW_VERSION_MAJOR__MASK 0xf0000000 216 - #define MDP5_MDP_HW_VERSION_MAJOR__SHIFT 28 217 - static inline uint32_t MDP5_MDP_HW_VERSION_MAJOR(uint32_t val) 218 - { 219 - return ((val) << MDP5_MDP_HW_VERSION_MAJOR__SHIFT) & MDP5_MDP_HW_VERSION_MAJOR__MASK; 218 + return ((val) << 
MDP5_HW_VERSION_MAJOR__SHIFT) & MDP5_HW_VERSION_MAJOR__MASK; 220 219 } 221 220 222 - static inline uint32_t REG_MDP5_MDP_DISP_INTF_SEL(uint32_t i0) { return 0x00000004 + __offset_MDP(i0); } 223 - #define MDP5_MDP_DISP_INTF_SEL_INTF0__MASK 0x000000ff 224 - #define MDP5_MDP_DISP_INTF_SEL_INTF0__SHIFT 0 225 - static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF0(enum mdp5_intf_type val) 221 + #define REG_MDP5_DISP_INTF_SEL 0x00000004 222 + #define MDP5_DISP_INTF_SEL_INTF0__MASK 0x000000ff 223 + #define MDP5_DISP_INTF_SEL_INTF0__SHIFT 0 224 + static inline uint32_t MDP5_DISP_INTF_SEL_INTF0(enum mdp5_intf_type val) 226 225 { 227 - return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF0__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF0__MASK; 226 + return ((val) << MDP5_DISP_INTF_SEL_INTF0__SHIFT) & MDP5_DISP_INTF_SEL_INTF0__MASK; 228 227 } 229 - #define MDP5_MDP_DISP_INTF_SEL_INTF1__MASK 0x0000ff00 230 - #define MDP5_MDP_DISP_INTF_SEL_INTF1__SHIFT 8 231 - static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF1(enum mdp5_intf_type val) 228 + #define MDP5_DISP_INTF_SEL_INTF1__MASK 0x0000ff00 229 + #define MDP5_DISP_INTF_SEL_INTF1__SHIFT 8 230 + static inline uint32_t MDP5_DISP_INTF_SEL_INTF1(enum mdp5_intf_type val) 232 231 { 233 - return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF1__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF1__MASK; 232 + return ((val) << MDP5_DISP_INTF_SEL_INTF1__SHIFT) & MDP5_DISP_INTF_SEL_INTF1__MASK; 234 233 } 235 - #define MDP5_MDP_DISP_INTF_SEL_INTF2__MASK 0x00ff0000 236 - #define MDP5_MDP_DISP_INTF_SEL_INTF2__SHIFT 16 237 - static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF2(enum mdp5_intf_type val) 234 + #define MDP5_DISP_INTF_SEL_INTF2__MASK 0x00ff0000 235 + #define MDP5_DISP_INTF_SEL_INTF2__SHIFT 16 236 + static inline uint32_t MDP5_DISP_INTF_SEL_INTF2(enum mdp5_intf_type val) 238 237 { 239 - return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF2__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF2__MASK; 238 + return ((val) << MDP5_DISP_INTF_SEL_INTF2__SHIFT) & MDP5_DISP_INTF_SEL_INTF2__MASK; 240 239 } 
241 - #define MDP5_MDP_DISP_INTF_SEL_INTF3__MASK 0xff000000 242 - #define MDP5_MDP_DISP_INTF_SEL_INTF3__SHIFT 24 243 - static inline uint32_t MDP5_MDP_DISP_INTF_SEL_INTF3(enum mdp5_intf_type val) 240 + #define MDP5_DISP_INTF_SEL_INTF3__MASK 0xff000000 241 + #define MDP5_DISP_INTF_SEL_INTF3__SHIFT 24 242 + static inline uint32_t MDP5_DISP_INTF_SEL_INTF3(enum mdp5_intf_type val) 244 243 { 245 - return ((val) << MDP5_MDP_DISP_INTF_SEL_INTF3__SHIFT) & MDP5_MDP_DISP_INTF_SEL_INTF3__MASK; 246 - } 247 - 248 - static inline uint32_t REG_MDP5_MDP_INTR_EN(uint32_t i0) { return 0x00000010 + __offset_MDP(i0); } 249 - 250 - static inline uint32_t REG_MDP5_MDP_INTR_STATUS(uint32_t i0) { return 0x00000014 + __offset_MDP(i0); } 251 - 252 - static inline uint32_t REG_MDP5_MDP_INTR_CLEAR(uint32_t i0) { return 0x00000018 + __offset_MDP(i0); } 253 - 254 - static inline uint32_t REG_MDP5_MDP_HIST_INTR_EN(uint32_t i0) { return 0x0000001c + __offset_MDP(i0); } 255 - 256 - static inline uint32_t REG_MDP5_MDP_HIST_INTR_STATUS(uint32_t i0) { return 0x00000020 + __offset_MDP(i0); } 257 - 258 - static inline uint32_t REG_MDP5_MDP_HIST_INTR_CLEAR(uint32_t i0) { return 0x00000024 + __offset_MDP(i0); } 259 - 260 - static inline uint32_t REG_MDP5_MDP_SPARE_0(uint32_t i0) { return 0x00000028 + __offset_MDP(i0); } 261 - #define MDP5_MDP_SPARE_0_SPLIT_DPL_SINGLE_FLUSH_EN 0x00000001 262 - 263 - static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_W(uint32_t i0, uint32_t i1) { return 0x00000080 + __offset_MDP(i0) + 0x4*i1; } 264 - 265 - static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_W_REG(uint32_t i0, uint32_t i1) { return 0x00000080 + __offset_MDP(i0) + 0x4*i1; } 266 - #define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__MASK 0x000000ff 267 - #define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__SHIFT 0 268 - static inline uint32_t MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0(uint32_t val) 269 - { 270 - return ((val) << MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__SHIFT) & MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__MASK; 271 - } 272 - #define 
MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__MASK 0x0000ff00 273 - #define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__SHIFT 8 274 - static inline uint32_t MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1(uint32_t val) 275 - { 276 - return ((val) << MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__SHIFT) & MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__MASK; 277 - } 278 - #define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__MASK 0x00ff0000 279 - #define MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__SHIFT 16 280 - static inline uint32_t MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2(uint32_t val) 281 - { 282 - return ((val) << MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__SHIFT) & MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__MASK; 244 + return ((val) << MDP5_DISP_INTF_SEL_INTF3__SHIFT) & MDP5_DISP_INTF_SEL_INTF3__MASK; 283 245 } 284 246 285 - static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_R(uint32_t i0, uint32_t i1) { return 0x00000130 + __offset_MDP(i0) + 0x4*i1; } 247 + #define REG_MDP5_INTR_EN 0x00000010 286 248 287 - static inline uint32_t REG_MDP5_MDP_SMP_ALLOC_R_REG(uint32_t i0, uint32_t i1) { return 0x00000130 + __offset_MDP(i0) + 0x4*i1; } 288 - #define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__MASK 0x000000ff 289 - #define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__SHIFT 0 290 - static inline uint32_t MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0(uint32_t val) 249 + #define REG_MDP5_INTR_STATUS 0x00000014 250 + 251 + #define REG_MDP5_INTR_CLEAR 0x00000018 252 + 253 + #define REG_MDP5_HIST_INTR_EN 0x0000001c 254 + 255 + #define REG_MDP5_HIST_INTR_STATUS 0x00000020 256 + 257 + #define REG_MDP5_HIST_INTR_CLEAR 0x00000024 258 + 259 + #define REG_MDP5_SPARE_0 0x00000028 260 + #define MDP5_SPARE_0_SPLIT_DPL_SINGLE_FLUSH_EN 0x00000001 261 + 262 + static inline uint32_t REG_MDP5_SMP_ALLOC_W(uint32_t i0) { return 0x00000080 + 0x4*i0; } 263 + 264 + static inline uint32_t REG_MDP5_SMP_ALLOC_W_REG(uint32_t i0) { return 0x00000080 + 0x4*i0; } 265 + #define MDP5_SMP_ALLOC_W_REG_CLIENT0__MASK 0x000000ff 266 + #define MDP5_SMP_ALLOC_W_REG_CLIENT0__SHIFT 0 267 + static inline uint32_t MDP5_SMP_ALLOC_W_REG_CLIENT0(uint32_t val) 
291 268 { 292 - return ((val) << MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__SHIFT) & MDP5_MDP_SMP_ALLOC_R_REG_CLIENT0__MASK; 269 + return ((val) << MDP5_SMP_ALLOC_W_REG_CLIENT0__SHIFT) & MDP5_SMP_ALLOC_W_REG_CLIENT0__MASK; 293 270 } 294 - #define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__MASK 0x0000ff00 295 - #define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__SHIFT 8 296 - static inline uint32_t MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1(uint32_t val) 271 + #define MDP5_SMP_ALLOC_W_REG_CLIENT1__MASK 0x0000ff00 272 + #define MDP5_SMP_ALLOC_W_REG_CLIENT1__SHIFT 8 273 + static inline uint32_t MDP5_SMP_ALLOC_W_REG_CLIENT1(uint32_t val) 297 274 { 298 - return ((val) << MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__SHIFT) & MDP5_MDP_SMP_ALLOC_R_REG_CLIENT1__MASK; 275 + return ((val) << MDP5_SMP_ALLOC_W_REG_CLIENT1__SHIFT) & MDP5_SMP_ALLOC_W_REG_CLIENT1__MASK; 299 276 } 300 - #define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__MASK 0x00ff0000 301 - #define MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__SHIFT 16 302 - static inline uint32_t MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2(uint32_t val) 277 + #define MDP5_SMP_ALLOC_W_REG_CLIENT2__MASK 0x00ff0000 278 + #define MDP5_SMP_ALLOC_W_REG_CLIENT2__SHIFT 16 279 + static inline uint32_t MDP5_SMP_ALLOC_W_REG_CLIENT2(uint32_t val) 303 280 { 304 - return ((val) << MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__SHIFT) & MDP5_MDP_SMP_ALLOC_R_REG_CLIENT2__MASK; 281 + return ((val) << MDP5_SMP_ALLOC_W_REG_CLIENT2__SHIFT) & MDP5_SMP_ALLOC_W_REG_CLIENT2__MASK; 282 + } 283 + 284 + static inline uint32_t REG_MDP5_SMP_ALLOC_R(uint32_t i0) { return 0x00000130 + 0x4*i0; } 285 + 286 + static inline uint32_t REG_MDP5_SMP_ALLOC_R_REG(uint32_t i0) { return 0x00000130 + 0x4*i0; } 287 + #define MDP5_SMP_ALLOC_R_REG_CLIENT0__MASK 0x000000ff 288 + #define MDP5_SMP_ALLOC_R_REG_CLIENT0__SHIFT 0 289 + static inline uint32_t MDP5_SMP_ALLOC_R_REG_CLIENT0(uint32_t val) 290 + { 291 + return ((val) << MDP5_SMP_ALLOC_R_REG_CLIENT0__SHIFT) & MDP5_SMP_ALLOC_R_REG_CLIENT0__MASK; 292 + } 293 + #define MDP5_SMP_ALLOC_R_REG_CLIENT1__MASK 0x0000ff00 294 
+ #define MDP5_SMP_ALLOC_R_REG_CLIENT1__SHIFT 8 295 + static inline uint32_t MDP5_SMP_ALLOC_R_REG_CLIENT1(uint32_t val) 296 + { 297 + return ((val) << MDP5_SMP_ALLOC_R_REG_CLIENT1__SHIFT) & MDP5_SMP_ALLOC_R_REG_CLIENT1__MASK; 298 + } 299 + #define MDP5_SMP_ALLOC_R_REG_CLIENT2__MASK 0x00ff0000 300 + #define MDP5_SMP_ALLOC_R_REG_CLIENT2__SHIFT 16 301 + static inline uint32_t MDP5_SMP_ALLOC_R_REG_CLIENT2(uint32_t val) 302 + { 303 + return ((val) << MDP5_SMP_ALLOC_R_REG_CLIENT2__SHIFT) & MDP5_SMP_ALLOC_R_REG_CLIENT2__MASK; 305 304 } 306 305 307 306 static inline uint32_t __offset_IGC(enum mdp5_igc_type idx) ··· 305 322 default: return INVALID_IDX(idx); 306 323 } 307 324 } 308 - static inline uint32_t REG_MDP5_MDP_IGC(uint32_t i0, enum mdp5_igc_type i1) { return 0x00000000 + __offset_MDP(i0) + __offset_IGC(i1); } 325 + static inline uint32_t REG_MDP5_IGC(enum mdp5_igc_type i0) { return 0x00000000 + __offset_IGC(i0); } 309 326 310 - static inline uint32_t REG_MDP5_MDP_IGC_LUT(uint32_t i0, enum mdp5_igc_type i1, uint32_t i2) { return 0x00000000 + __offset_MDP(i0) + __offset_IGC(i1) + 0x4*i2; } 327 + static inline uint32_t REG_MDP5_IGC_LUT(enum mdp5_igc_type i0, uint32_t i1) { return 0x00000000 + __offset_IGC(i0) + 0x4*i1; } 311 328 312 - static inline uint32_t REG_MDP5_MDP_IGC_LUT_REG(uint32_t i0, enum mdp5_igc_type i1, uint32_t i2) { return 0x00000000 + __offset_MDP(i0) + __offset_IGC(i1) + 0x4*i2; } 313 - #define MDP5_MDP_IGC_LUT_REG_VAL__MASK 0x00000fff 314 - #define MDP5_MDP_IGC_LUT_REG_VAL__SHIFT 0 315 - static inline uint32_t MDP5_MDP_IGC_LUT_REG_VAL(uint32_t val) 329 + static inline uint32_t REG_MDP5_IGC_LUT_REG(enum mdp5_igc_type i0, uint32_t i1) { return 0x00000000 + __offset_IGC(i0) + 0x4*i1; } 330 + #define MDP5_IGC_LUT_REG_VAL__MASK 0x00000fff 331 + #define MDP5_IGC_LUT_REG_VAL__SHIFT 0 332 + static inline uint32_t MDP5_IGC_LUT_REG_VAL(uint32_t val) 316 333 { 317 - return ((val) << MDP5_MDP_IGC_LUT_REG_VAL__SHIFT) & MDP5_MDP_IGC_LUT_REG_VAL__MASK; 334 + return 
((val) << MDP5_IGC_LUT_REG_VAL__SHIFT) & MDP5_IGC_LUT_REG_VAL__MASK; 318 335 } 319 - #define MDP5_MDP_IGC_LUT_REG_INDEX_UPDATE 0x02000000 320 - #define MDP5_MDP_IGC_LUT_REG_DISABLE_PIPE_0 0x10000000 321 - #define MDP5_MDP_IGC_LUT_REG_DISABLE_PIPE_1 0x20000000 322 - #define MDP5_MDP_IGC_LUT_REG_DISABLE_PIPE_2 0x40000000 336 + #define MDP5_IGC_LUT_REG_INDEX_UPDATE 0x02000000 337 + #define MDP5_IGC_LUT_REG_DISABLE_PIPE_0 0x10000000 338 + #define MDP5_IGC_LUT_REG_DISABLE_PIPE_1 0x20000000 339 + #define MDP5_IGC_LUT_REG_DISABLE_PIPE_2 0x40000000 323 340 324 - static inline uint32_t REG_MDP5_MDP_SPLIT_DPL_EN(uint32_t i0) { return 0x000002f4 + __offset_MDP(i0); } 341 + #define REG_MDP5_SPLIT_DPL_EN 0x000002f4 325 342 326 - static inline uint32_t REG_MDP5_MDP_SPLIT_DPL_UPPER(uint32_t i0) { return 0x000002f8 + __offset_MDP(i0); } 327 - #define MDP5_MDP_SPLIT_DPL_UPPER_SMART_PANEL 0x00000002 328 - #define MDP5_MDP_SPLIT_DPL_UPPER_SMART_PANEL_FREE_RUN 0x00000004 329 - #define MDP5_MDP_SPLIT_DPL_UPPER_INTF1_SW_TRG_MUX 0x00000010 330 - #define MDP5_MDP_SPLIT_DPL_UPPER_INTF2_SW_TRG_MUX 0x00000100 343 + #define REG_MDP5_SPLIT_DPL_UPPER 0x000002f8 344 + #define MDP5_SPLIT_DPL_UPPER_SMART_PANEL 0x00000002 345 + #define MDP5_SPLIT_DPL_UPPER_SMART_PANEL_FREE_RUN 0x00000004 346 + #define MDP5_SPLIT_DPL_UPPER_INTF1_SW_TRG_MUX 0x00000010 347 + #define MDP5_SPLIT_DPL_UPPER_INTF2_SW_TRG_MUX 0x00000100 331 348 332 - static inline uint32_t REG_MDP5_MDP_SPLIT_DPL_LOWER(uint32_t i0) { return 0x000003f0 + __offset_MDP(i0); } 333 - #define MDP5_MDP_SPLIT_DPL_LOWER_SMART_PANEL 0x00000002 334 - #define MDP5_MDP_SPLIT_DPL_LOWER_SMART_PANEL_FREE_RUN 0x00000004 335 - #define MDP5_MDP_SPLIT_DPL_LOWER_INTF1_TG_SYNC 0x00000010 336 - #define MDP5_MDP_SPLIT_DPL_LOWER_INTF2_TG_SYNC 0x00000100 349 + #define REG_MDP5_SPLIT_DPL_LOWER 0x000003f0 350 + #define MDP5_SPLIT_DPL_LOWER_SMART_PANEL 0x00000002 351 + #define MDP5_SPLIT_DPL_LOWER_SMART_PANEL_FREE_RUN 0x00000004 352 + #define 
MDP5_SPLIT_DPL_LOWER_INTF1_TG_SYNC 0x00000010 353 + #define MDP5_SPLIT_DPL_LOWER_INTF2_TG_SYNC 0x00000100 337 354 338 355 static inline uint32_t __offset_CTL(uint32_t idx) 339 356 {
+54 -59
drivers/gpu/drm/msm/mdp/mdp5/mdp5_cfg.c
··· 26 26 .name = "msm8x74v1", 27 27 .mdp = { 28 28 .count = 1, 29 - .base = { 0x00100 }, 30 29 .caps = MDP_CAP_SMP | 31 30 0, 32 31 }, ··· 40 41 }, 41 42 .ctl = { 42 43 .count = 5, 43 - .base = { 0x00600, 0x00700, 0x00800, 0x00900, 0x00a00 }, 44 + .base = { 0x00500, 0x00600, 0x00700, 0x00800, 0x00900 }, 44 45 .flush_hw_mask = 0x0003ffff, 45 46 }, 46 47 .pipe_vig = { 47 48 .count = 3, 48 - .base = { 0x01200, 0x01600, 0x01a00 }, 49 + .base = { 0x01100, 0x01500, 0x01900 }, 49 50 .caps = MDP_PIPE_CAP_HFLIP | 50 51 MDP_PIPE_CAP_VFLIP | 51 52 MDP_PIPE_CAP_SCALE | ··· 54 55 }, 55 56 .pipe_rgb = { 56 57 .count = 3, 57 - .base = { 0x01e00, 0x02200, 0x02600 }, 58 + .base = { 0x01d00, 0x02100, 0x02500 }, 58 59 .caps = MDP_PIPE_CAP_HFLIP | 59 60 MDP_PIPE_CAP_VFLIP | 60 61 MDP_PIPE_CAP_SCALE | ··· 62 63 }, 63 64 .pipe_dma = { 64 65 .count = 2, 65 - .base = { 0x02a00, 0x02e00 }, 66 + .base = { 0x02900, 0x02d00 }, 66 67 .caps = MDP_PIPE_CAP_HFLIP | 67 68 MDP_PIPE_CAP_VFLIP | 68 69 0, 69 70 }, 70 71 .lm = { 71 72 .count = 5, 72 - .base = { 0x03200, 0x03600, 0x03a00, 0x03e00, 0x04200 }, 73 + .base = { 0x03100, 0x03500, 0x03900, 0x03d00, 0x04100 }, 73 74 .nb_stages = 5, 74 75 }, 75 76 .dspp = { 76 77 .count = 3, 77 - .base = { 0x04600, 0x04a00, 0x04e00 }, 78 + .base = { 0x04500, 0x04900, 0x04d00 }, 78 79 }, 79 80 .pp = { 80 81 .count = 3, 81 - .base = { 0x21b00, 0x21c00, 0x21d00 }, 82 + .base = { 0x21a00, 0x21b00, 0x21c00 }, 82 83 }, 83 84 .intf = { 84 - .base = { 0x21100, 0x21300, 0x21500, 0x21700 }, 85 + .base = { 0x21000, 0x21200, 0x21400, 0x21600 }, 85 86 .connect = { 86 87 [0] = INTF_eDP, 87 88 [1] = INTF_DSI, ··· 96 97 .name = "msm8x74", 97 98 .mdp = { 98 99 .count = 1, 99 - .base = { 0x00100 }, 100 100 .caps = MDP_CAP_SMP | 101 101 0, 102 102 }, ··· 110 112 }, 111 113 .ctl = { 112 114 .count = 5, 113 - .base = { 0x00600, 0x00700, 0x00800, 0x00900, 0x00a00 }, 115 + .base = { 0x00500, 0x00600, 0x00700, 0x00800, 0x00900 }, 114 116 .flush_hw_mask = 0x0003ffff, 115 117 }, 116 118 
.pipe_vig = { 117 119 .count = 3, 118 - .base = { 0x01200, 0x01600, 0x01a00 }, 120 + .base = { 0x01100, 0x01500, 0x01900 }, 119 121 .caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP | 120 122 MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_CSC | 121 123 MDP_PIPE_CAP_DECIMATION, 122 124 }, 123 125 .pipe_rgb = { 124 126 .count = 3, 125 - .base = { 0x01e00, 0x02200, 0x02600 }, 127 + .base = { 0x01d00, 0x02100, 0x02500 }, 126 128 .caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP | 127 129 MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_DECIMATION, 128 130 }, 129 131 .pipe_dma = { 130 132 .count = 2, 131 - .base = { 0x02a00, 0x02e00 }, 133 + .base = { 0x02900, 0x02d00 }, 132 134 .caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP, 133 135 }, 134 136 .lm = { 135 137 .count = 5, 136 - .base = { 0x03200, 0x03600, 0x03a00, 0x03e00, 0x04200 }, 138 + .base = { 0x03100, 0x03500, 0x03900, 0x03d00, 0x04100 }, 137 139 .nb_stages = 5, 138 140 .max_width = 2048, 139 141 .max_height = 0xFFFF, 140 142 }, 141 143 .dspp = { 142 144 .count = 3, 143 - .base = { 0x04600, 0x04a00, 0x04e00 }, 145 + .base = { 0x04500, 0x04900, 0x04d00 }, 144 146 }, 145 147 .ad = { 146 148 .count = 2, 147 - .base = { 0x13100, 0x13300 }, 149 + .base = { 0x13000, 0x13200 }, 148 150 }, 149 151 .pp = { 150 152 .count = 3, 151 - .base = { 0x12d00, 0x12e00, 0x12f00 }, 153 + .base = { 0x12c00, 0x12d00, 0x12e00 }, 152 154 }, 153 155 .intf = { 154 - .base = { 0x12500, 0x12700, 0x12900, 0x12b00 }, 156 + .base = { 0x12400, 0x12600, 0x12800, 0x12a00 }, 155 157 .connect = { 156 158 [0] = INTF_eDP, 157 159 [1] = INTF_DSI, ··· 166 168 .name = "apq8084", 167 169 .mdp = { 168 170 .count = 1, 169 - .base = { 0x00100 }, 170 171 .caps = MDP_CAP_SMP | 171 172 0, 172 173 }, ··· 187 190 }, 188 191 .ctl = { 189 192 .count = 5, 190 - .base = { 0x00600, 0x00700, 0x00800, 0x00900, 0x00a00 }, 193 + .base = { 0x00500, 0x00600, 0x00700, 0x00800, 0x00900 }, 191 194 .flush_hw_mask = 0x003fffff, 192 195 }, 193 196 .pipe_vig = { 194 197 .count = 4, 195 - .base = { 0x01200, 
0x01600, 0x01a00, 0x01e00 }, 198 + .base = { 0x01100, 0x01500, 0x01900, 0x01d00 }, 196 199 .caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP | 197 200 MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_CSC | 198 201 MDP_PIPE_CAP_DECIMATION, 199 202 }, 200 203 .pipe_rgb = { 201 204 .count = 4, 202 - .base = { 0x02200, 0x02600, 0x02a00, 0x02e00 }, 205 + .base = { 0x02100, 0x02500, 0x02900, 0x02d00 }, 203 206 .caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP | 204 207 MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_DECIMATION, 205 208 }, 206 209 .pipe_dma = { 207 210 .count = 2, 208 - .base = { 0x03200, 0x03600 }, 211 + .base = { 0x03100, 0x03500 }, 209 212 .caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP, 210 213 }, 211 214 .lm = { 212 215 .count = 6, 213 - .base = { 0x03a00, 0x03e00, 0x04200, 0x04600, 0x04a00, 0x04e00 }, 216 + .base = { 0x03900, 0x03d00, 0x04100, 0x04500, 0x04900, 0x04d00 }, 214 217 .nb_stages = 5, 215 218 .max_width = 2048, 216 219 .max_height = 0xFFFF, 217 220 }, 218 221 .dspp = { 219 222 .count = 4, 220 - .base = { 0x05200, 0x05600, 0x05a00, 0x05e00 }, 223 + .base = { 0x05100, 0x05500, 0x05900, 0x05d00 }, 221 224 222 225 }, 223 226 .ad = { 224 227 .count = 3, 225 - .base = { 0x13500, 0x13700, 0x13900 }, 228 + .base = { 0x13400, 0x13600, 0x13800 }, 226 229 }, 227 230 .pp = { 228 231 .count = 4, 229 - .base = { 0x12f00, 0x13000, 0x13100, 0x13200 }, 232 + .base = { 0x12e00, 0x12f00, 0x13000, 0x13100 }, 230 233 }, 231 234 .intf = { 232 - .base = { 0x12500, 0x12700, 0x12900, 0x12b00, 0x12d00 }, 235 + .base = { 0x12400, 0x12600, 0x12800, 0x12a00, 0x12c00 }, 233 236 .connect = { 234 237 [0] = INTF_eDP, 235 238 [1] = INTF_DSI, ··· 244 247 .name = "msm8x16", 245 248 .mdp = { 246 249 .count = 1, 247 - .base = { 0x01000 }, 250 + .base = { 0x0 }, 248 251 .caps = MDP_CAP_SMP | 249 252 0, 250 253 }, ··· 258 261 }, 259 262 .ctl = { 260 263 .count = 5, 261 - .base = { 0x02000, 0x02200, 0x02400, 0x02600, 0x02800 }, 264 + .base = { 0x01000, 0x01200, 0x01400, 0x01600, 0x01800 }, 262 265 
		.flush_hw_mask = 0x4003ffff,
	},
	.pipe_vig = {
		.count = 1,
-		.base = { 0x05000 },
+		.base = { 0x04000 },
		.caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP |
				MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_CSC |
				MDP_PIPE_CAP_DECIMATION,
	},
	.pipe_rgb = {
		.count = 2,
-		.base = { 0x15000, 0x17000 },
+		.base = { 0x14000, 0x16000 },
		.caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP |
				MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_DECIMATION,
	},
	.pipe_dma = {
		.count = 1,
-		.base = { 0x25000 },
+		.base = { 0x24000 },
		.caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP,
	},
	.lm = {
		.count = 2, /* LM0 and LM3 */
-		.base = { 0x45000, 0x48000 },
+		.base = { 0x44000, 0x47000 },
		.nb_stages = 5,
		.max_width = 2048,
		.max_height = 0xFFFF,
	},
	.dspp = {
		.count = 1,
-		.base = { 0x55000 },
+		.base = { 0x54000 },

	},
	.intf = {
-		.base = { 0x00000, 0x6b800 },
+		.base = { 0x00000, 0x6a800 },
		.connect = {
			[0] = INTF_DISABLED,
			[1] = INTF_DSI,
···
	.name = "msm8x94",
	.mdp = {
		.count = 1,
-		.base = { 0x01000 },
		.caps = MDP_CAP_SMP |
			0,
	},
···
	},
	.ctl = {
		.count = 5,
-		.base = { 0x02000, 0x02200, 0x02400, 0x02600, 0x02800 },
+		.base = { 0x01000, 0x01200, 0x01400, 0x01600, 0x01800 },
		.flush_hw_mask = 0xf0ffffff,
	},
	.pipe_vig = {
		.count = 4,
-		.base = { 0x05000, 0x07000, 0x09000, 0x0b000 },
+		.base = { 0x04000, 0x06000, 0x08000, 0x0a000 },
		.caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP |
				MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_CSC |
				MDP_PIPE_CAP_DECIMATION,
	},
	.pipe_rgb = {
		.count = 4,
-		.base = { 0x15000, 0x17000, 0x19000, 0x1b000 },
+		.base = { 0x14000, 0x16000, 0x18000, 0x1a000 },
		.caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP |
				MDP_PIPE_CAP_SCALE | MDP_PIPE_CAP_DECIMATION,
	},
	.pipe_dma = {
		.count = 2,
-		.base = { 0x25000, 0x27000 },
+		.base = { 0x24000, 0x26000 },
		.caps = MDP_PIPE_CAP_HFLIP | MDP_PIPE_CAP_VFLIP,
	},
	.lm = {
		.count = 6,
-		.base = { 0x45000, 0x46000, 0x47000, 0x48000, 0x49000, 0x4a000 },
+		.base = { 0x44000, 0x45000, 0x46000, 0x47000, 0x48000, 0x49000 },
		.nb_stages = 8,
		.max_width = 2048,
		.max_height = 0xFFFF,
	},
	.dspp = {
		.count = 4,
-		.base = { 0x55000, 0x57000, 0x59000, 0x5b000 },
+		.base = { 0x54000, 0x56000, 0x58000, 0x5a000 },

	},
	.ad = {
		.count = 3,
-		.base = { 0x79000, 0x79800, 0x7a000 },
+		.base = { 0x78000, 0x78800, 0x79000 },
	},
	.pp = {
		.count = 4,
-		.base = { 0x71000, 0x71800, 0x72000, 0x72800 },
+		.base = { 0x70000, 0x70800, 0x71000, 0x71800 },
	},
	.intf = {
-		.base = { 0x6b000, 0x6b800, 0x6c000, 0x6c800, 0x6d000 },
+		.base = { 0x6a000, 0x6a800, 0x6b000, 0x6b800, 0x6c000 },
		.connect = {
			[0] = INTF_DISABLED,
			[1] = INTF_DSI,
···
	.name = "msm8x96",
	.mdp = {
		.count = 1,
-		.base = { 0x01000 },
		.caps = MDP_CAP_DSC |
			MDP_CAP_CDM |
			0,
	},
	.ctl = {
		.count = 5,
-		.base = { 0x02000, 0x02200, 0x02400, 0x02600, 0x02800 },
+		.base = { 0x01000, 0x01200, 0x01400, 0x01600, 0x01800 },
		.flush_hw_mask = 0xf4ffffff,
	},
	.pipe_vig = {
		.count = 4,
-		.base = { 0x05000, 0x07000, 0x09000, 0x0b000 },
+		.base = { 0x04000, 0x06000, 0x08000, 0x0a000 },
		.caps = MDP_PIPE_CAP_HFLIP |
			MDP_PIPE_CAP_VFLIP |
			MDP_PIPE_CAP_SCALE |
···
	},
	.pipe_rgb = {
		.count = 4,
-		.base = { 0x15000, 0x17000, 0x19000, 0x1b000 },
+		.base = { 0x14000, 0x16000, 0x18000, 0x1a000 },
		.caps = MDP_PIPE_CAP_HFLIP |
			MDP_PIPE_CAP_VFLIP |
			MDP_PIPE_CAP_SCALE |
···
	},
	.pipe_dma = {
		.count = 2,
-		.base = { 0x25000, 0x27000 },
+		.base = { 0x24000, 0x26000 },
		.caps = MDP_PIPE_CAP_HFLIP |
			MDP_PIPE_CAP_VFLIP |
			MDP_PIPE_CAP_SW_PIX_EXT |
···
	},
	.lm = {
		.count = 6,
-		.base = { 0x45000, 0x46000, 0x47000, 0x48000, 0x49000, 0x4a000 },
+		.base = { 0x44000, 0x45000, 0x46000, 0x47000, 0x48000, 0x49000 },
		.nb_stages = 8,
		.max_width = 2560,
		.max_height = 0xFFFF,
	},
	.dspp = {
		.count = 2,
-		.base = { 0x55000, 0x57000 },
+		.base = { 0x54000, 0x56000 },
	},
	.ad = {
		.count = 3,
-		.base = { 0x79000, 0x79800, 0x7a000 },
+		.base = { 0x78000, 0x78800, 0x79000 },
	},
	.pp = {
		.count = 4,
-		.base = { 0x71000, 0x71800, 0x72000, 0x72800 },
+		.base = { 0x70000, 0x70800, 0x71000, 0x71800 },
	},
	.cdm = {
		.count = 1,
-		.base = { 0x7a200 },
+		.base = { 0x79200 },
	},
	.dsc = {
		.count = 2,
-		.base = { 0x81000, 0x81400 },
+		.base = { 0x80000, 0x80400 },
	},
	.intf = {
-		.base = { 0x6b000, 0x6b800, 0x6c000, 0x6c800, 0x6d000 },
+		.base = { 0x6a000, 0x6a800, 0x6b000, 0x6b800, 0x6c000 },
		.connect = {
			[0] = INTF_DISABLED,
			[1] = INTF_DSI,
+7 -7
drivers/gpu/drm/msm/mdp/mdp5/mdp5_cmd_encoder.c
···
	 * start signal for the slave encoder
	 */
	if (intf_num == 1)
-		data |= MDP5_MDP_SPLIT_DPL_UPPER_INTF2_SW_TRG_MUX;
+		data |= MDP5_SPLIT_DPL_UPPER_INTF2_SW_TRG_MUX;
	else if (intf_num == 2)
-		data |= MDP5_MDP_SPLIT_DPL_UPPER_INTF1_SW_TRG_MUX;
+		data |= MDP5_SPLIT_DPL_UPPER_INTF1_SW_TRG_MUX;
	else
		return -EINVAL;

	/* Smart Panel, Sync mode */
-	data |= MDP5_MDP_SPLIT_DPL_UPPER_SMART_PANEL;
+	data |= MDP5_SPLIT_DPL_UPPER_SMART_PANEL;

	/* Make sure clocks are on when connectors calling this function. */
	mdp5_enable(mdp5_kms);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_SPLIT_DPL_UPPER(0), data);
+	mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_UPPER, data);

-	mdp5_write(mdp5_kms, REG_MDP5_MDP_SPLIT_DPL_LOWER(0),
-			MDP5_MDP_SPLIT_DPL_LOWER_SMART_PANEL);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_SPLIT_DPL_EN(0), 1);
+	mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_LOWER,
+			MDP5_SPLIT_DPL_LOWER_SMART_PANEL);
+	mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_EN, 1);
	mdp5_disable(mdp5_kms);

	return 0;
+2 -4
drivers/gpu/drm/msm/mdp/mdp5/mdp5_crtc.c
···
	struct mdp5_kms *mdp5_kms = get_kms(crtc);
	struct drm_gem_object *cursor_bo, *old_bo = NULL;
	uint32_t blendcfg, cursor_addr, stride;
-	int ret, bpp, lm;
-	unsigned int depth;
+	int ret, lm;
	enum mdp5_cursor_alpha cur_alpha = CURSOR_ALPHA_PER_PIXEL;
	uint32_t flush_mask = mdp_ctl_flush_mask_cursor(0);
	uint32_t roi_w, roi_h;
···
		return -EINVAL;

	lm = mdp5_crtc->lm;
-	drm_fb_get_bpp_depth(DRM_FORMAT_ARGB8888, &depth, &bpp);
-	stride = width * (bpp >> 3);
+	stride = width * drm_format_plane_cpp(DRM_FORMAT_ARGB8888, 0);

	spin_lock_irqsave(&mdp5_crtc->cursor.lock, flags);
	old_bo = mdp5_crtc->cursor.scanout_bo;
+13 -13
drivers/gpu/drm/msm/mdp/mdp5/mdp5_ctl.c
···
	u32 intf_sel;

	spin_lock_irqsave(&mdp5_kms->resource_lock, flags);
-	intf_sel = mdp5_read(mdp5_kms, REG_MDP5_MDP_DISP_INTF_SEL(0));
+	intf_sel = mdp5_read(mdp5_kms, REG_MDP5_DISP_INTF_SEL);

	switch (intf->num) {
	case 0:
-		intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF0__MASK;
-		intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF0(intf->type);
+		intf_sel &= ~MDP5_DISP_INTF_SEL_INTF0__MASK;
+		intf_sel |= MDP5_DISP_INTF_SEL_INTF0(intf->type);
		break;
	case 1:
-		intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF1__MASK;
-		intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF1(intf->type);
+		intf_sel &= ~MDP5_DISP_INTF_SEL_INTF1__MASK;
+		intf_sel |= MDP5_DISP_INTF_SEL_INTF1(intf->type);
		break;
	case 2:
-		intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF2__MASK;
-		intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF2(intf->type);
+		intf_sel &= ~MDP5_DISP_INTF_SEL_INTF2__MASK;
+		intf_sel |= MDP5_DISP_INTF_SEL_INTF2(intf->type);
		break;
	case 3:
-		intf_sel &= ~MDP5_MDP_DISP_INTF_SEL_INTF3__MASK;
-		intf_sel |= MDP5_MDP_DISP_INTF_SEL_INTF3(intf->type);
+		intf_sel &= ~MDP5_DISP_INTF_SEL_INTF3__MASK;
+		intf_sel |= MDP5_DISP_INTF_SEL_INTF3(intf->type);
		break;
	default:
		BUG();
		break;
	}

-	mdp5_write(mdp5_kms, REG_MDP5_MDP_DISP_INTF_SEL(0), intf_sel);
+	mdp5_write(mdp5_kms, REG_MDP5_DISP_INTF_SEL, intf_sel);
	spin_unlock_irqrestore(&mdp5_kms->resource_lock, flags);
}
···
	if (!enable) {
		ctlx->pair = NULL;
		ctly->pair = NULL;
-		mdp5_write(mdp5_kms, REG_MDP5_MDP_SPARE_0(0), 0);
+		mdp5_write(mdp5_kms, REG_MDP5_SPARE_0, 0);
		return 0;
	} else if ((ctlx->pair != NULL) || (ctly->pair != NULL)) {
		dev_err(ctl_mgr->dev->dev, "CTLs already paired\n");
···
	ctlx->pair = ctly;
	ctly->pair = ctlx;

-	mdp5_write(mdp5_kms, REG_MDP5_MDP_SPARE_0(0),
-			MDP5_MDP_SPARE_0_SPLIT_DPL_SINGLE_FLUSH_EN);
+	mdp5_write(mdp5_kms, REG_MDP5_SPARE_0,
+			MDP5_SPARE_0_SPLIT_DPL_SINGLE_FLUSH_EN);

	return 0;
}
+5 -5
drivers/gpu/drm/msm/mdp/mdp5/mdp5_encoder.c
···
	 * to use the master's enable signal for the slave encoder.
	 */
	if (intf_num == 1)
-		data |= MDP5_MDP_SPLIT_DPL_LOWER_INTF2_TG_SYNC;
+		data |= MDP5_SPLIT_DPL_LOWER_INTF2_TG_SYNC;
	else if (intf_num == 2)
-		data |= MDP5_MDP_SPLIT_DPL_LOWER_INTF1_TG_SYNC;
+		data |= MDP5_SPLIT_DPL_LOWER_INTF1_TG_SYNC;
	else
		return -EINVAL;

	/* Make sure clocks are on when connectors calling this function. */
	mdp5_enable(mdp5_kms);
	/* Dumb Panel, Sync mode */
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_SPLIT_DPL_UPPER(0), 0);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_SPLIT_DPL_LOWER(0), data);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_SPLIT_DPL_EN(0), 1);
+	mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_UPPER, 0);
+	mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_LOWER, data);
+	mdp5_write(mdp5_kms, REG_MDP5_SPLIT_DPL_EN, 1);

	mdp5_ctl_pair(mdp5_encoder->ctl, mdp5_slave_enc->ctl, true);
+13 -112
drivers/gpu/drm/msm/mdp/mdp5/mdp5_irq.c
···
 * this program. If not, see <http://www.gnu.org/licenses/>.
 */

-#include <linux/irqdomain.h>
#include <linux/irq.h>

#include "msm_drv.h"
···
void mdp5_set_irqmask(struct mdp_kms *mdp_kms, uint32_t irqmask,
		uint32_t old_irqmask)
{
-	mdp5_write(to_mdp5_kms(mdp_kms), REG_MDP5_MDP_INTR_CLEAR(0),
-		irqmask ^ (irqmask & old_irqmask));
-	mdp5_write(to_mdp5_kms(mdp_kms), REG_MDP5_MDP_INTR_EN(0), irqmask);
+	mdp5_write(to_mdp5_kms(mdp_kms), REG_MDP5_INTR_CLEAR,
+		irqmask ^ (irqmask & old_irqmask));
+	mdp5_write(to_mdp5_kms(mdp_kms), REG_MDP5_INTR_EN, irqmask);
}

static void mdp5_irq_error_handler(struct mdp_irq *irq, uint32_t irqstatus)
···
{
	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
	mdp5_enable(mdp5_kms);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_CLEAR(0), 0xffffffff);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_EN(0), 0x00000000);
+	mdp5_write(mdp5_kms, REG_MDP5_INTR_CLEAR, 0xffffffff);
+	mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000);
	mdp5_disable(mdp5_kms);
}
···
		MDP5_IRQ_INTF2_UNDER_RUN |
		MDP5_IRQ_INTF3_UNDER_RUN;

+	mdp5_enable(mdp5_kms);
	mdp_irq_register(mdp_kms, error_handler);
+	mdp5_disable(mdp5_kms);

	return 0;
}
···
{
	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
	mdp5_enable(mdp5_kms);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_EN(0), 0x00000000);
+	mdp5_write(mdp5_kms, REG_MDP5_INTR_EN, 0x00000000);
	mdp5_disable(mdp5_kms);
}

-static void mdp5_irq_mdp(struct mdp_kms *mdp_kms)
+irqreturn_t mdp5_irq(struct msm_kms *kms)
{
+	struct mdp_kms *mdp_kms = to_mdp_kms(kms);
	struct mdp5_kms *mdp5_kms = to_mdp5_kms(mdp_kms);
	struct drm_device *dev = mdp5_kms->dev;
	struct msm_drm_private *priv = dev->dev_private;
	unsigned int id;
	uint32_t status, enable;

-	enable = mdp5_read(mdp5_kms, REG_MDP5_MDP_INTR_EN(0));
-	status = mdp5_read(mdp5_kms, REG_MDP5_MDP_INTR_STATUS(0)) & enable;
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_INTR_CLEAR(0), status);
+	enable = mdp5_read(mdp5_kms, REG_MDP5_INTR_EN);
+	status = mdp5_read(mdp5_kms, REG_MDP5_INTR_STATUS) & enable;
+	mdp5_write(mdp5_kms, REG_MDP5_INTR_CLEAR, status);

	VERB("status=%08x", status);

···
	for (id = 0; id < priv->num_crtcs; id++)
		if (status & mdp5_crtc_vblank(priv->crtcs[id]))
			drm_handle_vblank(dev, id);
-}
-
-irqreturn_t mdp5_irq(struct msm_kms *kms)
-{
-	struct mdp_kms *mdp_kms = to_mdp_kms(kms);
-	struct mdp5_kms *mdp5_kms = to_mdp5_kms(mdp_kms);
-	uint32_t intr;
-
-	intr = mdp5_read(mdp5_kms, REG_MDSS_HW_INTR_STATUS);
-
-	VERB("intr=%08x", intr);
-
-	if (intr & MDSS_HW_INTR_STATUS_INTR_MDP) {
-		mdp5_irq_mdp(mdp_kms);
-		intr &= ~MDSS_HW_INTR_STATUS_INTR_MDP;
-	}
-
-	while (intr) {
-		irq_hw_number_t hwirq = fls(intr) - 1;
-		generic_handle_irq(irq_find_mapping(
-				mdp5_kms->irqcontroller.domain, hwirq));
-		intr &= ~(1 << hwirq);
-	}

	return IRQ_HANDLED;
}
···
	mdp_update_vblank_mask(to_mdp_kms(kms),
			mdp5_crtc_vblank(crtc), false);
	mdp5_disable(mdp5_kms);
-}
-
-/*
- * interrupt-controller implementation, so sub-blocks (hdmi/eDP/dsi/etc)
- * can register to get their irq's delivered
- */
-
-#define VALID_IRQS (MDSS_HW_INTR_STATUS_INTR_DSI0 | \
-		MDSS_HW_INTR_STATUS_INTR_DSI1 | \
-		MDSS_HW_INTR_STATUS_INTR_HDMI | \
-		MDSS_HW_INTR_STATUS_INTR_EDP)
-
-static void mdp5_hw_mask_irq(struct irq_data *irqd)
-{
-	struct mdp5_kms *mdp5_kms = irq_data_get_irq_chip_data(irqd);
-	smp_mb__before_atomic();
-	clear_bit(irqd->hwirq, &mdp5_kms->irqcontroller.enabled_mask);
-	smp_mb__after_atomic();
-}
-
-static void mdp5_hw_unmask_irq(struct irq_data *irqd)
-{
-	struct mdp5_kms *mdp5_kms = irq_data_get_irq_chip_data(irqd);
-	smp_mb__before_atomic();
-	set_bit(irqd->hwirq, &mdp5_kms->irqcontroller.enabled_mask);
-	smp_mb__after_atomic();
-}
-
-static struct irq_chip mdp5_hw_irq_chip = {
-	.name = "mdp5",
-	.irq_mask = mdp5_hw_mask_irq,
-	.irq_unmask = mdp5_hw_unmask_irq,
-};
-
-static int mdp5_hw_irqdomain_map(struct irq_domain *d,
-		unsigned int irq, irq_hw_number_t hwirq)
-{
-	struct mdp5_kms *mdp5_kms = d->host_data;
-
-	if (!(VALID_IRQS & (1 << hwirq)))
-		return -EPERM;
-
-	irq_set_chip_and_handler(irq, &mdp5_hw_irq_chip, handle_level_irq);
-	irq_set_chip_data(irq, mdp5_kms);
-
-	return 0;
-}
-
-static struct irq_domain_ops mdp5_hw_irqdomain_ops = {
-	.map = mdp5_hw_irqdomain_map,
-	.xlate = irq_domain_xlate_onecell,
-};
-
-
-int mdp5_irq_domain_init(struct mdp5_kms *mdp5_kms)
-{
-	struct device *dev = mdp5_kms->dev->dev;
-	struct irq_domain *d;
-
-	d = irq_domain_add_linear(dev->of_node, 32,
-			&mdp5_hw_irqdomain_ops, mdp5_kms);
-	if (!d) {
-		dev_err(dev, "mdp5 irq domain add failed\n");
-		return -ENXIO;
-	}
-
-	mdp5_kms->irqcontroller.enabled_mask = 0;
-	mdp5_kms->irqcontroller.domain = d;
-
-	return 0;
-}
-
-void mdp5_irq_domain_fini(struct mdp5_kms *mdp5_kms)
-{
-	if (mdp5_kms->irqcontroller.domain) {
-		irq_domain_remove(mdp5_kms->irqcontroller.domain);
-		mdp5_kms->irqcontroller.domain = NULL;
-	}
}
+216 -133
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.c
···
 * this program. If not, see <http://www.gnu.org/licenses/>.
 */

+#include <linux/of_irq.h>

#include "msm_drv.h"
#include "msm_mmu.h"
···
static int mdp5_hw_init(struct msm_kms *kms)
{
	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
-	struct drm_device *dev = mdp5_kms->dev;
+	struct platform_device *pdev = mdp5_kms->pdev;
	unsigned long flags;

-	pm_runtime_get_sync(dev->dev);
+	pm_runtime_get_sync(&pdev->dev);
+	mdp5_enable(mdp5_kms);

	/* Magic unknown register writes:
	 *
···
	 */

	spin_lock_irqsave(&mdp5_kms->resource_lock, flags);
-	mdp5_write(mdp5_kms, REG_MDP5_MDP_DISP_INTF_SEL(0), 0);
+	mdp5_write(mdp5_kms, REG_MDP5_DISP_INTF_SEL, 0);
	spin_unlock_irqrestore(&mdp5_kms->resource_lock, flags);

	mdp5_ctlm_hw_reset(mdp5_kms->ctlm);

-	pm_runtime_put_sync(dev->dev);
+	mdp5_disable(mdp5_kms);
+	pm_runtime_put_sync(&pdev->dev);

	return 0;
}
···
	return mdp5_encoder_set_split_display(encoder, slave_encoder);
}

-static void mdp5_destroy(struct msm_kms *kms)
+static void mdp5_kms_destroy(struct msm_kms *kms)
{
	struct mdp5_kms *mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
	struct msm_mmu *mmu = mdp5_kms->mmu;
-
-	mdp5_irq_domain_fini(mdp5_kms);

	if (mmu) {
		mmu->funcs->detach(mmu, iommu_ports, ARRAY_SIZE(iommu_ports));
		mmu->funcs->destroy(mmu);
	}
-
-	if (mdp5_kms->ctlm)
-		mdp5_ctlm_destroy(mdp5_kms->ctlm);
-	if (mdp5_kms->smp)
-		mdp5_smp_destroy(mdp5_kms->smp);
-	if (mdp5_kms->cfg)
-		mdp5_cfg_destroy(mdp5_kms->cfg);
-
-	kfree(mdp5_kms);
}

static const struct mdp_kms_funcs kms_funcs = {
···
		.get_format      = mdp_get_format,
		.round_pixclk    = mdp5_round_pixclk,
		.set_split_display = mdp5_set_split_display,
-		.destroy         = mdp5_destroy,
+		.destroy         = mdp5_kms_destroy,
	},
	.set_irqmask         = mdp5_set_irqmask,
};
···

	hw_cfg = mdp5_cfg_get_hw_config(mdp5_kms->cfg);

-	/* register our interrupt-controller for hdmi/eDP/dsi/etc
-	 * to use for irqs routed through mdp:
-	 */
-	ret = mdp5_irq_domain_init(mdp5_kms);
-	if (ret)
-		goto fail;
-
	/* construct CRTCs and their private planes: */
	for (i = 0; i < hw_cfg->pipe_rgb.count; i++) {
		struct drm_plane *plane;
···
	return ret;
}

-static void read_hw_revision(struct mdp5_kms *mdp5_kms,
-		uint32_t *major, uint32_t *minor)
+static void read_mdp_hw_revision(struct mdp5_kms *mdp5_kms,
+		u32 *major, u32 *minor)
{
-	uint32_t version;
+	u32 version;

	mdp5_enable(mdp5_kms);
-	version = mdp5_read(mdp5_kms, REG_MDSS_HW_VERSION);
+	version = mdp5_read(mdp5_kms, REG_MDP5_HW_VERSION);
	mdp5_disable(mdp5_kms);

-	*major = FIELD(version, MDSS_HW_VERSION_MAJOR);
-	*minor = FIELD(version, MDSS_HW_VERSION_MINOR);
+	*major = FIELD(version, MDP5_HW_VERSION_MAJOR);
+	*minor = FIELD(version, MDP5_HW_VERSION_MINOR);

	DBG("MDP5 version v%d.%d", *major, *minor);
}
···

struct msm_kms *mdp5_kms_init(struct drm_device *dev)
{
-	struct platform_device *pdev = dev->platformdev;
-	struct mdp5_cfg *config;
+	struct msm_drm_private *priv = dev->dev_private;
+	struct platform_device *pdev;
	struct mdp5_kms *mdp5_kms;
-	struct msm_kms *kms = NULL;
+	struct mdp5_cfg *config;
+	struct msm_kms *kms;
	struct msm_mmu *mmu;
-	uint32_t major, minor;
-	int i, ret;
+	int irq, i, ret;

-	mdp5_kms = kzalloc(sizeof(*mdp5_kms), GFP_KERNEL);
+	/* priv->kms would have been populated by the MDP5 driver */
+	kms = priv->kms;
+	if (!kms)
+		return NULL;
+
+	mdp5_kms = to_mdp5_kms(to_mdp_kms(kms));
+
+	mdp_kms_init(&mdp5_kms->base, &kms_funcs);
+
+	pdev = mdp5_kms->pdev;
+
+	irq = irq_of_parse_and_map(pdev->dev.of_node, 0);
+	if (irq < 0) {
+		ret = irq;
+		dev_err(&pdev->dev, "failed to get irq: %d\n", ret);
+		goto fail;
+	}
+
+	kms->irq = irq;
+
+	config = mdp5_cfg_get_config(mdp5_kms->cfg);
+
+	/* make sure things are off before attaching iommu (bootloader could
+	 * have left things on, in which case we'll start getting faults if
+	 * we don't disable):
+	 */
+	mdp5_enable(mdp5_kms);
+	for (i = 0; i < MDP5_INTF_NUM_MAX; i++) {
+		if (mdp5_cfg_intf_is_virtual(config->hw->intf.connect[i]) ||
+				!config->hw->intf.base[i])
+			continue;
+		mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(i), 0);
+
+		mdp5_write(mdp5_kms, REG_MDP5_INTF_FRAME_LINE_COUNT_EN(i), 0x3);
+	}
+	mdp5_disable(mdp5_kms);
+	mdelay(16);
+
+	if (config->platform.iommu) {
+		mmu = msm_iommu_new(&pdev->dev, config->platform.iommu);
+		if (IS_ERR(mmu)) {
+			ret = PTR_ERR(mmu);
+			dev_err(&pdev->dev, "failed to init iommu: %d\n", ret);
+			iommu_domain_free(config->platform.iommu);
+			goto fail;
+		}
+
+		ret = mmu->funcs->attach(mmu, iommu_ports,
+				ARRAY_SIZE(iommu_ports));
+		if (ret) {
+			dev_err(&pdev->dev, "failed to attach iommu: %d\n",
+				ret);
+			mmu->funcs->destroy(mmu);
+			goto fail;
+		}
+	} else {
+		dev_info(&pdev->dev,
+			 "no iommu, fallback to phys contig buffers for scanout\n");
+		mmu = NULL;
+	}
+	mdp5_kms->mmu = mmu;
+
+	mdp5_kms->id = msm_register_mmu(dev, mmu);
+	if (mdp5_kms->id < 0) {
+		ret = mdp5_kms->id;
+		dev_err(&pdev->dev, "failed to register mdp5 iommu: %d\n", ret);
+		goto fail;
+	}
+
+	ret = modeset_init(mdp5_kms);
+	if (ret) {
+		dev_err(&pdev->dev, "modeset_init failed: %d\n", ret);
+		goto fail;
+	}
+
+	dev->mode_config.min_width = 0;
+	dev->mode_config.min_height = 0;
+	dev->mode_config.max_width = config->hw->lm.max_width;
+	dev->mode_config.max_height = config->hw->lm.max_height;
+
+	dev->driver->get_vblank_timestamp = mdp5_get_vblank_timestamp;
+	dev->driver->get_scanout_position = mdp5_get_scanoutpos;
+	dev->driver->get_vblank_counter = mdp5_get_vblank_counter;
+	dev->max_vblank_count = 0xffffffff;
+	dev->vblank_disable_immediate = true;
+
+	return kms;
+fail:
+	if (kms)
+		mdp5_kms_destroy(kms);
+	return ERR_PTR(ret);
+}
+
+static void mdp5_destroy(struct platform_device *pdev)
+{
+	struct mdp5_kms *mdp5_kms = platform_get_drvdata(pdev);
+
+	if (mdp5_kms->ctlm)
+		mdp5_ctlm_destroy(mdp5_kms->ctlm);
+	if (mdp5_kms->smp)
+		mdp5_smp_destroy(mdp5_kms->smp);
+	if (mdp5_kms->cfg)
+		mdp5_cfg_destroy(mdp5_kms->cfg);
+
+	if (mdp5_kms->rpm_enabled)
+		pm_runtime_disable(&pdev->dev);
+}
+
+static int mdp5_init(struct platform_device *pdev, struct drm_device *dev)
+{
+	struct msm_drm_private *priv = dev->dev_private;
+	struct mdp5_kms *mdp5_kms;
+	struct mdp5_cfg *config;
+	u32 major, minor;
+	int ret;
+
+	mdp5_kms = devm_kzalloc(&pdev->dev, sizeof(*mdp5_kms), GFP_KERNEL);
	if (!mdp5_kms) {
-		dev_err(dev->dev, "failed to allocate kms\n");
		ret = -ENOMEM;
		goto fail;
	}

+	platform_set_drvdata(pdev, mdp5_kms);
+
	spin_lock_init(&mdp5_kms->resource_lock);

-	mdp_kms_init(&mdp5_kms->base, &kms_funcs);
-
-	kms = &mdp5_kms->base.base;
-
	mdp5_kms->dev = dev;
+	mdp5_kms->pdev = pdev;

-	/* mdp5_kms->mmio actually represents the MDSS base address */
	mdp5_kms->mmio = msm_ioremap(pdev, "mdp_phys", "MDP5");
	if (IS_ERR(mdp5_kms->mmio)) {
		ret = PTR_ERR(mdp5_kms->mmio);
-		goto fail;
-	}
-
-	mdp5_kms->vbif = msm_ioremap(pdev, "vbif_phys", "VBIF");
-	if (IS_ERR(mdp5_kms->vbif)) {
-		ret = PTR_ERR(mdp5_kms->vbif);
-		goto fail;
-	}
-
-	mdp5_kms->vdd = devm_regulator_get(&pdev->dev, "vdd");
-	if (IS_ERR(mdp5_kms->vdd)) {
-		ret = PTR_ERR(mdp5_kms->vdd);
-		goto fail;
-	}
-
-	ret = regulator_enable(mdp5_kms->vdd);
-	if (ret) {
-		dev_err(dev->dev, "failed to enable regulator vdd: %d\n", ret);
		goto fail;
	}
···
	if (ret)
		goto fail;
	ret = get_clk(pdev, &mdp5_kms->ahb_clk, "iface_clk", true);
-	if (ret)
-		goto fail;
-	ret = get_clk(pdev, &mdp5_kms->src_clk, "core_clk_src", true);
	if (ret)
		goto fail;
	ret = get_clk(pdev, &mdp5_kms->core_clk, "core_clk", true);
···
	 * rate first, then figure out hw revision, and then set a
	 * more optimal rate:
	 */
-	clk_set_rate(mdp5_kms->src_clk, 200000000);
+	clk_set_rate(mdp5_kms->core_clk, 200000000);

-	read_hw_revision(mdp5_kms, &major, &minor);
+	pm_runtime_enable(&pdev->dev);
+	mdp5_kms->rpm_enabled = true;
+
+	read_mdp_hw_revision(mdp5_kms, &major, &minor);

	mdp5_kms->cfg = mdp5_cfg_init(mdp5_kms, major, minor);
	if (IS_ERR(mdp5_kms->cfg)) {
···
	mdp5_kms->caps = config->hw->mdp.caps;

	/* TODO: compute core clock rate at runtime */
-	clk_set_rate(mdp5_kms->src_clk, config->hw->max_clk);
+	clk_set_rate(mdp5_kms->core_clk, config->hw->max_clk);

	/*
	 * Some chipsets have a Shared Memory Pool (SMP), while others
···
		goto fail;
	}

-	/* make sure things are off before attaching iommu (bootloader could
-	 * have left things on, in which case we'll start getting faults if
-	 * we don't disable):
-	 */
-	mdp5_enable(mdp5_kms);
-	for (i = 0; i < MDP5_INTF_NUM_MAX; i++) {
-		if (mdp5_cfg_intf_is_virtual(config->hw->intf.connect[i]) ||
-				!config->hw->intf.base[i])
-			continue;
-		mdp5_write(mdp5_kms, REG_MDP5_INTF_TIMING_ENGINE_EN(i), 0);
-
-		mdp5_write(mdp5_kms, REG_MDP5_INTF_FRAME_LINE_COUNT_EN(i), 0x3);
-	}
-	mdp5_disable(mdp5_kms);
-	mdelay(16);
-
-	if (config->platform.iommu) {
-		mmu = msm_iommu_new(&pdev->dev, config->platform.iommu);
-		if (IS_ERR(mmu)) {
-			ret = PTR_ERR(mmu);
-			dev_err(dev->dev, "failed to init iommu: %d\n", ret);
-			iommu_domain_free(config->platform.iommu);
-			goto fail;
-		}
-
-		ret = mmu->funcs->attach(mmu, iommu_ports,
-				ARRAY_SIZE(iommu_ports));
-		if (ret) {
-			dev_err(dev->dev, "failed to attach iommu: %d\n", ret);
-			mmu->funcs->destroy(mmu);
-			goto fail;
-		}
-	} else {
-		dev_info(dev->dev, "no iommu, fallback to phys "
-				"contig buffers for scanout\n");
-		mmu = NULL;
-	}
-	mdp5_kms->mmu = mmu;
-
-	mdp5_kms->id = msm_register_mmu(dev, mmu);
-	if (mdp5_kms->id < 0) {
-		ret = mdp5_kms->id;
-		dev_err(dev->dev, "failed to register mdp5 iommu: %d\n", ret);
-		goto fail;
-	}
-
-	ret = modeset_init(mdp5_kms);
-	if (ret) {
-		dev_err(dev->dev, "modeset_init failed: %d\n", ret);
-		goto fail;
-	}
-
-	dev->mode_config.min_width = 0;
-	dev->mode_config.min_height = 0;
-	dev->mode_config.max_width = config->hw->lm.max_width;
-	dev->mode_config.max_height = config->hw->lm.max_height;
-
-	dev->driver->get_vblank_timestamp = mdp5_get_vblank_timestamp;
-	dev->driver->get_scanout_position = mdp5_get_scanoutpos;
-	dev->driver->get_vblank_counter = mdp5_get_vblank_counter;
-	dev->max_vblank_count = 0xffffffff;
-	dev->vblank_disable_immediate = true;
-
-	return kms;
-
+	/* set uninit-ed kms */
+	priv->kms = &mdp5_kms->base.base;
+
+	return 0;
fail:
-	if (kms)
-		mdp5_destroy(kms);
-	return ERR_PTR(ret);
+	mdp5_destroy(pdev);
+	return ret;
+}
+
+static int mdp5_bind(struct device *dev, struct device *master, void *data)
+{
+	struct drm_device *ddev = dev_get_drvdata(master);
+	struct platform_device *pdev = to_platform_device(dev);
+
+	DBG("");
+
+	return mdp5_init(pdev, ddev);
+}
+
+static void mdp5_unbind(struct device *dev, struct device *master,
+			void *data)
+{
+	struct platform_device *pdev = to_platform_device(dev);
+
+	mdp5_destroy(pdev);
+}
+
+static const struct component_ops mdp5_ops = {
+	.bind   = mdp5_bind,
+	.unbind = mdp5_unbind,
+};
+
+static int mdp5_dev_probe(struct platform_device *pdev)
+{
+	DBG("");
+	return component_add(&pdev->dev, &mdp5_ops);
+}
+
+static int mdp5_dev_remove(struct platform_device *pdev)
+{
+	DBG("");
+	component_del(&pdev->dev, &mdp5_ops);
+	return 0;
+}
+
+static const struct of_device_id mdp5_dt_match[] = {
+	{ .compatible = "qcom,mdp5", },
+	/* to support downstream DT files */
+	{ .compatible = "qcom,mdss_mdp", },
+	{}
+};
+MODULE_DEVICE_TABLE(of, mdp5_dt_match);
+
+static struct platform_driver mdp5_driver = {
+	.probe = mdp5_dev_probe,
+	.remove = mdp5_dev_remove,
+	.driver = {
+		.name = "msm_mdp",
+		.of_match_table = mdp5_dt_match,
+	},
+};
+
+void __init msm_mdp_register(void)
+{
+	DBG("");
+	platform_driver_register(&mdp5_driver);
+}
+
+void __exit msm_mdp_unregister(void)
+{
+	DBG("");
+	platform_driver_unregister(&mdp5_driver);
}
+6 -10
drivers/gpu/drm/msm/mdp/mdp5/mdp5_kms.h
···

	struct drm_device *dev;

+	struct platform_device *pdev;
+
	struct mdp5_cfg_handler *cfg;
	uint32_t caps;	/* MDP capabilities (MDP_CAP_XXX bits) */

···
	struct mdp5_ctl_manager *ctlm;

	/* io/register spaces: */
-	void __iomem *mmio, *vbif;
-
-	struct regulator *vdd;
+	void __iomem *mmio;

	struct clk *axi_clk;
	struct clk *ahb_clk;
-	struct clk *src_clk;
	struct clk *core_clk;
	struct clk *lut_clk;
	struct clk *vsync_clk;

	/*
	 * lock to protect access to global resources: ie., following register:
-	 *	- REG_MDP5_MDP_DISP_INTF_SEL
+	 *	- REG_MDP5_DISP_INTF_SEL
	 */
	spinlock_t resource_lock;

-	struct mdp_irq error_handler;
+	bool rpm_enabled;

-	struct {
-		volatile unsigned long enabled_mask;
-		struct irq_domain *domain;
-	} irqcontroller;
+	struct mdp_irq error_handler;
};
#define to_mdp5_kms(x) container_of(x, struct mdp5_kms, base)
+235
drivers/gpu/drm/msm/mdp/mdp5/mdp5_mdss.c
··· 1 + /* 2 + * Copyright (c) 2016, The Linux Foundation. All rights reserved. 3 + * 4 + * This program is free software; you can redistribute it and/or modify it 5 + * under the terms of the GNU General Public License version 2 as published by 6 + * the Free Software Foundation. 7 + * 8 + * This program is distributed in the hope that it will be useful, but WITHOUT 9 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 10 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 11 + * more details. 12 + * 13 + * You should have received a copy of the GNU General Public License along with 14 + * this program. If not, see <http://www.gnu.org/licenses/>. 15 + */ 16 + 17 + #include <linux/irqdomain.h> 18 + #include <linux/irq.h> 19 + 20 + #include "msm_drv.h" 21 + #include "mdp5_kms.h" 22 + 23 + /* 24 + * If needed, this can become more specific: something like struct mdp5_mdss, 25 + * which contains a 'struct msm_mdss base' member. 26 + */ 27 + struct msm_mdss { 28 + struct drm_device *dev; 29 + 30 + void __iomem *mmio, *vbif; 31 + 32 + struct regulator *vdd; 33 + 34 + struct { 35 + volatile unsigned long enabled_mask; 36 + struct irq_domain *domain; 37 + } irqcontroller; 38 + }; 39 + 40 + static inline void mdss_write(struct msm_mdss *mdss, u32 reg, u32 data) 41 + { 42 + msm_writel(data, mdss->mmio + reg); 43 + } 44 + 45 + static inline u32 mdss_read(struct msm_mdss *mdss, u32 reg) 46 + { 47 + return msm_readl(mdss->mmio + reg); 48 + } 49 + 50 + static irqreturn_t mdss_irq(int irq, void *arg) 51 + { 52 + struct msm_mdss *mdss = arg; 53 + u32 intr; 54 + 55 + intr = mdss_read(mdss, REG_MDSS_HW_INTR_STATUS); 56 + 57 + VERB("intr=%08x", intr); 58 + 59 + while (intr) { 60 + irq_hw_number_t hwirq = fls(intr) - 1; 61 + 62 + generic_handle_irq(irq_find_mapping( 63 + mdss->irqcontroller.domain, hwirq)); 64 + intr &= ~(1 << hwirq); 65 + } 66 + 67 + return IRQ_HANDLED; 68 + } 69 + 70 + /* 71 + * interrupt-controller implementation, so 
sub-blocks (MDP/HDMI/eDP/DSI/etc) 72 + * can register to get their irq's delivered 73 + */ 74 + 75 + #define VALID_IRQS (MDSS_HW_INTR_STATUS_INTR_MDP | \ 76 + MDSS_HW_INTR_STATUS_INTR_DSI0 | \ 77 + MDSS_HW_INTR_STATUS_INTR_DSI1 | \ 78 + MDSS_HW_INTR_STATUS_INTR_HDMI | \ 79 + MDSS_HW_INTR_STATUS_INTR_EDP) 80 + 81 + static void mdss_hw_mask_irq(struct irq_data *irqd) 82 + { 83 + struct msm_mdss *mdss = irq_data_get_irq_chip_data(irqd); 84 + 85 + smp_mb__before_atomic(); 86 + clear_bit(irqd->hwirq, &mdss->irqcontroller.enabled_mask); 87 + smp_mb__after_atomic(); 88 + } 89 + 90 + static void mdss_hw_unmask_irq(struct irq_data *irqd) 91 + { 92 + struct msm_mdss *mdss = irq_data_get_irq_chip_data(irqd); 93 + 94 + smp_mb__before_atomic(); 95 + set_bit(irqd->hwirq, &mdss->irqcontroller.enabled_mask); 96 + smp_mb__after_atomic(); 97 + } 98 + 99 + static struct irq_chip mdss_hw_irq_chip = { 100 + .name = "mdss", 101 + .irq_mask = mdss_hw_mask_irq, 102 + .irq_unmask = mdss_hw_unmask_irq, 103 + }; 104 + 105 + static int mdss_hw_irqdomain_map(struct irq_domain *d, unsigned int irq, 106 + irq_hw_number_t hwirq) 107 + { 108 + struct msm_mdss *mdss = d->host_data; 109 + 110 + if (!(VALID_IRQS & (1 << hwirq))) 111 + return -EPERM; 112 + 113 + irq_set_chip_and_handler(irq, &mdss_hw_irq_chip, handle_level_irq); 114 + irq_set_chip_data(irq, mdss); 115 + 116 + return 0; 117 + } 118 + 119 + static struct irq_domain_ops mdss_hw_irqdomain_ops = { 120 + .map = mdss_hw_irqdomain_map, 121 + .xlate = irq_domain_xlate_onecell, 122 + }; 123 + 124 + 125 + static int mdss_irq_domain_init(struct msm_mdss *mdss) 126 + { 127 + struct device *dev = mdss->dev->dev; 128 + struct irq_domain *d; 129 + 130 + d = irq_domain_add_linear(dev->of_node, 32, &mdss_hw_irqdomain_ops, 131 + mdss); 132 + if (!d) { 133 + dev_err(dev, "mdss irq domain add failed\n"); 134 + return -ENXIO; 135 + } 136 + 137 + mdss->irqcontroller.enabled_mask = 0; 138 + mdss->irqcontroller.domain = d; 139 + 140 + return 0; 141 + } 142 + 
143 + void msm_mdss_destroy(struct drm_device *dev) 144 + { 145 + struct msm_drm_private *priv = dev->dev_private; 146 + struct msm_mdss *mdss = priv->mdss; 147 + 148 + if (!mdss) 149 + return; 150 + 151 + irq_domain_remove(mdss->irqcontroller.domain); 152 + mdss->irqcontroller.domain = NULL; 153 + 154 + regulator_disable(mdss->vdd); 155 + 156 + pm_runtime_put_sync(dev->dev); 157 + 158 + pm_runtime_disable(dev->dev); 159 + } 160 + 161 + int msm_mdss_init(struct drm_device *dev) 162 + { 163 + struct platform_device *pdev = dev->platformdev; 164 + struct msm_drm_private *priv = dev->dev_private; 165 + struct msm_mdss *mdss; 166 + int ret; 167 + 168 + DBG(""); 169 + 170 + if (!of_device_is_compatible(dev->dev->of_node, "qcom,mdss")) 171 + return 0; 172 + 173 + mdss = devm_kzalloc(dev->dev, sizeof(*mdss), GFP_KERNEL); 174 + if (!mdss) { 175 + ret = -ENOMEM; 176 + goto fail; 177 + } 178 + 179 + mdss->dev = dev; 180 + 181 + mdss->mmio = msm_ioremap(pdev, "mdss_phys", "MDSS"); 182 + if (IS_ERR(mdss->mmio)) { 183 + ret = PTR_ERR(mdss->mmio); 184 + goto fail; 185 + } 186 + 187 + mdss->vbif = msm_ioremap(pdev, "vbif_phys", "VBIF"); 188 + if (IS_ERR(mdss->vbif)) { 189 + ret = PTR_ERR(mdss->vbif); 190 + goto fail; 191 + } 192 + 193 + /* Regulator to enable GDSCs in downstream kernels */ 194 + mdss->vdd = devm_regulator_get(dev->dev, "vdd"); 195 + if (IS_ERR(mdss->vdd)) { 196 + ret = PTR_ERR(mdss->vdd); 197 + goto fail; 198 + } 199 + 200 + ret = regulator_enable(mdss->vdd); 201 + if (ret) { 202 + dev_err(dev->dev, "failed to enable regulator vdd: %d\n", 203 + ret); 204 + goto fail; 205 + } 206 + 207 + ret = devm_request_irq(dev->dev, platform_get_irq(pdev, 0), 208 + mdss_irq, 0, "mdss_isr", mdss); 209 + if (ret) { 210 + dev_err(dev->dev, "failed to init irq: %d\n", ret); 211 + goto fail_irq; 212 + } 213 + 214 + ret = mdss_irq_domain_init(mdss); 215 + if (ret) { 216 + dev_err(dev->dev, "failed to init sub-block irqs: %d\n", ret); 217 + goto fail_irq; 218 + } 219 + 220 + 
priv->mdss = mdss; 221 + 222 + pm_runtime_enable(dev->dev); 223 + 224 + /* 225 + * TODO: This is needed as the MDSS GDSC is only tied to MDSS's power 226 + * domain. Remove this once runtime PM is adapted for all the devices. 227 + */ 228 + pm_runtime_get_sync(dev->dev); 229 + 230 + return 0; 231 + fail_irq: 232 + regulator_disable(mdss->vdd); 233 + fail: 234 + return ret; 235 + }
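The mdss_hw_mask_irq()/mdss_hw_unmask_irq() callbacks above only bookkeep an enable bitmask; dispatch to sub-block handlers happens in the (not shown) top-level mdss_irq handler, and the domain's .map rejects hwirqs outside the VALID_IRQS whitelist. A minimal standalone sketch of that bookkeeping, with made-up INTR_* bit positions (the real values come from the generated MDSS register headers):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-ins for the MDSS sub-block status bits; the real
 * MDSS_HW_INTR_STATUS_* values live in the generated register headers. */
#define INTR_MDP  (1u << 0)
#define INTR_DSI0 (1u << 4)
#define INTR_DSI1 (1u << 6)
#define INTR_HDMI (1u << 8)
#define INTR_EDP  (1u << 12)

#define VALID_IRQS (INTR_MDP | INTR_DSI0 | INTR_DSI1 | INTR_HDMI | INTR_EDP)

static uint32_t enabled_mask;

/* irq_chip mask/unmask just track which sub-block irqs are enabled */
static void mask_irq(unsigned int hwirq)   { enabled_mask &= ~(1u << hwirq); }
static void unmask_irq(unsigned int hwirq) { enabled_mask |=  (1u << hwirq); }

/* domain .map rejects hwirqs outside the whitelist, as in
 * mdss_hw_irqdomain_map() (which returns -EPERM in the driver) */
static int map_irq(unsigned int hwirq)
{
	if (!(VALID_IRQS & (1u << hwirq)))
		return -1;
	return 0;
}
```

The kernel version additionally brackets the bit ops with smp_mb__before_atomic()/smp_mb__after_atomic() so the top-level handler observes a consistent mask.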
+11 -11
drivers/gpu/drm/msm/mdp/mdp5/mdp5_smp.c
··· 42 42 * 43 43 * configured: 44 44 * The block is allocated to some client, and assigned to that 45 - * client in MDP5_MDP_SMP_ALLOC registers. 45 + * client in MDP5_SMP_ALLOC registers. 46 46 * 47 47 * inuse: 48 48 * The block is being actively used by a client. ··· 59 59 * mdp5_smp_commit. 60 60 * 61 61 * 2) mdp5_smp_configure(): 62 - * As hw is programmed, before FLUSH, MDP5_MDP_SMP_ALLOC registers 62 + * As hw is programmed, before FLUSH, MDP5_SMP_ALLOC registers 63 63 * are configured for the union(pending, inuse) 64 64 * Current pending is copied to configured. 65 65 * It is assumed that mdp5_smp_request and mdp5_smp_configure not run ··· 311 311 int idx = blk / 3; 312 312 int fld = blk % 3; 313 313 314 - val = mdp5_read(mdp5_kms, REG_MDP5_MDP_SMP_ALLOC_W_REG(0, idx)); 314 + val = mdp5_read(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(idx)); 315 315 316 316 switch (fld) { 317 317 case 0: 318 - val &= ~MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0__MASK; 319 - val |= MDP5_MDP_SMP_ALLOC_W_REG_CLIENT0(cid); 318 + val &= ~MDP5_SMP_ALLOC_W_REG_CLIENT0__MASK; 319 + val |= MDP5_SMP_ALLOC_W_REG_CLIENT0(cid); 320 320 break; 321 321 case 1: 322 - val &= ~MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1__MASK; 323 - val |= MDP5_MDP_SMP_ALLOC_W_REG_CLIENT1(cid); 322 + val &= ~MDP5_SMP_ALLOC_W_REG_CLIENT1__MASK; 323 + val |= MDP5_SMP_ALLOC_W_REG_CLIENT1(cid); 324 324 break; 325 325 case 2: 326 - val &= ~MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2__MASK; 327 - val |= MDP5_MDP_SMP_ALLOC_W_REG_CLIENT2(cid); 326 + val &= ~MDP5_SMP_ALLOC_W_REG_CLIENT2__MASK; 327 + val |= MDP5_SMP_ALLOC_W_REG_CLIENT2(cid); 328 328 break; 329 329 } 330 330 331 - mdp5_write(mdp5_kms, REG_MDP5_MDP_SMP_ALLOC_W_REG(0, idx), val); 332 - mdp5_write(mdp5_kms, REG_MDP5_MDP_SMP_ALLOC_R_REG(0, idx), val); 331 + mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_W_REG(idx), val); 332 + mdp5_write(mdp5_kms, REG_MDP5_SMP_ALLOC_R_REG(idx), val); 333 333 } 334 334 } 335 335
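Each SMP_ALLOC_W/R register packs three client IDs, which is why the code above computes `idx = blk / 3` and `fld = blk % 3`. A standalone sketch of the field packing, assuming 8-bit client fields for illustration (the real CLIENTn__MASK/CLIENTn() shifts come from the generated mdp5 register headers):

```c
#include <assert.h>
#include <stdint.h>

/* Which SMP_ALLOC register a block lands in: three blocks per register */
static int alloc_reg_idx(int blk)
{
	return blk / 3;
}

/* Pack a client id into one of the three fields of a register value.
 * The 8-bit field width is an assumption for this sketch. */
static uint32_t assign_blk(uint32_t val, int blk, uint8_t cid)
{
	int fld = blk % 3;
	uint32_t shift = (uint32_t)fld * 8;

	val &= ~(0xffu << shift);          /* CLIENTn__MASK */
	val |= ((uint32_t)cid << shift);   /* CLIENTn(cid)  */
	return val;
}
```

The driver writes the same packed value to both the _W and _R variants of the register, as seen in the hunk above.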
+225 -36
drivers/gpu/drm/msm/msm_drv.c
··· 21 21 #include "msm_gpu.h" 22 22 #include "msm_kms.h" 23 23 24 + 25 + /* 26 + * MSM driver version: 27 + * - 1.0.0 - initial interface 28 + * - 1.1.0 - adds madvise, and support for submits with > 4 cmd buffers 29 + */ 30 + #define MSM_VERSION_MAJOR 1 31 + #define MSM_VERSION_MINOR 1 32 + #define MSM_VERSION_PATCHLEVEL 0 33 + 24 34 static void msm_fb_output_poll_changed(struct drm_device *dev) 25 35 { 26 36 struct msm_drm_private *priv = dev->dev_private; ··· 205 195 kfree(vbl_ev); 206 196 } 207 197 198 + msm_gem_shrinker_cleanup(ddev); 199 + 208 200 drm_kms_helper_poll_fini(ddev); 209 201 210 202 drm_dev_unregister(ddev); ··· 227 215 flush_workqueue(priv->atomic_wq); 228 216 destroy_workqueue(priv->atomic_wq); 229 217 230 - if (kms) { 231 - pm_runtime_disable(dev); 218 + if (kms) 232 219 kms->funcs->destroy(kms); 233 - } 234 220 235 221 if (gpu) { 236 222 mutex_lock(&ddev->struct_mutex); ··· 246 236 } 247 237 248 238 component_unbind_all(dev, ddev); 239 + 240 + msm_mdss_destroy(ddev); 249 241 250 242 ddev->dev_private = NULL; 251 243 drm_dev_unref(ddev); ··· 294 282 if (node) { 295 283 struct resource r; 296 284 ret = of_address_to_resource(node, 0, &r); 285 + of_node_put(node); 297 286 if (ret) 298 287 return ret; 299 288 size = r.end - r.start; ··· 363 350 } 364 351 365 352 ddev->dev_private = priv; 353 + priv->dev = ddev; 354 + 355 + ret = msm_mdss_init(ddev); 356 + if (ret) { 357 + kfree(priv); 358 + drm_dev_unref(ddev); 359 + return ret; 360 + } 366 361 367 362 priv->wq = alloc_ordered_workqueue("msm", 0); 368 363 priv->atomic_wq = alloc_ordered_workqueue("msm:atomic", 0); ··· 386 365 /* Bind all our sub-components: */ 387 366 ret = component_bind_all(dev, ddev); 388 367 if (ret) { 368 + msm_mdss_destroy(ddev); 389 369 kfree(priv); 390 370 drm_dev_unref(ddev); 391 371 return ret; ··· 396 374 if (ret) 397 375 goto fail; 398 376 377 + msm_gem_shrinker_init(ddev); 378 + 399 379 switch (get_mdp_ver(pdev)) { 400 380 case 4: 401 381 kms = mdp4_kms_init(ddev); 
382 + priv->kms = kms; 402 383 break; 403 384 case 5: 404 385 kms = mdp5_kms_init(ddev); ··· 423 398 goto fail; 424 399 } 425 400 426 - priv->kms = kms; 427 - 428 401 if (kms) { 429 - pm_runtime_enable(dev); 430 402 ret = kms->funcs->hw_init(kms); 431 403 if (ret) { 432 404 dev_err(dev, "kms hw init failed: %d\n", ret); ··· 439 417 goto fail; 440 418 } 441 419 442 - pm_runtime_get_sync(dev); 443 - ret = drm_irq_install(ddev, platform_get_irq(pdev, 0)); 444 - pm_runtime_put_sync(dev); 445 - if (ret < 0) { 446 - dev_err(dev, "failed to install IRQ handler\n"); 447 - goto fail; 420 + if (kms) { 421 + pm_runtime_get_sync(dev); 422 + ret = drm_irq_install(ddev, kms->irq); 423 + pm_runtime_put_sync(dev); 424 + if (ret < 0) { 425 + dev_err(dev, "failed to install IRQ handler\n"); 426 + goto fail; 427 + } 448 428 } 449 429 450 430 ret = drm_dev_register(ddev, 0); ··· 706 682 return msm_wait_fence(priv->gpu->fctx, args->fence, &timeout, true); 707 683 } 708 684 685 + static int msm_ioctl_gem_madvise(struct drm_device *dev, void *data, 686 + struct drm_file *file) 687 + { 688 + struct drm_msm_gem_madvise *args = data; 689 + struct drm_gem_object *obj; 690 + int ret; 691 + 692 + switch (args->madv) { 693 + case MSM_MADV_DONTNEED: 694 + case MSM_MADV_WILLNEED: 695 + break; 696 + default: 697 + return -EINVAL; 698 + } 699 + 700 + ret = mutex_lock_interruptible(&dev->struct_mutex); 701 + if (ret) 702 + return ret; 703 + 704 + obj = drm_gem_object_lookup(file, args->handle); 705 + if (!obj) { 706 + ret = -ENOENT; 707 + goto unlock; 708 + } 709 + 710 + ret = msm_gem_madvise(obj, args->madv); 711 + if (ret >= 0) { 712 + args->retained = ret; 713 + ret = 0; 714 + } 715 + 716 + drm_gem_object_unreference(obj); 717 + 718 + unlock: 719 + mutex_unlock(&dev->struct_mutex); 720 + return ret; 721 + } 722 + 709 723 static const struct drm_ioctl_desc msm_ioctls[] = { 710 724 DRM_IOCTL_DEF_DRV(MSM_GET_PARAM, msm_ioctl_get_param, DRM_AUTH|DRM_RENDER_ALLOW), 711 725 
DRM_IOCTL_DEF_DRV(MSM_GEM_NEW, msm_ioctl_gem_new, DRM_AUTH|DRM_RENDER_ALLOW), ··· 752 690 DRM_IOCTL_DEF_DRV(MSM_GEM_CPU_FINI, msm_ioctl_gem_cpu_fini, DRM_AUTH|DRM_RENDER_ALLOW), 753 691 DRM_IOCTL_DEF_DRV(MSM_GEM_SUBMIT, msm_ioctl_gem_submit, DRM_AUTH|DRM_RENDER_ALLOW), 754 692 DRM_IOCTL_DEF_DRV(MSM_WAIT_FENCE, msm_ioctl_wait_fence, DRM_AUTH|DRM_RENDER_ALLOW), 693 + DRM_IOCTL_DEF_DRV(MSM_GEM_MADVISE, msm_ioctl_gem_madvise, DRM_AUTH|DRM_RENDER_ALLOW), 755 694 }; 756 695 757 696 static const struct vm_operations_struct vm_ops = { ··· 818 755 .name = "msm", 819 756 .desc = "MSM Snapdragon DRM", 820 757 .date = "20130625", 821 - .major = 1, 822 - .minor = 0, 758 + .major = MSM_VERSION_MAJOR, 759 + .minor = MSM_VERSION_MINOR, 760 + .patchlevel = MSM_VERSION_PATCHLEVEL, 823 761 }; 824 762 825 763 #ifdef CONFIG_PM_SLEEP ··· 860 796 return dev->of_node == data; 861 797 } 862 798 863 - static int add_components(struct device *dev, struct component_match **matchptr, 864 - const char *name) 799 + /* 800 + * Identify what components need to be added by parsing what remote-endpoints 801 + * our MDP output ports are connected to. In the case of LVDS on MDP4, there 802 + * is no external component that we need to add since LVDS is within MDP4 803 + * itself. 804 + */ 805 + static int add_components_mdp(struct device *mdp_dev, 806 + struct component_match **matchptr) 865 807 { 866 - struct device_node *np = dev->of_node; 867 - unsigned i; 808 + struct device_node *np = mdp_dev->of_node; 809 + struct device_node *ep_node; 810 + struct device *master_dev; 868 811 869 - for (i = 0; ; i++) { 870 - struct device_node *node; 812 + /* 813 + * on MDP4 based platforms, the MDP platform device is the component 814 + * master that adds other display interface components to itself. 815 + * 816 + * on MDP5 based platforms, the MDSS platform device is the component 817 + * master that adds MDP5 and other display interface components to 818 + * itself. 
819 + */ 820 + if (of_device_is_compatible(np, "qcom,mdp4")) 821 + master_dev = mdp_dev; 822 + else 823 + master_dev = mdp_dev->parent; 871 824 872 - node = of_parse_phandle(np, name, i); 873 - if (!node) 874 - break; 825 + for_each_endpoint_of_node(np, ep_node) { 826 + struct device_node *intf; 827 + struct of_endpoint ep; 828 + int ret; 875 829 876 - component_match_add(dev, matchptr, compare_of, node); 830 + ret = of_graph_parse_endpoint(ep_node, &ep); 831 + if (ret) { 832 + dev_err(mdp_dev, "unable to parse port endpoint\n"); 833 + of_node_put(ep_node); 834 + return ret; 835 + } 836 + 837 + /* 838 + * The LCDC/LVDS port on MDP4 is a special case where the 839 + * remote-endpoint isn't a component that we need to add 840 + */ 841 + if (of_device_is_compatible(np, "qcom,mdp4") && 842 + ep.port == 0) { 843 + of_node_put(ep_node); 844 + continue; 845 + } 846 + 847 + /* 848 + * It's okay if some of the ports don't have a remote endpoint 849 + * specified. It just means that the port isn't connected to 850 + * any external interface. 851 + */ 852 + intf = of_graph_get_remote_port_parent(ep_node); 853 + if (!intf) { 854 + of_node_put(ep_node); 855 + continue; 856 + } 857 + 858 + component_match_add(master_dev, matchptr, compare_of, intf); 859 + 860 + of_node_put(intf); 861 + of_node_put(ep_node); 877 862 } 863 + 864 + return 0; 865 + } 866 + 867 + static int compare_name_mdp(struct device *dev, void *data) 868 + { 869 + return (strstr(dev_name(dev), "mdp") != NULL); 870 + } 871 + 872 + static int add_display_components(struct device *dev, 873 + struct component_match **matchptr) 874 + { 875 + struct device *mdp_dev; 876 + int ret; 877 + 878 + /* 879 + * MDP5 based devices don't have a flat hierarchy. There is a top level 880 + * parent: MDSS, and children: MDP5, DSI, HDMI, eDP etc. Populate the 881 + * children devices, find the MDP5 node, and then add the interfaces 882 + * to our components list. 
883 + */ 884 + if (of_device_is_compatible(dev->of_node, "qcom,mdss")) { 885 + ret = of_platform_populate(dev->of_node, NULL, NULL, dev); 886 + if (ret) { 887 + dev_err(dev, "failed to populate children devices\n"); 888 + return ret; 889 + } 890 + 891 + mdp_dev = device_find_child(dev, NULL, compare_name_mdp); 892 + if (!mdp_dev) { 893 + dev_err(dev, "failed to find MDSS MDP node\n"); 894 + of_platform_depopulate(dev); 895 + return -ENODEV; 896 + } 897 + 898 + put_device(mdp_dev); 899 + 900 + /* add the MDP component itself */ 901 + component_match_add(dev, matchptr, compare_of, 902 + mdp_dev->of_node); 903 + } else { 904 + /* MDP4 */ 905 + mdp_dev = dev; 906 + } 907 + 908 + ret = add_components_mdp(mdp_dev, matchptr); 909 + if (ret) 910 + of_platform_depopulate(dev); 911 + 912 + return ret; 913 + } 914 + 915 + /* 916 + * We don't know what's the best binding to link the gpu with the drm device. 917 + * For now, we just hunt for all the possible gpus that we support, and add them 918 + * as components. 
919 + */ 920 + static const struct of_device_id msm_gpu_match[] = { 921 + { .compatible = "qcom,adreno-3xx" }, 922 + { .compatible = "qcom,kgsl-3d0" }, 923 + { }, 924 + }; 925 + 926 + static int add_gpu_components(struct device *dev, 927 + struct component_match **matchptr) 928 + { 929 + struct device_node *np; 930 + 931 + np = of_find_matching_node(NULL, msm_gpu_match); 932 + if (!np) 933 + return 0; 934 + 935 + component_match_add(dev, matchptr, compare_of, np); 936 + 937 + of_node_put(np); 878 938 879 939 return 0; 880 940 } ··· 1025 837 static int msm_pdev_probe(struct platform_device *pdev) 1026 838 { 1027 839 struct component_match *match = NULL; 840 + int ret; 1028 841 1029 - add_components(&pdev->dev, &match, "connectors"); 1030 - add_components(&pdev->dev, &match, "gpus"); 842 + ret = add_display_components(&pdev->dev, &match); 843 + if (ret) 844 + return ret; 845 + 846 + ret = add_gpu_components(&pdev->dev, &match); 847 + if (ret) 848 + return ret; 1031 849 1032 850 pdev->dev.coherent_dma_mask = DMA_BIT_MASK(32); 1033 851 return component_master_add_with_match(&pdev->dev, &msm_drm_ops, match); ··· 1042 848 static int msm_pdev_remove(struct platform_device *pdev) 1043 849 { 1044 850 component_master_del(&pdev->dev, &msm_drm_ops); 851 + of_platform_depopulate(&pdev->dev); 1045 852 1046 853 return 0; 1047 854 } 1048 855 1049 - static const struct platform_device_id msm_id[] = { 1050 - { "mdp", 0 }, 1051 - { } 1052 - }; 1053 - 1054 856 static const struct of_device_id dt_match[] = { 1055 - { .compatible = "qcom,mdp4", .data = (void *) 4 }, /* mdp4 */ 1056 - { .compatible = "qcom,mdp5", .data = (void *) 5 }, /* mdp5 */ 1057 - /* to support downstream DT files */ 1058 - { .compatible = "qcom,mdss_mdp", .data = (void *) 5 }, /* mdp5 */ 857 + { .compatible = "qcom,mdp4", .data = (void *)4 }, /* MDP4 */ 858 + { .compatible = "qcom,mdss", .data = (void *)5 }, /* MDP5 MDSS */ 1059 859 {} 1060 860 }; 1061 861 MODULE_DEVICE_TABLE(of, dt_match); ··· 1062 874 
.of_match_table = dt_match, 1063 875 .pm = &msm_pm_ops, 1064 876 }, 1065 - .id_table = msm_id, 1066 877 }; 1067 878 1068 879 static int __init msm_drm_register(void) 1069 880 { 1070 881 DBG("init"); 882 + msm_mdp_register(); 1071 883 msm_dsi_register(); 1072 884 msm_edp_register(); 1073 885 msm_hdmi_register(); ··· 1083 895 adreno_unregister(); 1084 896 msm_edp_unregister(); 1085 897 msm_dsi_unregister(); 898 + msm_mdp_unregister(); 1086 899 } 1087 900 1088 901 module_init(msm_drm_register);
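As noted in the pull description, userspace decides whether the new madvise ioctl and arbitrary cmd-buffer submits are available based on the driver version the kernel advertises (bumped to 1.1.0 above). A hypothetical userspace-side helper, not actual libdrm API, showing that gate:

```c
#include <assert.h>
#include <stdbool.h>

/* Madvise and submits with more than 4 cmd buffers landed in driver
 * version 1.1, so userspace gates those paths on the major/minor pair
 * it reads back via DRM_IOCTL_VERSION. Helper name is made up. */
static bool msm_has_madvise_and_multi_cmd(int major, int minor)
{
	return major > 1 || (major == 1 && minor >= 1);
}
```

Real userspace does the equivalent check in libdrm/mesa; see the freedreno libdrm commit linked in the merge description.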
+22 -2
drivers/gpu/drm/msm/msm_drv.h
··· 46 46 struct msm_kms; 47 47 struct msm_gpu; 48 48 struct msm_mmu; 49 + struct msm_mdss; 49 50 struct msm_rd_state; 50 51 struct msm_perf_state; 51 52 struct msm_gem_submit; ··· 78 77 79 78 struct msm_drm_private { 80 79 80 + struct drm_device *dev; 81 + 81 82 struct msm_kms *kms; 82 83 83 84 /* subordinate devices, if present: */ 84 85 struct platform_device *gpu_pdev; 86 + 87 + /* top level MDSS wrapper device (for MDP5 only) */ 88 + struct msm_mdss *mdss; 85 89 86 90 /* possibly this should be in the kms component, but it is 87 91 * shared by both mdp4 and mdp5.. ··· 153 147 struct drm_mm mm; 154 148 } vram; 155 149 150 + struct notifier_block vmap_notifier; 151 + struct shrinker shrinker; 152 + 156 153 struct msm_vblank_ctrl vblank_ctrl; 157 154 }; 158 155 ··· 173 164 void msm_gem_submit_free(struct msm_gem_submit *submit); 174 165 int msm_ioctl_gem_submit(struct drm_device *dev, void *data, 175 166 struct drm_file *file); 167 + 168 + void msm_gem_shrinker_init(struct drm_device *dev); 169 + void msm_gem_shrinker_cleanup(struct drm_device *dev); 176 170 177 171 int msm_gem_mmap_obj(struct drm_gem_object *obj, 178 172 struct vm_area_struct *vma); ··· 201 189 struct dma_buf_attachment *attach, struct sg_table *sg); 202 190 int msm_gem_prime_pin(struct drm_gem_object *obj); 203 191 void msm_gem_prime_unpin(struct drm_gem_object *obj); 204 - void *msm_gem_vaddr_locked(struct drm_gem_object *obj); 205 - void *msm_gem_vaddr(struct drm_gem_object *obj); 192 + void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj); 193 + void *msm_gem_get_vaddr(struct drm_gem_object *obj); 194 + void msm_gem_put_vaddr_locked(struct drm_gem_object *obj); 195 + void msm_gem_put_vaddr(struct drm_gem_object *obj); 196 + int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv); 197 + void msm_gem_purge(struct drm_gem_object *obj); 198 + void msm_gem_vunmap(struct drm_gem_object *obj); 206 199 int msm_gem_sync_object(struct drm_gem_object *obj, 207 200 struct msm_fence_context 
*fctx, bool exclusive); 208 201 void msm_gem_move_to_active(struct drm_gem_object *obj, ··· 273 256 return -EINVAL; 274 257 } 275 258 #endif 259 + 260 + void __init msm_mdp_register(void); 261 + void __exit msm_mdp_unregister(void); 276 262 277 263 #ifdef CONFIG_DEBUG_FS 278 264 void msm_gem_describe(struct drm_gem_object *obj, struct seq_file *m);
+2 -2
drivers/gpu/drm/msm/msm_fb.c
··· 49 49 50 50 for (i = 0; i < n; i++) { 51 51 struct drm_gem_object *bo = msm_fb->planes[i]; 52 - if (bo) 53 - drm_gem_object_unreference_unlocked(bo); 52 + 53 + drm_gem_object_unreference_unlocked(bo); 54 54 } 55 55 56 56 kfree(msm_fb);
+2 -1
drivers/gpu/drm/msm/msm_fbdev.c
··· 158 158 159 159 dev->mode_config.fb_base = paddr; 160 160 161 - fbi->screen_base = msm_gem_vaddr_locked(fbdev->bo); 161 + fbi->screen_base = msm_gem_get_vaddr_locked(fbdev->bo); 162 162 if (IS_ERR(fbi->screen_base)) { 163 163 ret = PTR_ERR(fbi->screen_base); 164 164 goto fail_unlock; ··· 251 251 252 252 /* this will free the backing object */ 253 253 if (fbdev->fb) { 254 + msm_gem_put_vaddr(fbdev->bo); 254 255 drm_framebuffer_unregister_private(fbdev->fb); 255 256 drm_framebuffer_remove(fbdev->fb); 256 257 }
+118 -21
drivers/gpu/drm/msm/msm_gem.c
··· 276 276 return offset; 277 277 } 278 278 279 + static void 280 + put_iova(struct drm_gem_object *obj) 281 + { 282 + struct drm_device *dev = obj->dev; 283 + struct msm_drm_private *priv = obj->dev->dev_private; 284 + struct msm_gem_object *msm_obj = to_msm_bo(obj); 285 + int id; 286 + 287 + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 288 + 289 + for (id = 0; id < ARRAY_SIZE(msm_obj->domain); id++) { 290 + struct msm_mmu *mmu = priv->mmus[id]; 291 + if (mmu && msm_obj->domain[id].iova) { 292 + uint32_t offset = msm_obj->domain[id].iova; 293 + mmu->funcs->unmap(mmu, offset, msm_obj->sgt, obj->size); 294 + msm_obj->domain[id].iova = 0; 295 + } 296 + } 297 + } 298 + 279 299 /* should be called under struct_mutex.. although it can be called 280 300 * from atomic context without struct_mutex to acquire an extra 281 301 * iova ref if you know one is already held. ··· 408 388 return ret; 409 389 } 410 390 411 - void *msm_gem_vaddr_locked(struct drm_gem_object *obj) 391 + void *msm_gem_get_vaddr_locked(struct drm_gem_object *obj) 412 392 { 413 393 struct msm_gem_object *msm_obj = to_msm_bo(obj); 414 394 WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); ··· 421 401 if (msm_obj->vaddr == NULL) 422 402 return ERR_PTR(-ENOMEM); 423 403 } 404 + msm_obj->vmap_count++; 424 405 return msm_obj->vaddr; 425 406 } 426 407 427 - void *msm_gem_vaddr(struct drm_gem_object *obj) 408 + void *msm_gem_get_vaddr(struct drm_gem_object *obj) 428 409 { 429 410 void *ret; 430 411 mutex_lock(&obj->dev->struct_mutex); 431 - ret = msm_gem_vaddr_locked(obj); 412 + ret = msm_gem_get_vaddr_locked(obj); 432 413 mutex_unlock(&obj->dev->struct_mutex); 433 414 return ret; 415 + } 416 + 417 + void msm_gem_put_vaddr_locked(struct drm_gem_object *obj) 418 + { 419 + struct msm_gem_object *msm_obj = to_msm_bo(obj); 420 + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); 421 + WARN_ON(msm_obj->vmap_count < 1); 422 + msm_obj->vmap_count--; 423 + } 424 + 425 + void msm_gem_put_vaddr(struct drm_gem_object 
*obj) 426 + { 427 + mutex_lock(&obj->dev->struct_mutex); 428 + msm_gem_put_vaddr_locked(obj); 429 + mutex_unlock(&obj->dev->struct_mutex); 430 + } 431 + 432 + /* Update madvise status, returns true if not purged, else 433 + * false or -errno. 434 + */ 435 + int msm_gem_madvise(struct drm_gem_object *obj, unsigned madv) 436 + { 437 + struct msm_gem_object *msm_obj = to_msm_bo(obj); 438 + 439 + WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); 440 + 441 + if (msm_obj->madv != __MSM_MADV_PURGED) 442 + msm_obj->madv = madv; 443 + 444 + return (msm_obj->madv != __MSM_MADV_PURGED); 445 + } 446 + 447 + void msm_gem_purge(struct drm_gem_object *obj) 448 + { 449 + struct drm_device *dev = obj->dev; 450 + struct msm_gem_object *msm_obj = to_msm_bo(obj); 451 + 452 + WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 453 + WARN_ON(!is_purgeable(msm_obj)); 454 + WARN_ON(obj->import_attach); 455 + 456 + put_iova(obj); 457 + 458 + msm_gem_vunmap(obj); 459 + 460 + put_pages(obj); 461 + 462 + msm_obj->madv = __MSM_MADV_PURGED; 463 + 464 + drm_vma_node_unmap(&obj->vma_node, dev->anon_inode->i_mapping); 465 + drm_gem_free_mmap_offset(obj); 466 + 467 + /* Our goal here is to return as much of the memory as 468 + * is possible back to the system as we are called from OOM. 469 + * To do this we must instruct the shmfs to drop all of its 470 + * backing pages, *now*. 471 + */ 472 + shmem_truncate_range(file_inode(obj->filp), 0, (loff_t)-1); 473 + 474 + invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 475 + 0, (loff_t)-1); 476 + } 477 + 478 + void msm_gem_vunmap(struct drm_gem_object *obj) 479 + { 480 + struct msm_gem_object *msm_obj = to_msm_bo(obj); 481 + 482 + if (!msm_obj->vaddr || WARN_ON(!is_vunmapable(msm_obj))) 483 + return; 484 + 485 + vunmap(msm_obj->vaddr); 486 + msm_obj->vaddr = NULL; 434 487 } 435 488 436 489 /* must be called before _move_to_active().. 
*/ ··· 557 464 struct msm_gpu *gpu, bool exclusive, struct fence *fence) 558 465 { 559 466 struct msm_gem_object *msm_obj = to_msm_bo(obj); 467 + WARN_ON(msm_obj->madv != MSM_MADV_WILLNEED); 560 468 msm_obj->gpu = gpu; 561 469 if (exclusive) 562 470 reservation_object_add_excl_fence(msm_obj->resv, fence); ··· 626 532 struct reservation_object_list *fobj; 627 533 struct fence *fence; 628 534 uint64_t off = drm_vma_node_start(&obj->vma_node); 535 + const char *madv; 629 536 630 537 WARN_ON(!mutex_is_locked(&obj->dev->struct_mutex)); 631 538 632 - seq_printf(m, "%08x: %c %2d (%2d) %08llx %p %zu\n", 539 + switch (msm_obj->madv) { 540 + case __MSM_MADV_PURGED: 541 + madv = " purged"; 542 + break; 543 + case MSM_MADV_DONTNEED: 544 + madv = " purgeable"; 545 + break; 546 + case MSM_MADV_WILLNEED: 547 + default: 548 + madv = ""; 549 + break; 550 + } 551 + 552 + seq_printf(m, "%08x: %c %2d (%2d) %08llx %p %zu%s\n", 633 553 msm_obj->flags, is_active(msm_obj) ? 'A' : 'I', 634 554 obj->name, obj->refcount.refcount.counter, 635 - off, msm_obj->vaddr, obj->size); 555 + off, msm_obj->vaddr, obj->size, madv); 636 556 637 557 rcu_read_lock(); 638 558 fobj = rcu_dereference(robj->fence); ··· 686 578 void msm_gem_free_object(struct drm_gem_object *obj) 687 579 { 688 580 struct drm_device *dev = obj->dev; 689 - struct msm_drm_private *priv = obj->dev->dev_private; 690 581 struct msm_gem_object *msm_obj = to_msm_bo(obj); 691 - int id; 692 582 693 583 WARN_ON(!mutex_is_locked(&dev->struct_mutex)); 694 584 ··· 695 589 696 590 list_del(&msm_obj->mm_list); 697 591 698 - for (id = 0; id < ARRAY_SIZE(msm_obj->domain); id++) { 699 - struct msm_mmu *mmu = priv->mmus[id]; 700 - if (mmu && msm_obj->domain[id].iova) { 701 - uint32_t offset = msm_obj->domain[id].iova; 702 - mmu->funcs->unmap(mmu, offset, msm_obj->sgt, obj->size); 703 - } 704 - } 592 + put_iova(obj); 705 593 706 594 if (obj->import_attach) { 707 595 if (msm_obj->vaddr) ··· 709 609 710 610 drm_prime_gem_destroy(obj, msm_obj->sgt); 
711 611 } else { 712 - vunmap(msm_obj->vaddr); 612 + msm_gem_vunmap(obj); 713 613 put_pages(obj); 714 614 } 715 615 ··· 788 688 msm_obj->vram_node = (void *)&msm_obj[1]; 789 689 790 690 msm_obj->flags = flags; 691 + msm_obj->madv = MSM_MADV_WILLNEED; 791 692 792 693 if (resv) { 793 694 msm_obj->resv = resv; ··· 830 729 return obj; 831 730 832 731 fail: 833 - if (obj) 834 - drm_gem_object_unreference(obj); 835 - 732 + drm_gem_object_unreference(obj); 836 733 return ERR_PTR(ret); 837 734 } 838 735 ··· 873 774 return obj; 874 775 875 776 fail: 876 - if (obj) 877 - drm_gem_object_unreference_unlocked(obj); 878 - 777 + drm_gem_object_unreference_unlocked(obj); 879 778 return ERR_PTR(ret); 880 779 }
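msm_gem_madvise() above is a small three-state machine per object: advice updates are ignored once the object has been purged, and the return value tells the caller whether the backing pages were retained. A minimal sketch of that contract (the enum values here are illustrative, not the UAPI numbers):

```c
#include <assert.h>

/* Mirrors the driver's states: the UAPI WILLNEED/DONTNEED advice plus
 * the kernel-internal __MSM_MADV_PURGED. Values are illustrative. */
enum { MADV_WILLNEED = 0, MADV_DONTNEED = 1, MADV_PURGED = 2 };

/* Update advice unless the object was already purged; return whether
 * the backing pages are still retained, as msm_gem_madvise() does. */
static int gem_madvise(int *madv, int new_madv)
{
	if (*madv != MADV_PURGED)
		*madv = new_madv;
	return *madv != MADV_PURGED;
}
```

The ioctl wrapper in msm_drv.c maps this straight through: a non-negative return becomes `args->retained`, so userspace learns whether a DONTNEED buffer survived until it asked for it back with WILLNEED.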
+21 -2
drivers/gpu/drm/msm/msm_gem.h
··· 29 29 30 30 uint32_t flags; 31 31 32 + /** 33 + * Advice: are the backing pages purgeable? 34 + */ 35 + uint8_t madv; 36 + 37 + /** 38 + * count of active vmap'ing 39 + */ 40 + uint8_t vmap_count; 41 + 32 42 /* And object is either: 33 43 * inactive - on priv->inactive_list 34 44 * active - on one one of the gpu's active_list.. well, at ··· 82 72 return msm_obj->gpu != NULL; 83 73 } 84 74 85 - #define MAX_CMDS 4 75 + static inline bool is_purgeable(struct msm_gem_object *msm_obj) 76 + { 77 + return (msm_obj->madv == MSM_MADV_DONTNEED) && msm_obj->sgt && 78 + !msm_obj->base.dma_buf && !msm_obj->base.import_attach; 79 + } 80 + 81 + static inline bool is_vunmapable(struct msm_gem_object *msm_obj) 82 + { 83 + return (msm_obj->vmap_count == 0) && msm_obj->vaddr; 84 + } 86 85 87 86 /* Created per submit-ioctl, to track bo's and cmdstream bufs, etc, 88 87 * associated with the cmdstream submission for synchronization (and ··· 114 95 uint32_t size; /* in dwords */ 115 96 uint32_t iova; 116 97 uint32_t idx; /* cmdstream buffer idx in bos[] */ 117 - } cmd[MAX_CMDS]; 98 + } *cmd; /* array of size nr_cmds */ 118 99 struct { 119 100 uint32_t flags; 120 101 struct msm_gem_object *obj;
+2 -2
drivers/gpu/drm/msm/msm_gem_prime.c
··· 33 33 34 34 void *msm_gem_prime_vmap(struct drm_gem_object *obj) 35 35 { 36 - return msm_gem_vaddr(obj); 36 + return msm_gem_get_vaddr(obj); 37 37 } 38 38 39 39 void msm_gem_prime_vunmap(struct drm_gem_object *obj, void *vaddr) 40 40 { 41 - /* TODO msm_gem_vunmap() */ 41 + msm_gem_put_vaddr(obj); 42 42 } 43 43 44 44 int msm_gem_prime_mmap(struct drm_gem_object *obj, struct vm_area_struct *vma)
+168
drivers/gpu/drm/msm/msm_gem_shrinker.c
··· 1 + /* 2 + * Copyright (C) 2016 Red Hat 3 + * Author: Rob Clark <robdclark@gmail.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms of the GNU General Public License version 2 as published by 7 + * the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + * You should have received a copy of the GNU General Public License along with 15 + * this program. If not, see <http://www.gnu.org/licenses/>. 16 + */ 17 + 18 + #include "msm_drv.h" 19 + #include "msm_gem.h" 20 + 21 + static bool mutex_is_locked_by(struct mutex *mutex, struct task_struct *task) 22 + { 23 + if (!mutex_is_locked(mutex)) 24 + return false; 25 + 26 + #if defined(CONFIG_SMP) || defined(CONFIG_DEBUG_MUTEXES) 27 + return mutex->owner == task; 28 + #else 29 + /* Since UP may be pre-empted, we cannot assume that we own the lock */ 30 + return false; 31 + #endif 32 + } 33 + 34 + static bool msm_gem_shrinker_lock(struct drm_device *dev, bool *unlock) 35 + { 36 + if (!mutex_trylock(&dev->struct_mutex)) { 37 + if (!mutex_is_locked_by(&dev->struct_mutex, current)) 38 + return false; 39 + *unlock = false; 40 + } else { 41 + *unlock = true; 42 + } 43 + 44 + return true; 45 + } 46 + 47 + 48 + static unsigned long 49 + msm_gem_shrinker_count(struct shrinker *shrinker, struct shrink_control *sc) 50 + { 51 + struct msm_drm_private *priv = 52 + container_of(shrinker, struct msm_drm_private, shrinker); 53 + struct drm_device *dev = priv->dev; 54 + struct msm_gem_object *msm_obj; 55 + unsigned long count = 0; 56 + bool unlock; 57 + 58 + if (!msm_gem_shrinker_lock(dev, &unlock)) 59 + return 0; 60 + 61 + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { 62 + if (is_purgeable(msm_obj)) 63 + count += 
msm_obj->base.size >> PAGE_SHIFT; 64 + } 65 + 66 + if (unlock) 67 + mutex_unlock(&dev->struct_mutex); 68 + 69 + return count; 70 + } 71 + 72 + static unsigned long 73 + msm_gem_shrinker_scan(struct shrinker *shrinker, struct shrink_control *sc) 74 + { 75 + struct msm_drm_private *priv = 76 + container_of(shrinker, struct msm_drm_private, shrinker); 77 + struct drm_device *dev = priv->dev; 78 + struct msm_gem_object *msm_obj; 79 + unsigned long freed = 0; 80 + bool unlock; 81 + 82 + if (!msm_gem_shrinker_lock(dev, &unlock)) 83 + return SHRINK_STOP; 84 + 85 + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { 86 + if (freed >= sc->nr_to_scan) 87 + break; 88 + if (is_purgeable(msm_obj)) { 89 + msm_gem_purge(&msm_obj->base); 90 + freed += msm_obj->base.size >> PAGE_SHIFT; 91 + } 92 + } 93 + 94 + if (unlock) 95 + mutex_unlock(&dev->struct_mutex); 96 + 97 + if (freed > 0) 98 + pr_info_ratelimited("Purging %lu bytes\n", freed << PAGE_SHIFT); 99 + 100 + return freed; 101 + } 102 + 103 + static int 104 + msm_gem_shrinker_vmap(struct notifier_block *nb, unsigned long event, void *ptr) 105 + { 106 + struct msm_drm_private *priv = 107 + container_of(nb, struct msm_drm_private, vmap_notifier); 108 + struct drm_device *dev = priv->dev; 109 + struct msm_gem_object *msm_obj; 110 + unsigned unmapped = 0; 111 + bool unlock; 112 + 113 + if (!msm_gem_shrinker_lock(dev, &unlock)) 114 + return NOTIFY_DONE; 115 + 116 + list_for_each_entry(msm_obj, &priv->inactive_list, mm_list) { 117 + if (is_vunmapable(msm_obj)) { 118 + msm_gem_vunmap(&msm_obj->base); 119 + /* since we don't know any better, let's bail after a few 120 + * and if necessary the shrinker will be invoked again. 
121 + * Seems better than unmapping *everything* 122 + */ 123 + if (++unmapped >= 15) 124 + break; 125 + } 126 + } 127 + 128 + if (unlock) 129 + mutex_unlock(&dev->struct_mutex); 130 + 131 + *(unsigned long *)ptr += unmapped; 132 + 133 + if (unmapped > 0) 134 + pr_info_ratelimited("Purging %u vmaps\n", unmapped); 135 + 136 + return NOTIFY_DONE; 137 + } 138 + 139 + /** 140 + * msm_gem_shrinker_init - Initialize msm shrinker 141 + * @dev_priv: msm device 142 + * 143 + * This function registers and sets up the msm shrinker. 144 + */ 145 + void msm_gem_shrinker_init(struct drm_device *dev) 146 + { 147 + struct msm_drm_private *priv = dev->dev_private; 148 + priv->shrinker.count_objects = msm_gem_shrinker_count; 149 + priv->shrinker.scan_objects = msm_gem_shrinker_scan; 150 + priv->shrinker.seeks = DEFAULT_SEEKS; 151 + WARN_ON(register_shrinker(&priv->shrinker)); 152 + 153 + priv->vmap_notifier.notifier_call = msm_gem_shrinker_vmap; 154 + WARN_ON(register_vmap_purge_notifier(&priv->vmap_notifier)); 155 + } 156 + 157 + /** 158 + * msm_gem_shrinker_cleanup - Clean up msm shrinker 159 + * @dev_priv: msm device 160 + * 161 + * This function unregisters the msm shrinker. 162 + */ 163 + void msm_gem_shrinker_cleanup(struct drm_device *dev) 164 + { 165 + struct msm_drm_private *priv = dev->dev_private; 166 + WARN_ON(unregister_vmap_purge_notifier(&priv->vmap_notifier)); 167 + unregister_shrinker(&priv->shrinker); 168 + }
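The shrinker above pairs count_objects (report how many pages could be reclaimed) with scan_objects (purge until nr_to_scan is satisfied), both walking the inactive list for purgeable objects. A simplified, list-free sketch of those two passes:

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SHIFT 12

/* Simplified stand-in for the fields the shrinker cares about */
struct bo {
	size_t size;
	int purgeable;  /* madv == DONTNEED, still backed by pages */
	int purged;
};

/* count_objects: how many pages could be reclaimed right now */
static unsigned long shrink_count(struct bo *bos, int n)
{
	unsigned long pages = 0;
	int i;

	for (i = 0; i < n; i++)
		if (bos[i].purgeable && !bos[i].purged)
			pages += bos[i].size >> PAGE_SHIFT;
	return pages;
}

/* scan_objects: purge purgeable objects until nr_to_scan pages freed */
static unsigned long shrink_scan(struct bo *bos, int n, unsigned long nr_to_scan)
{
	unsigned long freed = 0;
	int i;

	for (i = 0; i < n && freed < nr_to_scan; i++) {
		if (bos[i].purgeable && !bos[i].purged) {
			bos[i].purged = 1;
			freed += bos[i].size >> PAGE_SHIFT;
		}
	}
	return freed;
}
```

The real driver also has to take struct_mutex carefully (the mutex_is_locked_by() dance above), since the shrinker can be invoked from an allocation made while the driver itself holds the lock.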
+16 -10
drivers/gpu/drm/msm/msm_gem_submit.c
··· 29 29 #define BO_PINNED 0x2000 30 30 31 31 static struct msm_gem_submit *submit_create(struct drm_device *dev, 32 - struct msm_gpu *gpu, int nr) 32 + struct msm_gpu *gpu, int nr_bos, int nr_cmds) 33 33 { 34 34 struct msm_gem_submit *submit; 35 - int sz = sizeof(*submit) + (nr * sizeof(submit->bos[0])); 35 + int sz = sizeof(*submit) + (nr_bos * sizeof(submit->bos[0])) + 36 + (nr_cmds * sizeof(*submit->cmd)); 36 37 37 38 submit = kmalloc(sz, GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY); 38 39 if (!submit) ··· 43 42 submit->gpu = gpu; 44 43 submit->fence = NULL; 45 44 submit->pid = get_pid(task_pid(current)); 45 + submit->cmd = (void *)&submit->bos[nr_bos]; 46 46 47 47 /* initially, until copy_from_user() and bo lookup succeeds: */ 48 48 submit->nr_bos = 0; ··· 281 279 /* For now, just map the entire thing. Eventually we probably 282 280 * to do it page-by-page, w/ kmap() if not vmap()d.. 283 281 */ 284 - ptr = msm_gem_vaddr_locked(&obj->base); 282 + ptr = msm_gem_get_vaddr_locked(&obj->base); 285 283 286 284 if (IS_ERR(ptr)) { 287 285 ret = PTR_ERR(ptr); ··· 334 332 last_offset = off; 335 333 } 336 334 335 + msm_gem_put_vaddr_locked(&obj->base); 336 + 337 337 return 0; 338 338 } 339 339 ··· 373 369 if (args->pipe != MSM_PIPE_3D0) 374 370 return -EINVAL; 375 371 376 - if (args->nr_cmds > MAX_CMDS) 377 - return -EINVAL; 372 + ret = mutex_lock_interruptible(&dev->struct_mutex); 373 + if (ret) 374 + return ret; 378 375 379 - submit = submit_create(dev, gpu, args->nr_bos); 380 - if (!submit) 381 - return -ENOMEM; 382 - 383 - mutex_lock(&dev->struct_mutex); 376 + submit = submit_create(dev, gpu, args->nr_bos, args->nr_cmds); 377 + if (!submit) { 378 + ret = -ENOMEM; 379 + goto out_unlock; 380 + } 384 381 385 382 ret = submit_lookup_objects(submit, args, file); 386 383 if (ret) ··· 467 462 submit_cleanup(submit); 468 463 if (ret) 469 464 msm_gem_submit_free(submit); 465 + out_unlock: 470 466 mutex_unlock(&dev->struct_mutex); 471 467 return ret; 472 468 }
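submit_create() above now sizes a single allocation for the submit header, the bos[] array, and the variable-length cmd[] array, then points cmd at the memory just past bos[nr_bos]. A userspace-compilable sketch of that single-allocation layout, with simplified stand-in structs:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-ins for the real submit/bo/cmd structs */
struct bo_entry  { uint32_t flags; void *obj; };
struct cmd_entry { uint32_t type, size, iova, idx; };

struct submit {
	int nr_bos, nr_cmds;
	struct cmd_entry *cmd;   /* points into the same allocation */
	struct bo_entry bos[];   /* flexible array member */
};

/* One allocation for header + bos[] + cmd[], as in submit_create();
 * the cmd array simply starts where the bos array ends. */
static struct submit *submit_create(int nr_bos, int nr_cmds)
{
	size_t sz = sizeof(struct submit) +
		    (size_t)nr_bos * sizeof(struct bo_entry) +
		    (size_t)nr_cmds * sizeof(struct cmd_entry);
	struct submit *s = calloc(1, sz);

	if (!s)
		return NULL;
	s->nr_bos = nr_bos;
	s->nr_cmds = nr_cmds;
	s->cmd = (void *)&s->bos[nr_bos];
	return s;
}
```

This is what lets the ioctl drop the old `nr_cmds > MAX_CMDS` limit: the cmd array is sized per submit instead of being a fixed `cmd[4]` inside the struct.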
drivers/gpu/drm/msm/msm_iommu.c  (+3 -3)

 		return -EINVAL;

 	for_each_sg(sgt->sgl, sg, sgt->nents, i) {
-		u32 pa = sg_phys(sg) - sg->offset;
+		dma_addr_t pa = sg_phys(sg) - sg->offset;
 		size_t bytes = sg->length + sg->offset;

-		VERB("map[%d]: %08x %08x(%zx)", i, iova, pa, bytes);
+		VERB("map[%d]: %08x %08lx(%zx)", i, da, (unsigned long)pa, bytes);

 		ret = iommu_map(domain, da, pa, bytes, prot);
 		if (ret)
···
 		if (unmapped < bytes)
 			return unmapped;

-		VERB("unmap[%d]: %08x(%zx)", i, iova, bytes);
+		VERB("unmap[%d]: %08x(%zx)", i, da, bytes);

 		BUG_ON(!PAGE_ALIGNED(bytes));
drivers/gpu/drm/msm/msm_kms.h  (+4 -4)

 struct msm_kms {
 	const struct msm_kms_funcs *funcs;

-	/* irq handling: */
-	bool in_irq;
-	struct list_head irq_list;    /* list of mdp4_irq */
-	uint32_t vblank_mask;         /* irq bits set for userspace vblank */
+	/* irq number to be passed on to drm_irq_install */
+	int irq;
 };

 static inline void msm_kms_init(struct msm_kms *kms,
···

 struct msm_kms *mdp4_kms_init(struct drm_device *dev);
 struct msm_kms *mdp5_kms_init(struct drm_device *dev);
+int msm_mdss_init(struct drm_device *dev);
+void msm_mdss_destroy(struct drm_device *dev);

 #endif /* __MSM_KMS_H__ */
drivers/gpu/drm/msm/msm_perf.c  (+4 -3)

 		size_t sz, loff_t *ppos)
 {
 	struct msm_perf_state *perf = file->private_data;
-	int n = 0, ret;
+	int n = 0, ret = 0;

 	mutex_lock(&perf->read_lock);
···
 	}

 	n = min((int)sz, perf->buftot - perf->bufpos);
-	ret = copy_to_user(buf, &perf->buf[perf->bufpos], n);
-	if (ret)
+	if (copy_to_user(buf, &perf->buf[perf->bufpos], n)) {
+		ret = -EFAULT;
 		goto out;
+	}

 	perf->bufpos += n;
 	*ppos += n;
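The debugfs read-path fixes in msm_perf.c and msm_rd.c correct the same bug: copy_to_user() returns the number of bytes it could *not* copy, never a negative errno, so a partial copy must be mapped to -EFAULT rather than handed back to userspace as a byte count. A small userspace model of the corrected pattern — mock_copy_to_user and read_chunk are stand-ins for illustration, not kernel APIs:

```c
#include <assert.h>
#include <errno.h>
#include <string.h>
#include <stddef.h>

/* Userspace model of copy_to_user(): returns bytes NOT copied (0 on
 * success).  'fail_after' simulates hitting an unmapped user page. */
static size_t mock_copy_to_user(char *dst, const char *src, size_t n,
		size_t fail_after)
{
	size_t ok = n < fail_after ? n : fail_after;

	memcpy(dst, src, ok);
	return n - ok;  /* bytes remaining, as copy_to_user reports */
}

/* The fixed read-path shape: any nonzero remainder becomes -EFAULT
 * instead of leaking the leftover count to the caller. */
static int read_chunk(char *ubuf, const char *kbuf, size_t n,
		size_t fail_after)
{
	if (mock_copy_to_user(ubuf, kbuf, n, fail_after))
		return -EFAULT;
	return (int)n;  /* bytes consumed on success */
}
```

The pre-fix code returned the remainder directly, which userspace would have misread as a (positive) short-read length.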
drivers/gpu/drm/msm/msm_rd.c  (+53 -18)

  * This bypasses drm_debugfs_create_files() mainly because we need to use
  * our own fops for a bit more control.  In particular, we don't want to
  * do anything if userspace doesn't have the debugfs file open.
+ *
+ * The module-param "rd_full", which defaults to false, enables snapshotting
+ * all (non-written) buffers in the submit, rather than just cmdstream bo's.
+ * This is useful to capture the contents of (for example) vbo's or textures,
+ * or shader programs (if not emitted inline in cmdstream).
  */

 #ifdef CONFIG_DEBUG_FS
···
 #include "msm_drv.h"
 #include "msm_gpu.h"
 #include "msm_gem.h"
+
+static bool rd_full = false;
+MODULE_PARM_DESC(rd_full, "If true, $debugfs/.../rd will snapshot all buffer contents");
+module_param_named(rd_full, rd_full, bool, 0600);

 enum rd_sect_type {
 	RD_NONE,
···
 		goto out;

 	n = min_t(int, sz, circ_count_to_end(&rd->fifo));
-	ret = copy_to_user(buf, fptr, n);
-	if (ret)
+	if (copy_to_user(buf, fptr, n)) {
+		ret = -EFAULT;
 		goto out;
+	}

 	fifo->tail = (fifo->tail + n) & (BUF_SZ - 1);
 	*ppos += n;
···
 	kfree(rd);
 }

+static void snapshot_buf(struct msm_rd_state *rd,
+		struct msm_gem_submit *submit, int idx,
+		uint32_t iova, uint32_t size)
+{
+	struct msm_gem_object *obj = submit->bos[idx].obj;
+	const char *buf;
+
+	buf = msm_gem_get_vaddr_locked(&obj->base);
+	if (IS_ERR(buf))
+		return;
+
+	if (iova) {
+		buf += iova - submit->bos[idx].iova;
+	} else {
+		iova = submit->bos[idx].iova;
+		size = obj->base.size;
+	}
+
+	rd_write_section(rd, RD_GPUADDR,
+			(uint32_t[2]){ iova, size }, 8);
+	rd_write_section(rd, RD_BUFFER_CONTENTS, buf, size);
+
+	msm_gem_put_vaddr_locked(&obj->base);
+}
+
 /* called under struct_mutex */
 void msm_rd_dump_submit(struct msm_gem_submit *submit)
 {
···

 	rd_write_section(rd, RD_CMD, msg, ALIGN(n, 4));

-	/* could be nice to have an option (module-param?) to snapshot
-	 * all the bo's associated with the submit.  Handy to see vtx
-	 * buffers, etc.  For now just the cmdstream bo's is enough.
-	 */
+	if (rd_full) {
+		for (i = 0; i < submit->nr_bos; i++) {
+			/* buffers that are written to probably don't start out
+			 * with anything interesting:
+			 */
+			if (submit->bos[i].flags & MSM_SUBMIT_BO_WRITE)
+				continue;
+
+			snapshot_buf(rd, submit, i, 0, 0);
+		}
+	}

 	for (i = 0; i < submit->nr_cmds; i++) {
-		uint32_t idx = submit->cmd[i].idx;
 		uint32_t iova = submit->cmd[i].iova;
 		uint32_t szd  = submit->cmd[i].size; /* in dwords */
-		struct msm_gem_object *obj = submit->bos[idx].obj;
-		const char *buf = msm_gem_vaddr_locked(&obj->base);

-		if (IS_ERR(buf))
-			continue;
-
-		buf += iova - submit->bos[idx].iova;
-
-		rd_write_section(rd, RD_GPUADDR,
-				(uint32_t[2]){ iova, szd * 4 }, 8);
-		rd_write_section(rd, RD_BUFFER_CONTENTS,
-				buf, szd * 4);
+		/* snapshot cmdstream bo's (if we haven't already): */
+		if (!rd_full) {
+			snapshot_buf(rd, submit, submit->cmd[i].idx,
+					submit->cmd[i].iova, szd * 4);
+		}

 		switch (submit->cmd[i].type) {
 		case MSM_SUBMIT_CMD_IB_TARGET_BUF:
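Since rd_full is declared with mode 0600, it can be flipped at runtime via sysfs as well as at module load. Assuming the usual module-param and debugfs layout (the paths below are illustrative and may differ per build and DRM minor number), a capture session might look like:

```shell
# Enable snapshotting of all non-written bo's (vbo's, textures, shaders),
# not just cmdstream bo's.  Path assumes msm built as a module named "msm".
echo Y > /sys/module/msm/parameters/rd_full

# Read the rd file while submitting GPU work; nothing is logged unless
# this file is held open.  "dri/0" is the assumed DRM minor.
cat /sys/kernel/debug/dri/0/rd > capture.rd
```

The resulting capture can then be fed to the freedreno cmdstream decoding tools.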
drivers/gpu/drm/msm/msm_ringbuffer.c  (+4 -2)

 		goto fail;
 	}

-	ring->start = msm_gem_vaddr_locked(ring->bo);
+	ring->start = msm_gem_get_vaddr_locked(ring->bo);
 	if (IS_ERR(ring->start)) {
 		ret = PTR_ERR(ring->start);
 		goto fail;
···

 void msm_ringbuffer_destroy(struct msm_ringbuffer *ring)
 {
-	if (ring->bo)
+	if (ring->bo) {
+		msm_gem_put_vaddr(ring->bo);
 		drm_gem_object_unreference_unlocked(ring->bo);
+	}
 	kfree(ring);
 }
include/uapi/drm/msm_drm.h  (+24 -1)

 	struct drm_msm_timespec timeout;   /* in */
 };

+/* madvise provides a way to tell the kernel in case a buffers contents
+ * can be discarded under memory pressure, which is useful for userspace
+ * bo cache where we want to optimistically hold on to buffer allocate
+ * and potential mmap, but allow the pages to be discarded under memory
+ * pressure.
+ *
+ * Typical usage would involve madvise(DONTNEED) when buffer enters BO
+ * cache, and madvise(WILLNEED) if trying to recycle buffer from BO cache.
+ * In the WILLNEED case, 'retained' indicates to userspace whether the
+ * backing pages still exist.
+ */
+#define MSM_MADV_WILLNEED 0       /* backing pages are needed, status returned in 'retained' */
+#define MSM_MADV_DONTNEED 1       /* backing pages not needed */
+#define __MSM_MADV_PURGED 2       /* internal state */
+
+struct drm_msm_gem_madvise {
+	__u32 handle;         /* in, GEM handle */
+	__u32 madv;           /* in, MSM_MADV_x */
+	__u32 retained;       /* out, whether backing store still exists */
+};
+
 #define DRM_MSM_GET_PARAM              0x00
 /* placeholder:
 #define DRM_MSM_SET_PARAM              0x01
···
 #define DRM_MSM_GEM_CPU_FINI           0x05
 #define DRM_MSM_GEM_SUBMIT             0x06
 #define DRM_MSM_WAIT_FENCE             0x07
-#define DRM_MSM_NUM_IOCTLS             0x08
+#define DRM_MSM_GEM_MADVISE            0x08
+#define DRM_MSM_NUM_IOCTLS             0x09

 #define DRM_IOCTL_MSM_GET_PARAM        DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GET_PARAM, struct drm_msm_param)
 #define DRM_IOCTL_MSM_GEM_NEW          DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GEM_NEW, struct drm_msm_gem_new)
···
 #define DRM_IOCTL_MSM_GEM_CPU_FINI     DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_GEM_CPU_FINI, struct drm_msm_gem_cpu_fini)
 #define DRM_IOCTL_MSM_GEM_SUBMIT       DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GEM_SUBMIT, struct drm_msm_gem_submit)
 #define DRM_IOCTL_MSM_WAIT_FENCE       DRM_IOW (DRM_COMMAND_BASE + DRM_MSM_WAIT_FENCE, struct drm_msm_wait_fence)
+#define DRM_IOCTL_MSM_GEM_MADVISE      DRM_IOWR(DRM_COMMAND_BASE + DRM_MSM_GEM_MADVISE, struct drm_msm_gem_madvise)

 #if defined(__cplusplus)
 }
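The new madvise ioctl is meant to drive a userspace BO cache: DONTNEED when a buffer enters the cache, WILLNEED when recycling it, with 'retained' telling the caller whether the pages survived. A hedged C sketch of the WILLNEED recycle step — the actual ioctl call is replaced by a caller-supplied function so no device is needed, and bo_cache_recycle plus the mocks are hypothetical, not libdrm API:

```c
#include <assert.h>
#include <stdint.h>

/* Local copy of the new uapi struct and madv values from this patch */
struct drm_msm_gem_madvise {
	uint32_t handle;    /* in, GEM handle */
	uint32_t madv;      /* in, MSM_MADV_x */
	uint32_t retained;  /* out, whether backing store still exists */
};
#define MSM_MADV_WILLNEED 0
#define MSM_MADV_DONTNEED 1

/* Stand-in for the real DRM_IOCTL_MSM_GEM_MADVISE call on a device fd */
typedef int (*madvise_fn)(struct drm_msm_gem_madvise *req);

/* Recycle a cached bo: returns 1 if contents were retained (reuse as-is),
 * 0 if the pages were purged (caller must repopulate), -1 on error. */
static int bo_cache_recycle(uint32_t handle, madvise_fn do_ioctl)
{
	struct drm_msm_gem_madvise req = {
		.handle = handle,
		.madv = MSM_MADV_WILLNEED,
	};

	if (do_ioctl(&req))
		return -1;
	return (int)req.retained;
}

/* Mock ioctls modelling the kernel's possible answers */
static int mock_retained(struct drm_msm_gem_madvise *req)
{ req->retained = 1; return 0; }
static int mock_purged(struct drm_msm_gem_madvise *req)
{ req->retained = 0; return 0; }
static int mock_fail(struct drm_msm_gem_madvise *req)
{ (void)req; return -1; }
```

In real userspace the do_ioctl step would be something like drmCommandWriteRead() on the device fd; the decision logic on 'retained' is the part the uapi comment above prescribes.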