Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm-drivers-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM/SoC driver updates from Arnd Bergmann:
"These are updates to SoC specific drivers that did not have another
subsystem maintainer tree to go through for some reason:

- Some bus and memory drivers for the MIPS P5600 based Baikal-T1 SoC
that is getting added through the MIPS tree.

- There are new soc_device identification drivers for TI K3 and Qualcomm
  MSM8939.

- New reset controller drivers for NXP i.MX8MP, Renesas RZ/G1H, and
Hisilicon hi6220

- The SCMI firmware interface can now use ARM SMC/HVC as a
  transport.

- Mediatek platforms now use a new driver for their "MMSYS" hardware
  block that controls clocks and some other aspects on behalf of the
  media and gpu drivers.

- Some Tegra processors have improved power management support,
including getting woken up by the PMIC and cluster power down
during idle.

- A new v4l staging driver for Tegra is added.

- Cleanups and minor bugfixes for TI, NXP, Hisilicon, Mediatek, and
Tegra"

* tag 'arm-drivers-5.8' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (155 commits)
clk: sprd: fix compile-testing
bus: bt1-axi: Build the driver into the kernel
bus: bt1-apb: Build the driver into the kernel
bus: bt1-axi: Use sysfs_streq instead of strncmp
bus: bt1-axi: Optimize the return points in the driver
bus: bt1-apb: Use sysfs_streq instead of strncmp
bus: bt1-apb: Use PTR_ERR_OR_ZERO to return from request-regs method
bus: bt1-apb: Fix show/store callback identations
bus: bt1-apb: Include linux/io.h
dt-bindings: memory: Add Baikal-T1 L2-cache Control Block binding
memory: Add Baikal-T1 L2-cache Control Block driver
bus: Add Baikal-T1 APB-bus driver
bus: Add Baikal-T1 AXI-bus driver
dt-bindings: bus: Add Baikal-T1 APB-bus binding
dt-bindings: bus: Add Baikal-T1 AXI-bus binding
staging: tegra-video: fix V4L2 dependency
tee: fix crypto select
drivers: soc: ti: knav_qmss_queue: Make knav_gp_range_ops static
soc: ti: add k3 platforms chipid module driver
dt-bindings: soc: ti: add binding for k3 platforms chipid module
...

+8003 -1238
+2 -1
Documentation/devicetree/bindings/arm/arm,scmi.txt
···
 
 The scmi node with the following properties shall be under the /firmware/ node.
 
-- compatible : shall be "arm,scmi"
+- compatible : shall be "arm,scmi" or "arm,scmi-smc" for smc/hvc transports
 - mboxes: List of phandle and mailbox channel specifiers. It should contain
 	  exactly one or two mailboxes, one for transmitting messages("tx")
 	  and another optional for receiving the notifications("rx") if
···
 	  protocol identifier for a given sub-node.
 - #size-cells : should be '0' as 'reg' property doesn't have any size
 	  associated with it.
+- arm,smc-id : SMC id required when using smc or hvc transports
 
 Optional properties:
+4 -3
Documentation/devicetree/bindings/arm/mediatek/mediatek,mmsys.txt
···
 Mediatek mmsys controller
 ============================
 
-The Mediatek mmsys controller provides various clocks to the system.
+The Mediatek mmsys system controller provides clock control, routing control,
+and miscellaneous control in mmsys partition.
 
 Required Properties:
 
···
   - "mediatek,mt8183-mmsys", "syscon"
 - #clock-cells: Must be 1
 
-The mmsys controller uses the common clk binding from
+For the clock control, the mmsys controller uses the common clk binding from
 Documentation/devicetree/bindings/clock/clock-bindings.txt
 The available clocks are defined in dt-bindings/clock/mt*-clk.h.
 
 Example:
 
-mmsys: clock-controller@14000000 {
+mmsys: syscon@14000000 {
 	compatible = "mediatek,mt8173-mmsys", "syscon";
 	reg = <0 0x14000000 0 0x1000>;
 	#clock-cells = <1>;
+90
Documentation/devicetree/bindings/bus/baikal,bt1-apb.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/bus/baikal,bt1-apb.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Baikal-T1 APB-bus
+
+maintainers:
+  - Serge Semin <fancer.lancer@gmail.com>
+
+description: |
+  Baikal-T1 CPU or DMAC MMIO requests are handled by the AMBA 3 AXI Interconnect
+  which routes them to the AXI-APB bridge. This interface is a single master
+  multiple slaves bus in turn serializing IO accesses and routing them to the
+  addressed APB slave devices. In case of any APB protocol collisions, slave
+  device not responding on timeout an IRQ is raised with an erroneous address
+  reported to the APB terminator (APB Errors Handler Block).
+
+allOf:
+  - $ref: /schemas/simple-bus.yaml#
+
+properties:
+  compatible:
+    contains:
+      const: baikal,bt1-apb
+
+  reg:
+    items:
+      - description: APB EHB MMIO registers
+      - description: APB MMIO region with no any device mapped
+
+  reg-names:
+    items:
+      - const: ehb
+      - const: nodev
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: APB reference clock
+
+  clock-names:
+    items:
+      - const: pclk
+
+  resets:
+    items:
+      - description: APB domain reset line
+
+  reset-names:
+    items:
+      - const: prst
+
+unevaluatedProperties: false
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - interrupts
+  - clocks
+  - clock-names
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/mips-gic.h>
+
+    bus@1f059000 {
+      compatible = "baikal,bt1-apb", "simple-bus";
+      reg = <0 0x1f059000 0 0x1000>,
+            <0 0x1d000000 0 0x2040000>;
+      reg-names = "ehb", "nodev";
+      #address-cells = <1>;
+      #size-cells = <1>;
+
+      ranges;
+
+      interrupts = <GIC_SHARED 16 IRQ_TYPE_LEVEL_HIGH>;
+
+      clocks = <&ccu_sys 1>;
+      clock-names = "pclk";
+
+      resets = <&ccu_sys 1>;
+      reset-names = "prst";
+    };
+...
+107
Documentation/devicetree/bindings/bus/baikal,bt1-axi.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/bus/baikal,bt1-axi.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Baikal-T1 AXI-bus
+
+maintainers:
+  - Serge Semin <fancer.lancer@gmail.com>
+
+description: |
+  AXI3-bus is the main communication bus of Baikal-T1 SoC connecting all
+  high-speed peripheral IP-cores with RAM controller and with MIPS P5600
+  cores. Traffic arbitration is done by means of DW AXI Interconnect (so
+  called AXI Main Interconnect) routing IO requests from one block to
+  another: from CPU to SoC peripherals and between some SoC peripherals
+  (mostly between peripheral devices and RAM, but also between DMA and
+  some peripherals). In case of any protocol error, device not responding
+  an IRQ is raised and a faulty situation is reported to the AXI EHB
+  (Errors Handler Block) embedded on top of the DW AXI Interconnect and
+  accessible by means of the Baikal-T1 System Controller.
+
+allOf:
+  - $ref: /schemas/simple-bus.yaml#
+
+properties:
+  compatible:
+    contains:
+      const: baikal,bt1-axi
+
+  reg:
+    minItems: 1
+    items:
+      - description: Synopsys DesignWare AXI Interconnect QoS registers
+      - description: AXI EHB MMIO system controller registers
+
+  reg-names:
+    minItems: 1
+    items:
+      - const: qos
+      - const: ehb
+
+  '#interconnect-cells':
+    const: 1
+
+  syscon:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: Phandle to the Baikal-T1 System Controller DT node
+
+  interrupts:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: Main Interconnect uplink reference clock
+
+  clock-names:
+    items:
+      - const: aclk
+
+  resets:
+    items:
+      - description: Main Interconnect reset line
+
+  reset-names:
+    items:
+      - const: arst
+
+unevaluatedProperties: false
+
+required:
+  - compatible
+  - reg
+  - reg-names
+  - syscon
+  - interrupts
+  - clocks
+  - clock-names
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/mips-gic.h>
+
+    bus@1f05a000 {
+      compatible = "baikal,bt1-axi", "simple-bus";
+      reg = <0 0x1f05a000 0 0x1000>,
+            <0 0x1f04d110 0 0x8>;
+      reg-names = "qos", "ehb";
+      #address-cells = <1>;
+      #size-cells = <1>;
+      #interconnect-cells = <1>;
+
+      syscon = <&syscon>;
+
+      ranges;
+
+      interrupts = <GIC_SHARED 127 IRQ_TYPE_LEVEL_HIGH>;
+
+      clocks = <&ccu_axi 0>;
+      clock-names = "aclk";
+
+      resets = <&ccu_axi 0>;
+      reset-names = "arst";
+    };
+...
+56
Documentation/devicetree/bindings/cpufreq/nvidia,tegra20-cpufreq.txt
···
+Binding for NVIDIA Tegra20 CPUFreq
+==================================
+
+Required properties:
+- clocks: Must contain an entry for the CPU clock.
+  See ../clocks/clock-bindings.txt for details.
+- operating-points-v2: See ../bindings/opp/opp.txt for details.
+- #cooling-cells: Should be 2. See ../thermal/thermal.txt for details.
+
+For each opp entry in 'operating-points-v2' table:
+- opp-supported-hw: Two bitfields indicating:
+	On Tegra20:
+	1. CPU process ID mask
+	2. SoC speedo ID mask
+
+	On Tegra30:
+	1. CPU process ID mask
+	2. CPU speedo ID mask
+
+	A bitwise AND is performed against these values and if any bit
+	matches, the OPP gets enabled.
+
+- opp-microvolt: CPU voltage triplet.
+
+Optional properties:
+- cpu-supply: Phandle to the CPU power supply.
+
+Example:
+	regulators {
+		cpu_reg: regulator0 {
+			regulator-name = "vdd_cpu";
+		};
+	};
+
+	cpu0_opp_table: opp_table0 {
+		compatible = "operating-points-v2";
+
+		opp@456000000 {
+			clock-latency-ns = <125000>;
+			opp-microvolt = <825000 825000 1125000>;
+			opp-supported-hw = <0x03 0x0001>;
+			opp-hz = /bits/ 64 <456000000>;
+		};
+
+		...
+	};
+
+	cpus {
+		cpu@0 {
+			compatible = "arm,cortex-a9";
+			clocks = <&tegra_car TEGRA20_CLK_CCLK>;
+			operating-points-v2 = <&cpu0_opp_table>;
+			cpu-supply = <&cpu_reg>;
+			#cooling-cells = <2>;
+		};
+	};
+60 -13
Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
···
 
 Required properties:
 - compatible: "nvidia,tegra<chip>-vi"
-- reg: Physical base address and length of the controller's registers.
+- reg: Physical base address and length of the controller registers.
 - interrupts: The interrupt outputs from the controller.
 - clocks: Must contain one entry, for the module clock.
   See ../clocks/clock-bindings.txt for details.
-- resets: Must contain an entry for each entry in reset-names.
-  See ../reset/reset.txt for details.
-- reset-names: Must include the following entries:
-  - vi
+- Tegra20/Tegra30/Tegra114/Tegra124:
+  - resets: Must contain an entry for each entry in reset-names.
+    See ../reset/reset.txt for details.
+  - reset-names: Must include the following entries:
+    - vi
+- Tegra210:
+  - power-domains: Must include venc powergate node as vi is in VE partition.
+  - Tegra210 has CSI part of VI sharing same host interface and register space.
+    So, VI device node should have CSI child node.
+
+    - csi: mipi csi interface to vi
+
+      Required properties:
+      - compatible: "nvidia,tegra210-csi"
+      - reg: Physical base address offset to parent and length of the controller
+        registers.
+      - clocks: Must contain entries csi, cilab, cilcd, cile, csi_tpg clocks.
+        See ../clocks/clock-bindings.txt for details.
+      - power-domains: Must include sor powergate node as csicil is in
+        SOR partition.
 
 - epp: encoder pre-processor
 
···
 		reset-names = "mpe";
 	};
 
-	vi {
-		compatible = "nvidia,tegra20-vi";
-		reg = <0x54080000 0x00040000>;
-		interrupts = <0 69 0x04>;
-		clocks = <&tegra_car TEGRA20_CLK_VI>;
-		resets = <&tegra_car 100>;
-		reset-names = "vi";
+	vi@54080000 {
+		compatible = "nvidia,tegra210-vi";
+		reg = <0x0 0x54080000 0x0 0x700>;
+		interrupts = <GIC_SPI 69 IRQ_TYPE_LEVEL_HIGH>;
+		assigned-clocks = <&tegra_car TEGRA210_CLK_VI>;
+		assigned-clock-parents = <&tegra_car TEGRA210_CLK_PLL_C4_OUT0>;
+
+		clocks = <&tegra_car TEGRA210_CLK_VI>;
+		power-domains = <&pd_venc>;
+
+		#address-cells = <1>;
+		#size-cells = <1>;
+
+		ranges = <0x0 0x0 0x54080000 0x2000>;
+
+		csi@838 {
+			compatible = "nvidia,tegra210-csi";
+			reg = <0x838 0x1300>;
+			assigned-clocks = <&tegra_car TEGRA210_CLK_CILAB>,
+					  <&tegra_car TEGRA210_CLK_CILCD>,
+					  <&tegra_car TEGRA210_CLK_CILE>,
+					  <&tegra_car TEGRA210_CLK_CSI_TPG>;
+			assigned-clock-parents = <&tegra_car TEGRA210_CLK_PLL_P>,
+						 <&tegra_car TEGRA210_CLK_PLL_P>,
+						 <&tegra_car TEGRA210_CLK_PLL_P>;
+			assigned-clock-rates = <102000000>,
+					       <102000000>,
+					       <102000000>,
+					       <972000000>;
+
+			clocks = <&tegra_car TEGRA210_CLK_CSI>,
+				 <&tegra_car TEGRA210_CLK_CILAB>,
+				 <&tegra_car TEGRA210_CLK_CILCD>,
+				 <&tegra_car TEGRA210_CLK_CILE>,
+				 <&tegra_car TEGRA210_CLK_CSI_TPG>;
+			clock-names = "csi", "cilab", "cilcd", "cile", "csi_tpg";
+			power-domains = <&pd_sor>;
+		};
 	};
 
 	epp {
+6
Documentation/devicetree/bindings/i2c/nvidia,tegra20-i2c.txt
···
   Due to above changes, Tegra114 I2C driver makes incompatible with
   previous hardware driver. Hence, tegra114 I2C controller is compatible
   with "nvidia,tegra114-i2c".
+  nvidia,tegra210-i2c-vi: Tegra210 has one I2C controller that is part of the
+  host1x domain and typically used for camera use-cases. This VI I2C
+  controller is mostly compatible with the programming model of the
+  regular I2C controllers with a few exceptions. The I2C registers start
+  at an offset of 0xc00 (instead of 0), registers are 16 bytes apart
+  (rather than 4) and the controller does not support slave mode.
 - reg: Should contain I2C controller registers physical address and length.
 - interrupts: Should contain I2C controller interrupts.
 - address-cells: Address cells for I2C device address.
+63
Documentation/devicetree/bindings/memory-controllers/baikal,bt1-l2-ctl.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+# Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memory-controllers/baikal,bt1-l2-ctl.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Baikal-T1 L2-cache Control Block
+
+maintainers:
+  - Serge Semin <fancer.lancer@gmail.com>
+
+description: |
+  By means of the System Controller Baikal-T1 SoC exposes a few settings to
+  tune the MIPS P5600 CM2 L2 cache performance up. In particular it's possible
+  to change the Tag, Data and Way-select RAM access latencies. Baikal-T1
+  L2-cache controller block is responsible for the tuning. Its DT node is
+  supposed to be a child of the system controller.
+
+properties:
+  compatible:
+    const: baikal,bt1-l2-ctl
+
+  reg:
+    maxItems: 1
+
+  baikal,l2-ws-latency:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: Cycles of latency for Way-select RAM accesses
+    default: 0
+    minimum: 0
+    maximum: 3
+
+  baikal,l2-tag-latency:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: Cycles of latency for Tag RAM accesses
+    default: 0
+    minimum: 0
+    maximum: 3
+
+  baikal,l2-data-latency:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: Cycles of latency for Data RAM accesses
+    default: 1
+    minimum: 0
+    maximum: 3
+
+additionalProperties: false
+
+required:
+  - compatible
+
+examples:
+  - |
+    l2@1f04d028 {
+      compatible = "baikal,bt1-l2-ctl";
+      reg = <0x1f04d028 0x004>;
+
+      baikal,l2-ws-latency = <1>;
+      baikal,l2-tag-latency = <1>;
+      baikal,l2-data-latency = <2>;
+    };
+...
+82
Documentation/devicetree/bindings/memory-controllers/nvidia,tegra210-emc.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/memory-controllers/nvidia,tegra210-emc.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: NVIDIA Tegra210 SoC External Memory Controller
+
+maintainers:
+  - Thierry Reding <thierry.reding@gmail.com>
+  - Jon Hunter <jonathanh@nvidia.com>
+
+description: |
+  The EMC interfaces with the off-chip SDRAM to service the request stream
+  sent from the memory controller.
+
+properties:
+  compatible:
+    const: nvidia,tegra210-emc
+
+  reg:
+    maxItems: 3
+
+  clocks:
+    items:
+      - description: external memory clock
+
+  clock-names:
+    items:
+      - const: emc
+
+  interrupts:
+    items:
+      - description: EMC general interrupt
+
+  memory-region:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      phandle to a reserved memory region describing the table of EMC
+      frequencies trained by the firmware
+
+  nvidia,memory-controller:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      phandle of the memory controller node
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - nvidia,memory-controller
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/tegra210-car.h>
+    #include <dt-bindings/interrupt-controller/arm-gic.h>
+
+    reserved-memory {
+      #address-cells = <1>;
+      #size-cells = <1>;
+      ranges;
+
+      emc_table: emc-table@83400000 {
+        compatible = "nvidia,tegra210-emc-table";
+        reg = <0x83400000 0x10000>;
+      };
+    };
+
+    external-memory-controller@7001b000 {
+      compatible = "nvidia,tegra210-emc";
+      reg = <0x7001b000 0x1000>,
+            <0x7001e000 0x1000>,
+            <0x7001f000 0x1000>;
+      clocks = <&tegra_car TEGRA210_CLK_EMC>;
+      clock-names = "emc";
+      interrupts = <GIC_SPI 78 IRQ_TYPE_LEVEL_HIGH>;
+      memory-region = <&emc_table>;
+      nvidia,memory-controller = <&mc>;
+    };
+87 -15
Documentation/devicetree/bindings/power/amlogic,meson-ee-pwrc.yaml
···
 properties:
   compatible:
     enum:
+      - amlogic,meson8-pwrc
+      - amlogic,meson8b-pwrc
+      - amlogic,meson8m2-pwrc
+      - amlogic,meson-gxbb-pwrc
       - amlogic,meson-g12a-pwrc
       - amlogic,meson-sm1-pwrc
 
   clocks:
-    minItems: 2
+    minItems: 1
+    maxItems: 2
 
   clock-names:
+    minItems: 1
+    maxItems: 2
     items:
       - const: vpu
       - const: vapb
 
   resets:
     minItems: 11
+    maxItems: 12
 
   reset-names:
-    items:
-      - const: viu
-      - const: venc
-      - const: vcbus
-      - const: bt656
-      - const: rdma
-      - const: venci
-      - const: vencp
-      - const: vdac
-      - const: vdi6
-      - const: vencl
-      - const: vid_lock
+    minItems: 11
+    maxItems: 12
 
   "#power-domain-cells":
     const: 1
···
     allOf:
       - $ref: /schemas/types.yaml#/definitions/phandle
 
+allOf:
+  - if:
+      properties:
+        compatible:
+          enum:
+            - amlogic,meson8b-pwrc
+            - amlogic,meson8m2-pwrc
+    then:
+      properties:
+        reset-names:
+          items:
+            - const: dblk
+            - const: pic_dc
+            - const: hdmi_apb
+            - const: hdmi_system
+            - const: venci
+            - const: vencp
+            - const: vdac
+            - const: vencl
+            - const: viu
+            - const: venc
+            - const: rdma
+      required:
+        - resets
+        - reset-names
+
+  - if:
+      properties:
+        compatible:
+          enum:
+            - amlogic,meson-gxbb-pwrc
+    then:
+      properties:
+        reset-names:
+          items:
+            - const: viu
+            - const: venc
+            - const: vcbus
+            - const: bt656
+            - const: dvin
+            - const: rdma
+            - const: venci
+            - const: vencp
+            - const: vdac
+            - const: vdi6
+            - const: vencl
+            - const: vid_lock
+      required:
+        - resets
+        - reset-names
+
+  - if:
+      properties:
+        compatible:
+          enum:
+            - amlogic,meson-g12a-pwrc
+            - amlogic,meson-sm1-pwrc
+    then:
+      properties:
+        reset-names:
+          items:
+            - const: viu
+            - const: venc
+            - const: vcbus
+            - const: bt656
+            - const: rdma
+            - const: venci
+            - const: vencp
+            - const: vdac
+            - const: vdi6
+            - const: vencl
+            - const: vid_lock
+      required:
+        - resets
+        - reset-names
+
 required:
   - compatible
   - clocks
   - clock-names
-  - resets
-  - reset-names
   - "#power-domain-cells"
   - amlogic,ao-sysctrl
+1
Documentation/devicetree/bindings/power/qcom,rpmpd.yaml
···
       - qcom,sc7180-rpmhpd
       - qcom,sdm845-rpmhpd
       - qcom,sm8150-rpmhpd
+      - qcom,sm8250-rpmhpd
 
   '#power-domain-cells':
     const: 1
+5 -1
Documentation/devicetree/bindings/reset/fsl,imx7-src.txt
···
 - For i.MX7 SoCs should be "fsl,imx7d-src", "syscon"
 - For i.MX8MQ SoCs should be "fsl,imx8mq-src", "syscon"
 - For i.MX8MM SoCs should be "fsl,imx8mm-src", "fsl,imx8mq-src", "syscon"
+- For i.MX8MN SoCs should be "fsl,imx8mn-src", "fsl,imx8mq-src", "syscon"
+- For i.MX8MP SoCs should be "fsl,imx8mp-src", "syscon"
 - reg: should be register base and length as documented in the
   datasheet
 - interrupts: Should contain SRC interrupt
···
 For list of all valid reset indices see
 <dt-bindings/reset/imx7-reset.h> for i.MX7,
 <dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ and
-<dt-bindings/reset/imx8mq-reset.h> for i.MX8MM
+<dt-bindings/reset/imx8mq-reset.h> for i.MX8MM and
+<dt-bindings/reset/imx8mq-reset.h> for i.MX8MN and
+<dt-bindings/reset/imx8mp-reset.h> for i.MX8MP
+1
Documentation/devicetree/bindings/soc/qcom/qcom,aoss-qmp.txt
···
 		    "qcom,sc7180-aoss-qmp"
 		    "qcom,sdm845-aoss-qmp"
 		    "qcom,sm8150-aoss-qmp"
+		    "qcom,sm8250-aoss-qmp"
 
 - reg:
 	Usage: required
+10 -10
Documentation/devicetree/bindings/soc/qcom/qcom,apr.txt
···
 	compatible = "qcom,apr-v2";
 	qcom,apr-domain = <APR_DOMAIN_ADSP>;
 
-	q6core@3 {
+	apr-service@3 {
 		compatible = "qcom,q6core";
 		reg = <APR_SVC_ADSP_CORE>;
 	};
 
-	q6afe@4 {
+	apr-service@4 {
 		compatible = "qcom,q6afe";
 		reg = <APR_SVC_AFE>;
 
 		dais {
 			#sound-dai-cells = <1>;
-			hdmi@1 {
-				reg = <1>;
+			dai@1 {
+				reg = <HDMI_RX>;
 			};
 		};
 	};
 
-	q6asm@7 {
+	apr-service@7 {
 		compatible = "qcom,q6asm";
 		reg = <APR_SVC_ASM>;
 		...
 	};
 
-	q6adm@8 {
+	apr-service@8 {
 		compatible = "qcom,q6adm";
 		reg = <APR_SVC_ADM>;
 		...
···
 	qcom,glink-channels = "apr_audio_svc";
 	qcom,apr-domain = <APR_DOMAIN_ADSP>;
 
-	q6core {
+	apr-service@3 {
 		compatible = "qcom,q6core";
 		reg = <APR_SVC_ADSP_CORE>;
 	};
 
-	q6afe: q6afe {
+	q6afe: apr-service@4 {
 		compatible = "qcom,q6afe";
 		reg = <APR_SVC_AFE>;
 		qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
 		...
 	};
 
-	q6asm: q6asm {
+	q6asm: apr-service@7 {
 		compatible = "qcom,q6asm";
 		reg = <APR_SVC_ASM>;
 		qcom,protection-domain = "tms/servreg", "msm/slpi/sensor_pd";
 		...
 	};
 
-	q6adm: q6adm {
+	q6adm: apr-service@8 {
 		compatible = "qcom,q6adm";
 		reg = <APR_SVC_ADM>;
 		qcom,protection-domain = "avs/audio", "msm/adsp/audio_pd";
+40
Documentation/devicetree/bindings/soc/ti/k3-socinfo.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/soc/ti/k3-socinfo.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Texas Instruments K3 Multicore SoC platforms chipid module
+
+maintainers:
+  - Tero Kristo <t-kristo@ti.com>
+  - Nishanth Menon <nm@ti.com>
+
+description: |
+  Texas Instruments (ARM64) K3 Multicore SoC platforms chipid module is
+  represented by CTRLMMR_xxx_JTAGID register which contains information about
+  SoC id and revision.
+
+properties:
+  $nodename:
+    pattern: "^chipid@[0-9a-f]+$"
+
+  compatible:
+    items:
+      - const: ti,am654-chipid
+
+  reg:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    chipid@43000014 {
+      compatible = "ti,am654-chipid";
+      reg = <0x43000014 0x4>;
+    };
+10
MAINTAINERS
···
 S:	Supported
 F:	drivers/spi/spi-tegra*
 
+TEGRA VIDEO DRIVER
+M:	Thierry Reding <thierry.reding@gmail.com>
+M:	Jonathan Hunter <jonathanh@nvidia.com>
+M:	Sowjanya Komatineni <skomatineni@nvidia.com>
+L:	linux-media@vger.kernel.org
+L:	linux-tegra@vger.kernel.org
+S:	Maintained
+F:	Documentation/devicetree/bindings/display/tegra/nvidia,tegra20-host1x.txt
+F:	drivers/staging/media/tegra-video/
+
 TEGRA XUSB PADCTL DRIVER
 M:	JC Kuo <jckuo@nvidia.com>
 S:	Supported
+1 -1
arch/arm64/Kconfig.platforms
···
 	  This enables support for the NVIDIA Tegra SoC family.
 
 config ARCH_SPRD
-	tristate "Spreadtrum SoC platform"
+	bool "Spreadtrum SoC platform"
 	help
 	  Support for Spreadtrum ARM based SoCs
+30
drivers/bus/Kconfig
···
 	  arbiter. This driver provides timeout and target abort error handling
 	  and internal bus master decoding.
 
+config BT1_APB
+	bool "Baikal-T1 APB-bus driver"
+	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
+	select REGMAP_MMIO
+	help
+	  Baikal-T1 AXI-APB bridge is used to access the SoC subsystem CSRs.
+	  IO requests are routed to this bus by means of the DW AMBA 3 AXI
+	  Interconnect. In case of any APB protocol collisions, slave device
+	  not responding on timeout an IRQ is raised with an erroneous address
+	  reported to the APB terminator (APB Errors Handler Block). This
+	  driver provides the interrupt handler to detect the erroneous
+	  address, prints an error message about the address fault, updates an
+	  errors counter. The counter and the APB-bus operations timeout can be
+	  accessed via corresponding sysfs nodes.
+
+config BT1_AXI
+	bool "Baikal-T1 AXI-bus driver"
+	depends on MIPS_BAIKAL_T1 || COMPILE_TEST
+	select MFD_SYSCON
+	help
+	  AXI3-bus is the main communication bus connecting all high-speed
+	  peripheral IP-cores with RAM controller and with MIPS P5600 cores on
+	  Baikal-T1 SoC. Traffic arbitration is done by means of DW AMBA 3 AXI
+	  Interconnect (so called AXI Main Interconnect) routing IO requests
+	  from one SoC block to another. This driver provides a way to detect
+	  any bus protocol errors and device not responding situations by
+	  means of an embedded on top of the interconnect errors handler
+	  block (EHB). AXI Interconnect QoS arbitration tuning is currently
+	  unsupported.
+
 config MOXTET
 	tristate "CZ.NIC Turris Mox module configuration bus"
 	depends on SPI_MASTER && OF
+2
drivers/bus/Makefile
···
 # DPAA2 fsl-mc bus
 obj-$(CONFIG_FSL_MC_BUS)	+= fsl-mc/
 
+obj-$(CONFIG_BT1_APB)		+= bt1-apb.o
+obj-$(CONFIG_BT1_AXI)		+= bt1-axi.o
 obj-$(CONFIG_IMX_WEIM)		+= imx-weim.o
 obj-$(CONFIG_MIPS_CDMM)		+= mips_cdmm.o
 obj-$(CONFIG_MVEBU_MBUS)	+= mvebu-mbus.o
+421
drivers/bus/bt1-apb.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
+ *
+ * Authors:
+ *   Serge Semin <Sergey.Semin@baikalelectronics.ru>
+ *
+ * Baikal-T1 APB-bus driver
+ */
+
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/atomic.h>
+#include <linux/platform_device.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/nmi.h>
+#include <linux/of.h>
+#include <linux/regmap.h>
+#include <linux/clk.h>
+#include <linux/reset.h>
+#include <linux/time64.h>
+#include <linux/sysfs.h>
+
+#define APB_EHB_ISR			0x00
+#define APB_EHB_ISR_PENDING		BIT(0)
+#define APB_EHB_ISR_MASK		BIT(1)
+#define APB_EHB_ADDR			0x04
+#define APB_EHB_TIMEOUT			0x08
+
+#define APB_EHB_TIMEOUT_MIN		0x000003FFU
+#define APB_EHB_TIMEOUT_MAX		0xFFFFFFFFU
+
+/*
+ * struct bt1_apb - Baikal-T1 APB EHB private data
+ * @dev: Pointer to the device structure.
+ * @regs: APB EHB registers map.
+ * @res: No-device error injection memory region.
+ * @irq: Errors IRQ number.
+ * @rate: APB-bus reference clock rate.
+ * @pclk: APB-reference clock.
+ * @prst: APB domain reset line.
+ * @count: Number of errors detected.
+ */
+struct bt1_apb {
+	struct device *dev;
+
+	struct regmap *regs;
+	void __iomem *res;
+	int irq;
+
+	unsigned long rate;
+	struct clk *pclk;
+
+	struct reset_control *prst;
+
+	atomic_t count;
+};
+
+static const struct regmap_config bt1_apb_regmap_cfg = {
+	.reg_bits = 32,
+	.val_bits = 32,
+	.reg_stride = 4,
+	.max_register = APB_EHB_TIMEOUT,
+	.fast_io = true
+};
+
+static inline unsigned long bt1_apb_n_to_timeout_us(struct bt1_apb *apb, u32 n)
+{
+	u64 timeout = (u64)n * USEC_PER_SEC;
+
+	do_div(timeout, apb->rate);
+
+	return timeout;
+}
+
+static inline unsigned long bt1_apb_timeout_to_n_us(struct bt1_apb *apb,
+						    unsigned long timeout)
+{
+	u64 n = (u64)timeout * apb->rate;
+
+	do_div(n, USEC_PER_SEC);
+
+	return n;
+}
+
+static irqreturn_t bt1_apb_isr(int irq, void *data)
+{
+	struct bt1_apb *apb = data;
+	u32 addr = 0;
+
+	regmap_read(apb->regs, APB_EHB_ADDR, &addr);
+
+	dev_crit_ratelimited(apb->dev,
+		"APB-bus fault %d: Slave access timeout at 0x%08x\n",
+		atomic_inc_return(&apb->count),
+		addr);
+
+	/*
+	 * Print backtrace on each CPU. This might be pointless if the fault
+	 * has happened on the same CPU as the IRQ handler is executed or
+	 * the other core proceeded further execution despite the error.
+	 * But if it's not, by looking at the trace we would get straight to
+	 * the cause of the problem.
+	 */
+	trigger_all_cpu_backtrace();
+
+	regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_PENDING, 0);
+
+	return IRQ_HANDLED;
+}
+
+static void bt1_apb_clear_data(void *data)
+{
+	struct bt1_apb *apb = data;
+	struct platform_device *pdev = to_platform_device(apb->dev);
+
+	platform_set_drvdata(pdev, NULL);
+}
+
+static struct bt1_apb *bt1_apb_create_data(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct bt1_apb *apb;
+	int ret;
+
+	apb = devm_kzalloc(dev, sizeof(*apb), GFP_KERNEL);
+	if (!apb)
+		return ERR_PTR(-ENOMEM);
+
+	ret = devm_add_action(dev, bt1_apb_clear_data, apb);
+	if (ret) {
+		dev_err(dev, "Can't add APB EHB data clear action\n");
+		return ERR_PTR(ret);
+	}
+
+	apb->dev = dev;
+	atomic_set(&apb->count, 0);
+	platform_set_drvdata(pdev, apb);
+
+	return apb;
+}
+
+static int bt1_apb_request_regs(struct bt1_apb *apb)
+{
+	struct platform_device *pdev = to_platform_device(apb->dev);
+	void __iomem *regs;
+
+	regs = devm_platform_ioremap_resource_byname(pdev, "ehb");
+	if (IS_ERR(regs)) {
+		dev_err(apb->dev, "Couldn't map APB EHB registers\n");
+		return PTR_ERR(regs);
+	}
+
+	apb->regs = devm_regmap_init_mmio(apb->dev, regs, &bt1_apb_regmap_cfg);
+	if (IS_ERR(apb->regs)) {
+		dev_err(apb->dev, "Couldn't create APB EHB regmap\n");
+		return PTR_ERR(apb->regs);
+	}
+
+	apb->res = devm_platform_ioremap_resource_byname(pdev, "nodev");
+	if (IS_ERR(apb->res))
+		dev_err(apb->dev, "Couldn't map reserved region\n");
+
+	return PTR_ERR_OR_ZERO(apb->res);
+}
+
+static int bt1_apb_request_rst(struct bt1_apb *apb)
+{
+	int ret;
+
+	apb->prst = devm_reset_control_get_optional_exclusive(apb->dev, "prst");
+	if (IS_ERR(apb->prst)) {
+		dev_warn(apb->dev, "Couldn't get reset control line\n");
+		return PTR_ERR(apb->prst);
+	}
+
+	ret = reset_control_deassert(apb->prst);
+	if (ret)
+		dev_err(apb->dev, "Failed to deassert the reset line\n");
+
+	return ret;
+}
+
+static void bt1_apb_disable_clk(void *data)
+{
+	struct bt1_apb *apb = data;
+
+	clk_disable_unprepare(apb->pclk);
+}
+
+static int bt1_apb_request_clk(struct bt1_apb *apb)
+{
+	int ret;
+
+	apb->pclk = devm_clk_get(apb->dev, "pclk");
+	if (IS_ERR(apb->pclk)) {
+		dev_err(apb->dev, "Couldn't get APB clock descriptor\n");
+		return PTR_ERR(apb->pclk);
+	}
+
+	ret = clk_prepare_enable(apb->pclk);
+	if (ret) {
+		dev_err(apb->dev, "Couldn't enable the APB clock\n");
+		return ret;
+	}
+
+	ret = devm_add_action_or_reset(apb->dev, bt1_apb_disable_clk, apb);
+	if (ret) {
+		dev_err(apb->dev, "Can't add APB EHB clocks disable action\n");
+		return ret;
+	}
+
+	apb->rate = clk_get_rate(apb->pclk);
+	if (!apb->rate) {
+		dev_err(apb->dev, "Invalid clock rate\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static void bt1_apb_clear_irq(void *data)
+{
+	struct bt1_apb *apb = data;
+
+	regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_MASK, 0);
+}
+
+static int bt1_apb_request_irq(struct bt1_apb *apb)
+{
+	struct platform_device *pdev = to_platform_device(apb->dev);
+	int ret;
+
+	apb->irq = platform_get_irq(pdev, 0);
+	if (apb->irq < 0)
+		return apb->irq;
+
+	ret = devm_request_irq(apb->dev, apb->irq, bt1_apb_isr, IRQF_SHARED,
+			       "bt1-apb", apb);
+	if (ret) {
+		dev_err(apb->dev, "Couldn't request APB EHB IRQ\n");
+		return ret;
+	}
+
+	ret = devm_add_action(apb->dev, bt1_apb_clear_irq, apb);
+	if (ret) {
+		dev_err(apb->dev, "Can't add APB EHB IRQs clear action\n");
+		return ret;
+	}
+
+	/* Unmask IRQ and clear its pending flag. */
+	regmap_update_bits(apb->regs, APB_EHB_ISR,
+			   APB_EHB_ISR_PENDING | APB_EHB_ISR_MASK,
+			   APB_EHB_ISR_MASK);
+
+	return 0;
+}
+
+static ssize_t count_show(struct device *dev, struct device_attribute *attr,
+			  char *buf)
+{
+	struct bt1_apb *apb = dev_get_drvdata(dev);
+
+	return scnprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&apb->count));
+}
+static DEVICE_ATTR_RO(count);
+
+static ssize_t timeout_show(struct device *dev, struct device_attribute *attr,
+			    char *buf)
+{
+	struct bt1_apb *apb = dev_get_drvdata(dev);
+	unsigned long timeout;
+	int ret;
+	u32 n;
+
+	ret = regmap_read(apb->regs, APB_EHB_TIMEOUT, &n);
+	if (ret)
+		return ret;
+
+	timeout = bt1_apb_n_to_timeout_us(apb, n);
+
+	return scnprintf(buf, PAGE_SIZE, "%lu\n", timeout);
+}
+
+static ssize_t timeout_store(struct device *dev,
+			     struct device_attribute *attr,
+			     const char *buf, size_t count)
+{
+	struct bt1_apb *apb = dev_get_drvdata(dev);
+	unsigned long timeout;
+	int ret;
+	u32 n;
+
+	if (kstrtoul(buf, 0, &timeout) < 0)
+		return -EINVAL;
+
+	n = bt1_apb_timeout_to_n_us(apb, timeout);
+	n = clamp(n, APB_EHB_TIMEOUT_MIN, APB_EHB_TIMEOUT_MAX);
+
+	ret = regmap_write(apb->regs, APB_EHB_TIMEOUT, n);
+
+	return ret ?: count;
+}
+static DEVICE_ATTR_RW(timeout);
+
+static ssize_t inject_error_show(struct device *dev,
+				 struct device_attribute *attr, char *buf)
+{
+	return scnprintf(buf, PAGE_SIZE, "Error injection: nodev irq\n");
+}
+
+static ssize_t inject_error_store(struct device *dev,
+				  struct device_attribute *attr,
+				  const char *data, size_t count)
+{
+	struct bt1_apb *apb = dev_get_drvdata(dev);
+
+	/*
+	 * Either dummy read from the unmapped
address in the APB IO area 326 + * or manually set the IRQ status. 327 + */ 328 + if (sysfs_streq(data, "nodev")) 329 + readl(apb->res); 330 + else if (sysfs_streq(data, "irq")) 331 + regmap_update_bits(apb->regs, APB_EHB_ISR, APB_EHB_ISR_PENDING, 332 + APB_EHB_ISR_PENDING); 333 + else 334 + return -EINVAL; 335 + 336 + return count; 337 + } 338 + static DEVICE_ATTR_RW(inject_error); 339 + 340 + static struct attribute *bt1_apb_sysfs_attrs[] = { 341 + &dev_attr_count.attr, 342 + &dev_attr_timeout.attr, 343 + &dev_attr_inject_error.attr, 344 + NULL 345 + }; 346 + ATTRIBUTE_GROUPS(bt1_apb_sysfs); 347 + 348 + static void bt1_apb_remove_sysfs(void *data) 349 + { 350 + struct bt1_apb *apb = data; 351 + 352 + device_remove_groups(apb->dev, bt1_apb_sysfs_groups); 353 + } 354 + 355 + static int bt1_apb_init_sysfs(struct bt1_apb *apb) 356 + { 357 + int ret; 358 + 359 + ret = device_add_groups(apb->dev, bt1_apb_sysfs_groups); 360 + if (ret) { 361 + dev_err(apb->dev, "Failed to create EHB APB sysfs nodes\n"); 362 + return ret; 363 + } 364 + 365 + ret = devm_add_action_or_reset(apb->dev, bt1_apb_remove_sysfs, apb); 366 + if (ret) 367 + dev_err(apb->dev, "Can't add APB EHB sysfs remove action\n"); 368 + 369 + return ret; 370 + } 371 + 372 + static int bt1_apb_probe(struct platform_device *pdev) 373 + { 374 + struct bt1_apb *apb; 375 + int ret; 376 + 377 + apb = bt1_apb_create_data(pdev); 378 + if (IS_ERR(apb)) 379 + return PTR_ERR(apb); 380 + 381 + ret = bt1_apb_request_regs(apb); 382 + if (ret) 383 + return ret; 384 + 385 + ret = bt1_apb_request_rst(apb); 386 + if (ret) 387 + return ret; 388 + 389 + ret = bt1_apb_request_clk(apb); 390 + if (ret) 391 + return ret; 392 + 393 + ret = bt1_apb_request_irq(apb); 394 + if (ret) 395 + return ret; 396 + 397 + ret = bt1_apb_init_sysfs(apb); 398 + if (ret) 399 + return ret; 400 + 401 + return 0; 402 + } 403 + 404 + static const struct of_device_id bt1_apb_of_match[] = { 405 + { .compatible = "baikal,bt1-apb" }, 406 + { } 407 + }; 408 + 
MODULE_DEVICE_TABLE(of, bt1_apb_of_match); 409 + 410 + static struct platform_driver bt1_apb_driver = { 411 + .probe = bt1_apb_probe, 412 + .driver = { 413 + .name = "bt1-apb", 414 + .of_match_table = bt1_apb_of_match 415 + } 416 + }; 417 + module_platform_driver(bt1_apb_driver); 418 + 419 + MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>"); 420 + MODULE_DESCRIPTION("Baikal-T1 APB-bus driver"); 421 + MODULE_LICENSE("GPL v2");
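Several commits in this pull switch the sysfs stores above to `sysfs_streq()` for command parsing ("Use sysfs_streq instead of strncmp"). As a quick illustration of why, here is a minimal userspace model of its semantics: string equality that tolerates a single trailing newline, so `echo nodev > inject_error` matches even though the shell appends `\n`. The name `sysfs_streq_model` is mine; only the behavior of the kernel helper is being sketched.

```c
#include <stdbool.h>

/*
 * Userspace model of the kernel's sysfs_streq(): compares two strings
 * like strcmp() for equality, but treats a single trailing newline in
 * either string as equivalent to the terminating NUL.
 */
static bool sysfs_streq_model(const char *s1, const char *s2)
{
	while (*s1 && *s1 == *s2) {
		s1++;
		s2++;
	}

	if (*s1 == *s2)
		return true;
	if (!*s1 && *s2 == '\n' && !s2[1])
		return true;
	if (*s1 == '\n' && !s1[1] && !*s2)
		return true;

	return false;
}
```

An `strncmp(data, "irq", 3)` test, by contrast, would also accept "irqx", which is exactly the looseness the conversion removes.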
+314
drivers/bus/bt1-axi.c
···
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2020 BAIKAL ELECTRONICS, JSC
 *
 * Authors:
 *   Serge Semin <Sergey.Semin@baikalelectronics.ru>
 *
 * Baikal-T1 AXI-bus driver
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/types.h>
#include <linux/bitfield.h>
#include <linux/device.h>
#include <linux/atomic.h>
#include <linux/regmap.h>
#include <linux/platform_device.h>
#include <linux/mfd/syscon.h>
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/nmi.h>
#include <linux/of.h>
#include <linux/clk.h>
#include <linux/reset.h>
#include <linux/sysfs.h>

#define BT1_AXI_WERRL			0x110
#define BT1_AXI_WERRH			0x114
#define BT1_AXI_WERRH_TYPE		BIT(23)
#define BT1_AXI_WERRH_ADDR_FLD		24
#define BT1_AXI_WERRH_ADDR_MASK		GENMASK(31, BT1_AXI_WERRH_ADDR_FLD)

/*
 * struct bt1_axi - Baikal-T1 AXI-bus private data
 * @dev: Pointer to the device structure.
 * @qos_regs: AXI Interconnect QoS tuning registers.
 * @sys_regs: Baikal-T1 System Controller registers map.
 * @irq: Errors IRQ number.
 * @aclk: AXI reference clock.
 * @arst: AXI Interconnect reset line.
 * @count: Number of errors detected.
 */
struct bt1_axi {
	struct device *dev;

	void __iomem *qos_regs;
	struct regmap *sys_regs;
	int irq;

	struct clk *aclk;

	struct reset_control *arst;

	atomic_t count;
};

static irqreturn_t bt1_axi_isr(int irq, void *data)
{
	struct bt1_axi *axi = data;
	u32 low = 0, high = 0;

	regmap_read(axi->sys_regs, BT1_AXI_WERRL, &low);
	regmap_read(axi->sys_regs, BT1_AXI_WERRH, &high);

	dev_crit_ratelimited(axi->dev,
		"AXI-bus fault %d: %s at 0x%x%08x\n",
		atomic_inc_return(&axi->count),
		high & BT1_AXI_WERRH_TYPE ? "no slave" : "slave protocol error",
		high, low);

	/*
	 * Print a backtrace on each CPU. This might be pointless if the
	 * fault happened on the same CPU the IRQ handler is running on,
	 * or if the other core carried on executing despite the error.
	 * But if it didn't, the trace gets us straight to the cause of
	 * the problem.
	 */
	trigger_all_cpu_backtrace();

	return IRQ_HANDLED;
}

static void bt1_axi_clear_data(void *data)
{
	struct bt1_axi *axi = data;
	struct platform_device *pdev = to_platform_device(axi->dev);

	platform_set_drvdata(pdev, NULL);
}

static struct bt1_axi *bt1_axi_create_data(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct bt1_axi *axi;
	int ret;

	axi = devm_kzalloc(dev, sizeof(*axi), GFP_KERNEL);
	if (!axi)
		return ERR_PTR(-ENOMEM);

	ret = devm_add_action(dev, bt1_axi_clear_data, axi);
	if (ret) {
		dev_err(dev, "Can't add AXI EHB data clear action\n");
		return ERR_PTR(ret);
	}

	axi->dev = dev;
	atomic_set(&axi->count, 0);
	platform_set_drvdata(pdev, axi);

	return axi;
}

static int bt1_axi_request_regs(struct bt1_axi *axi)
{
	struct platform_device *pdev = to_platform_device(axi->dev);
	struct device *dev = axi->dev;

	axi->sys_regs = syscon_regmap_lookup_by_phandle(dev->of_node, "syscon");
	if (IS_ERR(axi->sys_regs)) {
		dev_err(dev, "Couldn't find syscon registers\n");
		return PTR_ERR(axi->sys_regs);
	}

	axi->qos_regs = devm_platform_ioremap_resource_byname(pdev, "qos");
	if (IS_ERR(axi->qos_regs))
		dev_err(dev, "Couldn't map AXI-bus QoS registers\n");

	return PTR_ERR_OR_ZERO(axi->qos_regs);
}

static int bt1_axi_request_rst(struct bt1_axi *axi)
{
	int ret;

	axi->arst = devm_reset_control_get_optional_exclusive(axi->dev, "arst");
	if (IS_ERR(axi->arst)) {
		dev_warn(axi->dev, "Couldn't get reset control line\n");
		return PTR_ERR(axi->arst);
	}

	ret = reset_control_deassert(axi->arst);
	if (ret)
		dev_err(axi->dev, "Failed to deassert the reset line\n");

	return ret;
}

static void bt1_axi_disable_clk(void *data)
{
	struct bt1_axi *axi = data;

	clk_disable_unprepare(axi->aclk);
}

static int bt1_axi_request_clk(struct bt1_axi *axi)
{
	int ret;

	axi->aclk = devm_clk_get(axi->dev, "aclk");
	if (IS_ERR(axi->aclk)) {
		dev_err(axi->dev, "Couldn't get AXI Interconnect clock\n");
		return PTR_ERR(axi->aclk);
	}

	ret = clk_prepare_enable(axi->aclk);
	if (ret) {
		dev_err(axi->dev, "Couldn't enable the AXI clock\n");
		return ret;
	}

	ret = devm_add_action_or_reset(axi->dev, bt1_axi_disable_clk, axi);
	if (ret)
		dev_err(axi->dev, "Can't add AXI clock disable action\n");

	return ret;
}

static int bt1_axi_request_irq(struct bt1_axi *axi)
{
	struct platform_device *pdev = to_platform_device(axi->dev);
	int ret;

	axi->irq = platform_get_irq(pdev, 0);
	if (axi->irq < 0)
		return axi->irq;

	ret = devm_request_irq(axi->dev, axi->irq, bt1_axi_isr, IRQF_SHARED,
			       "bt1-axi", axi);
	if (ret)
		dev_err(axi->dev, "Couldn't request AXI EHB IRQ\n");

	return ret;
}

static ssize_t count_show(struct device *dev,
			  struct device_attribute *attr, char *buf)
{
	struct bt1_axi *axi = dev_get_drvdata(dev);

	return scnprintf(buf, PAGE_SIZE, "%d\n", atomic_read(&axi->count));
}
static DEVICE_ATTR_RO(count);

static ssize_t inject_error_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	return scnprintf(buf, PAGE_SIZE, "Error injection: bus unaligned\n");
}

static ssize_t inject_error_store(struct device *dev,
				  struct device_attribute *attr,
				  const char *data, size_t count)
{
	struct bt1_axi *axi = dev_get_drvdata(dev);

	/*
	 * An unaligned read from memory causes a CM2 bus error, while an
	 * unaligned write causes the AXI-bus write error handled by this
	 * driver.
	 */
	if (sysfs_streq(data, "bus"))
		readb(axi->qos_regs);
	else if (sysfs_streq(data, "unaligned"))
		writeb(0, axi->qos_regs);
	else
		return -EINVAL;

	return count;
}
static DEVICE_ATTR_RW(inject_error);

static struct attribute *bt1_axi_sysfs_attrs[] = {
	&dev_attr_count.attr,
	&dev_attr_inject_error.attr,
	NULL
};
ATTRIBUTE_GROUPS(bt1_axi_sysfs);

static void bt1_axi_remove_sysfs(void *data)
{
	struct bt1_axi *axi = data;

	device_remove_groups(axi->dev, bt1_axi_sysfs_groups);
}

static int bt1_axi_init_sysfs(struct bt1_axi *axi)
{
	int ret;

	ret = device_add_groups(axi->dev, bt1_axi_sysfs_groups);
	if (ret) {
		dev_err(axi->dev, "Failed to add sysfs files group\n");
		return ret;
	}

	ret = devm_add_action_or_reset(axi->dev, bt1_axi_remove_sysfs, axi);
	if (ret)
		dev_err(axi->dev, "Can't add AXI EHB sysfs remove action\n");

	return ret;
}

static int bt1_axi_probe(struct platform_device *pdev)
{
	struct bt1_axi *axi;
	int ret;

	axi = bt1_axi_create_data(pdev);
	if (IS_ERR(axi))
		return PTR_ERR(axi);

	ret = bt1_axi_request_regs(axi);
	if (ret)
		return ret;

	ret = bt1_axi_request_rst(axi);
	if (ret)
		return ret;

	ret = bt1_axi_request_clk(axi);
	if (ret)
		return ret;

	ret = bt1_axi_request_irq(axi);
	if (ret)
		return ret;

	ret = bt1_axi_init_sysfs(axi);
	if (ret)
		return ret;

	return 0;
}

static const struct of_device_id bt1_axi_of_match[] = {
	{ .compatible = "baikal,bt1-axi" },
	{ }
};
MODULE_DEVICE_TABLE(of, bt1_axi_of_match);

static struct platform_driver bt1_axi_driver = {
	.probe = bt1_axi_probe,
	.driver = {
		.name = "bt1-axi",
		.of_match_table = bt1_axi_of_match
	}
};
module_platform_driver(bt1_axi_driver);

MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>");
MODULE_DESCRIPTION("Baikal-T1 AXI-bus driver");
MODULE_LICENSE("GPL v2");
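The ISR above classifies a fault via `BT1_AXI_WERRH_TYPE` and carries the upper address bits of the offending transaction in the `WERRH` word. A small host-side sketch of that decoding, with `MODEL_BIT`/`MODEL_GENMASK` standing in for the kernel's `BIT()`/`GENMASK()` helpers (the two decode function names are mine, added for illustration):

```c
#include <stdint.h>

/* Userspace stand-ins for the kernel's BIT() and GENMASK() macros. */
#define MODEL_BIT(n)		(1U << (n))
#define MODEL_GENMASK(h, l)	((~0U >> (31 - (h))) & (~0U << (l)))

#define BT1_AXI_WERRH_TYPE	MODEL_BIT(23)
#define BT1_AXI_WERRH_ADDR_FLD	24
#define BT1_AXI_WERRH_ADDR_MASK	MODEL_GENMASK(31, BT1_AXI_WERRH_ADDR_FLD)

/* Extract the address bits (31:24) stored in the WERRH word. */
static uint32_t werrh_addr_bits(uint32_t high)
{
	return (high & BT1_AXI_WERRH_ADDR_MASK) >> BT1_AXI_WERRH_ADDR_FLD;
}

/* Fault type flag: set means no slave responded ("no slave"). */
static int werrh_is_no_slave(uint32_t high)
{
	return !!(high & BT1_AXI_WERRH_TYPE);
}
```

This is why the `dev_crit_ratelimited()` call prints `high` and `low` back-to-back as `0x%x%08x`: together they form the full faulted address.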
+1 -1
drivers/clk/Makefile
···
 obj-$(CONFIG_ARCH_SIRF)			+= sirf/
 obj-$(CONFIG_ARCH_SOCFPGA)		+= socfpga/
 obj-$(CONFIG_PLAT_SPEAR)		+= spear/
-obj-$(CONFIG_ARCH_SPRD)			+= sprd/
+obj-y					+= sprd/
 obj-$(CONFIG_ARCH_STI)			+= st/
 obj-$(CONFIG_ARCH_STRATIX10)		+= socfpga/
 obj-$(CONFIG_ARCH_SUNXI)		+= sunxi/
+7
drivers/clk/mediatek/Kconfig
···
 	---help---
 	  This driver supports MediaTek MT8173 clocks.
 
+config COMMON_CLK_MT8173_MMSYS
+	bool "Clock driver for MediaTek MT8173 mmsys"
+	depends on COMMON_CLK_MT8173
+	default COMMON_CLK_MT8173
+	help
+	  This driver supports MediaTek MT8173 mmsys clocks.
+
 config COMMON_CLK_MT8183
 	bool "Clock driver for MediaTek MT8183"
 	depends on (ARCH_MEDIATEK && ARM64) || COMPILE_TEST
+1
drivers/clk/mediatek/Makefile
···
 obj-$(CONFIG_COMMON_CLK_MT7629_HIFSYS) += clk-mt7629-hif.o
 obj-$(CONFIG_COMMON_CLK_MT8135) += clk-mt8135.o
 obj-$(CONFIG_COMMON_CLK_MT8173) += clk-mt8173.o
+obj-$(CONFIG_COMMON_CLK_MT8173_MMSYS) += clk-mt8173-mm.o
 obj-$(CONFIG_COMMON_CLK_MT8183) += clk-mt8183.o
 obj-$(CONFIG_COMMON_CLK_MT8183_AUDIOSYS) += clk-mt8183-audio.o
 obj-$(CONFIG_COMMON_CLK_MT8183_CAMSYS) += clk-mt8183-cam.o
+2 -7
drivers/clk/mediatek/clk-mt2701-mm.c
···
 	GATE_DISP1(CLK_MM_TVE_FMM, "mm_tve_fmm", "mm_sel", 14),
 };
 
-static const struct of_device_id of_match_clk_mt2701_mm[] = {
-	{ .compatible = "mediatek,mt2701-mmsys", },
-	{}
-};
-
 static int clk_mt2701_mm_probe(struct platform_device *pdev)
 {
+	struct device *dev = &pdev->dev;
+	struct device_node *node = dev->parent->of_node;
 	struct clk_onecell_data *clk_data;
 	int r;
-	struct device_node *node = pdev->dev.of_node;
 
 	clk_data = mtk_alloc_clk_data(CLK_MM_NR);
 
···
 	.probe = clk_mt2701_mm_probe,
 	.driver = {
 		.name = "clk-mt2701-mm",
-		.of_match_table = of_match_clk_mt2701_mm,
 	},
 };
 
+2 -7
drivers/clk/mediatek/clk-mt2712-mm.c
···
 
 static int clk_mt2712_mm_probe(struct platform_device *pdev)
 {
+	struct device *dev = &pdev->dev;
+	struct device_node *node = dev->parent->of_node;
 	struct clk_onecell_data *clk_data;
 	int r;
-	struct device_node *node = pdev->dev.of_node;
 
 	clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
 
···
 	return r;
 }
 
-static const struct of_device_id of_match_clk_mt2712_mm[] = {
-	{ .compatible = "mediatek,mt2712-mmsys", },
-	{}
-};
-
 static struct platform_driver clk_mt2712_mm_drv = {
 	.probe = clk_mt2712_mm_probe,
 	.driver = {
 		.name = "clk-mt2712-mm",
-		.of_match_table = of_match_clk_mt2712_mm,
 	},
 };
 
+2 -7
drivers/clk/mediatek/clk-mt6779-mm.c
···
 	GATE_MM1(CLK_MM_DISP_OVL_FBDC, "mm_disp_ovl_fbdc", "mm_sel", 16),
 };
 
-static const struct of_device_id of_match_clk_mt6779_mm[] = {
-	{ .compatible = "mediatek,mt6779-mmsys", },
-	{}
-};
-
 static int clk_mt6779_mm_probe(struct platform_device *pdev)
 {
+	struct device *dev = &pdev->dev;
+	struct device_node *node = dev->parent->of_node;
 	struct clk_onecell_data *clk_data;
-	struct device_node *node = pdev->dev.of_node;
 
 	clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
 
···
 	.probe = clk_mt6779_mm_probe,
 	.driver = {
 		.name = "clk-mt6779-mm",
-		.of_match_table = of_match_clk_mt6779_mm,
 	},
 };
 
+2 -7
drivers/clk/mediatek/clk-mt6797-mm.c
···
 				      "clk26m", 3),
 };
 
-static const struct of_device_id of_match_clk_mt6797_mm[] = {
-	{ .compatible = "mediatek,mt6797-mmsys", },
-	{}
-};
-
 static int clk_mt6797_mm_probe(struct platform_device *pdev)
 {
+	struct device *dev = &pdev->dev;
+	struct device_node *node = dev->parent->of_node;
 	struct clk_onecell_data *clk_data;
 	int r;
-	struct device_node *node = pdev->dev.of_node;
 
 	clk_data = mtk_alloc_clk_data(CLK_MM_NR);
 
···
 	.probe = clk_mt6797_mm_probe,
 	.driver = {
 		.name = "clk-mt6797-mm",
-		.of_match_table = of_match_clk_mt6797_mm,
 	},
 };
 
+146
drivers/clk/mediatek/clk-mt8173-mm.c
···
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (c) 2014 MediaTek Inc.
 * Author: James Liao <jamesjj.liao@mediatek.com>
 */

#include <linux/clk-provider.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>

#include "clk-gate.h"
#include "clk-mtk.h"

#include <dt-bindings/clock/mt8173-clk.h>

static const struct mtk_gate_regs mm0_cg_regs = {
	.set_ofs = 0x0104,
	.clr_ofs = 0x0108,
	.sta_ofs = 0x0100,
};

static const struct mtk_gate_regs mm1_cg_regs = {
	.set_ofs = 0x0114,
	.clr_ofs = 0x0118,
	.sta_ofs = 0x0110,
};

#define GATE_MM0(_id, _name, _parent, _shift) {	\
	.id = _id,				\
	.name = _name,				\
	.parent_name = _parent,			\
	.regs = &mm0_cg_regs,			\
	.shift = _shift,			\
	.ops = &mtk_clk_gate_ops_setclr,	\
}

#define GATE_MM1(_id, _name, _parent, _shift) {	\
	.id = _id,				\
	.name = _name,				\
	.parent_name = _parent,			\
	.regs = &mm1_cg_regs,			\
	.shift = _shift,			\
	.ops = &mtk_clk_gate_ops_setclr,	\
}

static const struct mtk_gate mt8173_mm_clks[] = {
	/* MM0 */
	GATE_MM0(CLK_MM_SMI_COMMON, "mm_smi_common", "mm_sel", 0),
	GATE_MM0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1),
	GATE_MM0(CLK_MM_CAM_MDP, "mm_cam_mdp", "mm_sel", 2),
	GATE_MM0(CLK_MM_MDP_RDMA0, "mm_mdp_rdma0", "mm_sel", 3),
	GATE_MM0(CLK_MM_MDP_RDMA1, "mm_mdp_rdma1", "mm_sel", 4),
	GATE_MM0(CLK_MM_MDP_RSZ0, "mm_mdp_rsz0", "mm_sel", 5),
	GATE_MM0(CLK_MM_MDP_RSZ1, "mm_mdp_rsz1", "mm_sel", 6),
	GATE_MM0(CLK_MM_MDP_RSZ2, "mm_mdp_rsz2", "mm_sel", 7),
	GATE_MM0(CLK_MM_MDP_TDSHP0, "mm_mdp_tdshp0", "mm_sel", 8),
	GATE_MM0(CLK_MM_MDP_TDSHP1, "mm_mdp_tdshp1", "mm_sel", 9),
	GATE_MM0(CLK_MM_MDP_WDMA, "mm_mdp_wdma", "mm_sel", 11),
	GATE_MM0(CLK_MM_MDP_WROT0, "mm_mdp_wrot0", "mm_sel", 12),
	GATE_MM0(CLK_MM_MDP_WROT1, "mm_mdp_wrot1", "mm_sel", 13),
	GATE_MM0(CLK_MM_FAKE_ENG, "mm_fake_eng", "mm_sel", 14),
	GATE_MM0(CLK_MM_MUTEX_32K, "mm_mutex_32k", "rtc_sel", 15),
	GATE_MM0(CLK_MM_DISP_OVL0, "mm_disp_ovl0", "mm_sel", 16),
	GATE_MM0(CLK_MM_DISP_OVL1, "mm_disp_ovl1", "mm_sel", 17),
	GATE_MM0(CLK_MM_DISP_RDMA0, "mm_disp_rdma0", "mm_sel", 18),
	GATE_MM0(CLK_MM_DISP_RDMA1, "mm_disp_rdma1", "mm_sel", 19),
	GATE_MM0(CLK_MM_DISP_RDMA2, "mm_disp_rdma2", "mm_sel", 20),
	GATE_MM0(CLK_MM_DISP_WDMA0, "mm_disp_wdma0", "mm_sel", 21),
	GATE_MM0(CLK_MM_DISP_WDMA1, "mm_disp_wdma1", "mm_sel", 22),
	GATE_MM0(CLK_MM_DISP_COLOR0, "mm_disp_color0", "mm_sel", 23),
	GATE_MM0(CLK_MM_DISP_COLOR1, "mm_disp_color1", "mm_sel", 24),
	GATE_MM0(CLK_MM_DISP_AAL, "mm_disp_aal", "mm_sel", 25),
	GATE_MM0(CLK_MM_DISP_GAMMA, "mm_disp_gamma", "mm_sel", 26),
	GATE_MM0(CLK_MM_DISP_UFOE, "mm_disp_ufoe", "mm_sel", 27),
	GATE_MM0(CLK_MM_DISP_SPLIT0, "mm_disp_split0", "mm_sel", 28),
	GATE_MM0(CLK_MM_DISP_SPLIT1, "mm_disp_split1", "mm_sel", 29),
	GATE_MM0(CLK_MM_DISP_MERGE, "mm_disp_merge", "mm_sel", 30),
	GATE_MM0(CLK_MM_DISP_OD, "mm_disp_od", "mm_sel", 31),
	/* MM1 */
	GATE_MM1(CLK_MM_DISP_PWM0MM, "mm_disp_pwm0mm", "mm_sel", 0),
	GATE_MM1(CLK_MM_DISP_PWM026M, "mm_disp_pwm026m", "pwm_sel", 1),
	GATE_MM1(CLK_MM_DISP_PWM1MM, "mm_disp_pwm1mm", "mm_sel", 2),
	GATE_MM1(CLK_MM_DISP_PWM126M, "mm_disp_pwm126m", "pwm_sel", 3),
	GATE_MM1(CLK_MM_DSI0_ENGINE, "mm_dsi0_engine", "mm_sel", 4),
	GATE_MM1(CLK_MM_DSI0_DIGITAL, "mm_dsi0_digital", "dsi0_dig", 5),
	GATE_MM1(CLK_MM_DSI1_ENGINE, "mm_dsi1_engine", "mm_sel", 6),
	GATE_MM1(CLK_MM_DSI1_DIGITAL, "mm_dsi1_digital", "dsi1_dig", 7),
	GATE_MM1(CLK_MM_DPI_PIXEL, "mm_dpi_pixel", "dpi0_sel", 8),
	GATE_MM1(CLK_MM_DPI_ENGINE, "mm_dpi_engine", "mm_sel", 9),
	GATE_MM1(CLK_MM_DPI1_PIXEL, "mm_dpi1_pixel", "lvds_pxl", 10),
	GATE_MM1(CLK_MM_DPI1_ENGINE, "mm_dpi1_engine", "mm_sel", 11),
	GATE_MM1(CLK_MM_HDMI_PIXEL, "mm_hdmi_pixel", "dpi0_sel", 12),
	GATE_MM1(CLK_MM_HDMI_PLLCK, "mm_hdmi_pllck", "hdmi_sel", 13),
	GATE_MM1(CLK_MM_HDMI_AUDIO, "mm_hdmi_audio", "apll1", 14),
	GATE_MM1(CLK_MM_HDMI_SPDIF, "mm_hdmi_spdif", "apll2", 15),
	GATE_MM1(CLK_MM_LVDS_PIXEL, "mm_lvds_pixel", "lvds_pxl", 16),
	GATE_MM1(CLK_MM_LVDS_CTS, "mm_lvds_cts", "lvds_cts", 17),
	GATE_MM1(CLK_MM_SMI_LARB4, "mm_smi_larb4", "mm_sel", 18),
	GATE_MM1(CLK_MM_HDMI_HDCP, "mm_hdmi_hdcp", "hdcp_sel", 19),
	GATE_MM1(CLK_MM_HDMI_HDCP24M, "mm_hdmi_hdcp24m", "hdcp_24m_sel", 20),
};

struct clk_mt8173_mm_driver_data {
	const struct mtk_gate *gates_clk;
	int gates_num;
};

static const struct clk_mt8173_mm_driver_data mt8173_mmsys_driver_data = {
	.gates_clk = mt8173_mm_clks,
	.gates_num = ARRAY_SIZE(mt8173_mm_clks),
};

static int clk_mt8173_mm_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct device_node *node = dev->parent->of_node;
	const struct clk_mt8173_mm_driver_data *data;
	struct clk_onecell_data *clk_data;
	int ret;

	clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
	if (!clk_data)
		return -ENOMEM;

	data = &mt8173_mmsys_driver_data;

	ret = mtk_clk_register_gates(node, data->gates_clk, data->gates_num,
				     clk_data);
	if (ret)
		return ret;

	ret = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
	if (ret)
		return ret;

	return 0;
}

static struct platform_driver clk_mt8173_mm_drv = {
	.driver = {
		.name = "clk-mt8173-mm",
	},
	.probe = clk_mt8173_mm_probe,
};

builtin_platform_driver(clk_mt8173_mm_drv);
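The `mm0_cg_regs`/`mm1_cg_regs` triplets above describe set/clear/status register banks: a gate is flipped by writing just its bit to the set or clear offset, so no read-modify-write of a shared word is needed. A hedged userspace model of that idiom (a sketch only: it assumes the status bit means "gated", which is how `setclr`-style gate ops usually treat it, and the struct and helper names are mine, not kernel API):

```c
#include <stdint.h>

/*
 * Model of a set/clr/sta clock-gate bank. The status word holds one
 * "gated" bit per clock; writes to the set/clr registers affect only
 * the bits that are set in the written value.
 */
struct mm_cg_bank {
	uint32_t sta;	/* 1 = clock gated (off), 0 = running */
};

/* Write to set_ofs: gate (disable) the selected clocks. */
static void cg_write_set(struct mm_cg_bank *b, uint32_t val)
{
	b->sta |= val;
}

/* Write to clr_ofs: ungate (enable) the selected clocks. */
static void cg_write_clr(struct mm_cg_bank *b, uint32_t val)
{
	b->sta &= ~val;
}

/* Read back one gate's state through the status register. */
static int cg_is_enabled(const struct mm_cg_bank *b, unsigned int shift)
{
	return !(b->sta & (1U << shift));
}
```

Two drivers (or two CPUs) can each toggle their own gate bit without coordinating, which is the point of the set/clear layout.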
-104
drivers/clk/mediatek/clk-mt8173.c
··· 753 753 GATE_IMG(CLK_IMG_FD, "img_fd", "mm_sel", 11), 754 754 }; 755 755 756 - static const struct mtk_gate_regs mm0_cg_regs __initconst = { 757 - .set_ofs = 0x0104, 758 - .clr_ofs = 0x0108, 759 - .sta_ofs = 0x0100, 760 - }; 761 - 762 - static const struct mtk_gate_regs mm1_cg_regs __initconst = { 763 - .set_ofs = 0x0114, 764 - .clr_ofs = 0x0118, 765 - .sta_ofs = 0x0110, 766 - }; 767 - 768 - #define GATE_MM0(_id, _name, _parent, _shift) { \ 769 - .id = _id, \ 770 - .name = _name, \ 771 - .parent_name = _parent, \ 772 - .regs = &mm0_cg_regs, \ 773 - .shift = _shift, \ 774 - .ops = &mtk_clk_gate_ops_setclr, \ 775 - } 776 - 777 - #define GATE_MM1(_id, _name, _parent, _shift) { \ 778 - .id = _id, \ 779 - .name = _name, \ 780 - .parent_name = _parent, \ 781 - .regs = &mm1_cg_regs, \ 782 - .shift = _shift, \ 783 - .ops = &mtk_clk_gate_ops_setclr, \ 784 - } 785 - 786 - static const struct mtk_gate mm_clks[] __initconst = { 787 - /* MM0 */ 788 - GATE_MM0(CLK_MM_SMI_COMMON, "mm_smi_common", "mm_sel", 0), 789 - GATE_MM0(CLK_MM_SMI_LARB0, "mm_smi_larb0", "mm_sel", 1), 790 - GATE_MM0(CLK_MM_CAM_MDP, "mm_cam_mdp", "mm_sel", 2), 791 - GATE_MM0(CLK_MM_MDP_RDMA0, "mm_mdp_rdma0", "mm_sel", 3), 792 - GATE_MM0(CLK_MM_MDP_RDMA1, "mm_mdp_rdma1", "mm_sel", 4), 793 - GATE_MM0(CLK_MM_MDP_RSZ0, "mm_mdp_rsz0", "mm_sel", 5), 794 - GATE_MM0(CLK_MM_MDP_RSZ1, "mm_mdp_rsz1", "mm_sel", 6), 795 - GATE_MM0(CLK_MM_MDP_RSZ2, "mm_mdp_rsz2", "mm_sel", 7), 796 - GATE_MM0(CLK_MM_MDP_TDSHP0, "mm_mdp_tdshp0", "mm_sel", 8), 797 - GATE_MM0(CLK_MM_MDP_TDSHP1, "mm_mdp_tdshp1", "mm_sel", 9), 798 - GATE_MM0(CLK_MM_MDP_WDMA, "mm_mdp_wdma", "mm_sel", 11), 799 - GATE_MM0(CLK_MM_MDP_WROT0, "mm_mdp_wrot0", "mm_sel", 12), 800 - GATE_MM0(CLK_MM_MDP_WROT1, "mm_mdp_wrot1", "mm_sel", 13), 801 - GATE_MM0(CLK_MM_FAKE_ENG, "mm_fake_eng", "mm_sel", 14), 802 - GATE_MM0(CLK_MM_MUTEX_32K, "mm_mutex_32k", "rtc_sel", 15), 803 - GATE_MM0(CLK_MM_DISP_OVL0, "mm_disp_ovl0", "mm_sel", 16), 804 - GATE_MM0(CLK_MM_DISP_OVL1, 
"mm_disp_ovl1", "mm_sel", 17), 805 - GATE_MM0(CLK_MM_DISP_RDMA0, "mm_disp_rdma0", "mm_sel", 18), 806 - GATE_MM0(CLK_MM_DISP_RDMA1, "mm_disp_rdma1", "mm_sel", 19), 807 - GATE_MM0(CLK_MM_DISP_RDMA2, "mm_disp_rdma2", "mm_sel", 20), 808 - GATE_MM0(CLK_MM_DISP_WDMA0, "mm_disp_wdma0", "mm_sel", 21), 809 - GATE_MM0(CLK_MM_DISP_WDMA1, "mm_disp_wdma1", "mm_sel", 22), 810 - GATE_MM0(CLK_MM_DISP_COLOR0, "mm_disp_color0", "mm_sel", 23), 811 - GATE_MM0(CLK_MM_DISP_COLOR1, "mm_disp_color1", "mm_sel", 24), 812 - GATE_MM0(CLK_MM_DISP_AAL, "mm_disp_aal", "mm_sel", 25), 813 - GATE_MM0(CLK_MM_DISP_GAMMA, "mm_disp_gamma", "mm_sel", 26), 814 - GATE_MM0(CLK_MM_DISP_UFOE, "mm_disp_ufoe", "mm_sel", 27), 815 - GATE_MM0(CLK_MM_DISP_SPLIT0, "mm_disp_split0", "mm_sel", 28), 816 - GATE_MM0(CLK_MM_DISP_SPLIT1, "mm_disp_split1", "mm_sel", 29), 817 - GATE_MM0(CLK_MM_DISP_MERGE, "mm_disp_merge", "mm_sel", 30), 818 - GATE_MM0(CLK_MM_DISP_OD, "mm_disp_od", "mm_sel", 31), 819 - /* MM1 */ 820 - GATE_MM1(CLK_MM_DISP_PWM0MM, "mm_disp_pwm0mm", "mm_sel", 0), 821 - GATE_MM1(CLK_MM_DISP_PWM026M, "mm_disp_pwm026m", "pwm_sel", 1), 822 - GATE_MM1(CLK_MM_DISP_PWM1MM, "mm_disp_pwm1mm", "mm_sel", 2), 823 - GATE_MM1(CLK_MM_DISP_PWM126M, "mm_disp_pwm126m", "pwm_sel", 3), 824 - GATE_MM1(CLK_MM_DSI0_ENGINE, "mm_dsi0_engine", "mm_sel", 4), 825 - GATE_MM1(CLK_MM_DSI0_DIGITAL, "mm_dsi0_digital", "dsi0_dig", 5), 826 - GATE_MM1(CLK_MM_DSI1_ENGINE, "mm_dsi1_engine", "mm_sel", 6), 827 - GATE_MM1(CLK_MM_DSI1_DIGITAL, "mm_dsi1_digital", "dsi1_dig", 7), 828 - GATE_MM1(CLK_MM_DPI_PIXEL, "mm_dpi_pixel", "dpi0_sel", 8), 829 - GATE_MM1(CLK_MM_DPI_ENGINE, "mm_dpi_engine", "mm_sel", 9), 830 - GATE_MM1(CLK_MM_DPI1_PIXEL, "mm_dpi1_pixel", "lvds_pxl", 10), 831 - GATE_MM1(CLK_MM_DPI1_ENGINE, "mm_dpi1_engine", "mm_sel", 11), 832 - GATE_MM1(CLK_MM_HDMI_PIXEL, "mm_hdmi_pixel", "dpi0_sel", 12), 833 - GATE_MM1(CLK_MM_HDMI_PLLCK, "mm_hdmi_pllck", "hdmi_sel", 13), 834 - GATE_MM1(CLK_MM_HDMI_AUDIO, "mm_hdmi_audio", "apll1", 14), 835 - 
GATE_MM1(CLK_MM_HDMI_SPDIF, "mm_hdmi_spdif", "apll2", 15), 836 - GATE_MM1(CLK_MM_LVDS_PIXEL, "mm_lvds_pixel", "lvds_pxl", 16), 837 - GATE_MM1(CLK_MM_LVDS_CTS, "mm_lvds_cts", "lvds_cts", 17), 838 - GATE_MM1(CLK_MM_SMI_LARB4, "mm_smi_larb4", "mm_sel", 18), 839 - GATE_MM1(CLK_MM_HDMI_HDCP, "mm_hdmi_hdcp", "hdcp_sel", 19), 840 - GATE_MM1(CLK_MM_HDMI_HDCP24M, "mm_hdmi_hdcp24m", "hdcp_24m_sel", 20), 841 - }; 842 - 843 756 static const struct mtk_gate_regs vdec0_cg_regs __initconst = { 844 757 .set_ofs = 0x0000, 845 758 .clr_ofs = 0x0004, ··· 1056 1143 __func__, r); 1057 1144 } 1058 1145 CLK_OF_DECLARE(mtk_imgsys, "mediatek,mt8173-imgsys", mtk_imgsys_init); 1059 - 1060 - static void __init mtk_mmsys_init(struct device_node *node) 1061 - { 1062 - struct clk_onecell_data *clk_data; 1063 - int r; 1064 - 1065 - clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK); 1066 - 1067 - mtk_clk_register_gates(node, mm_clks, ARRAY_SIZE(mm_clks), 1068 - clk_data); 1069 - 1070 - r = of_clk_add_provider(node, of_clk_src_onecell_get, clk_data); 1071 - if (r) 1072 - pr_err("%s(): could not register clock provider: %d\n", 1073 - __func__, r); 1074 - } 1075 - CLK_OF_DECLARE(mtk_mmsys, "mediatek,mt8173-mmsys", mtk_mmsys_init); 1076 1146 1077 1147 static void __init mtk_vdecsys_init(struct device_node *node) 1078 1148 {
+2 -7
drivers/clk/mediatek/clk-mt8183-mm.c
···
 
 static int clk_mt8183_mm_probe(struct platform_device *pdev)
 {
+	struct device *dev = &pdev->dev;
+	struct device_node *node = dev->parent->of_node;
 	struct clk_onecell_data *clk_data;
-	struct device_node *node = pdev->dev.of_node;
 
 	clk_data = mtk_alloc_clk_data(CLK_MM_NR_CLK);
 
···
 	return of_clk_add_provider(node, of_clk_src_onecell_get, clk_data);
 }
 
-static const struct of_device_id of_match_clk_mt8183_mm[] = {
-	{ .compatible = "mediatek,mt8183-mmsys", },
-	{}
-};
-
 static struct platform_driver clk_mt8183_mm_drv = {
 	.probe = clk_mt8183_mm_probe,
 	.driver = {
 		.name = "clk-mt8183-mm",
-		.of_match_table = of_match_clk_mt8183_mm,
 	},
 };
 
+3 -3
drivers/cpufreq/Kconfig.arm
···
 	default y
 
 config ARM_TEGRA20_CPUFREQ
-	tristate "Tegra20 CPUFreq support"
-	depends on ARCH_TEGRA
+	tristate "Tegra20/30 CPUFreq support"
+	depends on ARCH_TEGRA && CPUFREQ_DT
 	default y
 	help
-	  This adds the CPUFreq driver support for Tegra20 SOCs.
+	  This adds the CPUFreq driver support for Tegra20/30 SOCs.
 
 config ARM_TEGRA124_CPUFREQ
 	bool "Tegra124 CPUFreq support"
+56 -161
drivers/cpufreq/tegra20-cpufreq.c
···
  * Based on arch/arm/plat-omap/cpu-omap.c, (C) 2005 Nokia Corporation
  */
 
-#include <linux/clk.h>
-#include <linux/cpufreq.h>
+#include <linux/bits.h>
+#include <linux/cpu.h>
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/module.h>
+#include <linux/of_device.h>
 #include <linux/platform_device.h>
+#include <linux/pm_opp.h>
 #include <linux/types.h>
 
-static struct cpufreq_frequency_table freq_table[] = {
-	{ .frequency = 216000 },
-	{ .frequency = 312000 },
-	{ .frequency = 456000 },
-	{ .frequency = 608000 },
-	{ .frequency = 760000 },
-	{ .frequency = 816000 },
-	{ .frequency = 912000 },
-	{ .frequency = 1000000 },
-	{ .frequency = CPUFREQ_TABLE_END },
-};
+#include <soc/tegra/common.h>
+#include <soc/tegra/fuse.h>
 
-struct tegra20_cpufreq {
-	struct device *dev;
-	struct cpufreq_driver driver;
-	struct clk *cpu_clk;
-	struct clk *pll_x_clk;
-	struct clk *pll_p_clk;
-	bool pll_x_prepared;
-};
-
-static unsigned int tegra_get_intermediate(struct cpufreq_policy *policy,
-					   unsigned int index)
+static bool cpu0_node_has_opp_v2_prop(void)
 {
-	struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
-	unsigned int ifreq = clk_get_rate(cpufreq->pll_p_clk) / 1000;
+	struct device_node *np = of_cpu_device_node_get(0);
+	bool ret = false;
 
-	/*
-	 * Don't switch to intermediate freq if:
-	 * - we are already at it, i.e. policy->cur == ifreq
-	 * - index corresponds to ifreq
-	 */
-	if (freq_table[index].frequency == ifreq || policy->cur == ifreq)
-		return 0;
+	if (of_get_property(np, "operating-points-v2", NULL))
+		ret = true;
 
-	return ifreq;
-}
-
-static int tegra_target_intermediate(struct cpufreq_policy *policy,
-				     unsigned int index)
-{
-	struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
-	int ret;
-
-	/*
-	 * Take an extra reference to the main pll so it doesn't turn
-	 * off when we move the cpu off of it as enabling it again while we
-	 * switch to it from tegra_target() would take additional time.
-	 *
-	 * When target-freq is equal to intermediate freq we don't need to
-	 * switch to an intermediate freq and so this routine isn't called.
-	 * Also, we wouldn't be using pll_x anymore and must not take extra
-	 * reference to it, as it can be disabled now to save some power.
-	 */
-	clk_prepare_enable(cpufreq->pll_x_clk);
-
-	ret = clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_p_clk);
-	if (ret)
-		clk_disable_unprepare(cpufreq->pll_x_clk);
-	else
-		cpufreq->pll_x_prepared = true;
-
+	of_node_put(np);
 	return ret;
-}
-
-static int tegra_target(struct cpufreq_policy *policy, unsigned int index)
-{
-	struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
-	unsigned long rate = freq_table[index].frequency;
-	unsigned int ifreq = clk_get_rate(cpufreq->pll_p_clk) / 1000;
-	int ret;
-
-	/*
-	 * target freq == pll_p, don't need to take extra reference to pll_x_clk
-	 * as it isn't used anymore.
-	 */
-	if (rate == ifreq)
-		return clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_p_clk);
-
-	ret = clk_set_rate(cpufreq->pll_x_clk, rate * 1000);
-	/* Restore to earlier frequency on error, i.e. pll_x */
-	if (ret)
-		dev_err(cpufreq->dev, "Failed to change pll_x to %lu\n", rate);
-
-	ret = clk_set_parent(cpufreq->cpu_clk, cpufreq->pll_x_clk);
-	/* This shouldn't fail while changing or restoring */
-	WARN_ON(ret);
-
-	/*
-	 * Drop count to pll_x clock only if we switched to intermediate freq
-	 * earlier while transitioning to a target frequency.
-	 */
-	if (cpufreq->pll_x_prepared) {
-		clk_disable_unprepare(cpufreq->pll_x_clk);
-		cpufreq->pll_x_prepared = false;
-	}
-
-	return ret;
-}
-
-static int tegra_cpu_init(struct cpufreq_policy *policy)
-{
-	struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
-
-	clk_prepare_enable(cpufreq->cpu_clk);
-
-	/* FIXME: what's the actual transition time? */
-	cpufreq_generic_init(policy, freq_table, 300 * 1000);
-	policy->clk = cpufreq->cpu_clk;
-	policy->suspend_freq = freq_table[0].frequency;
-	return 0;
-}
-
-static int tegra_cpu_exit(struct cpufreq_policy *policy)
-{
-	struct tegra20_cpufreq *cpufreq = cpufreq_get_driver_data();
-
-	clk_disable_unprepare(cpufreq->cpu_clk);
-	return 0;
 }
 
 static int tegra20_cpufreq_probe(struct platform_device *pdev)
 {
-	struct tegra20_cpufreq *cpufreq;
+	struct platform_device *cpufreq_dt;
+	struct opp_table *opp_table;
+	struct device *cpu_dev;
+	u32 versions[2];
 	int err;
 
-	cpufreq = devm_kzalloc(&pdev->dev, sizeof(*cpufreq), GFP_KERNEL);
-	if (!cpufreq)
-		return -ENOMEM;
-
-	cpufreq->cpu_clk = clk_get_sys(NULL, "cclk");
-	if (IS_ERR(cpufreq->cpu_clk))
-		return PTR_ERR(cpufreq->cpu_clk);
-
-	cpufreq->pll_x_clk = clk_get_sys(NULL, "pll_x");
-	if (IS_ERR(cpufreq->pll_x_clk)) {
-		err = PTR_ERR(cpufreq->pll_x_clk);
-		goto put_cpu;
+	if (!cpu0_node_has_opp_v2_prop()) {
+		dev_err(&pdev->dev, "operating points not found\n");
+		dev_err(&pdev->dev, "please update your device tree\n");
+		return -ENODEV;
 	}
 
-	cpufreq->pll_p_clk = clk_get_sys(NULL, "pll_p");
-	if (IS_ERR(cpufreq->pll_p_clk)) {
-		err = PTR_ERR(cpufreq->pll_p_clk);
-		goto put_pll_x;
+	if (of_machine_is_compatible("nvidia,tegra20")) {
+		versions[0] = BIT(tegra_sku_info.cpu_process_id);
+		versions[1] = BIT(tegra_sku_info.soc_speedo_id);
+	} else {
+		versions[0] = BIT(tegra_sku_info.cpu_process_id);
+		versions[1] = BIT(tegra_sku_info.cpu_speedo_id);
 	}
 
-	cpufreq->dev = &pdev->dev;
-	cpufreq->driver.get = cpufreq_generic_get;
-	cpufreq->driver.attr = cpufreq_generic_attr;
-	cpufreq->driver.init = tegra_cpu_init;
-	cpufreq->driver.exit = tegra_cpu_exit;
-	cpufreq->driver.flags = CPUFREQ_NEED_INITIAL_FREQ_CHECK;
-	cpufreq->driver.verify = cpufreq_generic_frequency_table_verify;
-	cpufreq->driver.suspend = cpufreq_generic_suspend;
-	cpufreq->driver.driver_data = cpufreq;
-	cpufreq->driver.target_index = tegra_target;
-	cpufreq->driver.get_intermediate = tegra_get_intermediate;
-	cpufreq->driver.target_intermediate = tegra_target_intermediate;
-	snprintf(cpufreq->driver.name, CPUFREQ_NAME_LEN, "tegra");
+	dev_info(&pdev->dev, "hardware version 0x%x 0x%x\n",
+		 versions[0], versions[1]);
 
-	err = cpufreq_register_driver(&cpufreq->driver);
-	if (err)
-		goto put_pll_p;
+	cpu_dev = get_cpu_device(0);
+	if (WARN_ON(!cpu_dev))
+		return -ENODEV;
 
-	platform_set_drvdata(pdev, cpufreq);
+	opp_table = dev_pm_opp_set_supported_hw(cpu_dev, versions, 2);
+	err = PTR_ERR_OR_ZERO(opp_table);
+	if (err) {
+		dev_err(&pdev->dev, "failed to set supported hw: %d\n", err);
+		return err;
+	}
+
+	cpufreq_dt = platform_device_register_simple("cpufreq-dt", -1, NULL, 0);
+	err = PTR_ERR_OR_ZERO(cpufreq_dt);
+	if (err) {
+		dev_err(&pdev->dev,
+			"failed to create cpufreq-dt device: %d\n", err);
+		goto err_put_supported_hw;
+	}
+
+	platform_set_drvdata(pdev, cpufreq_dt);
 
 	return 0;
 
-put_pll_p:
-	clk_put(cpufreq->pll_p_clk);
-put_pll_x:
-	clk_put(cpufreq->pll_x_clk);
-put_cpu:
-	clk_put(cpufreq->cpu_clk);
+err_put_supported_hw:
+	dev_pm_opp_put_supported_hw(opp_table);
 
 	return err;
 }
 
 static int tegra20_cpufreq_remove(struct platform_device *pdev)
 {
-	struct tegra20_cpufreq *cpufreq = platform_get_drvdata(pdev);
+	struct platform_device *cpufreq_dt;
+	struct opp_table *opp_table;
 
-	cpufreq_unregister_driver(&cpufreq->driver);
+	cpufreq_dt = platform_get_drvdata(pdev);
+	platform_device_unregister(cpufreq_dt);
 
-	clk_put(cpufreq->pll_p_clk);
-	clk_put(cpufreq->pll_x_clk);
-	clk_put(cpufreq->cpu_clk);
+	opp_table = dev_pm_opp_get_opp_table(get_cpu_device(0));
+	dev_pm_opp_put_supported_hw(opp_table);
+	dev_pm_opp_put_opp_table(opp_table);
 
 	return 0;
 }
-1
drivers/cpuidle/cpuidle-tegra.c
···
 		break;
 
 	case TEGRA30:
-		tegra_cpuidle_disable_state(TEGRA_CC6);
 		break;
 
 	case TEGRA114:
+3 -1
drivers/firmware/arm_scmi/Makefile
···
 obj-y	= scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o
 scmi-bus-y = bus.o
 scmi-driver-y = driver.o
-scmi-transport-y = mailbox.o shmem.o
+scmi-transport-y = shmem.o
+scmi-transport-$(CONFIG_MAILBOX) += mailbox.o
+scmi-transport-$(CONFIG_ARM_PSCI_FW) += smc.o
 scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o
 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
+7
drivers/firmware/arm_scmi/base.c
···
 	BASE_DISCOVER_LIST_PROTOCOLS = 0x6,
 	BASE_DISCOVER_AGENT = 0x7,
 	BASE_NOTIFY_ERRORS = 0x8,
+	BASE_SET_DEVICE_PERMISSIONS = 0x9,
+	BASE_SET_PROTOCOL_PERMISSIONS = 0xa,
+	BASE_RESET_AGENT_CONFIGURATION = 0xb,
+};
+
+enum scmi_base_protocol_notify {
+	BASE_ERROR_EVENT = 0x0,
 };
 
 struct scmi_msg_resp_base_attributes {
+11
drivers/firmware/arm_scmi/common.h
···
  * @send_message: Callback to send a message
  * @mark_txdone: Callback to mark tx as done
  * @fetch_response: Callback to fetch response
+ * @fetch_notification: Callback to fetch notification
+ * @clear_channel: Callback to clear a channel
  * @poll_done: Callback to poll transfer status
  */
 struct scmi_transport_ops {
···
 	void (*mark_txdone)(struct scmi_chan_info *cinfo, int ret);
 	void (*fetch_response)(struct scmi_chan_info *cinfo,
 			       struct scmi_xfer *xfer);
+	void (*fetch_notification)(struct scmi_chan_info *cinfo,
+				   size_t max_len, struct scmi_xfer *xfer);
+	void (*clear_channel)(struct scmi_chan_info *cinfo);
 	bool (*poll_done)(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer);
 };
···
 };
 
 extern const struct scmi_desc scmi_mailbox_desc;
+#ifdef CONFIG_HAVE_ARM_SMCCC
+extern const struct scmi_desc scmi_smc_desc;
+#endif
 
 void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr);
 void scmi_free_channel(struct scmi_chan_info *cinfo, struct idr *idr, int id);
···
 u32 shmem_read_header(struct scmi_shared_mem __iomem *shmem);
 void shmem_fetch_response(struct scmi_shared_mem __iomem *shmem,
 			  struct scmi_xfer *xfer);
+void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+			      size_t max_len, struct scmi_xfer *xfer);
+void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem);
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer);
+109 -32
drivers/firmware/arm_scmi/driver.c
···
  *	implementation version and (sub-)vendor identification.
  * @handle: Instance of SCMI handle to send to clients
  * @tx_minfo: Universal Transmit Message management info
+ * @rx_minfo: Universal Receive Message management info
  * @tx_idr: IDR object to map protocol id to Tx channel info pointer
  * @rx_idr: IDR object to map protocol id to Rx channel info pointer
  * @protocols_imp: List of protocols implemented, currently maximum of
···
 	struct scmi_revision_info version;
 	struct scmi_handle handle;
 	struct scmi_xfers_info tx_minfo;
+	struct scmi_xfers_info rx_minfo;
 	struct idr tx_idr;
 	struct idr rx_idr;
 	u8 *protocols_imp;
···
 	spin_unlock_irqrestore(&minfo->xfer_lock, flags);
 }
 
+static void scmi_handle_notification(struct scmi_chan_info *cinfo, u32 msg_hdr)
+{
+	struct scmi_xfer *xfer;
+	struct device *dev = cinfo->dev;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct scmi_xfers_info *minfo = &info->rx_minfo;
+
+	xfer = scmi_xfer_get(cinfo->handle, minfo);
+	if (IS_ERR(xfer)) {
+		dev_err(dev, "failed to get free message slot (%ld)\n",
+			PTR_ERR(xfer));
+		info->desc->ops->clear_channel(cinfo);
+		return;
+	}
+
+	unpack_scmi_header(msg_hdr, &xfer->hdr);
+	scmi_dump_header_dbg(dev, &xfer->hdr);
+	info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size,
+					    xfer);
+
+	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
+			   xfer->hdr.protocol_id, xfer->hdr.seq,
+			   MSG_TYPE_NOTIFICATION);
+
+	__scmi_xfer_put(minfo, xfer);
+
+	info->desc->ops->clear_channel(cinfo);
+}
+
+static void scmi_handle_response(struct scmi_chan_info *cinfo,
+				 u16 xfer_id, u8 msg_type)
+{
+	struct scmi_xfer *xfer;
+	struct device *dev = cinfo->dev;
+	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
+	struct scmi_xfers_info *minfo = &info->tx_minfo;
+
+	/* Are we even expecting this? */
+	if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
+		dev_err(dev, "message for %d is not expected!\n", xfer_id);
+		info->desc->ops->clear_channel(cinfo);
+		return;
+	}
+
+	xfer = &minfo->xfer_block[xfer_id];
+	/*
+	 * Even if a response was indeed expected on this slot at this point,
+	 * a buggy platform could wrongly reply feeding us an unexpected
+	 * delayed response we're not prepared to handle: bail-out safely
+	 * blaming firmware.
+	 */
+	if (unlikely(msg_type == MSG_TYPE_DELAYED_RESP && !xfer->async_done)) {
+		dev_err(dev,
+			"Delayed Response for %d not expected! Buggy F/W ?\n",
+			xfer_id);
+		info->desc->ops->clear_channel(cinfo);
+		/* It was unexpected, so nobody will clear the xfer if not us */
+		__scmi_xfer_put(minfo, xfer);
+		return;
+	}
+
+	scmi_dump_header_dbg(dev, &xfer->hdr);
+
+	info->desc->ops->fetch_response(cinfo, xfer);
+
+	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
+			   xfer->hdr.protocol_id, xfer->hdr.seq,
+			   msg_type);
+
+	if (msg_type == MSG_TYPE_DELAYED_RESP) {
+		info->desc->ops->clear_channel(cinfo);
+		complete(xfer->async_done);
+	} else {
+		complete(&xfer->done);
+	}
+}
+
 /**
  * scmi_rx_callback() - callback for receiving messages
  *
···
  */
 void scmi_rx_callback(struct scmi_chan_info *cinfo, u32 msg_hdr)
 {
-	struct scmi_info *info = handle_to_scmi_info(cinfo->handle);
-	struct scmi_xfers_info *minfo = &info->tx_minfo;
 	u16 xfer_id = MSG_XTRACT_TOKEN(msg_hdr);
 	u8 msg_type = MSG_XTRACT_TYPE(msg_hdr);
-	struct device *dev = cinfo->dev;
-	struct scmi_xfer *xfer;
 
-	if (msg_type == MSG_TYPE_NOTIFICATION)
-		return; /* Notifications not yet supported */
-
-	/* Are we even expecting this? */
-	if (!test_bit(xfer_id, minfo->xfer_alloc_table)) {
-		dev_err(dev, "message for %d is not expected!\n", xfer_id);
-		return;
+	switch (msg_type) {
+	case MSG_TYPE_NOTIFICATION:
+		scmi_handle_notification(cinfo, msg_hdr);
+		break;
+	case MSG_TYPE_COMMAND:
+	case MSG_TYPE_DELAYED_RESP:
+		scmi_handle_response(cinfo, xfer_id, msg_type);
+		break;
+	default:
+		WARN_ONCE(1, "received unknown msg_type:%d\n", msg_type);
+		break;
 	}
-
-	xfer = &minfo->xfer_block[xfer_id];
-
-	scmi_dump_header_dbg(dev, &xfer->hdr);
-
-	info->desc->ops->fetch_response(cinfo, xfer);
-
-	trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id,
-			   xfer->hdr.protocol_id, xfer->hdr.seq,
-			   msg_type);
-
-	if (msg_type == MSG_TYPE_DELAYED_RESP)
-		complete(xfer->async_done);
-	else
-		complete(&xfer->done);
 }
 
 /**
···
 	return 0;
 }
 
-static int scmi_xfer_info_init(struct scmi_info *sinfo)
+static int __scmi_xfer_info_init(struct scmi_info *sinfo,
+				 struct scmi_xfers_info *info)
 {
 	int i;
 	struct scmi_xfer *xfer;
 	struct device *dev = sinfo->dev;
 	const struct scmi_desc *desc = sinfo->desc;
-	struct scmi_xfers_info *info = &sinfo->tx_minfo;
 
 	/* Pre-allocated messages, no more than what hdr.seq can support */
 	if (WARN_ON(desc->max_msg >= MSG_TOKEN_MAX)) {
···
 	spin_lock_init(&info->xfer_lock);
 
 	return 0;
+}
+
+static int scmi_xfer_info_init(struct scmi_info *sinfo)
+{
+	int ret = __scmi_xfer_info_init(sinfo, &sinfo->tx_minfo);
+
+	if (!ret && idr_find(&sinfo->rx_idr, SCMI_PROTOCOL_BASE))
+		ret = __scmi_xfer_info_init(sinfo, &sinfo->rx_minfo);
+
+	return ret;
 }
 
 static int scmi_chan_setup(struct scmi_info *info, struct device *dev,
···
 	info->desc = desc;
 	INIT_LIST_HEAD(&info->node);
 
-	ret = scmi_xfer_info_init(info);
-	if (ret)
-		return ret;
-
 	platform_set_drvdata(pdev, info);
 	idr_init(&info->tx_idr);
 	idr_init(&info->rx_idr);
···
 	handle->version = &info->version;
 
 	ret = scmi_txrx_setup(info, dev, SCMI_PROTOCOL_BASE);
+	if (ret)
+		return ret;
+
+	ret = scmi_xfer_info_init(info);
 	if (ret)
 		return ret;
 
···
 /* Each compatible listed below must have descriptor associated with it */
 static const struct of_device_id scmi_of_match[] = {
 	{ .compatible = "arm,scmi", .data = &scmi_mailbox_desc },
+#ifdef CONFIG_ARM_PSCI_FW
+	{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
+#endif
 	{ /* Sentinel */ },
 };
+17
drivers/firmware/arm_scmi/mailbox.c
···
 	shmem_fetch_response(smbox->shmem, xfer);
 }
 
+static void mailbox_fetch_notification(struct scmi_chan_info *cinfo,
+				       size_t max_len, struct scmi_xfer *xfer)
+{
+	struct scmi_mailbox *smbox = cinfo->transport_info;
+
+	shmem_fetch_notification(smbox->shmem, max_len, xfer);
+}
+
+static void mailbox_clear_channel(struct scmi_chan_info *cinfo)
+{
+	struct scmi_mailbox *smbox = cinfo->transport_info;
+
+	shmem_clear_channel(smbox->shmem);
+}
+
 static bool
 mailbox_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
 {
···
 	.send_message = mailbox_send_message,
 	.mark_txdone = mailbox_mark_txdone,
 	.fetch_response = mailbox_fetch_response,
+	.fetch_notification = mailbox_fetch_notification,
+	.clear_channel = mailbox_clear_channel,
 	.poll_done = mailbox_poll_done,
 };
+5
drivers/firmware/arm_scmi/perf.c
···
 	PERF_DESCRIBE_FASTCHANNEL = 0xb,
 };
 
+enum scmi_performance_protocol_notify {
+	PERFORMANCE_LIMITS_CHANGED = 0x0,
+	PERFORMANCE_LEVEL_CHANGED = 0x1,
+};
+
 struct scmi_opp {
 	u32 perf;
 	u32 power;
+6
drivers/firmware/arm_scmi/power.c
···
 	POWER_STATE_SET = 0x4,
 	POWER_STATE_GET = 0x5,
 	POWER_STATE_NOTIFY = 0x6,
+	POWER_STATE_CHANGE_REQUESTED_NOTIFY = 0x7,
+};
+
+enum scmi_power_protocol_notify {
+	POWER_STATE_CHANGED = 0x0,
+	POWER_STATE_CHANGE_REQUESTED = 0x1,
 };
 
 struct scmi_msg_resp_power_attributes {
+4
drivers/firmware/arm_scmi/sensors.c
···
 	SENSOR_READING_GET = 0x6,
 };
 
+enum scmi_sensor_protocol_notify {
+	SENSOR_TRIP_POINT_EVENT = 0x0,
+};
+
 struct scmi_msg_resp_sensor_attributes {
 	__le16 num_sensors;
 	u8 max_requests;
+15
drivers/firmware/arm_scmi/shmem.c
···
 	memcpy_fromio(xfer->rx.buf, shmem->msg_payload + 4, xfer->rx.len);
 }
 
+void shmem_fetch_notification(struct scmi_shared_mem __iomem *shmem,
+			      size_t max_len, struct scmi_xfer *xfer)
+{
+	/* Skip only the length of header in shmem area i.e 4 bytes */
+	xfer->rx.len = min_t(size_t, max_len, ioread32(&shmem->length) - 4);
+
+	/* Take a copy to the rx buffer.. */
+	memcpy_fromio(xfer->rx.buf, shmem->msg_payload, xfer->rx.len);
+}
+
+void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem)
+{
+	iowrite32(SCMI_SHMEM_CHAN_STAT_CHANNEL_FREE, &shmem->channel_status);
+}
+
 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem,
 		     struct scmi_xfer *xfer)
 {
+153
drivers/firmware/arm_scmi/smc.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * System Control and Management Interface (SCMI) Message SMC/HVC
+ * Transport driver
+ *
+ * Copyright 2020 NXP
+ */
+
+#include <linux/arm-smccc.h>
+#include <linux/device.h>
+#include <linux/err.h>
+#include <linux/mutex.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/slab.h>
+
+#include "common.h"
+
+/**
+ * struct scmi_smc - Structure representing a SCMI smc transport
+ *
+ * @cinfo: SCMI channel info
+ * @shmem: Transmit/Receive shared memory area
+ * @func_id: smc/hvc call function id
+ */
+
+struct scmi_smc {
+	struct scmi_chan_info *cinfo;
+	struct scmi_shared_mem __iomem *shmem;
+	struct mutex shmem_lock;
+	u32 func_id;
+};
+
+static bool smc_chan_available(struct device *dev, int idx)
+{
+	struct device_node *np = of_parse_phandle(dev->of_node, "shmem", 0);
+	if (!np)
+		return false;
+
+	of_node_put(np);
+	return true;
+}
+
+static int smc_chan_setup(struct scmi_chan_info *cinfo, struct device *dev,
+			  bool tx)
+{
+	struct device *cdev = cinfo->dev;
+	struct scmi_smc *scmi_info;
+	resource_size_t size;
+	struct resource res;
+	struct device_node *np;
+	u32 func_id;
+	int ret;
+
+	if (!tx)
+		return -ENODEV;
+
+	scmi_info = devm_kzalloc(dev, sizeof(*scmi_info), GFP_KERNEL);
+	if (!scmi_info)
+		return -ENOMEM;
+
+	np = of_parse_phandle(cdev->of_node, "shmem", 0);
+	ret = of_address_to_resource(np, 0, &res);
+	of_node_put(np);
+	if (ret) {
+		dev_err(cdev, "failed to get SCMI Tx shared memory\n");
+		return ret;
+	}
+
+	size = resource_size(&res);
+	scmi_info->shmem = devm_ioremap(dev, res.start, size);
+	if (!scmi_info->shmem) {
+		dev_err(dev, "failed to ioremap SCMI Tx shared memory\n");
+		return -EADDRNOTAVAIL;
+	}
+
+	ret = of_property_read_u32(dev->of_node, "arm,smc-id", &func_id);
+	if (ret < 0)
+		return ret;
+
+	scmi_info->func_id = func_id;
+	scmi_info->cinfo = cinfo;
+	mutex_init(&scmi_info->shmem_lock);
+	cinfo->transport_info = scmi_info;
+
+	return 0;
+}
+
+static int smc_chan_free(int id, void *p, void *data)
+{
+	struct scmi_chan_info *cinfo = p;
+	struct scmi_smc *scmi_info = cinfo->transport_info;
+
+	cinfo->transport_info = NULL;
+	scmi_info->cinfo = NULL;
+
+	scmi_free_channel(cinfo, data, id);
+
+	return 0;
+}
+
+static int smc_send_message(struct scmi_chan_info *cinfo,
+			    struct scmi_xfer *xfer)
+{
+	struct scmi_smc *scmi_info = cinfo->transport_info;
+	struct arm_smccc_res res;
+
+	mutex_lock(&scmi_info->shmem_lock);
+
+	shmem_tx_prepare(scmi_info->shmem, xfer);
+
+	arm_smccc_1_1_invoke(scmi_info->func_id, 0, 0, 0, 0, 0, 0, 0, &res);
+	scmi_rx_callback(scmi_info->cinfo, shmem_read_header(scmi_info->shmem));
+
+	mutex_unlock(&scmi_info->shmem_lock);
+
+	/* Only SMCCC_RET_NOT_SUPPORTED is valid error code */
+	if (res.a0)
+		return -EOPNOTSUPP;
+	return 0;
+}
+
+static void smc_fetch_response(struct scmi_chan_info *cinfo,
+			       struct scmi_xfer *xfer)
+{
+	struct scmi_smc *scmi_info = cinfo->transport_info;
+
+	shmem_fetch_response(scmi_info->shmem, xfer);
+}
+
+static bool
+smc_poll_done(struct scmi_chan_info *cinfo, struct scmi_xfer *xfer)
+{
+	struct scmi_smc *scmi_info = cinfo->transport_info;
+
+	return shmem_poll_done(scmi_info->shmem, xfer);
+}
+
+static struct scmi_transport_ops scmi_smc_ops = {
+	.chan_available = smc_chan_available,
+	.chan_setup = smc_chan_setup,
+	.chan_free = smc_chan_free,
+	.send_message = smc_send_message,
+	.fetch_response = smc_fetch_response,
+	.poll_done = smc_poll_done,
+};
+
+const struct scmi_desc scmi_smc_desc = {
+	.ops = &scmi_smc_ops,
+	.max_rx_timeout_ms = 30,
+	.max_msg = 1,
+	.max_msg_size = 128,
+};
+48 -16
drivers/firmware/imx/imx-scu.c
···
  */
 
 #include <linux/err.h>
-#include <linux/firmware/imx/types.h>
 #include <linux/firmware/imx/ipc.h>
 #include <linux/firmware/imx/sci.h>
 #include <linux/interrupt.h>
···
 	struct device *dev;
 	struct mutex lock;
 	struct completion done;
+	bool fast_ipc;
 
 	/* temporarily store the SCU msg */
 	u32 *msg;
···
 	struct imx_sc_ipc *sc_ipc = sc_chan->sc_ipc;
 	struct imx_sc_rpc_msg *hdr;
 	u32 *data = msg;
+	int i;
 
 	if (!sc_ipc->msg) {
 		dev_warn(sc_ipc->dev, "unexpected rx idx %d 0x%08x, ignore!\n",
			 sc_chan->idx, *data);
+		return;
+	}
+
+	if (sc_ipc->fast_ipc) {
+		hdr = msg;
+		sc_ipc->rx_size = hdr->size;
+		sc_ipc->msg[0] = *data++;
+
+		for (i = 1; i < sc_ipc->rx_size; i++)
+			sc_ipc->msg[i] = *data++;
+
+		complete(&sc_ipc->done);
+
 		return;
 	}
 
···
 
 static int imx_scu_ipc_write(struct imx_sc_ipc *sc_ipc, void *msg)
 {
-	struct imx_sc_rpc_msg *hdr = msg;
+	struct imx_sc_rpc_msg hdr = *(struct imx_sc_rpc_msg *)msg;
 	struct imx_sc_chan *sc_chan;
 	u32 *data = msg;
 	int ret;
+	int size;
 	int i;
 
 	/* Check size */
-	if (hdr->size > IMX_SC_RPC_MAX_MSG)
+	if (hdr.size > IMX_SC_RPC_MAX_MSG)
 		return -EINVAL;
 
-	dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr->svc,
-		hdr->func, hdr->size);
+	dev_dbg(sc_ipc->dev, "RPC SVC %u FUNC %u SIZE %u\n", hdr.svc,
+		hdr.func, hdr.size);
 
-	for (i = 0; i < hdr->size; i++) {
+	size = sc_ipc->fast_ipc ? 1 : hdr.size;
+	for (i = 0; i < size; i++) {
 		sc_chan = &sc_ipc->chans[i % 4];
 
 		/*
···
 		 * Wait for tx_done before every send to ensure that no
 		 * queueing happens at the mailbox channel level.
 		 */
-		wait_for_completion(&sc_chan->tx_done);
-		reinit_completion(&sc_chan->tx_done);
+		if (!sc_ipc->fast_ipc) {
+			wait_for_completion(&sc_chan->tx_done);
+			reinit_completion(&sc_chan->tx_done);
+		}
 
 		ret = mbox_send_message(sc_chan->ch, &data[i]);
 		if (ret < 0)
···
 	struct imx_sc_chan *sc_chan;
 	struct mbox_client *cl;
 	char *chan_name;
+	struct of_phandle_args args;
+	int num_channel;
 	int ret;
 	int i;
···
 	if (!sc_ipc)
 		return -ENOMEM;
 
-	for (i = 0; i < SCU_MU_CHAN_NUM; i++) {
-		if (i < 4)
+	ret = of_parse_phandle_with_args(pdev->dev.of_node, "mboxes",
+					 "#mbox-cells", 0, &args);
+	if (ret)
+		return ret;
+
+	sc_ipc->fast_ipc = of_device_is_compatible(args.np, "fsl,imx8-mu-scu");
+
+	num_channel = sc_ipc->fast_ipc ? 2 : SCU_MU_CHAN_NUM;
+	for (i = 0; i < num_channel; i++) {
+		if (i < num_channel / 2)
 			chan_name = kasprintf(GFP_KERNEL, "tx%d", i);
 		else
-			chan_name = kasprintf(GFP_KERNEL, "rx%d", i - 4);
+			chan_name = kasprintf(GFP_KERNEL, "rx%d",
+					      i - num_channel / 2);
 
 		if (!chan_name)
 			return -ENOMEM;
···
 		cl->knows_txdone = true;
 		cl->rx_callback = imx_scu_rx_callback;
 
-		/* Initial tx_done completion as "done" */
-		cl->tx_done = imx_scu_tx_done;
-		init_completion(&sc_chan->tx_done);
-		complete(&sc_chan->tx_done);
+		if (!sc_ipc->fast_ipc) {
+			/* Initial tx_done completion as "done" */
+			cl->tx_done = imx_scu_tx_done;
+			init_completion(&sc_chan->tx_done);
+			complete(&sc_chan->tx_done);
+		}
 
 		sc_chan->sc_ipc = sc_ipc;
-		sc_chan->idx = i % 4;
+		sc_chan->idx = i % (num_channel / 2);
 		sc_chan->ch = mbox_request_channel_byname(cl, chan_name);
 		if (IS_ERR(sc_chan->ch)) {
 			ret = PTR_ERR(sc_chan->ch);
 			if (ret != -EPROBE_DEFER)
 				dev_err(dev, "Failed to request mbox chan %s ret %d\n",
					chan_name, ret);
+			kfree(chan_name);
 			return ret;
 		}
 
+1 -1
drivers/firmware/qcom_scm-legacy.c
···
 	__le32 buf_offset;
 	__le32 resp_hdr_offset;
 	__le32 id;
-	__le32 buf[0];
+	__le32 buf[];
 };
 
 /**
+4 -7
drivers/firmware/qcom_scm.c
···
 #include <linux/init.h>
 #include <linux/cpumask.h>
 #include <linux/export.h>
-#include <linux/dma-direct.h>
 #include <linux/dma-mapping.h>
 #include <linux/module.h>
 #include <linux/types.h>
···
 	struct qcom_scm_mem_map_info *mem_to_map;
 	phys_addr_t mem_to_map_phys;
 	phys_addr_t dest_phys;
-	phys_addr_t ptr_phys;
-	dma_addr_t ptr_dma;
+	dma_addr_t ptr_phys;
 	size_t mem_to_map_sz;
 	size_t dest_sz;
 	size_t src_sz;
···
 	ptr_sz = ALIGN(src_sz, SZ_64) + ALIGN(mem_to_map_sz, SZ_64) +
			ALIGN(dest_sz, SZ_64);
 
-	ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_dma, GFP_KERNEL);
+	ptr = dma_alloc_coherent(__scm->dev, ptr_sz, &ptr_phys, GFP_KERNEL);
 	if (!ptr)
 		return -ENOMEM;
-	ptr_phys = dma_to_phys(__scm->dev, ptr_dma);
 
 	/* Fill source vmid detail */
 	src = ptr;
···
 
 	ret = __qcom_scm_assign_mem(__scm->dev, mem_to_map_phys, mem_to_map_sz,
				    ptr_phys, src_sz, dest_phys, dest_sz);
-	dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_dma);
+	dma_free_coherent(__scm->dev, ptr_sz, ptr, ptr_phys);
 	if (ret) {
 		dev_err(__scm->dev,
 			"Assign memory protection call failed %d\n", ret);
···
 
 	qcom_scm_clk_disable();
 
-	return ret > 0 ? true : false;
+	return ret > 0;
 }
 EXPORT_SYMBOL(qcom_scm_hdcp_available);
 
+2 -2
drivers/firmware/tegra/bpmp-tegra186.c
···
 	priv->tx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 0);
 	if (!priv->tx.pool) {
 		dev_err(bpmp->dev, "TX shmem pool not found\n");
-		return -ENOMEM;
+		return -EPROBE_DEFER;
 	}
 
 	priv->tx.virt = gen_pool_dma_alloc(priv->tx.pool, 4096, &priv->tx.phys);
···
 	priv->rx.pool = of_gen_pool_get(bpmp->dev->of_node, "shmem", 1);
 	if (!priv->rx.pool) {
 		dev_err(bpmp->dev, "RX shmem pool not found\n");
-		err = -ENOMEM;
+		err = -EPROBE_DEFER;
 		goto free_tx;
 	}
 
+1
drivers/gpu/drm/mediatek/Kconfig
···
 	select DRM_MIPI_DSI
 	select DRM_PANEL
 	select MEMORY
+	select MTK_MMSYS
 	select MTK_SMI
 	select VIDEOMODE_HELPERS
 	help
+4 -1
drivers/gpu/drm/mediatek/mtk_disp_color.c
···
 	ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id,
				&mtk_disp_color_funcs);
 	if (ret) {
-		dev_err(dev, "Failed to initialize component: %d\n", ret);
+		if (ret != -EPROBE_DEFER)
+			dev_err(dev, "Failed to initialize component: %d\n",
+				ret);
+
 		return ret;
 	}
 
+4 -1
drivers/gpu/drm/mediatek/mtk_disp_ovl.c
··· 386 386 ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, 387 387 &mtk_disp_ovl_funcs); 388 388 if (ret) { 389 - dev_err(dev, "Failed to initialize component: %d\n", ret); 389 + if (ret != -EPROBE_DEFER) 390 + dev_err(dev, "Failed to initialize component: %d\n", 391 + ret); 392 + 390 393 return ret; 391 394 } 392 395
+4 -1
drivers/gpu/drm/mediatek/mtk_disp_rdma.c
··· 294 294 ret = mtk_ddp_comp_init(dev, dev->of_node, &priv->ddp_comp, comp_id, 295 295 &mtk_disp_rdma_funcs); 296 296 if (ret) { 297 - dev_err(dev, "Failed to initialize component: %d\n", ret); 297 + if (ret != -EPROBE_DEFER) 298 + dev_err(dev, "Failed to initialize component: %d\n", 299 + ret); 300 + 298 301 return ret; 299 302 } 300 303
+9 -3
drivers/gpu/drm/mediatek/mtk_dpi.c
··· 739 739 dpi->engine_clk = devm_clk_get(dev, "engine"); 740 740 if (IS_ERR(dpi->engine_clk)) { 741 741 ret = PTR_ERR(dpi->engine_clk); 742 - dev_err(dev, "Failed to get engine clock: %d\n", ret); 742 + if (ret != -EPROBE_DEFER) 743 + dev_err(dev, "Failed to get engine clock: %d\n", ret); 744 + 743 745 return ret; 744 746 } 745 747 746 748 dpi->pixel_clk = devm_clk_get(dev, "pixel"); 747 749 if (IS_ERR(dpi->pixel_clk)) { 748 750 ret = PTR_ERR(dpi->pixel_clk); 749 - dev_err(dev, "Failed to get pixel clock: %d\n", ret); 751 + if (ret != -EPROBE_DEFER) 752 + dev_err(dev, "Failed to get pixel clock: %d\n", ret); 753 + 750 754 return ret; 751 755 } 752 756 753 757 dpi->tvd_clk = devm_clk_get(dev, "pll"); 754 758 if (IS_ERR(dpi->tvd_clk)) { 755 759 ret = PTR_ERR(dpi->tvd_clk); 756 - dev_err(dev, "Failed to get tvdpll clock: %d\n", ret); 760 + if (ret != -EPROBE_DEFER) 761 + dev_err(dev, "Failed to get tvdpll clock: %d\n", ret); 762 + 757 763 return ret; 758 764 } 759 765
+10 -9
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
··· 6 6 #include <linux/clk.h> 7 7 #include <linux/pm_runtime.h> 8 8 #include <linux/soc/mediatek/mtk-cmdq.h> 9 + #include <linux/soc/mediatek/mtk-mmsys.h> 9 10 10 11 #include <asm/barrier.h> 11 12 #include <soc/mediatek/smi.h> ··· 29 28 * @enabled: records whether crtc_enable succeeded 30 29 * @planes: array of 4 drm_plane structures, one for each overlay plane 31 30 * @pending_planes: whether any plane has pending changes to be applied 32 - * @config_regs: memory mapped mmsys configuration register space 31 + * @mmsys_dev: pointer to the mmsys device for configuration registers 33 32 * @mutex: handle to one of the ten disp_mutex streams 34 33 * @ddp_comp_nr: number of components in ddp_comp 35 34 * @ddp_comp: array of pointers the mtk_ddp_comp structures used by this crtc ··· 51 50 u32 cmdq_event; 52 51 #endif 53 52 54 - void __iomem *config_regs; 53 + struct device *mmsys_dev; 55 54 struct mtk_disp_mutex *mutex; 56 55 unsigned int ddp_comp_nr; 57 56 struct mtk_ddp_comp **ddp_comp; ··· 301 300 302 301 DRM_DEBUG_DRIVER("mediatek_ddp_ddp_path_setup\n"); 303 302 for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) { 304 - mtk_ddp_add_comp_to_path(mtk_crtc->config_regs, 305 - mtk_crtc->ddp_comp[i]->id, 306 - mtk_crtc->ddp_comp[i + 1]->id); 303 + mtk_mmsys_ddp_connect(mtk_crtc->mmsys_dev, 304 + mtk_crtc->ddp_comp[i]->id, 305 + mtk_crtc->ddp_comp[i + 1]->id); 307 306 mtk_disp_mutex_add_comp(mtk_crtc->mutex, 308 307 mtk_crtc->ddp_comp[i]->id); 309 308 } ··· 361 360 mtk_crtc->ddp_comp[i]->id); 362 361 mtk_disp_mutex_disable(mtk_crtc->mutex); 363 362 for (i = 0; i < mtk_crtc->ddp_comp_nr - 1; i++) { 364 - mtk_ddp_remove_comp_from_path(mtk_crtc->config_regs, 365 - mtk_crtc->ddp_comp[i]->id, 366 - mtk_crtc->ddp_comp[i + 1]->id); 363 + mtk_mmsys_ddp_disconnect(mtk_crtc->mmsys_dev, 364 + mtk_crtc->ddp_comp[i]->id, 365 + mtk_crtc->ddp_comp[i + 1]->id); 367 366 mtk_disp_mutex_remove_comp(mtk_crtc->mutex, 368 367 mtk_crtc->ddp_comp[i]->id); 369 368 } ··· 767 766 if (!mtk_crtc) 768 
767 return -ENOMEM; 769 768 770 - mtk_crtc->config_regs = priv->config_regs; 769 + mtk_crtc->mmsys_dev = priv->mmsys_dev; 771 770 mtk_crtc->ddp_comp_nr = path_len; 772 771 mtk_crtc->ddp_comp = devm_kmalloc_array(dev, mtk_crtc->ddp_comp_nr, 773 772 sizeof(*mtk_crtc->ddp_comp),
+2 -257
drivers/gpu/drm/mediatek/mtk_drm_ddp.c
··· 13 13 #include "mtk_drm_ddp.h" 14 14 #include "mtk_drm_ddp_comp.h" 15 15 16 - #define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN 0x040 17 - #define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN 0x044 18 - #define DISP_REG_CONFIG_DISP_OD_MOUT_EN 0x048 19 - #define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN 0x04c 20 - #define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN 0x050 21 - #define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN 0x084 22 - #define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN 0x088 23 - #define DISP_REG_CONFIG_DSIE_SEL_IN 0x0a4 24 - #define DISP_REG_CONFIG_DSIO_SEL_IN 0x0a8 25 - #define DISP_REG_CONFIG_DPI_SEL_IN 0x0ac 26 - #define DISP_REG_CONFIG_DISP_RDMA2_SOUT 0x0b8 27 - #define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN 0x0c4 28 - #define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN 0x0c8 29 - #define DISP_REG_CONFIG_MMSYS_CG_CON0 0x100 30 - 31 - #define DISP_REG_CONFIG_DISP_OVL_MOUT_EN 0x030 32 - #define DISP_REG_CONFIG_OUT_SEL 0x04c 33 - #define DISP_REG_CONFIG_DSI_SEL 0x050 34 - #define DISP_REG_CONFIG_DPI_SEL 0x064 35 - 36 16 #define MT2701_DISP_MUTEX0_MOD0 0x2c 37 17 #define MT2701_DISP_MUTEX0_SOF0 0x30 38 18 ··· 74 94 #define MUTEX_SOF_DSI2 5 75 95 #define MUTEX_SOF_DSI3 6 76 96 77 - #define OVL0_MOUT_EN_COLOR0 0x1 78 - #define OD_MOUT_EN_RDMA0 0x1 79 - #define OD1_MOUT_EN_RDMA1 BIT(16) 80 - #define UFOE_MOUT_EN_DSI0 0x1 81 - #define COLOR0_SEL_IN_OVL0 0x1 82 - #define OVL1_MOUT_EN_COLOR1 0x1 83 - #define GAMMA_MOUT_EN_RDMA1 0x1 84 - #define RDMA0_SOUT_DPI0 0x2 85 - #define RDMA0_SOUT_DPI1 0x3 86 - #define RDMA0_SOUT_DSI1 0x1 87 - #define RDMA0_SOUT_DSI2 0x4 88 - #define RDMA0_SOUT_DSI3 0x5 89 - #define RDMA1_SOUT_DPI0 0x2 90 - #define RDMA1_SOUT_DPI1 0x3 91 - #define RDMA1_SOUT_DSI1 0x1 92 - #define RDMA1_SOUT_DSI2 0x4 93 - #define RDMA1_SOUT_DSI3 0x5 94 - #define RDMA2_SOUT_DPI0 0x2 95 - #define RDMA2_SOUT_DPI1 0x3 96 - #define RDMA2_SOUT_DSI1 0x1 97 - #define RDMA2_SOUT_DSI2 0x4 98 - #define RDMA2_SOUT_DSI3 0x5 99 - #define DPI0_SEL_IN_RDMA1 0x1 100 - #define DPI0_SEL_IN_RDMA2 0x3 101 - #define 
DPI1_SEL_IN_RDMA1 (0x1 << 8) 102 - #define DPI1_SEL_IN_RDMA2 (0x3 << 8) 103 - #define DSI0_SEL_IN_RDMA1 0x1 104 - #define DSI0_SEL_IN_RDMA2 0x4 105 - #define DSI1_SEL_IN_RDMA1 0x1 106 - #define DSI1_SEL_IN_RDMA2 0x4 107 - #define DSI2_SEL_IN_RDMA1 (0x1 << 16) 108 - #define DSI2_SEL_IN_RDMA2 (0x4 << 16) 109 - #define DSI3_SEL_IN_RDMA1 (0x1 << 16) 110 - #define DSI3_SEL_IN_RDMA2 (0x4 << 16) 111 - #define COLOR1_SEL_IN_OVL1 0x1 112 - 113 - #define OVL_MOUT_EN_RDMA 0x1 114 - #define BLS_TO_DSI_RDMA1_TO_DPI1 0x8 115 - #define BLS_TO_DPI_RDMA1_TO_DSI 0x2 116 - #define DSI_SEL_IN_BLS 0x0 117 - #define DPI_SEL_IN_BLS 0x0 118 - #define DSI_SEL_IN_RDMA 0x1 119 97 120 98 struct mtk_disp_mutex { 121 99 int id; ··· 183 245 .mutex_mod_reg = MT2701_DISP_MUTEX0_MOD0, 184 246 .mutex_sof_reg = MT2701_DISP_MUTEX0_SOF0, 185 247 }; 186 - 187 - static unsigned int mtk_ddp_mout_en(enum mtk_ddp_comp_id cur, 188 - enum mtk_ddp_comp_id next, 189 - unsigned int *addr) 190 - { 191 - unsigned int value; 192 - 193 - if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { 194 - *addr = DISP_REG_CONFIG_DISP_OVL0_MOUT_EN; 195 - value = OVL0_MOUT_EN_COLOR0; 196 - } else if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_RDMA0) { 197 - *addr = DISP_REG_CONFIG_DISP_OVL_MOUT_EN; 198 - value = OVL_MOUT_EN_RDMA; 199 - } else if (cur == DDP_COMPONENT_OD0 && next == DDP_COMPONENT_RDMA0) { 200 - *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; 201 - value = OD_MOUT_EN_RDMA0; 202 - } else if (cur == DDP_COMPONENT_UFOE && next == DDP_COMPONENT_DSI0) { 203 - *addr = DISP_REG_CONFIG_DISP_UFOE_MOUT_EN; 204 - value = UFOE_MOUT_EN_DSI0; 205 - } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { 206 - *addr = DISP_REG_CONFIG_DISP_OVL1_MOUT_EN; 207 - value = OVL1_MOUT_EN_COLOR1; 208 - } else if (cur == DDP_COMPONENT_GAMMA && next == DDP_COMPONENT_RDMA1) { 209 - *addr = DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN; 210 - value = GAMMA_MOUT_EN_RDMA1; 211 - } else if (cur == DDP_COMPONENT_OD1 && next == 
DDP_COMPONENT_RDMA1) { 212 - *addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN; 213 - value = OD1_MOUT_EN_RDMA1; 214 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI0) { 215 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 216 - value = RDMA0_SOUT_DPI0; 217 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI1) { 218 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 219 - value = RDMA0_SOUT_DPI1; 220 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI1) { 221 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 222 - value = RDMA0_SOUT_DSI1; 223 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI2) { 224 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 225 - value = RDMA0_SOUT_DSI2; 226 - } else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI3) { 227 - *addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN; 228 - value = RDMA0_SOUT_DSI3; 229 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { 230 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 231 - value = RDMA1_SOUT_DSI1; 232 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { 233 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 234 - value = RDMA1_SOUT_DSI2; 235 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { 236 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 237 - value = RDMA1_SOUT_DSI3; 238 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { 239 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 240 - value = RDMA1_SOUT_DPI0; 241 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { 242 - *addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN; 243 - value = RDMA1_SOUT_DPI1; 244 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { 245 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 246 - value = RDMA2_SOUT_DPI0; 247 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) { 248 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 249 - value = RDMA2_SOUT_DPI1; 250 - 
} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { 251 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 252 - value = RDMA2_SOUT_DSI1; 253 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { 254 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 255 - value = RDMA2_SOUT_DSI2; 256 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { 257 - *addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT; 258 - value = RDMA2_SOUT_DSI3; 259 - } else { 260 - value = 0; 261 - } 262 - 263 - return value; 264 - } 265 - 266 - static unsigned int mtk_ddp_sel_in(enum mtk_ddp_comp_id cur, 267 - enum mtk_ddp_comp_id next, 268 - unsigned int *addr) 269 - { 270 - unsigned int value; 271 - 272 - if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) { 273 - *addr = DISP_REG_CONFIG_DISP_COLOR0_SEL_IN; 274 - value = COLOR0_SEL_IN_OVL0; 275 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) { 276 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 277 - value = DPI0_SEL_IN_RDMA1; 278 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) { 279 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 280 - value = DPI1_SEL_IN_RDMA1; 281 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI0) { 282 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 283 - value = DSI0_SEL_IN_RDMA1; 284 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) { 285 - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; 286 - value = DSI1_SEL_IN_RDMA1; 287 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) { 288 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 289 - value = DSI2_SEL_IN_RDMA1; 290 - } else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) { 291 - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; 292 - value = DSI3_SEL_IN_RDMA1; 293 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) { 294 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 295 - value = DPI0_SEL_IN_RDMA2; 296 - } else if (cur == DDP_COMPONENT_RDMA2 && next == 
DDP_COMPONENT_DPI1) { 297 - *addr = DISP_REG_CONFIG_DPI_SEL_IN; 298 - value = DPI1_SEL_IN_RDMA2; 299 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI0) { 300 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 301 - value = DSI0_SEL_IN_RDMA2; 302 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) { 303 - *addr = DISP_REG_CONFIG_DSIO_SEL_IN; 304 - value = DSI1_SEL_IN_RDMA2; 305 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) { 306 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 307 - value = DSI2_SEL_IN_RDMA2; 308 - } else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) { 309 - *addr = DISP_REG_CONFIG_DSIE_SEL_IN; 310 - value = DSI3_SEL_IN_RDMA2; 311 - } else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) { 312 - *addr = DISP_REG_CONFIG_DISP_COLOR1_SEL_IN; 313 - value = COLOR1_SEL_IN_OVL1; 314 - } else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { 315 - *addr = DISP_REG_CONFIG_DSI_SEL; 316 - value = DSI_SEL_IN_BLS; 317 - } else { 318 - value = 0; 319 - } 320 - 321 - return value; 322 - } 323 - 324 - static void mtk_ddp_sout_sel(void __iomem *config_regs, 325 - enum mtk_ddp_comp_id cur, 326 - enum mtk_ddp_comp_id next) 327 - { 328 - if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) { 329 - writel_relaxed(BLS_TO_DSI_RDMA1_TO_DPI1, 330 - config_regs + DISP_REG_CONFIG_OUT_SEL); 331 - } else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DPI0) { 332 - writel_relaxed(BLS_TO_DPI_RDMA1_TO_DSI, 333 - config_regs + DISP_REG_CONFIG_OUT_SEL); 334 - writel_relaxed(DSI_SEL_IN_RDMA, 335 - config_regs + DISP_REG_CONFIG_DSI_SEL); 336 - writel_relaxed(DPI_SEL_IN_BLS, 337 - config_regs + DISP_REG_CONFIG_DPI_SEL); 338 - } 339 - } 340 - 341 - void mtk_ddp_add_comp_to_path(void __iomem *config_regs, 342 - enum mtk_ddp_comp_id cur, 343 - enum mtk_ddp_comp_id next) 344 - { 345 - unsigned int addr, value, reg; 346 - 347 - value = mtk_ddp_mout_en(cur, next, &addr); 348 - if (value) { 349 - 
reg = readl_relaxed(config_regs + addr) | value; 350 - writel_relaxed(reg, config_regs + addr); 351 - } 352 - 353 - mtk_ddp_sout_sel(config_regs, cur, next); 354 - 355 - value = mtk_ddp_sel_in(cur, next, &addr); 356 - if (value) { 357 - reg = readl_relaxed(config_regs + addr) | value; 358 - writel_relaxed(reg, config_regs + addr); 359 - } 360 - } 361 - 362 - void mtk_ddp_remove_comp_from_path(void __iomem *config_regs, 363 - enum mtk_ddp_comp_id cur, 364 - enum mtk_ddp_comp_id next) 365 - { 366 - unsigned int addr, value, reg; 367 - 368 - value = mtk_ddp_mout_en(cur, next, &addr); 369 - if (value) { 370 - reg = readl_relaxed(config_regs + addr) & ~value; 371 - writel_relaxed(reg, config_regs + addr); 372 - } 373 - 374 - value = mtk_ddp_sel_in(cur, next, &addr); 375 - if (value) { 376 - reg = readl_relaxed(config_regs + addr) & ~value; 377 - writel_relaxed(reg, config_regs + addr); 378 - } 379 - } 380 248 381 249 struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id) 382 250 { ··· 372 628 if (!ddp->data->no_clk) { 373 629 ddp->clk = devm_clk_get(dev, NULL); 374 630 if (IS_ERR(ddp->clk)) { 375 - dev_err(dev, "Failed to get clock\n"); 631 + if (PTR_ERR(ddp->clk) != -EPROBE_DEFER) 632 + dev_err(dev, "Failed to get clock\n"); 376 633 return PTR_ERR(ddp->clk); 377 634 } 378 635 }
-7
drivers/gpu/drm/mediatek/mtk_drm_ddp.h
··· 12 12 struct device; 13 13 struct mtk_disp_mutex; 14 14 15 - void mtk_ddp_add_comp_to_path(void __iomem *config_regs, 16 - enum mtk_ddp_comp_id cur, 17 - enum mtk_ddp_comp_id next); 18 - void mtk_ddp_remove_comp_from_path(void __iomem *config_regs, 19 - enum mtk_ddp_comp_id cur, 20 - enum mtk_ddp_comp_id next); 21 - 22 15 struct mtk_disp_mutex *mtk_disp_mutex_get(struct device *dev, unsigned int id); 23 16 int mtk_disp_mutex_prepare(struct mtk_disp_mutex *mutex); 24 17 void mtk_disp_mutex_add_comp(struct mtk_disp_mutex *mutex,
+24 -21
drivers/gpu/drm/mediatek/mtk_drm_drv.c
··· 10 10 #include <linux/of_address.h> 11 11 #include <linux/of_platform.h> 12 12 #include <linux/pm_runtime.h> 13 + #include <linux/soc/mediatek/mtk-mmsys.h> 13 14 #include <linux/dma-mapping.h> 14 15 15 16 #include <drm/drm_atomic.h> ··· 419 418 { } 420 419 }; 421 420 421 + static const struct of_device_id mtk_drm_of_ids[] = { 422 + { .compatible = "mediatek,mt2701-mmsys", 423 + .data = &mt2701_mmsys_driver_data}, 424 + { .compatible = "mediatek,mt2712-mmsys", 425 + .data = &mt2712_mmsys_driver_data}, 426 + { .compatible = "mediatek,mt8173-mmsys", 427 + .data = &mt8173_mmsys_driver_data}, 428 + { } 429 + }; 430 + 422 431 static int mtk_drm_probe(struct platform_device *pdev) 423 432 { 424 433 struct device *dev = &pdev->dev; 434 + struct device_node *phandle = dev->parent->of_node; 435 + const struct of_device_id *of_id; 425 436 struct mtk_drm_private *private; 426 - struct resource *mem; 427 437 struct device_node *node; 428 438 struct component_match *match = NULL; 429 439 int ret; ··· 445 433 return -ENOMEM; 446 434 447 435 private->data = of_device_get_match_data(dev); 448 - 449 - mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 450 - private->config_regs = devm_ioremap_resource(dev, mem); 451 - if (IS_ERR(private->config_regs)) { 452 - ret = PTR_ERR(private->config_regs); 453 - dev_err(dev, "Failed to ioremap mmsys-config resource: %d\n", 454 - ret); 455 - return ret; 436 + private->mmsys_dev = dev->parent; 437 + if (!private->mmsys_dev) { 438 + dev_err(dev, "Failed to get MMSYS device\n"); 439 + return -ENODEV; 456 440 } 457 441 442 + of_id = of_match_node(mtk_drm_of_ids, phandle); 443 + if (!of_id) 444 + return -ENODEV; 445 + 446 + private->data = of_id->data; 447 + 458 448 /* Iterate over sibling DISP function blocks */ 459 - for_each_child_of_node(dev->of_node->parent, node) { 449 + for_each_child_of_node(phandle->parent, node) { 460 450 const struct of_device_id *of_id; 461 451 enum mtk_ddp_comp_type comp_type; 462 452 int comp_id; ··· 592 578 
static SIMPLE_DEV_PM_OPS(mtk_drm_pm_ops, mtk_drm_sys_suspend, 593 579 mtk_drm_sys_resume); 594 580 595 - static const struct of_device_id mtk_drm_of_ids[] = { 596 - { .compatible = "mediatek,mt2701-mmsys", 597 - .data = &mt2701_mmsys_driver_data}, 598 - { .compatible = "mediatek,mt2712-mmsys", 599 - .data = &mt2712_mmsys_driver_data}, 600 - { .compatible = "mediatek,mt8173-mmsys", 601 - .data = &mt8173_mmsys_driver_data}, 602 - { } 603 - }; 604 - 605 581 static struct platform_driver mtk_drm_platform_driver = { 606 582 .probe = mtk_drm_probe, 607 583 .remove = mtk_drm_remove, 608 584 .driver = { 609 585 .name = "mediatek-drm", 610 - .of_match_table = mtk_drm_of_ids, 611 586 .pm = &mtk_drm_pm_ops, 612 587 }, 613 588 };
+1 -1
drivers/gpu/drm/mediatek/mtk_drm_drv.h
··· 39 39 40 40 struct device_node *mutex_node; 41 41 struct device *mutex_dev; 42 - void __iomem *config_regs; 42 + struct device *mmsys_dev; 43 43 struct device_node *comp_node[DDP_COMPONENT_ID_MAX]; 44 44 struct mtk_ddp_comp *ddp_comp[DDP_COMPONENT_ID_MAX]; 45 45 const struct mtk_mmsys_driver_data *data;
+6 -2
drivers/gpu/drm/mediatek/mtk_dsi.c
··· 1186 1186 dsi->engine_clk = devm_clk_get(dev, "engine"); 1187 1187 if (IS_ERR(dsi->engine_clk)) { 1188 1188 ret = PTR_ERR(dsi->engine_clk); 1189 - dev_err(dev, "Failed to get engine clock: %d\n", ret); 1189 + 1190 + if (ret != -EPROBE_DEFER) 1191 + dev_err(dev, "Failed to get engine clock: %d\n", ret); 1190 1192 goto err_unregister_host; 1191 1193 } 1192 1194 1193 1195 dsi->digital_clk = devm_clk_get(dev, "digital"); 1194 1196 if (IS_ERR(dsi->digital_clk)) { 1195 1197 ret = PTR_ERR(dsi->digital_clk); 1196 - dev_err(dev, "Failed to get digital clock: %d\n", ret); 1198 + 1199 + if (ret != -EPROBE_DEFER) 1200 + dev_err(dev, "Failed to get digital clock: %d\n", ret); 1197 1201 goto err_unregister_host; 1198 1202 } 1199 1203
+3 -1
drivers/gpu/drm/mediatek/mtk_hdmi.c
··· 1470 1470 1471 1471 ret = mtk_hdmi_get_all_clk(hdmi, np); 1472 1472 if (ret) { 1473 - dev_err(dev, "Failed to get clocks: %d\n", ret); 1473 + if (ret != -EPROBE_DEFER) 1474 + dev_err(dev, "Failed to get clocks: %d\n", ret); 1475 + 1474 1476 return ret; 1475 1477 } 1476 1478
+11
drivers/memory/Kconfig
··· 46 46 tree is used. This bus supports NANDs, external ethernet controller, 47 47 SRAMs, ATA devices, etc. 48 48 49 + config BT1_L2_CTL 50 + bool "Baikal-T1 CM2 L2-RAM Cache Control Block" 51 + depends on MIPS_BAIKAL_T1 || COMPILE_TEST 52 + select MFD_SYSCON 53 + help 54 + Baikal-T1 CPU is based on the MIPS P5600 Warrior IP-core. The CPU 55 + contains the Coherency Manager v2 with an embedded 1MB L2-cache. It's 56 + possible to tune the L2 cache performance by setting the data, 57 + tags and way-select latencies of RAM access. This driver provides a 58 + dt properties-based and sysfs interface for it. 59 + 49 60 config TI_AEMIF 50 61 tristate "Texas Instruments AEMIF driver" 51 62 depends on (ARCH_DAVINCI || ARCH_KEYSTONE) && OF
+1
drivers/memory/Makefile
··· 11 11 obj-$(CONFIG_ATMEL_SDRAMC) += atmel-sdramc.o 12 12 obj-$(CONFIG_ATMEL_EBI) += atmel-ebi.o 13 13 obj-$(CONFIG_ARCH_BRCMSTB) += brcmstb_dpfe.o 14 + obj-$(CONFIG_BT1_L2_CTL) += bt1-l2-ctl.o 14 15 obj-$(CONFIG_TI_AEMIF) += ti-aemif.o 15 16 obj-$(CONFIG_TI_EMIF) += emif.o 16 17 obj-$(CONFIG_OMAP_GPMC) += omap-gpmc.o
+322
drivers/memory/bt1-l2-ctl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2020 BAIKAL ELECTRONICS, JSC 4 + * 5 + * Authors: 6 + * Serge Semin <Sergey.Semin@baikalelectronics.ru> 7 + * 8 + * Baikal-T1 CM2 L2-cache Control Block driver. 9 + */ 10 + 11 + #include <linux/kernel.h> 12 + #include <linux/module.h> 13 + #include <linux/bitfield.h> 14 + #include <linux/types.h> 15 + #include <linux/device.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/regmap.h> 18 + #include <linux/mfd/syscon.h> 19 + #include <linux/sysfs.h> 20 + #include <linux/of.h> 21 + 22 + #define L2_CTL_REG 0x028 23 + #define L2_CTL_DATA_STALL_FLD 0 24 + #define L2_CTL_DATA_STALL_MASK GENMASK(1, L2_CTL_DATA_STALL_FLD) 25 + #define L2_CTL_TAG_STALL_FLD 2 26 + #define L2_CTL_TAG_STALL_MASK GENMASK(3, L2_CTL_TAG_STALL_FLD) 27 + #define L2_CTL_WS_STALL_FLD 4 28 + #define L2_CTL_WS_STALL_MASK GENMASK(5, L2_CTL_WS_STALL_FLD) 29 + #define L2_CTL_SET_CLKRATIO BIT(13) 30 + #define L2_CTL_CLKRATIO_LOCK BIT(31) 31 + 32 + #define L2_CTL_STALL_MIN 0 33 + #define L2_CTL_STALL_MAX 3 34 + #define L2_CTL_STALL_SET_DELAY_US 1 35 + #define L2_CTL_STALL_SET_TOUT_US 1000 36 + 37 + /* 38 + * struct l2_ctl - Baikal-T1 L2 Control block private data. 39 + * @dev: Pointer to the device structure. 40 + * @sys_regs: Baikal-T1 System Controller registers map. 41 + */ 42 + struct l2_ctl { 43 + struct device *dev; 44 + 45 + struct regmap *sys_regs; 46 + }; 47 + 48 + /* 49 + * enum l2_ctl_stall - Baikal-T1 L2-cache-RAM stall identifier. 50 + * @L2_WSSTALL: Way-select latency. 51 + * @L2_TAGSTALL: Tag latency. 52 + * @L2_DATASTALL: Data latency. 53 + */ 54 + enum l2_ctl_stall { 55 + L2_WS_STALL, 56 + L2_TAG_STALL, 57 + L2_DATA_STALL 58 + }; 59 + 60 + /* 61 + * struct l2_ctl_device_attribute - Baikal-T1 L2-cache device attribute. 62 + * @dev_attr: Actual sysfs device attribute. 63 + * @id: L2-cache stall field identifier. 
64 + */ 65 + struct l2_ctl_device_attribute { 66 + struct device_attribute dev_attr; 67 + enum l2_ctl_stall id; 68 + }; 69 + #define to_l2_ctl_dev_attr(_dev_attr) \ 70 + container_of(_dev_attr, struct l2_ctl_device_attribute, dev_attr) 71 + 72 + #define L2_CTL_ATTR_RW(_name, _prefix, _id) \ 73 + struct l2_ctl_device_attribute l2_ctl_attr_##_name = \ 74 + { __ATTR(_name, 0644, _prefix##_show, _prefix##_store), _id } 75 + 76 + static int l2_ctl_get_latency(struct l2_ctl *l2, enum l2_ctl_stall id, u32 *val) 77 + { 78 + u32 data = 0; 79 + int ret; 80 + 81 + ret = regmap_read(l2->sys_regs, L2_CTL_REG, &data); 82 + if (ret) 83 + return ret; 84 + 85 + switch (id) { 86 + case L2_WS_STALL: 87 + *val = FIELD_GET(L2_CTL_WS_STALL_MASK, data); 88 + break; 89 + case L2_TAG_STALL: 90 + *val = FIELD_GET(L2_CTL_TAG_STALL_MASK, data); 91 + break; 92 + case L2_DATA_STALL: 93 + *val = FIELD_GET(L2_CTL_DATA_STALL_MASK, data); 94 + break; 95 + default: 96 + return -EINVAL; 97 + } 98 + 99 + return 0; 100 + } 101 + 102 + static int l2_ctl_set_latency(struct l2_ctl *l2, enum l2_ctl_stall id, u32 val) 103 + { 104 + u32 mask = 0, data = 0; 105 + int ret; 106 + 107 + val = clamp_val(val, L2_CTL_STALL_MIN, L2_CTL_STALL_MAX); 108 + 109 + switch (id) { 110 + case L2_WS_STALL: 111 + data = FIELD_PREP(L2_CTL_WS_STALL_MASK, val); 112 + mask = L2_CTL_WS_STALL_MASK; 113 + break; 114 + case L2_TAG_STALL: 115 + data = FIELD_PREP(L2_CTL_TAG_STALL_MASK, val); 116 + mask = L2_CTL_TAG_STALL_MASK; 117 + break; 118 + case L2_DATA_STALL: 119 + data = FIELD_PREP(L2_CTL_DATA_STALL_MASK, val); 120 + mask = L2_CTL_DATA_STALL_MASK; 121 + break; 122 + default: 123 + return -EINVAL; 124 + } 125 + 126 + data |= L2_CTL_SET_CLKRATIO; 127 + mask |= L2_CTL_SET_CLKRATIO; 128 + 129 + ret = regmap_update_bits(l2->sys_regs, L2_CTL_REG, mask, data); 130 + if (ret) 131 + return ret; 132 + 133 + return regmap_read_poll_timeout(l2->sys_regs, L2_CTL_REG, data, 134 + data & L2_CTL_CLKRATIO_LOCK, 135 + L2_CTL_STALL_SET_DELAY_US, 
136 + L2_CTL_STALL_SET_TOUT_US); 137 + } 138 + 139 + static void l2_ctl_clear_data(void *data) 140 + { 141 + struct l2_ctl *l2 = data; 142 + struct platform_device *pdev = to_platform_device(l2->dev); 143 + 144 + platform_set_drvdata(pdev, NULL); 145 + } 146 + 147 + static struct l2_ctl *l2_ctl_create_data(struct platform_device *pdev) 148 + { 149 + struct device *dev = &pdev->dev; 150 + struct l2_ctl *l2; 151 + int ret; 152 + 153 + l2 = devm_kzalloc(dev, sizeof(*l2), GFP_KERNEL); 154 + if (!l2) 155 + return ERR_PTR(-ENOMEM); 156 + 157 + ret = devm_add_action(dev, l2_ctl_clear_data, l2); 158 + if (ret) { 159 + dev_err(dev, "Can't add L2 CTL data clear action\n"); 160 + return ERR_PTR(ret); 161 + } 162 + 163 + l2->dev = dev; 164 + platform_set_drvdata(pdev, l2); 165 + 166 + return l2; 167 + } 168 + 169 + static int l2_ctl_find_sys_regs(struct l2_ctl *l2) 170 + { 171 + l2->sys_regs = syscon_node_to_regmap(l2->dev->of_node->parent); 172 + if (IS_ERR(l2->sys_regs)) { 173 + dev_err(l2->dev, "Couldn't get L2 CTL register map\n"); 174 + return PTR_ERR(l2->sys_regs); 175 + } 176 + 177 + return 0; 178 + } 179 + 180 + static int l2_ctl_of_parse_property(struct l2_ctl *l2, enum l2_ctl_stall id, 181 + const char *propname) 182 + { 183 + int ret = 0; 184 + u32 data; 185 + 186 + if (!of_property_read_u32(l2->dev->of_node, propname, &data)) { 187 + ret = l2_ctl_set_latency(l2, id, data); 188 + if (ret) 189 + dev_err(l2->dev, "Invalid value of '%s'\n", propname); 190 + } 191 + 192 + return ret; 193 + } 194 + 195 + static int l2_ctl_of_parse(struct l2_ctl *l2) 196 + { 197 + int ret; 198 + 199 + ret = l2_ctl_of_parse_property(l2, L2_WS_STALL, "baikal,l2-ws-latency"); 200 + if (ret) 201 + return ret; 202 + 203 + ret = l2_ctl_of_parse_property(l2, L2_TAG_STALL, "baikal,l2-tag-latency"); 204 + if (ret) 205 + return ret; 206 + 207 + return l2_ctl_of_parse_property(l2, L2_DATA_STALL, 208 + "baikal,l2-data-latency"); 209 + } 210 + 211 + static ssize_t l2_ctl_latency_show(struct device 
*dev, 212 + struct device_attribute *attr, 213 + char *buf) 214 + { 215 + struct l2_ctl_device_attribute *devattr = to_l2_ctl_dev_attr(attr); 216 + struct l2_ctl *l2 = dev_get_drvdata(dev); 217 + u32 data; 218 + int ret; 219 + 220 + ret = l2_ctl_get_latency(l2, devattr->id, &data); 221 + if (ret) 222 + return ret; 223 + 224 + return scnprintf(buf, PAGE_SIZE, "%u\n", data); 225 + } 226 + 227 + static ssize_t l2_ctl_latency_store(struct device *dev, 228 + struct device_attribute *attr, 229 + const char *buf, size_t count) 230 + { 231 + struct l2_ctl_device_attribute *devattr = to_l2_ctl_dev_attr(attr); 232 + struct l2_ctl *l2 = dev_get_drvdata(dev); 233 + u32 data; 234 + int ret; 235 + 236 + if (kstrtouint(buf, 0, &data) < 0) 237 + return -EINVAL; 238 + 239 + ret = l2_ctl_set_latency(l2, devattr->id, data); 240 + if (ret) 241 + return ret; 242 + 243 + return count; 244 + } 245 + static L2_CTL_ATTR_RW(l2_ws_latency, l2_ctl_latency, L2_WS_STALL); 246 + static L2_CTL_ATTR_RW(l2_tag_latency, l2_ctl_latency, L2_TAG_STALL); 247 + static L2_CTL_ATTR_RW(l2_data_latency, l2_ctl_latency, L2_DATA_STALL); 248 + 249 + static struct attribute *l2_ctl_sysfs_attrs[] = { 250 + &l2_ctl_attr_l2_ws_latency.dev_attr.attr, 251 + &l2_ctl_attr_l2_tag_latency.dev_attr.attr, 252 + &l2_ctl_attr_l2_data_latency.dev_attr.attr, 253 + NULL 254 + }; 255 + ATTRIBUTE_GROUPS(l2_ctl_sysfs); 256 + 257 + static void l2_ctl_remove_sysfs(void *data) 258 + { 259 + struct l2_ctl *l2 = data; 260 + 261 + device_remove_groups(l2->dev, l2_ctl_sysfs_groups); 262 + } 263 + 264 + static int l2_ctl_init_sysfs(struct l2_ctl *l2) 265 + { 266 + int ret; 267 + 268 + ret = device_add_groups(l2->dev, l2_ctl_sysfs_groups); 269 + if (ret) { 270 + dev_err(l2->dev, "Failed to create L2 CTL sysfs nodes\n"); 271 + return ret; 272 + } 273 + 274 + ret = devm_add_action_or_reset(l2->dev, l2_ctl_remove_sysfs, l2); 275 + if (ret) 276 + dev_err(l2->dev, "Can't add L2 CTL sysfs remove action\n"); 277 + 278 + return ret; 279 + } 280 + 
281 + static int l2_ctl_probe(struct platform_device *pdev) 282 + { 283 + struct l2_ctl *l2; 284 + int ret; 285 + 286 + l2 = l2_ctl_create_data(pdev); 287 + if (IS_ERR(l2)) 288 + return PTR_ERR(l2); 289 + 290 + ret = l2_ctl_find_sys_regs(l2); 291 + if (ret) 292 + return ret; 293 + 294 + ret = l2_ctl_of_parse(l2); 295 + if (ret) 296 + return ret; 297 + 298 + ret = l2_ctl_init_sysfs(l2); 299 + if (ret) 300 + return ret; 301 + 302 + return 0; 303 + } 304 + 305 + static const struct of_device_id l2_ctl_of_match[] = { 306 + { .compatible = "baikal,bt1-l2-ctl" }, 307 + { } 308 + }; 309 + MODULE_DEVICE_TABLE(of, l2_ctl_of_match); 310 + 311 + static struct platform_driver l2_ctl_driver = { 312 + .probe = l2_ctl_probe, 313 + .driver = { 314 + .name = "bt1-l2-ctl", 315 + .of_match_table = l2_ctl_of_match 316 + } 317 + }; 318 + module_platform_driver(l2_ctl_driver); 319 + 320 + MODULE_AUTHOR("Serge Semin <Sergey.Semin@baikalelectronics.ru>"); 321 + MODULE_DESCRIPTION("Baikal-T1 L2-cache driver"); 322 + MODULE_LICENSE("GPL v2");
+3 -5
drivers/memory/samsung/exynos5422-dmc.c
··· 1091 1091 /* power related timings */ 1092 1092 val = dmc->timings->tFAW / clk_period_ps; 1093 1093 val += dmc->timings->tFAW % clk_period_ps ? 1 : 0; 1094 - val = max(val, dmc->min_tck->tXP); 1094 + val = max(val, dmc->min_tck->tFAW); 1095 1095 reg = &timing_power[0]; 1096 1096 *reg_timing_power |= TIMING_VAL2REG(reg, val); 1097 1097 ··· 1346 1346 struct exynos5_dmc *dmc = priv; 1347 1347 1348 1348 mutex_lock(&dmc->df->lock); 1349 - 1350 1349 exynos5_dmc_perf_events_check(dmc); 1351 - 1352 1350 res = update_devfreq(dmc->df); 1351 + mutex_unlock(&dmc->df->lock); 1352 + 1353 1353 if (res) 1354 1354 dev_warn(dmc->dev, "devfreq failed with %d\n", res); 1355 - 1356 - mutex_unlock(&dmc->df->lock); 1357 1355 1358 1356 return IRQ_HANDLED; 1359 1357 }
+29 -12
drivers/of/of_reserved_mem.c
···
358 358	EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_idx);
359 359	
360 360	/**
361 +	 * of_reserved_mem_device_init_by_name() - assign named reserved memory region
362 +	 *					    to given device
363 +	 * @dev: pointer to the device to configure
364 +	 * @np: pointer to the device node with 'memory-region' property
365 +	 * @name: name of the selected memory region
366 +	 *
367 +	 * Returns: 0 on success or a negative error-code on failure.
368 +	 */
369 +	int of_reserved_mem_device_init_by_name(struct device *dev,
370 +						struct device_node *np,
371 +						const char *name)
372 +	{
373 +		int idx = of_property_match_string(np, "memory-region-names", name);
374 +	
375 +		return of_reserved_mem_device_init_by_idx(dev, np, idx);
376 +	}
377 +	EXPORT_SYMBOL_GPL(of_reserved_mem_device_init_by_name);
378 +	
379 +	/**
361 380	 * of_reserved_mem_device_release() - release reserved memory device structures
362 381	 * @dev:	Pointer to the device to deconfigure
363 382	 *
···
385 366	 */
386 367	void of_reserved_mem_device_release(struct device *dev)
387 368	{
388 -		struct rmem_assigned_device *rd;
389 -		struct reserved_mem *rmem = NULL;
369 +		struct rmem_assigned_device *rd, *tmp;
370 +		LIST_HEAD(release_list);
390 371	
391 372		mutex_lock(&of_rmem_assigned_device_mutex);
392 -		list_for_each_entry(rd, &of_rmem_assigned_device_list, list) {
393 -			if (rd->dev == dev) {
394 -				rmem = rd->rmem;
395 -				list_del(&rd->list);
396 -				kfree(rd);
397 -				break;
398 -			}
373 +		list_for_each_entry_safe(rd, tmp, &of_rmem_assigned_device_list, list) {
374 +			if (rd->dev == dev)
375 +				list_move_tail(&rd->list, &release_list);
399 376		}
400 377		mutex_unlock(&of_rmem_assigned_device_mutex);
401 378	
402 -		if (!rmem || !rmem->ops || !rmem->ops->device_release)
403 -			return;
379 +		list_for_each_entry_safe(rd, tmp, &release_list, list) {
380 +			if (rd->rmem && rd->rmem->ops && rd->rmem->ops->device_release)
381 +				rd->rmem->ops->device_release(rd->rmem, dev);
404 382	
405 -		rmem->ops->device_release(rmem, dev);
383 +			kfree(rd);
384 +		}
406 385	}
407 386	EXPORT_SYMBOL_GPL(of_reserved_mem_device_release);
408 387	
+68 -1
drivers/reset/hisilicon/hi6220_reset.c
···
33 33	enum hi6220_reset_ctrl_type {
34 34		PERIPHERAL,
35 35		MEDIA,
36 +		AO,
36 37	};
37 38	
38 39	struct hi6220_reset_data {
···
93 92		.deassert = hi6220_media_deassert,
94 93	};
95 94	
95 +	#define AO_SCTRL_SC_PW_CLKEN0     0x800
96 +	#define AO_SCTRL_SC_PW_CLKDIS0    0x804
97 +	
98 +	#define AO_SCTRL_SC_PW_RSTEN0     0x810
99 +	#define AO_SCTRL_SC_PW_RSTDIS0    0x814
100 +	
101 +	#define AO_SCTRL_SC_PW_ISOEN0     0x820
102 +	#define AO_SCTRL_SC_PW_ISODIS0    0x824
103 +	#define AO_MAX_INDEX              12
104 +	
105 +	static int hi6220_ao_assert(struct reset_controller_dev *rc_dev,
106 +				    unsigned long idx)
107 +	{
108 +		struct hi6220_reset_data *data = to_reset_data(rc_dev);
109 +		struct regmap *regmap = data->regmap;
110 +		int ret;
111 +	
112 +		ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTEN0, BIT(idx));
113 +		if (ret)
114 +			return ret;
115 +	
116 +		ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISOEN0, BIT(idx));
117 +		if (ret)
118 +			return ret;
119 +	
120 +		ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKDIS0, BIT(idx));
121 +		return ret;
122 +	}
123 +	
124 +	static int hi6220_ao_deassert(struct reset_controller_dev *rc_dev,
125 +				      unsigned long idx)
126 +	{
127 +		struct hi6220_reset_data *data = to_reset_data(rc_dev);
128 +		struct regmap *regmap = data->regmap;
129 +		int ret;
130 +	
131 +		/*
132 +		 * It was suggested to disable isolation before enabling
133 +		 * the clocks and deasserting reset, to avoid glitches.
134 +		 * But this order is preserved to keep it matching the
135 +		 * vendor code.
136 +		 */
137 +		ret = regmap_write(regmap, AO_SCTRL_SC_PW_RSTDIS0, BIT(idx));
138 +		if (ret)
139 +			return ret;
140 +	
141 +		ret = regmap_write(regmap, AO_SCTRL_SC_PW_ISODIS0, BIT(idx));
142 +		if (ret)
143 +			return ret;
144 +	
145 +		ret = regmap_write(regmap, AO_SCTRL_SC_PW_CLKEN0, BIT(idx));
146 +		return ret;
147 +	}
148 +	
149 +	static const struct reset_control_ops hi6220_ao_reset_ops = {
150 +		.assert = hi6220_ao_assert,
151 +		.deassert = hi6220_ao_deassert,
152 +	};
153 +	
96 154	static int hi6220_reset_probe(struct platform_device *pdev)
97 155	{
98 156		struct device_node *np = pdev->dev.of_node;
···
177 117		if (type == MEDIA) {
178 118			data->rc_dev.ops = &hi6220_media_reset_ops;
179 119			data->rc_dev.nr_resets = MEDIA_MAX_INDEX;
180 -		} else {
120 +		} else if (type == PERIPHERAL) {
181 121			data->rc_dev.ops = &hi6220_peripheral_reset_ops;
182 122			data->rc_dev.nr_resets = PERIPH_MAX_INDEX;
123 +		} else {
124 +			data->rc_dev.ops = &hi6220_ao_reset_ops;
125 +			data->rc_dev.nr_resets = AO_MAX_INDEX;
183 126		}
184 127	
185 128		return reset_controller_register(&data->rc_dev);
···
196 133		{
197 134			.compatible = "hisilicon,hi6220-mediactrl",
198 135			.data = (void *)MEDIA,
136 +		},
137 +		{
138 +			.compatible = "hisilicon,hi6220-aoctrl",
139 +			.data = (void *)AO,
199 140		},
200 141		{ /* sentinel */ },
201 142	};
+101
drivers/reset/reset-imx7.c
···
15 15	#include <linux/regmap.h>
16 16	#include <dt-bindings/reset/imx7-reset.h>
17 17	#include <dt-bindings/reset/imx8mq-reset.h>
18 +	#include <dt-bindings/reset/imx8mp-reset.h>
18 19	
19 20	struct imx7_src_signal {
20 21		unsigned int offset, bit;
···
146 145		SRC_DDRC2_RCR		= 0x1004,
147 146	};
148 147	
148 +	enum imx8mp_src_registers {
149 +		SRC_SUPERMIX_RCR	= 0x0018,
150 +		SRC_AUDIOMIX_RCR	= 0x001c,
151 +		SRC_MLMIX_RCR		= 0x0028,
152 +		SRC_GPU2D_RCR		= 0x0038,
153 +		SRC_GPU3D_RCR		= 0x003c,
154 +		SRC_VPU_G1_RCR		= 0x0048,
155 +		SRC_VPU_G2_RCR		= 0x004c,
156 +		SRC_VPUVC8KE_RCR	= 0x0050,
157 +		SRC_NOC_RCR		= 0x0054,
158 +	};
159 +	
149 160	static const struct imx7_src_signal imx8mq_src_signals[IMX8MQ_RESET_NUM] = {
150 161		[IMX8MQ_RESET_A53_CORE_POR_RESET0] = { SRC_A53RCR0, BIT(0) },
151 162		[IMX8MQ_RESET_A53_CORE_POR_RESET1] = { SRC_A53RCR0, BIT(1) },
···
266 253		},
267 254	};
268 255	
256 +	static const struct imx7_src_signal imx8mp_src_signals[IMX8MP_RESET_NUM] = {
257 +		[IMX8MP_RESET_A53_CORE_POR_RESET0] = { SRC_A53RCR0, BIT(0) },
258 +		[IMX8MP_RESET_A53_CORE_POR_RESET1] = { SRC_A53RCR0, BIT(1) },
259 +		[IMX8MP_RESET_A53_CORE_POR_RESET2] = { SRC_A53RCR0, BIT(2) },
260 +		[IMX8MP_RESET_A53_CORE_POR_RESET3] = { SRC_A53RCR0, BIT(3) },
261 +		[IMX8MP_RESET_A53_CORE_RESET0] = { SRC_A53RCR0, BIT(4) },
262 +		[IMX8MP_RESET_A53_CORE_RESET1] = { SRC_A53RCR0, BIT(5) },
263 +		[IMX8MP_RESET_A53_CORE_RESET2] = { SRC_A53RCR0, BIT(6) },
264 +		[IMX8MP_RESET_A53_CORE_RESET3] = { SRC_A53RCR0, BIT(7) },
265 +		[IMX8MP_RESET_A53_DBG_RESET0] = { SRC_A53RCR0, BIT(8) },
266 +		[IMX8MP_RESET_A53_DBG_RESET1] = { SRC_A53RCR0, BIT(9) },
267 +		[IMX8MP_RESET_A53_DBG_RESET2] = { SRC_A53RCR0, BIT(10) },
268 +		[IMX8MP_RESET_A53_DBG_RESET3] = { SRC_A53RCR0, BIT(11) },
269 +		[IMX8MP_RESET_A53_ETM_RESET0] = { SRC_A53RCR0, BIT(12) },
270 +		[IMX8MP_RESET_A53_ETM_RESET1] = { SRC_A53RCR0, BIT(13) },
271 +		[IMX8MP_RESET_A53_ETM_RESET2] = { SRC_A53RCR0, BIT(14) },
272 +		[IMX8MP_RESET_A53_ETM_RESET3] = { SRC_A53RCR0, BIT(15) },
273 +		[IMX8MP_RESET_A53_SOC_DBG_RESET] = { SRC_A53RCR0, BIT(20) },
274 +		[IMX8MP_RESET_A53_L2RESET] = { SRC_A53RCR0, BIT(21) },
275 +		[IMX8MP_RESET_SW_NON_SCLR_M7C_RST] = { SRC_M4RCR, BIT(0) },
276 +		[IMX8MP_RESET_OTG1_PHY_RESET] = { SRC_USBOPHY1_RCR, BIT(0) },
277 +		[IMX8MP_RESET_OTG2_PHY_RESET] = { SRC_USBOPHY2_RCR, BIT(0) },
278 +		[IMX8MP_RESET_SUPERMIX_RESET] = { SRC_SUPERMIX_RCR, BIT(0) },
279 +		[IMX8MP_RESET_AUDIOMIX_RESET] = { SRC_AUDIOMIX_RCR, BIT(0) },
280 +		[IMX8MP_RESET_MLMIX_RESET] = { SRC_MLMIX_RCR, BIT(0) },
281 +		[IMX8MP_RESET_PCIEPHY] = { SRC_PCIEPHY_RCR, BIT(2) },
282 +		[IMX8MP_RESET_PCIEPHY_PERST] = { SRC_PCIEPHY_RCR, BIT(3) },
283 +		[IMX8MP_RESET_PCIE_CTRL_APPS_EN] = { SRC_PCIEPHY_RCR, BIT(6) },
284 +		[IMX8MP_RESET_PCIE_CTRL_APPS_TURNOFF] = { SRC_PCIEPHY_RCR, BIT(11) },
285 +		[IMX8MP_RESET_HDMI_PHY_APB_RESET] = { SRC_HDMI_RCR, BIT(0) },
286 +		[IMX8MP_RESET_MEDIA_RESET] = { SRC_DISP_RCR, BIT(0) },
287 +		[IMX8MP_RESET_GPU2D_RESET] = { SRC_GPU2D_RCR, BIT(0) },
288 +		[IMX8MP_RESET_GPU3D_RESET] = { SRC_GPU3D_RCR, BIT(0) },
289 +		[IMX8MP_RESET_GPU_RESET] = { SRC_GPU_RCR, BIT(0) },
290 +		[IMX8MP_RESET_VPU_RESET] = { SRC_VPU_RCR, BIT(0) },
291 +		[IMX8MP_RESET_VPU_G1_RESET] = { SRC_VPU_G1_RCR, BIT(0) },
292 +		[IMX8MP_RESET_VPU_G2_RESET] = { SRC_VPU_G2_RCR, BIT(0) },
293 +		[IMX8MP_RESET_VPUVC8KE_RESET] = { SRC_VPUVC8KE_RCR, BIT(0) },
294 +		[IMX8MP_RESET_NOC_RESET] = { SRC_NOC_RCR, BIT(0) },
295 +	};
296 +	
297 +	static int imx8mp_reset_set(struct reset_controller_dev *rcdev,
298 +				    unsigned long id, bool assert)
299 +	{
300 +		struct imx7_src *imx7src = to_imx7_src(rcdev);
301 +		const unsigned int bit = imx7src->signals[id].bit;
302 +		unsigned int value = assert ? bit : 0;
303 +	
304 +		switch (id) {
305 +		case IMX8MP_RESET_PCIEPHY:
306 +			/*
307 +			 * wait for more than 10us to release phy g_rst and
308 +			 * btnrst
309 +			 */
310 +			if (!assert)
311 +				udelay(10);
312 +			break;
313 +	
314 +		case IMX8MP_RESET_PCIE_CTRL_APPS_EN:
315 +			value = assert ? 0 : bit;
316 +			break;
317 +		}
318 +	
319 +		return imx7_reset_update(imx7src, id, value);
320 +	}
321 +	
322 +	static int imx8mp_reset_assert(struct reset_controller_dev *rcdev,
323 +				       unsigned long id)
324 +	{
325 +		return imx8mp_reset_set(rcdev, id, true);
326 +	}
327 +	
328 +	static int imx8mp_reset_deassert(struct reset_controller_dev *rcdev,
329 +					 unsigned long id)
330 +	{
331 +		return imx8mp_reset_set(rcdev, id, false);
332 +	}
333 +	
334 +	static const struct imx7_src_variant variant_imx8mp = {
335 +		.signals = imx8mp_src_signals,
336 +		.signals_num = ARRAY_SIZE(imx8mp_src_signals),
337 +		.ops = {
338 +			.assert   = imx8mp_reset_assert,
339 +			.deassert = imx8mp_reset_deassert,
340 +		},
341 +	};
342 +	
269 343	static int imx7_reset_probe(struct platform_device *pdev)
270 344	{
271 345		struct imx7_src *imx7src;
···
383 283	static const struct of_device_id imx7_reset_dt_ids[] = {
384 284		{ .compatible = "fsl,imx7d-src", .data = &variant_imx7 },
385 285		{ .compatible = "fsl,imx8mq-src", .data = &variant_imx8mq },
286 +		{ .compatible = "fsl,imx8mp-src", .data = &variant_imx8mp },
386 287		{ /* sentinel */ },
387 288	};
+101 -11
drivers/soc/amlogic/meson-ee-pwrc.c
···
14 14	#include <linux/reset-controller.h>
15 15	#include <linux/reset.h>
16 16	#include <linux/clk.h>
17 +	#include <dt-bindings/power/meson8-power.h>
17 18	#include <dt-bindings/power/meson-g12a-power.h>
19 +	#include <dt-bindings/power/meson-gxbb-power.h>
18 20	#include <dt-bindings/power/meson-sm1-power.h>
19 21	
20 22	/* AO Offsets */
21 23	
22 -	#define AO_RTI_GEN_PWR_SLEEP0		(0x3a << 2)
23 -	#define AO_RTI_GEN_PWR_ISO0		(0x3b << 2)
24 +	#define GX_AO_RTI_GEN_PWR_SLEEP0	(0x3a << 2)
25 +	#define GX_AO_RTI_GEN_PWR_ISO0		(0x3b << 2)
26 +	
27 +	/*
28 +	 * Meson8/Meson8b/Meson8m2 only expose the power management registers of the
29 +	 * AO-bus as syscon. 0x3a from GX translates to 0x02, 0x3b translates to 0x03
30 +	 * and so on.
31 +	 */
32 +	#define MESON8_AO_RTI_GEN_PWR_SLEEP0	(0x02 << 2)
33 +	#define MESON8_AO_RTI_GEN_PWR_ISO0	(0x03 << 2)
24 34	
25 35	/* HHI Offsets */
26 36	
···
76 66	
77 67	/* TOP Power Domains */
78 68	
79 -	static struct meson_ee_pwrc_top_domain g12a_pwrc_vpu = {
80 -		.sleep_reg = AO_RTI_GEN_PWR_SLEEP0,
69 +	static struct meson_ee_pwrc_top_domain gx_pwrc_vpu = {
70 +		.sleep_reg = GX_AO_RTI_GEN_PWR_SLEEP0,
81 71		.sleep_mask = BIT(8),
82 -		.iso_reg = AO_RTI_GEN_PWR_SLEEP0,
72 +		.iso_reg = GX_AO_RTI_GEN_PWR_SLEEP0,
73 +		.iso_mask = BIT(9),
74 +	};
75 +	
76 +	static struct meson_ee_pwrc_top_domain meson8_pwrc_vpu = {
77 +		.sleep_reg = MESON8_AO_RTI_GEN_PWR_SLEEP0,
78 +		.sleep_mask = BIT(8),
79 +		.iso_reg = MESON8_AO_RTI_GEN_PWR_SLEEP0,
83 80		.iso_mask = BIT(9),
84 81	};
85 82	
86 83	#define SM1_EE_PD(__bit)					\
87 84		{							\
88 -			.sleep_reg = AO_RTI_GEN_PWR_SLEEP0,		\
85 +			.sleep_reg = GX_AO_RTI_GEN_PWR_SLEEP0,		\
89 86			.sleep_mask = BIT(__bit),			\
90 -			.iso_reg = AO_RTI_GEN_PWR_ISO0,			\
87 +			.iso_reg = GX_AO_RTI_GEN_PWR_ISO0,		\
91 88			.iso_mask = BIT(__bit),				\
92 89		}
93 90	
···
141 124		VPU_HHI_MEMPD(HHI_MEM_PD_REG0),
142 125	};
143 126	
144 -	static struct meson_ee_pwrc_mem_domain g12a_pwrc_mem_eth[] = {
127 +	static struct meson_ee_pwrc_mem_domain gxbb_pwrc_mem_vpu[] = {
128 +		VPU_MEMPD(HHI_VPU_MEM_PD_REG0),
129 +		VPU_MEMPD(HHI_VPU_MEM_PD_REG1),
130 +		VPU_HHI_MEMPD(HHI_MEM_PD_REG0),
131 +	};
132 +	
133 +	static struct meson_ee_pwrc_mem_domain meson_pwrc_mem_eth[] = {
145 134		{ HHI_MEM_PD_REG0, GENMASK(3, 2) },
135 +	};
136 +	
137 +	static struct meson_ee_pwrc_mem_domain meson8_pwrc_audio_dsp_mem[] = {
138 +		{ HHI_MEM_PD_REG0, GENMASK(1, 0) },
139 +	};
140 +	
141 +	static struct meson_ee_pwrc_mem_domain meson8_pwrc_mem_vpu[] = {
142 +		VPU_MEMPD(HHI_VPU_MEM_PD_REG0),
143 +		VPU_MEMPD(HHI_VPU_MEM_PD_REG1),
144 +		VPU_HHI_MEMPD(HHI_MEM_PD_REG0),
146 145	};
147 146	
148 147	static struct meson_ee_pwrc_mem_domain sm1_pwrc_mem_vpu[] = {
···
232 199	static bool pwrc_ee_get_power(struct meson_ee_pwrc_domain *pwrc_domain);
233 200	
234 201	static struct meson_ee_pwrc_domain_desc g12a_pwrc_domains[] = {
235 -		[PWRC_G12A_VPU_ID]  = VPU_PD("VPU", &g12a_pwrc_vpu, g12a_pwrc_mem_vpu,
202 +		[PWRC_G12A_VPU_ID]  = VPU_PD("VPU", &gx_pwrc_vpu, g12a_pwrc_mem_vpu,
236 203					     pwrc_ee_get_power, 11, 2),
237 -		[PWRC_G12A_ETH_ID] = MEM_PD("ETH", g12a_pwrc_mem_eth),
204 +		[PWRC_G12A_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
205 +	};
206 +	
207 +	static struct meson_ee_pwrc_domain_desc gxbb_pwrc_domains[] = {
208 +		[PWRC_GXBB_VPU_ID]  = VPU_PD("VPU", &gx_pwrc_vpu, gxbb_pwrc_mem_vpu,
209 +					     pwrc_ee_get_power, 12, 2),
210 +		[PWRC_GXBB_ETHERNET_MEM_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
211 +	};
212 +	
213 +	static struct meson_ee_pwrc_domain_desc meson8_pwrc_domains[] = {
214 +		[PWRC_MESON8_VPU_ID]  = VPU_PD("VPU", &meson8_pwrc_vpu,
215 +					       meson8_pwrc_mem_vpu, pwrc_ee_get_power,
216 +					       0, 1),
217 +		[PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM",
218 +						       meson_pwrc_mem_eth),
219 +		[PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM",
220 +							meson8_pwrc_audio_dsp_mem),
221 +	};
222 +	
223 +	static struct meson_ee_pwrc_domain_desc meson8b_pwrc_domains[] = {
224 +		[PWRC_MESON8_VPU_ID]  = VPU_PD("VPU", &meson8_pwrc_vpu,
225 +					       meson8_pwrc_mem_vpu, pwrc_ee_get_power,
226 +					       11, 1),
227 +		[PWRC_MESON8_ETHERNET_MEM_ID] = MEM_PD("ETHERNET_MEM",
228 +						       meson_pwrc_mem_eth),
229 +		[PWRC_MESON8_AUDIO_DSP_MEM_ID] = MEM_PD("AUDIO_DSP_MEM",
230 +							meson8_pwrc_audio_dsp_mem),
238 231	};
239 232	
240 233	static struct meson_ee_pwrc_domain_desc sm1_pwrc_domains[] = {
···
275 216		[PWRC_SM1_GE2D_ID] = TOP_PD("GE2D", &sm1_pwrc_ge2d, sm1_pwrc_mem_ge2d,
276 217					    pwrc_ee_get_power),
277 218		[PWRC_SM1_AUDIO_ID] = MEM_PD("AUDIO", sm1_pwrc_mem_audio),
278 -		[PWRC_SM1_ETH_ID] = MEM_PD("ETH", g12a_pwrc_mem_eth),
219 +		[PWRC_SM1_ETH_ID] = MEM_PD("ETH", meson_pwrc_mem_eth),
279 220	};
280 221	
281 222	struct meson_ee_pwrc_domain {
···
529 470		.domains = g12a_pwrc_domains,
530 471	};
531 472	
473 +	static struct meson_ee_pwrc_domain_data meson_ee_gxbb_pwrc_data = {
474 +		.count = ARRAY_SIZE(gxbb_pwrc_domains),
475 +		.domains = gxbb_pwrc_domains,
476 +	};
477 +	
478 +	static struct meson_ee_pwrc_domain_data meson_ee_m8_pwrc_data = {
479 +		.count = ARRAY_SIZE(meson8_pwrc_domains),
480 +		.domains = meson8_pwrc_domains,
481 +	};
482 +	
483 +	static struct meson_ee_pwrc_domain_data meson_ee_m8b_pwrc_data = {
484 +		.count = ARRAY_SIZE(meson8b_pwrc_domains),
485 +		.domains = meson8b_pwrc_domains,
486 +	};
487 +	
532 488	static struct meson_ee_pwrc_domain_data meson_ee_sm1_pwrc_data = {
533 489		.count = ARRAY_SIZE(sm1_pwrc_domains),
534 490		.domains = sm1_pwrc_domains,
535 491	};
536 492	
537 493	static const struct of_device_id meson_ee_pwrc_match_table[] = {
494 +		{
495 +			.compatible = "amlogic,meson8-pwrc",
496 +			.data = &meson_ee_m8_pwrc_data,
497 +		},
498 +		{
499 +			.compatible = "amlogic,meson8b-pwrc",
500 +			.data = &meson_ee_m8b_pwrc_data,
501 +		},
502 +		{
503 +			.compatible = "amlogic,meson8m2-pwrc",
504 +			.data = &meson_ee_m8b_pwrc_data,
505 +		},
506 +		{
507 +			.compatible = "amlogic,meson-gxbb-pwrc",
508 +			.data = &meson_ee_gxbb_pwrc_data,
509 +		},
538 510		{
539 511			.compatible = "amlogic,meson-g12a-pwrc",
540 512			.data = &meson_ee_g12a_pwrc_data,
+5 -1
drivers/soc/fsl/dpio/dpio-service.c
···
58 58		 * If cpu == -1, choose the current cpu, with no guarantees about
59 59		 * potentially being migrated away.
60 60		 */
61 -		if (unlikely(cpu < 0))
61 +		if (cpu < 0)
62 62			cpu = smp_processor_id();
63 63	
64 64		/* If a specific cpu was requested, pick it up immediately */
···
67 67	
68 68	static inline struct dpaa2_io *service_select(struct dpaa2_io *d)
69 69	{
70 +		if (d)
71 +			return d;
72 +	
73 +		d = service_select_by_cpu(d, -1);
70 74		if (d)
71 75			return d;
72 76	
-12
drivers/soc/fsl/dpio/qbman-portal.c
···
572 572	#define EQAR_VB(eqar)      ((eqar) & 0x80)
573 573	#define EQAR_SUCCESS(eqar) ((eqar) & 0x100)
574 574	
575 -	static inline void qbman_write_eqcr_am_rt_register(struct qbman_swp *p,
576 -							   u8 idx)
577 -	{
578 -		if (idx < 16)
579 -			qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT + idx * 4,
580 -					     QMAN_RT_MODE);
581 -		else
582 -			qbman_write_register(p, QBMAN_CINH_SWP_EQCR_AM_RT2 +
583 -					     (idx - 16) * 4,
584 -					     QMAN_RT_MODE);
585 -	}
586 -	
587 575	#define QB_RT_BIT ((u32)0x100)
588 576	/**
589 577	 * qbman_swp_enqueue_direct() - Issue an enqueue command
-5
drivers/soc/fsl/qbman/qman.c
···
449 449		return 0;
450 450	}
451 451	
452 -	static inline unsigned int qm_eqcr_get_ci_stashing(struct qm_portal *portal)
453 -	{
454 -		return (qm_in(portal, QM_REG_CFG) >> 28) & 0x7;
455 -	}
456 -	
457 452	static inline void qm_eqcr_finish(struct qm_portal *portal)
458 453	{
459 454		struct qm_eqcr *eqcr = &portal->eqcr;
+2 -2
drivers/soc/fsl/qe/qe.c
···
448 448		unsigned int i;
449 449		unsigned int j;
450 450		u32 crc;
451 -		size_t calc_size = sizeof(struct qe_firmware);
451 +		size_t calc_size;
452 452		size_t length;
453 453		const struct qe_header *hdr;
454 454	
···
480 480		}
481 481	
482 482		/* Validate the length and check if there's a CRC */
483 -		calc_size += (firmware->count - 1) * sizeof(struct qe_microcode);
483 +		calc_size = struct_size(firmware, microcode, firmware->count);
484 484	
485 485		for (i = 0; i < firmware->count; i++)
486 486			/*
+1 -1
drivers/soc/fsl/qe/ucc.c
···
519 519		int clock_bits;
520 520		u32 shift;
521 521		struct qe_mux __iomem *qe_mux_reg;
522 -	        __be32 __iomem *cmxs1cr;
522 +		__be32 __iomem *cmxs1cr;
523 523	
524 524		qe_mux_reg = &qe_immr->qmx;
525 525	
+3 -4
drivers/soc/imx/soc-imx8m.c
···
53 53		struct device_node *np;
54 54		void __iomem *ocotp_base;
55 55		u32 magic;
56 -		u32 rev = 0;
56 +		u32 rev;
57 57	
58 58		np = of_find_compatible_node(NULL, NULL, "fsl,imx8mq-ocotp");
59 59		if (!np)
60 -			goto out;
60 +			return 0;
61 61	
62 62		ocotp_base = of_iomap(np, 0);
63 63		WARN_ON(!ocotp_base);
···
78 78		soc_uid |= readl_relaxed(ocotp_base + OCOTP_UID_LOW);
79 79	
80 80		iounmap(ocotp_base);
81 -	
82 -	out:
83 81		of_node_put(np);
82 +	
84 83		return rev;
85 84	}
86 85	
+7
drivers/soc/mediatek/Kconfig
···
44 44		  Say yes here to add support for the MediaTek SCPSYS power domain
45 45		  driver.
46 46	
47 +	config MTK_MMSYS
48 +		bool "MediaTek MMSYS Support"
49 +		default ARCH_MEDIATEK
50 +		help
51 +		  Say yes here to add support for the MediaTek Multimedia
52 +		  Subsystem (MMSYS).
53 +	
47 54	endmenu
+1
drivers/soc/mediatek/Makefile
···
3 3	obj-$(CONFIG_MTK_INFRACFG) += mtk-infracfg.o
4 4	obj-$(CONFIG_MTK_PMIC_WRAP) += mtk-pmic-wrap.o
5 5	obj-$(CONFIG_MTK_SCPSYS) += mtk-scpsys.o
6 +	obj-$(CONFIG_MTK_MMSYS) += mtk-mmsys.o
+378
drivers/soc/mediatek/mtk-mmsys.c
···
1 +	// SPDX-License-Identifier: GPL-2.0-only
2 +	/*
3 +	 * Copyright (c) 2014 MediaTek Inc.
4 +	 * Author: James Liao <jamesjj.liao@mediatek.com>
5 +	 */
6 +	
7 +	#include <linux/device.h>
8 +	#include <linux/of_device.h>
9 +	#include <linux/platform_device.h>
10 +	#include <linux/soc/mediatek/mtk-mmsys.h>
11 +	
12 +	#include "../../gpu/drm/mediatek/mtk_drm_ddp.h"
13 +	#include "../../gpu/drm/mediatek/mtk_drm_ddp_comp.h"
14 +	
15 +	#define DISP_REG_CONFIG_DISP_OVL0_MOUT_EN	0x040
16 +	#define DISP_REG_CONFIG_DISP_OVL1_MOUT_EN	0x044
17 +	#define DISP_REG_CONFIG_DISP_OD_MOUT_EN		0x048
18 +	#define DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN	0x04c
19 +	#define DISP_REG_CONFIG_DISP_UFOE_MOUT_EN	0x050
20 +	#define DISP_REG_CONFIG_DISP_COLOR0_SEL_IN	0x084
21 +	#define DISP_REG_CONFIG_DISP_COLOR1_SEL_IN	0x088
22 +	#define DISP_REG_CONFIG_DSIE_SEL_IN		0x0a4
23 +	#define DISP_REG_CONFIG_DSIO_SEL_IN		0x0a8
24 +	#define DISP_REG_CONFIG_DPI_SEL_IN		0x0ac
25 +	#define DISP_REG_CONFIG_DISP_RDMA2_SOUT		0x0b8
26 +	#define DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN	0x0c4
27 +	#define DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN	0x0c8
28 +	#define DISP_REG_CONFIG_MMSYS_CG_CON0		0x100
29 +	
30 +	#define DISP_REG_CONFIG_DISP_OVL_MOUT_EN	0x030
31 +	#define DISP_REG_CONFIG_OUT_SEL			0x04c
32 +	#define DISP_REG_CONFIG_DSI_SEL			0x050
33 +	#define DISP_REG_CONFIG_DPI_SEL			0x064
34 +	
35 +	#define OVL0_MOUT_EN_COLOR0			0x1
36 +	#define OD_MOUT_EN_RDMA0			0x1
37 +	#define OD1_MOUT_EN_RDMA1			BIT(16)
38 +	#define UFOE_MOUT_EN_DSI0			0x1
39 +	#define COLOR0_SEL_IN_OVL0			0x1
40 +	#define OVL1_MOUT_EN_COLOR1			0x1
41 +	#define GAMMA_MOUT_EN_RDMA1			0x1
42 +	#define RDMA0_SOUT_DPI0				0x2
43 +	#define RDMA0_SOUT_DPI1				0x3
44 +	#define RDMA0_SOUT_DSI1				0x1
45 +	#define RDMA0_SOUT_DSI2				0x4
46 +	#define RDMA0_SOUT_DSI3				0x5
47 +	#define RDMA1_SOUT_DPI0				0x2
48 +	#define RDMA1_SOUT_DPI1				0x3
49 +	#define RDMA1_SOUT_DSI1				0x1
50 +	#define RDMA1_SOUT_DSI2				0x4
51 +	#define RDMA1_SOUT_DSI3				0x5
52 +	#define RDMA2_SOUT_DPI0				0x2
53 +	#define RDMA2_SOUT_DPI1				0x3
54 +	#define RDMA2_SOUT_DSI1				0x1
55 +	#define RDMA2_SOUT_DSI2				0x4
56 +	#define RDMA2_SOUT_DSI3				0x5
57 +	#define DPI0_SEL_IN_RDMA1			0x1
58 +	#define DPI0_SEL_IN_RDMA2			0x3
59 +	#define DPI1_SEL_IN_RDMA1			(0x1 << 8)
60 +	#define DPI1_SEL_IN_RDMA2			(0x3 << 8)
61 +	#define DSI0_SEL_IN_RDMA1			0x1
62 +	#define DSI0_SEL_IN_RDMA2			0x4
63 +	#define DSI1_SEL_IN_RDMA1			0x1
64 +	#define DSI1_SEL_IN_RDMA2			0x4
65 +	#define DSI2_SEL_IN_RDMA1			(0x1 << 16)
66 +	#define DSI2_SEL_IN_RDMA2			(0x4 << 16)
67 +	#define DSI3_SEL_IN_RDMA1			(0x1 << 16)
68 +	#define DSI3_SEL_IN_RDMA2			(0x4 << 16)
69 +	#define COLOR1_SEL_IN_OVL1			0x1
70 +	
71 +	#define OVL_MOUT_EN_RDMA			0x1
72 +	#define BLS_TO_DSI_RDMA1_TO_DPI1		0x8
73 +	#define BLS_TO_DPI_RDMA1_TO_DSI			0x2
74 +	#define DSI_SEL_IN_BLS				0x0
75 +	#define DPI_SEL_IN_BLS				0x0
76 +	#define DSI_SEL_IN_RDMA				0x1
77 +	
78 +	struct mtk_mmsys_driver_data {
79 +		const char *clk_driver;
80 +	};
81 +	
82 +	static const struct mtk_mmsys_driver_data mt2701_mmsys_driver_data = {
83 +		.clk_driver = "clk-mt2701-mm",
84 +	};
85 +	
86 +	static const struct mtk_mmsys_driver_data mt2712_mmsys_driver_data = {
87 +		.clk_driver = "clk-mt2712-mm",
88 +	};
89 +	
90 +	static const struct mtk_mmsys_driver_data mt6779_mmsys_driver_data = {
91 +		.clk_driver = "clk-mt6779-mm",
92 +	};
93 +	
94 +	static const struct mtk_mmsys_driver_data mt6797_mmsys_driver_data = {
95 +		.clk_driver = "clk-mt6797-mm",
96 +	};
97 +	
98 +	static const struct mtk_mmsys_driver_data mt8173_mmsys_driver_data = {
99 +		.clk_driver = "clk-mt8173-mm",
100 +	};
101 +	
102 +	static const struct mtk_mmsys_driver_data mt8183_mmsys_driver_data = {
103 +		.clk_driver = "clk-mt8183-mm",
104 +	};
105 +	
106 +	static unsigned int mtk_mmsys_ddp_mout_en(enum mtk_ddp_comp_id cur,
107 +						  enum mtk_ddp_comp_id next,
108 +						  unsigned int *addr)
109 +	{
110 +		unsigned int value;
111 +	
112 +		if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) {
113 +			*addr = DISP_REG_CONFIG_DISP_OVL0_MOUT_EN;
114 +			value = OVL0_MOUT_EN_COLOR0;
115 +		} else if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_RDMA0) {
116 +			*addr = DISP_REG_CONFIG_DISP_OVL_MOUT_EN;
117 +			value = OVL_MOUT_EN_RDMA;
118 +		} else if (cur == DDP_COMPONENT_OD0 && next == DDP_COMPONENT_RDMA0) {
119 +			*addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN;
120 +			value = OD_MOUT_EN_RDMA0;
121 +		} else if (cur == DDP_COMPONENT_UFOE && next == DDP_COMPONENT_DSI0) {
122 +			*addr = DISP_REG_CONFIG_DISP_UFOE_MOUT_EN;
123 +			value = UFOE_MOUT_EN_DSI0;
124 +		} else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) {
125 +			*addr = DISP_REG_CONFIG_DISP_OVL1_MOUT_EN;
126 +			value = OVL1_MOUT_EN_COLOR1;
127 +		} else if (cur == DDP_COMPONENT_GAMMA && next == DDP_COMPONENT_RDMA1) {
128 +			*addr = DISP_REG_CONFIG_DISP_GAMMA_MOUT_EN;
129 +			value = GAMMA_MOUT_EN_RDMA1;
130 +		} else if (cur == DDP_COMPONENT_OD1 && next == DDP_COMPONENT_RDMA1) {
131 +			*addr = DISP_REG_CONFIG_DISP_OD_MOUT_EN;
132 +			value = OD1_MOUT_EN_RDMA1;
133 +		} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI0) {
134 +			*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
135 +			value = RDMA0_SOUT_DPI0;
136 +		} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DPI1) {
137 +			*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
138 +			value = RDMA0_SOUT_DPI1;
139 +		} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI1) {
140 +			*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
141 +			value = RDMA0_SOUT_DSI1;
142 +		} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI2) {
143 +			*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
144 +			value = RDMA0_SOUT_DSI2;
145 +		} else if (cur == DDP_COMPONENT_RDMA0 && next == DDP_COMPONENT_DSI3) {
146 +			*addr = DISP_REG_CONFIG_DISP_RDMA0_SOUT_EN;
147 +			value = RDMA0_SOUT_DSI3;
148 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) {
149 +			*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
150 +			value = RDMA1_SOUT_DSI1;
151 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) {
152 +			*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
153 +			value = RDMA1_SOUT_DSI2;
154 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) {
155 +			*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
156 +			value = RDMA1_SOUT_DSI3;
157 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) {
158 +			*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
159 +			value = RDMA1_SOUT_DPI0;
160 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) {
161 +			*addr = DISP_REG_CONFIG_DISP_RDMA1_SOUT_EN;
162 +			value = RDMA1_SOUT_DPI1;
163 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) {
164 +			*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
165 +			value = RDMA2_SOUT_DPI0;
166 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) {
167 +			*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
168 +			value = RDMA2_SOUT_DPI1;
169 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) {
170 +			*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
171 +			value = RDMA2_SOUT_DSI1;
172 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) {
173 +			*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
174 +			value = RDMA2_SOUT_DSI2;
175 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) {
176 +			*addr = DISP_REG_CONFIG_DISP_RDMA2_SOUT;
177 +			value = RDMA2_SOUT_DSI3;
178 +		} else {
179 +			value = 0;
180 +		}
181 +	
182 +		return value;
183 +	}
184 +	
185 +	static unsigned int mtk_mmsys_ddp_sel_in(enum mtk_ddp_comp_id cur,
186 +						 enum mtk_ddp_comp_id next,
187 +						 unsigned int *addr)
188 +	{
189 +		unsigned int value;
190 +	
191 +		if (cur == DDP_COMPONENT_OVL0 && next == DDP_COMPONENT_COLOR0) {
192 +			*addr = DISP_REG_CONFIG_DISP_COLOR0_SEL_IN;
193 +			value = COLOR0_SEL_IN_OVL0;
194 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI0) {
195 +			*addr = DISP_REG_CONFIG_DPI_SEL_IN;
196 +			value = DPI0_SEL_IN_RDMA1;
197 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DPI1) {
198 +			*addr = DISP_REG_CONFIG_DPI_SEL_IN;
199 +			value = DPI1_SEL_IN_RDMA1;
200 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI0) {
201 +			*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
202 +			value = DSI0_SEL_IN_RDMA1;
203 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI1) {
204 +			*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
205 +			value = DSI1_SEL_IN_RDMA1;
206 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI2) {
207 +			*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
208 +			value = DSI2_SEL_IN_RDMA1;
209 +		} else if (cur == DDP_COMPONENT_RDMA1 && next == DDP_COMPONENT_DSI3) {
210 +			*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
211 +			value = DSI3_SEL_IN_RDMA1;
212 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI0) {
213 +			*addr = DISP_REG_CONFIG_DPI_SEL_IN;
214 +			value = DPI0_SEL_IN_RDMA2;
215 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DPI1) {
216 +			*addr = DISP_REG_CONFIG_DPI_SEL_IN;
217 +			value = DPI1_SEL_IN_RDMA2;
218 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI0) {
219 +			*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
220 +			value = DSI0_SEL_IN_RDMA2;
221 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI1) {
222 +			*addr = DISP_REG_CONFIG_DSIO_SEL_IN;
223 +			value = DSI1_SEL_IN_RDMA2;
224 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI2) {
225 +			*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
226 +			value = DSI2_SEL_IN_RDMA2;
227 +		} else if (cur == DDP_COMPONENT_RDMA2 && next == DDP_COMPONENT_DSI3) {
228 +			*addr = DISP_REG_CONFIG_DSIE_SEL_IN;
229 +			value = DSI3_SEL_IN_RDMA2;
230 +		} else if (cur == DDP_COMPONENT_OVL1 && next == DDP_COMPONENT_COLOR1) {
231 +			*addr = DISP_REG_CONFIG_DISP_COLOR1_SEL_IN;
232 +			value = COLOR1_SEL_IN_OVL1;
233 +		} else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) {
234 +			*addr = DISP_REG_CONFIG_DSI_SEL;
235 +			value = DSI_SEL_IN_BLS;
236 +		} else {
237 +			value = 0;
238 +		}
239 +	
240 +		return value;
241 +	}
242 +	
243 +	static void mtk_mmsys_ddp_sout_sel(void __iomem *config_regs,
244 +					   enum mtk_ddp_comp_id cur,
245 +					   enum mtk_ddp_comp_id next)
246 +	{
247 +		if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DSI0) {
248 +			writel_relaxed(BLS_TO_DSI_RDMA1_TO_DPI1,
249 +				       config_regs + DISP_REG_CONFIG_OUT_SEL);
250 +		} else if (cur == DDP_COMPONENT_BLS && next == DDP_COMPONENT_DPI0) {
251 +			writel_relaxed(BLS_TO_DPI_RDMA1_TO_DSI,
252 +				       config_regs + DISP_REG_CONFIG_OUT_SEL);
253 +			writel_relaxed(DSI_SEL_IN_RDMA,
254 +				       config_regs + DISP_REG_CONFIG_DSI_SEL);
255 +			writel_relaxed(DPI_SEL_IN_BLS,
256 +				       config_regs + DISP_REG_CONFIG_DPI_SEL);
257 +		}
258 +	}
259 +	
260 +	void mtk_mmsys_ddp_connect(struct device *dev,
261 +				   enum mtk_ddp_comp_id cur,
262 +				   enum mtk_ddp_comp_id next)
263 +	{
264 +		void __iomem *config_regs = dev_get_drvdata(dev);
265 +		unsigned int addr, value, reg;
266 +	
267 +		value = mtk_mmsys_ddp_mout_en(cur, next, &addr);
268 +		if (value) {
269 +			reg = readl_relaxed(config_regs + addr) | value;
270 +			writel_relaxed(reg, config_regs + addr);
271 +		}
272 +	
273 +		mtk_mmsys_ddp_sout_sel(config_regs, cur, next);
274 +	
275 +		value = mtk_mmsys_ddp_sel_in(cur, next, &addr);
276 +		if (value) {
277 +			reg = readl_relaxed(config_regs + addr) | value;
278 +			writel_relaxed(reg, config_regs + addr);
279 +		}
280 +	}
281 +	EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_connect);
282 +	
283 +	void mtk_mmsys_ddp_disconnect(struct device *dev,
284 +				      enum mtk_ddp_comp_id cur,
285 +				      enum mtk_ddp_comp_id next)
286 +	{
287 +		void __iomem *config_regs = dev_get_drvdata(dev);
288 +		unsigned int addr, value, reg;
289 +	
290 +		value = mtk_mmsys_ddp_mout_en(cur, next, &addr);
291 +		if (value) {
292 +			reg = readl_relaxed(config_regs + addr) & ~value;
293 +			writel_relaxed(reg, config_regs + addr);
294 +		}
295 +	
296 +		value = mtk_mmsys_ddp_sel_in(cur, next, &addr);
297 +		if (value) {
298 +			reg = readl_relaxed(config_regs + addr) & ~value;
299 +			writel_relaxed(reg, config_regs + addr);
300 +		}
301 +	}
302 +	EXPORT_SYMBOL_GPL(mtk_mmsys_ddp_disconnect);
303 +	
304 +	static int mtk_mmsys_probe(struct platform_device *pdev)
305 +	{
306 +		const struct mtk_mmsys_driver_data *data;
307 +		struct device *dev = &pdev->dev;
308 +		struct platform_device *clks;
309 +		struct platform_device *drm;
310 +		void __iomem *config_regs;
311 +		struct resource *mem;
312 +		int ret;
313 +	
314 +		mem = platform_get_resource(pdev, IORESOURCE_MEM, 0);
315 +		config_regs = devm_ioremap_resource(dev, mem);
316 +		if (IS_ERR(config_regs)) {
317 +			ret = PTR_ERR(config_regs);
318 +			dev_err(dev, "Failed to ioremap mmsys-config resource: %d\n",
319 +				ret);
320 +			return ret;
321 +		}
322 +	
323 +		platform_set_drvdata(pdev, config_regs);
324 +	
325 +		data = of_device_get_match_data(&pdev->dev);
326 +	
327 +		clks = platform_device_register_data(&pdev->dev, data->clk_driver,
328 +						     PLATFORM_DEVID_AUTO, NULL, 0);
329 +		if (IS_ERR(clks))
330 +			return PTR_ERR(clks);
331 +	
332 +		drm = platform_device_register_data(&pdev->dev, "mediatek-drm",
333 +						    PLATFORM_DEVID_AUTO, NULL, 0);
334 +		if (IS_ERR(drm)) {
335 +			platform_device_unregister(clks);
336 +			return PTR_ERR(drm);
337 +		}
338 +	
339 +		return 0;
340 +	}
341 +	
342 +	static const struct of_device_id of_match_mtk_mmsys[] = {
343 +		{
344 +			.compatible = "mediatek,mt2701-mmsys",
345 +			.data = &mt2701_mmsys_driver_data,
346 +		},
347 +		{
348 +			.compatible = "mediatek,mt2712-mmsys",
349 +			.data = &mt2712_mmsys_driver_data,
350 +		},
351 +		{
352 +			.compatible = "mediatek,mt6779-mmsys",
353 +			.data = &mt6779_mmsys_driver_data,
354 +		},
355 +		{
356 +			.compatible = "mediatek,mt6797-mmsys",
357 +			.data = &mt6797_mmsys_driver_data,
358 +		},
359 +		{
360 +			.compatible = "mediatek,mt8173-mmsys",
361 +			.data = &mt8173_mmsys_driver_data,
362 +		},
363 +		{
364 +			.compatible = "mediatek,mt8183-mmsys",
365 +			.data = &mt8183_mmsys_driver_data,
366 +		},
367 +		{ }
368 +	};
369 +	
370 +	static struct platform_driver mtk_mmsys_drv = {
371 +		.driver = {
372 +			.name = "mtk-mmsys",
373 +			.of_match_table = of_match_mtk_mmsys,
374 +		},
375 +		.probe = mtk_mmsys_probe,
376 +	};
377 +	
378 +	builtin_platform_driver(mtk_mmsys_drv);
+3 -3
drivers/soc/qcom/Kconfig
···
107 107		  help apply the aggregated state on the resource.
108 108	
109 109	config QCOM_RPMHPD
110 -		bool "Qualcomm RPMh Power domain driver"
110 +		tristate "Qualcomm RPMh Power domain driver"
111 111		depends on QCOM_RPMH && QCOM_COMMAND_DB
112 112		help
113 113		  QCOM RPMh Power domain driver to support power-domains with
···
116 116		  for the voltage rail.
117 117	
118 118	config QCOM_RPMPD
119 -		bool "Qualcomm RPM Power domain driver"
120 -		depends on QCOM_SMD_RPM=y
119 +		tristate "Qualcomm RPM Power domain driver"
120 +		depends on QCOM_SMD_RPM
121 121		help
122 122		  QCOM RPM Power domain driver to support power-domains with
123 123		  performance states. The driver communicates a performance state
+76 -2
drivers/soc/qcom/cmd-db.c
···
 /* SPDX-License-Identifier: GPL-2.0 */
 /* Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. */
 
+#include <linux/debugfs.h>
 #include <linux/kernel.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
-#include <linux/of_platform.h>
 #include <linux/of_reserved_mem.h>
 #include <linux/platform_device.h>
+#include <linux/seq_file.h>
 #include <linux/types.h>
 
 #include <soc/qcom/cmd-db.h>
···
 }
 EXPORT_SYMBOL(cmd_db_read_slave_id);
 
+#ifdef CONFIG_DEBUG_FS
+static int cmd_db_debugfs_dump(struct seq_file *seq, void *p)
+{
+	int i, j;
+	const struct rsc_hdr *rsc;
+	const struct entry_header *ent;
+	const char *name;
+	u16 len, version;
+	u8 major, minor;
+
+	seq_puts(seq, "Command DB DUMP\n");
+
+	for (i = 0; i < MAX_SLV_ID; i++) {
+		rsc = &cmd_db_header->header[i];
+		if (!rsc->slv_id)
+			break;
+
+		switch (le16_to_cpu(rsc->slv_id)) {
+		case CMD_DB_HW_ARC:
+			name = "ARC";
+			break;
+		case CMD_DB_HW_VRM:
+			name = "VRM";
+			break;
+		case CMD_DB_HW_BCM:
+			name = "BCM";
+			break;
+		default:
+			name = "Unknown";
+			break;
+		}
+
+		version = le16_to_cpu(rsc->version);
+		major = version >> 8;
+		minor = version;
+
+		seq_printf(seq, "Slave %s (v%u.%u)\n", name, major, minor);
+		seq_puts(seq, "-------------------------\n");
+
+		ent = rsc_to_entry_header(rsc);
+		for (j = 0; j < le16_to_cpu(rsc->cnt); j++, ent++) {
+			seq_printf(seq, "0x%05x: %*pEp", le32_to_cpu(ent->addr),
+				   (int)sizeof(ent->id), ent->id);
+
+			len = le16_to_cpu(ent->len);
+			if (len) {
+				seq_printf(seq, " [%*ph]",
+					   len, rsc_offset(rsc, ent));
+			}
+			seq_putc(seq, '\n');
+		}
+	}
+
+	return 0;
+}
+
+static int open_cmd_db_debugfs(struct inode *inode, struct file *file)
+{
+	return single_open(file, cmd_db_debugfs_dump, inode->i_private);
+}
+#endif
+
+static const struct file_operations cmd_db_debugfs_ops = {
+#ifdef CONFIG_DEBUG_FS
+	.open = open_cmd_db_debugfs,
+#endif
+	.read = seq_read,
+	.llseek = seq_lseek,
+	.release = single_release,
+};
+
 static int cmd_db_dev_probe(struct platform_device *pdev)
 {
 	struct reserved_mem *rmem;
···
 		return -EINVAL;
 	}
 
+	debugfs_create_file("cmd-db", 0400, NULL, NULL, &cmd_db_debugfs_ops);
+
 	return 0;
 }
 
 static const struct of_device_id cmd_db_match_table[] = {
 	{ .compatible = "qcom,cmd-db" },
-	{ },
+	{ }
 };
 
 static struct platform_driver cmd_db_dev_driver = {
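The debugfs dump above splits the 16-bit version field into major/minor bytes (high byte major, low byte minor). A small userspace sketch of that decode (`decode_version` is a hypothetical helper, not part of the driver):

```c
#include <assert.h>
#include <stdint.h>

/* Mirrors the dump's decode: major = version >> 8; minor = version,
 * where the assignment to a uint8_t keeps only the low byte. */
static void decode_version(uint16_t version, uint8_t *major, uint8_t *minor)
{
	*major = version >> 8;
	*minor = version;	/* truncation keeps the low byte */
}
```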
-4
drivers/soc/qcom/pdr_interface.c
···
 		return ret;
 	}
 
-	if ((int)resp.curr_state < INT_MIN || (int)resp.curr_state > INT_MAX)
-		pr_err("PDR: %s notification state invalid: 0x%x\n",
-		       pds->service_path, resp.curr_state);
-
 	pds->state = resp.curr_state;
 
 	return 0;
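The hunk above deletes a range check that can never fire: once `resp.curr_state` has been cast to `int`, the result is by definition within `[INT_MIN, INT_MAX]`. A userspace sketch of the removed comparison (`state_out_of_range` is a hypothetical name) showing it is always false:

```c
#include <assert.h>
#include <limits.h>
#include <stdint.h>

/* The deleted check compared an int against INT_MIN/INT_MAX; after the
 * cast the value is an int, so neither comparison can ever be true
 * (compilers typically flag this as a tautological comparison). */
static int state_out_of_range(uint32_t curr_state)
{
	return (int)curr_state < INT_MIN || (int)curr_state > INT_MAX;
}
```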
+1
drivers/soc/qcom/qcom_aoss.c
···
 	{ .compatible = "qcom,sc7180-aoss-qmp", },
 	{ .compatible = "qcom,sdm845-aoss-qmp", },
 	{ .compatible = "qcom,sm8150-aoss-qmp", },
+	{ .compatible = "qcom,sm8250-aoss-qmp", },
 	{}
 };
 MODULE_DEVICE_TABLE(of, qmp_dt_match);
+38 -21
drivers/soc/qcom/rpmh-internal.h
···
  * struct tcs_group: group of Trigger Command Sets (TCS) to send state requests
  * to the controller
  *
- * @drv:       the controller
- * @type:      type of the TCS in this group - active, sleep, wake
- * @mask:      mask of the TCSes relative to all the TCSes in the RSC
- * @offset:    start of the TCS group relative to the TCSes in the RSC
- * @num_tcs:   number of TCSes in this type
- * @ncpt:      number of commands in each TCS
- * @lock:      lock for synchronizing this TCS writes
- * @req:       requests that are sent from the TCS
- * @cmd_cache: flattened cache of cmds in sleep/wake TCS
- * @slots:     indicates which of @cmd_addr are occupied
+ * @drv:       The controller.
+ * @type:      Type of the TCS in this group - active, sleep, wake.
+ * @mask:      Mask of the TCSes relative to all the TCSes in the RSC.
+ * @offset:    Start of the TCS group relative to the TCSes in the RSC.
+ * @num_tcs:   Number of TCSes in this type.
+ * @ncpt:      Number of commands in each TCS.
+ * @req:       Requests that are sent from the TCS; only used for ACTIVE_ONLY
+ *             transfers (could be on a wake/sleep TCS if we are borrowing for
+ *             an ACTIVE_ONLY transfer).
+ *             Start: grab drv->lock, set req, set tcs_in_use, drop drv->lock,
+ *             trigger
+ *             End: get irq, access req,
+ *             grab drv->lock, clear tcs_in_use, drop drv->lock
+ * @slots:     Indicates which of @cmd_addr are occupied; only used for
+ *             SLEEP / WAKE TCSs. Things are tightly packed in the
+ *             case that (ncpt < MAX_CMDS_PER_TCS). That is if ncpt = 2 and
+ *             MAX_CMDS_PER_TCS = 16 then bit[2] = the first bit in 2nd TCS.
  */
 struct tcs_group {
 	struct rsc_drv *drv;
···
 	u32 offset;
 	int num_tcs;
 	int ncpt;
-	spinlock_t lock;
 	const struct tcs_request *req[MAX_TCS_PER_TYPE];
-	u32 *cmd_cache;
 	DECLARE_BITMAP(slots, MAX_TCS_SLOTS);
 };
 
···
  * struct rsc_drv: the Direct Resource Voter (DRV) of the
  * Resource State Coordinator controller (RSC)
  *
- * @name:       controller identifier
- * @tcs_base:   start address of the TCS registers in this controller
- * @id:         instance id in the controller (Direct Resource Voter)
- * @num_tcs:    number of TCSes in this DRV
- * @tcs:        TCS groups
- * @tcs_in_use: s/w state of the TCS
- * @lock:       synchronize state of the controller
- * @client:     handle to the DRV's client.
+ * @name:       Controller identifier.
+ * @tcs_base:   Start address of the TCS registers in this controller.
+ * @id:         Instance id in the controller (Direct Resource Voter).
+ * @num_tcs:    Number of TCSes in this DRV.
+ * @rsc_pm:     CPU PM notifier for controller.
+ *              Used when solver mode is not present.
+ * @cpus_in_pm: Number of CPUs not in idle power collapse.
+ *              Used when solver mode is not present.
+ * @tcs:        TCS groups.
+ * @tcs_in_use: S/W state of the TCS; only set for ACTIVE_ONLY
+ *              transfers, but might show a sleep/wake TCS in use if
+ *              it was borrowed for an active_only transfer. You
+ *              must hold the lock in this struct (AKA drv->lock) in
+ *              order to update this.
+ * @lock:       Synchronize state of the controller. If RPMH's cache
+ *              lock will also be held, the order is: drv->lock then
+ *              cache_lock.
+ * @client:     Handle to the DRV's client.
  */
 struct rsc_drv {
 	const char *name;
 	void __iomem *tcs_base;
 	int id;
 	int num_tcs;
+	struct notifier_block rsc_pm;
+	atomic_t cpus_in_pm;
 	struct tcs_group tcs[TCS_TYPE_NR];
 	DECLARE_BITMAP(tcs_in_use, MAX_TCS_NR);
 	spinlock_t lock;
···
 int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg);
 int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv,
 			     const struct tcs_request *msg);
-int rpmh_rsc_invalidate(struct rsc_drv *drv);
+void rpmh_rsc_invalidate(struct rsc_drv *drv);
 
 void rpmh_tx_done(const struct tcs_request *msg, int r);
 int rpmh_flush(struct rpmh_ctrlr *ctrlr);
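The new `@slots` documentation describes sleep/wake slots packed `ncpt` per TCS. A userspace sketch of the slot-to-TCS/command mapping that comment implies (`slot_to_tcs_cmd` is a hypothetical helper mirroring the `slot / ncpt` and `slot % ncpt` arithmetic used by `find_slots()`):

```c
#include <assert.h>

/* Slots are packed ncpt per TCS, so bit (slot) maps to the TCS at index
 * (slot / ncpt) within the group and command (slot % ncpt) within that
 * TCS. With ncpt = 2, slot 2 is the first command of the 2nd TCS,
 * matching the "bit[2] = the first bit in 2nd TCS" example above. */
static void slot_to_tcs_cmd(int slot, int ncpt, int tcs_offset,
			    int *tcs_id, int *cmd_id)
{
	*tcs_id = slot / ncpt + tcs_offset;	/* global TCS ID */
	*cmd_id = slot % ncpt;			/* command index in TCS */
}
```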
+545 -207
drivers/soc/qcom/rpmh-rsc.c
··· 6 6 #define pr_fmt(fmt) "%s " fmt, KBUILD_MODNAME 7 7 8 8 #include <linux/atomic.h> 9 + #include <linux/cpu_pm.h> 9 10 #include <linux/delay.h> 10 11 #include <linux/interrupt.h> 11 12 #include <linux/io.h> 13 + #include <linux/iopoll.h> 12 14 #include <linux/kernel.h> 13 15 #include <linux/list.h> 14 16 #include <linux/of.h> ··· 32 30 #define RSC_DRV_TCS_OFFSET 672 33 31 #define RSC_DRV_CMD_OFFSET 20 34 32 35 - /* DRV Configuration Information Register */ 33 + /* DRV HW Solver Configuration Information Register */ 34 + #define DRV_SOLVER_CONFIG 0x04 35 + #define DRV_HW_SOLVER_MASK 1 36 + #define DRV_HW_SOLVER_SHIFT 24 37 + 38 + /* DRV TCS Configuration Information Register */ 36 39 #define DRV_PRNT_CHLD_CONFIG 0x0C 37 40 #define DRV_NUM_TCS_MASK 0x3F 38 41 #define DRV_NUM_TCS_SHIFT 6 39 42 #define DRV_NCPT_MASK 0x1F 40 43 #define DRV_NCPT_SHIFT 27 41 44 42 - /* Register offsets */ 45 + /* Offsets for common TCS Registers, one bit per TCS */ 43 46 #define RSC_DRV_IRQ_ENABLE 0x00 44 47 #define RSC_DRV_IRQ_STATUS 0x04 45 - #define RSC_DRV_IRQ_CLEAR 0x08 46 - #define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10 48 + #define RSC_DRV_IRQ_CLEAR 0x08 /* w/o; write 1 to clear */ 49 + 50 + /* 51 + * Offsets for per TCS Registers. 52 + * 53 + * TCSes start at 0x10 from tcs_base and are stored one after another. 54 + * Multiply tcs_id by RSC_DRV_TCS_OFFSET to find a given TCS and add one 55 + * of the below to find a register. 56 + */ 57 + #define RSC_DRV_CMD_WAIT_FOR_CMPL 0x10 /* 1 bit per command */ 47 58 #define RSC_DRV_CONTROL 0x14 48 - #define RSC_DRV_STATUS 0x18 49 - #define RSC_DRV_CMD_ENABLE 0x1C 59 + #define RSC_DRV_STATUS 0x18 /* zero if tcs is busy */ 60 + #define RSC_DRV_CMD_ENABLE 0x1C /* 1 bit per command */ 61 + 62 + /* 63 + * Offsets for per command in a TCS. 64 + * 65 + * Commands (up to 16) start at 0x30 in a TCS; multiply command index 66 + * by RSC_DRV_CMD_OFFSET and add one of the below to find a register. 
67 + */ 50 68 #define RSC_DRV_CMD_MSGID 0x30 51 69 #define RSC_DRV_CMD_ADDR 0x34 52 70 #define RSC_DRV_CMD_DATA 0x38 ··· 83 61 #define CMD_STATUS_ISSUED BIT(8) 84 62 #define CMD_STATUS_COMPL BIT(16) 85 63 86 - static u32 read_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id) 64 + /* 65 + * Here's a high level overview of how all the registers in RPMH work 66 + * together: 67 + * 68 + * - The main rpmh-rsc address is the base of a register space that can 69 + * be used to find overall configuration of the hardware 70 + * (DRV_PRNT_CHLD_CONFIG). Also found within the rpmh-rsc register 71 + * space are all the TCS blocks. The offset of the TCS blocks is 72 + * specified in the device tree by "qcom,tcs-offset" and used to 73 + * compute tcs_base. 74 + * - TCS blocks come one after another. Type, count, and order are 75 + * specified by the device tree as "qcom,tcs-config". 76 + * - Each TCS block has some registers, then space for up to 16 commands. 77 + * Note that though address space is reserved for 16 commands, fewer 78 + * might be present. See ncpt (num cmds per TCS). 
79 + * 80 + * Here's a picture: 81 + * 82 + * +---------------------------------------------------+ 83 + * |RSC | 84 + * | ctrl | 85 + * | | 86 + * | Drvs: | 87 + * | +-----------------------------------------------+ | 88 + * | |DRV0 | | 89 + * | | ctrl/config | | 90 + * | | IRQ | | 91 + * | | | | 92 + * | | TCSes: | | 93 + * | | +------------------------------------------+ | | 94 + * | | |TCS0 | | | | | | | | | | | | | | | 95 + * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | | 96 + * | | | | | | | | | | | | | | | | | | 97 + * | | +------------------------------------------+ | | 98 + * | | +------------------------------------------+ | | 99 + * | | |TCS1 | | | | | | | | | | | | | | | 100 + * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | | 101 + * | | | | | | | | | | | | | | | | | | 102 + * | | +------------------------------------------+ | | 103 + * | | +------------------------------------------+ | | 104 + * | | |TCS2 | | | | | | | | | | | | | | | 105 + * | | | ctrl | 0| 1| 2| 3| 4| 5| .| .| .| .|14|15| | | 106 + * | | | | | | | | | | | | | | | | | | 107 + * | | +------------------------------------------+ | | 108 + * | | ...... | | 109 + * | +-----------------------------------------------+ | 110 + * | +-----------------------------------------------+ | 111 + * | |DRV1 | | 112 + * | | (same as DRV0) | | 113 + * | +-----------------------------------------------+ | 114 + * | ...... 
| 115 + * +---------------------------------------------------+ 116 + */ 117 + 118 + static inline void __iomem * 119 + tcs_reg_addr(const struct rsc_drv *drv, int reg, int tcs_id) 87 120 { 88 - return readl_relaxed(drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id + 89 - RSC_DRV_CMD_OFFSET * cmd_id); 121 + return drv->tcs_base + RSC_DRV_TCS_OFFSET * tcs_id + reg; 90 122 } 91 123 92 - static void write_tcs_cmd(struct rsc_drv *drv, int reg, int tcs_id, int cmd_id, 124 + static inline void __iomem * 125 + tcs_cmd_addr(const struct rsc_drv *drv, int reg, int tcs_id, int cmd_id) 126 + { 127 + return tcs_reg_addr(drv, reg, tcs_id) + RSC_DRV_CMD_OFFSET * cmd_id; 128 + } 129 + 130 + static u32 read_tcs_cmd(const struct rsc_drv *drv, int reg, int tcs_id, 131 + int cmd_id) 132 + { 133 + return readl_relaxed(tcs_cmd_addr(drv, reg, tcs_id, cmd_id)); 134 + } 135 + 136 + static u32 read_tcs_reg(const struct rsc_drv *drv, int reg, int tcs_id) 137 + { 138 + return readl_relaxed(tcs_reg_addr(drv, reg, tcs_id)); 139 + } 140 + 141 + static void write_tcs_cmd(const struct rsc_drv *drv, int reg, int tcs_id, 142 + int cmd_id, u32 data) 143 + { 144 + writel_relaxed(data, tcs_cmd_addr(drv, reg, tcs_id, cmd_id)); 145 + } 146 + 147 + static void write_tcs_reg(const struct rsc_drv *drv, int reg, int tcs_id, 93 148 u32 data) 94 149 { 95 - writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id + 96 - RSC_DRV_CMD_OFFSET * cmd_id); 150 + writel_relaxed(data, tcs_reg_addr(drv, reg, tcs_id)); 97 151 } 98 152 99 - static void write_tcs_reg(struct rsc_drv *drv, int reg, int tcs_id, u32 data) 100 - { 101 - writel_relaxed(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id); 102 - } 103 - 104 - static void write_tcs_reg_sync(struct rsc_drv *drv, int reg, int tcs_id, 153 + static void write_tcs_reg_sync(const struct rsc_drv *drv, int reg, int tcs_id, 105 154 u32 data) 106 155 { 107 - writel(data, drv->tcs_base + reg + RSC_DRV_TCS_OFFSET * tcs_id); 108 - for (;;) { 109 - if (data == 
readl(drv->tcs_base + reg + 110 - RSC_DRV_TCS_OFFSET * tcs_id)) 111 - break; 112 - udelay(1); 113 - } 156 + u32 new_data; 157 + 158 + writel(data, tcs_reg_addr(drv, reg, tcs_id)); 159 + if (readl_poll_timeout_atomic(tcs_reg_addr(drv, reg, tcs_id), new_data, 160 + new_data == data, 1, USEC_PER_SEC)) 161 + pr_err("%s: error writing %#x to %d:%#x\n", drv->name, 162 + data, tcs_id, reg); 114 163 } 115 164 165 + /** 166 + * tcs_is_free() - Return if a TCS is totally free. 167 + * @drv: The RSC controller. 168 + * @tcs_id: The global ID of this TCS. 169 + * 170 + * Returns true if nobody has claimed this TCS (by setting tcs_in_use). 171 + * 172 + * Context: Must be called with the drv->lock held. 173 + * 174 + * Return: true if the given TCS is free. 175 + */ 116 176 static bool tcs_is_free(struct rsc_drv *drv, int tcs_id) 117 177 { 118 - return !test_bit(tcs_id, drv->tcs_in_use) && 119 - read_tcs_reg(drv, RSC_DRV_STATUS, tcs_id, 0); 178 + return !test_bit(tcs_id, drv->tcs_in_use); 120 179 } 121 180 122 - static struct tcs_group *get_tcs_of_type(struct rsc_drv *drv, int type) 123 - { 124 - return &drv->tcs[type]; 125 - } 126 - 127 - static int tcs_invalidate(struct rsc_drv *drv, int type) 181 + /** 182 + * tcs_invalidate() - Invalidate all TCSes of the given type (sleep or wake). 183 + * @drv: The RSC controller. 184 + * @type: SLEEP_TCS or WAKE_TCS 185 + * 186 + * This will clear the "slots" variable of the given tcs_group and also 187 + * tell the hardware to forget about all entries. 188 + * 189 + * The caller must ensure that no other RPMH actions are happening when this 190 + * function is called, since otherwise the device may immediately become 191 + * used again even before this function exits. 
192 + */ 193 + static void tcs_invalidate(struct rsc_drv *drv, int type) 128 194 { 129 195 int m; 130 - struct tcs_group *tcs; 196 + struct tcs_group *tcs = &drv->tcs[type]; 131 197 132 - tcs = get_tcs_of_type(drv, type); 133 - 134 - spin_lock(&tcs->lock); 135 - if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) { 136 - spin_unlock(&tcs->lock); 137 - return 0; 138 - } 198 + /* Caller ensures nobody else is running so no lock */ 199 + if (bitmap_empty(tcs->slots, MAX_TCS_SLOTS)) 200 + return; 139 201 140 202 for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) { 141 - if (!tcs_is_free(drv, m)) { 142 - spin_unlock(&tcs->lock); 143 - return -EAGAIN; 144 - } 145 203 write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, m, 0); 146 204 write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, m, 0); 147 205 } 148 206 bitmap_zero(tcs->slots, MAX_TCS_SLOTS); 149 - spin_unlock(&tcs->lock); 150 - 151 - return 0; 152 207 } 153 208 154 209 /** 155 - * rpmh_rsc_invalidate - Invalidate sleep and wake TCSes 210 + * rpmh_rsc_invalidate() - Invalidate sleep and wake TCSes. 211 + * @drv: The RSC controller. 156 212 * 157 - * @drv: the RSC controller 213 + * The caller must ensure that no other RPMH actions are happening when this 214 + * function is called, since otherwise the device may immediately become 215 + * used again even before this function exits. 158 216 */ 159 - int rpmh_rsc_invalidate(struct rsc_drv *drv) 217 + void rpmh_rsc_invalidate(struct rsc_drv *drv) 160 218 { 161 - int ret; 162 - 163 - ret = tcs_invalidate(drv, SLEEP_TCS); 164 - if (!ret) 165 - ret = tcs_invalidate(drv, WAKE_TCS); 166 - 167 - return ret; 219 + tcs_invalidate(drv, SLEEP_TCS); 220 + tcs_invalidate(drv, WAKE_TCS); 168 221 } 169 222 223 + /** 224 + * get_tcs_for_msg() - Get the tcs_group used to send the given message. 225 + * @drv: The RSC controller. 226 + * @msg: The message we want to send. 
227 + * 228 + * This is normally pretty straightforward except if we are trying to send 229 + * an ACTIVE_ONLY message but don't have any active_only TCSes. 230 + * 231 + * Return: A pointer to a tcs_group or an ERR_PTR. 232 + */ 170 233 static struct tcs_group *get_tcs_for_msg(struct rsc_drv *drv, 171 234 const struct tcs_request *msg) 172 235 { 173 - int type, ret; 236 + int type; 174 237 struct tcs_group *tcs; 175 238 176 239 switch (msg->state) { ··· 275 168 /* 276 169 * If we are making an active request on a RSC that does not have a 277 170 * dedicated TCS for active state use, then re-purpose a wake TCS to 278 - * send active votes. 279 - * NOTE: The driver must be aware that this RSC does not have a 280 - * dedicated AMC, and therefore would invalidate the sleep and wake 281 - * TCSes before making an active state request. 171 + * send active votes. This is safe because we ensure any active-only 172 + * transfers have finished before we use it (maybe by running from 173 + * the last CPU in PM code). 282 174 */ 283 - tcs = get_tcs_of_type(drv, type); 284 - if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) { 285 - tcs = get_tcs_of_type(drv, WAKE_TCS); 286 - if (tcs->num_tcs) { 287 - ret = rpmh_rsc_invalidate(drv); 288 - if (ret) 289 - return ERR_PTR(ret); 290 - } 291 - } 175 + tcs = &drv->tcs[type]; 176 + if (msg->state == RPMH_ACTIVE_ONLY_STATE && !tcs->num_tcs) 177 + tcs = &drv->tcs[WAKE_TCS]; 292 178 293 179 return tcs; 294 180 } 295 181 182 + /** 183 + * get_req_from_tcs() - Get a stashed request that was xfering on the given TCS. 184 + * @drv: The RSC controller. 185 + * @tcs_id: The global ID of this TCS. 186 + * 187 + * For ACTIVE_ONLY transfers we want to call back into the client when the 188 + * transfer finishes. To do this we need the "request" that the client 189 + * originally provided us. This function grabs the request that we stashed 190 + * when we started the transfer. 
191 + * 192 + * This only makes sense for ACTIVE_ONLY transfers since those are the only 193 + * ones we track sending (the only ones we enable interrupts for and the only 194 + * ones we call back to the client for). 195 + * 196 + * Return: The stashed request. 197 + */ 296 198 static const struct tcs_request *get_req_from_tcs(struct rsc_drv *drv, 297 199 int tcs_id) 298 200 { ··· 318 202 } 319 203 320 204 /** 321 - * tcs_tx_done: TX Done interrupt handler 205 + * __tcs_set_trigger() - Start xfer on a TCS or unset trigger on a borrowed TCS 206 + * @drv: The controller. 207 + * @tcs_id: The global ID of this TCS. 208 + * @trigger: If true then untrigger/retrigger. If false then just untrigger. 209 + * 210 + * In the normal case we only ever call with "trigger=true" to start a 211 + * transfer. That will un-trigger/disable the TCS from the last transfer 212 + * then trigger/enable for this transfer. 213 + * 214 + * If we borrowed a wake TCS for an active-only transfer we'll also call 215 + * this function with "trigger=false" to just do the un-trigger/disable 216 + * before using the TCS for wake purposes again. 217 + * 218 + * Note that the AP is only in charge of triggering active-only transfers. 219 + * The AP never triggers sleep/wake values using this function. 220 + */ 221 + static void __tcs_set_trigger(struct rsc_drv *drv, int tcs_id, bool trigger) 222 + { 223 + u32 enable; 224 + 225 + /* 226 + * HW req: Clear the DRV_CONTROL and enable TCS again 227 + * While clearing ensure that the AMC mode trigger is cleared 228 + * and then the mode enable is cleared. 
229 + */ 230 + enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id); 231 + enable &= ~TCS_AMC_MODE_TRIGGER; 232 + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 233 + enable &= ~TCS_AMC_MODE_ENABLE; 234 + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 235 + 236 + if (trigger) { 237 + /* Enable the AMC mode on the TCS and then trigger the TCS */ 238 + enable = TCS_AMC_MODE_ENABLE; 239 + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 240 + enable |= TCS_AMC_MODE_TRIGGER; 241 + write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 242 + } 243 + } 244 + 245 + /** 246 + * enable_tcs_irq() - Enable or disable interrupts on the given TCS. 247 + * @drv: The controller. 248 + * @tcs_id: The global ID of this TCS. 249 + * @enable: If true then enable; if false then disable 250 + * 251 + * We only ever call this when we borrow a wake TCS for an active-only 252 + * transfer. For active-only TCSes interrupts are always left enabled. 253 + */ 254 + static void enable_tcs_irq(struct rsc_drv *drv, int tcs_id, bool enable) 255 + { 256 + u32 data; 257 + 258 + data = readl_relaxed(drv->tcs_base + RSC_DRV_IRQ_ENABLE); 259 + if (enable) 260 + data |= BIT(tcs_id); 261 + else 262 + data &= ~BIT(tcs_id); 263 + writel_relaxed(data, drv->tcs_base + RSC_DRV_IRQ_ENABLE); 264 + } 265 + 266 + /** 267 + * tcs_tx_done() - TX Done interrupt handler. 268 + * @irq: The IRQ number (ignored). 269 + * @p: Pointer to "struct rsc_drv". 270 + * 271 + * Called for ACTIVE_ONLY transfers (those are the only ones we enable the 272 + * IRQ for) when a transfer is done. 
273 + * 274 + * Return: IRQ_HANDLED 322 275 */ 323 276 static irqreturn_t tcs_tx_done(int irq, void *p) 324 277 { ··· 397 212 const struct tcs_request *req; 398 213 struct tcs_cmd *cmd; 399 214 400 - irq_status = read_tcs_reg(drv, RSC_DRV_IRQ_STATUS, 0, 0); 215 + irq_status = readl_relaxed(drv->tcs_base + RSC_DRV_IRQ_STATUS); 401 216 402 217 for_each_set_bit(i, &irq_status, BITS_PER_LONG) { 403 218 req = get_req_from_tcs(drv, i); ··· 411 226 u32 sts; 412 227 413 228 cmd = &req->cmds[j]; 414 - sts = read_tcs_reg(drv, RSC_DRV_CMD_STATUS, i, j); 229 + sts = read_tcs_cmd(drv, RSC_DRV_CMD_STATUS, i, j); 415 230 if (!(sts & CMD_STATUS_ISSUED) || 416 231 ((req->wait_for_compl || cmd->wait) && 417 232 !(sts & CMD_STATUS_COMPL))) { ··· 422 237 } 423 238 424 239 trace_rpmh_tx_done(drv, i, req, err); 240 + 241 + /* 242 + * If wake tcs was re-purposed for sending active 243 + * votes, clear AMC trigger & enable modes and 244 + * disable interrupt for this TCS 245 + */ 246 + if (!drv->tcs[ACTIVE_TCS].num_tcs) 247 + __tcs_set_trigger(drv, i, false); 425 248 skip: 426 249 /* Reclaim the TCS */ 427 250 write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, i, 0); 428 251 write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, i, 0); 429 - write_tcs_reg(drv, RSC_DRV_IRQ_CLEAR, 0, BIT(i)); 252 + writel_relaxed(BIT(i), drv->tcs_base + RSC_DRV_IRQ_CLEAR); 430 253 spin_lock(&drv->lock); 431 254 clear_bit(i, drv->tcs_in_use); 255 + /* 256 + * Disable interrupt for WAKE TCS to avoid being 257 + * spammed with interrupts coming when the solver 258 + * sends its wake votes. 259 + */ 260 + if (!drv->tcs[ACTIVE_TCS].num_tcs) 261 + enable_tcs_irq(drv, i, false); 432 262 spin_unlock(&drv->lock); 433 263 if (req) 434 264 rpmh_tx_done(req, err); ··· 452 252 return IRQ_HANDLED; 453 253 } 454 254 255 + /** 256 + * __tcs_buffer_write() - Write to TCS hardware from a request; don't trigger. 257 + * @drv: The controller. 258 + * @tcs_id: The global ID of this TCS. 259 + * @cmd_id: The index within the TCS to start writing. 
260 + * @msg: The message we want to send, which will contain several addr/data 261 + * pairs to program (but few enough that they all fit in one TCS). 262 + * 263 + * This is used for all types of transfers (active, sleep, and wake). 264 + */ 455 265 static void __tcs_buffer_write(struct rsc_drv *drv, int tcs_id, int cmd_id, 456 266 const struct tcs_request *msg) 457 267 { ··· 475 265 cmd_msgid |= msg->wait_for_compl ? CMD_MSGID_RESP_REQ : 0; 476 266 cmd_msgid |= CMD_MSGID_WRITE; 477 267 478 - cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0); 268 + cmd_complete = read_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id); 479 269 480 270 for (i = 0, j = cmd_id; i < msg->num_cmds; i++, j++) { 481 271 cmd = &msg->cmds[i]; ··· 491 281 } 492 282 493 283 write_tcs_reg(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, cmd_complete); 494 - cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); 284 + cmd_enable |= read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id); 495 285 write_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, cmd_enable); 496 286 } 497 287 498 - static void __tcs_trigger(struct rsc_drv *drv, int tcs_id) 499 - { 500 - u32 enable; 501 - 502 - /* 503 - * HW req: Clear the DRV_CONTROL and enable TCS again 504 - * While clearing ensure that the AMC mode trigger is cleared 505 - * and then the mode enable is cleared. 
506 - */ 507 - enable = read_tcs_reg(drv, RSC_DRV_CONTROL, tcs_id, 0); 508 - enable &= ~TCS_AMC_MODE_TRIGGER; 509 - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 510 - enable &= ~TCS_AMC_MODE_ENABLE; 511 - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 512 - 513 - /* Enable the AMC mode on the TCS and then trigger the TCS */ 514 - enable = TCS_AMC_MODE_ENABLE; 515 - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 516 - enable |= TCS_AMC_MODE_TRIGGER; 517 - write_tcs_reg_sync(drv, RSC_DRV_CONTROL, tcs_id, enable); 518 - } 519 - 288 + /** 289 + * check_for_req_inflight() - Look to see if conflicting cmds are in flight. 290 + * @drv: The controller. 291 + * @tcs: A pointer to the tcs_group used for ACTIVE_ONLY transfers. 292 + * @msg: The message we want to send, which will contain several addr/data 293 + * pairs to program (but few enough that they all fit in one TCS). 294 + * 295 + * This will walk through the TCSes in the group and check if any of them 296 + * appear to be sending to addresses referenced in the message. If it finds 297 + * one it'll return -EBUSY. 298 + * 299 + * Only for use for active-only transfers. 300 + * 301 + * Must be called with the drv->lock held since that protects tcs_in_use. 302 + * 303 + * Return: 0 if nothing in flight or -EBUSY if we should try again later. 304 + * The caller must re-enable interrupts between tries since that's 305 + * the only way tcs_is_free() will ever return true and the only way 306 + * RSC_DRV_CMD_ENABLE will ever be cleared. 
307 + */ 520 308 static int check_for_req_inflight(struct rsc_drv *drv, struct tcs_group *tcs, 521 309 const struct tcs_request *msg) 522 310 { ··· 527 319 if (tcs_is_free(drv, tcs_id)) 528 320 continue; 529 321 530 - curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); 322 + curr_enabled = read_tcs_reg(drv, RSC_DRV_CMD_ENABLE, tcs_id); 531 323 532 324 for_each_set_bit(j, &curr_enabled, MAX_CMDS_PER_TCS) { 533 - addr = read_tcs_reg(drv, RSC_DRV_CMD_ADDR, tcs_id, j); 325 + addr = read_tcs_cmd(drv, RSC_DRV_CMD_ADDR, tcs_id, j); 534 326 for (k = 0; k < msg->num_cmds; k++) { 535 327 if (addr == msg->cmds[k].addr) 536 328 return -EBUSY; ··· 541 333 return 0; 542 334 } 543 335 336 + /** 337 + * find_free_tcs() - Find free tcs in the given tcs_group; only for active. 338 + * @tcs: A pointer to the active-only tcs_group (or the wake tcs_group if 339 + * we borrowed it because there are zero active-only ones). 340 + * 341 + * Must be called with the drv->lock held since that protects tcs_in_use. 342 + * 343 + * Return: The first tcs that's free. 344 + */ 544 345 static int find_free_tcs(struct tcs_group *tcs) 545 346 { 546 347 int i; ··· 562 345 return -EBUSY; 563 346 } 564 347 348 + /** 349 + * tcs_write() - Store messages into a TCS right now, or return -EBUSY. 350 + * @drv: The controller. 351 + * @msg: The data to be sent. 352 + * 353 + * Grabs a TCS for ACTIVE_ONLY transfers and writes the messages to it. 354 + * 355 + * If there are no free TCSes for ACTIVE_ONLY transfers or if a command for 356 + * the same address is already transferring returns -EBUSY which means the 357 + * client should retry shortly. 358 + * 359 + * Return: 0 on success, -EBUSY if client should retry, or an error. 360 + * Client should have interrupts enabled for a bit before retrying. 
361 + */ 565 362 static int tcs_write(struct rsc_drv *drv, const struct tcs_request *msg) 566 363 { 567 364 struct tcs_group *tcs; ··· 587 356 if (IS_ERR(tcs)) 588 357 return PTR_ERR(tcs); 589 358 590 - spin_lock_irqsave(&tcs->lock, flags); 591 - spin_lock(&drv->lock); 359 + spin_lock_irqsave(&drv->lock, flags); 592 360 /* 593 361 * The h/w does not like if we send a request to the same address, 594 362 * when one is already in-flight or being processed. 595 363 */ 596 364 ret = check_for_req_inflight(drv, tcs, msg); 597 - if (ret) { 598 - spin_unlock(&drv->lock); 599 - goto done_write; 600 - } 365 + if (ret) 366 + goto unlock; 601 367 602 - tcs_id = find_free_tcs(tcs); 603 - if (tcs_id < 0) { 604 - ret = tcs_id; 605 - spin_unlock(&drv->lock); 606 - goto done_write; 607 - } 368 + ret = find_free_tcs(tcs); 369 + if (ret < 0) 370 + goto unlock; 371 + tcs_id = ret; 608 372 609 373 tcs->req[tcs_id - tcs->offset] = msg; 610 374 set_bit(tcs_id, drv->tcs_in_use); 611 - spin_unlock(&drv->lock); 375 + if (msg->state == RPMH_ACTIVE_ONLY_STATE && tcs->type != ACTIVE_TCS) { 376 + /* 377 + * Clear previously programmed WAKE commands in selected 378 + * repurposed TCS to avoid triggering them. tcs->slots will be 379 + * cleaned from rpmh_flush() by invoking rpmh_rsc_invalidate() 380 + */ 381 + write_tcs_reg_sync(drv, RSC_DRV_CMD_ENABLE, tcs_id, 0); 382 + write_tcs_reg_sync(drv, RSC_DRV_CMD_WAIT_FOR_CMPL, tcs_id, 0); 383 + enable_tcs_irq(drv, tcs_id, true); 384 + } 385 + spin_unlock_irqrestore(&drv->lock, flags); 612 386 387 + /* 388 + * These two can be done after the lock is released because: 389 + * - We marked "tcs_in_use" under lock. 390 + * - Once "tcs_in_use" has been marked nobody else could be writing 391 + * to these registers until the interrupt goes off. 392 + * - The interrupt can't go off until we trigger w/ the last line 393 + * of __tcs_set_trigger() below. 
394 + */ 613 395 __tcs_buffer_write(drv, tcs_id, 0, msg); 614 - __tcs_trigger(drv, tcs_id); 396 + __tcs_set_trigger(drv, tcs_id, true); 615 397 616 - done_write: 617 - spin_unlock_irqrestore(&tcs->lock, flags); 398 + return 0; 399 + unlock: 400 + spin_unlock_irqrestore(&drv->lock, flags); 618 401 return ret; 619 402 } 620 403 621 404 /** 622 - * rpmh_rsc_send_data: Validate the incoming message and write to the 623 - * appropriate TCS block. 405 + * rpmh_rsc_send_data() - Write / trigger active-only message. 406 + * @drv: The controller. 407 + * @msg: The data to be sent. 624 408 * 625 - * @drv: the controller 626 - * @msg: the data to be sent 409 + * NOTES: 410 + * - This is only used for "ACTIVE_ONLY" since the limitations of this 411 + * function don't make sense for sleep/wake cases. 412 + * - To do the transfer, we will grab a whole TCS for ourselves--we don't 413 + * try to share. If there are none available we'll wait indefinitely 414 + * for a free one. 415 + * - This function will not wait for the commands to be finished, only for 416 + * data to be programmed into the RPMh. See rpmh_tx_done() which will 417 + * be called when the transfer is fully complete. 418 + * - This function must be called with interrupts enabled. If the hardware 419 + * is busy doing someone else's transfer we need that transfer to fully 420 + * finish so that we can have the hardware, and to fully finish it needs 421 + * the interrupt handler to run. If the interrupts is set to run on the 422 + * active CPU this can never happen if interrupts are disabled. 627 423 * 628 424 * Return: 0 on success, -EINVAL on error. 629 - * Note: This call blocks until a valid data is written to the TCS. 
  */
 int rpmh_rsc_send_data(struct rsc_drv *drv, const struct tcs_request *msg)
 {
 	int ret;
-
-	if (!msg || !msg->cmds || !msg->num_cmds ||
-	    msg->num_cmds > MAX_RPMH_PAYLOAD) {
-		WARN_ON(1);
-		return -EINVAL;
-	}

 	do {
 		ret = tcs_write(drv, msg);
···
 	return ret;
 }

-static int find_match(const struct tcs_group *tcs, const struct tcs_cmd *cmd,
-		      int len)
-{
-	int i, j;
-
-	/* Check for already cached commands */
-	for_each_set_bit(i, tcs->slots, MAX_TCS_SLOTS) {
-		if (tcs->cmd_cache[i] != cmd[0].addr)
-			continue;
-		if (i + len >= tcs->num_tcs * tcs->ncpt)
-			goto seq_err;
-		for (j = 0; j < len; j++) {
-			if (tcs->cmd_cache[i + j] != cmd[j].addr)
-				goto seq_err;
-		}
-		return i;
-	}
-
-	return -ENODATA;
-
-seq_err:
-	WARN(1, "Message does not match previous sequence.\n");
-	return -EINVAL;
-}
-
+/**
+ * find_slots() - Find a place to write the given message.
+ * @tcs:    The tcs group to search.
+ * @msg:    The message we want to find room for.
+ * @tcs_id: If we return 0 from the function, we return the global ID of the
+ *          TCS to write to here.
+ * @cmd_id: If we return 0 from the function, we return the index of
+ *          the command array of the returned TCS where the client should
+ *          start writing the message.
+ *
+ * Only for use on sleep/wake TCSes since those are the only ones we maintain
+ * tcs->slots for.
+ *
+ * Return: -ENOMEM if there was no room, else 0.
+ */
 static int find_slots(struct tcs_group *tcs, const struct tcs_request *msg,
 		      int *tcs_id, int *cmd_id)
 {
 	int slot, offset;
 	int i = 0;

-	/* Find if we already have the msg in our TCS */
-	slot = find_match(tcs, msg->cmds, msg->num_cmds);
-	if (slot >= 0)
-		goto copy_data;
-
-	/* Do over, until we can fit the full payload in a TCS */
+	/* Do over, until we can fit the full payload in a single TCS */
 	do {
 		slot = bitmap_find_next_zero_area(tcs->slots, MAX_TCS_SLOTS,
 						  i, msg->num_cmds, 0);
···
 		i += tcs->ncpt;
 	} while (slot + msg->num_cmds - 1 >= i);

-copy_data:
 	bitmap_set(tcs->slots, slot, msg->num_cmds);
-	/* Copy the addresses of the resources over to the slots */
-	for (i = 0; i < msg->num_cmds; i++)
-		tcs->cmd_cache[slot + i] = msg->cmds[i].addr;

 	offset = slot / tcs->ncpt;
 	*tcs_id = offset + tcs->offset;
···
 	return 0;
 }

-static int tcs_ctrl_write(struct rsc_drv *drv, const struct tcs_request *msg)
+/**
+ * rpmh_rsc_write_ctrl_data() - Write request to controller but don't trigger.
+ * @drv: The controller.
+ * @msg: The data to be written to the controller.
+ *
+ * This should only be called for sleep/wake state, never active-only
+ * state.
+ *
+ * The caller must ensure that no other RPMH actions are happening and the
+ * controller is idle when this function is called since it runs lockless.
+ *
+ * Return: 0 if no error; else -error.
+ */
+int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
 {
 	struct tcs_group *tcs;
 	int tcs_id = 0, cmd_id = 0;
-	unsigned long flags;
 	int ret;

 	tcs = get_tcs_for_msg(drv, msg);
 	if (IS_ERR(tcs))
 		return PTR_ERR(tcs);

-	spin_lock_irqsave(&tcs->lock, flags);
 	/* find the TCS id and the command in the TCS to write to */
 	ret = find_slots(tcs, msg, &tcs_id, &cmd_id);
 	if (!ret)
 		__tcs_buffer_write(drv, tcs_id, cmd_id, msg);
-	spin_unlock_irqrestore(&tcs->lock, flags);

 	return ret;
 }

 /**
- * rpmh_rsc_write_ctrl_data: Write request to the controller
+ * rpmh_rsc_ctrlr_is_busy() - Check if any of the AMCs are busy.
+ * @drv: The controller
  *
- * @drv: the controller
- * @msg: the data to be written to the controller
+ * Checks if any of the AMCs are busy in handling ACTIVE sets.
+ * This is called from the last CPU powering down before flushing
+ * SLEEP and WAKE sets. If AMCs are busy, the controller cannot enter
+ * power collapse, so deny it from the last CPU's PM notification.
  *
- * There is no response returned for writing the request to the controller.
+ * Context: Must be called with the drv->lock held.
+ *
+ * Return:
+ * * False - AMCs are idle
+ * * True  - AMCs are busy
  */
-int rpmh_rsc_write_ctrl_data(struct rsc_drv *drv, const struct tcs_request *msg)
+static bool rpmh_rsc_ctrlr_is_busy(struct rsc_drv *drv)
 {
-	if (!msg || !msg->cmds || !msg->num_cmds ||
-	    msg->num_cmds > MAX_RPMH_PAYLOAD) {
-		pr_err("Payload error\n");
-		return -EINVAL;
+	int m;
+	struct tcs_group *tcs = &drv->tcs[ACTIVE_TCS];
+
+	/*
+	 * If we made an active request on an RSC that does not have a
+	 * dedicated TCS for active state use, then the re-purposed wake
+	 * TCSes should be checked for not being busy, because we used
+	 * wake TCSes for active requests in this case.
+	 */
+	if (!tcs->num_tcs)
+		tcs = &drv->tcs[WAKE_TCS];
+
+	for (m = tcs->offset; m < tcs->offset + tcs->num_tcs; m++) {
+		if (!tcs_is_free(drv, m))
+			return true;
 	}

-	/* Data sent to this API will not be sent immediately */
-	if (msg->state == RPMH_ACTIVE_ONLY_STATE)
-		return -EINVAL;
+	return false;
+}

-	return tcs_ctrl_write(drv, msg);
+/**
+ * rpmh_rsc_cpu_pm_callback() - Check if any of the AMCs are busy.
+ * @nfb:    Pointer to the notifier block in struct rsc_drv.
+ * @action: CPU_PM_ENTER, CPU_PM_ENTER_FAILED, or CPU_PM_EXIT.
+ * @v:      Unused
+ *
+ * This function is given to cpu_pm_register_notifier() so we can be informed
+ * about when CPUs go down. When all CPUs go down we know no more active
+ * transfers will be started so we write out the sleep/wake sets. This
+ * function gets called from cpuidle code paths and also at system suspend
+ * time.
+ *
+ * If it's the last CPU going down and the AMCs are not busy, this writes
+ * the cached sleep and wake messages to the TCSes. The firmware then takes
+ * care of triggering them when entering the deepest low power modes.
+ *
+ * Return: See cpu_pm_register_notifier()
+ */
+static int rpmh_rsc_cpu_pm_callback(struct notifier_block *nfb,
+				    unsigned long action, void *v)
+{
+	struct rsc_drv *drv = container_of(nfb, struct rsc_drv, rsc_pm);
+	int ret = NOTIFY_OK;
+	int cpus_in_pm;
+
+	switch (action) {
+	case CPU_PM_ENTER:
+		cpus_in_pm = atomic_inc_return(&drv->cpus_in_pm);
+		/*
+		 * NOTE: comments for num_online_cpus() point out that it's
+		 * only a snapshot so we need to be careful. It should be OK
+		 * for us to use, though. It's important for us not to miss
+		 * if we're the last CPU going down so it would only be a
+		 * problem if a CPU went offline right after we did the check
+		 * AND that CPU was not idle AND that CPU was the last non-idle
+		 * CPU. That can't happen. CPUs would have to come out of idle
+		 * before the CPU could go offline.
+		 */
+		if (cpus_in_pm < num_online_cpus())
+			return NOTIFY_OK;
+		break;
+	case CPU_PM_ENTER_FAILED:
+	case CPU_PM_EXIT:
+		atomic_dec(&drv->cpus_in_pm);
+		return NOTIFY_OK;
+	default:
+		return NOTIFY_DONE;
+	}
+
+	/*
+	 * It's likely we're on the last CPU. Grab the drv->lock and write
+	 * out the sleep/wake commands to RPMH hardware. Grabbing the lock
+	 * means that if we race with another CPU coming up we are still
+	 * guaranteed to be safe. If another CPU came up just after we checked
+	 * and has grabbed the lock or started an active transfer then we'll
+	 * notice we're busy and abort. If another CPU comes up after we start
+	 * flushing it will be blocked from starting an active transfer until
+	 * we're done flushing. If another CPU starts an active transfer after
+	 * we release the lock we're still OK because we're no longer the last
+	 * CPU.
+	 */
+	if (spin_trylock(&drv->lock)) {
+		if (rpmh_rsc_ctrlr_is_busy(drv) || rpmh_flush(&drv->client))
+			ret = NOTIFY_BAD;
+		spin_unlock(&drv->lock);
+	} else {
+		/* Another CPU must be up */
+		return NOTIFY_OK;
+	}
+
+	if (ret == NOTIFY_BAD) {
+		/* Double-check if we're here because someone else is up */
+		if (cpus_in_pm < num_online_cpus())
+			ret = NOTIFY_OK;
+		else
+			/* We won't be called w/ CPU_PM_ENTER_FAILED */
+			atomic_dec(&drv->cpus_in_pm);
+	}
+
+	return ret;
 }

 static int rpmh_probe_tcs_config(struct platform_device *pdev,
-				 struct rsc_drv *drv)
+				 struct rsc_drv *drv, void __iomem *base)
 {
 	struct tcs_type_config {
 		u32 type;
···
 	u32 config, max_tcs, ncpt, offset;
 	int i, ret, n, st = 0;
 	struct tcs_group *tcs;
-	struct resource *res;
-	void __iomem *base;
-	char drv_id[10] = {0};
-
-	snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
-	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id);
-	base = devm_ioremap_resource(&pdev->dev, res);
-	if (IS_ERR(base))
-		return PTR_ERR(base);

 	ret = of_property_read_u32(dn, "qcom,tcs-offset", &offset);
 	if (ret)
···
 		tcs->type = tcs_cfg[i].type;
 		tcs->num_tcs = tcs_cfg[i].n;
 		tcs->ncpt = ncpt;
-		spin_lock_init(&tcs->lock);

 		if (!tcs->num_tcs || tcs->type == CONTROL_TCS)
 			continue;
···
 		tcs->mask = ((1 << tcs->num_tcs) - 1) << st;
 		tcs->offset = st;
 		st += tcs->num_tcs;
-
-		/*
-		 * Allocate memory to cache sleep and wake requests to
-		 * avoid reading TCS register memory.
-		 */
-		if (tcs->type == ACTIVE_TCS)
-			continue;
-
-		tcs->cmd_cache = devm_kcalloc(&pdev->dev,
-					      tcs->num_tcs * ncpt, sizeof(u32),
-					      GFP_KERNEL);
-		if (!tcs->cmd_cache)
-			return -ENOMEM;
 	}

 	drv->num_tcs = st;
···
 {
 	struct device_node *dn = pdev->dev.of_node;
 	struct rsc_drv *drv;
+	struct resource *res;
+	char drv_id[10] = {0};
 	int ret, irq;
+	u32 solver_config;
+	void __iomem *base;

 	/*
 	 * Even though RPMh doesn't directly use cmd-db, all of its children
···
 	if (!drv->name)
 		drv->name = dev_name(&pdev->dev);

-	ret = rpmh_probe_tcs_config(pdev, drv);
+	snprintf(drv_id, ARRAY_SIZE(drv_id), "drv-%d", drv->id);
+	res = platform_get_resource_byname(pdev, IORESOURCE_MEM, drv_id);
+	base = devm_ioremap_resource(&pdev->dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	ret = rpmh_probe_tcs_config(pdev, drv, base);
 	if (ret)
 		return ret;
···
 	if (ret)
 		return ret;

+	/*
+	 * CPU PM notifications are not required for controllers that support
+	 * 'HW solver' mode where they can be in autonomous mode executing low
+	 * power mode to power down.
+	 */
+	solver_config = readl_relaxed(base + DRV_SOLVER_CONFIG);
+	solver_config &= DRV_HW_SOLVER_MASK << DRV_HW_SOLVER_SHIFT;
+	solver_config = solver_config >> DRV_HW_SOLVER_SHIFT;
+	if (!solver_config) {
+		drv->rsc_pm.notifier_call = rpmh_rsc_cpu_pm_callback;
+		cpu_pm_register_notifier(&drv->rsc_pm);
+	}
+
 	/* Enable the active TCS to send requests immediately */
-	write_tcs_reg(drv, RSC_DRV_IRQ_ENABLE, 0, drv->tcs[ACTIVE_TCS].mask);
+	writel_relaxed(drv->tcs[ACTIVE_TCS].mask,
+		       drv->tcs_base + RSC_DRV_IRQ_ENABLE);

 	spin_lock_init(&drv->client.cache_lock);
 	INIT_LIST_HEAD(&drv->client.cache);
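The probe path gates CPU PM notifier registration on the DRV_SOLVER_CONFIG register: the HW-solver field is isolated by masking in place and then shifting down. A standalone userspace sketch of that mask-then-shift idiom follows; the real DRV_HW_SOLVER_MASK/SHIFT values live in the driver's internal header, so the values below are illustrative assumptions only.

```c
#include <stdint.h>

/* Hypothetical field layout -- placeholders for the real
 * DRV_HW_SOLVER_MASK / DRV_HW_SOLVER_SHIFT definitions. */
#define HW_SOLVER_MASK	1u
#define HW_SOLVER_SHIFT	24

/* Mirror of the probe logic: mask the field in place, then shift it
 * down so the result is the raw field value (0 = no HW solver). */
static inline uint32_t hw_solver_field(uint32_t solver_config)
{
	solver_config &= HW_SOLVER_MASK << HW_SOLVER_SHIFT;
	return solver_config >> HW_SOLVER_SHIFT;
}
```

When the extracted field is zero the driver registers its CPU PM notifier; a non-zero value means the controller can power down autonomously and needs no notification.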
+47 -50
drivers/soc/qcom/rpmh.c
···
 #include <linux/jiffies.h>
 #include <linux/kernel.h>
 #include <linux/list.h>
+#include <linux/lockdep.h>
 #include <linux/module.h>
 #include <linux/of.h>
 #include <linux/platform_device.h>
···
 {
 	struct cache_req *req;
 	unsigned long flags;
+	u32 old_sleep_val, old_wake_val;

 	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	req = __find_req(ctrlr, cmd->addr);
···

 	req->addr = cmd->addr;
 	req->sleep_val = req->wake_val = UINT_MAX;
-	INIT_LIST_HEAD(&req->list);
 	list_add_tail(&req->list, &ctrlr->cache);

 existing:
+	old_sleep_val = req->sleep_val;
+	old_wake_val = req->wake_val;
+
 	switch (state) {
 	case RPMH_ACTIVE_ONLY_STATE:
-		if (req->sleep_val != UINT_MAX)
-			req->wake_val = cmd->data;
-		break;
 	case RPMH_WAKE_ONLY_STATE:
 		req->wake_val = cmd->data;
 		break;
 	case RPMH_SLEEP_STATE:
 		req->sleep_val = cmd->data;
 		break;
-	default:
-		break;
 	}

-	ctrlr->dirty = true;
+	ctrlr->dirty |= (req->sleep_val != old_sleep_val ||
+			 req->wake_val != old_wake_val) &&
+			req->sleep_val != UINT_MAX &&
+			req->wake_val != UINT_MAX;
+
 unlock:
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
···

 	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_add_tail(&req->list, &ctrlr->batch_cache);
+	ctrlr->dirty = true;
 	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }
···
 {
 	struct batch_cache_req *req;
 	const struct rpmh_request *rpm_msg;
-	unsigned long flags;
 	int ret = 0;
 	int i;

 	/* Send Sleep/Wake requests to the controller, expect no response */
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
 	list_for_each_entry(req, &ctrlr->batch_cache, list) {
 		for (i = 0; i < req->count; i++) {
 			rpm_msg = req->rpm_msgs + i;
···
 			break;
 		}
 	}
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

 	return ret;
-}
-
-static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
-{
-	struct batch_cache_req *req, *tmp;
-	unsigned long flags;
-
-	spin_lock_irqsave(&ctrlr->cache_lock, flags);
-	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
-		kfree(req);
-	INIT_LIST_HEAD(&ctrlr->batch_cache);
-	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
 }

 /**
···
 }

 /**
- * rpmh_flush: Flushes the buffered active and sleep sets to TCS
+ * rpmh_flush() - Flushes the buffered sleep and wake sets to TCSes
  *
- * @ctrlr: controller making request to flush cached data
+ * @ctrlr: Controller making request to flush cached data
  *
- * Return: -EBUSY if the controller is busy, probably waiting on a response
- * to a RPMH request sent earlier.
- *
- * This function is always called from the sleep code from the last CPU
- * that is powering down the entire system. Since no other RPMH API would be
- * executing at this time, it is safe to run lockless.
+ * Return:
+ * * 0          - Success
+ * * Error code - Otherwise
  */
 int rpmh_flush(struct rpmh_ctrlr *ctrlr)
 {
 	struct cache_req *p;
-	int ret;
+	int ret = 0;
+
+	lockdep_assert_irqs_disabled();
+
+	/*
+	 * Currently rpmh_flush() is only called when we think we're running
+	 * on the last processor. If the lock is busy it means another
+	 * processor is up and it's better to abort than spin.
+	 */
+	if (!spin_trylock(&ctrlr->cache_lock))
+		return -EBUSY;

 	if (!ctrlr->dirty) {
 		pr_debug("Skipping flush, TCS has latest data.\n");
-		return 0;
+		goto exit;
 	}
+
+	/* Invalidate the TCSes first to avoid stale data */
+	rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));

 	/* First flush the cached batch requests */
 	ret = flush_batch(ctrlr);
 	if (ret)
-		return ret;
+		goto exit;

-	/*
-	 * Nobody else should be calling this function other than system PM,
-	 * hence we can run without locks.
-	 */
 	list_for_each_entry(p, &ctrlr->cache, list) {
 		if (!is_req_valid(p)) {
 			pr_debug("%s: skipping RPMH req: a:%#x s:%#x w:%#x",
···
 		ret = send_single(ctrlr, RPMH_SLEEP_STATE, p->addr,
 				  p->sleep_val);
 		if (ret)
-			return ret;
+			goto exit;
 		ret = send_single(ctrlr, RPMH_WAKE_ONLY_STATE, p->addr,
 				  p->wake_val);
 		if (ret)
-			return ret;
+			goto exit;
 	}

 	ctrlr->dirty = false;

-	return 0;
+exit:
+	spin_unlock(&ctrlr->cache_lock);
+	return ret;
 }

 /**
- * rpmh_invalidate: Invalidate all sleep and active sets
- * sets.
+ * rpmh_invalidate: Invalidate sleep and wake sets in batch_cache
  *
  * @dev: The device making the request
  *
- * Invalidate the sleep and active values in the TCS blocks.
+ * Invalidate the sleep and wake values in batch_cache.
  */
 int rpmh_invalidate(const struct device *dev)
 {
 	struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev);
-	int ret;
+	struct batch_cache_req *req, *tmp;
+	unsigned long flags;

-	invalidate_batch(ctrlr);
+	spin_lock_irqsave(&ctrlr->cache_lock, flags);
+	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
+		kfree(req);
+	INIT_LIST_HEAD(&ctrlr->batch_cache);
 	ctrlr->dirty = true;
+	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);

-	do {
-		ret = rpmh_rsc_invalidate(ctrlr_to_drv(ctrlr));
-	} while (ret == -EAGAIN);
-
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(rpmh_invalidate);
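The reworked `cache_rpm_request()` above no longer sets `ctrlr->dirty` unconditionally: the cache is marked dirty only when a sleep or wake value actually changed *and* both values have been programmed (`UINT_MAX` means "not set yet"). That predicate can be sketched in isolation; the function name below is hypothetical, used only to exercise the logic.

```c
#include <limits.h>
#include <stdbool.h>

/* Standalone sketch of the new dirty-flag predicate: flush is only
 * needed when a value changed and both sleep and wake values are set
 * (UINT_MAX is the driver's sentinel for "not programmed"). */
static bool cache_needs_flush(unsigned int sleep_val,
			      unsigned int old_sleep_val,
			      unsigned int wake_val,
			      unsigned int old_wake_val)
{
	return (sleep_val != old_sleep_val || wake_val != old_wake_val) &&
	       sleep_val != UINT_MAX && wake_val != UINT_MAX;
}
```

This avoids redundant flushes when a client re-sends a value the controller already has cached.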
+24
drivers/soc/qcom/rpmhpd.c
···
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/pm_domain.h>
 #include <linux/slab.h>
···
 	.num_pds = ARRAY_SIZE(sm8150_rpmhpds),
 };

+static struct rpmhpd *sm8250_rpmhpds[] = {
+	[SM8250_CX] = &sdm845_cx,
+	[SM8250_CX_AO] = &sdm845_cx_ao,
+	[SM8250_EBI] = &sdm845_ebi,
+	[SM8250_GFX] = &sdm845_gfx,
+	[SM8250_LCX] = &sdm845_lcx,
+	[SM8250_LMX] = &sdm845_lmx,
+	[SM8250_MMCX] = &sm8150_mmcx,
+	[SM8250_MMCX_AO] = &sm8150_mmcx_ao,
+	[SM8250_MX] = &sdm845_mx,
+	[SM8250_MX_AO] = &sdm845_mx_ao,
+};
+
+static const struct rpmhpd_desc sm8250_desc = {
+	.rpmhpds = sm8250_rpmhpds,
+	.num_pds = ARRAY_SIZE(sm8250_rpmhpds),
+};
+
 /* SC7180 RPMH powerdomains */
 static struct rpmhpd *sc7180_rpmhpds[] = {
 	[SC7180_CX] = &sdm845_cx,
···
 	{ .compatible = "qcom,sc7180-rpmhpd", .data = &sc7180_desc },
 	{ .compatible = "qcom,sdm845-rpmhpd", .data = &sdm845_desc },
 	{ .compatible = "qcom,sm8150-rpmhpd", .data = &sm8150_desc },
+	{ .compatible = "qcom,sm8250-rpmhpd", .data = &sm8250_desc },
 	{ }
 };
+MODULE_DEVICE_TABLE(of, rpmhpd_match_table);

 static int rpmhpd_send_corner(struct rpmhpd *pd, int state,
 			      unsigned int corner, bool sync)
···
 	return platform_driver_register(&rpmhpd_driver);
 }
 core_initcall(rpmhpd_init);
+
+MODULE_DESCRIPTION("Qualcomm Technologies, Inc. RPMh Power Domain Driver");
+MODULE_LICENSE("GPL v2");
+5
drivers/soc/qcom/rpmpd.c
···
 #include <linux/err.h>
 #include <linux/init.h>
 #include <linux/kernel.h>
+#include <linux/module.h>
 #include <linux/mutex.h>
 #include <linux/pm_domain.h>
 #include <linux/of.h>
···
 	{ .compatible = "qcom,qcs404-rpmpd", .data = &qcs404_desc },
 	{ }
 };
+MODULE_DEVICE_TABLE(of, rpmpd_match_table);

 static int rpmpd_send_enable(struct rpmpd *pd, bool enable)
 {
···
 	return platform_driver_register(&rpmpd_driver);
 }
 core_initcall(rpmpd_init);
+
+MODULE_DESCRIPTION("Qualcomm Technologies, Inc. RPM Power Domain Driver");
+MODULE_LICENSE("GPL v2");
+1 -3
drivers/soc/qcom/smp2p.c
···
 		goto report_read_failure;

 	irq = platform_get_irq(pdev, 0);
-	if (irq < 0) {
-		dev_err(&pdev->dev, "unable to acquire smp2p interrupt\n");
+	if (irq < 0)
 		return irq;
-	}

 	smp2p->mbox_client.dev = &pdev->dev;
 	smp2p->mbox_client.knows_txdone = true;
+6
drivers/soc/qcom/socinfo.c
···
 	{ 216, "MSM8674PRO" },
 	{ 217, "MSM8974-AA" },
 	{ 218, "MSM8974-AB" },
+	{ 233, "MSM8936" },
+	{ 239, "MSM8939" },
+	{ 240, "APQ8036" },
+	{ 241, "APQ8039" },
 	{ 246, "MSM8996" },
 	{ 247, "APQ8016" },
 	{ 248, "MSM8216" },
···
 	qs->attr.family = "Snapdragon";
 	qs->attr.machine = socinfo_machine(&pdev->dev,
 					   le32_to_cpu(info->id));
+	qs->attr.soc_id = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u",
+					 le32_to_cpu(info->id));
 	qs->attr.revision = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%u.%u",
 					   SOCINFO_MAJOR(le32_to_cpu(info->ver)),
 					   SOCINFO_MINOR(le32_to_cpu(info->ver)));
+11
drivers/soc/renesas/Kconfig
···
 	select ARM_ERRATA_754322
 	select RENESAS_INTC_IRQPIN

+config ARCH_R8A7742
+	bool "RZ/G1H (R8A77420)"
+	select ARCH_RCAR_GEN2
+	select ARM_ERRATA_798181 if SMP
+	select ARM_ERRATA_814220
+	select SYSC_R8A7742
+
 config ARCH_R8A7743
 	bool "RZ/G1M (R8A77430)"
 	select ARCH_RCAR_GEN2
···
 endif # ARM64

 # SoC
+config SYSC_R8A7742
+	bool "RZ/G1H System Controller support" if COMPILE_TEST
+	select SYSC_RCAR
+
 config SYSC_R8A7743
 	bool "RZ/G1M System Controller support" if COMPILE_TEST
 	select SYSC_RCAR
+1
drivers/soc/renesas/Makefile
···
 obj-$(CONFIG_SOC_RENESAS)	+= renesas-soc.o

 # SoC
+obj-$(CONFIG_SYSC_R8A7742)	+= r8a7742-sysc.o
 obj-$(CONFIG_SYSC_R8A7743)	+= r8a7743-sysc.o
 obj-$(CONFIG_SYSC_R8A7745)	+= r8a7745-sysc.o
 obj-$(CONFIG_SYSC_R8A77470)	+= r8a77470-sysc.o
+42
drivers/soc/renesas/r8a7742-sysc.c
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Renesas RZ/G1H System Controller
+ *
+ * Copyright (C) 2020 Renesas Electronics Corp.
+ */
+
+#include <linux/kernel.h>
+
+#include <dt-bindings/power/r8a7742-sysc.h>
+
+#include "rcar-sysc.h"
+
+static const struct rcar_sysc_area r8a7742_areas[] __initconst = {
+	{ "always-on", 0, 0, R8A7742_PD_ALWAYS_ON, -1, PD_ALWAYS_ON },
+	{ "ca15-scu", 0x180, 0, R8A7742_PD_CA15_SCU, R8A7742_PD_ALWAYS_ON,
+	  PD_SCU },
+	{ "ca15-cpu0", 0x40, 0, R8A7742_PD_CA15_CPU0, R8A7742_PD_CA15_SCU,
+	  PD_CPU_NOCR },
+	{ "ca15-cpu1", 0x40, 1, R8A7742_PD_CA15_CPU1, R8A7742_PD_CA15_SCU,
+	  PD_CPU_NOCR },
+	{ "ca15-cpu2", 0x40, 2, R8A7742_PD_CA15_CPU2, R8A7742_PD_CA15_SCU,
+	  PD_CPU_NOCR },
+	{ "ca15-cpu3", 0x40, 3, R8A7742_PD_CA15_CPU3, R8A7742_PD_CA15_SCU,
+	  PD_CPU_NOCR },
+	{ "ca7-scu", 0x100, 0, R8A7742_PD_CA7_SCU, R8A7742_PD_ALWAYS_ON,
+	  PD_SCU },
+	{ "ca7-cpu0", 0x1c0, 0, R8A7742_PD_CA7_CPU0, R8A7742_PD_CA7_SCU,
+	  PD_CPU_NOCR },
+	{ "ca7-cpu1", 0x1c0, 1, R8A7742_PD_CA7_CPU1, R8A7742_PD_CA7_SCU,
+	  PD_CPU_NOCR },
+	{ "ca7-cpu2", 0x1c0, 2, R8A7742_PD_CA7_CPU2, R8A7742_PD_CA7_SCU,
+	  PD_CPU_NOCR },
+	{ "ca7-cpu3", 0x1c0, 3, R8A7742_PD_CA7_CPU3, R8A7742_PD_CA7_SCU,
+	  PD_CPU_NOCR },
+	{ "rgx", 0xc0, 0, R8A7742_PD_RGX, R8A7742_PD_ALWAYS_ON },
+};
+
+const struct rcar_sysc_info r8a7742_sysc_info __initconst = {
+	.areas = r8a7742_areas,
+	.num_areas = ARRAY_SIZE(r8a7742_areas),
+};
+1
drivers/soc/renesas/rcar-rst.c
···
 static const struct of_device_id rcar_rst_matches[] __initconst = {
 	/* RZ/G1 is handled like R-Car Gen2 */
+	{ .compatible = "renesas,r8a7742-rst", .data = &rcar_rst_gen2 },
 	{ .compatible = "renesas,r8a7743-rst", .data = &rcar_rst_gen2 },
 	{ .compatible = "renesas,r8a7744-rst", .data = &rcar_rst_gen2 },
 	{ .compatible = "renesas,r8a7745-rst", .data = &rcar_rst_gen2 },
+3
drivers/soc/renesas/rcar-sysc.c
···
 }

 static const struct of_device_id rcar_sysc_matches[] __initconst = {
+#ifdef CONFIG_SYSC_R8A7742
+	{ .compatible = "renesas,r8a7742-sysc", .data = &r8a7742_sysc_info },
+#endif
 #ifdef CONFIG_SYSC_R8A7743
 	{ .compatible = "renesas,r8a7743-sysc", .data = &r8a7743_sysc_info },
 	/* RZ/G1N is identical to RZ/G1M w.r.t. power domains. */
+1
drivers/soc/renesas/rcar-sysc.h
···
 	u32 extmask_val;	/* SYSCEXTMASK register mask value */
 };

+extern const struct rcar_sysc_info r8a7742_sysc_info;
 extern const struct rcar_sysc_info r8a7743_sysc_info;
 extern const struct rcar_sysc_info r8a7745_sysc_info;
 extern const struct rcar_sysc_info r8a77470_sysc_info;
+1
drivers/soc/tegra/Kconfig
···
 config SOC_TEGRA_PMC
 	bool
+	select GENERIC_PINCONF

 config SOC_TEGRA_POWERGATE_BPMP
 	def_bool y
+56 -1
drivers/soc/tegra/fuse/fuse-tegra.c
···
 	writel(reg, base + 0x14);
 }

+static ssize_t major_show(struct device *dev, struct device_attribute *attr,
+			  char *buf)
+{
+	return sprintf(buf, "%d\n", tegra_get_major_rev());
+}
+
+static DEVICE_ATTR_RO(major);
+
+static ssize_t minor_show(struct device *dev, struct device_attribute *attr,
+			  char *buf)
+{
+	return sprintf(buf, "%d\n", tegra_get_minor_rev());
+}
+
+static DEVICE_ATTR_RO(minor);
+
+static struct attribute *tegra_soc_attr[] = {
+	&dev_attr_major.attr,
+	&dev_attr_minor.attr,
+	NULL,
+};
+
+const struct attribute_group tegra_soc_attr_group = {
+	.attrs = tegra_soc_attr,
+};
+
+#ifdef CONFIG_ARCH_TEGRA_194_SOC
+static ssize_t platform_show(struct device *dev, struct device_attribute *attr,
+			     char *buf)
+{
+	/*
+	 * Displays the value in the 'pre_si_platform' field of the HIDREV
+	 * register for Tegra194 devices. A value of 0 indicates that the
+	 * platform type is silicon and all other non-zero values indicate
+	 * the type of simulation platform being used.
+	 */
+	return sprintf(buf, "%d\n", (tegra_read_chipid() >> 20) & 0xf);
+}
+
+static DEVICE_ATTR_RO(platform);
+
+static struct attribute *tegra194_soc_attr[] = {
+	&dev_attr_major.attr,
+	&dev_attr_minor.attr,
+	&dev_attr_platform.attr,
+	NULL,
+};
+
+const struct attribute_group tegra194_soc_attr_group = {
+	.attrs = tegra194_soc_attr,
+};
+#endif
+
 struct device * __init tegra_soc_device_register(void)
 {
 	struct soc_device_attribute *attr;
···
 		return NULL;

 	attr->family = kasprintf(GFP_KERNEL, "Tegra");
-	attr->revision = kasprintf(GFP_KERNEL, "%d", tegra_sku_info.revision);
+	attr->revision = kasprintf(GFP_KERNEL, "%s",
+				   tegra_revision_name[tegra_sku_info.revision]);
 	attr->soc_id = kasprintf(GFP_KERNEL, "%u", tegra_get_chip_id());
+	attr->custom_attr_group = fuse->soc->soc_attr_group;

 	dev = soc_device_register(attr);
 	if (IS_ERR(dev)) {
+1
drivers/soc/tegra/fuse/fuse-tegra20.c
···
 	.speedo_init = tegra20_init_speedo_data,
 	.probe = tegra20_fuse_probe,
 	.info = &tegra20_fuse_info,
+	.soc_attr_group = &tegra_soc_attr_group,
 };
+6
drivers/soc/tegra/fuse/fuse-tegra30.c
···
 	.init = tegra30_fuse_init,
 	.speedo_init = tegra30_init_speedo_data,
 	.info = &tegra30_fuse_info,
+	.soc_attr_group = &tegra_soc_attr_group,
 };
 #endif
···
 	.init = tegra30_fuse_init,
 	.speedo_init = tegra114_init_speedo_data,
 	.info = &tegra114_fuse_info,
+	.soc_attr_group = &tegra_soc_attr_group,
 };
 #endif
···
 	.info = &tegra124_fuse_info,
 	.lookups = tegra124_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra124_fuse_lookups),
+	.soc_attr_group = &tegra_soc_attr_group,
 };
 #endif
···
 	.info = &tegra210_fuse_info,
 	.lookups = tegra210_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra210_fuse_lookups),
+	.soc_attr_group = &tegra_soc_attr_group,
 };
 #endif
···
 	.info = &tegra186_fuse_info,
 	.lookups = tegra186_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra186_fuse_lookups),
+	.soc_attr_group = &tegra_soc_attr_group,
 };
 #endif
···
 	.info = &tegra194_fuse_info,
 	.lookups = tegra194_fuse_lookups,
 	.num_lookups = ARRAY_SIZE(tegra194_fuse_lookups),
+	.soc_attr_group = &tegra194_soc_attr_group,
 };
 #endif
+8
drivers/soc/tegra/fuse/fuse.h
···
 	const struct nvmem_cell_lookup *lookups;
 	unsigned int num_lookups;
+
+	const struct attribute_group *soc_attr_group;
 };

 struct tegra_fuse {
···
 bool __init tegra_fuse_read_spare(unsigned int spare);
 u32 __init tegra_fuse_read_early(unsigned int offset);
+
+u8 tegra_get_major_rev(void);
+u8 tegra_get_minor_rev(void);
+
+extern const struct attribute_group tegra_soc_attr_group;

 #ifdef CONFIG_ARCH_TEGRA_2x_SOC
 void tegra20_init_speedo_data(struct tegra_sku_info *sku_info);
···
 #ifdef CONFIG_ARCH_TEGRA_194_SOC
 extern const struct tegra_fuse_soc tegra194_fuse_soc;
+extern const struct attribute_group tegra194_soc_attr_group;
 #endif

 #endif
+19 -13
drivers/soc/tegra/fuse/tegra-apbmisc.c
···
 	return (tegra_read_chipid() >> 8) & 0xff;
 }

+u8 tegra_get_major_rev(void)
+{
+	return (tegra_read_chipid() >> 4) & 0xf;
+}
+
+u8 tegra_get_minor_rev(void)
+{
+	return (tegra_read_chipid() >> 16) & 0xf;
+}
+
 u32 tegra_read_straps(void)
 {
 	WARN(!chipid, "Tegra ABP MISC not yet available\n");
···
 void __init tegra_init_revision(void)
 {
-	u32 id, chip_id, minor_rev;
-	int rev;
+	u8 chip_id, minor_rev;

-	id = tegra_read_chipid();
-	chip_id = (id >> 8) & 0xff;
-	minor_rev = (id >> 16) & 0xf;
+	chip_id = tegra_get_chip_id();
+	minor_rev = tegra_get_minor_rev();

 	switch (minor_rev) {
 	case 1:
-		rev = TEGRA_REVISION_A01;
+		tegra_sku_info.revision = TEGRA_REVISION_A01;
 		break;
 	case 2:
-		rev = TEGRA_REVISION_A02;
+		tegra_sku_info.revision = TEGRA_REVISION_A02;
 		break;
 	case 3:
 		if (chip_id == TEGRA20 && (tegra_fuse_read_spare(18) ||
 					   tegra_fuse_read_spare(19)))
-			rev = TEGRA_REVISION_A03p;
+			tegra_sku_info.revision = TEGRA_REVISION_A03p;
 		else
-			rev = TEGRA_REVISION_A03;
+			tegra_sku_info.revision = TEGRA_REVISION_A03;
 		break;
 	case 4:
-		rev = TEGRA_REVISION_A04;
+		tegra_sku_info.revision = TEGRA_REVISION_A04;
 		break;
 	default:
-		rev = TEGRA_REVISION_UNKNOWN;
+		tegra_sku_info.revision = TEGRA_REVISION_UNKNOWN;
 	}
-
-	tegra_sku_info.revision = rev;

 	tegra_sku_info.sku_id = tegra_fuse_read_early(FUSE_SKU_INFO);
 }
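The new helpers slice fixed fields out of the Tegra chip-ID register: the chip ID sits in bits [15:8], the major revision in bits [7:4], and the minor revision in bits [19:16]. The same shifts and masks can be exercised in a plain userspace sketch (helper names are mine; the driver reads the register through `tegra_read_chipid()` instead of taking it as an argument):

```c
#include <stdint.h>

/* Field extraction matching the shifts used in tegra-apbmisc.c:
 * chip ID = bits [15:8], major rev = bits [7:4], minor rev = bits [19:16]. */
static uint8_t chip_id_of(uint32_t chipid)   { return (chipid >> 8) & 0xff; }
static uint8_t major_rev_of(uint32_t chipid) { return (chipid >> 4) & 0xf; }
static uint8_t minor_rev_of(uint32_t chipid) { return (chipid >> 16) & 0xf; }
```

Splitting the decode into small helpers is exactly what the patch does in the driver, so `tegra_init_revision()` and the new sysfs attributes share one definition of the bit layout.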
+3
drivers/soc/tegra/pmc.c
···
 static const struct tegra_wake_event tegra210_wake_events[] = {
 	TEGRA_WAKE_IRQ("rtc", 16, 2),
+	TEGRA_WAKE_IRQ("pmu", 51, 86),
 };

 static const struct tegra_pmc_soc tegra210_pmc_soc = {
···
 }

 static const struct tegra_wake_event tegra186_wake_events[] = {
+	TEGRA_WAKE_IRQ("pmu", 24, 209),
 	TEGRA_WAKE_GPIO("power", 29, 1, TEGRA186_AON_GPIO(FF, 0)),
 	TEGRA_WAKE_IRQ("rtc", 73, 10),
 };
···
 };

 static const struct tegra_wake_event tegra194_wake_events[] = {
+	TEGRA_WAKE_IRQ("pmu", 24, 209),
 	TEGRA_WAKE_GPIO("power", 29, 1, TEGRA194_AON_GPIO(EE, 4)),
 	TEGRA_WAKE_IRQ("rtc", 73, 10),
 };
+10
drivers/soc/ti/Kconfig
···
 	  and a consumer. There is one RINGACC module per NAVSS on TI AM65x SoCs
 	  If unsure, say N.
 
+config TI_K3_SOCINFO
+	bool
+	depends on ARCH_K3 || COMPILE_TEST
+	select SOC_BUS
+	select MFD_SYSCON
+	help
+	  Include support for the SoC bus socinfo for the TI K3 Multicore SoC
+	  platforms to provide information about the SoC family and
+	  variant to user space.
+
 endif # SOC_TI
 
 config TI_SCI_INTA_MSI_DOMAIN
+1
drivers/soc/ti/Makefile
···
 obj-$(CONFIG_TI_SCI_PM_DOMAINS)		+= ti_sci_pm_domains.o
 obj-$(CONFIG_TI_SCI_INTA_MSI_DOMAIN)	+= ti_sci_inta_msi.o
 obj-$(CONFIG_TI_K3_RINGACC)		+= k3-ringacc.o
+obj-$(CONFIG_TI_K3_SOCINFO)		+= k3-socinfo.o
+152
drivers/soc/ti/k3-socinfo.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * TI K3 SoC info driver
+ *
+ * Copyright (C) 2020 Texas Instruments Incorporated - http://www.ti.com
+ */
+
+#include <linux/mfd/syscon.h>
+#include <linux/of.h>
+#include <linux/of_address.h>
+#include <linux/regmap.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/string.h>
+#include <linux/sys_soc.h>
+
+#define CTRLMMR_WKUP_JTAGID_REG		0
+/*
+ * Bits:
+ *  31-28	VARIANT	Device variant
+ *  27-12	PARTNO	Part number
+ *  11-1	MFG	Indicates TI as manufacturer (0x17)
+ *  1		Always 1
+ */
+#define CTRLMMR_WKUP_JTAGID_VARIANT_SHIFT	(28)
+#define CTRLMMR_WKUP_JTAGID_VARIANT_MASK	GENMASK(31, 28)
+
+#define CTRLMMR_WKUP_JTAGID_PARTNO_SHIFT	(12)
+#define CTRLMMR_WKUP_JTAGID_PARTNO_MASK		GENMASK(27, 12)
+
+#define CTRLMMR_WKUP_JTAGID_MFG_SHIFT	(1)
+#define CTRLMMR_WKUP_JTAGID_MFG_MASK	GENMASK(11, 1)
+
+#define CTRLMMR_WKUP_JTAGID_MFG_TI	0x17
+
+static const struct k3_soc_id {
+	unsigned int id;
+	const char *family_name;
+} k3_soc_ids[] = {
+	{ 0xBB5A, "AM65X" },
+	{ 0xBB64, "J721E" },
+};
+
+static int
+k3_chipinfo_partno_to_names(unsigned int partno,
+			    struct soc_device_attribute *soc_dev_attr)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(k3_soc_ids); i++)
+		if (partno == k3_soc_ids[i].id) {
+			soc_dev_attr->family = k3_soc_ids[i].family_name;
+			return 0;
+		}
+
+	return -EINVAL;
+}
+
+static int k3_chipinfo_probe(struct platform_device *pdev)
+{
+	struct device_node *node = pdev->dev.of_node;
+	struct soc_device_attribute *soc_dev_attr;
+	struct device *dev = &pdev->dev;
+	struct soc_device *soc_dev;
+	struct regmap *regmap;
+	u32 partno_id;
+	u32 variant;
+	u32 jtag_id;
+	u32 mfg;
+	int ret;
+
+	regmap = device_node_to_regmap(node);
+	if (IS_ERR(regmap))
+		return PTR_ERR(regmap);
+
+	ret = regmap_read(regmap, CTRLMMR_WKUP_JTAGID_REG, &jtag_id);
+	if (ret < 0)
+		return ret;
+
+	mfg = (jtag_id & CTRLMMR_WKUP_JTAGID_MFG_MASK) >>
+	       CTRLMMR_WKUP_JTAGID_MFG_SHIFT;
+
+	if (mfg != CTRLMMR_WKUP_JTAGID_MFG_TI) {
+		dev_err(dev, "Invalid MFG SoC\n");
+		return -ENODEV;
+	}
+
+	variant = (jtag_id & CTRLMMR_WKUP_JTAGID_VARIANT_MASK) >>
+		  CTRLMMR_WKUP_JTAGID_VARIANT_SHIFT;
+	variant++;
+
+	partno_id = (jtag_id & CTRLMMR_WKUP_JTAGID_PARTNO_MASK) >>
+		    CTRLMMR_WKUP_JTAGID_PARTNO_SHIFT;
+
+	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
+	if (!soc_dev_attr)
+		return -ENOMEM;
+
+	soc_dev_attr->revision = kasprintf(GFP_KERNEL, "SR%x.0", variant);
+	if (!soc_dev_attr->revision) {
+		ret = -ENOMEM;
+		goto err;
+	}
+
+	ret = k3_chipinfo_partno_to_names(partno_id, soc_dev_attr);
+	if (ret) {
+		dev_err(dev, "Unknown SoC JTAGID[0x%08X]\n", jtag_id);
+		ret = -ENODEV;
+		goto err_free_rev;
+	}
+
+	node = of_find_node_by_path("/");
+	of_property_read_string(node, "model", &soc_dev_attr->machine);
+	of_node_put(node);
+
+	soc_dev = soc_device_register(soc_dev_attr);
+	if (IS_ERR(soc_dev)) {
+		ret = PTR_ERR(soc_dev);
+		goto err_free_rev;
+	}
+
+	dev_info(dev, "Family:%s rev:%s JTAGID[0x%08x] Detected\n",
+		 soc_dev_attr->family,
+		 soc_dev_attr->revision, jtag_id);
+
+	return 0;
+
+err_free_rev:
+	kfree(soc_dev_attr->revision);
+err:
+	kfree(soc_dev_attr);
+	return ret;
+}
+
+static const struct of_device_id k3_chipinfo_of_match[] = {
+	{ .compatible = "ti,am654-chipid", },
+	{ /* sentinel */ },
+};
+
+static struct platform_driver k3_chipinfo_driver = {
+	.driver = {
+		.name = "k3-chipinfo",
+		.of_match_table = k3_chipinfo_of_match,
+	},
+	.probe = k3_chipinfo_probe,
+};
+
+static int __init k3_chipinfo_init(void)
+{
+	return platform_driver_register(&k3_chipinfo_driver);
+}
+subsys_initcall(k3_chipinfo_init);
+1 -1
drivers/soc/ti/knav_qmss_queue.c
···
 	return 0;
 }
 
-struct knav_range_ops knav_gp_range_ops = {
+static struct knav_range_ops knav_gp_range_ops = {
 	.set_notify = knav_gp_set_notify,
 	.open_queue = knav_gp_open_queue,
 	.close_queue = knav_gp_close_queue,
+2
drivers/staging/media/Kconfig
···
 
 source "drivers/staging/media/tegra-vde/Kconfig"
 
+source "drivers/staging/media/tegra-video/Kconfig"
+
 source "drivers/staging/media/ipu3/Kconfig"
 
 source "drivers/staging/media/soc_camera/Kconfig"
+1
drivers/staging/media/Makefile
···
 obj-$(CONFIG_VIDEO_OMAP4)		+= omap4iss/
 obj-$(CONFIG_VIDEO_ROCKCHIP_VDEC)	+= rkvdec/
 obj-$(CONFIG_VIDEO_SUNXI)		+= sunxi/
+obj-$(CONFIG_VIDEO_TEGRA)		+= tegra-video/
 obj-$(CONFIG_TEGRA_VDE)			+= tegra-vde/
 obj-$(CONFIG_VIDEO_HANTRO)		+= hantro/
 obj-$(CONFIG_VIDEO_IPU3_IMGU)		+= ipu3/
+12
drivers/staging/media/tegra-video/Kconfig
···
+# SPDX-License-Identifier: GPL-2.0-only
+config VIDEO_TEGRA
+	tristate "NVIDIA Tegra VI driver"
+	depends on TEGRA_HOST1X
+	depends on VIDEO_V4L2
+	select MEDIA_CONTROLLER
+	select VIDEOBUF2_DMA_CONTIG
+	help
+	  Choose this option if you have an NVIDIA Tegra SoC.
+
+	  To compile this driver as a module, choose M here: the module
+	  will be called tegra-video.
+8
drivers/staging/media/tegra-video/Makefile
···
+# SPDX-License-Identifier: GPL-2.0
+tegra-video-objs := \
+		video.o \
+		vi.o \
+		csi.o
+
+tegra-video-$(CONFIG_ARCH_TEGRA_210_SOC) += tegra210.o
+obj-$(CONFIG_VIDEO_TEGRA) += tegra-video.o
+11
drivers/staging/media/tegra-video/TODO
···
+TODO list
+* Currently the driver supports only the Tegra built-in TPG, with direct media
+  links from CSI to VI. Add a kernel config CONFIG_VIDEO_TEGRA_TPG and update
+  the driver to set up TPG vs. sensor media links based on CONFIG_VIDEO_TEGRA_TPG.
+* Add real camera sensor capture support.
+* Add Tegra CSI MIPI pads calibration.
+* Add MIPI clock settle time computation based on the data rate.
+* Add support for ganged mode.
+* Add RAW10 packed video format support to Tegra210 video formats.
+* Add support for suspend and resume.
+* Make sure v4l2-compliance tests pass with all of the above implementations.
+539
drivers/staging/media/tegra-video/csi.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved.
+ */
+
+#include <linux/clk.h>
+#include <linux/clk/tegra.h>
+#include <linux/device.h>
+#include <linux/host1x.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/platform_device.h>
+#include <linux/pm_runtime.h>
+
+#include "csi.h"
+#include "video.h"
+
+static inline struct tegra_csi *
+host1x_client_to_csi(struct host1x_client *client)
+{
+	return container_of(client, struct tegra_csi, client);
+}
+
+static inline struct tegra_csi_channel *to_csi_chan(struct v4l2_subdev *subdev)
+{
+	return container_of(subdev, struct tegra_csi_channel, subdev);
+}
+
+/*
+ * CSI is a separate subdevice which has 6 source pads to generate
+ * test pattern. CSI subdevice pad ops are used only for TPG and
+ * allows below TPG formats.
+ */
+static const struct v4l2_mbus_framefmt tegra_csi_tpg_fmts[] = {
+	{
+		TEGRA_DEF_WIDTH,
+		TEGRA_DEF_HEIGHT,
+		MEDIA_BUS_FMT_SRGGB10_1X10,
+		V4L2_FIELD_NONE,
+		V4L2_COLORSPACE_SRGB
+	},
+	{
+		TEGRA_DEF_WIDTH,
+		TEGRA_DEF_HEIGHT,
+		MEDIA_BUS_FMT_RGB888_1X32_PADHI,
+		V4L2_FIELD_NONE,
+		V4L2_COLORSPACE_SRGB
+	},
+};
+
+static const struct v4l2_frmsize_discrete tegra_csi_tpg_sizes[] = {
+	{ 1280, 720 },
+	{ 1920, 1080 },
+	{ 3840, 2160 },
+};
+
+/*
+ * V4L2 Subdevice Pad Operations
+ */
+static int csi_enum_bus_code(struct v4l2_subdev *subdev,
+			     struct v4l2_subdev_pad_config *cfg,
+			     struct v4l2_subdev_mbus_code_enum *code)
+{
+	if (code->index >= ARRAY_SIZE(tegra_csi_tpg_fmts))
+		return -EINVAL;
+
+	code->code = tegra_csi_tpg_fmts[code->index].code;
+
+	return 0;
+}
+
+static int csi_get_format(struct v4l2_subdev *subdev,
+			  struct v4l2_subdev_pad_config *cfg,
+			  struct v4l2_subdev_format *fmt)
+{
+	struct tegra_csi_channel *csi_chan = to_csi_chan(subdev);
+
+	fmt->format = csi_chan->format;
+
+	return 0;
+}
+
+static int csi_get_frmrate_table_index(struct tegra_csi *csi, u32 code,
+				       u32 width, u32 height)
+{
+	const struct tpg_framerate *frmrate;
+	unsigned int i;
+
+	frmrate = csi->soc->tpg_frmrate_table;
+	for (i = 0; i < csi->soc->tpg_frmrate_table_size; i++) {
+		if (frmrate[i].code == code &&
+		    frmrate[i].frmsize.width == width &&
+		    frmrate[i].frmsize.height == height) {
+			return i;
+		}
+	}
+
+	return -EINVAL;
+}
+
+static void csi_chan_update_blank_intervals(struct tegra_csi_channel *csi_chan,
+					    u32 code, u32 width, u32 height)
+{
+	struct tegra_csi *csi = csi_chan->csi;
+	const struct tpg_framerate *frmrate = csi->soc->tpg_frmrate_table;
+	int index;
+
+	index = csi_get_frmrate_table_index(csi_chan->csi, code,
+					    width, height);
+	if (index >= 0) {
+		csi_chan->h_blank = frmrate[index].h_blank;
+		csi_chan->v_blank = frmrate[index].v_blank;
+		csi_chan->framerate = frmrate[index].framerate;
+	}
+}
+
+static int csi_enum_framesizes(struct v4l2_subdev *subdev,
+			       struct v4l2_subdev_pad_config *cfg,
+			       struct v4l2_subdev_frame_size_enum *fse)
+{
+	unsigned int i;
+
+	if (fse->index >= ARRAY_SIZE(tegra_csi_tpg_sizes))
+		return -EINVAL;
+
+	for (i = 0; i < ARRAY_SIZE(tegra_csi_tpg_fmts); i++)
+		if (fse->code == tegra_csi_tpg_fmts[i].code)
+			break;
+
+	if (i == ARRAY_SIZE(tegra_csi_tpg_fmts))
+		return -EINVAL;
+
+	fse->min_width = tegra_csi_tpg_sizes[fse->index].width;
+	fse->max_width = tegra_csi_tpg_sizes[fse->index].width;
+	fse->min_height = tegra_csi_tpg_sizes[fse->index].height;
+	fse->max_height = tegra_csi_tpg_sizes[fse->index].height;
+
+	return 0;
+}
+
+static int csi_enum_frameintervals(struct v4l2_subdev *subdev,
+				   struct v4l2_subdev_pad_config *cfg,
+				   struct v4l2_subdev_frame_interval_enum *fie)
+{
+	struct tegra_csi_channel *csi_chan = to_csi_chan(subdev);
+	struct tegra_csi *csi = csi_chan->csi;
+	const struct tpg_framerate *frmrate = csi->soc->tpg_frmrate_table;
+	int index;
+
+	/* one framerate per format and resolution */
+	if (fie->index > 0)
+		return -EINVAL;
+
+	index = csi_get_frmrate_table_index(csi_chan->csi, fie->code,
+					    fie->width, fie->height);
+	if (index < 0)
+		return -EINVAL;
+
+	fie->interval.numerator = 1;
+	fie->interval.denominator = frmrate[index].framerate;
+
+	return 0;
+}
+
+static int csi_set_format(struct v4l2_subdev *subdev,
+			  struct v4l2_subdev_pad_config *cfg,
+			  struct v4l2_subdev_format *fmt)
+{
+	struct tegra_csi_channel *csi_chan = to_csi_chan(subdev);
+	struct v4l2_mbus_framefmt *format = &fmt->format;
+	const struct v4l2_frmsize_discrete *sizes;
+	unsigned int i;
+
+	sizes = v4l2_find_nearest_size(tegra_csi_tpg_sizes,
+				       ARRAY_SIZE(tegra_csi_tpg_sizes),
+				       width, height,
+				       format->width, format->height);
+	format->width = sizes->width;
+	format->height = sizes->height;
+
+	for (i = 0; i < ARRAY_SIZE(tegra_csi_tpg_fmts); i++)
+		if (format->code == tegra_csi_tpg_fmts[i].code)
+			break;
+
+	if (i == ARRAY_SIZE(tegra_csi_tpg_fmts))
+		i = 0;
+
+	format->code = tegra_csi_tpg_fmts[i].code;
+	format->field = V4L2_FIELD_NONE;
+
+	if (fmt->which == V4L2_SUBDEV_FORMAT_TRY)
+		return 0;
+
+	/* update blanking intervals from frame rate table and format */
+	csi_chan_update_blank_intervals(csi_chan, format->code,
+					format->width, format->height);
+	csi_chan->format = *format;
+
+	return 0;
+}
+
+/*
+ * V4L2 Subdevice Video Operations
+ */
+static int tegra_csi_g_frame_interval(struct v4l2_subdev *subdev,
+				      struct v4l2_subdev_frame_interval *vfi)
+{
+	struct tegra_csi_channel *csi_chan = to_csi_chan(subdev);
+
+	vfi->interval.numerator = 1;
+	vfi->interval.denominator = csi_chan->framerate;
+
+	return 0;
+}
+
+static int tegra_csi_s_stream(struct v4l2_subdev *subdev, int enable)
+{
+	struct tegra_vi_channel *chan = v4l2_get_subdev_hostdata(subdev);
+	struct tegra_csi_channel *csi_chan = to_csi_chan(subdev);
+	struct tegra_csi *csi = csi_chan->csi;
+	int ret = 0;
+
+	csi_chan->pg_mode = chan->pg_mode;
+	if (enable) {
+		ret = pm_runtime_get_sync(csi->dev);
+		if (ret < 0) {
+			dev_err(csi->dev,
+				"failed to get runtime PM: %d\n", ret);
+			pm_runtime_put_noidle(csi->dev);
+			return ret;
+		}
+
+		ret = csi->ops->csi_start_streaming(csi_chan);
+		if (ret < 0)
+			goto rpm_put;
+
+		return 0;
+	}
+
+	csi->ops->csi_stop_streaming(csi_chan);
+
+rpm_put:
+	pm_runtime_put(csi->dev);
+	return ret;
+}
+
+/*
+ * V4L2 Subdevice Operations
+ */
+static const struct v4l2_subdev_video_ops tegra_csi_video_ops = {
+	.s_stream = tegra_csi_s_stream,
+	.g_frame_interval = tegra_csi_g_frame_interval,
+	.s_frame_interval = tegra_csi_g_frame_interval,
+};
+
+static const struct v4l2_subdev_pad_ops tegra_csi_pad_ops = {
+	.enum_mbus_code = csi_enum_bus_code,
+	.enum_frame_size = csi_enum_framesizes,
+	.enum_frame_interval = csi_enum_frameintervals,
+	.get_fmt = csi_get_format,
+	.set_fmt = csi_set_format,
+};
+
+static const struct v4l2_subdev_ops tegra_csi_ops = {
+	.video = &tegra_csi_video_ops,
+	.pad = &tegra_csi_pad_ops,
+};
+
+static int tegra_csi_tpg_channels_alloc(struct tegra_csi *csi)
+{
+	struct device_node *node = csi->dev->of_node;
+	unsigned int port_num;
+	struct tegra_csi_channel *chan;
+	unsigned int tpg_channels = csi->soc->csi_max_channels;
+
+	/* allocate CSI channel for each CSI x2 ports */
+	for (port_num = 0; port_num < tpg_channels; port_num++) {
+		chan = kzalloc(sizeof(*chan), GFP_KERNEL);
+		if (!chan)
+			return -ENOMEM;
+
+		list_add_tail(&chan->list, &csi->csi_chans);
+		chan->csi = csi;
+		chan->csi_port_num = port_num;
+		chan->numlanes = 2;
+		chan->of_node = node;
+		chan->numpads = 1;
+		chan->pads[0].flags = MEDIA_PAD_FL_SOURCE;
+	}
+
+	return 0;
+}
+
+static int tegra_csi_channel_init(struct tegra_csi_channel *chan)
+{
+	struct tegra_csi *csi = chan->csi;
+	struct v4l2_subdev *subdev;
+	int ret;
+
+	/* initialize the default format */
+	chan->format.code = MEDIA_BUS_FMT_SRGGB10_1X10;
+	chan->format.field = V4L2_FIELD_NONE;
+	chan->format.colorspace = V4L2_COLORSPACE_SRGB;
+	chan->format.width = TEGRA_DEF_WIDTH;
+	chan->format.height = TEGRA_DEF_HEIGHT;
+	csi_chan_update_blank_intervals(chan, chan->format.code,
+					chan->format.width,
+					chan->format.height);
+	/* initialize V4L2 subdevice and media entity */
+	subdev = &chan->subdev;
+	v4l2_subdev_init(subdev, &tegra_csi_ops);
+	subdev->dev = csi->dev;
+	snprintf(subdev->name, V4L2_SUBDEV_NAME_SIZE, "%s-%d", "tpg",
+		 chan->csi_port_num);
+
+	v4l2_set_subdevdata(subdev, chan);
+	subdev->fwnode = of_fwnode_handle(chan->of_node);
+	subdev->entity.function = MEDIA_ENT_F_VID_IF_BRIDGE;
+
+	/* initialize media entity pads */
+	ret = media_entity_pads_init(&subdev->entity, chan->numpads,
+				     chan->pads);
+	if (ret < 0) {
+		dev_err(csi->dev,
+			"failed to initialize media entity: %d\n", ret);
+		subdev->dev = NULL;
+		return ret;
+	}
+
+	return 0;
+}
+
+void tegra_csi_error_recover(struct v4l2_subdev *sd)
+{
+	struct tegra_csi_channel *csi_chan = to_csi_chan(sd);
+	struct tegra_csi *csi = csi_chan->csi;
+
+	/* stop streaming during error recovery */
+	csi->ops->csi_stop_streaming(csi_chan);
+	csi->ops->csi_err_recover(csi_chan);
+	csi->ops->csi_start_streaming(csi_chan);
+}
+
+static int tegra_csi_channels_init(struct tegra_csi *csi)
+{
+	struct tegra_csi_channel *chan;
+	int ret;
+
+	list_for_each_entry(chan, &csi->csi_chans, list) {
+		ret = tegra_csi_channel_init(chan);
+		if (ret) {
+			dev_err(csi->dev,
+				"failed to initialize channel-%d: %d\n",
+				chan->csi_port_num, ret);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void tegra_csi_channels_cleanup(struct tegra_csi *csi)
+{
+	struct v4l2_subdev *subdev;
+	struct tegra_csi_channel *chan, *tmp;
+
+	list_for_each_entry_safe(chan, tmp, &csi->csi_chans, list) {
+		subdev = &chan->subdev;
+		if (subdev->dev)
+			media_entity_cleanup(&subdev->entity);
+		list_del(&chan->list);
+		kfree(chan);
+	}
+}
+
+static int __maybe_unused csi_runtime_suspend(struct device *dev)
+{
+	struct tegra_csi *csi = dev_get_drvdata(dev);
+
+	clk_bulk_disable_unprepare(csi->soc->num_clks, csi->clks);
+
+	return 0;
+}
+
+static int __maybe_unused csi_runtime_resume(struct device *dev)
+{
+	struct tegra_csi *csi = dev_get_drvdata(dev);
+	int ret;
+
+	ret = clk_bulk_prepare_enable(csi->soc->num_clks, csi->clks);
+	if (ret < 0) {
+		dev_err(csi->dev, "failed to enable clocks: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int tegra_csi_init(struct host1x_client *client)
+{
+	struct tegra_csi *csi = host1x_client_to_csi(client);
+	struct tegra_video_device *vid = dev_get_drvdata(client->host);
+	int ret;
+
+	INIT_LIST_HEAD(&csi->csi_chans);
+
+	ret = tegra_csi_tpg_channels_alloc(csi);
+	if (ret < 0) {
+		dev_err(csi->dev,
+			"failed to allocate tpg channels: %d\n", ret);
+		goto cleanup;
+	}
+
+	ret = tegra_csi_channels_init(csi);
+	if (ret < 0)
+		goto cleanup;
+
+	vid->csi = csi;
+
+	return 0;
+
+cleanup:
+	tegra_csi_channels_cleanup(csi);
+	return ret;
+}
+
+static int tegra_csi_exit(struct host1x_client *client)
+{
+	struct tegra_csi *csi = host1x_client_to_csi(client);
+
+	tegra_csi_channels_cleanup(csi);
+
+	return 0;
+}
+
+static const struct host1x_client_ops csi_client_ops = {
+	.init = tegra_csi_init,
+	.exit = tegra_csi_exit,
+};
+
+static int tegra_csi_probe(struct platform_device *pdev)
+{
+	struct tegra_csi *csi;
+	unsigned int i;
+	int ret;
+
+	csi = devm_kzalloc(&pdev->dev, sizeof(*csi), GFP_KERNEL);
+	if (!csi)
+		return -ENOMEM;
+
+	csi->iomem = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(csi->iomem))
+		return PTR_ERR(csi->iomem);
+
+	csi->soc = of_device_get_match_data(&pdev->dev);
+
+	csi->clks = devm_kcalloc(&pdev->dev, csi->soc->num_clks,
+				 sizeof(*csi->clks), GFP_KERNEL);
+	if (!csi->clks)
+		return -ENOMEM;
+
+	for (i = 0; i < csi->soc->num_clks; i++)
+		csi->clks[i].id = csi->soc->clk_names[i];
+
+	ret = devm_clk_bulk_get(&pdev->dev, csi->soc->num_clks, csi->clks);
+	if (ret) {
+		dev_err(&pdev->dev, "failed to get the clocks: %d\n", ret);
+		return ret;
+	}
+
+	if (!pdev->dev.pm_domain) {
+		ret = -ENOENT;
+		dev_warn(&pdev->dev, "PM domain is not attached: %d\n", ret);
+		return ret;
+	}
+
+	csi->dev = &pdev->dev;
+	csi->ops = csi->soc->ops;
+	platform_set_drvdata(pdev, csi);
+	pm_runtime_enable(&pdev->dev);
+
+	/* initialize host1x interface */
+	INIT_LIST_HEAD(&csi->client.list);
+	csi->client.ops = &csi_client_ops;
+	csi->client.dev = &pdev->dev;
+
+	ret = host1x_client_register(&csi->client);
+	if (ret < 0) {
+		dev_err(&pdev->dev,
+			"failed to register host1x client: %d\n", ret);
+		goto rpm_disable;
+	}
+
+	return 0;
+
+rpm_disable:
+	pm_runtime_disable(&pdev->dev);
+	return ret;
+}
+
+static int tegra_csi_remove(struct platform_device *pdev)
+{
+	struct tegra_csi *csi = platform_get_drvdata(pdev);
+	int err;
+
+	err = host1x_client_unregister(&csi->client);
+	if (err < 0) {
+		dev_err(&pdev->dev,
+			"failed to unregister host1x client: %d\n", err);
+		return err;
+	}
+
+	pm_runtime_disable(&pdev->dev);
+
+	return 0;
+}
+
+static const struct of_device_id tegra_csi_of_id_table[] = {
+#if defined(CONFIG_ARCH_TEGRA_210_SOC)
+	{ .compatible = "nvidia,tegra210-csi", .data = &tegra210_csi_soc },
+#endif
+	{ }
+};
+MODULE_DEVICE_TABLE(of, tegra_csi_of_id_table);
+
+static const struct dev_pm_ops tegra_csi_pm_ops = {
+	SET_RUNTIME_PM_OPS(csi_runtime_suspend, csi_runtime_resume, NULL)
+};
+
+struct platform_driver tegra_csi_driver = {
+	.driver = {
+		.name = "tegra-csi",
+		.of_match_table = tegra_csi_of_id_table,
+		.pm = &tegra_csi_pm_ops,
+	},
+	.probe = tegra_csi_probe,
+	.remove = tegra_csi_remove,
+};
+147
drivers/staging/media/tegra-video/csi.h
···
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved.
+ */
+
+#ifndef __TEGRA_CSI_H__
+#define __TEGRA_CSI_H__
+
+#include <media/media-entity.h>
+#include <media/v4l2-subdev.h>
+
+/*
+ * Each CSI brick supports max of 4 lanes that can be used as either
+ * one x4 port using both CILA and CILB partitions of a CSI brick or can
+ * be used as two x2 ports with one x2 from CILA and the other x2 from
+ * CILB.
+ */
+#define CSI_PORTS_PER_BRICK	2
+
+/* each CSI channel can have one sink and one source pads */
+#define TEGRA_CSI_PADS_NUM	2
+
+enum tegra_csi_cil_port {
+	PORT_A = 0,
+	PORT_B,
+};
+
+enum tegra_csi_block {
+	CSI_CIL_AB = 0,
+	CSI_CIL_CD,
+	CSI_CIL_EF,
+};
+
+struct tegra_csi;
+
+/**
+ * struct tegra_csi_channel - Tegra CSI channel
+ *
+ * @list: list head for this entry
+ * @subdev: V4L2 subdevice associated with this channel
+ * @pads: media pads for the subdevice entity
+ * @numpads: number of pads.
+ * @csi: Tegra CSI device structure
+ * @of_node: csi device tree node
+ * @numlanes: number of lanes used per port/channel
+ * @csi_port_num: CSI channel port number
+ * @pg_mode: test pattern generator mode for channel
+ * @format: active format of the channel
+ * @framerate: active framerate for TPG
+ * @h_blank: horizontal blanking for TPG active format
+ * @v_blank: vertical blanking for TPG active format
+ */
+struct tegra_csi_channel {
+	struct list_head list;
+	struct v4l2_subdev subdev;
+	struct media_pad pads[TEGRA_CSI_PADS_NUM];
+	unsigned int numpads;
+	struct tegra_csi *csi;
+	struct device_node *of_node;
+	unsigned int numlanes;
+	u8 csi_port_num;
+	u8 pg_mode;
+	struct v4l2_mbus_framefmt format;
+	unsigned int framerate;
+	unsigned int h_blank;
+	unsigned int v_blank;
+};
+
+/**
+ * struct tpg_framerate - Tegra CSI TPG framerate configuration
+ *
+ * @frmsize: frame resolution
+ * @code: media bus format code
+ * @h_blank: horizontal blanking used for TPG
+ * @v_blank: vertical blanking interval used for TPG
+ * @framerate: framerate achieved with the corresponding blanking intervals,
+ *	format and resolution.
+ */
+struct tpg_framerate {
+	struct v4l2_frmsize_discrete frmsize;
+	u32 code;
+	unsigned int h_blank;
+	unsigned int v_blank;
+	unsigned int framerate;
+};
+
+/**
+ * struct tegra_csi_ops - Tegra CSI operations
+ *
+ * @csi_start_streaming: programs csi hardware to enable streaming.
+ * @csi_stop_streaming: programs csi hardware to disable streaming.
+ * @csi_err_recover: csi hardware block recovery in case of any capture errors
+ *	due to missing source stream or due to improper csi input from
+ *	the external source.
+ */
+struct tegra_csi_ops {
+	int (*csi_start_streaming)(struct tegra_csi_channel *csi_chan);
+	void (*csi_stop_streaming)(struct tegra_csi_channel *csi_chan);
+	void (*csi_err_recover)(struct tegra_csi_channel *csi_chan);
+};
+
+/**
+ * struct tegra_csi_soc - NVIDIA Tegra CSI SoC structure
+ *
+ * @ops: csi hardware operations
+ * @csi_max_channels: supported max streaming channels
+ * @clk_names: csi and cil clock names
+ * @num_clks: total clocks count
+ * @tpg_frmrate_table: csi tpg frame rate table with blanking intervals
+ * @tpg_frmrate_table_size: size of frame rate table
+ */
+struct tegra_csi_soc {
+	const struct tegra_csi_ops *ops;
+	unsigned int csi_max_channels;
+	const char * const *clk_names;
+	unsigned int num_clks;
+	const struct tpg_framerate *tpg_frmrate_table;
+	unsigned int tpg_frmrate_table_size;
+};
+
+/**
+ * struct tegra_csi - NVIDIA Tegra CSI device structure
+ *
+ * @dev: device struct
+ * @client: host1x_client struct
+ * @iomem: register base
+ * @clks: clock for CSI and CIL
+ * @soc: pointer to SoC data structure
+ * @ops: csi operations
+ * @channels: list head for CSI channels
+ */
+struct tegra_csi {
+	struct device *dev;
+	struct host1x_client client;
+	void __iomem *iomem;
+	struct clk_bulk_data *clks;
+	const struct tegra_csi_soc *soc;
+	const struct tegra_csi_ops *ops;
+	struct list_head csi_chans;
+};
+
+#if defined(CONFIG_ARCH_TEGRA_210_SOC)
+extern const struct tegra_csi_soc tegra210_csi_soc;
+#endif
+
+void tegra_csi_error_recover(struct v4l2_subdev *subdev);
+#endif
+978
drivers/staging/media/tegra-video/tegra210.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + /* 7 + * This source file contains Tegra210 supported video formats, 8 + * VI and CSI SoC specific data, operations and registers accessors. 9 + */ 10 + #include <linux/clk.h> 11 + #include <linux/clk/tegra.h> 12 + #include <linux/delay.h> 13 + #include <linux/host1x.h> 14 + #include <linux/kthread.h> 15 + 16 + #include "csi.h" 17 + #include "vi.h" 18 + 19 + #define TEGRA_VI_SYNCPT_WAIT_TIMEOUT msecs_to_jiffies(200) 20 + 21 + /* Tegra210 VI registers */ 22 + #define TEGRA_VI_CFG_VI_INCR_SYNCPT 0x000 23 + #define VI_CFG_VI_INCR_SYNCPT_COND(x) (((x) & 0xff) << 8) 24 + #define VI_CSI_PP_FRAME_START(port) (5 + (port) * 4) 25 + #define VI_CSI_MW_ACK_DONE(port) (7 + (port) * 4) 26 + #define TEGRA_VI_CFG_VI_INCR_SYNCPT_CNTRL 0x004 27 + #define VI_INCR_SYNCPT_NO_STALL BIT(8) 28 + #define TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR 0x008 29 + #define TEGRA_VI_CFG_CG_CTRL 0x0b8 30 + #define VI_CG_2ND_LEVEL_EN 0x1 31 + 32 + /* Tegra210 VI CSI registers */ 33 + #define TEGRA_VI_CSI_SW_RESET 0x000 34 + #define TEGRA_VI_CSI_SINGLE_SHOT 0x004 35 + #define SINGLE_SHOT_CAPTURE 0x1 36 + #define TEGRA_VI_CSI_IMAGE_DEF 0x00c 37 + #define BYPASS_PXL_TRANSFORM_OFFSET 24 38 + #define IMAGE_DEF_FORMAT_OFFSET 16 39 + #define IMAGE_DEF_DEST_MEM 0x1 40 + #define TEGRA_VI_CSI_IMAGE_SIZE 0x018 41 + #define IMAGE_SIZE_HEIGHT_OFFSET 16 42 + #define TEGRA_VI_CSI_IMAGE_SIZE_WC 0x01c 43 + #define TEGRA_VI_CSI_IMAGE_DT 0x020 44 + #define TEGRA_VI_CSI_SURFACE0_OFFSET_MSB 0x024 45 + #define TEGRA_VI_CSI_SURFACE0_OFFSET_LSB 0x028 46 + #define TEGRA_VI_CSI_SURFACE1_OFFSET_MSB 0x02c 47 + #define TEGRA_VI_CSI_SURFACE1_OFFSET_LSB 0x030 48 + #define TEGRA_VI_CSI_SURFACE2_OFFSET_MSB 0x034 49 + #define TEGRA_VI_CSI_SURFACE2_OFFSET_LSB 0x038 50 + #define TEGRA_VI_CSI_SURFACE0_STRIDE 0x054 51 + #define TEGRA_VI_CSI_SURFACE1_STRIDE 0x058 52 + #define TEGRA_VI_CSI_SURFACE2_STRIDE 0x05c 53 
+ #define TEGRA_VI_CSI_SURFACE_HEIGHT0 0x060 54 + #define TEGRA_VI_CSI_ERROR_STATUS 0x084 55 + 56 + /* Tegra210 CSI Pixel Parser registers: Starts from 0x838, offset 0x0 */ 57 + #define TEGRA_CSI_INPUT_STREAM_CONTROL 0x000 58 + #define CSI_SKIP_PACKET_THRESHOLD_OFFSET 16 59 + #define TEGRA_CSI_PIXEL_STREAM_CONTROL0 0x004 60 + #define CSI_PP_PACKET_HEADER_SENT BIT(4) 61 + #define CSI_PP_DATA_IDENTIFIER_ENABLE BIT(5) 62 + #define CSI_PP_WORD_COUNT_SELECT_HEADER BIT(6) 63 + #define CSI_PP_CRC_CHECK_ENABLE BIT(7) 64 + #define CSI_PP_WC_CHECK BIT(8) 65 + #define CSI_PP_OUTPUT_FORMAT_STORE (0x3 << 16) 66 + #define CSI_PPA_PAD_LINE_NOPAD (0x2 << 24) 67 + #define CSI_PP_HEADER_EC_DISABLE (0x1 << 27) 68 + #define CSI_PPA_PAD_FRAME_NOPAD (0x2 << 28) 69 + #define TEGRA_CSI_PIXEL_STREAM_CONTROL1 0x008 70 + #define CSI_PP_TOP_FIELD_FRAME_OFFSET 0 71 + #define CSI_PP_TOP_FIELD_FRAME_MASK_OFFSET 4 72 + #define TEGRA_CSI_PIXEL_STREAM_GAP 0x00c 73 + #define PP_FRAME_MIN_GAP_OFFSET 16 74 + #define TEGRA_CSI_PIXEL_STREAM_PP_COMMAND 0x010 75 + #define CSI_PP_ENABLE 0x1 76 + #define CSI_PP_DISABLE 0x2 77 + #define CSI_PP_RST 0x3 78 + #define CSI_PP_SINGLE_SHOT_ENABLE (0x1 << 2) 79 + #define CSI_PP_START_MARKER_FRAME_MAX_OFFSET 12 80 + #define TEGRA_CSI_PIXEL_STREAM_EXPECTED_FRAME 0x014 81 + #define TEGRA_CSI_PIXEL_PARSER_INTERRUPT_MASK 0x018 82 + #define TEGRA_CSI_PIXEL_PARSER_STATUS 0x01c 83 + 84 + /* Tegra210 CSI PHY registers */ 85 + /* CSI_PHY_CIL_COMMAND_0 offset 0x0d0 from TEGRA_CSI_PIXEL_PARSER_0_BASE */ 86 + #define TEGRA_CSI_PHY_CIL_COMMAND 0x0d0 87 + #define CSI_A_PHY_CIL_NOP 0x0 88 + #define CSI_A_PHY_CIL_ENABLE 0x1 89 + #define CSI_A_PHY_CIL_DISABLE 0x2 90 + #define CSI_A_PHY_CIL_ENABLE_MASK 0x3 91 + #define CSI_B_PHY_CIL_NOP (0x0 << 8) 92 + #define CSI_B_PHY_CIL_ENABLE (0x1 << 8) 93 + #define CSI_B_PHY_CIL_DISABLE (0x2 << 8) 94 + #define CSI_B_PHY_CIL_ENABLE_MASK (0x3 << 8) 95 + 96 + #define TEGRA_CSI_CIL_PAD_CONFIG0 0x000 97 + #define BRICK_CLOCK_A_4X (0x1 << 16) 98 + 
#define BRICK_CLOCK_B_4X (0x2 << 16) 99 + #define TEGRA_CSI_CIL_PAD_CONFIG1 0x004 100 + #define TEGRA_CSI_CIL_PHY_CONTROL 0x008 101 + #define TEGRA_CSI_CIL_INTERRUPT_MASK 0x00c 102 + #define TEGRA_CSI_CIL_STATUS 0x010 103 + #define TEGRA_CSI_CILX_STATUS 0x014 104 + #define TEGRA_CSI_CIL_SW_SENSOR_RESET 0x020 105 + 106 + #define TEGRA_CSI_PATTERN_GENERATOR_CTRL 0x000 107 + #define PG_MODE_OFFSET 2 108 + #define PG_ENABLE 0x1 109 + #define PG_DISABLE 0x0 110 + #define TEGRA_CSI_PG_BLANK 0x004 111 + #define PG_VBLANK_OFFSET 16 112 + #define TEGRA_CSI_PG_PHASE 0x008 113 + #define TEGRA_CSI_PG_RED_FREQ 0x00c 114 + #define PG_RED_VERT_INIT_FREQ_OFFSET 16 115 + #define PG_RED_HOR_INIT_FREQ_OFFSET 0 116 + #define TEGRA_CSI_PG_RED_FREQ_RATE 0x010 117 + #define TEGRA_CSI_PG_GREEN_FREQ 0x014 118 + #define PG_GREEN_VERT_INIT_FREQ_OFFSET 16 119 + #define PG_GREEN_HOR_INIT_FREQ_OFFSET 0 120 + #define TEGRA_CSI_PG_GREEN_FREQ_RATE 0x018 121 + #define TEGRA_CSI_PG_BLUE_FREQ 0x01c 122 + #define PG_BLUE_VERT_INIT_FREQ_OFFSET 16 123 + #define PG_BLUE_HOR_INIT_FREQ_OFFSET 0 124 + #define TEGRA_CSI_PG_BLUE_FREQ_RATE 0x020 125 + #define TEGRA_CSI_PG_AOHDR 0x024 126 + #define TEGRA_CSI_CSI_SW_STATUS_RESET 0x214 127 + #define TEGRA_CSI_CLKEN_OVERRIDE 0x218 128 + 129 + #define TEGRA210_CSI_PORT_OFFSET 0x34 130 + #define TEGRA210_CSI_CIL_OFFSET 0x0f4 131 + #define TEGRA210_CSI_TPG_OFFSET 0x18c 132 + 133 + #define CSI_PP_OFFSET(block) ((block) * 0x800) 134 + #define TEGRA210_VI_CSI_BASE(x) (0x100 + (x) * 0x100) 135 + 136 + /* Tegra210 VI registers accessors */ 137 + static void tegra_vi_write(struct tegra_vi_channel *chan, unsigned int addr, 138 + u32 val) 139 + { 140 + writel_relaxed(val, chan->vi->iomem + addr); 141 + } 142 + 143 + static u32 tegra_vi_read(struct tegra_vi_channel *chan, unsigned int addr) 144 + { 145 + return readl_relaxed(chan->vi->iomem + addr); 146 + } 147 + 148 + /* Tegra210 VI_CSI registers accessors */ 149 + static void vi_csi_write(struct tegra_vi_channel *chan, 
			 unsigned int addr,
			 u32 val)
{
	void __iomem *vi_csi_base;

	vi_csi_base = chan->vi->iomem + TEGRA210_VI_CSI_BASE(chan->portno);

	writel_relaxed(val, vi_csi_base + addr);
}

static u32 vi_csi_read(struct tegra_vi_channel *chan, unsigned int addr)
{
	void __iomem *vi_csi_base;

	vi_csi_base = chan->vi->iomem + TEGRA210_VI_CSI_BASE(chan->portno);

	return readl_relaxed(vi_csi_base + addr);
}

/*
 * Tegra210 VI channel capture operations
 */
static int tegra_channel_capture_setup(struct tegra_vi_channel *chan)
{
	u32 height = chan->format.height;
	u32 width = chan->format.width;
	u32 format = chan->fmtinfo->img_fmt;
	u32 data_type = chan->fmtinfo->img_dt;
	u32 word_count = (width * chan->fmtinfo->bit_width) / 8;

	vi_csi_write(chan, TEGRA_VI_CSI_ERROR_STATUS, 0xffffffff);
	vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_DEF,
		     ((chan->pg_mode ? 0 : 1) << BYPASS_PXL_TRANSFORM_OFFSET) |
		     (format << IMAGE_DEF_FORMAT_OFFSET) |
		     IMAGE_DEF_DEST_MEM);
	vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_DT, data_type);
	vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_SIZE_WC, word_count);
	vi_csi_write(chan, TEGRA_VI_CSI_IMAGE_SIZE,
		     (height << IMAGE_SIZE_HEIGHT_OFFSET) | width);
	return 0;
}

static void tegra_channel_vi_soft_reset(struct tegra_vi_channel *chan)
{
	/* disable clock gating to enable continuous clock */
	tegra_vi_write(chan, TEGRA_VI_CFG_CG_CTRL, 0);
	/*
	 * Soft reset memory client interface, pixel format logic, sensor
	 * control logic, and a shadow copy logic to bring VI to clean state.
	 */
	vi_csi_write(chan, TEGRA_VI_CSI_SW_RESET, 0xf);
	usleep_range(100, 200);
	vi_csi_write(chan, TEGRA_VI_CSI_SW_RESET, 0x0);

	/* enable back VI clock gating */
	tegra_vi_write(chan, TEGRA_VI_CFG_CG_CTRL, VI_CG_2ND_LEVEL_EN);
}

static void tegra_channel_capture_error_recover(struct tegra_vi_channel *chan)
{
	struct v4l2_subdev *subdev;
	u32 val;

	/*
	 * Recover VI and CSI hardware blocks in case of missing frame start
	 * events due to source not streaming or noisy csi inputs from the
	 * external source or many outstanding frame start or MW_ACK_DONE
	 * events which can cause CSI and VI hardware hang.
	 * This helps to have a clean capture for next frame.
	 */
	val = vi_csi_read(chan, TEGRA_VI_CSI_ERROR_STATUS);
	dev_dbg(&chan->video.dev, "TEGRA_VI_CSI_ERROR_STATUS 0x%08x\n", val);
	vi_csi_write(chan, TEGRA_VI_CSI_ERROR_STATUS, val);

	val = tegra_vi_read(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR);
	dev_dbg(&chan->video.dev,
		"TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR 0x%08x\n", val);
	tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR, val);

	/* recover VI by issuing software reset and re-setup for capture */
	tegra_channel_vi_soft_reset(chan);
	tegra_channel_capture_setup(chan);

	/* recover CSI block */
	subdev = tegra_channel_get_remote_subdev(chan);
	tegra_csi_error_recover(subdev);
}

static struct tegra_channel_buffer *
dequeue_buf_done(struct tegra_vi_channel *chan)
{
	struct tegra_channel_buffer *buf = NULL;

	spin_lock(&chan->done_lock);
	if (list_empty(&chan->done)) {
		spin_unlock(&chan->done_lock);
		return NULL;
	}

	buf = list_first_entry(&chan->done,
			       struct tegra_channel_buffer, queue);
	if (buf)
		list_del_init(&buf->queue);
	spin_unlock(&chan->done_lock);

	return buf;
}

static void release_buffer(struct tegra_vi_channel *chan,
			   struct tegra_channel_buffer *buf,
			   enum vb2_buffer_state state)
{
	struct vb2_v4l2_buffer *vb = &buf->buf;

	vb->sequence = chan->sequence++;
	vb->field = V4L2_FIELD_NONE;
	vb->vb2_buf.timestamp = ktime_get_ns();
	vb2_buffer_done(&vb->vb2_buf, state);
}

static int tegra_channel_capture_frame(struct tegra_vi_channel *chan,
				       struct tegra_channel_buffer *buf)
{
	u32 thresh, value, frame_start, mw_ack_done;
	int bytes_per_line = chan->format.bytesperline;
	int err;

	/* program buffer address by using surface 0 */
	vi_csi_write(chan, TEGRA_VI_CSI_SURFACE0_OFFSET_MSB,
		     (u64)buf->addr >> 32);
	vi_csi_write(chan, TEGRA_VI_CSI_SURFACE0_OFFSET_LSB, buf->addr);
	vi_csi_write(chan, TEGRA_VI_CSI_SURFACE0_STRIDE, bytes_per_line);

	/*
	 * Tegra VI block interacts with host1x syncpt for synchronizing
	 * programmed condition of capture state and hardware operation.
	 * Frame start and memory write acknowledge syncpts have their own
	 * FIFOs of depth 2.
	 *
	 * Syncpoint trigger conditions set through the VI_INCR_SYNCPT
	 * register are added to the HW syncpt FIFO. When the HW triggers,
	 * the condition is removed from the FIFO and the counter at the
	 * syncpoint index is incremented by the hardware; software can wait
	 * for the counter to reach the threshold to synchronize frame
	 * capture with the hardware capture events.
	 */

	/* increase channel syncpoint threshold for FRAME_START */
	thresh = host1x_syncpt_incr_max(chan->frame_start_sp, 1);

	/* Program FRAME_START trigger condition syncpt request */
	frame_start = VI_CSI_PP_FRAME_START(chan->portno);
	value = VI_CFG_VI_INCR_SYNCPT_COND(frame_start) |
		host1x_syncpt_id(chan->frame_start_sp);
	tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT, value);

	/* increase channel syncpoint threshold for MW_ACK_DONE */
	buf->mw_ack_sp_thresh = host1x_syncpt_incr_max(chan->mw_ack_sp, 1);

	/* Program MW_ACK_DONE trigger condition syncpt request */
	mw_ack_done = VI_CSI_MW_ACK_DONE(chan->portno);
	value = VI_CFG_VI_INCR_SYNCPT_COND(mw_ack_done) |
		host1x_syncpt_id(chan->mw_ack_sp);
	tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT, value);

	/* enable single shot capture */
	vi_csi_write(chan, TEGRA_VI_CSI_SINGLE_SHOT, SINGLE_SHOT_CAPTURE);

	/* wait for syncpt counter to reach frame start event threshold */
	err = host1x_syncpt_wait(chan->frame_start_sp, thresh,
				 TEGRA_VI_SYNCPT_WAIT_TIMEOUT, &value);
	if (err) {
		dev_err_ratelimited(&chan->video.dev,
				    "frame start syncpt timeout: %d\n", err);
		/* increment syncpoint counter for timed-out events */
		host1x_syncpt_incr(chan->frame_start_sp);
		spin_lock(&chan->sp_incr_lock);
		host1x_syncpt_incr(chan->mw_ack_sp);
		spin_unlock(&chan->sp_incr_lock);
		/* clear errors and recover */
		tegra_channel_capture_error_recover(chan);
		release_buffer(chan, buf, VB2_BUF_STATE_ERROR);
		return err;
	}

	/* move buffer to capture done queue */
	spin_lock(&chan->done_lock);
	list_add_tail(&buf->queue, &chan->done);
	spin_unlock(&chan->done_lock);

	/* wake up kthread for capture done */
	wake_up_interruptible(&chan->done_wait);

	return 0;
}

static void
tegra_channel_capture_done(struct tegra_vi_channel *chan,
			   struct tegra_channel_buffer *buf)
{
	enum vb2_buffer_state state = VB2_BUF_STATE_DONE;
	u32 value;
	int ret;

	/* wait for syncpt counter to reach MW_ACK_DONE event threshold */
	ret = host1x_syncpt_wait(chan->mw_ack_sp, buf->mw_ack_sp_thresh,
				 TEGRA_VI_SYNCPT_WAIT_TIMEOUT, &value);
	if (ret) {
		dev_err_ratelimited(&chan->video.dev,
				    "MW_ACK_DONE syncpt timeout: %d\n", ret);
		state = VB2_BUF_STATE_ERROR;
		/* increment syncpoint counter for timed-out event */
		spin_lock(&chan->sp_incr_lock);
		host1x_syncpt_incr(chan->mw_ack_sp);
		spin_unlock(&chan->sp_incr_lock);
	}

	release_buffer(chan, buf, state);
}

static int chan_capture_kthread_start(void *data)
{
	struct tegra_vi_channel *chan = data;
	struct tegra_channel_buffer *buf;
	int err = 0;

	while (1) {
		/*
		 * Source is not streaming if error is non-zero.
		 * So, do not dequeue buffers on error and let the thread sleep
		 * till kthread stop signal is received.
		 */
		wait_event_interruptible(chan->start_wait,
					 kthread_should_stop() ||
					 (!list_empty(&chan->capture) &&
					  !err));

		if (kthread_should_stop())
			break;

		/* dequeue the buffer and start capture */
		spin_lock(&chan->start_lock);
		if (list_empty(&chan->capture)) {
			spin_unlock(&chan->start_lock);
			continue;
		}

		buf = list_first_entry(&chan->capture,
				       struct tegra_channel_buffer, queue);
		list_del_init(&buf->queue);
		spin_unlock(&chan->start_lock);

		err = tegra_channel_capture_frame(chan, buf);
		if (err)
			vb2_queue_error(&chan->queue);
	}

	return 0;
}

static int chan_capture_kthread_finish(void *data)
{
	struct tegra_vi_channel *chan = data;
	struct tegra_channel_buffer *buf;

	while (1) {
		wait_event_interruptible(chan->done_wait,
					 !list_empty(&chan->done) ||
					 kthread_should_stop());

		/* dequeue buffers and finish capture */
		buf = dequeue_buf_done(chan);
		while (buf) {
			tegra_channel_capture_done(chan, buf);
			buf = dequeue_buf_done(chan);
		}

		if (kthread_should_stop())
			break;
	}

	return 0;
}

static int tegra210_vi_start_streaming(struct vb2_queue *vq, u32 count)
{
	struct tegra_vi_channel *chan = vb2_get_drv_priv(vq);
	struct media_pipeline *pipe = &chan->video.pipe;
	u32 val;
	int ret;

	tegra_vi_write(chan, TEGRA_VI_CFG_CG_CTRL, VI_CG_2ND_LEVEL_EN);

	/* clear errors */
	val = vi_csi_read(chan, TEGRA_VI_CSI_ERROR_STATUS);
	vi_csi_write(chan, TEGRA_VI_CSI_ERROR_STATUS, val);

	val = tegra_vi_read(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR);
	tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_ERROR, val);

	/*
	 * Sync point FIFO full stalls the host interface.
	 * Setting NO_STALL will drop INCR_SYNCPT methods when fifos are
	 * full and the corresponding condition bits in INCR_SYNCPT_ERROR
	 * register will be set.
	 * This allows SW to process error recovery.
	 */
	tegra_vi_write(chan, TEGRA_VI_CFG_VI_INCR_SYNCPT_CNTRL,
		       VI_INCR_SYNCPT_NO_STALL);

	/* start the pipeline */
	ret = media_pipeline_start(&chan->video.entity, pipe);
	if (ret < 0)
		goto error_pipeline_start;

	tegra_channel_capture_setup(chan);
	ret = tegra_channel_set_stream(chan, true);
	if (ret < 0)
		goto error_set_stream;

	chan->sequence = 0;

	/* start kthreads to capture data to buffer and return them */
	chan->kthread_start_capture = kthread_run(chan_capture_kthread_start,
						  chan, "%s:0",
						  chan->video.name);
	if (IS_ERR(chan->kthread_start_capture)) {
		ret = PTR_ERR(chan->kthread_start_capture);
		chan->kthread_start_capture = NULL;
		dev_err(&chan->video.dev,
			"failed to run capture start kthread: %d\n", ret);
		goto error_kthread_start;
	}

	chan->kthread_finish_capture = kthread_run(chan_capture_kthread_finish,
						   chan, "%s:1",
						   chan->video.name);
	if (IS_ERR(chan->kthread_finish_capture)) {
		ret = PTR_ERR(chan->kthread_finish_capture);
		chan->kthread_finish_capture = NULL;
		dev_err(&chan->video.dev,
			"failed to run capture finish kthread: %d\n", ret);
		goto error_kthread_done;
	}

	return 0;

error_kthread_done:
	kthread_stop(chan->kthread_start_capture);
error_kthread_start:
	tegra_channel_set_stream(chan, false);
error_set_stream:
	media_pipeline_stop(&chan->video.entity);
error_pipeline_start:
	tegra_channel_release_buffers(chan, VB2_BUF_STATE_QUEUED);
	return ret;
}

static void tegra210_vi_stop_streaming(struct vb2_queue *vq)
{
	struct tegra_vi_channel
*chan = vb2_get_drv_priv(vq);

	if (chan->kthread_start_capture) {
		kthread_stop(chan->kthread_start_capture);
		chan->kthread_start_capture = NULL;
	}

	if (chan->kthread_finish_capture) {
		kthread_stop(chan->kthread_finish_capture);
		chan->kthread_finish_capture = NULL;
	}

	tegra_channel_release_buffers(chan, VB2_BUF_STATE_ERROR);
	tegra_channel_set_stream(chan, false);
	media_pipeline_stop(&chan->video.entity);
}

/*
 * Tegra210 VI pixel memory format enum.
 * These format enum values get programmed into the corresponding Tegra VI
 * channel register bits.
 */
enum tegra210_image_format {
	TEGRA210_IMAGE_FORMAT_T_L8 = 16,

	TEGRA210_IMAGE_FORMAT_T_R16_I = 32,
	TEGRA210_IMAGE_FORMAT_T_B5G6R5,
	TEGRA210_IMAGE_FORMAT_T_R5G6B5,
	TEGRA210_IMAGE_FORMAT_T_A1B5G5R5,
	TEGRA210_IMAGE_FORMAT_T_A1R5G5B5,
	TEGRA210_IMAGE_FORMAT_T_B5G5R5A1,
	TEGRA210_IMAGE_FORMAT_T_R5G5B5A1,
	TEGRA210_IMAGE_FORMAT_T_A4B4G4R4,
	TEGRA210_IMAGE_FORMAT_T_A4R4G4B4,
	TEGRA210_IMAGE_FORMAT_T_B4G4R4A4,
	TEGRA210_IMAGE_FORMAT_T_R4G4B4A4,

	TEGRA210_IMAGE_FORMAT_T_A8B8G8R8 = 64,
	TEGRA210_IMAGE_FORMAT_T_A8R8G8B8,
	TEGRA210_IMAGE_FORMAT_T_B8G8R8A8,
	TEGRA210_IMAGE_FORMAT_T_R8G8B8A8,
	TEGRA210_IMAGE_FORMAT_T_A2B10G10R10,
	TEGRA210_IMAGE_FORMAT_T_A2R10G10B10,
	TEGRA210_IMAGE_FORMAT_T_B10G10R10A2,
	TEGRA210_IMAGE_FORMAT_T_R10G10B10A2,

	TEGRA210_IMAGE_FORMAT_T_A8Y8U8V8 = 193,
	TEGRA210_IMAGE_FORMAT_T_V8U8Y8A8,

	TEGRA210_IMAGE_FORMAT_T_A2Y10U10V10 = 197,
	TEGRA210_IMAGE_FORMAT_T_V10U10Y10A2,
	TEGRA210_IMAGE_FORMAT_T_Y8_U8__Y8_V8,
	TEGRA210_IMAGE_FORMAT_T_Y8_V8__Y8_U8,
	TEGRA210_IMAGE_FORMAT_T_U8_Y8__V8_Y8,
	TEGRA210_IMAGE_FORMAT_T_V8_Y8__U8_Y8,

	TEGRA210_IMAGE_FORMAT_T_Y8__U8__V8_N444 = 224,
	TEGRA210_IMAGE_FORMAT_T_Y8__U8V8_N444,
	TEGRA210_IMAGE_FORMAT_T_Y8__V8U8_N444,
	TEGRA210_IMAGE_FORMAT_T_Y8__U8__V8_N422,
	TEGRA210_IMAGE_FORMAT_T_Y8__U8V8_N422,
	TEGRA210_IMAGE_FORMAT_T_Y8__V8U8_N422,
	TEGRA210_IMAGE_FORMAT_T_Y8__U8__V8_N420,
	TEGRA210_IMAGE_FORMAT_T_Y8__U8V8_N420,
	TEGRA210_IMAGE_FORMAT_T_Y8__V8U8_N420,
	TEGRA210_IMAGE_FORMAT_T_X2LC10LB10LA10,
	TEGRA210_IMAGE_FORMAT_T_A2R6R6R6R6R6,
};

#define TEGRA210_VIDEO_FMT(DATA_TYPE, BIT_WIDTH, MBUS_CODE, BPP,	\
			   FORMAT, FOURCC)				\
{									\
	TEGRA_IMAGE_DT_##DATA_TYPE,					\
	BIT_WIDTH,							\
	MEDIA_BUS_FMT_##MBUS_CODE,					\
	BPP,								\
	TEGRA210_IMAGE_FORMAT_##FORMAT,					\
	V4L2_PIX_FMT_##FOURCC,						\
}

/* Tegra210 supported video formats */
static const struct tegra_video_format tegra210_video_formats[] = {
	/* RAW 8 */
	TEGRA210_VIDEO_FMT(RAW8, 8, SRGGB8_1X8, 1, T_L8, SRGGB8),
	TEGRA210_VIDEO_FMT(RAW8, 8, SGRBG8_1X8, 1, T_L8, SGRBG8),
	TEGRA210_VIDEO_FMT(RAW8, 8, SGBRG8_1X8, 1, T_L8, SGBRG8),
	TEGRA210_VIDEO_FMT(RAW8, 8, SBGGR8_1X8, 1, T_L8, SBGGR8),
	/* RAW 10 */
	TEGRA210_VIDEO_FMT(RAW10, 10, SRGGB10_1X10, 2, T_R16_I, SRGGB10),
	TEGRA210_VIDEO_FMT(RAW10, 10, SGRBG10_1X10, 2, T_R16_I, SGRBG10),
	TEGRA210_VIDEO_FMT(RAW10, 10, SGBRG10_1X10, 2, T_R16_I, SGBRG10),
	TEGRA210_VIDEO_FMT(RAW10, 10, SBGGR10_1X10, 2, T_R16_I, SBGGR10),
	/* RAW 12 */
	TEGRA210_VIDEO_FMT(RAW12, 12, SRGGB12_1X12, 2, T_R16_I, SRGGB12),
	TEGRA210_VIDEO_FMT(RAW12, 12, SGRBG12_1X12, 2, T_R16_I, SGRBG12),
	TEGRA210_VIDEO_FMT(RAW12, 12, SGBRG12_1X12, 2, T_R16_I, SGBRG12),
	TEGRA210_VIDEO_FMT(RAW12, 12, SBGGR12_1X12, 2, T_R16_I, SBGGR12),
	/* RGB888 */
	TEGRA210_VIDEO_FMT(RGB888, 24, RGB888_1X24, 4, T_A8R8G8B8, RGB24),
	TEGRA210_VIDEO_FMT(RGB888, 24, RGB888_1X32_PADHI, 4, T_A8B8G8R8,
			   XBGR32),
	/* YUV422 */
	TEGRA210_VIDEO_FMT(YUV422_8,
 16, UYVY8_1X16, 2, T_U8_Y8__V8_Y8, UYVY),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, VYUY8_1X16, 2, T_V8_Y8__U8_Y8, VYUY),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, YUYV8_1X16, 2, T_Y8_U8__Y8_V8, YUYV),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, YVYU8_1X16, 2, T_Y8_V8__Y8_U8, YVYU),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, UYVY8_1X16, 1, T_Y8__V8U8_N422, NV16),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, UYVY8_2X8, 2, T_U8_Y8__V8_Y8, UYVY),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, VYUY8_2X8, 2, T_V8_Y8__U8_Y8, VYUY),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, YUYV8_2X8, 2, T_Y8_U8__Y8_V8, YUYV),
	TEGRA210_VIDEO_FMT(YUV422_8, 16, YVYU8_2X8, 2, T_Y8_V8__Y8_U8, YVYU),
};

/* Tegra210 VI operations */
static const struct tegra_vi_ops tegra210_vi_ops = {
	.vi_start_streaming = tegra210_vi_start_streaming,
	.vi_stop_streaming = tegra210_vi_stop_streaming,
};

/* Tegra210 VI SoC data */
const struct tegra_vi_soc tegra210_vi_soc = {
	.video_formats = tegra210_video_formats,
	.nformats = ARRAY_SIZE(tegra210_video_formats),
	.ops = &tegra210_vi_ops,
	.hw_revision = 3,
	.vi_max_channels = 6,
	.vi_max_clk_hz = 499200000,
};

/* Tegra210 CSI PHY registers accessors */
static void csi_write(struct tegra_csi *csi, u8 portno, unsigned int addr,
		      u32 val)
{
	void __iomem *csi_pp_base;

	csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1);

	writel_relaxed(val, csi_pp_base + addr);
}

/* Tegra210 CSI Pixel parser registers accessors */
static void pp_write(struct tegra_csi *csi, u8 portno, u32 addr, u32 val)
{
	void __iomem *csi_pp_base;
	unsigned int offset;

	csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1);
	offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET;

	writel_relaxed(val, csi_pp_base + offset + addr);
}

static u32 pp_read(struct tegra_csi *csi,
u8 portno, u32 addr)
{
	void __iomem *csi_pp_base;
	unsigned int offset;

	csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1);
	offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET;

	return readl_relaxed(csi_pp_base + offset + addr);
}

/* Tegra210 CSI CIL A/B port registers accessors */
static void cil_write(struct tegra_csi *csi, u8 portno, u32 addr, u32 val)
{
	void __iomem *csi_cil_base;
	unsigned int offset;

	csi_cil_base = csi->iomem + CSI_PP_OFFSET(portno >> 1) +
		       TEGRA210_CSI_CIL_OFFSET;
	offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET;

	writel_relaxed(val, csi_cil_base + offset + addr);
}

static u32 cil_read(struct tegra_csi *csi, u8 portno, u32 addr)
{
	void __iomem *csi_cil_base;
	unsigned int offset;

	csi_cil_base = csi->iomem + CSI_PP_OFFSET(portno >> 1) +
		       TEGRA210_CSI_CIL_OFFSET;
	offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET;

	return readl_relaxed(csi_cil_base + offset + addr);
}

/* Tegra210 CSI Test pattern generator registers accessor */
static void tpg_write(struct tegra_csi *csi, u8 portno, unsigned int addr,
		      u32 val)
{
	void __iomem *csi_pp_base;
	unsigned int offset;

	csi_pp_base = csi->iomem + CSI_PP_OFFSET(portno >> 1);
	offset = (portno % CSI_PORTS_PER_BRICK) * TEGRA210_CSI_PORT_OFFSET +
		 TEGRA210_CSI_TPG_OFFSET;

	writel_relaxed(val, csi_pp_base + offset + addr);
}

/*
 * Tegra210 CSI operations
 */
static void tegra210_csi_error_recover(struct tegra_csi_channel *csi_chan)
{
	struct tegra_csi *csi = csi_chan->csi;
	unsigned int portno = csi_chan->csi_port_num;
	u32 val;

	/*
	 * Recover CSI hardware in case of capture errors by issuing
	 * software reset to
 CSICIL sensor, pixel parser, and clear errors
	 * to have clean capture on next streaming.
	 */
	val = pp_read(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS);
	dev_dbg(csi->dev, "TEGRA_CSI_PIXEL_PARSER_STATUS 0x%08x\n", val);

	val = cil_read(csi, portno, TEGRA_CSI_CIL_STATUS);
	dev_dbg(csi->dev, "TEGRA_CSI_CIL_STATUS 0x%08x\n", val);

	val = cil_read(csi, portno, TEGRA_CSI_CILX_STATUS);
	dev_dbg(csi->dev, "TEGRA_CSI_CILX_STATUS 0x%08x\n", val);

	if (csi_chan->numlanes == 4) {
		/* reset CSI CIL sensor */
		cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x1);
		cil_write(csi, portno + 1, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x1);
		/*
		 * SW_STATUS_RESET resets all status bits of PPA, PPB, CILA,
		 * CILB status registers and debug counters.
		 * So, SW_STATUS_RESET can be used only when CSI brick is in
		 * x4 mode.
		 */
		csi_write(csi, portno, TEGRA_CSI_CSI_SW_STATUS_RESET, 0x1);

		/* sleep for 20 clock cycles to drain the FIFO */
		usleep_range(10, 20);

		cil_write(csi, portno + 1, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x0);
		cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x0);
		csi_write(csi, portno, TEGRA_CSI_CSI_SW_STATUS_RESET, 0x0);
	} else {
		/* reset CSICIL sensor */
		cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x1);
		usleep_range(10, 20);
		cil_write(csi, portno, TEGRA_CSI_CIL_SW_SENSOR_RESET, 0x0);

		/* clear the errors */
		pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS,
			 0xffffffff);
		cil_write(csi, portno, TEGRA_CSI_CIL_STATUS, 0xffffffff);
		cil_write(csi, portno, TEGRA_CSI_CILX_STATUS, 0xffffffff);
	}
}

static int tegra210_csi_start_streaming(struct tegra_csi_channel *csi_chan)
{
	struct tegra_csi *csi = csi_chan->csi;
	unsigned int portno = csi_chan->csi_port_num;
	u32 val;

	csi_write(csi, portno,
 TEGRA_CSI_CLKEN_OVERRIDE, 0);

	/* clean up status */
	pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS, 0xffffffff);
	cil_write(csi, portno, TEGRA_CSI_CIL_STATUS, 0xffffffff);
	cil_write(csi, portno, TEGRA_CSI_CILX_STATUS, 0xffffffff);
	cil_write(csi, portno, TEGRA_CSI_CIL_INTERRUPT_MASK, 0x0);

	/* CIL PHY registers setup */
	cil_write(csi, portno, TEGRA_CSI_CIL_PAD_CONFIG0, 0x0);
	cil_write(csi, portno, TEGRA_CSI_CIL_PHY_CONTROL, 0xa);

	/*
	 * The CSI unit provides for connection of up to six cameras in
	 * the system and is organized as three identical instances of
	 * two MIPI support blocks, each with a separate 4-lane
	 * interface that can be configured as a single camera with 4
	 * lanes or as a dual camera with 2 lanes available for each
	 * camera.
	 */
	if (csi_chan->numlanes == 4) {
		cil_write(csi, portno + 1, TEGRA_CSI_CIL_STATUS, 0xffffffff);
		cil_write(csi, portno + 1, TEGRA_CSI_CILX_STATUS, 0xffffffff);
		cil_write(csi, portno + 1, TEGRA_CSI_CIL_INTERRUPT_MASK, 0x0);

		cil_write(csi, portno, TEGRA_CSI_CIL_PAD_CONFIG0,
			  BRICK_CLOCK_A_4X);
		cil_write(csi, portno + 1, TEGRA_CSI_CIL_PAD_CONFIG0, 0x0);
		cil_write(csi, portno + 1, TEGRA_CSI_CIL_INTERRUPT_MASK, 0x0);
		cil_write(csi, portno + 1, TEGRA_CSI_CIL_PHY_CONTROL, 0xa);
		csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND,
			  CSI_A_PHY_CIL_ENABLE | CSI_B_PHY_CIL_ENABLE);
	} else {
		val = ((portno & 1) == PORT_A) ?
		      CSI_A_PHY_CIL_ENABLE | CSI_B_PHY_CIL_NOP :
		      CSI_B_PHY_CIL_ENABLE | CSI_A_PHY_CIL_NOP;
		csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND, val);
	}

	/* CSI pixel parser registers setup */
	pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_PP_COMMAND,
		 (0xf << CSI_PP_START_MARKER_FRAME_MAX_OFFSET) |
		 CSI_PP_SINGLE_SHOT_ENABLE | CSI_PP_RST);
	pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_INTERRUPT_MASK, 0x0);
	pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_CONTROL0,
		 CSI_PP_PACKET_HEADER_SENT |
		 CSI_PP_DATA_IDENTIFIER_ENABLE |
		 CSI_PP_WORD_COUNT_SELECT_HEADER |
		 CSI_PP_CRC_CHECK_ENABLE | CSI_PP_WC_CHECK |
		 CSI_PP_OUTPUT_FORMAT_STORE | CSI_PPA_PAD_LINE_NOPAD |
		 CSI_PP_HEADER_EC_DISABLE | CSI_PPA_PAD_FRAME_NOPAD |
		 (portno & 1));
	pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_CONTROL1,
		 (0x1 << CSI_PP_TOP_FIELD_FRAME_OFFSET) |
		 (0x1 << CSI_PP_TOP_FIELD_FRAME_MASK_OFFSET));
	pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_GAP,
		 0x14 << PP_FRAME_MIN_GAP_OFFSET);
	pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_EXPECTED_FRAME, 0x0);
	pp_write(csi, portno, TEGRA_CSI_INPUT_STREAM_CONTROL,
		 (0x3f << CSI_SKIP_PACKET_THRESHOLD_OFFSET) |
		 (csi_chan->numlanes - 1));

	/* TPG setup */
	if (csi_chan->pg_mode) {
		tpg_write(csi, portno, TEGRA_CSI_PATTERN_GENERATOR_CTRL,
			  ((csi_chan->pg_mode - 1) << PG_MODE_OFFSET) |
			  PG_ENABLE);
		tpg_write(csi, portno, TEGRA_CSI_PG_BLANK,
			  csi_chan->v_blank << PG_VBLANK_OFFSET |
			  csi_chan->h_blank);
		tpg_write(csi, portno, TEGRA_CSI_PG_PHASE, 0x0);
		tpg_write(csi, portno, TEGRA_CSI_PG_RED_FREQ,
			  (0x10 << PG_RED_VERT_INIT_FREQ_OFFSET) |
			  (0x10 << PG_RED_HOR_INIT_FREQ_OFFSET));
		tpg_write(csi, portno, TEGRA_CSI_PG_RED_FREQ_RATE, 0x0);
		tpg_write(csi, portno, TEGRA_CSI_PG_GREEN_FREQ,
			  (0x10 << PG_GREEN_VERT_INIT_FREQ_OFFSET) |
			  (0x10 <<
 PG_GREEN_HOR_INIT_FREQ_OFFSET));
		tpg_write(csi, portno, TEGRA_CSI_PG_GREEN_FREQ_RATE, 0x0);
		tpg_write(csi, portno, TEGRA_CSI_PG_BLUE_FREQ,
			  (0x10 << PG_BLUE_VERT_INIT_FREQ_OFFSET) |
			  (0x10 << PG_BLUE_HOR_INIT_FREQ_OFFSET));
		tpg_write(csi, portno, TEGRA_CSI_PG_BLUE_FREQ_RATE, 0x0);
	}

	pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_PP_COMMAND,
		 (0xf << CSI_PP_START_MARKER_FRAME_MAX_OFFSET) |
		 CSI_PP_SINGLE_SHOT_ENABLE | CSI_PP_ENABLE);

	return 0;
}

static void tegra210_csi_stop_streaming(struct tegra_csi_channel *csi_chan)
{
	struct tegra_csi *csi = csi_chan->csi;
	unsigned int portno = csi_chan->csi_port_num;
	u32 val;

	val = pp_read(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS);

	dev_dbg(csi->dev, "TEGRA_CSI_PIXEL_PARSER_STATUS 0x%08x\n", val);
	pp_write(csi, portno, TEGRA_CSI_PIXEL_PARSER_STATUS, val);

	val = cil_read(csi, portno, TEGRA_CSI_CIL_STATUS);
	dev_dbg(csi->dev, "TEGRA_CSI_CIL_STATUS 0x%08x\n", val);
	cil_write(csi, portno, TEGRA_CSI_CIL_STATUS, val);

	val = cil_read(csi, portno, TEGRA_CSI_CILX_STATUS);
	dev_dbg(csi->dev, "TEGRA_CSI_CILX_STATUS 0x%08x\n", val);
	cil_write(csi, portno, TEGRA_CSI_CILX_STATUS, val);

	pp_write(csi, portno, TEGRA_CSI_PIXEL_STREAM_PP_COMMAND,
		 (0xf << CSI_PP_START_MARKER_FRAME_MAX_OFFSET) |
		 CSI_PP_DISABLE);

	if (csi_chan->pg_mode) {
		tpg_write(csi, portno, TEGRA_CSI_PATTERN_GENERATOR_CTRL,
			  PG_DISABLE);
		return;
	}

	if (csi_chan->numlanes == 4) {
		csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND,
			  CSI_A_PHY_CIL_DISABLE |
			  CSI_B_PHY_CIL_DISABLE);
	} else {
		val = ((portno & 1) == PORT_A) ?
		      CSI_A_PHY_CIL_DISABLE | CSI_B_PHY_CIL_NOP :
		      CSI_B_PHY_CIL_DISABLE | CSI_A_PHY_CIL_NOP;
		csi_write(csi, portno, TEGRA_CSI_PHY_CIL_COMMAND, val);
	}
}

/*
 * Tegra210 CSI TPG frame rate table with horizontal and vertical
 * blanking intervals for corresponding format and resolution.
 * Blanking intervals are tuned values from design team for max TPG
 * clock rate.
 */
static const struct tpg_framerate tegra210_tpg_frmrate_table[] = {
	{
		.frmsize = { 1280, 720 },
		.code = MEDIA_BUS_FMT_SRGGB10_1X10,
		.framerate = 120,
		.h_blank = 512,
		.v_blank = 8,
	},
	{
		.frmsize = { 1920, 1080 },
		.code = MEDIA_BUS_FMT_SRGGB10_1X10,
		.framerate = 60,
		.h_blank = 512,
		.v_blank = 8,
	},
	{
		.frmsize = { 3840, 2160 },
		.code = MEDIA_BUS_FMT_SRGGB10_1X10,
		.framerate = 20,
		.h_blank = 8,
		.v_blank = 8,
	},
	{
		.frmsize = { 1280, 720 },
		.code = MEDIA_BUS_FMT_RGB888_1X32_PADHI,
		.framerate = 60,
		.h_blank = 512,
		.v_blank = 8,
	},
	{
		.frmsize = { 1920, 1080 },
		.code = MEDIA_BUS_FMT_RGB888_1X32_PADHI,
		.framerate = 30,
		.h_blank = 512,
		.v_blank = 8,
	},
	{
		.frmsize = { 3840, 2160 },
		.code = MEDIA_BUS_FMT_RGB888_1X32_PADHI,
		.framerate = 8,
		.h_blank = 8,
		.v_blank = 8,
	},
};

static const char * const tegra210_csi_cil_clks[] = {
	"csi",
	"cilab",
	"cilcd",
	"cile",
	"csi_tpg",
};

/* Tegra210 CSI operations */
static const struct tegra_csi_ops tegra210_csi_ops = {
	.csi_start_streaming = tegra210_csi_start_streaming,
	.csi_stop_streaming = tegra210_csi_stop_streaming,
	.csi_err_recover = tegra210_csi_error_recover,
};

/* Tegra210 CSI SoC data */
const struct tegra_csi_soc tegra210_csi_soc = {
	.ops = &tegra210_csi_ops,
	.csi_max_channels = 6,
	.clk_names = tegra210_csi_cil_clks,
	.num_clks = ARRAY_SIZE(tegra210_csi_cil_clks),
	.tpg_frmrate_table = tegra210_tpg_frmrate_table,
	.tpg_frmrate_table_size = ARRAY_SIZE(tegra210_tpg_frmrate_table),
};
drivers/staging/media/tegra-video/vi.c
// SPDX-License-Identifier: GPL-2.0-only
/*
 * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved.
 */

#include <linux/bitmap.h>
#include <linux/clk.h>
#include <linux/delay.h>
#include <linux/host1x.h>
#include <linux/lcm.h>
#include <linux/list.h>
#include <linux/module.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/platform_device.h>
#include <linux/regulator/consumer.h>
#include <linux/pm_runtime.h>
#include <linux/slab.h>

#include <media/v4l2-event.h>
#include <media/v4l2-fh.h>
#include <media/v4l2-fwnode.h>
#include <media/v4l2-ioctl.h>
#include <media/videobuf2-dma-contig.h>

#include <soc/tegra/pmc.h>

#include "vi.h"
#include "video.h"

#define SURFACE_ALIGN_BYTES	64
#define MAX_CID_CONTROLS	1

static const struct tegra_video_format tegra_default_format = {
	.img_dt = TEGRA_IMAGE_DT_RAW10,
	.bit_width = 10,
	.code = MEDIA_BUS_FMT_SRGGB10_1X10,
	.bpp = 2,
	.img_fmt = TEGRA_IMAGE_FORMAT_DEF,
	.fourcc = V4L2_PIX_FMT_SRGGB10,
};

static inline struct tegra_vi *
host1x_client_to_vi(struct host1x_client *client)
{
	return container_of(client, struct tegra_vi, client);
}

static inline struct tegra_channel_buffer *
to_tegra_channel_buffer(struct vb2_v4l2_buffer *vb)
{
	return container_of(vb, struct tegra_channel_buffer, buf);
}

static int tegra_get_format_idx_by_code(struct tegra_vi *vi,
					unsigned int code)
{
	unsigned int i;

	for (i = 0; i < vi->soc->nformats; ++i) {
		if (vi->soc->video_formats[i].code == code)
			return i;
	}

	return -1;
}

static u32 tegra_get_format_fourcc_by_idx(struct tegra_vi *vi,
					  unsigned int index)
{
	if (index >= vi->soc->nformats)
		return -EINVAL;

	return
vi->soc->video_formats[index].fourcc; 75 + } 76 + 77 + static const struct tegra_video_format * 78 + tegra_get_format_by_fourcc(struct tegra_vi *vi, u32 fourcc) 79 + { 80 + unsigned int i; 81 + 82 + for (i = 0; i < vi->soc->nformats; ++i) { 83 + if (vi->soc->video_formats[i].fourcc == fourcc) 84 + return &vi->soc->video_formats[i]; 85 + } 86 + 87 + return NULL; 88 + } 89 + 90 + /* 91 + * videobuf2 queue operations 92 + */ 93 + static int tegra_channel_queue_setup(struct vb2_queue *vq, 94 + unsigned int *nbuffers, 95 + unsigned int *nplanes, 96 + unsigned int sizes[], 97 + struct device *alloc_devs[]) 98 + { 99 + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); 100 + 101 + if (*nplanes) 102 + return sizes[0] < chan->format.sizeimage ? -EINVAL : 0; 103 + 104 + *nplanes = 1; 105 + sizes[0] = chan->format.sizeimage; 106 + alloc_devs[0] = chan->vi->dev; 107 + 108 + return 0; 109 + } 110 + 111 + static int tegra_channel_buffer_prepare(struct vb2_buffer *vb) 112 + { 113 + struct tegra_vi_channel *chan = vb2_get_drv_priv(vb->vb2_queue); 114 + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); 115 + struct tegra_channel_buffer *buf = to_tegra_channel_buffer(vbuf); 116 + unsigned long size = chan->format.sizeimage; 117 + 118 + if (vb2_plane_size(vb, 0) < size) { 119 + v4l2_err(chan->video.v4l2_dev, 120 + "buffer too small (%lu < %lu)\n", 121 + vb2_plane_size(vb, 0), size); 122 + return -EINVAL; 123 + } 124 + 125 + vb2_set_plane_payload(vb, 0, size); 126 + buf->chan = chan; 127 + buf->addr = vb2_dma_contig_plane_dma_addr(vb, 0); 128 + 129 + return 0; 130 + } 131 + 132 + static void tegra_channel_buffer_queue(struct vb2_buffer *vb) 133 + { 134 + struct tegra_vi_channel *chan = vb2_get_drv_priv(vb->vb2_queue); 135 + struct vb2_v4l2_buffer *vbuf = to_vb2_v4l2_buffer(vb); 136 + struct tegra_channel_buffer *buf = to_tegra_channel_buffer(vbuf); 137 + 138 + /* put buffer into the capture queue */ 139 + spin_lock(&chan->start_lock); 140 + list_add_tail(&buf->queue, 
&chan->capture); 141 + spin_unlock(&chan->start_lock); 142 + 143 + /* wait up kthread for capture */ 144 + wake_up_interruptible(&chan->start_wait); 145 + } 146 + 147 + struct v4l2_subdev * 148 + tegra_channel_get_remote_subdev(struct tegra_vi_channel *chan) 149 + { 150 + struct media_pad *pad; 151 + struct v4l2_subdev *subdev; 152 + struct media_entity *entity; 153 + 154 + pad = media_entity_remote_pad(&chan->pad); 155 + entity = pad->entity; 156 + subdev = media_entity_to_v4l2_subdev(entity); 157 + 158 + return subdev; 159 + } 160 + 161 + int tegra_channel_set_stream(struct tegra_vi_channel *chan, bool on) 162 + { 163 + struct v4l2_subdev *subdev; 164 + int ret; 165 + 166 + /* stream CSI */ 167 + subdev = tegra_channel_get_remote_subdev(chan); 168 + ret = v4l2_subdev_call(subdev, video, s_stream, on); 169 + if (on && ret < 0 && ret != -ENOIOCTLCMD) 170 + return ret; 171 + 172 + return 0; 173 + } 174 + 175 + void tegra_channel_release_buffers(struct tegra_vi_channel *chan, 176 + enum vb2_buffer_state state) 177 + { 178 + struct tegra_channel_buffer *buf, *nbuf; 179 + 180 + spin_lock(&chan->start_lock); 181 + list_for_each_entry_safe(buf, nbuf, &chan->capture, queue) { 182 + vb2_buffer_done(&buf->buf.vb2_buf, state); 183 + list_del(&buf->queue); 184 + } 185 + spin_unlock(&chan->start_lock); 186 + 187 + spin_lock(&chan->done_lock); 188 + list_for_each_entry_safe(buf, nbuf, &chan->done, queue) { 189 + vb2_buffer_done(&buf->buf.vb2_buf, state); 190 + list_del(&buf->queue); 191 + } 192 + spin_unlock(&chan->done_lock); 193 + } 194 + 195 + static int tegra_channel_start_streaming(struct vb2_queue *vq, u32 count) 196 + { 197 + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); 198 + int ret; 199 + 200 + ret = pm_runtime_get_sync(chan->vi->dev); 201 + if (ret < 0) { 202 + dev_err(chan->vi->dev, "failed to get runtime PM: %d\n", ret); 203 + pm_runtime_put_noidle(chan->vi->dev); 204 + return ret; 205 + } 206 + 207 + ret = chan->vi->ops->vi_start_streaming(vq, count); 208 
+ if (ret < 0) 209 + pm_runtime_put(chan->vi->dev); 210 + 211 + return ret; 212 + } 213 + 214 + static void tegra_channel_stop_streaming(struct vb2_queue *vq) 215 + { 216 + struct tegra_vi_channel *chan = vb2_get_drv_priv(vq); 217 + 218 + chan->vi->ops->vi_stop_streaming(vq); 219 + pm_runtime_put(chan->vi->dev); 220 + } 221 + 222 + static const struct vb2_ops tegra_channel_queue_qops = { 223 + .queue_setup = tegra_channel_queue_setup, 224 + .buf_prepare = tegra_channel_buffer_prepare, 225 + .buf_queue = tegra_channel_buffer_queue, 226 + .wait_prepare = vb2_ops_wait_prepare, 227 + .wait_finish = vb2_ops_wait_finish, 228 + .start_streaming = tegra_channel_start_streaming, 229 + .stop_streaming = tegra_channel_stop_streaming, 230 + }; 231 + 232 + /* 233 + * V4L2 ioctl operations 234 + */ 235 + static int tegra_channel_querycap(struct file *file, void *fh, 236 + struct v4l2_capability *cap) 237 + { 238 + struct tegra_vi_channel *chan = video_drvdata(file); 239 + 240 + strscpy(cap->driver, "tegra-video", sizeof(cap->driver)); 241 + strscpy(cap->card, chan->video.name, sizeof(cap->card)); 242 + snprintf(cap->bus_info, sizeof(cap->bus_info), "platform:%s", 243 + dev_name(chan->vi->dev)); 244 + 245 + return 0; 246 + } 247 + 248 + static int tegra_channel_g_parm(struct file *file, void *fh, 249 + struct v4l2_streamparm *a) 250 + { 251 + struct tegra_vi_channel *chan = video_drvdata(file); 252 + struct v4l2_subdev *subdev; 253 + 254 + subdev = tegra_channel_get_remote_subdev(chan); 255 + return v4l2_g_parm_cap(&chan->video, subdev, a); 256 + } 257 + 258 + static int tegra_channel_s_parm(struct file *file, void *fh, 259 + struct v4l2_streamparm *a) 260 + { 261 + struct tegra_vi_channel *chan = video_drvdata(file); 262 + struct v4l2_subdev *subdev; 263 + 264 + subdev = tegra_channel_get_remote_subdev(chan); 265 + return v4l2_s_parm_cap(&chan->video, subdev, a); 266 + } 267 + 268 + static int tegra_channel_enum_framesizes(struct file *file, void *fh, 269 + struct 
v4l2_frmsizeenum *sizes) 270 + { 271 + int ret; 272 + struct tegra_vi_channel *chan = video_drvdata(file); 273 + struct v4l2_subdev *subdev; 274 + const struct tegra_video_format *fmtinfo; 275 + struct v4l2_subdev_frame_size_enum fse = { 276 + .index = sizes->index, 277 + .which = V4L2_SUBDEV_FORMAT_ACTIVE, 278 + }; 279 + 280 + fmtinfo = tegra_get_format_by_fourcc(chan->vi, sizes->pixel_format); 281 + if (!fmtinfo) 282 + return -EINVAL; 283 + 284 + fse.code = fmtinfo->code; 285 + 286 + subdev = tegra_channel_get_remote_subdev(chan); 287 + ret = v4l2_subdev_call(subdev, pad, enum_frame_size, NULL, &fse); 288 + if (ret) 289 + return ret; 290 + 291 + sizes->type = V4L2_FRMSIZE_TYPE_DISCRETE; 292 + sizes->discrete.width = fse.max_width; 293 + sizes->discrete.height = fse.max_height; 294 + 295 + return 0; 296 + } 297 + 298 + static int tegra_channel_enum_frameintervals(struct file *file, void *fh, 299 + struct v4l2_frmivalenum *ivals) 300 + { 301 + int ret; 302 + struct tegra_vi_channel *chan = video_drvdata(file); 303 + struct v4l2_subdev *subdev; 304 + const struct tegra_video_format *fmtinfo; 305 + struct v4l2_subdev_frame_interval_enum fie = { 306 + .index = ivals->index, 307 + .width = ivals->width, 308 + .height = ivals->height, 309 + .which = V4L2_SUBDEV_FORMAT_ACTIVE, 310 + }; 311 + 312 + fmtinfo = tegra_get_format_by_fourcc(chan->vi, ivals->pixel_format); 313 + if (!fmtinfo) 314 + return -EINVAL; 315 + 316 + fie.code = fmtinfo->code; 317 + 318 + subdev = tegra_channel_get_remote_subdev(chan); 319 + ret = v4l2_subdev_call(subdev, pad, enum_frame_interval, NULL, &fie); 320 + if (ret) 321 + return ret; 322 + 323 + ivals->type = V4L2_FRMIVAL_TYPE_DISCRETE; 324 + ivals->discrete.numerator = fie.interval.numerator; 325 + ivals->discrete.denominator = fie.interval.denominator; 326 + 327 + return 0; 328 + } 329 + 330 + static int tegra_channel_enum_format(struct file *file, void *fh, 331 + struct v4l2_fmtdesc *f) 332 + { 333 + struct tegra_vi_channel *chan = 
video_drvdata(file); 334 + unsigned int index = 0, i; 335 + unsigned long *fmts_bitmap = chan->tpg_fmts_bitmap; 336 + 337 + if (f->index >= bitmap_weight(fmts_bitmap, MAX_FORMAT_NUM)) 338 + return -EINVAL; 339 + 340 + for (i = 0; i < f->index + 1; i++, index++) 341 + index = find_next_bit(fmts_bitmap, MAX_FORMAT_NUM, index); 342 + 343 + f->pixelformat = tegra_get_format_fourcc_by_idx(chan->vi, index - 1); 344 + 345 + return 0; 346 + } 347 + 348 + static int tegra_channel_get_format(struct file *file, void *fh, 349 + struct v4l2_format *format) 350 + { 351 + struct tegra_vi_channel *chan = video_drvdata(file); 352 + 353 + format->fmt.pix = chan->format; 354 + 355 + return 0; 356 + } 357 + 358 + static void tegra_channel_fmt_align(struct tegra_vi_channel *chan, 359 + struct v4l2_pix_format *pix, 360 + unsigned int bpp) 361 + { 362 + unsigned int align; 363 + unsigned int min_width; 364 + unsigned int max_width; 365 + unsigned int width; 366 + unsigned int min_bpl; 367 + unsigned int max_bpl; 368 + unsigned int bpl; 369 + 370 + /* 371 + * The transfer alignment requirements are expressed in bytes. Compute 372 + * minimum and maximum values, clamp the requested width and convert 373 + * it back to pixels. Use bytesperline to adjust the width. 374 + */ 375 + align = lcm(SURFACE_ALIGN_BYTES, bpp); 376 + min_width = roundup(TEGRA_MIN_WIDTH, align); 377 + max_width = rounddown(TEGRA_MAX_WIDTH, align); 378 + width = roundup(pix->width * bpp, align); 379 + 380 + pix->width = clamp(width, min_width, max_width) / bpp; 381 + pix->height = clamp(pix->height, TEGRA_MIN_HEIGHT, TEGRA_MAX_HEIGHT); 382 + 383 + /* Clamp the requested bytes per line value. If the maximum bytes per 384 + * line value is zero, the module doesn't support user configurable 385 + * line sizes. Override the requested value with the minimum in that 386 + * case. 
387 + */ 388 + min_bpl = pix->width * bpp; 389 + max_bpl = rounddown(TEGRA_MAX_WIDTH, SURFACE_ALIGN_BYTES); 390 + bpl = roundup(pix->bytesperline, SURFACE_ALIGN_BYTES); 391 + 392 + pix->bytesperline = clamp(bpl, min_bpl, max_bpl); 393 + pix->sizeimage = pix->bytesperline * pix->height; 394 + } 395 + 396 + static int __tegra_channel_try_format(struct tegra_vi_channel *chan, 397 + struct v4l2_pix_format *pix) 398 + { 399 + const struct tegra_video_format *fmtinfo; 400 + struct v4l2_subdev *subdev; 401 + struct v4l2_subdev_format fmt; 402 + struct v4l2_subdev_pad_config *pad_cfg; 403 + 404 + subdev = tegra_channel_get_remote_subdev(chan); 405 + pad_cfg = v4l2_subdev_alloc_pad_config(subdev); 406 + if (!pad_cfg) 407 + return -ENOMEM; 408 + /* 409 + * Retrieve the format information and if requested format isn't 410 + * supported, keep the current format. 411 + */ 412 + fmtinfo = tegra_get_format_by_fourcc(chan->vi, pix->pixelformat); 413 + if (!fmtinfo) { 414 + pix->pixelformat = chan->format.pixelformat; 415 + pix->colorspace = chan->format.colorspace; 416 + fmtinfo = tegra_get_format_by_fourcc(chan->vi, 417 + pix->pixelformat); 418 + } 419 + 420 + pix->field = V4L2_FIELD_NONE; 421 + fmt.which = V4L2_SUBDEV_FORMAT_TRY; 422 + fmt.pad = 0; 423 + v4l2_fill_mbus_format(&fmt.format, pix, fmtinfo->code); 424 + v4l2_subdev_call(subdev, pad, set_fmt, pad_cfg, &fmt); 425 + v4l2_fill_pix_format(pix, &fmt.format); 426 + tegra_channel_fmt_align(chan, pix, fmtinfo->bpp); 427 + 428 + v4l2_subdev_free_pad_config(pad_cfg); 429 + 430 + return 0; 431 + } 432 + 433 + static int tegra_channel_try_format(struct file *file, void *fh, 434 + struct v4l2_format *format) 435 + { 436 + struct tegra_vi_channel *chan = video_drvdata(file); 437 + 438 + return __tegra_channel_try_format(chan, &format->fmt.pix); 439 + } 440 + 441 + static int tegra_channel_set_format(struct file *file, void *fh, 442 + struct v4l2_format *format) 443 + { 444 + struct tegra_vi_channel *chan = video_drvdata(file); 445 
+ const struct tegra_video_format *fmtinfo; 446 + struct v4l2_subdev_format fmt; 447 + struct v4l2_subdev *subdev; 448 + struct v4l2_pix_format *pix = &format->fmt.pix; 449 + int ret; 450 + 451 + if (vb2_is_busy(&chan->queue)) 452 + return -EBUSY; 453 + 454 + /* get supported format by try_fmt */ 455 + ret = __tegra_channel_try_format(chan, pix); 456 + if (ret) 457 + return ret; 458 + 459 + fmtinfo = tegra_get_format_by_fourcc(chan->vi, pix->pixelformat); 460 + 461 + fmt.which = V4L2_SUBDEV_FORMAT_ACTIVE; 462 + fmt.pad = 0; 463 + v4l2_fill_mbus_format(&fmt.format, pix, fmtinfo->code); 464 + subdev = tegra_channel_get_remote_subdev(chan); 465 + v4l2_subdev_call(subdev, pad, set_fmt, NULL, &fmt); 466 + v4l2_fill_pix_format(pix, &fmt.format); 467 + tegra_channel_fmt_align(chan, pix, fmtinfo->bpp); 468 + 469 + chan->format = *pix; 470 + chan->fmtinfo = fmtinfo; 471 + 472 + return 0; 473 + } 474 + 475 + static int tegra_channel_enum_input(struct file *file, void *fh, 476 + struct v4l2_input *inp) 477 + { 478 + /* currently driver supports internal TPG only */ 479 + if (inp->index) 480 + return -EINVAL; 481 + 482 + inp->type = V4L2_INPUT_TYPE_CAMERA; 483 + strscpy(inp->name, "Tegra TPG", sizeof(inp->name)); 484 + 485 + return 0; 486 + } 487 + 488 + static int tegra_channel_g_input(struct file *file, void *priv, 489 + unsigned int *i) 490 + { 491 + *i = 0; 492 + 493 + return 0; 494 + } 495 + 496 + static int tegra_channel_s_input(struct file *file, void *priv, 497 + unsigned int input) 498 + { 499 + if (input > 0) 500 + return -EINVAL; 501 + 502 + return 0; 503 + } 504 + 505 + static const struct v4l2_ioctl_ops tegra_channel_ioctl_ops = { 506 + .vidioc_querycap = tegra_channel_querycap, 507 + .vidioc_g_parm = tegra_channel_g_parm, 508 + .vidioc_s_parm = tegra_channel_s_parm, 509 + .vidioc_enum_framesizes = tegra_channel_enum_framesizes, 510 + .vidioc_enum_frameintervals = tegra_channel_enum_frameintervals, 511 + .vidioc_enum_fmt_vid_cap = tegra_channel_enum_format, 512 + 
.vidioc_g_fmt_vid_cap = tegra_channel_get_format, 513 + .vidioc_s_fmt_vid_cap = tegra_channel_set_format, 514 + .vidioc_try_fmt_vid_cap = tegra_channel_try_format, 515 + .vidioc_enum_input = tegra_channel_enum_input, 516 + .vidioc_g_input = tegra_channel_g_input, 517 + .vidioc_s_input = tegra_channel_s_input, 518 + .vidioc_reqbufs = vb2_ioctl_reqbufs, 519 + .vidioc_prepare_buf = vb2_ioctl_prepare_buf, 520 + .vidioc_querybuf = vb2_ioctl_querybuf, 521 + .vidioc_qbuf = vb2_ioctl_qbuf, 522 + .vidioc_dqbuf = vb2_ioctl_dqbuf, 523 + .vidioc_create_bufs = vb2_ioctl_create_bufs, 524 + .vidioc_expbuf = vb2_ioctl_expbuf, 525 + .vidioc_streamon = vb2_ioctl_streamon, 526 + .vidioc_streamoff = vb2_ioctl_streamoff, 527 + .vidioc_subscribe_event = v4l2_ctrl_subscribe_event, 528 + .vidioc_unsubscribe_event = v4l2_event_unsubscribe, 529 + }; 530 + 531 + /* 532 + * V4L2 file operations 533 + */ 534 + static const struct v4l2_file_operations tegra_channel_fops = { 535 + .owner = THIS_MODULE, 536 + .unlocked_ioctl = video_ioctl2, 537 + .open = v4l2_fh_open, 538 + .release = vb2_fop_release, 539 + .read = vb2_fop_read, 540 + .poll = vb2_fop_poll, 541 + .mmap = vb2_fop_mmap, 542 + }; 543 + 544 + /* 545 + * V4L2 control operations 546 + */ 547 + static int vi_s_ctrl(struct v4l2_ctrl *ctrl) 548 + { 549 + struct tegra_vi_channel *chan = container_of(ctrl->handler, 550 + struct tegra_vi_channel, 551 + ctrl_handler); 552 + 553 + switch (ctrl->id) { 554 + case V4L2_CID_TEST_PATTERN: 555 + /* pattern change takes effect on next stream */ 556 + chan->pg_mode = ctrl->val + 1; 557 + break; 558 + default: 559 + return -EINVAL; 560 + } 561 + 562 + return 0; 563 + } 564 + 565 + static const struct v4l2_ctrl_ops vi_ctrl_ops = { 566 + .s_ctrl = vi_s_ctrl, 567 + }; 568 + 569 + static const char *const vi_pattern_strings[] = { 570 + "Black/White Direct Mode", 571 + "Color Patch Mode", 572 + }; 573 + 574 + static int tegra_channel_setup_ctrl_handler(struct tegra_vi_channel *chan) 575 + { 576 + int ret; 
577 + 578 + /* add test pattern control handler to v4l2 device */ 579 + v4l2_ctrl_new_std_menu_items(&chan->ctrl_handler, &vi_ctrl_ops, 580 + V4L2_CID_TEST_PATTERN, 581 + ARRAY_SIZE(vi_pattern_strings) - 1, 582 + 0, 0, vi_pattern_strings); 583 + if (chan->ctrl_handler.error) { 584 + dev_err(chan->vi->dev, "failed to add TPG ctrl handler: %d\n", 585 + chan->ctrl_handler.error); 586 + v4l2_ctrl_handler_free(&chan->ctrl_handler); 587 + return chan->ctrl_handler.error; 588 + } 589 + 590 + /* setup the controls */ 591 + ret = v4l2_ctrl_handler_setup(&chan->ctrl_handler); 592 + if (ret < 0) { 593 + dev_err(chan->vi->dev, 594 + "failed to setup v4l2 ctrl handler: %d\n", ret); 595 + return ret; 596 + } 597 + 598 + return 0; 599 + } 600 + 601 + /* VI only support 2 formats in TPG mode */ 602 + static void vi_tpg_fmts_bitmap_init(struct tegra_vi_channel *chan) 603 + { 604 + int index; 605 + 606 + bitmap_zero(chan->tpg_fmts_bitmap, MAX_FORMAT_NUM); 607 + 608 + index = tegra_get_format_idx_by_code(chan->vi, 609 + MEDIA_BUS_FMT_SRGGB10_1X10); 610 + bitmap_set(chan->tpg_fmts_bitmap, index, 1); 611 + 612 + index = tegra_get_format_idx_by_code(chan->vi, 613 + MEDIA_BUS_FMT_RGB888_1X32_PADHI); 614 + bitmap_set(chan->tpg_fmts_bitmap, index, 1); 615 + } 616 + 617 + static void tegra_channel_cleanup(struct tegra_vi_channel *chan) 618 + { 619 + v4l2_ctrl_handler_free(&chan->ctrl_handler); 620 + media_entity_cleanup(&chan->video.entity); 621 + host1x_syncpt_free(chan->mw_ack_sp); 622 + host1x_syncpt_free(chan->frame_start_sp); 623 + mutex_destroy(&chan->video_lock); 624 + } 625 + 626 + void tegra_channels_cleanup(struct tegra_vi *vi) 627 + { 628 + struct tegra_vi_channel *chan, *tmp; 629 + 630 + if (!vi) 631 + return; 632 + 633 + list_for_each_entry_safe(chan, tmp, &vi->vi_chans, list) { 634 + tegra_channel_cleanup(chan); 635 + list_del(&chan->list); 636 + kfree(chan); 637 + } 638 + } 639 + 640 + static int tegra_channel_init(struct tegra_vi_channel *chan) 641 + { 642 + struct tegra_vi 
*vi = chan->vi; 643 + struct tegra_video_device *vid = dev_get_drvdata(vi->client.host); 644 + unsigned long flags = HOST1X_SYNCPT_CLIENT_MANAGED; 645 + int ret; 646 + 647 + mutex_init(&chan->video_lock); 648 + INIT_LIST_HEAD(&chan->capture); 649 + INIT_LIST_HEAD(&chan->done); 650 + spin_lock_init(&chan->start_lock); 651 + spin_lock_init(&chan->done_lock); 652 + spin_lock_init(&chan->sp_incr_lock); 653 + init_waitqueue_head(&chan->start_wait); 654 + init_waitqueue_head(&chan->done_wait); 655 + 656 + /* initialize the video format */ 657 + chan->fmtinfo = &tegra_default_format; 658 + chan->format.pixelformat = chan->fmtinfo->fourcc; 659 + chan->format.colorspace = V4L2_COLORSPACE_SRGB; 660 + chan->format.field = V4L2_FIELD_NONE; 661 + chan->format.width = TEGRA_DEF_WIDTH; 662 + chan->format.height = TEGRA_DEF_HEIGHT; 663 + chan->format.bytesperline = TEGRA_DEF_WIDTH * chan->fmtinfo->bpp; 664 + chan->format.sizeimage = chan->format.bytesperline * TEGRA_DEF_HEIGHT; 665 + tegra_channel_fmt_align(chan, &chan->format, chan->fmtinfo->bpp); 666 + 667 + chan->frame_start_sp = host1x_syncpt_request(&vi->client, flags); 668 + if (!chan->frame_start_sp) { 669 + dev_err(vi->dev, "failed to request frame start syncpoint\n"); 670 + return -ENOMEM; 671 + } 672 + 673 + chan->mw_ack_sp = host1x_syncpt_request(&vi->client, flags); 674 + if (!chan->mw_ack_sp) { 675 + dev_err(vi->dev, "failed to request memory ack syncpoint\n"); 676 + ret = -ENOMEM; 677 + goto free_fs_syncpt; 678 + } 679 + 680 + /* initialize the media entity */ 681 + chan->pad.flags = MEDIA_PAD_FL_SINK; 682 + ret = media_entity_pads_init(&chan->video.entity, 1, &chan->pad); 683 + if (ret < 0) { 684 + dev_err(vi->dev, 685 + "failed to initialize media entity: %d\n", ret); 686 + goto free_mw_ack_syncpt; 687 + } 688 + 689 + ret = v4l2_ctrl_handler_init(&chan->ctrl_handler, MAX_CID_CONTROLS); 690 + if (chan->ctrl_handler.error) { 691 + dev_err(vi->dev, 692 + "failed to initialize v4l2 ctrl handler: %d\n", ret); 693 + goto 
cleanup_media; 694 + } 695 + 696 + /* initialize the video_device */ 697 + chan->video.fops = &tegra_channel_fops; 698 + chan->video.v4l2_dev = &vid->v4l2_dev; 699 + chan->video.release = video_device_release_empty; 700 + chan->video.queue = &chan->queue; 701 + snprintf(chan->video.name, sizeof(chan->video.name), "%s-%s-%u", 702 + dev_name(vi->dev), "output", chan->portno); 703 + chan->video.vfl_type = VFL_TYPE_VIDEO; 704 + chan->video.vfl_dir = VFL_DIR_RX; 705 + chan->video.ioctl_ops = &tegra_channel_ioctl_ops; 706 + chan->video.ctrl_handler = &chan->ctrl_handler; 707 + chan->video.lock = &chan->video_lock; 708 + chan->video.device_caps = V4L2_CAP_VIDEO_CAPTURE | 709 + V4L2_CAP_STREAMING | 710 + V4L2_CAP_READWRITE; 711 + video_set_drvdata(&chan->video, chan); 712 + 713 + chan->queue.type = V4L2_BUF_TYPE_VIDEO_CAPTURE; 714 + chan->queue.io_modes = VB2_MMAP | VB2_DMABUF | VB2_READ; 715 + chan->queue.lock = &chan->video_lock; 716 + chan->queue.drv_priv = chan; 717 + chan->queue.buf_struct_size = sizeof(struct tegra_channel_buffer); 718 + chan->queue.ops = &tegra_channel_queue_qops; 719 + chan->queue.mem_ops = &vb2_dma_contig_memops; 720 + chan->queue.timestamp_flags = V4L2_BUF_FLAG_TIMESTAMP_MONOTONIC; 721 + chan->queue.min_buffers_needed = 2; 722 + chan->queue.dev = vi->dev; 723 + ret = vb2_queue_init(&chan->queue); 724 + if (ret < 0) { 725 + dev_err(vi->dev, "failed to initialize vb2 queue: %d\n", ret); 726 + goto free_v4l2_ctrl_hdl; 727 + } 728 + 729 + return 0; 730 + 731 + free_v4l2_ctrl_hdl: 732 + v4l2_ctrl_handler_free(&chan->ctrl_handler); 733 + cleanup_media: 734 + media_entity_cleanup(&chan->video.entity); 735 + free_mw_ack_syncpt: 736 + host1x_syncpt_free(chan->mw_ack_sp); 737 + free_fs_syncpt: 738 + host1x_syncpt_free(chan->frame_start_sp); 739 + return ret; 740 + } 741 + 742 + static int tegra_vi_tpg_channels_alloc(struct tegra_vi *vi) 743 + { 744 + struct tegra_vi_channel *chan; 745 + unsigned int port_num; 746 + unsigned int nchannels = 
vi->soc->vi_max_channels; 747 + 748 + for (port_num = 0; port_num < nchannels; port_num++) { 749 + /* 750 + * Do not use devm_kzalloc as memory is freed immediately 751 + * when device instance is unbound but application might still 752 + * be holding the device node open. Channel memory allocated 753 + * with kzalloc is freed during video device release callback. 754 + */ 755 + chan = kzalloc(sizeof(*chan), GFP_KERNEL); 756 + if (!chan) 757 + return -ENOMEM; 758 + 759 + chan->vi = vi; 760 + chan->portno = port_num; 761 + list_add_tail(&chan->list, &vi->vi_chans); 762 + } 763 + 764 + return 0; 765 + } 766 + 767 + static int tegra_vi_channels_init(struct tegra_vi *vi) 768 + { 769 + struct tegra_vi_channel *chan; 770 + int ret; 771 + 772 + list_for_each_entry(chan, &vi->vi_chans, list) { 773 + ret = tegra_channel_init(chan); 774 + if (ret < 0) { 775 + dev_err(vi->dev, 776 + "failed to initialize channel-%d: %d\n", 777 + chan->portno, ret); 778 + goto cleanup; 779 + } 780 + } 781 + 782 + return 0; 783 + 784 + cleanup: 785 + list_for_each_entry_continue_reverse(chan, &vi->vi_chans, list) 786 + tegra_channel_cleanup(chan); 787 + 788 + return ret; 789 + } 790 + 791 + void tegra_v4l2_nodes_cleanup_tpg(struct tegra_video_device *vid) 792 + { 793 + struct tegra_vi *vi = vid->vi; 794 + struct tegra_csi *csi = vid->csi; 795 + struct tegra_csi_channel *csi_chan; 796 + struct tegra_vi_channel *chan; 797 + 798 + list_for_each_entry(chan, &vi->vi_chans, list) { 799 + video_unregister_device(&chan->video); 800 + mutex_lock(&chan->video_lock); 801 + vb2_queue_release(&chan->queue); 802 + mutex_unlock(&chan->video_lock); 803 + } 804 + 805 + list_for_each_entry(csi_chan, &csi->csi_chans, list) 806 + v4l2_device_unregister_subdev(&csi_chan->subdev); 807 + } 808 + 809 + int tegra_v4l2_nodes_setup_tpg(struct tegra_video_device *vid) 810 + { 811 + struct tegra_vi *vi = vid->vi; 812 + struct tegra_csi *csi = vid->csi; 813 + struct tegra_vi_channel *vi_chan; 814 + struct tegra_csi_channel 
*csi_chan; 815 + u32 link_flags = MEDIA_LNK_FL_ENABLED; 816 + int ret; 817 + 818 + if (!vi || !csi) 819 + return -ENODEV; 820 + 821 + csi_chan = list_first_entry(&csi->csi_chans, 822 + struct tegra_csi_channel, list); 823 + 824 + list_for_each_entry(vi_chan, &vi->vi_chans, list) { 825 + struct media_entity *source = &csi_chan->subdev.entity; 826 + struct media_entity *sink = &vi_chan->video.entity; 827 + struct media_pad *source_pad = csi_chan->pads; 828 + struct media_pad *sink_pad = &vi_chan->pad; 829 + 830 + ret = v4l2_device_register_subdev(&vid->v4l2_dev, 831 + &csi_chan->subdev); 832 + if (ret) { 833 + dev_err(vi->dev, 834 + "failed to register subdev: %d\n", ret); 835 + goto cleanup; 836 + } 837 + 838 + ret = video_register_device(&vi_chan->video, 839 + VFL_TYPE_VIDEO, -1); 840 + if (ret < 0) { 841 + dev_err(vi->dev, 842 + "failed to register video device: %d\n", ret); 843 + goto cleanup; 844 + } 845 + 846 + dev_dbg(vi->dev, "creating %s:%u -> %s:%u link\n", 847 + source->name, source_pad->index, 848 + sink->name, sink_pad->index); 849 + 850 + ret = media_create_pad_link(source, source_pad->index, 851 + sink, sink_pad->index, 852 + link_flags); 853 + if (ret < 0) { 854 + dev_err(vi->dev, 855 + "failed to create %s:%u -> %s:%u link: %d\n", 856 + source->name, source_pad->index, 857 + sink->name, sink_pad->index, ret); 858 + goto cleanup; 859 + } 860 + 861 + ret = tegra_channel_setup_ctrl_handler(vi_chan); 862 + if (ret < 0) 863 + goto cleanup; 864 + 865 + v4l2_set_subdev_hostdata(&csi_chan->subdev, vi_chan); 866 + vi_tpg_fmts_bitmap_init(vi_chan); 867 + csi_chan = list_next_entry(csi_chan, list); 868 + } 869 + 870 + return 0; 871 + 872 + cleanup: 873 + tegra_v4l2_nodes_cleanup_tpg(vid); 874 + return ret; 875 + } 876 + 877 + static int __maybe_unused vi_runtime_resume(struct device *dev) 878 + { 879 + struct tegra_vi *vi = dev_get_drvdata(dev); 880 + int ret; 881 + 882 + ret = regulator_enable(vi->vdd); 883 + if (ret) { 884 + dev_err(dev, "failed to enable VDD 
supply: %d\n", ret); 885 + return ret; 886 + } 887 + 888 + ret = clk_set_rate(vi->clk, vi->soc->vi_max_clk_hz); 889 + if (ret) { 890 + dev_err(dev, "failed to set vi clock rate: %d\n", ret); 891 + goto disable_vdd; 892 + } 893 + 894 + ret = clk_prepare_enable(vi->clk); 895 + if (ret) { 896 + dev_err(dev, "failed to enable vi clock: %d\n", ret); 897 + goto disable_vdd; 898 + } 899 + 900 + return 0; 901 + 902 + disable_vdd: 903 + regulator_disable(vi->vdd); 904 + return ret; 905 + } 906 + 907 + static int __maybe_unused vi_runtime_suspend(struct device *dev) 908 + { 909 + struct tegra_vi *vi = dev_get_drvdata(dev); 910 + 911 + clk_disable_unprepare(vi->clk); 912 + 913 + regulator_disable(vi->vdd); 914 + 915 + return 0; 916 + } 917 + 918 + static int tegra_vi_init(struct host1x_client *client) 919 + { 920 + struct tegra_video_device *vid = dev_get_drvdata(client->host); 921 + struct tegra_vi *vi = host1x_client_to_vi(client); 922 + struct tegra_vi_channel *chan, *tmp; 923 + int ret; 924 + 925 + vid->media_dev.hw_revision = vi->soc->hw_revision; 926 + snprintf(vid->media_dev.bus_info, sizeof(vid->media_dev.bus_info), 927 + "platform:%s", dev_name(vi->dev)); 928 + 929 + INIT_LIST_HEAD(&vi->vi_chans); 930 + 931 + ret = tegra_vi_tpg_channels_alloc(vi); 932 + if (ret < 0) { 933 + dev_err(vi->dev, "failed to allocate tpg channels: %d\n", ret); 934 + goto free_chans; 935 + } 936 + 937 + ret = tegra_vi_channels_init(vi); 938 + if (ret < 0) 939 + goto free_chans; 940 + 941 + vid->vi = vi; 942 + 943 + return 0; 944 + 945 + free_chans: 946 + list_for_each_entry_safe(chan, tmp, &vi->vi_chans, list) { 947 + list_del(&chan->list); 948 + kfree(chan); 949 + } 950 + 951 + return ret; 952 + } 953 + 954 + static int tegra_vi_exit(struct host1x_client *client) 955 + { 956 + /* 957 + * Do not cleanup the channels here as application might still be 958 + * holding video device nodes. 
Channels cleanup will happen during 959 + * v4l2_device release callback which gets called after all video 960 + * device nodes are released. 961 + */ 962 + 963 + return 0; 964 + } 965 + 966 + static const struct host1x_client_ops vi_client_ops = { 967 + .init = tegra_vi_init, 968 + .exit = tegra_vi_exit, 969 + }; 970 + 971 + static int tegra_vi_probe(struct platform_device *pdev) 972 + { 973 + struct tegra_vi *vi; 974 + int ret; 975 + 976 + vi = devm_kzalloc(&pdev->dev, sizeof(*vi), GFP_KERNEL); 977 + if (!vi) 978 + return -ENOMEM; 979 + 980 + vi->iomem = devm_platform_ioremap_resource(pdev, 0); 981 + if (IS_ERR(vi->iomem)) 982 + return PTR_ERR(vi->iomem); 983 + 984 + vi->soc = of_device_get_match_data(&pdev->dev); 985 + 986 + vi->clk = devm_clk_get(&pdev->dev, NULL); 987 + if (IS_ERR(vi->clk)) { 988 + ret = PTR_ERR(vi->clk); 989 + dev_err(&pdev->dev, "failed to get vi clock: %d\n", ret); 990 + return ret; 991 + } 992 + 993 + vi->vdd = devm_regulator_get(&pdev->dev, "avdd-dsi-csi"); 994 + if (IS_ERR(vi->vdd)) { 995 + ret = PTR_ERR(vi->vdd); 996 + dev_err(&pdev->dev, "failed to get VDD supply: %d\n", ret); 997 + return ret; 998 + } 999 + 1000 + if (!pdev->dev.pm_domain) { 1001 + ret = -ENOENT; 1002 + dev_warn(&pdev->dev, "PM domain is not attached: %d\n", ret); 1003 + return ret; 1004 + } 1005 + 1006 + ret = devm_of_platform_populate(&pdev->dev); 1007 + if (ret < 0) { 1008 + dev_err(&pdev->dev, 1009 + "failed to populate vi child device: %d\n", ret); 1010 + return ret; 1011 + } 1012 + 1013 + vi->dev = &pdev->dev; 1014 + vi->ops = vi->soc->ops; 1015 + platform_set_drvdata(pdev, vi); 1016 + pm_runtime_enable(&pdev->dev); 1017 + 1018 + /* initialize host1x interface */ 1019 + INIT_LIST_HEAD(&vi->client.list); 1020 + vi->client.ops = &vi_client_ops; 1021 + vi->client.dev = &pdev->dev; 1022 + 1023 + ret = host1x_client_register(&vi->client); 1024 + if (ret < 0) { 1025 + dev_err(&pdev->dev, 1026 + "failed to register host1x client: %d\n", ret); 1027 + goto rpm_disable; 
1028 + } 1029 + 1030 + return 0; 1031 + 1032 + rpm_disable: 1033 + pm_runtime_disable(&pdev->dev); 1034 + return ret; 1035 + } 1036 + 1037 + static int tegra_vi_remove(struct platform_device *pdev) 1038 + { 1039 + struct tegra_vi *vi = platform_get_drvdata(pdev); 1040 + int err; 1041 + 1042 + err = host1x_client_unregister(&vi->client); 1043 + if (err < 0) { 1044 + dev_err(&pdev->dev, 1045 + "failed to unregister host1x client: %d\n", err); 1046 + return err; 1047 + } 1048 + 1049 + pm_runtime_disable(&pdev->dev); 1050 + 1051 + return 0; 1052 + } 1053 + 1054 + static const struct of_device_id tegra_vi_of_id_table[] = { 1055 + #if defined(CONFIG_ARCH_TEGRA_210_SOC) 1056 + { .compatible = "nvidia,tegra210-vi", .data = &tegra210_vi_soc }, 1057 + #endif 1058 + { } 1059 + }; 1060 + MODULE_DEVICE_TABLE(of, tegra_vi_of_id_table); 1061 + 1062 + static const struct dev_pm_ops tegra_vi_pm_ops = { 1063 + SET_RUNTIME_PM_OPS(vi_runtime_suspend, vi_runtime_resume, NULL) 1064 + }; 1065 + 1066 + struct platform_driver tegra_vi_driver = { 1067 + .driver = { 1068 + .name = "tegra-vi", 1069 + .of_match_table = tegra_vi_of_id_table, 1070 + .pm = &tegra_vi_pm_ops, 1071 + }, 1072 + .probe = tegra_vi_probe, 1073 + .remove = tegra_vi_remove, 1074 + };
drivers/staging/media/tegra-video/vi.h
/* SPDX-License-Identifier: GPL-2.0-only */
/*
 * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved.
 */

#ifndef __TEGRA_VI_H__
#define __TEGRA_VI_H__

#include <linux/host1x.h>
#include <linux/list.h>

#include <linux/mutex.h>
#include <linux/spinlock.h>
#include <linux/wait.h>

#include <media/media-entity.h>
#include <media/v4l2-ctrls.h>
#include <media/v4l2-device.h>
#include <media/v4l2-dev.h>
#include <media/v4l2-subdev.h>
#include <media/videobuf2-v4l2.h>

#define TEGRA_MIN_WIDTH		32U
#define TEGRA_MAX_WIDTH		32768U
#define TEGRA_MIN_HEIGHT	32U
#define TEGRA_MAX_HEIGHT	32768U

#define TEGRA_DEF_WIDTH		1920
#define TEGRA_DEF_HEIGHT	1080
#define TEGRA_IMAGE_FORMAT_DEF	32

#define MAX_FORMAT_NUM		64

enum tegra_vi_pg_mode {
	TEGRA_VI_PG_DISABLED = 0,
	TEGRA_VI_PG_DIRECT,
	TEGRA_VI_PG_PATCH,
};

/**
 * struct tegra_vi_ops - Tegra VI operations
 * @vi_start_streaming: starts the media pipeline and subdevice streaming,
 *		sets up VI for capture, and runs the capture start and capture
 *		finish kthreads that capture frames into buffers and return
 *		them.
 * @vi_stop_streaming: stops the media pipeline and subdevice streaming and
 *		returns any queued buffers.
 */
struct tegra_vi_ops {
	int (*vi_start_streaming)(struct vb2_queue *vq, u32 count);
	void (*vi_stop_streaming)(struct vb2_queue *vq);
};

/**
 * struct tegra_vi_soc - NVIDIA Tegra Video Input SoC structure
 *
 * @video_formats: supported video formats
 * @nformats: total video formats
 * @ops: vi operations
 * @hw_revision: VI hw_revision
 * @vi_max_channels: supported max streaming channels
 * @vi_max_clk_hz: VI clock max frequency
 */
struct tegra_vi_soc {
	const struct tegra_video_format *video_formats;
	const unsigned int nformats;
	const struct tegra_vi_ops *ops;
	u32 hw_revision;
	unsigned int vi_max_channels;
	unsigned int vi_max_clk_hz;
};

/**
 * struct tegra_vi - NVIDIA Tegra Video Input device structure
 *
 * @dev: device struct
 * @client: host1x_client struct
 * @iomem: register base
 * @clk: main clock for VI block
 * @vdd: vdd regulator for VI hardware, normally it is avdd_dsi_csi
 * @soc: pointer to SoC data structure
 * @ops: vi operations
 * @vi_chans: list head for VI channels
 */
struct tegra_vi {
	struct device *dev;
	struct host1x_client client;
	void __iomem *iomem;
	struct clk *clk;
	struct regulator *vdd;
	const struct tegra_vi_soc *soc;
	const struct tegra_vi_ops *ops;
	struct list_head vi_chans;
};

/**
 * struct tegra_vi_channel - Tegra video channel
 *
 * @list: list head for this entry
 * @video: V4L2 video device associated with the video channel
 * @video_lock: protects the @format and @queue fields
 * @pad: media pad for the video device entity
 *
 * @vi: Tegra video input device structure
 * @frame_start_sp: host1x syncpoint pointer to synchronize programmed capture
 *		start condition with hardware frame start events through host1x
 *		syncpoint counters.
 * @mw_ack_sp: host1x syncpoint pointer to synchronize programmed memory write
 *		ack trigger condition with hardware memory write done at end of
 *		frame through host1x syncpoint counters.
 * @sp_incr_lock: protects cpu syncpoint increment.
 *
 * @kthread_start_capture: kthread to start capture of single frame when
 *		vb buffer is available. This thread programs VI CSI hardware
 *		for single frame capture and waits for frame start event from
 *		the hardware. On receiving frame start event, it wakes up
 *		kthread_finish_capture thread to wait for finishing frame data
 *		write to the memory. In case of missing frame start event, this
 *		thread returns buffer back to vb with VB2_BUF_STATE_ERROR.
 * @start_wait: waitqueue for starting frame capture when buffer is available.
 * @kthread_finish_capture: kthread to finish the buffer capture and return it.
 *		This thread is woken up by kthread_start_capture on receiving
 *		frame start event from the hardware and this thread waits for
 *		MW_ACK_DONE event which indicates completion of writing frame
 *		data to the memory. On receiving MW_ACK_DONE event, buffer is
 *		returned back to vb with VB2_BUF_STATE_DONE and in case of
 *		missing MW_ACK_DONE event, buffer is returned back to vb with
 *		VB2_BUF_STATE_ERROR.
 * @done_wait: waitqueue for finishing capture data writes to memory.
129 + * 130 + * @format: active V4L2 pixel format 131 + * @fmtinfo: format information corresponding to the active @format 132 + * @queue: vb2 buffers queue 133 + * @sequence: V4L2 buffers sequence number 134 + * 135 + * @capture: list of queued buffers for capture 136 + * @start_lock: protects the capture queued list 137 + * @done: list of capture done queued buffers 138 + * @done_lock: protects the capture done queue list 139 + * 140 + * @portno: VI channel port number 141 + * 142 + * @ctrl_handler: V4L2 control handler of this video channel 143 + * @tpg_fmts_bitmap: a bitmap for supported TPG formats 144 + * @pg_mode: test pattern generator mode (disabled/direct/patch) 145 + */ 146 + struct tegra_vi_channel { 147 + struct list_head list; 148 + struct video_device video; 149 + /* protects the @format and @queue fields */ 150 + struct mutex video_lock; 151 + struct media_pad pad; 152 + 153 + struct tegra_vi *vi; 154 + struct host1x_syncpt *frame_start_sp; 155 + struct host1x_syncpt *mw_ack_sp; 156 + /* protects the cpu syncpoint increment */ 157 + spinlock_t sp_incr_lock; 158 + 159 + struct task_struct *kthread_start_capture; 160 + wait_queue_head_t start_wait; 161 + struct task_struct *kthread_finish_capture; 162 + wait_queue_head_t done_wait; 163 + 164 + struct v4l2_pix_format format; 165 + const struct tegra_video_format *fmtinfo; 166 + struct vb2_queue queue; 167 + u32 sequence; 168 + 169 + struct list_head capture; 170 + /* protects the capture queued list */ 171 + spinlock_t start_lock; 172 + struct list_head done; 173 + /* protects the capture done queue list */ 174 + spinlock_t done_lock; 175 + 176 + unsigned char portno; 177 + 178 + struct v4l2_ctrl_handler ctrl_handler; 179 + DECLARE_BITMAP(tpg_fmts_bitmap, MAX_FORMAT_NUM); 180 + enum tegra_vi_pg_mode pg_mode; 181 + }; 182 + 183 + /** 184 + * struct tegra_channel_buffer - video channel buffer 185 + * 186 + * @buf: vb2 buffer base object 187 + * @queue: buffer list entry in the channel queued buffers list 
188 + * @chan: channel that uses the buffer 189 + * @addr: Tegra IOVA buffer address for VI output 190 + * @mw_ack_sp_thresh: MW_ACK_DONE syncpoint threshold corresponding 191 + * to the capture buffer. 192 + */ 193 + struct tegra_channel_buffer { 194 + struct vb2_v4l2_buffer buf; 195 + struct list_head queue; 196 + struct tegra_vi_channel *chan; 197 + dma_addr_t addr; 198 + u32 mw_ack_sp_thresh; 199 + }; 200 + 201 + /* 202 + * VI channel input data type enum. 203 + * These data type enum values get programmed into the corresponding Tegra VI 204 + * channel register bits. 205 + */ 206 + enum tegra_image_dt { 207 + TEGRA_IMAGE_DT_YUV420_8 = 24, 208 + TEGRA_IMAGE_DT_YUV420_10, 209 + 210 + TEGRA_IMAGE_DT_YUV420CSPS_8 = 28, 211 + TEGRA_IMAGE_DT_YUV420CSPS_10, 212 + TEGRA_IMAGE_DT_YUV422_8, 213 + TEGRA_IMAGE_DT_YUV422_10, 214 + TEGRA_IMAGE_DT_RGB444, 215 + TEGRA_IMAGE_DT_RGB555, 216 + TEGRA_IMAGE_DT_RGB565, 217 + TEGRA_IMAGE_DT_RGB666, 218 + TEGRA_IMAGE_DT_RGB888, 219 + 220 + TEGRA_IMAGE_DT_RAW6 = 40, 221 + TEGRA_IMAGE_DT_RAW7, 222 + TEGRA_IMAGE_DT_RAW8, 223 + TEGRA_IMAGE_DT_RAW10, 224 + TEGRA_IMAGE_DT_RAW12, 225 + TEGRA_IMAGE_DT_RAW14, 226 + }; 227 + 228 + /** 229 + * struct tegra_video_format - Tegra video format description 230 + * 231 + * @img_dt: image data type 232 + * @bit_width: format width in bits per component 233 + * @code: media bus format code 234 + * @bpp: bytes per pixel (when stored in memory) 235 + * @img_fmt: image format 236 + * @fourcc: V4L2 pixel format FCC identifier 237 + */ 238 + struct tegra_video_format { 239 + enum tegra_image_dt img_dt; 240 + unsigned int bit_width; 241 + unsigned int code; 242 + unsigned int bpp; 243 + u32 img_fmt; 244 + u32 fourcc; 245 + }; 246 + 247 + #if defined(CONFIG_ARCH_TEGRA_210_SOC) 248 + extern const struct tegra_vi_soc tegra210_vi_soc; 249 + #endif 250 + 251 + struct v4l2_subdev * 252 + tegra_channel_get_remote_subdev(struct tegra_vi_channel *chan); 253 + int tegra_channel_set_stream(struct tegra_vi_channel *chan, 
bool on); 254 + void tegra_channel_release_buffers(struct tegra_vi_channel *chan, 255 + enum vb2_buffer_state state); 256 + void tegra_channels_cleanup(struct tegra_vi *vi); 257 + #endif
+155
drivers/staging/media/tegra-video/video.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #include <linux/host1x.h> 7 + #include <linux/module.h> 8 + #include <linux/platform_device.h> 9 + 10 + #include "video.h" 11 + 12 + static void tegra_v4l2_dev_release(struct v4l2_device *v4l2_dev) 13 + { 14 + struct tegra_video_device *vid; 15 + 16 + vid = container_of(v4l2_dev, struct tegra_video_device, v4l2_dev); 17 + 18 + /* cleanup channels here as all video device nodes are released */ 19 + tegra_channels_cleanup(vid->vi); 20 + 21 + v4l2_device_unregister(v4l2_dev); 22 + media_device_unregister(&vid->media_dev); 23 + media_device_cleanup(&vid->media_dev); 24 + kfree(vid); 25 + } 26 + 27 + static int host1x_video_probe(struct host1x_device *dev) 28 + { 29 + struct tegra_video_device *vid; 30 + int ret; 31 + 32 + vid = kzalloc(sizeof(*vid), GFP_KERNEL); 33 + if (!vid) 34 + return -ENOMEM; 35 + 36 + dev_set_drvdata(&dev->dev, vid); 37 + 38 + vid->media_dev.dev = &dev->dev; 39 + strscpy(vid->media_dev.model, "NVIDIA Tegra Video Input Device", 40 + sizeof(vid->media_dev.model)); 41 + 42 + media_device_init(&vid->media_dev); 43 + ret = media_device_register(&vid->media_dev); 44 + if (ret < 0) { 45 + dev_err(&dev->dev, 46 + "failed to register media device: %d\n", ret); 47 + goto cleanup; 48 + } 49 + 50 + vid->v4l2_dev.mdev = &vid->media_dev; 51 + vid->v4l2_dev.release = tegra_v4l2_dev_release; 52 + ret = v4l2_device_register(&dev->dev, &vid->v4l2_dev); 53 + if (ret < 0) { 54 + dev_err(&dev->dev, 55 + "V4L2 device registration failed: %d\n", ret); 56 + goto unregister_media; 57 + } 58 + 59 + ret = host1x_device_init(dev); 60 + if (ret < 0) 61 + goto unregister_v4l2; 62 + 63 + /* 64 + * Both vi and csi channels are available now. 65 + * Register v4l2 nodes and create media links for TPG. 
66 + */ 67 + ret = tegra_v4l2_nodes_setup_tpg(vid); 68 + if (ret < 0) { 69 + dev_err(&dev->dev, 70 + "failed to setup tpg graph: %d\n", ret); 71 + goto device_exit; 72 + } 73 + 74 + return 0; 75 + 76 + device_exit: 77 + host1x_device_exit(dev); 78 + /* vi exit ops does not clean channels, so clean them here */ 79 + tegra_channels_cleanup(vid->vi); 80 + unregister_v4l2: 81 + v4l2_device_unregister(&vid->v4l2_dev); 82 + unregister_media: 83 + media_device_unregister(&vid->media_dev); 84 + cleanup: 85 + media_device_cleanup(&vid->media_dev); 86 + kfree(vid); 87 + return ret; 88 + } 89 + 90 + static int host1x_video_remove(struct host1x_device *dev) 91 + { 92 + struct tegra_video_device *vid = dev_get_drvdata(&dev->dev); 93 + 94 + tegra_v4l2_nodes_cleanup_tpg(vid); 95 + 96 + host1x_device_exit(dev); 97 + 98 + /* This calls v4l2_dev release callback on last reference */ 99 + v4l2_device_put(&vid->v4l2_dev); 100 + 101 + return 0; 102 + } 103 + 104 + static const struct of_device_id host1x_video_subdevs[] = { 105 + #if defined(CONFIG_ARCH_TEGRA_210_SOC) 106 + { .compatible = "nvidia,tegra210-csi", }, 107 + { .compatible = "nvidia,tegra210-vi", }, 108 + #endif 109 + { } 110 + }; 111 + 112 + static struct host1x_driver host1x_video_driver = { 113 + .driver = { 114 + .name = "tegra-video", 115 + }, 116 + .probe = host1x_video_probe, 117 + .remove = host1x_video_remove, 118 + .subdevs = host1x_video_subdevs, 119 + }; 120 + 121 + static struct platform_driver * const drivers[] = { 122 + &tegra_csi_driver, 123 + &tegra_vi_driver, 124 + }; 125 + 126 + static int __init host1x_video_init(void) 127 + { 128 + int err; 129 + 130 + err = host1x_driver_register(&host1x_video_driver); 131 + if (err < 0) 132 + return err; 133 + 134 + err = platform_register_drivers(drivers, ARRAY_SIZE(drivers)); 135 + if (err < 0) 136 + goto unregister_host1x; 137 + 138 + return 0; 139 + 140 + unregister_host1x: 141 + host1x_driver_unregister(&host1x_video_driver); 142 + return err; 143 + } 144 + 
module_init(host1x_video_init); 145 + 146 + static void __exit host1x_video_exit(void) 147 + { 148 + platform_unregister_drivers(drivers, ARRAY_SIZE(drivers)); 149 + host1x_driver_unregister(&host1x_video_driver); 150 + } 151 + module_exit(host1x_video_exit); 152 + 153 + MODULE_AUTHOR("Sowjanya Komatineni <skomatineni@nvidia.com>"); 154 + MODULE_DESCRIPTION("NVIDIA Tegra Host1x Video driver"); 155 + MODULE_LICENSE("GPL v2");
+29
drivers/staging/media/tegra-video/video.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (C) 2020 NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #ifndef __TEGRA_VIDEO_H__ 7 + #define __TEGRA_VIDEO_H__ 8 + 9 + #include <linux/host1x.h> 10 + 11 + #include <media/media-device.h> 12 + #include <media/v4l2-device.h> 13 + 14 + #include "vi.h" 15 + #include "csi.h" 16 + 17 + struct tegra_video_device { 18 + struct v4l2_device v4l2_dev; 19 + struct media_device media_dev; 20 + struct tegra_vi *vi; 21 + struct tegra_csi *csi; 22 + }; 23 + 24 + int tegra_v4l2_nodes_setup_tpg(struct tegra_video_device *vid); 25 + void tegra_v4l2_nodes_cleanup_tpg(struct tegra_video_device *vid); 26 + 27 + extern struct platform_driver tegra_vi_driver; 28 + extern struct platform_driver tegra_csi_driver; 29 + #endif
+2
drivers/tee/Kconfig
··· 3 3 config TEE 4 4 tristate "Trusted Execution Environment support" 5 5 depends on HAVE_ARM_SMCCC || COMPILE_TEST || CPU_SUP_AMD 6 + select CRYPTO 7 + select CRYPTO_SHA1 6 8 select DMA_SHARED_BUFFER 7 9 select GENERIC_ALLOCATOR 8 10 help
+5 -1
drivers/tee/optee/call.c
··· 233 233 msg_arg->params[1].attr = OPTEE_MSG_ATTR_TYPE_VALUE_INPUT | 234 234 OPTEE_MSG_ATTR_META; 235 235 memcpy(&msg_arg->params[0].u.value, arg->uuid, sizeof(arg->uuid)); 236 - memcpy(&msg_arg->params[1].u.value, arg->uuid, sizeof(arg->clnt_uuid)); 237 236 msg_arg->params[1].u.value.c = arg->clnt_login; 237 + 238 + rc = tee_session_calc_client_uuid((uuid_t *)&msg_arg->params[1].u.value, 239 + arg->clnt_login, arg->clnt_uuid); 240 + if (rc) 241 + goto out; 238 242 239 243 rc = optee_to_msg_param(msg_arg->params + 2, arg->num_params, param); 240 244 if (rc)
+159
drivers/tee/tee_core.c
··· 6 6 #define pr_fmt(fmt) "%s: " fmt, __func__ 7 7 8 8 #include <linux/cdev.h> 9 + #include <linux/cred.h> 9 10 #include <linux/fs.h> 10 11 #include <linux/idr.h> 11 12 #include <linux/module.h> 12 13 #include <linux/slab.h> 13 14 #include <linux/tee_drv.h> 14 15 #include <linux/uaccess.h> 16 + #include <crypto/hash.h> 17 + #include <crypto/sha.h> 15 18 #include "tee_private.h" 16 19 17 20 #define TEE_NUM_DEVICES 32 18 21 19 22 #define TEE_IOCTL_PARAM_SIZE(x) (sizeof(struct tee_param) * (x)) 23 + 24 + #define TEE_UUID_NS_NAME_SIZE 128 25 + 26 + /* 27 + * TEE Client UUID name space identifier (UUIDv4) 28 + * 29 + * The value here is a random UUID allocated as the name space identifier for 30 + * forming Client UUIDs for the TEE environment using the UUIDv5 scheme. 31 + */ 32 + static const uuid_t tee_client_uuid_ns = UUID_INIT(0x58ac9ca0, 0x2086, 0x4683, 33 + 0xa1, 0xb8, 0xec, 0x4b, 34 + 0xc0, 0x8e, 0x01, 0xb6); 20 35 21 36 /* 22 37 * Unprivileged devices in the lower half range and privileged devices in ··· 124 109 teedev_close_context(filp->private_data); 125 110 return 0; 126 111 } 112 + 113 + /** 114 + * uuid_v5() - Calculate UUIDv5 115 + * @uuid: Resulting UUID 116 + * @ns: Name space ID for UUIDv5 function 117 + * @name: Name for UUIDv5 function 118 + * @size: Size of name 119 + * 120 + * UUIDv5 is specified in RFC 4122. 121 + * 122 + * This implements section (for SHA-1): 123 + * 4.3. 
Algorithm for Creating a Name-Based UUID 124 + */ 125 + static int uuid_v5(uuid_t *uuid, const uuid_t *ns, const void *name, 126 + size_t size) 127 + { 128 + unsigned char hash[SHA1_DIGEST_SIZE]; 129 + struct crypto_shash *shash = NULL; 130 + struct shash_desc *desc = NULL; 131 + int rc; 132 + 133 + shash = crypto_alloc_shash("sha1", 0, 0); 134 + if (IS_ERR(shash)) { 135 + rc = PTR_ERR(shash); 136 + pr_err("shash(sha1) allocation failed\n"); 137 + return rc; 138 + } 139 + 140 + desc = kzalloc(sizeof(*desc) + crypto_shash_descsize(shash), 141 + GFP_KERNEL); 142 + if (!desc) { 143 + rc = -ENOMEM; 144 + goto out_free_shash; 145 + } 146 + 147 + desc->tfm = shash; 148 + 149 + rc = crypto_shash_init(desc); 150 + if (rc < 0) 151 + goto out_free_desc; 152 + 153 + rc = crypto_shash_update(desc, (const u8 *)ns, sizeof(*ns)); 154 + if (rc < 0) 155 + goto out_free_desc; 156 + 157 + rc = crypto_shash_update(desc, (const u8 *)name, size); 158 + if (rc < 0) 159 + goto out_free_desc; 160 + 161 + rc = crypto_shash_final(desc, hash); 162 + if (rc < 0) 163 + goto out_free_desc; 164 + 165 + memcpy(uuid->b, hash, UUID_SIZE); 166 + 167 + /* Tag for version 5 */ 168 + uuid->b[6] = (hash[6] & 0x0F) | 0x50; 169 + uuid->b[8] = (hash[8] & 0x3F) | 0x80; 170 + 171 + out_free_desc: 172 + kfree(desc); 173 + 174 + out_free_shash: 175 + crypto_free_shash(shash); 176 + return rc; 177 + } 178 + 179 + int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method, 180 + const u8 connection_data[TEE_IOCTL_UUID_LEN]) 181 + { 182 + gid_t ns_grp = (gid_t)-1; 183 + kgid_t grp = INVALID_GID; 184 + char *name = NULL; 185 + int name_len; 186 + int rc; 187 + 188 + if (connection_method == TEE_IOCTL_LOGIN_PUBLIC) { 189 + /* Nil UUID to be passed to TEE environment */ 190 + uuid_copy(uuid, &uuid_null); 191 + return 0; 192 + } 193 + 194 + /* 195 + * In Linux environment client UUID is based on UUIDv5. 
196 + * 197 + * Determine client UUID with following semantics for 'name': 198 + * 199 + * For TEEC_LOGIN_USER: 200 + * uid=<uid> 201 + * 202 + * For TEEC_LOGIN_GROUP: 203 + * gid=<gid> 204 + * 205 + */ 206 + 207 + name = kzalloc(TEE_UUID_NS_NAME_SIZE, GFP_KERNEL); 208 + if (!name) 209 + return -ENOMEM; 210 + 211 + switch (connection_method) { 212 + case TEE_IOCTL_LOGIN_USER: 213 + name_len = snprintf(name, TEE_UUID_NS_NAME_SIZE, "uid=%x", 214 + current_euid().val); 215 + if (name_len >= TEE_UUID_NS_NAME_SIZE) { 216 + rc = -E2BIG; 217 + goto out_free_name; 218 + } 219 + break; 220 + 221 + case TEE_IOCTL_LOGIN_GROUP: 222 + memcpy(&ns_grp, connection_data, sizeof(gid_t)); 223 + grp = make_kgid(current_user_ns(), ns_grp); 224 + if (!gid_valid(grp) || !in_egroup_p(grp)) { 225 + rc = -EPERM; 226 + goto out_free_name; 227 + } 228 + 229 + name_len = snprintf(name, TEE_UUID_NS_NAME_SIZE, "gid=%x", 230 + grp.val); 231 + if (name_len >= TEE_UUID_NS_NAME_SIZE) { 232 + rc = -E2BIG; 233 + goto out_free_name; 234 + } 235 + break; 236 + 237 + default: 238 + rc = -EINVAL; 239 + goto out_free_name; 240 + } 241 + 242 + rc = uuid_v5(uuid, &tee_client_uuid_ns, name, name_len); 243 + out_free_name: 244 + kfree(name); 245 + 246 + return rc; 247 + } 248 + EXPORT_SYMBOL_GPL(tee_session_calc_client_uuid); 127 249 128 250 static int tee_ioctl_version(struct tee_context *ctx, 129 251 struct tee_ioctl_version_data __user *uvers) ··· 483 331 rc = params_from_user(ctx, params, arg.num_params, uparams); 484 332 if (rc) 485 333 goto out; 334 + } 335 + 336 + if (arg.clnt_login >= TEE_IOCTL_LOGIN_REE_KERNEL_MIN && 337 + arg.clnt_login <= TEE_IOCTL_LOGIN_REE_KERNEL_MAX) { 338 + pr_debug("login method not allowed for user-space client\n"); 339 + rc = -EPERM; 340 + goto out; 486 341 } 487 342 488 343 rc = ctx->teedev->desc->ops->open_session(ctx, &arg, params);
+26 -5
drivers/tee/tee_shm.c
··· 9 9 #include <linux/sched.h> 10 10 #include <linux/slab.h> 11 11 #include <linux/tee_drv.h> 12 + #include <linux/uio.h> 12 13 #include "tee_private.h" 13 14 14 15 static void tee_shm_release(struct tee_shm *shm) ··· 162 161 } 163 162 } 164 163 165 - if (ctx) 166 - teedev_ctx_get(ctx); 164 + teedev_ctx_get(ctx); 167 165 168 166 return shm; 169 167 err_rem: ··· 185 185 size_t length, u32 flags) 186 186 { 187 187 struct tee_device *teedev = ctx->teedev; 188 - const u32 req_flags = TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED; 188 + const u32 req_user_flags = TEE_SHM_DMA_BUF | TEE_SHM_USER_MAPPED; 189 + const u32 req_kernel_flags = TEE_SHM_DMA_BUF | TEE_SHM_KERNEL_MAPPED; 189 190 struct tee_shm *shm; 190 191 void *ret; 191 192 int rc; 192 193 int num_pages; 193 194 unsigned long start; 194 195 195 - if (flags != req_flags) 196 + if (flags != req_user_flags && flags != req_kernel_flags) 196 197 return ERR_PTR(-ENOTSUPP); 197 198 198 199 if (!tee_device_get(teedev)) ··· 227 226 goto err; 228 227 } 229 228 230 - rc = get_user_pages_fast(start, num_pages, FOLL_WRITE, shm->pages); 229 + if (flags & TEE_SHM_USER_MAPPED) { 230 + rc = get_user_pages_fast(start, num_pages, FOLL_WRITE, 231 + shm->pages); 232 + } else { 233 + struct kvec *kiov; 234 + int i; 235 + 236 + kiov = kcalloc(num_pages, sizeof(*kiov), GFP_KERNEL); 237 + if (!kiov) { 238 + ret = ERR_PTR(-ENOMEM); 239 + goto err; 240 + } 241 + 242 + for (i = 0; i < num_pages; i++) { 243 + kiov[i].iov_base = (void *)(start + i * PAGE_SIZE); 244 + kiov[i].iov_len = PAGE_SIZE; 245 + } 246 + 247 + rc = get_kernel_pages(kiov, num_pages, 0, shm->pages); 248 + kfree(kiov); 249 + } 231 250 if (rc > 0) 232 251 shm->num_pages = rc; 233 252 if (rc != num_pages) {
+1 -1
drivers/thermal/imx_sc_thermal.c
··· 3 3 * Copyright 2018-2020 NXP. 4 4 */ 5 5 6 + #include <dt-bindings/firmware/imx/rsrc.h> 6 7 #include <linux/err.h> 7 8 #include <linux/firmware/imx/sci.h> 8 - #include <linux/firmware/imx/types.h> 9 9 #include <linux/module.h> 10 10 #include <linux/of.h> 11 11 #include <linux/of_device.h>
+44 -20
include/asm-generic/io.h
··· 448 448 #define IO_SPACE_LIMIT 0xffff 449 449 #endif 450 450 451 - #include <linux/logic_pio.h> 452 - 453 451 /* 454 452 * {in,out}{b,w,l}() access little endian I/O. {in,out}{b,w,l}_p() can be 455 453 * implemented on hardware that needs an additional delay for I/O accesses to 456 454 * take effect. 457 455 */ 458 456 459 - #ifndef inb 460 - #define inb inb 461 - static inline u8 inb(unsigned long addr) 457 + #if !defined(inb) && !defined(_inb) 458 + #define _inb _inb 459 + static inline u8 _inb(unsigned long addr) 462 460 { 463 461 u8 val; 464 462 ··· 467 469 } 468 470 #endif 469 471 470 - #ifndef inw 471 - #define inw inw 472 - static inline u16 inw(unsigned long addr) 472 + #if !defined(inw) && !defined(_inw) 473 + #define _inw _inw 474 + static inline u16 _inw(unsigned long addr) 473 475 { 474 476 u16 val; 475 477 ··· 480 482 } 481 483 #endif 482 484 483 - #ifndef inl 484 - #define inl inl 485 - static inline u32 inl(unsigned long addr) 485 + #if !defined(inl) && !defined(_inl) 486 + #define _inl _inl 487 + static inline u32 _inl(unsigned long addr) 486 488 { 487 489 u32 val; 488 490 ··· 493 495 } 494 496 #endif 495 497 496 - #ifndef outb 497 - #define outb outb 498 - static inline void outb(u8 value, unsigned long addr) 498 + #if !defined(outb) && !defined(_outb) 499 + #define _outb _outb 500 + static inline void _outb(u8 value, unsigned long addr) 499 501 { 500 502 __io_pbw(); 501 503 __raw_writeb(value, PCI_IOBASE + addr); ··· 503 505 } 504 506 #endif 505 507 506 - #ifndef outw 507 - #define outw outw 508 - static inline void outw(u16 value, unsigned long addr) 508 + #if !defined(outw) && !defined(_outw) 509 + #define _outw _outw 510 + static inline void _outw(u16 value, unsigned long addr) 509 511 { 510 512 __io_pbw(); 511 513 __raw_writew(cpu_to_le16(value), PCI_IOBASE + addr); ··· 513 515 } 514 516 #endif 515 517 516 - #ifndef outl 517 - #define outl outl 518 - static inline void outl(u32 value, unsigned long addr) 518 + #if !defined(outl) && 
!defined(_outl) 519 + #define _outl _outl 520 + static inline void _outl(u32 value, unsigned long addr) 519 521 { 520 522 __io_pbw(); 521 523 __raw_writel(cpu_to_le32(value), PCI_IOBASE + addr); 522 524 __io_paw(); 523 525 } 526 + #endif 527 + 528 + #include <linux/logic_pio.h> 529 + 530 + #ifndef inb 531 + #define inb _inb 532 + #endif 533 + 534 + #ifndef inw 535 + #define inw _inw 536 + #endif 537 + 538 + #ifndef inl 539 + #define inl _inl 540 + #endif 541 + 542 + #ifndef outb 543 + #define outb _outb 544 + #endif 545 + 546 + #ifndef outw 547 + #define outw _outw 548 + #endif 549 + 550 + #ifndef outl 551 + #define outl _outl 524 552 #endif 525 553 526 554 #ifndef inb_p
+42
include/dt-bindings/clock/r8a7742-cpg-mssr.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ 2 + * 3 + * Copyright (C) 2020 Renesas Electronics Corp. 4 + */ 5 + #ifndef __DT_BINDINGS_CLOCK_R8A7742_CPG_MSSR_H__ 6 + #define __DT_BINDINGS_CLOCK_R8A7742_CPG_MSSR_H__ 7 + 8 + #include <dt-bindings/clock/renesas-cpg-mssr.h> 9 + 10 + /* r8a7742 CPG Core Clocks */ 11 + #define R8A7742_CLK_Z 0 12 + #define R8A7742_CLK_Z2 1 13 + #define R8A7742_CLK_ZG 2 14 + #define R8A7742_CLK_ZTR 3 15 + #define R8A7742_CLK_ZTRD2 4 16 + #define R8A7742_CLK_ZT 5 17 + #define R8A7742_CLK_ZX 6 18 + #define R8A7742_CLK_ZS 7 19 + #define R8A7742_CLK_HP 8 20 + #define R8A7742_CLK_B 9 21 + #define R8A7742_CLK_LB 10 22 + #define R8A7742_CLK_P 11 23 + #define R8A7742_CLK_CL 12 24 + #define R8A7742_CLK_M2 13 25 + #define R8A7742_CLK_ZB3 14 26 + #define R8A7742_CLK_ZB3D2 15 27 + #define R8A7742_CLK_DDR 16 28 + #define R8A7742_CLK_SDH 17 29 + #define R8A7742_CLK_SD0 18 30 + #define R8A7742_CLK_SD1 19 31 + #define R8A7742_CLK_SD2 20 32 + #define R8A7742_CLK_SD3 21 33 + #define R8A7742_CLK_MMC0 22 34 + #define R8A7742_CLK_MMC1 23 35 + #define R8A7742_CLK_MP 24 36 + #define R8A7742_CLK_QSPI 25 37 + #define R8A7742_CLK_CP 26 38 + #define R8A7742_CLK_RCAN 27 39 + #define R8A7742_CLK_R 28 40 + #define R8A7742_CLK_OSC 29 41 + 42 + #endif /* __DT_BINDINGS_CLOCK_R8A7742_CPG_MSSR_H__ */
+7 -7
include/dt-bindings/clock/tegra114-car.h
··· 272 272 #define TEGRA114_CLK_AUDIO3 242 273 273 #define TEGRA114_CLK_AUDIO4 243 274 274 #define TEGRA114_CLK_SPDIF 244 275 - #define TEGRA114_CLK_CLK_OUT_1 245 276 - #define TEGRA114_CLK_CLK_OUT_2 246 277 - #define TEGRA114_CLK_CLK_OUT_3 247 278 - #define TEGRA114_CLK_BLINK 248 275 + /* 245 */ 276 + /* 246 */ 277 + /* 247 */ 278 + /* 248 */ 279 279 #define TEGRA114_CLK_OSC 249 280 280 /* 250 */ 281 281 /* 251 */ ··· 335 335 #define TEGRA114_CLK_AUDIO3_MUX 303 336 336 #define TEGRA114_CLK_AUDIO4_MUX 304 337 337 #define TEGRA114_CLK_SPDIF_MUX 305 338 - #define TEGRA114_CLK_CLK_OUT_1_MUX 306 339 - #define TEGRA114_CLK_CLK_OUT_2_MUX 307 340 - #define TEGRA114_CLK_CLK_OUT_3_MUX 308 338 + /* 306 */ 339 + /* 307 */ 340 + /* 308 */ 341 341 #define TEGRA114_CLK_DSIA_MUX 309 342 342 #define TEGRA114_CLK_DSIB_MUX 310 343 343 #define TEGRA114_CLK_XUSB_SS_DIV2 311
+7 -7
include/dt-bindings/clock/tegra124-car-common.h
··· 271 271 #define TEGRA124_CLK_AUDIO3 242 272 272 #define TEGRA124_CLK_AUDIO4 243 273 273 #define TEGRA124_CLK_SPDIF 244 274 - #define TEGRA124_CLK_CLK_OUT_1 245 275 - #define TEGRA124_CLK_CLK_OUT_2 246 276 - #define TEGRA124_CLK_CLK_OUT_3 247 277 - #define TEGRA124_CLK_BLINK 248 274 + /* 245 */ 275 + /* 246 */ 276 + /* 247 */ 277 + /* 248 */ 278 278 #define TEGRA124_CLK_OSC 249 279 279 /* 250 */ 280 280 /* 251 */ ··· 334 334 #define TEGRA124_CLK_AUDIO3_MUX 303 335 335 #define TEGRA124_CLK_AUDIO4_MUX 304 336 336 #define TEGRA124_CLK_SPDIF_MUX 305 337 - #define TEGRA124_CLK_CLK_OUT_1_MUX 306 338 - #define TEGRA124_CLK_CLK_OUT_2_MUX 307 339 - #define TEGRA124_CLK_CLK_OUT_3_MUX 308 337 + /* 306 */ 338 + /* 307 */ 339 + /* 308 */ 340 340 /* 309 */ 341 341 /* 310 */ 342 342 #define TEGRA124_CLK_SOR0_LVDS 311 /* deprecated */
+1 -1
include/dt-bindings/clock/tegra20-car.h
··· 131 131 #define TEGRA20_CLK_CCLK 108 132 132 #define TEGRA20_CLK_HCLK 109 133 133 #define TEGRA20_CLK_PCLK 110 134 - #define TEGRA20_CLK_BLINK 111 134 + /* 111 */ 135 135 #define TEGRA20_CLK_PLL_A 112 136 136 #define TEGRA20_CLK_PLL_A_OUT0 113 137 137 #define TEGRA20_CLK_PLL_C 114
+8 -8
include/dt-bindings/clock/tegra210-car.h
··· 306 306 #define TEGRA210_CLK_AUDIO3 274 307 307 #define TEGRA210_CLK_AUDIO4 275 308 308 #define TEGRA210_CLK_SPDIF 276 309 - #define TEGRA210_CLK_CLK_OUT_1 277 310 - #define TEGRA210_CLK_CLK_OUT_2 278 311 - #define TEGRA210_CLK_CLK_OUT_3 279 312 - #define TEGRA210_CLK_BLINK 280 309 + /* 277 */ 310 + /* 278 */ 311 + /* 279 */ 312 + /* 280 */ 313 313 #define TEGRA210_CLK_SOR0_LVDS 281 /* deprecated */ 314 314 #define TEGRA210_CLK_SOR0_OUT 281 315 315 #define TEGRA210_CLK_SOR1_OUT 282 ··· 358 358 #define TEGRA210_CLK_PLL_A_OUT0_OUT_ADSP 324 359 359 /* 325 */ 360 360 #define TEGRA210_CLK_OSC 326 361 - /* 327 */ 361 + #define TEGRA210_CLK_CSI_TPG 327 362 362 /* 328 */ 363 363 /* 329 */ 364 364 /* 330 */ ··· 388 388 #define TEGRA210_CLK_AUDIO3_MUX 353 389 389 #define TEGRA210_CLK_AUDIO4_MUX 354 390 390 #define TEGRA210_CLK_SPDIF_MUX 355 391 - #define TEGRA210_CLK_CLK_OUT_1_MUX 356 392 - #define TEGRA210_CLK_CLK_OUT_2_MUX 357 393 - #define TEGRA210_CLK_CLK_OUT_3_MUX 358 391 + /* 356 */ 392 + /* 357 */ 393 + /* 358 */ 394 394 #define TEGRA210_CLK_DSIA_MUX 359 395 395 #define TEGRA210_CLK_DSIB_MUX 360 396 396 /* 361 */
+7 -7
include/dt-bindings/clock/tegra30-car.h
··· 232 232 #define TEGRA30_CLK_AUDIO3 204 233 233 #define TEGRA30_CLK_AUDIO4 205 234 234 #define TEGRA30_CLK_SPDIF 206 235 - #define TEGRA30_CLK_CLK_OUT_1 207 /* (extern1) */ 236 - #define TEGRA30_CLK_CLK_OUT_2 208 /* (extern2) */ 237 - #define TEGRA30_CLK_CLK_OUT_3 209 /* (extern3) */ 235 + /* 207 */ 236 + /* 208 */ 237 + /* 209 */ 238 238 #define TEGRA30_CLK_SCLK 210 239 - #define TEGRA30_CLK_BLINK 211 239 + /* 211 */ 240 240 #define TEGRA30_CLK_CCLK_G 212 241 241 #define TEGRA30_CLK_CCLK_LP 213 242 242 #define TEGRA30_CLK_TWD 214 ··· 262 262 /* 297 */ 263 263 /* 298 */ 264 264 /* 299 */ 265 - #define TEGRA30_CLK_CLK_OUT_1_MUX 300 266 - #define TEGRA30_CLK_CLK_OUT_2_MUX 301 267 - #define TEGRA30_CLK_CLK_OUT_3_MUX 302 265 + /* 300 */ 266 + /* 301 */ 267 + /* 302 */ 268 268 #define TEGRA30_CLK_AUDIO0_MUX 303 269 269 #define TEGRA30_CLK_AUDIO1_MUX 304 270 270 #define TEGRA30_CLK_AUDIO2_MUX 305
+84
include/dt-bindings/firmware/imx/rsrc.h
··· 547 547 #define IMX_SC_R_ATTESTATION 545 548 548 #define IMX_SC_R_LAST 546 549 549 550 + /* 551 + * Defines for SC PM CLK 552 + */ 553 + #define IMX_SC_PM_CLK_SLV_BUS 0 /* Slave bus clock */ 554 + #define IMX_SC_PM_CLK_MST_BUS 1 /* Master bus clock */ 555 + #define IMX_SC_PM_CLK_PER 2 /* Peripheral clock */ 556 + #define IMX_SC_PM_CLK_PHY 3 /* Phy clock */ 557 + #define IMX_SC_PM_CLK_MISC 4 /* Misc clock */ 558 + #define IMX_SC_PM_CLK_MISC0 0 /* Misc 0 clock */ 559 + #define IMX_SC_PM_CLK_MISC1 1 /* Misc 1 clock */ 560 + #define IMX_SC_PM_CLK_MISC2 2 /* Misc 2 clock */ 561 + #define IMX_SC_PM_CLK_MISC3 3 /* Misc 3 clock */ 562 + #define IMX_SC_PM_CLK_MISC4 4 /* Misc 4 clock */ 563 + #define IMX_SC_PM_CLK_CPU 2 /* CPU clock */ 564 + #define IMX_SC_PM_CLK_PLL 4 /* PLL */ 565 + #define IMX_SC_PM_CLK_BYPASS 4 /* Bypass clock */ 566 + 567 + /* 568 + * Defines for SC CONTROL 569 + */ 570 + #define IMX_SC_C_TEMP 0 571 + #define IMX_SC_C_TEMP_HI 1 572 + #define IMX_SC_C_TEMP_LOW 2 573 + #define IMX_SC_C_PXL_LINK_MST1_ADDR 3 574 + #define IMX_SC_C_PXL_LINK_MST2_ADDR 4 575 + #define IMX_SC_C_PXL_LINK_MST_ENB 5 576 + #define IMX_SC_C_PXL_LINK_MST1_ENB 6 577 + #define IMX_SC_C_PXL_LINK_MST2_ENB 7 578 + #define IMX_SC_C_PXL_LINK_SLV1_ADDR 8 579 + #define IMX_SC_C_PXL_LINK_SLV2_ADDR 9 580 + #define IMX_SC_C_PXL_LINK_MST_VLD 10 581 + #define IMX_SC_C_PXL_LINK_MST1_VLD 11 582 + #define IMX_SC_C_PXL_LINK_MST2_VLD 12 583 + #define IMX_SC_C_SINGLE_MODE 13 584 + #define IMX_SC_C_ID 14 585 + #define IMX_SC_C_PXL_CLK_POLARITY 15 586 + #define IMX_SC_C_LINESTATE 16 587 + #define IMX_SC_C_PCIE_G_RST 17 588 + #define IMX_SC_C_PCIE_BUTTON_RST 18 589 + #define IMX_SC_C_PCIE_PERST 19 590 + #define IMX_SC_C_PHY_RESET 20 591 + #define IMX_SC_C_PXL_LINK_RATE_CORRECTION 21 592 + #define IMX_SC_C_PANIC 22 593 + #define IMX_SC_C_PRIORITY_GROUP 23 594 + #define IMX_SC_C_TXCLK 24 595 + #define IMX_SC_C_CLKDIV 25 596 + #define IMX_SC_C_DISABLE_50 26 597 + #define IMX_SC_C_DISABLE_125 27 598 + 
#define IMX_SC_C_SEL_125 28 599 + #define IMX_SC_C_MODE 29 600 + #define IMX_SC_C_SYNC_CTRL0 30 601 + #define IMX_SC_C_KACHUNK_CNT 31 602 + #define IMX_SC_C_KACHUNK_SEL 32 603 + #define IMX_SC_C_SYNC_CTRL1 33 604 + #define IMX_SC_C_DPI_RESET 34 605 + #define IMX_SC_C_MIPI_RESET 35 606 + #define IMX_SC_C_DUAL_MODE 36 607 + #define IMX_SC_C_VOLTAGE 37 608 + #define IMX_SC_C_PXL_LINK_SEL 38 609 + #define IMX_SC_C_OFS_SEL 39 610 + #define IMX_SC_C_OFS_AUDIO 40 611 + #define IMX_SC_C_OFS_PERIPH 41 612 + #define IMX_SC_C_OFS_IRQ 42 613 + #define IMX_SC_C_RST0 43 614 + #define IMX_SC_C_RST1 44 615 + #define IMX_SC_C_SEL0 45 616 + #define IMX_SC_C_CALIB0 46 617 + #define IMX_SC_C_CALIB1 47 618 + #define IMX_SC_C_CALIB2 48 619 + #define IMX_SC_C_IPG_DEBUG 49 620 + #define IMX_SC_C_IPG_DOZE 50 621 + #define IMX_SC_C_IPG_WAIT 51 622 + #define IMX_SC_C_IPG_STOP 52 623 + #define IMX_SC_C_IPG_STOP_MODE 53 624 + #define IMX_SC_C_IPG_STOP_ACK 54 625 + #define IMX_SC_C_SYNC_CTRL 55 626 + #define IMX_SC_C_OFS_AUDIO_ALT 56 627 + #define IMX_SC_C_DSP_BYP 57 628 + #define IMX_SC_C_CLK_GEN_EN 58 629 + #define IMX_SC_C_INTF_SEL 59 630 + #define IMX_SC_C_RXC_DLY 60 631 + #define IMX_SC_C_TIMER_SEL 61 632 + #define IMX_SC_C_LAST 62 633 + 550 634 #endif /* __DT_BINDINGS_RSCRC_IMX_H */
include/dt-bindings/power/meson-gxbb-power.h (+13)
+/* SPDX-License-Identifier: (GPL-2.0+ or MIT) */
+/*
+ * Copyright (c) 2019 BayLibre, SAS
+ * Author: Neil Armstrong <narmstrong@baylibre.com>
+ */
+
+#ifndef _DT_BINDINGS_MESON_GXBB_POWER_H
+#define _DT_BINDINGS_MESON_GXBB_POWER_H
+
+#define PWRC_GXBB_VPU_ID		0
+#define PWRC_GXBB_ETHERNET_MEM_ID	1
+
+#endif
include/dt-bindings/power/meson8-power.h (+13)
+/* SPDX-License-Identifier: (GPL-2.0+ or MIT) */
+/*
+ * Copyright (c) 2019 Martin Blumenstingl <martin.blumenstingl@googlemail.com>
+ */
+
+#ifndef _DT_BINDINGS_MESON8_POWER_H
+#define _DT_BINDINGS_MESON8_POWER_H
+
+#define PWRC_MESON8_VPU_ID		0
+#define PWRC_MESON8_ETHERNET_MEM_ID	1
+#define PWRC_MESON8_AUDIO_DSP_MEM_ID	2
+
+#endif /* _DT_BINDINGS_MESON8_POWER_H */
include/dt-bindings/power/qcom-rpmpd.h (+12)
···
 #define SM8150_MMCX	9
 #define SM8150_MMCX_AO	10

+/* SM8250 Power Domain Indexes */
+#define SM8250_CX	0
+#define SM8250_CX_AO	1
+#define SM8250_EBI	2
+#define SM8250_GFX	3
+#define SM8250_LCX	4
+#define SM8250_LMX	5
+#define SM8250_MMCX	6
+#define SM8250_MMCX_AO	7
+#define SM8250_MX	8
+#define SM8250_MX_AO	9
+
 /* SC7180 Power Domain Indexes */
 #define SC7180_CX	0
 #define SC7180_CX_AO	1
include/dt-bindings/power/r8a7742-sysc.h (+29)
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * Copyright (C) 2020 Renesas Electronics Corp.
+ */
+#ifndef __DT_BINDINGS_POWER_R8A7742_SYSC_H__
+#define __DT_BINDINGS_POWER_R8A7742_SYSC_H__
+
+/*
+ * These power domain indices match the numbers of the interrupt bits
+ * representing the power areas in the various Interrupt Registers
+ * (e.g. SYSCISR, Interrupt Status Register)
+ */
+
+#define R8A7742_PD_CA15_CPU0	 0
+#define R8A7742_PD_CA15_CPU1	 1
+#define R8A7742_PD_CA15_CPU2	 2
+#define R8A7742_PD_CA15_CPU3	 3
+#define R8A7742_PD_CA7_CPU0	 5
+#define R8A7742_PD_CA7_CPU1	 6
+#define R8A7742_PD_CA7_CPU2	 7
+#define R8A7742_PD_CA7_CPU3	 8
+#define R8A7742_PD_CA15_SCU	12
+#define R8A7742_PD_RGX		20
+#define R8A7742_PD_CA7_SCU	21
+
+/* Always-on power area */
+#define R8A7742_PD_ALWAYS_ON	32
+
+#endif /* __DT_BINDINGS_POWER_R8A7742_SYSC_H__ */
include/dt-bindings/reset/amlogic,meson-gxbb-reset.h (+1 -1)
···
 #define RESET_SYS_CPU_L2	58
 #define RESET_SYS_CPU_P		59
 #define RESET_SYS_CPU_MBIST	60
-/*			61	*/
+#define RESET_ACODEC		61
 /*			62	*/
 /*			63	*/
 /* RESET2 */
include/dt-bindings/reset/imx8mp-reset.h (+50)
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright 2020 NXP
+ */
+
+#ifndef DT_BINDING_RESET_IMX8MP_H
+#define DT_BINDING_RESET_IMX8MP_H
+
+#define IMX8MP_RESET_A53_CORE_POR_RESET0	0
+#define IMX8MP_RESET_A53_CORE_POR_RESET1	1
+#define IMX8MP_RESET_A53_CORE_POR_RESET2	2
+#define IMX8MP_RESET_A53_CORE_POR_RESET3	3
+#define IMX8MP_RESET_A53_CORE_RESET0		4
+#define IMX8MP_RESET_A53_CORE_RESET1		5
+#define IMX8MP_RESET_A53_CORE_RESET2		6
+#define IMX8MP_RESET_A53_CORE_RESET3		7
+#define IMX8MP_RESET_A53_DBG_RESET0		8
+#define IMX8MP_RESET_A53_DBG_RESET1		9
+#define IMX8MP_RESET_A53_DBG_RESET2		10
+#define IMX8MP_RESET_A53_DBG_RESET3		11
+#define IMX8MP_RESET_A53_ETM_RESET0		12
+#define IMX8MP_RESET_A53_ETM_RESET1		13
+#define IMX8MP_RESET_A53_ETM_RESET2		14
+#define IMX8MP_RESET_A53_ETM_RESET3		15
+#define IMX8MP_RESET_A53_SOC_DBG_RESET		16
+#define IMX8MP_RESET_A53_L2RESET		17
+#define IMX8MP_RESET_SW_NON_SCLR_M7C_RST	18
+#define IMX8MP_RESET_OTG1_PHY_RESET		19
+#define IMX8MP_RESET_OTG2_PHY_RESET		20
+#define IMX8MP_RESET_SUPERMIX_RESET		21
+#define IMX8MP_RESET_AUDIOMIX_RESET		22
+#define IMX8MP_RESET_MLMIX_RESET		23
+#define IMX8MP_RESET_PCIEPHY			24
+#define IMX8MP_RESET_PCIEPHY_PERST		25
+#define IMX8MP_RESET_PCIE_CTRL_APPS_EN		26
+#define IMX8MP_RESET_PCIE_CTRL_APPS_TURNOFF	27
+#define IMX8MP_RESET_HDMI_PHY_APB_RESET		28
+#define IMX8MP_RESET_MEDIA_RESET		29
+#define IMX8MP_RESET_GPU2D_RESET		30
+#define IMX8MP_RESET_GPU3D_RESET		31
+#define IMX8MP_RESET_GPU_RESET			32
+#define IMX8MP_RESET_VPU_RESET			33
+#define IMX8MP_RESET_VPU_G1_RESET		34
+#define IMX8MP_RESET_VPU_G2_RESET		35
+#define IMX8MP_RESET_VPUVC8KE_RESET		36
+#define IMX8MP_RESET_NOC_RESET			37
+
+#define IMX8MP_RESET_NUM			38
+
+#endif
include/dt-bindings/reset/imx8mq-reset.h (+28 -28)
···
 #define IMX8MQ_RESET_A53_L2RESET		17
 #define IMX8MQ_RESET_SW_NON_SCLR_M4C_RST	18
 #define IMX8MQ_RESET_OTG1_PHY_RESET		19
-#define IMX8MQ_RESET_OTG2_PHY_RESET		20
-#define IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N	21
-#define IMX8MQ_RESET_MIPI_DSI_RESET_N		22
-#define IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N	23
-#define IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N	24
-#define IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N	25
-#define IMX8MQ_RESET_PCIEPHY			26
-#define IMX8MQ_RESET_PCIEPHY_PERST		27
-#define IMX8MQ_RESET_PCIE_CTRL_APPS_EN		28
-#define IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF	29
-#define IMX8MQ_RESET_HDMI_PHY_APB_RESET		30	/* i.MX8MM does NOT support */
+#define IMX8MQ_RESET_OTG2_PHY_RESET		20	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_RESET_BYTE_N	21	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_RESET_N		22	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_DPI_RESET_N	23	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_ESC_RESET_N	24	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_DSI_PCLK_RESET_N	25	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY			26	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY_PERST		27	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE_CTRL_APPS_EN		28	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE_CTRL_APPS_TURNOFF	29	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_HDMI_PHY_APB_RESET		30	/* i.MX8MM/i.MX8MN does NOT support */
 #define IMX8MQ_RESET_DISP_RESET			31
 #define IMX8MQ_RESET_GPU_RESET			32
-#define IMX8MQ_RESET_VPU_RESET			33
-#define IMX8MQ_RESET_PCIEPHY2			34	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_PCIEPHY2_PERST		35	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_PCIE2_CTRL_APPS_EN		36	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF	37	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI1_CORE_RESET	38	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET	39	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI1_ESC_RESET	40	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI2_CORE_RESET	41	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET	42	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_MIPI_CSI2_ESC_RESET	43	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_DDRC1_PRST			44
-#define IMX8MQ_RESET_DDRC1_CORE_RESET		45
-#define IMX8MQ_RESET_DDRC1_PHY_RESET		46
-#define IMX8MQ_RESET_DDRC2_PRST			47	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_DDRC2_CORE_RESET		48	/* i.MX8MM does NOT support */
-#define IMX8MQ_RESET_DDRC2_PHY_RESET		49	/* i.MX8MM does NOT support */
+#define IMX8MQ_RESET_VPU_RESET			33	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY2			34	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIEPHY2_PERST		35	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE2_CTRL_APPS_EN		36	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_PCIE2_CTRL_APPS_TURNOFF	37	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI1_CORE_RESET	38	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI1_PHY_REF_RESET	39	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI1_ESC_RESET	40	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI2_CORE_RESET	41	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI2_PHY_REF_RESET	42	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_MIPI_CSI2_ESC_RESET	43	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC1_PRST			44	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC1_CORE_RESET		45	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC1_PHY_RESET		46	/* i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC2_PRST			47	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC2_CORE_RESET		48	/* i.MX8MM/i.MX8MN does NOT support */
+#define IMX8MQ_RESET_DDRC2_PHY_RESET		49	/* i.MX8MM/i.MX8MN does NOT support */

 #define IMX8MQ_RESET_NUM			50

include/linux/firmware/imx/sci.h (-1)
···
 #define _SC_SCI_H

 #include <linux/firmware/imx/ipc.h>
-#include <linux/firmware/imx/types.h>

 #include <linux/firmware/imx/svc/misc.h>
 #include <linux/firmware/imx/svc/pm.h>
include/linux/firmware/imx/types.h (-65, file removed)
-/* SPDX-License-Identifier: GPL-2.0+ */
-/*
- * Copyright (C) 2016 Freescale Semiconductor, Inc.
- * Copyright 2017~2018 NXP
- *
- * Header file containing types used across multiple service APIs.
- */
-
-#ifndef _SC_TYPES_H
-#define _SC_TYPES_H
-
-/*
- * This type is used to indicate a control.
- */
-enum imx_sc_ctrl {
-	IMX_SC_C_TEMP = 0,
-	IMX_SC_C_TEMP_HI = 1,
-	IMX_SC_C_TEMP_LOW = 2,
-	IMX_SC_C_PXL_LINK_MST1_ADDR = 3,
-	IMX_SC_C_PXL_LINK_MST2_ADDR = 4,
-	IMX_SC_C_PXL_LINK_MST_ENB = 5,
-	IMX_SC_C_PXL_LINK_MST1_ENB = 6,
-	IMX_SC_C_PXL_LINK_MST2_ENB = 7,
-	IMX_SC_C_PXL_LINK_SLV1_ADDR = 8,
-	IMX_SC_C_PXL_LINK_SLV2_ADDR = 9,
-	IMX_SC_C_PXL_LINK_MST_VLD = 10,
-	IMX_SC_C_PXL_LINK_MST1_VLD = 11,
-	IMX_SC_C_PXL_LINK_MST2_VLD = 12,
-	IMX_SC_C_SINGLE_MODE = 13,
-	IMX_SC_C_ID = 14,
-	IMX_SC_C_PXL_CLK_POLARITY = 15,
-	IMX_SC_C_LINESTATE = 16,
-	IMX_SC_C_PCIE_G_RST = 17,
-	IMX_SC_C_PCIE_BUTTON_RST = 18,
-	IMX_SC_C_PCIE_PERST = 19,
-	IMX_SC_C_PHY_RESET = 20,
-	IMX_SC_C_PXL_LINK_RATE_CORRECTION = 21,
-	IMX_SC_C_PANIC = 22,
-	IMX_SC_C_PRIORITY_GROUP = 23,
-	IMX_SC_C_TXCLK = 24,
-	IMX_SC_C_CLKDIV = 25,
-	IMX_SC_C_DISABLE_50 = 26,
-	IMX_SC_C_DISABLE_125 = 27,
-	IMX_SC_C_SEL_125 = 28,
-	IMX_SC_C_MODE = 29,
-	IMX_SC_C_SYNC_CTRL0 = 30,
-	IMX_SC_C_KACHUNK_CNT = 31,
-	IMX_SC_C_KACHUNK_SEL = 32,
-	IMX_SC_C_SYNC_CTRL1 = 33,
-	IMX_SC_C_DPI_RESET = 34,
-	IMX_SC_C_MIPI_RESET = 35,
-	IMX_SC_C_DUAL_MODE = 36,
-	IMX_SC_C_VOLTAGE = 37,
-	IMX_SC_C_PXL_LINK_SEL = 38,
-	IMX_SC_C_OFS_SEL = 39,
-	IMX_SC_C_OFS_AUDIO = 40,
-	IMX_SC_C_OFS_PERIPH = 41,
-	IMX_SC_C_OFS_IRQ = 42,
-	IMX_SC_C_RST0 = 43,
-	IMX_SC_C_RST1 = 44,
-	IMX_SC_C_SEL0 = 45,
-	IMX_SC_C_LAST
-};
-
-#endif /* _SC_TYPES_H */
include/linux/fsl/bestcomm/bestcomm.h (+1 -1)
···
  */
 struct bcom_bd {
 	u32	status;
-	u32	data[0];	/* variable payload size */
+	u32	data[];	/* variable payload size */
 };

 /* ======================================================================== */
include/linux/of_reserved_mem.h (+12)
···
 #define __OF_RESERVED_MEM_H

 #include <linux/device.h>
+#include <linux/of.h>

 struct of_phandle_args;
 struct reserved_mem_ops;
···
 int of_reserved_mem_device_init_by_idx(struct device *dev,
 				       struct device_node *np, int idx);
+int of_reserved_mem_device_init_by_name(struct device *dev,
+					struct device_node *np,
+					const char *name);
 void of_reserved_mem_device_release(struct device *dev);

 void fdt_init_reserved_mem(void);
···
 {
 	return -ENOSYS;
 }
+
+static inline int of_reserved_mem_device_init_by_name(struct device *dev,
+						      struct device_node *np,
+						      const char *name)
+{
+	return -ENOSYS;
+}
+
 static inline void of_reserved_mem_device_release(struct device *pdev) { }

 static inline void fdt_init_reserved_mem(void) { }
include/linux/scmi_protocol.h (+6)
···
  *
  * Copyright (C) 2018 ARM Ltd.
  */
+
+#ifndef _LINUX_SCMI_PROTOCOL_H
+#define _LINUX_SCMI_PROTOCOL_H
+
 #include <linux/device.h>
 #include <linux/types.h>
···
 typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
 int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
 void scmi_protocol_unregister(int protocol_id);
+
+#endif /* _LINUX_SCMI_PROTOCOL_H */
include/linux/scpi_protocol.h (+6)
···
  *
  * Copyright (C) 2014 ARM Ltd.
  */
+
+#ifndef _LINUX_SCPI_PROTOCOL_H
+#define _LINUX_SCPI_PROTOCOL_H
+
 #include <linux/types.h>

 struct scpi_opp {
···
 #else
 static inline struct scpi_ops *get_scpi_ops(void) { return NULL; }
 #endif
+
+#endif /* _LINUX_SCPI_PROTOCOL_H */
include/linux/soc/mediatek/mtk-mmsys.h (+20)
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (c) 2015 MediaTek Inc.
+ */
+
+#ifndef __MTK_MMSYS_H
+#define __MTK_MMSYS_H
+
+enum mtk_ddp_comp_id;
+struct device;
+
+void mtk_mmsys_ddp_connect(struct device *dev,
+			   enum mtk_ddp_comp_id cur,
+			   enum mtk_ddp_comp_id next);
+
+void mtk_mmsys_ddp_disconnect(struct device *dev,
+			      enum mtk_ddp_comp_id cur,
+			      enum mtk_ddp_comp_id next);
+
+#endif /* __MTK_MMSYS_H */
include/linux/tee_drv.h (+17)
···
 #define TEE_SHM_REGISTER	BIT(3)  /* Memory registered in secure world */
 #define TEE_SHM_USER_MAPPED	BIT(4)  /* Memory mapped in user space */
 #define TEE_SHM_POOL		BIT(5)  /* Memory allocated from pool */
+#define TEE_SHM_KERNEL_MAPPED	BIT(6)  /* Memory mapped in kernel space */

 struct device;
 struct tee_device;
···
  * @teedev is NULL.
  */
 void tee_device_unregister(struct tee_device *teedev);
+
+/**
+ * tee_session_calc_client_uuid() - Calculates client UUID for session
+ * @uuid:		Resulting UUID
+ * @connection_method:	Connection method for session (TEE_IOCTL_LOGIN_*)
+ * @connectuon_data:	Connection data for opening session
+ *
+ * Based on connection method calculates UUIDv5 based client UUID.
+ *
+ * For group based logins verifies that calling process has specified
+ * credentials.
+ *
+ * @return < 0 on failure
+ */
+int tee_session_calc_client_uuid(uuid_t *uuid, u32 connection_method,
+				 const u8 connection_data[TEE_IOCTL_UUID_LEN]);

 /**
  * struct tee_shm - shared memory object
include/soc/fsl/qe/qe.h (+1 -1)
···
 		u8 revision;	/* The microcode version revision */
 		u8 padding;	/* Reserved, for alignment */
 		u8 reserved[4];	/* Reserved, for future expansion */
-	} __attribute__ ((packed)) microcode[1];
+	} __packed microcode[];
 	/* All microcode binaries should be located here */
 	/* CRC32 should be located here, after the microcode binaries */
 } __attribute__ ((packed));
include/soc/qcom/cmd-db.h (+1)
···
 #ifndef __QCOM_COMMAND_DB_H__
 #define __QCOM_COMMAND_DB_H__

+#include <linux/err.h>

 enum cmd_db_hw_type {
 	CMD_DB_HW_INVALID = 0,
include/uapi/linux/tee.h (+9)
···
 #define TEE_IOCTL_LOGIN_APPLICATION		4
 #define TEE_IOCTL_LOGIN_USER_APPLICATION	5
 #define TEE_IOCTL_LOGIN_GROUP_APPLICATION	6
+/*
+ * Disallow user-space to use GP implementation specific login
+ * method range (0x80000000 - 0xBFFFFFFF). This range is rather
+ * being reserved for REE kernel clients or TEE implementation.
+ */
+#define TEE_IOCTL_LOGIN_REE_KERNEL_MIN		0x80000000
+#define TEE_IOCTL_LOGIN_REE_KERNEL_MAX		0xBFFFFFFF
+/* Private login method for REE kernel clients */
+#define TEE_IOCTL_LOGIN_REE_KERNEL		0x80000000

 /**
  * struct tee_ioctl_param - parameter
kernel/cpu_pm.c (+2 -2)
···
  */
 int cpu_pm_enter(void)
 {
-	int nr_calls;
+	int nr_calls = 0;
 	int ret = 0;

 	ret = cpu_pm_notify(CPU_PM_ENTER, -1, &nr_calls);
···
  */
 int cpu_cluster_pm_enter(void)
 {
-	int nr_calls;
+	int nr_calls = 0;
 	int ret = 0;

 	ret = cpu_pm_notify(CPU_CLUSTER_PM_ENTER, -1, &nr_calls);
lib/logic_pio.c (+11 -11)
···
 }

 #if defined(CONFIG_INDIRECT_PIO) && defined(PCI_IOBASE)
-#define BUILD_LOGIC_IO(bw, type)					\
-type logic_in##bw(unsigned long addr)					\
+#define BUILD_LOGIC_IO(bwl, type)					\
+type logic_in##bwl(unsigned long addr)					\
 {									\
 	type ret = (type)~0;						\
 									\
 	if (addr < MMIO_UPPER_LIMIT) {					\
-		ret = read##bw(PCI_IOBASE + addr);			\
+		ret = _in##bwl(addr);					\
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 									\
···
 	return ret;							\
 }									\
 									\
-void logic_out##bw(type value, unsigned long addr)			\
+void logic_out##bwl(type value, unsigned long addr)			\
 {									\
 	if (addr < MMIO_UPPER_LIMIT) {					\
-		write##bw(value, PCI_IOBASE + addr);			\
+		_out##bwl(value, addr);					\
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 									\
···
 	}								\
 }									\
 									\
-void logic_ins##bw(unsigned long addr, void *buffer,			\
-		   unsigned int count)					\
+void logic_ins##bwl(unsigned long addr, void *buffer,			\
+		    unsigned int count)					\
 {									\
 	if (addr < MMIO_UPPER_LIMIT) {					\
-		reads##bw(PCI_IOBASE + addr, buffer, count);		\
+		reads##bwl(PCI_IOBASE + addr, buffer, count);		\
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 									\
···
 									\
 }									\
 									\
-void logic_outs##bw(unsigned long addr, const void *buffer,		\
-		    unsigned int count)					\
+void logic_outs##bwl(unsigned long addr, const void *buffer,		\
+		     unsigned int count)				\
 {									\
 	if (addr < MMIO_UPPER_LIMIT) {					\
-		writes##bw(PCI_IOBASE + addr, buffer, count);		\
+		writes##bwl(PCI_IOBASE + addr, buffer, count);		\
 	} else if (addr >= MMIO_UPPER_LIMIT && addr < IO_SPACE_LIMIT) {	\
 		struct logic_pio_hwaddr *entry = find_io_range(addr);	\
 									\