Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'arm-drivers-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull ARM SoC driver updates from Arnd Bergmann:
"A couple of subsystems have their own subsystem maintainers but choose
to have the code merged through the soc tree as upstream, as the code
tends to be used across multiple SoCs or has SoC specific drivers
itself:

- memory controllers:

Krzysztof Kozlowski takes ownership of the drivers/memory subsystem
and its drivers, starting out with a set of cleanup patches.

A larger driver for the Tegra memory controller that was
accidentally missed for v5.8 is now added.

- reset controllers:

Only minor updates to drivers/reset this time

- firmware:

The "turris mox" firmware driver gains support for signed firmware
blobs. The tegra firmware driver gets extended to export some debug
information. Various updates to i.MX firmware drivers, mostly
cosmetic.

- ARM SCMI/SCPI:

A new mechanism for platform notifications is added, among a number
of minor changes.

- optee:

Probing of the TEE bus is rewritten to better support detection of
devices that depend on the tee-supplicant user space. A new
firmware-based trusted platform module (fTPM) driver is added, based
on OP-TEE.

- SoC attributes:

A new driver is added to provide a generic soc_device for
identifying a machine through the SMCCC ARCH_SOC_ID firmware
interface rather than by probing SoC family specific registers.

The series also contains some cleanups to the common soc_device
code.

There are also a number of updates to SoC specific drivers, the main
ones are:

- Mediatek cmdq driver gains a few in-kernel interfaces

- Minor updates to Qualcomm RPMh, socinfo, rpm drivers, mostly adding
support for additional SoC variants

- The Qualcomm GENI core code gains interconnect path voting and
performance level support, which is integrated into a number of
device drivers.

- A new driver for the Samsung Exynos5800 voltage coupler

- Renesas RZ/G2H (R8A774E1) SoC support gets added to a couple of SoC
specific device drivers

- Updates to the TI K3 Ring Accelerator driver"

* tag 'arm-drivers-5.9' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (164 commits)
soc: qcom: geni: Fix unused label warning
soc: qcom: smd-rpm: Fix kerneldoc
memory: jz4780_nemc: Only request IO memory the driver will use
soc: qcom: pdr: Reorder the PD state indication ack
MAINTAINERS: Add Git repository for memory controller drivers
memory: brcmstb_dpfe: Fix language typo
memory: samsung: exynos5422-dmc: Correct white space issues
memory: samsung: exynos-srom: Correct alignment
memory: pl172: Enclose macro argument usage in parenthesis
memory: of: Correct kerneldoc
memory: omap-gpmc: Fix language typo
memory: omap-gpmc: Correct white space issues
memory: omap-gpmc: Use 'unsigned int' for consistency
memory: omap-gpmc: Enclose macro argument usage in parenthesis
memory: omap-gpmc: Correct kerneldoc
memory: mvebu-devbus: Align with open parenthesis
memory: mvebu-devbus: Add missing braces to all arms of if statement
memory: bt1-l2-ctl: Add blank lines after declarations
soc: TI knav_qmss: make symbol 'knav_acc_range_ops' static
firmware: ti_sci: Replace HTTP links with HTTPS ones
...

+10925 -1350
+9
Documentation/ABI/testing/debugfs-turris-mox-rwtm
··· 1 + What: /sys/kernel/debug/turris-mox-rwtm/do_sign 2 + Date: Jun 2020 3 + KernelVersion: 5.8 4 + Contact: Marek Behún <marek.behun@nic.cz> 5 + Description: (W) Message to sign with the ECDSA private key stored in 6 + device's OTP. The message must be exactly 64 bytes (since 7 + this is intended for SHA-512 hashes). 8 + (R) The resulting signature, 136 bytes. This contains the R and 9 + S values of the ECDSA signature, both in big-endian format.
+8
Documentation/ABI/testing/sysfs-bus-optee-devices
··· 1 + What: /sys/bus/tee/devices/optee-ta-<uuid>/ 2 + Date: May 2020 3 + KernelVersion 5.8 4 + Contact: op-tee@lists.trustedfirmware.org 5 + Description: 6 + OP-TEE bus provides reference to registered drivers under this directory. The <uuid> 7 + matches Trusted Application (TA) driver and corresponding TA in secure OS. Drivers 8 + are free to create needed API under optee-ta-<uuid> directory.
+30
Documentation/ABI/testing/sysfs-devices-soc
··· 26 26 Read-only attribute common to all SoCs. Contains SoC family name 27 27 (e.g. DB8500). 28 28 29 + On many of ARM based silicon with SMCCC v1.2+ compliant firmware 30 + this will contain the JEDEC JEP106 manufacturer’s identification 31 + code. The format is "jep106:XXYY" where XX is identity code and 32 + YY is continuation code. 33 + 34 + This manufacturer’s identification code is defined by one 35 + or more eight (8) bit fields, each consisting of seven (7) 36 + data bits plus one (1) odd parity bit. It is a single field, 37 + limiting the possible number of vendors to 126. To expand 38 + the maximum number of identification codes, a continuation 39 + scheme has been defined. 40 + 41 + The specified mechanism is that an identity code of 0x7F 42 + represents the "continuation code" and implies the presence 43 + of an additional identity code field, and this mechanism 44 + may be extended to multiple continuation codes followed 45 + by the manufacturer's identity code. 46 + 47 + For example, ARM has identity code 0x7F 0x7F 0x7F 0x7F 0x3B, 48 + which is code 0x3B on the fifth 'page'. This is shortened 49 + as JEP106 identity code of 0x3B and a continuation code of 50 + 0x4 to represent the four continuation codes preceding the 51 + identity code. 52 + 29 53 What: /sys/devices/socX/serial_number 30 54 Date: January 2019 31 55 contact: Bjorn Andersson <bjorn.andersson@linaro.org> ··· 63 39 Description: 64 40 Read-only attribute supported by most SoCs. In the case of 65 41 ST-Ericsson's chips this contains the SoC serial number. 42 + 43 + On many of ARM based silicon with SMCCC v1.2+ compliant firmware 44 + this will contain the SOC ID appended to the family attribute 45 + to ensure there is no conflict in this namespace across various 46 + vendors. The format is "jep106:XXYY:ZZZZ" where XX is identity 47 + code, YY is continuation code and ZZZZ is the SOC ID. 66 48 67 49 What: /sys/devices/socX/revision 68 50 Date: January 2012
+2
Documentation/devicetree/bindings/firmware/qcom,scm.txt
··· 11 11 * "qcom,scm-apq8084" 12 12 * "qcom,scm-ipq4019" 13 13 * "qcom,scm-ipq806x" 14 + * "qcom,scm-ipq8074" 14 15 * "qcom,scm-msm8660" 15 16 * "qcom,scm-msm8916" 16 17 * "qcom,scm-msm8960" 17 18 * "qcom,scm-msm8974" 19 + * "qcom,scm-msm8994" 18 20 * "qcom,scm-msm8996" 19 21 * "qcom,scm-msm8998" 20 22 * "qcom,scm-sc7180"
+1 -1
Documentation/devicetree/bindings/interrupt-controller/ti,sci-intr.txt
··· 55 55 corresponds to a range of host irqs. 56 56 57 57 For more details on TISCI IRQ resource management refer: 58 - http://downloads.ti.com/tisci/esd/latest/2_tisci_msgs/rm/rm_irq.html 58 + https://downloads.ti.com/tisci/esd/latest/2_tisci_msgs/rm/rm_irq.html 59 59 60 60 Example: 61 61 --------
-49
Documentation/devicetree/bindings/reset/fsl,imx-src.txt
··· 1 - Freescale i.MX System Reset Controller 2 - ====================================== 3 - 4 - Please also refer to reset.txt in this directory for common reset 5 - controller binding usage. 6 - 7 - Required properties: 8 - - compatible: Should be "fsl,<chip>-src" 9 - - reg: should be register base and length as documented in the 10 - datasheet 11 - - interrupts: Should contain SRC interrupt and CPU WDOG interrupt, 12 - in this order. 13 - - #reset-cells: 1, see below 14 - 15 - example: 16 - 17 - src: src@20d8000 { 18 - compatible = "fsl,imx6q-src"; 19 - reg = <0x020d8000 0x4000>; 20 - interrupts = <0 91 0x04 0 96 0x04>; 21 - #reset-cells = <1>; 22 - }; 23 - 24 - Specifying reset lines connected to IP modules 25 - ============================================== 26 - 27 - The system reset controller can be used to reset the GPU, VPU, 28 - IPU, and OpenVG IP modules on i.MX5 and i.MX6 ICs. Those device 29 - nodes should specify the reset line on the SRC in their resets 30 - property, containing a phandle to the SRC device node and a 31 - RESET_INDEX specifying which module to reset, as described in 32 - reset.txt 33 - 34 - example: 35 - 36 - ipu1: ipu@2400000 { 37 - resets = <&src 2>; 38 - }; 39 - ipu2: ipu@2800000 { 40 - resets = <&src 4>; 41 - }; 42 - 43 - The following RESET_INDEX values are valid for i.MX5: 44 - GPU_RESET 0 45 - VPU_RESET 1 46 - IPU1_RESET 2 47 - OPEN_VG_RESET 3 48 - The following additional RESET_INDEX value is valid for i.MX6: 49 - IPU2_RESET 4
+82
Documentation/devicetree/bindings/reset/fsl,imx-src.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/reset/fsl,imx-src.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale i.MX System Reset Controller 8 + 9 + maintainers: 10 + - Philipp Zabel <p.zabel@pengutronix.de> 11 + 12 + description: | 13 + The system reset controller can be used to reset the GPU, VPU, 14 + IPU, and OpenVG IP modules on i.MX5 and i.MX6 ICs. Those device 15 + nodes should specify the reset line on the SRC in their resets 16 + property, containing a phandle to the SRC device node and a 17 + RESET_INDEX specifying which module to reset, as described in 18 + reset.txt 19 + 20 + The following RESET_INDEX values are valid for i.MX5: 21 + GPU_RESET 0 22 + VPU_RESET 1 23 + IPU1_RESET 2 24 + OPEN_VG_RESET 3 25 + The following additional RESET_INDEX value is valid for i.MX6: 26 + IPU2_RESET 4 27 + 28 + properties: 29 + compatible: 30 + oneOf: 31 + - const: "fsl,imx51-src" 32 + - items: 33 + - const: "fsl,imx50-src" 34 + - const: "fsl,imx51-src" 35 + - items: 36 + - const: "fsl,imx53-src" 37 + - const: "fsl,imx51-src" 38 + - items: 39 + - const: "fsl,imx6q-src" 40 + - const: "fsl,imx51-src" 41 + - items: 42 + - const: "fsl,imx6sx-src" 43 + - const: "fsl,imx51-src" 44 + - items: 45 + - const: "fsl,imx6sl-src" 46 + - const: "fsl,imx51-src" 47 + - items: 48 + - const: "fsl,imx6ul-src" 49 + - const: "fsl,imx51-src" 50 + - items: 51 + - const: "fsl,imx6sll-src" 52 + - const: "fsl,imx51-src" 53 + 54 + reg: 55 + maxItems: 1 56 + 57 + interrupts: 58 + items: 59 + - description: SRC interrupt 60 + - description: CPU WDOG interrupts out of SRC 61 + minItems: 1 62 + maxItems: 2 63 + 64 + '#reset-cells': 65 + const: 1 66 + 67 + required: 68 + - compatible 69 + - reg 70 + - interrupts 71 + - '#reset-cells' 72 + 73 + additionalProperties: false 74 + 75 + examples: 76 + - | 77 + reset-controller@73fd0000 { 78 + compatible = "fsl,imx51-src"; 79 + reg = 
<0x73fd0000 0x4000>; 80 + interrupts = <75>; 81 + #reset-cells = <1>; 82 + };
-56
Documentation/devicetree/bindings/reset/fsl,imx7-src.txt
··· 1 - Freescale i.MX7 System Reset Controller 2 - ====================================== 3 - 4 - Please also refer to reset.txt in this directory for common reset 5 - controller binding usage. 6 - 7 - Required properties: 8 - - compatible: 9 - - For i.MX7 SoCs should be "fsl,imx7d-src", "syscon" 10 - - For i.MX8MQ SoCs should be "fsl,imx8mq-src", "syscon" 11 - - For i.MX8MM SoCs should be "fsl,imx8mm-src", "fsl,imx8mq-src", "syscon" 12 - - For i.MX8MN SoCs should be "fsl,imx8mn-src", "fsl,imx8mq-src", "syscon" 13 - - For i.MX8MP SoCs should be "fsl,imx8mp-src", "syscon" 14 - - reg: should be register base and length as documented in the 15 - datasheet 16 - - interrupts: Should contain SRC interrupt 17 - - #reset-cells: 1, see below 18 - 19 - example: 20 - 21 - src: reset-controller@30390000 { 22 - compatible = "fsl,imx7d-src", "syscon"; 23 - reg = <0x30390000 0x2000>; 24 - interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>; 25 - #reset-cells = <1>; 26 - }; 27 - 28 - 29 - Specifying reset lines connected to IP modules 30 - ============================================== 31 - 32 - The system reset controller can be used to reset various set of 33 - peripherals. Device nodes that need access to reset lines should 34 - specify them as a reset phandle in their corresponding node as 35 - specified in reset.txt. 36 - 37 - Example: 38 - 39 - pcie: pcie@33800000 { 40 - 41 - ... 42 - 43 - resets = <&src IMX7_RESET_PCIEPHY>, 44 - <&src IMX7_RESET_PCIE_CTRL_APPS_EN>; 45 - reset-names = "pciephy", "apps"; 46 - 47 - ... 48 - }; 49 - 50 - 51 - For list of all valid reset indices see 52 - <dt-bindings/reset/imx7-reset.h> for i.MX7, 53 - <dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ and 54 - <dt-bindings/reset/imx8mq-reset.h> for i.MX8MM and 55 - <dt-bindings/reset/imx8mq-reset.h> for i.MX8MN and 56 - <dt-bindings/reset/imx8mp-reset.h> for i.MX8MP
+58
Documentation/devicetree/bindings/reset/fsl,imx7-src.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/reset/fsl,imx7-src.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Freescale i.MX7 System Reset Controller 8 + 9 + maintainers: 10 + - Andrey Smirnov <andrew.smirnov@gmail.com> 11 + 12 + description: | 13 + The system reset controller can be used to reset various set of 14 + peripherals. Device nodes that need access to reset lines should 15 + specify them as a reset phandle in their corresponding node as 16 + specified in reset.txt. 17 + 18 + For list of all valid reset indices see 19 + <dt-bindings/reset/imx7-reset.h> for i.MX7, 20 + <dt-bindings/reset/imx8mq-reset.h> for i.MX8MQ, i.MX8MM and i.MX8MN, 21 + <dt-bindings/reset/imx8mp-reset.h> for i.MX8MP. 22 + 23 + properties: 24 + compatible: 25 + items: 26 + - enum: 27 + - fsl,imx7d-src 28 + - fsl,imx8mq-src 29 + - fsl,imx8mp-src 30 + - const: syscon 31 + 32 + reg: 33 + maxItems: 1 34 + 35 + interrupts: 36 + maxItems: 1 37 + 38 + '#reset-cells': 39 + const: 1 40 + 41 + required: 42 + - compatible 43 + - reg 44 + - interrupts 45 + - '#reset-cells' 46 + 47 + additionalProperties: false 48 + 49 + examples: 50 + - | 51 + #include <dt-bindings/interrupt-controller/arm-gic.h> 52 + 53 + reset-controller@30390000 { 54 + compatible = "fsl,imx7d-src", "syscon"; 55 + reg = <0x30390000 0x2000>; 56 + interrupts = <GIC_SPI 89 IRQ_TYPE_LEVEL_HIGH>; 57 + #reset-cells = <1>; 58 + };
-62
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.txt
··· 1 - Qualcomm Resource Power Manager (RPM) over SMD 2 - 3 - This driver is used to interface with the Resource Power Manager (RPM) found in 4 - various Qualcomm platforms. The RPM allows each component in the system to vote 5 - for state of the system resources, such as clocks, regulators and bus 6 - frequencies. 7 - 8 - The SMD information for the RPM edge should be filled out. See qcom,smd.txt for 9 - the required edge properties. All SMD related properties will reside within the 10 - RPM node itself. 11 - 12 - = SUBDEVICES 13 - 14 - The RPM exposes resources to its subnodes. The rpm_requests node must be 15 - present and this subnode may contain children that designate regulator 16 - resources. 17 - 18 - - compatible: 19 - Usage: required 20 - Value type: <string> 21 - Definition: must be one of: 22 - "qcom,rpm-apq8084" 23 - "qcom,rpm-msm8916" 24 - "qcom,rpm-msm8974" 25 - "qcom,rpm-msm8976" 26 - "qcom,rpm-msm8998" 27 - "qcom,rpm-sdm660" 28 - "qcom,rpm-qcs404" 29 - 30 - - qcom,smd-channels: 31 - Usage: required 32 - Value type: <string> 33 - Definition: must be "rpm_requests" 34 - 35 - Refer to Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.txt 36 - for information on the regulator subnodes that can exist under the rpm_requests. 37 - 38 - Example: 39 - 40 - soc { 41 - apcs: syscon@f9011000 { 42 - compatible = "syscon"; 43 - reg = <0xf9011000 0x1000>; 44 - }; 45 - }; 46 - 47 - smd { 48 - compatible = "qcom,smd"; 49 - 50 - rpm { 51 - interrupts = <0 168 1>; 52 - qcom,ipc = <&apcs 8 0>; 53 - qcom,smd-edge = <15>; 54 - 55 - rpm_requests { 56 - compatible = "qcom,rpm-msm8974"; 57 - qcom,smd-channels = "rpm_requests"; 58 - 59 - ... 60 - }; 61 - }; 62 - };
+87
Documentation/devicetree/bindings/soc/qcom/qcom,smd-rpm.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: "http://devicetree.org/schemas/soc/qcom/qcom,smd-rpm.yaml#" 5 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 6 + 7 + title: Qualcomm Resource Power Manager (RPM) over SMD 8 + 9 + description: | 10 + This driver is used to interface with the Resource Power Manager (RPM) found 11 + in various Qualcomm platforms. The RPM allows each component in the system 12 + to vote for state of the system resources, such as clocks, regulators and bus 13 + frequencies. 14 + 15 + The SMD information for the RPM edge should be filled out. See qcom,smd.txt 16 + for the required edge properties. All SMD related properties will reside 17 + within the RPM node itself. 18 + 19 + The RPM exposes resources to its subnodes. The rpm_requests node must be 20 + present and this subnode may contain children that designate regulator 21 + resources. 22 + 23 + Refer to Documentation/devicetree/bindings/regulator/qcom,smd-rpm-regulator.txt 24 + for information on the regulator subnodes that can exist under the 25 + rpm_requests. 
26 + 27 + maintainers: 28 + - Kathiravan T <kathirav@codeaurora.org> 29 + 30 + properties: 31 + compatible: 32 + enum: 33 + - qcom,rpm-apq8084 34 + - qcom,rpm-ipq6018 35 + - qcom,rpm-msm8916 36 + - qcom,rpm-msm8974 37 + - qcom,rpm-msm8976 38 + - qcom,rpm-msm8996 39 + - qcom,rpm-msm8998 40 + - qcom,rpm-sdm660 41 + - qcom,rpm-qcs404 42 + 43 + qcom,smd-channels: 44 + $ref: /schemas/types.yaml#/definitions/string-array 45 + description: Channel name used for the RPM communication 46 + items: 47 + - const: rpm_requests 48 + 49 + if: 50 + properties: 51 + compatible: 52 + contains: 53 + enum: 54 + - qcom,rpm-apq8084 55 + - qcom,rpm-msm8916 56 + - qcom,rpm-msm8974 57 + then: 58 + required: 59 + - qcom,smd-channels 60 + 61 + required: 62 + - compatible 63 + 64 + additionalProperties: false 65 + 66 + examples: 67 + - | 68 + #include <dt-bindings/interrupt-controller/arm-gic.h> 69 + #include <dt-bindings/interrupt-controller/irq.h> 70 + 71 + smd { 72 + compatible = "qcom,smd"; 73 + 74 + rpm { 75 + interrupts = <GIC_SPI 168 IRQ_TYPE_EDGE_RISING>; 76 + qcom,ipc = <&apcs 8 0>; 77 + qcom,smd-edge = <15>; 78 + 79 + rpm_requests { 80 + compatible = "qcom,rpm-msm8974"; 81 + qcom,smd-channels = "rpm_requests"; 82 + 83 + /* Regulator nodes to follow */ 84 + }; 85 + }; 86 + }; 87 + ...
-59
Documentation/devicetree/bindings/soc/ti/k3-ringacc.txt
··· 1 - * Texas Instruments K3 NavigatorSS Ring Accelerator 2 - 3 - The Ring Accelerator (RA) is a machine which converts read/write accesses 4 - from/to a constant address into corresponding read/write accesses from/to a 5 - circular data structure in memory. The RA eliminates the need for each DMA 6 - controller which needs to access ring elements from having to know the current 7 - state of the ring (base address, current offset). The DMA controller 8 - performs a read or write access to a specific address range (which maps to the 9 - source interface on the RA) and the RA replaces the address for the transaction 10 - with a new address which corresponds to the head or tail element of the ring 11 - (head for reads, tail for writes). 12 - 13 - The Ring Accelerator is a hardware module that is responsible for accelerating 14 - management of the packet queues. The K3 SoCs can have more than one RA instances 15 - 16 - Required properties: 17 - - compatible : Must be "ti,am654-navss-ringacc"; 18 - - reg : Should contain register location and length of the following 19 - named register regions. 
20 - - reg-names : should be 21 - "rt" - The RA Ring Real-time Control/Status Registers 22 - "fifos" - The RA Queues Registers 23 - "proxy_gcfg" - The RA Proxy Global Config Registers 24 - "proxy_target" - The RA Proxy Datapath Registers 25 - - ti,num-rings : Number of rings supported by RA 26 - - ti,sci-rm-range-gp-rings : TI-SCI RM subtype for GP ring range 27 - - ti,sci : phandle on TI-SCI compatible System controller node 28 - - ti,sci-dev-id : TI-SCI device id of the ring accelerator 29 - - msi-parent : phandle for "ti,sci-inta" interrupt controller 30 - 31 - Optional properties: 32 - -- ti,dma-ring-reset-quirk : enable ringacc / udma ring state interoperability 33 - issue software w/a 34 - 35 - Example: 36 - 37 - ringacc: ringacc@3c000000 { 38 - compatible = "ti,am654-navss-ringacc"; 39 - reg = <0x0 0x3c000000 0x0 0x400000>, 40 - <0x0 0x38000000 0x0 0x400000>, 41 - <0x0 0x31120000 0x0 0x100>, 42 - <0x0 0x33000000 0x0 0x40000>; 43 - reg-names = "rt", "fifos", 44 - "proxy_gcfg", "proxy_target"; 45 - ti,num-rings = <818>; 46 - ti,sci-rm-range-gp-rings = <0x2>; /* GP ring range */ 47 - ti,dma-ring-reset-quirk; 48 - ti,sci = <&dmsc>; 49 - ti,sci-dev-id = <187>; 50 - msi-parent = <&inta_main_udmass>; 51 - }; 52 - 53 - client: 54 - 55 - dma_ipx: dma_ipx@<addr> { 56 - ... 57 - ti,ringacc = <&ringacc>; 58 - ... 59 - }
+102
Documentation/devicetree/bindings/soc/ti/k3-ringacc.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) 2 + # Copyright (C) 2020 Texas Instruments Incorporated - http://www.ti.com/ 3 + %YAML 1.2 4 + --- 5 + $id: "http://devicetree.org/schemas/soc/ti/k3-ringacc.yaml#" 6 + $schema: "http://devicetree.org/meta-schemas/core.yaml#" 7 + 8 + title: Texas Instruments K3 NavigatorSS Ring Accelerator 9 + 10 + maintainers: 11 + - Santosh Shilimkar <ssantosh@kernel.org> 12 + - Grygorii Strashko <grygorii.strashko@ti.com> 13 + 14 + description: | 15 + The Ring Accelerator (RA) is a machine which converts read/write accesses 16 + from/to a constant address into corresponding read/write accesses from/to a 17 + circular data structure in memory. The RA eliminates the need for each DMA 18 + controller which needs to access ring elements from having to know the current 19 + state of the ring (base address, current offset). The DMA controller 20 + performs a read or write access to a specific address range (which maps to the 21 + source interface on the RA) and the RA replaces the address for the transaction 22 + with a new address which corresponds to the head or tail element of the ring 23 + (head for reads, tail for writes). 24 + 25 + The Ring Accelerator is a hardware module that is responsible for accelerating 26 + management of the packet queues. 
The K3 SoCs can have more than one RA instances 27 + 28 + properties: 29 + compatible: 30 + items: 31 + - const: ti,am654-navss-ringacc 32 + 33 + reg: 34 + items: 35 + - description: real time registers regions 36 + - description: fifos registers regions 37 + - description: proxy gcfg registers regions 38 + - description: proxy target registers regions 39 + 40 + reg-names: 41 + items: 42 + - const: rt 43 + - const: fifos 44 + - const: proxy_gcfg 45 + - const: proxy_target 46 + 47 + msi-parent: true 48 + 49 + ti,num-rings: 50 + $ref: /schemas/types.yaml#/definitions/uint32 51 + description: Number of rings supported by RA 52 + 53 + ti,sci-rm-range-gp-rings: 54 + $ref: /schemas/types.yaml#/definitions/uint32 55 + description: TI-SCI RM subtype for GP ring range 56 + 57 + ti,sci: 58 + $ref: /schemas/types.yaml#definitions/phandle-array 59 + description: phandle on TI-SCI compatible System controller node 60 + 61 + ti,sci-dev-id: 62 + $ref: /schemas/types.yaml#/definitions/uint32 63 + description: TI-SCI device id of the ring accelerator 64 + 65 + ti,dma-ring-reset-quirk: 66 + $ref: /schemas/types.yaml#definitions/flag 67 + description: | 68 + enable ringacc/udma ring state interoperability issue software w/a 69 + 70 + required: 71 + - compatible 72 + - reg 73 + - reg-names 74 + - msi-parent 75 + - ti,num-rings 76 + - ti,sci-rm-range-gp-rings 77 + - ti,sci 78 + - ti,sci-dev-id 79 + 80 + additionalProperties: false 81 + 82 + examples: 83 + - | 84 + bus { 85 + #address-cells = <2>; 86 + #size-cells = <2>; 87 + 88 + ringacc: ringacc@3c000000 { 89 + compatible = "ti,am654-navss-ringacc"; 90 + reg = <0x0 0x3c000000 0x0 0x400000>, 91 + <0x0 0x38000000 0x0 0x400000>, 92 + <0x0 0x31120000 0x0 0x100>, 93 + <0x0 0x33000000 0x0 0x40000>; 94 + reg-names = "rt", "fifos", "proxy_gcfg", "proxy_target"; 95 + ti,num-rings = <818>; 96 + ti,sci-rm-range-gp-rings = <0x2>; /* GP ring range */ 97 + ti,dma-ring-reset-quirk; 98 + ti,sci = <&dmsc>; 99 + ti,sci-dev-id = <187>; 100 + msi-parent 
= <&inta_main_udmass>; 101 + }; 102 + };
+9
MAINTAINERS
··· 11117 11117 F: include/linux/memblock.h 11118 11118 F: mm/memblock.c 11119 11119 11120 + MEMORY CONTROLLER DRIVERS 11121 + M: Krzysztof Kozlowski <krzk@kernel.org> 11122 + L: linux-kernel@vger.kernel.org 11123 + S: Maintained 11124 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/krzk/linux-mem-ctrl.git 11125 + F: Documentation/devicetree/bindings/memory-controllers/ 11126 + F: drivers/memory/ 11127 + 11120 11128 MEMORY MANAGEMENT 11121 11129 M: Andrew Morton <akpm@linux-foundation.org> 11122 11130 L: linux-mm@kvack.org ··· 12737 12729 M: Jens Wiklander <jens.wiklander@linaro.org> 12738 12730 L: op-tee@lists.trustedfirmware.org 12739 12731 S: Maintained 12732 + F: Documentation/ABI/testing/sysfs-bus-optee-devices 12740 12733 F: drivers/tee/optee/ 12741 12734 12742 12735 OP-TEE RANDOM NUMBER GENERATOR (RNG) DRIVER
+1
arch/arm/mach-exynos/Kconfig
··· 118 118 bool "Samsung EXYNOS5800" 119 119 default y 120 120 depends on SOC_EXYNOS5420 121 + select EXYNOS_REGULATOR_COUPLER 121 122 122 123 config EXYNOS_MCPM 123 124 bool
+11 -9
arch/arm/mach-omap2/id.c
··· 775 775 return kasprintf(GFP_KERNEL, "Unknown"); 776 776 } 777 777 778 - static ssize_t omap_get_type(struct device *dev, 779 - struct device_attribute *attr, 780 - char *buf) 778 + static ssize_t 779 + type_show(struct device *dev, struct device_attribute *attr, char *buf) 781 780 { 782 781 return sprintf(buf, "%s\n", omap_types[omap_type()]); 783 782 } 784 783 785 - static struct device_attribute omap_soc_attr = 786 - __ATTR(type, S_IRUGO, omap_get_type, NULL); 784 + static DEVICE_ATTR_RO(type); 785 + 786 + static struct attribute *omap_soc_attrs[] = { 787 + &dev_attr_type.attr, 788 + NULL 789 + }; 790 + 791 + ATTRIBUTE_GROUPS(omap_soc); 787 792 788 793 void __init omap_soc_device_init(void) 789 794 { 790 - struct device *parent; 791 795 struct soc_device *soc_dev; 792 796 struct soc_device_attribute *soc_dev_attr; 793 797 ··· 802 798 soc_dev_attr->machine = soc_name; 803 799 soc_dev_attr->family = omap_get_family(); 804 800 soc_dev_attr->revision = soc_rev; 801 + soc_dev_attr->custom_attr_group = omap_soc_groups[0]; 805 802 806 803 soc_dev = soc_device_register(soc_dev_attr); 807 804 if (IS_ERR(soc_dev)) { 808 805 kfree(soc_dev_attr); 809 806 return; 810 807 } 811 - 812 - parent = soc_device_to_device(soc_dev); 813 - device_create_file(parent, &omap_soc_attr); 814 808 } 815 809 #endif /* CONFIG_SOC_BUS */
-1
arch/arm64/configs/defconfig
··· 881 881 CONFIG_RASPBERRYPI_POWER=y 882 882 CONFIG_FSL_DPAA=y 883 883 CONFIG_FSL_MC_DPIO=y 884 - CONFIG_IMX_SCU_SOC=y 885 884 CONFIG_QCOM_AOSS_QMP=y 886 885 CONFIG_QCOM_GENI_SE=y 887 886 CONFIG_QCOM_RMTFS_MEM=m
+60 -10
drivers/char/tpm/tpm_ftpm_tee.c
··· 214 214 * Return: 215 215 * On success, 0. On failure, -errno. 216 216 */ 217 - static int ftpm_tee_probe(struct platform_device *pdev) 217 + static int ftpm_tee_probe(struct device *dev) 218 218 { 219 219 int rc; 220 220 struct tpm_chip *chip; 221 - struct device *dev = &pdev->dev; 222 221 struct ftpm_tee_private *pvt_data = NULL; 223 222 struct tee_ioctl_open_session_arg sess_arg; 224 223 ··· 296 297 return rc; 297 298 } 298 299 300 + static int ftpm_plat_tee_probe(struct platform_device *pdev) 301 + { 302 + struct device *dev = &pdev->dev; 303 + 304 + return ftpm_tee_probe(dev); 305 + } 306 + 299 307 /** 300 308 * ftpm_tee_remove() - remove the TPM device 301 309 * @pdev: the platform_device description. ··· 310 304 * Return: 311 305 * 0 always. 312 306 */ 313 - static int ftpm_tee_remove(struct platform_device *pdev) 307 + static int ftpm_tee_remove(struct device *dev) 314 308 { 315 - struct ftpm_tee_private *pvt_data = dev_get_drvdata(&pdev->dev); 309 + struct ftpm_tee_private *pvt_data = dev_get_drvdata(dev); 316 310 317 311 /* Release the chip */ 318 312 tpm_chip_unregister(pvt_data->chip); ··· 334 328 return 0; 335 329 } 336 330 331 + static int ftpm_plat_tee_remove(struct platform_device *pdev) 332 + { 333 + struct device *dev = &pdev->dev; 334 + 335 + return ftpm_tee_remove(dev); 336 + } 337 + 337 338 /** 338 339 * ftpm_tee_shutdown() - shutdown the TPM device 339 340 * @pdev: the platform_device description. 
340 341 */ 341 - static void ftpm_tee_shutdown(struct platform_device *pdev) 342 + static void ftpm_plat_tee_shutdown(struct platform_device *pdev) 342 343 { 343 344 struct ftpm_tee_private *pvt_data = dev_get_drvdata(&pdev->dev); 344 345 ··· 360 347 }; 361 348 MODULE_DEVICE_TABLE(of, of_ftpm_tee_ids); 362 349 363 - static struct platform_driver ftpm_tee_driver = { 350 + static struct platform_driver ftpm_tee_plat_driver = { 364 351 .driver = { 365 352 .name = "ftpm-tee", 366 353 .of_match_table = of_match_ptr(of_ftpm_tee_ids), 367 354 }, 368 - .probe = ftpm_tee_probe, 369 - .remove = ftpm_tee_remove, 370 - .shutdown = ftpm_tee_shutdown, 355 + .shutdown = ftpm_plat_tee_shutdown, 356 + .probe = ftpm_plat_tee_probe, 357 + .remove = ftpm_plat_tee_remove, 371 358 }; 372 359 373 - module_platform_driver(ftpm_tee_driver); 360 + /* UUID of the fTPM TA */ 361 + static const struct tee_client_device_id optee_ftpm_id_table[] = { 362 + {UUID_INIT(0xbc50d971, 0xd4c9, 0x42c4, 363 + 0x82, 0xcb, 0x34, 0x3f, 0xb7, 0xf3, 0x78, 0x96)}, 364 + {} 365 + }; 366 + 367 + MODULE_DEVICE_TABLE(tee, optee_ftpm_id_table); 368 + 369 + static struct tee_client_driver ftpm_tee_driver = { 370 + .id_table = optee_ftpm_id_table, 371 + .driver = { 372 + .name = "optee-ftpm", 373 + .bus = &tee_bus_type, 374 + .probe = ftpm_tee_probe, 375 + .remove = ftpm_tee_remove, 376 + }, 377 + }; 378 + 379 + static int __init ftpm_mod_init(void) 380 + { 381 + int rc; 382 + 383 + rc = platform_driver_register(&ftpm_tee_plat_driver); 384 + if (rc) 385 + return rc; 386 + 387 + return driver_register(&ftpm_tee_driver.driver); 388 + } 389 + 390 + static void __exit ftpm_mod_exit(void) 391 + { 392 + platform_driver_unregister(&ftpm_tee_plat_driver); 393 + driver_unregister(&ftpm_tee_driver.driver); 394 + } 395 + 396 + module_init(ftpm_mod_init); 397 + module_exit(ftpm_mod_exit); 374 398 375 399 MODULE_AUTHOR("Thirupathaiah Annapureddy <thiruan@microsoft.com>"); 376 400 MODULE_DESCRIPTION("TPM Driver for fTPM TA in TEE");
+19 -3
drivers/clk/clk-scmi.c
··· 103 103 static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk) 104 104 { 105 105 int ret; 106 + unsigned long min_rate, max_rate; 107 + 106 108 struct clk_init_data init = { 107 109 .flags = CLK_GET_RATE_NOCACHE, 108 110 .num_parents = 0, ··· 114 112 115 113 sclk->hw.init = &init; 116 114 ret = devm_clk_hw_register(dev, &sclk->hw); 117 - if (!ret) 118 - clk_hw_set_rate_range(&sclk->hw, sclk->info->range.min_rate, 119 - sclk->info->range.max_rate); 115 + if (ret) 116 + return ret; 117 + 118 + if (sclk->info->rate_discrete) { 119 + int num_rates = sclk->info->list.num_rates; 120 + 121 + if (num_rates <= 0) 122 + return -EINVAL; 123 + 124 + min_rate = sclk->info->list.rates[0]; 125 + max_rate = sclk->info->list.rates[num_rates - 1]; 126 + } else { 127 + min_rate = sclk->info->range.min_rate; 128 + max_rate = sclk->info->range.max_rate; 129 + } 130 + 131 + clk_hw_set_rate_range(&sclk->hw, min_rate, max_rate); 120 132 return ret; 121 133 } 122 134
+2 -1
drivers/cpufreq/scmi-cpufreq.c
··· 198 198 199 199 policy->cpuinfo.transition_latency = latency; 200 200 201 - policy->fast_switch_possible = true; 201 + policy->fast_switch_possible = 202 + handle->perf_ops->fast_switch_possible(handle, cpu_dev); 202 203 203 204 em_register_perf_domain(policy->cpus, nr_opp, &em_cb); 204 205
+13 -29
drivers/dma/ti/k3-udma-glue.c
··· 271 271 atomic_set(&tx_chn->free_pkts, cfg->txcq_cfg.size); 272 272 273 273 /* request and cfg rings */ 274 - tx_chn->ringtx = k3_ringacc_request_ring(tx_chn->common.ringacc, 275 - tx_chn->udma_tchan_id, 0); 276 - if (!tx_chn->ringtx) { 277 - ret = -ENODEV; 278 - dev_err(dev, "Failed to get TX ring %u\n", 279 - tx_chn->udma_tchan_id); 280 - goto err; 281 - } 282 - 283 - tx_chn->ringtxcq = k3_ringacc_request_ring(tx_chn->common.ringacc, 284 - -1, 0); 285 - if (!tx_chn->ringtxcq) { 286 - ret = -ENODEV; 287 - dev_err(dev, "Failed to get TXCQ ring\n"); 274 + ret = k3_ringacc_request_rings_pair(tx_chn->common.ringacc, 275 + tx_chn->udma_tchan_id, -1, 276 + &tx_chn->ringtx, 277 + &tx_chn->ringtxcq); 278 + if (ret) { 279 + dev_err(dev, "Failed to get TX/TXCQ rings %d\n", ret); 288 280 goto err; 289 281 } 290 282 ··· 579 587 } 580 588 581 589 /* request and cfg rings */ 582 - flow->ringrx = k3_ringacc_request_ring(rx_chn->common.ringacc, 583 - flow_cfg->ring_rxq_id, 0); 584 - if (!flow->ringrx) { 585 - ret = -ENODEV; 586 - dev_err(dev, "Failed to get RX ring\n"); 590 + ret = k3_ringacc_request_rings_pair(rx_chn->common.ringacc, 591 + flow_cfg->ring_rxq_id, 592 + flow_cfg->ring_rxfdq0_id, 593 + &flow->ringrxfdq, 594 + &flow->ringrx); 595 + if (ret) { 596 + dev_err(dev, "Failed to get RX/RXFDQ rings %d\n", ret); 587 597 goto err_rflow_put; 588 - } 589 - 590 - flow->ringrxfdq = k3_ringacc_request_ring(rx_chn->common.ringacc, 591 - flow_cfg->ring_rxfdq0_id, 0); 592 - if (!flow->ringrxfdq) { 593 - ret = -ENODEV; 594 - dev_err(dev, "Failed to get RXFDQ ring\n"); 595 - goto err_ringrx_free; 596 598 } 597 599 598 600 ret = k3_ringacc_ring_cfg(flow->ringrx, &flow_cfg->rx_cfg); ··· 659 673 660 674 err_ringrxfdq_free: 661 675 k3_ringacc_ring_free(flow->ringrxfdq); 662 - 663 - err_ringrx_free: 664 676 k3_ringacc_ring_free(flow->ringrx); 665 677 666 678 err_rflow_put:
+11 -23
drivers/dma/ti/k3-udma.c
··· 1418 1418 if (ret) 1419 1419 return ret; 1420 1420 1421 - uc->tchan->t_ring = k3_ringacc_request_ring(ud->ringacc, 1422 - uc->tchan->id, 0); 1423 - if (!uc->tchan->t_ring) { 1421 + ret = k3_ringacc_request_rings_pair(ud->ringacc, uc->tchan->id, -1, 1422 + &uc->tchan->t_ring, 1423 + &uc->tchan->tc_ring); 1424 + if (ret) { 1424 1425 ret = -EBUSY; 1425 - goto err_tx_ring; 1426 - } 1427 - 1428 - uc->tchan->tc_ring = k3_ringacc_request_ring(ud->ringacc, -1, 0); 1429 - if (!uc->tchan->tc_ring) { 1430 - ret = -EBUSY; 1431 - goto err_txc_ring; 1426 + goto err_ring; 1432 1427 } 1433 1428 1434 1429 memset(&ring_cfg, 0, sizeof(ring_cfg)); ··· 1442 1447 err_ringcfg: 1443 1448 k3_ringacc_ring_free(uc->tchan->tc_ring); 1444 1449 uc->tchan->tc_ring = NULL; 1445 - err_txc_ring: 1446 1450 k3_ringacc_ring_free(uc->tchan->t_ring); 1447 1451 uc->tchan->t_ring = NULL; 1448 - err_tx_ring: 1452 + err_ring: 1449 1453 udma_put_tchan(uc); 1450 1454 1451 1455 return ret; ··· 1493 1499 1494 1500 rflow = uc->rflow; 1495 1501 fd_ring_id = ud->tchan_cnt + ud->echan_cnt + uc->rchan->id; 1496 - rflow->fd_ring = k3_ringacc_request_ring(ud->ringacc, fd_ring_id, 0); 1497 - if (!rflow->fd_ring) { 1502 + ret = k3_ringacc_request_rings_pair(ud->ringacc, fd_ring_id, -1, 1503 + &rflow->fd_ring, &rflow->r_ring); 1504 + if (ret) { 1498 1505 ret = -EBUSY; 1499 - goto err_rx_ring; 1500 - } 1501 - 1502 - rflow->r_ring = k3_ringacc_request_ring(ud->ringacc, -1, 0); 1503 - if (!rflow->r_ring) { 1504 - ret = -EBUSY; 1505 - goto err_rxc_ring; 1506 + goto err_ring; 1506 1507 } 1507 1508 1508 1509 memset(&ring_cfg, 0, sizeof(ring_cfg)); ··· 1522 1533 err_ringcfg: 1523 1534 k3_ringacc_ring_free(rflow->r_ring); 1524 1535 rflow->r_ring = NULL; 1525 - err_rxc_ring: 1526 1536 k3_ringacc_ring_free(rflow->fd_ring); 1527 1537 rflow->fd_ring = NULL; 1528 - err_rx_ring: 1538 + err_ring: 1529 1539 udma_put_rflow(uc); 1530 1540 err_rflow: 1531 1541 udma_put_rchan(uc);
+2 -2
drivers/firmware/arm_scmi/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 obj-y = scmi-bus.o scmi-driver.o scmi-protocols.o scmi-transport.o 3 3 scmi-bus-y = bus.o 4 - scmi-driver-y = driver.o 4 + scmi-driver-y = driver.o notify.o 5 5 scmi-transport-y = shmem.o 6 6 scmi-transport-$(CONFIG_MAILBOX) += mailbox.o 7 - scmi-transport-$(CONFIG_ARM_PSCI_FW) += smc.o 7 + scmi-transport-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY) += smc.o 8 8 scmi-protocols-y = base.o clock.o perf.o power.o reset.o sensors.o 9 9 obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
+104 -4
drivers/firmware/arm_scmi/base.c
··· 5 5 * Copyright (C) 2018 ARM Ltd. 6 6 */ 7 7 8 + #define pr_fmt(fmt) "SCMI Notifications BASE - " fmt 9 + 10 + #include <linux/scmi_protocol.h> 11 + 8 12 #include "common.h" 13 + #include "notify.h" 14 + 15 + #define SCMI_BASE_NUM_SOURCES 1 16 + #define SCMI_BASE_MAX_CMD_ERR_COUNT 1024 9 17 10 18 enum scmi_base_protocol_cmd { 11 19 BASE_DISCOVER_VENDOR = 0x3, ··· 27 19 BASE_RESET_AGENT_CONFIGURATION = 0xb, 28 20 }; 29 21 30 - enum scmi_base_protocol_notify { 31 - BASE_ERROR_EVENT = 0x0, 32 - }; 33 - 34 22 struct scmi_msg_resp_base_attributes { 35 23 u8 num_protocols; 36 24 u8 num_agents; 37 25 __le16 reserved; 26 + }; 27 + 28 + struct scmi_msg_base_error_notify { 29 + __le32 event_control; 30 + #define BASE_TP_NOTIFY_ALL BIT(0) 31 + }; 32 + 33 + struct scmi_base_error_notify_payld { 34 + __le32 agent_id; 35 + __le32 error_status; 36 + #define IS_FATAL_ERROR(x) ((x) & BIT(31)) 37 + #define ERROR_CMD_COUNT(x) FIELD_GET(GENMASK(9, 0), (x)) 38 + __le64 msg_reports[SCMI_BASE_MAX_CMD_ERR_COUNT]; 38 39 }; 39 40 40 41 /** ··· 239 222 return ret; 240 223 } 241 224 225 + static int scmi_base_error_notify(const struct scmi_handle *handle, bool enable) 226 + { 227 + int ret; 228 + u32 evt_cntl = enable ? 
BASE_TP_NOTIFY_ALL : 0; 229 + struct scmi_xfer *t; 230 + struct scmi_msg_base_error_notify *cfg; 231 + 232 + ret = scmi_xfer_get_init(handle, BASE_NOTIFY_ERRORS, 233 + SCMI_PROTOCOL_BASE, sizeof(*cfg), 0, &t); 234 + if (ret) 235 + return ret; 236 + 237 + cfg = t->tx.buf; 238 + cfg->event_control = cpu_to_le32(evt_cntl); 239 + 240 + ret = scmi_do_xfer(handle, t); 241 + 242 + scmi_xfer_put(handle, t); 243 + return ret; 244 + } 245 + 246 + static int scmi_base_set_notify_enabled(const struct scmi_handle *handle, 247 + u8 evt_id, u32 src_id, bool enable) 248 + { 249 + int ret; 250 + 251 + ret = scmi_base_error_notify(handle, enable); 252 + if (ret) 253 + pr_debug("FAIL_ENABLED - evt[%X] ret:%d\n", evt_id, ret); 254 + 255 + return ret; 256 + } 257 + 258 + static void *scmi_base_fill_custom_report(const struct scmi_handle *handle, 259 + u8 evt_id, ktime_t timestamp, 260 + const void *payld, size_t payld_sz, 261 + void *report, u32 *src_id) 262 + { 263 + int i; 264 + const struct scmi_base_error_notify_payld *p = payld; 265 + struct scmi_base_error_report *r = report; 266 + 267 + /* 268 + * The BaseError notification payload is variable in size, up to 269 + * a maximum length bounded by the struct pointed to by p, while 270 + * payld_sz is the effective length of this notification payload, 271 + * so it cannot be greater than the maximum size allowed by the 272 + * struct pointed to by p. 
273 + */ 274 + if (evt_id != SCMI_EVENT_BASE_ERROR_EVENT || sizeof(*p) < payld_sz) 275 + return NULL; 276 + 277 + r->timestamp = timestamp; 278 + r->agent_id = le32_to_cpu(p->agent_id); 279 + r->fatal = IS_FATAL_ERROR(le32_to_cpu(p->error_status)); 280 + r->cmd_count = ERROR_CMD_COUNT(le32_to_cpu(p->error_status)); 281 + for (i = 0; i < r->cmd_count; i++) 282 + r->reports[i] = le64_to_cpu(p->msg_reports[i]); 283 + *src_id = 0; 284 + 285 + return r; 286 + } 287 + 288 + static const struct scmi_event base_events[] = { 289 + { 290 + .id = SCMI_EVENT_BASE_ERROR_EVENT, 291 + .max_payld_sz = sizeof(struct scmi_base_error_notify_payld), 292 + .max_report_sz = sizeof(struct scmi_base_error_report) + 293 + SCMI_BASE_MAX_CMD_ERR_COUNT * sizeof(u64), 294 + }, 295 + }; 296 + 297 + static const struct scmi_event_ops base_event_ops = { 298 + .set_notify_enabled = scmi_base_set_notify_enabled, 299 + .fill_custom_report = scmi_base_fill_custom_report, 300 + }; 301 + 242 302 int scmi_base_protocol_init(struct scmi_handle *h) 243 303 { 244 304 int id, ret; ··· 349 255 rev->sub_vendor_id, rev->impl_ver); 350 256 dev_dbg(dev, "Found %d protocol(s) %d agent(s)\n", rev->num_protocols, 351 257 rev->num_agents); 258 + 259 + scmi_register_protocol_events(handle, SCMI_PROTOCOL_BASE, 260 + (4 * SCMI_PROTO_QUEUE_SZ), 261 + &base_event_ops, base_events, 262 + ARRAY_SIZE(base_events), 263 + SCMI_BASE_NUM_SOURCES); 352 264 353 265 for (id = 0; id < rev->num_agents; id++) { 354 266 scmi_base_discover_agent_get(handle, id, name);
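`scmi_base_fill_custom_report()` above rejects any payload larger than the maximum payload struct before decoding `error_status`. A simplified user-space sketch of that bounds check and bitfield split (type and function names are hypothetical, and the kernel's little-endian conversions are omitted):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative parse of a BASE_ERROR_EVENT payload, mirroring the checks
 * in scmi_base_fill_custom_report(): refuse any payload larger than the
 * maximum struct, then split error_status into the fatal bit (bit 31)
 * and the 10-bit command count. */
#define MAX_CMD_ERR_COUNT 1024

#define IS_FATAL_ERROR(x)	(((x) >> 31) & 1u)
#define ERROR_CMD_COUNT(x)	((x) & 0x3ffu)

struct base_error_payld {
	uint32_t agent_id;
	uint32_t error_status;
	uint64_t msg_reports[MAX_CMD_ERR_COUNT];
};

static int parse_base_error(const struct base_error_payld *p, size_t payld_sz,
			    int *fatal, unsigned int *cmd_count)
{
	if (payld_sz > sizeof(*p))	/* oversized payload: refuse to parse */
		return -1;

	*fatal = IS_FATAL_ERROR(p->error_status);
	*cmd_count = ERROR_CMD_COUNT(p->error_status);
	return 0;
}
```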
+18 -2
drivers/firmware/arm_scmi/clock.c
··· 5 5 * Copyright (C) 2018 ARM Ltd. 6 6 */ 7 7 8 + #include <linux/sort.h> 9 + 8 10 #include "common.h" 9 11 10 12 enum scmi_clock_protocol_cmd { ··· 123 121 return ret; 124 122 } 125 123 124 + static int rate_cmp_func(const void *_r1, const void *_r2) 125 + { 126 + const u64 *r1 = _r1, *r2 = _r2; 127 + 128 + if (*r1 < *r2) 129 + return -1; 130 + else if (*r1 == *r2) 131 + return 0; 132 + else 133 + return 1; 134 + } 135 + 126 136 static int 127 137 scmi_clock_describe_rates_get(const struct scmi_handle *handle, u32 clk_id, 128 138 struct scmi_clock_info *clk) 129 139 { 130 - u64 *rate; 140 + u64 *rate = NULL; 131 141 int ret, cnt; 132 142 bool rate_discrete = false; 133 143 u32 tot_rate_cnt = 0, rates_flag; ··· 198 184 */ 199 185 } while (num_returned && num_remaining); 200 186 201 - if (rate_discrete) 187 + if (rate_discrete && rate) { 202 188 clk->list.num_rates = tot_rate_cnt; 189 + sort(rate, tot_rate_cnt, sizeof(*rate), rate_cmp_func, NULL); 190 + } 203 191 204 192 clk->rate_discrete = rate_discrete; 205 193
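The new `rate_cmp_func()` compares the u64 rates explicitly rather than returning their difference: a subtraction-based comparator would truncate the 64-bit difference to `int` and could flip the sign for large rates. A standalone sketch of the same comparator with `qsort()`:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Comparator in the style of rate_cmp_func(): explicit comparisons.
 * "return (int)(*r1 - *r2);" would be wrong here, since the 64-bit
 * difference gets truncated to int and can report the wrong order. */
static int rate_cmp(const void *a, const void *b)
{
	const uint64_t *r1 = a, *r2 = b;

	if (*r1 < *r2)
		return -1;
	else if (*r1 == *r2)
		return 0;
	else
		return 1;
}
```

After sorting, the discrete rate table is in ascending order, which is what the patch achieves for `clk->list.rates` so consumers see a monotonically increasing list.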
+4
drivers/firmware/arm_scmi/common.h
··· 6 6 * 7 7 * Copyright (C) 2018 ARM Ltd. 8 8 */ 9 + #ifndef _SCMI_COMMON_H 10 + #define _SCMI_COMMON_H 9 11 10 12 #include <linux/bitfield.h> 11 13 #include <linux/completion.h> ··· 237 235 void shmem_clear_channel(struct scmi_shared_mem __iomem *shmem); 238 236 bool shmem_poll_done(struct scmi_shared_mem __iomem *shmem, 239 237 struct scmi_xfer *xfer); 238 + 239 + #endif /* _SCMI_COMMON_H */
+12 -3
drivers/firmware/arm_scmi/driver.c
··· 26 26 #include <linux/slab.h> 27 27 28 28 #include "common.h" 29 + #include "notify.h" 29 30 30 31 #define CREATE_TRACE_POINTS 31 32 #include <trace/events/scmi.h> ··· 209 208 struct device *dev = cinfo->dev; 210 209 struct scmi_info *info = handle_to_scmi_info(cinfo->handle); 211 210 struct scmi_xfers_info *minfo = &info->rx_minfo; 211 + ktime_t ts; 212 212 213 + ts = ktime_get_boottime(); 213 214 xfer = scmi_xfer_get(cinfo->handle, minfo); 214 215 if (IS_ERR(xfer)) { 215 216 dev_err(dev, "failed to get free message slot (%ld)\n", ··· 224 221 scmi_dump_header_dbg(dev, &xfer->hdr); 225 222 info->desc->ops->fetch_notification(cinfo, info->desc->max_msg_size, 226 223 xfer); 224 + scmi_notify(cinfo->handle, xfer->hdr.protocol_id, 225 + xfer->hdr.id, xfer->rx.buf, xfer->rx.len, ts); 227 226 228 227 trace_scmi_rx_done(xfer->transfer_id, xfer->hdr.id, 229 228 xfer->hdr.protocol_id, xfer->hdr.seq, ··· 397 392 info->desc->ops->mark_txdone(cinfo, ret); 398 393 399 394 trace_scmi_xfer_end(xfer->transfer_id, xfer->hdr.id, 400 - xfer->hdr.protocol_id, xfer->hdr.seq, 401 - xfer->hdr.status); 395 + xfer->hdr.protocol_id, xfer->hdr.seq, ret); 402 396 403 397 return ret; 404 398 } ··· 793 789 if (ret) 794 790 return ret; 795 791 792 + if (scmi_notification_init(handle)) 793 + dev_err(dev, "SCMI Notifications NOT available.\n"); 794 + 796 795 ret = scmi_base_protocol_init(handle); 797 796 if (ret) { 798 797 dev_err(dev, "unable to communicate with SCMI(%d)\n", ret); ··· 837 830 int ret = 0; 838 831 struct scmi_info *info = platform_get_drvdata(pdev); 839 832 struct idr *idr = &info->tx_idr; 833 + 834 + scmi_notification_exit(&info->handle); 840 835 841 836 mutex_lock(&scmi_list_mutex); 842 837 if (info->users) ··· 910 901 /* Each compatible listed below must have descriptor associated with it */ 911 902 static const struct of_device_id scmi_of_match[] = { 912 903 { .compatible = "arm,scmi", .data = &scmi_mailbox_desc }, 913 - #ifdef CONFIG_ARM_PSCI_FW 904 + #ifdef 
CONFIG_HAVE_ARM_SMCCC_DISCOVERY 914 905 { .compatible = "arm,scmi-smc", .data = &scmi_smc_desc}, 915 906 #endif 916 907 { /* Sentinel */ },
+1526
drivers/firmware/arm_scmi/notify.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * System Control and Management Interface (SCMI) Notification support 4 + * 5 + * Copyright (C) 2020 ARM Ltd. 6 + */ 7 + /** 8 + * DOC: Theory of operation 9 + * 10 + * SCMI Protocol specification allows the platform to signal events to 11 + * interested agents via notification messages: this is an implementation 12 + * of the dispatch and delivery of such notifications to the interested users 13 + * inside the Linux kernel. 14 + * 15 + * An SCMI Notification core instance is initialized for each active platform 16 + * instance identified by the means of the usual &struct scmi_handle. 17 + * 18 + * Each SCMI Protocol implementation, during its initialization, registers with 19 + * this core its set of supported events using scmi_register_protocol_events(): 20 + * all the needed descriptors are stored in the &struct registered_protocols and 21 + * &struct registered_events arrays. 22 + * 23 + * Kernel users interested in some specific event can register their callbacks 24 + * providing the usual notifier_block descriptor, since this core implements 25 + * events' delivery using the standard Kernel notification chains machinery. 26 + * 27 + * Given the number of possible events defined by SCMI and the extensibility 28 + * of the SCMI Protocol itself, the underlying notification chains are created 29 + * and destroyed dynamically on demand depending on the number of users 30 + * effectively registered for an event, so that no support structures or chains 31 + * are allocated until at least one user has registered a notifier_block for 32 + * such event. Similarly, events' generation itself is enabled at the platform 33 + * level only after at least one user has registered, and it is shutdown after 34 + * the last user for that event has gone. 35 + * 36 + * All users provided callbacks and allocated notification-chains are stored in 37 + * the @registered_events_handlers hashtable. 
Callbacks' registration requests 38 + * for still to be registered events are instead kept in the dedicated common 39 + * hashtable @pending_events_handlers. 40 + * 41 + * An event is identified uniquely by the tuple (proto_id, evt_id, src_id) 42 + * and is served by its own dedicated notification chain; information contained 43 + * in such tuples is used, in a few different ways, to generate the needed 44 + * hash-keys. 45 + * 46 + * Here proto_id and evt_id are simply the protocol_id and message_id numbers 47 + * as described in the SCMI Protocol specification, while src_id represents an 48 + * optional, protocol dependent, source identifier (like domain_id, perf_id 49 + * or sensor_id and so forth). 50 + * 51 + * Upon reception of a notification message from the platform the SCMI RX ISR 52 + * passes the received message payload and some ancillary information (including 53 + * an arrival timestamp in nanoseconds) to the core via @scmi_notify() which 54 + * pushes the event-data itself on a protocol-dedicated kfifo queue for further 55 + * deferred processing as specified in @scmi_events_dispatcher(). 56 + * 57 + * Each protocol has its own dedicated work_struct and worker which, once kicked 58 + * by the ISR, takes care to empty its own dedicated queue, delivering the 59 + * queued items into the proper notification-chain: notification processing can 60 + * proceed concurrently on distinct workers only between events belonging to 61 + * different protocols while delivery of events within the same protocol is 62 + * still strictly sequentially ordered by time of arrival. 
63 + * 64 + * Events' information is then extracted from the SCMI Notification messages and 65 + * conveyed, converted into a custom per-event report struct, as the void *data 66 + * param to the user callback provided by the registered notifier_block, so that 67 + * from the user's perspective the callback is invoked as: 68 + * 69 + * int user_cb(struct notifier_block *nb, unsigned long event_id, void *report) 70 + * 71 + */ 72 + 73 + #define dev_fmt(fmt) "SCMI Notifications - " fmt 74 + #define pr_fmt(fmt) "SCMI Notifications - " fmt 75 + 76 + #include <linux/bitfield.h> 77 + #include <linux/bug.h> 78 + #include <linux/compiler.h> 79 + #include <linux/device.h> 80 + #include <linux/err.h> 81 + #include <linux/hashtable.h> 82 + #include <linux/kernel.h> 83 + #include <linux/ktime.h> 84 + #include <linux/kfifo.h> 85 + #include <linux/list.h> 86 + #include <linux/mutex.h> 87 + #include <linux/notifier.h> 88 + #include <linux/refcount.h> 89 + #include <linux/scmi_protocol.h> 90 + #include <linux/slab.h> 91 + #include <linux/types.h> 92 + #include <linux/workqueue.h> 93 + 94 + #include "notify.h" 95 + 96 + #define SCMI_MAX_PROTO 256 97 + 98 + #define PROTO_ID_MASK GENMASK(31, 24) 99 + #define EVT_ID_MASK GENMASK(23, 16) 100 + #define SRC_ID_MASK GENMASK(15, 0) 101 + 102 + /* 103 + * Builds an unsigned 32bit key from the given input tuple to be used 104 + * as a key in hashtables. 105 + */ 106 + #define MAKE_HASH_KEY(p, e, s) \ 107 + (FIELD_PREP(PROTO_ID_MASK, (p)) | \ 108 + FIELD_PREP(EVT_ID_MASK, (e)) | \ 109 + FIELD_PREP(SRC_ID_MASK, (s))) 110 + 111 + #define MAKE_ALL_SRCS_KEY(p, e) MAKE_HASH_KEY((p), (e), SRC_ID_MASK) 112 + 113 + /* 114 + * Assumes that the stored obj includes its own hash-key in a field named 'key': 115 + * with this simplification this macro can be equally used for all the objects' 116 + * types hashed by this implementation. 
117 + * 118 + * @__ht: The hashtable name 119 + * @__obj: A pointer to the object type to be retrieved from the hashtable; 120 + * it will be used as a cursor while scanning the hashtable and it may 121 + * possibly be left as NULL when @__k is not found 122 + * @__k: The key to search for 123 + */ 124 + #define KEY_FIND(__ht, __obj, __k) \ 125 + ({ \ 126 + typeof(__k) k_ = __k; \ 127 + typeof(__obj) obj_; \ 128 + \ 129 + hash_for_each_possible((__ht), obj_, hash, k_) \ 130 + if (obj_->key == k_) \ 131 + break; \ 132 + __obj = obj_; \ 133 + }) 134 + 135 + #define KEY_XTRACT_PROTO_ID(key) FIELD_GET(PROTO_ID_MASK, (key)) 136 + #define KEY_XTRACT_EVT_ID(key) FIELD_GET(EVT_ID_MASK, (key)) 137 + #define KEY_XTRACT_SRC_ID(key) FIELD_GET(SRC_ID_MASK, (key)) 138 + 139 + /* 140 + * A set of macros used to access safely @registered_protocols and 141 + * @registered_events arrays; these are fixed in size and each entry is possibly 142 + * populated at protocols' registration time and then only read but NEVER 143 + * modified or removed. 
144 + */ 145 + #define SCMI_GET_PROTO(__ni, __pid) \ 146 + ({ \ 147 + typeof(__ni) ni_ = __ni; \ 148 + struct scmi_registered_events_desc *__pd = NULL; \ 149 + \ 150 + if (ni_) \ 151 + __pd = READ_ONCE(ni_->registered_protocols[(__pid)]); \ 152 + __pd; \ 153 + }) 154 + 155 + #define SCMI_GET_REVT_FROM_PD(__pd, __eid) \ 156 + ({ \ 157 + typeof(__pd) pd_ = __pd; \ 158 + typeof(__eid) eid_ = __eid; \ 159 + struct scmi_registered_event *__revt = NULL; \ 160 + \ 161 + if (pd_ && eid_ < pd_->num_events) \ 162 + __revt = READ_ONCE(pd_->registered_events[eid_]); \ 163 + __revt; \ 164 + }) 165 + 166 + #define SCMI_GET_REVT(__ni, __pid, __eid) \ 167 + ({ \ 168 + struct scmi_registered_event *__revt; \ 169 + struct scmi_registered_events_desc *__pd; \ 170 + \ 171 + __pd = SCMI_GET_PROTO((__ni), (__pid)); \ 172 + __revt = SCMI_GET_REVT_FROM_PD(__pd, (__eid)); \ 173 + __revt; \ 174 + }) 175 + 176 + /* A couple of utility macros to limit cruft when calling protocols' helpers */ 177 + #define REVT_NOTIFY_SET_STATUS(revt, eid, sid, state) \ 178 + ({ \ 179 + typeof(revt) r = revt; \ 180 + r->proto->ops->set_notify_enabled(r->proto->ni->handle, \ 181 + (eid), (sid), (state)); \ 182 + }) 183 + 184 + #define REVT_NOTIFY_ENABLE(revt, eid, sid) \ 185 + REVT_NOTIFY_SET_STATUS((revt), (eid), (sid), true) 186 + 187 + #define REVT_NOTIFY_DISABLE(revt, eid, sid) \ 188 + REVT_NOTIFY_SET_STATUS((revt), (eid), (sid), false) 189 + 190 + #define REVT_FILL_REPORT(revt, ...) 
\ 191 + ({ \ 192 + typeof(revt) r = revt; \ 193 + r->proto->ops->fill_custom_report(r->proto->ni->handle, \ 194 + __VA_ARGS__); \ 195 + }) 196 + 197 + #define SCMI_PENDING_HASH_SZ 4 198 + #define SCMI_REGISTERED_HASH_SZ 6 199 + 200 + struct scmi_registered_events_desc; 201 + 202 + /** 203 + * struct scmi_notify_instance - Represents an instance of the notification 204 + * core 205 + * @gid: GroupID used for devres 206 + * @handle: A reference to the platform instance 207 + * @init_work: A work item to perform final initializations of pending handlers 208 + * @notify_wq: A reference to the allocated Kernel cmwq 209 + * @pending_mtx: A mutex to protect @pending_events_handlers 210 + * @registered_protocols: A statically allocated array containing pointers to 211 + * all the registered protocol-level specific information 212 + * related to events' handling 213 + * @pending_events_handlers: An hashtable containing all pending events' 214 + * handlers descriptors 215 + * 216 + * Each platform instance, represented by a handle, has its own instance of 217 + * the notification subsystem represented by this structure. 218 + */ 219 + struct scmi_notify_instance { 220 + void *gid; 221 + struct scmi_handle *handle; 222 + struct work_struct init_work; 223 + struct workqueue_struct *notify_wq; 224 + /* lock to protect pending_events_handlers */ 225 + struct mutex pending_mtx; 226 + struct scmi_registered_events_desc **registered_protocols; 227 + DECLARE_HASHTABLE(pending_events_handlers, SCMI_PENDING_HASH_SZ); 228 + }; 229 + 230 + /** 231 + * struct events_queue - Describes a queue and its associated worker 232 + * @sz: Size in bytes of the related kfifo 233 + * @kfifo: A dedicated Kernel kfifo descriptor 234 + * @notify_work: A custom work item bound to this queue 235 + * @wq: A reference to the associated workqueue 236 + * 237 + * Each protocol has its own dedicated events_queue descriptor. 
238 + */ 239 + struct events_queue { 240 + size_t sz; 241 + struct kfifo kfifo; 242 + struct work_struct notify_work; 243 + struct workqueue_struct *wq; 244 + }; 245 + 246 + /** 247 + * struct scmi_event_header - A utility header 248 + * @timestamp: The timestamp, in nanoseconds (boottime), which was associated 249 + * to this event as soon as it entered the SCMI RX ISR 250 + * @payld_sz: Effective size of the embedded message payload which follows 251 + * @evt_id: Event ID (corresponds to the Event MsgID for this Protocol) 252 + * @payld: A reference to the embedded event payload 253 + * 254 + * This header is prepended to each received event message payload before 255 + * queueing it on the related &struct events_queue. 256 + */ 257 + struct scmi_event_header { 258 + ktime_t timestamp; 259 + size_t payld_sz; 260 + unsigned char evt_id; 261 + unsigned char payld[]; 262 + }; 263 + 264 + struct scmi_registered_event; 265 + 266 + /** 267 + * struct scmi_registered_events_desc - Protocol Specific information 268 + * @id: Protocol ID 269 + * @ops: Protocol specific and event-related operations 270 + * @equeue: The embedded per-protocol events_queue 271 + * @ni: A reference to the initialized instance descriptor 272 + * @eh: A reference to pre-allocated buffer to be used as a scratch area by the 273 + * deferred worker when fetching data from the kfifo 274 + * @eh_sz: Size of the pre-allocated buffer @eh 275 + * @in_flight: A reference to an in flight &struct scmi_registered_event 276 + * @num_events: Number of events in @registered_events 277 + * @registered_events: A dynamically allocated array holding all the registered 278 + * events' descriptors, whose fixed-size is determined at 279 + * compile time. 
280 + * @registered_mtx: A mutex to protect @registered_events_handlers 281 + * @registered_events_handlers: An hashtable containing all events' handlers 282 + * descriptors registered for this protocol 283 + * 284 + * All protocols that register at least one event have their protocol-specific 285 + * information stored here, together with the embedded allocated events_queue. 286 + * These descriptors are stored in the @registered_protocols array at protocol 287 + * registration time. 288 + * 289 + * Once these descriptors are successfully registered, they are NEVER again 290 + * removed or modified since protocols do not unregister ever, so that, once 291 + * we safely grab a NON-NULL reference from the array we can keep it and use it. 292 + */ 293 + struct scmi_registered_events_desc { 294 + u8 id; 295 + const struct scmi_event_ops *ops; 296 + struct events_queue equeue; 297 + struct scmi_notify_instance *ni; 298 + struct scmi_event_header *eh; 299 + size_t eh_sz; 300 + void *in_flight; 301 + int num_events; 302 + struct scmi_registered_event **registered_events; 303 + /* mutex to protect registered_events_handlers */ 304 + struct mutex registered_mtx; 305 + DECLARE_HASHTABLE(registered_events_handlers, SCMI_REGISTERED_HASH_SZ); 306 + }; 307 + 308 + /** 309 + * struct scmi_registered_event - Event Specific Information 310 + * @proto: A reference to the associated protocol descriptor 311 + * @evt: A reference to the associated event descriptor (as provided at 312 + * registration time) 313 + * @report: A pre-allocated buffer used by the deferred worker to fill a 314 + * customized event report 315 + * @num_sources: The number of possible sources for this event as stated at 316 + * events' registration time 317 + * @sources: A reference to a dynamically allocated array used to refcount the 318 + * events' enable requests for all the existing sources 319 + * @sources_mtx: A mutex to serialize the access to @sources 320 + * 321 + * All registered events are 
represented by one of these structures that are 322 + * stored in the @registered_events array at protocol registration time. 323 + * 324 + * Once these descriptors are successfully registered, they are NEVER again 325 + * removed or modified since protocols do not unregister ever, so that once we 326 + * safely grab a NON-NULL reference from the table we can keep it and use it. 327 + */ 328 + struct scmi_registered_event { 329 + struct scmi_registered_events_desc *proto; 330 + const struct scmi_event *evt; 331 + void *report; 332 + u32 num_sources; 333 + refcount_t *sources; 334 + /* locking to serialize the access to sources */ 335 + struct mutex sources_mtx; 336 + }; 337 + 338 + /** 339 + * struct scmi_event_handler - Event handler information 340 + * @key: The used hashkey 341 + * @users: A reference count for number of active users for this handler 342 + * @r_evt: A reference to the associated registered event; when this is NULL 343 + * this handler is pending, which means that identifies a set of 344 + * callbacks intended to be attached to an event which is still not 345 + * known nor registered by any protocol at that point in time 346 + * @chain: The notification chain dedicated to this specific event tuple 347 + * @hash: The hlist_node used for collision handling 348 + * @enabled: A boolean which records if event's generation has been already 349 + * enabled for this handler as a whole 350 + * 351 + * This structure collects all the information needed to process a received 352 + * event identified by the tuple (proto_id, evt_id, src_id). 353 + * These descriptors are stored in a per-protocol @registered_events_handlers 354 + * table using as a key a value derived from that tuple. 
355 + */ 356 + struct scmi_event_handler { 357 + u32 key; 358 + refcount_t users; 359 + struct scmi_registered_event *r_evt; 360 + struct blocking_notifier_head chain; 361 + struct hlist_node hash; 362 + bool enabled; 363 + }; 364 + 365 + #define IS_HNDL_PENDING(hndl) (!(hndl)->r_evt) 366 + 367 + static struct scmi_event_handler * 368 + scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key); 369 + static void scmi_put_active_handler(struct scmi_notify_instance *ni, 370 + struct scmi_event_handler *hndl); 371 + static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni, 372 + struct scmi_event_handler *hndl); 373 + 374 + /** 375 + * scmi_lookup_and_call_event_chain() - Lookup the proper chain and call it 376 + * @ni: A reference to the notification instance to use 377 + * @evt_key: The key to use to lookup the related notification chain 378 + * @report: The customized event-specific report to pass down to the callbacks 379 + * as their *data parameter. 380 + */ 381 + static inline void 382 + scmi_lookup_and_call_event_chain(struct scmi_notify_instance *ni, 383 + u32 evt_key, void *report) 384 + { 385 + int ret; 386 + struct scmi_event_handler *hndl; 387 + 388 + /* 389 + * Here ensure the event handler cannot vanish while using it. 390 + * It is legitimate, though, for an handler not to be found at all here, 391 + * e.g. when it has been unregistered by the user after some events had 392 + * already been queued. 393 + */ 394 + hndl = scmi_get_active_handler(ni, evt_key); 395 + if (!hndl) 396 + return; 397 + 398 + ret = blocking_notifier_call_chain(&hndl->chain, 399 + KEY_XTRACT_EVT_ID(evt_key), 400 + report); 401 + /* Notifiers are NOT supposed to cut the chain ... 
*/ 402 + WARN_ON_ONCE(ret & NOTIFY_STOP_MASK); 403 + 404 + scmi_put_active_handler(ni, hndl); 405 + } 406 + 407 + /** 408 + * scmi_process_event_header() - Dequeue and process an event header 409 + * @eq: The queue to use 410 + * @pd: The protocol descriptor to use 411 + * 412 + * Read an event header from the protocol queue into the dedicated scratch 413 + * buffer and looks for a matching registered event; in case an anomalously 414 + * sized read is detected just flush the queue. 415 + * 416 + * Return: 417 + * * a reference to the matching registered event when found 418 + * * ERR_PTR(-EINVAL) when NO registered event could be found 419 + * * NULL when the queue is empty 420 + */ 421 + static inline struct scmi_registered_event * 422 + scmi_process_event_header(struct events_queue *eq, 423 + struct scmi_registered_events_desc *pd) 424 + { 425 + unsigned int outs; 426 + struct scmi_registered_event *r_evt; 427 + 428 + outs = kfifo_out(&eq->kfifo, pd->eh, 429 + sizeof(struct scmi_event_header)); 430 + if (!outs) 431 + return NULL; 432 + if (outs != sizeof(struct scmi_event_header)) { 433 + dev_err(pd->ni->handle->dev, "corrupted EVT header. Flush.\n"); 434 + kfifo_reset_out(&eq->kfifo); 435 + return NULL; 436 + } 437 + 438 + r_evt = SCMI_GET_REVT_FROM_PD(pd, pd->eh->evt_id); 439 + if (!r_evt) 440 + r_evt = ERR_PTR(-EINVAL); 441 + 442 + return r_evt; 443 + } 444 + 445 + /** 446 + * scmi_process_event_payload() - Dequeue and process an event payload 447 + * @eq: The queue to use 448 + * @pd: The protocol descriptor to use 449 + * @r_evt: The registered event descriptor to use 450 + * 451 + * Read an event payload from the protocol queue into the dedicated scratch 452 + * buffer, fills a custom report and then look for matching event handlers and 453 + * call them; skip any unknown event (as marked by scmi_process_event_header()) 454 + * and in case an anomalously sized read is detected just flush the queue. 
455 + * 456 + * Return: False when the queue is empty 457 + */ 458 + static inline bool 459 + scmi_process_event_payload(struct events_queue *eq, 460 + struct scmi_registered_events_desc *pd, 461 + struct scmi_registered_event *r_evt) 462 + { 463 + u32 src_id, key; 464 + unsigned int outs; 465 + void *report = NULL; 466 + 467 + outs = kfifo_out(&eq->kfifo, pd->eh->payld, pd->eh->payld_sz); 468 + if (!outs) 469 + return false; 470 + 471 + /* Any in-flight event has now been officially processed */ 472 + pd->in_flight = NULL; 473 + 474 + if (outs != pd->eh->payld_sz) { 475 + dev_err(pd->ni->handle->dev, "corrupted EVT Payload. Flush.\n"); 476 + kfifo_reset_out(&eq->kfifo); 477 + return false; 478 + } 479 + 480 + if (IS_ERR(r_evt)) { 481 + dev_warn(pd->ni->handle->dev, 482 + "SKIP UNKNOWN EVT - proto:%X evt:%d\n", 483 + pd->id, pd->eh->evt_id); 484 + return true; 485 + } 486 + 487 + report = REVT_FILL_REPORT(r_evt, pd->eh->evt_id, pd->eh->timestamp, 488 + pd->eh->payld, pd->eh->payld_sz, 489 + r_evt->report, &src_id); 490 + if (!report) { 491 + dev_err(pd->ni->handle->dev, 492 + "report not available - proto:%X evt:%d\n", 493 + pd->id, pd->eh->evt_id); 494 + return true; 495 + } 496 + 497 + /* At first search for a generic ALL src_ids handler... */ 498 + key = MAKE_ALL_SRCS_KEY(pd->id, pd->eh->evt_id); 499 + scmi_lookup_and_call_event_chain(pd->ni, key, report); 500 + 501 + /* ...then search for any specific src_id */ 502 + key = MAKE_HASH_KEY(pd->id, pd->eh->evt_id, src_id); 503 + scmi_lookup_and_call_event_chain(pd->ni, key, report); 504 + 505 + return true; 506 + } 507 + 508 + /** 509 + * scmi_events_dispatcher() - Common worker logic for all work items. 510 + * @work: The work item to use, which is associated to a dedicated events_queue 511 + * 512 + * Logic: 513 + * 1. dequeue one pending RX notification (queued in SCMI RX ISR context) 514 + * 2. generate a custom event report from the received event message 515 + * 3. 
lookup for any registered ALL_SRC_IDs handler: 516 + * - > call the related notification chain passing in the report 517 + * 4. lookup for any registered specific SRC_ID handler: 518 + * - > call the related notification chain passing in the report 519 + * 520 + * Note that: 521 + * * a dedicated per-protocol kfifo queue is used: in this way an anomalous 522 + * flood of events cannot saturate other protocols' queues. 523 + * * each per-protocol queue is associated to a distinct work_item, which 524 + * means, in turn, that: 525 + * + all protocols can process their dedicated queues concurrently 526 + * (since notify_wq:max_active != 1) 527 + * + anyway at most one worker instance is allowed to run on the same queue 528 + * concurrently: this ensures that we can have only one concurrent 529 + * reader/writer on the associated kfifo, so that we can use it lock-less 530 + * 531 + * Context: Process context. 532 + */ 533 + static void scmi_events_dispatcher(struct work_struct *work) 534 + { 535 + struct events_queue *eq; 536 + struct scmi_registered_events_desc *pd; 537 + struct scmi_registered_event *r_evt; 538 + 539 + eq = container_of(work, struct events_queue, notify_work); 540 + pd = container_of(eq, struct scmi_registered_events_desc, equeue); 541 + /* 542 + * In order to keep the queue lock-less and the number of memcopies 543 + * to the bare minimum needed, the dispatcher accounts for the 544 + * possibility of per-protocol in-flight events: i.e. an event whose 545 + * reception could end up being split across two subsequent runs of this 546 + * worker, first the header, then the payload. 
	 */
	do {
		if (!pd->in_flight) {
			r_evt = scmi_process_event_header(eq, pd);
			if (!r_evt)
				break;
			pd->in_flight = r_evt;
		} else {
			r_evt = pd->in_flight;
		}
	} while (scmi_process_event_payload(eq, pd, r_evt));
}

/**
 * scmi_notify() - Queues a notification for further deferred processing
 * @handle: The handle identifying the platform instance from which the
 *          dispatched event is generated
 * @proto_id: Protocol ID
 * @evt_id: Event ID (msgID)
 * @buf: Event Message Payload (without the header)
 * @len: Event Message Payload size
 * @ts: RX Timestamp in nanoseconds (boottime)
 *
 * Context: Called in interrupt context to queue a received event for
 * deferred processing.
 *
 * Return: 0 on Success
 */
int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
		const void *buf, size_t len, ktime_t ts)
{
	struct scmi_registered_event *r_evt;
	struct scmi_event_header eh;
	struct scmi_notify_instance *ni;

	/* Ensure notify_priv is updated */
	smp_rmb();
	if (!handle->notify_priv)
		return 0;
	ni = handle->notify_priv;

	r_evt = SCMI_GET_REVT(ni, proto_id, evt_id);
	if (!r_evt)
		return -EINVAL;

	if (len > r_evt->evt->max_payld_sz) {
		dev_err(handle->dev, "discard badly sized message\n");
		return -EINVAL;
	}
	if (kfifo_avail(&r_evt->proto->equeue.kfifo) < sizeof(eh) + len) {
		dev_warn(handle->dev,
			 "queue full, dropping proto_id:%d evt_id:%d ts:%lld\n",
			 proto_id, evt_id, ktime_to_ns(ts));
		return -ENOMEM;
	}

	eh.timestamp = ts;
	eh.evt_id = evt_id;
	eh.payld_sz = len;
	/*
	 * Header and payload are enqueued with two distinct kfifo_in() calls
	 * (so non-atomically), but this situation is handled properly on the
	 * consumer side with in-flight events tracking.
	 */
	kfifo_in(&r_evt->proto->equeue.kfifo, &eh, sizeof(eh));
	kfifo_in(&r_evt->proto->equeue.kfifo, buf, len);
	/*
	 * The return value is deliberately ignored here since we just want to
	 * ensure that a work item is queued whenever some entries have been
	 * pushed onto the kfifo:
	 * - if the work was already queued, queuing a new one simply fails,
	 *   since it is not needed
	 * - if the work was not queued already, it will be now, even if it
	 *   was in fact already running: this behavior avoids any possible
	 *   race when this function pushes new items onto the kfifos after
	 *   the related executing worker had already determined the kfifo to
	 *   be empty and was terminating.
	 */
	queue_work(r_evt->proto->equeue.wq,
		   &r_evt->proto->equeue.notify_work);

	return 0;
}

/**
 * scmi_kfifo_free() - Devres action helper to free the kfifo
 * @kfifo: The kfifo to free
 */
static void scmi_kfifo_free(void *kfifo)
{
	kfifo_free((struct kfifo *)kfifo);
}

/**
 * scmi_initialize_events_queue() - Allocate/Initialize a kfifo buffer
 * @ni: A reference to the notification instance to use
 * @equeue: The events_queue to initialize
 * @sz: Size of the kfifo buffer to allocate
 *
 * Allocate a buffer for the kfifo and initialize it.
 *
 * Return: 0 on Success
 */
static int scmi_initialize_events_queue(struct scmi_notify_instance *ni,
					struct events_queue *equeue, size_t sz)
{
	int ret;

	if (kfifo_alloc(&equeue->kfifo, sz, GFP_KERNEL))
		return -ENOMEM;
	/* Size could have been rounded up to a power of two */
	equeue->sz = kfifo_size(&equeue->kfifo);

	ret = devm_add_action_or_reset(ni->handle->dev, scmi_kfifo_free,
				       &equeue->kfifo);
	if (ret)
		return ret;

	INIT_WORK(&equeue->notify_work, scmi_events_dispatcher);
	equeue->wq = ni->notify_wq;

	return ret;
}

/**
 * scmi_allocate_registered_events_desc() - Allocate a registered events'
 * descriptor
 * @ni: A reference to the &struct scmi_notify_instance notification instance
 *      to use
 * @proto_id: Protocol ID
 * @queue_sz: Size of the associated queue to allocate
 * @eh_sz: Size of the event header scratch area to pre-allocate
 * @num_events: Number of events to support (size of @registered_events)
 * @ops: Pointer to a struct holding references to protocol specific helpers
 *       needed during events handling
 *
 * It is supposed to be called only once for each protocol at protocol
 * initialization time, so it warns if the requested protocol is found already
 * registered.
 *
 * Return: The allocated and registered descriptor on Success
 */
static struct scmi_registered_events_desc *
scmi_allocate_registered_events_desc(struct scmi_notify_instance *ni,
				     u8 proto_id, size_t queue_sz,
				     size_t eh_sz, int num_events,
				     const struct scmi_event_ops *ops)
{
	int ret;
	struct scmi_registered_events_desc *pd;

	/* Ensure protocols are up to date */
	smp_rmb();
	if (WARN_ON(ni->registered_protocols[proto_id]))
		return ERR_PTR(-EINVAL);

	pd = devm_kzalloc(ni->handle->dev, sizeof(*pd), GFP_KERNEL);
	if (!pd)
		return ERR_PTR(-ENOMEM);
	pd->id = proto_id;
	pd->ops = ops;
	pd->ni = ni;

	ret = scmi_initialize_events_queue(ni, &pd->equeue, queue_sz);
	if (ret)
		return ERR_PTR(ret);

	pd->eh = devm_kzalloc(ni->handle->dev, eh_sz, GFP_KERNEL);
	if (!pd->eh)
		return ERR_PTR(-ENOMEM);
	pd->eh_sz = eh_sz;

	pd->registered_events = devm_kcalloc(ni->handle->dev, num_events,
					     sizeof(char *), GFP_KERNEL);
	if (!pd->registered_events)
		return ERR_PTR(-ENOMEM);
	pd->num_events = num_events;

	/* Initialize per protocol handlers table */
	mutex_init(&pd->registered_mtx);
	hash_init(pd->registered_events_handlers);

	return pd;
}

/**
 * scmi_register_protocol_events() - Register Protocol Events with the core
 * @handle: The handle identifying the platform instance against which the
 *          protocol's events are registered
 * @proto_id: Protocol ID
 * @queue_sz: Size in bytes of the associated queue to be allocated
 * @ops: Protocol specific event-related operations
 * @evt: Event descriptor array
 * @num_events: Number of events in @evt array
 * @num_sources: Number of possible sources for this protocol on this
 *               platform.
 *
 * Used by SCMI Protocols initialization code to register with the notification
 * core the list of supported events and their descriptors: takes care to
 * pre-allocate and store all needed descriptors, scratch buffers and event
 * queues.
 *
 * Return: 0 on Success
 */
int scmi_register_protocol_events(const struct scmi_handle *handle,
				  u8 proto_id, size_t queue_sz,
				  const struct scmi_event_ops *ops,
				  const struct scmi_event *evt,
				  int num_events, int num_sources)
{
	int i;
	size_t payld_sz = 0;
	struct scmi_registered_events_desc *pd;
	struct scmi_notify_instance *ni;

	if (!ops || !evt)
		return -EINVAL;

	/* Ensure notify_priv is updated */
	smp_rmb();
	if (!handle->notify_priv)
		return -ENOMEM;
	ni = handle->notify_priv;

	/* Attach to the notification main devres group */
	if (!devres_open_group(ni->handle->dev, ni->gid, GFP_KERNEL))
		return -ENOMEM;

	for (i = 0; i < num_events; i++)
		payld_sz = max_t(size_t, payld_sz, evt[i].max_payld_sz);
	payld_sz += sizeof(struct scmi_event_header);

	pd = scmi_allocate_registered_events_desc(ni, proto_id, queue_sz,
						  payld_sz, num_events, ops);
	if (IS_ERR(pd))
		goto err;

	for (i = 0; i < num_events; i++, evt++) {
		struct scmi_registered_event *r_evt;

		r_evt = devm_kzalloc(ni->handle->dev, sizeof(*r_evt),
				     GFP_KERNEL);
		if (!r_evt)
			goto err;
		r_evt->proto = pd;
		r_evt->evt = evt;

		r_evt->sources = devm_kcalloc(ni->handle->dev, num_sources,
					      sizeof(refcount_t), GFP_KERNEL);
		if (!r_evt->sources)
			goto err;
		r_evt->num_sources = num_sources;
		mutex_init(&r_evt->sources_mtx);

		r_evt->report = devm_kzalloc(ni->handle->dev,
					     evt->max_report_sz, GFP_KERNEL);
		if (!r_evt->report)
			goto err;

		pd->registered_events[i] = r_evt;
		/* Ensure events are updated */
		smp_wmb();
		dev_dbg(handle->dev, "registered event - %lX\n",
			MAKE_ALL_SRCS_KEY(r_evt->proto->id, r_evt->evt->id));
	}

	/* Register protocol and events... it will never be removed */
	ni->registered_protocols[proto_id] = pd;
	/* Ensure protocols are updated */
	smp_wmb();

	devres_close_group(ni->handle->dev, ni->gid);

	/*
	 * Finalize any pending events' handlers which could have been waiting
	 * for this protocol's events registration.
	 */
	schedule_work(&ni->init_work);

	return 0;

err:
	dev_warn(handle->dev, "Proto:%X - Registration Failed !\n", proto_id);
	/* A failing protocol registration does not trigger full failure */
	devres_close_group(ni->handle->dev, ni->gid);

	return -ENOMEM;
}

/**
 * scmi_allocate_event_handler() - Allocate Event handler
 * @ni: A reference to the notification instance to use
 * @evt_key: 32bit key uniquely binding to the event identified by the tuple
 *           (proto_id, evt_id, src_id)
 *
 * Allocate an event handler and related notification chain associated with
 * the provided event handler key.
 * Note that, at this point, a related registered_event is still to be
 * associated with this handler descriptor (hndl->r_evt == NULL), so the
 * handler is initialized as pending.
 *
 * Context: Assumes to be called with @pending_mtx already acquired.
 * Return: The freshly allocated structure on Success
 */
static struct scmi_event_handler *
scmi_allocate_event_handler(struct scmi_notify_instance *ni, u32 evt_key)
{
	struct scmi_event_handler *hndl;

	hndl = kzalloc(sizeof(*hndl), GFP_KERNEL);
	if (!hndl)
		return NULL;
	hndl->key = evt_key;
	BLOCKING_INIT_NOTIFIER_HEAD(&hndl->chain);
	refcount_set(&hndl->users, 1);
	/* New handlers are created pending */
	hash_add(ni->pending_events_handlers, &hndl->hash, hndl->key);

	return hndl;
}

/**
 * scmi_free_event_handler() - Free the provided Event handler
 * @hndl: The event handler structure to free
 *
 * Context: Assumes to be called with proper locking acquired depending
 * on the situation.
 */
static void scmi_free_event_handler(struct scmi_event_handler *hndl)
{
	hash_del(&hndl->hash);
	kfree(hndl);
}

/**
 * scmi_bind_event_handler() - Helper to attempt binding a handler to an event
 * @ni: A reference to the notification instance to use
 * @hndl: The event handler to bind
 *
 * If an associated registered event is found, move the handler from the
 * pending table into the registered one.
 *
 * Context: Assumes to be called with @pending_mtx already acquired.
 *
 * Return: 0 on Success
 */
static inline int scmi_bind_event_handler(struct scmi_notify_instance *ni,
					  struct scmi_event_handler *hndl)
{
	struct scmi_registered_event *r_evt;

	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(hndl->key),
			      KEY_XTRACT_EVT_ID(hndl->key));
	if (!r_evt)
		return -EINVAL;

	/* Remove from pending and insert into registered */
	hash_del(&hndl->hash);
	hndl->r_evt = r_evt;
	mutex_lock(&r_evt->proto->registered_mtx);
	hash_add(r_evt->proto->registered_events_handlers,
		 &hndl->hash, hndl->key);
	mutex_unlock(&r_evt->proto->registered_mtx);

	return 0;
}

/**
 * scmi_valid_pending_handler() - Helper to check pending status of handlers
 * @ni: A reference to the notification instance to use
 * @hndl: The event handler to check
 *
 * A handler is considered pending when its r_evt == NULL, because the related
 * event was still unknown at handler's registration time; anyway, since all
 * protocols register their supported events once and for all at protocols'
 * initialization time, a pending handler cannot be considered valid anymore
 * if the underlying event (which it is waiting for) belongs to an already
 * initialized and registered protocol.
 *
 * Return: 0 on Success
 */
static inline int scmi_valid_pending_handler(struct scmi_notify_instance *ni,
					     struct scmi_event_handler *hndl)
{
	struct scmi_registered_events_desc *pd;

	if (!IS_HNDL_PENDING(hndl))
		return -EINVAL;

	pd = SCMI_GET_PROTO(ni, KEY_XTRACT_PROTO_ID(hndl->key));
	if (pd)
		return -EINVAL;

	return 0;
}

/**
 * scmi_register_event_handler() - Register an Event handler whenever possible
 * @ni: A reference to the notification instance to use
 * @hndl: The event handler to register
 *
 * At first try to bind an event handler to its associated event, then check
 * if it was at least a valid pending handler: if it was neither bound nor
 * valid, return an error.
 *
 * Valid pending incomplete bindings will be periodically retried by a
 * dedicated worker which is kicked each time a new protocol completes its
 * own registration phase.
 *
 * Context: Assumes to be called with @pending_mtx acquired.
 *
 * Return: 0 on Success
 */
static int scmi_register_event_handler(struct scmi_notify_instance *ni,
				       struct scmi_event_handler *hndl)
{
	int ret;

	ret = scmi_bind_event_handler(ni, hndl);
	if (!ret) {
		dev_dbg(ni->handle->dev, "registered NEW handler - key:%X\n",
			hndl->key);
	} else {
		ret = scmi_valid_pending_handler(ni, hndl);
		if (!ret)
			dev_dbg(ni->handle->dev,
				"registered PENDING handler - key:%X\n",
				hndl->key);
	}

	return ret;
}

/**
 * __scmi_event_handler_get_ops() - Utility to get or create an event handler
 * @ni: A reference to the notification instance to use
 * @evt_key: The event key to use
 * @create: A boolean flag to specify if a handler must be created when
 *          not already existent
 *
 * Search for the desired handler matching the key in both the per-protocol
 * registered table and the common pending table:
 * * if found, adjust the users refcount
 * * if not found and @create is true, create and register the new handler:
 *   the handler could end up being registered as pending if no matching
 *   event could be found.
 *
 * A handler is guaranteed to reside in one and only one of the tables at
 * any one time; to ensure this, the whole search and create is performed
 * holding the @pending_mtx lock, with @registered_mtx additionally acquired
 * if needed.
 *
 * Note that when a nested acquisition of these mutexes is needed the locking
 * order is always (same as in @init_work):
 * 1. pending_mtx
 * 2. registered_mtx
 *
 * Events generation is NOT enabled right after creation within this routine
 * since at creation time we usually want to have all setup and ready before
 * events really start flowing.
 *
 * Return: A properly refcounted handler on Success, NULL on Failure
 */
static inline struct scmi_event_handler *
__scmi_event_handler_get_ops(struct scmi_notify_instance *ni,
			     u32 evt_key, bool create)
{
	struct scmi_registered_event *r_evt;
	struct scmi_event_handler *hndl = NULL;

	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
			      KEY_XTRACT_EVT_ID(evt_key));

	mutex_lock(&ni->pending_mtx);
	/* Search registered events at first ... if possible at all */
	if (r_evt) {
		mutex_lock(&r_evt->proto->registered_mtx);
		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
				hndl, evt_key);
		if (hndl)
			refcount_inc(&hndl->users);
		mutex_unlock(&r_evt->proto->registered_mtx);
	}

	/* ...then amongst pending. */
	if (!hndl) {
		hndl = KEY_FIND(ni->pending_events_handlers, hndl, evt_key);
		if (hndl)
			refcount_inc(&hndl->users);
	}

	/* Create if still not found and required */
	if (!hndl && create) {
		hndl = scmi_allocate_event_handler(ni, evt_key);
		if (hndl && scmi_register_event_handler(ni, hndl)) {
			dev_dbg(ni->handle->dev,
				"purging UNKNOWN handler - key:%X\n",
				hndl->key);
			/* this hndl can be only a pending one */
			scmi_put_handler_unlocked(ni, hndl);
			hndl = NULL;
		}
	}
	mutex_unlock(&ni->pending_mtx);

	return hndl;
}

static struct scmi_event_handler *
scmi_get_handler(struct scmi_notify_instance *ni, u32 evt_key)
{
	return __scmi_event_handler_get_ops(ni, evt_key, false);
}

static struct scmi_event_handler *
scmi_get_or_create_handler(struct scmi_notify_instance *ni, u32 evt_key)
{
	return __scmi_event_handler_get_ops(ni, evt_key, true);
}

/**
 * scmi_get_active_handler() - Helper to get active handlers only
 * @ni: A reference to the notification instance to use
 * @evt_key: The event key to use
 *
 * Search for the desired handler matching the key only in the per-protocol
 * table of registered handlers: this is called only from the dispatching
 * path, so we want to be as quick as possible and do not care about pending
 * handlers.
 *
 * Return: A properly refcounted active handler
 */
static struct scmi_event_handler *
scmi_get_active_handler(struct scmi_notify_instance *ni, u32 evt_key)
{
	struct scmi_registered_event *r_evt;
	struct scmi_event_handler *hndl = NULL;

	r_evt = SCMI_GET_REVT(ni, KEY_XTRACT_PROTO_ID(evt_key),
			      KEY_XTRACT_EVT_ID(evt_key));
	if (r_evt) {
		mutex_lock(&r_evt->proto->registered_mtx);
		hndl = KEY_FIND(r_evt->proto->registered_events_handlers,
				hndl, evt_key);
		if (hndl)
			refcount_inc(&hndl->users);
		mutex_unlock(&r_evt->proto->registered_mtx);
	}

	return hndl;
}

/**
 * __scmi_enable_evt() - Enable/disable events generation
 * @r_evt: The registered event to act upon
 * @src_id: The src_id to act upon
 * @enable: The action to perform: true->Enable, false->Disable
 *
 * Takes care of proper refcounting while performing enable/disable: handles
 * the special case of ALL sources requests by itself.
 * Returns successfully if at least one of the required src_id has been
 * successfully enabled/disabled.
 *
 * Return: 0 on Success
 */
static inline int __scmi_enable_evt(struct scmi_registered_event *r_evt,
				    u32 src_id, bool enable)
{
	int retvals = 0;
	u32 num_sources;
	refcount_t *sid;

	if (src_id == SRC_ID_MASK) {
		src_id = 0;
		num_sources = r_evt->num_sources;
	} else if (src_id < r_evt->num_sources) {
		num_sources = 1;
	} else {
		return -EINVAL;
	}

	mutex_lock(&r_evt->sources_mtx);
	if (enable) {
		for (; num_sources; src_id++, num_sources--) {
			int ret = 0;

			sid = &r_evt->sources[src_id];
			if (refcount_read(sid) == 0) {
				ret = REVT_NOTIFY_ENABLE(r_evt,
							 r_evt->evt->id,
							 src_id);
				if (!ret)
					refcount_set(sid, 1);
			} else {
				refcount_inc(sid);
			}
			retvals += !ret;
		}
	} else {
		for (; num_sources; src_id++, num_sources--) {
			sid = &r_evt->sources[src_id];
			if (refcount_dec_and_test(sid))
				REVT_NOTIFY_DISABLE(r_evt,
						    r_evt->evt->id, src_id);
		}
		retvals = 1;
	}
	mutex_unlock(&r_evt->sources_mtx);

	return retvals ? 0 : -EINVAL;
}

static int scmi_enable_events(struct scmi_event_handler *hndl)
{
	int ret = 0;

	if (!hndl->enabled) {
		ret = __scmi_enable_evt(hndl->r_evt,
					KEY_XTRACT_SRC_ID(hndl->key), true);
		if (!ret)
			hndl->enabled = true;
	}

	return ret;
}

static int scmi_disable_events(struct scmi_event_handler *hndl)
{
	int ret = 0;

	if (hndl->enabled) {
		ret = __scmi_enable_evt(hndl->r_evt,
					KEY_XTRACT_SRC_ID(hndl->key), false);
		if (!ret)
			hndl->enabled = false;
	}

	return ret;
}

/**
 * scmi_put_handler_unlocked() - Put an event handler
 * @ni: A reference to the notification instance to use
 * @hndl: The event handler to act upon
 *
 * After having got exclusive access to the registered handlers hashtable,
 * update the refcount and, if @hndl is no longer in use by anyone:
 * * ask for events' generation disabling
 * * unregister and free the handler itself
 *
 * Context: Assumes all the proper locking has been managed by the caller.
 */
static void scmi_put_handler_unlocked(struct scmi_notify_instance *ni,
				      struct scmi_event_handler *hndl)
{
	if (refcount_dec_and_test(&hndl->users)) {
		if (!IS_HNDL_PENDING(hndl))
			scmi_disable_events(hndl);
		scmi_free_event_handler(hndl);
	}
}

static void scmi_put_handler(struct scmi_notify_instance *ni,
			     struct scmi_event_handler *hndl)
{
	struct scmi_registered_event *r_evt = hndl->r_evt;

	mutex_lock(&ni->pending_mtx);
	if (r_evt)
		mutex_lock(&r_evt->proto->registered_mtx);

	scmi_put_handler_unlocked(ni, hndl);

	if (r_evt)
		mutex_unlock(&r_evt->proto->registered_mtx);
	mutex_unlock(&ni->pending_mtx);
}

static void scmi_put_active_handler(struct scmi_notify_instance *ni,
				    struct scmi_event_handler *hndl)
{
	struct scmi_registered_event *r_evt = hndl->r_evt;

	mutex_lock(&r_evt->proto->registered_mtx);
	scmi_put_handler_unlocked(ni, hndl);
	mutex_unlock(&r_evt->proto->registered_mtx);
}

/**
 * scmi_event_handler_enable_events() - Enable events associated with a
 * handler
 * @hndl: The Event handler to act upon
 *
 * Return: 0 on Success
 */
static int scmi_event_handler_enable_events(struct scmi_event_handler *hndl)
{
	if (scmi_enable_events(hndl)) {
		pr_err("Failed to ENABLE events for key:%X !\n", hndl->key);
		return -EINVAL;
	}

	return 0;
}

/**
 * scmi_register_notifier() - Register a notifier_block for an event
 * @handle: The handle identifying the platform instance against which the
 *          callback is registered
 * @proto_id: Protocol ID
 * @evt_id: Event ID
 * @src_id: Source ID; when NULL, register for events coming from ALL
 *          possible sources
 * @nb: A standard notifier block to register for the specified event
 *
 * Generic helper to register a notifier_block against a protocol event.
 *
 * A notifier_block @nb will be registered for each distinct event identified
 * by the tuple (proto_id, evt_id, src_id) on a dedicated notification chain
 * so that:
 *
 *	(proto_X, evt_Y, src_Z) --> chain_X_Y_Z
 *
 * The @src_id meaning is protocol specific and identifies the origin of the
 * event (like domain_id, sensor_id and so forth).
 *
 * @src_id can be NULL to signify that the caller is interested in receiving
 * notifications from ALL the available sources for that protocol OR simply
 * that the protocol does not support distinct sources.
 *
 * As soon as one user for the specified tuple appears, a handler is created,
 * and that specific event's generation is enabled at the platform level,
 * unless an associated registered event is found missing, meaning that the
 * needed protocol is still to be initialized and the handler has just been
 * registered as still pending.
 *
 * Return: 0 on Success
 */
static int scmi_register_notifier(const struct scmi_handle *handle,
				  u8 proto_id, u8 evt_id, u32 *src_id,
				  struct notifier_block *nb)
{
	int ret = 0;
	u32 evt_key;
	struct scmi_event_handler *hndl;
	struct scmi_notify_instance *ni;

	/* Ensure notify_priv is updated */
	smp_rmb();
	if (!handle->notify_priv)
		return -ENODEV;
	ni = handle->notify_priv;

	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
				src_id ? *src_id : SRC_ID_MASK);
	hndl = scmi_get_or_create_handler(ni, evt_key);
	if (!hndl)
		return -EINVAL;

	blocking_notifier_chain_register(&hndl->chain, nb);

	/* Enable events for not pending handlers */
	if (!IS_HNDL_PENDING(hndl)) {
		ret = scmi_event_handler_enable_events(hndl);
		if (ret)
			scmi_put_handler(ni, hndl);
	}

	return ret;
}

/**
 * scmi_unregister_notifier() - Unregister a notifier_block for an event
 * @handle: The handle identifying the platform instance against which the
 *          callback is unregistered
 * @proto_id: Protocol ID
 * @evt_id: Event ID
 * @src_id: Source ID
 * @nb: The notifier_block to unregister
 *
 * Takes care to unregister the provided @nb from the notification chain
 * associated with the specified event and, if there are no more users for
 * the event handler, frees also the associated event handler structures.
 * (this could possibly cause disabling of event's generation at platform
 * level)
 *
 * Return: 0 on Success
 */
static int scmi_unregister_notifier(const struct scmi_handle *handle,
				    u8 proto_id, u8 evt_id, u32 *src_id,
				    struct notifier_block *nb)
{
	u32 evt_key;
	struct scmi_event_handler *hndl;
	struct scmi_notify_instance *ni;

	/* Ensure notify_priv is updated */
	smp_rmb();
	if (!handle->notify_priv)
		return -ENODEV;
	ni = handle->notify_priv;

	evt_key = MAKE_HASH_KEY(proto_id, evt_id,
				src_id ? *src_id : SRC_ID_MASK);
	hndl = scmi_get_handler(ni, evt_key);
	if (!hndl)
		return -EINVAL;

	/*
	 * Note that this chain unregistration call is safe on its own
	 * being internally protected by an rwsem.
	 */
	blocking_notifier_chain_unregister(&hndl->chain, nb);
	scmi_put_handler(ni, hndl);

	/*
	 * This balances the initial get issued in @scmi_register_notifier.
	 * If this notifier_block happened to be the last known user callback
	 * for this event, the handler is here freed and the event's
	 * generation stopped.
	 *
	 * Note that an ongoing concurrent lookup on the delivery workqueue
	 * path could still hold the refcount to 1 even after this routine
	 * completes: in such a case it will be the final put on the delivery
	 * path which will finally free this unused handler.
	 */
	scmi_put_handler(ni, hndl);

	return 0;
}

/**
 * scmi_protocols_late_init() - Worker for late initialization
 * @work: The work item to use, associated with the proper SCMI instance
 *
 * This kicks in whenever a new protocol has completed its own registration
 * via scmi_register_protocol_events(): it is in charge of scanning the table
 * of pending handlers (registered by users while the related protocol was
 * still not initialized) and finalizing their initialization whenever
 * possible; invalid pending handlers are purged at this point in time.
 */
static void scmi_protocols_late_init(struct work_struct *work)
{
	int bkt;
	struct scmi_event_handler *hndl;
	struct scmi_notify_instance *ni;
	struct hlist_node *tmp;

	ni = container_of(work, struct scmi_notify_instance, init_work);

	/* Ensure protocols and events are up to date */
	smp_rmb();

	mutex_lock(&ni->pending_mtx);
	hash_for_each_safe(ni->pending_events_handlers, bkt, tmp, hndl, hash) {
		int ret;

		ret = scmi_bind_event_handler(ni, hndl);
		if (!ret) {
			dev_dbg(ni->handle->dev,
				"finalized PENDING handler - key:%X\n",
				hndl->key);
			ret = scmi_event_handler_enable_events(hndl);
		} else {
			ret = scmi_valid_pending_handler(ni, hndl);
		}
		if (ret) {
			dev_dbg(ni->handle->dev,
				"purging PENDING handler - key:%X\n",
				hndl->key);
			/* this hndl can be only a pending one */
			scmi_put_handler_unlocked(ni, hndl);
		}
	}
	mutex_unlock(&ni->pending_mtx);
}

/*
 * notify_ops are attached to the handle so that they can be accessed
 * directly from an scmi_driver to register its own notifiers.
 */
static struct scmi_notify_ops notify_ops = {
	.register_event_notifier = scmi_register_notifier,
	.unregister_event_notifier = scmi_unregister_notifier,
};

/**
 * scmi_notification_init() - Initializes Notification Core Support
 * @handle: The handle identifying the platform instance to initialize
 *
 * This function lays out all the basic resources needed by the notification
 * core instance identified by the provided handle: once done, all of the
 * SCMI Protocols can register their events with the core during their own
 * initializations.
 *
 * Note that failing to initialize the core notifications support does not
 * cause the whole SCMI Protocols stack to fail its initialization.
 *
 * SCMI Notification Initialization happens in 2 steps:
 * * initialization: basic common allocations (this function)
 * * registration: protocols asynchronously come into life and register their
 *                 own supported list of events with the core; this causes
 *                 further per-protocol allocations
 *
 * Any user's callback registration attempt referring to a still unregistered
 * event will be registered as pending and finalized later (if possible)
 * by the scmi_protocols_late_init() work.
 * This allows for lazy initialization of SCMI Protocols due to late (or
 * missing) SCMI drivers' modules loading.
 *
 * Return: 0 on Success
 */
int scmi_notification_init(struct scmi_handle *handle)
{
	void *gid;
	struct scmi_notify_instance *ni;

	gid = devres_open_group(handle->dev, NULL, GFP_KERNEL);
	if (!gid)
		return -ENOMEM;

	ni = devm_kzalloc(handle->dev, sizeof(*ni), GFP_KERNEL);
	if (!ni)
		goto err;

	ni->gid = gid;
	ni->handle = handle;

	ni->notify_wq = alloc_workqueue("scmi_notify",
					WQ_UNBOUND | WQ_FREEZABLE | WQ_SYSFS,
					0);
	if (!ni->notify_wq)
		goto err;

	ni->registered_protocols = devm_kcalloc(handle->dev, SCMI_MAX_PROTO,
						sizeof(char *), GFP_KERNEL);
	if (!ni->registered_protocols)
		goto err;

	mutex_init(&ni->pending_mtx);
	hash_init(ni->pending_events_handlers);

	INIT_WORK(&ni->init_work, scmi_protocols_late_init);

	handle->notify_ops = &notify_ops;
	handle->notify_priv = ni;
	/* Ensure handle is up to date */
	smp_wmb();

	dev_info(handle->dev, "Core Enabled.\n");

	devres_close_group(handle->dev, ni->gid);

	return 0;

err:
	dev_warn(handle->dev, "Initialization Failed.\n");
	devres_release_group(handle->dev, NULL);
	return -ENOMEM;
}

/**
 * scmi_notification_exit() - Shutdown and clean Notification core
 * @handle: The handle identifying the platform instance to shutdown
 */
void scmi_notification_exit(struct scmi_handle *handle)
{
	struct scmi_notify_instance *ni;

	/* Ensure notify_priv is updated */
	smp_rmb();
	if (!handle->notify_priv)
		return;
	ni = handle->notify_priv;

	handle->notify_priv = NULL;
	/* Ensure handle is up to date */
	smp_wmb();

	/* Destroy while letting pending work complete */
	destroy_workqueue(ni->notify_wq);

	devres_release_group(ni->handle->dev, ni->gid);
}
+68
drivers/firmware/arm_scmi/notify.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * System Control and Management Interface (SCMI) Message Protocol
+ * notification header file containing some definitions, structures
+ * and function prototypes related to SCMI Notification handling.
+ *
+ * Copyright (C) 2020 ARM Ltd.
+ */
+#ifndef _SCMI_NOTIFY_H
+#define _SCMI_NOTIFY_H
+
+#include <linux/device.h>
+#include <linux/ktime.h>
+#include <linux/types.h>
+
+#define SCMI_PROTO_QUEUE_SZ	4096
+
+/**
+ * struct scmi_event - Describes an event to be supported
+ * @id: Event ID
+ * @max_payld_sz: Max possible size for the payload of a notification message
+ * @max_report_sz: Max possible size for the report of a notification message
+ *
+ * Each SCMI protocol, during its initialization phase, can describe the events
+ * it wishes to support in a few struct scmi_event and pass them to the core
+ * using scmi_register_protocol_events().
+ */
+struct scmi_event {
+	u8	id;
+	size_t	max_payld_sz;
+	size_t	max_report_sz;
+};
+
+/**
+ * struct scmi_event_ops - Protocol helpers called by the notification core.
+ * @set_notify_enabled: Enable/disable the required evt_id/src_id notifications
+ *			using the proper custom protocol commands.
+ *			Return 0 on Success
+ * @fill_custom_report: fills a custom event report from the provided
+ *			event message payld identifying the event
+ *			specific src_id.
+ *			Return NULL on failure otherwise @report now fully
+ *			populated
+ *
+ * Context: Helpers described in &struct scmi_event_ops are called only in
+ *	    process context.
+ */
+struct scmi_event_ops {
+	int (*set_notify_enabled)(const struct scmi_handle *handle,
+				  u8 evt_id, u32 src_id, bool enabled);
+	void *(*fill_custom_report)(const struct scmi_handle *handle,
+				    u8 evt_id, ktime_t timestamp,
+				    const void *payld, size_t payld_sz,
+				    void *report, u32 *src_id);
+};
+
+int scmi_notification_init(struct scmi_handle *handle);
+void scmi_notification_exit(struct scmi_handle *handle);
+
+int scmi_register_protocol_events(const struct scmi_handle *handle,
+				  u8 proto_id, size_t queue_sz,
+				  const struct scmi_event_ops *ops,
+				  const struct scmi_event *evt, int num_events,
+				  int num_sources);
+int scmi_notify(const struct scmi_handle *handle, u8 proto_id, u8 evt_id,
+		const void *buf, size_t len, ktime_t ts);
+
+#endif /* _SCMI_NOTIFY_H */
+146 -5
drivers/firmware/arm_scmi/perf.c
···
  * Copyright (C) 2018 ARM Ltd.
  */

+#define pr_fmt(fmt) "SCMI Notifications PERF - " fmt
+
 #include <linux/bits.h>
 #include <linux/of.h>
 #include <linux/io.h>
 #include <linux/io-64-nonatomic-hi-lo.h>
 #include <linux/platform_device.h>
 #include <linux/pm_opp.h>
+#include <linux/scmi_protocol.h>
 #include <linux/sort.h>

 #include "common.h"
+#include "notify.h"

 enum scmi_performance_protocol_cmd {
 	PERF_DOMAIN_ATTRIBUTES = 0x3,
···
 	PERF_NOTIFY_LIMITS = 0x9,
 	PERF_NOTIFY_LEVEL = 0xa,
 	PERF_DESCRIBE_FASTCHANNEL = 0xb,
-};
-
-enum scmi_performance_protocol_notify {
-	PERFORMANCE_LIMITS_CHANGED = 0x0,
-	PERFORMANCE_LEVEL_CHANGED = 0x1,
 };

 struct scmi_opp {
···
 struct scmi_perf_notify_level_or_limits {
 	__le32 domain;
 	__le32 notify_enable;
+};
+
+struct scmi_perf_limits_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 range_max;
+	__le32 range_min;
+};
+
+struct scmi_perf_level_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 performance_level;
 };

 struct scmi_msg_resp_perf_describe_levels {
···
 	u64 stats_addr;
 	u32 stats_size;
 	struct perf_dom_info *dom_info;
+};
+
+static enum scmi_performance_protocol_cmd evt_2_cmd[] = {
+	PERF_NOTIFY_LIMITS,
+	PERF_NOTIFY_LEVEL,
 };

 static int scmi_perf_attributes_get(const struct scmi_handle *handle,
···
 	return scmi_perf_mb_level_get(handle, domain, level, poll);
 }

+static int scmi_perf_level_limits_notify(const struct scmi_handle *handle,
+					 u32 domain, int message_id,
+					 bool enable)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_perf_notify_level_or_limits *notify;
+
+	ret = scmi_xfer_get_init(handle, message_id, SCMI_PROTOCOL_PERF,
+				 sizeof(*notify), 0, &t);
+	if (ret)
+		return ret;
+
+	notify = t->tx.buf;
+	notify->domain = cpu_to_le32(domain);
+	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
 static bool scmi_perf_fc_size_is_valid(u32 msg, u32 size)
 {
 	if ((msg == PERF_LEVEL_GET || msg == PERF_LEVEL_SET) && size == 4)
···
 	return ret;
 }

+static bool scmi_fast_switch_possible(const struct scmi_handle *handle,
+				      struct device *dev)
+{
+	struct perf_dom_info *dom;
+	struct scmi_perf_info *pi = handle->perf_priv;
+
+	dom = pi->dom_info + scmi_dev_domain_id(dev);
+
+	return dom->fc_info && dom->fc_info->level_set_addr;
+}
+
 static struct scmi_perf_ops perf_ops = {
 	.limits_set = scmi_perf_limits_set,
 	.limits_get = scmi_perf_limits_get,
···
 	.freq_set = scmi_dvfs_freq_set,
 	.freq_get = scmi_dvfs_freq_get,
 	.est_power_get = scmi_dvfs_est_power_get,
+	.fast_switch_possible = scmi_fast_switch_possible,
+};
+
+static int scmi_perf_set_notify_enabled(const struct scmi_handle *handle,
+					u8 evt_id, u32 src_id, bool enable)
+{
+	int ret, cmd_id;
+
+	if (evt_id >= ARRAY_SIZE(evt_2_cmd))
+		return -EINVAL;
+
+	cmd_id = evt_2_cmd[evt_id];
+	ret = scmi_perf_level_limits_notify(handle, src_id, cmd_id, enable);
+	if (ret)
+		pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+			 evt_id, src_id, ret);
+
+	return ret;
+}
+
+static void *scmi_perf_fill_custom_report(const struct scmi_handle *handle,
+					  u8 evt_id, ktime_t timestamp,
+					  const void *payld, size_t payld_sz,
+					  void *report, u32 *src_id)
+{
+	void *rep = NULL;
+
+	switch (evt_id) {
+	case SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED:
+	{
+		const struct scmi_perf_limits_notify_payld *p = payld;
+		struct scmi_perf_limits_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->range_max = le32_to_cpu(p->range_max);
+		r->range_min = le32_to_cpu(p->range_min);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	case SCMI_EVENT_PERFORMANCE_LEVEL_CHANGED:
+	{
+		const struct scmi_perf_level_notify_payld *p = payld;
+		struct scmi_perf_level_report *r = report;
+
+		if (sizeof(*p) != payld_sz)
+			break;
+
+		r->timestamp = timestamp;
+		r->agent_id = le32_to_cpu(p->agent_id);
+		r->domain_id = le32_to_cpu(p->domain_id);
+		r->performance_level = le32_to_cpu(p->performance_level);
+		*src_id = r->domain_id;
+		rep = r;
+		break;
+	}
+	default:
+		break;
+	}
+
+	return rep;
+}
+
+static const struct scmi_event perf_events[] = {
+	{
+		.id = SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED,
+		.max_payld_sz = sizeof(struct scmi_perf_limits_notify_payld),
+		.max_report_sz = sizeof(struct scmi_perf_limits_report),
+	},
+	{
+		.id = SCMI_EVENT_PERFORMANCE_LEVEL_CHANGED,
+		.max_payld_sz = sizeof(struct scmi_perf_level_notify_payld),
+		.max_report_sz = sizeof(struct scmi_perf_level_report),
+	},
+};
+
+static const struct scmi_event_ops perf_event_ops = {
+	.set_notify_enabled = scmi_perf_set_notify_enabled,
+	.fill_custom_report = scmi_perf_fill_custom_report,
 };

 static int scmi_perf_protocol_init(struct scmi_handle *handle)
···
 		if (dom->perf_fastchannels)
 			scmi_perf_domain_init_fc(handle, domain, &dom->fc_info);
 	}
+
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_PERF, SCMI_PROTO_QUEUE_SZ,
+				      &perf_event_ops, perf_events,
+				      ARRAY_SIZE(perf_events),
+				      pinfo->num_domains);

 	pinfo->version = version;
 	handle->perf_ops = &perf_ops;
+86 -6
drivers/firmware/arm_scmi/power.c
···
  * Copyright (C) 2018 ARM Ltd.
  */

+#define pr_fmt(fmt) "SCMI Notifications POWER - " fmt
+
+#include <linux/scmi_protocol.h>
+
 #include "common.h"
+#include "notify.h"

 enum scmi_power_protocol_cmd {
 	POWER_DOMAIN_ATTRIBUTES = 0x3,
 	POWER_STATE_SET = 0x4,
 	POWER_STATE_GET = 0x5,
 	POWER_STATE_NOTIFY = 0x6,
-	POWER_STATE_CHANGE_REQUESTED_NOTIFY = 0x7,
-};
-
-enum scmi_power_protocol_notify {
-	POWER_STATE_CHANGED = 0x0,
-	POWER_STATE_CHANGE_REQUESTED = 0x1,
 };

 struct scmi_msg_resp_power_attributes {
···
 struct scmi_power_state_notify {
 	__le32 domain;
 	__le32 notify_enable;
+};
+
+struct scmi_power_state_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 power_state;
 };

 struct power_dom_info {
···
 	.state_get = scmi_power_state_get,
 };

+static int scmi_power_request_notify(const struct scmi_handle *handle,
+				     u32 domain, bool enable)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_power_state_notify *notify;
+
+	ret = scmi_xfer_get_init(handle, POWER_STATE_NOTIFY,
+				 SCMI_PROTOCOL_POWER, sizeof(*notify), 0, &t);
+	if (ret)
+		return ret;
+
+	notify = t->tx.buf;
+	notify->domain = cpu_to_le32(domain);
+	notify->notify_enable = enable ? cpu_to_le32(BIT(0)) : 0;
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_power_set_notify_enabled(const struct scmi_handle *handle,
+					 u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_power_request_notify(handle, src_id, enable);
+	if (ret)
+		pr_debug("FAIL_ENABLE - evt[%X] dom[%d] - ret:%d\n",
+			 evt_id, src_id, ret);
+
+	return ret;
+}
+
+static void *scmi_power_fill_custom_report(const struct scmi_handle *handle,
+					   u8 evt_id, ktime_t timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	const struct scmi_power_state_notify_payld *p = payld;
+	struct scmi_power_state_changed_report *r = report;
+
+	if (evt_id != SCMI_EVENT_POWER_STATE_CHANGED || sizeof(*p) != payld_sz)
+		return NULL;
+
+	r->timestamp = timestamp;
+	r->agent_id = le32_to_cpu(p->agent_id);
+	r->domain_id = le32_to_cpu(p->domain_id);
+	r->power_state = le32_to_cpu(p->power_state);
+	*src_id = r->domain_id;
+
+	return r;
+}
+
+static const struct scmi_event power_events[] = {
+	{
+		.id = SCMI_EVENT_POWER_STATE_CHANGED,
+		.max_payld_sz = sizeof(struct scmi_power_state_notify_payld),
+		.max_report_sz =
+			sizeof(struct scmi_power_state_changed_report),
+	},
+};
+
+static const struct scmi_event_ops power_event_ops = {
+	.set_notify_enabled = scmi_power_set_notify_enabled,
+	.fill_custom_report = scmi_power_fill_custom_report,
+};
+
 static int scmi_power_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
···
 		scmi_power_domain_attributes_get(handle, domain, dom);
 	}
+
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_POWER, SCMI_PROTO_QUEUE_SZ,
+				      &power_event_ops, power_events,
+				      ARRAY_SIZE(power_events),
+				      pinfo->num_domains);

 	pinfo->version = version;
 	handle->power_ops = &power_ops;
+92 -4
drivers/firmware/arm_scmi/reset.c
···
  * Copyright (C) 2019 ARM Ltd.
  */

+#define pr_fmt(fmt) "SCMI Notifications RESET - " fmt
+
+#include <linux/scmi_protocol.h>
+
 #include "common.h"
+#include "notify.h"

 enum scmi_reset_protocol_cmd {
 	RESET_DOMAIN_ATTRIBUTES = 0x3,
 	RESET = 0x4,
 	RESET_NOTIFY = 0x5,
-};
-
-enum scmi_reset_protocol_notify {
-	RESET_ISSUED = 0x0,
 };

 #define NUM_RESET_DOMAIN_MASK	0xffff
···
 #define ARCH_RESET_TYPE		BIT(31)
 #define COLD_RESET_STATE	BIT(0)
 #define ARCH_COLD_RESET		(ARCH_RESET_TYPE | COLD_RESET_STATE)
+};
+
+struct scmi_msg_reset_notify {
+	__le32 id;
+	__le32 event_control;
+#define RESET_TP_NOTIFY_ALL	BIT(0)
+};
+
+struct scmi_reset_issued_notify_payld {
+	__le32 agent_id;
+	__le32 domain_id;
+	__le32 reset_state;
 };

 struct reset_dom_info {
···
 	.deassert = scmi_reset_domain_deassert,
 };

+static int scmi_reset_notify(const struct scmi_handle *handle, u32 domain_id,
+			     bool enable)
+{
+	int ret;
+	u32 evt_cntl = enable ? RESET_TP_NOTIFY_ALL : 0;
+	struct scmi_xfer *t;
+	struct scmi_msg_reset_notify *cfg;
+
+	ret = scmi_xfer_get_init(handle, RESET_NOTIFY,
+				 SCMI_PROTOCOL_RESET, sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(domain_id);
+	cfg->event_control = cpu_to_le32(evt_cntl);
+
+	ret = scmi_do_xfer(handle, t);
+
+	scmi_xfer_put(handle, t);
+	return ret;
+}
+
+static int scmi_reset_set_notify_enabled(const struct scmi_handle *handle,
+					 u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_reset_notify(handle, src_id, enable);
+	if (ret)
+		pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+			 evt_id, src_id, ret);
+
+	return ret;
+}
+
+static void *scmi_reset_fill_custom_report(const struct scmi_handle *handle,
+					   u8 evt_id, ktime_t timestamp,
+					   const void *payld, size_t payld_sz,
+					   void *report, u32 *src_id)
+{
+	const struct scmi_reset_issued_notify_payld *p = payld;
+	struct scmi_reset_issued_report *r = report;
+
+	if (evt_id != SCMI_EVENT_RESET_ISSUED || sizeof(*p) != payld_sz)
+		return NULL;
+
+	r->timestamp = timestamp;
+	r->agent_id = le32_to_cpu(p->agent_id);
+	r->domain_id = le32_to_cpu(p->domain_id);
+	r->reset_state = le32_to_cpu(p->reset_state);
+	*src_id = r->domain_id;
+
+	return r;
+}
+
+static const struct scmi_event reset_events[] = {
+	{
+		.id = SCMI_EVENT_RESET_ISSUED,
+		.max_payld_sz = sizeof(struct scmi_reset_issued_notify_payld),
+		.max_report_sz = sizeof(struct scmi_reset_issued_report),
+	},
+};
+
+static const struct scmi_event_ops reset_event_ops = {
+	.set_notify_enabled = scmi_reset_set_notify_enabled,
+	.fill_custom_report = scmi_reset_fill_custom_report,
+};
+
 static int scmi_reset_protocol_init(struct scmi_handle *handle)
 {
 	int domain;
···
 		scmi_reset_domain_attributes_get(handle, domain, dom);
 	}
+
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_RESET, SCMI_PROTO_QUEUE_SZ,
+				      &reset_event_ops, reset_events,
+				      ARRAY_SIZE(reset_events),
+				      pinfo->num_domains);

 	pinfo->version = version;
 	handle->reset_ops = &reset_ops;
+6 -6
drivers/firmware/arm_scmi/scmi_pm_domain.c
···
 	for (i = 0; i < num_domains; i++, scmi_pd++) {
 		u32 state;

-		domains[i] = &scmi_pd->genpd;
+		if (handle->power_ops->state_get(handle, i, &state)) {
+			dev_warn(dev, "failed to get state for domain %d\n", i);
+			continue;
+		}

 		scmi_pd->domain = i;
 		scmi_pd->handle = handle;
···
 		scmi_pd->genpd.power_off = scmi_pd_power_off;
 		scmi_pd->genpd.power_on = scmi_pd_power_on;

-		if (handle->power_ops->state_get(handle, i, &state)) {
-			dev_warn(dev, "failed to get state for domain %d\n", i);
-			continue;
-		}
-
 		pm_genpd_init(&scmi_pd->genpd, NULL,
 			      state == SCMI_POWER_STATE_GENERIC_OFF);
+
+		domains[i] = &scmi_pd->genpd;
 	}

 	scmi_pd_data->domains = domains;
+64 -5
drivers/firmware/arm_scmi/sensors.c
···
  * Copyright (C) 2018 ARM Ltd.
  */

+#define pr_fmt(fmt) "SCMI Notifications SENSOR - " fmt
+
+#include <linux/scmi_protocol.h>
+
 #include "common.h"
+#include "notify.h"

 enum scmi_sensor_protocol_cmd {
 	SENSOR_DESCRIPTION_GET = 0x3,
 	SENSOR_TRIP_POINT_NOTIFY = 0x4,
 	SENSOR_TRIP_POINT_CONFIG = 0x5,
 	SENSOR_READING_GET = 0x6,
-};
-
-enum scmi_sensor_protocol_notify {
-	SENSOR_TRIP_POINT_EVENT = 0x0,
 };

 struct scmi_msg_resp_sensor_attributes {
···
 	__le32 id;
 	__le32 flags;
 #define SENSOR_READ_ASYNC	BIT(0)
+};
+
+struct scmi_sensor_trip_notify_payld {
+	__le32 agent_id;
+	__le32 sensor_id;
+	__le32 trip_point_desc;
 };

 struct sensors_info {
···
 static struct scmi_sensor_ops sensor_ops = {
 	.count_get = scmi_sensor_count_get,
 	.info_get = scmi_sensor_info_get,
-	.trip_point_notify = scmi_sensor_trip_point_notify,
 	.trip_point_config = scmi_sensor_trip_point_config,
 	.reading_get = scmi_sensor_reading_get,
+};
+
+static int scmi_sensor_set_notify_enabled(const struct scmi_handle *handle,
+					  u8 evt_id, u32 src_id, bool enable)
+{
+	int ret;
+
+	ret = scmi_sensor_trip_point_notify(handle, src_id, enable);
+	if (ret)
+		pr_debug("FAIL_ENABLED - evt[%X] dom[%d] - ret:%d\n",
+			 evt_id, src_id, ret);
+
+	return ret;
+}
+
+static void *scmi_sensor_fill_custom_report(const struct scmi_handle *handle,
+					    u8 evt_id, ktime_t timestamp,
+					    const void *payld, size_t payld_sz,
+					    void *report, u32 *src_id)
+{
+	const struct scmi_sensor_trip_notify_payld *p = payld;
+	struct scmi_sensor_trip_point_report *r = report;
+
+	if (evt_id != SCMI_EVENT_SENSOR_TRIP_POINT_EVENT ||
+	    sizeof(*p) != payld_sz)
+		return NULL;
+
+	r->timestamp = timestamp;
+	r->agent_id = le32_to_cpu(p->agent_id);
+	r->sensor_id = le32_to_cpu(p->sensor_id);
+	r->trip_point_desc = le32_to_cpu(p->trip_point_desc);
+	*src_id = r->sensor_id;
+
+	return r;
+}
+
+static const struct scmi_event sensor_events[] = {
+	{
+		.id = SCMI_EVENT_SENSOR_TRIP_POINT_EVENT,
+		.max_payld_sz = sizeof(struct scmi_sensor_trip_notify_payld),
+		.max_report_sz = sizeof(struct scmi_sensor_trip_point_report),
+	},
+};
+
+static const struct scmi_event_ops sensor_event_ops = {
+	.set_notify_enabled = scmi_sensor_set_notify_enabled,
+	.fill_custom_report = scmi_sensor_fill_custom_report,
 };

 static int scmi_sensors_protocol_init(struct scmi_handle *handle)
···
 		return -ENOMEM;

 	scmi_sensor_description_get(handle, sinfo);
+
+	scmi_register_protocol_events(handle,
+				      SCMI_PROTOCOL_SENSOR, SCMI_PROTO_QUEUE_SZ,
+				      &sensor_event_ops, sensor_events,
+				      ARRAY_SIZE(sensor_events),
+				      sinfo->num_sensors);

 	sinfo->version = version;
 	handle->sensor_ops = &sensor_ops;
+1
drivers/firmware/arm_scmi/smc.c
···
  *
  * @cinfo: SCMI channel info
  * @shmem: Transmit/Receive shared memory area
+ * @shmem_lock: Lock to protect access to Tx/Rx shared memory area
  * @func_id: smc/hvc call function id
  */

+1 -1
drivers/firmware/imx/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 obj-$(CONFIG_IMX_DSP)		+= imx-dsp.o
-obj-$(CONFIG_IMX_SCU)		+= imx-scu.o misc.o imx-scu-irq.o
+obj-$(CONFIG_IMX_SCU)		+= imx-scu.o misc.o imx-scu-irq.o rm.o imx-scu-soc.o
 obj-$(CONFIG_IMX_SCU_PD)	+= scu-pd.o
+2
drivers/firmware/imx/imx-scu-irq.c
···
 #include <linux/firmware/imx/ipc.h>
 #include <linux/firmware/imx/sci.h>
 #include <linux/mailbox_client.h>
+#include <linux/suspend.h>

 #define IMX_SC_IRQ_FUNC_ENABLE	1
 #define IMX_SC_IRQ_FUNC_STATUS	2
···
 		if (!irq_status)
 			continue;

+		pm_system_wakeup();
 		imx_scu_irq_notifier_call_chain(irq_status, &i);
 	}
 }
+4
drivers/firmware/imx/imx-scu.c
···

 	imx_sc_ipc_handle = sc_ipc;

+	ret = imx_scu_soc_init(dev);
+	if (ret)
+		dev_warn(dev, "failed to initialize SoC info: %d\n", ret);
+
 	ret = imx_scu_enable_general_irq_channel(dev);
 	if (ret)
 		dev_warn(dev,
+45
drivers/firmware/imx/rm.c
···
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Copyright 2020 NXP
+ *
+ * File containing client-side RPC functions for the RM service. These
+ * function are ported to clients that communicate to the SC.
+ */
+
+#include <linux/firmware/imx/svc/rm.h>
+
+struct imx_sc_msg_rm_rsrc_owned {
+	struct imx_sc_rpc_msg hdr;
+	u16 resource;
+} __packed __aligned(4);
+
+/*
+ * This function check @resource is owned by current partition or not
+ *
+ * @param[in]     ipc         IPC handle
+ * @param[in]     resource    resource the control is associated with
+ *
+ * @return Returns 0 for not owned and 1 for owned.
+ */
+bool imx_sc_rm_is_resource_owned(struct imx_sc_ipc *ipc, u16 resource)
+{
+	struct imx_sc_msg_rm_rsrc_owned msg;
+	struct imx_sc_rpc_msg *hdr = &msg.hdr;
+
+	hdr->ver = IMX_SC_RPC_VERSION;
+	hdr->svc = IMX_SC_RPC_SVC_RM;
+	hdr->func = IMX_SC_RM_FUNC_IS_RESOURCE_OWNED;
+	hdr->size = 2;
+
+	msg.resource = resource;
+
+	/*
+	 * SCU firmware only returns value 0 or 1
+	 * for resource owned check which means not owned or owned.
+	 * So it is always successful.
+	 */
+	imx_scu_call_rpc(ipc, &msg, true);
+
+	return hdr->func;
+}
+EXPORT_SYMBOL(imx_sc_rm_is_resource_owned);
+12 -2
drivers/firmware/imx/scu-pd.c
···
 	{ "dc0-pll", IMX_SC_R_DC_0_PLL_0, 2, true, 0 },

 	/* CM40 SS */
-	{ "cm40_i2c", IMX_SC_R_M4_0_I2C, 1, 0 },
-	{ "cm40_intmux", IMX_SC_R_M4_0_INTMUX, 1, 0 },
+	{ "cm40-i2c", IMX_SC_R_M4_0_I2C, 1, false, 0 },
+	{ "cm40-intmux", IMX_SC_R_M4_0_INTMUX, 1, false, 0 },
+	{ "cm40-pid", IMX_SC_R_M4_0_PID0, 5, true, 0},
+	{ "cm40-mu-a1", IMX_SC_R_M4_0_MU_1A, 1, false, 0},
+	{ "cm40-lpuart", IMX_SC_R_M4_0_UART, 1, false, 0},
+
+	/* CM41 SS */
+	{ "cm41-i2c", IMX_SC_R_M4_1_I2C, 1, false, 0 },
+	{ "cm41-intmux", IMX_SC_R_M4_1_INTMUX, 1, false, 0 },
+	{ "cm41-pid", IMX_SC_R_M4_1_PID0, 5, true, 0},
+	{ "cm41-mu-a1", IMX_SC_R_M4_1_MU_1A, 1, false, 0},
+	{ "cm41-lpuart", IMX_SC_R_M4_1_UART, 1, false, 0},
 };

 static const struct imx_sc_pd_soc imx8qxp_scu_pd = {
+4 -4
drivers/firmware/qcom_scm.c
···

 	desc.args[1] = enable ? QCOM_SCM_BOOT_SET_DLOAD_MODE : 0;

-	return qcom_scm_call(__scm->dev, &desc, NULL);
+	return qcom_scm_call_atomic(__scm->dev, &desc, NULL);
 }

 static void qcom_scm_set_download_mode(bool enable)
···
 	int ret;


-	ret = qcom_scm_call(__scm->dev, &desc, &res);
+	ret = qcom_scm_call_atomic(__scm->dev, &desc, &res);
 	if (ret >= 0)
 		*val = res.result[0];

···
 		.owner = ARM_SMCCC_OWNER_SIP,
 	};

-
-	return qcom_scm_call(__scm->dev, &desc, NULL);
+	return qcom_scm_call_atomic(__scm->dev, &desc, NULL);
 }
 EXPORT_SYMBOL(qcom_scm_io_writel);

···
 		     SCM_HAS_IFACE_CLK |
 		     SCM_HAS_BUS_CLK)
 	},
+	{ .compatible = "qcom,scm-msm8994" },
 	{ .compatible = "qcom,scm-msm8996" },
 	{ .compatible = "qcom,scm" },
 	{}
+9
drivers/firmware/smccc/Kconfig
···
 	  to add SMCCC discovery mechanism though the PSCI firmware
 	  implementation of PSCI_FEATURES(SMCCC_VERSION) which returns
 	  success on firmware compliant to SMCCC v1.1 and above.
+
+config ARM_SMCCC_SOC_ID
+	bool "SoC bus device for the ARM SMCCC SOC_ID"
+	depends on HAVE_ARM_SMCCC_DISCOVERY
+	default y
+	select SOC_BUS
+	help
+	  Include support for the SoC bus on the ARM SMCCC firmware based
+	  platforms providing some sysfs information about the SoC variant.
+1
drivers/firmware/smccc/Makefile
···
 # SPDX-License-Identifier: GPL-2.0
 #
 obj-$(CONFIG_HAVE_ARM_SMCCC_DISCOVERY)	+= smccc.o
+obj-$(CONFIG_ARM_SMCCC_SOC_ID)	+= soc_id.o
+114
drivers/firmware/smccc/soc_id.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright 2020 Arm Limited
+ */
+
+#define pr_fmt(fmt) "SMCCC: SOC_ID: " fmt
+
+#include <linux/arm-smccc.h>
+#include <linux/bitfield.h>
+#include <linux/device.h>
+#include <linux/module.h>
+#include <linux/kernel.h>
+#include <linux/slab.h>
+#include <linux/sys_soc.h>
+
+#define SMCCC_SOC_ID_JEP106_BANK_IDX_MASK	GENMASK(30, 24)
+/*
+ * As per the SMC Calling Convention specification v1.2 (ARM DEN 0028C)
+ * Section 7.4 SMCCC_ARCH_SOC_ID bits[23:16] are JEP-106 identification
+ * code with parity bit for the SiP. We can drop the parity bit.
+ */
+#define SMCCC_SOC_ID_JEP106_ID_CODE_MASK	GENMASK(22, 16)
+#define SMCCC_SOC_ID_IMP_DEF_SOC_ID_MASK	GENMASK(15, 0)
+
+#define JEP106_BANK_CONT_CODE(x)	\
+	(u8)(FIELD_GET(SMCCC_SOC_ID_JEP106_BANK_IDX_MASK, (x)))
+#define JEP106_ID_CODE(x)	\
+	(u8)(FIELD_GET(SMCCC_SOC_ID_JEP106_ID_CODE_MASK, (x)))
+#define IMP_DEF_SOC_ID(x)	\
+	(u16)(FIELD_GET(SMCCC_SOC_ID_IMP_DEF_SOC_ID_MASK, (x)))
+
+static struct soc_device *soc_dev;
+static struct soc_device_attribute *soc_dev_attr;
+
+static int __init smccc_soc_init(void)
+{
+	struct arm_smccc_res res;
+	int soc_id_rev, soc_id_version;
+	static char soc_id_str[20], soc_id_rev_str[12];
+	static char soc_id_jep106_id_str[12];
+
+	if (arm_smccc_get_version() < ARM_SMCCC_VERSION_1_2)
+		return 0;
+
+	if (arm_smccc_1_1_get_conduit() == SMCCC_CONDUIT_NONE) {
+		pr_err("%s: invalid SMCCC conduit\n", __func__);
+		return -EOPNOTSUPP;
+	}
+
+	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_FEATURES_FUNC_ID,
+			     ARM_SMCCC_ARCH_SOC_ID, &res);
+
+	if (res.a0 == SMCCC_RET_NOT_SUPPORTED) {
+		pr_info("ARCH_SOC_ID not implemented, skipping ....\n");
+		return 0;
+	}
+
+	if ((int)res.a0 < 0) {
+		pr_info("ARCH_FEATURES(ARCH_SOC_ID) returned error: %lx\n",
+			res.a0);
+		return -EINVAL;
+	}
+
+	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_SOC_ID, 0, &res);
+	if ((int)res.a0 < 0) {
+		pr_err("ARCH_SOC_ID(0) returned error: %lx\n", res.a0);
+		return -EINVAL;
+	}
+
+	soc_id_version = res.a0;
+
+	arm_smccc_1_1_invoke(ARM_SMCCC_ARCH_SOC_ID, 1, &res);
+	if ((int)res.a0 < 0) {
+		pr_err("ARCH_SOC_ID(1) returned error: %lx\n", res.a0);
+		return -EINVAL;
+	}
+
+	soc_id_rev = res.a0;
+
+	soc_dev_attr = kzalloc(sizeof(*soc_dev_attr), GFP_KERNEL);
+	if (!soc_dev_attr)
+		return -ENOMEM;
+
+	sprintf(soc_id_rev_str, "0x%08x", soc_id_rev);
+	sprintf(soc_id_jep106_id_str, "jep106:%02x%02x",
+		JEP106_BANK_CONT_CODE(soc_id_version),
+		JEP106_ID_CODE(soc_id_version));
+	sprintf(soc_id_str, "%s:%04x", soc_id_jep106_id_str,
+		IMP_DEF_SOC_ID(soc_id_version));
+
+	soc_dev_attr->soc_id = soc_id_str;
+	soc_dev_attr->revision = soc_id_rev_str;
+	soc_dev_attr->family = soc_id_jep106_id_str;
+
+	soc_dev = soc_device_register(soc_dev_attr);
+	if (IS_ERR(soc_dev)) {
+		kfree(soc_dev_attr);
+		return PTR_ERR(soc_dev);
+	}
+
+	pr_info("ID = %s Revision = %s\n", soc_dev_attr->soc_id,
+		soc_dev_attr->revision);
+
+	return 0;
+}
+module_init(smccc_soc_init);
+
+static void __exit smccc_soc_exit(void)
+{
+	if (soc_dev)
+		soc_device_unregister(soc_dev);
+	kfree(soc_dev_attr);
+}
+module_exit(smccc_soc_exit);
+399 -37
drivers/firmware/tegra/bpmp-debugfs.c
···
  */
 #include <linux/debugfs.h>
 #include <linux/dma-mapping.h>
+#include <linux/slab.h>
 #include <linux/uaccess.h>

 #include <soc/tegra/bpmp.h>
 #include <soc/tegra/bpmp-abi.h>
+
+static DEFINE_MUTEX(bpmp_debug_lock);

 struct seqbuf {
 	char *buf;
···
 	return filename;
 }

+static int mrq_debug_open(struct tegra_bpmp *bpmp, const char *name,
+			  uint32_t *fd, uint32_t *len, bool write)
+{
+	struct mrq_debug_request req = {
+		.cmd = cpu_to_le32(write ? CMD_DEBUG_OPEN_WO : CMD_DEBUG_OPEN_RO),
+	};
+	struct mrq_debug_response resp;
+	struct tegra_bpmp_message msg = {
+		.mrq = MRQ_DEBUG,
+		.tx = {
+			.data = &req,
+			.size = sizeof(req),
+		},
+		.rx = {
+			.data = &resp,
+			.size = sizeof(resp),
+		},
+	};
+	ssize_t sz_name;
+	int err = 0;
+
+	sz_name = strscpy(req.fop.name, name, sizeof(req.fop.name));
+	if (sz_name < 0) {
+		pr_err("File name too large: %s\n", name);
+		return -EINVAL;
+	}
+
+	err = tegra_bpmp_transfer(bpmp, &msg);
+	if (err < 0)
+		return err;
+	else if (msg.rx.ret < 0)
+		return -EINVAL;
+
+	*len = resp.fop.datalen;
+	*fd = resp.fop.fd;
+
+	return 0;
+}
+
+static int mrq_debug_close(struct tegra_bpmp *bpmp, uint32_t fd)
+{
+	struct mrq_debug_request req = {
+		.cmd = cpu_to_le32(CMD_DEBUG_CLOSE),
+		.frd = {
+			.fd = fd,
+		},
+	};
+	struct mrq_debug_response resp;
+	struct tegra_bpmp_message msg = {
+		.mrq = MRQ_DEBUG,
+		.tx = {
+			.data = &req,
+			.size = sizeof(req),
+		},
+		.rx = {
+			.data = &resp,
+			.size = sizeof(resp),
+		},
+	};
+	int err = 0;
+
+	err = tegra_bpmp_transfer(bpmp, &msg);
+	if (err < 0)
+		return err;
+	else if (msg.rx.ret < 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int mrq_debug_read(struct tegra_bpmp *bpmp, const char *name,
+			  char *data, size_t sz_data, uint32_t *nbytes)
+{
+	struct mrq_debug_request req = {
+		.cmd = cpu_to_le32(CMD_DEBUG_READ),
+	};
+	struct mrq_debug_response resp;
+	struct tegra_bpmp_message msg = {
+		.mrq = MRQ_DEBUG,
+		.tx = {
+			.data = &req,
+			.size = sizeof(req),
+		},
+		.rx = {
+			.data = &resp,
+			.size = sizeof(resp),
+		},
+	};
+	uint32_t fd = 0, len = 0;
+	int remaining, err;
+
+	mutex_lock(&bpmp_debug_lock);
+	err = mrq_debug_open(bpmp, name, &fd, &len, 0);
+	if (err)
+		goto out;
+
+	if (len > sz_data) {
+		err = -EFBIG;
+		goto close;
+	}
+
+	req.frd.fd = fd;
+	remaining = len;
+
+	while (remaining > 0) {
+		err = tegra_bpmp_transfer(bpmp, &msg);
+		if (err < 0) {
+			goto close;
+		} else if (msg.rx.ret < 0) {
+			err = -EINVAL;
+			goto close;
+		}
+
+		if (resp.frd.readlen > remaining) {
+			pr_err("%s: read data length invalid\n", __func__);
+			err = -EINVAL;
+			goto close;
+		}
+
+		memcpy(data, resp.frd.data, resp.frd.readlen);
+		data += resp.frd.readlen;
+		remaining -= resp.frd.readlen;
+	}
+
+	*nbytes = len;
+
+close:
+	err = mrq_debug_close(bpmp, fd);
+out:
+	mutex_unlock(&bpmp_debug_lock);
+	return err;
+}
+
+static int mrq_debug_write(struct tegra_bpmp *bpmp, const char *name,
+			   uint8_t *data, size_t sz_data)
+{
+	struct mrq_debug_request req = {
+		.cmd = cpu_to_le32(CMD_DEBUG_WRITE)
+	};
+	struct mrq_debug_response resp;
+	struct tegra_bpmp_message msg = {
+		.mrq = MRQ_DEBUG,
+		.tx = {
+			.data = &req,
+			.size = sizeof(req),
+		},
+		.rx = {
+			.data = &resp,
+			.size = sizeof(resp),
+		},
+	};
+	uint32_t fd = 0, len = 0;
+	size_t remaining;
+	int err;
+
+	mutex_lock(&bpmp_debug_lock);
+	err = mrq_debug_open(bpmp, name, &fd, &len, 1);
+	if (err)
+		goto out;
+
+	if (sz_data > len) {
+		err = -EINVAL;
+		goto close;
+	}
+
+	req.fwr.fd = fd;
+	remaining = sz_data;
+
+	while (remaining > 0) {
+		len = min(remaining, sizeof(req.fwr.data));
+		memcpy(req.fwr.data, data, len);
+		req.fwr.datalen = len;
+
+		err = tegra_bpmp_transfer(bpmp, &msg);
+		if (err < 0) {
+			goto close;
+		} else if (msg.rx.ret < 0) {
+			err = -EINVAL;
+			goto close;
+		}
+
+		data += req.fwr.datalen;
+		remaining -= req.fwr.datalen;
+	}
+
+close:
+	err = mrq_debug_close(bpmp, fd);
+out:
+	mutex_unlock(&bpmp_debug_lock);
+	return err;
+}
+
+static int bpmp_debug_show(struct seq_file *m, void *p)
+{
+	struct file *file = m->private;
+	struct inode *inode = file_inode(file);
+	struct tegra_bpmp *bpmp = inode->i_private;
+	char *databuf = NULL;
+	char fnamebuf[256];
+	const char *filename;
+	uint32_t nbytes = 0;
+	size_t len;
+	int err;
+
+	len = seq_get_buf(m, &databuf);
+	if (!databuf)
+		return -ENOMEM;
+
+	filename = get_filename(bpmp, file, fnamebuf, sizeof(fnamebuf));
+	if (!filename)
+		return -ENOENT;
+
+	err = mrq_debug_read(bpmp, filename, databuf, len, &nbytes);
+	if (!err)
+		seq_commit(m, nbytes);
+
+	return err;
+}
+
+static ssize_t bpmp_debug_store(struct file *file, const char __user *buf,
+				size_t count, loff_t *f_pos)
+{
+	struct inode *inode = file_inode(file);
+	struct tegra_bpmp *bpmp = inode->i_private;
+	char *databuf = NULL;
+	char fnamebuf[256];
+	const char *filename;
+	ssize_t err;
+
+	filename = get_filename(bpmp, file, fnamebuf, sizeof(fnamebuf));
+	if (!filename)
+		return -ENOENT;
+
+	databuf = kmalloc(count, GFP_KERNEL);
+ if (!databuf) 334 + return -ENOMEM; 335 + 336 + if (copy_from_user(databuf, buf, count)) { 337 + err = -EFAULT; 338 + goto free_ret; 339 + } 340 + 341 + err = mrq_debug_write(bpmp, filename, databuf, count); 342 + 343 + free_ret: 344 + kfree(databuf); 345 + 346 + return err ?: count; 347 + } 348 + 349 + static int bpmp_debug_open(struct inode *inode, struct file *file) 350 + { 351 + return single_open_size(file, bpmp_debug_show, file, SZ_256K); 352 + } 353 + 354 + static const struct file_operations bpmp_debug_fops = { 355 + .open = bpmp_debug_open, 356 + .read = seq_read, 357 + .llseek = seq_lseek, 358 + .write = bpmp_debug_store, 359 + .release = single_release, 360 + }; 361 + 362 + static int bpmp_populate_debugfs_inband(struct tegra_bpmp *bpmp, 363 + struct dentry *parent, 364 + char *ppath) 365 + { 366 + const size_t pathlen = SZ_256; 367 + const size_t bufsize = SZ_16K; 368 + uint32_t dsize, attrs = 0; 369 + struct dentry *dentry; 370 + struct seqbuf seqbuf; 371 + char *buf, *pathbuf; 372 + const char *name; 373 + int err = 0; 374 + 375 + if (!bpmp || !parent || !ppath) 376 + return -EINVAL; 377 + 378 + buf = kmalloc(bufsize, GFP_KERNEL); 379 + if (!buf) 380 + return -ENOMEM; 381 + 382 + pathbuf = kzalloc(pathlen, GFP_KERNEL); 383 + if (!pathbuf) { 384 + kfree(buf); 385 + return -ENOMEM; 386 + } 387 + 388 + err = mrq_debug_read(bpmp, ppath, buf, bufsize, &dsize); 389 + if (err) 390 + goto out; 391 + 392 + seqbuf_init(&seqbuf, buf, dsize); 393 + 394 + while (!seqbuf_eof(&seqbuf)) { 395 + err = seqbuf_read_u32(&seqbuf, &attrs); 396 + if (err) 397 + goto out; 398 + 399 + err = seqbuf_read_str(&seqbuf, &name); 400 + if (err < 0) 401 + goto out; 402 + 403 + if (attrs & DEBUGFS_S_ISDIR) { 404 + size_t len; 405 + 406 + dentry = debugfs_create_dir(name, parent); 407 + if (IS_ERR(dentry)) { 408 + err = PTR_ERR(dentry); 409 + goto out; 410 + } 411 + 412 + len = strlen(ppath) + strlen(name) + 1; 413 + if (len >= pathlen) { 414 + err = -EINVAL; 415 + goto out; 416 + } 
417 + 418 + strncpy(pathbuf, ppath, pathlen); 419 + strncat(pathbuf, name, strlen(name)); 420 + strcat(pathbuf, "/"); 421 + 422 + err = bpmp_populate_debugfs_inband(bpmp, dentry, 423 + pathbuf); 424 + if (err < 0) 425 + goto out; 426 + } else { 427 + umode_t mode; 428 + 429 + mode = attrs & DEBUGFS_S_IRUSR ? 0400 : 0; 430 + mode |= attrs & DEBUGFS_S_IWUSR ? 0200 : 0; 431 + dentry = debugfs_create_file(name, mode, parent, bpmp, 432 + &bpmp_debug_fops); 433 + if (!dentry) { 434 + err = -ENOMEM; 435 + goto out; 436 + } 437 + } 438 + } 439 + 440 + out: 441 + kfree(pathbuf); 442 + kfree(buf); 443 + 444 + return err; 445 + } 446 + 102 447 static int mrq_debugfs_read(struct tegra_bpmp *bpmp, 103 448 dma_addr_t name, size_t sz_name, 104 449 dma_addr_t data, size_t sz_data, ··· 478 127 err = tegra_bpmp_transfer(bpmp, &msg); 479 128 if (err < 0) 480 129 return err; 130 + else if (msg.rx.ret < 0) 131 + return -EINVAL; 481 132 482 133 *nbytes = (size_t)resp.fop.nbytes; 483 134 ··· 537 184 err = tegra_bpmp_transfer(bpmp, &msg); 538 185 if (err < 0) 539 186 return err; 187 + else if (msg.rx.ret < 0) 188 + return -EINVAL; 540 189 541 190 *nbytes = (size_t)resp.dumpdir.nbytes; 542 191 ··· 557 202 char buf[256]; 558 203 const char *filename; 559 204 size_t len, nbytes; 560 - int ret; 205 + int err; 561 206 562 207 filename = get_filename(bpmp, file, buf, sizeof(buf)); 563 208 if (!filename) ··· 571 216 datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys, 572 217 GFP_KERNEL | GFP_DMA32); 573 218 if (!datavirt) { 574 - ret = -ENOMEM; 219 + err = -ENOMEM; 575 220 goto free_namebuf; 576 221 } 577 222 578 223 len = strlen(filename); 579 224 strncpy(namevirt, filename, namesize); 580 225 581 - ret = mrq_debugfs_read(bpmp, namephys, len, dataphys, datasize, 226 + err = mrq_debugfs_read(bpmp, namephys, len, dataphys, datasize, 582 227 &nbytes); 583 228 584 - if (!ret) 229 + if (!err) 585 230 seq_write(m, datavirt, nbytes); 586 231 587 232 dma_free_coherent(bpmp->dev, datasize, 
datavirt, dataphys); 588 233 free_namebuf: 589 234 dma_free_coherent(bpmp->dev, namesize, namevirt, namephys); 590 235 591 - return ret; 236 + return err; 592 237 } 593 238 594 239 static int debugfs_open(struct inode *inode, struct file *file) ··· 608 253 char fnamebuf[256]; 609 254 const char *filename; 610 255 size_t len; 611 - int ret; 256 + int err; 612 257 613 258 filename = get_filename(bpmp, file, fnamebuf, sizeof(fnamebuf)); 614 259 if (!filename) ··· 622 267 datavirt = dma_alloc_coherent(bpmp->dev, datasize, &dataphys, 623 268 GFP_KERNEL | GFP_DMA32); 624 269 if (!datavirt) { 625 - ret = -ENOMEM; 270 + err = -ENOMEM; 626 271 goto free_namebuf; 627 272 } 628 273 ··· 630 275 strncpy(namevirt, filename, namesize); 631 276 632 277 if (copy_from_user(datavirt, buf, count)) { 633 - ret = -EFAULT; 278 + err = -EFAULT; 634 279 goto free_databuf; 635 280 } 636 281 637 - ret = mrq_debugfs_write(bpmp, namephys, len, dataphys, 282 + err = mrq_debugfs_write(bpmp, namephys, len, dataphys, 638 283 count); 639 284 640 285 free_databuf: ··· 642 287 free_namebuf: 643 288 dma_free_coherent(bpmp->dev, namesize, namevirt, namephys); 644 289 645 - return ret ?: count; 290 + return err ?: count; 646 291 } 647 292 648 293 static const struct file_operations debugfs_fops = { ··· 705 350 return 0; 706 351 } 707 352 708 - static int create_debugfs_mirror(struct tegra_bpmp *bpmp, void *buf, 709 - size_t bufsize, struct dentry *root) 353 + static int bpmp_populate_debugfs_shmem(struct tegra_bpmp *bpmp) 710 354 { 711 355 struct seqbuf seqbuf; 356 + const size_t sz = SZ_512K; 357 + dma_addr_t phys; 358 + size_t nbytes; 359 + void *virt; 712 360 int err; 713 361 714 - bpmp->debugfs_mirror = debugfs_create_dir("debug", root); 715 - if (!bpmp->debugfs_mirror) 362 + virt = dma_alloc_coherent(bpmp->dev, sz, &phys, 363 + GFP_KERNEL | GFP_DMA32); 364 + if (!virt) 716 365 return -ENOMEM; 717 366 718 - seqbuf_init(&seqbuf, buf, bufsize); 719 - err = bpmp_populate_dir(bpmp, &seqbuf, 
bpmp->debugfs_mirror, 0); 367 + err = mrq_debugfs_dumpdir(bpmp, phys, sz, &nbytes); 720 368 if (err < 0) { 721 - debugfs_remove_recursive(bpmp->debugfs_mirror); 722 - bpmp->debugfs_mirror = NULL; 369 + goto free; 370 + } else if (nbytes > sz) { 371 + err = -EINVAL; 372 + goto free; 723 373 } 374 + 375 + seqbuf_init(&seqbuf, virt, nbytes); 376 + err = bpmp_populate_dir(bpmp, &seqbuf, bpmp->debugfs_mirror, 0); 377 + free: 378 + dma_free_coherent(bpmp->dev, sz, virt, phys); 724 379 725 380 return err; 726 381 } 727 382 728 383 int tegra_bpmp_init_debugfs(struct tegra_bpmp *bpmp) 729 384 { 730 - dma_addr_t phys; 731 - void *virt; 732 - const size_t sz = SZ_256K; 733 - size_t nbytes; 734 - int ret; 735 385 struct dentry *root; 386 + bool inband; 387 + int err; 736 388 737 - if (!tegra_bpmp_mrq_is_supported(bpmp, MRQ_DEBUGFS)) 389 + inband = tegra_bpmp_mrq_is_supported(bpmp, MRQ_DEBUG); 390 + 391 + if (!inband && !tegra_bpmp_mrq_is_supported(bpmp, MRQ_DEBUGFS)) 738 392 return 0; 739 393 740 394 root = debugfs_create_dir("bpmp", NULL); 741 395 if (!root) 742 396 return -ENOMEM; 743 397 744 - virt = dma_alloc_coherent(bpmp->dev, sz, &phys, 745 - GFP_KERNEL | GFP_DMA32); 746 - if (!virt) { 747 - ret = -ENOMEM; 398 + bpmp->debugfs_mirror = debugfs_create_dir("debug", root); 399 + if (!bpmp->debugfs_mirror) { 400 + err = -ENOMEM; 748 401 goto out; 749 402 } 750 403 751 - ret = mrq_debugfs_dumpdir(bpmp, phys, sz, &nbytes); 752 - if (ret < 0) 753 - goto free; 404 + if (inband) 405 + err = bpmp_populate_debugfs_inband(bpmp, bpmp->debugfs_mirror, 406 + "/"); 407 + else 408 + err = bpmp_populate_debugfs_shmem(bpmp); 754 409 755 - ret = create_debugfs_mirror(bpmp, virt, nbytes, root); 756 - free: 757 - dma_free_coherent(bpmp->dev, sz, virt, phys); 758 410 out: 759 - if (ret < 0) 760 - debugfs_remove(root); 411 + if (err < 0) 412 + debugfs_remove_recursive(root); 761 413 762 - return ret; 414 + return err; 763 415 }
+3 -3
drivers/firmware/tegra/bpmp.c
··· 515 515 .size = sizeof(resp), 516 516 }, 517 517 }; 518 - int ret; 518 + int err; 519 519 520 - ret = tegra_bpmp_transfer(bpmp, &msg); 521 - if (ret || msg.rx.ret) 520 + err = tegra_bpmp_transfer(bpmp, &msg); 521 + if (err || msg.rx.ret) 522 522 return false; 523 523 524 524 return resp.status == 0;
+1 -1
drivers/firmware/ti_sci.c
··· 2 2 /* 3 3 * Texas Instruments System Control Interface Protocol Driver 4 4 * 5 - * Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/ 5 + * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/ 6 6 * Nishanth Menon 7 7 */ 8 8
+1 -1
drivers/firmware/ti_sci.h
··· 6 6 * The system works in a message response protocol 7 7 * See: http://processors.wiki.ti.com/index.php/TISCI for details 8 8 * 9 - * Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/ 9 + * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/ 10 10 */ 11 11 12 12 #ifndef __TI_SCI_H
+166
drivers/firmware/turris-mox-rwtm.c
··· 7 7 8 8 #include <linux/armada-37xx-rwtm-mailbox.h> 9 9 #include <linux/completion.h> 10 + #include <linux/debugfs.h> 10 11 #include <linux/dma-mapping.h> 11 12 #include <linux/hw_random.h> 12 13 #include <linux/mailbox_client.h> ··· 70 69 /* public key burned in eFuse */ 71 70 int has_pubkey; 72 71 u8 pubkey[135]; 72 + 73 + #ifdef CONFIG_DEBUG_FS 74 + /* 75 + * Signature process. This is currently done via debugfs, because it 76 + * does not conform to the sysfs standard "one file per attribute". 77 + * It should be rewritten via crypto API once akcipher API is available 78 + * from userspace. 79 + */ 80 + struct dentry *debugfs_root; 81 + u32 last_sig[34]; 82 + int last_sig_done; 83 + #endif 73 84 }; 74 85 75 86 struct mox_kobject { ··· 292 279 return ret; 293 280 } 294 281 282 + #ifdef CONFIG_DEBUG_FS 283 + static int rwtm_debug_open(struct inode *inode, struct file *file) 284 + { 285 + file->private_data = inode->i_private; 286 + 287 + return nonseekable_open(inode, file); 288 + } 289 + 290 + static ssize_t do_sign_read(struct file *file, char __user *buf, size_t len, 291 + loff_t *ppos) 292 + { 293 + struct mox_rwtm *rwtm = file->private_data; 294 + ssize_t ret; 295 + 296 + /* only allow one read, of 136 bytes, from position 0 */ 297 + if (*ppos != 0) 298 + return 0; 299 + 300 + if (len < 136) 301 + return -EINVAL; 302 + 303 + if (!rwtm->last_sig_done) 304 + return -ENODATA; 305 + 306 + /* 2 arrays of 17 32-bit words are 136 bytes */ 307 + ret = simple_read_from_buffer(buf, len, ppos, rwtm->last_sig, 136); 308 + rwtm->last_sig_done = 0; 309 + 310 + return ret; 311 + } 312 + 313 + static ssize_t do_sign_write(struct file *file, const char __user *buf, 314 + size_t len, loff_t *ppos) 315 + { 316 + struct mox_rwtm *rwtm = file->private_data; 317 + struct armada_37xx_rwtm_rx_msg *reply = &rwtm->reply; 318 + struct armada_37xx_rwtm_tx_msg msg; 319 + loff_t dummy = 0; 320 + ssize_t ret; 321 + 322 + /* the input is a SHA-512 hash, so exactly 64 bytes have to be 
read */ 323 + if (len != 64) 324 + return -EINVAL; 325 + 326 + /* if the last result is not zero, the user has not read that information yet */ 327 + if (rwtm->last_sig_done) 328 + return -EBUSY; 329 + 330 + if (!mutex_trylock(&rwtm->busy)) 331 + return -EBUSY; 332 + 333 + /* 334 + * Here we have to send: 335 + * 1. Address of the input to sign. 336 + * The input is an array of 17 32-bit words, the first (most 337 + * significant) is 0, the rest 16 words are copied from the SHA-512 338 + * hash given by the user and converted from BE to LE. 339 + * 2. Address of the buffer where ECDSA signature value R shall be 340 + * stored by the rWTM firmware. 341 + * 3. Address of the buffer where ECDSA signature value S shall be 342 + * stored by the rWTM firmware. 343 + */ 344 + memset(rwtm->buf, 0, 4); 345 + ret = simple_write_to_buffer(rwtm->buf + 4, 64, &dummy, buf, len); 346 + if (ret < 0) 347 + goto unlock_mutex; 348 + be32_to_cpu_array(rwtm->buf, rwtm->buf, 17); 349 + 350 + msg.command = MBOX_CMD_SIGN; 351 + msg.args[0] = 1; 352 + msg.args[1] = rwtm->buf_phys; 353 + msg.args[2] = rwtm->buf_phys + 68; 354 + msg.args[3] = rwtm->buf_phys + 2 * 68; 355 + ret = mbox_send_message(rwtm->mbox, &msg); 356 + if (ret < 0) 357 + goto unlock_mutex; 358 + 359 + ret = wait_for_completion_interruptible(&rwtm->cmd_done); 360 + if (ret < 0) 361 + goto unlock_mutex; 362 + 363 + ret = MBOX_STS_VALUE(reply->retval); 364 + if (MBOX_STS_ERROR(reply->retval) != MBOX_STS_SUCCESS) 365 + goto unlock_mutex; 366 + 367 + /* 368 + * Here we read the R and S values of the ECDSA signature 369 + * computed by the rWTM firmware and convert their words from 370 + * LE to BE.
371 + */ 372 + memcpy(rwtm->last_sig, rwtm->buf + 68, 136); 373 + cpu_to_be32_array(rwtm->last_sig, rwtm->last_sig, 34); 374 + rwtm->last_sig_done = 1; 375 + 376 + mutex_unlock(&rwtm->busy); 377 + return len; 378 + unlock_mutex: 379 + mutex_unlock(&rwtm->busy); 380 + return ret; 381 + } 382 + 383 + static const struct file_operations do_sign_fops = { 384 + .owner = THIS_MODULE, 385 + .open = rwtm_debug_open, 386 + .read = do_sign_read, 387 + .write = do_sign_write, 388 + .llseek = no_llseek, 389 + }; 390 + 391 + static int rwtm_register_debugfs(struct mox_rwtm *rwtm) 392 + { 393 + struct dentry *root, *entry; 394 + 395 + root = debugfs_create_dir("turris-mox-rwtm", NULL); 396 + 397 + if (IS_ERR(root)) 398 + return PTR_ERR(root); 399 + 400 + entry = debugfs_create_file_unsafe("do_sign", 0600, root, rwtm, 401 + &do_sign_fops); 402 + if (IS_ERR(entry)) 403 + goto err_remove; 404 + 405 + rwtm->debugfs_root = root; 406 + 407 + return 0; 408 + err_remove: 409 + debugfs_remove_recursive(root); 410 + return PTR_ERR(entry); 411 + } 412 + 413 + static void rwtm_unregister_debugfs(struct mox_rwtm *rwtm) 414 + { 415 + debugfs_remove_recursive(rwtm->debugfs_root); 416 + } 417 + #else 418 + static inline int rwtm_register_debugfs(struct mox_rwtm *rwtm) 419 + { 420 + return 0; 421 + } 422 + 423 + static inline void rwtm_unregister_debugfs(struct mox_rwtm *rwtm) 424 + { 425 + } 426 + #endif 427 + 295 428 static int turris_mox_rwtm_probe(struct platform_device *pdev) 296 429 { 297 430 struct mox_rwtm *rwtm; ··· 499 340 goto free_channel; 500 341 } 501 342 343 + ret = rwtm_register_debugfs(rwtm); 344 + if (ret < 0) { 345 + dev_err(dev, "Failed creating debugfs entries: %i\n", ret); 346 + goto free_channel; 347 + } 348 + 502 349 return 0; 503 350 504 351 free_channel: ··· 520 355 { 521 356 struct mox_rwtm *rwtm = platform_get_drvdata(pdev); 522 357 358 + rwtm_unregister_debugfs(rwtm); 523 359 sysfs_remove_files(rwtm_to_kobj(rwtm), mox_rwtm_attrs); 524 360 
kobject_put(rwtm_to_kobj(rwtm)); 525 361 mbox_free_channel(rwtm->mbox);
+1
drivers/gpu/drm/mediatek/mtk_drm_crtc.c
··· 487 487 cmdq_pkt_clear_event(cmdq_handle, mtk_crtc->cmdq_event); 488 488 cmdq_pkt_wfe(cmdq_handle, mtk_crtc->cmdq_event); 489 489 mtk_crtc_ddp_config(crtc, cmdq_handle); 490 + cmdq_pkt_finalize(cmdq_handle); 490 491 cmdq_pkt_flush_async(cmdq_handle, ddp_cmdq_cb, cmdq_handle); 491 492 } 492 493 #endif
+25 -1
drivers/i2c/busses/i2c-qcom-geni.c
··· 559 559 gi2c->adap.dev.of_node = dev->of_node; 560 560 strlcpy(gi2c->adap.name, "Geni-I2C", sizeof(gi2c->adap.name)); 561 561 562 + ret = geni_icc_get(&gi2c->se, "qup-memory"); 563 + if (ret) 564 + return ret; 565 + /* 566 + * Set the bus quota for core and cpu to a reasonable value for 567 + * register access. 568 + * Set quota for DDR based on bus speed. 569 + */ 570 + gi2c->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW; 571 + gi2c->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW; 572 + gi2c->se.icc_paths[GENI_TO_DDR].avg_bw = Bps_to_icc(gi2c->clk_freq_out); 573 + 574 + ret = geni_icc_set_bw(&gi2c->se); 575 + if (ret) 576 + return ret; 577 + 562 578 ret = geni_se_resources_on(&gi2c->se); 563 579 if (ret) { 564 580 dev_err(dev, "Error turning on resources %d\n", ret); ··· 596 580 dev_err(dev, "Error turning off resources %d\n", ret); 597 581 return ret; 598 582 } 583 + 584 + ret = geni_icc_disable(&gi2c->se); 585 + if (ret) 586 + return ret; 599 587 600 588 dev_dbg(dev, "i2c fifo/se-dma mode. fifo depth:%d\n", tx_depth); 601 589 ··· 645 625 gi2c->suspended = 1; 646 626 } 647 627 648 - return 0; 628 + return geni_icc_disable(&gi2c->se); 649 629 } 650 630 651 631 static int __maybe_unused geni_i2c_runtime_resume(struct device *dev) 652 632 { 653 633 int ret; 654 634 struct geni_i2c_dev *gi2c = dev_get_drvdata(dev); 635 + 636 + ret = geni_icc_enable(&gi2c->se); 637 + if (ret) 638 + return ret; 655 639 656 640 ret = geni_se_resources_on(&gi2c->se); 657 641 if (ret)
+1 -5
drivers/interconnect/qcom/bcm-voter.c
··· 266 266 if (!commit_idx[0]) 267 267 goto out; 268 268 269 - ret = rpmh_invalidate(voter->dev); 270 - if (ret) { 271 - pr_err("Error invalidating RPMH client (%d)\n", ret); 272 - goto out; 273 - } 269 + rpmh_invalidate(voter->dev); 274 270 275 271 ret = rpmh_write_batch(voter->dev, RPMH_ACTIVE_ONLY_STATE, 276 272 cmds, commit_idx);
+1 -1
drivers/irqchip/irq-ti-sci-inta.c
··· 2 2 /* 3 3 * Texas Instruments' K3 Interrupt Aggregator irqchip driver 4 4 * 5 - * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/ 5 + * Copyright (C) 2018-2019 Texas Instruments Incorporated - https://www.ti.com/ 6 6 * Lokesh Vutla <lokeshvutla@ti.com> 7 7 */ 8 8
+1 -1
drivers/irqchip/irq-ti-sci-intr.c
··· 2 2 /* 3 3 * Texas Instruments' K3 Interrupt Router irqchip driver 4 4 * 5 - * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/ 5 + * Copyright (C) 2018-2019 Texas Instruments Incorporated - https://www.ti.com/ 6 6 * Lokesh Vutla <lokeshvutla@ti.com> 7 7 */ 8 8
+6
drivers/memory/Kconfig
··· 5 5 6 6 menuconfig MEMORY 7 7 bool "Memory Controller drivers" 8 + help 9 + This option allows you to enable specific memory controller drivers, 10 + useful mostly on embedded systems. These could be controllers 11 + for DRAM (SDR, DDR), ROM, SRAM and others. The drivers' features 12 + vary from memory tuning and frequency scaling to enabling 13 + access to attached peripherals through the memory bus. 8 14 9 15 if MEMORY 10 16
+3 -4
drivers/memory/brcmstb_dpfe.c
··· 23 23 * - BE kernel + LE firmware image 24 24 * - BE kernel + BE firmware image 25 25 * 26 - * The DPCU always runs in big endian mode. The firwmare image, however, can 26 + * The DPCU always runs in big endian mode. The firmware image, however, can 27 27 * be in either format. Also, communication between host CPU and DCPU is 28 28 * always in little endian. 29 29 */ ··· 188 188 struct mutex lock; 189 189 }; 190 190 191 - static const char *error_text[] = { 191 + static const char * const error_text[] = { 192 192 "Success", "Header code incorrect", "Unknown command or argument", 193 193 "Incorrect checksum", "Malformed command", "Timed out", 194 194 }; ··· 379 379 void __iomem *ptr = NULL; 380 380 381 381 /* There is no need to use this function for API v3 or later. */ 382 - if (unlikely(priv->dpfe_api->version >= 3)) { 382 + if (unlikely(priv->dpfe_api->version >= 3)) 383 383 return NULL; 384 - } 385 384 386 385 msg_type = (response >> DRAM_MSG_TYPE_OFFSET) & DRAM_MSG_TYPE_MASK; 387 386 offset = (response >> DRAM_MSG_ADDR_OFFSET) & DRAM_MSG_ADDR_MASK;
+2
drivers/memory/bt1-l2-ctl.c
··· 66 66 struct device_attribute dev_attr; 67 67 enum l2_ctl_stall id; 68 68 }; 69 + 69 70 #define to_l2_ctl_dev_attr(_dev_attr) \ 70 71 container_of(_dev_attr, struct l2_ctl_device_attribute, dev_attr) 71 72 ··· 243 242 244 243 return count; 245 244 } 245 + 246 246 static L2_CTL_ATTR_RW(l2_ws_latency, l2_ctl_latency, L2_WS_STALL); 247 247 static L2_CTL_ATTR_RW(l2_tag_latency, l2_ctl_latency, L2_TAG_STALL); 248 248 static L2_CTL_ATTR_RW(l2_data_latency, l2_ctl_latency, L2_DATA_STALL);
-2
drivers/memory/da8xx-ddrctl.c
··· 102 102 { 103 103 const struct da8xx_ddrctl_config_knob *knob; 104 104 const struct da8xx_ddrctl_setting *setting; 105 - struct device_node *node; 106 105 struct resource *res; 107 106 void __iomem *ddrctl; 108 107 struct device *dev; 109 108 u32 reg; 110 109 111 110 dev = &pdev->dev; 112 - node = dev->of_node; 113 111 114 112 setting = da8xx_ddrctl_get_board_settings(); 115 113 if (!setting) {
+1 -9
drivers/memory/emif-asm-offsets.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * TI AM33XX EMIF PM Assembly Offsets 3 4 * 4 5 * Copyright (C) 2016-2017 Texas Instruments Inc. 5 - * 6 - * This program is free software; you can redistribute it and/or 7 - * modify it under the terms of the GNU General Public License as 8 - * published by the Free Software Foundation version 2. 9 - * 10 - * This program is distributed "as is" WITHOUT ANY WARRANTY of any 11 - * kind, whether express or implied; without even the implied warranty 12 - * of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 6 */ 15 7 #include <linux/ti-emif-sram.h> 16 8
+9 -14
drivers/memory/emif.c
··· 282 282 * the EMIF_PWR_MGMT_CTRL[10:8] REG_LP_MODE bit field to 0x4. 283 283 */ 284 284 if ((emif->plat_data->ip_rev == EMIF_4D) && 285 - (EMIF_LP_MODE_PWR_DN == lpmode)) { 285 + (lpmode == EMIF_LP_MODE_PWR_DN)) { 286 286 WARN_ONCE(1, 287 - "REG_LP_MODE = LP_MODE_PWR_DN(4) is prohibited by" 288 - "erratum i743 switch to LP_MODE_SELF_REFRESH(2)\n"); 287 + "REG_LP_MODE = LP_MODE_PWR_DN(4) is prohibited by erratum i743 switch to LP_MODE_SELF_REFRESH(2)\n"); 289 288 /* rollback LP_MODE to Self-refresh mode */ 290 289 lpmode = EMIF_LP_MODE_SELF_REFRESH; 291 290 } ··· 713 714 u32 fifo_we_slave_ratio; 714 715 715 716 fifo_we_slave_ratio = DIV_ROUND_CLOSEST( 716 - EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256 , t_ck); 717 + EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256, t_ck); 717 718 718 719 return fifo_we_slave_ratio | fifo_we_slave_ratio << 11 | 719 720 fifo_we_slave_ratio << 22; ··· 724 725 u32 fifo_we_slave_ratio; 725 726 726 727 fifo_we_slave_ratio = DIV_ROUND_CLOSEST( 727 - EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256 , t_ck); 728 + EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256, t_ck); 728 729 729 730 return fifo_we_slave_ratio >> 10 | fifo_we_slave_ratio << 1 | 730 731 fifo_we_slave_ratio << 12 | fifo_we_slave_ratio << 23; ··· 735 736 u32 fifo_we_slave_ratio; 736 737 737 738 fifo_we_slave_ratio = DIV_ROUND_CLOSEST( 738 - EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256 , t_ck); 739 + EMIF_INTELLI_PHY_DQS_GATE_OPENING_DELAY_PS * 256, t_ck); 739 740 740 741 return fifo_we_slave_ratio >> 9 | fifo_we_slave_ratio << 2 | 741 742 fifo_we_slave_ratio << 13; ··· 974 975 EMIF_CUSTOM_CONFIG_EXTENDED_TEMP_PART)) { 975 976 if (emif->temperature_level >= SDRAM_TEMP_HIGH_DERATE_REFRESH) { 976 977 dev_err(emif->dev, 977 - "%s:NOT Extended temperature capable memory." 978 - "Converting MR4=0x%02x as shutdown event\n", 978 + "%s:NOT Extended temperature capable memory. 
Converting MR4=0x%02x as shutdown event\n", 979 979 __func__, emif->temperature_level); 980 980 /* 981 981 * Temperature far too high - do kernel_power_off() ··· 1316 1318 if (of_find_property(np_emif, "cal-resistor-per-cs", &len)) 1317 1319 dev_info->cal_resistors_per_cs = true; 1318 1320 1319 - if (of_device_is_compatible(np_ddr , "jedec,lpddr2-s4")) 1321 + if (of_device_is_compatible(np_ddr, "jedec,lpddr2-s4")) 1320 1322 dev_info->type = DDR_TYPE_LPDDR2_S4; 1321 - else if (of_device_is_compatible(np_ddr , "jedec,lpddr2-s2")) 1323 + else if (of_device_is_compatible(np_ddr, "jedec,lpddr2-s2")) 1322 1324 dev_info->type = DDR_TYPE_LPDDR2_S2; 1323 1325 1324 1326 of_property_read_u32(np_ddr, "density", &density); ··· 1561 1563 goto error; 1562 1564 1563 1565 irq = platform_get_irq(pdev, 0); 1564 - if (irq < 0) { 1565 - dev_err(emif->dev, "%s: error getting IRQ resource - %d\n", 1566 - __func__, irq); 1566 + if (irq < 0) 1567 1567 goto error; 1568 - } 1569 1568 1570 1569 emif_onetime_settings(emif); 1571 1570 emif_debugfs_init(emif);
+15 -15
drivers/memory/fsl_ifc.c
··· 53 53 54 54 for (i = 0; i < fsl_ifc_ctrl_dev->banks; i++) { 55 55 u32 cspr = ifc_in32(&fsl_ifc_ctrl_dev->gregs->cspr_cs[i].cspr); 56 + 56 57 if (cspr & CSPR_V && (cspr & CSPR_BA) == 57 58 convert_ifc_address(addr_base)) 58 59 return i; ··· 154 153 /* read for chip select error */ 155 154 cs_err = ifc_in32(&ifc->cm_evter_stat); 156 155 if (cs_err) { 157 - dev_err(ctrl->dev, "transaction sent to IFC is not mapped to" 158 - "any memory bank 0x%08X\n", cs_err); 156 + dev_err(ctrl->dev, "transaction sent to IFC is not mapped to any memory bank 0x%08X\n", 157 + cs_err); 159 158 /* clear the chip select error */ 160 159 ifc_out32(IFC_CM_EVTER_STAT_CSER, &ifc->cm_evter_stat); 161 160 ··· 164 163 err_addr = ifc_in32(&ifc->cm_erattr1); 165 164 166 165 if (status & IFC_CM_ERATTR0_ERTYP_READ) 167 - dev_err(ctrl->dev, "Read transaction error" 168 - "CM_ERATTR0 0x%08X\n", status); 166 + dev_err(ctrl->dev, "Read transaction error CM_ERATTR0 0x%08X\n", 167 + status); 169 168 else 170 - dev_err(ctrl->dev, "Write transaction error" 171 - "CM_ERATTR0 0x%08X\n", status); 169 + dev_err(ctrl->dev, "Write transaction error CM_ERATTR0 0x%08X\n", 170 + status); 172 171 173 172 err_axiid = (status & IFC_CM_ERATTR0_ERAID) >> 174 173 IFC_CM_ERATTR0_ERAID_SHIFT; 175 - dev_err(ctrl->dev, "AXI ID of the error" 176 - "transaction 0x%08X\n", err_axiid); 174 + dev_err(ctrl->dev, "AXI ID of the error transaction 0x%08X\n", 175 + err_axiid); 177 176 178 177 err_srcid = (status & IFC_CM_ERATTR0_ESRCID) >> 179 178 IFC_CM_ERATTR0_ESRCID_SHIFT; 180 - dev_err(ctrl->dev, "SRC ID of the error" 181 - "transaction 0x%08X\n", err_srcid); 179 + dev_err(ctrl->dev, "SRC ID of the error transaction 0x%08X\n", 180 + err_srcid); 182 181 183 - dev_err(ctrl->dev, "Transaction Address corresponding to error" 184 - "ERADDR 0x%08X\n", err_addr); 182 + dev_err(ctrl->dev, "Transaction Address corresponding to error ERADDR 0x%08X\n", 183 + err_addr); 185 184 186 185 ret = IRQ_HANDLED; 187 186 } ··· 200 199 * the 
resources needed for the controller only. The 201 200 * resources for the NAND banks themselves are allocated 202 201 * in the chip probe function. 203 - */ 202 + */ 204 203 static int fsl_ifc_ctrl_probe(struct platform_device *dev) 205 204 { 206 205 int ret = 0; ··· 251 250 /* get the Controller level irq */ 252 251 fsl_ifc_ctrl_dev->irq = irq_of_parse_and_map(dev->dev.of_node, 0); 253 252 if (fsl_ifc_ctrl_dev->irq == 0) { 254 - dev_err(&dev->dev, "failed to get irq resource " 255 - "for IFC\n"); 253 + dev_err(&dev->dev, "failed to get irq resource for IFC\n"); 256 254 ret = -ENODEV; 257 255 goto err; 258 256 }
+16 -1
drivers/memory/jz4780-nemc.c
··· 8 8 9 9 #include <linux/clk.h> 10 10 #include <linux/init.h> 11 + #include <linux/io.h> 11 12 #include <linux/math64.h> 12 13 #include <linux/of.h> 13 14 #include <linux/of_address.h> ··· 22 21 23 22 #define NEMC_SMCRn(n) (0x14 + (((n) - 1) * 4)) 24 23 #define NEMC_NFCSR 0x50 24 + 25 + #define NEMC_REG_LEN 0x54 25 26 26 27 #define NEMC_SMCR_SMT BIT(0) 27 28 #define NEMC_SMCR_BW_SHIFT 6 ··· 291 288 nemc->dev = dev; 292 289 293 290 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 294 - nemc->base = devm_ioremap_resource(dev, res); 291 + 292 + /* 293 + * The driver currently only uses the registers up to offset 294 + * NEMC_REG_LEN. Since the EFUSE registers are in the middle of the 295 + * NEMC registers, we only request the registers we will use for now; 296 + * that way the EFUSE driver can probe too. 297 + */ 298 + if (!devm_request_mem_region(dev, res->start, NEMC_REG_LEN, dev_name(dev))) { 299 + dev_err(dev, "unable to request I/O memory region\n"); 300 + return -EBUSY; 301 + } 302 + 303 + nemc->base = devm_ioremap(dev, res->start, NEMC_REG_LEN); 295 304 if (IS_ERR(nemc->base)) { 296 305 dev_err(dev, "failed to get I/O memory\n"); 297 306 return PTR_ERR(nemc->base);
+1 -1
drivers/memory/mtk-smi.c
··· 60 60 61 61 struct mtk_smi_larb_gen { 62 62 int port_in_larb[MTK_LARB_NR_MAX + 1]; 63 - void (*config_port)(struct device *); 63 + void (*config_port)(struct device *dev); 64 64 unsigned int larb_direct_to_common_mask; 65 65 bool has_gals; 66 66 };
+10 -10
drivers/memory/mvebu-devbus.c
··· 124 124 * The bus width is encoded into the register as 0 for 8 bits, 125 125 * and 1 for 16 bits, so we do the necessary conversion here. 126 126 */ 127 - if (r->bus_width == 8) 127 + if (r->bus_width == 8) { 128 128 r->bus_width = 0; 129 - else if (r->bus_width == 16) 129 + } else if (r->bus_width == 16) { 130 130 r->bus_width = 1; 131 - else { 131 + } else { 132 132 dev_err(devbus->dev, "invalid bus width %d\n", r->bus_width); 133 133 return -EINVAL; 134 134 } 135 135 136 136 err = get_timing_param_ps(devbus, node, "devbus,badr-skew-ps", 137 - &r->badr_skew); 137 + &r->badr_skew); 138 138 if (err < 0) 139 139 return err; 140 140 141 141 err = get_timing_param_ps(devbus, node, "devbus,turn-off-ps", 142 - &r->turn_off); 142 + &r->turn_off); 143 143 if (err < 0) 144 144 return err; 145 145 146 146 err = get_timing_param_ps(devbus, node, "devbus,acc-first-ps", 147 - &r->acc_first); 147 + &r->acc_first); 148 148 if (err < 0) 149 149 return err; 150 150 151 151 err = get_timing_param_ps(devbus, node, "devbus,acc-next-ps", 152 - &r->acc_next); 152 + &r->acc_next); 153 153 if (err < 0) 154 154 return err; 155 155 ··· 175 175 } 176 176 177 177 err = get_timing_param_ps(devbus, node, "devbus,ale-wr-ps", 178 - &w->ale_wr); 178 + &w->ale_wr); 179 179 if (err < 0) 180 180 return err; 181 181 182 182 err = get_timing_param_ps(devbus, node, "devbus,wr-low-ps", 183 - &w->wr_low); 183 + &w->wr_low); 184 184 if (err < 0) 185 185 return err; 186 186 187 187 err = get_timing_param_ps(devbus, node, "devbus,wr-high-ps", 188 - &w->wr_high); 188 + &w->wr_high); 189 189 if (err < 0) 190 190 return err; 191 191
+16 -16
drivers/memory/of_memory.c
··· 4 4 * 5 5 * Copyright (C) 2012 Texas Instruments, Inc. 6 6 * Copyright (C) 2019 Samsung Electronics Co., Ltd. 7 + * Copyright (C) 2020 Krzysztof Kozlowski <krzk@kernel.org> 7 8 */ 8 9 9 10 #include <linux/device.h> 10 - #include <linux/platform_device.h> 11 - #include <linux/list.h> 12 11 #include <linux/of.h> 13 12 #include <linux/gfp.h> 14 13 #include <linux/export.h> ··· 18 19 /** 19 20 * of_get_min_tck() - extract min timing values for ddr 20 21 * @np: pointer to ddr device tree node 21 - * @device: device requesting for min timing values 22 + * @dev: device requesting for min timing values 22 23 * 23 24 * Populates the lpddr2_min_tck structure by extracting data 24 25 * from device tree node. Returns a pointer to the populated ··· 26 27 * default min timings provided by JEDEC. 27 28 */ 28 29 const struct lpddr2_min_tck *of_get_min_tck(struct device_node *np, 29 - struct device *dev) 30 + struct device *dev) 30 31 { 31 32 int ret = 0; 32 33 struct lpddr2_min_tck *min; ··· 55 56 return min; 56 57 57 58 default_min_tck: 58 - dev_warn(dev, "%s: using default min-tck values\n", __func__); 59 + dev_warn(dev, "Using default min-tck values\n"); 59 60 return &lpddr2_jedec_min_tck; 60 61 } 61 62 EXPORT_SYMBOL(of_get_min_tck); 62 63 63 64 static int of_do_get_timings(struct device_node *np, 64 - struct lpddr2_timings *tim) 65 + struct lpddr2_timings *tim) 65 66 { 66 67 int ret; 67 68 ··· 83 84 ret |= of_property_read_u32(np, "tZQinit", &tim->tZQinit); 84 85 ret |= of_property_read_u32(np, "tRAS-max-ns", &tim->tRAS_max_ns); 85 86 ret |= of_property_read_u32(np, "tDQSCK-max-derated", 86 - &tim->tDQSCK_max_derated); 87 + &tim->tDQSCK_max_derated); 87 88 88 89 return ret; 89 90 } ··· 102 103 * while populating, returns default timings provided by JEDEC. 
103 104 */ 104 105 const struct lpddr2_timings *of_get_ddr_timings(struct device_node *np_ddr, 105 - struct device *dev, u32 device_type, u32 *nr_frequencies) 106 + struct device *dev, 107 + u32 device_type, 108 + u32 *nr_frequencies) 106 109 { 107 110 struct lpddr2_timings *timings = NULL; 108 111 u32 arr_sz = 0, i = 0; ··· 117 116 tim_compat = "jedec,lpddr2-timings"; 118 117 break; 119 118 default: 120 - dev_warn(dev, "%s: un-supported memory type\n", __func__); 119 + dev_warn(dev, "Unsupported memory type\n"); 121 120 } 122 121 123 122 for_each_child_of_node(np_ddr, np_tim) ··· 146 145 return timings; 147 146 148 147 default_timings: 149 - dev_warn(dev, "%s: using default timings\n", __func__); 148 + dev_warn(dev, "Using default memory timings\n"); 150 149 *nr_frequencies = ARRAY_SIZE(lpddr2_jedec_timings); 151 150 return lpddr2_jedec_timings; 152 151 } ··· 155 154 /** 156 155 * of_lpddr3_get_min_tck() - extract min timing values for lpddr3 157 156 * @np: pointer to ddr device tree node 158 - * @device: device requesting for min timing values 157 + * @dev: device requesting for min timing values 159 158 * 160 159 * Populates the lpddr3_min_tck structure by extracting data 161 160 * from device tree node. 
Returns a pointer to the populated ··· 194 193 ret |= of_property_read_u32(np, "tMRD-min-tck", &min->tMRD); 195 194 196 195 if (ret) { 197 - dev_warn(dev, "%s: errors while parsing min-tck values\n", 198 - __func__); 196 + dev_warn(dev, "Errors while parsing min-tck values\n"); 199 197 devm_kfree(dev, min); 200 198 goto default_min_tck; 201 199 } ··· 202 202 return min; 203 203 204 204 default_min_tck: 205 - dev_warn(dev, "%s: using default min-tck values\n", __func__); 205 + dev_warn(dev, "Using default min-tck values\n"); 206 206 return NULL; 207 207 } 208 208 EXPORT_SYMBOL(of_lpddr3_get_min_tck); ··· 264 264 tim_compat = "jedec,lpddr3-timings"; 265 265 break; 266 266 default: 267 - dev_warn(dev, "%s: un-supported memory type\n", __func__); 267 + dev_warn(dev, "Unsupported memory type\n"); 268 268 } 269 269 270 270 for_each_child_of_node(np_ddr, np_tim) ··· 293 293 return timings; 294 294 295 295 default_timings: 296 - dev_warn(dev, "%s: failed to get timings\n", __func__); 296 + dev_warn(dev, "Failed to get timings\n"); 297 297 *nr_frequencies = 0; 298 298 return NULL; 299 299 }
+11 -10
drivers/memory/of_memory.h
··· 3 3 * OpenFirmware helpers for memory drivers 4 4 * 5 5 * Copyright (C) 2012 Texas Instruments, Inc. 6 + * Copyright (C) 2020 Krzysztof Kozlowski <krzk@kernel.org> 6 7 */ 7 8 8 9 #ifndef __LINUX_MEMORY_OF_REG_H 9 10 #define __LINUX_MEMORY_OF_REG_H 10 11 11 12 #if defined(CONFIG_OF) && defined(CONFIG_DDR) 12 - extern const struct lpddr2_min_tck *of_get_min_tck(struct device_node *np, 13 - struct device *dev); 14 - extern const struct lpddr2_timings 15 - *of_get_ddr_timings(struct device_node *np_ddr, struct device *dev, 16 - u32 device_type, u32 *nr_frequencies); 17 - extern const struct lpddr3_min_tck 18 - *of_lpddr3_get_min_tck(struct device_node *np, struct device *dev); 19 - extern const struct lpddr3_timings 20 - *of_lpddr3_get_ddr_timings(struct device_node *np_ddr, 21 - struct device *dev, u32 device_type, u32 *nr_frequencies); 13 + const struct lpddr2_min_tck *of_get_min_tck(struct device_node *np, 14 + struct device *dev); 15 + const struct lpddr2_timings *of_get_ddr_timings(struct device_node *np_ddr, 16 + struct device *dev, 17 + u32 device_type, u32 *nr_frequencies); 18 + const struct lpddr3_min_tck *of_lpddr3_get_min_tck(struct device_node *np, 19 + struct device *dev); 20 + const struct lpddr3_timings * 21 + of_lpddr3_get_ddr_timings(struct device_node *np_ddr, 22 + struct device *dev, u32 device_type, u32 *nr_frequencies); 22 23 #else 23 24 static inline const struct lpddr2_min_tck 24 25 *of_get_min_tck(struct device_node *np, struct device *dev)
+29 -37
drivers/memory/omap-gpmc.c
··· 29 29 #include <linux/of_platform.h> 30 30 #include <linux/omap-gpmc.h> 31 31 #include <linux/pm_runtime.h> 32 + #include <linux/sizes.h> 32 33 33 34 #include <linux/platform_data/mtd-nand-omap2.h> 34 35 ··· 109 108 #define ENABLE_PREFETCH (0x1 << 7) 110 109 #define DMA_MPU_MODE 2 111 110 112 - #define GPMC_REVISION_MAJOR(l) ((l >> 4) & 0xf) 113 - #define GPMC_REVISION_MINOR(l) (l & 0xf) 111 + #define GPMC_REVISION_MAJOR(l) (((l) >> 4) & 0xf) 112 + #define GPMC_REVISION_MINOR(l) ((l) & 0xf) 114 113 115 114 #define GPMC_HAS_WR_ACCESS 0x1 116 115 #define GPMC_HAS_WR_DATA_MUX_BUS 0x2 ··· 141 140 #define GPMC_CONFIG1_WRITEMULTIPLE_SUPP (1 << 28) 142 141 #define GPMC_CONFIG1_WRITETYPE_ASYNC (0 << 27) 143 142 #define GPMC_CONFIG1_WRITETYPE_SYNC (1 << 27) 144 - #define GPMC_CONFIG1_CLKACTIVATIONTIME(val) ((val & 3) << 25) 143 + #define GPMC_CONFIG1_CLKACTIVATIONTIME(val) (((val) & 3) << 25) 145 144 /** CLKACTIVATIONTIME Max Ticks */ 146 145 #define GPMC_CONFIG1_CLKACTIVATIONTIME_MAX 2 147 - #define GPMC_CONFIG1_PAGE_LEN(val) ((val & 3) << 23) 146 + #define GPMC_CONFIG1_PAGE_LEN(val) (((val) & 3) << 23) 148 147 /** ATTACHEDDEVICEPAGELENGTH Max Value */ 149 148 #define GPMC_CONFIG1_ATTACHEDDEVICEPAGELENGTH_MAX 2 150 149 #define GPMC_CONFIG1_WAIT_READ_MON (1 << 22) 151 150 #define GPMC_CONFIG1_WAIT_WRITE_MON (1 << 21) 152 - #define GPMC_CONFIG1_WAIT_MON_TIME(val) ((val & 3) << 18) 151 + #define GPMC_CONFIG1_WAIT_MON_TIME(val) (((val) & 3) << 18) 153 152 /** WAITMONITORINGTIME Max Ticks */ 154 153 #define GPMC_CONFIG1_WAITMONITORINGTIME_MAX 2 155 - #define GPMC_CONFIG1_WAIT_PIN_SEL(val) ((val & 3) << 16) 156 - #define GPMC_CONFIG1_DEVICESIZE(val) ((val & 3) << 12) 154 + #define GPMC_CONFIG1_WAIT_PIN_SEL(val) (((val) & 3) << 16) 155 + #define GPMC_CONFIG1_DEVICESIZE(val) (((val) & 3) << 12) 157 156 #define GPMC_CONFIG1_DEVICESIZE_16 GPMC_CONFIG1_DEVICESIZE(1) 158 157 /** DEVICESIZE Max Value */ 159 158 #define GPMC_CONFIG1_DEVICESIZE_MAX 1 160 - #define 
GPMC_CONFIG1_DEVICETYPE(val) ((val & 3) << 10) 159 + #define GPMC_CONFIG1_DEVICETYPE(val) (((val) & 3) << 10) 161 160 #define GPMC_CONFIG1_DEVICETYPE_NOR GPMC_CONFIG1_DEVICETYPE(0) 162 - #define GPMC_CONFIG1_MUXTYPE(val) ((val & 3) << 8) 161 + #define GPMC_CONFIG1_MUXTYPE(val) (((val) & 3) << 8) 163 162 #define GPMC_CONFIG1_TIME_PARA_GRAN (1 << 4) 164 - #define GPMC_CONFIG1_FCLK_DIV(val) (val & 3) 163 + #define GPMC_CONFIG1_FCLK_DIV(val) ((val) & 3) 165 164 #define GPMC_CONFIG1_FCLK_DIV2 (GPMC_CONFIG1_FCLK_DIV(1)) 166 165 #define GPMC_CONFIG1_FCLK_DIV3 (GPMC_CONFIG1_FCLK_DIV(2)) 167 166 #define GPMC_CONFIG1_FCLK_DIV4 (GPMC_CONFIG1_FCLK_DIV(3)) ··· 246 245 static unsigned int gpmc_cs_num = GPMC_CS_NUM; 247 246 static unsigned int gpmc_nr_waitpins; 248 247 static resource_size_t phys_base, mem_size; 249 - static unsigned gpmc_capability; 248 + static unsigned int gpmc_capability; 250 249 static void __iomem *gpmc_base; 251 250 252 251 static struct clk *gpmc_l3_clk; ··· 292 291 293 292 /** 294 293 * gpmc_get_clk_period - get period of selected clock domain in ps 295 - * @cs Chip Select Region. 296 - * @cd Clock Domain. 294 + * @cs: Chip Select Region. 295 + * @cd: Clock Domain. 297 296 * 298 297 * GPMC_CS_CONFIG1 GPMCFCLKDIVIDER for cs has to be setup 299 298 * prior to calling this function with GPMC_CD_CLK. 300 299 */ 301 300 static unsigned long gpmc_get_clk_period(int cs, enum gpmc_clk_domain cd) 302 301 { 303 - 304 302 unsigned long tick_ps = gpmc_get_fclk_period(); 305 303 u32 l; 306 304 int div; ··· 319 319 } 320 320 321 321 return tick_ps; 322 - 323 322 } 324 323 325 324 static unsigned int gpmc_ns_to_clk_ticks(unsigned int time_ns, int cs, ··· 410 411 * @reg: GPMC_CS_CONFIGn register offset. 411 412 * @st_bit: Start Bit 412 413 * @end_bit: End Bit. Must be >= @st_bit. 413 - * @ma:x Maximum parameter value (before optional @shift). 414 + * @max: Maximum parameter value (before optional @shift). 414 415 * If 0, maximum is as high as @st_bit and @end_bit allow. 
415 416 * @name: DTS node name, w/o "gpmc," 416 417 * @cd: Clock Domain of timing parameter. ··· 510 511 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 4, 4, "time-para-granularity"); 511 512 GPMC_GET_RAW(GPMC_CS_CONFIG1, 8, 9, "mux-add-data"); 512 513 GPMC_GET_RAW_SHIFT_MAX(GPMC_CS_CONFIG1, 12, 13, 1, 513 - GPMC_CONFIG1_DEVICESIZE_MAX, "device-width"); 514 + GPMC_CONFIG1_DEVICESIZE_MAX, "device-width"); 514 515 GPMC_GET_RAW(GPMC_CS_CONFIG1, 16, 17, "wait-pin"); 515 516 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 21, 21, "wait-on-write"); 516 517 GPMC_GET_RAW_BOOL(GPMC_CS_CONFIG1, 22, 22, "wait-on-read"); ··· 624 625 625 626 l = gpmc_cs_read_reg(cs, reg); 626 627 #ifdef CONFIG_OMAP_GPMC_DEBUG 627 - pr_info( 628 - "GPMC CS%d: %-17s: %3d ticks, %3lu ns (was %3i ticks) %3d ns\n", 629 - cs, name, ticks, gpmc_get_clk_period(cs, cd) * ticks / 1000, 628 + pr_info("GPMC CS%d: %-17s: %3d ticks, %3lu ns (was %3i ticks) %3d ns\n", 629 + cs, name, ticks, gpmc_get_clk_period(cs, cd) * ticks / 1000, 630 630 (l >> st_bit) & mask, time); 631 631 #endif 632 632 l &= ~(mask << st_bit); ··· 660 662 */ 661 663 static int gpmc_calc_waitmonitoring_divider(unsigned int wait_monitoring) 662 664 { 663 - 664 665 int div = gpmc_ns_to_ticks(wait_monitoring); 665 666 666 667 div += GPMC_CONFIG1_WAITMONITORINGTIME_MAX - 1; ··· 671 674 div = 1; 672 675 673 676 return div; 674 - 675 677 } 676 678 677 679 /** ··· 724 728 if (!s->sync_read && !s->sync_write && 725 729 (s->wait_on_read || s->wait_on_write) 726 730 ) { 727 - 728 731 div = gpmc_calc_waitmonitoring_divider(t->wait_monitoring); 729 732 if (div < 0) { 730 733 pr_err("%s: waitmonitoringtime %3d ns too large for greatest gpmcfclkdivider.\n", ··· 953 958 * Make sure we ignore any device offsets from the GPMC partition 954 959 * allocated for the chip select and that the new base confirms 955 960 * to the GPMC 16MB minimum granularity. 
956 - */ 961 + */ 957 962 base &= ~(SZ_16M - 1); 958 963 959 964 gpmc_cs_get_memconf(cs, &old_base, &size); ··· 1082 1087 1083 1088 /** 1084 1089 * gpmc_omap_get_nand_ops - Get the GPMC NAND interface 1085 - * @regs: the GPMC NAND register map exclusive for NAND use. 1090 + * @reg: the GPMC NAND register map exclusive for NAND use. 1086 1091 * @cs: GPMC chip select number on which the NAND sits. The 1087 1092 * register map returned will be specific to this chip select. 1088 1093 * ··· 1237 1242 } 1238 1243 EXPORT_SYMBOL_GPL(gpmc_omap_onenand_set_timings); 1239 1244 1240 - int gpmc_get_client_irq(unsigned irq_config) 1245 + int gpmc_get_client_irq(unsigned int irq_config) 1241 1246 { 1242 1247 if (!gpmc_irq_domain) { 1243 1248 pr_warn("%s called before GPMC IRQ domain available\n", ··· 1460 1465 continue; 1461 1466 gpmc_cs_delete_mem(cs); 1462 1467 } 1463 - 1464 1468 } 1465 1469 1466 1470 static void gpmc_mem_init(void) ··· 1628 1634 /* oe_on */ 1629 1635 temp = dev_t->t_oeasu; 1630 1636 if (mux) 1631 - temp = max_t(u32, temp, 1632 - gpmc_t->adv_rd_off + dev_t->t_aavdh); 1637 + temp = max_t(u32, temp, gpmc_t->adv_rd_off + dev_t->t_aavdh); 1633 1638 gpmc_t->oe_on = gpmc_round_ps_to_ticks(temp); 1634 1639 1635 1640 /* access */ 1636 1641 temp = max_t(u32, dev_t->t_iaa, /* XXX: remove t_iaa in async ? 
*/ 1637 - gpmc_t->oe_on + dev_t->t_oe); 1638 - temp = max_t(u32, temp, 1639 - gpmc_t->cs_on + dev_t->t_ce); 1640 - temp = max_t(u32, temp, 1641 - gpmc_t->adv_on + dev_t->t_aa); 1642 + gpmc_t->oe_on + dev_t->t_oe); 1643 + temp = max_t(u32, temp, gpmc_t->cs_on + dev_t->t_ce); 1644 + temp = max_t(u32, temp, gpmc_t->adv_on + dev_t->t_aa); 1642 1645 gpmc_t->access = gpmc_round_ps_to_ticks(temp); 1643 1646 1644 1647 gpmc_t->oe_off = gpmc_t->access + gpmc_ticks_to_ps(1); ··· 1744 1753 return 0; 1745 1754 } 1746 1755 1747 - /* TODO: remove this function once all peripherals are confirmed to 1756 + /* 1757 + * TODO: remove this function once all peripherals are confirmed to 1748 1758 * work with generic timing. Simultaneously gpmc_cs_set_timings() 1749 1759 * has to be modified to handle timings in ps instead of ns 1750 - */ 1760 + */ 1751 1761 static void gpmc_convert_ps_to_ns(struct gpmc_timings *t) 1752 1762 { 1753 1763 t->cs_on /= 1000; ··· 2081 2089 gpmc_cs_disable_mem(cs); 2082 2090 2083 2091 /* 2084 - * FIXME: gpmc_cs_request() will map the CS to an arbitary 2092 + * FIXME: gpmc_cs_request() will map the CS to an arbitrary 2085 2093 * location in the gpmc address space. When booting with 2086 2094 * device-tree we want the NOR flash to be mapped to the 2087 2095 * location specified in the device-tree blob. So remap the
+8 -11
drivers/memory/pl172.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Memory controller driver for ARM PrimeCell PL172 3 4 * PrimeCell MultiPort Memory Controller (PL172) ··· 7 6 * 8 7 * Based on: 9 8 * TI AEMIF driver, Copyright (C) 2010 - 2013 Texas Instruments Inc. 10 - * 11 - * This file is licensed under the terms of the GNU General Public 12 - * License version 2. This program is licensed "as is" without any 13 - * warranty of any kind, whether express or implied. 14 9 */ 15 10 16 11 #include <linux/amba/bus.h> ··· 21 24 #include <linux/of_platform.h> 22 25 #include <linux/time.h> 23 26 24 - #define MPMC_STATIC_CFG(n) (0x200 + 0x20 * n) 27 + #define MPMC_STATIC_CFG(n) (0x200 + 0x20 * (n)) 25 28 #define MPMC_STATIC_CFG_MW_8BIT 0x0 26 29 #define MPMC_STATIC_CFG_MW_16BIT 0x1 27 30 #define MPMC_STATIC_CFG_MW_32BIT 0x2 ··· 31 34 #define MPMC_STATIC_CFG_EW BIT(8) 32 35 #define MPMC_STATIC_CFG_B BIT(19) 33 36 #define MPMC_STATIC_CFG_P BIT(20) 34 - #define MPMC_STATIC_WAIT_WEN(n) (0x204 + 0x20 * n) 37 + #define MPMC_STATIC_WAIT_WEN(n) (0x204 + 0x20 * (n)) 35 38 #define MPMC_STATIC_WAIT_WEN_MAX 0x0f 36 - #define MPMC_STATIC_WAIT_OEN(n) (0x208 + 0x20 * n) 39 + #define MPMC_STATIC_WAIT_OEN(n) (0x208 + 0x20 * (n)) 37 40 #define MPMC_STATIC_WAIT_OEN_MAX 0x0f 38 - #define MPMC_STATIC_WAIT_RD(n) (0x20c + 0x20 * n) 41 + #define MPMC_STATIC_WAIT_RD(n) (0x20c + 0x20 * (n)) 39 42 #define MPMC_STATIC_WAIT_RD_MAX 0x1f 40 - #define MPMC_STATIC_WAIT_PAGE(n) (0x210 + 0x20 * n) 43 + #define MPMC_STATIC_WAIT_PAGE(n) (0x210 + 0x20 * (n)) 41 44 #define MPMC_STATIC_WAIT_PAGE_MAX 0x1f 42 - #define MPMC_STATIC_WAIT_WR(n) (0x214 + 0x20 * n) 45 + #define MPMC_STATIC_WAIT_WR(n) (0x214 + 0x20 * (n)) 43 46 #define MPMC_STATIC_WAIT_WR_MAX 0x1f 44 - #define MPMC_STATIC_WAIT_TURN(n) (0x218 + 0x20 * n) 47 + #define MPMC_STATIC_WAIT_TURN(n) (0x218 + 0x20 * (n)) 45 48 #define MPMC_STATIC_WAIT_TURN_MAX 0x0f 46 49 47 50 /* Maximum number of static chip selects */
+7
drivers/memory/samsung/Kconfig
··· 23 23 config EXYNOS_SROM 24 24 bool "Exynos SROM controller driver" if COMPILE_TEST 25 25 depends on (ARM && ARCH_EXYNOS) || (COMPILE_TEST && HAS_IOMEM) 26 + help 27 + This adds driver for Samsung Exynos SoC SROM controller. The driver 28 + in basic operation mode only saves and restores SROM registers 29 + during suspend. If however appropriate device tree configuration 30 + is provided, the driver enables support for external memory 31 + or external devices. 32 + If unsure, say Y on devices with Samsung Exynos SoCs. 26 33 27 34 endif
+11 -11
drivers/memory/samsung/exynos-srom.c
··· 47 47 struct exynos_srom_reg_dump *reg_offset; 48 48 }; 49 49 50 - static struct exynos_srom_reg_dump *exynos_srom_alloc_reg_dump( 51 - const unsigned long *rdump, 52 - unsigned long nr_rdump) 50 + static struct exynos_srom_reg_dump * 51 + exynos_srom_alloc_reg_dump(const unsigned long *rdump, 52 + unsigned long nr_rdump) 53 53 { 54 54 struct exynos_srom_reg_dump *rd; 55 55 unsigned int i; ··· 116 116 } 117 117 118 118 srom = devm_kzalloc(&pdev->dev, 119 - sizeof(struct exynos_srom), GFP_KERNEL); 119 + sizeof(struct exynos_srom), GFP_KERNEL); 120 120 if (!srom) 121 121 return -ENOMEM; 122 122 ··· 130 130 platform_set_drvdata(pdev, srom); 131 131 132 132 srom->reg_offset = exynos_srom_alloc_reg_dump(exynos_srom_offsets, 133 - ARRAY_SIZE(exynos_srom_offsets)); 133 + ARRAY_SIZE(exynos_srom_offsets)); 134 134 if (!srom->reg_offset) { 135 135 iounmap(srom->reg_base); 136 136 return -ENOMEM; ··· 157 157 158 158 #ifdef CONFIG_PM_SLEEP 159 159 static void exynos_srom_save(void __iomem *base, 160 - struct exynos_srom_reg_dump *rd, 161 - unsigned int num_regs) 160 + struct exynos_srom_reg_dump *rd, 161 + unsigned int num_regs) 162 162 { 163 163 for (; num_regs > 0; --num_regs, ++rd) 164 164 rd->value = readl(base + rd->offset); 165 165 } 166 166 167 167 static void exynos_srom_restore(void __iomem *base, 168 - const struct exynos_srom_reg_dump *rd, 169 - unsigned int num_regs) 168 + const struct exynos_srom_reg_dump *rd, 169 + unsigned int num_regs) 170 170 { 171 171 for (; num_regs > 0; --num_regs, ++rd) 172 172 writel(rd->value, base + rd->offset); ··· 177 177 struct exynos_srom *srom = dev_get_drvdata(dev); 178 178 179 179 exynos_srom_save(srom->reg_base, srom->reg_offset, 180 - ARRAY_SIZE(exynos_srom_offsets)); 180 + ARRAY_SIZE(exynos_srom_offsets)); 181 181 return 0; 182 182 } 183 183 ··· 186 186 struct exynos_srom *srom = dev_get_drvdata(dev); 187 187 188 188 exynos_srom_restore(srom->reg_base, srom->reg_offset, 189 - ARRAY_SIZE(exynos_srom_offsets)); 189 + 
ARRAY_SIZE(exynos_srom_offsets)); 190 190 return 0; 191 191 } 192 192 #endif
+10 -5
drivers/memory/samsung/exynos5422-dmc.c
··· 270 270 * This function switches between these banks according to the 271 271 * currently used clock source. 272 272 */ 273 - static void exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set) 273 + static int exynos5_switch_timing_regs(struct exynos5_dmc *dmc, bool set) 274 274 { 275 275 unsigned int reg; 276 276 int ret; 277 277 278 278 ret = regmap_read(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, &reg); 279 + if (ret) 280 + return ret; 279 281 280 282 if (set) 281 283 reg |= EXYNOS5_TIMING_SET_SWI; ··· 285 283 reg &= ~EXYNOS5_TIMING_SET_SWI; 286 284 287 285 regmap_write(dmc->clk_regmap, CDREX_LPDDR3PHY_CON3, reg); 286 + 287 + return 0; 288 288 } 289 289 290 290 /** ··· 520 516 /* 521 517 * Delays are long enough, so use them for the new coming clock. 522 518 */ 523 - exynos5_switch_timing_regs(dmc, USE_MX_MSPLL_TIMINGS); 519 + ret = exynos5_switch_timing_regs(dmc, USE_MX_MSPLL_TIMINGS); 524 520 525 521 return ret; 526 522 } ··· 581 577 582 578 clk_set_rate(dmc->fout_bpll, target_rate); 583 579 584 - exynos5_switch_timing_regs(dmc, USE_BPLL_TIMINGS); 580 + ret = exynos5_switch_timing_regs(dmc, USE_BPLL_TIMINGS); 581 + if (ret) 582 + goto disable_clocks; 585 583 586 584 ret = clk_set_parent(dmc->mout_mclk_cdrex, dmc->mout_bpll); 587 585 if (ret) ··· 1398 1392 return PTR_ERR(dmc->base_drexi1); 1399 1393 1400 1394 dmc->clk_regmap = syscon_regmap_lookup_by_phandle(np, 1401 - "samsung,syscon-clk"); 1395 + "samsung,syscon-clk"); 1402 1396 if (IS_ERR(dmc->clk_regmap)) 1403 1397 return PTR_ERR(dmc->clk_regmap); 1404 1398 ··· 1476 1470 1477 1471 exynos5_dmc_df_profile.polling_ms = 500; 1478 1472 } 1479 - 1480 1473 1481 1474 dmc->df = devm_devfreq_add_device(dev, &exynos5_dmc_df_profile, 1482 1475 DEVFREQ_GOV_SIMPLE_ONDEMAND,
+14
drivers/memory/tegra/Kconfig
··· 36 36 Tegra124 chips. The EMC controls the external DRAM on the board. 37 37 This driver is required to change memory timings / clock rate for 38 38 external memory. 39 + 40 + config TEGRA210_EMC_TABLE 41 + bool 42 + depends on ARCH_TEGRA_210_SOC 43 + 44 + config TEGRA210_EMC 45 + tristate "NVIDIA Tegra210 External Memory Controller driver" 46 + depends on TEGRA_MC && ARCH_TEGRA_210_SOC 47 + select TEGRA210_EMC_TABLE 48 + help 49 + This driver is for the External Memory Controller (EMC) found on 50 + Tegra210 chips. The EMC controls the external DRAM on the board. 51 + This driver is required to change memory timings / clock rate for 52 + external memory.
+4
drivers/memory/tegra/Makefile
··· 13 13 obj-$(CONFIG_TEGRA20_EMC) += tegra20-emc.o 14 14 obj-$(CONFIG_TEGRA30_EMC) += tegra30-emc.o 15 15 obj-$(CONFIG_TEGRA124_EMC) += tegra124-emc.o 16 + obj-$(CONFIG_TEGRA210_EMC_TABLE) += tegra210-emc-table.o 17 + obj-$(CONFIG_TEGRA210_EMC) += tegra210-emc.o 16 18 obj-$(CONFIG_ARCH_TEGRA_186_SOC) += tegra186.o tegra186-emc.o 17 19 obj-$(CONFIG_ARCH_TEGRA_194_SOC) += tegra186.o tegra186-emc.o 20 + 21 + tegra210-emc-y := tegra210-emc-core.o tegra210-emc-cc-r21021.o
+1
drivers/memory/tegra/mc.h
··· 34 34 #define MC_EMEM_ARB_TIMING_W2W 0xbc 35 35 #define MC_EMEM_ARB_TIMING_R2W 0xc0 36 36 #define MC_EMEM_ARB_TIMING_W2R 0xc4 37 + #define MC_EMEM_ARB_MISC2 0xc8 37 38 #define MC_EMEM_ARB_DA_TURNS 0xd0 38 39 #define MC_EMEM_ARB_DA_COVERS 0xd4 39 40 #define MC_EMEM_ARB_MISC0 0xd8
+4 -3
drivers/memory/tegra/tegra124-emc.c
··· 984 984 985 985 static const struct of_device_id tegra_emc_of_match[] = { 986 986 { .compatible = "nvidia,tegra124-emc" }, 987 + { .compatible = "nvidia,tegra132-emc" }, 987 988 {} 988 989 }; 989 990 ··· 1179 1178 return; 1180 1179 } 1181 1180 1182 - debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, emc, 1181 + debugfs_create_file("available_rates", 0444, emc->debugfs.root, emc, 1183 1182 &tegra_emc_debug_available_rates_fops); 1184 - debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 1183 + debugfs_create_file("min_rate", 0644, emc->debugfs.root, 1185 1184 emc, &tegra_emc_debug_min_rate_fops); 1186 - debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 1185 + debugfs_create_file("max_rate", 0644, emc->debugfs.root, 1187 1186 emc, &tegra_emc_debug_max_rate_fops); 1188 1187 } 1189 1188
+13 -12
drivers/memory/tegra/tegra186-emc.c
··· 185 185 if (IS_ERR(emc->clk)) { 186 186 err = PTR_ERR(emc->clk); 187 187 dev_err(&pdev->dev, "failed to get EMC clock: %d\n", err); 188 - return err; 188 + goto put_bpmp; 189 189 } 190 190 191 191 platform_set_drvdata(pdev, emc); ··· 201 201 err = tegra_bpmp_transfer(emc->bpmp, &msg); 202 202 if (err < 0) { 203 203 dev_err(&pdev->dev, "failed to EMC DVFS pairs: %d\n", err); 204 - return err; 204 + goto put_bpmp; 205 205 } 206 206 207 207 emc->debugfs.min_rate = ULONG_MAX; ··· 211 211 212 212 emc->dvfs = devm_kmalloc_array(&pdev->dev, emc->num_dvfs, 213 213 sizeof(*emc->dvfs), GFP_KERNEL); 214 - if (!emc->dvfs) 215 - return -ENOMEM; 214 + if (!emc->dvfs) { 215 + err = -ENOMEM; 216 + goto put_bpmp; 217 + } 216 218 217 219 dev_dbg(&pdev->dev, "%u DVFS pairs:\n", emc->num_dvfs); 218 220 ··· 239 237 "failed to set rate range [%lu-%lu] for %pC\n", 240 238 emc->debugfs.min_rate, emc->debugfs.max_rate, 241 239 emc->clk); 242 - return err; 240 + goto put_bpmp; 243 241 } 244 242 245 243 emc->debugfs.root = debugfs_create_dir("emc", NULL); 246 - if (!emc->debugfs.root) { 247 - dev_err(&pdev->dev, "failed to create debugfs directory\n"); 248 - return 0; 249 - } 250 - 251 244 debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, 252 245 emc, &tegra186_emc_debug_available_rates_fops); 253 246 debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, ··· 251 254 emc, &tegra186_emc_debug_max_rate_fops); 252 255 253 256 return 0; 257 + 258 + put_bpmp: 259 + tegra_bpmp_put(emc->bpmp); 260 + return err; 254 261 } 255 262 256 263 static int tegra186_emc_remove(struct platform_device *pdev) ··· 268 267 } 269 268 270 269 static const struct of_device_id tegra186_emc_of_match[] = { 271 - #if defined(CONFIG_ARCH_TEGRA186_SOC) 270 + #if defined(CONFIG_ARCH_TEGRA_186_SOC) 272 271 { .compatible = "nvidia,tegra186-emc" }, 273 272 #endif 274 - #if defined(CONFIG_ARCH_TEGRA194_SOC) 273 + #if defined(CONFIG_ARCH_TEGRA_194_SOC) 275 274 { .compatible = 
"nvidia,tegra194-emc" }, 276 275 #endif 277 276 { /* sentinel */ }
+2 -2
drivers/memory/tegra/tegra186.c
··· 1570 1570 }; 1571 1571 MODULE_DEVICE_TABLE(of, tegra186_mc_of_match); 1572 1572 1573 - static int tegra186_mc_suspend(struct device *dev) 1573 + static int __maybe_unused tegra186_mc_suspend(struct device *dev) 1574 1574 { 1575 1575 return 0; 1576 1576 } 1577 1577 1578 - static int tegra186_mc_resume(struct device *dev) 1578 + static int __maybe_unused tegra186_mc_resume(struct device *dev) 1579 1579 { 1580 1580 struct tegra186_mc *mc = dev_get_drvdata(dev); 1581 1581
+14 -20
drivers/memory/tegra/tegra20-emc.c
··· 7 7 8 8 #include <linux/clk.h> 9 9 #include <linux/clk/tegra.h> 10 - #include <linux/completion.h> 11 10 #include <linux/debugfs.h> 12 11 #include <linux/err.h> 13 12 #include <linux/interrupt.h> 14 13 #include <linux/io.h> 14 + #include <linux/iopoll.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/module.h> 17 17 #include <linux/of.h> ··· 144 144 145 145 struct tegra_emc { 146 146 struct device *dev; 147 - struct completion clk_handshake_complete; 148 147 struct notifier_block clk_nb; 149 148 struct clk *clk; 150 149 void __iomem *regs; ··· 161 162 static irqreturn_t tegra_emc_isr(int irq, void *data) 162 163 { 163 164 struct tegra_emc *emc = data; 164 - u32 intmask = EMC_REFRESH_OVERFLOW_INT | EMC_CLKCHANGE_COMPLETE_INT; 165 + u32 intmask = EMC_REFRESH_OVERFLOW_INT; 165 166 u32 status; 166 167 167 168 status = readl_relaxed(emc->regs + EMC_INTSTATUS) & intmask; 168 169 if (!status) 169 170 return IRQ_NONE; 170 - 171 - /* notify about EMC-CAR handshake completion */ 172 - if (status & EMC_CLKCHANGE_COMPLETE_INT) 173 - complete(&emc->clk_handshake_complete); 174 171 175 172 /* notify about HW problem */ 176 173 if (status & EMC_REFRESH_OVERFLOW_INT) ··· 219 224 /* wait until programming has settled */ 220 225 readl_relaxed(emc->regs + emc_timing_registers[i - 1]); 221 226 222 - reinit_completion(&emc->clk_handshake_complete); 223 - 224 227 return 0; 225 228 } 226 229 227 230 static int emc_complete_timing_change(struct tegra_emc *emc, bool flush) 228 231 { 229 - unsigned long timeout; 232 + int err; 233 + u32 v; 230 234 231 235 dev_dbg(emc->dev, "%s: flush %d\n", __func__, flush); 232 236 ··· 236 242 return 0; 237 243 } 238 244 239 - timeout = wait_for_completion_timeout(&emc->clk_handshake_complete, 240 - msecs_to_jiffies(100)); 241 - if (timeout == 0) { 242 - dev_err(emc->dev, "EMC-CAR handshake failed\n"); 243 - return -EIO; 245 + err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_INTSTATUS, v, 246 + v & EMC_CLKCHANGE_COMPLETE_INT, 247 + 1, 
100); 248 + if (err) { 249 + dev_err(emc->dev, "emc-car handshake timeout: %d\n", err); 250 + return err; 244 251 } 245 252 246 253 return 0; ··· 407 412 408 413 static int emc_setup_hw(struct tegra_emc *emc) 409 414 { 410 - u32 intmask = EMC_REFRESH_OVERFLOW_INT | EMC_CLKCHANGE_COMPLETE_INT; 415 + u32 intmask = EMC_REFRESH_OVERFLOW_INT; 411 416 u32 emc_cfg, emc_dbg; 412 417 413 418 emc_cfg = readl_relaxed(emc->regs + EMC_CFG_2); ··· 642 647 return; 643 648 } 644 649 645 - debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, 650 + debugfs_create_file("available_rates", 0444, emc->debugfs.root, 646 651 emc, &tegra_emc_debug_available_rates_fops); 647 - debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 652 + debugfs_create_file("min_rate", 0644, emc->debugfs.root, 648 653 emc, &tegra_emc_debug_min_rate_fops); 649 - debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 654 + debugfs_create_file("max_rate", 0644, emc->debugfs.root, 650 655 emc, &tegra_emc_debug_max_rate_fops); 651 656 } 652 657 ··· 681 686 return -ENOMEM; 682 687 } 683 688 684 - init_completion(&emc->clk_handshake_complete); 685 689 emc->clk_nb.notifier_call = tegra_emc_clk_change_notify; 686 690 emc->dev = &pdev->dev; 687 691
+1775
drivers/memory/tegra/tegra210-emc-cc-r21021.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #include <linux/kernel.h> 7 + #include <linux/io.h> 8 + #include <linux/clk.h> 9 + #include <linux/delay.h> 10 + #include <linux/of.h> 11 + 12 + #include <soc/tegra/mc.h> 13 + 14 + #include "tegra210-emc.h" 15 + #include "tegra210-mc.h" 16 + 17 + /* 18 + * Enable flags for specifying verbosity. 19 + */ 20 + #define INFO (1 << 0) 21 + #define STEPS (1 << 1) 22 + #define SUB_STEPS (1 << 2) 23 + #define PRELOCK (1 << 3) 24 + #define PRELOCK_STEPS (1 << 4) 25 + #define ACTIVE_EN (1 << 5) 26 + #define PRAMP_UP (1 << 6) 27 + #define PRAMP_DN (1 << 7) 28 + #define EMA_WRITES (1 << 10) 29 + #define EMA_UPDATES (1 << 11) 30 + #define PER_TRAIN (1 << 16) 31 + #define CC_PRINT (1 << 17) 32 + #define CCFIFO (1 << 29) 33 + #define REGS (1 << 30) 34 + #define REG_LISTS (1 << 31) 35 + 36 + #define emc_dbg(emc, flags, ...) dev_dbg(emc->dev, __VA_ARGS__) 37 + 38 + #define DVFS_CLOCK_CHANGE_VERSION 21021 39 + #define EMC_PRELOCK_VERSION 2101 40 + 41 + enum { 42 + DVFS_SEQUENCE = 1, 43 + WRITE_TRAINING_SEQUENCE = 2, 44 + PERIODIC_TRAINING_SEQUENCE = 3, 45 + DVFS_PT1 = 10, 46 + DVFS_UPDATE = 11, 47 + TRAINING_PT1 = 12, 48 + TRAINING_UPDATE = 13, 49 + PERIODIC_TRAINING_UPDATE = 14 50 + }; 51 + 52 + /* 53 + * PTFV defines - basically just indexes into the per table PTFV array. 
54 + */ 55 + #define PTFV_DQSOSC_MOVAVG_C0D0U0_INDEX 0 56 + #define PTFV_DQSOSC_MOVAVG_C0D0U1_INDEX 1 57 + #define PTFV_DQSOSC_MOVAVG_C0D1U0_INDEX 2 58 + #define PTFV_DQSOSC_MOVAVG_C0D1U1_INDEX 3 59 + #define PTFV_DQSOSC_MOVAVG_C1D0U0_INDEX 4 60 + #define PTFV_DQSOSC_MOVAVG_C1D0U1_INDEX 5 61 + #define PTFV_DQSOSC_MOVAVG_C1D1U0_INDEX 6 62 + #define PTFV_DQSOSC_MOVAVG_C1D1U1_INDEX 7 63 + #define PTFV_DVFS_SAMPLES_INDEX 9 64 + #define PTFV_MOVAVG_WEIGHT_INDEX 10 65 + #define PTFV_CONFIG_CTRL_INDEX 11 66 + 67 + #define PTFV_CONFIG_CTRL_USE_PREVIOUS_EMA (1 << 0) 68 + 69 + /* 70 + * Do arithmetic in fixed point. 71 + */ 72 + #define MOVAVG_PRECISION_FACTOR 100 73 + 74 + /* 75 + * The division portion of the average operation. 76 + */ 77 + #define __AVERAGE_PTFV(dev) \ 78 + ({ next->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] = \ 79 + next->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] / \ 80 + next->ptfv_list[PTFV_DVFS_SAMPLES_INDEX]; }) 81 + 82 + /* 83 + * Convert val to fixed point and add it to the temporary average. 84 + */ 85 + #define __INCREMENT_PTFV(dev, val) \ 86 + ({ next->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] += \ 87 + ((val) * MOVAVG_PRECISION_FACTOR); }) 88 + 89 + /* 90 + * Convert a moving average back to integral form and return the value. 91 + */ 92 + #define __MOVAVG_AC(timing, dev) \ 93 + ((timing)->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX] / \ 94 + MOVAVG_PRECISION_FACTOR) 95 + 96 + /* Weighted update. */ 97 + #define __WEIGHTED_UPDATE_PTFV(dev, nval) \ 98 + do { \ 99 + int w = PTFV_MOVAVG_WEIGHT_INDEX; \ 100 + int dqs = PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX; \ 101 + \ 102 + next->ptfv_list[dqs] = \ 103 + ((nval * MOVAVG_PRECISION_FACTOR) + \ 104 + (next->ptfv_list[dqs] * \ 105 + next->ptfv_list[w])) / \ 106 + (next->ptfv_list[w] + 1); \ 107 + \ 108 + emc_dbg(emc, EMA_UPDATES, "%s: (s=%lu) EMA: %u\n", \ 109 + __stringify(dev), nval, next->ptfv_list[dqs]); \ 110 + } while (0) 111 + 112 + /* Access a particular average. 
*/ 113 + #define __MOVAVG(timing, dev) \ 114 + ((timing)->ptfv_list[PTFV_DQSOSC_MOVAVG_ ## dev ## _INDEX]) 115 + 116 + static u32 update_clock_tree_delay(struct tegra210_emc *emc, int type) 117 + { 118 + bool periodic_training_update = type == PERIODIC_TRAINING_UPDATE; 119 + struct tegra210_emc_timing *last = emc->last; 120 + struct tegra210_emc_timing *next = emc->next; 121 + u32 last_timing_rate_mhz = last->rate / 1000; 122 + u32 next_timing_rate_mhz = next->rate / 1000; 123 + bool dvfs_update = type == DVFS_UPDATE; 124 + s32 tdel = 0, tmdel = 0, adel = 0; 125 + bool dvfs_pt1 = type == DVFS_PT1; 126 + unsigned long cval = 0; 127 + u32 temp[2][2], value; 128 + unsigned int i; 129 + 130 + /* 131 + * Dev0 MSB. 132 + */ 133 + if (dvfs_pt1 || periodic_training_update) { 134 + value = tegra210_emc_mrr_read(emc, 2, 19); 135 + 136 + for (i = 0; i < emc->num_channels; i++) { 137 + temp[i][0] = (value & 0x00ff) << 8; 138 + temp[i][1] = (value & 0xff00) << 0; 139 + value >>= 16; 140 + } 141 + 142 + /* 143 + * Dev0 LSB. 144 + */ 145 + value = tegra210_emc_mrr_read(emc, 2, 18); 146 + 147 + for (i = 0; i < emc->num_channels; i++) { 148 + temp[i][0] |= (value & 0x00ff) >> 0; 149 + temp[i][1] |= (value & 0xff00) >> 8; 150 + value >>= 16; 151 + } 152 + } 153 + 154 + if (dvfs_pt1 || periodic_training_update) { 155 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 156 + cval *= 1000000; 157 + cval /= last_timing_rate_mhz * 2 * temp[0][0]; 158 + } 159 + 160 + if (dvfs_pt1) 161 + __INCREMENT_PTFV(C0D0U0, cval); 162 + else if (dvfs_update) 163 + __AVERAGE_PTFV(C0D0U0); 164 + else if (periodic_training_update) 165 + __WEIGHTED_UPDATE_PTFV(C0D0U0, cval); 166 + 167 + if (dvfs_update || periodic_training_update) { 168 + tdel = next->current_dram_clktree[C0D0U0] - 169 + __MOVAVG_AC(next, C0D0U0); 170 + tmdel = (tdel < 0) ? 
-1 * tdel : tdel; 171 + adel = tmdel; 172 + 173 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 174 + next->tree_margin) 175 + next->current_dram_clktree[C0D0U0] = 176 + __MOVAVG_AC(next, C0D0U0); 177 + } 178 + 179 + if (dvfs_pt1 || periodic_training_update) { 180 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 181 + cval *= 1000000; 182 + cval /= last_timing_rate_mhz * 2 * temp[0][1]; 183 + } 184 + 185 + if (dvfs_pt1) 186 + __INCREMENT_PTFV(C0D0U1, cval); 187 + else if (dvfs_update) 188 + __AVERAGE_PTFV(C0D0U1); 189 + else if (periodic_training_update) 190 + __WEIGHTED_UPDATE_PTFV(C0D0U1, cval); 191 + 192 + if (dvfs_update || periodic_training_update) { 193 + tdel = next->current_dram_clktree[C0D0U1] - 194 + __MOVAVG_AC(next, C0D0U1); 195 + tmdel = (tdel < 0) ? -1 * tdel : tdel; 196 + 197 + if (tmdel > adel) 198 + adel = tmdel; 199 + 200 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 201 + next->tree_margin) 202 + next->current_dram_clktree[C0D0U1] = 203 + __MOVAVG_AC(next, C0D0U1); 204 + } 205 + 206 + if (emc->num_channels > 1) { 207 + if (dvfs_pt1 || periodic_training_update) { 208 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 209 + cval *= 1000000; 210 + cval /= last_timing_rate_mhz * 2 * temp[1][0]; 211 + } 212 + 213 + if (dvfs_pt1) 214 + __INCREMENT_PTFV(C1D0U0, cval); 215 + else if (dvfs_update) 216 + __AVERAGE_PTFV(C1D0U0); 217 + else if (periodic_training_update) 218 + __WEIGHTED_UPDATE_PTFV(C1D0U0, cval); 219 + 220 + if (dvfs_update || periodic_training_update) { 221 + tdel = next->current_dram_clktree[C1D0U0] - 222 + __MOVAVG_AC(next, C1D0U0); 223 + tmdel = (tdel < 0) ? 
-1 * tdel : tdel; 224 + 225 + if (tmdel > adel) 226 + adel = tmdel; 227 + 228 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 229 + next->tree_margin) 230 + next->current_dram_clktree[C1D0U0] = 231 + __MOVAVG_AC(next, C1D0U0); 232 + } 233 + 234 + if (dvfs_pt1 || periodic_training_update) { 235 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 236 + cval *= 1000000; 237 + cval /= last_timing_rate_mhz * 2 * temp[1][1]; 238 + } 239 + 240 + if (dvfs_pt1) 241 + __INCREMENT_PTFV(C1D0U1, cval); 242 + else if (dvfs_update) 243 + __AVERAGE_PTFV(C1D0U1); 244 + else if (periodic_training_update) 245 + __WEIGHTED_UPDATE_PTFV(C1D0U1, cval); 246 + 247 + if (dvfs_update || periodic_training_update) { 248 + tdel = next->current_dram_clktree[C1D0U1] - 249 + __MOVAVG_AC(next, C1D0U1); 250 + tmdel = (tdel < 0) ? -1 * tdel : tdel; 251 + 252 + if (tmdel > adel) 253 + adel = tmdel; 254 + 255 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 256 + next->tree_margin) 257 + next->current_dram_clktree[C1D0U1] = 258 + __MOVAVG_AC(next, C1D0U1); 259 + } 260 + } 261 + 262 + if (emc->num_devices < 2) 263 + goto done; 264 + 265 + /* 266 + * Dev1 MSB. 267 + */ 268 + if (dvfs_pt1 || periodic_training_update) { 269 + value = tegra210_emc_mrr_read(emc, 1, 19); 270 + 271 + for (i = 0; i < emc->num_channels; i++) { 272 + temp[i][0] = (value & 0x00ff) << 8; 273 + temp[i][1] = (value & 0xff00) << 0; 274 + value >>= 16; 275 + } 276 + 277 + /* 278 + * Dev1 LSB. 
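*/

Every lane block in update_clock_tree_delay() above repeats one pattern: turn a DQS oscillator count into a delay sample (cval), fold it into the moving average, then adopt the new average only when the drift, scaled by the target frequency, exceeds the table margin. A condensed sketch of those two computations (hypothetical names; osc_clocks stands in for the result of tegra210_emc_actual_osc_clocks()):

```c
#include <stdint.h>

/* Delay sample from an oscillator count (cf. the cval computation above). */
static uint32_t clktree_sample(uint32_t osc_clocks, uint32_t rate_mhz,
			       uint32_t count)
{
	uint64_t cval = (uint64_t)osc_clocks * 1000000;

	return (uint32_t)(cval / (rate_mhz * 2 * count));
}

/* True if the drift tdel exceeds the table margin at rate_mhz. */
static int clktree_exceeds_margin(int32_t tdel, uint32_t rate_mhz,
				  uint32_t margin)
{
	uint32_t tmdel = tdel < 0 ? -tdel : tdel;

	return tmdel * 128 * rate_mhz / 1000000 > margin;
}
```

/*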
279 + */ 280 + value = tegra210_emc_mrr_read(emc, 1, 18); 281 + 282 + for (i = 0; i < emc->num_channels; i++) { 283 + temp[i][0] |= (value & 0x00ff) >> 0; 284 + temp[i][1] |= (value & 0xff00) >> 8; 285 + value >>= 16; 286 + } 287 + } 288 + 289 + if (dvfs_pt1 || periodic_training_update) { 290 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 291 + cval *= 1000000; 292 + cval /= last_timing_rate_mhz * 2 * temp[0][0]; 293 + } 294 + 295 + if (dvfs_pt1) 296 + __INCREMENT_PTFV(C0D1U0, cval); 297 + else if (dvfs_update) 298 + __AVERAGE_PTFV(C0D1U0); 299 + else if (periodic_training_update) 300 + __WEIGHTED_UPDATE_PTFV(C0D1U0, cval); 301 + 302 + if (dvfs_update || periodic_training_update) { 303 + tdel = next->current_dram_clktree[C0D1U0] - 304 + __MOVAVG_AC(next, C0D1U0); 305 + tmdel = (tdel < 0) ? -1 * tdel : tdel; 306 + 307 + if (tmdel > adel) 308 + adel = tmdel; 309 + 310 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 311 + next->tree_margin) 312 + next->current_dram_clktree[C0D1U0] = 313 + __MOVAVG_AC(next, C0D1U0); 314 + } 315 + 316 + if (dvfs_pt1 || periodic_training_update) { 317 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 318 + cval *= 1000000; 319 + cval /= last_timing_rate_mhz * 2 * temp[0][1]; 320 + } 321 + 322 + if (dvfs_pt1) 323 + __INCREMENT_PTFV(C0D1U1, cval); 324 + else if (dvfs_update) 325 + __AVERAGE_PTFV(C0D1U1); 326 + else if (periodic_training_update) 327 + __WEIGHTED_UPDATE_PTFV(C0D1U1, cval); 328 + 329 + if (dvfs_update || periodic_training_update) { 330 + tdel = next->current_dram_clktree[C0D1U1] - 331 + __MOVAVG_AC(next, C0D1U1); 332 + tmdel = (tdel < 0) ? 
-1 * tdel : tdel; 333 + 334 + if (tmdel > adel) 335 + adel = tmdel; 336 + 337 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 338 + next->tree_margin) 339 + next->current_dram_clktree[C0D1U1] = 340 + __MOVAVG_AC(next, C0D1U1); 341 + } 342 + 343 + if (emc->num_channels > 1) { 344 + if (dvfs_pt1 || periodic_training_update) { 345 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 346 + cval *= 1000000; 347 + cval /= last_timing_rate_mhz * 2 * temp[1][0]; 348 + } 349 + 350 + if (dvfs_pt1) 351 + __INCREMENT_PTFV(C1D1U0, cval); 352 + else if (dvfs_update) 353 + __AVERAGE_PTFV(C1D1U0); 354 + else if (periodic_training_update) 355 + __WEIGHTED_UPDATE_PTFV(C1D1U0, cval); 356 + 357 + if (dvfs_update || periodic_training_update) { 358 + tdel = next->current_dram_clktree[C1D1U0] - 359 + __MOVAVG_AC(next, C1D1U0); 360 + tmdel = (tdel < 0) ? -1 * tdel : tdel; 361 + 362 + if (tmdel > adel) 363 + adel = tmdel; 364 + 365 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 366 + next->tree_margin) 367 + next->current_dram_clktree[C1D1U0] = 368 + __MOVAVG_AC(next, C1D1U0); 369 + } 370 + 371 + if (dvfs_pt1 || periodic_training_update) { 372 + cval = tegra210_emc_actual_osc_clocks(last->run_clocks); 373 + cval *= 1000000; 374 + cval /= last_timing_rate_mhz * 2 * temp[1][1]; 375 + } 376 + 377 + if (dvfs_pt1) 378 + __INCREMENT_PTFV(C1D1U1, cval); 379 + else if (dvfs_update) 380 + __AVERAGE_PTFV(C1D1U1); 381 + else if (periodic_training_update) 382 + __WEIGHTED_UPDATE_PTFV(C1D1U1, cval); 383 + 384 + if (dvfs_update || periodic_training_update) { 385 + tdel = next->current_dram_clktree[C1D1U1] - 386 + __MOVAVG_AC(next, C1D1U1); 387 + tmdel = (tdel < 0) ? 
-1 * tdel : tdel; 388 + 389 + if (tmdel > adel) 390 + adel = tmdel; 391 + 392 + if (tmdel * 128 * next_timing_rate_mhz / 1000000 > 393 + next->tree_margin) 394 + next->current_dram_clktree[C1D1U1] = 395 + __MOVAVG_AC(next, C1D1U1); 396 + } 397 + } 398 + 399 + done: 400 + return adel; 401 + } 402 + 403 + static u32 periodic_compensation_handler(struct tegra210_emc *emc, u32 type, 404 + struct tegra210_emc_timing *last, 405 + struct tegra210_emc_timing *next) 406 + { 407 + #define __COPY_EMA(nt, lt, dev) \ 408 + ({ __MOVAVG(nt, dev) = __MOVAVG(lt, dev) * \ 409 + (nt)->ptfv_list[PTFV_DVFS_SAMPLES_INDEX]; }) 410 + 411 + u32 i, adel = 0, samples = next->ptfv_list[PTFV_DVFS_SAMPLES_INDEX]; 412 + u32 delay; 413 + 414 + delay = tegra210_emc_actual_osc_clocks(last->run_clocks); 415 + delay *= 1000; 416 + delay = 2 + (delay / last->rate); 417 + 418 + if (!next->periodic_training) 419 + return 0; 420 + 421 + if (type == DVFS_SEQUENCE) { 422 + if (last->periodic_training && 423 + (next->ptfv_list[PTFV_CONFIG_CTRL_INDEX] & 424 + PTFV_CONFIG_CTRL_USE_PREVIOUS_EMA)) { 425 + /* 426 + * If the previous frequency was using periodic 427 + * calibration then we can reuse the previous 428 + * frequency's EMA data. 
429 + */ 430 + __COPY_EMA(next, last, C0D0U0); 431 + __COPY_EMA(next, last, C0D0U1); 432 + __COPY_EMA(next, last, C1D0U0); 433 + __COPY_EMA(next, last, C1D0U1); 434 + __COPY_EMA(next, last, C0D1U0); 435 + __COPY_EMA(next, last, C0D1U1); 436 + __COPY_EMA(next, last, C1D1U0); 437 + __COPY_EMA(next, last, C1D1U1); 438 + } else { 439 + /* Reset the EMA. */ 440 + __MOVAVG(next, C0D0U0) = 0; 441 + __MOVAVG(next, C0D0U1) = 0; 442 + __MOVAVG(next, C1D0U0) = 0; 443 + __MOVAVG(next, C1D0U1) = 0; 444 + __MOVAVG(next, C0D1U0) = 0; 445 + __MOVAVG(next, C0D1U1) = 0; 446 + __MOVAVG(next, C1D1U0) = 0; 447 + __MOVAVG(next, C1D1U1) = 0; 448 + 449 + for (i = 0; i < samples; i++) { 450 + tegra210_emc_start_periodic_compensation(emc); 451 + udelay(delay); 452 + 453 + /* 454 + * Generate next sample of data. 455 + */ 456 + adel = update_clock_tree_delay(emc, DVFS_PT1); 457 + } 458 + } 459 + 460 + /* 461 + * Seems like it should be part of the 462 + * 'if (last_timing->periodic_training)' conditional 463 + * since it is already done for the else clause. 
464 + */ 465 + adel = update_clock_tree_delay(emc, DVFS_UPDATE); 466 + } 467 + 468 + if (type == PERIODIC_TRAINING_SEQUENCE) { 469 + tegra210_emc_start_periodic_compensation(emc); 470 + udelay(delay); 471 + 472 + adel = update_clock_tree_delay(emc, PERIODIC_TRAINING_UPDATE); 473 + } 474 + 475 + return adel; 476 + } 477 + 478 + static u32 tegra210_emc_r21021_periodic_compensation(struct tegra210_emc *emc) 479 + { 480 + u32 emc_cfg, emc_cfg_o, emc_cfg_update, del, value; 481 + u32 list[] = { 482 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0, 483 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1, 484 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2, 485 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3, 486 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0, 487 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1, 488 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2, 489 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3, 490 + EMC_DATA_BRLSHFT_0, 491 + EMC_DATA_BRLSHFT_1 492 + }; 493 + struct tegra210_emc_timing *last = emc->last; 494 + unsigned int items = ARRAY_SIZE(list), i; 495 + unsigned long delay; 496 + 497 + if (last->periodic_training) { 498 + emc_dbg(emc, PER_TRAIN, "Periodic training starting\n"); 499 + 500 + value = emc_readl(emc, EMC_DBG); 501 + emc_cfg_o = emc_readl(emc, EMC_CFG); 502 + emc_cfg = emc_cfg_o & ~(EMC_CFG_DYN_SELF_REF | 503 + EMC_CFG_DRAM_ACPD | 504 + EMC_CFG_DRAM_CLKSTOP_SR | 505 + EMC_CFG_DRAM_CLKSTOP_PD); 506 + 507 + 508 + /* 509 + * 1. Power optimizations should be off. 510 + */ 511 + emc_writel(emc, emc_cfg, EMC_CFG); 512 + 513 + /* Does emc_timing_update() for above changes. 
*/ 514 + tegra210_emc_dll_disable(emc); 515 + 516 + for (i = 0; i < emc->num_channels; i++) 517 + tegra210_emc_wait_for_update(emc, i, EMC_EMC_STATUS, 518 + EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK, 519 + 0); 520 + 521 + for (i = 0; i < emc->num_channels; i++) 522 + tegra210_emc_wait_for_update(emc, i, EMC_EMC_STATUS, 523 + EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK, 524 + 0); 525 + 526 + emc_cfg_update = value = emc_readl(emc, EMC_CFG_UPDATE); 527 + value &= ~EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_MASK; 528 + value |= (2 << EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_SHIFT); 529 + emc_writel(emc, value, EMC_CFG_UPDATE); 530 + 531 + /* 532 + * 2. osc kick off - this assumes training and dvfs have set 533 + * correct MR23. 534 + */ 535 + tegra210_emc_start_periodic_compensation(emc); 536 + 537 + /* 538 + * 3. Let dram capture its clock tree delays. 539 + */ 540 + delay = tegra210_emc_actual_osc_clocks(last->run_clocks); 541 + delay *= 1000; 542 + delay /= last->rate + 1; 543 + udelay(delay); 544 + 545 + /* 546 + * 4. Check delta wrt previous values (save value if margin 547 + * exceeds what is set in table). 548 + */ 549 + del = periodic_compensation_handler(emc, 550 + PERIODIC_TRAINING_SEQUENCE, 551 + last, last); 552 + 553 + /* 554 + * 5. Apply compensation w.r.t. trained values (if clock tree 555 + * has drifted more than the set margin). 556 + */ 557 + if (last->tree_margin < ((del * 128 * (last->rate / 1000)) / 1000000)) { 558 + for (i = 0; i < items; i++) { 559 + value = tegra210_emc_compensate(last, list[i]); 560 + emc_dbg(emc, EMA_WRITES, "0x%08x <= 0x%08x\n", 561 + list[i], value); 562 + emc_writel(emc, value, list[i]); 563 + } 564 + } 565 + 566 + emc_writel(emc, emc_cfg_o, EMC_CFG); 567 + 568 + /* 569 + * 6. Timing update actually applies the new trimmers. 570 + */ 571 + tegra210_emc_timing_update(emc); 572 + 573 + /* 6.1. Restore the UPDATE_DLL_IN_UPDATE field. */ 574 + emc_writel(emc, emc_cfg_update, EMC_CFG_UPDATE); 575 + 576 + /* 6.2. Restore the DLL. 
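*/

Step 3 above sizes its udelay() from the oscillator run length: osc_clocks cycles at a DRAM rate given in kHz take osc_clocks * 1000 / rate microseconds, with the + 1 in the divisor guarding the division. A sketch of that conversion (hypothetical helper name):

```c
#include <stdint.h>

/*
 * Microseconds needed for the DRAM DQS oscillator to run osc_clocks cycles
 * at rate_khz (cf. the delay computation in step 3 above).
 */
static uint32_t osc_wait_us(uint32_t osc_clocks, uint32_t rate_khz)
{
	return (osc_clocks * 1000) / (rate_khz + 1);
}
```

/*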
*/ 577 + tegra210_emc_dll_enable(emc); 578 + } 579 + 580 + return 0; 581 + } 582 + 583 + /* 584 + * Do the clock change sequence. 585 + */ 586 + static void tegra210_emc_r21021_set_clock(struct tegra210_emc *emc, u32 clksrc) 587 + { 588 + /* state variables */ 589 + static bool fsp_for_next_freq; 590 + /* constant configuration parameters */ 591 + const bool save_restore_clkstop_pd = true; 592 + const u32 zqcal_before_cc_cutoff = 2400; 593 + const bool cya_allow_ref_cc = false; 594 + const bool cya_issue_pc_ref = false; 595 + const bool opt_cc_short_zcal = true; 596 + const bool ref_b4_sref_en = false; 597 + const u32 tZQCAL_lpddr4 = 1000000; 598 + const bool opt_short_zcal = true; 599 + const bool opt_do_sw_qrst = true; 600 + const u32 opt_dvfs_mode = MAN_SR; 601 + /* 602 + * This is the timing table for the source frequency. It does _not_ 603 + * necessarily correspond to the actual timing values in the EMC at the 604 + * moment. If the boot BCT differs from the table then this can happen. 605 + * However, we need it for accessing the dram_timings (which are not 606 + * really registers) array for the current frequency. 
607 + */ 608 + struct tegra210_emc_timing *fake, *last = emc->last, *next = emc->next; 609 + u32 tRTM, RP_war, R2P_war, TRPab_war, deltaTWATM, W2P_war, tRPST; 610 + u32 mr13_flip_fspwr, mr13_flip_fspop, ramp_up_wait, ramp_down_wait; 611 + u32 zq_wait_long, zq_latch_dvfs_wait_time, tZQCAL_lpddr4_fc_adj; 612 + u32 emc_auto_cal_config, auto_cal_en, emc_cfg, emc_sel_dpd_ctrl; 613 + u32 tFC_lpddr4 = 1000 * next->dram_timings[T_FC_LPDDR4]; 614 + u32 bg_reg_mode_change, enable_bglp_reg, enable_bg_reg; 615 + bool opt_zcal_en_cc = false, is_lpddr3 = false; 616 + bool compensate_trimmer_applicable = false; 617 + u32 emc_dbg, emc_cfg_pipe_clk, emc_pin; 618 + u32 src_clk_period, dst_clk_period; /* in picoseconds */ 619 + bool shared_zq_resistor = false; 620 + u32 value, dram_type; 621 + u32 opt_dll_mode = 0; 622 + unsigned long delay; 623 + unsigned int i; 624 + 625 + emc_dbg(emc, INFO, "Running clock change.\n"); 626 + 627 + /* XXX fake == last */ 628 + fake = tegra210_emc_find_timing(emc, last->rate * 1000UL); 629 + fsp_for_next_freq = !fsp_for_next_freq; 630 + 631 + value = emc_readl(emc, EMC_FBIO_CFG5) & EMC_FBIO_CFG5_DRAM_TYPE_MASK; 632 + dram_type = value >> EMC_FBIO_CFG5_DRAM_TYPE_SHIFT; 633 + 634 + if (last->burst_regs[EMC_ZCAL_WAIT_CNT_INDEX] & BIT(31)) 635 + shared_zq_resistor = true; 636 + 637 + if ((next->burst_regs[EMC_ZCAL_INTERVAL_INDEX] != 0 && 638 + last->burst_regs[EMC_ZCAL_INTERVAL_INDEX] == 0) || 639 + dram_type == DRAM_TYPE_LPDDR4) 640 + opt_zcal_en_cc = true; 641 + 642 + if (dram_type == DRAM_TYPE_DDR3) 643 + opt_dll_mode = tegra210_emc_get_dll_state(next); 644 + 645 + if ((next->burst_regs[EMC_FBIO_CFG5_INDEX] & BIT(25)) && 646 + (dram_type == DRAM_TYPE_LPDDR2)) 647 + is_lpddr3 = true; 648 + 649 + emc_readl(emc, EMC_CFG); 650 + emc_readl(emc, EMC_AUTO_CAL_CONFIG); 651 + 652 + src_clk_period = 1000000000 / last->rate; 653 + dst_clk_period = 1000000000 / next->rate; 654 + 655 + if (dst_clk_period <= zqcal_before_cc_cutoff) 656 + tZQCAL_lpddr4_fc_adj = 
tZQCAL_lpddr4 - tFC_lpddr4; 657 + else 658 + tZQCAL_lpddr4_fc_adj = tZQCAL_lpddr4; 659 + 660 + tZQCAL_lpddr4_fc_adj /= dst_clk_period; 661 + 662 + emc_dbg = emc_readl(emc, EMC_DBG); 663 + emc_pin = emc_readl(emc, EMC_PIN); 664 + emc_cfg_pipe_clk = emc_readl(emc, EMC_CFG_PIPE_CLK); 665 + 666 + emc_cfg = next->burst_regs[EMC_CFG_INDEX]; 667 + emc_cfg &= ~(EMC_CFG_DYN_SELF_REF | EMC_CFG_DRAM_ACPD | 668 + EMC_CFG_DRAM_CLKSTOP_SR | EMC_CFG_DRAM_CLKSTOP_PD); 669 + emc_sel_dpd_ctrl = next->emc_sel_dpd_ctrl; 670 + emc_sel_dpd_ctrl &= ~(EMC_SEL_DPD_CTRL_CLK_SEL_DPD_EN | 671 + EMC_SEL_DPD_CTRL_CA_SEL_DPD_EN | 672 + EMC_SEL_DPD_CTRL_RESET_SEL_DPD_EN | 673 + EMC_SEL_DPD_CTRL_ODT_SEL_DPD_EN | 674 + EMC_SEL_DPD_CTRL_DATA_SEL_DPD_EN); 675 + 676 + emc_dbg(emc, INFO, "Clock change version: %d\n", 677 + DVFS_CLOCK_CHANGE_VERSION); 678 + emc_dbg(emc, INFO, "DRAM type = %d\n", dram_type); 679 + emc_dbg(emc, INFO, "DRAM dev #: %u\n", emc->num_devices); 680 + emc_dbg(emc, INFO, "Next EMC clksrc: 0x%08x\n", clksrc); 681 + emc_dbg(emc, INFO, "DLL clksrc: 0x%08x\n", next->dll_clk_src); 682 + emc_dbg(emc, INFO, "last rate: %u, next rate %u\n", last->rate, 683 + next->rate); 684 + emc_dbg(emc, INFO, "last period: %u, next period: %u\n", 685 + src_clk_period, dst_clk_period); 686 + emc_dbg(emc, INFO, " shared_zq_resistor: %d\n", !!shared_zq_resistor); 687 + emc_dbg(emc, INFO, " num_channels: %u\n", emc->num_channels); 688 + emc_dbg(emc, INFO, " opt_dll_mode: %d\n", opt_dll_mode); 689 + 690 + /* 691 + * Step 1: 692 + * Pre DVFS SW sequence. 
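*/

The conversions just above are all integer picosecond arithmetic: a rate in kHz maps to a period of 1000000000 / rate ps, and the ZQ calibration budget only subtracts tFC when the destination period is at or below the 2400 ps cutoff. A standalone sketch (hypothetical helper names; the 1 us tZQCAL matches the constant above):

```c
#include <stdint.h>

#define TZQCAL_LPDDR4	1000000	/* ps */

/* Clock period in picoseconds for a rate given in kHz. */
static uint32_t clk_period_ps(uint32_t rate_khz)
{
	return 1000000000 / rate_khz;
}

/*
 * ZQ calibration cycles still owed after the frequency change: below the
 * cutoff, the tFC window already covers part of tZQCAL.
 */
static uint32_t zqcal_cycles(uint32_t dst_period_ps, uint32_t tfc_ps,
			     uint32_t cutoff_ps)
{
	uint32_t t = TZQCAL_LPDDR4;

	if (dst_period_ps <= cutoff_ps)
		t -= tfc_ps;

	return t / dst_period_ps;
}
```

/*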
693 + */ 694 + emc_dbg(emc, STEPS, "Step 1\n"); 695 + emc_dbg(emc, STEPS, "Step 1.1: Disable DLL temporarily.\n"); 696 + 697 + value = emc_readl(emc, EMC_CFG_DIG_DLL); 698 + value &= ~EMC_CFG_DIG_DLL_CFG_DLL_EN; 699 + emc_writel(emc, value, EMC_CFG_DIG_DLL); 700 + 701 + tegra210_emc_timing_update(emc); 702 + 703 + for (i = 0; i < emc->num_channels; i++) 704 + tegra210_emc_wait_for_update(emc, i, EMC_CFG_DIG_DLL, 705 + EMC_CFG_DIG_DLL_CFG_DLL_EN, 0); 706 + 707 + emc_dbg(emc, STEPS, "Step 1.2: Disable AUTOCAL temporarily.\n"); 708 + 709 + emc_auto_cal_config = next->emc_auto_cal_config; 710 + auto_cal_en = emc_auto_cal_config & EMC_AUTO_CAL_CONFIG_AUTO_CAL_ENABLE; 711 + emc_auto_cal_config &= ~EMC_AUTO_CAL_CONFIG_AUTO_CAL_START; 712 + emc_auto_cal_config |= EMC_AUTO_CAL_CONFIG_AUTO_CAL_MEASURE_STALL; 713 + emc_auto_cal_config |= EMC_AUTO_CAL_CONFIG_AUTO_CAL_UPDATE_STALL; 714 + emc_auto_cal_config |= auto_cal_en; 715 + emc_writel(emc, emc_auto_cal_config, EMC_AUTO_CAL_CONFIG); 716 + emc_readl(emc, EMC_AUTO_CAL_CONFIG); /* Flush write. 
*/ 717 + 718 + emc_dbg(emc, STEPS, "Step 1.3: Disable other power features.\n"); 719 + 720 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 721 + emc_writel(emc, emc_cfg, EMC_CFG); 722 + emc_writel(emc, emc_sel_dpd_ctrl, EMC_SEL_DPD_CTRL); 723 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 724 + 725 + if (next->periodic_training) { 726 + tegra210_emc_reset_dram_clktree_values(next); 727 + 728 + for (i = 0; i < emc->num_channels; i++) 729 + tegra210_emc_wait_for_update(emc, i, EMC_EMC_STATUS, 730 + EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK, 731 + 0); 732 + 733 + for (i = 0; i < emc->num_channels; i++) 734 + tegra210_emc_wait_for_update(emc, i, EMC_EMC_STATUS, 735 + EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK, 736 + 0); 737 + 738 + tegra210_emc_start_periodic_compensation(emc); 739 + 740 + delay = 1000 * tegra210_emc_actual_osc_clocks(last->run_clocks); 741 + udelay((delay / last->rate) + 2); 742 + 743 + value = periodic_compensation_handler(emc, DVFS_SEQUENCE, fake, 744 + next); 745 + value = (value * 128 * next->rate / 1000) / 1000000; 746 + 747 + if (next->periodic_training && value > next->tree_margin) 748 + compensate_trimmer_applicable = true; 749 + } 750 + 751 + emc_writel(emc, EMC_INTSTATUS_CLKCHANGE_COMPLETE, EMC_INTSTATUS); 752 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 753 + emc_writel(emc, emc_cfg, EMC_CFG); 754 + emc_writel(emc, emc_sel_dpd_ctrl, EMC_SEL_DPD_CTRL); 755 + emc_writel(emc, emc_cfg_pipe_clk | EMC_CFG_PIPE_CLK_CLK_ALWAYS_ON, 756 + EMC_CFG_PIPE_CLK); 757 + emc_writel(emc, next->emc_fdpd_ctrl_cmd_no_ramp & 758 + ~EMC_FDPD_CTRL_CMD_NO_RAMP_CMD_DPD_NO_RAMP_ENABLE, 759 + EMC_FDPD_CTRL_CMD_NO_RAMP); 760 + 761 + bg_reg_mode_change = 762 + ((next->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 763 + EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD) ^ 764 + (last->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 765 + EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD)) || 766 + ((next->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 767 + EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD) ^ 768 + 
(last->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 769 + EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD)); 770 + enable_bglp_reg = 771 + (next->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 772 + EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD) == 0; 773 + enable_bg_reg = 774 + (next->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 775 + EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD) == 0; 776 + 777 + if (bg_reg_mode_change) { 778 + if (enable_bg_reg) 779 + emc_writel(emc, last->burst_regs 780 + [EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 781 + ~EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD, 782 + EMC_PMACRO_BG_BIAS_CTRL_0); 783 + 784 + if (enable_bglp_reg) 785 + emc_writel(emc, last->burst_regs 786 + [EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 787 + ~EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD, 788 + EMC_PMACRO_BG_BIAS_CTRL_0); 789 + } 790 + 791 + /* Check if we need to turn on VREF generator. */ 792 + if ((((last->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & 793 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF) == 0) && 794 + ((next->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & 795 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF) != 0)) || 796 + (((last->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & 797 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF) == 0) && 798 + ((next->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX] & 799 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF) != 0))) { 800 + u32 pad_tx_ctrl = 801 + next->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; 802 + u32 last_pad_tx_ctrl = 803 + last->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX]; 804 + u32 next_dq_e_ivref, next_dqs_e_ivref; 805 + 806 + next_dqs_e_ivref = pad_tx_ctrl & 807 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF; 808 + next_dq_e_ivref = pad_tx_ctrl & 809 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF; 810 + value = (last_pad_tx_ctrl & 811 + ~EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF & 812 + ~EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF) | 813 + next_dq_e_ivref | next_dqs_e_ivref; 814 + emc_writel(emc, value, EMC_PMACRO_DATA_PAD_TX_CTRL); 815 + 
udelay(1); 816 + } else if (bg_reg_mode_change) { 817 + udelay(1); 818 + } 819 + 820 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 821 + 822 + /* 823 + * Step 2: 824 + * Prelock the DLL. 825 + */ 826 + emc_dbg(emc, STEPS, "Step 2\n"); 827 + 828 + if (next->burst_regs[EMC_CFG_DIG_DLL_INDEX] & 829 + EMC_CFG_DIG_DLL_CFG_DLL_EN) { 830 + emc_dbg(emc, INFO, "Prelock enabled for target frequency.\n"); 831 + value = tegra210_emc_dll_prelock(emc, clksrc); 832 + emc_dbg(emc, INFO, "DLL out: 0x%03x\n", value); 833 + } else { 834 + emc_dbg(emc, INFO, "Disabling DLL for target frequency.\n"); 835 + tegra210_emc_dll_disable(emc); 836 + } 837 + 838 + /* 839 + * Step 3: 840 + * Prepare autocal for the clock change. 841 + */ 842 + emc_dbg(emc, STEPS, "Step 3\n"); 843 + 844 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 845 + emc_writel(emc, next->emc_auto_cal_config2, EMC_AUTO_CAL_CONFIG2); 846 + emc_writel(emc, next->emc_auto_cal_config3, EMC_AUTO_CAL_CONFIG3); 847 + emc_writel(emc, next->emc_auto_cal_config4, EMC_AUTO_CAL_CONFIG4); 848 + emc_writel(emc, next->emc_auto_cal_config5, EMC_AUTO_CAL_CONFIG5); 849 + emc_writel(emc, next->emc_auto_cal_config6, EMC_AUTO_CAL_CONFIG6); 850 + emc_writel(emc, next->emc_auto_cal_config7, EMC_AUTO_CAL_CONFIG7); 851 + emc_writel(emc, next->emc_auto_cal_config8, EMC_AUTO_CAL_CONFIG8); 852 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 853 + 854 + emc_auto_cal_config |= (EMC_AUTO_CAL_CONFIG_AUTO_CAL_COMPUTE_START | 855 + auto_cal_en); 856 + emc_writel(emc, emc_auto_cal_config, EMC_AUTO_CAL_CONFIG); 857 + 858 + /* 859 + * Step 4: 860 + * Update EMC_CFG. (??) 861 + */ 862 + emc_dbg(emc, STEPS, "Step 4\n"); 863 + 864 + if (src_clk_period > 50000 && dram_type == DRAM_TYPE_LPDDR4) 865 + ccfifo_writel(emc, 1, EMC_SELF_REF, 0); 866 + else 867 + emc_writel(emc, next->emc_cfg_2, EMC_CFG_2); 868 + 869 + /* 870 + * Step 5: 871 + * Prepare reference variables for ZQCAL regs. 
872 + */ 873 + emc_dbg(emc, STEPS, "Step 5\n"); 874 + 875 + if (dram_type == DRAM_TYPE_LPDDR4) 876 + zq_wait_long = max((u32)1, div_o3(1000000, dst_clk_period)); 877 + else if (dram_type == DRAM_TYPE_LPDDR2 || is_lpddr3) 878 + zq_wait_long = max(next->min_mrs_wait, 879 + div_o3(360000, dst_clk_period)) + 4; 880 + else if (dram_type == DRAM_TYPE_DDR3) 881 + zq_wait_long = max((u32)256, 882 + div_o3(320000, dst_clk_period) + 2); 883 + else 884 + zq_wait_long = 0; 885 + 886 + /* 887 + * Step 6: 888 + * Training code - removed. 889 + */ 890 + emc_dbg(emc, STEPS, "Step 6\n"); 891 + 892 + /* 893 + * Step 7: 894 + * Program FSP reference registers and send MRWs to new FSPWR. 895 + */ 896 + emc_dbg(emc, STEPS, "Step 7\n"); 897 + emc_dbg(emc, SUB_STEPS, "Step 7.1: Bug 200024907 - Patch RP R2P"); 898 + 899 + /* WAR 200024907 */ 900 + if (dram_type == DRAM_TYPE_LPDDR4) { 901 + u32 nRTP = 16; 902 + 903 + if (src_clk_period >= 1000000 / 1866) /* 535.91 ps */ 904 + nRTP = 14; 905 + 906 + if (src_clk_period >= 1000000 / 1600) /* 625.00 ps */ 907 + nRTP = 12; 908 + 909 + if (src_clk_period >= 1000000 / 1333) /* 750.19 ps */ 910 + nRTP = 10; 911 + 912 + if (src_clk_period >= 1000000 / 1066) /* 938.09 ps */ 913 + nRTP = 8; 914 + 915 + deltaTWATM = max_t(u32, div_o3(7500, src_clk_period), 8); 916 + 917 + /* 918 + * Originally there was a + .5 in the tRPST calculation. 919 + * However since we can't do FP in the kernel and the tRTM 920 + * computation was in a floating point ceiling function, adding 921 + * one to tRTP should be ok. 
There is no other source of non 922 + integer values, so the result was always going to be 923 + something of the form: f_ceil(N + .5) = N + 1; 924 + */ 925 + tRPST = (last->emc_mrw & 0x80) >> 7; 926 + tRTM = fake->dram_timings[RL] + div_o3(3600, src_clk_period) + 927 + max_t(u32, div_o3(7500, src_clk_period), 8) + tRPST + 928 + 1 + nRTP; 929 + 930 + emc_dbg(emc, INFO, "tRTM = %u, EMC_RP = %u\n", tRTM, 931 + next->burst_regs[EMC_RP_INDEX]); 932 + 933 + if (last->burst_regs[EMC_RP_INDEX] < tRTM) { 934 + if (tRTM > (last->burst_regs[EMC_R2P_INDEX] + 935 + last->burst_regs[EMC_RP_INDEX])) { 936 + R2P_war = tRTM - last->burst_regs[EMC_RP_INDEX]; 937 + RP_war = last->burst_regs[EMC_RP_INDEX]; 938 + TRPab_war = last->burst_regs[EMC_TRPAB_INDEX]; 939 + 940 + if (R2P_war > 63) { 941 + RP_war = R2P_war + 942 + last->burst_regs[EMC_RP_INDEX] - 63; 943 + 944 + if (TRPab_war < RP_war) 945 + TRPab_war = RP_war; 946 + 947 + R2P_war = 63; 948 + } 949 + } else { 950 + R2P_war = last->burst_regs[EMC_R2P_INDEX]; 951 + RP_war = last->burst_regs[EMC_RP_INDEX]; 952 + TRPab_war = last->burst_regs[EMC_TRPAB_INDEX]; 953 + } 954 + 955 + if (RP_war < deltaTWATM) { 956 + W2P_war = last->burst_regs[EMC_W2P_INDEX] 957 + + deltaTWATM - RP_war; 958 + if (W2P_war > 63) { 959 + RP_war = RP_war + W2P_war - 63; 960 + if (TRPab_war < RP_war) 961 + TRPab_war = RP_war; 962 + W2P_war = 63; 963 + } 964 + } else { 965 + W2P_war = last->burst_regs[ 966 + EMC_W2P_INDEX]; 967 + } 968 + 969 + if ((last->burst_regs[EMC_W2P_INDEX] ^ W2P_war) || 970 + (last->burst_regs[EMC_R2P_INDEX] ^ R2P_war) || 971 + (last->burst_regs[EMC_RP_INDEX] ^ RP_war) || 972 + (last->burst_regs[EMC_TRPAB_INDEX] ^ TRPab_war)) { 973 + emc_writel(emc, RP_war, EMC_RP); 974 + emc_writel(emc, R2P_war, EMC_R2P); 975 + emc_writel(emc, W2P_war, EMC_W2P); 976 + emc_writel(emc, TRPab_war, EMC_TRPAB); 977 + } 978 + 979 + tegra210_emc_timing_update(emc); 980 + } else { 981 + emc_dbg(emc, INFO, "Skipped WAR\n"); 982 + } 983 + } 984 + 985 + if 
(!fsp_for_next_freq) { 986 + mr13_flip_fspwr = (next->emc_mrw3 & 0xffffff3f) | 0x80; 987 + mr13_flip_fspop = (next->emc_mrw3 & 0xffffff3f) | 0x00; 988 + } else { 989 + mr13_flip_fspwr = (next->emc_mrw3 & 0xffffff3f) | 0x40; 990 + mr13_flip_fspop = (next->emc_mrw3 & 0xffffff3f) | 0xc0; 991 + } 992 + 993 + if (dram_type == DRAM_TYPE_LPDDR4) { 994 + emc_writel(emc, mr13_flip_fspwr, EMC_MRW3); 995 + emc_writel(emc, next->emc_mrw, EMC_MRW); 996 + emc_writel(emc, next->emc_mrw2, EMC_MRW2); 997 + } 998 + 999 + /* 1000 + * Step 8: 1001 + * Program the shadow registers. 1002 + */ 1003 + emc_dbg(emc, STEPS, "Step 8\n"); 1004 + emc_dbg(emc, SUB_STEPS, "Writing burst_regs\n"); 1005 + 1006 + for (i = 0; i < next->num_burst; i++) { 1007 + const u16 *offsets = emc->offsets->burst; 1008 + u16 offset; 1009 + 1010 + if (!offsets[i]) 1011 + continue; 1012 + 1013 + value = next->burst_regs[i]; 1014 + offset = offsets[i]; 1015 + 1016 + if (dram_type != DRAM_TYPE_LPDDR4 && 1017 + (offset == EMC_MRW6 || offset == EMC_MRW7 || 1018 + offset == EMC_MRW8 || offset == EMC_MRW9 || 1019 + offset == EMC_MRW10 || offset == EMC_MRW11 || 1020 + offset == EMC_MRW12 || offset == EMC_MRW13 || 1021 + offset == EMC_MRW14 || offset == EMC_MRW15 || 1022 + offset == EMC_TRAINING_CTRL)) 1023 + continue; 1024 + 1025 + /* Pain... And suffering. 
*/ 1026 + if (offset == EMC_CFG) { 1027 + value &= ~EMC_CFG_DRAM_ACPD; 1028 + value &= ~EMC_CFG_DYN_SELF_REF; 1029 + 1030 + if (dram_type == DRAM_TYPE_LPDDR4) { 1031 + value &= ~EMC_CFG_DRAM_CLKSTOP_SR; 1032 + value &= ~EMC_CFG_DRAM_CLKSTOP_PD; 1033 + } 1034 + } else if (offset == EMC_MRS_WAIT_CNT && 1035 + dram_type == DRAM_TYPE_LPDDR2 && 1036 + opt_zcal_en_cc && !opt_cc_short_zcal && 1037 + opt_short_zcal) { 1038 + value = (value & ~(EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK << 1039 + EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT)) | 1040 + ((zq_wait_long & EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK) << 1041 + EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT); 1042 + } else if (offset == EMC_ZCAL_WAIT_CNT && 1043 + dram_type == DRAM_TYPE_DDR3 && opt_zcal_en_cc && 1044 + !opt_cc_short_zcal && opt_short_zcal) { 1045 + value = (value & ~(EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK << 1046 + EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_SHIFT)) | 1047 + ((zq_wait_long & EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK) << 1048 + EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_SHIFT); 1049 + } else if (offset == EMC_ZCAL_INTERVAL && opt_zcal_en_cc) { 1050 + value = 0; /* EMC_ZCAL_INTERVAL reset value. 
*/ 1051 + } else if (offset == EMC_PMACRO_AUTOCAL_CFG_COMMON) { 1052 + value |= EMC_PMACRO_AUTOCAL_CFG_COMMON_E_CAL_BYPASS_DVFS; 1053 + } else if (offset == EMC_PMACRO_DATA_PAD_TX_CTRL) { 1054 + value &= ~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | 1055 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC | 1056 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | 1057 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC); 1058 + } else if (offset == EMC_PMACRO_CMD_PAD_TX_CTRL) { 1059 + value |= EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON; 1060 + value &= ~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | 1061 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC | 1062 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | 1063 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC); 1064 + } else if (offset == EMC_PMACRO_BRICK_CTRL_RFU1) { 1065 + value &= 0xf800f800; 1066 + } else if (offset == EMC_PMACRO_COMMON_PAD_TX_CTRL) { 1067 + value &= 0xfffffff0; 1068 + } 1069 + 1070 + emc_writel(emc, value, offset); 1071 + } 1072 + 1073 + /* SW addition: do EMC refresh adjustment here. */ 1074 + tegra210_emc_adjust_timing(emc, next); 1075 + 1076 + if (dram_type == DRAM_TYPE_LPDDR4) { 1077 + value = (23 << EMC_MRW_MRW_MA_SHIFT) | 1078 + (next->run_clocks & EMC_MRW_MRW_OP_MASK); 1079 + emc_writel(emc, value, EMC_MRW); 1080 + } 1081 + 1082 + /* Per channel burst registers. 
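*/

The special-case patch-ups in the shadow-register loop above (EMC_MRS_WAIT_CNT, EMC_ZCAL_WAIT_CNT) are all the same read-modify-write field update: clear mask << shift, then OR in the new value at the same position. A generic sketch of that idiom (hypothetical helper name):

```c
#include <stdint.h>

/*
 * Replace the bit field at (mask << shift) in reg with val, as the shadow
 * register wait-count patch-ups do with zq_wait_long.
 */
static uint32_t set_field(uint32_t reg, uint32_t mask, uint32_t shift,
			  uint32_t val)
{
	return (reg & ~(mask << shift)) | ((val & mask) << shift);
}
```

/*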
*/ 1083 + emc_dbg(emc, SUB_STEPS, "Writing burst_regs_per_ch\n"); 1084 + 1085 + for (i = 0; i < next->num_burst_per_ch; i++) { 1086 + const struct tegra210_emc_per_channel_regs *burst = 1087 + emc->offsets->burst_per_channel; 1088 + 1089 + if (!burst[i].offset) 1090 + continue; 1091 + 1092 + if (dram_type != DRAM_TYPE_LPDDR4 && 1093 + (burst[i].offset == EMC_MRW6 || 1094 + burst[i].offset == EMC_MRW7 || 1095 + burst[i].offset == EMC_MRW8 || 1096 + burst[i].offset == EMC_MRW9 || 1097 + burst[i].offset == EMC_MRW10 || 1098 + burst[i].offset == EMC_MRW11 || 1099 + burst[i].offset == EMC_MRW12 || 1100 + burst[i].offset == EMC_MRW13 || 1101 + burst[i].offset == EMC_MRW14 || 1102 + burst[i].offset == EMC_MRW15)) 1103 + continue; 1104 + 1105 + /* Filter out second channel if not in DUAL_CHANNEL mode. */ 1106 + if (emc->num_channels < 2 && burst[i].bank >= 1) 1107 + continue; 1108 + 1109 + emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i, 1110 + next->burst_reg_per_ch[i], burst[i].offset); 1111 + emc_channel_writel(emc, burst[i].bank, 1112 + next->burst_reg_per_ch[i], 1113 + burst[i].offset); 1114 + } 1115 + 1116 + /* Vref regs. */ 1117 + emc_dbg(emc, SUB_STEPS, "Writing vref_regs\n"); 1118 + 1119 + for (i = 0; i < next->vref_num; i++) { 1120 + const struct tegra210_emc_per_channel_regs *vref = 1121 + emc->offsets->vref_per_channel; 1122 + 1123 + if (!vref[i].offset) 1124 + continue; 1125 + 1126 + if (emc->num_channels < 2 && vref[i].bank >= 1) 1127 + continue; 1128 + 1129 + emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i, 1130 + next->vref_perch_regs[i], vref[i].offset); 1131 + emc_channel_writel(emc, vref[i].bank, next->vref_perch_regs[i], 1132 + vref[i].offset); 1133 + } 1134 + 1135 + /* Trimmers. 
	 */
	emc_dbg(emc, SUB_STEPS, "Writing trim_regs\n");

	for (i = 0; i < next->num_trim; i++) {
		const u16 *offsets = emc->offsets->trim;

		if (!offsets[i])
			continue;

		if (compensate_trimmer_applicable &&
		    (offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0 ||
		     offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1 ||
		     offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2 ||
		     offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3 ||
		     offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0 ||
		     offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1 ||
		     offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2 ||
		     offsets[i] == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3 ||
		     offsets[i] == EMC_DATA_BRLSHFT_0 ||
		     offsets[i] == EMC_DATA_BRLSHFT_1)) {
			value = tegra210_emc_compensate(next, offsets[i]);
			emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i,
				value, offsets[i]);
			emc_dbg(emc, EMA_WRITES, "0x%08x <= 0x%08x\n",
				(u32)(u64)offsets[i], value);
			emc_writel(emc, value, offsets[i]);
		} else {
			emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i,
				next->trim_regs[i], offsets[i]);
			emc_writel(emc, next->trim_regs[i], offsets[i]);
		}
	}

	/* Per channel trimmers. 
*/ 1169 + emc_dbg(emc, SUB_STEPS, "Writing trim_regs_per_ch\n"); 1170 + 1171 + for (i = 0; i < next->num_trim_per_ch; i++) { 1172 + const struct tegra210_emc_per_channel_regs *trim = 1173 + &emc->offsets->trim_per_channel[0]; 1174 + unsigned int offset; 1175 + 1176 + if (!trim[i].offset) 1177 + continue; 1178 + 1179 + if (emc->num_channels < 2 && trim[i].bank >= 1) 1180 + continue; 1181 + 1182 + offset = trim[i].offset; 1183 + 1184 + if (compensate_trimmer_applicable && 1185 + (offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0 || 1186 + offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1 || 1187 + offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2 || 1188 + offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3 || 1189 + offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0 || 1190 + offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1 || 1191 + offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2 || 1192 + offset == EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3 || 1193 + offset == EMC_DATA_BRLSHFT_0 || 1194 + offset == EMC_DATA_BRLSHFT_1)) { 1195 + value = tegra210_emc_compensate(next, offset); 1196 + emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i, 1197 + value, offset); 1198 + emc_dbg(emc, EMA_WRITES, "0x%08x <= 0x%08x\n", offset, 1199 + value); 1200 + emc_channel_writel(emc, trim[i].bank, value, offset); 1201 + } else { 1202 + emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i, 1203 + next->trim_perch_regs[i], offset); 1204 + emc_channel_writel(emc, trim[i].bank, 1205 + next->trim_perch_regs[i], offset); 1206 + } 1207 + } 1208 + 1209 + emc_dbg(emc, SUB_STEPS, "Writing burst_mc_regs\n"); 1210 + 1211 + for (i = 0; i < next->num_mc_regs; i++) { 1212 + const u16 *offsets = emc->offsets->burst_mc; 1213 + u32 *values = next->burst_mc_regs; 1214 + 1215 + emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i, 1216 + values[i], offsets[i]); 1217 + mc_writel(emc->mc, values[i], offsets[i]); 1218 + } 1219 + 1220 + /* Registers to be programmed on the faster clock. 
*/ 1221 + if (next->rate < last->rate) { 1222 + const u16 *la = emc->offsets->la_scale; 1223 + 1224 + emc_dbg(emc, SUB_STEPS, "Writing la_scale_regs\n"); 1225 + 1226 + for (i = 0; i < next->num_up_down; i++) { 1227 + emc_dbg(emc, REG_LISTS, "(%u) 0x%08x => 0x%08x\n", i, 1228 + next->la_scale_regs[i], la[i]); 1229 + mc_writel(emc->mc, next->la_scale_regs[i], la[i]); 1230 + } 1231 + } 1232 + 1233 + /* Flush all the burst register writes. */ 1234 + mc_readl(emc->mc, MC_EMEM_ADR_CFG); 1235 + 1236 + /* 1237 + * Step 9: 1238 + * LPDDR4 section A. 1239 + */ 1240 + emc_dbg(emc, STEPS, "Step 9\n"); 1241 + 1242 + value = next->burst_regs[EMC_ZCAL_WAIT_CNT_INDEX]; 1243 + value &= ~EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK; 1244 + 1245 + if (dram_type == DRAM_TYPE_LPDDR4) { 1246 + emc_writel(emc, 0, EMC_ZCAL_INTERVAL); 1247 + emc_writel(emc, value, EMC_ZCAL_WAIT_CNT); 1248 + 1249 + value = emc_dbg | (EMC_DBG_WRITE_MUX_ACTIVE | 1250 + EMC_DBG_WRITE_ACTIVE_ONLY); 1251 + 1252 + emc_writel(emc, value, EMC_DBG); 1253 + emc_writel(emc, 0, EMC_ZCAL_INTERVAL); 1254 + emc_writel(emc, emc_dbg, EMC_DBG); 1255 + } 1256 + 1257 + /* 1258 + * Step 10: 1259 + * LPDDR4 and DDR3 common section. 
1260 + */ 1261 + emc_dbg(emc, STEPS, "Step 10\n"); 1262 + 1263 + if (opt_dvfs_mode == MAN_SR || dram_type == DRAM_TYPE_LPDDR4) { 1264 + if (dram_type == DRAM_TYPE_LPDDR4) 1265 + ccfifo_writel(emc, 0x101, EMC_SELF_REF, 0); 1266 + else 1267 + ccfifo_writel(emc, 0x1, EMC_SELF_REF, 0); 1268 + 1269 + if (dram_type == DRAM_TYPE_LPDDR4 && 1270 + dst_clk_period <= zqcal_before_cc_cutoff) { 1271 + ccfifo_writel(emc, mr13_flip_fspwr ^ 0x40, EMC_MRW3, 0); 1272 + ccfifo_writel(emc, (next->burst_regs[EMC_MRW6_INDEX] & 1273 + 0xFFFF3F3F) | 1274 + (last->burst_regs[EMC_MRW6_INDEX] & 1275 + 0x0000C0C0), EMC_MRW6, 0); 1276 + ccfifo_writel(emc, (next->burst_regs[EMC_MRW14_INDEX] & 1277 + 0xFFFF0707) | 1278 + (last->burst_regs[EMC_MRW14_INDEX] & 1279 + 0x00003838), EMC_MRW14, 0); 1280 + 1281 + if (emc->num_devices > 1) { 1282 + ccfifo_writel(emc, 1283 + (next->burst_regs[EMC_MRW7_INDEX] & 1284 + 0xFFFF3F3F) | 1285 + (last->burst_regs[EMC_MRW7_INDEX] & 1286 + 0x0000C0C0), EMC_MRW7, 0); 1287 + ccfifo_writel(emc, 1288 + (next->burst_regs[EMC_MRW15_INDEX] & 1289 + 0xFFFF0707) | 1290 + (last->burst_regs[EMC_MRW15_INDEX] & 1291 + 0x00003838), EMC_MRW15, 0); 1292 + } 1293 + 1294 + if (opt_zcal_en_cc) { 1295 + if (emc->num_devices < 2) 1296 + ccfifo_writel(emc, 1297 + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT 1298 + | EMC_ZQ_CAL_ZQ_CAL_CMD, 1299 + EMC_ZQ_CAL, 0); 1300 + else if (shared_zq_resistor) 1301 + ccfifo_writel(emc, 1302 + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT 1303 + | EMC_ZQ_CAL_ZQ_CAL_CMD, 1304 + EMC_ZQ_CAL, 0); 1305 + else 1306 + ccfifo_writel(emc, 1307 + EMC_ZQ_CAL_ZQ_CAL_CMD, 1308 + EMC_ZQ_CAL, 0); 1309 + } 1310 + } 1311 + } 1312 + 1313 + if (dram_type == DRAM_TYPE_LPDDR4) { 1314 + value = (1000 * fake->dram_timings[T_RP]) / src_clk_period; 1315 + ccfifo_writel(emc, mr13_flip_fspop | 0x8, EMC_MRW3, value); 1316 + ccfifo_writel(emc, 0, 0, tFC_lpddr4 / src_clk_period); 1317 + } 1318 + 1319 + if (dram_type == DRAM_TYPE_LPDDR4 || opt_dvfs_mode != MAN_SR) { 1320 + delay = 30; 1321 + 1322 + if 
(cya_allow_ref_cc) { 1323 + delay += (1000 * fake->dram_timings[T_RP]) / 1324 + src_clk_period; 1325 + delay += 4000 * fake->dram_timings[T_RFC]; 1326 + } 1327 + 1328 + ccfifo_writel(emc, emc_pin & ~(EMC_PIN_PIN_CKE_PER_DEV | 1329 + EMC_PIN_PIN_CKEB | 1330 + EMC_PIN_PIN_CKE), 1331 + EMC_PIN, delay); 1332 + } 1333 + 1334 + /* calculate reference delay multiplier */ 1335 + value = 1; 1336 + 1337 + if (ref_b4_sref_en) 1338 + value++; 1339 + 1340 + if (cya_allow_ref_cc) 1341 + value++; 1342 + 1343 + if (cya_issue_pc_ref) 1344 + value++; 1345 + 1346 + if (dram_type != DRAM_TYPE_LPDDR4) { 1347 + delay = ((1000 * fake->dram_timings[T_RP] / src_clk_period) + 1348 + (1000 * fake->dram_timings[T_RFC] / src_clk_period)); 1349 + delay = value * delay + 20; 1350 + } else { 1351 + delay = 0; 1352 + } 1353 + 1354 + /* 1355 + * Step 11: 1356 + * Ramp down. 1357 + */ 1358 + emc_dbg(emc, STEPS, "Step 11\n"); 1359 + 1360 + ccfifo_writel(emc, 0x0, EMC_CFG_SYNC, delay); 1361 + 1362 + value = emc_dbg | EMC_DBG_WRITE_MUX_ACTIVE | EMC_DBG_WRITE_ACTIVE_ONLY; 1363 + ccfifo_writel(emc, value, EMC_DBG, 0); 1364 + 1365 + ramp_down_wait = tegra210_emc_dvfs_power_ramp_down(emc, src_clk_period, 1366 + 0); 1367 + 1368 + /* 1369 + * Step 12: 1370 + * And finally - trigger the clock change. 1371 + */ 1372 + emc_dbg(emc, STEPS, "Step 12\n"); 1373 + 1374 + ccfifo_writel(emc, 1, EMC_STALL_THEN_EXE_AFTER_CLKCHANGE, 0); 1375 + value &= ~EMC_DBG_WRITE_ACTIVE_ONLY; 1376 + ccfifo_writel(emc, value, EMC_DBG, 0); 1377 + 1378 + /* 1379 + * Step 13: 1380 + * Ramp up. 1381 + */ 1382 + emc_dbg(emc, STEPS, "Step 13\n"); 1383 + 1384 + ramp_up_wait = tegra210_emc_dvfs_power_ramp_up(emc, dst_clk_period, 0); 1385 + ccfifo_writel(emc, emc_dbg, EMC_DBG, 0); 1386 + 1387 + /* 1388 + * Step 14: 1389 + * Bringup CKE pins. 
1390 + */ 1391 + emc_dbg(emc, STEPS, "Step 14\n"); 1392 + 1393 + if (dram_type == DRAM_TYPE_LPDDR4) { 1394 + value = emc_pin | EMC_PIN_PIN_CKE; 1395 + 1396 + if (emc->num_devices <= 1) 1397 + value &= ~(EMC_PIN_PIN_CKEB | EMC_PIN_PIN_CKE_PER_DEV); 1398 + else 1399 + value |= EMC_PIN_PIN_CKEB | EMC_PIN_PIN_CKE_PER_DEV; 1400 + 1401 + ccfifo_writel(emc, value, EMC_PIN, 0); 1402 + } 1403 + 1404 + /* 1405 + * Step 15: (two step 15s ??) 1406 + * Calculate zqlatch wait time; has dependency on ramping times. 1407 + */ 1408 + emc_dbg(emc, STEPS, "Step 15\n"); 1409 + 1410 + if (dst_clk_period <= zqcal_before_cc_cutoff) { 1411 + s32 t = (s32)(ramp_up_wait + ramp_down_wait) / 1412 + (s32)dst_clk_period; 1413 + zq_latch_dvfs_wait_time = (s32)tZQCAL_lpddr4_fc_adj - t; 1414 + } else { 1415 + zq_latch_dvfs_wait_time = tZQCAL_lpddr4_fc_adj - 1416 + div_o3(1000 * next->dram_timings[T_PDEX], 1417 + dst_clk_period); 1418 + } 1419 + 1420 + emc_dbg(emc, INFO, "tZQCAL_lpddr4_fc_adj = %u\n", tZQCAL_lpddr4_fc_adj); 1421 + emc_dbg(emc, INFO, "dst_clk_period = %u\n", 1422 + dst_clk_period); 1423 + emc_dbg(emc, INFO, "next->dram_timings[T_PDEX] = %u\n", 1424 + next->dram_timings[T_PDEX]); 1425 + emc_dbg(emc, INFO, "zq_latch_dvfs_wait_time = %d\n", 1426 + max_t(s32, 0, zq_latch_dvfs_wait_time)); 1427 + 1428 + if (dram_type == DRAM_TYPE_LPDDR4 && opt_zcal_en_cc) { 1429 + delay = div_o3(1000 * next->dram_timings[T_PDEX], 1430 + dst_clk_period); 1431 + 1432 + if (emc->num_devices < 2) { 1433 + if (dst_clk_period > zqcal_before_cc_cutoff) 1434 + ccfifo_writel(emc, 1435 + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1436 + EMC_ZQ_CAL_ZQ_CAL_CMD, EMC_ZQ_CAL, 1437 + delay); 1438 + 1439 + value = (mr13_flip_fspop & 0xfffffff7) | 0x0c000000; 1440 + ccfifo_writel(emc, value, EMC_MRW3, delay); 1441 + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); 1442 + ccfifo_writel(emc, 0, EMC_REF, 0); 1443 + ccfifo_writel(emc, 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1444 + EMC_ZQ_CAL_ZQ_LATCH_CMD, 1445 + EMC_ZQ_CAL, 1446 + max_t(s32, 0, 
zq_latch_dvfs_wait_time)); 1447 + } else if (shared_zq_resistor) { 1448 + if (dst_clk_period > zqcal_before_cc_cutoff) 1449 + ccfifo_writel(emc, 1450 + 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1451 + EMC_ZQ_CAL_ZQ_CAL_CMD, EMC_ZQ_CAL, 1452 + delay); 1453 + 1454 + ccfifo_writel(emc, 2UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1455 + EMC_ZQ_CAL_ZQ_LATCH_CMD, EMC_ZQ_CAL, 1456 + max_t(s32, 0, zq_latch_dvfs_wait_time) + 1457 + delay); 1458 + ccfifo_writel(emc, 1UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1459 + EMC_ZQ_CAL_ZQ_LATCH_CMD, 1460 + EMC_ZQ_CAL, 0); 1461 + 1462 + value = (mr13_flip_fspop & 0xfffffff7) | 0x0c000000; 1463 + ccfifo_writel(emc, value, EMC_MRW3, 0); 1464 + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); 1465 + ccfifo_writel(emc, 0, EMC_REF, 0); 1466 + 1467 + ccfifo_writel(emc, 1UL << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1468 + EMC_ZQ_CAL_ZQ_LATCH_CMD, EMC_ZQ_CAL, 1469 + tZQCAL_lpddr4 / dst_clk_period); 1470 + } else { 1471 + if (dst_clk_period > zqcal_before_cc_cutoff) 1472 + ccfifo_writel(emc, EMC_ZQ_CAL_ZQ_CAL_CMD, 1473 + EMC_ZQ_CAL, delay); 1474 + 1475 + value = (mr13_flip_fspop & 0xfffffff7) | 0x0c000000; 1476 + ccfifo_writel(emc, value, EMC_MRW3, delay); 1477 + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); 1478 + ccfifo_writel(emc, 0, EMC_REF, 0); 1479 + 1480 + ccfifo_writel(emc, EMC_ZQ_CAL_ZQ_LATCH_CMD, EMC_ZQ_CAL, 1481 + max_t(s32, 0, zq_latch_dvfs_wait_time)); 1482 + } 1483 + } 1484 + 1485 + /* WAR: delay for zqlatch */ 1486 + ccfifo_writel(emc, 0, 0, 10); 1487 + 1488 + /* 1489 + * Step 16: 1490 + * LPDDR4 Conditional Training Kickoff. Removed. 1491 + */ 1492 + 1493 + /* 1494 + * Step 17: 1495 + * MANSR exit self refresh. 1496 + */ 1497 + emc_dbg(emc, STEPS, "Step 17\n"); 1498 + 1499 + if (opt_dvfs_mode == MAN_SR && dram_type != DRAM_TYPE_LPDDR4) 1500 + ccfifo_writel(emc, 0, EMC_SELF_REF, 0); 1501 + 1502 + /* 1503 + * Step 18: 1504 + * Send MRWs to LPDDR3/DDR3. 
1505 + */ 1506 + emc_dbg(emc, STEPS, "Step 18\n"); 1507 + 1508 + if (dram_type == DRAM_TYPE_LPDDR2) { 1509 + ccfifo_writel(emc, next->emc_mrw2, EMC_MRW2, 0); 1510 + ccfifo_writel(emc, next->emc_mrw, EMC_MRW, 0); 1511 + if (is_lpddr3) 1512 + ccfifo_writel(emc, next->emc_mrw4, EMC_MRW4, 0); 1513 + } else if (dram_type == DRAM_TYPE_DDR3) { 1514 + if (opt_dll_mode) 1515 + ccfifo_writel(emc, next->emc_emrs & 1516 + ~EMC_EMRS_USE_EMRS_LONG_CNT, EMC_EMRS, 0); 1517 + ccfifo_writel(emc, next->emc_emrs2 & 1518 + ~EMC_EMRS2_USE_EMRS2_LONG_CNT, EMC_EMRS2, 0); 1519 + ccfifo_writel(emc, next->emc_mrs | 1520 + EMC_EMRS_USE_EMRS_LONG_CNT, EMC_MRS, 0); 1521 + } 1522 + 1523 + /* 1524 + * Step 19: 1525 + * ZQCAL for LPDDR3/DDR3 1526 + */ 1527 + emc_dbg(emc, STEPS, "Step 19\n"); 1528 + 1529 + if (opt_zcal_en_cc) { 1530 + if (dram_type == DRAM_TYPE_LPDDR2) { 1531 + value = opt_cc_short_zcal ? 90000 : 360000; 1532 + value = div_o3(value, dst_clk_period); 1533 + value = value << 1534 + EMC_MRS_WAIT_CNT2_MRS_EXT2_WAIT_CNT_SHIFT | 1535 + value << 1536 + EMC_MRS_WAIT_CNT2_MRS_EXT1_WAIT_CNT_SHIFT; 1537 + ccfifo_writel(emc, value, EMC_MRS_WAIT_CNT2, 0); 1538 + 1539 + value = opt_cc_short_zcal ? 0x56 : 0xab; 1540 + ccfifo_writel(emc, 2 << EMC_MRW_MRW_DEV_SELECTN_SHIFT | 1541 + EMC_MRW_USE_MRW_EXT_CNT | 1542 + 10 << EMC_MRW_MRW_MA_SHIFT | 1543 + value << EMC_MRW_MRW_OP_SHIFT, 1544 + EMC_MRW, 0); 1545 + 1546 + if (emc->num_devices > 1) { 1547 + value = 1 << EMC_MRW_MRW_DEV_SELECTN_SHIFT | 1548 + EMC_MRW_USE_MRW_EXT_CNT | 1549 + 10 << EMC_MRW_MRW_MA_SHIFT | 1550 + value << EMC_MRW_MRW_OP_SHIFT; 1551 + ccfifo_writel(emc, value, EMC_MRW, 0); 1552 + } 1553 + } else if (dram_type == DRAM_TYPE_DDR3) { 1554 + value = opt_cc_short_zcal ? 
0 : EMC_ZQ_CAL_LONG; 1555 + 1556 + ccfifo_writel(emc, value | 1557 + 2 << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1558 + EMC_ZQ_CAL_ZQ_CAL_CMD, EMC_ZQ_CAL, 1559 + 0); 1560 + 1561 + if (emc->num_devices > 1) { 1562 + value = value | 1 << EMC_ZQ_CAL_DEV_SEL_SHIFT | 1563 + EMC_ZQ_CAL_ZQ_CAL_CMD; 1564 + ccfifo_writel(emc, value, EMC_ZQ_CAL, 0); 1565 + } 1566 + } 1567 + } 1568 + 1569 + if (bg_reg_mode_change) { 1570 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 1571 + 1572 + if (ramp_up_wait <= 1250000) 1573 + delay = (1250000 - ramp_up_wait) / dst_clk_period; 1574 + else 1575 + delay = 0; 1576 + 1577 + ccfifo_writel(emc, 1578 + next->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX], 1579 + EMC_PMACRO_BG_BIAS_CTRL_0, delay); 1580 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 1581 + } 1582 + 1583 + /* 1584 + * Step 20: 1585 + * Issue ref and optional QRST. 1586 + */ 1587 + emc_dbg(emc, STEPS, "Step 20\n"); 1588 + 1589 + if (dram_type != DRAM_TYPE_LPDDR4) 1590 + ccfifo_writel(emc, 0, EMC_REF, 0); 1591 + 1592 + if (opt_do_sw_qrst) { 1593 + ccfifo_writel(emc, 1, EMC_ISSUE_QRST, 0); 1594 + ccfifo_writel(emc, 0, EMC_ISSUE_QRST, 2); 1595 + } 1596 + 1597 + /* 1598 + * Step 21: 1599 + * Restore ZCAL and ZCAL interval. 1600 + */ 1601 + emc_dbg(emc, STEPS, "Step 21\n"); 1602 + 1603 + if (save_restore_clkstop_pd || opt_zcal_en_cc) { 1604 + ccfifo_writel(emc, emc_dbg | EMC_DBG_WRITE_MUX_ACTIVE, 1605 + EMC_DBG, 0); 1606 + if (opt_zcal_en_cc && dram_type != DRAM_TYPE_LPDDR4) 1607 + ccfifo_writel(emc, next->burst_regs[EMC_ZCAL_INTERVAL_INDEX], 1608 + EMC_ZCAL_INTERVAL, 0); 1609 + 1610 + if (save_restore_clkstop_pd) 1611 + ccfifo_writel(emc, next->burst_regs[EMC_CFG_INDEX] & 1612 + ~EMC_CFG_DYN_SELF_REF, 1613 + EMC_CFG, 0); 1614 + ccfifo_writel(emc, emc_dbg, EMC_DBG, 0); 1615 + } 1616 + 1617 + /* 1618 + * Step 22: 1619 + * Restore EMC_CFG_PIPE_CLK. 
1620 + */ 1621 + emc_dbg(emc, STEPS, "Step 22\n"); 1622 + 1623 + ccfifo_writel(emc, emc_cfg_pipe_clk, EMC_CFG_PIPE_CLK, 0); 1624 + 1625 + if (bg_reg_mode_change) { 1626 + if (enable_bg_reg) 1627 + emc_writel(emc, 1628 + next->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 1629 + ~EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD, 1630 + EMC_PMACRO_BG_BIAS_CTRL_0); 1631 + else 1632 + emc_writel(emc, 1633 + next->burst_regs[EMC_PMACRO_BG_BIAS_CTRL_0_INDEX] & 1634 + ~EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD, 1635 + EMC_PMACRO_BG_BIAS_CTRL_0); 1636 + } 1637 + 1638 + /* 1639 + * Step 23: 1640 + */ 1641 + emc_dbg(emc, STEPS, "Step 23\n"); 1642 + 1643 + value = emc_readl(emc, EMC_CFG_DIG_DLL); 1644 + value |= EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC; 1645 + value &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK; 1646 + value &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK; 1647 + value &= ~EMC_CFG_DIG_DLL_CFG_DLL_EN; 1648 + value = (value & ~EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK) | 1649 + (2 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT); 1650 + emc_writel(emc, value, EMC_CFG_DIG_DLL); 1651 + 1652 + tegra210_emc_do_clock_change(emc, clksrc); 1653 + 1654 + /* 1655 + * Step 24: 1656 + * Save training results. Removed. 1657 + */ 1658 + 1659 + /* 1660 + * Step 25: 1661 + * Program MC updown registers. 1662 + */ 1663 + emc_dbg(emc, STEPS, "Step 25\n"); 1664 + 1665 + if (next->rate > last->rate) { 1666 + for (i = 0; i < next->num_up_down; i++) 1667 + mc_writel(emc->mc, next->la_scale_regs[i], 1668 + emc->offsets->la_scale[i]); 1669 + 1670 + tegra210_emc_timing_update(emc); 1671 + } 1672 + 1673 + /* 1674 + * Step 26: 1675 + * Restore ZCAL registers. 
1676 + */ 1677 + emc_dbg(emc, STEPS, "Step 26\n"); 1678 + 1679 + if (dram_type == DRAM_TYPE_LPDDR4) { 1680 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 1681 + emc_writel(emc, next->burst_regs[EMC_ZCAL_WAIT_CNT_INDEX], 1682 + EMC_ZCAL_WAIT_CNT); 1683 + emc_writel(emc, next->burst_regs[EMC_ZCAL_INTERVAL_INDEX], 1684 + EMC_ZCAL_INTERVAL); 1685 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 1686 + } 1687 + 1688 + if (dram_type != DRAM_TYPE_LPDDR4 && opt_zcal_en_cc && 1689 + !opt_short_zcal && opt_cc_short_zcal) { 1690 + udelay(2); 1691 + 1692 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 1693 + if (dram_type == DRAM_TYPE_LPDDR2) 1694 + emc_writel(emc, next->burst_regs[EMC_MRS_WAIT_CNT_INDEX], 1695 + EMC_MRS_WAIT_CNT); 1696 + else if (dram_type == DRAM_TYPE_DDR3) 1697 + emc_writel(emc, next->burst_regs[EMC_ZCAL_WAIT_CNT_INDEX], 1698 + EMC_ZCAL_WAIT_CNT); 1699 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 1700 + } 1701 + 1702 + /* 1703 + * Step 27: 1704 + * Restore EMC_CFG, FDPD registers. 1705 + */ 1706 + emc_dbg(emc, STEPS, "Step 27\n"); 1707 + 1708 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 1709 + emc_writel(emc, next->burst_regs[EMC_CFG_INDEX], EMC_CFG); 1710 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 1711 + emc_writel(emc, next->emc_fdpd_ctrl_cmd_no_ramp, 1712 + EMC_FDPD_CTRL_CMD_NO_RAMP); 1713 + emc_writel(emc, next->emc_sel_dpd_ctrl, EMC_SEL_DPD_CTRL); 1714 + 1715 + /* 1716 + * Step 28: 1717 + * Training recover. Removed. 1718 + */ 1719 + emc_dbg(emc, STEPS, "Step 28\n"); 1720 + 1721 + tegra210_emc_set_shadow_bypass(emc, ACTIVE); 1722 + emc_writel(emc, 1723 + next->burst_regs[EMC_PMACRO_AUTOCAL_CFG_COMMON_INDEX], 1724 + EMC_PMACRO_AUTOCAL_CFG_COMMON); 1725 + tegra210_emc_set_shadow_bypass(emc, ASSEMBLY); 1726 + 1727 + /* 1728 + * Step 29: 1729 + * Power fix WAR. 
1730 + */ 1731 + emc_dbg(emc, STEPS, "Step 29\n"); 1732 + 1733 + emc_writel(emc, EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE0 | 1734 + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE1 | 1735 + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE2 | 1736 + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE3 | 1737 + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE4 | 1738 + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE5 | 1739 + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE6 | 1740 + EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE7, 1741 + EMC_PMACRO_CFG_PM_GLOBAL_0); 1742 + emc_writel(emc, EMC_PMACRO_TRAINING_CTRL_0_CH0_TRAINING_E_WRPTR, 1743 + EMC_PMACRO_TRAINING_CTRL_0); 1744 + emc_writel(emc, EMC_PMACRO_TRAINING_CTRL_1_CH1_TRAINING_E_WRPTR, 1745 + EMC_PMACRO_TRAINING_CTRL_1); 1746 + emc_writel(emc, 0, EMC_PMACRO_CFG_PM_GLOBAL_0); 1747 + 1748 + /* 1749 + * Step 30: 1750 + * Re-enable autocal. 1751 + */ 1752 + emc_dbg(emc, STEPS, "Step 30: Re-enable DLL and AUTOCAL\n"); 1753 + 1754 + if (next->burst_regs[EMC_CFG_DIG_DLL_INDEX] & EMC_CFG_DIG_DLL_CFG_DLL_EN) { 1755 + value = emc_readl(emc, EMC_CFG_DIG_DLL); 1756 + value |= EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC; 1757 + value |= EMC_CFG_DIG_DLL_CFG_DLL_EN; 1758 + value &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK; 1759 + value &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK; 1760 + value = (value & ~EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK) | 1761 + (2 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT); 1762 + emc_writel(emc, value, EMC_CFG_DIG_DLL); 1763 + tegra210_emc_timing_update(emc); 1764 + } 1765 + 1766 + emc_writel(emc, next->emc_auto_cal_config, EMC_AUTO_CAL_CONFIG); 1767 + 1768 + /* Done! Yay. */ 1769 + } 1770 + 1771 + const struct tegra210_emc_sequence tegra210_emc_r21021 = { 1772 + .revision = 0x7, 1773 + .set_clock = tegra210_emc_r21021_set_clock, 1774 + .periodic_compensation = tegra210_emc_r21021_periodic_compensation, 1775 + };
+2100
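The reference-delay multiplier computed just before Step 11 (ramp down) follows a simple pattern that is easy to miss in the register traffic: a base tRP + tRFC window, converted from nanoseconds to source-clock cycles, scaled by how many refresh-related options are enabled, plus a fixed margin. The sketch below is a hypothetical model for illustration; the function name and parameter units are assumptions, not the driver's definitions.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical model of the pre-Step-11 delay: t_rp/t_rfc in ns,
 * src_clk_period in ps, result in source-clock cycles.  LPDDR4 skips
 * the wait entirely, matching the "delay = 0" branch in the sequence.
 */
static uint32_t ramp_down_delay(uint32_t t_rp, uint32_t t_rfc,
				uint32_t src_clk_period, bool lpddr4,
				bool ref_b4_sref_en, bool cya_allow_ref_cc,
				bool cya_issue_pc_ref)
{
	uint32_t mult = 1, delay;

	/* Each enabled option adds one more tRP + tRFC window. */
	if (ref_b4_sref_en)
		mult++;
	if (cya_allow_ref_cc)
		mult++;
	if (cya_issue_pc_ref)
		mult++;

	if (lpddr4)
		return 0;

	delay = (1000 * t_rp / src_clk_period) +
		(1000 * t_rfc / src_clk_period);

	return mult * delay + 20;
}
```

With, say, tRP = 18 ns, tRFC = 280 ns and a 1 ns source clock, one extra option doubles the 298-cycle base window before the 20-cycle margin is added.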
drivers/memory/tegra/tegra210-emc-core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2015-2020, NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #include <linux/bitfield.h> 7 + #include <linux/clk.h> 8 + #include <linux/clk/tegra.h> 9 + #include <linux/debugfs.h> 10 + #include <linux/delay.h> 11 + #include <linux/kernel.h> 12 + #include <linux/module.h> 13 + #include <linux/of_address.h> 14 + #include <linux/of_platform.h> 15 + #include <linux/of_reserved_mem.h> 16 + #include <linux/slab.h> 17 + #include <linux/thermal.h> 18 + #include <soc/tegra/fuse.h> 19 + #include <soc/tegra/mc.h> 20 + 21 + #include "tegra210-emc.h" 22 + #include "tegra210-mc.h" 23 + 24 + /* CLK_RST_CONTROLLER_CLK_SOURCE_EMC */ 25 + #define EMC_CLK_EMC_2X_CLK_SRC_SHIFT 29 26 + #define EMC_CLK_EMC_2X_CLK_SRC_MASK \ 27 + (0x7 << EMC_CLK_EMC_2X_CLK_SRC_SHIFT) 28 + #define EMC_CLK_SOURCE_PLLM_LJ 0x4 29 + #define EMC_CLK_SOURCE_PLLMB_LJ 0x5 30 + #define EMC_CLK_FORCE_CC_TRIGGER BIT(27) 31 + #define EMC_CLK_MC_EMC_SAME_FREQ BIT(16) 32 + #define EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT 0 33 + #define EMC_CLK_EMC_2X_CLK_DIVISOR_MASK \ 34 + (0xff << EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT) 35 + 36 + /* CLK_RST_CONTROLLER_CLK_SOURCE_EMC_DLL */ 37 + #define DLL_CLK_EMC_DLL_CLK_SRC_SHIFT 29 38 + #define DLL_CLK_EMC_DLL_CLK_SRC_MASK \ 39 + (0x7 << DLL_CLK_EMC_DLL_CLK_SRC_SHIFT) 40 + #define DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT 10 41 + #define DLL_CLK_EMC_DLL_DDLL_CLK_SEL_MASK \ 42 + (0x3 << DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT) 43 + #define PLLM_VCOA 0 44 + #define PLLM_VCOB 1 45 + #define EMC_DLL_SWITCH_OUT 2 46 + #define DLL_CLK_EMC_DLL_CLK_DIVISOR_SHIFT 0 47 + #define DLL_CLK_EMC_DLL_CLK_DIVISOR_MASK \ 48 + (0xff << DLL_CLK_EMC_DLL_CLK_DIVISOR_SHIFT) 49 + 50 + /* MC_EMEM_ARB_MISC0 */ 51 + #define MC_EMEM_ARB_MISC0_EMC_SAME_FREQ BIT(27) 52 + 53 + /* EMC_DATA_BRLSHFT_X */ 54 + #define EMC0_EMC_DATA_BRLSHFT_0_INDEX 2 55 + #define EMC1_EMC_DATA_BRLSHFT_0_INDEX 3 56 + #define EMC0_EMC_DATA_BRLSHFT_1_INDEX 4 57 + #define 
EMC1_EMC_DATA_BRLSHFT_1_INDEX 5 58 + 59 + #define TRIM_REG(chan, rank, reg, byte) \ 60 + (((EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ 61 + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte ## _MASK & \ 62 + next->trim_regs[EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## \ 63 + rank ## _ ## reg ## _INDEX]) >> \ 64 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ 65 + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte ## _SHIFT) \ 66 + + \ 67 + (((EMC_DATA_BRLSHFT_ ## rank ## _RANK ## rank ## _BYTE ## \ 68 + byte ## _DATA_BRLSHFT_MASK & \ 69 + next->trim_perch_regs[EMC ## chan ## \ 70 + _EMC_DATA_BRLSHFT_ ## rank ## _INDEX]) >> \ 71 + EMC_DATA_BRLSHFT_ ## rank ## _RANK ## rank ## _BYTE ## \ 72 + byte ## _DATA_BRLSHFT_SHIFT) * 64)) 73 + 74 + #define CALC_TEMP(rank, reg, byte1, byte2, n) \ 75 + (((new[n] << EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## \ 76 + reg ## _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte1 ## _SHIFT) & \ 77 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ 78 + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte1 ## _MASK) \ 79 + | \ 80 + ((new[n + 1] << EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ##\ 81 + reg ## _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte2 ## _SHIFT) & \ 82 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK ## rank ## _ ## reg ## \ 83 + _OB_DDLL_LONG_DQ_RANK ## rank ## _BYTE ## byte2 ## _MASK)) 84 + 85 + #define REFRESH_SPEEDUP(value, speedup) \ 86 + (((value) & 0xffff0000) | ((value) & 0xffff) * (speedup)) 87 + 88 + #define LPDDR2_MR4_SRR GENMASK(2, 0) 89 + 90 + static const struct tegra210_emc_sequence *tegra210_emc_sequences[] = { 91 + &tegra210_emc_r21021, 92 + }; 93 + 94 + static const struct tegra210_emc_table_register_offsets 95 + tegra210_emc_table_register_offsets = { 96 + .burst = { 97 + EMC_RC, 98 + EMC_RFC, 99 + EMC_RFCPB, 100 + EMC_REFCTRL2, 101 + EMC_RFC_SLR, 102 + EMC_RAS, 103 + EMC_RP, 104 + EMC_R2W, 105 + EMC_W2R, 106 + EMC_R2P, 107 + EMC_W2P, 108 + EMC_R2R, 109 + EMC_TPPD, 110 + EMC_CCDMW, 111 + EMC_RD_RCD, 112 
+ EMC_WR_RCD, 113 + EMC_RRD, 114 + EMC_REXT, 115 + EMC_WEXT, 116 + EMC_WDV_CHK, 117 + EMC_WDV, 118 + EMC_WSV, 119 + EMC_WEV, 120 + EMC_WDV_MASK, 121 + EMC_WS_DURATION, 122 + EMC_WE_DURATION, 123 + EMC_QUSE, 124 + EMC_QUSE_WIDTH, 125 + EMC_IBDLY, 126 + EMC_OBDLY, 127 + EMC_EINPUT, 128 + EMC_MRW6, 129 + EMC_EINPUT_DURATION, 130 + EMC_PUTERM_EXTRA, 131 + EMC_PUTERM_WIDTH, 132 + EMC_QRST, 133 + EMC_QSAFE, 134 + EMC_RDV, 135 + EMC_RDV_MASK, 136 + EMC_RDV_EARLY, 137 + EMC_RDV_EARLY_MASK, 138 + EMC_REFRESH, 139 + EMC_BURST_REFRESH_NUM, 140 + EMC_PRE_REFRESH_REQ_CNT, 141 + EMC_PDEX2WR, 142 + EMC_PDEX2RD, 143 + EMC_PCHG2PDEN, 144 + EMC_ACT2PDEN, 145 + EMC_AR2PDEN, 146 + EMC_RW2PDEN, 147 + EMC_CKE2PDEN, 148 + EMC_PDEX2CKE, 149 + EMC_PDEX2MRR, 150 + EMC_TXSR, 151 + EMC_TXSRDLL, 152 + EMC_TCKE, 153 + EMC_TCKESR, 154 + EMC_TPD, 155 + EMC_TFAW, 156 + EMC_TRPAB, 157 + EMC_TCLKSTABLE, 158 + EMC_TCLKSTOP, 159 + EMC_MRW7, 160 + EMC_TREFBW, 161 + EMC_ODT_WRITE, 162 + EMC_FBIO_CFG5, 163 + EMC_FBIO_CFG7, 164 + EMC_CFG_DIG_DLL, 165 + EMC_CFG_DIG_DLL_PERIOD, 166 + EMC_PMACRO_IB_RXRT, 167 + EMC_CFG_PIPE_1, 168 + EMC_CFG_PIPE_2, 169 + EMC_PMACRO_QUSE_DDLL_RANK0_4, 170 + EMC_PMACRO_QUSE_DDLL_RANK0_5, 171 + EMC_PMACRO_QUSE_DDLL_RANK1_4, 172 + EMC_PMACRO_QUSE_DDLL_RANK1_5, 173 + EMC_MRW8, 174 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_4, 175 + EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_5, 176 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_0, 177 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_1, 178 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_2, 179 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_3, 180 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_4, 181 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_5, 182 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_0, 183 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_1, 184 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_2, 185 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_3, 186 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_4, 187 + EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_5, 188 + EMC_PMACRO_DDLL_LONG_CMD_0, 189 + EMC_PMACRO_DDLL_LONG_CMD_1, 190 + EMC_PMACRO_DDLL_LONG_CMD_2, 191 + 
EMC_PMACRO_DDLL_LONG_CMD_3, 192 + EMC_PMACRO_DDLL_LONG_CMD_4, 193 + EMC_PMACRO_DDLL_SHORT_CMD_0, 194 + EMC_PMACRO_DDLL_SHORT_CMD_1, 195 + EMC_PMACRO_DDLL_SHORT_CMD_2, 196 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_3, 197 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_3, 198 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_3, 199 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_3, 200 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_3, 201 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_3, 202 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_3, 203 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_3, 204 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_3, 205 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_3, 206 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_3, 207 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_3, 208 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_3, 209 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_3, 210 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_3, 211 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_3, 212 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_3, 213 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_3, 214 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_3, 215 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_3, 216 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_0, 217 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_1, 218 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_2, 219 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_3, 220 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_0, 221 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_1, 222 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_2, 223 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_3, 224 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_0, 225 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_1, 226 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_2, 227 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_3, 228 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_0, 229 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_1, 230 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_2, 231 + EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_3, 232 + EMC_TXDSRVTTGEN, 233 + EMC_FDPD_CTRL_DQ, 234 + EMC_FDPD_CTRL_CMD, 235 + 
EMC_FBIO_SPARE, 236 + EMC_ZCAL_INTERVAL, 237 + EMC_ZCAL_WAIT_CNT, 238 + EMC_MRS_WAIT_CNT, 239 + EMC_MRS_WAIT_CNT2, 240 + EMC_AUTO_CAL_CHANNEL, 241 + EMC_DLL_CFG_0, 242 + EMC_DLL_CFG_1, 243 + EMC_PMACRO_AUTOCAL_CFG_COMMON, 244 + EMC_PMACRO_ZCTRL, 245 + EMC_CFG, 246 + EMC_CFG_PIPE, 247 + EMC_DYN_SELF_REF_CONTROL, 248 + EMC_QPOP, 249 + EMC_DQS_BRLSHFT_0, 250 + EMC_DQS_BRLSHFT_1, 251 + EMC_CMD_BRLSHFT_2, 252 + EMC_CMD_BRLSHFT_3, 253 + EMC_PMACRO_PAD_CFG_CTRL, 254 + EMC_PMACRO_DATA_PAD_RX_CTRL, 255 + EMC_PMACRO_CMD_PAD_RX_CTRL, 256 + EMC_PMACRO_DATA_RX_TERM_MODE, 257 + EMC_PMACRO_CMD_RX_TERM_MODE, 258 + EMC_PMACRO_CMD_PAD_TX_CTRL, 259 + EMC_PMACRO_DATA_PAD_TX_CTRL, 260 + EMC_PMACRO_COMMON_PAD_TX_CTRL, 261 + EMC_PMACRO_VTTGEN_CTRL_0, 262 + EMC_PMACRO_VTTGEN_CTRL_1, 263 + EMC_PMACRO_VTTGEN_CTRL_2, 264 + EMC_PMACRO_BRICK_CTRL_RFU1, 265 + EMC_PMACRO_CMD_BRICK_CTRL_FDPD, 266 + EMC_PMACRO_BRICK_CTRL_RFU2, 267 + EMC_PMACRO_DATA_BRICK_CTRL_FDPD, 268 + EMC_PMACRO_BG_BIAS_CTRL_0, 269 + EMC_CFG_3, 270 + EMC_PMACRO_TX_PWRD_0, 271 + EMC_PMACRO_TX_PWRD_1, 272 + EMC_PMACRO_TX_PWRD_2, 273 + EMC_PMACRO_TX_PWRD_3, 274 + EMC_PMACRO_TX_PWRD_4, 275 + EMC_PMACRO_TX_PWRD_5, 276 + EMC_CONFIG_SAMPLE_DELAY, 277 + EMC_PMACRO_TX_SEL_CLK_SRC_0, 278 + EMC_PMACRO_TX_SEL_CLK_SRC_1, 279 + EMC_PMACRO_TX_SEL_CLK_SRC_2, 280 + EMC_PMACRO_TX_SEL_CLK_SRC_3, 281 + EMC_PMACRO_TX_SEL_CLK_SRC_4, 282 + EMC_PMACRO_TX_SEL_CLK_SRC_5, 283 + EMC_PMACRO_DDLL_BYPASS, 284 + EMC_PMACRO_DDLL_PWRD_0, 285 + EMC_PMACRO_DDLL_PWRD_1, 286 + EMC_PMACRO_DDLL_PWRD_2, 287 + EMC_PMACRO_CMD_CTRL_0, 288 + EMC_PMACRO_CMD_CTRL_1, 289 + EMC_PMACRO_CMD_CTRL_2, 290 + EMC_TR_TIMING_0, 291 + EMC_TR_DVFS, 292 + EMC_TR_CTRL_1, 293 + EMC_TR_RDV, 294 + EMC_TR_QPOP, 295 + EMC_TR_RDV_MASK, 296 + EMC_MRW14, 297 + EMC_TR_QSAFE, 298 + EMC_TR_QRST, 299 + EMC_TRAINING_CTRL, 300 + EMC_TRAINING_SETTLE, 301 + EMC_TRAINING_VREF_SETTLE, 302 + EMC_TRAINING_CA_FINE_CTRL, 303 + EMC_TRAINING_CA_CTRL_MISC, 304 + EMC_TRAINING_CA_CTRL_MISC1, 305 + 
EMC_TRAINING_CA_VREF_CTRL, 306 + EMC_TRAINING_QUSE_CORS_CTRL, 307 + EMC_TRAINING_QUSE_FINE_CTRL, 308 + EMC_TRAINING_QUSE_CTRL_MISC, 309 + EMC_TRAINING_QUSE_VREF_CTRL, 310 + EMC_TRAINING_READ_FINE_CTRL, 311 + EMC_TRAINING_READ_CTRL_MISC, 312 + EMC_TRAINING_READ_VREF_CTRL, 313 + EMC_TRAINING_WRITE_FINE_CTRL, 314 + EMC_TRAINING_WRITE_CTRL_MISC, 315 + EMC_TRAINING_WRITE_VREF_CTRL, 316 + EMC_TRAINING_MPC, 317 + EMC_MRW15, 318 + }, 319 + .trim = { 320 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_0, 321 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_1, 322 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_2, 323 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_3, 324 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_0, 325 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_1, 326 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_2, 327 + EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_3, 328 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_0, 329 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_1, 330 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_2, 331 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_0, 332 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_1, 333 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_2, 334 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_0, 335 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_1, 336 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_2, 337 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_0, 338 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_1, 339 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_2, 340 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_0, 341 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_1, 342 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_2, 343 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_0, 344 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_1, 345 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_2, 346 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_0, 347 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_1, 348 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_2, 349 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_0, 350 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_1, 351 + EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_2, 352 + 
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_2,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_2,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_2,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_2,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_2,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_2,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_2,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_0,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_1,
		EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_2,
		EMC_PMACRO_IB_VREF_DQS_0,
		EMC_PMACRO_IB_VREF_DQS_1,
		EMC_PMACRO_IB_VREF_DQ_0,
		EMC_PMACRO_IB_VREF_DQ_1,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_4,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_5,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2,
		EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_2,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_0,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_1,
		EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_2,
		EMC_PMACRO_QUSE_DDLL_RANK0_0,
		EMC_PMACRO_QUSE_DDLL_RANK0_1,
		EMC_PMACRO_QUSE_DDLL_RANK0_2,
		EMC_PMACRO_QUSE_DDLL_RANK0_3,
		EMC_PMACRO_QUSE_DDLL_RANK1_0,
		EMC_PMACRO_QUSE_DDLL_RANK1_1,
		EMC_PMACRO_QUSE_DDLL_RANK1_2,
		EMC_PMACRO_QUSE_DDLL_RANK1_3
	},
	.burst_mc = {
		MC_EMEM_ARB_CFG,
		MC_EMEM_ARB_OUTSTANDING_REQ,
		MC_EMEM_ARB_REFPB_HP_CTRL,
		MC_EMEM_ARB_REFPB_BANK_CTRL,
		MC_EMEM_ARB_TIMING_RCD,
		MC_EMEM_ARB_TIMING_RP,
		MC_EMEM_ARB_TIMING_RC,
		MC_EMEM_ARB_TIMING_RAS,
		MC_EMEM_ARB_TIMING_FAW,
		MC_EMEM_ARB_TIMING_RRD,
		MC_EMEM_ARB_TIMING_RAP2PRE,
		MC_EMEM_ARB_TIMING_WAP2PRE,
		MC_EMEM_ARB_TIMING_R2R,
		MC_EMEM_ARB_TIMING_W2W,
		MC_EMEM_ARB_TIMING_R2W,
		MC_EMEM_ARB_TIMING_CCDMW,
		MC_EMEM_ARB_TIMING_W2R,
		MC_EMEM_ARB_TIMING_RFCPB,
		MC_EMEM_ARB_DA_TURNS,
		MC_EMEM_ARB_DA_COVERS,
		MC_EMEM_ARB_MISC0,
		MC_EMEM_ARB_MISC1,
		MC_EMEM_ARB_MISC2,
		MC_EMEM_ARB_RING1_THROTTLE,
		MC_EMEM_ARB_DHYST_CTRL,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_0,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_1,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_2,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_3,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_4,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_5,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_6,
		MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_7,
	},
	.la_scale = {
		MC_MLL_MPCORER_PTSA_RATE,
		MC_FTOP_PTSA_RATE,
		MC_PTSA_GRANT_DECREMENT,
		MC_LATENCY_ALLOWANCE_XUSB_0,
		MC_LATENCY_ALLOWANCE_XUSB_1,
		MC_LATENCY_ALLOWANCE_TSEC_0,
		MC_LATENCY_ALLOWANCE_SDMMCA_0,
		MC_LATENCY_ALLOWANCE_SDMMCAA_0,
		MC_LATENCY_ALLOWANCE_SDMMC_0,
		MC_LATENCY_ALLOWANCE_SDMMCAB_0,
		MC_LATENCY_ALLOWANCE_PPCS_0,
		MC_LATENCY_ALLOWANCE_PPCS_1,
		MC_LATENCY_ALLOWANCE_MPCORE_0,
		MC_LATENCY_ALLOWANCE_HC_0,
		MC_LATENCY_ALLOWANCE_HC_1,
		MC_LATENCY_ALLOWANCE_AVPC_0,
		MC_LATENCY_ALLOWANCE_GPU_0,
		MC_LATENCY_ALLOWANCE_GPU2_0,
		MC_LATENCY_ALLOWANCE_NVENC_0,
		MC_LATENCY_ALLOWANCE_NVDEC_0,
		MC_LATENCY_ALLOWANCE_VIC_0,
		MC_LATENCY_ALLOWANCE_VI2_0,
		MC_LATENCY_ALLOWANCE_ISP2_0,
		MC_LATENCY_ALLOWANCE_ISP2_1,
	},
	.burst_per_channel = {
		{ .bank = 0, .offset = EMC_MRW10, },
		{ .bank = 1, .offset = EMC_MRW10, },
		{ .bank = 0, .offset = EMC_MRW11, },
		{ .bank = 1, .offset = EMC_MRW11, },
		{ .bank = 0, .offset = EMC_MRW12, },
		{ .bank = 1, .offset = EMC_MRW12, },
		{ .bank = 0, .offset = EMC_MRW13, },
		{ .bank = 1, .offset = EMC_MRW13, },
	},
	.trim_per_channel = {
		{ .bank = 0, .offset = EMC_CMD_BRLSHFT_0, },
		{ .bank = 1, .offset = EMC_CMD_BRLSHFT_1, },
		{ .bank = 0, .offset = EMC_DATA_BRLSHFT_0, },
		{ .bank = 1, .offset = EMC_DATA_BRLSHFT_0, },
		{ .bank = 0, .offset = EMC_DATA_BRLSHFT_1, },
		{ .bank = 1, .offset = EMC_DATA_BRLSHFT_1, },
		{ .bank = 0, .offset = EMC_QUSE_BRLSHFT_0, },
		{ .bank = 1, .offset = EMC_QUSE_BRLSHFT_1, },
		{ .bank = 0, .offset = EMC_QUSE_BRLSHFT_2, },
		{ .bank = 1, .offset = EMC_QUSE_BRLSHFT_3, },
	},
	.vref_per_channel = {
		{
			.bank = 0,
			.offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK0,
		}, {
			.bank = 1,
			.offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK0,
		}, {
			.bank = 0,
			.offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK1,
		}, {
			.bank = 1,
			.offset = EMC_TRAINING_OPT_DQS_IB_VREF_RANK1,
		},
	},
};

static void tegra210_emc_train(struct timer_list *timer)
{
	struct tegra210_emc *emc = from_timer(emc, timer, training);
	unsigned long flags;

	if (!emc->last)
		return;

	spin_lock_irqsave(&emc->lock, flags);

	if (emc->sequence->periodic_compensation)
		emc->sequence->periodic_compensation(emc);

	spin_unlock_irqrestore(&emc->lock, flags);

	mod_timer(&emc->training,
		  jiffies + msecs_to_jiffies(emc->training_interval));
}

static void tegra210_emc_training_start(struct tegra210_emc *emc)
{
	mod_timer(&emc->training,
		  jiffies + msecs_to_jiffies(emc->training_interval));
}

static void tegra210_emc_training_stop(struct tegra210_emc *emc)
{
	del_timer(&emc->training);
}

static unsigned int tegra210_emc_get_temperature(struct tegra210_emc *emc)
{
	unsigned long flags;
	u32 value, max = 0;
	unsigned int i;

	spin_lock_irqsave(&emc->lock, flags);

	for (i = 0; i < emc->num_devices; i++) {
		value = tegra210_emc_mrr_read(emc, i, 4);

		if (value & BIT(7))
			dev_dbg(emc->dev,
				"sensor reading changed for device %u: %08x\n",
				i, value);

		value = FIELD_GET(LPDDR2_MR4_SRR, value);
		if (value > max)
			max = value;
	}

	spin_unlock_irqrestore(&emc->lock, flags);

	return max;
}

static void tegra210_emc_poll_refresh(struct timer_list *timer)
{
	struct tegra210_emc *emc = from_timer(emc, timer, refresh_timer);
	unsigned int temperature;

	if (!emc->debugfs.temperature)
		temperature =
			tegra210_emc_get_temperature(emc);
	else
		temperature = emc->debugfs.temperature;

	if (temperature == emc->temperature)
		goto reset;

	switch (temperature) {
	case 0 ... 3:
		/* temperature is fine, using regular refresh */
		dev_dbg(emc->dev, "switching to nominal refresh...\n");
		tegra210_emc_set_refresh(emc, TEGRA210_EMC_REFRESH_NOMINAL);
		break;

	case 4:
		dev_dbg(emc->dev, "switching to 2x refresh...\n");
		tegra210_emc_set_refresh(emc, TEGRA210_EMC_REFRESH_2X);
		break;

	case 5:
		dev_dbg(emc->dev, "switching to 4x refresh...\n");
		tegra210_emc_set_refresh(emc, TEGRA210_EMC_REFRESH_4X);
		break;

	case 6 ... 7:
		dev_dbg(emc->dev, "switching to throttle refresh...\n");
		tegra210_emc_set_refresh(emc, TEGRA210_EMC_REFRESH_THROTTLE);
		break;

	default:
		WARN(1, "invalid DRAM temperature state %u\n", temperature);
		return;
	}

	emc->temperature = temperature;

reset:
	if (atomic_read(&emc->refresh_poll) > 0) {
		unsigned int interval = emc->refresh_poll_interval;
		unsigned int timeout = msecs_to_jiffies(interval);

		mod_timer(&emc->refresh_timer, jiffies + timeout);
	}
}

static void tegra210_emc_poll_refresh_stop(struct tegra210_emc *emc)
{
	atomic_set(&emc->refresh_poll, 0);
	del_timer_sync(&emc->refresh_timer);
}

static void tegra210_emc_poll_refresh_start(struct tegra210_emc *emc)
{
	atomic_set(&emc->refresh_poll, 1);

	mod_timer(&emc->refresh_timer,
		  jiffies + msecs_to_jiffies(emc->refresh_poll_interval));
}

static int tegra210_emc_cd_max_state(struct thermal_cooling_device *cd,
				     unsigned long *state)
{
	*state = 1;

	return 0;
}

static int tegra210_emc_cd_get_state(struct thermal_cooling_device *cd,
				     unsigned long
				     *state)
{
	struct tegra210_emc *emc = cd->devdata;

	*state = atomic_read(&emc->refresh_poll);

	return 0;
}

static int tegra210_emc_cd_set_state(struct thermal_cooling_device *cd,
				     unsigned long state)
{
	struct tegra210_emc *emc = cd->devdata;

	if (state == atomic_read(&emc->refresh_poll))
		return 0;

	if (state)
		tegra210_emc_poll_refresh_start(emc);
	else
		tegra210_emc_poll_refresh_stop(emc);

	return 0;
}

static struct thermal_cooling_device_ops tegra210_emc_cd_ops = {
	.get_max_state = tegra210_emc_cd_max_state,
	.get_cur_state = tegra210_emc_cd_get_state,
	.set_cur_state = tegra210_emc_cd_set_state,
};

static void tegra210_emc_set_clock(struct tegra210_emc *emc, u32 clksrc)
{
	emc->sequence->set_clock(emc, clksrc);

	if (emc->next->periodic_training)
		tegra210_emc_training_start(emc);
	else
		tegra210_emc_training_stop(emc);
}

static void tegra210_change_dll_src(struct tegra210_emc *emc,
				    u32 clksrc)
{
	u32 dll_setting = emc->next->dll_clk_src;
	u32 emc_clk_src;
	u32 emc_clk_div;

	emc_clk_src = (clksrc & EMC_CLK_EMC_2X_CLK_SRC_MASK) >>
		      EMC_CLK_EMC_2X_CLK_SRC_SHIFT;
	emc_clk_div = (clksrc & EMC_CLK_EMC_2X_CLK_DIVISOR_MASK) >>
		      EMC_CLK_EMC_2X_CLK_DIVISOR_SHIFT;

	dll_setting &= ~(DLL_CLK_EMC_DLL_CLK_SRC_MASK |
			 DLL_CLK_EMC_DLL_CLK_DIVISOR_MASK);
	dll_setting |= emc_clk_src << DLL_CLK_EMC_DLL_CLK_SRC_SHIFT;
	dll_setting |= emc_clk_div << DLL_CLK_EMC_DLL_CLK_DIVISOR_SHIFT;

	dll_setting &= ~DLL_CLK_EMC_DLL_DDLL_CLK_SEL_MASK;
	if (emc_clk_src == EMC_CLK_SOURCE_PLLMB_LJ)
		dll_setting |= (PLLM_VCOB <<
				DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT);
	else if (emc_clk_src == EMC_CLK_SOURCE_PLLM_LJ)
		dll_setting |= (PLLM_VCOA <<
				DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT);
	else
		dll_setting |= (EMC_DLL_SWITCH_OUT <<
				DLL_CLK_EMC_DLL_DDLL_CLK_SEL_SHIFT);

	tegra210_clk_emc_dll_update_setting(dll_setting);

	if (emc->next->clk_out_enb_x_0_clk_enb_emc_dll)
		tegra210_clk_emc_dll_enable(true);
	else
		tegra210_clk_emc_dll_enable(false);
}

int tegra210_emc_set_refresh(struct tegra210_emc *emc,
			     enum tegra210_emc_refresh refresh)
{
	struct tegra210_emc_timing *timings;
	unsigned long flags;

	if ((emc->dram_type != DRAM_TYPE_LPDDR2 &&
	     emc->dram_type != DRAM_TYPE_LPDDR4) ||
	    !emc->last)
		return -ENODEV;

	if (refresh > TEGRA210_EMC_REFRESH_THROTTLE)
		return -EINVAL;

	if (refresh == emc->refresh)
		return 0;

	spin_lock_irqsave(&emc->lock, flags);

	if (refresh == TEGRA210_EMC_REFRESH_THROTTLE && emc->derated)
		timings = emc->derated;
	else
		timings = emc->nominal;

	if (timings != emc->timings) {
		unsigned int index = emc->last - emc->timings;
		u32 clksrc;

		clksrc = emc->provider.configs[index].value |
			 EMC_CLK_FORCE_CC_TRIGGER;

		emc->next = &timings[index];
		emc->timings = timings;

		tegra210_emc_set_clock(emc, clksrc);
	} else {
		tegra210_emc_adjust_timing(emc, emc->last);
		tegra210_emc_timing_update(emc);

		if (refresh != TEGRA210_EMC_REFRESH_NOMINAL)
			emc_writel(emc, EMC_REF_REF_CMD, EMC_REF);
	}

	spin_unlock_irqrestore(&emc->lock, flags);

	return 0;
}

u32 tegra210_emc_mrr_read(struct tegra210_emc *emc, unsigned int chip,
			  unsigned int address)
{
	u32 value, ret = 0;
	unsigned int i;

	value = (chip & EMC_MRR_DEV_SEL_MASK) << EMC_MRR_DEV_SEL_SHIFT |
		(address & EMC_MRR_MA_MASK) << EMC_MRR_MA_SHIFT;
	emc_writel(emc, value, EMC_MRR);

	for (i
	     = 0; i < emc->num_channels; i++)
		WARN(tegra210_emc_wait_for_update(emc, i, EMC_EMC_STATUS,
						  EMC_EMC_STATUS_MRR_DIVLD, 1),
		     "Timed out waiting for MRR %u (ch=%u)\n", address, i);

	for (i = 0; i < emc->num_channels; i++) {
		value = emc_channel_readl(emc, i, EMC_MRR);
		value &= EMC_MRR_DATA_MASK;

		ret = (ret << 16) | value;
	}

	return ret;
}

void tegra210_emc_do_clock_change(struct tegra210_emc *emc, u32 clksrc)
{
	int err;

	mc_readl(emc->mc, MC_EMEM_ADR_CFG);
	emc_readl(emc, EMC_INTSTATUS);

	tegra210_clk_emc_update_setting(clksrc);

	err = tegra210_emc_wait_for_update(emc, 0, EMC_INTSTATUS,
					   EMC_INTSTATUS_CLKCHANGE_COMPLETE,
					   true);
	if (err)
		dev_warn(emc->dev, "clock change completion error: %d\n", err);
}

struct tegra210_emc_timing *tegra210_emc_find_timing(struct tegra210_emc *emc,
						     unsigned long rate)
{
	unsigned int i;

	for (i = 0; i < emc->num_timings; i++)
		if (emc->timings[i].rate * 1000UL == rate)
			return &emc->timings[i];

	return NULL;
}

int tegra210_emc_wait_for_update(struct tegra210_emc *emc, unsigned int channel,
				 unsigned int offset, u32 bit_mask, bool state)
{
	unsigned int i;
	u32 value;

	for (i = 0; i < EMC_STATUS_UPDATE_TIMEOUT; i++) {
		value = emc_channel_readl(emc, channel, offset);
		if (!!(value & bit_mask) == state)
			return 0;

		udelay(1);
	}

	return -ETIMEDOUT;
}

void tegra210_emc_set_shadow_bypass(struct tegra210_emc *emc, int set)
{
	u32 emc_dbg = emc_readl(emc, EMC_DBG);

	if (set)
		emc_writel(emc, emc_dbg | EMC_DBG_WRITE_MUX_ACTIVE, EMC_DBG);
	else
		emc_writel(emc, emc_dbg & ~EMC_DBG_WRITE_MUX_ACTIVE, EMC_DBG);
}

u32 tegra210_emc_get_dll_state(struct
			       tegra210_emc_timing *next)
{
	if (next->emc_emrs & 0x1)
		return 0;

	return 1;
}

void tegra210_emc_timing_update(struct tegra210_emc *emc)
{
	unsigned int i;
	int err = 0;

	emc_writel(emc, 0x1, EMC_TIMING_CONTROL);

	for (i = 0; i < emc->num_channels; i++) {
		err |= tegra210_emc_wait_for_update(emc, i, EMC_EMC_STATUS,
						    EMC_EMC_STATUS_TIMING_UPDATE_STALLED,
						    false);
	}

	if (err)
		dev_warn(emc->dev, "timing update error: %d\n", err);
}

unsigned long tegra210_emc_actual_osc_clocks(u32 in)
{
	if (in < 0x40)
		return in * 16;
	else if (in < 0x80)
		return 2048;
	else if (in < 0xc0)
		return 4096;
	else
		return 8192;
}

void tegra210_emc_start_periodic_compensation(struct tegra210_emc *emc)
{
	u32 mpc_req = 0x4b;

	emc_writel(emc, mpc_req, EMC_MPC);
	mpc_req = emc_readl(emc, EMC_MPC);
}

u32 tegra210_emc_compensate(struct tegra210_emc_timing *next, u32 offset)
{
	u32 temp = 0, rate = next->rate / 1000;
	s32 delta[4], delta_taps[4];
	s32 new[] = {
		TRIM_REG(0, 0, 0, 0),
		TRIM_REG(0, 0, 0, 1),
		TRIM_REG(0, 0, 1, 2),
		TRIM_REG(0, 0, 1, 3),

		TRIM_REG(1, 0, 2, 4),
		TRIM_REG(1, 0, 2, 5),
		TRIM_REG(1, 0, 3, 6),
		TRIM_REG(1, 0, 3, 7),

		TRIM_REG(0, 1, 0, 0),
		TRIM_REG(0, 1, 0, 1),
		TRIM_REG(0, 1, 1, 2),
		TRIM_REG(0, 1, 1, 3),

		TRIM_REG(1, 1, 2, 4),
		TRIM_REG(1, 1, 2, 5),
		TRIM_REG(1, 1, 3, 6),
		TRIM_REG(1, 1, 3, 7)
	};
	unsigned i;

	switch (offset) {
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0:
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1:
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2:
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3:
	case EMC_DATA_BRLSHFT_0:
		delta[0] = 128 *
			   (next->current_dram_clktree[C0D0U0] -
			    next->trained_dram_clktree[C0D0U0]);
		delta[1] = 128 * (next->current_dram_clktree[C0D0U1] -
				  next->trained_dram_clktree[C0D0U1]);
		delta[2] = 128 * (next->current_dram_clktree[C1D0U0] -
				  next->trained_dram_clktree[C1D0U0]);
		delta[3] = 128 * (next->current_dram_clktree[C1D0U1] -
				  next->trained_dram_clktree[C1D0U1]);

		delta_taps[0] = (delta[0] * (s32)rate) / 1000000;
		delta_taps[1] = (delta[1] * (s32)rate) / 1000000;
		delta_taps[2] = (delta[2] * (s32)rate) / 1000000;
		delta_taps[3] = (delta[3] * (s32)rate) / 1000000;

		for (i = 0; i < 4; i++) {
			if ((delta_taps[i] > next->tree_margin) ||
			    (delta_taps[i] < (-1 * next->tree_margin))) {
				new[i * 2] = new[i * 2] + delta_taps[i];
				new[i * 2 + 1] = new[i * 2 + 1] +
						 delta_taps[i];
			}
		}

		if (offset == EMC_DATA_BRLSHFT_0) {
			for (i = 0; i < 8; i++)
				new[i] = new[i] / 64;
		} else {
			for (i = 0; i < 8; i++)
				new[i] = new[i] % 64;
		}

		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0:
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1:
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2:
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3:
	case EMC_DATA_BRLSHFT_1:
		delta[0] = 128 * (next->current_dram_clktree[C0D1U0] -
				  next->trained_dram_clktree[C0D1U0]);
		delta[1] = 128 * (next->current_dram_clktree[C0D1U1] -
				  next->trained_dram_clktree[C0D1U1]);
		delta[2] = 128 * (next->current_dram_clktree[C1D1U0] -
				  next->trained_dram_clktree[C1D1U0]);
		delta[3] = 128 * (next->current_dram_clktree[C1D1U1] -
				  next->trained_dram_clktree[C1D1U1]);

		delta_taps[0] = (delta[0] * (s32)rate) / 1000000;
		delta_taps[1] = (delta[1] * (s32)rate) / 1000000;
		delta_taps[2] = (delta[2] * (s32)rate) / 1000000;
		delta_taps[3] = (delta[3] * (s32)rate) / 1000000;

		for (i = 0; i < 4; i++) {
			if ((delta_taps[i] > next->tree_margin) ||
			    (delta_taps[i] < (-1 * next->tree_margin))) {
				new[8 + i * 2] = new[8 + i * 2] +
						 delta_taps[i];
				new[8 + i * 2 + 1] = new[8 + i * 2 + 1] +
						     delta_taps[i];
			}
		}

		if (offset == EMC_DATA_BRLSHFT_1) {
			for (i = 0; i < 8; i++)
				new[i + 8] = new[i + 8] / 64;
		} else {
			for (i = 0; i < 8; i++)
				new[i + 8] = new[i + 8] % 64;
		}

		break;
	}

	switch (offset) {
	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0:
		temp = CALC_TEMP(0, 0, 0, 1, 0);
		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1:
		temp = CALC_TEMP(0, 1, 2, 3, 2);
		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2:
		temp = CALC_TEMP(0, 2, 4, 5, 4);
		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3:
		temp = CALC_TEMP(0, 3, 6, 7, 6);
		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0:
		temp = CALC_TEMP(1, 0, 0, 1, 8);
		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1:
		temp = CALC_TEMP(1, 1, 2, 3, 10);
		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2:
		temp = CALC_TEMP(1, 2, 4, 5, 12);
		break;

	case EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3:
		temp = CALC_TEMP(1, 3, 6, 7, 14);
		break;

	case EMC_DATA_BRLSHFT_0:
		temp = ((new[0] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_MASK) |
		       ((new[1] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_MASK) |
		       ((new[2] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_MASK) |
		       ((new[3] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_MASK) |
		       ((new[4] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_MASK) |
		       ((new[5] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_MASK) |
		       ((new[6] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_MASK) |
		       ((new[7] <<
			 EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_MASK);
		break;

	case EMC_DATA_BRLSHFT_1:
		temp = ((new[8] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_MASK) |
		       ((new[9] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_MASK) |
		       ((new[10] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_MASK) |
		       ((new[11] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_MASK) |
		       ((new[12] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_MASK) |
		       ((new[13] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_MASK) |
		       ((new[14] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_MASK) |
		       ((new[15] <<
			 EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_SHIFT) &
			EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_MASK);
		break;

	default:
		break;
	}

	return temp;
}

u32 tegra210_emc_dll_prelock(struct tegra210_emc *emc, u32 clksrc)
{
	unsigned int i;
	u32 value;

	value = emc_readl(emc, EMC_CFG_DIG_DLL);
	value &= ~EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_MASK;
	value |= (3 << EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_SHIFT);
	value &= ~EMC_CFG_DIG_DLL_CFG_DLL_EN;
	value &= ~EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK;
	value |= (3 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT);
	value |= EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC;
	value &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK;
	value &= ~EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK;
	emc_writel(emc, value, EMC_CFG_DIG_DLL);
	emc_writel(emc, 1, EMC_TIMING_CONTROL);

	for (i = 0; i < emc->num_channels; i++)
		tegra210_emc_wait_for_update(emc, i, EMC_EMC_STATUS,
					     EMC_EMC_STATUS_TIMING_UPDATE_STALLED,
					     0);

	for (i = 0; i < emc->num_channels; i++) {
		while (true) {
			value = emc_channel_readl(emc, i, EMC_CFG_DIG_DLL);
			if ((value & EMC_CFG_DIG_DLL_CFG_DLL_EN) == 0)
				break;
		}
	}

	value = emc->next->burst_regs[EMC_DLL_CFG_0_INDEX];
	emc_writel(emc, value, EMC_DLL_CFG_0);

	value = emc_readl(emc, EMC_DLL_CFG_1);
	value &= EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_MASK;

	if (emc->next->rate >= 400000 && emc->next->rate < 600000)
		value |= 150;
	else if (emc->next->rate >= 600000 && emc->next->rate < 800000)
		value |= 100;
	else if (emc->next->rate >= 800000 && emc->next->rate < 1000000)
		value |= 70;
	else if (emc->next->rate >= 1000000 && emc->next->rate < 1200000)
		value |= 30;
	else
		value |= 20;

	emc_writel(emc, value, EMC_DLL_CFG_1);

	tegra210_change_dll_src(emc, clksrc);

	value = emc_readl(emc, EMC_CFG_DIG_DLL);
	value |= EMC_CFG_DIG_DLL_CFG_DLL_EN;
	emc_writel(emc, value, EMC_CFG_DIG_DLL);

	tegra210_emc_timing_update(emc);

	for (i = 0; i < emc->num_channels; i++) {
		while (true) {
			value = emc_channel_readl(emc, 0, EMC_CFG_DIG_DLL);
			if (value & EMC_CFG_DIG_DLL_CFG_DLL_EN)
				break;
		}
	}

	while (true) {
		value = emc_readl(emc, EMC_DIG_DLL_STATUS);

		if ((value & EMC_DIG_DLL_STATUS_DLL_PRIV_UPDATED) == 0)
			continue;

		if ((value & EMC_DIG_DLL_STATUS_DLL_LOCK) == 0)
			continue;

		break;
	}

	value = emc_readl(emc, EMC_DIG_DLL_STATUS);

	return value & EMC_DIG_DLL_STATUS_DLL_OUT_MASK;
}

u32 tegra210_emc_dvfs_power_ramp_up(struct tegra210_emc *emc, u32 clk,
				    bool flip_backward)
{
	u32 cmd_pad, dq_pad, rfu1, cfg5, common_tx, ramp_up_wait = 0;
	const struct tegra210_emc_timing *timing;

	if (flip_backward)
		timing = emc->last;
	else
		timing = emc->next;

	cmd_pad = timing->burst_regs[EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX];
	dq_pad = timing->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX];
	rfu1 = timing->burst_regs[EMC_PMACRO_BRICK_CTRL_RFU1_INDEX];
	cfg5 = timing->burst_regs[EMC_FBIO_CFG5_INDEX];
	common_tx = timing->burst_regs[EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX];

	cmd_pad |= EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON;

	if (clk < 1000000 / DVFS_FGCG_MID_SPEED_THRESHOLD) {
		ccfifo_writel(emc, common_tx & 0xa,
			      EMC_PMACRO_COMMON_PAD_TX_CTRL, 0);
		ccfifo_writel(emc, common_tx & 0xf,
			      EMC_PMACRO_COMMON_PAD_TX_CTRL,
			      (100000 / clk) + 1);
		ramp_up_wait += 100000;
	} else {
		ccfifo_writel(emc, common_tx | 0x8,
			      EMC_PMACRO_COMMON_PAD_TX_CTRL, 0);
	}

	if (clk < 1000000 / DVFS_FGCG_HIGH_SPEED_THRESHOLD) {
		if (clk < 1000000 / IOBRICK_DCC_THRESHOLD) {
			cmd_pad |=
				EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC |
				EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC;
			cmd_pad &=
				~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC |
				  EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC);
			ccfifo_writel(emc, cmd_pad,
				      EMC_PMACRO_CMD_PAD_TX_CTRL,
				      (100000 / clk) + 1);
			ramp_up_wait += 100000;

			dq_pad |=
				EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC |
				EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC;
			dq_pad &=
				~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC |
				  EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC);
			ccfifo_writel(emc, dq_pad,
				      EMC_PMACRO_DATA_PAD_TX_CTRL, 0);
			ccfifo_writel(emc, rfu1 & 0xfe40fe40,
				      EMC_PMACRO_BRICK_CTRL_RFU1, 0);
		} else {
			ccfifo_writel(emc, rfu1 & 0xfe40fe40,
				      EMC_PMACRO_BRICK_CTRL_RFU1,
				      (100000 / clk) + 1);
			ramp_up_wait += 100000;
		}

		ccfifo_writel(emc, rfu1 & 0xfeedfeed,
			      EMC_PMACRO_BRICK_CTRL_RFU1, (100000 / clk) + 1);
		ramp_up_wait += 100000;

		if (clk < 1000000 / IOBRICK_DCC_THRESHOLD) {
			cmd_pad |=
				EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC |
				EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC |
				EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC |
				EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC;
			ccfifo_writel(emc, cmd_pad,
				      EMC_PMACRO_CMD_PAD_TX_CTRL,
				      (100000 / clk) + 1);
			ramp_up_wait += 100000;

			dq_pad |=
				EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC |
				EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC |
				EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC |
				EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC;
			ccfifo_writel(emc, dq_pad,
				      EMC_PMACRO_DATA_PAD_TX_CTRL, 0);
			ccfifo_writel(emc, rfu1,
				      EMC_PMACRO_BRICK_CTRL_RFU1, 0);
		} else {
			ccfifo_writel(emc, rfu1,
				      EMC_PMACRO_BRICK_CTRL_RFU1,
				      (100000 / clk) + 1);
			ramp_up_wait += 100000;
		}

		ccfifo_writel(emc, cfg5 & ~EMC_FBIO_CFG5_CMD_TX_DIS,
			      EMC_FBIO_CFG5,
			      (100000 / clk) + 10);
		ramp_up_wait += 100000 + (10 * clk);
	} else if (clk < 1000000 / DVFS_FGCG_MID_SPEED_THRESHOLD) {
		ccfifo_writel(emc, rfu1 | 0x06000600,
			      EMC_PMACRO_BRICK_CTRL_RFU1, (100000 / clk) + 1);
		ccfifo_writel(emc, cfg5 & ~EMC_FBIO_CFG5_CMD_TX_DIS,
			      EMC_FBIO_CFG5, (100000 / clk) + 10);
		ramp_up_wait += 100000 + 10 * clk;
	} else {
		ccfifo_writel(emc, rfu1 | 0x00000600,
			      EMC_PMACRO_BRICK_CTRL_RFU1, 0);
		ccfifo_writel(emc, cfg5 & ~EMC_FBIO_CFG5_CMD_TX_DIS,
			      EMC_FBIO_CFG5, 12);
		ramp_up_wait += 12 * clk;
	}

	cmd_pad &= ~EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON;
	ccfifo_writel(emc, cmd_pad, EMC_PMACRO_CMD_PAD_TX_CTRL, 5);

	return ramp_up_wait;
}

u32 tegra210_emc_dvfs_power_ramp_down(struct tegra210_emc *emc, u32 clk,
				      bool flip_backward)
{
	u32 ramp_down_wait = 0, cmd_pad, dq_pad, rfu1, cfg5, common_tx;
	const struct tegra210_emc_timing *entry;
	u32 seq_wait;

	if (flip_backward)
		entry = emc->next;
	else
		entry = emc->last;

	cmd_pad = entry->burst_regs[EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX];
	dq_pad = entry->burst_regs[EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX];
	rfu1 = entry->burst_regs[EMC_PMACRO_BRICK_CTRL_RFU1_INDEX];
	cfg5 = entry->burst_regs[EMC_FBIO_CFG5_INDEX];
	common_tx = entry->burst_regs[EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX];

	cmd_pad |= EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON;

	ccfifo_writel(emc, cmd_pad, EMC_PMACRO_CMD_PAD_TX_CTRL, 0);
	ccfifo_writel(emc, cfg5 | EMC_FBIO_CFG5_CMD_TX_DIS,
		      EMC_FBIO_CFG5, 12);
	ramp_down_wait = 12 * clk;

	seq_wait = (100000 / clk) + 1;

	if (clk < (1000000 / DVFS_FGCG_HIGH_SPEED_THRESHOLD)) {
		if (clk < (1000000 / IOBRICK_DCC_THRESHOLD)) {
			cmd_pad &=
~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | 1367 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC); 1368 + cmd_pad |= 1369 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | 1370 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC; 1371 + ccfifo_writel(emc, cmd_pad, 1372 + EMC_PMACRO_CMD_PAD_TX_CTRL, seq_wait); 1373 + ramp_down_wait += 100000; 1374 + 1375 + dq_pad &= 1376 + ~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | 1377 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC); 1378 + dq_pad |= 1379 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | 1380 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC; 1381 + ccfifo_writel(emc, dq_pad, 1382 + EMC_PMACRO_DATA_PAD_TX_CTRL, 0); 1383 + ccfifo_writel(emc, rfu1 & ~0x01120112, 1384 + EMC_PMACRO_BRICK_CTRL_RFU1, 0); 1385 + } else { 1386 + ccfifo_writel(emc, rfu1 & ~0x01120112, 1387 + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait); 1388 + ramp_down_wait += 100000; 1389 + } 1390 + 1391 + ccfifo_writel(emc, rfu1 & ~0x01bf01bf, 1392 + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait); 1393 + ramp_down_wait += 100000; 1394 + 1395 + if (clk < (1000000 / IOBRICK_DCC_THRESHOLD)) { 1396 + cmd_pad &= 1397 + ~(EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC | 1398 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC | 1399 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC | 1400 + EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC); 1401 + ccfifo_writel(emc, cmd_pad, 1402 + EMC_PMACRO_CMD_PAD_TX_CTRL, seq_wait); 1403 + ramp_down_wait += 100000; 1404 + 1405 + dq_pad &= 1406 + ~(EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC | 1407 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC | 1408 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC | 1409 + EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC); 1410 + ccfifo_writel(emc, dq_pad, 1411 + EMC_PMACRO_DATA_PAD_TX_CTRL, 0); 1412 + ccfifo_writel(emc, rfu1 & ~0x07ff07ff, 1413 + EMC_PMACRO_BRICK_CTRL_RFU1, 0); 1414 + } else { 1415 + ccfifo_writel(emc, rfu1 & ~0x07ff07ff, 1416 + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait); 1417 + ramp_down_wait += 100000; 
1418 + } 1419 + } else { 1420 + ccfifo_writel(emc, rfu1 & ~0xffff07ff, 1421 + EMC_PMACRO_BRICK_CTRL_RFU1, seq_wait + 19); 1422 + ramp_down_wait += 100000 + (20 * clk); 1423 + } 1424 + 1425 + if (clk < (1000000 / DVFS_FGCG_MID_SPEED_THRESHOLD)) { 1426 + ramp_down_wait += 100000; 1427 + ccfifo_writel(emc, common_tx & ~0x5, 1428 + EMC_PMACRO_COMMON_PAD_TX_CTRL, seq_wait); 1429 + ramp_down_wait += 100000; 1430 + ccfifo_writel(emc, common_tx & ~0xf, 1431 + EMC_PMACRO_COMMON_PAD_TX_CTRL, seq_wait); 1432 + ramp_down_wait += 100000; 1433 + ccfifo_writel(emc, 0, 0, seq_wait); 1434 + ramp_down_wait += 100000; 1435 + } else { 1436 + ccfifo_writel(emc, common_tx & ~0xf, 1437 + EMC_PMACRO_COMMON_PAD_TX_CTRL, seq_wait); 1438 + } 1439 + 1440 + return ramp_down_wait; 1441 + } 1442 + 1443 + void tegra210_emc_reset_dram_clktree_values(struct tegra210_emc_timing *timing) 1444 + { 1445 + timing->current_dram_clktree[C0D0U0] = 1446 + timing->trained_dram_clktree[C0D0U0]; 1447 + timing->current_dram_clktree[C0D0U1] = 1448 + timing->trained_dram_clktree[C0D0U1]; 1449 + timing->current_dram_clktree[C1D0U0] = 1450 + timing->trained_dram_clktree[C1D0U0]; 1451 + timing->current_dram_clktree[C1D0U1] = 1452 + timing->trained_dram_clktree[C1D0U1]; 1453 + timing->current_dram_clktree[C1D1U0] = 1454 + timing->trained_dram_clktree[C1D1U0]; 1455 + timing->current_dram_clktree[C1D1U1] = 1456 + timing->trained_dram_clktree[C1D1U1]; 1457 + } 1458 + 1459 + static void update_dll_control(struct tegra210_emc *emc, u32 value, bool state) 1460 + { 1461 + unsigned int i; 1462 + 1463 + emc_writel(emc, value, EMC_CFG_DIG_DLL); 1464 + tegra210_emc_timing_update(emc); 1465 + 1466 + for (i = 0; i < emc->num_channels; i++) 1467 + tegra210_emc_wait_for_update(emc, i, EMC_CFG_DIG_DLL, 1468 + EMC_CFG_DIG_DLL_CFG_DLL_EN, 1469 + state); 1470 + } 1471 + 1472 + void tegra210_emc_dll_disable(struct tegra210_emc *emc) 1473 + { 1474 + u32 value; 1475 + 1476 + value = emc_readl(emc, EMC_CFG_DIG_DLL); 1477 + value &= 
~EMC_CFG_DIG_DLL_CFG_DLL_EN; 1478 + 1479 + update_dll_control(emc, value, false); 1480 + } 1481 + 1482 + void tegra210_emc_dll_enable(struct tegra210_emc *emc) 1483 + { 1484 + u32 value; 1485 + 1486 + value = emc_readl(emc, EMC_CFG_DIG_DLL); 1487 + value |= EMC_CFG_DIG_DLL_CFG_DLL_EN; 1488 + 1489 + update_dll_control(emc, value, true); 1490 + } 1491 + 1492 + void tegra210_emc_adjust_timing(struct tegra210_emc *emc, 1493 + struct tegra210_emc_timing *timing) 1494 + { 1495 + u32 dsr_cntrl = timing->burst_regs[EMC_DYN_SELF_REF_CONTROL_INDEX]; 1496 + u32 pre_ref = timing->burst_regs[EMC_PRE_REFRESH_REQ_CNT_INDEX]; 1497 + u32 ref = timing->burst_regs[EMC_REFRESH_INDEX]; 1498 + 1499 + switch (emc->refresh) { 1500 + case TEGRA210_EMC_REFRESH_NOMINAL: 1501 + case TEGRA210_EMC_REFRESH_THROTTLE: 1502 + break; 1503 + 1504 + case TEGRA210_EMC_REFRESH_2X: 1505 + ref = REFRESH_SPEEDUP(ref, 2); 1506 + pre_ref = REFRESH_SPEEDUP(pre_ref, 2); 1507 + dsr_cntrl = REFRESH_SPEEDUP(dsr_cntrl, 2); 1508 + break; 1509 + 1510 + case TEGRA210_EMC_REFRESH_4X: 1511 + ref = REFRESH_SPEEDUP(ref, 4); 1512 + pre_ref = REFRESH_SPEEDUP(pre_ref, 4); 1513 + dsr_cntrl = REFRESH_SPEEDUP(dsr_cntrl, 4); 1514 + break; 1515 + 1516 + default: 1517 + dev_warn(emc->dev, "failed to set refresh: %d\n", emc->refresh); 1518 + return; 1519 + } 1520 + 1521 + emc_writel(emc, ref, emc->offsets->burst[EMC_REFRESH_INDEX]); 1522 + emc_writel(emc, pre_ref, 1523 + emc->offsets->burst[EMC_PRE_REFRESH_REQ_CNT_INDEX]); 1524 + emc_writel(emc, dsr_cntrl, 1525 + emc->offsets->burst[EMC_DYN_SELF_REF_CONTROL_INDEX]); 1526 + } 1527 + 1528 + static int tegra210_emc_set_rate(struct device *dev, 1529 + const struct tegra210_clk_emc_config *config) 1530 + { 1531 + struct tegra210_emc *emc = dev_get_drvdata(dev); 1532 + struct tegra210_emc_timing *timing = NULL; 1533 + unsigned long rate = config->rate; 1534 + s64 last_change_delay; 1535 + unsigned long flags; 1536 + unsigned int i; 1537 + 1538 + if (rate == emc->last->rate * 1000UL) 
		return 0;

	for (i = 0; i < emc->num_timings; i++) {
		if (emc->timings[i].rate * 1000UL == rate) {
			timing = &emc->timings[i];
			break;
		}
	}

	if (!timing)
		return -EINVAL;

	if (rate > 204000000 && !timing->trained)
		return -EINVAL;

	emc->next = timing;
	last_change_delay = ktime_us_delta(ktime_get(), emc->clkchange_time);

	/* XXX use non-busy-looping sleep? */
	if ((last_change_delay >= 0) &&
	    (last_change_delay < emc->clkchange_delay))
		udelay(emc->clkchange_delay - (int)last_change_delay);

	spin_lock_irqsave(&emc->lock, flags);
	tegra210_emc_set_clock(emc, config->value);
	emc->clkchange_time = ktime_get();
	emc->last = timing;
	spin_unlock_irqrestore(&emc->lock, flags);

	return 0;
}

/*
 * debugfs interface
 *
 * The memory controller driver exposes some files in debugfs that can be used
 * to control the EMC frequency. The top-level directory can be found here:
 *
 *   /sys/kernel/debug/emc
 *
 * It contains the following files:
 *
 * - available_rates: This file contains a list of valid, space-separated
 *   EMC frequencies.
 *
 * - min_rate: Writing a value to this file sets the given frequency as the
 *   floor of the permitted range. If this is higher than the currently
 *   configured EMC frequency, this will cause the frequency to be
 *   increased so that it stays within the valid range.
 *
 * - max_rate: Similarly to the min_rate file, writing a value to this file
 *   sets the given frequency as the ceiling of the permitted range. If
 *   the value is lower than the currently configured EMC frequency, this
 *   will cause the frequency to be decreased so that it stays within the
 *   valid range.
 */

static bool tegra210_emc_validate_rate(struct tegra210_emc *emc,
				       unsigned long rate)
{
	unsigned int i;

	for (i = 0; i < emc->num_timings; i++)
		if (rate == emc->timings[i].rate * 1000UL)
			return true;

	return false;
}

static int tegra210_emc_debug_available_rates_show(struct seq_file *s,
						   void *data)
{
	struct tegra210_emc *emc = s->private;
	const char *prefix = "";
	unsigned int i;

	for (i = 0; i < emc->num_timings; i++) {
		seq_printf(s, "%s%u", prefix, emc->timings[i].rate * 1000);
		prefix = " ";
	}

	seq_puts(s, "\n");

	return 0;
}

static int tegra210_emc_debug_available_rates_open(struct inode *inode,
						   struct file *file)
{
	return single_open(file, tegra210_emc_debug_available_rates_show,
			   inode->i_private);
}

static const struct file_operations tegra210_emc_debug_available_rates_fops = {
	.open = tegra210_emc_debug_available_rates_open,
	.read = seq_read,
	.llseek = seq_lseek,
	.release = single_release,
};

static int tegra210_emc_debug_min_rate_get(void *data, u64 *rate)
{
	struct tegra210_emc *emc = data;

	*rate = emc->debugfs.min_rate;

	return 0;
}

static int tegra210_emc_debug_min_rate_set(void *data, u64 rate)
{
	struct tegra210_emc *emc = data;
	int err;

	if (!tegra210_emc_validate_rate(emc, rate))
		return -EINVAL;

	err = clk_set_min_rate(emc->clk, rate);
	if (err < 0)
		return err;

	emc->debugfs.min_rate = rate;

	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(tegra210_emc_debug_min_rate_fops,
			tegra210_emc_debug_min_rate_get,
			tegra210_emc_debug_min_rate_set, "%llu\n");

static int tegra210_emc_debug_max_rate_get(void *data, u64 *rate)
{
	struct tegra210_emc *emc = data;

	*rate = emc->debugfs.max_rate;

	return 0;
}

static int tegra210_emc_debug_max_rate_set(void *data, u64 rate)
{
	struct tegra210_emc *emc = data;
	int err;

	if (!tegra210_emc_validate_rate(emc, rate))
		return -EINVAL;

	err = clk_set_max_rate(emc->clk, rate);
	if (err < 0)
		return err;

	emc->debugfs.max_rate = rate;

	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(tegra210_emc_debug_max_rate_fops,
			tegra210_emc_debug_max_rate_get,
			tegra210_emc_debug_max_rate_set, "%llu\n");

static int tegra210_emc_debug_temperature_get(void *data, u64 *temperature)
{
	struct tegra210_emc *emc = data;
	unsigned int value;

	if (!emc->debugfs.temperature)
		value = tegra210_emc_get_temperature(emc);
	else
		value = emc->debugfs.temperature;

	*temperature = value;

	return 0;
}

static int tegra210_emc_debug_temperature_set(void *data, u64 temperature)
{
	struct tegra210_emc *emc = data;

	if (temperature > 7)
		return -EINVAL;

	emc->debugfs.temperature = temperature;

	return 0;
}

DEFINE_SIMPLE_ATTRIBUTE(tegra210_emc_debug_temperature_fops,
			tegra210_emc_debug_temperature_get,
			tegra210_emc_debug_temperature_set, "%llu\n");

static void tegra210_emc_debugfs_init(struct tegra210_emc *emc)
{
	struct device *dev = emc->dev;
	unsigned int i;
	int err;

	emc->debugfs.min_rate = ULONG_MAX;
	emc->debugfs.max_rate = 0;

	for (i = 0; i < emc->num_timings; i++) {
		if (emc->timings[i].rate * 1000UL < emc->debugfs.min_rate)
			emc->debugfs.min_rate = emc->timings[i].rate * 1000UL;

		if (emc->timings[i].rate * 1000UL > emc->debugfs.max_rate)
			emc->debugfs.max_rate = emc->timings[i].rate * 1000UL;
	}

	if (!emc->num_timings) {
		emc->debugfs.min_rate = clk_get_rate(emc->clk);
		emc->debugfs.max_rate = emc->debugfs.min_rate;
	}

	err = clk_set_rate_range(emc->clk, emc->debugfs.min_rate,
				 emc->debugfs.max_rate);
	if (err < 0) {
		dev_err(dev, "failed to set rate range [%lu-%lu] for %pC\n",
			emc->debugfs.min_rate, emc->debugfs.max_rate,
			emc->clk);
		return;
	}

	emc->debugfs.root = debugfs_create_dir("emc", NULL);
	if (!emc->debugfs.root) {
		dev_err(dev, "failed to create debugfs directory\n");
		return;
	}

	debugfs_create_file("available_rates", 0444, emc->debugfs.root, emc,
			    &tegra210_emc_debug_available_rates_fops);
	debugfs_create_file("min_rate", 0644, emc->debugfs.root, emc,
			    &tegra210_emc_debug_min_rate_fops);
	debugfs_create_file("max_rate", 0644, emc->debugfs.root, emc,
			    &tegra210_emc_debug_max_rate_fops);
	debugfs_create_file("temperature", 0644, emc->debugfs.root, emc,
			    &tegra210_emc_debug_temperature_fops);
}

static void tegra210_emc_detect(struct tegra210_emc *emc)
{
	u32 value;

	/* probe the number of connected DRAM devices */
	value = mc_readl(emc->mc, MC_EMEM_ADR_CFG);

	if (value & MC_EMEM_ADR_CFG_EMEM_NUMDEV)
		emc->num_devices = 2;
	else
		emc->num_devices = 1;

	/* probe the type of DRAM */
	value = emc_readl(emc, EMC_FBIO_CFG5);
	emc->dram_type = value & 0x3;

	/* probe the number of channels */
	value = emc_readl(emc, EMC_FBIO_CFG7);

	if ((value & EMC_FBIO_CFG7_CH1_ENABLE) &&
	    (value & EMC_FBIO_CFG7_CH0_ENABLE))
		emc->num_channels = 2;
	else
		emc->num_channels = 1;
}

static int tegra210_emc_validate_timings(struct tegra210_emc *emc,
					 struct tegra210_emc_timing *timings,
					 unsigned int num_timings)
{
	unsigned int i;

	for (i = 0; i < num_timings; i++) {
		u32 min_volt = timings[i].min_volt;
		u32 rate = timings[i].rate;

		if (!rate)
			return -EINVAL;

		if ((i > 0) && ((rate <= timings[i - 1].rate) ||
				(min_volt < timings[i - 1].min_volt)))
			return -EINVAL;

		if (timings[i].revision != timings[0].revision)
			continue;
	}

	return 0;
}

static int tegra210_emc_probe(struct platform_device *pdev)
{
	struct thermal_cooling_device *cd;
	unsigned long current_rate;
	struct platform_device *mc;
	struct tegra210_emc *emc;
	struct device_node *np;
	unsigned int i;
	int err;

	emc = devm_kzalloc(&pdev->dev, sizeof(*emc), GFP_KERNEL);
	if (!emc)
		return -ENOMEM;

	emc->clk = devm_clk_get(&pdev->dev, "emc");
	if (IS_ERR(emc->clk))
		return PTR_ERR(emc->clk);

	platform_set_drvdata(pdev, emc);
	spin_lock_init(&emc->lock);
	emc->dev = &pdev->dev;

	np = of_parse_phandle(pdev->dev.of_node, "nvidia,memory-controller", 0);
	if (!np) {
		dev_err(&pdev->dev, "could not get memory controller\n");
		return -ENOENT;
	}

	mc = of_find_device_by_node(np);
	of_node_put(np);
	if (!mc)
		return -ENOENT;

	emc->mc = platform_get_drvdata(mc);
	if (!emc->mc) {
		put_device(&mc->dev);
		return -EPROBE_DEFER;
	}

	emc->regs = devm_platform_ioremap_resource(pdev, 0);
	if (IS_ERR(emc->regs)) {
		err = PTR_ERR(emc->regs);
		goto put_mc;
	}

	for (i = 0; i < 2; i++) {
		emc->channel[i] = devm_platform_ioremap_resource(pdev, 1 + i);
		if (IS_ERR(emc->channel[i])) {
			err = PTR_ERR(emc->channel[i]);
			goto put_mc;
		}
	}

	tegra210_emc_detect(emc);
	np = pdev->dev.of_node;

	/* attach to the nominal and (optional) derated tables */
	err = of_reserved_mem_device_init_by_name(emc->dev, np, "nominal");
	if (err < 0) {
		dev_err(emc->dev, "failed to get nominal EMC table: %d\n", err);
		goto put_mc;
	}

	err = of_reserved_mem_device_init_by_name(emc->dev, np, "derated");
	if (err < 0 && err != -ENODEV) {
		dev_err(emc->dev, "failed to get derated EMC table: %d\n", err);
		goto release;
	}

	/* validate the tables */
	if (emc->nominal) {
		err = tegra210_emc_validate_timings(emc, emc->nominal,
						    emc->num_timings);
		if (err < 0)
			goto release;
	}

	if (emc->derated) {
		err = tegra210_emc_validate_timings(emc, emc->derated,
						    emc->num_timings);
		if (err < 0)
			goto release;
	}

	/* default to the nominal table */
	emc->timings = emc->nominal;

	/* pick the current timing based on the current EMC clock rate */
	current_rate = clk_get_rate(emc->clk) / 1000;

	for (i = 0; i < emc->num_timings; i++) {
		if (emc->timings[i].rate == current_rate) {
			emc->last = &emc->timings[i];
			break;
		}
	}

	if (i == emc->num_timings) {
		dev_err(emc->dev, "no EMC table entry found for %lu kHz\n",
			current_rate);
		err = -ENOENT;
		goto release;
	}

	/* pick a compatible clock change sequence for the EMC table */
	for (i = 0; i < ARRAY_SIZE(tegra210_emc_sequences); i++) {
		const struct tegra210_emc_sequence *sequence =
				tegra210_emc_sequences[i];

		if (emc->timings[0].revision == sequence->revision) {
			emc->sequence = sequence;
			break;
		}
	}

	if (!emc->sequence) {
		dev_err(&pdev->dev, "sequence %u not supported\n",
			emc->timings[0].revision);
		err = -ENOTSUPP;
		goto release;
	}

	emc->offsets = &tegra210_emc_table_register_offsets;
	emc->refresh = TEGRA210_EMC_REFRESH_NOMINAL;

	emc->provider.owner = THIS_MODULE;
	emc->provider.dev = &pdev->dev;
	emc->provider.set_rate = tegra210_emc_set_rate;

	emc->provider.configs = devm_kcalloc(&pdev->dev, emc->num_timings,
					     sizeof(*emc->provider.configs),
					     GFP_KERNEL);
	if (!emc->provider.configs) {
		err = -ENOMEM;
		goto release;
	}

	emc->provider.num_configs = emc->num_timings;

	for (i = 0; i < emc->provider.num_configs; i++) {
		struct tegra210_emc_timing *timing = &emc->timings[i];
		struct tegra210_clk_emc_config *config =
				&emc->provider.configs[i];
		u32 value;

		config->rate = timing->rate * 1000UL;
		config->value = timing->clk_src_emc;

		value = timing->burst_mc_regs[MC_EMEM_ARB_MISC0_INDEX];

		if ((value & MC_EMEM_ARB_MISC0_EMC_SAME_FREQ) == 0)
			config->same_freq = false;
		else
			config->same_freq = true;
	}

	err = tegra210_clk_emc_attach(emc->clk, &emc->provider);
	if (err < 0) {
		dev_err(&pdev->dev, "failed to attach to EMC clock: %d\n", err);
		goto release;
	}

	emc->clkchange_delay = 100;
	emc->training_interval = 100;
	dev_set_drvdata(emc->dev, emc);

	timer_setup(&emc->refresh_timer, tegra210_emc_poll_refresh,
		    TIMER_DEFERRABLE);
	atomic_set(&emc->refresh_poll, 0);
	emc->refresh_poll_interval = 1000;

	timer_setup(&emc->training, tegra210_emc_train, 0);

	tegra210_emc_debugfs_init(emc);

	cd = devm_thermal_of_cooling_device_register(emc->dev, np, "emc", emc,
						     &tegra210_emc_cd_ops);
	if (IS_ERR(cd)) {
		err = PTR_ERR(cd);
		dev_err(emc->dev, "failed to register cooling device: %d\n",
			err);
		goto detach;
	}

	return 0;

detach:
	debugfs_remove_recursive(emc->debugfs.root);
	tegra210_clk_emc_detach(emc->clk);
release:
	of_reserved_mem_device_release(emc->dev);
put_mc:
	put_device(emc->mc->dev);
	return err;
}

static int tegra210_emc_remove(struct platform_device *pdev)
{
	struct tegra210_emc *emc = platform_get_drvdata(pdev);

	debugfs_remove_recursive(emc->debugfs.root);
	tegra210_clk_emc_detach(emc->clk);
	of_reserved_mem_device_release(emc->dev);
	put_device(emc->mc->dev);

	return 0;
}

static int __maybe_unused tegra210_emc_suspend(struct device *dev)
{
	struct tegra210_emc *emc = dev_get_drvdata(dev);
	int err;

	err = clk_rate_exclusive_get(emc->clk);
	if (err < 0) {
		dev_err(emc->dev, "failed to acquire clock: %d\n", err);
		return err;
	}

	emc->resume_rate = clk_get_rate(emc->clk);

	clk_set_rate(emc->clk, 204000000);
	tegra210_clk_emc_detach(emc->clk);

	dev_dbg(dev, "suspending at %lu Hz\n", clk_get_rate(emc->clk));

	return 0;
}

static int __maybe_unused tegra210_emc_resume(struct device *dev)
{
	struct tegra210_emc *emc = dev_get_drvdata(dev);
	int err;

	err = tegra210_clk_emc_attach(emc->clk, &emc->provider);
	if (err < 0) {
		dev_err(dev, "failed to attach to EMC clock: %d\n", err);
		return err;
	}

	clk_set_rate(emc->clk, emc->resume_rate);
	clk_rate_exclusive_put(emc->clk);

	dev_dbg(dev, "resuming at %lu Hz\n", clk_get_rate(emc->clk));

	return 0;
}

static const struct dev_pm_ops tegra210_emc_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(tegra210_emc_suspend, tegra210_emc_resume)
};

static const struct of_device_id tegra210_emc_of_match[] = {
	{ .compatible = "nvidia,tegra210-emc", },
	{ },
};
MODULE_DEVICE_TABLE(of, tegra210_emc_of_match);

static struct platform_driver tegra210_emc_driver = {
	.driver = {
		.name = "tegra210-emc",
		.of_match_table = tegra210_emc_of_match,
		.pm = &tegra210_emc_pm_ops,
	},
	.probe = tegra210_emc_probe,
	.remove = tegra210_emc_remove,
};

module_platform_driver(tegra210_emc_driver);

MODULE_AUTHOR("Thierry Reding <treding@nvidia.com>");
MODULE_AUTHOR("Joseph Lo <josephl@nvidia.com>");
MODULE_DESCRIPTION("NVIDIA Tegra210 EMC driver");
MODULE_LICENSE("GPL v2");
+90
drivers/memory/tegra/tegra210-emc-table.c
// SPDX-License-Identifier: GPL-2.0
/*
 * Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
 */

#include <linux/of_reserved_mem.h>

#include "tegra210-emc.h"

#define TEGRA_EMC_MAX_FREQS 16

static int tegra210_emc_table_device_init(struct reserved_mem *rmem,
					  struct device *dev)
{
	struct tegra210_emc *emc = dev_get_drvdata(dev);
	struct tegra210_emc_timing *timings;
	unsigned int i, count = 0;

	timings = memremap(rmem->base, rmem->size, MEMREMAP_WB);
	if (!timings) {
		dev_err(dev, "failed to map EMC table\n");
		return -ENOMEM;
	}

	count = 0;

	for (i = 0; i < TEGRA_EMC_MAX_FREQS; i++) {
		if (timings[i].revision == 0)
			break;

		count++;
	}

	/* only the nominal and derated tables are expected */
	if (emc->derated) {
		dev_warn(dev, "excess EMC table '%s'\n", rmem->name);
		goto out;
	}

	if (emc->nominal) {
		if (count != emc->num_timings) {
			dev_warn(dev, "%u derated vs. %u nominal entries\n",
				 count, emc->num_timings);
			memunmap(timings);
			return -EINVAL;
		}

		emc->derated = timings;
	} else {
		emc->num_timings = count;
		emc->nominal = timings;
	}

out:
	/* keep track of which table this is */
	rmem->priv = timings;

	return 0;
}

static void tegra210_emc_table_device_release(struct reserved_mem *rmem,
					      struct device *dev)
{
	struct tegra210_emc_timing *timings = rmem->priv;
	struct tegra210_emc *emc = dev_get_drvdata(dev);

	if ((emc->nominal && timings != emc->nominal) &&
	    (emc->derated && timings != emc->derated))
		dev_warn(dev, "trying to release unassigned EMC table '%s'\n",
			 rmem->name);

	memunmap(timings);
}

static const struct reserved_mem_ops tegra210_emc_table_ops = {
	.device_init = tegra210_emc_table_device_init,
	.device_release = tegra210_emc_table_device_release,
};

static int tegra210_emc_table_init(struct reserved_mem *rmem)
{
	pr_debug("Tegra210 EMC table at %pa, size %lu bytes\n", &rmem->base,
		 (unsigned long)rmem->size);

	rmem->ops = &tegra210_emc_table_ops;

	return 0;
}
RESERVEDMEM_OF_DECLARE(tegra210_emc_table, "nvidia,tegra210-emc-table",
		       tegra210_emc_table_init);
+1016
drivers/memory/tegra/tegra210-emc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2015-2020, NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #ifndef TEGRA210_EMC_H 7 + #define TEGRA210_EMC_H 8 + 9 + #include <linux/clk.h> 10 + #include <linux/clk/tegra.h> 11 + #include <linux/io.h> 12 + #include <linux/platform_device.h> 13 + 14 + #define DVFS_FGCG_HIGH_SPEED_THRESHOLD 1000 15 + #define IOBRICK_DCC_THRESHOLD 2400 16 + #define DVFS_FGCG_MID_SPEED_THRESHOLD 600 17 + 18 + #define EMC_STATUS_UPDATE_TIMEOUT 1000 19 + 20 + /* register definitions */ 21 + #define EMC_INTSTATUS 0x0 22 + #define EMC_INTSTATUS_CLKCHANGE_COMPLETE BIT(4) 23 + #define EMC_DBG 0x8 24 + #define EMC_DBG_WRITE_MUX_ACTIVE BIT(1) 25 + #define EMC_DBG_WRITE_ACTIVE_ONLY BIT(30) 26 + #define EMC_CFG 0xc 27 + #define EMC_CFG_DRAM_CLKSTOP_PD BIT(31) 28 + #define EMC_CFG_DRAM_CLKSTOP_SR BIT(30) 29 + #define EMC_CFG_DRAM_ACPD BIT(29) 30 + #define EMC_CFG_DYN_SELF_REF BIT(28) 31 + #define EMC_PIN 0x24 32 + #define EMC_PIN_PIN_CKE BIT(0) 33 + #define EMC_PIN_PIN_CKEB BIT(1) 34 + #define EMC_PIN_PIN_CKE_PER_DEV BIT(2) 35 + #define EMC_TIMING_CONTROL 0x28 36 + #define EMC_RC 0x2c 37 + #define EMC_RFC 0x30 38 + #define EMC_RAS 0x34 39 + #define EMC_RP 0x38 40 + #define EMC_R2W 0x3c 41 + #define EMC_W2R 0x40 42 + #define EMC_R2P 0x44 43 + #define EMC_W2P 0x48 44 + #define EMC_RD_RCD 0x4c 45 + #define EMC_WR_RCD 0x50 46 + #define EMC_RRD 0x54 47 + #define EMC_REXT 0x58 48 + #define EMC_WDV 0x5c 49 + #define EMC_QUSE 0x60 50 + #define EMC_QRST 0x64 51 + #define EMC_QSAFE 0x68 52 + #define EMC_RDV 0x6c 53 + #define EMC_REFRESH 0x70 54 + #define EMC_BURST_REFRESH_NUM 0x74 55 + #define EMC_PDEX2WR 0x78 56 + #define EMC_PDEX2RD 0x7c 57 + #define EMC_PCHG2PDEN 0x80 58 + #define EMC_ACT2PDEN 0x84 59 + #define EMC_AR2PDEN 0x88 60 + #define EMC_RW2PDEN 0x8c 61 + #define EMC_TXSR 0x90 62 + #define EMC_TCKE 0x94 63 + #define EMC_TFAW 0x98 64 + #define EMC_TRPAB 0x9c 65 + #define EMC_TCLKSTABLE 0xa0 66 + #define EMC_TCLKSTOP 
0xa4 67 + #define EMC_TREFBW 0xa8 68 + #define EMC_TPPD 0xac 69 + #define EMC_ODT_WRITE 0xb0 70 + #define EMC_PDEX2MRR 0xb4 71 + #define EMC_WEXT 0xb8 72 + #define EMC_RFC_SLR 0xc0 73 + #define EMC_MRS_WAIT_CNT2 0xc4 74 + #define EMC_MRS_WAIT_CNT2_MRS_EXT2_WAIT_CNT_SHIFT 16 75 + #define EMC_MRS_WAIT_CNT2_MRS_EXT1_WAIT_CNT_SHIFT 0 76 + #define EMC_MRS_WAIT_CNT 0xc8 77 + #define EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT 0 78 + #define EMC_MRS_WAIT_CNT_SHORT_WAIT_MASK \ 79 + (0x3FF << EMC_MRS_WAIT_CNT_SHORT_WAIT_SHIFT) 80 + 81 + #define EMC_MRS 0xcc 82 + #define EMC_EMRS 0xd0 83 + #define EMC_EMRS_USE_EMRS_LONG_CNT BIT(26) 84 + #define EMC_REF 0xd4 85 + #define EMC_REF_REF_CMD BIT(0) 86 + #define EMC_SELF_REF 0xe0 87 + #define EMC_MRW 0xe8 88 + #define EMC_MRW_MRW_OP_SHIFT 0 89 + #define EMC_MRW_MRW_OP_MASK \ 90 + (0xff << EMC_MRW_MRW_OP_SHIFT) 91 + #define EMC_MRW_MRW_MA_SHIFT 16 92 + #define EMC_MRW_USE_MRW_EXT_CNT 27 93 + #define EMC_MRW_MRW_DEV_SELECTN_SHIFT 30 94 + 95 + #define EMC_MRR 0xec 96 + #define EMC_MRR_DEV_SEL_SHIFT 30 97 + #define EMC_MRR_DEV_SEL_MASK 0x3 98 + #define EMC_MRR_MA_SHIFT 16 99 + #define EMC_MRR_MA_MASK 0xff 100 + #define EMC_MRR_DATA_SHIFT 0 101 + #define EMC_MRR_DATA_MASK 0xffff 102 + 103 + #define EMC_FBIO_SPARE 0x100 104 + #define EMC_FBIO_CFG5 0x104 105 + #define EMC_FBIO_CFG5_DRAM_TYPE_SHIFT 0 106 + #define EMC_FBIO_CFG5_DRAM_TYPE_MASK \ 107 + (0x3 << EMC_FBIO_CFG5_DRAM_TYPE_SHIFT) 108 + #define EMC_FBIO_CFG5_CMD_TX_DIS BIT(8) 109 + 110 + #define EMC_PDEX2CKE 0x118 111 + #define EMC_CKE2PDEN 0x11c 112 + #define EMC_MPC 0x128 113 + #define EMC_EMRS2 0x12c 114 + #define EMC_EMRS2_USE_EMRS2_LONG_CNT BIT(26) 115 + #define EMC_MRW2 0x134 116 + #define EMC_MRW3 0x138 117 + #define EMC_MRW4 0x13c 118 + #define EMC_R2R 0x144 119 + #define EMC_EINPUT 0x14c 120 + #define EMC_EINPUT_DURATION 0x150 121 + #define EMC_PUTERM_EXTRA 0x154 122 + #define EMC_TCKESR 0x158 123 + #define EMC_TPD 0x15c 124 + #define EMC_AUTO_CAL_CONFIG 0x2a4 125 + #define 
EMC_AUTO_CAL_CONFIG_AUTO_CAL_COMPUTE_START BIT(0) 126 + #define EMC_AUTO_CAL_CONFIG_AUTO_CAL_MEASURE_STALL BIT(9) 127 + #define EMC_AUTO_CAL_CONFIG_AUTO_CAL_UPDATE_STALL BIT(10) 128 + #define EMC_AUTO_CAL_CONFIG_AUTO_CAL_ENABLE BIT(29) 129 + #define EMC_AUTO_CAL_CONFIG_AUTO_CAL_START BIT(31) 130 + #define EMC_EMC_STATUS 0x2b4 131 + #define EMC_EMC_STATUS_MRR_DIVLD BIT(20) 132 + #define EMC_EMC_STATUS_TIMING_UPDATE_STALLED BIT(23) 133 + #define EMC_EMC_STATUS_DRAM_IN_POWERDOWN_SHIFT 4 134 + #define EMC_EMC_STATUS_DRAM_IN_POWERDOWN_MASK \ 135 + (0x3 << EMC_EMC_STATUS_DRAM_IN_POWERDOWN_SHIFT) 136 + #define EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_SHIFT 8 137 + #define EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_MASK \ 138 + (0x3 << EMC_EMC_STATUS_DRAM_IN_SELF_REFRESH_SHIFT) 139 + 140 + #define EMC_CFG_2 0x2b8 141 + #define EMC_CFG_DIG_DLL 0x2bc 142 + #define EMC_CFG_DIG_DLL_CFG_DLL_EN BIT(0) 143 + #define EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_UNTIL_LOCK BIT(1) 144 + #define EMC_CFG_DIG_DLL_CFG_DLL_STALL_ALL_TRAFFIC BIT(3) 145 + #define EMC_CFG_DIG_DLL_CFG_DLL_STALL_RW_UNTIL_LOCK BIT(4) 146 + #define EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT 6 147 + #define EMC_CFG_DIG_DLL_CFG_DLL_MODE_MASK \ 148 + (0x3 << EMC_CFG_DIG_DLL_CFG_DLL_MODE_SHIFT) 149 + #define EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_SHIFT 8 150 + #define EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_MASK \ 151 + (0x7 << EMC_CFG_DIG_DLL_CFG_DLL_LOCK_LIMIT_SHIFT) 152 + 153 + #define EMC_CFG_DIG_DLL_PERIOD 0x2c0 154 + #define EMC_DIG_DLL_STATUS 0x2c4 155 + #define EMC_DIG_DLL_STATUS_DLL_LOCK BIT(15) 156 + #define EMC_DIG_DLL_STATUS_DLL_PRIV_UPDATED BIT(17) 157 + #define EMC_DIG_DLL_STATUS_DLL_OUT_SHIFT 0 158 + #define EMC_DIG_DLL_STATUS_DLL_OUT_MASK \ 159 + (0x7ff << EMC_DIG_DLL_STATUS_DLL_OUT_SHIFT) 160 + 161 + #define EMC_CFG_DIG_DLL_1 0x2c8 162 + #define EMC_RDV_MASK 0x2cc 163 + #define EMC_WDV_MASK 0x2d0 164 + #define EMC_RDV_EARLY_MASK 0x2d4 165 + #define EMC_RDV_EARLY 0x2d8 166 + #define EMC_AUTO_CAL_CONFIG8 0x2dc 167 + #define 
EMC_ZCAL_INTERVAL 0x2e0 168 + #define EMC_ZCAL_WAIT_CNT 0x2e4 169 + #define EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_MASK 0x7ff 170 + #define EMC_ZCAL_WAIT_CNT_ZCAL_WAIT_CNT_SHIFT 0 171 + 172 + #define EMC_ZQ_CAL 0x2ec 173 + #define EMC_ZQ_CAL_DEV_SEL_SHIFT 30 174 + #define EMC_ZQ_CAL_LONG BIT(4) 175 + #define EMC_ZQ_CAL_ZQ_LATCH_CMD BIT(1) 176 + #define EMC_ZQ_CAL_ZQ_CAL_CMD BIT(0) 177 + #define EMC_FDPD_CTRL_DQ 0x310 178 + #define EMC_FDPD_CTRL_CMD 0x314 179 + #define EMC_PMACRO_CMD_BRICK_CTRL_FDPD 0x318 180 + #define EMC_PMACRO_DATA_BRICK_CTRL_FDPD 0x31c 181 + #define EMC_PMACRO_BRICK_CTRL_RFU1 0x330 182 + #define EMC_PMACRO_BRICK_CTRL_RFU2 0x334 183 + #define EMC_TR_TIMING_0 0x3b4 184 + #define EMC_TR_CTRL_1 0x3bc 185 + #define EMC_TR_RDV 0x3c4 186 + #define EMC_STALL_THEN_EXE_AFTER_CLKCHANGE 0x3cc 187 + #define EMC_SEL_DPD_CTRL 0x3d8 188 + #define EMC_SEL_DPD_CTRL_DATA_SEL_DPD_EN BIT(8) 189 + #define EMC_SEL_DPD_CTRL_ODT_SEL_DPD_EN BIT(5) 190 + #define EMC_SEL_DPD_CTRL_RESET_SEL_DPD_EN BIT(4) 191 + #define EMC_SEL_DPD_CTRL_CA_SEL_DPD_EN BIT(3) 192 + #define EMC_SEL_DPD_CTRL_CLK_SEL_DPD_EN BIT(2) 193 + #define EMC_PRE_REFRESH_REQ_CNT 0x3dc 194 + #define EMC_DYN_SELF_REF_CONTROL 0x3e0 195 + #define EMC_TXSRDLL 0x3e4 196 + #define EMC_CCFIFO_ADDR 0x3e8 197 + #define EMC_CCFIFO_ADDR_STALL_BY_1 (1 << 31) 198 + #define EMC_CCFIFO_ADDR_STALL(x) (((x) & 0x7fff) << 16) 199 + #define EMC_CCFIFO_ADDR_OFFSET(x) ((x) & 0xffff) 200 + #define EMC_CCFIFO_DATA 0x3ec 201 + #define EMC_TR_QPOP 0x3f4 202 + #define EMC_TR_RDV_MASK 0x3f8 203 + #define EMC_TR_QSAFE 0x3fc 204 + #define EMC_TR_QRST 0x400 205 + #define EMC_ISSUE_QRST 0x428 206 + #define EMC_AUTO_CAL_CONFIG2 0x458 207 + #define EMC_AUTO_CAL_CONFIG3 0x45c 208 + #define EMC_TR_DVFS 0x460 209 + #define EMC_AUTO_CAL_CHANNEL 0x464 210 + #define EMC_IBDLY 0x468 211 + #define EMC_OBDLY 0x46c 212 + #define EMC_TXDSRVTTGEN 0x480 213 + #define EMC_WE_DURATION 0x48c 214 + #define EMC_WS_DURATION 0x490 215 + #define EMC_WEV 0x494 216 + 
#define EMC_WSV 0x498
#define EMC_CFG_3 0x49c
#define EMC_MRW6 0x4a4
#define EMC_MRW7 0x4a8
#define EMC_MRW8 0x4ac
#define EMC_MRW9 0x4b0
#define EMC_MRW10 0x4b4
#define EMC_MRW11 0x4b8
#define EMC_MRW12 0x4bc
#define EMC_MRW13 0x4c0
#define EMC_MRW14 0x4c4
#define EMC_MRW15 0x4d0
#define EMC_CFG_SYNC 0x4d4
#define EMC_FDPD_CTRL_CMD_NO_RAMP 0x4d8
#define EMC_FDPD_CTRL_CMD_NO_RAMP_CMD_DPD_NO_RAMP_ENABLE BIT(0)
#define EMC_WDV_CHK 0x4e0
#define EMC_CFG_PIPE_2 0x554
#define EMC_CFG_PIPE_CLK 0x558
#define EMC_CFG_PIPE_CLK_CLK_ALWAYS_ON BIT(0)
#define EMC_CFG_PIPE_1 0x55c
#define EMC_CFG_PIPE 0x560
#define EMC_QPOP 0x564
#define EMC_QUSE_WIDTH 0x568
#define EMC_PUTERM_WIDTH 0x56c
#define EMC_AUTO_CAL_CONFIG7 0x574
#define EMC_REFCTRL2 0x580
#define EMC_FBIO_CFG7 0x584
#define EMC_FBIO_CFG7_CH0_ENABLE BIT(1)
#define EMC_FBIO_CFG7_CH1_ENABLE BIT(2)
#define EMC_DATA_BRLSHFT_0 0x588
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_SHIFT 21
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE7_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_SHIFT 18
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE6_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_SHIFT 15
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE5_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_SHIFT 12
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE4_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_SHIFT 9
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE3_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_SHIFT 6
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE2_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_SHIFT 3
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE1_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_SHIFT 0
#define EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_0_RANK0_BYTE0_DATA_BRLSHFT_SHIFT)

#define EMC_DATA_BRLSHFT_1 0x58c
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_SHIFT 21
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE7_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_SHIFT 18
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE6_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_SHIFT 15
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE5_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_SHIFT 12
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE4_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_SHIFT 9
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE3_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_SHIFT 6
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE2_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_SHIFT 3
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE1_DATA_BRLSHFT_SHIFT)
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_SHIFT 0
#define EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_MASK \
	(0x7 << EMC_DATA_BRLSHFT_1_RANK1_BYTE0_DATA_BRLSHFT_SHIFT)

#define EMC_RFCPB 0x590
#define EMC_DQS_BRLSHFT_0 0x594
#define EMC_DQS_BRLSHFT_1 0x598
#define EMC_CMD_BRLSHFT_0 0x59c
#define EMC_CMD_BRLSHFT_1 0x5a0
#define EMC_CMD_BRLSHFT_2 0x5a4
#define EMC_CMD_BRLSHFT_3 0x5a8
#define EMC_QUSE_BRLSHFT_0 0x5ac
#define EMC_AUTO_CAL_CONFIG4 0x5b0
#define EMC_AUTO_CAL_CONFIG5 0x5b4
#define EMC_QUSE_BRLSHFT_1 0x5b8
#define EMC_QUSE_BRLSHFT_2 0x5bc
#define EMC_CCDMW 0x5c0
#define EMC_QUSE_BRLSHFT_3 0x5c4
#define EMC_AUTO_CAL_CONFIG6 0x5cc
#define EMC_DLL_CFG_0 0x5e4
#define EMC_DLL_CFG_1 0x5e8
#define EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_SHIFT 10
#define EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_MASK \
	(0x7ff << EMC_DLL_CFG_1_DDLLCAL_CTRL_START_TRIM_SHIFT)

#define EMC_CONFIG_SAMPLE_DELAY 0x5f0
#define EMC_CFG_UPDATE 0x5f4
#define EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_SHIFT 9
#define EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_MASK \
	(0x3 << EMC_CFG_UPDATE_UPDATE_DLL_IN_UPDATE_SHIFT)

#define EMC_PMACRO_QUSE_DDLL_RANK0_0 0x600
#define EMC_PMACRO_QUSE_DDLL_RANK0_1 0x604
#define EMC_PMACRO_QUSE_DDLL_RANK0_2 0x608
#define EMC_PMACRO_QUSE_DDLL_RANK0_3 0x60c
#define EMC_PMACRO_QUSE_DDLL_RANK0_4 0x610
#define EMC_PMACRO_QUSE_DDLL_RANK0_5 0x614
#define EMC_PMACRO_QUSE_DDLL_RANK1_0 0x620
#define EMC_PMACRO_QUSE_DDLL_RANK1_1 0x624
#define EMC_PMACRO_QUSE_DDLL_RANK1_2 0x628
#define EMC_PMACRO_QUSE_DDLL_RANK1_3 0x62c
#define EMC_PMACRO_QUSE_DDLL_RANK1_4 0x630
#define EMC_PMACRO_QUSE_DDLL_RANK1_5 0x634
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0 0x640
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE1_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE1_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE1_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE0_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE0_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_OB_DDLL_LONG_DQ_RANK0_BYTE0_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1 0x644
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE3_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE3_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE3_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE2_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE2_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_OB_DDLL_LONG_DQ_RANK0_BYTE2_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2 0x648
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE5_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE5_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE5_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE4_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE4_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_OB_DDLL_LONG_DQ_RANK0_BYTE4_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3 0x64c
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE7_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE7_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE7_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE6_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE6_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_OB_DDLL_LONG_DQ_RANK0_BYTE6_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_4 0x650
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_5 0x654
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0 0x660
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE1_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE1_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE1_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE0_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE0_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_OB_DDLL_LONG_DQ_RANK1_BYTE0_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1 0x664
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE3_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE3_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE3_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE2_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE2_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_OB_DDLL_LONG_DQ_RANK1_BYTE2_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2 0x668
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE5_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE5_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE5_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE4_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE4_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_OB_DDLL_LONG_DQ_RANK1_BYTE4_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3 0x66c
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE7_SHIFT \
	16
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE7_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE7_SHIFT)
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE6_SHIFT \
	0
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE6_MASK \
	(0x3ff << \
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_OB_DDLL_LONG_DQ_RANK1_BYTE6_SHIFT)

#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_4 0x670
#define EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_5 0x674
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_0 0x680
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_1 0x684
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_2 0x688
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_3 0x68c
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_4 0x690
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK0_5 0x694
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_0 0x6a0
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_1 0x6a4
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_2 0x6a8
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_3 0x6ac
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_4 0x6b0
#define EMC_PMACRO_OB_DDLL_LONG_DQS_RANK1_5 0x6b4
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_0 0x6c0
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_1 0x6c4
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_2 0x6c8
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK0_3 0x6cc
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_0 0x6e0
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_1 0x6e4
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_2 0x6e8
#define EMC_PMACRO_IB_DDLL_LONG_DQS_RANK1_3 0x6ec
#define EMC_PMACRO_TX_PWRD_0 0x720
#define EMC_PMACRO_TX_PWRD_1 0x724
#define EMC_PMACRO_TX_PWRD_2 0x728
#define EMC_PMACRO_TX_PWRD_3 0x72c
#define EMC_PMACRO_TX_PWRD_4 0x730
#define EMC_PMACRO_TX_PWRD_5 0x734
#define EMC_PMACRO_TX_SEL_CLK_SRC_0 0x740
#define EMC_PMACRO_TX_SEL_CLK_SRC_1 0x744
#define EMC_PMACRO_TX_SEL_CLK_SRC_3 0x74c
#define EMC_PMACRO_TX_SEL_CLK_SRC_2 0x748
#define EMC_PMACRO_TX_SEL_CLK_SRC_4 0x750
#define EMC_PMACRO_TX_SEL_CLK_SRC_5 0x754
#define EMC_PMACRO_DDLL_BYPASS 0x760
#define EMC_PMACRO_DDLL_PWRD_0 0x770
#define EMC_PMACRO_DDLL_PWRD_1 0x774
#define EMC_PMACRO_DDLL_PWRD_2 0x778
#define EMC_PMACRO_CMD_CTRL_0 0x780
#define EMC_PMACRO_CMD_CTRL_1 0x784
#define EMC_PMACRO_CMD_CTRL_2 0x788
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_0 0x800
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_1 0x804
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_2 0x808
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE0_3 0x80c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_0 0x810
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_1 0x814
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_2 0x818
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE1_3 0x81c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_0 0x820
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_1 0x824
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_2 0x828
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE2_3 0x82c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_0 0x830
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_1 0x834
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_2 0x838
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE3_3 0x83c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_0 0x840
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_1 0x844
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_2 0x848
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE4_3 0x84c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_0 0x850
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_1 0x854
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_2 0x858
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE5_3 0x85c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_0 0x860
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_1 0x864
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_2 0x868
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE6_3 0x86c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_0 0x870
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_1 0x874
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_2 0x878
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_BYTE7_3 0x87c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_0 0x880
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_1 0x884
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_2 0x888
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD0_3 0x88c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_0 0x890
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_1 0x894
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_2 0x898
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD1_3 0x89c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_0 0x8a0
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_1 0x8a4
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_2 0x8a8
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD2_3 0x8ac
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_0 0x8b0
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_1 0x8b4
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_2 0x8b8
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK0_CMD3_3 0x8bc
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_0 0x900
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_1 0x904
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_2 0x908
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE0_3 0x90c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_0 0x910
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_1 0x914
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_2 0x918
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE1_3 0x91c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_0 0x920
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_1 0x924
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_2 0x928
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE2_3 0x92c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_0 0x930
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_1 0x934
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_2 0x938
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE3_3 0x93c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_0 0x940
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_1 0x944
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_2 0x948
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE4_3 0x94c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_0 0x950
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_1 0x954
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_2 0x958
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE5_3 0x95c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_0 0x960
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_1 0x964
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_2 0x968
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE6_3 0x96c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_0 0x970
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_1 0x974
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_2 0x978
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_BYTE7_3 0x97c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_0 0x980
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_1 0x984
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_2 0x988
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD0_3 0x98c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_0 0x990
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_1 0x994
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_2 0x998
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD1_3 0x99c
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_0 0x9a0
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_1 0x9a4
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_2 0x9a8
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD2_3 0x9ac
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_0 0x9b0
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_1 0x9b4
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_2 0x9b8
#define EMC_PMACRO_OB_DDLL_SHORT_DQ_RANK1_CMD3_3 0x9bc
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_0 0xa00
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_1 0xa04
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE0_2 0xa08
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_0 0xa10
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_1 0xa14
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE1_2 0xa18
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_0 0xa20
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_1 0xa24
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE2_2 0xa28
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_0 0xa30
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_1 0xa34
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE3_2 0xa38
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_0 0xa40
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_1 0xa44
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE4_2 0xa48
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_0 0xa50
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_1 0xa54
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE5_2 0xa58
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_0 0xa60
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_1 0xa64
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE6_2 0xa68
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_0 0xa70
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_1 0xa74
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK0_BYTE7_2 0xa78
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_0 0xb00
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_1 0xb04
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE0_2 0xb08
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_0 0xb10
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_1 0xb14
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE1_2 0xb18
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_0 0xb20
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_1 0xb24
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE2_2 0xb28
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_0 0xb30
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_1 0xb34
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE3_2 0xb38
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_0 0xb40
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_1 0xb44
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE4_2 0xb48
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_0 0xb50
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_1 0xb54
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE5_2 0xb58
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_0 0xb60
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_1 0xb64
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE6_2 0xb68
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_0 0xb70
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_1 0xb74
#define EMC_PMACRO_IB_DDLL_SHORT_DQ_RANK1_BYTE7_2 0xb78
#define EMC_PMACRO_IB_VREF_DQ_0 0xbe0
#define EMC_PMACRO_IB_VREF_DQ_1 0xbe4
#define EMC_PMACRO_IB_VREF_DQS_0 0xbf0
#define EMC_PMACRO_IB_VREF_DQS_1 0xbf4
#define EMC_PMACRO_DDLL_LONG_CMD_0 0xc00
#define EMC_PMACRO_DDLL_LONG_CMD_1 0xc04
#define EMC_PMACRO_DDLL_LONG_CMD_2 0xc08
#define EMC_PMACRO_DDLL_LONG_CMD_3 0xc0c
#define EMC_PMACRO_DDLL_LONG_CMD_4 0xc10
#define EMC_PMACRO_DDLL_LONG_CMD_5 0xc14
#define EMC_PMACRO_DDLL_SHORT_CMD_0 0xc20
#define EMC_PMACRO_DDLL_SHORT_CMD_1 0xc24
#define EMC_PMACRO_DDLL_SHORT_CMD_2 0xc28
#define EMC_PMACRO_CFG_PM_GLOBAL_0 0xc30
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE0 BIT(16)
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE1 BIT(17)
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE2 BIT(18)
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE3 BIT(19)
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE4 BIT(20)
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE5 BIT(21)
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE6 BIT(22)
#define EMC_PMACRO_CFG_PM_GLOBAL_0_DISABLE_CFG_BYTE7 BIT(23)
#define EMC_PMACRO_VTTGEN_CTRL_0 0xc34
#define EMC_PMACRO_VTTGEN_CTRL_1 0xc38
#define EMC_PMACRO_BG_BIAS_CTRL_0 0xc3c
#define EMC_PMACRO_BG_BIAS_CTRL_0_BG_E_PWRD BIT(0)
#define EMC_PMACRO_BG_BIAS_CTRL_0_BGLP_E_PWRD BIT(2)
#define EMC_PMACRO_PAD_CFG_CTRL 0xc40
#define EMC_PMACRO_ZCTRL 0xc44
#define EMC_PMACRO_CMD_PAD_RX_CTRL 0xc50
#define EMC_PMACRO_DATA_PAD_RX_CTRL 0xc54
#define EMC_PMACRO_CMD_RX_TERM_MODE 0xc58
#define EMC_PMACRO_DATA_RX_TERM_MODE 0xc5c
#define EMC_PMACRO_CMD_PAD_TX_CTRL 0xc60
#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_E_DCC BIT(1)
#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSP_TX_E_DCC BIT(9)
#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQSN_TX_E_DCC BIT(16)
#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_CMD_TX_E_DCC BIT(24)
#define EMC_PMACRO_CMD_PAD_TX_CTRL_CMD_DQ_TX_DRVFORCEON BIT(26)

#define EMC_PMACRO_DATA_PAD_TX_CTRL 0xc64
#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_E_IVREF BIT(0)
#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQ_TX_E_DCC BIT(1)
#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQS_E_IVREF BIT(8)
#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSP_TX_E_DCC BIT(9)
#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_DQSN_TX_E_DCC BIT(16)
#define EMC_PMACRO_DATA_PAD_TX_CTRL_DATA_CMD_TX_E_DCC BIT(24)

#define EMC_PMACRO_COMMON_PAD_TX_CTRL 0xc68
#define EMC_PMACRO_AUTOCAL_CFG_COMMON 0xc78
#define EMC_PMACRO_AUTOCAL_CFG_COMMON_E_CAL_BYPASS_DVFS BIT(16)
#define EMC_PMACRO_VTTGEN_CTRL_2 0xcf0
#define EMC_PMACRO_IB_RXRT 0xcf4
#define EMC_PMACRO_TRAINING_CTRL_0 0xcf8
#define EMC_PMACRO_TRAINING_CTRL_0_CH0_TRAINING_E_WRPTR BIT(3)
#define EMC_PMACRO_TRAINING_CTRL_1 0xcfc
#define EMC_PMACRO_TRAINING_CTRL_1_CH1_TRAINING_E_WRPTR BIT(3)
#define EMC_TRAINING_CTRL 0xe04
#define EMC_TRAINING_QUSE_CORS_CTRL 0xe0c
#define EMC_TRAINING_QUSE_FINE_CTRL 0xe10
#define EMC_TRAINING_QUSE_CTRL_MISC 0xe14
#define EMC_TRAINING_WRITE_FINE_CTRL 0xe18
#define EMC_TRAINING_WRITE_CTRL_MISC 0xe1c
#define EMC_TRAINING_WRITE_VREF_CTRL 0xe20
#define EMC_TRAINING_READ_FINE_CTRL 0xe24
#define EMC_TRAINING_READ_CTRL_MISC 0xe28
#define EMC_TRAINING_READ_VREF_CTRL 0xe2c
#define EMC_TRAINING_CA_FINE_CTRL 0xe30
#define EMC_TRAINING_CA_CTRL_MISC 0xe34
#define EMC_TRAINING_CA_CTRL_MISC1 0xe38
#define EMC_TRAINING_CA_VREF_CTRL 0xe3c
#define EMC_TRAINING_SETTLE 0xe44
#define EMC_TRAINING_MPC 0xe5c
#define EMC_TRAINING_VREF_SETTLE 0xe6c
#define EMC_TRAINING_QUSE_VREF_CTRL 0xed0
#define EMC_TRAINING_OPT_DQS_IB_VREF_RANK0 0xed4
#define EMC_TRAINING_OPT_DQS_IB_VREF_RANK1 0xed8

#define EMC_COPY_TABLE_PARAM_PERIODIC_FIELDS BIT(0)
#define EMC_COPY_TABLE_PARAM_TRIM_REGS BIT(1)

enum burst_regs_list {
	EMC_RP_INDEX = 6,
	EMC_R2P_INDEX = 9,
	EMC_W2P_INDEX,
	EMC_MRW6_INDEX = 31,
	EMC_REFRESH_INDEX = 41,
	EMC_PRE_REFRESH_REQ_CNT_INDEX = 43,
	EMC_TRPAB_INDEX = 59,
	EMC_MRW7_INDEX = 62,
	EMC_FBIO_CFG5_INDEX = 65,
	EMC_FBIO_CFG7_INDEX,
	EMC_CFG_DIG_DLL_INDEX,
	EMC_ZCAL_INTERVAL_INDEX = 139,
	EMC_ZCAL_WAIT_CNT_INDEX,
	EMC_MRS_WAIT_CNT_INDEX = 141,
	EMC_DLL_CFG_0_INDEX = 144,
	EMC_PMACRO_AUTOCAL_CFG_COMMON_INDEX = 146,
	EMC_CFG_INDEX = 148,
	EMC_DYN_SELF_REF_CONTROL_INDEX = 150,
	EMC_PMACRO_CMD_PAD_TX_CTRL_INDEX = 161,
	EMC_PMACRO_DATA_PAD_TX_CTRL_INDEX,
	EMC_PMACRO_COMMON_PAD_TX_CTRL_INDEX,
	EMC_PMACRO_BRICK_CTRL_RFU1_INDEX = 167,
	EMC_PMACRO_BG_BIAS_CTRL_0_INDEX = 171,
	EMC_MRW14_INDEX = 199,
	EMC_MRW15_INDEX = 220,
};

enum trim_regs_list {
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_0_INDEX = 60,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_1_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_2_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_3_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_4_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK0_5_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_0_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_1_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_2_INDEX,
	EMC_PMACRO_OB_DDLL_LONG_DQ_RANK1_3_INDEX,
};

enum burst_mc_regs_list {
	MC_EMEM_ARB_MISC0_INDEX = 20,
};

enum {
	T_RP,
	T_FC_LPDDR4,
	T_RFC,
	T_PDEX,
	RL,
};

enum {
	AUTO_PD = 0,
	MAN_SR = 2,
};

enum {
	ASSEMBLY = 0,
	ACTIVE,
};

enum {
	C0D0U0,
	C0D0U1,
	C0D1U0,
	C0D1U1,
	C1D0U0,
	C1D0U1,
	C1D1U0,
	C1D1U1,
	DRAM_CLKTREE_NUM,
};

#define VREF_REGS_PER_CHANNEL_SIZE 4
#define DRAM_TIMINGS_NUM 5
#define BURST_REGS_PER_CHANNEL_SIZE 8
#define TRIM_REGS_PER_CHANNEL_SIZE 10
#define PTFV_ARRAY_SIZE 12
#define SAVE_RESTORE_MOD_REGS_SIZE 12
#define TRAINING_MOD_REGS_SIZE 20
#define BURST_UP_DOWN_REGS_SIZE 24
#define BURST_MC_REGS_SIZE 33
#define TRIM_REGS_SIZE 138
#define BURST_REGS_SIZE 221

struct tegra210_emc_per_channel_regs {
	u16 bank;
	u16 offset;
};

struct tegra210_emc_table_register_offsets {
	u16 burst[BURST_REGS_SIZE];
	u16 trim[TRIM_REGS_SIZE];
	u16 burst_mc[BURST_MC_REGS_SIZE];
	u16 la_scale[BURST_UP_DOWN_REGS_SIZE];
	struct tegra210_emc_per_channel_regs burst_per_channel[BURST_REGS_PER_CHANNEL_SIZE];
	struct tegra210_emc_per_channel_regs trim_per_channel[TRIM_REGS_PER_CHANNEL_SIZE];
	struct tegra210_emc_per_channel_regs vref_per_channel[VREF_REGS_PER_CHANNEL_SIZE];
};

struct tegra210_emc_timing {
	u32 revision;
	const char dvfs_ver[60];
	u32 rate;
	u32 min_volt;
	u32 gpu_min_volt;
	const char clock_src[32];
	u32 clk_src_emc;
	u32 needs_training;
	u32 training_pattern;
	u32 trained;

	u32 periodic_training;
	u32 trained_dram_clktree[DRAM_CLKTREE_NUM];
	u32 current_dram_clktree[DRAM_CLKTREE_NUM];
	u32 run_clocks;
	u32 tree_margin;

	u32 num_burst;
	u32 num_burst_per_ch;
	u32 num_trim;
	u32 num_trim_per_ch;
	u32 num_mc_regs;
	u32 num_up_down;
	u32 vref_num;
	u32 training_mod_num;
	u32 dram_timing_num;

	u32 ptfv_list[PTFV_ARRAY_SIZE];

	u32 burst_regs[BURST_REGS_SIZE];
	u32 burst_reg_per_ch[BURST_REGS_PER_CHANNEL_SIZE];
	u32 shadow_regs_ca_train[BURST_REGS_SIZE];
	u32 shadow_regs_quse_train[BURST_REGS_SIZE];
	u32 shadow_regs_rdwr_train[BURST_REGS_SIZE];

	u32 trim_regs[TRIM_REGS_SIZE];
	u32 trim_perch_regs[TRIM_REGS_PER_CHANNEL_SIZE];

841 + u32 vref_perch_regs[VREF_REGS_PER_CHANNEL_SIZE]; 842 + 843 + u32 dram_timings[DRAM_TIMINGS_NUM]; 844 + u32 training_mod_regs[TRAINING_MOD_REGS_SIZE]; 845 + u32 save_restore_mod_regs[SAVE_RESTORE_MOD_REGS_SIZE]; 846 + u32 burst_mc_regs[BURST_MC_REGS_SIZE]; 847 + u32 la_scale_regs[BURST_UP_DOWN_REGS_SIZE]; 848 + 849 + u32 min_mrs_wait; 850 + u32 emc_mrw; 851 + u32 emc_mrw2; 852 + u32 emc_mrw3; 853 + u32 emc_mrw4; 854 + u32 emc_mrw9; 855 + u32 emc_mrs; 856 + u32 emc_emrs; 857 + u32 emc_emrs2; 858 + u32 emc_auto_cal_config; 859 + u32 emc_auto_cal_config2; 860 + u32 emc_auto_cal_config3; 861 + u32 emc_auto_cal_config4; 862 + u32 emc_auto_cal_config5; 863 + u32 emc_auto_cal_config6; 864 + u32 emc_auto_cal_config7; 865 + u32 emc_auto_cal_config8; 866 + u32 emc_cfg_2; 867 + u32 emc_sel_dpd_ctrl; 868 + u32 emc_fdpd_ctrl_cmd_no_ramp; 869 + u32 dll_clk_src; 870 + u32 clk_out_enb_x_0_clk_enb_emc_dll; 871 + u32 latency; 872 + }; 873 + 874 + enum tegra210_emc_refresh { 875 + TEGRA210_EMC_REFRESH_NOMINAL = 0, 876 + TEGRA210_EMC_REFRESH_2X, 877 + TEGRA210_EMC_REFRESH_4X, 878 + TEGRA210_EMC_REFRESH_THROTTLE, /* 4x Refresh + derating. 
*/ 879 + }; 880 + 881 + #define DRAM_TYPE_DDR3 0 882 + #define DRAM_TYPE_LPDDR4 1 883 + #define DRAM_TYPE_LPDDR2 2 884 + #define DRAM_TYPE_DDR2 3 885 + 886 + struct tegra210_emc { 887 + struct tegra_mc *mc; 888 + struct device *dev; 889 + struct clk *clk; 890 + 891 + /* nominal EMC frequency table */ 892 + struct tegra210_emc_timing *nominal; 893 + /* derated EMC frequency table */ 894 + struct tegra210_emc_timing *derated; 895 + 896 + /* currently selected table (nominal or derated) */ 897 + struct tegra210_emc_timing *timings; 898 + unsigned int num_timings; 899 + 900 + const struct tegra210_emc_table_register_offsets *offsets; 901 + 902 + const struct tegra210_emc_sequence *sequence; 903 + spinlock_t lock; 904 + 905 + void __iomem *regs, *channel[2]; 906 + unsigned int num_channels; 907 + unsigned int num_devices; 908 + unsigned int dram_type; 909 + 910 + struct tegra210_emc_timing *last; 911 + struct tegra210_emc_timing *next; 912 + 913 + unsigned int training_interval; 914 + struct timer_list training; 915 + 916 + enum tegra210_emc_refresh refresh; 917 + unsigned int refresh_poll_interval; 918 + struct timer_list refresh_timer; 919 + unsigned int temperature; 920 + atomic_t refresh_poll; 921 + 922 + ktime_t clkchange_time; 923 + int clkchange_delay; 924 + 925 + unsigned long resume_rate; 926 + 927 + struct { 928 + struct dentry *root; 929 + unsigned long min_rate; 930 + unsigned long max_rate; 931 + unsigned int temperature; 932 + } debugfs; 933 + 934 + struct tegra210_clk_emc_provider provider; 935 + }; 936 + 937 + struct tegra210_emc_sequence { 938 + u8 revision; 939 + void (*set_clock)(struct tegra210_emc *emc, u32 clksrc); 940 + u32 (*periodic_compensation)(struct tegra210_emc *emc); 941 + }; 942 + 943 + static inline void emc_writel(struct tegra210_emc *emc, u32 value, 944 + unsigned int offset) 945 + { 946 + writel_relaxed(value, emc->regs + offset); 947 + } 948 + 949 + static inline u32 emc_readl(struct tegra210_emc *emc, unsigned int offset) 950 + { 
951 + return readl_relaxed(emc->regs + offset); 952 + } 953 + 954 + static inline void emc_channel_writel(struct tegra210_emc *emc, 955 + unsigned int channel, 956 + u32 value, unsigned int offset) 957 + { 958 + writel_relaxed(value, emc->channel[channel] + offset); 959 + } 960 + 961 + static inline u32 emc_channel_readl(struct tegra210_emc *emc, 962 + unsigned int channel, unsigned int offset) 963 + { 964 + return readl_relaxed(emc->channel[channel] + offset); 965 + } 966 + 967 + static inline void ccfifo_writel(struct tegra210_emc *emc, u32 value, 968 + unsigned int offset, u32 delay) 969 + { 970 + writel_relaxed(value, emc->regs + EMC_CCFIFO_DATA); 971 + 972 + value = EMC_CCFIFO_ADDR_STALL_BY_1 | EMC_CCFIFO_ADDR_STALL(delay) | 973 + EMC_CCFIFO_ADDR_OFFSET(offset); 974 + writel_relaxed(value, emc->regs + EMC_CCFIFO_ADDR); 975 + } 976 + 977 + static inline u32 div_o3(u32 a, u32 b) 978 + { 979 + u32 result = a / b; 980 + 981 + if ((b * result) < a) 982 + return result + 1; 983 + 984 + return result; 985 + } 986 + 987 + /* from tegra210-emc-r21021.c */ 988 + extern const struct tegra210_emc_sequence tegra210_emc_r21021; 989 + 990 + int tegra210_emc_set_refresh(struct tegra210_emc *emc, 991 + enum tegra210_emc_refresh refresh); 992 + u32 tegra210_emc_mrr_read(struct tegra210_emc *emc, unsigned int chip, 993 + unsigned int address); 994 + void tegra210_emc_do_clock_change(struct tegra210_emc *emc, u32 clksrc); 995 + void tegra210_emc_set_shadow_bypass(struct tegra210_emc *emc, int set); 996 + void tegra210_emc_timing_update(struct tegra210_emc *emc); 997 + u32 tegra210_emc_get_dll_state(struct tegra210_emc_timing *next); 998 + struct tegra210_emc_timing *tegra210_emc_find_timing(struct tegra210_emc *emc, 999 + unsigned long rate); 1000 + void tegra210_emc_adjust_timing(struct tegra210_emc *emc, 1001 + struct tegra210_emc_timing *timing); 1002 + int tegra210_emc_wait_for_update(struct tegra210_emc *emc, unsigned int channel, 1003 + unsigned int offset, u32 bit_mask, 
bool state); 1004 + unsigned long tegra210_emc_actual_osc_clocks(u32 in); 1005 + u32 tegra210_emc_compensate(struct tegra210_emc_timing *next, u32 offset); 1006 + void tegra210_emc_dll_disable(struct tegra210_emc *emc); 1007 + void tegra210_emc_dll_enable(struct tegra210_emc *emc); 1008 + u32 tegra210_emc_dll_prelock(struct tegra210_emc *emc, u32 clksrc); 1009 + u32 tegra210_emc_dvfs_power_ramp_down(struct tegra210_emc *emc, u32 clk, 1010 + bool flip_backward); 1011 + u32 tegra210_emc_dvfs_power_ramp_up(struct tegra210_emc *emc, u32 clk, 1012 + bool flip_backward); 1013 + void tegra210_emc_reset_dram_clktree_values(struct tegra210_emc_timing *timing); 1014 + void tegra210_emc_start_periodic_compensation(struct tegra210_emc *emc); 1015 + 1016 + #endif
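The `div_o3()` helper added above is a hand-rolled ceiling division (the header avoids pulling in extra kernel helpers for it). As a standalone sketch, the same arithmetic compiles and tests in plain C:

```c
#include <stdint.h>

/* Ceiling division, mirroring the div_o3() helper in tegra210-emc.h:
 * returns a/b rounded up whenever the division leaves a remainder. */
static inline uint32_t div_o3(uint32_t a, uint32_t b)
{
	uint32_t result = a / b;

	if ((b * result) < a)
		return result + 1;

	return result;
}
```

The kernel's generic `DIV_ROUND_UP()` computes the equivalent as `(a + b - 1) / b`; the multiply-and-compare form used here cannot overflow, since `b * result` never exceeds `a`.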
+50
drivers/memory/tegra/tegra210-mc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2015-2020, NVIDIA CORPORATION. All rights reserved. 4 + */ 5 + 6 + #ifndef TEGRA210_MC_H 7 + #define TEGRA210_MC_H 8 + 9 + #include "mc.h" 10 + 11 + /* register definitions */ 12 + #define MC_LATENCY_ALLOWANCE_AVPC_0 0x2e4 13 + #define MC_LATENCY_ALLOWANCE_HC_0 0x310 14 + #define MC_LATENCY_ALLOWANCE_HC_1 0x314 15 + #define MC_LATENCY_ALLOWANCE_MPCORE_0 0x320 16 + #define MC_LATENCY_ALLOWANCE_NVENC_0 0x328 17 + #define MC_LATENCY_ALLOWANCE_PPCS_0 0x344 18 + #define MC_LATENCY_ALLOWANCE_PPCS_1 0x348 19 + #define MC_LATENCY_ALLOWANCE_ISP2_0 0x370 20 + #define MC_LATENCY_ALLOWANCE_ISP2_1 0x374 21 + #define MC_LATENCY_ALLOWANCE_XUSB_0 0x37c 22 + #define MC_LATENCY_ALLOWANCE_XUSB_1 0x380 23 + #define MC_LATENCY_ALLOWANCE_TSEC_0 0x390 24 + #define MC_LATENCY_ALLOWANCE_VIC_0 0x394 25 + #define MC_LATENCY_ALLOWANCE_VI2_0 0x398 26 + #define MC_LATENCY_ALLOWANCE_GPU_0 0x3ac 27 + #define MC_LATENCY_ALLOWANCE_SDMMCA_0 0x3b8 28 + #define MC_LATENCY_ALLOWANCE_SDMMCAA_0 0x3bc 29 + #define MC_LATENCY_ALLOWANCE_SDMMC_0 0x3c0 30 + #define MC_LATENCY_ALLOWANCE_SDMMCAB_0 0x3c4 31 + #define MC_LATENCY_ALLOWANCE_GPU2_0 0x3e8 32 + #define MC_LATENCY_ALLOWANCE_NVDEC_0 0x3d8 33 + #define MC_MLL_MPCORER_PTSA_RATE 0x44c 34 + #define MC_FTOP_PTSA_RATE 0x50c 35 + #define MC_EMEM_ARB_TIMING_RFCPB 0x6c0 36 + #define MC_EMEM_ARB_TIMING_CCDMW 0x6c4 37 + #define MC_EMEM_ARB_REFPB_HP_CTRL 0x6f0 38 + #define MC_EMEM_ARB_REFPB_BANK_CTRL 0x6f4 39 + #define MC_PTSA_GRANT_DECREMENT 0x960 40 + #define MC_EMEM_ARB_DHYST_CTRL 0xbcc 41 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_0 0xbd0 42 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_1 0xbd4 43 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_2 0xbd8 44 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_3 0xbdc 45 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_4 0xbe0 46 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_5 0xbe4 47 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_6 0xbe8 48 + #define MC_EMEM_ARB_DHYST_TIMEOUT_UTIL_7 
0xbec 49 + 50 + #endif
+47 -75
drivers/memory/tegra/tegra30-emc.c
··· 11 11 12 12 #include <linux/clk.h> 13 13 #include <linux/clk/tegra.h> 14 - #include <linux/completion.h> 15 14 #include <linux/debugfs.h> 16 15 #include <linux/delay.h> 17 16 #include <linux/err.h> ··· 326 327 struct tegra_emc { 327 328 struct device *dev; 328 329 struct tegra_mc *mc; 329 - struct completion clk_handshake_complete; 330 330 struct notifier_block clk_nb; 331 331 struct clk *clk; 332 332 void __iomem *regs; ··· 372 374 return 0; 373 375 } 374 376 375 - static void emc_complete_clk_change(struct tegra_emc *emc) 376 - { 377 - struct emc_timing *timing = emc->new_timing; 378 - unsigned int dram_num; 379 - bool failed = false; 380 - int err; 381 - 382 - /* re-enable auto-refresh */ 383 - dram_num = tegra_mc_get_emem_device_count(emc->mc); 384 - writel_relaxed(EMC_REFCTRL_ENABLE_ALL(dram_num), 385 - emc->regs + EMC_REFCTRL); 386 - 387 - /* restore auto-calibration */ 388 - if (emc->vref_cal_toggle) 389 - writel_relaxed(timing->emc_auto_cal_interval, 390 - emc->regs + EMC_AUTO_CAL_INTERVAL); 391 - 392 - /* restore dynamic self-refresh */ 393 - if (timing->emc_cfg_dyn_self_ref) { 394 - emc->emc_cfg |= EMC_CFG_DYN_SREF_ENABLE; 395 - writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG); 396 - } 397 - 398 - /* set number of clocks to wait after each ZQ command */ 399 - if (emc->zcal_long) 400 - writel_relaxed(timing->emc_zcal_cnt_long, 401 - emc->regs + EMC_ZCAL_WAIT_CNT); 402 - 403 - /* wait for writes to settle */ 404 - udelay(2); 405 - 406 - /* update restored timing */ 407 - err = emc_seq_update_timing(emc); 408 - if (err) 409 - failed = true; 410 - 411 - /* restore early ACK */ 412 - mc_writel(emc->mc, emc->mc_override, MC_EMEM_ARB_OVERRIDE); 413 - 414 - WRITE_ONCE(emc->bad_state, failed); 415 - } 416 - 417 377 static irqreturn_t tegra_emc_isr(int irq, void *data) 418 378 { 419 379 struct tegra_emc *emc = data; 420 - u32 intmask = EMC_REFRESH_OVERFLOW_INT | EMC_CLKCHANGE_COMPLETE_INT; 380 + u32 intmask = EMC_REFRESH_OVERFLOW_INT; 421 381 u32 status; 422 
382 423 383 status = readl_relaxed(emc->regs + EMC_INTSTATUS) & intmask; ··· 389 433 390 434 /* clear interrupts */ 391 435 writel_relaxed(status, emc->regs + EMC_INTSTATUS); 392 - 393 - /* notify about EMC-CAR handshake completion */ 394 - if (status & EMC_CLKCHANGE_COMPLETE_INT) { 395 - if (completion_done(&emc->clk_handshake_complete)) { 396 - dev_err_ratelimited(emc->dev, 397 - "bogus handshake interrupt\n"); 398 - return IRQ_NONE; 399 - } 400 - 401 - emc_complete_clk_change(emc); 402 - complete(&emc->clk_handshake_complete); 403 - } 404 436 405 437 return IRQ_HANDLED; 406 438 } ··· 745 801 */ 746 802 mc_readl(emc->mc, MC_EMEM_ARB_OVERRIDE); 747 803 748 - reinit_completion(&emc->clk_handshake_complete); 749 - 750 - emc->new_timing = timing; 751 - 752 804 return 0; 753 805 } 754 806 755 807 static int emc_complete_timing_change(struct tegra_emc *emc, 756 808 unsigned long rate) 757 809 { 758 - unsigned long timeout; 810 + struct emc_timing *timing = emc_find_timing(emc, rate); 811 + unsigned int dram_num; 812 + int err; 813 + u32 v; 759 814 760 - timeout = wait_for_completion_timeout(&emc->clk_handshake_complete, 761 - msecs_to_jiffies(100)); 762 - if (timeout == 0) { 763 - dev_err(emc->dev, "emc-car handshake failed\n"); 764 - return -EIO; 815 + err = readl_relaxed_poll_timeout_atomic(emc->regs + EMC_INTSTATUS, v, 816 + v & EMC_CLKCHANGE_COMPLETE_INT, 817 + 1, 100); 818 + if (err) { 819 + dev_err(emc->dev, "emc-car handshake timeout: %d\n", err); 820 + return err; 765 821 } 766 822 767 - if (READ_ONCE(emc->bad_state)) 768 - return -EIO; 823 + /* re-enable auto-refresh */ 824 + dram_num = tegra_mc_get_emem_device_count(emc->mc); 825 + writel_relaxed(EMC_REFCTRL_ENABLE_ALL(dram_num), 826 + emc->regs + EMC_REFCTRL); 769 827 770 - return 0; 828 + /* restore auto-calibration */ 829 + if (emc->vref_cal_toggle) 830 + writel_relaxed(timing->emc_auto_cal_interval, 831 + emc->regs + EMC_AUTO_CAL_INTERVAL); 832 + 833 + /* restore dynamic self-refresh */ 834 + if 
(timing->emc_cfg_dyn_self_ref) { 835 + emc->emc_cfg |= EMC_CFG_DYN_SREF_ENABLE; 836 + writel_relaxed(emc->emc_cfg, emc->regs + EMC_CFG); 837 + } 838 + 839 + /* set number of clocks to wait after each ZQ command */ 840 + if (emc->zcal_long) 841 + writel_relaxed(timing->emc_zcal_cnt_long, 842 + emc->regs + EMC_ZCAL_WAIT_CNT); 843 + 844 + /* wait for writes to settle */ 845 + udelay(2); 846 + 847 + /* update restored timing */ 848 + err = emc_seq_update_timing(emc); 849 + if (!err) 850 + emc->bad_state = false; 851 + 852 + /* restore early ACK */ 853 + mc_writel(emc->mc, emc->mc_override, MC_EMEM_ARB_OVERRIDE); 854 + 855 + return err; 771 856 } 772 857 773 858 static int emc_unprepare_timing_change(struct tegra_emc *emc, ··· 1006 1033 1007 1034 static int emc_setup_hw(struct tegra_emc *emc) 1008 1035 { 1009 - u32 intmask = EMC_REFRESH_OVERFLOW_INT | EMC_CLKCHANGE_COMPLETE_INT; 1036 + u32 intmask = EMC_REFRESH_OVERFLOW_INT; 1010 1037 u32 fbio_cfg5, emc_cfg, emc_dbg; 1011 1038 enum emc_dram_type dram_type; 1012 1039 ··· 1248 1275 return; 1249 1276 } 1250 1277 1251 - debugfs_create_file("available_rates", S_IRUGO, emc->debugfs.root, 1278 + debugfs_create_file("available_rates", 0444, emc->debugfs.root, 1252 1279 emc, &tegra_emc_debug_available_rates_fops); 1253 - debugfs_create_file("min_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 1280 + debugfs_create_file("min_rate", 0644, emc->debugfs.root, 1254 1281 emc, &tegra_emc_debug_min_rate_fops); 1255 - debugfs_create_file("max_rate", S_IRUGO | S_IWUSR, emc->debugfs.root, 1282 + debugfs_create_file("max_rate", 0644, emc->debugfs.root, 1256 1283 emc, &tegra_emc_debug_max_rate_fops); 1257 1284 } 1258 1285 ··· 1294 1321 if (!emc->mc) 1295 1322 return -EPROBE_DEFER; 1296 1323 1297 - init_completion(&emc->clk_handshake_complete); 1298 1324 emc->clk_nb.notifier_call = emc_clk_change_notify; 1299 1325 emc->dev = &pdev->dev; 1300 1326
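The tegra30-emc rework above replaces the completion-based EMC-CAR handshake with direct polling of EMC_INTSTATUS via `readl_relaxed_poll_timeout_atomic()`. A userspace sketch of that bounded-polling pattern, with the MMIO read abstracted behind a hypothetical `read_status()` callback:

```c
#include <stdint.h>
#include <errno.h>

/* Poll a status word until (status & mask) is non-zero or the attempt
 * budget runs out; models readl_poll_timeout with a 1 us step. */
static int poll_status(uint32_t (*read_status)(void *ctx), void *ctx,
		       uint32_t mask, unsigned int max_polls)
{
	unsigned int i;

	for (i = 0; i < max_polls; i++) {
		if (read_status(ctx) & mask)
			return 0;
		/* the kernel version would udelay(1) between reads */
	}

	return -ETIMEDOUT;
}

/* test double: the status bit reads as set once *ctx counts down to 0 */
static uint32_t fake_status(void *ctx)
{
	unsigned int *remaining = ctx;

	if (*remaining == 0)
		return 0x10;
	(*remaining)--;
	return 0;
}
```

In the driver the same shape appears as `readl_relaxed_poll_timeout_atomic(emc->regs + EMC_INTSTATUS, v, v & EMC_CLKCHANGE_COMPLETE_INT, 1, 100)`.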
+8 -8
drivers/memory/ti-aemif.c
··· 27 27 #define WSTROBE_SHIFT 20 28 28 #define WSETUP_SHIFT 26 29 29 #define EW_SHIFT 30 30 - #define SS_SHIFT 31 30 + #define SSTROBE_SHIFT 31 31 31 32 32 #define TA(x) ((x) << TA_SHIFT) 33 33 #define RHOLD(x) ((x) << RHOLD_SHIFT) ··· 37 37 #define WSTROBE(x) ((x) << WSTROBE_SHIFT) 38 38 #define WSETUP(x) ((x) << WSETUP_SHIFT) 39 39 #define EW(x) ((x) << EW_SHIFT) 40 - #define SS(x) ((x) << SS_SHIFT) 40 + #define SSTROBE(x) ((x) << SSTROBE_SHIFT) 41 41 42 42 #define ASIZE_MAX 0x1 43 43 #define TA_MAX 0x3 ··· 48 48 #define WSTROBE_MAX 0x3f 49 49 #define WSETUP_MAX 0xf 50 50 #define EW_MAX 0x1 51 - #define SS_MAX 0x1 51 + #define SSTROBE_MAX 0x1 52 52 #define NUM_CS 4 53 53 54 54 #define TA_VAL(x) (((x) & TA(TA_MAX)) >> TA_SHIFT) ··· 59 59 #define WSTROBE_VAL(x) (((x) & WSTROBE(WSTROBE_MAX)) >> WSTROBE_SHIFT) 60 60 #define WSETUP_VAL(x) (((x) & WSETUP(WSETUP_MAX)) >> WSETUP_SHIFT) 61 61 #define EW_VAL(x) (((x) & EW(EW_MAX)) >> EW_SHIFT) 62 - #define SS_VAL(x) (((x) & SS(SS_MAX)) >> SS_SHIFT) 62 + #define SSTROBE_VAL(x) (((x) & SSTROBE(SSTROBE_MAX)) >> SSTROBE_SHIFT) 63 63 64 64 #define NRCSR_OFFSET 0x00 65 65 #define AWCCR_OFFSET 0x04 ··· 67 67 68 68 #define ACR_ASIZE_MASK 0x3 69 69 #define ACR_EW_MASK BIT(30) 70 - #define ACR_SS_MASK BIT(31) 70 + #define ACR_SSTROBE_MASK BIT(31) 71 71 #define ASIZE_16BIT 1 72 72 73 73 #define CONFIG_MASK (TA(TA_MAX) | \ ··· 77 77 WHOLD(WHOLD_MAX) | \ 78 78 WSTROBE(WSTROBE_MAX) | \ 79 79 WSETUP(WSETUP_MAX) | \ 80 - EW(EW_MAX) | SS(SS_MAX) | \ 80 + EW(EW_MAX) | SSTROBE(SSTROBE_MAX) | \ 81 81 ASIZE_MAX) 82 82 83 83 /** ··· 204 204 if (data->enable_ew) 205 205 set |= ACR_EW_MASK; 206 206 if (data->enable_ss) 207 - set |= ACR_SS_MASK; 207 + set |= ACR_SSTROBE_MASK; 208 208 209 209 val = readl(aemif->base + offset); 210 210 val &= ~CONFIG_MASK; ··· 246 246 data->wstrobe = aemif_cycles_to_nsec(WSTROBE_VAL(val), clk_rate); 247 247 data->wsetup = aemif_cycles_to_nsec(WSETUP_VAL(val), clk_rate); 248 248 data->enable_ew = EW_VAL(val); 249 - 
data->enable_ss = SS_VAL(val); 249 + data->enable_ss = SSTROBE_VAL(val); 250 250 data->asize = val & ASIZE_MAX; 251 251 } 252 252
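The ti-aemif rename only swaps the `SS` prefix for `SSTROBE`; the underlying macro pattern (`FIELD(x)` shifts a value into position, `FIELD_VAL(x)` masks and shifts it back out) is unchanged. A minimal sketch of that encode/decode round trip for one field:

```c
#include <stdint.h>

/* Encode/decode one timing field, following the ti-aemif macro style:
 * WSTROBE occupies 6 bits starting at bit 20 of the chip-select
 * config register. */
#define WSTROBE_SHIFT	20
#define WSTROBE_MAX	0x3f
#define WSTROBE(x)	((uint32_t)(x) << WSTROBE_SHIFT)
#define WSTROBE_VAL(x)	(((x) & WSTROBE(WSTROBE_MAX)) >> WSTROBE_SHIFT)
```

Masking with `WSTROBE(WSTROBE_MAX)` before shifting down is what makes the decode safe on a full register value containing neighboring fields.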
+1 -1
drivers/memory/ti-emif-pm.c
··· 248 248 static int ti_emif_resume(struct device *dev) 249 249 { 250 250 unsigned long tmp = 251 - __raw_readl((void *)emif_instance->ti_emif_sram_virt); 251 + __raw_readl((void __iomem *)emif_instance->ti_emif_sram_virt); 252 252 253 253 /* 254 254 * Check to see if what we are copying is already present in the
+12 -12
drivers/reset/reset-intel-gw.c
··· 15 15 #define RCU_RST_STAT 0x0024 16 16 #define RCU_RST_REQ 0x0048 17 17 18 - #define REG_OFFSET GENMASK(31, 16) 19 - #define BIT_OFFSET GENMASK(15, 8) 20 - #define STAT_BIT_OFFSET GENMASK(7, 0) 18 + #define REG_OFFSET_MASK GENMASK(31, 16) 19 + #define BIT_OFFSET_MASK GENMASK(15, 8) 20 + #define STAT_BIT_OFFSET_MASK GENMASK(7, 0) 21 21 22 22 #define to_reset_data(x) container_of(x, struct intel_reset_data, rcdev) 23 23 ··· 51 51 unsigned long id, u32 *rst_req, 52 52 u32 *req_bit, u32 *stat_bit) 53 53 { 54 - *rst_req = FIELD_GET(REG_OFFSET, id); 55 - *req_bit = FIELD_GET(BIT_OFFSET, id); 54 + *rst_req = FIELD_GET(REG_OFFSET_MASK, id); 55 + *req_bit = FIELD_GET(BIT_OFFSET_MASK, id); 56 56 57 57 if (data->soc_data->legacy) 58 - *stat_bit = FIELD_GET(STAT_BIT_OFFSET, id); 58 + *stat_bit = FIELD_GET(STAT_BIT_OFFSET_MASK, id); 59 59 else 60 60 *stat_bit = *req_bit; 61 61 ··· 141 141 if (spec->args[1] > 31) 142 142 return -EINVAL; 143 143 144 - id = FIELD_PREP(REG_OFFSET, spec->args[0]); 145 - id |= FIELD_PREP(BIT_OFFSET, spec->args[1]); 144 + id = FIELD_PREP(REG_OFFSET_MASK, spec->args[0]); 145 + id |= FIELD_PREP(BIT_OFFSET_MASK, spec->args[1]); 146 146 147 147 if (data->soc_data->legacy) { 148 148 if (spec->args[2] > 31) 149 149 return -EINVAL; 150 150 151 - id |= FIELD_PREP(STAT_BIT_OFFSET, spec->args[2]); 151 + id |= FIELD_PREP(STAT_BIT_OFFSET_MASK, spec->args[2]); 152 152 } 153 153 154 154 return id; ··· 210 210 if (ret) 211 211 return ret; 212 212 213 - data->reboot_id = FIELD_PREP(REG_OFFSET, rb_id[0]); 214 - data->reboot_id |= FIELD_PREP(BIT_OFFSET, rb_id[1]); 213 + data->reboot_id = FIELD_PREP(REG_OFFSET_MASK, rb_id[0]); 214 + data->reboot_id |= FIELD_PREP(BIT_OFFSET_MASK, rb_id[1]); 215 215 216 216 if (data->soc_data->legacy) 217 - data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET, rb_id[2]); 217 + data->reboot_id |= FIELD_PREP(STAT_BIT_OFFSET_MASK, rb_id[2]); 218 218 219 219 data->restart_nb.notifier_call = intel_reset_restart_handler; 220 220 
data->restart_nb.priority = 128;
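The `*_MASK` renames in reset-intel-gw.c leave the id encoding itself untouched: the of_xlate translation packs register offset, request bit, and status bit into one u32 with `FIELD_PREP`, and lookups unpack them with `FIELD_GET`. A sketch of the same pack/unpack with the bit operations written out explicitly (field layout copied from the GENMASK definitions above):

```c
#include <stdint.h>

/* id layout from reset-intel-gw.c:
 * bits 31:16 register offset, 15:8 request bit, 7:0 status bit */
#define REG_OFFSET_MASK		0xffff0000u
#define BIT_OFFSET_MASK		0x0000ff00u
#define STAT_BIT_OFFSET_MASK	0x000000ffu

static uint32_t pack_id(uint32_t reg, uint32_t bit, uint32_t stat)
{
	return (reg << 16) | (bit << 8) | stat;
}

static void unpack_id(uint32_t id, uint32_t *reg, uint32_t *bit,
		      uint32_t *stat)
{
	*reg  = (id & REG_OFFSET_MASK) >> 16;
	*bit  = (id & BIT_OFFSET_MASK) >> 8;
	*stat = id & STAT_BIT_OFFSET_MASK;
}
```

The kernel's `FIELD_PREP(mask, val)` / `FIELD_GET(mask, reg)` derive the shift from the mask at compile time, which is why renaming the masks costs nothing at runtime.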
+21 -2
drivers/reset/reset-simple.c
··· 11 11 * Maxime Ripard <maxime.ripard@free-electrons.com> 12 12 */ 13 13 14 + #include <linux/delay.h> 14 15 #include <linux/device.h> 15 16 #include <linux/err.h> 16 17 #include <linux/io.h> ··· 19 18 #include <linux/of_device.h> 20 19 #include <linux/platform_device.h> 21 20 #include <linux/reset-controller.h> 21 + #include <linux/reset/reset-simple.h> 22 22 #include <linux/spinlock.h> 23 - 24 - #include "reset-simple.h" 25 23 26 24 static inline struct reset_simple_data * 27 25 to_reset_simple_data(struct reset_controller_dev *rcdev) ··· 64 64 return reset_simple_update(rcdev, id, false); 65 65 } 66 66 67 + static int reset_simple_reset(struct reset_controller_dev *rcdev, 68 + unsigned long id) 69 + { 70 + struct reset_simple_data *data = to_reset_simple_data(rcdev); 71 + int ret; 72 + 73 + if (!data->reset_us) 74 + return -ENOTSUPP; 75 + 76 + ret = reset_simple_assert(rcdev, id); 77 + if (ret) 78 + return ret; 79 + 80 + usleep_range(data->reset_us, data->reset_us * 2); 81 + 82 + return reset_simple_deassert(rcdev, id); 83 + } 84 + 67 85 static int reset_simple_status(struct reset_controller_dev *rcdev, 68 86 unsigned long id) 69 87 { ··· 99 81 const struct reset_control_ops reset_simple_ops = { 100 82 .assert = reset_simple_assert, 101 83 .deassert = reset_simple_deassert, 84 + .reset = reset_simple_reset, 102 85 .status = reset_simple_status, 103 86 }; 104 87 EXPORT_SYMBOL_GPL(reset_simple_ops);
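The new `reset_simple_reset()` gives the controller a complete `.reset` operation: assert, wait at least `reset_us` microseconds, deassert, and refuse when the required pulse width is unknown. A sketch of that pattern against a hypothetical controller struct (the sleep itself is elided so the logic stays testable; the kernel returns `-ENOTSUPP`, for which userspace `EOPNOTSUPP` stands in here):

```c
#include <errno.h>
#include <stdbool.h>

/* hypothetical stand-in for a reset line with a known pulse width */
struct pulse_rst {
	unsigned int reset_us;	/* 0 = pulse width unknown */
	bool asserted;
	int pulses;
};

static void rst_assert(struct pulse_rst *r)   { r->asserted = true; }
static void rst_deassert(struct pulse_rst *r) { r->asserted = false; }

/* assert, wait, deassert, as in reset_simple_reset(); the kernel waits
 * with usleep_range(reset_us, reset_us * 2) between the two steps */
static int pulse_reset(struct pulse_rst *r)
{
	if (!r->reset_us)
		return -EOPNOTSUPP;

	rst_assert(r);
	/* usleep_range(r->reset_us, r->reset_us * 2); */
	rst_deassert(r);
	r->pulses++;
	return 0;
}
```

Returning an error for `reset_us == 0` matches the kerneldoc added in reset-simple.h: a zero delay means the minimum pulse width is unknown, so a fire-and-forget reset cannot be guaranteed to work.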
+7
drivers/reset/reset-simple.h include/linux/reset/reset-simple.h
··· 27 27 * @status_active_low: if true, bits read back as cleared while the reset is 28 28 * asserted. Otherwise, bits read back as set while the 29 29 * reset is asserted. 30 + * @reset_us: Minimum delay in microseconds that needs to be 31 + * waited for between an assert and a deassert to reset the 32 + * device. If multiple consumers with different delay 33 + * requirements are connected to this controller, it must 34 + * be the largest minimum delay. 0 means that such a delay is 35 + * unknown and the reset operation is unsupported. 30 36 */ 31 37 struct reset_simple_data { 32 38 spinlock_t lock; ··· 40 34 struct reset_controller_dev rcdev; 41 35 bool active_low; 42 36 bool status_active_low; 37 + unsigned int reset_us; 43 38 }; 44 39 45 40 extern const struct reset_control_ops reset_simple_ops;

+1 -2
drivers/reset/reset-socfpga.c
··· 11 11 #include <linux/of_address.h> 12 12 #include <linux/platform_device.h> 13 13 #include <linux/reset-controller.h> 14 + #include <linux/reset/reset-simple.h> 14 15 #include <linux/reset/socfpga.h> 15 16 #include <linux/slab.h> 16 17 #include <linux/spinlock.h> 17 18 #include <linux/types.h> 18 - 19 - #include "reset-simple.h" 20 19 21 20 #define SOCFPGA_NR_BANKS 8 22 21
+1 -2
drivers/reset/reset-sunxi.c
··· 14 14 #include <linux/of_address.h> 15 15 #include <linux/platform_device.h> 16 16 #include <linux/reset-controller.h> 17 + #include <linux/reset/reset-simple.h> 17 18 #include <linux/reset/sunxi.h> 18 19 #include <linux/slab.h> 19 20 #include <linux/spinlock.h> 20 21 #include <linux/types.h> 21 - 22 - #include "reset-simple.h" 23 22 24 23 static int sunxi_reset_init(struct device_node *np) 25 24 {
+1 -1
drivers/reset/reset-ti-sci.c
··· 1 1 /* 2 2 * Texas Instrument's System Control Interface (TI-SCI) reset driver 3 3 * 4 - * Copyright (C) 2015-2017 Texas Instruments Incorporated - http://www.ti.com/ 4 + * Copyright (C) 2015-2017 Texas Instruments Incorporated - https://www.ti.com/ 5 5 * Andrew F. Davis <afd@ti.com> 6 6 * 7 7 * This program is free software; you can redistribute it and/or modify
+1 -1
drivers/reset/reset-ti-syscon.c
··· 1 1 /* 2 2 * TI SYSCON regmap reset driver 3 3 * 4 - * Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/ 4 + * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/ 5 5 * Andrew F. Davis <afd@ti.com> 6 6 * Suman Anna <afd@ti.com> 7 7 *
+1 -2
drivers/reset/reset-uniphier-glue.c
··· 9 9 #include <linux/of_device.h> 10 10 #include <linux/platform_device.h> 11 11 #include <linux/reset.h> 12 - 13 - #include "reset-simple.h" 12 + #include <linux/reset/reset-simple.h> 14 13 15 14 #define MAX_CLKS 2 16 15 #define MAX_RSTS 2
+1 -9
drivers/soc/imx/Kconfig
··· 8 8 select PM_GENERIC_DOMAINS 9 9 default y if SOC_IMX7D 10 10 11 - config IMX_SCU_SOC 12 - bool "i.MX System Controller Unit SoC info support" 13 - depends on IMX_SCU 14 - select SOC_BUS 15 - help 16 - If you say yes here you get support for the NXP i.MX System 17 - Controller Unit SoC info module, it will provide the SoC info 18 - like SoC family, ID and revision etc. 19 - 20 11 config SOC_IMX8M 21 12 bool "i.MX8M SoC family support" 22 13 depends on ARCH_MXC || COMPILE_TEST 23 14 default ARCH_MXC && ARM64 24 15 select SOC_BUS 16 + select ARM_GIC_V3 if ARCH_MXC 25 17 help 26 18 If you say yes here you get support for the NXP i.MX8M family 27 19 support, it will provide the SoC info like SoC family,
-1
drivers/soc/imx/Makefile
··· 5 5 obj-$(CONFIG_HAVE_IMX_GPC) += gpc.o 6 6 obj-$(CONFIG_IMX_GPCV2_PM_DOMAINS) += gpcv2.o 7 7 obj-$(CONFIG_SOC_IMX8M) += soc-imx8m.o 8 - obj-$(CONFIG_IMX_SCU_SOC) += soc-imx-scu.o
+17 -66
drivers/soc/imx/soc-imx-scu.c drivers/firmware/imx/imx-scu-soc.c
··· 10 10 #include <linux/platform_device.h> 11 11 #include <linux/of.h> 12 12 13 - #define IMX_SCU_SOC_DRIVER_NAME "imx-scu-soc" 14 - 15 - static struct imx_sc_ipc *soc_ipc_handle; 13 + static struct imx_sc_ipc *imx_sc_soc_ipc_handle; 16 14 17 15 struct imx_sc_msg_misc_get_soc_id { 18 16 struct imx_sc_rpc_msg hdr; ··· 42 44 hdr->func = IMX_SC_MISC_FUNC_UNIQUE_ID; 43 45 hdr->size = 1; 44 46 45 - ret = imx_scu_call_rpc(soc_ipc_handle, &msg, true); 47 + ret = imx_scu_call_rpc(imx_sc_soc_ipc_handle, &msg, true); 46 48 if (ret) { 47 49 pr_err("%s: get soc uid failed, ret %d\n", __func__, ret); 48 50 return ret; ··· 69 71 msg.data.req.control = IMX_SC_C_ID; 70 72 msg.data.req.resource = IMX_SC_R_SYSTEM; 71 73 72 - ret = imx_scu_call_rpc(soc_ipc_handle, &msg, true); 74 + ret = imx_scu_call_rpc(imx_sc_soc_ipc_handle, &msg, true); 73 75 if (ret) { 74 76 pr_err("%s: get soc info failed, ret %d\n", __func__, ret); 75 77 return ret; ··· 78 80 return msg.data.resp.id; 79 81 } 80 82 81 - static int imx_scu_soc_probe(struct platform_device *pdev) 83 + int imx_scu_soc_init(struct device *dev) 82 84 { 83 85 struct soc_device_attribute *soc_dev_attr; 84 86 struct soc_device *soc_dev; ··· 86 88 u64 uid = 0; 87 89 u32 val; 88 90 89 - ret = imx_scu_get_handle(&soc_ipc_handle); 91 + ret = imx_scu_get_handle(&imx_sc_soc_ipc_handle); 90 92 if (ret) 91 93 return ret; 92 94 93 - soc_dev_attr = devm_kzalloc(&pdev->dev, sizeof(*soc_dev_attr), 95 + soc_dev_attr = devm_kzalloc(dev, sizeof(*soc_dev_attr), 94 96 GFP_KERNEL); 95 97 if (!soc_dev_attr) 96 98 return -ENOMEM; ··· 113 115 114 116 /* format soc_id value passed from SCU firmware */ 115 117 val = id & 0x1f; 116 - soc_dev_attr->soc_id = kasprintf(GFP_KERNEL, "0x%x", val); 118 + soc_dev_attr->soc_id = devm_kasprintf(dev, GFP_KERNEL, "0x%x", val); 117 119 if (!soc_dev_attr->soc_id) 118 120 return -ENOMEM; 119 121 120 122 /* format revision value passed from SCU firmware */ 121 123 val = (id >> 5) & 0xf; 122 124 val = (((val >> 2) + 1) << 4) 
| (val & 0x3); 123 - soc_dev_attr->revision = kasprintf(GFP_KERNEL, 124 - "%d.%d", 125 - (val >> 4) & 0xf, 126 - val & 0xf); 127 - if (!soc_dev_attr->revision) { 128 - ret = -ENOMEM; 129 - goto free_soc_id; 130 - } 125 + soc_dev_attr->revision = devm_kasprintf(dev, GFP_KERNEL, "%d.%d", 126 + (val >> 4) & 0xf, val & 0xf); 127 + if (!soc_dev_attr->revision) 128 + return -ENOMEM; 131 129 132 - soc_dev_attr->serial_number = kasprintf(GFP_KERNEL, "%016llX", uid); 133 - if (!soc_dev_attr->serial_number) { 134 - ret = -ENOMEM; 135 - goto free_revision; 136 - } 130 + soc_dev_attr->serial_number = devm_kasprintf(dev, GFP_KERNEL, 131 + "%016llX", uid); 132 + if (!soc_dev_attr->serial_number) 133 + return -ENOMEM; 137 134 138 135 soc_dev = soc_device_register(soc_dev_attr); 139 - if (IS_ERR(soc_dev)) { 140 - ret = PTR_ERR(soc_dev); 141 - goto free_serial_number; 142 - } 136 + if (IS_ERR(soc_dev)) 137 + return PTR_ERR(soc_dev); 143 138 144 139 return 0; 145 - 146 - free_serial_number: 147 - kfree(soc_dev_attr->serial_number); 148 - free_revision: 149 - kfree(soc_dev_attr->revision); 150 - free_soc_id: 151 - kfree(soc_dev_attr->soc_id); 152 - return ret; 153 140 } 154 - 155 - static struct platform_driver imx_scu_soc_driver = { 156 - .driver = { 157 - .name = IMX_SCU_SOC_DRIVER_NAME, 158 - }, 159 - .probe = imx_scu_soc_probe, 160 - }; 161 - 162 - static int __init imx_scu_soc_init(void) 163 - { 164 - struct platform_device *pdev; 165 - struct device_node *np; 166 - int ret; 167 - 168 - np = of_find_compatible_node(NULL, NULL, "fsl,imx-scu"); 169 - if (!np) 170 - return -ENODEV; 171 - 172 - of_node_put(np); 173 - 174 - ret = platform_driver_register(&imx_scu_soc_driver); 175 - if (ret) 176 - return ret; 177 - 178 - pdev = platform_device_register_simple(IMX_SCU_SOC_DRIVER_NAME, 179 - -1, NULL, 0); 180 - if (IS_ERR(pdev)) 181 - platform_driver_unregister(&imx_scu_soc_driver); 182 - 183 - return PTR_ERR_OR_ZERO(pdev); 184 - } 185 - device_initcall(imx_scu_soc_init);
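The revision formatting kept through the soc-imx-scu.c move is a small piece of arithmetic: the SCU returns a 4-bit field at bits 8:5 of the id word, and `(((val >> 2) + 1) << 4) | (val & 0x3)` expands it into packed major/minor nibbles. A sketch of that decode:

```c
#include <stdint.h>

/* Decode the 4-bit revision field of the SCU id word into major/minor,
 * mirroring the arithmetic in the probe path above. */
static void imx_scu_revision(uint32_t id, unsigned int *major,
			     unsigned int *minor)
{
	uint32_t val = (id >> 5) & 0xf;

	val = (((val >> 2) + 1) << 4) | (val & 0x3);
	*major = (val >> 4) & 0xf;
	*minor = val & 0xf;
}
```

So a raw field of 0 decodes as revision 1.0, and 0b0101 as 2.1: the top two bits count major revisions starting from 1, the bottom two are the minor number.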
+40 -6
drivers/soc/mediatek/mtk-cmdq-helper.c
··· 12 12 #define CMDQ_WRITE_ENABLE_MASK BIT(0) 13 13 #define CMDQ_POLL_ENABLE_MASK BIT(0) 14 14 #define CMDQ_EOC_IRQ_EN BIT(0) 15 + #define CMDQ_REG_TYPE 1 15 16 16 17 struct cmdq_instruction { 17 18 union { ··· 22 21 union { 23 22 u16 offset; 24 23 u16 event; 24 + u16 reg_dst; 25 25 }; 26 - u8 subsys; 26 + union { 27 + u8 subsys; 28 + struct { 29 + u8 sop:5; 30 + u8 arg_c_t:1; 31 + u8 src_t:1; 32 + u8 dst_t:1; 33 + }; 34 + }; 27 35 u8 op; 28 36 }; 29 37 ··· 253 243 } 254 244 EXPORT_SYMBOL(cmdq_pkt_clear_event); 255 245 246 + int cmdq_pkt_set_event(struct cmdq_pkt *pkt, u16 event) 247 + { 248 + struct cmdq_instruction inst = {}; 249 + 250 + if (event >= CMDQ_MAX_EVENT) 251 + return -EINVAL; 252 + 253 + inst.op = CMDQ_CODE_WFE; 254 + inst.value = CMDQ_WFE_UPDATE | CMDQ_WFE_UPDATE_VALUE; 255 + inst.event = event; 256 + 257 + return cmdq_pkt_append_command(pkt, inst); 258 + } 259 + EXPORT_SYMBOL(cmdq_pkt_set_event); 260 + 256 261 int cmdq_pkt_poll(struct cmdq_pkt *pkt, u8 subsys, 257 262 u16 offset, u32 value) 258 263 { ··· 303 278 } 304 279 EXPORT_SYMBOL(cmdq_pkt_poll_mask); 305 280 306 - static int cmdq_pkt_finalize(struct cmdq_pkt *pkt) 281 + int cmdq_pkt_assign(struct cmdq_pkt *pkt, u16 reg_idx, u32 value) 282 + { 283 + struct cmdq_instruction inst = {}; 284 + 285 + inst.op = CMDQ_CODE_LOGIC; 286 + inst.dst_t = CMDQ_REG_TYPE; 287 + inst.reg_dst = reg_idx; 288 + inst.value = value; 289 + return cmdq_pkt_append_command(pkt, inst); 290 + } 291 + EXPORT_SYMBOL(cmdq_pkt_assign); 292 + 293 + int cmdq_pkt_finalize(struct cmdq_pkt *pkt) 307 294 { 308 295 struct cmdq_instruction inst = { {0} }; 309 296 int err; ··· 334 297 335 298 return err; 336 299 } 300 + EXPORT_SYMBOL(cmdq_pkt_finalize); 337 301 338 302 static void cmdq_pkt_flush_async_cb(struct cmdq_cb_data data) 339 303 { ··· 368 330 int err; 369 331 unsigned long flags = 0; 370 332 struct cmdq_client *client = (struct cmdq_client *)pkt->cl; 371 - 372 - err = cmdq_pkt_finalize(pkt); 373 - if (err < 0) 374 - return 
err; 375 333 376 334 pkt->cb.cb = cb; 377 335 pkt->cb.data = data;
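`struct cmdq_instruction` above overlays a 64-bit GCE instruction with unions of bitfields, relying on the compiler's little-endian bitfield layout; the new `dst_t`/`sop` fields just carve up the old `subsys` byte. As an assumption-flagged sketch, the same word can be packed with explicit shifts instead (op in the top byte, the subsys/type byte below it, then the 16-bit offset and the 32-bit value):

```c
#include <stdint.h>

/* Pack one cmdq instruction into a 64-bit word with explicit shifts,
 * following the field order of struct cmdq_instruction; the driver
 * gets the same layout implicitly from little-endian bitfields. */
static uint64_t cmdq_pack(uint8_t op, uint8_t subsys, uint16_t offset,
			  uint32_t value)
{
	return ((uint64_t)op << 56) | ((uint64_t)subsys << 48) |
	       ((uint64_t)offset << 32) | value;
}
```

Explicit shifts are portable across compilers and endianness, which is the usual trade-off against the more readable bitfield form the driver uses.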
+1 -1
drivers/soc/qcom/Kconfig
··· 89 89 90 90 config QCOM_RPMH 91 91 bool "Qualcomm RPM-Hardened (RPMH) Communication" 92 - depends on ARCH_QCOM && ARM64 || COMPILE_TEST 92 + depends on ARCH_QCOM || COMPILE_TEST 93 93 help 94 94 Support for communication with the hardened-RPM blocks in 95 95 Qualcomm Technologies Inc (QTI) SoCs. RPMH communication uses an
+3 -1
drivers/soc/qcom/pdr_interface.c
··· 278 278 279 279 list_for_each_entry_safe(ind, tmp, &pdr->indack_list, node) { 280 280 pds = ind->pds; 281 - pdr_send_indack_msg(pdr, pds, ind->transaction_id); 282 281 283 282 mutex_lock(&pdr->status_lock); 284 283 pds->state = ind->curr_state; 285 284 pdr->status(pds->state, pds->service_path, pdr->priv); 286 285 mutex_unlock(&pdr->status_lock); 286 + 287 + /* Ack the indication after clients release the PD resources */ 288 + pdr_send_indack_msg(pdr, pds, ind->transaction_id); 287 289 288 290 mutex_lock(&pdr->list_lock); 289 291 list_del(&ind->node);
+165
drivers/soc/qcom/qcom-geni-se.c
··· 3 3 4 4 #include <linux/acpi.h> 5 5 #include <linux/clk.h> 6 + #include <linux/console.h> 6 7 #include <linux/slab.h> 7 8 #include <linux/dma-mapping.h> 8 9 #include <linux/io.h> ··· 91 90 struct device *dev; 92 91 void __iomem *base; 93 92 struct clk_bulk_data ahb_clks[NUM_AHB_CLKS]; 93 + struct geni_icc_path to_core; 94 94 }; 95 + 96 + static const char * const icc_path_names[] = {"qup-core", "qup-config", 97 + "qup-memory"}; 98 + 99 + static struct geni_wrapper *earlycon_wrapper; 95 100 96 101 #define QUP_HW_VER_REG 0x4 97 102 ··· 727 720 } 728 721 EXPORT_SYMBOL(geni_se_rx_dma_unprep); 729 722 723 + int geni_icc_get(struct geni_se *se, const char *icc_ddr) 724 + { 725 + int i, err; 726 + const char *icc_names[] = {"qup-core", "qup-config", icc_ddr}; 727 + 728 + for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) { 729 + if (!icc_names[i]) 730 + continue; 731 + 732 + se->icc_paths[i].path = devm_of_icc_get(se->dev, icc_names[i]); 733 + if (IS_ERR(se->icc_paths[i].path)) 734 + goto err; 735 + } 736 + 737 + return 0; 738 + 739 + err: 740 + err = PTR_ERR(se->icc_paths[i].path); 741 + if (err != -EPROBE_DEFER) 742 + dev_err_ratelimited(se->dev, "Failed to get ICC path '%s': %d\n", 743 + icc_names[i], err); 744 + return err; 745 + 746 + } 747 + EXPORT_SYMBOL(geni_icc_get); 748 + 749 + int geni_icc_set_bw(struct geni_se *se) 750 + { 751 + int i, ret; 752 + 753 + for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) { 754 + ret = icc_set_bw(se->icc_paths[i].path, 755 + se->icc_paths[i].avg_bw, se->icc_paths[i].avg_bw); 756 + if (ret) { 757 + dev_err_ratelimited(se->dev, "ICC BW voting failed on path '%s': %d\n", 758 + icc_path_names[i], ret); 759 + return ret; 760 + } 761 + } 762 + 763 + return 0; 764 + } 765 + EXPORT_SYMBOL(geni_icc_set_bw); 766 + 767 + void geni_icc_set_tag(struct geni_se *se, u32 tag) 768 + { 769 + int i; 770 + 771 + for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) 772 + icc_set_tag(se->icc_paths[i].path, tag); 773 + } 774 + EXPORT_SYMBOL(geni_icc_set_tag); 
775 + 776 + /* To do: Replace this by icc_bulk_enable once it's implemented in ICC core */ 777 + int geni_icc_enable(struct geni_se *se) 778 + { 779 + int i, ret; 780 + 781 + for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) { 782 + ret = icc_enable(se->icc_paths[i].path); 783 + if (ret) { 784 + dev_err_ratelimited(se->dev, "ICC enable failed on path '%s': %d\n", 785 + icc_path_names[i], ret); 786 + return ret; 787 + } 788 + } 789 + 790 + return 0; 791 + } 792 + EXPORT_SYMBOL(geni_icc_enable); 793 + 794 + int geni_icc_disable(struct geni_se *se) 795 + { 796 + int i, ret; 797 + 798 + for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) { 799 + ret = icc_disable(se->icc_paths[i].path); 800 + if (ret) { 801 + dev_err_ratelimited(se->dev, "ICC disable failed on path '%s': %d\n", 802 + icc_path_names[i], ret); 803 + return ret; 804 + } 805 + } 806 + 807 + return 0; 808 + } 809 + EXPORT_SYMBOL(geni_icc_disable); 810 + 811 + void geni_remove_earlycon_icc_vote(void) 812 + { 813 + struct platform_device *pdev; 814 + struct geni_wrapper *wrapper; 815 + struct device_node *parent; 816 + struct device_node *child; 817 + 818 + if (!earlycon_wrapper) 819 + return; 820 + 821 + wrapper = earlycon_wrapper; 822 + parent = of_get_next_parent(wrapper->dev->of_node); 823 + for_each_child_of_node(parent, child) { 824 + if (!of_device_is_compatible(child, "qcom,geni-se-qup")) 825 + continue; 826 + 827 + pdev = of_find_device_by_node(child); 828 + if (!pdev) 829 + continue; 830 + 831 + wrapper = platform_get_drvdata(pdev); 832 + icc_put(wrapper->to_core.path); 833 + wrapper->to_core.path = NULL; 834 + 835 + } 836 + of_node_put(parent); 837 + 838 + earlycon_wrapper = NULL; 839 + } 840 + EXPORT_SYMBOL(geni_remove_earlycon_icc_vote); 841 + 730 842 static int geni_se_probe(struct platform_device *pdev) 731 843 { 732 844 struct device *dev = &pdev->dev; 733 845 struct resource *res; 734 846 struct geni_wrapper *wrapper; 847 + struct console __maybe_unused *bcon; 848 + bool __maybe_unused has_earlycon 
= false; 735 849 int ret; 736 850 737 851 wrapper = devm_kzalloc(dev, sizeof(*wrapper), GFP_KERNEL); ··· 875 747 } 876 748 } 877 749 750 + #ifdef CONFIG_SERIAL_EARLYCON 751 + for_each_console(bcon) { 752 + if (!strcmp(bcon->name, "qcom_geni")) { 753 + has_earlycon = true; 754 + break; 755 + } 756 + } 757 + if (!has_earlycon) 758 + goto exit; 759 + 760 + wrapper->to_core.path = devm_of_icc_get(dev, "qup-core"); 761 + if (IS_ERR(wrapper->to_core.path)) 762 + return PTR_ERR(wrapper->to_core.path); 763 + /* 764 + * Put a minimal BW request on the core clocks on behalf of the early 765 + * console. The vote will be removed in the earlycon exit function. 766 + * 767 + * Note: We put the vote on each QUP wrapper instead of only the one 768 + * the earlycon is connected to, because the QUP core clocks of 769 + * different wrappers share the same voltage domain. If core1 is put 770 + * to 0, then core2 will also run at 0, if not voted. The default ICC 771 + * vote will be removed as soon as we touch any of the core clocks. 772 + * core1 = core2 = max(core1, core2) 773 + */ 774 + ret = icc_set_bw(wrapper->to_core.path, GENI_DEFAULT_BW, 775 + GENI_DEFAULT_BW); 776 + if (ret) { 777 + dev_err(&pdev->dev, "%s: ICC BW voting failed for core: %d\n", 778 + __func__, ret); 779 + return ret; 780 + } 781 + 782 + if (of_get_compatible_child(pdev->dev.of_node, "qcom,geni-debug-uart")) 783 + earlycon_wrapper = wrapper; 784 + of_node_put(pdev->dev.of_node); 785 + exit: 786 + #endif 878 787 dev_set_drvdata(dev, wrapper); 879 788 dev_dbg(dev, "GENI SE Driver probed\n"); 880 789 return devm_of_platform_populate(dev);
+14 -5
drivers/soc/qcom/rpmh-rsc.c
··· 175 175 static void write_tcs_reg_sync(const struct rsc_drv *drv, int reg, int tcs_id, 176 176 u32 data) 177 177 { 178 - u32 new_data; 178 + int i; 179 179 180 180 writel(data, tcs_reg_addr(drv, reg, tcs_id)); 181 - if (readl_poll_timeout_atomic(tcs_reg_addr(drv, reg, tcs_id), new_data, 182 - new_data == data, 1, USEC_PER_SEC)) 183 - pr_err("%s: error writing %#x to %d:%#x\n", drv->name, 184 - data, tcs_id, reg); 181 + 182 + /* 183 + * Wait until we read back the same value. Use a counter rather than 184 + * ktime for timeout since this may be called after timekeeping stops. 185 + */ 186 + for (i = 0; i < USEC_PER_SEC; i++) { 187 + if (readl(tcs_reg_addr(drv, reg, tcs_id)) == data) 188 + return; 189 + udelay(1); 190 + } 191 + pr_err("%s: error writing %#x to %d:%#x\n", drv->name, 192 + data, tcs_id, reg); 185 193 } 186 194 187 195 /** ··· 1031 1023 .driver = { 1032 1024 .name = "rpmh", 1033 1025 .of_match_table = rpmh_drv_match, 1026 + .suppress_bind_attrs = true, 1034 1027 }, 1035 1028 }; 1036 1029
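The rpmh-rsc hunk above replaces a `readl_poll_timeout_atomic()` call with a plain counter-plus-`udelay(1)` loop, because `write_tcs_reg_sync()` can be called after timekeeping has been suspended and the ktime-based poll helper would then misbehave. The shape of that change can be sketched in plain userspace C (the "register" and helper names here are invented for illustration, not kernel API):

```c
/* Simulated MMIO register; fake_writel/fake_readl stand in for writel/readl. */
static volatile unsigned int fake_reg;

static void fake_writel(unsigned int v) { fake_reg = v; }
static unsigned int fake_readl(void)   { return fake_reg; }

/*
 * Write-then-verify bounded by an iteration count rather than a clock,
 * mirroring the new write_tcs_reg_sync(): the loop is guaranteed to
 * terminate even when timekeeping is unavailable.
 * Returns 0 on success, -1 if the read-back never matched.
 */
static int write_sync_bounded(unsigned int data, unsigned int max_iters)
{
	unsigned int i;

	fake_writel(data);
	for (i = 0; i < max_iters; i++) {
		if (fake_readl() == data)
			return 0;
		/* the kernel loop does udelay(1) here */
	}
	return -1;
}
```

With `USEC_PER_SEC` iterations of `udelay(1)` the kernel version keeps roughly the same one-second budget the old ktime-based poll had.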
+1 -3
drivers/soc/qcom/rpmh.c
··· 497 497 * 498 498 * Invalidate the sleep and wake values in batch_cache. 499 499 */ 500 - int rpmh_invalidate(const struct device *dev) 500 + void rpmh_invalidate(const struct device *dev) 501 501 { 502 502 struct rpmh_ctrlr *ctrlr = get_rpmh_ctrlr(dev); 503 503 struct batch_cache_req *req, *tmp; ··· 509 509 INIT_LIST_HEAD(&ctrlr->batch_cache); 510 510 ctrlr->dirty = true; 511 511 spin_unlock_irqrestore(&ctrlr->cache_lock, flags); 512 - 513 - return 0; 514 512 } 515 513 EXPORT_SYMBOL(rpmh_invalidate);
+5
drivers/soc/qcom/smd-rpm.c
··· 20 20 * struct qcom_smd_rpm - state of the rpm device driver 21 21 * @rpm_channel: reference to the smd channel 22 22 * @icc: interconnect proxy device 23 + * @dev: rpm device 23 24 * @ack: completion for acks 24 25 * @lock: mutual exclusion around the send/complete pair 25 26 * @ack_status: result of the rpm request ··· 87 86 /** 88 87 * qcom_rpm_smd_write - write @buf to @type:@id 89 88 * @rpm: rpm handle 89 + * @state: active/sleep state flags 90 90 * @type: resource type 91 91 * @id: resource identifier 92 92 * @buf: the data to be written ··· 232 230 233 231 static const struct of_device_id qcom_smd_rpm_of_match[] = { 234 232 { .compatible = "qcom,rpm-apq8084" }, 233 + { .compatible = "qcom,rpm-ipq6018" }, 235 234 { .compatible = "qcom,rpm-msm8916" }, 235 + { .compatible = "qcom,rpm-msm8936" }, 236 236 { .compatible = "qcom,rpm-msm8974" }, 237 237 { .compatible = "qcom,rpm-msm8976" }, 238 + { .compatible = "qcom,rpm-msm8994" }, 238 239 { .compatible = "qcom,rpm-msm8996" }, 239 240 { .compatible = "qcom,rpm-msm8998" }, 240 241 { .compatible = "qcom,rpm-sdm660" },
+64 -1
drivers/soc/qcom/socinfo.c
··· 24 24 #define SOCINFO_VERSION(maj, min) ((((maj) & 0xffff) << 16)|((min) & 0xffff)) 25 25 26 26 #define SMEM_SOCINFO_BUILD_ID_LENGTH 32 27 + #define SMEM_SOCINFO_CHIP_ID_LENGTH 32 27 28 28 29 /* 29 30 * SMEM item id, used to acquire handles to respective ··· 122 121 __le32 chip_family; 123 122 __le32 raw_device_family; 124 123 __le32 raw_device_num; 124 + /* Version 13 */ 125 + __le32 nproduct_id; 126 + char chip_id[SMEM_SOCINFO_CHIP_ID_LENGTH]; 127 + /* Version 14 */ 128 + __le32 num_clusters; 129 + __le32 ncluster_array_offset; 130 + __le32 num_defective_parts; 131 + __le32 ndefective_parts_array_offset; 132 + /* Version 15 */ 133 + __le32 nmodem_supported; 125 134 }; 126 135 127 136 #ifdef CONFIG_DEBUG_FS ··· 146 135 u32 raw_ver; 147 136 u32 hw_plat; 148 137 u32 fmt; 138 + u32 nproduct_id; 139 + u32 num_clusters; 140 + u32 ncluster_array_offset; 141 + u32 num_defective_parts; 142 + u32 ndefective_parts_array_offset; 143 + u32 nmodem_supported; 149 144 }; 150 145 151 146 struct smem_image_version { ··· 219 202 { 310, "MSM8996AU" }, 220 203 { 311, "APQ8096AU" }, 221 204 { 312, "APQ8096SG" }, 205 + { 318, "SDM630" }, 222 206 { 321, "SDM845" }, 223 207 { 341, "SDA845" }, 208 + { 356, "SM8250" }, 224 209 }; 225 210 226 211 static const char *socinfo_machine(struct device *dev, unsigned int id) ··· 275 256 if (model < 0) 276 257 return -EINVAL; 277 258 278 - seq_printf(seq, "%s\n", pmic_models[model]); 259 + if (model <= ARRAY_SIZE(pmic_models) && pmic_models[model]) 260 + seq_printf(seq, "%s\n", pmic_models[model]); 261 + else 262 + seq_printf(seq, "unknown (%d)\n", model); 279 263 280 264 return 0; 281 265 } ··· 294 272 return 0; 295 273 } 296 274 275 + static int qcom_show_chip_id(struct seq_file *seq, void *p) 276 + { 277 + struct socinfo *socinfo = seq->private; 278 + 279 + seq_printf(seq, "%s\n", socinfo->chip_id); 280 + 281 + return 0; 282 + } 283 + 297 284 QCOM_OPEN(build_id, qcom_show_build_id); 298 285 QCOM_OPEN(pmic_model, qcom_show_pmic_model); 299 286 
QCOM_OPEN(pmic_die_rev, qcom_show_pmic_die_revision); 287 + QCOM_OPEN(chip_id, qcom_show_chip_id); 300 288 301 289 #define DEFINE_IMAGE_OPS(type) \ 302 290 static int show_image_##type(struct seq_file *seq, void *p) \ ··· 344 312 345 313 qcom_socinfo->info.fmt = __le32_to_cpu(info->fmt); 346 314 315 + debugfs_create_x32("info_fmt", 0400, qcom_socinfo->dbg_root, 316 + &qcom_socinfo->info.fmt); 317 + 347 318 switch (qcom_socinfo->info.fmt) { 319 + case SOCINFO_VERSION(0, 15): 320 + qcom_socinfo->info.nmodem_supported = __le32_to_cpu(info->nmodem_supported); 321 + 322 + debugfs_create_u32("nmodem_supported", 0400, qcom_socinfo->dbg_root, 323 + &qcom_socinfo->info.nmodem_supported); 324 + /* Fall through */ 325 + case SOCINFO_VERSION(0, 14): 326 + qcom_socinfo->info.num_clusters = __le32_to_cpu(info->num_clusters); 327 + qcom_socinfo->info.ncluster_array_offset = __le32_to_cpu(info->ncluster_array_offset); 328 + qcom_socinfo->info.num_defective_parts = __le32_to_cpu(info->num_defective_parts); 329 + qcom_socinfo->info.ndefective_parts_array_offset = __le32_to_cpu(info->ndefective_parts_array_offset); 330 + 331 + debugfs_create_u32("num_clusters", 0400, qcom_socinfo->dbg_root, 332 + &qcom_socinfo->info.num_clusters); 333 + debugfs_create_u32("ncluster_array_offset", 0400, qcom_socinfo->dbg_root, 334 + &qcom_socinfo->info.ncluster_array_offset); 335 + debugfs_create_u32("num_defective_parts", 0400, qcom_socinfo->dbg_root, 336 + &qcom_socinfo->info.num_defective_parts); 337 + debugfs_create_u32("ndefective_parts_array_offset", 0400, qcom_socinfo->dbg_root, 338 + &qcom_socinfo->info.ndefective_parts_array_offset); 339 + /* Fall through */ 340 + case SOCINFO_VERSION(0, 13): 341 + qcom_socinfo->info.nproduct_id = __le32_to_cpu(info->nproduct_id); 342 + 343 + debugfs_create_u32("nproduct_id", 0400, qcom_socinfo->dbg_root, 344 + &qcom_socinfo->info.nproduct_id); 345 + DEBUGFS_ADD(info, chip_id); 346 + /* Fall through */ 348 347 case SOCINFO_VERSION(0, 12): 349 348 
qcom_socinfo->info.chip_family = 350 349 __le32_to_cpu(info->chip_family);
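The socinfo hunk parses a versioned SMEM record with a newest-first `switch` whose cases deliberately fall through: version 15 handling falls into version 14, which falls into 13, and so on, so each case only touches the fields its version added. A minimal standalone sketch of that idiom (the struct and field names here are hypothetical, not the real socinfo layout):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical versioned record: newer versions append fields at the end. */
struct info_rec {
	uint32_t version;	/* 1, 2 or 3 */
	uint32_t base;		/* valid from v1 */
	uint32_t extra;		/* valid from v2 */
	uint32_t newest;	/* valid from v3 */
};

struct parsed {
	uint32_t base, extra, newest;
};

/*
 * Newest-first switch with deliberate fall-through: each case copies
 * only the fields its version introduced, then falls into the older
 * cases, exactly the shape of the socinfo version handling.
 */
static void parse_info(const struct info_rec *in, struct parsed *out)
{
	memset(out, 0, sizeof(*out));
	switch (in->version) {
	case 3:
		out->newest = in->newest;
		/* fall through */
	case 2:
		out->extra = in->extra;
		/* fall through */
	case 1:
		out->base = in->base;
		break;
	}
}
```

A v2 record thus populates `base` and `extra` but leaves `newest` zeroed, which is why the kernel code never has to repeat the older-field copies in each newer case.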
+11
drivers/soc/renesas/Kconfig
··· 201 201 help 202 202 This enables support for the Renesas RZ/G2E SoC. 203 203 204 + config ARCH_R8A774E1 205 + bool "Renesas RZ/G2H SoC Platform" 206 + select ARCH_RCAR_GEN3 207 + select SYSC_R8A774E1 208 + help 209 + This enables support for the Renesas RZ/G2H SoC. 210 + 204 211 config ARCH_R8A77950 205 212 bool "Renesas R-Car H3 ES1.x SoC Platform" 206 213 select ARCH_RCAR_GEN3 ··· 301 294 302 295 config SYSC_R8A774C0 303 296 bool "RZ/G2E System Controller support" if COMPILE_TEST 297 + select SYSC_RCAR 298 + 299 + config SYSC_R8A774E1 300 + bool "RZ/G2H System Controller support" if COMPILE_TEST 304 301 select SYSC_RCAR 305 302 306 303 config SYSC_R8A7779
+1
drivers/soc/renesas/Makefile
··· 10 10 obj-$(CONFIG_SYSC_R8A774A1) += r8a774a1-sysc.o 11 11 obj-$(CONFIG_SYSC_R8A774B1) += r8a774b1-sysc.o 12 12 obj-$(CONFIG_SYSC_R8A774C0) += r8a774c0-sysc.o 13 + obj-$(CONFIG_SYSC_R8A774E1) += r8a774e1-sysc.o 13 14 obj-$(CONFIG_SYSC_R8A7779) += r8a7779-sysc.o 14 15 obj-$(CONFIG_SYSC_R8A7790) += r8a7790-sysc.o 15 16 obj-$(CONFIG_SYSC_R8A7791) += r8a7791-sysc.o
+43
drivers/soc/renesas/r8a774e1-sysc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Renesas RZ/G2H System Controller 4 + * Copyright (C) 2020 Renesas Electronics Corp. 5 + * 6 + * Based on Renesas R-Car H3 System Controller 7 + * Copyright (C) 2016-2017 Glider bvba 8 + */ 9 + 10 + #include <linux/kernel.h> 11 + 12 + #include <dt-bindings/power/r8a774e1-sysc.h> 13 + 14 + #include "rcar-sysc.h" 15 + 16 + static const struct rcar_sysc_area r8a774e1_areas[] __initconst = { 17 + { "always-on", 0, 0, R8A774E1_PD_ALWAYS_ON, -1, PD_ALWAYS_ON }, 18 + { "ca57-scu", 0x1c0, 0, R8A774E1_PD_CA57_SCU, R8A774E1_PD_ALWAYS_ON, PD_SCU }, 19 + { "ca57-cpu0", 0x80, 0, R8A774E1_PD_CA57_CPU0, R8A774E1_PD_CA57_SCU, PD_CPU_NOCR }, 20 + { "ca57-cpu1", 0x80, 1, R8A774E1_PD_CA57_CPU1, R8A774E1_PD_CA57_SCU, PD_CPU_NOCR }, 21 + { "ca57-cpu2", 0x80, 2, R8A774E1_PD_CA57_CPU2, R8A774E1_PD_CA57_SCU, PD_CPU_NOCR }, 22 + { "ca57-cpu3", 0x80, 3, R8A774E1_PD_CA57_CPU3, R8A774E1_PD_CA57_SCU, PD_CPU_NOCR }, 23 + { "ca53-scu", 0x140, 0, R8A774E1_PD_CA53_SCU, R8A774E1_PD_ALWAYS_ON, PD_SCU }, 24 + { "ca53-cpu0", 0x200, 0, R8A774E1_PD_CA53_CPU0, R8A774E1_PD_CA53_SCU, PD_CPU_NOCR }, 25 + { "ca53-cpu1", 0x200, 1, R8A774E1_PD_CA53_CPU1, R8A774E1_PD_CA53_SCU, PD_CPU_NOCR }, 26 + { "ca53-cpu2", 0x200, 2, R8A774E1_PD_CA53_CPU2, R8A774E1_PD_CA53_SCU, PD_CPU_NOCR }, 27 + { "ca53-cpu3", 0x200, 3, R8A774E1_PD_CA53_CPU3, R8A774E1_PD_CA53_SCU, PD_CPU_NOCR }, 28 + { "a3vp", 0x340, 0, R8A774E1_PD_A3VP, R8A774E1_PD_ALWAYS_ON }, 29 + { "a3vc", 0x380, 0, R8A774E1_PD_A3VC, R8A774E1_PD_ALWAYS_ON }, 30 + { "a2vc1", 0x3c0, 1, R8A774E1_PD_A2VC1, R8A774E1_PD_A3VC }, 31 + { "3dg-a", 0x100, 0, R8A774E1_PD_3DG_A, R8A774E1_PD_ALWAYS_ON }, 32 + { "3dg-b", 0x100, 1, R8A774E1_PD_3DG_B, R8A774E1_PD_3DG_A }, 33 + { "3dg-c", 0x100, 2, R8A774E1_PD_3DG_C, R8A774E1_PD_3DG_B }, 34 + { "3dg-d", 0x100, 3, R8A774E1_PD_3DG_D, R8A774E1_PD_3DG_C }, 35 + { "3dg-e", 0x100, 4, R8A774E1_PD_3DG_E, R8A774E1_PD_3DG_D }, 36 + }; 37 + 38 + const struct rcar_sysc_info r8a774e1_sysc_info 
__initconst = { 39 + .areas = r8a774e1_areas, 40 + .num_areas = ARRAY_SIZE(r8a774e1_areas), 41 + .extmask_offs = 0x2f8, 42 + .extmask_val = BIT(0), 43 + };
+1
drivers/soc/renesas/rcar-rst.c
··· 48 48 { .compatible = "renesas,r8a774a1-rst", .data = &rcar_rst_gen3 }, 49 49 { .compatible = "renesas,r8a774b1-rst", .data = &rcar_rst_gen3 }, 50 50 { .compatible = "renesas,r8a774c0-rst", .data = &rcar_rst_gen3 }, 51 + { .compatible = "renesas,r8a774e1-rst", .data = &rcar_rst_gen3 }, 51 52 /* R-Car Gen1 */ 52 53 { .compatible = "renesas,r8a7778-reset-wdt", .data = &rcar_rst_gen1 }, 53 54 { .compatible = "renesas,r8a7779-reset-wdt", .data = &rcar_rst_gen1 },
+3
drivers/soc/renesas/rcar-sysc.c
··· 296 296 #ifdef CONFIG_SYSC_R8A774C0 297 297 { .compatible = "renesas,r8a774c0-sysc", .data = &r8a774c0_sysc_info }, 298 298 #endif 299 + #ifdef CONFIG_SYSC_R8A774E1 300 + { .compatible = "renesas,r8a774e1-sysc", .data = &r8a774e1_sysc_info }, 301 + #endif 299 302 #ifdef CONFIG_SYSC_R8A7779 300 303 { .compatible = "renesas,r8a7779-sysc", .data = &r8a7779_sysc_info }, 301 304 #endif
+1
drivers/soc/renesas/rcar-sysc.h
··· 56 56 extern const struct rcar_sysc_info r8a774a1_sysc_info; 57 57 extern const struct rcar_sysc_info r8a774b1_sysc_info; 58 58 extern const struct rcar_sysc_info r8a774c0_sysc_info; 59 + extern const struct rcar_sysc_info r8a774e1_sysc_info; 59 60 extern const struct rcar_sysc_info r8a7779_sysc_info; 60 61 extern const struct rcar_sysc_info r8a7790_sysc_info; 61 62 extern const struct rcar_sysc_info r8a7791_sysc_info;
+8
drivers/soc/renesas/renesas-soc.c
··· 126 126 .id = 0x57, 127 127 }; 128 128 129 + static const struct renesas_soc soc_rz_g2h __initconst __maybe_unused = { 130 + .family = &fam_rzg2, 131 + .id = 0x4f, 132 + }; 133 + 129 134 static const struct renesas_soc soc_rcar_m1a __initconst __maybe_unused = { 130 135 .family = &fam_rcar_gen1, 131 136 }; ··· 242 237 #endif 243 238 #ifdef CONFIG_ARCH_R8A774C0 244 239 { .compatible = "renesas,r8a774c0", .data = &soc_rz_g2e }, 240 + #endif 241 + #ifdef CONFIG_ARCH_R8A774E1 242 + { .compatible = "renesas,r8a774e1", .data = &soc_rz_g2h }, 245 243 #endif 246 244 #ifdef CONFIG_ARCH_R8A7778 247 245 { .compatible = "renesas,r8a7778", .data = &soc_rcar_m1a },
+3
drivers/soc/samsung/Kconfig
··· 37 37 bool "Exynos PM domains" if COMPILE_TEST 38 38 depends on PM_GENERIC_DOMAINS || COMPILE_TEST 39 39 40 + config EXYNOS_REGULATOR_COUPLER 41 + bool "Exynos SoC Regulator Coupler" if COMPILE_TEST 42 + depends on ARCH_EXYNOS || COMPILE_TEST 40 43 endif
+1
drivers/soc/samsung/Makefile
··· 9 9 obj-$(CONFIG_EXYNOS_PMU_ARM_DRIVERS) += exynos3250-pmu.o exynos4-pmu.o \ 10 10 exynos5250-pmu.o exynos5420-pmu.o 11 11 obj-$(CONFIG_EXYNOS_PM_DOMAINS) += pm_domains.o 12 + obj-$(CONFIG_EXYNOS_REGULATOR_COUPLER) += exynos-regulator-coupler.o
+221
drivers/soc/samsung/exynos-regulator-coupler.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2020 Samsung Electronics Co., Ltd. 4 + * http://www.samsung.com/ 5 + * Author: Marek Szyprowski <m.szyprowski@samsung.com> 6 + * 7 + * Simplified generic voltage coupler from regulator core.c 8 + * The main difference is that it keeps current regulator voltage 9 + * if consumers didn't apply their constraints yet. 10 + */ 11 + 12 + #include <linux/init.h> 13 + #include <linux/kernel.h> 14 + #include <linux/of.h> 15 + #include <linux/regulator/coupler.h> 16 + #include <linux/regulator/driver.h> 17 + #include <linux/regulator/machine.h> 18 + 19 + static int regulator_get_optimal_voltage(struct regulator_dev *rdev, 20 + int *current_uV, 21 + int *min_uV, int *max_uV, 22 + suspend_state_t state) 23 + { 24 + struct coupling_desc *c_desc = &rdev->coupling_desc; 25 + struct regulator_dev **c_rdevs = c_desc->coupled_rdevs; 26 + struct regulation_constraints *constraints = rdev->constraints; 27 + int desired_min_uV = 0, desired_max_uV = INT_MAX; 28 + int max_current_uV = 0, min_current_uV = INT_MAX; 29 + int highest_min_uV = 0, target_uV, possible_uV; 30 + int i, ret, max_spread, n_coupled = c_desc->n_coupled; 31 + bool done; 32 + 33 + *current_uV = -1; 34 + 35 + /* Find highest min desired voltage */ 36 + for (i = 0; i < n_coupled; i++) { 37 + int tmp_min = 0; 38 + int tmp_max = INT_MAX; 39 + 40 + lockdep_assert_held_once(&c_rdevs[i]->mutex.base); 41 + 42 + ret = regulator_check_consumers(c_rdevs[i], 43 + &tmp_min, 44 + &tmp_max, state); 45 + if (ret < 0) 46 + return ret; 47 + 48 + if (tmp_min == 0) { 49 + ret = regulator_get_voltage_rdev(c_rdevs[i]); 50 + if (ret < 0) 51 + return ret; 52 + tmp_min = ret; 53 + } 54 + 55 + /* apply constraints */ 56 + ret = regulator_check_voltage(c_rdevs[i], &tmp_min, &tmp_max); 57 + if (ret < 0) 58 + return ret; 59 + 60 + highest_min_uV = max(highest_min_uV, tmp_min); 61 + 62 + if (i == 0) { 63 + desired_min_uV = tmp_min; 64 + desired_max_uV = tmp_max; 65 + } 66 + } 67 
+ 68 + max_spread = constraints->max_spread[0]; 69 + 70 + /* 71 + * Let target_uV be equal to the desired one if possible. 72 + * If not, set it to minimum voltage, allowed by other coupled 73 + * regulators. 74 + */ 75 + target_uV = max(desired_min_uV, highest_min_uV - max_spread); 76 + 77 + /* 78 + * Find min and max voltages, which currently aren't violating 79 + * max_spread. 80 + */ 81 + for (i = 1; i < n_coupled; i++) { 82 + int tmp_act; 83 + 84 + tmp_act = regulator_get_voltage_rdev(c_rdevs[i]); 85 + if (tmp_act < 0) 86 + return tmp_act; 87 + 88 + min_current_uV = min(tmp_act, min_current_uV); 89 + max_current_uV = max(tmp_act, max_current_uV); 90 + } 91 + 92 + /* 93 + * Correct target voltage, so as it currently isn't 94 + * violating max_spread 95 + */ 96 + possible_uV = max(target_uV, max_current_uV - max_spread); 97 + possible_uV = min(possible_uV, min_current_uV + max_spread); 98 + 99 + if (possible_uV > desired_max_uV) 100 + return -EINVAL; 101 + 102 + done = (possible_uV == target_uV); 103 + desired_min_uV = possible_uV; 104 + 105 + /* Set current_uV if wasn't done earlier in the code and if necessary */ 106 + if (*current_uV == -1) { 107 + ret = regulator_get_voltage_rdev(rdev); 108 + if (ret < 0) 109 + return ret; 110 + *current_uV = ret; 111 + } 112 + 113 + *min_uV = desired_min_uV; 114 + *max_uV = desired_max_uV; 115 + 116 + return done; 117 + } 118 + 119 + static int exynos_coupler_balance_voltage(struct regulator_coupler *coupler, 120 + struct regulator_dev *rdev, 121 + suspend_state_t state) 122 + { 123 + struct regulator_dev **c_rdevs; 124 + struct regulator_dev *best_rdev; 125 + struct coupling_desc *c_desc = &rdev->coupling_desc; 126 + int i, ret, n_coupled, best_min_uV, best_max_uV, best_c_rdev; 127 + unsigned int delta, best_delta; 128 + unsigned long c_rdev_done = 0; 129 + bool best_c_rdev_done; 130 + 131 + c_rdevs = c_desc->coupled_rdevs; 132 + n_coupled = c_desc->n_coupled; 133 + 134 + /* 135 + * Find the best possible voltage change on 
each loop. Leave the loop 136 + * if there isn't any possible change. 137 + */ 138 + do { 139 + best_c_rdev_done = false; 140 + best_delta = 0; 141 + best_min_uV = 0; 142 + best_max_uV = 0; 143 + best_c_rdev = 0; 144 + best_rdev = NULL; 145 + 146 + /* 147 + * Find highest difference between optimal voltage 148 + * and current voltage. 149 + */ 150 + for (i = 0; i < n_coupled; i++) { 151 + /* 152 + * optimal_uV is the best voltage that can be set for 153 + * i-th regulator at the moment without violating 154 + * max_spread constraint in order to balance 155 + * the coupled voltages. 156 + */ 157 + int optimal_uV = 0, optimal_max_uV = 0, current_uV = 0; 158 + 159 + if (test_bit(i, &c_rdev_done)) 160 + continue; 161 + 162 + ret = regulator_get_optimal_voltage(c_rdevs[i], 163 + &current_uV, 164 + &optimal_uV, 165 + &optimal_max_uV, 166 + state); 167 + if (ret < 0) 168 + goto out; 169 + 170 + delta = abs(optimal_uV - current_uV); 171 + 172 + if (delta && best_delta <= delta) { 173 + best_c_rdev_done = ret; 174 + best_delta = delta; 175 + best_rdev = c_rdevs[i]; 176 + best_min_uV = optimal_uV; 177 + best_max_uV = optimal_max_uV; 178 + best_c_rdev = i; 179 + } 180 + } 181 + 182 + /* Nothing to change, return successfully */ 183 + if (!best_rdev) { 184 + ret = 0; 185 + goto out; 186 + } 187 + 188 + ret = regulator_set_voltage_rdev(best_rdev, best_min_uV, 189 + best_max_uV, state); 190 + 191 + if (ret < 0) 192 + goto out; 193 + 194 + if (best_c_rdev_done) 195 + set_bit(best_c_rdev, &c_rdev_done); 196 + 197 + } while (n_coupled > 1); 198 + 199 + out: 200 + return ret; 201 + } 202 + 203 + static int exynos_coupler_attach(struct regulator_coupler *coupler, 204 + struct regulator_dev *rdev) 205 + { 206 + return 0; 207 + } 208 + 209 + static struct regulator_coupler exynos_coupler = { 210 + .attach_regulator = exynos_coupler_attach, 211 + .balance_voltage = exynos_coupler_balance_voltage, 212 + }; 213 + 214 + static int __init exynos_coupler_init(void) 215 + { 216 + if 
(!of_machine_is_compatible("samsung,exynos5800")) 217 + return 0; 218 + 219 + return regulator_coupler_register(&exynos_coupler); 220 + } 221 + arch_initcall(exynos_coupler_init);
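The core arithmetic in `regulator_get_optimal_voltage()` above is a clamp of the desired voltage into the window allowed by `max_spread` around the voltages the other coupled regulators currently hold. That step in isolation, simplified from the full algorithm (function name and standalone form are for illustration only):

```c
/*
 * Clamp a desired voltage so it violates max_spread against neither the
 * highest nor the lowest voltage currently held by the other coupled
 * regulators, following the possible_uV computation in
 * regulator_get_optimal_voltage(). All values are in microvolts.
 */
static int clamp_to_spread(int target_uV, int min_current_uV,
			   int max_current_uV, int max_spread)
{
	int possible_uV = target_uV;

	/* possible_uV = max(target_uV, max_current_uV - max_spread) */
	if (possible_uV < max_current_uV - max_spread)
		possible_uV = max_current_uV - max_spread;

	/* possible_uV = min(possible_uV, min_current_uV + max_spread) */
	if (possible_uV > min_current_uV + max_spread)
		possible_uV = min_current_uV + max_spread;

	return possible_uV;
}
```

The balancing loop in `exynos_coupler_balance_voltage()` then repeatedly applies the largest such permissible step to one regulator at a time until no regulator can move closer to its target.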
+1 -1
drivers/soc/tegra/fuse/tegra-apbmisc.c
··· 27 27 28 28 u32 tegra_read_chipid(void) 29 29 { 30 - WARN(!chipid, "Tegra ABP MISC not yet available\n"); 30 + WARN(!chipid, "Tegra APB MISC not yet available\n"); 31 31 32 32 return chipid; 33 33 }
+135 -65
drivers/soc/ti/k3-ringacc.c
··· 109 109 }; 110 110 111 111 /** 112 + * struct k3_ring_state - Internal state tracking structure 113 + * 114 + * @free: Number of free entries 115 + * @occ: Occupancy 116 + * @windex: Write index 117 + * @rindex: Read index 118 + */ 119 + struct k3_ring_state { 120 + u32 free; 121 + u32 occ; 122 + u32 windex; 123 + u32 rindex; 124 + }; 125 + 126 + /** 112 127 * struct k3_ring - RA Ring descriptor 113 128 * 114 129 * @rt: Ring control/status registers ··· 136 121 * @elm_size: Size of the ring element 137 122 * @mode: Ring mode 138 123 * @flags: flags 139 - * @free: Number of free elements 140 - * @occ: Ring occupancy 141 - * @windex: Write index (only for @K3_RINGACC_RING_MODE_RING) 142 - * @rindex: Read index (only for @K3_RINGACC_RING_MODE_RING) 143 124 * @ring_id: Ring Id 144 125 * @parent: Pointer on struct @k3_ringacc 145 126 * @use_count: Use count for shared rings ··· 154 143 u32 flags; 155 144 #define K3_RING_FLAG_BUSY BIT(1) 156 145 #define K3_RING_FLAG_SHARED BIT(2) 157 - u32 free; 158 - u32 occ; 159 - u32 windex; 160 - u32 rindex; 146 + struct k3_ring_state state; 161 147 u32 ring_id; 162 148 struct k3_ringacc *parent; 163 149 u32 use_count; 164 150 int proxy_id; 151 + }; 152 + 153 + struct k3_ringacc_ops { 154 + int (*init)(struct platform_device *pdev, struct k3_ringacc *ringacc); 165 155 }; 166 156 167 157 /** ··· 183 171 * @tisci: pointer ti-sci handle 184 172 * @tisci_ring_ops: ti-sci rings ops 185 173 * @tisci_dev_id: ti-sci device id 174 + * @ops: SoC specific ringacc operation 186 175 */ 187 176 struct k3_ringacc { 188 177 struct device *dev; ··· 204 191 const struct ti_sci_handle *tisci; 205 192 const struct ti_sci_rm_ringacc_ops *tisci_ring_ops; 206 193 u32 tisci_dev_id; 194 + 195 + const struct k3_ringacc_ops *ops; 207 196 }; 208 197 209 198 static long k3_ringacc_ring_get_fifo_pos(struct k3_ring *ring) ··· 260 245 &ring->ring_mem_dma); 261 246 dev_dbg(dev, "dump elmsize %d, size %d, mode %d, proxy_id %d\n", 262 247 ring->elm_size, 
ring->size, ring->mode, ring->proxy_id); 248 + dev_dbg(dev, "dump flags %08X\n", ring->flags); 263 249 264 250 dev_dbg(dev, "dump ring_rt_regs: db%08x\n", readl(&ring->rt->db)); 265 251 dev_dbg(dev, "dump occ%08x\n", readl(&ring->rt->occ)); ··· 329 313 } 330 314 EXPORT_SYMBOL_GPL(k3_ringacc_request_ring); 331 315 316 + int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc, 317 + int fwd_id, int compl_id, 318 + struct k3_ring **fwd_ring, 319 + struct k3_ring **compl_ring) 320 + { 321 + int ret = 0; 322 + 323 + if (!fwd_ring || !compl_ring) 324 + return -EINVAL; 325 + 326 + *fwd_ring = k3_ringacc_request_ring(ringacc, fwd_id, 0); 327 + if (!(*fwd_ring)) 328 + return -ENODEV; 329 + 330 + *compl_ring = k3_ringacc_request_ring(ringacc, compl_id, 0); 331 + if (!(*compl_ring)) { 332 + k3_ringacc_ring_free(*fwd_ring); 333 + ret = -ENODEV; 334 + } 335 + 336 + return ret; 337 + } 338 + EXPORT_SYMBOL_GPL(k3_ringacc_request_rings_pair); 339 + 332 340 static void k3_ringacc_ring_reset_sci(struct k3_ring *ring) 333 341 { 334 342 struct k3_ringacc *ringacc = ring->parent; ··· 379 339 if (!ring || !(ring->flags & K3_RING_FLAG_BUSY)) 380 340 return; 381 341 382 - ring->occ = 0; 383 - ring->free = 0; 384 - ring->rindex = 0; 385 - ring->windex = 0; 342 + memset(&ring->state, 0, sizeof(ring->state)); 386 343 387 344 k3_ringacc_ring_reset_sci(ring); 388 345 } ··· 593 556 594 557 int k3_ringacc_ring_cfg(struct k3_ring *ring, struct k3_ring_cfg *cfg) 595 558 { 596 - struct k3_ringacc *ringacc = ring->parent; 559 + struct k3_ringacc *ringacc; 597 560 int ret = 0; 598 561 599 562 if (!ring || !cfg) 600 563 return -EINVAL; 564 + ringacc = ring->parent; 565 + 601 566 if (cfg->elm_size > K3_RINGACC_RING_ELSIZE_256 || 602 567 cfg->mode >= K3_RINGACC_RING_MODE_INVALID || 603 568 cfg->size & ~K3_RINGACC_CFG_RING_SIZE_ELCNT_MASK || ··· 629 590 ring->size = cfg->size; 630 591 ring->elm_size = cfg->elm_size; 631 592 ring->mode = cfg->mode; 632 - ring->occ = 0; 633 - ring->free = 0; 634 - 
ring->rindex = 0; 635 - ring->windex = 0; 593 + memset(&ring->state, 0, sizeof(ring->state)); 636 594 637 595 if (ring->proxy_id != K3_RINGACC_PROXY_NOT_USED) 638 596 ring->proxy = ringacc->proxy_target_base + ··· 649 613 ring->ops = NULL; 650 614 ret = -EINVAL; 651 615 goto err_free_proxy; 652 - }; 616 + } 653 617 654 618 ring->ring_mem_virt = dma_alloc_coherent(ringacc->dev, 655 619 ring->size * (4 << ring->elm_size), ··· 700 664 if (!ring || !(ring->flags & K3_RING_FLAG_BUSY)) 701 665 return -EINVAL; 702 666 703 - if (!ring->free) 704 - ring->free = ring->size - readl(&ring->rt->occ); 667 + if (!ring->state.free) 668 + ring->state.free = ring->size - readl(&ring->rt->occ); 705 669 706 - return ring->free; 670 + return ring->state.free; 707 671 } 708 672 EXPORT_SYMBOL_GPL(k3_ringacc_ring_get_free); 709 673 ··· 774 738 "proxy:memcpy_fromio(x): --> ptr(%p), mode:%d\n", ptr, 775 739 access_mode); 776 740 memcpy_fromio(elem, ptr, (4 << ring->elm_size)); 777 - ring->occ--; 741 + ring->state.occ--; 778 742 break; 779 743 case K3_RINGACC_ACCESS_MODE_PUSH_TAIL: 780 744 case K3_RINGACC_ACCESS_MODE_PUSH_HEAD: ··· 782 746 "proxy:memcpy_toio(x): --> ptr(%p), mode:%d\n", ptr, 783 747 access_mode); 784 748 memcpy_toio(ptr, elem, (4 << ring->elm_size)); 785 - ring->free--; 749 + ring->state.free--; 786 750 break; 787 751 default: 788 752 return -EINVAL; 789 753 } 790 754 791 - dev_dbg(ring->parent->dev, "proxy: free%d occ%d\n", ring->free, 792 - ring->occ); 755 + dev_dbg(ring->parent->dev, "proxy: free%d occ%d\n", ring->state.free, 756 + ring->state.occ); 793 757 return 0; 794 758 } 795 759 ··· 844 808 "memcpy_fromio(x): --> ptr(%p), mode:%d\n", ptr, 845 809 access_mode); 846 810 memcpy_fromio(elem, ptr, (4 << ring->elm_size)); 847 - ring->occ--; 811 + ring->state.occ--; 848 812 break; 849 813 case K3_RINGACC_ACCESS_MODE_PUSH_TAIL: 850 814 case K3_RINGACC_ACCESS_MODE_PUSH_HEAD: ··· 852 816 "memcpy_toio(x): --> ptr(%p), mode:%d\n", ptr, 853 817 access_mode); 854 818 
memcpy_toio(ptr, elem, (4 << ring->elm_size)); 855 - ring->free--; 819 + ring->state.free--; 856 820 break; 857 821 default: 858 822 return -EINVAL; 859 823 } 860 824 861 - dev_dbg(ring->parent->dev, "free%d index%d occ%d index%d\n", ring->free, 862 - ring->windex, ring->occ, ring->rindex); 825 + dev_dbg(ring->parent->dev, "free%d index%d occ%d index%d\n", 826 + ring->state.free, ring->state.windex, ring->state.occ, 827 + ring->state.rindex); 863 828 return 0; 864 829 } 865 830 ··· 892 855 { 893 856 void *elem_ptr; 894 857 895 - elem_ptr = k3_ringacc_get_elm_addr(ring, ring->windex); 858 + elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.windex); 896 859 897 860 memcpy(elem_ptr, elem, (4 << ring->elm_size)); 898 861 899 - ring->windex = (ring->windex + 1) % ring->size; 900 - ring->free--; 862 + ring->state.windex = (ring->state.windex + 1) % ring->size; 863 + ring->state.free--; 901 864 writel(1, &ring->rt->db); 902 865 903 866 dev_dbg(ring->parent->dev, "ring_push_mem: free%d index%d\n", 904 - ring->free, ring->windex); 867 + ring->state.free, ring->state.windex); 905 868 906 869 return 0; 907 870 } ··· 910 873 { 911 874 void *elem_ptr; 912 875 913 - elem_ptr = k3_ringacc_get_elm_addr(ring, ring->rindex); 876 + elem_ptr = k3_ringacc_get_elm_addr(ring, ring->state.rindex); 914 877 915 878 memcpy(elem, elem_ptr, (4 << ring->elm_size)); 916 879 917 - ring->rindex = (ring->rindex + 1) % ring->size; 918 - ring->occ--; 880 + ring->state.rindex = (ring->state.rindex + 1) % ring->size; 881 + ring->state.occ--; 919 882 writel(-1, &ring->rt->db); 920 883 921 884 dev_dbg(ring->parent->dev, "ring_pop_mem: occ%d index%d pos_ptr%p\n", 922 - ring->occ, ring->rindex, elem_ptr); 885 + ring->state.occ, ring->state.rindex, elem_ptr); 923 886 return 0; 924 887 } 925 888 ··· 930 893 if (!ring || !(ring->flags & K3_RING_FLAG_BUSY)) 931 894 return -EINVAL; 932 895 933 - dev_dbg(ring->parent->dev, "ring_push: free%d index%d\n", ring->free, 934 - ring->windex); 896 + 
dev_dbg(ring->parent->dev, "ring_push: free%d index%d\n", 897 + ring->state.free, ring->state.windex); 935 898 936 899 if (k3_ringacc_ring_is_full(ring)) 937 900 return -ENOMEM; ··· 951 914 return -EINVAL; 952 915 953 916 dev_dbg(ring->parent->dev, "ring_push_head: free%d index%d\n", 954 - ring->free, ring->windex); 917 + ring->state.free, ring->state.windex); 955 918 956 919 if (k3_ringacc_ring_is_full(ring)) 957 920 return -ENOMEM; ··· 970 933 if (!ring || !(ring->flags & K3_RING_FLAG_BUSY)) 971 934 return -EINVAL; 972 935 973 - if (!ring->occ) 974 - ring->occ = k3_ringacc_ring_get_occ(ring); 936 + if (!ring->state.occ) 937 + ring->state.occ = k3_ringacc_ring_get_occ(ring); 975 938 976 - dev_dbg(ring->parent->dev, "ring_pop: occ%d index%d\n", ring->occ, 977 - ring->rindex); 939 + dev_dbg(ring->parent->dev, "ring_pop: occ%d index%d\n", ring->state.occ, 940 + ring->state.rindex); 978 941 979 - if (!ring->occ) 942 + if (!ring->state.occ) 980 943 return -ENODATA; 981 944 982 945 if (ring->ops && ring->ops->pop_head) ··· 993 956 if (!ring || !(ring->flags & K3_RING_FLAG_BUSY)) 994 957 return -EINVAL; 995 958 996 - if (!ring->occ) 997 - ring->occ = k3_ringacc_ring_get_occ(ring); 959 + if (!ring->state.occ) 960 + ring->state.occ = k3_ringacc_ring_get_occ(ring); 998 961 999 - dev_dbg(ring->parent->dev, "ring_pop_tail: occ%d index%d\n", ring->occ, 1000 - ring->rindex); 962 + dev_dbg(ring->parent->dev, "ring_pop_tail: occ%d index%d\n", 963 + ring->state.occ, ring->state.rindex); 1001 964 1002 - if (!ring->occ) 965 + if (!ring->state.occ) 1003 966 return -ENODATA; 1004 967 1005 968 if (ring->ops && ring->ops->pop_tail) ··· 1084 1047 ringacc->rm_gp_range); 1085 1048 } 1086 1049 1087 - static int k3_ringacc_probe(struct platform_device *pdev) 1050 + static int k3_ringacc_init(struct platform_device *pdev, 1051 + struct k3_ringacc *ringacc) 1088 1052 { 1089 - struct k3_ringacc *ringacc; 1090 1053 void __iomem *base_fifo, *base_rt; 1091 1054 struct device *dev = &pdev->dev; 
1092 1055 struct resource *res; 1093 1056 int ret, i; 1094 - 1095 - ringacc = devm_kzalloc(dev, sizeof(*ringacc), GFP_KERNEL); 1096 - if (!ringacc) 1097 - return -ENOMEM; 1098 - 1099 - ringacc->dev = dev; 1100 - mutex_init(&ringacc->req_lock); 1101 1057 1102 1058 dev->msi_domain = of_msi_get_domain(dev, dev->of_node, 1103 1059 DOMAIN_BUS_TI_SCI_INTA_MSI); ··· 1150 1120 ringacc->rings[i].ring_id = i; 1151 1121 ringacc->rings[i].proxy_id = K3_RINGACC_PROXY_NOT_USED; 1152 1122 } 1153 - dev_set_drvdata(dev, ringacc); 1154 1123 1155 1124 ringacc->tisci_ring_ops = &ringacc->tisci->ops.rm_ring_ops; 1156 - 1157 - mutex_lock(&k3_ringacc_list_lock); 1158 - list_add_tail(&ringacc->list, &k3_ringacc_list); 1159 - mutex_unlock(&k3_ringacc_list_lock); 1160 1125 1161 1126 dev_info(dev, "Ring Accelerator probed rings:%u, gp-rings[%u,%u] sci-dev-id:%u\n", 1162 1127 ringacc->num_rings, ··· 1162 1137 ringacc->dma_ring_reset_quirk ? "enabled" : "disabled"); 1163 1138 dev_info(dev, "RA Proxy rev. %08x, num_proxies:%u\n", 1164 1139 readl(&ringacc->proxy_gcfg->revision), ringacc->num_proxies); 1140 + 1165 1141 return 0; 1166 1142 } 1167 1143 1144 + struct ringacc_match_data { 1145 + struct k3_ringacc_ops ops; 1146 + }; 1147 + 1148 + static struct ringacc_match_data k3_ringacc_data = { 1149 + .ops = { 1150 + .init = k3_ringacc_init, 1151 + }, 1152 + }; 1153 + 1168 1154 /* Match table for of_platform binding */ 1169 1155 static const struct of_device_id k3_ringacc_of_match[] = { 1170 - { .compatible = "ti,am654-navss-ringacc", }, 1156 + { .compatible = "ti,am654-navss-ringacc", .data = &k3_ringacc_data, }, 1171 1157 {}, 1172 1158 }; 1159 + 1160 + static int k3_ringacc_probe(struct platform_device *pdev) 1161 + { 1162 + const struct ringacc_match_data *match_data; 1163 + const struct of_device_id *match; 1164 + struct device *dev = &pdev->dev; 1165 + struct k3_ringacc *ringacc; 1166 + int ret; 1167 + 1168 + match = of_match_node(k3_ringacc_of_match, dev->of_node); 1169 + if (!match) 1170 + 
return -ENODEV; 1171 + match_data = match->data; 1172 + 1173 + ringacc = devm_kzalloc(dev, sizeof(*ringacc), GFP_KERNEL); 1174 + if (!ringacc) 1175 + return -ENOMEM; 1176 + 1177 + ringacc->dev = dev; 1178 + mutex_init(&ringacc->req_lock); 1179 + ringacc->ops = &match_data->ops; 1180 + 1181 + ret = ringacc->ops->init(pdev, ringacc); 1182 + if (ret) 1183 + return ret; 1184 + 1185 + dev_set_drvdata(dev, ringacc); 1186 + 1187 + mutex_lock(&k3_ringacc_list_lock); 1188 + list_add_tail(&ringacc->list, &k3_ringacc_list); 1189 + mutex_unlock(&k3_ringacc_list_lock); 1190 + 1191 + return 0; 1192 + } 1173 1193 1174 1194 static struct platform_driver k3_ringacc_driver = { 1175 1195 .probe = k3_ringacc_probe,
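The k3-ringacc refactoring above moves `free`/`occ`/`windex`/`rindex` into a `struct k3_ring_state`, with push and pop advancing the write and read indices modulo the ring size. The index bookkeeping can be sketched as a self-contained software ring (unlike the driver, which refreshes `occ`/`free` from a hardware occupancy register, this toy version tracks them purely in software):

```c
#include <string.h>

#define RING_SIZE 8

/* Toy ring mirroring the k3_ring_state fields: windex/rindex/free/occ. */
struct mini_ring {
	unsigned int elems[RING_SIZE];
	unsigned int windex, rindex, free, occ;
};

static void mini_ring_init(struct mini_ring *r)
{
	memset(r, 0, sizeof(*r));	/* like memset(&ring->state, 0, ...) */
	r->free = RING_SIZE;
}

/* Same wrap-around as k3_ringacc_ring_push_mem(). Returns 0 or -1 if full. */
static int mini_ring_push(struct mini_ring *r, unsigned int v)
{
	if (!r->free)
		return -1;
	r->elems[r->windex] = v;
	r->windex = (r->windex + 1) % RING_SIZE;
	r->free--;
	r->occ++;
	return 0;
}

/* Same wrap-around as k3_ringacc_ring_pop_mem(). Returns 0 or -1 if empty. */
static int mini_ring_pop(struct mini_ring *r, unsigned int *v)
{
	if (!r->occ)
		return -1;
	*v = r->elems[r->rindex];
	r->rindex = (r->rindex + 1) % RING_SIZE;
	r->occ--;
	r->free++;
	return 0;
}
```

Grouping the four counters into one struct is what lets the driver replace four separate zeroing assignments with a single `memset(&ring->state, 0, sizeof(ring->state))` on reset and reconfigure.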
+1 -1
drivers/soc/ti/knav_qmss_acc.c
··· 450 450 return 0; 451 451 } 452 452 453 - struct knav_range_ops knav_acc_range_ops = { 453 + static struct knav_range_ops knav_acc_range_ops = { 454 454 .set_notify = knav_acc_set_notify, 455 455 .init_queue = knav_acc_init_queue, 456 456 .open_queue = knav_acc_open_queue,
+12 -10
drivers/soc/ux500/ux500-soc-id.c
··· 146 146 return kasprintf(GFP_KERNEL, "%s", "Unknown"); 147 147 } 148 148 149 - static ssize_t ux500_get_process(struct device *dev, 150 - struct device_attribute *attr, 151 - char *buf) 149 + static ssize_t 150 + process_show(struct device *dev, struct device_attribute *attr, char *buf) 152 151 { 153 152 if (dbx500_id.process == 0x00) 154 153 return sprintf(buf, "Standard\n"); 155 154 156 155 return sprintf(buf, "%02xnm\n", dbx500_id.process); 157 156 } 157 + 158 + static DEVICE_ATTR_RO(process); 159 + 160 + static struct attribute *ux500_soc_attrs[] = { 161 + &dev_attr_process.attr, 162 + NULL 163 + }; 164 + 165 + ATTRIBUTE_GROUPS(ux500_soc); 158 166 159 167 static const char *db8500_read_soc_id(struct device_node *backupram) 160 168 { ··· 192 184 soc_dev_attr->machine = ux500_get_machine(); 193 185 soc_dev_attr->family = ux500_get_family(); 194 186 soc_dev_attr->revision = ux500_get_revision(); 187 + soc_dev_attr->custom_attr_group = ux500_soc_groups[0]; 195 188 } 196 - 197 - static const struct device_attribute ux500_soc_attr = 198 - __ATTR(process, S_IRUGO, ux500_get_process, NULL); 199 189 200 190 static int __init ux500_soc_device_init(void) 201 191 { 202 - struct device *parent; 203 192 struct soc_device *soc_dev; 204 193 struct soc_device_attribute *soc_dev_attr; 205 194 struct device_node *backupram; ··· 221 216 kfree(soc_dev_attr); 222 217 return PTR_ERR(soc_dev); 223 218 } 224 - 225 - parent = soc_device_to_device(soc_dev); 226 - device_create_file(parent, &ux500_soc_attr); 227 219 228 220 return 0; 229 221 }
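The ux500 conversion above (and the integrator/realview ones below) replaces individual `device_create_file()` calls with a NULL-terminated attribute array handed to the soc_device core via `custom_attr_group`, so the core registers and tears down all files in one pass. A toy userspace model of that table walk, with illustrative types in place of the real sysfs structs:

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/* Toy model of a sysfs attribute group: a NULL-terminated array of
 * attributes that the core registers in one pass, replacing the old
 * per-attribute device_create_file() calls. Types are illustrative,
 * not the kernel's struct attribute / DEVICE_ATTR_RO machinery. */
struct attribute {
	const char *name;
	int (*show)(char *buf, size_t len);
};

static int process_show(char *buf, size_t len)
{
	return snprintf(buf, len, "Standard\n");
}

static struct attribute process_attr = { "process", process_show };

static struct attribute *soc_attrs[] = {
	&process_attr,
	NULL	/* sentinel, like the array ATTRIBUTE_GROUPS() wraps */
};

/* Registering the whole group is one loop over the table; a real core
 * would create one sysfs file per entry here. */
static int register_group(struct attribute **attrs)
{
	int n = 0;

	for (; *attrs; attrs++)
		n++;
	return n;
}
```

The payoff in the patch is symmetry: the core also removes every file automatically, so the drivers lose their manual `device_create_file()` calls with no matching cleanup needed.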
+23 -25
drivers/soc/versatile/soc-integrator.c
··· 56 56 } 57 57 } 58 58 59 - static ssize_t integrator_get_manf(struct device *dev, 60 - struct device_attribute *attr, 61 - char *buf) 59 + static ssize_t 60 + manufacturer_show(struct device *dev, struct device_attribute *attr, char *buf) 62 61 { 63 62 return sprintf(buf, "%02x\n", integrator_coreid >> 24); 64 63 } 65 64 66 - static struct device_attribute integrator_manf_attr = 67 - __ATTR(manufacturer, S_IRUGO, integrator_get_manf, NULL); 65 + static DEVICE_ATTR_RO(manufacturer); 68 66 69 - static ssize_t integrator_get_arch(struct device *dev, 70 - struct device_attribute *attr, 71 - char *buf) 67 + static ssize_t 68 + arch_show(struct device *dev, struct device_attribute *attr, char *buf) 72 69 { 73 70 return sprintf(buf, "%s\n", integrator_arch_str(integrator_coreid)); 74 71 } 75 72 76 - static struct device_attribute integrator_arch_attr = 77 - __ATTR(arch, S_IRUGO, integrator_get_arch, NULL); 73 + static DEVICE_ATTR_RO(arch); 78 74 79 - static ssize_t integrator_get_fpga(struct device *dev, 80 - struct device_attribute *attr, 81 - char *buf) 75 + static ssize_t 76 + fpga_show(struct device *dev, struct device_attribute *attr, char *buf) 82 77 { 83 78 return sprintf(buf, "%s\n", integrator_fpga_str(integrator_coreid)); 84 79 } 85 80 86 - static struct device_attribute integrator_fpga_attr = 87 - __ATTR(fpga, S_IRUGO, integrator_get_fpga, NULL); 81 + static DEVICE_ATTR_RO(fpga); 88 82 89 - static ssize_t integrator_get_build(struct device *dev, 90 - struct device_attribute *attr, 91 - char *buf) 83 + static ssize_t 84 + build_show(struct device *dev, struct device_attribute *attr, char *buf) 92 85 { 93 86 return sprintf(buf, "%02x\n", (integrator_coreid >> 4) & 0xFF); 94 87 } 95 88 96 - static struct device_attribute integrator_build_attr = 97 - __ATTR(build, S_IRUGO, integrator_get_build, NULL); 89 + static DEVICE_ATTR_RO(build); 90 + 91 + static struct attribute *integrator_attrs[] = { 92 + &dev_attr_manufacturer.attr, 93 + &dev_attr_arch.attr, 94 + 
&dev_attr_fpga.attr, 95 + &dev_attr_build.attr, 96 + NULL 97 + }; 98 + 99 + ATTRIBUTE_GROUPS(integrator); 98 100 99 101 static int __init integrator_soc_init(void) 100 102 { ··· 129 127 soc_dev_attr->soc_id = "Integrator"; 130 128 soc_dev_attr->machine = "Integrator"; 131 129 soc_dev_attr->family = "Versatile"; 130 + soc_dev_attr->custom_attr_group = integrator_groups[0]; 132 131 soc_dev = soc_device_register(soc_dev_attr); 133 132 if (IS_ERR(soc_dev)) { 134 133 kfree(soc_dev_attr); 135 134 return -ENODEV; 136 135 } 137 136 dev = soc_device_to_device(soc_dev); 138 - 139 - device_create_file(dev, &integrator_manf_attr); 140 - device_create_file(dev, &integrator_arch_attr); 141 - device_create_file(dev, &integrator_fpga_attr); 142 - device_create_file(dev, &integrator_build_attr); 143 137 144 138 dev_info(dev, "Detected ARM core module:\n"); 145 139 dev_info(dev, " Manufacturer: %02x\n", (val >> 24));
+23 -25
drivers/soc/versatile/soc-realview.c
··· 39 39 } 40 40 } 41 41 42 - static ssize_t realview_get_manf(struct device *dev, 43 - struct device_attribute *attr, 44 - char *buf) 42 + static ssize_t 43 + manufacturer_show(struct device *dev, struct device_attribute *attr, char *buf) 45 44 { 46 45 return sprintf(buf, "%02x\n", realview_coreid >> 24); 47 46 } 48 47 49 - static struct device_attribute realview_manf_attr = 50 - __ATTR(manufacturer, S_IRUGO, realview_get_manf, NULL); 48 + static DEVICE_ATTR_RO(manufacturer); 51 49 52 - static ssize_t realview_get_board(struct device *dev, 53 - struct device_attribute *attr, 54 - char *buf) 50 + static ssize_t 51 + board_show(struct device *dev, struct device_attribute *attr, char *buf) 55 52 { 56 53 return sprintf(buf, "HBI-%03x\n", ((realview_coreid >> 16) & 0xfff)); 57 54 } 58 55 59 - static struct device_attribute realview_board_attr = 60 - __ATTR(board, S_IRUGO, realview_get_board, NULL); 56 + static DEVICE_ATTR_RO(board); 61 57 62 - static ssize_t realview_get_arch(struct device *dev, 63 - struct device_attribute *attr, 64 - char *buf) 58 + static ssize_t 59 + fpga_show(struct device *dev, struct device_attribute *attr, char *buf) 65 60 { 66 61 return sprintf(buf, "%s\n", realview_arch_str(realview_coreid)); 67 62 } 68 63 69 - static struct device_attribute realview_arch_attr = 70 - __ATTR(fpga, S_IRUGO, realview_get_arch, NULL); 64 + static DEVICE_ATTR_RO(fpga); 71 65 72 - static ssize_t realview_get_build(struct device *dev, 73 - struct device_attribute *attr, 74 - char *buf) 66 + static ssize_t 67 + build_show(struct device *dev, struct device_attribute *attr, char *buf) 75 68 { 76 69 return sprintf(buf, "%02x\n", (realview_coreid & 0xFF)); 77 70 } 78 71 79 - static struct device_attribute realview_build_attr = 80 - __ATTR(build, S_IRUGO, realview_get_build, NULL); 72 + static DEVICE_ATTR_RO(build); 73 + 74 + static struct attribute *realview_attrs[] = { 75 + &dev_attr_manufacturer.attr, 76 + &dev_attr_board.attr, 77 + &dev_attr_fpga.attr, 78 + 
&dev_attr_build.attr, 79 + NULL 80 + }; 81 + 82 + ATTRIBUTE_GROUPS(realview); 81 83 82 84 static int realview_soc_probe(struct platform_device *pdev) 83 85 { ··· 104 102 105 103 soc_dev_attr->machine = "RealView"; 106 104 soc_dev_attr->family = "Versatile"; 105 + soc_dev_attr->custom_attr_group = realview_groups[0]; 107 106 soc_dev = soc_device_register(soc_dev_attr); 108 107 if (IS_ERR(soc_dev)) { 109 108 kfree(soc_dev_attr); ··· 114 111 &realview_coreid); 115 112 if (ret) 116 113 return -ENODEV; 117 - 118 - device_create_file(soc_device_to_device(soc_dev), &realview_manf_attr); 119 - device_create_file(soc_device_to_device(soc_dev), &realview_board_attr); 120 - device_create_file(soc_device_to_device(soc_dev), &realview_arch_attr); 121 - device_create_file(soc_device_to_device(soc_dev), &realview_build_attr); 122 114 123 115 dev_info(&pdev->dev, "RealView Syscon Core ID: 0x%08x, HBI-%03x\n", 124 116 realview_coreid,
+122 -71
drivers/spi/spi-geni-qcom.c
··· 7 7 #include <linux/log2.h> 8 8 #include <linux/module.h> 9 9 #include <linux/platform_device.h> 10 + #include <linux/pm_opp.h> 10 11 #include <linux/pm_runtime.h> 11 12 #include <linux/qcom-geni-se.h> 12 13 #include <linux/spi/spi.h> ··· 77 76 u32 tx_fifo_depth; 78 77 u32 fifo_width_bits; 79 78 u32 tx_wm; 79 + u32 last_mode; 80 80 unsigned long cur_speed_hz; 81 + unsigned long cur_sclk_hz; 81 82 unsigned int cur_bits_per_word; 82 83 unsigned int tx_rem_bytes; 83 84 unsigned int rx_rem_bytes; ··· 98 95 { 99 96 unsigned long sclk_freq; 100 97 unsigned int actual_hz; 101 - struct geni_se *se = &mas->se; 102 98 int ret; 103 99 104 100 ret = geni_se_clk_freq_match(&mas->se, ··· 114 112 115 113 dev_dbg(mas->dev, "req %u=>%u sclk %lu, idx %d, div %d\n", speed_hz, 116 114 actual_hz, sclk_freq, *clk_idx, *clk_div); 117 - ret = clk_set_rate(se->clk, sclk_freq); 115 + ret = dev_pm_opp_set_rate(mas->dev, sclk_freq); 118 116 if (ret) 119 - dev_err(mas->dev, "clk_set_rate failed %d\n", ret); 117 + dev_err(mas->dev, "dev_pm_opp_set_rate failed %d\n", ret); 118 + else 119 + mas->cur_sclk_hz = sclk_freq; 120 + 120 121 return ret; 121 122 } 122 123 ··· 182 177 struct geni_se *se = &mas->se; 183 178 u32 word_len; 184 179 185 - word_len = readl(se->base + SE_SPI_WORD_LEN); 186 - 187 180 /* 188 181 * If bits_per_word isn't a byte aligned value, set the packing to be 189 182 * 1 SPI word per FIFO word. 
··· 190 187 pack_words = mas->fifo_width_bits / bits_per_word; 191 188 else 192 189 pack_words = 1; 193 - word_len &= ~WORD_LEN_MSK; 194 - word_len |= ((bits_per_word - MIN_WORD_LEN) & WORD_LEN_MSK); 195 190 geni_se_config_packing(&mas->se, bits_per_word, pack_words, msb_first, 196 191 true, true); 192 + word_len = (bits_per_word - MIN_WORD_LEN) & WORD_LEN_MSK; 197 193 writel(word_len, se->base + SE_SPI_WORD_LEN); 194 + } 195 + 196 + static int geni_spi_set_clock_and_bw(struct spi_geni_master *mas, 197 + unsigned long clk_hz) 198 + { 199 + u32 clk_sel, m_clk_cfg, idx, div; 200 + struct geni_se *se = &mas->se; 201 + int ret; 202 + 203 + if (clk_hz == mas->cur_speed_hz) 204 + return 0; 205 + 206 + ret = get_spi_clk_cfg(clk_hz, mas, &idx, &div); 207 + if (ret) { 208 + dev_err(mas->dev, "Err setting clk to %lu: %d\n", clk_hz, ret); 209 + return ret; 210 + } 211 + 212 + /* 213 + * SPI core clock gets configured with the requested frequency 214 + * or the frequency closer to the requested frequency. 215 + * For that reason requested frequency is stored in the 216 + * cur_speed_hz and referred in the consecutive transfer instead 217 + * of calling clk_get_rate() API. 218 + */ 219 + mas->cur_speed_hz = clk_hz; 220 + 221 + clk_sel = idx & CLK_SEL_MSK; 222 + m_clk_cfg = (div << CLK_DIV_SHFT) | SER_CLK_EN; 223 + writel(clk_sel, se->base + SE_GENI_CLK_SEL); 224 + writel(m_clk_cfg, se->base + GENI_SER_M_CLK_CFG); 225 + 226 + /* Set BW quota for CPU as driver supports FIFO mode only. 
*/ 227 + se->icc_paths[CPU_TO_GENI].avg_bw = Bps_to_icc(mas->cur_speed_hz); 228 + ret = geni_icc_set_bw(se); 229 + if (ret) 230 + return ret; 231 + 232 + return 0; 198 233 } 199 234 200 235 static int setup_fifo_params(struct spi_device *spi_slv, ··· 240 199 { 241 200 struct spi_geni_master *mas = spi_master_get_devdata(spi); 242 201 struct geni_se *se = &mas->se; 243 - u32 loopback_cfg, cpol, cpha, demux_output_inv; 244 - u32 demux_sel, clk_sel, m_clk_cfg, idx, div; 245 - int ret; 202 + u32 loopback_cfg = 0, cpol = 0, cpha = 0, demux_output_inv = 0; 203 + u32 demux_sel; 246 204 247 - loopback_cfg = readl(se->base + SE_SPI_LOOPBACK); 248 - cpol = readl(se->base + SE_SPI_CPOL); 249 - cpha = readl(se->base + SE_SPI_CPHA); 250 - demux_output_inv = 0; 251 - loopback_cfg &= ~LOOPBACK_MSK; 252 - cpol &= ~CPOL; 253 - cpha &= ~CPHA; 205 + if (mas->last_mode != spi_slv->mode) { 206 + if (spi_slv->mode & SPI_LOOP) 207 + loopback_cfg = LOOPBACK_ENABLE; 254 208 255 - if (spi_slv->mode & SPI_LOOP) 256 - loopback_cfg |= LOOPBACK_ENABLE; 209 + if (spi_slv->mode & SPI_CPOL) 210 + cpol = CPOL; 257 211 258 - if (spi_slv->mode & SPI_CPOL) 259 - cpol |= CPOL; 212 + if (spi_slv->mode & SPI_CPHA) 213 + cpha = CPHA; 260 214 261 - if (spi_slv->mode & SPI_CPHA) 262 - cpha |= CPHA; 215 + if (spi_slv->mode & SPI_CS_HIGH) 216 + demux_output_inv = BIT(spi_slv->chip_select); 263 217 264 - if (spi_slv->mode & SPI_CS_HIGH) 265 - demux_output_inv = BIT(spi_slv->chip_select); 218 + demux_sel = spi_slv->chip_select; 219 + mas->cur_bits_per_word = spi_slv->bits_per_word; 266 220 267 - demux_sel = spi_slv->chip_select; 268 - mas->cur_speed_hz = spi_slv->max_speed_hz; 269 - mas->cur_bits_per_word = spi_slv->bits_per_word; 221 + spi_setup_word_len(mas, spi_slv->mode, spi_slv->bits_per_word); 222 + writel(loopback_cfg, se->base + SE_SPI_LOOPBACK); 223 + writel(demux_sel, se->base + SE_SPI_DEMUX_SEL); 224 + writel(cpha, se->base + SE_SPI_CPHA); 225 + writel(cpol, se->base + SE_SPI_CPOL); 226 + 
writel(demux_output_inv, se->base + SE_SPI_DEMUX_OUTPUT_INV); 270 227 271 - ret = get_spi_clk_cfg(mas->cur_speed_hz, mas, &idx, &div); 272 - if (ret) { 273 - dev_err(mas->dev, "Err setting clks ret(%d) for %ld\n", 274 - ret, mas->cur_speed_hz); 275 - return ret; 228 + mas->last_mode = spi_slv->mode; 276 229 } 277 230 278 - clk_sel = idx & CLK_SEL_MSK; 279 - m_clk_cfg = (div << CLK_DIV_SHFT) | SER_CLK_EN; 280 - spi_setup_word_len(mas, spi_slv->mode, spi_slv->bits_per_word); 281 - writel(loopback_cfg, se->base + SE_SPI_LOOPBACK); 282 - writel(demux_sel, se->base + SE_SPI_DEMUX_SEL); 283 - writel(cpha, se->base + SE_SPI_CPHA); 284 - writel(cpol, se->base + SE_SPI_CPOL); 285 - writel(demux_output_inv, se->base + SE_SPI_DEMUX_OUTPUT_INV); 286 - writel(clk_sel, se->base + SE_GENI_CLK_SEL); 287 - writel(m_clk_cfg, se->base + GENI_SER_M_CLK_CFG); 288 - return 0; 231 + return geni_spi_set_clock_and_bw(mas, spi_slv->max_speed_hz); 289 232 } 290 233 291 234 static int spi_geni_prepare_message(struct spi_master *spi, ··· 277 252 { 278 253 int ret; 279 254 struct spi_geni_master *mas = spi_master_get_devdata(spi); 280 - struct geni_se *se = &mas->se; 281 255 282 - geni_se_select_mode(se, GENI_SE_FIFO); 283 256 ret = setup_fifo_params(spi_msg->spi, spi); 284 257 if (ret) 285 258 dev_err(mas->dev, "Couldn't select mode %d\n", ret); ··· 318 295 else 319 296 mas->oversampling = 1; 320 297 298 + geni_se_select_mode(se, GENI_SE_FIFO); 299 + 321 300 pm_runtime_put(mas->dev); 322 301 return 0; 323 302 } ··· 331 306 u32 m_cmd = 0; 332 307 u32 spi_tx_cfg, len; 333 308 struct geni_se *se = &mas->se; 309 + int ret; 334 310 335 311 spi_tx_cfg = readl(se->base + SE_SPI_TRANS_CFG); 336 312 if (xfer->bits_per_word != mas->cur_bits_per_word) { ··· 340 314 } 341 315 342 316 /* Speed and bits per word can be overridden per transfer */ 343 - if (xfer->speed_hz != mas->cur_speed_hz) { 344 - int ret; 345 - u32 clk_sel, m_clk_cfg; 346 - unsigned int idx, div; 347 - 348 - ret = 
get_spi_clk_cfg(xfer->speed_hz, mas, &idx, &div); 349 - if (ret) { 350 - dev_err(mas->dev, "Err setting clks:%d\n", ret); 351 - return; 352 - } 353 - /* 354 - * SPI core clock gets configured with the requested frequency 355 - * or the frequency closer to the requested frequency. 356 - * For that reason requested frequency is stored in the 357 - * cur_speed_hz and referred in the consecutive transfer instead 358 - * of calling clk_get_rate() API. 359 - */ 360 - mas->cur_speed_hz = xfer->speed_hz; 361 - clk_sel = idx & CLK_SEL_MSK; 362 - m_clk_cfg = (div << CLK_DIV_SHFT) | SER_CLK_EN; 363 - writel(clk_sel, se->base + SE_GENI_CLK_SEL); 364 - writel(m_clk_cfg, se->base + GENI_SER_M_CLK_CFG); 365 - } 317 + ret = geni_spi_set_clock_and_bw(mas, xfer->speed_hz); 318 + if (ret) 319 + return; 366 320 367 321 mas->tx_rem_bytes = 0; 368 322 mas->rx_rem_bytes = 0; ··· 567 561 mas->se.wrapper = dev_get_drvdata(dev->parent); 568 562 mas->se.base = base; 569 563 mas->se.clk = clk; 564 + mas->se.opp_table = dev_pm_opp_set_clkname(&pdev->dev, "se"); 565 + if (IS_ERR(mas->se.opp_table)) 566 + return PTR_ERR(mas->se.opp_table); 567 + /* OPP table is optional */ 568 + ret = dev_pm_opp_of_add_table(&pdev->dev); 569 + if (!ret) { 570 + mas->se.has_opp_table = true; 571 + } else if (ret != -ENODEV) { 572 + dev_err(&pdev->dev, "invalid OPP table in device tree\n"); 573 + return ret; 574 + } 570 575 571 576 spi->bus_num = -1; 572 577 spi->dev.of_node = dev->of_node; ··· 594 577 init_completion(&mas->xfer_done); 595 578 spin_lock_init(&mas->lock); 596 579 pm_runtime_enable(dev); 580 + 581 + ret = geni_icc_get(&mas->se, NULL); 582 + if (ret) 583 + goto spi_geni_probe_runtime_disable; 584 + /* Set the bus quota to a reasonable value for register access */ 585 + mas->se.icc_paths[GENI_TO_CORE].avg_bw = Bps_to_icc(CORE_2X_50_MHZ); 586 + mas->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW; 587 + 588 + ret = geni_icc_set_bw(&mas->se); 589 + if (ret) 590 + goto spi_geni_probe_runtime_disable; 
597 591 598 592 ret = spi_geni_init(mas); 599 593 if (ret) ··· 624 596 spi_geni_probe_runtime_disable: 625 597 pm_runtime_disable(dev); 626 598 spi_master_put(spi); 599 + if (mas->se.has_opp_table) 600 + dev_pm_opp_of_remove_table(&pdev->dev); 601 + dev_pm_opp_put_clkname(mas->se.opp_table); 627 602 return ret; 628 603 } 629 604 ··· 640 609 641 610 free_irq(mas->irq, spi); 642 611 pm_runtime_disable(&pdev->dev); 612 + if (mas->se.has_opp_table) 613 + dev_pm_opp_of_remove_table(&pdev->dev); 614 + dev_pm_opp_put_clkname(mas->se.opp_table); 643 615 return 0; 644 616 } 645 617 ··· 650 616 { 651 617 struct spi_master *spi = dev_get_drvdata(dev); 652 618 struct spi_geni_master *mas = spi_master_get_devdata(spi); 619 + int ret; 653 620 654 - return geni_se_resources_off(&mas->se); 621 + /* Drop the performance state vote */ 622 + dev_pm_opp_set_rate(dev, 0); 623 + 624 + ret = geni_se_resources_off(&mas->se); 625 + if (ret) 626 + return ret; 627 + 628 + return geni_icc_disable(&mas->se); 655 629 } 656 630 657 631 static int __maybe_unused spi_geni_runtime_resume(struct device *dev) 658 632 { 659 633 struct spi_master *spi = dev_get_drvdata(dev); 660 634 struct spi_geni_master *mas = spi_master_get_devdata(spi); 635 + int ret; 661 636 662 - return geni_se_resources_on(&mas->se); 637 + ret = geni_icc_enable(&mas->se); 638 + if (ret) 639 + return ret; 640 + 641 + ret = geni_se_resources_on(&mas->se); 642 + if (ret) 643 + return ret; 644 + 645 + return dev_pm_opp_set_rate(mas->dev, mas->cur_sclk_hz); 663 646 } 664 647 665 648 static int __maybe_unused spi_geni_suspend(struct device *dev)
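`geni_spi_set_clock_and_bw()` above caches the requested rate and returns early when consecutive transfers use the same speed, so the divider registers and the ICC bandwidth vote are only touched when the rate actually changes. A stub sketch of that caching logic, with the register writes and `geni_icc_set_bw()` call replaced by a counter:

```c
#include <assert.h>

/* Sketch of the "skip reprogramming when the rate is unchanged" logic
 * from geni_spi_set_clock_and_bw(): cache the last requested rate and
 * only touch the clock dividers / bandwidth votes on a change. The
 * hardware programming is stubbed out with a counter. */
struct spi_ctrl {
	unsigned long cur_speed_hz;
	int programmed;		/* stands in for writel() + icc_set_bw() */
};

static int set_clock_and_bw(struct spi_ctrl *c, unsigned long clk_hz)
{
	if (clk_hz == c->cur_speed_hz)
		return 0;	/* back-to-back transfers: nothing to do */

	c->programmed++;	/* real code: CLK_SEL/CLK_CFG + ICC vote */
	c->cur_speed_hz = clk_hz;
	return 0;
}
```

This is why the patch can call the helper unconditionally from both `setup_fifo_params()` and the per-transfer path: the common case costs one comparison.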
+110 -7
drivers/spi/spi-qcom-qspi.c
··· 2 2 // Copyright (c) 2017-2018, The Linux foundation. All rights reserved. 3 3 4 4 #include <linux/clk.h> 5 + #include <linux/interconnect.h> 5 6 #include <linux/interrupt.h> 6 7 #include <linux/io.h> 7 8 #include <linux/module.h> 8 9 #include <linux/of.h> 9 10 #include <linux/of_platform.h> 10 11 #include <linux/pm_runtime.h> 12 + #include <linux/pm_opp.h> 11 13 #include <linux/spi/spi.h> 12 14 #include <linux/spi/spi-mem.h> 13 15 ··· 141 139 struct device *dev; 142 140 struct clk_bulk_data *clks; 143 141 struct qspi_xfer xfer; 144 - /* Lock to protect xfer and IRQ accessed registers */ 142 + struct icc_path *icc_path_cpu_to_qspi; 143 + struct opp_table *opp_table; 144 + bool has_opp_table; 145 + unsigned long last_speed; 146 + /* Lock to protect data accessed by IRQs */ 145 147 spinlock_t lock; 146 148 }; 147 149 ··· 227 221 spin_unlock_irqrestore(&ctrl->lock, flags); 228 222 } 229 223 224 + static int qcom_qspi_set_speed(struct qcom_qspi *ctrl, unsigned long speed_hz) 225 + { 226 + int ret; 227 + unsigned int avg_bw_cpu; 228 + 229 + if (speed_hz == ctrl->last_speed) 230 + return 0; 231 + 232 + /* In regular operation (SBL_EN=1) core must be 4x transfer clock */ 233 + ret = dev_pm_opp_set_rate(ctrl->dev, speed_hz * 4); 234 + if (ret) { 235 + dev_err(ctrl->dev, "Failed to set core clk %d\n", ret); 236 + return ret; 237 + } 238 + 239 + /* 240 + * Set BW quota for CPU as driver supports FIFO mode only. 241 + * We don't have explicit peak requirement so keep it equal to avg_bw. 
242 + */ 243 + avg_bw_cpu = Bps_to_icc(speed_hz); 244 + ret = icc_set_bw(ctrl->icc_path_cpu_to_qspi, avg_bw_cpu, avg_bw_cpu); 245 + if (ret) { 246 + dev_err(ctrl->dev, "%s: ICC BW voting failed for cpu: %d\n", 247 + __func__, ret); 248 + return ret; 249 + } 250 + 251 + ctrl->last_speed = speed_hz; 252 + 253 + return 0; 254 + } 255 + 230 256 static int qcom_qspi_transfer_one(struct spi_master *master, 231 257 struct spi_device *slv, 232 258 struct spi_transfer *xfer) ··· 272 234 if (xfer->speed_hz) 273 235 speed_hz = xfer->speed_hz; 274 236 275 - /* In regular operation (SBL_EN=1) core must be 4x transfer clock */ 276 - ret = clk_set_rate(ctrl->clks[QSPI_CLK_CORE].clk, speed_hz * 4); 277 - if (ret) { 278 - dev_err(ctrl->dev, "Failed to set core clk %d\n", ret); 237 + ret = qcom_qspi_set_speed(ctrl, speed_hz); 238 + if (ret) 279 239 return ret; 280 - } 281 240 282 241 spin_lock_irqsave(&ctrl->lock, flags); 283 242 ··· 493 458 if (ret) 494 459 goto exit_probe_master_put; 495 460 461 + ctrl->icc_path_cpu_to_qspi = devm_of_icc_get(dev, "qspi-config"); 462 + if (IS_ERR(ctrl->icc_path_cpu_to_qspi)) { 463 + ret = PTR_ERR(ctrl->icc_path_cpu_to_qspi); 464 + if (ret != -EPROBE_DEFER) 465 + dev_err(dev, "Failed to get cpu path: %d\n", ret); 466 + goto exit_probe_master_put; 467 + } 468 + /* Set BW vote for register access */ 469 + ret = icc_set_bw(ctrl->icc_path_cpu_to_qspi, Bps_to_icc(1000), 470 + Bps_to_icc(1000)); 471 + if (ret) { 472 + dev_err(ctrl->dev, "%s: ICC BW voting failed for cpu: %d\n", 473 + __func__, ret); 474 + goto exit_probe_master_put; 475 + } 476 + 477 + ret = icc_disable(ctrl->icc_path_cpu_to_qspi); 478 + if (ret) { 479 + dev_err(ctrl->dev, "%s: ICC disable failed for cpu: %d\n", 480 + __func__, ret); 481 + goto exit_probe_master_put; 482 + } 483 + 496 484 ret = platform_get_irq(pdev, 0); 497 485 if (ret < 0) 498 486 goto exit_probe_master_put; ··· 539 481 master->handle_err = qcom_qspi_handle_err; 540 482 master->auto_runtime_pm = true; 541 483 484 + 
ctrl->opp_table = dev_pm_opp_set_clkname(&pdev->dev, "core"); 485 + if (IS_ERR(ctrl->opp_table)) { 486 + ret = PTR_ERR(ctrl->opp_table); 487 + goto exit_probe_master_put; 488 + } 489 + /* OPP table is optional */ 490 + ret = dev_pm_opp_of_add_table(&pdev->dev); 491 + if (!ret) { 492 + ctrl->has_opp_table = true; 493 + } else if (ret != -ENODEV) { 494 + dev_err(&pdev->dev, "invalid OPP table in device tree\n"); 495 + goto exit_probe_master_put; 496 + } 497 + 498 + pm_runtime_use_autosuspend(dev); 499 + pm_runtime_set_autosuspend_delay(dev, 250); 542 500 pm_runtime_enable(dev); 543 501 544 502 ret = spi_register_master(master); ··· 562 488 return 0; 563 489 564 490 pm_runtime_disable(dev); 491 + if (ctrl->has_opp_table) 492 + dev_pm_opp_of_remove_table(&pdev->dev); 493 + dev_pm_opp_put_clkname(ctrl->opp_table); 565 494 566 495 exit_probe_master_put: 567 496 spi_master_put(master); ··· 575 498 static int qcom_qspi_remove(struct platform_device *pdev) 576 499 { 577 500 struct spi_master *master = platform_get_drvdata(pdev); 501 + struct qcom_qspi *ctrl = spi_master_get_devdata(master); 578 502 579 503 /* Unregister _before_ disabling pm_runtime() so we stop transfers */ 580 504 spi_unregister_master(master); 581 505 582 506 pm_runtime_disable(&pdev->dev); 507 + if (ctrl->has_opp_table) 508 + dev_pm_opp_of_remove_table(&pdev->dev); 509 + dev_pm_opp_put_clkname(ctrl->opp_table); 583 510 584 511 return 0; 585 512 } ··· 592 511 { 593 512 struct spi_master *master = dev_get_drvdata(dev); 594 513 struct qcom_qspi *ctrl = spi_master_get_devdata(master); 514 + int ret; 595 515 516 + /* Drop the performance state vote */ 517 + dev_pm_opp_set_rate(dev, 0); 596 518 clk_bulk_disable_unprepare(QSPI_NUM_CLKS, ctrl->clks); 519 + 520 + ret = icc_disable(ctrl->icc_path_cpu_to_qspi); 521 + if (ret) { 522 + dev_err_ratelimited(ctrl->dev, "%s: ICC disable failed for cpu: %d\n", 523 + __func__, ret); 524 + return ret; 525 + } 597 526 598 527 return 0; 599 528 } ··· 612 521 { 613 522 struct 
spi_master *master = dev_get_drvdata(dev); 614 523 struct qcom_qspi *ctrl = spi_master_get_devdata(master); 524 + int ret; 615 525 616 - return clk_bulk_prepare_enable(QSPI_NUM_CLKS, ctrl->clks); 526 + ret = icc_enable(ctrl->icc_path_cpu_to_qspi); 527 + if (ret) { 528 + dev_err_ratelimited(ctrl->dev, "%s: ICC enable failed for cpu: %d\n", 529 + __func__, ret); 530 + return ret; 531 + } 532 + 533 + ret = clk_bulk_prepare_enable(QSPI_NUM_CLKS, ctrl->clks); 534 + if (ret) 535 + return ret; 536 + 537 + return dev_pm_opp_set_rate(dev, ctrl->last_speed * 4); 617 538 } 618 539 619 540 static int __maybe_unused qcom_qspi_suspend(struct device *dev)
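The QSPI runtime-PM hooks above drop the OPP vote to zero on suspend and restore 4x the cached transfer speed on resume, since in regular operation (SBL_EN=1) the core clock must run at 4x the transfer clock. A sketch of that bookkeeping, with `dev_pm_opp_set_rate()` modeled as a stored rate rather than the real OPP API:

```c
#include <assert.h>

/* Sketch of the runtime-PM rate bookkeeping the QSPI patch adds: cache
 * the last transfer speed, drop the OPP vote on suspend, and restore
 * 4x the cached speed on resume. opp_rate stands in for what
 * dev_pm_opp_set_rate() would program. */
struct qspi_ctrl {
	unsigned long last_speed;	/* transfer clock, Hz */
	unsigned long opp_rate;		/* currently voted core rate */
};

static void qspi_runtime_suspend(struct qspi_ctrl *c)
{
	c->opp_rate = 0;		/* drop the performance-state vote */
}

static void qspi_runtime_resume(struct qspi_ctrl *c)
{
	/* SBL_EN=1: core clock = 4x transfer clock */
	c->opp_rate = c->last_speed * 4;
}
```

Caching `last_speed` is what lets resume restore the correct rate without re-deriving it from the next transfer.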
+24 -3
drivers/tee/optee/core.c
··· 17 17 #include <linux/tee_drv.h> 18 18 #include <linux/types.h> 19 19 #include <linux/uaccess.h> 20 + #include <linux/workqueue.h> 20 21 #include "optee_private.h" 21 22 #include "optee_smc.h" 22 23 #include "shm_pool.h" ··· 219 218 *vers = v; 220 219 } 221 220 221 + static void optee_bus_scan(struct work_struct *work) 222 + { 223 + WARN_ON(optee_enumerate_devices(PTA_CMD_GET_DEVICES_SUPP)); 224 + } 225 + 222 226 static int optee_open(struct tee_context *ctx) 223 227 { 224 228 struct optee_context_data *ctxdata; ··· 247 241 kfree(ctxdata); 248 242 return -EBUSY; 249 243 } 250 - } 251 244 245 + if (!optee->scan_bus_done) { 246 + INIT_WORK(&optee->scan_bus_work, optee_bus_scan); 247 + optee->scan_bus_wq = create_workqueue("optee_bus_scan"); 248 + if (!optee->scan_bus_wq) { 249 + kfree(ctxdata); 250 + return -ECHILD; 251 + } 252 + queue_work(optee->scan_bus_wq, &optee->scan_bus_work); 253 + optee->scan_bus_done = true; 254 + } 255 + } 252 256 mutex_init(&ctxdata->mutex); 253 257 INIT_LIST_HEAD(&ctxdata->sess_list); 254 258 ··· 312 296 313 297 ctx->data = NULL; 314 298 315 - if (teedev == optee->supp_teedev) 299 + if (teedev == optee->supp_teedev) { 300 + if (optee->scan_bus_wq) { 301 + destroy_workqueue(optee->scan_bus_wq); 302 + optee->scan_bus_wq = NULL; 303 + } 316 304 optee_supp_release(&optee->supp); 305 + } 317 306 } 318 307 319 308 static const struct tee_driver_ops optee_ops = { ··· 696 675 697 676 platform_set_drvdata(pdev, optee); 698 677 699 - rc = optee_enumerate_devices(); 678 + rc = optee_enumerate_devices(PTA_CMD_GET_DEVICES); 700 679 if (rc) { 701 680 optee_remove(pdev); 702 681 return rc;
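`optee_open()` above queues the supplicant-dependent bus scan exactly once: the first open of the supplicant context latches `scan_bus_done`, so later opens do not rescan. The same one-shot gating logic, with the workqueue reduced to a counter:

```c
#include <assert.h>
#include <stdbool.h>

/* Model of the one-shot deferred bus scan optee_open() adds: the first
 * time the supplicant context opens, queue a scan and latch
 * scan_bus_done so subsequent opens do nothing. The workqueue is
 * reduced to a counter here; real code calls queue_work(). */
struct optee_state {
	bool scan_bus_done;
	int scans_queued;
};

static void open_supplicant(struct optee_state *o)
{
	if (!o->scan_bus_done) {
		o->scans_queued++;	/* real code: queue_work(scan_bus_wq, ...) */
		o->scan_bus_done = true;
	}
}
```

Deferring the scan to a workqueue matters because the enumerated devices may need the tee-supplicant that is only just opening; scanning synchronously in `open()` would deadlock on it.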
+17 -21
drivers/tee/optee/device.c
··· 11 11 #include <linux/uuid.h> 12 12 #include "optee_private.h" 13 13 14 - /* 15 - * Get device UUIDs 16 - * 17 - * [out] memref[0] Array of device UUIDs 18 - * 19 - * Return codes: 20 - * TEE_SUCCESS - Invoke command success 21 - * TEE_ERROR_BAD_PARAMETERS - Incorrect input param 22 - * TEE_ERROR_SHORT_BUFFER - Output buffer size less than required 23 - */ 24 - #define PTA_CMD_GET_DEVICES 0x0 25 - 26 14 static int optee_ctx_match(struct tee_ioctl_version_data *ver, const void *data) 27 15 { 28 16 if (ver->impl_id == TEE_IMPL_ID_OPTEE) ··· 20 32 } 21 33 22 34 static int get_devices(struct tee_context *ctx, u32 session, 23 - struct tee_shm *device_shm, u32 *shm_size) 35 + struct tee_shm *device_shm, u32 *shm_size, 36 + u32 func) 24 37 { 25 38 int ret = 0; 26 39 struct tee_ioctl_invoke_arg inv_arg; ··· 30 41 memset(&inv_arg, 0, sizeof(inv_arg)); 31 42 memset(&param, 0, sizeof(param)); 32 43 33 - /* Invoke PTA_CMD_GET_DEVICES function */ 34 - inv_arg.func = PTA_CMD_GET_DEVICES; 44 + inv_arg.func = func; 35 45 inv_arg.session = session; 36 46 inv_arg.num_params = 4; 37 47 ··· 53 65 return 0; 54 66 } 55 67 56 - static int optee_register_device(const uuid_t *device_uuid, u32 device_id) 68 + static int optee_register_device(const uuid_t *device_uuid) 57 69 { 58 70 struct tee_client_device *optee_device = NULL; 59 71 int rc; ··· 63 75 return -ENOMEM; 64 76 65 77 optee_device->dev.bus = &tee_bus_type; 66 - dev_set_name(&optee_device->dev, "optee-clnt%u", device_id); 78 + if (dev_set_name(&optee_device->dev, "optee-ta-%pUb", device_uuid)) { 79 + kfree(optee_device); 80 + return -ENOMEM; 81 + } 67 82 uuid_copy(&optee_device->id.uuid, device_uuid); 68 83 69 84 rc = device_register(&optee_device->dev); ··· 78 87 return rc; 79 88 } 80 89 81 - int optee_enumerate_devices(void) 90 + static int __optee_enumerate_devices(u32 func) 82 91 { 83 92 const uuid_t pta_uuid = 84 93 UUID_INIT(0x7011a688, 0xddde, 0x4053, ··· 109 118 goto out_ctx; 110 119 } 111 120 112 - rc = 
get_devices(ctx, sess_arg.session, NULL, &shm_size); 121 + rc = get_devices(ctx, sess_arg.session, NULL, &shm_size, func); 113 122 if (rc < 0 || !shm_size) 114 123 goto out_sess; 115 124 ··· 121 130 goto out_sess; 122 131 } 123 132 124 - rc = get_devices(ctx, sess_arg.session, device_shm, &shm_size); 133 + rc = get_devices(ctx, sess_arg.session, device_shm, &shm_size, func); 125 134 if (rc < 0) 126 135 goto out_shm; 127 136 ··· 135 144 num_devices = shm_size / sizeof(uuid_t); 136 145 137 146 for (idx = 0; idx < num_devices; idx++) { 138 - rc = optee_register_device(&device_uuid[idx], idx); 147 + rc = optee_register_device(&device_uuid[idx]); 139 148 if (rc) 140 149 goto out_shm; 141 150 } ··· 148 157 tee_client_close_context(ctx); 149 158 150 159 return rc; 160 + } 161 + 162 + int optee_enumerate_devices(u32 func) 163 + { 164 + return __optee_enumerate_devices(func); 151 165 }
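`optee_register_device()` above also changes device naming from an enumeration index (`optee-clnt%u`) to the TA's UUID (`optee-ta-%pUb`), so a device keeps the same name regardless of the order the secure world reports it in. `%pUb` is a kernel printk extension (big-endian byte order); a userspace sketch has to format the 16 bytes by hand:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Sketch of UUID-based device naming equivalent to the kernel's
 * dev_set_name(dev, "optee-ta-%pUb", uuid). The helper name and
 * signature are illustrative, not part of any kernel API. */
static void format_optee_name(char *buf, size_t len, const uint8_t u[16])
{
	snprintf(buf, len,
		 "optee-ta-%02x%02x%02x%02x-%02x%02x-%02x%02x-%02x%02x-"
		 "%02x%02x%02x%02x%02x%02x",
		 u[0], u[1], u[2], u[3], u[4], u[5], u[6], u[7],
		 u[8], u[9], u[10], u[11], u[12], u[13], u[14], u[15]);
}
```

Index-based names were unstable: removing one TA shifted every later device's name. A UUID-derived name is a pure function of the TA itself.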
+9 -1
drivers/tee/optee/optee_private.h
··· 78 78 * @memremaped_shm virtual address of memory in shared memory pool 79 79 * @sec_caps: secure world capabilities defined by 80 80 * OPTEE_SMC_SEC_CAP_* in optee_smc.h 81 + * @scan_bus_done flag if device registration was already done. 82 + * @scan_bus_wq workqueue to scan optee bus and register optee drivers 83 + * @scan_bus_work workq to scan optee bus and register optee drivers 81 84 */ 82 85 struct optee { 83 86 struct tee_device *supp_teedev; ··· 92 89 struct tee_shm_pool *pool; 93 90 void *memremaped_shm; 94 91 u32 sec_caps; 92 + bool scan_bus_done; 93 + struct workqueue_struct *scan_bus_wq; 94 + struct work_struct scan_bus_work; 95 95 }; 96 96 97 97 struct optee_session { ··· 179 173 void optee_fill_pages_list(u64 *dst, struct page **pages, int num_pages, 180 174 size_t page_offset); 181 175 182 - int optee_enumerate_devices(void); 176 + #define PTA_CMD_GET_DEVICES 0x0 177 + #define PTA_CMD_GET_DEVICES_SUPP 0x1 178 + int optee_enumerate_devices(u32 func); 183 179 184 180 /* 185 181 * Small helpers
+162 -48
drivers/tty/serial/qcom_geni_serial.c
··· 9 9 #include <linux/module.h> 10 10 #include <linux/of.h> 11 11 #include <linux/of_device.h> 12 + #include <linux/pm_opp.h> 12 13 #include <linux/platform_device.h> 13 14 #include <linux/pm_runtime.h> 14 15 #include <linux/pm_wakeirq.h> ··· 103 102 #define DEFAULT_IO_MACRO_IO2_IO3_MASK GENMASK(15, 4) 104 103 #define IO_MACRO_IO2_IO3_SWAP 0x4640 105 104 106 - #ifdef CONFIG_CONSOLE_POLL 107 - #define CONSOLE_RX_BYTES_PW 1 108 - #else 109 - #define CONSOLE_RX_BYTES_PW 4 110 - #endif 105 + /* We always configure 4 bytes per FIFO word */ 106 + #define BYTES_PER_FIFO_WORD 4 107 + 108 + struct qcom_geni_private_data { 109 + /* NOTE: earlycon port will have NULL here */ 110 + struct uart_driver *drv; 111 + 112 + u32 poll_cached_bytes; 113 + unsigned int poll_cached_bytes_cnt; 114 + 115 + u32 write_cached_bytes; 116 + unsigned int write_cached_bytes_cnt; 117 + }; 111 118 112 119 struct qcom_geni_serial_port { 113 120 struct uart_port uport; ··· 127 118 bool setup; 128 119 int (*handle_rx)(struct uart_port *uport, u32 bytes, bool drop); 129 120 unsigned int baud; 130 - unsigned int tx_bytes_pw; 131 - unsigned int rx_bytes_pw; 132 121 void *rx_fifo; 133 122 u32 loopback; 134 123 bool brk; ··· 135 128 int wakeup_irq; 136 129 bool rx_tx_swap; 137 130 bool cts_rts_swap; 131 + 132 + struct qcom_geni_private_data private_data; 138 133 }; 139 134 140 135 static const struct uart_ops qcom_geni_console_pops; ··· 272 263 unsigned int baud; 273 264 unsigned int fifo_bits; 274 265 unsigned long timeout_us = 20000; 266 + struct qcom_geni_private_data *private_data = uport->private_data; 275 267 276 - if (uport->private_data) { 268 + if (private_data->drv) { 277 269 port = to_dev_port(uport, uport); 278 270 baud = port->baud; 279 271 if (!baud) ··· 340 330 } 341 331 342 332 #ifdef CONFIG_CONSOLE_POLL 333 + 343 334 static int qcom_geni_serial_get_char(struct uart_port *uport) 344 335 { 345 - u32 rx_fifo; 336 + struct qcom_geni_private_data *private_data = uport->private_data; 346 337 
 	u32 status;
+	u32 word_cnt;
+	int ret;
 
-	status = readl(uport->membase + SE_GENI_M_IRQ_STATUS);
-	writel(status, uport->membase + SE_GENI_M_IRQ_CLEAR);
+	if (!private_data->poll_cached_bytes_cnt) {
+		status = readl(uport->membase + SE_GENI_M_IRQ_STATUS);
+		writel(status, uport->membase + SE_GENI_M_IRQ_CLEAR);
 
-	status = readl(uport->membase + SE_GENI_S_IRQ_STATUS);
-	writel(status, uport->membase + SE_GENI_S_IRQ_CLEAR);
+		status = readl(uport->membase + SE_GENI_S_IRQ_STATUS);
+		writel(status, uport->membase + SE_GENI_S_IRQ_CLEAR);
 
-	status = readl(uport->membase + SE_GENI_RX_FIFO_STATUS);
-	if (!(status & RX_FIFO_WC_MSK))
-		return NO_POLL_CHAR;
+		status = readl(uport->membase + SE_GENI_RX_FIFO_STATUS);
+		word_cnt = status & RX_FIFO_WC_MSK;
+		if (!word_cnt)
+			return NO_POLL_CHAR;
 
-	rx_fifo = readl(uport->membase + SE_GENI_RX_FIFOn);
-	return rx_fifo & 0xff;
+		if (word_cnt == 1 && (status & RX_LAST))
+			private_data->poll_cached_bytes_cnt =
+				(status & RX_LAST_BYTE_VALID_MSK) >>
+				RX_LAST_BYTE_VALID_SHFT;
+		else
+			private_data->poll_cached_bytes_cnt = 4;
+
+		private_data->poll_cached_bytes =
+			readl(uport->membase + SE_GENI_RX_FIFOn);
+	}
+
+	private_data->poll_cached_bytes_cnt--;
+	ret = private_data->poll_cached_bytes & 0xff;
+	private_data->poll_cached_bytes >>= 8;
+
+	return ret;
 }
 
 static void qcom_geni_serial_poll_put_char(struct uart_port *uport,
···
 #ifdef CONFIG_SERIAL_QCOM_GENI_CONSOLE
 static void qcom_geni_serial_wr_char(struct uart_port *uport, int ch)
 {
-	writel(ch, uport->membase + SE_GENI_TX_FIFOn);
+	struct qcom_geni_private_data *private_data = uport->private_data;
+
+	private_data->write_cached_bytes =
+		(private_data->write_cached_bytes >> 8) | (ch << 24);
+	private_data->write_cached_bytes_cnt++;
+
+	if (private_data->write_cached_bytes_cnt == BYTES_PER_FIFO_WORD) {
+		writel(private_data->write_cached_bytes,
+		       uport->membase + SE_GENI_TX_FIFOn);
+		private_data->write_cached_bytes_cnt = 0;
+	}
 }
 
 static void
 __qcom_geni_serial_console_write(struct uart_port *uport, const char *s,
 				 unsigned int count)
 {
+	struct qcom_geni_private_data *private_data = uport->private_data;
+
 	int i;
 	u32 bytes_to_send = count;
 
···
 		       SE_GENI_M_IRQ_CLEAR);
 		i += chars_to_write;
 	}
+
+	if (private_data->write_cached_bytes_cnt) {
+		private_data->write_cached_bytes >>= BITS_PER_BYTE *
+			(BYTES_PER_FIFO_WORD - private_data->write_cached_bytes_cnt);
+		writel(private_data->write_cached_bytes,
+		       uport->membase + SE_GENI_TX_FIFOn);
+		private_data->write_cached_bytes_cnt = 0;
+	}
+
 	qcom_geni_serial_poll_tx_done(uport);
 }
 
···
 	tport = &uport->state->port;
 	for (i = 0; i < bytes; ) {
 		int c;
-		int chunk = min_t(int, bytes - i, port->rx_bytes_pw);
+		int chunk = min_t(int, bytes - i, BYTES_PER_FIFO_WORD);
 
 		ioread32_rep(uport->membase + SE_GENI_RX_FIFOn, buf, 1);
 		i += chunk;
···
 
 	if (!word_cnt)
 		return;
-	total_bytes = port->rx_bytes_pw * (word_cnt - 1);
+	total_bytes = BYTES_PER_FIFO_WORD * (word_cnt - 1);
 	if (last_word_partial && last_word_byte_cnt)
 		total_bytes += last_word_byte_cnt;
 	else
-		total_bytes += port->rx_bytes_pw;
+		total_bytes += BYTES_PER_FIFO_WORD;
 	port->handle_rx(uport, total_bytes, drop);
 }
 
···
 	}
 
 	avail = port->tx_fifo_depth - (status & TX_FIFO_WC);
-	avail *= port->tx_bytes_pw;
+	avail *= BYTES_PER_FIFO_WORD;
 
 	tail = xmit->tail;
 	chunk = min(avail, pending);
···
 		int c;
 
 		memset(buf, 0, ARRAY_SIZE(buf));
-		tx_bytes = min_t(size_t, remaining, port->tx_bytes_pw);
+		tx_bytes = min_t(size_t, remaining, BYTES_PER_FIFO_WORD);
 
 		for (c = 0; c < tx_bytes ; c++) {
 			buf[c] = xmit->buf[tail++];
···
 	u32 proto;
 	u32 pin_swap;
 
-	if (uart_console(uport)) {
-		port->tx_bytes_pw = 1;
-		port->rx_bytes_pw = CONSOLE_RX_BYTES_PW;
-	} else {
-		port->tx_bytes_pw = 4;
-		port->rx_bytes_pw = 4;
-	}
-
 	proto = geni_se_read_proto(&port->se);
 	if (proto != GENI_SE_UART) {
 		dev_err(uport->dev, "Invalid FW loaded, proto: %d\n", proto);
···
 	 */
 	if (uart_console(uport))
 		qcom_geni_serial_poll_tx_done(uport);
-	geni_se_config_packing(&port->se, BITS_PER_BYTE, port->tx_bytes_pw,
-			       false, true, false);
-	geni_se_config_packing(&port->se, BITS_PER_BYTE, port->rx_bytes_pw,
-			       false, false, true);
+	geni_se_config_packing(&port->se, BITS_PER_BYTE, BYTES_PER_FIFO_WORD,
+			       false, true, true);
 	geni_se_init(&port->se, UART_RX_WM, port->rx_fifo_depth - 2);
 	geni_se_select_mode(&port->se, GENI_SE_FIFO);
 	port->setup = true;
···
 	struct qcom_geni_serial_port *port = to_dev_port(uport, uport);
 	unsigned long clk_rate;
 	u32 ver, sampling_rate;
+	unsigned int avg_bw_core;
 
 	qcom_geni_serial_stop_rx(uport);
 	/* baud rate */
···
 		goto out_restart_rx;
 
 	uport->uartclk = clk_rate;
-	clk_set_rate(port->se.clk, clk_rate);
+	dev_pm_opp_set_rate(uport->dev, clk_rate);
 	ser_clk_cfg = SER_CLK_EN;
 	ser_clk_cfg |= clk_div << CLK_DIV_SHFT;
+
+	/*
+	 * Bump up BW vote on CPU and CORE path as driver supports FIFO mode
+	 * only.
+	 */
+	avg_bw_core = (baud > 115200) ? Bps_to_icc(CORE_2X_50_MHZ)
+						: GENI_DEFAULT_BW;
+	port->se.icc_paths[GENI_TO_CORE].avg_bw = avg_bw_core;
+	port->se.icc_paths[CPU_TO_GENI].avg_bw = Bps_to_icc(baud);
+	geni_icc_set_bw(&port->se);
 
 	/* parity */
 	tx_trans_cfg = readl(uport->membase + SE_UART_TX_TRANS_CFG);
···
 					 struct console *con) { }
 #endif
 
+static int qcom_geni_serial_earlycon_exit(struct console *con)
+{
+	geni_remove_earlycon_icc_vote();
+	return 0;
+}
+
+static struct qcom_geni_private_data earlycon_private_data;
+
 static int __init qcom_geni_serial_earlycon_setup(struct earlycon_device *dev,
 						  const char *opt)
 {
···
 
 	if (!uport->membase)
 		return -EINVAL;
+
+	uport->private_data = &earlycon_private_data;
 
 	memset(&se, 0, sizeof(se));
 	se.base = uport->membase;
···
 	 */
 	qcom_geni_serial_poll_tx_done(uport);
 	qcom_geni_serial_abort_rx(uport);
-	geni_se_config_packing(&se, BITS_PER_BYTE, 1, false, true, false);
+	geni_se_config_packing(&se, BITS_PER_BYTE, BYTES_PER_FIFO_WORD,
+			       false, true, true);
 	geni_se_init(&se, DEF_FIFO_DEPTH_WORDS / 2, DEF_FIFO_DEPTH_WORDS - 2);
 	geni_se_select_mode(&se, GENI_SE_FIFO);
 
···
 	writel(stop_bit_len, uport->membase + SE_UART_TX_STOP_BIT_LEN);
 
 	dev->con->write = qcom_geni_serial_earlycon_write;
+	dev->con->exit = qcom_geni_serial_earlycon_exit;
 	dev->con->setup = NULL;
 	qcom_geni_serial_enable_early_read(&se, dev->con);
 
···
 	if (old_state == UART_PM_STATE_UNDEFINED)
 		old_state = UART_PM_STATE_OFF;
 
-	if (new_state == UART_PM_STATE_ON && old_state == UART_PM_STATE_OFF)
+	if (new_state == UART_PM_STATE_ON && old_state == UART_PM_STATE_OFF) {
+		geni_icc_enable(&port->se);
 		geni_se_resources_on(&port->se);
-	else if (new_state == UART_PM_STATE_OFF &&
-			old_state == UART_PM_STATE_ON)
+	} else if (new_state == UART_PM_STATE_OFF &&
+			old_state == UART_PM_STATE_ON) {
 		geni_se_resources_off(&port->se);
+		geni_icc_disable(&port->se);
+	}
 }
 
 static const struct uart_ops qcom_geni_console_pops = {
···
 		return -ENOMEM;
 	}
 
+	ret = geni_icc_get(&port->se, NULL);
+	if (ret)
+		return ret;
+	port->se.icc_paths[GENI_TO_CORE].avg_bw = GENI_DEFAULT_BW;
+	port->se.icc_paths[CPU_TO_GENI].avg_bw = GENI_DEFAULT_BW;
+
+	/* Set BW for register access */
+	ret = geni_icc_set_bw(&port->se);
+	if (ret)
+		return ret;
+
 	port->name = devm_kasprintf(uport->dev, GFP_KERNEL,
 			"qcom_geni_serial_%s%d",
 			uart_console(uport) ? "console" : "uart", uport->line);
···
 	if (of_property_read_bool(pdev->dev.of_node, "cts-rts-swap"))
 		port->cts_rts_swap = true;
 
-	uport->private_data = drv;
+	port->se.opp_table = dev_pm_opp_set_clkname(&pdev->dev, "se");
+	if (IS_ERR(port->se.opp_table))
+		return PTR_ERR(port->se.opp_table);
+	/* OPP table is optional */
+	ret = dev_pm_opp_of_add_table(&pdev->dev);
+	if (!ret) {
+		port->se.has_opp_table = true;
+	} else if (ret != -ENODEV) {
+		dev_err(&pdev->dev, "invalid OPP table in device tree\n");
+		return ret;
+	}
+
+	port->private_data.drv = drv;
+	uport->private_data = &port->private_data;
 	platform_set_drvdata(pdev, port);
 	port->handle_rx = console ? handle_rx_console : handle_rx_uart;
 
 	ret = uart_add_one_port(drv, uport);
 	if (ret)
-		return ret;
+		goto err;
 
 	irq_set_status_flags(uport->irq, IRQ_NOAUTOEN);
 	ret = devm_request_irq(uport->dev, uport->irq, qcom_geni_serial_isr,
···
 	if (ret) {
 		dev_err(uport->dev, "Failed to get IRQ ret %d\n", ret);
 		uart_remove_one_port(drv, uport);
-		return ret;
+		goto err;
 	}
 
 	/*
···
 		if (ret) {
 			device_init_wakeup(&pdev->dev, false);
 			uart_remove_one_port(drv, uport);
-			return ret;
+			goto err;
 		}
 	}
 
 	return 0;
+err:
+	if (port->se.has_opp_table)
+		dev_pm_opp_of_remove_table(&pdev->dev);
+	dev_pm_opp_put_clkname(port->se.opp_table);
+	return ret;
 }
 
 static int qcom_geni_serial_remove(struct platform_device *pdev)
 {
 	struct qcom_geni_serial_port *port = platform_get_drvdata(pdev);
-	struct uart_driver *drv = port->uport.private_data;
+	struct uart_driver *drv = port->private_data.drv;
 
+	if (port->se.has_opp_table)
+		dev_pm_opp_of_remove_table(&pdev->dev);
+	dev_pm_opp_put_clkname(port->se.opp_table);
 	dev_pm_clear_wake_irq(&pdev->dev);
 	device_init_wakeup(&pdev->dev, false);
 	uart_remove_one_port(drv, &port->uport);
···
 {
 	struct qcom_geni_serial_port *port = dev_get_drvdata(dev);
 	struct uart_port *uport = &port->uport;
+	struct qcom_geni_private_data *private_data = uport->private_data;
 
-	return uart_suspend_port(uport->private_data, uport);
+	/*
+	 * This is done so we can hit the lowest possible state in suspend
+	 * even with no_console_suspend
+	 */
+	if (uart_console(uport)) {
+		geni_icc_set_tag(&port->se, 0x3);
+		geni_icc_set_bw(&port->se);
+	}
+	return uart_suspend_port(private_data->drv, uport);
 }
 
 static int __maybe_unused qcom_geni_serial_sys_resume(struct device *dev)
 {
+	int ret;
 	struct qcom_geni_serial_port *port = dev_get_drvdata(dev);
 	struct uart_port *uport = &port->uport;
+	struct qcom_geni_private_data *private_data = uport->private_data;
 
-	return uart_resume_port(uport->private_data, uport);
+	ret = uart_resume_port(private_data->drv, uport);
+	if (uart_console(uport)) {
+		geni_icc_set_tag(&port->se, 0x7);
+		geni_icc_set_bw(&port->se);
+	}
+	return ret;
 }
 
 static const struct dev_pm_ops qcom_geni_serial_pm_ops = {
+1 -1
include/dt-bindings/reset/ti-syscon.h
···
 /*
  * TI Syscon Reset definitions
  *
- * Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/
  */
 
 #ifndef __DT_BINDINGS_RESET_TI_SYSCON_H__
+5
include/linux/arm-smccc.h
···
 			   ARM_SMCCC_SMC_32,				\
 			   0, 1)
 
+#define ARM_SMCCC_ARCH_SOC_ID						\
+	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
+			   ARM_SMCCC_SMC_32,				\
+			   0, 2)
+
 #define ARM_SMCCC_ARCH_WORKAROUND_1					\
 	ARM_SMCCC_CALL_VAL(ARM_SMCCC_FAST_CALL,				\
 			   ARM_SMCCC_SMC_32,				\
+2
include/linux/firmware/imx/sci.h
···
 
 #include <linux/firmware/imx/svc/misc.h>
 #include <linux/firmware/imx/svc/pm.h>
+#include <linux/firmware/imx/svc/rm.h>
 
 int imx_scu_enable_general_irq_channel(struct device *dev);
 int imx_scu_irq_register_notifier(struct notifier_block *nb);
 int imx_scu_irq_unregister_notifier(struct notifier_block *nb);
 int imx_scu_irq_group_enable(u8 group, u32 mask, u8 enable);
+int imx_scu_soc_init(struct device *dev);
 #endif /* _SC_SCI_H */
+69
include/linux/firmware/imx/svc/rm.h
···
+/* SPDX-License-Identifier: GPL-2.0+ */
+/*
+ * Copyright (C) 2016 Freescale Semiconductor, Inc.
+ * Copyright 2017-2020 NXP
+ *
+ * Header file containing the public API for the System Controller (SC)
+ * Resource Management (RM) function. This includes functions for
+ * partitioning resources, pads, and memory regions.
+ *
+ * RM_SVC (SVC) Resource Management Service
+ *
+ * Module for the Resource Management (RM) service.
+ */
+
+#ifndef _SC_RM_API_H
+#define _SC_RM_API_H
+
+#include <linux/firmware/imx/sci.h>
+
+/*
+ * This type is used to indicate RPC RM function calls.
+ */
+enum imx_sc_rm_func {
+	IMX_SC_RM_FUNC_UNKNOWN = 0,
+	IMX_SC_RM_FUNC_PARTITION_ALLOC = 1,
+	IMX_SC_RM_FUNC_SET_CONFIDENTIAL = 31,
+	IMX_SC_RM_FUNC_PARTITION_FREE = 2,
+	IMX_SC_RM_FUNC_GET_DID = 26,
+	IMX_SC_RM_FUNC_PARTITION_STATIC = 3,
+	IMX_SC_RM_FUNC_PARTITION_LOCK = 4,
+	IMX_SC_RM_FUNC_GET_PARTITION = 5,
+	IMX_SC_RM_FUNC_SET_PARENT = 6,
+	IMX_SC_RM_FUNC_MOVE_ALL = 7,
+	IMX_SC_RM_FUNC_ASSIGN_RESOURCE = 8,
+	IMX_SC_RM_FUNC_SET_RESOURCE_MOVABLE = 9,
+	IMX_SC_RM_FUNC_SET_SUBSYS_RSRC_MOVABLE = 28,
+	IMX_SC_RM_FUNC_SET_MASTER_ATTRIBUTES = 10,
+	IMX_SC_RM_FUNC_SET_MASTER_SID = 11,
+	IMX_SC_RM_FUNC_SET_PERIPHERAL_PERMISSIONS = 12,
+	IMX_SC_RM_FUNC_IS_RESOURCE_OWNED = 13,
+	IMX_SC_RM_FUNC_GET_RESOURCE_OWNER = 33,
+	IMX_SC_RM_FUNC_IS_RESOURCE_MASTER = 14,
+	IMX_SC_RM_FUNC_IS_RESOURCE_PERIPHERAL = 15,
+	IMX_SC_RM_FUNC_GET_RESOURCE_INFO = 16,
+	IMX_SC_RM_FUNC_MEMREG_ALLOC = 17,
+	IMX_SC_RM_FUNC_MEMREG_SPLIT = 29,
+	IMX_SC_RM_FUNC_MEMREG_FRAG = 32,
+	IMX_SC_RM_FUNC_MEMREG_FREE = 18,
+	IMX_SC_RM_FUNC_FIND_MEMREG = 30,
+	IMX_SC_RM_FUNC_ASSIGN_MEMREG = 19,
+	IMX_SC_RM_FUNC_SET_MEMREG_PERMISSIONS = 20,
+	IMX_SC_RM_FUNC_IS_MEMREG_OWNED = 21,
+	IMX_SC_RM_FUNC_GET_MEMREG_INFO = 22,
+	IMX_SC_RM_FUNC_ASSIGN_PAD = 23,
+	IMX_SC_RM_FUNC_SET_PAD_MOVABLE = 24,
+	IMX_SC_RM_FUNC_IS_PAD_OWNED = 25,
+	IMX_SC_RM_FUNC_DUMP = 27,
+};
+
+#if IS_ENABLED(CONFIG_IMX_SCU)
+bool imx_sc_rm_is_resource_owned(struct imx_sc_ipc *ipc, u16 resource);
+#else
+static inline bool
+imx_sc_rm_is_resource_owned(struct imx_sc_ipc *ipc, u16 resource)
+{
+	return true;
+}
+#endif
+#endif
+2
include/linux/mailbox/mtk-cmdq-mailbox.h
···
 #define CMDQ_JUMP_PASS			CMDQ_INST_SIZE
 
 #define CMDQ_WFE_UPDATE			BIT(31)
+#define CMDQ_WFE_UPDATE_VALUE		BIT(16)
 #define CMDQ_WFE_WAIT			BIT(15)
 #define CMDQ_WFE_WAIT_VALUE		0x1
···
 	CMDQ_CODE_JUMP = 0x10,
 	CMDQ_CODE_WFE = 0x20,
 	CMDQ_CODE_EOC = 0x40,
+	CMDQ_CODE_LOGIC = 0xa0,
 };
 
 enum cmdq_cb_status {
+5
include/linux/of.h
···
 	return NULL;
 }
 
+static inline struct device_node *of_get_next_parent(struct device_node *node)
+{
+	return NULL;
+}
+
 static inline struct device_node *of_get_next_child(
 	const struct device_node *node, struct device_node *prev)
 {
+45
include/linux/qcom-geni-se.h
···
 #ifndef _LINUX_QCOM_GENI_SE
 #define _LINUX_QCOM_GENI_SE
 
+#include <linux/interconnect.h>
+
 /* Transfer mode supported by GENI Serial Engines */
 enum geni_se_xfer_mode {
 	GENI_SE_INVALID,
···
 struct geni_wrapper;
 struct clk;
 
+enum geni_icc_path_index {
+	GENI_TO_CORE,
+	CPU_TO_GENI,
+	GENI_TO_DDR
+};
+
+struct geni_icc_path {
+	struct icc_path *path;
+	unsigned int avg_bw;
+};
+
 /**
  * struct geni_se - GENI Serial Engine
  * @base:		Base Address of the Serial Engine's register block
···
 * @clk:		Handle to the core serial engine clock
 * @num_clk_levels:	Number of valid clock levels in clk_perf_tbl
 * @clk_perf_tbl:	Table of clock frequency input to serial engine clock
+ * @icc_paths:		Array of ICC paths for SE
+ * @opp_table:		Pointer to the OPP table
+ * @has_opp_table:	Specifies if the SE has an OPP table
 */
 struct geni_se {
 	void __iomem *base;
···
 	struct clk *clk;
 	unsigned int num_clk_levels;
 	unsigned long *clk_perf_tbl;
+	struct geni_icc_path icc_paths[3];
+	struct opp_table *opp_table;
+	bool has_opp_table;
 };
 
 /* Common SE registers */
···
 #define GENI_SE_VERSION_MINOR(ver) ((ver & HW_VER_MINOR_MASK) >> HW_VER_MINOR_SHFT)
 #define GENI_SE_VERSION_STEP(ver) (ver & HW_VER_STEP_MASK)
 
+/*
+ * Define bandwidth thresholds that cause the underlying Core 2X interconnect
+ * clock to run at the named frequency. These baseline values are recommended
+ * by the hardware team, and are not dynamically scaled with GENI bandwidth
+ * beyond basic on/off.
+ */
+#define CORE_2X_19_2_MHZ		960
+#define CORE_2X_50_MHZ			2500
+#define CORE_2X_100_MHZ			5000
+#define CORE_2X_150_MHZ			7500
+#define CORE_2X_200_MHZ			10000
+#define CORE_2X_236_MHZ			16383
+
+#define GENI_DEFAULT_BW			Bps_to_icc(1000)
+
 #if IS_ENABLED(CONFIG_QCOM_GENI_SE)
 
 u32 geni_se_get_qup_hw_version(struct geni_se *se);
···
 void geni_se_tx_dma_unprep(struct geni_se *se, dma_addr_t iova, size_t len);
 
 void geni_se_rx_dma_unprep(struct geni_se *se, dma_addr_t iova, size_t len);
+
+int geni_icc_get(struct geni_se *se, const char *icc_ddr);
+
+int geni_icc_set_bw(struct geni_se *se);
+void geni_icc_set_tag(struct geni_se *se, u32 tag);
+
+int geni_icc_enable(struct geni_se *se);
+
+int geni_icc_disable(struct geni_se *se);
+
+void geni_remove_earlycon_icc_vote(void);
 #endif
 #endif
+105 -5
include/linux/scmi_protocol.h
···
 #define _LINUX_SCMI_PROTOCOL_H
 
 #include <linux/device.h>
+#include <linux/notifier.h>
 #include <linux/types.h>
 
 #define SCMI_MAX_STR_SIZE	16
···
 			unsigned long *rate, bool poll);
 	int (*est_power_get)(const struct scmi_handle *handle, u32 domain,
 			     unsigned long *rate, unsigned long *power);
+	bool (*fast_switch_possible)(const struct scmi_handle *handle,
+				     struct device *dev);
 };
 
 /**
···
 *
 * @count_get: get the count of sensors provided by SCMI
 * @info_get: get the information of the specified sensor
- * @trip_point_notify: control notifications on cross-over events for
- *	the trip-points
 * @trip_point_config: selects and configures a trip-point of interest
 * @reading_get: gets the current value of the sensor
 */
 struct scmi_sensor_ops {
 	int (*count_get)(const struct scmi_handle *handle);
-
 	const struct scmi_sensor_info *(*info_get)
 		(const struct scmi_handle *handle, u32 sensor_id);
-	int (*trip_point_notify)(const struct scmi_handle *handle,
-				 u32 sensor_id, bool enable);
 	int (*trip_point_config)(const struct scmi_handle *handle,
 				 u32 sensor_id, u8 trip_id, u64 trip_value);
 	int (*reading_get)(const struct scmi_handle *handle, u32 sensor_id,
···
 };
 
 /**
+ * struct scmi_notify_ops - represents notifications' operations provided by
+ * SCMI core
+ * @register_event_notifier: Register a notifier_block for the requested event
+ * @unregister_event_notifier: Unregister a notifier_block for the requested
+ *			       event
+ *
+ * A user can register/unregister its own notifier_block against the wanted
+ * platform instance regarding the desired event identified by the
+ * tuple: (proto_id, evt_id, src_id) using the provided register/unregister
+ * interface where:
+ *
+ * @handle: The handle identifying the platform instance to use
+ * @proto_id: The protocol ID as in SCMI Specification
+ * @evt_id: The message ID of the desired event as in SCMI Specification
+ * @src_id: A pointer to the desired source ID if different sources are
+ *	    possible for the protocol (like domain_id, sensor_id...etc)
+ *
+ * @src_id can be provided as NULL if it simply does NOT make sense for
+ * the protocol at hand, OR if the user is explicitly interested in
+ * receiving notifications from ANY existent source associated to the
+ * specified proto_id / evt_id.
+ *
+ * Received notifications are finally delivered to the registered users,
+ * invoking the callback provided with the notifier_block *nb as follows:
+ *
+ *	int user_cb(nb, evt_id, report)
+ *
+ * with:
+ *
+ * @nb: The notifier block provided by the user
+ * @evt_id: The message ID of the delivered event
+ * @report: A custom struct describing the specific event delivered
+ */
+struct scmi_notify_ops {
+	int (*register_event_notifier)(const struct scmi_handle *handle,
+				       u8 proto_id, u8 evt_id, u32 *src_id,
+				       struct notifier_block *nb);
+	int (*unregister_event_notifier)(const struct scmi_handle *handle,
+					 u8 proto_id, u8 evt_id, u32 *src_id,
+					 struct notifier_block *nb);
+};
+
+/**
 * struct scmi_handle - Handle returned to ARM SCMI clients for usage.
 *
 * @dev: pointer to the SCMI device
···
 * @clk_ops: pointer to set of clock protocol operations
 * @sensor_ops: pointer to set of sensor protocol operations
 * @reset_ops: pointer to set of reset protocol operations
+ * @notify_ops: pointer to set of notifications related operations
 * @perf_priv: pointer to private data structure specific to performance
 *	protocol(for internal use only)
 * @clk_priv: pointer to private data structure specific to clock
···
 *	protocol(for internal use only)
 * @reset_priv: pointer to private data structure specific to reset
 *	protocol(for internal use only)
+ * @notify_priv: pointer to private data structure specific to notifications
+ *	(for internal use only)
 */
 struct scmi_handle {
 	struct device *dev;
···
 	struct scmi_power_ops *power_ops;
 	struct scmi_sensor_ops *sensor_ops;
 	struct scmi_reset_ops *reset_ops;
+	struct scmi_notify_ops *notify_ops;
 	/* for protocol internal use */
 	void *perf_priv;
 	void *clk_priv;
 	void *power_priv;
 	void *sensor_priv;
 	void *reset_priv;
+	void *notify_priv;
 };
 
 enum scmi_std_protocol {
···
 typedef int (*scmi_prot_init_fn_t)(struct scmi_handle *);
 int scmi_protocol_register(int protocol_id, scmi_prot_init_fn_t fn);
 void scmi_protocol_unregister(int protocol_id);
+
+/* SCMI Notification API - Custom Event Reports */
+enum scmi_notification_events {
+	SCMI_EVENT_POWER_STATE_CHANGED = 0x0,
+	SCMI_EVENT_PERFORMANCE_LIMITS_CHANGED = 0x0,
+	SCMI_EVENT_PERFORMANCE_LEVEL_CHANGED = 0x1,
+	SCMI_EVENT_SENSOR_TRIP_POINT_EVENT = 0x0,
+	SCMI_EVENT_RESET_ISSUED = 0x0,
+	SCMI_EVENT_BASE_ERROR_EVENT = 0x0,
+};
+
+struct scmi_power_state_changed_report {
+	ktime_t		timestamp;
+	unsigned int	agent_id;
+	unsigned int	domain_id;
+	unsigned int	power_state;
+};
+
+struct scmi_perf_limits_report {
+	ktime_t		timestamp;
+	unsigned int	agent_id;
+	unsigned int	domain_id;
+	unsigned int	range_max;
+	unsigned int	range_min;
+};
+
+struct scmi_perf_level_report {
+	ktime_t		timestamp;
+	unsigned int	agent_id;
+	unsigned int	domain_id;
+	unsigned int	performance_level;
+};
+
+struct scmi_sensor_trip_point_report {
+	ktime_t		timestamp;
+	unsigned int	agent_id;
+	unsigned int	sensor_id;
+	unsigned int	trip_point_desc;
+};
+
+struct scmi_reset_issued_report {
+	ktime_t		timestamp;
+	unsigned int	agent_id;
+	unsigned int	domain_id;
+	unsigned int	reset_state;
+};
+
+struct scmi_base_error_report {
+	ktime_t			timestamp;
+	unsigned int		agent_id;
+	bool			fatal;
+	unsigned int		cmd_count;
+	unsigned long long	reports[];
+};
 
 #endif /* _LINUX_SCMI_PROTOCOL_H */
+31
include/linux/soc/mediatek/mtk-cmdq.h
···
 int cmdq_pkt_clear_event(struct cmdq_pkt *pkt, u16 event);
 
 /**
+ * cmdq_pkt_set_event() - append set event command to the CMDQ packet
+ * @pkt:	the CMDQ packet
+ * @event:	the desired event to be set
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_set_event(struct cmdq_pkt *pkt, u16 event);
+
+/**
 * cmdq_pkt_poll() - Append polling command to the CMDQ packet, ask GCE to
 *		     execute an instruction that wait for a specified
 *		     hardware register to check for the value w/o mask.
···
 */
 int cmdq_pkt_poll_mask(struct cmdq_pkt *pkt, u8 subsys,
 		       u16 offset, u32 value, u32 mask);
+
+/**
+ * cmdq_pkt_assign() - Append logic assign command to the CMDQ packet, ask GCE
+ *		       to execute an instruction that set a constant value into
+ *		       internal register and use as value, mask or address in
+ *		       read/write instruction.
+ * @pkt:	the CMDQ packet
+ * @reg_idx:	the CMDQ internal register ID
+ * @value:	the specified value
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_assign(struct cmdq_pkt *pkt, u16 reg_idx, u32 value);
+
+/**
+ * cmdq_pkt_finalize() - Append EOC and jump command to pkt.
+ * @pkt:	the CMDQ packet
+ *
+ * Return: 0 for success; else the error code is returned
+ */
+int cmdq_pkt_finalize(struct cmdq_pkt *pkt);
+
 /**
 * cmdq_pkt_flush_async() - trigger CMDQ to asynchronously execute the CMDQ
 *                          packet and call back at the end of done packet
+4
include/linux/soc/ti/k3-ringacc.h
···
 struct k3_ring *k3_ringacc_request_ring(struct k3_ringacc *ringacc,
 					int id, u32 flags);
 
+int k3_ringacc_request_rings_pair(struct k3_ringacc *ringacc,
+				  int fwd_id, int compl_id,
+				  struct k3_ring **fwd_ring,
+				  struct k3_ring **compl_ring);
 /**
 * k3_ringacc_ring_reset - ring reset
 * @ring: pointer on Ring
+1 -1
include/linux/soc/ti/ti_sci_inta_msi.h
···
 /*
  * Texas Instruments' K3 TI SCI INTA MSI helper
  *
- * Copyright (C) 2018-2019 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2018-2019 Texas Instruments Incorporated - https://www.ti.com/
  *	Lokesh Vutla <lokeshvutla@ti.com>
  */
 
+3 -3
include/linux/soc/ti/ti_sci_protocol.h
···
 /*
  * Texas Instruments System Control Interface Protocol
  *
- * Copyright (C) 2015-2016 Texas Instruments Incorporated - http://www.ti.com/
+ * Copyright (C) 2015-2016 Texas Instruments Incorporated - https://www.ti.com/
  *	Nishanth Menon
  */
 
···
 *		and destination
 * @set_event_map:	Set an Event based peripheral irq to Interrupt
 *			Aggregator.
- * @free_irq:		Free an an IRQ route between the requested source
- *			destination.
+ * @free_irq:		Free an IRQ route between the requested source
+ *			and destination.
 * @free_event_map:	Free an event based peripheral irq to Interrupt
 *			Aggregator.
 */
+4 -3
include/soc/qcom/rpmh.h
···
 int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
 		     const struct tcs_cmd *cmd, u32 *n);
 
-int rpmh_invalidate(const struct device *dev);
+void rpmh_invalidate(const struct device *dev);
 
 #else
 
···
 			const struct tcs_cmd *cmd, u32 *n)
 { return -ENODEV; }
 
-static inline int rpmh_invalidate(const struct device *dev)
-{ return -ENODEV; }
+static inline void rpmh_invalidate(const struct device *dev)
+{
+}
 
 #endif /* CONFIG_QCOM_RPMH */
 
+636 -277
include/soc/tegra/bpmp-abi.h
···
 * Copyright (c) 2014-2020, NVIDIA CORPORATION. All rights reserved.
 */
 
-#ifndef _ABI_BPMP_ABI_H_
-#define _ABI_BPMP_ABI_H_
+#ifndef ABI_BPMP_ABI_H
+#define ABI_BPMP_ABI_H
 
-#ifdef LK
+#if defined(LK) || defined(BPMP_ABI_HAVE_STDC)
+#include <stddef.h>
 #include <stdint.h>
 #endif
 
-#ifndef __ABI_PACKED
-#define __ABI_PACKED __attribute__((packed))
+#ifndef BPMP_ABI_PACKED
+#ifdef __ABI_PACKED
+#define BPMP_ABI_PACKED __ABI_PACKED
+#else
+#define BPMP_ABI_PACKED __attribute__((packed))
+#endif
 #endif
 
 #ifdef NO_GCC_EXTENSIONS
-#define EMPTY char empty;
-#define EMPTY_ARRAY 1
+#define BPMP_ABI_EMPTY char empty;
+#define BPMP_ABI_EMPTY_ARRAY 1
 #else
-#define EMPTY
-#define EMPTY_ARRAY 0
+#define BPMP_ABI_EMPTY
+#define BPMP_ABI_EMPTY_ARRAY 0
 #endif
 
-#ifndef __UNION_ANON
-#define __UNION_ANON
+#ifndef BPMP_UNION_ANON
+#ifdef __UNION_ANON
+#define BPMP_UNION_ANON __UNION_ANON
+#else
+#define BPMP_UNION_ANON
 #endif
+#endif
+
 /**
 * @file
 */
···
 struct mrq_request {
 	/** @brief MRQ number of the request */
 	uint32_t mrq;
+
 	/**
 	 * @brief Flags providing follow up directions to the receiver
 	 *
···
 	 * | 0    | should be 1 |
 	 */
 	uint32_t flags;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup MRQ_Format
···
 	int32_t err;
 	/** @brief Reserved for future use */
 	uint32_t flags;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup MRQ_Format
 * Minimum needed size for an IPC message buffer
 */
-#define MSG_MIN_SZ	128
+#define MSG_MIN_SZ	128U
 /**
 * @ingroup MRQ_Format
 * Minimum size guaranteed for data in an IPC message buffer
 */
-#define MSG_DATA_MIN_SZ	120
+#define MSG_DATA_MIN_SZ	120U
 
 /**
 * @ingroup MRQ_Codes
···
 * @{
 */
 
-#define MRQ_PING		0
-#define MRQ_QUERY_TAG		1
-#define MRQ_MODULE_LOAD		4
-#define MRQ_MODULE_UNLOAD	5
-#define MRQ_TRACE_MODIFY	7
-#define MRQ_WRITE_TRACE		8
-#define MRQ_THREADED_PING	9
-#define MRQ_MODULE_MAIL		11
-#define MRQ_DEBUGFS		19
-#define MRQ_RESET		20
-#define MRQ_I2C			21
-#define MRQ_CLK			22
-#define MRQ_QUERY_ABI		23
-#define MRQ_PG_READ_STATE	25
-#define MRQ_PG_UPDATE_STATE	26
-#define MRQ_THERMAL		27
-#define MRQ_CPU_VHINT		28
-#define MRQ_ABI_RATCHET		29
-#define MRQ_EMC_DVFS_LATENCY	31
-#define MRQ_TRACE_ITER		64
-#define MRQ_RINGBUF_CONSOLE	65
-#define MRQ_PG			66
-#define MRQ_CPU_NDIV_LIMITS	67
-#define MRQ_STRAP		68
-#define MRQ_UPHY		69
-#define MRQ_CPU_AUTO_CC3	70
-#define MRQ_QUERY_FW_TAG	71
-#define MRQ_FMON		72
-#define MRQ_EC			73
-#define MRQ_FBVOLT_STATUS	74
+#define MRQ_PING		0U
+#define MRQ_QUERY_TAG		1U
+#define MRQ_MODULE_LOAD		4U
+#define MRQ_MODULE_UNLOAD	5U
+#define MRQ_TRACE_MODIFY	7U
+#define MRQ_WRITE_TRACE		8U
+#define MRQ_THREADED_PING	9U
+#define MRQ_MODULE_MAIL		11U
+#define MRQ_DEBUGFS		19U
+#define MRQ_RESET		20U
+#define MRQ_I2C			21U
+#define MRQ_CLK			22U
+#define MRQ_QUERY_ABI		23U
+#define MRQ_PG_READ_STATE	25U
+#define MRQ_PG_UPDATE_STATE	26U
+#define MRQ_THERMAL		27U
+#define MRQ_CPU_VHINT		28U
+#define MRQ_ABI_RATCHET		29U
+#define MRQ_EMC_DVFS_LATENCY	31U
+#define MRQ_TRACE_ITER		64U
+#define MRQ_RINGBUF_CONSOLE	65U
+#define MRQ_PG			66U
+#define MRQ_CPU_NDIV_LIMITS	67U
+#define MRQ_STRAP		68U
+#define MRQ_UPHY		69U
+#define MRQ_CPU_AUTO_CC3	70U
+#define MRQ_QUERY_FW_TAG	71U
+#define MRQ_FMON		72U
+#define MRQ_EC			73U
+#define MRQ_DEBUG		75U
 
 /** @} */
 
···
 * @brief Maximum MRQ code to be sent by CPU software to
 * BPMP. Subject to change in future
 */
-#define MAX_CPU_MRQ_ID		74
+#define MAX_CPU_MRQ_ID		75U
 
 /**
 * @addtogroup MRQ_Payloads
···
 struct mrq_ping_request {
 /** @brief Arbitrarily chosen value */
 	uint32_t challenge;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup Ping
···
 struct mrq_ping_response {
 	/** @brief Response to the MRQ_PING challege */
 	uint32_t reply;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup MRQ_Codes
···
 struct mrq_query_tag_request {
 	/** @brief Base address to store the firmware tag */
 	uint32_t addr;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 
 /**
···
 struct mrq_query_fw_tag_response {
 	/** @brief Array to store tag information */
 	uint8_t tag[32];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup MRQ_Codes
 * @def MRQ_MODULE_LOAD
 * @brief Dynamically load a BPMP code module
 *
- * * Platforms: T210, T214, T186
- * @cond (bpmp_t210 || bpmp_t214 || bpmp_t186)
+ * * Platforms: T210, T210B01, T186
+ * @cond (bpmp_t210 || bpmp_t210b01 || bpmp_t186)
 * * Initiators: CCPLEX
 * * Targets: BPMP
 * * Request Payload: @ref mrq_module_load_request
···
 *
 */
 struct mrq_module_load_request {
-	/** @brief Base address of the code to load. Treated as (void *) */
-	uint32_t phys_addr; /* (void *) */
+	/** @brief Base address of the code to load */
+	uint32_t phys_addr;
 	/** @brief Size in bytes of code to load */
 	uint32_t size;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup Module
···
 struct mrq_module_load_response {
 	/** @brief Handle to the loaded module */
 	uint32_t base;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 /** @endcond*/
 
 /**
···
 * @def MRQ_MODULE_UNLOAD
 * @brief Unload a previously loaded code module
 *
- * * Platforms: T210, T214, T186
- * @cond (bpmp_t210 || bpmp_t214 || bpmp_t186)
+ * * Platforms: T210, T210B01, T186
+ * @cond (bpmp_t210 || bpmp_t210b01 || bpmp_t186)
 * * Initiators: CCPLEX
 * * Targets: BPMP
 * * Request Payload: @ref mrq_module_unload_request
···
 struct mrq_module_unload_request {
 	/** @brief Handle of the module to unload */
 	uint32_t base;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 /** @endcond*/
 
 /**
 * @ingroup MRQ_Codes
 * @def MRQ_TRACE_MODIFY
 * @brief Modify the set of enabled trace events
+ *
+ * @deprecated
 *
 * * Platforms: All
 * * Initiators: CCPLEX
···
 	uint32_t clr;
 	/** @brief Bit mask of trace events to enable */
 	uint32_t set;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup Trace
···
 struct mrq_trace_modify_response {
 	/** @brief Bit mask of trace event enable states */
 	uint32_t mask;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 
 /**
 * @ingroup MRQ_Codes
 * @def MRQ_WRITE_TRACE
 * @brief Write trace data to a buffer
+ *
+ * @deprecated
 *
 * * Platforms: All
 * * Initiators: CCPLEX
···
 	uint32_t area;
 	/** @brief Size
Treated as (void *) */ 342 - uint32_t phys_addr; /* (void *) */ 330 + /** @brief Base address of the code to load */ 331 + uint32_t phys_addr; 343 332 /** @brief Size in bytes of code to load */ 344 333 uint32_t size; 345 - } __ABI_PACKED; 334 + } BPMP_ABI_PACKED; 346 335 347 336 /** 348 337 * @ingroup Module ··· 353 342 struct mrq_module_load_response { 354 343 /** @brief Handle to the loaded module */ 355 344 uint32_t base; 356 - } __ABI_PACKED; 345 + } BPMP_ABI_PACKED; 357 346 /** @endcond*/ 358 347 359 348 /** ··· 361 350 * @def MRQ_MODULE_UNLOAD 362 351 * @brief Unload a previously loaded code module 363 352 * 364 - * * Platforms: T210, T214, T186 365 - * @cond (bpmp_t210 || bpmp_t214 || bpmp_t186) 353 + * * Platforms: T210, T210B01, T186 354 + * @cond (bpmp_t210 || bpmp_t210b01 || bpmp_t186) 366 355 * * Initiators: CCPLEX 367 356 * * Targets: BPMP 368 357 * * Request Payload: @ref mrq_module_unload_request ··· 381 370 struct mrq_module_unload_request { 382 371 /** @brief Handle of the module to unload */ 383 372 uint32_t base; 384 - } __ABI_PACKED; 373 + } BPMP_ABI_PACKED; 385 374 /** @endcond*/ 386 375 387 376 /** 388 377 * @ingroup MRQ_Codes 389 378 * @def MRQ_TRACE_MODIFY 390 379 * @brief Modify the set of enabled trace events 380 + * 381 + * @deprecated 391 382 * 392 383 * * Platforms: All 393 384 * * Initiators: CCPLEX ··· 413 400 uint32_t clr; 414 401 /** @brief Bit mask of trace events to enable */ 415 402 uint32_t set; 416 - } __ABI_PACKED; 403 + } BPMP_ABI_PACKED; 417 404 418 405 /** 419 406 * @ingroup Trace ··· 427 414 struct mrq_trace_modify_response { 428 415 /** @brief Bit mask of trace event enable states */ 429 416 uint32_t mask; 430 - } __ABI_PACKED; 417 + } BPMP_ABI_PACKED; 431 418 432 419 /** 433 420 * @ingroup MRQ_Codes 434 421 * @def MRQ_WRITE_TRACE 435 422 * @brief Write trace data to a buffer 423 + * 424 + * @deprecated 436 425 * 437 426 * * Platforms: All 438 427 * * Initiators: CCPLEX ··· 469 454 uint32_t area; 470 455 /** @brief Size 
in bytes of the output buffer */ 471 456 uint32_t size; 472 - } __ABI_PACKED; 457 + } BPMP_ABI_PACKED; 473 458 474 459 /** 475 460 * @ingroup Trace ··· 486 471 * drained to the outputbuffer. Value is 0 otherwise. 487 472 */ 488 473 uint32_t eof; 489 - } __ABI_PACKED; 474 + } BPMP_ABI_PACKED; 490 475 491 476 /** @private */ 492 477 struct mrq_threaded_ping_request { 493 478 uint32_t challenge; 494 - } __ABI_PACKED; 479 + } BPMP_ABI_PACKED; 495 480 496 481 /** @private */ 497 482 struct mrq_threaded_ping_response { 498 483 uint32_t reply; 499 - } __ABI_PACKED; 484 + } BPMP_ABI_PACKED; 500 485 501 486 /** 502 487 * @ingroup MRQ_Codes 503 488 * @def MRQ_MODULE_MAIL 504 489 * @brief Send a message to a loadable module 505 490 * 506 - * * Platforms: T210, T214, T186 507 - * @cond (bpmp_t210 || bpmp_t214 || bpmp_t186) 491 + * * Platforms: T210, T210B01, T186 492 + * @cond (bpmp_t210 || bpmp_t210b01 || bpmp_t186) 508 493 * * Initiators: Any 509 494 * * Targets: BPMP 510 495 * * Request Payload: @ref mrq_module_mail_request ··· 525 510 * The length of data[ ] is unknown to the BPMP core firmware 526 511 * but it is limited to the size of an IPC message. 527 512 */ 528 - uint8_t data[EMPTY_ARRAY]; 529 - } __ABI_PACKED; 513 + uint8_t data[BPMP_ABI_EMPTY_ARRAY]; 514 + } BPMP_ABI_PACKED; 530 515 531 516 /** 532 517 * @ingroup Module ··· 538 523 * The length of data[ ] is unknown to the BPMP core firmware 539 524 * but it is limited to the size of an IPC message. 540 525 */ 541 - uint8_t data[EMPTY_ARRAY]; 542 - } __ABI_PACKED; 526 + uint8_t data[BPMP_ABI_EMPTY_ARRAY]; 527 + } BPMP_ABI_PACKED; 543 528 /** @endcond */ 544 529 545 530 /** 546 531 * @ingroup MRQ_Codes 547 532 * @def MRQ_DEBUGFS 548 533 * @brief Interact with BPMP's debugfs file nodes 534 + * 535 + * @deprecated use MRQ_DEBUG instead. 
  *
  * * Platforms: T186, T194
  * * Initiators: Any
···
	uint32_t dataaddr;
	/** @brief Length in bytes of data buffer */
	uint32_t datalen;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Debugfs
···
	uint32_t dataaddr;
	/** @brief Length in bytes of data buffer */
	uint32_t datalen;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Debugfs
···
	uint32_t reserved;
	/** @brief Number of bytes read from or written to data buffer */
	uint32_t nbytes;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Debugfs
···
	uint32_t reserved;
	/** @brief Number of bytes read from or written to data buffer */
	uint32_t nbytes;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Debugfs
···
	union {
		struct cmd_debugfs_fileop_request fop;
		struct cmd_debugfs_dumpdir_request dumpdir;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Debugfs
···
		struct cmd_debugfs_fileop_response fop;
		/** @brief Response data for CMD_DEBUGFS_DUMPDIR command */
		struct cmd_debugfs_dumpdir_response dumpdir;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /**
  * @addtogroup Debugfs
···
 #define DEBUGFS_S_IRUSR	(1 << 8)
 #define DEBUGFS_S_IWUSR	(1 << 7)
 /** @} */
+
+/**
+ * @ingroup MRQ_Codes
+ * @def MRQ_DEBUG
+ * @brief Interact with BPMP's debugfs file nodes. Use message payload
+ * for exchanging data. This is functionally equivalent to
+ * @ref MRQ_DEBUGFS. But the way in which data is exchanged is different.
+ * When software running on CPU tries to read a debugfs file,
+ * the file path and read data will be stored in message payload.
+ * Since the message payload size is limited, a debugfs file
+ * transaction might require multiple frames of data exchanged
+ * between BPMP and CPU until the transaction completes.
+ *
+ * * Platforms: T194
+ * * Initiators: Any
+ * * Targets: BPMP
+ * * Request Payload: @ref mrq_debug_request
+ * * Response Payload: @ref mrq_debug_response
+ */

+/** @ingroup Debugfs */
+enum mrq_debug_commands {
+	/** @brief Open required file for read operation */
+	CMD_DEBUG_OPEN_RO = 0,
+	/** @brief Open required file for write operation */
+	CMD_DEBUG_OPEN_WO = 1,
+	/** @brief Perform read */
+	CMD_DEBUG_READ = 2,
+	/** @brief Perform write */
+	CMD_DEBUG_WRITE = 3,
+	/** @brief Close file */
+	CMD_DEBUG_CLOSE = 4,
+	/** @brief Not a command */
+	CMD_DEBUG_MAX
+};

+/**
+ * @ingroup Debugfs
+ * @brief Maximum number of files that can be open at a given time
+ */
+#define DEBUG_MAX_OPEN_FILES	1

+/**
+ * @ingroup Debugfs
+ * @brief Maximum size of null-terminated file name string in bytes.
+ * Value is derived from memory available in message payload while
+ * using @ref cmd_debug_fopen_request
+ * Value 4 corresponds to size of @ref mrq_debug_commands
+ * in @ref mrq_debug_request.
+ * 120 - 4 dbg_cmd(32bit) = 116
+ */
+#define DEBUG_FNAME_MAX_SZ	(MSG_DATA_MIN_SZ - 4)

+/**
+ * @ingroup Debugfs
+ * @brief Parameters for CMD_DEBUG_OPEN command
+ */
+struct cmd_debug_fopen_request {
+	/** @brief File name - Null-terminated string with maximum
+	 * length @ref DEBUG_FNAME_MAX_SZ
+	 */
+	char name[DEBUG_FNAME_MAX_SZ];
+} BPMP_ABI_PACKED;

+/**
+ * @ingroup Debugfs
+ * @brief Response data for CMD_DEBUG_OPEN_RO/WO command
+ */
+struct cmd_debug_fopen_response {
+	/** @brief Identifier for file access */
+	uint32_t fd;
+	/** @brief Data length. File data size for READ command.
+	 * Maximum allowed length for WRITE command
+	 */
+	uint32_t datalen;
+} BPMP_ABI_PACKED;

+/**
+ * @ingroup Debugfs
+ * @brief Parameters for CMD_DEBUG_READ command
+ */
+struct cmd_debug_fread_request {
+	/** @brief File access identifier received in response
+	 * to CMD_DEBUG_OPEN_RO request
+	 */
+	uint32_t fd;
+} BPMP_ABI_PACKED;

+/**
+ * @ingroup Debugfs
+ * @brief Maximum size of read data in bytes.
+ * Value is derived from memory available in message payload while
+ * using @ref cmd_debug_fread_response.
+ */
+#define DEBUG_READ_MAX_SZ	(MSG_DATA_MIN_SZ - 4)

+/**
+ * @ingroup Debugfs
+ * @brief Response data for CMD_DEBUG_READ command
+ */
+struct cmd_debug_fread_response {
+	/** @brief Size of data provided in this response in bytes */
+	uint32_t readlen;
+	/** @brief File data from seek position */
+	char data[DEBUG_READ_MAX_SZ];
+} BPMP_ABI_PACKED;

+/**
+ * @ingroup Debugfs
+ * @brief Maximum size of write data in bytes.
+ * Value is derived from memory available in message payload while
+ * using @ref cmd_debug_fwrite_request.
+ */
+#define DEBUG_WRITE_MAX_SZ	(MSG_DATA_MIN_SZ - 12)

+/**
+ * @ingroup Debugfs
+ * @brief Parameters for CMD_DEBUG_WRITE command
+ */
+struct cmd_debug_fwrite_request {
+	/** @brief File access identifier received in response
+	 * to CMD_DEBUG_OPEN_RO request
+	 */
+	uint32_t fd;
+	/** @brief Size of write data in bytes */
+	uint32_t datalen;
+	/** @brief Data to be written */
+	char data[DEBUG_WRITE_MAX_SZ];
+} BPMP_ABI_PACKED;

+/**
+ * @ingroup Debugfs
+ * @brief Parameters for CMD_DEBUG_CLOSE command
+ */
+struct cmd_debug_fclose_request {
+	/** @brief File access identifier received in response
+	 * to CMD_DEBUG_OPEN_RO request
+	 */
+	uint32_t fd;
+} BPMP_ABI_PACKED;
+/**
+ * @ingroup Debugfs
+ * @brief Request with #MRQ_DEBUG.
+ *
+ * The sender of an MRQ_DEBUG message uses #cmd to specify a debugfs
+ * command to execute. Legal commands are the values of @ref
+ * mrq_debug_commands. Each command requires a specific additional
+ * payload of data.
+ *
+ * |command            |payload|
+ * |-------------------|-------|
+ * |CMD_DEBUG_OPEN_RO  |fop    |
+ * |CMD_DEBUG_OPEN_WO  |fop    |
+ * |CMD_DEBUG_READ     |frd    |
+ * |CMD_DEBUG_WRITE    |fwr    |
+ * |CMD_DEBUG_CLOSE    |fcl    |
+ */
+struct mrq_debug_request {
+	/** @brief Sub-command (@ref mrq_debug_commands) */
+	uint32_t cmd;
+	union {
+		/** @brief Request payload for CMD_DEBUG_OPEN_RO/WO command */
+		struct cmd_debug_fopen_request fop;
+		/** @brief Request payload for CMD_DEBUG_READ command */
+		struct cmd_debug_fread_request frd;
+		/** @brief Request payload for CMD_DEBUG_WRITE command */
+		struct cmd_debug_fwrite_request fwr;
+		/** @brief Request payload for CMD_DEBUG_CLOSE command */
+		struct cmd_debug_fclose_request fcl;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

+/**
+ * @ingroup Debugfs
+ */
+struct mrq_debug_response {
+	union {
+		/** @brief Response data for CMD_DEBUG_OPEN_RO/WO command */
+		struct cmd_debug_fopen_response fop;
+		/** @brief Response data for CMD_DEBUG_READ command */
+		struct cmd_debug_fread_response frd;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup MRQ_Codes
···
  */

 enum mrq_reset_commands {
-	/** @brief Assert module reset */
+	/**
+	 * @brief Assert module reset
+	 *
+	 * mrq_response::err is 0 if the operation was successful, or @n
+	 * -#BPMP_EINVAL if mrq_reset_request::reset_id is invalid @n
+	 * -#BPMP_EACCES if mrq master is not an owner of target domain reset @n
+	 * -#BPMP_ENOTSUP if target domain h/w state does not allow reset
+	 */
	CMD_RESET_ASSERT = 1,
-	/** @brief Deassert module reset */
+	/**
+	 * @brief Deassert module reset
+	 *
+	 * mrq_response::err is 0 if the operation was successful, or @n
+	 * -#BPMP_EINVAL if mrq_reset_request::reset_id is invalid @n
+	 * -#BPMP_EACCES if mrq master is not an owner of target domain reset @n
+	 * -#BPMP_ENOTSUP if target domain h/w state does not allow reset
+	 */
	CMD_RESET_DEASSERT = 2,
-	/** @brief Assert and deassert the module reset */
+	/**
+	 * @brief Assert and deassert the module reset
+	 *
+	 * mrq_response::err is 0 if the operation was successful, or @n
+	 * -#BPMP_EINVAL if mrq_reset_request::reset_id is invalid @n
+	 * -#BPMP_EACCES if mrq master is not an owner of target domain reset @n
+	 * -#BPMP_ENOTSUP if target domain h/w state does not allow reset
+	 */
	CMD_RESET_MODULE = 3,
-	/** @brief Get the highest reset ID */
+	/**
+	 * @brief Get the highest reset ID
+	 *
+	 * mrq_response::err is 0 if the operation was successful, or @n
+	 * -#BPMP_ENODEV if no reset domains are supported (number of IDs is 0)
+	 */
	CMD_RESET_GET_MAX_ID = 4,
+
	/** @brief Not part of ABI and subject to change */
	CMD_RESET_MAX,
 };
···
	uint32_t cmd;
	/** @brief Id of the reset to affected */
	uint32_t reset_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Response for MRQ_RESET sub-command CMD_RESET_GET_MAX_ID. When
···
 struct cmd_reset_get_max_id_response {
	/** @brief Max reset id */
	uint32_t max_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Response with MRQ_RESET
···
 struct mrq_reset_response {
	union {
		struct cmd_reset_get_max_id_response reset_get_max_id;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /** @} */
···
  * @addtogroup I2C
  * @{
  */
-#define TEGRA_I2C_IPC_MAX_IN_BUF_SIZE	(MSG_DATA_MIN_SZ - 12)
-#define TEGRA_I2C_IPC_MAX_OUT_BUF_SIZE	(MSG_DATA_MIN_SZ - 4)
+#define TEGRA_I2C_IPC_MAX_IN_BUF_SIZE	(MSG_DATA_MIN_SZ - 12U)
+#define TEGRA_I2C_IPC_MAX_OUT_BUF_SIZE	(MSG_DATA_MIN_SZ - 4U)

-#define SERIALI2C_TEN		0x0010
-#define SERIALI2C_RD		0x0001
-#define SERIALI2C_STOP		0x8000
-#define SERIALI2C_NOSTART	0x4000
-#define SERIALI2C_REV_DIR_ADDR	0x2000
-#define SERIALI2C_IGNORE_NAK	0x1000
-#define SERIALI2C_NO_RD_ACK	0x0800
-#define SERIALI2C_RECV_LEN	0x0400
+#define SERIALI2C_TEN		0x0010U
+#define SERIALI2C_RD		0x0001U
+#define SERIALI2C_STOP		0x8000U
+#define SERIALI2C_NOSTART	0x4000U
+#define SERIALI2C_REV_DIR_ADDR	0x2000U
+#define SERIALI2C_IGNORE_NAK	0x1000U
+#define SERIALI2C_NO_RD_ACK	0x0800U
+#define SERIALI2C_RECV_LEN	0x0400U

 enum {
	CMD_I2C_XFER = 1
···
	uint16_t len;
	/** @brief For write transactions only, #len bytes of data */
	uint8_t data[];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Trigger one or more i2c transactions
···

	/** @brief Serialized packed instances of @ref serial_i2c_request*/
	uint8_t data_buf[TEGRA_I2C_IPC_MAX_IN_BUF_SIZE];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Container for data read from the i2c bus
···
	uint32_t data_size;
	/** @brief I2c read data */
	uint8_t data_buf[TEGRA_I2C_IPC_MAX_OUT_BUF_SIZE];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Request with #MRQ_I2C
···
	uint32_t cmd;
	/** @brief Parameters of the transfer request */
	struct cmd_i2c_xfer_request xfer;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Response to #MRQ_I2C
+ *
+ * mrq_response:err is
+ * 0: Success
+ * -#BPMP_EBADCMD: if mrq_i2c_request::cmd is other than 1
+ * -#BPMP_EINVAL: if cmd_i2c_xfer_request does not contain correctly formatted request
+ * -#BPMP_ENODEV: if cmd_i2c_xfer_request::bus_id is not supported by BPMP
+ * -#BPMP_EACCES: if i2c transaction is not allowed due to firewall rules
+ * -#BPMP_ETIMEDOUT: if i2c transaction times out
+ * -#BPMP_ENXIO: if i2c slave device does not reply with ACK to the transaction
+ * -#BPMP_EAGAIN: if ARB_LOST condition is detected by the i2c controller
+ * -#BPMP_EIO: any other i2c controller error code than NO_ACK or ARB_LOST
  */
 struct mrq_i2c_response {
	struct cmd_i2c_xfer_response xfer;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @} */
···
	CMD_CLK_MAX,
 };

-#define BPMP_CLK_HAS_MUX	(1 << 0)
-#define BPMP_CLK_HAS_SET_RATE	(1 << 1)
-#define BPMP_CLK_IS_ROOT	(1 << 2)
+#define BPMP_CLK_HAS_MUX	(1U << 0U)
+#define BPMP_CLK_HAS_SET_RATE	(1U << 1U)
+#define BPMP_CLK_IS_ROOT	(1U << 2U)
+#define BPMP_CLK_IS_VAR_ROOT	(1U << 3U)

-#define MRQ_CLK_NAME_MAXLEN	40
-#define MRQ_CLK_MAX_PARENTS	16
+#define MRQ_CLK_NAME_MAXLEN	40U
+#define MRQ_CLK_MAX_PARENTS	16U

 /** @private */
 struct cmd_clk_get_rate_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 struct cmd_clk_get_rate_response {
	int64_t rate;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_clk_set_rate_request {
	int32_t unused;
	int64_t rate;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_clk_set_rate_response {
	int64_t rate;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_clk_round_rate_request {
	int32_t unused;
	int64_t rate;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_clk_round_rate_response {
	int64_t rate;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_get_parent_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 struct cmd_clk_get_parent_response {
	uint32_t parent_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_clk_set_parent_request {
	uint32_t parent_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_clk_set_parent_response {
	uint32_t parent_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_is_enabled_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;
+/**
+ * @brief Response data to #MRQ_CLK sub-command CMD_CLK_IS_ENABLED
+ */
 struct cmd_clk_is_enabled_response {
+	/**
+	 * @brief The state of the clock that has been succesfully
+	 * requested with CMD_CLK_ENABLE or CMD_CLK_DISABLE by the
+	 * master invoking the command earlier.
+	 *
+	 * The state may not reflect the physical state of the clock
+	 * if there are some other masters requesting it to be
+	 * enabled.
+	 *
+	 * Value 0 is disabled, all other values indicate enabled.
+	 */
	int32_t state;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_enable_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_enable_response {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_disable_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_disable_response {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_get_all_info_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 struct cmd_clk_get_all_info_response {
	uint32_t flags;
···
	uint32_t parents[MRQ_CLK_MAX_PARENTS];
	uint8_t num_parents;
	uint8_t name[MRQ_CLK_NAME_MAXLEN];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_get_max_clk_id_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 struct cmd_clk_get_max_clk_id_response {
	uint32_t max_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_clk_get_fmax_at_vmin_request {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 struct cmd_clk_get_fmax_at_vmin_response {
	int64_t rate;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Clocks
···
		struct cmd_clk_get_max_clk_id_request clk_get_max_clk_id;
		/** @private */
		struct cmd_clk_get_fmax_at_vmin_request clk_get_fmax_at_vmin;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Clocks
···
		struct cmd_clk_get_all_info_response clk_get_all_info;
		struct cmd_clk_get_max_clk_id_response clk_get_max_clk_id;
		struct cmd_clk_get_fmax_at_vmin_response clk_get_fmax_at_vmin;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /** @} */
···
 struct mrq_query_abi_request {
	/** @brief MRQ code to query */
	uint32_t mrq;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup ABI_info
···
 struct mrq_query_abi_response {
	/** @brief 0 if queried MRQ is supported. Else, -#BPMP_ENODEV */
	int32_t status;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup MRQ_Codes
···
 struct mrq_pg_read_state_request {
	/** @brief ID of partition */
	uint32_t partition_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup Powergating
···
	 * * 1 : on
	 */
	uint32_t logic_state;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 /** @endcond*/
 /** @} */
···
	 * @ref logic_state == 0x3)
	 */
	uint32_t clock_state;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 /** @endcond*/

 /**
···
 struct cmd_pg_query_abi_request {
	/** @ref mrq_pg_cmd */
	uint32_t type;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_pg_set_state_request {
	/** @ref pg_states */
	uint32_t state;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

+/**
+ * @brief Response data to #MRQ_PG sub command #CMD_PG_GET_STATE
+ */
 struct cmd_pg_get_state_response {
-	/** @ref pg_states */
+	/**
+	 * @brief The state of the power partition that has been
+	 * succesfuly requested by the master earlier using #MRQ_PG
+	 * command #CMD_PG_SET_STATE.
+	 *
+	 * The state may not reflect the physical state of the power
+	 * partition if there are some other masters requesting it to
+	 * be enabled.
+	 *
+	 * See @ref pg_states for possible values
+	 */
	uint32_t state;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_pg_get_name_response {
	uint8_t name[MRQ_PG_NAME_MAXLEN];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_pg_get_max_id_response {
	uint32_t max_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Request with #MRQ_PG
···
	union {
		struct cmd_pg_query_abi_request query_abi;
		struct cmd_pg_set_state_request set_state;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /**
  * @brief Response to MRQ_PG
···
		struct cmd_pg_get_state_response get_state;
		struct cmd_pg_get_name_response get_name;
		struct cmd_pg_get_max_id_response get_max_id;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /** @} */
···
	 */
	CMD_THERMAL_GET_NUM_ZONES = 3,

+	/**
+	 * @brief Get the thermtrip of the specified zone.
+	 *
+	 * Host needs to supply request parameters.
+	 *
+	 * mrq_response::err is
+	 * * 0: Valid zone information returned.
+	 * * -#BPMP_EINVAL: Invalid request parameters.
+	 * * -#BPMP_ENOENT: No driver registered for thermal zone.
+	 * * -#BPMP_ERANGE if thermtrip is invalid or disabled.
+	 * * -#BPMP_EFAULT: Problem reading zone information.
+	 */
+	CMD_THERMAL_GET_THERMTRIP = 4,

	/** @brief: number of supported host-to-bpmp commands. May
	 * increase in future
	 */
···
  */
 struct cmd_thermal_query_abi_request {
	uint32_t type;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /*
  * Host->BPMP request data for request type CMD_THERMAL_GET_TEMP
···
  */
 struct cmd_thermal_get_temp_request {
	uint32_t zone;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /*
  * BPMP->Host reply data for request CMD_THERMAL_GET_TEMP
···
  */
 struct cmd_thermal_get_temp_response {
	int32_t temp;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /*
  * Host->BPMP request data for request type CMD_THERMAL_SET_TRIP
···
	int32_t low;
	int32_t high;
	uint32_t enabled;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /*
  * BPMP->Host request data for request type CMD_THERMAL_HOST_TRIP_REACHED
···
  */
 struct cmd_thermal_host_trip_reached_request {
	uint32_t zone;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /*
  * BPMP->Host reply data for request type CMD_THERMAL_GET_NUM_ZONES
···
  */
 struct cmd_thermal_get_num_zones_response {
	uint32_t num;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

+/*
+ * Host->BPMP request data for request type CMD_THERMAL_GET_THERMTRIP
+ *
+ * zone: Number of thermal zone.
+ */
+struct cmd_thermal_get_thermtrip_request {
+	uint32_t zone;
+} BPMP_ABI_PACKED;
+/*
+ * BPMP->Host reply data for request CMD_THERMAL_GET_THERMTRIP
+ *
+ * thermtrip: HW shutdown temperature in millicelsius.
+ */
+struct cmd_thermal_get_thermtrip_response {
+	int32_t thermtrip;
+} BPMP_ABI_PACKED;

 /*
  * Host->BPMP request data.
···
		struct cmd_thermal_query_abi_request query_abi;
		struct cmd_thermal_get_temp_request get_temp;
		struct cmd_thermal_set_trip_request set_trip;
-	} __UNION_ANON;
-} __ABI_PACKED;
+		struct cmd_thermal_get_thermtrip_request get_thermtrip;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /*
  * BPMP->Host request data.
···
	uint32_t type;
	union {
		struct cmd_thermal_host_trip_reached_request host_trip_reached;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /*
  * Data in reply to a Host->BPMP request.
  */
 union mrq_thermal_bpmp_to_host_response {
	struct cmd_thermal_get_temp_response get_temp;
+	struct cmd_thermal_get_thermtrip_response get_thermtrip;
	struct cmd_thermal_get_num_zones_response get_num_zones;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 /** @} */

 /**
···
	uint32_t addr;
	/** @brief ID of the cluster whose data is requested */
	uint32_t cluster_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Description of the CPU v/f relation
···
	uint16_t vindex_div;
	/** reserved for future use */
	uint16_t reserved[328];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 /** @endcond */
 /** @} */
···
  * @brief Used by @ref mrq_emc_dvfs_latency_response
  */
 struct emc_dvfs_latency {
-	/** @brief EMC frequency in kHz */
+	/** @brief EMC DVFS node frequency in kHz */
	uint32_t freq;
	/** @brief EMC DVFS latency in nanoseconds */
	uint32_t latency;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 #define EMC_DVFS_LATENCY_MAX_SIZE	14
 /**
···
 struct mrq_emc_dvfs_latency_response {
	/** @brief The number valid entries in #pairs */
	uint32_t num_pairs;
-	/** @brief EMC <frequency, latency> information */
+	/** @brief EMC DVFS node <frequency, latency> information */
	struct emc_dvfs_latency pairs[EMC_DVFS_LATENCY_MAX_SIZE];
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @} */
···
 struct mrq_cpu_ndiv_limits_request {
	/** @brief Enum cluster_id */
	uint32_t cluster_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Response to #MRQ_CPU_NDIV_LIMITS
···
	uint16_t ndiv_max;
	/** @brief Minimum allowed NDIV value */
	uint16_t ndiv_min;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @} */
 /** @endcond */
···
 struct mrq_cpu_auto_cc3_request {
	/** @brief Enum cluster_id (logical cluster id, known to CCPLEX s/w) */
	uint32_t cluster_id;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @brief Response to #MRQ_CPU_AUTO_CC3
···
	 * - bit [0] if "1" auto-CC3 is allowed, if "0" auto-CC3 is not allowed
	 */
	uint32_t auto_cc3_config;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @} */
 /** @endcond */
···
  * @ingroup MRQ_Codes
  * @def MRQ_TRACE_ITER
  * @brief Manage the trace iterator
+ *
+ * @deprecated
  *
  * * Platforms: All
  * * Initiators: CCPLEX
···
 struct mrq_trace_iter_request {
	/** @brief TRACE_ITER_INIT or TRACE_ITER_CLEAN */
	uint32_t cmd;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @} */
···
 struct cmd_ringbuf_console_query_abi_req {
	/** @brief Command identifier to be queried */
	uint32_t cmd;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_ringbuf_console_query_abi_resp {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 /**
  * @ingroup RingbufConsole
···
	 * @brief Number of bytes requested to be read from the BPMP TX buffer
	 */
	uint8_t len;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup RingbufConsole
···
	uint8_t data[MRQ_RINGBUF_CONSOLE_MAX_READ_LEN];
	/** @brief Number of bytes in cmd_ringbuf_console_read_resp::data */
	uint8_t len;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup RingbufConsole
···
	uint8_t data[MRQ_RINGBUF_CONSOLE_MAX_WRITE_LEN];
	/** @brief Number of bytes in cmd_ringbuf_console_write_req::data */
	uint8_t len;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup RingbufConsole
···
	uint32_t space_avail;
	/** @brief Number of bytes that were written to the BPMP RX buffer */
	uint8_t len;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /** @private */
 struct cmd_ringbuf_console_get_fifo_req {
-	EMPTY
-} __ABI_PACKED;
+	BPMP_ABI_EMPTY
+} BPMP_ABI_PACKED;

 /**
  * @ingroup RingbufConsole
···
	uint64_t bpmp_tx_tail_addr;
	/** @brief Length of the BPMP TX buffer */
	uint32_t bpmp_tx_buf_len;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup RingbufConsole
···
		struct cmd_ringbuf_console_read_req read;
		struct cmd_ringbuf_console_write_req write;
		struct cmd_ringbuf_console_get_fifo_req get_fifo;
-	} __UNION_ANON;
-} __ABI_PACKED;
+	} BPMP_UNION_ANON;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup RingbufConsole
···
	struct cmd_ringbuf_console_read_resp read;
	struct cmd_ringbuf_console_write_resp write;
	struct cmd_ringbuf_console_get_fifo_resp get_fifo;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;
 /** @} */

 /**
···
	uint32_t id;
	/** @brief Desired value for strap (if cmd is #STRAP_SET) */
	uint32_t value;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @defgroup Strap_Ids Strap Identifiers
···
	uint32_t y;
	/** @brief Set number of bit blocks for each margin section */
	uint32_t nblks;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_uphy_margin_status_response {
	/** @brief Number of errors observed */
	uint32_t status;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_uphy_ep_controller_pll_init_request {
	/** @brief EP controller number, valid: 0, 4, 5 */
	uint8_t ep_controller;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_uphy_pcie_controller_state_request {
	/** @brief PCIE controller number, valid: 0, 1, 2, 3, 4 */
	uint8_t pcie_controller;
	uint8_t enable;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 struct cmd_uphy_ep_controller_pll_off_request {
	/** @brief EP controller number, valid: 0, 4, 5 */
	uint8_t ep_controller;
-} __ABI_PACKED;
+} BPMP_ABI_PACKED;

 /**
  * @ingroup UPHY
···
		struct cmd_uphy_ep_controller_pll_init_request ep_ctrlr_pll_init;
		struct cmd_uphy_pcie_controller_state_request controller_state;
		struct cmd_uphy_ep_controller_pll_off_request ep_ctrlr_pll_off;
-	}
__UNION_ANON; 2493 - } __ABI_PACKED; 2189 + } BPMP_UNION_ANON; 2190 + } BPMP_ABI_PACKED; 2494 2191 2495 2192 /** 2496 2193 * @ingroup UPHY ··· 2510 2207 struct mrq_uphy_response { 2511 2208 union { 2512 2209 struct cmd_uphy_margin_status_response uphy_get_margin_status; 2513 - } __UNION_ANON; 2514 - } __ABI_PACKED; 2210 + } BPMP_UNION_ANON; 2211 + } BPMP_ABI_PACKED; 2515 2212 2516 2213 /** @} */ 2517 2214 /** @endcond */ ··· 2561 2258 struct cmd_fmon_gear_clamp_request { 2562 2259 int32_t unused; 2563 2260 int64_t rate; 2564 - } __ABI_PACKED; 2261 + } BPMP_ABI_PACKED; 2565 2262 2566 2263 /** @private */ 2567 2264 struct cmd_fmon_gear_clamp_response { 2568 - EMPTY 2569 - } __ABI_PACKED; 2265 + BPMP_ABI_EMPTY 2266 + } BPMP_ABI_PACKED; 2570 2267 2571 2268 /** @private */ 2572 2269 struct cmd_fmon_gear_free_request { 2573 - EMPTY 2574 - } __ABI_PACKED; 2270 + BPMP_ABI_EMPTY 2271 + } BPMP_ABI_PACKED; 2575 2272 2576 2273 /** @private */ 2577 2274 struct cmd_fmon_gear_free_response { 2578 - EMPTY 2579 - } __ABI_PACKED; 2275 + BPMP_ABI_EMPTY 2276 + } BPMP_ABI_PACKED; 2580 2277 2581 2278 /** @private */ 2582 2279 struct cmd_fmon_gear_get_request { 2583 - EMPTY 2584 - } __ABI_PACKED; 2280 + BPMP_ABI_EMPTY 2281 + } BPMP_ABI_PACKED; 2585 2282 2586 2283 struct cmd_fmon_gear_get_response { 2587 2284 int64_t rate; 2588 - } __ABI_PACKED; 2285 + } BPMP_ABI_PACKED; 2589 2286 2590 2287 /** 2591 2288 * @ingroup FMON ··· 2618 2315 struct cmd_fmon_gear_free_request fmon_gear_free; 2619 2316 /** @private */ 2620 2317 struct cmd_fmon_gear_get_request fmon_gear_get; 2621 - } __UNION_ANON; 2622 - } __ABI_PACKED; 2318 + } BPMP_UNION_ANON; 2319 + } BPMP_ABI_PACKED; 2623 2320 2624 2321 /** 2625 2322 * @ingroup FMON ··· 2643 2340 /** @private */ 2644 2341 struct cmd_fmon_gear_free_response fmon_gear_free; 2645 2342 struct cmd_fmon_gear_get_response fmon_gear_get; 2646 - } __UNION_ANON; 2647 - } __ABI_PACKED; 2343 + } BPMP_UNION_ANON; 2344 + } BPMP_ABI_PACKED; 2648 2345 2649 2346 /** @} */ 2650 
2347 /** @endcond */ ··· 2669 2366 */ 2670 2367 enum { 2671 2368 /** 2369 + * @cond DEPRECATED 2672 2370 * @brief Retrieve specified EC status. 2673 2371 * 2674 2372 * mrq_response::err is 0 if the operation was successful, or @n 2675 2373 * -#BPMP_ENODEV if target EC is not owned by BPMP @n 2676 - * -#BPMP_EACCES if target EC power domain is turned off 2374 + * -#BPMP_EACCES if target EC power domain is turned off @n 2375 + * -#BPMP_EBADCMD if subcommand is not supported 2376 + * @endcond 2677 2377 */ 2678 - CMD_EC_STATUS_GET = 1, 2378 + CMD_EC_STATUS_GET = 1, /* deprecated */ 2379 + 2380 + /** 2381 + * @brief Retrieve specified EC extended status (includes error 2382 + * counter and user values). 2383 + * 2384 + * mrq_response::err is 0 if the operation was successful, or @n 2385 + * -#BPMP_ENODEV if target EC is not owned by BPMP @n 2386 + * -#BPMP_EACCES if target EC power domain is turned off @n 2387 + * -#BPMP_EBADCMD if subcommand is not supported 2388 + */ 2389 + CMD_EC_STATUS_EX_GET = 2, 2679 2390 CMD_EC_NUM, 2680 2391 }; 2681 2392 ··· 2745 2428 2746 2429 /** @brief SW Correctable error 2747 2430 * 2748 - * Error descriptor @ref ec_err_simple_desc. 2431 + * Error descriptor @ref ec_err_sw_error_desc. 2749 2432 */ 2750 2433 EC_ERR_TYPE_SW_CORRECTABLE = 16, 2751 2434 2752 2435 /** @brief SW Uncorrectable error 2753 2436 * 2754 - * Error descriptor @ref ec_err_simple_desc. 2437 + * Error descriptor @ref ec_err_sw_error_desc. 2755 2438 */ 2756 2439 EC_ERR_TYPE_SW_UNCORRECTABLE = 17, 2757 2440 ··· 2771 2454 /** @brief Group of registers with parity error. 
*/ 2772 2455 enum ec_registers_group { 2773 2456 /** @brief Functional registers group */ 2774 - EC_ERR_GROUP_FUNC_REG = 0, 2457 + EC_ERR_GROUP_FUNC_REG = 0U, 2775 2458 /** @brief SCR registers group */ 2776 - EC_ERR_GROUP_SCR_REG = 1, 2459 + EC_ERR_GROUP_SCR_REG = 1U, 2777 2460 }; 2778 2461 2779 2462 /** ··· 2782 2465 * @{ 2783 2466 */ 2784 2467 /** @brief No EC error found flag */ 2785 - #define EC_STATUS_FLAG_NO_ERROR 0x0001 2468 + #define EC_STATUS_FLAG_NO_ERROR 0x0001U 2786 2469 /** @brief Last EC error found flag */ 2787 - #define EC_STATUS_FLAG_LAST_ERROR 0x0002 2470 + #define EC_STATUS_FLAG_LAST_ERROR 0x0002U 2788 2471 /** @brief EC latent error flag */ 2789 - #define EC_STATUS_FLAG_LATENT_ERROR 0x0004 2472 + #define EC_STATUS_FLAG_LATENT_ERROR 0x0004U 2790 2473 /** @} */ 2791 2474 2792 2475 /** ··· 2795 2478 * @{ 2796 2479 */ 2797 2480 /** @brief EC descriptor error resolved flag */ 2798 - #define EC_DESC_FLAG_RESOLVED 0x0001 2481 + #define EC_DESC_FLAG_RESOLVED 0x0001U 2799 2482 /** @brief EC descriptor failed to retrieve id flag */ 2800 - #define EC_DESC_FLAG_NO_ID 0x0002 2483 + #define EC_DESC_FLAG_NO_ID 0x0002U 2801 2484 /** @} */ 2802 2485 2803 2486 /** ··· 2814 2497 uint32_t fmon_faults; 2815 2498 /** @brief FMON faults access error */ 2816 2499 int32_t fmon_access_error; 2817 - } __ABI_PACKED; 2500 + } BPMP_ABI_PACKED; 2818 2501 2819 2502 /** 2820 2503 * |error type | vmon_adc_id values | ··· 2830 2513 uint32_t vmon_faults; 2831 2514 /** @brief VMON faults access error */ 2832 2515 int32_t vmon_access_error; 2833 - } __ABI_PACKED; 2516 + } BPMP_ABI_PACKED; 2834 2517 2835 2518 /** 2836 2519 * |error type | reg_id values | ··· 2844 2527 uint16_t reg_id; 2845 2528 /** @brief Register group @ref ec_registers_group */ 2846 2529 uint16_t reg_group; 2847 - } __ABI_PACKED; 2530 + } BPMP_ABI_PACKED; 2531 + 2532 + /** 2533 + * |error type | err_source_id values | 2534 + * |--------------------------------- |--------------------------| 2535 + * |@ref 
EC_ERR_TYPE_SW_CORRECTABLE | @ref bpmp_ec_ce_swd_ids | 2536 + * |@ref EC_ERR_TYPE_SW_UNCORRECTABLE | @ref bpmp_ec_ue_swd_ids | 2537 + */ 2538 + struct ec_err_sw_error_desc { 2539 + /** @brief Bitmask of @ref bpmp_ec_desc_flags */ 2540 + uint16_t desc_flags; 2541 + /** @brief Error source id */ 2542 + uint16_t err_source_id; 2543 + /** @brief Sw error data */ 2544 + uint32_t sw_error_data; 2545 + } BPMP_ABI_PACKED; 2848 2546 2849 2547 /** 2850 2548 * |error type | err_source_id values | ··· 2869 2537 * |@ref EC_ERR_TYPE_ECC_DED_INTERNAL |@ref bpmp_ec_ipath_ids | 2870 2538 * |@ref EC_ERR_TYPE_COMPARATOR |@ref bpmp_ec_comparator_ids| 2871 2539 * |@ref EC_ERR_TYPE_PARITY_SRAM |@ref bpmp_clock_ids | 2872 - * |@ref EC_ERR_TYPE_SW_CORRECTABLE |@ref bpmp_ec_misc_ids | 2873 - * |@ref EC_ERR_TYPE_SW_UNCORRECTABLE |@ref bpmp_ec_misc_ids | 2874 - * |@ref EC_ERR_TYPE_OTHER_HW_CORRECTABLE |@ref bpmp_ec_misc_ids | 2875 - * |@ref EC_ERR_TYPE_OTHER_HW_UNCORRECTABLE |@ref bpmp_ec_misc_ids | 2540 + * |@ref EC_ERR_TYPE_OTHER_HW_CORRECTABLE |@ref bpmp_ec_misc_hwd_ids | 2541 + * |@ref EC_ERR_TYPE_OTHER_HW_UNCORRECTABLE |@ref bpmp_ec_misc_hwd_ids | 2876 2542 */ 2877 2543 struct ec_err_simple_desc { 2878 2544 /** @brief Bitmask of @ref bpmp_ec_desc_flags */ 2879 2545 uint16_t desc_flags; 2880 2546 /** @brief Error source id. Id space depends on error type. */ 2881 2547 uint16_t err_source_id; 2882 - } __ABI_PACKED; 2548 + } BPMP_ABI_PACKED; 2883 2549 2884 2550 /** @brief Union of EC error descriptors */ 2885 2551 union ec_err_desc { 2886 2552 struct ec_err_fmon_desc fmon_desc; 2887 2553 struct ec_err_vmon_desc vmon_desc; 2888 2554 struct ec_err_reg_parity_desc reg_parity_desc; 2555 + struct ec_err_sw_error_desc sw_error_desc; 2889 2556 struct ec_err_simple_desc simple_desc; 2890 - } __ABI_PACKED; 2557 + } BPMP_ABI_PACKED; 2891 2558 2892 2559 struct cmd_ec_status_get_request { 2893 2560 /** @brief HSM error line number that identifies target EC. 
*/ 2894 2561 uint32_t ec_hsm_id; 2895 - } __ABI_PACKED; 2562 + } BPMP_ABI_PACKED; 2896 2563 2897 2564 /** EC status maximum number of descriptors */ 2898 - #define EC_ERR_STATUS_DESC_MAX_NUM 4 2565 + #define EC_ERR_STATUS_DESC_MAX_NUM 4U 2899 2566 2567 + /** 2568 + * @cond DEPRECATED 2569 + */ 2900 2570 struct cmd_ec_status_get_response { 2901 2571 /** @brief Target EC id (the same id received with request). */ 2902 2572 uint32_t ec_hsm_id; ··· 2916 2582 uint32_t error_desc_num; 2917 2583 /** @brief EC error descriptors */ 2918 2584 union ec_err_desc error_descs[EC_ERR_STATUS_DESC_MAX_NUM]; 2919 - } __ABI_PACKED; 2585 + } BPMP_ABI_PACKED; 2586 + /** @endcond */ 2587 + 2588 + struct cmd_ec_status_ex_get_response { 2589 + /** @brief Target EC id (the same id received with request). */ 2590 + uint32_t ec_hsm_id; 2591 + /** 2592 + * @brief Bitmask of @ref bpmp_ec_status_flags 2593 + * 2594 + * If NO_ERROR flag is set, error_ fields should be ignored 2595 + */ 2596 + uint32_t ec_status_flags; 2597 + /** @brief Found EC error index. */ 2598 + uint32_t error_idx; 2599 + /** @brief Found EC error type @ref bpmp_ec_err_type. */ 2600 + uint32_t error_type; 2601 + /** @brief Found EC mission error counter value */ 2602 + uint32_t error_counter; 2603 + /** @brief Found EC mission error user value */ 2604 + uint32_t error_uval; 2605 + /** @brief Reserved entry */ 2606 + uint32_t reserved; 2607 + /** @brief Number of returned EC error descriptors */ 2608 + uint32_t error_desc_num; 2609 + /** @brief EC error descriptors */ 2610 + union ec_err_desc error_descs[EC_ERR_STATUS_DESC_MAX_NUM]; 2611 + } BPMP_ABI_PACKED; 2920 2612 2921 2613 /** 2922 2614 * @ingroup EC ··· 2951 2591 * Used by the sender of an #MRQ_EC message to access ECs owned 2952 2592 * by BPMP. 
2953 2593 * 2594 + * @cond DEPRECATED 2954 2595 * |sub-command |payload | 2955 2596 * |----------------------------|-----------------------| 2956 2597 * |@ref CMD_EC_STATUS_GET |ec_status_get | 2598 + * @endcond 2599 + * 2600 + * |sub-command |payload | 2601 + * |----------------------------|-----------------------| 2602 + * |@ref CMD_EC_STATUS_EX_GET |ec_status_get | 2957 2603 * 2958 2604 */ 2959 2605 ··· 2969 2603 2970 2604 union { 2971 2605 struct cmd_ec_status_get_request ec_status_get; 2972 - } __UNION_ANON; 2973 - } __ABI_PACKED; 2606 + } BPMP_UNION_ANON; 2607 + } BPMP_ABI_PACKED; 2974 2608 2975 2609 /** 2976 2610 * @ingroup EC ··· 2979 2613 * Each sub-command supported by @ref mrq_ec_request may return 2980 2614 * sub-command-specific data as indicated below. 2981 2615 * 2616 + * @cond DEPRECATED 2982 2617 * |sub-command |payload | 2983 2618 * |----------------------------|------------------------| 2984 2619 * |@ref CMD_EC_STATUS_GET |ec_status_get | 2620 + * @endcond 2621 + * 2622 + * |sub-command |payload | 2623 + * |----------------------------|------------------------| 2624 + * |@ref CMD_EC_STATUS_EX_GET |ec_status_ex_get | 2985 2625 * 2986 2626 */ 2987 2627 2988 2628 struct mrq_ec_response { 2989 2629 union { 2630 + /** 2631 + * @cond DEPRECATED 2632 + */ 2990 2633 struct cmd_ec_status_get_response ec_status_get; 2991 - } __UNION_ANON; 2992 - } __ABI_PACKED; 2993 - 2994 - /** @} */ 2995 - /** @endcond */ 2996 - 2997 - /** 2998 - * @ingroup MRQ_Codes 2999 - * @def MRQ_FBVOLT_STATUS 3000 - * @brief Provides status information about voltage state for fuse burning 3001 - * 3002 - * * Platforms: T194 onwards 3003 - * @cond bpmp_t194 3004 - * * Initiators: CCPLEX 3005 - * * Target: BPMP 3006 - * * Request Payload: None 3007 - * * Response Payload: @ref mrq_fbvolt_status_response 3008 - * @{ 3009 - */ 3010 - 3011 - /** 3012 - * @ingroup Fbvolt_status 3013 - * @brief Response to #MRQ_FBVOLT_STATUS 3014 - * 3015 - * Value of #ready reflects if core voltages are 
in a suitable state for buring 3016 - * fuses. A value of 0x1 indicates that core voltages are ready for burning 3017 - * fuses. A value of 0x0 indicates that core voltages are not ready. 3018 - */ 3019 - struct mrq_fbvolt_status_response { 3020 - /** @brief Bit [0:0] - ready status, bits [31:1] - reserved */ 3021 - uint32_t ready; 3022 - /** @brief Reserved */ 3023 - uint32_t unused; 3024 - } __ABI_PACKED; 2634 + /** @endcond */ 2635 + struct cmd_ec_status_ex_get_response ec_status_ex_get; 2636 + } BPMP_UNION_ANON; 2637 + } BPMP_ABI_PACKED; 3025 2638 3026 2639 /** @} */ 3027 2640 /** @endcond */ ··· 3013 2668 * @{ 3014 2669 */ 3015 2670 2671 + /** @brief Operation not permitted */ 2672 + #define BPMP_EPERM 1 3016 2673 /** @brief No such file or directory */ 3017 2674 #define BPMP_ENOENT 2 3018 2675 /** @brief No MRQ handler */ ··· 3023 2676 #define BPMP_EIO 5 3024 2677 /** @brief Bad sub-MRQ command */ 3025 2678 #define BPMP_EBADCMD 6 2679 + /** @brief Resource temporarily unavailable */ 2680 + #define BPMP_EAGAIN 11 3026 2681 /** @brief Not enough memory */ 3027 2682 #define BPMP_ENOMEM 12 3028 2683 /** @brief Permission denied */ 3029 2684 #define BPMP_EACCES 13 3030 2685 /** @brief Bad address */ 3031 2686 #define BPMP_EFAULT 14 2687 + /** @brief Resource busy */ 2688 + #define BPMP_EBUSY 16 3032 2689 /** @brief No such device */ 3033 2690 #define BPMP_ENODEV 19 3034 2691 /** @brief Argument is a directory */ ··· 3044 2693 /** @brief Out of range */ 3045 2694 #define BPMP_ERANGE 34 3046 2695 /** @brief Function not implemented */ 3047 - #define BPMP_ENOSYS 38 2696 + #define BPMP_ENOSYS 38 3048 2697 /** @brief Invalid slot */ 3049 2698 #define BPMP_EBADSLT 57 2699 + /** @brief Not supported */ 2700 + #define BPMP_ENOTSUP 134 2701 + /** @brief No such device or address */ 2702 + #define BPMP_ENXIO 140 3050 2703 3051 2704 /** @} */ 2705 + 2706 + #if defined(BPMP_ABI_CHECKS) 2707 + #include "bpmp_abi_checks.h" 2708 + #endif 3052 2709 3053 2710 #endif
include/soc/tegra/fuse.h (+2):

···
 #define TEGRA124	0x40
 #define TEGRA132	0x13
 #define TEGRA210	0x21
+#define TEGRA186	0x18
+#define TEGRA194	0x19
 
 #define TEGRA_FUSE_SKU_CALIB_0	0xf0
 #define TEGRA30_FUSE_SATA_CALIB	0x124
include/trace/events/scmi.h (+3 -3):

···
 
 TRACE_EVENT(scmi_xfer_end,
 	TP_PROTO(int transfer_id, u8 msg_id, u8 protocol_id, u16 seq,
-		 u32 status),
+		 int status),
 	TP_ARGS(transfer_id, msg_id, protocol_id, seq, status),
 
 	TP_STRUCT__entry(
···
 		__field(u8, msg_id)
 		__field(u8, protocol_id)
 		__field(u16, seq)
-		__field(u32, status)
+		__field(int, status)
 	),
 
 	TP_fast_assign(
···
 		__entry->status = status;
 	),
 
-	TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u status=%u",
+	TP_printk("transfer_id=%d msg_id=%u protocol_id=%u seq=%u status=%d",
 		  __entry->transfer_id, __entry->msg_id, __entry->protocol_id,
 		  __entry->seq, __entry->status)
 );