
Merge tag 'char-misc-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char / misc driver updates from Greg KH:
"Here is the big set of char / misc and other driver subsystem updates
for 5.14-rc1. Included in here are:

- habanalabs driver updates

- fsl-mc driver updates

- comedi driver updates

- fpga driver updates

- extcon driver updates

- interconnect driver updates

- mei driver updates

- nvmem driver updates

- phy driver updates

- pnp driver updates

- soundwire driver updates

- lots of other tiny driver updates for char and misc drivers

This is looking more and more like the "various driver subsystems
mushed together" tree...

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-5.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (292 commits)
mcb: Use DEFINE_RES_MEM() helper macro and fix the end address
PNP: moved EXPORT_SYMBOL so that it immediately followed its function/variable
bus: mhi: pci-generic: Add missing 'pci_disable_pcie_error_reporting()' calls
bus: mhi: Wait for M2 state during system resume
bus: mhi: core: Fix power down latency
intel_th: Wait until port is in reset before programming it
intel_th: msu: Make contiguous buffers uncached
intel_th: Remove an unused exit point from intel_th_remove()
stm class: Spelling fix
nitro_enclaves: Set Bus Master for the NE PCI device
misc: ibmasm: Modify matricies to matrices
misc: vmw_vmci: return the correct errno code
siox: Simplify error handling via dev_err_probe()
fpga: machxo2-spi: Address warning about unused variable
lkdtm/heap: Add init_on_alloc tests
selftests/lkdtm: Enable various testable CONFIGs
lkdtm: Add CONFIG hints in errors where possible
lkdtm: Enable DOUBLE_FAULT on all architectures
lkdtm/heap: Add vmalloc linear overflow test
lkdtm/bugs: XFAIL UNALIGNED_LOAD_STORE_WRITE
...

+11680 -2831
+13
Documentation/ABI/stable/sysfs-driver-w1_ds2438
···
+What:		/sys/bus/w1/devices/.../page1
+Date:		April 2021
+Contact:	Luiz Sampaio <sampaio.ime@gmail.com>
+Description:	read the contents of the page1 of the DS2438
+		see Documentation/w1/slaves/w1_ds2438.rst for detailed information
+Users:		any user space application which wants to communicate with DS2438
+
+What:		/sys/bus/w1/devices/.../offset
+Date:		April 2021
+Contact:	Luiz Sampaio <sampaio.ime@gmail.com>
+Description:	write the contents to the offset register of the DS2438
+		see Documentation/w1/slaves/w1_ds2438.rst for detailed information
+Users:		any user space application which wants to communicate with DS2438
+8
Documentation/ABI/testing/debugfs-driver-habanalabs
···
 Description:    Sets the PCI power state. Valid values are "1" for D0 and "2"
                 for D3Hot
 
+What:           /sys/kernel/debug/habanalabs/hl<n>/skip_reset_on_timeout
+Date:           Jun 2021
+KernelVersion:  5.13
+Contact:        ynudelman@habana.ai
+Description:    Sets the skip reset on timeout option for the device. Value of
+                "0" means device will be reset in case some CS has timed out,
+                otherwise it will not be reset.
+
 What:           /sys/kernel/debug/habanalabs/hl<n>/stop_on_err
 Date:           Mar 2020
 KernelVersion:  5.6
+19
Documentation/ABI/testing/sysfs-class-spi-eeprom
···
+What:		/sys/class/spi_master/spi<bus>/spi<bus>.<dev>/fram
+Date:		June 2021
+KernelVersion:	5.14
+Contact:	Jiri Prchal <jiri.prchal@aksignal.cz>
+Description:
+		Contains the FRAM binary data. Same as EEPROM, just another file
+		name to indicate that it employs ferroelectric process.
+		It performs write operations at bus speed - no write delays.
+
+What:		/sys/class/spi_master/spi<bus>/spi<bus>.<dev>/sernum
+Date:		May 2021
+KernelVersion:	5.14
+Contact:	Jiri Prchal <jiri.prchal@aksignal.cz>
+Description:
+		Contains the serial number of the Cypress FRAM (FM25VN) if it is
+		present. It will be displayed as a 8 byte hex string, as read
+		from the device.
+
+		This is a read-only attribute.
+25 -6
Documentation/devicetree/bindings/eeprom/at25.yaml
···
 $id: "http://devicetree.org/schemas/eeprom/at25.yaml#"
 $schema: "http://devicetree.org/meta-schemas/core.yaml#"
 
-title: SPI EEPROMs compatible with Atmel's AT25
+title: SPI EEPROMs or FRAMs compatible with Atmel's AT25
 
 maintainers:
   - Christian Eggers <ceggers@arri.de>
 
 properties:
   $nodename:
-    pattern: "^eeprom@[0-9a-f]{1,2}$"
+    anyOf:
+      - pattern: "^eeprom@[0-9a-f]{1,2}$"
+      - pattern: "^fram@[0-9a-f]{1,2}$"
 
 # There are multiple known vendors who manufacture EEPROM chips compatible
 # with Atmel's AT25. The compatible string requires two items where the
···
           - microchip,25lc040
           - st,m95m02
           - st,m95256
+          - cypress,fm25
 
       - const: atmel,at25
 
···
     $ref: /schemas/types.yaml#/definitions/uint32
     enum: [1, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072]
     description:
-      Size of the eeprom page.
+      Size of the eeprom page. FRAMs don't have pages.
 
   size:
     $ref: /schemas/types.yaml#/definitions/uint32
···
   - compatible
   - reg
   - spi-max-frequency
-  - pagesize
-  - size
-  - address-width
+
+allOf:
+  - if:
+      properties:
+        compatible:
+          not:
+            contains:
+              const: cypress,fm25
+    then:
+      required:
+        - pagesize
+        - size
+        - address-width
 
 additionalProperties: false
 
···
         pagesize = <64>;
         size = <32768>;
         address-width = <16>;
+    };
+
+    fram@1 {
+        compatible = "cypress,fm25", "atmel,at25";
+        reg = <1>;
+        spi-max-frequency = <40000000>;
     };
 };
-21
Documentation/devicetree/bindings/extcon/extcon-sm5502.txt
···
-
-* SM5502 MUIC (Micro-USB Interface Controller) device
-
-The Silicon Mitus SM5502 is a MUIC (Micro-USB Interface Controller) device
-which can detect the state of external accessory when external accessory is
-attached or detached and button is pressed or released. It is interfaced to
-the host controller using an I2C interface.
-
-Required properties:
-- compatible: Should be "siliconmitus,sm5502-muic"
-- reg: Specifies the I2C slave address of the MUIC block. It should be 0x25
-- interrupts: Interrupt specifiers for detection interrupt sources.
-
-Example:
-
-	sm5502@25 {
-		compatible = "siliconmitus,sm5502-muic";
-		interrupt-parent = <&gpx1>;
-		interrupts = <5 0>;
-		reg = <0x25>;
-	};
+52
Documentation/devicetree/bindings/extcon/siliconmitus,sm5502-muic.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/extcon/siliconmitus,sm5502-muic.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: SM5502/SM5504 MUIC (Micro-USB Interface Controller) device
+
+maintainers:
+  - Chanwoo Choi <cw00.choi@samsung.com>
+
+description:
+  The Silicon Mitus SM5502 is a MUIC (Micro-USB Interface Controller) device
+  which can detect the state of external accessory when external accessory is
+  attached or detached and button is pressed or released. It is interfaced to
+  the host controller using an I2C interface.
+
+properties:
+  compatible:
+    enum:
+      - siliconmitus,sm5502-muic
+      - siliconmitus,sm5504-muic
+
+  reg:
+    maxItems: 1
+    description: I2C slave address of the device. Usually 0x25 for SM5502,
+      0x14 for SM5504.
+
+  interrupts:
+    maxItems: 1
+
+required:
+  - compatible
+  - reg
+  - interrupts
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/interrupt-controller/irq.h>
+    i2c {
+      #address-cells = <1>;
+      #size-cells = <0>;
+
+      extcon@25 {
+        compatible = "siliconmitus,sm5502-muic";
+        reg = <0x25>;
+        interrupt-parent = <&msmgpio>;
+        interrupts = <12 IRQ_TYPE_EDGE_FALLING>;
+      };
+    };
+11 -11
Documentation/devicetree/bindings/fpga/fpga-region.txt
···
 
 Partial Reconfiguration Region (PRR)
  * Also called a "reconfigurable partition"
- * A PRR is a specific section of a FPGA reserved for reconfiguration.
+ * A PRR is a specific section of an FPGA reserved for reconfiguration.
  * A base (or static) FPGA image may create a set of PRR's that later may
    be independently reprogrammed many times.
  * The size and specific location of each PRR is fixed.
···
 Sequence
 ========
 
-When a DT overlay that targets a FPGA Region is applied, the FPGA Region will
+When a DT overlay that targets an FPGA Region is applied, the FPGA Region will
 do the following:
 
  1. Disable appropriate FPGA bridges.
···
 FPGA while an operating system is running.
 
 An FPGA Region that exists in the live Device Tree reflects the current state.
-If the live tree shows a "firmware-name" property or child nodes under a FPGA
-Region, the FPGA already has been programmed. A DTO that targets a FPGA Region
+If the live tree shows a "firmware-name" property or child nodes under an FPGA
+Region, the FPGA already has been programmed. A DTO that targets an FPGA Region
 and adds the "firmware-name" property is taken as a request to reprogram the
 FPGA. After reprogramming is successful, the overlay is accepted into the live
 tree.
···
 base FPGA region. The "Full Reconfiguration to add PRR's" example below shows
 this.
 
-If an FPGA Region does not specify a FPGA Manager, it will inherit the FPGA
+If an FPGA Region does not specify an FPGA Manager, it will inherit the FPGA
 Manager specified by its ancestor FPGA Region. This supports both the case
-where the same FPGA Manager is used for all of a FPGA as well the case where
+where the same FPGA Manager is used for all of an FPGA as well the case where
 a different FPGA Manager is used for each region.
 
 FPGA Regions do not inherit their ancestor FPGA regions' bridges. This prevents
···
 Required properties:
 - compatible : should contain "fpga-region"
 - fpga-mgr : should contain a phandle to an FPGA Manager. Child FPGA Regions
-	inherit this property from their ancestor regions. A fpga-mgr property
+	inherit this property from their ancestor regions. An fpga-mgr property
 	in a region will override any inherited FPGA manager.
 - #address-cells, #size-cells, ranges : must be present to handle address space
 	mapping for child nodes.
···
 - firmware-name : should contain the name of an FPGA image file located on the
 	firmware search path. If this property shows up in a live device tree
 	it indicates that the FPGA has already been programmed with this image.
-	If this property is in an overlay targeting a FPGA region, it is a
+	If this property is in an overlay targeting an FPGA region, it is a
 	request to program the FPGA with that image.
 - fpga-bridges : should contain a list of phandles to FPGA Bridges that must be
 	controlled during FPGA programming along with the parent FPGA bridge.
 	This property is optional if the FPGA Manager handles the bridges.
-	If the fpga-region is the child of a fpga-bridge, the list should not
+	If the fpga-region is the child of an fpga-bridge, the list should not
 	contain the parent bridge.
 - partial-fpga-config : boolean, set if partial reconfiguration is to be done,
 	otherwise full reconfiguration is done.
···
 
 In all cases the live DT must have the FPGA Manager, FPGA Bridges (if any), and
 a FPGA Region. The target of the Device Tree Overlay is the FPGA Region. Some
-uses are specific to a FPGA device.
+uses are specific to an FPGA device.
 
  * No FPGA Bridges
    In this case, the FPGA Manager which programs the FPGA also handles the
···
 bridges need to exist in the FPGA that can gate the buses going to each FPGA
 region while the buses are enabled for other sections. Before any partial
 reconfiguration can be done, a base FPGA image must be loaded which includes
-PRR's with FPGA bridges. The device tree should have a FPGA region for each
+PRR's with FPGA bridges. The device tree should have an FPGA region for each
 PRR.
 
 Device Tree Examples
+12
Documentation/devicetree/bindings/interconnect/qcom,rpmh.yaml
···
       - qcom,sc7180-npu-noc
       - qcom,sc7180-qup-virt
       - qcom,sc7180-system-noc
+      - qcom,sc7280-aggre1-noc
+      - qcom,sc7280-aggre2-noc
+      - qcom,sc7280-clk-virt
+      - qcom,sc7280-cnoc2
+      - qcom,sc7280-cnoc3
+      - qcom,sc7280-dc-noc
+      - qcom,sc7280-gem-noc
+      - qcom,sc7280-lpass-ag-noc
+      - qcom,sc7280-mc-virt
+      - qcom,sc7280-mmss-noc
+      - qcom,sc7280-nsp-noc
+      - qcom,sc7280-system-noc
       - qcom,sdm845-aggre1-noc
       - qcom,sdm845-aggre2-noc
       - qcom,sdm845-config-noc
+3
Documentation/devicetree/bindings/misc/eeprom-93xx46.txt
···
 
 Required properties:
 - compatible : shall be one of:
+    "atmel,at93c46"
     "atmel,at93c46d"
+    "atmel,at93c56"
+    "atmel,at93c66"
     "eeprom-93xx46"
     "microchip,93lc46b"
 - data-size : number of data bits per word (either 8 or 16)
+24
Documentation/devicetree/bindings/pci/qcom,pcie.txt
···
 			- "qcom,pcie-qcs404" for qcs404
 			- "qcom,pcie-sdm845" for sdm845
 			- "qcom,pcie-sm8250" for sm8250
+			- "qcom,pcie-ipq6018" for ipq6018
 
 - reg:
 	Usage: required
···
 			- "aux"		Auxiliary clock
 
 - clock-names:
+	Usage: required for ipq6018
+	Value type: <stringlist>
+	Definition: Should contain the following entries
+			- "iface"	PCIe to SysNOC BIU clock
+			- "axi_m"	AXI Master clock
+			- "axi_s"	AXI Slave clock
+			- "axi_bridge"	AXI bridge clock
+			- "rchng"
+
+- clock-names:
 	Usage: required for qcs404
 	Value type: <stringlist>
 	Definition: Should contain the following entries
···
 			- "axi_s"		AXI Slave reset
 			- "ahb"			AHB Reset
 			- "axi_m_sticky"	AXI Master Sticky reset
+
+- reset-names:
+	Usage: required for ipq6018
+	Value type: <stringlist>
+	Definition: Should contain the following entries
+			- "pipe"		PIPE reset
+			- "sleep"		Sleep reset
+			- "sticky"		Core Sticky reset
+			- "axi_m"		AXI Master reset
+			- "axi_s"		AXI Slave reset
+			- "ahb"			AHB Reset
+			- "axi_m_sticky"	AXI Master Sticky reset
+			- "axi_s_sticky"	AXI Slave Sticky reset
 
 - reset-names:
 	Usage: required for qcs404
+5
Documentation/devicetree/bindings/phy/mediatek,mt7621-pci-phy.yaml
···
   reg:
     maxItems: 1
 
+  clocks:
+    maxItems: 1
+
   "#phy-cells":
     const: 1
     description: selects if the phy is dual-ported
···
 required:
   - compatible
   - reg
+  - clocks
   - "#phy-cells"
 
 additionalProperties: false
···
     pcie0_phy: pcie-phy@1e149000 {
       compatible = "mediatek,mt7621-pci-phy";
       reg = <0x1e149000 0x0700>;
+      clocks = <&sysc 0>;
       #phy-cells = <1>;
     };
+1
Documentation/devicetree/bindings/phy/phy-rockchip-inno-usb2.yaml
···
     enum:
       - rockchip,px30-usb2phy
       - rockchip,rk3228-usb2phy
+      - rockchip,rk3308-usb2phy
       - rockchip,rk3328-usb2phy
       - rockchip,rk3366-usb2phy
       - rockchip,rk3399-usb2phy
+11
Documentation/devicetree/bindings/phy/phy-stm32-usbphyc.yaml
···
   "#phy-cells":
     enum: [ 0x0, 0x1 ]
 
+  connector:
+    type: object
+    allOf:
+      - $ref: ../connector/usb-connector.yaml
+    properties:
+      vbus-supply: true
+
 allOf:
   - if:
       properties:
···
       reg = <0>;
       phy-supply = <&vdd_usb>;
       #phy-cells = <0>;
+      connector {
+        compatible = "usb-a-connector";
+        vbus-supply = <&vbus_sw>;
+      };
     };
 
     usbphyc_port1: usb-phy@1 {
+27
Documentation/devicetree/bindings/phy/qcom,qmp-phy.yaml
···
 properties:
   compatible:
     enum:
+      - qcom,ipq6018-qmp-pcie-phy
       - qcom,ipq8074-qmp-pcie-phy
       - qcom,ipq8074-qmp-usb3-phy
       - qcom,msm8996-qmp-pcie-phy
···
       - qcom,sm8350-qmp-ufs-phy
       - qcom,sm8350-qmp-usb3-phy
       - qcom,sm8350-qmp-usb3-uni-phy
+      - qcom,sdx55-qmp-pcie-phy
       - qcom,sdx55-qmp-usb3-uni-phy
 
   reg:
···
         compatible:
           contains:
             enum:
+              - qcom,ipq6018-qmp-pcie-phy
+    then:
+      properties:
+        clocks:
+          items:
+            - description: Phy aux clock.
+            - description: Phy config clock.
+        clock-names:
+          items:
+            - const: aux
+            - const: cfg_ahb
+        resets:
+          items:
+            - description: reset of phy block.
+            - description: phy common block reset.
+        reset-names:
+          items:
+            - const: phy
+            - const: common
+  - if:
+      properties:
+        compatible:
+          contains:
+            enum:
               - qcom,sdm845-qhp-pcie-phy
               - qcom,sdm845-qmp-pcie-phy
+              - qcom,sdx55-qmp-pcie-phy
               - qcom,sm8250-qmp-gen3x1-pcie-phy
               - qcom,sm8250-qmp-gen3x2-pcie-phy
               - qcom,sm8250-qmp-modem-pcie-phy
-24
Documentation/devicetree/bindings/phy/rcar-gen3-phy-pcie.txt
···
-* Renesas R-Car generation 3 PCIe PHY
-
-This file provides information on what the device node for the R-Car
-generation 3 PCIe PHY contains.
-
-Required properties:
-- compatible: "renesas,r8a77980-pcie-phy" if the device is a part of the
-	      R8A77980 SoC.
-- reg: offset and length of the register block.
-- clocks: clock phandle and specifier pair.
-- power-domains: power domain phandle and specifier pair.
-- resets: reset phandle and specifier pair.
-- #phy-cells: see phy-bindings.txt in the same directory, must be <0>.
-
-Example (R-Car V3H):
-
-	pcie-phy@e65d0000 {
-		compatible = "renesas,r8a77980-pcie-phy";
-		reg = <0 0xe65d0000 0 0x8000>;
-		#phy-cells = <0>;
-		clocks = <&cpg CPG_MOD 319>;
-		power-domains = <&sysc 32>;
-		resets = <&cpg 319>;
-	};
+53
Documentation/devicetree/bindings/phy/renesas,rcar-gen3-pcie-phy.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/phy/renesas,rcar-gen3-pcie-phy.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Renesas R-Car Generation 3 PCIe PHY
+
+maintainers:
+  - Sergei Shtylyov <sergei.shtylyov@gmail.com>
+
+properties:
+  compatible:
+    const: renesas,r8a77980-pcie-phy
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  power-domains:
+    maxItems: 1
+
+  resets:
+    maxItems: 1
+
+  '#phy-cells':
+    const: 0
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - power-domains
+  - resets
+  - '#phy-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/r8a77980-cpg-mssr.h>
+    #include <dt-bindings/power/r8a77980-sysc.h>
+
+    pcie-phy@e65d0000 {
+      compatible = "renesas,r8a77980-pcie-phy";
+      reg = <0xe65d0000 0x8000>;
+      #phy-cells = <0>;
+      clocks = <&cpg CPG_MOD 319>;
+      power-domains = <&sysc R8A77980_PD_ALWAYS_ON>;
+      resets = <&cpg 319>;
+    };
+79
Documentation/devicetree/bindings/phy/rockchip-inno-csi-dphy.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/phy/rockchip-inno-csi-dphy.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip SoC MIPI RX0 D-PHY Device Tree Bindings
+
+maintainers:
+  - Heiko Stuebner <heiko@sntech.de>
+
+description: |
+  The Rockchip SoC has a MIPI CSI D-PHY based on an Innosilicon IP wich
+  connects to the ISP1 (Image Signal Processing unit v1.0) for CSI cameras.
+
+properties:
+  compatible:
+    enum:
+      - rockchip,px30-csi-dphy
+      - rockchip,rk1808-csi-dphy
+      - rockchip,rk3326-csi-dphy
+      - rockchip,rk3368-csi-dphy
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    maxItems: 1
+
+  clock-names:
+    const: pclk
+
+  '#phy-cells':
+    const: 0
+
+  power-domains:
+    description: Video in/out power domain.
+    maxItems: 1
+
+  resets:
+    items:
+      - description: exclusive PHY reset line
+
+  reset-names:
+    items:
+      - const: apb
+
+  rockchip,grf:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description:
+      Some additional phy settings are access through GRF regs.
+
+required:
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - '#phy-cells'
+  - power-domains
+  - resets
+  - reset-names
+  - rockchip,grf
+
+additionalProperties: false
+
+examples:
+  - |
+
+    csi_dphy: phy@ff2f0000 {
+      compatible = "rockchip,px30-csi-dphy";
+      reg = <0xff2f0000 0x4000>;
+      clocks = <&cru 1>;
+      clock-names = "pclk";
+      #phy-cells = <0>;
+      power-domains = <&power 1>;
+      resets = <&cru 1>;
+      reset-names = "apb";
+      rockchip,grf = <&grf>;
+    };
-52
Documentation/devicetree/bindings/phy/rockchip-usb-phy.txt
···
-ROCKCHIP USB2 PHY
-
-Required properties:
- - compatible: matching the soc type, one of
-     "rockchip,rk3066a-usb-phy"
-     "rockchip,rk3188-usb-phy"
-     "rockchip,rk3288-usb-phy"
- - #address-cells: should be 1
- - #size-cells: should be 0
-
-Deprecated properties:
- - rockchip,grf : phandle to the syscon managing the "general
-   register files" - phy should be a child of the GRF instead
-
-Sub-nodes:
-Each PHY should be represented as a sub-node.
-
-Sub-nodes
-required properties:
-- #phy-cells: should be 0
-- reg: PHY configure reg address offset in GRF
-	"0x320" - for PHY attach to OTG controller
-	"0x334" - for PHY attach to HOST0 controller
-	"0x348" - for PHY attach to HOST1 controller
-
-Optional Properties:
-- clocks : phandle + clock specifier for the phy clocks
-- clock-names: string, clock name, must be "phyclk"
-- #clock-cells: for users of the phy-pll, should be 0
-- reset-names: Only allow the following entries:
-	- phy-reset
-- resets: Must contain an entry for each entry in reset-names.
-- vbus-supply: power-supply phandle for vbus power source
-
-Example:
-
-grf: syscon@ff770000 {
-	compatible = "rockchip,rk3288-grf", "syscon", "simple-mfd";
-
-...
-
-	usbphy: phy {
-		compatible = "rockchip,rk3288-usb-phy";
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		usbphy0: usb-phy0 {
-			#phy-cells = <0>;
-			reg = <0x320>;
-		};
-	};
-};
+81
Documentation/devicetree/bindings/phy/rockchip-usb-phy.yaml
···
+# SPDX-License-Identifier: GPL-2.0
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/phy/rockchip-usb-phy.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Rockchip USB2.0 phy
+
+maintainers:
+  - Heiko Stuebner <heiko@sntech.de>
+
+properties:
+  compatible:
+    oneOf:
+      - const: rockchip,rk3288-usb-phy
+      - items:
+          - enum:
+              - rockchip,rk3066a-usb-phy
+              - rockchip,rk3188-usb-phy
+          - const: rockchip,rk3288-usb-phy
+
+  "#address-cells":
+    const: 1
+
+  "#size-cells":
+    const: 0
+
+required:
+  - compatible
+  - "#address-cells"
+  - "#size-cells"
+
+additionalProperties: false
+
+patternProperties:
+  "usb-phy@[0-9a-f]+$":
+    type: object
+
+    properties:
+      reg:
+        maxItems: 1
+
+      "#phy-cells":
+        const: 0
+
+      clocks:
+        maxItems: 1
+
+      clock-names:
+        const: phyclk
+
+      "#clock-cells":
+        const: 0
+
+      resets:
+        maxItems: 1
+
+      reset-names:
+        const: phy-reset
+
+      vbus-supply:
+        description: phandle for vbus power source
+
+    required:
+      - reg
+      - "#phy-cells"
+
+    additionalProperties: false
+
+examples:
+  - |
+    usbphy: usbphy {
+      compatible = "rockchip,rk3288-usb-phy";
+      #address-cells = <1>;
+      #size-cells = <0>;
+
+      usbphy0: usb-phy@320 {
+        reg = <0x320>;
+        #phy-cells = <0>;
+      };
+    };
+56
Documentation/devicetree/bindings/phy/ti,tcan104x-can.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: "http://devicetree.org/schemas/phy/ti,tcan104x-can.yaml#"
+$schema: "http://devicetree.org/meta-schemas/core.yaml#"
+
+title: TCAN104x CAN TRANSCEIVER PHY
+
+maintainers:
+  - Aswath Govindraju <a-govindraju@ti.com>
+
+properties:
+  $nodename:
+    pattern: "^can-phy"
+
+  compatible:
+    enum:
+      - ti,tcan1042
+      - ti,tcan1043
+
+  '#phy-cells':
+    const: 0
+
+  standby-gpios:
+    description:
+      gpio node to toggle standby signal on transceiver
+    maxItems: 1
+
+  enable-gpios:
+    description:
+      gpio node to toggle enable signal on transceiver
+    maxItems: 1
+
+  max-bitrate:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description:
+      max bit rate supported in bps
+    minimum: 1
+
+required:
+  - compatible
+  - '#phy-cells'
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/gpio/gpio.h>
+
+    transceiver1: can-phy {
+      compatible = "ti,tcan1043";
+      #phy-cells = <0>;
+      max-bitrate = <5000000>;
+      standby-gpios = <&wakeup_gpio1 16 GPIO_ACTIVE_LOW>;
+      enable-gpios = <&main_gpio1 67 GPIO_ACTIVE_HIGH>;
+    };
+2 -2
Documentation/fpga/dfl.rst
···
 interface to FPGA, e.g. the FPGA Management Engine (FME) and Port (more
 descriptions on FME and Port in later sections).
 
-Accelerated Function Unit (AFU) represents a FPGA programmable region and
+Accelerated Function Unit (AFU) represents an FPGA programmable region and
 always connects to a FIU (e.g. a Port) as its child as illustrated above.
 
 Private Features represent sub features of the FIU and AFU. They could be
···
 | PCI PF Device |               |  | PCI VF Device |
 +---------------+               |  +---------------+
 
-FPGA PCIe device driver is always loaded first once a FPGA PCIe PF or VF device
+FPGA PCIe device driver is always loaded first once an FPGA PCIe PF or VF device
 is detected. It:
 
 * Finishes enumeration on both FPGA PCIe PF and VF device using common
+1 -1
Documentation/userspace-api/accelerators/ocxl.rst
···
 at being low-latency and high-bandwidth. The specification is
 developed by the `OpenCAPI Consortium <http://opencapi.org/>`_.
 
-It allows an accelerator (which could be a FPGA, ASICs, ...) to access
+It allows an accelerator (which could be an FPGA, ASICs, ...) to access
 the host memory coherently, using virtual addresses. An OpenCAPI
 device can also host its own memory, that can be accessed from the
 host.
+18 -1
Documentation/w1/slaves/w1_ds2438.rst
···
 wind speed/direction measuring, humidity sensing, etc.
 
 Current support is provided through the following sysfs files (all files
-except "iad" are readonly):
+except "iad" and "offset" are readonly):
 
 "iad"
 -----
···
 Internally when this file is read, the additional CRC byte is also obtained
 from the slave device. If it is correct, the 8 bytes page data are passed
 to userspace, otherwise an I/O error is returned.
+
+"page1"
+-------
+This file provides full 8 bytes of the chip Page 1 (01h).
+This page contains the ICA, elapsed time meter and current offset data of the DS2438.
+Internally when this file is read, the additional CRC byte is also obtained
+from the slave device. If it is correct, the 8 bytes page data are passed
+to userspace, otherwise an I/O error is returned.
+
+"offset"
+--------
+This file controls the 2-byte Offset Register of the chip.
+Writing a 2-byte value will change the Offset Register, which changes the
+current measurement done by the chip. Changing this register to the two's complement
+of the current register while forcing zero current through the load will calibrate
+the chip, canceling offset errors in the current ADC.
+
 
 "temperature"
 -------------
+3 -1
MAINTAINERS
···
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can.git
 T:	git git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next.git
 F:	Documentation/devicetree/bindings/net/can/
+F:	Documentation/devicetree/bindings/phy/ti,tcan104x-can.yaml
 F:	drivers/net/can/
+F:	drivers/phy/phy-can-transceiver.c
 F:	include/linux/can/bittiming.h
 F:	include/linux/can/dev.h
 F:	include/linux/can/led.h
···
 M:	Miquel Raynal <miquel.raynal@bootlin.com>
 S:	Maintained
 F:	Documentation/devicetree/bindings/phy/phy-mvebu-comphy.txt
-F:	Documentation/devicetree/bindings/phy/phy-mvebu-utmi.txt
+F:	Documentation/devicetree/bindings/phy/marvell,armada-3700-utmi-phy.yaml
 F:	drivers/phy/marvell/phy-mvebu-a3700-comphy.c
 F:	drivers/phy/marvell/phy-mvebu-a3700-utmi.c
 
+1 -1
arch/sparc/include/asm/vio.h
···
 	struct list_head		node;
 	const struct vio_device_id	*id_table;
 	int (*probe)(struct vio_dev *dev, const struct vio_device_id *id);
-	int (*remove)(struct vio_dev *dev);
+	void (*remove)(struct vio_dev *dev);
 	void (*shutdown)(struct vio_dev *dev);
 	unsigned long			driver_data;
 	struct device_driver		driver;
-6
arch/sparc/kernel/ds.c
···
 	return err;
 }
 
-static int ds_remove(struct vio_dev *vdev)
-{
-	return 0;
-}
-
 static const struct vio_device_id ds_match[] = {
 	{
 		.type = "domain-services-port",
···
 static struct vio_driver ds_driver = {
 	.id_table	= ds_match,
 	.probe		= ds_probe,
-	.remove		= ds_remove,
 	.name		= "ds",
 };
 
+2 -2
arch/sparc/kernel/vio.c
···
 		 * routines to do so at the moment. TBD
 		 */
 
-		return drv->remove(vdev);
+		drv->remove(vdev);
 	}
 
-	return 1;
+	return 0;
 }
 
 static ssize_t devspec_show(struct device *dev,
+3
drivers/accessibility/braille/braille_console.c
···
 	case KBD_POST_KEYSYM:
 	{
 		unsigned char type = KTYP(param->value) - 0xf0;
+
 		if (type == KT_SPEC) {
 			unsigned char val = KVAL(param->value);
 			int on_off = -1;
···
 {
 	struct vt_notifier_param *param = _param;
 	struct vc_data *vc = param->vc;
+
 	switch (code) {
 	case VT_ALLOCATE:
 		break;
···
 	case VT_WRITE:
 	{
 		unsigned char c = param->c;
+
 		if (vc->vc_num != fg_console)
 			break;
 		switch (c) {
+7
drivers/accessibility/speakup/i18n.c
···
 	[MSG_COLOR_YELLOW] = "yellow",
 	[MSG_COLOR_WHITE] = "white",
 	[MSG_COLOR_GREY] = "grey",
+	[MSG_COLOR_BRIGHTBLUE] = "bright blue",
+	[MSG_COLOR_BRIGHTGREEN] = "bright green",
+	[MSG_COLOR_BRIGHTCYAN] = "bright cyan",
+	[MSG_COLOR_BRIGHTRED] = "bright red",
+	[MSG_COLOR_BRIGHTMAGENTA] = "bright magenta",
+	[MSG_COLOR_BRIGHTYELLOW] = "bright yellow",
+	[MSG_COLOR_BRIGHTWHITE] = "bright white",
 
 	/* Names of key states. */
 	[MSG_STATE_DOUBLE] = "double",
+8 -1
drivers/accessibility/speakup/i18n.h
··· 99 99 MSG_COLOR_YELLOW, 100 100 MSG_COLOR_WHITE, 101 101 MSG_COLOR_GREY, 102 - MSG_COLORS_END = MSG_COLOR_GREY, 102 + MSG_COLOR_BRIGHTBLUE, 103 + MSG_COLOR_BRIGHTGREEN, 104 + MSG_COLOR_BRIGHTCYAN, 105 + MSG_COLOR_BRIGHTRED, 106 + MSG_COLOR_BRIGHTMAGENTA, 107 + MSG_COLOR_BRIGHTYELLOW, 108 + MSG_COLOR_BRIGHTWHITE, 109 + MSG_COLORS_END = MSG_COLOR_BRIGHTWHITE, 103 110 104 111 MSG_STATES_START, 105 112 MSG_STATE_DOUBLE = MSG_STATES_START,
-4
drivers/accessibility/speakup/main.c
··· 389 389 int fg = spk_attr & 0x0f; 390 390 int bg = spk_attr >> 4; 391 391 392 - if (fg > 8) { 393 - synth_printf("%s ", spk_msg_get(MSG_BRIGHT)); 394 - fg -= 8; 395 - } 396 392 synth_printf("%s", spk_msg_get(MSG_COLORS_START + fg)); 397 393 if (bg > 7) { 398 394 synth_printf(" %s ", spk_msg_get(MSG_ON_BLINKING));
+1 -2
drivers/block/sunvdc.c
··· 1050 1050 return err; 1051 1051 } 1052 1052 1053 - static int vdc_port_remove(struct vio_dev *vdev) 1053 + static void vdc_port_remove(struct vio_dev *vdev) 1054 1054 { 1055 1055 struct vdc_port *port = dev_get_drvdata(&vdev->dev); 1056 1056 ··· 1072 1072 1073 1073 kfree(port); 1074 1074 } 1075 - return 0; 1076 1075 } 1077 1076 1078 1077 static void vdc_requeue_inflight(struct vdc_port *port)
+5 -3
drivers/bus/fsl-mc/dprc-driver.c
··· 350 350 * dprc_scan_container - Scans a physical DPRC and synchronizes Linux bus state 351 351 * 352 352 * @mc_bus_dev: pointer to the fsl-mc device that represents a DPRC object 353 - * 353 + * @alloc_interrupts: if true the function allocates the interrupt pool, 354 + * otherwise the interrupt allocation is delayed 354 355 * Scans the physical DPRC and synchronizes the state of the Linux 355 356 * bus driver with the actual state of the MC by adding and removing 356 357 * devices as appropriate. ··· 374 373 return error; 375 374 } 376 375 EXPORT_SYMBOL_GPL(dprc_scan_container); 376 + 377 377 /** 378 378 * dprc_irq0_handler - Regular ISR for DPRC interrupt 0 379 379 * 380 - * @irq: IRQ number of the interrupt being handled 380 + * @irq_num: IRQ number of the interrupt being handled 381 381 * @arg: Pointer to device structure 382 382 */ 383 383 static irqreturn_t dprc_irq0_handler(int irq_num, void *arg) ··· 389 387 /** 390 388 * dprc_irq0_handler_thread - Handler thread function for DPRC interrupt 0 391 389 * 392 - * @irq: IRQ number of the interrupt being handled 390 + * @irq_num: IRQ number of the interrupt being handled 393 391 * @arg: Pointer to device structure 394 392 */ 395 393 static irqreturn_t dprc_irq0_handler_thread(int irq_num, void *arg)
+2 -2
drivers/bus/fsl-mc/dprc.c
··· 334 334 * @mc_io: Pointer to MC portal's I/O object 335 335 * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' 336 336 * @token: Token of DPRC object 337 - * @attributes Returned container attributes 337 + * @attr: Returned container attributes 338 338 * 339 339 * Return: '0' on Success; Error code otherwise. 340 340 */ ··· 504 504 * @mc_io: Pointer to MC portal's I/O object 505 505 * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_' 506 506 * @token: Token of DPRC object 507 - * @obj_type; Object type as returned in dprc_get_obj() 507 + * @obj_type: Object type as returned in dprc_get_obj() 508 508 * @obj_id: Unique object instance as returned in dprc_get_obj() 509 509 * @region_index: The specific region to query 510 510 * @region_desc: Returns the requested region descriptor
+5 -5
drivers/bus/fsl-mc/fsl-mc-allocator.c
··· 254 254 * @mc_dev: fsl-mc device which is used in conjunction with the 255 255 * allocated object 256 256 * @pool_type: pool type 257 - * @new_mc_dev: pointer to area where the pointer to the allocated device 257 + * @new_mc_adev: pointer to area where the pointer to the allocated device 258 258 * is to be returned 259 259 * 260 260 * Allocatable objects are always used in conjunction with some functional ··· 409 409 } 410 410 EXPORT_SYMBOL_GPL(fsl_mc_populate_irq_pool); 411 411 412 - /** 412 + /* 413 413 * Teardown the interrupt pool associated with an fsl-mc bus. 414 414 * It frees the IRQs that were allocated to the pool, back to the GIC-ITS. 415 415 */ ··· 436 436 } 437 437 EXPORT_SYMBOL_GPL(fsl_mc_cleanup_irq_pool); 438 438 439 - /** 439 + /* 440 440 * Allocate the IRQs required by a given fsl-mc device. 441 441 */ 442 442 int __must_check fsl_mc_allocate_irqs(struct fsl_mc_device *mc_dev) ··· 578 578 fsl_mc_cleanup_resource_pool(mc_bus_dev, pool_type); 579 579 } 580 580 581 - /** 581 + /* 582 582 * fsl_mc_allocator_probe - callback invoked when an allocatable device is 583 583 * being added to the system 584 584 */ ··· 610 610 return 0; 611 611 } 612 612 613 - /** 613 + /* 614 614 * fsl_mc_allocator_remove - callback invoked when an allocatable device is 615 615 * being removed from the system 616 616 */
+10 -9
drivers/bus/fsl-mc/fsl-mc-bus.c
··· 24 24 25 25 #include "fsl-mc-private.h" 26 26 27 - /** 27 + /* 28 28 * Default DMA mask for devices on a fsl-mc bus 29 29 */ 30 30 #define FSL_MC_DEFAULT_DMA_MASK (~0ULL) ··· 36 36 * @root_mc_bus_dev: fsl-mc device representing the root DPRC 37 37 * @num_translation_ranges: number of entries in addr_translation_ranges 38 38 * @translation_ranges: array of bus to system address translation ranges 39 + * @fsl_mc_regs: base address of register bank 39 40 */ 40 41 struct fsl_mc { 41 42 struct fsl_mc_device *root_mc_bus_dev; ··· 118 117 return found; 119 118 } 120 119 121 - /** 120 + /* 122 121 * fsl_mc_bus_uevent - callback invoked when a device is added 123 122 */ 124 123 static int fsl_mc_bus_uevent(struct device *dev, struct kobj_uevent_env *env) ··· 468 467 mc_drv->shutdown(mc_dev); 469 468 } 470 469 471 - /** 470 + /* 472 471 * __fsl_mc_driver_register - registers a child device driver with the 473 472 * MC bus 474 473 * ··· 504 503 } 505 504 EXPORT_SYMBOL_GPL(__fsl_mc_driver_register); 506 505 507 - /** 506 + /* 508 507 * fsl_mc_driver_unregister - unregisters a device driver from the 509 508 * MC bus 510 509 */ ··· 564 563 } 565 564 EXPORT_SYMBOL_GPL(fsl_mc_get_version); 566 565 567 - /** 566 + /* 568 567 * fsl_mc_get_root_dprc - function to traverse to the root dprc 569 568 */ 570 569 void fsl_mc_get_root_dprc(struct device *dev, ··· 733 732 return error; 734 733 } 735 734 736 - /** 735 + /* 737 736 * fsl_mc_is_root_dprc - function to check if a given device is a root dprc 738 737 */ 739 738 bool fsl_mc_is_root_dprc(struct device *dev) ··· 758 757 kfree(mc_dev); 759 758 } 760 759 761 - /** 760 + /* 762 761 * Add a newly discovered fsl-mc device to be visible in Linux 763 762 */ 764 763 int fsl_mc_device_add(struct fsl_mc_obj_desc *obj_desc, ··· 1059 1058 return 0; 1060 1059 } 1061 1060 1062 - /** 1061 + /* 1063 1062 * fsl_mc_bus_probe - callback invoked when the root MC bus is being 1064 1063 * added 1065 1064 */ ··· 1183 1182 return error; 1184 1183 } 1185 
1184 1186 - /** 1185 + /* 1187 1186 * fsl_mc_bus_remove - callback invoked when the root MC bus is being 1188 1187 * removed 1189 1188 */
+1 -1
drivers/bus/fsl-mc/fsl-mc-msi.c
··· 148 148 149 149 /** 150 150 * fsl_mc_msi_create_irq_domain - Create a fsl-mc MSI interrupt domain 151 - * @np: Optional device-tree node of the interrupt controller 151 + * @fwnode: Optional firmware node of the interrupt controller 152 152 * @info: MSI domain info 153 153 * @parent: Parent irq domain 154 154 *
+3 -3
drivers/bus/fsl-mc/mc-io.c
··· 50 50 } 51 51 52 52 /** 53 - * Creates an MC I/O object 53 + * fsl_create_mc_io() - Creates an MC I/O object 54 54 * 55 55 * @dev: device to be associated with the MC I/O object 56 56 * @mc_portal_phys_addr: physical address of the MC portal to use 57 57 * @mc_portal_size: size in bytes of the MC portal 58 - * @dpmcp-dev: Pointer to the DPMCP object associated with this MC I/O 58 + * @dpmcp_dev: Pointer to the DPMCP object associated with this MC I/O 59 59 * object or NULL if none. 60 60 * @flags: flags for the new MC I/O object 61 61 * @new_mc_io: Area to return pointer to newly created MC I/O object ··· 123 123 } 124 124 125 125 /** 126 - * Destroys an MC I/O object 126 + * fsl_destroy_mc_io() - Destroys an MC I/O object 127 127 * 128 128 * @mc_io: MC I/O object to destroy 129 129 */
+10 -9
drivers/bus/fsl-mc/mc-sys.c
··· 16 16 17 17 #include "fsl-mc-private.h" 18 18 19 - /** 19 + /* 20 20 * Timeout in milliseconds to wait for the completion of an MC command 21 21 */ 22 22 #define MC_CMD_COMPLETION_TIMEOUT_MS 500 ··· 148 148 } 149 149 150 150 /** 151 - * Waits for the completion of an MC command doing preemptible polling. 152 - * usleep_range() is called between polling iterations. 153 - * 151 + * mc_polling_wait_preemptible() - Waits for the completion of an MC 152 + * command doing preemptible polling. 153 + * usleep_range() is called between 154 + * polling iterations. 154 155 * @mc_io: MC I/O object to be used 155 156 * @cmd: command buffer to receive MC response 156 157 * @mc_status: MC command completion status ··· 195 194 } 196 195 197 196 /** 198 - * Waits for the completion of an MC command doing atomic polling. 199 - * udelay() is called between polling iterations. 200 - * 197 + * mc_polling_wait_atomic() - Waits for the completion of an MC command 198 + * doing atomic polling. udelay() is called 199 + * between polling iterations. 201 200 * @mc_io: MC I/O object to be used 202 201 * @cmd: command buffer to receive MC response 203 202 * @mc_status: MC command completion status ··· 235 234 } 236 235 237 236 /** 238 - * Sends a command to the MC device using the given MC I/O object 239 - * 237 + * mc_send_command() - Sends a command to the MC device using the given 238 + * MC I/O object 240 239 * @mc_io: MC I/O object to be used 241 240 * @cmd: command to be sent 242 241 *
+6 -13
drivers/bus/mhi/core/pm.c
··· 465 465 466 466 /* Trigger MHI RESET so that the device will not access host memory */ 467 467 if (!MHI_PM_IN_FATAL_STATE(mhi_cntrl->pm_state)) { 468 - u32 in_reset = -1; 469 - unsigned long timeout = msecs_to_jiffies(mhi_cntrl->timeout_ms); 470 - 471 468 dev_dbg(dev, "Triggering MHI Reset in device\n"); 472 469 mhi_set_mhi_state(mhi_cntrl, MHI_STATE_RESET); 473 470 474 471 /* Wait for the reset bit to be cleared by the device */ 475 - ret = wait_event_timeout(mhi_cntrl->state_event, 476 - mhi_read_reg_field(mhi_cntrl, 477 - mhi_cntrl->regs, 478 - MHICTRL, 479 - MHICTRL_RESET_MASK, 480 - MHICTRL_RESET_SHIFT, 481 - &in_reset) || 482 - !in_reset, timeout); 483 - if (!ret || in_reset) 484 - dev_err(dev, "Device failed to exit MHI Reset state\n"); 472 + ret = mhi_poll_reg_field(mhi_cntrl, mhi_cntrl->regs, MHICTRL, 473 + MHICTRL_RESET_MASK, MHICTRL_RESET_SHIFT, 0, 474 + 25000); 475 + if (ret) 476 + dev_err(dev, "Device failed to clear MHI Reset\n"); 485 477 486 478 /* 487 479 * Device will clear BHI_INTVEC as a part of RESET processing, ··· 926 934 927 935 ret = wait_event_timeout(mhi_cntrl->state_event, 928 936 mhi_cntrl->dev_state == MHI_STATE_M0 || 937 + mhi_cntrl->dev_state == MHI_STATE_M2 || 929 938 MHI_PM_IN_ERROR_STATE(mhi_cntrl->pm_state), 930 939 msecs_to_jiffies(mhi_cntrl->timeout_ms)); 931 940
+4 -1
drivers/bus/mhi/pci_generic.c
··· 665 665 666 666 err = mhi_register_controller(mhi_cntrl, mhi_cntrl_config); 667 667 if (err) 668 - return err; 668 + goto err_disable_reporting; 669 669 670 670 /* MHI bus does not power up the controller by default */ 671 671 err = mhi_prepare_for_power_up(mhi_cntrl); ··· 699 699 mhi_unprepare_after_power_down(mhi_cntrl); 700 700 err_unregister: 701 701 mhi_unregister_controller(mhi_cntrl); 702 + err_disable_reporting: 703 + pci_disable_pcie_error_reporting(pdev); 702 704 703 705 return err; 704 706 } ··· 723 721 pm_runtime_get_noresume(&pdev->dev); 724 722 725 723 mhi_unregister_controller(mhi_cntrl); 724 + pci_disable_pcie_error_reporting(pdev); 726 725 } 727 726 728 727 static void mhi_pci_shutdown(struct pci_dev *pdev)
-21
drivers/char/Kconfig
··· 357 357 To compile this driver as a module, choose M here: the 358 358 module will be called nvram. 359 359 360 - config RAW_DRIVER 361 - tristate "RAW driver (/dev/raw/rawN)" 362 - depends on BLOCK 363 - help 364 - The raw driver permits block devices to be bound to /dev/raw/rawN. 365 - Once bound, I/O against /dev/raw/rawN uses efficient zero-copy I/O. 366 - See the raw(8) manpage for more details. 367 - 368 - Applications should preferably open the device (eg /dev/hda1) 369 - with the O_DIRECT flag. 370 - 371 - config MAX_RAW_DEVS 372 - int "Maximum number of RAW devices to support (1-65536)" 373 - depends on RAW_DRIVER 374 - range 1 65536 375 - default "256" 376 - help 377 - The maximum number of RAW devices that are supported. 378 - Default is 256. Increase this number in case you need lots of 379 - raw devices. 380 - 381 360 config DEVPORT 382 361 bool "/dev/port character device" 383 362 depends on ISA || PCI
+1 -2
drivers/char/Makefile
··· 8 8 obj-y += misc.o 9 9 obj-$(CONFIG_ATARI_DSP56K) += dsp56k.o 10 10 obj-$(CONFIG_VIRTIO_CONSOLE) += virtio_console.o 11 - obj-$(CONFIG_RAW_DRIVER) += raw.o 12 11 obj-$(CONFIG_MSPEC) += mspec.o 13 12 obj-$(CONFIG_UV_MMTIMER) += uv_mmtimer.o 14 13 obj-$(CONFIG_IBM_BSR) += bsr.o ··· 43 44 44 45 obj-$(CONFIG_PS3_FLASH) += ps3flash.o 45 46 46 - obj-$(CONFIG_XILLYBUS) += xillybus/ 47 + obj-$(CONFIG_XILLYBUS_CLASS) += xillybus/ 47 48 obj-$(CONFIG_POWERNV_OP_PANEL) += powernv-op-panel.o 48 49 obj-$(CONFIG_ADI) += adi.o
+2 -2
drivers/char/hpet.c
··· 156 156 * This has the effect of treating non-periodic like periodic. 157 157 */ 158 158 if ((devp->hd_flags & (HPET_IE | HPET_PERIODIC)) == HPET_IE) { 159 - unsigned long m, t, mc, base, k; 159 + unsigned long t, mc, base, k; 160 160 struct hpet __iomem *hpet = devp->hd_hpet; 161 161 struct hpets *hpetp = devp->hd_hpets; 162 162 163 163 t = devp->hd_ireqfreq; 164 - m = read_counter(&devp->hd_timer->hpet_compare); 164 + read_counter(&devp->hd_timer->hpet_compare); 165 165 mc = read_counter(&hpet->hpet_mc); 166 166 /* The time for the next interrupt would logically be t + m, 167 167 * however, if we are very unlucky and the interrupt is delayed
+1 -1
drivers/char/hw_random/pseries-rng.c
··· 29 29 return 8; 30 30 } 31 31 32 - /** 32 + /* 33 33 * pseries_rng_get_desired_dma - Return desired DMA allocate for CMO operations 34 34 * 35 35 * This is a required function for a driver to operate in a CMO environment
-1
drivers/char/mem.c
··· 16 16 #include <linux/mman.h> 17 17 #include <linux/random.h> 18 18 #include <linux/init.h> 19 - #include <linux/raw.h> 20 19 #include <linux/tty.h> 21 20 #include <linux/capability.h> 22 21 #include <linux/ptrace.h>
+5 -2
drivers/char/pcmcia/cm4000_cs.c
··· 544 544 io_read_num_rec_bytes(iobase, &num_bytes_read); 545 545 if (num_bytes_read >= 4) { 546 546 DEBUGP(2, dev, "NumRecBytes = %i\n", num_bytes_read); 547 + if (num_bytes_read > 4) { 548 + rc = -EIO; 549 + goto exit_setprotocol; 550 + } 547 551 break; 548 552 } 549 553 usleep_range(10000, 11000); ··· 1054 1050 struct cm4000_dev *dev = filp->private_data; 1055 1051 unsigned int iobase = dev->p_dev->resource[0]->start; 1056 1052 unsigned short s; 1057 - unsigned char tmp; 1058 1053 unsigned char infolen; 1059 1054 unsigned char sendT0; 1060 1055 unsigned short nsend; ··· 1151 1148 set_cardparameter(dev); 1152 1149 1153 1150 /* dummy read, reset flag procedure received */ 1154 - tmp = inb(REG_FLAGS1(iobase)); 1151 + inb(REG_FLAGS1(iobase)); 1155 1152 1156 1153 dev->flags1 = 0x20 /* T_Active */ 1157 1154 | (sendT0)
+1 -2
drivers/char/pcmcia/cm4040_cs.c
··· 221 221 unsigned long i; 222 222 size_t min_bytes_to_read; 223 223 int rc; 224 - unsigned char uc; 225 224 226 225 DEBUGP(2, dev, "-> cm4040_read(%s,%d)\n", current->comm, current->pid); 227 226 ··· 307 308 return -EIO; 308 309 } 309 310 310 - uc = xinb(iobase + REG_OFFSET_BULK_IN); 311 + xinb(iobase + REG_OFFSET_BULK_IN); 311 312 312 313 DEBUGP(2, dev, "<- cm4040_read (successfully)\n"); 313 314 return min_bytes_to_read;
-1
drivers/char/pcmcia/scr24x_cs.c
··· 265 265 266 266 cdev_init(&dev->c_dev, &scr24x_fops); 267 267 dev->c_dev.owner = THIS_MODULE; 268 - dev->c_dev.ops = &scr24x_fops; 269 268 ret = cdev_add(&dev->c_dev, MKDEV(MAJOR(scr24x_devt), dev->devno), 1); 270 269 if (ret < 0) 271 270 goto err;
-362
drivers/char/raw.c
··· 1 - // SPDX-License-Identifier: GPL-2.0-only 2 - /* 3 - * linux/drivers/char/raw.c 4 - * 5 - * Front-end raw character devices. These can be bound to any block 6 - * devices to provide genuine Unix raw character device semantics. 7 - * 8 - * We reserve minor number 0 for a control interface. ioctl()s on this 9 - * device are used to bind the other minor numbers to block devices. 10 - */ 11 - 12 - #include <linux/init.h> 13 - #include <linux/fs.h> 14 - #include <linux/major.h> 15 - #include <linux/blkdev.h> 16 - #include <linux/backing-dev.h> 17 - #include <linux/module.h> 18 - #include <linux/raw.h> 19 - #include <linux/capability.h> 20 - #include <linux/uio.h> 21 - #include <linux/cdev.h> 22 - #include <linux/device.h> 23 - #include <linux/mutex.h> 24 - #include <linux/gfp.h> 25 - #include <linux/compat.h> 26 - #include <linux/vmalloc.h> 27 - 28 - #include <linux/uaccess.h> 29 - 30 - struct raw_device_data { 31 - dev_t binding; 32 - struct block_device *bdev; 33 - int inuse; 34 - }; 35 - 36 - static struct class *raw_class; 37 - static struct raw_device_data *raw_devices; 38 - static DEFINE_MUTEX(raw_mutex); 39 - static const struct file_operations raw_ctl_fops; /* forward declaration */ 40 - 41 - static int max_raw_minors = CONFIG_MAX_RAW_DEVS; 42 - 43 - module_param(max_raw_minors, int, 0); 44 - MODULE_PARM_DESC(max_raw_minors, "Maximum number of raw devices (1-65536)"); 45 - 46 - /* 47 - * Open/close code for raw IO. 48 - * 49 - * We just rewrite the i_mapping for the /dev/raw/rawN file descriptor to 50 - * point at the blockdev's address_space and set the file handle to use 51 - * O_DIRECT. 52 - * 53 - * Set the device's soft blocksize to the minimum possible. This gives the 54 - * finest possible alignment and has no adverse impact on performance. 
55 - */ 56 - static int raw_open(struct inode *inode, struct file *filp) 57 - { 58 - const int minor = iminor(inode); 59 - struct block_device *bdev; 60 - int err; 61 - 62 - if (minor == 0) { /* It is the control device */ 63 - filp->f_op = &raw_ctl_fops; 64 - return 0; 65 - } 66 - 67 - pr_warn_ratelimited( 68 - "process %s (pid %d) is using the deprecated raw device\n" 69 - "support will be removed in Linux 5.14.\n", 70 - current->comm, current->pid); 71 - 72 - mutex_lock(&raw_mutex); 73 - 74 - /* 75 - * All we need to do on open is check that the device is bound. 76 - */ 77 - err = -ENODEV; 78 - if (!raw_devices[minor].binding) 79 - goto out; 80 - bdev = blkdev_get_by_dev(raw_devices[minor].binding, 81 - filp->f_mode | FMODE_EXCL, raw_open); 82 - if (IS_ERR(bdev)) { 83 - err = PTR_ERR(bdev); 84 - goto out; 85 - } 86 - err = set_blocksize(bdev, bdev_logical_block_size(bdev)); 87 - if (err) 88 - goto out1; 89 - filp->f_flags |= O_DIRECT; 90 - filp->f_mapping = bdev->bd_inode->i_mapping; 91 - if (++raw_devices[minor].inuse == 1) 92 - file_inode(filp)->i_mapping = 93 - bdev->bd_inode->i_mapping; 94 - filp->private_data = bdev; 95 - raw_devices[minor].bdev = bdev; 96 - mutex_unlock(&raw_mutex); 97 - return 0; 98 - 99 - out1: 100 - blkdev_put(bdev, filp->f_mode | FMODE_EXCL); 101 - out: 102 - mutex_unlock(&raw_mutex); 103 - return err; 104 - } 105 - 106 - /* 107 - * When the final fd which refers to this character-special node is closed, we 108 - * make its ->mapping point back at its own i_data. 
109 - */ 110 - static int raw_release(struct inode *inode, struct file *filp) 111 - { 112 - const int minor= iminor(inode); 113 - struct block_device *bdev; 114 - 115 - mutex_lock(&raw_mutex); 116 - bdev = raw_devices[minor].bdev; 117 - if (--raw_devices[minor].inuse == 0) 118 - /* Here inode->i_mapping == bdev->bd_inode->i_mapping */ 119 - inode->i_mapping = &inode->i_data; 120 - mutex_unlock(&raw_mutex); 121 - 122 - blkdev_put(bdev, filp->f_mode | FMODE_EXCL); 123 - return 0; 124 - } 125 - 126 - /* 127 - * Forward ioctls to the underlying block device. 128 - */ 129 - static long 130 - raw_ioctl(struct file *filp, unsigned int command, unsigned long arg) 131 - { 132 - struct block_device *bdev = filp->private_data; 133 - return blkdev_ioctl(bdev, 0, command, arg); 134 - } 135 - 136 - static int bind_set(int number, u64 major, u64 minor) 137 - { 138 - dev_t dev = MKDEV(major, minor); 139 - dev_t raw = MKDEV(RAW_MAJOR, number); 140 - struct raw_device_data *rawdev; 141 - int err = 0; 142 - 143 - if (number <= 0 || number >= max_raw_minors) 144 - return -EINVAL; 145 - 146 - if (MAJOR(dev) != major || MINOR(dev) != minor) 147 - return -EINVAL; 148 - 149 - rawdev = &raw_devices[number]; 150 - 151 - /* 152 - * This is like making block devices, so demand the 153 - * same capability 154 - */ 155 - if (!capable(CAP_SYS_ADMIN)) 156 - return -EPERM; 157 - 158 - /* 159 - * For now, we don't need to check that the underlying 160 - * block device is present or not: we can do that when 161 - * the raw device is opened. Just check that the 162 - * major/minor numbers make sense. 
163 - */ 164 - 165 - if (MAJOR(dev) == 0 && dev != 0) 166 - return -EINVAL; 167 - 168 - mutex_lock(&raw_mutex); 169 - if (rawdev->inuse) { 170 - mutex_unlock(&raw_mutex); 171 - return -EBUSY; 172 - } 173 - if (rawdev->binding) 174 - module_put(THIS_MODULE); 175 - 176 - rawdev->binding = dev; 177 - if (!dev) { 178 - /* unbind */ 179 - device_destroy(raw_class, raw); 180 - } else { 181 - __module_get(THIS_MODULE); 182 - device_destroy(raw_class, raw); 183 - device_create(raw_class, NULL, raw, NULL, "raw%d", number); 184 - } 185 - mutex_unlock(&raw_mutex); 186 - return err; 187 - } 188 - 189 - static int bind_get(int number, dev_t *dev) 190 - { 191 - if (number <= 0 || number >= max_raw_minors) 192 - return -EINVAL; 193 - *dev = raw_devices[number].binding; 194 - return 0; 195 - } 196 - 197 - /* 198 - * Deal with ioctls against the raw-device control interface, to bind 199 - * and unbind other raw devices. 200 - */ 201 - static long raw_ctl_ioctl(struct file *filp, unsigned int command, 202 - unsigned long arg) 203 - { 204 - struct raw_config_request rq; 205 - dev_t dev; 206 - int err; 207 - 208 - switch (command) { 209 - case RAW_SETBIND: 210 - if (copy_from_user(&rq, (void __user *) arg, sizeof(rq))) 211 - return -EFAULT; 212 - 213 - return bind_set(rq.raw_minor, rq.block_major, rq.block_minor); 214 - 215 - case RAW_GETBIND: 216 - if (copy_from_user(&rq, (void __user *) arg, sizeof(rq))) 217 - return -EFAULT; 218 - 219 - err = bind_get(rq.raw_minor, &dev); 220 - if (err) 221 - return err; 222 - 223 - rq.block_major = MAJOR(dev); 224 - rq.block_minor = MINOR(dev); 225 - 226 - if (copy_to_user((void __user *)arg, &rq, sizeof(rq))) 227 - return -EFAULT; 228 - 229 - return 0; 230 - } 231 - 232 - return -EINVAL; 233 - } 234 - 235 - #ifdef CONFIG_COMPAT 236 - struct raw32_config_request { 237 - compat_int_t raw_minor; 238 - compat_u64 block_major; 239 - compat_u64 block_minor; 240 - }; 241 - 242 - static long raw_ctl_compat_ioctl(struct file *file, unsigned int cmd, 243 - 
unsigned long arg) 244 - { 245 - struct raw32_config_request __user *user_req = compat_ptr(arg); 246 - struct raw32_config_request rq; 247 - dev_t dev; 248 - int err = 0; 249 - 250 - switch (cmd) { 251 - case RAW_SETBIND: 252 - if (copy_from_user(&rq, user_req, sizeof(rq))) 253 - return -EFAULT; 254 - 255 - return bind_set(rq.raw_minor, rq.block_major, rq.block_minor); 256 - 257 - case RAW_GETBIND: 258 - if (copy_from_user(&rq, user_req, sizeof(rq))) 259 - return -EFAULT; 260 - 261 - err = bind_get(rq.raw_minor, &dev); 262 - if (err) 263 - return err; 264 - 265 - rq.block_major = MAJOR(dev); 266 - rq.block_minor = MINOR(dev); 267 - 268 - if (copy_to_user(user_req, &rq, sizeof(rq))) 269 - return -EFAULT; 270 - 271 - return 0; 272 - } 273 - 274 - return -EINVAL; 275 - } 276 - #endif 277 - 278 - static const struct file_operations raw_fops = { 279 - .read_iter = blkdev_read_iter, 280 - .write_iter = blkdev_write_iter, 281 - .fsync = blkdev_fsync, 282 - .open = raw_open, 283 - .release = raw_release, 284 - .unlocked_ioctl = raw_ioctl, 285 - .llseek = default_llseek, 286 - .owner = THIS_MODULE, 287 - }; 288 - 289 - static const struct file_operations raw_ctl_fops = { 290 - .unlocked_ioctl = raw_ctl_ioctl, 291 - #ifdef CONFIG_COMPAT 292 - .compat_ioctl = raw_ctl_compat_ioctl, 293 - #endif 294 - .open = raw_open, 295 - .owner = THIS_MODULE, 296 - .llseek = noop_llseek, 297 - }; 298 - 299 - static struct cdev raw_cdev; 300 - 301 - static char *raw_devnode(struct device *dev, umode_t *mode) 302 - { 303 - return kasprintf(GFP_KERNEL, "raw/%s", dev_name(dev)); 304 - } 305 - 306 - static int __init raw_init(void) 307 - { 308 - dev_t dev = MKDEV(RAW_MAJOR, 0); 309 - int ret; 310 - 311 - if (max_raw_minors < 1 || max_raw_minors > 65536) { 312 - pr_warn("raw: invalid max_raw_minors (must be between 1 and 65536), using %d\n", 313 - CONFIG_MAX_RAW_DEVS); 314 - max_raw_minors = CONFIG_MAX_RAW_DEVS; 315 - } 316 - 317 - raw_devices = vzalloc(array_size(max_raw_minors, 318 - 
sizeof(struct raw_device_data))); 319 - if (!raw_devices) { 320 - printk(KERN_ERR "Not enough memory for raw device structures\n"); 321 - ret = -ENOMEM; 322 - goto error; 323 - } 324 - 325 - ret = register_chrdev_region(dev, max_raw_minors, "raw"); 326 - if (ret) 327 - goto error; 328 - 329 - cdev_init(&raw_cdev, &raw_fops); 330 - ret = cdev_add(&raw_cdev, dev, max_raw_minors); 331 - if (ret) 332 - goto error_region; 333 - raw_class = class_create(THIS_MODULE, "raw"); 334 - if (IS_ERR(raw_class)) { 335 - printk(KERN_ERR "Error creating raw class.\n"); 336 - cdev_del(&raw_cdev); 337 - ret = PTR_ERR(raw_class); 338 - goto error_region; 339 - } 340 - raw_class->devnode = raw_devnode; 341 - device_create(raw_class, NULL, MKDEV(RAW_MAJOR, 0), NULL, "rawctl"); 342 - 343 - return 0; 344 - 345 - error_region: 346 - unregister_chrdev_region(dev, max_raw_minors); 347 - error: 348 - vfree(raw_devices); 349 - return ret; 350 - } 351 - 352 - static void __exit raw_exit(void) 353 - { 354 - device_destroy(raw_class, MKDEV(RAW_MAJOR, 0)); 355 - class_destroy(raw_class); 356 - cdev_del(&raw_cdev); 357 - unregister_chrdev_region(MKDEV(RAW_MAJOR, 0), max_raw_minors); 358 - } 359 - 360 - module_init(raw_init); 361 - module_exit(raw_exit); 362 - MODULE_LICENSE("GPL");
+20 -2
drivers/char/xillybus/Kconfig
··· 3 3 # Xillybus devices 4 4 # 5 5 6 + config XILLYBUS_CLASS 7 + tristate 8 + 6 9 config XILLYBUS 7 10 tristate "Xillybus generic FPGA interface" 8 11 depends on PCI || OF 9 12 select CRC32 13 + select XILLYBUS_CLASS 10 14 help 11 15 Xillybus is a generic interface for peripherals designed on 12 16 programmable logic (FPGA). The driver probes the hardware for ··· 25 21 depends on PCI_MSI 26 22 help 27 23 Set to M if you want Xillybus to use PCI Express for communicating 28 - with the FPGA. 24 + with the FPGA. The module will be called xillybus_pcie. 29 25 30 26 config XILLYBUS_OF 31 27 tristate "Xillybus over Device Tree" ··· 33 29 help 34 30 Set to M if you want Xillybus to find its resources from the 35 31 Open Firmware Flattened Device Tree. If the target is an embedded 36 - system, say M. 32 + system, say M. The module will be called xillybus_of. 37 33 38 34 endif # if XILLYBUS 35 + 36 + # XILLYUSB doesn't depend on XILLYBUS 37 + 38 + config XILLYUSB 39 + tristate "XillyUSB: Xillybus generic FPGA interface for USB" 40 + depends on USB 41 + select CRC32 42 + select XILLYBUS_CLASS 43 + help 44 + XillyUSB is the Xillybus variant which uses USB for communicating 45 + with the FPGA. 46 + 47 + Set to M if you want Xillybus to use USB for communicating with 48 + the FPGA. The module will be called xillyusb.
+2
drivers/char/xillybus/Makefile
··· 3 3 # Makefile for Xillybus driver 4 4 # 5 5 6 + obj-$(CONFIG_XILLYBUS_CLASS) += xillybus_class.o 6 7 obj-$(CONFIG_XILLYBUS) += xillybus_core.o 7 8 obj-$(CONFIG_XILLYBUS_PCIE) += xillybus_pcie.o 8 9 obj-$(CONFIG_XILLYBUS_OF) += xillybus_of.o 10 + obj-$(CONFIG_XILLYUSB) += xillyusb.o
+2 -8
drivers/char/xillybus/xillybus.h
··· 30 30 31 31 struct xilly_idt_handle { 32 32 unsigned char *chandesc; 33 - unsigned char *idt; 33 + unsigned char *names; 34 + int names_len; 34 35 int entries; 35 36 }; 36 37 ··· 95 94 struct device *dev; 96 95 struct xilly_endpoint_hardware *ephw; 97 96 98 - struct list_head ep_list; 99 97 int dma_using_dac; /* =1 if 64-bit DMA is used, =0 otherwise. */ 100 98 __iomem void *registers; 101 99 int fatal_error; 102 100 103 101 struct mutex register_mutex; 104 102 wait_queue_head_t ep_wait; 105 - 106 - /* Channels and message handling */ 107 - struct cdev cdev; 108 - 109 - int major; 110 - int lowest_minor; /* Highest minor = lowest_minor + num_channels - 1 */ 111 103 112 104 int num_channels; /* EXCLUDING message buffer */ 113 105 struct xilly_channel **channels;
+262
drivers/char/xillybus/xillybus_class.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright 2021 Xillybus Ltd, http://xillybus.com 4 + * 5 + * Driver for the Xillybus class 6 + */ 7 + 8 + #include <linux/types.h> 9 + #include <linux/module.h> 10 + #include <linux/device.h> 11 + #include <linux/fs.h> 12 + #include <linux/cdev.h> 13 + #include <linux/slab.h> 14 + #include <linux/list.h> 15 + #include <linux/mutex.h> 16 + 17 + #include "xillybus_class.h" 18 + 19 + MODULE_DESCRIPTION("Driver for Xillybus class") 20 + MODULE_AUTHOR("Eli Billauer, Xillybus Ltd.") 21 + MODULE_ALIAS("xillybus_class") 22 + MODULE_LICENSE("GPL v2") 23 + 24 + static DEFINE_MUTEX(unit_mutex); 25 + static LIST_HEAD(unit_list); 26 + static struct class *xillybus_class; 27 + 28 + #define UNITNAMELEN 16 29 + 30 + struct xilly_unit { 31 + struct list_head list_entry; 32 + void *private_data; 33 + 34 + struct cdev *cdev; 35 + char name[UNITNAMELEN]; 36 + int major; 37 + int lowest_minor; 38 + int num_nodes; 39 + }; 40 + 41 + int xillybus_init_chrdev(struct device *dev, 42 + const struct file_operations *fops, 43 + struct module *owner, 44 + void *private_data, 45 + unsigned char *idt, unsigned int len, 46 + int num_nodes, 47 + const char *prefix, bool enumerate) 48 + { 49 + int rc; 50 + dev_t mdev; 51 + int i; 52 + char devname[48]; 53 + 54 + struct device *device; 55 + size_t namelen; 56 + struct xilly_unit *unit, *u; 57 + 58 + unit = kzalloc(sizeof(*unit), GFP_KERNEL); 59 + 60 + if (!unit) 61 + return -ENOMEM; 62 + 63 + mutex_lock(&unit_mutex); 64 + 65 + if (!enumerate) 66 + snprintf(unit->name, UNITNAMELEN, "%s", prefix); 67 + 68 + for (i = 0; enumerate; i++) { 69 + snprintf(unit->name, UNITNAMELEN, "%s_%02d", 70 + prefix, i); 71 + 72 + enumerate = false; 73 + list_for_each_entry(u, &unit_list, list_entry) 74 + if (!strcmp(unit->name, u->name)) { 75 + enumerate = true; 76 + break; 77 + } 78 + } 79 + 80 + rc = alloc_chrdev_region(&mdev, 0, num_nodes, unit->name); 81 + 82 + if (rc) { 83 + dev_warn(dev, "Failed to obtain major/minors"); 84 + goto fail_obtain; 85 + }
86 + 87 + unit->major = MAJOR(mdev); 88 + unit->lowest_minor = MINOR(mdev); 89 + unit->num_nodes = num_nodes; 90 + unit->private_data = private_data; 91 + 92 + unit->cdev = cdev_alloc(); 93 + if (!unit->cdev) { 94 + rc = -ENOMEM; 95 + goto unregister_chrdev; 96 + } 97 + unit->cdev->ops = fops; 98 + unit->cdev->owner = owner; 99 + 100 + rc = cdev_add(unit->cdev, MKDEV(unit->major, unit->lowest_minor), 101 + unit->num_nodes); 102 + if (rc) { 103 + dev_err(dev, "Failed to add cdev.\n"); 104 + /* kobject_put() is normally done by cdev_del() */ 105 + kobject_put(&unit->cdev->kobj); 106 + goto unregister_chrdev; 107 + } 108 + 109 + for (i = 0; i < num_nodes; i++) { 110 + namelen = strnlen(idt, len); 111 + 112 + if (namelen == len) { 113 + dev_err(dev, "IDT's list of names is too short. This is exceptionally weird, because its CRC is OK\n"); 114 + rc = -ENODEV; 115 + goto unroll_device_create; 116 + } 117 + 118 + snprintf(devname, sizeof(devname), "%s_%s", 119 + unit->name, idt); 120 + 121 + len -= namelen + 1; 122 + idt += namelen + 1; 123 + 124 + device = device_create(xillybus_class, 125 + NULL, 126 + MKDEV(unit->major, 127 + i + unit->lowest_minor), 128 + NULL, 129 + "%s", devname); 130 + 131 + if (IS_ERR(device)) { 132 + dev_err(dev, "Failed to create %s device. Aborting.\n", 133 + devname); 134 + rc = -ENODEV; 135 + goto unroll_device_create; 136 + } 137 + } 138 + 139 + if (len) { 140 + dev_err(dev, "IDT's list of names is too long. This is exceptionally weird, because its CRC is OK\n"); 141 + rc = -ENODEV; 142 + goto unroll_device_create; 143 + }
144 + 145 + list_add_tail(&unit->list_entry, &unit_list); 146 + 147 + dev_info(dev, "Created %d device files.\n", num_nodes); 148 + 149 + mutex_unlock(&unit_mutex); 150 + 151 + return 0; 152 + 153 + unroll_device_create: 154 + for (i--; i >= 0; i--) 155 + device_destroy(xillybus_class, MKDEV(unit->major, 156 + i + unit->lowest_minor)); 157 + 158 + cdev_del(unit->cdev); 159 + 160 + unregister_chrdev: 161 + unregister_chrdev_region(MKDEV(unit->major, unit->lowest_minor), 162 + unit->num_nodes); 163 + 164 + fail_obtain: 165 + mutex_unlock(&unit_mutex); 166 + 167 + kfree(unit); 168 + 169 + return rc; 170 + } 171 + EXPORT_SYMBOL(xillybus_init_chrdev); 172 + 173 + void xillybus_cleanup_chrdev(void *private_data, 174 + struct device *dev) 175 + { 176 + int minor; 177 + struct xilly_unit *unit; 178 + bool found = false; 179 + 180 + mutex_lock(&unit_mutex); 181 + 182 + list_for_each_entry(unit, &unit_list, list_entry) 183 + if (unit->private_data == private_data) { 184 + found = true; 185 + break; 186 + } 187 + 188 + if (!found) { 189 + dev_err(dev, "Weird bug: Failed to find unit\n"); 190 + mutex_unlock(&unit_mutex); 191 + return; 192 + } 193 + 194 + for (minor = unit->lowest_minor; 195 + minor < (unit->lowest_minor + unit->num_nodes); 196 + minor++) 197 + device_destroy(xillybus_class, MKDEV(unit->major, minor)); 198 + 199 + cdev_del(unit->cdev); 200 + 201 + unregister_chrdev_region(MKDEV(unit->major, unit->lowest_minor), 202 + unit->num_nodes); 203 + 204 + dev_info(dev, "Removed %d device files.\n", 205 + unit->num_nodes); 206 + 207 + list_del(&unit->list_entry); 208 + kfree(unit); 209 + 210 + mutex_unlock(&unit_mutex); 211 + } 212 + EXPORT_SYMBOL(xillybus_cleanup_chrdev); 213 + 214 + int xillybus_find_inode(struct inode *inode, 215 + void **private_data, int *index) 216 + { 217 + int minor = iminor(inode); 218 + int major = imajor(inode); 219 + struct xilly_unit *unit; 220 + bool found = false;
221 + 222 + mutex_lock(&unit_mutex); 223 + 224 + list_for_each_entry(unit, &unit_list, list_entry) 225 + if (unit->major == major && 226 + minor >= unit->lowest_minor && 227 + minor < (unit->lowest_minor + unit->num_nodes)) { 228 + found = true; 229 + break; 230 + } 231 + 232 + mutex_unlock(&unit_mutex); 233 + 234 + if (!found) 235 + return -ENODEV; 236 + 237 + *private_data = unit->private_data; 238 + *index = minor - unit->lowest_minor; 239 + 240 + return 0; 241 + } 242 + EXPORT_SYMBOL(xillybus_find_inode); 243 + 244 + static int __init xillybus_class_init(void) 245 + { 246 + xillybus_class = class_create(THIS_MODULE, "xillybus"); 247 + 248 + if (IS_ERR(xillybus_class)) { 249 + pr_warn("Failed to register xillybus class\n"); 250 + 251 + return PTR_ERR(xillybus_class); 252 + } 253 + return 0; 254 + } 255 + 256 + static void __exit xillybus_class_exit(void) 257 + { 258 + class_destroy(xillybus_class); 259 + } 260 + 261 + module_init(xillybus_class_init); 262 + module_exit(xillybus_class_exit);
+30
drivers/char/xillybus/xillybus_class.h
···
1 + /* SPDX-License-Identifier: GPL-2.0-only */
2 + /*
3 + * Copyright 2021 Xillybus Ltd, http://www.xillybus.com
4 + *
5 + * Header file for the Xillybus class
6 + */
7 +
8 + #ifndef __XILLYBUS_CLASS_H
9 + #define __XILLYBUS_CLASS_H
10 +
11 + #include <linux/types.h>
12 + #include <linux/device.h>
13 + #include <linux/fs.h>
14 + #include <linux/module.h>
15 +
16 + int xillybus_init_chrdev(struct device *dev,
17 + 			 const struct file_operations *fops,
18 + 			 struct module *owner,
19 + 			 void *private_data,
20 + 			 unsigned char *idt, unsigned int len,
21 + 			 int num_nodes,
22 + 			 const char *prefix, bool enumerate);
23 +
24 + void xillybus_cleanup_chrdev(void *private_data,
25 + 			     struct device *dev);
26 +
27 + int xillybus_find_inode(struct inode *inode,
28 + 			void **private_data, int *index);
29 +
30 + #endif /* __XILLYBUS_CLASS_H */
+21 -159
drivers/char/xillybus/xillybus_core.c
···
21 21 #include <linux/interrupt.h>
22 22 #include <linux/sched.h>
23 23 #include <linux/fs.h>
24 - #include <linux/cdev.h>
25 24 #include <linux/spinlock.h>
26 25 #include <linux/mutex.h>
27 26 #include <linux/crc32.h>
···
29 30 #include <linux/slab.h>
30 31 #include <linux/workqueue.h>
31 32 #include "xillybus.h"
33 + #include "xillybus_class.h"
32 34
33 35 MODULE_DESCRIPTION("Xillybus core functions");
34 36 MODULE_AUTHOR("Eli Billauer, Xillybus Ltd.");
35 - MODULE_VERSION("1.07");
36 37 MODULE_ALIAS("xillybus_core");
37 38 MODULE_LICENSE("GPL v2");
···
57 58
58 59 static const char xillyname[] = "xillybus";
59 60
60 - static struct class *xillybus_class;
61 -
62 - /*
63 - * ep_list_lock is the last lock to be taken; No other lock requests are
64 - * allowed while holding it. It merely protects list_of_endpoints, and not
65 - * the endpoints listed in it.
66 - */
67 -
68 - static LIST_HEAD(list_of_endpoints);
69 - static struct mutex ep_list_lock;
70 61 static struct workqueue_struct *xillybus_wq;
71 62
72 63 /*
···
559 570 	unsigned char *scan;
560 571 	int len;
561 572
562 - 	scan = idt;
563 - 	idt_handle->idt = idt;
564 -
565 - 	scan++; /* Skip version number */
573 + 	scan = idt + 1;
574 + 	idt_handle->names = scan;
566 575
567 576 	while ((scan <= end_of_idt) && *scan) {
568 577 		while ((scan <= end_of_idt) && *scan++)
569 578 			/* Do nothing, just scan thru string */;
570 579 		count++;
571 580 	}
581 +
582 + 	idt_handle->names_len = scan - idt_handle->names;
572 583
573 584 	scan++;
574 585
···
1396 1407
1397 1408 static int xillybus_open(struct inode *inode, struct file *filp)
1398 1409 {
1399 - 	int rc = 0;
1410 + 	int rc;
1400 1411 	unsigned long flags;
1401 - 	int minor = iminor(inode);
1402 - 	int major = imajor(inode);
1403 - 	struct xilly_endpoint *ep_iter, *endpoint = NULL;
1412 + 	struct xilly_endpoint *endpoint;
1404 1413 	struct xilly_channel *channel;
1414 + 	int index;
1405 1415
1406 - 	mutex_lock(&ep_list_lock);
1407 -
1408 - 	list_for_each_entry(ep_iter, &list_of_endpoints, ep_list) {
1409 - 		if ((ep_iter->major == major) &&
1410 - 		    (minor >= ep_iter->lowest_minor) &&
1411 - 		    (minor < (ep_iter->lowest_minor +
1412 - 			      ep_iter->num_channels))) {
1413 - 			endpoint = ep_iter;
1414 - 			break;
1415 - 		}
1416 - 	}
1417 - 	mutex_unlock(&ep_list_lock);
1418 -
1419 - 	if (!endpoint) {
1420 - 		pr_err("xillybus: open() failed to find a device for major=%d and minor=%d\n",
1421 - 		       major, minor);
1422 - 		return -ENODEV;
1423 - 	}
1416 + 	rc = xillybus_find_inode(inode, (void **)&endpoint, &index);
1417 + 	if (rc)
1418 + 		return rc;
1424 1419
1425 1420 	if (endpoint->fatal_error)
1426 1421 		return -EIO;
1427 1422
1428 - 	channel = endpoint->channels[1 + minor - endpoint->lowest_minor];
1423 + 	channel = endpoint->channels[1 + index];
1429 1424 	filp->private_data = channel;
1430 1425
1431 1426 	/*
···
1772 1799 	.poll = xillybus_poll,
1773 1800 };
1774 1801
1775 - static int xillybus_init_chrdev(struct xilly_endpoint *endpoint,
1776 - 				const unsigned char *idt)
1777 - {
1778 - 	int rc;
1779 - 	dev_t dev;
1780 - 	int devnum, i, minor, major;
1781 - 	char devname[48];
1782 - 	struct device *device;
1783 -
1784 - 	rc = alloc_chrdev_region(&dev, 0, /* minor start */
1785 - 				 endpoint->num_channels,
1786 - 				 xillyname);
1787 - 	if (rc) {
1788 - 		dev_warn(endpoint->dev, "Failed to obtain major/minors");
1789 - 		return rc;
1790 - 	}
1791 -
1792 - 	endpoint->major = major = MAJOR(dev);
1793 - 	endpoint->lowest_minor = minor = MINOR(dev);
1794 -
1795 - 	cdev_init(&endpoint->cdev, &xillybus_fops);
1796 - 	endpoint->cdev.owner = endpoint->ephw->owner;
1797 - 	rc = cdev_add(&endpoint->cdev, MKDEV(major, minor),
1798 - 		      endpoint->num_channels);
1799 - 	if (rc) {
1800 - 		dev_warn(endpoint->dev, "Failed to add cdev. Aborting.\n");
1801 - 		goto unregister_chrdev;
1802 - 	}
1803 -
1804 - 	idt++;
1805 -
1806 - 	for (i = minor, devnum = 0;
1807 - 	     devnum < endpoint->num_channels;
1808 - 	     devnum++, i++) {
1809 - 		snprintf(devname, sizeof(devname)-1, "xillybus_%s", idt);
1810 -
1811 - 		devname[sizeof(devname)-1] = 0; /* Should never matter */
1812 -
1813 - 		while (*idt++)
1814 - 			/* Skip to next */;
1815 -
1816 - 		device = device_create(xillybus_class,
1817 - 				       NULL,
1818 - 				       MKDEV(major, i),
1819 - 				       NULL,
1820 - 				       "%s", devname);
1821 -
1822 - 		if (IS_ERR(device)) {
1823 - 			dev_warn(endpoint->dev,
1824 - 				 "Failed to create %s device. Aborting.\n",
1825 - 				 devname);
1826 - 			rc = -ENODEV;
1827 - 			goto unroll_device_create;
1828 - 		}
1829 - 	}
1830 -
1831 - 	dev_info(endpoint->dev, "Created %d device files.\n",
1832 - 		 endpoint->num_channels);
1833 - 	return 0; /* succeed */
1834 -
1835 - unroll_device_create:
1836 - 	devnum--; i--;
1837 - 	for (; devnum >= 0; devnum--, i--)
1838 - 		device_destroy(xillybus_class, MKDEV(major, i));
1839 -
1840 - 	cdev_del(&endpoint->cdev);
1841 - unregister_chrdev:
1842 - 	unregister_chrdev_region(MKDEV(major, minor), endpoint->num_channels);
1843 -
1844 - 	return rc;
1845 - }
1846 -
1847 - static void xillybus_cleanup_chrdev(struct xilly_endpoint *endpoint)
1848 - {
1849 - 	int minor;
1850 -
1851 - 	for (minor = endpoint->lowest_minor;
1852 - 	     minor < (endpoint->lowest_minor + endpoint->num_channels);
1853 - 	     minor++)
1854 - 		device_destroy(xillybus_class, MKDEV(endpoint->major, minor));
1855 - 	cdev_del(&endpoint->cdev);
1856 - 	unregister_chrdev_region(MKDEV(endpoint->major,
1857 - 				       endpoint->lowest_minor),
1858 - 				 endpoint->num_channels);
1859 -
1860 - 	dev_info(endpoint->dev, "Removed %d device files.\n",
1861 - 		 endpoint->num_channels);
1862 - }
1863 -
1864 1802 struct xilly_endpoint *xillybus_init_endpoint(struct pci_dev *pdev,
1865 1803 					      struct device *dev,
1866 1804 					      struct xilly_endpoint_hardware
···
1911 2027 	if (rc)
1912 2028 		goto failed_idt;
1913 2029
1914 - 	/*
1915 - 	 * endpoint is now completely configured. We put it on the list
1916 - 	 * available to open() before registering the char device(s)
1917 - 	 */
2030 + 	rc = xillybus_init_chrdev(dev, &xillybus_fops,
2031 + 				  endpoint->ephw->owner, endpoint,
2032 + 				  idt_handle.names,
2033 + 				  idt_handle.names_len,
2034 + 				  endpoint->num_channels,
2035 + 				  xillyname, false);
1918 2036
1919 - 	mutex_lock(&ep_list_lock);
1920 - 	list_add_tail(&endpoint->ep_list, &list_of_endpoints);
1921 - 	mutex_unlock(&ep_list_lock);
1922 -
1923 - 	rc = xillybus_init_chrdev(endpoint, idt_handle.idt);
1924 2037 	if (rc)
1925 - 		goto failed_chrdevs;
2038 + 		goto failed_idt;
1926 2039
1927 2040 	devres_release_group(dev, bootstrap_resources);
1928 2041
1929 2042 	return 0;
1930 -
1931 - failed_chrdevs:
1932 - 	mutex_lock(&ep_list_lock);
1933 - 	list_del(&endpoint->ep_list);
1934 - 	mutex_unlock(&ep_list_lock);
1935 2043
1936 2044 failed_idt:
1937 2045 	xilly_quiesce(endpoint);
···
1935 2059
1936 2060 void xillybus_endpoint_remove(struct xilly_endpoint *endpoint)
1937 2061 {
1938 - 	xillybus_cleanup_chrdev(endpoint);
1939 -
1940 - 	mutex_lock(&ep_list_lock);
1941 - 	list_del(&endpoint->ep_list);
1942 - 	mutex_unlock(&ep_list_lock);
2062 + 	xillybus_cleanup_chrdev(endpoint, endpoint->dev);
1943 2063
1944 2064 	xilly_quiesce(endpoint);
···
1949 2077
1950 2078 static int __init xillybus_init(void)
1951 2079 {
1952 - 	mutex_init(&ep_list_lock);
1953 -
1954 - 	xillybus_class = class_create(THIS_MODULE, xillyname);
1955 - 	if (IS_ERR(xillybus_class))
1956 - 		return PTR_ERR(xillybus_class);
1957 -
1958 2080 	xillybus_wq = alloc_workqueue(xillyname, 0, 0);
1959 - 	if (!xillybus_wq) {
1960 - 		class_destroy(xillybus_class);
2081 + 	if (!xillybus_wq)
1961 2082 		return -ENOMEM;
1962 - 	}
1963 2083
1964 2084 	return 0;
1965 2085 }
···
1960 2096 {
1961 2097 	/* flush_workqueue() was called for each endpoint released */
1962 2098 	destroy_workqueue(xillybus_wq);
1963 -
1964 - 	class_destroy(xillybus_class);
1965 2099 }
1966 2100
1967 2101 module_init(xillybus_init);
-1
drivers/char/xillybus/xillybus_of.c
···
17 17
18 18 MODULE_DESCRIPTION("Xillybus driver for Open Firmware");
19 19 MODULE_AUTHOR("Eli Billauer, Xillybus Ltd.");
20 - MODULE_VERSION("1.06");
21 20 MODULE_ALIAS("xillybus_of");
22 21 MODULE_LICENSE("GPL v2");
23 22
-1
drivers/char/xillybus/xillybus_pcie.c
···
14 14
15 15 MODULE_DESCRIPTION("Xillybus driver for PCIe");
16 16 MODULE_AUTHOR("Eli Billauer, Xillybus Ltd.");
17 - MODULE_VERSION("1.06");
18 17 MODULE_ALIAS("xillybus_pcie");
19 18 MODULE_LICENSE("GPL v2");
20 19
+2259
drivers/char/xillybus/xillyusb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright 2020 Xillybus Ltd, http://xillybus.com 4 + * 5 + * Driver for the XillyUSB FPGA/host framework. 6 + * 7 + * This driver interfaces with a special IP core in an FPGA, setting up 8 + * a pipe between a hardware FIFO in the programmable logic and a device 9 + * file in the host. The number of such pipes and their attributes are 10 + * set up on the logic. This driver detects these automatically and 11 + * creates the device files accordingly. 12 + */ 13 + 14 + #include <linux/types.h> 15 + #include <linux/slab.h> 16 + #include <linux/list.h> 17 + #include <linux/device.h> 18 + #include <linux/module.h> 19 + #include <asm/byteorder.h> 20 + #include <linux/io.h> 21 + #include <linux/interrupt.h> 22 + #include <linux/sched.h> 23 + #include <linux/fs.h> 24 + #include <linux/spinlock.h> 25 + #include <linux/mutex.h> 26 + #include <linux/workqueue.h> 27 + #include <linux/crc32.h> 28 + #include <linux/poll.h> 29 + #include <linux/delay.h> 30 + #include <linux/usb.h> 31 + 32 + #include "xillybus_class.h" 33 + 34 + MODULE_DESCRIPTION("Driver for XillyUSB FPGA IP Core"); 35 + MODULE_AUTHOR("Eli Billauer, Xillybus Ltd."); 36 + MODULE_ALIAS("xillyusb"); 37 + MODULE_LICENSE("GPL v2"); 38 + 39 + #define XILLY_RX_TIMEOUT (10 * HZ / 1000) 40 + #define XILLY_RESPONSE_TIMEOUT (500 * HZ / 1000) 41 + 42 + #define BUF_SIZE_ORDER 4 43 + #define BUFNUM 8 44 + #define LOG2_IDT_FIFO_SIZE 16 45 + #define LOG2_INITIAL_FIFO_BUF_SIZE 16 46 + 47 + #define MSG_EP_NUM 1 48 + #define IN_EP_NUM 1 49 + 50 + static const char xillyname[] = "xillyusb"; 51 + 52 + static unsigned int fifo_buf_order; 53 + 54 + #define USB_VENDOR_ID_XILINX 0x03fd 55 + #define USB_VENDOR_ID_ALTERA 0x09fb 56 + 57 + #define USB_PRODUCT_ID_XILLYUSB 0xebbe 58 + 59 + static const struct usb_device_id xillyusb_table[] = { 60 + { USB_DEVICE(USB_VENDOR_ID_XILINX, USB_PRODUCT_ID_XILLYUSB) }, 61 + { USB_DEVICE(USB_VENDOR_ID_ALTERA, USB_PRODUCT_ID_XILLYUSB) }, 62 + 
{ } 63 + }; 64 + 65 + MODULE_DEVICE_TABLE(usb, xillyusb_table); 66 + 67 + struct xillyusb_dev; 68 + 69 + struct xillyfifo { 70 + unsigned int bufsize; /* In bytes, always a power of 2 */ 71 + unsigned int bufnum; 72 + unsigned int size; /* Lazy: Equals bufsize * bufnum */ 73 + unsigned int buf_order; 74 + 75 + int fill; /* Number of bytes in the FIFO */ 76 + spinlock_t lock; 77 + wait_queue_head_t waitq; 78 + 79 + unsigned int readpos; 80 + unsigned int readbuf; 81 + unsigned int writepos; 82 + unsigned int writebuf; 83 + char **mem; 84 + }; 85 + 86 + struct xillyusb_channel; 87 + 88 + struct xillyusb_endpoint { 89 + struct xillyusb_dev *xdev; 90 + 91 + struct mutex ep_mutex; /* serialize operations on endpoint */ 92 + 93 + struct list_head buffers; 94 + struct list_head filled_buffers; 95 + spinlock_t buffers_lock; /* protect these two lists */ 96 + 97 + unsigned int order; 98 + unsigned int buffer_size; 99 + 100 + unsigned int fill_mask; 101 + 102 + int outstanding_urbs; 103 + 104 + struct usb_anchor anchor; 105 + 106 + struct xillyfifo fifo; 107 + 108 + struct work_struct workitem; 109 + 110 + bool shutting_down; 111 + bool drained; 112 + bool wake_on_drain; 113 + 114 + u8 ep_num; 115 + }; 116 + 117 + struct xillyusb_channel { 118 + struct xillyusb_dev *xdev; 119 + 120 + struct xillyfifo *in_fifo; 121 + struct xillyusb_endpoint *out_ep; 122 + struct mutex lock; /* protect @out_ep, @in_fifo, bit fields below */ 123 + 124 + struct mutex in_mutex; /* serialize fops on FPGA to host stream */ 125 + struct mutex out_mutex; /* serialize fops on host to FPGA stream */ 126 + wait_queue_head_t flushq; 127 + 128 + int chan_idx; 129 + 130 + u32 in_consumed_bytes; 131 + u32 in_current_checkpoint; 132 + u32 out_bytes; 133 + 134 + unsigned int in_log2_element_size; 135 + unsigned int out_log2_element_size; 136 + unsigned int in_log2_fifo_size; 137 + unsigned int out_log2_fifo_size; 138 + 139 + unsigned int read_data_ok; /* EOF not arrived (yet) */ 140 + unsigned int poll_used; 
141 + unsigned int flushing; 142 + unsigned int flushed; 143 + unsigned int canceled; 144 + 145 + /* Bit fields protected by @lock except for initialization */ 146 + unsigned readable:1; 147 + unsigned writable:1; 148 + unsigned open_for_read:1; 149 + unsigned open_for_write:1; 150 + unsigned in_synchronous:1; 151 + unsigned out_synchronous:1; 152 + unsigned in_seekable:1; 153 + unsigned out_seekable:1; 154 + }; 155 + 156 + struct xillybuffer { 157 + struct list_head entry; 158 + struct xillyusb_endpoint *ep; 159 + void *buf; 160 + unsigned int len; 161 + }; 162 + 163 + struct xillyusb_dev { 164 + struct xillyusb_channel *channels; 165 + 166 + struct usb_device *udev; 167 + struct device *dev; /* For dev_err() and such */ 168 + struct kref kref; 169 + struct workqueue_struct *workq; 170 + 171 + int error; 172 + spinlock_t error_lock; /* protect @error */ 173 + struct work_struct wakeup_workitem; 174 + 175 + int num_channels; 176 + 177 + struct xillyusb_endpoint *msg_ep; 178 + struct xillyusb_endpoint *in_ep; 179 + 180 + struct mutex msg_mutex; /* serialize opcode transmission */ 181 + int in_bytes_left; 182 + int leftover_chan_num; 183 + unsigned int in_counter; 184 + struct mutex process_in_mutex; /* synchronize wakeup_all() */ 185 + }; 186 + 187 + /* FPGA to host opcodes */ 188 + enum { 189 + OPCODE_DATA = 0, 190 + OPCODE_QUIESCE_ACK = 1, 191 + OPCODE_EOF = 2, 192 + OPCODE_REACHED_CHECKPOINT = 3, 193 + OPCODE_CANCELED_CHECKPOINT = 4, 194 + }; 195 + 196 + /* Host to FPGA opcodes */ 197 + enum { 198 + OPCODE_QUIESCE = 0, 199 + OPCODE_REQ_IDT = 1, 200 + OPCODE_SET_CHECKPOINT = 2, 201 + OPCODE_CLOSE = 3, 202 + OPCODE_SET_PUSH = 4, 203 + OPCODE_UPDATE_PUSH = 5, 204 + OPCODE_CANCEL_CHECKPOINT = 6, 205 + OPCODE_SET_ADDR = 7, 206 + }; 207 + 208 + /* 209 + * fifo_write() and fifo_read() are NOT reentrant (i.e. 
concurrent multiple 210 + * calls to each on the same FIFO is not allowed) however it's OK to have 211 + * threads calling each of the two functions once on the same FIFO, and 212 + * at the same time. 213 + */ 214 + 215 + static int fifo_write(struct xillyfifo *fifo, 216 + const void *data, unsigned int len, 217 + int (*copier)(void *, const void *, int)) 218 + { 219 + unsigned int done = 0; 220 + unsigned int todo = len; 221 + unsigned int nmax; 222 + unsigned int writepos = fifo->writepos; 223 + unsigned int writebuf = fifo->writebuf; 224 + unsigned long flags; 225 + int rc; 226 + 227 + nmax = fifo->size - READ_ONCE(fifo->fill); 228 + 229 + while (1) { 230 + unsigned int nrail = fifo->bufsize - writepos; 231 + unsigned int n = min(todo, nmax); 232 + 233 + if (n == 0) { 234 + spin_lock_irqsave(&fifo->lock, flags); 235 + fifo->fill += done; 236 + spin_unlock_irqrestore(&fifo->lock, flags); 237 + 238 + fifo->writepos = writepos; 239 + fifo->writebuf = writebuf; 240 + 241 + return done; 242 + } 243 + 244 + if (n > nrail) 245 + n = nrail; 246 + 247 + rc = (*copier)(fifo->mem[writebuf] + writepos, data + done, n); 248 + 249 + if (rc) 250 + return rc; 251 + 252 + done += n; 253 + todo -= n; 254 + 255 + writepos += n; 256 + nmax -= n; 257 + 258 + if (writepos == fifo->bufsize) { 259 + writepos = 0; 260 + writebuf++; 261 + 262 + if (writebuf == fifo->bufnum) 263 + writebuf = 0; 264 + } 265 + } 266 + } 267 + 268 + static int fifo_read(struct xillyfifo *fifo, 269 + void *data, unsigned int len, 270 + int (*copier)(void *, const void *, int)) 271 + { 272 + unsigned int done = 0; 273 + unsigned int todo = len; 274 + unsigned int fill; 275 + unsigned int readpos = fifo->readpos; 276 + unsigned int readbuf = fifo->readbuf; 277 + unsigned long flags; 278 + int rc; 279 + 280 + /* 281 + * The spinlock here is necessary, because otherwise fifo->fill 282 + * could have been increased by fifo_write() after writing data 283 + * to the buffer, but this data would potentially not have 
been 284 + * visible on this thread at the time the updated fifo->fill was. 285 + * That could lead to reading invalid data. 286 + */ 287 + 288 + spin_lock_irqsave(&fifo->lock, flags); 289 + fill = fifo->fill; 290 + spin_unlock_irqrestore(&fifo->lock, flags); 291 + 292 + while (1) { 293 + unsigned int nrail = fifo->bufsize - readpos; 294 + unsigned int n = min(todo, fill); 295 + 296 + if (n == 0) { 297 + spin_lock_irqsave(&fifo->lock, flags); 298 + fifo->fill -= done; 299 + spin_unlock_irqrestore(&fifo->lock, flags); 300 + 301 + fifo->readpos = readpos; 302 + fifo->readbuf = readbuf; 303 + 304 + return done; 305 + } 306 + 307 + if (n > nrail) 308 + n = nrail; 309 + 310 + rc = (*copier)(data + done, fifo->mem[readbuf] + readpos, n); 311 + 312 + if (rc) 313 + return rc; 314 + 315 + done += n; 316 + todo -= n; 317 + 318 + readpos += n; 319 + fill -= n; 320 + 321 + if (readpos == fifo->bufsize) { 322 + readpos = 0; 323 + readbuf++; 324 + 325 + if (readbuf == fifo->bufnum) 326 + readbuf = 0; 327 + } 328 + } 329 + } 330 + 331 + /* 332 + * These three wrapper functions are used as the @copier argument to 333 + * fifo_write() and fifo_read(), so that they can work directly with 334 + * user memory as well. 
335 + */ 336 + 337 + static int xilly_copy_from_user(void *dst, const void *src, int n) 338 + { 339 + if (copy_from_user(dst, (const void __user *)src, n)) 340 + return -EFAULT; 341 + 342 + return 0; 343 + } 344 + 345 + static int xilly_copy_to_user(void *dst, const void *src, int n) 346 + { 347 + if (copy_to_user((void __user *)dst, src, n)) 348 + return -EFAULT; 349 + 350 + return 0; 351 + } 352 + 353 + static int xilly_memcpy(void *dst, const void *src, int n) 354 + { 355 + memcpy(dst, src, n); 356 + 357 + return 0; 358 + } 359 + 360 + static int fifo_init(struct xillyfifo *fifo, 361 + unsigned int log2_size) 362 + { 363 + unsigned int log2_bufnum; 364 + unsigned int buf_order; 365 + int i; 366 + 367 + unsigned int log2_fifo_buf_size; 368 + 369 + retry: 370 + log2_fifo_buf_size = fifo_buf_order + PAGE_SHIFT; 371 + 372 + if (log2_size > log2_fifo_buf_size) { 373 + log2_bufnum = log2_size - log2_fifo_buf_size; 374 + buf_order = fifo_buf_order; 375 + fifo->bufsize = 1 << log2_fifo_buf_size; 376 + } else { 377 + log2_bufnum = 0; 378 + buf_order = (log2_size > PAGE_SHIFT) ? 
379 + log2_size - PAGE_SHIFT : 0; 380 + fifo->bufsize = 1 << log2_size; 381 + } 382 + 383 + fifo->bufnum = 1 << log2_bufnum; 384 + fifo->size = fifo->bufnum * fifo->bufsize; 385 + fifo->buf_order = buf_order; 386 + 387 + fifo->mem = kmalloc_array(fifo->bufnum, sizeof(void *), GFP_KERNEL); 388 + 389 + if (!fifo->mem) 390 + return -ENOMEM; 391 + 392 + for (i = 0; i < fifo->bufnum; i++) { 393 + fifo->mem[i] = (void *) 394 + __get_free_pages(GFP_KERNEL, buf_order); 395 + 396 + if (!fifo->mem[i]) 397 + goto memfail; 398 + } 399 + 400 + fifo->fill = 0; 401 + fifo->readpos = 0; 402 + fifo->readbuf = 0; 403 + fifo->writepos = 0; 404 + fifo->writebuf = 0; 405 + spin_lock_init(&fifo->lock); 406 + init_waitqueue_head(&fifo->waitq); 407 + return 0; 408 + 409 + memfail: 410 + for (i--; i >= 0; i--) 411 + free_pages((unsigned long)fifo->mem[i], buf_order); 412 + 413 + kfree(fifo->mem); 414 + fifo->mem = NULL; 415 + 416 + if (fifo_buf_order) { 417 + fifo_buf_order--; 418 + goto retry; 419 + } else { 420 + return -ENOMEM; 421 + } 422 + } 423 + 424 + static void fifo_mem_release(struct xillyfifo *fifo) 425 + { 426 + int i; 427 + 428 + if (!fifo->mem) 429 + return; 430 + 431 + for (i = 0; i < fifo->bufnum; i++) 432 + free_pages((unsigned long)fifo->mem[i], fifo->buf_order); 433 + 434 + kfree(fifo->mem); 435 + } 436 + 437 + /* 438 + * When endpoint_quiesce() returns, the endpoint has no URBs submitted, 439 + * won't accept any new URB submissions, and its related work item doesn't 440 + * and won't run anymore. 441 + */ 442 + 443 + static void endpoint_quiesce(struct xillyusb_endpoint *ep) 444 + { 445 + mutex_lock(&ep->ep_mutex); 446 + ep->shutting_down = true; 447 + mutex_unlock(&ep->ep_mutex); 448 + 449 + usb_kill_anchored_urbs(&ep->anchor); 450 + cancel_work_sync(&ep->workitem); 451 + } 452 + 453 + /* 454 + * Note that endpoint_dealloc() also frees fifo memory (if allocated), even 455 + * though endpoint_alloc doesn't allocate that memory. 
456 + */ 457 + 458 + static void endpoint_dealloc(struct xillyusb_endpoint *ep) 459 + { 460 + struct list_head *this, *next; 461 + 462 + fifo_mem_release(&ep->fifo); 463 + 464 + /* Join @filled_buffers with @buffers to free these entries too */ 465 + list_splice(&ep->filled_buffers, &ep->buffers); 466 + 467 + list_for_each_safe(this, next, &ep->buffers) { 468 + struct xillybuffer *xb = 469 + list_entry(this, struct xillybuffer, entry); 470 + 471 + free_pages((unsigned long)xb->buf, ep->order); 472 + kfree(xb); 473 + } 474 + 475 + kfree(ep); 476 + } 477 + 478 + static struct xillyusb_endpoint 479 + *endpoint_alloc(struct xillyusb_dev *xdev, 480 + u8 ep_num, 481 + void (*work)(struct work_struct *), 482 + unsigned int order, 483 + int bufnum) 484 + { 485 + int i; 486 + 487 + struct xillyusb_endpoint *ep; 488 + 489 + ep = kzalloc(sizeof(*ep), GFP_KERNEL); 490 + 491 + if (!ep) 492 + return NULL; 493 + 494 + INIT_LIST_HEAD(&ep->buffers); 495 + INIT_LIST_HEAD(&ep->filled_buffers); 496 + 497 + spin_lock_init(&ep->buffers_lock); 498 + mutex_init(&ep->ep_mutex); 499 + 500 + init_usb_anchor(&ep->anchor); 501 + INIT_WORK(&ep->workitem, work); 502 + 503 + ep->order = order; 504 + ep->buffer_size = 1 << (PAGE_SHIFT + order); 505 + ep->outstanding_urbs = 0; 506 + ep->drained = true; 507 + ep->wake_on_drain = false; 508 + ep->xdev = xdev; 509 + ep->ep_num = ep_num; 510 + ep->shutting_down = false; 511 + 512 + for (i = 0; i < bufnum; i++) { 513 + struct xillybuffer *xb; 514 + unsigned long addr; 515 + 516 + xb = kzalloc(sizeof(*xb), GFP_KERNEL); 517 + 518 + if (!xb) { 519 + endpoint_dealloc(ep); 520 + return NULL; 521 + } 522 + 523 + addr = __get_free_pages(GFP_KERNEL, order); 524 + 525 + if (!addr) { 526 + kfree(xb); 527 + endpoint_dealloc(ep); 528 + return NULL; 529 + } 530 + 531 + xb->buf = (void *)addr; 532 + xb->ep = ep; 533 + list_add_tail(&xb->entry, &ep->buffers); 534 + } 535 + return ep; 536 + } 537 + 538 + static void cleanup_dev(struct kref *kref) 539 + { 540 + struct 
xillyusb_dev *xdev = 541 + container_of(kref, struct xillyusb_dev, kref); 542 + 543 + if (xdev->in_ep) 544 + endpoint_dealloc(xdev->in_ep); 545 + 546 + if (xdev->msg_ep) 547 + endpoint_dealloc(xdev->msg_ep); 548 + 549 + if (xdev->workq) 550 + destroy_workqueue(xdev->workq); 551 + 552 + kfree(xdev->channels); /* Argument may be NULL, and that's fine */ 553 + kfree(xdev); 554 + } 555 + 556 + /* 557 + * @process_in_mutex is taken to ensure that bulk_in_work() won't call 558 + * process_bulk_in() after wakeup_all()'s execution: The latter zeroes all 559 + * @read_data_ok entries, which will make process_bulk_in() report false 560 + * errors if executed. The mechanism relies on that xdev->error is assigned 561 + * a non-zero value by report_io_error() prior to queueing wakeup_all(), 562 + * which prevents bulk_in_work() from calling process_bulk_in(). 563 + * 564 + * The fact that wakeup_all() and bulk_in_work() are queued on the same 565 + * workqueue makes their concurrent execution very unlikely, however the 566 + * kernel's API doesn't seem to ensure this strictly. 567 + */ 568 + 569 + static void wakeup_all(struct work_struct *work) 570 + { 571 + int i; 572 + struct xillyusb_dev *xdev = container_of(work, struct xillyusb_dev, 573 + wakeup_workitem); 574 + 575 + mutex_lock(&xdev->process_in_mutex); 576 + 577 + for (i = 0; i < xdev->num_channels; i++) { 578 + struct xillyusb_channel *chan = &xdev->channels[i]; 579 + 580 + mutex_lock(&chan->lock); 581 + 582 + if (chan->in_fifo) { 583 + /* 584 + * Fake an EOF: Even if such arrives, it won't be 585 + * processed. 
586 + */ 587 + chan->read_data_ok = 0; 588 + wake_up_interruptible(&chan->in_fifo->waitq); 589 + } 590 + 591 + if (chan->out_ep) 592 + wake_up_interruptible(&chan->out_ep->fifo.waitq); 593 + 594 + mutex_unlock(&chan->lock); 595 + 596 + wake_up_interruptible(&chan->flushq); 597 + } 598 + 599 + mutex_unlock(&xdev->process_in_mutex); 600 + 601 + wake_up_interruptible(&xdev->msg_ep->fifo.waitq); 602 + 603 + kref_put(&xdev->kref, cleanup_dev); 604 + } 605 + 606 + static void report_io_error(struct xillyusb_dev *xdev, 607 + int errcode) 608 + { 609 + unsigned long flags; 610 + bool do_once = false; 611 + 612 + spin_lock_irqsave(&xdev->error_lock, flags); 613 + if (!xdev->error) { 614 + xdev->error = errcode; 615 + do_once = true; 616 + } 617 + spin_unlock_irqrestore(&xdev->error_lock, flags); 618 + 619 + if (do_once) { 620 + kref_get(&xdev->kref); /* xdev is used by work item */ 621 + queue_work(xdev->workq, &xdev->wakeup_workitem); 622 + } 623 + } 624 + 625 + /* 626 + * safely_assign_in_fifo() changes the value of chan->in_fifo and ensures 627 + * the previous pointer is never used after its return. 
 */

static void safely_assign_in_fifo(struct xillyusb_channel *chan,
                                  struct xillyfifo *fifo)
{
        mutex_lock(&chan->lock);
        chan->in_fifo = fifo;
        mutex_unlock(&chan->lock);

        flush_work(&chan->xdev->in_ep->workitem);
}

static void bulk_in_completer(struct urb *urb)
{
        struct xillybuffer *xb = urb->context;
        struct xillyusb_endpoint *ep = xb->ep;
        unsigned long flags;

        if (urb->status) {
                if (!(urb->status == -ENOENT ||
                      urb->status == -ECONNRESET ||
                      urb->status == -ESHUTDOWN))
                        report_io_error(ep->xdev, -EIO);

                spin_lock_irqsave(&ep->buffers_lock, flags);
                list_add_tail(&xb->entry, &ep->buffers);
                ep->outstanding_urbs--;
                spin_unlock_irqrestore(&ep->buffers_lock, flags);

                return;
        }

        xb->len = urb->actual_length;

        spin_lock_irqsave(&ep->buffers_lock, flags);
        list_add_tail(&xb->entry, &ep->filled_buffers);
        spin_unlock_irqrestore(&ep->buffers_lock, flags);

        if (!ep->shutting_down)
                queue_work(ep->xdev->workq, &ep->workitem);
}

static void bulk_out_completer(struct urb *urb)
{
        struct xillybuffer *xb = urb->context;
        struct xillyusb_endpoint *ep = xb->ep;
        unsigned long flags;

        if (urb->status &&
            (!(urb->status == -ENOENT ||
               urb->status == -ECONNRESET ||
               urb->status == -ESHUTDOWN)))
                report_io_error(ep->xdev, -EIO);

        spin_lock_irqsave(&ep->buffers_lock, flags);
        list_add_tail(&xb->entry, &ep->buffers);
        ep->outstanding_urbs--;
        spin_unlock_irqrestore(&ep->buffers_lock, flags);

        if (!ep->shutting_down)
                queue_work(ep->xdev->workq, &ep->workitem);
}

static void try_queue_bulk_in(struct xillyusb_endpoint *ep)
{
        struct xillyusb_dev *xdev = ep->xdev;
        struct xillybuffer *xb;
        struct urb *urb;

        int rc;
        unsigned long flags;
        unsigned int bufsize = ep->buffer_size;

        mutex_lock(&ep->ep_mutex);

        if (ep->shutting_down || xdev->error)
                goto done;

        while (1) {
                spin_lock_irqsave(&ep->buffers_lock, flags);

                if (list_empty(&ep->buffers)) {
                        spin_unlock_irqrestore(&ep->buffers_lock, flags);
                        goto done;
                }

                xb = list_first_entry(&ep->buffers, struct xillybuffer, entry);
                list_del(&xb->entry);
                ep->outstanding_urbs++;

                spin_unlock_irqrestore(&ep->buffers_lock, flags);

                urb = usb_alloc_urb(0, GFP_KERNEL);
                if (!urb) {
                        report_io_error(xdev, -ENOMEM);
                        goto relist;
                }

                usb_fill_bulk_urb(urb, xdev->udev,
                                  usb_rcvbulkpipe(xdev->udev, ep->ep_num),
                                  xb->buf, bufsize, bulk_in_completer, xb);

                usb_anchor_urb(urb, &ep->anchor);

                rc = usb_submit_urb(urb, GFP_KERNEL);

                if (rc) {
                        report_io_error(xdev, (rc == -ENOMEM) ? -ENOMEM :
                                        -EIO);
                        goto unanchor;
                }

                usb_free_urb(urb); /* This just decrements reference count */
        }

unanchor:
        usb_unanchor_urb(urb);
        usb_free_urb(urb);

relist:
        spin_lock_irqsave(&ep->buffers_lock, flags);
        list_add_tail(&xb->entry, &ep->buffers);
        ep->outstanding_urbs--;
        spin_unlock_irqrestore(&ep->buffers_lock, flags);

done:
        mutex_unlock(&ep->ep_mutex);
}

static void try_queue_bulk_out(struct xillyusb_endpoint *ep)
{
        struct xillyfifo *fifo = &ep->fifo;
        struct xillyusb_dev *xdev = ep->xdev;
        struct xillybuffer *xb;
        struct urb *urb;

        int rc;
        unsigned int fill;
        unsigned long flags;
        bool do_wake = false;

        mutex_lock(&ep->ep_mutex);

        if (ep->shutting_down || xdev->error)
                goto done;

        fill = READ_ONCE(fifo->fill) & ep->fill_mask;

        while (1) {
                int count;
                unsigned int max_read;

                spin_lock_irqsave(&ep->buffers_lock, flags);

                /*
                 * Race conditions might have the FIFO filled while the
                 * endpoint is marked as drained here. That doesn't matter,
                 * because the sole purpose of @drained is to ensure that
                 * certain data has been sent on the USB channel before
                 * shutting it down. Hence knowing that the FIFO appears
                 * to be empty with no outstanding URBs at some moment
                 * is good enough.
                 */

                if (!fill) {
                        ep->drained = !ep->outstanding_urbs;
                        if (ep->drained && ep->wake_on_drain)
                                do_wake = true;

                        spin_unlock_irqrestore(&ep->buffers_lock, flags);
                        goto done;
                }

                ep->drained = false;

                if ((fill < ep->buffer_size && ep->outstanding_urbs) ||
                    list_empty(&ep->buffers)) {
                        spin_unlock_irqrestore(&ep->buffers_lock, flags);
                        goto done;
                }

                xb = list_first_entry(&ep->buffers, struct xillybuffer, entry);
                list_del(&xb->entry);
                ep->outstanding_urbs++;

                spin_unlock_irqrestore(&ep->buffers_lock, flags);

                max_read = min(fill, ep->buffer_size);

                count = fifo_read(&ep->fifo, xb->buf, max_read, xilly_memcpy);

                /*
                 * xilly_memcpy always returns 0 => fifo_read can't fail =>
                 * count > 0
                 */

                urb = usb_alloc_urb(0, GFP_KERNEL);
                if (!urb) {
                        report_io_error(xdev, -ENOMEM);
                        goto relist;
                }

                usb_fill_bulk_urb(urb, xdev->udev,
                                  usb_sndbulkpipe(xdev->udev, ep->ep_num),
                                  xb->buf, count, bulk_out_completer, xb);

                usb_anchor_urb(urb, &ep->anchor);

                rc = usb_submit_urb(urb, GFP_KERNEL);

                if (rc) {
                        report_io_error(xdev, (rc == -ENOMEM) ? -ENOMEM :
                                        -EIO);
                        goto unanchor;
                }

                usb_free_urb(urb); /* This just decrements reference count */

                fill -= count;
                do_wake = true;
        }

unanchor:
        usb_unanchor_urb(urb);
        usb_free_urb(urb);

relist:
        spin_lock_irqsave(&ep->buffers_lock, flags);
        list_add_tail(&xb->entry, &ep->buffers);
        ep->outstanding_urbs--;
        spin_unlock_irqrestore(&ep->buffers_lock, flags);

done:
        mutex_unlock(&ep->ep_mutex);

        if (do_wake)
                wake_up_interruptible(&fifo->waitq);
}

static void bulk_out_work(struct work_struct *work)
{
        struct xillyusb_endpoint *ep = container_of(work,
                                                    struct xillyusb_endpoint,
                                                    workitem);
        try_queue_bulk_out(ep);
}

static int process_in_opcode(struct xillyusb_dev *xdev,
                             int opcode,
                             int chan_num)
{
        struct xillyusb_channel *chan;
        struct device *dev = xdev->dev;
        int chan_idx = chan_num >> 1;

        if (chan_idx >= xdev->num_channels) {
                dev_err(dev, "Received illegal channel ID %d from FPGA\n",
                        chan_num);
                return -EIO;
        }

        chan = &xdev->channels[chan_idx];

        switch (opcode) {
        case OPCODE_EOF:
                if (!chan->read_data_ok) {
                        dev_err(dev, "Received unexpected EOF for channel %d\n",
                                chan_num);
                        return -EIO;
                }

                /*
                 * A write memory barrier ensures that the FIFO's fill level
                 * is visible before read_data_ok turns zero, so the data in
                 * the FIFO isn't missed by the consumer.
                 */
                smp_wmb();
                WRITE_ONCE(chan->read_data_ok, 0);
                wake_up_interruptible(&chan->in_fifo->waitq);
                break;

        case OPCODE_REACHED_CHECKPOINT:
                chan->flushing = 0;
                wake_up_interruptible(&chan->flushq);
                break;

        case OPCODE_CANCELED_CHECKPOINT:
                chan->canceled = 1;
                wake_up_interruptible(&chan->flushq);
                break;

        default:
                dev_err(dev, "Received illegal opcode %d from FPGA\n",
                        opcode);
                return -EIO;
        }

        return 0;
}

static int process_bulk_in(struct xillybuffer *xb)
{
        struct xillyusb_endpoint *ep = xb->ep;
        struct xillyusb_dev *xdev = ep->xdev;
        struct device *dev = xdev->dev;
        int dws = xb->len >> 2;
        __le32 *p = xb->buf;
        u32 ctrlword;
        struct xillyusb_channel *chan;
        struct xillyfifo *fifo;
        int chan_num = 0, opcode;
        int chan_idx;
        int bytes, count, dwconsume;
        int in_bytes_left = 0;
        int rc;

        if ((dws << 2) != xb->len) {
                dev_err(dev, "Received BULK IN transfer with %d bytes, not a multiple of 4\n",
                        xb->len);
                return -EIO;
        }

        if (xdev->in_bytes_left) {
                bytes = min(xdev->in_bytes_left, dws << 2);
                in_bytes_left = xdev->in_bytes_left - bytes;
                chan_num = xdev->leftover_chan_num;
                goto resume_leftovers;
        }

        while (dws) {
                ctrlword = le32_to_cpu(*p++);
                dws--;

                chan_num = ctrlword & 0xfff;
                count = (ctrlword >> 12) & 0x3ff;
                opcode = (ctrlword >> 24) & 0xf;

                if (opcode != OPCODE_DATA) {
                        unsigned int in_counter = xdev->in_counter++ & 0x3ff;

                        if (count != in_counter) {
                                dev_err(dev, "Expected opcode counter %d, got %d\n",
                                        in_counter, count);
                                return -EIO;
                        }

                        rc = process_in_opcode(xdev, opcode, chan_num);

                        if (rc)
                                return rc;

                        continue;
                }

                bytes = min(count + 1, dws << 2);
                in_bytes_left = count + 1 - bytes;

resume_leftovers:
                chan_idx = chan_num >> 1;

                if (!(chan_num & 1) || chan_idx >= xdev->num_channels ||
                    !xdev->channels[chan_idx].read_data_ok) {
                        dev_err(dev, "Received illegal channel ID %d from FPGA\n",
                                chan_num);
                        return -EIO;
                }
                chan = &xdev->channels[chan_idx];

                fifo = chan->in_fifo;

                if (unlikely(!fifo))
                        return -EIO; /* We got really unexpected data */

                if (bytes != fifo_write(fifo, p, bytes, xilly_memcpy)) {
                        dev_err(dev, "Misbehaving FPGA overflowed an upstream FIFO!\n");
                        return -EIO;
                }

                wake_up_interruptible(&fifo->waitq);

                dwconsume = (bytes + 3) >> 2;
                dws -= dwconsume;
                p += dwconsume;
        }

        xdev->in_bytes_left = in_bytes_left;
        xdev->leftover_chan_num = chan_num;
        return 0;
}

static void bulk_in_work(struct work_struct *work)
{
        struct xillyusb_endpoint *ep =
                container_of(work, struct xillyusb_endpoint, workitem);
        struct xillyusb_dev *xdev = ep->xdev;
        unsigned long flags;
        struct xillybuffer *xb;
        bool consumed = false;
        int rc = 0;

        mutex_lock(&xdev->process_in_mutex);

        spin_lock_irqsave(&ep->buffers_lock, flags);

        while (1) {
                if (rc || list_empty(&ep->filled_buffers)) {
                        spin_unlock_irqrestore(&ep->buffers_lock, flags);
                        mutex_unlock(&xdev->process_in_mutex);

                        if (rc)
                                report_io_error(xdev, rc);
                        else if (consumed)
                                try_queue_bulk_in(ep);

                        return;
                }

                xb = list_first_entry(&ep->filled_buffers, struct xillybuffer,
                                      entry);
                list_del(&xb->entry);

                spin_unlock_irqrestore(&ep->buffers_lock, flags);

                consumed = true;

                if (!xdev->error)
                        rc = process_bulk_in(xb);

                spin_lock_irqsave(&ep->buffers_lock, flags);
                list_add_tail(&xb->entry, &ep->buffers);
                ep->outstanding_urbs--;
        }
}

static int xillyusb_send_opcode(struct xillyusb_dev *xdev,
                                int chan_num, char opcode, u32 data)
{
        struct xillyusb_endpoint *ep = xdev->msg_ep;
        struct xillyfifo *fifo = &ep->fifo;
        __le32 msg[2];

        int rc = 0;

        msg[0] = cpu_to_le32((chan_num & 0xfff) |
                             ((opcode & 0xf) << 24));
        msg[1] = cpu_to_le32(data);

        mutex_lock(&xdev->msg_mutex);

        /*
         * The wait queue is woken with the interruptible variant, so the
         * wait function matches, however returning because of an interrupt
         * will mess things up considerably, in particular when the caller is
         * the release method. And the xdev->error part prevents being stuck
         * forever in the event of a bizarre hardware bug: Pull the USB plug.
         */

        while (wait_event_interruptible(fifo->waitq,
                                        fifo->fill <= (fifo->size - 8) ||
                                        xdev->error))
                ; /* Empty loop */

        if (xdev->error) {
                rc = xdev->error;
                goto unlock_done;
        }

        fifo_write(fifo, (void *)msg, 8, xilly_memcpy);

        try_queue_bulk_out(ep);

unlock_done:
        mutex_unlock(&xdev->msg_mutex);

        return rc;
}

/*
 * Note that flush_downstream() merely waits for the data to arrive to
 * the application logic at the FPGA -- unlike PCIe Xillybus' counterpart,
 * it does nothing to make it happen (and neither is it necessary).
 *
 * This function is not reentrant for the same @chan, but this is covered
 * by the fact that for any given @chan, it's called either by the open,
 * write, llseek and flush fops methods, which can't run in parallel (and the
 * write + flush and llseek method handlers are protected with out_mutex).
 *
 * chan->flushed is there to avoid multiple flushes at the same position,
 * in particular as a result of programs that close the file descriptor
 * e.g. after a dup2() for redirection.
 */

static int flush_downstream(struct xillyusb_channel *chan,
                            long timeout,
                            bool interruptible)
{
        struct xillyusb_dev *xdev = chan->xdev;
        int chan_num = chan->chan_idx << 1;
        long deadline, left_to_sleep;
        int rc;

        if (chan->flushed)
                return 0;

        deadline = jiffies + 1 + timeout;

        if (chan->flushing) {
                long cancel_deadline = jiffies + 1 + XILLY_RESPONSE_TIMEOUT;

                chan->canceled = 0;
                rc = xillyusb_send_opcode(xdev, chan_num,
                                          OPCODE_CANCEL_CHECKPOINT, 0);

                if (rc)
                        return rc; /* Only real error, never -EINTR */

                /* Ignoring interrupts. Cancellation must be handled */
                while (!chan->canceled) {
                        left_to_sleep = cancel_deadline - ((long)jiffies);

                        if (left_to_sleep <= 0) {
                                report_io_error(xdev, -EIO);
                                return -EIO;
                        }

                        rc = wait_event_interruptible_timeout(chan->flushq,
                                                              chan->canceled ||
                                                              xdev->error,
                                                              left_to_sleep);

                        if (xdev->error)
                                return xdev->error;
                }
        }

        chan->flushing = 1;

        /*
         * The checkpoint is given in terms of data elements, not bytes. As
         * a result, if less than an element's worth of data is stored in the
         * FIFO, it's not flushed, including the flush before closing, which
         * means that such data is lost. This is consistent with PCIe Xillybus.
         */

        rc = xillyusb_send_opcode(xdev, chan_num,
                                  OPCODE_SET_CHECKPOINT,
                                  chan->out_bytes >>
                                  chan->out_log2_element_size);

        if (rc)
                return rc; /* Only real error, never -EINTR */

        if (!timeout) {
                while (chan->flushing) {
                        rc = wait_event_interruptible(chan->flushq,
                                                      !chan->flushing ||
                                                      xdev->error);
                        if (xdev->error)
                                return xdev->error;

                        if (interruptible && rc)
                                return -EINTR;
                }

                goto done;
        }

        while (chan->flushing) {
                left_to_sleep = deadline - ((long)jiffies);

                if (left_to_sleep <= 0)
                        return -ETIMEDOUT;

                rc = wait_event_interruptible_timeout(chan->flushq,
                                                      !chan->flushing ||
                                                      xdev->error,
                                                      left_to_sleep);

                if (xdev->error)
                        return xdev->error;

                if (interruptible && rc < 0)
                        return -EINTR;
        }

done:
        chan->flushed = 1;
        return 0;
}

/* request_read_anything(): Ask the FPGA for any little amount of data */
static int request_read_anything(struct xillyusb_channel *chan,
                                 char opcode)
{
        struct xillyusb_dev *xdev = chan->xdev;
        unsigned int sh = chan->in_log2_element_size;
        int chan_num = (chan->chan_idx << 1) | 1;
        u32 mercy = chan->in_consumed_bytes + (2 << sh) - 1;

        return xillyusb_send_opcode(xdev, chan_num, opcode, mercy >> sh);
}

static int xillyusb_open(struct inode *inode, struct file *filp)
{
        struct xillyusb_dev *xdev;
        struct xillyusb_channel *chan;
        struct xillyfifo *in_fifo = NULL;
        struct xillyusb_endpoint *out_ep = NULL;
        int rc;
        int index;

        rc = xillybus_find_inode(inode, (void **)&xdev, &index);
        if (rc)
                return rc;

        chan = &xdev->channels[index];
        filp->private_data = chan;

        mutex_lock(&chan->lock);

        rc = -ENODEV;

        if (xdev->error)
                goto unmutex_fail;

        if (((filp->f_mode & FMODE_READ) && !chan->readable) ||
            ((filp->f_mode & FMODE_WRITE) && !chan->writable))
                goto unmutex_fail;

        if ((filp->f_flags & O_NONBLOCK) && (filp->f_mode & FMODE_READ) &&
            chan->in_synchronous) {
                dev_err(xdev->dev,
                        "open() failed: O_NONBLOCK not allowed for read on this device\n");
                goto unmutex_fail;
        }

        if ((filp->f_flags & O_NONBLOCK) && (filp->f_mode & FMODE_WRITE) &&
            chan->out_synchronous) {
                dev_err(xdev->dev,
                        "open() failed: O_NONBLOCK not allowed for write on this device\n");
                goto unmutex_fail;
        }

        rc = -EBUSY;

        if (((filp->f_mode & FMODE_READ) && chan->open_for_read) ||
            ((filp->f_mode & FMODE_WRITE) && chan->open_for_write))
                goto unmutex_fail;

        kref_get(&xdev->kref);

        if (filp->f_mode & FMODE_READ)
                chan->open_for_read = 1;

        if (filp->f_mode & FMODE_WRITE)
                chan->open_for_write = 1;

        mutex_unlock(&chan->lock);

        if (filp->f_mode & FMODE_WRITE) {
                out_ep = endpoint_alloc(xdev,
                                        (chan->chan_idx + 2) | USB_DIR_OUT,
                                        bulk_out_work, BUF_SIZE_ORDER, BUFNUM);

                if (!out_ep) {
                        rc = -ENOMEM;
                        goto unopen;
                }

                rc = fifo_init(&out_ep->fifo, chan->out_log2_fifo_size);

                if (rc)
                        goto late_unopen;

                out_ep->fill_mask = -(1 << chan->out_log2_element_size);
                chan->out_bytes = 0;
                chan->flushed = 0;

                /*
                 * Sending a flush request to a previously closed stream
                 * effectively opens it, and also waits until the command is
                 * confirmed by the FPGA. The latter is necessary because the
                 * data is sent through a separate BULK OUT endpoint, and the
                 * xHCI controller is free to reorder transmissions.
                 *
                 * This can't go wrong unless there's a serious hardware error
                 * (or the computer is stuck for 500 ms?)
                 */
                rc = flush_downstream(chan, XILLY_RESPONSE_TIMEOUT, false);

                if (rc == -ETIMEDOUT) {
                        rc = -EIO;
                        report_io_error(xdev, rc);
                }

                if (rc)
                        goto late_unopen;
        }

        if (filp->f_mode & FMODE_READ) {
                in_fifo = kzalloc(sizeof(*in_fifo), GFP_KERNEL);

                if (!in_fifo) {
                        rc = -ENOMEM;
                        goto late_unopen;
                }

                rc = fifo_init(in_fifo, chan->in_log2_fifo_size);

                if (rc) {
                        kfree(in_fifo);
                        goto late_unopen;
                }
        }

        mutex_lock(&chan->lock);
        if (in_fifo) {
                chan->in_fifo = in_fifo;
                chan->read_data_ok = 1;
        }
        if (out_ep)
                chan->out_ep = out_ep;
        mutex_unlock(&chan->lock);

        if (in_fifo) {
                u32 in_checkpoint = 0;

                if (!chan->in_synchronous)
                        in_checkpoint = in_fifo->size >>
                                chan->in_log2_element_size;

                chan->in_consumed_bytes = 0;
                chan->poll_used = 0;
                chan->in_current_checkpoint = in_checkpoint;
                rc = xillyusb_send_opcode(xdev, (chan->chan_idx << 1) | 1,
                                          OPCODE_SET_CHECKPOINT,
                                          in_checkpoint);

                if (rc) /* Failure guarantees that opcode wasn't sent */
                        goto unfifo;

                /*
                 * In non-blocking mode, request the FPGA to send any data it
                 * has right away. Otherwise, the first read() will always
                 * return -EAGAIN, which is OK strictly speaking, but ugly.
                 * Checking and unrolling if this fails isn't worth the
                 * effort -- the error is propagated to the first read()
                 * anyhow.
                 */
                if (filp->f_flags & O_NONBLOCK)
                        request_read_anything(chan, OPCODE_SET_PUSH);
        }

        return 0;

unfifo:
        chan->read_data_ok = 0;
        safely_assign_in_fifo(chan, NULL);
        fifo_mem_release(in_fifo);
        kfree(in_fifo);

        if (out_ep) {
                mutex_lock(&chan->lock);
                chan->out_ep = NULL;
                mutex_unlock(&chan->lock);
        }

late_unopen:
        if (out_ep)
                endpoint_dealloc(out_ep);

unopen:
        mutex_lock(&chan->lock);

        if (filp->f_mode & FMODE_READ)
                chan->open_for_read = 0;

        if (filp->f_mode & FMODE_WRITE)
                chan->open_for_write = 0;

        mutex_unlock(&chan->lock);

        kref_put(&xdev->kref, cleanup_dev);

        return rc;

unmutex_fail:
        mutex_unlock(&chan->lock);
        return rc;
}

static ssize_t xillyusb_read(struct file *filp, char __user *userbuf,
                             size_t count, loff_t *f_pos)
{
        struct xillyusb_channel *chan = filp->private_data;
        struct xillyusb_dev *xdev = chan->xdev;
        struct xillyfifo *fifo = chan->in_fifo;
        int chan_num = (chan->chan_idx << 1) | 1;

        long deadline, left_to_sleep;
        int bytes_done = 0;
        bool sent_set_push = false;
        int rc;

        deadline = jiffies + 1 + XILLY_RX_TIMEOUT;

        rc = mutex_lock_interruptible(&chan->in_mutex);

        if (rc)
                return rc;

        while (1) {
                u32 fifo_checkpoint_bytes, complete_checkpoint_bytes;
                u32 complete_checkpoint, fifo_checkpoint;
                u32 checkpoint;
                s32 diff, leap;
                unsigned int sh = chan->in_log2_element_size;
                bool checkpoint_for_complete;

                rc = fifo_read(fifo, (__force void *)userbuf + bytes_done,
                               count - bytes_done, xilly_copy_to_user);

                if (rc < 0)
                        break;

                bytes_done += rc;
                chan->in_consumed_bytes += rc;

                left_to_sleep = deadline - ((long)jiffies);

                /*
                 * Some 32-bit arithmetic that may wrap. Note that
                 * complete_checkpoint is rounded up to the closest element
                 * boundary, because the read() can't be completed otherwise.
                 * fifo_checkpoint_bytes is rounded down, because it protects
                 * in_fifo from overflowing.
                 */

                fifo_checkpoint_bytes = chan->in_consumed_bytes + fifo->size;
                complete_checkpoint_bytes =
                        chan->in_consumed_bytes + count - bytes_done;

                fifo_checkpoint = fifo_checkpoint_bytes >> sh;
                complete_checkpoint =
                        (complete_checkpoint_bytes + (1 << sh) - 1) >> sh;

                diff = (fifo_checkpoint - complete_checkpoint) << sh;

                if (chan->in_synchronous && diff >= 0) {
                        checkpoint = complete_checkpoint;
                        checkpoint_for_complete = true;
                } else {
                        checkpoint = fifo_checkpoint;
                        checkpoint_for_complete = false;
                }

                leap = (checkpoint - chan->in_current_checkpoint) << sh;

                /*
                 * To prevent flooding of OPCODE_SET_CHECKPOINT commands as
                 * data is consumed, it's issued only if it moves the
                 * checkpoint by at least an 8th of the FIFO's size, or if
                 * it's necessary to complete the number of bytes requested by
                 * the read() call.
                 *
                 * chan->read_data_ok is checked to spare an unnecessary
                 * submission after receiving EOF, however it's harmless if
                 * such slips away.
                 */

                if (chan->read_data_ok &&
                    (leap > (fifo->size >> 3) ||
                     (checkpoint_for_complete && leap > 0))) {
                        chan->in_current_checkpoint = checkpoint;
                        rc = xillyusb_send_opcode(xdev, chan_num,
                                                  OPCODE_SET_CHECKPOINT,
                                                  checkpoint);

                        if (rc)
                                break;
                }

                if (bytes_done == count ||
                    (left_to_sleep <= 0 && bytes_done))
                        break;

                /*
                 * Reaching here means that the FIFO was empty when
                 * fifo_read() returned, but not necessarily right now. Error
                 * and EOF are checked and reported only now, so that no data
                 * that managed its way to the FIFO is lost.
                 */

                if (!READ_ONCE(chan->read_data_ok)) { /* FPGA has sent EOF */
                        /* Has data slipped into the FIFO since fifo_read()? */
                        smp_rmb();
                        if (READ_ONCE(fifo->fill))
                                continue;

                        rc = 0;
                        break;
                }

                if (xdev->error) {
                        rc = xdev->error;
                        break;
                }

                if (filp->f_flags & O_NONBLOCK) {
                        rc = -EAGAIN;
                        break;
                }

                if (!sent_set_push) {
                        rc = xillyusb_send_opcode(xdev, chan_num,
                                                  OPCODE_SET_PUSH,
                                                  complete_checkpoint);

                        if (rc)
                                break;

                        sent_set_push = true;
                }

                if (left_to_sleep > 0) {
                        /*
                         * Note that when xdev->error is set (e.g. when the
                         * device is unplugged), read_data_ok turns zero and
                         * fifo->waitq is woken.
                         * Therefore no special attention to xdev->error.
                         */

                        rc = wait_event_interruptible_timeout
                                (fifo->waitq,
                                 fifo->fill || !chan->read_data_ok,
                                 left_to_sleep);
                } else { /* bytes_done == 0 */
                        /* Tell FPGA to send anything it has */
                        rc = request_read_anything(chan, OPCODE_UPDATE_PUSH);

                        if (rc)
                                break;

                        rc = wait_event_interruptible
                                (fifo->waitq,
                                 fifo->fill || !chan->read_data_ok);
                }

                if (rc < 0) {
                        rc = -EINTR;
                        break;
                }
        }

        if (((filp->f_flags & O_NONBLOCK) || chan->poll_used) &&
            !READ_ONCE(fifo->fill))
                request_read_anything(chan, OPCODE_SET_PUSH);

        mutex_unlock(&chan->in_mutex);

        if (bytes_done)
                return bytes_done;

        return rc;
}

static int xillyusb_flush(struct file *filp, fl_owner_t id)
{
        struct xillyusb_channel *chan = filp->private_data;
        int rc;

        if (!(filp->f_mode & FMODE_WRITE))
                return 0;

        rc = mutex_lock_interruptible(&chan->out_mutex);

        if (rc)
                return rc;

        /*
         * One second's timeout on flushing. Interrupts are ignored, because if
         * the user pressed CTRL-C, that interrupt will still be in flight by
         * the time we reach here, and the opportunity to flush is lost.
         */
        rc = flush_downstream(chan, HZ, false);

        mutex_unlock(&chan->out_mutex);

        if (rc == -ETIMEDOUT) {
                /* The things you do to use dev_warn() and not pr_warn() */
                struct xillyusb_dev *xdev = chan->xdev;

                mutex_lock(&chan->lock);
                if (!xdev->error)
                        dev_warn(xdev->dev,
                                 "Timed out while flushing. Output data may be lost.\n");
                mutex_unlock(&chan->lock);
        }

        return rc;
}

static ssize_t xillyusb_write(struct file *filp, const char __user *userbuf,
                              size_t count, loff_t *f_pos)
{
        struct xillyusb_channel *chan = filp->private_data;
        struct xillyusb_dev *xdev = chan->xdev;
        struct xillyfifo *fifo = &chan->out_ep->fifo;
        int rc;

        rc = mutex_lock_interruptible(&chan->out_mutex);

        if (rc)
                return rc;

        while (1) {
                if (xdev->error) {
                        rc = xdev->error;
                        break;
                }

                if (count == 0)
                        break;

                rc = fifo_write(fifo, (__force void *)userbuf, count,
                                xilly_copy_from_user);

                if (rc != 0)
                        break;

                if (filp->f_flags & O_NONBLOCK) {
                        rc = -EAGAIN;
                        break;
                }

                if (wait_event_interruptible
                    (fifo->waitq,
                     fifo->fill != fifo->size || xdev->error)) {
                        rc = -EINTR;
                        break;
                }
        }

        if (rc < 0)
                goto done;

        chan->out_bytes += rc;

        if (rc) {
                try_queue_bulk_out(chan->out_ep);
                chan->flushed = 0;
        }

        if (chan->out_synchronous) {
                int flush_rc = flush_downstream(chan, 0, true);

                if (flush_rc && !rc)
                        rc = flush_rc;
        }

done:
        mutex_unlock(&chan->out_mutex);

        return rc;
}

static int xillyusb_release(struct inode *inode, struct file *filp)
{
        struct xillyusb_channel *chan = filp->private_data;
        struct xillyusb_dev *xdev = chan->xdev;
        int rc_read = 0, rc_write = 0;

        if (filp->f_mode & FMODE_READ) {
                struct xillyfifo *in_fifo = chan->in_fifo;

                rc_read = xillyusb_send_opcode(xdev, (chan->chan_idx << 1) | 1,
                                               OPCODE_CLOSE, 0);
                /*
                 * If rc_read is nonzero, xdev->error indicates a global
                 * device error. The error is reported later, so that
                 * resources are freed.
                 *
                 * Looping on wait_event_interruptible() kinda breaks the idea
                 * of being interruptible, and this should have been
                 * wait_event(). Only it's being woken with
                 * wake_up_interruptible() for the sake of other uses. If
                 * there's a global device error, chan->read_data_ok is
                 * deasserted and the wait queue is woken, so this is covered.
                 */

                while (wait_event_interruptible(in_fifo->waitq,
                                                !chan->read_data_ok))
                        ; /* Empty loop */

                safely_assign_in_fifo(chan, NULL);
                fifo_mem_release(in_fifo);
                kfree(in_fifo);

                mutex_lock(&chan->lock);
                chan->open_for_read = 0;
                mutex_unlock(&chan->lock);
        }

        if (filp->f_mode & FMODE_WRITE) {
                struct xillyusb_endpoint *ep = chan->out_ep;
                /*
                 * chan->flushing isn't zeroed. If the pre-release flush timed
                 * out, a cancel request will be sent before the next
                 * OPCODE_SET_CHECKPOINT (i.e. when the file is opened again).
                 * This is despite the fact that the FPGA forgets about the
                 * checkpoint request as the file closes. Still, in an
                 * exceptional race condition, the FPGA could send an
                 * OPCODE_REACHED_CHECKPOINT just before closing that would
                 * reach the host after the file has re-opened.
                 */

                mutex_lock(&chan->lock);
                chan->out_ep = NULL;
                mutex_unlock(&chan->lock);

                endpoint_quiesce(ep);
                endpoint_dealloc(ep);

                /* See comments on rc_read above */
                rc_write = xillyusb_send_opcode(xdev, chan->chan_idx << 1,
                                                OPCODE_CLOSE, 0);

                mutex_lock(&chan->lock);
                chan->open_for_write = 0;
                mutex_unlock(&chan->lock);
        }

        kref_put(&xdev->kref, cleanup_dev);

        return rc_read ? rc_read : rc_write;
}

/*
 * Xillybus' API allows device nodes to be seekable, giving the user
 * application access to a RAM array on the FPGA (or logic emulating it).
 */

static loff_t xillyusb_llseek(struct file *filp, loff_t offset, int whence)
{
        struct xillyusb_channel *chan = filp->private_data;
        struct xillyusb_dev *xdev = chan->xdev;
        loff_t pos = filp->f_pos;
        int rc = 0;
        unsigned int log2_element_size = chan->readable ?
                chan->in_log2_element_size : chan->out_log2_element_size;

        /*
         * Take both mutexes not allowing interrupts, since it seems like
         * common applications don't expect an -EINTR here. Besides, multiple
         * access to a single file descriptor on seekable devices is a mess
         * anyhow.
         */

        mutex_lock(&chan->out_mutex);
        mutex_lock(&chan->in_mutex);

        switch (whence) {
        case SEEK_SET:
                pos = offset;
                break;
        case SEEK_CUR:
                pos += offset;
                break;
        case SEEK_END:
                pos = offset; /* Going to the end => to the beginning */
                break;
        default:
                rc = -EINVAL;
                goto end;
        }

        /* In any case, we must finish on an element boundary */
        if (pos & ((1 << log2_element_size) - 1)) {
                rc = -EINVAL;
                goto end;
        }

        rc = xillyusb_send_opcode(xdev, chan->chan_idx << 1,
                                  OPCODE_SET_ADDR,
                                  pos >> log2_element_size);

        if (rc)
                goto end;

        if (chan->writable) {
                chan->flushed = 0;
                rc = flush_downstream(chan, HZ, false);
        }

end:
        mutex_unlock(&chan->out_mutex);
        mutex_unlock(&chan->in_mutex);

        if (rc) /* Return error after releasing mutexes */
                return rc;

        filp->f_pos = pos;

        return pos;
}

static __poll_t xillyusb_poll(struct file *filp, poll_table *wait)
{
        struct xillyusb_channel *chan = filp->private_data;
        __poll_t mask = 0;

        if (chan->in_fifo)
                poll_wait(filp, &chan->in_fifo->waitq, wait);

        if (chan->out_ep)
                poll_wait(filp, &chan->out_ep->fifo.waitq, wait);

        /*
         * If this is the first time poll() is called, and the file is
         * readable, set the relevant flag. Also tell the FPGA to send all it
         * has, to kickstart the mechanism that ensures there's always some
         * data in in_fifo unless the stream is dry end-to-end. Note that the
         * first poll() may not return an EPOLLIN, even if there's data on the
         * FPGA. Rather, the data will arrive soon, and trigger the relevant
         * wait queue.
         */

        if (!chan->poll_used && chan->in_fifo) {
                chan->poll_used = 1;
                request_read_anything(chan, OPCODE_SET_PUSH);
        }

        /*
         * poll() won't play ball regarding read() channels which
         * are synchronous. Allowing that will create situations where data has
         * been delivered at the FPGA, and users expecting select() to wake up,
         * which it may not. So make it never work.
         */

        if (chan->in_fifo && !chan->in_synchronous &&
            (READ_ONCE(chan->in_fifo->fill) || !chan->read_data_ok))
                mask |= EPOLLIN | EPOLLRDNORM;

        if (chan->out_ep &&
            (READ_ONCE(chan->out_ep->fifo.fill) != chan->out_ep->fifo.size))
                mask |= EPOLLOUT | EPOLLWRNORM;

        if (chan->xdev->error)
                mask |= EPOLLERR;

        return mask;
}

static const struct file_operations xillyusb_fops = {
        .owner      = THIS_MODULE,
        .read       = xillyusb_read,
        .write      = xillyusb_write,
        .open       = xillyusb_open,
        .flush      = xillyusb_flush,
        .release    = xillyusb_release,
        .llseek     = xillyusb_llseek,
        .poll       = xillyusb_poll,
};

static int xillyusb_setup_base_eps(struct xillyusb_dev *xdev)
{
        xdev->msg_ep = endpoint_alloc(xdev, MSG_EP_NUM | USB_DIR_OUT,
                                      bulk_out_work, 1, 2);
        if (!xdev->msg_ep)
                return -ENOMEM;

        if (fifo_init(&xdev->msg_ep->fifo, 13)) /* 8 kiB */
                goto dealloc;

        xdev->msg_ep->fill_mask = -8; /* 8 bytes granularity */

        xdev->in_ep = endpoint_alloc(xdev, IN_EP_NUM | USB_DIR_IN,
                                     bulk_in_work, BUF_SIZE_ORDER, BUFNUM);
        if (!xdev->in_ep)
                goto dealloc;

        try_queue_bulk_in(xdev->in_ep);

        return 0;

dealloc:
        endpoint_dealloc(xdev->msg_ep); /* Also frees FIFO mem if allocated */
        return -ENOMEM;
}

static int setup_channels(struct xillyusb_dev *xdev,
                          __le16 *chandesc,
                          int num_channels)
{
        struct xillyusb_channel *chan;
        int i;

        chan = kcalloc(num_channels, sizeof(*chan), GFP_KERNEL);
        if (!chan)
                return -ENOMEM;

        xdev->channels = chan;

        for (i = 0; i < num_channels; i++, chan++) {
                unsigned int in_desc = le16_to_cpu(*chandesc++);
                unsigned int out_desc = le16_to_cpu(*chandesc++);

                chan->xdev = xdev;
                mutex_init(&chan->in_mutex);
                mutex_init(&chan->out_mutex);
                mutex_init(&chan->lock);
                init_waitqueue_head(&chan->flushq);

                chan->chan_idx = i;

                if (in_desc & 0x80) { /* Entry is valid */
                        chan->readable = 1;
                        chan->in_synchronous = !!(in_desc & 0x40);
                        chan->in_seekable = !!(in_desc & 0x20);
                        chan->in_log2_element_size = in_desc & 0x0f;
                        chan->in_log2_fifo_size = ((in_desc >> 8) & 0x1f) + 16;
                }

                /*
                 * A downstream channel should never exist above index 13,
                 * as it would request a nonexistent BULK endpoint > 15.
                 * In the peculiar case that it does, it's ignored silently.
                 */

                if ((out_desc & 0x80) && i < 14) { /* Entry is valid */
                        chan->writable = 1;
                        chan->out_synchronous = !!(out_desc & 0x40);
                        chan->out_seekable = !!(out_desc & 0x20);
                        chan->out_log2_element_size = out_desc & 0x0f;
                        chan->out_log2_fifo_size =
                                ((out_desc >> 8) & 0x1f) + 16;
                }
        }

        return 0;
}

static int xillyusb_discovery(struct usb_interface *interface)
{
        int rc;
        struct xillyusb_dev *xdev = usb_get_intfdata(interface);
        __le16 bogus_chandesc[2];
        struct xillyfifo idt_fifo;
        struct xillyusb_channel *chan;
        unsigned int idt_len, names_offset;
        unsigned char *idt;
        int num_channels;

        rc = xillyusb_send_opcode(xdev, ~0, OPCODE_QUIESCE, 0);

        if (rc) {
                dev_err(&interface->dev, "Failed to send quiesce request.
Aborting.\n"); 1985 + return rc; 1986 + } 1987 + 1988 + /* Phase I: Set up one fake upstream channel and obtain IDT */ 1989 + 1990 + /* Set up a fake IDT with one async IN stream */ 1991 + bogus_chandesc[0] = cpu_to_le16(0x80); 1992 + bogus_chandesc[1] = cpu_to_le16(0); 1993 + 1994 + rc = setup_channels(xdev, bogus_chandesc, 1); 1995 + 1996 + if (rc) 1997 + return rc; 1998 + 1999 + rc = fifo_init(&idt_fifo, LOG2_IDT_FIFO_SIZE); 2000 + 2001 + if (rc) 2002 + return rc; 2003 + 2004 + chan = xdev->channels; 2005 + 2006 + chan->in_fifo = &idt_fifo; 2007 + chan->read_data_ok = 1; 2008 + 2009 + xdev->num_channels = 1; 2010 + 2011 + rc = xillyusb_send_opcode(xdev, ~0, OPCODE_REQ_IDT, 0); 2012 + 2013 + if (rc) { 2014 + dev_err(&interface->dev, "Failed to send IDT request. Aborting.\n"); 2015 + goto unfifo; 2016 + } 2017 + 2018 + rc = wait_event_interruptible_timeout(idt_fifo.waitq, 2019 + !chan->read_data_ok, 2020 + XILLY_RESPONSE_TIMEOUT); 2021 + 2022 + if (xdev->error) { 2023 + rc = xdev->error; 2024 + goto unfifo; 2025 + } 2026 + 2027 + if (rc < 0) { 2028 + rc = -EINTR; /* Interrupt on probe method? Interesting. */ 2029 + goto unfifo; 2030 + } 2031 + 2032 + if (chan->read_data_ok) { 2033 + rc = -ETIMEDOUT; 2034 + dev_err(&interface->dev, "No response from FPGA. Aborting.\n"); 2035 + goto unfifo; 2036 + } 2037 + 2038 + idt_len = READ_ONCE(idt_fifo.fill); 2039 + idt = kmalloc(idt_len, GFP_KERNEL); 2040 + 2041 + if (!idt) { 2042 + rc = -ENOMEM; 2043 + goto unfifo; 2044 + } 2045 + 2046 + fifo_read(&idt_fifo, idt, idt_len, xilly_memcpy); 2047 + 2048 + if (crc32_le(~0, idt, idt_len) != 0) { 2049 + dev_err(&interface->dev, "IDT failed CRC check. Aborting.\n"); 2050 + rc = -ENODEV; 2051 + goto unidt; 2052 + } 2053 + 2054 + if (*idt > 0x90) { 2055 + dev_err(&interface->dev, "No support for IDT version 0x%02x. Maybe the xillyusb driver needs an upgrade. 
Aborting.\n", 2056 + (int)*idt); 2057 + rc = -ENODEV; 2058 + goto unidt; 2059 + } 2060 + 2061 + /* Phase II: Set up the streams as defined in IDT */ 2062 + 2063 + num_channels = le16_to_cpu(*((__le16 *)(idt + 1))); 2064 + names_offset = 3 + num_channels * 4; 2065 + idt_len -= 4; /* Exclude CRC */ 2066 + 2067 + if (idt_len < names_offset) { 2068 + dev_err(&interface->dev, "IDT too short. This is exceptionally weird, because its CRC is OK\n"); 2069 + rc = -ENODEV; 2070 + goto unidt; 2071 + } 2072 + 2073 + rc = setup_channels(xdev, (void *)idt + 3, num_channels); 2074 + 2075 + if (rc) 2076 + goto unidt; 2077 + 2078 + /* 2079 + * Except for wildly misbehaving hardware, or if it was disconnected 2080 + * just after responding with the IDT, there is no reason for any 2081 + * work item to be running now. To be sure that xdev->channels 2082 + * is updated on anything that might run in parallel, flush the 2083 + * workqueue, which rarely does anything. 2084 + */ 2085 + flush_workqueue(xdev->workq); 2086 + 2087 + xdev->num_channels = num_channels; 2088 + 2089 + fifo_mem_release(&idt_fifo); 2090 + kfree(chan); 2091 + 2092 + rc = xillybus_init_chrdev(&interface->dev, &xillyusb_fops, 2093 + THIS_MODULE, xdev, 2094 + idt + names_offset, 2095 + idt_len - names_offset, 2096 + num_channels, 2097 + xillyname, true); 2098 + 2099 + kfree(idt); 2100 + 2101 + return rc; 2102 + 2103 + unidt: 2104 + kfree(idt); 2105 + 2106 + unfifo: 2107 + safely_assign_in_fifo(chan, NULL); 2108 + fifo_mem_release(&idt_fifo); 2109 + 2110 + return rc; 2111 + } 2112 + 2113 + static int xillyusb_probe(struct usb_interface *interface, 2114 + const struct usb_device_id *id) 2115 + { 2116 + struct xillyusb_dev *xdev; 2117 + int rc; 2118 + 2119 + xdev = kzalloc(sizeof(*xdev), GFP_KERNEL); 2120 + if (!xdev) 2121 + return -ENOMEM; 2122 + 2123 + kref_init(&xdev->kref); 2124 + mutex_init(&xdev->process_in_mutex); 2125 + mutex_init(&xdev->msg_mutex); 2126 + 2127 + xdev->udev = 
usb_get_dev(interface_to_usbdev(interface)); 2128 + xdev->dev = &interface->dev; 2129 + xdev->error = 0; 2130 + spin_lock_init(&xdev->error_lock); 2131 + xdev->in_counter = 0; 2132 + xdev->in_bytes_left = 0; 2133 + xdev->workq = alloc_workqueue(xillyname, WQ_HIGHPRI, 0); 2134 + 2135 + if (!xdev->workq) { 2136 + dev_err(&interface->dev, "Failed to allocate work queue\n"); 2137 + rc = -ENOMEM; 2138 + goto fail; 2139 + } 2140 + 2141 + INIT_WORK(&xdev->wakeup_workitem, wakeup_all); 2142 + 2143 + usb_set_intfdata(interface, xdev); 2144 + 2145 + rc = xillyusb_setup_base_eps(xdev); 2146 + if (rc) 2147 + goto fail; 2148 + 2149 + rc = xillyusb_discovery(interface); 2150 + if (rc) 2151 + goto latefail; 2152 + 2153 + return 0; 2154 + 2155 + latefail: 2156 + endpoint_quiesce(xdev->in_ep); 2157 + endpoint_quiesce(xdev->msg_ep); 2158 + 2159 + fail: 2160 + usb_set_intfdata(interface, NULL); 2161 + kref_put(&xdev->kref, cleanup_dev); 2162 + return rc; 2163 + } 2164 + 2165 + static void xillyusb_disconnect(struct usb_interface *interface) 2166 + { 2167 + struct xillyusb_dev *xdev = usb_get_intfdata(interface); 2168 + struct xillyusb_endpoint *msg_ep = xdev->msg_ep; 2169 + struct xillyfifo *fifo = &msg_ep->fifo; 2170 + int rc; 2171 + int i; 2172 + 2173 + xillybus_cleanup_chrdev(xdev, &interface->dev); 2174 + 2175 + /* 2176 + * Try to send OPCODE_QUIESCE, which will fail silently if the device 2177 + * was disconnected, but makes sense on module unload. 2178 + */ 2179 + 2180 + msg_ep->wake_on_drain = true; 2181 + xillyusb_send_opcode(xdev, ~0, OPCODE_QUIESCE, 0); 2182 + 2183 + /* 2184 + * If the device has been disconnected, sending the opcode causes 2185 + * a global device error with xdev->error, if such error didn't 2186 + * occur earlier. Hence timing out means that the USB link is fine, 2187 + * but somehow the message wasn't sent. Should never happen. 
2188 + */ 2189 + 2190 + rc = wait_event_interruptible_timeout(fifo->waitq, 2191 + msg_ep->drained || xdev->error, 2192 + XILLY_RESPONSE_TIMEOUT); 2193 + 2194 + if (!rc) 2195 + dev_err(&interface->dev, 2196 + "Weird timeout condition on sending quiesce request.\n"); 2197 + 2198 + report_io_error(xdev, -ENODEV); /* Discourage further activity */ 2199 + 2200 + /* 2201 + * This device driver is declared with soft_unbind set, or else 2202 + * sending OPCODE_QUIESCE above would always fail. The price is 2203 + * that the USB framework didn't kill outstanding URBs, so it has 2204 + * to be done explicitly before returning from this call. 2205 + */ 2206 + 2207 + for (i = 0; i < xdev->num_channels; i++) { 2208 + struct xillyusb_channel *chan = &xdev->channels[i]; 2209 + 2210 + /* 2211 + * Lock taken to prevent chan->out_ep from changing. It also 2212 + * ensures xillyusb_open() and xillyusb_flush() don't access 2213 + * xdev->dev after being nullified below. 2214 + */ 2215 + mutex_lock(&chan->lock); 2216 + if (chan->out_ep) 2217 + endpoint_quiesce(chan->out_ep); 2218 + mutex_unlock(&chan->lock); 2219 + } 2220 + 2221 + endpoint_quiesce(xdev->in_ep); 2222 + endpoint_quiesce(xdev->msg_ep); 2223 + 2224 + usb_set_intfdata(interface, NULL); 2225 + 2226 + xdev->dev = NULL; 2227 + 2228 + kref_put(&xdev->kref, cleanup_dev); 2229 + } 2230 + 2231 + static struct usb_driver xillyusb_driver = { 2232 + .name = xillyname, 2233 + .id_table = xillyusb_table, 2234 + .probe = xillyusb_probe, 2235 + .disconnect = xillyusb_disconnect, 2236 + .soft_unbind = 1, 2237 + }; 2238 + 2239 + static int __init xillyusb_init(void) 2240 + { 2241 + int rc = 0; 2242 + 2243 + if (LOG2_INITIAL_FIFO_BUF_SIZE > PAGE_SHIFT) 2244 + fifo_buf_order = LOG2_INITIAL_FIFO_BUF_SIZE - PAGE_SHIFT; 2245 + else 2246 + fifo_buf_order = 0; 2247 + 2248 + rc = usb_register(&xillyusb_driver); 2249 + 2250 + return rc; 2251 + } 2252 + 2253 + static void __exit xillyusb_exit(void) 2254 + { 2255 + usb_deregister(&xillyusb_driver); 
2256 + } 2257 + 2258 + module_init(xillyusb_init); 2259 + module_exit(xillyusb_exit);
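The seek logic in xillyusb_llseek() above can be modeled in isolation: SEEK_END maps to the beginning of the FPGA's RAM array, and whatever the whence, the resulting position must land on an element boundary. The sketch below is a standalone userspace model for illustration only; the function name xilly_seek_pos and the XILLY_EINVAL stand-in are made up here, while the whence handling and the alignment test mirror the driver's code.

```c
#include <assert.h>

#define XILLY_EINVAL (-22L)	/* stand-in for the kernel's -EINVAL */

/* Model of the position computation in xillyusb_llseek() (illustrative). */
long xilly_seek_pos(long cur_pos, long offset, int whence,
		    unsigned int log2_element_size)
{
	long pos;

	switch (whence) {
	case 0:			/* SEEK_SET */
		pos = offset;
		break;
	case 1:			/* SEEK_CUR */
		pos = cur_pos + offset;
		break;
	case 2:			/* SEEK_END: going to the end => to the beginning */
		pos = offset;
		break;
	default:
		return XILLY_EINVAL;
	}

	/* In any case, we must finish on an element boundary */
	if (pos & ((1L << log2_element_size) - 1))
		return XILLY_EINVAL;

	return pos;
}
```

With log2_element_size = 2 (4-byte elements), only positions that are multiples of 4 are accepted, exactly like the `pos & ((1 << log2_element_size) - 1)` test in the driver.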
+2 -1
drivers/comedi/drivers/comedi_8254.c
···
  /**
   * comedi_8254_subdevice_init - initialize a comedi_subdevice for the 8254 timer
   * @s: comedi_subdevice struct
+  * @i8254: comedi_8254 struct
   */
  void comedi_8254_subdevice_init(struct comedi_subdevice *s,
  				struct comedi_8254 *i8254)
···

  /**
   * comedi_8254_init - allocate and initialize the 8254 device for pio access
-  * @mmio: port I/O base address
+  * @iobase: port I/O base address
   * @osc_base: base time of the counter in ns
   *	OPTIONAL - only used by comedi_8254_cascade_ns_to_timer()
   * @iosize: I/O register size
+1 -1
drivers/comedi/drivers/comedi_isadma.c
···
   * comedi_isadma_alloc - allocate and initialize the ISA DMA
   * @dev: comedi_device struct
   * @n_desc: the number of cookies to allocate
-  * @dma_chan: DMA channel for the first cookie
+  * @dma_chan1: DMA channel for the first cookie
   * @dma_chan2: DMA channel for the second cookie
   * @maxsize: the size of the buffer to allocate for each cookie
   * @dma_dir: the DMA direction
+3 -4
drivers/comedi/drivers/ni_routes.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routes.c
   * Route information for NI boards.
···
  }
  EXPORT_SYMBOL_GPL(ni_get_valid_routes);

- /**
+ /*
   * List of NI global signal names that, as destinations, are only routeable
   * indirectly through the *_arg elements of the comedi_cmd structure.
   */
···
  }
  EXPORT_SYMBOL_GPL(ni_find_route_set);

- /**
+ /*
   * ni_route_set_has_source() - Determines whether the given source is in
   * included given route_set.
   *
···
  }
  EXPORT_SYMBOL_GPL(ni_route_to_register);

- /**
+ /*
   * ni_find_route_source() - Finds the signal source corresponding to a signal
   *			     route (src-->dest) of the specified routing register
   *			     value and the specified route destination on the
-1
drivers/comedi/drivers/ni_routes.h
···
  /* SPDX-License-Identifier: GPL-2.0+ */
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routes.h
   * Route information for NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes.h
···
  /* SPDX-License-Identifier: GPL-2.0+ */
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/all.h
···
  /* SPDX-License-Identifier: GPL-2.0+ */
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/all.h
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6070e.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6070e.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6220.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6220.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6221.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6221.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6229.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6229.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6251.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6251.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6254.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6254.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6259.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6259.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6534.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6534.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6602.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6602.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6713.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6713.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6723.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6723.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pci-6733.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pci-6733.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6030e.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxi-6030e.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6224.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxi-6224.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6225.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxi-6225.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6251.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxi-6251.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxi-6733.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxi-6733.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxie-6251.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxie-6251.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxie-6535.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxie-6535.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_device_routes/pxie-6738.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_device_routes/pxie-6738.c
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_route_values.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_route_values.c
   * Route information for NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_route_values.h
···
  /* SPDX-License-Identifier: GPL-2.0+ */
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_route_values.h
   * Route information for NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_route_values/all.h
···
  /* SPDX-License-Identifier: GPL-2.0+ */
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_route_values/all.h
   * List of valid routes for specific NI boards.
-1
drivers/comedi/drivers/ni_routing/ni_route_values/ni_660x.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_route_values/ni_660x.c
   * Route information for NI_660X boards.
-1
drivers/comedi/drivers/ni_routing/ni_route_values/ni_eseries.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_route_values/ni_eseries.c
   * Route information for NI_ESERIES boards.
-1
drivers/comedi/drivers/ni_routing/ni_route_values/ni_mseries.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/ni_route_values/ni_mseries.c
   * Route information for NI_MSERIES boards.
-1
drivers/comedi/drivers/ni_routing/tools/convert_c_to_py.c
···
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */

  #include <stdint.h>
  #include <stdbool.h>
-7
drivers/comedi/drivers/ni_routing/tools/convert_csv_to_c.py
···
  #!/usr/bin/env python3
  # SPDX-License-Identifier: GPL-2.0+
- # vim: ts=2:sw=2:et:tw=80:nowrap

  # This is simply to aide in creating the entries in the order of the value of
  # the device-global NI signal/terminal constants defined in comedi.h
···

  output_file_top = """\
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/(unknown)
   * List of valid routes for specific NI boards.
···

  extern_header = """\
  /* SPDX-License-Identifier: GPL-2.0+ */
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/(unknown)
   * List of valid routes for specific NI boards.
···

  single_output_file_top = """\
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/(unknown)
   * List of valid routes for specific NI boards.
···

  output_file_top = """\
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/(unknown)
   * Route information for NI boards.
···

  extern_header = """\
  /* SPDX-License-Identifier: GPL-2.0+ */
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/(unknown)
   * List of valid routes for specific NI boards.
···

  single_output_file_top = """\
  // SPDX-License-Identifier: GPL-2.0+
- /* vim: set ts=8 sw=8 noet tw=80 nowrap: */
  /*
   * comedi/drivers/ni_routing/(unknown)
   * Route information for {sheet} boards.
-1
drivers/comedi/drivers/ni_routing/tools/convert_py_to_csv.py
···
  #!/usr/bin/env python3
  # SPDX-License-Identifier: GPL-2.0+
- # vim: ts=2:sw=2:et:tw=80:nowrap

  from os import path
  import os, csv
-1
drivers/comedi/drivers/ni_routing/tools/csv_collection.py
···
  # SPDX-License-Identifier: GPL-2.0+
- # vim: ts=2:sw=2:et:tw=80:nowrap

  import os, csv, glob

-1
drivers/comedi/drivers/ni_routing/tools/make_blank_csv.py
···
  #!/usr/bin/env python3
  # SPDX-License-Identifier: GPL-2.0+
- # vim: ts=2:sw=2:et:tw=80:nowrap

  from os import path
  import os, csv
-1
drivers/comedi/drivers/ni_routing/tools/ni_names.py
···
  # SPDX-License-Identifier: GPL-2.0+
- # vim: ts=2:sw=2:et:tw=80:nowrap
  """
  This file helps to extract string names of NI signals as included in comedi.h
  between NI_NAMES_BASE and NI_NAMES_BASE+NI_NUM_NAMES.
+6 -6
drivers/comedi/drivers/ni_tio.c
···
  }
  EXPORT_SYMBOL_GPL(ni_tio_insn_config);

- /**
+ /*
   * Retrieves the register value of the current source of the output selector for
   * the given destination.
   *
···
  EXPORT_SYMBOL_GPL(ni_tio_get_routing);

  /**
-  * Sets the register value of the selector MUX for the given destination.
-  * @counter_dev:Pointer to general counter device.
-  * @destination:Device-global identifier of route destination.
-  * @register_value:
+  * ni_tio_set_routing() - Sets the register value of the selector MUX for the given destination.
+  * @counter_dev: Pointer to general counter device.
+  * @dest: Device-global identifier of route destination.
+  * @reg:
   *	The first several bits of this value should store the desired
   *	value to write to the register. All other bits are for
   *	transmitting information that modify the mode of the particular
···
  }
  EXPORT_SYMBOL_GPL(ni_tio_set_routing);

- /**
+ /*
   * Sets the given destination MUX to its default value or disable it.
   *
   * Return: 0 if successful; -EINVAL if terminal is unknown.
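The `/**` to `/*` demotions and renamed `@` lines in the comedi hunks above follow the kernel-doc rules: a comment opening with `/**` is parsed by scripts/kernel-doc, so it must start with the name of the symbol it documents and its `@param:` lines must match the actual parameter names; free-form comments use a plain `/*` opener. A minimal sketch of the two forms (the function and struct shown are made up for illustration, not part of this patch):

```c
/* A plain comment: free-form notes, never parsed by kernel-doc. */

/**
 * example_set_mode() - Switch a hypothetical device into a new mode.
 * @dev: Pointer to the device being configured.
 * @mode: Device-specific mode identifier.
 *
 * Return: 0 on success, or a negative errno on failure.
 */
int example_set_mode(struct example_dev *dev, unsigned int mode);
```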
-1
drivers/comedi/drivers/tests/comedi_example_test.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 - /* vim: set ts=8 sw=8 noet tw=80 nowrap: */ 3 2 /* 4 3 * comedi/drivers/tests/comedi_example_test.c 5 4 * Example set of unit tests.
-1
drivers/comedi/drivers/tests/ni_routes_test.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0+ 2 - /* vim: set ts=8 sw=8 noet tw=80 nowrap: */ 3 2 /* 4 3 * comedi/drivers/tests/ni_routes_test.c 5 4 * Unit tests for NI routes (ni_routes.c module).
-1
drivers/comedi/drivers/tests/unittest.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0+ */ 2 - /* vim: set ts=8 sw=8 noet tw=80 nowrap: */ 3 2 /* 4 3 * comedi/drivers/tests/unittest.h 5 4 * Simple framework for unittests for comedi drivers.
+9 -14
drivers/eisa/eisa-bus.c
···
  }
  EXPORT_SYMBOL(eisa_driver_unregister);

- static ssize_t eisa_show_sig(struct device *dev, struct device_attribute *attr,
-			      char *buf)
+ static ssize_t signature_show(struct device *dev,
+			       struct device_attribute *attr, char *buf)
  {
  	struct eisa_device *edev = to_eisa_device(dev);
  	return sprintf(buf, "%s\n", edev->id.sig);
  }
+ static DEVICE_ATTR_RO(signature);

- static DEVICE_ATTR(signature, S_IRUGO, eisa_show_sig, NULL);
-
- static ssize_t eisa_show_state(struct device *dev,
-			       struct device_attribute *attr,
-			       char *buf)
+ static ssize_t enabled_show(struct device *dev,
+			     struct device_attribute *attr, char *buf)
  {
  	struct eisa_device *edev = to_eisa_device(dev);
  	return sprintf(buf, "%d\n", edev->state & EISA_CONFIG_ENABLED);
  }
+ static DEVICE_ATTR_RO(enabled);

- static DEVICE_ATTR(enabled, S_IRUGO, eisa_show_state, NULL);
-
- static ssize_t eisa_show_modalias(struct device *dev,
-				  struct device_attribute *attr,
-				  char *buf)
+ static ssize_t modalias_show(struct device *dev,
+			      struct device_attribute *attr, char *buf)
  {
  	struct eisa_device *edev = to_eisa_device(dev);
  	return sprintf(buf, EISA_DEVICE_MODALIAS_FMT "\n", edev->id.sig);
  }
-
- static DEVICE_ATTR(modalias, S_IRUGO, eisa_show_modalias, NULL);
+ static DEVICE_ATTR_RO(modalias);

  static int __init eisa_init_device(struct eisa_root_device *root,
  				   struct eisa_device *edev,
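The eisa-bus.c conversion above relies on the sysfs helper macros: DEVICE_ATTR_RO(name) declares a read-only (0444) struct device_attribute called dev_attr_<name> and wires it to a show callback that must be named <name>_show, which is why the callbacks are renamed in the same hunk. A sketch of the pattern (the foo device and its version field are invented here for illustration):

```c
/*
 * DEVICE_ATTR_RO(version) expands to a struct device_attribute named
 * dev_attr_version with mode 0444 and .show = version_show, so the
 * callback name must follow the <name>_show convention.
 */
static ssize_t version_show(struct device *dev,
			    struct device_attribute *attr, char *buf)
{
	struct foo_device *fdev = to_foo_device(dev);	/* hypothetical */

	return sprintf(buf, "%u\n", fdev->version);
}
static DEVICE_ATTR_RO(version);
```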
+1 -1
drivers/extcon/Kconfig
···
  	  from abnormal high input voltage (up to 28V).

  config EXTCON_SM5502
-	tristate "Silicon Mitus SM5502 EXTCON support"
+	tristate "Silicon Mitus SM5502/SM5504 EXTCON support"
  	depends on I2C
  	select IRQ_DOMAIN
  	select REGMAP_I2C
+9
drivers/extcon/extcon-intel-mrfld.c
···
  	struct intel_soc_pmic *pmic = dev_get_drvdata(dev->parent);
  	struct regmap *regmap = pmic->regmap;
  	struct mrfld_extcon_data *data;
+	unsigned int status;
  	unsigned int id;
  	int irq, ret;

···
  	/* Get initial state */
  	mrfld_extcon_role_detect(data);
+
+	/*
+	 * Cached status value is used for cable detection, see comments
+	 * in mrfld_extcon_cable_detect(), we need to sync cached value
+	 * with a real state of the hardware.
+	 */
+	regmap_read(regmap, BCOVE_SCHGRIRQ1, &status);
+	data->status = status;

  	mrfld_extcon_clear(data, BCOVE_MIRQLVL1, BCOVE_LVL1_CHGR);
  	mrfld_extcon_clear(data, BCOVE_MCHGRIRQ1, BCOVE_CHGRIRQ_ALL);
+1
drivers/extcon/extcon-max8997.c
···
  MODULE_DESCRIPTION("Maxim MAX8997 Extcon driver");
  MODULE_AUTHOR("Donggeun Kim <dg77.kim@samsung.com>");
  MODULE_LICENSE("GPL");
+ MODULE_ALIAS("platform:max8997-muic");
+169 -43
drivers/extcon/extcon-sm5502.c
··· 40 40 struct i2c_client *i2c; 41 41 struct regmap *regmap; 42 42 43 + const struct sm5502_type *type; 43 44 struct regmap_irq_chip_data *irq_data; 44 - struct muic_irq *muic_irqs; 45 - unsigned int num_muic_irqs; 46 45 int irq; 47 46 bool irq_attach; 48 47 bool irq_detach; 49 48 struct work_struct irq_work; 50 - 51 - struct reg_data *reg_data; 52 - unsigned int num_reg_data; 53 49 54 50 struct mutex mutex; 55 51 ··· 56 60 * driver should notify cable state to upper layer. 57 61 */ 58 62 struct delayed_work wq_detcable; 63 + }; 64 + 65 + struct sm5502_type { 66 + struct muic_irq *muic_irqs; 67 + unsigned int num_muic_irqs; 68 + const struct regmap_irq_chip *irq_chip; 69 + 70 + struct reg_data *reg_data; 71 + unsigned int num_reg_data; 72 + 73 + unsigned int otg_dev_type1; 74 + int (*parse_irq)(struct sm5502_muic_info *info, int irq_type); 59 75 }; 60 76 61 77 /* Default value of SM5502 register to bring up MUIC device. */ ··· 96 88 | SM5502_REG_INTM2_MHL_MASK, 97 89 .invert = true, 98 90 }, 99 - { } 91 + }; 92 + 93 + /* Default value of SM5504 register to bring up MUIC device. 
*/ 94 + static struct reg_data sm5504_reg_data[] = { 95 + { 96 + .reg = SM5502_REG_RESET, 97 + .val = SM5502_REG_RESET_MASK, 98 + .invert = true, 99 + }, { 100 + .reg = SM5502_REG_INTMASK1, 101 + .val = SM5504_REG_INTM1_ATTACH_MASK 102 + | SM5504_REG_INTM1_DETACH_MASK, 103 + .invert = false, 104 + }, { 105 + .reg = SM5502_REG_INTMASK2, 106 + .val = SM5504_REG_INTM2_RID_CHG_MASK 107 + | SM5504_REG_INTM2_UVLO_MASK 108 + | SM5504_REG_INTM2_POR_MASK, 109 + .invert = true, 110 + }, { 111 + .reg = SM5502_REG_CONTROL, 112 + .val = SM5502_REG_CONTROL_MANUAL_SW_MASK 113 + | SM5504_REG_CONTROL_CHGTYP_MASK 114 + | SM5504_REG_CONTROL_USBCHDEN_MASK 115 + | SM5504_REG_CONTROL_ADC_EN_MASK, 116 + .invert = true, 117 + }, 100 118 }; 101 119 102 120 /* List of detectable cables */ ··· 233 199 .num_irqs = ARRAY_SIZE(sm5502_irqs), 234 200 }; 235 201 202 + /* List of supported interrupt for SM5504 */ 203 + static struct muic_irq sm5504_muic_irqs[] = { 204 + { SM5504_IRQ_INT1_ATTACH, "muic-attach" }, 205 + { SM5504_IRQ_INT1_DETACH, "muic-detach" }, 206 + { SM5504_IRQ_INT1_CHG_DET, "muic-chg-det" }, 207 + { SM5504_IRQ_INT1_DCD_OUT, "muic-dcd-out" }, 208 + { SM5504_IRQ_INT1_OVP_EVENT, "muic-ovp-event" }, 209 + { SM5504_IRQ_INT1_CONNECT, "muic-connect" }, 210 + { SM5504_IRQ_INT1_ADC_CHG, "muic-adc-chg" }, 211 + { SM5504_IRQ_INT2_RID_CHG, "muic-rid-chg" }, 212 + { SM5504_IRQ_INT2_UVLO, "muic-uvlo" }, 213 + { SM5504_IRQ_INT2_POR, "muic-por" }, 214 + { SM5504_IRQ_INT2_OVP_FET, "muic-ovp-fet" }, 215 + { SM5504_IRQ_INT2_OCP_LATCH, "muic-ocp-latch" }, 216 + { SM5504_IRQ_INT2_OCP_EVENT, "muic-ocp-event" }, 217 + { SM5504_IRQ_INT2_OVP_OCP_EVENT, "muic-ovp-ocp-event" }, 218 + }; 219 + 220 + /* Define interrupt list of SM5504 to register regmap_irq */ 221 + static const struct regmap_irq sm5504_irqs[] = { 222 + /* INT1 interrupts */ 223 + { .reg_offset = 0, .mask = SM5504_IRQ_INT1_ATTACH_MASK, }, 224 + { .reg_offset = 0, .mask = SM5504_IRQ_INT1_DETACH_MASK, }, 225 + { .reg_offset = 0, .mask = 
SM5504_IRQ_INT1_CHG_DET_MASK, }, 226 + { .reg_offset = 0, .mask = SM5504_IRQ_INT1_DCD_OUT_MASK, }, 227 + { .reg_offset = 0, .mask = SM5504_IRQ_INT1_OVP_MASK, }, 228 + { .reg_offset = 0, .mask = SM5504_IRQ_INT1_CONNECT_MASK, }, 229 + { .reg_offset = 0, .mask = SM5504_IRQ_INT1_ADC_CHG_MASK, }, 230 + 231 + /* INT2 interrupts */ 232 + { .reg_offset = 1, .mask = SM5504_IRQ_INT2_RID_CHG_MASK,}, 233 + { .reg_offset = 1, .mask = SM5504_IRQ_INT2_UVLO_MASK, }, 234 + { .reg_offset = 1, .mask = SM5504_IRQ_INT2_POR_MASK, }, 235 + { .reg_offset = 1, .mask = SM5504_IRQ_INT2_OVP_FET_MASK, }, 236 + { .reg_offset = 1, .mask = SM5504_IRQ_INT2_OCP_LATCH_MASK, }, 237 + { .reg_offset = 1, .mask = SM5504_IRQ_INT2_OCP_EVENT_MASK, }, 238 + { .reg_offset = 1, .mask = SM5504_IRQ_INT2_OVP_OCP_EVENT_MASK, }, 239 + }; 240 + 241 + static const struct regmap_irq_chip sm5504_muic_irq_chip = { 242 + .name = "sm5504", 243 + .status_base = SM5502_REG_INT1, 244 + .mask_base = SM5502_REG_INTMASK1, 245 + .mask_invert = false, 246 + .num_regs = 2, 247 + .irqs = sm5504_irqs, 248 + .num_irqs = ARRAY_SIZE(sm5504_irqs), 249 + }; 250 + 236 251 /* Define regmap configuration of SM5502 for I2C communication */ 237 252 static bool sm5502_muic_volatile_reg(struct device *dev, unsigned int reg) 238 253 { ··· 385 302 return ret; 386 303 } 387 304 388 - switch (dev_type1) { 389 - case SM5502_REG_DEV_TYPE1_USB_OTG_MASK: 305 + if (dev_type1 == info->type->otg_dev_type1) { 390 306 cable_type = SM5502_MUIC_ADC_GROUND_USB_OTG; 391 - break; 392 - default: 307 + } else { 393 308 dev_dbg(info->dev, 394 309 "cannot identify the cable type: adc(0x%x), dev_type1(0x%x)\n", 395 310 adc, dev_type1); ··· 440 359 return ret; 441 360 } 442 361 362 + if (dev_type1 == info->type->otg_dev_type1) { 363 + cable_type = SM5502_MUIC_ADC_OPEN_USB_OTG; 364 + break; 365 + } 366 + 443 367 switch (dev_type1) { 444 368 case SM5502_REG_DEV_TYPE1_USB_SDP_MASK: 445 369 cable_type = SM5502_MUIC_ADC_OPEN_USB; 446 370 break; 447 371 case 
SM5502_REG_DEV_TYPE1_DEDICATED_CHG_MASK: 448 372 cable_type = SM5502_MUIC_ADC_OPEN_TA; 449 - break; 450 - case SM5502_REG_DEV_TYPE1_USB_OTG_MASK: 451 - cable_type = SM5502_MUIC_ADC_OPEN_USB_OTG; 452 373 break; 453 374 default: 454 375 dev_dbg(info->dev, ··· 581 498 return 0; 582 499 } 583 500 501 + static int sm5504_parse_irq(struct sm5502_muic_info *info, int irq_type) 502 + { 503 + switch (irq_type) { 504 + case SM5504_IRQ_INT1_ATTACH: 505 + info->irq_attach = true; 506 + break; 507 + case SM5504_IRQ_INT1_DETACH: 508 + info->irq_detach = true; 509 + break; 510 + case SM5504_IRQ_INT1_CHG_DET: 511 + case SM5504_IRQ_INT1_DCD_OUT: 512 + case SM5504_IRQ_INT1_OVP_EVENT: 513 + case SM5504_IRQ_INT1_CONNECT: 514 + case SM5504_IRQ_INT1_ADC_CHG: 515 + case SM5504_IRQ_INT2_RID_CHG: 516 + case SM5504_IRQ_INT2_UVLO: 517 + case SM5504_IRQ_INT2_POR: 518 + case SM5504_IRQ_INT2_OVP_FET: 519 + case SM5504_IRQ_INT2_OCP_LATCH: 520 + case SM5504_IRQ_INT2_OCP_EVENT: 521 + case SM5504_IRQ_INT2_OVP_OCP_EVENT: 522 + default: 523 + break; 524 + } 525 + 526 + return 0; 527 + } 528 + 584 529 static irqreturn_t sm5502_muic_irq_handler(int irq, void *data) 585 530 { 586 531 struct sm5502_muic_info *info = data; 587 532 int i, irq_type = -1, ret; 588 533 589 - for (i = 0; i < info->num_muic_irqs; i++) 590 - if (irq == info->muic_irqs[i].virq) 591 - irq_type = info->muic_irqs[i].irq; 534 + for (i = 0; i < info->type->num_muic_irqs; i++) 535 + if (irq == info->type->muic_irqs[i].virq) 536 + irq_type = info->type->muic_irqs[i].irq; 592 537 593 - ret = sm5502_parse_irq(info, irq_type); 538 + ret = info->type->parse_irq(info, irq_type); 594 539 if (ret < 0) { 595 540 dev_warn(info->dev, "cannot handle this interrupt:%d\n", 596 541 irq_type); ··· 663 552 version_id, vendor_id); 664 553 665 554 /* Initialize the registers of SM5502 device to bring-up */ 666 - for (i = 0; i < info->num_reg_data; i++) { 555 + for (i = 0; i < info->type->num_reg_data; i++) { 667 556 unsigned int val = 0; 668 557 669 - if 
(!info->reg_data[i].invert) 670 - val |= ~info->reg_data[i].val; 558 + if (!info->type->reg_data[i].invert) 559 + val |= ~info->type->reg_data[i].val; 671 560 else 672 - val = info->reg_data[i].val; 673 - regmap_write(info->regmap, info->reg_data[i].reg, val); 561 + val = info->type->reg_data[i].val; 562 + regmap_write(info->regmap, info->type->reg_data[i].reg, val); 674 563 } 675 564 } 676 565 677 - static int sm5022_muic_i2c_probe(struct i2c_client *i2c, 678 - const struct i2c_device_id *id) 566 + static int sm5022_muic_i2c_probe(struct i2c_client *i2c) 679 567 { 680 568 struct device_node *np = i2c->dev.of_node; 681 569 struct sm5502_muic_info *info; ··· 691 581 info->dev = &i2c->dev; 692 582 info->i2c = i2c; 693 583 info->irq = i2c->irq; 694 - info->muic_irqs = sm5502_muic_irqs; 695 - info->num_muic_irqs = ARRAY_SIZE(sm5502_muic_irqs); 696 - info->reg_data = sm5502_reg_data; 697 - info->num_reg_data = ARRAY_SIZE(sm5502_reg_data); 584 + info->type = device_get_match_data(info->dev); 585 + if (!info->type) 586 + return -EINVAL; 587 + if (!info->type->parse_irq) { 588 + dev_err(info->dev, "parse_irq missing in struct sm5502_type\n"); 589 + return -EINVAL; 590 + } 698 591 699 592 mutex_init(&info->mutex); 700 593 ··· 713 600 714 601 /* Support irq domain for SM5502 MUIC device */ 715 602 irq_flags = IRQF_TRIGGER_FALLING | IRQF_ONESHOT | IRQF_SHARED; 716 - ret = regmap_add_irq_chip(info->regmap, info->irq, irq_flags, 0, 717 - &sm5502_muic_irq_chip, &info->irq_data); 603 + ret = devm_regmap_add_irq_chip(info->dev, info->regmap, info->irq, 604 + irq_flags, 0, info->type->irq_chip, 605 + &info->irq_data); 718 606 if (ret != 0) { 719 607 dev_err(info->dev, "failed to request IRQ %d: %d\n", 720 608 info->irq, ret); 721 609 return ret; 722 610 } 723 611 724 - for (i = 0; i < info->num_muic_irqs; i++) { 725 - struct muic_irq *muic_irq = &info->muic_irqs[i]; 612 + for (i = 0; i < info->type->num_muic_irqs; i++) { 613 + struct muic_irq *muic_irq = &info->type->muic_irqs[i]; 
726 614 int virq = 0; 727 615 728 616 virq = regmap_irq_get_virq(info->irq_data, muic_irq->irq); ··· 775 661 return 0; 776 662 } 777 663 778 - static int sm5502_muic_i2c_remove(struct i2c_client *i2c) 779 - { 780 - struct sm5502_muic_info *info = i2c_get_clientdata(i2c); 664 + static const struct sm5502_type sm5502_data = { 665 + .muic_irqs = sm5502_muic_irqs, 666 + .num_muic_irqs = ARRAY_SIZE(sm5502_muic_irqs), 667 + .irq_chip = &sm5502_muic_irq_chip, 668 + .reg_data = sm5502_reg_data, 669 + .num_reg_data = ARRAY_SIZE(sm5502_reg_data), 670 + .otg_dev_type1 = SM5502_REG_DEV_TYPE1_USB_OTG_MASK, 671 + .parse_irq = sm5502_parse_irq, 672 + }; 781 673 782 - regmap_del_irq_chip(info->irq, info->irq_data); 783 - 784 - return 0; 785 - } 674 + static const struct sm5502_type sm5504_data = { 675 + .muic_irqs = sm5504_muic_irqs, 676 + .num_muic_irqs = ARRAY_SIZE(sm5504_muic_irqs), 677 + .irq_chip = &sm5504_muic_irq_chip, 678 + .reg_data = sm5504_reg_data, 679 + .num_reg_data = ARRAY_SIZE(sm5504_reg_data), 680 + .otg_dev_type1 = SM5504_REG_DEV_TYPE1_USB_OTG_MASK, 681 + .parse_irq = sm5504_parse_irq, 682 + }; 786 683 787 684 static const struct of_device_id sm5502_dt_match[] = { 788 - { .compatible = "siliconmitus,sm5502-muic" }, 685 + { .compatible = "siliconmitus,sm5502-muic", .data = &sm5502_data }, 686 + { .compatible = "siliconmitus,sm5504-muic", .data = &sm5504_data }, 789 687 { }, 790 688 }; 791 689 MODULE_DEVICE_TABLE(of, sm5502_dt_match); ··· 828 702 sm5502_muic_suspend, sm5502_muic_resume); 829 703 830 704 static const struct i2c_device_id sm5502_i2c_id[] = { 831 - { "sm5502", TYPE_SM5502 }, 705 + { "sm5502", (kernel_ulong_t)&sm5502_data }, 706 + { "sm5504", (kernel_ulong_t)&sm5504_data }, 832 707 { } 833 708 }; 834 709 MODULE_DEVICE_TABLE(i2c, sm5502_i2c_id); ··· 840 713 .pm = &sm5502_muic_pm_ops, 841 714 .of_match_table = sm5502_dt_match, 842 715 }, 843 - .probe = sm5022_muic_i2c_probe, 844 - .remove = sm5502_muic_i2c_remove, 716 + .probe_new = 
sm5022_muic_i2c_probe, 845 717 .id_table = sm5502_i2c_id, 846 718 }; 847 719
+78 -4
drivers/extcon/extcon-sm5502.h
··· 8 8 #ifndef __LINUX_EXTCON_SM5502_H 9 9 #define __LINUX_EXTCON_SM5502_H 10 10 11 - enum sm5502_types { 12 - TYPE_SM5502, 13 - }; 14 - 15 11 /* SM5502 registers */ 16 12 enum sm5502_reg { 17 13 SM5502_REG_DEVICE_ID = 0x01, ··· 89 93 #define SM5502_REG_CONTROL_RAW_DATA_MASK (0x1 << SM5502_REG_CONTROL_RAW_DATA_SHIFT) 90 94 #define SM5502_REG_CONTROL_SW_OPEN_MASK (0x1 << SM5502_REG_CONTROL_SW_OPEN_SHIFT) 91 95 96 + #define SM5504_REG_CONTROL_CHGTYP_SHIFT 5 97 + #define SM5504_REG_CONTROL_USBCHDEN_SHIFT 6 98 + #define SM5504_REG_CONTROL_ADC_EN_SHIFT 7 99 + #define SM5504_REG_CONTROL_CHGTYP_MASK (0x1 << SM5504_REG_CONTROL_CHGTYP_SHIFT) 100 + #define SM5504_REG_CONTROL_USBCHDEN_MASK (0x1 << SM5504_REG_CONTROL_USBCHDEN_SHIFT) 101 + #define SM5504_REG_CONTROL_ADC_EN_MASK (0x1 << SM5504_REG_CONTROL_ADC_EN_SHIFT) 102 + 92 103 #define SM5502_REG_INTM1_ATTACH_SHIFT 0 93 104 #define SM5502_REG_INTM1_DETACH_SHIFT 1 94 105 #define SM5502_REG_INTM1_KP_SHIFT 2 ··· 125 122 #define SM5502_REG_INTM2_STUCK_KEY_MASK (0x1 << SM5502_REG_INTM2_STUCK_KEY_SHIFT) 126 123 #define SM5502_REG_INTM2_STUCK_KEY_RCV_MASK (0x1 << SM5502_REG_INTM2_STUCK_KEY_RCV_SHIFT) 127 124 #define SM5502_REG_INTM2_MHL_MASK (0x1 << SM5502_REG_INTM2_MHL_SHIFT) 125 + 126 + #define SM5504_REG_INTM1_ATTACH_SHIFT 0 127 + #define SM5504_REG_INTM1_DETACH_SHIFT 1 128 + #define SM5504_REG_INTM1_CHG_DET_SHIFT 2 129 + #define SM5504_REG_INTM1_DCD_OUT_SHIFT 3 130 + #define SM5504_REG_INTM1_OVP_EVENT_SHIFT 4 131 + #define SM5504_REG_INTM1_CONNECT_SHIFT 5 132 + #define SM5504_REG_INTM1_ADC_CHG_SHIFT 6 133 + #define SM5504_REG_INTM1_ATTACH_MASK (0x1 << SM5504_REG_INTM1_ATTACH_SHIFT) 134 + #define SM5504_REG_INTM1_DETACH_MASK (0x1 << SM5504_REG_INTM1_DETACH_SHIFT) 135 + #define SM5504_REG_INTM1_CHG_DET_MASK (0x1 << SM5504_REG_INTM1_CHG_DET_SHIFT) 136 + #define SM5504_REG_INTM1_DCD_OUT_MASK (0x1 << SM5504_REG_INTM1_DCD_OUT_SHIFT) 137 + #define SM5504_REG_INTM1_OVP_EVENT_MASK (0x1 << SM5504_REG_INTM1_OVP_EVENT_SHIFT) 138 + #define 
SM5504_REG_INTM1_CONNECT_MASK (0x1 << SM5504_REG_INTM1_CONNECT_SHIFT) 139 + #define SM5504_REG_INTM1_ADC_CHG_MASK (0x1 << SM5504_REG_INTM1_ADC_CHG_SHIFT) 140 + 141 + #define SM5504_REG_INTM2_RID_CHG_SHIFT 0 142 + #define SM5504_REG_INTM2_UVLO_SHIFT 1 143 + #define SM5504_REG_INTM2_POR_SHIFT 2 144 + #define SM5504_REG_INTM2_OVP_FET_SHIFT 4 145 + #define SM5504_REG_INTM2_OCP_LATCH_SHIFT 5 146 + #define SM5504_REG_INTM2_OCP_EVENT_SHIFT 6 147 + #define SM5504_REG_INTM2_OVP_OCP_EVENT_SHIFT 7 148 + #define SM5504_REG_INTM2_RID_CHG_MASK (0x1 << SM5504_REG_INTM2_RID_CHG_SHIFT) 149 + #define SM5504_REG_INTM2_UVLO_MASK (0x1 << SM5504_REG_INTM2_UVLO_SHIFT) 150 + #define SM5504_REG_INTM2_POR_MASK (0x1 << SM5504_REG_INTM2_POR_SHIFT) 151 + #define SM5504_REG_INTM2_OVP_FET_MASK (0x1 << SM5504_REG_INTM2_OVP_FET_SHIFT) 152 + #define SM5504_REG_INTM2_OCP_LATCH_MASK (0x1 << SM5504_REG_INTM2_OCP_LATCH_SHIFT) 153 + #define SM5504_REG_INTM2_OCP_EVENT_MASK (0x1 << SM5504_REG_INTM2_OCP_EVENT_SHIFT) 154 + #define SM5504_REG_INTM2_OVP_OCP_EVENT_MASK (0x1 << SM5504_REG_INTM2_OVP_OCP_EVENT_SHIFT) 128 155 129 156 #define SM5502_REG_ADC_SHIFT 0 130 157 #define SM5502_REG_ADC_MASK (0x1f << SM5502_REG_ADC_SHIFT) ··· 231 198 #define SM5502_REG_DEV_TYPE1_USB_CHG_MASK (0x1 << SM5502_REG_DEV_TYPE1_USB_CHG_SHIFT) 232 199 #define SM5502_REG_DEV_TYPE1_DEDICATED_CHG_MASK (0x1 << SM5502_REG_DEV_TYPE1_DEDICATED_CHG_SHIFT) 233 200 #define SM5502_REG_DEV_TYPE1_USB_OTG_MASK (0x1 << SM5502_REG_DEV_TYPE1_USB_OTG_SHIFT) 201 + 202 + #define SM5504_REG_DEV_TYPE1_USB_OTG_SHIFT 0 203 + #define SM5504_REG_DEV_TYPE1_USB_OTG_MASK (0x1 << SM5504_REG_DEV_TYPE1_USB_OTG_SHIFT) 234 204 235 205 #define SM5502_REG_DEV_TYPE2_JIG_USB_ON_SHIFT 0 236 206 #define SM5502_REG_DEV_TYPE2_JIG_USB_OFF_SHIFT 1 ··· 312 276 #define SM5502_IRQ_INT2_STUCK_KEY_MASK BIT(3) 313 277 #define SM5502_IRQ_INT2_STUCK_KEY_RCV_MASK BIT(4) 314 278 #define SM5502_IRQ_INT2_MHL_MASK BIT(5) 279 + 280 + /* SM5504 Interrupts */ 281 + enum sm5504_irq { 282 + 
/* INT1 */ 283 + SM5504_IRQ_INT1_ATTACH, 284 + SM5504_IRQ_INT1_DETACH, 285 + SM5504_IRQ_INT1_CHG_DET, 286 + SM5504_IRQ_INT1_DCD_OUT, 287 + SM5504_IRQ_INT1_OVP_EVENT, 288 + SM5504_IRQ_INT1_CONNECT, 289 + SM5504_IRQ_INT1_ADC_CHG, 290 + 291 + /* INT2 */ 292 + SM5504_IRQ_INT2_RID_CHG, 293 + SM5504_IRQ_INT2_UVLO, 294 + SM5504_IRQ_INT2_POR, 295 + SM5504_IRQ_INT2_OVP_FET, 296 + SM5504_IRQ_INT2_OCP_LATCH, 297 + SM5504_IRQ_INT2_OCP_EVENT, 298 + SM5504_IRQ_INT2_OVP_OCP_EVENT, 299 + 300 + SM5504_IRQ_NUM, 301 + }; 302 + 303 + #define SM5504_IRQ_INT1_ATTACH_MASK BIT(0) 304 + #define SM5504_IRQ_INT1_DETACH_MASK BIT(1) 305 + #define SM5504_IRQ_INT1_CHG_DET_MASK BIT(2) 306 + #define SM5504_IRQ_INT1_DCD_OUT_MASK BIT(3) 307 + #define SM5504_IRQ_INT1_OVP_MASK BIT(4) 308 + #define SM5504_IRQ_INT1_CONNECT_MASK BIT(5) 309 + #define SM5504_IRQ_INT1_ADC_CHG_MASK BIT(6) 310 + #define SM5504_IRQ_INT2_RID_CHG_MASK BIT(0) 311 + #define SM5504_IRQ_INT2_UVLO_MASK BIT(1) 312 + #define SM5504_IRQ_INT2_POR_MASK BIT(2) 313 + #define SM5504_IRQ_INT2_OVP_FET_MASK BIT(4) 314 + #define SM5504_IRQ_INT2_OCP_LATCH_MASK BIT(5) 315 + #define SM5504_IRQ_INT2_OCP_EVENT_MASK BIT(6) 316 + #define SM5504_IRQ_INT2_OVP_OCP_EVENT_MASK BIT(7) 315 317 316 318 #endif /* __LINUX_EXTCON_SM5502_H */
+24 -19
drivers/firewire/nosy.c
··· 511 511 wake_up_interruptible(&client->buffer.wait); 512 512 spin_unlock_irq(&lynx->client_list_lock); 513 513 514 - pci_free_consistent(lynx->pci_device, sizeof(struct pcl), 515 - lynx->rcv_start_pcl, lynx->rcv_start_pcl_bus); 516 - pci_free_consistent(lynx->pci_device, sizeof(struct pcl), 517 - lynx->rcv_pcl, lynx->rcv_pcl_bus); 518 - pci_free_consistent(lynx->pci_device, PAGE_SIZE, 519 - lynx->rcv_buffer, lynx->rcv_buffer_bus); 514 + dma_free_coherent(&lynx->pci_device->dev, sizeof(struct pcl), 515 + lynx->rcv_start_pcl, lynx->rcv_start_pcl_bus); 516 + dma_free_coherent(&lynx->pci_device->dev, sizeof(struct pcl), 517 + lynx->rcv_pcl, lynx->rcv_pcl_bus); 518 + dma_free_coherent(&lynx->pci_device->dev, PAGE_SIZE, lynx->rcv_buffer, 519 + lynx->rcv_buffer_bus); 520 520 521 521 iounmap(lynx->registers); 522 522 pci_disable_device(dev); ··· 532 532 u32 p, end; 533 533 int ret, i; 534 534 535 - if (pci_set_dma_mask(dev, DMA_BIT_MASK(32))) { 535 + if (dma_set_mask(&dev->dev, DMA_BIT_MASK(32))) { 536 536 dev_err(&dev->dev, 537 537 "DMA address limits not supported for PCILynx hardware\n"); 538 538 return -ENXIO; ··· 564 564 goto fail_deallocate_lynx; 565 565 } 566 566 567 - lynx->rcv_start_pcl = pci_alloc_consistent(lynx->pci_device, 568 - sizeof(struct pcl), &lynx->rcv_start_pcl_bus); 569 - lynx->rcv_pcl = pci_alloc_consistent(lynx->pci_device, 570 - sizeof(struct pcl), &lynx->rcv_pcl_bus); 571 - lynx->rcv_buffer = pci_alloc_consistent(lynx->pci_device, 572 - RCV_BUFFER_SIZE, &lynx->rcv_buffer_bus); 567 + lynx->rcv_start_pcl = dma_alloc_coherent(&lynx->pci_device->dev, 568 + sizeof(struct pcl), 569 + &lynx->rcv_start_pcl_bus, 570 + GFP_KERNEL); 571 + lynx->rcv_pcl = dma_alloc_coherent(&lynx->pci_device->dev, 572 + sizeof(struct pcl), 573 + &lynx->rcv_pcl_bus, GFP_KERNEL); 574 + lynx->rcv_buffer = dma_alloc_coherent(&lynx->pci_device->dev, 575 + RCV_BUFFER_SIZE, 576 + &lynx->rcv_buffer_bus, GFP_KERNEL); 573 577 if (lynx->rcv_start_pcl == NULL || 574 578 lynx->rcv_pcl 
== NULL || 575 579 lynx->rcv_buffer == NULL) { ··· 671 667 672 668 fail_deallocate_buffers: 673 669 if (lynx->rcv_start_pcl) 674 - pci_free_consistent(lynx->pci_device, sizeof(struct pcl), 675 - lynx->rcv_start_pcl, lynx->rcv_start_pcl_bus); 670 + dma_free_coherent(&lynx->pci_device->dev, sizeof(struct pcl), 671 + lynx->rcv_start_pcl, 672 + lynx->rcv_start_pcl_bus); 676 673 if (lynx->rcv_pcl) 677 - pci_free_consistent(lynx->pci_device, sizeof(struct pcl), 678 - lynx->rcv_pcl, lynx->rcv_pcl_bus); 674 + dma_free_coherent(&lynx->pci_device->dev, sizeof(struct pcl), 675 + lynx->rcv_pcl, lynx->rcv_pcl_bus); 679 676 if (lynx->rcv_buffer) 680 - pci_free_consistent(lynx->pci_device, PAGE_SIZE, 681 - lynx->rcv_buffer, lynx->rcv_buffer_bus); 677 + dma_free_coherent(&lynx->pci_device->dev, PAGE_SIZE, 678 + lynx->rcv_buffer, lynx->rcv_buffer_bus); 682 679 iounmap(lynx->registers); 683 680 684 681 fail_deallocate_lynx:
+15 -7
drivers/firmware/stratix10-svc.c
··· 1034 1034 1035 1035 /* add svc client device(s) */ 1036 1036 svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL); 1037 - if (!svc) 1038 - return -ENOMEM; 1037 + if (!svc) { 1038 + ret = -ENOMEM; 1039 + goto err_free_kfifo; 1040 + } 1039 1041 1040 1042 svc->stratix10_svc_rsu = platform_device_alloc(STRATIX10_RSU, 0); 1041 1043 if (!svc->stratix10_svc_rsu) { 1042 1044 dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU); 1043 - return -ENOMEM; 1045 + ret = -ENOMEM; 1046 + goto err_free_kfifo; 1044 1047 } 1045 1048 1046 1049 ret = platform_device_add(svc->stratix10_svc_rsu); 1047 - if (ret) { 1048 - platform_device_put(svc->stratix10_svc_rsu); 1049 - return ret; 1050 - } 1050 + if (ret) 1051 + goto err_put_device; 1052 + 1051 1053 dev_set_drvdata(dev, svc); 1052 1054 1053 1055 pr_info("Intel Service Layer Driver Initialized\n"); 1054 1056 1057 + return 0; 1058 + 1059 + err_put_device: 1060 + platform_device_put(svc->stratix10_svc_rsu); 1061 + err_free_kfifo: 1062 + kfifo_free(&controller->svc_fifo); 1055 1063 return ret; 1056 1064 } 1057 1065
+2 -2
drivers/fpga/Kconfig
··· 7 7 tristate "FPGA Configuration Framework" 8 8 help 9 9 Say Y here if you want support for configuring FPGAs from the 10 - kernel. The FPGA framework adds a FPGA manager class and FPGA 10 + kernel. The FPGA framework adds an FPGA manager class and FPGA 11 11 manager drivers. 12 12 13 13 if FPGA ··· 134 134 tristate "FPGA Region" 135 135 depends on FPGA_BRIDGE 136 136 help 137 - FPGA Region common code. A FPGA Region controls a FPGA Manager 137 + FPGA Region common code. An FPGA Region controls an FPGA Manager 138 138 and the FPGA Bridges associated with either a reconfigurable 139 139 region of an FPGA or a whole FPGA. 140 140
-10
drivers/fpga/altera-pr-ip-core.c
··· 199 199 } 200 200 EXPORT_SYMBOL_GPL(alt_pr_register); 201 201 202 - void alt_pr_unregister(struct device *dev) 203 - { 204 - struct fpga_manager *mgr = dev_get_drvdata(dev); 205 - 206 - dev_dbg(dev, "%s\n", __func__); 207 - 208 - fpga_mgr_unregister(mgr); 209 - } 210 - EXPORT_SYMBOL_GPL(alt_pr_unregister); 211 - 212 202 MODULE_AUTHOR("Matthew Gerlach <matthew.gerlach@linux.intel.com>"); 213 203 MODULE_DESCRIPTION("Altera Partial Reconfiguration IP Core"); 214 204 MODULE_LICENSE("GPL v2");
+20 -20
drivers/fpga/fpga-bridge.c
··· 85 85 } 86 86 87 87 /** 88 - * of_fpga_bridge_get - get an exclusive reference to a fpga bridge 88 + * of_fpga_bridge_get - get an exclusive reference to an fpga bridge 89 89 * 90 - * @np: node pointer of a FPGA bridge 90 + * @np: node pointer of an FPGA bridge 91 91 * @info: fpga image specific information 92 92 * 93 93 * Return fpga_bridge struct if successful. 94 94 * Return -EBUSY if someone already has a reference to the bridge. 95 - * Return -ENODEV if @np is not a FPGA Bridge. 95 + * Return -ENODEV if @np is not an FPGA Bridge. 96 96 */ 97 97 struct fpga_bridge *of_fpga_bridge_get(struct device_node *np, 98 98 struct fpga_image_info *info) ··· 113 113 } 114 114 115 115 /** 116 - * fpga_bridge_get - get an exclusive reference to a fpga bridge 116 + * fpga_bridge_get - get an exclusive reference to an fpga bridge 117 117 * @dev: parent device that fpga bridge was registered with 118 118 * @info: fpga manager info 119 119 * 120 - * Given a device, get an exclusive reference to a fpga bridge. 120 + * Given a device, get an exclusive reference to an fpga bridge. 121 121 * 122 122 * Return: fpga bridge struct or IS_ERR() condition containing error code. 
123 123 */ ··· 224 224 /** 225 225 * of_fpga_bridge_get_to_list - get a bridge, add it to a list 226 226 * 227 - * @np: node pointer of a FPGA bridge 227 + * @np: node pointer of an FPGA bridge 228 228 * @info: fpga image specific information 229 229 * @bridge_list: list of FPGA bridges 230 230 * ··· 313 313 314 314 /** 315 315 * fpga_bridge_create - create and initialize a struct fpga_bridge 316 - * @dev: FPGA bridge device from pdev 316 + * @parent: FPGA bridge device from pdev 317 317 * @name: FPGA bridge name 318 318 * @br_ops: pointer to structure of fpga bridge ops 319 319 * @priv: FPGA bridge private data ··· 323 323 * 324 324 * Return: struct fpga_bridge or NULL 325 325 */ 326 - struct fpga_bridge *fpga_bridge_create(struct device *dev, const char *name, 326 + struct fpga_bridge *fpga_bridge_create(struct device *parent, const char *name, 327 327 const struct fpga_bridge_ops *br_ops, 328 328 void *priv) 329 329 { ··· 331 331 int id, ret; 332 332 333 333 if (!name || !strlen(name)) { 334 - dev_err(dev, "Attempt to register with no name!\n"); 334 + dev_err(parent, "Attempt to register with no name!\n"); 335 335 return NULL; 336 336 } 337 337 ··· 353 353 device_initialize(&bridge->dev); 354 354 bridge->dev.groups = br_ops->groups; 355 355 bridge->dev.class = fpga_bridge_class; 356 - bridge->dev.parent = dev; 357 - bridge->dev.of_node = dev->of_node; 356 + bridge->dev.parent = parent; 357 + bridge->dev.of_node = parent->of_node; 358 358 bridge->dev.id = id; 359 359 360 360 ret = dev_set_name(&bridge->dev, "br%d", id); ··· 373 373 EXPORT_SYMBOL_GPL(fpga_bridge_create); 374 374 375 375 /** 376 - * fpga_bridge_free - free a fpga bridge created by fpga_bridge_create() 376 + * fpga_bridge_free - free an fpga bridge created by fpga_bridge_create() 377 377 * @bridge: FPGA bridge struct 378 378 */ 379 379 void fpga_bridge_free(struct fpga_bridge *bridge) ··· 392 392 393 393 /** 394 394 * devm_fpga_bridge_create - create and init a managed struct fpga_bridge 395 - * 
@dev: FPGA bridge device from pdev 395 + * @parent: FPGA bridge device from pdev 396 396 * @name: FPGA bridge name 397 397 * @br_ops: pointer to structure of fpga bridge ops 398 398 * @priv: FPGA bridge private data 399 399 * 400 - * This function is intended for use in a FPGA bridge driver's probe function. 400 + * This function is intended for use in an FPGA bridge driver's probe function. 401 401 * After the bridge driver creates the struct with devm_fpga_bridge_create(), it 402 402 * should register the bridge with fpga_bridge_register(). The bridge driver's 403 403 * remove function should call fpga_bridge_unregister(). The bridge struct ··· 408 408 * Return: struct fpga_bridge or NULL 409 409 */ 410 410 struct fpga_bridge 411 - *devm_fpga_bridge_create(struct device *dev, const char *name, 411 + *devm_fpga_bridge_create(struct device *parent, const char *name, 412 412 const struct fpga_bridge_ops *br_ops, void *priv) 413 413 { 414 414 struct fpga_bridge **ptr, *bridge; ··· 417 417 if (!ptr) 418 418 return NULL; 419 419 420 - bridge = fpga_bridge_create(dev, name, br_ops, priv); 420 + bridge = fpga_bridge_create(parent, name, br_ops, priv); 421 421 if (!bridge) { 422 422 devres_free(ptr); 423 423 } else { 424 424 *ptr = bridge; 425 - devres_add(dev, ptr); 425 + devres_add(parent, ptr); 426 426 } 427 427 428 428 return bridge; ··· 430 430 EXPORT_SYMBOL_GPL(devm_fpga_bridge_create); 431 431 432 432 /** 433 - * fpga_bridge_register - register a FPGA bridge 433 + * fpga_bridge_register - register an FPGA bridge 434 434 * 435 435 * @bridge: FPGA bridge struct 436 436 * ··· 454 454 EXPORT_SYMBOL_GPL(fpga_bridge_register); 455 455 456 456 /** 457 - * fpga_bridge_unregister - unregister a FPGA bridge 457 + * fpga_bridge_unregister - unregister an FPGA bridge 458 458 * 459 459 * @bridge: FPGA bridge struct 460 460 * 461 - * This function is intended for use in a FPGA bridge driver's remove function. 
461 + * This function is intended for use in an FPGA bridge driver's remove function. 462 462 */ 463 463 void fpga_bridge_unregister(struct fpga_bridge *bridge) 464 464 {
+21 -21
drivers/fpga/fpga-mgr.c
··· 26 26 }; 27 27 28 28 /** 29 - * fpga_image_info_alloc - Allocate a FPGA image info struct 29 + * fpga_image_info_alloc - Allocate an FPGA image info struct 30 30 * @dev: owning device 31 31 * 32 32 * Return: struct fpga_image_info or NULL ··· 50 50 EXPORT_SYMBOL_GPL(fpga_image_info_alloc); 51 51 52 52 /** 53 - * fpga_image_info_free - Free a FPGA image info struct 53 + * fpga_image_info_free - Free an FPGA image info struct 54 54 * @info: FPGA image info struct to free 55 55 */ 56 56 void fpga_image_info_free(struct fpga_image_info *info) ··· 470 470 } 471 471 472 472 /** 473 - * fpga_mgr_get - Given a device, get a reference to a fpga mgr. 473 + * fpga_mgr_get - Given a device, get a reference to an fpga mgr. 474 474 * @dev: parent device that fpga mgr was registered with 475 475 * 476 476 * Return: fpga manager struct or IS_ERR() condition containing error code. ··· 487 487 EXPORT_SYMBOL_GPL(fpga_mgr_get); 488 488 489 489 /** 490 - * of_fpga_mgr_get - Given a device node, get a reference to a fpga mgr. 490 + * of_fpga_mgr_get - Given a device node, get a reference to an fpga mgr. 
491 491 * 492 492 * @node: device node 493 493 * ··· 506 506 EXPORT_SYMBOL_GPL(of_fpga_mgr_get); 507 507 508 508 /** 509 - * fpga_mgr_put - release a reference to a fpga manager 509 + * fpga_mgr_put - release a reference to an fpga manager 510 510 * @mgr: fpga manager structure 511 511 */ 512 512 void fpga_mgr_put(struct fpga_manager *mgr) ··· 550 550 EXPORT_SYMBOL_GPL(fpga_mgr_unlock); 551 551 552 552 /** 553 - * fpga_mgr_create - create and initialize a FPGA manager struct 554 - * @dev: fpga manager device from pdev 553 + * fpga_mgr_create - create and initialize an FPGA manager struct 554 + * @parent: fpga manager device from pdev 555 555 * @name: fpga manager name 556 556 * @mops: pointer to structure of fpga manager ops 557 557 * @priv: fpga manager private data ··· 561 561 * 562 562 * Return: pointer to struct fpga_manager or NULL 563 563 */ 564 - struct fpga_manager *fpga_mgr_create(struct device *dev, const char *name, 564 + struct fpga_manager *fpga_mgr_create(struct device *parent, const char *name, 565 565 const struct fpga_manager_ops *mops, 566 566 void *priv) 567 567 { ··· 571 571 if (!mops || !mops->write_complete || !mops->state || 572 572 !mops->write_init || (!mops->write && !mops->write_sg) || 573 573 (mops->write && mops->write_sg)) { 574 - dev_err(dev, "Attempt to register without fpga_manager_ops\n"); 574 + dev_err(parent, "Attempt to register without fpga_manager_ops\n"); 575 575 return NULL; 576 576 } 577 577 578 578 if (!name || !strlen(name)) { 579 - dev_err(dev, "Attempt to register with no name!\n"); 579 + dev_err(parent, "Attempt to register with no name!\n"); 580 580 return NULL; 581 581 } 582 582 ··· 597 597 device_initialize(&mgr->dev); 598 598 mgr->dev.class = fpga_mgr_class; 599 599 mgr->dev.groups = mops->groups; 600 - mgr->dev.parent = dev; 601 - mgr->dev.of_node = dev->of_node; 600 + mgr->dev.parent = parent; 601 + mgr->dev.of_node = parent->of_node; 602 602 mgr->dev.id = id; 603 603 604 604 ret = dev_set_name(&mgr->dev, 
"fpga%d", id); ··· 617 617 EXPORT_SYMBOL_GPL(fpga_mgr_create); 618 618 619 619 /** 620 - * fpga_mgr_free - free a FPGA manager created with fpga_mgr_create() 620 + * fpga_mgr_free - free an FPGA manager created with fpga_mgr_create() 621 621 * @mgr: fpga manager struct 622 622 */ 623 623 void fpga_mgr_free(struct fpga_manager *mgr) ··· 636 636 637 637 /** 638 638 * devm_fpga_mgr_create - create and initialize a managed FPGA manager struct 639 - * @dev: fpga manager device from pdev 639 + * @parent: fpga manager device from pdev 640 640 * @name: fpga manager name 641 641 * @mops: pointer to structure of fpga manager ops 642 642 * @priv: fpga manager private data 643 643 * 644 - * This function is intended for use in a FPGA manager driver's probe function. 644 + * This function is intended for use in an FPGA manager driver's probe function. 645 645 * After the manager driver creates the manager struct with 646 646 * devm_fpga_mgr_create(), it should register it with fpga_mgr_register(). The 647 647 * manager driver's remove function should call fpga_mgr_unregister(). 
The ··· 651 651 * 652 652 * Return: pointer to struct fpga_manager or NULL 653 653 */ 654 - struct fpga_manager *devm_fpga_mgr_create(struct device *dev, const char *name, 654 + struct fpga_manager *devm_fpga_mgr_create(struct device *parent, const char *name, 655 655 const struct fpga_manager_ops *mops, 656 656 void *priv) 657 657 { ··· 661 661 if (!dr) 662 662 return NULL; 663 663 664 - dr->mgr = fpga_mgr_create(dev, name, mops, priv); 664 + dr->mgr = fpga_mgr_create(parent, name, mops, priv); 665 665 if (!dr->mgr) { 666 666 devres_free(dr); 667 667 return NULL; 668 668 } 669 669 670 - devres_add(dev, dr); 670 + devres_add(parent, dr); 671 671 672 672 return dr->mgr; 673 673 } 674 674 EXPORT_SYMBOL_GPL(devm_fpga_mgr_create); 675 675 676 676 /** 677 - * fpga_mgr_register - register a FPGA manager 677 + * fpga_mgr_register - register an FPGA manager 678 678 * @mgr: fpga manager struct 679 679 * 680 680 * Return: 0 on success, negative error code otherwise. ··· 706 706 EXPORT_SYMBOL_GPL(fpga_mgr_register); 707 707 708 708 /** 709 - * fpga_mgr_unregister - unregister a FPGA manager 709 + * fpga_mgr_unregister - unregister an FPGA manager 710 710 * @mgr: fpga manager struct 711 711 * 712 - * This function is intended for use in a FPGA manager driver's remove function. 712 + * This function is intended for use in an FPGA manager driver's remove function. 713 713 */ 714 714 void fpga_mgr_unregister(struct fpga_manager *mgr) 715 715 {
+15 -15
drivers/fpga/fpga-region.c
··· 33 33 EXPORT_SYMBOL_GPL(fpga_region_class_find); 34 34 35 35 /** 36 - * fpga_region_get - get an exclusive reference to a fpga region 36 + * fpga_region_get - get an exclusive reference to an fpga region 37 37 * @region: FPGA Region struct 38 38 * 39 39 * Caller should call fpga_region_put() when done with region. 40 40 * 41 41 * Return fpga_region struct if successful. 42 42 * Return -EBUSY if someone already has a reference to the region. 43 - * Return -ENODEV if @np is not a FPGA Region. 43 + * Return -ENODEV if @np is not an FPGA Region. 44 44 */ 45 45 static struct fpga_region *fpga_region_get(struct fpga_region *region) 46 46 { ··· 181 181 182 182 /** 183 183 * fpga_region_create - alloc and init a struct fpga_region 184 - * @dev: device parent 184 + * @parent: device parent 185 185 * @mgr: manager that programs this region 186 186 * @get_bridges: optional function to get bridges to a list 187 187 * ··· 192 192 * Return: struct fpga_region or NULL 193 193 */ 194 194 struct fpga_region 195 - *fpga_region_create(struct device *dev, 195 + *fpga_region_create(struct device *parent, 196 196 struct fpga_manager *mgr, 197 197 int (*get_bridges)(struct fpga_region *)) 198 198 { ··· 214 214 215 215 device_initialize(&region->dev); 216 216 region->dev.class = fpga_region_class; 217 - region->dev.parent = dev; 218 - region->dev.of_node = dev->of_node; 217 + region->dev.parent = parent; 218 + region->dev.of_node = parent->of_node; 219 219 region->dev.id = id; 220 220 221 221 ret = dev_set_name(&region->dev, "region%d", id); ··· 234 234 EXPORT_SYMBOL_GPL(fpga_region_create); 235 235 236 236 /** 237 - * fpga_region_free - free a FPGA region created by fpga_region_create() 237 + * fpga_region_free - free an FPGA region created by fpga_region_create() 238 238 * @region: FPGA region 239 239 */ 240 240 void fpga_region_free(struct fpga_region *region) ··· 253 253 254 254 /** 255 255 * devm_fpga_region_create - create and initialize a managed FPGA region struct 256 - * 
@dev: device parent 256 + * @parent: device parent 257 257 * @mgr: manager that programs this region 258 258 * @get_bridges: optional function to get bridges to a list 259 259 * 260 - * This function is intended for use in a FPGA region driver's probe function. 260 + * This function is intended for use in an FPGA region driver's probe function. 261 261 * After the region driver creates the region struct with 262 262 * devm_fpga_region_create(), it should register it with fpga_region_register(). 263 263 * The region driver's remove function should call fpga_region_unregister(). ··· 268 268 * Return: struct fpga_region or NULL 269 269 */ 270 270 struct fpga_region 271 - *devm_fpga_region_create(struct device *dev, 271 + *devm_fpga_region_create(struct device *parent, 272 272 struct fpga_manager *mgr, 273 273 int (*get_bridges)(struct fpga_region *)) 274 274 { ··· 278 278 if (!ptr) 279 279 return NULL; 280 280 281 - region = fpga_region_create(dev, mgr, get_bridges); 281 + region = fpga_region_create(parent, mgr, get_bridges); 282 282 if (!region) { 283 283 devres_free(ptr); 284 284 } else { 285 285 *ptr = region; 286 - devres_add(dev, ptr); 286 + devres_add(parent, ptr); 287 287 } 288 288 289 289 return region; ··· 291 291 EXPORT_SYMBOL_GPL(devm_fpga_region_create); 292 292 293 293 /** 294 - * fpga_region_register - register a FPGA region 294 + * fpga_region_register - register an FPGA region 295 295 * @region: FPGA region 296 296 * 297 297 * Return: 0 or -errno ··· 303 303 EXPORT_SYMBOL_GPL(fpga_region_register); 304 304 305 305 /** 306 - * fpga_region_unregister - unregister a FPGA region 306 + * fpga_region_unregister - unregister an FPGA region 307 307 * @region: FPGA region 308 308 * 309 - * This function is intended for use in a FPGA region driver's remove function. 309 + * This function is intended for use in an FPGA region driver's remove function. 310 310 */ 311 311 void fpga_region_unregister(struct fpga_region *region) 312 312 {
+2
drivers/fpga/machxo2-spi.c
··· 374 374 return devm_fpga_mgr_register(dev, mgr); 375 375 } 376 376 377 + #ifdef CONFIG_OF 377 378 static const struct of_device_id of_match[] = { 378 379 { .compatible = "lattice,machxo2-slave-spi", }, 379 380 {} 380 381 }; 381 382 MODULE_DEVICE_TABLE(of, of_match); 383 + #endif 382 384 383 385 static const struct spi_device_id lattice_ids[] = { 384 386 { "machxo2-slave-spi", 0 },
+4 -4
drivers/fpga/of-fpga-region.c
··· 181 181 * @region: FPGA region 182 182 * @overlay: overlay applied to the FPGA region 183 183 * 184 - * Given an overlay applied to a FPGA region, parse the FPGA image specific 184 + * Given an overlay applied to an FPGA region, parse the FPGA image specific 185 185 * info in the overlay and do some checking. 186 186 * 187 187 * Returns: ··· 273 273 * @region: FPGA region that the overlay was applied to 274 274 * @nd: overlay notification data 275 275 * 276 - * Called when an overlay targeted to a FPGA Region is about to be applied. 276 + * Called when an overlay targeted to an FPGA Region is about to be applied. 277 277 * Parses the overlay for properties that influence how the FPGA will be 278 278 * programmed and does some checking. If the checks pass, programs the FPGA. 279 279 * If the checks fail, overlay is rejected and does not get added to the ··· 336 336 * @action: notifier action 337 337 * @arg: reconfig data 338 338 * 339 - * This notifier handles programming a FPGA when a "firmware-name" property is 340 - * added to a fpga-region. 339 + * This notifier handles programming an FPGA when a "firmware-name" property is 340 + * added to an fpga-region. 341 341 * 342 342 * Returns NOTIFY_OK or error if FPGA programming fails. 343 343 */
+2 -1
drivers/fpga/stratix10-soc.c
··· 271 271 } 272 272 273 273 /* 274 - * Send a FPGA image to privileged layers to write to the FPGA. When done 274 + * Send an FPGA image to privileged layers to write to the FPGA. When done 275 275 * sending, free all service layer buffers we allocated in write_init. 276 276 */ 277 277 static int s10_ops_write(struct fpga_manager *mgr, const char *buf, ··· 454 454 struct s10_priv *priv = mgr->priv; 455 455 456 456 fpga_mgr_unregister(mgr); 457 + fpga_mgr_free(mgr); 457 458 stratix10_svc_free_channel(priv->chan); 458 459 459 460 return 0;
+2 -2
drivers/fsi/fsi-core.c
··· 724 724 rc = count; 725 725 fail: 726 726 *offset = off; 727 - return count; 727 + return rc; 728 728 } 729 729 730 730 static ssize_t cfam_write(struct file *filep, const char __user *buf, ··· 761 761 rc = count; 762 762 fail: 763 763 *offset = off; 764 - return count; 764 + return rc; 765 765 } 766 766 767 767 static loff_t cfam_llseek(struct file *file, loff_t offset, int whence)
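The fsi-core hunks above fix a classic goto-fail slip: the shared exit label returned the originally requested `count` even when an error path had already set `rc`. A minimal userspace sketch of the corrected pattern (function and names are hypothetical, not the driver's):

```c
#include <errno.h>
#include <stddef.h>

/* Sketch of the cfam_read()/cfam_write() fix: every path funnels
 * through the "fail" label, so the label must return rc, which the
 * error paths set to a negative errno and the success path sets to
 * the byte count. Returning count here silently swallowed errors. */
static long copy_with_limit(const char *src, char *dst,
			    size_t count, size_t limit)
{
	long rc;
	size_t i;

	if (count > limit) {
		rc = -EINVAL;		/* error path sets rc */
		goto fail;
	}
	for (i = 0; i < count; i++)
		dst[i] = src[i];
	rc = (long)count;		/* success path: rc = byte count */
fail:
	return rc;			/* the bug was "return count" here */
}
```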
+20 -13
drivers/fsi/fsi-master-aspeed.c
··· 92 92 static u16 aspeed_fsi_divisor = FSI_DIVISOR_DEFAULT; 93 93 module_param_named(bus_div,aspeed_fsi_divisor, ushort, 0); 94 94 95 - #define OPB_POLL_TIMEOUT 10000 95 + #define OPB_POLL_TIMEOUT 500 96 96 97 97 static int __opb_write(struct fsi_master_aspeed *aspeed, u32 addr, 98 98 u32 val, u32 transfer_size) ··· 101 101 u32 reg, status; 102 102 int ret; 103 103 104 - writel(CMD_WRITE, base + OPB0_RW); 105 - writel(transfer_size, base + OPB0_XFER_SIZE); 106 - writel(addr, base + OPB0_FSI_ADDR); 107 - writel(val, base + OPB0_FSI_DATA_W); 108 - writel(0x1, base + OPB_IRQ_CLEAR); 104 + /* 105 + * The ordering of these writes up until the trigger 106 + * write does not matter, so use writel_relaxed. 107 + */ 108 + writel_relaxed(CMD_WRITE, base + OPB0_RW); 109 + writel_relaxed(transfer_size, base + OPB0_XFER_SIZE); 110 + writel_relaxed(addr, base + OPB0_FSI_ADDR); 111 + writel_relaxed(val, base + OPB0_FSI_DATA_W); 112 + writel_relaxed(0x1, base + OPB_IRQ_CLEAR); 109 113 writel(0x1, base + OPB_TRIGGER); 110 114 111 115 ret = readl_poll_timeout(base + OPB_IRQ_STATUS, reg, ··· 153 149 u32 result, reg; 154 150 int status, ret; 155 151 156 - writel(CMD_READ, base + OPB0_RW); 157 - writel(transfer_size, base + OPB0_XFER_SIZE); 158 - writel(addr, base + OPB0_FSI_ADDR); 159 - writel(0x1, base + OPB_IRQ_CLEAR); 152 + /* 153 + * The ordering of these writes up until the trigger 154 + * write does not matter, so use writel_relaxed. 
155 + */ 156 + writel_relaxed(CMD_READ, base + OPB0_RW); 157 + writel_relaxed(transfer_size, base + OPB0_XFER_SIZE); 158 + writel_relaxed(addr, base + OPB0_FSI_ADDR); 159 + writel_relaxed(0x1, base + OPB_IRQ_CLEAR); 160 160 writel(0x1, base + OPB_TRIGGER); 161 161 162 162 ret = readl_poll_timeout(base + OPB_IRQ_STATUS, reg, ··· 533 525 static int fsi_master_aspeed_probe(struct platform_device *pdev) 534 526 { 535 527 struct fsi_master_aspeed *aspeed; 536 - struct resource *res; 537 528 int rc, links, reg; 538 529 __be32 raw; 539 530 ··· 548 541 549 542 aspeed->dev = &pdev->dev; 550 543 551 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 552 - aspeed->base = devm_ioremap_resource(&pdev->dev, res); 544 + aspeed->base = devm_platform_ioremap_resource(pdev, 0); 553 545 if (IS_ERR(aspeed->base)) 554 546 return PTR_ERR(aspeed->base); 555 547 ··· 651 645 { .compatible = "aspeed,ast2600-fsi-master" }, 652 646 { }, 653 647 }; 648 + MODULE_DEVICE_TABLE(of, fsi_master_aspeed_match); 654 649 655 650 static struct platform_driver fsi_master_aspeed_driver = { 656 651 .driver = {
+1 -1
drivers/fsi/fsi-master-ast-cf.c
··· 1309 1309 master->cf_mem = devm_ioremap_resource(&pdev->dev, &res); 1310 1310 if (IS_ERR(master->cf_mem)) { 1311 1311 rc = PTR_ERR(master->cf_mem); 1312 - dev_err(&pdev->dev, "Error %d mapping coldfire memory\n", rc); 1313 1312 goto err_free; 1314 1313 } 1315 1314 dev_dbg(&pdev->dev, "DRAM allocation @%x\n", master->cf_mem_addr); ··· 1426 1427 { .compatible = "aspeed,ast2500-cf-fsi-master" }, 1427 1428 { }, 1428 1429 }; 1430 + MODULE_DEVICE_TABLE(of, fsi_master_acf_match); 1429 1431 1430 1432 static struct platform_driver fsi_master_acf = { 1431 1433 .driver = {
+1
drivers/fsi/fsi-master-gpio.c
··· 882 882 { .compatible = "fsi-master-gpio" }, 883 883 { }, 884 884 }; 885 + MODULE_DEVICE_TABLE(of, fsi_master_gpio_match); 885 886 886 887 static struct platform_driver fsi_master_gpio_driver = { 887 888 .driver = {
+9 -3
drivers/fsi/fsi-occ.c
··· 223 223 .release = occ_release, 224 224 }; 225 225 226 - static int occ_verify_checksum(struct occ_response *resp, u16 data_length) 226 + static int occ_verify_checksum(struct occ *occ, struct occ_response *resp, 227 + u16 data_length) 227 228 { 228 229 /* Fetch the two bytes after the data for the checksum. */ 229 230 u16 checksum_resp = get_unaligned_be16(&resp->data[data_length]); ··· 239 238 for (i = 0; i < data_length; ++i) 240 239 checksum += resp->data[i]; 241 240 242 - if (checksum != checksum_resp) 241 + if (checksum != checksum_resp) { 242 + dev_err(occ->dev, "Bad checksum: %04x!=%04x\n", checksum, 243 + checksum_resp); 243 244 return -EBADMSG; 245 + } 244 246 245 247 return 0; 246 248 } ··· 499 495 goto done; 500 496 501 497 if (resp->return_status == OCC_RESP_CMD_IN_PRG || 498 + resp->return_status == OCC_RESP_CRIT_INIT || 502 499 resp->seq_no != seq_no) { 503 500 rc = -ETIMEDOUT; 504 501 ··· 537 532 } 538 533 539 534 *resp_len = resp_data_length + 7; 540 - rc = occ_verify_checksum(resp, resp_data_length); 535 + rc = occ_verify_checksum(occ, resp, resp_data_length); 541 536 542 537 done: 543 538 mutex_unlock(&occ->occ_lock); ··· 640 635 }, 641 636 { }, 642 637 }; 638 + MODULE_DEVICE_TABLE(of, occ_match); 643 639 644 640 static struct platform_driver occ_driver = { 645 641 .driver = {
+6 -4
drivers/fsi/fsi-sbefifo.c
··· 325 325 static int sbefifo_request_reset(struct sbefifo *sbefifo) 326 326 { 327 327 struct device *dev = &sbefifo->fsi_dev->dev; 328 - u32 status, timeout; 328 + unsigned long end_time; 329 + u32 status; 329 330 int rc; 330 331 331 332 dev_dbg(dev, "Requesting FIFO reset\n"); ··· 342 341 } 343 342 344 343 /* Wait for it to complete */ 345 - for (timeout = 0; timeout < SBEFIFO_RESET_TIMEOUT; timeout++) { 344 + end_time = jiffies + msecs_to_jiffies(SBEFIFO_RESET_TIMEOUT); 345 + while (!time_after(jiffies, end_time)) { 346 346 rc = sbefifo_regr(sbefifo, SBEFIFO_UP | SBEFIFO_STS, &status); 347 347 if (rc) { 348 348 dev_err(dev, "Failed to read UP fifo status during reset" ··· 357 355 return 0; 358 356 } 359 357 360 - msleep(1); 358 + cond_resched(); 361 359 } 362 360 dev_err(dev, "FIFO reset timed out\n"); 363 361 ··· 402 400 /* The FIFO already contains a reset request from the SBE ? */ 403 401 if (down_status & SBEFIFO_STS_RESET_REQ) { 404 402 dev_info(dev, "Cleanup: FIFO reset request set, resetting\n"); 405 - rc = sbefifo_regw(sbefifo, SBEFIFO_UP, SBEFIFO_PERFORM_RESET); 403 + rc = sbefifo_regw(sbefifo, SBEFIFO_DOWN, SBEFIFO_PERFORM_RESET); 406 404 if (rc) { 407 405 sbefifo->broken = true; 408 406 dev_err(dev, "Cleanup: Reset reg write failed, rc=%d\n", rc);
+36 -65
drivers/fsi/fsi-scom.c
···
38 38 #define SCOM_STATUS_PIB_RESP_MASK 0x00007000
39 39 #define SCOM_STATUS_PIB_RESP_SHIFT 12
40 40
41 - #define SCOM_STATUS_ANY_ERR (SCOM_STATUS_PROTECTION | \
42 - SCOM_STATUS_PARITY | \
43 - SCOM_STATUS_PIB_ABORT | \
41 + #define SCOM_STATUS_FSI2PIB_ERROR (SCOM_STATUS_PROTECTION | \
42 + SCOM_STATUS_PARITY | \
43 + SCOM_STATUS_PIB_ABORT)
44 + #define SCOM_STATUS_ANY_ERR (SCOM_STATUS_FSI2PIB_ERROR | \
44 45 SCOM_STATUS_PIB_RESP_MASK)
45 46 /* SCOM address encodings */
46 47 #define XSCOM_ADDR_IND_FLAG BIT_ULL(63)
···
61 60 #define XSCOM_ADDR_FORM1_HI_SHIFT 20
62 61
63 62 /* Retries */
64 - #define SCOM_MAX_RETRIES 100 /* Retries on busy */
65 63 #define SCOM_MAX_IND_RETRIES 10 /* Retries indirect not ready */
66 64
67 65 struct scom_device {
···
240 240 {
241 241 uint32_t dummy = -1;
242 242
243 - if (status & SCOM_STATUS_PROTECTION)
244 - return -EPERM;
245 - if (status & SCOM_STATUS_PARITY) {
243 + if (status & SCOM_STATUS_FSI2PIB_ERROR)
246 244 fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy,
247 245 sizeof(uint32_t));
246 +
247 + if (status & SCOM_STATUS_PROTECTION)
248 + return -EPERM;
249 + if (status & SCOM_STATUS_PARITY)
248 250 return -EIO;
249 - }
250 - /* Return -EBUSY on PIB abort to force a retry */
251 +
251 252 if (status & SCOM_STATUS_PIB_ABORT)
252 253 return -EBUSY;
253 254 return 0;
···
285 284 static int put_scom(struct scom_device *scom, uint64_t value,
286 285 uint64_t addr)
287 286 {
288 - uint32_t status, dummy = -1;
289 - int rc, retries;
287 + uint32_t status;
288 + int rc;
290 289
291 - for (retries = 0; retries < SCOM_MAX_RETRIES; retries++) {
292 - rc = raw_put_scom(scom, value, addr, &status);
293 - if (rc) {
294 - /* Try resetting the bridge if FSI fails */
295 - if (rc != -ENODEV && retries == 0) {
296 - fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG,
297 - &dummy, sizeof(uint32_t));
298 - rc = -EBUSY;
299 - } else
300 - return rc;
301 - } else
302 - rc = handle_fsi2pib_status(scom, status);
303 - if (rc && rc != -EBUSY)
304 - break;
305 - if (rc == 0) {
306 - rc = handle_pib_status(scom,
307 - (status & SCOM_STATUS_PIB_RESP_MASK)
308 - >> SCOM_STATUS_PIB_RESP_SHIFT);
309 - if (rc && rc != -EBUSY)
310 - break;
311 - }
312 - if (rc == 0)
313 - break;
314 - msleep(1);
315 - }
316 - return rc;
290 + rc = raw_put_scom(scom, value, addr, &status);
291 + if (rc == -ENODEV)
292 + return rc;
293 +
294 + rc = handle_fsi2pib_status(scom, status);
295 + if (rc)
296 + return rc;
297 +
298 + return handle_pib_status(scom,
299 + (status & SCOM_STATUS_PIB_RESP_MASK)
300 + >> SCOM_STATUS_PIB_RESP_SHIFT);
317 301 }
318 302
319 303 static int get_scom(struct scom_device *scom, uint64_t *value,
320 304 uint64_t addr)
321 305 {
322 - uint32_t status, dummy = -1;
323 - int rc, retries;
306 + uint32_t status;
307 + int rc;
324 308
325 - for (retries = 0; retries < SCOM_MAX_RETRIES; retries++) {
326 - rc = raw_get_scom(scom, value, addr, &status);
327 - if (rc) {
328 - /* Try resetting the bridge if FSI fails */
329 - if (rc != -ENODEV && retries == 0) {
330 - fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG,
331 - &dummy, sizeof(uint32_t));
332 - rc = -EBUSY;
333 - } else
334 - return rc;
335 - } else
336 - rc = handle_fsi2pib_status(scom, status);
337 - if (rc && rc != -EBUSY)
338 - break;
339 - if (rc == 0) {
340 - rc = handle_pib_status(scom,
341 - (status & SCOM_STATUS_PIB_RESP_MASK)
342 - >> SCOM_STATUS_PIB_RESP_SHIFT);
343 - if (rc && rc != -EBUSY)
344 - break;
345 - }
346 - if (rc == 0)
347 - break;
348 - msleep(1);
349 - }
350 - return rc;
309 + rc = raw_get_scom(scom, value, addr, &status);
310 + if (rc == -ENODEV)
311 + return rc;
312 +
313 + rc = handle_fsi2pib_status(scom, status);
314 + if (rc)
315 + return rc;
316 +
317 + return handle_pib_status(scom,
318 + (status & SCOM_STATUS_PIB_RESP_MASK)
319 + >> SCOM_STATUS_PIB_RESP_SHIFT);
351 320 }
352 321
353 322 static ssize_t scom_read(struct file *filep, char __user *buf, size_t len,
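The scom rewrite above drops the driver-side retry loop and leans on `handle_fsi2pib_status()` to reset the bridge on any FSI2PIB error and map status bits to errno values, with PIB abort reported as `-EBUSY` so the caller can decide to retry. A pure-function sketch of that precedence (the bit values below are illustrative placeholders, not the driver's real `SCOM_STATUS_*` constants, and the real function also writes `SCOM_FSI2PIB_RESET_REG` first):

```c
#include <errno.h>
#include <stdint.h>

/* Placeholder bits - the driver's actual values live in fsi-scom.c. */
#define ST_PROTECTION 0x1u
#define ST_PARITY     0x2u
#define ST_PIB_ABORT  0x4u

/* Decode in priority order: protection faults win over parity errors,
 * and a PIB abort maps to -EBUSY so the caller may retry. */
static int decode_fsi2pib_status(uint32_t status)
{
	if (status & ST_PROTECTION)
		return -EPERM;
	if (status & ST_PARITY)
		return -EIO;
	if (status & ST_PIB_ABORT)
		return -EBUSY;
	return 0;
}
```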
+5 -2
drivers/hwmon/occ/common.c
··· 1151 1151 { 1152 1152 int rc; 1153 1153 1154 + /* start with 1 to avoid false match with zero-initialized SRAM buffer */ 1155 + occ->seq_no = 1; 1154 1156 mutex_init(&occ->lock); 1155 1157 occ->groups[0] = &occ->group; 1156 1158 ··· 1162 1160 dev_info(occ->bus_dev, "host is not ready\n"); 1163 1161 return rc; 1164 1162 } else if (rc < 0) { 1165 - dev_err(occ->bus_dev, "failed to get OCC poll response: %d\n", 1166 - rc); 1163 + dev_err(occ->bus_dev, 1164 + "failed to get OCC poll response=%02x: %d\n", 1165 + occ->resp.return_status, rc); 1167 1166 return rc; 1168 1167 } 1169 1168
+5 -6
drivers/hwtracing/coresight/coresight-core.c
··· 608 608 coresight_find_enabled_sink(struct coresight_device *csdev) 609 609 { 610 610 int i; 611 - struct coresight_device *sink; 611 + struct coresight_device *sink = NULL; 612 612 613 613 if ((csdev->type == CORESIGHT_DEV_TYPE_SINK || 614 614 csdev->type == CORESIGHT_DEV_TYPE_LINKSINK) && ··· 886 886 } 887 887 888 888 kfree(path); 889 - path = NULL; 890 889 } 891 890 892 891 /* return true if the device is a suitable type for a default sink */ ··· 1391 1392 } 1392 1393 } 1393 1394 1394 - return 0; 1395 + return ret; 1395 1396 } 1396 1397 1397 1398 static int coresight_remove_match(struct device *dev, void *data) ··· 1729 1730 if (idx < 0) { 1730 1731 /* Make space for the new entry */ 1731 1732 idx = dict->nr_idx; 1732 - list = krealloc(dict->fwnode_list, 1733 - (idx + 1) * sizeof(*dict->fwnode_list), 1734 - GFP_KERNEL); 1733 + list = krealloc_array(dict->fwnode_list, 1734 + idx + 1, sizeof(*dict->fwnode_list), 1735 + GFP_KERNEL); 1735 1736 if (ZERO_OR_NULL_PTR(list)) { 1736 1737 idx = -ENOMEM; 1737 1738 goto done;
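Switching the `fwnode_list` growth above from `krealloc()` to `krealloc_array()` guards the element-count-times-element-size multiplication against overflow before any allocation happens. A userspace sketch of the same guard (a reallocarray-style wrapper, not the kernel API):

```c
#include <stdint.h>
#include <stdlib.h>

/* Why krealloc_array() beats an open-coded krealloc(p, n * size):
 * if n * size overflows size_t, the open-coded form would quietly
 * allocate a too-small buffer; the checked form fails instead. */
static void *realloc_array_checked(void *p, size_t n, size_t size)
{
	if (size != 0 && n > SIZE_MAX / size)
		return NULL;		/* n * size would overflow */
	return realloc(p, n * size);
}
```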
-5
drivers/hwtracing/coresight/coresight-etm4x-core.c
··· 568 568 struct etmv4_config *config = &drvdata->config; 569 569 struct perf_event_attr *attr = &event->attr; 570 570 571 - if (!attr) { 572 - ret = -EINVAL; 573 - goto out; 574 - } 575 - 576 571 /* Clear configuration from previous run */ 577 572 memset(config, 0, sizeof(struct etmv4_config)); 578 573
+1 -1
drivers/hwtracing/coresight/coresight-tmc-etf.c
··· 530 530 buf_ptr = buf->data_pages[cur] + offset; 531 531 *buf_ptr = readl_relaxed(drvdata->base + TMC_RRD); 532 532 533 - if (lost && *barrier) { 533 + if (lost && i < CORESIGHT_BARRIER_PKT_SIZE) { 534 534 *buf_ptr = *barrier; 535 535 barrier++; 536 536 }
+24 -5
drivers/hwtracing/intel_th/core.c
··· 100 100 struct intel_th_driver *thdrv = to_intel_th_driver(dev->driver); 101 101 struct intel_th_device *thdev = to_intel_th_device(dev); 102 102 struct intel_th_device *hub = to_intel_th_hub(thdev); 103 - int err; 104 103 105 104 if (thdev->type == INTEL_TH_SWITCH) { 106 105 struct intel_th *th = to_intel_th(hub); 107 106 int i, lowest; 108 107 109 - /* disconnect outputs */ 110 - err = device_for_each_child(dev, thdev, intel_th_child_remove); 111 - if (err) 112 - return err; 108 + /* 109 + * disconnect outputs 110 + * 111 + * intel_th_child_remove returns 0 unconditionally, so there is 112 + * no need to check the return value of device_for_each_child. 113 + */ 114 + device_for_each_child(dev, thdev, intel_th_child_remove); 113 115 114 116 /* 115 117 * Remove outputs, that is, hub's children: they are created ··· 217 215 218 216 static DEVICE_ATTR_RO(port); 219 217 218 + static void intel_th_trace_prepare(struct intel_th_device *thdev) 219 + { 220 + struct intel_th_device *hub = to_intel_th_hub(thdev); 221 + struct intel_th_driver *hubdrv = to_intel_th_driver(hub->dev.driver); 222 + 223 + if (hub->type != INTEL_TH_SWITCH) 224 + return; 225 + 226 + if (thdev->type != INTEL_TH_OUTPUT) 227 + return; 228 + 229 + pm_runtime_get_sync(&thdev->dev); 230 + hubdrv->prepare(hub, &thdev->output); 231 + pm_runtime_put(&thdev->dev); 232 + } 233 + 220 234 static int intel_th_output_activate(struct intel_th_device *thdev) 221 235 { 222 236 struct intel_th_driver *thdrv = ··· 253 235 if (ret) 254 236 goto fail_put; 255 237 238 + intel_th_trace_prepare(thdev); 256 239 if (thdrv->activate) 257 240 ret = thdrv->activate(thdev); 258 241 else
+16
drivers/hwtracing/intel_th/gth.c
··· 564 564 iowrite32(reg, gth->base + REG_TSCU_TSUCTRL); 565 565 } 566 566 567 + static void intel_th_gth_prepare(struct intel_th_device *thdev, 568 + struct intel_th_output *output) 569 + { 570 + struct gth_device *gth = dev_get_drvdata(&thdev->dev); 571 + int count; 572 + 573 + /* 574 + * Wait until the output port is in reset before we start 575 + * programming it. 576 + */ 577 + for (count = GTH_PLE_WAITLOOP_DEPTH; 578 + count && !(gth_output_get(gth, output->port) & BIT(5)); count--) 579 + cpu_relax(); 580 + } 581 + 567 582 /** 568 583 * intel_th_gth_enable() - enable tracing to an output device 569 584 * @thdev: GTH device ··· 830 815 .assign = intel_th_gth_assign, 831 816 .unassign = intel_th_gth_unassign, 832 817 .set_output = intel_th_gth_set_output, 818 + .prepare = intel_th_gth_prepare, 833 819 .enable = intel_th_gth_enable, 834 820 .trig_switch = intel_th_gth_switch, 835 821 .disable = intel_th_gth_disable,
+3
drivers/hwtracing/intel_th/intel_th.h
··· 143 143 * @remove: remove method 144 144 * @assign: match a given output type device against available outputs 145 145 * @unassign: deassociate an output type device from an output port 146 + * @prepare: prepare output port for tracing 146 147 * @enable: enable tracing for a given output device 147 148 * @disable: disable tracing for a given output device 148 149 * @irq: interrupt callback ··· 165 164 struct intel_th_device *othdev); 166 165 void (*unassign)(struct intel_th_device *thdev, 167 166 struct intel_th_device *othdev); 167 + void (*prepare)(struct intel_th_device *thdev, 168 + struct intel_th_output *output); 168 169 void (*enable)(struct intel_th_device *thdev, 169 170 struct intel_th_output *output); 170 171 void (*trig_switch)(struct intel_th_device *thdev,
+32 -16
drivers/hwtracing/intel_th/msu.c
···
1024 1024 }
1025 1025
1026 1026 #ifdef CONFIG_X86
1027 - static void msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs)
1027 + static void msc_buffer_set_uc(struct msc *msc)
1028 1028 {
1029 1029 struct scatterlist *sg_ptr;
1030 + struct msc_window *win;
1030 1031 int i;
1031 1032
1032 - for_each_sg(win->sgt->sgl, sg_ptr, nr_segs, i) {
1033 - /* Set the page as uncached */
1034 - set_memory_uc((unsigned long)sg_virt(sg_ptr),
1035 - PFN_DOWN(sg_ptr->length));
1033 + if (msc->mode == MSC_MODE_SINGLE) {
1034 + set_memory_uc((unsigned long)msc->base, msc->nr_pages);
1035 + return;
1036 + }
1037 +
1038 + list_for_each_entry(win, &msc->win_list, entry) {
1039 + for_each_sg(win->sgt->sgl, sg_ptr, win->nr_segs, i) {
1040 + /* Set the page as uncached */
1041 + set_memory_uc((unsigned long)sg_virt(sg_ptr),
1042 + PFN_DOWN(sg_ptr->length));
1043 + }
1036 1044 }
1037 1045 }
1038 1046
1039 - static void msc_buffer_set_wb(struct msc_window *win)
1047 + static void msc_buffer_set_wb(struct msc *msc)
1040 1048 {
1041 1049 struct scatterlist *sg_ptr;
1050 + struct msc_window *win;
1042 1051 int i;
1043 1052
1044 - for_each_sg(win->sgt->sgl, sg_ptr, win->nr_segs, i) {
1045 - /* Reset the page to write-back */
1046 - set_memory_wb((unsigned long)sg_virt(sg_ptr),
1047 - PFN_DOWN(sg_ptr->length));
1053 + if (msc->mode == MSC_MODE_SINGLE) {
1054 + set_memory_wb((unsigned long)msc->base, msc->nr_pages);
1055 + return;
1056 + }
1057 +
1058 + list_for_each_entry(win, &msc->win_list, entry) {
1059 + for_each_sg(win->sgt->sgl, sg_ptr, win->nr_segs, i) {
1060 + /* Reset the page to write-back */
1061 + set_memory_wb((unsigned long)sg_virt(sg_ptr),
1062 + PFN_DOWN(sg_ptr->length));
1063 + }
1048 1064 }
1049 1065 }
1050 1066 #else /* !X86 */
1051 1067 static inline void
1052 - msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs) {}
1053 - static inline void msc_buffer_set_wb(struct msc_window *win) {}
1068 + msc_buffer_set_uc(struct msc *msc) {}
1069 + static inline void msc_buffer_set_wb(struct msc *msc) {}
1054 1070 #endif /* CONFIG_X86 */
1055 1071
1056 1072 /**
···
1112 1096
1113 1097 if (ret <= 0)
1114 1098 goto err_nomem;
1115 -
1116 - msc_buffer_set_uc(win, ret);
1117 1099
1118 1100 win->nr_segs = ret;
1119 1101 win->nr_blocks = nr_blocks;
···
1165 1151 msc->base = NULL;
1166 1152 msc->base_addr = 0;
1167 1153 }
1168 -
1169 - msc_buffer_set_wb(win);
1170 1154
1171 1155 if (msc->mbuf && msc->mbuf->free_window)
1172 1156 msc->mbuf->free_window(msc->mbuf_priv, win->sgt);
···
1272 1260 */
1273 1261 static void msc_buffer_free(struct msc *msc)
1274 1262 {
1263 + msc_buffer_set_wb(msc);
1264 +
1275 1265 if (msc->mode == MSC_MODE_SINGLE)
1276 1266 msc_buffer_contig_free(msc);
1277 1267 else if (msc->mode == MSC_MODE_MULTI)
···
1317 1303 }
1318 1304
1319 1305 if (!ret) {
1306 + msc_buffer_set_uc(msc);
1307 +
1320 1308 /* allocation should be visible before the counter goes to 0 */
1321 1309 smp_mb__before_atomic();
1322 1310
+9
drivers/interconnect/qcom/Kconfig
··· 74 74 This is a driver for the Qualcomm Network-on-Chip on sc7180-based 75 75 platforms. 76 76 77 + config INTERCONNECT_QCOM_SC7280 78 + tristate "Qualcomm SC7280 interconnect driver" 79 + depends on INTERCONNECT_QCOM_RPMH_POSSIBLE 80 + select INTERCONNECT_QCOM_RPMH 81 + select INTERCONNECT_QCOM_BCM_VOTER 82 + help 83 + This is a driver for the Qualcomm Network-on-Chip on sc7280-based 84 + platforms. 85 + 77 86 config INTERCONNECT_QCOM_SDM660 78 87 tristate "Qualcomm SDM660 interconnect driver" 79 88 depends on INTERCONNECT_QCOM
+2
drivers/interconnect/qcom/Makefile
··· 8 8 qnoc-qcs404-objs := qcs404.o 9 9 icc-rpmh-obj := icc-rpmh.o 10 10 qnoc-sc7180-objs := sc7180.o 11 + qnoc-sc7280-objs := sc7280.o 11 12 qnoc-sdm660-objs := sdm660.o 12 13 qnoc-sdm845-objs := sdm845.o 13 14 qnoc-sdx55-objs := sdx55.o ··· 25 24 obj-$(CONFIG_INTERCONNECT_QCOM_QCS404) += qnoc-qcs404.o 26 25 obj-$(CONFIG_INTERCONNECT_QCOM_RPMH) += icc-rpmh.o 27 26 obj-$(CONFIG_INTERCONNECT_QCOM_SC7180) += qnoc-sc7180.o 27 + obj-$(CONFIG_INTERCONNECT_QCOM_SC7280) += qnoc-sc7280.o 28 28 obj-$(CONFIG_INTERCONNECT_QCOM_SDM660) += qnoc-sdm660.o 29 29 obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o 30 30 obj-$(CONFIG_INTERCONNECT_QCOM_SDX55) += qnoc-sdx55.o
+1938
drivers/interconnect/qcom/sc7280.c
···
1 + // SPDX-License-Identifier: GPL-2.0
2 + /*
3 + * Copyright (c) 2021, The Linux Foundation. All rights reserved.
4 + *
5 + */
6 +
7 + #include <linux/device.h>
8 + #include <linux/interconnect.h>
9 + #include <linux/interconnect-provider.h>
10 + #include <linux/module.h>
11 + #include <linux/of_platform.h>
12 + #include <dt-bindings/interconnect/qcom,sc7280.h>
13 +
14 + #include "bcm-voter.h"
15 + #include "icc-rpmh.h"
16 + #include "sc7280.h"
17 +
18 + static struct qcom_icc_node qhm_qspi = {
19 + .name = "qhm_qspi",
20 + .id = SC7280_MASTER_QSPI_0,
21 + .channels = 1,
22 + .buswidth = 4,
23 + .num_links = 1,
24 + .links = { SC7280_SLAVE_A1NOC_SNOC },
25 + };
26 +
27 + static struct qcom_icc_node qhm_qup0 = {
28 + .name = "qhm_qup0",
29 + .id = SC7280_MASTER_QUP_0,
30 + .channels = 1,
31 + .buswidth = 4,
32 + .num_links = 1,
33 + .links = { SC7280_SLAVE_A1NOC_SNOC },
34 + };
35 +
36 + static struct qcom_icc_node qhm_qup1 = {
37 + .name = "qhm_qup1",
38 + .id = SC7280_MASTER_QUP_1,
39 + .channels = 1,
40 + .buswidth = 4,
41 + .num_links = 1,
42 + .links = { SC7280_SLAVE_A1NOC_SNOC },
43 + };
44 +
45 + static struct qcom_icc_node qnm_a1noc_cfg = {
46 + .name = "qnm_a1noc_cfg",
47 + .id = SC7280_MASTER_A1NOC_CFG,
48 + .channels = 1,
49 + .buswidth = 4,
50 + .num_links = 1,
51 + .links = { SC7280_SLAVE_SERVICE_A1NOC },
52 + };
53 +
54 + static struct qcom_icc_node xm_sdc1 = {
55 + .name = "xm_sdc1",
56 + .id = SC7280_MASTER_SDCC_1,
57 + .channels = 1,
58 + .buswidth = 8,
59 + .num_links = 1,
60 + .links = { SC7280_SLAVE_A1NOC_SNOC },
61 + };
62 +
63 + static struct qcom_icc_node xm_sdc2 = {
64 + .name = "xm_sdc2",
65 + .id = SC7280_MASTER_SDCC_2,
66 + .channels = 1,
67 + .buswidth = 8,
68 + .num_links = 1,
69 + .links = { SC7280_SLAVE_A1NOC_SNOC },
70 + };
71 +
72 + static struct qcom_icc_node xm_sdc4 = {
73 + .name = "xm_sdc4",
74 + .id = SC7280_MASTER_SDCC_4,
75 + .channels = 1,
76 + .buswidth = 8,
77 + .num_links = 1,
78 + .links = { SC7280_SLAVE_A1NOC_SNOC },
79 + };
80 +
81 + static struct qcom_icc_node xm_ufs_mem = {
82 + .name = "xm_ufs_mem",
83 + .id = SC7280_MASTER_UFS_MEM,
84 + .channels = 1,
85 + .buswidth = 8,
86 + .num_links = 1,
87 + .links = { SC7280_SLAVE_A1NOC_SNOC },
88 + };
89 +
90 + static struct qcom_icc_node xm_usb2 = {
91 + .name = "xm_usb2",
92 + .id = SC7280_MASTER_USB2,
93 + .channels = 1,
94 + .buswidth = 8,
95 + .num_links = 1,
96 + .links = { SC7280_SLAVE_A1NOC_SNOC },
97 + };
98 +
99 + static struct qcom_icc_node xm_usb3_0 = {
100 + .name = "xm_usb3_0",
101 + .id = SC7280_MASTER_USB3_0,
102 + .channels = 1,
103 + .buswidth = 8,
104 + .num_links = 1,
105 + .links = { SC7280_SLAVE_A1NOC_SNOC },
106 + };
107 +
108 + static struct qcom_icc_node qhm_qdss_bam = {
109 + .name = "qhm_qdss_bam",
110 + .id = SC7280_MASTER_QDSS_BAM,
111 + .channels = 1,
112 + .buswidth = 4,
113 + .num_links = 1,
114 + .links = { SC7280_SLAVE_A2NOC_SNOC },
115 + };
116 +
117 + static struct qcom_icc_node qnm_a2noc_cfg = {
118 + .name = "qnm_a2noc_cfg",
119 + .id = SC7280_MASTER_A2NOC_CFG,
120 + .channels = 1,
121 + .buswidth = 4,
122 + .num_links = 1,
123 + .links = { SC7280_SLAVE_SERVICE_A2NOC },
124 + };
125 +
126 + static struct qcom_icc_node qnm_cnoc_datapath = {
127 + .name = "qnm_cnoc_datapath",
128 + .id = SC7280_MASTER_CNOC_A2NOC,
129 + .channels = 1,
130 + .buswidth = 8,
131 + .num_links = 1,
132 + .links = { SC7280_SLAVE_A2NOC_SNOC },
133 + };
134 +
135 + static struct qcom_icc_node qxm_crypto = {
136 + .name = "qxm_crypto",
137 + .id = SC7280_MASTER_CRYPTO,
138 + .channels = 1,
139 + .buswidth = 8,
140 + .num_links = 1,
141 + .links = { SC7280_SLAVE_A2NOC_SNOC },
142 + };
143 +
144 + static struct qcom_icc_node qxm_ipa = {
145 + .name = "qxm_ipa",
146 + .id = SC7280_MASTER_IPA,
147 + .channels = 1,
148 + .buswidth = 8,
149 + .num_links = 1,
150 + .links = { SC7280_SLAVE_A2NOC_SNOC },
151 + };
152 +
153 + static struct qcom_icc_node xm_pcie3_0 = {
154 + .name = "xm_pcie3_0",
155 + .id = SC7280_MASTER_PCIE_0,
156 + .channels = 1,
157 + .buswidth = 8,
158 + .num_links = 1,
159 + .links = { SC7280_SLAVE_ANOC_PCIE_GEM_NOC },
160 + };
161 +
162 + static struct qcom_icc_node xm_pcie3_1 = {
163 + .name = "xm_pcie3_1",
164 + .id = SC7280_MASTER_PCIE_1,
165 + .channels = 1,
166 + .buswidth = 8,
167 + .links = { SC7280_SLAVE_ANOC_PCIE_GEM_NOC },
168 + };
169 +
170 + static struct qcom_icc_node xm_qdss_etr = {
171 + .name = "xm_qdss_etr",
172 + .id = SC7280_MASTER_QDSS_ETR,
173 + .channels = 1,
174 + .buswidth = 8,
175 + .num_links = 1,
176 + .links = { SC7280_SLAVE_A2NOC_SNOC },
177 + };
178 +
179 + static struct qcom_icc_node qup0_core_master = {
180 + .name = "qup0_core_master",
181 + .id = SC7280_MASTER_QUP_CORE_0,
182 + .channels = 1,
183 + .buswidth = 4,
184 + .num_links = 1,
185 + .links = { SC7280_SLAVE_QUP_CORE_0 },
186 + };
187 +
188 + static struct qcom_icc_node qup1_core_master = {
189 + .name = "qup1_core_master",
190 + .id = SC7280_MASTER_QUP_CORE_1,
191 + .channels = 1,
192 + .buswidth = 4,
193 + .num_links = 1,
194 + .links = { SC7280_SLAVE_QUP_CORE_1 },
195 + };
196 +
197 + static struct qcom_icc_node qnm_cnoc3_cnoc2 = {
198 + .name = "qnm_cnoc3_cnoc2",
199 + .id = SC7280_MASTER_CNOC3_CNOC2,
200 + .channels = 1,
201 + .buswidth = 8,
202 + .num_links = 44,
203 + .links = { SC7280_SLAVE_AHB2PHY_SOUTH, SC7280_SLAVE_AHB2PHY_NORTH,
204 + SC7280_SLAVE_CAMERA_CFG, SC7280_SLAVE_CLK_CTL,
205 + SC7280_SLAVE_CDSP_CFG, SC7280_SLAVE_RBCPR_CX_CFG,
206 + SC7280_SLAVE_RBCPR_MX_CFG, SC7280_SLAVE_CRYPTO_0_CFG,
207 + SC7280_SLAVE_CX_RDPM, SC7280_SLAVE_DCC_CFG,
208 + SC7280_SLAVE_DISPLAY_CFG, SC7280_SLAVE_GFX3D_CFG,
209 + SC7280_SLAVE_HWKM, SC7280_SLAVE_IMEM_CFG,
210 + SC7280_SLAVE_IPA_CFG, SC7280_SLAVE_IPC_ROUTER_CFG,
211 + SC7280_SLAVE_LPASS, SC7280_SLAVE_CNOC_MSS,
212 + SC7280_SLAVE_MX_RDPM, SC7280_SLAVE_PCIE_0_CFG,
213 + SC7280_SLAVE_PCIE_1_CFG, SC7280_SLAVE_PDM,
214 + SC7280_SLAVE_PIMEM_CFG, SC7280_SLAVE_PKA_WRAPPER_CFG,
215 + SC7280_SLAVE_PMU_WRAPPER_CFG, SC7280_SLAVE_QDSS_CFG,
216 + SC7280_SLAVE_QSPI_0, SC7280_SLAVE_QUP_0,
217 + SC7280_SLAVE_QUP_1, SC7280_SLAVE_SDCC_1,
218 + SC7280_SLAVE_SDCC_2, SC7280_SLAVE_SDCC_4,
219 + SC7280_SLAVE_SECURITY, SC7280_SLAVE_TCSR,
220 + SC7280_SLAVE_TLMM, SC7280_SLAVE_UFS_MEM_CFG,
221 + SC7280_SLAVE_USB2, SC7280_SLAVE_USB3_0,
222 + SC7280_SLAVE_VENUS_CFG, SC7280_SLAVE_VSENSE_CTRL_CFG,
223 + SC7280_SLAVE_A1NOC_CFG, SC7280_SLAVE_A2NOC_CFG,
224 + SC7280_SLAVE_CNOC_MNOC_CFG, SC7280_SLAVE_SNOC_CFG },
225 + };
226 +
227 + static struct qcom_icc_node xm_qdss_dap = {
228 + .name = "xm_qdss_dap",
229 + .id = SC7280_MASTER_QDSS_DAP,
230 + .channels = 1,
231 + .buswidth = 8,
232 + .num_links = 45,
233 + .links = { SC7280_SLAVE_AHB2PHY_SOUTH, SC7280_SLAVE_AHB2PHY_NORTH,
234 + SC7280_SLAVE_CAMERA_CFG, SC7280_SLAVE_CLK_CTL,
235 + SC7280_SLAVE_CDSP_CFG, SC7280_SLAVE_RBCPR_CX_CFG,
236 + SC7280_SLAVE_RBCPR_MX_CFG, SC7280_SLAVE_CRYPTO_0_CFG,
237 + SC7280_SLAVE_CX_RDPM, SC7280_SLAVE_DCC_CFG,
238 + SC7280_SLAVE_DISPLAY_CFG, SC7280_SLAVE_GFX3D_CFG,
239 + SC7280_SLAVE_HWKM, SC7280_SLAVE_IMEM_CFG,
240 + SC7280_SLAVE_IPA_CFG, SC7280_SLAVE_IPC_ROUTER_CFG,
241 + SC7280_SLAVE_LPASS, SC7280_SLAVE_CNOC_MSS,
242 + SC7280_SLAVE_MX_RDPM, SC7280_SLAVE_PCIE_0_CFG,
243 + SC7280_SLAVE_PCIE_1_CFG, SC7280_SLAVE_PDM,
244 + SC7280_SLAVE_PIMEM_CFG, SC7280_SLAVE_PKA_WRAPPER_CFG,
245 + SC7280_SLAVE_PMU_WRAPPER_CFG, SC7280_SLAVE_QDSS_CFG,
246 + SC7280_SLAVE_QSPI_0, SC7280_SLAVE_QUP_0,
247 + SC7280_SLAVE_QUP_1, SC7280_SLAVE_SDCC_1,
248 + SC7280_SLAVE_SDCC_2, SC7280_SLAVE_SDCC_4,
249 + SC7280_SLAVE_SECURITY, SC7280_SLAVE_TCSR,
250 + SC7280_SLAVE_TLMM, SC7280_SLAVE_UFS_MEM_CFG,
251 + SC7280_SLAVE_USB2, SC7280_SLAVE_USB3_0,
252 + SC7280_SLAVE_VENUS_CFG, SC7280_SLAVE_VSENSE_CTRL_CFG,
253 + SC7280_SLAVE_A1NOC_CFG, SC7280_SLAVE_A2NOC_CFG,
254 + SC7280_SLAVE_CNOC2_CNOC3, SC7280_SLAVE_CNOC_MNOC_CFG,
255 + SC7280_SLAVE_SNOC_CFG },
256 + };
257 +
258 + static struct qcom_icc_node qnm_cnoc2_cnoc3 = {
259 + .name = "qnm_cnoc2_cnoc3",
260 + .id = SC7280_MASTER_CNOC2_CNOC3,
261 + .channels = 1,
262 + .buswidth = 8,
263 + .num_links = 9,
264 + .links = { SC7280_SLAVE_AOSS, SC7280_SLAVE_APPSS,
265 + SC7280_SLAVE_CNOC_A2NOC, SC7280_SLAVE_DDRSS_CFG,
266 + SC7280_SLAVE_BOOT_IMEM, SC7280_SLAVE_IMEM,
267 + SC7280_SLAVE_PIMEM, SC7280_SLAVE_QDSS_STM,
268 + SC7280_SLAVE_TCU },
269 + };
270 +
271 + static struct qcom_icc_node qnm_gemnoc_cnoc = {
272 + .name = "qnm_gemnoc_cnoc",
273 + .id = SC7280_MASTER_GEM_NOC_CNOC,
274 + .channels = 1,
275 + .buswidth = 16,
276 + .num_links = 9,
277 + .links = { SC7280_SLAVE_AOSS, SC7280_SLAVE_APPSS,
278 + SC7280_SLAVE_CNOC3_CNOC2, SC7280_SLAVE_DDRSS_CFG,
279 + SC7280_SLAVE_BOOT_IMEM, SC7280_SLAVE_IMEM,
280 + SC7280_SLAVE_PIMEM, SC7280_SLAVE_QDSS_STM,
281 + SC7280_SLAVE_TCU },
282 + };
283 +
284 + static struct qcom_icc_node qnm_gemnoc_pcie = {
285 + .name = "qnm_gemnoc_pcie",
286 + .id = SC7280_MASTER_GEM_NOC_PCIE_SNOC,
287 + .channels = 1,
288 + .buswidth = 8,
289 + .num_links = 2,
290 + .links = { SC7280_SLAVE_PCIE_0, SC7280_SLAVE_PCIE_1 },
291 + };
292 +
293 + static struct qcom_icc_node qnm_cnoc_dc_noc = {
294 + .name = "qnm_cnoc_dc_noc",
295 + .id = SC7280_MASTER_CNOC_DC_NOC,
296 + .channels = 1,
297 + .buswidth = 4,
298 + .num_links = 2,
299 + .links = { SC7280_SLAVE_LLCC_CFG, SC7280_SLAVE_GEM_NOC_CFG },
300 + };
301 +
302 + static struct qcom_icc_node alm_gpu_tcu = {
303 + .name = "alm_gpu_tcu",
304 + .id = SC7280_MASTER_GPU_TCU,
305 + .channels = 1,
306 + .buswidth = 8,
307 + .num_links = 2,
308 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC },
309 + };
310 +
311 + static struct qcom_icc_node alm_sys_tcu = {
312 + .name = "alm_sys_tcu",
313 + .id = SC7280_MASTER_SYS_TCU,
314 + .channels = 1,
315 + .buswidth = 8,
316 + .num_links = 2,
317 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC },
318 + };
319 +
320 + static struct qcom_icc_node chm_apps = {
321 + .name = "chm_apps",
322 + .id = SC7280_MASTER_APPSS_PROC,
323 + .channels = 1,
324 + .buswidth = 32,
325 + .num_links = 3,
326 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC,
327 + SC7280_SLAVE_MEM_NOC_PCIE_SNOC },
328 + };
329 +
330 + static struct qcom_icc_node qnm_cmpnoc = {
331 + .name = "qnm_cmpnoc",
332 + .id = SC7280_MASTER_COMPUTE_NOC,
333 + .channels = 2,
334 + .buswidth = 32,
335 + .num_links = 2,
336 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC },
337 + };
338 +
339 + static struct qcom_icc_node qnm_gemnoc_cfg = {
340 + .name = "qnm_gemnoc_cfg",
341 + .id = SC7280_MASTER_GEM_NOC_CFG,
342 + .channels = 1,
343 + .buswidth = 4,
344 + .num_links = 5,
345 + .links = { SC7280_SLAVE_MSS_PROC_MS_MPU_CFG, SC7280_SLAVE_MCDMA_MS_MPU_CFG,
346 + SC7280_SLAVE_SERVICE_GEM_NOC_1, SC7280_SLAVE_SERVICE_GEM_NOC_2,
347 + SC7280_SLAVE_SERVICE_GEM_NOC },
348 + };
349 +
350 + static struct qcom_icc_node qnm_gpu = {
351 + .name = "qnm_gpu",
352 + .id = SC7280_MASTER_GFX3D,
353 + .channels = 2,
354 + .buswidth = 32,
355 + .num_links = 2,
356 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC },
357 + };
358 +
359 + static struct qcom_icc_node qnm_mnoc_hf = {
360 + .name = "qnm_mnoc_hf",
361 + .id = SC7280_MASTER_MNOC_HF_MEM_NOC,
362 + .channels = 2,
363 + .buswidth = 32,
364 + .num_links = 1,
365 + .links = { SC7280_SLAVE_LLCC },
366 + };
367 +
368 + static struct qcom_icc_node qnm_mnoc_sf = {
369 + .name = "qnm_mnoc_sf",
370 + .id = SC7280_MASTER_MNOC_SF_MEM_NOC,
371 + .channels = 1,
372 + .buswidth = 32,
373 + .num_links = 2,
374 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC },
375 + };
376 +
377 + static struct qcom_icc_node qnm_pcie = {
378 + .name = "qnm_pcie",
379 + .id = SC7280_MASTER_ANOC_PCIE_GEM_NOC,
380 + .channels = 1,
381 + .buswidth = 16,
382 + .num_links = 2,
383 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC },
384 + };
385 +
386 + static struct qcom_icc_node qnm_snoc_gc = {
387 + .name = "qnm_snoc_gc",
388 + .id = SC7280_MASTER_SNOC_GC_MEM_NOC,
389 + .channels = 1,
390 + .buswidth = 8,
391 +
.num_links = 1, 392 + .links = { SC7280_SLAVE_LLCC }, 393 + }; 394 + 395 + static struct qcom_icc_node qnm_snoc_sf = { 396 + .name = "qnm_snoc_sf", 397 + .id = SC7280_MASTER_SNOC_SF_MEM_NOC, 398 + .channels = 1, 399 + .buswidth = 16, 400 + .num_links = 3, 401 + .links = { SC7280_SLAVE_GEM_NOC_CNOC, SC7280_SLAVE_LLCC, 402 + SC7280_SLAVE_MEM_NOC_PCIE_SNOC }, 403 + }; 404 + 405 + static struct qcom_icc_node qhm_config_noc = { 406 + .name = "qhm_config_noc", 407 + .id = SC7280_MASTER_CNOC_LPASS_AG_NOC, 408 + .channels = 1, 409 + .buswidth = 4, 410 + .num_links = 6, 411 + .links = { SC7280_SLAVE_LPASS_CORE_CFG, SC7280_SLAVE_LPASS_LPI_CFG, 412 + SC7280_SLAVE_LPASS_MPU_CFG, SC7280_SLAVE_LPASS_TOP_CFG, 413 + SC7280_SLAVE_SERVICES_LPASS_AML_NOC, SC7280_SLAVE_SERVICE_LPASS_AG_NOC }, 414 + }; 415 + 416 + static struct qcom_icc_node llcc_mc = { 417 + .name = "llcc_mc", 418 + .id = SC7280_MASTER_LLCC, 419 + .channels = 2, 420 + .buswidth = 4, 421 + .num_links = 1, 422 + .links = { SC7280_SLAVE_EBI1 }, 423 + }; 424 + 425 + static struct qcom_icc_node qnm_mnoc_cfg = { 426 + .name = "qnm_mnoc_cfg", 427 + .id = SC7280_MASTER_CNOC_MNOC_CFG, 428 + .channels = 1, 429 + .buswidth = 4, 430 + .num_links = 1, 431 + .links = { SC7280_SLAVE_SERVICE_MNOC }, 432 + }; 433 + 434 + static struct qcom_icc_node qnm_video0 = { 435 + .name = "qnm_video0", 436 + .id = SC7280_MASTER_VIDEO_P0, 437 + .channels = 1, 438 + .buswidth = 32, 439 + .num_links = 1, 440 + .links = { SC7280_SLAVE_MNOC_SF_MEM_NOC }, 441 + }; 442 + 443 + static struct qcom_icc_node qnm_video_cpu = { 444 + .name = "qnm_video_cpu", 445 + .id = SC7280_MASTER_VIDEO_PROC, 446 + .channels = 1, 447 + .buswidth = 8, 448 + .num_links = 1, 449 + .links = { SC7280_SLAVE_MNOC_SF_MEM_NOC }, 450 + }; 451 + 452 + static struct qcom_icc_node qxm_camnoc_hf = { 453 + .name = "qxm_camnoc_hf", 454 + .id = SC7280_MASTER_CAMNOC_HF, 455 + .channels = 2, 456 + .buswidth = 32, 457 + .num_links = 1, 458 + .links = { SC7280_SLAVE_MNOC_HF_MEM_NOC }, 459 + }; 
460 + 461 + static struct qcom_icc_node qxm_camnoc_icp = { 462 + .name = "qxm_camnoc_icp", 463 + .id = SC7280_MASTER_CAMNOC_ICP, 464 + .channels = 1, 465 + .buswidth = 8, 466 + .num_links = 1, 467 + .links = { SC7280_SLAVE_MNOC_SF_MEM_NOC }, 468 + }; 469 + 470 + static struct qcom_icc_node qxm_camnoc_sf = { 471 + .name = "qxm_camnoc_sf", 472 + .id = SC7280_MASTER_CAMNOC_SF, 473 + .channels = 1, 474 + .buswidth = 32, 475 + .num_links = 1, 476 + .links = { SC7280_SLAVE_MNOC_SF_MEM_NOC }, 477 + }; 478 + 479 + static struct qcom_icc_node qxm_mdp0 = { 480 + .name = "qxm_mdp0", 481 + .id = SC7280_MASTER_MDP0, 482 + .channels = 1, 483 + .buswidth = 32, 484 + .num_links = 1, 485 + .links = { SC7280_SLAVE_MNOC_HF_MEM_NOC }, 486 + }; 487 + 488 + static struct qcom_icc_node qhm_nsp_noc_config = { 489 + .name = "qhm_nsp_noc_config", 490 + .id = SC7280_MASTER_CDSP_NOC_CFG, 491 + .channels = 1, 492 + .buswidth = 4, 493 + .num_links = 1, 494 + .links = { SC7280_SLAVE_SERVICE_NSP_NOC }, 495 + }; 496 + 497 + static struct qcom_icc_node qxm_nsp = { 498 + .name = "qxm_nsp", 499 + .id = SC7280_MASTER_CDSP_PROC, 500 + .channels = 2, 501 + .buswidth = 32, 502 + .num_links = 1, 503 + .links = { SC7280_SLAVE_CDSP_MEM_NOC }, 504 + }; 505 + 506 + static struct qcom_icc_node qnm_aggre1_noc = { 507 + .name = "qnm_aggre1_noc", 508 + .id = SC7280_MASTER_A1NOC_SNOC, 509 + .channels = 1, 510 + .buswidth = 16, 511 + .num_links = 1, 512 + .links = { SC7280_SLAVE_SNOC_GEM_NOC_SF }, 513 + }; 514 + 515 + static struct qcom_icc_node qnm_aggre2_noc = { 516 + .name = "qnm_aggre2_noc", 517 + .id = SC7280_MASTER_A2NOC_SNOC, 518 + .channels = 1, 519 + .buswidth = 16, 520 + .num_links = 1, 521 + .links = { SC7280_SLAVE_SNOC_GEM_NOC_SF }, 522 + }; 523 + 524 + static struct qcom_icc_node qnm_snoc_cfg = { 525 + .name = "qnm_snoc_cfg", 526 + .id = SC7280_MASTER_SNOC_CFG, 527 + .channels = 1, 528 + .buswidth = 4, 529 + .num_links = 1, 530 + .links = { SC7280_SLAVE_SERVICE_SNOC }, 531 + }; 532 + 533 + static 
struct qcom_icc_node qxm_pimem = { 534 + .name = "qxm_pimem", 535 + .id = SC7280_MASTER_PIMEM, 536 + .channels = 1, 537 + .buswidth = 8, 538 + .num_links = 1, 539 + .links = { SC7280_SLAVE_SNOC_GEM_NOC_GC }, 540 + }; 541 + 542 + static struct qcom_icc_node xm_gic = { 543 + .name = "xm_gic", 544 + .id = SC7280_MASTER_GIC, 545 + .channels = 1, 546 + .buswidth = 8, 547 + .num_links = 1, 548 + .links = { SC7280_SLAVE_SNOC_GEM_NOC_GC }, 549 + }; 550 + 551 + static struct qcom_icc_node qns_a1noc_snoc = { 552 + .name = "qns_a1noc_snoc", 553 + .id = SC7280_SLAVE_A1NOC_SNOC, 554 + .channels = 1, 555 + .buswidth = 16, 556 + .num_links = 1, 557 + .links = { SC7280_MASTER_A1NOC_SNOC }, 558 + }; 559 + 560 + static struct qcom_icc_node srvc_aggre1_noc = { 561 + .name = "srvc_aggre1_noc", 562 + .id = SC7280_SLAVE_SERVICE_A1NOC, 563 + .channels = 1, 564 + .buswidth = 4, 565 + .num_links = 0, 566 + }; 567 + 568 + static struct qcom_icc_node qns_a2noc_snoc = { 569 + .name = "qns_a2noc_snoc", 570 + .id = SC7280_SLAVE_A2NOC_SNOC, 571 + .channels = 1, 572 + .buswidth = 16, 573 + .num_links = 1, 574 + .links = { SC7280_MASTER_A2NOC_SNOC }, 575 + }; 576 + 577 + static struct qcom_icc_node qns_pcie_mem_noc = { 578 + .name = "qns_pcie_mem_noc", 579 + .id = SC7280_SLAVE_ANOC_PCIE_GEM_NOC, 580 + .channels = 1, 581 + .buswidth = 16, 582 + .num_links = 1, 583 + .links = { SC7280_MASTER_ANOC_PCIE_GEM_NOC }, 584 + }; 585 + 586 + static struct qcom_icc_node srvc_aggre2_noc = { 587 + .name = "srvc_aggre2_noc", 588 + .id = SC7280_SLAVE_SERVICE_A2NOC, 589 + .channels = 1, 590 + .buswidth = 4, 591 + .num_links = 0, 592 + }; 593 + 594 + static struct qcom_icc_node qup0_core_slave = { 595 + .name = "qup0_core_slave", 596 + .id = SC7280_SLAVE_QUP_CORE_0, 597 + .channels = 1, 598 + .buswidth = 4, 599 + .num_links = 0, 600 + }; 601 + 602 + static struct qcom_icc_node qup1_core_slave = { 603 + .name = "qup1_core_slave", 604 + .id = SC7280_SLAVE_QUP_CORE_1, 605 + .channels = 1, 606 + .buswidth = 4, 607 + 
.num_links = 0, 608 + }; 609 + 610 + static struct qcom_icc_node qhs_ahb2phy0 = { 611 + .name = "qhs_ahb2phy0", 612 + .id = SC7280_SLAVE_AHB2PHY_SOUTH, 613 + .channels = 1, 614 + .buswidth = 4, 615 + .num_links = 0, 616 + }; 617 + 618 + static struct qcom_icc_node qhs_ahb2phy1 = { 619 + .name = "qhs_ahb2phy1", 620 + .id = SC7280_SLAVE_AHB2PHY_NORTH, 621 + .channels = 1, 622 + .buswidth = 4, 623 + .num_links = 0, 624 + }; 625 + 626 + static struct qcom_icc_node qhs_camera_cfg = { 627 + .name = "qhs_camera_cfg", 628 + .id = SC7280_SLAVE_CAMERA_CFG, 629 + .channels = 1, 630 + .buswidth = 4, 631 + .num_links = 0, 632 + }; 633 + 634 + static struct qcom_icc_node qhs_clk_ctl = { 635 + .name = "qhs_clk_ctl", 636 + .id = SC7280_SLAVE_CLK_CTL, 637 + .channels = 1, 638 + .buswidth = 4, 639 + .num_links = 0, 640 + }; 641 + 642 + static struct qcom_icc_node qhs_compute_cfg = { 643 + .name = "qhs_compute_cfg", 644 + .id = SC7280_SLAVE_CDSP_CFG, 645 + .channels = 1, 646 + .buswidth = 4, 647 + .num_links = 1, 648 + .links = { SC7280_MASTER_CDSP_NOC_CFG }, 649 + }; 650 + 651 + static struct qcom_icc_node qhs_cpr_cx = { 652 + .name = "qhs_cpr_cx", 653 + .id = SC7280_SLAVE_RBCPR_CX_CFG, 654 + .channels = 1, 655 + .buswidth = 4, 656 + .num_links = 0, 657 + }; 658 + 659 + static struct qcom_icc_node qhs_cpr_mx = { 660 + .name = "qhs_cpr_mx", 661 + .id = SC7280_SLAVE_RBCPR_MX_CFG, 662 + .channels = 1, 663 + .buswidth = 4, 664 + .num_links = 0, 665 + }; 666 + 667 + static struct qcom_icc_node qhs_crypto0_cfg = { 668 + .name = "qhs_crypto0_cfg", 669 + .id = SC7280_SLAVE_CRYPTO_0_CFG, 670 + .channels = 1, 671 + .buswidth = 4, 672 + .num_links = 0, 673 + }; 674 + 675 + static struct qcom_icc_node qhs_cx_rdpm = { 676 + .name = "qhs_cx_rdpm", 677 + .id = SC7280_SLAVE_CX_RDPM, 678 + .channels = 1, 679 + .buswidth = 4, 680 + .num_links = 0, 681 + }; 682 + 683 + static struct qcom_icc_node qhs_dcc_cfg = { 684 + .name = "qhs_dcc_cfg", 685 + .id = SC7280_SLAVE_DCC_CFG, 686 + .channels = 1, 687 + 
.buswidth = 4, 688 + .num_links = 0, 689 + }; 690 + 691 + static struct qcom_icc_node qhs_display_cfg = { 692 + .name = "qhs_display_cfg", 693 + .id = SC7280_SLAVE_DISPLAY_CFG, 694 + .channels = 1, 695 + .buswidth = 4, 696 + .num_links = 0, 697 + }; 698 + 699 + static struct qcom_icc_node qhs_gpuss_cfg = { 700 + .name = "qhs_gpuss_cfg", 701 + .id = SC7280_SLAVE_GFX3D_CFG, 702 + .channels = 1, 703 + .buswidth = 8, 704 + .num_links = 0, 705 + }; 706 + 707 + static struct qcom_icc_node qhs_hwkm = { 708 + .name = "qhs_hwkm", 709 + .id = SC7280_SLAVE_HWKM, 710 + .channels = 1, 711 + .buswidth = 4, 712 + .num_links = 0, 713 + }; 714 + 715 + static struct qcom_icc_node qhs_imem_cfg = { 716 + .name = "qhs_imem_cfg", 717 + .id = SC7280_SLAVE_IMEM_CFG, 718 + .channels = 1, 719 + .buswidth = 4, 720 + .num_links = 0, 721 + }; 722 + 723 + static struct qcom_icc_node qhs_ipa = { 724 + .name = "qhs_ipa", 725 + .id = SC7280_SLAVE_IPA_CFG, 726 + .channels = 1, 727 + .buswidth = 4, 728 + .num_links = 0, 729 + }; 730 + 731 + static struct qcom_icc_node qhs_ipc_router = { 732 + .name = "qhs_ipc_router", 733 + .id = SC7280_SLAVE_IPC_ROUTER_CFG, 734 + .channels = 1, 735 + .buswidth = 4, 736 + .num_links = 0, 737 + }; 738 + 739 + static struct qcom_icc_node qhs_lpass_cfg = { 740 + .name = "qhs_lpass_cfg", 741 + .id = SC7280_SLAVE_LPASS, 742 + .channels = 1, 743 + .buswidth = 4, 744 + .num_links = 1, 745 + .links = { SC7280_MASTER_CNOC_LPASS_AG_NOC }, 746 + }; 747 + 748 + static struct qcom_icc_node qhs_mss_cfg = { 749 + .name = "qhs_mss_cfg", 750 + .id = SC7280_SLAVE_CNOC_MSS, 751 + .channels = 1, 752 + .buswidth = 4, 753 + .num_links = 0, 754 + }; 755 + 756 + static struct qcom_icc_node qhs_mx_rdpm = { 757 + .name = "qhs_mx_rdpm", 758 + .id = SC7280_SLAVE_MX_RDPM, 759 + .channels = 1, 760 + .buswidth = 4, 761 + .num_links = 0, 762 + }; 763 + 764 + static struct qcom_icc_node qhs_pcie0_cfg = { 765 + .name = "qhs_pcie0_cfg", 766 + .id = SC7280_SLAVE_PCIE_0_CFG, 767 + .channels = 1, 768 + 
.buswidth = 4, 769 + .num_links = 0, 770 + }; 771 + 772 + static struct qcom_icc_node qhs_pcie1_cfg = { 773 + .name = "qhs_pcie1_cfg", 774 + .id = SC7280_SLAVE_PCIE_1_CFG, 775 + .channels = 1, 776 + .buswidth = 4, 777 + .num_links = 0, 778 + }; 779 + 780 + static struct qcom_icc_node qhs_pdm = { 781 + .name = "qhs_pdm", 782 + .id = SC7280_SLAVE_PDM, 783 + .channels = 1, 784 + .buswidth = 4, 785 + .num_links = 0, 786 + }; 787 + 788 + static struct qcom_icc_node qhs_pimem_cfg = { 789 + .name = "qhs_pimem_cfg", 790 + .id = SC7280_SLAVE_PIMEM_CFG, 791 + .channels = 1, 792 + .buswidth = 4, 793 + .num_links = 0, 794 + }; 795 + 796 + static struct qcom_icc_node qhs_pka_wrapper_cfg = { 797 + .name = "qhs_pka_wrapper_cfg", 798 + .id = SC7280_SLAVE_PKA_WRAPPER_CFG, 799 + .channels = 1, 800 + .buswidth = 4, 801 + .num_links = 0, 802 + }; 803 + 804 + static struct qcom_icc_node qhs_pmu_wrapper_cfg = { 805 + .name = "qhs_pmu_wrapper_cfg", 806 + .id = SC7280_SLAVE_PMU_WRAPPER_CFG, 807 + .channels = 1, 808 + .buswidth = 4, 809 + .num_links = 0, 810 + }; 811 + 812 + static struct qcom_icc_node qhs_qdss_cfg = { 813 + .name = "qhs_qdss_cfg", 814 + .id = SC7280_SLAVE_QDSS_CFG, 815 + .channels = 1, 816 + .buswidth = 4, 817 + .num_links = 0, 818 + }; 819 + 820 + static struct qcom_icc_node qhs_qspi = { 821 + .name = "qhs_qspi", 822 + .id = SC7280_SLAVE_QSPI_0, 823 + .channels = 1, 824 + .buswidth = 4, 825 + .num_links = 0, 826 + }; 827 + 828 + static struct qcom_icc_node qhs_qup0 = { 829 + .name = "qhs_qup0", 830 + .id = SC7280_SLAVE_QUP_0, 831 + .channels = 1, 832 + .buswidth = 4, 833 + .num_links = 0, 834 + }; 835 + 836 + static struct qcom_icc_node qhs_qup1 = { 837 + .name = "qhs_qup1", 838 + .id = SC7280_SLAVE_QUP_1, 839 + .channels = 1, 840 + .buswidth = 4, 841 + .num_links = 0, 842 + }; 843 + 844 + static struct qcom_icc_node qhs_sdc1 = { 845 + .name = "qhs_sdc1", 846 + .id = SC7280_SLAVE_SDCC_1, 847 + .channels = 1, 848 + .buswidth = 4, 849 + .num_links = 0, 850 + }; 851 + 852 + 
static struct qcom_icc_node qhs_sdc2 = { 853 + .name = "qhs_sdc2", 854 + .id = SC7280_SLAVE_SDCC_2, 855 + .channels = 1, 856 + .buswidth = 4, 857 + .num_links = 0, 858 + }; 859 + 860 + static struct qcom_icc_node qhs_sdc4 = { 861 + .name = "qhs_sdc4", 862 + .id = SC7280_SLAVE_SDCC_4, 863 + .channels = 1, 864 + .buswidth = 4, 865 + .num_links = 0, 866 + }; 867 + 868 + static struct qcom_icc_node qhs_security = { 869 + .name = "qhs_security", 870 + .id = SC7280_SLAVE_SECURITY, 871 + .channels = 1, 872 + .buswidth = 4, 873 + .num_links = 0, 874 + }; 875 + 876 + static struct qcom_icc_node qhs_tcsr = { 877 + .name = "qhs_tcsr", 878 + .id = SC7280_SLAVE_TCSR, 879 + .channels = 1, 880 + .buswidth = 4, 881 + .num_links = 0, 882 + }; 883 + 884 + static struct qcom_icc_node qhs_tlmm = { 885 + .name = "qhs_tlmm", 886 + .id = SC7280_SLAVE_TLMM, 887 + .channels = 1, 888 + .buswidth = 4, 889 + .num_links = 0, 890 + }; 891 + 892 + static struct qcom_icc_node qhs_ufs_mem_cfg = { 893 + .name = "qhs_ufs_mem_cfg", 894 + .id = SC7280_SLAVE_UFS_MEM_CFG, 895 + .channels = 1, 896 + .buswidth = 4, 897 + .num_links = 0, 898 + }; 899 + 900 + static struct qcom_icc_node qhs_usb2 = { 901 + .name = "qhs_usb2", 902 + .id = SC7280_SLAVE_USB2, 903 + .channels = 1, 904 + .buswidth = 4, 905 + .num_links = 0, 906 + }; 907 + 908 + static struct qcom_icc_node qhs_usb3_0 = { 909 + .name = "qhs_usb3_0", 910 + .id = SC7280_SLAVE_USB3_0, 911 + .channels = 1, 912 + .buswidth = 4, 913 + .num_links = 0, 914 + }; 915 + 916 + static struct qcom_icc_node qhs_venus_cfg = { 917 + .name = "qhs_venus_cfg", 918 + .id = SC7280_SLAVE_VENUS_CFG, 919 + .channels = 1, 920 + .buswidth = 4, 921 + .num_links = 0, 922 + }; 923 + 924 + static struct qcom_icc_node qhs_vsense_ctrl_cfg = { 925 + .name = "qhs_vsense_ctrl_cfg", 926 + .id = SC7280_SLAVE_VSENSE_CTRL_CFG, 927 + .channels = 1, 928 + .buswidth = 4, 929 + .num_links = 0, 930 + }; 931 + 932 + static struct qcom_icc_node qns_a1_noc_cfg = { 933 + .name = "qns_a1_noc_cfg", 
934 + .id = SC7280_SLAVE_A1NOC_CFG, 935 + .channels = 1, 936 + .buswidth = 4, 937 + .num_links = 1, 938 + .links = { SC7280_MASTER_A1NOC_CFG }, 939 + }; 940 + 941 + static struct qcom_icc_node qns_a2_noc_cfg = { 942 + .name = "qns_a2_noc_cfg", 943 + .id = SC7280_SLAVE_A2NOC_CFG, 944 + .channels = 1, 945 + .buswidth = 4, 946 + .num_links = 1, 947 + .links = { SC7280_MASTER_A2NOC_CFG }, 948 + }; 949 + 950 + static struct qcom_icc_node qns_cnoc2_cnoc3 = { 951 + .name = "qns_cnoc2_cnoc3", 952 + .id = SC7280_SLAVE_CNOC2_CNOC3, 953 + .channels = 1, 954 + .buswidth = 8, 955 + .num_links = 1, 956 + .links = { SC7280_MASTER_CNOC2_CNOC3 }, 957 + }; 958 + 959 + static struct qcom_icc_node qns_mnoc_cfg = { 960 + .name = "qns_mnoc_cfg", 961 + .id = SC7280_SLAVE_CNOC_MNOC_CFG, 962 + .channels = 1, 963 + .buswidth = 4, 964 + .num_links = 1, 965 + .links = { SC7280_MASTER_CNOC_MNOC_CFG }, 966 + }; 967 + 968 + static struct qcom_icc_node qns_snoc_cfg = { 969 + .name = "qns_snoc_cfg", 970 + .id = SC7280_SLAVE_SNOC_CFG, 971 + .channels = 1, 972 + .buswidth = 4, 973 + .num_links = 1, 974 + .links = { SC7280_MASTER_SNOC_CFG }, 975 + }; 976 + 977 + static struct qcom_icc_node qhs_aoss = { 978 + .name = "qhs_aoss", 979 + .id = SC7280_SLAVE_AOSS, 980 + .channels = 1, 981 + .buswidth = 4, 982 + .num_links = 0, 983 + }; 984 + 985 + static struct qcom_icc_node qhs_apss = { 986 + .name = "qhs_apss", 987 + .id = SC7280_SLAVE_APPSS, 988 + .channels = 1, 989 + .buswidth = 8, 990 + .num_links = 0, 991 + }; 992 + 993 + static struct qcom_icc_node qns_cnoc3_cnoc2 = { 994 + .name = "qns_cnoc3_cnoc2", 995 + .id = SC7280_SLAVE_CNOC3_CNOC2, 996 + .channels = 1, 997 + .buswidth = 8, 998 + .num_links = 1, 999 + .links = { SC7280_MASTER_CNOC3_CNOC2 }, 1000 + }; 1001 + 1002 + static struct qcom_icc_node qns_cnoc_a2noc = { 1003 + .name = "qns_cnoc_a2noc", 1004 + .id = SC7280_SLAVE_CNOC_A2NOC, 1005 + .channels = 1, 1006 + .buswidth = 8, 1007 + .num_links = 1, 1008 + .links = { SC7280_MASTER_CNOC_A2NOC }, 
1009 + }; 1010 + 1011 + static struct qcom_icc_node qns_ddrss_cfg = { 1012 + .name = "qns_ddrss_cfg", 1013 + .id = SC7280_SLAVE_DDRSS_CFG, 1014 + .channels = 1, 1015 + .buswidth = 4, 1016 + .num_links = 1, 1017 + .links = { SC7280_MASTER_CNOC_DC_NOC }, 1018 + }; 1019 + 1020 + static struct qcom_icc_node qxs_boot_imem = { 1021 + .name = "qxs_boot_imem", 1022 + .id = SC7280_SLAVE_BOOT_IMEM, 1023 + .channels = 1, 1024 + .buswidth = 8, 1025 + .num_links = 0, 1026 + }; 1027 + 1028 + static struct qcom_icc_node qxs_imem = { 1029 + .name = "qxs_imem", 1030 + .id = SC7280_SLAVE_IMEM, 1031 + .channels = 1, 1032 + .buswidth = 8, 1033 + .num_links = 0, 1034 + }; 1035 + 1036 + static struct qcom_icc_node qxs_pimem = { 1037 + .name = "qxs_pimem", 1038 + .id = SC7280_SLAVE_PIMEM, 1039 + .channels = 1, 1040 + .buswidth = 8, 1041 + .num_links = 0, 1042 + }; 1043 + 1044 + static struct qcom_icc_node xs_pcie_0 = { 1045 + .name = "xs_pcie_0", 1046 + .id = SC7280_SLAVE_PCIE_0, 1047 + .channels = 1, 1048 + .buswidth = 8, 1049 + .num_links = 0, 1050 + }; 1051 + 1052 + static struct qcom_icc_node xs_pcie_1 = { 1053 + .name = "xs_pcie_1", 1054 + .id = SC7280_SLAVE_PCIE_1, 1055 + .channels = 1, 1056 + .buswidth = 8, 1057 + .num_links = 0, 1058 + }; 1059 + 1060 + static struct qcom_icc_node xs_qdss_stm = { 1061 + .name = "xs_qdss_stm", 1062 + .id = SC7280_SLAVE_QDSS_STM, 1063 + .channels = 1, 1064 + .buswidth = 4, 1065 + .num_links = 0, 1066 + }; 1067 + 1068 + static struct qcom_icc_node xs_sys_tcu_cfg = { 1069 + .name = "xs_sys_tcu_cfg", 1070 + .id = SC7280_SLAVE_TCU, 1071 + .channels = 1, 1072 + .buswidth = 8, 1073 + .num_links = 0, 1074 + }; 1075 + 1076 + static struct qcom_icc_node qhs_llcc = { 1077 + .name = "qhs_llcc", 1078 + .id = SC7280_SLAVE_LLCC_CFG, 1079 + .channels = 1, 1080 + .buswidth = 4, 1081 + .num_links = 0, 1082 + }; 1083 + 1084 + static struct qcom_icc_node qns_gemnoc = { 1085 + .name = "qns_gemnoc", 1086 + .id = SC7280_SLAVE_GEM_NOC_CFG, 1087 + .channels = 1, 1088 + 
.buswidth = 4, 1089 + .num_links = 1, 1090 + .links = { SC7280_MASTER_GEM_NOC_CFG }, 1091 + }; 1092 + 1093 + static struct qcom_icc_node qhs_mdsp_ms_mpu_cfg = { 1094 + .name = "qhs_mdsp_ms_mpu_cfg", 1095 + .id = SC7280_SLAVE_MSS_PROC_MS_MPU_CFG, 1096 + .channels = 1, 1097 + .buswidth = 4, 1098 + .num_links = 0, 1099 + }; 1100 + 1101 + static struct qcom_icc_node qhs_modem_ms_mpu_cfg = { 1102 + .name = "qhs_modem_ms_mpu_cfg", 1103 + .id = SC7280_SLAVE_MCDMA_MS_MPU_CFG, 1104 + .channels = 1, 1105 + .buswidth = 4, 1106 + .num_links = 0, 1107 + }; 1108 + 1109 + static struct qcom_icc_node qns_gem_noc_cnoc = { 1110 + .name = "qns_gem_noc_cnoc", 1111 + .id = SC7280_SLAVE_GEM_NOC_CNOC, 1112 + .channels = 1, 1113 + .buswidth = 16, 1114 + .num_links = 1, 1115 + .links = { SC7280_MASTER_GEM_NOC_CNOC }, 1116 + }; 1117 + 1118 + static struct qcom_icc_node qns_llcc = { 1119 + .name = "qns_llcc", 1120 + .id = SC7280_SLAVE_LLCC, 1121 + .channels = 2, 1122 + .buswidth = 16, 1123 + .num_links = 1, 1124 + .links = { SC7280_MASTER_LLCC }, 1125 + }; 1126 + 1127 + static struct qcom_icc_node qns_pcie = { 1128 + .name = "qns_pcie", 1129 + .id = SC7280_SLAVE_MEM_NOC_PCIE_SNOC, 1130 + .channels = 1, 1131 + .buswidth = 8, 1132 + .num_links = 1, 1133 + .links = { SC7280_MASTER_GEM_NOC_PCIE_SNOC }, 1134 + }; 1135 + 1136 + static struct qcom_icc_node srvc_even_gemnoc = { 1137 + .name = "srvc_even_gemnoc", 1138 + .id = SC7280_SLAVE_SERVICE_GEM_NOC_1, 1139 + .channels = 1, 1140 + .buswidth = 4, 1141 + .num_links = 0, 1142 + }; 1143 + 1144 + static struct qcom_icc_node srvc_odd_gemnoc = { 1145 + .name = "srvc_odd_gemnoc", 1146 + .id = SC7280_SLAVE_SERVICE_GEM_NOC_2, 1147 + .channels = 1, 1148 + .buswidth = 4, 1149 + .num_links = 0, 1150 + }; 1151 + 1152 + static struct qcom_icc_node srvc_sys_gemnoc = { 1153 + .name = "srvc_sys_gemnoc", 1154 + .id = SC7280_SLAVE_SERVICE_GEM_NOC, 1155 + .channels = 1, 1156 + .buswidth = 4, 1157 + .num_links = 0, 1158 + }; 1159 + 1160 + static struct qcom_icc_node 
qhs_lpass_core = { 1161 + .name = "qhs_lpass_core", 1162 + .id = SC7280_SLAVE_LPASS_CORE_CFG, 1163 + .channels = 1, 1164 + .buswidth = 4, 1165 + .num_links = 0, 1166 + }; 1167 + 1168 + static struct qcom_icc_node qhs_lpass_lpi = { 1169 + .name = "qhs_lpass_lpi", 1170 + .id = SC7280_SLAVE_LPASS_LPI_CFG, 1171 + .channels = 1, 1172 + .buswidth = 4, 1173 + .num_links = 0, 1174 + }; 1175 + 1176 + static struct qcom_icc_node qhs_lpass_mpu = { 1177 + .name = "qhs_lpass_mpu", 1178 + .id = SC7280_SLAVE_LPASS_MPU_CFG, 1179 + .channels = 1, 1180 + .buswidth = 4, 1181 + .num_links = 0, 1182 + }; 1183 + 1184 + static struct qcom_icc_node qhs_lpass_top = { 1185 + .name = "qhs_lpass_top", 1186 + .id = SC7280_SLAVE_LPASS_TOP_CFG, 1187 + .channels = 1, 1188 + .buswidth = 4, 1189 + .num_links = 0, 1190 + }; 1191 + 1192 + static struct qcom_icc_node srvc_niu_aml_noc = { 1193 + .name = "srvc_niu_aml_noc", 1194 + .id = SC7280_SLAVE_SERVICES_LPASS_AML_NOC, 1195 + .channels = 1, 1196 + .buswidth = 4, 1197 + .num_links = 0, 1198 + }; 1199 + 1200 + static struct qcom_icc_node srvc_niu_lpass_agnoc = { 1201 + .name = "srvc_niu_lpass_agnoc", 1202 + .id = SC7280_SLAVE_SERVICE_LPASS_AG_NOC, 1203 + .channels = 1, 1204 + .buswidth = 4, 1205 + .num_links = 0, 1206 + }; 1207 + 1208 + static struct qcom_icc_node ebi = { 1209 + .name = "ebi", 1210 + .id = SC7280_SLAVE_EBI1, 1211 + .channels = 2, 1212 + .buswidth = 4, 1213 + .num_links = 0, 1214 + }; 1215 + 1216 + static struct qcom_icc_node qns_mem_noc_hf = { 1217 + .name = "qns_mem_noc_hf", 1218 + .id = SC7280_SLAVE_MNOC_HF_MEM_NOC, 1219 + .channels = 2, 1220 + .buswidth = 32, 1221 + .num_links = 1, 1222 + .links = { SC7280_MASTER_MNOC_HF_MEM_NOC }, 1223 + }; 1224 + 1225 + static struct qcom_icc_node qns_mem_noc_sf = { 1226 + .name = "qns_mem_noc_sf", 1227 + .id = SC7280_SLAVE_MNOC_SF_MEM_NOC, 1228 + .channels = 1, 1229 + .buswidth = 32, 1230 + .num_links = 1, 1231 + .links = { SC7280_MASTER_MNOC_SF_MEM_NOC }, 1232 + }; 1233 + 1234 + static struct 
qcom_icc_node srvc_mnoc = { 1235 + .name = "srvc_mnoc", 1236 + .id = SC7280_SLAVE_SERVICE_MNOC, 1237 + .channels = 1, 1238 + .buswidth = 4, 1239 + .num_links = 0, 1240 + }; 1241 + 1242 + static struct qcom_icc_node qns_nsp_gemnoc = { 1243 + .name = "qns_nsp_gemnoc", 1244 + .id = SC7280_SLAVE_CDSP_MEM_NOC, 1245 + .channels = 2, 1246 + .buswidth = 32, 1247 + .num_links = 1, 1248 + .links = { SC7280_MASTER_COMPUTE_NOC }, 1249 + }; 1250 + 1251 + static struct qcom_icc_node service_nsp_noc = { 1252 + .name = "service_nsp_noc", 1253 + .id = SC7280_SLAVE_SERVICE_NSP_NOC, 1254 + .channels = 1, 1255 + .buswidth = 4, 1256 + .num_links = 0, 1257 + }; 1258 + 1259 + static struct qcom_icc_node qns_gemnoc_gc = { 1260 + .name = "qns_gemnoc_gc", 1261 + .id = SC7280_SLAVE_SNOC_GEM_NOC_GC, 1262 + .channels = 1, 1263 + .buswidth = 8, 1264 + .num_links = 1, 1265 + .links = { SC7280_MASTER_SNOC_GC_MEM_NOC }, 1266 + }; 1267 + 1268 + static struct qcom_icc_node qns_gemnoc_sf = { 1269 + .name = "qns_gemnoc_sf", 1270 + .id = SC7280_SLAVE_SNOC_GEM_NOC_SF, 1271 + .channels = 1, 1272 + .buswidth = 16, 1273 + .num_links = 1, 1274 + .links = { SC7280_MASTER_SNOC_SF_MEM_NOC }, 1275 + }; 1276 + 1277 + static struct qcom_icc_node srvc_snoc = { 1278 + .name = "srvc_snoc", 1279 + .id = SC7280_SLAVE_SERVICE_SNOC, 1280 + .channels = 1, 1281 + .buswidth = 4, 1282 + .num_links = 0, 1283 + }; 1284 + 1285 + static struct qcom_icc_bcm bcm_acv = { 1286 + .name = "ACV", 1287 + .num_nodes = 1, 1288 + .nodes = { &ebi }, 1289 + }; 1290 + 1291 + static struct qcom_icc_bcm bcm_ce0 = { 1292 + .name = "CE0", 1293 + .num_nodes = 1, 1294 + .nodes = { &qxm_crypto }, 1295 + }; 1296 + 1297 + static struct qcom_icc_bcm bcm_cn0 = { 1298 + .name = "CN0", 1299 + .keepalive = true, 1300 + .num_nodes = 2, 1301 + .nodes = { &qnm_gemnoc_cnoc, &qnm_gemnoc_pcie }, 1302 + }; 1303 + 1304 + static struct qcom_icc_bcm bcm_cn1 = { 1305 + .name = "CN1", 1306 + .num_nodes = 47, 1307 + .nodes = { &qnm_cnoc3_cnoc2, &xm_qdss_dap, 1308 + 
&qhs_ahb2phy0, &qhs_ahb2phy1, 1309 + &qhs_camera_cfg, &qhs_clk_ctl, 1310 + &qhs_compute_cfg, &qhs_cpr_cx, 1311 + &qhs_cpr_mx, &qhs_crypto0_cfg, 1312 + &qhs_cx_rdpm, &qhs_dcc_cfg, 1313 + &qhs_display_cfg, &qhs_gpuss_cfg, 1314 + &qhs_hwkm, &qhs_imem_cfg, 1315 + &qhs_ipa, &qhs_ipc_router, 1316 + &qhs_mss_cfg, &qhs_mx_rdpm, 1317 + &qhs_pcie0_cfg, &qhs_pcie1_cfg, 1318 + &qhs_pimem_cfg, &qhs_pka_wrapper_cfg, 1319 + &qhs_pmu_wrapper_cfg, &qhs_qdss_cfg, 1320 + &qhs_qup0, &qhs_qup1, 1321 + &qhs_security, &qhs_tcsr, 1322 + &qhs_tlmm, &qhs_ufs_mem_cfg, &qhs_usb2, 1323 + &qhs_usb3_0, &qhs_venus_cfg, 1324 + &qhs_vsense_ctrl_cfg, &qns_a1_noc_cfg, 1325 + &qns_a2_noc_cfg, &qns_cnoc2_cnoc3, 1326 + &qns_mnoc_cfg, &qns_snoc_cfg, 1327 + &qnm_cnoc2_cnoc3, &qhs_aoss, 1328 + &qhs_apss, &qns_cnoc3_cnoc2, 1329 + &qns_cnoc_a2noc, &qns_ddrss_cfg }, 1330 + }; 1331 + 1332 + static struct qcom_icc_bcm bcm_cn2 = { 1333 + .name = "CN2", 1334 + .num_nodes = 6, 1335 + .nodes = { &qhs_lpass_cfg, &qhs_pdm, 1336 + &qhs_qspi, &qhs_sdc1, 1337 + &qhs_sdc2, &qhs_sdc4 }, 1338 + }; 1339 + 1340 + static struct qcom_icc_bcm bcm_co0 = { 1341 + .name = "CO0", 1342 + .num_nodes = 1, 1343 + .nodes = { &qns_nsp_gemnoc }, 1344 + }; 1345 + 1346 + static struct qcom_icc_bcm bcm_co3 = { 1347 + .name = "CO3", 1348 + .num_nodes = 1, 1349 + .nodes = { &qxm_nsp }, 1350 + }; 1351 + 1352 + static struct qcom_icc_bcm bcm_mc0 = { 1353 + .name = "MC0", 1354 + .keepalive = true, 1355 + .num_nodes = 1, 1356 + .nodes = { &ebi }, 1357 + }; 1358 + 1359 + static struct qcom_icc_bcm bcm_mm0 = { 1360 + .name = "MM0", 1361 + .keepalive = true, 1362 + .num_nodes = 1, 1363 + .nodes = { &qns_mem_noc_hf }, 1364 + }; 1365 + 1366 + static struct qcom_icc_bcm bcm_mm1 = { 1367 + .name = "MM1", 1368 + .num_nodes = 2, 1369 + .nodes = { &qxm_camnoc_hf, &qxm_mdp0 }, 1370 + }; 1371 + 1372 + static struct qcom_icc_bcm bcm_mm4 = { 1373 + .name = "MM4", 1374 + .num_nodes = 1, 1375 + .nodes = { &qns_mem_noc_sf }, 1376 + }; 1377 + 1378 + static struct 
qcom_icc_bcm bcm_mm5 = { 1379 + .name = "MM5", 1380 + .num_nodes = 3, 1381 + .nodes = { &qnm_video0, &qxm_camnoc_icp, 1382 + &qxm_camnoc_sf }, 1383 + }; 1384 + 1385 + static struct qcom_icc_bcm bcm_qup0 = { 1386 + .name = "QUP0", 1387 + .vote_scale = 1, 1388 + .num_nodes = 1, 1389 + .nodes = { &qup0_core_slave }, 1390 + }; 1391 + 1392 + static struct qcom_icc_bcm bcm_qup1 = { 1393 + .name = "QUP1", 1394 + .vote_scale = 1, 1395 + .num_nodes = 1, 1396 + .nodes = { &qup1_core_slave }, 1397 + }; 1398 + 1399 + static struct qcom_icc_bcm bcm_sh0 = { 1400 + .name = "SH0", 1401 + .keepalive = true, 1402 + .num_nodes = 1, 1403 + .nodes = { &qns_llcc }, 1404 + }; 1405 + 1406 + static struct qcom_icc_bcm bcm_sh2 = { 1407 + .name = "SH2", 1408 + .num_nodes = 2, 1409 + .nodes = { &alm_gpu_tcu, &alm_sys_tcu }, 1410 + }; 1411 + 1412 + static struct qcom_icc_bcm bcm_sh3 = { 1413 + .name = "SH3", 1414 + .num_nodes = 1, 1415 + .nodes = { &qnm_cmpnoc }, 1416 + }; 1417 + 1418 + static struct qcom_icc_bcm bcm_sh4 = { 1419 + .name = "SH4", 1420 + .num_nodes = 1, 1421 + .nodes = { &chm_apps }, 1422 + }; 1423 + 1424 + static struct qcom_icc_bcm bcm_sn0 = { 1425 + .name = "SN0", 1426 + .keepalive = true, 1427 + .num_nodes = 1, 1428 + .nodes = { &qns_gemnoc_sf }, 1429 + }; 1430 + 1431 + static struct qcom_icc_bcm bcm_sn2 = { 1432 + .name = "SN2", 1433 + .num_nodes = 1, 1434 + .nodes = { &qns_gemnoc_gc }, 1435 + }; 1436 + 1437 + static struct qcom_icc_bcm bcm_sn3 = { 1438 + .name = "SN3", 1439 + .num_nodes = 1, 1440 + .nodes = { &qxs_pimem }, 1441 + }; 1442 + 1443 + static struct qcom_icc_bcm bcm_sn4 = { 1444 + .name = "SN4", 1445 + .num_nodes = 1, 1446 + .nodes = { &xs_qdss_stm }, 1447 + }; 1448 + 1449 + static struct qcom_icc_bcm bcm_sn5 = { 1450 + .name = "SN5", 1451 + .num_nodes = 1, 1452 + .nodes = { &xm_pcie3_0 }, 1453 + }; 1454 + 1455 + static struct qcom_icc_bcm bcm_sn6 = { 1456 + .name = "SN6", 1457 + .num_nodes = 1, 1458 + .nodes = { &xm_pcie3_1 }, 1459 + }; 1460 + 1461 + static 
struct qcom_icc_bcm bcm_sn7 = {
+    .name = "SN7",
+    .num_nodes = 1,
+    .nodes = { &qnm_aggre1_noc },
+};
+
+static struct qcom_icc_bcm bcm_sn8 = {
+    .name = "SN8",
+    .num_nodes = 1,
+    .nodes = { &qnm_aggre2_noc },
+};
+
+static struct qcom_icc_bcm bcm_sn14 = {
+    .name = "SN14",
+    .num_nodes = 1,
+    .nodes = { &qns_pcie_mem_noc },
+};
+
+static struct qcom_icc_bcm *aggre1_noc_bcms[] = {
+    &bcm_sn5,
+    &bcm_sn6,
+    &bcm_sn14,
+};
+
+static struct qcom_icc_node *aggre1_noc_nodes[] = {
+    [MASTER_QSPI_0] = &qhm_qspi,
+    [MASTER_QUP_0] = &qhm_qup0,
+    [MASTER_QUP_1] = &qhm_qup1,
+    [MASTER_A1NOC_CFG] = &qnm_a1noc_cfg,
+    [MASTER_PCIE_0] = &xm_pcie3_0,
+    [MASTER_PCIE_1] = &xm_pcie3_1,
+    [MASTER_SDCC_1] = &xm_sdc1,
+    [MASTER_SDCC_2] = &xm_sdc2,
+    [MASTER_SDCC_4] = &xm_sdc4,
+    [MASTER_UFS_MEM] = &xm_ufs_mem,
+    [MASTER_USB2] = &xm_usb2,
+    [MASTER_USB3_0] = &xm_usb3_0,
+    [SLAVE_A1NOC_SNOC] = &qns_a1noc_snoc,
+    [SLAVE_ANOC_PCIE_GEM_NOC] = &qns_pcie_mem_noc,
+    [SLAVE_SERVICE_A1NOC] = &srvc_aggre1_noc,
+};
+
+static struct qcom_icc_desc sc7280_aggre1_noc = {
+    .nodes = aggre1_noc_nodes,
+    .num_nodes = ARRAY_SIZE(aggre1_noc_nodes),
+    .bcms = aggre1_noc_bcms,
+    .num_bcms = ARRAY_SIZE(aggre1_noc_bcms),
+};
+
+static struct qcom_icc_bcm *aggre2_noc_bcms[] = {
+    &bcm_ce0,
+};
+
+static struct qcom_icc_node *aggre2_noc_nodes[] = {
+    [MASTER_QDSS_BAM] = &qhm_qdss_bam,
+    [MASTER_A2NOC_CFG] = &qnm_a2noc_cfg,
+    [MASTER_CNOC_A2NOC] = &qnm_cnoc_datapath,
+    [MASTER_CRYPTO] = &qxm_crypto,
+    [MASTER_IPA] = &qxm_ipa,
+    [MASTER_QDSS_ETR] = &xm_qdss_etr,
+    [SLAVE_A2NOC_SNOC] = &qns_a2noc_snoc,
+    [SLAVE_SERVICE_A2NOC] = &srvc_aggre2_noc,
+};
+
+static struct qcom_icc_desc sc7280_aggre2_noc = {
+    .nodes = aggre2_noc_nodes,
+    .num_nodes = ARRAY_SIZE(aggre2_noc_nodes),
+    .bcms = aggre2_noc_bcms,
+    .num_bcms = ARRAY_SIZE(aggre2_noc_bcms),
+};
+
+static struct qcom_icc_bcm *clk_virt_bcms[] = {
+    &bcm_qup0,
+    &bcm_qup1,
+};
+
+static struct qcom_icc_node *clk_virt_nodes[] = {
+    [MASTER_QUP_CORE_0] = &qup0_core_master,
+    [MASTER_QUP_CORE_1] = &qup1_core_master,
+    [SLAVE_QUP_CORE_0] = &qup0_core_slave,
+    [SLAVE_QUP_CORE_1] = &qup1_core_slave,
+};
+
+static struct qcom_icc_desc sc7280_clk_virt = {
+    .nodes = clk_virt_nodes,
+    .num_nodes = ARRAY_SIZE(clk_virt_nodes),
+    .bcms = clk_virt_bcms,
+    .num_bcms = ARRAY_SIZE(clk_virt_bcms),
+};
+
+static struct qcom_icc_bcm *cnoc2_bcms[] = {
+    &bcm_cn1,
+    &bcm_cn2,
+};
+
+static struct qcom_icc_node *cnoc2_nodes[] = {
+    [MASTER_CNOC3_CNOC2] = &qnm_cnoc3_cnoc2,
+    [MASTER_QDSS_DAP] = &xm_qdss_dap,
+    [SLAVE_AHB2PHY_SOUTH] = &qhs_ahb2phy0,
+    [SLAVE_AHB2PHY_NORTH] = &qhs_ahb2phy1,
+    [SLAVE_CAMERA_CFG] = &qhs_camera_cfg,
+    [SLAVE_CLK_CTL] = &qhs_clk_ctl,
+    [SLAVE_CDSP_CFG] = &qhs_compute_cfg,
+    [SLAVE_RBCPR_CX_CFG] = &qhs_cpr_cx,
+    [SLAVE_RBCPR_MX_CFG] = &qhs_cpr_mx,
+    [SLAVE_CRYPTO_0_CFG] = &qhs_crypto0_cfg,
+    [SLAVE_CX_RDPM] = &qhs_cx_rdpm,
+    [SLAVE_DCC_CFG] = &qhs_dcc_cfg,
+    [SLAVE_DISPLAY_CFG] = &qhs_display_cfg,
+    [SLAVE_GFX3D_CFG] = &qhs_gpuss_cfg,
+    [SLAVE_HWKM] = &qhs_hwkm,
+    [SLAVE_IMEM_CFG] = &qhs_imem_cfg,
+    [SLAVE_IPA_CFG] = &qhs_ipa,
+    [SLAVE_IPC_ROUTER_CFG] = &qhs_ipc_router,
+    [SLAVE_LPASS] = &qhs_lpass_cfg,
+    [SLAVE_CNOC_MSS] = &qhs_mss_cfg,
+    [SLAVE_MX_RDPM] = &qhs_mx_rdpm,
+    [SLAVE_PCIE_0_CFG] = &qhs_pcie0_cfg,
+    [SLAVE_PCIE_1_CFG] = &qhs_pcie1_cfg,
+    [SLAVE_PDM] = &qhs_pdm,
+    [SLAVE_PIMEM_CFG] = &qhs_pimem_cfg,
+    [SLAVE_PKA_WRAPPER_CFG] = &qhs_pka_wrapper_cfg,
+    [SLAVE_PMU_WRAPPER_CFG] = &qhs_pmu_wrapper_cfg,
+    [SLAVE_QDSS_CFG] = &qhs_qdss_cfg,
+    [SLAVE_QSPI_0] = &qhs_qspi,
+    [SLAVE_QUP_0] = &qhs_qup0,
+    [SLAVE_QUP_1] = &qhs_qup1,
+    [SLAVE_SDCC_1] = &qhs_sdc1,
+    [SLAVE_SDCC_2] = &qhs_sdc2,
+    [SLAVE_SDCC_4] = &qhs_sdc4,
+    [SLAVE_SECURITY] = &qhs_security,
+    [SLAVE_TCSR] = &qhs_tcsr,
+    [SLAVE_TLMM] = &qhs_tlmm,
+    [SLAVE_UFS_MEM_CFG] = &qhs_ufs_mem_cfg,
+    [SLAVE_USB2] = &qhs_usb2,
+    [SLAVE_USB3_0] = &qhs_usb3_0,
+    [SLAVE_VENUS_CFG] = &qhs_venus_cfg,
+    [SLAVE_VSENSE_CTRL_CFG] = &qhs_vsense_ctrl_cfg,
+    [SLAVE_A1NOC_CFG] = &qns_a1_noc_cfg,
+    [SLAVE_A2NOC_CFG] = &qns_a2_noc_cfg,
+    [SLAVE_CNOC2_CNOC3] = &qns_cnoc2_cnoc3,
+    [SLAVE_CNOC_MNOC_CFG] = &qns_mnoc_cfg,
+    [SLAVE_SNOC_CFG] = &qns_snoc_cfg,
+};
+
+static struct qcom_icc_desc sc7280_cnoc2 = {
+    .nodes = cnoc2_nodes,
+    .num_nodes = ARRAY_SIZE(cnoc2_nodes),
+    .bcms = cnoc2_bcms,
+    .num_bcms = ARRAY_SIZE(cnoc2_bcms),
+};
+
+static struct qcom_icc_bcm *cnoc3_bcms[] = {
+    &bcm_cn0,
+    &bcm_cn1,
+    &bcm_sn3,
+    &bcm_sn4,
+};
+
+static struct qcom_icc_node *cnoc3_nodes[] = {
+    [MASTER_CNOC2_CNOC3] = &qnm_cnoc2_cnoc3,
+    [MASTER_GEM_NOC_CNOC] = &qnm_gemnoc_cnoc,
+    [MASTER_GEM_NOC_PCIE_SNOC] = &qnm_gemnoc_pcie,
+    [SLAVE_AOSS] = &qhs_aoss,
+    [SLAVE_APPSS] = &qhs_apss,
+    [SLAVE_CNOC3_CNOC2] = &qns_cnoc3_cnoc2,
+    [SLAVE_CNOC_A2NOC] = &qns_cnoc_a2noc,
+    [SLAVE_DDRSS_CFG] = &qns_ddrss_cfg,
+    [SLAVE_BOOT_IMEM] = &qxs_boot_imem,
+    [SLAVE_IMEM] = &qxs_imem,
+    [SLAVE_PIMEM] = &qxs_pimem,
+    [SLAVE_PCIE_0] = &xs_pcie_0,
+    [SLAVE_PCIE_1] = &xs_pcie_1,
+    [SLAVE_QDSS_STM] = &xs_qdss_stm,
+    [SLAVE_TCU] = &xs_sys_tcu_cfg,
+};
+
+static struct qcom_icc_desc sc7280_cnoc3 = {
+    .nodes = cnoc3_nodes,
+    .num_nodes = ARRAY_SIZE(cnoc3_nodes),
+    .bcms = cnoc3_bcms,
+    .num_bcms = ARRAY_SIZE(cnoc3_bcms),
+};
+
+static struct qcom_icc_bcm *dc_noc_bcms[] = {
+};
+
+static struct qcom_icc_node *dc_noc_nodes[] = {
+    [MASTER_CNOC_DC_NOC] = &qnm_cnoc_dc_noc,
+    [SLAVE_LLCC_CFG] = &qhs_llcc,
+    [SLAVE_GEM_NOC_CFG] = &qns_gemnoc,
+};
+
+static struct qcom_icc_desc sc7280_dc_noc = {
+    .nodes = dc_noc_nodes,
+    .num_nodes = ARRAY_SIZE(dc_noc_nodes),
+    .bcms = dc_noc_bcms,
+    .num_bcms = ARRAY_SIZE(dc_noc_bcms),
+};
+
+static struct qcom_icc_bcm *gem_noc_bcms[] = {
+    &bcm_sh0,
+    &bcm_sh2,
+    &bcm_sh3,
+    &bcm_sh4,
+};
+
+static struct qcom_icc_node *gem_noc_nodes[] = {
+    [MASTER_GPU_TCU] = &alm_gpu_tcu,
+    [MASTER_SYS_TCU] = &alm_sys_tcu,
+    [MASTER_APPSS_PROC] = &chm_apps,
+    [MASTER_COMPUTE_NOC] = &qnm_cmpnoc,
+    [MASTER_GEM_NOC_CFG] = &qnm_gemnoc_cfg,
+    [MASTER_GFX3D] = &qnm_gpu,
+    [MASTER_MNOC_HF_MEM_NOC] = &qnm_mnoc_hf,
+    [MASTER_MNOC_SF_MEM_NOC] = &qnm_mnoc_sf,
+    [MASTER_ANOC_PCIE_GEM_NOC] = &qnm_pcie,
+    [MASTER_SNOC_GC_MEM_NOC] = &qnm_snoc_gc,
+    [MASTER_SNOC_SF_MEM_NOC] = &qnm_snoc_sf,
+    [SLAVE_MSS_PROC_MS_MPU_CFG] = &qhs_mdsp_ms_mpu_cfg,
+    [SLAVE_MCDMA_MS_MPU_CFG] = &qhs_modem_ms_mpu_cfg,
+    [SLAVE_GEM_NOC_CNOC] = &qns_gem_noc_cnoc,
+    [SLAVE_LLCC] = &qns_llcc,
+    [SLAVE_MEM_NOC_PCIE_SNOC] = &qns_pcie,
+    [SLAVE_SERVICE_GEM_NOC_1] = &srvc_even_gemnoc,
+    [SLAVE_SERVICE_GEM_NOC_2] = &srvc_odd_gemnoc,
+    [SLAVE_SERVICE_GEM_NOC] = &srvc_sys_gemnoc,
+};
+
+static struct qcom_icc_desc sc7280_gem_noc = {
+    .nodes = gem_noc_nodes,
+    .num_nodes = ARRAY_SIZE(gem_noc_nodes),
+    .bcms = gem_noc_bcms,
+    .num_bcms = ARRAY_SIZE(gem_noc_bcms),
+};
+
+static struct qcom_icc_bcm *lpass_ag_noc_bcms[] = {
+};
+
+static struct qcom_icc_node *lpass_ag_noc_nodes[] = {
+    [MASTER_CNOC_LPASS_AG_NOC] = &qhm_config_noc,
+    [SLAVE_LPASS_CORE_CFG] = &qhs_lpass_core,
+    [SLAVE_LPASS_LPI_CFG] = &qhs_lpass_lpi,
+    [SLAVE_LPASS_MPU_CFG] = &qhs_lpass_mpu,
+    [SLAVE_LPASS_TOP_CFG] = &qhs_lpass_top,
+    [SLAVE_SERVICES_LPASS_AML_NOC] = &srvc_niu_aml_noc,
+    [SLAVE_SERVICE_LPASS_AG_NOC] = &srvc_niu_lpass_agnoc,
+};
+
+static struct qcom_icc_desc sc7280_lpass_ag_noc = {
+    .nodes = lpass_ag_noc_nodes,
+    .num_nodes = ARRAY_SIZE(lpass_ag_noc_nodes),
+    .bcms = lpass_ag_noc_bcms,
+    .num_bcms = ARRAY_SIZE(lpass_ag_noc_bcms),
+};
+
+static struct qcom_icc_bcm *mc_virt_bcms[] = {
+    &bcm_acv,
+    &bcm_mc0,
+};
+
+static struct qcom_icc_node *mc_virt_nodes[] = {
+    [MASTER_LLCC] = &llcc_mc,
+    [SLAVE_EBI1] = &ebi,
+};
+
+static struct qcom_icc_desc sc7280_mc_virt = {
+    .nodes = mc_virt_nodes,
+    .num_nodes = ARRAY_SIZE(mc_virt_nodes),
+    .bcms = mc_virt_bcms,
+    .num_bcms = ARRAY_SIZE(mc_virt_bcms),
+};
+
+static struct qcom_icc_bcm *mmss_noc_bcms[] = {
+    &bcm_mm0,
+    &bcm_mm1,
+    &bcm_mm4,
+    &bcm_mm5,
+};
+
+static struct qcom_icc_node *mmss_noc_nodes[] = {
+    [MASTER_CNOC_MNOC_CFG] = &qnm_mnoc_cfg,
+    [MASTER_VIDEO_P0] = &qnm_video0,
+    [MASTER_VIDEO_PROC] = &qnm_video_cpu,
+    [MASTER_CAMNOC_HF] = &qxm_camnoc_hf,
+    [MASTER_CAMNOC_ICP] = &qxm_camnoc_icp,
+    [MASTER_CAMNOC_SF] = &qxm_camnoc_sf,
+    [MASTER_MDP0] = &qxm_mdp0,
+    [SLAVE_MNOC_HF_MEM_NOC] = &qns_mem_noc_hf,
+    [SLAVE_MNOC_SF_MEM_NOC] = &qns_mem_noc_sf,
+    [SLAVE_SERVICE_MNOC] = &srvc_mnoc,
+};
+
+static struct qcom_icc_desc sc7280_mmss_noc = {
+    .nodes = mmss_noc_nodes,
+    .num_nodes = ARRAY_SIZE(mmss_noc_nodes),
+    .bcms = mmss_noc_bcms,
+    .num_bcms = ARRAY_SIZE(mmss_noc_bcms),
+};
+
+static struct qcom_icc_bcm *nsp_noc_bcms[] = {
+    &bcm_co0,
+    &bcm_co3,
+};
+
+static struct qcom_icc_node *nsp_noc_nodes[] = {
+    [MASTER_CDSP_NOC_CFG] = &qhm_nsp_noc_config,
+    [MASTER_CDSP_PROC] = &qxm_nsp,
+    [SLAVE_CDSP_MEM_NOC] = &qns_nsp_gemnoc,
+    [SLAVE_SERVICE_NSP_NOC] = &service_nsp_noc,
+};
+
+static struct qcom_icc_desc sc7280_nsp_noc = {
+    .nodes = nsp_noc_nodes,
+    .num_nodes = ARRAY_SIZE(nsp_noc_nodes),
+    .bcms = nsp_noc_bcms,
+    .num_bcms = ARRAY_SIZE(nsp_noc_bcms),
+};
+
+static struct qcom_icc_bcm *system_noc_bcms[] = {
+    &bcm_sn0,
+    &bcm_sn2,
+    &bcm_sn7,
+    &bcm_sn8,
+};
+
+static struct qcom_icc_node *system_noc_nodes[] = {
+    [MASTER_A1NOC_SNOC] = &qnm_aggre1_noc,
+    [MASTER_A2NOC_SNOC] = &qnm_aggre2_noc,
+    [MASTER_SNOC_CFG] = &qnm_snoc_cfg,
+    [MASTER_PIMEM] = &qxm_pimem,
+    [MASTER_GIC] = &xm_gic,
+    [SLAVE_SNOC_GEM_NOC_GC] = &qns_gemnoc_gc,
+    [SLAVE_SNOC_GEM_NOC_SF] = &qns_gemnoc_sf,
+    [SLAVE_SERVICE_SNOC] = &srvc_snoc,
+};
+
+static struct qcom_icc_desc sc7280_system_noc = {
+    .nodes = system_noc_nodes,
+    .num_nodes = ARRAY_SIZE(system_noc_nodes),
+    .bcms = system_noc_bcms,
+    .num_bcms = ARRAY_SIZE(system_noc_bcms),
+};
+
+static int qnoc_probe(struct platform_device *pdev)
+{
+    const struct qcom_icc_desc *desc;
+    struct icc_onecell_data *data;
+    struct icc_provider *provider;
+    struct qcom_icc_node **qnodes;
+    struct qcom_icc_provider *qp;
+    struct icc_node *node;
+    size_t num_nodes, i;
+    int ret;
+
+    desc = device_get_match_data(&pdev->dev);
+    if (!desc)
+        return -EINVAL;
+
+    qnodes = desc->nodes;
+    num_nodes = desc->num_nodes;
+
+    qp = devm_kzalloc(&pdev->dev, sizeof(*qp), GFP_KERNEL);
+    if (!qp)
+        return -ENOMEM;
+
+    data = devm_kcalloc(&pdev->dev, num_nodes, sizeof(*node), GFP_KERNEL);
+    if (!data)
+        return -ENOMEM;
+
+    provider = &qp->provider;
+    provider->dev = &pdev->dev;
+    provider->set = qcom_icc_set;
+    provider->pre_aggregate = qcom_icc_pre_aggregate;
+    provider->aggregate = qcom_icc_aggregate;
+    provider->xlate_extended = qcom_icc_xlate_extended;
+    INIT_LIST_HEAD(&provider->nodes);
+    provider->data = data;
+
+    qp->dev = &pdev->dev;
+    qp->bcms = desc->bcms;
+    qp->num_bcms = desc->num_bcms;
+
+    qp->voter = of_bcm_voter_get(qp->dev, NULL);
+    if (IS_ERR(qp->voter))
+        return PTR_ERR(qp->voter);
+
+    ret = icc_provider_add(provider);
+    if (ret) {
+        dev_err(&pdev->dev, "error adding interconnect provider\n");
+        return ret;
+    }
+
+    for (i = 0; i < qp->num_bcms; i++)
+        qcom_icc_bcm_init(qp->bcms[i], &pdev->dev);
+
+    for (i = 0; i < num_nodes; i++) {
+        size_t j;
+
+        if (!qnodes[i])
+            continue;
+
+        node = icc_node_create(qnodes[i]->id);
+        if (IS_ERR(node)) {
+            ret = PTR_ERR(node);
+            goto err;
+        }
+
+        node->name = qnodes[i]->name;
+        node->data = qnodes[i];
+        icc_node_add(node, provider);
+
+        for (j = 0; j < qnodes[i]->num_links; j++)
+            icc_link_create(node, qnodes[i]->links[j]);
+
+        data->nodes[i] = node;
+    }
+    data->num_nodes = num_nodes;
+
+    platform_set_drvdata(pdev, qp);
+
+    return 0;
+err:
+    icc_nodes_remove(provider);
+    icc_provider_del(provider);
+    return ret;
+}
+
+static int qnoc_remove(struct platform_device *pdev)
+{
+    struct qcom_icc_provider *qp = platform_get_drvdata(pdev);
+
+    icc_nodes_remove(&qp->provider);
+    return icc_provider_del(&qp->provider);
+}
+
+static const struct of_device_id qnoc_of_match[] = {
+    { .compatible = "qcom,sc7280-aggre1-noc",
+      .data = &sc7280_aggre1_noc},
+    { .compatible = "qcom,sc7280-aggre2-noc",
+      .data = &sc7280_aggre2_noc},
+    { .compatible = "qcom,sc7280-clk-virt",
+      .data = &sc7280_clk_virt},
+    { .compatible = "qcom,sc7280-cnoc2",
+      .data = &sc7280_cnoc2},
+    { .compatible = "qcom,sc7280-cnoc3",
+      .data = &sc7280_cnoc3},
+    { .compatible = "qcom,sc7280-dc-noc",
+      .data = &sc7280_dc_noc},
+    { .compatible = "qcom,sc7280-gem-noc",
+      .data = &sc7280_gem_noc},
+    { .compatible = "qcom,sc7280-lpass-ag-noc",
+      .data = &sc7280_lpass_ag_noc},
+    { .compatible = "qcom,sc7280-mc-virt",
+      .data = &sc7280_mc_virt},
+    { .compatible = "qcom,sc7280-mmss-noc",
+      .data = &sc7280_mmss_noc},
+    { .compatible = "qcom,sc7280-nsp-noc",
+      .data = &sc7280_nsp_noc},
+    { .compatible = "qcom,sc7280-system-noc",
+      .data = &sc7280_system_noc},
+    { }
+};
+MODULE_DEVICE_TABLE(of, qnoc_of_match);
+
+static struct platform_driver qnoc_driver = {
+    .probe = qnoc_probe,
+    .remove = qnoc_remove,
+    .driver = {
+        .name = "qnoc-sc7280",
+        .of_match_table = qnoc_of_match,
+        .sync_state = icc_sync_state,
+    },
+};
+module_platform_driver(qnoc_driver);
+
+MODULE_DESCRIPTION("SC7280 NoC driver");
+MODULE_LICENSE("GPL v2");
+154
drivers/interconnect/qcom/sc7280.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Qualcomm #define SC7280 interconnect IDs
+ *
+ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __DRIVERS_INTERCONNECT_QCOM_SC7280_H
+#define __DRIVERS_INTERCONNECT_QCOM_SC7280_H
+
+#define SC7280_MASTER_GPU_TCU 0
+#define SC7280_MASTER_SYS_TCU 1
+#define SC7280_MASTER_APPSS_PROC 2
+#define SC7280_MASTER_LLCC 3
+#define SC7280_MASTER_CNOC_LPASS_AG_NOC 4
+#define SC7280_MASTER_CDSP_NOC_CFG 5
+#define SC7280_MASTER_QDSS_BAM 6
+#define SC7280_MASTER_QSPI_0 7
+#define SC7280_MASTER_QUP_0 8
+#define SC7280_MASTER_QUP_1 9
+#define SC7280_MASTER_A1NOC_CFG 10
+#define SC7280_MASTER_A2NOC_CFG 11
+#define SC7280_MASTER_A1NOC_SNOC 12
+#define SC7280_MASTER_A2NOC_SNOC 13
+#define SC7280_MASTER_COMPUTE_NOC 14
+#define SC7280_MASTER_CNOC2_CNOC3 15
+#define SC7280_MASTER_CNOC3_CNOC2 16
+#define SC7280_MASTER_CNOC_A2NOC 17
+#define SC7280_MASTER_CNOC_DC_NOC 18
+#define SC7280_MASTER_GEM_NOC_CFG 19
+#define SC7280_MASTER_GEM_NOC_CNOC 20
+#define SC7280_MASTER_GEM_NOC_PCIE_SNOC 21
+#define SC7280_MASTER_GFX3D 22
+#define SC7280_MASTER_CNOC_MNOC_CFG 23
+#define SC7280_MASTER_MNOC_HF_MEM_NOC 24
+#define SC7280_MASTER_MNOC_SF_MEM_NOC 25
+#define SC7280_MASTER_ANOC_PCIE_GEM_NOC 26
+#define SC7280_MASTER_SNOC_CFG 27
+#define SC7280_MASTER_SNOC_GC_MEM_NOC 28
+#define SC7280_MASTER_SNOC_SF_MEM_NOC 29
+#define SC7280_MASTER_VIDEO_P0 30
+#define SC7280_MASTER_VIDEO_PROC 31
+#define SC7280_MASTER_QUP_CORE_0 32
+#define SC7280_MASTER_QUP_CORE_1 33
+#define SC7280_MASTER_CAMNOC_HF 34
+#define SC7280_MASTER_CAMNOC_ICP 35
+#define SC7280_MASTER_CAMNOC_SF 36
+#define SC7280_MASTER_CRYPTO 37
+#define SC7280_MASTER_IPA 38
+#define SC7280_MASTER_MDP0 39
+#define SC7280_MASTER_CDSP_PROC 40
+#define SC7280_MASTER_PIMEM 41
+#define SC7280_MASTER_GIC 42
+#define SC7280_MASTER_PCIE_0 43
+#define SC7280_MASTER_PCIE_1 44
+#define SC7280_MASTER_QDSS_DAP 45
+#define SC7280_MASTER_QDSS_ETR 46
+#define SC7280_MASTER_SDCC_1 47
+#define SC7280_MASTER_SDCC_2 48
+#define SC7280_MASTER_SDCC_4 49
+#define SC7280_MASTER_UFS_MEM 50
+#define SC7280_MASTER_USB2 51
+#define SC7280_MASTER_USB3_0 52
+#define SC7280_SLAVE_EBI1 53
+#define SC7280_SLAVE_AHB2PHY_SOUTH 54
+#define SC7280_SLAVE_AHB2PHY_NORTH 55
+#define SC7280_SLAVE_AOSS 56
+#define SC7280_SLAVE_APPSS 57
+#define SC7280_SLAVE_CAMERA_CFG 58
+#define SC7280_SLAVE_CLK_CTL 59
+#define SC7280_SLAVE_CDSP_CFG 60
+#define SC7280_SLAVE_RBCPR_CX_CFG 61
+#define SC7280_SLAVE_RBCPR_MX_CFG 62
+#define SC7280_SLAVE_CRYPTO_0_CFG 63
+#define SC7280_SLAVE_CX_RDPM 64
+#define SC7280_SLAVE_DCC_CFG 65
+#define SC7280_SLAVE_DISPLAY_CFG 66
+#define SC7280_SLAVE_GFX3D_CFG 67
+#define SC7280_SLAVE_HWKM 68
+#define SC7280_SLAVE_IMEM_CFG 69
+#define SC7280_SLAVE_IPA_CFG 70
+#define SC7280_SLAVE_IPC_ROUTER_CFG 71
+#define SC7280_SLAVE_LLCC_CFG 72
+#define SC7280_SLAVE_LPASS 73
+#define SC7280_SLAVE_LPASS_CORE_CFG 74
+#define SC7280_SLAVE_LPASS_LPI_CFG 75
+#define SC7280_SLAVE_LPASS_MPU_CFG 76
+#define SC7280_SLAVE_LPASS_TOP_CFG 77
+#define SC7280_SLAVE_MSS_PROC_MS_MPU_CFG 78
+#define SC7280_SLAVE_MCDMA_MS_MPU_CFG 79
+#define SC7280_SLAVE_CNOC_MSS 80
+#define SC7280_SLAVE_MX_RDPM 81
+#define SC7280_SLAVE_PCIE_0_CFG 82
+#define SC7280_SLAVE_PCIE_1_CFG 83
+#define SC7280_SLAVE_PDM 84
+#define SC7280_SLAVE_PIMEM_CFG 85
+#define SC7280_SLAVE_PKA_WRAPPER_CFG 86
+#define SC7280_SLAVE_PMU_WRAPPER_CFG 87
+#define SC7280_SLAVE_QDSS_CFG 88
+#define SC7280_SLAVE_QSPI_0 89
+#define SC7280_SLAVE_QUP_0 90
+#define SC7280_SLAVE_QUP_1 91
+#define SC7280_SLAVE_SDCC_1 92
+#define SC7280_SLAVE_SDCC_2 93
+#define SC7280_SLAVE_SDCC_4 94
+#define SC7280_SLAVE_SECURITY 95
+#define SC7280_SLAVE_TCSR 96
+#define SC7280_SLAVE_TLMM 97
+#define SC7280_SLAVE_UFS_MEM_CFG 98
+#define SC7280_SLAVE_USB2 99
+#define SC7280_SLAVE_USB3_0 100
+#define SC7280_SLAVE_VENUS_CFG 101
+#define SC7280_SLAVE_VSENSE_CTRL_CFG 102
+#define SC7280_SLAVE_A1NOC_CFG 103
+#define SC7280_SLAVE_A1NOC_SNOC 104
+#define SC7280_SLAVE_A2NOC_CFG 105
+#define SC7280_SLAVE_A2NOC_SNOC 106
+#define SC7280_SLAVE_CNOC2_CNOC3 107
+#define SC7280_SLAVE_CNOC3_CNOC2 108
+#define SC7280_SLAVE_CNOC_A2NOC 109
+#define SC7280_SLAVE_DDRSS_CFG 110
+#define SC7280_SLAVE_GEM_NOC_CNOC 111
+#define SC7280_SLAVE_GEM_NOC_CFG 112
+#define SC7280_SLAVE_SNOC_GEM_NOC_GC 113
+#define SC7280_SLAVE_SNOC_GEM_NOC_SF 114
+#define SC7280_SLAVE_LLCC 115
+#define SC7280_SLAVE_MNOC_HF_MEM_NOC 116
+#define SC7280_SLAVE_MNOC_SF_MEM_NOC 117
+#define SC7280_SLAVE_CNOC_MNOC_CFG 118
+#define SC7280_SLAVE_CDSP_MEM_NOC 119
+#define SC7280_SLAVE_MEM_NOC_PCIE_SNOC 120
+#define SC7280_SLAVE_ANOC_PCIE_GEM_NOC 121
+#define SC7280_SLAVE_SNOC_CFG 122
+#define SC7280_SLAVE_QUP_CORE_0 123
+#define SC7280_SLAVE_QUP_CORE_1 124
+#define SC7280_SLAVE_BOOT_IMEM 125
+#define SC7280_SLAVE_IMEM 126
+#define SC7280_SLAVE_PIMEM 127
+#define SC7280_SLAVE_SERVICE_NSP_NOC 128
+#define SC7280_SLAVE_SERVICE_A1NOC 129
+#define SC7280_SLAVE_SERVICE_A2NOC 130
+#define SC7280_SLAVE_SERVICE_GEM_NOC_1 131
+#define SC7280_SLAVE_SERVICE_MNOC 132
+#define SC7280_SLAVE_SERVICES_LPASS_AML_NOC 133
+#define SC7280_SLAVE_SERVICE_LPASS_AG_NOC 134
+#define SC7280_SLAVE_SERVICE_GEM_NOC_2 135
+#define SC7280_SLAVE_SERVICE_SNOC 136
+#define SC7280_SLAVE_SERVICE_GEM_NOC 137
+#define SC7280_SLAVE_PCIE_0 138
+#define SC7280_SLAVE_PCIE_1 139
+#define SC7280_SLAVE_QDSS_STM 140
+#define SC7280_SLAVE_TCU 141
+
+#endif
+5 -4
drivers/ipack/carriers/tpci200.c
···
 // SPDX-License-Identifier: GPL-2.0-only
-/**
- * tpci200.c
- *
+/*
  * driver for the TEWS TPCI-200 device
  *
  * Copyright (C) 2009-2012 CERN (www.cern.ch)
···
 
 out_err_bus_register:
     tpci200_uninstall(tpci200);
+    /* tpci200->info->cfg_regs is unmapped in tpci200_uninstall */
+    tpci200->info->cfg_regs = NULL;
 out_err_install:
-    iounmap(tpci200->info->cfg_regs);
+    if (tpci200->info->cfg_regs)
+        iounmap(tpci200->info->cfg_regs);
 out_err_ioremap:
     pci_release_region(pdev, TPCI200_CFG_MEM_BAR);
 out_err_pci_request:
+1 -3
drivers/ipack/carriers/tpci200.h
···
 /* SPDX-License-Identifier: GPL-2.0-only */
-/**
- * tpci200.h
- *
+/*
  * driver for the carrier TEWS TPCI-200
  *
  * Copyright (C) 2009-2012 CERN (www.cern.ch)
+1 -3
drivers/ipack/devices/ipoctal.c
···
 // SPDX-License-Identifier: GPL-2.0-only
-/**
- * ipoctal.c
- *
+/*
  * driver for the GE IP-OCTAL boards
  *
  * Copyright (C) 2009-2012 CERN (www.cern.ch)
+2 -4
drivers/ipack/devices/ipoctal.h
···
 /* SPDX-License-Identifier: GPL-2.0-only */
-/**
- * ipoctal.h
- *
+/*
  * driver for the IPOCTAL boards
-
+ *
  * Copyright (C) 2009-2012 CERN (www.cern.ch)
  * Author: Nicolas Serafini, EIC2 SA
  * Author: Samuel Iglesias Gonsalvez <siglesias@igalia.com>
+2 -11
drivers/mcb/mcb-lpc.c
···
     return ret;
 }
 
-static struct resource sc24_fpga_resource = {
-    .start = 0xe000e000,
-    .end = 0xe000e000 + CHAM_HEADER_SIZE,
-    .flags = IORESOURCE_MEM,
-};
-
-static struct resource sc31_fpga_resource = {
-    .start = 0xf000e000,
-    .end = 0xf000e000 + CHAM_HEADER_SIZE,
-    .flags = IORESOURCE_MEM,
-};
+static struct resource sc24_fpga_resource = DEFINE_RES_MEM(0xe000e000, CHAM_HEADER_SIZE);
+static struct resource sc31_fpga_resource = DEFINE_RES_MEM(0xf000e000, CHAM_HEADER_SIZE);
 
 static struct platform_driver mcb_lpc_driver = {
     .driver = {
+2 -4
drivers/misc/bcm-vk/bcm_vk_msg.c
···
     for (num = 0; num < chan->q_nr; num++) {
         list_for_each_entry_safe(entry, tmp, &chan->pendq[num], node) {
             if ((!ctx) || (entry->ctx->idx == ctx->idx)) {
-                list_del(&entry->node);
-                list_add_tail(&entry->node, &del_q);
+                list_move_tail(&entry->node, &del_q);
             }
         }
     }
···
         return -EINVAL;
     }
 
-    entry = kzalloc(sizeof(*entry) +
-            sizeof(struct vk_msg_blk), GFP_KERNEL);
+    entry = kzalloc(struct_size(entry, to_v_msg, 1), GFP_KERNEL);
     if (!entry)
         return -ENOMEM;
+1 -1
drivers/misc/bcm-vk/bcm_vk_msg.h
···
     u32 usr_msg_id;
     u32 to_v_blks;
     u32 seq_num;
-    struct vk_msg_blk to_v_msg[0];
+    struct vk_msg_blk to_v_msg[];
};
 
 /* queue stats counters */
+7 -1
drivers/misc/cardreader/alcor_pci.c
···
     u32 val32;
 
     priv->pdev_cap_off = alcor_pci_find_cap_offset(priv, priv->pdev);
-    priv->parent_cap_off = alcor_pci_find_cap_offset(priv,
+    /*
+     * A device might be attached to root complex directly and
+     * priv->parent_pdev will be NULL. In this case we don't check its
+     * capability and disable ASPM completely.
+     */
+    if (priv->parent_pdev)
+        priv->parent_cap_off = alcor_pci_find_cap_offset(priv,
                              priv->parent_pdev);
 
     if ((priv->pdev_cap_off == 0) || (priv->parent_cap_off == 0)) {
+3 -2
drivers/misc/cxl/file.c
···
     int rc;
 
     cdev_init(cdev, fops);
-    if ((rc = cdev_add(cdev, devt, 1))) {
+    rc = cdev_add(cdev, devt, 1);
+    if (rc) {
         dev_err(&afu->dev, "Unable to add %s chardev: %i\n", desc, rc);
         return rc;
     }
···
     dev = device_create(cxl_class, &afu->dev, devt, afu,
                 "afu%i.%i%s", afu->adapter->adapter_num, afu->slice, postfix);
     if (IS_ERR(dev)) {
-        dev_err(&afu->dev, "Unable to create %s chardev in sysfs: %i\n", desc, rc);
         rc = PTR_ERR(dev);
+        dev_err(&afu->dev, "Unable to create %s chardev in sysfs: %i\n", desc, rc);
         goto err;
     }
+3 -2
drivers/misc/eeprom/Kconfig
···
       will be called at24.
 
 config EEPROM_AT25
-    tristate "SPI EEPROMs from most vendors"
+    tristate "SPI EEPROMs (FRAMs) from most vendors"
     depends on SPI && SYSFS
     select NVMEM
     select NVMEM_SYSFS
     help
-      Enable this driver to get read/write support to most SPI EEPROMs,
+      Enable this driver to get read/write support to most SPI EEPROMs
+      and Cypress FRAMs,
       after you configure the board init code to know about each eeprom
       on your target board.
 
+128 -30
drivers/misc/eeprom/at25.c
···
 // SPDX-License-Identifier: GPL-2.0-or-later
 /*
  * at25.c -- support most SPI EEPROMs, such as Atmel AT25 models
+ *           and Cypress FRAMs FM25 models
  *
  * Copyright (C) 2006 David Brownell
  */
···
 #include <linux/spi/spi.h>
 #include <linux/spi/eeprom.h>
 #include <linux/property.h>
+#include <linux/of.h>
+#include <linux/of_device.h>
+#include <linux/math.h>
 
 /*
  * NOTE: this is an *EEPROM* driver. The vagaries of product naming
···
  * AT25M02, AT25128B
  */
 
+#define FM25_SN_LEN 8 /* serial number length */
 struct at25_data {
     struct spi_device *spi;
     struct mutex lock;
···
     unsigned addrlen;
     struct nvmem_config nvmem_config;
     struct nvmem_device *nvmem;
+    u8 sernum[FM25_SN_LEN];
 };
 
 #define AT25_WREN 0x06 /* latch the write enable */
···
 #define AT25_WRSR 0x01 /* write status register */
 #define AT25_READ 0x03 /* read byte(s) */
 #define AT25_WRITE 0x02 /* write byte(s)/sector */
+#define FM25_SLEEP 0xb9 /* enter sleep mode */
+#define FM25_RDID 0x9f /* read device ID */
+#define FM25_RDSN 0xc3 /* read S/N */
 
 #define AT25_SR_nRDY 0x01 /* nRDY = write-in-progress */
 #define AT25_SR_WEN 0x02 /* write enable (latched) */
···
 #define AT25_SR_WPEN 0x80 /* writeprotect enable */
 
 #define AT25_INSTR_BIT3 0x08 /* Additional address bit in instr */
+
+#define FM25_ID_LEN 9 /* ID length */
 
 #define EE_MAXADDRLEN 3 /* 24 bit addresses, up to 2 MBytes */
 
···
     mutex_unlock(&at25->lock);
     return status;
 }
+
+/*
+ * read extra registers as ID or serial number
+ */
+static int fm25_aux_read(struct at25_data *at25, u8 *buf, uint8_t command,
+             int len)
+{
+    int status;
+    struct spi_transfer t[2];
+    struct spi_message m;
+
+    spi_message_init(&m);
+    memset(t, 0, sizeof(t));
+
+    t[0].tx_buf = &command;
+    t[0].len = 1;
+    spi_message_add_tail(&t[0], &m);
+
+    t[1].rx_buf = buf;
+    t[1].len = len;
+    spi_message_add_tail(&t[1], &m);
+
+    mutex_lock(&at25->lock);
+
+    status = spi_sync(at25->spi, &m);
+    dev_dbg(&at25->spi->dev, "read %d aux bytes --> %d\n", len, status);
+
+    mutex_unlock(&at25->lock);
+    return status;
+}
+
+static ssize_t sernum_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+    struct at25_data *at25;
+
+    at25 = dev_get_drvdata(dev);
+    return sysfs_emit(buf, "%*ph\n", (int)sizeof(at25->sernum), at25->sernum);
+}
+static DEVICE_ATTR_RO(sernum);
+
+static struct attribute *sernum_attrs[] = {
+    &dev_attr_sernum.attr,
+    NULL,
+};
+ATTRIBUTE_GROUPS(sernum);
 
 static int at25_ee_write(void *priv, unsigned int off, void *val, size_t count)
 {
···
     return 0;
 }
 
+static const struct of_device_id at25_of_match[] = {
+    { .compatible = "atmel,at25",},
+    { .compatible = "cypress,fm25",},
+    { }
+};
+MODULE_DEVICE_TABLE(of, at25_of_match);
+
 static int at25_probe(struct spi_device *spi)
 {
     struct at25_data *at25 = NULL;
     struct spi_eeprom chip;
     int err;
     int sr;
-    int addrlen;
+    u8 id[FM25_ID_LEN];
+    u8 sernum[FM25_SN_LEN];
+    int i;
+    const struct of_device_id *match;
+    bool is_fram = 0;
+
+    match = of_match_device(of_match_ptr(at25_of_match), &spi->dev);
+    if (match && !strcmp(match->compatible, "cypress,fm25"))
+        is_fram = 1;
 
     /* Chip description */
     if (!spi->dev.platform_data) {
-        err = at25_fw_to_chip(&spi->dev, &chip);
-        if (err)
-            return err;
+        if (!is_fram) {
+            err = at25_fw_to_chip(&spi->dev, &chip);
+            if (err)
+                return err;
+        }
     } else
         chip = *(struct spi_eeprom *)spi->dev.platform_data;
-
-    /* For now we only support 8/16/24 bit addressing */
-    if (chip.flags & EE_ADDR1)
-        addrlen = 1;
-    else if (chip.flags & EE_ADDR2)
-        addrlen = 2;
-    else if (chip.flags & EE_ADDR3)
-        addrlen = 3;
-    else {
-        dev_dbg(&spi->dev, "unsupported address type\n");
-        return -EINVAL;
-    }
 
     /* Ping the chip ... the status register is pretty portable,
      * unlike probing manufacturer IDs. We do expect that system
···
     at25->chip = chip;
     at25->spi = spi;
     spi_set_drvdata(spi, at25);
-    at25->addrlen = addrlen;
 
-    at25->nvmem_config.type = NVMEM_TYPE_EEPROM;
+    if (is_fram) {
+        /* Get ID of chip */
+        fm25_aux_read(at25, id, FM25_RDID, FM25_ID_LEN);
+        if (id[6] != 0xc2) {
+            dev_err(&spi->dev,
+                "Error: no Cypress FRAM (id %02x)\n", id[6]);
+            return -ENODEV;
+        }
+        /* set size found in ID */
+        if (id[7] < 0x21 || id[7] > 0x26) {
+            dev_err(&spi->dev, "Error: unsupported size (id %02x)\n", id[7]);
+            return -ENODEV;
+        }
+        chip.byte_len = int_pow(2, id[7] - 0x21 + 4) * 1024;
+
+        if (at25->chip.byte_len > 64 * 1024)
+            at25->chip.flags |= EE_ADDR3;
+        else
+            at25->chip.flags |= EE_ADDR2;
+
+        if (id[8]) {
+            fm25_aux_read(at25, sernum, FM25_RDSN, FM25_SN_LEN);
+            /* swap byte order */
+            for (i = 0; i < FM25_SN_LEN; i++)
+                at25->sernum[i] = sernum[FM25_SN_LEN - 1 - i];
+        }
+
+        at25->chip.page_size = PAGE_SIZE;
+        strncpy(at25->chip.name, "fm25", sizeof(at25->chip.name));
+    }
+
+    /* For now we only support 8/16/24 bit addressing */
+    if (at25->chip.flags & EE_ADDR1)
+        at25->addrlen = 1;
+    else if (at25->chip.flags & EE_ADDR2)
+        at25->addrlen = 2;
+    else if (at25->chip.flags & EE_ADDR3)
+        at25->addrlen = 3;
+    else {
+        dev_dbg(&spi->dev, "unsupported address type\n");
+        return -EINVAL;
+    }
+
+    at25->nvmem_config.type = is_fram ? NVMEM_TYPE_FRAM : NVMEM_TYPE_EEPROM;
     at25->nvmem_config.name = dev_name(&spi->dev);
     at25->nvmem_config.dev = &spi->dev;
     at25->nvmem_config.read_only = chip.flags & EE_READONLY;
···
     if (IS_ERR(at25->nvmem))
         return PTR_ERR(at25->nvmem);
 
-    dev_info(&spi->dev, "%d %s %s eeprom%s, pagesize %u\n",
-         (chip.byte_len < 1024) ? chip.byte_len : (chip.byte_len / 1024),
-         (chip.byte_len < 1024) ? "Byte" : "KByte",
-         at25->chip.name,
-         (chip.flags & EE_READONLY) ? " (readonly)" : "",
-         at25->chip.page_size);
+    dev_info(&spi->dev, "%d %s %s %s%s, pagesize %u\n",
+         (chip.byte_len < 1024) ? chip.byte_len : (chip.byte_len / 1024),
+         (chip.byte_len < 1024) ? "Byte" : "KByte",
+         at25->chip.name, is_fram ? "fram" : "eeprom",
+         (chip.flags & EE_READONLY) ? " (readonly)" : "",
+         at25->chip.page_size);
     return 0;
 }
 
 /*-------------------------------------------------------------------------*/
 
-static const struct of_device_id at25_of_match[] = {
-    { .compatible = "atmel,at25", },
-    { }
-};
-MODULE_DEVICE_TABLE(of, at25_of_match);
-
 static struct spi_driver at25_driver = {
     .driver = {
         .name = "at25",
         .of_match_table = at25_of_match,
+        .dev_groups = sernum_groups,
     },
     .probe = at25_probe,
 };
+90 -133
drivers/misc/eeprom/ee1004.c
···
  */
 
 #define EE1004_ADDR_SET_PAGE 0x36
-#define EE1004_EEPROM_SIZE 512
+#define EE1004_NUM_PAGES 2
 #define EE1004_PAGE_SIZE 256
 #define EE1004_PAGE_SHIFT 8
+#define EE1004_EEPROM_SIZE (EE1004_PAGE_SIZE * EE1004_NUM_PAGES)
 
 /*
  * Mutex protects ee1004_set_page and ee1004_dev_count, and must be held
  * from page selection to end of read.
  */
 static DEFINE_MUTEX(ee1004_bus_lock);
-static struct i2c_client *ee1004_set_page[2];
+static struct i2c_client *ee1004_set_page[EE1004_NUM_PAGES];
 static unsigned int ee1004_dev_count;
 static int ee1004_current_page;
···
     return 0;
 }
 
+static int ee1004_set_current_page(struct device *dev, int page)
+{
+    int ret;
+
+    if (page == ee1004_current_page)
+        return 0;
+
+    /* Data is ignored */
+    ret = i2c_smbus_write_byte(ee1004_set_page[page], 0x00);
+    /*
+     * Don't give up just yet. Some memory modules will select the page
+     * but not ack the command. Check which page is selected now.
+     */
+    if (ret == -ENXIO && ee1004_get_current_page() == page)
+        ret = 0;
+    if (ret < 0) {
+        dev_err(dev, "Failed to select page %d (%d)\n", page, ret);
+        return ret;
+    }
+
+    dev_dbg(dev, "Selected page %d\n", page);
+    ee1004_current_page = page;
+
+    return 0;
+}
+
 static ssize_t ee1004_eeprom_read(struct i2c_client *client, char *buf,
                   unsigned int offset, size_t count)
 {
-    int status;
+    int status, page;
 
-    if (count > I2C_SMBUS_BLOCK_MAX)
-        count = I2C_SMBUS_BLOCK_MAX;
+    page = offset >> EE1004_PAGE_SHIFT;
+    offset &= (1 << EE1004_PAGE_SHIFT) - 1;
+
+    status = ee1004_set_current_page(&client->dev, page);
+    if (status)
+        return status;
+
     /* Can't cross page boundaries */
-    if (unlikely(offset + count > EE1004_PAGE_SIZE))
+    if (offset + count > EE1004_PAGE_SIZE)
         count = EE1004_PAGE_SIZE - offset;
 
-    status = i2c_smbus_read_i2c_block_data_or_emulated(client, offset,
-                               count, buf);
-    dev_dbg(&client->dev, "read %zu@%d --> %d\n", count, offset, status);
-
-    return status;
+    return i2c_smbus_read_i2c_block_data_or_emulated(client, offset, count, buf);
 }
 
-static ssize_t ee1004_read(struct file *filp, struct kobject *kobj,
+static ssize_t eeprom_read(struct file *filp, struct kobject *kobj,
                struct bin_attribute *bin_attr,
                char *buf, loff_t off, size_t count)
 {
-    struct device *dev = kobj_to_dev(kobj);
-    struct i2c_client *client = to_i2c_client(dev);
+    struct i2c_client *client = kobj_to_i2c_client(kobj);
     size_t requested = count;
-    int page;
-
-    if (unlikely(!count))
-        return count;
-
-    page = off >> EE1004_PAGE_SHIFT;
-    if (unlikely(page > 1))
-        return 0;
-    off &= (1 << EE1004_PAGE_SHIFT) - 1;
+    int ret = 0;
 
     /*
      * Read data from chip, protecting against concurrent access to
···
     mutex_lock(&ee1004_bus_lock);
 
     while (count) {
-        int status;
+        ret = ee1004_eeprom_read(client, buf, off, count);
+        if (ret < 0)
+            goto out;
 
-        /* Select page */
-        if (page != ee1004_current_page) {
-            /* Data is ignored */
-            status = i2c_smbus_write_byte(ee1004_set_page[page],
-                              0x00);
-            if (status == -ENXIO) {
-                /*
-                 * Don't give up just yet. Some memory
-                 * modules will select the page but not
-                 * ack the command. Check which page is
-                 * selected now.
-                 */
-                if (ee1004_get_current_page() == page)
-                    status = 0;
-            }
-            if (status < 0) {
-                dev_err(dev, "Failed to select page %d (%d)\n",
-                    page, status);
-                mutex_unlock(&ee1004_bus_lock);
-                return status;
-            }
-            dev_dbg(dev, "Selected page %d\n", page);
-            ee1004_current_page = page;
-        }
-
-        status = ee1004_eeprom_read(client, buf, off, count);
-        if (status < 0) {
-            mutex_unlock(&ee1004_bus_lock);
-            return status;
-        }
-        buf += status;
-        off += status;
-        count -= status;
-
-        if (off == EE1004_PAGE_SIZE) {
-            page++;
-            off = 0;
-        }
+        buf += ret;
+        off += ret;
+        count -= ret;
     }
-
+out:
     mutex_unlock(&ee1004_bus_lock);
 
-    return requested;
+    return ret < 0 ? ret : requested;
 }
 
-static const struct bin_attribute eeprom_attr = {
-    .attr = {
-        .name = "eeprom",
-        .mode = 0444,
-    },
-    .size = EE1004_EEPROM_SIZE,
-    .read = ee1004_read,
+static BIN_ATTR_RO(eeprom, EE1004_EEPROM_SIZE);
+
+static struct bin_attribute *ee1004_attrs[] = {
+    &bin_attr_eeprom,
+    NULL
 };
 
-static int ee1004_probe(struct i2c_client *client,
-            const struct i2c_device_id *id)
+BIN_ATTRIBUTE_GROUPS(ee1004);
+
+static void ee1004_cleanup(int idx)
+{
+    if (--ee1004_dev_count == 0)
+        while (--idx >= 0) {
+            i2c_unregister_device(ee1004_set_page[idx]);
+            ee1004_set_page[idx] = NULL;
+        }
+}
+
+static int ee1004_probe(struct i2c_client *client)
 {
     int err, cnr = 0;
-    const char *slow = NULL;
 
     /* Make sure we can operate on this adapter */
     if (!i2c_check_functionality(client->adapter,
-                     I2C_FUNC_SMBUS_READ_BYTE |
-                     I2C_FUNC_SMBUS_READ_I2C_BLOCK)) {
-        if (i2c_check_functionality(client->adapter,
-                        I2C_FUNC_SMBUS_READ_BYTE |
-                        I2C_FUNC_SMBUS_READ_WORD_DATA))
-            slow = "word";
-        else if (i2c_check_functionality(client->adapter,
-                         I2C_FUNC_SMBUS_READ_BYTE |
-                         I2C_FUNC_SMBUS_READ_BYTE_DATA))
-            slow = "byte";
-        else
-            return -EPFNOSUPPORT;
-    }
+                     I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_READ_I2C_BLOCK) &&
+        !i2c_check_functionality(client->adapter,
+                     I2C_FUNC_SMBUS_BYTE | I2C_FUNC_SMBUS_READ_BYTE_DATA))
+        return -EPFNOSUPPORT;
 
     /* Use 2 dummy devices for page select command */
     mutex_lock(&ee1004_bus_lock);
     if (++ee1004_dev_count == 1) {
-        for (cnr = 0; cnr < 2; cnr++) {
-            ee1004_set_page[cnr] = i2c_new_dummy_device(client->adapter,
-                            EE1004_ADDR_SET_PAGE + cnr);
-            if (IS_ERR(ee1004_set_page[cnr])) {
-                dev_err(&client->dev,
-                    "address 0x%02x unavailable\n",
-
EE1004_ADDR_SET_PAGE + cnr); 223 - err = PTR_ERR(ee1004_set_page[cnr]); 162 + for (cnr = 0; cnr < EE1004_NUM_PAGES; cnr++) { 163 + struct i2c_client *cl; 164 + 165 + cl = i2c_new_dummy_device(client->adapter, EE1004_ADDR_SET_PAGE + cnr); 166 + if (IS_ERR(cl)) { 167 + err = PTR_ERR(cl); 224 168 goto err_clients; 225 169 } 170 + ee1004_set_page[cnr] = cl; 226 171 } 227 - } else if (i2c_adapter_id(client->adapter) != 228 - i2c_adapter_id(ee1004_set_page[0]->adapter)) { 172 + 173 + /* Remember current page to avoid unneeded page select */ 174 + err = ee1004_get_current_page(); 175 + if (err < 0) 176 + goto err_clients; 177 + dev_dbg(&client->dev, "Currently selected page: %d\n", err); 178 + ee1004_current_page = err; 179 + } else if (client->adapter != ee1004_set_page[0]->adapter) { 229 180 dev_err(&client->dev, 230 181 "Driver only supports devices on a single I2C bus\n"); 231 182 err = -EOPNOTSUPP; 232 183 goto err_clients; 233 184 } 234 - 235 - /* Remember current page to avoid unneeded page select */ 236 - err = ee1004_get_current_page(); 237 - if (err < 0) 238 - goto err_clients; 239 - ee1004_current_page = err; 240 - dev_dbg(&client->dev, "Currently selected page: %d\n", 241 - ee1004_current_page); 242 185 mutex_unlock(&ee1004_bus_lock); 243 - 244 - /* Create the sysfs eeprom file */ 245 - err = sysfs_create_bin_file(&client->dev.kobj, &eeprom_attr); 246 - if (err) 247 - goto err_clients_lock; 248 186 249 187 dev_info(&client->dev, 250 188 "%u byte EE1004-compliant SPD EEPROM, read-only\n", 251 189 EE1004_EEPROM_SIZE); 252 - if (slow) 253 - dev_notice(&client->dev, 254 - "Falling back to %s reads, performance will suffer\n", 255 - slow); 256 190 257 191 return 0; 258 192 259 - err_clients_lock: 260 - mutex_lock(&ee1004_bus_lock); 261 193 err_clients: 262 - if (--ee1004_dev_count == 0) { 263 - for (cnr--; cnr >= 0; cnr--) { 264 - i2c_unregister_device(ee1004_set_page[cnr]); 265 - ee1004_set_page[cnr] = NULL; 266 - } 267 - } 194 + ee1004_cleanup(cnr); 268 195 
mutex_unlock(&ee1004_bus_lock); 269 196 270 197 return err; ··· 218 253 219 254 static int ee1004_remove(struct i2c_client *client) 220 255 { 221 - int i; 222 - 223 - sysfs_remove_bin_file(&client->dev.kobj, &eeprom_attr); 224 - 225 256 /* Remove page select clients if this is the last device */ 226 257 mutex_lock(&ee1004_bus_lock); 227 - if (--ee1004_dev_count == 0) { 228 - for (i = 0; i < 2; i++) { 229 - i2c_unregister_device(ee1004_set_page[i]); 230 - ee1004_set_page[i] = NULL; 231 - } 232 - } 258 + ee1004_cleanup(EE1004_NUM_PAGES); 233 259 mutex_unlock(&ee1004_bus_lock); 234 260 235 261 return 0; ··· 231 275 static struct i2c_driver ee1004_driver = { 232 276 .driver = { 233 277 .name = "ee1004", 278 + .dev_groups = ee1004_groups, 234 279 }, 235 - .probe = ee1004_probe, 280 + .probe_new = ee1004_probe, 236 281 .remove = ee1004_remove, 237 282 .id_table = ee1004_ids, 238 283 };
+63 -27
drivers/misc/eeprom/eeprom_93xx46.c
··· 9 9 #include <linux/device.h> 10 10 #include <linux/gpio/consumer.h> 11 11 #include <linux/kernel.h> 12 + #include <linux/log2.h> 12 13 #include <linux/module.h> 13 14 #include <linux/mutex.h> 14 15 #include <linux/of.h> ··· 29 28 30 29 struct eeprom_93xx46_devtype_data { 31 30 unsigned int quirks; 31 + unsigned char flags; 32 + }; 33 + 34 + static const struct eeprom_93xx46_devtype_data at93c46_data = { 35 + .flags = EE_SIZE1K, 36 + }; 37 + 38 + static const struct eeprom_93xx46_devtype_data at93c56_data = { 39 + .flags = EE_SIZE2K, 40 + }; 41 + 42 + static const struct eeprom_93xx46_devtype_data at93c66_data = { 43 + .flags = EE_SIZE4K, 32 44 }; 33 45 34 46 static const struct eeprom_93xx46_devtype_data atmel_at93c46d_data = { 47 + .flags = EE_SIZE1K, 35 48 .quirks = EEPROM_93XX46_QUIRK_SINGLE_WORD_READ | 36 49 EEPROM_93XX46_QUIRK_INSTRUCTION_LENGTH, 37 50 }; 38 51 39 52 static const struct eeprom_93xx46_devtype_data microchip_93lc46b_data = { 53 + .flags = EE_SIZE1K, 40 54 .quirks = EEPROM_93XX46_QUIRK_EXTRA_READ_CYCLE, 41 55 }; 42 56 ··· 86 70 struct eeprom_93xx46_dev *edev = priv; 87 71 char *buf = val; 88 72 int err = 0; 73 + int bits; 89 74 90 75 if (unlikely(off >= edev->size)) 91 76 return 0; ··· 100 83 if (edev->pdata->prepare) 101 84 edev->pdata->prepare(edev); 102 85 86 + /* The opcode in front of the address is three bits. 
*/ 87 + bits = edev->addrlen + 3; 88 + 103 89 while (count) { 104 90 struct spi_message m; 105 91 struct spi_transfer t[2] = { { 0 } }; 106 92 u16 cmd_addr = OP_READ << edev->addrlen; 107 93 size_t nbytes = count; 108 - int bits; 109 94 110 - if (edev->addrlen == 7) { 111 - cmd_addr |= off & 0x7f; 112 - bits = 10; 95 + if (edev->pdata->flags & EE_ADDR8) { 96 + cmd_addr |= off; 113 97 if (has_quirk_single_word_read(edev)) 114 98 nbytes = 1; 115 99 } else { 116 - cmd_addr |= (off >> 1) & 0x3f; 117 - bits = 9; 100 + cmd_addr |= (off >> 1); 118 101 if (has_quirk_single_word_read(edev)) 119 102 nbytes = 2; 120 103 } ··· 169 152 int bits, ret; 170 153 u16 cmd_addr; 171 154 155 + /* The opcode in front of the address is three bits. */ 156 + bits = edev->addrlen + 3; 157 + 172 158 cmd_addr = OP_START << edev->addrlen; 173 - if (edev->addrlen == 7) { 159 + if (edev->pdata->flags & EE_ADDR8) 174 160 cmd_addr |= (is_on ? ADDR_EWEN : ADDR_EWDS) << 1; 175 - bits = 10; 176 - } else { 161 + else 177 162 cmd_addr |= (is_on ? ADDR_EWEN : ADDR_EWDS); 178 - bits = 9; 179 - } 180 163 181 164 if (has_quirk_instruction_length(edev)) { 182 165 cmd_addr <<= 2; ··· 222 205 int bits, data_len, ret; 223 206 u16 cmd_addr; 224 207 208 + if (unlikely(off >= edev->size)) 209 + return -EINVAL; 210 + 211 + /* The opcode in front of the address is three bits. 
*/ 212 + bits = edev->addrlen + 3; 213 + 225 214 cmd_addr = OP_WRITE << edev->addrlen; 226 215 227 - if (edev->addrlen == 7) { 228 - cmd_addr |= off & 0x7f; 229 - bits = 10; 216 + if (edev->pdata->flags & EE_ADDR8) { 217 + cmd_addr |= off; 230 218 data_len = 1; 231 219 } else { 232 - cmd_addr |= (off >> 1) & 0x3f; 233 - bits = 9; 220 + cmd_addr |= (off >> 1); 234 221 data_len = 2; 235 222 } 236 223 ··· 274 253 return count; 275 254 276 255 /* only write even number of bytes on 16-bit devices */ 277 - if (edev->addrlen == 6) { 256 + if (edev->pdata->flags & EE_ADDR16) { 278 257 step = 2; 279 258 count &= ~1; 280 259 } ··· 316 295 int bits, ret; 317 296 u16 cmd_addr; 318 297 298 + /* The opcode in front of the address is three bits. */ 299 + bits = edev->addrlen + 3; 300 + 319 301 cmd_addr = OP_START << edev->addrlen; 320 - if (edev->addrlen == 7) { 302 + if (edev->pdata->flags & EE_ADDR8) 321 303 cmd_addr |= ADDR_ERAL << 1; 322 - bits = 10; 323 - } else { 304 + else 324 305 cmd_addr |= ADDR_ERAL; 325 - bits = 9; 326 - } 327 306 328 307 if (has_quirk_instruction_length(edev)) { 329 308 cmd_addr <<= 2; ··· 396 375 } 397 376 398 377 static const struct of_device_id eeprom_93xx46_of_table[] = { 399 - { .compatible = "eeprom-93xx46", }, 378 + { .compatible = "eeprom-93xx46", .data = &at93c46_data, }, 379 + { .compatible = "atmel,at93c46", .data = &at93c46_data, }, 400 380 { .compatible = "atmel,at93c46d", .data = &atmel_at93c46d_data, }, 381 + { .compatible = "atmel,at93c56", .data = &at93c56_data, }, 382 + { .compatible = "atmel,at93c66", .data = &at93c66_data, }, 401 383 { .compatible = "microchip,93lc46b", .data = &microchip_93lc46b_data, }, 402 384 {} 403 385 }; ··· 450 426 const struct eeprom_93xx46_devtype_data *data = of_id->data; 451 427 452 428 pd->quirks = data->quirks; 429 + pd->flags |= data->flags; 453 430 } 454 431 455 432 spi->dev.platform_data = pd; ··· 480 455 if (!edev) 481 456 return -ENOMEM; 482 457 458 + if (pd->flags & EE_SIZE1K) 459 + edev->size = 
128; 460 + else if (pd->flags & EE_SIZE2K) 461 + edev->size = 256; 462 + else if (pd->flags & EE_SIZE4K) 463 + edev->size = 512; 464 + else { 465 + dev_err(&spi->dev, "unspecified size\n"); 466 + return -EINVAL; 467 + } 468 + 483 469 if (pd->flags & EE_ADDR8) 484 - edev->addrlen = 7; 470 + edev->addrlen = ilog2(edev->size); 485 471 else if (pd->flags & EE_ADDR16) 486 - edev->addrlen = 6; 472 + edev->addrlen = ilog2(edev->size) - 1; 487 473 else { 488 474 dev_err(&spi->dev, "unspecified address type\n"); 489 475 return -EINVAL; ··· 505 469 edev->spi = spi; 506 470 edev->pdata = pd; 507 471 508 - edev->size = 128; 509 472 edev->nvmem_config.type = NVMEM_TYPE_EEPROM; 510 473 edev->nvmem_config.name = dev_name(&spi->dev); 511 474 edev->nvmem_config.dev = &spi->dev; ··· 524 489 if (IS_ERR(edev->nvmem)) 525 490 return PTR_ERR(edev->nvmem); 526 491 527 - dev_info(&spi->dev, "%d-bit eeprom %s\n", 492 + dev_info(&spi->dev, "%d-bit eeprom containing %d bytes %s\n", 528 493 (pd->flags & EE_ADDR8) ? 8 : 16, 494 + edev->size, 529 495 (pd->flags & EE_READONLY) ? "(readonly)" : ""); 530 496 531 497 if (!(pd->flags & EE_READONLY)) {
+5 -36
drivers/misc/eeprom/idt_89hpesx.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 1 2 /* 2 - * This file is provided under a GPLv2 license. When using or 3 - * redistributing this file, you may do so under that license. 4 - * 5 - * GPL LICENSE SUMMARY 6 - * 7 3 * Copyright (C) 2016 T-Platforms. All Rights Reserved. 8 - * 9 - * This program is free software; you can redistribute it and/or modify it 10 - * under the terms and conditions of the GNU General Public License, 11 - * version 2, as published by the Free Software Foundation. 12 - * 13 - * This program is distributed in the hope that it will be useful, but WITHOUT 14 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 15 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 16 - * more details. 17 - * 18 - * You should have received a copy of the GNU General Public License along 19 - * with this program; if not, it can be found <http://www.gnu.org/licenses/>. 20 - * 21 - * The full GNU General Public License is included in this distribution in 22 - * the file called "COPYING". 23 - * 24 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 25 - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 26 - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 27 - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 28 - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 29 - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 30 - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 31 - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 32 - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 33 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 34 - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
35 4 * 36 5 * IDT PCIe-switch NTB Linux driver 37 6 * ··· 1095 1126 1096 1127 device_for_each_child_node(dev, fwnode) { 1097 1128 ee_id = idt_ee_match_id(fwnode); 1098 - if (!ee_id) { 1099 - dev_warn(dev, "Skip unsupported EEPROM device"); 1100 - continue; 1101 - } else 1129 + if (ee_id) 1102 1130 break; 1131 + 1132 + dev_warn(dev, "Skip unsupported EEPROM device %pfw\n", fwnode); 1103 1133 } 1104 1134 1105 1135 /* If there is no fwnode EEPROM device, then set zero size */ ··· 1129 1161 else /* if (!fwnode_property_read_bool(node, "read-only")) */ 1130 1162 pdev->eero = false; 1131 1163 1164 + fwnode_handle_put(fwnode); 1132 1165 dev_info(dev, "EEPROM of %d bytes found by 0x%x", 1133 1166 pdev->eesize, pdev->eeaddr); 1134 1167 }
+76 -5
drivers/misc/habanalabs/common/command_submission.c
··· 556 556 else if (!cs->submitted) 557 557 cs->fence->error = -EBUSY; 558 558 559 + if (unlikely(cs->skip_reset_on_timeout)) { 560 + dev_err(hdev->dev, 561 + "Command submission %llu completed after %llu (s)\n", 562 + cs->sequence, 563 + div_u64(jiffies - cs->submission_time_jiffies, HZ)); 564 + } 565 + 559 566 if (cs->timestamp) 560 567 cs->fence->timestamp = ktime_get(); 561 568 complete_all(&cs->fence->completion); ··· 578 571 int rc; 579 572 struct hl_cs *cs = container_of(work, struct hl_cs, 580 573 work_tdr.work); 574 + bool skip_reset_on_timeout = cs->skip_reset_on_timeout; 575 + 581 576 rc = cs_get_unless_zero(cs); 582 577 if (!rc) 583 578 return; ··· 590 581 } 591 582 592 583 /* Mark the CS is timed out so we won't try to cancel its TDR */ 593 - cs->timedout = true; 584 + if (likely(!skip_reset_on_timeout)) 585 + cs->timedout = true; 594 586 595 587 hdev = cs->ctx->hdev; 596 588 ··· 623 613 624 614 cs_put(cs); 625 615 626 - if (hdev->reset_on_lockup) 627 - hl_device_reset(hdev, 0); 628 - else 629 - hdev->needs_reset = true; 616 + if (likely(!skip_reset_on_timeout)) { 617 + if (hdev->reset_on_lockup) 618 + hl_device_reset(hdev, HL_RESET_TDR); 619 + else 620 + hdev->needs_reset = true; 621 + } 630 622 } 631 623 632 624 static int allocate_cs(struct hl_device *hdev, struct hl_ctx *ctx, ··· 662 650 cs->type = cs_type; 663 651 cs->timestamp = !!(flags & HL_CS_FLAGS_TIMESTAMP); 664 652 cs->timeout_jiffies = timeout; 653 + cs->skip_reset_on_timeout = 654 + hdev->skip_reset_on_timeout || 655 + !!(flags & HL_CS_FLAGS_SKIP_RESET_ON_TIMEOUT); 656 + cs->submission_time_jiffies = jiffies; 665 657 INIT_LIST_HEAD(&cs->job_list); 666 658 INIT_DELAYED_WORK(&cs->work_tdr, cs_timedout); 667 659 kref_init(&cs->refcount); ··· 1495 1479 hl_device_reset(hdev, 0); 1496 1480 1497 1481 return rc; 1482 + } 1483 + 1484 + /* 1485 + * hl_cs_signal_sob_wraparound_handler: handle SOB value wrapaound case. 
1486 + * if the SOB value reaches the max value move to the other SOB reserved 1487 + * to the queue. 1488 + * Note that this function must be called while hw_queues_lock is taken. 1489 + */ 1490 + int hl_cs_signal_sob_wraparound_handler(struct hl_device *hdev, u32 q_idx, 1491 + struct hl_hw_sob **hw_sob, u32 count) 1492 + { 1493 + struct hl_sync_stream_properties *prop; 1494 + struct hl_hw_sob *sob = *hw_sob, *other_sob; 1495 + u8 other_sob_offset; 1496 + 1497 + prop = &hdev->kernel_queues[q_idx].sync_stream_prop; 1498 + 1499 + kref_get(&sob->kref); 1500 + 1501 + /* check for wraparound */ 1502 + if (prop->next_sob_val + count >= HL_MAX_SOB_VAL) { 1503 + /* 1504 + * Decrement as we reached the max value. 1505 + * The release function won't be called here as we've 1506 + * just incremented the refcount right before calling this 1507 + * function. 1508 + */ 1509 + kref_put(&sob->kref, hl_sob_reset_error); 1510 + 1511 + /* 1512 + * check the other sob value, if it still in use then fail 1513 + * otherwise make the switch 1514 + */ 1515 + other_sob_offset = (prop->curr_sob_offset + 1) % HL_RSVD_SOBS; 1516 + other_sob = &prop->hw_sob[other_sob_offset]; 1517 + 1518 + if (kref_read(&other_sob->kref) != 1) { 1519 + dev_err(hdev->dev, "error: Cannot switch SOBs q_idx: %d\n", 1520 + q_idx); 1521 + return -EINVAL; 1522 + } 1523 + 1524 + prop->next_sob_val = 1; 1525 + 1526 + /* only two SOBs are currently in use */ 1527 + prop->curr_sob_offset = other_sob_offset; 1528 + *hw_sob = other_sob; 1529 + 1530 + dev_dbg(hdev->dev, "switched to SOB %d, q_idx: %d\n", 1531 + prop->curr_sob_offset, q_idx); 1532 + } else { 1533 + prop->next_sob_val += count; 1534 + } 1535 + 1536 + return 0; 1498 1537 } 1499 1538 1500 1539 static int cs_ioctl_extract_signal_seq(struct hl_device *hdev,
-9
drivers/misc/habanalabs/common/context.c
··· 12 12 static void hl_ctx_fini(struct hl_ctx *ctx) 13 13 { 14 14 struct hl_device *hdev = ctx->hdev; 15 - u64 idle_mask[HL_BUSY_ENGINES_MASK_EXT_SIZE] = {0}; 16 15 int i; 17 16 18 17 /* Release all allocated pending cb's, those cb's were never ··· 56 57 57 58 /* Scrub both SRAM and DRAM */ 58 59 hdev->asic_funcs->scrub_device_mem(hdev, 0, 0); 59 - 60 - if ((!hdev->pldm) && (hdev->pdev) && 61 - (!hdev->asic_funcs->is_device_idle(hdev, 62 - idle_mask, 63 - HL_BUSY_ENGINES_MASK_EXT_SIZE, NULL))) 64 - dev_notice(hdev->dev, 65 - "device not idle after user context is closed (0x%llx, 0x%llx)\n", 66 - idle_mask[0], idle_mask[1]); 67 60 } else { 68 61 dev_dbg(hdev->dev, "closing kernel context\n"); 69 62 hdev->asic_funcs->ctx_fini(ctx);
+5
drivers/misc/habanalabs/common/debugfs.c
··· 1278 1278 dev_entry->root, 1279 1279 &dev_entry->blob_desc); 1280 1280 1281 + debugfs_create_x8("skip_reset_on_timeout", 1282 + 0644, 1283 + dev_entry->root, 1284 + &hdev->skip_reset_on_timeout); 1285 + 1281 1286 for (i = 0, entry = dev_entry->entry_arr ; i < count ; i++, entry++) { 1282 1287 debugfs_create_file(hl_debugfs_list[i].name, 1283 1288 0444,
+67 -15
drivers/misc/habanalabs/common/device.c
··· 51 51 52 52 static void hpriv_release(struct kref *ref) 53 53 { 54 + u64 idle_mask[HL_BUSY_ENGINES_MASK_EXT_SIZE] = {0}; 55 + bool device_is_idle = true; 54 56 struct hl_fpriv *hpriv; 55 57 struct hl_device *hdev; 56 58 ··· 73 71 74 72 kfree(hpriv); 75 73 76 - if (hdev->reset_upon_device_release) 77 - hl_device_reset(hdev, 0); 74 + if ((!hdev->pldm) && (hdev->pdev) && 75 + (!hdev->asic_funcs->is_device_idle(hdev, 76 + idle_mask, 77 + HL_BUSY_ENGINES_MASK_EXT_SIZE, NULL))) { 78 + dev_err(hdev->dev, 79 + "device not idle after user context is closed (0x%llx_%llx)\n", 80 + idle_mask[1], idle_mask[0]); 81 + 82 + device_is_idle = false; 83 + } 84 + 85 + if ((hdev->reset_if_device_not_idle && !device_is_idle) 86 + || hdev->reset_upon_device_release) 87 + hl_device_reset(hdev, HL_RESET_DEVICE_RELEASE); 78 88 } 79 89 80 90 void hl_hpriv_get(struct hl_fpriv *hpriv) ··· 131 117 if (!hl_hpriv_put(hpriv)) 132 118 dev_warn(hdev->dev, 133 119 "Device is still in use because there are live CS and/or memory mappings\n"); 120 + 121 + hdev->last_open_session_duration_jif = 122 + jiffies - hdev->last_successful_open_jif; 134 123 135 124 return 0; 136 125 } ··· 885 868 int hl_device_reset(struct hl_device *hdev, u32 flags) 886 869 { 887 870 u64 idle_mask[HL_BUSY_ENGINES_MASK_EXT_SIZE] = {0}; 888 - bool hard_reset, from_hard_reset_thread; 871 + bool hard_reset, from_hard_reset_thread, hard_instead_soft = false; 889 872 int i, rc; 890 873 891 874 if (!hdev->init_done) { ··· 897 880 hard_reset = (flags & HL_RESET_HARD) != 0; 898 881 from_hard_reset_thread = (flags & HL_RESET_FROM_RESET_THREAD) != 0; 899 882 900 - if ((!hard_reset) && (!hdev->supports_soft_reset)) { 901 - dev_dbg(hdev->dev, "Doing hard-reset instead of soft-reset\n"); 883 + if (!hard_reset && !hdev->supports_soft_reset) { 884 + hard_instead_soft = true; 902 885 hard_reset = true; 903 886 } 904 887 888 + if (hdev->reset_upon_device_release && 889 + (flags & HL_RESET_DEVICE_RELEASE)) { 890 + dev_dbg(hdev->dev, 891 + 
"Perform %s-reset upon device release\n", 892 + hard_reset ? "hard" : "soft"); 893 + goto do_reset; 894 + } 895 + 896 + if (!hard_reset && !hdev->allow_external_soft_reset) { 897 + hard_instead_soft = true; 898 + hard_reset = true; 899 + } 900 + 901 + if (hard_instead_soft) 902 + dev_dbg(hdev->dev, "Doing hard-reset instead of soft-reset\n"); 903 + 904 + do_reset: 905 905 /* Re-entry of reset thread */ 906 906 if (from_hard_reset_thread && hdev->process_kill_trial_cnt) 907 907 goto kill_processes; ··· 933 899 rc = atomic_cmpxchg(&hdev->in_reset, 0, 1); 934 900 if (rc) 935 901 return 0; 902 + 903 + /* 904 + * 'reset cause' is being updated here, because getting here 905 + * means that it's the 1st time and the last time we're here 906 + * ('in_reset' makes sure of it). This makes sure that 907 + * 'reset_cause' will continue holding its 1st recorded reason! 908 + */ 909 + if (flags & HL_RESET_HEARTBEAT) 910 + hdev->curr_reset_cause = HL_RESET_CAUSE_HEARTBEAT; 911 + else if (flags & HL_RESET_TDR) 912 + hdev->curr_reset_cause = HL_RESET_CAUSE_TDR; 913 + else 914 + hdev->curr_reset_cause = HL_RESET_CAUSE_UNKNOWN; 936 915 937 916 /* 938 917 * if reset is due to heartbeat, device CPU is no responsive in ··· 990 943 hdev->process_kill_trial_cnt = 0; 991 944 992 945 /* 993 - * Because the reset function can't run from interrupt or 994 - * from heartbeat work, we need to call the reset function 995 - * from a dedicated work 946 + * Because the reset function can't run from heartbeat work, 947 + * we need to call the reset function from a dedicated work. 
996 948 */ 997 949 queue_delayed_work(hdev->device_reset_work.wq, 998 950 &hdev->device_reset_work.reset_work, 0); ··· 1142 1096 if (!hdev->asic_funcs->is_device_idle(hdev, idle_mask, 1143 1097 HL_BUSY_ENGINES_MASK_EXT_SIZE, NULL)) { 1144 1098 dev_err(hdev->dev, 1145 - "device is not idle (mask %#llx %#llx) after reset\n", 1146 - idle_mask[0], idle_mask[1]); 1099 + "device is not idle (mask 0x%llx_%llx) after reset\n", 1100 + idle_mask[1], idle_mask[0]); 1147 1101 rc = -EIO; 1148 1102 goto out_err; 1149 1103 } ··· 1380 1334 } 1381 1335 1382 1336 /* 1383 - * From this point, in case of an error, add char devices and create 1384 - * sysfs nodes as part of the error flow, to allow debugging. 1337 + * From this point, override rc (=0) in case of an error to allow 1338 + * debugging (by adding char devices and create sysfs nodes as part of 1339 + * the error flow). 1385 1340 */ 1386 1341 add_cdev_sysfs_on_err = true; 1387 1342 ··· 1416 1369 1417 1370 dev_info(hdev->dev, "Found %s device with %lluGB DRAM\n", 1418 1371 hdev->asic_name, 1419 - hdev->asic_prop.dram_size / 1024 / 1024 / 1024); 1372 + hdev->asic_prop.dram_size / SZ_1G); 1420 1373 1421 1374 rc = hl_vm_init(hdev); 1422 1375 if (rc) { ··· 1522 1475 void hl_device_fini(struct hl_device *hdev) 1523 1476 { 1524 1477 ktime_t timeout; 1478 + u64 reset_sec; 1525 1479 int i, rc; 1526 1480 1527 1481 dev_info(hdev->dev, "Removing device\n"); 1528 1482 1529 1483 hdev->device_fini_pending = 1; 1530 1484 flush_delayed_work(&hdev->device_reset_work.reset_work); 1485 + 1486 + if (hdev->pldm) 1487 + reset_sec = HL_PLDM_HARD_RESET_MAX_TIMEOUT; 1488 + else 1489 + reset_sec = HL_HARD_RESET_MAX_TIMEOUT; 1531 1490 1532 1491 /* 1533 1492 * This function is competing with the reset function, so try to ··· 1543 1490 * ports, the hard reset could take between 10-30 seconds 1544 1491 */ 1545 1492 1546 - timeout = ktime_add_us(ktime_get(), 1547 - HL_HARD_RESET_MAX_TIMEOUT * 1000 * 1000); 1493 + timeout = ktime_add_us(ktime_get(), 
reset_sec * 1000 * 1000); 1548 1494 rc = atomic_cmpxchg(&hdev->in_reset, 0, 1); 1549 1495 while (rc) { 1550 1496 usleep_range(50, 200);
+1625 -187
drivers/misc/habanalabs/common/firmware_if.c
··· 9 9 #include "../include/common/hl_boot_if.h" 10 10 11 11 #include <linux/firmware.h> 12 + #include <linux/crc32.h> 12 13 #include <linux/slab.h> 14 + #include <linux/ctype.h> 13 15 14 - #define FW_FILE_MAX_SIZE 0x1400000 /* maximum size of 20MB */ 16 + #define FW_FILE_MAX_SIZE 0x1400000 /* maximum size of 20MB */ 17 + 18 + #define FW_CPU_STATUS_POLL_INTERVAL_USEC 10000 19 + 20 + static char *extract_fw_ver_from_str(const char *fw_str) 21 + { 22 + char *str, *fw_ver, *whitespace; 23 + 24 + fw_ver = kmalloc(16, GFP_KERNEL); 25 + if (!fw_ver) 26 + return NULL; 27 + 28 + str = strnstr(fw_str, "fw-", VERSION_MAX_LEN); 29 + if (!str) 30 + goto free_fw_ver; 31 + 32 + /* Skip the fw- part */ 33 + str += 3; 34 + 35 + /* Copy until the next whitespace */ 36 + whitespace = strnstr(str, " ", 15); 37 + if (!whitespace) 38 + goto free_fw_ver; 39 + 40 + strscpy(fw_ver, str, whitespace - str + 1); 41 + 42 + return fw_ver; 43 + 44 + free_fw_ver: 45 + kfree(fw_ver); 46 + return NULL; 47 + } 48 + 49 + static int hl_request_fw(struct hl_device *hdev, 50 + const struct firmware **firmware_p, 51 + const char *fw_name) 52 + { 53 + size_t fw_size; 54 + int rc; 55 + 56 + rc = request_firmware(firmware_p, fw_name, hdev->dev); 57 + if (rc) { 58 + dev_err(hdev->dev, "Firmware file %s is not found! 
(error %d)\n", 59 + fw_name, rc); 60 + goto out; 61 + } 62 + 63 + fw_size = (*firmware_p)->size; 64 + if ((fw_size % 4) != 0) { 65 + dev_err(hdev->dev, "Illegal %s firmware size %zu\n", 66 + fw_name, fw_size); 67 + rc = -EINVAL; 68 + goto release_fw; 69 + } 70 + 71 + dev_dbg(hdev->dev, "%s firmware size == %zu\n", fw_name, fw_size); 72 + 73 + if (fw_size > FW_FILE_MAX_SIZE) { 74 + dev_err(hdev->dev, 75 + "FW file size %zu exceeds maximum of %u bytes\n", 76 + fw_size, FW_FILE_MAX_SIZE); 77 + rc = -EINVAL; 78 + goto release_fw; 79 + } 80 + 81 + return 0; 82 + 83 + release_fw: 84 + release_firmware(*firmware_p); 85 + out: 86 + return rc; 87 + } 88 + 89 + /** 90 + * hl_release_firmware() - release FW 91 + * 92 + * @fw: fw descriptor 93 + * 94 + * note: this inline function added to serve as a comprehensive mirror for the 95 + * hl_request_fw function. 96 + */ 97 + static inline void hl_release_firmware(const struct firmware *fw) 98 + { 99 + release_firmware(fw); 100 + } 101 + 102 + /** 103 + * hl_fw_copy_fw_to_device() - copy FW to device 104 + * 105 + * @hdev: pointer to hl_device structure. 
106 + * @fw: fw descriptor 107 + * @dst: IO memory mapped address space to copy firmware to 108 + * @src_offset: offset in src FW to copy from 109 + * @size: amount of bytes to copy (0 to copy the whole binary) 110 + * 111 + * actual copy of FW binary data to device, shared by static and dynamic loaders 112 + */ 113 + static int hl_fw_copy_fw_to_device(struct hl_device *hdev, 114 + const struct firmware *fw, void __iomem *dst, 115 + u32 src_offset, u32 size) 116 + { 117 + const void *fw_data; 118 + 119 + /* size 0 indicates to copy the whole file */ 120 + if (!size) 121 + size = fw->size; 122 + 123 + if (src_offset + size > fw->size) { 124 + dev_err(hdev->dev, 125 + "size to copy(%u) and offset(%u) are invalid\n", 126 + size, src_offset); 127 + return -EINVAL; 128 + } 129 + 130 + fw_data = (const void *) fw->data; 131 + 132 + memcpy_toio(dst, fw_data + src_offset, size); 133 + return 0; 134 + } 135 + 136 + /** 137 + * hl_fw_copy_msg_to_device() - copy message to device 138 + * 139 + * @hdev: pointer to hl_device structure. 140 + * @msg: message 141 + * @dst: IO memory mapped address space to copy firmware to 142 + * @src_offset: offset in src message to copy from 143 + * @size: amount of bytes to copy (0 to copy the whole binary) 144 + * 145 + * actual copy of message data to device. 
146 + */ 147 + static int hl_fw_copy_msg_to_device(struct hl_device *hdev, 148 + struct lkd_msg_comms *msg, void __iomem *dst, 149 + u32 src_offset, u32 size) 150 + { 151 + void *msg_data; 152 + 153 + /* size 0 indicates to copy the whole file */ 154 + if (!size) 155 + size = sizeof(struct lkd_msg_comms); 156 + 157 + if (src_offset + size > sizeof(struct lkd_msg_comms)) { 158 + dev_err(hdev->dev, 159 + "size to copy(%u) and offset(%u) are invalid\n", 160 + size, src_offset); 161 + return -EINVAL; 162 + } 163 + 164 + msg_data = (void *) msg; 165 + 166 + memcpy_toio(dst, msg_data + src_offset, size); 167 + 168 + return 0; 169 + } 170 + 15 171 /** 16 172 * hl_fw_load_fw_to_device() - Load F/W code to device's memory. 17 173 * ··· 185 29 void __iomem *dst, u32 src_offset, u32 size) 186 30 { 187 31 const struct firmware *fw; 188 - const void *fw_data; 189 - size_t fw_size; 190 32 int rc; 191 33 192 - rc = request_firmware(&fw, fw_name, hdev->dev); 193 - if (rc) { 194 - dev_err(hdev->dev, "Firmware file %s is not found!\n", fw_name); 195 - goto out; 196 - } 34 + rc = hl_request_fw(hdev, &fw, fw_name); 35 + if (rc) 36 + return rc; 197 37 198 - fw_size = fw->size; 199 - if ((fw_size % 4) != 0) { 200 - dev_err(hdev->dev, "Illegal %s firmware size %zu\n", 201 - fw_name, fw_size); 202 - rc = -EINVAL; 203 - goto out; 204 - } 38 + rc = hl_fw_copy_fw_to_device(hdev, fw, dst, src_offset, size); 205 39 206 - dev_dbg(hdev->dev, "%s firmware size == %zu\n", fw_name, fw_size); 207 - 208 - if (fw_size > FW_FILE_MAX_SIZE) { 209 - dev_err(hdev->dev, 210 - "FW file size %zu exceeds maximum of %u bytes\n", 211 - fw_size, FW_FILE_MAX_SIZE); 212 - rc = -EINVAL; 213 - goto out; 214 - } 215 - 216 - if (size - src_offset > fw_size) { 217 - dev_err(hdev->dev, 218 - "size to copy(%u) and offset(%u) are invalid\n", 219 - size, src_offset); 220 - rc = -EINVAL; 221 - goto out; 222 - } 223 - 224 - if (size) 225 - fw_size = size; 226 - 227 - fw_data = (const void *) fw->data; 228 - 229 - 
memcpy_toio(dst, fw_data + src_offset, fw_size); 230 - 231 - out: 232 - release_firmware(fw); 40 + hl_release_firmware(fw); 233 41 return rc; 234 42 } 235 43 ··· 211 91 u16 len, u32 timeout, u64 *result) 212 92 { 213 93 struct hl_hw_queue *queue = &hdev->kernel_queues[hw_queue_id]; 94 + struct asic_fixed_properties *prop = &hdev->asic_prop; 214 95 struct cpucp_packet *pkt; 215 96 dma_addr_t pkt_dma_addr; 216 97 u32 tmp, expected_ack_val; ··· 238 117 } 239 118 240 119 /* set fence to a non valid value */ 241 - pkt->fence = UINT_MAX; 120 + pkt->fence = cpu_to_le32(UINT_MAX); 242 121 243 122 rc = hl_hw_queue_send_cb_no_cmpl(hdev, hw_queue_id, len, pkt_dma_addr); 244 123 if (rc) { ··· 246 125 goto out; 247 126 } 248 127 249 - if (hdev->asic_prop.fw_app_security_map & 250 - CPU_BOOT_DEV_STS0_PKT_PI_ACK_EN) 128 + if (prop->fw_app_cpu_boot_dev_sts0 & CPU_BOOT_DEV_STS0_PKT_PI_ACK_EN) 251 129 expected_ack_val = queue->pi; 252 130 else 253 131 expected_ack_val = CPUCP_PACKET_FENCE_VAL; ··· 392 272 393 273 int hl_fw_send_heartbeat(struct hl_device *hdev) 394 274 { 395 - struct cpucp_packet hb_pkt = {}; 275 + struct cpucp_packet hb_pkt; 396 276 u64 result; 397 277 int rc; 398 278 279 + memset(&hb_pkt, 0, sizeof(hb_pkt)); 399 280 hb_pkt.ctl = cpu_to_le32(CPUCP_PACKET_TEST << 400 281 CPUCP_PKT_CTL_OPCODE_SHIFT); 401 282 hb_pkt.value = cpu_to_le64(CPUCP_PACKET_FENCE_VAL); ··· 405 284 sizeof(hb_pkt), 0, &result); 406 285 407 286 if ((rc) || (result != CPUCP_PACKET_FENCE_VAL)) 287 + return -EIO; 288 + 289 + if (le32_to_cpu(hb_pkt.status_mask) & 290 + CPUCP_PKT_HB_STATUS_EQ_FAULT_MASK) { 291 + dev_warn(hdev->dev, "FW reported EQ fault during heartbeat\n"); 408 292 rc = -EIO; 293 + } 409 294 410 295 return rc; 411 296 } 412 297 413 - static int fw_read_errors(struct hl_device *hdev, u32 boot_err0_reg, 414 - u32 cpu_security_boot_status_reg) 298 + static bool fw_report_boot_dev0(struct hl_device *hdev, u32 err_val, 299 + u32 sts_val) 415 300 { 416 - u32 err_val, security_val; 417 301 
 	bool err_exists = false;
 
-	/* Some of the firmware status codes are deprecated in newer f/w
-	 * versions. In those versions, the errors are reported
-	 * in different registers. Therefore, we need to check those
-	 * registers and print the exact errors. Moreover, there
-	 * may be multiple errors, so we need to report on each error
-	 * separately. Some of the error codes might indicate a state
-	 * that is not an error per-se, but it is an error in production
-	 * environment
-	 */
-	err_val = RREG32(boot_err0_reg);
 	if (!(err_val & CPU_BOOT_ERR0_ENABLED))
-		return 0;
+		return false;
 
 	if (err_val & CPU_BOOT_ERR0_DRAM_INIT_FAIL) {
 		dev_err(hdev->dev,
···
 		err_exists = true;
 	}
 
+	if (err_val & CPU_BOOT_ERR0_PRI_IMG_VER_FAIL) {
+		dev_warn(hdev->dev,
+			"Device boot warning - Failed to load preboot primary image\n");
+		/* This is a warning so we don't want it to disable the
+		 * device as we have a secondary preboot image
+		 */
+		err_val &= ~CPU_BOOT_ERR0_PRI_IMG_VER_FAIL;
+	}
+
+	if (err_val & CPU_BOOT_ERR0_SEC_IMG_VER_FAIL) {
+		dev_err(hdev->dev, "Device boot error - Failed to load preboot secondary image\n");
+		err_exists = true;
+	}
+
 	if (err_val & CPU_BOOT_ERR0_PLL_FAIL) {
 		dev_err(hdev->dev, "Device boot error - PLL failure\n");
 		err_exists = true;
 	}
 
 	if (err_val & CPU_BOOT_ERR0_DEVICE_UNUSABLE_FAIL) {
-		dev_err(hdev->dev,
-			"Device boot error - device unusable\n");
-		err_exists = true;
+		/* Ignore this bit, don't prevent driver loading */
+		dev_dbg(hdev->dev, "device unusable status is set\n");
+		err_val &= ~CPU_BOOT_ERR0_DEVICE_UNUSABLE_FAIL;
 	}
 
-	security_val = RREG32(cpu_security_boot_status_reg);
-	if (security_val & CPU_BOOT_DEV_STS0_ENABLED)
-		dev_dbg(hdev->dev, "Device security status %#x\n",
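The `fw_report_boot_dev0()` flow above follows a scan-and-demote pattern: warning-only bits (primary preboot image failure, device-unusable) are logged and then cleared from `err_val`, so the final unknown-bits check and the caller's error-status mask only see real errors. A simplified user-space model of that pattern (the bit names are hypothetical, not the real `CPU_BOOT_ERR0_*` values):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit layout mirroring the CPU_BOOT_ERR0_* style. */
#define ERR_ENABLED   (1u << 31)	/* register contents are valid */
#define ERR_FATAL     (1u << 0)		/* a real boot error */
#define ERR_WARN_ONLY (1u << 1)		/* report, but don't fail the load */

/* Scan-and-demote: clear warning bits from err_val so only genuine error
 * bits survive into the final mask comparison. */
static bool report_boot_errors(uint32_t err_val, uint32_t status_mask)
{
	bool err_exists = false;

	if (!(err_val & ERR_ENABLED))
		return false;		/* nothing valid to report */

	if (err_val & ERR_WARN_ONLY)
		err_val &= ~ERR_WARN_ONLY;	/* demote: warning only */

	if (err_val & ERR_FATAL)
		err_exists = true;

	/* fail only if a surviving error bit is also in the caller's mask */
	return err_exists &&
	       ((err_val & ~ERR_ENABLED) & status_mask) != 0;
}
```

With a zero mask even a fatal bit is reported but does not fail the load, which is exactly how `boot_error_status_mask` lets specific deployments tolerate known error bits.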
-				security_val);
+	if (sts_val & CPU_BOOT_DEV_STS0_ENABLED)
+		dev_dbg(hdev->dev, "Device status0 %#x\n", sts_val);
 
 	if (!err_exists && (err_val & ~CPU_BOOT_ERR0_ENABLED)) {
 		dev_err(hdev->dev,
-			"Device boot error - unknown error 0x%08x\n",
-			err_val);
+			"Device boot error - unknown ERR0 error 0x%08x\n", err_val);
 		err_exists = true;
 	}
 
+	/* return error only if it's in the predefined mask */
 	if (err_exists && ((err_val & ~CPU_BOOT_ERR0_ENABLED) &
 				lower_32_bits(hdev->boot_error_status_mask)))
+		return true;
+
+	return false;
+}
+
+/* placeholder for ERR1 as no errors are defined there yet */
+static bool fw_report_boot_dev1(struct hl_device *hdev, u32 err_val,
+								u32 sts_val)
+{
+	/*
+	 * keep this variable to preserve the logic of the function.
+	 * this way it will require fewer modifications when errors are
+	 * added to DEV_ERR1
+	 */
+	bool err_exists = false;
+
+	if (!(err_val & CPU_BOOT_ERR1_ENABLED))
+		return false;
+
+	if (sts_val & CPU_BOOT_DEV_STS1_ENABLED)
+		dev_dbg(hdev->dev, "Device status1 %#x\n", sts_val);
+
+	if (!err_exists && (err_val & ~CPU_BOOT_ERR1_ENABLED)) {
+		dev_err(hdev->dev,
+			"Device boot error - unknown ERR1 error 0x%08x\n",
+			err_val);
+		err_exists = true;
+	}
+
+	/* return error only if it's in the predefined mask */
+	if (err_exists && ((err_val & ~CPU_BOOT_ERR1_ENABLED) &
+				upper_32_bits(hdev->boot_error_status_mask)))
+		return true;
+
+	return false;
+}
+
+static int fw_read_errors(struct hl_device *hdev, u32 boot_err0_reg,
+		u32 boot_err1_reg, u32 cpu_boot_dev_status0_reg,
+		u32 cpu_boot_dev_status1_reg)
+{
+	u32 err_val, status_val;
+	bool err_exists = false;
+
+	/* Some of the firmware status codes are deprecated in newer f/w
+	 * versions. In those versions, the errors are reported
+	 * in different registers. Therefore, we need to check those
+	 * registers and print the exact errors. Moreover, there
+	 * may be multiple errors, so we need to report on each error
+	 * separately. Some of the error codes might indicate a state
+	 * that is not an error per se, but it is an error in a production
+	 * environment
+	 */
+	err_val = RREG32(boot_err0_reg);
+	status_val = RREG32(cpu_boot_dev_status0_reg);
+	err_exists = fw_report_boot_dev0(hdev, err_val, status_val);
+
+	err_val = RREG32(boot_err1_reg);
+	status_val = RREG32(cpu_boot_dev_status1_reg);
+	err_exists |= fw_report_boot_dev1(hdev, err_val, status_val);
+
+	if (err_exists)
 		return -EIO;
 
 	return 0;
 }
 
 int hl_fw_cpucp_info_get(struct hl_device *hdev,
-			u32 cpu_security_boot_status_reg,
-			u32 boot_err0_reg)
+				u32 sts_boot_dev_sts0_reg,
+				u32 sts_boot_dev_sts1_reg, u32 boot_err0_reg,
+				u32 boot_err1_reg)
 {
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
 	struct cpucp_packet pkt = {};
-	void *cpucp_info_cpu_addr;
 	dma_addr_t cpucp_info_dma_addr;
+	void *cpucp_info_cpu_addr;
+	char *kernel_ver;
 	u64 result;
 	int rc;
···
 		goto out;
 	}
 
-	rc = fw_read_errors(hdev, boot_err0_reg, cpu_security_boot_status_reg);
+	rc = fw_read_errors(hdev, boot_err0_reg, boot_err1_reg,
+				sts_boot_dev_sts0_reg, sts_boot_dev_sts1_reg);
 	if (rc) {
 		dev_err(hdev->dev, "Errors in device boot\n");
 		goto out;
···
 		goto out;
 	}
 
+	kernel_ver = extract_fw_ver_from_str(prop->cpucp_info.kernel_version);
+	if (kernel_ver) {
+		dev_info(hdev->dev, "Linux version %s", kernel_ver);
+		kfree(kernel_ver);
+	}
+
+	/* assume EQ code doesn't need to check eqe index */
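The new `fw_read_errors()` splits the single 64-bit `hdev->boot_error_status_mask` between the two hardware registers: the low word gates ERR0 bits, the high word gates ERR1 bits. For reference, equivalents of the kernel's `lower_32_bits()`/`upper_32_bits()` helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Same idea as the kernel's lower_32_bits()/upper_32_bits() helpers used
 * to split boot_error_status_mask across the ERR0/ERR1 checks. */
static inline uint32_t lower_32(uint64_t v)
{
	return (uint32_t)v;		/* truncation keeps bits 31:0 */
}

static inline uint32_t upper_32(uint64_t v)
{
	return (uint32_t)(v >> 32);	/* bits 63:32 */
}
```

One 64-bit module parameter thus configures both registers without changing the driver's ABI as error bits are added to ERR1.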
+	hdev->event_queue.check_eqe_index = false;
+
 	/* Read FW application security bits again */
-	if (hdev->asic_prop.fw_security_status_valid)
-		hdev->asic_prop.fw_app_security_map =
-				RREG32(cpu_security_boot_status_reg);
+	if (hdev->asic_prop.fw_cpu_boot_dev_sts0_valid) {
+		hdev->asic_prop.fw_app_cpu_boot_dev_sts0 =
+				RREG32(sts_boot_dev_sts0_reg);
+		if (hdev->asic_prop.fw_app_cpu_boot_dev_sts0 &
+				CPU_BOOT_DEV_STS0_EQ_INDEX_EN)
+			hdev->event_queue.check_eqe_index = true;
+	}
+
+	if (hdev->asic_prop.fw_cpu_boot_dev_sts1_valid)
+		hdev->asic_prop.fw_app_cpu_boot_dev_sts1 =
+				RREG32(sts_boot_dev_sts1_reg);
 
 out:
 	hdev->asic_funcs->cpu_accessible_dma_pool_free(hdev,
···
 
 	pkt->length = cpu_to_le32(CPUCP_NUM_OF_MSI_TYPES);
 
-	hdev->asic_funcs->get_msi_info((u32 *)&pkt->data);
+	memset((void *) &pkt->data, 0xFF, data_size);
+	hdev->asic_funcs->get_msi_info(pkt->data);
 
 	pkt->cpucp_pkt.ctl = cpu_to_le32(CPUCP_PACKET_MSI_INFO_SET <<
 				CPUCP_PKT_CTL_OPCODE_SHIFT);
···
 }
 
 int hl_fw_cpucp_handshake(struct hl_device *hdev,
-			u32 cpu_security_boot_status_reg,
-			u32 boot_err0_reg)
+				u32 sts_boot_dev_sts0_reg,
+				u32 sts_boot_dev_sts1_reg, u32 boot_err0_reg,
+				u32 boot_err1_reg)
 {
 	int rc;
 
-	rc = hl_fw_cpucp_info_get(hdev, cpu_security_boot_status_reg,
-					boot_err0_reg);
+	rc = hl_fw_cpucp_info_get(hdev, sts_boot_dev_sts0_reg,
+					sts_boot_dev_sts1_reg, boot_err0_reg,
+					boot_err1_reg);
 	if (rc)
 		return rc;
···
 	bool dynamic_pll;
 	int fw_pll_idx;
 
-	dynamic_pll = prop->fw_security_status_valid &&
-		(prop->fw_app_security_map & CPU_BOOT_DEV_STS0_DYN_PLL_EN);
+	dynamic_pll = !!(prop->fw_app_cpu_boot_dev_sts0 &
+						CPU_BOOT_DEV_STS0_DYN_PLL_EN);
 
 	if (!dynamic_pll) {
 		/*
···
 	return rc;
 }
 
+void hl_fw_ask_hard_reset_without_linux(struct hl_device *hdev)
+{
+	struct static_fw_load_mgr *static_loader =
+			&hdev->fw_loader.static_loader;
+	int rc;
+
+	if (hdev->asic_prop.dynamic_fw_load) {
+		rc = hl_fw_dynamic_send_protocol_cmd(hdev, &hdev->fw_loader,
+				COMMS_RST_DEV, 0, false,
+				hdev->fw_loader.cpu_timeout);
+		if (rc)
+			dev_warn(hdev->dev, "Failed sending COMMS_RST_DEV\n");
+	} else {
+		WREG32(static_loader->kmd_msg_to_cpu_reg, KMD_MSG_RST_DEV);
+	}
+}
+
+void hl_fw_ask_halt_machine_without_linux(struct hl_device *hdev)
+{
+	struct static_fw_load_mgr *static_loader =
+			&hdev->fw_loader.static_loader;
+	int rc;
+
+	if (hdev->device_cpu_is_halted)
+		return;
+
+	/* Stop device CPU to make sure nothing bad happens */
+	if (hdev->asic_prop.dynamic_fw_load) {
+		rc = hl_fw_dynamic_send_protocol_cmd(hdev, &hdev->fw_loader,
+				COMMS_GOTO_WFE, 0, true,
+				hdev->fw_loader.cpu_timeout);
+		if (rc)
+			dev_warn(hdev->dev, "Failed sending COMMS_GOTO_WFE\n");
+	} else {
+		WREG32(static_loader->kmd_msg_to_cpu_reg, KMD_MSG_GOTO_WFE);
+		msleep(static_loader->cpu_reset_wait_msec);
+	}
+
+	hdev->device_cpu_is_halted = true;
+}
+
 static void detect_cpu_boot_status(struct hl_device *hdev, u32 status)
 {
 	/* Some of the status codes below are deprecated in newer f/w
···
 	switch (status) {
 	case CPU_BOOT_STATUS_NA:
 		dev_err(hdev->dev,
-			"Device boot error - BTL did NOT run\n");
+			"Device boot progress - BTL did NOT run\n");
 		break;
 	case CPU_BOOT_STATUS_IN_WFE:
 		dev_err(hdev->dev,
-			"Device boot error - Stuck inside WFE loop\n");
+			"Device boot progress - Stuck inside WFE loop\n");
 		break;
 	case CPU_BOOT_STATUS_IN_BTL:
 		dev_err(hdev->dev,
-			"Device boot error - Stuck in BTL\n");
+			"Device boot progress - Stuck in BTL\n");
 		break;
 	case CPU_BOOT_STATUS_IN_PREBOOT:
 		dev_err(hdev->dev,
-			"Device boot error - Stuck in Preboot\n");
+			"Device boot progress - Stuck in Preboot\n");
 		break;
 	case CPU_BOOT_STATUS_IN_SPL:
 		dev_err(hdev->dev,
-			"Device boot error - Stuck in SPL\n");
+			"Device boot progress - Stuck in SPL\n");
 		break;
 	case CPU_BOOT_STATUS_IN_UBOOT:
 		dev_err(hdev->dev,
-			"Device boot error - Stuck in u-boot\n");
+			"Device boot progress - Stuck in u-boot\n");
 		break;
 	case CPU_BOOT_STATUS_DRAM_INIT_FAIL:
 		dev_err(hdev->dev,
-			"Device boot error - DRAM initialization failed\n");
+			"Device boot progress - DRAM initialization failed\n");
 		break;
 	case CPU_BOOT_STATUS_UBOOT_NOT_READY:
 		dev_err(hdev->dev,
-			"Device boot error - u-boot stopped by user\n");
+			"Device boot progress - Cannot boot\n");
 		break;
 	case CPU_BOOT_STATUS_TS_INIT_FAIL:
 		dev_err(hdev->dev,
-			"Device boot error - Thermal Sensor initialization failed\n");
+			"Device boot progress - Thermal Sensor initialization failed\n");
 		break;
 	default:
 		dev_err(hdev->dev,
-			"Device boot error - Invalid status code %d\n",
+			"Device boot progress - Invalid status code %d\n",
 			status);
 		break;
 	}
 }
 
-int hl_fw_read_preboot_status(struct hl_device *hdev, u32 cpu_boot_status_reg,
-		u32 cpu_security_boot_status_reg, u32 boot_err0_reg,
-		u32 timeout)
+static int hl_fw_read_preboot_caps(struct hl_device *hdev,
+					u32 cpu_boot_status_reg,
+					u32 sts_boot_dev_sts0_reg,
+					u32 sts_boot_dev_sts1_reg,
+					u32 boot_err0_reg, u32 boot_err1_reg,
+					u32 timeout)
 {
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
-	u32 status, security_status;
+	u32 status, reg_val;
 	int rc;
-
-	/* pldm was added for cases in which we use preboot on pldm and want
-	 * to load boot fit, but we can't wait for preboot because it runs
-	 * very slowly
-	 */
-	if (!(hdev->fw_components & FW_TYPE_PREBOOT_CPU) || hdev->pldm)
-		return 0;
 
 	/* Need to check two possible scenarios:
 	 *
···
 		(status == CPU_BOOT_STATUS_READY_TO_BOOT) ||
 		(status == CPU_BOOT_STATUS_SRAM_AVAIL) ||
 		(status == CPU_BOOT_STATUS_WAITING_FOR_BOOT_FIT),
-		10000,
+		FW_CPU_STATUS_POLL_INTERVAL_USEC,
 		timeout);
 
 	if (rc) {
-		dev_err(hdev->dev, "Failed to read preboot version\n");
+		dev_err(hdev->dev, "CPU boot ready status timeout\n");
 		detect_cpu_boot_status(hdev, status);
 
 		/* If we read all FF, then something is totally wrong, no point
 		 * of reading specific errors
 		 */
 		if (status != -1)
-			fw_read_errors(hdev, boot_err0_reg,
-					cpu_security_boot_status_reg);
+			fw_read_errors(hdev, boot_err0_reg, boot_err1_reg,
+					sts_boot_dev_sts0_reg,
+					sts_boot_dev_sts1_reg);
 		return -EIO;
 	}
 
-	rc = hdev->asic_funcs->read_device_fw_version(hdev, FW_COMP_PREBOOT);
-	if (rc)
-		return rc;
+	/*
+	 * the registers DEV_STS* contain FW capabilities/features.
+	 * We can rely on these registers only if bit CPU_BOOT_DEV_STS*_ENABLED
+	 * is set.
+	 * In the first read of this register we store the value of this
+	 * register ONLY if the register is enabled (which will be propagated
+	 * to next stages) and also mark the register as valid.
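`hl_fw_read_preboot_caps()` leans on the `hl_poll_timeout()` pattern: re-read a status register every fixed interval until a condition holds or the timeout budget is exhausted. A host-side model, with a simulated register callback standing in for `RREG32()` and a counter standing in for the real sleep (names here are illustrative, not the kernel macro's signature):

```c
#include <assert.h>
#include <stdint.h>

/* Poll a "register" via read_reg() until it equals wanted, checking every
 * interval_us, giving up after timeout_us. Returns 0 on success, -1 on
 * timeout; *last holds the final value read either way. */
static int poll_status(uint32_t (*read_reg)(void *ctx), void *ctx,
		       uint32_t wanted, uint32_t interval_us,
		       uint32_t timeout_us, uint32_t *last)
{
	uint32_t elapsed = 0;

	for (;;) {
		*last = read_reg(ctx);
		if (*last == wanted)
			return 0;
		if (elapsed >= timeout_us)
			return -1;
		elapsed += interval_us;	/* stand-in for usleep(interval_us) */
	}
}

/* simulated register that reads as "ready" (3) from the third read on */
static uint32_t fake_reg(void *ctx)
{
	int *reads = ctx;

	return ++(*reads) >= 3 ? 3u : 0u;
}

/* driver: poll the fake register for a given target status */
static int demo_poll(uint32_t wanted)
{
	int reads = 0;
	uint32_t last = 0;

	return poll_status(fake_reg, &reads, wanted, 10, 100, &last);
}
```

The key property, shared with the kernel macro, is one final read after the deadline so a success on the last tick is not misreported as a timeout.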
+	 * In case it is not enabled the stored value will be left 0 - all
+	 * caps/features are off
+	 */
+	reg_val = RREG32(sts_boot_dev_sts0_reg);
+	if (reg_val & CPU_BOOT_DEV_STS0_ENABLED) {
+		prop->fw_cpu_boot_dev_sts0_valid = true;
+		prop->fw_preboot_cpu_boot_dev_sts0 = reg_val;
+	}
 
-	security_status = RREG32(cpu_security_boot_status_reg);
+	reg_val = RREG32(sts_boot_dev_sts1_reg);
+	if (reg_val & CPU_BOOT_DEV_STS1_ENABLED) {
+		prop->fw_cpu_boot_dev_sts1_valid = true;
+		prop->fw_preboot_cpu_boot_dev_sts1 = reg_val;
+	}
 
-	/* We read security status multiple times during boot:
+	prop->dynamic_fw_load = !!(prop->fw_preboot_cpu_boot_dev_sts0 &
+						CPU_BOOT_DEV_STS0_FW_LD_COM_EN);
+
+	/* initialize FW loader once we know what load protocol is used */
+	hdev->asic_funcs->init_firmware_loader(hdev);
+
+	dev_dbg(hdev->dev, "Attempting %s FW load\n",
+			prop->dynamic_fw_load ? "dynamic" : "legacy");
+	return 0;
+}
+
+static int hl_fw_static_read_device_fw_version(struct hl_device *hdev,
+					enum hl_fw_component fwc)
+{
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct fw_load_mgr *fw_loader = &hdev->fw_loader;
+	struct static_fw_load_mgr *static_loader;
+	char *dest, *boot_ver, *preboot_ver;
+	u32 ver_off, limit;
+	const char *name;
+	char btl_ver[32];
+
+	static_loader = &hdev->fw_loader.static_loader;
+
+	switch (fwc) {
+	case FW_COMP_BOOT_FIT:
+		ver_off = RREG32(static_loader->boot_fit_version_offset_reg);
+		dest = prop->uboot_ver;
+		name = "Boot-fit";
+		limit = static_loader->boot_fit_version_max_off;
+		break;
+	case FW_COMP_PREBOOT:
+		ver_off = RREG32(static_loader->preboot_version_offset_reg);
+		dest = prop->preboot_ver;
+		name = "Preboot";
+		limit = static_loader->preboot_version_max_off;
+		break;
+	default:
+		dev_warn(hdev->dev, "Undefined FW component: %d\n", fwc);
+		return -EIO;
+	}
+
+	ver_off &= static_loader->sram_offset_mask;
+
+	if (ver_off < limit) {
+		memcpy_fromio(dest,
+			hdev->pcie_bar[fw_loader->sram_bar_id] + ver_off,
+			VERSION_MAX_LEN);
+	} else {
+		dev_err(hdev->dev, "%s version offset (0x%x) is above SRAM\n",
+								name, ver_off);
+		strscpy(dest, "unavailable", VERSION_MAX_LEN);
+		return -EIO;
+	}
+
+	if (fwc == FW_COMP_BOOT_FIT) {
+		boot_ver = extract_fw_ver_from_str(prop->uboot_ver);
+		if (boot_ver) {
+			dev_info(hdev->dev, "boot-fit version %s\n", boot_ver);
+			kfree(boot_ver);
+		}
+	} else if (fwc == FW_COMP_PREBOOT) {
+		preboot_ver = strnstr(prop->preboot_ver, "Preboot",
+						VERSION_MAX_LEN);
+		if (preboot_ver && preboot_ver != prop->preboot_ver) {
+			strscpy(btl_ver, prop->preboot_ver,
+				min((int) (preboot_ver - prop->preboot_ver),
+									31));
+			dev_info(hdev->dev, "%s\n", btl_ver);
+		}
+
+		preboot_ver = extract_fw_ver_from_str(prop->preboot_ver);
+		if (preboot_ver) {
+			dev_info(hdev->dev, "preboot version %s\n",
+								preboot_ver);
+			kfree(preboot_ver);
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * hl_fw_preboot_update_state - update internal data structures during
+ *                              handshake with preboot
+ *
+ * @hdev: pointer to the habanalabs device structure
+ */
+static void hl_fw_preboot_update_state(struct hl_device *hdev)
+{
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	u32 cpu_boot_dev_sts0, cpu_boot_dev_sts1;
+
+	cpu_boot_dev_sts0 = prop->fw_preboot_cpu_boot_dev_sts0;
+	cpu_boot_dev_sts1 = prop->fw_preboot_cpu_boot_dev_sts1;
+
+	/* We read boot_dev_sts registers multiple times during boot:
 	 * 1. preboot - a. Check whether the security status bits are valid
 	 *              b. Check whether fw security is enabled
 	 *              c. Check whether hard reset is done by preboot
···
 	 *              b. Check whether hard reset is done by fw app
 	 *
 	 * Preboot:
-	 * Check security status bit (CPU_BOOT_DEV_STS0_ENABLED), if it is set
+	 * Check security status bit (CPU_BOOT_DEV_STS0_ENABLED). If set, then
 	 * check security enabled bit (CPU_BOOT_DEV_STS0_SECURITY_EN)
+	 * If set, then mark GIC controller to be disabled.
 	 */
-	if (security_status & CPU_BOOT_DEV_STS0_ENABLED) {
-		prop->fw_security_status_valid = 1;
+	prop->hard_reset_done_by_fw =
+		!!(cpu_boot_dev_sts0 & CPU_BOOT_DEV_STS0_FW_HARD_RST_EN);
 
-		/* FW security should be derived from PCI ID, we keep this
-		 * check for backward compatibility
-		 */
-		if (security_status & CPU_BOOT_DEV_STS0_SECURITY_EN)
-			prop->fw_security_disabled = false;
+	dev_dbg(hdev->dev, "Firmware preboot boot device status0 %#x\n",
+							cpu_boot_dev_sts0);
 
-		if (security_status & CPU_BOOT_DEV_STS0_FW_HARD_RST_EN)
-			prop->hard_reset_done_by_fw = true;
-	} else {
-		prop->fw_security_status_valid = 0;
-	}
-
-	dev_dbg(hdev->dev, "Firmware preboot security status %#x\n",
-			security_status);
+	dev_dbg(hdev->dev, "Firmware preboot boot device status1 %#x\n",
+							cpu_boot_dev_sts1);
 
 	dev_dbg(hdev->dev, "Firmware preboot hard-reset is %s\n",
 		prop->hard_reset_done_by_fw ? "enabled" : "disabled");
 
-	dev_info(hdev->dev, "firmware-level security is %s\n",
-		prop->fw_security_disabled ? "disabled" : "enabled");
+	dev_dbg(hdev->dev, "firmware-level security is %s\n",
+		prop->fw_security_enabled ? "enabled" : "disabled");
+
+	dev_dbg(hdev->dev, "GIC controller is %s\n",
+		prop->gic_interrupts_enable ? "enabled" : "disabled");
+}
+
+static int hl_fw_static_read_preboot_status(struct hl_device *hdev)
+{
+	int rc;
+
+	rc = hl_fw_static_read_device_fw_version(hdev, FW_COMP_PREBOOT);
+	if (rc)
+		return rc;
 
 	return 0;
 }
 
-int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg,
-			u32 msg_to_cpu_reg, u32 cpu_msg_status_reg,
-			u32 cpu_security_boot_status_reg, u32 boot_err0_reg,
-			bool skip_bmc, u32 cpu_timeout, u32 boot_fit_timeout)
+int hl_fw_read_preboot_status(struct hl_device *hdev, u32 cpu_boot_status_reg,
+				u32 sts_boot_dev_sts0_reg,
+				u32 sts_boot_dev_sts1_reg, u32 boot_err0_reg,
+				u32 boot_err1_reg, u32 timeout)
+{
+	int rc;
+
+	/* pldm was added for cases in which we use preboot on pldm and want
+	 * to load boot fit, but we can't wait for preboot because it runs
+	 * very slowly
+	 */
+	if (!(hdev->fw_components & FW_TYPE_PREBOOT_CPU) || hdev->pldm)
+		return 0;
+
+	/*
+	 * In order to determine the boot method (static VS dynamic) we need
+	 * to read the boot caps register
+	 */
+	rc = hl_fw_read_preboot_caps(hdev, cpu_boot_status_reg,
+					sts_boot_dev_sts0_reg,
+					sts_boot_dev_sts1_reg, boot_err0_reg,
+					boot_err1_reg, timeout);
+	if (rc)
+		return rc;
+
+	hl_fw_preboot_update_state(hdev);
+
+	/* no need to read preboot status in dynamic load */
+	if (hdev->asic_prop.dynamic_fw_load)
+		return 0;
+
+	return hl_fw_static_read_preboot_status(hdev);
+}
+
+/* associate string with COMM status */
+static char *hl_dynamic_fw_status_str[COMMS_STS_INVLD_LAST] = {
+	[COMMS_STS_NOOP] = "NOOP",
+	[COMMS_STS_ACK] = "ACK",
+	[COMMS_STS_OK] = "OK",
+	[COMMS_STS_ERR] = "ERR",
+	[COMMS_STS_VALID_ERR] = "VALID_ERR",
+	[COMMS_STS_TIMEOUT_ERR] = "TIMEOUT_ERR",
+};
+
+/**
+ * hl_fw_dynamic_report_error_status - report error status
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @status: value of FW status register
+ * @expected_status: the expected status
+ */
+static void hl_fw_dynamic_report_error_status(struct hl_device *hdev,
+						u32 status,
+						enum comms_sts expected_status)
+{
+	enum comms_sts comm_status =
+				FIELD_GET(COMMS_STATUS_STATUS_MASK, status);
+
+	if (comm_status < COMMS_STS_INVLD_LAST)
+		dev_err(hdev->dev, "Device status %s, expected status: %s\n",
+				hl_dynamic_fw_status_str[comm_status],
+				hl_dynamic_fw_status_str[expected_status]);
+	else
+		dev_err(hdev->dev, "Device status unknown %d, expected status: %s\n",
+				comm_status,
+				hl_dynamic_fw_status_str[expected_status]);
+}
+
+/**
+ * hl_fw_dynamic_send_cmd - send LKD to FW cmd
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @fw_loader: managing structure for loading device's FW
+ * @cmd: LKD to FW cmd code
+ * @size: size of next FW component to be loaded (0 if not necessary)
+ *
+ * The exact LKD to FW command layout is defined at struct comms_command.
+ * note: the size argument is used only when the next FW component should be
+ *       loaded, otherwise it shall be 0. the size is used by the FW in later
+ *       protocol stages; when sending, it only indicates the amount of memory
+ *       to be allocated by the FW to receive the next boot component.
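`hl_dynamic_fw_status_str[]` above is the designated-initializer lookup-table idiom, and the `comm_status < COMMS_STS_INVLD_LAST` check in `hl_fw_dynamic_report_error_status()` is what keeps an unexpected FW value from indexing past the table. A standalone model of the same idiom (the enum and strings here are abbreviated, not the full COMMS set):

```c
#include <assert.h>
#include <string.h>

/* Abbreviated status enum; the sentinel doubles as the table size. */
enum demo_sts { STS_NOOP, STS_ACK, STS_OK, STS_ERR, STS_INVLD_LAST };

/* Designated initializers keep the table correct even if enum values are
 * reordered or sparse. */
static const char * const demo_sts_str[STS_INVLD_LAST] = {
	[STS_NOOP] = "NOOP",
	[STS_ACK]  = "ACK",
	[STS_OK]   = "OK",
	[STS_ERR]  = "ERR",
};

/* Range-checked accessor: out-of-range values from untrusted FW never
 * index the array. */
static const char *demo_sts_name(unsigned int sts)
{
	return sts < STS_INVLD_LAST ? demo_sts_str[sts] : "unknown";
}
```

Note the value being range-checked comes from a device register, i.e. it is untrusted input, which is why the bound is enforced before every table access.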
+ */
+static void hl_fw_dynamic_send_cmd(struct hl_device *hdev,
+				struct fw_load_mgr *fw_loader,
+				enum comms_cmd cmd, unsigned int size)
+{
+	struct cpu_dyn_regs *dyn_regs;
+	u32 val;
+
+	dyn_regs = &fw_loader->dynamic_loader.comm_desc.cpu_dyn_regs;
+
+	val = FIELD_PREP(COMMS_COMMAND_CMD_MASK, cmd);
+	val |= FIELD_PREP(COMMS_COMMAND_SIZE_MASK, size);
+
+	WREG32(le32_to_cpu(dyn_regs->kmd_msg_to_cpu), val);
+}
+
+/**
+ * hl_fw_dynamic_extract_fw_response - update the FW response
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @fw_loader: managing structure for loading device's FW
+ * @response: FW response
+ * @status: the status read from CPU status register
+ *
+ * @return 0 on success, otherwise non-zero error code
+ */
+static int hl_fw_dynamic_extract_fw_response(struct hl_device *hdev,
+						struct fw_load_mgr *fw_loader,
+						struct fw_response *response,
+						u32 status)
+{
+	response->status = FIELD_GET(COMMS_STATUS_STATUS_MASK, status);
+	response->ram_offset = FIELD_GET(COMMS_STATUS_OFFSET_MASK, status) <<
+						COMMS_STATUS_OFFSET_ALIGN_SHIFT;
+	response->ram_type = FIELD_GET(COMMS_STATUS_RAM_TYPE_MASK, status);
+
+	if ((response->ram_type != COMMS_SRAM) &&
+					(response->ram_type != COMMS_DRAM)) {
+		dev_err(hdev->dev, "FW status: invalid RAM type %u\n",
+							response->ram_type);
+		return -EIO;
+	}
+
+	return 0;
+}
+
+/**
+ * hl_fw_dynamic_wait_for_status - wait for status in dynamic FW load
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @fw_loader: managing structure for loading device's FW
+ * @expected_status: expected status to wait for
+ * @timeout: timeout for status wait
+ *
+ * @return 0 on success, otherwise non-zero error code
+ *
+ * waiting for status from FW includes polling the FW status register until
+ * the expected status is received or a timeout occurs (whichever comes first).
+ */
+static int hl_fw_dynamic_wait_for_status(struct hl_device *hdev,
+						struct fw_load_mgr *fw_loader,
+						enum comms_sts expected_status,
+						u32 timeout)
+{
+	struct cpu_dyn_regs *dyn_regs;
+	u32 status;
+	int rc;
+
+	dyn_regs = &fw_loader->dynamic_loader.comm_desc.cpu_dyn_regs;
+
+	/* Wait for expected status */
+	rc = hl_poll_timeout(
+		hdev,
+		le32_to_cpu(dyn_regs->cpu_cmd_status_to_host),
+		status,
+		FIELD_GET(COMMS_STATUS_STATUS_MASK, status) == expected_status,
+		FW_CPU_STATUS_POLL_INTERVAL_USEC,
+		timeout);
+
+	if (rc) {
+		hl_fw_dynamic_report_error_status(hdev, status,
+							expected_status);
+		return -EIO;
+	}
+
+	/*
+	 * skip storing FW response for NOOP to preserve the actual desired
+	 * FW status
+	 */
+	if (expected_status == COMMS_STS_NOOP)
+		return 0;
+
+	rc = hl_fw_dynamic_extract_fw_response(hdev, fw_loader,
+					&fw_loader->dynamic_loader.response,
+					status);
+	return rc;
+}
+
+/**
+ * hl_fw_dynamic_send_clear_cmd - send clear command to FW
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @fw_loader: managing structure for loading device's FW
+ *
+ * @return 0 on success, otherwise non-zero error code
+ *
+ * after a command cycle between LKD and FW CPU (i.e. LKD got an expected
+ * status from FW) we need to clear the CPU status register in order to avoid
+ * garbage between command cycles.
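`hl_fw_dynamic_send_cmd()` packs the opcode and payload size into a single 32-bit doorbell write with `FIELD_PREP()`, and the response path unpacks fields with `FIELD_GET()`. A freestanding model of those two macros (the mask values below are made up for illustration; the real `COMMS_COMMAND_*_MASK` definitions live in the habanalabs comms headers, and the kernel macros additionally do compile-time mask validation):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical field layout: opcode in the top 5 bits, size below it. */
#define CMD_MASK  0xf8000000u	/* bits 31:27 */
#define SIZE_MASK 0x07ffffffu	/* bits 26:0  */

/* Position of a mask's least significant set bit (mask must be non-zero). */
static unsigned int mask_shift(uint32_t mask)
{
	unsigned int s = 0;

	while (!(mask & 1u)) {
		mask >>= 1;
		s++;
	}
	return s;
}

/* FIELD_PREP-style: shift a value into the field and clamp to the mask. */
static uint32_t field_prep(uint32_t mask, uint32_t val)
{
	return (val << mask_shift(mask)) & mask;
}

/* FIELD_GET-style: isolate the field and shift it back down. */
static uint32_t field_get(uint32_t mask, uint32_t reg)
{
	return (reg & mask) >> mask_shift(mask);
}
```

Packing both fields into one register write keeps the LKD-to-FW command atomic from the firmware's point of view: it never observes a half-updated opcode/size pair.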
+ * This is done by sending a clear command and polling the CPU-to-LKD status
+ * register until it holds the status NOOP
+ */
+static int hl_fw_dynamic_send_clear_cmd(struct hl_device *hdev,
+						struct fw_load_mgr *fw_loader)
+{
+	hl_fw_dynamic_send_cmd(hdev, fw_loader, COMMS_CLR_STS, 0);
+
+	return hl_fw_dynamic_wait_for_status(hdev, fw_loader, COMMS_STS_NOOP,
+							fw_loader->cpu_timeout);
+}
+
+/**
+ * hl_fw_dynamic_send_protocol_cmd - send LKD to FW cmd and wait for ACK
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @fw_loader: managing structure for loading device's FW
+ * @cmd: LKD to FW cmd code
+ * @size: size of next FW component to be loaded (0 if not necessary)
+ * @wait_ok: if true also wait for OK response from FW
+ * @timeout: timeout for status wait
+ *
+ * @return 0 on success, otherwise non-zero error code
+ *
+ * brief:
+ * when sending a protocol command we have the following steps:
+ * - send clear (clear command and verify clear status register)
+ * - send the actual protocol command
+ * - wait for ACK on the protocol command
+ * - send clear
+ * - send NOOP
+ * if, in addition, the specific protocol command should wait for OK then:
+ * - wait for OK
+ * - send clear
+ * - send NOOP
+ *
+ * NOTES:
+ * send clear: this is necessary in order to clear the status register to avoid
+ *             leftovers between commands
+ * NOOP command: necessary to avoid a loop on the clear command by the FW
+ */
+int hl_fw_dynamic_send_protocol_cmd(struct hl_device *hdev,
+				struct fw_load_mgr *fw_loader,
+				enum comms_cmd cmd, unsigned int size,
+				bool wait_ok, u32 timeout)
+{
+	int rc;
+
+	/* first send clear command to clean former commands */
+	rc = hl_fw_dynamic_send_clear_cmd(hdev, fw_loader);
+	if (rc)
+		return rc;
+
+	/* send the actual command */
+	hl_fw_dynamic_send_cmd(hdev, fw_loader, cmd, size);
+
+	/* wait for ACK for the command */
+	rc = hl_fw_dynamic_wait_for_status(hdev, fw_loader, COMMS_STS_ACK,
+								timeout);
+	if (rc)
+		return rc;
+
+	/* clear command to prepare for NOOP command */
+	rc = hl_fw_dynamic_send_clear_cmd(hdev, fw_loader);
+	if (rc)
+		return rc;
+
+	/* send the actual NOOP command */
+	hl_fw_dynamic_send_cmd(hdev, fw_loader, COMMS_NOOP, 0);
+
+	if (!wait_ok)
+		return 0;
+
+	rc = hl_fw_dynamic_wait_for_status(hdev, fw_loader, COMMS_STS_OK,
+								timeout);
+	if (rc)
+		return rc;
+
+	/* clear command to prepare for NOOP command */
+	rc = hl_fw_dynamic_send_clear_cmd(hdev, fw_loader);
+	if (rc)
+		return rc;
+
+	/* send the actual NOOP command */
+	hl_fw_dynamic_send_cmd(hdev, fw_loader, COMMS_NOOP, 0);
+
+	return 0;
+}
+
+/**
+ * hl_fw_compat_crc32 - CRC compatible with FW
+ *
+ * @data: pointer to the data
+ * @size: size of the data
+ *
+ * @return the CRC32 result
+ *
+ * NOTE: kernel's CRC32 differs from the standard CRC32 calculation.
+ *       in order to be aligned we need to flip the bits of both the input
+ *       initial CRC and kernel's CRC32 result.
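The double inversion in `hl_fw_compat_crc32()` deserves a worked example: the kernel's `crc32_le()` takes the seed verbatim and performs no final XOR, so flipping both the seed and the result reproduces the standard (zlib-style) CRC-32 that the FW computes. A bitwise model (the in-kernel `crc32_le()` is table-driven, but it computes the same function):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Model of the kernel's crc32_le(): reflected CRC-32 polynomial 0xEDB88320,
 * seed used as-is, and crucially NO final bit inversion. */
static uint32_t crc32_le_model(uint32_t crc, const uint8_t *p, size_t len)
{
	size_t i;
	int b;

	for (i = 0; i < len; i++) {
		crc ^= p[i];
		for (b = 0; b < 8; b++)
			crc = (crc & 1) ? (crc >> 1) ^ 0xEDB88320u
					: crc >> 1;
	}
	return crc;
}

/* The hl_fw_compat_crc32() trick: inverting the seed supplies the standard
 * 0xFFFFFFFF initial value, and inverting the result supplies the standard
 * final XOR, yielding the conventional CRC-32. */
static uint32_t compat_crc32(const uint8_t *data, size_t size)
{
	return ~crc32_le_model(~0u, data, size);
}
```

The standard CRC-32 check value for the string "123456789" is 0xCBF43926, which this double-inverted form reproduces.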
+ *       in addition both sides use an initial CRC of 0.
+ */
+static u32 hl_fw_compat_crc32(u8 *data, size_t size)
+{
+	return ~crc32_le(~((u32)0), data, size);
+}
+
+/**
+ * hl_fw_dynamic_validate_memory_bound - validate memory bounds for memory
+ *                                       transfer (image or descriptor) between
+ *                                       host and FW
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @addr: device address of memory transfer
+ * @size: memory transfer size
+ * @region: PCI memory region
+ *
+ * @return 0 on success, otherwise non-zero error code
+ */
+static int hl_fw_dynamic_validate_memory_bound(struct hl_device *hdev,
+						u64 addr, size_t size,
+						struct pci_mem_region *region)
+{
+	u64 end_addr;
+
+	/* now make sure that the memory transfer is within region's bounds */
+	end_addr = addr + size;
+	if (end_addr >= region->region_base + region->region_size) {
+		dev_err(hdev->dev,
+			"dynamic FW load: memory transfer end address out of memory region bounds. addr: %llx\n",
+							end_addr);
+		return -EIO;
+	}
+
+	/*
+	 * now make sure the memory transfer is within predefined BAR bounds.
+	 * this is to make sure we do not need to set the bar (e.g. for DRAM
+	 * memory transfers)
+	 */
+	if (end_addr >= region->region_base - region->offset_in_bar +
+							region->bar_size) {
+		dev_err(hdev->dev,
+			"FW image beyond PCI BAR bounds\n");
+		return -EIO;
+	}
+
+	return 0;
+}
+
+/**
+ * hl_fw_dynamic_validate_descriptor - validate FW descriptor
+ *
+ * @hdev: pointer to the habanalabs device structure
+ * @fw_loader: managing structure for loading device's FW
+ * @fw_desc: the descriptor from FW
+ *
+ * @return 0 on success, otherwise non-zero error code
+ */
+static int hl_fw_dynamic_validate_descriptor(struct hl_device *hdev,
+					struct fw_load_mgr *fw_loader,
+					struct lkd_fw_comms_desc *fw_desc)
+{
+	struct pci_mem_region *region;
+	enum pci_region region_id;
+	size_t data_size;
+	u32 data_crc32;
+	u8 *data_ptr;
+	u64 addr;
+	int rc;
+
+	if (le32_to_cpu(fw_desc->header.magic) != HL_COMMS_DESC_MAGIC) {
+		dev_err(hdev->dev, "Invalid magic for dynamic FW descriptor (%x)\n",
+				fw_desc->header.magic);
+		return -EIO;
+	}
+
+	if (fw_desc->header.version != HL_COMMS_DESC_VER) {
+		dev_err(hdev->dev, "Invalid version for dynamic FW descriptor (%x)\n",
+				fw_desc->header.version);
+		return -EIO;
+	}
+
+	/*
+	 * calc CRC32 of data without header.
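`hl_fw_dynamic_validate_memory_bound()` checks a transfer against two ceilings: the memory region itself, and the slice of the region currently reachable through the PCI BAR without reprogramming it. A simplified model (field names mirror `struct pci_mem_region`, but the semantics of `offset_in_bar` as used here are my reading of the code, and all values in the usage note are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative mirror of the relevant pci_mem_region fields. */
struct mem_region {
	uint64_t region_base;	/* device address where the region starts */
	uint64_t region_size;	/* total region size */
	uint64_t offset_in_bar;	/* region offset the BAR currently maps */
	uint64_t bar_size;	/* size of the mapping window */
};

/* A transfer [addr, addr + size) is acceptable only if it fits inside the
 * region AND inside the window the BAR exposes without being re-set. */
static bool transfer_in_bounds(uint64_t addr, uint64_t size,
			       const struct mem_region *r)
{
	uint64_t end = addr + size;

	if (end >= r->region_base + r->region_size)
		return false;	/* beyond the region itself */
	if (end >= r->region_base - r->offset_in_bar + r->bar_size)
		return false;	/* beyond what the current BAR mapping covers */
	return true;
}
```

With a region at 0x1000 of size 0x10000 fully covered by the BAR, a 0x100-byte transfer at the base passes, while one spanning the whole region trips the end-address check, matching the `>=` comparisons in the driver.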
1283 + * note that no alignment/stride address issues here as all structures 1284 + * are 64 bit padded 1285 + */ 1286 + data_size = sizeof(struct lkd_fw_comms_desc) - 1287 + sizeof(struct comms_desc_header); 1288 + data_ptr = (u8 *)fw_desc + sizeof(struct comms_desc_header); 1289 + 1290 + if (le16_to_cpu(fw_desc->header.size) != data_size) { 1291 + dev_err(hdev->dev, 1292 + "Invalid descriptor size 0x%x, expected size 0x%zx\n", 1293 + le16_to_cpu(fw_desc->header.size), data_size); 1294 + return -EIO; 1295 + } 1296 + 1297 + data_crc32 = hl_fw_compat_crc32(data_ptr, data_size); 1298 + 1299 + if (data_crc32 != le32_to_cpu(fw_desc->header.crc32)) { 1300 + dev_err(hdev->dev, 1301 + "CRC32 mismatch for dynamic FW descriptor (%x:%x)\n", 1302 + data_crc32, fw_desc->header.crc32); 1303 + return -EIO; 1304 + } 1305 + 1306 + /* find memory region to which to copy the image */ 1307 + addr = le64_to_cpu(fw_desc->img_addr); 1308 + region_id = hl_get_pci_memory_region(hdev, addr); 1309 + if ((region_id != PCI_REGION_SRAM) && 1310 + ((region_id != PCI_REGION_DRAM))) { 1311 + dev_err(hdev->dev, 1312 + "Invalid region to copy FW image address=%llx\n", addr); 1313 + return -EIO; 1314 + } 1315 + 1316 + region = &hdev->pci_mem_region[region_id]; 1317 + 1318 + /* store the region for the copy stage */ 1319 + fw_loader->dynamic_loader.image_region = region; 1320 + 1321 + /* 1322 + * here we know that the start address is valid, now make sure that the 1323 + * image is within region's bounds 1324 + */ 1325 + rc = hl_fw_dynamic_validate_memory_bound(hdev, addr, 1326 + fw_loader->dynamic_loader.fw_image_size, 1327 + region); 1328 + if (rc) { 1329 + dev_err(hdev->dev, 1330 + "invalid mem transfer request for FW image\n"); 1331 + return rc; 1332 + } 1333 + 1334 + return 0; 1335 + } 1336 + 1337 + static int hl_fw_dynamic_validate_response(struct hl_device *hdev, 1338 + struct fw_response *response, 1339 + struct pci_mem_region *region) 1340 + { 1341 + u64 device_addr; 1342 + int rc; 1343 + 
1344 + device_addr = region->region_base + response->ram_offset; 1345 + 1346 + /* 1347 + * validate that the descriptor is within region's bounds 1348 + * Note that as the start address was supplied according to the RAM 1349 + * type- testing only the end address is enough 1350 + */ 1351 + rc = hl_fw_dynamic_validate_memory_bound(hdev, device_addr, 1352 + sizeof(struct lkd_fw_comms_desc), 1353 + region); 1354 + return rc; 1355 + } 1356 + 1357 + /** 1358 + * hl_fw_dynamic_read_and_validate_descriptor - read and validate FW descriptor 1359 + * 1360 + * @hdev: pointer to the habanalabs device structure 1361 + * @fw_loader: managing structure for loading device's FW 1362 + * 1363 + * @return 0 on success, otherwise non-zero error code 1364 + */ 1365 + static int hl_fw_dynamic_read_and_validate_descriptor(struct hl_device *hdev, 1366 + struct fw_load_mgr *fw_loader) 1367 + { 1368 + struct lkd_fw_comms_desc *fw_desc; 1369 + struct pci_mem_region *region; 1370 + struct fw_response *response; 1371 + enum pci_region region_id; 1372 + void __iomem *src; 1373 + int rc; 1374 + 1375 + fw_desc = &fw_loader->dynamic_loader.comm_desc; 1376 + response = &fw_loader->dynamic_loader.response; 1377 + 1378 + region_id = (response->ram_type == COMMS_SRAM) ? 
1379 + PCI_REGION_SRAM : PCI_REGION_DRAM; 1380 + 1381 + region = &hdev->pci_mem_region[region_id]; 1382 + 1383 + rc = hl_fw_dynamic_validate_response(hdev, response, region); 1384 + if (rc) { 1385 + dev_err(hdev->dev, 1386 + "invalid mem transfer request for FW descriptor\n"); 1387 + return rc; 1388 + } 1389 + 1390 + /* extract address copy the descriptor from */ 1391 + src = hdev->pcie_bar[region->bar_id] + region->offset_in_bar + 1392 + response->ram_offset; 1393 + memcpy_fromio(fw_desc, src, sizeof(struct lkd_fw_comms_desc)); 1394 + 1395 + return hl_fw_dynamic_validate_descriptor(hdev, fw_loader, fw_desc); 1396 + } 1397 + 1398 + /** 1399 + * hl_fw_dynamic_request_descriptor - handshake with CPU to get FW descriptor 1400 + * 1401 + * @hdev: pointer to the habanalabs device structure 1402 + * @fw_loader: managing structure for loading device's FW 1403 + * @next_image_size: size to allocate for next FW component 1404 + * 1405 + * @return 0 on success, otherwise non-zero error code 1406 + */ 1407 + static int hl_fw_dynamic_request_descriptor(struct hl_device *hdev, 1408 + struct fw_load_mgr *fw_loader, 1409 + size_t next_image_size) 1410 + { 1411 + int rc; 1412 + 1413 + rc = hl_fw_dynamic_send_protocol_cmd(hdev, fw_loader, COMMS_PREP_DESC, 1414 + next_image_size, true, 1415 + fw_loader->cpu_timeout); 1416 + if (rc) 1417 + return rc; 1418 + 1419 + return hl_fw_dynamic_read_and_validate_descriptor(hdev, fw_loader); 1420 + } 1421 + 1422 + /** 1423 + * hl_fw_dynamic_read_device_fw_version - read FW version to exposed properties 1424 + * 1425 + * @hdev: pointer to the habanalabs device structure 1426 + * @fwc: the firmware component 1427 + * @fw_version: fw component's version string 1428 + */ 1429 + static void hl_fw_dynamic_read_device_fw_version(struct hl_device *hdev, 1430 + enum hl_fw_component fwc, 1431 + const char *fw_version) 1277 1432 { 1278 1433 struct asic_fixed_properties *prop = &hdev->asic_prop; 1434 + char *preboot_ver, *boot_ver; 1435 + char btl_ver[32]; 
1436 + 1437 + switch (fwc) { 1438 + case FW_COMP_BOOT_FIT: 1439 + strscpy(prop->uboot_ver, fw_version, VERSION_MAX_LEN); 1440 + boot_ver = extract_fw_ver_from_str(prop->uboot_ver); 1441 + if (boot_ver) { 1442 + dev_info(hdev->dev, "boot-fit version %s\n", boot_ver); 1443 + kfree(boot_ver); 1444 + } 1445 + 1446 + break; 1447 + case FW_COMP_PREBOOT: 1448 + strscpy(prop->preboot_ver, fw_version, VERSION_MAX_LEN); 1449 + preboot_ver = strnstr(prop->preboot_ver, "Preboot", 1450 + VERSION_MAX_LEN); 1451 + if (preboot_ver && preboot_ver != prop->preboot_ver) { 1452 + strscpy(btl_ver, prop->preboot_ver, 1453 + min((int) (preboot_ver - prop->preboot_ver), 1454 + 31)); 1455 + dev_info(hdev->dev, "%s\n", btl_ver); 1456 + } 1457 + 1458 + preboot_ver = extract_fw_ver_from_str(prop->preboot_ver); 1459 + if (preboot_ver) { 1460 + dev_info(hdev->dev, "preboot version %s\n", 1461 + preboot_ver); 1462 + kfree(preboot_ver); 1463 + } 1464 + 1465 + break; 1466 + default: 1467 + dev_warn(hdev->dev, "Undefined FW component: %d\n", fwc); 1468 + return; 1469 + } 1470 + } 1471 + 1472 + /** 1473 + * hl_fw_dynamic_copy_image - copy image to memory allocated by the FW 1474 + * 1475 + * @hdev: pointer to the habanalabs device structure 1476 + * @fw: fw descriptor 1477 + * @fw_loader: managing structure for loading device's FW 1478 + */ 1479 + static int hl_fw_dynamic_copy_image(struct hl_device *hdev, 1480 + const struct firmware *fw, 1481 + struct fw_load_mgr *fw_loader) 1482 + { 1483 + struct lkd_fw_comms_desc *fw_desc; 1484 + struct pci_mem_region *region; 1485 + void __iomem *dest; 1486 + u64 addr; 1487 + int rc; 1488 + 1489 + fw_desc = &fw_loader->dynamic_loader.comm_desc; 1490 + addr = le64_to_cpu(fw_desc->img_addr); 1491 + 1492 + /* find memory region to which to copy the image */ 1493 + region = fw_loader->dynamic_loader.image_region; 1494 + 1495 + dest = hdev->pcie_bar[region->bar_id] + region->offset_in_bar + 1496 + (addr - region->region_base); 1497 + 1498 + rc = 
hl_fw_copy_fw_to_device(hdev, fw, dest, 1499 + fw_loader->boot_fit_img.src_off, 1500 + fw_loader->boot_fit_img.copy_size); 1501 + 1502 + return rc; 1503 + } 1504 + 1505 + /** 1506 + * hl_fw_dynamic_copy_msg - copy msg to memory allocated by the FW 1507 + * 1508 + * @hdev: pointer to the habanalabs device structure 1509 + * @msg: message 1510 + * @fw_loader: managing structure for loading device's FW 1511 + */ 1512 + static int hl_fw_dynamic_copy_msg(struct hl_device *hdev, 1513 + struct lkd_msg_comms *msg, struct fw_load_mgr *fw_loader) 1514 + { 1515 + struct lkd_fw_comms_desc *fw_desc; 1516 + struct pci_mem_region *region; 1517 + void __iomem *dest; 1518 + u64 addr; 1519 + int rc; 1520 + 1521 + fw_desc = &fw_loader->dynamic_loader.comm_desc; 1522 + addr = le64_to_cpu(fw_desc->img_addr); 1523 + 1524 + /* find memory region to which to copy the image */ 1525 + region = fw_loader->dynamic_loader.image_region; 1526 + 1527 + dest = hdev->pcie_bar[region->bar_id] + region->offset_in_bar + 1528 + (addr - region->region_base); 1529 + 1530 + rc = hl_fw_copy_msg_to_device(hdev, msg, dest, 0, 0); 1531 + 1532 + return rc; 1533 + } 1534 + 1535 + /** 1536 + * hl_fw_boot_fit_update_state - update internal data structures after boot-fit 1537 + * is loaded 1538 + * 1539 + * @hdev: pointer to the habanalabs device structure 1540 + * @cpu_boot_dev_sts0_reg: register holding CPU boot dev status 0 1541 + * @cpu_boot_dev_sts1_reg: register holding CPU boot dev status 1 1542 + * 1543 + * @return 0 on success, otherwise non-zero error code 1544 + */ 1545 + static void hl_fw_boot_fit_update_state(struct hl_device *hdev, 1546 + u32 cpu_boot_dev_sts0_reg, 1547 + u32 cpu_boot_dev_sts1_reg) 1548 + { 1549 + struct asic_fixed_properties *prop = &hdev->asic_prop; 1550 + 1551 + /* Clear reset status since we need to read it again from boot CPU */ 1552 + prop->hard_reset_done_by_fw = false; 1553 + 1554 + /* Read boot_cpu status bits */ 1555 + if (prop->fw_preboot_cpu_boot_dev_sts0 & 
CPU_BOOT_DEV_STS0_ENABLED) { 1556 + prop->fw_bootfit_cpu_boot_dev_sts0 = 1557 + RREG32(cpu_boot_dev_sts0_reg); 1558 + 1559 + if (prop->fw_bootfit_cpu_boot_dev_sts0 & 1560 + CPU_BOOT_DEV_STS0_FW_HARD_RST_EN) 1561 + prop->hard_reset_done_by_fw = true; 1562 + 1563 + dev_dbg(hdev->dev, "Firmware boot CPU status0 %#x\n", 1564 + prop->fw_bootfit_cpu_boot_dev_sts0); 1565 + } 1566 + 1567 + if (prop->fw_cpu_boot_dev_sts1_valid) { 1568 + prop->fw_bootfit_cpu_boot_dev_sts1 = 1569 + RREG32(cpu_boot_dev_sts1_reg); 1570 + 1571 + dev_dbg(hdev->dev, "Firmware boot CPU status1 %#x\n", 1572 + prop->fw_bootfit_cpu_boot_dev_sts1); 1573 + } 1574 + 1575 + dev_dbg(hdev->dev, "Firmware boot CPU hard-reset is %s\n", 1576 + prop->hard_reset_done_by_fw ? "enabled" : "disabled"); 1577 + } 1578 + 1579 + static void hl_fw_dynamic_update_linux_interrupt_if(struct hl_device *hdev) 1580 + { 1581 + struct cpu_dyn_regs *dyn_regs = 1582 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 1583 + 1584 + /* Check whether all 3 interrupt interfaces are set, if not use a 1585 + * single interface 1586 + */ 1587 + if (!hdev->asic_prop.gic_interrupts_enable && 1588 + !(hdev->asic_prop.fw_app_cpu_boot_dev_sts0 & 1589 + CPU_BOOT_DEV_STS0_MULTI_IRQ_POLL_EN)) { 1590 + dyn_regs->gic_host_halt_irq = dyn_regs->gic_host_irq_ctrl; 1591 + dyn_regs->gic_host_ints_irq = dyn_regs->gic_host_irq_ctrl; 1592 + 1593 + dev_warn(hdev->dev, 1594 + "Using a single interrupt interface towards cpucp"); 1595 + } 1596 + } 1597 + /** 1598 + * hl_fw_dynamic_load_image - load FW image using dynamic protocol 1599 + * 1600 + * @hdev: pointer to the habanalabs device structure 1601 + * @fw_loader: managing structure for loading device's FW 1602 + * @load_fwc: the FW component to be loaded 1603 + * @img_ld_timeout: image load timeout 1604 + * 1605 + * @return 0 on success, otherwise non-zero error code 1606 + */ 1607 + static int hl_fw_dynamic_load_image(struct hl_device *hdev, 1608 + struct fw_load_mgr *fw_loader, 1609 + enum 
hl_fw_component load_fwc, 1610 + u32 img_ld_timeout) 1611 + { 1612 + enum hl_fw_component cur_fwc; 1613 + const struct firmware *fw; 1614 + char *fw_name; 1615 + int rc = 0; 1616 + 1617 + /* 1618 + * when loading image we have one of 2 scenarios: 1619 + * 1. current FW component is preboot and we want to load boot-fit 1620 + * 2. current FW component is boot-fit and we want to load linux 1621 + */ 1622 + if (load_fwc == FW_COMP_BOOT_FIT) { 1623 + cur_fwc = FW_COMP_PREBOOT; 1624 + fw_name = fw_loader->boot_fit_img.image_name; 1625 + } else { 1626 + cur_fwc = FW_COMP_BOOT_FIT; 1627 + fw_name = fw_loader->linux_img.image_name; 1628 + } 1629 + 1630 + /* request FW in order to communicate to FW the size to be allocated */ 1631 + rc = hl_request_fw(hdev, &fw, fw_name); 1632 + if (rc) 1633 + return rc; 1634 + 1635 + /* store the image size for future validation */ 1636 + fw_loader->dynamic_loader.fw_image_size = fw->size; 1637 + 1638 + rc = hl_fw_dynamic_request_descriptor(hdev, fw_loader, fw->size); 1639 + if (rc) 1640 + goto release_fw; 1641 + 1642 + /* read preboot version */ 1643 + hl_fw_dynamic_read_device_fw_version(hdev, cur_fwc, 1644 + fw_loader->dynamic_loader.comm_desc.cur_fw_ver); 1645 + 1646 + 1647 + /* update state according to boot stage */ 1648 + if (cur_fwc == FW_COMP_BOOT_FIT) { 1649 + struct cpu_dyn_regs *dyn_regs; 1650 + 1651 + dyn_regs = &fw_loader->dynamic_loader.comm_desc.cpu_dyn_regs; 1652 + hl_fw_boot_fit_update_state(hdev, 1653 + le32_to_cpu(dyn_regs->cpu_boot_dev_sts0), 1654 + le32_to_cpu(dyn_regs->cpu_boot_dev_sts1)); 1655 + } 1656 + 1657 + /* copy boot fit to space allocated by FW */ 1658 + rc = hl_fw_dynamic_copy_image(hdev, fw, fw_loader); 1659 + if (rc) 1660 + goto release_fw; 1661 + 1662 + rc = hl_fw_dynamic_send_protocol_cmd(hdev, fw_loader, COMMS_DATA_RDY, 1663 + 0, true, 1664 + fw_loader->cpu_timeout); 1665 + if (rc) 1666 + goto release_fw; 1667 + 1668 + rc = hl_fw_dynamic_send_protocol_cmd(hdev, fw_loader, COMMS_EXEC, 1669 + 0, false, 
1670 + img_ld_timeout); 1671 + 1672 + release_fw: 1673 + hl_release_firmware(fw); 1674 + return rc; 1675 + } 1676 + 1677 + static int hl_fw_dynamic_wait_for_boot_fit_active(struct hl_device *hdev, 1678 + struct fw_load_mgr *fw_loader) 1679 + { 1680 + struct dynamic_fw_load_mgr *dyn_loader; 1279 1681 u32 status; 1682 + int rc; 1683 + 1684 + dyn_loader = &fw_loader->dynamic_loader; 1685 + 1686 + /* Make sure CPU boot-loader is running */ 1687 + rc = hl_poll_timeout( 1688 + hdev, 1689 + le32_to_cpu(dyn_loader->comm_desc.cpu_dyn_regs.cpu_boot_status), 1690 + status, 1691 + (status == CPU_BOOT_STATUS_NIC_FW_RDY) || 1692 + (status == CPU_BOOT_STATUS_READY_TO_BOOT), 1693 + FW_CPU_STATUS_POLL_INTERVAL_USEC, 1694 + dyn_loader->wait_for_bl_timeout); 1695 + if (rc) { 1696 + dev_err(hdev->dev, "failed to wait for boot\n"); 1697 + return rc; 1698 + } 1699 + 1700 + dev_dbg(hdev->dev, "uboot status = %d\n", status); 1701 + return 0; 1702 + } 1703 + 1704 + static int hl_fw_dynamic_wait_for_linux_active(struct hl_device *hdev, 1705 + struct fw_load_mgr *fw_loader) 1706 + { 1707 + struct dynamic_fw_load_mgr *dyn_loader; 1708 + u32 status; 1709 + int rc; 1710 + 1711 + dyn_loader = &fw_loader->dynamic_loader; 1712 + 1713 + /* Make sure CPU boot-loader is running */ 1714 + 1715 + rc = hl_poll_timeout( 1716 + hdev, 1717 + le32_to_cpu(dyn_loader->comm_desc.cpu_dyn_regs.cpu_boot_status), 1718 + status, 1719 + (status == CPU_BOOT_STATUS_SRAM_AVAIL), 1720 + FW_CPU_STATUS_POLL_INTERVAL_USEC, 1721 + fw_loader->cpu_timeout); 1722 + if (rc) { 1723 + dev_err(hdev->dev, "failed to wait for Linux\n"); 1724 + return rc; 1725 + } 1726 + 1727 + dev_dbg(hdev->dev, "Boot status = %d\n", status); 1728 + return 0; 1729 + } 1730 + 1731 + /** 1732 + * hl_fw_linux_update_state - update internal data structures after Linux 1733 + * is loaded. 1734 + * Note: Linux initialization is comprised mainly 1735 + * of two stages - loading kernel (SRAM_AVAIL) 1736 + * & loading ARMCP. 
1737 + * Therefore reading boot device status in any of 1738 + * these stages might result in different values. 1739 + * 1740 + * @hdev: pointer to the habanalabs device structure 1741 + * @cpu_boot_dev_sts0_reg: register holding CPU boot dev status 0 1742 + * @cpu_boot_dev_sts1_reg: register holding CPU boot dev status 1 1743 + * 1744 + * @return 0 on success, otherwise non-zero error code 1745 + */ 1746 + static void hl_fw_linux_update_state(struct hl_device *hdev, 1747 + u32 cpu_boot_dev_sts0_reg, 1748 + u32 cpu_boot_dev_sts1_reg) 1749 + { 1750 + struct asic_fixed_properties *prop = &hdev->asic_prop; 1751 + 1752 + hdev->fw_loader.linux_loaded = true; 1753 + 1754 + /* Clear reset status since we need to read again from app */ 1755 + prop->hard_reset_done_by_fw = false; 1756 + 1757 + /* Read FW application security bits */ 1758 + if (prop->fw_cpu_boot_dev_sts0_valid) { 1759 + prop->fw_app_cpu_boot_dev_sts0 = 1760 + RREG32(cpu_boot_dev_sts0_reg); 1761 + 1762 + if (prop->fw_app_cpu_boot_dev_sts0 & 1763 + CPU_BOOT_DEV_STS0_FW_HARD_RST_EN) 1764 + prop->hard_reset_done_by_fw = true; 1765 + 1766 + if (prop->fw_app_cpu_boot_dev_sts0 & 1767 + CPU_BOOT_DEV_STS0_GIC_PRIVILEGED_EN) 1768 + prop->gic_interrupts_enable = false; 1769 + 1770 + dev_dbg(hdev->dev, 1771 + "Firmware application CPU status0 %#x\n", 1772 + prop->fw_app_cpu_boot_dev_sts0); 1773 + 1774 + dev_dbg(hdev->dev, "GIC controller is %s\n", 1775 + prop->gic_interrupts_enable ? 1776 + "enabled" : "disabled"); 1777 + } 1778 + 1779 + if (prop->fw_cpu_boot_dev_sts1_valid) { 1780 + prop->fw_app_cpu_boot_dev_sts1 = 1781 + RREG32(cpu_boot_dev_sts1_reg); 1782 + 1783 + dev_dbg(hdev->dev, 1784 + "Firmware application CPU status1 %#x\n", 1785 + prop->fw_app_cpu_boot_dev_sts1); 1786 + } 1787 + 1788 + dev_dbg(hdev->dev, "Firmware application CPU hard-reset is %s\n", 1789 + prop->hard_reset_done_by_fw ? 
"enabled" : "disabled"); 1790 + 1791 + dev_info(hdev->dev, "Successfully loaded firmware to device\n"); 1792 + } 1793 + 1794 + /** 1795 + * hl_fw_dynamic_report_reset_cause - send a COMMS message with the cause 1796 + * of the newly triggered hard reset 1797 + * 1798 + * @hdev: pointer to the habanalabs device structure 1799 + * @fw_loader: managing structure for loading device's FW 1800 + * @reset_cause: enumerated cause for the recent hard reset 1801 + * 1802 + * @return 0 on success, otherwise non-zero error code 1803 + */ 1804 + static int hl_fw_dynamic_report_reset_cause(struct hl_device *hdev, 1805 + struct fw_load_mgr *fw_loader, 1806 + enum comms_reset_cause reset_cause) 1807 + { 1808 + struct lkd_msg_comms msg; 1809 + int rc; 1810 + 1811 + memset(&msg, 0, sizeof(msg)); 1812 + 1813 + /* create message to be sent */ 1814 + msg.header.type = HL_COMMS_RESET_CAUSE_TYPE; 1815 + msg.header.size = cpu_to_le16(sizeof(struct comms_msg_header)); 1816 + msg.header.magic = cpu_to_le32(HL_COMMS_MSG_MAGIC); 1817 + 1818 + msg.reset_cause = reset_cause; 1819 + 1820 + rc = hl_fw_dynamic_request_descriptor(hdev, fw_loader, 1821 + sizeof(struct lkd_msg_comms)); 1822 + if (rc) 1823 + return rc; 1824 + 1825 + /* copy message to space allocated by FW */ 1826 + rc = hl_fw_dynamic_copy_msg(hdev, &msg, fw_loader); 1827 + if (rc) 1828 + return rc; 1829 + 1830 + rc = hl_fw_dynamic_send_protocol_cmd(hdev, fw_loader, COMMS_DATA_RDY, 1831 + 0, true, 1832 + fw_loader->cpu_timeout); 1833 + if (rc) 1834 + return rc; 1835 + 1836 + rc = hl_fw_dynamic_send_protocol_cmd(hdev, fw_loader, COMMS_EXEC, 1837 + 0, true, 1838 + fw_loader->cpu_timeout); 1839 + if (rc) 1840 + return rc; 1841 + 1842 + return 0; 1843 + } 1844 + 1845 + /** 1846 + * hl_fw_dynamic_init_cpu - initialize the device CPU using dynamic protocol 1847 + * 1848 + * @hdev: pointer to the habanalabs device structure 1849 + * @fw_loader: managing structure for loading device's FW 1850 + * 1851 + * @return 0 on success, otherwise 
non-zero error code 1852 + * 1853 + * brief: the dynamic protocol is master (LKD) slave (FW CPU) protocol. 1854 + * the communication is done using registers: 1855 + * - LKD command register 1856 + * - FW status register 1857 + * the protocol is race free. this goal is achieved by splitting the requests 1858 + * and response to known synchronization points between the LKD and the FW. 1859 + * each response to LKD request is known and bound to a predefined timeout. 1860 + * in case of timeout expiration without the desired status from FW- the 1861 + * protocol (and hence the boot) will fail. 1862 + */ 1863 + static int hl_fw_dynamic_init_cpu(struct hl_device *hdev, 1864 + struct fw_load_mgr *fw_loader) 1865 + { 1866 + struct cpu_dyn_regs *dyn_regs; 1867 + int rc; 1868 + 1869 + dev_info(hdev->dev, 1870 + "Loading firmware to device, may take some time...\n"); 1871 + 1872 + dyn_regs = &fw_loader->dynamic_loader.comm_desc.cpu_dyn_regs; 1873 + 1874 + rc = hl_fw_dynamic_send_protocol_cmd(hdev, fw_loader, COMMS_RST_STATE, 1875 + 0, true, 1876 + fw_loader->cpu_timeout); 1877 + if (rc) 1878 + goto protocol_err; 1879 + 1880 + if (hdev->curr_reset_cause) { 1881 + rc = hl_fw_dynamic_report_reset_cause(hdev, fw_loader, 1882 + hdev->curr_reset_cause); 1883 + if (rc) 1884 + goto protocol_err; 1885 + 1886 + /* Clear current reset cause */ 1887 + hdev->curr_reset_cause = HL_RESET_CAUSE_UNKNOWN; 1888 + } 1889 + 1890 + if (!(hdev->fw_components & FW_TYPE_BOOT_CPU)) { 1891 + rc = hl_fw_dynamic_request_descriptor(hdev, fw_loader, 0); 1892 + if (rc) 1893 + goto protocol_err; 1894 + 1895 + /* read preboot version */ 1896 + hl_fw_dynamic_read_device_fw_version(hdev, FW_COMP_PREBOOT, 1897 + fw_loader->dynamic_loader.comm_desc.cur_fw_ver); 1898 + return 0; 1899 + } 1900 + 1901 + /* load boot fit to FW */ 1902 + rc = hl_fw_dynamic_load_image(hdev, fw_loader, FW_COMP_BOOT_FIT, 1903 + fw_loader->boot_fit_timeout); 1904 + if (rc) { 1905 + dev_err(hdev->dev, "failed to load boot fit\n"); 1906 + 
goto protocol_err; 1907 + } 1908 + 1909 + rc = hl_fw_dynamic_wait_for_boot_fit_active(hdev, fw_loader); 1910 + if (rc) 1911 + goto protocol_err; 1912 + 1913 + /* Enable DRAM scrambling before Linux boot and after successful 1914 + * UBoot 1915 + */ 1916 + hdev->asic_funcs->init_cpu_scrambler_dram(hdev); 1917 + 1918 + if (!(hdev->fw_components & FW_TYPE_LINUX)) { 1919 + dev_info(hdev->dev, "Skip loading Linux F/W\n"); 1920 + return 0; 1921 + } 1922 + 1923 + if (fw_loader->skip_bmc) { 1924 + rc = hl_fw_dynamic_send_protocol_cmd(hdev, fw_loader, 1925 + COMMS_SKIP_BMC, 0, 1926 + true, 1927 + fw_loader->cpu_timeout); 1928 + if (rc) { 1929 + dev_err(hdev->dev, "failed to load boot fit\n"); 1930 + goto protocol_err; 1931 + } 1932 + } 1933 + 1934 + /* load Linux image to FW */ 1935 + rc = hl_fw_dynamic_load_image(hdev, fw_loader, FW_COMP_LINUX, 1936 + fw_loader->cpu_timeout); 1937 + if (rc) { 1938 + dev_err(hdev->dev, "failed to load Linux\n"); 1939 + goto protocol_err; 1940 + } 1941 + 1942 + rc = hl_fw_dynamic_wait_for_linux_active(hdev, fw_loader); 1943 + if (rc) 1944 + goto protocol_err; 1945 + 1946 + hl_fw_linux_update_state(hdev, le32_to_cpu(dyn_regs->cpu_boot_dev_sts0), 1947 + le32_to_cpu(dyn_regs->cpu_boot_dev_sts1)); 1948 + 1949 + hl_fw_dynamic_update_linux_interrupt_if(hdev); 1950 + 1951 + return 0; 1952 + 1953 + protocol_err: 1954 + fw_read_errors(hdev, le32_to_cpu(dyn_regs->cpu_boot_err0), 1955 + le32_to_cpu(dyn_regs->cpu_boot_err1), 1956 + le32_to_cpu(dyn_regs->cpu_boot_dev_sts0), 1957 + le32_to_cpu(dyn_regs->cpu_boot_dev_sts1)); 1958 + return rc; 1959 + } 1960 + 1961 + /** 1962 + * hl_fw_static_init_cpu - initialize the device CPU using static protocol 1963 + * 1964 + * @hdev: pointer to the habanalabs device structure 1965 + * @fw_loader: managing structure for loading device's FW 1966 + * 1967 + * @return 0 on success, otherwise non-zero error code 1968 + */ 1969 + static int hl_fw_static_init_cpu(struct hl_device *hdev, 1970 + struct fw_load_mgr *fw_loader) 
1971 + { 1972 + u32 cpu_msg_status_reg, cpu_timeout, msg_to_cpu_reg, status; 1973 + u32 cpu_boot_dev_status0_reg, cpu_boot_dev_status1_reg; 1974 + struct static_fw_load_mgr *static_loader; 1975 + u32 cpu_boot_status_reg; 1280 1976 int rc; 1281 1977 1282 1978 if (!(hdev->fw_components & FW_TYPE_BOOT_CPU)) 1283 1979 return 0; 1980 + 1981 + /* init common loader parameters */ 1982 + cpu_timeout = fw_loader->cpu_timeout; 1983 + 1984 + /* init static loader parameters */ 1985 + static_loader = &fw_loader->static_loader; 1986 + cpu_msg_status_reg = static_loader->cpu_cmd_status_to_host_reg; 1987 + msg_to_cpu_reg = static_loader->kmd_msg_to_cpu_reg; 1988 + cpu_boot_dev_status0_reg = static_loader->cpu_boot_dev_status0_reg; 1989 + cpu_boot_dev_status1_reg = static_loader->cpu_boot_dev_status1_reg; 1990 + cpu_boot_status_reg = static_loader->cpu_boot_status_reg; 1284 1991 1285 1992 dev_info(hdev->dev, "Going to wait for device boot (up to %lds)\n", 1286 1993 cpu_timeout / USEC_PER_SEC); ··· 2364 925 cpu_boot_status_reg, 2365 926 status, 2366 927 status == CPU_BOOT_STATUS_WAITING_FOR_BOOT_FIT, 2367 - 10000, 2368 - boot_fit_timeout); 928 + FW_CPU_STATUS_POLL_INTERVAL_USEC, 929 + fw_loader->boot_fit_timeout); 2369 930 2370 931 if (rc) { 2371 932 dev_dbg(hdev->dev, ··· 2387 948 cpu_msg_status_reg, 2388 949 status, 2389 950 status == CPU_MSG_OK, 2390 - 10000, 2391 - boot_fit_timeout); 951 + FW_CPU_STATUS_POLL_INTERVAL_USEC, 952 + fw_loader->boot_fit_timeout); 2392 953 2393 954 if (rc) { 2394 955 dev_err(hdev->dev, ··· 2409 970 (status == CPU_BOOT_STATUS_NIC_FW_RDY) || 2410 971 (status == CPU_BOOT_STATUS_READY_TO_BOOT) || 2411 972 (status == CPU_BOOT_STATUS_SRAM_AVAIL), 2412 - 10000, 973 + FW_CPU_STATUS_POLL_INTERVAL_USEC, 2413 974 cpu_timeout); 2414 975 2415 976 dev_dbg(hdev->dev, "uboot status = %d\n", status); 2416 977 2417 978 /* Read U-Boot version now in case we will later fail */ 2418 - hdev->asic_funcs->read_device_fw_version(hdev, FW_COMP_UBOOT); 979 + 
hl_fw_static_read_device_fw_version(hdev, FW_COMP_BOOT_FIT); 2419 980 2420 - /* Clear reset status since we need to read it again from boot CPU */ 2421 - prop->hard_reset_done_by_fw = false; 2422 - 2423 - /* Read boot_cpu security bits */ 2424 - if (prop->fw_security_status_valid) { 2425 - prop->fw_boot_cpu_security_map = 2426 - RREG32(cpu_security_boot_status_reg); 2427 - 2428 - if (prop->fw_boot_cpu_security_map & 2429 - CPU_BOOT_DEV_STS0_FW_HARD_RST_EN) 2430 - prop->hard_reset_done_by_fw = true; 2431 - 2432 - dev_dbg(hdev->dev, 2433 - "Firmware boot CPU security status %#x\n", 2434 - prop->fw_boot_cpu_security_map); 2435 - } 2436 - 2437 - dev_dbg(hdev->dev, "Firmware boot CPU hard-reset is %s\n", 2438 - prop->hard_reset_done_by_fw ? "enabled" : "disabled"); 981 + /* update state according to boot stage */ 982 + hl_fw_boot_fit_update_state(hdev, cpu_boot_dev_status0_reg, 983 + cpu_boot_dev_status1_reg); 2439 984 2440 985 if (rc) { 2441 986 detect_cpu_boot_status(hdev, status); ··· 2427 1004 goto out; 2428 1005 } 2429 1006 1007 + /* Enable DRAM scrambling before Linux boot and after successful 1008 + * UBoot 1009 + */ 1010 + hdev->asic_funcs->init_cpu_scrambler_dram(hdev); 1011 + 2430 1012 if (!(hdev->fw_components & FW_TYPE_LINUX)) { 2431 1013 dev_info(hdev->dev, "Skip loading Linux F/W\n"); 1014 + rc = 0; 2432 1015 goto out; 2433 1016 } 2434 1017 2435 - if (status == CPU_BOOT_STATUS_SRAM_AVAIL) 1018 + if (status == CPU_BOOT_STATUS_SRAM_AVAIL) { 1019 + rc = 0; 2436 1020 goto out; 1021 + } 2437 1022 2438 1023 dev_info(hdev->dev, 2439 1024 "Loading firmware to device, may take some time...\n"); ··· 2450 1019 if (rc) 2451 1020 goto out; 2452 1021 2453 - if (skip_bmc) { 1022 + if (fw_loader->skip_bmc) { 2454 1023 WREG32(msg_to_cpu_reg, KMD_MSG_SKIP_BMC); 2455 1024 2456 1025 rc = hl_poll_timeout( ··· 2458 1027 cpu_boot_status_reg, 2459 1028 status, 2460 1029 (status == CPU_BOOT_STATUS_BMC_WAITING_SKIPPED), 2461 - 10000, 1030 + FW_CPU_STATUS_POLL_INTERVAL_USEC, 2462 
1031 cpu_timeout); 2463 1032 2464 1033 if (rc) { ··· 2478 1047 cpu_boot_status_reg, 2479 1048 status, 2480 1049 (status == CPU_BOOT_STATUS_SRAM_AVAIL), 2481 - 10000, 1050 + FW_CPU_STATUS_POLL_INTERVAL_USEC, 2482 1051 cpu_timeout); 2483 1052 2484 1053 /* Clear message */ ··· 2497 1066 goto out; 2498 1067 } 2499 1068 2500 - rc = fw_read_errors(hdev, boot_err0_reg, cpu_security_boot_status_reg); 1069 + rc = fw_read_errors(hdev, fw_loader->static_loader.boot_err0_reg, 1070 + fw_loader->static_loader.boot_err1_reg, 1071 + cpu_boot_dev_status0_reg, 1072 + cpu_boot_dev_status1_reg); 2501 1073 if (rc) 2502 1074 return rc; 2503 1075 2504 - /* Clear reset status since we need to read again from app */ 2505 - prop->hard_reset_done_by_fw = false; 2506 - 2507 - /* Read FW application security bits */ 2508 - if (prop->fw_security_status_valid) { 2509 - prop->fw_app_security_map = 2510 - RREG32(cpu_security_boot_status_reg); 2511 - 2512 - if (prop->fw_app_security_map & 2513 - CPU_BOOT_DEV_STS0_FW_HARD_RST_EN) 2514 - prop->hard_reset_done_by_fw = true; 2515 - 2516 - dev_dbg(hdev->dev, 2517 - "Firmware application CPU security status %#x\n", 2518 - prop->fw_app_security_map); 2519 - } 2520 - 2521 - dev_dbg(hdev->dev, "Firmware application CPU hard-reset is %s\n", 2522 - prop->hard_reset_done_by_fw ? 
"enabled" : "disabled"); 2523 - 2524 - dev_info(hdev->dev, "Successfully loaded firmware to device\n"); 1076 + hl_fw_linux_update_state(hdev, cpu_boot_dev_status0_reg, 1077 + cpu_boot_dev_status1_reg); 2525 1078 2526 1079 return 0; 2527 1080 2528 1081 out: 2529 - fw_read_errors(hdev, boot_err0_reg, cpu_security_boot_status_reg); 1082 + fw_read_errors(hdev, fw_loader->static_loader.boot_err0_reg, 1083 + fw_loader->static_loader.boot_err1_reg, 1084 + cpu_boot_dev_status0_reg, 1085 + cpu_boot_dev_status1_reg); 2530 1086 2531 1087 return rc; 1088 + } 1089 + 1090 + /** 1091 + * hl_fw_init_cpu - initialize the device CPU 1092 + * 1093 + * @hdev: pointer to the habanalabs device structure 1094 + * 1095 + * @return 0 on success, otherwise non-zero error code 1096 + * 1097 + * perform necessary initializations for device's CPU. takes into account if 1098 + * init protocol is static or dynamic. 1099 + */ 1100 + int hl_fw_init_cpu(struct hl_device *hdev) 1101 + { 1102 + struct asic_fixed_properties *prop = &hdev->asic_prop; 1103 + struct fw_load_mgr *fw_loader = &hdev->fw_loader; 1104 + 1105 + return prop->dynamic_fw_load ? 1106 + hl_fw_dynamic_init_cpu(hdev, fw_loader) : 1107 + hl_fw_static_init_cpu(hdev, fw_loader); 2532 1108 }
drivers/misc/habanalabs/common/habanalabs.h  (+245 -35)
··· 48 48 #define HL_PENDING_RESET_LONG_SEC 60 49 49 50 50 #define HL_HARD_RESET_MAX_TIMEOUT 120 51 + #define HL_PLDM_HARD_RESET_MAX_TIMEOUT (HL_HARD_RESET_MAX_TIMEOUT * 3) 51 52 52 53 #define HL_DEVICE_TIMEOUT_USEC 1000000 /* 1 s */ 53 54 ··· 116 115 * 117 116 * - HL_RESET_HEARTBEAT 118 117 * Set if reset is due to heartbeat 118 + * 119 + * - HL_RESET_TDR 120 + * Set if reset is due to TDR 121 + * 122 + * - HL_RESET_DEVICE_RELEASE 123 + * Set if reset is due to device release 119 124 */ 120 125 #define HL_RESET_HARD (1 << 0) 121 126 #define HL_RESET_FROM_RESET_THREAD (1 << 1) 122 127 #define HL_RESET_HEARTBEAT (1 << 2) 128 + #define HL_RESET_TDR (1 << 3) 129 + #define HL_RESET_DEVICE_RELEASE (1 << 4) 123 130 124 131 #define HL_MAX_SOBS_PER_MONITOR 8 125 132 ··· 187 178 188 179 /** 189 180 * enum hl_fw_component - F/W components to read version through registers. 190 - * @FW_COMP_UBOOT: u-boot. 181 + * @FW_COMP_BOOT_FIT: boot fit. 191 182 * @FW_COMP_PREBOOT: preboot. 183 + * @FW_COMP_LINUX: linux. 192 184 */ 193 185 enum hl_fw_component { 194 - FW_COMP_UBOOT, 195 - FW_COMP_PREBOOT 186 + FW_COMP_BOOT_FIT, 187 + FW_COMP_PREBOOT, 188 + FW_COMP_LINUX, 196 189 }; 197 190 198 191 /** ··· 431 420 * @cb_pool_cb_size: size of each CB in the CB pool. 
432 421 * @max_pending_cs: maximum of concurrent pending command submissions 433 422 * @max_queues: maximum amount of queues in the system 434 - * @fw_boot_cpu_security_map: bitmap representation of boot cpu security status 435 - * reported by FW, bit description can be found in 436 - * CPU_BOOT_DEV_STS* 437 - * @fw_app_security_map: bitmap representation of application security status 438 - * reported by FW, bit description can be found in 439 - * CPU_BOOT_DEV_STS* 423 + * @fw_preboot_cpu_boot_dev_sts0: bitmap representation of preboot cpu 424 + * capabilities reported by FW, bit description 425 + * can be found in CPU_BOOT_DEV_STS0 426 + * @fw_preboot_cpu_boot_dev_sts1: bitmap representation of preboot cpu 427 + * capabilities reported by FW, bit description 428 + * can be found in CPU_BOOT_DEV_STS1 429 + * @fw_bootfit_cpu_boot_dev_sts0: bitmap representation of boot cpu security 430 + * status reported by FW, bit description can be 431 + * found in CPU_BOOT_DEV_STS0 432 + * @fw_bootfit_cpu_boot_dev_sts1: bitmap representation of boot cpu security 433 + * status reported by FW, bit description can be 434 + * found in CPU_BOOT_DEV_STS1 435 + * @fw_app_cpu_boot_dev_sts0: bitmap representation of application security 436 + * status reported by FW, bit description can be 437 + * found in CPU_BOOT_DEV_STS0 438 + * @fw_app_cpu_boot_dev_sts1: bitmap representation of application security 439 + * status reported by FW, bit description can be 440 + * found in CPU_BOOT_DEV_STS1 440 441 * @collective_first_sob: first sync object available for collective use 441 442 * @collective_first_mon: first monitor available for collective use 442 443 * @sync_stream_first_sob: first sync object available for sync stream use ··· 461 438 * @user_interrupt_count: number of user interrupts. 462 439 * @tpc_enabled_mask: which TPCs are enabled. 463 440 * @completion_queues_count: number of completion queues. 
464 - * @fw_security_disabled: true if security measures are disabled in firmware, 465 - * false otherwise 466 - * @fw_security_status_valid: security status bits are valid and can be fetched 467 - * from BOOT_DEV_STS0 441 + * @fw_security_enabled: true if security measures are enabled in firmware, 442 + * false otherwise 443 + * @fw_cpu_boot_dev_sts0_valid: status bits are valid and can be fetched from 444 + * BOOT_DEV_STS0 445 + * @fw_cpu_boot_dev_sts1_valid: status bits are valid and can be fetched from 446 + * BOOT_DEV_STS1 468 447 * @dram_supports_virtual_memory: is there an MMU towards the DRAM 469 448 * @hard_reset_done_by_fw: true if firmware is handling hard reset flow 470 449 * @num_functional_hbms: number of functional HBMs in each DCORE. 471 450 * @iatu_done_by_fw: true if iATU configuration is being done by FW. 451 + * @dynamic_fw_load: is dynamic FW load supported. 452 + * @gic_interrupts_enable: true if FW is not blocking GIC controller, 453 + * false otherwise. 472 454 */ 473 455 struct asic_fixed_properties { 474 456 struct hw_queue_properties *hw_queues_props; ··· 519 491 u32 cb_pool_cb_size; 520 492 u32 max_pending_cs; 521 493 u32 max_queues; 522 - u32 fw_boot_cpu_security_map; 523 - u32 fw_app_security_map; 494 + u32 fw_preboot_cpu_boot_dev_sts0; 495 + u32 fw_preboot_cpu_boot_dev_sts1; 496 + u32 fw_bootfit_cpu_boot_dev_sts0; 497 + u32 fw_bootfit_cpu_boot_dev_sts1; 498 + u32 fw_app_cpu_boot_dev_sts0; 499 + u32 fw_app_cpu_boot_dev_sts1; 524 500 u16 collective_first_sob; 525 501 u16 collective_first_mon; 526 502 u16 sync_stream_first_sob; ··· 536 504 u16 user_interrupt_count; 537 505 u8 tpc_enabled_mask; 538 506 u8 completion_queues_count; 539 - u8 fw_security_disabled; 540 - u8 fw_security_status_valid; 507 + u8 fw_security_enabled; 508 + u8 fw_cpu_boot_dev_sts0_valid; 509 + u8 fw_cpu_boot_dev_sts1_valid; 541 510 u8 dram_supports_virtual_memory; 542 511 u8 hard_reset_done_by_fw; 543 512 u8 num_functional_hbms; 544 513 u8 iatu_done_by_fw; 514 + 
u8 dynamic_fw_load; 515 + u8 gic_interrupts_enable; 545 516 }; 546 517 547 518 /** ··· 785 750 * @kernel_address: holds the queue's kernel virtual address 786 751 * @bus_address: holds the queue's DMA address 787 752 * @ci: ci inside the queue 753 + * @prev_eqe_index: the index of the previous event queue entry. The index of 754 + * the current entry must be greater by 1 than the previous one. 755 + * @check_eqe_index: do we need to check the index of the current entry vs. the 756 + * previous one. This is for backward compatibility with older 757 + * firmware versions 788 758 */ 789 759 struct hl_eq { 790 760 struct hl_device *hdev; 791 761 void *kernel_address; 792 762 dma_addr_t bus_address; 793 763 u32 ci; 764 + u32 prev_eqe_index; 765 + bool check_eqe_index; 794 766 }; 795 767 796 768 ··· 852 810 DIV_SEL_PLL_CLK = 1, 853 811 DIV_SEL_DIVIDED_REF = 2, 854 812 DIV_SEL_DIVIDED_PLL = 3, 813 + }; 814 + 815 + enum pci_region { 816 + PCI_REGION_CFG, 817 + PCI_REGION_SRAM, 818 + PCI_REGION_DRAM, 819 + PCI_REGION_SP_SRAM, 820 + PCI_REGION_NUMBER, 821 + }; 822 + 823 + /** 824 + * struct pci_mem_region - describe memory region in a PCI bar 825 + * @region_base: region base address 826 + * @region_size: region size 827 + * @bar_size: size of the BAR 828 + * @offset_in_bar: region offset into the bar 829 + * @bar_id: bar ID of the region 830 + * @used: 1 if used, 0 otherwise 831 + */ 832 + struct pci_mem_region { 833 + u64 region_base; 834 + u64 region_size; 835 + u64 bar_size; 836 + u32 offset_in_bar; 837 + u8 bar_id; 838 + u8 used; 839 + }; 840 + 841 + /** 842 + * struct static_fw_load_mgr - static FW load manager 843 + * @preboot_version_max_off: max offset to preboot version 844 + * @boot_fit_version_max_off: max offset to boot fit version 845 + * @kmd_msg_to_cpu_reg: register address for KMD->CPU messages 846 + * @cpu_cmd_status_to_host_reg: register address for CPU command status response 847 + * @cpu_boot_status_reg: boot status register 848 + * @cpu_boot_dev_status0_reg: boot 
device status register 0 849 + * @cpu_boot_dev_status1_reg: boot device status register 1 850 + * @boot_err0_reg: boot error register 0 851 + * @boot_err1_reg: boot error register 1 852 + * @preboot_version_offset_reg: SRAM offset to preboot version register 853 + * @boot_fit_version_offset_reg: SRAM offset to boot fit version register 854 + * @sram_offset_mask: mask for getting offset into the SRAM 855 + * @cpu_reset_wait_msec: used when setting WFE via kmd_msg_to_cpu_reg 856 + */ 857 + struct static_fw_load_mgr { 858 + u64 preboot_version_max_off; 859 + u64 boot_fit_version_max_off; 860 + u32 kmd_msg_to_cpu_reg; 861 + u32 cpu_cmd_status_to_host_reg; 862 + u32 cpu_boot_status_reg; 863 + u32 cpu_boot_dev_status0_reg; 864 + u32 cpu_boot_dev_status1_reg; 865 + u32 boot_err0_reg; 866 + u32 boot_err1_reg; 867 + u32 preboot_version_offset_reg; 868 + u32 boot_fit_version_offset_reg; 869 + u32 sram_offset_mask; 870 + u32 cpu_reset_wait_msec; 871 + }; 872 + 873 + /** 874 + * struct fw_response - FW response to LKD command 875 + * @ram_offset: descriptor offset into the RAM 876 + * @ram_type: RAM type containing the descriptor (SRAM/DRAM) 877 + * @status: command status 878 + */ 879 + struct fw_response { 880 + u32 ram_offset; 881 + u8 ram_type; 882 + u8 status; 883 + }; 884 + 885 + /** 886 + * struct dynamic_fw_load_mgr - dynamic FW load manager 887 + * @response: FW to LKD response 888 + * @comm_desc: the communication descriptor with FW 889 + * @image_region: region to copy the FW image to 890 + * @fw_image_size: size of FW image to load 891 + * @wait_for_bl_timeout: timeout for waiting for boot loader to respond 892 + */ 893 + struct dynamic_fw_load_mgr { 894 + struct fw_response response; 895 + struct lkd_fw_comms_desc comm_desc; 896 + struct pci_mem_region *image_region; 897 + size_t fw_image_size; 898 + u32 wait_for_bl_timeout; 899 + }; 900 + 901 + /** 902 + * struct fw_image_props - properties of FW image 903 + * @image_name: name of the image 904 + * @src_off: 
offset in src FW to copy from 905 + * @copy_size: number of bytes to copy (0 to copy the whole binary) 906 + */ 907 + struct fw_image_props { 908 + char *image_name; 909 + u32 src_off; 910 + u32 copy_size; 911 + }; 912 + 913 + /** 914 + * struct fw_load_mgr - manages the FW loading process 915 + * @dynamic_loader: specific structure for dynamic load 916 + * @static_loader: specific structure for static load 917 + * @boot_fit_img: boot fit image properties 918 + * @linux_img: linux image properties 919 + * @cpu_timeout: CPU response timeout in usec 920 + * @boot_fit_timeout: Boot fit load timeout in usec 921 + * @skip_bmc: should BMC be skipped 922 + * @sram_bar_id: SRAM bar ID 923 + * @dram_bar_id: DRAM bar ID 924 + * @linux_loaded: true if linux was loaded so far 925 + */ 926 + struct fw_load_mgr { 927 + union { 928 + struct dynamic_fw_load_mgr dynamic_loader; 929 + struct static_fw_load_mgr static_loader; 930 + }; 931 + struct fw_image_props boot_fit_img; 932 + struct fw_image_props linux_img; 933 + u32 cpu_timeout; 934 + u32 boot_fit_timeout; 935 + u8 skip_bmc; 936 + u8 sram_bar_id; 937 + u8 dram_bar_id; 938 + u8 linux_loaded; 855 939 }; 856 940 857 941 /** ··· 1069 901 * @ctx_fini: context dependent cleanup. 1070 902 * @get_clk_rate: Retrieve the ASIC current and maximum clock rate in MHz 1071 903 * @get_queue_id_for_cq: Get the H/W queue id related to the given CQ index. 1072 - * @read_device_fw_version: read the device's firmware versions that are 1073 - * contained in registers 1074 904 * @load_firmware_to_device: load the firmware to the device's memory 1075 905 * @load_boot_fit_to_device: load boot fit to device's memory 1076 906 * @get_signal_cb_size: Get signal CB size. ··· 1099 933 * @get_msi_info: Retrieve asic-specific MSI ID of the f/w async event 1100 934 * @map_pll_idx_to_fw_idx: convert driver specific per asic PLL index to 1101 935 * generic f/w compatible PLL Indexes 936 + * @init_firmware_loader: initialize data for FW loader. 
937 + * @init_cpu_scrambler_dram: Enable CPU specific DRAM scrambling 1102 938 */ 1103 939 struct hl_asic_funcs { 1104 940 int (*early_init)(struct hl_device *hdev); ··· 1174 1006 int (*mmu_invalidate_cache)(struct hl_device *hdev, bool is_hard, 1175 1007 u32 flags); 1176 1008 int (*mmu_invalidate_cache_range)(struct hl_device *hdev, bool is_hard, 1177 - u32 asid, u64 va, u64 size); 1009 + u32 flags, u32 asid, u64 va, u64 size); 1178 1010 int (*send_heartbeat)(struct hl_device *hdev); 1179 1011 void (*set_clock_gating)(struct hl_device *hdev); 1180 1012 void (*disable_clock_gating)(struct hl_device *hdev); ··· 1198 1030 void (*ctx_fini)(struct hl_ctx *ctx); 1199 1031 int (*get_clk_rate)(struct hl_device *hdev, u32 *cur_clk, u32 *max_clk); 1200 1032 u32 (*get_queue_id_for_cq)(struct hl_device *hdev, u32 cq_idx); 1201 - int (*read_device_fw_version)(struct hl_device *hdev, 1202 - enum hl_fw_component fwc); 1203 1033 int (*load_firmware_to_device)(struct hl_device *hdev); 1204 1034 int (*load_boot_fit_to_device)(struct hl_device *hdev); 1205 1035 u32 (*get_signal_cb_size)(struct hl_device *hdev); ··· 1222 1056 int (*hw_block_mmap)(struct hl_device *hdev, struct vm_area_struct *vma, 1223 1057 u32 block_id, u32 block_size); 1224 1058 void (*enable_events_from_fw)(struct hl_device *hdev); 1225 - void (*get_msi_info)(u32 *table); 1059 + void (*get_msi_info)(__le32 *table); 1226 1060 int (*map_pll_idx_to_fw_idx)(u32 pll_idx); 1061 + void (*init_firmware_loader)(struct hl_device *hdev); 1062 + void (*init_cpu_scrambler_dram)(struct hl_device *hdev); 1227 1063 }; 1228 1064 1229 1065 ··· 1430 1262 * @staged_sequence: the sequence of the staged submission this CS is part of, 1431 1263 * relevant only if staged_cs is set. 1432 1264 * @timeout_jiffies: cs timeout in jiffies. 1265 + * @submission_time_jiffies: submission time of the cs 1433 1266 * @type: CS_TYPE_*. 1434 1267 * @submitted: true if CS was submitted to H/W. 1435 1268 * @completed: true if CS was completed by device. 
··· 1443 1274 * @staged_first: true if this is the first staged CS and we need to receive 1444 1275 * timeout for this CS. 1445 1276 * @staged_cs: true if this CS is part of a staged submission. 1277 + * @skip_reset_on_timeout: true if we shall not reset the device in case 1278 + * timeout occurs (debug scenario). 1446 1279 */ 1447 1280 struct hl_cs { 1448 1281 u16 *jobs_in_queue_cnt; ··· 1462 1291 u64 sequence; 1463 1292 u64 staged_sequence; 1464 1293 u64 timeout_jiffies; 1294 + u64 submission_time_jiffies; 1465 1295 enum hl_cs_type type; 1466 1296 u8 submitted; 1467 1297 u8 completed; ··· 1473 1301 u8 staged_last; 1474 1302 u8 staged_first; 1475 1303 u8 staged_cs; 1304 + u8 skip_reset_on_timeout; 1476 1305 }; 1477 1306 1478 1307 /** ··· 2095 1922 * @kernel_queues: array of hl_hw_queue. 2096 1923 * @cs_mirror_list: CS mirror list for TDR. 2097 1924 * @cs_mirror_lock: protects cs_mirror_list. 2098 - * @kernel_cb_mgr: command buffer manager for creating/destroying/handling CGs. 1925 + * @kernel_cb_mgr: command buffer manager for creating/destroying/handling CBs. 2099 1926 * @event_queue: event queue for IRQ from CPU-CP. 2100 1927 * @dma_pool: DMA pool for small allocations. 2101 1928 * @cpu_accessible_dma_mem: Host <-> CPU-CP shared memory CPU address. ··· 2127 1954 * @aggregated_cs_counters: aggregated cs counters among all contexts 2128 1955 * @mmu_priv: device-specific MMU data. 2129 1956 * @mmu_func: device-related MMU functions. 1957 + * @fw_loader: FW loader manager. 1958 + * @pci_mem_region: array of memory regions in the PCI 2130 1959 * @dram_used_mem: current DRAM memory consumption. 2131 1960 * @timeout_jiffies: device CS timeout value. 2132 1961 * @max_power: the max power of the device, as configured by the sysadmin. This ··· 2143 1968 * the error will be ignored by the driver during 2144 1969 * device initialization. 
Mainly used to debug and 2145 1970 * workaround firmware bugs 1971 + * @last_successful_open_jif: timestamp (jiffies) of the last successful 1972 + * device open. 1973 + * @last_open_session_duration_jif: duration (jiffies) of the last device open 1974 + * session. 1975 + * @open_counter: number of successful device open operations. 2146 1976 * @in_reset: is device in reset flow. 2147 1977 * @curr_pll_profile: current PLL profile. 2148 1978 * @card_type: Various ASICs have several card types. This indicates the card ··· 2187 2007 * @collective_mon_idx: helper index for collective initialization 2188 2008 * @supports_coresight: is CoreSight supported. 2189 2009 * @supports_soft_reset: is soft reset supported. 2010 + * @allow_external_soft_reset: true if soft reset initiated by user or TDR is 2011 + * allowed. 2190 2012 * @supports_cb_mapping: is mapping a CB to the device's MMU supported. 2191 2013 * @needs_reset: true if reset_on_lockup is false and device should be reset 2192 2014 * due to lockup. ··· 2197 2015 * @device_fini_pending: true if device_fini was called and might be 2198 2016 * waiting for the reset thread to finish 2199 2017 * @supports_staged_submission: true if staged submissions are supported 2018 + * @curr_reset_cause: saves an enumerated reset cause when a hard reset is 2019 + * triggered, and cleared after it is shared with preboot. 2020 + * @skip_reset_on_timeout: Skip device reset if CS has timed out, wait for it to 2021 + * complete instead. 2022 + * @device_cpu_is_halted: Flag to indicate whether the device CPU was already 2023 + * halted. We can't halt it again because the COMMS 2024 + * protocol will throw an error. 
Relevant only for 2025 + * cases where Linux was not loaded to device CPU 2200 2026 */ 2201 2027 struct hl_device { 2202 2028 struct pci_dev *pdev; ··· 2269 2079 struct hl_mmu_priv mmu_priv; 2270 2080 struct hl_mmu_funcs mmu_func[MMU_NUM_PGT_LOCATIONS]; 2271 2081 2082 + struct fw_load_mgr fw_loader; 2083 + 2084 + struct pci_mem_region pci_mem_region[PCI_REGION_NUMBER]; 2085 + 2272 2086 atomic64_t dram_used_mem; 2273 2087 u64 timeout_jiffies; 2274 2088 u64 max_power; 2275 2089 u64 clock_gating_mask; 2276 2090 u64 boot_error_status_mask; 2091 + u64 last_successful_open_jif; 2092 + u64 last_open_session_duration_jif; 2093 + u64 open_counter; 2277 2094 atomic_t in_reset; 2278 2095 enum hl_pll_frequency curr_pll_profile; 2279 2096 enum cpucp_card_types card_type; ··· 2313 2116 u8 collective_mon_idx; 2314 2117 u8 supports_coresight; 2315 2118 u8 supports_soft_reset; 2119 + u8 allow_external_soft_reset; 2316 2120 u8 supports_cb_mapping; 2317 2121 u8 needs_reset; 2318 2122 u8 process_kill_trial_cnt; 2319 2123 u8 device_fini_pending; 2320 2124 u8 supports_staged_submission; 2125 + u8 curr_reset_cause; 2126 + u8 skip_reset_on_timeout; 2127 + u8 device_cpu_is_halted; 2321 2128 2322 2129 /* Parameters for bring-up */ 2323 2130 u64 nic_ports_mask; ··· 2339 2138 u8 rl_enable; 2340 2139 u8 reset_on_preboot_fail; 2341 2140 u8 reset_upon_device_release; 2141 + u8 reset_if_device_not_idle; 2342 2142 }; 2343 2143 2344 2144 ··· 2586 2384 void *vaddr); 2587 2385 int hl_fw_send_heartbeat(struct hl_device *hdev); 2588 2386 int hl_fw_cpucp_info_get(struct hl_device *hdev, 2589 - u32 cpu_security_boot_status_reg, 2590 - u32 boot_err0_reg); 2387 + u32 sts_boot_dev_sts0_reg, 2388 + u32 sts_boot_dev_sts1_reg, u32 boot_err0_reg, 2389 + u32 boot_err1_reg); 2591 2390 int hl_fw_cpucp_handshake(struct hl_device *hdev, 2592 - u32 cpu_security_boot_status_reg, 2593 - u32 boot_err0_reg); 2391 + u32 sts_boot_dev_sts0_reg, 2392 + u32 sts_boot_dev_sts1_reg, u32 boot_err0_reg, 2393 + u32 boot_err1_reg); 
2594 2394 int hl_fw_get_eeprom_data(struct hl_device *hdev, void *data, size_t max_size); 2595 2395 int hl_fw_cpucp_pci_counters_get(struct hl_device *hdev, 2596 2396 struct hl_info_pci_counters *counters); ··· 2603 2399 int hl_fw_cpucp_pll_info_get(struct hl_device *hdev, u32 pll_index, 2604 2400 u16 *pll_freq_arr); 2605 2401 int hl_fw_cpucp_power_get(struct hl_device *hdev, u64 *power); 2606 - int hl_fw_init_cpu(struct hl_device *hdev, u32 cpu_boot_status_reg, 2607 - u32 msg_to_cpu_reg, u32 cpu_msg_status_reg, 2608 - u32 cpu_security_boot_status_reg, u32 boot_err0_reg, 2609 - bool skip_bmc, u32 cpu_timeout, u32 boot_fit_timeout); 2402 + void hl_fw_ask_hard_reset_without_linux(struct hl_device *hdev); 2403 + void hl_fw_ask_halt_machine_without_linux(struct hl_device *hdev); 2404 + int hl_fw_init_cpu(struct hl_device *hdev); 2610 2405 int hl_fw_read_preboot_status(struct hl_device *hdev, u32 cpu_boot_status_reg, 2611 - u32 cpu_security_boot_status_reg, u32 boot_err0_reg, 2612 - u32 timeout); 2613 - 2406 + u32 sts_boot_dev_sts0_reg, 2407 + u32 sts_boot_dev_sts1_reg, u32 boot_err0_reg, 2408 + u32 boot_err1_reg, u32 timeout); 2409 + int hl_fw_dynamic_send_protocol_cmd(struct hl_device *hdev, 2410 + struct fw_load_mgr *fw_loader, 2411 + enum comms_cmd cmd, unsigned int size, 2412 + bool wait_ok, u32 timeout); 2614 2413 int hl_pci_bars_map(struct hl_device *hdev, const char * const name[3], 2615 2414 bool is_wc[3]); 2616 2415 int hl_pci_elbi_read(struct hl_device *hdev, u64 addr, u32 *data); ··· 2622 2415 struct hl_inbound_pci_region *pci_region); 2623 2416 int hl_pci_set_outbound_region(struct hl_device *hdev, 2624 2417 struct hl_outbound_pci_region *pci_region); 2418 + enum pci_region hl_get_pci_memory_region(struct hl_device *hdev, u64 addr); 2625 2419 int hl_pci_init(struct hl_device *hdev); 2626 2420 void hl_pci_fini(struct hl_device *hdev); 2627 2421 ··· 2651 2443 int hl_set_current(struct hl_device *hdev, 2652 2444 int sensor_index, u32 attr, long value); 2653 
2445 void hl_release_pending_user_interrupts(struct hl_device *hdev); 2446 + int hl_cs_signal_sob_wraparound_handler(struct hl_device *hdev, u32 q_idx, 2447 + struct hl_hw_sob **hw_sob, u32 count); 2654 2448 2655 2449 #ifdef CONFIG_DEBUG_FS 2656 2450
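The new reset-reason bits (HL_RESET_TDR, HL_RESET_DEVICE_RELEASE) are plain flag bits in one word. A minimal userspace sketch of how such flags combine and test; only the bit values come from the hunk above, while the policy helper and its rules are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>

/* Bit values copied from the habanalabs.h hunk above. */
#define HL_RESET_HARD              (1u << 0)
#define HL_RESET_FROM_RESET_THREAD (1u << 1)
#define HL_RESET_HEARTBEAT         (1u << 2)
#define HL_RESET_TDR               (1u << 3)
#define HL_RESET_DEVICE_RELEASE    (1u << 4)

/* Hypothetical policy helper (not from the driver): a lost heartbeat or an
 * explicit hard-reset request escalates to the hard flow, while a
 * device-release reset does not. */
static bool reset_needs_hard_flow(unsigned int flags)
{
	if (flags & HL_RESET_DEVICE_RELEASE)
		return false;
	return (flags & (HL_RESET_HARD | HL_RESET_HEARTBEAT)) != 0;
}
```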
+18 -6
drivers/misc/habanalabs/common/habanalabs_drv.c
··· 29 29 30 30 static int timeout_locked = 30; 31 31 static int reset_on_lockup = 1; 32 - static int memory_scrub = 1; 32 + static int memory_scrub; 33 33 static ulong boot_error_status_mask = ULONG_MAX; 34 34 35 35 module_param(timeout_locked, int, 0444); ··· 42 42 43 43 module_param(memory_scrub, int, 0444); 44 44 MODULE_PARM_DESC(memory_scrub, 45 - "Scrub device memory in various states (0 = no, 1 = yes, default yes)"); 45 + "Scrub device memory in various states (0 = no, 1 = yes, default no)"); 46 46 47 47 module_param(boot_error_status_mask, ulong, 0444); 48 48 MODULE_PARM_DESC(boot_error_status_mask, ··· 187 187 188 188 hl_debugfs_add_file(hpriv); 189 189 190 + hdev->open_counter++; 191 + hdev->last_successful_open_jif = jiffies; 192 + 190 193 return 0; 191 194 192 195 out_err: ··· 267 264 hdev->bmc_enable = 1; 268 265 hdev->hard_reset_on_fw_events = 1; 269 266 hdev->reset_on_preboot_fail = 1; 267 + hdev->reset_if_device_not_idle = 1; 270 268 271 269 hdev->reset_pcilink = 0; 272 270 hdev->axi_drain = 0; ··· 312 308 } 313 309 314 310 if (pdev) 315 - hdev->asic_prop.fw_security_disabled = 316 - !is_asic_secured(pdev->device); 311 + hdev->asic_prop.fw_security_enabled = 312 + is_asic_secured(hdev->asic_type); 317 313 else 318 - hdev->asic_prop.fw_security_disabled = true; 314 + hdev->asic_prop.fw_security_enabled = false; 319 315 320 316 /* Assign status description string */ 321 317 strncpy(hdev->status[HL_DEVICE_STATUS_MALFUNCTION], ··· 329 325 hdev->reset_on_lockup = reset_on_lockup; 330 326 hdev->memory_scrub = memory_scrub; 331 327 hdev->boot_error_status_mask = boot_error_status_mask; 328 + hdev->stop_on_err = true; 332 329 333 330 hdev->pldm = 0; 334 331 335 332 set_driver_behavior_per_device(hdev); 333 + 334 + hdev->curr_reset_cause = HL_RESET_CAUSE_UNKNOWN; 336 335 337 336 if (timeout_locked) 338 337 hdev->timeout_jiffies = msecs_to_jiffies(timeout_locked * 1000); ··· 471 464 return 0; 472 465 473 466 disable_device: 467 + 
pci_disable_pcie_error_reporting(pdev); 474 468 pci_set_drvdata(pdev, NULL); 475 469 destroy_hdev(hdev); 476 470 ··· 580 572 .probe = hl_pci_probe, 581 573 .remove = hl_pci_remove, 582 574 .shutdown = hl_pci_remove, 583 - .driver.pm = &hl_pm_ops, 575 + .driver = { 576 + .name = HL_NAME, 577 + .pm = &hl_pm_ops, 578 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 579 + }, 584 580 .err_handler = &hl_pci_err_handler, 585 581 }; 586 582
+22 -1
drivers/misc/habanalabs/common/habanalabs_ioctl.c
··· 95 95 hw_ip.first_available_interrupt_id = 96 96 prop->first_available_user_msix_interrupt; 97 97 return copy_to_user(out, &hw_ip, 98 - min((size_t)size, sizeof(hw_ip))) ? -EFAULT : 0; 98 + min((size_t) size, sizeof(hw_ip))) ? -EFAULT : 0; 99 99 } 100 100 101 101 static int hw_events_info(struct hl_device *hdev, bool aggregate, ··· 460 460 min((size_t) max_size, sizeof(power_info))) ? -EFAULT : 0; 461 461 } 462 462 463 + static int open_stats_info(struct hl_fpriv *hpriv, struct hl_info_args *args) 464 + { 465 + struct hl_device *hdev = hpriv->hdev; 466 + u32 max_size = args->return_size; 467 + struct hl_open_stats_info open_stats_info = {0}; 468 + void __user *out = (void __user *) (uintptr_t) args->return_pointer; 469 + 470 + if ((!max_size) || (!out)) 471 + return -EINVAL; 472 + 473 + open_stats_info.last_open_period_ms = jiffies64_to_msecs( 474 + hdev->last_open_session_duration_jif); 475 + open_stats_info.open_counter = hdev->open_counter; 476 + 477 + return copy_to_user(out, &open_stats_info, 478 + min((size_t) max_size, sizeof(open_stats_info))) ? -EFAULT : 0; 479 + } 480 + 463 481 static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data, 464 482 struct device *dev) 465 483 { ··· 560 542 561 543 case HL_INFO_POWER: 562 544 return power_info(hpriv, args); 545 + 546 + case HL_INFO_OPEN_STATS: 547 + return open_stats_info(hpriv, args); 563 548 564 549 default: 565 550 dev_err(dev, "Invalid request %d\n", args->op);
+18 -24
drivers/misc/habanalabs/common/hw_queue.c
··· 410 410 ext_and_hw_queue_submit_bd(hdev, q, ctl, len, ptr); 411 411 } 412 412 413 - static void init_signal_cs(struct hl_device *hdev, 413 + static int init_signal_cs(struct hl_device *hdev, 414 414 struct hl_cs_job *job, struct hl_cs_compl *cs_cmpl) 415 415 { 416 416 struct hl_sync_stream_properties *prop; 417 417 struct hl_hw_sob *hw_sob; 418 418 u32 q_idx; 419 + int rc = 0; 419 420 420 421 q_idx = job->hw_queue_id; 421 422 prop = &hdev->kernel_queues[q_idx].sync_stream_prop; 422 423 hw_sob = &prop->hw_sob[prop->curr_sob_offset]; 423 424 424 425 cs_cmpl->hw_sob = hw_sob; 425 - cs_cmpl->sob_val = prop->next_sob_val++; 426 + cs_cmpl->sob_val = prop->next_sob_val; 426 427 427 428 dev_dbg(hdev->dev, 428 429 "generate signal CB, sob_id: %d, sob val: 0x%x, q_idx: %d\n", ··· 435 434 hdev->asic_funcs->gen_signal_cb(hdev, job->patched_cb, 436 435 cs_cmpl->hw_sob->sob_id, 0, true); 437 436 438 - kref_get(&hw_sob->kref); 437 + rc = hl_cs_signal_sob_wraparound_handler(hdev, q_idx, &hw_sob, 1); 439 438 440 - /* check for wraparound */ 441 - if (prop->next_sob_val == HL_MAX_SOB_VAL) { 442 - /* 443 - * Decrement as we reached the max value. 444 - * The release function won't be called here as we've 445 - * just incremented the refcount. 
446 - */ 447 - kref_put(&hw_sob->kref, hl_sob_reset_error); 448 - prop->next_sob_val = 1; 449 - /* only two SOBs are currently in use */ 450 - prop->curr_sob_offset = 451 - (prop->curr_sob_offset + 1) % HL_RSVD_SOBS; 452 - 453 - dev_dbg(hdev->dev, "switched to SOB %d, q_idx: %d\n", 454 - prop->curr_sob_offset, q_idx); 455 - } 439 + return rc; 456 440 } 457 441 458 442 static void init_wait_cs(struct hl_device *hdev, struct hl_cs *cs, ··· 490 504 * 491 505 * H/W queues spinlock should be taken before calling this function 492 506 */ 493 - static void init_signal_wait_cs(struct hl_cs *cs) 507 + static int init_signal_wait_cs(struct hl_cs *cs) 494 508 { 495 509 struct hl_ctx *ctx = cs->ctx; 496 510 struct hl_device *hdev = ctx->hdev; 497 511 struct hl_cs_job *job; 498 512 struct hl_cs_compl *cs_cmpl = 499 513 container_of(cs->fence, struct hl_cs_compl, base_fence); 514 + int rc = 0; 500 515 501 516 /* There is only one job in a signal/wait CS */ 502 517 job = list_first_entry(&cs->job_list, struct hl_cs_job, 503 518 cs_node); 504 519 505 520 if (cs->type & CS_TYPE_SIGNAL) 506 - init_signal_cs(hdev, job, cs_cmpl); 521 + rc = init_signal_cs(hdev, job, cs_cmpl); 507 522 else if (cs->type & CS_TYPE_WAIT) 508 523 init_wait_cs(hdev, cs, job, cs_cmpl); 524 + 525 + return rc; 509 526 } 510 527 511 528 /* ··· 579 590 } 580 591 } 581 592 582 - if ((cs->type == CS_TYPE_SIGNAL) || (cs->type == CS_TYPE_WAIT)) 583 - init_signal_wait_cs(cs); 584 - else if (cs->type == CS_TYPE_COLLECTIVE_WAIT) 593 + if ((cs->type == CS_TYPE_SIGNAL) || (cs->type == CS_TYPE_WAIT)) { 594 + rc = init_signal_wait_cs(cs); 595 + if (rc) { 596 + dev_err(hdev->dev, "Failed to submit signal cs\n"); 597 + goto unroll_cq_resv; 598 + } 599 + } else if (cs->type == CS_TYPE_COLLECTIVE_WAIT) 585 600 hdev->asic_funcs->collective_wait_init_cs(cs); 601 + 586 602 587 603 spin_lock(&hdev->cs_mirror_lock); 588 604
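The refactor above replaces the open-coded wraparound block with a call to `hl_cs_signal_sob_wraparound_handler()`. A userspace sketch of the idea that helper centralizes; the constants are illustrative stand-ins for the driver's values, and the kref accounting from the original code is omitted:

```c
#include <assert.h>

/* Illustrative constants - the real HL_MAX_SOB_VAL / HL_RSVD_SOBS live in
 * the driver headers. */
#define MAX_SOB_VAL (1u << 15)
#define RSVD_SOBS   2u

struct sob_prop {
	unsigned int next_sob_val;    /* value the next signal brings the SOB to */
	unsigned int curr_sob_offset; /* which reserved SOB is currently active */
};

/* Hand out the next signal value; when the counter would saturate, restart
 * counting on the next reserved SOB (round-robin over the reserved set). */
static unsigned int sob_signal(struct sob_prop *p, unsigned int count)
{
	unsigned int val = p->next_sob_val;

	p->next_sob_val += count;
	if (p->next_sob_val >= MAX_SOB_VAL) {
		p->next_sob_val = count; /* fresh SOB starts counting again */
		p->curr_sob_offset = (p->curr_sob_offset + 1) % RSVD_SOBS;
	}
	return val;
}

/* Demo: signaling at the saturation point switches to the next SOB. */
static unsigned int demo_wrap_offset(void)
{
	struct sob_prop p = { MAX_SOB_VAL - 1, 0 };

	sob_signal(&p, 1);
	return p.curr_sob_offset;
}

/* Demo: far from saturation, the pre-increment value is handed out. */
static unsigned int demo_next_val(void)
{
	struct sob_prop p = { 1, 0 };

	return sob_signal(&p, 4);
}
```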
+21 -3
drivers/misc/habanalabs/common/irq.c
··· 207 207 struct hl_eq_entry *eq_entry; 208 208 struct hl_eq_entry *eq_base; 209 209 struct hl_eqe_work *handle_eqe_work; 210 + bool entry_ready; 211 + u32 cur_eqe; 212 + u16 cur_eqe_index; 210 213 211 214 eq_base = eq->kernel_address; 212 215 213 216 while (1) { 214 - bool entry_ready = 215 - ((le32_to_cpu(eq_base[eq->ci].hdr.ctl) & 216 - EQ_CTL_READY_MASK) >> EQ_CTL_READY_SHIFT); 217 + cur_eqe = le32_to_cpu(eq_base[eq->ci].hdr.ctl); 218 + entry_ready = !!FIELD_GET(EQ_CTL_READY_MASK, cur_eqe); 217 219 218 220 if (!entry_ready) 219 221 break; 222 + 223 + cur_eqe_index = FIELD_GET(EQ_CTL_INDEX_MASK, cur_eqe); 224 + if ((hdev->event_queue.check_eqe_index) && 225 + (((eq->prev_eqe_index + 1) & EQ_CTL_INDEX_MASK) 226 + != cur_eqe_index)) { 227 + dev_dbg(hdev->dev, 228 + "EQE 0x%x in queue is ready but index does not match %d!=%d", 229 + eq_base[eq->ci].hdr.ctl, 230 + ((eq->prev_eqe_index + 1) & EQ_CTL_INDEX_MASK), 231 + cur_eqe_index); 232 + break; 233 + } 234 + 235 + eq->prev_eqe_index++; 220 236 221 237 eq_entry = &eq_base[eq->ci]; 222 238 ··· 357 341 q->hdev = hdev; 358 342 q->kernel_address = p; 359 343 q->ci = 0; 344 + q->prev_eqe_index = 0; 360 345 361 346 return 0; 362 347 } ··· 382 365 void hl_eq_reset(struct hl_device *hdev, struct hl_eq *q) 383 366 { 384 367 q->ci = 0; 368 + q->prev_eqe_index = 0; 385 369 386 370 /* 387 371 * It's not enough to just reset the PI/CI because the H/W may have
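The `prev_eqe_index` change above only consumes an event queue entry when it is both ready and carries the next expected index, guarding against stale entries. A standalone sketch of that test; the mask layout is illustrative, and `field_get()` stands in for the kernel's `FIELD_GET()`:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative mask layout - the real EQ_CTL_* masks are ASIC-defined. */
#define EQ_CTL_READY_MASK 0x80000000u
#define EQ_CTL_INDEX_MASK 0x0000FFFFu

/* Open-coded equivalent of FIELD_GET(): mask, then shift down by the
 * position of the mask's lowest set bit. */
static uint32_t field_get(uint32_t mask, uint32_t val)
{
	return (val & mask) / (mask & -mask);
}

/* Consume an entry only if it is ready AND its index follows the previous
 * one; on success advance the expected index. */
static int eqe_is_consumable(uint32_t ctl, uint32_t *prev_index)
{
	if (!field_get(EQ_CTL_READY_MASK, ctl))
		return 0;
	if (((*prev_index + 1) & EQ_CTL_INDEX_MASK) !=
			field_get(EQ_CTL_INDEX_MASK, ctl))
		return 0;
	(*prev_index)++;
	return 1;
}

/* Demo sequence: accept index 1, reject a non-ready entry, reject a skipped
 * index, then accept the true successor. */
static int demo_sequence(void)
{
	uint32_t prev = 0;
	int ok = 1;

	ok &= eqe_is_consumable(0x80000001u, &prev);
	ok &= !eqe_is_consumable(0x00000002u, &prev);
	ok &= !eqe_is_consumable(0x80000005u, &prev);
	ok &= eqe_is_consumable(0x80000002u, &prev);
	return ok;
}
```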
+11 -11
drivers/misc/habanalabs/common/memory.c
··· 570 570 if ((is_align_pow_2 && (hint_addr & (va_block_align - 1))) || 571 571 (!is_align_pow_2 && 572 572 do_div(tmp_hint_addr, va_range->page_size))) { 573 - dev_info(hdev->dev, "Hint address 0x%llx will be ignored\n", 574 - hint_addr); 573 + 574 + dev_dbg(hdev->dev, 575 + "Hint address 0x%llx will be ignored because it is not aligned\n", 576 + hint_addr); 575 577 hint_addr = 0; 576 578 } 577 579 ··· 1119 1117 goto map_err; 1120 1118 } 1121 1119 1122 - rc = hdev->asic_funcs->mmu_invalidate_cache(hdev, false, *vm_type); 1120 + rc = hdev->asic_funcs->mmu_invalidate_cache_range(hdev, false, 1121 + *vm_type, ctx->asid, ret_vaddr, phys_pg_pack->total_size); 1123 1122 1124 1123 mutex_unlock(&ctx->mmu_lock); 1125 1124 ··· 1264 1261 * at the loop end rather than for each iteration 1265 1262 */ 1266 1263 if (!ctx_free) 1267 - rc = hdev->asic_funcs->mmu_invalidate_cache(hdev, true, 1268 - *vm_type); 1264 + rc = hdev->asic_funcs->mmu_invalidate_cache_range(hdev, true, 1265 + *vm_type, ctx->asid, vaddr, 1266 + phys_pg_pack->total_size); 1269 1267 1270 1268 mutex_unlock(&ctx->mmu_lock); 1271 1269 ··· 1373 1369 /* Driver only allows mapping of a complete HW block */ 1374 1370 block_size = vma->vm_end - vma->vm_start; 1375 1371 1376 - #ifdef _HAS_TYPE_ARG_IN_ACCESS_OK 1377 - if (!access_ok(VERIFY_WRITE, 1378 - (void __user *) (uintptr_t) vma->vm_start, block_size)) { 1379 - #else 1380 1372 if (!access_ok((void __user *) (uintptr_t) vma->vm_start, block_size)) { 1381 - #endif 1382 1373 dev_err(hdev->dev, 1383 1374 "user pointer is invalid - 0x%lx\n", 1384 1375 vma->vm_start); ··· 1607 1608 1608 1609 if (rc != npages) { 1609 1610 dev_err(hdev->dev, 1610 - "Failed to map host memory, user ptr probably wrong\n"); 1611 + "Failed (%d) to pin host memory with user ptr 0x%llx, size 0x%llx, npages %d\n", 1612 + rc, addr, size, npages); 1611 1613 if (rc < 0) 1612 1614 goto destroy_pages; 1613 1615 npages = rc;
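The updated hint-address check above distinguishes power-of-2 page sizes, where a cheap mask test works, from non-power-of-2 DRAM page sizes, which need a real modulo (the kernel uses `do_div()` for the 64-bit division). A userspace sketch of the same predicate:

```c
#include <assert.h>
#include <stdint.h>

/* Returns nonzero if the hint address is aligned to the page size.
 * page_size must be nonzero. */
static int hint_is_aligned(uint64_t hint, uint64_t page_size)
{
	int is_pow2 = (page_size & (page_size - 1)) == 0;

	if (is_pow2)
		return (hint & (page_size - 1)) == 0; /* mask test */
	return (hint % page_size) == 0;               /* do_div() in-kernel */
}
```

A misaligned hint is not an error; as in the hunk above, the driver just logs it at debug level and falls back to letting the allocator pick the address.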
+11 -3
drivers/misc/habanalabs/common/mmu/mmu.c
··· 501 501 502 502 if ((hops->range_type == HL_VA_RANGE_TYPE_DRAM) && 503 503 !is_power_of_2(prop->dram_page_size)) { 504 - u32 bit; 504 + unsigned long dram_page_size = prop->dram_page_size; 505 505 u64 page_offset_mask; 506 506 u64 phys_addr_mask; 507 + u32 bit; 507 508 508 - bit = __ffs64((u64)prop->dram_page_size); 509 - page_offset_mask = ((1ull << bit) - 1); 509 + /* 510 + * find last set bit in page_size to cover all bits of page 511 + * offset. note that 1 has to be added to bit index. 512 + * note that the internal ulong variable is used to avoid 513 + * alignment issue. 514 + */ 515 + bit = find_last_bit(&dram_page_size, 516 + sizeof(dram_page_size) * BITS_PER_BYTE) + 1; 517 + page_offset_mask = (BIT_ULL(bit) - 1); 510 518 phys_addr_mask = ~page_offset_mask; 511 519 *phys_addr = (tmp_phys_addr & phys_addr_mask) | 512 520 (virt_addr & page_offset_mask);
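The fix above switches from the lowest set bit (`__ffs64()`) to the highest set bit plus one (`find_last_bit()` + 1) when building the page-offset mask for non-power-of-2 DRAM page sizes. A worked userspace sketch of why: for a 12KB page (0x3000) the in-page offset can reach 0x2FFF, so the mask must be 0x3FFF, whereas the old lowest-bit mask (0xFFF) truncated offsets:

```c
#include <assert.h>
#include <stdint.h>

/* Mask covering all possible in-page offsets: (1 << (last set bit + 1)) - 1.
 * page_size must be nonzero. */
static uint64_t page_offset_mask(uint64_t page_size)
{
	unsigned int bit = 63;

	while (!(page_size & (1ull << bit))) /* find last (highest) set bit */
		bit--;
	return (1ull << (bit + 1)) - 1;      /* BIT_ULL(bit + 1) - 1 */
}

/* Combine the translated physical page with the offset bits of the
 * virtual address, mirroring the computation in the hunk above. */
static uint64_t combine_addr(uint64_t phys_page, uint64_t virt,
			     uint64_t page_size)
{
	uint64_t off_mask = page_offset_mask(page_size);

	return (phys_page & ~off_mask) | (virt & off_mask);
}
```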
+33 -1
drivers/misc/habanalabs/common/pci/pci.c
··· 10 10 11 11 #include <linux/pci.h> 12 12 13 - #define HL_PLDM_PCI_ELBI_TIMEOUT_MSEC (HL_PCI_ELBI_TIMEOUT_MSEC * 10) 13 + #define HL_PLDM_PCI_ELBI_TIMEOUT_MSEC (HL_PCI_ELBI_TIMEOUT_MSEC * 100) 14 14 15 15 #define IATU_REGION_CTRL_REGION_EN_MASK BIT(31) 16 16 #define IATU_REGION_CTRL_MATCH_MODE_MASK BIT(30) ··· 360 360 } 361 361 362 362 /** 363 + * hl_get_pci_memory_region() - get PCI region for given address 364 + * @hdev: Pointer to hl_device structure. 365 + * @addr: device address 366 + * 367 + * @return region index on success, otherwise PCI_REGION_NUMBER (invalid 368 + * region index) 369 + */ 370 + enum pci_region hl_get_pci_memory_region(struct hl_device *hdev, u64 addr) 371 + { 372 + int i; 373 + 374 + for (i = 0 ; i < PCI_REGION_NUMBER ; i++) { 375 + struct pci_mem_region *region = &hdev->pci_mem_region[i]; 376 + 377 + if (!region->used) 378 + continue; 379 + 380 + if ((addr >= region->region_base) && 381 + (addr < region->region_base + region->region_size)) 382 + return i; 383 + } 384 + 385 + return PCI_REGION_NUMBER; 386 + } 387 + 388 + /** 363 389 * hl_pci_init() - PCI initialization code. 364 390 * @hdev: Pointer to hl_device structure. 365 391 * ··· 419 393 if (rc) { 420 394 dev_err(hdev->dev, "Failed to initialize iATU\n"); 421 395 goto unmap_pci_bars; 396 + } 397 + 398 + /* Driver must sleep in order for FW to finish the iATU configuration */ 399 + if (hdev->asic_prop.iatu_done_by_fw) { 400 + usleep_range(2000, 3000); 401 + hdev->asic_funcs->set_dma_mask_from_fw(hdev); 422 402 } 423 403 424 404 rc = dma_set_mask_and_coherent(&pdev->dev,
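`hl_get_pci_memory_region()` above is a linear scan over the used regions, with `PCI_REGION_NUMBER` doubling as the "no match" sentinel. A userspace replica of the same lookup; the demo region layout is invented for illustration, not real Gaudi BAR addresses:

```c
#include <assert.h>
#include <stdint.h>

enum region { REGION_CFG, REGION_SRAM, REGION_DRAM, REGION_NUMBER };

struct mem_region {
	uint64_t base;
	uint64_t size;
	int used;
};

/* Invented demo layout: CFG and SRAM populated, DRAM left unused. */
static const struct mem_region demo_map[REGION_NUMBER] = {
	[REGION_CFG]  = { 0x0,             0x1000,        1 },
	[REGION_SRAM] = { 0x7FF0000000ull, 0x100000,      1 },
	[REGION_DRAM] = { 0x8000000000ull, 0x100000000ull, 0 },
};

/* First used region whose half-open range [base, base + size) contains the
 * address wins; REGION_NUMBER means no region matched. */
static enum region find_region(const struct mem_region *r, uint64_t addr)
{
	for (int i = 0; i < REGION_NUMBER; i++) {
		if (!r[i].used)
			continue;
		if (addr >= r[i].base && addr < r[i].base + r[i].size)
			return (enum region)i;
	}
	return REGION_NUMBER;
}
```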
+1 -1
drivers/misc/habanalabs/common/sysfs.c
··· 208 208 goto out; 209 209 } 210 210 211 - if (!hdev->supports_soft_reset) { 211 + if (!hdev->allow_external_soft_reset) { 212 212 dev_err(hdev->dev, "Device does not support soft-reset\n"); 213 213 goto out; 214 214 }
+620 -411
drivers/misc/habanalabs/gaudi/gaudi.c
··· 78 78 #define GAUDI_PLDM_TPC_KERNEL_WAIT_USEC (HL_DEVICE_TIMEOUT_USEC * 30) 79 79 #define GAUDI_BOOT_FIT_REQ_TIMEOUT_USEC 1000000 /* 1s */ 80 80 #define GAUDI_MSG_TO_CPU_TIMEOUT_USEC 4000000 /* 4s */ 81 + #define GAUDI_WAIT_FOR_BL_TIMEOUT_USEC 15000000 /* 15s */ 81 82 82 83 #define GAUDI_QMAN0_FENCE_VAL 0x72E91AB9 83 84 ··· 410 409 } 411 410 } 412 411 413 - static int gaudi_get_fixed_properties(struct hl_device *hdev) 412 + static int gaudi_set_fixed_properties(struct hl_device *hdev) 414 413 { 415 414 struct asic_fixed_properties *prop = &hdev->asic_prop; 416 415 u32 num_sync_stream_queues = 0; ··· 546 545 for (i = 0 ; i < HL_MAX_DCORES ; i++) 547 546 prop->first_available_cq[i] = USHRT_MAX; 548 547 549 - prop->fw_security_status_valid = false; 548 + prop->fw_cpu_boot_dev_sts0_valid = false; 549 + prop->fw_cpu_boot_dev_sts1_valid = false; 550 550 prop->hard_reset_done_by_fw = false; 551 + prop->gic_interrupts_enable = true; 551 552 552 553 return 0; 553 554 } ··· 580 577 if ((gaudi) && (gaudi->hbm_bar_cur_addr == addr)) 581 578 return old_addr; 582 579 580 + if (hdev->asic_prop.iatu_done_by_fw) 581 + return U64_MAX; 582 + 583 583 /* Inbound Region 2 - Bar 4 - Point to HBM */ 584 584 pci_region.mode = PCI_BAR_MATCH_MODE; 585 585 pci_region.bar = HBM_BAR_ID; ··· 605 599 struct hl_outbound_pci_region outbound_region; 606 600 int rc; 607 601 608 - if (hdev->asic_prop.iatu_done_by_fw) { 609 - hdev->asic_funcs->set_dma_mask_from_fw(hdev); 602 + if (hdev->asic_prop.iatu_done_by_fw) 610 603 return 0; 611 - } 612 604 613 605 /* Inbound Region 0 - Bar 0 - Point to SRAM + CFG */ 614 606 inbound_region.mode = PCI_BAR_MATCH_MODE; ··· 655 651 u32 fw_boot_status; 656 652 int rc; 657 653 658 - rc = gaudi_get_fixed_properties(hdev); 654 + rc = gaudi_set_fixed_properties(hdev); 659 655 if (rc) { 660 - dev_err(hdev->dev, "Failed to get fixed properties\n"); 656 + dev_err(hdev->dev, "Failed setting fixed properties\n"); 661 657 return rc; 662 658 } 663 659 ··· 687 683 
prop->dram_pci_bar_size = pci_resource_len(pdev, HBM_BAR_ID); 688 684 689 685 /* If FW security is enabled at this point it means no access to ELBI */ 690 - if (!hdev->asic_prop.fw_security_disabled) { 686 + if (hdev->asic_prop.fw_security_enabled) { 691 687 hdev->asic_prop.iatu_done_by_fw = true; 688 + 689 + /* 690 + * GIC-security-bit can ONLY be set by CPUCP, so in this stage 691 + * decision can only be taken based on PCI ID security. 692 + */ 693 + hdev->asic_prop.gic_interrupts_enable = false; 692 694 goto pci_init; 693 695 } 694 696 ··· 717 707 * version to determine whether we run with a security-enabled firmware 718 708 */ 719 709 rc = hl_fw_read_preboot_status(hdev, mmPSOC_GLOBAL_CONF_CPU_BOOT_STATUS, 720 - mmCPU_BOOT_DEV_STS0, mmCPU_BOOT_ERR0, 721 - GAUDI_BOOT_FIT_REQ_TIMEOUT_USEC); 710 + mmCPU_BOOT_DEV_STS0, 711 + mmCPU_BOOT_DEV_STS1, mmCPU_BOOT_ERR0, 712 + mmCPU_BOOT_ERR1, 713 + GAUDI_BOOT_FIT_REQ_TIMEOUT_USEC); 722 714 if (rc) { 723 715 if (hdev->reset_on_preboot_fail) 724 716 hdev->asic_funcs->hw_fini(hdev, true); ··· 763 751 u16 pll_freq_arr[HL_PLL_NUM_OUTPUTS], freq; 764 752 int rc; 765 753 766 - if (hdev->asic_prop.fw_security_disabled) { 754 + if (hdev->asic_prop.fw_security_enabled) { 755 + rc = hl_fw_cpucp_pll_info_get(hdev, HL_GAUDI_CPU_PLL, pll_freq_arr); 756 + 757 + if (rc) 758 + return rc; 759 + 760 + freq = pll_freq_arr[2]; 761 + } else { 767 762 /* Backward compatibility */ 768 763 div_fctr = RREG32(mmPSOC_CPU_PLL_DIV_FACTOR_2); 769 764 div_sel = RREG32(mmPSOC_CPU_PLL_DIV_SEL_2); ··· 798 779 div_sel); 799 780 freq = 0; 800 781 } 801 - } else { 802 - rc = hl_fw_cpucp_pll_info_get(hdev, HL_GAUDI_CPU_PLL, pll_freq_arr); 803 - 804 - if (rc) 805 - return rc; 806 - 807 - freq = pll_freq_arr[2]; 808 782 } 809 783 810 784 prop->psoc_timestamp_frequency = freq; ··· 1000 988 hw_sob_group->base_sob_id); 1001 989 } 1002 990 991 + static void gaudi_collective_mstr_sob_mask_set(struct gaudi_device *gaudi) 992 + { 993 + struct 
gaudi_collective_properties *prop; 994 + int i; 995 + 996 + prop = &gaudi->collective_props; 997 + 998 + memset(prop->mstr_sob_mask, 0, sizeof(prop->mstr_sob_mask)); 999 + 1000 + for (i = 0 ; i < NIC_NUMBER_OF_ENGINES ; i++) 1001 + if (gaudi->hw_cap_initialized & BIT(HW_CAP_NIC_SHIFT + i)) 1002 + prop->mstr_sob_mask[i / HL_MAX_SOBS_PER_MONITOR] |= 1003 + BIT(i % HL_MAX_SOBS_PER_MONITOR); 1004 + /* Set collective engine bit */ 1005 + prop->mstr_sob_mask[i / HL_MAX_SOBS_PER_MONITOR] |= 1006 + BIT(i % HL_MAX_SOBS_PER_MONITOR); 1007 + } 1008 + 1003 1009 static int gaudi_collective_init(struct hl_device *hdev) 1004 1010 { 1005 - u32 i, master_monitor_sobs, sob_id, reserved_sobs_per_group; 1011 + u32 i, sob_id, reserved_sobs_per_group; 1006 1012 struct gaudi_collective_properties *prop; 1007 1013 struct gaudi_device *gaudi; 1008 1014 ··· 1046 1016 gaudi_collective_map_sobs(hdev, i); 1047 1017 } 1048 1018 1049 - prop->mstr_sob_mask[0] = 0; 1050 - master_monitor_sobs = HL_MAX_SOBS_PER_MONITOR; 1051 - for (i = 0 ; i < master_monitor_sobs ; i++) 1052 - if (gaudi->hw_cap_initialized & BIT(HW_CAP_NIC_SHIFT + i)) 1053 - prop->mstr_sob_mask[0] |= BIT(i); 1054 - 1055 - prop->mstr_sob_mask[1] = 0; 1056 - master_monitor_sobs = 1057 - NIC_NUMBER_OF_ENGINES - HL_MAX_SOBS_PER_MONITOR; 1058 - for (i = 0 ; i < master_monitor_sobs; i++) { 1059 - if (gaudi->hw_cap_initialized & BIT(HW_CAP_NIC_SHIFT + i)) 1060 - prop->mstr_sob_mask[1] |= BIT(i); 1061 - } 1062 - 1063 - /* Set collective engine bit */ 1064 - prop->mstr_sob_mask[1] |= BIT(i); 1019 + gaudi_collective_mstr_sob_mask_set(gaudi); 1065 1020 1066 1021 return 0; 1067 1022 } ··· 1528 1513 hdev->cpu_pci_msb_addr = 1529 1514 GAUDI_CPU_PCI_MSB_ADDR(hdev->cpu_accessible_dma_address); 1530 1515 1531 - if (hdev->asic_prop.fw_security_disabled) 1516 + if (!hdev->asic_prop.fw_security_enabled) 1532 1517 GAUDI_PCI_TO_CPU_ADDR(hdev->cpu_accessible_dma_address); 1533 1518 1534 1519 free_dma_mem_arr: ··· 1605 1590 return rc; 1606 1591 } 1607 1592 
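The new `gaudi_collective_mstr_sob_mask_set()` above replaces two hand-unrolled loops with one: each enabled NIC engine contributes one bit, where the mask word is `engine / HL_MAX_SOBS_PER_MONITOR` and the bit is `engine % HL_MAX_SOBS_PER_MONITOR`, and one extra "collective engine" bit is set right after the last engine. A standalone sketch of that indexing, with made-up constants standing in for `HL_MAX_SOBS_PER_MONITOR` and `NIC_NUMBER_OF_ENGINES` (the real values live in the driver headers):

```c
#include <assert.h>
#include <stdint.h>

#define SOBS_PER_MONITOR 8   /* stand-in for HL_MAX_SOBS_PER_MONITOR */
#define NUM_ENGINES 10       /* stand-in for NIC_NUMBER_OF_ENGINES */

/* Distribute one bit per enabled engine across an array of mask words:
 * word = engine / SOBS_PER_MONITOR, bit = engine % SOBS_PER_MONITOR,
 * then set the collective-engine bit immediately after the last engine,
 * mirroring gaudi_collective_mstr_sob_mask_set() in the diff. */
static void set_mstr_sob_mask(uint32_t enabled_engines_bitmap,
			      uint8_t *mask, int mask_words)
{
	int i;

	for (i = 0; i < mask_words; i++)
		mask[i] = 0;

	for (i = 0; i < NUM_ENGINES; i++)
		if (enabled_engines_bitmap & (1u << i))
			mask[i / SOBS_PER_MONITOR] |=
				1u << (i % SOBS_PER_MONITOR);

	/* Collective engine bit: i == NUM_ENGINES after the loop */
	mask[i / SOBS_PER_MONITOR] |= 1u << (i % SOBS_PER_MONITOR);
}
```

Note how the single loop handles engines on both sides of the monitor boundary, which is exactly what let the patch delete the separate `mstr_sob_mask[0]` / `mstr_sob_mask[1]` loops from `gaudi_collective_init()`.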
1593 + static void gaudi_set_pci_memory_regions(struct hl_device *hdev) 1594 + { 1595 + struct asic_fixed_properties *prop = &hdev->asic_prop; 1596 + struct pci_mem_region *region; 1597 + 1598 + /* CFG */ 1599 + region = &hdev->pci_mem_region[PCI_REGION_CFG]; 1600 + region->region_base = CFG_BASE; 1601 + region->region_size = CFG_SIZE; 1602 + region->offset_in_bar = CFG_BASE - SPI_FLASH_BASE_ADDR; 1603 + region->bar_size = CFG_BAR_SIZE; 1604 + region->bar_id = CFG_BAR_ID; 1605 + region->used = 1; 1606 + 1607 + /* SRAM */ 1608 + region = &hdev->pci_mem_region[PCI_REGION_SRAM]; 1609 + region->region_base = SRAM_BASE_ADDR; 1610 + region->region_size = SRAM_SIZE; 1611 + region->offset_in_bar = 0; 1612 + region->bar_size = SRAM_BAR_SIZE; 1613 + region->bar_id = SRAM_BAR_ID; 1614 + region->used = 1; 1615 + 1616 + /* DRAM */ 1617 + region = &hdev->pci_mem_region[PCI_REGION_DRAM]; 1618 + region->region_base = DRAM_PHYS_BASE; 1619 + region->region_size = hdev->asic_prop.dram_size; 1620 + region->offset_in_bar = 0; 1621 + region->bar_size = prop->dram_pci_bar_size; 1622 + region->bar_id = HBM_BAR_ID; 1623 + region->used = 1; 1624 + 1625 + /* SP SRAM */ 1626 + region = &hdev->pci_mem_region[PCI_REGION_SP_SRAM]; 1627 + region->region_base = PSOC_SCRATCHPAD_ADDR; 1628 + region->region_size = PSOC_SCRATCHPAD_SIZE; 1629 + region->offset_in_bar = PSOC_SCRATCHPAD_ADDR - SPI_FLASH_BASE_ADDR; 1630 + region->bar_size = CFG_BAR_SIZE; 1631 + region->bar_id = CFG_BAR_ID; 1632 + region->used = 1; 1633 + } 1634 + 1608 1635 static int gaudi_sw_init(struct hl_device *hdev) 1609 1636 { 1610 1637 struct gaudi_device *gaudi; ··· 1721 1664 hdev->supports_coresight = true; 1722 1665 hdev->supports_staged_submission = true; 1723 1666 1667 + gaudi_set_pci_memory_regions(hdev); 1668 + 1724 1669 return 0; 1725 1670 1726 1671 free_cpu_accessible_dma_pool: 1727 1672 gen_pool_destroy(hdev->cpu_accessible_dma_pool); 1728 1673 free_cpu_dma_mem: 1729 - if (hdev->asic_prop.fw_security_disabled) 1674 + if 
(!hdev->asic_prop.fw_security_enabled) 1730 1675 GAUDI_CPU_TO_PCI_ADDR(hdev->cpu_accessible_dma_address, 1731 1676 hdev->cpu_pci_msb_addr); 1732 1677 hdev->asic_funcs->asic_dma_free_coherent(hdev, ··· 1750 1691 1751 1692 gen_pool_destroy(hdev->cpu_accessible_dma_pool); 1752 1693 1753 - if (hdev->asic_prop.fw_security_disabled) 1694 + if (!hdev->asic_prop.fw_security_enabled) 1754 1695 GAUDI_CPU_TO_PCI_ADDR(hdev->cpu_accessible_dma_address, 1755 1696 hdev->cpu_pci_msb_addr); 1756 1697 ··· 1938 1879 { 1939 1880 struct gaudi_device *gaudi = hdev->asic_specific; 1940 1881 1941 - if (!hdev->asic_prop.fw_security_disabled) 1882 + if (hdev->asic_prop.fw_security_enabled) 1942 1883 return; 1943 1884 1944 - if (hdev->asic_prop.fw_security_status_valid && 1945 - (hdev->asic_prop.fw_app_security_map & 1946 - CPU_BOOT_DEV_STS0_SRAM_SCR_EN)) 1885 + if (hdev->asic_prop.fw_app_cpu_boot_dev_sts0 & 1886 + CPU_BOOT_DEV_STS0_SRAM_SCR_EN) 1947 1887 return; 1948 1888 1949 1889 if (gaudi->hw_cap_initialized & HW_CAP_SRAM_SCRAMBLER) ··· 2009 1951 { 2010 1952 struct gaudi_device *gaudi = hdev->asic_specific; 2011 1953 2012 - if (!hdev->asic_prop.fw_security_disabled) 1954 + if (hdev->asic_prop.fw_security_enabled) 2013 1955 return; 2014 1956 2015 - if (hdev->asic_prop.fw_security_status_valid && 2016 - (hdev->asic_prop.fw_boot_cpu_security_map & 2017 - CPU_BOOT_DEV_STS0_DRAM_SCR_EN)) 1957 + if (hdev->asic_prop.fw_bootfit_cpu_boot_dev_sts0 & 1958 + CPU_BOOT_DEV_STS0_DRAM_SCR_EN) 2018 1959 return; 2019 1960 2020 1961 if (gaudi->hw_cap_initialized & HW_CAP_HBM_SCRAMBLER) ··· 2078 2021 2079 2022 static void gaudi_init_e2e(struct hl_device *hdev) 2080 2023 { 2081 - if (!hdev->asic_prop.fw_security_disabled) 2024 + if (hdev->asic_prop.fw_security_enabled) 2082 2025 return; 2083 2026 2084 - if (hdev->asic_prop.fw_security_status_valid && 2085 - (hdev->asic_prop.fw_boot_cpu_security_map & 2086 - CPU_BOOT_DEV_STS0_E2E_CRED_EN)) 2027 + if (hdev->asic_prop.fw_bootfit_cpu_boot_dev_sts0 & 2028 + 
CPU_BOOT_DEV_STS0_E2E_CRED_EN) 2087 2029 return; 2088 2030 2089 2031 WREG32(mmSIF_RTR_CTRL_0_E2E_HBM_WR_SIZE, 247 >> 3); ··· 2452 2396 { 2453 2397 uint32_t hbm0_wr, hbm1_wr, hbm0_rd, hbm1_rd; 2454 2398 2455 - if (!hdev->asic_prop.fw_security_disabled) 2399 + if (hdev->asic_prop.fw_security_enabled) 2456 2400 return; 2457 2401 2458 - if (hdev->asic_prop.fw_security_status_valid && 2459 - (hdev->asic_prop.fw_boot_cpu_security_map & 2460 - CPU_BOOT_DEV_STS0_HBM_CRED_EN)) 2402 + if (hdev->asic_prop.fw_bootfit_cpu_boot_dev_sts0 & 2403 + CPU_BOOT_DEV_STS0_HBM_CRED_EN) 2461 2404 return; 2462 2405 2463 2406 hbm0_wr = 0x33333333; ··· 2542 2487 static void gaudi_init_pci_dma_qman(struct hl_device *hdev, int dma_id, 2543 2488 int qman_id, dma_addr_t qman_pq_addr) 2544 2489 { 2490 + struct cpu_dyn_regs *dyn_regs = 2491 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 2545 2492 u32 mtr_base_en_lo, mtr_base_en_hi, mtr_base_ws_lo, mtr_base_ws_hi; 2546 2493 u32 so_base_en_lo, so_base_en_hi, so_base_ws_lo, so_base_ws_hi; 2547 2494 u32 q_off, dma_qm_offset; 2548 - u32 dma_qm_err_cfg; 2495 + u32 dma_qm_err_cfg, irq_handler_offset; 2549 2496 2550 2497 dma_qm_offset = dma_id * DMA_QMAN_OFFSET; 2551 2498 ··· 2596 2539 2597 2540 /* The following configuration is needed only once per QMAN */ 2598 2541 if (qman_id == 0) { 2542 + irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 
2543 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 2544 + le32_to_cpu(dyn_regs->gic_dma_qm_irq_ctrl); 2545 + 2599 2546 /* Configure RAZWI IRQ */ 2600 2547 dma_qm_err_cfg = PCI_DMA_QMAN_GLBL_ERR_CFG_MSG_EN_MASK; 2601 - if (hdev->stop_on_err) { 2548 + if (hdev->stop_on_err) 2602 2549 dma_qm_err_cfg |= 2603 2550 PCI_DMA_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK; 2604 - } 2605 2551 2606 2552 WREG32(mmDMA0_QM_GLBL_ERR_CFG + dma_qm_offset, dma_qm_err_cfg); 2553 + 2607 2554 WREG32(mmDMA0_QM_GLBL_ERR_ADDR_LO + dma_qm_offset, 2608 - lower_32_bits(CFG_BASE + 2609 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2555 + lower_32_bits(CFG_BASE + irq_handler_offset)); 2610 2556 WREG32(mmDMA0_QM_GLBL_ERR_ADDR_HI + dma_qm_offset, 2611 - upper_32_bits(CFG_BASE + 2612 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2557 + upper_32_bits(CFG_BASE + irq_handler_offset)); 2558 + 2613 2559 WREG32(mmDMA0_QM_GLBL_ERR_WDATA + dma_qm_offset, 2614 2560 gaudi_irq_map_table[GAUDI_EVENT_DMA0_QM].cpu_id + 2615 2561 dma_id); ··· 2633 2573 2634 2574 static void gaudi_init_dma_core(struct hl_device *hdev, int dma_id) 2635 2575 { 2636 - u32 dma_offset = dma_id * DMA_CORE_OFFSET; 2576 + struct cpu_dyn_regs *dyn_regs = 2577 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 2637 2578 u32 dma_err_cfg = 1 << DMA0_CORE_ERR_CFG_ERR_MSG_EN_SHIFT; 2579 + u32 dma_offset = dma_id * DMA_CORE_OFFSET; 2580 + u32 irq_handler_offset; 2638 2581 2639 2582 /* Set to maximum possible according to physical size */ 2640 2583 WREG32(mmDMA0_CORE_RD_MAX_OUTSTAND + dma_offset, 0); ··· 2651 2588 dma_err_cfg |= 1 << DMA0_CORE_ERR_CFG_STOP_ON_ERR_SHIFT; 2652 2589 2653 2590 WREG32(mmDMA0_CORE_ERR_CFG + dma_offset, dma_err_cfg); 2591 + 2592 + irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 
2593 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 2594 + le32_to_cpu(dyn_regs->gic_dma_core_irq_ctrl); 2595 + 2654 2596 WREG32(mmDMA0_CORE_ERRMSG_ADDR_LO + dma_offset, 2655 - lower_32_bits(CFG_BASE + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2597 + lower_32_bits(CFG_BASE + irq_handler_offset)); 2656 2598 WREG32(mmDMA0_CORE_ERRMSG_ADDR_HI + dma_offset, 2657 - upper_32_bits(CFG_BASE + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2599 + upper_32_bits(CFG_BASE + irq_handler_offset)); 2600 + 2658 2601 WREG32(mmDMA0_CORE_ERRMSG_WDATA + dma_offset, 2659 2602 gaudi_irq_map_table[GAUDI_EVENT_DMA0_CORE].cpu_id + dma_id); 2660 2603 WREG32(mmDMA0_CORE_PROT + dma_offset, ··· 2723 2654 static void gaudi_init_hbm_dma_qman(struct hl_device *hdev, int dma_id, 2724 2655 int qman_id, u64 qman_base_addr) 2725 2656 { 2657 + struct cpu_dyn_regs *dyn_regs = 2658 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 2726 2659 u32 mtr_base_en_lo, mtr_base_en_hi, mtr_base_ws_lo, mtr_base_ws_hi; 2727 2660 u32 so_base_en_lo, so_base_en_hi, so_base_ws_lo, so_base_ws_hi; 2661 + u32 dma_qm_err_cfg, irq_handler_offset; 2728 2662 u32 q_off, dma_qm_offset; 2729 - u32 dma_qm_err_cfg; 2730 2663 2731 2664 dma_qm_offset = dma_id * DMA_QMAN_OFFSET; 2732 2665 ··· 2768 2697 WREG32(mmDMA0_QM_CP_LDMA_DST_BASE_LO_OFFSET_0 + q_off, 2769 2698 QMAN_CPDMA_DST_OFFSET); 2770 2699 } else { 2700 + irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 
2701 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 2702 + le32_to_cpu(dyn_regs->gic_dma_qm_irq_ctrl); 2703 + 2771 2704 WREG32(mmDMA0_QM_CP_LDMA_TSIZE_OFFSET_0 + q_off, 2772 2705 QMAN_LDMA_SIZE_OFFSET); 2773 2706 WREG32(mmDMA0_QM_CP_LDMA_SRC_BASE_LO_OFFSET_0 + q_off, ··· 2781 2706 2782 2707 /* Configure RAZWI IRQ */ 2783 2708 dma_qm_err_cfg = HBM_DMA_QMAN_GLBL_ERR_CFG_MSG_EN_MASK; 2784 - if (hdev->stop_on_err) { 2709 + if (hdev->stop_on_err) 2785 2710 dma_qm_err_cfg |= 2786 2711 HBM_DMA_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK; 2787 - } 2712 + 2788 2713 WREG32(mmDMA0_QM_GLBL_ERR_CFG + dma_qm_offset, dma_qm_err_cfg); 2789 2714 2790 2715 WREG32(mmDMA0_QM_GLBL_ERR_ADDR_LO + dma_qm_offset, 2791 - lower_32_bits(CFG_BASE + 2792 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2716 + lower_32_bits(CFG_BASE + irq_handler_offset)); 2793 2717 WREG32(mmDMA0_QM_GLBL_ERR_ADDR_HI + dma_qm_offset, 2794 - upper_32_bits(CFG_BASE + 2795 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2718 + upper_32_bits(CFG_BASE + irq_handler_offset)); 2719 + 2796 2720 WREG32(mmDMA0_QM_GLBL_ERR_WDATA + dma_qm_offset, 2797 2721 gaudi_irq_map_table[GAUDI_EVENT_DMA0_QM].cpu_id + 2798 2722 dma_id); ··· 2866 2792 static void gaudi_init_mme_qman(struct hl_device *hdev, u32 mme_offset, 2867 2793 int qman_id, u64 qman_base_addr) 2868 2794 { 2795 + struct cpu_dyn_regs *dyn_regs = 2796 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 2869 2797 u32 mtr_base_lo, mtr_base_hi; 2870 2798 u32 so_base_lo, so_base_hi; 2799 + u32 irq_handler_offset; 2871 2800 u32 q_off, mme_id; 2872 2801 u32 mme_qm_err_cfg; 2873 2802 ··· 2902 2825 WREG32(mmMME0_QM_CP_LDMA_DST_BASE_LO_OFFSET_0 + q_off, 2903 2826 QMAN_CPDMA_DST_OFFSET); 2904 2827 } else { 2828 + irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 
2829 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 2830 + le32_to_cpu(dyn_regs->gic_mme_qm_irq_ctrl); 2831 + 2905 2832 WREG32(mmMME0_QM_CP_LDMA_TSIZE_OFFSET_0 + q_off, 2906 2833 QMAN_LDMA_SIZE_OFFSET); 2907 2834 WREG32(mmMME0_QM_CP_LDMA_SRC_BASE_LO_OFFSET_0 + q_off, ··· 2915 2834 2916 2835 /* Configure RAZWI IRQ */ 2917 2836 mme_id = mme_offset / 2918 - (mmMME1_QM_GLBL_CFG0 - mmMME0_QM_GLBL_CFG0); 2837 + (mmMME1_QM_GLBL_CFG0 - mmMME0_QM_GLBL_CFG0) / 2; 2919 2838 2920 2839 mme_qm_err_cfg = MME_QMAN_GLBL_ERR_CFG_MSG_EN_MASK; 2921 - if (hdev->stop_on_err) { 2840 + if (hdev->stop_on_err) 2922 2841 mme_qm_err_cfg |= 2923 2842 MME_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK; 2924 - } 2843 + 2925 2844 WREG32(mmMME0_QM_GLBL_ERR_CFG + mme_offset, mme_qm_err_cfg); 2845 + 2926 2846 WREG32(mmMME0_QM_GLBL_ERR_ADDR_LO + mme_offset, 2927 - lower_32_bits(CFG_BASE + 2928 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2847 + lower_32_bits(CFG_BASE + irq_handler_offset)); 2929 2848 WREG32(mmMME0_QM_GLBL_ERR_ADDR_HI + mme_offset, 2930 - upper_32_bits(CFG_BASE + 2931 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2849 + upper_32_bits(CFG_BASE + irq_handler_offset)); 2850 + 2932 2851 WREG32(mmMME0_QM_GLBL_ERR_WDATA + mme_offset, 2933 2852 gaudi_irq_map_table[GAUDI_EVENT_MME0_QM].cpu_id + 2934 2853 mme_id); ··· 2993 2912 static void gaudi_init_tpc_qman(struct hl_device *hdev, u32 tpc_offset, 2994 2913 int qman_id, u64 qman_base_addr) 2995 2914 { 2915 + struct cpu_dyn_regs *dyn_regs = 2916 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 2996 2917 u32 mtr_base_en_lo, mtr_base_en_hi, mtr_base_ws_lo, mtr_base_ws_hi; 2997 2918 u32 so_base_en_lo, so_base_en_hi, so_base_ws_lo, so_base_ws_hi; 2919 + u32 tpc_qm_err_cfg, irq_handler_offset; 2998 2920 u32 q_off, tpc_id; 2999 - u32 tpc_qm_err_cfg; 3000 2921 3001 2922 mtr_base_en_lo = lower_32_bits(CFG_BASE + 3002 2923 mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0); ··· 3039 2956 WREG32(mmTPC0_QM_CP_LDMA_DST_BASE_LO_OFFSET_0 + q_off, 3040 2957 
QMAN_CPDMA_DST_OFFSET); 3041 2958 } else { 2959 + irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 2960 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 2961 + le32_to_cpu(dyn_regs->gic_tpc_qm_irq_ctrl); 2962 + 3042 2963 WREG32(mmTPC0_QM_CP_LDMA_TSIZE_OFFSET_0 + q_off, 3043 2964 QMAN_LDMA_SIZE_OFFSET); 3044 2965 WREG32(mmTPC0_QM_CP_LDMA_SRC_BASE_LO_OFFSET_0 + q_off, ··· 3052 2965 3053 2966 /* Configure RAZWI IRQ */ 3054 2967 tpc_qm_err_cfg = TPC_QMAN_GLBL_ERR_CFG_MSG_EN_MASK; 3055 - if (hdev->stop_on_err) { 2968 + if (hdev->stop_on_err) 3056 2969 tpc_qm_err_cfg |= 3057 2970 TPC_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK; 3058 - } 3059 2971 3060 2972 WREG32(mmTPC0_QM_GLBL_ERR_CFG + tpc_offset, tpc_qm_err_cfg); 2973 + 3061 2974 WREG32(mmTPC0_QM_GLBL_ERR_ADDR_LO + tpc_offset, 3062 - lower_32_bits(CFG_BASE + 3063 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2975 + lower_32_bits(CFG_BASE + irq_handler_offset)); 3064 2976 WREG32(mmTPC0_QM_GLBL_ERR_ADDR_HI + tpc_offset, 3065 - upper_32_bits(CFG_BASE + 3066 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 2977 + upper_32_bits(CFG_BASE + irq_handler_offset)); 2978 + 3067 2979 WREG32(mmTPC0_QM_GLBL_ERR_WDATA + tpc_offset, 3068 2980 gaudi_irq_map_table[GAUDI_EVENT_TPC0_QM].cpu_id + 3069 2981 tpc_id); ··· 3145 3059 static void gaudi_init_nic_qman(struct hl_device *hdev, u32 nic_offset, 3146 3060 int qman_id, u64 qman_base_addr, int nic_id) 3147 3061 { 3062 + struct cpu_dyn_regs *dyn_regs = 3063 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 3148 3064 u32 mtr_base_en_lo, mtr_base_en_hi, mtr_base_ws_lo, mtr_base_ws_hi; 3149 3065 u32 so_base_en_lo, so_base_en_hi, so_base_ws_lo, so_base_ws_hi; 3066 + u32 nic_qm_err_cfg, irq_handler_offset; 3150 3067 u32 q_off; 3151 - u32 nic_qm_err_cfg; 3152 3068 3153 3069 mtr_base_en_lo = lower_32_bits(CFG_BASE + 3154 3070 mmSYNC_MNGR_E_N_SYNC_MNGR_OBJS_MON_PAY_ADDRL_0); ··· 3197 3109 WREG32(mmNIC0_QM0_CP_MSG_BASE3_ADDR_HI_0 + q_off, so_base_ws_hi); 3198 3110 3199 3111 if (qman_id == 0) { 3112 + 
irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 3113 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 3114 + le32_to_cpu(dyn_regs->gic_nic_qm_irq_ctrl); 3115 + 3200 3116 /* Configure RAZWI IRQ */ 3201 3117 nic_qm_err_cfg = NIC_QMAN_GLBL_ERR_CFG_MSG_EN_MASK; 3202 - if (hdev->stop_on_err) { 3118 + if (hdev->stop_on_err) 3203 3119 nic_qm_err_cfg |= 3204 3120 NIC_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK; 3205 - } 3206 3121 3207 3122 WREG32(mmNIC0_QM0_GLBL_ERR_CFG + nic_offset, nic_qm_err_cfg); 3123 + 3208 3124 WREG32(mmNIC0_QM0_GLBL_ERR_ADDR_LO + nic_offset, 3209 - lower_32_bits(CFG_BASE + 3210 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 3125 + lower_32_bits(CFG_BASE + irq_handler_offset)); 3211 3126 WREG32(mmNIC0_QM0_GLBL_ERR_ADDR_HI + nic_offset, 3212 - upper_32_bits(CFG_BASE + 3213 - mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR)); 3127 + upper_32_bits(CFG_BASE + irq_handler_offset)); 3128 + 3214 3129 WREG32(mmNIC0_QM0_GLBL_ERR_WDATA + nic_offset, 3215 3130 gaudi_irq_map_table[GAUDI_EVENT_NIC0_QM0].cpu_id + 3216 3131 nic_id); ··· 3566 3475 if (hdev->in_debug) 3567 3476 return; 3568 3477 3569 - if (!hdev->asic_prop.fw_security_disabled) 3478 + if (hdev->asic_prop.fw_security_enabled) 3570 3479 return; 3571 3480 3572 3481 for (i = GAUDI_PCI_DMA_1, qman_offset = 0 ; i < GAUDI_HBM_DMA_1 ; i++) { ··· 3626 3535 u32 qman_offset; 3627 3536 int i; 3628 3537 3629 - if (!hdev->asic_prop.fw_security_disabled) 3538 + if (hdev->asic_prop.fw_security_enabled) 3630 3539 return; 3631 3540 3632 3541 for (i = 0, qman_offset = 0 ; i < DMA_NUMBER_OF_CHANNELS ; i++) { ··· 3765 3674 { 3766 3675 void __iomem *dst; 3767 3676 3768 - /* HBM scrambler must be initialized before pushing F/W to HBM */ 3769 - gaudi_init_scrambler_hbm(hdev); 3770 - 3771 3677 dst = hdev->pcie_bar[HBM_BAR_ID] + LINUX_FW_OFFSET; 3772 3678 3773 3679 return hl_fw_load_fw_to_device(hdev, GAUDI_LINUX_FW_FILE, dst, 0, 0); ··· 3779 3691 return hl_fw_load_fw_to_device(hdev, GAUDI_BOOT_FIT_FILE, dst, 0, 0); 3780 3692 } 3781 3693 3782 
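Every QMAN init hunk in this patch repeats the same pattern: pick the IRQ handler register offset from either the hard-coded GIC register (`mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR`) or the FW-provided dynamic regs descriptor, then split `CFG_BASE + offset` into the LO/HI 32-bit halves written via `WREG32`. A self-contained sketch of that selection and split, with made-up values for `CFG_BASE` and the GIC offset (the real code also applies `le32_to_cpu()` to the descriptor field, omitted here since the value is plain host-endian):

```c
#include <assert.h>
#include <stdint.h>

#define FAKE_CFG_BASE 0x7ffc000000ULL /* stand-in for CFG_BASE */
#define FAKE_GIC_SETSPI_OFFSET 0x800000u /* stand-in for the GIC register */

struct fake_dyn_regs {
	uint32_t gic_dma_qm_irq_ctrl; /* little-endian in the real descriptor */
};

/* Choose between the static GIC register and the FW-supplied offset, then
 * split the full 64-bit address into the two halves programmed into the
 * QMAN's GLBL_ERR_ADDR_LO/HI registers. */
static void irq_addr_halves(int gic_interrupts_enable,
			    const struct fake_dyn_regs *dyn_regs,
			    uint32_t *lo, uint32_t *hi)
{
	uint32_t irq_handler_offset = gic_interrupts_enable ?
			FAKE_GIC_SETSPI_OFFSET : dyn_regs->gic_dma_qm_irq_ctrl;
	uint64_t addr = FAKE_CFG_BASE + irq_handler_offset;

	*lo = (uint32_t)(addr & 0xffffffffu);
	*hi = (uint32_t)(addr >> 32);
}
```

Computing `irq_handler_offset` once per QMAN is what allows the patch to drop the repeated `mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR` literals from every `WREG32` pair.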
- static int gaudi_read_device_fw_version(struct hl_device *hdev, 3783 - enum hl_fw_component fwc) 3694 + static void gaudi_init_dynamic_firmware_loader(struct hl_device *hdev) 3784 3695 { 3785 - const char *name; 3786 - u32 ver_off; 3787 - char *dest; 3696 + struct dynamic_fw_load_mgr *dynamic_loader; 3697 + struct cpu_dyn_regs *dyn_regs; 3788 3698 3789 - switch (fwc) { 3790 - case FW_COMP_UBOOT: 3791 - ver_off = RREG32(mmUBOOT_VER_OFFSET); 3792 - dest = hdev->asic_prop.uboot_ver; 3793 - name = "U-Boot"; 3794 - break; 3795 - case FW_COMP_PREBOOT: 3796 - ver_off = RREG32(mmPREBOOT_VER_OFFSET); 3797 - dest = hdev->asic_prop.preboot_ver; 3798 - name = "Preboot"; 3799 - break; 3800 - default: 3801 - dev_warn(hdev->dev, "Undefined FW component: %d\n", fwc); 3802 - return -EIO; 3803 - } 3699 + dynamic_loader = &hdev->fw_loader.dynamic_loader; 3804 3700 3805 - ver_off &= ~((u32)SRAM_BASE_ADDR); 3701 + /* 3702 + * here we update initial values for few specific dynamic regs (as 3703 + * before reading the first descriptor from FW those value has to be 3704 + * hard-coded) in later stages of the protocol those values will be 3705 + * updated automatically by reading the FW descriptor so data there 3706 + * will always be up-to-date 3707 + */ 3708 + dyn_regs = &dynamic_loader->comm_desc.cpu_dyn_regs; 3709 + dyn_regs->kmd_msg_to_cpu = 3710 + cpu_to_le32(mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU); 3711 + dyn_regs->cpu_cmd_status_to_host = 3712 + cpu_to_le32(mmCPU_CMD_STATUS_TO_HOST); 3806 3713 3807 - if (ver_off < SRAM_SIZE - VERSION_MAX_LEN) { 3808 - memcpy_fromio(dest, hdev->pcie_bar[SRAM_BAR_ID] + ver_off, 3809 - VERSION_MAX_LEN); 3810 - } else { 3811 - dev_err(hdev->dev, "%s version offset (0x%x) is above SRAM\n", 3812 - name, ver_off); 3813 - strcpy(dest, "unavailable"); 3814 - return -EIO; 3815 - } 3714 + dynamic_loader->wait_for_bl_timeout = GAUDI_WAIT_FOR_BL_TIMEOUT_USEC; 3715 + } 3816 3716 3817 - return 0; 3717 + static void gaudi_init_static_firmware_loader(struct hl_device 
*hdev) 3718 + { 3719 + struct static_fw_load_mgr *static_loader; 3720 + 3721 + static_loader = &hdev->fw_loader.static_loader; 3722 + 3723 + static_loader->preboot_version_max_off = SRAM_SIZE - VERSION_MAX_LEN; 3724 + static_loader->boot_fit_version_max_off = SRAM_SIZE - VERSION_MAX_LEN; 3725 + static_loader->kmd_msg_to_cpu_reg = mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU; 3726 + static_loader->cpu_cmd_status_to_host_reg = mmCPU_CMD_STATUS_TO_HOST; 3727 + static_loader->cpu_boot_status_reg = mmPSOC_GLOBAL_CONF_CPU_BOOT_STATUS; 3728 + static_loader->cpu_boot_dev_status0_reg = mmCPU_BOOT_DEV_STS0; 3729 + static_loader->cpu_boot_dev_status1_reg = mmCPU_BOOT_DEV_STS1; 3730 + static_loader->boot_err0_reg = mmCPU_BOOT_ERR0; 3731 + static_loader->boot_err1_reg = mmCPU_BOOT_ERR1; 3732 + static_loader->preboot_version_offset_reg = mmPREBOOT_VER_OFFSET; 3733 + static_loader->boot_fit_version_offset_reg = mmUBOOT_VER_OFFSET; 3734 + static_loader->sram_offset_mask = ~(lower_32_bits(SRAM_BASE_ADDR)); 3735 + static_loader->cpu_reset_wait_msec = hdev->pldm ? 
3736 + GAUDI_PLDM_RESET_WAIT_MSEC : 3737 + GAUDI_CPU_RESET_WAIT_MSEC; 3738 + } 3739 + 3740 + static void gaudi_init_firmware_loader(struct hl_device *hdev) 3741 + { 3742 + struct asic_fixed_properties *prop = &hdev->asic_prop; 3743 + struct fw_load_mgr *fw_loader = &hdev->fw_loader; 3744 + 3745 + /* fill common fields */ 3746 + fw_loader->linux_loaded = false; 3747 + fw_loader->boot_fit_img.image_name = GAUDI_BOOT_FIT_FILE; 3748 + fw_loader->linux_img.image_name = GAUDI_LINUX_FW_FILE; 3749 + fw_loader->cpu_timeout = GAUDI_CPU_TIMEOUT_USEC; 3750 + fw_loader->boot_fit_timeout = GAUDI_BOOT_FIT_REQ_TIMEOUT_USEC; 3751 + fw_loader->skip_bmc = !hdev->bmc_enable; 3752 + fw_loader->sram_bar_id = SRAM_BAR_ID; 3753 + fw_loader->dram_bar_id = HBM_BAR_ID; 3754 + 3755 + if (prop->dynamic_fw_load) 3756 + gaudi_init_dynamic_firmware_loader(hdev); 3757 + else 3758 + gaudi_init_static_firmware_loader(hdev); 3818 3759 } 3819 3760 3820 3761 static int gaudi_init_cpu(struct hl_device *hdev) ··· 3861 3744 * The device CPU works with 40 bits addresses. 3862 3745 * This register sets the extension to 50 bits. 
3863 3746 */ 3864 - if (hdev->asic_prop.fw_security_disabled) 3747 + if (!hdev->asic_prop.fw_security_enabled) 3865 3748 WREG32(mmCPU_IF_CPU_MSB_ADDR, hdev->cpu_pci_msb_addr); 3866 3749 3867 - rc = hl_fw_init_cpu(hdev, mmPSOC_GLOBAL_CONF_CPU_BOOT_STATUS, 3868 - mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU, 3869 - mmCPU_CMD_STATUS_TO_HOST, 3870 - mmCPU_BOOT_DEV_STS0, mmCPU_BOOT_ERR0, 3871 - !hdev->bmc_enable, GAUDI_CPU_TIMEOUT_USEC, 3872 - GAUDI_BOOT_FIT_REQ_TIMEOUT_USEC); 3750 + rc = hl_fw_init_cpu(hdev); 3873 3751 3874 3752 if (rc) 3875 3753 return rc; ··· 3876 3764 3877 3765 static int gaudi_init_cpu_queues(struct hl_device *hdev, u32 cpu_timeout) 3878 3766 { 3879 - struct gaudi_device *gaudi = hdev->asic_specific; 3767 + struct cpu_dyn_regs *dyn_regs = 3768 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 3880 3769 struct asic_fixed_properties *prop = &hdev->asic_prop; 3770 + struct gaudi_device *gaudi = hdev->asic_specific; 3771 + u32 status, irq_handler_offset; 3881 3772 struct hl_eq *eq; 3882 - u32 status; 3883 3773 struct hl_hw_queue *cpu_pq = 3884 3774 &hdev->kernel_queues[GAUDI_QUEUE_ID_CPU_PQ]; 3885 3775 int err; ··· 3920 3806 WREG32(mmCPU_IF_QUEUE_INIT, 3921 3807 PQ_INIT_STATUS_READY_FOR_CP_SINGLE_MSI); 3922 3808 3923 - WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, GAUDI_EVENT_PI_UPDATE); 3809 + irq_handler_offset = prop->gic_interrupts_enable ? 
3810 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 3811 + le32_to_cpu(dyn_regs->gic_host_pi_upd_irq); 3812 + 3813 + WREG32(irq_handler_offset, 3814 + gaudi_irq_map_table[GAUDI_EVENT_PI_UPDATE].cpu_id); 3924 3815 3925 3816 err = hl_poll_timeout( 3926 3817 hdev, ··· 3942 3823 } 3943 3824 3944 3825 /* update FW application security bits */ 3945 - if (prop->fw_security_status_valid) 3946 - prop->fw_app_security_map = RREG32(mmCPU_BOOT_DEV_STS0); 3826 + if (prop->fw_cpu_boot_dev_sts0_valid) 3827 + prop->fw_app_cpu_boot_dev_sts0 = RREG32(mmCPU_BOOT_DEV_STS0); 3828 + if (prop->fw_cpu_boot_dev_sts1_valid) 3829 + prop->fw_app_cpu_boot_dev_sts1 = RREG32(mmCPU_BOOT_DEV_STS1); 3947 3830 3948 3831 gaudi->hw_cap_initialized |= HW_CAP_CPU_Q; 3949 3832 return 0; ··· 3956 3835 /* Perform read from the device to make sure device is up */ 3957 3836 RREG32(mmHW_STATE); 3958 3837 3959 - if (hdev->asic_prop.fw_security_disabled) { 3838 + if (!hdev->asic_prop.fw_security_enabled) { 3960 3839 /* Set the access through PCI bars (Linux driver only) as 3961 3840 * secured 3962 3841 */ ··· 3981 3860 3982 3861 static int gaudi_hw_init(struct hl_device *hdev) 3983 3862 { 3863 + struct gaudi_device *gaudi = hdev->asic_specific; 3984 3864 int rc; 3985 3865 3986 3866 gaudi_pre_hw_init(hdev); 3987 3867 3988 - gaudi_init_pci_dma_qmans(hdev); 3868 + /* If iATU is done by FW, the HBM bar ALWAYS points to DRAM_PHYS_BASE. 
3869 + * So we set it here and if anyone tries to move it later to 3870 + * a different address, there will be an error 3871 + */ 3872 + if (hdev->asic_prop.iatu_done_by_fw) 3873 + gaudi->hbm_bar_cur_addr = DRAM_PHYS_BASE; 3989 3874 3990 - gaudi_init_hbm_dma_qmans(hdev); 3875 + /* 3876 + * Before pushing u-boot/linux to device, need to set the hbm bar to 3877 + * base address of dram 3878 + */ 3879 + if (gaudi_set_hbm_bar_base(hdev, DRAM_PHYS_BASE) == U64_MAX) { 3880 + dev_err(hdev->dev, 3881 + "failed to map HBM bar to DRAM base address\n"); 3882 + return -EIO; 3883 + } 3991 3884 3992 3885 rc = gaudi_init_cpu(hdev); 3993 3886 if (rc) { ··· 4029 3894 return rc; 4030 3895 4031 3896 gaudi_init_security(hdev); 3897 + 3898 + gaudi_init_pci_dma_qmans(hdev); 3899 + 3900 + gaudi_init_hbm_dma_qmans(hdev); 4032 3901 4033 3902 gaudi_init_mme_qmans(hdev); 4034 3903 ··· 4073 3934 4074 3935 static void gaudi_hw_fini(struct hl_device *hdev, bool hard_reset) 4075 3936 { 3937 + struct cpu_dyn_regs *dyn_regs = 3938 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 3939 + u32 status, reset_timeout_ms, cpu_timeout_ms, irq_handler_offset; 4076 3940 struct gaudi_device *gaudi = hdev->asic_specific; 4077 - u32 status, reset_timeout_ms, cpu_timeout_ms; 3941 + bool driver_performs_reset; 4078 3942 4079 3943 if (!hard_reset) { 4080 3944 dev_err(hdev->dev, "GAUDI doesn't support soft-reset\n"); ··· 4092 3950 cpu_timeout_ms = GAUDI_CPU_RESET_WAIT_MSEC; 4093 3951 } 4094 3952 3953 + driver_performs_reset = !!(!hdev->asic_prop.fw_security_enabled && 3954 + !hdev->asic_prop.hard_reset_done_by_fw); 3955 + 4095 3956 /* Set device to handle FLR by H/W as we will put the device CPU to 4096 3957 * halt mode 4097 3958 */ 4098 - if (hdev->asic_prop.fw_security_disabled && 4099 - !hdev->asic_prop.hard_reset_done_by_fw) 3959 + if (driver_performs_reset) 4100 3960 WREG32(mmPCIE_AUX_FLR_CTRL, (PCIE_AUX_FLR_CTRL_HW_CTRL_MASK | 4101 3961 PCIE_AUX_FLR_CTRL_INT_MASK_MASK)); 4102 3962 4103 - /* I don't 
know what is the state of the CPU so make sure it is 4104 - * stopped in any means necessary 3963 + /* If linux is loaded in the device CPU we need to communicate with it 3964 + * via the GIC. Otherwise, we need to use COMMS or the MSG_TO_CPU 3965 + * registers in case of old F/Ws 4105 3966 */ 4106 - if (hdev->asic_prop.hard_reset_done_by_fw) 4107 - WREG32(mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU, KMD_MSG_RST_DEV); 4108 - else 4109 - WREG32(mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU, KMD_MSG_GOTO_WFE); 3967 + if (hdev->fw_loader.linux_loaded) { 3968 + irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 3969 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 3970 + le32_to_cpu(dyn_regs->gic_host_halt_irq); 4110 3971 4111 - WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, GAUDI_EVENT_HALT_MACHINE); 3972 + WREG32(irq_handler_offset, 3973 + gaudi_irq_map_table[GAUDI_EVENT_HALT_MACHINE].cpu_id); 3974 + } else { 3975 + if (hdev->asic_prop.hard_reset_done_by_fw) 3976 + hl_fw_ask_hard_reset_without_linux(hdev); 3977 + else 3978 + hl_fw_ask_halt_machine_without_linux(hdev); 3979 + } 4112 3980 4113 - if (hdev->asic_prop.fw_security_disabled && 4114 - !hdev->asic_prop.hard_reset_done_by_fw) { 3981 + if (driver_performs_reset) { 4115 3982 4116 3983 /* Configure the reset registers. 
Must be done as early as 4117 3984 * possible in case we fail during H/W initialization ··· 4154 4003 WREG32(mmPREBOOT_PCIE_EN, LKD_HARD_RESET_MAGIC); 4155 4004 4156 4005 /* Restart BTL/BLR upon hard-reset */ 4157 - if (hdev->asic_prop.fw_security_disabled) 4158 - WREG32(mmPSOC_GLOBAL_CONF_BOOT_SEQ_RE_START, 1); 4006 + WREG32(mmPSOC_GLOBAL_CONF_BOOT_SEQ_RE_START, 1); 4159 4007 4160 4008 WREG32(mmPSOC_GLOBAL_CONF_SW_ALL_RST, 4161 4009 1 << PSOC_GLOBAL_CONF_SW_ALL_RST_IND_SHIFT); ··· 4191 4041 HW_CAP_CLK_GATE); 4192 4042 4193 4043 memset(gaudi->events_stat, 0, sizeof(gaudi->events_stat)); 4044 + 4045 + hdev->device_cpu_is_halted = false; 4194 4046 } 4195 4047 } 4196 4048 ··· 4230 4078 4231 4079 static void gaudi_ring_doorbell(struct hl_device *hdev, u32 hw_queue_id, u32 pi) 4232 4080 { 4081 + struct cpu_dyn_regs *dyn_regs = 4082 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 4083 + u32 db_reg_offset, db_value, dma_qm_offset, q_off, irq_handler_offset; 4233 4084 struct gaudi_device *gaudi = hdev->asic_specific; 4234 - u32 db_reg_offset, db_value, dma_qm_offset, q_off; 4235 - int dma_id; 4236 4085 bool invalid_queue = false; 4086 + int dma_id; 4237 4087 4238 4088 switch (hw_queue_id) { 4239 4089 case GAUDI_QUEUE_ID_DMA_0_0...GAUDI_QUEUE_ID_DMA_0_3: ··· 4461 4307 db_reg_offset = mmTPC7_QM_PQ_PI_3; 4462 4308 break; 4463 4309 4464 - case GAUDI_QUEUE_ID_NIC_0_0: 4465 - db_reg_offset = mmNIC0_QM0_PQ_PI_0; 4310 + case GAUDI_QUEUE_ID_NIC_0_0...GAUDI_QUEUE_ID_NIC_0_3: 4311 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC0)) 4312 + invalid_queue = true; 4313 + 4314 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4315 + db_reg_offset = mmNIC0_QM0_PQ_PI_0 + q_off; 4466 4316 break; 4467 4317 4468 - case GAUDI_QUEUE_ID_NIC_0_1: 4469 - db_reg_offset = mmNIC0_QM0_PQ_PI_1; 4318 + case GAUDI_QUEUE_ID_NIC_1_0...GAUDI_QUEUE_ID_NIC_1_3: 4319 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC1)) 4320 + invalid_queue = true; 4321 + 4322 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4323 + 
db_reg_offset = mmNIC0_QM1_PQ_PI_0 + q_off; 4470 4324 break; 4471 4325 4472 - case GAUDI_QUEUE_ID_NIC_0_2: 4473 - db_reg_offset = mmNIC0_QM0_PQ_PI_2; 4326 + case GAUDI_QUEUE_ID_NIC_2_0...GAUDI_QUEUE_ID_NIC_2_3: 4327 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC2)) 4328 + invalid_queue = true; 4329 + 4330 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4331 + db_reg_offset = mmNIC1_QM0_PQ_PI_0 + q_off; 4474 4332 break; 4475 4333 4476 - case GAUDI_QUEUE_ID_NIC_0_3: 4477 - db_reg_offset = mmNIC0_QM0_PQ_PI_3; 4334 + case GAUDI_QUEUE_ID_NIC_3_0...GAUDI_QUEUE_ID_NIC_3_3: 4335 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC3)) 4336 + invalid_queue = true; 4337 + 4338 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4339 + db_reg_offset = mmNIC1_QM1_PQ_PI_0 + q_off; 4478 4340 break; 4479 4341 4480 - case GAUDI_QUEUE_ID_NIC_1_0: 4481 - db_reg_offset = mmNIC0_QM1_PQ_PI_0; 4342 + case GAUDI_QUEUE_ID_NIC_4_0...GAUDI_QUEUE_ID_NIC_4_3: 4343 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC4)) 4344 + invalid_queue = true; 4345 + 4346 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4347 + db_reg_offset = mmNIC2_QM0_PQ_PI_0 + q_off; 4482 4348 break; 4483 4349 4484 - case GAUDI_QUEUE_ID_NIC_1_1: 4485 - db_reg_offset = mmNIC0_QM1_PQ_PI_1; 4350 + case GAUDI_QUEUE_ID_NIC_5_0...GAUDI_QUEUE_ID_NIC_5_3: 4351 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC5)) 4352 + invalid_queue = true; 4353 + 4354 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4355 + db_reg_offset = mmNIC2_QM1_PQ_PI_0 + q_off; 4486 4356 break; 4487 4357 4488 - case GAUDI_QUEUE_ID_NIC_1_2: 4489 - db_reg_offset = mmNIC0_QM1_PQ_PI_2; 4358 + case GAUDI_QUEUE_ID_NIC_6_0...GAUDI_QUEUE_ID_NIC_6_3: 4359 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC6)) 4360 + invalid_queue = true; 4361 + 4362 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4363 + db_reg_offset = mmNIC3_QM0_PQ_PI_0 + q_off; 4490 4364 break; 4491 4365 4492 - case GAUDI_QUEUE_ID_NIC_1_3: 4493 - db_reg_offset = mmNIC0_QM1_PQ_PI_3; 4366 + case GAUDI_QUEUE_ID_NIC_7_0...GAUDI_QUEUE_ID_NIC_7_3: 4367 + if 
(!(gaudi->hw_cap_initialized & HW_CAP_NIC7)) 4368 + invalid_queue = true; 4369 + 4370 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4371 + db_reg_offset = mmNIC3_QM1_PQ_PI_0 + q_off; 4494 4372 break; 4495 4373 4496 - case GAUDI_QUEUE_ID_NIC_2_0: 4497 - db_reg_offset = mmNIC1_QM0_PQ_PI_0; 4374 + case GAUDI_QUEUE_ID_NIC_8_0...GAUDI_QUEUE_ID_NIC_8_3: 4375 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC8)) 4376 + invalid_queue = true; 4377 + 4378 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4379 + db_reg_offset = mmNIC4_QM0_PQ_PI_0 + q_off; 4498 4380 break; 4499 4381 4500 - case GAUDI_QUEUE_ID_NIC_2_1: 4501 - db_reg_offset = mmNIC1_QM0_PQ_PI_1; 4502 - break; 4382 + case GAUDI_QUEUE_ID_NIC_9_0...GAUDI_QUEUE_ID_NIC_9_3: 4383 + if (!(gaudi->hw_cap_initialized & HW_CAP_NIC9)) 4384 + invalid_queue = true; 4503 4385 4504 - case GAUDI_QUEUE_ID_NIC_2_2: 4505 - db_reg_offset = mmNIC1_QM0_PQ_PI_2; 4506 - break; 4507 - 4508 - case GAUDI_QUEUE_ID_NIC_2_3: 4509 - db_reg_offset = mmNIC1_QM0_PQ_PI_3; 4510 - break; 4511 - 4512 - case GAUDI_QUEUE_ID_NIC_3_0: 4513 - db_reg_offset = mmNIC1_QM1_PQ_PI_0; 4514 - break; 4515 - 4516 - case GAUDI_QUEUE_ID_NIC_3_1: 4517 - db_reg_offset = mmNIC1_QM1_PQ_PI_1; 4518 - break; 4519 - 4520 - case GAUDI_QUEUE_ID_NIC_3_2: 4521 - db_reg_offset = mmNIC1_QM1_PQ_PI_2; 4522 - break; 4523 - 4524 - case GAUDI_QUEUE_ID_NIC_3_3: 4525 - db_reg_offset = mmNIC1_QM1_PQ_PI_3; 4526 - break; 4527 - 4528 - case GAUDI_QUEUE_ID_NIC_4_0: 4529 - db_reg_offset = mmNIC2_QM0_PQ_PI_0; 4530 - break; 4531 - 4532 - case GAUDI_QUEUE_ID_NIC_4_1: 4533 - db_reg_offset = mmNIC2_QM0_PQ_PI_1; 4534 - break; 4535 - 4536 - case GAUDI_QUEUE_ID_NIC_4_2: 4537 - db_reg_offset = mmNIC2_QM0_PQ_PI_2; 4538 - break; 4539 - 4540 - case GAUDI_QUEUE_ID_NIC_4_3: 4541 - db_reg_offset = mmNIC2_QM0_PQ_PI_3; 4542 - break; 4543 - 4544 - case GAUDI_QUEUE_ID_NIC_5_0: 4545 - db_reg_offset = mmNIC2_QM1_PQ_PI_0; 4546 - break; 4547 - 4548 - case GAUDI_QUEUE_ID_NIC_5_1: 4549 - db_reg_offset = mmNIC2_QM1_PQ_PI_1; 4550 - 
break; 4551 - 4552 - case GAUDI_QUEUE_ID_NIC_5_2: 4553 - db_reg_offset = mmNIC2_QM1_PQ_PI_2; 4554 - break; 4555 - 4556 - case GAUDI_QUEUE_ID_NIC_5_3: 4557 - db_reg_offset = mmNIC2_QM1_PQ_PI_3; 4558 - break; 4559 - 4560 - case GAUDI_QUEUE_ID_NIC_6_0: 4561 - db_reg_offset = mmNIC3_QM0_PQ_PI_0; 4562 - break; 4563 - 4564 - case GAUDI_QUEUE_ID_NIC_6_1: 4565 - db_reg_offset = mmNIC3_QM0_PQ_PI_1; 4566 - break; 4567 - 4568 - case GAUDI_QUEUE_ID_NIC_6_2: 4569 - db_reg_offset = mmNIC3_QM0_PQ_PI_2; 4570 - break; 4571 - 4572 - case GAUDI_QUEUE_ID_NIC_6_3: 4573 - db_reg_offset = mmNIC3_QM0_PQ_PI_3; 4574 - break; 4575 - 4576 - case GAUDI_QUEUE_ID_NIC_7_0: 4577 - db_reg_offset = mmNIC3_QM1_PQ_PI_0; 4578 - break; 4579 - 4580 - case GAUDI_QUEUE_ID_NIC_7_1: 4581 - db_reg_offset = mmNIC3_QM1_PQ_PI_1; 4582 - break; 4583 - 4584 - case GAUDI_QUEUE_ID_NIC_7_2: 4585 - db_reg_offset = mmNIC3_QM1_PQ_PI_2; 4586 - break; 4587 - 4588 - case GAUDI_QUEUE_ID_NIC_7_3: 4589 - db_reg_offset = mmNIC3_QM1_PQ_PI_3; 4590 - break; 4591 - 4592 - case GAUDI_QUEUE_ID_NIC_8_0: 4593 - db_reg_offset = mmNIC4_QM0_PQ_PI_0; 4594 - break; 4595 - 4596 - case GAUDI_QUEUE_ID_NIC_8_1: 4597 - db_reg_offset = mmNIC4_QM0_PQ_PI_1; 4598 - break; 4599 - 4600 - case GAUDI_QUEUE_ID_NIC_8_2: 4601 - db_reg_offset = mmNIC4_QM0_PQ_PI_2; 4602 - break; 4603 - 4604 - case GAUDI_QUEUE_ID_NIC_8_3: 4605 - db_reg_offset = mmNIC4_QM0_PQ_PI_3; 4606 - break; 4607 - 4608 - case GAUDI_QUEUE_ID_NIC_9_0: 4609 - db_reg_offset = mmNIC4_QM1_PQ_PI_0; 4610 - break; 4611 - 4612 - case GAUDI_QUEUE_ID_NIC_9_1: 4613 - db_reg_offset = mmNIC4_QM1_PQ_PI_1; 4614 - break; 4615 - 4616 - case GAUDI_QUEUE_ID_NIC_9_2: 4617 - db_reg_offset = mmNIC4_QM1_PQ_PI_2; 4618 - break; 4619 - 4620 - case GAUDI_QUEUE_ID_NIC_9_3: 4621 - db_reg_offset = mmNIC4_QM1_PQ_PI_3; 4386 + q_off = ((hw_queue_id - 1) & 0x3) * 4; 4387 + db_reg_offset = mmNIC4_QM1_PQ_PI_0 + q_off; 4622 4388 break; 4623 4389 4624 4390 default: ··· 4560 4486 if (hw_queue_id == GAUDI_QUEUE_ID_CPU_PQ) { 4561 
4487 /* make sure device CPU will read latest data from host */ 4562 4488 mb(); 4563 - WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, 4564 - GAUDI_EVENT_PI_UPDATE); 4489 + 4490 + irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 4491 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 4492 + le32_to_cpu(dyn_regs->gic_host_pi_upd_irq); 4493 + 4494 + WREG32(irq_handler_offset, 4495 + gaudi_irq_map_table[GAUDI_EVENT_PI_UPDATE].cpu_id); 4565 4496 } 4566 4497 } 4567 4498 ··· 5013 4934 return 0; 5014 4935 5015 4936 unpin_memory: 4937 + list_del(&userptr->job_node); 5016 4938 hl_unpin_host_memory(hdev, userptr); 5017 4939 free_userptr: 5018 4940 kfree(userptr); ··· 6593 6513 gaudi_mmu_prepare_reg(hdev, mmMME2_ACC_WBC, asid); 6594 6514 gaudi_mmu_prepare_reg(hdev, mmMME3_ACC_WBC, asid); 6595 6515 6596 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC0) { 6516 + if (gaudi->hw_cap_initialized & HW_CAP_NIC0) { 6597 6517 gaudi_mmu_prepare_reg(hdev, mmNIC0_QM0_GLBL_NON_SECURE_PROPS_0, 6598 6518 asid); 6599 6519 gaudi_mmu_prepare_reg(hdev, mmNIC0_QM0_GLBL_NON_SECURE_PROPS_1, ··· 6606 6526 asid); 6607 6527 } 6608 6528 6609 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC1) { 6529 + if (gaudi->hw_cap_initialized & HW_CAP_NIC1) { 6610 6530 gaudi_mmu_prepare_reg(hdev, mmNIC0_QM1_GLBL_NON_SECURE_PROPS_0, 6611 6531 asid); 6612 6532 gaudi_mmu_prepare_reg(hdev, mmNIC0_QM1_GLBL_NON_SECURE_PROPS_1, ··· 6619 6539 asid); 6620 6540 } 6621 6541 6622 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC2) { 6542 + if (gaudi->hw_cap_initialized & HW_CAP_NIC2) { 6623 6543 gaudi_mmu_prepare_reg(hdev, mmNIC1_QM0_GLBL_NON_SECURE_PROPS_0, 6624 6544 asid); 6625 6545 gaudi_mmu_prepare_reg(hdev, mmNIC1_QM0_GLBL_NON_SECURE_PROPS_1, ··· 6632 6552 asid); 6633 6553 } 6634 6554 6635 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC3) { 6555 + if (gaudi->hw_cap_initialized & HW_CAP_NIC3) { 6636 6556 gaudi_mmu_prepare_reg(hdev, mmNIC1_QM1_GLBL_NON_SECURE_PROPS_0, 6637 6557 asid); 6638 6558 gaudi_mmu_prepare_reg(hdev, 
mmNIC1_QM1_GLBL_NON_SECURE_PROPS_1, ··· 6645 6565 asid); 6646 6566 } 6647 6567 6648 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC4) { 6568 + if (gaudi->hw_cap_initialized & HW_CAP_NIC4) { 6649 6569 gaudi_mmu_prepare_reg(hdev, mmNIC2_QM0_GLBL_NON_SECURE_PROPS_0, 6650 6570 asid); 6651 6571 gaudi_mmu_prepare_reg(hdev, mmNIC2_QM0_GLBL_NON_SECURE_PROPS_1, ··· 6658 6578 asid); 6659 6579 } 6660 6580 6661 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC5) { 6581 + if (gaudi->hw_cap_initialized & HW_CAP_NIC5) { 6662 6582 gaudi_mmu_prepare_reg(hdev, mmNIC2_QM1_GLBL_NON_SECURE_PROPS_0, 6663 6583 asid); 6664 6584 gaudi_mmu_prepare_reg(hdev, mmNIC2_QM1_GLBL_NON_SECURE_PROPS_1, ··· 6671 6591 asid); 6672 6592 } 6673 6593 6674 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC6) { 6594 + if (gaudi->hw_cap_initialized & HW_CAP_NIC6) { 6675 6595 gaudi_mmu_prepare_reg(hdev, mmNIC3_QM0_GLBL_NON_SECURE_PROPS_0, 6676 6596 asid); 6677 6597 gaudi_mmu_prepare_reg(hdev, mmNIC3_QM0_GLBL_NON_SECURE_PROPS_1, ··· 6684 6604 asid); 6685 6605 } 6686 6606 6687 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC7) { 6607 + if (gaudi->hw_cap_initialized & HW_CAP_NIC7) { 6688 6608 gaudi_mmu_prepare_reg(hdev, mmNIC3_QM1_GLBL_NON_SECURE_PROPS_0, 6689 6609 asid); 6690 6610 gaudi_mmu_prepare_reg(hdev, mmNIC3_QM1_GLBL_NON_SECURE_PROPS_1, ··· 6697 6617 asid); 6698 6618 } 6699 6619 6700 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC8) { 6620 + if (gaudi->hw_cap_initialized & HW_CAP_NIC8) { 6701 6621 gaudi_mmu_prepare_reg(hdev, mmNIC4_QM0_GLBL_NON_SECURE_PROPS_0, 6702 6622 asid); 6703 6623 gaudi_mmu_prepare_reg(hdev, mmNIC4_QM0_GLBL_NON_SECURE_PROPS_1, ··· 6710 6630 asid); 6711 6631 } 6712 6632 6713 - if (hdev->nic_ports_mask & GAUDI_NIC_MASK_NIC9) { 6633 + if (gaudi->hw_cap_initialized & HW_CAP_NIC9) { 6714 6634 gaudi_mmu_prepare_reg(hdev, mmNIC4_QM1_GLBL_NON_SECURE_PROPS_0, 6715 6635 asid); 6716 6636 gaudi_mmu_prepare_reg(hdev, mmNIC4_QM1_GLBL_NON_SECURE_PROPS_1, ··· 7124 7044 return rc; 7125 7045 } 7126 7046 
7047 + /* 7048 + * gaudi_queue_idx_dec - decrement queue index (pi/ci) and handle wrap 7049 + * 7050 + * @idx: the current pi/ci value 7051 + * @q_len: the queue length (power of 2) 7052 + * 7053 + * @return the cyclically decremented index 7054 + */ 7055 + static inline u32 gaudi_queue_idx_dec(u32 idx, u32 q_len) 7056 + { 7057 + u32 mask = q_len - 1; 7058 + 7059 + /* 7060 + * modular decrement is equivalent to adding (q_len - 1); 7061 + * later we take the LSBs to make sure the value is in the 7062 + * range [0, q_len - 1] 7063 + */ 7064 + return (idx + q_len - 1) & mask; 7065 + } 7066 + 7067 + /** 7068 + * gaudi_print_sw_config_stream_data - print SW config stream data 7069 + * 7070 + * @hdev: pointer to the habanalabs device structure 7071 + * @stream: the QMAN's stream 7072 + * @qman_base: base address of QMAN registers block 7073 + */ 7074 + static void gaudi_print_sw_config_stream_data(struct hl_device *hdev, u32 stream, 7075 + u64 qman_base) 7076 + { 7077 + u64 cq_ptr_lo, cq_ptr_hi, cq_tsize, cq_ptr; 7078 + u32 cq_ptr_lo_off, size; 7079 + 7080 + cq_ptr_lo_off = mmTPC0_QM_CQ_PTR_LO_1 - mmTPC0_QM_CQ_PTR_LO_0; 7081 + 7082 + cq_ptr_lo = qman_base + (mmTPC0_QM_CQ_PTR_LO_0 - mmTPC0_QM_BASE) + 7083 + stream * cq_ptr_lo_off; 7084 + cq_ptr_hi = cq_ptr_lo + 7085 + (mmTPC0_QM_CQ_PTR_HI_0 - mmTPC0_QM_CQ_PTR_LO_0); 7086 + cq_tsize = cq_ptr_lo + 7087 + (mmTPC0_QM_CQ_TSIZE_0 - mmTPC0_QM_CQ_PTR_LO_0); 7088 + 7089 + cq_ptr = (((u64) RREG32(cq_ptr_hi)) << 32) | RREG32(cq_ptr_lo); 7090 + size = RREG32(cq_tsize); 7091 + dev_info(hdev->dev, "stop on err: stream: %u, addr: %#llx, size: %x\n", 7092 + stream, cq_ptr, size); 7093 + } 7094 + 7095 + /** 7096 + * gaudi_print_last_pqes_on_err - print last PQEs on error 7097 + * 7098 + * @hdev: pointer to the habanalabs device structure 7099 + * @qid_base: first QID of the QMAN (out of 4 streams) 7100 + * @stream: the QMAN's stream 7101 + * @qman_base: base address of QMAN registers block 7102 + * @pr_sw_conf: if true print the SW
config stream data (CQ PTR and SIZE) 7103 + */ 7104 + static void gaudi_print_last_pqes_on_err(struct hl_device *hdev, u32 qid_base, 7105 + u32 stream, u64 qman_base, 7106 + bool pr_sw_conf) 7107 + { 7108 + u32 ci, qm_ci_stream_off, queue_len; 7109 + struct hl_hw_queue *q; 7110 + u64 pq_ci; 7111 + int i; 7112 + 7113 + q = &hdev->kernel_queues[qid_base + stream]; 7114 + 7115 + qm_ci_stream_off = mmTPC0_QM_PQ_CI_1 - mmTPC0_QM_PQ_CI_0; 7116 + pq_ci = qman_base + (mmTPC0_QM_PQ_CI_0 - mmTPC0_QM_BASE) + 7117 + stream * qm_ci_stream_off; 7118 + 7119 + queue_len = (q->queue_type == QUEUE_TYPE_INT) ? 7120 + q->int_queue_len : HL_QUEUE_LENGTH; 7121 + 7122 + hdev->asic_funcs->hw_queues_lock(hdev); 7123 + 7124 + if (pr_sw_conf) 7125 + gaudi_print_sw_config_stream_data(hdev, stream, qman_base); 7126 + 7127 + ci = RREG32(pq_ci); 7128 + 7129 + /* we should start printing from ci - 1 */ 7130 + ci = gaudi_queue_idx_dec(ci, queue_len); 7131 + 7132 + for (i = 0; i < PQ_FETCHER_CACHE_SIZE; i++) { 7133 + struct hl_bd *bd; 7134 + u64 addr; 7135 + u32 len; 7136 + 7137 + bd = q->kernel_address; 7138 + bd += ci; 7139 + 7140 + len = le32_to_cpu(bd->len); 7141 + /* len 0 means uninitialized entry - break */ 7142 + if (!len) 7143 + break; 7144 + 7145 + addr = le64_to_cpu(bd->ptr); 7146 + 7147 + dev_info(hdev->dev, "stop on err PQE(stream %u): ci: %u, addr: %#llx, size: %x\n", 7148 + stream, ci, addr, len); 7149 + 7150 + /* get previous ci, wrap if needed */ 7151 + ci = gaudi_queue_idx_dec(ci, queue_len); 7152 + } 7153 + 7154 + hdev->asic_funcs->hw_queues_unlock(hdev); 7155 + } 7156 + 7157 + /** 7158 + * print_qman_data_on_err - extract QMAN data on error 7159 + * 7160 + * @hdev: pointer to the habanalabs device structure 7161 + * @qid_base: first QID of the QMAN (out of 4 streams) 7162 + * @stream: the QMAN's stream 7163 + * @qman_base: base address of QMAN registers block 7164 + * 7165 + * This function attempts to extract as much data as possible on QMAN error.
7166 + * On upper CP print the SW config stream data and last 8 PQEs. 7167 + * On lower CP print SW config data and last PQEs of ALL 4 upper CPs 7168 + */ 7169 + static void print_qman_data_on_err(struct hl_device *hdev, u32 qid_base, 7170 + u32 stream, u64 qman_base) 7171 + { 7172 + u32 i; 7173 + 7174 + if (stream != QMAN_STREAMS) { 7175 + gaudi_print_last_pqes_on_err(hdev, qid_base, stream, qman_base, 7176 + true); 7177 + return; 7178 + } 7179 + 7180 + gaudi_print_sw_config_stream_data(hdev, stream, qman_base); 7181 + 7182 + for (i = 0; i < QMAN_STREAMS; i++) 7183 + gaudi_print_last_pqes_on_err(hdev, qid_base, i, qman_base, 7184 + false); 7185 + } 7186 + 7127 7187 static void gaudi_handle_qman_err_generic(struct hl_device *hdev, 7128 7188 const char *qm_name, 7129 - u64 glbl_sts_addr, 7130 - u64 arb_err_addr) 7189 + u64 qman_base, 7190 + u32 qid_base) 7131 7191 { 7132 7192 u32 i, j, glbl_sts_val, arb_err_val, glbl_sts_clr_val; 7193 + u64 glbl_sts_addr, arb_err_addr; 7133 7194 char reg_desc[32]; 7195 + 7196 + glbl_sts_addr = qman_base + (mmTPC0_QM_GLBL_STS1_0 - mmTPC0_QM_BASE); 7197 + arb_err_addr = qman_base + (mmTPC0_QM_ARB_ERR_CAUSE - mmTPC0_QM_BASE); 7134 7198 7135 7199 /* Iterate through all stream GLBL_STS1 registers + Lower CP */ 7136 7200 for (i = 0 ; i < QMAN_STREAMS + 1 ; i++) { ··· 7302 7078 /* Write 1 clear errors */ 7303 7079 if (!hdev->stop_on_err) 7304 7080 WREG32(glbl_sts_addr + 4 * i, glbl_sts_clr_val); 7081 + else 7082 + print_qman_data_on_err(hdev, qid_base, i, qman_base); 7305 7083 } 7306 7084 7307 7085 arb_err_val = RREG32(arb_err_addr); ··· 7448 7222 7449 7223 static void gaudi_handle_qman_err(struct hl_device *hdev, u16 event_type) 7450 7224 { 7451 - u64 glbl_sts_addr, arb_err_addr; 7452 - u8 index; 7225 + u64 qman_base; 7453 7226 char desc[32]; 7227 + u32 qid_base; 7228 + u8 index; 7454 7229 7455 7230 switch (event_type) { 7456 7231 case GAUDI_EVENT_TPC0_QM ... 
GAUDI_EVENT_TPC7_QM: 7457 7232 index = event_type - GAUDI_EVENT_TPC0_QM; 7458 - glbl_sts_addr = 7459 - mmTPC0_QM_GLBL_STS1_0 + index * TPC_QMAN_OFFSET; 7460 - arb_err_addr = 7461 - mmTPC0_QM_ARB_ERR_CAUSE + index * TPC_QMAN_OFFSET; 7233 + qid_base = GAUDI_QUEUE_ID_TPC_0_0 + index * QMAN_STREAMS; 7234 + qman_base = mmTPC0_QM_BASE + index * TPC_QMAN_OFFSET; 7462 7235 snprintf(desc, ARRAY_SIZE(desc), "%s%d", "TPC_QM", index); 7463 7236 break; 7464 7237 case GAUDI_EVENT_MME0_QM ... GAUDI_EVENT_MME2_QM: 7465 7238 index = event_type - GAUDI_EVENT_MME0_QM; 7466 - glbl_sts_addr = 7467 - mmMME0_QM_GLBL_STS1_0 + index * MME_QMAN_OFFSET; 7468 - arb_err_addr = 7469 - mmMME0_QM_ARB_ERR_CAUSE + index * MME_QMAN_OFFSET; 7239 + qid_base = GAUDI_QUEUE_ID_MME_0_0 + index * QMAN_STREAMS; 7240 + qman_base = mmMME0_QM_BASE + index * MME_QMAN_OFFSET; 7470 7241 snprintf(desc, ARRAY_SIZE(desc), "%s%d", "MME_QM", index); 7471 7242 break; 7472 7243 case GAUDI_EVENT_DMA0_QM ... GAUDI_EVENT_DMA7_QM: 7473 7244 index = event_type - GAUDI_EVENT_DMA0_QM; 7474 - glbl_sts_addr = 7475 - mmDMA0_QM_GLBL_STS1_0 + index * DMA_QMAN_OFFSET; 7476 - arb_err_addr = 7477 - mmDMA0_QM_ARB_ERR_CAUSE + index * DMA_QMAN_OFFSET; 7245 + qid_base = GAUDI_QUEUE_ID_DMA_0_0 + index * QMAN_STREAMS; 7246 + /* skip GAUDI_QUEUE_ID_CPU_PQ if necessary */ 7247 + if (index > 1) 7248 + qid_base++; 7249 + qman_base = mmDMA0_QM_BASE + index * DMA_QMAN_OFFSET; 7478 7250 snprintf(desc, ARRAY_SIZE(desc), "%s%d", "DMA_QM", index); 7479 7251 break; 7480 7252 case GAUDI_EVENT_NIC0_QM0: 7481 - glbl_sts_addr = mmNIC0_QM0_GLBL_STS1_0; 7482 - arb_err_addr = mmNIC0_QM0_ARB_ERR_CAUSE; 7253 + qid_base = GAUDI_QUEUE_ID_NIC_0_0; 7254 + qman_base = mmNIC0_QM0_BASE; 7483 7255 snprintf(desc, ARRAY_SIZE(desc), "NIC0_QM0"); 7484 7256 break; 7485 7257 case GAUDI_EVENT_NIC0_QM1: 7486 - glbl_sts_addr = mmNIC0_QM1_GLBL_STS1_0; 7487 - arb_err_addr = mmNIC0_QM1_ARB_ERR_CAUSE; 7258 + qid_base = GAUDI_QUEUE_ID_NIC_1_0; 7259 + qman_base = mmNIC0_QM1_BASE; 
7488 7260 snprintf(desc, ARRAY_SIZE(desc), "NIC0_QM1"); 7489 7261 break; 7490 7262 case GAUDI_EVENT_NIC1_QM0: 7491 - glbl_sts_addr = mmNIC1_QM0_GLBL_STS1_0; 7492 - arb_err_addr = mmNIC1_QM0_ARB_ERR_CAUSE; 7263 + qid_base = GAUDI_QUEUE_ID_NIC_2_0; 7264 + qman_base = mmNIC1_QM0_BASE; 7493 7265 snprintf(desc, ARRAY_SIZE(desc), "NIC1_QM0"); 7494 7266 break; 7495 7267 case GAUDI_EVENT_NIC1_QM1: 7496 - glbl_sts_addr = mmNIC1_QM1_GLBL_STS1_0; 7497 - arb_err_addr = mmNIC1_QM1_ARB_ERR_CAUSE; 7268 + qid_base = GAUDI_QUEUE_ID_NIC_3_0; 7269 + qman_base = mmNIC1_QM1_BASE; 7498 7270 snprintf(desc, ARRAY_SIZE(desc), "NIC1_QM1"); 7499 7271 break; 7500 7272 case GAUDI_EVENT_NIC2_QM0: 7501 - glbl_sts_addr = mmNIC2_QM0_GLBL_STS1_0; 7502 - arb_err_addr = mmNIC2_QM0_ARB_ERR_CAUSE; 7273 + qid_base = GAUDI_QUEUE_ID_NIC_4_0; 7274 + qman_base = mmNIC2_QM0_BASE; 7503 7275 snprintf(desc, ARRAY_SIZE(desc), "NIC2_QM0"); 7504 7276 break; 7505 7277 case GAUDI_EVENT_NIC2_QM1: 7506 - glbl_sts_addr = mmNIC2_QM1_GLBL_STS1_0; 7507 - arb_err_addr = mmNIC2_QM1_ARB_ERR_CAUSE; 7278 + qid_base = GAUDI_QUEUE_ID_NIC_5_0; 7279 + qman_base = mmNIC2_QM1_BASE; 7508 7280 snprintf(desc, ARRAY_SIZE(desc), "NIC2_QM1"); 7509 7281 break; 7510 7282 case GAUDI_EVENT_NIC3_QM0: 7511 - glbl_sts_addr = mmNIC3_QM0_GLBL_STS1_0; 7512 - arb_err_addr = mmNIC3_QM0_ARB_ERR_CAUSE; 7283 + qid_base = GAUDI_QUEUE_ID_NIC_6_0; 7284 + qman_base = mmNIC3_QM0_BASE; 7513 7285 snprintf(desc, ARRAY_SIZE(desc), "NIC3_QM0"); 7514 7286 break; 7515 7287 case GAUDI_EVENT_NIC3_QM1: 7516 - glbl_sts_addr = mmNIC3_QM1_GLBL_STS1_0; 7517 - arb_err_addr = mmNIC3_QM1_ARB_ERR_CAUSE; 7288 + qid_base = GAUDI_QUEUE_ID_NIC_7_0; 7289 + qman_base = mmNIC3_QM1_BASE; 7518 7290 snprintf(desc, ARRAY_SIZE(desc), "NIC3_QM1"); 7519 7291 break; 7520 7292 case GAUDI_EVENT_NIC4_QM0: 7521 - glbl_sts_addr = mmNIC4_QM0_GLBL_STS1_0; 7522 - arb_err_addr = mmNIC4_QM0_ARB_ERR_CAUSE; 7293 + qid_base = GAUDI_QUEUE_ID_NIC_8_0; 7294 + qman_base = mmNIC4_QM0_BASE; 7523 7295 
snprintf(desc, ARRAY_SIZE(desc), "NIC4_QM0"); 7524 7296 break; 7525 7297 case GAUDI_EVENT_NIC4_QM1: 7526 - glbl_sts_addr = mmNIC4_QM1_GLBL_STS1_0; 7527 - arb_err_addr = mmNIC4_QM1_ARB_ERR_CAUSE; 7298 + qid_base = GAUDI_QUEUE_ID_NIC_9_0; 7299 + qman_base = mmNIC4_QM1_BASE; 7528 7300 snprintf(desc, ARRAY_SIZE(desc), "NIC4_QM1"); 7529 7301 break; 7530 7302 default: 7531 7303 return; 7532 7304 } 7533 7305 7534 - gaudi_handle_qman_err_generic(hdev, desc, glbl_sts_addr, arb_err_addr); 7306 + gaudi_handle_qman_err_generic(hdev, desc, qman_base, qid_base); 7535 7307 } 7536 7308 7537 7309 static void gaudi_print_irq_info(struct hl_device *hdev, u16 event_type, ··· 7556 7332 sync_err->pi, sync_err->ci, q->pi, atomic_read(&q->ci)); 7557 7333 } 7558 7334 7335 + static void gaudi_print_fw_alive_info(struct hl_device *hdev, 7336 + struct hl_eq_fw_alive *fw_alive) 7337 + { 7338 + dev_err(hdev->dev, 7339 + "FW alive report: severity=%s, process_id=%u, thread_id=%u, uptime=%llu seconds\n", 7340 + (fw_alive->severity == FW_ALIVE_SEVERITY_MINOR) ? 
7341 + "Minor" : "Critical", fw_alive->process_id, 7342 + fw_alive->thread_id, fw_alive->uptime_seconds); 7343 + } 7344 + 7559 7345 static int gaudi_soft_reset_late_init(struct hl_device *hdev) 7560 7346 { 7561 7347 struct gaudi_device *gaudi = hdev->asic_specific; ··· 7580 7346 struct hl_eq_hbm_ecc_data *hbm_ecc_data) 7581 7347 { 7582 7348 u32 base, val, val2, wr_par, rd_par, ca_par, derr, serr, type, ch; 7583 - int err = 0; 7349 + int rc = 0; 7584 7350 7585 - if (hdev->asic_prop.fw_security_status_valid && 7586 - (hdev->asic_prop.fw_app_security_map & 7587 - CPU_BOOT_DEV_STS0_HBM_ECC_EN)) { 7351 + if (hdev->asic_prop.fw_app_cpu_boot_dev_sts0 & 7352 + CPU_BOOT_DEV_STS0_HBM_ECC_EN) { 7588 7353 if (!hbm_ecc_data) { 7589 7354 dev_err(hdev->dev, "No FW ECC data"); 7590 7355 return 0; ··· 7612 7379 device, ch, hbm_ecc_data->first_addr, type, 7613 7380 hbm_ecc_data->sec_cont_cnt, hbm_ecc_data->sec_cnt, 7614 7381 hbm_ecc_data->dec_cnt); 7615 - 7616 - err = 1; 7617 - 7618 7382 return 0; 7619 7383 } 7620 7384 7621 - if (!hdev->asic_prop.fw_security_disabled) { 7385 + if (hdev->asic_prop.fw_security_enabled) { 7622 7386 dev_info(hdev->dev, "Cannot access MC regs for ECC data while security is enabled\n"); 7623 7387 return 0; 7624 7388 } ··· 7625 7395 val = RREG32_MASK(base + ch * 0x1000 + 0x06C, 0x0000FFFF); 7626 7396 val = (val & 0xFF) | ((val >> 8) & 0xFF); 7627 7397 if (val) { 7628 - err = 1; 7398 + rc = -EIO; 7629 7399 dev_err(hdev->dev, 7630 7400 "HBM%d pc%d interrupts info: WR_PAR=%d, RD_PAR=%d, CA_PAR=%d, SERR=%d, DERR=%d\n", 7631 7401 device, ch * 2, val & 0x1, (val >> 1) & 0x1, ··· 7645 7415 val = RREG32_MASK(base + ch * 0x1000 + 0x07C, 0x0000FFFF); 7646 7416 val = (val & 0xFF) | ((val >> 8) & 0xFF); 7647 7417 if (val) { 7648 - err = 1; 7418 + rc = -EIO; 7649 7419 dev_err(hdev->dev, 7650 7420 "HBM%d pc%d interrupts info: WR_PAR=%d, RD_PAR=%d, CA_PAR=%d, SERR=%d, DERR=%d\n", 7651 7421 device, ch * 2 + 1, val & 0x1, (val >> 1) & 0x1, ··· 7674 7444 val = RREG32(base + 
0x8F30); 7675 7445 val2 = RREG32(base + 0x8F34); 7676 7446 if (val | val2) { 7677 - err = 1; 7447 + rc = -EIO; 7678 7448 dev_err(hdev->dev, 7679 7449 "HBM %d MC SRAM SERR info: Reg 0x8F30=0x%x, Reg 0x8F34=0x%x\n", 7680 7450 device, val, val2); ··· 7682 7452 val = RREG32(base + 0x8F40); 7683 7453 val2 = RREG32(base + 0x8F44); 7684 7454 if (val | val2) { 7685 - err = 1; 7455 + rc = -EIO; 7686 7456 dev_err(hdev->dev, 7687 7457 "HBM %d MC SRAM DERR info: Reg 0x8F40=0x%x, Reg 0x8F44=0x%x\n", 7688 7458 device, val, val2); 7689 7459 } 7690 7460 7691 - return err; 7461 + return rc; 7692 7462 } 7693 7463 7694 7464 static int gaudi_hbm_event_to_dev(u16 hbm_event_type) ··· 7834 7604 case GAUDI_EVENT_DMA_IF0_DERR ... GAUDI_EVENT_DMA_IF3_DERR: 7835 7605 case GAUDI_EVENT_HBM_0_DERR ... GAUDI_EVENT_HBM_3_DERR: 7836 7606 case GAUDI_EVENT_MMU_DERR: 7607 + case GAUDI_EVENT_NIC0_CS_DBG_DERR ... GAUDI_EVENT_NIC4_CS_DBG_DERR: 7837 7608 gaudi_print_irq_info(hdev, event_type, true); 7838 7609 gaudi_handle_ecc_event(hdev, event_type, &eq_entry->ecc_data); 7839 7610 goto reset_device; ··· 8017 7786 gaudi_print_out_of_sync_info(hdev, &eq_entry->pkt_sync_err); 8018 7787 goto reset_device; 8019 7788 7789 + case GAUDI_EVENT_FW_ALIVE_S: 7790 + gaudi_print_irq_info(hdev, event_type, false); 7791 + gaudi_print_fw_alive_info(hdev, &eq_entry->fw_alive); 7792 + goto reset_device; 7793 + 8020 7794 default: 8021 7795 dev_err(hdev->dev, "Received invalid H/W interrupt %d\n", 8022 7796 event_type); ··· 8092 7856 } 8093 7857 8094 7858 static int gaudi_mmu_invalidate_cache_range(struct hl_device *hdev, 8095 - bool is_hard, u32 asid, u64 va, u64 size) 7859 + bool is_hard, u32 flags, 7860 + u32 asid, u64 va, u64 size) 8096 7861 { 8097 - struct gaudi_device *gaudi = hdev->asic_specific; 8098 - u32 status, timeout_usec; 8099 - u32 inv_data; 8100 - u32 pi; 8101 - int rc; 8102 - 8103 - if (!(gaudi->hw_cap_initialized & HW_CAP_MMU) || 8104 - hdev->hard_reset_pending) 8105 - return 0; 8106 - 8107 - if 
(hdev->pldm) 8108 - timeout_usec = GAUDI_PLDM_MMU_TIMEOUT_USEC; 8109 - else 8110 - timeout_usec = MMU_CONFIG_TIMEOUT_USEC; 8111 - 8112 - /* 8113 - * TODO: currently invalidate entire L0 & L1 as in regular hard 8114 - * invalidation. Need to apply invalidation of specific cache 8115 - * lines with mask of ASID & VA & size. 8116 - * Note that L1 with be flushed entirely in any case. 7862 + /* Treat as invalidate all because there is no range invalidation 7863 + * in Gaudi 8117 7864 */ 8118 - 8119 - /* L0 & L1 invalidation */ 8120 - inv_data = RREG32(mmSTLB_CACHE_INV); 8121 - /* PI is 8 bit */ 8122 - pi = ((inv_data & STLB_CACHE_INV_PRODUCER_INDEX_MASK) + 1) & 0xFF; 8123 - WREG32(mmSTLB_CACHE_INV, 8124 - (inv_data & STLB_CACHE_INV_INDEX_MASK_MASK) | pi); 8125 - 8126 - rc = hl_poll_timeout( 8127 - hdev, 8128 - mmSTLB_INV_CONSUMER_INDEX, 8129 - status, 8130 - status == pi, 8131 - 1000, 8132 - timeout_usec); 8133 - 8134 - if (rc) { 8135 - dev_err_ratelimited(hdev->dev, 8136 - "MMU cache invalidation timeout\n"); 8137 - hl_device_reset(hdev, HL_RESET_HARD); 8138 - } 8139 - 8140 - return rc; 7865 + return hdev->asic_funcs->mmu_invalidate_cache(hdev, is_hard, flags); 8141 7866 } 8142 7867 8143 7868 static int gaudi_mmu_update_asid_hop0_addr(struct hl_device *hdev, ··· 8153 7956 if (!(gaudi->hw_cap_initialized & HW_CAP_CPU_Q)) 8154 7957 return 0; 8155 7958 8156 - rc = hl_fw_cpucp_handshake(hdev, mmCPU_BOOT_DEV_STS0, mmCPU_BOOT_ERR0); 7959 + rc = hl_fw_cpucp_handshake(hdev, mmCPU_BOOT_DEV_STS0, 7960 + mmCPU_BOOT_DEV_STS1, mmCPU_BOOT_ERR0, 7961 + mmCPU_BOOT_ERR1); 8157 7962 if (rc) 8158 7963 return rc; 8159 7964 ··· 8276 8077 for (i = 0 ; i < (NIC_NUMBER_OF_ENGINES / 2) ; i++) { 8277 8078 offset = i * NIC_MACRO_QMAN_OFFSET; 8278 8079 port = 2 * i; 8279 - if (hdev->nic_ports_mask & BIT(port)) { 8080 + if (gaudi->hw_cap_initialized & BIT(HW_CAP_NIC_SHIFT + port)) { 8280 8081 qm_glbl_sts0 = RREG32(mmNIC0_QM0_GLBL_STS0 + offset); 8281 8082 qm_cgm_sts = RREG32(mmNIC0_QM0_CGM_STS + 
offset); 8282 8083 is_eng_idle = IS_QM_IDLE(qm_glbl_sts0, qm_cgm_sts); ··· 8291 8092 } 8292 8093 8293 8094 port = 2 * i + 1; 8294 - if (hdev->nic_ports_mask & BIT(port)) { 8095 + if (gaudi->hw_cap_initialized & BIT(HW_CAP_NIC_SHIFT + port)) { 8295 8096 qm_glbl_sts0 = RREG32(mmNIC0_QM1_GLBL_STS0 + offset); 8296 8097 qm_cgm_sts = RREG32(mmNIC0_QM1_CGM_STS + offset); 8297 8098 is_eng_idle = IS_QM_IDLE(qm_glbl_sts0, qm_cgm_sts); ··· 8505 8306 HL_VA_RANGE_TYPE_HOST, HOST_SPACE_INTERNAL_CB_SZ, 8506 8307 HL_MMU_VA_ALIGNMENT_NOT_NEEDED); 8507 8308 8508 - if (!hdev->internal_cb_va_base) 8309 + if (!hdev->internal_cb_va_base) { 8310 + rc = -ENOMEM; 8509 8311 goto destroy_internal_cb_pool; 8312 + } 8510 8313 8511 8314 mutex_lock(&ctx->mmu_lock); 8512 8315 rc = hl_mmu_map_contiguous(ctx, hdev->internal_cb_va_base, ··· 8950 8749 8951 8750 static void gaudi_enable_events_from_fw(struct hl_device *hdev) 8952 8751 { 8953 - WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, GAUDI_EVENT_INTS_REGISTER); 8752 + struct cpu_dyn_regs *dyn_regs = 8753 + &hdev->fw_loader.dynamic_loader.comm_desc.cpu_dyn_regs; 8754 + u32 irq_handler_offset = hdev->asic_prop.gic_interrupts_enable ? 
8755 + mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR : 8756 + le32_to_cpu(dyn_regs->gic_host_ints_irq); 8757 + 8758 + WREG32(irq_handler_offset, 8759 + gaudi_irq_map_table[GAUDI_EVENT_INTS_REGISTER].cpu_id); 8954 8760 } 8955 8761 8956 8762 static int gaudi_map_pll_idx_to_fw_idx(u32 pll_idx) ··· 9042 8834 .ctx_fini = gaudi_ctx_fini, 9043 8835 .get_clk_rate = gaudi_get_clk_rate, 9044 8836 .get_queue_id_for_cq = gaudi_get_queue_id_for_cq, 9045 - .read_device_fw_version = gaudi_read_device_fw_version, 9046 8837 .load_firmware_to_device = gaudi_load_firmware_to_device, 9047 8838 .load_boot_fit_to_device = gaudi_load_boot_fit_to_device, 9048 8839 .get_signal_cb_size = gaudi_get_signal_cb_size, ··· 9060 8853 .get_hw_block_id = gaudi_get_hw_block_id, 9061 8854 .hw_block_mmap = gaudi_block_mmap, 9062 8855 .enable_events_from_fw = gaudi_enable_events_from_fw, 9063 - .map_pll_idx_to_fw_idx = gaudi_map_pll_idx_to_fw_idx 8856 + .map_pll_idx_to_fw_idx = gaudi_map_pll_idx_to_fw_idx, 8857 + .init_firmware_loader = gaudi_init_firmware_loader, 8858 + .init_cpu_scrambler_dram = gaudi_init_scrambler_hbm 9064 8859 }; 9065 8860 9066 8861 /**
drivers/misc/habanalabs/gaudi/gaudiP.h +1
··· 82 82 QMAN_STREAMS) 83 83 84 84 #define QMAN_STREAMS 4 85 + #define PQ_FETCHER_CACHE_SIZE 8 85 86 86 87 #define DMA_QMAN_OFFSET (mmDMA1_QM_BASE - mmDMA0_QM_BASE) 87 88 #define TPC_QMAN_OFFSET (mmTPC1_QM_BASE - mmTPC0_QM_BASE)
drivers/misc/habanalabs/gaudi/gaudi_coresight.c +3 -3
··· 424 424 if (frequency == 0) 425 425 frequency = input->frequency; 426 426 WREG32(base_reg + 0xE8C, frequency); 427 - WREG32(base_reg + 0xE90, 0x7FF); 427 + WREG32(base_reg + 0xE90, 0x1F00); 428 428 429 429 /* SW-2176 - SW WA for HW bug */ 430 430 if ((CFG_BASE + base_reg) >= mmDMA_CH_0_CS_STM_BASE && ··· 434 434 WREG32(base_reg + 0xE6C, 0x0); 435 435 } 436 436 437 - WREG32(base_reg + 0xE80, 0x27 | (input->id << 16)); 437 + WREG32(base_reg + 0xE80, 0x23 | (input->id << 16)); 438 438 } else { 439 439 WREG32(base_reg + 0xE80, 4); 440 440 WREG32(base_reg + 0xD64, 0); ··· 634 634 WREG32(mmPSOC_ETR_BUFWM, 0x3FFC); 635 635 WREG32(mmPSOC_ETR_RSZ, input->buffer_size); 636 636 WREG32(mmPSOC_ETR_MODE, input->sink_mode); 637 - if (hdev->asic_prop.fw_security_disabled) { 637 + if (!hdev->asic_prop.fw_security_enabled) { 638 638 /* make ETR not privileged */ 639 639 val = FIELD_PREP( 640 640 PSOC_ETR_AXICTL_PROTCTRLBIT0_MASK, 0);
drivers/misc/habanalabs/gaudi/gaudi_security.c +8 -7
··· 1448 1448 u32 pb_addr, mask; 1449 1449 u8 word_offset; 1450 1450 1451 - if (hdev->asic_prop.fw_security_disabled) { 1451 + if (!hdev->asic_prop.fw_security_enabled) { 1452 1452 gaudi_pb_set_block(hdev, mmDMA_IF_E_S_BASE); 1453 1453 gaudi_pb_set_block(hdev, mmDMA_IF_E_S_DOWN_CH0_BASE); 1454 1454 gaudi_pb_set_block(hdev, mmDMA_IF_E_S_DOWN_CH1_BASE); ··· 9135 9135 u32 pb_addr, mask; 9136 9136 u8 word_offset; 9137 9137 9138 - if (hdev->asic_prop.fw_security_disabled) { 9138 + if (!hdev->asic_prop.fw_security_enabled) { 9139 9139 gaudi_pb_set_block(hdev, mmTPC0_E2E_CRED_BASE); 9140 9140 gaudi_pb_set_block(hdev, mmTPC1_E2E_CRED_BASE); 9141 9141 gaudi_pb_set_block(hdev, mmTPC2_E2E_CRED_BASE); ··· 12818 12818 * secured 12819 12819 */ 12820 12820 12821 - if (hdev->asic_prop.fw_security_disabled) { 12821 + if (!hdev->asic_prop.fw_security_enabled) { 12822 12822 gaudi_pb_set_block(hdev, mmIF_E_PLL_BASE); 12823 12823 gaudi_pb_set_block(hdev, mmMESH_W_PLL_BASE); 12824 12824 gaudi_pb_set_block(hdev, mmSRAM_W_PLL_BASE); ··· 13023 13023 * property configuration of MME SBAB and ACC to be non-privileged and 13024 13024 * non-secured 13025 13025 */ 13026 - if (hdev->asic_prop.fw_security_disabled) { 13026 + if (!hdev->asic_prop.fw_security_enabled) { 13027 13027 WREG32(mmMME0_SBAB_PROT, 0x2); 13028 13028 WREG32(mmMME0_ACC_PROT, 0x2); 13029 13029 WREG32(mmMME1_SBAB_PROT, 0x2); ··· 13032 13032 WREG32(mmMME2_ACC_PROT, 0x2); 13033 13033 WREG32(mmMME3_SBAB_PROT, 0x2); 13034 13034 WREG32(mmMME3_ACC_PROT, 0x2); 13035 - } 13036 13035 13037 - /* On RAZWI, 0 will be returned from RR and 0xBABA0BAD from PB */ 13038 - if (hdev->asic_prop.fw_security_disabled) 13036 + /* 13037 + * On RAZWI, 0 will be returned from RR and 0xBABA0BAD from PB 13038 + */ 13039 13039 WREG32(0xC01B28, 0x1); 13040 + } 13040 13041 13041 13042 gaudi_init_range_registers_lbw(hdev); 13042 13043
+136 -111
drivers/misc/habanalabs/goya/goya.c
··· 87 87 #define GOYA_PLDM_QMAN0_TIMEOUT_USEC (HL_DEVICE_TIMEOUT_USEC * 30) 88 88 #define GOYA_BOOT_FIT_REQ_TIMEOUT_USEC 1000000 /* 1s */ 89 89 #define GOYA_MSG_TO_CPU_TIMEOUT_USEC 4000000 /* 4s */ 90 + #define GOYA_WAIT_FOR_BL_TIMEOUT_USEC 15000000 /* 15s */ 90 91 91 92 #define GOYA_QMAN0_FENCE_VAL 0xD169B243 92 93 ··· 355 354 static int goya_mmu_add_mappings_for_device_cpu(struct hl_device *hdev); 356 355 static void goya_mmu_prepare(struct hl_device *hdev, u32 asid); 357 356 358 - int goya_get_fixed_properties(struct hl_device *hdev) 357 + int goya_set_fixed_properties(struct hl_device *hdev) 359 358 { 360 359 struct asic_fixed_properties *prop = &hdev->asic_prop; 361 360 int i; ··· 461 460 for (i = 0 ; i < HL_MAX_DCORES ; i++) 462 461 prop->first_available_cq[i] = USHRT_MAX; 463 462 464 - prop->fw_security_status_valid = false; 463 + prop->fw_cpu_boot_dev_sts0_valid = false; 464 + prop->fw_cpu_boot_dev_sts1_valid = false; 465 465 prop->hard_reset_done_by_fw = false; 466 + prop->gic_interrupts_enable = true; 466 467 467 468 return 0; 468 469 } ··· 534 531 struct hl_outbound_pci_region outbound_region; 535 532 int rc; 536 533 537 - if (hdev->asic_prop.iatu_done_by_fw) { 538 - hdev->asic_funcs->set_dma_mask_from_fw(hdev); 534 + if (hdev->asic_prop.iatu_done_by_fw) 539 535 return 0; 540 - } 541 536 542 537 /* Inbound Region 0 - Bar 0 - Point to SRAM and CFG */ 543 538 inbound_region.mode = PCI_BAR_MATCH_MODE; ··· 587 586 u32 fw_boot_status, val; 588 587 int rc; 589 588 590 - rc = goya_get_fixed_properties(hdev); 589 + rc = goya_set_fixed_properties(hdev); 591 590 if (rc) { 592 591 dev_err(hdev->dev, "Failed to get fixed properties\n"); 593 592 return rc; ··· 619 618 prop->dram_pci_bar_size = pci_resource_len(pdev, DDR_BAR_ID); 620 619 621 620 /* If FW security is enabled at this point it means no access to ELBI */ 622 - if (!hdev->asic_prop.fw_security_disabled) { 621 + if (hdev->asic_prop.fw_security_enabled) { 623 622 hdev->asic_prop.iatu_done_by_fw = true; 624 
623 goto pci_init; 625 624 } ··· 643 642 * version to determine whether we run with a security-enabled firmware 644 643 */ 645 644 rc = hl_fw_read_preboot_status(hdev, mmPSOC_GLOBAL_CONF_CPU_BOOT_STATUS, 646 - mmCPU_BOOT_DEV_STS0, mmCPU_BOOT_ERR0, 647 - GOYA_BOOT_FIT_REQ_TIMEOUT_USEC); 645 + mmCPU_BOOT_DEV_STS0, 646 + mmCPU_BOOT_DEV_STS1, mmCPU_BOOT_ERR0, 647 + mmCPU_BOOT_ERR1, 648 + GOYA_BOOT_FIT_REQ_TIMEOUT_USEC); 648 649 if (rc) { 649 650 if (hdev->reset_on_preboot_fail) 650 651 hdev->asic_funcs->hw_fini(hdev, true); ··· 726 723 u16 pll_freq_arr[HL_PLL_NUM_OUTPUTS], freq; 727 724 int rc; 728 725 729 - if (hdev->asic_prop.fw_security_disabled) { 726 + if (hdev->asic_prop.fw_security_enabled) { 727 + rc = hl_fw_cpucp_pll_info_get(hdev, HL_GOYA_PCI_PLL, 728 + pll_freq_arr); 729 + 730 + if (rc) 731 + return; 732 + 733 + freq = pll_freq_arr[1]; 734 + } else { 730 735 div_fctr = RREG32(mmPSOC_PCI_PLL_DIV_FACTOR_1); 731 736 div_sel = RREG32(mmPSOC_PCI_PLL_DIV_SEL_1); 732 737 nr = RREG32(mmPSOC_PCI_PLL_NR); ··· 761 750 div_sel); 762 751 freq = 0; 763 752 } 764 - } else { 765 - rc = hl_fw_cpucp_pll_info_get(hdev, HL_GOYA_PCI_PLL, 766 - pll_freq_arr); 767 - 768 - if (rc) 769 - return; 770 - 771 - freq = pll_freq_arr[1]; 772 753 } 773 754 774 755 prop->psoc_timestamp_frequency = freq; ··· 852 849 hdev->hl_chip_info->info = NULL; 853 850 } 854 851 852 + static void goya_set_pci_memory_regions(struct hl_device *hdev) 853 + { 854 + struct asic_fixed_properties *prop = &hdev->asic_prop; 855 + struct pci_mem_region *region; 856 + 857 + /* CFG */ 858 + region = &hdev->pci_mem_region[PCI_REGION_CFG]; 859 + region->region_base = CFG_BASE; 860 + region->region_size = CFG_SIZE; 861 + region->offset_in_bar = CFG_BASE - SRAM_BASE_ADDR; 862 + region->bar_size = CFG_BAR_SIZE; 863 + region->bar_id = SRAM_CFG_BAR_ID; 864 + region->used = 1; 865 + 866 + /* SRAM */ 867 + region = &hdev->pci_mem_region[PCI_REGION_SRAM]; 868 + region->region_base = SRAM_BASE_ADDR; 869 + region->region_size = 
SRAM_SIZE; 870 + region->offset_in_bar = 0; 871 + region->bar_size = CFG_BAR_SIZE; 872 + region->bar_id = SRAM_CFG_BAR_ID; 873 + region->used = 1; 874 + 875 + /* DRAM */ 876 + region = &hdev->pci_mem_region[PCI_REGION_DRAM]; 877 + region->region_base = DRAM_PHYS_BASE; 878 + region->region_size = hdev->asic_prop.dram_size; 879 + region->offset_in_bar = 0; 880 + region->bar_size = prop->dram_pci_bar_size; 881 + region->bar_id = DDR_BAR_ID; 882 + region->used = 1; 883 + } 884 + 855 885 /* 856 886 * goya_sw_init - Goya software initialization code 857 887 * ··· 954 918 spin_lock_init(&goya->hw_queues_lock); 955 919 hdev->supports_coresight = true; 956 920 hdev->supports_soft_reset = true; 921 + hdev->allow_external_soft_reset = true; 922 + 923 + goya_set_pci_memory_regions(hdev); 957 924 958 925 return 0; 959 926 ··· 1302 1263 } 1303 1264 1304 1265 /* update FW application security bits */ 1305 - if (prop->fw_security_status_valid) 1306 - prop->fw_app_security_map = RREG32(mmCPU_BOOT_DEV_STS0); 1266 + if (prop->fw_cpu_boot_dev_sts0_valid) 1267 + prop->fw_app_cpu_boot_dev_sts0 = RREG32(mmCPU_BOOT_DEV_STS0); 1268 + 1269 + if (prop->fw_cpu_boot_dev_sts1_valid) 1270 + prop->fw_app_cpu_boot_dev_sts1 = RREG32(mmCPU_BOOT_DEV_STS1); 1307 1271 1308 1272 goya->hw_cap_initialized |= HW_CAP_CPU_Q; 1309 1273 return 0; ··· 2444 2402 return hl_fw_load_fw_to_device(hdev, GOYA_BOOT_FIT_FILE, dst, 0, 0); 2445 2403 } 2446 2404 2447 - /* 2448 - * FW component passes an offset from SRAM_BASE_ADDR in SCRATCHPAD_xx. 2449 - * The version string should be located by that offset. 
2450 - */ 2451 - static int goya_read_device_fw_version(struct hl_device *hdev, 2452 - enum hl_fw_component fwc) 2405 + static void goya_init_dynamic_firmware_loader(struct hl_device *hdev) 2453 2406 { 2454 - const char *name; 2455 - u32 ver_off; 2456 - char *dest; 2407 + struct dynamic_fw_load_mgr *dynamic_loader; 2408 + struct cpu_dyn_regs *dyn_regs; 2457 2409 2458 - switch (fwc) { 2459 - case FW_COMP_UBOOT: 2460 - ver_off = RREG32(mmUBOOT_VER_OFFSET); 2461 - dest = hdev->asic_prop.uboot_ver; 2462 - name = "U-Boot"; 2463 - break; 2464 - case FW_COMP_PREBOOT: 2465 - ver_off = RREG32(mmPREBOOT_VER_OFFSET); 2466 - dest = hdev->asic_prop.preboot_ver; 2467 - name = "Preboot"; 2468 - break; 2469 - default: 2470 - dev_warn(hdev->dev, "Undefined FW component: %d\n", fwc); 2471 - return -EIO; 2472 - } 2410 + dynamic_loader = &hdev->fw_loader.dynamic_loader; 2473 2411 2474 - ver_off &= ~((u32)SRAM_BASE_ADDR); 2412 + /* 2413 + * here we update initial values for few specific dynamic regs (as 2414 + * before reading the first descriptor from FW those value has to be 2415 + * hard-coded) in later stages of the protocol those values will be 2416 + * updated automatically by reading the FW descriptor so data there 2417 + * will always be up-to-date 2418 + */ 2419 + dyn_regs = &dynamic_loader->comm_desc.cpu_dyn_regs; 2420 + dyn_regs->kmd_msg_to_cpu = 2421 + cpu_to_le32(mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU); 2422 + dyn_regs->cpu_cmd_status_to_host = 2423 + cpu_to_le32(mmCPU_CMD_STATUS_TO_HOST); 2475 2424 2476 - if (ver_off < SRAM_SIZE - VERSION_MAX_LEN) { 2477 - memcpy_fromio(dest, hdev->pcie_bar[SRAM_CFG_BAR_ID] + ver_off, 2478 - VERSION_MAX_LEN); 2479 - } else { 2480 - dev_err(hdev->dev, "%s version offset (0x%x) is above SRAM\n", 2481 - name, ver_off); 2482 - strcpy(dest, "unavailable"); 2425 + dynamic_loader->wait_for_bl_timeout = GOYA_WAIT_FOR_BL_TIMEOUT_USEC; 2426 + } 2483 2427 2484 - return -EIO; 2485 - } 2428 + static void goya_init_static_firmware_loader(struct hl_device 
*hdev) 2429 + { 2430 + struct static_fw_load_mgr *static_loader; 2486 2431 2487 - return 0; 2432 + static_loader = &hdev->fw_loader.static_loader; 2433 + 2434 + static_loader->preboot_version_max_off = SRAM_SIZE - VERSION_MAX_LEN; 2435 + static_loader->boot_fit_version_max_off = SRAM_SIZE - VERSION_MAX_LEN; 2436 + static_loader->kmd_msg_to_cpu_reg = mmPSOC_GLOBAL_CONF_KMD_MSG_TO_CPU; 2437 + static_loader->cpu_cmd_status_to_host_reg = mmCPU_CMD_STATUS_TO_HOST; 2438 + static_loader->cpu_boot_status_reg = mmPSOC_GLOBAL_CONF_CPU_BOOT_STATUS; 2439 + static_loader->cpu_boot_dev_status0_reg = mmCPU_BOOT_DEV_STS0; 2440 + static_loader->cpu_boot_dev_status1_reg = mmCPU_BOOT_DEV_STS1; 2441 + static_loader->boot_err0_reg = mmCPU_BOOT_ERR0; 2442 + static_loader->boot_err1_reg = mmCPU_BOOT_ERR1; 2443 + static_loader->preboot_version_offset_reg = mmPREBOOT_VER_OFFSET; 2444 + static_loader->boot_fit_version_offset_reg = mmUBOOT_VER_OFFSET; 2445 + static_loader->sram_offset_mask = ~(lower_32_bits(SRAM_BASE_ADDR)); 2446 + } 2447 + 2448 + static void goya_init_firmware_loader(struct hl_device *hdev) 2449 + { 2450 + struct asic_fixed_properties *prop = &hdev->asic_prop; 2451 + struct fw_load_mgr *fw_loader = &hdev->fw_loader; 2452 + 2453 + /* fill common fields */ 2454 + fw_loader->boot_fit_img.image_name = GOYA_BOOT_FIT_FILE; 2455 + fw_loader->linux_img.image_name = GOYA_LINUX_FW_FILE; 2456 + fw_loader->cpu_timeout = GOYA_CPU_TIMEOUT_USEC; 2457 + fw_loader->boot_fit_timeout = GOYA_BOOT_FIT_REQ_TIMEOUT_USEC; 2458 + fw_loader->skip_bmc = false; 2459 + fw_loader->sram_bar_id = SRAM_CFG_BAR_ID; 2460 + fw_loader->dram_bar_id = DDR_BAR_ID; 2461 + 2462 + if (prop->dynamic_fw_load) 2463 + goya_init_dynamic_firmware_loader(hdev); 2464 + else 2465 + goya_init_static_firmware_loader(hdev); 2488 2466 } 2489 2467 2490 2468 static int goya_init_cpu(struct hl_device *hdev) ··· 2528 2466 return -EIO; 2529 2467 } 2530 2468 2531 - rc = hl_fw_init_cpu(hdev, mmPSOC_GLOBAL_CONF_CPU_BOOT_STATUS, 2532 - 
mmPSOC_GLOBAL_CONF_UBOOT_MAGIC, 2533 - mmCPU_CMD_STATUS_TO_HOST, 2534 - mmCPU_BOOT_DEV_STS0, mmCPU_BOOT_ERR0, 2535 - false, GOYA_CPU_TIMEOUT_USEC, 2536 - GOYA_BOOT_FIT_REQ_TIMEOUT_USEC); 2469 + rc = hl_fw_init_cpu(hdev); 2537 2470 2538 2471 if (rc) 2539 2472 return rc; ··· 2938 2881 2939 2882 *dma_handle = hdev->asic_prop.sram_base_address; 2940 2883 2941 - base = (void *) hdev->pcie_bar[SRAM_CFG_BAR_ID]; 2884 + base = (__force void *) hdev->pcie_bar[SRAM_CFG_BAR_ID]; 2942 2885 2943 2886 switch (queue_id) { 2944 2887 case GOYA_QUEUE_ID_MME: ··· 3327 3270 return 0; 3328 3271 3329 3272 unpin_memory: 3273 + list_del(&userptr->job_node); 3330 3274 hl_unpin_host_memory(hdev, userptr); 3331 3275 free_userptr: 3332 3276 kfree(userptr); ··· 5227 5169 } 5228 5170 5229 5171 static int goya_mmu_invalidate_cache_range(struct hl_device *hdev, 5230 - bool is_hard, u32 asid, u64 va, u64 size) 5172 + bool is_hard, u32 flags, 5173 + u32 asid, u64 va, u64 size) 5231 5174 { 5232 - struct goya_device *goya = hdev->asic_specific; 5233 - u32 status, timeout_usec, inv_data, pi; 5234 - int rc; 5235 - 5236 - if (!(goya->hw_cap_initialized & HW_CAP_MMU) || 5237 - hdev->hard_reset_pending) 5238 - return 0; 5239 - 5240 - /* no need in L1 only invalidation in Goya */ 5241 - if (!is_hard) 5242 - return 0; 5243 - 5244 - if (hdev->pldm) 5245 - timeout_usec = GOYA_PLDM_MMU_TIMEOUT_USEC; 5246 - else 5247 - timeout_usec = MMU_CONFIG_TIMEOUT_USEC; 5248 - 5249 - /* 5250 - * TODO: currently invalidate entire L0 & L1 as in regular hard 5251 - * invalidation. Need to apply invalidation of specific cache lines with 5252 - * mask of ASID & VA & size. 5253 - * Note that L1 with be flushed entirely in any case. 
5175 + /* Treat as invalidate all because there is no range invalidation 5176 + * in Goya 5254 5177 */ 5255 - 5256 - /* L0 & L1 invalidation */ 5257 - inv_data = RREG32(mmSTLB_CACHE_INV); 5258 - /* PI is 8 bit */ 5259 - pi = ((inv_data & STLB_CACHE_INV_PRODUCER_INDEX_MASK) + 1) & 0xFF; 5260 - WREG32(mmSTLB_CACHE_INV, 5261 - (inv_data & STLB_CACHE_INV_INDEX_MASK_MASK) | pi); 5262 - 5263 - rc = hl_poll_timeout( 5264 - hdev, 5265 - mmSTLB_INV_CONSUMER_INDEX, 5266 - status, 5267 - status == pi, 5268 - 1000, 5269 - timeout_usec); 5270 - 5271 - if (rc) { 5272 - dev_err_ratelimited(hdev->dev, 5273 - "MMU cache invalidation timeout\n"); 5274 - hl_device_reset(hdev, HL_RESET_HARD); 5275 - } 5276 - 5277 - return rc; 5178 + return hdev->asic_funcs->mmu_invalidate_cache(hdev, is_hard, flags); 5278 5179 } 5279 5180 5280 5181 int goya_send_heartbeat(struct hl_device *hdev) ··· 5256 5239 if (!(goya->hw_cap_initialized & HW_CAP_CPU_Q)) 5257 5240 return 0; 5258 5241 5259 - rc = hl_fw_cpucp_handshake(hdev, mmCPU_BOOT_DEV_STS0, mmCPU_BOOT_ERR0); 5242 + rc = hl_fw_cpucp_handshake(hdev, mmCPU_BOOT_DEV_STS0, 5243 + mmCPU_BOOT_DEV_STS1, mmCPU_BOOT_ERR0, 5244 + mmCPU_BOOT_ERR1); 5260 5245 if (rc) 5261 5246 return rc; 5262 5247 ··· 5402 5383 return 0; 5403 5384 5404 5385 return hl_fw_get_eeprom_data(hdev, data, max_size); 5386 + } 5387 + 5388 + static void goya_cpu_init_scrambler_dram(struct hl_device *hdev) 5389 + { 5390 + 5405 5391 } 5406 5392 5407 5393 static int goya_ctx_init(struct hl_ctx *ctx) ··· 5589 5565 .ctx_fini = goya_ctx_fini, 5590 5566 .get_clk_rate = goya_get_clk_rate, 5591 5567 .get_queue_id_for_cq = goya_get_queue_id_for_cq, 5592 - .read_device_fw_version = goya_read_device_fw_version, 5593 5568 .load_firmware_to_device = goya_load_firmware_to_device, 5594 5569 .load_boot_fit_to_device = goya_load_boot_fit_to_device, 5595 5570 .get_signal_cb_size = goya_get_signal_cb_size, ··· 5607 5584 .get_hw_block_id = goya_get_hw_block_id, 5608 5585 .hw_block_mmap = goya_block_mmap, 
5609 5586 .enable_events_from_fw = goya_enable_events_from_fw, 5610 - .map_pll_idx_to_fw_idx = goya_map_pll_idx_to_fw_idx 5587 + .map_pll_idx_to_fw_idx = goya_map_pll_idx_to_fw_idx, 5588 + .init_firmware_loader = goya_init_firmware_loader, 5589 + .init_cpu_scrambler_dram = goya_cpu_init_scrambler_dram 5611 5590 }; 5612 5591 5613 5592 /*
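The goya.c changes above replace the long register-argument list of the old hl_fw_init_cpu() call with a fw_load_mgr that is filled once in goya_init_firmware_loader() and then branches on prop->dynamic_fw_load. A minimal user-space sketch of that "fill common fields, then dispatch on protocol flavor" pattern, using simplified hypothetical types rather than the driver's real ones:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Simplified stand-ins for the driver's loader state (hypothetical). */
struct static_loader_cfg  { uint32_t boot_status_reg; };
struct dynamic_loader_cfg { uint32_t wait_for_bl_timeout_us; };

struct fw_loader {
	bool dynamic;			/* like prop->dynamic_fw_load */
	uint32_t cpu_timeout_us;	/* common field for both flavors */
	union {
		struct static_loader_cfg st;
		struct dynamic_loader_cfg dyn;
	};
};

static void init_static_loader(struct fw_loader *l)
{
	l->st.boot_status_reg = 0xA000;	/* made-up register offset */
}

static void init_dynamic_loader(struct fw_loader *l)
{
	l->dyn.wait_for_bl_timeout_us = 15000000;	/* 15s, as in the diff */
}

/* Fill common fields once, then branch once on the protocol flavor. */
static void init_fw_loader(struct fw_loader *l, bool dynamic)
{
	l->dynamic = dynamic;
	l->cpu_timeout_us = 1000000;

	if (dynamic)
		init_dynamic_loader(l);
	else
		init_static_loader(l);
}
```

The payoff, visible in the diff, is that hl_fw_init_cpu() shrinks to hl_fw_init_cpu(hdev): all per-ASIC knowledge moves into the loader struct.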
+1 -1
drivers/misc/habanalabs/goya/goyaP.h
···
168 168 u8 device_cpu_mmu_mappings_done;
169 169 };
170 170
171 - int goya_get_fixed_properties(struct hl_device *hdev);
171 + int goya_set_fixed_properties(struct hl_device *hdev);
172 172 int goya_mmu_init(struct hl_device *hdev);
173 173 void goya_init_dma_qmans(struct hl_device *hdev);
174 174 void goya_init_mme_qmans(struct hl_device *hdev);
+1 -1
drivers/misc/habanalabs/goya/goya_coresight.c
···
434 434 WREG32(mmPSOC_ETR_BUFWM, 0x3FFC);
435 435 WREG32(mmPSOC_ETR_RSZ, input->buffer_size);
436 436 WREG32(mmPSOC_ETR_MODE, input->sink_mode);
437 - if (hdev->asic_prop.fw_security_disabled) {
437 + if (!hdev->asic_prop.fw_security_enabled) {
438 438 /* make ETR not privileged */
439 439 val = FIELD_PREP(PSOC_ETR_AXICTL_PROTCTRLBIT0_MASK, 0);
440 440 /* make ETR non-secured (inverted logic) */
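The coresight hunks above (and the gaudi_masks.h changes later in this diff) lean on the kernel's FIELD_PREP() to place a value into the register field described by a mask. A minimal user-space re-implementation, for illustration only (the kernel's real macros live in include/linux/bitfield.h and do this at compile time):

```c
#include <assert.h>
#include <stdint.h>

/* Shift count of a mask's lowest set bit, e.g. 0x0000FF00 -> 8. */
static unsigned int mask_shift(uint64_t mask)
{
	unsigned int shift = 0;

	while (!(mask & 1)) {
		mask >>= 1;
		shift++;
	}
	return shift;
}

/* Stand-in for the kernel's FIELD_PREP(): place val into the field
 * described by mask. */
static uint64_t field_prep(uint64_t mask, uint64_t val)
{
	return (val << mask_shift(mask)) & mask;
}

/* Stand-in for FIELD_GET(): extract the field back out of a register. */
static uint64_t field_get(uint64_t mask, uint64_t reg)
{
	return (reg & mask) >> mask_shift(mask);
}
```

The advantage over open-coded shifts is that a field is defined by a single mask constant; the shift is derived, so the two can never drift apart.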
+44 -1
drivers/misc/habanalabs/include/common/cpucp_if.h
···
84 84 __u8 pad[3];
85 85 };
86 86
87 + enum hl_fw_alive_severity {
88 + FW_ALIVE_SEVERITY_MINOR,
89 + FW_ALIVE_SEVERITY_CRITICAL
90 + };
91 +
92 + struct hl_eq_fw_alive {
93 + __le64 uptime_seconds;
94 + __le32 process_id;
95 + __le32 thread_id;
96 + /* enum hl_fw_alive_severity */
97 + __u8 severity;
98 + __u8 pad[7];
99 + };
100 +
87 101 struct hl_eq_entry {
88 102 struct hl_eq_header hdr;
89 103 union {
···
105 91 struct hl_eq_hbm_ecc_data hbm_ecc_data;
106 92 struct hl_eq_sm_sei_data sm_sei_data;
107 93 struct cpucp_pkt_sync_err pkt_sync_err;
94 + struct hl_eq_fw_alive fw_alive;
108 95 __le64 data[7];
109 96 };
110 97 };
···
118 103 #define EQ_CTL_EVENT_TYPE_SHIFT 16
119 104 #define EQ_CTL_EVENT_TYPE_MASK 0x03FF0000
120 105
106 + #define EQ_CTL_INDEX_SHIFT 0
107 + #define EQ_CTL_INDEX_MASK 0x0000FFFF
108 +
121 109 enum pq_init_status {
122 110 PQ_INIT_STATUS_NA = 0,
123 111 PQ_INIT_STATUS_READY_FOR_CP,
124 112 PQ_INIT_STATUS_READY_FOR_HOST,
125 - PQ_INIT_STATUS_READY_FOR_CP_SINGLE_MSI
113 + PQ_INIT_STATUS_READY_FOR_CP_SINGLE_MSI,
114 + PQ_INIT_STATUS_LEN_NOT_POWER_OF_TWO_ERR,
115 + PQ_INIT_STATUS_ILLEGAL_Q_ADDR_ERR
126 116 };
127 117
128 118 /*
···
404 384 #define CPUCP_PKT_RES_PLL_OUT3_SHIFT 48
405 385 #define CPUCP_PKT_RES_PLL_OUT3_MASK 0xFFFF000000000000ull
406 386
387 + #define CPUCP_PKT_VAL_PFC_IN1_SHIFT 0
388 + #define CPUCP_PKT_VAL_PFC_IN1_MASK 0x0000000000000001ull
389 + #define CPUCP_PKT_VAL_PFC_IN2_SHIFT 1
390 + #define CPUCP_PKT_VAL_PFC_IN2_MASK 0x000000000000001Eull
391 +
392 + #define CPUCP_PKT_VAL_LPBK_IN1_SHIFT 0
393 + #define CPUCP_PKT_VAL_LPBK_IN1_MASK 0x0000000000000001ull
394 + #define CPUCP_PKT_VAL_LPBK_IN2_SHIFT 1
395 + #define CPUCP_PKT_VAL_LPBK_IN2_MASK 0x000000000000001Eull
396 +
397 + /* heartbeat status bits */
398 + #define CPUCP_PKT_HB_STATUS_EQ_FAULT_SHIFT 0
399 + #define CPUCP_PKT_HB_STATUS_EQ_FAULT_MASK 0x00000001
400 +
407 401 struct cpucp_packet {
408 402 union {
409 403 __le64 value; /* For SET packets */
···
459 425
460 426 /* For get CpuCP info/EEPROM data/NIC info */
461 427 __le32 data_max_size;
428 +
429 + /*
430 + * For any general status bitmask. Shall be used whenever the
431 + * result cannot be used to hold general purpose data.
432 + */
433 + __le32 status_mask;
462 434 };
463 435
464 436 __le32 reserved;
···
669 629 * @card_name: card name that will be displayed in HWMON subsystem on the host
670 630 * @sec_info: security information
671 631 * @pll_map: Bit map of supported PLLs for current ASIC version.
632 + * @mme_binning_mask: MME binning mask,
633 + * (0 = functional, 1 = binned)
672 634 */
673 635 struct cpucp_info {
674 636 struct cpucp_sensor sensors[CPUCP_MAX_SENSORS];
···
693 651 struct cpucp_security_info sec_info;
694 652 __le32 reserved6;
695 653 __u8 pll_map[PLL_MAP_LEN];
654 + __le64 mme_binning_mask;
696 655 };
697 656
698 657 struct cpucp_mac_addr {
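cpucp_if.h packs several 16-bit PLL outputs into one 64-bit packet result, addressed by CPUCP_PKT_RES_PLL_OUTn_SHIFT/_MASK pairs. A small sketch of unpacking such a result; the OUT3 constants are copied from the header's context lines, the OUT0 pair and the packed value are illustrative assumptions:

```c
#include <assert.h>
#include <stdint.h>

/* OUT3 values appear in the diff; OUT0 is assumed here by analogy. */
#define CPUCP_PKT_RES_PLL_OUT0_SHIFT	0
#define CPUCP_PKT_RES_PLL_OUT0_MASK	0x000000000000FFFFull
#define CPUCP_PKT_RES_PLL_OUT3_SHIFT	48
#define CPUCP_PKT_RES_PLL_OUT3_MASK	0xFFFF000000000000ull

/* Extract one 16-bit PLL output from the packed 64-bit result. */
static uint16_t pll_out(uint64_t result, uint64_t mask, unsigned int shift)
{
	return (uint16_t)((result & mask) >> shift);
}
```

Packing four outputs into one word keeps the reply to a single CPUCP result field, at the cost of capping each output at 16 bits.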
+144 -40
drivers/misc/habanalabs/include/common/hl_boot_if.h
···
8 8 #ifndef HL_BOOT_IF_H
9 9 #define HL_BOOT_IF_H
10 10
11 - #define LKD_HARD_RESET_MAGIC 0xED7BD694
11 + #define LKD_HARD_RESET_MAGIC 0xED7BD694 /* deprecated - do not use */
12 12 #define HL_POWER9_HOST_MAGIC 0x1DA30009
13 13
14 14 #define BOOT_FIT_SRAM_OFFSET 0x200000
···
99 99 #define CPU_BOOT_ERR0_PLL_FAIL (1 << 12)
100 100 #define CPU_BOOT_ERR0_DEVICE_UNUSABLE_FAIL (1 << 13)
101 101 #define CPU_BOOT_ERR0_ENABLED (1 << 31)
102 + #define CPU_BOOT_ERR1_ENABLED (1 << 31)
102 103
103 104 /*
104 105 * BOOT DEVICE STATUS bits in BOOT_DEVICE_STS registers
···
191 190 * PLLs.
192 191 * Initialized in: linux
193 192 *
193 + * CPU_BOOT_DEV_STS0_GIC_PRIVILEGED_EN GIC access permission only from
194 + * previleged entity. FW sets this status
195 + * bit for host. If this bit is set then
196 + * GIC can not be accessed from host.
197 + * Initialized in: linux
198 + *
199 + * CPU_BOOT_DEV_STS0_EQ_INDEX_EN Event Queue (EQ) index is a running
200 + * index for each new event sent to host.
201 + * This is used as a method in host to
202 + * identify that the waiting event in
203 + * queue is actually a new event which
204 + * was not served before.
205 + * Initialized in: linux
206 + *
207 + * CPU_BOOT_DEV_STS0_MULTI_IRQ_POLL_EN Use multiple scratchpad interfaces to
208 + * prevent IRQs overriding each other.
209 + * Initialized in: linux
210 + *
194 211 * CPU_BOOT_DEV_STS0_ENABLED Device status register enabled.
195 212 * This is a main indication that the
196 213 * running FW populates the device status
···
237 218 #define CPU_BOOT_DEV_STS0_FW_LD_COM_EN (1 << 16)
238 219 #define CPU_BOOT_DEV_STS0_FW_IATU_CONF_EN (1 << 17)
239 220 #define CPU_BOOT_DEV_STS0_DYN_PLL_EN (1 << 19)
221 + #define CPU_BOOT_DEV_STS0_GIC_PRIVILEGED_EN (1 << 20)
222 + #define CPU_BOOT_DEV_STS0_EQ_INDEX_EN (1 << 21)
223 + #define CPU_BOOT_DEV_STS0_MULTI_IRQ_POLL_EN (1 << 22)
240 224 #define CPU_BOOT_DEV_STS0_ENABLED (1 << 31)
225 + #define CPU_BOOT_DEV_STS1_ENABLED (1 << 31)
241 226
242 227 enum cpu_boot_status {
243 228 CPU_BOOT_STATUS_NA = 0, /* Default value after reset of chip */
···
287 264
288 265 /* communication registers mapping - consider ABI when changing */
289 266 struct cpu_dyn_regs {
290 - uint32_t cpu_pq_base_addr_low;
291 - uint32_t cpu_pq_base_addr_high;
292 - uint32_t cpu_pq_length;
293 - uint32_t cpu_pq_init_status;
294 - uint32_t cpu_eq_base_addr_low;
295 - uint32_t cpu_eq_base_addr_high;
296 - uint32_t cpu_eq_length;
297 - uint32_t cpu_eq_ci;
298 - uint32_t cpu_cq_base_addr_low;
299 - uint32_t cpu_cq_base_addr_high;
300 - uint32_t cpu_cq_length;
301 - uint32_t cpu_pf_pq_pi;
302 - uint32_t cpu_boot_dev_sts0;
303 - uint32_t cpu_boot_dev_sts1;
304 - uint32_t cpu_boot_err0;
305 - uint32_t cpu_boot_err1;
306 - uint32_t cpu_boot_status;
307 - uint32_t fw_upd_sts;
308 - uint32_t fw_upd_cmd;
309 - uint32_t fw_upd_pending_sts;
310 - uint32_t fuse_ver_offset;
311 - uint32_t preboot_ver_offset;
312 - uint32_t uboot_ver_offset;
313 - uint32_t hw_state;
314 - uint32_t kmd_msg_to_cpu;
315 - uint32_t cpu_cmd_status_to_host;
316 - uint32_t reserved1[32]; /* reserve for future use */
267 + __le32 cpu_pq_base_addr_low;
268 + __le32 cpu_pq_base_addr_high;
269 + __le32 cpu_pq_length;
270 + __le32 cpu_pq_init_status;
271 + __le32 cpu_eq_base_addr_low;
272 + __le32 cpu_eq_base_addr_high;
273 + __le32 cpu_eq_length;
274 + __le32 cpu_eq_ci;
275 + __le32 cpu_cq_base_addr_low;
276 + __le32 cpu_cq_base_addr_high;
277 + __le32 cpu_cq_length;
278 + __le32 cpu_pf_pq_pi;
279 + __le32 cpu_boot_dev_sts0;
280 + __le32 cpu_boot_dev_sts1;
281 + __le32 cpu_boot_err0;
282 + __le32 cpu_boot_err1;
283 + __le32 cpu_boot_status;
284 + __le32 fw_upd_sts;
285 + __le32 fw_upd_cmd;
286 + __le32 fw_upd_pending_sts;
287 + __le32 fuse_ver_offset;
288 + __le32 preboot_ver_offset;
289 + __le32 uboot_ver_offset;
290 + __le32 hw_state;
291 + __le32 kmd_msg_to_cpu;
292 + __le32 cpu_cmd_status_to_host;
293 + union {
294 + __le32 gic_host_irq_ctrl;
295 + __le32 gic_host_pi_upd_irq;
296 + };
297 + __le32 gic_tpc_qm_irq_ctrl;
298 + __le32 gic_mme_qm_irq_ctrl;
299 + __le32 gic_dma_qm_irq_ctrl;
300 + __le32 gic_nic_qm_irq_ctrl;
301 + __le32 gic_dma_core_irq_ctrl;
302 + __le32 gic_host_halt_irq;
303 + __le32 gic_host_ints_irq;
304 + __le32 reserved1[24]; /* reserve for future use */
317 305 };
318 306
307 + /* TODO: remove the desc magic after the code is updated to use message */
319 308 /* HCDM - Habana Communications Descriptor Magic */
320 309 #define HL_COMMS_DESC_MAGIC 0x4843444D
321 310 #define HL_COMMS_DESC_VER 1
322 311
312 + /* HCMv - Habana Communications Message + header version */
313 + #define HL_COMMS_MSG_MAGIC_VALUE 0x48434D00
314 + #define HL_COMMS_MSG_MAGIC_MASK 0xFFFFFF00
315 + #define HL_COMMS_MSG_MAGIC_VER_MASK 0xFF
316 +
317 + #define HL_COMMS_MSG_MAGIC_VER(ver) (HL_COMMS_MSG_MAGIC_VALUE | \
318 + ((ver) & HL_COMMS_MSG_MAGIC_VER_MASK))
319 + #define HL_COMMS_MSG_MAGIC_V0 HL_COMMS_DESC_MAGIC
320 + #define HL_COMMS_MSG_MAGIC_V1 HL_COMMS_MSG_MAGIC_VER(1)
321 +
322 + #define HL_COMMS_MSG_MAGIC HL_COMMS_MSG_MAGIC_V1
323 +
324 + #define HL_COMMS_MSG_MAGIC_VALIDATE_MAGIC(magic) \
325 + (((magic) & HL_COMMS_MSG_MAGIC_MASK) == \
326 + HL_COMMS_MSG_MAGIC_VALUE)
327 +
328 + #define HL_COMMS_MSG_MAGIC_VALIDATE_VERSION(magic, ver) \
329 + (((magic) & HL_COMMS_MSG_MAGIC_VER_MASK) >= \
330 + ((ver) & HL_COMMS_MSG_MAGIC_VER_MASK))
331 +
332 + #define HL_COMMS_MSG_MAGIC_VALIDATE(magic, ver) \
333 + (HL_COMMS_MSG_MAGIC_VALIDATE_MAGIC((magic)) && \
334 + HL_COMMS_MSG_MAGIC_VALIDATE_VERSION((magic), (ver)))
335 +
336 + enum comms_msg_type {
337 + HL_COMMS_DESC_TYPE = 0,
338 + HL_COMMS_RESET_CAUSE_TYPE = 1,
339 + };
340 +
341 + /* TODO: remove this struct after the code is updated to use message */
323 342 /* this is the comms descriptor header - meta data */
324 343 struct comms_desc_header {
325 - uint32_t magic; /* magic for validation */
326 - uint32_t crc32; /* CRC32 of the descriptor w/o header */
327 - uint16_t size; /* size of the descriptor w/o header */
328 - uint8_t version; /* descriptor version */
329 - uint8_t reserved[5]; /* pad to 64 bit */
344 + __le32 magic; /* magic for validation */
345 + __le32 crc32; /* CRC32 of the descriptor w/o header */
346 + __le16 size; /* size of the descriptor w/o header */
347 + __u8 version; /* descriptor version */
348 + __u8 reserved[5]; /* pad to 64 bit */
349 + };
350 +
351 + /* this is the comms message header - meta data */
352 + struct comms_msg_header {
353 + __le32 magic; /* magic for validation */
354 + __le32 crc32; /* CRC32 of the message w/o header */
355 + __le16 size; /* size of the message w/o header */
356 + __u8 version; /* message payload version */
357 + __u8 type; /* message type */
358 + __u8 reserved[4]; /* pad to 64 bit */
330 359 };
331 360
332 361 /* this is the main FW descriptor - consider ABI when changing */
···
389 314 char cur_fw_ver[VERSION_MAX_LEN];
390 315 /* can be used for 1 more version w/o ABI change */
391 316 char reserved0[VERSION_MAX_LEN];
392 - uint64_t img_addr; /* address for next FW component load */
317 + __le64 img_addr; /* address for next FW component load */
318 + };
319 +
320 + enum comms_reset_cause {
321 + HL_RESET_CAUSE_UNKNOWN = 0,
322 + HL_RESET_CAUSE_HEARTBEAT = 1,
323 + HL_RESET_CAUSE_TDR = 2,
324 + };
325 +
326 + /* TODO: remove define after struct name is aligned on all projects */
327 + #define lkd_msg_comms lkd_fw_comms_msg
328 +
329 + /* this is the comms message descriptor */
330 + struct lkd_fw_comms_msg {
331 + struct comms_msg_header header;
332 + /* union for future expantions of new messages */
333 + union {
334 + struct {
335 + struct cpu_dyn_regs cpu_dyn_regs;
336 + char fuse_ver[VERSION_MAX_LEN];
337 + char cur_fw_ver[VERSION_MAX_LEN];
338 + /* can be used for 1 more version w/o ABI change */
339 + char reserved0[VERSION_MAX_LEN];
340 + /* address for next FW component load */
341 + __le64 img_addr;
342 + };
343 + struct {
344 + __u8 reset_cause;
345 + };
346 + };
393 347 };
394 348
395 349 /*
···
490 386 struct comms_command {
491 387 union { /* bit fields are only for FW use */
492 388 struct {
493 - unsigned int size :25; /* 32MB max. */
494 - unsigned int reserved :2;
389 + u32 size :25; /* 32MB max. */
390 + u32 reserved :2;
495 391 enum comms_cmd cmd :5; /* 32 commands */
496 392 };
497 - unsigned int val;
393 + __le32 val;
498 394 };
499 395 };
500 396
···
553 449 struct comms_status {
554 450 union { /* bit fields are only for FW use */
555 451 struct {
556 - unsigned int offset :26;
557 - unsigned int ram_type :2;
452 + u32 offset :26;
453 + enum comms_ram_types ram_type :2;
558 454 enum comms_sts status :4; /* 16 statuses */
559 455 };
560 - unsigned int val;
456 + __le32 val;
561 457 };
562 458 };
563 459
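The comms-message macros added above encode a protocol magic in the top 24 bits of a word and a version in the low 8, so a receiver accepts any message whose version is at least the one it requires. The same checks written as plain C for illustration, with the constants copied from the diff:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define HL_COMMS_DESC_MAGIC		0x4843444D	/* "HCDM", legacy */
#define HL_COMMS_MSG_MAGIC_VALUE	0x48434D00	/* "HCM" + version byte */
#define HL_COMMS_MSG_MAGIC_MASK		0xFFFFFF00
#define HL_COMMS_MSG_MAGIC_VER_MASK	0xFF

/* Do the top 24 bits match the message protocol magic? */
static bool magic_ok(uint32_t magic)
{
	return (magic & HL_COMMS_MSG_MAGIC_MASK) == HL_COMMS_MSG_MAGIC_VALUE;
}

/* Is the message version (low 8 bits) at least the required one? */
static bool version_ok(uint32_t magic, uint32_t ver)
{
	return (magic & HL_COMMS_MSG_MAGIC_VER_MASK) >=
	       (ver & HL_COMMS_MSG_MAGIC_VER_MASK);
}

/* Mirrors HL_COMMS_MSG_MAGIC_VALIDATE(magic, ver). */
static bool comms_msg_valid(uint32_t magic, uint32_t ver)
{
	return magic_ok(magic) && version_ok(magic, ver);
}
```

Note that the legacy descriptor magic "HCDM" deliberately fails the prefix check, which is how old-style descriptors are told apart from the new versioned messages.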
+10 -4
drivers/misc/habanalabs/include/gaudi/gaudi_async_events.h
···
252 252 GAUDI_EVENT_HBM3_SPI_0 = 407,
253 253 GAUDI_EVENT_HBM3_SPI_1 = 408,
254 254 GAUDI_EVENT_PSOC_GPIO_U16_0 = 421,
255 - GAUDI_EVENT_PI_UPDATE = 484,
256 - GAUDI_EVENT_HALT_MACHINE = 485,
257 - GAUDI_EVENT_INTS_REGISTER = 486,
258 - GAUDI_EVENT_SOFT_RESET = 487,
255 + GAUDI_EVENT_NIC0_CS_DBG_DERR = 483,
256 + GAUDI_EVENT_NIC1_CS_DBG_DERR = 487,
257 + GAUDI_EVENT_NIC2_CS_DBG_DERR = 491,
258 + GAUDI_EVENT_NIC3_CS_DBG_DERR = 495,
259 + GAUDI_EVENT_NIC4_CS_DBG_DERR = 499,
259 260 GAUDI_EVENT_RAZWI_OR_ADC = 548,
260 261 GAUDI_EVENT_TPC0_QM = 572,
261 262 GAUDI_EVENT_TPC1_QM = 573,
···
304 303 GAUDI_EVENT_NIC3_QP1 = 619,
305 304 GAUDI_EVENT_NIC4_QP0 = 620,
306 305 GAUDI_EVENT_NIC4_QP1 = 621,
306 + GAUDI_EVENT_PI_UPDATE = 635,
307 + GAUDI_EVENT_HALT_MACHINE = 636,
308 + GAUDI_EVENT_INTS_REGISTER = 637,
309 + GAUDI_EVENT_SOFT_RESET = 638,
310 + GAUDI_EVENT_FW_ALIVE_S = 645,
307 311 GAUDI_EVENT_DEV_RESET_REQ = 646,
308 312 GAUDI_EVENT_PKT_QUEUE_OUT_SYNC = 647,
309 313 GAUDI_EVENT_FIX_POWER_ENV_S = 658,
+18 -13
drivers/misc/habanalabs/include/gaudi/gaudi_async_ids_map_extended.h
···
507 507 { .fc_id = 480, .cpu_id = 329, .valid = 0, .name = "" },
508 508 { .fc_id = 481, .cpu_id = 330, .valid = 0, .name = "" },
509 509 { .fc_id = 482, .cpu_id = 331, .valid = 0, .name = "" },
510 - { .fc_id = 483, .cpu_id = 332, .valid = 0, .name = "" },
511 - { .fc_id = 484, .cpu_id = 333, .valid = 1, .name = "PI_UPDATE" },
512 - { .fc_id = 485, .cpu_id = 334, .valid = 1, .name = "HALT_MACHINE" },
513 - { .fc_id = 486, .cpu_id = 335, .valid = 1, .name = "INTS_REGISTER" },
514 - { .fc_id = 487, .cpu_id = 336, .valid = 1, .name = "SOFT_RESET" },
510 + { .fc_id = 483, .cpu_id = 332, .valid = 1,
511 + .name = "NIC0_CS_DBG_DERR" },
512 + { .fc_id = 484, .cpu_id = 333, .valid = 0, .name = "" },
513 + { .fc_id = 485, .cpu_id = 334, .valid = 0, .name = "" },
514 + { .fc_id = 486, .cpu_id = 335, .valid = 0, .name = "" },
515 + { .fc_id = 487, .cpu_id = 336, .valid = 1,
516 + .name = "NIC1_CS_DBG_DERR" },
515 517 { .fc_id = 488, .cpu_id = 337, .valid = 0, .name = "" },
516 518 { .fc_id = 489, .cpu_id = 338, .valid = 0, .name = "" },
517 519 { .fc_id = 490, .cpu_id = 339, .valid = 0, .name = "" },
518 - { .fc_id = 491, .cpu_id = 340, .valid = 0, .name = "" },
520 + { .fc_id = 491, .cpu_id = 340, .valid = 1,
521 + .name = "NIC2_CS_DBG_DERR" },
519 522 { .fc_id = 492, .cpu_id = 341, .valid = 0, .name = "" },
520 523 { .fc_id = 493, .cpu_id = 342, .valid = 0, .name = "" },
521 524 { .fc_id = 494, .cpu_id = 343, .valid = 0, .name = "" },
522 - { .fc_id = 495, .cpu_id = 344, .valid = 0, .name = "" },
525 + { .fc_id = 495, .cpu_id = 344, .valid = 1,
526 + .name = "NIC3_CS_DBG_DERR" },
523 527 { .fc_id = 496, .cpu_id = 345, .valid = 0, .name = "" },
524 528 { .fc_id = 497, .cpu_id = 346, .valid = 0, .name = "" },
525 529 { .fc_id = 498, .cpu_id = 347, .valid = 0, .name = "" },
526 - { .fc_id = 499, .cpu_id = 348, .valid = 0, .name = "" },
530 + { .fc_id = 499, .cpu_id = 348, .valid = 1,
531 + .name = "NIC4_CS_DBG_DERR" },
527 532 { .fc_id = 500, .cpu_id = 349, .valid = 0, .name = "" },
528 533 { .fc_id = 501, .cpu_id = 350, .valid = 0, .name = "" },
529 534 { .fc_id = 502, .cpu_id = 351, .valid = 0, .name = "" },
···
664 659 { .fc_id = 632, .cpu_id = 481, .valid = 0, .name = "" },
665 660 { .fc_id = 633, .cpu_id = 482, .valid = 0, .name = "" },
666 661 { .fc_id = 634, .cpu_id = 483, .valid = 0, .name = "" },
667 - { .fc_id = 635, .cpu_id = 484, .valid = 0, .name = "" },
668 - { .fc_id = 636, .cpu_id = 485, .valid = 0, .name = "" },
669 - { .fc_id = 637, .cpu_id = 486, .valid = 0, .name = "" },
670 - { .fc_id = 638, .cpu_id = 487, .valid = 0, .name = "" },
662 + { .fc_id = 635, .cpu_id = 484, .valid = 1, .name = "PI_UPDATE" },
663 + { .fc_id = 636, .cpu_id = 485, .valid = 1, .name = "HALT_MACHINE" },
664 + { .fc_id = 637, .cpu_id = 486, .valid = 1, .name = "INTS_REGISTER" },
665 + { .fc_id = 638, .cpu_id = 487, .valid = 1, .name = "SOFT_RESET" },
666 + { .fc_id = 639, .cpu_id = 488, .valid = 0, .name = "" },
672 667 { .fc_id = 640, .cpu_id = 489, .valid = 0, .name = "" },
673 668 { .fc_id = 641, .cpu_id = 490, .valid = 0, .name = "" },
674 669 { .fc_id = 642, .cpu_id = 491, .valid = 0, .name = "" },
675 670 { .fc_id = 643, .cpu_id = 492, .valid = 0, .name = "" },
676 671 { .fc_id = 644, .cpu_id = 493, .valid = 0, .name = "" },
677 - { .fc_id = 645, .cpu_id = 494, .valid = 0, .name = "" },
672 + { .fc_id = 645, .cpu_id = 494, .valid = 1, .name = "FW_ALIVE_S" },
678 673 { .fc_id = 646, .cpu_id = 495, .valid = 1, .name = "DEV_RESET_REQ" },
679 674 { .fc_id = 647, .cpu_id = 496, .valid = 1,
680 675 .name = "PKT_QUEUE_OUT_SYNC" },
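The table above maps firmware event ids (fc_id) to CPU ids plus a validity flag and a printable name; this commit moves PI_UPDATE and friends to new fc_id slots and marks the NIC DERR slots valid. A toy lookup over the same row shape (the entries here are a tiny hypothetical excerpt, not the full table):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct event_map_entry {
	unsigned int fc_id;
	unsigned int cpu_id;
	int valid;
	const char *name;
};

/* Hypothetical excerpt mirroring the table's shape. */
static const struct event_map_entry event_map[] = {
	{ .fc_id = 634, .cpu_id = 483, .valid = 0, .name = "" },
	{ .fc_id = 635, .cpu_id = 484, .valid = 1, .name = "PI_UPDATE" },
	{ .fc_id = 645, .cpu_id = 494, .valid = 1, .name = "FW_ALIVE_S" },
};

/* Return the name for a valid event, or NULL if unknown/invalid. */
static const char *event_name(unsigned int fc_id)
{
	size_t i;

	for (i = 0; i < sizeof(event_map) / sizeof(event_map[0]); i++)
		if (event_map[i].fc_id == fc_id)
			return event_map[i].valid ? event_map[i].name : NULL;
	return NULL;
}
```

Keeping invalid slots in the table (valid = 0) means the firmware can repurpose an id later without the driver's indices shifting, which is exactly what this remap relies on.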
+46
drivers/misc/habanalabs/include/gaudi/gaudi_fw_if.h
···
20 20 #define UBOOT_FW_OFFSET 0x100000 /* 1MB in SRAM */
21 21 #define LINUX_FW_OFFSET 0x800000 /* 8MB in HBM */
22 22
23 + /* HBM thermal delta in [Deg] added to composite (CTemp) */
24 + #define HBM_TEMP_ADJUST_COEFF 6
25 +
23 26 enum gaudi_nic_axi_error {
24 27 RXB,
25 28 RXE,
···
30 27 TXE,
31 28 QPC_RESP,
32 29 NON_AXI_ERR,
30 + TMR,
33 31 };
34 32
35 33 /*
···
44 40 __u8 axi_error_cause;
45 41 __u8 id;
46 42 __u8 pad[6];
43 + };
44 +
45 + /*
46 + * struct gaudi_nic_status - describes the status of a NIC port.
47 + * @port: NIC port index.
48 + * @bad_format_cnt: e.g. CRC.
49 + * @responder_out_of_sequence_psn_cnt: e.g NAK.
50 + * @high_ber_reinit_cnt: link reinit due to high BER.
51 + * @correctable_err_cnt: e.g. bit-flip.
52 + * @uncorrectable_err_cnt: e.g. MAC errors.
53 + * @retraining_cnt: re-training counter.
54 + * @up: is port up.
55 + * @pcs_link: has PCS link.
56 + * @phy_ready: is PHY ready.
57 + * @auto_neg: is Autoneg enabled.
58 + * @timeout_retransmission_cnt: timeout retransmission events
59 + * @high_ber_cnt: high ber events
60 + */
61 + struct gaudi_nic_status {
62 + __u32 port;
63 + __u32 bad_format_cnt;
64 + __u32 responder_out_of_sequence_psn_cnt;
65 + __u32 high_ber_reinit;
66 + __u32 correctable_err_cnt;
67 + __u32 uncorrectable_err_cnt;
68 + __u32 retraining_cnt;
69 + __u8 up;
70 + __u8 pcs_link;
71 + __u8 phy_ready;
72 + __u8 auto_neg;
73 + __u32 timeout_retransmission_cnt;
74 + __u32 high_ber_cnt;
75 + };
76 +
77 + struct gaudi_flops_2_data {
78 + union {
79 + struct {
80 + __u32 spsram_init_done : 1;
81 + __u32 reserved : 31;
82 + };
83 + __u32 data;
84 + };
47 85 };
48 86
49 87 #define GAUDI_PLL_FREQ_LOW 200000000 /* 200 MHz */
+10 -5
drivers/misc/habanalabs/include/gaudi/gaudi_masks.h
··· 66 66 #define PCI_DMA_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK (\ 67 67 (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_PQF_STOP_ON_ERR_MASK, 0xF)) | \ 68 68 (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_CQF_STOP_ON_ERR_MASK, 0xF)) | \ 69 - (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0xF))) 69 + (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0xF)) | \ 70 + (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_ARB_STOP_ON_ERR_MASK, 0x1))) 70 71 71 72 #define HBM_DMA_QMAN_GLBL_ERR_CFG_MSG_EN_MASK (\ 72 73 (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_PQF_ERR_MSG_EN_MASK, 0xF)) | \ ··· 77 76 #define HBM_DMA_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK (\ 78 77 (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_PQF_STOP_ON_ERR_MASK, 0xF)) | \ 79 78 (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_CQF_STOP_ON_ERR_MASK, 0x1F)) | \ 80 - (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0x1F))) 79 + (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0x1F)) | \ 80 + (FIELD_PREP(DMA0_QM_GLBL_ERR_CFG_ARB_STOP_ON_ERR_MASK, 0x1))) 81 81 82 82 #define TPC_QMAN_GLBL_ERR_CFG_MSG_EN_MASK (\ 83 83 (FIELD_PREP(TPC0_QM_GLBL_ERR_CFG_PQF_ERR_MSG_EN_MASK, 0xF)) | \ ··· 88 86 #define TPC_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK (\ 89 87 (FIELD_PREP(TPC0_QM_GLBL_ERR_CFG_PQF_STOP_ON_ERR_MASK, 0xF)) | \ 90 88 (FIELD_PREP(TPC0_QM_GLBL_ERR_CFG_CQF_STOP_ON_ERR_MASK, 0x1F)) | \ 91 - (FIELD_PREP(TPC0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0x1F))) 89 + (FIELD_PREP(TPC0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0x1F)) | \ 90 + (FIELD_PREP(TPC0_QM_GLBL_ERR_CFG_ARB_STOP_ON_ERR_MASK, 0x1))) 92 91 93 92 #define MME_QMAN_GLBL_ERR_CFG_MSG_EN_MASK (\ 94 93 (FIELD_PREP(MME0_QM_GLBL_ERR_CFG_PQF_ERR_MSG_EN_MASK, 0xF)) | \ ··· 99 96 #define MME_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK (\ 100 97 (FIELD_PREP(MME0_QM_GLBL_ERR_CFG_PQF_STOP_ON_ERR_MASK, 0xF)) | \ 101 98 (FIELD_PREP(MME0_QM_GLBL_ERR_CFG_CQF_STOP_ON_ERR_MASK, 0x1F)) | \ 102 - (FIELD_PREP(MME0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0x1F))) 99 + (FIELD_PREP(MME0_QM_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0x1F)) | \ 100 + (FIELD_PREP(MME0_QM_GLBL_ERR_CFG_ARB_STOP_ON_ERR_MASK, 0x1))) 103 101 104 102 #define NIC_QMAN_GLBL_ERR_CFG_MSG_EN_MASK (\ 105 103 (FIELD_PREP(NIC0_QM0_GLBL_ERR_CFG_PQF_ERR_MSG_EN_MASK, 0xF)) | \ ··· 110 106 #define NIC_QMAN_GLBL_ERR_CFG_STOP_ON_ERR_EN_MASK (\ 111 107 (FIELD_PREP(NIC0_QM0_GLBL_ERR_CFG_PQF_STOP_ON_ERR_MASK, 0xF)) | \ 112 108 (FIELD_PREP(NIC0_QM0_GLBL_ERR_CFG_CQF_STOP_ON_ERR_MASK, 0xF)) | \ 113 - (FIELD_PREP(NIC0_QM0_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0xF))) 109 + (FIELD_PREP(NIC0_QM0_GLBL_ERR_CFG_CP_STOP_ON_ERR_MASK, 0xF)) | \ 110 + (FIELD_PREP(NIC0_QM0_GLBL_ERR_CFG_ARB_STOP_ON_ERR_MASK, 0x1))) 114 111 115 112 #define QMAN_CGM1_PWR_GATE_EN (FIELD_PREP(DMA0_QM_CGM_CFG1_MASK_TH_MASK, 0xA)) 116 113
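The new `ARB_STOP_ON_ERR` bits above are packed with the kernel's `FIELD_PREP()` helper, which shifts a value into the position described by a register mask. A minimal user-space sketch of that packing, with simplified macros and made-up mask values (the real `DMA0_QM_GLBL_ERR_CFG_*_MASK` constants live in the Gaudi register headers):

```c
#include <stdint.h>

/* Simplified stand-ins for FIELD_PREP()/FIELD_GET() from <linux/bitfield.h>;
 * the real macros derive the shift from the mask at compile time. */
#define FIELD_PREP(mask, val) ((((uint32_t)(val)) << __builtin_ctz(mask)) & (mask))
#define FIELD_GET(mask, reg)  (((reg) & (mask)) >> __builtin_ctz(mask))

/* Hypothetical mask layout, for illustration only. */
#define PQF_STOP_ON_ERR_MASK 0x0000000Fu
#define CQF_STOP_ON_ERR_MASK 0x000001F0u
#define ARB_STOP_ON_ERR_MASK 0x80000000u

/* Build a stop-on-error config word the way the *_STOP_ON_ERR_EN_MASK macros
 * do: every PQF/CQF lane enabled, plus the single new arbiter bit. */
static inline uint32_t build_stop_on_err_cfg(void)
{
	return FIELD_PREP(PQF_STOP_ON_ERR_MASK, 0xF) |
	       FIELD_PREP(CQF_STOP_ON_ERR_MASK, 0x1F) |
	       FIELD_PREP(ARB_STOP_ON_ERR_MASK, 0x1);
}
```

Because the shift comes from the mask, adding the arbiter bit to each existing macro is a one-line OR, exactly as in the hunks above.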
+10
drivers/misc/habanalabs/include/gaudi/gaudi_reg_map.h
··· 12 12 * PSOC scratch-pad registers 13 13 */ 14 14 #define mmHW_STATE mmPSOC_GLOBAL_CONF_SCRATCHPAD_0 15 + /* TODO: remove mmGIC_HOST_IRQ_CTRL_POLL_REG */ 16 + #define mmGIC_HOST_IRQ_CTRL_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_1 17 + #define mmGIC_HOST_PI_UPD_IRQ_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_1 18 + #define mmGIC_TPC_QM_IRQ_CTRL_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_2 19 + #define mmGIC_MME_QM_IRQ_CTRL_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_3 20 + #define mmGIC_DMA_QM_IRQ_CTRL_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_4 21 + #define mmGIC_NIC_QM_IRQ_CTRL_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_5 22 + #define mmGIC_DMA_CR_IRQ_CTRL_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_6 23 + #define mmGIC_HOST_HALT_IRQ_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_7 24 + #define mmGIC_HOST_INTS_IRQ_POLL_REG mmPSOC_GLOBAL_CONF_SCRATCHPAD_8 15 25 #define mmCPU_BOOT_DEV_STS0 mmPSOC_GLOBAL_CONF_SCRATCHPAD_20 16 26 #define mmCPU_BOOT_DEV_STS1 mmPSOC_GLOBAL_CONF_SCRATCHPAD_21 17 27 #define mmFUSE_VER_OFFSET mmPSOC_GLOBAL_CONF_SCRATCHPAD_22
+9 -1
drivers/misc/hpilo.c
··· 693 693 { 694 694 int bar; 695 695 unsigned long off; 696 + u8 pci_rev_id; 697 + int rc; 696 698 697 699 /* map the memory mapped i/o registers */ 698 700 hw->mmio_vaddr = pci_iomap(pdev, 1, 0); ··· 704 702 } 705 703 706 704 /* map the adapter shared memory region */ 707 - if (pdev->subsystem_device == 0x00E4) { 705 + rc = pci_read_config_byte(pdev, PCI_REVISION_ID, &pci_rev_id); 706 + if (rc != 0) { 707 + dev_err(&pdev->dev, "Error reading PCI rev id: %d\n", rc); 708 + goto out; 709 + } 710 + 711 + if (pci_rev_id >= PCI_REV_ID_NECHES) { 708 712 bar = 5; 709 713 /* Last 8k is reserved for CCBs */ 710 714 off = pci_resource_len(pdev, bar) - 0x2000;
+3
drivers/misc/hpilo.h
··· 10 10 11 11 #define ILO_NAME "hpilo" 12 12 13 + /* iLO ASIC PCI revision id */ 14 + #define PCI_REV_ID_NECHES 7 15 + 13 16 /* max number of open channel control blocks per device, hw limited to 32 */ 14 17 #define MAX_CCB 24 15 18 /* min number of open channel control blocks per device, hw limited to 32 */
+3 -2
drivers/misc/ibmasm/module.c
··· 111 111 result = ibmasm_init_remote_input_dev(sp); 112 112 if (result) { 113 113 dev_err(sp->dev, "Failed to initialize remote queue\n"); 114 - goto error_send_message; 114 + goto error_init_remote; 115 115 } 116 116 117 117 result = ibmasm_send_driver_vpd(sp); ··· 131 131 return 0; 132 132 133 133 error_send_message: 134 - disable_sp_interrupts(sp->base_address); 135 134 ibmasm_free_remote_input_dev(sp); 135 + error_init_remote: 136 + disable_sp_interrupts(sp->base_address); 136 137 free_irq(sp->irq, (void *)sp); 137 138 error_request_irq: 138 139 iounmap(sp->base_address);
+1 -1
drivers/misc/ibmasm/remote.h
··· 43 43 #define REMOTE_BUTTON_MIDDLE 0x02 44 44 #define REMOTE_BUTTON_RIGHT 0x04 45 45 46 - /* size of keysym/keycode translation matricies */ 46 + /* size of keysym/keycode translation matrices */ 47 47 #define XLATE_SIZE 256 48 48 49 49 struct mouse_input {
+9 -2
drivers/misc/lkdtm/bugs.c
··· 161 161 if (*p == 0) 162 162 val = 0x87654321; 163 163 *p = val; 164 + 165 + if (IS_ENABLED(CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS)) 166 + pr_err("XFAIL: arch has CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS\n"); 164 167 } 165 168 166 169 void lkdtm_SOFTLOCKUP(void) ··· 303 300 304 301 if (target[0] == NULL && target[1] == NULL) 305 302 pr_err("Overwrite did not happen, but no BUG?!\n"); 306 - else 303 + else { 307 304 pr_err("list_add() corruption not detected!\n"); 305 + pr_expected_config(CONFIG_DEBUG_LIST); 306 + } 308 307 } 309 308 310 309 void lkdtm_CORRUPT_LIST_DEL(void) ··· 330 325 331 326 if (target[0] == NULL && target[1] == NULL) 332 327 pr_err("Overwrite did not happen, but no BUG?!\n"); 333 - else 328 + else { 334 329 pr_err("list_del() corruption not detected!\n"); 330 + pr_expected_config(CONFIG_DEBUG_LIST); 331 + } 335 332 } 336 333 337 334 /* Test that VMAP_STACK is actually allocating with a leading guard page */
+2 -1
drivers/misc/lkdtm/cfi.c
··· 38 38 func = (void *)lkdtm_increment_int; 39 39 func(&called_count); 40 40 41 - pr_info("Fail: survived mismatched prototype function call!\n"); 41 + pr_err("FAIL: survived mismatched prototype function call!\n"); 42 + pr_expected_config(CONFIG_CFI_CLANG); 42 43 }
+55 -3
drivers/misc/lkdtm/core.c
··· 26 26 #include <linux/init.h> 27 27 #include <linux/slab.h> 28 28 #include <linux/debugfs.h> 29 + #include <linux/init.h> 29 30 30 31 #define DEFAULT_COUNT 10 31 32 ··· 121 120 CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE), 122 121 CRASHTYPE(FORTIFY_OBJECT), 123 122 CRASHTYPE(FORTIFY_SUBOBJECT), 124 - CRASHTYPE(OVERWRITE_ALLOCATION), 123 + CRASHTYPE(SLAB_LINEAR_OVERFLOW), 124 + CRASHTYPE(VMALLOC_LINEAR_OVERFLOW), 125 125 CRASHTYPE(WRITE_AFTER_FREE), 126 126 CRASHTYPE(READ_AFTER_FREE), 127 127 CRASHTYPE(WRITE_BUDDY_AFTER_FREE), 128 128 CRASHTYPE(READ_BUDDY_AFTER_FREE), 129 + CRASHTYPE(SLAB_INIT_ON_ALLOC), 130 + CRASHTYPE(BUDDY_INIT_ON_ALLOC), 129 131 CRASHTYPE(SLAB_FREE_DOUBLE), 130 132 CRASHTYPE(SLAB_FREE_CROSS), 131 133 CRASHTYPE(SLAB_FREE_PAGE), ··· 181 177 CRASHTYPE(STACKLEAK_ERASING), 182 178 CRASHTYPE(CFI_FORWARD_PROTO), 183 179 CRASHTYPE(FORTIFIED_STRSCPY), 184 - #ifdef CONFIG_X86_32 185 180 CRASHTYPE(DOUBLE_FAULT), 186 - #endif 187 181 #ifdef CONFIG_PPC_BOOK3S_64 188 182 CRASHTYPE(PPC_SLB_MULTIHIT), 189 183 #endif ··· 400 398 401 399 return count; 402 400 } 401 + 402 + #ifndef MODULE 403 + /* 404 + * To avoid needing to export parse_args(), just don't use this code 405 + * when LKDTM is built as a module. 406 + */ 407 + struct check_cmdline_args { 408 + const char *param; 409 + int value; 410 + }; 411 + 412 + static int lkdtm_parse_one(char *param, char *val, 413 + const char *unused, void *arg) 414 + { 415 + struct check_cmdline_args *args = arg; 416 + 417 + /* short circuit if we already found a value. */ 418 + if (args->value != -ESRCH) 419 + return 0; 420 + if (strncmp(param, args->param, strlen(args->param)) == 0) { 421 + bool bool_result; 422 + int ret; 423 + 424 + ret = kstrtobool(val, &bool_result); 425 + if (ret == 0) 426 + args->value = bool_result; 427 + } 428 + return 0; 429 + } 430 + 431 + int lkdtm_check_bool_cmdline(const char *param) 432 + { 433 + char *command_line; 434 + struct check_cmdline_args args = { 435 + .param = param, 436 + .value = -ESRCH, 437 + }; 438 + 439 + command_line = kstrdup(saved_command_line, GFP_KERNEL); 440 + if (!command_line) 441 + return -ENOMEM; 442 + 443 + parse_args("Setting sysctl args", command_line, 444 + NULL, 0, -1, -1, &args, lkdtm_parse_one); 445 + 446 + kfree(command_line); 447 + 448 + return args.value; 449 + } 450 + #endif 403 451 404 452 static struct dentry *lkdtm_debugfs_root; 405 453
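The new `lkdtm_check_bool_cmdline()` above scans the saved kernel command line for a boolean parameter and reports found-true, found-false, or absent. A user-space sketch of the same check (simplified: whitespace-split tokens only, no quoting, and `-1` standing in for the kernel's `-ESRCH`):

```c
#include <string.h>

/* Scan a command line for "param=value" and report the boolean: 1 for a
 * true value, 0 for false, -1 when the parameter is absent. Names and the
 * parsing rules here are a simplification of parse_args()/kstrtobool(). */
static int check_bool_cmdline(const char *cmdline, const char *param)
{
	size_t plen = strlen(param);
	const char *p = cmdline;

	while (*p) {
		while (*p == ' ')		/* skip token separators */
			p++;
		if (strncmp(p, param, plen) == 0 && p[plen] == '=') {
			char v = p[plen + 1];

			if (v == '1' || v == 'y' || v == 'Y')
				return 1;
			if (v == '0' || v == 'n' || v == 'N')
				return 0;
		}
		while (*p && *p != ' ')		/* advance past this token */
			p++;
	}
	return -1;				/* not specified */
}
```

This three-way result is what lets `pr_expected_config_param()` distinguish "built with the option but booted with it off" from "option never specified".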
+2 -1
drivers/misc/lkdtm/fortify.c
··· 76 76 */ 77 77 strscpy(dst, src, strlen(src)); 78 78 79 - pr_warn("FAIL: No overflow in above strscpy()\n"); 79 + pr_err("FAIL: strscpy() overflow not detected!\n"); 80 + pr_expected_config(CONFIG_FORTIFY_SOURCE); 80 81 81 82 kfree(src); 82 83 }
+92 -5
drivers/misc/lkdtm/heap.c
··· 5 5 */ 6 6 #include "lkdtm.h" 7 7 #include <linux/slab.h> 8 + #include <linux/vmalloc.h> 8 9 #include <linux/sched.h> 9 10 10 11 static struct kmem_cache *double_free_cache; ··· 13 12 static struct kmem_cache *b_cache; 14 13 15 14 /* 15 + * If there aren't guard pages, it's likely that a consecutive allocation will 16 + * let us overflow into the second allocation without overwriting something real. 17 + */ 18 + void lkdtm_VMALLOC_LINEAR_OVERFLOW(void) 19 + { 20 + char *one, *two; 21 + 22 + one = vzalloc(PAGE_SIZE); 23 + two = vzalloc(PAGE_SIZE); 24 + 25 + pr_info("Attempting vmalloc linear overflow ...\n"); 26 + memset(one, 0xAA, PAGE_SIZE + 1); 27 + 28 + vfree(two); 29 + vfree(one); 30 + } 31 + 32 + /* 16 33 * This tries to stay within the next largest power-of-2 kmalloc cache 17 34 * to avoid actually overwriting anything important if it's not detected 18 35 * correctly. 19 36 */ 20 - void lkdtm_OVERWRITE_ALLOCATION(void) 37 + void lkdtm_SLAB_LINEAR_OVERFLOW(void) 21 38 { 22 39 size_t len = 1020; 23 40 u32 *data = kmalloc(len, GFP_KERNEL); 24 41 if (!data) 25 42 return; 26 43 44 + pr_info("Attempting slab linear overflow ...\n"); 27 45 data[1024 / sizeof(u32)] = 0x12345678; 28 46 kfree(data); 29 47 } ··· 109 89 if (saw != *val) { 110 90 /* Good! Poisoning happened, so declare a win. */ 111 91 pr_info("Memory correctly poisoned (%x)\n", saw); 112 - BUG(); 92 + } else { 93 + pr_err("FAIL: Memory was not poisoned!\n"); 94 + pr_expected_config_param(CONFIG_INIT_ON_FREE_DEFAULT_ON, "init_on_free"); 113 95 } 114 - pr_info("Memory was not poisoned\n"); 115 96 116 97 kfree(val); 117 98 } ··· 166 145 if (saw != *val) { 167 146 /* Good! Poisoning happened, so declare a win. */ 168 147 pr_info("Memory correctly poisoned (%x)\n", saw); 169 - BUG(); 148 + } else { 149 + pr_err("FAIL: Buddy page was not poisoned!\n"); 150 + pr_expected_config_param(CONFIG_INIT_ON_FREE_DEFAULT_ON, "init_on_free"); 170 151 } 171 - pr_info("Buddy page was not poisoned\n"); 172 152 173 153 kfree(val); 154 + } 155 + 156 + void lkdtm_SLAB_INIT_ON_ALLOC(void) 157 + { 158 + u8 *first; 159 + u8 *val; 160 + 161 + first = kmalloc(512, GFP_KERNEL); 162 + if (!first) { 163 + pr_info("Unable to allocate 512 bytes the first time.\n"); 164 + return; 165 + } 166 + 167 + memset(first, 0xAB, 512); 168 + kfree(first); 169 + 170 + val = kmalloc(512, GFP_KERNEL); 171 + if (!val) { 172 + pr_info("Unable to allocate 512 bytes the second time.\n"); 173 + return; 174 + } 175 + if (val != first) { 176 + pr_warn("Reallocation missed clobbered memory.\n"); 177 + } 178 + 179 + if (memchr(val, 0xAB, 512) == NULL) { 180 + pr_info("Memory appears initialized (%x, no earlier values)\n", *val); 181 + } else { 182 + pr_err("FAIL: Slab was not initialized\n"); 183 + pr_expected_config_param(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, "init_on_alloc"); 184 + } 185 + kfree(val); 186 + } 187 + 188 + void lkdtm_BUDDY_INIT_ON_ALLOC(void) 189 + { 190 + u8 *first; 191 + u8 *val; 192 + 193 + first = (u8 *)__get_free_page(GFP_KERNEL); 194 + if (!first) { 195 + pr_info("Unable to allocate first free page\n"); 196 + return; 197 + } 198 + 199 + memset(first, 0xAB, PAGE_SIZE); 200 + free_page((unsigned long)first); 201 + 202 + val = (u8 *)__get_free_page(GFP_KERNEL); 203 + if (!val) { 204 + pr_info("Unable to allocate second free page\n"); 205 + return; 206 + } 207 + 208 + if (val != first) { 209 + pr_warn("Reallocation missed clobbered memory.\n"); 210 + } 211 + 212 + if (memchr(val, 0xAB, PAGE_SIZE) == NULL) { 213 + pr_info("Memory appears initialized (%x, no earlier values)\n", *val); 214 + } else { 215 + pr_err("FAIL: Slab was not initialized\n"); 216 + pr_expected_config_param(CONFIG_INIT_ON_ALLOC_DEFAULT_ON, "init_on_alloc"); 217 + } 218 + free_page((unsigned long)val); 174 219 } 175 220 176 221 void lkdtm_SLAB_FREE_DOUBLE(void)
+45 -1
drivers/misc/lkdtm/lkdtm.h
··· 6 6 7 7 #include <linux/kernel.h> 8 8 9 + #define pr_expected_config(kconfig) \ 10 + { \ 11 + if (IS_ENABLED(kconfig)) \ 12 + pr_err("Unexpected! This kernel was built with " #kconfig "=y\n"); \ 13 + else \ 14 + pr_warn("This is probably expected, since this kernel was built *without* " #kconfig "=y\n"); \ 15 + } 16 + 17 + #ifndef MODULE 18 + int lkdtm_check_bool_cmdline(const char *param); 19 + #define pr_expected_config_param(kconfig, param) \ 20 + { \ 21 + if (IS_ENABLED(kconfig)) { \ 22 + switch (lkdtm_check_bool_cmdline(param)) { \ 23 + case 0: \ 24 + pr_warn("This is probably expected, since this kernel was built with " #kconfig "=y but booted with '" param "=N'\n"); \ 25 + break; \ 26 + case 1: \ 27 + pr_err("Unexpected! This kernel was built with " #kconfig "=y and booted with '" param "=Y'\n"); \ 28 + break; \ 29 + default: \ 30 + pr_err("Unexpected! This kernel was built with " #kconfig "=y (and booted without '" param "' specified)\n"); \ 31 + } \ 32 + } else { \ 33 + switch (lkdtm_check_bool_cmdline(param)) { \ 34 + case 0: \ 35 + pr_warn("This is probably expected, as kernel was built *without* " #kconfig "=y and booted with '" param "=N'\n"); \ 36 + break; \ 37 + case 1: \ 38 + pr_err("Unexpected! This kernel was built *without* " #kconfig "=y but booted with '" param "=Y'\n"); \ 39 + break; \ 40 + default: \ 41 + pr_err("This is probably expected, since this kernel was built *without* " #kconfig "=y (and booted without '" param "' specified)\n"); \ 42 + break; \ 43 + } \ 44 + } \ 45 + } 46 + #else 47 + #define pr_expected_config_param(kconfig, param) pr_expected_config(kconfig) 48 + #endif 49 + 9 50 /* bugs.c */ 10 51 void __init lkdtm_bugs_init(int *recur_param); 11 52 void lkdtm_PANIC(void); ··· 80 39 /* heap.c */ 81 40 void __init lkdtm_heap_init(void); 82 41 void __exit lkdtm_heap_exit(void); 83 - void lkdtm_OVERWRITE_ALLOCATION(void); 42 + void lkdtm_VMALLOC_LINEAR_OVERFLOW(void); 43 + void lkdtm_SLAB_LINEAR_OVERFLOW(void); 84 44 void lkdtm_WRITE_AFTER_FREE(void); 85 45 void lkdtm_READ_AFTER_FREE(void); 86 46 void lkdtm_WRITE_BUDDY_AFTER_FREE(void); 87 47 void lkdtm_READ_BUDDY_AFTER_FREE(void); 48 + void lkdtm_SLAB_INIT_ON_ALLOC(void); 49 + void lkdtm_BUDDY_INIT_ON_ALLOC(void); 88 50 void lkdtm_SLAB_FREE_DOUBLE(void); 89 51 void lkdtm_SLAB_FREE_CROSS(void); 90 52 void lkdtm_SLAB_FREE_PAGE(void);
+2 -2
drivers/misc/lkdtm/stackleak.c
··· 74 74 75 75 end: 76 76 if (test_failed) { 77 - pr_err("FAIL: the thread stack is NOT properly erased\n"); 78 - dump_stack(); 77 + pr_err("FAIL: the thread stack is NOT properly erased!\n"); 78 + pr_expected_config(CONFIG_GCC_PLUGIN_STACKLEAK); 79 79 } else { 80 80 pr_info("OK: the rest of the thread stack is properly erased\n"); 81 81 }
+6 -1
drivers/misc/lkdtm/usercopy.c
··· 173 173 goto free_user; 174 174 } 175 175 } 176 + pr_err("FAIL: bad usercopy not detected!\n"); 177 + pr_expected_config_param(CONFIG_HARDENED_USERCOPY, "hardened_usercopy"); 176 178 177 179 free_user: 178 180 vm_munmap(user_addr, PAGE_SIZE); ··· 250 248 goto free_user; 251 249 } 252 250 } 251 + pr_err("FAIL: bad usercopy not detected!\n"); 252 + pr_expected_config_param(CONFIG_HARDENED_USERCOPY, "hardened_usercopy"); 253 253 254 254 free_user: 255 255 vm_munmap(user_alloc, PAGE_SIZE); ··· 323 319 pr_warn("copy_to_user failed, but lacked Oops\n"); 324 320 goto free_user; 325 321 } 326 - pr_err("FAIL: survived bad copy_to_user()\n"); 322 + pr_err("FAIL: bad copy_to_user() not detected!\n"); 323 + pr_expected_config_param(CONFIG_HARDENED_USERCOPY, "hardened_usercopy"); 327 324 328 325 free_user: 329 326 vm_munmap(user_addr, PAGE_SIZE);
+1 -1
drivers/misc/mei/bus-fixup.c
··· 498 498 }; 499 499 500 500 /** 501 - * mei_cldev_fixup - run fixup handlers 501 + * mei_cl_bus_dev_fixup - run fixup handlers 502 502 * 503 503 * @cldev: me client device 504 504 */
+12 -10
drivers/misc/mei/client.c
··· 326 326 } 327 327 328 328 /** 329 - * mei_tx_cb_queue - queue tx callback 329 + * mei_tx_cb_enqueue - queue tx callback 330 330 * 331 331 * Locking: called under "dev->device_lock" lock 332 332 * ··· 1726 1726 return rets; 1727 1727 } 1728 1728 1729 - static inline u8 mei_ext_hdr_set_vtag(struct mei_ext_hdr *ext, u8 vtag) 1729 + static inline u8 mei_ext_hdr_set_vtag(void *ext, u8 vtag) 1730 1730 { 1731 - ext->type = MEI_EXT_HDR_VTAG; 1732 - ext->ext_payload[0] = vtag; 1733 - ext->length = mei_data2slots(sizeof(*ext)); 1734 - return ext->length; 1731 + struct mei_ext_hdr_vtag *vtag_hdr = ext; 1732 + 1733 + vtag_hdr->hdr.type = MEI_EXT_HDR_VTAG; 1734 + vtag_hdr->hdr.length = mei_data2slots(sizeof(*vtag_hdr)); 1735 + vtag_hdr->vtag = vtag; 1736 + vtag_hdr->reserved = 0; 1737 + return vtag_hdr->hdr.length; 1735 1738 } 1736 1739 1737 1740 /** ··· 1748 1745 { 1749 1746 size_t hdr_len; 1750 1747 struct mei_ext_meta_hdr *meta; 1751 - struct mei_ext_hdr *ext; 1752 1748 struct mei_msg_hdr *mei_hdr; 1753 1749 bool is_ext, is_vtag; 1754 1750 ··· 1766 1764 1767 1765 hdr_len += sizeof(*meta); 1768 1766 if (is_vtag) 1769 - hdr_len += sizeof(*ext); 1767 + hdr_len += sizeof(struct mei_ext_hdr_vtag); 1770 1768 1771 1769 setup_hdr: 1772 1770 mei_hdr = kzalloc(hdr_len, GFP_KERNEL); ··· 2252 2250 } 2253 2251 2254 2252 /** 2255 - * mei_cl_alloc_and_map - send client dma map request 2253 + * mei_cl_dma_alloc_and_map - send client dma map request 2256 2254 * 2257 2255 * @cl: host client 2258 2256 * @fp: pointer to file structure ··· 2351 2349 } 2352 2350 2353 2351 /** 2354 - * mei_cl_unmap_and_free - send client dma unmap request 2352 + * mei_cl_dma_unmap - send client dma unmap request 2355 2353 * 2356 2354 * @cl: host client 2357 2355 * @fp: pointer to file structure
+1 -1
drivers/misc/mei/hbm.c
··· 853 853 } 854 854 855 855 /** 856 - * mei_hbm_cl_flow_control_res - flow control response from me 856 + * mei_hbm_cl_tx_flow_ctrl_creds_res - flow control response from me 857 857 * 858 858 * @dev: the device structure 859 859 * @fctrl: flow control response bus message
-1
drivers/misc/mei/hdcp/Kconfig
··· 1 - 2 1 # SPDX-License-Identifier: GPL-2.0 3 2 # Copyright (c) 2019, Intel Corporation. All rights reserved. 4 3 #
+2 -2
drivers/misc/mei/hw-me.c
··· 1380 1380 .quirk_probe = mei_me_fw_type_nm 1381 1381 1382 1382 /** 1383 - * mei_me_fw_sku_sps_4() - check for sps 4.0 sku 1383 + * mei_me_fw_type_sps_4() - check for sps 4.0 sku 1384 1384 * 1385 1385 * Read ME FW Status register to check for SPS Firmware. 1386 1386 * The SPS FW is only signaled in the PCI function 0. ··· 1405 1405 .quirk_probe = mei_me_fw_type_sps_4 1406 1406 1407 1407 /** 1408 - * mei_me_fw_sku_sps() - check for sps sku 1408 + * mei_me_fw_type_sps() - check for sps sku 1409 1409 * 1410 1410 * Read ME FW Status register to check for SPS Firmware. 1411 1411 * The SPS FW is only signaled in pci function 0
+20 -8
drivers/misc/mei/hw.h
··· 235 235 struct mei_ext_hdr { 236 236 u8 type; 237 237 u8 length; 238 - u8 ext_payload[2]; 239 - u8 hdr[]; 240 - }; 238 + u8 data[]; 239 + } __packed; 241 240 242 241 /** 243 242 * struct mei_ext_meta_hdr - extend header meta data ··· 249 250 u8 count; 250 251 u8 size; 251 252 u8 reserved[2]; 252 - struct mei_ext_hdr hdrs[]; 253 - }; 253 + u8 hdrs[]; 254 + } __packed; 255 + 256 + /** 257 + * struct mei_ext_hdr_vtag - extend header for vtag 258 + * 259 + * @hdr: standard extend header 260 + * @vtag: virtual tag 261 + * @reserved: reserved 262 + */ 263 + struct mei_ext_hdr_vtag { 264 + struct mei_ext_hdr hdr; 265 + u8 vtag; 266 + u8 reserved; 267 + } __packed; 254 268 255 269 /* 256 270 * Extended header iterator functions ··· 278 266 */ 279 267 static inline struct mei_ext_hdr *mei_ext_begin(struct mei_ext_meta_hdr *meta) 280 268 { 281 - return meta->hdrs; 269 + return (struct mei_ext_hdr *)meta->hdrs; 282 270 } 283 271 284 272 /** ··· 296 284 } 297 285 298 286 /** 299 - *mei_ext_next - following extended header on the TLV list 287 + * mei_ext_next - following extended header on the TLV list 300 288 * 301 289 * @ext: current extend header 302 290 * ··· 307 295 */ 308 296 static inline struct mei_ext_hdr *mei_ext_next(struct mei_ext_hdr *ext) 309 297 { 310 - return (struct mei_ext_hdr *)(ext->hdr + (ext->length * 4)); 298 + return (struct mei_ext_hdr *)((u8 *)ext + (ext->length * 4)); 311 299 } 312 300 313 301 /**
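The `mei_ext_next()` change above fixes the walk over the TLV list of extended headers: the cursor must advance `ext->length` 4-byte slots from the start of the current header, not from its flexible-array payload (which sits past the fixed fields). A small user-space sketch of that iteration — the struct layout mirrors the patch, but the buffer contents are made up:

```c
#include <stdint.h>

/* Mirrors the patched struct mei_ext_hdr: 2 fixed bytes, then payload. */
struct ext_hdr {
	uint8_t type;
	uint8_t length;		/* total header size in 4-byte slots */
	uint8_t data[];
} __attribute__((packed));

/* Step to the next TLV header: length slots of 4 bytes from the header's
 * start (the pre-fix code stepped from the payload, overshooting by 4). */
static struct ext_hdr *ext_next(struct ext_hdr *ext)
{
	return (struct ext_hdr *)((uint8_t *)ext + ext->length * 4);
}
```

With an 8-byte first header (`length == 2`), `ext_next()` lands exactly on the header that follows it in the buffer.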
+10 -13
drivers/misc/mei/interrupt.c
··· 123 123 124 124 if (mei_hdr->extended) { 125 125 struct mei_ext_hdr *ext; 126 - struct mei_ext_hdr *vtag = NULL; 126 + struct mei_ext_hdr_vtag *vtag_hdr = NULL; 127 127 128 128 ext = mei_ext_begin(meta); 129 129 do { 130 130 switch (ext->type) { 131 131 case MEI_EXT_HDR_VTAG: 132 - vtag = ext; 132 + vtag_hdr = (struct mei_ext_hdr_vtag *)ext; 133 133 break; 134 134 case MEI_EXT_HDR_NONE: 135 135 fallthrough; ··· 141 141 ext = mei_ext_next(ext); 142 142 } while (!mei_ext_last(meta, ext)); 143 143 144 - if (!vtag) { 144 + if (!vtag_hdr) { 145 145 cl_dbg(dev, cl, "vtag not found in extended header.\n"); 146 146 cb->status = -EPROTO; 147 147 goto discard; 148 148 } 149 149 150 - cl_dbg(dev, cl, "vtag: %d\n", vtag->ext_payload[0]); 151 - if (cb->vtag && cb->vtag != vtag->ext_payload[0]) { 150 + cl_dbg(dev, cl, "vtag: %d\n", vtag_hdr->vtag); 151 + if (cb->vtag && cb->vtag != vtag_hdr->vtag) { 152 152 cl_err(dev, cl, "mismatched tag: %d != %d\n", 153 - cb->vtag, vtag->ext_payload[0]); 153 + cb->vtag, vtag_hdr->vtag); 154 154 cb->status = -EPROTO; 155 155 goto discard; 156 156 } 157 - cb->vtag = vtag->ext_payload[0]; 157 + cb->vtag = vtag_hdr->vtag; 158 158 } 159 159 160 160 if (!mei_cl_is_connected(cl)) { ··· 331 331 struct mei_ext_meta_hdr *meta_hdr = NULL; 332 332 struct mei_cl *cl; 333 333 int ret; 334 - u32 ext_meta_hdr_u32; 335 334 u32 hdr_size_left; 336 335 u32 hdr_size_ext; 337 336 int i; ··· 366 367 367 368 if (mei_hdr->extended) { 368 369 if (!dev->rd_msg_hdr[1]) { 369 - ext_meta_hdr_u32 = mei_read_hdr(dev); 370 - dev->rd_msg_hdr[1] = ext_meta_hdr_u32; 370 + dev->rd_msg_hdr[1] = mei_read_hdr(dev); 371 371 dev->rd_msg_hdr_count++; 372 372 (*slots)--; 373 - dev_dbg(dev->dev, "extended header is %08x\n", 374 - ext_meta_hdr_u32); 373 + dev_dbg(dev->dev, "extended header is %08x\n", dev->rd_msg_hdr[1]); 375 374 } 376 - meta_hdr = ((struct mei_ext_meta_hdr *)dev->rd_msg_hdr + 1); 375 + meta_hdr = ((struct mei_ext_meta_hdr *)&dev->rd_msg_hdr[1]); 377 376 if (check_add_overflow((u32)sizeof(*meta_hdr), 378 377 mei_slots2data(meta_hdr->size), 379 378 &hdr_size_ext)) {
+1 -3
drivers/misc/mei/main.c
··· 50 50 int err; 51 51 52 52 dev = container_of(inode->i_cdev, struct mei_device, cdev); 53 - if (!dev) 54 - return -ENODEV; 55 53 56 54 mutex_lock(&dev->device_lock); 57 55 ··· 1102 1104 static DEVICE_ATTR_RO(dev_state); 1103 1105 1104 1106 /** 1105 - * dev_set_devstate: set to new device state and notify sysfs file. 1107 + * mei_set_devstate: set to new device state and notify sysfs file. 1106 1108 * 1107 1109 * @dev: mei_device 1108 1110 * @state: new device state
+1 -1
drivers/misc/mei/pci-txe.c
··· 156 156 } 157 157 158 158 /** 159 - * mei_txe_remove - Device Shutdown Routine 159 + * mei_txe_shutdown- Device Shutdown Routine 160 160 * 161 161 * @pdev: PCI device structure 162 162 *
+2 -15
drivers/misc/pvpanic/pvpanic-mmio.c
··· 93 93 return -EINVAL; 94 94 } 95 95 96 - pi = kmalloc(sizeof(*pi), GFP_ATOMIC); 96 + pi = devm_kmalloc(dev, sizeof(*pi), GFP_KERNEL); 97 97 if (!pi) 98 98 return -ENOMEM; 99 99 ··· 104 104 pi->capability &= ioread8(base); 105 105 pi->events = pi->capability; 106 106 107 - dev_set_drvdata(dev, pi); 108 - 109 - return pvpanic_probe(pi); 110 - } 111 - 112 - static int pvpanic_mmio_remove(struct platform_device *pdev) 113 - { 114 - struct pvpanic_instance *pi = dev_get_drvdata(&pdev->dev); 115 - 116 - pvpanic_remove(pi); 117 - kfree(pi); 118 - 119 - return 0; 107 + return devm_pvpanic_probe(dev, pi); 120 108 } 121 109 122 110 static const struct of_device_id pvpanic_mmio_match[] = { ··· 127 139 .dev_groups = pvpanic_mmio_dev_groups, 128 140 }, 129 141 .probe = pvpanic_mmio_probe, 130 - .remove = pvpanic_mmio_remove, 131 142 }; 132 143 module_platform_driver(pvpanic_mmio_driver);
+4 -18
drivers/misc/pvpanic/pvpanic-pci.c
··· 73 73 static int pvpanic_pci_probe(struct pci_dev *pdev, 74 74 const struct pci_device_id *ent) 75 75 { 76 - struct device *dev = &pdev->dev; 77 76 struct pvpanic_instance *pi; 78 77 void __iomem *base; 79 78 int ret; 80 79 81 - ret = pci_enable_device(pdev); 80 + ret = pcim_enable_device(pdev); 82 81 if (ret < 0) 83 82 return ret; 84 83 85 - base = pci_iomap(pdev, 0, 0); 84 + base = pcim_iomap(pdev, 0, 0); 86 85 if (!base) 87 86 return -ENOMEM; 88 87 89 - pi = kmalloc(sizeof(*pi), GFP_ATOMIC); 88 + pi = devm_kmalloc(&pdev->dev, sizeof(*pi), GFP_KERNEL); 90 89 if (!pi) 91 90 return -ENOMEM; 92 91 ··· 96 97 pi->capability &= ioread8(base); 97 98 pi->events = pi->capability; 98 99 99 - dev_set_drvdata(dev, pi); 100 - 101 - return pvpanic_probe(pi); 102 - } 103 - 104 - static void pvpanic_pci_remove(struct pci_dev *pdev) 105 - { 106 - struct pvpanic_instance *pi = dev_get_drvdata(&pdev->dev); 107 - 108 - pvpanic_remove(pi); 109 - iounmap(pi->base); 110 - kfree(pi); 111 - pci_disable_device(pdev); 100 + return devm_pvpanic_probe(&pdev->dev, pi); 112 101 } 113 102 114 103 static struct pci_driver pvpanic_pci_driver = { 115 104 .name = "pvpanic-pci", 116 105 .id_table = pvpanic_pci_id_tbl, 117 106 .probe = pvpanic_pci_probe, 118 - .remove = pvpanic_pci_remove, 119 107 .driver = { 120 108 .dev_groups = pvpanic_pci_dev_groups, 121 109 },
+15 -18
drivers/misc/pvpanic/pvpanic.c
··· 61 61 .priority = 1, /* let this called before broken drm_fb_helper */ 62 62 }; 63 63 64 - int pvpanic_probe(struct pvpanic_instance *pi) 65 - { 66 - if (!pi || !pi->base) 67 - return -EINVAL; 68 - 69 - spin_lock(&pvpanic_lock); 70 - list_add(&pi->list, &pvpanic_list); 71 - spin_unlock(&pvpanic_lock); 72 - 73 - return 0; 74 - } 75 - EXPORT_SYMBOL_GPL(pvpanic_probe); 76 - 77 - void pvpanic_remove(struct pvpanic_instance *pi) 64 + static void pvpanic_remove(void *param) 78 65 { 79 66 struct pvpanic_instance *pi_cur, *pi_next; 80 - 81 - if (!pi) 82 - return; 67 + struct pvpanic_instance *pi = param; 83 68 84 69 spin_lock(&pvpanic_lock); 85 70 list_for_each_entry_safe(pi_cur, pi_next, &pvpanic_list, list) { ··· 75 90 } 76 91 spin_unlock(&pvpanic_lock); 77 92 } 78 - EXPORT_SYMBOL_GPL(pvpanic_remove); 93 + 94 + int devm_pvpanic_probe(struct device *dev, struct pvpanic_instance *pi) 95 + { 96 + if (!pi || !pi->base) 97 + return -EINVAL; 98 + 99 + spin_lock(&pvpanic_lock); 100 + list_add(&pi->list, &pvpanic_list); 101 + spin_unlock(&pvpanic_lock); 102 + 103 + return devm_add_action_or_reset(dev, pvpanic_remove, pi); 104 + } 105 + EXPORT_SYMBOL_GPL(devm_pvpanic_probe); 79 106 80 107 static int pvpanic_init(void) 81 108 {
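The pvpanic conversion above replaces paired probe/remove callbacks with `devm_pvpanic_probe()`, which registers the teardown via `devm_add_action_or_reset()` so the driver core runs it automatically when the device goes away. A toy user-space model of that lifetime pattern (all names here are illustrative stand-ins, not kernel API):

```c
#include <stddef.h>

/* Toy "device" holding one registered cleanup action. */
struct toy_device {
	void (*action)(void *);
	void *data;
};

/* Stand-in for devm_add_action(): remember the cleanup for later. */
static int toy_devm_add_action(struct toy_device *dev,
			       void (*action)(void *), void *data)
{
	dev->action = action;
	dev->data = data;
	return 0;
}

/* Stand-in for the driver core releasing the device: run the cleanup. */
static void toy_device_release(struct toy_device *dev)
{
	if (dev->action)
		dev->action(dev->data);
}

static int removed;

/* Stands in for the list-unlink work pvpanic_remove() does. */
static void toy_pvpanic_remove(void *param)
{
	(void)param;
	removed = 1;
}
```

The payoff is visible in the hunks above: `pvpanic_mmio_remove()` and `pvpanic_pci_remove()` disappear entirely, and error paths after registration need no manual unwinding.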
+1 -2
drivers/misc/pvpanic/pvpanic.h
··· 15 15 struct list_head list; 16 16 }; 17 17 18 - int pvpanic_probe(struct pvpanic_instance *pi); 19 - void pvpanic_remove(struct pvpanic_instance *pi); 18 + int devm_pvpanic_probe(struct device *dev, struct pvpanic_instance *pi); 20 19 21 20 #endif /* PVPANIC_H_ */
+9 -2
drivers/misc/uacce/uacce.c
··· 387 387 388 388 static unsigned int uacce_enable_sva(struct device *parent, unsigned int flags) 389 389 { 390 + int ret; 391 + 390 392 if (!(flags & UACCE_DEV_SVA)) 391 393 return flags; 392 394 393 395 flags &= ~UACCE_DEV_SVA; 394 396 395 - if (iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_IOPF)) 397 + ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_IOPF); 398 + if (ret) { 399 + dev_err(parent, "failed to enable IOPF feature! ret = %pe\n", ERR_PTR(ret)); 396 400 return flags; 401 + } 397 402 398 - if (iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA)) { 403 + ret = iommu_dev_enable_feature(parent, IOMMU_DEV_FEAT_SVA); 404 + if (ret) { 405 + dev_err(parent, "failed to enable SVA feature! ret = %pe\n", ERR_PTR(ret)); 399 406 iommu_dev_disable_feature(parent, IOMMU_DEV_FEAT_IOPF); 400 407 return flags; 401 408 }
+1 -1
drivers/misc/vmw_vmci/vmci_context.c
··· 107 107 context = kzalloc(sizeof(*context), GFP_KERNEL); 108 108 if (!context) { 109 109 pr_warn("Failed to allocate memory for VMCI context\n"); 110 - error = -EINVAL; 110 + error = -ENOMEM; 111 111 goto err_out; 112 112 } 113 113
-3
drivers/misc/xilinx_sdfec.c
··· 1013 1013 1014 1014 xsdfec = container_of(file->private_data, struct xsdfec_dev, miscdev); 1015 1015 1016 - if (!xsdfec) 1017 - return EPOLLNVAL | EPOLLHUP; 1018 - 1019 1016 poll_wait(file, &xsdfec->waitq, wait); 1020 1017 1021 1018 /* XSDFEC ISR detected an error */
+1 -3
drivers/net/ethernet/sun/ldmvsw.c
··· 404 404 return err; 405 405 } 406 406 407 - static int vsw_port_remove(struct vio_dev *vdev) 407 + static void vsw_port_remove(struct vio_dev *vdev) 408 408 { 409 409 struct vnet_port *port = dev_get_drvdata(&vdev->dev); 410 410 unsigned long flags; ··· 430 430 431 431 free_netdev(port->dev); 432 432 } 433 - 434 - return 0; 435 433 } 436 434 437 435 static void vsw_cleanup(void)
+1 -2
drivers/net/ethernet/sun/sunvnet.c
··· 510 510 return err; 511 511 } 512 512 513 - static int vnet_port_remove(struct vio_dev *vdev) 513 + static void vnet_port_remove(struct vio_dev *vdev) 514 514 { 515 515 struct vnet_port *port = dev_get_drvdata(&vdev->dev); 516 516 ··· 533 533 534 534 kfree(port); 535 535 } 536 - return 0; 537 536 } 538 537 539 538 static const struct vio_device_id vnet_port_match[] = {
+15 -8
drivers/nvmem/core.c
··· 180 180 [NVMEM_TYPE_EEPROM] = "EEPROM", 181 181 [NVMEM_TYPE_OTP] = "OTP", 182 182 [NVMEM_TYPE_BATTERY_BACKED] = "Battery backed", 183 + [NVMEM_TYPE_FRAM] = "FRAM", 183 184 }; 184 185 185 186 #ifdef CONFIG_DEBUG_LOCK_ALLOC ··· 359 358 360 359 if (!config->base_dev) 361 360 return -EINVAL; 361 + 362 + if (config->type == NVMEM_TYPE_FRAM) 363 + bin_attr_nvmem_eeprom_compat.attr.name = "fram"; 362 364 363 365 nvmem->eeprom = bin_attr_nvmem_eeprom_compat; 364 366 nvmem->eeprom.attr.mode = nvmem_bin_attr_get_umode(nvmem); ··· 690 686 continue; 691 687 if (len < 2 * sizeof(u32)) { 692 688 dev_err(dev, "nvmem: invalid reg on %pOF\n", child); 689 + of_node_put(child); 693 690 return -EINVAL; 694 691 } 695 692 696 693 cell = kzalloc(sizeof(*cell), GFP_KERNEL); 697 - if (!cell) 694 + if (!cell) { 695 + of_node_put(child); 698 696 return -ENOMEM; 697 + } 699 698 700 699 cell->nvmem = nvmem; 701 - cell->np = of_node_get(child); 702 700 cell->offset = be32_to_cpup(addr++); 703 701 cell->bytes = be32_to_cpup(addr); 704 702 cell->name = kasprintf(GFP_KERNEL, "%pOFn", child); ··· 721 715 cell->name, nvmem->stride); 722 716 /* Cells already added will be freed later. */ 723 717 kfree_const(cell->name); 724 - of_node_put(cell->np); 725 718 kfree(cell); 719 + of_node_put(child); 726 720 return -EINVAL; 727 721 } 728 722 723 + cell->np = of_node_get(child); 729 724 nvmem_cell_add(cell); 730 725 } 731 726 ··· 1615 1608 } 1616 1609 EXPORT_SYMBOL_GPL(nvmem_cell_read_u64); 1617 1610 1618 - static void *nvmem_cell_read_variable_common(struct device *dev, 1619 - const char *cell_id, 1620 - size_t max_len, size_t *len) 1611 + static const void *nvmem_cell_read_variable_common(struct device *dev, 1612 + const char *cell_id, 1613 + size_t max_len, size_t *len) 1621 1614 { 1622 1615 struct nvmem_cell *cell; 1623 1616 int nbits; ··· 1661 1654 u32 *val) 1662 1655 { 1663 1656 size_t len; 1664 - u8 *buf; 1657 + const u8 *buf; 1665 1658 int i; 1666 1659 1667 1660 buf = nvmem_cell_read_variable_common(dev, cell_id, sizeof(*val), &len); ··· 1692 1685 u64 *val) 1693 1686 { 1694 1687 size_t len; 1695 - u8 *buf; 1688 + const u8 *buf; 1696 1689 int i; 1697 1690 1698 1691 buf = nvmem_cell_read_variable_common(dev, cell_id, sizeof(*val), &len);
+5 -4
drivers/nvmem/qfprom.c
··· 122 122 .keepout = sc7280_qfprom_keepout, 123 123 .nkeepout = ARRAY_SIZE(sc7280_qfprom_keepout) 124 124 }; 125 + 125 126 /** 126 127 * qfprom_disable_fuse_blowing() - Undo enabling of fuse blowing. 127 128 * @priv: Our driver data. ··· 196 195 } 197 196 198 197 /* 199 - * Hardware requires 1.8V min for fuse blowing; this may be 200 - * a rail shared do don't specify a max--regulator constraints 201 - * will handle. 198 + * Hardware requires a minimum voltage for fuse blowing. 199 + * This may be a shared rail so don't specify a maximum. 200 + * Regulator constraints will cap to the actual maximum. 202 201 */ 203 202 ret = regulator_set_voltage(priv->vcc, qfprom_blow_uV, INT_MAX); 204 203 if (ret) { ··· 400 399 401 400 if (major_version == 7 && minor_version == 8) 402 401 priv->soc_data = &qfprom_7_8_data; 403 - if (major_version == 7 && minor_version == 15) 402 + else if (major_version == 7 && minor_version == 15) 404 403 priv->soc_data = &qfprom_7_15_data; 405 404 406 405 priv->vcc = devm_regulator_get(&pdev->dev, "vcc");
+1 -1
drivers/nvmem/sprd-efuse.c
··· 234 234 status = readl(efuse->base + SPRD_EFUSE_ERR_FLAG); 235 235 if (status) { 236 236 dev_err(efuse->dev, 237 - "write error status %d of block %d\n", ret, blk); 237 + "write error status %u of block %d\n", status, blk); 238 238 239 239 writel(SPRD_EFUSE_ERR_CLR_MASK, 240 240 efuse->base + SPRD_EFUSE_ERR_CLR);
+1
drivers/nvmem/sunxi_sid.c
··· 142 142 143 143 nvmem_cfg->dev = dev; 144 144 nvmem_cfg->name = "sunxi-sid"; 145 + nvmem_cfg->type = NVMEM_TYPE_OTP; 145 146 nvmem_cfg->read_only = true; 146 147 nvmem_cfg->size = cfg->size; 147 148 nvmem_cfg->word_size = 1;
+3 -8
drivers/parport/probe.c
··· 8 8 9 9 #include <linux/module.h> 10 10 #include <linux/parport.h> 11 - #include <linux/ctype.h> 12 11 #include <linux/string.h> 12 + #include <linux/string_helpers.h> 13 13 #include <linux/slab.h> 14 14 #include <linux/uaccess.h> 15 15 ··· 74 74 u = sep + strlen (sep) - 1; 75 75 while (u >= p && *u == ' ') 76 76 *u-- = '\0'; 77 - u = p; 78 - while (*u) { 79 - *u = toupper(*u); 80 - u++; 81 - } 77 + string_upper(p, p); 82 78 if (!strcmp(p, "MFG") || !strcmp(p, "MANUFACTURER")) { 83 79 kfree(info->mfr); 84 80 info->mfr = kstrdup(sep, GFP_KERNEL); ··· 86 90 87 91 kfree(info->class_name); 88 92 info->class_name = kstrdup(sep, GFP_KERNEL); 89 - for (u = sep; *u; u++) 90 - *u = toupper(*u); 93 + string_upper(sep, sep); 91 94 for (i = 0; classes[i].token; i++) { 92 95 if (!strcmp(classes[i].token, sep)) { 93 96 info->class = i;
+9
drivers/phy/Kconfig
··· 61 61 interface to interact with USB GEN-II and USB 3.x PHY that is part 62 62 of the Intel network SOC. 63 63 64 + config PHY_CAN_TRANSCEIVER 65 + tristate "CAN transceiver PHY" 66 + select GENERIC_PHY 67 + help 68 + This option enables support for CAN transceivers as a PHY. This 69 + driver provides function for putting the transceivers in various 70 + functional modes using gpios and sets the attribute max link 71 + rate, for CAN drivers. 72 + 64 73 source "drivers/phy/allwinner/Kconfig" 65 74 source "drivers/phy/amlogic/Kconfig" 66 75 source "drivers/phy/broadcom/Kconfig"
+1
drivers/phy/Makefile
··· 5 5 6 6 obj-$(CONFIG_GENERIC_PHY) += phy-core.o 7 7 obj-$(CONFIG_GENERIC_PHY_MIPI_DPHY) += phy-core-mipi-dphy.o 8 + obj-$(CONFIG_PHY_CAN_TRANSCEIVER) += phy-can-transceiver.o 8 9 obj-$(CONFIG_PHY_LPC18XX_USB_OTG) += phy-lpc18xx-usb-otg.o 9 10 obj-$(CONFIG_PHY_XGENE) += phy-xgene.o 10 11 obj-$(CONFIG_PHY_PISTACHIO_USB) += phy-pistachio-usb.o
+1 -3
drivers/phy/broadcom/phy-bcm-ns-usb3.c
··· 215 215 return err; 216 216 217 217 usb3->dmp = devm_ioremap_resource(dev, &res); 218 - if (IS_ERR(usb3->dmp)) { 219 - dev_err(dev, "Failed to map DMP regs\n"); 218 + if (IS_ERR(usb3->dmp)) 220 219 return PTR_ERR(usb3->dmp); 221 - } 222 220 223 221 usb3->phy = devm_phy_create(dev, NULL, &ops); 224 222 if (IS_ERR(usb3->phy)) {
+1 -3
drivers/phy/marvell/phy-mmp3-hsic.c
··· 47 47 48 48 resource = platform_get_resource(pdev, IORESOURCE_MEM, 0); 49 49 base = devm_ioremap_resource(dev, resource); 50 - if (IS_ERR(base)) { 51 - dev_err(dev, "failed to remap PHY regs\n"); 50 + if (IS_ERR(base)) 52 51 return PTR_ERR(base); 53 - } 54 52 55 53 phy = devm_phy_create(dev, NULL, &mmp3_hsic_phy_ops); 56 54 if (IS_ERR(phy)) {
+1 -3
drivers/phy/mediatek/phy-mtk-hdmi.c
··· 119 119 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 120 120 hdmi_phy->regs = devm_ioremap_resource(dev, mem); 121 121 if (IS_ERR(hdmi_phy->regs)) { 122 - ret = PTR_ERR(hdmi_phy->regs); 123 - dev_err(dev, "Failed to get memory resource: %d\n", ret); 124 - return ret; 122 + return PTR_ERR(hdmi_phy->regs); 125 123 } 126 124 127 125 ref_clk = devm_clk_get(dev, "pll_ref");
+1 -3
drivers/phy/mediatek/phy-mtk-mipi-dsi.c
··· 151 151 mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 152 152 mipi_tx->regs = devm_ioremap_resource(dev, mem); 153 153 if (IS_ERR(mipi_tx->regs)) { 154 - ret = PTR_ERR(mipi_tx->regs); 155 - dev_err(dev, "Failed to get memory resource: %d\n", ret); 156 - return ret; 154 + return PTR_ERR(mipi_tx->regs); 157 155 } 158 156 159 157 ref_clk = devm_clk_get(dev, NULL);
+146
drivers/phy/phy-can-transceiver.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * phy-can-transceiver.c - phy driver for CAN transceivers 4 + * 5 + * Copyright (C) 2021 Texas Instruments Incorporated - https://www.ti.com 6 + * 7 + */ 8 + #include<linux/phy/phy.h> 9 + #include<linux/platform_device.h> 10 + #include<linux/module.h> 11 + #include<linux/gpio.h> 12 + #include<linux/gpio/consumer.h> 13 + 14 + struct can_transceiver_data { 15 + u32 flags; 16 + #define CAN_TRANSCEIVER_STB_PRESENT BIT(0) 17 + #define CAN_TRANSCEIVER_EN_PRESENT BIT(1) 18 + }; 19 + 20 + struct can_transceiver_phy { 21 + struct phy *generic_phy; 22 + struct gpio_desc *standby_gpio; 23 + struct gpio_desc *enable_gpio; 24 + }; 25 + 26 + /* Power on function */ 27 + static int can_transceiver_phy_power_on(struct phy *phy) 28 + { 29 + struct can_transceiver_phy *can_transceiver_phy = phy_get_drvdata(phy); 30 + 31 + if (can_transceiver_phy->standby_gpio) 32 + gpiod_set_value_cansleep(can_transceiver_phy->standby_gpio, 0); 33 + if (can_transceiver_phy->enable_gpio) 34 + gpiod_set_value_cansleep(can_transceiver_phy->enable_gpio, 1); 35 + 36 + return 0; 37 + } 38 + 39 + /* Power off function */ 40 + static int can_transceiver_phy_power_off(struct phy *phy) 41 + { 42 + struct can_transceiver_phy *can_transceiver_phy = phy_get_drvdata(phy); 43 + 44 + if (can_transceiver_phy->standby_gpio) 45 + gpiod_set_value_cansleep(can_transceiver_phy->standby_gpio, 1); 46 + if (can_transceiver_phy->enable_gpio) 47 + gpiod_set_value_cansleep(can_transceiver_phy->enable_gpio, 0); 48 + 49 + return 0; 50 + } 51 + 52 + static const struct phy_ops can_transceiver_phy_ops = { 53 + .power_on = can_transceiver_phy_power_on, 54 + .power_off = can_transceiver_phy_power_off, 55 + .owner = THIS_MODULE, 56 + }; 57 + 58 + static const struct can_transceiver_data tcan1042_drvdata = { 59 + .flags = CAN_TRANSCEIVER_STB_PRESENT, 60 + }; 61 + 62 + static const struct can_transceiver_data tcan1043_drvdata = { 63 + .flags = CAN_TRANSCEIVER_STB_PRESENT | CAN_TRANSCEIVER_EN_PRESENT, 64 + }; 65 + 66 + static const struct of_device_id can_transceiver_phy_ids[] = { 67 + { 68 + .compatible = "ti,tcan1042", 69 + .data = &tcan1042_drvdata 70 + }, 71 + { 72 + .compatible = "ti,tcan1043", 73 + .data = &tcan1043_drvdata 74 + }, 75 + { } 76 + }; 77 + MODULE_DEVICE_TABLE(of, can_transceiver_phy_ids); 78 + 79 + static int can_transceiver_phy_probe(struct platform_device *pdev) 80 + { 81 + struct phy_provider *phy_provider; 82 + struct device *dev = &pdev->dev; 83 + struct can_transceiver_phy *can_transceiver_phy; 84 + const struct can_transceiver_data *drvdata; 85 + const struct of_device_id *match; 86 + struct phy *phy; 87 + struct gpio_desc *standby_gpio; 88 + struct gpio_desc *enable_gpio; 89 + u32 max_bitrate = 0; 90 + 91 + can_transceiver_phy = devm_kzalloc(dev, sizeof(struct can_transceiver_phy), GFP_KERNEL); 92 + if (!can_transceiver_phy) 93 + return -ENOMEM; 94 + 95 + match = of_match_node(can_transceiver_phy_ids, pdev->dev.of_node); 96 + drvdata = match->data; 97 + 98 + phy = devm_phy_create(dev, dev->of_node, 99 + &can_transceiver_phy_ops); 100 + if (IS_ERR(phy)) { 101 + dev_err(dev, "failed to create can transceiver phy\n"); 102 + return PTR_ERR(phy); 103 + } 104 + 105 + device_property_read_u32(dev, "max-bitrate", &max_bitrate); 106 + if (!max_bitrate) 107 + dev_warn(dev, "Invalid value for transceiver max bitrate. Ignoring bitrate limit\n"); 108 + phy->attrs.max_link_rate = max_bitrate; 109 + 110 + can_transceiver_phy->generic_phy = phy; 111 + 112 + if (drvdata->flags & CAN_TRANSCEIVER_STB_PRESENT) { 113 + standby_gpio = devm_gpiod_get(dev, "standby", GPIOD_OUT_HIGH); 114 + if (IS_ERR(standby_gpio)) 115 + return PTR_ERR(standby_gpio); 116 + can_transceiver_phy->standby_gpio = standby_gpio; 117 + } 118 + 119 + if (drvdata->flags & CAN_TRANSCEIVER_EN_PRESENT) { 120 + enable_gpio = devm_gpiod_get(dev, "enable", GPIOD_OUT_LOW); 121 + if (IS_ERR(enable_gpio)) 122 + return PTR_ERR(enable_gpio); 123 + can_transceiver_phy->enable_gpio = enable_gpio; 124 + } 125 + 126 + phy_set_drvdata(can_transceiver_phy->generic_phy, can_transceiver_phy); 127 + 128 + phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate); 129 + 130 + return PTR_ERR_OR_ZERO(phy_provider); 131 + } 132 + 133 + static struct platform_driver can_transceiver_phy_driver = { 134 + .probe = can_transceiver_phy_probe, 135 + .driver = { 136 + .name = "can-transceiver-phy", 137 + .of_match_table = can_transceiver_phy_ids, 138 + }, 139 + }; 140 + 141 + module_platform_driver(can_transceiver_phy_driver); 142 + 143 + MODULE_AUTHOR("Faiz Abbas <faiz_abbas@ti.com>"); 144 + MODULE_AUTHOR("Aswath Govindraju <a-govindraju@ti.com>"); 145 + MODULE_DESCRIPTION("CAN TRANSCEIVER PHY driver"); 146 + MODULE_LICENSE("GPL v2");
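Grounded in what the new driver actually reads — the `ti,tcan1042`/`ti,tcan1043` compatibles, an optional `max-bitrate` property, a `standby` GPIO, and (for tcan1043) an `enable` GPIO — a board devicetree node consuming this driver might look like the fragment below. This is an illustrative sketch, not an excerpt from the binding; the GPIO lines and bitrate value are made up.

```dts
transceiver0: can-phy {
	compatible = "ti,tcan1043";
	/* Driver reads this into phy->attrs.max_link_rate */
	max-bitrate = <5000000>;
	/* Matched by devm_gpiod_get(dev, "standby"/"enable", ...) */
	standby-gpios = <&gpio1 27 GPIO_ACTIVE_HIGH>;
	enable-gpios = <&gpio1 67 GPIO_ACTIVE_HIGH>;
	#phy-cells = <0>;
};
```

A CAN controller node would then reference it through a `phys = <&transceiver0>;` property, with power sequencing handled by the `power_on`/`power_off` ops above.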
+1 -1
drivers/phy/phy-core-mipi-dphy.c
··· 15 15 /* 16 16 * Minimum D-PHY timings based on MIPI D-PHY specification. Derived 17 17 * from the valid ranges specified in Section 6.9, Table 14, Page 41 18 - * of the D-PHY specification (v2.1). 18 + * of the D-PHY specification (v1.2). 19 19 */ 20 20 int phy_mipi_dphy_get_default_config(unsigned long pixel_clock, 21 21 unsigned int bpp,
+9 -7
drivers/phy/phy-core.c
··· 697 697 struct phy *phy; 698 698 struct device_link *link; 699 699 700 - if (string == NULL) { 701 - dev_WARN(dev, "missing string\n"); 702 - return ERR_PTR(-EINVAL); 703 - } 704 - 705 700 if (dev->of_node) { 706 - index = of_property_match_string(dev->of_node, "phy-names", 707 - string); 701 + if (string) 702 + index = of_property_match_string(dev->of_node, "phy-names", 703 + string); 704 + else 705 + index = 0; 708 706 phy = _of_phy_get(dev->of_node, index); 709 707 } else { 708 + if (string == NULL) { 709 + dev_WARN(dev, "missing string\n"); 710 + return ERR_PTR(-EINVAL); 711 + } 710 712 phy = phy_find(dev, string); 711 713 } 712 714 if (IS_ERR(phy))
+2 -1
drivers/phy/phy-xgene.c
··· 961 961 serdes_wr(ctx, lane, RXTX_REG1, val); 962 962 963 963 /* Latch VTT value based on the termination to ground and 964 - enable TX FIFO */ 964 + * enable TX FIFO 965 + */ 965 966 serdes_rd(ctx, lane, RXTX_REG2, &val); 966 967 val = RXTX_REG2_VTT_ENA_SET(val, 0x1); 967 968 val = RXTX_REG2_VTT_SEL_SET(val, 0x1);
+308 -7
drivers/phy/qualcomm/phy-qcom-qmp.c
··· 35 35 #define PLL_READY_GATE_EN BIT(3) 36 36 /* QPHY_PCS_STATUS bit */ 37 37 #define PHYSTATUS BIT(6) 38 + #define PHYSTATUS_4_20 BIT(7) 38 39 /* QPHY_PCS_READY_STATUS & QPHY_COM_PCS_READY_STATUS bit */ 39 40 #define PCS_READY BIT(0) 40 41 ··· 142 141 static const unsigned int msm8996_ufsphy_regs_layout[QPHY_LAYOUT_SIZE] = { 143 142 [QPHY_START_CTRL] = 0x00, 144 143 [QPHY_PCS_READY_STATUS] = 0x168, 144 + }; 145 + 146 + static const unsigned int ipq_pciephy_gen3_regs_layout[QPHY_LAYOUT_SIZE] = { 147 + [QPHY_SW_RESET] = 0x00, 148 + [QPHY_START_CTRL] = 0x44, 149 + [QPHY_PCS_STATUS] = 0x14, 150 + [QPHY_PCS_POWER_DOWN_CONTROL] = 0x40, 145 151 }; 146 152 147 153 static const unsigned int pciephy_regs_layout[QPHY_LAYOUT_SIZE] = { ··· 620 612 QMP_PHY_INIT_CFG(QPHY_LOCK_DETECT_CONFIG2, 0x1f), 621 613 QMP_PHY_INIT_CFG(QPHY_LOCK_DETECT_CONFIG3, 0x47), 622 614 QMP_PHY_INIT_CFG(QPHY_POWER_STATE_CONFIG2, 0x08), 615 + }; 616 + 617 + static const struct qmp_phy_init_tbl ipq6018_pcie_serdes_tbl[] = { 618 + QMP_PHY_INIT_CFG(QSERDES_PLL_SSC_PER1, 0x7d), 619 + QMP_PHY_INIT_CFG(QSERDES_PLL_SSC_PER2, 0x01), 620 + QMP_PHY_INIT_CFG(QSERDES_PLL_SSC_STEP_SIZE1_MODE0, 0x0a), 621 + QMP_PHY_INIT_CFG(QSERDES_PLL_SSC_STEP_SIZE2_MODE0, 0x05), 622 + QMP_PHY_INIT_CFG(QSERDES_PLL_SSC_STEP_SIZE1_MODE1, 0x08), 623 + QMP_PHY_INIT_CFG(QSERDES_PLL_SSC_STEP_SIZE2_MODE1, 0x04), 624 + QMP_PHY_INIT_CFG(QSERDES_PLL_BIAS_EN_CLKBUFLR_EN, 0x18), 625 + QMP_PHY_INIT_CFG(QSERDES_PLL_CLK_ENABLE1, 0x90), 626 + QMP_PHY_INIT_CFG(QSERDES_PLL_SYS_CLK_CTRL, 0x02), 627 + QMP_PHY_INIT_CFG(QSERDES_PLL_SYSCLK_BUF_ENABLE, 0x07), 628 + QMP_PHY_INIT_CFG(QSERDES_PLL_PLL_IVCO, 0x0f), 629 + QMP_PHY_INIT_CFG(QSERDES_PLL_LOCK_CMP1_MODE0, 0xd4), 630 + QMP_PHY_INIT_CFG(QSERDES_PLL_LOCK_CMP2_MODE0, 0x14), 631 + QMP_PHY_INIT_CFG(QSERDES_PLL_LOCK_CMP1_MODE1, 0xaa), 632 + QMP_PHY_INIT_CFG(QSERDES_PLL_LOCK_CMP2_MODE1, 0x29), 633 + QMP_PHY_INIT_CFG(QSERDES_PLL_BG_TRIM, 0x0f), 634 + QMP_PHY_INIT_CFG(QSERDES_PLL_CP_CTRL_MODE0, 0x09), 635 + 
QMP_PHY_INIT_CFG(QSERDES_PLL_CP_CTRL_MODE1, 0x09), 636 + QMP_PHY_INIT_CFG(QSERDES_PLL_PLL_RCTRL_MODE0, 0x16), 637 + QMP_PHY_INIT_CFG(QSERDES_PLL_PLL_RCTRL_MODE1, 0x16), 638 + QMP_PHY_INIT_CFG(QSERDES_PLL_PLL_CCTRL_MODE0, 0x28), 639 + QMP_PHY_INIT_CFG(QSERDES_PLL_PLL_CCTRL_MODE1, 0x28), 640 + QMP_PHY_INIT_CFG(QSERDES_PLL_BIAS_EN_CTRL_BY_PSM, 0x01), 641 + QMP_PHY_INIT_CFG(QSERDES_PLL_SYSCLK_EN_SEL, 0x08), 642 + QMP_PHY_INIT_CFG(QSERDES_PLL_RESETSM_CNTRL, 0x20), 643 + QMP_PHY_INIT_CFG(QSERDES_PLL_LOCK_CMP_EN, 0x42), 644 + QMP_PHY_INIT_CFG(QSERDES_PLL_DEC_START_MODE0, 0x68), 645 + QMP_PHY_INIT_CFG(QSERDES_PLL_DEC_START_MODE1, 0x53), 646 + QMP_PHY_INIT_CFG(QSERDES_PLL_DIV_FRAC_START1_MODE0, 0xab), 647 + QMP_PHY_INIT_CFG(QSERDES_PLL_DIV_FRAC_START2_MODE0, 0xaa), 648 + QMP_PHY_INIT_CFG(QSERDES_PLL_DIV_FRAC_START3_MODE0, 0x02), 649 + QMP_PHY_INIT_CFG(QSERDES_PLL_DIV_FRAC_START1_MODE1, 0x55), 650 + QMP_PHY_INIT_CFG(QSERDES_PLL_DIV_FRAC_START2_MODE1, 0x55), 651 + QMP_PHY_INIT_CFG(QSERDES_PLL_DIV_FRAC_START3_MODE1, 0x05), 652 + QMP_PHY_INIT_CFG(QSERDES_PLL_INTEGLOOP_GAIN0_MODE0, 0xa0), 653 + QMP_PHY_INIT_CFG(QSERDES_PLL_INTEGLOOP_GAIN0_MODE1, 0xa0), 654 + QMP_PHY_INIT_CFG(QSERDES_PLL_VCO_TUNE1_MODE0, 0x24), 655 + QMP_PHY_INIT_CFG(QSERDES_PLL_VCO_TUNE2_MODE0, 0x02), 656 + QMP_PHY_INIT_CFG(QSERDES_PLL_VCO_TUNE1_MODE1, 0xb4), 657 + QMP_PHY_INIT_CFG(QSERDES_PLL_VCO_TUNE2_MODE1, 0x03), 658 + QMP_PHY_INIT_CFG(QSERDES_PLL_CLK_SELECT, 0x32), 659 + QMP_PHY_INIT_CFG(QSERDES_PLL_HSCLK_SEL, 0x01), 660 + QMP_PHY_INIT_CFG(QSERDES_PLL_CORE_CLK_EN, 0x00), 661 + QMP_PHY_INIT_CFG(QSERDES_PLL_CMN_CONFIG, 0x06), 662 + QMP_PHY_INIT_CFG(QSERDES_PLL_SVS_MODE_CLK_SEL, 0x05), 663 + QMP_PHY_INIT_CFG(QSERDES_PLL_CORECLK_DIV_MODE1, 0x08), 664 + }; 665 + 666 + static const struct qmp_phy_init_tbl ipq6018_pcie_tx_tbl[] = { 667 + QMP_PHY_INIT_CFG(QSERDES_TX0_RES_CODE_LANE_OFFSET_TX, 0x02), 668 + QMP_PHY_INIT_CFG(QSERDES_TX0_LANE_MODE_1, 0x06), 669 + QMP_PHY_INIT_CFG(QSERDES_TX0_RCV_DETECT_LVL_2, 0x12), 670 
+ }; 671 + 672 + static const struct qmp_phy_init_tbl ipq6018_pcie_rx_tbl[] = { 673 + QMP_PHY_INIT_CFG(QSERDES_RX0_UCDR_FO_GAIN, 0x0c), 674 + QMP_PHY_INIT_CFG(QSERDES_RX0_UCDR_SO_GAIN, 0x02), 675 + QMP_PHY_INIT_CFG(QSERDES_RX0_UCDR_SO_SATURATION_AND_ENABLE, 0x7f), 676 + QMP_PHY_INIT_CFG(QSERDES_RX0_UCDR_PI_CONTROLS, 0x70), 677 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_EQU_ADAPTOR_CNTRL2, 0x61), 678 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_EQU_ADAPTOR_CNTRL3, 0x04), 679 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_EQU_ADAPTOR_CNTRL4, 0x1e), 680 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_IDAC_TSETTLE_LOW, 0xc0), 681 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_IDAC_TSETTLE_HIGH, 0x00), 682 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_EQ_OFFSET_ADAPTOR_CNTRL1, 0x73), 683 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_OFFSET_ADAPTOR_CNTRL2, 0x80), 684 + QMP_PHY_INIT_CFG(QSERDES_RX0_SIGDET_ENABLES, 0x1c), 685 + QMP_PHY_INIT_CFG(QSERDES_RX0_SIGDET_CNTRL, 0x03), 686 + QMP_PHY_INIT_CFG(QSERDES_RX0_SIGDET_DEGLITCH_CNTRL, 0x14), 687 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_00_LOW, 0xf0), 688 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_00_HIGH, 0x01), 689 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_00_HIGH2, 0x2f), 690 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_00_HIGH3, 0xd3), 691 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_00_HIGH4, 0x40), 692 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_01_LOW, 0x01), 693 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_01_HIGH, 0x02), 694 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_01_HIGH2, 0xc8), 695 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_01_HIGH3, 0x09), 696 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_01_HIGH4, 0xb1), 697 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_10_LOW, 0x00), 698 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_10_HIGH, 0x02), 699 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_10_HIGH2, 0xc8), 700 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_10_HIGH3, 0x09), 701 + QMP_PHY_INIT_CFG(QSERDES_RX0_RX_MODE_10_HIGH4, 0xb1), 702 + QMP_PHY_INIT_CFG(QSERDES_RX0_DFE_EN_TIMER, 0x04), 703 + }; 704 + 705 + static const struct qmp_phy_init_tbl ipq6018_pcie_pcs_tbl[] = { 
706 + QMP_PHY_INIT_CFG(PCS_COM_FLL_CNTRL1, 0x01), 707 + QMP_PHY_INIT_CFG(PCS_COM_REFGEN_REQ_CONFIG1, 0x0d), 708 + QMP_PHY_INIT_CFG(PCS_COM_G12S1_TXDEEMPH_M3P5DB, 0x10), 709 + QMP_PHY_INIT_CFG(PCS_COM_RX_SIGDET_LVL, 0xaa), 710 + QMP_PHY_INIT_CFG(PCS_COM_P2U3_WAKEUP_DLY_TIME_AUXCLK_L, 0x01), 711 + QMP_PHY_INIT_CFG(PCS_COM_RX_DCC_CAL_CONFIG, 0x01), 712 + QMP_PHY_INIT_CFG(PCS_COM_EQ_CONFIG5, 0x01), 713 + QMP_PHY_INIT_CFG(PCS_PCIE_POWER_STATE_CONFIG2, 0x0d), 714 + QMP_PHY_INIT_CFG(PCS_PCIE_POWER_STATE_CONFIG4, 0x07), 715 + QMP_PHY_INIT_CFG(PCS_PCIE_ENDPOINT_REFCLK_DRIVE, 0xc1), 716 + QMP_PHY_INIT_CFG(PCS_PCIE_L1P1_WAKEUP_DLY_TIME_AUXCLK_L, 0x01), 717 + QMP_PHY_INIT_CFG(PCS_PCIE_L1P2_WAKEUP_DLY_TIME_AUXCLK_L, 0x01), 718 + QMP_PHY_INIT_CFG(PCS_PCIE_OSC_DTCT_ACTIONS, 0x00), 719 + QMP_PHY_INIT_CFG(PCS_PCIE_EQ_CONFIG1, 0x11), 720 + QMP_PHY_INIT_CFG(PCS_PCIE_PRESET_P10_PRE, 0x00), 721 + QMP_PHY_INIT_CFG(PCS_PCIE_PRESET_P10_POST, 0x58), 623 722 }; 624 723 625 724 static const struct qmp_phy_init_tbl ipq8074_pcie_serdes_tbl[] = { ··· 2225 2110 QMP_PHY_INIT_CFG(QSERDES_V4_RX_GM_CAL, 0x1f), 2226 2111 }; 2227 2112 2113 + static const struct qmp_phy_init_tbl sdx55_qmp_pcie_serdes_tbl[] = { 2114 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_BG_TIMER, 0x02), 2115 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_BIAS_EN_CLKBUFLR_EN, 0x18), 2116 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_SYS_CLK_CTRL, 0x07), 2117 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_PLL_IVCO, 0x0f), 2118 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_CP_CTRL_MODE0, 0x0a), 2119 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_CP_CTRL_MODE1, 0x0a), 2120 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_PLL_RCTRL_MODE0, 0x19), 2121 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_PLL_RCTRL_MODE1, 0x19), 2122 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_PLL_CCTRL_MODE0, 0x03), 2123 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_PLL_CCTRL_MODE1, 0x03), 2124 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_SYSCLK_EN_SEL, 0x00), 2125 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_LOCK_CMP_EN, 0x46), 2126 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_LOCK_CMP_CFG, 0x04), 2127 + 
QMP_PHY_INIT_CFG(QSERDES_V4_COM_LOCK_CMP1_MODE0, 0x7f), 2128 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_LOCK_CMP2_MODE0, 0x02), 2129 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_LOCK_CMP1_MODE1, 0xff), 2130 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_LOCK_CMP2_MODE1, 0x04), 2131 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_DEC_START_MODE0, 0x4b), 2132 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_DEC_START_MODE1, 0x50), 2133 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_DIV_FRAC_START3_MODE0, 0x00), 2134 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_INTEGLOOP_GAIN0_MODE0, 0xfb), 2135 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_INTEGLOOP_GAIN1_MODE0, 0x01), 2136 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_INTEGLOOP_GAIN0_MODE1, 0xfb), 2137 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_INTEGLOOP_GAIN1_MODE1, 0x01), 2138 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_VCO_TUNE_MAP, 0x02), 2139 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_HSCLK_SEL, 0x12), 2140 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_HSCLK_HS_SWITCH_SEL, 0x00), 2141 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_CORECLK_DIV_MODE0, 0x05), 2142 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_CORECLK_DIV_MODE1, 0x04), 2143 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_CMN_CONFIG, 0x04), 2144 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_CMN_MISC1, 0x88), 2145 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_INTERNAL_DIG_CORECLK_DIV, 0x03), 2146 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_CMN_MODE, 0x17), 2147 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_VCO_DC_LEVEL_CTRL, 0x0b), 2148 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE1_MODE0, 0x56), 2149 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE2_MODE0, 0x1d), 2150 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE1_MODE1, 0x4b), 2151 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE2_MODE1, 0x1f), 2152 + QMP_PHY_INIT_CFG(QSERDES_V4_COM_BIN_VCOCAL_HSCLK_SEL, 0x22), 2153 + }; 2154 + 2155 + static const struct qmp_phy_init_tbl sdx55_qmp_pcie_tx_tbl[] = { 2156 + QMP_PHY_INIT_CFG(QSERDES_V4_20_TX_LANE_MODE_1, 0x05), 2157 + QMP_PHY_INIT_CFG(QSERDES_V4_20_TX_LANE_MODE_2, 0xf6), 2158 + QMP_PHY_INIT_CFG(QSERDES_V4_20_TX_LANE_MODE_3, 0x13), 2159 + 
QMP_PHY_INIT_CFG(QSERDES_V4_20_TX_VMODE_CTRL1, 0x00), 2160 + QMP_PHY_INIT_CFG(QSERDES_V4_20_TX_PI_QEC_CTRL, 0x00), 2161 + }; 2162 + 2163 + static const struct qmp_phy_init_tbl sdx55_qmp_pcie_rx_tbl[] = { 2164 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_FO_GAIN_RATE2, 0x0c), 2165 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_UCDR_PI_CONTROLS, 0x16), 2166 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_AUX_DATA_TCOARSE_TFINE, 0x7f), 2167 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_DFE_3, 0x55), 2168 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_DFE_DAC_ENABLE1, 0x0c), 2169 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_DFE_DAC_ENABLE2, 0x00), 2170 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_VGA_CAL_CNTRL2, 0x08), 2171 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_EQ_OFFSET_ADAPTOR_CNTRL1, 0x27), 2172 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B1, 0x1a), 2173 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B2, 0x5a), 2174 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B3, 0x09), 2175 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B4, 0x37), 2176 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE2_B0, 0xbd), 2177 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE2_B1, 0xf9), 2178 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE2_B2, 0xbf), 2179 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE2_B3, 0xce), 2180 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE2_B4, 0x62), 2181 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE3_B0, 0xbf), 2182 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE3_B1, 0x7d), 2183 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE3_B2, 0xbf), 2184 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE3_B3, 0xcf), 2185 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_RX_MODE_RATE3_B4, 0xd6), 2186 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_PHPRE_CTRL, 0xa0), 2187 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_DFE_CTLE_POST_CAL_OFFSET, 0x38), 2188 + QMP_PHY_INIT_CFG(QSERDES_V4_20_RX_MARG_COARSE_CTRL2, 0x12), 2189 + }; 2190 + 2191 + static const struct qmp_phy_init_tbl sdx55_qmp_pcie_pcs_tbl[] = { 2192 + 
QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_RX_SIGDET_LVL, 0x77), 2193 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_EQ_CONFIG2, 0x01), 2194 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_EQ_CONFIG4, 0x16), 2195 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_EQ_CONFIG5, 0x02), 2196 + }; 2197 + 2198 + static const struct qmp_phy_init_tbl sdx55_qmp_pcie_pcs_misc_tbl[] = { 2199 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_PCIE_EQ_CONFIG1, 0x17), 2200 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_PCIE_G3_RXEQEVAL_TIME, 0x13), 2201 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_PCIE_G4_RXEQEVAL_TIME, 0x13), 2202 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_PCIE_G4_EQ_CONFIG2, 0x01), 2203 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_PCIE_G4_EQ_CONFIG5, 0x02), 2204 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_LANE1_INSIG_SW_CTRL2, 0x00), 2205 + QMP_PHY_INIT_CFG(QPHY_V4_20_PCS_LANE1_INSIG_MX_CTRL2, 0x00), 2206 + }; 2207 + 2228 2208 static const struct qmp_phy_init_tbl sm8350_ufsphy_serdes_tbl[] = { 2229 2209 QMP_PHY_INIT_CFG(QSERDES_V5_COM_SYSCLK_EN_SEL, 0xd9), 2230 2210 QMP_PHY_INIT_CFG(QSERDES_V5_COM_HSCLK_SEL, 0x11), ··· 2621 2411 unsigned int start_ctrl; 2622 2412 unsigned int pwrdn_ctrl; 2623 2413 unsigned int mask_com_pcs_ready; 2414 + /* bit offset of PHYSTATUS in QPHY_PCS_STATUS register */ 2415 + unsigned int phy_status; 2624 2416 2625 2417 /* true, if PHY has a separate PHY_COM control block */ 2626 2418 bool has_phy_com_ctrl; ··· 2836 2624 2837 2625 .start_ctrl = SERDES_START | PCS_START, 2838 2626 .pwrdn_ctrl = SW_PWRDN, 2627 + .phy_status = PHYSTATUS, 2839 2628 }; 2840 2629 2841 2630 static const struct qmp_phy_cfg msm8996_pciephy_cfg = { ··· 2862 2649 .start_ctrl = PCS_START | PLL_READY_GATE_EN, 2863 2650 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 2864 2651 .mask_com_pcs_ready = PCS_READY, 2652 + .phy_status = PHYSTATUS, 2865 2653 2866 2654 .has_phy_com_ctrl = true, 2867 2655 .has_lane_rst = true, ··· 2892 2678 2893 2679 .start_ctrl = SERDES_START, 2894 2680 .pwrdn_ctrl = SW_PWRDN, 2681 + .phy_status = PHYSTATUS, 2895 2682 2896 2683 .no_pcs_sw_reset = true, 2897 2684 }; ··· 
2919 2704 2920 2705 .start_ctrl = SERDES_START | PCS_START, 2921 2706 .pwrdn_ctrl = SW_PWRDN, 2707 + .phy_status = PHYSTATUS, 2922 2708 }; 2923 2709 2924 2710 static const char * const ipq8074_pciephy_clk_l[] = { ··· 2949 2733 .vreg_list = NULL, 2950 2734 .num_vregs = 0, 2951 2735 .regs = pciephy_regs_layout, 2736 + 2737 + .start_ctrl = SERDES_START | PCS_START, 2738 + .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 2739 + .phy_status = PHYSTATUS, 2740 + 2741 + .has_phy_com_ctrl = false, 2742 + .has_lane_rst = false, 2743 + .has_pwrdn_delay = true, 2744 + .pwrdn_delay_min = 995, /* us */ 2745 + .pwrdn_delay_max = 1005, /* us */ 2746 + }; 2747 + 2748 + static const struct qmp_phy_cfg ipq6018_pciephy_cfg = { 2749 + .type = PHY_TYPE_PCIE, 2750 + .nlanes = 1, 2751 + 2752 + .serdes_tbl = ipq6018_pcie_serdes_tbl, 2753 + .serdes_tbl_num = ARRAY_SIZE(ipq6018_pcie_serdes_tbl), 2754 + .tx_tbl = ipq6018_pcie_tx_tbl, 2755 + .tx_tbl_num = ARRAY_SIZE(ipq6018_pcie_tx_tbl), 2756 + .rx_tbl = ipq6018_pcie_rx_tbl, 2757 + .rx_tbl_num = ARRAY_SIZE(ipq6018_pcie_rx_tbl), 2758 + .pcs_tbl = ipq6018_pcie_pcs_tbl, 2759 + .pcs_tbl_num = ARRAY_SIZE(ipq6018_pcie_pcs_tbl), 2760 + .clk_list = ipq8074_pciephy_clk_l, 2761 + .num_clks = ARRAY_SIZE(ipq8074_pciephy_clk_l), 2762 + .reset_list = ipq8074_pciephy_reset_l, 2763 + .num_resets = ARRAY_SIZE(ipq8074_pciephy_reset_l), 2764 + .vreg_list = NULL, 2765 + .num_vregs = 0, 2766 + .regs = ipq_pciephy_gen3_regs_layout, 2952 2767 2953 2768 .start_ctrl = SERDES_START | PCS_START, 2954 2769 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, ··· 3015 2768 3016 2769 .start_ctrl = PCS_START | SERDES_START, 3017 2770 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 2771 + .phy_status = PHYSTATUS, 3018 2772 3019 2773 .has_pwrdn_delay = true, 3020 2774 .pwrdn_delay_min = 995, /* us */ ··· 3044 2796 3045 2797 .start_ctrl = PCS_START | SERDES_START, 3046 2798 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 2799 + .phy_status = PHYSTATUS, 3047 2800 3048 2801 .has_pwrdn_delay = true, 3049 2802 
.pwrdn_delay_min = 995, /* us */ ··· 3083 2834 3084 2835 .start_ctrl = PCS_START | SERDES_START, 3085 2836 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 2837 + .phy_status = PHYSTATUS, 3086 2838 3087 2839 .has_pwrdn_delay = true, 3088 2840 .pwrdn_delay_min = 995, /* us */ ··· 3122 2872 3123 2873 .start_ctrl = PCS_START | SERDES_START, 3124 2874 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 2875 + .phy_status = PHYSTATUS, 3125 2876 3126 2877 .is_dual_lane_phy = true, 3127 2878 .has_pwrdn_delay = true, ··· 3152 2901 3153 2902 .start_ctrl = SERDES_START | PCS_START, 3154 2903 .pwrdn_ctrl = SW_PWRDN, 2904 + .phy_status = PHYSTATUS, 3155 2905 3156 2906 .has_pwrdn_delay = true, 3157 2907 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3184 2932 3185 2933 .start_ctrl = SERDES_START | PCS_START, 3186 2934 .pwrdn_ctrl = SW_PWRDN, 2935 + .phy_status = PHYSTATUS, 3187 2936 3188 2937 .has_pwrdn_delay = true, 3189 2938 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3256 3003 3257 3004 .start_ctrl = SERDES_START | PCS_START, 3258 3005 .pwrdn_ctrl = SW_PWRDN, 3006 + .phy_status = PHYSTATUS, 3259 3007 3260 3008 .has_pwrdn_delay = true, 3261 3009 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3283 3029 3284 3030 .start_ctrl = SERDES_START, 3285 3031 .pwrdn_ctrl = SW_PWRDN, 3032 + .phy_status = PHYSTATUS, 3286 3033 3287 3034 .is_dual_lane_phy = true, 3288 3035 .no_pcs_sw_reset = true, ··· 3311 3056 3312 3057 .start_ctrl = SERDES_START | PCS_START, 3313 3058 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 3059 + .phy_status = PHYSTATUS, 3314 3060 }; 3315 3061 3316 3062 static const struct qmp_phy_cfg msm8998_usb3phy_cfg = { ··· 3336 3080 3337 3081 .start_ctrl = SERDES_START | PCS_START, 3338 3082 .pwrdn_ctrl = SW_PWRDN, 3083 + .phy_status = PHYSTATUS, 3339 3084 3340 3085 .is_dual_lane_phy = true, 3341 3086 }; ··· 3361 3104 3362 3105 .start_ctrl = SERDES_START, 3363 3106 .pwrdn_ctrl = SW_PWRDN, 3107 + .phy_status = PHYSTATUS, 3364 3108 3365 3109 .is_dual_lane_phy = true, 3366 3110 }; ··· 3388 
3130 3389 3131 .start_ctrl = SERDES_START | PCS_START, 3390 3132 .pwrdn_ctrl = SW_PWRDN, 3133 + .phy_status = PHYSTATUS, 3134 + 3391 3135 3392 3136 .has_pwrdn_delay = true, 3393 3137 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3421 3161 3422 3162 .start_ctrl = SERDES_START | PCS_START, 3423 3163 .pwrdn_ctrl = SW_PWRDN, 3164 + .phy_status = PHYSTATUS, 3424 3165 3425 3166 .has_pwrdn_delay = true, 3426 3167 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3450 3189 3451 3190 .start_ctrl = SERDES_START | PCS_START, 3452 3191 .pwrdn_ctrl = SW_PWRDN, 3192 + .phy_status = PHYSTATUS, 3453 3193 3454 3194 .has_pwrdn_delay = true, 3455 3195 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3482 3220 3483 3221 .start_ctrl = SERDES_START | PCS_START, 3484 3222 .pwrdn_ctrl = SW_PWRDN, 3223 + .phy_status = PHYSTATUS, 3485 3224 3486 3225 .has_pwrdn_delay = true, 3487 3226 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3551 3288 3552 3289 .start_ctrl = SERDES_START | PCS_START, 3553 3290 .pwrdn_ctrl = SW_PWRDN, 3291 + .phy_status = PHYSTATUS, 3554 3292 3555 3293 .has_pwrdn_delay = true, 3556 3294 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, 3557 3295 .pwrdn_delay_max = POWER_DOWN_DELAY_US_MAX, 3296 + }; 3297 + 3298 + static const struct qmp_phy_cfg sdx55_qmp_pciephy_cfg = { 3299 + .type = PHY_TYPE_PCIE, 3300 + .nlanes = 2, 3301 + 3302 + .serdes_tbl = sdx55_qmp_pcie_serdes_tbl, 3303 + .serdes_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_serdes_tbl), 3304 + .tx_tbl = sdx55_qmp_pcie_tx_tbl, 3305 + .tx_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_tx_tbl), 3306 + .rx_tbl = sdx55_qmp_pcie_rx_tbl, 3307 + .rx_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_rx_tbl), 3308 + .pcs_tbl = sdx55_qmp_pcie_pcs_tbl, 3309 + .pcs_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_pcs_tbl), 3310 + .pcs_misc_tbl = sdx55_qmp_pcie_pcs_misc_tbl, 3311 + .pcs_misc_tbl_num = ARRAY_SIZE(sdx55_qmp_pcie_pcs_misc_tbl), 3312 + .clk_list = sdm845_pciephy_clk_l, 3313 + .num_clks = ARRAY_SIZE(sdm845_pciephy_clk_l), 3314 + .reset_list = sdm845_pciephy_reset_l, 
3315 + .num_resets = ARRAY_SIZE(sdm845_pciephy_reset_l), 3316 + .vreg_list = qmp_phy_vreg_l, 3317 + .num_vregs = ARRAY_SIZE(qmp_phy_vreg_l), 3318 + .regs = sm8250_pcie_regs_layout, 3319 + 3320 + .start_ctrl = PCS_START | SERDES_START, 3321 + .pwrdn_ctrl = SW_PWRDN, 3322 + .phy_status = PHYSTATUS_4_20, 3323 + 3324 + .is_dual_lane_phy = true, 3325 + .has_pwrdn_delay = true, 3326 + .pwrdn_delay_min = 995, /* us */ 3327 + .pwrdn_delay_max = 1005, /* us */ 3558 3328 }; 3559 3329 3560 3330 static const struct qmp_phy_cfg sm8350_ufsphy_cfg = { ··· 3610 3314 3611 3315 .start_ctrl = SERDES_START, 3612 3316 .pwrdn_ctrl = SW_PWRDN, 3317 + .phy_status = PHYSTATUS, 3613 3318 3614 3319 .is_dual_lane_phy = true, 3615 3320 }; ··· 3637 3340 3638 3341 .start_ctrl = SERDES_START | PCS_START, 3639 3342 .pwrdn_ctrl = SW_PWRDN, 3343 + .phy_status = PHYSTATUS, 3640 3344 3641 3345 .has_pwrdn_delay = true, 3642 3346 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 3669 3371 3670 3372 .start_ctrl = SERDES_START | PCS_START, 3671 3373 .pwrdn_ctrl = SW_PWRDN, 3374 + .phy_status = PHYSTATUS, 3672 3375 3673 3376 .has_pwrdn_delay = true, 3674 3377 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 4292 3993 } 4293 3994 4294 3995 ret = clk_bulk_prepare_enable(cfg->num_clks, qmp->clks); 4295 - if (ret) { 4296 - dev_err(qmp->dev, "failed to enable clks, err=%d\n", ret); 3996 + if (ret) 4297 3997 goto err_rst; 4298 - } 4299 3998 4300 3999 if (cfg->has_phy_dp_com_ctrl) { 4301 4000 qphy_setbits(dp_com, QPHY_V3_DP_COM_POWER_DOWN_CTRL, ··· 4535 4238 ready = PCS_READY; 4536 4239 } else { 4537 4240 status = pcs + cfg->regs[QPHY_PCS_STATUS]; 4538 - mask = PHYSTATUS; 4241 + mask = cfg->phy_status; 4539 4242 ready = 0; 4540 4243 } 4541 4244 ··· 4727 4430 } 4728 4431 4729 4432 ret = clk_bulk_prepare_enable(cfg->num_clks, qmp->clks); 4730 - if (ret) { 4731 - dev_err(qmp->dev, "failed to enable clks, err=%d\n", ret); 4433 + if (ret) 4732 4434 return ret; 4733 - } 4734 4435 4735 4436 ret = 
clk_prepare_enable(qphy->pipe_clk); 4736 4437 if (ret) { ··· 5223 4928 .compatible = "qcom,ipq8074-qmp-pcie-phy", 5224 4929 .data = &ipq8074_pciephy_cfg, 5225 4930 }, { 4931 + .compatible = "qcom,ipq6018-qmp-pcie-phy", 4932 + .data = &ipq6018_pciephy_cfg, 4933 + }, { 5226 4934 .compatible = "qcom,sc7180-qmp-usb3-phy", 5227 4935 .data = &sc7180_usb3phy_cfg, 5228 4936 }, { ··· 5288 4990 }, { 5289 4991 .compatible = "qcom,sm8250-qmp-modem-pcie-phy", 5290 4992 .data = &sm8250_qmp_gen3x2_pciephy_cfg, 4993 + }, { 4994 + .compatible = "qcom,sdx55-qmp-pcie-phy", 4995 + .data = &sdx55_qmp_pciephy_cfg, 5291 4996 }, { 5292 4997 .compatible = "qcom,sdx55-qmp-usb3-uni-phy", 5293 4998 .data = &sdx55_usb3_uniphy_cfg,
+188 -1
drivers/phy/qualcomm/phy-qcom-qmp.h
··· 6 6 #ifndef QCOM_PHY_QMP_H_ 7 7 #define QCOM_PHY_QMP_H_ 8 8 9 + /* QMP V2 PHY for PCIE gen3 ports - QSERDES PLL registers */ 10 + 11 + #define QSERDES_PLL_BG_TIMER 0x00c 12 + #define QSERDES_PLL_SSC_PER1 0x01c 13 + #define QSERDES_PLL_SSC_PER2 0x020 14 + #define QSERDES_PLL_SSC_STEP_SIZE1_MODE0 0x024 15 + #define QSERDES_PLL_SSC_STEP_SIZE2_MODE0 0x028 16 + #define QSERDES_PLL_SSC_STEP_SIZE1_MODE1 0x02c 17 + #define QSERDES_PLL_SSC_STEP_SIZE2_MODE1 0x030 18 + #define QSERDES_PLL_BIAS_EN_CLKBUFLR_EN 0x03c 19 + #define QSERDES_PLL_CLK_ENABLE1 0x040 20 + #define QSERDES_PLL_SYS_CLK_CTRL 0x044 21 + #define QSERDES_PLL_SYSCLK_BUF_ENABLE 0x048 22 + #define QSERDES_PLL_PLL_IVCO 0x050 23 + #define QSERDES_PLL_LOCK_CMP1_MODE0 0x054 24 + #define QSERDES_PLL_LOCK_CMP2_MODE0 0x058 25 + #define QSERDES_PLL_LOCK_CMP1_MODE1 0x060 26 + #define QSERDES_PLL_LOCK_CMP2_MODE1 0x064 27 + #define QSERDES_PLL_BG_TRIM 0x074 28 + #define QSERDES_PLL_CLK_EP_DIV_MODE0 0x078 29 + #define QSERDES_PLL_CLK_EP_DIV_MODE1 0x07c 30 + #define QSERDES_PLL_CP_CTRL_MODE0 0x080 31 + #define QSERDES_PLL_CP_CTRL_MODE1 0x084 32 + #define QSERDES_PLL_PLL_RCTRL_MODE0 0x088 33 + #define QSERDES_PLL_PLL_RCTRL_MODE1 0x08C 34 + #define QSERDES_PLL_PLL_CCTRL_MODE0 0x090 35 + #define QSERDES_PLL_PLL_CCTRL_MODE1 0x094 36 + #define QSERDES_PLL_BIAS_EN_CTRL_BY_PSM 0x0a4 37 + #define QSERDES_PLL_SYSCLK_EN_SEL 0x0a8 38 + #define QSERDES_PLL_RESETSM_CNTRL 0x0b0 39 + #define QSERDES_PLL_LOCK_CMP_EN 0x0c4 40 + #define QSERDES_PLL_DEC_START_MODE0 0x0cc 41 + #define QSERDES_PLL_DEC_START_MODE1 0x0d0 42 + #define QSERDES_PLL_DIV_FRAC_START1_MODE0 0x0d8 43 + #define QSERDES_PLL_DIV_FRAC_START2_MODE0 0x0dc 44 + #define QSERDES_PLL_DIV_FRAC_START3_MODE0 0x0e0 45 + #define QSERDES_PLL_DIV_FRAC_START1_MODE1 0x0e4 46 + #define QSERDES_PLL_DIV_FRAC_START2_MODE1 0x0e8 47 + #define QSERDES_PLL_DIV_FRAC_START3_MODE1 0x0eC 48 + #define QSERDES_PLL_INTEGLOOP_GAIN0_MODE0 0x100 49 + #define QSERDES_PLL_INTEGLOOP_GAIN1_MODE0 0x104 50 + 
#define QSERDES_PLL_INTEGLOOP_GAIN0_MODE1 0x108 51 + #define QSERDES_PLL_INTEGLOOP_GAIN1_MODE1 0x10c 52 + #define QSERDES_PLL_VCO_TUNE_MAP 0x120 53 + #define QSERDES_PLL_VCO_TUNE1_MODE0 0x124 54 + #define QSERDES_PLL_VCO_TUNE2_MODE0 0x128 55 + #define QSERDES_PLL_VCO_TUNE1_MODE1 0x12c 56 + #define QSERDES_PLL_VCO_TUNE2_MODE1 0x130 57 + #define QSERDES_PLL_VCO_TUNE_TIMER1 0x13c 58 + #define QSERDES_PLL_VCO_TUNE_TIMER2 0x140 59 + #define QSERDES_PLL_CLK_SELECT 0x16c 60 + #define QSERDES_PLL_HSCLK_SEL 0x170 61 + #define QSERDES_PLL_CORECLK_DIV 0x17c 62 + #define QSERDES_PLL_CORE_CLK_EN 0x184 63 + #define QSERDES_PLL_CMN_CONFIG 0x18c 64 + #define QSERDES_PLL_SVS_MODE_CLK_SEL 0x194 65 + #define QSERDES_PLL_CORECLK_DIV_MODE1 0x1b4 66 + 67 + /* QMP V2 PHY for PCIE gen3 ports - QSERDES TX registers */ 68 + 69 + #define QSERDES_TX0_RES_CODE_LANE_OFFSET_TX 0x03c 70 + #define QSERDES_TX0_HIGHZ_DRVR_EN 0x058 71 + #define QSERDES_TX0_LANE_MODE_1 0x084 72 + #define QSERDES_TX0_RCV_DETECT_LVL_2 0x09c 73 + 74 + /* QMP V2 PHY for PCIE gen3 ports - QSERDES RX registers */ 75 + 76 + #define QSERDES_RX0_UCDR_FO_GAIN 0x008 77 + #define QSERDES_RX0_UCDR_SO_GAIN 0x014 78 + #define QSERDES_RX0_UCDR_SO_SATURATION_AND_ENABLE 0x034 79 + #define QSERDES_RX0_UCDR_PI_CONTROLS 0x044 80 + #define QSERDES_RX0_RX_EQU_ADAPTOR_CNTRL2 0x0ec 81 + #define QSERDES_RX0_RX_EQU_ADAPTOR_CNTRL3 0x0f0 82 + #define QSERDES_RX0_RX_EQU_ADAPTOR_CNTRL4 0x0f4 83 + #define QSERDES_RX0_RX_IDAC_TSETTLE_LOW 0x0f8 84 + #define QSERDES_RX0_RX_IDAC_TSETTLE_HIGH 0x0fc 85 + #define QSERDES_RX0_RX_EQ_OFFSET_ADAPTOR_CNTRL1 0x110 86 + #define QSERDES_RX0_RX_OFFSET_ADAPTOR_CNTRL2 0x114 87 + #define QSERDES_RX0_SIGDET_ENABLES 0x118 88 + #define QSERDES_RX0_SIGDET_CNTRL 0x11c 89 + #define QSERDES_RX0_SIGDET_DEGLITCH_CNTRL 0x124 90 + #define QSERDES_RX0_RX_MODE_00_LOW 0x170 91 + #define QSERDES_RX0_RX_MODE_00_HIGH 0x174 92 + #define QSERDES_RX0_RX_MODE_00_HIGH2 0x178 93 + #define QSERDES_RX0_RX_MODE_00_HIGH3 0x17c 94 + #define 
QSERDES_RX0_RX_MODE_00_HIGH4 0x180 95 + #define QSERDES_RX0_RX_MODE_01_LOW 0x184 96 + #define QSERDES_RX0_RX_MODE_01_HIGH 0x188 97 + #define QSERDES_RX0_RX_MODE_01_HIGH2 0x18c 98 + #define QSERDES_RX0_RX_MODE_01_HIGH3 0x190 99 + #define QSERDES_RX0_RX_MODE_01_HIGH4 0x194 100 + #define QSERDES_RX0_RX_MODE_10_LOW 0x198 101 + #define QSERDES_RX0_RX_MODE_10_HIGH 0x19c 102 + #define QSERDES_RX0_RX_MODE_10_HIGH2 0x1a0 103 + #define QSERDES_RX0_RX_MODE_10_HIGH3 0x1a4 104 + #define QSERDES_RX0_RX_MODE_10_HIGH4 0x1a8 105 + #define QSERDES_RX0_DFE_EN_TIMER 0x1b4 106 + 107 + /* QMP V2 PHY for PCIE gen3 ports - PCS registers */ 108 + 109 + #define PCS_COM_FLL_CNTRL1 0x098 110 + #define PCS_COM_FLL_CNTRL2 0x09c 111 + #define PCS_COM_FLL_CNT_VAL_L 0x0a0 112 + #define PCS_COM_FLL_CNT_VAL_H_TOL 0x0a4 113 + #define PCS_COM_FLL_MAN_CODE 0x0a8 114 + #define PCS_COM_REFGEN_REQ_CONFIG1 0x0dc 115 + #define PCS_COM_G12S1_TXDEEMPH_M3P5DB 0x16c 116 + #define PCS_COM_RX_SIGDET_LVL 0x188 117 + #define PCS_COM_P2U3_WAKEUP_DLY_TIME_AUXCLK_L 0x1a4 118 + #define PCS_COM_P2U3_WAKEUP_DLY_TIME_AUXCLK_H 0x1a8 119 + #define PCS_COM_RX_DCC_CAL_CONFIG 0x1d8 120 + #define PCS_COM_EQ_CONFIG5 0x1ec 121 + 122 + /* QMP V2 PHY for PCIE gen3 ports - PCS Misc registers */ 123 + 124 + #define PCS_PCIE_POWER_STATE_CONFIG2 0x40c 125 + #define PCS_PCIE_POWER_STATE_CONFIG4 0x414 126 + #define PCS_PCIE_ENDPOINT_REFCLK_DRIVE 0x41c 127 + #define PCS_PCIE_L1P1_WAKEUP_DLY_TIME_AUXCLK_L 0x440 128 + #define PCS_PCIE_L1P1_WAKEUP_DLY_TIME_AUXCLK_H 0x444 129 + #define PCS_PCIE_L1P2_WAKEUP_DLY_TIME_AUXCLK_L 0x448 130 + #define PCS_PCIE_L1P2_WAKEUP_DLY_TIME_AUXCLK_H 0x44c 131 + #define PCS_PCIE_OSC_DTCT_CONFIG2 0x45c 132 + #define PCS_PCIE_OSC_DTCT_MODE2_CONFIG2 0x478 133 + #define PCS_PCIE_OSC_DTCT_MODE2_CONFIG4 0x480 134 + #define PCS_PCIE_OSC_DTCT_MODE2_CONFIG5 0x484 135 + #define PCS_PCIE_OSC_DTCT_ACTIONS 0x490 136 + #define PCS_PCIE_EQ_CONFIG1 0x4a0 137 + #define PCS_PCIE_EQ_CONFIG2 0x4a4 138 + #define 
PCS_PCIE_PRESET_P10_PRE 0x4bc 139 + #define PCS_PCIE_PRESET_P10_POST 0x4e0 140 + 9 141 /* Only for QMP V2 PHY - QSERDES COM registers */ 10 142 #define QSERDES_COM_BG_TIMER 0x00c 11 143 #define QSERDES_COM_SSC_EN_CENTER 0x010 ··· 552 420 #define QSERDES_V4_COM_SYSCLK_EN_SEL 0x094 553 421 #define QSERDES_V4_COM_RESETSM_CNTRL 0x09c 554 422 #define QSERDES_V4_COM_LOCK_CMP_EN 0x0a4 423 + #define QSERDES_V4_COM_LOCK_CMP_CFG 0x0a8 555 424 #define QSERDES_V4_COM_LOCK_CMP1_MODE0 0x0ac 556 425 #define QSERDES_V4_COM_LOCK_CMP2_MODE0 0x0b0 557 426 #define QSERDES_V4_COM_LOCK_CMP1_MODE1 0x0b4 ··· 567 434 #define QSERDES_V4_COM_DIV_FRAC_START3_MODE1 0x0e0 568 435 #define QSERDES_V4_COM_INTEGLOOP_GAIN0_MODE0 0x0ec 569 436 #define QSERDES_V4_COM_INTEGLOOP_GAIN1_MODE0 0x0f0 437 + #define QSERDES_V4_COM_INTEGLOOP_GAIN0_MODE1 0x0f4 438 + #define QSERDES_V4_COM_INTEGLOOP_GAIN1_MODE1 0x0f8 570 439 #define QSERDES_V4_COM_VCO_TUNE_CTRL 0x108 571 440 #define QSERDES_V4_COM_VCO_TUNE_MAP 0x10c 572 441 #define QSERDES_V4_COM_VCO_TUNE1_MODE0 0x110 ··· 586 451 #define QSERDES_V4_COM_C_READY_STATUS 0x178 587 452 #define QSERDES_V4_COM_CMN_CONFIG 0x17c 588 453 #define QSERDES_V4_COM_SVS_MODE_CLK_SEL 0x184 454 + #define QSERDES_V4_COM_CMN_MISC1 0x19c 455 + #define QSERDES_V4_COM_INTERNAL_DIG_CORECLK_DIV 0x1a0 456 + #define QSERDES_V4_COM_CMN_MODE 0x1a4 457 + #define QSERDES_V4_COM_VCO_DC_LEVEL_CTRL 0x1a8 589 458 #define QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE1_MODE0 0x1ac 590 459 #define QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE2_MODE0 0x1b0 591 460 #define QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE1_MODE1 0x1b4 592 - #define QSERDES_V4_COM_BIN_VCOCAL_HSCLK_SEL 0x1bc 593 461 #define QSERDES_V4_COM_BIN_VCOCAL_CMP_CODE2_MODE1 0x1b8 462 + #define QSERDES_V4_COM_BIN_VCOCAL_HSCLK_SEL 0x1bc 594 463 595 464 /* Only for QMP V4 PHY - TX registers */ 596 465 #define QSERDES_V4_TX_CLKBUF_ENABLE 0x08 ··· 623 484 #define QSERDES_V4_TX_PWM_GEAR_4_DIVIDER_BAND0_1 0xe4 624 485 #define QSERDES_V4_TX_VMODE_CTRL1 0xe8 625 486 #define 
QSERDES_V4_TX_PI_QEC_CTRL 0x104 487 + 488 + /* Only for QMP V4_20 PHY - TX registers */ 489 + #define QSERDES_V4_20_TX_LANE_MODE_1 0x88 490 + #define QSERDES_V4_20_TX_LANE_MODE_2 0x8c 491 + #define QSERDES_V4_20_TX_LANE_MODE_3 0x90 492 + #define QSERDES_V4_20_TX_VMODE_CTRL1 0xc4 493 + #define QSERDES_V4_20_TX_PI_QEC_CTRL 0xe0 626 494 627 495 /* Only for QMP V4 PHY - RX registers */ 628 496 #define QSERDES_V4_RX_UCDR_FO_GAIN 0x008 ··· 696 550 #define QSERDES_V4_DP_PHY_SPARE0 0x0c8 697 551 #define QSERDES_V4_DP_PHY_AUX_INTERRUPT_STATUS 0x0d8 698 552 #define QSERDES_V4_DP_PHY_STATUS 0x0dc 553 + 554 + /* Only for QMP V4_20 PHY - RX registers */ 555 + #define QSERDES_V4_20_RX_FO_GAIN_RATE2 0x008 556 + #define QSERDES_V4_20_RX_UCDR_PI_CONTROLS 0x058 557 + #define QSERDES_V4_20_RX_AUX_DATA_TCOARSE_TFINE 0x0ac 558 + #define QSERDES_V4_20_RX_DFE_3 0x110 559 + #define QSERDES_V4_20_RX_DFE_DAC_ENABLE1 0x134 560 + #define QSERDES_V4_20_RX_DFE_DAC_ENABLE2 0x138 561 + #define QSERDES_V4_20_RX_VGA_CAL_CNTRL2 0x150 562 + #define QSERDES_V4_20_RX_RX_EQ_OFFSET_ADAPTOR_CNTRL1 0x178 563 + #define QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B1 0x1c8 564 + #define QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B2 0x1cc 565 + #define QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B3 0x1d0 566 + #define QSERDES_V4_20_RX_RX_MODE_RATE_0_1_B4 0x1d4 567 + #define QSERDES_V4_20_RX_RX_MODE_RATE2_B0 0x1d8 568 + #define QSERDES_V4_20_RX_RX_MODE_RATE2_B1 0x1dc 569 + #define QSERDES_V4_20_RX_RX_MODE_RATE2_B2 0x1e0 570 + #define QSERDES_V4_20_RX_RX_MODE_RATE2_B3 0x1e4 571 + #define QSERDES_V4_20_RX_RX_MODE_RATE2_B4 0x1e8 572 + #define QSERDES_V4_20_RX_RX_MODE_RATE3_B0 0x1ec 573 + #define QSERDES_V4_20_RX_RX_MODE_RATE3_B1 0x1f0 574 + #define QSERDES_V4_20_RX_RX_MODE_RATE3_B2 0x1f4 575 + #define QSERDES_V4_20_RX_RX_MODE_RATE3_B3 0x1f8 576 + #define QSERDES_V4_20_RX_RX_MODE_RATE3_B4 0x1fc 577 + #define QSERDES_V4_20_RX_PHPRE_CTRL 0x200 578 + #define QSERDES_V4_20_RX_DFE_CTLE_POST_CAL_OFFSET 0x20c 579 + #define 
QSERDES_V4_20_RX_MARG_COARSE_CTRL2 0x23c 699 580 700 581 /* Only for QMP V4 PHY - UFS PCS registers */ 701 582 #define QPHY_V4_PCS_UFS_PHY_START 0x000 ··· 1009 836 #define QPHY_V4_PCS_USB3_SIGDET_STARTUP_TIMER_VAL 0x354 1010 837 #define QPHY_V4_PCS_USB3_TEST_CONTROL 0x358 1011 838 839 + /* Only for QMP V4_20 PHY - USB/PCIe PCS registers */ 840 + #define QPHY_V4_20_PCS_RX_SIGDET_LVL 0x188 841 + #define QPHY_V4_20_PCS_EQ_CONFIG2 0x1d8 842 + #define QPHY_V4_20_PCS_EQ_CONFIG4 0x1e0 843 + #define QPHY_V4_20_PCS_EQ_CONFIG5 0x1e4 844 + 1012 845 /* Only for QMP V4 PHY - UNI has 0x300 offset for PCS_USB3 regs */ 1013 846 #define QPHY_V4_PCS_USB3_UNI_LFPS_DET_HIGH_COUNT_VAL 0x618 1014 847 #define QPHY_V4_PCS_USB3_UNI_RXEQTRAINING_DFE_TIME_S2 0x638 ··· 1039 860 #define QPHY_V4_PCS_PCIE_PRESET_P6_P7_PRE 0xb4 1040 861 #define QPHY_V4_PCS_PCIE_PRESET_P10_PRE 0xbc 1041 862 #define QPHY_V4_PCS_PCIE_PRESET_P10_POST 0xe0 863 + 864 + #define QPHY_V4_20_PCS_PCIE_EQ_CONFIG1 0x0a0 865 + #define QPHY_V4_20_PCS_PCIE_G3_RXEQEVAL_TIME 0x0f0 866 + #define QPHY_V4_20_PCS_PCIE_G4_RXEQEVAL_TIME 0x0f4 867 + #define QPHY_V4_20_PCS_PCIE_G4_EQ_CONFIG2 0x0fc 868 + #define QPHY_V4_20_PCS_PCIE_G4_EQ_CONFIG5 0x108 869 + #define QPHY_V4_20_PCS_LANE1_INSIG_SW_CTRL2 0x824 870 + #define QPHY_V4_20_PCS_LANE1_INSIG_MX_CTRL2 0x828 1042 871 1043 872 /* Only for QMP V5 PHY - QSERDES COM registers */ 1044 873 #define QSERDES_V5_COM_PLL_IVCO 0x058
+1 -1
drivers/phy/ralink/Kconfig
···
4 4 #
5 5 config PHY_MT7621_PCI
6 6 	tristate "MediaTek MT7621 PCI PHY Driver"
7 - 	depends on RALINK && OF
7 + 	depends on (RALINK && OF) || COMPILE_TEST
8 8 	select GENERIC_PHY
9 9 	select REGMAP_MMIO
10 10 	help
+22 -15
drivers/phy/ralink/phy-mt7621-pci.c
··· 5 5 */ 6 6 7 7 #include <dt-bindings/phy/phy.h> 8 + #include <linux/clk.h> 8 9 #include <linux/bitfield.h> 9 10 #include <linux/bitops.h> 10 11 #include <linux/module.h> ··· 15 14 #include <linux/platform_device.h> 16 15 #include <linux/regmap.h> 17 16 #include <linux/sys_soc.h> 18 - #include <mt7621.h> 19 - #include <ralink_regs.h> 20 17 21 18 #define RG_PE1_PIPE_REG 0x02c 22 19 #define RG_PE1_PIPE_RST BIT(12) ··· 61 62 62 63 #define RG_PE1_FRC_MSTCKDIV BIT(5) 63 64 64 - #define XTAL_MASK GENMASK(8, 6) 65 - 66 65 #define MAX_PHYS 2 67 66 68 67 /** ··· 68 71 * @dev: pointer to device 69 72 * @regmap: kernel regmap pointer 70 73 * @phy: pointer to the kernel PHY device 74 + * @sys_clk: pointer to the system XTAL clock 71 75 * @port_base: base register 72 76 * @has_dual_port: if the phy has dual ports. 73 77 * @bypass_pipe_rst: mark if 'mt7621_bypass_pipe_rst' ··· 78 80 struct device *dev; 79 81 struct regmap *regmap; 80 82 struct phy *phy; 83 + struct clk *sys_clk; 81 84 void __iomem *port_base; 82 85 bool has_dual_port; 83 86 bool bypass_pipe_rst; ··· 115 116 } 116 117 } 117 118 118 - static void mt7621_set_phy_for_ssc(struct mt7621_pci_phy *phy) 119 + static int mt7621_set_phy_for_ssc(struct mt7621_pci_phy *phy) 119 120 { 120 121 struct device *dev = phy->dev; 121 - u32 xtal_mode; 122 + unsigned long clk_rate; 122 123 123 - xtal_mode = FIELD_GET(XTAL_MASK, rt_sysc_r32(SYSC_REG_SYSTEM_CONFIG0)); 124 + clk_rate = clk_get_rate(phy->sys_clk); 125 + if (!clk_rate) 126 + return -EINVAL; 124 127 125 128 /* Set PCIe Port PHY to disable SSC */ 126 129 /* Debug Xtal Type */ ··· 140 139 RG_PE1_PHY_EN, RG_PE1_FRC_PHY_EN); 141 140 } 142 141 143 - if (xtal_mode <= 5 && xtal_mode >= 3) { /* 40MHz Xtal */ 142 + if (clk_rate == 40000000) { /* 40MHz Xtal */ 144 143 /* Set Pre-divider ratio (for host mode) */ 145 144 mt7621_phy_rmw(phy, RG_PE1_H_PLL_REG, RG_PE1_H_PLL_PREDIV, 146 145 FIELD_PREP(RG_PE1_H_PLL_PREDIV, 0x01)); 147 146 148 147 dev_dbg(dev, "Xtal is 40MHz\n"); 149 - } 
else if (xtal_mode >= 6) { /* 25MHz Xal */ 148 + } else if (clk_rate == 25000000) { /* 25MHz Xal */ 150 149 mt7621_phy_rmw(phy, RG_PE1_H_PLL_REG, RG_PE1_H_PLL_PREDIV, 151 150 FIELD_PREP(RG_PE1_H_PLL_PREDIV, 0x00)); 152 151 ··· 197 196 mt7621_phy_rmw(phy, RG_PE1_H_PLL_BR_REG, RG_PE1_H_PLL_BR, 198 197 FIELD_PREP(RG_PE1_H_PLL_BR, 0x00)); 199 198 200 - if (xtal_mode <= 5 && xtal_mode >= 3) { /* 40MHz Xtal */ 199 + if (clk_rate == 40000000) { /* 40MHz Xtal */ 201 200 /* set force mode enable of da_pe1_mstckdiv */ 202 201 mt7621_phy_rmw(phy, RG_PE1_MSTCKDIV_REG, 203 202 RG_PE1_MSTCKDIV | RG_PE1_FRC_MSTCKDIV, 204 203 FIELD_PREP(RG_PE1_MSTCKDIV, 0x01) | 205 204 RG_PE1_FRC_MSTCKDIV); 206 205 } 206 + 207 + return 0; 207 208 } 208 209 209 210 static int mt7621_pci_phy_init(struct phy *phy) ··· 215 212 if (mphy->bypass_pipe_rst) 216 213 mt7621_bypass_pipe_rst(mphy); 217 214 218 - mt7621_set_phy_for_ssc(mphy); 219 - 220 - return 0; 215 + return mt7621_set_phy_for_ssc(mphy); 221 216 } 222 217 223 218 static int mt7621_pci_phy_power_on(struct phy *phy) ··· 273 272 274 273 mt7621_phy->has_dual_port = args->args[0]; 275 274 276 - dev_info(dev, "PHY for 0x%08x (dual port = %d)\n", 277 - (unsigned int)mt7621_phy->port_base, mt7621_phy->has_dual_port); 275 + dev_dbg(dev, "PHY for 0x%px (dual port = %d)\n", 276 + mt7621_phy->port_base, mt7621_phy->has_dual_port); 278 277 279 278 return mt7621_phy->phy; 280 279 } ··· 323 322 if (IS_ERR(phy->phy)) { 324 323 dev_err(dev, "failed to create phy\n"); 325 324 return PTR_ERR(phy->phy); 325 + } 326 + 327 + phy->sys_clk = devm_clk_get(dev, NULL); 328 + if (IS_ERR(phy->sys_clk)) { 329 + dev_err(dev, "failed to get phy clock\n"); 330 + return PTR_ERR(phy->sys_clk); 326 331 } 327 332 328 333 phy_set_drvdata(phy->phy, phy);
+9
drivers/phy/rockchip/Kconfig
···
48 48 	help
49 49 	  Support for Rockchip USB2.0 PHY with Innosilicon IP block.
50 50
51 + config PHY_ROCKCHIP_INNO_CSIDPHY
52 + 	tristate "Rockchip Innosilicon MIPI CSI PHY driver"
53 + 	depends on (ARCH_ROCKCHIP || COMPILE_TEST) && OF
54 + 	select GENERIC_PHY
55 + 	select GENERIC_PHY_MIPI_DPHY
56 + 	help
57 + 	  Enable this to support the Rockchip MIPI CSI PHY with
58 + 	  Innosilicon IP block.
59 +
51 60 config PHY_ROCKCHIP_INNO_DSIDPHY
52 61 	tristate "Rockchip Innosilicon MIPI/LVDS/TTL PHY driver"
53 62 	depends on (ARCH_ROCKCHIP || COMPILE_TEST) && OF
+1
drivers/phy/rockchip/Makefile
···
2 2 obj-$(CONFIG_PHY_ROCKCHIP_DP) += phy-rockchip-dp.o
3 3 obj-$(CONFIG_PHY_ROCKCHIP_DPHY_RX0) += phy-rockchip-dphy-rx0.o
4 4 obj-$(CONFIG_PHY_ROCKCHIP_EMMC) += phy-rockchip-emmc.o
5 + obj-$(CONFIG_PHY_ROCKCHIP_INNO_CSIDPHY) += phy-rockchip-inno-csidphy.o
5 6 obj-$(CONFIG_PHY_ROCKCHIP_INNO_DSIDPHY) += phy-rockchip-inno-dsidphy.o
6 7 obj-$(CONFIG_PHY_ROCKCHIP_INNO_HDMI) += phy-rockchip-inno-hdmi.o
7 8 obj-$(CONFIG_PHY_ROCKCHIP_INNO_USB2) += phy-rockchip-inno-usb2.o
+459
drivers/phy/rockchip/phy-rockchip-inno-csidphy.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Rockchip MIPI RX Innosilicon DPHY driver 4 + * 5 + * Copyright (C) 2021 Fuzhou Rockchip Electronics Co., Ltd. 6 + */ 7 + 8 + #include <linux/bitfield.h> 9 + #include <linux/clk.h> 10 + #include <linux/delay.h> 11 + #include <linux/io.h> 12 + #include <linux/mfd/syscon.h> 13 + #include <linux/module.h> 14 + #include <linux/of.h> 15 + #include <linux/of_platform.h> 16 + #include <linux/phy/phy.h> 17 + #include <linux/phy/phy-mipi-dphy.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/pm_runtime.h> 20 + #include <linux/regmap.h> 21 + #include <linux/reset.h> 22 + 23 + /* GRF */ 24 + #define RK1808_GRF_PD_VI_CON_OFFSET 0x0430 25 + 26 + #define RK3326_GRF_PD_VI_CON_OFFSET 0x0430 27 + 28 + #define RK3368_GRF_SOC_CON6_OFFSET 0x0418 29 + 30 + /* PHY */ 31 + #define CSIDPHY_CTRL_LANE_ENABLE 0x00 32 + #define CSIDPHY_CTRL_LANE_ENABLE_CK BIT(6) 33 + #define CSIDPHY_CTRL_LANE_ENABLE_MASK GENMASK(5, 2) 34 + #define CSIDPHY_CTRL_LANE_ENABLE_UNDEFINED BIT(0) 35 + 36 + /* not present on all variants */ 37 + #define CSIDPHY_CTRL_PWRCTL 0x04 38 + #define CSIDPHY_CTRL_PWRCTL_UNDEFINED GENMASK(7, 5) 39 + #define CSIDPHY_CTRL_PWRCTL_SYNCRST BIT(2) 40 + #define CSIDPHY_CTRL_PWRCTL_LDO_PD BIT(1) 41 + #define CSIDPHY_CTRL_PWRCTL_PLL_PD BIT(0) 42 + 43 + #define CSIDPHY_CTRL_DIG_RST 0x80 44 + #define CSIDPHY_CTRL_DIG_RST_UNDEFINED 0x1e 45 + #define CSIDPHY_CTRL_DIG_RST_RESET BIT(0) 46 + 47 + /* offset after ths_settle_offset */ 48 + #define CSIDPHY_CLK_THS_SETTLE 0 49 + #define CSIDPHY_LANE_THS_SETTLE(n) (((n) + 1) * 0x80) 50 + #define CSIDPHY_THS_SETTLE_MASK GENMASK(6, 0) 51 + 52 + /* offset after calib_offset */ 53 + #define CSIDPHY_CLK_CALIB_EN 0 54 + #define CSIDPHY_LANE_CALIB_EN(n) (((n) + 1) * 0x80) 55 + #define CSIDPHY_CALIB_EN BIT(7) 56 + 57 + /* Configure the count time of the THS-SETTLE by protocol. 
*/ 58 + #define RK1808_CSIDPHY_CLK_WR_THS_SETTLE 0x160 59 + #define RK3326_CSIDPHY_CLK_WR_THS_SETTLE 0x100 60 + #define RK3368_CSIDPHY_CLK_WR_THS_SETTLE 0x100 61 + 62 + /* Calibration reception enable */ 63 + #define RK1808_CSIDPHY_CLK_CALIB_EN 0x168 64 + 65 + /* 66 + * The higher 16-bit of this register is used for write protection 67 + * only if BIT(x + 16) set to 1 the BIT(x) can be written. 68 + */ 69 + #define HIWORD_UPDATE(val, mask, shift) \ 70 + ((val) << (shift) | (mask) << ((shift) + 16)) 71 + 72 + #define HZ_TO_MHZ(freq) div_u64(freq, 1000 * 1000) 73 + 74 + enum dphy_reg_id { 75 + /* rk1808 & rk3326 */ 76 + GRF_DPHY_CSIPHY_FORCERXMODE, 77 + GRF_DPHY_CSIPHY_CLKLANE_EN, 78 + GRF_DPHY_CSIPHY_DATALANE_EN, 79 + }; 80 + 81 + struct dphy_reg { 82 + u32 offset; 83 + u32 mask; 84 + u32 shift; 85 + }; 86 + 87 + #define PHY_REG(_offset, _width, _shift) \ 88 + { .offset = _offset, .mask = BIT(_width) - 1, .shift = _shift, } 89 + 90 + static const struct dphy_reg rk1808_grf_dphy_regs[] = { 91 + [GRF_DPHY_CSIPHY_FORCERXMODE] = PHY_REG(RK1808_GRF_PD_VI_CON_OFFSET, 4, 0), 92 + [GRF_DPHY_CSIPHY_CLKLANE_EN] = PHY_REG(RK1808_GRF_PD_VI_CON_OFFSET, 1, 8), 93 + [GRF_DPHY_CSIPHY_DATALANE_EN] = PHY_REG(RK1808_GRF_PD_VI_CON_OFFSET, 4, 4), 94 + }; 95 + 96 + static const struct dphy_reg rk3326_grf_dphy_regs[] = { 97 + [GRF_DPHY_CSIPHY_FORCERXMODE] = PHY_REG(RK3326_GRF_PD_VI_CON_OFFSET, 4, 0), 98 + [GRF_DPHY_CSIPHY_CLKLANE_EN] = PHY_REG(RK3326_GRF_PD_VI_CON_OFFSET, 1, 8), 99 + [GRF_DPHY_CSIPHY_DATALANE_EN] = PHY_REG(RK3326_GRF_PD_VI_CON_OFFSET, 4, 4), 100 + }; 101 + 102 + static const struct dphy_reg rk3368_grf_dphy_regs[] = { 103 + [GRF_DPHY_CSIPHY_FORCERXMODE] = PHY_REG(RK3368_GRF_SOC_CON6_OFFSET, 4, 8), 104 + }; 105 + 106 + struct hsfreq_range { 107 + u32 range_h; 108 + u8 cfg_bit; 109 + }; 110 + 111 + struct dphy_drv_data { 112 + int pwrctl_offset; 113 + int ths_settle_offset; 114 + int calib_offset; 115 + const struct hsfreq_range *hsfreq_ranges; 116 + int num_hsfreq_ranges; 
117 + const struct dphy_reg *grf_regs; 118 + }; 119 + 120 + struct rockchip_inno_csidphy { 121 + struct device *dev; 122 + void __iomem *phy_base; 123 + struct clk *pclk; 124 + struct regmap *grf; 125 + struct reset_control *rst; 126 + const struct dphy_drv_data *drv_data; 127 + struct phy_configure_opts_mipi_dphy config; 128 + u8 hsfreq; 129 + }; 130 + 131 + static inline void write_grf_reg(struct rockchip_inno_csidphy *priv, 132 + int index, u8 value) 133 + { 134 + const struct dphy_drv_data *drv_data = priv->drv_data; 135 + const struct dphy_reg *reg = &drv_data->grf_regs[index]; 136 + 137 + if (reg->offset) 138 + regmap_write(priv->grf, reg->offset, 139 + HIWORD_UPDATE(value, reg->mask, reg->shift)); 140 + } 141 + 142 + /* These tables must be sorted by .range_h ascending. */ 143 + static const struct hsfreq_range rk1808_mipidphy_hsfreq_ranges[] = { 144 + { 109, 0x02}, { 149, 0x03}, { 199, 0x06}, { 249, 0x06}, 145 + { 299, 0x06}, { 399, 0x08}, { 499, 0x0b}, { 599, 0x0e}, 146 + { 699, 0x10}, { 799, 0x12}, { 999, 0x16}, {1199, 0x1e}, 147 + {1399, 0x23}, {1599, 0x2d}, {1799, 0x32}, {1999, 0x37}, 148 + {2199, 0x3c}, {2399, 0x41}, {2499, 0x46} 149 + }; 150 + 151 + static const struct hsfreq_range rk3326_mipidphy_hsfreq_ranges[] = { 152 + { 109, 0x00}, { 149, 0x01}, { 199, 0x02}, { 249, 0x03}, 153 + { 299, 0x04}, { 399, 0x05}, { 499, 0x06}, { 599, 0x07}, 154 + { 699, 0x08}, { 799, 0x09}, { 899, 0x0a}, {1099, 0x0b}, 155 + {1249, 0x0c}, {1349, 0x0d}, {1500, 0x0e} 156 + }; 157 + 158 + static const struct hsfreq_range rk3368_mipidphy_hsfreq_ranges[] = { 159 + { 109, 0x00}, { 149, 0x01}, { 199, 0x02}, { 249, 0x03}, 160 + { 299, 0x04}, { 399, 0x05}, { 499, 0x06}, { 599, 0x07}, 161 + { 699, 0x08}, { 799, 0x09}, { 899, 0x0a}, {1099, 0x0b}, 162 + {1249, 0x0c}, {1349, 0x0d}, {1500, 0x0e} 163 + }; 164 + 165 + static void rockchip_inno_csidphy_ths_settle(struct rockchip_inno_csidphy *priv, 166 + int hsfreq, int offset) 167 + { 168 + const struct dphy_drv_data *drv_data = 
priv->drv_data; 169 + u32 val; 170 + 171 + val = readl(priv->phy_base + drv_data->ths_settle_offset + offset); 172 + val &= ~CSIDPHY_THS_SETTLE_MASK; 173 + val |= hsfreq; 174 + writel(val, priv->phy_base + drv_data->ths_settle_offset + offset); 175 + } 176 + 177 + static int rockchip_inno_csidphy_configure(struct phy *phy, 178 + union phy_configure_opts *opts) 179 + { 180 + struct rockchip_inno_csidphy *priv = phy_get_drvdata(phy); 181 + const struct dphy_drv_data *drv_data = priv->drv_data; 182 + struct phy_configure_opts_mipi_dphy *config = &opts->mipi_dphy; 183 + unsigned int hsfreq = 0; 184 + unsigned int i; 185 + u64 data_rate_mbps; 186 + int ret; 187 + 188 + /* pass with phy_mipi_dphy_get_default_config (with pixel rate?) */ 189 + ret = phy_mipi_dphy_config_validate(config); 190 + if (ret) 191 + return ret; 192 + 193 + data_rate_mbps = HZ_TO_MHZ(config->hs_clk_rate); 194 + 195 + dev_dbg(priv->dev, "lanes %d - data_rate_mbps %llu\n", 196 + config->lanes, data_rate_mbps); 197 + for (i = 0; i < drv_data->num_hsfreq_ranges; i++) { 198 + if (drv_data->hsfreq_ranges[i].range_h >= data_rate_mbps) { 199 + hsfreq = drv_data->hsfreq_ranges[i].cfg_bit; 200 + break; 201 + } 202 + } 203 + if (!hsfreq) 204 + return -EINVAL; 205 + 206 + priv->hsfreq = hsfreq; 207 + priv->config = *config; 208 + return 0; 209 + } 210 + 211 + static int rockchip_inno_csidphy_power_on(struct phy *phy) 212 + { 213 + struct rockchip_inno_csidphy *priv = phy_get_drvdata(phy); 214 + const struct dphy_drv_data *drv_data = priv->drv_data; 215 + u64 data_rate_mbps = HZ_TO_MHZ(priv->config.hs_clk_rate); 216 + u32 val; 217 + int ret, i; 218 + 219 + ret = clk_enable(priv->pclk); 220 + if (ret < 0) 221 + return ret; 222 + 223 + ret = pm_runtime_resume_and_get(priv->dev); 224 + if (ret < 0) { 225 + clk_disable(priv->pclk); 226 + return ret; 227 + } 228 + 229 + /* phy start */ 230 + if (drv_data->pwrctl_offset >= 0) 231 + writel(CSIDPHY_CTRL_PWRCTL_UNDEFINED | 232 + CSIDPHY_CTRL_PWRCTL_SYNCRST, 233 + 
priv->phy_base + drv_data->pwrctl_offset); 234 + 235 + /* set data lane num and enable clock lane */ 236 + val = FIELD_PREP(CSIDPHY_CTRL_LANE_ENABLE_MASK, GENMASK(priv->config.lanes - 1, 0)) | 237 + FIELD_PREP(CSIDPHY_CTRL_LANE_ENABLE_CK, 1) | 238 + FIELD_PREP(CSIDPHY_CTRL_LANE_ENABLE_UNDEFINED, 1); 239 + writel(val, priv->phy_base + CSIDPHY_CTRL_LANE_ENABLE); 240 + 241 + /* Reset dphy analog part */ 242 + if (drv_data->pwrctl_offset >= 0) 243 + writel(CSIDPHY_CTRL_PWRCTL_UNDEFINED, 244 + priv->phy_base + drv_data->pwrctl_offset); 245 + usleep_range(500, 1000); 246 + 247 + /* Reset dphy digital part */ 248 + writel(CSIDPHY_CTRL_DIG_RST_UNDEFINED, 249 + priv->phy_base + CSIDPHY_CTRL_DIG_RST); 250 + writel(CSIDPHY_CTRL_DIG_RST_UNDEFINED + CSIDPHY_CTRL_DIG_RST_RESET, 251 + priv->phy_base + CSIDPHY_CTRL_DIG_RST); 252 + 253 + /* not into receive mode/wait stopstate */ 254 + write_grf_reg(priv, GRF_DPHY_CSIPHY_FORCERXMODE, 0x0); 255 + 256 + /* enable calibration */ 257 + if (data_rate_mbps > 1500 && drv_data->calib_offset >= 0) { 258 + writel(CSIDPHY_CALIB_EN, 259 + priv->phy_base + drv_data->calib_offset + 260 + CSIDPHY_CLK_CALIB_EN); 261 + for (i = 0; i < priv->config.lanes; i++) 262 + writel(CSIDPHY_CALIB_EN, 263 + priv->phy_base + drv_data->calib_offset + 264 + CSIDPHY_LANE_CALIB_EN(i)); 265 + } 266 + 267 + rockchip_inno_csidphy_ths_settle(priv, priv->hsfreq, 268 + CSIDPHY_CLK_THS_SETTLE); 269 + for (i = 0; i < priv->config.lanes; i++) 270 + rockchip_inno_csidphy_ths_settle(priv, priv->hsfreq, 271 + CSIDPHY_LANE_THS_SETTLE(i)); 272 + 273 + write_grf_reg(priv, GRF_DPHY_CSIPHY_CLKLANE_EN, 0x1); 274 + write_grf_reg(priv, GRF_DPHY_CSIPHY_DATALANE_EN, 275 + GENMASK(priv->config.lanes - 1, 0)); 276 + 277 + return 0; 278 + } 279 + 280 + static int rockchip_inno_csidphy_power_off(struct phy *phy) 281 + { 282 + struct rockchip_inno_csidphy *priv = phy_get_drvdata(phy); 283 + const struct dphy_drv_data *drv_data = priv->drv_data; 284 + 285 + /* disable all lanes */ 286 + 
```diff
+	writel(CSIDPHY_CTRL_LANE_ENABLE_UNDEFINED,
+	       priv->phy_base + CSIDPHY_CTRL_LANE_ENABLE);
+
+	/* disable pll and ldo */
+	if (drv_data->pwrctl_offset >= 0)
+		writel(CSIDPHY_CTRL_PWRCTL_UNDEFINED |
+		       CSIDPHY_CTRL_PWRCTL_LDO_PD |
+		       CSIDPHY_CTRL_PWRCTL_PLL_PD,
+		       priv->phy_base + drv_data->pwrctl_offset);
+	usleep_range(500, 1000);
+
+	pm_runtime_put(priv->dev);
+	clk_disable(priv->pclk);
+
+	return 0;
+}
+
+static int rockchip_inno_csidphy_init(struct phy *phy)
+{
+	struct rockchip_inno_csidphy *priv = phy_get_drvdata(phy);
+
+	return clk_prepare(priv->pclk);
+}
+
+static int rockchip_inno_csidphy_exit(struct phy *phy)
+{
+	struct rockchip_inno_csidphy *priv = phy_get_drvdata(phy);
+
+	clk_unprepare(priv->pclk);
+
+	return 0;
+}
+
+static const struct phy_ops rockchip_inno_csidphy_ops = {
+	.power_on = rockchip_inno_csidphy_power_on,
+	.power_off = rockchip_inno_csidphy_power_off,
+	.init = rockchip_inno_csidphy_init,
+	.exit = rockchip_inno_csidphy_exit,
+	.configure = rockchip_inno_csidphy_configure,
+	.owner = THIS_MODULE,
+};
+
+static const struct dphy_drv_data rk1808_mipidphy_drv_data = {
+	.pwrctl_offset = -1,
+	.ths_settle_offset = RK1808_CSIDPHY_CLK_WR_THS_SETTLE,
+	.calib_offset = RK1808_CSIDPHY_CLK_CALIB_EN,
+	.hsfreq_ranges = rk1808_mipidphy_hsfreq_ranges,
+	.num_hsfreq_ranges = ARRAY_SIZE(rk1808_mipidphy_hsfreq_ranges),
+	.grf_regs = rk1808_grf_dphy_regs,
+};
+
+static const struct dphy_drv_data rk3326_mipidphy_drv_data = {
+	.pwrctl_offset = CSIDPHY_CTRL_PWRCTL,
+	.ths_settle_offset = RK3326_CSIDPHY_CLK_WR_THS_SETTLE,
+	.calib_offset = -1,
+	.hsfreq_ranges = rk3326_mipidphy_hsfreq_ranges,
+	.num_hsfreq_ranges = ARRAY_SIZE(rk3326_mipidphy_hsfreq_ranges),
+	.grf_regs = rk3326_grf_dphy_regs,
+};
+
+static const struct dphy_drv_data rk3368_mipidphy_drv_data = {
+	.pwrctl_offset = CSIDPHY_CTRL_PWRCTL,
+	.ths_settle_offset = RK3368_CSIDPHY_CLK_WR_THS_SETTLE,
+	.calib_offset = -1,
+	.hsfreq_ranges = rk3368_mipidphy_hsfreq_ranges,
+	.num_hsfreq_ranges = ARRAY_SIZE(rk3368_mipidphy_hsfreq_ranges),
+	.grf_regs = rk3368_grf_dphy_regs,
+};
+
+static const struct of_device_id rockchip_inno_csidphy_match_id[] = {
+	{
+		.compatible = "rockchip,px30-csi-dphy",
+		.data = &rk3326_mipidphy_drv_data,
+	},
+	{
+		.compatible = "rockchip,rk1808-csi-dphy",
+		.data = &rk1808_mipidphy_drv_data,
+	},
+	{
+		.compatible = "rockchip,rk3326-csi-dphy",
+		.data = &rk3326_mipidphy_drv_data,
+	},
+	{
+		.compatible = "rockchip,rk3368-csi-dphy",
+		.data = &rk3368_mipidphy_drv_data,
+	},
+	{}
+};
+MODULE_DEVICE_TABLE(of, rockchip_inno_csidphy_match_id);
+
+static int rockchip_inno_csidphy_probe(struct platform_device *pdev)
+{
+	struct rockchip_inno_csidphy *priv;
+	struct device *dev = &pdev->dev;
+	struct phy_provider *phy_provider;
+	struct phy *phy;
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	priv->dev = dev;
+	platform_set_drvdata(pdev, priv);
+
+	priv->drv_data = of_device_get_match_data(dev);
+	if (!priv->drv_data) {
+		dev_err(dev, "Can't find device data\n");
+		return -ENODEV;
+	}
+
+	priv->grf = syscon_regmap_lookup_by_phandle(dev->of_node,
+						    "rockchip,grf");
+	if (IS_ERR(priv->grf)) {
+		dev_err(dev, "Can't find GRF syscon\n");
+		return PTR_ERR(priv->grf);
+	}
+
+	priv->phy_base = devm_platform_ioremap_resource(pdev, 0);
+	if (IS_ERR(priv->phy_base))
+		return PTR_ERR(priv->phy_base);
+
+	priv->pclk = devm_clk_get(dev, "pclk");
+	if (IS_ERR(priv->pclk)) {
+		dev_err(dev, "failed to get pclk\n");
+		return PTR_ERR(priv->pclk);
+	}
+
+	priv->rst = devm_reset_control_get(dev, "apb");
+	if (IS_ERR(priv->rst)) {
+		dev_err(dev, "failed to get system reset control\n");
+		return PTR_ERR(priv->rst);
+	}
+
+	phy = devm_phy_create(dev, NULL, &rockchip_inno_csidphy_ops);
+	if (IS_ERR(phy)) {
+		dev_err(dev, "failed to create phy\n");
+		return PTR_ERR(phy);
+	}
+
+	phy_set_drvdata(phy, priv);
+
+	phy_provider = devm_of_phy_provider_register(dev, of_phy_simple_xlate);
+	if (IS_ERR(phy_provider)) {
+		dev_err(dev, "failed to register phy provider\n");
+		return PTR_ERR(phy_provider);
+	}
+
+	pm_runtime_enable(dev);
+
+	return 0;
+}
+
+static int rockchip_inno_csidphy_remove(struct platform_device *pdev)
+{
+	struct rockchip_inno_csidphy *priv = platform_get_drvdata(pdev);
+
+	pm_runtime_disable(priv->dev);
+
+	return 0;
+}
+
+static struct platform_driver rockchip_inno_csidphy_driver = {
+	.driver = {
+		.name = "rockchip-inno-csidphy",
+		.of_match_table = rockchip_inno_csidphy_match_id,
+	},
+	.probe = rockchip_inno_csidphy_probe,
+	.remove = rockchip_inno_csidphy_remove,
+};
+
+module_platform_driver(rockchip_inno_csidphy_driver);
+MODULE_AUTHOR("Heiko Stuebner <heiko.stuebner@theobroma-systems.com>");
+MODULE_DESCRIPTION("Rockchip MIPI Innosilicon CSI-DPHY driver");
+MODULE_LICENSE("GPL v2");
```
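The match table above is what ties each `compatible` string to per-SoC data; `of_device_get_match_data()` then hands the matching `.data` pointer back to probe. A minimal userspace sketch of that lookup, with the struct contents and table entries reduced to stand-ins (only `pwrctl_offset` kept, values illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the per-SoC driver data referenced by the table. */
struct dphy_drv_data { int pwrctl_offset; };

static const struct dphy_drv_data rk1808_data = { .pwrctl_offset = -1 };
static const struct dphy_drv_data rk3326_data = { .pwrctl_offset = 0x00 };

struct of_device_id {
	const char *compatible;
	const void *data;
};

static const struct of_device_id match_id[] = {
	{ .compatible = "rockchip,rk1808-csi-dphy", .data = &rk1808_data },
	{ .compatible = "rockchip,rk3326-csi-dphy", .data = &rk3326_data },
	{}                              /* empty sentinel ends the table */
};

/* Simplified model of the OF-core lookup done for the probing device. */
static const void *match_data(const char *compatible)
{
	const struct of_device_id *id;

	for (id = match_id; id->compatible; id++)
		if (!strcmp(id->compatible, compatible))
			return id->data;

	return NULL;                    /* probe would return -ENODEV */
}
```

The `{}` sentinel is what terminates the walk, which is why every `of_device_id` table ends with one.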
+2 -2
drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
```diff
 					 unsigned long parent_rate)
 {
 	struct inno_hdmi_phy *inno = to_inno_hdmi_phy(hw);
-	const struct pre_pll_config *cfg = pre_pll_cfg_table;
+	const struct pre_pll_config *cfg;
 	unsigned long tmdsclock = inno_hdmi_phy_get_tmdsclk(inno, rate);
 	u32 v;
 	int ret;
···
 					 unsigned long parent_rate)
 {
 	struct inno_hdmi_phy *inno = to_inno_hdmi_phy(hw);
-	const struct pre_pll_config *cfg = pre_pll_cfg_table;
+	const struct pre_pll_config *cfg;
 	unsigned long tmdsclock = inno_hdmi_phy_get_tmdsclk(inno, rate);
 	u32 val;
 	int ret;
```
+44
drivers/phy/rockchip/phy-rockchip-inno-usb2.c
```diff
 	{ /* sentinel */ }
 };
 
+static const struct rockchip_usb2phy_cfg rk3308_phy_cfgs[] = {
+	{
+		.reg = 0x100,
+		.num_ports = 2,
+		.clkout_ctl = { 0x108, 4, 4, 1, 0 },
+		.port_cfgs = {
+			[USB2PHY_PORT_OTG] = {
+				.phy_sus = { 0x0100, 8, 0, 0, 0x1d1 },
+				.bvalid_det_en = { 0x3020, 2, 2, 0, 1 },
+				.bvalid_det_st = { 0x3024, 2, 2, 0, 1 },
+				.bvalid_det_clr = { 0x3028, 2, 2, 0, 1 },
+				.ls_det_en = { 0x3020, 0, 0, 0, 1 },
+				.ls_det_st = { 0x3024, 0, 0, 0, 1 },
+				.ls_det_clr = { 0x3028, 0, 0, 0, 1 },
+				.utmi_avalid = { 0x0120, 10, 10, 0, 1 },
+				.utmi_bvalid = { 0x0120, 9, 9, 0, 1 },
+				.utmi_ls = { 0x0120, 5, 4, 0, 1 },
+			},
+			[USB2PHY_PORT_HOST] = {
+				.phy_sus = { 0x0104, 8, 0, 0, 0x1d1 },
+				.ls_det_en = { 0x3020, 1, 1, 0, 1 },
+				.ls_det_st = { 0x3024, 1, 1, 0, 1 },
+				.ls_det_clr = { 0x3028, 1, 1, 0, 1 },
+				.utmi_ls = { 0x0120, 17, 16, 0, 1 },
+				.utmi_hstdet = { 0x0120, 19, 19, 0, 1 }
+			}
+		},
+		.chg_det = {
+			.opmode = { 0x0100, 3, 0, 5, 1 },
+			.cp_det = { 0x0120, 24, 24, 0, 1 },
+			.dcp_det = { 0x0120, 23, 23, 0, 1 },
+			.dp_det = { 0x0120, 25, 25, 0, 1 },
+			.idm_sink_en = { 0x0108, 8, 8, 0, 1 },
+			.idp_sink_en = { 0x0108, 7, 7, 0, 1 },
+			.idp_src_en = { 0x0108, 9, 9, 0, 1 },
+			.rdm_pdwn_en = { 0x0108, 10, 10, 0, 1 },
+			.vdm_src_en = { 0x0108, 12, 12, 0, 1 },
+			.vdp_src_en = { 0x0108, 11, 11, 0, 1 },
+		},
+	},
+	{ /* sentinel */ }
+};
+
 static const struct rockchip_usb2phy_cfg rk3328_phy_cfgs[] = {
 	{
 		.reg = 0x100,
···
 static const struct of_device_id rockchip_usb2phy_dt_match[] = {
 	{ .compatible = "rockchip,px30-usb2phy", .data = &rk3328_phy_cfgs },
 	{ .compatible = "rockchip,rk3228-usb2phy", .data = &rk3228_phy_cfgs },
+	{ .compatible = "rockchip,rk3308-usb2phy", .data = &rk3308_phy_cfgs },
 	{ .compatible = "rockchip,rk3328-usb2phy", .data = &rk3328_phy_cfgs },
 	{ .compatible = "rockchip,rk3366-usb2phy", .data = &rk3366_phy_cfgs },
 	{ .compatible = "rockchip,rk3399-usb2phy", .data = &rk3399_phy_cfgs },
```
+7 -4
drivers/phy/socionext/phy-uniphier-pcie.c
```diff
 #define PORT_SEL_1		FIELD_PREP(PORT_SEL_MASK, 1)
 
 #define PCL_PHY_TEST_I		0x2000
-#define PCL_PHY_TEST_O		0x2004
 #define TESTI_DAT_MASK		GENMASK(13, 6)
 #define TESTI_ADR_MASK		GENMASK(5, 1)
 #define TESTI_WR_EN		BIT(0)
+
+#define PCL_PHY_TEST_O		0x2004
+#define TESTO_DAT_MASK		GENMASK(7, 0)
 
 #define PCL_PHY_RESET		0x200c
 #define PCL_PHY_RESET_N_MNMODE	BIT(8) /* =1:manual */
···
 	val = FIELD_PREP(TESTI_DAT_MASK, 1);
 	val |= FIELD_PREP(TESTI_ADR_MASK, reg);
 	uniphier_pciephy_testio_write(priv, val);
-	val = readl(priv->base + PCL_PHY_TEST_O);
+	val = readl(priv->base + PCL_PHY_TEST_O) & TESTO_DAT_MASK;
 
 	/* update value */
-	val &= ~FIELD_PREP(TESTI_DAT_MASK, mask);
-	val = FIELD_PREP(TESTI_DAT_MASK, mask & param);
+	val &= ~mask;
+	val |= mask & param;
+	val = FIELD_PREP(TESTI_DAT_MASK, val);
 	val |= FIELD_PREP(TESTI_ADR_MASK, reg);
 	uniphier_pciephy_testio_write(priv, val);
 	uniphier_pciephy_testio_write(priv, val | TESTI_WR_EN);
```
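The uniphier change works because `mask` and `param` are operated on in register coordinates first, and only the final value is shifted into the TESTI data field with `FIELD_PREP()`; the old code masked and re-prepped in field coordinates, dropping the read-back bits. A compilable userspace sketch of the corrected read-modify-write, with `GENMASK()`/`FIELD_PREP()` re-implemented locally since they normally come from `linux/bits.h`/`linux/bitfield.h`:

```c
#include <assert.h>
#include <stdint.h>

/* Userspace stand-ins for the kernel's GENMASK()/FIELD_PREP() helpers. */
#define GENMASK(h, l)     (((~0u) >> (31 - (h))) & ((~0u) << (l)))
#define FIELD_PREP(m, v)  (((uint32_t)(v) << __builtin_ctz(m)) & (m))

#define TESTI_DAT_MASK GENMASK(13, 6)   /* write-side data field, bits 13:6 */
#define TESTO_DAT_MASK GENMASK(7, 0)    /* read-back data, bits 7:0 */

/*
 * Read-modify-write of an 8-bit PHY parameter, as done after the fix:
 * clear only the bits selected by mask, merge in the new bits from param,
 * then shift the whole 8-bit result into the TESTI data field.
 */
static uint32_t update_param(uint32_t readback, uint8_t mask, uint8_t param)
{
	uint32_t val = readback & TESTO_DAT_MASK;

	val &= ~mask;                   /* clear only the bits being updated */
	val |= mask & param;            /* merge in the new bits */
	return FIELD_PREP(TESTI_DAT_MASK, val);
}
```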
+31
drivers/phy/st/phy-stm32-usbphyc.c
```diff
 struct stm32_usbphyc_phy {
 	struct phy *phy;
 	struct stm32_usbphyc *usbphyc;
+	struct regulator *vbus;
 	u32 index;
 	bool active;
 };
···
 	return stm32_usbphyc_pll_disable(usbphyc);
 }
 
+static int stm32_usbphyc_phy_power_on(struct phy *phy)
+{
+	struct stm32_usbphyc_phy *usbphyc_phy = phy_get_drvdata(phy);
+
+	if (usbphyc_phy->vbus)
+		return regulator_enable(usbphyc_phy->vbus);
+
+	return 0;
+}
+
+static int stm32_usbphyc_phy_power_off(struct phy *phy)
+{
+	struct stm32_usbphyc_phy *usbphyc_phy = phy_get_drvdata(phy);
+
+	if (usbphyc_phy->vbus)
+		return regulator_disable(usbphyc_phy->vbus);
+
+	return 0;
+}
+
 static const struct phy_ops stm32_usbphyc_phy_ops = {
 	.init = stm32_usbphyc_phy_init,
 	.exit = stm32_usbphyc_phy_exit,
+	.power_on = stm32_usbphyc_phy_power_on,
+	.power_off = stm32_usbphyc_phy_power_off,
 	.owner = THIS_MODULE,
 };
···
 		usbphyc->phys[port]->usbphyc = usbphyc;
 		usbphyc->phys[port]->index = index;
 		usbphyc->phys[port]->active = false;
+
+		usbphyc->phys[port]->vbus = devm_regulator_get_optional(&phy->dev, "vbus");
+		if (IS_ERR(usbphyc->phys[port]->vbus)) {
+			ret = PTR_ERR(usbphyc->phys[port]->vbus);
+			if (ret == -EPROBE_DEFER)
+				goto put_child;
+			usbphyc->phys[port]->vbus = NULL;
+		}
 
 		port++;
 	}
```
+13 -4
drivers/phy/ti/phy-dm816x-usb.c
```diff
 
 	pm_runtime_enable(phy->dev);
 	generic_phy = devm_phy_create(phy->dev, NULL, &ops);
-	if (IS_ERR(generic_phy))
-		return PTR_ERR(generic_phy);
+	if (IS_ERR(generic_phy)) {
+		error = PTR_ERR(generic_phy);
+		goto clk_unprepare;
+	}
 
 	phy_set_drvdata(generic_phy, phy);
 
 	phy_provider = devm_of_phy_provider_register(phy->dev,
 						     of_phy_simple_xlate);
-	if (IS_ERR(phy_provider))
-		return PTR_ERR(phy_provider);
+	if (IS_ERR(phy_provider)) {
+		error = PTR_ERR(phy_provider);
+		goto clk_unprepare;
+	}
 
 	usb_add_phy_dev(&phy->phy);
 
 	return 0;
+
+clk_unprepare:
+	pm_runtime_disable(phy->dev);
+	clk_unprepare(phy->refclk);
+	return error;
 }
 
 static int dm816x_usb_phy_remove(struct platform_device *pdev)
```
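The dm816x fix is the standard probe unwind idiom: once `pm_runtime_enable()` and the earlier `clk_prepare()` have succeeded, any later failure must jump to a label that undoes them in reverse order instead of returning directly. A toy model of that control flow (function names and the error value are illustrative, not the real driver API):

```c
#include <assert.h>

/* Flags standing in for resources acquired during probe. */
static int clk_prepared;
static int pm_enabled;

/*
 * Model of the fixed probe(): acquire resources in order, and on a
 * later failure release them in reverse order via the unwind label.
 */
static int probe(int fail_phy_create)
{
	int error;

	clk_prepared = 1;              /* clk_prepare(phy->refclk) */
	pm_enabled = 1;                /* pm_runtime_enable(phy->dev) */

	if (fail_phy_create) {
		error = -12;           /* e.g. an error from devm_phy_create() */
		goto clk_unprepare;
	}

	return 0;

clk_unprepare:
	pm_enabled = 0;                /* pm_runtime_disable(phy->dev) */
	clk_prepared = 0;              /* clk_unprepare(phy->refclk) */
	return error;
}
```

The original code returned straight out of the error branches, which is exactly the leak the patch closes.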
+3 -3
drivers/phy/ti/phy-twl4030-usb.c
```diff
 	return 0;
 }
 
-static ssize_t twl4030_usb_vbus_show(struct device *dev,
-				     struct device_attribute *attr, char *buf)
+static ssize_t vbus_show(struct device *dev,
+			 struct device_attribute *attr, char *buf)
 {
 	struct twl4030_usb *twl = dev_get_drvdata(dev);
 	int ret = -EINVAL;
···
 
 	return ret;
 }
-static DEVICE_ATTR(vbus, 0444, twl4030_usb_vbus_show, NULL);
+static DEVICE_ATTR_RO(vbus);
 
 static irqreturn_t twl4030_usb_irq(int irq, void *_twl)
 {
```
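`DEVICE_ATTR_RO(vbus)` only works because the macro token-pastes the attribute name into `vbus_show`, which is why the callback had to be renamed first. A simplified userspace model of that naming contract (the real kernel macro also fills in the 0444 mode and the sysfs plumbing, all omitted here):

```c
#include <assert.h>
#include <string.h>

/* Cut-down stand-in for the kernel's struct device_attribute. */
struct device_attribute {
	const char *name;
	const char *(*show)(void);
};

/*
 * Model of DEVICE_ATTR_RO(_name): the read callback is not passed in,
 * it is derived from the attribute name by pasting "_show" onto it.
 */
#define DEVICE_ATTR_RO(_name) \
	struct device_attribute dev_attr_##_name = { #_name, _name##_show }

static const char *vbus_show(void)
{
	return "off\n";                /* placeholder for the real readout */
}

static DEVICE_ATTR_RO(vbus);           /* expands to dev_attr_vbus */
```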
+3 -4
drivers/pnp/card.c
```diff
 	dev->card_link = NULL;
 	return NULL;
 }
+EXPORT_SYMBOL(pnp_request_card_device);
 
 /**
  * pnp_release_card_device - call this when the driver no longer needs the device
···
 	device_release_driver(&dev->dev);
 	drv->link.remove = &card_remove_first;
 }
+EXPORT_SYMBOL(pnp_release_card_device);
 
 /*
  * suspend/resume callbacks
···
 	}
 	return 0;
 }
+EXPORT_SYMBOL(pnp_register_card_driver);
 
 /**
  * pnp_unregister_card_driver - unregisters a PnP card driver from the PnP Layer
···
 	mutex_unlock(&pnp_lock);
 	pnp_unregister_driver(&drv->link);
 }
-
-EXPORT_SYMBOL(pnp_request_card_device);
-EXPORT_SYMBOL(pnp_release_card_device);
-EXPORT_SYMBOL(pnp_register_card_driver);
 EXPORT_SYMBOL(pnp_unregister_card_driver);
```
+4 -5
drivers/pnp/driver.c
```diff
 	mutex_unlock(&pnp_lock);
 	return 0;
 }
+EXPORT_SYMBOL(pnp_device_attach);
 
 void pnp_device_detach(struct pnp_dev *pnp_dev)
 {
···
 	pnp_dev->status = PNP_READY;
 	mutex_unlock(&pnp_lock);
 }
+EXPORT_SYMBOL(pnp_device_detach);
 
 static int pnp_device_probe(struct device *dev)
 {
···
 
 	return driver_register(&drv->driver);
 }
+EXPORT_SYMBOL(pnp_register_driver);
 
 void pnp_unregister_driver(struct pnp_driver *drv)
 {
 	driver_unregister(&drv->driver);
 }
+EXPORT_SYMBOL(pnp_unregister_driver);
 
 /**
  * pnp_add_id - adds an EISA id to the specified device
···
 
 	return dev_id;
 }
-
-EXPORT_SYMBOL(pnp_register_driver);
-EXPORT_SYMBOL(pnp_unregister_driver);
-EXPORT_SYMBOL(pnp_device_attach);
-EXPORT_SYMBOL(pnp_device_detach);
```
-1
drivers/pnp/isapnp/compat.c
```diff
 	}
 	return NULL;
 }
-
 EXPORT_SYMBOL(pnp_find_dev);
```
+3 -4
drivers/pnp/manager.c
```diff
 	dev_info(&dev->dev, "activated\n");
 	return 0;
 }
+EXPORT_SYMBOL(pnp_start_dev);
 
 /**
  * pnp_stop_dev - low-level disable of the PnP device
···
 	dev_info(&dev->dev, "disabled\n");
 	return 0;
 }
+EXPORT_SYMBOL(pnp_stop_dev);
 
 /**
  * pnp_activate_dev - activates a PnP device for use
···
 	dev->active = 1;
 	return 0;
 }
+EXPORT_SYMBOL(pnp_activate_dev);
 
 /**
  * pnp_disable_dev - disables device
···
 
 	return 0;
 }
-
-EXPORT_SYMBOL(pnp_start_dev);
-EXPORT_SYMBOL(pnp_stop_dev);
-EXPORT_SYMBOL(pnp_activate_dev);
 EXPORT_SYMBOL(pnp_disable_dev);
```
-1
drivers/pnp/support.c
```diff
 	else
 		return 1;
 }
-
 EXPORT_SYMBOL(pnp_is_active);
 
 /*
```
+10 -9
drivers/siox/siox-bus-gpio.c
```diff
 
 	ddata->din = devm_gpiod_get(dev, "din", GPIOD_IN);
 	if (IS_ERR(ddata->din)) {
-		ret = PTR_ERR(ddata->din);
-		dev_err(dev, "Failed to get %s GPIO: %d\n", "din", ret);
+		ret = dev_err_probe(dev, PTR_ERR(ddata->din),
+				    "Failed to get din GPIO\n");
 		goto err;
 	}
 
 	ddata->dout = devm_gpiod_get(dev, "dout", GPIOD_OUT_LOW);
 	if (IS_ERR(ddata->dout)) {
-		ret = PTR_ERR(ddata->dout);
-		dev_err(dev, "Failed to get %s GPIO: %d\n", "dout", ret);
+		ret = dev_err_probe(dev, PTR_ERR(ddata->dout),
+				    "Failed to get dout GPIO\n");
 		goto err;
 	}
 
 	ddata->dclk = devm_gpiod_get(dev, "dclk", GPIOD_OUT_LOW);
 	if (IS_ERR(ddata->dclk)) {
-		ret = PTR_ERR(ddata->dclk);
-		dev_err(dev, "Failed to get %s GPIO: %d\n", "dclk", ret);
+		ret = dev_err_probe(dev, PTR_ERR(ddata->dclk),
+				    "Failed to get dclk GPIO\n");
 		goto err;
 	}
 
 	ddata->dld = devm_gpiod_get(dev, "dld", GPIOD_OUT_LOW);
 	if (IS_ERR(ddata->dld)) {
-		ret = PTR_ERR(ddata->dld);
-		dev_err(dev, "Failed to get %s GPIO: %d\n", "dld", ret);
+		ret = dev_err_probe(dev, PTR_ERR(ddata->dld),
+				    "Failed to get dld GPIO\n");
 		goto err;
 	}
 
···
 
 	ret = siox_master_register(smaster);
 	if (ret) {
-		dev_err(dev, "Failed to register siox master: %d\n", ret);
+		dev_err_probe(dev, ret,
+			      "Failed to register siox master\n");
 err:
 		siox_master_put(smaster);
 	}
```
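The siox conversion leans on the `dev_err_probe()` contract: it returns the error it was given and logs it, staying quiet for -EPROBE_DEFER, so the error assignment and the `dev_err()` call collapse into one statement. A simplified userspace model of that behaviour (an assumption-laden sketch: the real helper also records the deferral reason for debugfs, which is omitted here):

```c
#include <assert.h>
#include <stdio.h>

#define EPROBE_DEFER 517               /* same numeric value as the kernel's */

static int last_logged;                /* records whether a message was printed */

/*
 * Model of dev_err_probe(): log the message unless the error is
 * -EPROBE_DEFER, then hand the error code straight back so callers
 * can write "ret = dev_err_probe(dev, err, ...);".
 */
static int dev_err_probe(void *dev, int err, const char *fmt)
{
	(void)dev;
	if (err != -EPROBE_DEFER) {
		fprintf(stderr, "%s (err %d)\n", fmt, err);
		last_logged = 1;
	}
	return err;
}
```

This is why the converted branches need no separate `ret = PTR_ERR(...)` line, and why a deferred probe no longer spams the log.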
+1
drivers/soundwire/Kconfig
```diff
 	tristate "Intel SoundWire Master driver"
 	select SOUNDWIRE_CADENCE
 	select SOUNDWIRE_GENERIC_ALLOCATION
+	select AUXILIARY_BUS
 	depends on ACPI && SND_SOC
 	help
 	  SoundWire Intel Master driver.
```
+73 -90
drivers/soundwire/bus.c
```diff
 }
 
 static int
-sdw_nwrite_no_pm(struct sdw_slave *slave, u32 addr, size_t count, u8 *val)
+sdw_nwrite_no_pm(struct sdw_slave *slave, u32 addr, size_t count, const u8 *val)
 {
 	struct sdw_msg msg;
 	int ret;
 
 	ret = sdw_fill_msg(&msg, slave, addr, count,
-			   slave->dev_num, SDW_MSG_FLAG_WRITE, val);
+			   slave->dev_num, SDW_MSG_FLAG_WRITE, (u8 *)val);
 	if (ret < 0)
 		return ret;
···
  * @slave: SDW Slave
  * @addr: Register address
  * @count: length
- * @val: Buffer for values to be read
+ * @val: Buffer for values to be written
  */
-int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val)
+int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, const u8 *val)
 {
 	int ret;
···
 	mutex_unlock(&bus->bus_lock);
 }
 
-static enum sdw_clk_stop_mode sdw_get_clk_stop_mode(struct sdw_slave *slave)
-{
-	enum sdw_clk_stop_mode mode;
-
-	/*
-	 * Query for clock stop mode if Slave implements
-	 * ops->get_clk_stop_mode, else read from property.
-	 */
-	if (slave->ops && slave->ops->get_clk_stop_mode) {
-		mode = slave->ops->get_clk_stop_mode(slave);
-	} else {
-		if (slave->prop.clk_stop_mode1)
-			mode = SDW_CLK_STOP_MODE1;
-		else
-			mode = SDW_CLK_STOP_MODE0;
-	}
-
-	return mode;
-}
-
 static int sdw_slave_clk_stop_callback(struct sdw_slave *slave,
 				       enum sdw_clk_stop_mode mode,
 				       enum sdw_clk_stop_type type)
···
 
 	if (slave->ops && slave->ops->clk_stop) {
 		ret = slave->ops->clk_stop(slave, mode, type);
-		if (ret < 0) {
-			dev_err(&slave->dev,
-				"Clk Stop type =%d failed: %d\n", type, ret);
+		if (ret < 0)
 			return ret;
-		}
 	}
 
 	return 0;
···
 	} else {
 		ret = sdw_read_no_pm(slave, SDW_SCP_SYSTEMCTRL);
 		if (ret < 0) {
-			dev_err(&slave->dev, "SDW_SCP_SYSTEMCTRL read failed:%d\n", ret);
+			if (ret != -ENODATA)
+				dev_err(&slave->dev, "SDW_SCP_SYSTEMCTRL read failed:%d\n", ret);
 			return ret;
 		}
 		val = ret;
···
 
 	ret = sdw_write_no_pm(slave, SDW_SCP_SYSTEMCTRL, val);
 
-	if (ret < 0)
-		dev_err(&slave->dev,
-			"Clock Stop prepare failed for slave: %d", ret);
+	if (ret < 0 && ret != -ENODATA)
+		dev_err(&slave->dev, "SDW_SCP_SYSTEMCTRL write failed:%d\n", ret);
 
 	return ret;
 }
···
 		}
 		val &= SDW_SCP_STAT_CLK_STP_NF;
 		if (!val) {
-			dev_dbg(bus->dev, "clock stop prep/de-prep done slave:%d",
+			dev_dbg(bus->dev, "clock stop prep/de-prep done slave:%d\n",
 				dev_num);
 			return 0;
 		}
···
 		retry--;
 	} while (retry);
 
-	dev_err(bus->dev, "clock stop prep/de-prep failed slave:%d",
+	dev_err(bus->dev, "clock stop prep/de-prep failed slave:%d\n",
 		dev_num);
 
 	return -ETIMEDOUT;
···
  */
 int sdw_bus_prep_clk_stop(struct sdw_bus *bus)
 {
-	enum sdw_clk_stop_mode slave_mode;
 	bool simple_clk_stop = true;
 	struct sdw_slave *slave;
 	bool is_slave = false;
···
 	 * In order to save on transition time, prepare
 	 * each Slave and then wait for all Slave(s) to be
 	 * prepared for clock stop.
+	 * If one of the Slave devices has lost sync and
+	 * replies with Command Ignored/-ENODATA, we continue
+	 * the loop
 	 */
 	list_for_each_entry(slave, &bus->slaves, node) {
 		if (!slave->dev_num)
···
 		/* Identify if Slave(s) are available on Bus */
 		is_slave = true;
 
-		slave_mode = sdw_get_clk_stop_mode(slave);
-		slave->curr_clk_stop_mode = slave_mode;
-
-		ret = sdw_slave_clk_stop_callback(slave, slave_mode,
+		ret = sdw_slave_clk_stop_callback(slave,
+						  SDW_CLK_STOP_MODE0,
 						  SDW_CLK_PRE_PREPARE);
-		if (ret < 0) {
-			dev_err(&slave->dev,
-				"pre-prepare failed:%d", ret);
+		if (ret < 0 && ret != -ENODATA) {
+			dev_err(&slave->dev, "clock stop pre-prepare cb failed:%d\n", ret);
 			return ret;
 		}
 
-		ret = sdw_slave_clk_stop_prepare(slave,
-						 slave_mode, true);
-		if (ret < 0) {
-			dev_err(&slave->dev,
-				"pre-prepare failed:%d", ret);
-			return ret;
-		}
-
-		if (slave_mode == SDW_CLK_STOP_MODE1)
+		/* Only prepare a Slave device if needed */
+		if (!slave->prop.simple_clk_stop_capable) {
 			simple_clk_stop = false;
+
+			ret = sdw_slave_clk_stop_prepare(slave,
+							 SDW_CLK_STOP_MODE0,
+							 true);
+			if (ret < 0 && ret != -ENODATA) {
+				dev_err(&slave->dev, "clock stop prepare failed:%d\n", ret);
+				return ret;
+			}
+		}
 	}
 
 	/* Skip remaining clock stop preparation if no Slave is attached */
 	if (!is_slave)
-		return ret;
+		return 0;
 
+	/*
+	 * Don't wait for all Slaves to be ready if they follow the simple
+	 * state machine
+	 */
 	if (!simple_clk_stop) {
 		ret = sdw_bus_wait_for_clk_prep_deprep(bus,
 						       SDW_BROADCAST_DEV_NUM);
+		/*
+		 * if there are no Slave devices present and the reply is
+		 * Command_Ignored/-ENODATA, we don't need to continue with the
+		 * flow and can just return here. The error code is not modified
+		 * and its handling left as an exercise for the caller.
+		 */
 		if (ret < 0)
 			return ret;
 	}
···
 		    slave->status != SDW_SLAVE_ALERT)
 			continue;
 
-		slave_mode = slave->curr_clk_stop_mode;
+		ret = sdw_slave_clk_stop_callback(slave,
+						  SDW_CLK_STOP_MODE0,
+						  SDW_CLK_POST_PREPARE);
 
-		if (slave_mode == SDW_CLK_STOP_MODE1) {
-			ret = sdw_slave_clk_stop_callback(slave,
-							  slave_mode,
-							  SDW_CLK_POST_PREPARE);
-
-			if (ret < 0) {
-				dev_err(&slave->dev,
-					"post-prepare failed:%d", ret);
-			}
+		if (ret < 0 && ret != -ENODATA) {
+			dev_err(&slave->dev, "clock stop post-prepare cb failed:%d\n", ret);
+			return ret;
 		}
 	}
 
-	return ret;
+	return 0;
 }
 EXPORT_SYMBOL(sdw_bus_prep_clk_stop);
···
 	ret = sdw_bwrite_no_pm(bus, SDW_BROADCAST_DEV_NUM,
 			       SDW_SCP_CTRL, SDW_SCP_CTRL_CLK_STP_NOW);
 	if (ret < 0) {
-		if (ret == -ENODATA)
-			dev_dbg(bus->dev,
-				"ClockStopNow Broadcast msg ignored %d", ret);
-		else
-			dev_err(bus->dev,
-				"ClockStopNow Broadcast msg failed %d", ret);
+		if (ret != -ENODATA)
+			dev_err(bus->dev, "ClockStopNow Broadcast msg failed %d\n", ret);
 		return ret;
 	}
···
  */
 int sdw_bus_exit_clk_stop(struct sdw_bus *bus)
 {
-	enum sdw_clk_stop_mode mode;
 	bool simple_clk_stop = true;
 	struct sdw_slave *slave;
 	bool is_slave = false;
···
 		/* Identify if Slave(s) are available on Bus */
 		is_slave = true;
 
-		mode = slave->curr_clk_stop_mode;
-
-		if (mode == SDW_CLK_STOP_MODE1) {
-			simple_clk_stop = false;
-			continue;
-		}
-
-		ret = sdw_slave_clk_stop_callback(slave, mode,
+		ret = sdw_slave_clk_stop_callback(slave, SDW_CLK_STOP_MODE0,
 						  SDW_CLK_PRE_DEPREPARE);
 		if (ret < 0)
-			dev_warn(&slave->dev,
-				 "clk stop deprep failed:%d", ret);
+			dev_warn(&slave->dev, "clock stop pre-deprepare cb failed:%d\n", ret);
 
-		ret = sdw_slave_clk_stop_prepare(slave, mode,
-						 false);
+		/* Only de-prepare a Slave device if needed */
+		if (!slave->prop.simple_clk_stop_capable) {
+			simple_clk_stop = false;
 
-		if (ret < 0)
-			dev_warn(&slave->dev,
-				 "clk stop deprep failed:%d", ret);
+			ret = sdw_slave_clk_stop_prepare(slave, SDW_CLK_STOP_MODE0,
+							 false);
+
+			if (ret < 0)
+				dev_warn(&slave->dev, "clock stop deprepare failed:%d\n", ret);
+		}
 	}
 
 	/* Skip remaining clock stop de-preparation if no Slave is attached */
 	if (!is_slave)
 		return 0;
 
-	if (!simple_clk_stop)
-		sdw_bus_wait_for_clk_prep_deprep(bus, SDW_BROADCAST_DEV_NUM);
+	/*
+	 * Don't wait for all Slaves to be ready if they follow the simple
+	 * state machine
+	 */
+	if (!simple_clk_stop) {
+		ret = sdw_bus_wait_for_clk_prep_deprep(bus, SDW_BROADCAST_DEV_NUM);
+		if (ret < 0)
+			dev_warn(&slave->dev, "clock stop deprepare wait failed:%d\n", ret);
+	}
 
 	list_for_each_entry(slave, &bus->slaves, node) {
 		if (!slave->dev_num)
···
 		    slave->status != SDW_SLAVE_ALERT)
 			continue;
 
-		mode = slave->curr_clk_stop_mode;
-		sdw_slave_clk_stop_callback(slave, mode,
-					    SDW_CLK_POST_DEPREPARE);
+		ret = sdw_slave_clk_stop_callback(slave, SDW_CLK_STOP_MODE0,
+						  SDW_CLK_POST_DEPREPARE);
+		if (ret < 0)
+			dev_warn(&slave->dev, "clock stop post-deprepare cb failed:%d\n", ret);
 	}
 
 	return 0;
```
+2 -19
drivers/soundwire/cadence_master.c
```diff
 		}
 	}
 
-	/*
-	 * This CMD_ACCEPT should be used when there are no devices
-	 * attached on the link when entering clock stop mode. If this is
-	 * not set and there is a broadcast write then the command ignored
-	 * will be treated as a failure
-	 */
-	if (!slave_present)
-		cdns_updatel(cdns, CDNS_MCP_CONTROL,
-			     CDNS_MCP_CONTROL_CMD_ACCEPT,
-			     CDNS_MCP_CONTROL_CMD_ACCEPT);
-	else
-		cdns_updatel(cdns, CDNS_MCP_CONTROL,
-			     CDNS_MCP_CONTROL_CMD_ACCEPT, 0);
-
 	/* commit changes */
 	ret = cdns_config_update(cdns);
 	if (ret < 0) {
···
 	cdns_updatel(cdns, CDNS_MCP_CONTROL,
 		     CDNS_MCP_CONTROL_BLOCK_WAKEUP, 0);
 
-	/*
-	 * clear CMD_ACCEPT so that the command ignored
-	 * will be treated as a failure during a broadcast write
-	 */
-	cdns_updatel(cdns, CDNS_MCP_CONTROL, CDNS_MCP_CONTROL_CMD_ACCEPT, 0);
+	cdns_updatel(cdns, CDNS_MCP_CONTROL, CDNS_MCP_CONTROL_CMD_ACCEPT,
+		     CDNS_MCP_CONTROL_CMD_ACCEPT);
 
 	if (!bus_reset) {
```
-3
drivers/soundwire/cadence_master.h
```diff
 cdns_xfer_msg_defer(struct sdw_bus *bus,
 		    struct sdw_msg *msg, struct sdw_defer *defer);
 
-enum sdw_command_response
-cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num);
-
 int cdns_bus_conf(struct sdw_bus *bus, struct sdw_bus_params *params);
 
 int cdns_set_sdw_stream(struct snd_soc_dai *dai,
```
+1 -1
drivers/soundwire/dmi-quirks.c
```diff
 	/* check if any address remap quirk applies */
 	dmi_id = dmi_first_match(adr_remap_quirk_table);
 	if (dmi_id) {
-		struct adr_remap *map = dmi_id->driver_data;
+		struct adr_remap *map;
 
 		for (map = dmi_id->driver_data; map->adr; map++) {
 			if (map->adr == addr) {
```
+9 -5
drivers/soundwire/generic_bandwidth_allocation.c
```diff
 		 */
 	}
 
-	if (i == clk_values)
+	if (i == clk_values) {
+		dev_err(bus->dev, "%s: could not find clock value for bandwidth %d\n",
+			__func__, bus->params.bandwidth);
 		return -EINVAL;
+	}
 
 	ret = sdw_select_row_col(bus, curr_dr_freq);
-	if (ret < 0)
+	if (ret < 0) {
+		dev_err(bus->dev, "%s: could not find frame configuration for bus dr_freq %d\n",
+			__func__, curr_dr_freq);
 		return -EINVAL;
+	}
 
 	bus->params.curr_dr_freq = curr_dr_freq;
 	return 0;
···
 
 	/* Computes clock frequency, frame shape and frame frequency */
 	ret = sdw_compute_bus_params(bus);
-	if (ret < 0) {
-		dev_err(bus->dev, "Compute bus params failed: %d\n", ret);
+	if (ret < 0)
 		return ret;
-	}
 
 	/* Compute transport and port params */
 	ret = sdw_compute_port_params(bus);
```
+32 -26
drivers/soundwire/intel.c
```diff
 #include <linux/module.h>
 #include <linux/interrupt.h>
 #include <linux/io.h>
-#include <linux/platform_device.h>
+#include <linux/auxiliary_bus.h>
 #include <sound/pcm_params.h>
 #include <linux/pm_runtime.h>
 #include <sound/soc.h>
···
 }
 
 /*
- * probe and init
+ * probe and init (aux_dev_id argument is required by function prototype but not used)
  */
-static int intel_master_probe(struct platform_device *pdev)
+static int intel_link_probe(struct auxiliary_device *auxdev,
+			    const struct auxiliary_device_id *aux_dev_id)
+
 {
-	struct device *dev = &pdev->dev;
+	struct device *dev = &auxdev->dev;
+	struct sdw_intel_link_dev *ldev = auxiliary_dev_to_sdw_intel_link_dev(auxdev);
 	struct sdw_intel *sdw;
 	struct sdw_cdns *cdns;
 	struct sdw_bus *bus;
···
 	cdns = &sdw->cdns;
 	bus = &cdns->bus;
 
-	sdw->instance = pdev->id;
-	sdw->link_res = dev_get_platdata(dev);
+	sdw->instance = auxdev->id;
+	sdw->link_res = &ldev->link_res;
 	cdns->dev = dev;
 	cdns->registers = sdw->link_res->registers;
 	cdns->instance = sdw->instance;
 	cdns->msg_count = 0;
 
-	bus->link_id = pdev->id;
+	bus->link_id = auxdev->id;
 
 	sdw_cdns_probe(cdns);
···
 	return 0;
 }
 
-int intel_master_startup(struct platform_device *pdev)
+int intel_link_startup(struct auxiliary_device *auxdev)
 {
 	struct sdw_cdns_stream_config config;
-	struct device *dev = &pdev->dev;
+	struct device *dev = &auxdev->dev;
 	struct sdw_cdns *cdns = dev_get_drvdata(dev);
 	struct sdw_intel *sdw = cdns_to_intel(cdns);
 	struct sdw_bus *bus = &cdns->bus;
···
 	return ret;
 }
 
-static int intel_master_remove(struct platform_device *pdev)
+static void intel_link_remove(struct auxiliary_device *auxdev)
 {
-	struct device *dev = &pdev->dev;
+	struct device *dev = &auxdev->dev;
 	struct sdw_cdns *cdns = dev_get_drvdata(dev);
 	struct sdw_intel *sdw = cdns_to_intel(cdns);
 	struct sdw_bus *bus = &cdns->bus;
···
 		snd_soc_unregister_component(dev);
 	}
 	sdw_bus_master_delete(bus);
-
-	return 0;
 }
 
-int intel_master_process_wakeen_event(struct platform_device *pdev)
+int intel_link_process_wakeen_event(struct auxiliary_device *auxdev)
 {
-	struct device *dev = &pdev->dev;
+	struct device *dev = &auxdev->dev;
 	struct sdw_intel *sdw;
 	struct sdw_bus *bus;
 	void __iomem *shim;
 	u16 wake_sts;
 
-	sdw = platform_get_drvdata(pdev);
+	sdw = dev_get_drvdata(dev);
 	bus = &sdw->cdns.bus;
 
 	if (bus->prop.hw_disabled) {
···
 	SET_RUNTIME_PM_OPS(intel_suspend_runtime, intel_resume_runtime, NULL)
 };
 
-static struct platform_driver sdw_intel_drv = {
-	.probe = intel_master_probe,
-	.remove = intel_master_remove,
-	.driver = {
-		.name = "intel-sdw",
-		.pm = &intel_pm,
-	}
+static const struct auxiliary_device_id intel_link_id_table[] = {
+	{ .name = "soundwire_intel.link" },
+	{},
 };
+MODULE_DEVICE_TABLE(auxiliary, intel_link_id_table);
 
-module_platform_driver(sdw_intel_drv);
+static struct auxiliary_driver sdw_intel_drv = {
+	.probe = intel_link_probe,
+	.remove = intel_link_remove,
+	.driver = {
+		/* auxiliary_driver_register() sets .name to be the modname */
+		.pm = &intel_pm,
+	},
+	.id_table = intel_link_id_table
+};
+module_auxiliary_driver(sdw_intel_drv);
 
 MODULE_LICENSE("Dual BSD/GPL");
-MODULE_ALIAS("platform:intel-sdw");
-MODULE_DESCRIPTION("Intel Soundwire Master Driver");
+MODULE_DESCRIPTION("Intel Soundwire Link Driver");
```
+10 -4
drivers/soundwire/intel.h
```diff
 /**
  * struct sdw_intel_link_res - Soundwire Intel link resource structure,
  * typically populated by the controller driver.
- * @pdev: platform_device
  * @mmio_base: mmio base of SoundWire registers
  * @registers: Link IO registers base
  * @shim: Audio shim pointer
···
  * @list: used to walk-through all masters exposed by the same controller
  */
 struct sdw_intel_link_res {
-	struct platform_device *pdev;
 	void __iomem *mmio_base; /* not strictly needed, useful for debug */
 	void __iomem *registers;
 	void __iomem *shim;
···
 #endif
 };
 
-int intel_master_startup(struct platform_device *pdev);
-int intel_master_process_wakeen_event(struct platform_device *pdev);
+int intel_link_startup(struct auxiliary_device *auxdev);
+int intel_link_process_wakeen_event(struct auxiliary_device *auxdev);
+
+struct sdw_intel_link_dev {
+	struct auxiliary_device auxdev;
+	struct sdw_intel_link_res link_res;
+};
+
+#define auxiliary_dev_to_sdw_intel_link_dev(auxiliary_dev) \
+	container_of(auxiliary_dev, struct sdw_intel_link_dev, auxdev)
 
 #endif /* __SDW_INTEL_LOCAL_H */
```
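`auxiliary_dev_to_sdw_intel_link_dev()` is a plain `container_of()`: given a pointer to the embedded `auxdev` member, subtract that member's offset to recover the wrapping `sdw_intel_link_dev`. A compilable userspace sketch with `container_of()` spelled out via `offsetof()` and the struct bodies reduced to stand-ins:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified container_of(): step back from a member to its container. */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* Stand-ins for the real kernel structs. */
struct auxiliary_device { int id; };

struct sdw_intel_link_dev {
	struct auxiliary_device auxdev;    /* embedded, not a pointer */
	int link_res;                      /* placeholder for the resources */
};

#define auxiliary_dev_to_sdw_intel_link_dev(auxiliary_dev) \
	container_of(auxiliary_dev, struct sdw_intel_link_dev, auxdev)
```

Because `auxdev` is embedded rather than pointed to, the driver core can hand back only the `auxiliary_device` and probe can still reach the link resources without any extra lookup table.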
+158 -76
drivers/soundwire/intel_init.c
```diff
 #include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/module.h>
-#include <linux/platform_device.h>
+#include <linux/auxiliary_bus.h>
 #include <linux/pm_runtime.h>
 #include <linux/soundwire/sdw_intel.h>
 #include "cadence_master.h"
···
 #define SDW_LINK_BASE		0x30000
 #define SDW_LINK_SIZE		0x10000
 
+static void intel_link_dev_release(struct device *dev)
+{
+	struct auxiliary_device *auxdev = to_auxiliary_dev(dev);
+	struct sdw_intel_link_dev *ldev = auxiliary_dev_to_sdw_intel_link_dev(auxdev);
+
+	kfree(ldev);
+}
+
+/* alloc, init and add link devices */
+static struct sdw_intel_link_dev *intel_link_dev_register(struct sdw_intel_res *res,
+							  struct sdw_intel_ctx *ctx,
+							  struct fwnode_handle *fwnode,
+							  const char *name,
+							  int link_id)
+{
+	struct sdw_intel_link_dev *ldev;
+	struct sdw_intel_link_res *link;
+	struct auxiliary_device *auxdev;
+	int ret;
+
+	ldev = kzalloc(sizeof(*ldev), GFP_KERNEL);
+	if (!ldev)
+		return ERR_PTR(-ENOMEM);
+
+	auxdev = &ldev->auxdev;
+	auxdev->name = name;
+	auxdev->dev.parent = res->parent;
+	auxdev->dev.fwnode = fwnode;
+	auxdev->dev.release = intel_link_dev_release;
+
+	/* we don't use an IDA since we already have a link ID */
+	auxdev->id = link_id;
+
+	/*
+	 * keep a handle on the allocated memory, to be used in all other functions.
+	 * Since the same pattern is used to skip links that are not enabled, there is
+	 * no need to check if ctx->ldev[i] is NULL later on.
+	 */
+	ctx->ldev[link_id] = ldev;
+
+	/* Add link information used in the driver probe */
+	link = &ldev->link_res;
+	link->mmio_base = res->mmio_base;
+	link->registers = res->mmio_base + SDW_LINK_BASE
+			  + (SDW_LINK_SIZE * link_id);
+	link->shim = res->mmio_base + SDW_SHIM_BASE;
+	link->alh = res->mmio_base + SDW_ALH_BASE;
+
+	link->ops = res->ops;
+	link->dev = res->dev;
+
+	link->clock_stop_quirks = res->clock_stop_quirks;
+	link->shim_lock = &ctx->shim_lock;
+	link->shim_mask = &ctx->shim_mask;
+	link->link_mask = ctx->link_mask;
+
+	/* now follow the two-step init/add sequence */
+	ret = auxiliary_device_init(auxdev);
+	if (ret < 0) {
+		dev_err(res->parent, "failed to initialize link dev %s link_id %d\n",
+			name, link_id);
+		kfree(ldev);
+		return ERR_PTR(ret);
+	}
+
+	ret = auxiliary_device_add(&ldev->auxdev);
+	if (ret < 0) {
+		dev_err(res->parent, "failed to add link dev %s link_id %d\n",
+			ldev->auxdev.name, link_id);
+		/* ldev will be freed with the put_device() and .release sequence */
+		auxiliary_device_uninit(&ldev->auxdev);
+		return ERR_PTR(ret);
+	}
+
+	return ldev;
+}
+
+static void intel_link_dev_unregister(struct sdw_intel_link_dev *ldev)
+{
+	auxiliary_device_delete(&ldev->auxdev);
+	auxiliary_device_uninit(&ldev->auxdev);
+}
+
 static int sdw_intel_cleanup(struct sdw_intel_ctx *ctx)
 {
-	struct sdw_intel_link_res *link = ctx->links;
+	struct sdw_intel_link_dev *ldev;
 	u32 link_mask;
 	int i;
 
-	if (!link)
-		return 0;
-
 	link_mask = ctx->link_mask;
 
-	for (i = 0; i < ctx->count; i++, link++) {
+	for (i = 0; i < ctx->count; i++) {
 		if (!(link_mask & BIT(i)))
 			continue;
 
-		if (link->pdev) {
-			pm_runtime_disable(&link->pdev->dev);
-			platform_device_unregister(link->pdev);
-		}
+		ldev = ctx->ldev[i];
```
47 - if (!link->clock_stop_quirks) 48 - pm_runtime_put_noidle(link->dev); 124 + pm_runtime_disable(&ldev->auxdev.dev); 125 + if (!ldev->link_res.clock_stop_quirks) 126 + pm_runtime_put_noidle(ldev->link_res.dev); 127 + 128 + intel_link_dev_unregister(ldev); 49 129 } 50 130 51 131 return 0; ··· 171 91 static struct sdw_intel_ctx 172 92 *sdw_intel_probe_controller(struct sdw_intel_res *res) 173 93 { 174 - struct platform_device_info pdevinfo; 175 - struct platform_device *pdev; 176 94 struct sdw_intel_link_res *link; 95 + struct sdw_intel_link_dev *ldev; 177 96 struct sdw_intel_ctx *ctx; 178 97 struct acpi_device *adev; 179 98 struct sdw_slave *slave; ··· 195 116 count = res->count; 196 117 dev_dbg(&adev->dev, "Creating %d SDW Link devices\n", count); 197 118 198 - ctx = devm_kzalloc(&adev->dev, sizeof(*ctx), GFP_KERNEL); 119 + /* 120 + * we need to alloc/free memory manually and can't use devm: 121 + * this routine may be called from a workqueue, and not from 122 + * the parent .probe. 123 + * If devm_ was used, the memory might never be freed on errors. 124 + */ 125 + ctx = kzalloc(sizeof(*ctx), GFP_KERNEL); 199 126 if (!ctx) 200 127 return NULL; 201 128 202 129 ctx->count = count; 203 - ctx->links = devm_kcalloc(&adev->dev, ctx->count, 204 - sizeof(*ctx->links), GFP_KERNEL); 205 - if (!ctx->links) 206 - return NULL; 207 130 208 - ctx->count = count; 131 + /* 132 + * allocate the array of pointers. The link-specific data is allocated 133 + * as part of the first loop below and released with the auxiliary_device_uninit(). 134 + * If some links are disabled, the link pointer will remain NULL. Given that the 135 + * number of links is small, this is simpler than using a list to keep track of links. 
136 + */ 137 + ctx->ldev = kcalloc(ctx->count, sizeof(*ctx->ldev), GFP_KERNEL); 138 + if (!ctx->ldev) { 139 + kfree(ctx); 140 + return NULL; 141 + } 142 + 209 143 ctx->mmio_base = res->mmio_base; 210 144 ctx->link_mask = res->link_mask; 211 145 ctx->handle = res->handle; 212 146 mutex_init(&ctx->shim_lock); 213 147 214 - link = ctx->links; 215 148 link_mask = ctx->link_mask; 216 149 217 150 INIT_LIST_HEAD(&ctx->link_list); 218 151 219 - /* Create SDW Master devices */ 220 - for (i = 0; i < count; i++, link++) { 221 - if (!(link_mask & BIT(i))) { 222 - dev_dbg(&adev->dev, 223 - "Link %d masked, will not be enabled\n", i); 152 + for (i = 0; i < count; i++) { 153 + if (!(link_mask & BIT(i))) 224 154 continue; 225 - } 226 155 227 - link->mmio_base = res->mmio_base; 228 - link->registers = res->mmio_base + SDW_LINK_BASE 229 - + (SDW_LINK_SIZE * i); 230 - link->shim = res->mmio_base + SDW_SHIM_BASE; 231 - link->alh = res->mmio_base + SDW_ALH_BASE; 232 - 233 - link->ops = res->ops; 234 - link->dev = res->dev; 235 - 236 - link->clock_stop_quirks = res->clock_stop_quirks; 237 - link->shim_lock = &ctx->shim_lock; 238 - link->shim_mask = &ctx->shim_mask; 239 - link->link_mask = link_mask; 240 - 241 - memset(&pdevinfo, 0, sizeof(pdevinfo)); 242 - 243 - pdevinfo.parent = res->parent; 244 - pdevinfo.name = "intel-sdw"; 245 - pdevinfo.id = i; 246 - pdevinfo.fwnode = acpi_fwnode_handle(adev); 247 - pdevinfo.data = link; 248 - pdevinfo.size_data = sizeof(*link); 249 - 250 - pdev = platform_device_register_full(&pdevinfo); 251 - if (IS_ERR(pdev)) { 252 - dev_err(&adev->dev, 253 - "platform device creation failed: %ld\n", 254 - PTR_ERR(pdev)); 156 + /* 157 + * init and add a device for each link 158 + * 159 + * The name of the device will be soundwire_intel.link.[i], 160 + * with the "soundwire_intel" module prefix automatically added 161 + * by the auxiliary bus core. 
162 + */ 163 + ldev = intel_link_dev_register(res, 164 + ctx, 165 + acpi_fwnode_handle(adev), 166 + "link", 167 + i); 168 + if (IS_ERR(ldev)) 255 169 goto err; 256 - } 257 - link->pdev = pdev; 258 - link->cdns = platform_get_drvdata(pdev); 170 + 171 + link = &ldev->link_res; 172 + link->cdns = dev_get_drvdata(&ldev->auxdev.dev); 259 173 260 174 if (!link->cdns) { 261 175 dev_err(&adev->dev, "failed to get link->cdns\n"); ··· 266 194 num_slaves++; 267 195 } 268 196 269 - ctx->ids = devm_kcalloc(&adev->dev, num_slaves, 270 - sizeof(*ctx->ids), GFP_KERNEL); 197 + ctx->ids = kcalloc(num_slaves, sizeof(*ctx->ids), GFP_KERNEL); 271 198 if (!ctx->ids) 272 199 goto err; 273 200 ··· 284 213 return ctx; 285 214 286 215 err: 287 - ctx->count = i; 288 - sdw_intel_cleanup(ctx); 216 + while (i--) { 217 + if (!(link_mask & BIT(i))) 218 + continue; 219 + ldev = ctx->ldev[i]; 220 + intel_link_dev_unregister(ldev); 221 + } 222 + kfree(ctx->ldev); 223 + kfree(ctx); 289 224 return NULL; 290 225 } 291 226 ··· 299 222 sdw_intel_startup_controller(struct sdw_intel_ctx *ctx) 300 223 { 301 224 struct acpi_device *adev; 302 - struct sdw_intel_link_res *link; 225 + struct sdw_intel_link_dev *ldev; 303 226 u32 caps; 304 227 u32 link_mask; 305 228 int i; ··· 318 241 return -EINVAL; 319 242 } 320 243 321 - if (!ctx->links) 244 + if (!ctx->ldev) 322 245 return -EINVAL; 323 246 324 - link = ctx->links; 325 247 link_mask = ctx->link_mask; 326 248 327 249 /* Startup SDW Master devices */ 328 - for (i = 0; i < ctx->count; i++, link++) { 250 + for (i = 0; i < ctx->count; i++) { 329 251 if (!(link_mask & BIT(i))) 330 252 continue; 331 253 332 - intel_master_startup(link->pdev); 254 + ldev = ctx->ldev[i]; 333 255 334 - if (!link->clock_stop_quirks) { 256 + intel_link_startup(&ldev->auxdev); 257 + 258 + if (!ldev->link_res.clock_stop_quirks) { 335 259 /* 336 260 * we need to prevent the parent PCI device 337 261 * from entering pm_runtime suspend, so that 338 262 * power rails to the SoundWire IP are 
not 339 263 * turned off. 340 264 */ 341 - pm_runtime_get_noresume(link->dev); 265 + pm_runtime_get_noresume(ldev->link_res.dev); 342 266 } 343 267 } 344 268 ··· 350 272 * sdw_intel_probe() - SoundWire Intel probe routine 351 273 * @res: resource data 352 274 * 353 - * This registers a platform device for each Master handled by the controller, 354 - * and SoundWire Master and Slave devices will be created by the platform 275 + * This registers an auxiliary device for each Master handled by the controller, 276 + * and SoundWire Master and Slave devices will be created by the auxiliary 355 277 * device probe. All the information necessary is stored in the context, and 356 278 * the res argument pointer can be freed after this step. 357 279 * This function will be called after sdw_intel_acpi_scan() by SOF probe. ··· 384 306 void sdw_intel_exit(struct sdw_intel_ctx *ctx) 385 307 { 386 308 sdw_intel_cleanup(ctx); 309 + kfree(ctx->ids); 310 + kfree(ctx->ldev); 311 + kfree(ctx); 387 312 } 388 313 EXPORT_SYMBOL_NS(sdw_intel_exit, SOUNDWIRE_INTEL_INIT); 389 314 390 315 void sdw_intel_process_wakeen_event(struct sdw_intel_ctx *ctx) 391 316 { 392 - struct sdw_intel_link_res *link; 317 + struct sdw_intel_link_dev *ldev; 393 318 u32 link_mask; 394 319 int i; 395 320 396 - if (!ctx->links) 321 + if (!ctx->ldev) 397 322 return; 398 323 399 - link = ctx->links; 400 324 link_mask = ctx->link_mask; 401 325 402 326 /* Startup SDW Master devices */ 403 - for (i = 0; i < ctx->count; i++, link++) { 327 + for (i = 0; i < ctx->count; i++) { 404 328 if (!(link_mask & BIT(i))) 405 329 continue; 406 330 407 - intel_master_process_wakeen_event(link->pdev); 331 + ldev = ctx->ldev[i]; 332 + 333 + intel_link_process_wakeen_event(&ldev->auxdev); 408 334 } 409 335 } 410 336 EXPORT_SYMBOL_NS(sdw_intel_process_wakeen_event, SOUNDWIRE_INTEL_INIT);
+2 -2
drivers/soundwire/slave.c
··· 39 39 40 40 if (id->unique_id == SDW_IGNORED_UNIQUE_ID) { 41 41 /* name shall be sdw:link:mfg:part:class */ 42 - dev_set_name(&slave->dev, "sdw:%x:%x:%x:%x", 42 + dev_set_name(&slave->dev, "sdw:%01x:%04x:%04x:%02x", 43 43 bus->link_id, id->mfg_id, id->part_id, 44 44 id->class_id); 45 45 } else { 46 46 /* name shall be sdw:link:mfg:part:class:unique */ 47 - dev_set_name(&slave->dev, "sdw:%x:%x:%x:%x:%x", 47 + dev_set_name(&slave->dev, "sdw:%01x:%04x:%04x:%02x:%01x", 48 48 bus->link_id, id->mfg_id, id->part_id, 49 49 id->class_id, id->unique_id); 50 50 }
+6 -7
drivers/soundwire/stream.c
··· 422 422 struct completion *port_ready; 423 423 struct sdw_dpn_prop *dpn_prop; 424 424 struct sdw_prepare_ch prep_ch; 425 - unsigned int time_left; 426 425 bool intr = false; 427 426 int ret = 0, val; 428 427 u32 addr; ··· 478 479 479 480 /* Wait for completion on port ready */ 480 481 port_ready = &s_rt->slave->port_ready[prep_ch.num]; 481 - time_left = wait_for_completion_timeout(port_ready, 482 - msecs_to_jiffies(dpn_prop->ch_prep_timeout)); 482 + wait_for_completion_timeout(port_ready, 483 + msecs_to_jiffies(dpn_prop->ch_prep_timeout)); 483 484 484 485 val = sdw_read(s_rt->slave, SDW_DPN_PREPARESTATUS(p_rt->num)); 485 - val &= p_rt->ch_mask; 486 - if (!time_left || val) { 486 + if ((val < 0) || (val & p_rt->ch_mask)) { 487 + ret = (val < 0) ? val : -ETIMEDOUT; 487 488 dev_err(&s_rt->slave->dev, 488 - "Chn prep failed for port:%d\n", prep_ch.num); 489 - return -ETIMEDOUT; 489 + "Chn prep failed for port %d: %d\n", prep_ch.num, ret); 490 + return ret; 490 491 } 491 492 } 492 493
+1 -3
drivers/tty/vcc.c
··· 668 668 * 669 669 * Return: status of removal 670 670 */ 671 - static int vcc_remove(struct vio_dev *vdev) 671 + static void vcc_remove(struct vio_dev *vdev) 672 672 { 673 673 struct vcc_port *port = dev_get_drvdata(&vdev->dev); 674 674 ··· 703 703 kfree(port->domain); 704 704 kfree(port); 705 705 } 706 - 707 - return 0; 708 706 } 709 707 710 708 static const struct vio_device_id vcc_match[] = {
+1 -1
drivers/uio/Kconfig
··· 18 18 depends on PCI 19 19 help 20 20 Driver for Hilscher CIF DeviceNet and Profibus cards. This 21 - driver requires a userspace component called cif that handles 21 + driver requires a userspace component called cif that handles 22 22 all of the heavy lifting and can be found at: 23 23 <http://www.osadl.org/projects/downloads/UIO/user/> 24 24
+1 -1
drivers/uio/uio_aec.c
··· 133 133 uio_unregister_device(info); 134 134 pci_release_regions(pdev); 135 135 pci_disable_device(pdev); 136 - iounmap(info->priv); 136 + pci_iounmap(pdev, info->priv); 137 137 } 138 138 139 139 static struct pci_driver pci_driver = {
+32
drivers/uio/uio_pci_generic.c
··· 72 72 const struct pci_device_id *id) 73 73 { 74 74 struct uio_pci_generic_dev *gdev; 75 + struct uio_mem *uiomem; 75 76 int err; 77 + int i; 76 78 77 79 err = pcim_enable_device(pdev); 78 80 if (err) { ··· 101 99 } else { 102 100 dev_warn(&pdev->dev, "No IRQ assigned to device: " 103 101 "no support for interrupts?\n"); 102 + } 103 + 104 + uiomem = &gdev->info.mem[0]; 105 + for (i = 0; i < MAX_UIO_MAPS; ++i) { 106 + struct resource *r = &pdev->resource[i]; 107 + 108 + if (r->flags != (IORESOURCE_SIZEALIGN | IORESOURCE_MEM)) 109 + continue; 110 + 111 + if (uiomem >= &gdev->info.mem[MAX_UIO_MAPS]) { 112 + dev_warn( 113 + &pdev->dev, 114 + "device has more than " __stringify( 115 + MAX_UIO_MAPS) " I/O memory resources.\n"); 116 + break; 117 + } 118 + 119 + uiomem->memtype = UIO_MEM_PHYS; 120 + uiomem->addr = r->start & PAGE_MASK; 121 + uiomem->offs = r->start & ~PAGE_MASK; 122 + uiomem->size = 123 + (uiomem->offs + resource_size(r) + PAGE_SIZE - 1) & 124 + PAGE_MASK; 125 + uiomem->name = r->name; 126 + ++uiomem; 127 + } 128 + 129 + while (uiomem < &gdev->info.mem[MAX_UIO_MAPS]) { 130 + uiomem->size = 0; 131 + ++uiomem; 104 132 } 105 133 106 134 return devm_uio_register_device(&pdev->dev, &gdev->info);
-1
drivers/video/fbdev/Kconfig
··· 2209 2209 config FB_SSD1307 2210 2210 tristate "Solomon SSD1307 framebuffer support" 2211 2211 depends on FB && I2C 2212 - depends on OF 2213 2212 depends on GPIOLIB || COMPILE_TEST 2214 2213 select FB_SYS_FOPS 2215 2214 select FB_SYS_FILLRECT
+2
drivers/virt/nitro_enclaves/ne_pci_dev.c
··· 480 480 goto free_ne_pci_dev; 481 481 } 482 482 483 + pci_set_master(pdev); 484 + 483 485 rc = pci_request_regions_exclusive(pdev, "nitro_enclaves"); 484 486 if (rc < 0) { 485 487 dev_err(&pdev->dev, "Error in pci request regions [rc=%d]\n", rc);
+4 -2
drivers/visorbus/visorchipset.c
··· 1561 1561 1562 1562 static int visorchipset_init(struct acpi_device *acpi_device) 1563 1563 { 1564 - int err = -ENODEV; 1564 + int err = -ENOMEM; 1565 1565 struct visorchannel *controlvm_channel; 1566 1566 1567 1567 chipset_dev = kzalloc(sizeof(*chipset_dev), GFP_KERNEL); ··· 1584 1584 "controlvm", 1585 1585 sizeof(struct visor_controlvm_channel), 1586 1586 VISOR_CONTROLVM_CHANNEL_VERSIONID, 1587 - VISOR_CHANNEL_SIGNATURE)) 1587 + VISOR_CHANNEL_SIGNATURE)) { 1588 + err = -ENODEV; 1588 1589 goto error_delete_groups; 1590 + } 1589 1591 /* if booting in a crash kernel */ 1590 1592 if (is_kdump_kernel()) 1591 1593 INIT_DELAYED_WORK(&chipset_dev->periodic_controlvm_work,
+47 -47
drivers/w1/masters/ds2482.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-only 2 - /** 2 + /* 3 3 * ds2482.c - provides i2c to w1-master bridge(s) 4 4 * Copyright (C) 2005 Ben Gardner <bgardner@wabtec.com> 5 5 * ··· 19 19 20 20 #include <linux/w1.h> 21 21 22 - /** 22 + /* 23 23 * Allow the active pullup to be disabled, default is enabled. 24 24 * 25 25 * Note from the DS2482 datasheet: ··· 39 39 module_param(extra_config, int, S_IRUGO | S_IWUSR); 40 40 MODULE_PARM_DESC(extra_config, "Extra Configuration settings 1=APU,2=PPM,3=SPU,8=1WS"); 41 41 42 - /** 42 + /* 43 43 * The DS2482 registers - there are 3 registers that are addressed by a read 44 44 * pointer. The read pointer is set by the last command executed. 45 45 * ··· 62 62 #define DS2482_PTR_CODE_CHANNEL 0xD2 /* DS2482-800 only */ 63 63 #define DS2482_PTR_CODE_CONFIG 0xC3 64 64 65 - /** 65 + /* 66 66 * Configure Register bit definitions 67 67 * The top 4 bits always read 0. 68 68 * To write, the top nibble must be the 1's compl. of the low nibble. ··· 73 73 #define DS2482_REG_CFG_APU 0x01 /* active pull-up */ 74 74 75 75 76 - /** 76 + /* 77 77 * Write and verify codes for the CHANNEL_SELECT command (DS2482-800 only). 78 78 * To set the channel, write the value at the index of the channel. 79 79 * Read and compare against the corresponding value to verify the change. 
··· 84 84 { 0xB8, 0xB1, 0xAA, 0xA3, 0x9C, 0x95, 0x8E, 0x87 }; 85 85 86 86 87 - /** 87 + /* 88 88 * Status Register bit definitions (read only) 89 89 */ 90 90 #define DS2482_REG_STS_DIR 0x80 ··· 124 124 125 125 126 126 /** 127 - * Helper to calculate values for configuration register 128 - * @param conf the raw config value 129 - * @return the value w/ complements that can be written to register 127 + * ds2482_calculate_config - Helper to calculate values for configuration register 128 + * @conf: the raw config value 129 + * Return: the value w/ complements that can be written to register 130 130 */ 131 131 static inline u8 ds2482_calculate_config(u8 conf) 132 132 { ··· 140 140 141 141 142 142 /** 143 - * Sets the read pointer. 144 - * @param pdev The ds2482 client pointer 145 - * @param read_ptr see DS2482_PTR_CODE_xxx above 146 - * @return -1 on failure, 0 on success 143 + * ds2482_select_register - Sets the read pointer. 144 + * @pdev: The ds2482 client pointer 145 + * @read_ptr: see DS2482_PTR_CODE_xxx above 146 + * Return: -1 on failure, 0 on success 147 147 */ 148 148 static inline int ds2482_select_register(struct ds2482_data *pdev, u8 read_ptr) 149 149 { ··· 159 159 } 160 160 161 161 /** 162 - * Sends a command without a parameter 163 - * @param pdev The ds2482 client pointer 164 - * @param cmd DS2482_CMD_RESET, 162 + * ds2482_send_cmd - Sends a command without a parameter 163 + * @pdev: The ds2482 client pointer 164 + * @cmd: DS2482_CMD_RESET, 165 165 * DS2482_CMD_1WIRE_RESET, 166 166 * DS2482_CMD_1WIRE_READ_BYTE 167 - * @return -1 on failure, 0 on success 167 + * Return: -1 on failure, 0 on success 168 168 */ 169 169 static inline int ds2482_send_cmd(struct ds2482_data *pdev, u8 cmd) 170 170 { ··· 176 176 } 177 177 178 178 /** 179 - * Sends a command with a parameter 180 - * @param pdev The ds2482 client pointer 181 - * @param cmd DS2482_CMD_WRITE_CONFIG, 179 + * ds2482_send_cmd_data - Sends a command with a parameter 180 + * @pdev: The ds2482 client 
pointer 181 + * @cmd: DS2482_CMD_WRITE_CONFIG, 182 182 * DS2482_CMD_1WIRE_SINGLE_BIT, 183 183 * DS2482_CMD_1WIRE_WRITE_BYTE, 184 184 * DS2482_CMD_1WIRE_TRIPLET 185 - * @param byte The data to send 186 - * @return -1 on failure, 0 on success 185 + * @byte: The data to send 186 + * Return: -1 on failure, 0 on success 187 187 */ 188 188 static inline int ds2482_send_cmd_data(struct ds2482_data *pdev, 189 189 u8 cmd, u8 byte) ··· 205 205 #define DS2482_WAIT_IDLE_TIMEOUT 100 206 206 207 207 /** 208 - * Waits until the 1-wire interface is idle (not busy) 208 + * ds2482_wait_1wire_idle - Waits until the 1-wire interface is idle (not busy) 209 209 * 210 - * @param pdev Pointer to the device structure 211 - * @return the last value read from status or -1 (failure) 210 + * @pdev: Pointer to the device structure 211 + * Return: the last value read from status or -1 (failure) 212 212 */ 213 213 static int ds2482_wait_1wire_idle(struct ds2482_data *pdev) 214 214 { ··· 230 230 } 231 231 232 232 /** 233 - * Selects a w1 channel. 233 + * ds2482_set_channel - Selects a w1 channel. 234 234 * The 1-wire interface must be idle before calling this function. 235 235 * 236 - * @param pdev The ds2482 client pointer 237 - * @param channel 0-7 238 - * @return -1 (failure) or 0 (success) 236 + * @pdev: The ds2482 client pointer 237 + * @channel: 0-7 238 + * Return: -1 (failure) or 0 (success) 239 239 */ 240 240 static int ds2482_set_channel(struct ds2482_data *pdev, u8 channel) 241 241 { ··· 254 254 255 255 256 256 /** 257 - * Performs the touch-bit function, which writes a 0 or 1 and reads the level. 257 + * ds2482_w1_touch_bit - Performs the touch-bit function, which writes a 0 or 1 and reads the level. 
258 258 * 259 - * @param data The ds2482 channel pointer 260 - * @param bit The level to write: 0 or non-zero 261 - * @return The level read: 0 or 1 259 + * @data: The ds2482 channel pointer 260 + * @bit: The level to write: 0 or non-zero 261 + * Return: The level read: 0 or 1 262 262 */ 263 263 static u8 ds2482_w1_touch_bit(void *data, u8 bit) 264 264 { ··· 284 284 } 285 285 286 286 /** 287 - * Performs the triplet function, which reads two bits and writes a bit. 287 + * ds2482_w1_triplet - Performs the triplet function, which reads two bits and writes a bit. 288 288 * The bit written is determined by the two reads: 289 289 * 00 => dbit, 01 => 0, 10 => 1 290 290 * 291 - * @param data The ds2482 channel pointer 292 - * @param dbit The direction to choose if both branches are valid 293 - * @return b0=read1 b1=read2 b3=bit written 291 + * @data: The ds2482 channel pointer 292 + * @dbit: The direction to choose if both branches are valid 293 + * Return: b0=read1 b1=read2 b3=bit written 294 294 */ 295 295 static u8 ds2482_w1_triplet(void *data, u8 dbit) 296 296 { ··· 317 317 } 318 318 319 319 /** 320 - * Performs the write byte function. 320 + * ds2482_w1_write_byte - Performs the write byte function. 321 321 * 322 - * @param data The ds2482 channel pointer 323 - * @param byte The value to write 322 + * @data: The ds2482 channel pointer 323 + * @byte: The value to write 324 324 */ 325 325 static void ds2482_w1_write_byte(void *data, u8 byte) 326 326 { ··· 341 341 } 342 342 343 343 /** 344 - * Performs the read byte function. 344 + * ds2482_w1_read_byte - Performs the read byte function. 
345 345 * 346 - * @param data The ds2482 channel pointer 347 - * @return The value read 346 + * @data: The ds2482 channel pointer 347 + * Return: The value read 348 348 */ 349 349 static u8 ds2482_w1_read_byte(void *data) 350 350 { ··· 378 378 379 379 380 380 /** 381 - * Sends a reset on the 1-wire interface 381 + * ds2482_w1_reset_bus - Sends a reset on the 1-wire interface 382 382 * 383 - * @param data The ds2482 channel pointer 384 - * @return 0=Device present, 1=No device present or error 383 + * @data: The ds2482 channel pointer 384 + * Return: 0=Device present, 1=No device present or error 385 385 */ 386 386 static u8 ds2482_w1_reset_bus(void *data) 387 387 { ··· 541 541 return 0; 542 542 } 543 543 544 - /** 544 + /* 545 545 * Driver data (common to all clients) 546 546 */ 547 547 static const struct i2c_device_id ds2482_id[] = {
+106 -16
drivers/w1/slaves/w1_ds2438.c
··· 49 49 #define DS2438_CURRENT_MSB 0x06 50 50 #define DS2438_THRESHOLD 0x07 51 51 52 + /* Page #1 definitions */ 53 + #define DS2438_ETM_0 0x00 54 + #define DS2438_ETM_1 0x01 55 + #define DS2438_ETM_2 0x02 56 + #define DS2438_ETM_3 0x03 57 + #define DS2438_ICA 0x04 58 + #define DS2438_OFFSET_LSB 0x05 59 + #define DS2438_OFFSET_MSB 0x06 60 + 52 61 static int w1_ds2438_get_page(struct w1_slave *sl, int pageno, u8 *buf) 53 62 { 54 63 unsigned int retries = W1_DS2438_RETRIES; ··· 71 62 if (w1_reset_select_slave(sl)) 72 63 continue; 73 64 w1_buf[0] = W1_DS2438_RECALL_MEMORY; 74 - w1_buf[1] = 0x00; 65 + w1_buf[1] = (u8)pageno; 75 66 w1_write_block(sl->master, w1_buf, 2); 76 67 77 68 if (w1_reset_select_slave(sl)) 78 69 continue; 79 70 w1_buf[0] = W1_DS2438_READ_SCRATCH; 80 - w1_buf[1] = 0x00; 71 + w1_buf[1] = (u8)pageno; 81 72 w1_write_block(sl->master, w1_buf, 2); 82 73 83 74 count = w1_read_block(sl->master, buf, DS2438_PAGE_SIZE + 1); ··· 163 154 164 155 if ((status & mask) == value) 165 156 return 0; /* already set as requested */ 166 - else { 167 - /* changing bit */ 168 - status ^= mask; 169 - perform_write = 1; 170 - } 157 + 158 + /* changing bit */ 159 + status ^= mask; 160 + perform_write = 1; 161 + 171 162 break; 172 163 } 173 164 ··· 187 178 w1_buf[1] = 0x00; 188 179 w1_write_block(sl->master, w1_buf, 2); 189 180 181 + return 0; 182 + } 183 + } 184 + return -1; 185 + } 186 + 187 + static int w1_ds2438_change_offset_register(struct w1_slave *sl, u8 *value) 188 + { 189 + unsigned int retries = W1_DS2438_RETRIES; 190 + u8 w1_buf[9]; 191 + u8 w1_page1_buf[DS2438_PAGE_SIZE + 1 /*for CRC*/]; 192 + 193 + if (w1_ds2438_get_page(sl, 1, w1_page1_buf) == 0) { 194 + memcpy(&w1_buf[2], w1_page1_buf, DS2438_PAGE_SIZE - 1); /* last register reserved */ 195 + w1_buf[7] = value[0]; /* change only offset register */ 196 + w1_buf[8] = value[1]; 197 + while (retries--) { 198 + if (w1_reset_select_slave(sl)) 199 + continue; 200 + w1_buf[0] = W1_DS2438_WRITE_SCRATCH; 201 + 
w1_buf[1] = 0x01; /* write to page 1 */ 202 + w1_write_block(sl->master, w1_buf, 9); 203 + 204 + if (w1_reset_select_slave(sl)) 205 + continue; 206 + w1_buf[0] = W1_DS2438_COPY_SCRATCH; 207 + w1_buf[1] = 0x01; 208 + w1_write_block(sl->master, w1_buf, 2); 190 209 return 0; 191 210 } 192 211 } ··· 324 287 if (!buf) 325 288 return -EINVAL; 326 289 327 - if (w1_ds2438_get_current(sl, &voltage) == 0) { 290 + if (w1_ds2438_get_current(sl, &voltage) == 0) 328 291 ret = snprintf(buf, count, "%i\n", voltage); 329 - } else 292 + else 330 293 ret = -EIO; 331 294 332 295 return ret; ··· 362 325 return ret; 363 326 } 364 327 328 + static ssize_t page1_read(struct file *filp, struct kobject *kobj, 329 + struct bin_attribute *bin_attr, char *buf, 330 + loff_t off, size_t count) 331 + { 332 + struct w1_slave *sl = kobj_to_w1_slave(kobj); 333 + int ret; 334 + u8 w1_buf[DS2438_PAGE_SIZE + 1 /*for CRC*/]; 335 + 336 + if (off != 0) 337 + return 0; 338 + if (!buf) 339 + return -EINVAL; 340 + 341 + mutex_lock(&sl->master->bus_mutex); 342 + 343 + /* Read no more than page1 size */ 344 + if (count > DS2438_PAGE_SIZE) 345 + count = DS2438_PAGE_SIZE; 346 + 347 + if (w1_ds2438_get_page(sl, 1, w1_buf) == 0) { 348 + memcpy(buf, &w1_buf, count); 349 + ret = count; 350 + } else 351 + ret = -EIO; 352 + 353 + mutex_unlock(&sl->master->bus_mutex); 354 + 355 + return ret; 356 + } 357 + 358 + static ssize_t offset_write(struct file *filp, struct kobject *kobj, 359 + struct bin_attribute *bin_attr, char *buf, 360 + loff_t off, size_t count) 361 + { 362 + struct w1_slave *sl = kobj_to_w1_slave(kobj); 363 + int ret; 364 + 365 + mutex_lock(&sl->master->bus_mutex); 366 + 367 + if (w1_ds2438_change_offset_register(sl, buf) == 0) 368 + ret = count; 369 + else 370 + ret = -EIO; 371 + 372 + mutex_unlock(&sl->master->bus_mutex); 373 + 374 + return ret; 375 + } 376 + 365 377 static ssize_t temperature_read(struct file *filp, struct kobject *kobj, 366 378 struct bin_attribute *bin_attr, char *buf, 367 379 loff_t 
off, size_t count) ··· 424 338 if (!buf) 425 339 return -EINVAL; 426 340 427 - if (w1_ds2438_get_temperature(sl, &temp) == 0) { 341 + if (w1_ds2438_get_temperature(sl, &temp) == 0) 428 342 ret = snprintf(buf, count, "%i\n", temp); 429 - } else 343 + else 430 344 ret = -EIO; 431 345 432 346 return ret; ··· 445 359 if (!buf) 446 360 return -EINVAL; 447 361 448 - if (w1_ds2438_get_voltage(sl, DS2438_ADC_INPUT_VAD, &voltage) == 0) { 362 + if (w1_ds2438_get_voltage(sl, DS2438_ADC_INPUT_VAD, &voltage) == 0) 449 363 ret = snprintf(buf, count, "%u\n", voltage); 450 - } else 364 + else 451 365 ret = -EIO; 452 366 453 367 return ret; ··· 466 380 if (!buf) 467 381 return -EINVAL; 468 382 469 - if (w1_ds2438_get_voltage(sl, DS2438_ADC_INPUT_VDD, &voltage) == 0) { 383 + if (w1_ds2438_get_voltage(sl, DS2438_ADC_INPUT_VDD, &voltage) == 0) 470 384 ret = snprintf(buf, count, "%u\n", voltage); 471 - } else 385 + else 472 386 ret = -EIO; 473 387 474 388 return ret; 475 389 } 476 390 477 - static BIN_ATTR(iad, S_IRUGO | S_IWUSR | S_IWGRP, iad_read, iad_write, 0); 391 + static BIN_ATTR_RW(iad, 0); 478 392 static BIN_ATTR_RO(page0, DS2438_PAGE_SIZE); 393 + static BIN_ATTR_RO(page1, DS2438_PAGE_SIZE); 394 + static BIN_ATTR_WO(offset, 2); 479 395 static BIN_ATTR_RO(temperature, 0/* real length varies */); 480 396 static BIN_ATTR_RO(vad, 0/* real length varies */); 481 397 static BIN_ATTR_RO(vdd, 0/* real length varies */); ··· 485 397 static struct bin_attribute *w1_ds2438_bin_attrs[] = { 486 398 &bin_attr_iad, 487 399 &bin_attr_page0, 400 + &bin_attr_page1, 401 + &bin_attr_offset, 488 402 &bin_attr_temperature, 489 403 &bin_attr_vad, 490 404 &bin_attr_vdd,
+2 -3
drivers/w1/slaves/w1_therm.c
··· 834 834 } 835 835 836 836 /** 837 - * support_bulk_read() - check if slave support bulk read 837 + * bulk_read_support() - check if slave support bulk read 838 838 * @sl: device to check the ability 839 839 * 840 840 * Return: true if bulk read is supported, false if not or error ··· 2056 2056 { 2057 2057 struct w1_slave *sl = dev_to_w1_slave(device); 2058 2058 ssize_t c = PAGE_SIZE; 2059 - int rv; 2060 2059 int i; 2061 2060 u8 ack; 2062 2061 u64 rn; ··· 2083 2084 goto error; 2084 2085 2085 2086 w1_write_8(sl->master, W1_42_COND_READ); 2086 - rv = w1_read_block(sl->master, (u8 *)&rn, 8); 2087 + w1_read_block(sl->master, (u8 *)&rn, 8); 2087 2088 reg_num = (struct w1_reg_num *) &rn; 2088 2089 if (reg_num->family == W1_42_FINISHED_BYTE) 2089 2090 break;
+2 -2
drivers/watchdog/mei_wdt.c
··· 121 121 }; 122 122 123 123 /** 124 - * struct mei_wdt_start_request watchdog start/ping 124 + * struct mei_wdt_start_request - watchdog start/ping 125 125 * 126 126 * @hdr: Management Control Command Header 127 127 * @timeout: timeout value ··· 134 134 } __packed; 135 135 136 136 /** 137 - * struct mei_wdt_start_response watchdog start/ping response 137 + * struct mei_wdt_start_response - watchdog start/ping response 138 138 * 139 139 * @hdr: Management Control Command Header 140 140 * @status: operation status
+2 -4
fs/block_dev.c
··· 1609 1609 * Does not take i_mutex for the write and thus is not for general purpose 1610 1610 * use. 1611 1611 */ 1612 - ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from) 1612 + static ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from) 1613 1613 { 1614 1614 struct file *file = iocb->ki_filp; 1615 1615 struct inode *bd_inode = bdev_file_inode(file); ··· 1647 1647 blk_finish_plug(&plug); 1648 1648 return ret; 1649 1649 } 1650 - EXPORT_SYMBOL_GPL(blkdev_write_iter); 1651 1650 1652 - ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) 1651 + static ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to) 1653 1652 { 1654 1653 struct file *file = iocb->ki_filp; 1655 1654 struct inode *bd_inode = bdev_file_inode(file); ··· 1670 1671 iov_iter_reexpand(to, iov_iter_count(to) + shorted); 1671 1672 return ret; 1672 1673 } 1673 - EXPORT_SYMBOL_GPL(blkdev_read_iter); 1674 1674 1675 1675 static int blkdev_writepages(struct address_space *mapping, 1676 1676 struct writeback_control *wbc)
+165
include/dt-bindings/interconnect/qcom,sc7280.h
···
+/* SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause) */
+/*
+ * Qualcomm SC7280 interconnect IDs
+ *
+ * Copyright (c) 2021, The Linux Foundation. All rights reserved.
+ */
+
+#ifndef __DT_BINDINGS_INTERCONNECT_QCOM_SC7280_H
+#define __DT_BINDINGS_INTERCONNECT_QCOM_SC7280_H
+
+#define MASTER_QSPI_0 0
+#define MASTER_QUP_0 1
+#define MASTER_QUP_1 2
+#define MASTER_A1NOC_CFG 3
+#define MASTER_PCIE_0 4
+#define MASTER_PCIE_1 5
+#define MASTER_SDCC_1 6
+#define MASTER_SDCC_2 7
+#define MASTER_SDCC_4 8
+#define MASTER_UFS_MEM 9
+#define MASTER_USB2 10
+#define MASTER_USB3_0 11
+#define SLAVE_A1NOC_SNOC 12
+#define SLAVE_ANOC_PCIE_GEM_NOC 13
+#define SLAVE_SERVICE_A1NOC 14
+
+#define MASTER_QDSS_BAM 0
+#define MASTER_A2NOC_CFG 1
+#define MASTER_CNOC_A2NOC 2
+#define MASTER_CRYPTO 3
+#define MASTER_IPA 4
+#define MASTER_QDSS_ETR 5
+#define SLAVE_A2NOC_SNOC 6
+#define SLAVE_SERVICE_A2NOC 7
+
+#define MASTER_QUP_CORE_0 0
+#define MASTER_QUP_CORE_1 1
+#define SLAVE_QUP_CORE_0 2
+#define SLAVE_QUP_CORE_1 3
+
+#define MASTER_CNOC3_CNOC2 0
+#define MASTER_QDSS_DAP 1
+#define SLAVE_AHB2PHY_SOUTH 2
+#define SLAVE_AHB2PHY_NORTH 3
+#define SLAVE_CAMERA_CFG 4
+#define SLAVE_CLK_CTL 5
+#define SLAVE_CDSP_CFG 6
+#define SLAVE_RBCPR_CX_CFG 7
+#define SLAVE_RBCPR_MX_CFG 8
+#define SLAVE_CRYPTO_0_CFG 9
+#define SLAVE_CX_RDPM 10
+#define SLAVE_DCC_CFG 11
+#define SLAVE_DISPLAY_CFG 12
+#define SLAVE_GFX3D_CFG 13
+#define SLAVE_HWKM 14
+#define SLAVE_IMEM_CFG 15
+#define SLAVE_IPA_CFG 16
+#define SLAVE_IPC_ROUTER_CFG 17
+#define SLAVE_LPASS 18
+#define SLAVE_CNOC_MSS 19
+#define SLAVE_MX_RDPM 20
+#define SLAVE_PCIE_0_CFG 21
+#define SLAVE_PCIE_1_CFG 22
+#define SLAVE_PDM 23
+#define SLAVE_PIMEM_CFG 24
+#define SLAVE_PKA_WRAPPER_CFG 25
+#define SLAVE_PMU_WRAPPER_CFG 26
+#define SLAVE_QDSS_CFG 27
+#define SLAVE_QSPI_0 28
+#define SLAVE_QUP_0 29
+#define SLAVE_QUP_1 30
+#define SLAVE_SDCC_1 31
+#define SLAVE_SDCC_2 32
+#define SLAVE_SDCC_4 33
+#define SLAVE_SECURITY 34
+#define SLAVE_TCSR 35
+#define SLAVE_TLMM 36
+#define SLAVE_UFS_MEM_CFG 37
+#define SLAVE_USB2 38
+#define SLAVE_USB3_0 39
+#define SLAVE_VENUS_CFG 40
+#define SLAVE_VSENSE_CTRL_CFG 41
+#define SLAVE_A1NOC_CFG 42
+#define SLAVE_A2NOC_CFG 43
+#define SLAVE_CNOC2_CNOC3 44
+#define SLAVE_CNOC_MNOC_CFG 45
+#define SLAVE_SNOC_CFG 46
+
+#define MASTER_CNOC2_CNOC3 0
+#define MASTER_GEM_NOC_CNOC 1
+#define MASTER_GEM_NOC_PCIE_SNOC 2
+#define SLAVE_AOSS 3
+#define SLAVE_APPSS 4
+#define SLAVE_CNOC3_CNOC2 5
+#define SLAVE_CNOC_A2NOC 6
+#define SLAVE_DDRSS_CFG 7
+#define SLAVE_BOOT_IMEM 8
+#define SLAVE_IMEM 9
+#define SLAVE_PIMEM 10
+#define SLAVE_PCIE_0 11
+#define SLAVE_PCIE_1 12
+#define SLAVE_QDSS_STM 13
+#define SLAVE_TCU 14
+
+#define MASTER_CNOC_DC_NOC 0
+#define SLAVE_LLCC_CFG 1
+#define SLAVE_GEM_NOC_CFG 2
+
+#define MASTER_GPU_TCU 0
+#define MASTER_SYS_TCU 1
+#define MASTER_APPSS_PROC 2
+#define MASTER_COMPUTE_NOC 3
+#define MASTER_GEM_NOC_CFG 4
+#define MASTER_GFX3D 5
+#define MASTER_MNOC_HF_MEM_NOC 6
+#define MASTER_MNOC_SF_MEM_NOC 7
+#define MASTER_ANOC_PCIE_GEM_NOC 8
+#define MASTER_SNOC_GC_MEM_NOC 9
+#define MASTER_SNOC_SF_MEM_NOC 10
+#define SLAVE_MSS_PROC_MS_MPU_CFG 11
+#define SLAVE_MCDMA_MS_MPU_CFG 12
+#define SLAVE_GEM_NOC_CNOC 13
+#define SLAVE_LLCC 14
+#define SLAVE_MEM_NOC_PCIE_SNOC 15
+#define SLAVE_SERVICE_GEM_NOC_1 16
+#define SLAVE_SERVICE_GEM_NOC_2 17
+#define SLAVE_SERVICE_GEM_NOC 18
+
+#define MASTER_CNOC_LPASS_AG_NOC 0
+#define SLAVE_LPASS_CORE_CFG 1
+#define SLAVE_LPASS_LPI_CFG 2
+#define SLAVE_LPASS_MPU_CFG 3
+#define SLAVE_LPASS_TOP_CFG 4
+#define SLAVE_SERVICES_LPASS_AML_NOC 5
+#define SLAVE_SERVICE_LPASS_AG_NOC 6
+
+#define MASTER_LLCC 0
+#define SLAVE_EBI1 1
+
+#define MASTER_CNOC_MNOC_CFG 0
+#define MASTER_VIDEO_P0 1
+#define MASTER_VIDEO_PROC 2
+#define MASTER_CAMNOC_HF 3
+#define MASTER_CAMNOC_ICP 4
+#define MASTER_CAMNOC_SF 5
+#define MASTER_MDP0 6
+#define SLAVE_MNOC_HF_MEM_NOC 7
+#define SLAVE_MNOC_SF_MEM_NOC 8
+#define SLAVE_SERVICE_MNOC 9
+
+#define MASTER_CDSP_NOC_CFG 0
+#define MASTER_CDSP_PROC 1
+#define SLAVE_CDSP_MEM_NOC 2
+#define SLAVE_SERVICE_NSP_NOC 3
+
+#define MASTER_A1NOC_SNOC 0
+#define MASTER_A2NOC_SNOC 1
+#define MASTER_SNOC_CFG 2
+#define MASTER_PIMEM 3
+#define MASTER_GIC 4
+#define SLAVE_SNOC_GEM_NOC_GC 5
+#define SLAVE_SNOC_GEM_NOC_SF 6
+#define SLAVE_SERVICE_SNOC 7
+
+#endif
+3
include/linux/eeprom_93xx46.h
···
 #define EE_ADDR8 0x01 /* 8 bit addr. cfg */
 #define EE_ADDR16 0x02 /* 16 bit addr. cfg */
 #define EE_READONLY 0x08 /* forbid writing */
+#define EE_SIZE1K 0x10 /* 1 kb of data, that is a 93xx46 */
+#define EE_SIZE2K 0x20 /* 2 kb of data, that is a 93xx56 */
+#define EE_SIZE4K 0x40 /* 4 kb of data, that is a 93xx66 */

 	unsigned int quirks;
 	/* Single word read transfers only; no sequential read. */
-1
include/linux/fpga/altera-pr-ip-core.h
···
 #include <linux/io.h>

 int alt_pr_register(struct device *dev, void __iomem *reg_base);
-void alt_pr_unregister(struct device *dev);

 #endif /* _ALT_PR_IP_CORE_H */
+1 -1
include/linux/fpga/fpga-bridge.h
···
 /**
  * struct fpga_bridge_ops - ops for low level FPGA bridge drivers
  * @enable_show: returns the FPGA bridge's status
- * @enable_set: set a FPGA bridge as enabled or disabled
+ * @enable_set: set an FPGA bridge as enabled or disabled
  * @fpga_bridge_remove: set FPGA into a specific state during driver remove
  * @groups: optional attribute groups.
  */
+1 -1
include/linux/fpga/fpga-mgr.h
···
 #define FPGA_MGR_COMPRESSED_BITSTREAM BIT(4)

 /**
- * struct fpga_image_info - information specific to a FPGA image
+ * struct fpga_image_info - information specific to an FPGA image
  * @flags: boolean flags as defined above
  * @enable_timeout_us: maximum time to enable traffic through bridge (uSec)
  * @disable_timeout_us: maximum time to disable traffic through bridge (uSec)
-3
include/linux/fs.h
···
 			struct iov_iter *iter);

 /* fs/block_dev.c */
-extern ssize_t blkdev_read_iter(struct kiocb *iocb, struct iov_iter *to);
-extern ssize_t blkdev_write_iter(struct kiocb *iocb, struct iov_iter *from);
 extern int blkdev_fsync(struct file *filp, loff_t start, loff_t end,
 			int datasync);
-extern void block_sync_page(struct page *page);

 /* fs/splice.c */
 extern ssize_t generic_file_splice_read(struct file *, loff_t *,
+1 -1
include/linux/mcb.h
···
 	__mcb_register_driver(driver, THIS_MODULE, KBUILD_MODNAME)
 extern void mcb_unregister_driver(struct mcb_driver *driver);
 #define module_mcb_driver(__mcb_driver) \
-	module_driver(__mcb_driver, mcb_register_driver, mcb_unregister_driver);
+	module_driver(__mcb_driver, mcb_register_driver, mcb_unregister_driver)
 extern void mcb_bus_add_devices(const struct mcb_bus *bus);
 extern int mcb_device_register(struct mcb_bus *bus, struct mcb_device *dev);
 extern struct mcb_bus *mcb_alloc_bus(struct device *carrier);
+1
include/linux/nvmem-provider.h
···
 	NVMEM_TYPE_EEPROM,
 	NVMEM_TYPE_OTP,
 	NVMEM_TYPE_BATTERY_BACKED,
+	NVMEM_TYPE_FRAM,
 };

 #define NVMEM_DEVID_NONE (-1)
+1 -1
include/linux/phy/phy.h
···
 /**
  * struct phy_attrs - represents phy attributes
  * @bus_width: Data path width implemented by PHY
- * @max_link_rate: Maximum link rate supported by PHY (in Mbps)
+ * @max_link_rate: Maximum link rate supported by PHY (units to be decided by producer and consumer)
  * @mode: PHY mode
  */
 struct phy_attrs {
+2 -3
include/linux/soundwire/sdw.h
···
  * @update_status: Update Slave status
  * @bus_config: Update the bus config for Slave
  * @port_prep: Prepare the port with parameters
+ * @clk_stop: handle imp-def sequences before and after prepare and de-prepare
  */
 struct sdw_slave_ops {
 	int (*read_prop)(struct sdw_slave *sdw);
···
 	int (*port_prep)(struct sdw_slave *slave,
 			 struct sdw_prepare_ch *prepare_ch,
 			 enum sdw_port_prep_ops pre_ops);
-	int (*get_clk_stop_mode)(struct sdw_slave *slave);
 	int (*clk_stop)(struct sdw_slave *slave,
 			enum sdw_clk_stop_mode mode,
 			enum sdw_clk_stop_type type);
···
 	struct list_head node;
 	struct completion port_ready[SDW_MAX_PORTS];
 	unsigned int m_port_map[SDW_MAX_PORTS];
-	enum sdw_clk_stop_mode curr_clk_stop_mode;
 	u16 dev_num;
 	u16 dev_num_sticky;
 	bool probed;
···
 int sdw_write_no_pm(struct sdw_slave *slave, u32 addr, u8 value);
 int sdw_read_no_pm(struct sdw_slave *slave, u32 addr);
 int sdw_nread(struct sdw_slave *slave, u32 addr, size_t count, u8 *val);
-int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, u8 *val);
+int sdw_nwrite(struct sdw_slave *slave, u32 addr, size_t count, const u8 *val);
 int sdw_update(struct sdw_slave *slave, u32 addr, u8 mask, u8 val);
 int sdw_update_no_pm(struct sdw_slave *slave, u32 addr, u8 mask, u8 val);
+3 -3
include/linux/soundwire/sdw_intel.h
···
 	u32 link_mask;
 };

-struct sdw_intel_link_res;
+struct sdw_intel_link_dev;

 /* Intel clock-stop/pm_runtime quirk definitions */
···
  *	Controller
  * @num_slaves: total number of devices exposed across all enabled links
  * @handle: ACPI parent handle
- * @links: information for each link (controller-specific and kept
+ * @ldev: information for each link (controller-specific and kept
  *	opaque here)
  * @ids: array of slave_id, representing Slaves exposed across all enabled
  *	links
···
 	u32 link_mask;
 	int num_slaves;
 	acpi_handle handle;
-	struct sdw_intel_link_res *links;
+	struct sdw_intel_link_dev **ldev;
 	struct sdw_intel_slave_id *ids;
 	struct list_head link_list;
 	struct mutex shim_lock; /* lock for access to shared SHIM registers */
+1 -1
include/linux/stm.h
···
  *
  * Normally, an STM device will have a range of masters available to software
  * and the rest being statically assigned to various hardware trace sources.
- * The former is defined by the the range [@sw_start..@sw_end] of the device
+ * The former is defined by the range [@sw_start..@sw_end] of the device
  * description. That is, the lowest master that can be allocated to software
  * writers is @sw_start and data from this writer will appear is @sw_start
  * master in the STP stream.
+6
include/linux/sysfs.h
···
 };									\
 __ATTRIBUTE_GROUPS(_name)

+#define BIN_ATTRIBUTE_GROUPS(_name)					\
+static const struct attribute_group _name##_group = {			\
+	.bin_attrs = _name##_attrs,					\
+};									\
+__ATTRIBUTE_GROUPS(_name)
+
 struct file;
 struct vm_area_struct;
 struct address_space;
-17
include/uapi/linux/raw.h
···
-/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
-#ifndef __LINUX_RAW_H
-#define __LINUX_RAW_H
-
-#include <linux/types.h>
-
-#define RAW_SETBIND _IO( 0xac, 0 )
-#define RAW_GETBIND _IO( 0xac, 1 )
-
-struct raw_config_request
-{
-	int	raw_minor;
-	__u64	block_major;
-	__u64	block_minor;
-};
-
-#endif /* __LINUX_RAW_H */
+13
include/uapi/misc/habanalabs.h
···
  * HL_INFO_SYNC_MANAGER - Retrieve sync manager info per dcore
  * HL_INFO_TOTAL_ENERGY - Retrieve total energy consumption
  * HL_INFO_PLL_FREQUENCY - Retrieve PLL frequency
+ * HL_INFO_OPEN_STATS - Retrieve info regarding recent device open calls
  */
 #define HL_INFO_HW_IP_INFO 0
 #define HL_INFO_HW_EVENTS 1
···
 #define HL_INFO_TOTAL_ENERGY 15
 #define HL_INFO_PLL_FREQUENCY 16
 #define HL_INFO_POWER 17
+#define HL_INFO_OPEN_STATS 18

 #define HL_INFO_VERSION_MAX_LEN 128
 #define HL_INFO_CARD_NAME_MAX_LEN 16
···

 struct hl_pll_frequency_info {
 	__u16 output[HL_PLL_NUM_OUTPUTS];
+};
+
+/**
+ * struct hl_open_stats_info - device open statistics information
+ * @open_counter: ever growing counter, increased on each successful dev open
+ * @last_open_period_ms: duration (ms) device was open last time
+ */
+struct hl_open_stats_info {
+	__u64 open_counter;
+	__u64 last_open_period_ms;
 };

 /**
···
 #define HL_CS_FLAGS_STAGED_SUBMISSION_FIRST 0x80
 #define HL_CS_FLAGS_STAGED_SUBMISSION_LAST 0x100
 #define HL_CS_FLAGS_CUSTOM_TIMEOUT 0x200
+#define HL_CS_FLAGS_SKIP_RESET_ON_TIMEOUT 0x400

 #define HL_CS_STATUS_SUCCESS 0
+3 -3
lib/dynamic_debug.c
···
 		goto out_err;

 	ddebug_init_success = 1;
-	vpr_info("%d modules, %d entries and %d bytes in ddebug tables, %d bytes in __dyndbg section\n",
-		 modct, entries, (int)(modct * sizeof(struct ddebug_table)),
-		 (int)(entries * sizeof(struct _ddebug)));
+	vpr_info("%d prdebugs in %d modules, %d KiB in ddebug tables, %d kiB in __dyndbg section\n",
+		 entries, modct, (int)((modct * sizeof(struct ddebug_table)) >> 10),
+		 (int)((entries * sizeof(struct _ddebug)) >> 10));

 	/* apply ddebug_query boot param, dont unload tables on err */
 	if (ddebug_setup_string[0] != '\0') {
+2 -2
sound/soc/intel/boards/sof_sdw.c
···
 		comp_index = i + offset;
 		if (is_unique_device(link, sdw_version, mfg_id, part_id,
 				     class_id, i)) {
-			codec_str = "sdw:%x:%x:%x:%x";
+			codec_str = "sdw:%01x:%04x:%04x:%02x";
 			codec[comp_index].name =
 				devm_kasprintf(dev, GFP_KERNEL, codec_str,
 					       link_id, mfg_id, part_id,
 					       class_id);
 		} else {
-			codec_str = "sdw:%x:%x:%x:%x:%x";
+			codec_str = "sdw:%01x:%04x:%04x:%02x:%01x";
 			codec[comp_index].name =
 				devm_kasprintf(dev, GFP_KERNEL, codec_str,
 					       link_id, mfg_id, part_id,
+7
tools/testing/selftests/lkdtm/config
···
 CONFIG_LKDTM=y
+CONFIG_DEBUG_LIST=y
+CONFIG_SLAB_FREELIST_HARDENED=y
+CONFIG_FORTIFY_SOURCE=y
+CONFIG_HARDENED_USERCOPY=y
+# CONFIG_HARDENED_USERCOPY_FALLBACK is not set
+CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT=y
+CONFIG_INIT_ON_ALLOC_DEFAULT_ON=y
+8 -4
tools/testing/selftests/lkdtm/run.sh
···
 	# Save existing dmesg so we can detect new content below
 	dmesg > "$DMESG"

-	# Most shells yell about signals and we're expecting the "cat" process
-	# to usually be killed by the kernel. So we have to run it in a sub-shell
-	# and silence errors.
-	($SHELL -c 'cat <(echo '"$test"') >'"$TRIGGER" 2>/dev/null) || true
+	# Since the kernel is likely killing the process writing to the trigger
+	# file, it must not be the script's shell itself. i.e. we cannot do:
+	#	echo "$test" >"$TRIGGER"
+	# Instead, use "cat" to take the signal. Since the shell will yell about
+	# the signal that killed the subprocess, we must ignore the failure and
+	# continue. However we don't silence stderr since there might be other
+	# useful details reported there in the case of other unexpected conditions.
+	echo "$test" | cat >"$TRIGGER" || true

 	# Record and dump the results
 	dmesg | comm --nocheck-order -13 "$DMESG" - > "$LOG" || true
+1
tools/testing/selftests/lkdtm/stack-entropy.sh
···

 # We would expect any functional stack randomization to be at least 5 bits.
 if [ "$bits" -lt 5 ]; then
+	echo "Stack entropy is low! Booted without 'randomize_kstack_offset=y'?"
 	exit 1
 else
 	exit 0
+7 -4
tools/testing/selftests/lkdtm/tests.txt
···
 CORRUPT_LIST_DEL list_del corruption
 STACK_GUARD_PAGE_LEADING
 STACK_GUARD_PAGE_TRAILING
-UNSET_SMEP CR4 bits went missing
+UNSET_SMEP pinned CR4 bits changed:
 DOUBLE_FAULT
 CORRUPT_PAC
 UNALIGNED_LOAD_STORE_WRITE
-#OVERWRITE_ALLOCATION Corrupts memory on failure
+SLAB_LINEAR_OVERFLOW
+VMALLOC_LINEAR_OVERFLOW
 #WRITE_AFTER_FREE Corrupts memory on failure
-READ_AFTER_FREE
+READ_AFTER_FREE call trace:|Memory correctly poisoned
 #WRITE_BUDDY_AFTER_FREE Corrupts memory on failure
-READ_BUDDY_AFTER_FREE
+READ_BUDDY_AFTER_FREE call trace:|Memory correctly poisoned
+SLAB_INIT_ON_ALLOC Memory appears initialized
+BUDDY_INIT_ON_ALLOC Memory appears initialized
 SLAB_FREE_DOUBLE
 SLAB_FREE_CROSS
 SLAB_FREE_PAGE