Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char / misc driver updates from Greg KH:
"Here is the "large" pull request for char and misc and other assorted
smaller driver subsystems for 5.3-rc1.

It seems that this tree is becoming the funnel point of lots of
smaller driver subsystems, which is fine for me, but that's why it is
getting larger over time and does not just contain stuff under
drivers/char/ and drivers/misc.

Lots of small updates all over the place here from different driver
subsystems:
- habana driver updates
- coresight driver updates
- documentation file movements and updates
- Android binder fixes and updates
- extcon driver updates
- google firmware driver updates
- fsi driver updates
- smaller misc and char driver updates
- soundwire driver updates
- nvmem driver updates
- w1 driver fixes

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-5.3-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (188 commits)
coresight: Do not default to CPU0 for missing CPU phandle
dt-bindings: coresight: Change CPU phandle to required property
ocxl: Allow contexts to be attached with a NULL mm
fsi: sbefifo: Don't fail operations when in SBE IPL state
coresight: tmc: Smatch: Fix potential NULL pointer dereference
coresight: etm3x: Smatch: Fix potential NULL pointer dereference
coresight: Potential uninitialized variable in probe()
coresight: etb10: Do not call smp_processor_id from preemptible
coresight: tmc-etf: Do not call smp_processor_id from preemptible
coresight: tmc-etr: alloc_perf_buf: Do not call smp_processor_id from preemptible
coresight: tmc-etr: Do not call smp_processor_id() from preemptible
docs: misc-devices: convert files without extension to ReST
fpga: dfl: fme: align PR buffer size per PR datawidth
fpga: dfl: fme: remove copy_to_user() in ioctl for PR
fpga: dfl-fme-mgr: fix FME_PR_INTFC_ID register address.
intel_th: msu: Start read iterator from a non-empty window
intel_th: msu: Split sgt array and pointer in multiwindow mode
intel_th: msu: Support multipage blocks
intel_th: pci: Add Ice Lake NNPI support
intel_th: msu: Fix single mode with disabled IOMMU
...

+6063 -2988
+15 -3
Documentation/ABI/testing/debugfs-driver-habanalabs
···
 KernelVersion: 5.1
 Contact:       oded.gabbay@gmail.com
 Description:   Sets the device address to be used for read or write through
-               PCI bar. The acceptable value is a string that starts with "0x"
+               PCI bar, or the device VA of a host mapped memory to be read or
+               written directly from the host. The latter option is allowed
+               only when the IOMMU is disabled.
+               The acceptable value is a string that starts with "0x"

 What:          /sys/kernel/debug/habanalabs/hl<n>/command_buffers
 Date:          Jan 2019
···
 Description:   Allows the root user to read or write directly through the
                device's PCI bar. Writing to this file generates a write
                transaction while reading from the file generates a read
-               transcation. This custom interface is needed (instead of using
+               transaction. This custom interface is needed (instead of using
                the generic Linux user-space PCI mapping) because the DDR bar
                is very small compared to the DDR memory and only the driver can
-               move the bar before and after the transaction
+               move the bar before and after the transaction.
+               If the IOMMU is disabled, it also allows the root user to read
+               or write from the host a device VA of a host mapped memory

 What:          /sys/kernel/debug/habanalabs/hl<n>/device
 Date:          Jan 2019
···
 Description:   Enables the root user to set the device to specific state.
                Valid values are "disable", "enable", "suspend", "resume".
                User can read this property to see the valid values
+
+What:          /sys/kernel/debug/habanalabs/hl<n>/engines
+Date:          Jul 2019
+KernelVersion: 5.3
+Contact:       oded.gabbay@gmail.com
+Description:   Displays the status registers values of the device engines and
+               their derived idle status

 What:          /sys/kernel/debug/habanalabs/hl<n>/i2c_addr
 Date:          Jan 2019
+24 -18
Documentation/ABI/testing/sysfs-driver-habanalabs
···
 Date:          Jan 2019
 KernelVersion: 5.1
 Contact:       oded.gabbay@gmail.com
-Description:   Allows the user to set the maximum clock frequency of the
-               Interconnect fabric. Writes to this parameter affect the device
-               only when the power management profile is set to "manual" mode.
-               The device IC clock might be set to lower value then the
+Description:   Allows the user to set the maximum clock frequency, in Hz, of
+               the Interconnect fabric. Writes to this parameter affect the
+               device only when the power management profile is set to "manual"
+               mode. The device IC clock might be set to lower value than the
                maximum. The user should read the ic_clk_curr to see the actual
-               frequency value of the IC
+               frequency value of the IC. This property is valid only for the
+               Goya ASIC family

 What:          /sys/class/habanalabs/hl<n>/ic_clk_curr
 Date:          Jan 2019
 KernelVersion: 5.1
 Contact:       oded.gabbay@gmail.com
-Description:   Displays the current clock frequency of the Interconnect fabric
+Description:   Displays the current clock frequency, in Hz, of the Interconnect
+               fabric. This property is valid only for the Goya ASIC family

 What:          /sys/class/habanalabs/hl<n>/infineon_ver
 Date:          Jan 2019
···
 Date:          Jan 2019
 KernelVersion: 5.1
 Contact:       oded.gabbay@gmail.com
-Description:   Allows the user to set the maximum clock frequency of the
-               MME compute engine. Writes to this parameter affect the device
-               only when the power management profile is set to "manual" mode.
-               The device MME clock might be set to lower value then the
+Description:   Allows the user to set the maximum clock frequency, in Hz, of
+               the MME compute engine. Writes to this parameter affect the
+               device only when the power management profile is set to "manual"
+               mode. The device MME clock might be set to lower value than the
                maximum. The user should read the mme_clk_curr to see the actual
-               frequency value of the MME
+               frequency value of the MME. This property is valid only for the
+               Goya ASIC family

 What:          /sys/class/habanalabs/hl<n>/mme_clk_curr
 Date:          Jan 2019
 KernelVersion: 5.1
 Contact:       oded.gabbay@gmail.com
-Description:   Displays the current clock frequency of the MME compute engine
+Description:   Displays the current clock frequency, in Hz, of the MME compute
+               engine. This property is valid only for the Goya ASIC family

 What:          /sys/class/habanalabs/hl<n>/pci_addr
 Date:          Jan 2019
···
 Date:          Jan 2019
 KernelVersion: 5.1
 Contact:       oded.gabbay@gmail.com
-Description:   Allows the user to set the maximum clock frequency of the
-               TPC compute engines. Writes to this parameter affect the device
-               only when the power management profile is set to "manual" mode.
-               The device TPC clock might be set to lower value then the
+Description:   Allows the user to set the maximum clock frequency, in Hz, of
+               the TPC compute engines. Writes to this parameter affect the
+               device only when the power management profile is set to "manual"
+               mode. The device TPC clock might be set to lower value than the
                maximum. The user should read the tpc_clk_curr to see the actual
-               frequency value of the TPC
+               frequency value of the TPC. This property is valid only for
+               Goya ASIC family

 What:          /sys/class/habanalabs/hl<n>/tpc_clk_curr
 Date:          Jan 2019
 KernelVersion: 5.1
 Contact:       oded.gabbay@gmail.com
-Description:   Displays the current clock frequency of the TPC compute engines
+Description:   Displays the current clock frequency, in Hz, of the TPC compute
+               engines. This property is valid only for the Goya ASIC family

 What:          /sys/class/habanalabs/hl<n>/uboot_ver
 Date:          Jan 2019
+2 -2
Documentation/devicetree/bindings/arm/coresight-cpu-debug.txt
···
           processor core is clocked by the internal CPU clock, so it
           is enabled with CPU clock by default.

-- cpu : the CPU phandle the debug module is affined to. When omitted
-        the module is considered to belong to CPU0.
+- cpu : the CPU phandle the debug module is affined to. Do not assume it
+        to default to CPU0 if omitted.

 Optional properties:
+5 -3
Documentation/devicetree/bindings/arm/coresight.txt
···
 * port or ports: see "Graph bindings for Coresight" below.

+* Additional required property for Embedded Trace Macrocell (version 3.x and
+  version 4.x):
+       * cpu: the cpu phandle this ETM/PTM is affined to. Do not
+         assume it to default to CPU0 if omitted.
+
 * Additional required properties for System Trace Macrocells (STM):
        * reg: along with the physical base address and length of the register
          set as described above, another entry is required to describe the
···
 * arm,cp14: must be present if the system accesses ETM/PTM management
   registers via co-processor 14.
-
-* cpu: the cpu phandle this ETM/PTM is affined to. When omitted the
-  source is considered to belong to CPU0.

 * Optional property for TMC:
+22
Documentation/devicetree/bindings/arm/freescale/fsl,scu.txt
···
 Required properties:
 - compatible: should be "fsl,imx8qxp-sc-rtc";

+OCOTP bindings based on SCU Message Protocol
+------------------------------------------------------------
+Required properties:
+- compatible: Should be "fsl,imx8qxp-scu-ocotp"
+- #address-cells: Must be 1. Contains byte index
+- #size-cells: Must be 1. Contains byte length
+
+Optional Child nodes:
+
+- Data cells of ocotp:
+  Detailed bindings are described in bindings/nvmem/nvmem.txt
+
 Example (imx8qxp):
 -------------
 aliases {
···
 	>;
 };
 ...
+};
+
+ocotp: imx8qx-ocotp {
+	compatible = "fsl,imx8qxp-scu-ocotp";
+	#address-cells = <1>;
+	#size-cells = <1>;
+
+	fec_mac0: mac@2c4 {
+		reg = <0x2c4 8>;
+	};
 };

 pd: imx8qx-pd {
+19
Documentation/devicetree/bindings/extcon/extcon-fsa9480.txt
FAIRCHILD SEMICONDUCTOR FSA9480 MICROUSB SWITCH

The FSA9480 is a USB port accessory detector and switch. The FSA9480 is fully
controlled using I2C and enables USB data, stereo and mono audio, video,
microphone, and UART data to use a common connector port.

Required properties:
 - compatible : Must be "fcs,fsa9480"
 - reg : Specifies i2c slave address. Must be 0x25.
 - interrupts : Should contain one entry specifying interrupt signal of
   interrupt parent to which interrupt pin of the chip is connected.

Example:
	musb@25 {
		compatible = "fcs,fsa9480";
		reg = <0x25>;
		interrupt-parent = <&gph2>;
		interrupts = <7 0>;
	};
+1
Documentation/devicetree/bindings/memory-controllers/ingenic,jz4780-nemc.txt
···
 Required properties:
 - compatible: Should be set to one of:
+    "ingenic,jz4740-nemc" (JZ4740)
     "ingenic,jz4780-nemc" (JZ4780)
 - reg: Should specify the NEMC controller registers location and length.
 - clocks: Clock for the NEMC controller.
+58
Documentation/devicetree/bindings/misc/xlnx,sd-fec.txt
* Xilinx SDFEC(16nm) IP *

The Soft Decision Forward Error Correction (SDFEC) Engine is a Hard IP block
which provides high-throughput LDPC and Turbo Code implementations.
The LDPC decode & encode functionality is capable of covering a range of
customer specified Quasi-cyclic (QC) codes. The Turbo decode functionality
principally covers codes used by LTE. The FEC Engine offers significant
power and area savings versus implementations done in the FPGA fabric.


Required properties:
- compatible: Must be "xlnx,sd-fec-1.1"
- clock-names : List of input clock names from the following:
    - "core_clk", Main processing clock for processing core (required)
    - "s_axi_aclk", AXI4-Lite memory-mapped slave interface clock (required)
    - "s_axis_din_aclk", DIN AXI4-Stream Slave interface clock (optional)
    - "s_axis_din_words_aclk", DIN_WORDS AXI4-Stream Slave interface clock (optional)
    - "s_axis_ctrl_aclk", Control input AXI4-Stream Slave interface clock (optional)
    - "m_axis_dout_aclk", DOUT AXI4-Stream Master interface clock (optional)
    - "m_axis_dout_words_aclk", DOUT_WORDS AXI4-Stream Master interface clock (optional)
    - "m_axis_status_aclk", Status output AXI4-Stream Master interface clock (optional)
- clocks : Clock phandles (see clock_bindings.txt for details).
- reg: Should contain Xilinx SDFEC 16nm Hardened IP block registers
  location and length.
- xlnx,sdfec-code : Should contain "ldpc" or "turbo" to describe the codes
  being used.
- xlnx,sdfec-din-words : A value 0 indicates that the DIN_WORDS interface is
  driven with a fixed value and is not present on the device, a value of 1
  configures the DIN_WORDS to be block based, while a value of 2 configures the
  DIN_WORDS input to be supplied for each AXI transaction.
- xlnx,sdfec-din-width : Configures the DIN AXI stream where a value of 1
  configures a width of "1x128b", 2 a width of "2x128b" and 4 configures a width
  of "4x128b".
- xlnx,sdfec-dout-words : A value 0 indicates that the DOUT_WORDS interface is
  driven with a fixed value and is not present on the device, a value of 1
  configures the DOUT_WORDS to be block based, while a value of 2 configures the
  DOUT_WORDS input to be supplied for each AXI transaction.
- xlnx,sdfec-dout-width : Configures the DOUT AXI stream where a value of 1
  configures a width of "1x128b", 2 a width of "2x128b" and 4 configures a width
  of "4x128b".

Optional properties:
- interrupts: should contain SDFEC interrupt number

Example
---------------------------------------
	sd_fec_0: sd-fec@a0040000 {
		compatible = "xlnx,sd-fec-1.1";
		clock-names = "core_clk", "s_axi_aclk", "s_axis_ctrl_aclk",
			      "s_axis_din_aclk", "m_axis_status_aclk",
			      "m_axis_dout_aclk";
		clocks = <&misc_clk_2>, <&misc_clk_0>, <&misc_clk_1>,
			 <&misc_clk_1>, <&misc_clk_1>, <&misc_clk_1>;
		reg = <0x0 0xa0040000 0x0 0x40000>;
		interrupt-parent = <&axi_intc>;
		interrupts = <1 0>;
		xlnx,sdfec-code = "ldpc";
		xlnx,sdfec-din-words = <0>;
		xlnx,sdfec-din-width = <2>;
		xlnx,sdfec-dout-words = <0>;
		xlnx,sdfec-dout-width = <1>;
	};
-60
Documentation/devicetree/bindings/mux/mmio-mux.txt
MMIO register bitfield-based multiplexer controller bindings

Define register bitfields to be used to control multiplexers. The parent
device tree node must be a syscon node to provide register access.

Required properties:
- compatible : "mmio-mux"
- #mux-control-cells : <1>
- mux-reg-masks : an array of register offset and pre-shifted bitfield mask
  pairs, each describing a single mux control.
* Standard mux-controller bindings as decribed in mux-controller.txt

Optional properties:
- idle-states : if present, the state the muxes will have when idle. The
  special state MUX_IDLE_AS_IS is the default.

The multiplexer state of each multiplexer is defined as the value of the
bitfield described by the corresponding register offset and bitfield mask pair
in the mux-reg-masks array, accessed through the parent syscon.

Example:

	syscon {
		compatible = "syscon";

		mux: mux-controller {
			compatible = "mmio-mux";
			#mux-control-cells = <1>;

			mux-reg-masks = <0x3 0x30>, /* 0: reg 0x3, bits 5:4 */
					<0x3 0x40>, /* 1: reg 0x3, bit 6 */
			idle-states = <MUX_IDLE_AS_IS>, <0>;
		};
	};

	video-mux {
		compatible = "video-mux";
		mux-controls = <&mux 0>;

		ports {
			/* inputs 0..3 */
			port@0 {
				reg = <0>;
			};
			port@1 {
				reg = <1>;
			};
			port@2 {
				reg = <2>;
			};
			port@3 {
				reg = <3>;
			};

			/* output */
			port@4 {
				reg = <4>;
			};
		};
	};
+129
Documentation/devicetree/bindings/mux/reg-mux.txt
Generic register bitfield-based multiplexer controller bindings

Define register bitfields to be used to control multiplexers. The parent
device tree node must be a device node to provide register r/w access.

Required properties:
- compatible : should be one of
	"reg-mux" : if parent device of mux controller is not syscon device
	"mmio-mux" : if parent device of mux controller is syscon device
- #mux-control-cells : <1>
- mux-reg-masks : an array of register offset and pre-shifted bitfield mask
  pairs, each describing a single mux control.
* Standard mux-controller bindings as described in mux-controller.txt

Optional properties:
- idle-states : if present, the state the muxes will have when idle. The
  special state MUX_IDLE_AS_IS is the default.

The multiplexer state of each multiplexer is defined as the value of the
bitfield described by the corresponding register offset and bitfield mask
pair in the mux-reg-masks array.

Example 1:
The parent device of mux controller is not a syscon device.

&i2c0 {
	fpga@66 { // fpga connected to i2c
		compatible = "fsl,lx2160aqds-fpga", "fsl,fpga-qixis-i2c",
			     "simple-mfd";
		reg = <0x66>;

		mux: mux-controller {
			compatible = "reg-mux";
			#mux-control-cells = <1>;
			mux-reg-masks = <0x54 0xf8>, /* 0: reg 0x54, bits 7:3 */
					<0x54 0x07>; /* 1: reg 0x54, bits 2:0 */
		};
	};
};

mdio-mux-1 {
	compatible = "mdio-mux-multiplexer";
	mux-controls = <&mux 0>;
	mdio-parent-bus = <&emdio1>;
	#address-cells = <1>;
	#size-cells = <0>;

	mdio@0 {
		reg = <0x0>;
		#address-cells = <1>;
		#size-cells = <0>;
	};

	mdio@8 {
		reg = <0x8>;
		#address-cells = <1>;
		#size-cells = <0>;
	};

	..
	..
};

mdio-mux-2 {
	compatible = "mdio-mux-multiplexer";
	mux-controls = <&mux 1>;
	mdio-parent-bus = <&emdio2>;
	#address-cells = <1>;
	#size-cells = <0>;

	mdio@0 {
		reg = <0x0>;
		#address-cells = <1>;
		#size-cells = <0>;
	};

	mdio@1 {
		reg = <0x1>;
		#address-cells = <1>;
		#size-cells = <0>;
	};

	..
	..
};

Example 2:
The parent device of mux controller is syscon device.

syscon {
	compatible = "syscon";

	mux: mux-controller {
		compatible = "mmio-mux";
		#mux-control-cells = <1>;

		mux-reg-masks = <0x3 0x30>, /* 0: reg 0x3, bits 5:4 */
				<0x3 0x40>; /* 1: reg 0x3, bit 6 */
		idle-states = <MUX_IDLE_AS_IS>, <0>;
	};
};

video-mux {
	compatible = "video-mux";
	mux-controls = <&mux 0>;
	#address-cells = <1>;
	#size-cells = <0>;

	ports {
		/* inputs 0..3 */
		port@0 {
			reg = <0>;
		};
		port@1 {
			reg = <1>;
		};
		port@2 {
			reg = <2>;
		};
		port@3 {
			reg = <3>;
		};

		/* output */
		port@4 {
			reg = <4>;
		};
	};
};
+51
Documentation/devicetree/bindings/nvmem/allwinner,sun4i-a10-sid.yaml
# SPDX-License-Identifier: GPL-2.0
%YAML 1.2
---
$id: http://devicetree.org/schemas/nvmem/allwinner,sun4i-a10-sid.yaml#
$schema: http://devicetree.org/meta-schemas/core.yaml#

title: Allwinner A10 Security ID Device Tree Bindings

maintainers:
  - Chen-Yu Tsai <wens@csie.org>
  - Maxime Ripard <maxime.ripard@bootlin.com>

allOf:
  - $ref: "nvmem.yaml#"

properties:
  compatible:
    enum:
      - allwinner,sun4i-a10-sid
      - allwinner,sun7i-a20-sid
      - allwinner,sun8i-a83t-sid
      - allwinner,sun8i-h3-sid
      - allwinner,sun50i-a64-sid
      - allwinner,sun50i-h5-sid
      - allwinner,sun50i-h6-sid

  reg:
    maxItems: 1

required:
  - compatible
  - reg

# FIXME: We should set it, but it would report all the generic
# properties as additional properties.
# additionalProperties: false

examples:
  - |
    sid@1c23800 {
        compatible = "allwinner,sun4i-a10-sid";
        reg = <0x01c23800 0x10>;
    };

  - |
    sid@1c23800 {
        compatible = "allwinner,sun7i-a20-sid";
        reg = <0x01c23800 0x200>;
    };

...
-29
Documentation/devicetree/bindings/nvmem/allwinner,sunxi-sid.txt
Allwinner sunxi-sid

Required properties:
- compatible: Should be one of the following:
  "allwinner,sun4i-a10-sid"
  "allwinner,sun7i-a20-sid"
  "allwinner,sun8i-a83t-sid"
  "allwinner,sun8i-h3-sid"
  "allwinner,sun50i-a64-sid"
  "allwinner,sun50i-h5-sid"
  "allwinner,sun50i-h6-sid"

- reg: Should contain registers location and length

= Data cells =
Are child nodes of sunxi-sid, bindings of which as described in
bindings/nvmem/nvmem.txt

Example for sun4i:
	sid@1c23800 {
		compatible = "allwinner,sun4i-a10-sid";
		reg = <0x01c23800 0x10>
	};

Example for sun7i:
	sid@1c23800 {
		compatible = "allwinner,sun7i-a20-sid";
		reg = <0x01c23800 0x200>
	};
+1
Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
···
 	"fsl,imx6sll-ocotp" (i.MX6SLL),
 	"fsl,imx7ulp-ocotp" (i.MX7ULP),
 	"fsl,imx8mq-ocotp" (i.MX8MQ),
+	"fsl,imx8mm-ocotp" (i.MX8MM),
 	followed by "syscon".
 - #address-cells : Should be 1
 - #size-cells : Should be 1
+1
Documentation/driver-api/index.rst
···
    target
    mtdnand
    miscellaneous
+   mei/index
    w1
    rapidio
    s390-drivers
+32
Documentation/driver-api/mei/hdcp.rst
.. SPDX-License-Identifier: GPL-2.0

HDCP:
=====

ME FW as a security engine provides the capability for setting up
HDCP2.2 protocol negotiation between the Intel graphics device and
an HDCP2.2 sink.

ME FW prepares HDCP2.2 negotiation parameters, signs and encrypts them
according to the HDCP 2.2 spec. The Intel graphics device sends the created
blob to the HDCP2.2 sink.

Similarly, the HDCP2.2 sink's response is transferred to ME FW
for decryption and verification.

Once all the steps of HDCP2.2 negotiation are completed,
upon request ME FW will configure the port as authenticated and supply
the HDCP encryption keys to Intel graphics hardware.


mei_hdcp driver
---------------
.. kernel-doc:: drivers/misc/mei/hdcp/mei_hdcp.c
    :doc: MEI_HDCP Client Driver

mei_hdcp api
------------

.. kernel-doc:: drivers/misc/mei/hdcp/mei_hdcp.c
    :functions:
+101
Documentation/driver-api/mei/iamt.rst
.. SPDX-License-Identifier: GPL-2.0

Intel(R) Active Management Technology (Intel AMT)
=================================================

Prominent usage of the Intel ME Interface is to communicate with Intel(R)
Active Management Technology (Intel AMT) implemented in firmware running on
the Intel ME.

Intel AMT provides the ability to manage a host remotely out-of-band (OOB)
even when the operating system running on the host processor has crashed or
is in a sleep state.

Some examples of Intel AMT usage are:
   - Monitoring hardware state and platform components
   - Remote power off/on (useful for green computing or overnight IT
     maintenance)
   - OS updates
   - Storage of useful platform information such as software assets
   - Built-in hardware KVM
   - Selective network isolation of Ethernet and IP protocol flows based
     on policies set by a remote management console
   - IDE device redirection from remote management console

Intel AMT (OOB) communication is based on SOAP (deprecated
starting with Release 6.0) over HTTP/S or WS-Management protocol over
HTTP/S that are received from a remote management console application.

For more information about Intel AMT:
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm


Intel AMT Applications
----------------------

1) Intel Local Management Service (Intel LMS)

Applications running locally on the platform communicate with Intel AMT Release
2.0 and later releases in the same way that network applications do via SOAP
over HTTP (deprecated starting with Release 6.0) or with WS-Management over
SOAP over HTTP. This means that some Intel AMT features can be accessed from a
local application using the same network interface as a remote application
communicating with Intel AMT over the network.

When a local application sends a message addressed to the local Intel AMT host
name, the Intel LMS, which listens for traffic directed to the host name,
intercepts the message and routes it to the Intel MEI.
For more information:
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm
Under "About Intel AMT" => "Local Access"

For downloading Intel LMS:
https://github.com/intel/lms

The Intel LMS opens a connection using the Intel MEI driver to the Intel LMS
firmware feature using a defined GUID and then communicates with the feature
using a protocol called the Intel AMT Port Forwarding Protocol (Intel APF
protocol). The protocol is used to maintain multiple sessions with Intel AMT
from a single application.

See the protocol specification in the Intel AMT Software Development Kit (SDK)
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm
Under "SDK Resources" => "Intel(R) vPro(TM) Gateway (MPS)"
=> "Information for Intel(R) vPro(TM) Gateway Developers"
=> "Description of the Intel AMT Port Forwarding (APF) Protocol"

2) Intel AMT Remote configuration using a Local Agent

A Local Agent enables IT personnel to configure Intel AMT out-of-the-box
without requiring additional data to be installed to enable setup. The remote
configuration process may involve an ISV-developed remote configuration
agent that runs on the host.
For more information:
https://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide/default.htm
Under "Setup and Configuration of Intel AMT" =>
"SDK Tools Supporting Setup and Configuration" =>
"Using the Local Agent Sample"

Intel AMT OS Health Watchdog
----------------------------

The Intel AMT Watchdog is an OS Health (Hang/Crash) watchdog.
Whenever the OS hangs or crashes, Intel AMT will send an event
to any subscriber to this event. This mechanism means that
IT knows when a platform crashes even when there is a hard failure on the host.

The Intel AMT Watchdog is composed of two parts:
   1) Firmware feature - receives the heartbeats
      and sends an event when the heartbeats stop.
   2) Intel MEI iAMT watchdog driver - connects to the watchdog feature,
      configures the watchdog and sends the heartbeats.

The Intel iAMT watchdog MEI driver uses the kernel watchdog API to configure
the Intel AMT Watchdog and to send heartbeats to it. The default timeout of the
watchdog is 120 seconds.

If Intel AMT is not enabled in the firmware, the watchdog client won't
enumerate on the ME client bus and watchdog devices won't be exposed.

---
linux-mei@linux.intel.com
+23
Documentation/driver-api/mei/index.rst
.. SPDX-License-Identifier: GPL-2.0

.. include:: <isonum.txt>

===================================================
Intel(R) Management Engine Interface (Intel(R) MEI)
===================================================

**Copyright** |copy| 2019 Intel Corporation


.. only:: html

   .. class:: toc-title

        Table of Contents

.. toctree::
   :maxdepth: 3

   mei
   mei-client-bus
   iamt
+168
Documentation/driver-api/mei/mei-client-bus.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + ============================================== 4 + Intel(R) Management Engine (ME) Client bus API 5 + ============================================== 6 + 7 + 8 + Rationale 9 + ========= 10 + 11 + The MEI character device is useful for dedicated applications to send and receive 12 + data to the many FW appliance found in Intel's ME from the user space. 13 + However, for some of the ME functionalities it makes sense to leverage existing software 14 + stack and expose them through existing kernel subsystems. 15 + 16 + In order to plug seamlessly into the kernel device driver model we add kernel virtual 17 + bus abstraction on top of the MEI driver. This allows implementing Linux kernel drivers 18 + for the various MEI features as a stand alone entities found in their respective subsystem. 19 + Existing device drivers can even potentially be re-used by adding an MEI CL bus layer to 20 + the existing code. 21 + 22 + 23 + MEI CL bus API 24 + ============== 25 + 26 + A driver implementation for an MEI Client is very similar to any other existing bus 27 + based device drivers. The driver registers itself as an MEI CL bus driver through 28 + the ``struct mei_cl_driver`` structure defined in :file:`include/linux/mei_cl_bus.c` 29 + 30 + .. code-block:: C 31 + 32 + struct mei_cl_driver { 33 + struct device_driver driver; 34 + const char *name; 35 + 36 + const struct mei_cl_device_id *id_table; 37 + 38 + int (*probe)(struct mei_cl_device *dev, const struct mei_cl_id *id); 39 + int (*remove)(struct mei_cl_device *dev); 40 + }; 41 + 42 + 43 + 44 + The mei_cl_device_id structure defined in :file:`include/linux/mod_devicetable.h` allows a 45 + driver to bind itself against a device name. 46 + 47 + .. 
code-block:: C 48 + 49 + struct mei_cl_device_id { 50 + char name[MEI_CL_NAME_SIZE]; 51 + uuid_le uuid; 52 + __u8 version; 53 + kernel_ulong_t driver_info; 54 + }; 55 + 56 + To actually register a driver on the ME Client bus one must call the :c:func:`mei_cl_add_driver` 57 + API. This is typically called at module initialization time. 58 + 59 + Once the driver is registered and bound to the device, a driver will typically 60 + try to do some I/O on this bus and this should be done through the :c:func:`mei_cl_send` 61 + and :c:func:`mei_cl_recv` functions. More detailed information is in :ref:`api` section. 62 + 63 + In order for a driver to be notified about pending traffic or event, the driver 64 + should register a callback via :c:func:`mei_cl_devev_register_rx_cb` and 65 + :c:func:`mei_cldev_register_notify_cb` function respectively. 66 + 67 + .. _api: 68 + 69 + API: 70 + ---- 71 + .. kernel-doc:: drivers/misc/mei/bus.c 72 + :export: drivers/misc/mei/bus.c 73 + 74 + 75 + 76 + Example 77 + ======= 78 + 79 + As a theoretical example let's pretend the ME comes with a "contact" NFC IP. 80 + The driver init and exit routines for this device would look like: 81 + 82 + .. 
code-block:: C 83 + 84 + #define CONTACT_DRIVER_NAME "contact" 85 + 86 + static struct mei_cl_device_id contact_mei_cl_tbl[] = { 87 + { CONTACT_DRIVER_NAME, }, 88 + 89 + /* required last entry */ 90 + { } 91 + }; 92 + MODULE_DEVICE_TABLE(mei_cl, contact_mei_cl_tbl); 93 + 94 + static struct mei_cl_driver contact_driver = { 95 + .id_table = contact_mei_tbl, 96 + .name = CONTACT_DRIVER_NAME, 97 + 98 + .probe = contact_probe, 99 + .remove = contact_remove, 100 + }; 101 + 102 + static int contact_init(void) 103 + { 104 + int r; 105 + 106 + r = mei_cl_driver_register(&contact_driver); 107 + if (r) { 108 + pr_err(CONTACT_DRIVER_NAME ": driver registration failed\n"); 109 + return r; 110 + } 111 + 112 + return 0; 113 + } 114 + 115 + static void __exit contact_exit(void) 116 + { 117 + mei_cl_driver_unregister(&contact_driver); 118 + } 119 + 120 + module_init(contact_init); 121 + module_exit(contact_exit); 122 + 123 + And the driver's simplified probe routine would look like that: 124 + 125 + .. code-block:: C 126 + 127 + int contact_probe(struct mei_cl_device *dev, struct mei_cl_device_id *id) 128 + { 129 + [...] 130 + mei_cldev_enable(dev); 131 + 132 + mei_cldev_register_rx_cb(dev, contact_rx_cb); 133 + 134 + return 0; 135 + } 136 + 137 + In the probe routine the driver first enable the MEI device and then registers 138 + an rx handler which is as close as it can get to registering a threaded IRQ handler. 139 + The handler implementation will typically call :c:func:`mei_cldev_recv` and then 140 + process received data. 141 + 142 + .. 
code-block:: C 143 + 144 + #define MAX_PAYLOAD 128 145 + #define HDR_SIZE 4 146 + static void contact_rx_cb(struct mei_cl_device *cldev) 147 + { 148 + struct contact *c = mei_cldev_get_drvdata(cldev); 149 + unsigned char payload[MAX_PAYLOAD]; 150 + ssize_t payload_sz; 151 + 152 + payload_sz = mei_cldev_recv(cldev, payload, MAX_PAYLOAD); 153 + if (payload_sz < HDR_SIZE) { 154 + return; 155 + } 156 + 157 + c->process_rx(payload); 158 + 159 + } 160 + 161 + MEI Client Bus Drivers 162 + ====================== 163 + 164 + .. toctree:: 165 + :maxdepth: 2 166 + 167 + hdcp 168 + nfc
+176
Documentation/driver-api/mei/mei.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + Introduction 4 + ============ 5 + 6 + The Intel Management Engine (Intel ME) is an isolated and protected computing 7 + resource (Co-processor) residing inside certain Intel chipsets. The Intel ME 8 + provides support for computer/IT management and security features. 9 + The actual feature set depends on the Intel chipset SKU. 10 + 11 + The Intel Management Engine Interface (Intel MEI, previously known as HECI) 12 + is the interface between the Host and Intel ME. This interface is exposed 13 + to the host as a PCI device; in fact, multiple PCI devices may be exposed. 14 + The Intel MEI Driver is in charge of the communication channel between 15 + a host application and the Intel ME features. 16 + 17 + Each Intel ME feature, or Intel ME Client, is addressed by a unique GUID and 18 + each client has its own protocol. The protocol is message-based with a 19 + header and a payload of up to the maximal number of bytes advertised by the 20 + client upon connection. 21 + 22 + Intel MEI Driver 23 + ================ 24 + 25 + The driver exposes a character device with device nodes /dev/meiX. 26 + 27 + An application maintains communication with an Intel ME feature while 28 + /dev/meiX is open. The binding to a specific feature is performed by calling 29 + :c:macro:`IOCTL_MEI_CONNECT_CLIENT`, which passes the desired GUID. 30 + The number of instances of an Intel ME feature that can be opened 31 + at the same time depends on the Intel ME feature, but most of the 32 + features allow only a single instance. 33 + 34 + The driver is transparent to the data that is passed between the firmware 35 + feature and the host application. 36 + 37 + Because some of the Intel ME features can change the system 38 + configuration, the driver by default allows only a privileged 39 + user to access it. 40 + 41 + The session is terminated by calling :c:func:`close(int fd)`.
42 + 43 + A code snippet for an application communicating with the Intel AMTHI client: 44 + 45 + .. code-block:: C 46 + 47 + struct mei_connect_client_data data; 48 + fd = open(MEI_DEVICE); 49 + 50 + data.in_client_uuid = AMTHI_GUID; 51 + 52 + ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &data); 53 + 54 + printf("Ver=%d, MaxLen=%ld\n", 55 + data.out_client_properties.protocol_version, 56 + data.out_client_properties.max_msg_length); 57 + 58 + [...] 59 + 60 + write(fd, amthi_req_data, amthi_req_data_len); 61 + 62 + [...] 63 + 64 + read(fd, &amthi_res_data, amthi_res_data_len); 65 + 66 + [...] 67 + close(fd); 68 + 69 + 70 + User space API 71 + 72 + IOCTLs: 73 + ======= 74 + 75 + The Intel MEI Driver supports the following IOCTL commands: 76 + 77 + IOCTL_MEI_CONNECT_CLIENT 78 + ------------------------- 79 + Connect to firmware Feature/Client. 80 + 81 + .. code-block:: none 82 + 83 + Usage: 84 + 85 + struct mei_connect_client_data client_data; 86 + 87 + ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &client_data); 88 + 89 + Inputs: 90 + 91 + struct mei_connect_client_data - contains the following 92 + input field: 93 + 94 + in_client_uuid - GUID of the FW feature 95 + to connect to. 96 + Outputs: 97 + out_client_properties - Client Properties: MTU and Protocol Version. 98 + 99 + Error returns: 100 + 101 + ENOTTY No such client (i.e. wrong GUID) or connection is not allowed. 102 + EINVAL Wrong IOCTL Number 103 + ENODEV Device or Connection is not initialized or ready. 104 + ENOMEM Unable to allocate memory to client internal data. 105 + EFAULT Fatal Error (e.g. Unable to access user input data) 106 + EBUSY Connection Already Open 107 + 108 + :Note: 109 + max_msg_length (MTU) in client properties describes the maximum 110 + data that can be sent or received. (e.g. if MTU=2K, one can send 111 + requests of up to 2k bytes and receive responses of up to 2k bytes). 112 + 113 + 114 + IOCTL_MEI_NOTIFY_SET 115 + --------------------- 116 + Enable or disable event notifications. 117 + 118 + 119 + ..
code-block:: none 120 + 121 + Usage: 122 + 123 + uint32_t enable; 124 + 125 + ioctl(fd, IOCTL_MEI_NOTIFY_SET, &enable); 126 + 127 + Inputs: 128 + uint32_t enable = 1; (enable) 129 + or 130 + uint32_t enable = 0; (disable) 131 + 132 + Error returns: 133 + 134 + 135 + EINVAL Wrong IOCTL Number 136 + ENODEV Device is not initialized or the client not connected 137 + ENOMEM Unable to allocate memory to client internal data. 138 + EFAULT Fatal Error (e.g. Unable to access user input data) 139 + EOPNOTSUPP if the device doesn't support the feature 140 + 141 + :Note: 142 + The client must be connected in order to enable notification events 143 + 144 + 145 + IOCTL_MEI_NOTIFY_GET 146 + -------------------- 147 + Retrieve event 148 + 149 + .. code-block:: none 150 + 151 + Usage: 152 + uint32_t event; 153 + ioctl(fd, IOCTL_MEI_NOTIFY_GET, &event); 154 + 155 + Outputs: 156 + 1 - if an event is pending 157 + 0 - if there is no event pending 158 + 159 + Error returns: 160 + EINVAL Wrong IOCTL Number 161 + ENODEV Device is not initialized or the client not connected 162 + ENOMEM Unable to allocate memory to client internal data. 163 + EFAULT Fatal Error (e.g. Unable to access user input data) 164 + EOPNOTSUPP if the device doesn't support the feature 165 + 166 + :Note: 167 + The client must be connected and event notification has to be enabled 168 + in order to receive an event 169 + 170 + 171 + 172 + Supported Chipsets 173 + ================== 174 + 82X38/X48 Express and newer 175 + 176 + linux-mei@linux.intel.com
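The MTU note above can be sketched from user space: a request larger than the advertised max_msg_length must be split across multiple write() calls. `mei_chunks_needed` is a hypothetical helper for illustration only, not part of the MEI API.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper (not part of the MEI API): how many write()
 * calls are needed to send request_len bytes when each message is
 * capped at the client's max_msg_length (MTU)? */
static size_t mei_chunks_needed(size_t request_len, size_t mtu)
{
	if (mtu == 0)
		return 0;	/* not connected: no MTU advertised yet */
	return (request_len + mtu - 1) / mtu;	/* round up */
}
```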
+28
Documentation/driver-api/mei/nfc.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + MEI NFC 4 + ------- 5 + 6 + Some Intel 8 and 9 Series chipsets support NFC devices connected behind 7 + the Intel Management Engine controller. 8 + The MEI client bus exposes the NFC chips as NFC phy devices and enables 9 + binding with the Microread and NXP PN544 NFC device drivers from the Linux 10 + NFC subsystem. 11 + 12 + .. kernel-render:: DOT 13 + :alt: MEI NFC digraph 14 + :caption: **MEI NFC** Stack 15 + 16 + digraph NFC { 17 + cl_nfc -> me_cl_nfc; 18 + "drivers/nfc/mei_phy" -> cl_nfc [lhead=bus]; 19 + "drivers/nfc/microread/mei" -> cl_nfc; 20 + "drivers/nfc/microread/mei" -> "drivers/nfc/mei_phy"; 21 + "drivers/nfc/pn544/mei" -> cl_nfc; 22 + "drivers/nfc/pn544/mei" -> "drivers/nfc/mei_phy"; 23 + "net/nfc" -> "drivers/nfc/microread/mei"; 24 + "net/nfc" -> "drivers/nfc/pn544/mei"; 25 + "neard" -> "net/nfc"; 26 + cl_nfc [label="mei/bus(nfc)"]; 27 + me_cl_nfc [label="me fw (nfc)"]; 28 + }
+3 -1
Documentation/driver-api/soundwire/locking.rst
··· 44 44 b. Transfer message (Read/Write) to Slave1 or broadcast message on 45 45 Bus in case of bank switch. 46 46 47 - c. Release Message lock :: 47 + c. Release Message lock 48 + 49 + :: 48 50 49 51 +----------+ +---------+ 50 52 | | | |
+27 -16
Documentation/misc-devices/eeprom Documentation/misc-devices/eeprom.rst
··· 1 + ==================== 1 2 Kernel driver eeprom 2 3 ==================== 3 4 4 5 Supported chips: 6 + 5 7 * Any EEPROM chip in the designated address range 8 + 6 9 Prefix: 'eeprom' 10 + 7 11 Addresses scanned: I2C 0x50 - 0x57 12 + 8 13 Datasheets: Publicly available from: 14 + 9 15 Atmel (www.atmel.com), 10 16 Catalyst (www.catsemi.com), 11 17 Fairchild (www.fairchildsemi.com), ··· 22 16 Xicor (www.xicor.com), 23 17 and others. 24 18 25 - Chip Size (bits) Address 19 + ========= ============= ============================================ 20 + Chip Size (bits) Address 21 + ========= ============= ============================================ 26 22 24C01 1K 0x50 (shadows at 0x51 - 0x57) 27 23 24C01A 1K 0x50 - 0x57 (Typical device on DIMMs) 28 24 24C02 2K 0x50 - 0x57 ··· 32 24 (additional data at 0x51, 0x53, 0x55, 0x57) 33 25 24C08 8K 0x50, 0x54 (additional data at 0x51, 0x52, 34 26 0x53, 0x55, 0x56, 0x57) 35 - 24C16 16K 0x50 (additional data at 0x51 - 0x57) 27 + 24C16 16K 0x50 (additional data at 0x51 - 0x57) 36 28 Sony 2K 0x57 37 29 38 30 Atmel 34C02B 2K 0x50 - 0x57, SW write protect at 0x30-37 ··· 41 33 Fairchild 34W02 2K 0x50 - 0x57, SW write protect at 0x30-37 42 34 Microchip 24AA52 2K 0x50 - 0x57, SW write protect at 0x30-37 43 35 ST M34C02 2K 0x50 - 0x57, SW write protect at 0x30-37 36 + ========= ============= ============================================ 44 37 45 38 46 39 Authors: 47 - Frodo Looijaard <frodol@dds.nl>, 48 - Philip Edelbrock <phil@netroedge.com>, 49 - Jean Delvare <jdelvare@suse.de>, 50 - Greg Kroah-Hartman <greg@kroah.com>, 51 - IBM Corp. 40 + - Frodo Looijaard <frodol@dds.nl>, 41 + - Philip Edelbrock <phil@netroedge.com>, 42 + - Jean Delvare <jdelvare@suse.de>, 43 + - Greg Kroah-Hartman <greg@kroah.com>, 44 + - IBM Corp. 52 45 53 46 Description 54 47 ----------- ··· 83 74 device will no longer respond at the 0x30-37 address. The eeprom driver 84 75 does not support this register. 
85 76 86 - Lacking functionality: 77 + Lacking functionality 78 + --------------------- 87 79 88 80 * Full support for larger devices (24C04, 24C08, 24C16). These are not 89 - typically found on a PC. These devices will appear as separate devices at 90 - multiple addresses. 81 + typically found on a PC. These devices will appear as separate devices at 82 + multiple addresses. 91 83 92 84 * Support for really large devices (24C32, 24C64, 24C128, 24C256, 24C512). 93 - These devices require two-byte address fields and are not supported. 85 + These devices require two-byte address fields and are not supported. 94 86 95 87 * Enable Writing. Again, no technical reason why not, but making it easy 96 - to change the contents of the EEPROMs (on DIMMs anyway) also makes it easy 97 - to disable the DIMMs (potentially preventing the computer from booting) 98 - until the values are restored somehow. 88 + to change the contents of the EEPROMs (on DIMMs anyway) also makes it easy 89 + to disable the DIMMs (potentially preventing the computer from booting) 90 + until the values are restored somehow. 99 91 100 - Use: 92 + Use 93 + --- 101 94 102 95 After inserting the module (and any other required SMBus/i2c modules), you 103 - should have some EEPROM directories in /sys/bus/i2c/devices/* of names such 96 + should have some EEPROM directories in ``/sys/bus/i2c/devices/*`` of names such 104 97 as "0-0050". Inside each of these is a series of files, the eeprom file 105 98 contains the binary data from EEPROM.
+6 -1
Documentation/misc-devices/ics932s401 Documentation/misc-devices/ics932s401.rst
··· 1 + ======================== 1 2 Kernel driver ics932s401 2 - ====================== 3 + ======================== 3 4 4 5 Supported chips: 6 + 5 7 * IDT ICS932S401 8 + 6 9 Prefix: 'ics932s401' 10 + 7 11 Addresses scanned: I2C 0x69 12 + 8 13 Datasheet: Publicly available at the IDT website 9 14 10 15 Author: Darrick J. Wong
+5
Documentation/misc-devices/index.rst
··· 14 14 .. toctree:: 15 15 :maxdepth: 2 16 16 17 + eeprom 17 18 ibmvmc 19 + ics932s401 20 + isl29003 21 + lis3lv02d 22 + max6875
+14 -1
Documentation/misc-devices/isl29003 Documentation/misc-devices/isl29003.rst
··· 1 + ====================== 1 2 Kernel driver isl29003 2 - ===================== 3 + ====================== 3 4 4 5 Supported chips: 6 + 5 7 * Intersil ISL29003 8 + 6 9 Prefix: 'isl29003' 10 + 7 11 Addresses scanned: none 12 + 8 13 Datasheet: 9 14 http://www.intersil.com/data/fn/fn7464.pdf 10 15 ··· 42 37 ------------- 43 38 44 39 range: 40 + == =========================== 45 41 0: 0 lux to 1000 lux (default) 46 42 1: 0 lux to 4000 lux 47 43 2: 0 lux to 16,000 lux 48 44 3: 0 lux to 64,000 lux 45 + == =========================== 49 46 50 47 resolution: 48 + == ===================== 51 49 0: 2^16 cycles (default) 52 50 1: 2^12 cycles 53 51 2: 2^8 cycles 54 52 3: 2^4 cycles 53 + == ===================== 55 54 56 55 mode: 56 + == ================================================= 57 57 0: diode1's current (unsigned 16bit) (default) 58 58 1: diode1's current (unsigned 16bit) 59 59 2: difference between diodes (l1 - l2, signed 15bit) 60 + == ================================================= 60 61 61 62 power_state: 63 + == ================================================= 62 64 0: device is disabled (default) 63 65 1: device is enabled 66 + == ================================================= 64 67 65 68 lux (read only): 66 69 returns the value from the last sensor reading
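As a quick reference for the "range" table above, each setting selects a full-scale value in lux. `isl29003_max_lux` is a hypothetical user-space lookup for illustration, not part of the driver:

```c
#include <assert.h>

/* Full-scale range in lux for each "range" sysfs setting documented
 * above (setting 0 is the power-on default). Hypothetical helper for
 * illustration only; returns -1 for an invalid setting. */
static int isl29003_max_lux(int range)
{
	static const int full_scale[4] = { 1000, 4000, 16000, 64000 };

	if (range < 0 || range > 3)
		return -1;
	return full_scale[range];
}
```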
+13 -7
Documentation/misc-devices/lis3lv02d Documentation/misc-devices/lis3lv02d.rst
··· 1 + ======================= 1 2 Kernel driver lis3lv02d 2 3 ======================= 3 4 ··· 9 8 LIS331DLH (16 bits) 10 9 11 10 Authors: 12 - Yan Burman <burman.yan@gmail.com> 13 - Eric Piel <eric.piel@tremplin-utc.net> 11 + - Yan Burman <burman.yan@gmail.com> 12 + - Eric Piel <eric.piel@tremplin-utc.net> 14 13 15 14 16 15 Description ··· 26 25 to mg values (1/1000th of earth gravity). 27 26 28 27 Sysfs attributes under /sys/devices/platform/lis3lv02d/: 29 - position - 3D position that the accelerometer reports. Format: "(x,y,z)" 30 - rate - read reports the sampling rate of the accelerometer device in HZ. 28 + 29 + position 30 + - 3D position that the accelerometer reports. Format: "(x,y,z)" 31 + rate 32 + - read reports the sampling rate of the accelerometer device in HZ. 31 33 write changes sampling rate of the accelerometer device. 32 34 Only values which are supported by HW are accepted. 33 - selftest - performs selftest for the chip as specified by chip manufacturer. 35 + selftest 36 + - performs selftest for the chip as specified by chip manufacturer. 34 37 35 38 This driver also provides an absolute input class device, allowing 36 39 the laptop to act as a pinball machine-esque joystick. Joystick device can be ··· 74 69 For better compatibility between the various laptops. The values reported by 75 70 the accelerometer are converted into a "standard" organisation of the axes 76 71 (aka "can play neverball out of the box"): 72 + 77 73 * When the laptop is horizontal the position reported is about 0 for X and Y 78 - and a positive value for Z 74 + and a positive value for Z 79 75 * If the left side is elevated, X increases (becomes positive) 80 76 * If the front side (where the touchpad is) is elevated, Y decreases 81 - (becomes negative) 77 + (becomes negative) 82 78 * If the laptop is put upside-down, Z becomes negative 83 79 84 80 If your laptop model is not recognized (cf "dmesg"), you can send an
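A user-space consumer of the position attribute above would parse its "(x,y,z)" format. This is a minimal sketch; the helper name and error convention are assumptions, not part of the driver:

```c
#include <assert.h>
#include <stdio.h>

/* Parse the "(x,y,z)" string reported by the position sysfs attribute
 * documented above (values in 1/1000th of earth gravity). Hypothetical
 * user-space helper; returns 0 on success, -1 on a malformed string. */
static int lis3_parse_position(const char *buf, int *x, int *y, int *z)
{
	if (sscanf(buf, "(%d,%d,%d)", x, y, z) != 3)
		return -1;
	return 0;
}
```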
+39 -13
Documentation/misc-devices/max6875 Documentation/misc-devices/max6875.rst
··· 1 + ===================== 1 2 Kernel driver max6875 2 3 ===================== 3 4 4 5 Supported chips: 6 + 5 7 * Maxim MAX6874, MAX6875 8 + 6 9 Prefix: 'max6875' 10 + 7 11 Addresses scanned: None (see below) 8 - Datasheet: 9 - http://pdfserv.maxim-ic.com/en/ds/MAX6874-MAX6875.pdf 12 + 13 + Datasheet: http://pdfserv.maxim-ic.com/en/ds/MAX6874-MAX6875.pdf 10 14 11 15 Author: Ben Gardner <bgardner@wabtec.com> 12 16 ··· 28 24 29 25 The Maxim MAX6874 is a similar, mostly compatible device, with more inputs 30 26 and outputs: 31 - vin gpi vout 27 + 28 + =========== === === ==== 29 + - vin gpi vout 30 + =========== === === ==== 32 31 MAX6874 6 4 8 33 32 MAX6875 4 3 5 33 + =========== === === ==== 34 34 35 35 See the datasheet for more information. 36 36 ··· 49 41 --------------- 50 42 51 43 Valid addresses for the MAX6875 are 0x50 and 0x52. 44 + 52 45 Valid addresses for the MAX6874 are 0x50, 0x52, 0x54 and 0x56. 46 + 53 47 The driver does not probe any address, so you explicitly instantiate the 54 48 devices. 55 49 56 - Example: 57 - $ modprobe max6875 58 - $ echo max6875 0x50 > /sys/bus/i2c/devices/i2c-0/new_device 50 + Example:: 51 + 52 + $ modprobe max6875 53 + $ echo max6875 0x50 > /sys/bus/i2c/devices/i2c-0/new_device 59 54 60 55 The MAX6874/MAX6875 ignores address bit 0, so this driver attaches to multiple 61 56 addresses. For example, for address 0x50, it also reserves 0x51. ··· 69 58 ---------------------------------- 70 59 71 60 Use the i2c-dev interface to access and program the chips. 61 + 72 62 Reads and writes are performed differently depending on the address range. 73 63 74 64 The configuration registers are at addresses 0x00 - 0x45. 65 + 75 66 Use i2c_smbus_write_byte_data() to write a register and 76 67 i2c_smbus_read_byte_data() to read a register. 68 + 77 69 The command is the register number. 
78 70 79 71 Examples: 80 - To write a 1 to register 0x45: 72 + 73 + To write a 1 to register 0x45:: 74 + 81 75 i2c_smbus_write_byte_data(fd, 0x45, 1); 82 76 83 - To read register 0x45: 77 + To read register 0x45:: 78 + 84 79 value = i2c_smbus_read_byte_data(fd, 0x45); 85 80 86 81 87 82 The configuration EEPROM is at addresses 0x8000 - 0x8045. 83 + 88 84 The user EEPROM is at addresses 0x8100 - 0x82ff. 89 85 90 86 Use i2c_smbus_write_word_data() to write a byte to EEPROM. 91 87 92 88 The command is the upper byte of the address: 0x80, 0x81, or 0x82. 93 - The data word is the lower part of the address or'd with data << 8. 89 + The data word is the lower part of the address or'd with data << 8:: 90 + 94 91 cmd = address >> 8; 95 92 val = (address & 0xff) | (data << 8); 96 93 97 94 Example: 98 - To write 0x5a to address 0x8003: 95 + 96 + To write 0x5a to address 0x8003:: 97 + 99 98 i2c_smbus_write_word_data(fd, 0x80, 0x5a03); 100 99 101 100 102 101 Reading data from the EEPROM is a little more complicated. 102 + 103 103 Use i2c_smbus_write_byte_data() to set the read address and then 104 104 i2c_smbus_read_byte() or i2c_smbus_read_i2c_block_data() to read the data. 105 105 106 106 Example: 107 - To read data starting at offset 0x8100, first set the address: 107 + 108 + To read data starting at offset 0x8100, first set the address:: 109 + 108 110 i2c_smbus_write_byte_data(fd, 0x81, 0x00); 109 111 110 - And then read the data 112 + And then read the data:: 113 + 111 114 value = i2c_smbus_read_byte(fd); 112 115 113 - or 116 + or:: 114 117 115 118 count = i2c_smbus_read_i2c_block_data(fd, 0x84, 16, buffer); 116 119 117 120 The block read should read 16 bytes. 121 + 118 122 0x84 is the block read command. 119 123 120 124 See the datasheet for more details.
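The EEPROM write encoding described above (command = upper address byte, data word = low address byte or'd with data << 8) can be checked with a small sketch; `max6875_encode` is a hypothetical name used only for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Encode an EEPROM write as described above: the SMBus command is the
 * upper byte of the address, and the 16-bit data word is the low
 * address byte or'd with the data shifted left by 8. */
static void max6875_encode(uint16_t address, uint8_t data,
			   uint8_t *cmd, uint16_t *val)
{
	*cmd = address >> 8;
	*val = (address & 0xff) | ((uint16_t)data << 8);
}
```

For the document's own example, writing 0x5a to address 0x8003 yields cmd 0x80 and val 0x5a03, matching the i2c_smbus_write_word_data(fd, 0x80, 0x5a03) call shown above.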
-141
Documentation/misc-devices/mei/mei-client-bus.txt
··· 1 - Intel(R) Management Engine (ME) Client bus API 2 - ============================================== 3 - 4 - 5 - Rationale 6 - ========= 7 - 8 - MEI misc character device is useful for dedicated applications to send and receive 9 - data to the many FW appliance found in Intel's ME from the user space. 10 - However for some of the ME functionalities it make sense to leverage existing software 11 - stack and expose them through existing kernel subsystems. 12 - 13 - In order to plug seamlessly into the kernel device driver model we add kernel virtual 14 - bus abstraction on top of the MEI driver. This allows implementing linux kernel drivers 15 - for the various MEI features as a stand alone entities found in their respective subsystem. 16 - Existing device drivers can even potentially be re-used by adding an MEI CL bus layer to 17 - the existing code. 18 - 19 - 20 - MEI CL bus API 21 - ============== 22 - 23 - A driver implementation for an MEI Client is very similar to existing bus 24 - based device drivers. The driver registers itself as an MEI CL bus driver through 25 - the mei_cl_driver structure: 26 - 27 - struct mei_cl_driver { 28 - struct device_driver driver; 29 - const char *name; 30 - 31 - const struct mei_cl_device_id *id_table; 32 - 33 - int (*probe)(struct mei_cl_device *dev, const struct mei_cl_id *id); 34 - int (*remove)(struct mei_cl_device *dev); 35 - }; 36 - 37 - struct mei_cl_id { 38 - char name[MEI_NAME_SIZE]; 39 - kernel_ulong_t driver_info; 40 - }; 41 - 42 - The mei_cl_id structure allows the driver to bind itself against a device name. 43 - 44 - To actually register a driver on the ME Client bus one must call the mei_cl_add_driver() 45 - API. This is typically called at module init time. 46 - 47 - Once registered on the ME Client bus, a driver will typically try to do some I/O on 48 - this bus and this should be done through the mei_cl_send() and mei_cl_recv() 49 - routines. 
The latter is synchronous (blocks and sleeps until data shows up). 50 - In order for drivers to be notified of pending events waiting for them (e.g. 51 - an Rx event) they can register an event handler through the 52 - mei_cl_register_event_cb() routine. Currently only the MEI_EVENT_RX event 53 - will trigger an event handler call and the driver implementation is supposed 54 - to call mei_recv() from the event handler in order to fetch the pending 55 - received buffers. 56 - 57 - 58 - Example 59 - ======= 60 - 61 - As a theoretical example let's pretend the ME comes with a "contact" NFC IP. 62 - The driver init and exit routines for this device would look like: 63 - 64 - #define CONTACT_DRIVER_NAME "contact" 65 - 66 - static struct mei_cl_device_id contact_mei_cl_tbl[] = { 67 - { CONTACT_DRIVER_NAME, }, 68 - 69 - /* required last entry */ 70 - { } 71 - }; 72 - MODULE_DEVICE_TABLE(mei_cl, contact_mei_cl_tbl); 73 - 74 - static struct mei_cl_driver contact_driver = { 75 - .id_table = contact_mei_tbl, 76 - .name = CONTACT_DRIVER_NAME, 77 - 78 - .probe = contact_probe, 79 - .remove = contact_remove, 80 - }; 81 - 82 - static int contact_init(void) 83 - { 84 - int r; 85 - 86 - r = mei_cl_driver_register(&contact_driver); 87 - if (r) { 88 - pr_err(CONTACT_DRIVER_NAME ": driver registration failed\n"); 89 - return r; 90 - } 91 - 92 - return 0; 93 - } 94 - 95 - static void __exit contact_exit(void) 96 - { 97 - mei_cl_driver_unregister(&contact_driver); 98 - } 99 - 100 - module_init(contact_init); 101 - module_exit(contact_exit); 102 - 103 - And the driver's simplified probe routine would look like that: 104 - 105 - int contact_probe(struct mei_cl_device *dev, struct mei_cl_device_id *id) 106 - { 107 - struct contact_driver *contact; 108 - 109 - [...] 
110 - mei_cl_enable_device(dev); 111 - 112 - mei_cl_register_event_cb(dev, contact_event_cb, contact); 113 - 114 - return 0; 115 - } 116 - 117 - In the probe routine the driver first enable the MEI device and then registers 118 - an ME bus event handler which is as close as it can get to registering a 119 - threaded IRQ handler. 120 - The handler implementation will typically call some I/O routine depending on 121 - the pending events: 122 - 123 - #define MAX_NFC_PAYLOAD 128 124 - 125 - static void contact_event_cb(struct mei_cl_device *dev, u32 events, 126 - void *context) 127 - { 128 - struct contact_driver *contact = context; 129 - 130 - if (events & BIT(MEI_EVENT_RX)) { 131 - u8 payload[MAX_NFC_PAYLOAD]; 132 - int payload_size; 133 - 134 - payload_size = mei_recv(dev, payload, MAX_NFC_PAYLOAD); 135 - if (payload_size <= 0) 136 - return; 137 - 138 - /* Hook to the NFC subsystem */ 139 - nfc_hci_recv_frame(contact->hdev, payload, payload_size); 140 - } 141 - }
-266
Documentation/misc-devices/mei/mei.txt
··· 1 - Intel(R) Management Engine Interface (Intel(R) MEI) 2 - =================================================== 3 - 4 - Introduction 5 - ============ 6 - 7 - The Intel Management Engine (Intel ME) is an isolated and protected computing 8 - resource (Co-processor) residing inside certain Intel chipsets. The Intel ME 9 - provides support for computer/IT management features. The feature set 10 - depends on the Intel chipset SKU. 11 - 12 - The Intel Management Engine Interface (Intel MEI, previously known as HECI) 13 - is the interface between the Host and Intel ME. This interface is exposed 14 - to the host as a PCI device. The Intel MEI Driver is in charge of the 15 - communication channel between a host application and the Intel ME feature. 16 - 17 - Each Intel ME feature (Intel ME Client) is addressed by a GUID/UUID and 18 - each client has its own protocol. The protocol is message-based with a 19 - header and payload up to 512 bytes. 20 - 21 - Prominent usage of the Intel ME Interface is to communicate with Intel(R) 22 - Active Management Technology (Intel AMT) implemented in firmware running on 23 - the Intel ME. 24 - 25 - Intel AMT provides the ability to manage a host remotely out-of-band (OOB) 26 - even when the operating system running on the host processor has crashed or 27 - is in a sleep state. 
28 - 29 - Some examples of Intel AMT usage are: 30 - - Monitoring hardware state and platform components 31 - - Remote power off/on (useful for green computing or overnight IT 32 - maintenance) 33 - - OS updates 34 - - Storage of useful platform information such as software assets 35 - - Built-in hardware KVM 36 - - Selective network isolation of Ethernet and IP protocol flows based 37 - on policies set by a remote management console 38 - - IDE device redirection from remote management console 39 - 40 - Intel AMT (OOB) communication is based on SOAP (deprecated 41 - starting with Release 6.0) over HTTP/S or WS-Management protocol over 42 - HTTP/S that are received from a remote management console application. 43 - 44 - For more information about Intel AMT: 45 - http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide 46 - 47 - 48 - Intel MEI Driver 49 - ================ 50 - 51 - The driver exposes a misc device called /dev/mei. 52 - 53 - An application maintains communication with an Intel ME feature while 54 - /dev/mei is open. The binding to a specific feature is performed by calling 55 - MEI_CONNECT_CLIENT_IOCTL, which passes the desired UUID. 56 - The number of instances of an Intel ME feature that can be opened 57 - at the same time depends on the Intel ME feature, but most of the 58 - features allow only a single instance. 59 - 60 - The Intel AMT Host Interface (Intel AMTHI) feature supports multiple 61 - simultaneous user connected applications. The Intel MEI driver 62 - handles this internally by maintaining request queues for the applications. 63 - 64 - The driver is transparent to data that are passed between firmware feature 65 - and host application. 66 - 67 - Because some of the Intel ME features can change the system 68 - configuration, the driver by default allows only a privileged 69 - user to access it. 
70 - 71 - A code snippet for an application communicating with Intel AMTHI client: 72 - 73 - struct mei_connect_client_data data; 74 - fd = open(MEI_DEVICE); 75 - 76 - data.d.in_client_uuid = AMTHI_UUID; 77 - 78 - ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &data); 79 - 80 - printf("Ver=%d, MaxLen=%ld\n", 81 - data.d.in_client_uuid.protocol_version, 82 - data.d.in_client_uuid.max_msg_length); 83 - 84 - [...] 85 - 86 - write(fd, amthi_req_data, amthi_req_data_len); 87 - 88 - [...] 89 - 90 - read(fd, &amthi_res_data, amthi_res_data_len); 91 - 92 - [...] 93 - close(fd); 94 - 95 - 96 - IOCTL 97 - ===== 98 - 99 - The Intel MEI Driver supports the following IOCTL commands: 100 - IOCTL_MEI_CONNECT_CLIENT Connect to firmware Feature (client). 101 - 102 - usage: 103 - struct mei_connect_client_data clientData; 104 - ioctl(fd, IOCTL_MEI_CONNECT_CLIENT, &clientData); 105 - 106 - inputs: 107 - mei_connect_client_data struct contain the following 108 - input field: 109 - 110 - in_client_uuid - UUID of the FW Feature that needs 111 - to connect to. 112 - outputs: 113 - out_client_properties - Client Properties: MTU and Protocol Version. 114 - 115 - error returns: 116 - EINVAL Wrong IOCTL Number 117 - ENODEV Device or Connection is not initialized or ready. 118 - (e.g. Wrong UUID) 119 - ENOMEM Unable to allocate memory to client internal data. 120 - EFAULT Fatal Error (e.g. Unable to access user input data) 121 - EBUSY Connection Already Open 122 - 123 - Notes: 124 - max_msg_length (MTU) in client properties describes the maximum 125 - data that can be sent or received. (e.g. if MTU=2K, can send 126 - requests up to bytes 2k and received responses up to 2k bytes). 
127 - 128 - IOCTL_MEI_NOTIFY_SET: enable or disable event notifications 129 - 130 - Usage: 131 - uint32_t enable; 132 - ioctl(fd, IOCTL_MEI_NOTIFY_SET, &enable); 133 - 134 - Inputs: 135 - uint32_t enable = 1; 136 - or 137 - uint32_t enable[disable] = 0; 138 - 139 - Error returns: 140 - EINVAL Wrong IOCTL Number 141 - ENODEV Device is not initialized or the client not connected 142 - ENOMEM Unable to allocate memory to client internal data. 143 - EFAULT Fatal Error (e.g. Unable to access user input data) 144 - EOPNOTSUPP if the device doesn't support the feature 145 - 146 - Notes: 147 - The client must be connected in order to enable notification events 148 - 149 - 150 - IOCTL_MEI_NOTIFY_GET : retrieve event 151 - 152 - Usage: 153 - uint32_t event; 154 - ioctl(fd, IOCTL_MEI_NOTIFY_GET, &event); 155 - 156 - Outputs: 157 - 1 - if an event is pending 158 - 0 - if there is no even pending 159 - 160 - Error returns: 161 - EINVAL Wrong IOCTL Number 162 - ENODEV Device is not initialized or the client not connected 163 - ENOMEM Unable to allocate memory to client internal data. 164 - EFAULT Fatal Error (e.g. Unable to access user input data) 165 - EOPNOTSUPP if the device doesn't support the feature 166 - 167 - Notes: 168 - The client must be connected and event notification has to be enabled 169 - in order to receive an event 170 - 171 - 172 - Intel ME Applications 173 - ===================== 174 - 175 - 1) Intel Local Management Service (Intel LMS) 176 - 177 - Applications running locally on the platform communicate with Intel AMT Release 178 - 2.0 and later releases in the same way that network applications do via SOAP 179 - over HTTP (deprecated starting with Release 6.0) or with WS-Management over 180 - SOAP over HTTP. This means that some Intel AMT features can be accessed from a 181 - local application using the same network interface as a remote application 182 - communicating with Intel AMT over the network. 
183 - 184 - When a local application sends a message addressed to the local Intel AMT host 185 - name, the Intel LMS, which listens for traffic directed to the host name, 186 - intercepts the message and routes it to the Intel MEI. 187 - For more information: 188 - http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide 189 - Under "About Intel AMT" => "Local Access" 190 - 191 - For downloading Intel LMS: 192 - http://software.intel.com/en-us/articles/download-the-latest-intel-amt-open-source-drivers/ 193 - 194 - The Intel LMS opens a connection using the Intel MEI driver to the Intel LMS 195 - firmware feature using a defined UUID and then communicates with the feature 196 - using a protocol called Intel AMT Port Forwarding Protocol (Intel APF protocol). 197 - The protocol is used to maintain multiple sessions with Intel AMT from a 198 - single application. 199 - 200 - See the protocol specification in the Intel AMT Software Development Kit (SDK) 201 - http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide 202 - Under "SDK Resources" => "Intel(R) vPro(TM) Gateway (MPS)" 203 - => "Information for Intel(R) vPro(TM) Gateway Developers" 204 - => "Description of the Intel AMT Port Forwarding (APF) Protocol" 205 - 206 - 2) Intel AMT Remote configuration using a Local Agent 207 - 208 - A Local Agent enables IT personnel to configure Intel AMT out-of-the-box 209 - without requiring installing additional data to enable setup. The remote 210 - configuration process may involve an ISV-developed remote configuration 211 - agent that runs on the host. 
212 - For more information: 213 - http://software.intel.com/sites/manageability/AMT_Implementation_and_Reference_Guide 214 - Under "Setup and Configuration of Intel AMT" => 215 - "SDK Tools Supporting Setup and Configuration" => 216 - "Using the Local Agent Sample" 217 - 218 - An open source Intel AMT configuration utility, implementing a local agent 219 - that accesses the Intel MEI driver, can be found here: 220 - http://software.intel.com/en-us/articles/download-the-latest-intel-amt-open-source-drivers/ 221 - 222 - 223 - Intel AMT OS Health Watchdog 224 - ============================ 225 - 226 - The Intel AMT Watchdog is an OS Health (Hang/Crash) watchdog. 227 - Whenever the OS hangs or crashes, Intel AMT will send an event 228 - to any subscriber to this event. This mechanism means that 229 - IT knows when a platform crashes even when there is a hard failure on the host. 230 - 231 - The Intel AMT Watchdog is composed of two parts: 232 - 1) Firmware feature - receives the heartbeats 233 - and sends an event when the heartbeats stop. 234 - 2) Intel MEI iAMT watchdog driver - connects to the watchdog feature, 235 - configures the watchdog and sends the heartbeats. 236 - 237 - The Intel iAMT watchdog MEI driver uses the kernel watchdog API to configure 238 - the Intel AMT Watchdog and to send heartbeats to it. The default timeout of the 239 - watchdog is 120 seconds. 240 - 241 - If the Intel AMT is not enabled in the firmware then the watchdog client won't enumerate 242 - on the me client bus and watchdog devices won't be exposed. 
243 - 244 - 245 - Supported Chipsets 246 - ================== 247 - 248 - 7 Series Chipset Family 249 - 6 Series Chipset Family 250 - 5 Series Chipset Family 251 - 4 Series Chipset Family 252 - Mobile 4 Series Chipset Family 253 - ICH9 254 - 82946GZ/GL 255 - 82G35 Express 256 - 82Q963/Q965 257 - 82P965/G965 258 - Mobile PM965/GM965 259 - Mobile GME965/GLE960 260 - 82Q35 Express 261 - 82G33/G31/P35/P31 Express 262 - 82Q33 Express 263 - 82X38/X48 Express 264 - 265 - --- 266 - linux-mei@linux.intel.com
+16 -3
MAINTAINERS
··· 6537 6537 F: include/linux/fscrypt*.h 6538 6538 F: Documentation/filesystems/fscrypt.rst 6539 6539 6540 + FSI SUBSYSTEM 6541 + M: Jeremy Kerr <jk@ozlabs.org> 6542 + M: Joel Stanley <joel@jms.id.au> 6543 + R: Alistar Popple <alistair@popple.id.au> 6544 + R: Eddie James <eajames@linux.ibm.com> 6545 + L: linux-fsi@lists.ozlabs.org 6546 + T: git git://git.kernel.org/pub/scm/linux/kernel/git/joel/fsi.git 6547 + Q: http://patchwork.ozlabs.org/project/linux-fsi/list/ 6548 + S: Supported 6549 + F: drivers/fsi/ 6550 + F: include/linux/fsi*.h 6551 + F: include/trace/events/fsi*.h 6552 + 6540 6553 FSI-ATTACHED I2C DRIVER 6541 6554 M: Eddie James <eajames@linux.ibm.com> 6542 6555 L: linux-i2c@vger.kernel.org ··· 8116 8103 F: include/linux/mei_cl_bus.h 8117 8104 F: drivers/misc/mei/* 8118 8105 F: drivers/watchdog/mei_wdt.c 8119 - F: Documentation/misc-devices/mei/* 8106 + F: Documentation/driver-api/mei/* 8120 8107 F: samples/mei/* 8121 8108 8122 8109 INTEL MENLOW THERMAL DRIVER ··· 8949 8936 LEGACY EEPROM DRIVER 8950 8937 M: Jean Delvare <jdelvare@suse.com> 8951 8938 S: Maintained 8952 - F: Documentation/misc-devices/eeprom 8939 + F: Documentation/misc-devices/eeprom.rst 8953 8940 F: drivers/misc/eeprom/eeprom.c 8954 8941 8955 8942 LEGO MINDSTORMS EV3 ··· 9235 9222 LIS3LV02D ACCELEROMETER DRIVER 9236 9223 M: Eric Piel <eric.piel@tremplin-utc.net> 9237 9224 S: Maintained 9238 - F: Documentation/misc-devices/lis3lv02d 9225 + F: Documentation/misc-devices/lis3lv02d.rst 9239 9226 F: drivers/misc/lis3lv02d/ 9240 9227 F: drivers/platform/x86/hp_accel.c 9241 9228
+5
arch/powerpc/mm/book3s64/radix_tlb.c
··· 666 666 #define radix__flush_all_mm radix__local_flush_all_mm 667 667 #endif /* CONFIG_SMP */ 668 668 669 + /* 670 + * If kernel TLBIs ever become local rather than global, then 671 + * drivers/misc/ocxl/link.c:ocxl_link_add_pe will need some work, as it 672 + * assumes kernel TLBIs are global. 673 + */ 669 674 void radix__flush_tlb_kernel_range(unsigned long start, unsigned long end) 670 675 { 671 676 _tlbie_pid(0, RIC_FLUSH_ALL);
+9
drivers/acpi/acpi_amba.c
··· 21 21 22 22 static const struct acpi_device_id amba_id_list[] = { 23 23 {"ARMH0061", 0}, /* PL061 GPIO Device */ 24 + {"ARMHC500", 0}, /* ARM CoreSight ETM4x */ 25 + {"ARMHC501", 0}, /* ARM CoreSight ETR */ 26 + {"ARMHC502", 0}, /* ARM CoreSight STM */ 27 + {"ARMHC503", 0}, /* ARM CoreSight Debug */ 28 + {"ARMHC979", 0}, /* ARM CoreSight TPIU */ 29 + {"ARMHC97C", 0}, /* ARM CoreSight SoC-400 TMC, SoC-600 ETF/ETB */ 30 + {"ARMHC98D", 0}, /* ARM CoreSight Dynamic Replicator */ 31 + {"ARMHC9CA", 0}, /* ARM CoreSight CATU */ 32 + {"ARMHC9FF", 0}, /* ARM CoreSight Dynamic Funnel */ 24 33 {"", 0}, 25 34 }; 26 35
+93 -62
drivers/android/binder.c
··· 2059 2059 2060 2060 read_size = min_t(size_t, sizeof(*object), buffer->data_size - offset); 2061 2061 if (offset > buffer->data_size || read_size < sizeof(*hdr) || 2062 - !IS_ALIGNED(offset, sizeof(u32))) 2062 + binder_alloc_copy_from_buffer(&proc->alloc, object, buffer, 2063 + offset, read_size)) 2063 2064 return 0; 2064 - binder_alloc_copy_from_buffer(&proc->alloc, object, buffer, 2065 - offset, read_size); 2066 2065 2067 2066 /* Ok, now see if we read a complete object. */ 2068 2067 hdr = &object->hdr; ··· 2130 2131 return NULL; 2131 2132 2132 2133 buffer_offset = start_offset + sizeof(binder_size_t) * index; 2133 - binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, 2134 - b, buffer_offset, sizeof(object_offset)); 2134 + if (binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, 2135 + b, buffer_offset, 2136 + sizeof(object_offset))) 2137 + return NULL; 2135 2138 object_size = binder_get_object(proc, b, object_offset, object); 2136 2139 if (!object_size || object->hdr.type != BINDER_TYPE_PTR) 2137 2140 return NULL; ··· 2213 2212 return false; 2214 2213 last_min_offset = last_bbo->parent_offset + sizeof(uintptr_t); 2215 2214 buffer_offset = objects_start_offset + 2216 - sizeof(binder_size_t) * last_bbo->parent, 2217 - binder_alloc_copy_from_buffer(&proc->alloc, &last_obj_offset, 2218 - b, buffer_offset, 2219 - sizeof(last_obj_offset)); 2215 + sizeof(binder_size_t) * last_bbo->parent; 2216 + if (binder_alloc_copy_from_buffer(&proc->alloc, 2217 + &last_obj_offset, 2218 + b, buffer_offset, 2219 + sizeof(last_obj_offset))) 2220 + return false; 2220 2221 } 2221 2222 return (fixup_offset >= last_min_offset); 2222 2223 } ··· 2304 2301 for (buffer_offset = off_start_offset; buffer_offset < off_end_offset; 2305 2302 buffer_offset += sizeof(binder_size_t)) { 2306 2303 struct binder_object_header *hdr; 2307 - size_t object_size; 2304 + size_t object_size = 0; 2308 2305 struct binder_object object; 2309 2306 binder_size_t object_offset; 2310 2307 2311 - 
binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, 2312 - buffer, buffer_offset, 2313 - sizeof(object_offset)); 2314 - object_size = binder_get_object(proc, buffer, 2315 - object_offset, &object); 2308 + if (!binder_alloc_copy_from_buffer(&proc->alloc, &object_offset, 2309 + buffer, buffer_offset, 2310 + sizeof(object_offset))) 2311 + object_size = binder_get_object(proc, buffer, 2312 + object_offset, &object); 2316 2313 if (object_size == 0) { 2317 2314 pr_err("transaction release %d bad object at offset %lld, size %zd\n", 2318 2315 debug_id, (u64)object_offset, buffer->data_size); ··· 2435 2432 for (fd_index = 0; fd_index < fda->num_fds; 2436 2433 fd_index++) { 2437 2434 u32 fd; 2435 + int err; 2438 2436 binder_size_t offset = fda_offset + 2439 2437 fd_index * sizeof(fd); 2440 2438 2441 - binder_alloc_copy_from_buffer(&proc->alloc, 2442 - &fd, 2443 - buffer, 2444 - offset, 2445 - sizeof(fd)); 2446 - binder_deferred_fd_close(fd); 2439 + err = binder_alloc_copy_from_buffer( 2440 + &proc->alloc, &fd, buffer, 2441 + offset, sizeof(fd)); 2442 + WARN_ON(err); 2443 + if (!err) 2444 + binder_deferred_fd_close(fd); 2447 2445 } 2448 2446 } break; 2449 2447 default: ··· 2687 2683 int ret; 2688 2684 binder_size_t offset = fda_offset + fdi * sizeof(fd); 2689 2685 2690 - binder_alloc_copy_from_buffer(&target_proc->alloc, 2691 - &fd, t->buffer, 2692 - offset, sizeof(fd)); 2693 - ret = binder_translate_fd(fd, offset, t, thread, 2694 - in_reply_to); 2686 + ret = binder_alloc_copy_from_buffer(&target_proc->alloc, 2687 + &fd, t->buffer, 2688 + offset, sizeof(fd)); 2689 + if (!ret) 2690 + ret = binder_translate_fd(fd, offset, t, thread, 2691 + in_reply_to); 2695 2692 if (ret < 0) 2696 2693 return ret; 2697 2694 } ··· 2745 2740 } 2746 2741 buffer_offset = bp->parent_offset + 2747 2742 (uintptr_t)parent->buffer - (uintptr_t)b->user_data; 2748 - binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset, 2749 - &bp->buffer, sizeof(bp->buffer)); 2743 + if 
(binder_alloc_copy_to_buffer(&target_proc->alloc, b, buffer_offset, 2744 + &bp->buffer, sizeof(bp->buffer))) { 2745 + binder_user_error("%d:%d got transaction with invalid parent offset\n", 2746 + proc->pid, thread->pid); 2747 + return -EINVAL; 2748 + } 2750 2749 2751 2750 return 0; 2752 2751 } ··· 3169 3160 goto err_binder_alloc_buf_failed; 3170 3161 } 3171 3162 if (secctx) { 3163 + int err; 3172 3164 size_t buf_offset = ALIGN(tr->data_size, sizeof(void *)) + 3173 3165 ALIGN(tr->offsets_size, sizeof(void *)) + 3174 3166 ALIGN(extra_buffers_size, sizeof(void *)) - 3175 3167 ALIGN(secctx_sz, sizeof(u64)); 3176 3168 3177 3169 t->security_ctx = (uintptr_t)t->buffer->user_data + buf_offset; 3178 - binder_alloc_copy_to_buffer(&target_proc->alloc, 3179 - t->buffer, buf_offset, 3180 - secctx, secctx_sz); 3170 + err = binder_alloc_copy_to_buffer(&target_proc->alloc, 3171 + t->buffer, buf_offset, 3172 + secctx, secctx_sz); 3173 + if (err) { 3174 + t->security_ctx = 0; 3175 + WARN_ON(1); 3176 + } 3181 3177 security_release_secctx(secctx, secctx_sz); 3182 3178 secctx = NULL; 3183 3179 } ··· 3248 3234 struct binder_object object; 3249 3235 binder_size_t object_offset; 3250 3236 3251 - binder_alloc_copy_from_buffer(&target_proc->alloc, 3252 - &object_offset, 3253 - t->buffer, 3254 - buffer_offset, 3255 - sizeof(object_offset)); 3237 + if (binder_alloc_copy_from_buffer(&target_proc->alloc, 3238 + &object_offset, 3239 + t->buffer, 3240 + buffer_offset, 3241 + sizeof(object_offset))) { 3242 + return_error = BR_FAILED_REPLY; 3243 + return_error_param = -EINVAL; 3244 + return_error_line = __LINE__; 3245 + goto err_bad_offset; 3246 + } 3256 3247 object_size = binder_get_object(target_proc, t->buffer, 3257 3248 object_offset, &object); 3258 3249 if (object_size == 0 || object_offset < off_min) { ··· 3281 3262 3282 3263 fp = to_flat_binder_object(hdr); 3283 3264 ret = binder_translate_binder(fp, t, thread); 3284 - if (ret < 0) { 3265 + 3266 + if (ret < 0 || 3267 + 
binder_alloc_copy_to_buffer(&target_proc->alloc, 3268 + t->buffer, 3269 + object_offset, 3270 + fp, sizeof(*fp))) { 3285 3271 return_error = BR_FAILED_REPLY; 3286 3272 return_error_param = ret; 3287 3273 return_error_line = __LINE__; 3288 3274 goto err_translate_failed; 3289 3275 } 3290 - binder_alloc_copy_to_buffer(&target_proc->alloc, 3291 - t->buffer, object_offset, 3292 - fp, sizeof(*fp)); 3293 3276 } break; 3294 3277 case BINDER_TYPE_HANDLE: 3295 3278 case BINDER_TYPE_WEAK_HANDLE: { ··· 3299 3278 3300 3279 fp = to_flat_binder_object(hdr); 3301 3280 ret = binder_translate_handle(fp, t, thread); 3302 - if (ret < 0) { 3281 + if (ret < 0 || 3282 + binder_alloc_copy_to_buffer(&target_proc->alloc, 3283 + t->buffer, 3284 + object_offset, 3285 + fp, sizeof(*fp))) { 3303 3286 return_error = BR_FAILED_REPLY; 3304 3287 return_error_param = ret; 3305 3288 return_error_line = __LINE__; 3306 3289 goto err_translate_failed; 3307 3290 } 3308 - binder_alloc_copy_to_buffer(&target_proc->alloc, 3309 - t->buffer, object_offset, 3310 - fp, sizeof(*fp)); 3311 3291 } break; 3312 3292 3313 3293 case BINDER_TYPE_FD: { ··· 3318 3296 int ret = binder_translate_fd(fp->fd, fd_offset, t, 3319 3297 thread, in_reply_to); 3320 3298 3321 - if (ret < 0) { 3299 + fp->pad_binder = 0; 3300 + if (ret < 0 || 3301 + binder_alloc_copy_to_buffer(&target_proc->alloc, 3302 + t->buffer, 3303 + object_offset, 3304 + fp, sizeof(*fp))) { 3322 3305 return_error = BR_FAILED_REPLY; 3323 3306 return_error_param = ret; 3324 3307 return_error_line = __LINE__; 3325 3308 goto err_translate_failed; 3326 3309 } 3327 - fp->pad_binder = 0; 3328 - binder_alloc_copy_to_buffer(&target_proc->alloc, 3329 - t->buffer, object_offset, 3330 - fp, sizeof(*fp)); 3331 3310 } break; 3332 3311 case BINDER_TYPE_FDA: { 3333 3312 struct binder_object ptr_object; ··· 3416 3393 num_valid, 3417 3394 last_fixup_obj_off, 3418 3395 last_fixup_min_off); 3419 - if (ret < 0) { 3396 + if (ret < 0 || 3397 + 
binder_alloc_copy_to_buffer(&target_proc->alloc, 3398 + t->buffer, 3399 + object_offset, 3400 + bp, sizeof(*bp))) { 3420 3401 return_error = BR_FAILED_REPLY; 3421 3402 return_error_param = ret; 3422 3403 return_error_line = __LINE__; 3423 3404 goto err_translate_failed; 3424 3405 } 3425 - binder_alloc_copy_to_buffer(&target_proc->alloc, 3426 - t->buffer, object_offset, 3427 - bp, sizeof(*bp)); 3428 3406 last_fixup_obj_off = object_offset; 3429 3407 last_fixup_min_off = 0; 3430 3408 } break; ··· 4164 4140 trace_binder_transaction_fd_recv(t, fd, fixup->offset); 4165 4141 fd_install(fd, fixup->file); 4166 4142 fixup->file = NULL; 4167 - binder_alloc_copy_to_buffer(&proc->alloc, t->buffer, 4168 - fixup->offset, &fd, 4169 - sizeof(u32)); 4143 + if (binder_alloc_copy_to_buffer(&proc->alloc, t->buffer, 4144 + fixup->offset, &fd, 4145 + sizeof(u32))) { 4146 + ret = -EINVAL; 4147 + break; 4148 + } 4170 4149 } 4171 4150 list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) { 4172 4151 if (fixup->file) { 4173 4152 fput(fixup->file); 4174 4153 } else if (ret) { 4175 4154 u32 fd; 4155 + int err; 4176 4156 4177 - binder_alloc_copy_from_buffer(&proc->alloc, &fd, 4178 - t->buffer, fixup->offset, 4179 - sizeof(fd)); 4180 - binder_deferred_fd_close(fd); 4157 + err = binder_alloc_copy_from_buffer(&proc->alloc, &fd, 4158 + t->buffer, 4159 + fixup->offset, 4160 + sizeof(fd)); 4161 + WARN_ON(err); 4162 + if (!err) 4163 + binder_deferred_fd_close(fd); 4181 4164 } 4182 4165 list_del(&fixup->fixup_entry); 4183 4166 kfree(fixup); ··· 4299 4268 case BINDER_WORK_TRANSACTION_COMPLETE: { 4300 4269 binder_inner_proc_unlock(proc); 4301 4270 cmd = BR_TRANSACTION_COMPLETE; 4271 + kfree(w); 4272 + binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); 4302 4273 if (put_user(cmd, (uint32_t __user *)ptr)) 4303 4274 return -EFAULT; 4304 4275 ptr += sizeof(uint32_t); ··· 4309 4276 binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE, 4310 4277 "%d:%d BR_TRANSACTION_COMPLETE\n", 4311 4278 
proc->pid, thread->pid); 4312 - kfree(w); 4313 - binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE); 4314 4279 } break; 4315 4280 case BINDER_WORK_NODE: { 4316 4281 struct binder_node *node = container_of(w, struct binder_node, work);
+23 -21
drivers/android/binder_alloc.c
··· 1119 1119 return 0; 1120 1120 } 1121 1121 1122 - static void binder_alloc_do_buffer_copy(struct binder_alloc *alloc, 1123 - bool to_buffer, 1124 - struct binder_buffer *buffer, 1125 - binder_size_t buffer_offset, 1126 - void *ptr, 1127 - size_t bytes) 1122 + static int binder_alloc_do_buffer_copy(struct binder_alloc *alloc, 1123 + bool to_buffer, 1124 + struct binder_buffer *buffer, 1125 + binder_size_t buffer_offset, 1126 + void *ptr, 1127 + size_t bytes) 1128 1128 { 1129 1129 /* All copies must be 32-bit aligned and 32-bit size */ 1130 - BUG_ON(!check_buffer(alloc, buffer, buffer_offset, bytes)); 1130 + if (!check_buffer(alloc, buffer, buffer_offset, bytes)) 1131 + return -EINVAL; 1131 1132 1132 1133 while (bytes) { 1133 1134 unsigned long size; ··· 1156 1155 ptr = ptr + size; 1157 1156 buffer_offset += size; 1158 1157 } 1158 + return 0; 1159 1159 } 1160 1160 1161 - void binder_alloc_copy_to_buffer(struct binder_alloc *alloc, 1162 - struct binder_buffer *buffer, 1163 - binder_size_t buffer_offset, 1164 - void *src, 1165 - size_t bytes) 1161 + int binder_alloc_copy_to_buffer(struct binder_alloc *alloc, 1162 + struct binder_buffer *buffer, 1163 + binder_size_t buffer_offset, 1164 + void *src, 1165 + size_t bytes) 1166 1166 { 1167 - binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset, 1168 - src, bytes); 1167 + return binder_alloc_do_buffer_copy(alloc, true, buffer, buffer_offset, 1168 + src, bytes); 1169 1169 } 1170 1170 1171 - void binder_alloc_copy_from_buffer(struct binder_alloc *alloc, 1172 - void *dest, 1173 - struct binder_buffer *buffer, 1174 - binder_size_t buffer_offset, 1175 - size_t bytes) 1171 + int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, 1172 + void *dest, 1173 + struct binder_buffer *buffer, 1174 + binder_size_t buffer_offset, 1175 + size_t bytes) 1176 1176 { 1177 - binder_alloc_do_buffer_copy(alloc, false, buffer, buffer_offset, 1178 - dest, bytes); 1177 + return binder_alloc_do_buffer_copy(alloc, false, buffer, 
buffer_offset, 1178 + dest, bytes); 1179 1179 } 1180 1180
+10 -10
drivers/android/binder_alloc.h
··· 159 159 const void __user *from, 160 160 size_t bytes); 161 161 162 - void binder_alloc_copy_to_buffer(struct binder_alloc *alloc, 163 - struct binder_buffer *buffer, 164 - binder_size_t buffer_offset, 165 - void *src, 166 - size_t bytes); 162 + int binder_alloc_copy_to_buffer(struct binder_alloc *alloc, 163 + struct binder_buffer *buffer, 164 + binder_size_t buffer_offset, 165 + void *src, 166 + size_t bytes); 167 167 168 - void binder_alloc_copy_from_buffer(struct binder_alloc *alloc, 169 - void *dest, 170 - struct binder_buffer *buffer, 171 - binder_size_t buffer_offset, 172 - size_t bytes); 168 + int binder_alloc_copy_from_buffer(struct binder_alloc *alloc, 169 + void *dest, 170 + struct binder_buffer *buffer, 171 + binder_size_t buffer_offset, 172 + size_t bytes); 173 173 174 174 #endif /* _LINUX_BINDER_ALLOC_H */ 175 175
+3 -2
drivers/char/bsr.c
··· 134 134 return 0; 135 135 } 136 136 137 - static int bsr_open(struct inode * inode, struct file * filp) 137 + static int bsr_open(struct inode *inode, struct file *filp) 138 138 { 139 139 struct cdev *cdev = inode->i_cdev; 140 140 struct bsr_dev *dev = container_of(cdev, struct bsr_dev, bsr_cdev); ··· 309 309 goto out_err_2; 310 310 } 311 311 312 - if ((ret = bsr_create_devs(np)) < 0) { 312 + ret = bsr_create_devs(np); 313 + if (ret < 0) { 313 314 np = NULL; 314 315 goto out_err_3; 315 316 }
+1 -2
drivers/char/misc.c
··· 226 226 mutex_unlock(&misc_mtx); 227 227 return err; 228 228 } 229 + EXPORT_SYMBOL(misc_register); 229 230 230 231 /** 231 232 * misc_deregister - unregister a miscellaneous device ··· 250 249 clear_bit(i, misc_minors); 251 250 mutex_unlock(&misc_mtx); 252 251 } 253 - 254 - EXPORT_SYMBOL(misc_register); 255 252 EXPORT_SYMBOL(misc_deregister); 256 253 257 254 static char *misc_devnode(struct device *dev, umode_t *mode)
+1 -1
drivers/counter/104-quad-8.c
··· 833 833 return 0; 834 834 } 835 835 836 - const struct counter_ops quad8_ops = { 836 + static const struct counter_ops quad8_ops = { 837 837 .signal_read = quad8_signal_read, 838 838 .count_read = quad8_count_read, 839 839 .count_write = quad8_count_write,
+12
drivers/extcon/Kconfig
··· 37 37 Say Y here to enable support for USB peripheral detection 38 38 and USB MUX switching by X-Power AXP288 PMIC. 39 39 40 + config EXTCON_FSA9480 41 + tristate "FSA9480 EXTCON Support" 42 + depends on INPUT && I2C 43 + select IRQ_DOMAIN 44 + select REGMAP_I2C 45 + help 46 + If you say yes here you get support for the Fairchild Semiconductor 47 + FSA9480 microUSB switch and accessory detector chip. The FSA9480 is a USB 48 + port accessory detector and switch. The FSA9480 is fully controlled using 49 + I2C and enables USB data, stereo and mono audio, video, microphone 50 + and UART data to use a common connector port. 51 + 40 52 config EXTCON_GPIO 41 53 tristate "GPIO extcon support" 42 54 depends on GPIOLIB || COMPILE_TEST
+1
drivers/extcon/Makefile
··· 8 8 obj-$(CONFIG_EXTCON_ADC_JACK) += extcon-adc-jack.o 9 9 obj-$(CONFIG_EXTCON_ARIZONA) += extcon-arizona.o 10 10 obj-$(CONFIG_EXTCON_AXP288) += extcon-axp288.o 11 + obj-$(CONFIG_EXTCON_FSA9480) += extcon-fsa9480.o 11 12 obj-$(CONFIG_EXTCON_GPIO) += extcon-gpio.o 12 13 obj-$(CONFIG_EXTCON_INTEL_INT3496) += extcon-intel-int3496.o 13 14 obj-$(CONFIG_EXTCON_INTEL_CHT_WC) += extcon-intel-cht-wc.o
+20 -13
drivers/extcon/extcon-arizona.c
··· 326 326 327 327 arizona_extcon_pulse_micbias(info); 328 328 329 - regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1, 330 - ARIZONA_MICD_ENA, ARIZONA_MICD_ENA, 331 - &change); 332 - if (!change) { 329 + ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1, 330 + ARIZONA_MICD_ENA, ARIZONA_MICD_ENA, 331 + &change); 332 + if (ret < 0) { 333 + dev_err(arizona->dev, "Failed to enable micd: %d\n", ret); 334 + } else if (!change) { 333 335 regulator_disable(info->micvdd); 334 336 pm_runtime_put_autosuspend(info->dev); 335 337 } ··· 343 341 const char *widget = arizona_extcon_get_micbias(info); 344 342 struct snd_soc_dapm_context *dapm = arizona->dapm; 345 343 struct snd_soc_component *component = snd_soc_dapm_to_component(dapm); 346 - bool change; 344 + bool change = false; 347 345 int ret; 348 346 349 - regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1, 350 - ARIZONA_MICD_ENA, 0, 351 - &change); 347 + ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1, 348 + ARIZONA_MICD_ENA, 0, 349 + &change); 350 + if (ret < 0) 351 + dev_err(arizona->dev, "Failed to disable micd: %d\n", ret); 352 352 353 353 ret = snd_soc_component_disable_pin(component, widget); 354 354 if (ret != 0) ··· 1722 1718 struct arizona *arizona = info->arizona; 1723 1719 int jack_irq_rise, jack_irq_fall; 1724 1720 bool change; 1721 + int ret; 1725 1722 1726 - regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1, 1727 - ARIZONA_MICD_ENA, 0, 1728 - &change); 1729 - 1730 - if (change) { 1723 + ret = regmap_update_bits_check(arizona->regmap, ARIZONA_MIC_DETECT_1, 1724 + ARIZONA_MICD_ENA, 0, 1725 + &change); 1726 + if (ret < 0) { 1727 + dev_err(&pdev->dev, "Failed to disable micd on remove: %d\n", 1728 + ret); 1729 + } else if (change) { 1731 1730 regulator_disable(info->micvdd); 1732 1731 pm_runtime_put(info->dev); 1733 1732 }
+395
drivers/extcon/extcon-fsa9480.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * extcon-fsa9480.c - Fairchild Semiconductor FSA9480 extcon driver 4 + * 5 + * Copyright (c) 2019 Tomasz Figa <tomasz.figa@gmail.com> 6 + * 7 + * Loosely based on old fsa9480 misc-device driver. 8 + */ 9 + 10 + #include <linux/kernel.h> 11 + #include <linux/module.h> 12 + #include <linux/types.h> 13 + #include <linux/i2c.h> 14 + #include <linux/slab.h> 15 + #include <linux/bitops.h> 16 + #include <linux/interrupt.h> 17 + #include <linux/err.h> 18 + #include <linux/platform_device.h> 19 + #include <linux/kobject.h> 20 + #include <linux/extcon-provider.h> 21 + #include <linux/irqdomain.h> 22 + #include <linux/regmap.h> 23 + 24 + /* FSA9480 I2C registers */ 25 + #define FSA9480_REG_DEVID 0x01 26 + #define FSA9480_REG_CTRL 0x02 27 + #define FSA9480_REG_INT1 0x03 28 + #define FSA9480_REG_INT2 0x04 29 + #define FSA9480_REG_INT1_MASK 0x05 30 + #define FSA9480_REG_INT2_MASK 0x06 31 + #define FSA9480_REG_ADC 0x07 32 + #define FSA9480_REG_TIMING1 0x08 33 + #define FSA9480_REG_TIMING2 0x09 34 + #define FSA9480_REG_DEV_T1 0x0a 35 + #define FSA9480_REG_DEV_T2 0x0b 36 + #define FSA9480_REG_BTN1 0x0c 37 + #define FSA9480_REG_BTN2 0x0d 38 + #define FSA9480_REG_CK 0x0e 39 + #define FSA9480_REG_CK_INT1 0x0f 40 + #define FSA9480_REG_CK_INT2 0x10 41 + #define FSA9480_REG_CK_INTMASK1 0x11 42 + #define FSA9480_REG_CK_INTMASK2 0x12 43 + #define FSA9480_REG_MANSW1 0x13 44 + #define FSA9480_REG_MANSW2 0x14 45 + #define FSA9480_REG_END 0x15 46 + 47 + /* Control */ 48 + #define CON_SWITCH_OPEN (1 << 4) 49 + #define CON_RAW_DATA (1 << 3) 50 + #define CON_MANUAL_SW (1 << 2) 51 + #define CON_WAIT (1 << 1) 52 + #define CON_INT_MASK (1 << 0) 53 + #define CON_MASK (CON_SWITCH_OPEN | CON_RAW_DATA | \ 54 + CON_MANUAL_SW | CON_WAIT) 55 + 56 + /* Device Type 1 */ 57 + #define DEV_USB_OTG 7 58 + #define DEV_DEDICATED_CHG 6 59 + #define DEV_USB_CHG 5 60 + #define DEV_CAR_KIT 4 61 + #define DEV_UART 3 62 + #define DEV_USB 2 63 + #define DEV_AUDIO_2 
1 64 + #define DEV_AUDIO_1 0 65 + 66 + #define DEV_T1_USB_MASK (DEV_USB_OTG | DEV_USB) 67 + #define DEV_T1_UART_MASK (DEV_UART) 68 + #define DEV_T1_CHARGER_MASK (DEV_DEDICATED_CHG | DEV_USB_CHG) 69 + 70 + /* Device Type 2 */ 71 + #define DEV_AV 14 72 + #define DEV_TTY 13 73 + #define DEV_PPD 12 74 + #define DEV_JIG_UART_OFF 11 75 + #define DEV_JIG_UART_ON 10 76 + #define DEV_JIG_USB_OFF 9 77 + #define DEV_JIG_USB_ON 8 78 + 79 + #define DEV_T2_USB_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON) 80 + #define DEV_T2_UART_MASK (DEV_JIG_UART_OFF | DEV_JIG_UART_ON) 81 + #define DEV_T2_JIG_MASK (DEV_JIG_USB_OFF | DEV_JIG_USB_ON | \ 82 + DEV_JIG_UART_OFF | DEV_JIG_UART_ON) 83 + 84 + /* 85 + * Manual Switch 86 + * D- [7:5] / D+ [4:2] 87 + * 000: Open all / 001: USB / 010: AUDIO / 011: UART / 100: V_AUDIO 88 + */ 89 + #define SW_VAUDIO ((4 << 5) | (4 << 2)) 90 + #define SW_UART ((3 << 5) | (3 << 2)) 91 + #define SW_AUDIO ((2 << 5) | (2 << 2)) 92 + #define SW_DHOST ((1 << 5) | (1 << 2)) 93 + #define SW_AUTO ((0 << 5) | (0 << 2)) 94 + 95 + /* Interrupt 1 */ 96 + #define INT1_MASK (0xff << 0) 97 + #define INT_DETACH (1 << 1) 98 + #define INT_ATTACH (1 << 0) 99 + 100 + /* Interrupt 2 mask */ 101 + #define INT2_MASK (0x1f << 0) 102 + 103 + /* Timing Set 1 */ 104 + #define TIMING1_ADC_500MS (0x6 << 0) 105 + 106 + struct fsa9480_usbsw { 107 + struct device *dev; 108 + struct regmap *regmap; 109 + struct extcon_dev *edev; 110 + u16 cable; 111 + }; 112 + 113 + static const unsigned int fsa9480_extcon_cable[] = { 114 + EXTCON_USB_HOST, 115 + EXTCON_USB, 116 + EXTCON_CHG_USB_DCP, 117 + EXTCON_CHG_USB_SDP, 118 + EXTCON_CHG_USB_ACA, 119 + EXTCON_JACK_LINE_OUT, 120 + EXTCON_JACK_VIDEO_OUT, 121 + EXTCON_JIG, 122 + 123 + EXTCON_NONE, 124 + }; 125 + 126 + static const u64 cable_types[] = { 127 + [DEV_USB_OTG] = BIT_ULL(EXTCON_USB_HOST), 128 + [DEV_DEDICATED_CHG] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_DCP), 129 + [DEV_USB_CHG] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_SDP), 130 + 
[DEV_CAR_KIT] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_SDP) 131 + | BIT_ULL(EXTCON_JACK_LINE_OUT), 132 + [DEV_UART] = BIT_ULL(EXTCON_JIG), 133 + [DEV_USB] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_CHG_USB_SDP), 134 + [DEV_AUDIO_2] = BIT_ULL(EXTCON_JACK_LINE_OUT), 135 + [DEV_AUDIO_1] = BIT_ULL(EXTCON_JACK_LINE_OUT), 136 + [DEV_AV] = BIT_ULL(EXTCON_JACK_LINE_OUT) 137 + | BIT_ULL(EXTCON_JACK_VIDEO_OUT), 138 + [DEV_TTY] = BIT_ULL(EXTCON_JIG), 139 + [DEV_PPD] = BIT_ULL(EXTCON_JACK_LINE_OUT) | BIT_ULL(EXTCON_CHG_USB_ACA), 140 + [DEV_JIG_UART_OFF] = BIT_ULL(EXTCON_JIG), 141 + [DEV_JIG_UART_ON] = BIT_ULL(EXTCON_JIG), 142 + [DEV_JIG_USB_OFF] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_JIG), 143 + [DEV_JIG_USB_ON] = BIT_ULL(EXTCON_USB) | BIT_ULL(EXTCON_JIG), 144 + }; 145 + 146 + /* Define regmap configuration of FSA9480 for I2C communication */ 147 + static bool fsa9480_volatile_reg(struct device *dev, unsigned int reg) 148 + { 149 + switch (reg) { 150 + case FSA9480_REG_INT1_MASK: 151 + return true; 152 + default: 153 + break; 154 + } 155 + return false; 156 + } 157 + 158 + static const struct regmap_config fsa9480_regmap_config = { 159 + .reg_bits = 8, 160 + .val_bits = 8, 161 + .volatile_reg = fsa9480_volatile_reg, 162 + .max_register = FSA9480_REG_END, 163 + }; 164 + 165 + static int fsa9480_write_reg(struct fsa9480_usbsw *usbsw, int reg, int value) 166 + { 167 + int ret; 168 + 169 + ret = regmap_write(usbsw->regmap, reg, value); 170 + if (ret < 0) 171 + dev_err(usbsw->dev, "%s: err %d\n", __func__, ret); 172 + 173 + return ret; 174 + } 175 + 176 + static int fsa9480_read_reg(struct fsa9480_usbsw *usbsw, int reg) 177 + { 178 + int ret, val; 179 + 180 + ret = regmap_read(usbsw->regmap, reg, &val); 181 + if (ret < 0) { 182 + dev_err(usbsw->dev, "%s: err %d\n", __func__, ret); 183 + return ret; 184 + } 185 + 186 + return val; 187 + } 188 + 189 + static int fsa9480_read_irq(struct fsa9480_usbsw *usbsw, int *value) 190 + { 191 + u8 regs[2]; 192 + int ret; 193 + 194 + ret = 
regmap_bulk_read(usbsw->regmap, FSA9480_REG_INT1, regs, 2); 195 + if (ret < 0) 196 + dev_err(usbsw->dev, "%s: err %d\n", __func__, ret); 197 + 198 + *value = regs[1] << 8 | regs[0]; 199 + return ret; 200 + } 201 + 202 + static void fsa9480_handle_change(struct fsa9480_usbsw *usbsw, 203 + u16 mask, bool attached) 204 + { 205 + while (mask) { 206 + int dev = fls64(mask) - 1; 207 + u64 cables = cable_types[dev]; 208 + 209 + while (cables) { 210 + int cable = fls64(cables) - 1; 211 + 212 + extcon_set_state_sync(usbsw->edev, cable, attached); 213 + cables &= ~BIT_ULL(cable); 214 + } 215 + 216 + mask &= ~BIT_ULL(dev); 217 + } 218 + } 219 + 220 + static void fsa9480_detect_dev(struct fsa9480_usbsw *usbsw) 221 + { 222 + int val1, val2; 223 + u16 val; 224 + 225 + val1 = fsa9480_read_reg(usbsw, FSA9480_REG_DEV_T1); 226 + val2 = fsa9480_read_reg(usbsw, FSA9480_REG_DEV_T2); 227 + if (val1 < 0 || val2 < 0) { 228 + dev_err(usbsw->dev, "%s: failed to read registers", __func__); 229 + return; 230 + } 231 + val = val2 << 8 | val1; 232 + 233 + dev_info(usbsw->dev, "dev1: 0x%x, dev2: 0x%x\n", val1, val2); 234 + 235 + /* handle detached cables first */ 236 + fsa9480_handle_change(usbsw, usbsw->cable & ~val, false); 237 + 238 + /* then handle attached ones */ 239 + fsa9480_handle_change(usbsw, val & ~usbsw->cable, true); 240 + 241 + usbsw->cable = val; 242 + } 243 + 244 + static irqreturn_t fsa9480_irq_handler(int irq, void *data) 245 + { 246 + struct fsa9480_usbsw *usbsw = data; 247 + int intr = 0; 248 + 249 + /* clear interrupt */ 250 + fsa9480_read_irq(usbsw, &intr); 251 + if (!intr) 252 + return IRQ_NONE; 253 + 254 + /* device detection */ 255 + fsa9480_detect_dev(usbsw); 256 + 257 + return IRQ_HANDLED; 258 + } 259 + 260 + static int fsa9480_probe(struct i2c_client *client, 261 + const struct i2c_device_id *id) 262 + { 263 + struct fsa9480_usbsw *info; 264 + int ret; 265 + 266 + if (!client->irq) { 267 + dev_err(&client->dev, "no interrupt provided\n"); 268 + return -EINVAL; 269 + 
} 270 + 271 + info = devm_kzalloc(&client->dev, sizeof(*info), GFP_KERNEL); 272 + if (!info) 273 + return -ENOMEM; 274 + info->dev = &client->dev; 275 + 276 + i2c_set_clientdata(client, info); 277 + 278 + /* External connector */ 279 + info->edev = devm_extcon_dev_allocate(info->dev, 280 + fsa9480_extcon_cable); 281 + if (IS_ERR(info->edev)) { 282 + dev_err(info->dev, "failed to allocate memory for extcon\n"); 283 + ret = -ENOMEM; 284 + return ret; 285 + } 286 + 287 + ret = devm_extcon_dev_register(info->dev, info->edev); 288 + if (ret) { 289 + dev_err(info->dev, "failed to register extcon device\n"); 290 + return ret; 291 + } 292 + 293 + info->regmap = devm_regmap_init_i2c(client, &fsa9480_regmap_config); 294 + if (IS_ERR(info->regmap)) { 295 + ret = PTR_ERR(info->regmap); 296 + dev_err(info->dev, "failed to allocate register map: %d\n", 297 + ret); 298 + return ret; 299 + } 300 + 301 + /* ADC Detect Time: 500ms */ 302 + fsa9480_write_reg(info, FSA9480_REG_TIMING1, TIMING1_ADC_500MS); 303 + 304 + /* configure automatic switching */ 305 + fsa9480_write_reg(info, FSA9480_REG_CTRL, CON_MASK); 306 + 307 + /* unmask interrupt (attach/detach only) */ 308 + fsa9480_write_reg(info, FSA9480_REG_INT1_MASK, 309 + INT1_MASK & ~(INT_ATTACH | INT_DETACH)); 310 + fsa9480_write_reg(info, FSA9480_REG_INT2_MASK, INT2_MASK); 311 + 312 + ret = devm_request_threaded_irq(info->dev, client->irq, NULL, 313 + fsa9480_irq_handler, 314 + IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 315 + "fsa9480", info); 316 + if (ret) { 317 + dev_err(info->dev, "failed to request IRQ\n"); 318 + return ret; 319 + } 320 + 321 + device_init_wakeup(info->dev, true); 322 + fsa9480_detect_dev(info); 323 + 324 + return 0; 325 + } 326 + 327 + static int fsa9480_remove(struct i2c_client *client) 328 + { 329 + return 0; 330 + } 331 + 332 + #ifdef CONFIG_PM_SLEEP 333 + static int fsa9480_suspend(struct device *dev) 334 + { 335 + struct i2c_client *client = to_i2c_client(dev); 336 + 337 + if (device_may_wakeup(&client->dev) 
&& client->irq) 338 + enable_irq_wake(client->irq); 339 + 340 + return 0; 341 + } 342 + 343 + static int fsa9480_resume(struct device *dev) 344 + { 345 + struct i2c_client *client = to_i2c_client(dev); 346 + 347 + if (device_may_wakeup(&client->dev) && client->irq) 348 + disable_irq_wake(client->irq); 349 + 350 + return 0; 351 + } 352 + #endif 353 + 354 + static const struct dev_pm_ops fsa9480_pm_ops = { 355 + SET_SYSTEM_SLEEP_PM_OPS(fsa9480_suspend, fsa9480_resume) 356 + }; 357 + 358 + static const struct i2c_device_id fsa9480_id[] = { 359 + { "fsa9480", 0 }, 360 + {} 361 + }; 362 + MODULE_DEVICE_TABLE(i2c, fsa9480_id); 363 + 364 + static const struct of_device_id fsa9480_of_match[] = { 365 + { .compatible = "fcs,fsa9480", }, 366 + { }, 367 + }; 368 + MODULE_DEVICE_TABLE(of, fsa9480_of_match); 369 + 370 + static struct i2c_driver fsa9480_i2c_driver = { 371 + .driver = { 372 + .name = "fsa9480", 373 + .pm = &fsa9480_pm_ops, 374 + .of_match_table = fsa9480_of_match, 375 + }, 376 + .probe = fsa9480_probe, 377 + .remove = fsa9480_remove, 378 + .id_table = fsa9480_id, 379 + }; 380 + 381 + static int __init fsa9480_module_init(void) 382 + { 383 + return i2c_add_driver(&fsa9480_i2c_driver); 384 + } 385 + subsys_initcall(fsa9480_module_init); 386 + 387 + static void __exit fsa9480_module_exit(void) 388 + { 389 + i2c_del_driver(&fsa9480_i2c_driver); 390 + } 391 + module_exit(fsa9480_module_exit); 392 + 393 + MODULE_DESCRIPTION("Fairchild Semiconductor FSA9480 extcon driver"); 394 + MODULE_AUTHOR("Tomasz Figa <tomasz.figa@gmail.com>"); 395 + MODULE_LICENSE("GPL");
+10 -1
drivers/firmware/google/coreboot_table.h
···
 #ifndef __COREBOOT_TABLE_H
 #define __COREBOOT_TABLE_H

-#include <linux/io.h>
+#include <linux/device.h>

 /* Coreboot table header structure */
 struct coreboot_table_header {
···

 /* Unregister a driver that uses the data from a coreboot table. */
 void coreboot_driver_unregister(struct coreboot_driver *driver);
+
+/* module_coreboot_driver() - Helper macro for drivers that don't do
+ * anything special in module init/exit. This eliminates a lot of
+ * boilerplate. Each module may only use this macro once, and
+ * calling it replaces module_init() and module_exit()
+ */
+#define module_coreboot_driver(__coreboot_driver) \
+	module_driver(__coreboot_driver, coreboot_driver_register, \
+			coreboot_driver_unregister)

 #endif /* __COREBOOT_TABLE_H */
+1 -13
drivers/firmware/google/framebuffer-coreboot.c
···
 	},
 	.tag = CB_TAG_FRAMEBUFFER,
 };
-
-static int __init coreboot_framebuffer_init(void)
-{
-	return coreboot_driver_register(&framebuffer_driver);
-}
-
-static void coreboot_framebuffer_exit(void)
-{
-	coreboot_driver_unregister(&framebuffer_driver);
-}
-
-module_init(coreboot_framebuffer_init);
-module_exit(coreboot_framebuffer_exit);
+module_coreboot_driver(framebuffer_driver);

 MODULE_AUTHOR("Samuel Holland <samuel@sholland.org>");
 MODULE_LICENSE("GPL");
+7 -21
drivers/firmware/google/memconsole-coreboot.c
···
 */

 #include <linux/device.h>
+#include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
···
 #define CURSOR_MASK ((1 << 28) - 1)
 #define OVERFLOW (1 << 31)

-static struct cbmem_cons __iomem *cbmem_console;
+static struct cbmem_cons *cbmem_console;
 static u32 cbmem_console_size;

 /*
···
 static int memconsole_probe(struct coreboot_device *dev)
 {
-	struct cbmem_cons __iomem *tmp_cbmc;
+	struct cbmem_cons *tmp_cbmc;

 	tmp_cbmc = memremap(dev->cbmem_ref.cbmem_addr,
 			    sizeof(*tmp_cbmc), MEMREMAP_WB);
···
 	/* Read size only once to prevent overrun attack through /dev/mem. */
 	cbmem_console_size = tmp_cbmc->size_dont_access_after_boot;
-	cbmem_console = memremap(dev->cbmem_ref.cbmem_addr,
+	cbmem_console = devm_memremap(&dev->dev, dev->cbmem_ref.cbmem_addr,
 				 cbmem_console_size + sizeof(*cbmem_console),
 				 MEMREMAP_WB);
 	memunmap(tmp_cbmc);

-	if (!cbmem_console)
-		return -ENOMEM;
+	if (IS_ERR(cbmem_console))
+		return PTR_ERR(cbmem_console);

 	memconsole_setup(memconsole_coreboot_read);
···
 static int memconsole_remove(struct coreboot_device *dev)
 {
 	memconsole_exit();
-
-	if (cbmem_console)
-		memunmap(cbmem_console);

 	return 0;
 }
···
 	},
 	.tag = CB_TAG_CBMEM_CONSOLE,
 };
-
-static void coreboot_memconsole_exit(void)
-{
-	coreboot_driver_unregister(&memconsole_driver);
-}
-
-static int __init coreboot_memconsole_init(void)
-{
-	return coreboot_driver_register(&memconsole_driver);
-}
-
-module_exit(coreboot_memconsole_exit);
-module_init(coreboot_memconsole_init);
+module_coreboot_driver(memconsole_driver);

 MODULE_AUTHOR("Google, Inc.");
 MODULE_LICENSE("GPL");
+5 -4
drivers/firmware/google/memconsole.c
···
 * Copyright 2017 Google Inc.
 */

-#include <linux/init.h>
 #include <linux/sysfs.h>
 #include <linux/kobject.h>
 #include <linux/module.h>

 #include "memconsole.h"

-static ssize_t (*memconsole_read_func)(char *, loff_t, size_t);
-
 static ssize_t memconsole_read(struct file *filp, struct kobject *kobp,
 			       struct bin_attribute *bin_attr, char *buf,
 			       loff_t pos, size_t count)
 {
+	ssize_t (*memconsole_read_func)(char *, loff_t, size_t);
+
+	memconsole_read_func = bin_attr->private;
 	if (WARN_ON_ONCE(!memconsole_read_func))
 		return -EIO;
+
 	return memconsole_read_func(buf, pos, count);
 }
···
 void memconsole_setup(ssize_t (*read_func)(char *, loff_t, size_t))
 {
-	memconsole_read_func = read_func;
+	memconsole_bin_attr.private = read_func;
 }
 EXPORT_SYMBOL(memconsole_setup);
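The memconsole patch above replaces a file-scope function pointer with the sysfs attribute's `private` field, fetched at read time. A minimal userspace sketch of that pattern, with a hypothetical stand-in struct for the kernel's `bin_attribute` and `-5` standing in for `-EIO`:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the pattern: stash the read callback in the attribute's
 * "private" field instead of a global, then dispatch through it. */
typedef long (*read_func_t)(char *buf, long pos, long count);

struct fake_bin_attribute {
	void *private;		/* stand-in for bin_attribute.private */
};

struct fake_bin_attribute memconsole_attr;	/* zero-initialized */

void memconsole_setup(read_func_t read_func)
{
	memconsole_attr.private = read_func;
}

long memconsole_read(struct fake_bin_attribute *attr,
		     char *buf, long pos, long count)
{
	read_func_t fn = attr->private;

	if (!fn)			/* no backend registered yet */
		return -5;		/* stand-in for -EIO */
	return fn(buf, pos, count);
}

/* demo backend: pretends the whole requested range was copied */
long demo_read(char *buf, long pos, long count)
{
	(void)buf;
	(void)pos;
	return count;
}
```

The upside over the original global is that the callback travels with the attribute object, so there is no hidden coupling between `memconsole_setup()` and the read path.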
+1 -13
drivers/firmware/google/vpd.c
···
 	},
 	.tag = CB_TAG_VPD,
 };
-
-static int __init coreboot_vpd_init(void)
-{
-	return coreboot_driver_register(&vpd_driver);
-}
-
-static void __exit coreboot_vpd_exit(void)
-{
-	coreboot_driver_unregister(&vpd_driver);
-}
-
-module_init(coreboot_vpd_init);
-module_exit(coreboot_vpd_exit);
+module_coreboot_driver(vpd_driver);

 MODULE_AUTHOR("Google, Inc.");
 MODULE_LICENSE("GPL");
-2
drivers/firmware/google/vpd_decode.c
···
 * Copyright 2017 Google Inc.
 */

-#include <linux/export.h>
-
 #include "vpd_decode.h"

 static int vpd_decode_len(const s32 max_len, const u8 *in,
+3 -3
drivers/fpga/Kconfig
···
 	  FPGA manager driver support for Altera Arria10 SoCFPGA.

 config ALTERA_PR_IP_CORE
-tristate "Altera Partial Reconfiguration IP Core"
-help
-  Core driver support for Altera Partial Reconfiguration IP component
+	tristate "Altera Partial Reconfiguration IP Core"
+	help
+	  Core driver support for Altera Partial Reconfiguration IP component

 config ALTERA_PR_IP_CORE_PLAT
 	tristate "Platform support of Altera Partial Reconfiguration IP Core"
+2 -2
drivers/fpga/dfl-fme-mgr.c
···
 #define FME_PR_STS		0x10
 #define FME_PR_DATA		0x18
 #define FME_PR_ERR		0x20
-#define FME_PR_INTFC_ID_H	0xA8
-#define FME_PR_INTFC_ID_L	0xB0
+#define FME_PR_INTFC_ID_L	0xA8
+#define FME_PR_INTFC_ID_H	0xB0

 /* FME PR Control Register Bitfield */
 #define FME_PR_CTRL_PR_RST	BIT_ULL(0)  /* Reset PR engine */
+9 -8
drivers/fpga/dfl-fme-pr.c
···
 	struct dfl_fme *fme;
 	unsigned long minsz;
 	void *buf = NULL;
+	size_t length;
 	int ret = 0;
 	u64 v;
···
 		return -EFAULT;

 	if (port_pr.argsz < minsz || port_pr.flags)
-		return -EINVAL;
-
-	if (!IS_ALIGNED(port_pr.buffer_size, 4))
 		return -EINVAL;

 	/* get fme header region */
···
 			   port_pr.buffer_size))
 		return -EFAULT;

-	buf = vmalloc(port_pr.buffer_size);
+	/*
+	 * align PR buffer per PR bandwidth, as HW ignores the extra padding
+	 * data automatically.
+	 */
+	length = ALIGN(port_pr.buffer_size, 4);
+
+	buf = vmalloc(length);
 	if (!buf)
 		return -ENOMEM;
···
 	fpga_image_info_free(region->info);

 	info->buf = buf;
-	info->count = port_pr.buffer_size;
+	info->count = length;
 	info->region_id = port_pr.port_id;
 	region->info = info;
···
 	mutex_unlock(&pdata->lock);
 free_exit:
 	vfree(buf);
-	if (copy_to_user((void __user *)arg, &port_pr, minsz))
-		return -EFAULT;
-
 	return ret;
 }
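The dfl-fme-pr change above stops rejecting unaligned bitstreams and instead rounds the buffer size up to the 4-byte PR data width, since the hardware ignores the padding. A userspace sketch of the kernel's `ALIGN()` rounding (the macro body is the standard power-of-two round-up; `pr_buffer_length` is a hypothetical helper name):

```c
#include <assert.h>
#include <stddef.h>

/* Round x up to the next multiple of a, where a is a power of two.
 * This mirrors the kernel's ALIGN() macro used by the PR ioctl. */
#define ALIGN_UP(x, a) (((x) + ((size_t)(a) - 1)) & ~((size_t)(a) - 1))

/* Hypothetical helper: pad a PR bitstream buffer to the 4-byte
 * PR datawidth, as the patched ioctl now does before vmalloc(). */
size_t pr_buffer_length(size_t buffer_size)
{
	return ALIGN_UP(buffer_size, 4);
}
```

An already-aligned size is returned unchanged, so the old aligned callers see no behavior difference; only previously rejected sizes gain up to 3 bytes of zero padding.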
+1 -1
drivers/fsi/cf-fsi-fw.h
···
-// SPDX-License-Identifier: GPL-2.0+
+/* SPDX-License-Identifier: GPL-2.0+ */
 #ifndef __CF_FSI_FW_H
 #define __CF_FSI_FW_H
+20 -12
drivers/fsi/fsi-core.c
···
 	}

+	rc = fsi_slave_set_smode(slave);
+	if (rc) {
+		dev_warn(&master->dev,
+			 "can't set smode on slave:%02x:%02x %d\n",
+			 link, id, rc);
+		goto err_free;
+	}
+
 	/* Allocate a minor in the FSI space */
 	rc = __fsi_get_new_minor(slave, fsi_dev_cfam, &slave->dev.devt,
 				 &slave->cdev_idx);
···
 	rc = cdev_device_add(&slave->cdev, &slave->dev);
 	if (rc) {
 		dev_err(&slave->dev, "Error %d creating slave device\n", rc);
-		goto err_free;
+		goto err_free_ida;
 	}

-	rc = fsi_slave_set_smode(slave);
-	if (rc) {
-		dev_warn(&master->dev,
-			 "can't set smode on slave:%02x:%02x %d\n",
-			 link, id, rc);
-		kfree(slave);
-		return -ENODEV;
-	}
+	/* Now that we have the cdev registered with the core, any fatal
+	 * failures beyond this point will need to clean up through
+	 * cdev_device_del(). Fortunately though, nothing past here is fatal.
+	 */
+
 	if (master->link_config)
 		master->link_config(master, link,
 				    slave->t_send_delay,
···
 		dev_dbg(&master->dev, "failed during slave scan with: %d\n",
 			rc);

-	return rc;
+	return 0;

-err_free:
-	put_device(&slave->dev);
+err_free_ida:
+	fsi_free_minor(slave->dev.devt);
+err_free:
+	of_node_put(slave->dev.of_node);
+	kfree(slave);
 	return rc;
 }
+12 -3
drivers/fsi/fsi-occ.c
···
 		msecs_to_jiffies(OCC_CMD_IN_PRG_WAIT_MS);
 	struct occ *occ = dev_get_drvdata(dev);
 	struct occ_response *resp = response;
+	u8 seq_no;
 	u16 resp_data_length;
 	unsigned long start;
 	int rc;
···

 	mutex_lock(&occ->occ_lock);

+	/* Extract the seq_no from the command (first byte) */
+	seq_no = *(const u8 *)request;
 	rc = occ_putsram(occ, OCC_SRAM_CMD_ADDR, request, req_len);
 	if (rc)
 		goto done;
···
 	if (rc)
 		goto done;

-	if (resp->return_status == OCC_RESP_CMD_IN_PRG) {
+	if (resp->return_status == OCC_RESP_CMD_IN_PRG ||
+	    resp->seq_no != seq_no) {
 		rc = -ETIMEDOUT;

-		if (time_after(jiffies, start + timeout))
-			break;
+		if (time_after(jiffies, start + timeout)) {
+			dev_err(occ->dev, "resp timeout status=%02x "
+				"resp seq_no=%d our seq_no=%d\n",
+				resp->return_status, resp->seq_no,
+				seq_no);
+			goto done;
+		}

 		set_current_state(TASK_UNINTERRUPTIBLE);
 		schedule_timeout(wait_time);
+2 -2
drivers/fsi/fsi-sbefifo.c
···
 	switch ((sbm & CFAM_SBM_SBE_STATE_MASK) >> CFAM_SBM_SBE_STATE_SHIFT) {
 	case SBE_STATE_UNKNOWN:
 		return -ESHUTDOWN;
+	case SBE_STATE_DMT:
+		return -EBUSY;
 	case SBE_STATE_IPLING:
 	case SBE_STATE_ISTEP:
 	case SBE_STATE_MPIPL:
-	case SBE_STATE_DMT:
-		return -EBUSY;
 	case SBE_STATE_RUNTIME:
 	case SBE_STATE_DUMP: /* Not sure about that one */
 		break;
+2 -2
drivers/hwmon/occ/common.c
···
 static int occ_poll(struct occ *occ)
 {
 	int rc;
-	u16 checksum = occ->poll_cmd_data + 1;
+	u16 checksum = occ->poll_cmd_data + occ->seq_no + 1;
 	u8 cmd[8];
 	struct occ_poll_response_header *header;

 	/* big endian */
-	cmd[0] = 0;		/* sequence number */
+	cmd[0] = occ->seq_no++;	/* sequence number */
 	cmd[1] = 0;		/* cmd type */
 	cmd[2] = 0;		/* data length msb */
 	cmd[3] = 1;		/* data length lsb */
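The hwmon patch above makes the OCC poll command use a rolling sequence number, which means the command checksum must include it: the summed command bytes are `seq_no + 0 + 0 + 1 + poll_cmd_data`. A small userspace sketch of that arithmetic (`occ_poll_checksum` is a hypothetical helper, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Checksum over the fixed part of the OCC poll command:
 * cmd[0] = seq_no, cmd[1] = 0 (cmd type), cmd[2] = 0 (len msb),
 * cmd[3] = 1 (len lsb), cmd[4] = poll_cmd_data.
 * Summing those bytes gives poll_cmd_data + seq_no + 1, matching the
 * patched expression in occ_poll(). */
uint16_t occ_poll_checksum(uint8_t seq_no, uint8_t poll_cmd_data)
{
	return (uint16_t)(seq_no + 0 + 0 + 1 + poll_cmd_data);
}
```

Pairing this with the fsi-occ change, which compares the response's `seq_no` against the command's first byte, lets stale responses from a previous poll be detected instead of silently accepted.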
+1
drivers/hwmon/occ/common.h
···
 	struct occ_sensors sensors;

 	int powr_sample_time_us;	/* average power sample time */
+	u8 seq_no;
 	u8 poll_cmd_data;		/* to perform OCC poll command */
 	int (*send_cmd)(struct occ *occ, u8 *cmd);
+1
drivers/hwtracing/coresight/Kconfig
···
 #
 menuconfig CORESIGHT
 	bool "CoreSight Tracing Support"
+	depends on OF || ACPI
 	select ARM_AMBA
 	select PERF_EVENTS
 	help
+1 -2
drivers/hwtracing/coresight/Makefile
···
 #
 # Makefile for CoreSight drivers.
 #
-obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o
-obj-$(CONFIG_OF) += of_coresight.o
+obj-$(CONFIG_CORESIGHT) += coresight.o coresight-etm-perf.o coresight-platform.o
 obj-$(CONFIG_CORESIGHT_LINK_AND_SINK_TMC) += coresight-tmc.o \
 					     coresight-tmc-etf.o \
 					     coresight-tmc-etr.o
+22 -18
drivers/hwtracing/coresight/coresight-catu.c
···
 #define catu_dbg(x, ...) do {} while (0)
 #endif

+DEFINE_CORESIGHT_DEVLIST(catu_devs, "catu");
+
 struct catu_etr_buf {
 	struct tmc_sg_table *catu_table;
 	dma_addr_t sladdr;
···
 			     struct etr_buf *etr_buf, int node, void **pages)
 {
 	struct coresight_device *csdev;
-	struct device *catu_dev;
 	struct tmc_sg_table *catu_table;
 	struct catu_etr_buf *catu_buf;

 	csdev = tmc_etr_get_catu_device(tmc_drvdata);
 	if (!csdev)
 		return -ENODEV;
-	catu_dev = csdev->dev.parent;
 	catu_buf = kzalloc(sizeof(*catu_buf), GFP_KERNEL);
 	if (!catu_buf)
 		return -ENOMEM;

-	catu_table = catu_init_sg_table(catu_dev, node, etr_buf->size, pages);
+	catu_table = catu_init_sg_table(&csdev->dev, node,
+					etr_buf->size, pages);
 	if (IS_ERR(catu_table)) {
 		kfree(catu_buf);
 		return PTR_ERR(catu_table);
···
 	int rc;
 	u32 control, mode;
 	struct etr_buf *etr_buf = data;
+	struct device *dev = &drvdata->csdev->dev;

 	if (catu_wait_for_ready(drvdata))
-		dev_warn(drvdata->dev, "Timeout while waiting for READY\n");
+		dev_warn(dev, "Timeout while waiting for READY\n");

 	control = catu_read_control(drvdata);
 	if (control & BIT(CATU_CONTROL_ENABLE)) {
-		dev_warn(drvdata->dev, "CATU is already enabled\n");
+		dev_warn(dev, "CATU is already enabled\n");
 		return -EBUSY;
 	}
···
 	catu_write_irqen(drvdata, 0);
 	catu_write_mode(drvdata, mode);
 	catu_write_control(drvdata, control);
-	dev_dbg(drvdata->dev, "Enabled in %s mode\n",
+	dev_dbg(dev, "Enabled in %s mode\n",
 		(mode == CATU_MODE_PASS_THROUGH) ?
 		"Pass through" :
 		"Translate");
···
 static int catu_disable_hw(struct catu_drvdata *drvdata)
 {
 	int rc = 0;
+	struct device *dev = &drvdata->csdev->dev;

 	catu_write_control(drvdata, 0);
 	coresight_disclaim_device_unlocked(drvdata->base);
 	if (catu_wait_for_ready(drvdata)) {
-		dev_info(drvdata->dev, "Timeout while waiting for READY\n");
+		dev_info(dev, "Timeout while waiting for READY\n");
 		rc = -EAGAIN;
 	}

-	dev_dbg(drvdata->dev, "Disabled\n");
+	dev_dbg(dev, "Disabled\n");
 	return rc;
 }
···
 	struct coresight_desc catu_desc;
 	struct coresight_platform_data *pdata = NULL;
 	struct device *dev = &adev->dev;
-	struct device_node *np = dev->of_node;
 	void __iomem *base;

-	if (np) {
-		pdata = of_get_coresight_platform_data(dev, np);
-		if (IS_ERR(pdata)) {
-			ret = PTR_ERR(pdata);
-			goto out;
-		}
-		dev->platform_data = pdata;
-	}
+	catu_desc.name = coresight_alloc_device_name(&catu_devs, dev);
+	if (!catu_desc.name)
+		return -ENOMEM;

 	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
 	if (!drvdata) {
···
 		goto out;
 	}

-	drvdata->dev = dev;
 	dev_set_drvdata(dev, drvdata);
 	base = devm_ioremap_resource(dev, &adev->res);
 	if (IS_ERR(base)) {
···
 	if (ret)
 		goto out;

+	pdata = coresight_get_platform_data(dev);
+	if (IS_ERR(pdata)) {
+		ret = PTR_ERR(pdata);
+		goto out;
+	}
+	dev->platform_data = pdata;
+
 	drvdata->base = base;
 	catu_desc.pdata = pdata;
 	catu_desc.dev = dev;
···
 	catu_desc.type = CORESIGHT_DEV_TYPE_HELPER;
 	catu_desc.subtype.helper_subtype = CORESIGHT_DEV_SUBTYPE_HELPER_CATU;
 	catu_desc.ops = &catu_ops;
+
 	drvdata->csdev = coresight_register(&catu_desc);
 	if (IS_ERR(drvdata->csdev))
 		ret = PTR_ERR(drvdata->csdev);
-1
drivers/hwtracing/coresight/coresight-catu.h
···
 #define CATU_IRQEN_OFF	0x0

 struct catu_drvdata {
-	struct device *dev;
 	void __iomem *base;
 	struct coresight_device *csdev;
 	int irq;
+4 -2
drivers/hwtracing/coresight/coresight-cpu-debug.c
···
 	struct device *dev = &adev->dev;
 	struct debug_drvdata *drvdata;
 	struct resource *res = &adev->res;
-	struct device_node *np = adev->dev.of_node;
 	int ret;

 	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
 	if (!drvdata)
 		return -ENOMEM;

-	drvdata->cpu = np ? of_coresight_get_cpu(np) : 0;
+	drvdata->cpu = coresight_get_cpu(dev);
+	if (drvdata->cpu < 0)
+		return drvdata->cpu;
+
 	if (per_cpu(debug_drvdata, drvdata->cpu)) {
 		dev_err(dev, "CPU%d drvdata has already been initialized\n",
 			drvdata->cpu);
+43 -35
drivers/hwtracing/coresight/coresight-etb10.c
···
 #define ETB_FFSR_BIT		1
 #define ETB_FRAME_SIZE_WORDS	4

+DEFINE_CORESIGHT_DEVLIST(etb_devs, "etb");
+
 /**
  * struct etb_drvdata - specifics associated to an ETB component
  * @base:	memory mapped base address for this component.
- * @dev:	the device entity associated to this component.
  * @atclk:	optional clock for the core parts of the ETB.
  * @csdev:	component vitals needed by the framework.
  * @miscdev:	specifics to handle "/dev/xyz.etb" entry.
···
  */
 struct etb_drvdata {
 	void __iomem *base;
-	struct device *dev;
 	struct clk *atclk;
 	struct coresight_device *csdev;
 	struct miscdevice miscdev;
···
 static int etb_enable(struct coresight_device *csdev, u32 mode, void *data)
 {
 	int ret;
-	struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);

 	switch (mode) {
 	case CS_MODE_SYSFS:
···
 	if (ret)
 		return ret;

-	dev_dbg(drvdata->dev, "ETB enabled\n");
+	dev_dbg(&csdev->dev, "ETB enabled\n");
 	return 0;
 }

 static void __etb_disable_hw(struct etb_drvdata *drvdata)
 {
 	u32 ffcr;
+	struct device *dev = &drvdata->csdev->dev;

 	CS_UNLOCK(drvdata->base);
···
 	writel_relaxed(ffcr, drvdata->base + ETB_FFCR);

 	if (coresight_timeout(drvdata->base, ETB_FFCR, ETB_FFCR_BIT, 0)) {
-		dev_err(drvdata->dev,
+		dev_err(dev,
 			"timeout while waiting for completion of Manual Flush\n");
 	}
···
 	writel_relaxed(0x0, drvdata->base + ETB_CTL_REG);

 	if (coresight_timeout(drvdata->base, ETB_FFSR, ETB_FFSR_BIT, 1)) {
-		dev_err(drvdata->dev,
+		dev_err(dev,
 			"timeout while waiting for Formatter to Stop\n");
 	}
···
 	u32 read_data, depth;
 	u32 read_ptr, write_ptr;
 	u32 frame_off, frame_endoff;
+	struct device *dev = &drvdata->csdev->dev;

 	CS_UNLOCK(drvdata->base);
···
 	frame_off = write_ptr % ETB_FRAME_SIZE_WORDS;
 	frame_endoff = ETB_FRAME_SIZE_WORDS - frame_off;
 	if (frame_off) {
-		dev_err(drvdata->dev,
+		dev_err(dev,
 			"write_ptr: %lu not aligned to formatter frame size\n",
 			(unsigned long)write_ptr);
-		dev_err(drvdata->dev, "frameoff: %lu, frame_endoff: %lu\n",
+		dev_err(dev, "frameoff: %lu, frame_endoff: %lu\n",
 			(unsigned long)frame_off, (unsigned long)frame_endoff);
 		write_ptr += frame_endoff;
 	}
···
 	drvdata->mode = CS_MODE_DISABLED;
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);

-	dev_dbg(drvdata->dev, "ETB disabled\n");
+	dev_dbg(&csdev->dev, "ETB disabled\n");
 	return 0;
 }
···
 			      struct perf_event *event, void **pages,
 			      int nr_pages, bool overwrite)
 {
-	int node, cpu = event->cpu;
+	int node;
 	struct cs_buffers *buf;

-	if (cpu == -1)
-		cpu = smp_processor_id();
-	node = cpu_to_node(cpu);
+	node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu);

 	buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node);
 	if (!buf)
···
 	 * chance to fix things.
 	 */
 	if (write_ptr % ETB_FRAME_SIZE_WORDS) {
-		dev_err(drvdata->dev,
+		dev_err(&csdev->dev,
 			"write_ptr: %lu not aligned to formatter frame size\n",
 			(unsigned long)write_ptr);
···
 		lost = true;
 	}

-	if (lost)
+	/*
+	 * Don't set the TRUNCATED flag in snapshot mode because 1) the
+	 * captured buffer is expected to be truncated and 2) a full buffer
+	 * prevents the event from being re-enabled by the perf core,
+	 * resulting in stale data being send to user space.
+	 */
+	if (!buf->snapshot && lost)
 		perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED);

 	/* finally tell HW where we want to start reading from */
···
 	writel_relaxed(0x0, drvdata->base + ETB_RAM_WRITE_POINTER);

 	/*
-	 * In snapshot mode we have to update the handle->head to point
-	 * to the new location.
+	 * In snapshot mode we simply increment the head by the number of byte
+	 * that were written. User space function cs_etm_find_snapshot() will
+	 * figure out how many bytes to get from the AUX buffer based on the
+	 * position of the head.
 	 */
-	if (buf->snapshot) {
-		handle->head = (cur * PAGE_SIZE) + offset;
-		to_read = buf->nr_pages << PAGE_SHIFT;
-	}
+	if (buf->snapshot)
+		handle->head += to_read;
+
 	__etb_enable_hw(drvdata);
 	CS_LOCK(drvdata->base);
 out:
···
 	}
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);

-	dev_dbg(drvdata->dev, "ETB dumped\n");
+	dev_dbg(&drvdata->csdev->dev, "ETB dumped\n");
 }

 static int etb_open(struct inode *inode, struct file *file)
···
 	if (local_cmpxchg(&drvdata->reading, 0, 1))
 		return -EBUSY;

-	dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__);
+	dev_dbg(&drvdata->csdev->dev, "%s: successfully opened\n", __func__);
 	return 0;
 }
···
 	u32 depth;
 	struct etb_drvdata *drvdata = container_of(file->private_data,
 						   struct etb_drvdata, miscdev);
+	struct device *dev = &drvdata->csdev->dev;

 	etb_dump(drvdata);
···
 		len = depth * 4 - *ppos;

 	if (copy_to_user(data, drvdata->buf + *ppos, len)) {
-		dev_dbg(drvdata->dev, "%s: copy_to_user failed\n", __func__);
+		dev_dbg(dev,
+			"%s: copy_to_user failed\n", __func__);
 		return -EFAULT;
 	}

 	*ppos += len;

-	dev_dbg(drvdata->dev, "%s: %zu bytes copied, %d bytes left\n",
+	dev_dbg(dev, "%s: %zu bytes copied, %d bytes left\n",
 		__func__, len, (int)(depth * 4 - *ppos));
 	return len;
 }
···
 					   struct etb_drvdata, miscdev);
 	local_set(&drvdata->reading, 0);

-	dev_dbg(drvdata->dev, "%s: released\n", __func__);
+	dev_dbg(&drvdata->csdev->dev, "%s: released\n", __func__);
 	return 0;
 }
···
 	struct etb_drvdata *drvdata;
 	struct resource *res = &adev->res;
 	struct coresight_desc desc = { 0 };
-	struct device_node *np = adev->dev.of_node;

-	if (np) {
-		pdata = of_get_coresight_platform_data(dev, np);
-		if (IS_ERR(pdata))
-			return PTR_ERR(pdata);
-		adev->dev.platform_data = pdata;
-	}
+	desc.name = coresight_alloc_device_name(&etb_devs, dev);
+	if (!desc.name)
+		return -ENOMEM;

 	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
 	if (!drvdata)
 		return -ENOMEM;

-	drvdata->dev = &adev->dev;
 	drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
 	if (!IS_ERR(drvdata->atclk)) {
 		ret = clk_prepare_enable(drvdata->atclk);
···
 	/* This device is not associated with a session */
 	drvdata->pid = -1;

+	pdata = coresight_get_platform_data(dev);
+	if (IS_ERR(pdata))
+		return PTR_ERR(pdata);
+	adev->dev.platform_data = pdata;
+
 	desc.type = CORESIGHT_DEV_TYPE_SINK;
 	desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER;
 	desc.ops = &etb_cs_ops;
···
 	if (IS_ERR(drvdata->csdev))
 		return PTR_ERR(drvdata->csdev);

-	drvdata->miscdev.name = pdata->name;
+	drvdata->miscdev.name = desc.name;
 	drvdata->miscdev.minor = MISC_DYNAMIC_MINOR;
 	drvdata->miscdev.fops = &etb_fops;
 	ret = misc_register(&drvdata->miscdev);
+4 -4
drivers/hwtracing/coresight/coresight-etm-perf.c
···
 	unsigned long hash;
 	const char *name;
 	struct device *pmu_dev = etm_pmu.dev;
-	struct device *pdev = csdev->dev.parent;
+	struct device *dev = &csdev->dev;
 	struct dev_ext_attribute *ea;

 	if (csdev->type != CORESIGHT_DEV_TYPE_SINK &&
···
 	if (!etm_perf_up)
 		return -EPROBE_DEFER;

-	ea = devm_kzalloc(pdev, sizeof(*ea), GFP_KERNEL);
+	ea = devm_kzalloc(dev, sizeof(*ea), GFP_KERNEL);
 	if (!ea)
 		return -ENOMEM;

-	name = dev_name(pdev);
+	name = dev_name(dev);
 	/* See function coresight_get_sink_by_id() to know where this is used */
 	hash = hashlen_hash(hashlen_string(NULL, name));

-	ea->attr.attr.name = devm_kstrdup(pdev, name, GFP_KERNEL);
+	ea->attr.attr.name = devm_kstrdup(dev, name, GFP_KERNEL);
 	if (!ea->attr.attr.name)
 		return -ENOMEM;
+2 -4
drivers/hwtracing/coresight/coresight-etm.h
···
 /**
  * struct etm_drvdata - specifics associated to an ETM component
  * @base:	memory mapped base address for this component.
- * @dev:	the device entity associated to this component.
  * @atclk:	optional clock for the core parts of the ETM.
  * @csdev:	component vitals needed by the framework.
  * @spinlock:	only one at a time pls.
···
  */
 struct etm_drvdata {
 	void __iomem *base;
-	struct device *dev;
 	struct clk *atclk;
 	struct coresight_device *csdev;
 	spinlock_t spinlock;
···
 {
 	if (drvdata->use_cp14) {
 		if (etm_writel_cp14(off, val)) {
-			dev_err(drvdata->dev,
+			dev_err(&drvdata->csdev->dev,
 				"invalid CP14 access to ETM reg: %#x", off);
 		}
 	} else {
···

 	if (drvdata->use_cp14) {
 		if (etm_readl_cp14(off, &val)) {
-			dev_err(drvdata->dev,
+			dev_err(&drvdata->csdev->dev,
 				"invalid CP14 access to ETM reg: %#x", off);
 		}
 	} else {
+6 -6
drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
···
 	unsigned long flags, val;
 	struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent);

-	pm_runtime_get_sync(drvdata->dev);
+	pm_runtime_get_sync(dev->parent);
 	spin_lock_irqsave(&drvdata->spinlock, flags);
 	CS_UNLOCK(drvdata->base);
···

 	CS_LOCK(drvdata->base);
 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-	pm_runtime_put(drvdata->dev);
+	pm_runtime_put(dev->parent);

 	return sprintf(buf, "%#lx\n", val);
 }
···

 	if (config->mode & ETM_MODE_STALL) {
 		if (!(drvdata->etmccr & ETMCCR_FIFOFULL)) {
-			dev_warn(drvdata->dev, "stall mode not supported\n");
+			dev_warn(dev, "stall mode not supported\n");
 			ret = -EINVAL;
 			goto err_unlock;
 		}
···

 	if (config->mode & ETM_MODE_TIMESTAMP) {
 		if (!(drvdata->etmccer & ETMCCER_TIMESTAMP)) {
-			dev_warn(drvdata->dev, "timestamp not supported\n");
+			dev_warn(dev, "timestamp not supported\n");
 			ret = -EINVAL;
 			goto err_unlock;
 		}
···
 		goto out;
 	}

-	pm_runtime_get_sync(drvdata->dev);
+	pm_runtime_get_sync(dev->parent);
 	spin_lock_irqsave(&drvdata->spinlock, flags);

 	CS_UNLOCK(drvdata->base);
···
 	CS_LOCK(drvdata->base);

 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-	pm_runtime_put(drvdata->dev);
+	pm_runtime_put(dev->parent);
 out:
 	return sprintf(buf, "%#lx\n", val);
 }
+28 -21
drivers/hwtracing/coresight/coresight-etm3x.c
···
 	 */
 	isb();
 	if (coresight_timeout_etm(drvdata, ETMSR, ETMSR_PROG_BIT, 1)) {
-		dev_err(drvdata->dev,
+		dev_err(&drvdata->csdev->dev,
 			"%s: timeout observed when probing at offset %#x\n",
 			__func__, ETMSR);
 	}
···
 	 */
 	isb();
 	if (coresight_timeout_etm(drvdata, ETMSR, ETMSR_PROG_BIT, 0)) {
-		dev_err(drvdata->dev,
+		dev_err(&drvdata->csdev->dev,
 			"%s: timeout observed when probing at offset %#x\n",
 			__func__, ETMSR);
 	}
···
 done:
 	CS_LOCK(drvdata->base);

-	dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n",
+	dev_dbg(&drvdata->csdev->dev, "cpu: %d enable smp call done: %d\n",
 		drvdata->cpu, rc);
 	return rc;
 }
···
 {
 	unsigned long flags;
 	int trace_id = -1;
+	struct device *etm_dev;

 	if (!drvdata)
 		goto out;

+	etm_dev = drvdata->csdev->dev.parent;
 	if (!local_read(&drvdata->mode))
 		return drvdata->traceid;

-	pm_runtime_get_sync(drvdata->dev);
+	pm_runtime_get_sync(etm_dev);

 	spin_lock_irqsave(&drvdata->spinlock, flags);
···
 	CS_LOCK(drvdata->base);

 	spin_unlock_irqrestore(&drvdata->spinlock, flags);
-	pm_runtime_put(drvdata->dev);
+	pm_runtime_put(etm_dev);

 out:
 	return trace_id;
···
 	spin_unlock(&drvdata->spinlock);

 	if (!ret)
-		dev_dbg(drvdata->dev, "ETM tracing enabled\n");
+		dev_dbg(&csdev->dev, "ETM tracing enabled\n");
 	return ret;
 }
···

 	CS_LOCK(drvdata->base);

-	dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu);
+	dev_dbg(&drvdata->csdev->dev,
+		"cpu: %d disable smp call done\n", drvdata->cpu);
 }

 static void etm_disable_perf(struct coresight_device *csdev)
···
 	spin_unlock(&drvdata->spinlock);
 	cpus_read_unlock();

-	dev_dbg(drvdata->dev, "ETM tracing disabled\n");
+	dev_dbg(&csdev->dev, "ETM tracing disabled\n");
 }

 static void etm_disable(struct coresight_device *csdev,
···
 	struct etm_drvdata *drvdata;
 	struct resource *res = &adev->res;
 	struct coresight_desc desc = { 0 };
-	struct device_node *np = adev->dev.of_node;

 	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
 	if (!drvdata)
 		return -ENOMEM;

-	if (np) {
-		pdata = of_get_coresight_platform_data(dev, np);
-		if (IS_ERR(pdata))
-			return PTR_ERR(pdata);
-
-		adev->dev.platform_data = pdata;
-		drvdata->use_cp14 = of_property_read_bool(np, "arm,cp14");
-	}
-
-	drvdata->dev = &adev->dev;
+	drvdata->use_cp14 = fwnode_property_read_bool(dev->fwnode, "arm,cp14");
 	dev_set_drvdata(dev, drvdata);

 /* Validity for the resource is already checked by the AMBA core */
···
 		return ret;
 	}

-	drvdata->cpu = pdata ? pdata->cpu : 0;
+	drvdata->cpu = coresight_get_cpu(dev);
+	if (drvdata->cpu < 0)
+		return drvdata->cpu;
+
+	desc.name = devm_kasprintf(dev, GFP_KERNEL, "etm%d", drvdata->cpu);
+	if (!desc.name)
+		return -ENOMEM;

 	cpus_read_lock();
 	etmdrvdata[drvdata->cpu] = drvdata;
···
 	etm_init_trace_id(drvdata);
 	etm_set_default(&drvdata->config);

+	pdata = coresight_get_platform_data(dev);
+	if (IS_ERR(pdata)) {
+		ret = PTR_ERR(pdata);
+		goto err_arch_supported;
+	}
+	adev->dev.platform_data = pdata;
+
 	desc.type = CORESIGHT_DEV_TYPE_SOURCE;
 	desc.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC;
 	desc.ops = &etm_cs_ops;
···
 	}

 	pm_runtime_put(&adev->dev);
-	dev_info(dev, "%s initialized\n", (char *)coresight_get_uci_data(id));
+	dev_info(&drvdata->csdev->dev,
+		 "%s initialized\n", (char *)coresight_get_uci_data(id));
 	if (boot_enable) {
 		coresight_enable(drvdata->csdev);
 		drvdata->boot_enable = true;
+23 -17
drivers/hwtracing/coresight/coresight-etm4x.c
··· 88 88 { 89 89 int i, rc; 90 90 struct etmv4_config *config = &drvdata->config; 91 + struct device *etm_dev = &drvdata->csdev->dev; 91 92 92 93 CS_UNLOCK(drvdata->base); 93 94 ··· 103 102 104 103 /* wait for TRCSTATR.IDLE to go up */ 105 104 if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 1)) 106 - dev_err(drvdata->dev, 105 + dev_err(etm_dev, 107 106 "timeout while waiting for Idle Trace Status\n"); 108 107 109 108 writel_relaxed(config->pe_sel, drvdata->base + TRCPROCSELR); ··· 185 184 186 185 /* wait for TRCSTATR.IDLE to go back down to '0' */ 187 186 if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 0)) 188 - dev_err(drvdata->dev, 187 + dev_err(etm_dev, 189 188 "timeout while waiting for Idle Trace Status\n"); 190 189 191 190 done: 192 191 CS_LOCK(drvdata->base); 193 192 194 - dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n", 193 + dev_dbg(etm_dev, "cpu: %d enable smp call done: %d\n", 195 194 drvdata->cpu, rc); 196 195 return rc; 197 196 } ··· 401 400 spin_unlock(&drvdata->spinlock); 402 401 403 402 if (!ret) 404 - dev_dbg(drvdata->dev, "ETM tracing enabled\n"); 403 + dev_dbg(&csdev->dev, "ETM tracing enabled\n"); 405 404 return ret; 406 405 } 407 406 ··· 462 461 463 462 CS_LOCK(drvdata->base); 464 463 465 - dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu); 464 + dev_dbg(&drvdata->csdev->dev, 465 + "cpu: %d disable smp call done\n", drvdata->cpu); 466 466 } 467 467 468 468 static int etm4_disable_perf(struct coresight_device *csdev, ··· 513 511 spin_unlock(&drvdata->spinlock); 514 512 cpus_read_unlock(); 515 513 516 - dev_dbg(drvdata->dev, "ETM tracing disabled\n"); 514 + dev_dbg(&csdev->dev, "ETM tracing disabled\n"); 517 515 } 518 516 519 517 static void etm4_disable(struct coresight_device *csdev, ··· 1084 1082 struct etmv4_drvdata *drvdata; 1085 1083 struct resource *res = &adev->res; 1086 1084 struct coresight_desc desc = { 0 }; 1087 - struct device_node *np = adev->dev.of_node; 1088 1085 
1089 1086 drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); 1090 1087 if (!drvdata) 1091 1088 return -ENOMEM; 1092 1089 1093 - if (np) { 1094 - pdata = of_get_coresight_platform_data(dev, np); 1095 - if (IS_ERR(pdata)) 1096 - return PTR_ERR(pdata); 1097 - adev->dev.platform_data = pdata; 1098 - } 1099 - 1100 - drvdata->dev = &adev->dev; 1101 1090 dev_set_drvdata(dev, drvdata); 1102 1091 1103 1092 /* Validity for the resource is already checked by the AMBA core */ ··· 1100 1107 1101 1108 spin_lock_init(&drvdata->spinlock); 1102 1109 1103 - drvdata->cpu = pdata ? pdata->cpu : 0; 1110 + drvdata->cpu = coresight_get_cpu(dev); 1111 + if (drvdata->cpu < 0) 1112 + return drvdata->cpu; 1113 + 1114 + desc.name = devm_kasprintf(dev, GFP_KERNEL, "etm%d", drvdata->cpu); 1115 + if (!desc.name) 1116 + return -ENOMEM; 1104 1117 1105 1118 cpus_read_lock(); 1106 1119 etmdrvdata[drvdata->cpu] = drvdata; ··· 1137 1138 etm4_init_trace_id(drvdata); 1138 1139 etm4_set_default(&drvdata->config); 1139 1140 1141 + pdata = coresight_get_platform_data(dev); 1142 + if (IS_ERR(pdata)) { 1143 + ret = PTR_ERR(pdata); 1144 + goto err_arch_supported; 1145 + } 1146 + adev->dev.platform_data = pdata; 1147 + 1140 1148 desc.type = CORESIGHT_DEV_TYPE_SOURCE; 1141 1149 desc.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_PROC; 1142 1150 desc.ops = &etm4_cs_ops; ··· 1163 1157 } 1164 1158 1165 1159 pm_runtime_put(&adev->dev); 1166 - dev_info(dev, "CPU%d: ETM v%d.%d initialized\n", 1160 + dev_info(&drvdata->csdev->dev, "CPU%d: ETM v%d.%d initialized\n", 1167 1161 drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf); 1168 1162 1169 1163 if (boot_enable) {
-2
drivers/hwtracing/coresight/coresight-etm4x.h
··· 284 284 /** 285 285 * struct etm4_drvdata - specifics associated to an ETM component 286 286 * @base: Memory mapped base address for this component. 287 - * @dev: The device entity associated to this component. 288 287 * @csdev: Component vitals needed by the framework. 289 288 * @spinlock: Only one at a time pls. 290 289 * @mode: This tracer's mode, i.e sysFS, Perf or disabled. ··· 339 340 */ 340 341 struct etmv4_drvdata { 341 342 void __iomem *base; 342 - struct device *dev; 343 343 struct coresight_device *csdev; 344 344 spinlock_t spinlock; 345 345 local_t mode;
+20 -16
drivers/hwtracing/coresight/coresight-funnel.c
··· 29 29 #define FUNNEL_HOLDTIME (0x7 << FUNNEL_HOLDTIME_SHFT) 30 30 #define FUNNEL_ENSx_MASK 0xff 31 31 32 + DEFINE_CORESIGHT_DEVLIST(funnel_devs, "funnel"); 33 + 32 34 /** 33 35 * struct funnel_drvdata - specifics associated to a funnel component 34 36 * @base: memory mapped base address for this component. 35 - * @dev: the device entity associated to this component. 36 37 * @atclk: optional clock for the core parts of the funnel. 37 38 * @csdev: component vitals needed by the framework. 38 39 * @priority: port selection order. 39 40 */ 40 41 struct funnel_drvdata { 41 42 void __iomem *base; 42 - struct device *dev; 43 43 struct clk *atclk; 44 44 struct coresight_device *csdev; 45 45 unsigned long priority; ··· 80 80 rc = dynamic_funnel_enable_hw(drvdata, inport); 81 81 82 82 if (!rc) 83 - dev_dbg(drvdata->dev, "FUNNEL inport %d enabled\n", inport); 83 + dev_dbg(&csdev->dev, "FUNNEL inport %d enabled\n", inport); 84 84 return rc; 85 85 } 86 86 ··· 110 110 if (drvdata->base) 111 111 dynamic_funnel_disable_hw(drvdata, inport); 112 112 113 - dev_dbg(drvdata->dev, "FUNNEL inport %d disabled\n", inport); 113 + dev_dbg(&csdev->dev, "FUNNEL inport %d disabled\n", inport); 114 114 } 115 115 116 116 static const struct coresight_ops_link funnel_link_ops = { ··· 165 165 u32 val; 166 166 struct funnel_drvdata *drvdata = dev_get_drvdata(dev->parent); 167 167 168 - pm_runtime_get_sync(drvdata->dev); 168 + pm_runtime_get_sync(dev->parent); 169 169 170 170 val = get_funnel_ctrl_hw(drvdata); 171 171 172 - pm_runtime_put(drvdata->dev); 172 + pm_runtime_put(dev->parent); 173 173 174 174 return sprintf(buf, "%#x\n", val); 175 175 } ··· 189 189 struct coresight_platform_data *pdata = NULL; 190 190 struct funnel_drvdata *drvdata; 191 191 struct coresight_desc desc = { 0 }; 192 - struct device_node *np = dev->of_node; 193 192 194 - if (np) { 195 - pdata = of_get_coresight_platform_data(dev, np); 196 - if (IS_ERR(pdata)) 197 - return PTR_ERR(pdata); 198 - dev->platform_data = pdata; 
199 - } 200 - 201 - if (of_device_is_compatible(np, "arm,coresight-funnel")) 193 + if (is_of_node(dev_fwnode(dev)) && 194 + of_device_is_compatible(dev->of_node, "arm,coresight-funnel")) 202 195 pr_warn_once("Uses OBSOLETE CoreSight funnel binding\n"); 196 + 197 + desc.name = coresight_alloc_device_name(&funnel_devs, dev); 198 + if (!desc.name) 199 + return -ENOMEM; 203 200 204 201 drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); 205 202 if (!drvdata) 206 203 return -ENOMEM; 207 204 208 - drvdata->dev = dev; 209 205 drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */ 210 206 if (!IS_ERR(drvdata->atclk)) { 211 207 ret = clk_prepare_enable(drvdata->atclk); ··· 225 229 226 230 dev_set_drvdata(dev, drvdata); 227 231 232 + pdata = coresight_get_platform_data(dev); 233 + if (IS_ERR(pdata)) { 234 + ret = PTR_ERR(pdata); 235 + goto out_disable_clk; 236 + } 237 + dev->platform_data = pdata; 238 + 228 239 desc.type = CORESIGHT_DEV_TYPE_LINK; 229 240 desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_MERG; 230 241 desc.ops = &funnel_cs_ops; ··· 244 241 } 245 242 246 243 pm_runtime_put(dev); 244 + ret = 0; 247 245 248 246 out_disable_clk: 249 247 if (ret && !IS_ERR_OR_NULL(drvdata->atclk))
+815
drivers/hwtracing/coresight/coresight-platform.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2012, The Linux Foundation. All rights reserved. 4 + */ 5 + 6 + #include <linux/acpi.h> 7 + #include <linux/types.h> 8 + #include <linux/err.h> 9 + #include <linux/slab.h> 10 + #include <linux/clk.h> 11 + #include <linux/of.h> 12 + #include <linux/of_address.h> 13 + #include <linux/of_graph.h> 14 + #include <linux/of_platform.h> 15 + #include <linux/platform_device.h> 16 + #include <linux/amba/bus.h> 17 + #include <linux/coresight.h> 18 + #include <linux/cpumask.h> 19 + #include <asm/smp_plat.h> 20 + 21 + #include "coresight-priv.h" 22 + /* 23 + * coresight_alloc_conns: Allocate connections record for each output 24 + * port from the device. 25 + */ 26 + static int coresight_alloc_conns(struct device *dev, 27 + struct coresight_platform_data *pdata) 28 + { 29 + if (pdata->nr_outport) { 30 + pdata->conns = devm_kzalloc(dev, pdata->nr_outport * 31 + sizeof(*pdata->conns), 32 + GFP_KERNEL); 33 + if (!pdata->conns) 34 + return -ENOMEM; 35 + } 36 + 37 + return 0; 38 + } 39 + 40 + int coresight_device_fwnode_match(struct device *dev, void *fwnode) 41 + { 42 + return dev_fwnode(dev) == fwnode; 43 + } 44 + 45 + static struct device * 46 + coresight_find_device_by_fwnode(struct fwnode_handle *fwnode) 47 + { 48 + struct device *dev = NULL; 49 + 50 + /* 51 + * If we have a non-configurable replicator, it will be found on the 52 + * platform bus. 53 + */ 54 + dev = bus_find_device(&platform_bus_type, NULL, 55 + fwnode, coresight_device_fwnode_match); 56 + if (dev) 57 + return dev; 58 + 59 + /* 60 + * We have a configurable component - circle through the AMBA bus 61 + * looking for the device that matches the endpoint node. 
62 + */ 63 + return bus_find_device(&amba_bustype, NULL, 64 + fwnode, coresight_device_fwnode_match); 65 + } 66 + 67 + #ifdef CONFIG_OF 68 + static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep) 69 + { 70 + return of_property_read_bool(ep, "slave-mode"); 71 + } 72 + 73 + static void of_coresight_get_ports_legacy(const struct device_node *node, 74 + int *nr_inport, int *nr_outport) 75 + { 76 + struct device_node *ep = NULL; 77 + int in = 0, out = 0; 78 + 79 + do { 80 + ep = of_graph_get_next_endpoint(node, ep); 81 + if (!ep) 82 + break; 83 + 84 + if (of_coresight_legacy_ep_is_input(ep)) 85 + in++; 86 + else 87 + out++; 88 + 89 + } while (ep); 90 + 91 + *nr_inport = in; 92 + *nr_outport = out; 93 + } 94 + 95 + static struct device_node *of_coresight_get_port_parent(struct device_node *ep) 96 + { 97 + struct device_node *parent = of_graph_get_port_parent(ep); 98 + 99 + /* 100 + * Skip one-level up to the real device node, if we 101 + * are using the new bindings. 102 + */ 103 + if (of_node_name_eq(parent, "in-ports") || 104 + of_node_name_eq(parent, "out-ports")) 105 + parent = of_get_next_parent(parent); 106 + 107 + return parent; 108 + } 109 + 110 + static inline struct device_node * 111 + of_coresight_get_input_ports_node(const struct device_node *node) 112 + { 113 + return of_get_child_by_name(node, "in-ports"); 114 + } 115 + 116 + static inline struct device_node * 117 + of_coresight_get_output_ports_node(const struct device_node *node) 118 + { 119 + return of_get_child_by_name(node, "out-ports"); 120 + } 121 + 122 + static inline int 123 + of_coresight_count_ports(struct device_node *port_parent) 124 + { 125 + int i = 0; 126 + struct device_node *ep = NULL; 127 + 128 + while ((ep = of_graph_get_next_endpoint(port_parent, ep))) 129 + i++; 130 + return i; 131 + } 132 + 133 + static void of_coresight_get_ports(const struct device_node *node, 134 + int *nr_inport, int *nr_outport) 135 + {
136 + struct device_node *input_ports = NULL, *output_ports = NULL; 137 + 138 + input_ports = of_coresight_get_input_ports_node(node); 139 + output_ports = of_coresight_get_output_ports_node(node); 140 + 141 + if (input_ports || output_ports) { 142 + if (input_ports) { 143 + *nr_inport = of_coresight_count_ports(input_ports); 144 + of_node_put(input_ports); 145 + } 146 + if (output_ports) { 147 + *nr_outport = of_coresight_count_ports(output_ports); 148 + of_node_put(output_ports); 149 + } 150 + } else { 151 + /* Fall back to legacy DT bindings parsing */ 152 + of_coresight_get_ports_legacy(node, nr_inport, nr_outport); 153 + } 154 + } 155 + 156 + static int of_coresight_get_cpu(struct device *dev) 157 + { 158 + int cpu; 159 + struct device_node *dn; 160 + 161 + if (!dev->of_node) 162 + return -ENODEV; 163 + 164 + dn = of_parse_phandle(dev->of_node, "cpu", 0); 165 + if (!dn) 166 + return -ENODEV; 167 + 168 + cpu = of_cpu_node_to_id(dn); 169 + of_node_put(dn); 170 + 171 + return cpu; 172 + } 173 + 174 + /* 175 + * of_coresight_parse_endpoint : Parse the given output endpoint @ep 176 + * and fill the connection information in @conn 177 + * 178 + * Parses the local port, remote device name and the remote port. 179 + * 180 + * Returns : 181 + * 1 - If the parsing is successful and a connection record 182 + * was created for an output connection. 183 + * 0 - If the parsing completed without any fatal errors. 184 + * -Errno - Fatal error, abort the scanning.
185 + */ 186 + static int of_coresight_parse_endpoint(struct device *dev, 187 + struct device_node *ep, 188 + struct coresight_connection *conn) 189 + { 190 + int ret = 0; 191 + struct of_endpoint endpoint, rendpoint; 192 + struct device_node *rparent = NULL; 193 + struct device_node *rep = NULL; 194 + struct device *rdev = NULL; 195 + struct fwnode_handle *rdev_fwnode; 196 + 197 + do { 198 + /* Parse the local port details */ 199 + if (of_graph_parse_endpoint(ep, &endpoint)) 200 + break; 201 + /* 202 + * Get a handle on the remote endpoint and the device it is 203 + * attached to. 204 + */ 205 + rep = of_graph_get_remote_endpoint(ep); 206 + if (!rep) 207 + break; 208 + rparent = of_coresight_get_port_parent(rep); 209 + if (!rparent) 210 + break; 211 + if (of_graph_parse_endpoint(rep, &rendpoint)) 212 + break; 213 + 214 + rdev_fwnode = of_fwnode_handle(rparent); 215 + /* If the remote device is not available, defer probing */ 216 + rdev = coresight_find_device_by_fwnode(rdev_fwnode); 217 + if (!rdev) { 218 + ret = -EPROBE_DEFER; 219 + break; 220 + } 221 + 222 + conn->outport = endpoint.port; 223 + /* 224 + * Hold the refcount to the target device. This could be 225 + * released via: 226 + * 1) coresight_release_platform_data() if the probe fails or 227 + * this device is unregistered. 
228 + * 2) While removing the target device via 229 + * coresight_remove_match() 230 + */ 231 + conn->child_fwnode = fwnode_handle_get(rdev_fwnode); 232 + conn->child_port = rendpoint.port; 233 + /* Connection record updated */ 234 + ret = 1; 235 + } while (0); 236 + 237 + of_node_put(rparent); 238 + of_node_put(rep); 239 + put_device(rdev); 240 + 241 + return ret; 242 + } 243 + 244 + static int of_get_coresight_platform_data(struct device *dev, 245 + struct coresight_platform_data *pdata) 246 + { 247 + int ret = 0; 248 + struct coresight_connection *conn; 249 + struct device_node *ep = NULL; 250 + const struct device_node *parent = NULL; 251 + bool legacy_binding = false; 252 + struct device_node *node = dev->of_node; 253 + 254 + /* Get the number of input and output port for this component */ 255 + of_coresight_get_ports(node, &pdata->nr_inport, &pdata->nr_outport); 256 + 257 + /* If there are no output connections, we are done */ 258 + if (!pdata->nr_outport) 259 + return 0; 260 + 261 + ret = coresight_alloc_conns(dev, pdata); 262 + if (ret) 263 + return ret; 264 + 265 + parent = of_coresight_get_output_ports_node(node); 266 + /* 267 + * If the DT uses obsoleted bindings, the ports are listed 268 + * under the device and we need to filter out the input 269 + * ports. 270 + */ 271 + if (!parent) { 272 + legacy_binding = true; 273 + parent = node; 274 + dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n"); 275 + } 276 + 277 + conn = pdata->conns; 278 + 279 + /* Iterate through each output port to discover topology */ 280 + while ((ep = of_graph_get_next_endpoint(parent, ep))) { 281 + /* 282 + * Legacy binding mixes input/output ports under the 283 + * same parent. So, skip the input ports if we are dealing 284 + * with legacy binding, as they processed with their 285 + * connected output ports. 
286 + */ 287 + if (legacy_binding && of_coresight_legacy_ep_is_input(ep)) 288 + continue; 289 + 290 + ret = of_coresight_parse_endpoint(dev, ep, conn); 291 + switch (ret) { 292 + case 1: 293 + conn++; /* Fall through */ 294 + case 0: 295 + break; 296 + default: 297 + return ret; 298 + } 299 + } 300 + 301 + return 0; 302 + } 303 + #else 304 + static inline int 305 + of_get_coresight_platform_data(struct device *dev, 306 + struct coresight_platform_data *pdata) 307 + { 308 + return -ENOENT; 309 + } 310 + 311 + static inline int of_coresight_get_cpu(struct device *dev) 312 + { 313 + return -ENODEV; 314 + } 315 + #endif 316 + 317 + #ifdef CONFIG_ACPI 318 + 319 + #include <acpi/actypes.h> 320 + #include <acpi/processor.h> 321 + 322 + /* ACPI Graph _DSD UUID : "ab02a46b-74c7-45a2-bd68-f7d344ef2153" */ 323 + static const guid_t acpi_graph_uuid = GUID_INIT(0xab02a46b, 0x74c7, 0x45a2, 324 + 0xbd, 0x68, 0xf7, 0xd3, 325 + 0x44, 0xef, 0x21, 0x53); 326 + /* Coresight ACPI Graph UUID : "3ecbc8b6-1d0e-4fb3-8107-e627f805c6cd" */ 327 + static const guid_t coresight_graph_uuid = GUID_INIT(0x3ecbc8b6, 0x1d0e, 0x4fb3, 328 + 0x81, 0x07, 0xe6, 0x27, 329 + 0xf8, 0x05, 0xc6, 0xcd); 330 + #define ACPI_CORESIGHT_LINK_SLAVE 0 331 + #define ACPI_CORESIGHT_LINK_MASTER 1 332 + 333 + static inline bool is_acpi_guid(const union acpi_object *obj) 334 + { 335 + return (obj->type == ACPI_TYPE_BUFFER) && (obj->buffer.length == 16); 336 + } 337 + 338 + /* 339 + * acpi_guid_matches - Checks if the given object is a GUID object and 340 + * that it matches the supplied the GUID. 
341 + */ 342 + static inline bool acpi_guid_matches(const union acpi_object *obj, 343 + const guid_t *guid) 344 + { 345 + return is_acpi_guid(obj) && 346 + guid_equal((guid_t *)obj->buffer.pointer, guid); 347 + } 348 + 349 + static inline bool is_acpi_dsd_graph_guid(const union acpi_object *obj) 350 + { 351 + return acpi_guid_matches(obj, &acpi_graph_uuid); 352 + } 353 + 354 + static inline bool is_acpi_coresight_graph_guid(const union acpi_object *obj) 355 + { 356 + return acpi_guid_matches(obj, &coresight_graph_uuid); 357 + } 358 + 359 + static inline bool is_acpi_coresight_graph(const union acpi_object *obj) 360 + { 361 + const union acpi_object *graphid, *guid, *links; 362 + 363 + if (obj->type != ACPI_TYPE_PACKAGE || 364 + obj->package.count < 3) 365 + return false; 366 + 367 + graphid = &obj->package.elements[0]; 368 + guid = &obj->package.elements[1]; 369 + links = &obj->package.elements[2]; 370 + 371 + if (graphid->type != ACPI_TYPE_INTEGER || 372 + links->type != ACPI_TYPE_INTEGER) 373 + return false; 374 + 375 + return is_acpi_coresight_graph_guid(guid); 376 + } 377 + 378 + /* 379 + * acpi_validate_dsd_graph - Make sure the given _DSD graph conforms 380 + * to the ACPI _DSD Graph specification. 381 + * 382 + * ACPI Devices Graph property has the following format: 383 + * { 384 + * Revision - Integer, must be 0 385 + * NumberOfGraphs - Integer, N indicating the following list. 386 + * Graph[1], 387 + * ... 388 + * Graph[N] 389 + * } 390 + * 391 + * And each Graph entry has the following format: 392 + * { 393 + * GraphID - Integer, identifying a graph the device belongs to. 394 + * UUID - UUID identifying the specification that governs 395 + * this graph. (e.g, see is_acpi_coresight_graph()) 396 + * NumberOfLinks - Number "N" of connections on this node of the graph. 397 + * Links[1] 398 + * ... 
399 + * Links[N] 400 + * } 401 + * 402 + * Where each "Links" entry has the following format: 403 + * 404 + * { 405 + * SourcePortAddress - Integer 406 + * DestinationPortAddress - Integer 407 + * DestinationDeviceName - Reference to another device 408 + * ( --- CoreSight specific extensions below ---) 409 + * DirectionOfFlow - Integer 1 for output(master) 410 + * 0 for input(slave) 411 + * } 412 + * 413 + * e.g: 414 + * For a Funnel device 415 + * 416 + * Device(MFUN) { 417 + * ... 418 + * 419 + * Name (_DSD, Package() { 420 + * // DSD Package contains tuples of { Property_Type_UUID, Package() } 421 + * ToUUID("daffd814-6eba-4d8c-8a91-bc9bbf4aa301"), //Std. Property UUID 422 + * Package() { 423 + * Package(2) { "property-name", <property-value> } 424 + * }, 425 + * 426 + * ToUUID("ab02a46b-74c7-45a2-bd68-f7d344ef2153"), // ACPI Graph UUID 427 + * Package() { 428 + * 0, // Revision 429 + * 1, // NumberOfGraphs. 430 + * Package() { // Graph[0] Package 431 + * 1, // GraphID 432 + * // Coresight Graph UUID 433 + * ToUUID("3ecbc8b6-1d0e-4fb3-8107-e627f805c6cd"), 434 + * 3, // NumberOfLinks aka ports 435 + * // Link[0]: Output_0 -> Replicator:Input_0 436 + * Package () { 0, 0, \_SB_.RPL0, 1 }, 437 + * // Link[1]: Input_0 <- Cluster0_Funnel0:Output_0 438 + * Package () { 0, 0, \_SB_.CLU0.FUN0, 0 }, 439 + * // Link[2]: Input_1 <- Cluster1_Funnel0:Output_0 440 + * Package () { 1, 0, \_SB_.CLU1.FUN0, 0 }, 441 + * } // End of Graph[0] Package 442 + * 443 + * }, // End of ACPI Graph Property 444 + * }) 445 + */ 446 + static inline bool acpi_validate_dsd_graph(const union acpi_object *graph) 447 + { 448 + int i, n; 449 + const union acpi_object *rev, *nr_graphs; 450 + 451 + /* The graph must contain at least the Revision and Number of Graphs */ 452 + if (graph->package.count < 2) 453 + return false; 454 + 455 + rev = &graph->package.elements[0]; 456 + nr_graphs = &graph->package.elements[1]; 457 + 458 + if (rev->type != ACPI_TYPE_INTEGER ||
459 + nr_graphs->type != ACPI_TYPE_INTEGER) 460 + return false; 461 + 462 + /* We only support revision 0 */ 463 + if (rev->integer.value != 0) 464 + return false; 465 + 466 + n = nr_graphs->integer.value; 467 + /* CoreSight devices are only part of a single Graph */ 468 + if (n != 1) 469 + return false; 470 + 471 + /* Make sure the ACPI graph package has right number of elements */ 472 + if (graph->package.count != (n + 2)) 473 + return false; 474 + 475 + /* 476 + * Each entry must be a graph package with at least 3 members : 477 + * { GraphID, UUID, NumberOfLinks(n), Links[.],... } 478 + */ 479 + for (i = 2; i < n + 2; i++) { 480 + const union acpi_object *obj = &graph->package.elements[i]; 481 + 482 + if (obj->type != ACPI_TYPE_PACKAGE || 483 + obj->package.count < 3) 484 + return false; 485 + } 486 + 487 + return true; 488 + } 489 + 490 + /* acpi_get_dsd_graph - Find the _DSD Graph property for the given device. */ 491 + const union acpi_object * 492 + acpi_get_dsd_graph(struct acpi_device *adev) 493 + { 494 + int i; 495 + struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER }; 496 + acpi_status status; 497 + const union acpi_object *dsd; 498 + 499 + status = acpi_evaluate_object_typed(adev->handle, "_DSD", NULL, 500 + &buf, ACPI_TYPE_PACKAGE); 501 + if (ACPI_FAILURE(status)) 502 + return NULL; 503 + 504 + dsd = buf.pointer; 505 + 506 + /* 507 + * _DSD property consists of tuples { Prop_UUID, Package() } 508 + * Iterate through all the packages and find the Graph.
509 + */ 510 + for (i = 0; i + 1 < dsd->package.count; i += 2) { 511 + const union acpi_object *guid, *package; 512 + 513 + guid = &dsd->package.elements[i]; 514 + package = &dsd->package.elements[i + 1]; 515 + 516 + /* All _DSD elements must have a UUID and a Package */ 517 + if (!is_acpi_guid(guid) || package->type != ACPI_TYPE_PACKAGE) 518 + break; 519 + /* Skip the non-Graph _DSD packages */ 520 + if (!is_acpi_dsd_graph_guid(guid)) 521 + continue; 522 + if (acpi_validate_dsd_graph(package)) 523 + return package; 524 + /* Invalid graph format, continue */ 525 + dev_warn(&adev->dev, "Invalid Graph _DSD property\n"); 526 + } 527 + 528 + return NULL; 529 + } 530 + 531 + static inline bool 532 + acpi_validate_coresight_graph(const union acpi_object *cs_graph) 533 + { 534 + int nlinks; 535 + 536 + nlinks = cs_graph->package.elements[2].integer.value; 537 + /* 538 + * Graph must have the following fields : 539 + * { GraphID, GraphUUID, NumberOfLinks, Links... } 540 + */ 541 + if (cs_graph->package.count != (nlinks + 3)) 542 + return false; 543 + /* The links are validated in acpi_coresight_parse_link() */ 544 + return true; 545 + } 546 + 547 + /* 548 + * acpi_get_coresight_graph - Parse the device _DSD tables and find 549 + * the Graph property matching the CoreSight Graphs. 550 + * 551 + * Returns the pointer to the CoreSight Graph Package when found. Otherwise 552 + * returns NULL. 
553 + */ 554 + const union acpi_object * 555 + acpi_get_coresight_graph(struct acpi_device *adev) 556 + { 557 + const union acpi_object *graph_list, *graph; 558 + int i, nr_graphs; 559 + 560 + graph_list = acpi_get_dsd_graph(adev); 561 + if (!graph_list) 562 + return graph_list; 563 + 564 + nr_graphs = graph_list->package.elements[1].integer.value; 565 + 566 + for (i = 2; i < nr_graphs + 2; i++) { 567 + graph = &graph_list->package.elements[i]; 568 + if (!is_acpi_coresight_graph(graph)) 569 + continue; 570 + if (acpi_validate_coresight_graph(graph)) 571 + return graph; 572 + /* Invalid graph format */ 573 + break; 574 + } 575 + 576 + return NULL; 577 + } 578 + 579 + /* 580 + * acpi_coresight_parse_link - Parse the given Graph connection 581 + * of the device and populate the coresight_connection for an output 582 + * connection. 583 + * 584 + * CoreSight Graph specification mandates that the direction of the data 585 + * flow must be specified in the link. i.e, 586 + * 587 + * SourcePortAddress, // Integer 588 + * DestinationPortAddress, // Integer 589 + * DestinationDeviceName, // Reference to another device 590 + * DirectionOfFlow, // 1 for output(master), 0 for input(slave) 591 + * 592 + * Returns the direction of the data flow [ Input(slave) or Output(master) ] 593 + * upon success. 594 + * Returns an negative error number otherwise. 
595 + */ 596 + static int acpi_coresight_parse_link(struct acpi_device *adev, 597 + const union acpi_object *link, 598 + struct coresight_connection *conn) 599 + { 600 + int rc, dir; 601 + const union acpi_object *fields; 602 + struct acpi_device *r_adev; 603 + struct device *rdev; 604 + 605 + if (link->type != ACPI_TYPE_PACKAGE || 606 + link->package.count != 4) 607 + return -EINVAL; 608 + 609 + fields = link->package.elements; 610 + 611 + if (fields[0].type != ACPI_TYPE_INTEGER || 612 + fields[1].type != ACPI_TYPE_INTEGER || 613 + fields[2].type != ACPI_TYPE_LOCAL_REFERENCE || 614 + fields[3].type != ACPI_TYPE_INTEGER) 615 + return -EINVAL; 616 + 617 + rc = acpi_bus_get_device(fields[2].reference.handle, &r_adev); 618 + if (rc) 619 + return rc; 620 + 621 + dir = fields[3].integer.value; 622 + if (dir == ACPI_CORESIGHT_LINK_MASTER) { 623 + conn->outport = fields[0].integer.value; 624 + conn->child_port = fields[1].integer.value; 625 + rdev = coresight_find_device_by_fwnode(&r_adev->fwnode); 626 + if (!rdev) 627 + return -EPROBE_DEFER; 628 + /* 629 + * Hold the refcount to the target device. This could be 630 + * released via: 631 + * 1) coresight_release_platform_data() if the probe fails or 632 + * this device is unregistered. 633 + * 2) While removing the target device via 634 + * coresight_remove_match(). 635 + */ 636 + conn->child_fwnode = fwnode_handle_get(&r_adev->fwnode); 637 + } 638 + 639 + return dir; 640 + } 641 + 642 + /* 643 + * acpi_coresight_parse_graph - Parse the _DSD CoreSight graph 644 + * connection information and populate the supplied coresight_platform_data 645 + * instance. 
646 + */ 647 + static int acpi_coresight_parse_graph(struct acpi_device *adev, 648 + struct coresight_platform_data *pdata) 649 + { 650 + int rc, i, nlinks; 651 + const union acpi_object *graph; 652 + struct coresight_connection *conns, *ptr; 653 + 654 + pdata->nr_inport = pdata->nr_outport = 0; 655 + graph = acpi_get_coresight_graph(adev); 656 + if (!graph) 657 + return -ENOENT; 658 + 659 + nlinks = graph->package.elements[2].integer.value; 660 + if (!nlinks) 661 + return 0; 662 + 663 + /* 664 + * To avoid scanning the table twice (once for finding the number of 665 + * output links and then later for parsing the output links), 666 + * cache the links information in one go and then later copy 667 + * it to the pdata. 668 + */ 669 + conns = devm_kcalloc(&adev->dev, nlinks, sizeof(*conns), GFP_KERNEL); 670 + if (!conns) 671 + return -ENOMEM; 672 + ptr = conns; 673 + for (i = 0; i < nlinks; i++) { 674 + const union acpi_object *link = &graph->package.elements[3 + i]; 675 + int dir; 676 + 677 + dir = acpi_coresight_parse_link(adev, link, ptr); 678 + if (dir < 0) 679 + return dir; 680 + 681 + if (dir == ACPI_CORESIGHT_LINK_MASTER) { 682 + pdata->nr_outport++; 683 + ptr++; 684 + } else { 685 + pdata->nr_inport++; 686 + } 687 + } 688 + 689 + rc = coresight_alloc_conns(&adev->dev, pdata); 690 + if (rc) 691 + return rc; 692 + 693 + /* Copy the connection information to the final location */ 694 + for (i = 0; i < pdata->nr_outport; i++) 695 + pdata->conns[i] = conns[i]; 696 + 697 + devm_kfree(&adev->dev, conns); 698 + return 0; 699 + } 700 + 701 + /* 702 + * acpi_handle_to_logical_cpuid - Map a given acpi_handle to the 703 + * logical CPU id of the corresponding CPU device. 704 + * 705 + * Returns the logical CPU id when found. Otherwise returns >= nr_cpus_id. 
706 + */ 707 + static int 708 + acpi_handle_to_logical_cpuid(acpi_handle handle) 709 + { 710 + int i; 711 + struct acpi_processor *pr; 712 + 713 + for_each_possible_cpu(i) { 714 + pr = per_cpu(processors, i); 715 + if (pr && pr->handle == handle) 716 + break; 717 + } 718 + 719 + return i; 720 + } 721 + 722 + /* 723 + * acpi_coresight_get_cpu - Find the logical CPU id of the CPU associated 724 + * with this coresight device. With ACPI bindings, the CoreSight components 725 + * are listed as child device of the associated CPU. 726 + * 727 + * Returns the logical CPU id when found. Otherwise returns 0. 728 + */ 729 + static int acpi_coresight_get_cpu(struct device *dev) 730 + { 731 + int cpu; 732 + acpi_handle cpu_handle; 733 + acpi_status status; 734 + struct acpi_device *adev = ACPI_COMPANION(dev); 735 + 736 + if (!adev) 737 + return -ENODEV; 738 + status = acpi_get_parent(adev->handle, &cpu_handle); 739 + if (ACPI_FAILURE(status)) 740 + return -ENODEV; 741 + 742 + cpu = acpi_handle_to_logical_cpuid(cpu_handle); 743 + if (cpu >= nr_cpu_ids) 744 + return -ENODEV; 745 + return cpu; 746 + } 747 + 748 + static int 749 + acpi_get_coresight_platform_data(struct device *dev, 750 + struct coresight_platform_data *pdata) 751 + { 752 + struct acpi_device *adev; 753 + 754 + adev = ACPI_COMPANION(dev); 755 + if (!adev) 756 + return -EINVAL; 757 + 758 + return acpi_coresight_parse_graph(adev, pdata); 759 + } 760 + 761 + #else 762 + 763 + static inline int 764 + acpi_get_coresight_platform_data(struct device *dev, 765 + struct coresight_platform_data *pdata) 766 + { 767 + return -ENOENT; 768 + } 769 + 770 + static inline int acpi_coresight_get_cpu(struct device *dev) 771 + { 772 + return -ENODEV; 773 + } 774 + #endif 775 + 776 + int coresight_get_cpu(struct device *dev) 777 + { 778 + if (is_of_node(dev->fwnode)) 779 + return of_coresight_get_cpu(dev); 780 + else if (is_acpi_device_node(dev->fwnode)) 781 + return acpi_coresight_get_cpu(dev); 782 + return 0; 783 + }
784 + EXPORT_SYMBOL_GPL(coresight_get_cpu); 785 + 786 + struct coresight_platform_data * 787 + coresight_get_platform_data(struct device *dev) 788 + { 789 + int ret = -ENOENT; 790 + struct coresight_platform_data *pdata = NULL; 791 + struct fwnode_handle *fwnode = dev_fwnode(dev); 792 + 793 + if (IS_ERR_OR_NULL(fwnode)) 794 + goto error; 795 + 796 + pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL); 797 + if (!pdata) { 798 + ret = -ENOMEM; 799 + goto error; 800 + } 801 + 802 + if (is_of_node(fwnode)) 803 + ret = of_get_coresight_platform_data(dev, pdata); 804 + else if (is_acpi_device_node(fwnode)) 805 + ret = acpi_get_coresight_platform_data(dev, pdata); 806 + 807 + if (!ret) 808 + return pdata; 809 + error: 810 + if (!IS_ERR_OR_NULL(pdata)) 811 + /* Cleanup the connection information */ 812 + coresight_release_platform_data(pdata); 813 + return ERR_PTR(ret); 814 + } 815 + EXPORT_SYMBOL_GPL(coresight_get_platform_data);
+4
drivers/hwtracing/coresight/coresight-priv.h
··· 200 200 return 0; 201 201 } 202 202 203 + void coresight_release_platform_data(struct coresight_platform_data *pdata); 204 + 205 + int coresight_device_fwnode_match(struct device *dev, void *fwnode); 206 + 203 207 #endif
+28 -15
drivers/hwtracing/coresight/coresight-replicator.c
··· 5 5 * Description: CoreSight Replicator driver 6 6 */ 7 7 8 + #include <linux/acpi.h> 8 9 #include <linux/amba/bus.h> 9 10 #include <linux/kernel.h> 10 11 #include <linux/device.h> ··· 23 22 #define REPLICATOR_IDFILTER0 0x000 24 23 #define REPLICATOR_IDFILTER1 0x004 25 24 25 + DEFINE_CORESIGHT_DEVLIST(replicator_devs, "replicator"); 26 + 26 27 /** 27 28 * struct replicator_drvdata - specifics associated to a replicator component 28 29 * @base: memory mapped base address for this component. Also indicates 29 30 * whether this one is programmable or not. 30 - * @dev: the device entity associated with this component 31 31 * @atclk: optional clock for the core parts of the replicator. 32 32 * @csdev: component vitals needed by the framework 33 33 */ 34 34 struct replicator_drvdata { 35 35 void __iomem *base; 36 - struct device *dev; 37 36 struct clk *atclk; 38 37 struct coresight_device *csdev; 39 38 }; ··· 101 100 if (drvdata->base) 102 101 rc = dynamic_replicator_enable(drvdata, inport, outport); 103 102 if (!rc) 104 - dev_dbg(drvdata->dev, "REPLICATOR enabled\n"); 103 + dev_dbg(&csdev->dev, "REPLICATOR enabled\n"); 105 104 return rc; 106 105 } 107 106 ··· 140 139 141 140 if (drvdata->base) 142 141 dynamic_replicator_disable(drvdata, inport, outport); 143 - dev_dbg(drvdata->dev, "REPLICATOR disabled\n"); 142 + dev_dbg(&csdev->dev, "REPLICATOR disabled\n"); 144 143 } 145 144 146 145 static const struct coresight_ops_link replicator_link_ops = { ··· 180 179 struct coresight_platform_data *pdata = NULL; 181 180 struct replicator_drvdata *drvdata; 182 181 struct coresight_desc desc = { 0 }; 183 - struct device_node *np = dev->of_node; 184 182 void __iomem *base; 185 183 186 - if (np) { 187 - pdata = of_get_coresight_platform_data(dev, np); 188 - if (IS_ERR(pdata)) 189 - return PTR_ERR(pdata); 190 - dev->platform_data = pdata; 191 - } 192 - 193 - if (of_device_is_compatible(np, "arm,coresight-replicator")) 184 + if (is_of_node(dev_fwnode(dev)) && 185 + 
of_device_is_compatible(dev->of_node, "arm,coresight-replicator")) 194 186 pr_warn_once("Uses OBSOLETE CoreSight replicator binding\n"); 187 + 188 + desc.name = coresight_alloc_device_name(&replicator_devs, dev); 189 + if (!desc.name) 190 + return -ENOMEM; 195 191 196 192 drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); 197 193 if (!drvdata) 198 194 return -ENOMEM; 199 195 200 - drvdata->dev = dev; 201 196 drvdata->atclk = devm_clk_get(dev, "atclk"); /* optional */ 202 197 if (!IS_ERR(drvdata->atclk)) { 203 198 ret = clk_prepare_enable(drvdata->atclk); ··· 217 220 218 221 dev_set_drvdata(dev, drvdata); 219 222 223 + pdata = coresight_get_platform_data(dev); 224 + if (IS_ERR(pdata)) { 225 + ret = PTR_ERR(pdata); 226 + goto out_disable_clk; 227 + } 228 + dev->platform_data = pdata; 229 + 220 230 desc.type = CORESIGHT_DEV_TYPE_LINK; 221 231 desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_SPLIT; 222 232 desc.ops = &replicator_cs_ops; 223 233 desc.pdata = dev->platform_data; 224 234 desc.dev = dev; 235 + 225 236 drvdata->csdev = coresight_register(&desc); 226 237 if (IS_ERR(drvdata->csdev)) { 227 238 ret = PTR_ERR(drvdata->csdev); ··· 297 292 {} 298 293 }; 299 294 295 + #ifdef CONFIG_ACPI 296 + static const struct acpi_device_id static_replicator_acpi_ids[] = { 297 + {"ARMHC985", 0}, /* ARM CoreSight Static Replicator */ 298 + {} 299 + }; 300 + #endif 301 + 300 302 static struct platform_driver static_replicator_driver = { 301 303 .probe = static_replicator_probe, 302 304 .driver = { 303 305 .name = "coresight-static-replicator", 304 - .of_match_table = static_replicator_match, 306 + .of_match_table = of_match_ptr(static_replicator_match), 307 + .acpi_match_table = ACPI_PTR(static_replicator_acpi_ids), 305 308 .pm = &replicator_dev_pm_ops, 306 309 .suppress_bind_attrs = true, 307 310 },
+95 -23
drivers/hwtracing/coresight/coresight-stm.c
··· 16 16 * (C) 2015-2016 Chunyan Zhang <zhang.chunyan@linaro.org> 17 17 */ 18 18 #include <asm/local.h> 19 + #include <linux/acpi.h> 19 20 #include <linux/amba/bus.h> 20 21 #include <linux/bitmap.h> 21 22 #include <linux/clk.h> ··· 108 107 unsigned long *guaranteed; 109 108 }; 110 109 110 + DEFINE_CORESIGHT_DEVLIST(stm_devs, "stm"); 111 + 111 112 /** 112 113 * struct stm_drvdata - specifics associated to an STM component 113 114 * @base: memory mapped base address for this component. 114 - * @dev: the device entity associated to this component. 115 115 * @atclk: optional clock for the core parts of the STM. 116 116 * @csdev: component vitals needed by the framework. 117 117 * @spinlock: only one at a time pls. ··· 130 128 */ 131 129 struct stm_drvdata { 132 130 void __iomem *base; 133 - struct device *dev; 134 131 struct clk *atclk; 135 132 struct coresight_device *csdev; 136 133 spinlock_t spinlock; ··· 206 205 if (val) 207 206 return -EBUSY; 208 207 209 - pm_runtime_get_sync(drvdata->dev); 208 + pm_runtime_get_sync(csdev->dev.parent); 210 209 211 210 spin_lock(&drvdata->spinlock); 212 211 stm_enable_hw(drvdata); 213 212 spin_unlock(&drvdata->spinlock); 214 213 215 - dev_dbg(drvdata->dev, "STM tracing enabled\n"); 214 + dev_dbg(&csdev->dev, "STM tracing enabled\n"); 216 215 return 0; 217 216 } 218 217 ··· 272 271 /* Wait until the engine has completely stopped */ 273 272 coresight_timeout(drvdata->base, STMTCSR, STMTCSR_BUSY_BIT, 0); 274 273 275 - pm_runtime_put(drvdata->dev); 274 + pm_runtime_put(csdev->dev.parent); 276 275 277 276 local_set(&drvdata->mode, CS_MODE_DISABLED); 278 - dev_dbg(drvdata->dev, "STM tracing disabled\n"); 277 + dev_dbg(&csdev->dev, "STM tracing disabled\n"); 279 278 } 280 279 } 281 280 ··· 686 685 NULL, 687 686 }; 688 687 689 - static int stm_get_resource_byname(struct device_node *np, 690 - char *ch_base, struct resource *res) 688 + #ifdef CONFIG_OF 689 + static int of_stm_get_stimulus_area(struct device *dev, struct resource *res) 691 
690 { 692 691 const char *name = NULL; 693 692 int index = 0, found = 0; 693 + struct device_node *np = dev->of_node; 694 694 695 695 while (!of_property_read_string_index(np, "reg-names", index, &name)) { 696 - if (strcmp(ch_base, name)) { 696 + if (strcmp("stm-stimulus-base", name)) { 697 697 index++; 698 698 continue; 699 699 } ··· 708 706 return -EINVAL; 709 707 710 708 return of_address_to_resource(np, index, res); 709 + } 710 + #else 711 + static inline int of_stm_get_stimulus_area(struct device *dev, 712 + struct resource *res) 713 + { 714 + return -ENOENT; 715 + } 716 + #endif 717 + 718 + #ifdef CONFIG_ACPI 719 + static int acpi_stm_get_stimulus_area(struct device *dev, struct resource *res) 720 + { 721 + int rc; 722 + bool found_base = false; 723 + struct resource_entry *rent; 724 + LIST_HEAD(res_list); 725 + 726 + struct acpi_device *adev = ACPI_COMPANION(dev); 727 + 728 + if (!adev) 729 + return -ENODEV; 730 + rc = acpi_dev_get_resources(adev, &res_list, NULL, NULL); 731 + if (rc < 0) 732 + return rc; 733 + 734 + /* 735 + * The stimulus base for STM device must be listed as the second memory 736 + * resource, followed by the programming base address as described in 737 + * "Section 2.3 Resources" in ACPI for CoreSightTM 1.0 Platform Design 738 + * document (DEN0067). 
739 + */ 740 + rc = -ENOENT; 741 + list_for_each_entry(rent, &res_list, node) { 742 + if (resource_type(rent->res) != IORESOURCE_MEM) 743 + continue; 744 + if (found_base) { 745 + *res = *rent->res; 746 + rc = 0; 747 + break; 748 + } 749 + 750 + found_base = true; 751 + } 752 + 753 + acpi_dev_free_resource_list(&res_list); 754 + return rc; 755 + } 756 + #else 757 + static inline int acpi_stm_get_stimulus_area(struct device *dev, 758 + struct resource *res) 759 + { 760 + return -ENOENT; 761 + } 762 + #endif 763 + 764 + static int stm_get_stimulus_area(struct device *dev, struct resource *res) 765 + { 766 + struct fwnode_handle *fwnode = dev_fwnode(dev); 767 + 768 + if (is_of_node(fwnode)) 769 + return of_stm_get_stimulus_area(dev, res); 770 + else if (is_acpi_node(fwnode)) 771 + return acpi_stm_get_stimulus_area(dev, res); 772 + return -ENOENT; 711 773 } 712 774 713 775 static u32 stm_fundamental_data_size(struct stm_drvdata *drvdata) ··· 829 763 bitmap_clear(drvdata->chs.guaranteed, 0, drvdata->numsp); 830 764 } 831 765 832 - static void stm_init_generic_data(struct stm_drvdata *drvdata) 766 + static void stm_init_generic_data(struct stm_drvdata *drvdata, 767 + const char *name) 833 768 { 834 - drvdata->stm.name = dev_name(drvdata->dev); 769 + drvdata->stm.name = name; 835 770 836 771 /* 837 772 * MasterIDs are assigned at HW design phase. 
As such the core is ··· 862 795 struct resource ch_res; 863 796 size_t bitmap_size; 864 797 struct coresight_desc desc = { 0 }; 865 - struct device_node *np = adev->dev.of_node; 866 798 867 - if (np) { 868 - pdata = of_get_coresight_platform_data(dev, np); 869 - if (IS_ERR(pdata)) 870 - return PTR_ERR(pdata); 871 - adev->dev.platform_data = pdata; 872 - } 799 + desc.name = coresight_alloc_device_name(&stm_devs, dev); 800 + if (!desc.name) 801 + return -ENOMEM; 802 + 873 803 drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); 874 804 if (!drvdata) 875 805 return -ENOMEM; 876 806 877 - drvdata->dev = &adev->dev; 878 807 drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */ 879 808 if (!IS_ERR(drvdata->atclk)) { 880 809 ret = clk_prepare_enable(drvdata->atclk); ··· 884 821 return PTR_ERR(base); 885 822 drvdata->base = base; 886 823 887 - ret = stm_get_resource_byname(np, "stm-stimulus-base", &ch_res); 824 + ret = stm_get_stimulus_area(dev, &ch_res); 888 825 if (ret) 889 826 return ret; 890 827 drvdata->chs.phys = ch_res.start; ··· 911 848 spin_lock_init(&drvdata->spinlock); 912 849 913 850 stm_init_default_data(drvdata); 914 - stm_init_generic_data(drvdata); 851 + stm_init_generic_data(drvdata, desc.name); 915 852 916 853 if (stm_register_device(dev, &drvdata->stm, THIS_MODULE)) { 917 854 dev_info(dev, 918 - "stm_register_device failed, probing deferred\n"); 855 + "%s : stm_register_device failed, probing deferred\n", 856 + desc.name); 919 857 return -EPROBE_DEFER; 920 858 } 859 + 860 + pdata = coresight_get_platform_data(dev); 861 + if (IS_ERR(pdata)) { 862 + ret = PTR_ERR(pdata); 863 + goto stm_unregister; 864 + } 865 + adev->dev.platform_data = pdata; 921 866 922 867 desc.type = CORESIGHT_DEV_TYPE_SOURCE; 923 868 desc.subtype.source_subtype = CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE; ··· 941 870 942 871 pm_runtime_put(&adev->dev); 943 872 944 - dev_info(dev, "%s initialized\n", (char *)coresight_get_uci_data(id)); 873 + 
dev_info(&drvdata->csdev->dev, "%s initialized\n", 874 + (char *)coresight_get_uci_data(id)); 945 875 return 0; 946 876 947 877 stm_unregister:
+26 -17
drivers/hwtracing/coresight/coresight-tmc-etf.c
··· 280 280 u32 mode, void *data) 281 281 { 282 282 int ret; 283 - struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 284 283 285 284 switch (mode) { 286 285 case CS_MODE_SYSFS: ··· 297 298 if (ret) 298 299 return ret; 299 300 300 - dev_dbg(drvdata->dev, "TMC-ETB/ETF enabled\n"); 301 + dev_dbg(&csdev->dev, "TMC-ETB/ETF enabled\n"); 301 302 return 0; 302 303 } 303 304 ··· 327 328 328 329 spin_unlock_irqrestore(&drvdata->spinlock, flags); 329 330 330 - dev_dbg(drvdata->dev, "TMC-ETB/ETF disabled\n"); 331 + dev_dbg(&csdev->dev, "TMC-ETB/ETF disabled\n"); 331 332 return 0; 332 333 } 333 334 ··· 350 351 spin_unlock_irqrestore(&drvdata->spinlock, flags); 351 352 352 353 if (!ret) 353 - dev_dbg(drvdata->dev, "TMC-ETF enabled\n"); 354 + dev_dbg(&csdev->dev, "TMC-ETF enabled\n"); 354 355 return ret; 355 356 } 356 357 ··· 370 371 drvdata->mode = CS_MODE_DISABLED; 371 372 spin_unlock_irqrestore(&drvdata->spinlock, flags); 372 373 373 - dev_dbg(drvdata->dev, "TMC-ETF disabled\n"); 374 + dev_dbg(&csdev->dev, "TMC-ETF disabled\n"); 374 375 } 375 376 376 377 static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, 377 378 struct perf_event *event, void **pages, 378 379 int nr_pages, bool overwrite) 379 380 { 380 - int node, cpu = event->cpu; 381 + int node; 381 382 struct cs_buffers *buf; 382 383 383 - if (cpu == -1) 384 - cpu = smp_processor_id(); 385 - node = cpu_to_node(cpu); 384 + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu); 386 385 387 386 /* Allocate memory structure for interaction with Perf */ 388 387 buf = kzalloc_node(sizeof(struct cs_buffers), GFP_KERNEL, node); ··· 474 477 /* 475 478 * The TMC RAM buffer may be bigger than the space available in the 476 479 * perf ring buffer (handle->size). If so advance the RRP so that we 477 - * get the latest trace data. 480 + * get the latest trace data. In snapshot mode none of that matters 481 + * since we are expected to clobber stale data in favour of the latest 482 + * traces. 
478 483 */ 479 - if (to_read > handle->size) { 484 + if (!buf->snapshot && to_read > handle->size) { 480 485 u32 mask = 0; 481 486 482 487 /* ··· 515 516 lost = true; 516 517 } 517 518 518 - if (lost) 519 + /* 520 + * Don't set the TRUNCATED flag in snapshot mode because 1) the 521 + * captured buffer is expected to be truncated and 2) a full buffer 522 + * prevents the event from being re-enabled by the perf core, 523 + * resulting in stale data being sent to user space. 524 + */ 525 + if (!buf->snapshot && lost) 519 526 perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); 520 527 521 528 cur = buf->cur; ··· 547 542 } 548 543 } 549 544 550 - /* In snapshot mode we have to update the head */ 551 - if (buf->snapshot) { 552 - handle->head = (cur * PAGE_SIZE) + offset; 553 - to_read = buf->nr_pages << PAGE_SHIFT; 554 - } 545 + /* 546 + * In snapshot mode we simply increment the head by the number of bytes 547 + * that were written. User space function cs_etm_find_snapshot() will 548 + * figure out how many bytes to get from the AUX buffer based on the 549 + * position of the head. 550 + */ 551 + if (buf->snapshot) 552 + handle->head += to_read; 553 + 554 554 CS_LOCK(drvdata->base); 555 556 out: 556 557 spin_unlock_irqrestore(&drvdata->spinlock, flags);
+44 -36
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 162 162 struct device *dev, enum dma_data_direction dir) 163 163 { 164 164 int i; 165 + struct device *real_dev = dev->parent; 165 166 166 167 for (i = 0; i < tmc_pages->nr_pages; i++) { 167 168 if (tmc_pages->daddrs && tmc_pages->daddrs[i]) 168 - dma_unmap_page(dev, tmc_pages->daddrs[i], 169 + dma_unmap_page(real_dev, tmc_pages->daddrs[i], 169 170 PAGE_SIZE, dir); 170 171 if (tmc_pages->pages && tmc_pages->pages[i]) 171 172 __free_page(tmc_pages->pages[i]); ··· 194 193 int i, nr_pages; 195 194 dma_addr_t paddr; 196 195 struct page *page; 196 + struct device *real_dev = dev->parent; 197 197 198 198 nr_pages = tmc_pages->nr_pages; 199 199 tmc_pages->daddrs = kcalloc(nr_pages, sizeof(*tmc_pages->daddrs), ··· 218 216 page = alloc_pages_node(node, 219 217 GFP_KERNEL | __GFP_ZERO, 0); 220 218 } 221 - paddr = dma_map_page(dev, page, 0, PAGE_SIZE, dir); 222 - if (dma_mapping_error(dev, paddr)) 219 + paddr = dma_map_page(real_dev, page, 0, PAGE_SIZE, dir); 220 + if (dma_mapping_error(real_dev, paddr)) 223 221 goto err; 224 222 tmc_pages->daddrs[i] = paddr; 225 223 tmc_pages->pages[i] = page; ··· 306 304 * and data buffers. TMC writes to the data buffers and reads from the SG 307 305 * Table pages. 308 306 * 309 - * @dev - Device to which page should be DMA mapped. 307 + * @dev - Coresight device to which page should be DMA mapped. 310 308 * @node - Numa node for mem allocations 311 309 * @nr_tpages - Number of pages for the table entries. 312 310 * @nr_dpages - Number of pages for Data buffer. 
··· 350 348 { 351 349 int i, index, start; 352 350 int npages = DIV_ROUND_UP(size, PAGE_SIZE); 353 - struct device *dev = table->dev; 351 + struct device *real_dev = table->dev->parent; 354 352 struct tmc_pages *data = &table->data_pages; 355 353 356 354 start = offset >> PAGE_SHIFT; 357 355 for (i = start; i < (start + npages); i++) { 358 356 index = i % data->nr_pages; 359 - dma_sync_single_for_cpu(dev, data->daddrs[index], 357 + dma_sync_single_for_cpu(real_dev, data->daddrs[index], 360 358 PAGE_SIZE, DMA_FROM_DEVICE); 361 359 } 362 360 } ··· 365 363 void tmc_sg_table_sync_table(struct tmc_sg_table *sg_table) 366 364 { 367 365 int i; 368 - struct device *dev = sg_table->dev; 366 + struct device *real_dev = sg_table->dev->parent; 369 367 struct tmc_pages *table_pages = &sg_table->table_pages; 370 368 371 369 for (i = 0; i < table_pages->nr_pages; i++) 372 - dma_sync_single_for_device(dev, table_pages->daddrs[i], 370 + dma_sync_single_for_device(real_dev, table_pages->daddrs[i], 373 371 PAGE_SIZE, DMA_TO_DEVICE); 374 372 } 375 373 ··· 592 590 void **pages) 593 591 { 594 592 struct etr_flat_buf *flat_buf; 593 + struct device *real_dev = drvdata->csdev->dev.parent; 595 594 596 595 /* We cannot reuse existing pages for flat buf */ 597 596 if (pages) ··· 602 599 if (!flat_buf) 603 600 return -ENOMEM; 604 601 605 - flat_buf->vaddr = dma_alloc_coherent(drvdata->dev, etr_buf->size, 602 + flat_buf->vaddr = dma_alloc_coherent(real_dev, etr_buf->size, 606 603 &flat_buf->daddr, GFP_KERNEL); 607 604 if (!flat_buf->vaddr) { 608 605 kfree(flat_buf); ··· 610 607 } 611 608 612 609 flat_buf->size = etr_buf->size; 613 - flat_buf->dev = drvdata->dev; 610 + flat_buf->dev = &drvdata->csdev->dev; 614 611 etr_buf->hwaddr = flat_buf->daddr; 615 612 etr_buf->mode = ETR_MODE_FLAT; 616 613 etr_buf->private = flat_buf; ··· 621 618 { 622 619 struct etr_flat_buf *flat_buf = etr_buf->private; 623 620 624 - if (flat_buf && flat_buf->daddr) 625 - dma_free_coherent(flat_buf->dev, flat_buf->size, 
621 + if (flat_buf && flat_buf->daddr) { 622 + struct device *real_dev = flat_buf->dev->parent; 623 + 624 + dma_free_coherent(real_dev, flat_buf->size, 626 625 flat_buf->vaddr, flat_buf->daddr); 626 + } 627 627 kfree(flat_buf); 628 628 } 629 629 ··· 672 666 void **pages) 673 667 { 674 668 struct etr_sg_table *etr_table; 669 + struct device *dev = &drvdata->csdev->dev; 675 670 676 - etr_table = tmc_init_etr_sg_table(drvdata->dev, node, 671 + etr_table = tmc_init_etr_sg_table(dev, node, 677 672 etr_buf->size, pages); 678 673 if (IS_ERR(etr_table)) 679 674 return -ENOMEM; ··· 758 751 if (!IS_ENABLED(CONFIG_CORESIGHT_CATU)) 759 752 return NULL; 760 753 761 - for (i = 0; i < etr->nr_outport; i++) { 762 - tmp = etr->conns[i].child_dev; 754 + for (i = 0; i < etr->pdata->nr_outport; i++) { 755 + tmp = etr->pdata->conns[i].child_dev; 763 756 if (tmp && coresight_is_catu_device(tmp)) 764 757 return tmp; 765 758 } ··· 830 823 bool has_etr_sg, has_iommu; 831 824 bool has_sg, has_catu; 832 825 struct etr_buf *etr_buf; 826 + struct device *dev = &drvdata->csdev->dev; 833 827 834 828 has_etr_sg = tmc_etr_has_cap(drvdata, TMC_ETR_SG); 835 - has_iommu = iommu_get_domain_for_dev(drvdata->dev); 829 + has_iommu = iommu_get_domain_for_dev(dev->parent); 836 830 has_catu = !!tmc_etr_get_catu_device(drvdata); 837 831 838 832 has_sg = has_catu || has_etr_sg; ··· 871 863 return ERR_PTR(rc); 872 864 } 873 865 874 - dev_dbg(drvdata->dev, "allocated buffer of size %ldKB in mode %d\n", 866 + dev_dbg(dev, "allocated buffer of size %ldKB in mode %d\n", 875 867 (unsigned long)size >> 10, etr_buf->mode); 876 868 return etr_buf; 877 869 } ··· 1170 1162 tmc_etr_free_sysfs_buf(free_buf); 1171 1163 1172 1164 if (!ret) 1173 - dev_dbg(drvdata->dev, "TMC-ETR enabled\n"); 1165 + dev_dbg(&csdev->dev, "TMC-ETR enabled\n"); 1174 1166 1175 1167 return ret; 1176 1168 } ··· 1186 1178 alloc_etr_buf(struct tmc_drvdata *drvdata, struct perf_event *event, 1187 1179 int nr_pages, void **pages, bool snapshot) 1188 
1180 { 1189 - int node, cpu = event->cpu; 1181 + int node; 1190 1182 struct etr_buf *etr_buf; 1191 1183 unsigned long size; 1192 1184 1193 - if (cpu == -1) 1194 - cpu = smp_processor_id(); 1195 - node = cpu_to_node(cpu); 1196 - 1185 + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu); 1197 1186 /* 1198 1187 * Try to match the perf ring buffer size if it is larger 1199 1188 * than the size requested via sysfs. ··· 1322 1317 tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, struct perf_event *event, 1323 1318 int nr_pages, void **pages, bool snapshot) 1324 1319 { 1325 - int node, cpu = event->cpu; 1320 + int node; 1326 1321 struct etr_buf *etr_buf; 1327 1322 struct etr_perf_buffer *etr_perf; 1328 1323 1329 - if (cpu == -1) 1330 - cpu = smp_processor_id(); 1331 - node = cpu_to_node(cpu); 1324 + node = (event->cpu == -1) ? NUMA_NO_NODE : cpu_to_node(event->cpu); 1332 1325 1333 1326 etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node); 1334 1327 if (!etr_perf) ··· 1361 1358 etr_perf = tmc_etr_setup_perf_buf(drvdata, event, 1362 1359 nr_pages, pages, snapshot); 1363 1360 if (IS_ERR(etr_perf)) { 1364 - dev_dbg(drvdata->dev, "Unable to allocate ETR buffer\n"); 1361 + dev_dbg(&csdev->dev, "Unable to allocate ETR buffer\n"); 1365 1362 return NULL; 1366 1363 } 1367 1364 ··· 1504 1501 tmc_etr_sync_perf_buffer(etr_perf); 1505 1502 1506 1503 /* 1507 - * Update handle->head in snapshot mode. Also update the size to the 1508 - * hardware buffer size if there was an overflow. 1504 + * In snapshot mode we simply increment the head by the number of bytes 1505 + * that were written. User space function cs_etm_find_snapshot() will 1506 + * figure out how many bytes to get from the AUX buffer based on the 1507 + * position of the head. 
1509 1508 */ 1510 - if (etr_perf->snapshot) { 1509 + if (etr_perf->snapshot) 1511 1510 handle->head += size; 1512 - if (etr_buf->full) 1513 - size = etr_buf->size; 1514 - } 1515 1511 1516 1512 lost |= etr_buf->full; 1517 1513 out: 1518 - if (lost) 1514 + /* 1515 + * Don't set the TRUNCATED flag in snapshot mode because 1) the 1516 + * captured buffer is expected to be truncated and 2) a full buffer 1517 + * prevents the event from being re-enabled by the perf core, 1518 + * resulting in stale data being sent to user space. 1519 + */ 1520 + if (!etr_perf->snapshot && lost) 1519 1521 perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); 1520 1522 return size; 1521 1523 } ··· 1620 1612 1621 1613 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1622 1614 1623 - dev_dbg(drvdata->dev, "TMC-ETR disabled\n"); 1615 + dev_dbg(&csdev->dev, "TMC-ETR disabled\n"); 1624 1616 return 0; 1625 1617 } 1626 1618
+54 -42
drivers/hwtracing/coresight/coresight-tmc.c
··· 27 27 #include "coresight-priv.h" 28 28 #include "coresight-tmc.h" 29 29 30 + DEFINE_CORESIGHT_DEVLIST(etb_devs, "tmc_etb"); 31 + DEFINE_CORESIGHT_DEVLIST(etf_devs, "tmc_etf"); 32 + DEFINE_CORESIGHT_DEVLIST(etr_devs, "tmc_etr"); 33 + 30 34 void tmc_wait_for_tmcready(struct tmc_drvdata *drvdata) 31 35 { 32 36 /* Ensure formatter, unformatter and hardware fifo are empty */ 33 37 if (coresight_timeout(drvdata->base, 34 38 TMC_STS, TMC_STS_TMCREADY_BIT, 1)) { 35 - dev_err(drvdata->dev, 39 + dev_err(&drvdata->csdev->dev, 36 40 "timeout while waiting for TMC to be Ready\n"); 37 41 } 38 42 } ··· 53 49 /* Ensure flush completes */ 54 50 if (coresight_timeout(drvdata->base, 55 51 TMC_FFCR, TMC_FFCR_FLUSHMAN_BIT, 0)) { 56 - dev_err(drvdata->dev, 52 + dev_err(&drvdata->csdev->dev, 57 53 "timeout while waiting for completion of Manual Flush\n"); 58 54 } 59 55 ··· 87 83 } 88 84 89 85 if (!ret) 90 - dev_dbg(drvdata->dev, "TMC read start\n"); 86 + dev_dbg(&drvdata->csdev->dev, "TMC read start\n"); 91 87 92 88 return ret; 93 89 } ··· 109 105 } 110 106 111 107 if (!ret) 112 - dev_dbg(drvdata->dev, "TMC read end\n"); 108 + dev_dbg(&drvdata->csdev->dev, "TMC read end\n"); 113 109 114 110 return ret; 115 111 } ··· 126 122 127 123 nonseekable_open(inode, file); 128 124 129 - dev_dbg(drvdata->dev, "%s: successfully opened\n", __func__); 125 + dev_dbg(&drvdata->csdev->dev, "%s: successfully opened\n", __func__); 130 126 return 0; 131 127 } 132 128 ··· 156 152 return 0; 157 153 158 154 if (copy_to_user(data, bufp, actual)) { 159 - dev_dbg(drvdata->dev, "%s: copy_to_user failed\n", __func__); 155 + dev_dbg(&drvdata->csdev->dev, 156 + "%s: copy_to_user failed\n", __func__); 160 157 return -EFAULT; 161 158 } 162 159 163 160 *ppos += actual; 164 - dev_dbg(drvdata->dev, "%zu bytes copied\n", actual); 161 + dev_dbg(&drvdata->csdev->dev, "%zu bytes copied\n", actual); 165 162 166 163 return actual; 167 164 } ··· 177 172 if (ret) 178 173 return ret; 179 174 180 - dev_dbg(drvdata->dev, "%s: 
released\n", __func__); 175 + dev_dbg(&drvdata->csdev->dev, "%s: released\n", __func__); 181 176 return 0; 182 177 } 183 178 ··· 337 332 NULL, 338 333 }; 339 334 340 - static inline bool tmc_etr_can_use_sg(struct tmc_drvdata *drvdata) 335 + static inline bool tmc_etr_can_use_sg(struct device *dev) 341 336 { 342 - return fwnode_property_present(drvdata->dev->fwnode, 343 - "arm,scatter-gather"); 337 + return fwnode_property_present(dev->fwnode, "arm,scatter-gather"); 344 338 } 345 339 346 340 /* Detect and initialise the capabilities of a TMC ETR */ 347 - static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata, 348 - u32 devid, void *dev_caps) 341 + static int tmc_etr_setup_caps(struct device *parent, u32 devid, void *dev_caps) 349 342 { 350 343 int rc; 351 - 352 344 u32 dma_mask = 0; 345 + struct tmc_drvdata *drvdata = dev_get_drvdata(parent); 353 346 354 347 /* Set the unadvertised capabilities */ 355 348 tmc_etr_init_caps(drvdata, (u32)(unsigned long)dev_caps); 356 349 357 - if (!(devid & TMC_DEVID_NOSCAT) && tmc_etr_can_use_sg(drvdata)) 350 + if (!(devid & TMC_DEVID_NOSCAT) && tmc_etr_can_use_sg(parent)) 358 351 tmc_etr_set_cap(drvdata, TMC_ETR_SG); 359 352 360 353 /* Check if the AXI address width is available */ ··· 370 367 case 44: 371 368 case 48: 372 369 case 52: 373 - dev_info(drvdata->dev, "Detected dma mask %dbits\n", dma_mask); 370 + dev_info(parent, "Detected dma mask %dbits\n", dma_mask); 374 371 break; 375 372 default: 376 373 dma_mask = 40; 377 374 } 378 375 379 - rc = dma_set_mask_and_coherent(drvdata->dev, DMA_BIT_MASK(dma_mask)); 376 + rc = dma_set_mask_and_coherent(parent, DMA_BIT_MASK(dma_mask)); 380 377 if (rc) 381 - dev_err(drvdata->dev, "Failed to setup DMA mask: %d\n", rc); 378 + dev_err(parent, "Failed to setup DMA mask: %d\n", rc); 382 379 return rc; 380 + } 381 + 382 + static u32 tmc_etr_get_default_buffer_size(struct device *dev) 383 + { 384 + u32 size; 385 + 386 + if (fwnode_property_read_u32(dev->fwnode, "arm,buffer-size", &size)) 
387 + size = SZ_1M; 388 + return size; 383 389 } 384 390 385 391 static int tmc_probe(struct amba_device *adev, const struct amba_id *id) ··· 401 389 struct tmc_drvdata *drvdata; 402 390 struct resource *res = &adev->res; 403 391 struct coresight_desc desc = { 0 }; 404 - struct device_node *np = adev->dev.of_node; 405 - 406 - if (np) { 407 - pdata = of_get_coresight_platform_data(dev, np); 408 - if (IS_ERR(pdata)) { 409 - ret = PTR_ERR(pdata); 410 - goto out; 411 - } 412 - adev->dev.platform_data = pdata; 413 - } 392 + struct coresight_dev_list *dev_list = NULL; 414 393 415 394 ret = -ENOMEM; 416 395 drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); 417 396 if (!drvdata) 418 397 goto out; 419 398 420 - drvdata->dev = &adev->dev; 421 399 dev_set_drvdata(dev, drvdata); 422 400 423 401 /* Validity for the resource is already checked by the AMBA core */ ··· 427 425 /* This device is not associated with a session */ 428 426 drvdata->pid = -1; 429 427 430 - if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 431 - if (np) 432 - ret = of_property_read_u32(np, 433 - "arm,buffer-size", 434 - &drvdata->size); 435 - if (ret) 436 - drvdata->size = SZ_1M; 437 - } else { 428 + if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) 429 + drvdata->size = tmc_etr_get_default_buffer_size(dev); 430 + else 438 431 drvdata->size = readl_relaxed(drvdata->base + TMC_RSZ) * 4; 439 - } 440 432 441 - desc.pdata = pdata; 442 433 desc.dev = dev; 443 434 desc.groups = coresight_tmc_groups; 444 435 ··· 440 445 desc.type = CORESIGHT_DEV_TYPE_SINK; 441 446 desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER; 442 447 desc.ops = &tmc_etb_cs_ops; 448 + dev_list = &etb_devs; 443 449 break; 444 450 case TMC_CONFIG_TYPE_ETR: 445 451 desc.type = CORESIGHT_DEV_TYPE_SINK; 446 452 desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_BUFFER; 447 453 desc.ops = &tmc_etr_cs_ops; 448 - ret = tmc_etr_setup_caps(drvdata, devid, 454 + ret = tmc_etr_setup_caps(dev, devid, 449 455 
coresight_get_uci_data(id)); 450 456 if (ret) 451 457 goto out; 452 458 idr_init(&drvdata->idr); 453 459 mutex_init(&drvdata->idr_mutex); 460 + dev_list = &etr_devs; 454 461 break; 455 462 case TMC_CONFIG_TYPE_ETF: 456 463 desc.type = CORESIGHT_DEV_TYPE_LINKSINK; 457 464 desc.subtype.link_subtype = CORESIGHT_DEV_SUBTYPE_LINK_FIFO; 458 465 desc.ops = &tmc_etf_cs_ops; 466 + dev_list = &etf_devs; 459 467 break; 460 468 default: 461 - pr_err("%s: Unsupported TMC config\n", pdata->name); 469 + pr_err("%s: Unsupported TMC config\n", desc.name); 462 470 ret = -EINVAL; 463 471 goto out; 464 472 } 473 + 474 + desc.name = coresight_alloc_device_name(dev_list, dev); 475 + if (!desc.name) { 476 + ret = -ENOMEM; 477 + goto out; 478 + } 479 + 480 + pdata = coresight_get_platform_data(dev); 481 + if (IS_ERR(pdata)) { 482 + ret = PTR_ERR(pdata); 483 + goto out; 484 + } 485 + adev->dev.platform_data = pdata; 486 + desc.pdata = pdata; 465 487 466 488 drvdata->csdev = coresight_register(&desc); 467 489 if (IS_ERR(drvdata->csdev)) { ··· 486 474 goto out; 487 475 } 488 476 489 - drvdata->miscdev.name = pdata->name; 477 + drvdata->miscdev.name = desc.name; 490 478 drvdata->miscdev.minor = MISC_DYNAMIC_MINOR; 491 479 drvdata->miscdev.fops = &tmc_fops; 492 480 ret = misc_register(&drvdata->miscdev);
-2
drivers/hwtracing/coresight/coresight-tmc.h
··· 161 161 /** 162 162 * struct tmc_drvdata - specifics associated to an TMC component 163 163 * @base: memory mapped base address for this component. 164 - * @dev: the device entity associated to this component. 165 164 * @csdev: component vitals needed by the framework. 166 165 * @miscdev: specifics to handle "/dev/xyz.tmc" entry. 167 166 * @spinlock: only one at a time pls. ··· 183 184 */ 184 185 struct tmc_drvdata { 185 186 void __iomem *base; 186 - struct device *dev; 187 187 struct coresight_device *csdev; 188 188 struct miscdevice miscdev; 189 189 spinlock_t spinlock;
+12 -12
drivers/hwtracing/coresight/coresight-tpiu.c
···
 #define FFCR_FON_MAN		BIT(6)
 #define FFCR_STOP_FI		BIT(12)
 
+DEFINE_CORESIGHT_DEVLIST(tpiu_devs, "tpiu");
+
 /**
  * @base:	memory mapped base address for this component.
- * @dev:	the device entity associated to this component.
  * @atclk:	optional clock for the core parts of the TPIU.
  * @csdev:	component vitals needed by the framework.
  */
 struct tpiu_drvdata {
 	void __iomem		*base;
-	struct device		*dev;
 	struct clk		*atclk;
 	struct coresight_device	*csdev;
 };
···
 
 	tpiu_enable_hw(drvdata);
 	atomic_inc(csdev->refcnt);
-	dev_dbg(drvdata->dev, "TPIU enabled\n");
+	dev_dbg(&csdev->dev, "TPIU enabled\n");
 	return 0;
 }
···
 
 	tpiu_disable_hw(drvdata);
 
-	dev_dbg(drvdata->dev, "TPIU disabled\n");
+	dev_dbg(&csdev->dev, "TPIU disabled\n");
 	return 0;
 }
···
 	struct tpiu_drvdata *drvdata;
 	struct resource *res = &adev->res;
 	struct coresight_desc desc = { 0 };
-	struct device_node *np = adev->dev.of_node;
 
-	if (np) {
-		pdata = of_get_coresight_platform_data(dev, np);
-		if (IS_ERR(pdata))
-			return PTR_ERR(pdata);
-		adev->dev.platform_data = pdata;
-	}
+	desc.name = coresight_alloc_device_name(&tpiu_devs, dev);
+	if (!desc.name)
+		return -ENOMEM;
 
 	drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL);
 	if (!drvdata)
 		return -ENOMEM;
 
-	drvdata->dev = &adev->dev;
 	drvdata->atclk = devm_clk_get(&adev->dev, "atclk"); /* optional */
 	if (!IS_ERR(drvdata->atclk)) {
 		ret = clk_prepare_enable(drvdata->atclk);
···
 
 	/* Disable tpiu to support older devices */
 	tpiu_disable_hw(drvdata);
+
+	pdata = coresight_get_platform_data(dev);
+	if (IS_ERR(pdata))
+		return PTR_ERR(pdata);
+	dev->platform_data = pdata;
 
 	desc.type = CORESIGHT_DEV_TYPE_SINK;
 	desc.subtype.sink_subtype = CORESIGHT_DEV_SUBTYPE_SINK_PORT;
+131 -33
drivers/hwtracing/coresight/coresight.c
···
 	int i;
 	struct coresight_connection *conn;
 
-	for (i = 0; i < parent->nr_outport; i++) {
-		conn = &parent->conns[i];
+	for (i = 0; i < parent->pdata->nr_outport; i++) {
+		conn = &parent->pdata->conns[i];
 		if (conn->child_dev == csdev)
 			return conn->child_port;
 	}
···
 	int i;
 	struct coresight_connection *conn;
 
-	for (i = 0; i < csdev->nr_outport; i++) {
-		conn = &csdev->conns[i];
+	for (i = 0; i < csdev->pdata->nr_outport; i++) {
+		conn = &csdev->pdata->conns[i];
 		if (conn->child_dev == child)
 			return conn->outport;
 	}
···
 
 	if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_MERG) {
 		refport = inport;
-		nr_conns = csdev->nr_inport;
+		nr_conns = csdev->pdata->nr_inport;
 	} else if (link_subtype == CORESIGHT_DEV_SUBTYPE_LINK_SPLIT) {
 		refport = outport;
-		nr_conns = csdev->nr_outport;
+		nr_conns = csdev->pdata->nr_outport;
 	} else {
 		refport = 0;
 		nr_conns = 1;
···
 {
 	int i;
 
-	for (i = 0; i < csdev->nr_outport; i++) {
-		struct coresight_device *child = csdev->conns[i].child_dev;
+	for (i = 0; i < csdev->pdata->nr_outport; i++) {
+		struct coresight_device *child;
 
+		child = csdev->pdata->conns[i].child_dev;
 		if (child && child->type == CORESIGHT_DEV_TYPE_HELPER)
 			pm_runtime_get_sync(child->dev.parent);
 	}
···
 	int i;
 
 	pm_runtime_put(csdev->dev.parent);
-	for (i = 0; i < csdev->nr_outport; i++) {
-		struct coresight_device *child = csdev->conns[i].child_dev;
+	for (i = 0; i < csdev->pdata->nr_outport; i++) {
+		struct coresight_device *child;
 
+		child = csdev->pdata->conns[i].child_dev;
 		if (child && child->type == CORESIGHT_DEV_TYPE_HELPER)
 			pm_runtime_put(child->dev.parent);
 	}
···
 		goto out;
 
 	/* Not a sink - recursively explore each port found on this element */
-	for (i = 0; i < csdev->nr_outport; i++) {
-		struct coresight_device *child_dev = csdev->conns[i].child_dev;
+	for (i = 0; i < csdev->pdata->nr_outport; i++) {
+		struct coresight_device *child_dev;
 
+		child_dev = csdev->pdata->conns[i].child_dev;
 		if (child_dev &&
 		    _coresight_build_path(child_dev, sink, path) == 0) {
 			found = true;
···
 {
 	struct coresight_device *csdev = to_coresight_device(dev);
 
+	fwnode_handle_put(csdev->dev.fwnode);
 	kfree(csdev->refcnt);
 	kfree(csdev);
 }
···
 	 * Circle throuch all the connection of that component.  If we find
 	 * an orphan connection whose name matches @csdev, link it.
 	 */
-	for (i = 0; i < i_csdev->nr_outport; i++) {
-		conn = &i_csdev->conns[i];
+	for (i = 0; i < i_csdev->pdata->nr_outport; i++) {
+		conn = &i_csdev->pdata->conns[i];
 
 		/* We have found at least one orphan connection */
 		if (conn->child_dev == NULL) {
 			/* Does it match this newly added device? */
-			if (conn->child_name &&
-			    !strcmp(dev_name(&csdev->dev), conn->child_name)) {
+			if (conn->child_fwnode == csdev->dev.fwnode)
 				conn->child_dev = csdev;
-			} else {
+			else
 				/* This component still has an orphan */
 				still_orphan = true;
-			}
 		}
 	}
···
 {
 	int i;
 
-	for (i = 0; i < csdev->nr_outport; i++) {
-		struct coresight_connection *conn = &csdev->conns[i];
+	for (i = 0; i < csdev->pdata->nr_outport; i++) {
+		struct coresight_connection *conn = &csdev->pdata->conns[i];
 		struct device *dev = NULL;
 
-		if (conn->child_name)
-			dev = bus_find_device_by_name(&coresight_bustype, NULL,
-						      conn->child_name);
+		dev = bus_find_device(&coresight_bustype, NULL,
+				      (void *)conn->child_fwnode,
+				      coresight_device_fwnode_match);
 		if (dev) {
 			conn->child_dev = to_coresight_device(dev);
 			/* and put reference from 'bus_find_device()' */
···
 	 * Circle throuch all the connection of that component.  If we find
 	 * a connection whose name matches @csdev, remove it.
 	 */
-	for (i = 0; i < iterator->nr_outport; i++) {
-		conn = &iterator->conns[i];
+	for (i = 0; i < iterator->pdata->nr_outport; i++) {
+		conn = &iterator->pdata->conns[i];
 
 		if (conn->child_dev == NULL)
 			continue;
 
-		if (!strcmp(dev_name(&csdev->dev), conn->child_name)) {
+		if (csdev->dev.fwnode == conn->child_fwnode) {
 			iterator->orphan = true;
 			conn->child_dev = NULL;
+			/*
+			 * Drop the reference to the handle for the remote
+			 * device acquired in parsing the connections from
+			 * platform data.
+			 */
+			fwnode_handle_put(conn->child_fwnode);
 			/* No need to continue */
 			break;
 		}
···
 	return 0;
 }
 
+/*
+ * coresight_remove_conns - Remove references to this given devices
+ * from the connections of other devices.
+ */
 static void coresight_remove_conns(struct coresight_device *csdev)
 {
-	bus_for_each_dev(&coresight_bustype, NULL,
-			 csdev, coresight_remove_match);
+	/*
+	 * Another device will point to this device only if there is
+	 * an output port connected to this one. i.e, if the device
+	 * doesn't have at least one input port, there is no point
+	 * in searching all the devices.
+	 */
+	if (csdev->pdata->nr_inport)
+		bus_for_each_dev(&coresight_bustype, NULL,
+				 csdev, coresight_remove_match);
 }
···
 }
 postcore_initcall(coresight_init);
 
+/*
+ * coresight_release_platform_data: Release references to the devices connected
+ * to the output port of this device.
+ */
+void coresight_release_platform_data(struct coresight_platform_data *pdata)
+{
+	int i;
+
+	for (i = 0; i < pdata->nr_outport; i++) {
+		if (pdata->conns[i].child_fwnode) {
+			fwnode_handle_put(pdata->conns[i].child_fwnode);
+			pdata->conns[i].child_fwnode = NULL;
+		}
+	}
+}
+
 struct coresight_device *coresight_register(struct coresight_desc *desc)
 {
 	int ret;
···
 
 	csdev->refcnt = refcnts;
 
-	csdev->nr_inport = desc->pdata->nr_inport;
-	csdev->nr_outport = desc->pdata->nr_outport;
-
-	csdev->conns = desc->pdata->conns;
+	csdev->pdata = desc->pdata;
 
 	csdev->type = desc->type;
 	csdev->subtype = desc->subtype;
···
 	csdev->dev.parent = desc->dev;
 	csdev->dev.release = coresight_device_release;
 	csdev->dev.bus = &coresight_bustype;
-	dev_set_name(&csdev->dev, "%s", desc->pdata->name);
+	/*
+	 * Hold the reference to our parent device. This will be
+	 * dropped only in coresight_device_release().
+	 */
+	csdev->dev.fwnode = fwnode_handle_get(dev_fwnode(desc->dev));
+	dev_set_name(&csdev->dev, "%s", desc->name);
 
 	ret = device_register(&csdev->dev);
 	if (ret) {
···
 err_free_csdev:
 	kfree(csdev);
 err_out:
+	/* Cleanup the connection information */
+	coresight_release_platform_data(desc->pdata);
 	return ERR_PTR(ret);
 }
 EXPORT_SYMBOL_GPL(coresight_register);
···
 	etm_perf_del_symlink_sink(csdev);
 	/* Remove references of that device in the topology */
 	coresight_remove_conns(csdev);
+	coresight_release_platform_data(csdev->pdata);
 	device_unregister(&csdev->dev);
 }
 EXPORT_SYMBOL_GPL(coresight_unregister);
+
+
+/*
+ * coresight_search_device_idx - Search the fwnode handle of a device
+ * in the given dev_idx list. Must be called with the coresight_mutex held.
+ *
+ * Returns the index of the entry, when found. Otherwise, -ENOENT.
+ */
+static inline int coresight_search_device_idx(struct coresight_dev_list *dict,
+					      struct fwnode_handle *fwnode)
+{
+	int i;
+
+	for (i = 0; i < dict->nr_idx; i++)
+		if (dict->fwnode_list[i] == fwnode)
+			return i;
+	return -ENOENT;
+}
+
+/*
+ * coresight_alloc_device_name - Get an index for a given device in the
+ * device index list specific to a driver. An index is allocated for a
+ * device and is tracked with the fwnode_handle to prevent allocating
+ * duplicate indices for the same device (e.g, if we defer probing of
+ * a device due to dependencies), in case the index is requested again.
+ */
+char *coresight_alloc_device_name(struct coresight_dev_list *dict,
+				  struct device *dev)
+{
+	int idx;
+	char *name = NULL;
+	struct fwnode_handle **list;
+
+	mutex_lock(&coresight_mutex);
+
+	idx = coresight_search_device_idx(dict, dev_fwnode(dev));
+	if (idx < 0) {
+		/* Make space for the new entry */
+		idx = dict->nr_idx;
+		list = krealloc(dict->fwnode_list,
+				(idx + 1) * sizeof(*dict->fwnode_list),
+				GFP_KERNEL);
+		if (ZERO_OR_NULL_PTR(list)) {
+			idx = -ENOMEM;
+			goto done;
+		}
+
+		list[idx] = dev_fwnode(dev);
+		dict->fwnode_list = list;
+		dict->nr_idx = idx + 1;
+	}
+
+	name = devm_kasprintf(dev, GFP_KERNEL, "%s%d", dict->pfx, idx);
+done:
+	mutex_unlock(&coresight_mutex);
+	return name;
+}
+EXPORT_SYMBOL_GPL(coresight_alloc_device_name);
-297
drivers/hwtracing/coresight/of_coresight.c
···
-// SPDX-License-Identifier: GPL-2.0
-/*
- * Copyright (c) 2012, The Linux Foundation. All rights reserved.
- */
-
-#include <linux/types.h>
-#include <linux/err.h>
-#include <linux/slab.h>
-#include <linux/clk.h>
-#include <linux/of.h>
-#include <linux/of_address.h>
-#include <linux/of_graph.h>
-#include <linux/of_platform.h>
-#include <linux/platform_device.h>
-#include <linux/amba/bus.h>
-#include <linux/coresight.h>
-#include <linux/cpumask.h>
-#include <asm/smp_plat.h>
-
-
-static int of_dev_node_match(struct device *dev, void *data)
-{
-	return dev->of_node == data;
-}
-
-static struct device *
-of_coresight_get_endpoint_device(struct device_node *endpoint)
-{
-	struct device *dev = NULL;
-
-	/*
-	 * If we have a non-configurable replicator, it will be found on the
-	 * platform bus.
-	 */
-	dev = bus_find_device(&platform_bus_type, NULL,
-			      endpoint, of_dev_node_match);
-	if (dev)
-		return dev;
-
-	/*
-	 * We have a configurable component - circle through the AMBA bus
-	 * looking for the device that matches the endpoint node.
-	 */
-	return bus_find_device(&amba_bustype, NULL,
-			       endpoint, of_dev_node_match);
-}
-
-static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep)
-{
-	return of_property_read_bool(ep, "slave-mode");
-}
-
-static void of_coresight_get_ports_legacy(const struct device_node *node,
-					  int *nr_inport, int *nr_outport)
-{
-	struct device_node *ep = NULL;
-	int in = 0, out = 0;
-
-	do {
-		ep = of_graph_get_next_endpoint(node, ep);
-		if (!ep)
-			break;
-
-		if (of_coresight_legacy_ep_is_input(ep))
-			in++;
-		else
-			out++;
-
-	} while (ep);
-
-	*nr_inport = in;
-	*nr_outport = out;
-}
-
-static struct device_node *of_coresight_get_port_parent(struct device_node *ep)
-{
-	struct device_node *parent = of_graph_get_port_parent(ep);
-
-	/*
-	 * Skip one-level up to the real device node, if we
-	 * are using the new bindings.
-	 */
-	if (of_node_name_eq(parent, "in-ports") ||
-	    of_node_name_eq(parent, "out-ports"))
-		parent = of_get_next_parent(parent);
-
-	return parent;
-}
-
-static inline struct device_node *
-of_coresight_get_input_ports_node(const struct device_node *node)
-{
-	return of_get_child_by_name(node, "in-ports");
-}
-
-static inline struct device_node *
-of_coresight_get_output_ports_node(const struct device_node *node)
-{
-	return of_get_child_by_name(node, "out-ports");
-}
-
-static inline int
-of_coresight_count_ports(struct device_node *port_parent)
-{
-	int i = 0;
-	struct device_node *ep = NULL;
-
-	while ((ep = of_graph_get_next_endpoint(port_parent, ep)))
-		i++;
-	return i;
-}
-
-static void of_coresight_get_ports(const struct device_node *node,
-				   int *nr_inport, int *nr_outport)
-{
-	struct device_node *input_ports = NULL, *output_ports = NULL;
-
-	input_ports = of_coresight_get_input_ports_node(node);
-	output_ports = of_coresight_get_output_ports_node(node);
-
-	if (input_ports || output_ports) {
-		if (input_ports) {
-			*nr_inport = of_coresight_count_ports(input_ports);
-			of_node_put(input_ports);
-		}
-		if (output_ports) {
-			*nr_outport = of_coresight_count_ports(output_ports);
-			of_node_put(output_ports);
-		}
-	} else {
-		/* Fall back to legacy DT bindings parsing */
-		of_coresight_get_ports_legacy(node, nr_inport, nr_outport);
-	}
-}
-
-static int of_coresight_alloc_memory(struct device *dev,
-			struct coresight_platform_data *pdata)
-{
-	if (pdata->nr_outport) {
-		pdata->conns = devm_kzalloc(dev, pdata->nr_outport *
-					    sizeof(*pdata->conns),
-					    GFP_KERNEL);
-		if (!pdata->conns)
-			return -ENOMEM;
-	}
-
-	return 0;
-}
-
-int of_coresight_get_cpu(const struct device_node *node)
-{
-	int cpu;
-	struct device_node *dn;
-
-	dn = of_parse_phandle(node, "cpu", 0);
-	/* Affinity defaults to CPU0 */
-	if (!dn)
-		return 0;
-	cpu = of_cpu_node_to_id(dn);
-	of_node_put(dn);
-
-	/* Affinity to CPU0 if no cpu nodes are found */
-	return (cpu < 0) ? 0 : cpu;
-}
-EXPORT_SYMBOL_GPL(of_coresight_get_cpu);
-
-/*
- * of_coresight_parse_endpoint : Parse the given output endpoint @ep
- * and fill the connection information in @conn
- *
- * Parses the local port, remote device name and the remote port.
- *
- * Returns :
- *	 1	- If the parsing is successful and a connection record
- *		  was created for an output connection.
- *	 0	- If the parsing completed without any fatal errors.
- *	-Errno	- Fatal error, abort the scanning.
- */
-static int of_coresight_parse_endpoint(struct device *dev,
-				       struct device_node *ep,
-				       struct coresight_connection *conn)
-{
-	int ret = 0;
-	struct of_endpoint endpoint, rendpoint;
-	struct device_node *rparent = NULL;
-	struct device_node *rep = NULL;
-	struct device *rdev = NULL;
-
-	do {
-		/* Parse the local port details */
-		if (of_graph_parse_endpoint(ep, &endpoint))
-			break;
-		/*
-		 * Get a handle on the remote endpoint and the device it is
-		 * attached to.
-		 */
-		rep = of_graph_get_remote_endpoint(ep);
-		if (!rep)
-			break;
-		rparent = of_coresight_get_port_parent(rep);
-		if (!rparent)
-			break;
-		if (of_graph_parse_endpoint(rep, &rendpoint))
-			break;
-
-		/* If the remote device is not available, defer probing */
-		rdev = of_coresight_get_endpoint_device(rparent);
-		if (!rdev) {
-			ret = -EPROBE_DEFER;
-			break;
-		}
-
-		conn->outport = endpoint.port;
-		conn->child_name = devm_kstrdup(dev,
-						dev_name(rdev),
-						GFP_KERNEL);
-		conn->child_port = rendpoint.port;
-		/* Connection record updated */
-		ret = 1;
-	} while (0);
-
-	of_node_put(rparent);
-	of_node_put(rep);
-	put_device(rdev);
-
-	return ret;
-}
-
-struct coresight_platform_data *
-of_get_coresight_platform_data(struct device *dev,
-			       const struct device_node *node)
-{
-	int ret = 0;
-	struct coresight_platform_data *pdata;
-	struct coresight_connection *conn;
-	struct device_node *ep = NULL;
-	const struct device_node *parent = NULL;
-	bool legacy_binding = false;
-
-	pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL);
-	if (!pdata)
-		return ERR_PTR(-ENOMEM);
-
-	/* Use device name as sysfs handle */
-	pdata->name = dev_name(dev);
-	pdata->cpu = of_coresight_get_cpu(node);
-
-	/* Get the number of input and output port for this component */
-	of_coresight_get_ports(node, &pdata->nr_inport, &pdata->nr_outport);
-
-	/* If there are no output connections, we are done */
-	if (!pdata->nr_outport)
-		return pdata;
-
-	ret = of_coresight_alloc_memory(dev, pdata);
-	if (ret)
-		return ERR_PTR(ret);
-
-	parent = of_coresight_get_output_ports_node(node);
-	/*
-	 * If the DT uses obsoleted bindings, the ports are listed
-	 * under the device and we need to filter out the input
-	 * ports.
-	 */
-	if (!parent) {
-		legacy_binding = true;
-		parent = node;
-		dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n");
-	}
-
-	conn = pdata->conns;
-
-	/* Iterate through each output port to discover topology */
-	while ((ep = of_graph_get_next_endpoint(parent, ep))) {
-		/*
-		 * Legacy binding mixes input/output ports under the
-		 * same parent. So, skip the input ports if we are dealing
-		 * with legacy binding, as they processed with their
-		 * connected output ports.
-		 */
-		if (legacy_binding && of_coresight_legacy_ep_is_input(ep))
-			continue;
-
-		ret = of_coresight_parse_endpoint(dev, ep, conn);
-		switch (ret) {
-		case 1:
-			conn++;	/* Fall through */
-		case 0:
-			break;
-		default:
-			return ERR_PTR(ret);
-		}
-	}
-
-	return pdata;
-}
-EXPORT_SYMBOL_GPL(of_get_coresight_platform_data);
+105 -45
drivers/hwtracing/intel_th/msu.c
···
  * @entry:	window list linkage (msc::win_list)
  * @pgoff:	page offset into the buffer that this window starts at
  * @nr_blocks:	number of blocks (pages) in this window
+ * @nr_segs:	number of segments in this window (<= @nr_blocks)
+ * @_sgt:	array of block descriptors
  * @sgt:	array of block descriptors
  */
 struct msc_window {
 	struct list_head	entry;
 	unsigned long		pgoff;
 	unsigned int		nr_blocks;
+	unsigned int		nr_segs;
 	struct msc		*msc;
-	struct sg_table		sgt;
+	struct sg_table		_sgt;
+	struct sg_table		*sgt;
 };
···
 static inline struct msc_block_desc *
 msc_win_block(struct msc_window *win, unsigned int block)
 {
-	return sg_virt(&win->sgt.sgl[block]);
+	return sg_virt(&win->sgt->sgl[block]);
+}
+
+static inline size_t
+msc_win_actual_bsz(struct msc_window *win, unsigned int block)
+{
+	return win->sgt->sgl[block].length;
 }
 
 static inline dma_addr_t
 msc_win_baddr(struct msc_window *win, unsigned int block)
 {
-	return sg_dma_address(&win->sgt.sgl[block]);
+	return sg_dma_address(&win->sgt->sgl[block]);
 }
 
 static inline unsigned long
···
 }
 
 /**
- * msc_oldest_window() - locate the window with oldest data
+ * msc_find_window() - find a window matching a given sg_table
  * @msc:	MSC device
+ * @sgt:	SG table of the window
+ * @nonempty:	skip over empty windows
  *
- * This should only be used in multiblock mode. Caller should hold the
- * msc::user_count reference.
- *
- * Return: the oldest window with valid data
+ * Return: MSC window structure pointer or NULL if the window
+ * could not be found.
  */
-static struct msc_window *msc_oldest_window(struct msc *msc)
+static struct msc_window *
+msc_find_window(struct msc *msc, struct sg_table *sgt, bool nonempty)
 {
-	struct msc_window *win, *next = msc_next_window(msc->cur_win);
+	struct msc_window *win;
 	unsigned int found = 0;
 
 	if (list_empty(&msc->win_list))
···
 	 * something like 2, in which case we're good
 	 */
 	list_for_each_entry(win, &msc->win_list, entry) {
-		if (win == next)
+		if (win->sgt == sgt)
 			found++;
 
 		/* skip the empty ones */
-		if (msc_block_is_empty(msc_win_block(win, 0)))
+		if (nonempty && msc_block_is_empty(msc_win_block(win, 0)))
 			continue;
 
 		if (found)
 			return win;
 	}
+
+	return NULL;
+}
+
+/**
+ * msc_oldest_window() - locate the window with oldest data
+ * @msc:	MSC device
+ *
+ * This should only be used in multiblock mode. Caller should hold the
+ * msc::user_count reference.
+ *
+ * Return: the oldest window with valid data
+ */
+static struct msc_window *msc_oldest_window(struct msc *msc)
+{
+	struct msc_window *win;
+
+	if (list_empty(&msc->win_list))
+		return NULL;
+
+	win = msc_find_window(msc, msc_next_window(msc->cur_win)->sgt, true);
+	if (win)
+		return win;
 
 	return list_first_entry(&msc->win_list, struct msc_window, entry);
 }
···
 	 * with wrapping, last written block contains both the newest and the
 	 * oldest data for this window.
 	 */
-	for (blk = 0; blk < win->nr_blocks; blk++) {
+	for (blk = 0; blk < win->nr_segs; blk++) {
 		bdesc = msc_win_block(win, blk);
 
 		if (msc_block_last_written(bdesc))
···
 		return msc_iter_win_advance(iter);
 
 	/* block advance */
-	if (++iter->block == iter->win->nr_blocks)
+	if (++iter->block == iter->win->nr_segs)
 		iter->block = 0;
 
 	/* no wrapping, sanity check in case there is no last written block */
···
 	size_t hw_sz = sizeof(struct msc_block_desc) -
 		offsetof(struct msc_block_desc, hw_tag);
 
-	for (blk = 0; blk < win->nr_blocks; blk++) {
+	for (blk = 0; blk < win->nr_segs; blk++) {
 		struct msc_block_desc *bdesc = msc_win_block(win, blk);
 
 		memset(&bdesc->hw_tag, 0, hw_sz);
···
 		goto err_out;
 
 	ret = -ENOMEM;
-	page = alloc_pages(GFP_KERNEL | __GFP_ZERO, order);
+	page = alloc_pages(GFP_KERNEL | __GFP_ZERO | GFP_DMA32, order);
 	if (!page)
 		goto err_free_sgt;
···
 }
 
 static int __msc_buffer_win_alloc(struct msc_window *win,
-				  unsigned int nr_blocks)
+				  unsigned int nr_segs)
 {
 	struct scatterlist *sg_ptr;
 	void *block;
 	int i, ret;
 
-	ret = sg_alloc_table(&win->sgt, nr_blocks, GFP_KERNEL);
+	ret = sg_alloc_table(win->sgt, nr_segs, GFP_KERNEL);
 	if (ret)
 		return -ENOMEM;
 
-	for_each_sg(win->sgt.sgl, sg_ptr, nr_blocks, i) {
+	for_each_sg(win->sgt->sgl, sg_ptr, nr_segs, i) {
 		block = dma_alloc_coherent(msc_dev(win->msc)->parent->parent,
 					   PAGE_SIZE, &sg_dma_address(sg_ptr),
 					   GFP_KERNEL);
···
 		sg_set_buf(sg_ptr, block, PAGE_SIZE);
 	}
 
-	return nr_blocks;
+	return nr_segs;
 
 err_nomem:
 	for (i--; i >= 0; i--)
···
 				  msc_win_block(win, i),
 				  msc_win_baddr(win, i));
 
-	sg_free_table(&win->sgt);
+	sg_free_table(win->sgt);
 
 	return -ENOMEM;
 }
+
+#ifdef CONFIG_X86
+static void msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs)
+{
+	int i;
+
+	for (i = 0; i < nr_segs; i++)
+		/* Set the page as uncached */
+		set_memory_uc((unsigned long)msc_win_block(win, i), 1);
+}
+
+static void msc_buffer_set_wb(struct msc_window *win)
+{
+	int i;
+
+	for (i = 0; i < win->nr_segs; i++)
+		/* Reset the page to write-back */
+		set_memory_wb((unsigned long)msc_win_block(win, i), 1);
+}
+#else /* !X86 */
+static inline void
+msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs) {}
+static inline void msc_buffer_set_wb(struct msc_window *win) {}
+#endif /* CONFIG_X86 */
 
 /**
  * msc_buffer_win_alloc() - alloc a window for a multiblock mode
···
 static int msc_buffer_win_alloc(struct msc *msc, unsigned int nr_blocks)
 {
 	struct msc_window *win;
-	int ret = -ENOMEM, i;
+	int ret = -ENOMEM;
 
 	if (!nr_blocks)
 		return 0;
···
 		return -ENOMEM;
 
 	win->msc = msc;
+	win->sgt = &win->_sgt;
 
 	if (!list_empty(&msc->win_list)) {
 		struct msc_window *prev = list_last_entry(&msc->win_list,
 							  struct msc_window,
 							  entry);
 
-		/* This works as long as blocks are page-sized */
 		win->pgoff = prev->pgoff + prev->nr_blocks;
 	}
···
 	if (ret < 0)
 		goto err_nomem;
 
-#ifdef CONFIG_X86
-	for (i = 0; i < ret; i++)
-		/* Set the page as uncached */
-		set_memory_uc((unsigned long)msc_win_block(win, i), 1);
-#endif
+	msc_buffer_set_uc(win, ret);
 
-	win->nr_blocks = ret;
+	win->nr_segs = ret;
+	win->nr_blocks = nr_blocks;
 
 	if (list_empty(&msc->win_list)) {
 		msc->base = msc_win_block(win, 0);
···
 {
 	int i;
 
-	for (i = 0; i < win->nr_blocks; i++) {
-		struct page *page = sg_page(&win->sgt.sgl[i]);
+	for (i = 0; i < win->nr_segs; i++) {
+		struct page *page = sg_page(&win->sgt->sgl[i]);
 
 		page->mapping = NULL;
 		dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE,
 				  msc_win_block(win, i), msc_win_baddr(win, i));
 	}
-	sg_free_table(&win->sgt);
+	sg_free_table(win->sgt);
 }
 
 /**
···
  */
 static void msc_buffer_win_free(struct msc *msc, struct msc_window *win)
 {
-	int i;
-
 	msc->nr_pages -= win->nr_blocks;
 
 	list_del(&win->entry);
···
 		msc->base_addr = 0;
 	}
 
-#ifdef CONFIG_X86
-	for (i = 0; i < win->nr_blocks; i++)
-		/* Reset the page to write-back */
-		set_memory_wb((unsigned long)msc_win_block(win, i), 1);
-#endif
+	msc_buffer_set_wb(win);
 
 	__msc_buffer_win_free(msc, win);
···
 		next_win = list_next_entry(win, entry);
 	}
 
-	for (blk = 0; blk < win->nr_blocks; blk++) {
+	for (blk = 0; blk < win->nr_segs; blk++) {
 		struct msc_block_desc *bdesc = msc_win_block(win, blk);
 
 		memset(bdesc, 0, sizeof(*bdesc));
···
 		 * Similarly to last window, last block should point
 		 * to the first one.
 		 */
-		if (blk == win->nr_blocks - 1) {
+		if (blk == win->nr_segs - 1) {
 			sw_tag |= MSC_SW_TAG_LASTBLK;
 			bdesc->next_blk = msc_win_bpfn(win, 0);
 		} else {
···
 		}
 
 		bdesc->sw_tag = sw_tag;
-		bdesc->block_sz = PAGE_SIZE / 64;
+		bdesc->block_sz = msc_win_actual_bsz(win, blk) / 64;
 	}
 }
···
 static struct page *msc_buffer_get_page(struct msc *msc, unsigned long pgoff)
 {
 	struct msc_window *win;
+	unsigned int blk;
 
 	if (msc->mode == MSC_MODE_SINGLE)
 		return msc_buffer_contig_get_page(msc, pgoff);
···
 
 found:
 	pgoff -= win->pgoff;
-	return sg_page(&win->sgt.sgl[pgoff]);
+
+	for (blk = 0; blk < win->nr_segs; blk++) {
+		struct page *page = sg_page(&win->sgt->sgl[blk]);
+		size_t pgsz = PFN_DOWN(msc_win_actual_bsz(win, blk));
+
+		if (pgoff < pgsz)
+			return page + pgoff;
+
+		pgoff -= pgsz;
+	}
+
+	return NULL;
 }
 
 /**
···
 
 static void msc_win_switch(struct msc *msc)
 {
-	struct msc_window *last, *first;
+	struct msc_window *first;
 
 	first = list_first_entry(&msc->win_list, struct msc_window, entry);
-	last = list_last_entry(&msc->win_list, struct msc_window, entry);
 
 	if (msc_is_last_win(msc->cur_win))
 		msc->cur_win = first;
+5
drivers/hwtracing/intel_th/pci.c
···
 		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x02a6),
 		.driver_data = (kernel_ulong_t)&intel_th_2x,
 	},
+	{
+		/* Ice Lake NNPI */
+		PCI_DEVICE(PCI_VENDOR_ID_INTEL, 0x45c5),
+		.driver_data = (kernel_ulong_t)&intel_th_2x,
+	},
 	{ 0 },
 };
+1 -1
drivers/memory/Kconfig
···
 config JZ4780_NEMC
 	bool "Ingenic JZ4780 SoC NEMC driver"
 	default y
-	depends on MACH_JZ4780 || COMPILE_TEST
+	depends on MIPS || COMPILE_TEST
 	depends on HAS_IOMEM && OF
 	help
 	  This driver is for the NAND/External Memory Controller (NEMC) in
+22 -4
drivers/memory/jz4780-nemc.c
···
 #define NEMC_NFCSR_NFCEn(n)	BIT((((n) - 1) << 1) + 1)
 #define NEMC_NFCSR_TNFEn(n)	BIT(16 + (n) - 1)
 
+struct jz_soc_info {
+	u8 tas_tah_cycles_max;
+};
+
 struct jz4780_nemc {
 	spinlock_t lock;
 	struct device *dev;
+	const struct jz_soc_info *soc_info;
 	void __iomem *base;
 	struct clk *clk;
 	uint32_t clk_period;
···
  * Conversion of tBP and tAW cycle counts to values supported by the
  * hardware (round up to the next supported value).
  */
-static const uint32_t convert_tBP_tAW[] = {
+static const u8 convert_tBP_tAW[] = {
 	0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10,
 
 	/* 11 - 12 -> 12 cycles */
···
 	if (of_property_read_u32(node, "ingenic,nemc-tAS", &val) == 0) {
 		smcr &= ~NEMC_SMCR_TAS_MASK;
 		cycles = jz4780_nemc_ns_to_cycles(nemc, val);
-		if (cycles > 15) {
+		if (cycles > nemc->soc_info->tas_tah_cycles_max) {
 			dev_err(nemc->dev, "tAS %u is too high (%u cycles)\n",
 				val, cycles);
 			return false;
···
 	if (of_property_read_u32(node, "ingenic,nemc-tAH", &val) == 0) {
 		smcr &= ~NEMC_SMCR_TAH_MASK;
 		cycles = jz4780_nemc_ns_to_cycles(nemc, val);
-		if (cycles > 15) {
+		if (cycles > nemc->soc_info->tas_tah_cycles_max) {
 			dev_err(nemc->dev, "tAH %u is too high (%u cycles)\n",
 				val, cycles);
 			return false;
···
 	nemc = devm_kzalloc(dev, sizeof(*nemc), GFP_KERNEL);
 	if (!nemc)
 		return -ENOMEM;
+
+	nemc->soc_info = device_get_match_data(dev);
+	if (!nemc->soc_info)
+		return -EINVAL;
 
 	spin_lock_init(&nemc->lock);
 	nemc->dev = dev;
···
 	return 0;
 }
 
+static const struct jz_soc_info jz4740_soc_info = {
+	.tas_tah_cycles_max = 7,
+};
+
+static const struct jz_soc_info jz4780_soc_info = {
+	.tas_tah_cycles_max = 15,
+};
+
 static const struct of_device_id jz4780_nemc_dt_match[] = {
-	{ .compatible = "ingenic,jz4780-nemc" },
+	{ .compatible = "ingenic,jz4740-nemc", .data = &jz4740_soc_info, },
+	{ .compatible = "ingenic,jz4780-nemc", .data = &jz4780_soc_info, },
 	{},
 };
+13 -19
drivers/misc/Kconfig
···
 	tristate
 	depends on INPUT
 	select INPUT_POLLDEV
-	default n
 
 config AD525X_DPOT
 	tristate "Analog Devices Digital Potentiometers"
···
 config DUMMY_IRQ
 	tristate "Dummy IRQ handler"
-	default n
 	---help---
 	  This module accepts a single 'irq' parameter, which it should register for.
 	  The sole purpose of this module is to help with debugging of systems on
···
 config INTEL_MID_PTI
 	tristate "Parallel Trace Interface for MIPI P1149.7 cJTAG standard"
 	depends on PCI && TTY && (X86_INTEL_MID || COMPILE_TEST)
-	default n
 	help
 	  The PTI (Parallel Trace Interface) driver directs
 	  trace data routed from various parts in the system out
···
 config ENCLOSURE_SERVICES
 	tristate "Enclosure Services"
-	default n
 	help
 	  Provides support for intelligent enclosures (bays which
 	  contain storage devices). You also need either a host
···
 config CS5535_MFGPT
 	tristate "CS5535/CS5536 Geode Multi-Function General Purpose Timer (MFGPT) support"
 	depends on MFD_CS5535
-	default n
 	help
 	  This driver provides access to MFGPT functionality for other
 	  drivers that need timers. MFGPTs are available in the CS5535 and
···
 config HP_ILO
 	tristate "Channel interface driver for the HP iLO processor"
 	depends on PCI
-	default n
 	help
 	  The channel interface driver allows applications to communicate
 	  with iLO management processors present on HP ProLiant servers.
···
 config SGI_GRU
 	tristate "SGI GRU driver"
 	depends on X86_UV && SMP
-	default n
 	select MMU_NOTIFIER
 	---help---
 	  The GRU is a hardware resource located in the system chipset. The GRU
···
 config SGI_GRU_DEBUG
 	bool "SGI GRU driver debug"
 	depends on SGI_GRU
-	default n
 	---help---
 	  This option enables additional debugging code for the SGI GRU driver.
 	  If you are unsure, say N.
···
 config SENSORS_APDS990X
 	tristate "APDS990X combined als and proximity sensors"
 	depends on I2C
-	default n
 	---help---
 	  Say Y here if you want to build a driver for Avago APDS990x
 	  combined ambient light and proximity sensor chip.
···
 config SPEAR13XX_PCIE_GADGET
 	bool "PCIe gadget support for SPEAr13XX platform"
 	depends on ARCH_SPEAR13XX && BROKEN
-	default n
 	help
 	  This option enables gadget support for PCIe controller. If
 	  board file defines any controller as PCIe endpoint then a sysfs
···
 config VMWARE_BALLOON
 	tristate "VMware Balloon Driver"
 	depends on VMWARE_VMCI && X86 && HYPERVISOR_GUEST
+	select MEMORY_BALLOON
 	help
 	  This is VMware physical memory management driver which acts
 	  like a "balloon" that can be inflated to reclaim physical pages
···
 	  To compile this driver as a module, choose M here: the module will
 	  be called pch_phub.
-
-config USB_SWITCH_FSA9480
-	tristate "FSA9480 USB Switch"
-	depends on I2C
-	help
-	  The FSA9480 is a USB port accessory detector and switch.
-	  The FSA9480 is fully controlled using I2C and enables USB data,
-	  stereo and mono audio, video, microphone and UART data to use
-	  a common connector port.
 
 config LATTICE_ECP3_CONFIG
 	tristate "Lattice ECP3 FPGA bitstream configuration via SPI"
···
 	---help---
 	  Enable this configuration option to enable the host side test driver
 	  for PCI Endpoint.
+
+config XILINX_SDFEC
+	tristate "Xilinx SDFEC 16"
+	help
+	  This option enables support for the Xilinx SDFEC (Soft Decision
+	  Forward Error Correction) driver. This enables a char driver
+	  for the SDFEC.
+
+	  You may select this driver if your design instantiates the
+	  SDFEC(16nm) hardened block. To compile this as a module choose M.
+
+	  If unsure, say N.
 
 config MISC_RTSX
 	tristate
+1 -1
drivers/misc/Makefile
···
 obj-$(CONFIG_PCH_PHUB)		+= pch_phub.o
 obj-y				+= ti-st/
 obj-y				+= lis3lv02d/
-obj-$(CONFIG_USB_SWITCH_FSA9480) += fsa9480.o
 obj-$(CONFIG_ALTERA_STAPL)	+=altera-stapl/
 obj-$(CONFIG_INTEL_MEI)		+= mei/
 obj-$(CONFIG_VMWARE_VMCI)	+= vmw_vmci/
···
 obj-y				+= cardreader/
 obj-$(CONFIG_PVPANIC)		+= pvpanic.o
 obj-$(CONFIG_HABANA_AI)		+= habanalabs/
+obj-$(CONFIG_XILINX_SDFEC)	+= xilinx_sdfec.o
-1
drivers/misc/altera-stapl/Kconfig
···
 config ALTERA_STAPL
 	tristate "Altera FPGA firmware download module"
 	depends on I2C
-	default n
 	help
 	  An Altera FPGA module. Say Y when you want to support this tool.
-2
drivers/misc/c2port/Kconfig
···
 menuconfig C2PORT
 	tristate "Silicon Labs C2 port support"
-	default n
 	help
 	  This option enables support for Silicon Labs C2 port used to
 	  program Silicon micro controller chips (and other 8051 compatible).
···
 config C2PORT_DURAMAR_2150
 	tristate "C2 port support for Eurotech's Duramar 2150"
 	depends on X86
-	default n
 	help
 	  This option enables C2 support for the Eurotech's Duramar 2150
 	  on board micro controller.
-1
drivers/misc/cb710/Kconfig
···
 config CB710_DEBUG
 	bool "Enable driver debugging"
 	depends on CB710_CORE != n
-	default n
 	help
 	  This is an option for use by developers; most people should
 	  say N here. This adds a lot of debugging output to dmesg.
-3
drivers/misc/cxl/Kconfig
···
 config CXL_BASE
 	bool
-	default n
 	select PPC_COPRO_BASE
 
 config CXL_AFU_DRIVER_OPS
 	bool
-	default n
 
 config CXL_LIB
 	bool
-	default n
 
 config CXL
 	tristate "Support for IBM Coherent Accelerators (CXL)"
-1
drivers/misc/echo/Kconfig
···
 # SPDX-License-Identifier: GPL-2.0-only
 config ECHO
 	tristate "Line Echo Canceller support"
-	default n
 	---help---
 	  This driver provides line echo cancelling support for mISDN and
 	  Zaptel drivers.
+32 -11
drivers/misc/eeprom/ee1004.c
···
 /*
  * ee1004 - driver for DDR4 SPD EEPROMs
  *
- * Copyright (C) 2017 Jean Delvare
+ * Copyright (C) 2017-2019 Jean Delvare
  *
  * Based on the at24 driver:
  * Copyright (C) 2005-2007 David Brownell
···
 /*-------------------------------------------------------------------------*/
 
+static int ee1004_get_current_page(void)
+{
+	int err;
+
+	err = i2c_smbus_read_byte(ee1004_set_page[0]);
+	if (err == -ENXIO) {
+		/* Nack means page 1 is selected */
+		return 1;
+	}
+	if (err < 0) {
+		/* Anything else is a real error, bail out */
+		return err;
+	}
+
+	/* Ack means page 0 is selected, returned value meaningless */
+	return 0;
+}
+
 static ssize_t ee1004_eeprom_read(struct i2c_client *client, char *buf,
 				  unsigned int offset, size_t count)
 {
···
 		/* Data is ignored */
 		status = i2c_smbus_write_byte(ee1004_set_page[page],
 					      0x00);
+		if (status == -ENXIO) {
+			/*
+			 * Don't give up just yet. Some memory
+			 * modules will select the page but not
+			 * ack the command. Check which page is
+			 * selected now.
+			 */
+			if (ee1004_get_current_page() == page)
+				status = 0;
+		}
 		if (status < 0) {
 			dev_err(dev, "Failed to select page %d (%d)\n",
 				page, status);
···
 	}
 
 	/* Remember current page to avoid unneeded page select */
-	err = i2c_smbus_read_byte(ee1004_set_page[0]);
-	if (err == -ENXIO) {
-		/* Nack means page 1 is selected */
-		ee1004_current_page = 1;
-	} else if (err < 0) {
-		/* Anything else is a real error, bail out */
+	err = ee1004_get_current_page();
+	if (err < 0)
 		goto err_clients;
-	} else {
-		/* Ack means page 0 is selected, returned value meaningless */
-		ee1004_current_page = 0;
-	}
+	ee1004_current_page = err;
 	dev_dbg(&client->dev, "Currently selected page: %d\n",
 		ee1004_current_page);
 	mutex_unlock(&ee1004_bus_lock);
+2 -4
drivers/misc/eeprom/idt_89hpesx.c
···
  * @client:	i2c client used to perform IO operations
  *
  * @ee_file:	EEPROM read/write sysfs-file
- * @csr_file:	CSR read/write debugfs-node
  */
 struct idt_smb_seq;
 struct idt_89hpesx_dev {
···
 
 	struct bin_attribute *ee_file;
 	struct dentry *csr_dir;
-	struct dentry *csr_file;
 };
 
 /*
···
 	pdev->csr_dir = debugfs_create_dir(fname, csr_dbgdir);
 
 	/* Create Debugfs file for CSR read/write operations */
-	pdev->csr_file = debugfs_create_file(cli->name, 0600,
-		pdev->csr_dir, pdev, &csr_dbgfs_ops);
+	debugfs_create_file(cli->name, 0600, pdev->csr_dir, pdev,
+			    &csr_dbgfs_ops);
 }
 
 /*
-547
drivers/misc/fsa9480.c
 [entire file deleted: fsa9480.c, the Samsung FSA9480 micro USB switch I2C driver
  (SPDX header and copyright, register and bit definitions, manual-switch and
  device sysfs attributes, attach/detach interrupt handling, probe/remove, and
  PM suspend/resume hooks); all 547 lines removed along with its Kconfig and
  Makefile entries above]
-1
drivers/misc/genwqe/Kconfig
···
 	tristate "GenWQE PCIe Accelerator"
 	depends on PCI && 64BIT
 	select CRC_ITU_T
-	default n
 	help
 	  Enables PCIe card driver for IBM GenWQE accelerators.
 	  The user-space interface is described in
+1 -1
drivers/misc/habanalabs/asid.c
···
 
 	mutex_init(&hdev->asid_mutex);
 
-	/* ASID 0 is reserved for KMD */
+	/* ASID 0 is reserved for KMD and device CPU */
 	set_bit(0, hdev->asid_bitmap);
 
 	return 0;
+4 -6
drivers/misc/habanalabs/command_submission.c
···
 		u32 tmp;
 
 		rc = hl_poll_timeout_memory(hdev,
-			(u64) (uintptr_t) &ctx->thread_ctx_switch_wait_token,
-			jiffies_to_usecs(hdev->timeout_jiffies),
-			&tmp);
+			&ctx->thread_ctx_switch_wait_token, tmp, (tmp == 1),
+			100, jiffies_to_usecs(hdev->timeout_jiffies));
 
-		if (rc || !tmp) {
+		if (rc == -ETIMEDOUT) {
 			dev_err(hdev->dev,
-				"context switch phase didn't finish in time\n");
-			rc = -ETIMEDOUT;
+				"context switch phase timeout (%d)\n", tmp);
 			goto out;
 		}
 	}
+10 -1
drivers/misc/habanalabs/context.c
···
 	 * Coresight might be still working by accessing addresses
 	 * related to the stopped engines. Hence stop it explicitly.
 	 */
-		hdev->asic_funcs->halt_coresight(hdev);
+		if (hdev->in_debug)
+			hl_device_set_debug_mode(hdev, false);
+
 		hl_vm_ctx_fini(ctx);
 		hl_asid_free(hdev, ctx->asid);
+	} else {
+		hl_mmu_ctx_fini(ctx);
 	}
 }
···
 
 	if (is_kernel_ctx) {
 		ctx->asid = HL_KERNEL_ASID_ID; /* KMD gets ASID 0 */
+		rc = hl_mmu_ctx_init(ctx);
+		if (rc) {
+			dev_err(hdev->dev, "Failed to init mmu ctx module\n");
+			goto mem_ctx_err;
+		}
 	} else {
 		ctx->asid = hl_asid_alloc(hdev);
 		if (!ctx->asid) {
+41 -13
drivers/misc/habanalabs/debugfs.c
···
 	struct hl_debugfs_entry *entry = s->private;
 	struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
 	struct hl_device *hdev = dev_entry->hdev;
-	struct hl_ctx *ctx = hdev->user_ctx;
+	struct hl_ctx *ctx;
 
 	u64 hop0_addr = 0, hop0_pte_addr = 0, hop0_pte = 0,
 		hop1_addr = 0, hop1_pte_addr = 0, hop1_pte = 0,
···
 
 	if (!hdev->mmu_enable)
 		return 0;
+
+	if (dev_entry->mmu_asid == HL_KERNEL_ASID_ID)
+		ctx = hdev->kernel_ctx;
+	else
+		ctx = hdev->user_ctx;
 
 	if (!ctx) {
 		dev_err(hdev->dev, "no ctx available\n");
···
 	return -EINVAL;
 }
 
+static int engines_show(struct seq_file *s, void *data)
+{
+	struct hl_debugfs_entry *entry = s->private;
+	struct hl_dbg_device_entry *dev_entry = entry->dev_entry;
+	struct hl_device *hdev = dev_entry->hdev;
+
+	hdev->asic_funcs->is_device_idle(hdev, NULL, s);
+
+	return 0;
+}
+
+static bool hl_is_device_va(struct hl_device *hdev, u64 addr)
+{
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+
+	if (!hdev->mmu_enable)
+		goto out;
+
+	if (hdev->dram_supports_virtual_memory &&
+		addr >= prop->va_space_dram_start_address &&
+		addr < prop->va_space_dram_end_address)
+		return true;
+
+	if (addr >= prop->va_space_host_start_address &&
+		addr < prop->va_space_host_end_address)
+		return true;
+out:
+	return false;
+}
+
 static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
 				u64 *phys_addr)
 {
···
 {
 	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
 	struct hl_device *hdev = entry->hdev;
-	struct asic_fixed_properties *prop = &hdev->asic_prop;
 	char tmp_buf[32];
 	u64 addr = entry->addr;
 	u32 val;
···
 	if (*ppos)
 		return 0;
 
-	if (addr >= prop->va_space_dram_start_address &&
-			addr < prop->va_space_dram_end_address &&
-			hdev->mmu_enable &&
-			hdev->dram_supports_virtual_memory) {
-		rc = device_va_to_pa(hdev, entry->addr, &addr);
+	if (hl_is_device_va(hdev, addr)) {
+		rc = device_va_to_pa(hdev, addr, &addr);
 		if (rc)
 			return rc;
 	}
···
 {
 	struct hl_dbg_device_entry *entry = file_inode(f)->i_private;
 	struct hl_device *hdev = entry->hdev;
-	struct asic_fixed_properties *prop = &hdev->asic_prop;
 	u64 addr = entry->addr;
 	u32 value;
 	ssize_t rc;
···
 	if (rc)
 		return rc;
 
-	if (addr >= prop->va_space_dram_start_address &&
-			addr < prop->va_space_dram_end_address &&
-			hdev->mmu_enable &&
-			hdev->dram_supports_virtual_memory) {
-		rc = device_va_to_pa(hdev, entry->addr, &addr);
+	if (hl_is_device_va(hdev, addr)) {
+		rc = device_va_to_pa(hdev, addr, &addr);
 		if (rc)
 			return rc;
 	}
···
 	{"userptr", userptr_show, NULL},
 	{"vm", vm_show, NULL},
 	{"mmu", mmu_show, mmu_write},
+	{"engines", engines_show, NULL}
 };
 
 static int hl_debugfs_open(struct inode *inode, struct file *file)
+79 -110
drivers/misc/habanalabs/device.c
··· 231 231 232 232 mutex_init(&hdev->fd_open_cnt_lock); 233 233 mutex_init(&hdev->send_cpu_message_lock); 234 + mutex_init(&hdev->debug_lock); 234 235 mutex_init(&hdev->mmu_cache_lock); 235 236 INIT_LIST_HEAD(&hdev->hw_queues_mirror_list); 236 237 spin_lock_init(&hdev->hw_queues_mirror_lock); ··· 263 262 static void device_early_fini(struct hl_device *hdev) 264 263 { 265 264 mutex_destroy(&hdev->mmu_cache_lock); 265 + mutex_destroy(&hdev->debug_lock); 266 266 mutex_destroy(&hdev->send_cpu_message_lock); 267 267 268 268 hl_cb_mgr_fini(hdev, &hdev->kernel_cb_mgr); ··· 326 324 { 327 325 int rc; 328 326 329 - INIT_DELAYED_WORK(&hdev->work_freq, set_freq_to_low_job); 327 + if (hdev->asic_funcs->late_init) { 328 + rc = hdev->asic_funcs->late_init(hdev); 329 + if (rc) { 330 + dev_err(hdev->dev, 331 + "failed late initialization for the H/W\n"); 332 + return rc; 333 + } 334 + } 335 + 330 336 hdev->high_pll = hdev->asic_prop.high_pll; 331 337 332 338 /* force setting to low frequency */ ··· 345 335 else 346 336 hdev->asic_funcs->set_pll_profile(hdev, PLL_LAST); 347 337 348 - if (hdev->asic_funcs->late_init) { 349 - rc = hdev->asic_funcs->late_init(hdev); 350 - if (rc) { 351 - dev_err(hdev->dev, 352 - "failed late initialization for the H/W\n"); 353 - return rc; 354 - } 355 - } 356 - 338 + INIT_DELAYED_WORK(&hdev->work_freq, set_freq_to_low_job); 357 339 schedule_delayed_work(&hdev->work_freq, 358 - usecs_to_jiffies(HL_PLL_LOW_JOB_FREQ_USEC)); 340 + usecs_to_jiffies(HL_PLL_LOW_JOB_FREQ_USEC)); 359 341 360 342 if (hdev->heartbeat) { 361 343 INIT_DELAYED_WORK(&hdev->work_heartbeat, hl_device_heartbeat); ··· 420 418 hdev->asic_funcs->set_pll_profile(hdev, freq); 421 419 422 420 return 1; 421 + } 422 + 423 + int hl_device_set_debug_mode(struct hl_device *hdev, bool enable) 424 + { 425 + int rc = 0; 426 + 427 + mutex_lock(&hdev->debug_lock); 428 + 429 + if (!enable) { 430 + if (!hdev->in_debug) { 431 + dev_err(hdev->dev, 432 + "Failed to disable debug mode because device was not 
in debug mode\n"); 433 + rc = -EFAULT; 434 + goto out; 435 + } 436 + 437 + hdev->asic_funcs->halt_coresight(hdev); 438 + hdev->in_debug = 0; 439 + 440 + goto out; 441 + } 442 + 443 + if (hdev->in_debug) { 444 + dev_err(hdev->dev, 445 + "Failed to enable debug mode because device is already in debug mode\n"); 446 + rc = -EFAULT; 447 + goto out; 448 + } 449 + 450 + mutex_lock(&hdev->fd_open_cnt_lock); 451 + 452 + if (atomic_read(&hdev->fd_open_cnt) > 1) { 453 + dev_err(hdev->dev, 454 + "Failed to enable debug mode. More then a single user is using the device\n"); 455 + rc = -EPERM; 456 + goto unlock_fd_open_lock; 457 + } 458 + 459 + hdev->in_debug = 1; 460 + 461 + unlock_fd_open_lock: 462 + mutex_unlock(&hdev->fd_open_cnt_lock); 463 + out: 464 + mutex_unlock(&hdev->debug_lock); 465 + 466 + return rc; 423 467 } 424 468 425 469 /* ··· 695 647 696 648 hdev->hard_reset_pending = true; 697 649 698 - if (!hdev->pdev) { 699 - dev_err(hdev->dev, 700 - "Reset action is NOT supported in simulator\n"); 701 - rc = -EINVAL; 702 - goto out_err; 703 - } 704 - 705 650 device_reset_work = kzalloc(sizeof(*device_reset_work), 706 651 GFP_ATOMIC); 707 652 if (!device_reset_work) { ··· 745 704 746 705 if (hard_reset) { 747 706 hl_vm_fini(hdev); 707 + hl_mmu_fini(hdev); 748 708 hl_eq_reset(hdev, &hdev->event_queue); 749 709 } 750 710 ··· 770 728 dev_crit(hdev->dev, 771 729 "kernel ctx was alive during hard reset, something is terribly wrong\n"); 772 730 rc = -EBUSY; 731 + goto out_err; 732 + } 733 + 734 + rc = hl_mmu_init(hdev); 735 + if (rc) { 736 + dev_err(hdev->dev, 737 + "Failed to initialize MMU S/W after hard reset\n"); 773 738 goto out_err; 774 739 } 775 740 ··· 951 902 goto cq_fini; 952 903 } 953 904 905 + /* MMU S/W must be initialized before kernel context is created */ 906 + rc = hl_mmu_init(hdev); 907 + if (rc) { 908 + dev_err(hdev->dev, "Failed to initialize MMU S/W structures\n"); 909 + goto eq_fini; 910 + } 911 + 954 912 /* Allocate the kernel context */ 955 913 
hdev->kernel_ctx = kzalloc(sizeof(*hdev->kernel_ctx), GFP_KERNEL); 956 914 if (!hdev->kernel_ctx) { 957 915 rc = -ENOMEM; 958 - goto eq_fini; 916 + goto mmu_fini; 959 917 } 960 918 961 919 hdev->user_ctx = NULL; ··· 1010 954 goto out_disabled; 1011 955 } 1012 956 1013 - /* After test_queues, KMD can start sending messages to device CPU */ 1014 - 1015 957 rc = device_late_init(hdev); 1016 958 if (rc) { 1017 959 dev_err(hdev->dev, "Failed late initialization\n"); ··· 1055 1001 "kernel ctx is still alive on initialization failure\n"); 1056 1002 free_ctx: 1057 1003 kfree(hdev->kernel_ctx); 1004 + mmu_fini: 1005 + hl_mmu_fini(hdev); 1058 1006 eq_fini: 1059 1007 hl_eq_fini(hdev, &hdev->event_queue); 1060 1008 cq_fini: ··· 1161 1105 1162 1106 hl_vm_fini(hdev); 1163 1107 1108 + hl_mmu_fini(hdev); 1109 + 1164 1110 hl_eq_fini(hdev, &hdev->event_queue); 1165 1111 1166 1112 for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++) ··· 1181 1123 cdev_del(&hdev->cdev); 1182 1124 1183 1125 pr_info("removed device successfully\n"); 1184 - } 1185 - 1186 - /* 1187 - * hl_poll_timeout_memory - Periodically poll a host memory address 1188 - * until it is not zero or a timeout occurs 1189 - * @hdev: pointer to habanalabs device structure 1190 - * @addr: Address to poll 1191 - * @timeout_us: timeout in us 1192 - * @val: Variable to read the value into 1193 - * 1194 - * Returns 0 on success and -ETIMEDOUT upon a timeout. In either 1195 - * case, the last read value at @addr is stored in @val. Must not 1196 - * be called from atomic context if sleep_us or timeout_us are used. 1197 - * 1198 - * The function sleeps for 100us with timeout value of 1199 - * timeout_us 1200 - */ 1201 - int hl_poll_timeout_memory(struct hl_device *hdev, u64 addr, 1202 - u32 timeout_us, u32 *val) 1203 - { 1204 - /* 1205 - * address in this function points always to a memory location in the 1206 - * host's (server's) memory. 
That location is updated asynchronously 1207 - * either by the direct access of the device or by another core 1208 - */ 1209 - u32 *paddr = (u32 *) (uintptr_t) addr; 1210 - ktime_t timeout; 1211 - 1212 - /* timeout should be longer when working with simulator */ 1213 - if (!hdev->pdev) 1214 - timeout_us *= 10; 1215 - 1216 - timeout = ktime_add_us(ktime_get(), timeout_us); 1217 - 1218 - might_sleep(); 1219 - 1220 - for (;;) { 1221 - /* 1222 - * Flush CPU read/write buffers to make sure we read updates 1223 - * done by other cores or by the device 1224 - */ 1225 - mb(); 1226 - *val = *paddr; 1227 - if (*val) 1228 - break; 1229 - if (ktime_compare(ktime_get(), timeout) > 0) { 1230 - *val = *paddr; 1231 - break; 1232 - } 1233 - usleep_range((100 >> 2) + 1, 100); 1234 - } 1235 - 1236 - return *val ? 0 : -ETIMEDOUT; 1237 - } 1238 - 1239 - /* 1240 - * hl_poll_timeout_devicememory - Periodically poll a device memory address 1241 - * until it is not zero or a timeout occurs 1242 - * @hdev: pointer to habanalabs device structure 1243 - * @addr: Device address to poll 1244 - * @timeout_us: timeout in us 1245 - * @val: Variable to read the value into 1246 - * 1247 - * Returns 0 on success and -ETIMEDOUT upon a timeout. In either 1248 - * case, the last read value at @addr is stored in @val. Must not 1249 - * be called from atomic context if sleep_us or timeout_us are used. 1250 - * 1251 - * The function sleeps for 100us with timeout value of 1252 - * timeout_us 1253 - */ 1254 - int hl_poll_timeout_device_memory(struct hl_device *hdev, void __iomem *addr, 1255 - u32 timeout_us, u32 *val) 1256 - { 1257 - ktime_t timeout = ktime_add_us(ktime_get(), timeout_us); 1258 - 1259 - might_sleep(); 1260 - 1261 - for (;;) { 1262 - *val = readl(addr); 1263 - if (*val) 1264 - break; 1265 - if (ktime_compare(ktime_get(), timeout) > 0) { 1266 - *val = readl(addr); 1267 - break; 1268 - } 1269 - usleep_range((100 >> 2) + 1, 100); 1270 - } 1271 - 1272 - return *val ? 
0 : -ETIMEDOUT; 1273 1126 } 1274 1127 1275 1128 /*
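The device.c hunk above slots the new `hl_mmu_init()` call into the driver's goto-based unwind ladder: a failure at any init step jumps to the label (`mmu_fini`, `eq_fini`, `cq_fini`, …) that tears down exactly what was already brought up, in reverse order. A minimal userspace sketch of that pattern, with hypothetical subsystem names standing in for the driver's real init/fini pairs:

```c
#include <stdbool.h>

/* Hypothetical init/teardown pairs standing in for hl_eq_init()/hl_mmu_init()
 * etc.; each records whether it is currently up so the unwind can be checked. */
static bool mmu_up, eq_up;

static int  eq_init(void)  { eq_up = true;  return 0; }
static void eq_fini(void)  { eq_up = false; }
static int  mmu_init(void) { mmu_up = true; return 0; }
static void mmu_fini(void) { mmu_up = false; }
static int  ctx_init(bool fail) { return fail ? -1 : 0; }

/* Unwind ladder: each failure jumps to the label that tears down everything
 * initialized so far, in reverse order of initialization. */
static int device_init(bool fail_ctx)
{
	int rc;

	rc = eq_init();
	if (rc)
		goto out;

	/* MMU S/W must come up before the kernel context is created */
	rc = mmu_init();
	if (rc)
		goto eq_teardown;

	rc = ctx_init(fail_ctx);
	if (rc)
		goto mmu_teardown;

	return 0;

mmu_teardown:
	mmu_fini();
eq_teardown:
	eq_fini();
out:
	return rc;
}
```

Adding a step in the middle of the sequence, as this patch does with the MMU, means adding one matching label in the middle of the ladder; nothing else changes.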
+17 -34
drivers/misc/habanalabs/firmware_if.c
··· 29 29 30 30 rc = request_firmware(&fw, fw_name, hdev->dev); 31 31 if (rc) { 32 - dev_err(hdev->dev, "Failed to request %s\n", fw_name); 32 + dev_err(hdev->dev, "Firmware file %s is not found!\n", fw_name); 33 33 goto out; 34 34 } 35 35 36 36 fw_size = fw->size; 37 37 if ((fw_size % 4) != 0) { 38 - dev_err(hdev->dev, "illegal %s firmware size %zu\n", 38 + dev_err(hdev->dev, "Illegal %s firmware size %zu\n", 39 39 fw_name, fw_size); 40 40 rc = -EINVAL; 41 41 goto out; ··· 85 85 u32 tmp; 86 86 int rc = 0; 87 87 88 - if (len > HL_CPU_CB_SIZE) { 89 - dev_err(hdev->dev, "Invalid CPU message size of %d bytes\n", 90 - len); 91 - return -ENOMEM; 92 - } 93 - 94 88 pkt = hdev->asic_funcs->cpu_accessible_dma_pool_alloc(hdev, len, 95 89 &pkt_dma_addr); 96 90 if (!pkt) { ··· 111 117 goto out; 112 118 } 113 119 114 - rc = hl_poll_timeout_memory(hdev, (u64) (uintptr_t) &pkt->fence, 115 - timeout, &tmp); 120 + rc = hl_poll_timeout_memory(hdev, &pkt->fence, tmp, 121 + (tmp == ARMCP_PACKET_FENCE_VAL), 1000, timeout); 116 122 117 123 hl_hw_queue_inc_ci_kernel(hdev, hw_queue_id); 118 124 119 125 if (rc == -ETIMEDOUT) { 120 - dev_err(hdev->dev, "Timeout while waiting for device CPU\n"); 126 + dev_err(hdev->dev, "Device CPU packet timeout (0x%x)\n", tmp); 121 127 hdev->device_cpu_disabled = true; 122 128 goto out; 123 129 } 124 130 125 - if (tmp == ARMCP_PACKET_FENCE_VAL) { 126 - u32 ctl = le32_to_cpu(pkt->ctl); 131 + tmp = le32_to_cpu(pkt->ctl); 127 132 128 - rc = (ctl & ARMCP_PKT_CTL_RC_MASK) >> ARMCP_PKT_CTL_RC_SHIFT; 129 - if (rc) { 130 - dev_err(hdev->dev, 131 - "F/W ERROR %d for CPU packet %d\n", 132 - rc, (ctl & ARMCP_PKT_CTL_OPCODE_MASK) 133 + rc = (tmp & ARMCP_PKT_CTL_RC_MASK) >> ARMCP_PKT_CTL_RC_SHIFT; 134 + if (rc) { 135 + dev_err(hdev->dev, "F/W ERROR %d for CPU packet %d\n", 136 + rc, 137 + (tmp & ARMCP_PKT_CTL_OPCODE_MASK) 133 138 >> ARMCP_PKT_CTL_OPCODE_SHIFT); 134 - rc = -EINVAL; 135 - } else if (result) { 136 - *result = (long) le64_to_cpu(pkt->result); 137 - } 138 - 
} else { 139 - dev_err(hdev->dev, "CPU packet wrong fence value\n"); 140 - rc = -EINVAL; 139 + rc = -EIO; 140 + } else if (result) { 141 + *result = (long) le64_to_cpu(pkt->result); 141 142 } 142 143 143 144 out: ··· 175 186 { 176 187 u64 kernel_addr; 177 188 178 - /* roundup to HL_CPU_PKT_SIZE */ 179 - size = (size + (HL_CPU_PKT_SIZE - 1)) & HL_CPU_PKT_MASK; 180 - 181 189 kernel_addr = gen_pool_alloc(hdev->cpu_accessible_dma_pool, size); 182 190 183 191 *dma_handle = hdev->cpu_accessible_dma_address + ··· 186 200 void hl_fw_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, 187 201 void *vaddr) 188 202 { 189 - /* roundup to HL_CPU_PKT_SIZE */ 190 - size = (size + (HL_CPU_PKT_SIZE - 1)) & HL_CPU_PKT_MASK; 191 - 192 203 gen_pool_free(hdev->cpu_accessible_dma_pool, (u64) (uintptr_t) vaddr, 193 204 size); 194 205 } ··· 239 256 HL_ARMCP_INFO_TIMEOUT_USEC, &result); 240 257 if (rc) { 241 258 dev_err(hdev->dev, 242 - "Failed to send armcp info pkt, error %d\n", rc); 259 + "Failed to send ArmCP info pkt, error %d\n", rc); 243 260 goto out; 244 261 } 245 262 ··· 274 291 max_size, &eeprom_info_dma_addr); 275 292 if (!eeprom_info_cpu_addr) { 276 293 dev_err(hdev->dev, 277 - "Failed to allocate DMA memory for EEPROM info packet\n"); 294 + "Failed to allocate DMA memory for ArmCP EEPROM packet\n"); 278 295 return -ENOMEM; 279 296 } 280 297 ··· 290 307 291 308 if (rc) { 292 309 dev_err(hdev->dev, 293 - "Failed to send armcp EEPROM pkt, error %d\n", rc); 310 + "Failed to send ArmCP EEPROM packet, error %d\n", rc); 294 311 goto out; 295 312 } 296 313
+442 -179
drivers/misc/habanalabs/goya/goya.c
··· 14 14 #include <linux/genalloc.h> 15 15 #include <linux/hwmon.h> 16 16 #include <linux/io-64-nonatomic-lo-hi.h> 17 + #include <linux/iommu.h> 18 + #include <linux/seq_file.h> 17 19 18 20 /* 19 21 * GOYA security scheme: ··· 90 88 91 89 #define GOYA_CB_POOL_CB_CNT 512 92 90 #define GOYA_CB_POOL_CB_SIZE 0x20000 /* 128KB */ 91 + 92 + #define IS_QM_IDLE(engine, qm_glbl_sts0) \ 93 + (((qm_glbl_sts0) & engine##_QM_IDLE_MASK) == engine##_QM_IDLE_MASK) 94 + #define IS_DMA_QM_IDLE(qm_glbl_sts0) IS_QM_IDLE(DMA, qm_glbl_sts0) 95 + #define IS_TPC_QM_IDLE(qm_glbl_sts0) IS_QM_IDLE(TPC, qm_glbl_sts0) 96 + #define IS_MME_QM_IDLE(qm_glbl_sts0) IS_QM_IDLE(MME, qm_glbl_sts0) 97 + 98 + #define IS_CMDQ_IDLE(engine, cmdq_glbl_sts0) \ 99 + (((cmdq_glbl_sts0) & engine##_CMDQ_IDLE_MASK) == \ 100 + engine##_CMDQ_IDLE_MASK) 101 + #define IS_TPC_CMDQ_IDLE(cmdq_glbl_sts0) \ 102 + IS_CMDQ_IDLE(TPC, cmdq_glbl_sts0) 103 + #define IS_MME_CMDQ_IDLE(cmdq_glbl_sts0) \ 104 + IS_CMDQ_IDLE(MME, cmdq_glbl_sts0) 105 + 106 + #define IS_DMA_IDLE(dma_core_sts0) \ 107 + !((dma_core_sts0) & DMA_CH_0_STS0_DMA_BUSY_MASK) 108 + 109 + #define IS_TPC_IDLE(tpc_cfg_sts) \ 110 + (((tpc_cfg_sts) & TPC_CFG_IDLE_MASK) == TPC_CFG_IDLE_MASK) 111 + 112 + #define IS_MME_IDLE(mme_arch_sts) \ 113 + (((mme_arch_sts) & MME_ARCH_IDLE_MASK) == MME_ARCH_IDLE_MASK) 114 + 93 115 94 116 static const char goya_irq_name[GOYA_MSIX_ENTRIES][GOYA_MAX_STRING_LEN] = { 95 117 "goya cq 0", "goya cq 1", "goya cq 2", "goya cq 3", ··· 323 297 GOYA_ASYNC_EVENT_ID_DMA_BM_CH4 324 298 }; 325 299 300 + static int goya_mmu_clear_pgt_range(struct hl_device *hdev); 301 + static int goya_mmu_set_dram_default_page(struct hl_device *hdev); 302 + static int goya_mmu_add_mappings_for_device_cpu(struct hl_device *hdev); 303 + static void goya_mmu_prepare(struct hl_device *hdev, u32 asid); 304 + 326 305 void goya_get_fixed_properties(struct hl_device *hdev) 327 306 { 328 307 struct asic_fixed_properties *prop = &hdev->asic_prop; ··· 498 467 499 468 
prop->dram_pci_bar_size = pci_resource_len(pdev, DDR_BAR_ID); 500 469 501 - rc = hl_pci_init(hdev, 39); 470 + rc = hl_pci_init(hdev, 48); 502 471 if (rc) 503 472 return rc; 504 473 ··· 570 539 struct asic_fixed_properties *prop = &hdev->asic_prop; 571 540 int rc; 572 541 542 + goya_fetch_psoc_frequency(hdev); 543 + 544 + rc = goya_mmu_clear_pgt_range(hdev); 545 + if (rc) { 546 + dev_err(hdev->dev, 547 + "Failed to clear MMU page tables range %d\n", rc); 548 + return rc; 549 + } 550 + 551 + rc = goya_mmu_set_dram_default_page(hdev); 552 + if (rc) { 553 + dev_err(hdev->dev, "Failed to set DRAM default page %d\n", rc); 554 + return rc; 555 + } 556 + 557 + rc = goya_mmu_add_mappings_for_device_cpu(hdev); 558 + if (rc) 559 + return rc; 560 + 561 + rc = goya_init_cpu_queues(hdev); 562 + if (rc) 563 + return rc; 564 + 565 + rc = goya_test_cpu_queue(hdev); 566 + if (rc) 567 + return rc; 568 + 573 569 rc = goya_armcp_info_get(hdev); 574 570 if (rc) { 575 - dev_err(hdev->dev, "Failed to get armcp info\n"); 571 + dev_err(hdev->dev, "Failed to get armcp info %d\n", rc); 576 572 return rc; 577 573 } 578 574 ··· 611 553 612 554 rc = hl_fw_send_pci_access_msg(hdev, ARMCP_PACKET_ENABLE_PCI_ACCESS); 613 555 if (rc) { 614 - dev_err(hdev->dev, "Failed to enable PCI access from CPU\n"); 556 + dev_err(hdev->dev, 557 + "Failed to enable PCI access from CPU %d\n", rc); 615 558 return rc; 616 559 } 617 560 618 561 WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, 619 562 GOYA_ASYNC_EVENT_ID_INTS_REGISTER); 620 563 621 - goya_fetch_psoc_frequency(hdev); 622 - 623 - rc = goya_mmu_clear_pgt_range(hdev); 624 - if (rc) { 625 - dev_err(hdev->dev, "Failed to clear MMU page tables range\n"); 626 - goto disable_pci_access; 627 - } 628 - 629 - rc = goya_mmu_set_dram_default_page(hdev); 630 - if (rc) { 631 - dev_err(hdev->dev, "Failed to set DRAM default page\n"); 632 - goto disable_pci_access; 633 - } 634 - 635 564 return 0; 636 - 637 - disable_pci_access: 638 - hl_fw_send_pci_access_msg(hdev, 
ARMCP_PACKET_DISABLE_PCI_ACCESS); 639 - 640 - return rc; 641 565 } 642 566 643 567 /* ··· 695 655 goto free_dma_pool; 696 656 } 697 657 698 - hdev->cpu_accessible_dma_pool = gen_pool_create(HL_CPU_PKT_SHIFT, -1); 658 + dev_dbg(hdev->dev, "cpu accessible memory at bus address 0x%llx\n", 659 + hdev->cpu_accessible_dma_address); 660 + 661 + hdev->cpu_accessible_dma_pool = gen_pool_create(ilog2(32), -1); 699 662 if (!hdev->cpu_accessible_dma_pool) { 700 663 dev_err(hdev->dev, 701 664 "Failed to create CPU accessible DMA pool\n"); ··· 829 786 else 830 787 sob_addr = CFG_BASE + mmSYNC_MNGR_SOB_OBJ_1007; 831 788 832 - WREG32(mmDMA_CH_0_WR_COMP_ADDR_LO + reg_off, lower_32_bits(sob_addr)); 833 789 WREG32(mmDMA_CH_0_WR_COMP_ADDR_HI + reg_off, upper_32_bits(sob_addr)); 834 790 WREG32(mmDMA_CH_0_WR_COMP_WDATA + reg_off, 0x80000001); 835 791 } ··· 1015 973 WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_3, upper_32_bits(eq->bus_address)); 1016 974 1017 975 WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_8, 1018 - lower_32_bits(hdev->cpu_accessible_dma_address)); 976 + lower_32_bits(VA_CPU_ACCESSIBLE_MEM_ADDR)); 1019 977 WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_9, 1020 - upper_32_bits(hdev->cpu_accessible_dma_address)); 978 + upper_32_bits(VA_CPU_ACCESSIBLE_MEM_ADDR)); 1021 979 1022 980 WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_5, HL_QUEUE_SIZE_IN_BYTES); 1023 981 WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_4, HL_EQ_SIZE_IN_BYTES); ··· 1043 1001 1044 1002 if (err) { 1045 1003 dev_err(hdev->dev, 1046 - "Failed to communicate with ARM CPU (ArmCP timeout)\n"); 1004 + "Failed to setup communication with device CPU\n"); 1047 1005 return -EIO; 1048 1006 } 1049 1007 ··· 2103 2061 goya_disable_external_queues(hdev); 2104 2062 goya_disable_internal_queues(hdev); 2105 2063 2106 - if (hard_reset) 2064 + if (hard_reset) { 2107 2065 goya_disable_msix(hdev); 2108 - else 2066 + goya_mmu_remove_device_cpu_mappings(hdev); 2067 + } else { 2109 2068 goya_sync_irqs(hdev); 2069 + } 2110 2070 } 2111 2071 2112 2072 /* ··· 2321 2277 
goya_read_device_fw_version(hdev, FW_COMP_UBOOT); 2322 2278 goya_read_device_fw_version(hdev, FW_COMP_PREBOOT); 2323 2279 2324 - if (status == CPU_BOOT_STATUS_SRAM_AVAIL) 2325 - goto out; 2326 - 2327 2280 if (!hdev->fw_loading) { 2328 2281 dev_info(hdev->dev, "Skip loading FW\n"); 2329 2282 goto out; 2330 2283 } 2284 + 2285 + if (status == CPU_BOOT_STATUS_SRAM_AVAIL) 2286 + goto out; 2331 2287 2332 2288 rc = goya_push_linux_to_device(hdev); 2333 2289 if (rc) ··· 2510 2466 if (rc) 2511 2467 goto disable_queues; 2512 2468 2513 - rc = goya_init_cpu_queues(hdev); 2514 - if (rc) { 2515 - dev_err(hdev->dev, "failed to initialize CPU H/W queues %d\n", 2516 - rc); 2517 - goto disable_msix; 2518 - } 2519 - 2520 - /* 2521 - * Check if we managed to set the DMA mask to more then 32 bits. If so, 2522 - * let's try to increase it again because in Goya we set the initial 2523 - * dma mask to less then 39 bits so that the allocation of the memory 2524 - * area for the device's cpu will be under 39 bits 2525 - */ 2526 - if (hdev->dma_mask > 32) { 2527 - rc = hl_pci_set_dma_mask(hdev, 48); 2528 - if (rc) 2529 - goto disable_pci_access; 2530 - } 2531 - 2532 2469 /* Perform read from the device to flush all MSI-X configuration */ 2533 2470 val = RREG32(mmPCIE_DBI_DEVICE_ID_VENDOR_ID_REG); 2534 2471 2535 2472 return 0; 2536 2473 2537 - disable_pci_access: 2538 - hl_fw_send_pci_access_msg(hdev, ARMCP_PACKET_DISABLE_PCI_ACCESS); 2539 - disable_msix: 2540 - goya_disable_msix(hdev); 2541 2474 disable_queues: 2542 2475 goya_disable_internal_queues(hdev); 2543 2476 goya_disable_external_queues(hdev); ··· 2650 2629 void goya_ring_doorbell(struct hl_device *hdev, u32 hw_queue_id, u32 pi) 2651 2630 { 2652 2631 u32 db_reg_offset, db_value; 2653 - bool invalid_queue = false; 2654 2632 2655 2633 switch (hw_queue_id) { 2656 2634 case GOYA_QUEUE_ID_DMA_0: ··· 2673 2653 break; 2674 2654 2675 2655 case GOYA_QUEUE_ID_CPU_PQ: 2676 - if (hdev->cpu_queues_enable) 2677 - db_reg_offset = mmCPU_IF_PF_PQ_PI; 
2678 - else 2679 - invalid_queue = true; 2656 + db_reg_offset = mmCPU_IF_PF_PQ_PI; 2680 2657 break; 2681 2658 2682 2659 case GOYA_QUEUE_ID_MME: ··· 2713 2696 break; 2714 2697 2715 2698 default: 2716 - invalid_queue = true; 2717 - } 2718 - 2719 - if (invalid_queue) { 2720 2699 /* Should never get here */ 2721 - dev_err(hdev->dev, "h/w queue %d is invalid. Can't set pi\n", 2700 + dev_err(hdev->dev, "H/W queue %d is invalid. Can't set pi\n", 2722 2701 hw_queue_id); 2723 2702 return; 2724 2703 } ··· 2821 2808 dma_addr_t fence_dma_addr; 2822 2809 struct hl_cb *cb; 2823 2810 u32 tmp, timeout; 2824 - char buf[16] = {}; 2825 2811 int rc; 2826 2812 2827 2813 if (hdev->pldm) ··· 2828 2816 else 2829 2817 timeout = HL_DEVICE_TIMEOUT_USEC; 2830 2818 2831 - if (!hdev->asic_funcs->is_device_idle(hdev, buf, sizeof(buf))) { 2819 + if (!hdev->asic_funcs->is_device_idle(hdev, NULL, NULL)) { 2832 2820 dev_err_ratelimited(hdev->dev, 2833 - "Can't send KMD job on QMAN0 because %s is busy\n", 2834 - buf); 2821 + "Can't send KMD job on QMAN0 because the device is not idle\n"); 2835 2822 return -EBUSY; 2836 2823 } 2837 2824 ··· 2842 2831 return -ENOMEM; 2843 2832 } 2844 2833 2845 - *fence_ptr = 0; 2846 - 2847 2834 goya_qman0_set_security(hdev, true); 2848 - 2849 - /* 2850 - * goya cs parser saves space for 2xpacket_msg_prot at end of CB. 
For 2851 - * synchronized kernel jobs we only need space for 1 packet_msg_prot 2852 - */ 2853 - job->job_cb_size -= sizeof(struct packet_msg_prot); 2854 2835 2855 2836 cb = job->patched_cb; 2856 2837 ··· 2863 2860 goto free_fence_ptr; 2864 2861 } 2865 2862 2866 - rc = hl_poll_timeout_memory(hdev, (u64) (uintptr_t) fence_ptr, timeout, 2867 - &tmp); 2863 + rc = hl_poll_timeout_memory(hdev, fence_ptr, tmp, 2864 + (tmp == GOYA_QMAN0_FENCE_VAL), 1000, timeout); 2868 2865 2869 2866 hl_hw_queue_inc_ci_kernel(hdev, GOYA_QUEUE_ID_DMA_0); 2870 2867 2871 - if ((rc) || (tmp != GOYA_QMAN0_FENCE_VAL)) { 2872 - dev_err(hdev->dev, "QMAN0 Job hasn't finished in time\n"); 2873 - rc = -ETIMEDOUT; 2868 + if (rc == -ETIMEDOUT) { 2869 + dev_err(hdev->dev, "QMAN0 Job timeout (0x%x)\n", tmp); 2870 + goto free_fence_ptr; 2874 2871 } 2875 2872 2876 2873 free_fence_ptr: ··· 2944 2941 goto free_pkt; 2945 2942 } 2946 2943 2947 - rc = hl_poll_timeout_memory(hdev, (u64) (uintptr_t) fence_ptr, 2948 - GOYA_TEST_QUEUE_WAIT_USEC, &tmp); 2944 + rc = hl_poll_timeout_memory(hdev, fence_ptr, tmp, (tmp == fence_val), 2945 + 1000, GOYA_TEST_QUEUE_WAIT_USEC); 2949 2946 2950 2947 hl_hw_queue_inc_ci_kernel(hdev, hw_queue_id); 2951 2948 2952 - if ((!rc) && (tmp == fence_val)) { 2953 - dev_info(hdev->dev, 2954 - "queue test on H/W queue %d succeeded\n", 2955 - hw_queue_id); 2956 - } else { 2949 + if (rc == -ETIMEDOUT) { 2957 2950 dev_err(hdev->dev, 2958 2951 "H/W queue %d test failed (scratch(0x%08llX) == 0x%08X)\n", 2959 2952 hw_queue_id, (unsigned long long) fence_dma_addr, tmp); 2960 - rc = -EINVAL; 2953 + rc = -EIO; 2954 + } else { 2955 + dev_info(hdev->dev, "queue test on H/W queue %d succeeded\n", 2956 + hw_queue_id); 2961 2957 } 2962 2958 2963 2959 free_pkt: ··· 2988 2986 2989 2987 for (i = 0 ; i < NUMBER_OF_EXT_HW_QUEUES ; i++) { 2990 2988 rc = goya_test_queue(hdev, i); 2991 - if (rc) 2992 - ret_val = -EINVAL; 2993 - } 2994 - 2995 - if (hdev->cpu_queues_enable) { 2996 - rc = goya_test_cpu_queue(hdev); 
2997 2989 if (rc) 2998 2990 ret_val = -EINVAL; 2999 2991 } ··· 3024 3028 void *goya_cpu_accessible_dma_pool_alloc(struct hl_device *hdev, size_t size, 3025 3029 dma_addr_t *dma_handle) 3026 3030 { 3027 - return hl_fw_cpu_accessible_dma_pool_alloc(hdev, size, dma_handle); 3031 + void *vaddr; 3032 + 3033 + vaddr = hl_fw_cpu_accessible_dma_pool_alloc(hdev, size, dma_handle); 3034 + *dma_handle = (*dma_handle) - hdev->cpu_accessible_dma_address + 3035 + VA_CPU_ACCESSIBLE_MEM_ADDR; 3036 + 3037 + return vaddr; 3028 3038 } 3029 3039 3030 3040 void goya_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size, ··· 3909 3907 return goya_parse_cb_no_mmu(hdev, parser); 3910 3908 } 3911 3909 3912 - void goya_add_end_of_cb_packets(u64 kernel_address, u32 len, u64 cq_addr, 3913 - u32 cq_val, u32 msix_vec) 3910 + void goya_add_end_of_cb_packets(struct hl_device *hdev, u64 kernel_address, 3911 + u32 len, u64 cq_addr, u32 cq_val, u32 msix_vec) 3914 3912 { 3915 3913 struct packet_msg_prot *cq_pkt; 3916 3914 u32 tmp; ··· 3941 3939 3942 3940 void goya_restore_phase_topology(struct hl_device *hdev) 3943 3941 { 3942 + 3943 + } 3944 + 3945 + static void goya_clear_sm_regs(struct hl_device *hdev) 3946 + { 3944 3947 int i, num_of_sob_in_longs, num_of_mon_in_longs; 3945 3948 3946 3949 num_of_sob_in_longs = ··· 3965 3958 } 3966 3959 3967 3960 /* 3968 - * goya_debugfs_read32 - read a 32bit value from a given device address 3961 + * goya_debugfs_read32 - read a 32bit value from a given device or a host mapped 3962 + * address. 
3969 3963 * 3970 3964 * @hdev: pointer to hl_device structure 3971 - * @addr: address in device 3965 + * @addr: device or host mapped address 3972 3966 * @val: returned value 3973 3967 * 3974 3968 * In case of DDR address that is not mapped into the default aperture that ··· 4010 4002 } 4011 4003 if (ddr_bar_addr == U64_MAX) 4012 4004 rc = -EIO; 4005 + 4006 + } else if (addr >= HOST_PHYS_BASE && !iommu_present(&pci_bus_type)) { 4007 + *val = *(u32 *) phys_to_virt(addr - HOST_PHYS_BASE); 4008 + 4013 4009 } else { 4014 4010 rc = -EFAULT; 4015 4011 } ··· 4022 4010 } 4023 4011 4024 4012 /* 4025 - * goya_debugfs_write32 - write a 32bit value to a given device address 4013 + * goya_debugfs_write32 - write a 32bit value to a given device or a host mapped 4014 + * address. 4026 4015 * 4027 4016 * @hdev: pointer to hl_device structure 4028 - * @addr: address in device 4017 + * @addr: device or host mapped address 4029 4018 * @val: returned value 4030 4019 * 4031 4020 * In case of DDR address that is not mapped into the default aperture that ··· 4067 4054 } 4068 4055 if (ddr_bar_addr == U64_MAX) 4069 4056 rc = -EIO; 4057 + 4058 + } else if (addr >= HOST_PHYS_BASE && !iommu_present(&pci_bus_type)) { 4059 + *(u32 *) phys_to_virt(addr - HOST_PHYS_BASE) = val; 4060 + 4070 4061 } else { 4071 4062 rc = -EFAULT; 4072 4063 } ··· 4103 4086 static const char *_goya_get_event_desc(u16 event_type) 4104 4087 { 4105 4088 switch (event_type) { 4089 + case GOYA_ASYNC_EVENT_ID_PCIE_IF: 4090 + return "PCIe_if"; 4091 + case GOYA_ASYNC_EVENT_ID_TPC0_ECC: 4092 + case GOYA_ASYNC_EVENT_ID_TPC1_ECC: 4093 + case GOYA_ASYNC_EVENT_ID_TPC2_ECC: 4094 + case GOYA_ASYNC_EVENT_ID_TPC3_ECC: 4095 + case GOYA_ASYNC_EVENT_ID_TPC4_ECC: 4096 + case GOYA_ASYNC_EVENT_ID_TPC5_ECC: 4097 + case GOYA_ASYNC_EVENT_ID_TPC6_ECC: 4098 + case GOYA_ASYNC_EVENT_ID_TPC7_ECC: 4099 + return "TPC%d_ecc"; 4100 + case GOYA_ASYNC_EVENT_ID_MME_ECC: 4101 + return "MME_ecc"; 4102 + case GOYA_ASYNC_EVENT_ID_MME_ECC_EXT: 4103 + return 
"MME_ecc_ext"; 4104 + case GOYA_ASYNC_EVENT_ID_MMU_ECC: 4105 + return "MMU_ecc"; 4106 + case GOYA_ASYNC_EVENT_ID_DMA_MACRO: 4107 + return "DMA_macro"; 4108 + case GOYA_ASYNC_EVENT_ID_DMA_ECC: 4109 + return "DMA_ecc"; 4110 + case GOYA_ASYNC_EVENT_ID_CPU_IF_ECC: 4111 + return "CPU_if_ecc"; 4112 + case GOYA_ASYNC_EVENT_ID_PSOC_MEM: 4113 + return "PSOC_mem"; 4114 + case GOYA_ASYNC_EVENT_ID_PSOC_CORESIGHT: 4115 + return "PSOC_coresight"; 4116 + case GOYA_ASYNC_EVENT_ID_SRAM0 ... GOYA_ASYNC_EVENT_ID_SRAM29: 4117 + return "SRAM%d"; 4118 + case GOYA_ASYNC_EVENT_ID_GIC500: 4119 + return "GIC500"; 4120 + case GOYA_ASYNC_EVENT_ID_PLL0 ... GOYA_ASYNC_EVENT_ID_PLL6: 4121 + return "PLL%d"; 4122 + case GOYA_ASYNC_EVENT_ID_AXI_ECC: 4123 + return "AXI_ecc"; 4124 + case GOYA_ASYNC_EVENT_ID_L2_RAM_ECC: 4125 + return "L2_ram_ecc"; 4126 + case GOYA_ASYNC_EVENT_ID_PSOC_GPIO_05_SW_RESET: 4127 + return "PSOC_gpio_05_sw_reset"; 4128 + case GOYA_ASYNC_EVENT_ID_PSOC_GPIO_10_VRHOT_ICRIT: 4129 + return "PSOC_gpio_10_vrhot_icrit"; 4106 4130 case GOYA_ASYNC_EVENT_ID_PCIE_DEC: 4107 4131 return "PCIe_dec"; 4108 4132 case GOYA_ASYNC_EVENT_ID_TPC0_DEC: ··· 4186 4128 return "DMA%d_qm"; 4187 4129 case GOYA_ASYNC_EVENT_ID_DMA0_CH ... GOYA_ASYNC_EVENT_ID_DMA4_CH: 4188 4130 return "DMA%d_ch"; 4131 + case GOYA_ASYNC_EVENT_ID_TPC0_BMON_SPMU: 4132 + case GOYA_ASYNC_EVENT_ID_TPC1_BMON_SPMU: 4133 + case GOYA_ASYNC_EVENT_ID_TPC2_BMON_SPMU: 4134 + case GOYA_ASYNC_EVENT_ID_TPC3_BMON_SPMU: 4135 + case GOYA_ASYNC_EVENT_ID_TPC4_BMON_SPMU: 4136 + case GOYA_ASYNC_EVENT_ID_TPC5_BMON_SPMU: 4137 + case GOYA_ASYNC_EVENT_ID_TPC6_BMON_SPMU: 4138 + case GOYA_ASYNC_EVENT_ID_TPC7_BMON_SPMU: 4139 + return "TPC%d_bmon_spmu"; 4140 + case GOYA_ASYNC_EVENT_ID_DMA_BM_CH0 ... 
GOYA_ASYNC_EVENT_ID_DMA_BM_CH4: 4141 + return "DMA_bm_ch%d"; 4189 4142 default: 4190 4143 return "N/A"; 4191 4144 } ··· 4207 4138 u8 index; 4208 4139 4209 4140 switch (event_type) { 4141 + case GOYA_ASYNC_EVENT_ID_TPC0_ECC: 4142 + case GOYA_ASYNC_EVENT_ID_TPC1_ECC: 4143 + case GOYA_ASYNC_EVENT_ID_TPC2_ECC: 4144 + case GOYA_ASYNC_EVENT_ID_TPC3_ECC: 4145 + case GOYA_ASYNC_EVENT_ID_TPC4_ECC: 4146 + case GOYA_ASYNC_EVENT_ID_TPC5_ECC: 4147 + case GOYA_ASYNC_EVENT_ID_TPC6_ECC: 4148 + case GOYA_ASYNC_EVENT_ID_TPC7_ECC: 4149 + index = (event_type - GOYA_ASYNC_EVENT_ID_TPC0_ECC) / 3; 4150 + snprintf(desc, size, _goya_get_event_desc(event_type), index); 4151 + break; 4152 + case GOYA_ASYNC_EVENT_ID_SRAM0 ... GOYA_ASYNC_EVENT_ID_SRAM29: 4153 + index = event_type - GOYA_ASYNC_EVENT_ID_SRAM0; 4154 + snprintf(desc, size, _goya_get_event_desc(event_type), index); 4155 + break; 4156 + case GOYA_ASYNC_EVENT_ID_PLL0 ... GOYA_ASYNC_EVENT_ID_PLL6: 4157 + index = event_type - GOYA_ASYNC_EVENT_ID_PLL0; 4158 + snprintf(desc, size, _goya_get_event_desc(event_type), index); 4159 + break; 4210 4160 case GOYA_ASYNC_EVENT_ID_TPC0_DEC: 4211 4161 case GOYA_ASYNC_EVENT_ID_TPC1_DEC: 4212 4162 case GOYA_ASYNC_EVENT_ID_TPC2_DEC: ··· 4262 4174 break; 4263 4175 case GOYA_ASYNC_EVENT_ID_DMA0_CH ... 
GOYA_ASYNC_EVENT_ID_DMA4_CH: 4264 4176 index = event_type - GOYA_ASYNC_EVENT_ID_DMA0_CH; 4177 + snprintf(desc, size, _goya_get_event_desc(event_type), index); 4178 + break; 4179 + case GOYA_ASYNC_EVENT_ID_TPC0_BMON_SPMU: 4180 + case GOYA_ASYNC_EVENT_ID_TPC1_BMON_SPMU: 4181 + case GOYA_ASYNC_EVENT_ID_TPC2_BMON_SPMU: 4182 + case GOYA_ASYNC_EVENT_ID_TPC3_BMON_SPMU: 4183 + case GOYA_ASYNC_EVENT_ID_TPC4_BMON_SPMU: 4184 + case GOYA_ASYNC_EVENT_ID_TPC5_BMON_SPMU: 4185 + case GOYA_ASYNC_EVENT_ID_TPC6_BMON_SPMU: 4186 + case GOYA_ASYNC_EVENT_ID_TPC7_BMON_SPMU: 4187 + index = (event_type - GOYA_ASYNC_EVENT_ID_TPC0_BMON_SPMU) / 10; 4188 + snprintf(desc, size, _goya_get_event_desc(event_type), index); 4189 + break; 4190 + case GOYA_ASYNC_EVENT_ID_DMA_BM_CH0 ... GOYA_ASYNC_EVENT_ID_DMA_BM_CH4: 4191 + index = event_type - GOYA_ASYNC_EVENT_ID_DMA_BM_CH0; 4265 4192 snprintf(desc, size, _goya_get_event_desc(event_type), index); 4266 4193 break; 4267 4194 default: ··· 4329 4226 } 4330 4227 } 4331 4228 4332 - static void goya_print_irq_info(struct hl_device *hdev, u16 event_type) 4229 + static void goya_print_irq_info(struct hl_device *hdev, u16 event_type, 4230 + bool razwi) 4333 4231 { 4334 4232 char desc[20] = ""; 4335 4233 ··· 4338 4234 dev_err(hdev->dev, "Received H/W interrupt %d [\"%s\"]\n", 4339 4235 event_type, desc); 4340 4236 4341 - goya_print_razwi_info(hdev); 4342 - goya_print_mmu_error_info(hdev); 4237 + if (razwi) { 4238 + goya_print_razwi_info(hdev); 4239 + goya_print_mmu_error_info(hdev); 4240 + } 4343 4241 } 4344 4242 4345 4243 static int goya_unmask_irq_arr(struct hl_device *hdev, u32 *irq_arr, ··· 4445 4339 case GOYA_ASYNC_EVENT_ID_PSOC_CORESIGHT: 4446 4340 case GOYA_ASYNC_EVENT_ID_SRAM0 ... 
GOYA_ASYNC_EVENT_ID_SRAM29: 4447 4341 case GOYA_ASYNC_EVENT_ID_GIC500: 4448 - case GOYA_ASYNC_EVENT_ID_PLL0: 4449 - case GOYA_ASYNC_EVENT_ID_PLL1: 4450 - case GOYA_ASYNC_EVENT_ID_PLL3: 4451 - case GOYA_ASYNC_EVENT_ID_PLL4: 4452 - case GOYA_ASYNC_EVENT_ID_PLL5: 4453 - case GOYA_ASYNC_EVENT_ID_PLL6: 4342 + case GOYA_ASYNC_EVENT_ID_PLL0 ... GOYA_ASYNC_EVENT_ID_PLL6: 4454 4343 case GOYA_ASYNC_EVENT_ID_AXI_ECC: 4455 4344 case GOYA_ASYNC_EVENT_ID_L2_RAM_ECC: 4456 4345 case GOYA_ASYNC_EVENT_ID_PSOC_GPIO_05_SW_RESET: 4457 4346 case GOYA_ASYNC_EVENT_ID_PSOC_GPIO_10_VRHOT_ICRIT: 4458 - dev_err(hdev->dev, 4459 - "Received H/W interrupt %d, reset the chip\n", 4460 - event_type); 4347 + goya_print_irq_info(hdev, event_type, false); 4461 4348 hl_device_reset(hdev, true, false); 4462 4349 break; 4463 4350 ··· 4481 4382 case GOYA_ASYNC_EVENT_ID_MME_CMDQ: 4482 4383 case GOYA_ASYNC_EVENT_ID_DMA0_QM ... GOYA_ASYNC_EVENT_ID_DMA4_QM: 4483 4384 case GOYA_ASYNC_EVENT_ID_DMA0_CH ... GOYA_ASYNC_EVENT_ID_DMA4_CH: 4484 - goya_print_irq_info(hdev, event_type); 4385 + goya_print_irq_info(hdev, event_type, true); 4485 4386 goya_unmask_irq(hdev, event_type); 4486 4387 break; 4487 4388 ··· 4493 4394 case GOYA_ASYNC_EVENT_ID_TPC5_BMON_SPMU: 4494 4395 case GOYA_ASYNC_EVENT_ID_TPC6_BMON_SPMU: 4495 4396 case GOYA_ASYNC_EVENT_ID_TPC7_BMON_SPMU: 4496 - case GOYA_ASYNC_EVENT_ID_DMA_BM_CH0: 4497 - case GOYA_ASYNC_EVENT_ID_DMA_BM_CH1: 4498 - case GOYA_ASYNC_EVENT_ID_DMA_BM_CH2: 4499 - case GOYA_ASYNC_EVENT_ID_DMA_BM_CH3: 4500 - case GOYA_ASYNC_EVENT_ID_DMA_BM_CH4: 4501 - dev_info(hdev->dev, "Received H/W interrupt %d\n", event_type); 4397 + case GOYA_ASYNC_EVENT_ID_DMA_BM_CH0 ... 
GOYA_ASYNC_EVENT_ID_DMA_BM_CH4: 4398 + goya_print_irq_info(hdev, event_type, false); 4399 + goya_unmask_irq(hdev, event_type); 4502 4400 break; 4503 4401 4504 4402 default: ··· 4514 4418 return goya->events_stat; 4515 4419 } 4516 4420 4517 - static int goya_memset_device_memory(struct hl_device *hdev, u64 addr, u32 size, 4421 + static int goya_memset_device_memory(struct hl_device *hdev, u64 addr, u64 size, 4518 4422 u64 val, bool is_dram) 4519 4423 { 4520 4424 struct packet_lin_dma *lin_dma_pkt; 4521 4425 struct hl_cs_job *job; 4522 4426 u32 cb_size, ctl; 4523 4427 struct hl_cb *cb; 4524 - int rc; 4428 + int rc, lin_dma_pkts_cnt; 4525 4429 4526 - cb = hl_cb_kernel_create(hdev, PAGE_SIZE); 4430 + lin_dma_pkts_cnt = DIV_ROUND_UP_ULL(size, SZ_2G); 4431 + cb_size = lin_dma_pkts_cnt * sizeof(struct packet_lin_dma) + 4432 + sizeof(struct packet_msg_prot); 4433 + cb = hl_cb_kernel_create(hdev, cb_size); 4527 4434 if (!cb) 4528 - return -EFAULT; 4435 + return -ENOMEM; 4529 4436 4530 4437 lin_dma_pkt = (struct packet_lin_dma *) (uintptr_t) cb->kernel_address; 4531 4438 4532 - memset(lin_dma_pkt, 0, sizeof(*lin_dma_pkt)); 4533 - cb_size = sizeof(*lin_dma_pkt); 4439 + do { 4440 + memset(lin_dma_pkt, 0, sizeof(*lin_dma_pkt)); 4534 4441 4535 - ctl = ((PACKET_LIN_DMA << GOYA_PKT_CTL_OPCODE_SHIFT) | 4536 - (1 << GOYA_PKT_LIN_DMA_CTL_MEMSET_SHIFT) | 4537 - (1 << GOYA_PKT_LIN_DMA_CTL_WO_SHIFT) | 4538 - (1 << GOYA_PKT_CTL_RB_SHIFT) | 4539 - (1 << GOYA_PKT_CTL_MB_SHIFT)); 4540 - ctl |= (is_dram ? DMA_HOST_TO_DRAM : DMA_HOST_TO_SRAM) << 4541 - GOYA_PKT_LIN_DMA_CTL_DMA_DIR_SHIFT; 4542 - lin_dma_pkt->ctl = cpu_to_le32(ctl); 4442 + ctl = ((PACKET_LIN_DMA << GOYA_PKT_CTL_OPCODE_SHIFT) | 4443 + (1 << GOYA_PKT_LIN_DMA_CTL_MEMSET_SHIFT) | 4444 + (1 << GOYA_PKT_LIN_DMA_CTL_WO_SHIFT) | 4445 + (1 << GOYA_PKT_CTL_RB_SHIFT) | 4446 + (1 << GOYA_PKT_CTL_MB_SHIFT)); 4447 + ctl |= (is_dram ? 
+			DMA_HOST_TO_DRAM : DMA_HOST_TO_SRAM) <<
+				GOYA_PKT_LIN_DMA_CTL_DMA_DIR_SHIFT;
+		lin_dma_pkt->ctl = cpu_to_le32(ctl);
 
-	lin_dma_pkt->src_addr = cpu_to_le64(val);
-	lin_dma_pkt->dst_addr = cpu_to_le64(addr);
-	lin_dma_pkt->tsize = cpu_to_le32(size);
+		lin_dma_pkt->src_addr = cpu_to_le64(val);
+		lin_dma_pkt->dst_addr = cpu_to_le64(addr);
+		if (lin_dma_pkts_cnt > 1)
+			lin_dma_pkt->tsize = cpu_to_le32(SZ_2G);
+		else
+			lin_dma_pkt->tsize = cpu_to_le32(size);
+
+		size -= SZ_2G;
+		addr += SZ_2G;
+		lin_dma_pkt++;
+	} while (--lin_dma_pkts_cnt);
 
 	job = hl_cs_allocate_job(hdev, true);
 	if (!job) {
···
 	job->user_cb_size = cb_size;
 	job->hw_queue_id = GOYA_QUEUE_ID_DMA_0;
 	job->patched_cb = job->user_cb;
-	job->job_cb_size = job->user_cb_size +
-			sizeof(struct packet_msg_prot) * 2;
+	job->job_cb_size = job->user_cb_size;
 
 	hl_debugfs_add_job(hdev, job);
 
···
 int goya_context_switch(struct hl_device *hdev, u32 asid)
 {
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
-	u64 addr = prop->sram_base_address;
+	u64 addr = prop->sram_base_address, sob_addr;
 	u32 size = hdev->pldm ? 0x10000 : prop->sram_size;
 	u64 val = 0x7777777777777777ull;
-	int rc;
+	int rc, dma_id;
+	u32 channel_off = mmDMA_CH_1_WR_COMP_ADDR_LO -
+			mmDMA_CH_0_WR_COMP_ADDR_LO;
 
 	rc = goya_memset_device_memory(hdev, addr, size, val, false);
 	if (rc) {
···
 		return rc;
 	}
 
+	/* we need to reset registers that the user is allowed to change */
+	sob_addr = CFG_BASE + mmSYNC_MNGR_SOB_OBJ_1007;
+	WREG32(mmDMA_CH_0_WR_COMP_ADDR_LO, lower_32_bits(sob_addr));
+
+	for (dma_id = 1 ; dma_id < NUMBER_OF_EXT_HW_QUEUES ; dma_id++) {
+		sob_addr = CFG_BASE + mmSYNC_MNGR_SOB_OBJ_1000 +
+							(dma_id - 1) * 4;
+		WREG32(mmDMA_CH_0_WR_COMP_ADDR_LO + channel_off * dma_id,
+						lower_32_bits(sob_addr));
+	}
+
 	WREG32(mmTPC_PLL_CLK_RLX_0, 0x200020);
+
 	goya_mmu_prepare(hdev, asid);
+
+	goya_clear_sm_regs(hdev);
 
 	return 0;
 }
 
-int goya_mmu_clear_pgt_range(struct hl_device *hdev)
+static int goya_mmu_clear_pgt_range(struct hl_device *hdev)
 {
 	struct asic_fixed_properties *prop = &hdev->asic_prop;
 	struct goya_device *goya = hdev->asic_specific;
···
 	return goya_memset_device_memory(hdev, addr, size, 0, true);
 }
 
-int goya_mmu_set_dram_default_page(struct hl_device *hdev)
+static int goya_mmu_set_dram_default_page(struct hl_device *hdev)
 {
 	struct goya_device *goya = hdev->asic_specific;
 	u64 addr = hdev->asic_prop.mmu_dram_default_page_addr;
···
 	return goya_memset_device_memory(hdev, addr, size, val, true);
 }
 
-void goya_mmu_prepare(struct hl_device *hdev, u32 asid)
+static int goya_mmu_add_mappings_for_device_cpu(struct hl_device *hdev)
+{
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct goya_device *goya = hdev->asic_specific;
+	s64 off, cpu_off;
+	int rc;
+
+	if (!(goya->hw_cap_initialized & HW_CAP_MMU))
+		return 0;
+
+	for (off = 0 ; off < CPU_FW_IMAGE_SIZE ; off += PAGE_SIZE_2MB) {
+		rc = hl_mmu_map(hdev->kernel_ctx, prop->dram_base_address + off,
+				prop->dram_base_address + off, PAGE_SIZE_2MB);
+		if (rc) {
+			dev_err(hdev->dev, "Map failed for address 0x%llx\n",
+				prop->dram_base_address + off);
+			goto unmap;
+		}
+	}
+
+	if (!(hdev->cpu_accessible_dma_address & (PAGE_SIZE_2MB - 1))) {
+		rc = hl_mmu_map(hdev->kernel_ctx, VA_CPU_ACCESSIBLE_MEM_ADDR,
+			hdev->cpu_accessible_dma_address, PAGE_SIZE_2MB);
+
+		if (rc) {
+			dev_err(hdev->dev,
+				"Map failed for CPU accessible memory\n");
+			off -= PAGE_SIZE_2MB;
+			goto unmap;
+		}
+	} else {
+		for (cpu_off = 0 ; cpu_off < SZ_2M ; cpu_off += PAGE_SIZE_4KB) {
+			rc = hl_mmu_map(hdev->kernel_ctx,
+				VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off,
+				hdev->cpu_accessible_dma_address + cpu_off,
+				PAGE_SIZE_4KB);
+			if (rc) {
+				dev_err(hdev->dev,
+					"Map failed for CPU accessible memory\n");
+				cpu_off -= PAGE_SIZE_4KB;
+				goto unmap_cpu;
+			}
+		}
+	}
+
+	goya_mmu_prepare_reg(hdev, mmCPU_IF_ARUSER_OVR, HL_KERNEL_ASID_ID);
+	goya_mmu_prepare_reg(hdev, mmCPU_IF_AWUSER_OVR, HL_KERNEL_ASID_ID);
+	WREG32(mmCPU_IF_ARUSER_OVR_EN, 0x7FF);
+	WREG32(mmCPU_IF_AWUSER_OVR_EN, 0x7FF);
+
+	/* Make sure configuration is flushed to device */
+	RREG32(mmCPU_IF_AWUSER_OVR_EN);
+
+	goya->device_cpu_mmu_mappings_done = true;
+
+	return 0;
+
+unmap_cpu:
+	for (; cpu_off >= 0 ; cpu_off -= PAGE_SIZE_4KB)
+		if (hl_mmu_unmap(hdev->kernel_ctx,
+				VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off,
+				PAGE_SIZE_4KB))
+			dev_warn_ratelimited(hdev->dev,
+				"failed to unmap address 0x%llx\n",
+				VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off);
+unmap:
+	for (; off >= 0 ; off -= PAGE_SIZE_2MB)
+		if (hl_mmu_unmap(hdev->kernel_ctx,
+				prop->dram_base_address + off, PAGE_SIZE_2MB))
+			dev_warn_ratelimited(hdev->dev,
+				"failed to unmap address 0x%llx\n",
+				prop->dram_base_address + off);
+
+	return rc;
+}
+
+void goya_mmu_remove_device_cpu_mappings(struct hl_device *hdev)
+{
+	struct asic_fixed_properties *prop = &hdev->asic_prop;
+	struct goya_device *goya = hdev->asic_specific;
+	u32 off, cpu_off;
+
+	if (!(goya->hw_cap_initialized & HW_CAP_MMU))
+		return;
+
+	if (!goya->device_cpu_mmu_mappings_done)
+		return;
+
+	WREG32(mmCPU_IF_ARUSER_OVR_EN, 0);
+	WREG32(mmCPU_IF_AWUSER_OVR_EN, 0);
+
+	if (!(hdev->cpu_accessible_dma_address & (PAGE_SIZE_2MB - 1))) {
+		if (hl_mmu_unmap(hdev->kernel_ctx, VA_CPU_ACCESSIBLE_MEM_ADDR,
+				PAGE_SIZE_2MB))
+			dev_warn(hdev->dev,
+				"Failed to unmap CPU accessible memory\n");
+	} else {
+		for (cpu_off = 0 ; cpu_off < SZ_2M ; cpu_off += PAGE_SIZE_4KB)
+			if (hl_mmu_unmap(hdev->kernel_ctx,
+					VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off,
+					PAGE_SIZE_4KB))
+				dev_warn_ratelimited(hdev->dev,
+					"failed to unmap address 0x%llx\n",
+					VA_CPU_ACCESSIBLE_MEM_ADDR + cpu_off);
+	}
+
+	for (off = 0 ; off < CPU_FW_IMAGE_SIZE ; off += PAGE_SIZE_2MB)
+		if (hl_mmu_unmap(hdev->kernel_ctx,
+				prop->dram_base_address + off, PAGE_SIZE_2MB))
+			dev_warn_ratelimited(hdev->dev,
+				"Failed to unmap address 0x%llx\n",
+				prop->dram_base_address + off);
+
+	goya->device_cpu_mmu_mappings_done = false;
+}
+
+static void goya_mmu_prepare(struct hl_device *hdev, u32 asid)
 {
 	struct goya_device *goya = hdev->asic_specific;
 	int i;
···
 	return 0;
 }
 
-static bool goya_is_device_idle(struct hl_device *hdev, char *buf, size_t size)
+static bool goya_is_device_idle(struct hl_device *hdev, u32 *mask,
+				struct seq_file *s)
 {
-	u64 offset, dma_qm_reg, tpc_qm_reg, tpc_cmdq_reg, tpc_cfg_reg;
+	const char *fmt = "%-5d%-9s%#-14x%#-16x%#x\n";
+	const char *dma_fmt = "%-5d%-9s%#-14x%#x\n";
+	u32 qm_glbl_sts0, cmdq_glbl_sts0, dma_core_sts0, tpc_cfg_sts,
+		mme_arch_sts;
+	bool is_idle = true, is_eng_idle;
+	u64 offset;
 	int i;
+
+	if (s)
+		seq_puts(s, "\nDMA  is_idle  QM_GLBL_STS0  DMA_CORE_STS0\n"
+				"---  -------  ------------  -------------\n");
 
 	offset = mmDMA_QM_1_GLBL_STS0 - mmDMA_QM_0_GLBL_STS0;
 
 	for (i = 0 ; i < DMA_MAX_NUM ; i++) {
-		dma_qm_reg = mmDMA_QM_0_GLBL_STS0 + i * offset;
+		qm_glbl_sts0 = RREG32(mmDMA_QM_0_GLBL_STS0 + i * offset);
+		dma_core_sts0 = RREG32(mmDMA_CH_0_STS0 + i * offset);
+		is_eng_idle = IS_DMA_QM_IDLE(qm_glbl_sts0) &&
+				IS_DMA_IDLE(dma_core_sts0);
+		is_idle &= is_eng_idle;
 
-		if ((RREG32(dma_qm_reg) & DMA_QM_IDLE_MASK) !=
-				DMA_QM_IDLE_MASK)
-			return HL_ENG_BUSY(buf, size, "DMA%d_QM", i);
+		if (mask)
+			*mask |= !is_eng_idle << (GOYA_ENGINE_ID_DMA_0 + i);
+		if (s)
+			seq_printf(s, dma_fmt, i, is_eng_idle ? "Y" : "N",
+					qm_glbl_sts0, dma_core_sts0);
 	}
+
+	if (s)
+		seq_puts(s,
+			"\nTPC  is_idle  QM_GLBL_STS0  CMDQ_GLBL_STS0  CFG_STATUS\n"
+			"---  -------  ------------  --------------  ----------\n");
 
 	offset = mmTPC1_QM_GLBL_STS0 - mmTPC0_QM_GLBL_STS0;
 
 	for (i = 0 ; i < TPC_MAX_NUM ; i++) {
-		tpc_qm_reg = mmTPC0_QM_GLBL_STS0 + i * offset;
-		tpc_cmdq_reg = mmTPC0_CMDQ_GLBL_STS0 + i * offset;
-		tpc_cfg_reg = mmTPC0_CFG_STATUS + i * offset;
+		qm_glbl_sts0 = RREG32(mmTPC0_QM_GLBL_STS0 + i * offset);
+		cmdq_glbl_sts0 = RREG32(mmTPC0_CMDQ_GLBL_STS0 + i * offset);
+		tpc_cfg_sts = RREG32(mmTPC0_CFG_STATUS + i * offset);
+		is_eng_idle = IS_TPC_QM_IDLE(qm_glbl_sts0) &&
+				IS_TPC_CMDQ_IDLE(cmdq_glbl_sts0) &&
+				IS_TPC_IDLE(tpc_cfg_sts);
+		is_idle &= is_eng_idle;
 
-		if ((RREG32(tpc_qm_reg) & TPC_QM_IDLE_MASK) !=
-				TPC_QM_IDLE_MASK)
-			return HL_ENG_BUSY(buf, size, "TPC%d_QM", i);
-
-		if ((RREG32(tpc_cmdq_reg) & TPC_CMDQ_IDLE_MASK) !=
-				TPC_CMDQ_IDLE_MASK)
-			return HL_ENG_BUSY(buf, size, "TPC%d_CMDQ", i);
-
-		if ((RREG32(tpc_cfg_reg) & TPC_CFG_IDLE_MASK) !=
-				TPC_CFG_IDLE_MASK)
-			return HL_ENG_BUSY(buf, size, "TPC%d_CFG", i);
+		if (mask)
+			*mask |= !is_eng_idle << (GOYA_ENGINE_ID_TPC_0 + i);
+		if (s)
+			seq_printf(s, fmt, i, is_eng_idle ? "Y" : "N",
+				qm_glbl_sts0, cmdq_glbl_sts0, tpc_cfg_sts);
 	}
 
-	if ((RREG32(mmMME_QM_GLBL_STS0) & MME_QM_IDLE_MASK) !=
-			MME_QM_IDLE_MASK)
-		return HL_ENG_BUSY(buf, size, "MME_QM");
+	if (s)
+		seq_puts(s,
+			"\nMME  is_idle  QM_GLBL_STS0  CMDQ_GLBL_STS0  ARCH_STATUS\n"
+			"---  -------  ------------  --------------  -----------\n");
 
-	if ((RREG32(mmMME_CMDQ_GLBL_STS0) & MME_CMDQ_IDLE_MASK) !=
-			MME_CMDQ_IDLE_MASK)
-		return HL_ENG_BUSY(buf, size, "MME_CMDQ");
+	qm_glbl_sts0 = RREG32(mmMME_QM_GLBL_STS0);
+	cmdq_glbl_sts0 = RREG32(mmMME_CMDQ_GLBL_STS0);
+	mme_arch_sts = RREG32(mmMME_ARCH_STATUS);
+	is_eng_idle = IS_MME_QM_IDLE(qm_glbl_sts0) &&
+			IS_MME_CMDQ_IDLE(cmdq_glbl_sts0) &&
+			IS_MME_IDLE(mme_arch_sts);
+	is_idle &= is_eng_idle;
 
-	if ((RREG32(mmMME_ARCH_STATUS) & MME_ARCH_IDLE_MASK) !=
-			MME_ARCH_IDLE_MASK)
-		return HL_ENG_BUSY(buf, size, "MME_ARCH");
+	if (mask)
+		*mask |= !is_eng_idle << GOYA_ENGINE_ID_MME_0;
+	if (s) {
+		seq_printf(s, fmt, 0, is_eng_idle ? "Y" : "N", qm_glbl_sts0,
+				cmdq_glbl_sts0, mme_arch_sts);
+		seq_puts(s, "\n");
+	}
 
-	if (RREG32(mmMME_SHADOW_0_STATUS) & MME_SHADOW_IDLE_MASK)
-		return HL_ENG_BUSY(buf, size, "MME");
-
-	return true;
+	return is_idle;
 }
 
 static void goya_hw_queues_lock(struct hl_device *hdev)
+10 -6
drivers/misc/habanalabs/goya/goyaP.h
···
 #define VA_DDR_SPACE_SIZE	(VA_DDR_SPACE_END - \
					VA_DDR_SPACE_START) /* 128GB */
 
+#if (HL_CPU_ACCESSIBLE_MEM_SIZE != SZ_2M)
+#error "HL_CPU_ACCESSIBLE_MEM_SIZE must be exactly 2MB to enable MMU mapping"
+#endif
+
+#define VA_CPU_ACCESSIBLE_MEM_ADDR	0x8000000000ull
+
 #define DMA_MAX_TRANSFER_SIZE	U32_MAX
 
 #define HW_CAP_PLL	0x00000001
···
 	u64		ddr_bar_cur_addr;
 	u32		events_stat[GOYA_ASYNC_EVENT_ID_SIZE];
 	u32		hw_cap_initialized;
+	u8		device_cpu_mmu_mappings_done;
 };
 
 void goya_get_fixed_properties(struct hl_device *hdev);
···
 int goya_debug_coresight(struct hl_device *hdev, void *data);
 void goya_halt_coresight(struct hl_device *hdev);
 
-void goya_mmu_prepare(struct hl_device *hdev, u32 asid);
-int goya_mmu_clear_pgt_range(struct hl_device *hdev);
-int goya_mmu_set_dram_default_page(struct hl_device *hdev);
-
 int goya_suspend(struct hl_device *hdev);
 int goya_resume(struct hl_device *hdev);
 
 void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry);
 void *goya_get_events_stat(struct hl_device *hdev, u32 *size);
 
-void goya_add_end_of_cb_packets(u64 kernel_address, u32 len, u64 cq_addr,
-				u32 cq_val, u32 msix_vec);
+void goya_add_end_of_cb_packets(struct hl_device *hdev, u64 kernel_address,
+				u32 len, u64 cq_addr, u32 cq_val, u32 msix_vec);
 int goya_cs_parser(struct hl_device *hdev, struct hl_cs_parser *parser);
 void *goya_get_int_queue_base(struct hl_device *hdev, u32 queue_id,
				dma_addr_t *dma_handle, u16 *queue_len);
···
				dma_addr_t *dma_handle);
 void goya_cpu_accessible_dma_pool_free(struct hl_device *hdev, size_t size,
					void *vaddr);
+void goya_mmu_remove_device_cpu_mappings(struct hl_device *hdev);
 
 #endif /* GOYAP_H_ */
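The new `#if`/`#error` guard in goyaP.h is a preprocessor-time assertion: the MMU-mapping code assumes the KMD/ArmCP shared region is exactly one 2MB page, so a mismatch must fail the build rather than surface at runtime. A self-contained sketch of the same pattern (the macro definitions here are local stand-ins for the values habanalabs.h and the kernel's sizes.h provide):

```c
#include <assert.h>

/* Stand-in for the kernel's SZ_2M from include/linux/sizes.h. */
#define SZ_2M (2 * 1024 * 1024)

/* Assumption: in the driver this comes from habanalabs.h; redefined here
 * only so the guard below is self-contained. */
#define HL_CPU_ACCESSIBLE_MEM_SIZE SZ_2M

/* Same build-time guard goyaP.h adds: if the shared region ever stops
 * being exactly one 2MB page, the single-2MB-mapping assumption breaks,
 * so refuse to compile instead of failing at runtime. */
#if (HL_CPU_ACCESSIBLE_MEM_SIZE != SZ_2M)
#error "HL_CPU_ACCESSIBLE_MEM_SIZE must be exactly 2MB to enable MMU mapping"
#endif
```

Changing either macro so the two sides disagree turns the mistake into an immediate compile error at the `#error` line.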
+16
drivers/misc/habanalabs/goya/goya_security.c
···
 	goya_pb_set_block(hdev, mmTPC0_RD_REGULATOR_BASE);
 	goya_pb_set_block(hdev, mmTPC0_WR_REGULATOR_BASE);
 
+	pb_addr = (mmTPC0_CFG_SEMAPHORE & ~0xFFF) + PROT_BITS_OFFS;
+	word_offset = ((mmTPC0_CFG_SEMAPHORE & PROT_BITS_OFFS) >> 7) << 2;
+
+	mask = 1 << ((mmTPC0_CFG_SEMAPHORE & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_VFLAGS & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_SFLAGS & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_LFSR_POLYNOM & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_STATUS & 0x7F) >> 2);
+
+	WREG32(pb_addr + word_offset, ~mask);
+
 	pb_addr = (mmTPC0_CFG_CFG_BASE_ADDRESS_HIGH & ~0xFFF) + PROT_BITS_OFFS;
 	word_offset = ((mmTPC0_CFG_CFG_BASE_ADDRESS_HIGH &
			PROT_BITS_OFFS) >> 7) << 2;
···
 	mask |= 1 << ((mmTPC0_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
 	mask |= 1 << ((mmTPC0_CFG_SM_BASE_ADDRESS_LOW & 0x7F) >> 2);
 	mask |= 1 << ((mmTPC0_CFG_SM_BASE_ADDRESS_HIGH & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_CFG_SUBTRACT_VALUE & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_TPC_STALL & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_MSS_CONFIG & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_TPC_INTR_CAUSE & 0x7F) >> 2);
+	mask |= 1 << ((mmTPC0_CFG_TPC_INTR_MASK & 0x7F) >> 2);
 
 	WREG32(pb_addr + word_offset, ~mask);
 
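The mask arithmetic repeated throughout goya_security.c encodes which registers a user may touch: each protection-bits word covers 128 bytes of register space (32 registers of 4 bytes), and bit `(addr & 0x7F) >> 2` of that word guards the register at `addr`. A small sketch of the two helpers implied by that arithmetic, assuming `PROT_BITS_OFFS` is 0xF80 as in the driver:

```c
#include <assert.h>
#include <stdint.h>

#define PROT_BITS_OFFS 0xF80 /* assumption: mirrors goya_security.c */

/* Bit position inside a protection word for the register at reg_addr:
 * one word guards 32 consecutive 4-byte registers (128 bytes). */
static uint32_t prot_bit(uint32_t reg_addr)
{
	return 1u << ((reg_addr & 0x7F) >> 2);
}

/* Byte offset of that protection word within the protection-bits block,
 * matching "((reg & PROT_BITS_OFFS) >> 7) << 2" in the diff. */
static uint32_t prot_word_offset(uint32_t reg_addr)
{
	return ((reg_addr & PROT_BITS_OFFS) >> 7) << 2;
}
```

OR-ing `prot_bit()` values for several registers and writing the complement (`~mask`) is exactly the shape of each `WREG32(pb_addr + word_offset, ~mask)` block above: a set protection bit blocks access, so the complement leaves only the listed registers open.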
+68 -25
drivers/misc/habanalabs/habanalabs.h
···
 #define HL_ARMCP_INFO_TIMEOUT_USEC	10000000 /* 10s */
 #define HL_ARMCP_EEPROM_TIMEOUT_USEC	10000000 /* 10s */
 
+#define HL_PCI_ELBI_TIMEOUT_MSEC	10 /* 10ms */
+
 #define HL_MAX_QUEUES	128
 
 #define HL_MAX_JOBS_PER_CS	64
···
 /**
  * struct asic_fixed_properties - ASIC specific immutable properties.
  * @hw_queues_props: H/W queues properties.
- * @armcp_info: received various information from ArmCP regarding the H/W. e.g.
+ * @armcp_info: received various information from ArmCP regarding the H/W, e.g.
  *              available sensors.
  * @uboot_ver: F/W U-boot version.
  * @preboot_ver: F/W Preboot version.
···
 #define HL_EQ_LENGTH			64
 #define HL_EQ_SIZE_IN_BYTES		(HL_EQ_LENGTH * HL_EQ_ENTRY_SIZE)
 
-#define HL_CPU_PKT_SHIFT		5
-#define HL_CPU_PKT_SIZE			(1 << HL_CPU_PKT_SHIFT)
-#define HL_CPU_PKT_MASK			(~((1 << HL_CPU_PKT_SHIFT) - 1))
-#define HL_CPU_MAX_PKTS_IN_CB		32
-#define HL_CPU_CB_SIZE			(HL_CPU_PKT_SIZE * \
-					 HL_CPU_MAX_PKTS_IN_CB)
-#define HL_CPU_CB_QUEUE_SIZE		(HL_QUEUE_LENGTH * HL_CPU_CB_SIZE)
-
-/* KMD <-> ArmCP shared memory size (EQ + PQ + CPU CB queue) */
-#define HL_CPU_ACCESSIBLE_MEM_SIZE	(HL_EQ_SIZE_IN_BYTES + \
-					 HL_QUEUE_SIZE_IN_BYTES + \
-					 HL_CPU_CB_QUEUE_SIZE)
+/* KMD <-> ArmCP shared memory size */
+#define HL_CPU_ACCESSIBLE_MEM_SIZE	SZ_2M
 
 /**
  * struct hl_hw_queue - describes a H/W transport queue.
···
			enum dma_data_direction dir);
 	u32 (*get_dma_desc_list_size)(struct hl_device *hdev,
					struct sg_table *sgt);
-	void (*add_end_of_cb_packets)(u64 kernel_address, u32 len, u64 cq_addr,
-					u32 cq_val, u32 msix_num);
+	void (*add_end_of_cb_packets)(struct hl_device *hdev,
+					u64 kernel_address, u32 len,
+					u64 cq_addr, u32 cq_val, u32 msix_num);
 	void (*update_eq_ci)(struct hl_device *hdev, u32 val);
 	int (*context_switch)(struct hl_device *hdev, u32 asid);
 	void (*restore_phase_topology)(struct hl_device *hdev);
···
					u32 asid, u64 va, u64 size);
 	int (*send_heartbeat)(struct hl_device *hdev);
 	int (*debug_coresight)(struct hl_device *hdev, void *data);
-	bool (*is_device_idle)(struct hl_device *hdev, char *buf, size_t size);
+	bool (*is_device_idle)(struct hl_device *hdev, u32 *mask,
+				struct seq_file *s);
 	int (*soft_reset_late_init)(struct hl_device *hdev);
 	void (*hw_queues_lock)(struct hl_device *hdev);
 	void (*hw_queues_unlock)(struct hl_device *hdev);
···
 	(cond) ? 0 : -ETIMEDOUT; \
 })
 
+/*
+ * address in this macro points always to a memory location in the
+ * host's (server's) memory. That location is updated asynchronously
+ * either by the direct access of the device or by another core
+ */
+#define hl_poll_timeout_memory(hdev, addr, val, cond, sleep_us, timeout_us) \
+({ \
+	ktime_t __timeout; \
+	/* timeout should be longer when working with simulator */ \
+	if (hdev->pdev) \
+		__timeout = ktime_add_us(ktime_get(), timeout_us); \
+	else \
+		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+	might_sleep_if(sleep_us); \
+	for (;;) { \
+		/* Verify we read updates done by other cores or by device */ \
+		mb(); \
+		(val) = *((u32 *) (uintptr_t) (addr)); \
+		if (cond) \
+			break; \
+		if (timeout_us && ktime_compare(ktime_get(), __timeout) > 0) { \
+			(val) = *((u32 *) (uintptr_t) (addr)); \
+			break; \
+		} \
+		if (sleep_us) \
+			usleep_range((sleep_us >> 2) + 1, sleep_us); \
+	} \
+	(cond) ? 0 : -ETIMEDOUT; \
+})
 
-#define HL_ENG_BUSY(buf, size, fmt, ...) ({ \
-		if (buf) \
-			snprintf(buf, size, fmt, ##__VA_ARGS__); \
-		false; \
-	})
+#define hl_poll_timeout_device_memory(hdev, addr, val, cond, sleep_us, \
+				timeout_us) \
+({ \
+	ktime_t __timeout; \
+	/* timeout should be longer when working with simulator */ \
+	if (hdev->pdev) \
+		__timeout = ktime_add_us(ktime_get(), timeout_us); \
+	else \
+		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+	might_sleep_if(sleep_us); \
+	for (;;) { \
+		(val) = readl(addr); \
+		if (cond) \
+			break; \
+		if (timeout_us && ktime_compare(ktime_get(), __timeout) > 0) { \
+			(val) = readl(addr); \
+			break; \
+		} \
+		if (sleep_us) \
+			usleep_range((sleep_us >> 2) + 1, sleep_us); \
+	} \
+	(cond) ? 0 : -ETIMEDOUT; \
+})
 
 struct hwmon_chip_info;
 
···
  *                lock here so we can flush user processes which are opening
  *                the device while we are trying to hard reset it
  * @send_cpu_message_lock: enforces only one message in KMD <-> ArmCP queue.
+ * @debug_lock: protects critical section of setting debug mode for device
  * @asic_prop: ASIC specific immutable properties.
  * @asic_funcs: ASIC specific functions.
  * @asic_specific: ASIC specific information to use only from ASIC files.
···
  * @mmu_enable: is MMU enabled.
  * @device_cpu_disabled: is the device CPU disabled (due to timeouts)
  * @dma_mask: the dma mask that was set for this device
+ * @in_debug: is device under debug. This, together with fd_open_cnt, enforces
+ *            that only a single user is configuring the debug infrastructure.
  */
 struct hl_device {
 	struct pci_dev			*pdev;
···
 	/* TODO: remove fd_open_cnt_lock for multiple process support */
 	struct mutex			fd_open_cnt_lock;
 	struct mutex			send_cpu_message_lock;
+	struct mutex			debug_lock;
 	struct asic_fixed_properties	asic_prop;
 	const struct hl_asic_funcs	*asic_funcs;
 	void				*asic_specific;
···
 	u8				init_done;
 	u8				device_cpu_disabled;
 	u8				dma_mask;
+	u8				in_debug;
 
 	/* Parameters for bring-up */
 	u8				mmu_enable;
···
 int hl_device_open(struct inode *inode, struct file *filp);
 bool hl_device_disabled_or_in_reset(struct hl_device *hdev);
 enum hl_device_status hl_device_status(struct hl_device *hdev);
+int hl_device_set_debug_mode(struct hl_device *hdev, bool enable);
 int create_hdev(struct hl_device **dev, struct pci_dev *pdev,
		enum hl_asic_type asic_type, int minor);
 void destroy_hdev(struct hl_device *hdev);
-int hl_poll_timeout_memory(struct hl_device *hdev, u64 addr, u32 timeout_us,
-				u32 *val);
-int hl_poll_timeout_device_memory(struct hl_device *hdev, void __iomem *addr,
-				u32 timeout_us, u32 *val);
 int hl_hw_queues_create(struct hl_device *hdev);
 void hl_hw_queues_destroy(struct hl_device *hdev);
 int hl_hw_queue_send_cb_no_cmpl(struct hl_device *hdev, u32 hw_queue_id,
+37 -33
drivers/misc/habanalabs/habanalabs_drv.c
···
 		return -EPERM;
 	}
 
+	if (hdev->in_debug) {
+		dev_err_ratelimited(hdev->dev,
+			"Can't open %s because it is being debugged by another user\n",
+			dev_name(hdev->dev));
+		mutex_unlock(&hdev->fd_open_cnt_lock);
+		return -EPERM;
+	}
+
 	if (atomic_read(&hdev->fd_open_cnt)) {
 		dev_info_ratelimited(hdev->dev,
-			"Device %s is already attached to application\n",
+			"Can't open %s because another user is working on it\n",
			dev_name(hdev->dev));
 		mutex_unlock(&hdev->fd_open_cnt_lock);
 		return -EBUSY;
···
 	return rc;
 }
 
+static void set_driver_behavior_per_device(struct hl_device *hdev)
+{
+	hdev->mmu_enable = 1;
+	hdev->cpu_enable = 1;
+	hdev->fw_loading = 1;
+	hdev->cpu_queues_enable = 1;
+	hdev->heartbeat = 1;
+
+	hdev->reset_pcilink = 0;
+}
+
 /*
  * create_hdev - create habanalabs device instance
  *
···
 	if (!hdev)
 		return -ENOMEM;
 
-	hdev->major = hl_major;
-	hdev->reset_on_lockup = reset_on_lockup;
-
-	/* Parameters for bring-up - set them to defaults */
-	hdev->mmu_enable = 1;
-	hdev->cpu_enable = 1;
-	hdev->reset_pcilink = 0;
-	hdev->cpu_queues_enable = 1;
-	hdev->fw_loading = 1;
-	hdev->pldm = 0;
-	hdev->heartbeat = 1;
-
-	/* If CPU is disabled, no point in loading FW */
-	if (!hdev->cpu_enable)
-		hdev->fw_loading = 0;
-
-	/* If we don't load FW, no need to initialize CPU queues */
-	if (!hdev->fw_loading)
-		hdev->cpu_queues_enable = 0;
-
-	/* If CPU queues not enabled, no way to do heartbeat */
-	if (!hdev->cpu_queues_enable)
-		hdev->heartbeat = 0;
-
-	if (timeout_locked)
-		hdev->timeout_jiffies = msecs_to_jiffies(timeout_locked * 1000);
-	else
-		hdev->timeout_jiffies = MAX_SCHEDULE_TIMEOUT;
-
-	hdev->disabled = true;
-	hdev->pdev = pdev; /* can be NULL in case of simulator device */
-
+	/* First, we must find out which ASIC are we handling. This is needed
+	 * to configure the behavior of the driver (kernel parameters)
+	 */
 	if (pdev) {
 		hdev->asic_type = get_asic_type(pdev->device);
 		if (hdev->asic_type == ASIC_INVALID) {
···
 	} else {
 		hdev->asic_type = asic_type;
 	}
+
+	hdev->major = hl_major;
+	hdev->reset_on_lockup = reset_on_lockup;
+	hdev->pldm = 0;
+
+	set_driver_behavior_per_device(hdev);
+
+	if (timeout_locked)
+		hdev->timeout_jiffies = msecs_to_jiffies(timeout_locked * 1000);
+	else
+		hdev->timeout_jiffies = MAX_SCHEDULE_TIMEOUT;
+
+	hdev->disabled = true;
+	hdev->pdev = pdev; /* can be NULL in case of simulator device */
 
 	/* Set default DMA mask to 32 bits */
 	hdev->dma_mask = 32;
+10 -1
drivers/misc/habanalabs/habanalabs_ioctl.c
···
 	if ((!max_size) || (!out))
 		return -EINVAL;
 
-	hw_idle.is_idle = hdev->asic_funcs->is_device_idle(hdev, NULL, 0);
+	hw_idle.is_idle = hdev->asic_funcs->is_device_idle(hdev,
+					&hw_idle.busy_engines_mask, NULL);
 
 	return copy_to_user(out, &hw_idle,
		min((size_t) max_size, sizeof(hw_idle))) ? -EFAULT : 0;
···
 	case HL_DEBUG_OP_BMON:
 	case HL_DEBUG_OP_SPMU:
 	case HL_DEBUG_OP_TIMESTAMP:
+		if (!hdev->in_debug) {
+			dev_err_ratelimited(hdev->dev,
+				"Rejecting debug configuration request because device not in debug mode\n");
+			return -EFAULT;
+		}
 		args->input_size =
			min(args->input_size, hl_debug_struct_size[args->op]);
 		rc = debug_coresight(hdev, args);
+		break;
+	case HL_DEBUG_OP_SET_MODE:
+		rc = hl_device_set_debug_mode(hdev, (bool) args->enable);
 		break;
 	default:
 		dev_err(hdev->dev, "Invalid request %d\n", args->op);
+1 -1
drivers/misc/habanalabs/hw_queue.c
···
 	cq = &hdev->completion_queue[q->hw_queue_id];
 	cq_addr = cq->bus_address + cq->pi * sizeof(struct hl_cq_entry);
 
-	hdev->asic_funcs->add_end_of_cb_packets(cb->kernel_address, len,
+	hdev->asic_funcs->add_end_of_cb_packets(hdev, cb->kernel_address, len,
						cq_addr,
						__le32_to_cpu(cq_pkt.data),
						q->hw_queue_id);
+418
drivers/misc/habanalabs/include/goya/asic_reg/dma_ch_0_masks.h
···
+/* SPDX-License-Identifier: GPL-2.0
+ *
+ * Copyright 2016-2018 HabanaLabs, Ltd.
+ * All Rights Reserved.
+ *
+ */
+
+/************************************
+ ** This is an auto-generated file **
+ **       DO NOT EDIT BELOW        **
+ ************************************/
+
+#ifndef ASIC_REG_DMA_CH_0_MASKS_H_
+#define ASIC_REG_DMA_CH_0_MASKS_H_
+
+/*
+ *****************************************
+ *   DMA_CH_0 (Prototype: DMA_CH)
+ *****************************************
+ */
+
+/* DMA_CH_0_CFG0 */
+#define DMA_CH_0_CFG0_RD_MAX_OUTSTAND_SHIFT 0
+#define DMA_CH_0_CFG0_RD_MAX_OUTSTAND_MASK 0x3FF
+#define DMA_CH_0_CFG0_WR_MAX_OUTSTAND_SHIFT 16
+#define DMA_CH_0_CFG0_WR_MAX_OUTSTAND_MASK 0xFFF0000
+
+/* DMA_CH_0_CFG1 */
+#define DMA_CH_0_CFG1_RD_BUF_MAX_SIZE_SHIFT 0
+#define DMA_CH_0_CFG1_RD_BUF_MAX_SIZE_MASK 0x3FF
+
+/* DMA_CH_0_ERRMSG_ADDR_LO */
+#define DMA_CH_0_ERRMSG_ADDR_LO_VAL_SHIFT 0
+#define DMA_CH_0_ERRMSG_ADDR_LO_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_ERRMSG_ADDR_HI */
+#define DMA_CH_0_ERRMSG_ADDR_HI_VAL_SHIFT 0
+#define DMA_CH_0_ERRMSG_ADDR_HI_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_ERRMSG_WDATA */
+#define DMA_CH_0_ERRMSG_WDATA_VAL_SHIFT 0
+#define DMA_CH_0_ERRMSG_WDATA_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_RD_COMP_ADDR_LO */
+#define DMA_CH_0_RD_COMP_ADDR_LO_VAL_SHIFT 0
+#define DMA_CH_0_RD_COMP_ADDR_LO_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_RD_COMP_ADDR_HI */
+#define DMA_CH_0_RD_COMP_ADDR_HI_VAL_SHIFT 0
+#define DMA_CH_0_RD_COMP_ADDR_HI_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_RD_COMP_WDATA */
+#define DMA_CH_0_RD_COMP_WDATA_VAL_SHIFT 0
+#define DMA_CH_0_RD_COMP_WDATA_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_WR_COMP_ADDR_LO */
+#define DMA_CH_0_WR_COMP_ADDR_LO_VAL_SHIFT 0
+#define DMA_CH_0_WR_COMP_ADDR_LO_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_WR_COMP_ADDR_HI */
+#define DMA_CH_0_WR_COMP_ADDR_HI_VAL_SHIFT 0
+#define DMA_CH_0_WR_COMP_ADDR_HI_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_WR_COMP_WDATA */
+#define DMA_CH_0_WR_COMP_WDATA_VAL_SHIFT 0
+#define DMA_CH_0_WR_COMP_WDATA_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_LDMA_SRC_ADDR_LO */
+#define DMA_CH_0_LDMA_SRC_ADDR_LO_VAL_SHIFT 0
+#define DMA_CH_0_LDMA_SRC_ADDR_LO_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_LDMA_SRC_ADDR_HI */
+#define DMA_CH_0_LDMA_SRC_ADDR_HI_VAL_SHIFT 0
+#define DMA_CH_0_LDMA_SRC_ADDR_HI_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_LDMA_DST_ADDR_LO */
+#define DMA_CH_0_LDMA_DST_ADDR_LO_VAL_SHIFT 0
+#define DMA_CH_0_LDMA_DST_ADDR_LO_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_LDMA_DST_ADDR_HI */
+#define DMA_CH_0_LDMA_DST_ADDR_HI_VAL_SHIFT 0
+#define DMA_CH_0_LDMA_DST_ADDR_HI_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_LDMA_TSIZE */
+#define DMA_CH_0_LDMA_TSIZE_VAL_SHIFT 0
+#define DMA_CH_0_LDMA_TSIZE_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_COMIT_TRANSFER */
+#define DMA_CH_0_COMIT_TRANSFER_PCI_UPS_WKORDR_SHIFT 0
+#define DMA_CH_0_COMIT_TRANSFER_PCI_UPS_WKORDR_MASK 0x1
+#define DMA_CH_0_COMIT_TRANSFER_RD_COMP_EN_SHIFT 1
+#define DMA_CH_0_COMIT_TRANSFER_RD_COMP_EN_MASK 0x2
+#define DMA_CH_0_COMIT_TRANSFER_WR_COMP_EN_SHIFT 2
+#define DMA_CH_0_COMIT_TRANSFER_WR_COMP_EN_MASK 0x4
+#define DMA_CH_0_COMIT_TRANSFER_NOSNOOP_SHIFT 3
+#define DMA_CH_0_COMIT_TRANSFER_NOSNOOP_MASK 0x8
+#define DMA_CH_0_COMIT_TRANSFER_SRC_ADDR_INC_DIS_SHIFT 4
+#define DMA_CH_0_COMIT_TRANSFER_SRC_ADDR_INC_DIS_MASK 0x10
+#define DMA_CH_0_COMIT_TRANSFER_DST_ADDR_INC_DIS_SHIFT 5
+#define DMA_CH_0_COMIT_TRANSFER_DST_ADDR_INC_DIS_MASK 0x20
+#define DMA_CH_0_COMIT_TRANSFER_MEM_SET_SHIFT 6
+#define DMA_CH_0_COMIT_TRANSFER_MEM_SET_MASK 0x40
+#define DMA_CH_0_COMIT_TRANSFER_MOD_TENSOR_SHIFT 15
+#define DMA_CH_0_COMIT_TRANSFER_MOD_TENSOR_MASK 0x8000
+#define DMA_CH_0_COMIT_TRANSFER_CTL_SHIFT 16
+#define DMA_CH_0_COMIT_TRANSFER_CTL_MASK 0xFFFF0000
+
+/* DMA_CH_0_STS0 */
+#define DMA_CH_0_STS0_DMA_BUSY_SHIFT 0
+#define DMA_CH_0_STS0_DMA_BUSY_MASK 0x1
+#define DMA_CH_0_STS0_RD_STS_CTX_FULL_SHIFT 1
+#define DMA_CH_0_STS0_RD_STS_CTX_FULL_MASK 0x2
+#define DMA_CH_0_STS0_WR_STS_CTX_FULL_SHIFT 2
+#define DMA_CH_0_STS0_WR_STS_CTX_FULL_MASK 0x4
+
+/* DMA_CH_0_STS1 */
+#define DMA_CH_0_STS1_RD_STS_CTX_CNT_SHIFT 0
+#define DMA_CH_0_STS1_RD_STS_CTX_CNT_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_STS2 */
+#define DMA_CH_0_STS2_WR_STS_CTX_CNT_SHIFT 0
+#define DMA_CH_0_STS2_WR_STS_CTX_CNT_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_STS3 */
+#define DMA_CH_0_STS3_RD_STS_TRN_CNT_SHIFT 0
+#define DMA_CH_0_STS3_RD_STS_TRN_CNT_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_STS4 */
+#define DMA_CH_0_STS4_WR_STS_TRN_CNT_SHIFT 0
+#define DMA_CH_0_STS4_WR_STS_TRN_CNT_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_SRC_ADDR_LO_STS */
+#define DMA_CH_0_SRC_ADDR_LO_STS_VAL_SHIFT 0
+#define DMA_CH_0_SRC_ADDR_LO_STS_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_SRC_ADDR_HI_STS */
+#define DMA_CH_0_SRC_ADDR_HI_STS_VAL_SHIFT 0
+#define DMA_CH_0_SRC_ADDR_HI_STS_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_SRC_TSIZE_STS */
+#define DMA_CH_0_SRC_TSIZE_STS_VAL_SHIFT 0
+#define DMA_CH_0_SRC_TSIZE_STS_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_DST_ADDR_LO_STS */
+#define DMA_CH_0_DST_ADDR_LO_STS_VAL_SHIFT 0
+#define DMA_CH_0_DST_ADDR_LO_STS_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_DST_ADDR_HI_STS */
+#define DMA_CH_0_DST_ADDR_HI_STS_VAL_SHIFT 0
+#define DMA_CH_0_DST_ADDR_HI_STS_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_DST_TSIZE_STS */
+#define DMA_CH_0_DST_TSIZE_STS_VAL_SHIFT 0
+#define DMA_CH_0_DST_TSIZE_STS_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_RD_RATE_LIM_EN */
+#define DMA_CH_0_RD_RATE_LIM_EN_VAL_SHIFT 0
+#define DMA_CH_0_RD_RATE_LIM_EN_VAL_MASK 0x1
+
+/* DMA_CH_0_RD_RATE_LIM_RST_TOKEN */
+#define DMA_CH_0_RD_RATE_LIM_RST_TOKEN_VAL_SHIFT 0
+#define DMA_CH_0_RD_RATE_LIM_RST_TOKEN_VAL_MASK 0xFFFF
+
+/* DMA_CH_0_RD_RATE_LIM_SAT */
+#define DMA_CH_0_RD_RATE_LIM_SAT_VAL_SHIFT 0
+#define DMA_CH_0_RD_RATE_LIM_SAT_VAL_MASK 0xFFFF
+
+/* DMA_CH_0_RD_RATE_LIM_TOUT */
+#define DMA_CH_0_RD_RATE_LIM_TOUT_VAL_SHIFT 0
+#define DMA_CH_0_RD_RATE_LIM_TOUT_VAL_MASK 0x7FFFFFFF
+
+/* DMA_CH_0_WR_RATE_LIM_EN */
+#define DMA_CH_0_WR_RATE_LIM_EN_VAL_SHIFT 0
+#define DMA_CH_0_WR_RATE_LIM_EN_VAL_MASK 0x1
+
+/* DMA_CH_0_WR_RATE_LIM_RST_TOKEN */
+#define DMA_CH_0_WR_RATE_LIM_RST_TOKEN_VAL_SHIFT 0
+#define DMA_CH_0_WR_RATE_LIM_RST_TOKEN_VAL_MASK 0xFFFF
+
+/* DMA_CH_0_WR_RATE_LIM_SAT */
+#define DMA_CH_0_WR_RATE_LIM_SAT_VAL_SHIFT 0
+#define DMA_CH_0_WR_RATE_LIM_SAT_VAL_MASK 0xFFFF
+
+/* DMA_CH_0_WR_RATE_LIM_TOUT */
+#define DMA_CH_0_WR_RATE_LIM_TOUT_VAL_SHIFT 0
+#define DMA_CH_0_WR_RATE_LIM_TOUT_VAL_MASK 0x7FFFFFFF
+
+/* DMA_CH_0_CFG2 */
+#define DMA_CH_0_CFG2_FORCE_WORD_SHIFT 0
+#define DMA_CH_0_CFG2_FORCE_WORD_MASK 0x1
+
+/* DMA_CH_0_TDMA_CTL */
+#define DMA_CH_0_TDMA_CTL_DTYPE_SHIFT 0
+#define DMA_CH_0_TDMA_CTL_DTYPE_MASK 0x7
+
+/* DMA_CH_0_TDMA_SRC_BASE_ADDR_LO */
+#define DMA_CH_0_TDMA_SRC_BASE_ADDR_LO_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_BASE_ADDR_LO_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_BASE_ADDR_HI */
+#define DMA_CH_0_TDMA_SRC_BASE_ADDR_HI_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_BASE_ADDR_HI_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_BASE_0 */
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_SIZE_0 */
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_0 */
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_START_OFFSET_0 */
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_STRIDE_0 */
+#define DMA_CH_0_TDMA_SRC_STRIDE_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_STRIDE_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_BASE_1 */
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_SIZE_1 */
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_1 */
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_START_OFFSET_1 */
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_STRIDE_1 */
+#define DMA_CH_0_TDMA_SRC_STRIDE_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_STRIDE_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_BASE_2 */
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_2_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_2_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_SIZE_2 */
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_2_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_2_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_2 */
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_2_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_2_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_START_OFFSET_2 */
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_2_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_2_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_STRIDE_2 */
+#define DMA_CH_0_TDMA_SRC_STRIDE_2_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_STRIDE_2_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_BASE_3 */
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_3_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_3_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_SIZE_3 */
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_3_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_3_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_3 */
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_3_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_3_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_START_OFFSET_3 */
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_3_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_3_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_STRIDE_3 */
+#define DMA_CH_0_TDMA_SRC_STRIDE_3_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_STRIDE_3_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_BASE_4 */
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_4_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_BASE_4_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_ROI_SIZE_4 */
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_4_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_ROI_SIZE_4_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_4 */
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_4_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_VALID_ELEMENTS_4_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_START_OFFSET_4 */
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_4_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_START_OFFSET_4_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_SRC_STRIDE_4 */
+#define DMA_CH_0_TDMA_SRC_STRIDE_4_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_SRC_STRIDE_4_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_BASE_ADDR_LO */
+#define DMA_CH_0_TDMA_DST_BASE_ADDR_LO_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_BASE_ADDR_LO_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_BASE_ADDR_HI */
+#define DMA_CH_0_TDMA_DST_BASE_ADDR_HI_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_BASE_ADDR_HI_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_ROI_BASE_0 */
+#define DMA_CH_0_TDMA_DST_ROI_BASE_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_ROI_BASE_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_ROI_SIZE_0 */
+#define DMA_CH_0_TDMA_DST_ROI_SIZE_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_ROI_SIZE_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_VALID_ELEMENTS_0 */
+#define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_START_OFFSET_0 */
+#define DMA_CH_0_TDMA_DST_START_OFFSET_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_START_OFFSET_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_STRIDE_0 */
+#define DMA_CH_0_TDMA_DST_STRIDE_0_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_STRIDE_0_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_ROI_BASE_1 */
+#define DMA_CH_0_TDMA_DST_ROI_BASE_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_ROI_BASE_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_ROI_SIZE_1 */
+#define DMA_CH_0_TDMA_DST_ROI_SIZE_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_ROI_SIZE_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_VALID_ELEMENTS_1 */
+#define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_START_OFFSET_1 */
+#define DMA_CH_0_TDMA_DST_START_OFFSET_1_VAL_SHIFT 0
+#define DMA_CH_0_TDMA_DST_START_OFFSET_1_VAL_MASK 0xFFFFFFFF
+
+/* DMA_CH_0_TDMA_DST_STRIDE_1 */
+#define DMA_CH_0_TDMA_DST_STRIDE_1_VAL_SHIFT 0
+#define
DMA_CH_0_TDMA_DST_STRIDE_1_VAL_MASK 0xFFFFFFFF 351 + 352 + /* DMA_CH_0_TDMA_DST_ROI_BASE_2 */ 353 + #define DMA_CH_0_TDMA_DST_ROI_BASE_2_VAL_SHIFT 0 354 + #define DMA_CH_0_TDMA_DST_ROI_BASE_2_VAL_MASK 0xFFFFFFFF 355 + 356 + /* DMA_CH_0_TDMA_DST_ROI_SIZE_2 */ 357 + #define DMA_CH_0_TDMA_DST_ROI_SIZE_2_VAL_SHIFT 0 358 + #define DMA_CH_0_TDMA_DST_ROI_SIZE_2_VAL_MASK 0xFFFFFFFF 359 + 360 + /* DMA_CH_0_TDMA_DST_VALID_ELEMENTS_2 */ 361 + #define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_2_VAL_SHIFT 0 362 + #define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_2_VAL_MASK 0xFFFFFFFF 363 + 364 + /* DMA_CH_0_TDMA_DST_START_OFFSET_2 */ 365 + #define DMA_CH_0_TDMA_DST_START_OFFSET_2_VAL_SHIFT 0 366 + #define DMA_CH_0_TDMA_DST_START_OFFSET_2_VAL_MASK 0xFFFFFFFF 367 + 368 + /* DMA_CH_0_TDMA_DST_STRIDE_2 */ 369 + #define DMA_CH_0_TDMA_DST_STRIDE_2_VAL_SHIFT 0 370 + #define DMA_CH_0_TDMA_DST_STRIDE_2_VAL_MASK 0xFFFFFFFF 371 + 372 + /* DMA_CH_0_TDMA_DST_ROI_BASE_3 */ 373 + #define DMA_CH_0_TDMA_DST_ROI_BASE_3_VAL_SHIFT 0 374 + #define DMA_CH_0_TDMA_DST_ROI_BASE_3_VAL_MASK 0xFFFFFFFF 375 + 376 + /* DMA_CH_0_TDMA_DST_ROI_SIZE_3 */ 377 + #define DMA_CH_0_TDMA_DST_ROI_SIZE_3_VAL_SHIFT 0 378 + #define DMA_CH_0_TDMA_DST_ROI_SIZE_3_VAL_MASK 0xFFFFFFFF 379 + 380 + /* DMA_CH_0_TDMA_DST_VALID_ELEMENTS_3 */ 381 + #define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_3_VAL_SHIFT 0 382 + #define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_3_VAL_MASK 0xFFFFFFFF 383 + 384 + /* DMA_CH_0_TDMA_DST_START_OFFSET_3 */ 385 + #define DMA_CH_0_TDMA_DST_START_OFFSET_3_VAL_SHIFT 0 386 + #define DMA_CH_0_TDMA_DST_START_OFFSET_3_VAL_MASK 0xFFFFFFFF 387 + 388 + /* DMA_CH_0_TDMA_DST_STRIDE_3 */ 389 + #define DMA_CH_0_TDMA_DST_STRIDE_3_VAL_SHIFT 0 390 + #define DMA_CH_0_TDMA_DST_STRIDE_3_VAL_MASK 0xFFFFFFFF 391 + 392 + /* DMA_CH_0_TDMA_DST_ROI_BASE_4 */ 393 + #define DMA_CH_0_TDMA_DST_ROI_BASE_4_VAL_SHIFT 0 394 + #define DMA_CH_0_TDMA_DST_ROI_BASE_4_VAL_MASK 0xFFFFFFFF 395 + 396 + /* DMA_CH_0_TDMA_DST_ROI_SIZE_4 */ 397 + #define 
DMA_CH_0_TDMA_DST_ROI_SIZE_4_VAL_SHIFT 0 398 + #define DMA_CH_0_TDMA_DST_ROI_SIZE_4_VAL_MASK 0xFFFFFFFF 399 + 400 + /* DMA_CH_0_TDMA_DST_VALID_ELEMENTS_4 */ 401 + #define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_4_VAL_SHIFT 0 402 + #define DMA_CH_0_TDMA_DST_VALID_ELEMENTS_4_VAL_MASK 0xFFFFFFFF 403 + 404 + /* DMA_CH_0_TDMA_DST_START_OFFSET_4 */ 405 + #define DMA_CH_0_TDMA_DST_START_OFFSET_4_VAL_SHIFT 0 406 + #define DMA_CH_0_TDMA_DST_START_OFFSET_4_VAL_MASK 0xFFFFFFFF 407 + 408 + /* DMA_CH_0_TDMA_DST_STRIDE_4 */ 409 + #define DMA_CH_0_TDMA_DST_STRIDE_4_VAL_SHIFT 0 410 + #define DMA_CH_0_TDMA_DST_STRIDE_4_VAL_MASK 0xFFFFFFFF 411 + 412 + /* DMA_CH_0_MEM_INIT_BUSY */ 413 + #define DMA_CH_0_MEM_INIT_BUSY_SBC_DATA_SHIFT 0 414 + #define DMA_CH_0_MEM_INIT_BUSY_SBC_DATA_MASK 0xFF 415 + #define DMA_CH_0_MEM_INIT_BUSY_SBC_MD_SHIFT 8 416 + #define DMA_CH_0_MEM_INIT_BUSY_SBC_MD_MASK 0x100 417 + 418 + #endif /* ASIC_REG_DMA_CH_0_MASKS_H_ */
+1
drivers/misc/habanalabs/include/goya/asic_reg/goya_regs.h
··· 88 88 #include "psoc_global_conf_masks.h" 89 89 #include "dma_macro_masks.h" 90 90 #include "dma_qm_0_masks.h" 91 + #include "dma_ch_0_masks.h" 91 92 #include "tpc0_qm_masks.h" 92 93 #include "tpc0_cmdq_masks.h" 93 94 #include "mme_qm_masks.h"
+1 -12
drivers/misc/habanalabs/memory.c
··· 1657 1657 struct hl_vm *vm = &hdev->vm; 1658 1658 int rc; 1659 1659 1660 - rc = hl_mmu_init(hdev); 1661 - if (rc) { 1662 - dev_err(hdev->dev, "Failed to init MMU\n"); 1663 - return rc; 1664 - } 1665 - 1666 1660 vm->dram_pg_pool = gen_pool_create(__ffs(prop->dram_page_size), -1); 1667 1661 if (!vm->dram_pg_pool) { 1668 1662 dev_err(hdev->dev, "Failed to create dram page pool\n"); 1669 - rc = -ENOMEM; 1670 - goto pool_create_err; 1663 + return -ENOMEM; 1671 1664 } 1672 1665 1673 1666 kref_init(&vm->dram_pg_pool_refcount); ··· 1686 1693 1687 1694 pool_add_err: 1688 1695 gen_pool_destroy(vm->dram_pg_pool); 1689 - pool_create_err: 1690 - hl_mmu_fini(hdev); 1691 1696 1692 1697 return rc; 1693 1698 } ··· 1714 1723 if (kref_put(&vm->dram_pg_pool_refcount, dram_pg_pool_do_release) != 1) 1715 1724 dev_warn(hdev->dev, "dram_pg_pool was not destroyed on %s\n", 1716 1725 __func__); 1717 - 1718 - hl_mmu_fini(hdev); 1719 1726 1720 1727 vm->init_done = false; 1721 1728 }
+11 -9
drivers/misc/habanalabs/mmu.c
··· 241 241 hop2_pte_addr, hop3_pte_addr, pte_val; 242 242 int rc, i, j, hop3_allocated = 0; 243 243 244 - if (!hdev->dram_supports_virtual_memory || 245 - !hdev->dram_default_page_mapping) 244 + if ((!hdev->dram_supports_virtual_memory) || 245 + (!hdev->dram_default_page_mapping) || 246 + (ctx->asid == HL_KERNEL_ASID_ID)) 246 247 return 0; 247 248 248 249 num_of_hop3 = prop->dram_size_for_default_page_mapping; ··· 341 340 hop2_pte_addr, hop3_pte_addr; 342 341 int i, j; 343 342 344 - if (!hdev->dram_supports_virtual_memory || 345 - !hdev->dram_default_page_mapping) 343 + if ((!hdev->dram_supports_virtual_memory) || 344 + (!hdev->dram_default_page_mapping) || 345 + (ctx->asid == HL_KERNEL_ASID_ID)) 346 346 return; 347 347 348 348 num_of_hop3 = prop->dram_size_for_default_page_mapping; ··· 387 385 * @hdev: habanalabs device structure. 388 386 * 389 387 * This function does the following: 390 - * - Allocate max_asid zeroed hop0 pgts so no mapping is available. 391 - * - Enable MMU in H/W. 392 - * - Invalidate the MMU cache. 393 388 * - Create a pool of pages for pgt_infos. 394 - * 395 - * This function depends on DMA QMAN to be working! 389 + * - Create a shadow table for pgt 396 390 * 397 391 * Return: 0 for success, non-zero for failure. 398 392 */ ··· 912 914 913 915 return -EFAULT; 914 916 } 917 + 918 + WARN_ONCE((phys_addr & (real_page_size - 1)), 919 + "Mapping 0x%llx with page size of 0x%x is erroneous! Address must be divisible by page size", 920 + phys_addr, real_page_size); 915 921 916 922 npages = page_size / real_page_size; 917 923 real_virt_addr = virt_addr;
+9 -1
drivers/misc/habanalabs/pci.c
··· 10 10 11 11 #include <linux/pci.h> 12 12 13 + #define HL_PLDM_PCI_ELBI_TIMEOUT_MSEC (HL_PCI_ELBI_TIMEOUT_MSEC * 10) 14 + 13 15 /** 14 16 * hl_pci_bars_map() - Map PCI BARs. 15 17 * @hdev: Pointer to hl_device structure. ··· 90 88 { 91 89 struct pci_dev *pdev = hdev->pdev; 92 90 ktime_t timeout; 91 + u64 msec; 93 92 u32 val; 93 + 94 + if (hdev->pldm) 95 + msec = HL_PLDM_PCI_ELBI_TIMEOUT_MSEC; 96 + else 97 + msec = HL_PCI_ELBI_TIMEOUT_MSEC; 94 98 95 99 /* Clear previous status */ 96 100 pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, 0); ··· 106 98 pci_write_config_dword(pdev, mmPCI_CONFIG_ELBI_CTRL, 107 99 PCI_CONFIG_ELBI_CTRL_WRITE); 108 100 109 - timeout = ktime_add_ms(ktime_get(), 10); 101 + timeout = ktime_add_ms(ktime_get(), msec); 110 102 for (;;) { 111 103 pci_read_config_dword(pdev, mmPCI_CONFIG_ELBI_STS, &val); 112 104 if (val & PCI_CONFIG_ELBI_STS_MASK)
-4
drivers/misc/habanalabs/sysfs.c
··· 328 328 { 329 329 struct hl_device *hdev = dev_get_drvdata(dev); 330 330 331 - /* Use dummy, fixed address for simulator */ 332 - if (!hdev->pdev) 333 - return sprintf(buf, "0000:%02d:00.0\n", hdev->id); 334 - 335 331 return sprintf(buf, "%04x:%02x:%02x.%x\n", 336 332 pci_domain_nr(hdev->pdev->bus), 337 333 hdev->pdev->bus->number,
+2 -2
drivers/misc/isl29003.c
··· 3 3 * isl29003.c - Linux kernel module for 4 4 * Intersil ISL29003 ambient light sensor 5 5 * 6 - * See file:Documentation/misc-devices/isl29003 6 + * See file:Documentation/misc-devices/isl29003.rst 7 7 * 8 8 * Copyright (c) 2009 Daniel Mack <daniel@caiaq.de> 9 9 * ··· 377 377 static int isl29003_probe(struct i2c_client *client, 378 378 const struct i2c_device_id *id) 379 379 { 380 - struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent); 380 + struct i2c_adapter *adapter = client->adapter; 381 381 struct isl29003_data *data; 382 382 int err = 0; 383 383
-2
drivers/misc/lis3lv02d/Kconfig
··· 7 7 tristate "STMicroeletronics LIS3LV02Dx three-axis digital accelerometer (SPI)" 8 8 depends on !ACPI && SPI_MASTER && INPUT 9 9 select SENSORS_LIS3LV02D 10 - default n 11 10 help 12 11 This driver provides support for the LIS3LV02Dx accelerometer connected 13 12 via SPI. The accelerometer data is readable via ··· 23 24 tristate "STMicroeletronics LIS3LV02Dx three-axis digital accelerometer (I2C)" 24 25 depends on I2C && INPUT 25 26 select SENSORS_LIS3LV02D 26 - default n 27 27 help 28 28 This driver provides support for the LIS3LV02Dx accelerometer connected 29 29 via I2C. The accelerometer data is readable via
+1 -2
drivers/misc/lkdtm/Makefile
··· 15 15 16 16 OBJCOPYFLAGS := 17 17 OBJCOPYFLAGS_rodata_objcopy.o := \ 18 - --set-section-flags .text=alloc,readonly \ 19 - --rename-section .text=.rodata 18 + --rename-section .text=.rodata,alloc,readonly,load 20 19 targets += rodata.o rodata_objcopy.o 21 20 $(obj)/rodata_objcopy.o: $(obj)/rodata.o FORCE 22 21 $(call if_changed,objcopy)
+66
drivers/misc/lkdtm/bugs.c
··· 266 266 267 267 pr_err("FAIL: accessed page after stack!\n"); 268 268 } 269 + 270 + void lkdtm_UNSET_SMEP(void) 271 + { 272 + #ifdef CONFIG_X86_64 273 + #define MOV_CR4_DEPTH 64 274 + void (*direct_write_cr4)(unsigned long val); 275 + unsigned char *insn; 276 + unsigned long cr4; 277 + int i; 278 + 279 + cr4 = native_read_cr4(); 280 + 281 + if ((cr4 & X86_CR4_SMEP) != X86_CR4_SMEP) { 282 + pr_err("FAIL: SMEP not in use\n"); 283 + return; 284 + } 285 + cr4 &= ~(X86_CR4_SMEP); 286 + 287 + pr_info("trying to clear SMEP normally\n"); 288 + native_write_cr4(cr4); 289 + if (cr4 == native_read_cr4()) { 290 + pr_err("FAIL: pinning SMEP failed!\n"); 291 + cr4 |= X86_CR4_SMEP; 292 + pr_info("restoring SMEP\n"); 293 + native_write_cr4(cr4); 294 + return; 295 + } 296 + pr_info("ok: SMEP did not get cleared\n"); 297 + 298 + /* 299 + * To test the post-write pinning verification we need to call 300 + * directly into the middle of native_write_cr4() where the 301 + * cr4 write happens, skipping any pinning. This searches for 302 + * the cr4 writing instruction. 
303 + */ 304 + insn = (unsigned char *)native_write_cr4; 305 + for (i = 0; i < MOV_CR4_DEPTH; i++) { 306 + /* mov %rdi, %cr4 */ 307 + if (insn[i] == 0x0f && insn[i+1] == 0x22 && insn[i+2] == 0xe7) 308 + break; 309 + /* mov %rdi,%rax; mov %rax, %cr4 */ 310 + if (insn[i] == 0x48 && insn[i+1] == 0x89 && 311 + insn[i+2] == 0xf8 && insn[i+3] == 0x0f && 312 + insn[i+4] == 0x22 && insn[i+5] == 0xe0) 313 + break; 314 + } 315 + if (i >= MOV_CR4_DEPTH) { 316 + pr_info("ok: cannot locate cr4 writing call gadget\n"); 317 + return; 318 + } 319 + direct_write_cr4 = (void *)(insn + i); 320 + 321 + pr_info("trying to clear SMEP with call gadget\n"); 322 + direct_write_cr4(cr4); 323 + if (native_read_cr4() & X86_CR4_SMEP) { 324 + pr_info("ok: SMEP removal was reverted\n"); 325 + } else { 326 + pr_err("FAIL: cleared SMEP not detected!\n"); 327 + cr4 |= X86_CR4_SMEP; 328 + pr_info("restoring SMEP\n"); 329 + native_write_cr4(cr4); 330 + } 331 + #else 332 + pr_err("FAIL: this test is x86_64-only\n"); 333 + #endif 334 + }
+1
drivers/misc/lkdtm/core.c
··· 114 114 CRASHTYPE(CORRUPT_USER_DS), 115 115 CRASHTYPE(STACK_GUARD_PAGE_LEADING), 116 116 CRASHTYPE(STACK_GUARD_PAGE_TRAILING), 117 + CRASHTYPE(UNSET_SMEP), 117 118 CRASHTYPE(UNALIGNED_LOAD_STORE_WRITE), 118 119 CRASHTYPE(OVERWRITE_ALLOCATION), 119 120 CRASHTYPE(WRITE_AFTER_FREE),
+1
drivers/misc/lkdtm/lkdtm.h
··· 26 26 void lkdtm_CORRUPT_USER_DS(void); 27 27 void lkdtm_STACK_GUARD_PAGE_LEADING(void); 28 28 void lkdtm_STACK_GUARD_PAGE_TRAILING(void); 29 + void lkdtm_UNSET_SMEP(void); 29 30 30 31 /* lkdtm_heap.c */ 31 32 void lkdtm_OVERWRITE_ALLOCATION(void);
+51 -131
drivers/misc/mei/debugfs.c
··· 8 8 #include <linux/kernel.h> 9 9 #include <linux/device.h> 10 10 #include <linux/debugfs.h> 11 + #include <linux/seq_file.h> 11 12 12 13 #include <linux/mei.h> 13 14 ··· 16 15 #include "client.h" 17 16 #include "hw.h" 18 17 19 - static ssize_t mei_dbgfs_read_meclients(struct file *fp, char __user *ubuf, 20 - size_t cnt, loff_t *ppos) 18 + static int mei_dbgfs_meclients_show(struct seq_file *m, void *unused) 21 19 { 22 - struct mei_device *dev = fp->private_data; 20 + struct mei_device *dev = m->private; 23 21 struct mei_me_client *me_cl; 24 - size_t bufsz = 1; 25 - char *buf; 26 22 int i = 0; 27 - int pos = 0; 28 - int ret; 29 23 30 - #define HDR \ 31 - " |id|fix| UUID |con|msg len|sb|refc|\n" 24 + if (!dev) 25 + return -ENODEV; 32 26 33 27 down_read(&dev->me_clients_rwsem); 34 - list_for_each_entry(me_cl, &dev->me_clients, list) 35 - bufsz++; 36 28 37 - bufsz *= sizeof(HDR) + 1; 38 - buf = kzalloc(bufsz, GFP_KERNEL); 39 - if (!buf) { 40 - up_read(&dev->me_clients_rwsem); 41 - return -ENOMEM; 42 - } 43 - 44 - pos += scnprintf(buf + pos, bufsz - pos, HDR); 45 - #undef HDR 29 + seq_puts(m, " |id|fix| UUID |con|msg len|sb|refc|\n"); 46 30 47 31 /* if the driver is not enabled the list won't be consistent */ 48 32 if (dev->dev_state != MEI_DEV_ENABLED) 49 33 goto out; 50 34 51 35 list_for_each_entry(me_cl, &dev->me_clients, list) { 36 + if (!mei_me_cl_get(me_cl)) 37 + continue; 52 38 53 - if (mei_me_cl_get(me_cl)) { 54 - pos += scnprintf(buf + pos, bufsz - pos, 55 - "%2d|%2d|%3d|%pUl|%3d|%7d|%2d|%4d|\n", 56 - i++, me_cl->client_id, 57 - me_cl->props.fixed_address, 58 - &me_cl->props.protocol_name, 59 - me_cl->props.max_number_of_connections, 60 - me_cl->props.max_msg_length, 61 - me_cl->props.single_recv_buf, 62 - kref_read(&me_cl->refcnt)); 63 - 64 - mei_me_cl_put(me_cl); 65 - } 39 + seq_printf(m, "%2d|%2d|%3d|%pUl|%3d|%7d|%2d|%4d|\n", 40 + i++, me_cl->client_id, 41 + me_cl->props.fixed_address, 42 + &me_cl->props.protocol_name, 43 + 
me_cl->props.max_number_of_connections, 44 + me_cl->props.max_msg_length, 45 + me_cl->props.single_recv_buf, 46 + kref_read(&me_cl->refcnt)); 47 + mei_me_cl_put(me_cl); 66 48 } 67 49 68 50 out: 69 51 up_read(&dev->me_clients_rwsem); 70 - ret = simple_read_from_buffer(ubuf, cnt, ppos, buf, pos); 71 - kfree(buf); 72 - return ret; 52 + return 0; 73 53 } 54 + DEFINE_SHOW_ATTRIBUTE(mei_dbgfs_meclients); 74 55 75 - static const struct file_operations mei_dbgfs_fops_meclients = { 76 - .open = simple_open, 77 - .read = mei_dbgfs_read_meclients, 78 - .llseek = generic_file_llseek, 79 - }; 80 - 81 - static ssize_t mei_dbgfs_read_active(struct file *fp, char __user *ubuf, 82 - size_t cnt, loff_t *ppos) 56 + static int mei_dbgfs_active_show(struct seq_file *m, void *unused) 83 57 { 84 - struct mei_device *dev = fp->private_data; 58 + struct mei_device *dev = m->private; 85 59 struct mei_cl *cl; 86 - size_t bufsz = 1; 87 - char *buf; 88 60 int i = 0; 89 - int pos = 0; 90 - int ret; 91 - 92 - #define HDR " |me|host|state|rd|wr|wrq\n" 93 61 94 62 if (!dev) 95 63 return -ENODEV; 96 64 97 65 mutex_lock(&dev->device_lock); 98 66 99 - /* 100 - * if the driver is not enabled the list won't be consistent, 101 - * we output empty table 102 - */ 103 - if (dev->dev_state == MEI_DEV_ENABLED) 104 - list_for_each_entry(cl, &dev->file_list, link) 105 - bufsz++; 106 - 107 - bufsz *= sizeof(HDR) + 1; 108 - 109 - buf = kzalloc(bufsz, GFP_KERNEL); 110 - if (!buf) { 111 - mutex_unlock(&dev->device_lock); 112 - return -ENOMEM; 113 - } 114 - 115 - pos += scnprintf(buf + pos, bufsz - pos, HDR); 116 - #undef HDR 67 + seq_puts(m, " |me|host|state|rd|wr|wrq\n"); 117 68 118 69 /* if the driver is not enabled the list won't be consistent */ 119 70 if (dev->dev_state != MEI_DEV_ENABLED) ··· 73 120 74 121 list_for_each_entry(cl, &dev->file_list, link) { 75 122 76 - pos += scnprintf(buf + pos, bufsz - pos, 77 - "%3d|%2d|%4d|%5d|%2d|%2d|%3u\n", 78 - i, mei_cl_me_id(cl), cl->host_client_id, cl->state, 79 - 
!list_empty(&cl->rd_completed), cl->writing_state, 80 - cl->tx_cb_queued); 123 + seq_printf(m, "%3d|%2d|%4d|%5d|%2d|%2d|%3u\n", 124 + i, mei_cl_me_id(cl), cl->host_client_id, cl->state, 125 + !list_empty(&cl->rd_completed), cl->writing_state, 126 + cl->tx_cb_queued); 81 127 i++; 82 128 } 83 129 out: 84 130 mutex_unlock(&dev->device_lock); 85 - ret = simple_read_from_buffer(ubuf, cnt, ppos, buf, pos); 86 - kfree(buf); 87 - return ret; 131 + return 0; 88 132 } 133 + DEFINE_SHOW_ATTRIBUTE(mei_dbgfs_active); 89 134 90 - static const struct file_operations mei_dbgfs_fops_active = { 91 - .open = simple_open, 92 - .read = mei_dbgfs_read_active, 93 - .llseek = generic_file_llseek, 94 - }; 95 - 96 - static ssize_t mei_dbgfs_read_devstate(struct file *fp, char __user *ubuf, 97 - size_t cnt, loff_t *ppos) 135 + static int mei_dbgfs_devstate_show(struct seq_file *m, void *unused) 98 136 { 99 - struct mei_device *dev = fp->private_data; 100 - const size_t bufsz = 1024; 101 - char *buf = kzalloc(bufsz, GFP_KERNEL); 102 - int pos = 0; 103 - int ret; 137 + struct mei_device *dev = m->private; 104 138 105 - if (!buf) 106 - return -ENOMEM; 107 - 108 - pos += scnprintf(buf + pos, bufsz - pos, "dev: %s\n", 109 - mei_dev_state_str(dev->dev_state)); 110 - pos += scnprintf(buf + pos, bufsz - pos, "hbm: %s\n", 111 - mei_hbm_state_str(dev->hbm_state)); 139 + seq_printf(m, "dev: %s\n", mei_dev_state_str(dev->dev_state)); 140 + seq_printf(m, "hbm: %s\n", mei_hbm_state_str(dev->hbm_state)); 112 141 113 142 if (dev->hbm_state >= MEI_HBM_ENUM_CLIENTS && 114 143 dev->hbm_state <= MEI_HBM_STARTED) { 115 - pos += scnprintf(buf + pos, bufsz - pos, "hbm features:\n"); 116 - pos += scnprintf(buf + pos, bufsz - pos, "\tPG: %01d\n", 117 - dev->hbm_f_pg_supported); 118 - pos += scnprintf(buf + pos, bufsz - pos, "\tDC: %01d\n", 119 - dev->hbm_f_dc_supported); 120 - pos += scnprintf(buf + pos, bufsz - pos, "\tIE: %01d\n", 121 - dev->hbm_f_ie_supported); 122 - pos += scnprintf(buf + pos, bufsz - pos, 
"\tDOT: %01d\n", 123 - dev->hbm_f_dot_supported); 124 - pos += scnprintf(buf + pos, bufsz - pos, "\tEV: %01d\n", 125 - dev->hbm_f_ev_supported); 126 - pos += scnprintf(buf + pos, bufsz - pos, "\tFA: %01d\n", 127 - dev->hbm_f_fa_supported); 128 - pos += scnprintf(buf + pos, bufsz - pos, "\tOS: %01d\n", 129 - dev->hbm_f_os_supported); 130 - pos += scnprintf(buf + pos, bufsz - pos, "\tDR: %01d\n", 131 - dev->hbm_f_dr_supported); 144 + seq_puts(m, "hbm features:\n"); 145 + seq_printf(m, "\tPG: %01d\n", dev->hbm_f_pg_supported); 146 + seq_printf(m, "\tDC: %01d\n", dev->hbm_f_dc_supported); 147 + seq_printf(m, "\tIE: %01d\n", dev->hbm_f_ie_supported); 148 + seq_printf(m, "\tDOT: %01d\n", dev->hbm_f_dot_supported); 149 + seq_printf(m, "\tEV: %01d\n", dev->hbm_f_ev_supported); 150 + seq_printf(m, "\tFA: %01d\n", dev->hbm_f_fa_supported); 151 + seq_printf(m, "\tOS: %01d\n", dev->hbm_f_os_supported); 152 + seq_printf(m, "\tDR: %01d\n", dev->hbm_f_dr_supported); 132 153 } 133 154 134 - pos += scnprintf(buf + pos, bufsz - pos, "pg: %s, %s\n", 135 - mei_pg_is_enabled(dev) ? "ENABLED" : "DISABLED", 136 - mei_pg_state_str(mei_pg_state(dev))); 137 - ret = simple_read_from_buffer(ubuf, cnt, ppos, buf, pos); 138 - kfree(buf); 139 - return ret; 155 + seq_printf(m, "pg: %s, %s\n", 156 + mei_pg_is_enabled(dev) ? 
"ENABLED" : "DISABLED", 157 + mei_pg_state_str(mei_pg_state(dev))); 158 + return 0; 140 159 } 141 - static const struct file_operations mei_dbgfs_fops_devstate = { 142 - .open = simple_open, 143 - .read = mei_dbgfs_read_devstate, 144 - .llseek = generic_file_llseek, 145 - }; 160 + DEFINE_SHOW_ATTRIBUTE(mei_dbgfs_devstate); 146 161 147 162 static ssize_t mei_dbgfs_write_allow_fa(struct file *file, 148 163 const char __user *user_buf, ··· 129 208 return ret; 130 209 } 131 210 132 - static const struct file_operations mei_dbgfs_fops_allow_fa = { 211 + static const struct file_operations mei_dbgfs_allow_fa_fops = { 133 212 .open = simple_open, 134 213 .read = debugfs_read_file_bool, 135 214 .write = mei_dbgfs_write_allow_fa, ··· 168 247 dev->dbgfs_dir = dir; 169 248 170 249 f = debugfs_create_file("meclients", S_IRUSR, dir, 171 - dev, &mei_dbgfs_fops_meclients); 250 + dev, &mei_dbgfs_meclients_fops); 172 251 if (!f) { 173 252 dev_err(dev->dev, "meclients: registration failed\n"); 174 253 goto err; 175 254 } 176 255 f = debugfs_create_file("active", S_IRUSR, dir, 177 - dev, &mei_dbgfs_fops_active); 256 + dev, &mei_dbgfs_active_fops); 178 257 if (!f) { 179 258 dev_err(dev->dev, "active: registration failed\n"); 180 259 goto err; 181 260 } 182 261 f = debugfs_create_file("devstate", S_IRUSR, dir, 183 - dev, &mei_dbgfs_fops_devstate); 262 + dev, &mei_dbgfs_devstate_fops); 184 263 if (!f) { 185 264 dev_err(dev->dev, "devstate: registration failed\n"); 186 265 goto err; 187 266 } 188 267 f = debugfs_create_file("allow_fixed_address", S_IRUSR | S_IWUSR, dir, 189 268 &dev->allow_fixed_address, 190 - &mei_dbgfs_fops_allow_fa); 269 + &mei_dbgfs_allow_fa_fops); 191 270 if (!f) { 192 271 dev_err(dev->dev, "allow_fixed_address: registration failed\n"); 193 272 goto err; ··· 197 276 mei_dbgfs_deregister(dev); 198 277 return -ENODEV; 199 278 } 200 -
+4 -7
drivers/misc/mei/hdcp/mei_hdcp.c
··· 2 2 /* 3 3 * Copyright © 2019 Intel Corporation 4 4 * 5 - * Mei_hdcp.c: HDCP client driver for mei bus 5 + * mei_hdcp.c: HDCP client driver for mei bus 6 6 * 7 7 * Author: 8 8 * Ramalingam C <ramalingam.c@intel.com> ··· 11 11 /** 12 12 * DOC: MEI_HDCP Client Driver 13 13 * 14 - * This is a client driver to the mei_bus to make the HDCP2.2 services of 15 - * ME FW available for the interested consumers like I915. 16 - * 17 - * This module will act as a translation layer between HDCP protocol 18 - * implementor(I915) and ME FW by translating HDCP2.2 authentication 19 - * messages to ME FW command payloads and vice versa. 14 + * The mei_hdcp driver acts as a translation layer between HDCP 2.2 15 + * protocol implementer (I915) and ME FW by translating HDCP2.2 16 + * negotiation messages to ME FW command payloads and vice versa. 20 17 */ 21 18 22 19 #include <linux/module.h>
+1
drivers/misc/mic/scif/scif_main.c
··· 133 133 static void scif_destroy_scifdev(void) 134 134 { 135 135 kfree(scif_dev); 136 + scif_dev = NULL; 136 137 } 137 138 138 139 static int scif_probe(struct scif_hw_dev *sdev)
-1
drivers/misc/ocxl/Kconfig
··· 5 5 6 6 config OCXL_BASE 7 7 bool 8 - default n 9 8 select PPC_COPRO_BASE 10 9 11 10 config OCXL
+6 -3
drivers/misc/ocxl/context.c
··· 69 69 int ocxl_context_attach(struct ocxl_context *ctx, u64 amr, struct mm_struct *mm) 70 70 { 71 71 int rc; 72 + unsigned long pidr = 0; 72 73 73 74 // Locks both status & tidr 74 75 mutex_lock(&ctx->status_mutex); ··· 78 77 goto out; 79 78 } 80 79 81 - rc = ocxl_link_add_pe(ctx->afu->fn->link, ctx->pasid, 82 - mm->context.id, ctx->tidr, amr, mm, 83 - xsl_fault_error, ctx); 80 + if (mm) 81 + pidr = mm->context.id; 82 + 83 + rc = ocxl_link_add_pe(ctx->afu->fn->link, ctx->pasid, pidr, ctx->tidr, 84 + amr, mm, xsl_fault_error, ctx); 84 85 if (rc) 85 86 goto out; 86 87
+24 -4
drivers/misc/ocxl/link.c
··· 224 224 ack_irq(spa, ADDRESS_ERROR); 225 225 return IRQ_HANDLED; 226 226 } 227 + 228 + if (!pe_data->mm) { 229 + /* 230 + * translation fault from a kernel context - an OpenCAPI 231 + * device tried to access a bad kernel address 232 + */ 233 + rcu_read_unlock(); 234 + pr_warn("Unresolved OpenCAPI xsl fault in kernel context\n"); 235 + ack_irq(spa, ADDRESS_ERROR); 236 + return IRQ_HANDLED; 237 + } 227 238 WARN_ON(pe_data->mm->context.id != pid); 228 239 229 240 if (mmget_not_zero(pe_data->mm)) { ··· 534 523 pe->amr = cpu_to_be64(amr); 535 524 pe->software_state = cpu_to_be32(SPA_PE_VALID); 536 525 537 - mm_context_add_copro(mm); 526 + /* 527 + * For user contexts, register a copro so that TLBIs are seen 528 + * by the nest MMU. If we have a kernel context, TLBIs are 529 + * already global. 530 + */ 531 + if (mm) 532 + mm_context_add_copro(mm); 538 533 /* 539 534 * Barrier is to make sure PE is visible in the SPA before it 540 535 * is used by the device. It also helps with the global TLBI ··· 563 546 * have a reference on mm_users. Incrementing mm_count solves 564 547 * the problem. 565 548 */ 566 - mmgrab(mm); 549 + if (mm) 550 + mmgrab(mm); 567 551 trace_ocxl_context_add(current->pid, spa->spa_mem, pasid, pidr, tidr); 568 552 unlock: 569 553 mutex_unlock(&spa->spa_lock); ··· 670 652 if (!pe_data) { 671 653 WARN(1, "Couldn't find pe data when removing PE\n"); 672 654 } else { 673 - mm_context_remove_copro(pe_data->mm); 674 - mmdrop(pe_data->mm); 655 + if (pe_data->mm) { 656 + mm_context_remove_copro(pe_data->mm); 657 + mmdrop(pe_data->mm); 658 + } 675 659 kfree_rcu(pe_data, rcu); 676 660 } 677 661 unlock:
+1 -1
drivers/misc/sgi-xp/xpc_partition.c
··· 70 70 unsigned long rp_pa = nasid; /* seed with nasid */ 71 71 size_t len = 0; 72 72 size_t buf_len = 0; 73 - void *buf = buf; 73 + void *buf = NULL; 74 74 void *buf_base = NULL; 75 75 enum xp_retval (*get_partition_rsvd_page_pa) 76 76 (void *, u64 *, unsigned long *, size_t *) =
+1 -1
drivers/misc/tsl2550.c
··· 336 336 static int tsl2550_probe(struct i2c_client *client, 337 337 const struct i2c_device_id *id) 338 338 { 339 - struct i2c_adapter *adapter = to_i2c_adapter(client->dev.parent); 339 + struct i2c_adapter *adapter = client->adapter; 340 340 struct tsl2550_data *data; 341 341 int *opmode, err = 0; 342 342
+441 -46
drivers/misc/vmw_balloon.c
··· 28 28 #include <linux/rwsem.h> 29 29 #include <linux/slab.h> 30 30 #include <linux/spinlock.h> 31 + #include <linux/mount.h> 32 + #include <linux/balloon_compaction.h> 31 33 #include <linux/vmw_vmci_defs.h> 32 34 #include <linux/vmw_vmci_api.h> 33 35 #include <asm/hypervisor.h> ··· 40 38 MODULE_ALIAS("vmware_vmmemctl"); 41 39 MODULE_LICENSE("GPL"); 42 40 43 - /* 44 - * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We don't allow wait 45 - * (__GFP_RECLAIM) for huge page allocations. Use __GFP_NOWARN, to suppress page 46 - * allocation failure warnings. Disallow access to emergency low-memory pools. 47 - */ 48 - #define VMW_HUGE_PAGE_ALLOC_FLAGS (__GFP_HIGHMEM|__GFP_NOWARN| \ 49 - __GFP_NOMEMALLOC) 41 + static bool __read_mostly vmwballoon_shrinker_enable; 42 + module_param(vmwballoon_shrinker_enable, bool, 0444); 43 + MODULE_PARM_DESC(vmwballoon_shrinker_enable, 44 + "Enable non-cooperative out-of-memory protection. Disabled by default as it may degrade performance."); 50 45 51 - /* 52 - * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We allow lightweight 53 - * reclamation (__GFP_NORETRY). Use __GFP_NOWARN, to suppress page allocation 54 - * failure warnings. Disallow access to emergency low-memory pools. 55 - */ 56 - #define VMW_PAGE_ALLOC_FLAGS (__GFP_HIGHMEM|__GFP_NOWARN| \ 57 - __GFP_NOMEMALLOC|__GFP_NORETRY) 46 + /* Delay in seconds after shrink before inflation. */ 47 + #define VMBALLOON_SHRINK_DELAY (5) 58 48 59 49 /* Maximum number of refused pages we accumulate during inflation cycle */ 60 50 #define VMW_BALLOON_MAX_REFUSED 16 51 + 52 + /* Magic number for the balloon mount-point */ 53 + #define BALLOON_VMW_MAGIC 0x0ba11007 61 54 62 55 /* 63 56 * Hypervisor communication port definitions. 
··· 226 229 VMW_BALLOON_STAT_TIMER, 227 230 VMW_BALLOON_STAT_DOORBELL, 228 231 VMW_BALLOON_STAT_RESET, 229 - VMW_BALLOON_STAT_LAST = VMW_BALLOON_STAT_RESET 232 + VMW_BALLOON_STAT_SHRINK, 233 + VMW_BALLOON_STAT_SHRINK_FREE, 234 + VMW_BALLOON_STAT_LAST = VMW_BALLOON_STAT_SHRINK_FREE 230 235 }; 231 236 232 237 #define VMW_BALLOON_STAT_NUM (VMW_BALLOON_STAT_LAST + 1) 233 - 234 238 235 239 static DEFINE_STATIC_KEY_TRUE(vmw_balloon_batching); 236 240 static DEFINE_STATIC_KEY_FALSE(balloon_stat_enabled); ··· 239 241 struct vmballoon_ctl { 240 242 struct list_head pages; 241 243 struct list_head refused_pages; 244 + struct list_head prealloc_pages; 242 245 unsigned int n_refused_pages; 243 246 unsigned int n_pages; 244 247 enum vmballoon_page_size_type page_size; 245 248 enum vmballoon_op op; 246 - }; 247 - 248 - struct vmballoon_page_size { 249 - /* list of reserved physical pages */ 250 - struct list_head pages; 251 249 }; 252 250 253 251 /** ··· 260 266 } __packed; 261 267 262 268 struct vmballoon { 263 - struct vmballoon_page_size page_sizes[VMW_BALLOON_NUM_PAGE_SIZES]; 264 - 265 269 /** 266 270 * @max_page_size: maximum supported page size for ballooning. 267 271 * ··· 332 340 */ 333 341 struct page *page; 334 342 343 + /** 344 + * @shrink_timeout: timeout until the next inflation. 345 + * 346 + * After an shrink event, indicates the time in jiffies after which 347 + * inflation is allowed again. Can be written concurrently with reads, 348 + * so must use READ_ONCE/WRITE_ONCE when accessing. 349 + */ 350 + unsigned long shrink_timeout; 351 + 335 352 /* statistics */ 336 353 struct vmballoon_stats *stats; 337 354 ··· 349 348 struct dentry *dbg_entry; 350 349 #endif 351 350 351 + /** 352 + * @b_dev_info: balloon device information descriptor. 353 + */ 354 + struct balloon_dev_info b_dev_info; 355 + 352 356 struct delayed_work dwork; 357 + 358 + /** 359 + * @huge_pages - list of the inflated 2MB pages. 360 + * 361 + * Protected by @b_dev_info.pages_lock . 
362 + */ 363 + struct list_head huge_pages; 353 364 354 365 /** 355 366 * @vmci_doorbell. ··· 381 368 * Lock ordering: @conf_sem -> @comm_lock . 382 369 */ 383 370 spinlock_t comm_lock; 371 + 372 + /** 373 + * @shrinker: shrinker interface that is used to avoid over-inflation. 374 + */ 375 + struct shrinker shrinker; 376 + 377 + /** 378 + * @shrinker_registered: whether the shrinker was registered. 379 + * 380 + * The shrinker interface does not gracefully handle the removal of a 381 + * shrinker that was never registered. This indication allows us to 382 + * simplify the unregistration process. 383 + */ 384 + bool shrinker_registered; 384 385 }; 385 386 386 387 static struct vmballoon balloon; ··· 669 642 unsigned int i; 670 643 671 644 for (i = 0; i < req_n_pages; i++) { 672 - if (ctl->page_size == VMW_BALLOON_2M_PAGE) 673 - page = alloc_pages(VMW_HUGE_PAGE_ALLOC_FLAGS, 674 - VMW_BALLOON_2M_ORDER); 675 - else 676 - page = alloc_page(VMW_PAGE_ALLOC_FLAGS); 645 + /* 646 + * First check if we happen to have pages that were allocated 647 + * before. This happens when a 2MB page is rejected during inflation 648 + * by the hypervisor, and then split into 4KB pages. 
649 + */ 650 + if (!list_empty(&ctl->prealloc_pages)) { 651 + page = list_first_entry(&ctl->prealloc_pages, 652 + struct page, lru); 653 + list_del(&page->lru); 654 + } else { 655 + if (ctl->page_size == VMW_BALLOON_2M_PAGE) 656 + page = alloc_pages(__GFP_HIGHMEM|__GFP_NOWARN| 657 + __GFP_NOMEMALLOC, VMW_BALLOON_2M_ORDER); 658 + else 659 + page = balloon_page_alloc(); 677 660 678 - /* Update statistics */ 679 - vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_ALLOC, 680 - ctl->page_size); 661 + vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_ALLOC, 662 + ctl->page_size); 663 + } 681 664 682 665 if (page) { 683 666 vmballoon_mark_page_offline(page, ctl->page_size); ··· 933 896 __free_pages(page, vmballoon_page_order(page_size)); 934 897 } 935 898 936 - *n_pages = 0; 899 + if (n_pages) 900 + *n_pages = 0; 937 901 } 938 902 939 903 ··· 980 942 size - target < vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE)) 981 943 return 0; 982 944 945 + /* If an out-of-memory recently occurred, inflation is disallowed. */ 946 + if (target > size && time_before(jiffies, READ_ONCE(b->shrink_timeout))) 947 + return 0; 948 + 983 949 return target - size; 984 950 } 985 951 ··· 1003 961 unsigned int *n_pages, 1004 962 enum vmballoon_page_size_type page_size) 1005 963 { 1006 - struct vmballoon_page_size *page_size_info = &b->page_sizes[page_size]; 964 + unsigned long flags; 1007 965 1008 - list_splice_init(pages, &page_size_info->pages); 966 + if (page_size == VMW_BALLOON_4K_PAGE) { 967 + balloon_page_list_enqueue(&b->b_dev_info, pages); 968 + } else { 969 + /* 970 + * Keep the huge pages in a local list which is not available 971 + * for the balloon compaction mechanism. 
972 + */ 973 + spin_lock_irqsave(&b->b_dev_info.pages_lock, flags); 974 + list_splice_init(pages, &b->huge_pages); 975 + __count_vm_events(BALLOON_INFLATE, *n_pages * 976 + vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE)); 977 + spin_unlock_irqrestore(&b->b_dev_info.pages_lock, flags); 978 + } 979 + 1009 980 *n_pages = 0; 1010 981 } 1011 982 ··· 1041 986 enum vmballoon_page_size_type page_size, 1042 987 unsigned int n_req_pages) 1043 988 { 1044 - struct vmballoon_page_size *page_size_info = &b->page_sizes[page_size]; 1045 989 struct page *page, *tmp; 1046 990 unsigned int i = 0; 991 + unsigned long flags; 1047 992 1048 - list_for_each_entry_safe(page, tmp, &page_size_info->pages, lru) { 993 + /* In the case of 4k pages, use the compaction infrastructure */ 994 + if (page_size == VMW_BALLOON_4K_PAGE) { 995 + *n_pages = balloon_page_list_dequeue(&b->b_dev_info, pages, 996 + n_req_pages); 997 + return; 998 + } 999 + 1000 + /* 2MB pages */ 1001 + spin_lock_irqsave(&b->b_dev_info.pages_lock, flags); 1002 + list_for_each_entry_safe(page, tmp, &b->huge_pages, lru) { 1049 1003 list_move(&page->lru, pages); 1050 1004 if (++i == n_req_pages) 1051 1005 break; 1052 1006 } 1007 + 1008 + __count_vm_events(BALLOON_DEFLATE, 1009 + i * vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE)); 1010 + spin_unlock_irqrestore(&b->b_dev_info.pages_lock, flags); 1053 1011 *n_pages = i; 1012 + } 1013 + 1014 + /** 1015 + * vmballoon_split_refused_pages() - Split the 2MB refused pages to 4k. 1016 + * 1017 + * If inflation of 2MB pages was denied by the hypervisor, it is likely to be 1018 + * due to one or few 4KB pages. These 2MB pages may keep being allocated and 1019 + * then being refused. To prevent this case, this function splits the refused 1020 + * pages into 4KB pages and adds them into @prealloc_pages list. 1021 + * 1022 + * @ctl: pointer for the %struct vmballoon_ctl, which defines the operation. 
1023 + */ 1024 + static void vmballoon_split_refused_pages(struct vmballoon_ctl *ctl) 1025 + { 1026 + struct page *page, *tmp; 1027 + unsigned int i, order; 1028 + 1029 + order = vmballoon_page_order(ctl->page_size); 1030 + 1031 + list_for_each_entry_safe(page, tmp, &ctl->refused_pages, lru) { 1032 + list_del(&page->lru); 1033 + split_page(page, order); 1034 + for (i = 0; i < (1 << order); i++) 1035 + list_add(&page[i].lru, &ctl->prealloc_pages); 1036 + } 1037 + ctl->n_refused_pages = 0; 1054 1038 } 1055 1039 1056 1040 /** ··· 1103 1009 struct vmballoon_ctl ctl = { 1104 1010 .pages = LIST_HEAD_INIT(ctl.pages), 1105 1011 .refused_pages = LIST_HEAD_INIT(ctl.refused_pages), 1012 + .prealloc_pages = LIST_HEAD_INIT(ctl.prealloc_pages), 1106 1013 .page_size = b->max_page_size, 1107 1014 .op = VMW_BALLOON_INFLATE 1108 1015 }; ··· 1151 1056 break; 1152 1057 1153 1058 /* 1154 - * Ignore errors from locking as we now switch to 4k 1155 - * pages and we might get different errors. 1059 + * Split the refused pages to 4k. This will also empty 1060 + * the refused pages list. 1156 1061 */ 1157 - vmballoon_release_refused_pages(b, &ctl); 1062 + vmballoon_split_refused_pages(&ctl); 1158 1063 ctl.page_size--; 1159 1064 } 1160 1065 ··· 1168 1073 */ 1169 1074 if (ctl.n_refused_pages != 0) 1170 1075 vmballoon_release_refused_pages(b, &ctl); 1076 + 1077 + vmballoon_release_page_list(&ctl.prealloc_pages, NULL, ctl.page_size); 1171 1078 } 1172 1079 1173 1080 /** ··· 1508 1411 1509 1412 } 1510 1413 1414 + /** 1415 + * vmballoon_shrinker_scan() - deflate the balloon due to memory pressure. 1416 + * @shrinker: pointer to the balloon shrinker. 1417 + * @sc: page reclaim information. 1418 + * 1419 + * Returns: number of pages that were freed during deflation. 
1420 + */ 1421 + static unsigned long vmballoon_shrinker_scan(struct shrinker *shrinker, 1422 + struct shrink_control *sc) 1423 + { 1424 + struct vmballoon *b = &balloon; 1425 + unsigned long deflated_frames; 1426 + 1427 + pr_debug("%s - size: %llu", __func__, atomic64_read(&b->size)); 1428 + 1429 + vmballoon_stats_gen_inc(b, VMW_BALLOON_STAT_SHRINK); 1430 + 1431 + /* 1432 + * If the lock is also contended for read, we cannot easily reclaim and 1433 + * we bail out. 1434 + */ 1435 + if (!down_read_trylock(&b->conf_sem)) 1436 + return 0; 1437 + 1438 + deflated_frames = vmballoon_deflate(b, sc->nr_to_scan, true); 1439 + 1440 + vmballoon_stats_gen_add(b, VMW_BALLOON_STAT_SHRINK_FREE, 1441 + deflated_frames); 1442 + 1443 + /* 1444 + * Delay future inflation for some time to mitigate the situations in 1445 + * which balloon continuously grows and shrinks. Use WRITE_ONCE() since 1446 + * the access is asynchronous. 1447 + */ 1448 + WRITE_ONCE(b->shrink_timeout, jiffies + HZ * VMBALLOON_SHRINK_DELAY); 1449 + 1450 + up_read(&b->conf_sem); 1451 + 1452 + return deflated_frames; 1453 + } 1454 + 1455 + /** 1456 + * vmballoon_shrinker_count() - return the number of ballooned pages. 1457 + * @shrinker: pointer to the balloon shrinker. 1458 + * @sc: page reclaim information. 1459 + * 1460 + * Returns: number of 4k pages that are allocated for the balloon and can 1461 + * therefore be reclaimed under pressure. 
1462 + */ 1463 + static unsigned long vmballoon_shrinker_count(struct shrinker *shrinker, 1464 + struct shrink_control *sc) 1465 + { 1466 + struct vmballoon *b = &balloon; 1467 + 1468 + return atomic64_read(&b->size); 1469 + } 1470 + 1471 + static void vmballoon_unregister_shrinker(struct vmballoon *b) 1472 + { 1473 + if (b->shrinker_registered) 1474 + unregister_shrinker(&b->shrinker); 1475 + b->shrinker_registered = false; 1476 + } 1477 + 1478 + static int vmballoon_register_shrinker(struct vmballoon *b) 1479 + { 1480 + int r; 1481 + 1482 + /* Do nothing if the shrinker is not enabled */ 1483 + if (!vmwballoon_shrinker_enable) 1484 + return 0; 1485 + 1486 + b->shrinker.scan_objects = vmballoon_shrinker_scan; 1487 + b->shrinker.count_objects = vmballoon_shrinker_count; 1488 + b->shrinker.seeks = DEFAULT_SEEKS; 1489 + 1490 + r = register_shrinker(&b->shrinker); 1491 + 1492 + if (r == 0) 1493 + b->shrinker_registered = true; 1494 + 1495 + return r; 1496 + } 1497 + 1511 1498 /* 1512 1499 * DEBUGFS Interface 1513 1500 */ ··· 1609 1428 [VMW_BALLOON_STAT_TIMER] = "timer", 1610 1429 [VMW_BALLOON_STAT_DOORBELL] = "doorbell", 1611 1430 [VMW_BALLOON_STAT_RESET] = "reset", 1431 + [VMW_BALLOON_STAT_SHRINK] = "shrink", 1432 + [VMW_BALLOON_STAT_SHRINK_FREE] = "shrinkFree" 1612 1433 }; 1613 1434 1614 1435 static int vmballoon_enable_stats(struct vmballoon *b) ··· 1735 1552 1736 1553 #endif /* CONFIG_DEBUG_FS */ 1737 1554 1555 + 1556 + #ifdef CONFIG_BALLOON_COMPACTION 1557 + 1558 + static struct dentry *vmballoon_mount(struct file_system_type *fs_type, 1559 + int flags, const char *dev_name, 1560 + void *data) 1561 + { 1562 + static const struct dentry_operations ops = { 1563 + .d_dname = simple_dname, 1564 + }; 1565 + 1566 + return mount_pseudo(fs_type, "balloon-vmware:", NULL, &ops, 1567 + BALLOON_VMW_MAGIC); 1568 + } 1569 + 1570 + static struct file_system_type vmballoon_fs = { 1571 + .name = "balloon-vmware", 1572 + .mount = vmballoon_mount, 1573 + .kill_sb = kill_anon_super, 
1574 + }; 1575 + 1576 + static struct vfsmount *vmballoon_mnt; 1577 + 1578 + /** 1579 + * vmballoon_migratepage() - migrates a balloon page. 1580 + * @b_dev_info: balloon device information descriptor. 1581 + * @newpage: the page to which @page should be migrated. 1582 + * @page: a ballooned page that should be migrated. 1583 + * @mode: migration mode, ignored. 1584 + * 1585 + * This function is really open-coded, but that is according to the interface 1586 + * that balloon_compaction provides. 1587 + * 1588 + * Return: zero on success, -EAGAIN when migration cannot be performed 1589 + * momentarily, and -EBUSY if migration failed and should be retried 1590 + * with that specific page. 1591 + */ 1592 + static int vmballoon_migratepage(struct balloon_dev_info *b_dev_info, 1593 + struct page *newpage, struct page *page, 1594 + enum migrate_mode mode) 1595 + { 1596 + unsigned long status, flags; 1597 + struct vmballoon *b; 1598 + int ret; 1599 + 1600 + b = container_of(b_dev_info, struct vmballoon, b_dev_info); 1601 + 1602 + /* 1603 + * If the semaphore is taken, there is ongoing configuration change 1604 + * (i.e., balloon reset), so try again. 1605 + */ 1606 + if (!down_read_trylock(&b->conf_sem)) 1607 + return -EAGAIN; 1608 + 1609 + spin_lock(&b->comm_lock); 1610 + /* 1611 + * We must start by deflating and not inflating, as otherwise the 1612 + * hypervisor may tell us that it has enough memory and the new page is 1613 + * not needed. Since the old page is isolated, we cannot use the list 1614 + * interface to unlock it, as the LRU field is used for isolation. 1615 + * Instead, we use the native interface directly. 
1616 + */ 1617 + vmballoon_add_page(b, 0, page); 1618 + status = vmballoon_lock_op(b, 1, VMW_BALLOON_4K_PAGE, 1619 + VMW_BALLOON_DEFLATE); 1620 + 1621 + if (status == VMW_BALLOON_SUCCESS) 1622 + status = vmballoon_status_page(b, 0, &page); 1623 + 1624 + /* 1625 + * If a failure happened, let the migration mechanism know that it 1626 + * should not retry. 1627 + */ 1628 + if (status != VMW_BALLOON_SUCCESS) { 1629 + spin_unlock(&b->comm_lock); 1630 + ret = -EBUSY; 1631 + goto out_unlock; 1632 + } 1633 + 1634 + /* 1635 + * The page is isolated, so it is safe to delete it without holding 1636 + * @pages_lock . We keep holding @comm_lock since we will need it in a 1637 + * second. 1638 + */ 1639 + balloon_page_delete(page); 1640 + 1641 + put_page(page); 1642 + 1643 + /* Inflate */ 1644 + vmballoon_add_page(b, 0, newpage); 1645 + status = vmballoon_lock_op(b, 1, VMW_BALLOON_4K_PAGE, 1646 + VMW_BALLOON_INFLATE); 1647 + 1648 + if (status == VMW_BALLOON_SUCCESS) 1649 + status = vmballoon_status_page(b, 0, &newpage); 1650 + 1651 + spin_unlock(&b->comm_lock); 1652 + 1653 + if (status != VMW_BALLOON_SUCCESS) { 1654 + /* 1655 + * A failure happened. While we can deflate the page we just 1656 + * inflated, this deflation can also encounter an error. Instead 1657 + * we will decrease the size of the balloon to reflect the 1658 + * change and report failure. 1659 + */ 1660 + atomic64_dec(&b->size); 1661 + ret = -EBUSY; 1662 + } else { 1663 + /* 1664 + * Success. Take a reference for the page, and we will add it to 1665 + * the list after acquiring the lock. 1666 + */ 1667 + get_page(newpage); 1668 + ret = MIGRATEPAGE_SUCCESS; 1669 + } 1670 + 1671 + /* Update the balloon list under the @pages_lock */ 1672 + spin_lock_irqsave(&b->b_dev_info.pages_lock, flags); 1673 + 1674 + /* 1675 + * On inflation success, we already took a reference for the @newpage. 1676 + * If we succeed just insert it to the list and update the statistics 1677 + * under the lock. 
1678 + */ 1679 + if (ret == MIGRATEPAGE_SUCCESS) { 1680 + balloon_page_insert(&b->b_dev_info, newpage); 1681 + __count_vm_event(BALLOON_MIGRATE); 1682 + } 1683 + 1684 + /* 1685 + * We deflated successfully, so regardless of the inflation success, we 1686 + * need to reduce the number of isolated_pages. 1687 + */ 1688 + b->b_dev_info.isolated_pages--; 1689 + spin_unlock_irqrestore(&b->b_dev_info.pages_lock, flags); 1690 + 1691 + out_unlock: 1692 + up_read(&b->conf_sem); 1693 + return ret; 1694 + } 1695 + 1696 + /** 1697 + * vmballoon_compaction_deinit() - removes compaction related data. 1698 + * 1699 + * @b: pointer to the balloon. 1700 + */ 1701 + static void vmballoon_compaction_deinit(struct vmballoon *b) 1702 + { 1703 + if (!IS_ERR(b->b_dev_info.inode)) 1704 + iput(b->b_dev_info.inode); 1705 + 1706 + b->b_dev_info.inode = NULL; 1707 + kern_unmount(vmballoon_mnt); 1708 + vmballoon_mnt = NULL; 1709 + } 1710 + 1711 + /** 1712 + * vmballoon_compaction_init() - initializes compaction for the balloon. 1713 + * 1714 + * @b: pointer to the balloon. 1715 + * 1716 + * If a failure occurs during initialization, this function does not 1717 + * perform cleanup. The caller must call vmballoon_compaction_deinit() in this 1718 + * case. 1719 + * 1720 + * Return: zero on success or error code on failure. 
1721 + */ 1722 + static __init int vmballoon_compaction_init(struct vmballoon *b) 1723 + { 1724 + vmballoon_mnt = kern_mount(&vmballoon_fs); 1725 + if (IS_ERR(vmballoon_mnt)) 1726 + return PTR_ERR(vmballoon_mnt); 1727 + 1728 + b->b_dev_info.migratepage = vmballoon_migratepage; 1729 + b->b_dev_info.inode = alloc_anon_inode(vmballoon_mnt->mnt_sb); 1730 + 1731 + if (IS_ERR(b->b_dev_info.inode)) 1732 + return PTR_ERR(b->b_dev_info.inode); 1733 + 1734 + b->b_dev_info.inode->i_mapping->a_ops = &balloon_aops; 1735 + return 0; 1736 + } 1737 + 1738 + #else /* CONFIG_BALLOON_COMPACTION */ 1739 + 1740 + static void vmballoon_compaction_deinit(struct vmballoon *b) 1741 + { 1742 + } 1743 + 1744 + static int vmballoon_compaction_init(struct vmballoon *b) 1745 + { 1746 + return 0; 1747 + } 1748 + 1749 + #endif /* CONFIG_BALLOON_COMPACTION */ 1750 + 1738 1751 static int __init vmballoon_init(void) 1739 1752 { 1740 - enum vmballoon_page_size_type page_size; 1741 1753 int error; 1742 1754 1743 1755 /* ··· 1942 1564 if (x86_hyper_type != X86_HYPER_VMWARE) 1943 1565 return -ENODEV; 1944 1566 1945 - for (page_size = VMW_BALLOON_4K_PAGE; 1946 - page_size <= VMW_BALLOON_LAST_SIZE; page_size++) 1947 - INIT_LIST_HEAD(&balloon.page_sizes[page_size].pages); 1948 - 1949 - 1950 1567 INIT_DELAYED_WORK(&balloon.dwork, vmballoon_work); 1568 + 1569 + error = vmballoon_register_shrinker(&balloon); 1570 + if (error) 1571 + goto fail; 1951 1572 1952 1573 error = vmballoon_debugfs_init(&balloon); 1953 1574 if (error) 1954 - return error; 1575 + goto fail; 1955 1576 1577 + /* 1578 + * Initialization of compaction must be done after the call to 1579 + * balloon_devinfo_init() . 
1580 + */ 1581 + balloon_devinfo_init(&balloon.b_dev_info); 1582 + error = vmballoon_compaction_init(&balloon); 1583 + if (error) 1584 + goto fail; 1585 + 1586 + INIT_LIST_HEAD(&balloon.huge_pages); 1956 1587 spin_lock_init(&balloon.comm_lock); 1957 1588 init_rwsem(&balloon.conf_sem); 1958 1589 balloon.vmci_doorbell = VMCI_INVALID_HANDLE; ··· 1972 1585 queue_delayed_work(system_freezable_wq, &balloon.dwork, 0); 1973 1586 1974 1587 return 0; 1588 + fail: 1589 + vmballoon_unregister_shrinker(&balloon); 1590 + vmballoon_compaction_deinit(&balloon); 1591 + return error; 1975 1592 } 1976 1593 1977 1594 /* ··· 1988 1597 1989 1598 static void __exit vmballoon_exit(void) 1990 1599 { 1600 + vmballoon_unregister_shrinker(&balloon); 1991 1601 vmballoon_vmci_cleanup(&balloon); 1992 1602 cancel_delayed_work_sync(&balloon.dwork); 1993 1603 ··· 2001 1609 */ 2002 1610 vmballoon_send_start(&balloon, 0); 2003 1611 vmballoon_pop(&balloon); 1612 + 1613 + /* Only once we popped the balloon, compaction can be deinit */ 1614 + vmballoon_compaction_deinit(&balloon); 2004 1615 } 2005 1616 module_exit(vmballoon_exit);
+45 -35
drivers/misc/vmw_vmci/vmci_context.c
··· 21 21 #include "vmci_driver.h" 22 22 #include "vmci_event.h" 23 23 24 + /* Use a wide upper bound for the maximum contexts. */ 25 + #define VMCI_MAX_CONTEXTS 2000 26 + 24 27 /* 25 28 * List of current VMCI contexts. Contexts can be added by 26 29 * vmci_ctx_create() and removed via vmci_ctx_destroy(). ··· 120 117 /* Initialize host-specific VMCI context. */ 121 118 init_waitqueue_head(&context->host_context.wait_queue); 122 119 123 - context->queue_pair_array = vmci_handle_arr_create(0); 120 + context->queue_pair_array = 121 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_QP_COUNT); 124 122 if (!context->queue_pair_array) { 125 123 error = -ENOMEM; 126 124 goto err_free_ctx; 127 125 } 128 126 129 - context->doorbell_array = vmci_handle_arr_create(0); 127 + context->doorbell_array = 128 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); 130 129 if (!context->doorbell_array) { 131 130 error = -ENOMEM; 132 131 goto err_free_qp_array; 133 132 } 134 133 135 - context->pending_doorbell_array = vmci_handle_arr_create(0); 134 + context->pending_doorbell_array = 135 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); 136 136 if (!context->pending_doorbell_array) { 137 137 error = -ENOMEM; 138 138 goto err_free_db_array; ··· 210 204 * We create an array to hold the subscribers we find when 211 205 * scanning through all contexts. 
212 206 */ 213 - subscriber_array = vmci_handle_arr_create(0); 207 + subscriber_array = vmci_handle_arr_create(0, VMCI_MAX_CONTEXTS); 214 208 if (subscriber_array == NULL) 215 209 return VMCI_ERROR_NO_MEM; 216 210 ··· 629 623 630 624 spin_lock(&context->lock); 631 625 632 - list_for_each_entry(n, &context->notifier_list, node) { 633 - if (vmci_handle_is_equal(n->handle, notifier->handle)) { 634 - exists = true; 635 - break; 626 + if (context->n_notifiers < VMCI_MAX_CONTEXTS) { 627 + list_for_each_entry(n, &context->notifier_list, node) { 628 + if (vmci_handle_is_equal(n->handle, notifier->handle)) { 629 + exists = true; 630 + break; 631 + } 636 632 } 637 - } 638 633 639 - if (exists) { 640 - kfree(notifier); 641 - result = VMCI_ERROR_ALREADY_EXISTS; 634 + if (exists) { 635 + kfree(notifier); 636 + result = VMCI_ERROR_ALREADY_EXISTS; 637 + } else { 638 + list_add_tail_rcu(&notifier->node, 639 + &context->notifier_list); 640 + context->n_notifiers++; 641 + result = VMCI_SUCCESS; 642 + } 642 643 } else { 643 - list_add_tail_rcu(&notifier->node, &context->notifier_list); 644 - context->n_notifiers++; 645 - result = VMCI_SUCCESS; 644 + kfree(notifier); 645 + result = VMCI_ERROR_NO_MEM; 646 646 } 647 647 648 648 spin_unlock(&context->lock); ··· 733 721 u32 *buf_size, void **pbuf) 734 722 { 735 723 struct dbell_cpt_state *dbells; 736 - size_t n_doorbells; 737 - int i; 724 + u32 i, n_doorbells; 738 725 739 726 n_doorbells = vmci_handle_arr_get_size(context->doorbell_array); 740 727 if (n_doorbells > 0) { ··· 871 860 spin_lock(&context->lock); 872 861 873 862 *db_handle_array = context->pending_doorbell_array; 874 - context->pending_doorbell_array = vmci_handle_arr_create(0); 863 + context->pending_doorbell_array = 864 + vmci_handle_arr_create(0, VMCI_MAX_GUEST_DOORBELL_COUNT); 875 865 if (!context->pending_doorbell_array) { 876 866 context->pending_doorbell_array = *db_handle_array; 877 867 *db_handle_array = NULL; ··· 954 942 return VMCI_ERROR_NOT_FOUND; 955 943 956 944 
spin_lock(&context->lock); 957 - if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) { 958 - vmci_handle_arr_append_entry(&context->doorbell_array, handle); 959 - result = VMCI_SUCCESS; 960 - } else { 945 + if (!vmci_handle_arr_has_entry(context->doorbell_array, handle)) 946 + result = vmci_handle_arr_append_entry(&context->doorbell_array, 947 + handle); 948 + else 961 949 result = VMCI_ERROR_DUPLICATE_ENTRY; 962 - } 963 950 964 951 spin_unlock(&context->lock); 965 952 vmci_ctx_put(context); ··· 1094 1083 if (!vmci_handle_arr_has_entry( 1095 1084 dst_context->pending_doorbell_array, 1096 1085 handle)) { 1097 - vmci_handle_arr_append_entry( 1086 + result = vmci_handle_arr_append_entry( 1098 1087 &dst_context->pending_doorbell_array, 1099 1088 handle); 1100 - 1101 - ctx_signal_notify(dst_context); 1102 - wake_up(&dst_context->host_context.wait_queue); 1103 - 1089 + if (result == VMCI_SUCCESS) { 1090 + ctx_signal_notify(dst_context); 1091 + wake_up(&dst_context->host_context.wait_queue); 1092 + } 1093 + } else { 1094 + result = VMCI_SUCCESS; 1104 1095 } 1105 - result = VMCI_SUCCESS; 1106 1096 } 1107 1097 spin_unlock(&dst_context->lock); 1108 1098 } ··· 1130 1118 if (context == NULL || vmci_handle_is_invalid(handle)) 1131 1119 return VMCI_ERROR_INVALID_ARGS; 1132 1120 1133 - if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) { 1134 - vmci_handle_arr_append_entry(&context->queue_pair_array, 1135 - handle); 1136 - result = VMCI_SUCCESS; 1137 - } else { 1121 + if (!vmci_handle_arr_has_entry(context->queue_pair_array, handle)) 1122 + result = vmci_handle_arr_append_entry( 1123 + &context->queue_pair_array, handle); 1124 + else 1138 1125 result = VMCI_ERROR_DUPLICATE_ENTRY; 1139 - } 1140 1126 1141 1127 return result; 1142 1128 }
+25 -13
drivers/misc/vmw_vmci/vmci_handle_array.c
··· 8 8 #include <linux/slab.h> 9 9 #include "vmci_handle_array.h" 10 10 11 - static size_t handle_arr_calc_size(size_t capacity) 11 + static size_t handle_arr_calc_size(u32 capacity) 12 12 { 13 - return sizeof(struct vmci_handle_arr) + 13 + return VMCI_HANDLE_ARRAY_HEADER_SIZE + 14 14 capacity * sizeof(struct vmci_handle); 15 15 } 16 16 17 - struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity) 17 + struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity) 18 18 { 19 19 struct vmci_handle_arr *array; 20 20 21 + if (max_capacity == 0 || capacity > max_capacity) 22 + return NULL; 23 + 21 24 if (capacity == 0) 22 - capacity = VMCI_HANDLE_ARRAY_DEFAULT_SIZE; 25 + capacity = min((u32)VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY, 26 + max_capacity); 23 27 24 28 array = kmalloc(handle_arr_calc_size(capacity), GFP_ATOMIC); 25 29 if (!array) 26 30 return NULL; 27 31 28 32 array->capacity = capacity; 33 + array->max_capacity = max_capacity; 29 34 array->size = 0; 30 35 31 36 return array; ··· 41 36 kfree(array); 42 37 } 43 38 44 - void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, 45 - struct vmci_handle handle) 39 + int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, 40 + struct vmci_handle handle) 46 41 { 47 42 struct vmci_handle_arr *array = *array_ptr; 48 43 49 44 if (unlikely(array->size >= array->capacity)) { 50 45 /* reallocate. 
*/ 51 46 struct vmci_handle_arr *new_array; 52 - size_t new_capacity = array->capacity * VMCI_ARR_CAP_MULT; 53 - size_t new_size = handle_arr_calc_size(new_capacity); 47 + u32 capacity_bump = min(array->max_capacity - array->capacity, 48 + array->capacity); 49 + size_t new_size = handle_arr_calc_size(array->capacity + 50 + capacity_bump); 51 + 52 + if (array->size >= array->max_capacity) 53 + return VMCI_ERROR_NO_MEM; 54 54 55 55 new_array = krealloc(array, new_size, GFP_ATOMIC); 56 56 if (!new_array) 57 - return; 57 + return VMCI_ERROR_NO_MEM; 58 58 59 - new_array->capacity = new_capacity; 59 + new_array->capacity += capacity_bump; 60 60 *array_ptr = array = new_array; 61 61 } 62 62 63 63 array->entries[array->size] = handle; 64 64 array->size++; 65 + 66 + return VMCI_SUCCESS; 65 67 } 66 68 67 69 /* ··· 78 66 struct vmci_handle entry_handle) 79 67 { 80 68 struct vmci_handle handle = VMCI_INVALID_HANDLE; 81 - size_t i; 69 + u32 i; 82 70 83 71 for (i = 0; i < array->size; i++) { 84 72 if (vmci_handle_is_equal(array->entries[i], entry_handle)) { ··· 113 101 * Handle at given index, VMCI_INVALID_HANDLE if invalid index. 114 102 */ 115 103 struct vmci_handle 116 - vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index) 104 + vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index) 117 105 { 118 106 if (unlikely(index >= array->size)) 119 107 return VMCI_INVALID_HANDLE; ··· 124 112 bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array, 125 113 struct vmci_handle entry_handle) 126 114 { 127 - size_t i; 115 + u32 i; 128 116 129 117 for (i = 0; i < array->size; i++) 130 118 if (vmci_handle_is_equal(array->entries[i], entry_handle))
+19 -10
drivers/misc/vmw_vmci/vmci_handle_array.h
··· 9 9 #define _VMCI_HANDLE_ARRAY_H_ 10 10 11 11 #include <linux/vmw_vmci_defs.h> 12 + #include <linux/limits.h> 12 13 #include <linux/types.h> 13 14 14 - #define VMCI_HANDLE_ARRAY_DEFAULT_SIZE 4 15 - #define VMCI_ARR_CAP_MULT 2 /* Array capacity multiplier */ 16 - 17 15 struct vmci_handle_arr { 18 - size_t capacity; 19 - size_t size; 16 + u32 capacity; 17 + u32 max_capacity; 18 + u32 size; 19 + u32 pad; 20 20 struct vmci_handle entries[]; 21 21 }; 22 22 23 - struct vmci_handle_arr *vmci_handle_arr_create(size_t capacity); 23 + #define VMCI_HANDLE_ARRAY_HEADER_SIZE \ 24 + offsetof(struct vmci_handle_arr, entries) 25 + /* Select a default capacity that results in a 64 byte sized array */ 26 + #define VMCI_HANDLE_ARRAY_DEFAULT_CAPACITY 6 27 + /* Make sure that the max array size can be expressed by a u32 */ 28 + #define VMCI_HANDLE_ARRAY_MAX_CAPACITY \ 29 + ((U32_MAX - VMCI_HANDLE_ARRAY_HEADER_SIZE - 1) / \ 30 + sizeof(struct vmci_handle)) 31 + 32 + struct vmci_handle_arr *vmci_handle_arr_create(u32 capacity, u32 max_capacity); 24 33 void vmci_handle_arr_destroy(struct vmci_handle_arr *array); 25 - void vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, 26 - struct vmci_handle handle); 34 + int vmci_handle_arr_append_entry(struct vmci_handle_arr **array_ptr, 35 + struct vmci_handle handle); 27 36 struct vmci_handle vmci_handle_arr_remove_entry(struct vmci_handle_arr *array, 28 37 struct vmci_handle 29 38 entry_handle); 30 39 struct vmci_handle vmci_handle_arr_remove_tail(struct vmci_handle_arr *array); 31 40 struct vmci_handle 32 - vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, size_t index); 41 + vmci_handle_arr_get_entry(const struct vmci_handle_arr *array, u32 index); 33 42 bool vmci_handle_arr_has_entry(const struct vmci_handle_arr *array, 34 43 struct vmci_handle entry_handle); 35 44 struct vmci_handle *vmci_handle_arr_get_handles(struct vmci_handle_arr *array); 36 45 37 - static inline size_t vmci_handle_arr_get_size( 46 + static 
inline u32 vmci_handle_arr_get_size( 38 47 const struct vmci_handle_arr *array) 39 48 { 40 49 return array->size;
+345
drivers/misc/xilinx_sdfec.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Xilinx SDFEC 4 + * 5 + * Copyright (C) 2019 Xilinx, Inc. 6 + * 7 + * Description: 8 + * This driver is developed for SDFEC16 (Soft Decision FEC 16nm) 9 + * IP. It exposes a char device which supports file operations 10 + * like open(), close() and ioctl(). 11 + */ 12 + 13 + #include <linux/miscdevice.h> 14 + #include <linux/io.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/kernel.h> 17 + #include <linux/module.h> 18 + #include <linux/of_platform.h> 19 + #include <linux/poll.h> 20 + #include <linux/slab.h> 21 + #include <linux/clk.h> 22 + 23 + #define DEV_NAME_LEN 12 24 + 25 + static struct idr dev_idr; 26 + static struct mutex dev_idr_lock; 27 + 28 + /** 29 + * struct xsdfec_clks - For managing SD-FEC clocks 30 + * @core_clk: Main processing clock for core 31 + * @axi_clk: AXI4-Lite memory-mapped clock 32 + * @din_words_clk: DIN Words AXI4-Stream Slave clock 33 + * @din_clk: DIN AXI4-Stream Slave clock 34 + * @dout_clk: DOUT Words AXI4-Stream Slave clock 35 + * @dout_words_clk: DOUT AXI4-Stream Slave clock 36 + * @ctrl_clk: Control AXI4-Stream Slave clock 37 + * @status_clk: Status AXI4-Stream Slave clock 38 + */ 39 + struct xsdfec_clks { 40 + struct clk *core_clk; 41 + struct clk *axi_clk; 42 + struct clk *din_words_clk; 43 + struct clk *din_clk; 44 + struct clk *dout_clk; 45 + struct clk *dout_words_clk; 46 + struct clk *ctrl_clk; 47 + struct clk *status_clk; 48 + }; 49 + 50 + /** 51 + * struct xsdfec_dev - Driver data for SDFEC 52 + * @regs: device physical base address 53 + * @dev: pointer to device struct 54 + * @miscdev: Misc device handle 55 + * @error_data_lock: Error counter and states spinlock 56 + * @clks: Clocks managed by the SDFEC driver 57 + * @dev_name: Device name 58 + * @dev_id: Device ID 59 + * 60 + * This structure contains necessary state for SDFEC driver to operate 61 + */ 62 + struct xsdfec_dev { 63 + void __iomem *regs; 64 + struct device *dev; 65 + struct miscdevice 
miscdev; 66 + /* Spinlock to protect state_updated and stats_updated */ 67 + spinlock_t error_data_lock; 68 + struct xsdfec_clks clks; 69 + char dev_name[DEV_NAME_LEN]; 70 + int dev_id; 71 + }; 72 + 73 + static const struct file_operations xsdfec_fops = { 74 + .owner = THIS_MODULE, 75 + }; 76 + 77 + static int xsdfec_clk_init(struct platform_device *pdev, 78 + struct xsdfec_clks *clks) 79 + { 80 + int err; 81 + 82 + clks->core_clk = devm_clk_get(&pdev->dev, "core_clk"); 83 + if (IS_ERR(clks->core_clk)) { 84 + dev_err(&pdev->dev, "failed to get core_clk"); 85 + return PTR_ERR(clks->core_clk); 86 + } 87 + 88 + clks->axi_clk = devm_clk_get(&pdev->dev, "s_axi_aclk"); 89 + if (IS_ERR(clks->axi_clk)) { 90 + dev_err(&pdev->dev, "failed to get axi_clk"); 91 + return PTR_ERR(clks->axi_clk); 92 + } 93 + 94 + clks->din_words_clk = devm_clk_get(&pdev->dev, "s_axis_din_words_aclk"); 95 + if (IS_ERR(clks->din_words_clk)) { 96 + if (PTR_ERR(clks->din_words_clk) != -ENOENT) { 97 + err = PTR_ERR(clks->din_words_clk); 98 + return err; 99 + } 100 + clks->din_words_clk = NULL; 101 + } 102 + 103 + clks->din_clk = devm_clk_get(&pdev->dev, "s_axis_din_aclk"); 104 + if (IS_ERR(clks->din_clk)) { 105 + if (PTR_ERR(clks->din_clk) != -ENOENT) { 106 + err = PTR_ERR(clks->din_clk); 107 + return err; 108 + } 109 + clks->din_clk = NULL; 110 + } 111 + 112 + clks->dout_clk = devm_clk_get(&pdev->dev, "m_axis_dout_aclk"); 113 + if (IS_ERR(clks->dout_clk)) { 114 + if (PTR_ERR(clks->dout_clk) != -ENOENT) { 115 + err = PTR_ERR(clks->dout_clk); 116 + return err; 117 + } 118 + clks->dout_clk = NULL; 119 + } 120 + 121 + clks->dout_words_clk = 122 + devm_clk_get(&pdev->dev, "s_axis_dout_words_aclk"); 123 + if (IS_ERR(clks->dout_words_clk)) { 124 + if (PTR_ERR(clks->dout_words_clk) != -ENOENT) { 125 + err = PTR_ERR(clks->dout_words_clk); 126 + return err; 127 + } 128 + clks->dout_words_clk = NULL; 129 + } 130 + 131 + clks->ctrl_clk = devm_clk_get(&pdev->dev, "s_axis_ctrl_aclk"); 132 + if 
(IS_ERR(clks->ctrl_clk)) { 133 + if (PTR_ERR(clks->ctrl_clk) != -ENOENT) { 134 + err = PTR_ERR(clks->ctrl_clk); 135 + return err; 136 + } 137 + clks->ctrl_clk = NULL; 138 + } 139 + 140 + clks->status_clk = devm_clk_get(&pdev->dev, "m_axis_status_aclk"); 141 + if (IS_ERR(clks->status_clk)) { 142 + if (PTR_ERR(clks->status_clk) != -ENOENT) { 143 + err = PTR_ERR(clks->status_clk); 144 + return err; 145 + } 146 + clks->status_clk = NULL; 147 + } 148 + 149 + err = clk_prepare_enable(clks->core_clk); 150 + if (err) { 151 + dev_err(&pdev->dev, "failed to enable core_clk (%d)", err); 152 + return err; 153 + } 154 + 155 + err = clk_prepare_enable(clks->axi_clk); 156 + if (err) { 157 + dev_err(&pdev->dev, "failed to enable axi_clk (%d)", err); 158 + goto err_disable_core_clk; 159 + } 160 + 161 + err = clk_prepare_enable(clks->din_clk); 162 + if (err) { 163 + dev_err(&pdev->dev, "failed to enable din_clk (%d)", err); 164 + goto err_disable_axi_clk; 165 + } 166 + 167 + err = clk_prepare_enable(clks->din_words_clk); 168 + if (err) { 169 + dev_err(&pdev->dev, "failed to enable din_words_clk (%d)", err); 170 + goto err_disable_din_clk; 171 + } 172 + 173 + err = clk_prepare_enable(clks->dout_clk); 174 + if (err) { 175 + dev_err(&pdev->dev, "failed to enable dout_clk (%d)", err); 176 + goto err_disable_din_words_clk; 177 + } 178 + 179 + err = clk_prepare_enable(clks->dout_words_clk); 180 + if (err) { 181 + dev_err(&pdev->dev, "failed to enable dout_words_clk (%d)", 182 + err); 183 + goto err_disable_dout_clk; 184 + } 185 + 186 + err = clk_prepare_enable(clks->ctrl_clk); 187 + if (err) { 188 + dev_err(&pdev->dev, "failed to enable ctrl_clk (%d)", err); 189 + goto err_disable_dout_words_clk; 190 + } 191 + 192 + err = clk_prepare_enable(clks->status_clk); 193 + if (err) { 194 + dev_err(&pdev->dev, "failed to enable status_clk (%d)\n", err); 195 + goto err_disable_ctrl_clk; 196 + } 197 + 198 + return err; 199 + 200 + err_disable_ctrl_clk: 201 + clk_disable_unprepare(clks->ctrl_clk); 
202 + err_disable_dout_words_clk: 203 + clk_disable_unprepare(clks->dout_words_clk); 204 + err_disable_dout_clk: 205 + clk_disable_unprepare(clks->dout_clk); 206 + err_disable_din_words_clk: 207 + clk_disable_unprepare(clks->din_words_clk); 208 + err_disable_din_clk: 209 + clk_disable_unprepare(clks->din_clk); 210 + err_disable_axi_clk: 211 + clk_disable_unprepare(clks->axi_clk); 212 + err_disable_core_clk: 213 + clk_disable_unprepare(clks->core_clk); 214 + 215 + return err; 216 + } 217 + 218 + static void xsdfec_disable_all_clks(struct xsdfec_clks *clks) 219 + { 220 + clk_disable_unprepare(clks->status_clk); 221 + clk_disable_unprepare(clks->ctrl_clk); 222 + clk_disable_unprepare(clks->dout_words_clk); 223 + clk_disable_unprepare(clks->dout_clk); 224 + clk_disable_unprepare(clks->din_words_clk); 225 + clk_disable_unprepare(clks->din_clk); 226 + clk_disable_unprepare(clks->core_clk); 227 + clk_disable_unprepare(clks->axi_clk); 228 + } 229 + 230 + static void xsdfec_idr_remove(struct xsdfec_dev *xsdfec) 231 + { 232 + mutex_lock(&dev_idr_lock); 233 + idr_remove(&dev_idr, xsdfec->dev_id); 234 + mutex_unlock(&dev_idr_lock); 235 + } 236 + 237 + static int xsdfec_probe(struct platform_device *pdev) 238 + { 239 + struct xsdfec_dev *xsdfec; 240 + struct device *dev; 241 + struct resource *res; 242 + int err; 243 + 244 + xsdfec = devm_kzalloc(&pdev->dev, sizeof(*xsdfec), GFP_KERNEL); 245 + if (!xsdfec) 246 + return -ENOMEM; 247 + 248 + xsdfec->dev = &pdev->dev; 249 + spin_lock_init(&xsdfec->error_data_lock); 250 + 251 + err = xsdfec_clk_init(pdev, &xsdfec->clks); 252 + if (err) 253 + return err; 254 + 255 + dev = xsdfec->dev; 256 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 257 + xsdfec->regs = devm_ioremap_resource(dev, res); 258 + if (IS_ERR(xsdfec->regs)) { 259 + err = PTR_ERR(xsdfec->regs); 260 + goto err_xsdfec_dev; 261 + } 262 + 263 + /* Save driver private data */ 264 + platform_set_drvdata(pdev, xsdfec); 265 + 266 + mutex_lock(&dev_idr_lock); 267 + err = 
idr_alloc(&dev_idr, xsdfec->dev_name, 0, 0, GFP_KERNEL); 268 + mutex_unlock(&dev_idr_lock); 269 + if (err < 0) 270 + goto err_xsdfec_dev; 271 + xsdfec->dev_id = err; 272 + 273 + snprintf(xsdfec->dev_name, DEV_NAME_LEN, "xsdfec%d", xsdfec->dev_id); 274 + xsdfec->miscdev.minor = MISC_DYNAMIC_MINOR; 275 + xsdfec->miscdev.name = xsdfec->dev_name; 276 + xsdfec->miscdev.fops = &xsdfec_fops; 277 + xsdfec->miscdev.parent = dev; 278 + err = misc_register(&xsdfec->miscdev); 279 + if (err) { 280 + dev_err(dev, "error:%d. Unable to register device", err); 281 + goto err_xsdfec_idr; 282 + } 283 + return 0; 284 + 285 + err_xsdfec_idr: 286 + xsdfec_idr_remove(xsdfec); 287 + err_xsdfec_dev: 288 + xsdfec_disable_all_clks(&xsdfec->clks); 289 + return err; 290 + } 291 + 292 + static int xsdfec_remove(struct platform_device *pdev) 293 + { 294 + struct xsdfec_dev *xsdfec; 295 + 296 + xsdfec = platform_get_drvdata(pdev); 297 + misc_deregister(&xsdfec->miscdev); 298 + xsdfec_idr_remove(xsdfec); 299 + xsdfec_disable_all_clks(&xsdfec->clks); 300 + return 0; 301 + } 302 + 303 + static const struct of_device_id xsdfec_of_match[] = { 304 + { 305 + .compatible = "xlnx,sd-fec-1.1", 306 + }, 307 + { /* end of table */ } 308 + }; 309 + MODULE_DEVICE_TABLE(of, xsdfec_of_match); 310 + 311 + static struct platform_driver xsdfec_driver = { 312 + .driver = { 313 + .name = "xilinx-sdfec", 314 + .of_match_table = xsdfec_of_match, 315 + }, 316 + .probe = xsdfec_probe, 317 + .remove = xsdfec_remove, 318 + }; 319 + 320 + static int __init xsdfec_init(void) 321 + { 322 + int err; 323 + 324 + mutex_init(&dev_idr_lock); 325 + idr_init(&dev_idr); 326 + err = platform_driver_register(&xsdfec_driver); 327 + if (err < 0) { 328 + pr_err("%s Unable to register SDFEC driver", __func__); 329 + return err; 330 + } 331 + return 0; 332 + } 333 + 334 + static void __exit xsdfec_exit(void) 335 + { 336 + platform_driver_unregister(&xsdfec_driver); 337 + idr_destroy(&dev_idr); 338 + } 339 + 340 + module_init(xsdfec_init);
341 + module_exit(xsdfec_exit); 342 + 343 + MODULE_AUTHOR("Xilinx, Inc"); 344 + MODULE_DESCRIPTION("Xilinx SD-FEC16 Driver"); 345 + MODULE_LICENSE("GPL");
+6 -6
drivers/mux/Kconfig
··· 46 46 be called mux-gpio. 47 47 48 48 config MUX_MMIO 49 - tristate "MMIO register bitfield-controlled Multiplexer" 50 - depends on (OF && MFD_SYSCON) || COMPILE_TEST 49 + tristate "MMIO/Regmap register bitfield-controlled Multiplexer" 50 + depends on OF || COMPILE_TEST 51 51 help 52 - MMIO register bitfield-controlled Multiplexer controller. 52 + MMIO/Regmap register bitfield-controlled Multiplexer controller. 53 53 54 - The driver builds multiplexer controllers for bitfields in a syscon 55 - register. For N bit wide bitfields, there will be 2^N possible 56 - multiplexer states. 54 + The driver builds multiplexer controllers for bitfields in either 55 + a syscon register or a driver regmap register. For N bit wide 56 + bitfields, there will be 2^N possible multiplexer states. 57 57 58 58 To compile the driver as a module, choose M here: the module will 59 59 be called mux-mmio.
+5 -1
drivers/mux/mmio.c
··· 28 28 29 29 static const struct of_device_id mux_mmio_dt_ids[] = { 30 30 { .compatible = "mmio-mux", }, 31 + { .compatible = "reg-mux", }, 31 32 { /* sentinel */ } 32 33 }; 33 34 MODULE_DEVICE_TABLE(of, mux_mmio_dt_ids); ··· 44 43 int ret; 45 44 int i; 46 45 47 - regmap = syscon_node_to_regmap(np->parent); 46 + if (of_device_is_compatible(np, "mmio-mux")) 47 + regmap = syscon_node_to_regmap(np->parent); 48 + else 49 + regmap = dev_get_regmap(dev->parent, NULL) ?: ERR_PTR(-ENODEV); 48 50 if (IS_ERR(regmap)) { 49 51 ret = PTR_ERR(regmap); 50 52 dev_err(dev, "failed to get regmap: %d\n", ret);
+8 -1
drivers/nvmem/Kconfig
··· 47 47 This driver can also be built as a module. If so, the module 48 48 will be called nvmem-imx-ocotp. 49 49 50 + config NVMEM_IMX_OCOTP_SCU 51 + tristate "i.MX8 SCU On-Chip OTP Controller support" 52 + depends on IMX_SCU 53 + help 54 + This is a driver for the SCU On-Chip OTP Controller (OCOTP) 55 + available on i.MX8 SoCs. 56 + 50 57 config NVMEM_LPC18XX_EEPROM 51 58 tristate "NXP LPC18XX EEPROM Memory Support" 52 59 depends on ARCH_LPC18XX || COMPILE_TEST ··· 195 188 196 189 config NVMEM_SNVS_LPGPR 197 190 tristate "Support for Low Power General Purpose Register" 198 - depends on SOC_IMX6 || SOC_IMX7D || COMPILE_TEST 191 + depends on ARCH_MXC || COMPILE_TEST 199 192 help 200 193 This is a driver for Low Power General Purpose Register (LPGPR) available on 201 194 i.MX6 and i.MX7 SoCs in Secure Non-Volatile Storage (SNVS) of this chip.
+2
drivers/nvmem/Makefile
··· 16 16 nvmem-imx-iim-y := imx-iim.o 17 17 obj-$(CONFIG_NVMEM_IMX_OCOTP) += nvmem-imx-ocotp.o 18 18 nvmem-imx-ocotp-y := imx-ocotp.o 19 + obj-$(CONFIG_NVMEM_IMX_OCOTP_SCU) += nvmem-imx-ocotp-scu.o 20 + nvmem-imx-ocotp-scu-y := imx-ocotp-scu.o 19 21 obj-$(CONFIG_NVMEM_LPC18XX_EEPROM) += nvmem_lpc18xx_eeprom.o 20 22 nvmem_lpc18xx_eeprom-y := lpc18xx_eeprom.o 21 23 obj-$(CONFIG_NVMEM_LPC18XX_OTP) += nvmem_lpc18xx_otp.o
+161
drivers/nvmem/imx-ocotp-scu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * i.MX8 OCOTP fusebox driver 4 + * 5 + * Copyright 2019 NXP 6 + * 7 + * Peng Fan <peng.fan@nxp.com> 8 + */ 9 + 10 + #include <linux/firmware/imx/sci.h> 11 + #include <linux/module.h> 12 + #include <linux/nvmem-provider.h> 13 + #include <linux/of_device.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/slab.h> 16 + 17 + enum ocotp_devtype { 18 + IMX8QXP, 19 + }; 20 + 21 + struct ocotp_devtype_data { 22 + int devtype; 23 + int nregs; 24 + }; 25 + 26 + struct ocotp_priv { 27 + struct device *dev; 28 + const struct ocotp_devtype_data *data; 29 + struct imx_sc_ipc *nvmem_ipc; 30 + }; 31 + 32 + struct imx_sc_msg_misc_fuse_read { 33 + struct imx_sc_rpc_msg hdr; 34 + u32 word; 35 + } __packed; 36 + 37 + static struct ocotp_devtype_data imx8qxp_data = { 38 + .devtype = IMX8QXP, 39 + .nregs = 800, 40 + }; 41 + 42 + static int imx_sc_misc_otp_fuse_read(struct imx_sc_ipc *ipc, u32 word, 43 + u32 *val) 44 + { 45 + struct imx_sc_msg_misc_fuse_read msg; 46 + struct imx_sc_rpc_msg *hdr = &msg.hdr; 47 + int ret; 48 + 49 + hdr->ver = IMX_SC_RPC_VERSION; 50 + hdr->svc = IMX_SC_RPC_SVC_MISC; 51 + hdr->func = IMX_SC_MISC_FUNC_OTP_FUSE_READ; 52 + hdr->size = 2; 53 + 54 + msg.word = word; 55 + 56 + ret = imx_scu_call_rpc(ipc, &msg, true); 57 + if (ret) 58 + return ret; 59 + 60 + *val = msg.word; 61 + 62 + return 0; 63 + } 64 + 65 + static int imx_scu_ocotp_read(void *context, unsigned int offset, 66 + void *val, size_t bytes) 67 + { 68 + struct ocotp_priv *priv = context; 69 + u32 count, index, num_bytes; 70 + u32 *buf; 71 + void *p; 72 + int i, ret; 73 + 74 + index = offset >> 2; 75 + num_bytes = round_up((offset % 4) + bytes, 4); 76 + count = num_bytes >> 2; 77 + 78 + if (count > (priv->data->nregs - index)) 79 + count = priv->data->nregs - index; 80 + 81 + p = kzalloc(num_bytes, GFP_KERNEL); 82 + if (!p) 83 + return -ENOMEM; 84 + 85 + buf = p; 86 + 87 + for (i = index; i < (index + count); i++) { 88 + if 
(priv->data->devtype == IMX8QXP) { 89 + if ((i > 271) && (i < 544)) { 90 + *buf++ = 0; 91 + continue; 92 + } 93 + } 94 + 95 + ret = imx_sc_misc_otp_fuse_read(priv->nvmem_ipc, i, buf); 96 + if (ret) { 97 + kfree(p); 98 + return ret; 99 + } 100 + buf++; 101 + } 102 + 103 + memcpy(val, (u8 *)p + offset % 4, bytes); 104 + 105 + kfree(p); 106 + 107 + return 0; 108 + } 109 + 110 + static struct nvmem_config imx_scu_ocotp_nvmem_config = { 111 + .name = "imx-scu-ocotp", 112 + .read_only = true, 113 + .word_size = 4, 114 + .stride = 1, 115 + .owner = THIS_MODULE, 116 + .reg_read = imx_scu_ocotp_read, 117 + }; 118 + 119 + static const struct of_device_id imx_scu_ocotp_dt_ids[] = { 120 + { .compatible = "fsl,imx8qxp-scu-ocotp", (void *)&imx8qxp_data }, 121 + { }, 122 + }; 123 + MODULE_DEVICE_TABLE(of, imx_scu_ocotp_dt_ids); 124 + 125 + static int imx_scu_ocotp_probe(struct platform_device *pdev) 126 + { 127 + struct device *dev = &pdev->dev; 128 + struct ocotp_priv *priv; 129 + struct nvmem_device *nvmem; 130 + int ret; 131 + 132 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 133 + if (!priv) 134 + return -ENOMEM; 135 + 136 + ret = imx_scu_get_handle(&priv->nvmem_ipc); 137 + if (ret) 138 + return ret; 139 + 140 + priv->data = of_device_get_match_data(dev); 141 + priv->dev = dev; 142 + imx_scu_ocotp_nvmem_config.size = 4 * priv->data->nregs; 143 + imx_scu_ocotp_nvmem_config.dev = dev; 144 + imx_scu_ocotp_nvmem_config.priv = priv; 145 + nvmem = devm_nvmem_register(dev, &imx_scu_ocotp_nvmem_config); 146 + 147 + return PTR_ERR_OR_ZERO(nvmem); 148 + } 149 + 150 + static struct platform_driver imx_scu_ocotp_driver = { 151 + .probe = imx_scu_ocotp_probe, 152 + .driver = { 153 + .name = "imx_scu_ocotp", 154 + .of_match_table = imx_scu_ocotp_dt_ids, 155 + }, 156 + }; 157 + module_platform_driver(imx_scu_ocotp_driver); 158 + 159 + MODULE_AUTHOR("Peng Fan <peng.fan@nxp.com>"); 160 + MODULE_DESCRIPTION("i.MX8 SCU OCOTP fuse box driver"); 161 + MODULE_LICENSE("GPL v2");
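`imx_scu_ocotp_read()` above converts an arbitrary byte `offset`/`bytes` request into whole 32-bit fuse words: it starts at word `offset >> 2`, rounds the spanned bytes up to a multiple of 4, clamps against the fuse array size, and later copies the caller's bytes back out at `offset % 4` into the word buffer. A small Python recomputation of that windowing arithmetic (illustrative only, not the driver itself):

```python
def fuse_read_window(offset, nbytes, nregs=800):
    """Word-aligned read window as computed in imx_scu_ocotp_read():
    returns (index of first 32-bit fuse word, number of words to read).
    nregs defaults to the i.MX8QXP value from imx8qxp_data."""
    index = offset >> 2
    num_bytes = -(-((offset % 4) + nbytes) // 4) * 4  # round_up(x, 4)
    count = num_bytes >> 2
    count = min(count, nregs - index)  # clamp to the fuse array
    return index, count
```

For example, a 6-byte read at byte offset 5 touches two words starting at word 1, with the first requested byte at position `5 % 4 = 1` inside the word buffer.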
+44 -8
drivers/nvmem/imx-ocotp.c
··· 39 39 #define IMX_OCOTP_ADDR_DATA2 0x0040 40 40 #define IMX_OCOTP_ADDR_DATA3 0x0050 41 41 42 - #define IMX_OCOTP_BM_CTRL_ADDR 0x0000007F 42 + #define IMX_OCOTP_BM_CTRL_ADDR 0x000000FF 43 43 #define IMX_OCOTP_BM_CTRL_BUSY 0x00000100 44 44 #define IMX_OCOTP_BM_CTRL_ERROR 0x00000200 45 45 #define IMX_OCOTP_BM_CTRL_REL_SHADOWS 0x00000400 46 46 47 - #define DEF_RELAX 20 /* > 16.5ns */ 47 + #define TIMING_STROBE_PROG_US 10 /* Min time to blow a fuse */ 48 + #define TIMING_STROBE_READ_NS 37 /* Min time before read */ 49 + #define TIMING_RELAX_NS 17 48 50 #define DEF_FSOURCE 1001 /* > 1000 ns */ 49 51 #define DEF_STROBE_PROG 10000 /* IPG clocks */ 50 52 #define IMX_OCOTP_WR_UNLOCK 0x3E770000 ··· 178 176 * fields with timing values to match the current frequency of the 179 177 * ipg_clk. OTP writes will work at maximum bus frequencies as long 180 178 * as the HW_OCOTP_TIMING parameters are set correctly. 179 + * 180 + * Note: there are minimum timings required to ensure an OTP fuse burns 181 + * correctly that are independent of the ipg_clk. Those values are not 182 + * formally documented anywhere however, working from the minimum 183 + * timings given in u-boot we can say: 184 + * 185 + * - Minimum STROBE_PROG time is 10 microseconds. Intuitively 10 186 + * microseconds feels about right as representative of a minimum time 187 + * to physically burn out a fuse. 188 + * 189 + * - Minimum STROBE_READ i.e. the time to wait post OTP fuse burn before 190 + * performing another read is 37 nanoseconds 191 + * 192 + * - Minimum RELAX timing is 17 nanoseconds. This final RELAX minimum 193 + * timing is not entirely clear the documentation says "This 194 + * count value specifies the time to add to all default timing 195 + * parameters other than the Tpgm and Trd. It is given in number 196 + * of ipg_clk periods." where Tpgm and Trd refer to STROBE_PROG 197 + * and STROBE_READ respectively. What the other timing parameters 198 + * are though, is not specified. 
Experience shows a zero RELAX 199 + * value will mess up a re-load of the shadow registers post OTP 200 + * burn. 181 201 */ 182 202 clk_rate = clk_get_rate(priv->clk); 183 203 184 - relax = clk_rate / (1000000000 / DEF_RELAX) - 1; 185 - strobe_prog = clk_rate / (1000000000 / 10000) + 2 * (DEF_RELAX + 1) - 1; 186 - strobe_read = clk_rate / (1000000000 / 40) + 2 * (DEF_RELAX + 1) - 1; 204 + relax = DIV_ROUND_UP(clk_rate * TIMING_RELAX_NS, 1000000000) - 1; 205 + strobe_read = DIV_ROUND_UP(clk_rate * TIMING_STROBE_READ_NS, 206 + 1000000000); 207 + strobe_read += 2 * (relax + 1) - 1; 208 + strobe_prog = DIV_ROUND_CLOSEST(clk_rate * TIMING_STROBE_PROG_US, 209 + 1000000); 210 + strobe_prog += 2 * (relax + 1) - 1; 187 211 188 - timing = strobe_prog & 0x00000FFF; 212 + timing = readl(priv->base + IMX_OCOTP_ADDR_TIMING) & 0x0FC00000; 213 + timing |= strobe_prog & 0x00000FFF; 189 214 timing |= (relax << 12) & 0x0000F000; 190 215 timing |= (strobe_read << 16) & 0x003F0000; 191 216 ··· 469 440 470 441 static const struct ocotp_params imx8mq_params = { 471 442 .nregs = 256, 472 - .bank_address_words = 4, 473 - .set_timing = imx_ocotp_set_imx7_timing, 443 + .bank_address_words = 0, 444 + .set_timing = imx_ocotp_set_imx6_timing, 445 + }; 446 + 447 + static const struct ocotp_params imx8mm_params = { 448 + .nregs = 256, 449 + .bank_address_words = 0, 450 + .set_timing = imx_ocotp_set_imx6_timing, 474 451 }; 475 452 476 453 static const struct of_device_id imx_ocotp_dt_ids[] = { ··· 489 454 { .compatible = "fsl,imx6sll-ocotp", .data = &imx6sll_params }, 490 455 { .compatible = "fsl,imx7ulp-ocotp", .data = &imx7ulp_params }, 491 456 { .compatible = "fsl,imx8mq-ocotp", .data = &imx8mq_params }, 457 + { .compatible = "fsl,imx8mm-ocotp", .data = &imx8mm_params }, 492 458 { }, 493 459 }; 494 460 MODULE_DEVICE_TABLE(of, imx_ocotp_dt_ids);
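The imx-ocotp.c hunk above replaces the old frequency-ratio arithmetic with derivations from the three minimum fuse-box timings (17 ns RELAX, 37 ns STROBE_READ, 10 us STROBE_PROG), then merges the fields into HW_OCOTP_TIMING while preserving bits 27..22. The arithmetic can be recomputed in Python; the 66 MHz sample rate below is an assumption (a typical i.MX6 ipg_clk), not taken from the patch:

```python
def div_round_up(n, d):
    # same result as the kernel's DIV_ROUND_UP for non-negative n
    return -(-n // d)

def div_round_closest(n, d):
    # same result as DIV_ROUND_CLOSEST for non-negative n
    return (n + d // 2) // d

def ocotp_timing_fields(clk_rate):
    """Recompute relax/strobe_read/strobe_prog exactly as the new
    imx_ocotp_set_imx6_timing() code does. STROBE_PROG is specified in
    microseconds, hence the different divisor."""
    relax = div_round_up(clk_rate * 17, 1_000_000_000) - 1
    strobe_read = div_round_up(clk_rate * 37, 1_000_000_000) + 2 * (relax + 1) - 1
    strobe_prog = div_round_closest(clk_rate * 10, 1_000_000) + 2 * (relax + 1) - 1
    return relax, strobe_read, strobe_prog

def pack_timing(old_timing, relax, strobe_read, strobe_prog):
    """Merge the fields into HW_OCOTP_TIMING, keeping bits 27..22 of the
    current register value as the patched code now does."""
    timing = old_timing & 0x0FC00000
    timing |= strobe_prog & 0x00000FFF
    timing |= (relax << 12) & 0x0000F000
    timing |= (strobe_read << 16) & 0x003F0000
    return timing
```

At 66 MHz this yields relax = 1, strobe_read = 6 and strobe_prog = 663 ipg_clk cycles, i.e. just over the 10 us minimum programming time.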
+1 -1
drivers/platform/x86/Kconfig
··· 341 341 342 342 Support for a led indicating disk protection will be provided as 343 343 hp::hddprotect. For more information on the feature, refer to 344 - Documentation/misc-devices/lis3lv02d. 344 + Documentation/misc-devices/lis3lv02d.rst. 345 345 346 346 To compile this driver as a module, choose M here: the module will 347 347 be called hp_accel.
-5
drivers/slimbus/core.c
··· 98 98 static int slim_device_uevent(struct device *dev, struct kobj_uevent_env *env) 99 99 { 100 100 struct slim_device *sbdev = to_slim_device(dev); 101 - int ret; 102 - 103 - ret = of_device_uevent_modalias(dev, env); 104 - if (ret != -ENODEV) 105 - return ret; 106 101 107 102 return add_uevent_var(env, "MODALIAS=slim:%s", dev_name(&sbdev->dev)); 108 103 }
+1 -3
drivers/slimbus/qcom-ctrl.c
··· 528 528 529 529 slim_mem = platform_get_resource_byname(pdev, IORESOURCE_MEM, "ctrl"); 530 530 ctrl->base = devm_ioremap_resource(ctrl->dev, slim_mem); 531 - if (IS_ERR(ctrl->base)) { 532 - dev_err(&pdev->dev, "IOremap failed\n"); 531 + if (IS_ERR(ctrl->base)) 533 532 return PTR_ERR(ctrl->base); 534 - } 535 533 536 534 sctrl->set_laddr = qcom_set_laddr; 537 535 sctrl->xfer_msg = qcom_xfer_msg;
+6 -6
drivers/slimbus/stream.c
··· 84 84 512000, 85 85 }; 86 86 87 - /* 87 + /** 88 88 * slim_stream_allocate() - Allocate a new SLIMbus Stream 89 89 * @dev:Slim device to be associated with 90 90 * @name: name of the stream ··· 189 189 return -EINVAL; 190 190 } 191 191 192 - /* 192 + /** 193 193 * slim_stream_prepare() - Prepare a SLIMbus Stream 194 194 * 195 195 * @rt: instance of slim stream runtime to configure ··· 336 336 return slim_do_transfer(sdev->ctrl, &txn); 337 337 } 338 338 339 - /* 339 + /** 340 340 * slim_stream_enable() - Enable a prepared SLIMbus Stream 341 341 * 342 342 * @stream: instance of slim stream runtime to enable ··· 389 389 } 390 390 EXPORT_SYMBOL_GPL(slim_stream_enable); 391 391 392 - /* 392 + /** 393 393 * slim_stream_disable() - Disable a SLIMbus Stream 394 394 * 395 395 * @stream: instance of slim stream runtime to disable ··· 423 423 } 424 424 EXPORT_SYMBOL_GPL(slim_stream_disable); 425 425 426 - /* 426 + /** 427 427 * slim_stream_unprepare() - Un-prepare a SLIMbus Stream 428 428 * 429 429 * @stream: instance of slim stream runtime to unprepare ··· 449 449 } 450 450 EXPORT_SYMBOL_GPL(slim_stream_unprepare); 451 451 452 - /* 452 + /** 453 453 * slim_stream_free() - Free a SLIMbus Stream 454 454 * 455 455 * @stream: instance of slim stream runtime to free
+3 -3
drivers/soundwire/bus.c
··· 87 87 88 88 /* 89 89 * Initialize clock values based on Master properties. The max 90 - * frequency is read from max_freq property. Current assumption 90 + * frequency is read from max_clk_freq property. Current assumption 91 91 * is that the bus will start at highest clock frequency when 92 92 * powered on. 93 93 * ··· 95 95 * to start with bank 0 (Table 40 of Spec) 96 96 */ 97 97 prop = &bus->prop; 98 - bus->params.max_dr_freq = prop->max_freq * SDW_DOUBLE_RATE_FACTOR; 98 + bus->params.max_dr_freq = prop->max_clk_freq * SDW_DOUBLE_RATE_FACTOR; 99 99 bus->params.curr_dr_freq = bus->params.max_dr_freq; 100 100 bus->params.curr_bank = SDW_BANK0; 101 101 bus->params.next_bank = SDW_BANK1; ··· 648 648 return 0; 649 649 650 650 /* Enable DP0 interrupts */ 651 - val = prop->dp0_prop->device_interrupts; 651 + val = prop->dp0_prop->imp_def_interrupts; 652 652 val |= SDW_DP0_INT_PORT_READY | SDW_DP0_INT_BRA_FAILURE; 653 653 654 654 ret = sdw_update(slave, SDW_DP0_INTMASK, val, val);
+17 -13
drivers/soundwire/cadence_master.c
··· 9 9 #include <linux/delay.h> 10 10 #include <linux/device.h> 11 11 #include <linux/interrupt.h> 12 + #include <linux/io.h> 12 13 #include <linux/module.h> 13 14 #include <linux/mod_devicetable.h> 14 15 #include <linux/soundwire/sdw_registers.h> ··· 237 236 for (i = 0; i < count; i++) { 238 237 if (!(cdns->response_buf[i] & CDNS_MCP_RESP_ACK)) { 239 238 no_ack = 1; 240 - dev_dbg(cdns->dev, "Msg Ack not received\n"); 239 + dev_dbg_ratelimited(cdns->dev, "Msg Ack not received\n"); 241 240 if (cdns->response_buf[i] & CDNS_MCP_RESP_NACK) { 242 241 nack = 1; 243 - dev_err(cdns->dev, "Msg NACK received\n"); 242 + dev_err_ratelimited(cdns->dev, "Msg NACK received\n"); 244 243 } 245 244 } 246 245 } 247 246 248 247 if (nack) { 249 - dev_err(cdns->dev, "Msg NACKed for Slave %d\n", msg->dev_num); 248 + dev_err_ratelimited(cdns->dev, "Msg NACKed for Slave %d\n", msg->dev_num); 250 249 return SDW_CMD_FAIL; 251 250 } else if (no_ack) { 252 - dev_dbg(cdns->dev, "Msg ignored for Slave %d\n", msg->dev_num); 251 + dev_dbg_ratelimited(cdns->dev, "Msg ignored for Slave %d\n", msg->dev_num); 253 252 return SDW_CMD_IGNORED; 254 253 } 255 254 ··· 357 356 358 357 /* For NACK, NO ack, don't return err if we are in Broadcast mode */ 359 358 if (nack) { 360 - dev_err(cdns->dev, 361 - "SCP_addrpage NACKed for Slave %d\n", msg->dev_num); 359 + dev_err_ratelimited(cdns->dev, 360 + "SCP_addrpage NACKed for Slave %d\n", msg->dev_num); 362 361 return SDW_CMD_FAIL; 363 362 } else if (no_ack) { 364 - dev_dbg(cdns->dev, 365 - "SCP_addrpage ignored for Slave %d\n", msg->dev_num); 363 + dev_dbg_ratelimited(cdns->dev, 364 + "SCP_addrpage ignored for Slave %d\n", msg->dev_num); 366 365 return SDW_CMD_IGNORED; 367 366 } 368 367 ··· 487 486 { 488 487 enum sdw_slave_status status[SDW_MAX_DEVICES + 1]; 489 488 bool is_slave = false; 490 - u64 slave, mask; 489 + u64 slave; 490 + u32 mask; 491 491 int i, set_status; 492 492 493 493 /* combine the two status */ ··· 526 524 527 525 /* first check if Slave 
reported multiple status */ 528 526 if (set_status > 1) { 529 - dev_warn(cdns->dev, 530 - "Slave reported multiple Status: %d\n", 531 - status[i]); 527 + dev_warn_ratelimited(cdns->dev, 528 + "Slave reported multiple Status: %d\n", 529 + mask); 532 530 /* 533 531 * TODO: we need to reread the status here by 534 532 * issuing a PING cmd ··· 614 612 struct sdw_cdns *cdns = dev_id; 615 613 u32 slave0, slave1; 616 614 617 - dev_dbg(cdns->dev, "Slave status change\n"); 615 + dev_dbg_ratelimited(cdns->dev, "Slave status change\n"); 618 616 619 617 slave0 = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT0); 620 618 slave1 = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT1); ··· 718 716 stream = &cdns->pcm; 719 717 720 718 /* First two PDIs are reserved for bulk transfers */ 719 + if (stream->num_bd < CDNS_PCM_PDI_OFFSET) 720 + return -EINVAL; 721 721 stream->num_bd -= CDNS_PCM_PDI_OFFSET; 722 722 offset = CDNS_PCM_PDI_OFFSET; 723 723
+12 -5
drivers/soundwire/intel.c
··· 263 263 config->pcm_out = (pcm_cap & SDW_SHIM_PCMSCAP_OSS) >> 264 264 SDW_REG_SHIFT(SDW_SHIM_PCMSCAP_OSS); 265 265 266 + dev_dbg(sdw->cdns.dev, "PCM cap bd:%d in:%d out:%d\n", 267 + config->pcm_bd, config->pcm_in, config->pcm_out); 268 + 266 269 /* PDM Stream Capability */ 267 270 pdm_cap = intel_readw(shim, SDW_SHIM_PDMSCAP(link_id)); 268 271 ··· 275 272 SDW_REG_SHIFT(SDW_SHIM_PDMSCAP_ISS); 276 273 config->pdm_out = (pdm_cap & SDW_SHIM_PDMSCAP_OSS) >> 277 274 SDW_REG_SHIFT(SDW_SHIM_PDMSCAP_OSS); 275 + 276 + dev_dbg(sdw->cdns.dev, "PDM cap bd:%d in:%d out:%d\n", 277 + config->pdm_bd, config->pdm_in, config->pdm_out); 278 278 } 279 279 280 280 static int ··· 802 796 sdw_master_read_prop(bus); 803 797 804 798 /* BIOS is not giving some values correctly. So, lets override them */ 805 - bus->prop.num_freq = 1; 806 - bus->prop.freq = devm_kcalloc(bus->dev, bus->prop.num_freq, 807 - sizeof(*bus->prop.freq), GFP_KERNEL); 808 - if (!bus->prop.freq) 799 + bus->prop.num_clk_freq = 1; 800 + bus->prop.clk_freq = devm_kcalloc(bus->dev, bus->prop.num_clk_freq, 801 + sizeof(*bus->prop.clk_freq), 802 + GFP_KERNEL); 803 + if (!bus->prop.clk_freq) 809 804 return -ENOMEM; 810 805 811 - bus->prop.freq[0] = bus->prop.max_freq; 806 + bus->prop.clk_freq[0] = bus->prop.max_clk_freq; 812 807 bus->prop.err_threshold = 5; 813 808 814 809 return 0;
+1 -1
drivers/soundwire/intel.h
··· 5 5 #define __SDW_INTEL_LOCAL_H 6 6 7 7 /** 8 - * struct sdw_intel_res - Soundwire link resources 8 + * struct sdw_intel_link_res - Soundwire link resources 9 9 * @registers: Link IO registers base 10 10 * @shim: Audio shim pointer 11 11 * @alh: ALH (Audio Link Hub) pointer
+24 -1
drivers/soundwire/intel_init.c
··· 14 14 #include <linux/soundwire/sdw_intel.h> 15 15 #include "intel.h" 16 16 17 + #define SDW_LINK_TYPE 4 /* from Intel ACPI documentation */ 17 18 #define SDW_MAX_LINKS 4 18 19 #define SDW_SHIM_LCAP 0x0 19 20 #define SDW_SHIM_BASE 0x2C000 ··· 81 80 82 81 /* Check SNDWLCAP.LCOUNT */ 83 82 caps = ioread32(res->mmio_base + SDW_SHIM_BASE + SDW_SHIM_LCAP); 83 + caps &= GENMASK(2, 0); 84 84 85 85 /* Check HW supported vs property value and use min of two */ 86 86 count = min_t(u8, caps, count); ··· 90 88 if (count > SDW_MAX_LINKS) { 91 89 dev_err(&adev->dev, "Link count %d exceeds max %d\n", 92 90 count, SDW_MAX_LINKS); 91 + return NULL; 92 + } else if (!count) { 93 + dev_warn(&adev->dev, "No SoundWire links detected\n"); 93 94 return NULL; 94 95 } 95 96 ··· 155 150 { 156 151 struct sdw_intel_res *res = cdata; 157 152 struct acpi_device *adev; 153 + acpi_status status; 154 + u64 adr; 155 + 156 + status = acpi_evaluate_integer(handle, METHOD_NAME__ADR, NULL, &adr); 157 + if (ACPI_FAILURE(status)) 158 + return AE_OK; /* keep going */ 158 159 159 160 if (acpi_bus_get_device(handle, &adev)) { 160 161 pr_err("%s: Couldn't find ACPI handle\n", __func__); ··· 168 157 } 169 158 170 159 res->handle = handle; 171 - return AE_OK; 160 + 161 + /* 162 + * On some Intel platforms, multiple children of the HDAS 163 + * device can be found, but only one of them is the SoundWire 164 + * controller. The SNDW device is always exposed with 165 + * Name(_ADR, 0x40000000), with bits 31..28 representing the 166 + * SoundWire link so filter accordingly 167 + */ 168 + if ((adr & GENMASK(31, 28)) >> 28 != SDW_LINK_TYPE) 169 + return AE_OK; /* keep going */ 170 + 171 + /* device found, stop namespace walk */ 172 + return AE_CTRL_TERMINATE; 172 173 } 173 174 174 175 /**
+18 -17
drivers/soundwire/mipi_disco.c
··· 40 40 41 41 /* Find master handle */ 42 42 snprintf(name, sizeof(name), 43 - "mipi-sdw-master-%d-subproperties", bus->link_id); 43 + "mipi-sdw-link-%d-subproperties", bus->link_id); 44 44 45 45 link = device_get_named_child_node(bus->dev, name); 46 46 if (!link) { ··· 50 50 51 51 if (fwnode_property_read_bool(link, 52 52 "mipi-sdw-clock-stop-mode0-supported")) 53 - prop->clk_stop_mode = SDW_CLK_STOP_MODE0; 53 + prop->clk_stop_modes |= BIT(SDW_CLK_STOP_MODE0); 54 54 55 55 if (fwnode_property_read_bool(link, 56 56 "mipi-sdw-clock-stop-mode1-supported")) 57 - prop->clk_stop_mode |= SDW_CLK_STOP_MODE1; 57 + prop->clk_stop_modes |= BIT(SDW_CLK_STOP_MODE1); 58 58 59 59 fwnode_property_read_u32(link, 60 60 "mipi-sdw-max-clock-frequency", 61 - &prop->max_freq); 61 + &prop->max_clk_freq); 62 62 63 63 nval = fwnode_property_read_u32_array(link, 64 64 "mipi-sdw-clock-frequencies-supported", NULL, 0); 65 65 if (nval > 0) { 66 - prop->num_freq = nval; 67 - prop->freq = devm_kcalloc(bus->dev, prop->num_freq, 68 - sizeof(*prop->freq), GFP_KERNEL); 69 - if (!prop->freq) 66 + prop->num_clk_freq = nval; 67 + prop->clk_freq = devm_kcalloc(bus->dev, prop->num_clk_freq, 68 + sizeof(*prop->clk_freq), 69 + GFP_KERNEL); 70 + if (!prop->clk_freq) 70 71 return -ENOMEM; 71 72 72 73 fwnode_property_read_u32_array(link, 73 74 "mipi-sdw-clock-frequencies-supported", 74 - prop->freq, prop->num_freq); 75 + prop->clk_freq, prop->num_clk_freq); 75 76 } 76 77 77 78 /* 78 79 * Check the frequencies supported. If FW doesn't provide max 79 80 * freq, then populate here by checking values. 
80 81 */ 81 - if (!prop->max_freq && prop->freq) { 82 - prop->max_freq = prop->freq[0]; 83 - for (i = 1; i < prop->num_freq; i++) { 84 - if (prop->freq[i] > prop->max_freq) 85 - prop->max_freq = prop->freq[i]; 82 + if (!prop->max_clk_freq && prop->clk_freq) { 83 + prop->max_clk_freq = prop->clk_freq[0]; 84 + for (i = 1; i < prop->num_clk_freq; i++) { 85 + if (prop->clk_freq[i] > prop->max_clk_freq) 86 + prop->max_clk_freq = prop->clk_freq[i]; 86 87 } 87 88 } 88 89 ··· 150 149 dp0->words, dp0->num_words); 151 150 } 152 151 153 - dp0->flow_controlled = fwnode_property_read_bool(port, 152 + dp0->BRA_flow_controlled = fwnode_property_read_bool(port, 154 153 "mipi-sdw-bra-flow-controlled"); 155 154 156 155 dp0->simple_ch_prep_sm = fwnode_property_read_bool(port, 157 156 "mipi-sdw-simplified-channel-prepare-sm"); 158 157 159 - dp0->device_interrupts = fwnode_property_read_bool(port, 158 + dp0->imp_def_interrupts = fwnode_property_read_bool(port, 160 159 "mipi-sdw-imp-def-dp0-interrupts-supported"); 161 160 162 161 return 0; ··· 225 224 226 225 fwnode_property_read_u32(node, 227 226 "mipi-sdw-imp-def-dpn-interrupts-supported", 228 - &dpn[i].device_interrupts); 227 + &dpn[i].imp_def_interrupts); 229 228 230 229 fwnode_property_read_u32(node, "mipi-sdw-min-channel-number", 231 230 &dpn[i].min_ch);
+4 -4
drivers/soundwire/stream.c
··· 439 439 440 440 prep_ch.bank = bus->params.next_bank; 441 441 442 - if (dpn_prop->device_interrupts || !dpn_prop->simple_ch_prep_sm) 442 + if (dpn_prop->imp_def_interrupts || !dpn_prop->simple_ch_prep_sm) 443 443 intr = true; 444 444 445 445 /* ··· 449 449 */ 450 450 if (prep && intr) { 451 451 ret = sdw_configure_dpn_intr(s_rt->slave, p_rt->num, prep, 452 - dpn_prop->device_interrupts); 452 + dpn_prop->imp_def_interrupts); 453 453 if (ret < 0) 454 454 return ret; 455 455 } ··· 493 493 /* Disable interrupt after Port de-prepare */ 494 494 if (!prep && intr) 495 495 ret = sdw_configure_dpn_intr(s_rt->slave, p_rt->num, prep, 496 - dpn_prop->device_interrupts); 496 + dpn_prop->imp_def_interrupts); 497 497 498 498 return ret; 499 499 } ··· 1473 1473 memcpy(&params, &bus->params, sizeof(params)); 1474 1474 1475 1475 /* TODO: Support Asynchronous mode */ 1476 - if ((prop->max_freq % stream->params.rate) != 0) { 1476 + if ((prop->max_clk_freq % stream->params.rate) != 0) { 1477 1477 dev_err(bus->dev, "Async mode not supported\n"); 1478 1478 return -EINVAL; 1479 1479 }
+44 -21
drivers/w1/slaves/w1_ds2413.c
··· 22 22 #define W1_F3A_FUNC_PIO_ACCESS_READ 0xF5 23 23 #define W1_F3A_FUNC_PIO_ACCESS_WRITE 0x5A 24 24 #define W1_F3A_SUCCESS_CONFIRM_BYTE 0xAA 25 + #define W1_F3A_INVALID_PIO_STATE 0xFF 25 26 26 27 static ssize_t state_read(struct file *filp, struct kobject *kobj, 27 28 struct bin_attribute *bin_attr, char *buf, loff_t off, 28 29 size_t count) 29 30 { 30 31 struct w1_slave *sl = kobj_to_w1_slave(kobj); 32 + unsigned int retries = W1_F3A_RETRIES; 33 + ssize_t bytes_read = -EIO; 34 + u8 state; 35 + 31 36 dev_dbg(&sl->dev, 32 37 "Reading %s kobj: %p, off: %0#10x, count: %zu, buff addr: %p", 33 38 bin_attr->attr.name, kobj, (unsigned int)off, count, buf); ··· 45 40 mutex_lock(&sl->master->bus_mutex); 46 41 dev_dbg(&sl->dev, "mutex locked"); 47 42 48 - if (w1_reset_select_slave(sl)) { 49 - mutex_unlock(&sl->master->bus_mutex); 50 - return -EIO; 43 + next: 44 + if (w1_reset_select_slave(sl)) 45 + goto out; 46 + 47 + while (retries--) { 48 + w1_write_8(sl->master, W1_F3A_FUNC_PIO_ACCESS_READ); 49 + 50 + state = w1_read_8(sl->master); 51 + if ((state & 0x0F) == ((~state >> 4) & 0x0F)) { 52 + /* complement is correct */ 53 + *buf = state; 54 + bytes_read = 1; 55 + goto out; 56 + } else if (state == W1_F3A_INVALID_PIO_STATE) { 57 + /* slave didn't respond, try to select it again */ 58 + dev_warn(&sl->dev, "slave device did not respond to PIO_ACCESS_READ, " \ 59 + "reselecting, retries left: %d\n", retries); 60 + goto next; 61 + } 62 + 63 + if (w1_reset_resume_command(sl->master)) 64 + goto out; /* unrecoverable error */ 65 + 66 + dev_warn(&sl->dev, "PIO_ACCESS_READ error, retries left: %d\n", retries); 51 67 } 52 68 53 - w1_write_8(sl->master, W1_F3A_FUNC_PIO_ACCESS_READ); 54 - *buf = w1_read_8(sl->master); 55 - 69 + out: 56 70 mutex_unlock(&sl->master->bus_mutex); 57 - dev_dbg(&sl->dev, "mutex unlocked"); 58 - 59 - /* check for correct complement */ 60 - if ((*buf & 0x0F) != ((~*buf >> 4) & 0x0F)) 61 - return -EIO; 62 - else 63 - return 1; 71 + dev_dbg(&sl->dev, "%s, 
mutex unlocked, retries: %d\n", 72 + (bytes_read > 0) ? "succeeded" : "error", retries); 73 + return bytes_read; 64 74 } 65 75 66 76 static BIN_ATTR_RO(state, 1); ··· 87 67 struct w1_slave *sl = kobj_to_w1_slave(kobj); 88 68 u8 w1_buf[3]; 89 69 unsigned int retries = W1_F3A_RETRIES; 70 + ssize_t bytes_written = -EIO; 90 71 91 72 if (count != 1 || off != 0) 92 73 return -EFAULT; ··· 97 76 dev_dbg(&sl->dev, "mutex locked"); 98 77 99 78 if (w1_reset_select_slave(sl)) 100 - goto error; 79 + goto out; 101 80 102 81 /* according to the DS2413 datasheet the most significant 6 bits 103 82 should be set to "1"s, so do it now */ ··· 110 89 w1_write_block(sl->master, w1_buf, 3); 111 90 112 91 if (w1_read_8(sl->master) == W1_F3A_SUCCESS_CONFIRM_BYTE) { 113 - mutex_unlock(&sl->master->bus_mutex); 114 - dev_dbg(&sl->dev, "mutex unlocked, retries:%d", retries); 115 - return 1; 92 + bytes_written = 1; 93 + goto out; 116 94 } 117 95 if (w1_reset_resume_command(sl->master)) 118 - goto error; 96 + goto out; /* unrecoverable error */ 97 + 98 + dev_warn(&sl->dev, "PIO_ACCESS_WRITE error, retries left: %d\n", retries); 119 99 } 120 100 121 - error: 101 + out: 122 102 mutex_unlock(&sl->master->bus_mutex); 123 - dev_dbg(&sl->dev, "mutex unlocked in error, retries:%d", retries); 124 - return -EIO; 103 + dev_dbg(&sl->dev, "%s, mutex unlocked, retries: %d\n", 104 + (bytes_written > 0) ? "succeeded" : "error", retries); 105 + return bytes_written; 125 106 } 126 107 127 108 static BIN_ATTR(output, S_IRUGO | S_IWUSR | S_IWGRP, NULL, output_write, 1);
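The retry loop above hinges on one property of the DS2413 reply byte: the low nibble carries the PIO state and the high nibble carries its bitwise complement, while 0xFF means the slave never drove the bus. A standalone sketch of that check (function names here are illustrative, not part of the driver):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * A DS2413 PIO_ACCESS_READ reply encodes the PIO state in the low
 * nibble and its bitwise complement in the high nibble; 0xFF means the
 * slave did not respond and the bus floated high for the whole byte.
 */
#define DS2413_INVALID_PIO_STATE 0xFFu

bool ds2413_sample_valid(uint8_t state)
{
	/* same test as the driver: low nibble == complemented high nibble */
	return (state & 0x0F) == ((~state >> 4) & 0x0F);
}

bool ds2413_sample_absent(uint8_t state)
{
	/* no response at all: reselect the slave rather than retry the read */
	return state == DS2413_INVALID_PIO_STATE;
}
```

With 0xA5 the complement check holds (0x5 vs ~0xA), so it is accepted; 0xA4 fails and would be retried; 0xFF triggers the reselect path.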
+3 -3
drivers/w1/slaves/w1_ds2805.c
··· 286 286 .remove_slave = w1_f0d_remove_slave, 287 287 }; 288 288 289 - static struct w1_family w1_family_2d = { 289 + static struct w1_family w1_family_0d = { 290 290 .fid = W1_EEPROM_DS2805, 291 291 .fops = &w1_f0d_fops, 292 292 }; ··· 294 294 static int __init w1_f0d_init(void) 295 295 { 296 296 pr_info("%s()\n", __func__); 297 - return w1_register_family(&w1_family_2d); 297 + return w1_register_family(&w1_family_0d); 298 298 } 299 299 300 300 static void __exit w1_f0d_fini(void) 301 301 { 302 302 pr_info("%s()\n", __func__); 303 - w1_unregister_family(&w1_family_2d); 303 + w1_unregister_family(&w1_family_0d); 304 304 } 305 305 306 306 module_init(w1_f0d_init);
+2 -1
fs/char_dev.c
··· 98 98 int minorct, const char *name) 99 99 { 100 100 struct char_device_struct *cd, *curr, *prev = NULL; 101 - int ret = -EBUSY; 101 + int ret; 102 102 int i; 103 103 104 104 if (major >= CHRDEV_MAJOR_MAX) { ··· 129 129 major = ret; 130 130 } 131 131 132 + ret = -EBUSY; 132 133 i = major_to_index(major); 133 134 for (curr = chrdevs[i]; curr; prev = curr, curr = curr->next) { 134 135 if (curr->major < major)
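The char_dev change above moves the `-EBUSY` assignment below the dynamic-major branch because `ret` also carries the allocated major (`major = ret;`); initialising it at declaration let a later range conflict return the positive major instead of an error. A userspace model of the same pattern (`reserve_major()` and the fixed dynamic value are hypothetical stand-ins):

```c
#include <errno.h>

/*
 * Miniature model of the __register_chrdev_region() fix: 'ret' doubles
 * as the dynamically allocated major, so the -EBUSY error code must be
 * assigned only after that branch is done reusing 'ret'.
 */
int reserve_major(int major, const int *taken, int ntaken)
{
	int ret;
	int i;

	if (major == 0) {
		ret = 254;	/* stand-in for dynamic allocation */
		if (ret < 0)
			return ret;
		major = ret;
	}

	ret = -EBUSY;		/* set once 'ret' is free again */
	for (i = 0; i < ntaken; i++)
		if (taken[i] == major)
			return ret;	/* conflict: a real error code */

	return major;
}
```

Had `ret` been set to `-EBUSY` at declaration, the dynamic path above would have overwritten it, and a conflict on major 254 would have "succeeded" by returning 254.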
+4
include/linux/balloon_compaction.h
··· 64 64 extern void balloon_page_enqueue(struct balloon_dev_info *b_dev_info, 65 65 struct page *page); 66 66 extern struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info); 67 + extern size_t balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info, 68 + struct list_head *pages); 69 + extern size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info, 70 + struct list_head *pages, size_t n_req_pages); 67 71 68 72 static inline void balloon_devinfo_init(struct balloon_dev_info *balloon) 69 73 {
+35 -26
include/linux/coresight.h
··· 91 91 92 92 /** 93 93 * struct coresight_platform_data - data harvested from the DT specification 94 - * @cpu: the CPU a source belongs to. Only applicable for ETM/PTMs. 95 - * @name: name of the component as shown under sysfs. 96 94 * @nr_inport: number of input ports for this component. 97 95 * @nr_outport: number of output ports for this component. 98 96 * @conns: Array of nr_outport connections from this component 99 97 */ 100 98 struct coresight_platform_data { 101 - int cpu; 102 - const char *name; 103 99 int nr_inport; 104 100 int nr_outport; 105 101 struct coresight_connection *conns; ··· 106 110 * @type: as defined by @coresight_dev_type. 107 111 * @subtype: as defined by @coresight_dev_subtype. 108 112 * @ops: generic operations for this component, as defined 109 - by @coresight_ops. 113 + * by @coresight_ops. 110 114 * @pdata: platform data collected from DT. 111 115 * @dev: The device entity associated to this component. 112 116 * @groups: operations specific to this component. These will end up 113 - in the component's sysfs sub-directory. 117 + * in the component's sysfs sub-directory. 118 + * @name: name for the coresight device, also shown under sysfs. 114 119 */ 115 120 struct coresight_desc { 116 121 enum coresight_dev_type type; ··· 120 123 struct coresight_platform_data *pdata; 121 124 struct device *dev; 122 125 const struct attribute_group **groups; 126 + const char *name; 123 127 }; 124 128 125 129 /** 126 130 * struct coresight_connection - representation of a single connection 127 131 * @outport: a connection's output port number. 128 - * @chid_name: remote component's name. 129 132 * @child_port: remote component's port number @output is connected to. 133 + * @chid_fwnode: remote component's fwnode handle. 130 134 * @child_dev: a @coresight_device representation of the component 131 135 connected to @outport. 
132 136 */ 133 137 struct coresight_connection { 134 138 int outport; 135 - const char *child_name; 136 139 int child_port; 140 + struct fwnode_handle *child_fwnode; 137 141 struct coresight_device *child_dev; 138 142 }; 139 143 140 144 /** 141 145 * struct coresight_device - representation of a device as used by the framework 142 - * @conns: array of coresight_connections associated to this component. 143 - * @nr_inport: number of input port associated to this component. 144 - * @nr_outport: number of output port associated to this component. 146 + * @pdata: Platform data with device connections associated to this device. 145 147 * @type: as defined by @coresight_dev_type. 146 148 * @subtype: as defined by @coresight_dev_subtype. 147 149 * @ops: generic operations for this component, as defined ··· 155 159 * @ea: Device attribute for sink representation under PMU directory. 156 160 */ 157 161 struct coresight_device { 158 - struct coresight_connection *conns; 159 - int nr_inport; 160 - int nr_outport; 162 + struct coresight_platform_data *pdata; 161 163 enum coresight_dev_type type; 162 164 union coresight_dev_subtype subtype; 163 165 const struct coresight_ops *ops; ··· 167 173 bool activated; /* true only if a sink is part of a path */ 168 174 struct dev_ext_attribute *ea; 169 175 }; 176 + 177 + /* 178 + * coresight_dev_list - Mapping for devices to "name" index for device 179 + * names. 180 + * 181 + * @nr_idx: Number of entries already allocated. 182 + * @pfx: Prefix pattern for device name. 183 + * @fwnode_list: Array of fwnode_handles associated with each allocated 184 + * index, upto nr_idx entries. 
185 + */ 186 + struct coresight_dev_list { 187 + int nr_idx; 188 + const char *pfx; 189 + struct fwnode_handle **fwnode_list; 190 + }; 191 + 192 + #define DEFINE_CORESIGHT_DEVLIST(var, dev_pfx) \ 193 + static struct coresight_dev_list (var) = { \ 194 + .pfx = dev_pfx, \ 195 + .nr_idx = 0, \ 196 + .fwnode_list = NULL, \ 197 + } 170 198 171 199 #define to_coresight_device(d) container_of(d, struct coresight_device, dev) 172 200 ··· 283 267 284 268 extern void coresight_disclaim_device(void __iomem *base); 285 269 extern void coresight_disclaim_device_unlocked(void __iomem *base); 286 - 270 + extern char *coresight_alloc_device_name(struct coresight_dev_list *devs, 271 + struct device *dev); 287 272 #else 288 273 static inline struct coresight_device * 289 274 coresight_register(struct coresight_desc *desc) { return NULL; } ··· 309 292 310 293 #endif 311 294 312 - #ifdef CONFIG_OF 313 - extern int of_coresight_get_cpu(const struct device_node *node); 314 - extern struct coresight_platform_data * 315 - of_get_coresight_platform_data(struct device *dev, 316 - const struct device_node *node); 317 - #else 318 - static inline int of_coresight_get_cpu(const struct device_node *node) 319 - { return 0; } 320 - static inline struct coresight_platform_data *of_get_coresight_platform_data( 321 - struct device *dev, const struct device_node *node) { return NULL; } 322 - #endif 295 + extern int coresight_get_cpu(struct device *dev); 296 + 297 + struct coresight_platform_data *coresight_get_platform_data(struct device *dev); 323 298 324 299 #endif
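The `coresight_dev_list` / `coresight_alloc_device_name()` machinery above hands out per-prefix indices ("etm0", "etm1", ...) and maps the same firmware node back to the same index on repeat lookups. A userspace sketch of that idea, with `realloc` standing in for the kernel's internal allocation and error handling kept minimal:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Userspace analogue of coresight_dev_list: one index space per prefix. */
struct dev_list {
	int nr_idx;
	const char *pfx;
	const void **fwnode_list;
};

#define DEFINE_DEVLIST(var, dev_pfx) \
	struct dev_list var = { .nr_idx = 0, .pfx = dev_pfx, .fwnode_list = NULL }

char *alloc_device_name(struct dev_list *devs, const void *fwnode)
{
	int idx;
	char *name;

	for (idx = 0; idx < devs->nr_idx; idx++)
		if (devs->fwnode_list[idx] == fwnode)
			goto done;	/* known node: reuse its index */

	devs->fwnode_list = realloc(devs->fwnode_list,
				    (devs->nr_idx + 1) * sizeof(*devs->fwnode_list));
	devs->fwnode_list[devs->nr_idx] = fwnode;
	idx = devs->nr_idx++;
done:
	name = malloc(strlen(devs->pfx) + 16);
	sprintf(name, "%s%d", devs->pfx, idx);
	return name;
}

/* example instances: a name list and two dummy "firmware nodes" */
DEFINE_DEVLIST(etm_devs, "etm");
int nodeA, nodeB;
```

Asking twice for the same node yields the same name, which is what keeps sysfs names stable across rebinds.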
-1
include/linux/firmware/xlnx-zynqmp.h
··· 46 46 #define ZYNQMP_PM_CAPABILITY_ACCESS 0x1U 47 47 #define ZYNQMP_PM_CAPABILITY_CONTEXT 0x2U 48 48 #define ZYNQMP_PM_CAPABILITY_WAKEUP 0x4U 49 - #define ZYNQMP_PM_CAPABILITY_POWER 0x8U 50 49 51 50 /* 52 51 * Firmware FPGA Manager flags
-24
include/linux/platform_data/fsa9480.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-only */ 2 - /* 3 - * Copyright (C) 2010 Samsung Electronics 4 - * Minkyu Kang <mk7.kang@samsung.com> 5 - */ 6 - 7 - #ifndef _FSA9480_H_ 8 - #define _FSA9480_H_ 9 - 10 - #define FSA9480_ATTACHED 1 11 - #define FSA9480_DETACHED 0 12 - 13 - struct fsa9480_platform_data { 14 - void (*cfg_gpio) (void); 15 - void (*usb_cb) (u8 attached); 16 - void (*uart_cb) (u8 attached); 17 - void (*charger_cb) (u8 attached); 18 - void (*jig_cb) (u8 attached); 19 - void (*reset_cb) (void); 20 - void (*usb_power) (u8 on); 21 - int wakeup; 22 - }; 23 - 24 - #endif /* _FSA9480_H_ */
+69 -19
include/linux/soundwire/sdw.h
··· 41 41 #define SDW_DAI_ID_RANGE_START 100 42 42 #define SDW_DAI_ID_RANGE_END 200 43 43 44 + enum { 45 + SDW_PORT_DIRN_SINK = 0, 46 + SDW_PORT_DIRN_SOURCE, 47 + SDW_PORT_DIRN_MAX, 48 + }; 49 + 50 + /* 51 + * constants for flow control, ports and transport 52 + * 53 + * these are bit masks as devices can have multiple capabilities 54 + */ 55 + 56 + /* 57 + * flow modes for SDW port. These can be isochronous, tx controlled, 58 + * rx controlled or async 59 + */ 60 + #define SDW_PORT_FLOW_MODE_ISOCH 0 61 + #define SDW_PORT_FLOW_MODE_TX_CNTRL BIT(0) 62 + #define SDW_PORT_FLOW_MODE_RX_CNTRL BIT(1) 63 + #define SDW_PORT_FLOW_MODE_ASYNC GENMASK(1, 0) 64 + 65 + /* sample packaging for block. It can be per port or per channel */ 66 + #define SDW_BLOCK_PACKG_PER_PORT BIT(0) 67 + #define SDW_BLOCK_PACKG_PER_CH BIT(1) 68 + 44 69 /** 45 70 * enum sdw_slave_status - Slave status 46 71 * @SDW_SLAVE_UNATTACHED: Slave is not attached with the bus. ··· 101 76 SDW_CMD_FAIL_OTHER = 4, 102 77 }; 103 78 79 + /* block group count enum */ 80 + enum sdw_dpn_grouping { 81 + SDW_BLK_GRP_CNT_1 = 0, 82 + SDW_BLK_GRP_CNT_2 = 1, 83 + SDW_BLK_GRP_CNT_3 = 2, 84 + SDW_BLK_GRP_CNT_4 = 3, 85 + }; 86 + 104 87 /** 105 88 * enum sdw_stream_type: data stream type 106 89 * ··· 131 98 enum sdw_data_direction { 132 99 SDW_DATA_DIR_RX = 0, 133 100 SDW_DATA_DIR_TX = 1, 101 + }; 102 + 103 + /** 104 + * enum sdw_port_data_mode: Data Port mode 105 + * 106 + * @SDW_PORT_DATA_MODE_NORMAL: Normal data mode where audio data is received 107 + * and transmitted. 108 + * @SDW_PORT_DATA_MODE_STATIC_1: Simple test mode which uses static value of 109 + * logic 1. The encoding will result in signal transitions at every bitslot 110 + * owned by this Port 111 + * @SDW_PORT_DATA_MODE_STATIC_0: Simple test mode which uses static value of 112 + * logic 0. 
The encoding will result in no signal transitions 113 + * @SDW_PORT_DATA_MODE_PRBS: Test mode which uses a PRBS generator to produce 114 + * a pseudo random data pattern that is transferred 115 + */ 116 + enum sdw_port_data_mode { 117 + SDW_PORT_DATA_MODE_NORMAL = 0, 118 + SDW_PORT_DATA_MODE_STATIC_1 = 1, 119 + SDW_PORT_DATA_MODE_STATIC_0 = 2, 120 + SDW_PORT_DATA_MODE_PRBS = 3, 134 121 }; 135 122 136 123 /* ··· 206 153 * (inclusive) 207 154 * @num_words: number of wordlengths supported 208 155 * @words: wordlengths supported 209 - * @flow_controlled: Slave implementation results in an OK_NotReady 156 + * @BRA_flow_controlled: Slave implementation results in an OK_NotReady 210 157 * response 211 158 * @simple_ch_prep_sm: If channel prepare sequence is required 212 - * @device_interrupts: If implementation-defined interrupts are supported 159 + * @imp_def_interrupts: If set, each bit corresponds to support for 160 + * implementation-defined interrupts 213 161 * 214 162 * The wordlengths are specified by Spec as max, min AND number of 215 163 * discrete values, implementation can define based on the wordlengths they ··· 221 167 u32 min_word; 222 168 u32 num_words; 223 169 u32 *words; 224 - bool flow_controlled; 170 + bool BRA_flow_controlled; 225 171 bool simple_ch_prep_sm; 226 - bool device_interrupts; 172 + bool imp_def_interrupts; 227 173 }; 228 174 229 175 /** ··· 273 219 * @simple_ch_prep_sm: If the port supports simplified channel prepare state 274 220 * machine 275 221 * @ch_prep_timeout: Port-specific timeout value, in milliseconds 276 - * @device_interrupts: If set, each bit corresponds to support for 222 + * @imp_def_interrupts: If set, each bit corresponds to support for 277 223 * implementation-defined interrupts 278 224 * @max_ch: Maximum channels supported 279 225 * @min_ch: Minimum channels supported ··· 298 244 u32 max_grouping; 299 245 bool simple_ch_prep_sm; 300 246 u32 ch_prep_timeout; 301 - u32 device_interrupts; 247 + u32 imp_def_interrupts; 302 
248 u32 max_ch; 303 249 u32 min_ch; 304 250 u32 num_ch; ··· 365 311 /** 366 312 * struct sdw_master_prop - Master properties 367 313 * @revision: MIPI spec version of the implementation 368 - * @master_count: Number of masters 369 - * @clk_stop_mode: Bitmap for Clock Stop modes supported 370 - * @max_freq: Maximum Bus clock frequency, in Hz 314 + * @clk_stop_modes: Bitmap, bit N set when clock-stop-modeN supported 315 + * @max_clk_freq: Maximum Bus clock frequency, in Hz 371 316 * @num_clk_gears: Number of clock gears supported 372 317 * @clk_gears: Clock gears supported 373 - * @num_freq: Number of clock frequencies supported, in Hz 374 - * @freq: Clock frequencies supported, in Hz 318 + * @num_clk_freq: Number of clock frequencies supported, in Hz 319 + * @clk_freq: Clock frequencies supported, in Hz 375 320 * @default_frame_rate: Controller default Frame rate, in Hz 376 321 * @default_row: Number of rows 377 322 * @default_col: Number of columns 378 - * @dynamic_frame: Dynamic frame supported 323 + * @dynamic_frame: Dynamic frame shape supported 379 324 * @err_threshold: Number of times that software may retry sending a single 380 325 * command 381 - * @dpn_prop: Data Port N properties 382 326 */ 383 327 struct sdw_master_prop { 384 328 u32 revision; 385 - u32 master_count; 386 - enum sdw_clk_stop_mode clk_stop_mode; 387 - u32 max_freq; 329 + u32 clk_stop_modes; 330 + u32 max_clk_freq; 388 331 u32 num_clk_gears; 389 332 u32 *clk_gears; 390 - u32 num_freq; 391 - u32 *freq; 333 + u32 num_clk_freq; 334 + u32 *clk_freq; 392 335 u32 default_frame_rate; 393 336 u32 default_row; 394 337 u32 default_col; 395 338 bool dynamic_frame; 396 339 u32 err_threshold; 397 - struct sdw_dpn_prop *dpn_prop; 398 340 }; 399 341 400 342 int sdw_master_read_prop(struct sdw_bus *bus);
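The flow-mode constants added above are deliberately bit masks ("devices can have multiple capabilities"), and `SDW_PORT_FLOW_MODE_ASYNC` is literally the union of the tx- and rx-controlled bits. `BIT()`/`GENMASK()` are kernel macros; tiny userspace equivalents are defined below, and `port_supports()` is an illustrative helper, not driver API:

```c
/* minimal userspace stand-ins for the kernel's BIT()/GENMASK() */
#define BIT(n)		(1UL << (n))
#define GENMASK(h, l)	(((1UL << ((h) - (l) + 1)) - 1) << (l))

/* flow modes for a SoundWire port, as in the header above */
#define SDW_PORT_FLOW_MODE_ISOCH	0
#define SDW_PORT_FLOW_MODE_TX_CNTRL	BIT(0)
#define SDW_PORT_FLOW_MODE_RX_CNTRL	BIT(1)
#define SDW_PORT_FLOW_MODE_ASYNC	GENMASK(1, 0)

int port_supports(unsigned long caps, unsigned long mode)
{
	/* ISOCH is the all-bits-clear mode, hence always satisfied */
	return (caps & mode) == mode;
}
```

Because the modes are masks, a capability field can advertise tx-controlled, rx-controlled, and async support in one value.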
+11
include/linux/soundwire/sdw_type.h
··· 16 16 17 17 int sdw_slave_modalias(const struct sdw_slave *slave, char *buf, size_t size); 18 18 19 + /** 20 + * module_sdw_driver() - Helper macro for registering a SoundWire driver 21 + * @__sdw_driver: SoundWire slave driver struct 22 + * 23 + * Helper macro for SoundWire drivers which do not do anything special in 24 + * module init/exit. This eliminates a lot of boilerplate. Each module may only 25 + * use this macro once, and calling it replaces module_init() and module_exit(). 26 + */ 27 + #define module_sdw_driver(__sdw_driver) \ 28 + module_driver(__sdw_driver, sdw_register_driver, \ 29 + sdw_unregister_driver) 19 30 #endif /* __SOUNDWIRE_TYPES_H */
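To see what a `module_driver()`-style helper like `module_sdw_driver()` saves, here is a userspace analogue: the macro stamps out the init/exit pair that does nothing but register and unregister the driver. `drv_register()`/`drv_unregister()` are hypothetical stand-ins for `sdw_register_driver()`/`sdw_unregister_driver()`:

```c
struct drv {
	const char *name;
	int registered;
};

int drv_register(struct drv *d)
{
	d->registered = 1;
	return 0;
}

void drv_unregister(struct drv *d)
{
	d->registered = 0;
}

/* the boilerplate the macro replaces: one init and one exit function */
#define module_driver(__driver, __register, __unregister)		\
int __driver##_init(void) { return __register(&(__driver)); }		\
void __driver##_exit(void) { __unregister(&(__driver)); }

/* one macro invocation generates my_codec_init() and my_codec_exit() */
struct drv my_codec = { .name = "my_codec" };
module_driver(my_codec, drv_register, drv_unregister)
```

In the kernel the generated functions are additionally wired up via `module_init()`/`module_exit()`, which is why the macro may only be used once per module.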
+20 -21
include/linux/vmw_vmci_defs.h
··· 62 62 63 63 /* 64 64 * A single VMCI device has an upper limit of 128MB on the amount of 65 - * memory that can be used for queue pairs. 65 + * memory that can be used for queue pairs. Since each queue pair 66 + * consists of at least two pages, the memory limit also dictates the 67 + * number of queue pairs a guest can create. 66 68 */ 67 69 #define VMCI_MAX_GUEST_QP_MEMORY (128 * 1024 * 1024) 70 + #define VMCI_MAX_GUEST_QP_COUNT (VMCI_MAX_GUEST_QP_MEMORY / PAGE_SIZE / 2) 71 + 72 + /* 73 + * There can be at most PAGE_SIZE doorbells since there is one doorbell 74 + * per byte in the doorbell bitmap page. 75 + */ 76 + #define VMCI_MAX_GUEST_DOORBELL_COUNT PAGE_SIZE 68 77 69 78 /* 70 79 * Queues with pre-mapped data pages must be small, so that we don't pin ··· 439 430 struct vmci_queue_header { 440 431 /* All fields are 64bit and aligned. */ 441 432 struct vmci_handle handle; /* Identifier. */ 442 - atomic64_t producer_tail; /* Offset in this queue. */ 443 - atomic64_t consumer_head; /* Offset in peer queue. */ 433 + u64 producer_tail; /* Offset in this queue. */ 434 + u64 consumer_head; /* Offset in peer queue. */ 444 435 }; 445 436 446 437 /* ··· 741 732 * prefix will be used, so correctness isn't an issue, but using a 742 733 * 64bit operation still adds unnecessary overhead. 743 734 */ 744 - static inline u64 vmci_q_read_pointer(atomic64_t *var) 735 + static inline u64 vmci_q_read_pointer(u64 *var) 745 736 { 746 - #if defined(CONFIG_X86_32) 747 - return atomic_read((atomic_t *)var); 748 - #else 749 - return atomic64_read(var); 750 - #endif 737 + return READ_ONCE(*(unsigned long *)var); 751 738 } 752 739 753 740 /* ··· 752 747 * never exceeds a 32bit value in this case. On 32bit SMP, using a 753 748 * locked cmpxchg8b adds unnecessary overhead. 
754 749 */ 755 - static inline void vmci_q_set_pointer(atomic64_t *var, 756 - u64 new_val) 750 + static inline void vmci_q_set_pointer(u64 *var, u64 new_val) 757 751 { 758 - #if defined(CONFIG_X86_32) 759 - return atomic_set((atomic_t *)var, (u32)new_val); 760 - #else 761 - return atomic64_set(var, new_val); 762 - #endif 752 + /* XXX buggered on big-endian */ 753 + WRITE_ONCE(*(unsigned long *)var, (unsigned long)new_val); 763 754 } 764 755 765 756 /* 766 757 * Helper to add a given offset to a head or tail pointer. Wraps the 767 758 * value of the pointer around the max size of the queue. 768 759 */ 769 - static inline void vmci_qp_add_pointer(atomic64_t *var, 770 - size_t add, 771 - u64 size) 760 + static inline void vmci_qp_add_pointer(u64 *var, size_t add, u64 size) 772 761 { 773 762 u64 new_val = vmci_q_read_pointer(var); 774 763 ··· 839 840 const struct vmci_handle handle) 840 841 { 841 842 q_header->handle = handle; 842 - atomic64_set(&q_header->producer_tail, 0); 843 - atomic64_set(&q_header->consumer_head, 0); 843 + q_header->producer_tail = 0; 844 + q_header->consumer_head = 0; 844 845 } 845 846 846 847 /*
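Stripped of the concurrency annotations, the queue head/tail helpers above reduce to one operation: advance an offset and wrap it at the queue size. The kernel version uses `READ_ONCE()`/`WRITE_ONCE()` because producer and consumer update the pointers concurrently; single-threaded, the arithmetic of `vmci_qp_add_pointer()` is just:

```c
#include <stdint.h>
#include <stddef.h>

/* advance a VMCI queue head/tail offset, wrapping at the queue size */
uint64_t qp_add_pointer(uint64_t val, size_t add, uint64_t size)
{
	val += add;
	if (val >= size)
		val -= size;	/* wrap around the end of the ring */
	return val;
}
```

The single conditional subtraction is enough because a producer never advances by more than the queue size in one step.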
+29 -1
include/uapi/misc/habanalabs.h
··· 45 45 GOYA_QUEUE_ID_SIZE 46 46 }; 47 47 48 + /* 49 + * Engine Numbering 50 + * 51 + * Used in the "busy_engines_mask" field in `struct hl_info_hw_idle' 52 + */ 53 + 54 + enum goya_engine_id { 55 + GOYA_ENGINE_ID_DMA_0 = 0, 56 + GOYA_ENGINE_ID_DMA_1, 57 + GOYA_ENGINE_ID_DMA_2, 58 + GOYA_ENGINE_ID_DMA_3, 59 + GOYA_ENGINE_ID_DMA_4, 60 + GOYA_ENGINE_ID_MME_0, 61 + GOYA_ENGINE_ID_TPC_0, 62 + GOYA_ENGINE_ID_TPC_1, 63 + GOYA_ENGINE_ID_TPC_2, 64 + GOYA_ENGINE_ID_TPC_3, 65 + GOYA_ENGINE_ID_TPC_4, 66 + GOYA_ENGINE_ID_TPC_5, 67 + GOYA_ENGINE_ID_TPC_6, 68 + GOYA_ENGINE_ID_TPC_7, 69 + GOYA_ENGINE_ID_SIZE 70 + }; 71 + 48 72 enum hl_device_status { 49 73 HL_DEVICE_STATUS_OPERATIONAL, 50 74 HL_DEVICE_STATUS_IN_RESET, ··· 110 86 111 87 struct hl_info_hw_idle { 112 88 __u32 is_idle; 113 - __u32 pad; 89 + /* 90 + * Bitmask of busy engines. 91 + * Bit definitions are according to `enum <chip>_engine_id'. 92 + */ 93 + __u32 busy_engines_mask; 114 94 115 95 116 96 struct hl_info_device_status {
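The uAPI convention above is that `busy_engines_mask` has bit N set when engine N is busy, with N taken from the per-chip engine enum. A standalone sketch of that decoding (the enum values mirror the header above; `engine_is_busy()` is illustrative, not part of the uAPI):

```c
#include <stdint.h>

/* engine numbering as in the goya uAPI header above */
enum goya_engine_id {
	GOYA_ENGINE_ID_DMA_0 = 0,
	GOYA_ENGINE_ID_DMA_1,
	GOYA_ENGINE_ID_DMA_2,
	GOYA_ENGINE_ID_DMA_3,
	GOYA_ENGINE_ID_DMA_4,
	GOYA_ENGINE_ID_MME_0,
	GOYA_ENGINE_ID_TPC_0,
	GOYA_ENGINE_ID_TPC_1,
	GOYA_ENGINE_ID_TPC_2,
	GOYA_ENGINE_ID_TPC_3,
	GOYA_ENGINE_ID_TPC_4,
	GOYA_ENGINE_ID_TPC_5,
	GOYA_ENGINE_ID_TPC_6,
	GOYA_ENGINE_ID_TPC_7,
	GOYA_ENGINE_ID_SIZE
};

/* bit N of busy_engines_mask reports engine N; a zero mask means idle */
int engine_is_busy(uint32_t busy_engines_mask, enum goya_engine_id id)
{
	return (busy_engines_mask >> id) & 1u;
}
```

A userspace tool that receives `struct hl_info_hw_idle` can thus report exactly which engines kept the device from being idle, instead of only the old boolean.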
+45 -56
lib/fonts/fonts.c
··· 20 20 #endif 21 21 #include <linux/font.h> 22 22 23 - #define NO_FONTS 24 - 25 23 static const struct font_desc *fonts[] = { 26 24 #ifdef CONFIG_FONT_8x8 27 - #undef NO_FONTS 28 - &font_vga_8x8, 25 + &font_vga_8x8, 29 26 #endif 30 27 #ifdef CONFIG_FONT_8x16 31 - #undef NO_FONTS 32 - &font_vga_8x16, 28 + &font_vga_8x16, 33 29 #endif 34 30 #ifdef CONFIG_FONT_6x11 35 - #undef NO_FONTS 36 - &font_vga_6x11, 31 + &font_vga_6x11, 37 32 #endif 38 33 #ifdef CONFIG_FONT_7x14 39 - #undef NO_FONTS 40 - &font_7x14, 34 + &font_7x14, 41 35 #endif 42 36 #ifdef CONFIG_FONT_SUN8x16 43 - #undef NO_FONTS 44 - &font_sun_8x16, 37 + &font_sun_8x16, 45 38 #endif 46 39 #ifdef CONFIG_FONT_SUN12x22 47 - #undef NO_FONTS 48 - &font_sun_12x22, 40 + &font_sun_12x22, 49 41 #endif 50 42 #ifdef CONFIG_FONT_10x18 51 - #undef NO_FONTS 52 - &font_10x18, 43 + &font_10x18, 53 44 #endif 54 45 #ifdef CONFIG_FONT_ACORN_8x8 55 - #undef NO_FONTS 56 - &font_acorn_8x8, 46 + &font_acorn_8x8, 57 47 #endif 58 48 #ifdef CONFIG_FONT_PEARL_8x8 59 - #undef NO_FONTS 60 - &font_pearl_8x8, 49 + &font_pearl_8x8, 61 50 #endif 62 51 #ifdef CONFIG_FONT_MINI_4x6 63 - #undef NO_FONTS 64 - &font_mini_4x6, 52 + &font_mini_4x6, 65 53 #endif 66 54 #ifdef CONFIG_FONT_6x10 67 - #undef NO_FONTS 68 - &font_6x10, 55 + &font_6x10, 69 56 #endif 70 57 #ifdef CONFIG_FONT_TER16x32 71 - #undef NO_FONTS 72 - &font_ter_16x32, 58 + &font_ter_16x32, 73 59 #endif 74 60 }; 75 61 ··· 76 90 * specified font. 77 91 * 78 92 */ 79 - 80 93 const struct font_desc *find_font(const char *name) 81 94 { 82 - unsigned int i; 95 + unsigned int i; 83 96 84 - for (i = 0; i < num_fonts; i++) 85 - if (!strcmp(fonts[i]->name, name)) 86 - return fonts[i]; 87 - return NULL; 97 + BUILD_BUG_ON(!num_fonts); 98 + for (i = 0; i < num_fonts; i++) 99 + if (!strcmp(fonts[i]->name, name)) 100 + return fonts[i]; 101 + return NULL; 88 102 } 103 + EXPORT_SYMBOL(find_font); 89 104 90 105 91 106 /** ··· 103 116 * chosen font. 
104 117 * 105 118 */ 106 - 107 119 const struct font_desc *get_default_font(int xres, int yres, u32 font_w, 108 120 u32 font_h) 109 121 { 110 - int i, c, cc; 111 - const struct font_desc *f, *g; 122 + int i, c, cc, res; 123 + const struct font_desc *f, *g; 112 124 113 - g = NULL; 114 - cc = -10000; 115 - for(i=0; i<num_fonts; i++) { 116 - f = fonts[i]; 117 - c = f->pref; 125 + g = NULL; 126 + cc = -10000; 127 + for (i = 0; i < num_fonts; i++) { 128 + f = fonts[i]; 129 + c = f->pref; 118 130 #if defined(__mc68000__) 119 131 #ifdef CONFIG_FONT_PEARL_8x8 120 - if (MACH_IS_AMIGA && f->idx == PEARL8x8_IDX) 121 - c = 100; 132 + if (MACH_IS_AMIGA && f->idx == PEARL8x8_IDX) 133 + c = 100; 122 134 #endif 123 135 #ifdef CONFIG_FONT_6x11 124 - if (MACH_IS_MAC && xres < 640 && f->idx == VGA6x11_IDX) 125 - c = 100; 136 + if (MACH_IS_MAC && xres < 640 && f->idx == VGA6x11_IDX) 137 + c = 100; 126 138 #endif 127 139 #endif 128 - if ((yres < 400) == (f->height <= 8)) 129 - c += 1000; 140 + if ((yres < 400) == (f->height <= 8)) 141 + c += 1000; 130 142 131 - if ((font_w & (1 << (f->width - 1))) && 132 - (font_h & (1 << (f->height - 1)))) 133 - c += 1000; 143 + /* prefer a bigger font for high resolution */ 144 + res = (xres / f->width) * (yres / f->height) / 1000; 145 + if (res > 20) 146 + c += 20 - res; 134 147 135 - if (c > cc) { 136 - cc = c; 137 - g = f; 148 + if ((font_w & (1 << (f->width - 1))) && 149 + (font_h & (1 << (f->height - 1)))) 150 + c += 1000; 151 + 152 + if (c > cc) { 153 + cc = c; 154 + g = f; 155 + } 138 156 } 139 - } 140 - return g; 157 + return g; 141 158 } 142 - 143 - EXPORT_SYMBOL(find_font); 144 159 EXPORT_SYMBOL(get_default_font); 145 160 146 161 MODULE_AUTHOR("James Simmons <jsimmons@users.sf.net>");
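The interesting new term in `get_default_font()` above penalises fonts that would produce more than roughly 20000 character cells at the given resolution, so large displays stop defaulting to tiny fonts. Reproduced stand-alone below; the `pref` weight, m68k machine quirks, and the caller's width/height masks are omitted from this sketch:

```c
/* simplified scoring from get_default_font(): higher score wins */
int font_score(int xres, int yres, int fw, int fh)
{
	int c = 0;
	int res;

	/* small screens pair with fonts no taller than 8 pixels */
	if ((yres < 400) == (fh <= 8))
		c += 1000;

	/* prefer a bigger font for high resolution */
	res = (xres / fw) * (yres / fh) / 1000;
	if (res > 20)
		c += 20 - res;

	return c;
}
```

At 1920x1080 a 4x6 font yields 86 thousand cells and is penalised by 66 points, while a 16x32 font stays under the threshold and keeps its small-screen/large-font bonus.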
+106 -38
mm/balloon_compaction.c
··· 11 11 #include <linux/export.h> 12 12 #include <linux/balloon_compaction.h> 13 13 14 + static void balloon_page_enqueue_one(struct balloon_dev_info *b_dev_info, 15 + struct page *page) 16 + { 17 + /* 18 + * Block others from accessing the 'page' when we get around to 19 + * establishing additional references. We should be the only one 20 + * holding a reference to the 'page' at this point. If we are not, then 21 + * memory corruption is possible and we should stop execution. 22 + */ 23 + BUG_ON(!trylock_page(page)); 24 + list_del(&page->lru); 25 + balloon_page_insert(b_dev_info, page); 26 + unlock_page(page); 27 + __count_vm_event(BALLOON_INFLATE); 28 + } 29 + 30 + /** 31 + * balloon_page_list_enqueue() - inserts a list of pages into the balloon page 32 + * list. 33 + * @b_dev_info: balloon device descriptor where we will insert a new page to 34 + * @pages: pages to enqueue - allocated using balloon_page_alloc. 35 + * 36 + * Driver must call it to properly enqueue a balloon pages before definitively 37 + * removing it from the guest system. 38 + * 39 + * Return: number of pages that were enqueued. 40 + */ 41 + size_t balloon_page_list_enqueue(struct balloon_dev_info *b_dev_info, 42 + struct list_head *pages) 43 + { 44 + struct page *page, *tmp; 45 + unsigned long flags; 46 + size_t n_pages = 0; 47 + 48 + spin_lock_irqsave(&b_dev_info->pages_lock, flags); 49 + list_for_each_entry_safe(page, tmp, pages, lru) { 50 + balloon_page_enqueue_one(b_dev_info, page); 51 + n_pages++; 52 + } 53 + spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); 54 + return n_pages; 55 + } 56 + EXPORT_SYMBOL_GPL(balloon_page_list_enqueue); 57 + 58 + /** 59 + * balloon_page_list_dequeue() - removes pages from balloon's page list and 60 + * returns a list of the pages. 61 + * @b_dev_info: balloon device decriptor where we will grab a page from. 62 + * @pages: pointer to the list of pages that would be returned to the caller. 63 + * @n_req_pages: number of requested pages. 
64 + * 65 + * Driver must call this function to properly de-allocate previously enlisted 66 + * balloon pages before definitively releasing them back to the guest system. 67 + * This function tries to remove @n_req_pages from the ballooned pages and 68 + * return them to the caller in the @pages list. 69 + * 70 + * Note that this function may temporarily fail to dequeue some pages if they 71 + * are isolated for compaction. 72 + * 73 + * Return: number of pages that were added to the @pages list. 74 + */ 75 + size_t balloon_page_list_dequeue(struct balloon_dev_info *b_dev_info, 76 + struct list_head *pages, size_t n_req_pages) 77 + { 78 + struct page *page, *tmp; 79 + unsigned long flags; 80 + size_t n_pages = 0; 81 + 82 + spin_lock_irqsave(&b_dev_info->pages_lock, flags); 83 + list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) { 84 + if (n_pages == n_req_pages) 85 + break; 86 + 87 + /* 88 + * Block others from accessing the 'page' while we get around to 89 + * establishing additional references and preparing the 'page' 90 + * to be released by the balloon driver. 91 + */ 92 + if (!trylock_page(page)) 93 + continue; 94 + 95 + if (IS_ENABLED(CONFIG_BALLOON_COMPACTION) && 96 + PageIsolated(page)) { 97 + /* raced with isolation */ 98 + unlock_page(page); 99 + continue; 100 + } 101 + balloon_page_delete(page); 102 + __count_vm_event(BALLOON_DEFLATE); 103 + list_add(&page->lru, pages); 104 + unlock_page(page); 105 + n_pages++; 106 + } 107 + spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); 108 + 109 + return n_pages; 110 + } 111 + EXPORT_SYMBOL_GPL(balloon_page_list_dequeue); 112 + 14 113 /* 15 114 * balloon_page_alloc - allocates a new page for insertion into the balloon 16 115 * page list. ··· 143 44 { 144 45 unsigned long flags; 145 46 146 - /* 147 - * Block others from accessing the 'page' when we get around to 148 - * establishing additional references. We should be the only one 149 - * holding a reference to the 'page' at this point. 
150 - */ 151 - BUG_ON(!trylock_page(page)); 152 47 spin_lock_irqsave(&b_dev_info->pages_lock, flags); 153 - balloon_page_insert(b_dev_info, page); 154 - __count_vm_event(BALLOON_INFLATE); 48 + balloon_page_enqueue_one(b_dev_info, page); 155 49 spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); 156 - unlock_page(page); 157 50 } 158 51 EXPORT_SYMBOL_GPL(balloon_page_enqueue); 159 52 ··· 162 71 */ 163 72 struct page *balloon_page_dequeue(struct balloon_dev_info *b_dev_info) 164 73 { 165 - struct page *page, *tmp; 166 74 unsigned long flags; 167 - bool dequeued_page; 75 + LIST_HEAD(pages); 76 + int n_pages; 168 77 169 - dequeued_page = false; 170 - spin_lock_irqsave(&b_dev_info->pages_lock, flags); 171 - list_for_each_entry_safe(page, tmp, &b_dev_info->pages, lru) { 172 - /* 173 - * Block others from accessing the 'page' while we get around 174 - * establishing additional references and preparing the 'page' 175 - * to be released by the balloon driver. 176 - */ 177 - if (trylock_page(page)) { 178 - #ifdef CONFIG_BALLOON_COMPACTION 179 - if (PageIsolated(page)) { 180 - /* raced with isolation */ 181 - unlock_page(page); 182 - continue; 183 - } 184 - #endif 185 - balloon_page_delete(page); 186 - __count_vm_event(BALLOON_DEFLATE); 187 - unlock_page(page); 188 - dequeued_page = true; 189 - break; 190 - } 191 - } 192 - spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); 78 + n_pages = balloon_page_list_dequeue(b_dev_info, &pages, 1); 193 79 194 - if (!dequeued_page) { 80 + if (n_pages != 1) { 195 81 /* 196 82 * If we are unable to dequeue a balloon page because the page 197 83 * list is empty and there is no isolated pages, then something ··· 181 113 !b_dev_info->isolated_pages)) 182 114 BUG(); 183 115 spin_unlock_irqrestore(&b_dev_info->pages_lock, flags); 184 - page = NULL; 116 + return NULL; 185 117 } 186 - return page; 118 + return list_first_entry(&pages, struct page, lru); 187 119 } 188 120 EXPORT_SYMBOL_GPL(balloon_page_dequeue); 189 121
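The shape of the `balloon_page_list_dequeue()` walk above can be modelled without any kernel dependencies: skip entries whose trylock fails or that compaction has isolated, move up to the requested number of entries to the caller, and report how many actually moved. The flags here are illustrative stand-ins for `trylock_page()` and `PageIsolated()`, and indices stand in for `list_head` membership:

```c
#include <stddef.h>

struct fake_page {
	int locked;	/* trylock_page() would fail */
	int isolated;	/* raced with compaction isolation */
	int dequeued;	/* already handed back to the caller */
};

/* move up to n_req eligible pages out of the ballooned set */
size_t list_dequeue(struct fake_page *pages, size_t n,
		    size_t out_idx[], size_t n_req)
{
	size_t i, n_out = 0;

	for (i = 0; i < n && n_out < n_req; i++) {
		if (pages[i].dequeued || pages[i].locked || pages[i].isolated)
			continue;
		pages[i].dequeued = 1;
		out_idx[n_out++] = i;
	}
	return n_out;
}
```

As in the kernel version, a short return count is not an error: locked or isolated pages are simply skipped this round, which is exactly why `balloon_page_dequeue()` double-checks the empty case before declaring a BUG.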