Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
"Here is the big set of char/misc drivers for 4.19-rc1

There is a lot here, much more than normal, seems like everyone is
writing new driver subsystems these days... Anyway, major things here
are:

- new FSI driver subsystem, yet-another-powerpc low-level hardware
bus

- gnss, finally an in-kernel GPS subsystem to try to tame all of the
crazy out-of-tree drivers that have been floating around for years,
combined with some really hacky userspace implementations. This is
only for GNSS receivers, but you have to start somewhere, and this
is great to see.

Other than that, there are new slimbus drivers, new coresight drivers,
new fpga drivers, and loads of DT bindings for all of these and
existing drivers.

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-4.19-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (255 commits)
android: binder: Rate-limit debug and userspace triggered err msgs
fsi: sbefifo: Bump max command length
fsi: scom: Fix NULL dereference
misc: mic: SCIF Fix scif_get_new_port() error handling
misc: cxl: changed asterisk position
genwqe: card_base: Use true and false for boolean values
misc: eeprom: assignment outside the if statement
uio: potential double frees if __uio_register_device() fails
eeprom: idt_89hpesx: clean up an error pointer vs NULL inconsistency
misc: ti-st: Fix memory leak in the error path of probe()
android: binder: Show extra_buffers_size in trace
firmware: vpd: Fix section enabled flag on vpd_section_destroy
platform: goldfish: Retire pdev_bus
goldfish: Use dedicated macros instead of manual bit shifting
goldfish: Add missing includes to goldfish.h
mux: adgs1408: new driver for Analog Devices ADGS1408/1409 mux
dt-bindings: mux: add adi,adgs1408
Drivers: hv: vmbus: Cleanup synic memory free path
Drivers: hv: vmbus: Remove use of slow_virt_to_phys()
Drivers: hv: vmbus: Reset the channel callback in vmbus_onoffer_rescind()
...

+18161 -1578
+7
Documentation/ABI/stable/sysfs-bus-vmbus
··· 42 42 Description: The 16 bit vendor ID of the device 43 43 Users: tools/hv/lsvmbus and user level RDMA libraries 44 44 45 + What: /sys/bus/vmbus/devices/<UUID>/numa_node 46 + Date: Jul 2018 47 + KernelVersion: 4.19 48 + Contact: Stephen Hemminger <sthemmin@microsoft.com> 49 + Description: The NUMA node to which the VMBUS device is 50 + attached, or -1 if the node is unknown. 51 + 45 52 What: /sys/bus/vmbus/devices/<UUID>/channels/<N> 46 53 Date: September. 2017 47 54 KernelVersion: 4.14
+8
Documentation/ABI/testing/sysfs-bus-coresight-devices-tmc
··· 83 83 Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 84 84 Description: (R) Indicates the capabilities of the Coresight TMC. 85 85 The value is read directly from the DEVID register, 0xFC8, 86 + 87 + What: /sys/bus/coresight/devices/<memory_map>.tmc/buffer_size 88 + Date: December 2018 89 + KernelVersion: 4.19 90 + Contact: Mathieu Poirier <mathieu.poirier@linaro.org> 91 + Description: (RW) Size of the trace buffer for TMC-ETR when used in SYSFS 92 + mode. Writable only for TMC-ETR configurations. The value 93 + should be aligned to the kernel pagesize.
+24
Documentation/ABI/testing/sysfs-class-fpga-manager
··· 35 35 * write complete = Doing post programming steps 36 36 * write complete error = Error while doing post programming 37 37 * operating = FPGA is programmed and operating 38 + 39 + What: /sys/class/fpga_manager/<fpga>/status 40 + Date: June 2018 41 + KernelVersion: 4.19 42 + Contact: Wu Hao <hao.wu@intel.com> 43 + Description: Read fpga manager status as a string. 44 + If FPGA programming operation fails, it could be caused by crc 45 + error or incompatible bitstream image. The intent of this 46 + interface is to provide more detailed information for FPGA 47 + programming errors to userspace. This is a list of strings for 48 + the supported status. 49 + 50 + * reconfig operation error - invalid operations detected by 51 + reconfiguration hardware. 52 + e.g. start reconfiguration 53 + with errors not cleared 54 + * reconfig CRC error - CRC error detected by 55 + reconfiguration hardware. 56 + * reconfig incompatible image - reconfiguration image is 57 + incompatible with hardware 58 + * reconfig IP protocol error - protocol errors detected by 59 + reconfiguration hardware 60 + * reconfig fifo overflow error - FIFO overflow detected by 61 + reconfiguration hardware
+9
Documentation/ABI/testing/sysfs-class-fpga-region
··· 1 + What: /sys/class/fpga_region/<region>/compat_id 2 + Date: June 2018 3 + KernelVersion: 4.19 4 + Contact: Wu Hao <hao.wu@intel.com> 5 + Description: FPGA region id for compatibility check, e.g. compatibility 6 + of the FPGA reconfiguration hardware and image. This value 7 + is defined or calculated by the layer that is creating the 8 + FPGA region. This interface returns the compat_id value or 9 + just error code -ENOENT in case compat_id is not used.
+15
Documentation/ABI/testing/sysfs-class-gnss
··· 1 + What: /sys/class/gnss/gnssN/type 2 + Date: May 2018 3 + KernelVersion: 4.18 4 + Contact: Johan Hovold <johan@kernel.org> 5 + Description: 6 + The GNSS receiver type. The currently identified types reflect 7 + the protocol(s) supported by the receiver: 8 + 9 + "NMEA" NMEA 0183 10 + "SiRF" SiRF Binary 11 + "UBX" UBX 12 + 13 + Note that non-"NMEA" type receivers also typically support a 14 + subset of NMEA 0183 with vendor extensions (e.g. to allow 15 + switching to a vendor protocol).
+11
Documentation/ABI/testing/sysfs-class-mei
··· 54 54 55 55 Set maximal number of pending writes 56 56 per opened session. 57 + 58 + What: /sys/class/mei/meiN/fw_ver 59 + Date: May 2018 60 + KernelVersion: 4.18 61 + Contact: Tomas Winkler <tomas.winkler@intel.com> 62 + Description: Display the ME firmware version. 63 + 64 + The version of the platform ME firmware is in format: 65 + <platform>:<major>.<minor>.<milestone>.<build_no>. 66 + There can be up to three such blocks for different 67 + FW components.
+23
Documentation/ABI/testing/sysfs-platform-dfl-fme
··· 1 + What: /sys/bus/platform/devices/dfl-fme.0/ports_num 2 + Date: June 2018 3 + KernelVersion: 4.19 4 + Contact: Wu Hao <hao.wu@intel.com> 5 + Description: Read-only. One DFL FPGA device may have more than 1 6 + port/Accelerator Function Unit (AFU). It returns the 7 + number of ports on the FPGA device when read. 8 + 9 + What: /sys/bus/platform/devices/dfl-fme.0/bitstream_id 10 + Date: June 2018 11 + KernelVersion: 4.19 12 + Contact: Wu Hao <hao.wu@intel.com> 13 + Description: Read-only. It returns the Bitstream (static FPGA region) 14 + identifier number, which includes the detailed version 15 + and other information of this static FPGA region. 16 + 17 + What: /sys/bus/platform/devices/dfl-fme.0/bitstream_metadata 18 + Date: June 2018 19 + KernelVersion: 4.19 20 + Contact: Wu Hao <hao.wu@intel.com> 21 + Description: Read-only. It returns the Bitstream (static FPGA region) meta 22 + data, which includes the synthesis date, seed and other 23 + information of this static FPGA region.
+16
Documentation/ABI/testing/sysfs-platform-dfl-port
··· 1 + What: /sys/bus/platform/devices/dfl-port.0/id 2 + Date: June 2018 3 + KernelVersion: 4.19 4 + Contact: Wu Hao <hao.wu@intel.com> 5 + Description: Read-only. It returns the id of this port. One DFL FPGA device 6 + may have more than one port. Userspace could use this id to 7 + distinguish different ports under the same FPGA device. 8 + 9 + What: /sys/bus/platform/devices/dfl-port.0/afu_id 10 + Date: June 2018 11 + KernelVersion: 4.19 12 + Contact: Wu Hao <hao.wu@intel.com> 13 + Description: Read-only. User can program different PR bitstreams to an FPGA 14 + Accelerator Function Unit (AFU) for different functions. It 15 + returns the uuid which could be used to identify which PR bitstream 16 + is programmed in this AFU.
+57 -1
Documentation/devicetree/bindings/arm/coresight.txt
··· 39 39 40 40 - System Trace Macrocell: 41 41 "arm,coresight-stm", "arm,primecell"; [1] 42 + - Coresight Address Translation Unit (CATU) 43 + "arm,coresight-catu", "arm,primecell"; 42 44 43 45 * reg: physical base address and length of the register 44 46 set(s) of the component. ··· 86 84 * Optional property for TMC: 87 85 88 86 * arm,buffer-size: size of contiguous buffer space for TMC ETR 89 - (embedded trace router) 87 + (embedded trace router). This property is obsolete. The buffer size 88 + can be configured dynamically via buffer_size property in sysfs. 90 89 90 + * arm,scatter-gather: boolean. Indicates that the TMC-ETR can safely 91 + use the SG mode on this system. 92 + 93 + * Optional property for CATU : 94 + * interrupts : Exactly one SPI may be listed for reporting the address 95 + error 91 96 92 97 Example: 93 98 ··· 123 114 tpiu_in_port: endpoint@0 { 124 115 slave-mode; 125 116 remote-endpoint = <&replicator_out_port1>; 117 + }; 118 + }; 119 + }; 120 + 121 + etr@20070000 { 122 + compatible = "arm,coresight-tmc", "arm,primecell"; 123 + reg = <0 0x20070000 0 0x1000>; 124 + 125 + clocks = <&oscclk6a>; 126 + clock-names = "apb_pclk"; 127 + ports { 128 + #address-cells = <1>; 129 + #size-cells = <0>; 130 + 131 + /* input port */ 132 + port@0 { 133 + reg = <0>; 134 + etr_in_port: endpoint { 135 + slave-mode; 136 + remote-endpoint = <&replicator2_out_port0>; 137 + }; 138 + }; 139 + 140 + /* CATU link represented by output port */ 141 + port@1 { 142 + reg = <1>; 143 + etr_out_port: endpoint { 144 + remote-endpoint = <&catu_in_port>; 145 + }; 126 146 }; 127 147 }; 128 148 }; ··· 281 243 port { 282 244 stm_out_port: endpoint { 283 245 remote-endpoint = <&main_funnel_in_port2>; 246 + }; 247 + }; 248 + }; 249 + 250 + 5. CATU 251 + 252 + catu@207e0000 { 253 + compatible = "arm,coresight-catu", "arm,primecell"; 254 + reg = <0 0x207e0000 0 0x1000>; 255 + 256 + clocks = <&oscclk6a>; 257 + clock-names = "apb_pclk"; 258 + 259 + interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>; 260 + port { 261 + catu_in_port: endpoint { 262 + slave-mode; 263 + remote-endpoint = <&etr_out_port>; 284 264 }; 285 265 }; 286 266 };
+36
Documentation/devicetree/bindings/fsi/fsi-master-ast-cf.txt
··· 1 + Device-tree bindings for ColdFire offloaded gpio-based FSI master driver 2 + ------------------------------------------------------------------------ 3 + 4 + Required properties: 5 + - compatible = 6 + "aspeed,ast2400-cf-fsi-master" for an AST2400 based system 7 + or 8 + "aspeed,ast2500-cf-fsi-master" for an AST2500 based system 9 + 10 + - clock-gpios = <gpio-descriptor>; : GPIO for FSI clock 11 + - data-gpios = <gpio-descriptor>; : GPIO for FSI data signal 12 + - enable-gpios = <gpio-descriptor>; : GPIO for enable signal 13 + - trans-gpios = <gpio-descriptor>; : GPIO for voltage translator enable 14 + - mux-gpios = <gpio-descriptor>; : GPIO for pin multiplexing with other 15 + functions (eg, external FSI masters) 16 + - memory-region = <phandle>; : Reference to the reserved memory for 17 + the ColdFire. Must be 2M aligned on 18 + AST2400 and 1M aligned on AST2500 19 + - aspeed,sram = <phandle>; : Reference to the SRAM node. 20 + - aspeed,cvic = <phandle>; : Reference to the CVIC node. 21 + 22 + Examples: 23 + 24 + fsi-master { 25 + compatible = "aspeed,ast2500-cf-fsi-master", "fsi-master"; 26 + 27 + clock-gpios = <&gpio 0>; 28 + data-gpios = <&gpio 1>; 29 + enable-gpios = <&gpio 2>; 30 + trans-gpios = <&gpio 3>; 31 + mux-gpios = <&gpio 4>; 32 + 33 + memory-region = <&coldfire_memory>; 34 + aspeed,sram = <&sram>; 35 + aspeed,cvic = <&cvic>; 36 + }
+5
Documentation/devicetree/bindings/fsi/fsi.txt
··· 83 83 #address-cells = <1>; 84 84 #size-cells = <1>; 85 85 86 + Optionally, a slave can provide a global unique chip ID which is used to 87 + identify the physical location of the chip in a system specific way 88 + 89 + chip-id = <0>; 86 90 87 91 FSI engines (devices) 88 92 --------------------- ··· 129 125 reg = <0 0>; 130 126 #address-cells = <1>; 131 127 #size-cells = <1>; 128 + chip-id = <0>; 132 129 133 130 /* FSI engine at 0xc00, using a single page. In this example, 134 131 * it's an I2C master controller, so subnodes describe the
+36
Documentation/devicetree/bindings/gnss/gnss.txt
··· 1 + GNSS Receiver DT binding 2 + 3 + This documents the binding structure and common properties for GNSS receiver 4 + devices. 5 + 6 + A GNSS receiver node is a node named "gnss" and typically resides on a serial 7 + bus (e.g. UART, I2C or SPI). 8 + 9 + Please refer to the following documents for generic properties: 10 + 11 + Documentation/devicetree/bindings/serial/slave-device.txt 12 + Documentation/devicetree/bindings/spi/spi-bus.txt 13 + 14 + Required properties: 15 + 16 + - compatible : A string reflecting the vendor and specific device the node 17 + represents 18 + 19 + Optional properties: 20 + - enable-gpios : GPIO used to enable the device 21 + - timepulse-gpios : Time pulse GPIO 22 + 23 + Example: 24 + 25 + serial@1234 { 26 + compatible = "ns16550a"; 27 + 28 + gnss { 29 + compatible = "u-blox,neo-8"; 30 + 31 + vcc-supply = <&gnss_reg>; 32 + timepulse-gpios = <&gpio0 16 GPIO_ACTIVE_HIGH>; 33 + 34 + current-speed = <4800>; 35 + }; 36 + };
+45
Documentation/devicetree/bindings/gnss/sirfstar.txt
··· 1 + SiRFstar-based GNSS Receiver DT binding 2 + 3 + SiRFstar chipsets are used in GNSS-receiver modules produced by several 4 + vendors and can use UART, SPI or I2C interfaces. 5 + 6 + Please see Documentation/devicetree/bindings/gnss/gnss.txt for generic 7 + properties. 8 + 9 + Required properties: 10 + 11 + - compatible : Must be one of 12 + 13 + "fastrax,uc430" 14 + "linx,r4" 15 + "wi2wi,w2sg0008i" 16 + "wi2wi,w2sg0084i" 17 + 18 + - vcc-supply : Main voltage regulator (pin name: 3V3_IN, VCC, VDD) 19 + 20 + Required properties (I2C): 21 + - reg : I2C slave address 22 + 23 + Required properties (SPI): 24 + - reg : SPI chip select address 25 + 26 + Optional properties: 27 + 28 + - sirf,onoff-gpios : GPIO used to power on and off device (pin name: ON_OFF) 29 + - sirf,wakeup-gpios : GPIO used to determine device power state 30 + (pin name: RFPWRUP, WAKEUP) 31 + - timepulse-gpios : Time pulse GPIO (pin name: 1PPS, TM) 32 + 33 + Example: 34 + 35 + serial@1234 { 36 + compatible = "ns16550a"; 37 + 38 + gnss { 39 + compatible = "wi2wi,w2sg0084i"; 40 + 41 + vcc-supply = <&gnss_reg>; 42 + sirf,onoff-gpios = <&gpio0 16 GPIO_ACTIVE_HIGH>; 43 + sirf,wakeup-gpios = <&gpio0 17 GPIO_ACTIVE_HIGH>; 44 + }; 45 + };
+44
Documentation/devicetree/bindings/gnss/u-blox.txt
··· 1 + u-blox GNSS Receiver DT binding 2 + 3 + The u-blox GNSS receivers can use UART, DDC (I2C), SPI and USB interfaces. 4 + 5 + Please see Documentation/devicetree/bindings/gnss/gnss.txt for generic 6 + properties. 7 + 8 + Required properties: 9 + 10 + - compatible : Must be one of 11 + 12 + "u-blox,neo-8" 13 + "u-blox,neo-m8" 14 + 15 + - vcc-supply : Main voltage regulator 16 + 17 + Required properties (DDC): 18 + - reg : DDC (I2C) slave address 19 + 20 + Required properties (SPI): 21 + - reg : SPI chip select address 22 + 23 + Required properties (USB): 24 + - reg : Number of the USB hub port or the USB host-controller port 25 + to which this device is attached 26 + 27 + Optional properties: 28 + 29 + - timepulse-gpios : Time pulse GPIO 30 + - u-blox,extint-gpios : GPIO connected to the "external interrupt" input pin 31 + - v-bckp-supply : Backup voltage regulator 32 + 33 + Example: 34 + 35 + serial@1234 { 36 + compatible = "ns16550a"; 37 + 38 + gnss { 39 + compatible = "u-blox,neo-8"; 40 + 41 + v-bckp-supply = <&gnss_v_bckp_reg>; 42 + vcc-supply = <&gnss_vcc_reg>; 43 + }; 44 + };
+48
Documentation/devicetree/bindings/mux/adi,adgs1408.txt
··· 1 + Bindings for Analog Devices ADGS1408/1409 8:1/Dual 4:1 Mux 2 + 3 + Required properties: 4 + - compatible : Should be one of 5 + * "adi,adgs1408" 6 + * "adi,adgs1409" 7 + * Standard mux-controller bindings as described in mux-controller.txt 8 + 9 + Optional properties for ADGS1408/1409: 10 + - gpio-controller : if present, #gpio-cells is required. 11 + - #gpio-cells : should be <2> 12 + - First cell is the GPO line number, i.e. 0 to 3 13 + for ADGS1408 and 0 to 4 for ADGS1409 14 + - Second cell is used to specify active high (0) 15 + or active low (1) 16 + 17 + Optional properties: 18 + - idle-state : if present, the state that the mux controller will have 19 + when idle. The special state MUX_IDLE_AS_IS is the default and 20 + MUX_IDLE_DISCONNECT is also supported. 21 + 22 + States 0 through 7 correspond to signals S1 through S8 in the datasheet. 23 + For ADGS1409 only states 0 to 3 are available. 24 + 25 + Example: 26 + 27 + /* 28 + * One mux controller. 29 + * Mux state set to idle as is (no idle-state declared) 30 + */ 31 + &spi0 { 32 + mux: mux-controller@0 { 33 + compatible = "adi,adgs1408"; 34 + reg = <0>; 35 + spi-max-frequency = <1000000>; 36 + #mux-control-cells = <0>; 37 + }; 38 + } 39 + 40 + adc-mux { 41 + compatible = "io-channel-mux"; 42 + io-channels = <&adc 1>; 43 + io-channel-names = "parent"; 44 + mux-controls = <&mux>; 45 + 46 + channels = "out_a0", "out_a1", "test0", "test1", 47 + "out_b0", "out_b1", "testb0", "testb1"; 48 + };
+2 -1
Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
··· 1 1 Freescale i.MX6 On-Chip OTP Controller (OCOTP) device tree bindings 2 2 3 3 This binding represents the on-chip eFuse OTP controller found on 4 - i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX and i.MX6UL SoCs. 4 + i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL and i.MX6SLL SoCs. 5 5 6 6 Required properties: 7 7 - compatible: should be one of ··· 10 10 "fsl,imx6sx-ocotp" (i.MX6SX), 11 11 "fsl,imx6ul-ocotp" (i.MX6UL), 12 12 "fsl,imx7d-ocotp" (i.MX7D/S), 13 + "fsl,imx6sll-ocotp" (i.MX6SLL), 13 14 followed by "syscon". 14 15 - #address-cells : Should be 1 15 16 - #size-cells : Should be 1
+52
Documentation/devicetree/bindings/nvmem/sc27xx-efuse.txt
··· 1 + = Spreadtrum SC27XX PMIC eFuse device tree bindings = 2 + 3 + Required properties: 4 + - compatible: Should be one of the following. 5 + "sprd,sc2720-efuse" 6 + "sprd,sc2721-efuse" 7 + "sprd,sc2723-efuse" 8 + "sprd,sc2730-efuse" 9 + "sprd,sc2731-efuse" 10 + - reg: Specify the address offset of efuse controller. 11 + - hwlocks: Reference to a phandle of a hwlock provider node. 12 + 13 + = Data cells = 14 + Are child nodes of eFuse, bindings of which as described in 15 + bindings/nvmem/nvmem.txt 16 + 17 + Example: 18 + 19 + sc2731_pmic: pmic@0 { 20 + compatible = "sprd,sc2731"; 21 + reg = <0>; 22 + spi-max-frequency = <26000000>; 23 + interrupts = <GIC_SPI 31 IRQ_TYPE_LEVEL_HIGH>; 24 + interrupt-controller; 25 + #interrupt-cells = <2>; 26 + #address-cells = <1>; 27 + #size-cells = <0>; 28 + 29 + efuse@380 { 30 + compatible = "sprd,sc2731-efuse"; 31 + reg = <0x380>; 32 + #address-cells = <1>; 33 + #size-cells = <1>; 34 + hwlocks = <&hwlock 12>; 35 + 36 + /* Data cells */ 37 + thermal_calib: calib@10 { 38 + reg = <0x10 0x2>; 39 + }; 40 + }; 41 + }; 42 + 43 + = Data consumers = 44 + Are device nodes which consume nvmem data cells. 45 + 46 + Example: 47 + 48 + thermal { 49 + ... 50 + nvmem-cells = <&thermal_calib>; 51 + nvmem-cell-names = "calibration"; 52 + };
+84
Documentation/devicetree/bindings/slimbus/slim-ngd-qcom-ctrl.txt
··· 1 + Qualcomm SLIMBus Non Generic Device (NGD) Controller binding 2 + 3 + SLIMBus NGD controller is a light-weight driver responsible for communicating 4 + with SLIMBus slaves directly over the bus using messaging interface and 5 + communicating with master component residing on ADSP for bandwidth and 6 + data-channel management 7 + 8 + Please refer to slimbus/bus.txt for details of the common SLIMBus bindings. 9 + 10 + - compatible: 11 + Usage: required 12 + Value type: <stringlist> 13 + Definition: must be "qcom,slim-ngd-v<MAJOR>.<MINOR>.<STEP>" 14 + must be one of the following. 15 + "qcom,slim-ngd-v1.5.0" for MSM8996 16 + "qcom,slim-ngd-v2.1.0" for SDM845 17 + 18 + - reg: 19 + Usage: required 20 + Value type: <prop-encoded-array> 21 + Definition: must specify the base address and size of the controller 22 + register space. 23 + - dmas 24 + Usage: required 25 + Value type: <array of phandles> 26 + Definition: List of rx and tx dma channels 27 + 28 + - dma-names 29 + Usage: required 30 + Value type: <stringlist> 31 + Definition: must be "rx" and "tx". 32 + 33 + - interrupts: 34 + Usage: required 35 + Value type: <prop-encoded-array> 36 + Definition: must list controller IRQ. 37 + 38 + #address-cells 39 + Usage: required 40 + Value type: <u32> 41 + Definition: Should be 1, reflecting the instance id of ngd. 42 + 43 + #size-cells 44 + Usage: required 45 + Value type: <u32> 46 + Definition: Should be 0 47 + 48 + = NGD Devices 49 + Each subnode represents an instance of NGD, must contain the following 50 + properties: 51 + 52 + - reg: 53 + Usage: required 54 + Value type: <u32> 55 + Definition: Should be instance id of ngd. 56 + 57 + #address-cells 58 + Usage: required 59 + Refer to slimbus/bus.txt for details of the common SLIMBus bindings. 60 + 61 + #size-cells 62 + Usage: required 63 + Refer to slimbus/bus.txt for details of the common SLIMBus bindings. 64 + 65 + = EXAMPLE 66 + 67 + slim@91c0000 { 68 + compatible = "qcom,slim-ngd-v1.5.0"; 69 + reg = <0x91c0000 0x2c000>; 70 + interrupts = <0 163 0>; 71 + dmas = <&slimbam 3>, <&slimbam 4>; 72 + dma-names = "rx", "tx"; 73 + #address-cells = <1>; 74 + #size-cells = <0>; 75 + ngd@1 { 76 + reg = <1>; 77 + #address-cells = <1>; 78 + #size-cells = <1>; 79 + codec@1 { 80 + compatible = "slim217,1a0"; 81 + reg = <1 0>; 82 + }; 83 + }; 84 + };
+3
Documentation/devicetree/bindings/vendor-prefixes.txt
··· 129 129 ezchip EZchip Semiconductor 130 130 fairphone Fairphone B.V. 131 131 faraday Faraday Technology Corporation 132 + fastrax Fastrax Oy 132 133 fcs Fairchild Semiconductor 133 134 firefly Firefly 134 135 focaltech FocalTech Systems Co.,Ltd ··· 210 209 linaro Linaro Limited 211 210 linksys Belkin International, Inc. (Linksys) 212 211 linux Linux-specific binding 212 + linx Linx Technologies 213 213 lltc Linear Technology Corporation 214 214 logicpd Logic PD, Inc. 215 215 lsi LSI Corp. (LSI Logic) ··· 392 390 truly Truly Semiconductors Limited 393 391 tsd Theobroma Systems Design und Consulting GmbH 394 392 tyan Tyan Computer Corporation 393 + u-blox u-blox 395 394 ucrobotics uCRobotics 396 395 ubnt Ubiquiti Networks 397 396 udoo Udoo
+6 -6
Documentation/driver-api/fpga/fpga-mgr.rst
··· 83 83 3. .write_complete 84 84 85 85 The .write_init function will prepare the FPGA to receive the image data. The 86 - buffer passed into .write_init will be atmost .initial_header_size bytes long, 86 + buffer passed into .write_init will be at most .initial_header_size bytes long; 87 87 if the whole bitstream is not immediately available then the core code will 88 88 buffer up at least this much before starting. 89 89 ··· 98 98 The .write_complete function is called after all the image has been written 99 99 to put the FPGA into operating mode. 100 100 101 - The ops include a .state function which will read the hardware FPGA manager and 102 - return a code of type enum fpga_mgr_states. It doesn't result in a change in 103 - hardware state. 101 + The ops include a .state function which will determine the state the FPGA is in 102 + and return a code of type enum fpga_mgr_states. It doesn't result in a change 103 + in state. 104 104 105 105 How to write an image buffer to a supported FPGA 106 106 ------------------------------------------------ ··· 181 181 .. kernel-doc:: drivers/fpga/fpga-mgr.c 182 182 :functions: fpga_mgr_unregister 183 183 184 - API for programming a FPGA 185 - -------------------------- 184 + API for programming an FPGA 185 + --------------------------- 186 186 187 187 .. kernel-doc:: include/linux/fpga/fpga-mgr.h 188 188 :functions: fpga_image_info
+6 -6
Documentation/driver-api/fpga/fpga-region.rst
··· 4 4 Overview 5 5 -------- 6 6 7 - This document is meant to be an brief overview of the FPGA region API usage. A 7 + This document is meant to be a brief overview of the FPGA region API usage. A 8 8 more conceptual look at regions can be found in the Device Tree binding 9 9 document [#f1]_. 10 10 ··· 31 31 * pointers to the image as either a scatter-gather buffer, a contiguous 32 32 buffer, or the name of firmware file 33 33 34 - * flags indicating specifics such as whether the image if for partial 34 + * flags indicating specifics such as whether the image is for partial 35 35 reconfiguration. 36 36 37 - How to program a FPGA using a region 38 - ------------------------------------ 37 + How to program an FPGA using a region 38 + ------------------------------------- 39 39 40 40 First, allocate the info struct:: 41 41 ··· 77 77 .. [#f1] ../devicetree/bindings/fpga/fpga-region.txt 78 78 .. [#f2] ../../drivers/fpga/of-fpga-region.c 79 79 80 - API to program a FGPA 81 - --------------------- 80 + API to program an FPGA 81 + ---------------------- 82 82 83 83 .. kernel-doc:: drivers/fpga/fpga-region.c 84 84 :functions: fpga_region_program_fpga
+7 -7
Documentation/driver-api/fpga/intro.rst
··· 12 12 13 13 * Code should not be shared between upper and lower layers. This 14 14 should go without saying. If that seems necessary, there's probably 15 - framework functionality that that can be added that will benefit 15 + framework functionality that can be added that will benefit 16 16 other users. Write the linux-fpga mailing list and maintainers and 17 17 seek out a solution that expands the framework for broad reuse. 18 18 19 - * Generally, when adding code, think of the future. Plan for re-use. 19 + * Generally, when adding code, think of the future. Plan for reuse. 20 20 21 21 The framework in the kernel is divided into: 22 22 23 23 FPGA Manager 24 24 ------------ 25 25 26 - If you are adding a new FPGA or a new method of programming a FPGA, 26 + If you are adding a new FPGA or a new method of programming an FPGA, 27 27 this is the subsystem for you. Low level FPGA manager drivers contain 28 28 the knowledge of how to program a specific device. This subsystem 29 29 includes the framework in fpga-mgr.c and the low level drivers that ··· 32 32 FPGA Bridge 33 33 ----------- 34 34 35 - FPGA Bridges prevent spurious signals from going out of a FPGA or a 36 - region of a FPGA during programming. They are disabled before 35 + FPGA Bridges prevent spurious signals from going out of an FPGA or a 36 + region of an FPGA during programming. They are disabled before 37 37 programming begins and re-enabled afterwards. An FPGA bridge may be 38 - actual hard hardware that gates a bus to a cpu or a soft ("freeze") 38 + actual hard hardware that gates a bus to a CPU or a soft ("freeze") 39 39 bridge in FPGA fabric that surrounds a partial reconfiguration region 40 40 of an FPGA. This subsystem includes fpga-bridge.c and the low level 41 41 drivers that are registered with it. ··· 44 44 ----------- 45 45 46 46 If you are adding a new interface to the FPGA framework, add it on top 47 - of a FPGA region to allow the most reuse of your interface. 47 + of an FPGA region to allow the most reuse of your interface. 48 48 49 49 The FPGA Region framework (fpga-region.c) associates managers and 50 50 bridges as reconfigurable regions. A region may refer to the whole
+5
Documentation/driver-api/slimbus.rst
··· 125 125 ~~~~~~~~~~~~~~~ 126 126 .. kernel-doc:: drivers/slimbus/messaging.c 127 127 :export: 128 + 129 + Streaming APIs: 130 + ~~~~~~~~~~~~~~~ 131 + .. kernel-doc:: drivers/slimbus/stream.c 132 + :export:
+285
Documentation/fpga/dfl.txt
··· 1 + =============================================================================== 2 + FPGA Device Feature List (DFL) Framework Overview 3 + ------------------------------------------------------------------------------- 4 + Enno Luebbers <enno.luebbers@intel.com> 5 + Xiao Guangrong <guangrong.xiao@linux.intel.com> 6 + Wu Hao <hao.wu@intel.com> 7 + 8 + The Device Feature List (DFL) FPGA framework (and drivers according to this 9 + this framework) hides the very details of low layer hardwares and provides 10 + unified interfaces to userspace. Applications could use these interfaces to 11 + configure, enumerate, open and access FPGA accelerators on platforms which 12 + implement the DFL in the device memory. Besides this, the DFL framework 13 + enables system level management functions such as FPGA reconfiguration. 14 + 15 + 16 + Device Feature List (DFL) Overview 17 + ================================== 18 + Device Feature List (DFL) defines a linked list of feature headers within the 19 + device MMIO space to provide an extensible way of adding features. 
Software can 20 + walk through these predefined data structures to enumerate FPGA features: 21 + FPGA Interface Unit (FIU), Accelerated Function Unit (AFU) and Private Features, 22 + as illustrated below: 23 + 24 + Header Header Header Header 25 + +----------+ +-->+----------+ +-->+----------+ +-->+----------+ 26 + | Type | | | Type | | | Type | | | Type | 27 + | FIU | | | Private | | | Private | | | Private | 28 + +----------+ | | Feature | | | Feature | | | Feature | 29 + | Next_DFH |--+ +----------+ | +----------+ | +----------+ 30 + +----------+ | Next_DFH |--+ | Next_DFH |--+ | Next_DFH |--> NULL 31 + | ID | +----------+ +----------+ +----------+ 32 + +----------+ | ID | | ID | | ID | 33 + | Next_AFU |--+ +----------+ +----------+ +----------+ 34 + +----------+ | | Feature | | Feature | | Feature | 35 + | Header | | | Register | | Register | | Register | 36 + | Register | | | Set | | Set | | Set | 37 + | Set | | +----------+ +----------+ +----------+ 38 + +----------+ | Header 39 + +-->+----------+ 40 + | Type | 41 + | AFU | 42 + +----------+ 43 + | Next_DFH |--> NULL 44 + +----------+ 45 + | GUID | 46 + +----------+ 47 + | Header | 48 + | Register | 49 + | Set | 50 + +----------+ 51 + 52 + FPGA Interface Unit (FIU) represents a standalone functional unit for the 53 + interface to FPGA, e.g. the FPGA Management Engine (FME) and Port (more 54 + descriptions on FME and Port in later sections). 55 + 56 + Accelerated Function Unit (AFU) represents a FPGA programmable region and 57 + always connects to a FIU (e.g. a Port) as its child as illustrated above. 58 + 59 + Private Features represent sub features of the FIU and AFU. They could be 60 + various function blocks with different IDs, but all private features which 61 + belong to the same FIU or AFU, must be linked to one list via the Next Device 62 + Feature Header (Next_DFH) pointer. 63 + 64 + Each FIU, AFU and Private Feature could implement its own functional registers. 
65 + The functional register set for FIU and AFU, is named as Header Register Set, 66 + e.g. FME Header Register Set, and the one for Private Feature, is named as 67 + Feature Register Set, e.g. FME Partial Reconfiguration Feature Register Set. 68 + 69 + This Device Feature List provides a way of linking features together, it's 70 + convenient for software to locate each feature by walking through this list, 71 + and can be implemented in register regions of any FPGA device. 72 + 73 + 74 + FIU - FME (FPGA Management Engine) 75 + ================================== 76 + The FPGA Management Engine performs reconfiguration and other infrastructure 77 + functions. Each FPGA device only has one FME. 78 + 79 + User-space applications can acquire exclusive access to the FME using open(), 80 + and release it using close(). 81 + 82 + The following functions are exposed through ioctls: 83 + 84 + Get driver API version (DFL_FPGA_GET_API_VERSION) 85 + Check for extensions (DFL_FPGA_CHECK_EXTENSION) 86 + Program bitstream (DFL_FPGA_FME_PORT_PR) 87 + 88 + More functions are exposed through sysfs 89 + (/sys/class/fpga_region/regionX/dfl-fme.n/): 90 + 91 + Read bitstream ID (bitstream_id) 92 + bitstream_id indicates version of the static FPGA region. 93 + 94 + Read bitstream metadata (bitstream_metadata) 95 + bitstream_metadata includes detailed information of static FPGA region, 96 + e.g. synthesis date and seed. 97 + 98 + Read number of ports (ports_num) 99 + one FPGA device may have more than one port, this sysfs interface indicates 100 + how many ports the FPGA device has. 101 + 102 + 103 + FIU - PORT 104 + ========== 105 + A port represents the interface between the static FPGA fabric and a partially 106 + reconfigurable region containing an AFU. It controls the communication from SW 107 + to the accelerator and exposes features such as reset and debug. Each FPGA 108 + device may have more than one port, but always one AFU per port. 
AFU
===
An AFU is attached to a port FIU and exposes a fixed-length MMIO region to be
used for accelerator-specific control registers.

User-space applications can acquire exclusive access to an AFU attached to a
port by using open() on the port device node, and release it using close().

The following functions are exposed through ioctls:

 Get driver API version (DFL_FPGA_GET_API_VERSION)
 Check for extensions (DFL_FPGA_CHECK_EXTENSION)
 Get port info (DFL_FPGA_PORT_GET_INFO)
 Get MMIO region info (DFL_FPGA_PORT_GET_REGION_INFO)
 Map DMA buffer (DFL_FPGA_PORT_DMA_MAP)
 Unmap DMA buffer (DFL_FPGA_PORT_DMA_UNMAP)
 Reset AFU (*DFL_FPGA_PORT_RESET)

*DFL_FPGA_PORT_RESET: resets the FPGA Port and its AFU. Userspace can perform
a Port reset at any time, e.g. during DMA or Partial Reconfiguration. It
should never cause any system-level issue, only functional failure (e.g. DMA
or PR operation failure), and it should be recoverable from that failure.

User-space applications can also mmap() accelerator MMIO regions.

More functions are exposed through sysfs
(/sys/class/fpga_region/<regionX>/<dfl-port.m>/):

 Read Accelerator GUID (afu_id)
     afu_id indicates which PR bitstream is programmed to this AFU.


DFL Framework Overview
======================

 +----------+    +--------+ +--------+ +--------+
 |   FME    |    |  AFU   | |  AFU   | |  AFU   |
 |  Module  |    | Module | | Module | | Module |
 +----------+    +--------+ +--------+ +--------+
         +-----------------------+
         | FPGA Container Device |    Device Feature List
         |  (FPGA Base Region)   |         Framework
         +-----------------------+
 ------------------------------------------------------------------
               +----------------------------+
               |   FPGA DFL Device Module   |
               | (e.g. PCIE/Platform Device)|
               +----------------------------+
                 +------------------------+
                 |  FPGA Hardware Device  |
                 +------------------------+

The DFL framework in the kernel provides common interfaces to create the
container device (FPGA base region), discover feature devices and their
private features from the given Device Feature Lists, and create platform
devices for the feature devices (e.g. FME, Port and AFU) with related
resources under the container device. It also abstracts operations for the
private features and exposes common ops to feature device drivers.

The FPGA DFL Device could be different kinds of hardware, e.g. a PCIe device
or a platform device. Its driver module is always loaded first once the
device is created by the system. This driver plays an infrastructural role in
the driver architecture: it locates the DFLs in the device memory and hands
them and the related resources to the common interfaces of the DFL framework
for enumeration. (Please refer to drivers/fpga/dfl.c for detailed enumeration
APIs.)

The FPGA Management Engine (FME) driver is a platform driver which is loaded
automatically after FME platform device creation from the DFL device module.
It provides the key features for FPGA management, including:

 a) Exposing static FPGA region information, e.g. version and metadata.
    Users can read related information via sysfs interfaces exposed
    by the FME driver.

 b) Partial Reconfiguration. The FME driver creates the FPGA manager, FPGA
    bridges and FPGA regions during PR sub-feature initialization. Once
    it receives a DFL_FPGA_FME_PORT_PR ioctl from the user, it invokes the
    common interface function from FPGA Region to complete the partial
    reconfiguration of the PR bitstream to the given port.
Similar to the FME driver, the FPGA Accelerated Function Unit (AFU) driver is
probed once the AFU platform device is created. The main function of this
module is to provide an interface for userspace applications to access the
individual accelerators, including basic reset control on the port, AFU MMIO
region export, and DMA buffer mapping services.

After feature platform device creation, matched platform drivers will be
loaded automatically to handle the different functionalities. Please refer to
the next sections for detailed information on the functional units which have
already been implemented under this DFL framework.


Partial Reconfiguration
=======================
As mentioned above, accelerators can be reconfigured through partial
reconfiguration of a PR bitstream file. The PR bitstream file must have been
generated for the exact static FPGA region and targeted reconfigurable region
(port) of the FPGA; otherwise, the reconfiguration operation will fail and
possibly cause system instability. This compatibility can be checked by
comparing the compatibility ID noted in the header of the PR bitstream file
against the compat_id exposed by the target FPGA region. This check is
usually done by userspace before calling the reconfiguration IOCTL.


Device enumeration
==================
This section introduces how applications enumerate the fpga device from
the sysfs hierarchy under /sys/class/fpga_region.

In the example below, two DFL based FPGA devices are installed in the host.
Each fpga device has one FME and two ports (AFUs).

FPGA regions are created under /sys/class/fpga_region/

	/sys/class/fpga_region/region0
	/sys/class/fpga_region/region1
	/sys/class/fpga_region/region2
	...
The application needs to search each regionX folder; if a feature device is
found there (e.g. a "dfl-port.n" or "dfl-fme.m" entry), then it is the base
fpga region which represents the FPGA device.

Each base region has one FME and two ports (AFUs) as child devices:

	/sys/class/fpga_region/region0/dfl-fme.0
	/sys/class/fpga_region/region0/dfl-port.0
	/sys/class/fpga_region/region0/dfl-port.1
	...

	/sys/class/fpga_region/region3/dfl-fme.1
	/sys/class/fpga_region/region3/dfl-port.2
	/sys/class/fpga_region/region3/dfl-port.3
	...

In general, the FME/AFU sysfs interfaces are named as follows:

	/sys/class/fpga_region/<regionX>/<dfl-fme.n>/
	/sys/class/fpga_region/<regionX>/<dfl-port.m>/

with 'n' consecutively numbering all FMEs and 'm' consecutively numbering all
ports.

The device nodes used for ioctl() or mmap() can be referenced through:

	/sys/class/fpga_region/<regionX>/<dfl-fme.n>/dev
	/sys/class/fpga_region/<regionX>/<dfl-port.n>/dev


Add new FIUs support
====================
It is possible for developers to create new function blocks (FIUs) under this
DFL framework. A new platform device driver then needs to be developed for
the new feature device (FIU), following the same approach as the existing
feature device drivers (e.g. the FME and Port/AFU platform device drivers).
Besides that, it also requires modification of the DFL framework enumeration
code for new FIU type detection and the creation of the related platform
devices.


Add new private features support
================================
In some cases, we may need to add some new private features to existing FIUs
(e.g. FME or Port). Developers don't need to touch the enumeration code in
the DFL framework, as each private feature will be parsed automatically and
the related mmio resources can be found under the FIU platform device created
by the DFL framework. The developer only needs to provide a sub-feature
driver with a matching feature id. The FME Partial Reconfiguration Sub
Feature driver (see drivers/fpga/dfl-fme-pr.c) could serve as a reference.


Open discussion
===============
The FME driver currently exports one ioctl (DFL_FPGA_FME_PORT_PR) for partial
reconfiguration to the user. In the future, if unified user interfaces for
reconfiguration are added, the FME driver should switch to them from the
ioctl interface.
+1
Documentation/ioctl/ioctl-number.txt
··· 324 324 0xB3 00 linux/mmc/ioctl.h 325 325 0xB4 00-0F linux/gpio.h <mailto:linux-gpio@vger.kernel.org> 326 326 0xB5 00-0F uapi/linux/rpmsg.h <mailto:linux-remoteproc@vger.kernel.org> 327 + 0xB6 all linux/fpga-dfl.h 327 328 0xC0 00-0F linux/usb/iowarrior.h 328 329 0xCA 00-0F uapi/misc/cxl.h 329 330 0xCA 10-2F uapi/misc/ocxl.h
+11
Documentation/sysctl/kernel.txt
··· 39 39 - hung_task_check_count 40 40 - hung_task_timeout_secs 41 41 - hung_task_warnings 42 + - hyperv_record_panic_msg 42 43 - kexec_load_disabled 43 44 - kptr_restrict 44 45 - l2cr [ PPC only ] ··· 372 371 This file shows up if CONFIG_DETECT_HUNG_TASK is enabled. 373 372 374 373 -1: report an infinite number of warnings. 374 + 375 + ============================================================== 376 + 377 + hyperv_record_panic_msg: 378 + 379 + Controls whether the panic kmsg data should be reported to Hyper-V. 380 + 381 + 0: do not report panic kmsg data. 382 + 383 + 1: report the panic kmsg data. This is the default behavior. 375 384 376 385 ============================================================== 377 386
+1 -1
Documentation/w1/slaves/w1_ds2438
··· 60 60 vdd: battery input (VDD) 61 61 62 62 After the voltage conversion the value is returned as decimal ASCII. 63 - Note: The value is in mV, so to get a volts the value has to be divided by 10. 63 + Note: To get a volts the value has to be divided by 100.
+24 -1
MAINTAINERS
··· 836 836 S: Maintained 837 837 F: drivers/media/i2c/ad9389b* 838 838 839 + ANALOG DEVICES INC ADGS1408 DRIVER 840 + M: Mircea Caprioru <mircea.caprioru@analog.com> 841 + S: Supported 842 + F: drivers/mux/adgs1408.c 843 + F: Documentation/devicetree/bindings/mux/adgs1408.txt 844 + 839 845 ANALOG DEVICES INC ADV7180 DRIVER 840 846 M: Lars-Peter Clausen <lars@metafoo.de> 841 847 L: linux-media@vger.kernel.org ··· 5720 5714 F: include/linux/fpga/ 5721 5715 W: http://www.rocketboards.org 5722 5716 5717 + FPGA DFL DRIVERS 5718 + M: Wu Hao <hao.wu@intel.com> 5719 + L: linux-fpga@vger.kernel.org 5720 + S: Maintained 5721 + F: Documentation/fpga/dfl.txt 5722 + F: include/uapi/linux/fpga-dfl.h 5723 + F: drivers/fpga/dfl* 5724 + 5723 5725 FPU EMULATOR 5724 5726 M: Bill Metzenthen <billm@melbpc.org.au> 5725 5727 W: http://floatingpoint.sourceforge.net/emulator/index.html ··· 6143 6129 F: Documentation/isdn/README.gigaset 6144 6130 F: drivers/isdn/gigaset/ 6145 6131 F: include/uapi/linux/gigaset_dev.h 6132 + 6133 + GNSS SUBSYSTEM 6134 + M: Johan Hovold <johan@kernel.org> 6135 + S: Maintained 6136 + F: Documentation/ABI/testing/sysfs-class-gnss 6137 + F: Documentation/devicetree/bindings/gnss/ 6138 + F: drivers/gnss/ 6139 + F: include/linux/gnss.h 6146 6140 6147 6141 GO7007 MPEG CODEC 6148 6142 M: Hans Verkuil <hans.verkuil@cisco.com> ··· 15545 15523 15546 15524 VMWARE BALLOON DRIVER 15547 15525 M: Xavier Deguillard <xdeguillard@vmware.com> 15548 - M: Philip Moltmann <moltmann@vmware.com> 15526 + M: Nadav Amit <namit@vmware.com> 15549 15527 M: "VMware, Inc." <pv-drivers@vmware.com> 15550 15528 L: linux-kernel@vger.kernel.org 15551 15529 S: Maintained ··· 15637 15615 W1 DALLAS'S 1-WIRE BUS 15638 15616 M: Evgeniy Polyakov <zbr@ioremap.net> 15639 15617 S: Maintained 15618 + F: Documentation/devicetree/bindings/w1/ 15640 15619 F: Documentation/w1/ 15641 15620 F: drivers/w1/ 15642 15621 F: include/linux/w1.h
+1
arch/arm/boot/dts/aspeed-bmc-opp-romulus.dts
··· 52 52 compatible = "fsi-master-gpio", "fsi-master"; 53 53 #address-cells = <2>; 54 54 #size-cells = <0>; 55 + no-gpio-delays; 55 56 56 57 clock-gpios = <&gpio ASPEED_GPIO(AA, 0) GPIO_ACTIVE_HIGH>; 57 58 data-gpios = <&gpio ASPEED_GPIO(AA, 2) GPIO_ACTIVE_HIGH>;
+1
arch/arm/boot/dts/aspeed-bmc-opp-witherspoon.dts
··· 153 153 compatible = "fsi-master-gpio", "fsi-master"; 154 154 #address-cells = <2>; 155 155 #size-cells = <0>; 156 + no-gpio-delays; 156 157 157 158 clock-gpios = <&gpio ASPEED_GPIO(AA, 0) GPIO_ACTIVE_HIGH>; 158 159 data-gpios = <&gpio ASPEED_GPIO(E, 0) GPIO_ACTIVE_HIGH>;
+1
arch/arm/boot/dts/aspeed-bmc-opp-zaius.dts
··· 91 91 compatible = "fsi-master-gpio", "fsi-master"; 92 92 #address-cells = <2>; 93 93 #size-cells = <0>; 94 + no-gpio-delays; 94 95 95 96 trans-gpios = <&gpio ASPEED_GPIO(O, 6) GPIO_ACTIVE_HIGH>; 96 97 enable-gpios = <&gpio ASPEED_GPIO(D, 0) GPIO_ACTIVE_HIGH>;
+1
arch/arm/mach-mmp/sram.c
··· 15 15 */ 16 16 17 17 #include <linux/module.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/init.h> 19 20 #include <linux/platform_device.h> 20 21 #include <linux/io.h>
+1
arch/arm/plat-samsung/adc.c
··· 8 8 9 9 #include <linux/module.h> 10 10 #include <linux/kernel.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/platform_device.h> 12 13 #include <linux/sched.h> 13 14 #include <linux/list.h>
+29 -2
arch/x86/hyperv/hv_init.c
··· 333 333 * Register Hyper-V specific clocksource. 334 334 */ 335 335 #ifdef CONFIG_HYPERV_TSCPAGE 336 - if (ms_hyperv.features & HV_X64_MSR_REFERENCE_TSC_AVAILABLE) { 336 + if (ms_hyperv.features & HV_MSR_REFERENCE_TSC_AVAILABLE) { 337 337 union hv_x64_msr_hypercall_contents tsc_msr; 338 338 339 339 tsc_pg = __vmalloc(PAGE_SIZE, GFP_KERNEL, PAGE_KERNEL); ··· 362 362 */ 363 363 364 364 hyperv_cs = &hyperv_cs_msr; 365 - if (ms_hyperv.features & HV_X64_MSR_TIME_REF_COUNT_AVAILABLE) 365 + if (ms_hyperv.features & HV_MSR_TIME_REF_COUNT_AVAILABLE) 366 366 clocksource_register_hz(&hyperv_cs_msr, NSEC_PER_SEC/100); 367 367 368 368 return; ··· 425 425 wrmsrl(HV_X64_MSR_CRASH_CTL, HV_CRASH_CTL_CRASH_NOTIFY); 426 426 } 427 427 EXPORT_SYMBOL_GPL(hyperv_report_panic); 428 + 429 + /** 430 + * hyperv_report_panic_msg - report panic message to Hyper-V 431 + * @pa: physical address of the panic page containing the message 432 + * @size: size of the message in the page 433 + */ 434 + void hyperv_report_panic_msg(phys_addr_t pa, size_t size) 435 + { 436 + /* 437 + * P3 to contain the physical address of the panic page & P4 to 438 + * contain the size of the panic data in that page. Rest of the 439 + * registers are no-op when the NOTIFY_MSG flag is set. 440 + */ 441 + wrmsrl(HV_X64_MSR_CRASH_P0, 0); 442 + wrmsrl(HV_X64_MSR_CRASH_P1, 0); 443 + wrmsrl(HV_X64_MSR_CRASH_P2, 0); 444 + wrmsrl(HV_X64_MSR_CRASH_P3, pa); 445 + wrmsrl(HV_X64_MSR_CRASH_P4, size); 446 + 447 + /* 448 + * Let Hyper-V know there is crash data available along with 449 + * the panic message. 450 + */ 451 + wrmsrl(HV_X64_MSR_CRASH_CTL, 452 + (HV_CRASH_CTL_CRASH_NOTIFY | HV_CRASH_CTL_CRASH_NOTIFY_MSG)); 453 + } 454 + EXPORT_SYMBOL_GPL(hyperv_report_panic_msg); 428 455 429 456 bool hv_is_hyperv_initialized(void) 430 457 {
+9 -8
arch/x86/include/asm/hyperv-tlfs.h
··· 35 35 /* VP Runtime (HV_X64_MSR_VP_RUNTIME) available */ 36 36 #define HV_X64_MSR_VP_RUNTIME_AVAILABLE (1 << 0) 37 37 /* Partition Reference Counter (HV_X64_MSR_TIME_REF_COUNT) available*/ 38 - #define HV_X64_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1) 38 + #define HV_MSR_TIME_REF_COUNT_AVAILABLE (1 << 1) 39 39 /* Partition reference TSC MSR is available */ 40 - #define HV_X64_MSR_REFERENCE_TSC_AVAILABLE (1 << 9) 40 + #define HV_MSR_REFERENCE_TSC_AVAILABLE (1 << 9) 41 41 42 42 /* A partition's reference time stamp counter (TSC) page */ 43 43 #define HV_X64_MSR_REFERENCE_TSC 0x40000021 ··· 60 60 * Synthetic Timer MSRs (HV_X64_MSR_STIMER0_CONFIG through 61 61 * HV_X64_MSR_STIMER3_COUNT) available 62 62 */ 63 - #define HV_X64_MSR_SYNTIMER_AVAILABLE (1 << 3) 63 + #define HV_MSR_SYNTIMER_AVAILABLE (1 << 3) 64 64 /* 65 65 * APIC access MSRs (HV_X64_MSR_EOI, HV_X64_MSR_ICR and HV_X64_MSR_TPR) 66 66 * are available ··· 86 86 #define HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE (1 << 10) 87 87 88 88 /* stimer Direct Mode is available */ 89 - #define HV_X64_STIMER_DIRECT_MODE_AVAILABLE (1 << 19) 89 + #define HV_STIMER_DIRECT_MODE_AVAILABLE (1 << 19) 90 90 91 91 /* 92 92 * Feature identification: EBX indicates which flags were specified at ··· 160 160 #define HV_X64_RELAXED_TIMING_RECOMMENDED (1 << 5) 161 161 162 162 /* 163 - * Virtual APIC support 163 + * Recommend not using Auto End-Of-Interrupt feature 164 164 */ 165 - #define HV_X64_DEPRECATING_AEOI_RECOMMENDED (1 << 9) 165 + #define HV_DEPRECATING_AEOI_RECOMMENDED (1 << 9) 166 166 167 167 /* 168 168 * Recommend using cluster IPI hypercalls. ··· 176 176 #define HV_X64_ENLIGHTENED_VMCS_RECOMMENDED (1 << 14) 177 177 178 178 /* 179 - * Crash notification flag. 179 + * Crash notification flags. 180 180 */ 181 - #define HV_CRASH_CTL_CRASH_NOTIFY (1ULL << 63) 181 + #define HV_CRASH_CTL_CRASH_NOTIFY_MSG BIT_ULL(62) 182 + #define HV_CRASH_CTL_CRASH_NOTIFY BIT_ULL(63) 182 183 183 184 /* MSR used to identify the guest OS. 
*/ 184 185 #define HV_X64_MSR_GUEST_OS_ID 0x40000000
+12 -4
arch/x86/include/asm/mshyperv.h
··· 76 76 } 77 77 } 78 78 79 - #define hv_init_timer(timer, tick) wrmsrl(timer, tick) 80 - #define hv_init_timer_config(config, val) wrmsrl(config, val) 79 + #define hv_init_timer(timer, tick) \ 80 + wrmsrl(HV_X64_MSR_STIMER0_COUNT + (2*timer), tick) 81 + #define hv_init_timer_config(timer, val) \ 82 + wrmsrl(HV_X64_MSR_STIMER0_CONFIG + (2*timer), val) 81 83 82 84 #define hv_get_simp(val) rdmsrl(HV_X64_MSR_SIMP, val) 83 85 #define hv_set_simp(val) wrmsrl(HV_X64_MSR_SIMP, val) ··· 92 90 93 91 #define hv_get_vp_index(index) rdmsrl(HV_X64_MSR_VP_INDEX, index) 94 92 95 - #define hv_get_synint_state(int_num, val) rdmsrl(int_num, val) 96 - #define hv_set_synint_state(int_num, val) wrmsrl(int_num, val) 93 + #define hv_get_synint_state(int_num, val) \ 94 + rdmsrl(HV_X64_MSR_SINT0 + int_num, val) 95 + #define hv_set_synint_state(int_num, val) \ 96 + wrmsrl(HV_X64_MSR_SINT0 + int_num, val) 97 + 98 + #define hv_get_crash_ctl(val) \ 99 + rdmsrl(HV_X64_MSR_CRASH_CTL, val) 97 100 98 101 void hyperv_callback_vector(void); 99 102 void hyperv_reenlightenment_vector(void); ··· 339 332 void __init hyperv_init(void); 340 333 void hyperv_setup_mmu_ops(void); 341 334 void hyperv_report_panic(struct pt_regs *regs, long err); 335 + void hyperv_report_panic_msg(phys_addr_t pa, size_t size); 342 336 bool hv_is_hyperv_initialized(void); 343 337 void hyperv_cleanup(void); 344 338
+3 -3
arch/x86/kernel/cpu/mshyperv.c
··· 41 41 static void (*hv_kexec_handler)(void); 42 42 static void (*hv_crash_handler)(struct pt_regs *regs); 43 43 44 - void hyperv_vector_handler(struct pt_regs *regs) 44 + __visible void __irq_entry hyperv_vector_handler(struct pt_regs *regs) 45 45 { 46 46 struct pt_regs *old_regs = set_irq_regs(regs); 47 47 ··· 50 50 if (vmbus_handler) 51 51 vmbus_handler(); 52 52 53 - if (ms_hyperv.hints & HV_X64_DEPRECATING_AEOI_RECOMMENDED) 53 + if (ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED) 54 54 ack_APIC_irq(); 55 55 56 56 exiting_irq(); ··· 300 300 hyperv_reenlightenment_vector); 301 301 302 302 /* Setup the IDT for stimer0 */ 303 - if (ms_hyperv.misc_features & HV_X64_STIMER_DIRECT_MODE_AVAILABLE) 303 + if (ms_hyperv.misc_features & HV_STIMER_DIRECT_MODE_AVAILABLE) 304 304 alloc_intr_gate(HYPERV_STIMER0_VECTOR, 305 305 hv_stimer0_callback_vector); 306 306 #endif
+2
drivers/Kconfig
··· 9 9 10 10 source "drivers/connector/Kconfig" 11 11 12 + source "drivers/gnss/Kconfig" 13 + 12 14 source "drivers/mtd/Kconfig" 13 15 14 16 source "drivers/of/Kconfig"
+1
drivers/Makefile
··· 185 185 obj-$(CONFIG_MULTIPLEXER) += mux/ 186 186 obj-$(CONFIG_UNISYS_VISORBUS) += visorbus/ 187 187 obj-$(CONFIG_SIOX) += siox/ 188 + obj-$(CONFIG_GNSS) += gnss/
+1 -1
drivers/android/Kconfig
··· 10 10 11 11 config ANDROID_BINDER_IPC 12 12 bool "Android Binder IPC Driver" 13 - depends on MMU && !M68K 13 + depends on MMU 14 14 default n 15 15 ---help--- 16 16 Binder is used in Android for both communication between processes,
+6 -3
drivers/android/binder.c
··· 51 51 52 52 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 53 53 54 - #include <asm/cacheflush.h> 55 54 #include <linux/fdtable.h> 56 55 #include <linux/file.h> 57 56 #include <linux/freezer.h> ··· 70 71 #include <linux/pid_namespace.h> 71 72 #include <linux/security.h> 72 73 #include <linux/spinlock.h> 74 + #include <linux/ratelimit.h> 73 75 74 76 #include <uapi/linux/android/binder.h> 77 + 78 + #include <asm/cacheflush.h> 79 + 75 80 #include "binder_alloc.h" 76 81 #include "binder_trace.h" 77 82 ··· 164 161 #define binder_debug(mask, x...) \ 165 162 do { \ 166 163 if (binder_debug_mask & mask) \ 167 - pr_info(x); \ 164 + pr_info_ratelimited(x); \ 168 165 } while (0) 169 166 170 167 #define binder_user_error(x...) \ 171 168 do { \ 172 169 if (binder_debug_mask & BINDER_DEBUG_USER_ERROR) \ 173 - pr_info(x); \ 170 + pr_info_ratelimited(x); \ 174 171 if (binder_stop_on_user_error) \ 175 172 binder_stop_on_user_error = 2; \ 176 173 } while (0)
+27 -16
drivers/android/binder_alloc.c
··· 17 17 18 18 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 19 19 20 - #include <asm/cacheflush.h> 21 20 #include <linux/list.h> 22 21 #include <linux/sched/mm.h> 23 22 #include <linux/module.h> ··· 27 28 #include <linux/slab.h> 28 29 #include <linux/sched.h> 29 30 #include <linux/list_lru.h> 31 + #include <linux/ratelimit.h> 32 + #include <asm/cacheflush.h> 30 33 #include "binder_alloc.h" 31 34 #include "binder_trace.h" 32 35 ··· 37 36 static DEFINE_MUTEX(binder_alloc_mmap_lock); 38 37 39 38 enum { 39 + BINDER_DEBUG_USER_ERROR = 1U << 0, 40 40 BINDER_DEBUG_OPEN_CLOSE = 1U << 1, 41 41 BINDER_DEBUG_BUFFER_ALLOC = 1U << 2, 42 42 BINDER_DEBUG_BUFFER_ALLOC_ASYNC = 1U << 3, 43 43 }; 44 - static uint32_t binder_alloc_debug_mask; 44 + static uint32_t binder_alloc_debug_mask = BINDER_DEBUG_USER_ERROR; 45 45 46 46 module_param_named(debug_mask, binder_alloc_debug_mask, 47 47 uint, 0644); ··· 50 48 #define binder_alloc_debug(mask, x...) \ 51 49 do { \ 52 50 if (binder_alloc_debug_mask & mask) \ 53 - pr_info(x); \ 51 + pr_info_ratelimited(x); \ 54 52 } while (0) 55 53 56 54 static struct binder_buffer *binder_buffer_next(struct binder_buffer *buffer) ··· 154 152 * free the buffer twice 155 153 */ 156 154 if (buffer->free_in_progress) { 157 - pr_err("%d:%d FREE_BUFFER u%016llx user freed buffer twice\n", 158 - alloc->pid, current->pid, (u64)user_ptr); 155 + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 156 + "%d:%d FREE_BUFFER u%016llx user freed buffer twice\n", 157 + alloc->pid, current->pid, 158 + (u64)user_ptr); 159 159 return NULL; 160 160 } 161 161 buffer->free_in_progress = 1; ··· 228 224 } 229 225 230 226 if (!vma && need_mm) { 231 - pr_err("%d: binder_alloc_buf failed to map pages in userspace, no vma\n", 232 - alloc->pid); 227 + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 228 + "%d: binder_alloc_buf failed to map pages in userspace, no vma\n", 229 + alloc->pid); 233 230 goto err_no_vma; 234 231 } 235 232 ··· 349 344 int ret; 350 345 351 346 if (alloc->vma == NULL) { 
352 - pr_err("%d: binder_alloc_buf, no vma\n", 353 - alloc->pid); 347 + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 348 + "%d: binder_alloc_buf, no vma\n", 349 + alloc->pid); 354 350 return ERR_PTR(-ESRCH); 355 351 } 356 352 ··· 423 417 if (buffer_size > largest_free_size) 424 418 largest_free_size = buffer_size; 425 419 } 426 - pr_err("%d: binder_alloc_buf size %zd failed, no address space\n", 427 - alloc->pid, size); 428 - pr_err("allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n", 429 - total_alloc_size, allocated_buffers, largest_alloc_size, 430 - total_free_size, free_buffers, largest_free_size); 420 + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 421 + "%d: binder_alloc_buf size %zd failed, no address space\n", 422 + alloc->pid, size); 423 + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 424 + "allocated: %zd (num: %zd largest: %zd), free: %zd (num: %zd largest: %zd)\n", 425 + total_alloc_size, allocated_buffers, 426 + largest_alloc_size, total_free_size, 427 + free_buffers, largest_free_size); 431 428 return ERR_PTR(-ENOSPC); 432 429 } 433 430 if (n == NULL) { ··· 740 731 err_get_vm_area_failed: 741 732 err_already_mapped: 742 733 mutex_unlock(&binder_alloc_mmap_lock); 743 - pr_err("%s: %d %lx-%lx %s failed %d\n", __func__, 744 - alloc->pid, vma->vm_start, vma->vm_end, failure_string, ret); 734 + binder_alloc_debug(BINDER_DEBUG_USER_ERROR, 735 + "%s: %d %lx-%lx %s failed %d\n", __func__, 736 + alloc->pid, vma->vm_start, vma->vm_end, 737 + failure_string, ret); 745 738 return ret; 746 739 } 747 740
+5 -2
drivers/android/binder_trace.h
··· 248 248 __field(int, debug_id) 249 249 __field(size_t, data_size) 250 250 __field(size_t, offsets_size) 251 + __field(size_t, extra_buffers_size) 251 252 ), 252 253 TP_fast_assign( 253 254 __entry->debug_id = buf->debug_id; 254 255 __entry->data_size = buf->data_size; 255 256 __entry->offsets_size = buf->offsets_size; 257 + __entry->extra_buffers_size = buf->extra_buffers_size; 256 258 ), 257 - TP_printk("transaction=%d data_size=%zd offsets_size=%zd", 258 - __entry->debug_id, __entry->data_size, __entry->offsets_size) 259 + TP_printk("transaction=%d data_size=%zd offsets_size=%zd extra_buffers_size=%zd", 260 + __entry->debug_id, __entry->data_size, __entry->offsets_size, 261 + __entry->extra_buffers_size) 259 262 ); 260 263 261 264 DEFINE_EVENT(binder_buffer_class, binder_transaction_alloc_buf,
+1
drivers/ata/pata_imx.c
··· 17 17 #include <linux/clk.h> 18 18 #include <linux/libata.h> 19 19 #include <linux/module.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/platform_device.h> 21 22 22 23 #define DRV_NAME "pata_imx"
+1
drivers/ata/pata_samsung_cf.c
··· 17 17 18 18 #include <linux/kernel.h> 19 19 #include <linux/module.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/init.h> 21 22 #include <linux/clk.h> 22 23 #include <linux/libata.h>
+1
drivers/auxdisplay/hd44780.c
··· 9 9 #include <linux/delay.h> 10 10 #include <linux/gpio/consumer.h> 11 11 #include <linux/module.h> 12 + #include <linux/mod_devicetable.h> 12 13 #include <linux/platform_device.h> 13 14 #include <linux/property.h> 14 15 #include <linux/slab.h>
-2
drivers/char/hpet.c
··· 579 579 struct hpet_info *info) 580 580 { 581 581 struct hpet_timer __iomem *timer; 582 - struct hpet __iomem *hpet; 583 582 struct hpets *hpetp; 584 583 int err; 585 584 unsigned long v; ··· 590 591 case HPET_DPI: 591 592 case HPET_IRQFREQ: 592 593 timer = devp->hd_timer; 593 - hpet = devp->hd_hpet; 594 594 hpetp = devp->hd_hpets; 595 595 break; 596 596 case HPET_IE_ON:
+1
drivers/char/hw_random/atmel-rng.c
··· 8 8 9 9 #include <linux/kernel.h> 10 10 #include <linux/module.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/slab.h> 12 13 #include <linux/err.h> 13 14 #include <linux/clk.h>
+1
drivers/char/hw_random/exynos-trng.c
··· 19 19 #include <linux/iopoll.h> 20 20 #include <linux/kernel.h> 21 21 #include <linux/module.h> 22 + #include <linux/mod_devicetable.h> 22 23 #include <linux/platform_device.h> 23 24 #include <linux/pm_runtime.h> 24 25
+1
drivers/char/hw_random/imx-rngc.c
··· 13 13 */ 14 14 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/init.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/clk.h>
+1
drivers/char/hw_random/powernv-rng.c
··· 10 10 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 11 11 12 12 #include <linux/module.h> 13 + #include <linux/mod_devicetable.h> 13 14 #include <linux/kernel.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/random.h>
+1
drivers/char/mem.c
··· 766 766 switch (orig) { 767 767 case SEEK_CUR: 768 768 offset += file->f_pos; 769 + /* fall through */ 769 770 case SEEK_SET: 770 771 /* to avoid userland mistaking f_pos=-9 as -EBADF=-9 */ 771 772 if ((unsigned long long)offset >= -MAX_ERRNO) {
-4
drivers/char/pcmcia/cm4000_cs.c
··· 1748 1748 1749 1749 static int cm4000_config(struct pcmcia_device * link, int devno) 1750 1750 { 1751 - struct cm4000_dev *dev; 1752 - 1753 1751 link->config_flags |= CONF_AUTO_SET_IO; 1754 1752 1755 1753 /* read the config-tuples */ ··· 1756 1758 1757 1759 if (pcmcia_enable_device(link)) 1758 1760 goto cs_release; 1759 - 1760 - dev = link->priv; 1761 1761 1762 1762 return 0; 1763 1763
+21 -37
drivers/char/virtio_console.c
··· 1309 1309 .attrs = port_sysfs_entries, 1310 1310 }; 1311 1311 1312 - static ssize_t debugfs_read(struct file *filp, char __user *ubuf, 1313 - size_t count, loff_t *offp) 1312 + static int debugfs_show(struct seq_file *s, void *data) 1314 1313 { 1315 - struct port *port; 1316 - char *buf; 1317 - ssize_t ret, out_offset, out_count; 1314 + struct port *port = s->private; 1318 1315 1319 - out_count = 1024; 1320 - buf = kmalloc(out_count, GFP_KERNEL); 1321 - if (!buf) 1322 - return -ENOMEM; 1316 + seq_printf(s, "name: %s\n", port->name ? port->name : ""); 1317 + seq_printf(s, "guest_connected: %d\n", port->guest_connected); 1318 + seq_printf(s, "host_connected: %d\n", port->host_connected); 1319 + seq_printf(s, "outvq_full: %d\n", port->outvq_full); 1320 + seq_printf(s, "bytes_sent: %lu\n", port->stats.bytes_sent); 1321 + seq_printf(s, "bytes_received: %lu\n", port->stats.bytes_received); 1322 + seq_printf(s, "bytes_discarded: %lu\n", port->stats.bytes_discarded); 1323 + seq_printf(s, "is_console: %s\n", 1324 + is_console_port(port) ? "yes" : "no"); 1325 + seq_printf(s, "console_vtermno: %u\n", port->cons.vtermno); 1323 1326 1324 - port = filp->private_data; 1325 - out_offset = 0; 1326 - out_offset += snprintf(buf + out_offset, out_count, 1327 - "name: %s\n", port->name ? 
port->name : ""); 1328 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1329 - "guest_connected: %d\n", port->guest_connected); 1330 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1331 - "host_connected: %d\n", port->host_connected); 1332 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1333 - "outvq_full: %d\n", port->outvq_full); 1334 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1335 - "bytes_sent: %lu\n", port->stats.bytes_sent); 1336 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1337 - "bytes_received: %lu\n", 1338 - port->stats.bytes_received); 1339 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1340 - "bytes_discarded: %lu\n", 1341 - port->stats.bytes_discarded); 1342 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1343 - "is_console: %s\n", 1344 - is_console_port(port) ? "yes" : "no"); 1345 - out_offset += snprintf(buf + out_offset, out_count - out_offset, 1346 - "console_vtermno: %u\n", port->cons.vtermno); 1327 + return 0; 1328 + } 1347 1329 1348 - ret = simple_read_from_buffer(ubuf, count, offp, buf, out_offset); 1349 - kfree(buf); 1350 - return ret; 1330 + static int debugfs_open(struct inode *inode, struct file *file) 1331 + { 1332 + return single_open(file, debugfs_show, inode->i_private); 1351 1333 } 1352 1334 1353 1335 static const struct file_operations port_debugfs_ops = { 1354 1336 .owner = THIS_MODULE, 1355 - .open = simple_open, 1356 - .read = debugfs_read, 1337 + .open = debugfs_open, 1338 + .read = seq_read, 1339 + .llseek = seq_lseek, 1340 + .release = single_release, 1357 1341 }; 1358 1342 1359 1343 static void set_console_size(struct port *port, u16 rows, u16 cols)
+1
drivers/crypto/mediatek/mtk-platform.c
··· 13 13 #include <linux/init.h> 14 14 #include <linux/kernel.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/platform_device.h> 17 18 #include <linux/pm_runtime.h> 18 19 #include "mtk-platform.h"
+1
drivers/crypto/qce/core.c
··· 14 14 #include <linux/clk.h> 15 15 #include <linux/interrupt.h> 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/platform_device.h> 18 19 #include <linux/spinlock.h> 19 20 #include <linux/types.h>
+1
drivers/crypto/stm32/stm32_crc32.c
··· 8 8 #include <linux/clk.h> 9 9 #include <linux/crc32poly.h> 10 10 #include <linux/module.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/platform_device.h> 12 13 #include <linux/pm_runtime.h> 13 14
+1
drivers/crypto/ux500/cryp/cryp_core.c
··· 20 20 #include <linux/irqreturn.h> 21 21 #include <linux/klist.h> 22 22 #include <linux/module.h> 23 + #include <linux/mod_devicetable.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/regulator/consumer.h> 25 26 #include <linux/semaphore.h>
+1
drivers/crypto/ux500/hash/hash_core.c
··· 21 21 #include <linux/klist.h> 22 22 #include <linux/kernel.h> 23 23 #include <linux/module.h> 24 + #include <linux/mod_devicetable.h> 24 25 #include <linux/platform_device.h> 25 26 #include <linux/crypto.h> 26 27
+1
drivers/devfreq/tegra-devfreq.c
··· 24 24 #include <linux/interrupt.h> 25 25 #include <linux/io.h> 26 26 #include <linux/module.h> 27 + #include <linux/mod_devicetable.h> 27 28 #include <linux/platform_device.h> 28 29 #include <linux/pm_opp.h> 29 30 #include <linux/reset.h>
+1
drivers/dma/ep93xx_dma.c
··· 23 23 #include <linux/interrupt.h> 24 24 #include <linux/dmaengine.h> 25 25 #include <linux/module.h> 26 + #include <linux/mod_devicetable.h> 26 27 #include <linux/platform_device.h> 27 28 #include <linux/slab.h> 28 29
+1
drivers/dma/s3c24xx-dma.c
··· 35 35 #include <linux/interrupt.h> 36 36 #include <linux/clk.h> 37 37 #include <linux/module.h> 38 + #include <linux/mod_devicetable.h> 38 39 #include <linux/slab.h> 39 40 #include <linux/platform_data/dma-s3c24xx.h> 40 41
+1
drivers/extcon/extcon-intel-cht-wc.c
··· 20 20 #include <linux/kernel.h> 21 21 #include <linux/mfd/intel_soc_pmic.h> 22 22 #include <linux/module.h> 23 + #include <linux/mod_devicetable.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/regmap.h> 25 26 #include <linux/slab.h>
+1 -1
drivers/extcon/extcon-intel-int3496.c
··· 20 20 21 21 #include <linux/acpi.h> 22 22 #include <linux/extcon-provider.h> 23 - #include <linux/gpio.h> 23 + #include <linux/gpio/consumer.h> 24 24 #include <linux/interrupt.h> 25 25 #include <linux/module.h> 26 26 #include <linux/platform_device.h>
+1
drivers/extcon/extcon-max3355.c
··· 14 14 #include <linux/gpio/consumer.h> 15 15 #include <linux/interrupt.h> 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/platform_device.h> 18 19 19 20 struct max3355_data {
+1
drivers/extcon/extcon-qcom-spmi-misc.c
··· 20 20 #include <linux/interrupt.h> 21 21 #include <linux/kernel.h> 22 22 #include <linux/module.h> 23 + #include <linux/mod_devicetable.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/slab.h> 25 26 #include <linux/workqueue.h>
+6 -16
drivers/extcon/extcon-usbc-cros-ec.c
··· 1 - /** 2 - * drivers/extcon/extcon-usbc-cros-ec - ChromeOS Embedded Controller extcon 3 - * 4 - * Copyright (C) 2017 Google, Inc 5 - * Author: Benson Leung <bleung@chromium.org> 6 - * 7 - * This software is licensed under the terms of the GNU General Public 8 - * License version 2, as published by the Free Software Foundation, and 9 - * may be copied, distributed, and modified under those terms. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 - */ 1 + // SPDX-License-Identifier: GPL-2.0 2 + // ChromeOS Embedded Controller extcon 3 + // 4 + // Copyright (C) 2017 Google, Inc. 5 + // Author: Benson Leung <bleung@chromium.org> 16 6 17 7 #include <linux/extcon-provider.h> 18 8 #include <linux/kernel.h> ··· 538 548 539 549 MODULE_DESCRIPTION("ChromeOS Embedded Controller extcon driver"); 540 550 MODULE_AUTHOR("Benson Leung <bleung@chromium.org>"); 541 - MODULE_LICENSE("GPL"); 551 + MODULE_LICENSE("GPL v2");
+2 -1
drivers/extcon/extcon.c
··· 433 433 return index; 434 434 435 435 spin_lock_irqsave(&edev->lock, flags); 436 - 437 436 state = !!(edev->state & BIT(index)); 437 + spin_unlock_irqrestore(&edev->lock, flags); 438 438 439 439 /* 440 440 * Call functions in a raw notifier chain for the specific one ··· 448 448 */ 449 449 raw_notifier_call_chain(&edev->nh_all, state, edev); 450 450 451 + spin_lock_irqsave(&edev->lock, flags); 451 452 /* This could be in interrupt handler */ 452 453 prop_buf = (char *)get_zeroed_page(GFP_ATOMIC); 453 454 if (!prop_buf) {
+4 -1
drivers/firmware/google/vpd.c
··· 246 246 sysfs_remove_bin_file(vpd_kobj, &sec->bin_attr); 247 247 kfree(sec->raw_name); 248 248 memunmap(sec->baseaddr); 249 + sec->enabled = false; 249 250 } 250 251 251 252 return 0; ··· 280 279 ret = vpd_section_init("rw", &rw_vpd, 281 280 physaddr + sizeof(struct vpd_cbmem) + 282 281 header.ro_size, header.rw_size); 283 - if (ret) 282 + if (ret) { 283 + vpd_section_destroy(&ro_vpd); 284 284 return ret; 285 + } 285 286 } 286 287 287 288 return 0;
+68
drivers/fpga/Kconfig
··· 130 130 Support for loading FPGA images by applying a Device Tree 131 131 overlay. 132 132 133 + config FPGA_DFL 134 + tristate "FPGA Device Feature List (DFL) support" 135 + select FPGA_BRIDGE 136 + select FPGA_REGION 137 + help 138 + Device Feature List (DFL) defines a feature list structure that 139 + creates a linked list of feature headers within the MMIO space 140 + to provide an extensible way of adding features for FPGA. 141 + Driver can walk through the feature headers to enumerate feature 142 + devices (e.g. FPGA Management Engine, Port and Accelerator 143 + Function Unit) and their private features for target FPGA devices. 144 + 145 + Select this option to enable common support for Field-Programmable 146 + Gate Array (FPGA) solutions which implement Device Feature List. 147 + It provides enumeration APIs and feature device infrastructure. 148 + 149 + config FPGA_DFL_FME 150 + tristate "FPGA DFL FME Driver" 151 + depends on FPGA_DFL 152 + help 153 + The FPGA Management Engine (FME) is a feature device implemented 154 + under Device Feature List (DFL) framework. Select this option to 155 + enable the platform device driver for FME which implements all 156 + FPGA platform level management features. There shall be one FME 157 + per DFL based FPGA device. 158 + 159 + config FPGA_DFL_FME_MGR 160 + tristate "FPGA DFL FME Manager Driver" 161 + depends on FPGA_DFL_FME && HAS_IOMEM 162 + help 163 + Say Y to enable FPGA Manager driver for FPGA Management Engine. 164 + 165 + config FPGA_DFL_FME_BRIDGE 166 + tristate "FPGA DFL FME Bridge Driver" 167 + depends on FPGA_DFL_FME && HAS_IOMEM 168 + help 169 + Say Y to enable FPGA Bridge driver for FPGA Management Engine. 170 + 171 + config FPGA_DFL_FME_REGION 172 + tristate "FPGA DFL FME Region Driver" 173 + depends on FPGA_DFL_FME && HAS_IOMEM 174 + help 175 + Say Y to enable FPGA Region driver for FPGA Management Engine. 
176 + 177 + config FPGA_DFL_AFU 178 + tristate "FPGA DFL AFU Driver" 179 + depends on FPGA_DFL 180 + help 181 + This is the driver for FPGA Accelerated Function Unit (AFU) which 182 + implements AFU and Port management features. A User AFU connects 183 + to the FPGA infrastructure via a Port. There may be more than one 184 + Port/AFU per DFL based FPGA device. 185 + 186 + config FPGA_DFL_PCI 187 + tristate "FPGA DFL PCIe Device Driver" 188 + depends on PCI && FPGA_DFL 189 + help 190 + Select this option to enable PCIe driver for PCIe-based 191 + Field-Programmable Gate Array (FPGA) solutions which implement 192 + the Device Feature List (DFL). This driver provides interfaces 193 + for userspace applications to configure, enumerate, open and access 194 + FPGA accelerators on the FPGA DFL devices, enables system level 195 + management functions such as FPGA partial reconfiguration, power 196 + management and virtualization with DFL framework and DFL feature 197 + device drivers. 198 + 199 + To compile this as a module, choose M here. 200 + 133 201 endif # FPGA
+14
drivers/fpga/Makefile
··· 28 28 # High Level Interfaces 29 29 obj-$(CONFIG_FPGA_REGION) += fpga-region.o 30 30 obj-$(CONFIG_OF_FPGA_REGION) += of-fpga-region.o 31 + 32 + # FPGA Device Feature List Support 33 + obj-$(CONFIG_FPGA_DFL) += dfl.o 34 + obj-$(CONFIG_FPGA_DFL_FME) += dfl-fme.o 35 + obj-$(CONFIG_FPGA_DFL_FME_MGR) += dfl-fme-mgr.o 36 + obj-$(CONFIG_FPGA_DFL_FME_BRIDGE) += dfl-fme-br.o 37 + obj-$(CONFIG_FPGA_DFL_FME_REGION) += dfl-fme-region.o 38 + obj-$(CONFIG_FPGA_DFL_AFU) += dfl-afu.o 39 + 40 + dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o 41 + dfl-afu-objs := dfl-afu-main.o dfl-afu-region.o dfl-afu-dma-region.o 42 + 43 + # Drivers for FPGAs which implement DFL 44 + obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
+463
drivers/fpga/dfl-afu-dma-region.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for FPGA Accelerated Function Unit (AFU) DMA Region Management 4 + * 5 + * Copyright (C) 2017-2018 Intel Corporation, Inc. 6 + * 7 + * Authors: 8 + * Wu Hao <hao.wu@intel.com> 9 + * Xiao Guangrong <guangrong.xiao@linux.intel.com> 10 + */ 11 + 12 + #include <linux/dma-mapping.h> 13 + #include <linux/sched/signal.h> 14 + #include <linux/uaccess.h> 15 + 16 + #include "dfl-afu.h" 17 + 18 + static void put_all_pages(struct page **pages, int npages) 19 + { 20 + int i; 21 + 22 + for (i = 0; i < npages; i++) 23 + if (pages[i]) 24 + put_page(pages[i]); 25 + } 26 + 27 + void afu_dma_region_init(struct dfl_feature_platform_data *pdata) 28 + { 29 + struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata); 30 + 31 + afu->dma_regions = RB_ROOT; 32 + } 33 + 34 + /** 35 + * afu_dma_adjust_locked_vm - adjust locked memory 36 + * @dev: port device 37 + * @npages: number of pages 38 + * @incr: increase or decrease locked memory 39 + * 40 + * Increase or decrease the locked memory size with npages input. 41 + * 42 + * Return 0 on success. 43 + * Return -ENOMEM if locked memory size is over the limit and no CAP_IPC_LOCK. 44 + */ 45 + static int afu_dma_adjust_locked_vm(struct device *dev, long npages, bool incr) 46 + { 47 + unsigned long locked, lock_limit; 48 + int ret = 0; 49 + 50 + /* the task is exiting. */ 51 + if (!current->mm) 52 + return 0; 53 + 54 + down_write(&current->mm->mmap_sem); 55 + 56 + if (incr) { 57 + locked = current->mm->locked_vm + npages; 58 + lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT; 59 + 60 + if (locked > lock_limit && !capable(CAP_IPC_LOCK)) 61 + ret = -ENOMEM; 62 + else 63 + current->mm->locked_vm += npages; 64 + } else { 65 + if (WARN_ON_ONCE(npages > current->mm->locked_vm)) 66 + npages = current->mm->locked_vm; 67 + current->mm->locked_vm -= npages; 68 + } 69 + 70 + dev_dbg(dev, "[%d] RLIMIT_MEMLOCK %c%ld %ld/%ld%s\n", current->pid, 71 + incr ? 
'+' : '-', npages << PAGE_SHIFT, 72 + current->mm->locked_vm << PAGE_SHIFT, rlimit(RLIMIT_MEMLOCK), 73 + ret ? "- exceeded" : ""); 74 + 75 + up_write(&current->mm->mmap_sem); 76 + 77 + return ret; 78 + } 79 + 80 + /** 81 + * afu_dma_pin_pages - pin pages of given dma memory region 82 + * @pdata: feature device platform data 83 + * @region: dma memory region to be pinned 84 + * 85 + * Pin all the pages of given dfl_afu_dma_region. 86 + * Return 0 for success or negative error code. 87 + */ 88 + static int afu_dma_pin_pages(struct dfl_feature_platform_data *pdata, 89 + struct dfl_afu_dma_region *region) 90 + { 91 + int npages = region->length >> PAGE_SHIFT; 92 + struct device *dev = &pdata->dev->dev; 93 + int ret, pinned; 94 + 95 + ret = afu_dma_adjust_locked_vm(dev, npages, true); 96 + if (ret) 97 + return ret; 98 + 99 + region->pages = kcalloc(npages, sizeof(struct page *), GFP_KERNEL); 100 + if (!region->pages) { 101 + ret = -ENOMEM; 102 + goto unlock_vm; 103 + } 104 + 105 + pinned = get_user_pages_fast(region->user_addr, npages, 1, 106 + region->pages); 107 + if (pinned < 0) { 108 + ret = pinned; 109 + goto put_pages; 110 + } else if (pinned != npages) { 111 + ret = -EFAULT; 112 + goto free_pages; 113 + } 114 + 115 + dev_dbg(dev, "%d pages pinned\n", pinned); 116 + 117 + return 0; 118 + 119 + put_pages: 120 + put_all_pages(region->pages, pinned); 121 + free_pages: 122 + kfree(region->pages); 123 + unlock_vm: 124 + afu_dma_adjust_locked_vm(dev, npages, false); 125 + return ret; 126 + } 127 + 128 + /** 129 + * afu_dma_unpin_pages - unpin pages of given dma memory region 130 + * @pdata: feature device platform data 131 + * @region: dma memory region to be unpinned 132 + * 133 + * Unpin all the pages of given dfl_afu_dma_region. 134 + *
135 + */ 136 + static void afu_dma_unpin_pages(struct dfl_feature_platform_data *pdata, 137 + struct dfl_afu_dma_region *region) 138 + { 139 + long npages = region->length >> PAGE_SHIFT; 140 + struct device *dev = &pdata->dev->dev; 141 + 142 + put_all_pages(region->pages, npages); 143 + kfree(region->pages); 144 + afu_dma_adjust_locked_vm(dev, npages, false); 145 + 146 + dev_dbg(dev, "%ld pages unpinned\n", npages); 147 + } 148 + 149 + /** 150 + * afu_dma_check_continuous_pages - check if pages are continuous 151 + * @region: dma memory region 152 + * 153 + * Return true if pages of given dma memory region have continuous physical 154 + * address, otherwise return false. 155 + */ 156 + static bool afu_dma_check_continuous_pages(struct dfl_afu_dma_region *region) 157 + { 158 + int npages = region->length >> PAGE_SHIFT; 159 + int i; 160 + 161 + for (i = 0; i < npages - 1; i++) 162 + if (page_to_pfn(region->pages[i]) + 1 != 163 + page_to_pfn(region->pages[i + 1])) 164 + return false; 165 + 166 + return true; 167 + } 168 + 169 + /** 170 + * dma_region_check_iova - check if memory area is fully contained in the region 171 + * @region: dma memory region 172 + * @iova: address of the dma memory area 173 + * @size: size of the dma memory area 174 + * 175 + * Compare the dma memory area defined by @iova and @size with given dma region. 176 + * Return true if memory area is fully contained in the region, otherwise false. 177 + */ 178 + static bool dma_region_check_iova(struct dfl_afu_dma_region *region, 179 + u64 iova, u64 size) 180 + { 181 + if (!size && region->iova != iova) 182 + return false; 183 + 184 + return (region->iova <= iova) && 185 + (region->length + region->iova >= iova + size); 186 + } 187 + 188 + /** 189 + * afu_dma_region_add - add given dma region to rbtree 190 + * @pdata: feature device platform data 191 + * @region: dma region to be added 192 + * 193 + * Return 0 for success, -EEXIST if dma region has already been added. 
194 + * 195 + * Needs to be called with pdata->lock held. 196 + */ 197 + static int afu_dma_region_add(struct dfl_feature_platform_data *pdata, 198 + struct dfl_afu_dma_region *region) 199 + { 200 + struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata); 201 + struct rb_node **new, *parent = NULL; 202 + 203 + dev_dbg(&pdata->dev->dev, "add region (iova = %llx)\n", 204 + (unsigned long long)region->iova); 205 + 206 + new = &afu->dma_regions.rb_node; 207 + 208 + while (*new) { 209 + struct dfl_afu_dma_region *this; 210 + 211 + this = container_of(*new, struct dfl_afu_dma_region, node); 212 + 213 + parent = *new; 214 + 215 + if (dma_region_check_iova(this, region->iova, region->length)) 216 + return -EEXIST; 217 + 218 + if (region->iova < this->iova) 219 + new = &((*new)->rb_left); 220 + else if (region->iova > this->iova) 221 + new = &((*new)->rb_right); 222 + else 223 + return -EEXIST; 224 + } 225 + 226 + rb_link_node(&region->node, parent, new); 227 + rb_insert_color(&region->node, &afu->dma_regions); 228 + 229 + return 0; 230 + } 231 + 232 + /** 233 + * afu_dma_region_remove - remove given dma region from rbtree 234 + * @pdata: feature device platform data 235 + * @region: dma region to be removed 236 + * 237 + * Needs to be called with pdata->lock held. 238 + */ 239 + static void afu_dma_region_remove(struct dfl_feature_platform_data *pdata, 240 + struct dfl_afu_dma_region *region) 241 + { 242 + struct dfl_afu *afu; 243 + 244 + dev_dbg(&pdata->dev->dev, "del region (iova = %llx)\n", 245 + (unsigned long long)region->iova); 246 + 247 + afu = dfl_fpga_pdata_get_private(pdata); 248 + rb_erase(&region->node, &afu->dma_regions); 249 + } 250 + 251 + /** 252 + * afu_dma_region_destroy - destroy all regions in rbtree 253 + * @pdata: feature device platform data 254 + * 255 + * Needs to be called with pdata->lock held. 
256 + */ 257 + void afu_dma_region_destroy(struct dfl_feature_platform_data *pdata) 258 + { 259 + struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata); 260 + struct rb_node *node = rb_first(&afu->dma_regions); 261 + struct dfl_afu_dma_region *region; 262 + 263 + while (node) { 264 + region = container_of(node, struct dfl_afu_dma_region, node); 265 + 266 + dev_dbg(&pdata->dev->dev, "del region (iova = %llx)\n", 267 + (unsigned long long)region->iova); 268 + 269 + rb_erase(node, &afu->dma_regions); 270 + 271 + if (region->iova) 272 + dma_unmap_page(dfl_fpga_pdata_to_parent(pdata), 273 + region->iova, region->length, 274 + DMA_BIDIRECTIONAL); 275 + 276 + if (region->pages) 277 + afu_dma_unpin_pages(pdata, region); 278 + 279 + node = rb_next(node); 280 + kfree(region); 281 + } 282 + } 283 + 284 + /** 285 + * afu_dma_region_find - find the dma region from rbtree based on iova and size 286 + * @pdata: feature device platform data 287 + * @iova: address of the dma memory area 288 + * @size: size of the dma memory area 289 + * 290 + * It finds the dma region from the rbtree based on @iova and @size: 291 + * - if @size == 0, it finds the dma region which starts from @iova 292 + * - otherwise, it finds the dma region which fully contains 293 + * [@iova, @iova+size) 294 + * If nothing is matched returns NULL. 295 + * 296 + * Needs to be called with pdata->lock held. 
297 + */ 298 + struct dfl_afu_dma_region * 299 + afu_dma_region_find(struct dfl_feature_platform_data *pdata, u64 iova, u64 size) 300 + { 301 + struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata); 302 + struct rb_node *node = afu->dma_regions.rb_node; 303 + struct device *dev = &pdata->dev->dev; 304 + 305 + while (node) { 306 + struct dfl_afu_dma_region *region; 307 + 308 + region = container_of(node, struct dfl_afu_dma_region, node); 309 + 310 + if (dma_region_check_iova(region, iova, size)) { 311 + dev_dbg(dev, "find region (iova = %llx)\n", 312 + (unsigned long long)region->iova); 313 + return region; 314 + } 315 + 316 + if (iova < region->iova) 317 + node = node->rb_left; 318 + else if (iova > region->iova) 319 + node = node->rb_right; 320 + else 321 + /* the iova region is not fully covered. */ 322 + break; 323 + } 324 + 325 + dev_dbg(dev, "region with iova %llx and size %llx is not found\n", 326 + (unsigned long long)iova, (unsigned long long)size); 327 + 328 + return NULL; 329 + } 330 + 331 + /** 332 + * afu_dma_region_find_iova - find the dma region from rbtree by iova 333 + * @pdata: feature device platform data 334 + * @iova: address of the dma region 335 + * 336 + * Needs to be called with pdata->lock held. 337 + */ 338 + static struct dfl_afu_dma_region * 339 + afu_dma_region_find_iova(struct dfl_feature_platform_data *pdata, u64 iova) 340 + { 341 + return afu_dma_region_find(pdata, iova, 0); 342 + } 343 + 344 + /** 345 + * afu_dma_map_region - map memory region for dma 346 + * @pdata: feature device platform data 347 + * @user_addr: address of the memory region 348 + * @length: size of the memory region 349 + * @iova: pointer of iova address 350 + * 351 + * Map memory region defined by @user_addr and @length, and return dma address 352 + * of the memory region via @iova. 353 + * Return 0 for success, otherwise error code. 
354 + */ 355 + int afu_dma_map_region(struct dfl_feature_platform_data *pdata, 356 + u64 user_addr, u64 length, u64 *iova) 357 + { 358 + struct dfl_afu_dma_region *region; 359 + int ret; 360 + 361 + /* 362 + * Check inputs; only accept page-aligned user memory region with 363 + * valid length. 364 + */ 365 + if (!PAGE_ALIGNED(user_addr) || !PAGE_ALIGNED(length) || !length) 366 + return -EINVAL; 367 + 368 + /* Check overflow */ 369 + if (user_addr + length < user_addr) 370 + return -EINVAL; 371 + 372 + if (!access_ok(VERIFY_WRITE, (void __user *)(unsigned long)user_addr, 373 + length)) 374 + return -EINVAL; 375 + 376 + region = kzalloc(sizeof(*region), GFP_KERNEL); 377 + if (!region) 378 + return -ENOMEM; 379 + 380 + region->user_addr = user_addr; 381 + region->length = length; 382 + 383 + /* Pin the user memory region */ 384 + ret = afu_dma_pin_pages(pdata, region); 385 + if (ret) { 386 + dev_err(&pdata->dev->dev, "failed to pin memory region\n"); 387 + goto free_region; 388 + } 389 + 390 + /* Only accept continuous pages; otherwise return an error */ 391 + if (!afu_dma_check_continuous_pages(region)) { 392 + dev_err(&pdata->dev->dev, "pages are not continuous\n"); 393 + ret = -EINVAL; 394 + goto unpin_pages; 395 + } 396 + 397 + /* Pages are continuous, so start the DMA mapping */ 398 + region->iova = dma_map_page(dfl_fpga_pdata_to_parent(pdata), 399 + region->pages[0], 0, 400 + region->length, 401 + DMA_BIDIRECTIONAL); 402 + if (dma_mapping_error(&pdata->dev->dev, region->iova)) { 403 + dev_err(&pdata->dev->dev, "failed to map for dma\n"); 404 + ret = -EFAULT; 405 + goto unpin_pages; 406 + } 407 + 408 + *iova = region->iova; 409 + 410 + mutex_lock(&pdata->lock); 411 + ret = afu_dma_region_add(pdata, region); 412 + mutex_unlock(&pdata->lock); 413 + if (ret) { 414 + dev_err(&pdata->dev->dev, "failed to add dma region\n"); 415 + goto unmap_dma; 416 + } 417 + 418 + return 0; 419 + 420 + unmap_dma: 421 + dma_unmap_page(dfl_fpga_pdata_to_parent(pdata), 422 + region->iova, 
region->length, DMA_BIDIRECTIONAL); 423 + unpin_pages: 424 + afu_dma_unpin_pages(pdata, region); 425 + free_region: 426 + kfree(region); 427 + return ret; 428 + } 429 + 430 + /** 431 + * afu_dma_unmap_region - unmap dma memory region 432 + * @pdata: feature device platform data 433 + * @iova: dma address of the region 434 + * 435 + * Unmap dma memory region based on @iova. 436 + * Return 0 for success, otherwise error code. 437 + */ 438 + int afu_dma_unmap_region(struct dfl_feature_platform_data *pdata, u64 iova) 439 + { 440 + struct dfl_afu_dma_region *region; 441 + 442 + mutex_lock(&pdata->lock); 443 + region = afu_dma_region_find_iova(pdata, iova); 444 + if (!region) { 445 + mutex_unlock(&pdata->lock); 446 + return -EINVAL; 447 + } 448 + 449 + if (region->in_use) { 450 + mutex_unlock(&pdata->lock); 451 + return -EBUSY; 452 + } 453 + 454 + afu_dma_region_remove(pdata, region); 455 + mutex_unlock(&pdata->lock); 456 + 457 + dma_unmap_page(dfl_fpga_pdata_to_parent(pdata), 458 + region->iova, region->length, DMA_BIDIRECTIONAL); 459 + afu_dma_unpin_pages(pdata, region); 460 + kfree(region); 461 + 462 + return 0; 463 + }
+636
drivers/fpga/dfl-afu-main.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for FPGA Accelerated Function Unit (AFU) 4 + * 5 + * Copyright (C) 2017-2018 Intel Corporation, Inc. 6 + * 7 + * Authors: 8 + * Wu Hao <hao.wu@intel.com> 9 + * Xiao Guangrong <guangrong.xiao@linux.intel.com> 10 + * Joseph Grecco <joe.grecco@intel.com> 11 + * Enno Luebbers <enno.luebbers@intel.com> 12 + * Tim Whisonant <tim.whisonant@intel.com> 13 + * Ananda Ravuri <ananda.ravuri@intel.com> 14 + * Henry Mitchel <henry.mitchel@intel.com> 15 + */ 16 + 17 + #include <linux/kernel.h> 18 + #include <linux/module.h> 19 + #include <linux/uaccess.h> 20 + #include <linux/fpga-dfl.h> 21 + 22 + #include "dfl-afu.h" 23 + 24 + /** 25 + * port_enable - enable a port 26 + * @pdev: port platform device. 27 + * 28 + * Enable Port by clearing the port soft reset bit, which is set by default. 29 + * The AFU is unable to respond to any MMIO access while in reset. 30 + * port_enable() should only be used after port_disable(). 31 + */ 32 + static void port_enable(struct platform_device *pdev) 33 + { 34 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 35 + void __iomem *base; 36 + u64 v; 37 + 38 + WARN_ON(!pdata->disable_count); 39 + 40 + if (--pdata->disable_count != 0) 41 + return; 42 + 43 + base = dfl_get_feature_ioaddr_by_id(&pdev->dev, PORT_FEATURE_ID_HEADER); 44 + 45 + /* Clear port soft reset */ 46 + v = readq(base + PORT_HDR_CTRL); 47 + v &= ~PORT_CTRL_SFTRST; 48 + writeq(v, base + PORT_HDR_CTRL); 49 + } 50 + 51 + #define RST_POLL_INVL 10 /* us */ 52 + #define RST_POLL_TIMEOUT 1000 /* us */ 53 + 54 + /** 55 + * port_disable - disable a port 56 + * @pdev: port platform device. 57 + * 58 + * Disable Port by setting the port soft reset bit, which puts the port into 59 + * reset. 
60 + */ 61 + static int port_disable(struct platform_device *pdev) 62 + { 63 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 64 + void __iomem *base; 65 + u64 v; 66 + 67 + if (pdata->disable_count++ != 0) 68 + return 0; 69 + 70 + base = dfl_get_feature_ioaddr_by_id(&pdev->dev, PORT_FEATURE_ID_HEADER); 71 + 72 + /* Set port soft reset */ 73 + v = readq(base + PORT_HDR_CTRL); 74 + v |= PORT_CTRL_SFTRST; 75 + writeq(v, base + PORT_HDR_CTRL); 76 + 77 + /* 78 + * HW sets ack bit to 1 when all outstanding requests have been drained 79 + * on this port and minimum soft reset pulse width has elapsed. 80 + * Driver polls port_soft_reset_ack to determine if the reset is done by HW. 81 + */ 82 + if (readq_poll_timeout(base + PORT_HDR_CTRL, v, v & PORT_CTRL_SFTRST, 83 + RST_POLL_INVL, RST_POLL_TIMEOUT)) { 84 + dev_err(&pdev->dev, "timeout, fail to reset device\n"); 85 + return -ETIMEDOUT; 86 + } 87 + 88 + return 0; 89 + } 90 + 91 + /* 92 + * This function resets the FPGA Port and its accelerator (AFU) by functions 93 + * port_disable() and port_enable() (set port soft reset bit and then clear 94 + * it). Userspace can do Port reset at any time, e.g. during DMA or Partial 95 + * Reconfiguration. But it should never cause any system level issue, only 96 + * functional failure (e.g. DMA or PR operation failure) and be recoverable 97 + * from the failure. 98 + * 99 + * Note: the accelerator (AFU) is not accessible when its port is in reset 100 + * (disabled). Any attempts on MMIO access to AFU while in reset, will 101 + * result in errors reported via port error reporting sub feature (if present). 
102 + */ 103 + static int __port_reset(struct platform_device *pdev) 104 + { 105 + int ret; 106 + 107 + ret = port_disable(pdev); 108 + if (!ret) 109 + port_enable(pdev); 110 + 111 + return ret; 112 + } 113 + 114 + static int port_reset(struct platform_device *pdev) 115 + { 116 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 117 + int ret; 118 + 119 + mutex_lock(&pdata->lock); 120 + ret = __port_reset(pdev); 121 + mutex_unlock(&pdata->lock); 122 + 123 + return ret; 124 + } 125 + 126 + static int port_get_id(struct platform_device *pdev) 127 + { 128 + void __iomem *base; 129 + 130 + base = dfl_get_feature_ioaddr_by_id(&pdev->dev, PORT_FEATURE_ID_HEADER); 131 + 132 + return FIELD_GET(PORT_CAP_PORT_NUM, readq(base + PORT_HDR_CAP)); 133 + } 134 + 135 + static ssize_t 136 + id_show(struct device *dev, struct device_attribute *attr, char *buf) 137 + { 138 + int id = port_get_id(to_platform_device(dev)); 139 + 140 + return scnprintf(buf, PAGE_SIZE, "%d\n", id); 141 + } 142 + static DEVICE_ATTR_RO(id); 143 + 144 + static const struct attribute *port_hdr_attrs[] = { 145 + &dev_attr_id.attr, 146 + NULL, 147 + }; 148 + 149 + static int port_hdr_init(struct platform_device *pdev, 150 + struct dfl_feature *feature) 151 + { 152 + dev_dbg(&pdev->dev, "PORT HDR Init.\n"); 153 + 154 + port_reset(pdev); 155 + 156 + return sysfs_create_files(&pdev->dev.kobj, port_hdr_attrs); 157 + } 158 + 159 + static void port_hdr_uinit(struct platform_device *pdev, 160 + struct dfl_feature *feature) 161 + { 162 + dev_dbg(&pdev->dev, "PORT HDR UInit.\n"); 163 + 164 + sysfs_remove_files(&pdev->dev.kobj, port_hdr_attrs); 165 + } 166 + 167 + static long 168 + port_hdr_ioctl(struct platform_device *pdev, struct dfl_feature *feature, 169 + unsigned int cmd, unsigned long arg) 170 + { 171 + long ret; 172 + 173 + switch (cmd) { 174 + case DFL_FPGA_PORT_RESET: 175 + if (!arg) 176 + ret = port_reset(pdev); 177 + else 178 + ret = -EINVAL; 179 + break; 180 + default: 181 + 
dev_dbg(&pdev->dev, "%x cmd not handled", cmd); 182 + ret = -ENODEV; 183 + } 184 + 185 + return ret; 186 + } 187 + 188 + static const struct dfl_feature_ops port_hdr_ops = { 189 + .init = port_hdr_init, 190 + .uinit = port_hdr_uinit, 191 + .ioctl = port_hdr_ioctl, 192 + }; 193 + 194 + static ssize_t 195 + afu_id_show(struct device *dev, struct device_attribute *attr, char *buf) 196 + { 197 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 198 + void __iomem *base; 199 + u64 guidl, guidh; 200 + 201 + base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_AFU); 202 + 203 + mutex_lock(&pdata->lock); 204 + if (pdata->disable_count) { 205 + mutex_unlock(&pdata->lock); 206 + return -EBUSY; 207 + } 208 + 209 + guidl = readq(base + GUID_L); 210 + guidh = readq(base + GUID_H); 211 + mutex_unlock(&pdata->lock); 212 + 213 + return scnprintf(buf, PAGE_SIZE, "%016llx%016llx\n", guidh, guidl); 214 + } 215 + static DEVICE_ATTR_RO(afu_id); 216 + 217 + static const struct attribute *port_afu_attrs[] = { 218 + &dev_attr_afu_id.attr, 219 + NULL 220 + }; 221 + 222 + static int port_afu_init(struct platform_device *pdev, 223 + struct dfl_feature *feature) 224 + { 225 + struct resource *res = &pdev->resource[feature->resource_index]; 226 + int ret; 227 + 228 + dev_dbg(&pdev->dev, "PORT AFU Init.\n"); 229 + 230 + ret = afu_mmio_region_add(dev_get_platdata(&pdev->dev), 231 + DFL_PORT_REGION_INDEX_AFU, resource_size(res), 232 + res->start, DFL_PORT_REGION_READ | 233 + DFL_PORT_REGION_WRITE | DFL_PORT_REGION_MMAP); 234 + if (ret) 235 + return ret; 236 + 237 + return sysfs_create_files(&pdev->dev.kobj, port_afu_attrs); 238 + } 239 + 240 + static void port_afu_uinit(struct platform_device *pdev, 241 + struct dfl_feature *feature) 242 + { 243 + dev_dbg(&pdev->dev, "PORT AFU UInit.\n"); 244 + 245 + sysfs_remove_files(&pdev->dev.kobj, port_afu_attrs); 246 + } 247 + 248 + static const struct dfl_feature_ops port_afu_ops = { 249 + .init = port_afu_init, 250 + .uinit = 
port_afu_uinit, 251 + }; 252 + 253 + static struct dfl_feature_driver port_feature_drvs[] = { 254 + { 255 + .id = PORT_FEATURE_ID_HEADER, 256 + .ops = &port_hdr_ops, 257 + }, 258 + { 259 + .id = PORT_FEATURE_ID_AFU, 260 + .ops = &port_afu_ops, 261 + }, 262 + { 263 + .ops = NULL, 264 + } 265 + }; 266 + 267 + static int afu_open(struct inode *inode, struct file *filp) 268 + { 269 + struct platform_device *fdev = dfl_fpga_inode_to_feature_dev(inode); 270 + struct dfl_feature_platform_data *pdata; 271 + int ret; 272 + 273 + pdata = dev_get_platdata(&fdev->dev); 274 + if (WARN_ON(!pdata)) 275 + return -ENODEV; 276 + 277 + ret = dfl_feature_dev_use_begin(pdata); 278 + if (ret) 279 + return ret; 280 + 281 + dev_dbg(&fdev->dev, "Device File Open\n"); 282 + filp->private_data = fdev; 283 + 284 + return 0; 285 + } 286 + 287 + static int afu_release(struct inode *inode, struct file *filp) 288 + { 289 + struct platform_device *pdev = filp->private_data; 290 + struct dfl_feature_platform_data *pdata; 291 + 292 + dev_dbg(&pdev->dev, "Device File Release\n"); 293 + 294 + pdata = dev_get_platdata(&pdev->dev); 295 + 296 + mutex_lock(&pdata->lock); 297 + __port_reset(pdev); 298 + afu_dma_region_destroy(pdata); 299 + mutex_unlock(&pdata->lock); 300 + 301 + dfl_feature_dev_use_end(pdata); 302 + 303 + return 0; 304 + } 305 + 306 + static long afu_ioctl_check_extension(struct dfl_feature_platform_data *pdata, 307 + unsigned long arg) 308 + { 309 + /* No extension support for now */ 310 + return 0; 311 + } 312 + 313 + static long 314 + afu_ioctl_get_info(struct dfl_feature_platform_data *pdata, void __user *arg) 315 + { 316 + struct dfl_fpga_port_info info; 317 + struct dfl_afu *afu; 318 + unsigned long minsz; 319 + 320 + minsz = offsetofend(struct dfl_fpga_port_info, num_umsgs); 321 + 322 + if (copy_from_user(&info, arg, minsz)) 323 + return -EFAULT; 324 + 325 + if (info.argsz < minsz) 326 + return -EINVAL; 327 + 328 + mutex_lock(&pdata->lock); 329 + afu = 
		dfl_fpga_pdata_get_private(pdata);
	info.flags = 0;
	info.num_regions = afu->num_regions;
	info.num_umsgs = afu->num_umsgs;
	mutex_unlock(&pdata->lock);

	if (copy_to_user(arg, &info, sizeof(info)))
		return -EFAULT;

	return 0;
}

static long afu_ioctl_get_region_info(struct dfl_feature_platform_data *pdata,
				      void __user *arg)
{
	struct dfl_fpga_port_region_info rinfo;
	struct dfl_afu_mmio_region region;
	unsigned long minsz;
	long ret;

	minsz = offsetofend(struct dfl_fpga_port_region_info, offset);

	if (copy_from_user(&rinfo, arg, minsz))
		return -EFAULT;

	if (rinfo.argsz < minsz || rinfo.padding)
		return -EINVAL;

	ret = afu_mmio_region_get_by_index(pdata, rinfo.index, &region);
	if (ret)
		return ret;

	rinfo.flags = region.flags;
	rinfo.size = region.size;
	rinfo.offset = region.offset;

	if (copy_to_user(arg, &rinfo, sizeof(rinfo)))
		return -EFAULT;

	return 0;
}

static long
afu_ioctl_dma_map(struct dfl_feature_platform_data *pdata, void __user *arg)
{
	struct dfl_fpga_port_dma_map map;
	unsigned long minsz;
	long ret;

	minsz = offsetofend(struct dfl_fpga_port_dma_map, iova);

	if (copy_from_user(&map, arg, minsz))
		return -EFAULT;

	if (map.argsz < minsz || map.flags)
		return -EINVAL;

	ret = afu_dma_map_region(pdata, map.user_addr, map.length, &map.iova);
	if (ret)
		return ret;

	if (copy_to_user(arg, &map, sizeof(map))) {
		afu_dma_unmap_region(pdata, map.iova);
		return -EFAULT;
	}

	dev_dbg(&pdata->dev->dev, "dma map: ua=%llx, len=%llx, iova=%llx\n",
		(unsigned long long)map.user_addr,
		(unsigned long long)map.length,
		(unsigned long long)map.iova);

	return 0;
}

static long
afu_ioctl_dma_unmap(struct dfl_feature_platform_data *pdata, void __user *arg)
{
	struct dfl_fpga_port_dma_unmap unmap;
	unsigned long minsz;

	minsz = offsetofend(struct dfl_fpga_port_dma_unmap, iova);

	if (copy_from_user(&unmap, arg, minsz))
		return -EFAULT;

	if (unmap.argsz < minsz || unmap.flags)
		return -EINVAL;

	return afu_dma_unmap_region(pdata, unmap.iova);
}

static long afu_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	struct platform_device *pdev = filp->private_data;
	struct dfl_feature_platform_data *pdata;
	struct dfl_feature *f;
	long ret;

	dev_dbg(&pdev->dev, "%s cmd 0x%x\n", __func__, cmd);

	pdata = dev_get_platdata(&pdev->dev);

	switch (cmd) {
	case DFL_FPGA_GET_API_VERSION:
		return DFL_FPGA_API_VERSION;
	case DFL_FPGA_CHECK_EXTENSION:
		return afu_ioctl_check_extension(pdata, arg);
	case DFL_FPGA_PORT_GET_INFO:
		return afu_ioctl_get_info(pdata, (void __user *)arg);
	case DFL_FPGA_PORT_GET_REGION_INFO:
		return afu_ioctl_get_region_info(pdata, (void __user *)arg);
	case DFL_FPGA_PORT_DMA_MAP:
		return afu_ioctl_dma_map(pdata, (void __user *)arg);
	case DFL_FPGA_PORT_DMA_UNMAP:
		return afu_ioctl_dma_unmap(pdata, (void __user *)arg);
	default:
		/*
		 * Let the sub-feature's ioctl function handle the cmd.
		 * A sub-feature's ioctl returns -ENODEV when the cmd is
		 * not handled by that sub-feature, and returns 0 or an
		 * error code if the cmd is handled.
		 */
		dfl_fpga_dev_for_each_feature(pdata, f)
			if (f->ops && f->ops->ioctl) {
				ret = f->ops->ioctl(pdev, f, cmd, arg);
				if (ret != -ENODEV)
					return ret;
			}
	}

	return -EINVAL;
}

static int afu_mmap(struct file *filp, struct vm_area_struct *vma)
{
	struct platform_device *pdev = filp->private_data;
	struct dfl_feature_platform_data *pdata;
	u64 size = vma->vm_end - vma->vm_start;
	struct dfl_afu_mmio_region region;
	u64 offset;
	int ret;

	if (!(vma->vm_flags & VM_SHARED))
		return -EINVAL;

	pdata = dev_get_platdata(&pdev->dev);

	offset = vma->vm_pgoff << PAGE_SHIFT;
	ret = afu_mmio_region_get_by_offset(pdata, offset, size, &region);
	if (ret)
		return ret;

	if (!(region.flags & DFL_PORT_REGION_MMAP))
		return -EINVAL;

	if ((vma->vm_flags & VM_READ) && !(region.flags & DFL_PORT_REGION_READ))
		return -EPERM;

	if ((vma->vm_flags & VM_WRITE) &&
	    !(region.flags & DFL_PORT_REGION_WRITE))
		return -EPERM;

	vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);

	return remap_pfn_range(vma, vma->vm_start,
			(region.phys + (offset - region.offset)) >> PAGE_SHIFT,
			size, vma->vm_page_prot);
}

static const struct file_operations afu_fops = {
	.owner = THIS_MODULE,
	.open = afu_open,
	.release = afu_release,
	.unlocked_ioctl = afu_ioctl,
	.mmap = afu_mmap,
};

static int afu_dev_init(struct platform_device *pdev)
{
	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
	struct dfl_afu *afu;

	afu = devm_kzalloc(&pdev->dev, sizeof(*afu), GFP_KERNEL);
	if (!afu)
		return -ENOMEM;

	afu->pdata = pdata;

	mutex_lock(&pdata->lock);
	dfl_fpga_pdata_set_private(pdata, afu);
	afu_mmio_region_init(pdata);
	afu_dma_region_init(pdata);
	mutex_unlock(&pdata->lock);

	return 0;
}

static int afu_dev_destroy(struct platform_device *pdev)
{
	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
	struct dfl_afu *afu;

	mutex_lock(&pdata->lock);
	afu = dfl_fpga_pdata_get_private(pdata);
	afu_mmio_region_destroy(pdata);
	afu_dma_region_destroy(pdata);
	dfl_fpga_pdata_set_private(pdata, NULL);
	mutex_unlock(&pdata->lock);

	return 0;
}

static int port_enable_set(struct platform_device *pdev, bool enable)
{
	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
	int ret = 0;

	mutex_lock(&pdata->lock);
	if (enable)
		port_enable(pdev);
	else
		ret = port_disable(pdev);
	mutex_unlock(&pdata->lock);

	return ret;
}

static struct dfl_fpga_port_ops afu_port_ops = {
	.name = DFL_FPGA_FEATURE_DEV_PORT,
	.owner = THIS_MODULE,
	.get_id = port_get_id,
	.enable_set = port_enable_set,
};

static int afu_probe(struct platform_device *pdev)
{
	int ret;

	dev_dbg(&pdev->dev, "%s\n", __func__);

	ret = afu_dev_init(pdev);
	if (ret)
		goto exit;

	ret = dfl_fpga_dev_feature_init(pdev, port_feature_drvs);
	if (ret)
		goto dev_destroy;

	ret = dfl_fpga_dev_ops_register(pdev, &afu_fops, THIS_MODULE);
	if (ret) {
		dfl_fpga_dev_feature_uinit(pdev);
		goto dev_destroy;
	}

	return 0;

dev_destroy:
	afu_dev_destroy(pdev);
exit:
	return ret;
}

static int afu_remove(struct platform_device *pdev)
{
	dev_dbg(&pdev->dev, "%s\n", __func__);

	dfl_fpga_dev_ops_unregister(pdev);
	dfl_fpga_dev_feature_uinit(pdev);
	afu_dev_destroy(pdev);

	return 0;
}

static struct platform_driver afu_driver = {
	.driver = {
		.name = DFL_FPGA_FEATURE_DEV_PORT,
	},
	.probe = afu_probe,
	.remove = afu_remove,
};

static int __init afu_init(void)
{
	int ret;

	dfl_fpga_port_ops_add(&afu_port_ops);

	ret = platform_driver_register(&afu_driver);
	if (ret)
		dfl_fpga_port_ops_del(&afu_port_ops);

	return ret;
}

static void __exit afu_exit(void)
{
	platform_driver_unregister(&afu_driver);

	dfl_fpga_port_ops_del(&afu_port_ops);
}

module_init(afu_init);
module_exit(afu_exit);

MODULE_DESCRIPTION("FPGA Accelerated Function Unit driver");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-port");
drivers/fpga/dfl-afu-region.c (+166)
// SPDX-License-Identifier: GPL-2.0
/*
 * Driver for FPGA Accelerated Function Unit (AFU) MMIO Region Management
 *
 * Copyright (C) 2017-2018 Intel Corporation, Inc.
 *
 * Authors:
 *   Wu Hao <hao.wu@intel.com>
 *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
 */
#include "dfl-afu.h"

/**
 * afu_mmio_region_init - init function for afu mmio region support
 * @pdata: afu platform device's pdata.
 */
void afu_mmio_region_init(struct dfl_feature_platform_data *pdata)
{
	struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);

	INIT_LIST_HEAD(&afu->regions);
}

#define for_each_region(region, afu) \
	list_for_each_entry((region), &(afu)->regions, node)

static struct dfl_afu_mmio_region *get_region_by_index(struct dfl_afu *afu,
						       u32 region_index)
{
	struct dfl_afu_mmio_region *region;

	for_each_region(region, afu)
		if (region->index == region_index)
			return region;

	return NULL;
}

/**
 * afu_mmio_region_add - add a mmio region to given feature dev.
 *
 * @pdata: afu platform device's pdata.
 * @region_index: region index.
 * @region_size: region size.
 * @phys: region's physical address.
 * @flags: region flags (access permission).
 *
 * Return: 0 on success, negative error code otherwise.
 */
int afu_mmio_region_add(struct dfl_feature_platform_data *pdata,
			u32 region_index, u64 region_size, u64 phys, u32 flags)
{
	struct dfl_afu_mmio_region *region;
	struct dfl_afu *afu;
	int ret = 0;

	region = devm_kzalloc(&pdata->dev->dev, sizeof(*region), GFP_KERNEL);
	if (!region)
		return -ENOMEM;

	region->index = region_index;
	region->size = region_size;
	region->phys = phys;
	region->flags = flags;

	mutex_lock(&pdata->lock);

	afu = dfl_fpga_pdata_get_private(pdata);

	/* check if @region_index already exists */
	if (get_region_by_index(afu, region_index)) {
		mutex_unlock(&pdata->lock);
		ret = -EEXIST;
		goto exit;
	}

	region_size = PAGE_ALIGN(region_size);
	region->offset = afu->region_cur_offset;
	list_add(&region->node, &afu->regions);

	afu->region_cur_offset += region_size;
	afu->num_regions++;
	mutex_unlock(&pdata->lock);

	return 0;

exit:
	devm_kfree(&pdata->dev->dev, region);
	return ret;
}

/**
 * afu_mmio_region_destroy - destroy all mmio regions under given feature dev.
 * @pdata: afu platform device's pdata.
 */
void afu_mmio_region_destroy(struct dfl_feature_platform_data *pdata)
{
	struct dfl_afu *afu = dfl_fpga_pdata_get_private(pdata);
	struct dfl_afu_mmio_region *tmp, *region;

	list_for_each_entry_safe(region, tmp, &afu->regions, node)
		devm_kfree(&pdata->dev->dev, region);
}

/**
 * afu_mmio_region_get_by_index - find an afu region by index.
 * @pdata: afu platform device's pdata.
 * @region_index: region index.
 * @pregion: ptr to region for result.
 *
 * Return: 0 on success, negative error code otherwise.
 */
int afu_mmio_region_get_by_index(struct dfl_feature_platform_data *pdata,
				 u32 region_index,
				 struct dfl_afu_mmio_region *pregion)
{
	struct dfl_afu_mmio_region *region;
	struct dfl_afu *afu;
	int ret = 0;

	mutex_lock(&pdata->lock);
	afu = dfl_fpga_pdata_get_private(pdata);
	region = get_region_by_index(afu, region_index);
	if (!region) {
		ret = -EINVAL;
		goto exit;
	}
	*pregion = *region;
exit:
	mutex_unlock(&pdata->lock);
	return ret;
}

/**
 * afu_mmio_region_get_by_offset - find an afu mmio region by offset and size
 *
 * @pdata: afu platform device's pdata.
 * @offset: region offset from start of the device fd.
 * @size: region size.
 * @pregion: ptr to region for result.
 *
 * Find the region which fully contains the region described by input
 * parameters (offset and size) from the feature dev's region linked list.
 *
 * Return: 0 on success, negative error code otherwise.
 */
int afu_mmio_region_get_by_offset(struct dfl_feature_platform_data *pdata,
				  u64 offset, u64 size,
				  struct dfl_afu_mmio_region *pregion)
{
	struct dfl_afu_mmio_region *region;
	struct dfl_afu *afu;
	int ret = 0;

	mutex_lock(&pdata->lock);
	afu = dfl_fpga_pdata_get_private(pdata);
	for_each_region(region, afu)
		if (region->offset <= offset &&
		    region->offset + region->size >= offset + size) {
			*pregion = *region;
			goto exit;
		}
	ret = -EINVAL;
exit:
	mutex_unlock(&pdata->lock);
	return ret;
}
drivers/fpga/dfl-afu.h (+100)
/* SPDX-License-Identifier: GPL-2.0 */
/*
 * Header file for FPGA Accelerated Function Unit (AFU) Driver
 *
 * Copyright (C) 2017-2018 Intel Corporation, Inc.
 *
 * Authors:
 *   Wu Hao <hao.wu@intel.com>
 *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
 *   Joseph Grecco <joe.grecco@intel.com>
 *   Enno Luebbers <enno.luebbers@intel.com>
 *   Tim Whisonant <tim.whisonant@intel.com>
 *   Ananda Ravuri <ananda.ravuri@intel.com>
 *   Henry Mitchel <henry.mitchel@intel.com>
 */

#ifndef __DFL_AFU_H
#define __DFL_AFU_H

#include <linux/mm.h>

#include "dfl.h"

/**
 * struct dfl_afu_mmio_region - afu mmio region data structure
 *
 * @index: region index.
 * @flags: region flags (access permission).
 * @size: region size.
 * @offset: region offset from start of the device fd.
 * @phys: region's physical address.
 * @node: node to add to afu feature dev's region list.
 */
struct dfl_afu_mmio_region {
	u32 index;
	u32 flags;
	u64 size;
	u64 offset;
	u64 phys;
	struct list_head node;
};

/**
 * struct dfl_afu_dma_region - afu DMA region data structure
 *
 * @user_addr: region userspace virtual address.
 * @length: region length.
 * @iova: region IO virtual address.
 * @pages: ptr to pages of this region.
 * @node: rb tree node.
 * @in_use: flag to indicate if this region is in use.
 */
struct dfl_afu_dma_region {
	u64 user_addr;
	u64 length;
	u64 iova;
	struct page **pages;
	struct rb_node node;
	bool in_use;
};

/**
 * struct dfl_afu - afu device data structure
 *
 * @region_cur_offset: current region offset from start of the device fd.
 * @num_regions: num of mmio regions.
 * @regions: the mmio region linked list of this afu feature device.
 * @dma_regions: root of dma regions rb tree.
 * @num_umsgs: num of umsgs.
 * @pdata: afu platform device's pdata.
 */
struct dfl_afu {
	u64 region_cur_offset;
	int num_regions;
	u8 num_umsgs;
	struct list_head regions;
	struct rb_root dma_regions;

	struct dfl_feature_platform_data *pdata;
};

void afu_mmio_region_init(struct dfl_feature_platform_data *pdata);
int afu_mmio_region_add(struct dfl_feature_platform_data *pdata,
			u32 region_index, u64 region_size, u64 phys, u32 flags);
void afu_mmio_region_destroy(struct dfl_feature_platform_data *pdata);
int afu_mmio_region_get_by_index(struct dfl_feature_platform_data *pdata,
				 u32 region_index,
				 struct dfl_afu_mmio_region *pregion);
int afu_mmio_region_get_by_offset(struct dfl_feature_platform_data *pdata,
				  u64 offset, u64 size,
				  struct dfl_afu_mmio_region *pregion);
void afu_dma_region_init(struct dfl_feature_platform_data *pdata);
void afu_dma_region_destroy(struct dfl_feature_platform_data *pdata);
int afu_dma_map_region(struct dfl_feature_platform_data *pdata,
		       u64 user_addr, u64 length, u64 *iova);
int afu_dma_unmap_region(struct dfl_feature_platform_data *pdata, u64 iova);
struct dfl_afu_dma_region *
afu_dma_region_find(struct dfl_feature_platform_data *pdata,
		    u64 iova, u64 size);
#endif /* __DFL_AFU_H */
drivers/fpga/dfl-fme-br.c (+114)
// SPDX-License-Identifier: GPL-2.0
/*
 * FPGA Bridge Driver for FPGA Management Engine (FME)
 *
 * Copyright (C) 2017-2018 Intel Corporation, Inc.
 *
 * Authors:
 *   Wu Hao <hao.wu@intel.com>
 *   Joseph Grecco <joe.grecco@intel.com>
 *   Enno Luebbers <enno.luebbers@intel.com>
 *   Tim Whisonant <tim.whisonant@intel.com>
 *   Ananda Ravuri <ananda.ravuri@intel.com>
 *   Henry Mitchel <henry.mitchel@intel.com>
 */

#include <linux/module.h>
#include <linux/fpga/fpga-bridge.h>

#include "dfl.h"
#include "dfl-fme-pr.h"

struct fme_br_priv {
	struct dfl_fme_br_pdata *pdata;
	struct dfl_fpga_port_ops *port_ops;
	struct platform_device *port_pdev;
};

static int fme_bridge_enable_set(struct fpga_bridge *bridge, bool enable)
{
	struct fme_br_priv *priv = bridge->priv;
	struct platform_device *port_pdev;
	struct dfl_fpga_port_ops *ops;

	if (!priv->port_pdev) {
		port_pdev = dfl_fpga_cdev_find_port(priv->pdata->cdev,
						    &priv->pdata->port_id,
						    dfl_fpga_check_port_id);
		if (!port_pdev)
			return -ENODEV;

		priv->port_pdev = port_pdev;
	}

	if (priv->port_pdev && !priv->port_ops) {
		ops = dfl_fpga_port_ops_get(priv->port_pdev);
		if (!ops || !ops->enable_set)
			return -ENOENT;

		priv->port_ops = ops;
	}

	return priv->port_ops->enable_set(priv->port_pdev, enable);
}

static const struct fpga_bridge_ops fme_bridge_ops = {
	.enable_set = fme_bridge_enable_set,
};

static int fme_br_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct fme_br_priv *priv;
	struct fpga_bridge *br;
	int ret;

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	priv->pdata = dev_get_platdata(dev);

	br = fpga_bridge_create(dev, "DFL FPGA FME Bridge",
				&fme_bridge_ops, priv);
	if (!br)
		return -ENOMEM;

	platform_set_drvdata(pdev, br);

	ret = fpga_bridge_register(br);
	if (ret)
		fpga_bridge_free(br);

	return ret;
}

static int fme_br_remove(struct platform_device *pdev)
{
	struct fpga_bridge *br = platform_get_drvdata(pdev);
	struct fme_br_priv *priv = br->priv;

	fpga_bridge_unregister(br);

	if (priv->port_pdev)
		put_device(&priv->port_pdev->dev);
	if (priv->port_ops)
		dfl_fpga_port_ops_put(priv->port_ops);

	return 0;
}

static struct platform_driver fme_br_driver = {
	.driver = {
		.name = DFL_FPGA_FME_BRIDGE,
	},
	.probe = fme_br_probe,
	.remove = fme_br_remove,
};

module_platform_driver(fme_br_driver);

MODULE_DESCRIPTION("FPGA Bridge for DFL FPGA Management Engine");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-fme-bridge");
drivers/fpga/dfl-fme-main.c (+279)
// SPDX-License-Identifier: GPL-2.0
/*
 * Driver for FPGA Management Engine (FME)
 *
 * Copyright (C) 2017-2018 Intel Corporation, Inc.
 *
 * Authors:
 *   Kang Luwei <luwei.kang@intel.com>
 *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
 *   Joseph Grecco <joe.grecco@intel.com>
 *   Enno Luebbers <enno.luebbers@intel.com>
 *   Tim Whisonant <tim.whisonant@intel.com>
 *   Ananda Ravuri <ananda.ravuri@intel.com>
 *   Henry Mitchel <henry.mitchel@intel.com>
 */

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/fpga-dfl.h>

#include "dfl.h"
#include "dfl-fme.h"

static ssize_t ports_num_show(struct device *dev,
			      struct device_attribute *attr, char *buf)
{
	void __iomem *base;
	u64 v;

	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);

	v = readq(base + FME_HDR_CAP);

	return scnprintf(buf, PAGE_SIZE, "%u\n",
			 (unsigned int)FIELD_GET(FME_CAP_NUM_PORTS, v));
}
static DEVICE_ATTR_RO(ports_num);

/*
 * Bitstream (static FPGA region) identifier number. It contains the
 * detailed version and other information of this static FPGA region.
 */
static ssize_t bitstream_id_show(struct device *dev,
				 struct device_attribute *attr, char *buf)
{
	void __iomem *base;
	u64 v;

	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);

	v = readq(base + FME_HDR_BITSTREAM_ID);

	return scnprintf(buf, PAGE_SIZE, "0x%llx\n", (unsigned long long)v);
}
static DEVICE_ATTR_RO(bitstream_id);

/*
 * Bitstream (static FPGA region) metadata. It contains the synthesis
 * date, seed and other information of this static FPGA region.
 */
static ssize_t bitstream_metadata_show(struct device *dev,
				       struct device_attribute *attr, char *buf)
{
	void __iomem *base;
	u64 v;

	base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER);

	v = readq(base + FME_HDR_BITSTREAM_MD);

	return scnprintf(buf, PAGE_SIZE, "0x%llx\n", (unsigned long long)v);
}
static DEVICE_ATTR_RO(bitstream_metadata);

static const struct attribute *fme_hdr_attrs[] = {
	&dev_attr_ports_num.attr,
	&dev_attr_bitstream_id.attr,
	&dev_attr_bitstream_metadata.attr,
	NULL,
};

static int fme_hdr_init(struct platform_device *pdev,
			struct dfl_feature *feature)
{
	void __iomem *base = feature->ioaddr;
	int ret;

	dev_dbg(&pdev->dev, "FME HDR Init.\n");
	dev_dbg(&pdev->dev, "FME cap %llx.\n",
		(unsigned long long)readq(base + FME_HDR_CAP));

	ret = sysfs_create_files(&pdev->dev.kobj, fme_hdr_attrs);
	if (ret)
		return ret;

	return 0;
}

static void fme_hdr_uinit(struct platform_device *pdev,
			  struct dfl_feature *feature)
{
	dev_dbg(&pdev->dev, "FME HDR UInit.\n");
	sysfs_remove_files(&pdev->dev.kobj, fme_hdr_attrs);
}

static const struct dfl_feature_ops fme_hdr_ops = {
	.init = fme_hdr_init,
	.uinit = fme_hdr_uinit,
};

static struct dfl_feature_driver fme_feature_drvs[] = {
	{
		.id = FME_FEATURE_ID_HEADER,
		.ops = &fme_hdr_ops,
	},
	{
		.id = FME_FEATURE_ID_PR_MGMT,
		.ops = &pr_mgmt_ops,
	},
	{
		.ops = NULL,
	},
};

static long fme_ioctl_check_extension(struct dfl_feature_platform_data *pdata,
				      unsigned long arg)
{
	/* No extension support for now */
	return 0;
}

static int fme_open(struct inode *inode, struct file *filp)
{
	struct platform_device *fdev = dfl_fpga_inode_to_feature_dev(inode);
	struct dfl_feature_platform_data *pdata = dev_get_platdata(&fdev->dev);
	int ret;

	if (WARN_ON(!pdata))
		return -ENODEV;

	ret = dfl_feature_dev_use_begin(pdata);
	if (ret)
		return ret;

	dev_dbg(&fdev->dev, "Device File Open\n");
	filp->private_data = pdata;

	return 0;
}

static int fme_release(struct inode *inode, struct file *filp)
{
	struct dfl_feature_platform_data *pdata = filp->private_data;
	struct platform_device *pdev = pdata->dev;

	dev_dbg(&pdev->dev, "Device File Release\n");
	dfl_feature_dev_use_end(pdata);

	return 0;
}

static long fme_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
	struct dfl_feature_platform_data *pdata = filp->private_data;
	struct platform_device *pdev = pdata->dev;
	struct dfl_feature *f;
	long ret;

	dev_dbg(&pdev->dev, "%s cmd 0x%x\n", __func__, cmd);

	switch (cmd) {
	case DFL_FPGA_GET_API_VERSION:
		return DFL_FPGA_API_VERSION;
	case DFL_FPGA_CHECK_EXTENSION:
		return fme_ioctl_check_extension(pdata, arg);
	default:
		/*
		 * Let the sub-feature's ioctl function handle the cmd.
		 * A sub-feature's ioctl returns -ENODEV when the cmd is
		 * not handled by that sub-feature, and returns 0 or an
		 * error code if the cmd is handled.
		 */
		dfl_fpga_dev_for_each_feature(pdata, f) {
			if (f->ops && f->ops->ioctl) {
				ret = f->ops->ioctl(pdev, f, cmd, arg);
				if (ret != -ENODEV)
					return ret;
			}
		}
	}

	return -EINVAL;
}

static int fme_dev_init(struct platform_device *pdev)
{
	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
	struct dfl_fme *fme;

	fme = devm_kzalloc(&pdev->dev, sizeof(*fme), GFP_KERNEL);
	if (!fme)
		return -ENOMEM;

	fme->pdata = pdata;

	mutex_lock(&pdata->lock);
	dfl_fpga_pdata_set_private(pdata, fme);
	mutex_unlock(&pdata->lock);

	return 0;
}

static void fme_dev_destroy(struct platform_device *pdev)
{
	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
	struct dfl_fme *fme;

	mutex_lock(&pdata->lock);
	fme = dfl_fpga_pdata_get_private(pdata);
	dfl_fpga_pdata_set_private(pdata, NULL);
	mutex_unlock(&pdata->lock);
}

static const struct file_operations fme_fops = {
	.owner = THIS_MODULE,
	.open = fme_open,
	.release = fme_release,
	.unlocked_ioctl = fme_ioctl,
};

static int fme_probe(struct platform_device *pdev)
{
	int ret;

	ret = fme_dev_init(pdev);
	if (ret)
		goto exit;

	ret = dfl_fpga_dev_feature_init(pdev, fme_feature_drvs);
	if (ret)
		goto dev_destroy;

	ret = dfl_fpga_dev_ops_register(pdev, &fme_fops, THIS_MODULE);
	if (ret)
		goto feature_uinit;

	return 0;

feature_uinit:
	dfl_fpga_dev_feature_uinit(pdev);
dev_destroy:
	fme_dev_destroy(pdev);
exit:
	return ret;
}

static int fme_remove(struct platform_device *pdev)
{
	dfl_fpga_dev_ops_unregister(pdev);
	dfl_fpga_dev_feature_uinit(pdev);
	fme_dev_destroy(pdev);

	return 0;
}

static struct platform_driver fme_driver = {
	.driver = {
		.name = DFL_FPGA_FEATURE_DEV_FME,
	},
	.probe = fme_probe,
	.remove = fme_remove,
};

module_platform_driver(fme_driver);

MODULE_DESCRIPTION("FPGA Management Engine driver");
MODULE_AUTHOR("Intel Corporation");
MODULE_LICENSE("GPL v2");
MODULE_ALIAS("platform:dfl-fme");
drivers/fpga/dfl-fme-mgr.c (+349)
// SPDX-License-Identifier: GPL-2.0
/*
 * FPGA Manager Driver for FPGA Management Engine (FME)
 *
 * Copyright (C) 2017-2018 Intel Corporation, Inc.
 *
 * Authors:
 *   Kang Luwei <luwei.kang@intel.com>
 *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
 *   Wu Hao <hao.wu@intel.com>
 *   Joseph Grecco <joe.grecco@intel.com>
 *   Enno Luebbers <enno.luebbers@intel.com>
 *   Tim Whisonant <tim.whisonant@intel.com>
 *   Ananda Ravuri <ananda.ravuri@intel.com>
 *   Christopher Rauer <christopher.rauer@intel.com>
 *   Henry Mitchel <henry.mitchel@intel.com>
 */

#include <linux/bitfield.h>
#include <linux/module.h>
#include <linux/iopoll.h>
#include <linux/io-64-nonatomic-lo-hi.h>
#include <linux/fpga/fpga-mgr.h>

#include "dfl-fme-pr.h"

/* FME Partial Reconfiguration Sub Feature Register Set */
#define FME_PR_DFH		0x0
#define FME_PR_CTRL		0x8
#define FME_PR_STS		0x10
#define FME_PR_DATA		0x18
#define FME_PR_ERR		0x20
#define FME_PR_INTFC_ID_H	0xA8
#define FME_PR_INTFC_ID_L	0xB0

/* FME PR Control Register Bitfield */
#define FME_PR_CTRL_PR_RST	BIT_ULL(0)  /* Reset PR engine */
#define FME_PR_CTRL_PR_RSTACK	BIT_ULL(4)  /* Ack for PR engine reset */
#define FME_PR_CTRL_PR_RGN_ID	GENMASK_ULL(9, 7)  /* PR Region ID */
#define FME_PR_CTRL_PR_START	BIT_ULL(12) /* Start to request PR service */
#define FME_PR_CTRL_PR_COMPLETE	BIT_ULL(13) /* PR data push completion */

/* FME PR Status Register Bitfield */
/* Number of available entries in HW queue inside the PR engine. */
#define FME_PR_STS_PR_CREDIT	GENMASK_ULL(8, 0)
#define FME_PR_STS_PR_STS	BIT_ULL(16) /* PR operation status */
#define FME_PR_STS_PR_STS_IDLE	0
#define FME_PR_STS_PR_CTRLR_STS	GENMASK_ULL(22, 20) /* Controller status */
#define FME_PR_STS_PR_HOST_STS	GENMASK_ULL(27, 24) /* PR host status */

/* FME PR Data Register Bitfield */
/* PR data from the raw-binary file. */
#define FME_PR_DATA_PR_DATA_RAW	GENMASK_ULL(32, 0)

/* FME PR Error Register */
/* PR Operation errors detected. */
#define FME_PR_ERR_OPERATION_ERR	BIT_ULL(0)
/* CRC error detected. */
#define FME_PR_ERR_CRC_ERR		BIT_ULL(1)
/* Incompatible PR bitstream detected. */
#define FME_PR_ERR_INCOMPATIBLE_BS	BIT_ULL(2)
/* PR data push protocol violated. */
#define FME_PR_ERR_PROTOCOL_ERR		BIT_ULL(3)
/* PR data fifo overflow error detected. */
#define FME_PR_ERR_FIFO_OVERFLOW	BIT_ULL(4)

#define PR_WAIT_TIMEOUT		8000000
#define PR_HOST_STATUS_IDLE	0

struct fme_mgr_priv {
	void __iomem *ioaddr;
	u64 pr_error;
};

static u64 pr_error_to_mgr_status(u64 err)
{
	u64 status = 0;

	if (err & FME_PR_ERR_OPERATION_ERR)
		status |= FPGA_MGR_STATUS_OPERATION_ERR;
	if (err & FME_PR_ERR_CRC_ERR)
		status |= FPGA_MGR_STATUS_CRC_ERR;
	if (err & FME_PR_ERR_INCOMPATIBLE_BS)
		status |= FPGA_MGR_STATUS_INCOMPATIBLE_IMAGE_ERR;
	if (err & FME_PR_ERR_PROTOCOL_ERR)
		status |= FPGA_MGR_STATUS_IP_PROTOCOL_ERR;
	if (err & FME_PR_ERR_FIFO_OVERFLOW)
		status |= FPGA_MGR_STATUS_FIFO_OVERFLOW_ERR;

	return status;
}

static u64 fme_mgr_pr_error_handle(void __iomem *fme_pr)
{
	u64 pr_status, pr_error;

	pr_status = readq(fme_pr + FME_PR_STS);
	if (!(pr_status & FME_PR_STS_PR_STS))
		return 0;

	pr_error = readq(fme_pr + FME_PR_ERR);
	writeq(pr_error, fme_pr + FME_PR_ERR);

	return pr_error;
}

static int fme_mgr_write_init(struct fpga_manager *mgr,
			      struct fpga_image_info *info,
			      const char *buf, size_t count)
{
	struct device *dev = &mgr->dev;
	struct fme_mgr_priv *priv = mgr->priv;
	void __iomem *fme_pr = priv->ioaddr;
	u64 pr_ctrl, pr_status;

	if (!(info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
		dev_err(dev, "only supports partial reconfiguration.\n");
		return -EINVAL;
	}

	dev_dbg(dev, "resetting PR before initiating PR\n");

	pr_ctrl = readq(fme_pr + FME_PR_CTRL);
	pr_ctrl |= FME_PR_CTRL_PR_RST;
	writeq(pr_ctrl, fme_pr + FME_PR_CTRL);

	if (readq_poll_timeout(fme_pr + FME_PR_CTRL, pr_ctrl,
			       pr_ctrl & FME_PR_CTRL_PR_RSTACK, 1,
			       PR_WAIT_TIMEOUT)) {
		dev_err(dev, "PR Reset ACK timeout\n");
		return -ETIMEDOUT;
	}

	pr_ctrl = readq(fme_pr + FME_PR_CTRL);
	pr_ctrl &= ~FME_PR_CTRL_PR_RST;
	writeq(pr_ctrl, fme_pr + FME_PR_CTRL);

	dev_dbg(dev,
		"waiting for PR resource in HW to be initialized and ready\n");

	if (readq_poll_timeout(fme_pr + FME_PR_STS, pr_status,
			       (pr_status & FME_PR_STS_PR_STS) ==
			       FME_PR_STS_PR_STS_IDLE, 1, PR_WAIT_TIMEOUT)) {
		dev_err(dev, "PR Status timeout\n");
		priv->pr_error = fme_mgr_pr_error_handle(fme_pr);
		return -ETIMEDOUT;
	}

	dev_dbg(dev, "check and clear previous PR error\n");
	priv->pr_error = fme_mgr_pr_error_handle(fme_pr);
	if (priv->pr_error)
		dev_dbg(dev, "previous PR error detected %llx\n",
			(unsigned long long)priv->pr_error);

	dev_dbg(dev, "set PR port ID\n");

	pr_ctrl = readq(fme_pr + FME_PR_CTRL);
	pr_ctrl &= ~FME_PR_CTRL_PR_RGN_ID;
	pr_ctrl |= FIELD_PREP(FME_PR_CTRL_PR_RGN_ID, info->region_id);
	writeq(pr_ctrl, fme_pr + FME_PR_CTRL);

	return 0;
}

static int fme_mgr_write(struct fpga_manager *mgr,
			 const char *buf, size_t count)
{
	struct device *dev = &mgr->dev;
	struct fme_mgr_priv *priv = mgr->priv;
	void __iomem *fme_pr = priv->ioaddr;
	u64 pr_ctrl, pr_status, pr_data;
	int delay = 0, pr_credit, i = 0;

	dev_dbg(dev, "start request\n");

	pr_ctrl = readq(fme_pr + FME_PR_CTRL);
	pr_ctrl |= FME_PR_CTRL_PR_START;
	writeq(pr_ctrl, fme_pr + FME_PR_CTRL);

	dev_dbg(dev, "pushing data from bitstream to HW\n");

	/*
	 * The driver can push data to the PR hardware through the PR_DATA
	 * register once HW has enough pr_credit (> 1); pr_credit decreases
	 * by one for every 32-bit write to the PR_DATA register. If
	 * pr_credit <= 1, the driver needs to poll until HW has enough
	 * pr_credit again.
	 */
	pr_status = readq(fme_pr + FME_PR_STS);
	pr_credit = FIELD_GET(FME_PR_STS_PR_CREDIT, pr_status);

	while (count > 0) {
		while (pr_credit <= 1) {
			if (delay++ > PR_WAIT_TIMEOUT) {
				dev_err(dev, "PR_CREDIT timeout\n");
				return -ETIMEDOUT;
			}
			udelay(1);

			pr_status = readq(fme_pr + FME_PR_STS);
			pr_credit = FIELD_GET(FME_PR_STS_PR_CREDIT, pr_status);
		}

		if (count < 4) {
			dev_err(dev, "Invalid PR bitstream size\n");
			return -EINVAL;
		}

		pr_data = 0;
		pr_data |= FIELD_PREP(FME_PR_DATA_PR_DATA_RAW,
				      *(((u32 *)buf) + i));
		writeq(pr_data, fme_pr + FME_PR_DATA);
		count -= 4;
		pr_credit--;
		i++;
	}

	return 0;
}

static int fme_mgr_write_complete(struct fpga_manager *mgr,
				  struct fpga_image_info *info)
{
	struct device *dev = &mgr->dev;
	struct fme_mgr_priv *priv = mgr->priv;
	void __iomem *fme_pr = priv->ioaddr;
	u64 pr_ctrl;

	pr_ctrl = readq(fme_pr + FME_PR_CTRL);
	pr_ctrl |= FME_PR_CTRL_PR_COMPLETE;
	writeq(pr_ctrl, fme_pr + FME_PR_CTRL);

	dev_dbg(dev, "green bitstream push complete\n");
	dev_dbg(dev, "waiting for HW to release PR resource\n");

	if (readq_poll_timeout(fme_pr + FME_PR_CTRL, pr_ctrl,
			       !(pr_ctrl & FME_PR_CTRL_PR_START), 1,
			       PR_WAIT_TIMEOUT)) {
		dev_err(dev, "PR Completion ACK timeout.\n");
		return -ETIMEDOUT;
	}

	dev_dbg(dev, "PR operation complete, checking status\n");
	priv->pr_error = fme_mgr_pr_error_handle(fme_pr);
	if (priv->pr_error) {
		dev_dbg(dev, "PR error detected %llx\n",
			(unsigned long long)priv->pr_error);
		return -EIO;
	}

	dev_dbg(dev, "PR done successfully\n");

	return 0;
}

static enum fpga_mgr_states fme_mgr_state(struct fpga_manager *mgr)
{
	return FPGA_MGR_STATE_UNKNOWN;
}

static u64 fme_mgr_status(struct fpga_manager *mgr)
{
	struct fme_mgr_priv *priv = mgr->priv;

	return pr_error_to_mgr_status(priv->pr_error);
}

static const struct fpga_manager_ops fme_mgr_ops = {
	.write_init = fme_mgr_write_init,
	.write = fme_mgr_write,
	.write_complete = fme_mgr_write_complete,
	.state = fme_mgr_state,
	.status = fme_mgr_status,
};

static void fme_mgr_get_compat_id(void __iomem *fme_pr,
				  struct fpga_compat_id *id)
{
	id->id_l = readq(fme_pr + FME_PR_INTFC_ID_L);
	id->id_h = readq(fme_pr + FME_PR_INTFC_ID_H);
}

static int fme_mgr_probe(struct platform_device *pdev)
{
	struct dfl_fme_mgr_pdata *pdata = dev_get_platdata(&pdev->dev);
	struct fpga_compat_id *compat_id;
	struct device *dev = &pdev->dev;
	struct fme_mgr_priv *priv;
	struct fpga_manager *mgr;
	struct resource *res;
	int ret;

	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;

	if (pdata->ioaddr)
		priv->ioaddr = pdata->ioaddr;
+ if (!priv->ioaddr) { 300 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 301 + priv->ioaddr = devm_ioremap_resource(dev, res); 302 + if (IS_ERR(priv->ioaddr)) 303 + return PTR_ERR(priv->ioaddr); 304 + } 305 + 306 + compat_id = devm_kzalloc(dev, sizeof(*compat_id), GFP_KERNEL); 307 + if (!compat_id) 308 + return -ENOMEM; 309 + 310 + fme_mgr_get_compat_id(priv->ioaddr, compat_id); 311 + 312 + mgr = fpga_mgr_create(dev, "DFL FME FPGA Manager", 313 + &fme_mgr_ops, priv); 314 + if (!mgr) 315 + return -ENOMEM; 316 + 317 + mgr->compat_id = compat_id; 318 + platform_set_drvdata(pdev, mgr); 319 + 320 + ret = fpga_mgr_register(mgr); 321 + if (ret) 322 + fpga_mgr_free(mgr); 323 + 324 + return ret; 325 + } 326 + 327 + static int fme_mgr_remove(struct platform_device *pdev) 328 + { 329 + struct fpga_manager *mgr = platform_get_drvdata(pdev); 330 + 331 + fpga_mgr_unregister(mgr); 332 + 333 + return 0; 334 + } 335 + 336 + static struct platform_driver fme_mgr_driver = { 337 + .driver = { 338 + .name = DFL_FPGA_FME_MGR, 339 + }, 340 + .probe = fme_mgr_probe, 341 + .remove = fme_mgr_remove, 342 + }; 343 + 344 + module_platform_driver(fme_mgr_driver); 345 + 346 + MODULE_DESCRIPTION("FPGA Manager for DFL FPGA Management Engine"); 347 + MODULE_AUTHOR("Intel Corporation"); 348 + MODULE_LICENSE("GPL v2"); 349 + MODULE_ALIAS("platform:dfl-fme-mgr");
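The comment in fme_mgr_write() above describes the PR_DATA flow control: the driver may push one 32-bit word per available hardware credit and must poll the status register whenever credits fall to one or below. A standalone, userspace sketch of that credit-gated loop (simulated hardware and hypothetical names, not the driver code itself):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simulated PR hardware: credits refill as the HW drains its FIFO. */
struct sim_pr_hw {
	int credits;        /* analogous to FME_PR_STS_PR_CREDIT */
	size_t words_taken; /* 32-bit words accepted so far */
};

/*
 * Model of the credit-gated push loop in fme_mgr_write(): push one u32
 * per credit, and "poll" (here: instantly refill) when credits <= 1.
 */
static int sim_pr_push(struct sim_pr_hw *hw, const uint8_t *buf, size_t count)
{
	size_t i = 0;

	if (count % 4)		/* bitstream must be whole 32-bit words */
		return -1;

	while (count > 0) {
		while (hw->credits <= 1)
			hw->credits = 8; /* stands in for re-reading FME_PR_STS */

		(void)buf[i * 4];	/* driver would write this word to PR_DATA */
		hw->words_taken++;
		hw->credits--;
		count -= 4;
		i++;
	}
	return 0;
}
```

Pushing a 64-byte buffer through this model consumes 16 words, with a simulated credit refill every seven words.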
drivers/fpga/dfl-fme-pr.c (+479 lines)
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for FPGA Management Engine (FME) Partial Reconfiguration
+ *
+ * Copyright (C) 2017-2018 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Kang Luwei <luwei.kang@intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Wu Hao <hao.wu@intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Christopher Rauer <christopher.rauer@intel.com>
+ *   Henry Mitchel <henry.mitchel@intel.com>
+ */
+
+#include <linux/types.h>
+#include <linux/device.h>
+#include <linux/vmalloc.h>
+#include <linux/uaccess.h>
+#include <linux/fpga/fpga-mgr.h>
+#include <linux/fpga/fpga-bridge.h>
+#include <linux/fpga/fpga-region.h>
+#include <linux/fpga-dfl.h>
+
+#include "dfl.h"
+#include "dfl-fme.h"
+#include "dfl-fme-pr.h"
+
+static struct dfl_fme_region *
+dfl_fme_region_find_by_port_id(struct dfl_fme *fme, int port_id)
+{
+	struct dfl_fme_region *fme_region;
+
+	list_for_each_entry(fme_region, &fme->region_list, node)
+		if (fme_region->port_id == port_id)
+			return fme_region;
+
+	return NULL;
+}
+
+static int dfl_fme_region_match(struct device *dev, const void *data)
+{
+	return dev->parent == data;
+}
+
+static struct fpga_region *dfl_fme_region_find(struct dfl_fme *fme, int port_id)
+{
+	struct dfl_fme_region *fme_region;
+	struct fpga_region *region;
+
+	fme_region = dfl_fme_region_find_by_port_id(fme, port_id);
+	if (!fme_region)
+		return NULL;
+
+	region = fpga_region_class_find(NULL, &fme_region->region->dev,
+					dfl_fme_region_match);
+	if (!region)
+		return NULL;
+
+	return region;
+}
+
+static int fme_pr(struct platform_device *pdev, unsigned long arg)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
+	void __user *argp = (void __user *)arg;
+	struct dfl_fpga_fme_port_pr port_pr;
+	struct fpga_image_info *info;
+	struct fpga_region *region;
+	void __iomem *fme_hdr;
+	struct dfl_fme *fme;
+	unsigned long minsz;
+	void *buf = NULL;
+	int ret = 0;
+	u64 v;
+
+	minsz = offsetofend(struct dfl_fpga_fme_port_pr, buffer_address);
+
+	if (copy_from_user(&port_pr, argp, minsz))
+		return -EFAULT;
+
+	if (port_pr.argsz < minsz || port_pr.flags)
+		return -EINVAL;
+
+	if (!IS_ALIGNED(port_pr.buffer_size, 4))
+		return -EINVAL;
+
+	/* get fme header region */
+	fme_hdr = dfl_get_feature_ioaddr_by_id(&pdev->dev,
+					       FME_FEATURE_ID_HEADER);
+
+	/* check port id */
+	v = readq(fme_hdr + FME_HDR_CAP);
+	if (port_pr.port_id >= FIELD_GET(FME_CAP_NUM_PORTS, v)) {
+		dev_dbg(&pdev->dev, "port number more than maximum\n");
+		return -EINVAL;
+	}
+
+	if (!access_ok(VERIFY_READ,
+		       (void __user *)(unsigned long)port_pr.buffer_address,
+		       port_pr.buffer_size))
+		return -EFAULT;
+
+	buf = vmalloc(port_pr.buffer_size);
+	if (!buf)
+		return -ENOMEM;
+
+	if (copy_from_user(buf,
+			   (void __user *)(unsigned long)port_pr.buffer_address,
+			   port_pr.buffer_size)) {
+		ret = -EFAULT;
+		goto free_exit;
+	}
+
+	/* prepare fpga_image_info for PR */
+	info = fpga_image_info_alloc(&pdev->dev);
+	if (!info) {
+		ret = -ENOMEM;
+		goto free_exit;
+	}
+
+	info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
+
+	mutex_lock(&pdata->lock);
+	fme = dfl_fpga_pdata_get_private(pdata);
+	/* fme device has been unregistered. */
+	if (!fme) {
+		ret = -EINVAL;
+		goto unlock_exit;
+	}
+
+	region = dfl_fme_region_find(fme, port_pr.port_id);
+	if (!region) {
+		ret = -EINVAL;
+		goto unlock_exit;
+	}
+
+	fpga_image_info_free(region->info);
+
+	info->buf = buf;
+	info->count = port_pr.buffer_size;
+	info->region_id = port_pr.port_id;
+	region->info = info;
+
+	ret = fpga_region_program_fpga(region);
+
+	/*
+	 * it allows userspace to reset the PR region's logic by disabling and
+	 * reenabling the bridge to clear things out between acceleration runs.
+	 * so no need to hold the bridges after partial reconfiguration.
+	 */
+	if (region->get_bridges)
+		fpga_bridges_put(&region->bridge_list);
+
+	put_device(&region->dev);
+unlock_exit:
+	mutex_unlock(&pdata->lock);
+free_exit:
+	vfree(buf);
+	if (copy_to_user((void __user *)arg, &port_pr, minsz))
+		return -EFAULT;
+
+	return ret;
+}
+
+/**
+ * dfl_fme_create_mgr - create fpga mgr platform device as child device
+ *
+ * @pdata: fme platform_device's pdata
+ *
+ * Return: mgr platform device if successful, and error code otherwise.
+ */
+static struct platform_device *
+dfl_fme_create_mgr(struct dfl_feature_platform_data *pdata,
+		   struct dfl_feature *feature)
+{
+	struct platform_device *mgr, *fme = pdata->dev;
+	struct dfl_fme_mgr_pdata mgr_pdata;
+	int ret = -ENOMEM;
+
+	if (!feature->ioaddr)
+		return ERR_PTR(-ENODEV);
+
+	mgr_pdata.ioaddr = feature->ioaddr;
+
+	/*
+	 * Each FME has only one fpga-mgr, so allocate platform device using
+	 * the same FME platform device id.
+	 */
+	mgr = platform_device_alloc(DFL_FPGA_FME_MGR, fme->id);
+	if (!mgr)
+		return ERR_PTR(ret);
+
+	mgr->dev.parent = &fme->dev;
+
+	ret = platform_device_add_data(mgr, &mgr_pdata, sizeof(mgr_pdata));
+	if (ret)
+		goto create_mgr_err;
+
+	ret = platform_device_add(mgr);
+	if (ret)
+		goto create_mgr_err;
+
+	return mgr;
+
+create_mgr_err:
+	platform_device_put(mgr);
+	return ERR_PTR(ret);
+}
+
+/**
+ * dfl_fme_destroy_mgr - destroy fpga mgr platform device
+ * @pdata: fme platform device's pdata
+ */
+static void dfl_fme_destroy_mgr(struct dfl_feature_platform_data *pdata)
+{
+	struct dfl_fme *priv = dfl_fpga_pdata_get_private(pdata);
+
+	platform_device_unregister(priv->mgr);
+}
+
+/**
+ * dfl_fme_create_bridge - create fme fpga bridge platform device as child
+ *
+ * @pdata: fme platform device's pdata
+ * @port_id: port id for the bridge to be created.
+ *
+ * Return: bridge platform device if successful, and error code otherwise.
+ */
+static struct dfl_fme_bridge *
+dfl_fme_create_bridge(struct dfl_feature_platform_data *pdata, int port_id)
+{
+	struct device *dev = &pdata->dev->dev;
+	struct dfl_fme_br_pdata br_pdata;
+	struct dfl_fme_bridge *fme_br;
+	int ret = -ENOMEM;
+
+	fme_br = devm_kzalloc(dev, sizeof(*fme_br), GFP_KERNEL);
+	if (!fme_br)
+		return ERR_PTR(ret);
+
+	br_pdata.cdev = pdata->dfl_cdev;
+	br_pdata.port_id = port_id;
+
+	fme_br->br = platform_device_alloc(DFL_FPGA_FME_BRIDGE,
+					   PLATFORM_DEVID_AUTO);
+	if (!fme_br->br)
+		return ERR_PTR(ret);
+
+	fme_br->br->dev.parent = dev;
+
+	ret = platform_device_add_data(fme_br->br, &br_pdata, sizeof(br_pdata));
+	if (ret)
+		goto create_br_err;
+
+	ret = platform_device_add(fme_br->br);
+	if (ret)
+		goto create_br_err;
+
+	return fme_br;
+
+create_br_err:
+	platform_device_put(fme_br->br);
+	return ERR_PTR(ret);
+}
+
+/**
+ * dfl_fme_destroy_bridge - destroy fpga bridge platform device
+ * @fme_br: fme bridge to destroy
+ */
+static void dfl_fme_destroy_bridge(struct dfl_fme_bridge *fme_br)
+{
+	platform_device_unregister(fme_br->br);
+}
+
+/**
+ * dfl_fme_destroy_bridges - destroy all fpga bridge platform devices
+ * @pdata: fme platform device's pdata
+ */
+static void dfl_fme_destroy_bridges(struct dfl_feature_platform_data *pdata)
+{
+	struct dfl_fme *priv = dfl_fpga_pdata_get_private(pdata);
+	struct dfl_fme_bridge *fbridge, *tmp;
+
+	list_for_each_entry_safe(fbridge, tmp, &priv->bridge_list, node) {
+		list_del(&fbridge->node);
+		dfl_fme_destroy_bridge(fbridge);
+	}
+}
+
+/**
+ * dfl_fme_create_region - create fpga region platform device as child
+ *
+ * @pdata: fme platform device's pdata
+ * @mgr: mgr platform device needed for region
+ * @br: br platform device needed for region
+ * @port_id: port id
+ *
+ * Return: fme region if successful, and error code otherwise.
+ */
+static struct dfl_fme_region *
+dfl_fme_create_region(struct dfl_feature_platform_data *pdata,
+		      struct platform_device *mgr,
+		      struct platform_device *br, int port_id)
+{
+	struct dfl_fme_region_pdata region_pdata;
+	struct device *dev = &pdata->dev->dev;
+	struct dfl_fme_region *fme_region;
+	int ret = -ENOMEM;
+
+	fme_region = devm_kzalloc(dev, sizeof(*fme_region), GFP_KERNEL);
+	if (!fme_region)
+		return ERR_PTR(ret);
+
+	region_pdata.mgr = mgr;
+	region_pdata.br = br;
+
+	/*
+	 * Each FPGA device may have more than one port, so allocate platform
+	 * device using the same port platform device id.
+	 */
+	fme_region->region = platform_device_alloc(DFL_FPGA_FME_REGION, br->id);
+	if (!fme_region->region)
+		return ERR_PTR(ret);
+
+	fme_region->region->dev.parent = dev;
+
+	ret = platform_device_add_data(fme_region->region, &region_pdata,
+				       sizeof(region_pdata));
+	if (ret)
+		goto create_region_err;
+
+	ret = platform_device_add(fme_region->region);
+	if (ret)
+		goto create_region_err;
+
+	fme_region->port_id = port_id;
+
+	return fme_region;
+
+create_region_err:
+	platform_device_put(fme_region->region);
+	return ERR_PTR(ret);
+}
+
+/**
+ * dfl_fme_destroy_region - destroy fme region
+ * @fme_region: fme region to destroy
+ */
+static void dfl_fme_destroy_region(struct dfl_fme_region *fme_region)
+{
+	platform_device_unregister(fme_region->region);
+}
+
+/**
+ * dfl_fme_destroy_regions - destroy all fme regions
+ * @pdata: fme platform device's pdata
+ */
+static void dfl_fme_destroy_regions(struct dfl_feature_platform_data *pdata)
+{
+	struct dfl_fme *priv = dfl_fpga_pdata_get_private(pdata);
+	struct dfl_fme_region *fme_region, *tmp;
+
+	list_for_each_entry_safe(fme_region, tmp, &priv->region_list, node) {
+		list_del(&fme_region->node);
+		dfl_fme_destroy_region(fme_region);
+	}
+}
+
+static int pr_mgmt_init(struct platform_device *pdev,
+			struct dfl_feature *feature)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
+	struct dfl_fme_region *fme_region;
+	struct dfl_fme_bridge *fme_br;
+	struct platform_device *mgr;
+	struct dfl_fme *priv;
+	void __iomem *fme_hdr;
+	int ret = -ENODEV, i = 0;
+	u64 fme_cap, port_offset;
+
+	fme_hdr = dfl_get_feature_ioaddr_by_id(&pdev->dev,
+					       FME_FEATURE_ID_HEADER);
+
+	mutex_lock(&pdata->lock);
+	priv = dfl_fpga_pdata_get_private(pdata);
+
+	/* Initialize the region and bridge sub device list */
+	INIT_LIST_HEAD(&priv->region_list);
+	INIT_LIST_HEAD(&priv->bridge_list);
+
+	/* Create fpga mgr platform device */
+	mgr = dfl_fme_create_mgr(pdata, feature);
+	if (IS_ERR(mgr)) {
+		dev_err(&pdev->dev, "fail to create fpga mgr pdev\n");
+		goto unlock;
+	}
+
+	priv->mgr = mgr;
+
+	/* Read capability register to check number of regions and bridges */
+	fme_cap = readq(fme_hdr + FME_HDR_CAP);
+	for (; i < FIELD_GET(FME_CAP_NUM_PORTS, fme_cap); i++) {
+		port_offset = readq(fme_hdr + FME_HDR_PORT_OFST(i));
+		if (!(port_offset & FME_PORT_OFST_IMP))
+			continue;
+
+		/* Create bridge for each port */
+		fme_br = dfl_fme_create_bridge(pdata, i);
+		if (IS_ERR(fme_br)) {
+			ret = PTR_ERR(fme_br);
+			goto destroy_region;
+		}
+
+		list_add(&fme_br->node, &priv->bridge_list);
+
+		/* Create region for each port */
+		fme_region = dfl_fme_create_region(pdata, mgr,
+						   fme_br->br, i);
+		if (IS_ERR(fme_region)) {
+			ret = PTR_ERR(fme_region);
+			goto destroy_region;
+		}
+
+		list_add(&fme_region->node, &priv->region_list);
+	}
+	mutex_unlock(&pdata->lock);
+
+	return 0;
+
+destroy_region:
+	dfl_fme_destroy_regions(pdata);
+	dfl_fme_destroy_bridges(pdata);
+	dfl_fme_destroy_mgr(pdata);
+unlock:
+	mutex_unlock(&pdata->lock);
+	return ret;
+}
+
+static void pr_mgmt_uinit(struct platform_device *pdev,
+			  struct dfl_feature *feature)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
+	struct dfl_fme *priv;
+
+	mutex_lock(&pdata->lock);
+	priv = dfl_fpga_pdata_get_private(pdata);
+
+	dfl_fme_destroy_regions(pdata);
+	dfl_fme_destroy_bridges(pdata);
+	dfl_fme_destroy_mgr(pdata);
+	mutex_unlock(&pdata->lock);
+}
+
+static long fme_pr_ioctl(struct platform_device *pdev,
+			 struct dfl_feature *feature,
+			 unsigned int cmd, unsigned long arg)
+{
+	long ret;
+
+	switch (cmd) {
+	case DFL_FPGA_FME_PORT_PR:
+		ret = fme_pr(pdev, arg);
+		break;
+	default:
+		ret = -ENODEV;
+	}
+
+	return ret;
+}
+
+const struct dfl_feature_ops pr_mgmt_ops = {
+	.init = pr_mgmt_init,
+	.uinit = pr_mgmt_uinit,
+	.ioctl = fme_pr_ioctl,
+};
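fme_pr() validates its user argument with the extensible-ioctl pattern: userspace reports the struct size it knows about (argsz), and the kernel accepts anything at least as large as the fields it currently needs (minsz, computed with offsetofend()). A hedged, userspace-only sketch of those checks, with a struct layout that mirrors dfl_fpga_fme_port_pr (names here are illustrative, not the UAPI header):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* offsetofend() as defined in the kernel: end offset of a member. */
#define my_offsetofend(TYPE, MEMBER) \
	(offsetof(TYPE, MEMBER) + sizeof(((TYPE *)0)->MEMBER))

/* Layout mirroring struct dfl_fpga_fme_port_pr (illustrative only). */
struct pr_arg {
	uint32_t argsz;		/* size userspace believes the struct has */
	uint32_t flags;		/* must be zero today */
	uint32_t port_id;
	uint32_t buffer_size;	/* must be a multiple of 4 bytes */
	uint64_t buffer_address;
};

/* The same sanity checks fme_pr() applies before copying the bitstream. */
static int pr_arg_valid(const struct pr_arg *a)
{
	size_t minsz = my_offsetofend(struct pr_arg, buffer_address);

	if (a->argsz < minsz || a->flags)
		return 0;
	if (a->buffer_size % 4)	/* IS_ALIGNED(buffer_size, 4) */
		return 0;
	return 1;
}
```

Because new fields can only be appended, an old binary passing a smaller argsz than minsz is rejected, while a newer binary with a larger struct still works.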
drivers/fpga/dfl-fme-pr.h (+84 lines)
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Header file for FPGA Management Engine (FME) Partial Reconfiguration Driver
+ *
+ * Copyright (C) 2017-2018 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Kang Luwei <luwei.kang@intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Wu Hao <hao.wu@intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Henry Mitchel <henry.mitchel@intel.com>
+ */
+
+#ifndef __DFL_FME_PR_H
+#define __DFL_FME_PR_H
+
+#include <linux/platform_device.h>
+
+/**
+ * struct dfl_fme_region - FME fpga region data structure
+ *
+ * @region: platform device of the FPGA region.
+ * @node: used to link fme_region to a list.
+ * @port_id: indicates which port this region is connected to.
+ */
+struct dfl_fme_region {
+	struct platform_device *region;
+	struct list_head node;
+	int port_id;
+};
+
+/**
+ * struct dfl_fme_region_pdata - platform data for FME region platform device.
+ *
+ * @mgr: platform device of the FPGA manager.
+ * @br: platform device of the FPGA bridge.
+ * @region_id: region id (same as port_id).
+ */
+struct dfl_fme_region_pdata {
+	struct platform_device *mgr;
+	struct platform_device *br;
+	int region_id;
+};
+
+/**
+ * struct dfl_fme_bridge - FME fpga bridge data structure
+ *
+ * @br: platform device of the FPGA bridge.
+ * @node: used to link fme_bridge to a list.
+ */
+struct dfl_fme_bridge {
+	struct platform_device *br;
+	struct list_head node;
+};
+
+/**
+ * struct dfl_fme_br_pdata - platform data for FME bridge platform device.
+ *
+ * @cdev: container device.
+ * @port_id: port id.
+ */
+struct dfl_fme_br_pdata {
+	struct dfl_fpga_cdev *cdev;
+	int port_id;
+};
+
+/**
+ * struct dfl_fme_mgr_pdata - platform data for FME manager platform device.
+ *
+ * @ioaddr: mapped io address for FME manager platform device.
+ */
+struct dfl_fme_mgr_pdata {
+	void __iomem *ioaddr;
+};
+
+#define DFL_FPGA_FME_MGR	"dfl-fme-mgr"
+#define DFL_FPGA_FME_BRIDGE	"dfl-fme-bridge"
+#define DFL_FPGA_FME_REGION	"dfl-fme-region"
+
+#endif /* __DFL_FME_PR_H */
drivers/fpga/dfl-fme-region.c (+89 lines)
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * FPGA Region Driver for FPGA Management Engine (FME)
+ *
+ * Copyright (C) 2017-2018 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Wu Hao <hao.wu@intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Henry Mitchel <henry.mitchel@intel.com>
+ */
+
+#include <linux/module.h>
+#include <linux/fpga/fpga-region.h>
+
+#include "dfl-fme-pr.h"
+
+static int fme_region_get_bridges(struct fpga_region *region)
+{
+	struct dfl_fme_region_pdata *pdata = region->priv;
+	struct device *dev = &pdata->br->dev;
+
+	return fpga_bridge_get_to_list(dev, region->info, &region->bridge_list);
+}
+
+static int fme_region_probe(struct platform_device *pdev)
+{
+	struct dfl_fme_region_pdata *pdata = dev_get_platdata(&pdev->dev);
+	struct device *dev = &pdev->dev;
+	struct fpga_region *region;
+	struct fpga_manager *mgr;
+	int ret;
+
+	mgr = fpga_mgr_get(&pdata->mgr->dev);
+	if (IS_ERR(mgr))
+		return -EPROBE_DEFER;
+
+	region = fpga_region_create(dev, mgr, fme_region_get_bridges);
+	if (!region) {
+		ret = -ENOMEM;
+		goto eprobe_mgr_put;
+	}
+
+	region->priv = pdata;
+	region->compat_id = mgr->compat_id;
+	platform_set_drvdata(pdev, region);
+
+	ret = fpga_region_register(region);
+	if (ret)
+		goto region_free;
+
+	dev_dbg(dev, "DFL FME FPGA Region probed\n");
+
+	return 0;
+
+region_free:
+	fpga_region_free(region);
+eprobe_mgr_put:
+	fpga_mgr_put(mgr);
+	return ret;
+}
+
+static int fme_region_remove(struct platform_device *pdev)
+{
+	struct fpga_region *region = dev_get_drvdata(&pdev->dev);
+
+	fpga_region_unregister(region);
+	fpga_mgr_put(region->mgr);
+
+	return 0;
+}
+
+static struct platform_driver fme_region_driver = {
+	.driver	= {
+		.name = DFL_FPGA_FME_REGION,
+	},
+	.probe = fme_region_probe,
+	.remove = fme_region_remove,
+};
+
+module_platform_driver(fme_region_driver);
+
+MODULE_DESCRIPTION("FPGA Region for DFL FPGA Management Engine");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL v2");
+MODULE_ALIAS("platform:dfl-fme-region");
drivers/fpga/dfl-fme.h (+38 lines)
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Header file for FPGA Management Engine (FME) Driver
+ *
+ * Copyright (C) 2017-2018 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Kang Luwei <luwei.kang@intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Wu Hao <hao.wu@intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Henry Mitchel <henry.mitchel@intel.com>
+ */
+
+#ifndef __DFL_FME_H
+#define __DFL_FME_H
+
+/**
+ * struct dfl_fme - dfl fme private data
+ *
+ * @mgr: FME's FPGA manager platform device.
+ * @region_list: linked list of FME's FPGA regions.
+ * @bridge_list: linked list of FME's FPGA bridges.
+ * @pdata: fme platform device's pdata.
+ */
+struct dfl_fme {
+	struct platform_device *mgr;
+	struct list_head region_list;
+	struct list_head bridge_list;
+	struct dfl_feature_platform_data *pdata;
+};
+
+extern const struct dfl_feature_ops pr_mgmt_ops;
+
+#endif /* __DFL_FME_H */
drivers/fpga/dfl-pci.c (+243 lines)
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for FPGA Device Feature List (DFL) PCIe device
+ *
+ * Copyright (C) 2017-2018 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Zhang Yi <Yi.Z.Zhang@intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Henry Mitchel <henry.mitchel@intel.com>
+ */
+
+#include <linux/pci.h>
+#include <linux/types.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/stddef.h>
+#include <linux/errno.h>
+#include <linux/aer.h>
+
+#include "dfl.h"
+
+#define DRV_VERSION	"0.8"
+#define DRV_NAME	"dfl-pci"
+
+struct cci_drvdata {
+	struct dfl_fpga_cdev *cdev;	/* container device */
+};
+
+static void __iomem *cci_pci_ioremap_bar(struct pci_dev *pcidev, int bar)
+{
+	if (pcim_iomap_regions(pcidev, BIT(bar), DRV_NAME))
+		return NULL;
+
+	return pcim_iomap_table(pcidev)[bar];
+}
+
+/* PCI Device ID */
+#define PCIE_DEVICE_ID_PF_INT_5_X	0xBCBD
+#define PCIE_DEVICE_ID_PF_INT_6_X	0xBCC0
+#define PCIE_DEVICE_ID_PF_DSC_1_X	0x09C4
+/* VF Device */
+#define PCIE_DEVICE_ID_VF_INT_5_X	0xBCBF
+#define PCIE_DEVICE_ID_VF_INT_6_X	0xBCC1
+#define PCIE_DEVICE_ID_VF_DSC_1_X	0x09C5
+
+static struct pci_device_id cci_pcie_id_tbl[] = {
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_5_X),},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_5_X),},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_INT_6_X),},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_INT_6_X),},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_PF_DSC_1_X),},
+	{PCI_DEVICE(PCI_VENDOR_ID_INTEL, PCIE_DEVICE_ID_VF_DSC_1_X),},
+	{0,}
+};
+MODULE_DEVICE_TABLE(pci, cci_pcie_id_tbl);
+
+static int cci_init_drvdata(struct pci_dev *pcidev)
+{
+	struct cci_drvdata *drvdata;
+
+	drvdata = devm_kzalloc(&pcidev->dev, sizeof(*drvdata), GFP_KERNEL);
+	if (!drvdata)
+		return -ENOMEM;
+
+	pci_set_drvdata(pcidev, drvdata);
+
+	return 0;
+}
+
+static void cci_remove_feature_devs(struct pci_dev *pcidev)
+{
+	struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);
+
+	/* remove all children feature devices */
+	dfl_fpga_feature_devs_remove(drvdata->cdev);
+}
+
+/* enumerate feature devices under pci device */
+static int cci_enumerate_feature_devs(struct pci_dev *pcidev)
+{
+	struct cci_drvdata *drvdata = pci_get_drvdata(pcidev);
+	struct dfl_fpga_enum_info *info;
+	struct dfl_fpga_cdev *cdev;
+	resource_size_t start, len;
+	int port_num, bar, i, ret = 0;
+	void __iomem *base;
+	u32 offset;
+	u64 v;
+
+	/* allocate enumeration info via pci_dev */
+	info = dfl_fpga_enum_info_alloc(&pcidev->dev);
+	if (!info)
+		return -ENOMEM;
+
+	/* start to find Device Feature List from Bar 0 */
+	base = cci_pci_ioremap_bar(pcidev, 0);
+	if (!base) {
+		ret = -ENOMEM;
+		goto enum_info_free_exit;
+	}
+
+	/*
+	 * PF device has FME and Ports/AFUs, and VF device only has one
+	 * Port/AFU. Check them and add related "Device Feature List" info
+	 * for the next step enumeration.
+	 */
+	if (dfl_feature_is_fme(base)) {
+		start = pci_resource_start(pcidev, 0);
+		len = pci_resource_len(pcidev, 0);
+
+		dfl_fpga_enum_info_add_dfl(info, start, len, base);
+
+		/*
+		 * find more Device Feature Lists (e.g. Ports) per information
+		 * indicated by FME module.
+		 */
+		v = readq(base + FME_HDR_CAP);
+		port_num = FIELD_GET(FME_CAP_NUM_PORTS, v);
+
+		WARN_ON(port_num > MAX_DFL_FPGA_PORT_NUM);
+
+		for (i = 0; i < port_num; i++) {
+			v = readq(base + FME_HDR_PORT_OFST(i));
+
+			/* skip ports which are not implemented. */
+			if (!(v & FME_PORT_OFST_IMP))
+				continue;
+
+			/*
+			 * add Port's Device Feature List information for next
+			 * step enumeration.
+			 */
+			bar = FIELD_GET(FME_PORT_OFST_BAR_ID, v);
+			offset = FIELD_GET(FME_PORT_OFST_DFH_OFST, v);
+			base = cci_pci_ioremap_bar(pcidev, bar);
+			if (!base)
+				continue;
+
+			start = pci_resource_start(pcidev, bar) + offset;
+			len = pci_resource_len(pcidev, bar) - offset;
+
+			dfl_fpga_enum_info_add_dfl(info, start, len,
+						   base + offset);
+		}
+	} else if (dfl_feature_is_port(base)) {
+		start = pci_resource_start(pcidev, 0);
+		len = pci_resource_len(pcidev, 0);
+
+		dfl_fpga_enum_info_add_dfl(info, start, len, base);
+	} else {
+		ret = -ENODEV;
+		goto enum_info_free_exit;
+	}
+
+	/* start enumeration with prepared enumeration information */
+	cdev = dfl_fpga_feature_devs_enumerate(info);
+	if (IS_ERR(cdev)) {
+		dev_err(&pcidev->dev, "Enumeration failure\n");
+		ret = PTR_ERR(cdev);
+		goto enum_info_free_exit;
+	}
+
+	drvdata->cdev = cdev;
+
+enum_info_free_exit:
+	dfl_fpga_enum_info_free(info);
+
+	return ret;
+}
+
+static
+int cci_pci_probe(struct pci_dev *pcidev, const struct pci_device_id *pcidevid)
+{
+	int ret;
+
+	ret = pcim_enable_device(pcidev);
+	if (ret < 0) {
+		dev_err(&pcidev->dev, "Failed to enable device %d.\n", ret);
+		return ret;
+	}
+
+	ret = pci_enable_pcie_error_reporting(pcidev);
+	if (ret && ret != -EINVAL)
+		dev_info(&pcidev->dev, "PCIE AER unavailable %d.\n", ret);
+
+	pci_set_master(pcidev);
+
+	if (!pci_set_dma_mask(pcidev, DMA_BIT_MASK(64))) {
+		ret = pci_set_consistent_dma_mask(pcidev, DMA_BIT_MASK(64));
+		if (ret)
+			goto disable_error_report_exit;
+	} else if (!pci_set_dma_mask(pcidev, DMA_BIT_MASK(32))) {
+		ret = pci_set_consistent_dma_mask(pcidev, DMA_BIT_MASK(32));
+		if (ret)
+			goto disable_error_report_exit;
+	} else {
+		ret = -EIO;
+		dev_err(&pcidev->dev, "No suitable DMA support available.\n");
+		goto disable_error_report_exit;
+	}
+
+	ret = cci_init_drvdata(pcidev);
+	if (ret) {
+		dev_err(&pcidev->dev, "Fail to init drvdata %d.\n", ret);
+		goto disable_error_report_exit;
+	}
+
+	ret = cci_enumerate_feature_devs(pcidev);
+	if (ret) {
+		dev_err(&pcidev->dev, "enumeration failure %d.\n", ret);
+		goto disable_error_report_exit;
+	}
+
+	return ret;
+
+disable_error_report_exit:
+	pci_disable_pcie_error_reporting(pcidev);
+	return ret;
+}
+
+static void cci_pci_remove(struct pci_dev *pcidev)
+{
+	cci_remove_feature_devs(pcidev);
+	pci_disable_pcie_error_reporting(pcidev);
+}
+
+static struct pci_driver cci_pci_driver = {
+	.name = DRV_NAME,
+	.id_table = cci_pcie_id_tbl,
+	.probe = cci_pci_probe,
+	.remove = cci_pci_remove,
+};
+
+module_pci_driver(cci_pci_driver);
+
+MODULE_DESCRIPTION("FPGA DFL PCIe Device Driver");
+MODULE_AUTHOR("Intel Corporation");
+MODULE_LICENSE("GPL v2");
drivers/fpga/dfl.c (+1044 lines)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for FPGA Device Feature List (DFL) Support 4 + * 5 + * Copyright (C) 2017-2018 Intel Corporation, Inc. 6 + * 7 + * Authors: 8 + * Kang Luwei <luwei.kang@intel.com> 9 + * Zhang Yi <yi.z.zhang@intel.com> 10 + * Wu Hao <hao.wu@intel.com> 11 + * Xiao Guangrong <guangrong.xiao@linux.intel.com> 12 + */ 13 + #include <linux/module.h> 14 + 15 + #include "dfl.h" 16 + 17 + static DEFINE_MUTEX(dfl_id_mutex); 18 + 19 + /* 20 + * when adding a new feature dev support in DFL framework, it's required to 21 + * add a new item in enum dfl_id_type and provide related information in below 22 + * dfl_devs table which is indexed by dfl_id_type, e.g. name string used for 23 + * platform device creation (define name strings in dfl.h, as they could be 24 + * reused by platform device drivers). 25 + * 26 + * if the new feature dev needs chardev support, then it's required to add 27 + * a new item in dfl_chardevs table and configure dfl_devs[i].devt_type as 28 + * index to dfl_chardevs table. If no chardev support just set devt_type 29 + * as one invalid index (DFL_FPGA_DEVT_MAX). 30 + */ 31 + enum dfl_id_type { 32 + FME_ID, /* fme id allocation and mapping */ 33 + PORT_ID, /* port id allocation and mapping */ 34 + DFL_ID_MAX, 35 + }; 36 + 37 + enum dfl_fpga_devt_type { 38 + DFL_FPGA_DEVT_FME, 39 + DFL_FPGA_DEVT_PORT, 40 + DFL_FPGA_DEVT_MAX, 41 + }; 42 + 43 + /** 44 + * dfl_dev_info - dfl feature device information. 45 + * @name: name string of the feature platform device. 46 + * @dfh_id: id value in Device Feature Header (DFH) register by DFL spec. 47 + * @id: idr id of the feature dev. 48 + * @devt_type: index to dfl_chrdevs[]. 
49 + */ 50 + struct dfl_dev_info { 51 + const char *name; 52 + u32 dfh_id; 53 + struct idr id; 54 + enum dfl_fpga_devt_type devt_type; 55 + }; 56 + 57 + /* it is indexed by dfl_id_type */ 58 + static struct dfl_dev_info dfl_devs[] = { 59 + {.name = DFL_FPGA_FEATURE_DEV_FME, .dfh_id = DFH_ID_FIU_FME, 60 + .devt_type = DFL_FPGA_DEVT_FME}, 61 + {.name = DFL_FPGA_FEATURE_DEV_PORT, .dfh_id = DFH_ID_FIU_PORT, 62 + .devt_type = DFL_FPGA_DEVT_PORT}, 63 + }; 64 + 65 + /** 66 + * dfl_chardev_info - chardev information of dfl feature device 67 + * @name: name string of the char device. 68 + * @devt: devt of the char device. 69 + */ 70 + struct dfl_chardev_info { 71 + const char *name; 72 + dev_t devt; 73 + }; 74 + 75 + /* indexed by enum dfl_fpga_devt_type */ 76 + static struct dfl_chardev_info dfl_chrdevs[] = { 77 + {.name = DFL_FPGA_FEATURE_DEV_FME}, 78 + {.name = DFL_FPGA_FEATURE_DEV_PORT}, 79 + }; 80 + 81 + static void dfl_ids_init(void) 82 + { 83 + int i; 84 + 85 + for (i = 0; i < ARRAY_SIZE(dfl_devs); i++) 86 + idr_init(&dfl_devs[i].id); 87 + } 88 + 89 + static void dfl_ids_destroy(void) 90 + { 91 + int i; 92 + 93 + for (i = 0; i < ARRAY_SIZE(dfl_devs); i++) 94 + idr_destroy(&dfl_devs[i].id); 95 + } 96 + 97 + static int dfl_id_alloc(enum dfl_id_type type, struct device *dev) 98 + { 99 + int id; 100 + 101 + WARN_ON(type >= DFL_ID_MAX); 102 + mutex_lock(&dfl_id_mutex); 103 + id = idr_alloc(&dfl_devs[type].id, dev, 0, 0, GFP_KERNEL); 104 + mutex_unlock(&dfl_id_mutex); 105 + 106 + return id; 107 + } 108 + 109 + static void dfl_id_free(enum dfl_id_type type, int id) 110 + { 111 + WARN_ON(type >= DFL_ID_MAX); 112 + mutex_lock(&dfl_id_mutex); 113 + idr_remove(&dfl_devs[type].id, id); 114 + mutex_unlock(&dfl_id_mutex); 115 + } 116 + 117 + static enum dfl_id_type feature_dev_id_type(struct platform_device *pdev) 118 + { 119 + int i; 120 + 121 + for (i = 0; i < ARRAY_SIZE(dfl_devs); i++) 122 + if (!strcmp(dfl_devs[i].name, pdev->name)) 123 + return i; 124 + 125 + return 
DFL_ID_MAX; 126 + } 127 + 128 + static enum dfl_id_type dfh_id_to_type(u32 id) 129 + { 130 + int i; 131 + 132 + for (i = 0; i < ARRAY_SIZE(dfl_devs); i++) 133 + if (dfl_devs[i].dfh_id == id) 134 + return i; 135 + 136 + return DFL_ID_MAX; 137 + } 138 + 139 + /* 140 + * introduce a global port_ops list; it allows port drivers to register their 141 + * ops on this list, so that other feature devices (e.g. FME) can use the port 142 + * functions even when the related port platform device is hidden. One example: 143 + * in the virtualization case of a PCIe-based FPGA DFL device, when SRIOV is 144 + * enabled, the port (and its AFU) is turned into a VF and the port platform 145 + * device is hidden from the system, but the port still must be accessed to 146 + * finish the FPGA reconfiguration function in the FME. 147 + */ 148 + 149 + static DEFINE_MUTEX(dfl_port_ops_mutex); 150 + static LIST_HEAD(dfl_port_ops_list); 151 + 152 + /** 153 + * dfl_fpga_port_ops_get - get matched port ops from the global list 154 + * @pdev: platform device to match with associated port ops. 155 + * Return: matched port ops on success, NULL otherwise. 156 + * 157 + * Please note that dfl_fpga_port_ops_put() must be called after using the port_ops. 158 + */ 159 + struct dfl_fpga_port_ops *dfl_fpga_port_ops_get(struct platform_device *pdev) 160 + { 161 + struct dfl_fpga_port_ops *ops = NULL; 162 + 163 + mutex_lock(&dfl_port_ops_mutex); 164 + if (list_empty(&dfl_port_ops_list)) 165 + goto done; 166 + 167 + list_for_each_entry(ops, &dfl_port_ops_list, node) { 168 + /* match port_ops using the name of platform device */ 169 + if (!strcmp(pdev->name, ops->name)) { 170 + if (!try_module_get(ops->owner)) 171 + ops = NULL; 172 + goto done; 173 + } 174 + } 175 + 176 + ops = NULL; 177 + done: 178 + mutex_unlock(&dfl_port_ops_mutex); 179 + return ops; 180 + } 181 + EXPORT_SYMBOL_GPL(dfl_fpga_port_ops_get); 182 + 183 + /** 184 + * dfl_fpga_port_ops_put - put port ops 185 + * @ops: port ops. 
186 + */ 187 + void dfl_fpga_port_ops_put(struct dfl_fpga_port_ops *ops) 188 + { 189 + if (ops && ops->owner) 190 + module_put(ops->owner); 191 + } 192 + EXPORT_SYMBOL_GPL(dfl_fpga_port_ops_put); 193 + 194 + /** 195 + * dfl_fpga_port_ops_add - add port_ops to global list 196 + * @ops: port ops to add. 197 + */ 198 + void dfl_fpga_port_ops_add(struct dfl_fpga_port_ops *ops) 199 + { 200 + mutex_lock(&dfl_port_ops_mutex); 201 + list_add_tail(&ops->node, &dfl_port_ops_list); 202 + mutex_unlock(&dfl_port_ops_mutex); 203 + } 204 + EXPORT_SYMBOL_GPL(dfl_fpga_port_ops_add); 205 + 206 + /** 207 + * dfl_fpga_port_ops_del - remove port_ops from global list 208 + * @ops: port ops to del. 209 + */ 210 + void dfl_fpga_port_ops_del(struct dfl_fpga_port_ops *ops) 211 + { 212 + mutex_lock(&dfl_port_ops_mutex); 213 + list_del(&ops->node); 214 + mutex_unlock(&dfl_port_ops_mutex); 215 + } 216 + EXPORT_SYMBOL_GPL(dfl_fpga_port_ops_del); 217 + 218 + /** 219 + * dfl_fpga_check_port_id - check the port id 220 + * @pdev: port platform device. 221 + * @pport_id: port id to compare. 222 + * 223 + * Return: 1 if port device matches with given port id, otherwise 0. 224 + */ 225 + int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id) 226 + { 227 + struct dfl_fpga_port_ops *port_ops = dfl_fpga_port_ops_get(pdev); 228 + int port_id; 229 + 230 + if (!port_ops || !port_ops->get_id) 231 + return 0; 232 + 233 + port_id = port_ops->get_id(pdev); 234 + dfl_fpga_port_ops_put(port_ops); 235 + 236 + return port_id == *(int *)pport_id; 237 + } 238 + EXPORT_SYMBOL_GPL(dfl_fpga_check_port_id); 239 + 240 + /** 241 + * dfl_fpga_dev_feature_uinit - uinit for sub features of dfl feature device 242 + * @pdev: feature device. 
243 + */ 244 + void dfl_fpga_dev_feature_uinit(struct platform_device *pdev) 245 + { 246 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 247 + struct dfl_feature *feature; 248 + 249 + dfl_fpga_dev_for_each_feature(pdata, feature) 250 + if (feature->ops) { 251 + feature->ops->uinit(pdev, feature); 252 + feature->ops = NULL; 253 + } 254 + } 255 + EXPORT_SYMBOL_GPL(dfl_fpga_dev_feature_uinit); 256 + 257 + static int dfl_feature_instance_init(struct platform_device *pdev, 258 + struct dfl_feature_platform_data *pdata, 259 + struct dfl_feature *feature, 260 + struct dfl_feature_driver *drv) 261 + { 262 + int ret; 263 + 264 + ret = drv->ops->init(pdev, feature); 265 + if (ret) 266 + return ret; 267 + 268 + feature->ops = drv->ops; 269 + 270 + return ret; 271 + } 272 + 273 + /** 274 + * dfl_fpga_dev_feature_init - init for sub features of dfl feature device 275 + * @pdev: feature device. 276 + * @feature_drvs: drvs for sub features. 277 + * 278 + * This function will match sub features with given feature drvs list and 279 + * use matched drv to init related sub feature. 280 + * 281 + * Return: 0 on success, negative error code otherwise. 
282 + */ 283 + int dfl_fpga_dev_feature_init(struct platform_device *pdev, 284 + struct dfl_feature_driver *feature_drvs) 285 + { 286 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 287 + struct dfl_feature_driver *drv = feature_drvs; 288 + struct dfl_feature *feature; 289 + int ret; 290 + 291 + while (drv->ops) { 292 + dfl_fpga_dev_for_each_feature(pdata, feature) { 293 + /* match feature and drv using id */ 294 + if (feature->id == drv->id) { 295 + ret = dfl_feature_instance_init(pdev, pdata, 296 + feature, drv); 297 + if (ret) 298 + goto exit; 299 + } 300 + } 301 + drv++; 302 + } 303 + 304 + return 0; 305 + exit: 306 + dfl_fpga_dev_feature_uinit(pdev); 307 + return ret; 308 + } 309 + EXPORT_SYMBOL_GPL(dfl_fpga_dev_feature_init); 310 + 311 + static void dfl_chardev_uinit(void) 312 + { 313 + int i; 314 + 315 + for (i = 0; i < DFL_FPGA_DEVT_MAX; i++) 316 + if (MAJOR(dfl_chrdevs[i].devt)) { 317 + unregister_chrdev_region(dfl_chrdevs[i].devt, 318 + MINORMASK); 319 + dfl_chrdevs[i].devt = MKDEV(0, 0); 320 + } 321 + } 322 + 323 + static int dfl_chardev_init(void) 324 + { 325 + int i, ret; 326 + 327 + for (i = 0; i < DFL_FPGA_DEVT_MAX; i++) { 328 + ret = alloc_chrdev_region(&dfl_chrdevs[i].devt, 0, MINORMASK, 329 + dfl_chrdevs[i].name); 330 + if (ret) 331 + goto exit; 332 + } 333 + 334 + return 0; 335 + 336 + exit: 337 + dfl_chardev_uinit(); 338 + return ret; 339 + } 340 + 341 + static dev_t dfl_get_devt(enum dfl_fpga_devt_type type, int id) 342 + { 343 + if (type >= DFL_FPGA_DEVT_MAX) 344 + return 0; 345 + 346 + return MKDEV(MAJOR(dfl_chrdevs[type].devt), id); 347 + } 348 + 349 + /** 350 + * dfl_fpga_dev_ops_register - register cdev ops for feature dev 351 + * 352 + * @pdev: feature dev. 353 + * @fops: file operations for feature dev's cdev. 354 + * @owner: owning module/driver. 355 + * 356 + * Return: 0 on success, negative error code otherwise. 
357 + */ 358 + int dfl_fpga_dev_ops_register(struct platform_device *pdev, 359 + const struct file_operations *fops, 360 + struct module *owner) 361 + { 362 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 363 + 364 + cdev_init(&pdata->cdev, fops); 365 + pdata->cdev.owner = owner; 366 + 367 + /* 368 + * set parent to the feature device so that its refcount is 369 + * decreased after the last refcount of cdev is gone, that 370 + * makes sure the feature device is valid during device 371 + * file's life-cycle. 372 + */ 373 + pdata->cdev.kobj.parent = &pdev->dev.kobj; 374 + 375 + return cdev_add(&pdata->cdev, pdev->dev.devt, 1); 376 + } 377 + EXPORT_SYMBOL_GPL(dfl_fpga_dev_ops_register); 378 + 379 + /** 380 + * dfl_fpga_dev_ops_unregister - unregister cdev ops for feature dev 381 + * @pdev: feature dev. 382 + */ 383 + void dfl_fpga_dev_ops_unregister(struct platform_device *pdev) 384 + { 385 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 386 + 387 + cdev_del(&pdata->cdev); 388 + } 389 + EXPORT_SYMBOL_GPL(dfl_fpga_dev_ops_unregister); 390 + 391 + /** 392 + * struct build_feature_devs_info - info collected during feature dev build. 393 + * 394 + * @dev: device to enumerate. 395 + * @cdev: the container device for all feature devices. 396 + * @feature_dev: current feature device. 397 + * @ioaddr: header register region address of feature device in enumeration. 398 + * @sub_features: a sub features linked list for feature device in enumeration. 399 + * @feature_num: number of sub features for feature device in enumeration. 400 + */ 401 + struct build_feature_devs_info { 402 + struct device *dev; 403 + struct dfl_fpga_cdev *cdev; 404 + struct platform_device *feature_dev; 405 + void __iomem *ioaddr; 406 + struct list_head sub_features; 407 + int feature_num; 408 + }; 409 + 410 + /** 411 + * struct dfl_feature_info - sub feature info collected during feature dev build 412 + * 413 + * @fid: id of this sub feature. 
414 + * @mmio_res: mmio resource of this sub feature. 415 + * @ioaddr: mapped base address of mmio resource. 416 + * @node: node in sub_features linked list. 417 + */ 418 + struct dfl_feature_info { 419 + u64 fid; 420 + struct resource mmio_res; 421 + void __iomem *ioaddr; 422 + struct list_head node; 423 + }; 424 + 425 + static void dfl_fpga_cdev_add_port_dev(struct dfl_fpga_cdev *cdev, 426 + struct platform_device *port) 427 + { 428 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&port->dev); 429 + 430 + mutex_lock(&cdev->lock); 431 + list_add(&pdata->node, &cdev->port_dev_list); 432 + get_device(&pdata->dev->dev); 433 + mutex_unlock(&cdev->lock); 434 + } 435 + 436 + /* 437 + * register current feature device, it is called when we need to switch to 438 + * another feature parsing or we have parsed all features on given device 439 + * feature list. 440 + */ 441 + static int build_info_commit_dev(struct build_feature_devs_info *binfo) 442 + { 443 + struct platform_device *fdev = binfo->feature_dev; 444 + struct dfl_feature_platform_data *pdata; 445 + struct dfl_feature_info *finfo, *p; 446 + int ret, index = 0; 447 + 448 + if (!fdev) 449 + return 0; 450 + 451 + /* 452 + * we do not need to care for the memory which is associated with 453 + * the platform device. After calling platform_device_unregister(), 454 + * it will be automatically freed by device's release() callback, 455 + * platform_device_release(). 456 + */ 457 + pdata = kzalloc(dfl_feature_platform_data_size(binfo->feature_num), 458 + GFP_KERNEL); 459 + if (!pdata) 460 + return -ENOMEM; 461 + 462 + pdata->dev = fdev; 463 + pdata->num = binfo->feature_num; 464 + pdata->dfl_cdev = binfo->cdev; 465 + mutex_init(&pdata->lock); 466 + 467 + /* 468 + * the count should be initialized to 0 to make sure 469 + *__fpga_port_enable() following __fpga_port_disable() 470 + * works properly for port device. 471 + * and it should always be 0 for fme device. 
472 + */ 473 + WARN_ON(pdata->disable_count); 474 + 475 + fdev->dev.platform_data = pdata; 476 + 477 + /* each sub feature has one MMIO resource */ 478 + fdev->num_resources = binfo->feature_num; 479 + fdev->resource = kcalloc(binfo->feature_num, sizeof(*fdev->resource), 480 + GFP_KERNEL); 481 + if (!fdev->resource) 482 + return -ENOMEM; 483 + 484 + /* fill features and resource information for feature dev */ 485 + list_for_each_entry_safe(finfo, p, &binfo->sub_features, node) { 486 + struct dfl_feature *feature = &pdata->features[index]; 487 + 488 + /* save resource information for each feature */ 489 + feature->id = finfo->fid; 490 + feature->resource_index = index; 491 + feature->ioaddr = finfo->ioaddr; 492 + fdev->resource[index++] = finfo->mmio_res; 493 + 494 + list_del(&finfo->node); 495 + kfree(finfo); 496 + } 497 + 498 + ret = platform_device_add(binfo->feature_dev); 499 + if (!ret) { 500 + if (feature_dev_id_type(binfo->feature_dev) == PORT_ID) 501 + dfl_fpga_cdev_add_port_dev(binfo->cdev, 502 + binfo->feature_dev); 503 + else 504 + binfo->cdev->fme_dev = 505 + get_device(&binfo->feature_dev->dev); 506 + /* 507 + * reset it to avoid build_info_free() freeing their resource. 508 + * 509 + * The resource of successfully registered feature devices 510 + * will be freed by platform_device_unregister(). See the 511 + * comments in build_info_create_dev(). 
512 + */ 513 + binfo->feature_dev = NULL; 514 + } 515 + 516 + return ret; 517 + } 518 + 519 + static int 520 + build_info_create_dev(struct build_feature_devs_info *binfo, 521 + enum dfl_id_type type, void __iomem *ioaddr) 522 + { 523 + struct platform_device *fdev; 524 + int ret; 525 + 526 + if (type >= DFL_ID_MAX) 527 + return -EINVAL; 528 + 529 + /* we will create a new device, commit current device first */ 530 + ret = build_info_commit_dev(binfo); 531 + if (ret) 532 + return ret; 533 + 534 + /* 535 + * we use -ENODEV as the initialization indicator, which indicates 536 + * whether the id needs to be reclaimed 537 + */ 538 + fdev = platform_device_alloc(dfl_devs[type].name, -ENODEV); 539 + if (!fdev) 540 + return -ENOMEM; 541 + 542 + binfo->feature_dev = fdev; 543 + binfo->feature_num = 0; 544 + binfo->ioaddr = ioaddr; 545 + INIT_LIST_HEAD(&binfo->sub_features); 546 + 547 + fdev->id = dfl_id_alloc(type, &fdev->dev); 548 + if (fdev->id < 0) 549 + return fdev->id; 550 + 551 + fdev->dev.parent = &binfo->cdev->region->dev; 552 + fdev->dev.devt = dfl_get_devt(dfl_devs[type].devt_type, fdev->id); 553 + 554 + return 0; 555 + } 556 + 557 + static void build_info_free(struct build_feature_devs_info *binfo) 558 + { 559 + struct dfl_feature_info *finfo, *p; 560 + 561 + /* 562 + * it is a valid id, free it. 
See comments in 563 + * build_info_create_dev() 564 + */ 565 + if (binfo->feature_dev && binfo->feature_dev->id >= 0) { 566 + dfl_id_free(feature_dev_id_type(binfo->feature_dev), 567 + binfo->feature_dev->id); 568 + 569 + list_for_each_entry_safe(finfo, p, &binfo->sub_features, node) { 570 + list_del(&finfo->node); 571 + kfree(finfo); 572 + } 573 + } 574 + 575 + platform_device_put(binfo->feature_dev); 576 + 577 + devm_kfree(binfo->dev, binfo); 578 + } 579 + 580 + static inline u32 feature_size(void __iomem *start) 581 + { 582 + u64 v = readq(start + DFH); 583 + u32 ofst = FIELD_GET(DFH_NEXT_HDR_OFST, v); 584 + /* workaround for private features with invalid size, use 4K instead */ 585 + return ofst ? ofst : 4096; 586 + } 587 + 588 + static u64 feature_id(void __iomem *start) 589 + { 590 + u64 v = readq(start + DFH); 591 + u16 id = FIELD_GET(DFH_ID, v); 592 + u8 type = FIELD_GET(DFH_TYPE, v); 593 + 594 + if (type == DFH_TYPE_FIU) 595 + return FEATURE_ID_FIU_HEADER; 596 + else if (type == DFH_TYPE_PRIVATE) 597 + return id; 598 + else if (type == DFH_TYPE_AFU) 599 + return FEATURE_ID_AFU; 600 + 601 + WARN_ON(1); 602 + return 0; 603 + } 604 + 605 + /* 606 + * when creating sub feature instances, private features need not provide a 607 + * resource size and feature id, as both can be read from the DFH 608 + * register. For an AFU sub feature, its register region only contains user- 609 + * defined registers, so never trust any information from it; just use the 610 + * resource size information provided by its parent FIU. 611 + */ 612 + static int 613 + create_feature_instance(struct build_feature_devs_info *binfo, 614 + struct dfl_fpga_enum_dfl *dfl, resource_size_t ofst, 615 + resource_size_t size, u64 fid) 616 + { 617 + struct dfl_feature_info *finfo; 618 + 619 + /* read feature size and id if inputs are invalid */ 620 + size = size ? size : feature_size(dfl->ioaddr + ofst); 621 + fid = fid ? 
fid : feature_id(dfl->ioaddr + ofst); 622 + 623 + if (dfl->len - ofst < size) 624 + return -EINVAL; 625 + 626 + finfo = kzalloc(sizeof(*finfo), GFP_KERNEL); 627 + if (!finfo) 628 + return -ENOMEM; 629 + 630 + finfo->fid = fid; 631 + finfo->mmio_res.start = dfl->start + ofst; 632 + finfo->mmio_res.end = finfo->mmio_res.start + size - 1; 633 + finfo->mmio_res.flags = IORESOURCE_MEM; 634 + finfo->ioaddr = dfl->ioaddr + ofst; 635 + 636 + list_add_tail(&finfo->node, &binfo->sub_features); 637 + binfo->feature_num++; 638 + 639 + return 0; 640 + } 641 + 642 + static int parse_feature_port_afu(struct build_feature_devs_info *binfo, 643 + struct dfl_fpga_enum_dfl *dfl, 644 + resource_size_t ofst) 645 + { 646 + u64 v = readq(binfo->ioaddr + PORT_HDR_CAP); 647 + u32 size = FIELD_GET(PORT_CAP_MMIO_SIZE, v) << 10; 648 + 649 + WARN_ON(!size); 650 + 651 + return create_feature_instance(binfo, dfl, ofst, size, FEATURE_ID_AFU); 652 + } 653 + 654 + static int parse_feature_afu(struct build_feature_devs_info *binfo, 655 + struct dfl_fpga_enum_dfl *dfl, 656 + resource_size_t ofst) 657 + { 658 + if (!binfo->feature_dev) { 659 + dev_err(binfo->dev, "this AFU does not belong to any FIU.\n"); 660 + return -EINVAL; 661 + } 662 + 663 + switch (feature_dev_id_type(binfo->feature_dev)) { 664 + case PORT_ID: 665 + return parse_feature_port_afu(binfo, dfl, ofst); 666 + default: 667 + dev_info(binfo->dev, "AFU belonging to FIU %s is not supported yet.\n", 668 + binfo->feature_dev->name); 669 + } 670 + 671 + return 0; 672 + } 673 + 674 + static int parse_feature_fiu(struct build_feature_devs_info *binfo, 675 + struct dfl_fpga_enum_dfl *dfl, 676 + resource_size_t ofst) 677 + { 678 + u32 id, offset; 679 + u64 v; 680 + int ret = 0; 681 + 682 + v = readq(dfl->ioaddr + ofst + DFH); 683 + id = FIELD_GET(DFH_ID, v); 684 + 685 + /* create platform device for dfl feature dev */ 686 + ret = build_info_create_dev(binfo, dfh_id_to_type(id), 687 + dfl->ioaddr + ofst); 688 + if (ret) 689 + return ret; 690 + 
691 + ret = create_feature_instance(binfo, dfl, ofst, 0, 0); 692 + if (ret) 693 + return ret; 694 + /* 695 + * find and parse FIU's child AFU via its NEXT_AFU register. 696 + * please note that only Port has valid NEXT_AFU pointer per spec. 697 + */ 698 + v = readq(dfl->ioaddr + ofst + NEXT_AFU); 699 + 700 + offset = FIELD_GET(NEXT_AFU_NEXT_DFH_OFST, v); 701 + if (offset) 702 + return parse_feature_afu(binfo, dfl, ofst + offset); 703 + 704 + dev_dbg(binfo->dev, "No AFUs detected on FIU %d\n", id); 705 + 706 + return ret; 707 + } 708 + 709 + static int parse_feature_private(struct build_feature_devs_info *binfo, 710 + struct dfl_fpga_enum_dfl *dfl, 711 + resource_size_t ofst) 712 + { 713 + if (!binfo->feature_dev) { 714 + dev_err(binfo->dev, "the private feature %llx does not belong to any AFU.\n", 715 + (unsigned long long)feature_id(dfl->ioaddr + ofst)); 716 + return -EINVAL; 717 + } 718 + 719 + return create_feature_instance(binfo, dfl, ofst, 0, 0); 720 + } 721 + 722 + /** 723 + * parse_feature - parse a feature on given device feature list 724 + * 725 + * @binfo: build feature devices information. 
726 + * @dfl: device feature list to parse 727 + * @ofst: offset to feature header on this device feature list 728 + */ 729 + static int parse_feature(struct build_feature_devs_info *binfo, 730 + struct dfl_fpga_enum_dfl *dfl, resource_size_t ofst) 731 + { 732 + u64 v; 733 + u32 type; 734 + 735 + v = readq(dfl->ioaddr + ofst + DFH); 736 + type = FIELD_GET(DFH_TYPE, v); 737 + 738 + switch (type) { 739 + case DFH_TYPE_AFU: 740 + return parse_feature_afu(binfo, dfl, ofst); 741 + case DFH_TYPE_PRIVATE: 742 + return parse_feature_private(binfo, dfl, ofst); 743 + case DFH_TYPE_FIU: 744 + return parse_feature_fiu(binfo, dfl, ofst); 745 + default: 746 + dev_info(binfo->dev, 747 + "Feature Type %x is not supported.\n", type); 748 + } 749 + 750 + return 0; 751 + } 752 + 753 + static int parse_feature_list(struct build_feature_devs_info *binfo, 754 + struct dfl_fpga_enum_dfl *dfl) 755 + { 756 + void __iomem *start = dfl->ioaddr; 757 + void __iomem *end = dfl->ioaddr + dfl->len; 758 + int ret = 0; 759 + u32 ofst = 0; 760 + u64 v; 761 + 762 + /* walk through the device feature list via DFH's next DFH pointer. 
*/ 763 + for (; start < end; start += ofst) { 764 + if (end - start < DFH_SIZE) { 765 + dev_err(binfo->dev, "The region is too small to contain a feature.\n"); 766 + return -EINVAL; 767 + } 768 + 769 + ret = parse_feature(binfo, dfl, start - dfl->ioaddr); 770 + if (ret) 771 + return ret; 772 + 773 + v = readq(start + DFH); 774 + ofst = FIELD_GET(DFH_NEXT_HDR_OFST, v); 775 + 776 + /* stop parsing if EOL(End of List) is set or offset is 0 */ 777 + if ((v & DFH_EOL) || !ofst) 778 + break; 779 + } 780 + 781 + /* commit current feature device when reach the end of list */ 782 + return build_info_commit_dev(binfo); 783 + } 784 + 785 + struct dfl_fpga_enum_info *dfl_fpga_enum_info_alloc(struct device *dev) 786 + { 787 + struct dfl_fpga_enum_info *info; 788 + 789 + get_device(dev); 790 + 791 + info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 792 + if (!info) { 793 + put_device(dev); 794 + return NULL; 795 + } 796 + 797 + info->dev = dev; 798 + INIT_LIST_HEAD(&info->dfls); 799 + 800 + return info; 801 + } 802 + EXPORT_SYMBOL_GPL(dfl_fpga_enum_info_alloc); 803 + 804 + void dfl_fpga_enum_info_free(struct dfl_fpga_enum_info *info) 805 + { 806 + struct dfl_fpga_enum_dfl *tmp, *dfl; 807 + struct device *dev; 808 + 809 + if (!info) 810 + return; 811 + 812 + dev = info->dev; 813 + 814 + /* remove all device feature lists in the list. */ 815 + list_for_each_entry_safe(dfl, tmp, &info->dfls, node) { 816 + list_del(&dfl->node); 817 + devm_kfree(dev, dfl); 818 + } 819 + 820 + devm_kfree(dev, info); 821 + put_device(dev); 822 + } 823 + EXPORT_SYMBOL_GPL(dfl_fpga_enum_info_free); 824 + 825 + /** 826 + * dfl_fpga_enum_info_add_dfl - add info of a device feature list to enum info 827 + * 828 + * @info: ptr to dfl_fpga_enum_info 829 + * @start: mmio resource address of the device feature list. 830 + * @len: mmio resource length of the device feature list. 831 + * @ioaddr: mapped mmio resource address of the device feature list. 
832 + * 833 + * One FPGA device may have one or more Device Feature Lists (DFLs), use this 834 + * function to add information of each DFL to common data structure for next 835 + * step enumeration. 836 + * 837 + * Return: 0 on success, negative error code otherwise. 838 + */ 839 + int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info, 840 + resource_size_t start, resource_size_t len, 841 + void __iomem *ioaddr) 842 + { 843 + struct dfl_fpga_enum_dfl *dfl; 844 + 845 + dfl = devm_kzalloc(info->dev, sizeof(*dfl), GFP_KERNEL); 846 + if (!dfl) 847 + return -ENOMEM; 848 + 849 + dfl->start = start; 850 + dfl->len = len; 851 + dfl->ioaddr = ioaddr; 852 + 853 + list_add_tail(&dfl->node, &info->dfls); 854 + 855 + return 0; 856 + } 857 + EXPORT_SYMBOL_GPL(dfl_fpga_enum_info_add_dfl); 858 + 859 + static int remove_feature_dev(struct device *dev, void *data) 860 + { 861 + struct platform_device *pdev = to_platform_device(dev); 862 + enum dfl_id_type type = feature_dev_id_type(pdev); 863 + int id = pdev->id; 864 + 865 + platform_device_unregister(pdev); 866 + 867 + dfl_id_free(type, id); 868 + 869 + return 0; 870 + } 871 + 872 + static void remove_feature_devs(struct dfl_fpga_cdev *cdev) 873 + { 874 + device_for_each_child(&cdev->region->dev, NULL, remove_feature_dev); 875 + } 876 + 877 + /** 878 + * dfl_fpga_feature_devs_enumerate - enumerate feature devices 879 + * @info: information for enumeration. 880 + * 881 + * This function creates a container device (base FPGA region), enumerates 882 + * feature devices based on the enumeration info and creates platform devices 883 + * under the container device. 
884 + * 885 + * Return: dfl_fpga_cdev struct on success, -errno on failure 886 + */ 887 + struct dfl_fpga_cdev * 888 + dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info) 889 + { 890 + struct build_feature_devs_info *binfo; 891 + struct dfl_fpga_enum_dfl *dfl; 892 + struct dfl_fpga_cdev *cdev; 893 + int ret = 0; 894 + 895 + if (!info->dev) 896 + return ERR_PTR(-ENODEV); 897 + 898 + cdev = devm_kzalloc(info->dev, sizeof(*cdev), GFP_KERNEL); 899 + if (!cdev) 900 + return ERR_PTR(-ENOMEM); 901 + 902 + cdev->region = fpga_region_create(info->dev, NULL, NULL); 903 + if (!cdev->region) { 904 + ret = -ENOMEM; 905 + goto free_cdev_exit; 906 + } 907 + 908 + cdev->parent = info->dev; 909 + mutex_init(&cdev->lock); 910 + INIT_LIST_HEAD(&cdev->port_dev_list); 911 + 912 + ret = fpga_region_register(cdev->region); 913 + if (ret) 914 + goto free_region_exit; 915 + 916 + /* create and init build info for enumeration */ 917 + binfo = devm_kzalloc(info->dev, sizeof(*binfo), GFP_KERNEL); 918 + if (!binfo) { 919 + ret = -ENOMEM; 920 + goto unregister_region_exit; 921 + } 922 + 923 + binfo->dev = info->dev; 924 + binfo->cdev = cdev; 925 + 926 + /* 927 + * start enumeration for all feature devices based on Device Feature 928 + * Lists. 929 + */ 930 + list_for_each_entry(dfl, &info->dfls, node) { 931 + ret = parse_feature_list(binfo, dfl); 932 + if (ret) { 933 + remove_feature_devs(cdev); 934 + build_info_free(binfo); 935 + goto unregister_region_exit; 936 + } 937 + } 938 + 939 + build_info_free(binfo); 940 + 941 + return cdev; 942 + 943 + unregister_region_exit: 944 + fpga_region_unregister(cdev->region); 945 + free_region_exit: 946 + fpga_region_free(cdev->region); 947 + free_cdev_exit: 948 + devm_kfree(info->dev, cdev); 949 + return ERR_PTR(ret); 950 + } 951 + EXPORT_SYMBOL_GPL(dfl_fpga_feature_devs_enumerate); 952 + 953 + /** 954 + * dfl_fpga_feature_devs_remove - remove all feature devices 955 + * @cdev: fpga container device. 
956 + * 957 + * Remove the container device and all feature devices under given container 958 + * devices. 959 + */ 960 + void dfl_fpga_feature_devs_remove(struct dfl_fpga_cdev *cdev) 961 + { 962 + struct dfl_feature_platform_data *pdata, *ptmp; 963 + 964 + remove_feature_devs(cdev); 965 + 966 + mutex_lock(&cdev->lock); 967 + if (cdev->fme_dev) { 968 + /* the fme should be unregistered. */ 969 + WARN_ON(device_is_registered(cdev->fme_dev)); 970 + put_device(cdev->fme_dev); 971 + } 972 + 973 + list_for_each_entry_safe(pdata, ptmp, &cdev->port_dev_list, node) { 974 + struct platform_device *port_dev = pdata->dev; 975 + 976 + /* the port should be unregistered. */ 977 + WARN_ON(device_is_registered(&port_dev->dev)); 978 + list_del(&pdata->node); 979 + put_device(&port_dev->dev); 980 + } 981 + mutex_unlock(&cdev->lock); 982 + 983 + fpga_region_unregister(cdev->region); 984 + devm_kfree(cdev->parent, cdev); 985 + } 986 + EXPORT_SYMBOL_GPL(dfl_fpga_feature_devs_remove); 987 + 988 + /** 989 + * __dfl_fpga_cdev_find_port - find a port under given container device 990 + * 991 + * @cdev: container device 992 + * @data: data passed to match function 993 + * @match: match function used to find specific port from the port device list 994 + * 995 + * Find a port device under container device. This function needs to be 996 + * invoked with lock held. 997 + * 998 + * Return: pointer to port's platform device if successful, NULL otherwise. 999 + * 1000 + * NOTE: you will need to drop the device reference with put_device() after use. 
1001 + */ 1002 + struct platform_device * 1003 + __dfl_fpga_cdev_find_port(struct dfl_fpga_cdev *cdev, void *data, 1004 + int (*match)(struct platform_device *, void *)) 1005 + { 1006 + struct dfl_feature_platform_data *pdata; 1007 + struct platform_device *port_dev; 1008 + 1009 + list_for_each_entry(pdata, &cdev->port_dev_list, node) { 1010 + port_dev = pdata->dev; 1011 + 1012 + if (match(port_dev, data) && get_device(&port_dev->dev)) 1013 + return port_dev; 1014 + } 1015 + 1016 + return NULL; 1017 + } 1018 + EXPORT_SYMBOL_GPL(__dfl_fpga_cdev_find_port); 1019 + 1020 + static int __init dfl_fpga_init(void) 1021 + { 1022 + int ret; 1023 + 1024 + dfl_ids_init(); 1025 + 1026 + ret = dfl_chardev_init(); 1027 + if (ret) 1028 + dfl_ids_destroy(); 1029 + 1030 + return ret; 1031 + } 1032 + 1033 + static void __exit dfl_fpga_exit(void) 1034 + { 1035 + dfl_chardev_uinit(); 1036 + dfl_ids_destroy(); 1037 + } 1038 + 1039 + module_init(dfl_fpga_init); 1040 + module_exit(dfl_fpga_exit); 1041 + 1042 + MODULE_DESCRIPTION("FPGA Device Feature List (DFL) Support"); 1043 + MODULE_AUTHOR("Intel Corporation"); 1044 + MODULE_LICENSE("GPL v2");
+410
drivers/fpga/dfl.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Driver Header File for FPGA Device Feature List (DFL) Support 4 + * 5 + * Copyright (C) 2017-2018 Intel Corporation, Inc. 6 + * 7 + * Authors: 8 + * Kang Luwei <luwei.kang@intel.com> 9 + * Zhang Yi <yi.z.zhang@intel.com> 10 + * Wu Hao <hao.wu@intel.com> 11 + * Xiao Guangrong <guangrong.xiao@linux.intel.com> 12 + */ 13 + 14 + #ifndef __FPGA_DFL_H 15 + #define __FPGA_DFL_H 16 + 17 + #include <linux/bitfield.h> 18 + #include <linux/cdev.h> 19 + #include <linux/delay.h> 20 + #include <linux/fs.h> 21 + #include <linux/iopoll.h> 22 + #include <linux/io-64-nonatomic-lo-hi.h> 23 + #include <linux/platform_device.h> 24 + #include <linux/slab.h> 25 + #include <linux/uuid.h> 26 + #include <linux/fpga/fpga-region.h> 27 + 28 + /* maximum supported number of ports */ 29 + #define MAX_DFL_FPGA_PORT_NUM 4 30 + /* plus one for fme device */ 31 + #define MAX_DFL_FEATURE_DEV_NUM (MAX_DFL_FPGA_PORT_NUM + 1) 32 + 33 + /* Reserved 0x0 for Header Group Register and 0xff for AFU */ 34 + #define FEATURE_ID_FIU_HEADER 0x0 35 + #define FEATURE_ID_AFU 0xff 36 + 37 + #define FME_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER 38 + #define FME_FEATURE_ID_THERMAL_MGMT 0x1 39 + #define FME_FEATURE_ID_POWER_MGMT 0x2 40 + #define FME_FEATURE_ID_GLOBAL_IPERF 0x3 41 + #define FME_FEATURE_ID_GLOBAL_ERR 0x4 42 + #define FME_FEATURE_ID_PR_MGMT 0x5 43 + #define FME_FEATURE_ID_HSSI 0x6 44 + #define FME_FEATURE_ID_GLOBAL_DPERF 0x7 45 + 46 + #define PORT_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER 47 + #define PORT_FEATURE_ID_AFU FEATURE_ID_AFU 48 + #define PORT_FEATURE_ID_ERROR 0x10 49 + #define PORT_FEATURE_ID_UMSG 0x11 50 + #define PORT_FEATURE_ID_UINT 0x12 51 + #define PORT_FEATURE_ID_STP 0x13 52 + 53 + /* 54 + * Device Feature Header Register Set 55 + * 56 + * For FIUs, they all have DFH + GUID + NEXT_AFU as common header registers. 57 + * For AFUs, they have DFH + GUID as common header registers. 
+ * For private features, they only have DFH register as common header.
+ */
+#define DFH			0x0
+#define GUID_L			0x8
+#define GUID_H			0x10
+#define NEXT_AFU		0x18
+
+#define DFH_SIZE		0x8
+
+/* Device Feature Header Register Bitfield */
+#define DFH_ID			GENMASK_ULL(11, 0)	/* Feature ID */
+#define DFH_ID_FIU_FME		0
+#define DFH_ID_FIU_PORT		1
+#define DFH_REVISION		GENMASK_ULL(15, 12)	/* Feature revision */
+#define DFH_NEXT_HDR_OFST	GENMASK_ULL(39, 16)	/* Offset to next DFH */
+#define DFH_EOL			BIT_ULL(40)		/* End of list */
+#define DFH_TYPE		GENMASK_ULL(63, 60)	/* Feature type */
+#define DFH_TYPE_AFU		1
+#define DFH_TYPE_PRIVATE	3
+#define DFH_TYPE_FIU		4
+
+/* Next AFU Register Bitfield */
+#define NEXT_AFU_NEXT_DFH_OFST	GENMASK_ULL(23, 0)	/* Offset to next AFU */
+
+/* FME Header Register Set */
+#define FME_HDR_DFH		DFH
+#define FME_HDR_GUID_L		GUID_L
+#define FME_HDR_GUID_H		GUID_H
+#define FME_HDR_NEXT_AFU	NEXT_AFU
+#define FME_HDR_CAP		0x30
+#define FME_HDR_PORT_OFST(n)	(0x38 + ((n) * 0x8))
+#define FME_HDR_BITSTREAM_ID	0x60
+#define FME_HDR_BITSTREAM_MD	0x68
+
+/* FME Fab Capability Register Bitfield */
+#define FME_CAP_FABRIC_VERID	GENMASK_ULL(7, 0)	/* Fabric version ID */
+#define FME_CAP_SOCKET_ID	BIT_ULL(8)		/* Socket ID */
+#define FME_CAP_PCIE0_LINK_AVL	BIT_ULL(12)		/* PCIE0 Link */
+#define FME_CAP_PCIE1_LINK_AVL	BIT_ULL(13)		/* PCIE1 Link */
+#define FME_CAP_COHR_LINK_AVL	BIT_ULL(14)		/* Coherent Link */
+#define FME_CAP_IOMMU_AVL	BIT_ULL(16)		/* IOMMU available */
+#define FME_CAP_NUM_PORTS	GENMASK_ULL(19, 17)	/* Number of ports */
+#define FME_CAP_ADDR_WIDTH	GENMASK_ULL(29, 24)	/* Address bus width */
+#define FME_CAP_CACHE_SIZE	GENMASK_ULL(43, 32)	/* Cache size in KB */
+#define FME_CAP_CACHE_ASSOC	GENMASK_ULL(47, 44)	/* Associativity */
+
+/* FME Port Offset Register Bitfield */
+/* Offset to port device feature header */
+#define FME_PORT_OFST_DFH_OFST	GENMASK_ULL(23, 0)
+/* PCI Bar ID for this port */
+#define FME_PORT_OFST_BAR_ID	GENMASK_ULL(34, 32)
+/* AFU MMIO access permission. 1 - VF, 0 - PF. */
+#define FME_PORT_OFST_ACC_CTRL	BIT_ULL(55)
+#define FME_PORT_OFST_ACC_PF	0
+#define FME_PORT_OFST_ACC_VF	1
+#define FME_PORT_OFST_IMP	BIT_ULL(60)
+
+/* PORT Header Register Set */
+#define PORT_HDR_DFH		DFH
+#define PORT_HDR_GUID_L		GUID_L
+#define PORT_HDR_GUID_H		GUID_H
+#define PORT_HDR_NEXT_AFU	NEXT_AFU
+#define PORT_HDR_CAP		0x30
+#define PORT_HDR_CTRL		0x38
+
+/* Port Capability Register Bitfield */
+#define PORT_CAP_PORT_NUM	GENMASK_ULL(1, 0)	/* ID of this port */
+#define PORT_CAP_MMIO_SIZE	GENMASK_ULL(23, 8)	/* MMIO size in KB */
+#define PORT_CAP_SUPP_INT_NUM	GENMASK_ULL(35, 32)	/* Interrupts num */
+
+/* Port Control Register Bitfield */
+#define PORT_CTRL_SFTRST	BIT_ULL(0)		/* Port soft reset */
+/* Latency tolerance reporting. '1' >= 40us, '0' < 40us. */
+#define PORT_CTRL_LATENCY	BIT_ULL(2)
+#define PORT_CTRL_SFTRST_ACK	BIT_ULL(4)		/* HW ack for reset */
+
+/**
+ * struct dfl_fpga_port_ops - port ops
+ *
+ * @name: name of this port ops, to match with port platform device.
+ * @owner: pointer to the module which owns this port ops.
+ * @node: node to link port ops to global list.
+ * @get_id: get port id from hardware.
+ * @enable_set: enable/disable the port.
+ */
+struct dfl_fpga_port_ops {
+	const char *name;
+	struct module *owner;
+	struct list_head node;
+	int (*get_id)(struct platform_device *pdev);
+	int (*enable_set)(struct platform_device *pdev, bool enable);
+};
+
+void dfl_fpga_port_ops_add(struct dfl_fpga_port_ops *ops);
+void dfl_fpga_port_ops_del(struct dfl_fpga_port_ops *ops);
+struct dfl_fpga_port_ops *dfl_fpga_port_ops_get(struct platform_device *pdev);
+void dfl_fpga_port_ops_put(struct dfl_fpga_port_ops *ops);
+int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id);
+
+/**
+ * struct dfl_feature_driver - sub feature's driver
+ *
+ * @id: sub feature id.
+ * @ops: ops of this sub feature.
+ */
+struct dfl_feature_driver {
+	u64 id;
+	const struct dfl_feature_ops *ops;
+};
+
+/**
+ * struct dfl_feature - sub feature of the feature devices
+ *
+ * @id: sub feature id.
+ * @resource_index: each sub feature has one mmio resource for its registers.
+ *		    this index is used to find its mmio resource from the
+ *		    feature dev (platform device)'s resources.
+ * @ioaddr: mapped mmio resource address.
+ * @ops: ops of this sub feature.
+ */
+struct dfl_feature {
+	u64 id;
+	int resource_index;
+	void __iomem *ioaddr;
+	const struct dfl_feature_ops *ops;
+};
+
+#define DEV_STATUS_IN_USE	0
+
+/**
+ * struct dfl_feature_platform_data - platform data for feature devices
+ *
+ * @node: node to link feature devs to container device's port_dev_list.
+ * @lock: mutex to protect platform data.
+ * @cdev: cdev of feature dev.
+ * @dev: ptr to platform device linked with this platform data.
+ * @dfl_cdev: ptr to container device.
+ * @disable_count: count for port disable.
+ * @num: number of sub features.
+ * @dev_status: dev status (e.g. DEV_STATUS_IN_USE).
+ * @private: ptr to feature dev private data.
+ * @features: sub features of this feature dev.
+ */
+struct dfl_feature_platform_data {
+	struct list_head node;
+	struct mutex lock;
+	struct cdev cdev;
+	struct platform_device *dev;
+	struct dfl_fpga_cdev *dfl_cdev;
+	unsigned int disable_count;
+	unsigned long dev_status;
+	void *private;
+	int num;
+	struct dfl_feature features[0];
+};
+
+static inline
+int dfl_feature_dev_use_begin(struct dfl_feature_platform_data *pdata)
+{
+	/* Test and set IN_USE flags to ensure file is exclusively used */
+	if (test_and_set_bit_lock(DEV_STATUS_IN_USE, &pdata->dev_status))
+		return -EBUSY;
+
+	return 0;
+}
+
+static inline
+void dfl_feature_dev_use_end(struct dfl_feature_platform_data *pdata)
+{
+	clear_bit_unlock(DEV_STATUS_IN_USE, &pdata->dev_status);
+}
+
+static inline
+void dfl_fpga_pdata_set_private(struct dfl_feature_platform_data *pdata,
+				void *private)
+{
+	pdata->private = private;
+}
+
+static inline
+void *dfl_fpga_pdata_get_private(struct dfl_feature_platform_data *pdata)
+{
+	return pdata->private;
+}
+
+struct dfl_feature_ops {
+	int (*init)(struct platform_device *pdev, struct dfl_feature *feature);
+	void (*uinit)(struct platform_device *pdev,
+		      struct dfl_feature *feature);
+	long (*ioctl)(struct platform_device *pdev, struct dfl_feature *feature,
+		      unsigned int cmd, unsigned long arg);
+};
+
+#define DFL_FPGA_FEATURE_DEV_FME	"dfl-fme"
+#define DFL_FPGA_FEATURE_DEV_PORT	"dfl-port"
+
+static inline int dfl_feature_platform_data_size(const int num)
+{
+	return sizeof(struct dfl_feature_platform_data) +
+		num * sizeof(struct dfl_feature);
+}
+
+void dfl_fpga_dev_feature_uinit(struct platform_device *pdev);
+int dfl_fpga_dev_feature_init(struct platform_device *pdev,
+			      struct dfl_feature_driver *feature_drvs);
+
+int dfl_fpga_dev_ops_register(struct platform_device *pdev,
+			      const struct file_operations *fops,
+			      struct module *owner);
+void dfl_fpga_dev_ops_unregister(struct platform_device *pdev);
+
+static inline
+struct platform_device *dfl_fpga_inode_to_feature_dev(struct inode *inode)
+{
+	struct dfl_feature_platform_data *pdata;
+
+	pdata = container_of(inode->i_cdev, struct dfl_feature_platform_data,
+			     cdev);
+	return pdata->dev;
+}
+
+#define dfl_fpga_dev_for_each_feature(pdata, feature)			\
+	for ((feature) = (pdata)->features;				\
+	     (feature) < (pdata)->features + (pdata)->num; (feature)++)
+
+static inline
+struct dfl_feature *dfl_get_feature_by_id(struct device *dev, u64 id)
+{
+	struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+	struct dfl_feature *feature;
+
+	dfl_fpga_dev_for_each_feature(pdata, feature)
+		if (feature->id == id)
+			return feature;
+
+	return NULL;
+}
+
+static inline
+void __iomem *dfl_get_feature_ioaddr_by_id(struct device *dev, u64 id)
+{
+	struct dfl_feature *feature = dfl_get_feature_by_id(dev, id);
+
+	if (feature && feature->ioaddr)
+		return feature->ioaddr;
+
+	WARN_ON(1);
+	return NULL;
+}
+
+static inline bool is_dfl_feature_present(struct device *dev, u64 id)
+{
+	return !!dfl_get_feature_ioaddr_by_id(dev, id);
+}
+
+static inline
+struct device *dfl_fpga_pdata_to_parent(struct dfl_feature_platform_data *pdata)
+{
+	return pdata->dev->dev.parent->parent;
+}
+
+static inline bool dfl_feature_is_fme(void __iomem *base)
+{
+	u64 v = readq(base + DFH);
+
+	return (FIELD_GET(DFH_TYPE, v) == DFH_TYPE_FIU) &&
+		(FIELD_GET(DFH_ID, v) == DFH_ID_FIU_FME);
+}
+
+static inline bool dfl_feature_is_port(void __iomem *base)
+{
+	u64 v = readq(base + DFH);
+
+	return (FIELD_GET(DFH_TYPE, v) == DFH_TYPE_FIU) &&
+		(FIELD_GET(DFH_ID, v) == DFH_ID_FIU_PORT);
+}
+
+/**
+ * struct dfl_fpga_enum_info - DFL FPGA enumeration information
+ *
+ * @dev: parent device.
+ * @dfls: list of device feature lists.
+ */
+struct dfl_fpga_enum_info {
+	struct device *dev;
+	struct list_head dfls;
+};
+
+/**
+ * struct dfl_fpga_enum_dfl - DFL FPGA enumeration device feature list info
+ *
+ * @start: base address of this device feature list.
+ * @len: size of this device feature list.
+ * @ioaddr: mapped base address of this device feature list.
+ * @node: node in list of device feature lists.
+ */
+struct dfl_fpga_enum_dfl {
+	resource_size_t start;
+	resource_size_t len;
+
+	void __iomem *ioaddr;
+
+	struct list_head node;
+};
+
+struct dfl_fpga_enum_info *dfl_fpga_enum_info_alloc(struct device *dev);
+int dfl_fpga_enum_info_add_dfl(struct dfl_fpga_enum_info *info,
+			       resource_size_t start, resource_size_t len,
+			       void __iomem *ioaddr);
+void dfl_fpga_enum_info_free(struct dfl_fpga_enum_info *info);
+
+/**
+ * struct dfl_fpga_cdev - container device of DFL based FPGA
+ *
+ * @parent: parent device of this container device.
+ * @region: base fpga region.
+ * @fme_dev: FME feature device under this container device.
+ * @lock: mutex lock to protect the port device list.
+ * @port_dev_list: list of all port feature devices under this container device.
+ */
+struct dfl_fpga_cdev {
+	struct device *parent;
+	struct fpga_region *region;
+	struct device *fme_dev;
+	struct mutex lock;
+	struct list_head port_dev_list;
+};
+
+struct dfl_fpga_cdev *
+dfl_fpga_feature_devs_enumerate(struct dfl_fpga_enum_info *info);
+void dfl_fpga_feature_devs_remove(struct dfl_fpga_cdev *cdev);
+
+/*
+ * The device reference must be dropped with put_device() after use of the
+ * port platform device returned by the __dfl_fpga_cdev_find_port() and
+ * dfl_fpga_cdev_find_port() functions.
+ */
+struct platform_device *
+__dfl_fpga_cdev_find_port(struct dfl_fpga_cdev *cdev, void *data,
+			  int (*match)(struct platform_device *, void *));
+
+static inline struct platform_device *
+dfl_fpga_cdev_find_port(struct dfl_fpga_cdev *cdev, void *data,
+			int (*match)(struct platform_device *, void *))
+{
+	struct platform_device *pdev;
+
+	mutex_lock(&cdev->lock);
+	pdev = __dfl_fpga_cdev_find_port(cdev, data, match);
+	mutex_unlock(&cdev->lock);
+
+	return pdev;
+}
+
+#endif /* __FPGA_DFL_H */
drivers/fpga/fpga-mgr.c  (+28)

@@ -406 +406 @@
 	return sprintf(buf, "%s\n", state_str[mgr->state]);
 }
 
+static ssize_t status_show(struct device *dev,
+			   struct device_attribute *attr, char *buf)
+{
+	struct fpga_manager *mgr = to_fpga_manager(dev);
+	u64 status;
+	int len = 0;
+
+	if (!mgr->mops->status)
+		return -ENOENT;
+
+	status = mgr->mops->status(mgr);
+
+	if (status & FPGA_MGR_STATUS_OPERATION_ERR)
+		len += sprintf(buf + len, "reconfig operation error\n");
+	if (status & FPGA_MGR_STATUS_CRC_ERR)
+		len += sprintf(buf + len, "reconfig CRC error\n");
+	if (status & FPGA_MGR_STATUS_INCOMPATIBLE_IMAGE_ERR)
+		len += sprintf(buf + len, "reconfig incompatible image\n");
+	if (status & FPGA_MGR_STATUS_IP_PROTOCOL_ERR)
+		len += sprintf(buf + len, "reconfig IP protocol error\n");
+	if (status & FPGA_MGR_STATUS_FIFO_OVERFLOW_ERR)
+		len += sprintf(buf + len, "reconfig fifo overflow error\n");
+
+	return len;
+}
+
 static DEVICE_ATTR_RO(name);
 static DEVICE_ATTR_RO(state);
+static DEVICE_ATTR_RO(status);
 
 static struct attribute *fpga_mgr_attrs[] = {
 	&dev_attr_name.attr,
 	&dev_attr_state.attr,
+	&dev_attr_status.attr,
 	NULL,
 };
 ATTRIBUTE_GROUPS(fpga_mgr);
drivers/fpga/fpga-region.c  (+22)

@@ -158 +158 @@
 }
 EXPORT_SYMBOL_GPL(fpga_region_program_fpga);
 
+static ssize_t compat_id_show(struct device *dev,
+			      struct device_attribute *attr, char *buf)
+{
+	struct fpga_region *region = to_fpga_region(dev);
+
+	if (!region->compat_id)
+		return -ENOENT;
+
+	return sprintf(buf, "%016llx%016llx\n",
+		       (unsigned long long)region->compat_id->id_h,
+		       (unsigned long long)region->compat_id->id_l);
+}
+
+static DEVICE_ATTR_RO(compat_id);
+
+static struct attribute *fpga_region_attrs[] = {
+	&dev_attr_compat_id.attr,
+	NULL,
+};
+ATTRIBUTE_GROUPS(fpga_region);
+
 /**
  * fpga_region_create - alloc and init a struct fpga_region
  * @dev: device parent
@@ -258 +279 @@
 	if (IS_ERR(fpga_region_class))
 		return PTR_ERR(fpga_region_class);
 
+	fpga_region_class->dev_groups = fpga_region_groups;
 	fpga_region_class->dev_release = fpga_region_dev_release;
 
 	return 0;
drivers/fsi/Kconfig  (+32)

@@ -12 +12 @@
 
 if FSI
 
+config FSI_NEW_DEV_NODE
+	bool "Create '/dev/fsi' directory for char devices"
+	default n
+	---help---
+	  This option causes char devices created for FSI devices to be
+	  located under a common /dev/fsi/ directory. Set to N unless your
+	  userspace has been updated to handle the new location.
+
+	  Additionally, it also causes the char device names to be offset
+	  by one so that chip 0 will have /dev/scom1 and chip 1 /dev/scom2,
+	  to match old userspace expectations.
+
+	  New userspace will use udev rules to generate predictable access
+	  symlinks in /dev/fsi/by-path when this option is enabled.
+
 config FSI_MASTER_GPIO
 	tristate "GPIO-based FSI master"
 	depends on GPIOLIB
@@ -27 +42 @@
 	  allow chaining of FSI links to an arbitrary depth. This allows for
 	  a high target device fanout.
 
+config FSI_MASTER_AST_CF
+	tristate "FSI master based on Aspeed ColdFire coprocessor"
+	depends on GPIOLIB
+	depends on GPIO_ASPEED
+	---help---
+	  This option enables an FSI master using the AST2400 and AST2500 GPIO
+	  lines driven by the internal ColdFire coprocessor. This requires
+	  the corresponding machine-specific ColdFire firmware to be available.
+
 config FSI_SCOM
 	tristate "SCOM FSI client device driver"
 	---help---
 	  This option enables an FSI based SCOM device driver.
+
+config FSI_SBEFIFO
+	tristate "SBEFIFO FSI client device driver"
+	depends on OF_ADDRESS
+	---help---
+	  This option enables an FSI based SBEFIFO device driver. The SBEFIFO
+	  is a pipe-like FSI device for communicating with the self boot
+	  engine (SBE) on POWER processors.
 
 endif
drivers/fsi/Makefile  (+2)

@@ -2 +2 @@
 obj-$(CONFIG_FSI)		+= fsi-core.o
 obj-$(CONFIG_FSI_MASTER_HUB)	+= fsi-master-hub.o
 obj-$(CONFIG_FSI_MASTER_GPIO)	+= fsi-master-gpio.o
+obj-$(CONFIG_FSI_MASTER_AST_CF)	+= fsi-master-ast-cf.o
 obj-$(CONFIG_FSI_SCOM)		+= fsi-scom.o
+obj-$(CONFIG_FSI_SBEFIFO)	+= fsi-sbefifo.o
drivers/fsi/cf-fsi-fw.h  (+157)

+// SPDX-License-Identifier: GPL-2.0+
+#ifndef __CF_FSI_FW_H
+#define __CF_FSI_FW_H
+
+/*
+ * uCode file layout
+ *
+ * 0000...03ff : m68k exception vectors
+ * 0400...04ff : Header info & boot config block
+ * 0500....... : Code & stack
+ */
+
+/*
+ * Header info & boot config area
+ *
+ * The header info is built into the ucode and provides version and
+ * platform information.
+ *
+ * The boot config needs to be adjusted by the ARM prior to starting
+ * the ucode if the Command/Status area isn't at 0x320000 in CF space
+ * (ie. beginning of SRAM).
+ */
+
+#define HDR_OFFSET		0x400
+
+/* Info: Signature & version */
+#define HDR_SYS_SIG		0x00	/* 2 bytes system signature */
+#define  SYS_SIG_SHARED		0x5348
+#define  SYS_SIG_SPLIT		0x5350
+#define HDR_FW_VERS		0x02	/* 2 bytes Major.Minor */
+#define HDR_API_VERS		0x04	/* 2 bytes Major.Minor */
+#define  API_VERSION_MAJ	2	/* Current version */
+#define  API_VERSION_MIN	1
+#define HDR_FW_OPTIONS		0x08	/* 4 bytes option flags */
+#define  FW_OPTION_TRACE_EN	0x00000001	/* FW tracing enabled */
+#define  FW_OPTION_CONT_CLOCK	0x00000002	/* Continuous clocking supported */
+#define HDR_FW_SIZE		0x10	/* 4 bytes size for combo image */
+
+/* Boot Config: Address of Command/Status area */
+#define HDR_CMD_STAT_AREA	0x80	/* 4 bytes CF address */
+#define HDR_FW_CONTROL		0x84	/* 4 bytes control flags */
+#define  FW_CONTROL_CONT_CLOCK	0x00000002	/* Continuous clocking enabled */
+#define  FW_CONTROL_DUMMY_RD	0x00000004	/* Extra dummy read (AST2400) */
+#define  FW_CONTROL_USE_STOP	0x00000008	/* Use STOP instructions */
+#define HDR_CLOCK_GPIO_VADDR	0x90	/* 2 bytes offset from GPIO base */
+#define HDR_CLOCK_GPIO_DADDR	0x92	/* 2 bytes offset from GPIO base */
+#define HDR_DATA_GPIO_VADDR	0x94	/* 2 bytes offset from GPIO base */
+#define HDR_DATA_GPIO_DADDR	0x96	/* 2 bytes offset from GPIO base */
+#define HDR_TRANS_GPIO_VADDR	0x98	/* 2 bytes offset from GPIO base */
+#define HDR_TRANS_GPIO_DADDR	0x9a	/* 2 bytes offset from GPIO base */
+#define HDR_CLOCK_GPIO_BIT	0x9c	/* 1 byte bit number */
+#define HDR_DATA_GPIO_BIT	0x9d	/* 1 byte bit number */
+#define HDR_TRANS_GPIO_BIT	0x9e	/* 1 byte bit number */
+
+/*
+ * Command/Status area layout: Main part
+ */
+
+/* Command/Status register:
+ *
+ * +----------------------------+
+ * | STAT | RLEN | CLEN |  CMD  |
+ * |  8   |  8   |  8   |   8   |
+ * +----------------------------+
+ *    |      |      |       |
+ *    |      |      |       +-- Command code
+ *    |      |      +---------- Command len (in bits)
+ *    |      +----------------- Response len (in bits)
+ *    +------------------------ Status
+ *
+ * Due to the big endian layout, that means that a byte read will
+ * return the status byte.
+ */
+#define CMD_STAT_REG		0x00
+#define  CMD_REG_CMD_MASK	0x000000ff
+#define  CMD_REG_CMD_SHIFT	0
+#define   CMD_NONE		0x00
+#define   CMD_COMMAND		0x01
+#define   CMD_BREAK		0x02
+#define   CMD_IDLE_CLOCKS	0x03	/* clen = #clocks */
+#define   CMD_INVALID		0xff
+#define  CMD_REG_CLEN_MASK	0x0000ff00
+#define  CMD_REG_CLEN_SHIFT	8
+#define  CMD_REG_RLEN_MASK	0x00ff0000
+#define  CMD_REG_RLEN_SHIFT	16
+#define  CMD_REG_STAT_MASK	0xff000000
+#define  CMD_REG_STAT_SHIFT	24
+#define   STAT_WORKING		0x00
+#define   STAT_COMPLETE		0x01
+#define   STAT_ERR_INVAL_CMD	0x80
+#define   STAT_ERR_INVAL_IRQ	0x81
+#define   STAT_ERR_MTOE		0x82
+
+/* Response tag & CRC */
+#define STAT_RTAG		0x04
+
+/* Response CRC */
+#define STAT_RCRC		0x05
+
+/* Echo and Send delay */
+#define ECHO_DLY_REG		0x08
+#define SEND_DLY_REG		0x09
+
+/* Command data area
+ *
+ * Last byte of message must be left aligned
+ */
+#define CMD_DATA		0x10	/* 64 bits of data */
+
+/* Response data area, right aligned, unused top bits are 1 */
+#define RSP_DATA		0x20	/* 32 bits of data */
+
+/* Misc */
+#define INT_CNT			0x30	/* 32-bit interrupt count */
+#define BAD_INT_VEC		0x34	/* 32-bit bad interrupt vector # */
+#define CF_STARTED		0x38	/* byte, set to -1 when copro started */
+#define CLK_CNT			0x3c	/* 32-bit, clock count (debug only) */
+
+/*
+ * SRAM layout: GPIO arbitration part
+ */
+#define ARB_REG			0x40
+#define  ARB_ARM_REQ		0x01
+#define  ARB_ARM_ACK		0x02
+
+/* Misc2 */
+#define CF_RESET_D0		0x50
+#define CF_RESET_D1		0x54
+#define BAD_INT_S0		0x58
+#define BAD_INT_S1		0x5c
+#define STOP_CNT		0x60
+
+/* Internal */
+
+/*
+ * SRAM layout: Trace buffer (debug builds only)
+ */
+#define TRACEBUF		0x100
+#define  TR_CLKOBIT0		0xc0
+#define  TR_CLKOBIT1		0xc1
+#define  TR_CLKOSTART		0x82
+#define  TR_OLEN		0x83	/* + len */
+#define  TR_CLKZ		0x84	/* + count */
+#define  TR_CLKWSTART		0x85
+#define  TR_CLKTAG		0x86	/* + tag */
+#define  TR_CLKDATA		0x87	/* + len */
+#define  TR_CLKCRC		0x88	/* + raw crc */
+#define  TR_CLKIBIT0		0x90
+#define  TR_CLKIBIT1		0x91
+#define  TR_END			0xff
+
+#endif /* __CF_FSI_FW_H */
drivers/fsi/fsi-core.c  (+475 -115)

@@ -11 +11 @@
  * but WITHOUT ANY WARRANTY; without even the implied warranty of
  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
  * GNU General Public License for more details.
+ *
+ * TODO:
+ *  - Rework topology
+ *  - s/chip_id/chip_loc
+ *  - s/cfam/chip (cfam_id -> chip_id etc...)
  */
 
 #include <linux/crc4.h>
@@ -21 +26 @@
 #include <linux/of.h>
 #include <linux/slab.h>
 #include <linux/bitops.h>
+#include <linux/cdev.h>
+#include <linux/fs.h>
+#include <linux/uaccess.h>
 
 #include "fsi-master.h"
 
@@ -78 +86 @@
 struct fsi_slave {
 	struct device		dev;
 	struct fsi_master	*master;
-	int			id;
-	int			link;
+	struct cdev		cdev;
+	int			cdev_idx;
+	int			id;	/* FSI address */
+	int			link;	/* FSI link# */
+	u32			cfam_id;
+	int			chip_id;
 	uint32_t		size;	/* size of slave address space */
+	u8			t_send_delay;
+	u8			t_echo_delay;
 };
 
 #define to_fsi_master(d) container_of(d, struct fsi_master, dev)
@@ -88 +102 @@
 
 static const int slave_retries = 2;
 static int discard_errors;
+
+static dev_t fsi_base_dev;
+static DEFINE_IDA(fsi_minor_ida);
+#define FSI_CHAR_MAX_DEVICES	0x1000
+
+/* Legacy /dev numbering: 4 devices per chip, 16 chips */
+#define FSI_CHAR_LEGACY_TOP	64
 
 static int fsi_master_read(struct fsi_master *master, int link,
 		uint8_t slave_id, uint32_t addr, void *val, size_t size);
@@ -190 +211 @@
 static int fsi_slave_report_and_clear_errors(struct fsi_slave *slave)
 {
 	struct fsi_master *master = slave->master;
-	uint32_t irq, stat;
+	__be32 irq, stat;
 	int rc, link;
 	uint8_t id;
 
@@ -215 +236 @@
 			&irq, sizeof(irq));
 }
 
-static int fsi_slave_set_smode(struct fsi_master *master, int link, int id);
+/* Encode slave local bus echo delay */
+static inline uint32_t fsi_smode_echodly(int x)
+{
+	return (x & FSI_SMODE_ED_MASK) << FSI_SMODE_ED_SHIFT;
+}
+
+/* Encode slave local bus send delay */
+static inline uint32_t fsi_smode_senddly(int x)
+{
+	return (x & FSI_SMODE_SD_MASK) << FSI_SMODE_SD_SHIFT;
+}
+
+/* Encode slave local bus clock rate ratio */
+static inline uint32_t fsi_smode_lbcrr(int x)
+{
+	return (x & FSI_SMODE_LBCRR_MASK) << FSI_SMODE_LBCRR_SHIFT;
+}
+
+/* Encode slave ID */
+static inline uint32_t fsi_smode_sid(int x)
+{
+	return (x & FSI_SMODE_SID_MASK) << FSI_SMODE_SID_SHIFT;
+}
+
+static uint32_t fsi_slave_smode(int id, u8 t_senddly, u8 t_echodly)
+{
+	return FSI_SMODE_WSC | FSI_SMODE_ECRC
+		| fsi_smode_sid(id)
+		| fsi_smode_echodly(t_echodly - 1)
+		| fsi_smode_senddly(t_senddly - 1)
+		| fsi_smode_lbcrr(0x8);
+}
+
+static int fsi_slave_set_smode(struct fsi_slave *slave)
+{
+	uint32_t smode;
+	__be32 data;
+
+	/* set our smode register with the slave ID field to 0; this enables
+	 * extended slave addressing
+	 */
+	smode = fsi_slave_smode(slave->id, slave->t_send_delay,
+				slave->t_echo_delay);
+	data = cpu_to_be32(smode);
+
+	return fsi_master_write(slave->master, slave->link, slave->id,
+				FSI_SLAVE_BASE + FSI_SMODE,
+				&data, sizeof(data));
+}
 
 static int fsi_slave_handle_error(struct fsi_slave *slave, bool write,
 				  uint32_t addr, size_t size)
@@ -223 +290 @@
 	struct fsi_master *master = slave->master;
 	int rc, link;
 	uint32_t reg;
-	uint8_t id;
+	uint8_t id, send_delay, echo_delay;
 
 	if (discard_errors)
 		return -1;
@@ -254 +321 @@
 		}
 	}
 
+	send_delay = slave->t_send_delay;
+	echo_delay = slave->t_echo_delay;
+
 	/* getting serious, reset the slave via BREAK */
 	rc = fsi_master_break(master, link);
 	if (rc)
 		return rc;
 
-	rc = fsi_slave_set_smode(master, link, id);
+	slave->t_send_delay = send_delay;
+	slave->t_echo_delay = echo_delay;
+
+	rc = fsi_slave_set_smode(slave);
 	if (rc)
 		return rc;
+
+	if (master->link_config)
+		master->link_config(master, link,
+				    slave->t_send_delay,
+				    slave->t_echo_delay);
 
 	return fsi_slave_report_and_clear_errors(slave);
 }
@@ -390 +468 @@
 static int fsi_slave_scan(struct fsi_slave *slave)
 {
 	uint32_t engine_addr;
-	uint32_t conf;
 	int rc, i;
 
 	/*
@@ -404 +481 @@
 	for (i = 2; i < engine_page_size / sizeof(uint32_t); i++) {
 		uint8_t slots, version, type, crc;
 		struct fsi_device *dev;
+		uint32_t conf;
+		__be32 data;
 
-		rc = fsi_slave_read(slave, (i + 1) * sizeof(conf),
-				&conf, sizeof(conf));
+		rc = fsi_slave_read(slave, (i + 1) * sizeof(data),
+				&data, sizeof(data));
 		if (rc) {
 			dev_warn(&slave->dev,
 				"error reading slave registers\n");
 			return -1;
 		}
-		conf = be32_to_cpu(conf);
+		conf = be32_to_cpu(data);
 
 		crc = crc4(0, conf, 32);
 		if (crc) {
@@ -539 +618 @@
 	.write = fsi_slave_sysfs_raw_write,
 };
 
-static ssize_t fsi_slave_sysfs_term_write(struct file *file,
-		struct kobject *kobj, struct bin_attribute *attr,
-		char *buf, loff_t off, size_t count)
-{
-	struct fsi_slave *slave = to_fsi_slave(kobj_to_dev(kobj));
-	struct fsi_master *master = slave->master;
-
-	if (!master->term)
-		return -ENODEV;
-
-	master->term(master, slave->link, slave->id);
-	return count;
-}
-
-static const struct bin_attribute fsi_slave_term_attr = {
-	.attr = {
-		.name = "term",
-		.mode = 0200,
-	},
-	.size = 0,
-	.write = fsi_slave_sysfs_term_write,
-};
-
-/* Encode slave local bus echo delay */
-static inline uint32_t fsi_smode_echodly(int x)
-{
-	return (x & FSI_SMODE_ED_MASK) << FSI_SMODE_ED_SHIFT;
-}
-
-/* Encode slave local bus send delay */
-static inline uint32_t fsi_smode_senddly(int x)
-{
-	return (x & FSI_SMODE_SD_MASK) << FSI_SMODE_SD_SHIFT;
-}
-
-/* Encode slave local bus clock rate ratio */
-static inline uint32_t fsi_smode_lbcrr(int x)
-{
-	return (x & FSI_SMODE_LBCRR_MASK) << FSI_SMODE_LBCRR_SHIFT;
-}
-
-/* Encode slave ID */
-static inline uint32_t fsi_smode_sid(int x)
-{
-	return (x & FSI_SMODE_SID_MASK) << FSI_SMODE_SID_SHIFT;
-}
-
-static uint32_t fsi_slave_smode(int id)
-{
-	return FSI_SMODE_WSC | FSI_SMODE_ECRC
-		| fsi_smode_sid(id)
-		| fsi_smode_echodly(0xf) | fsi_smode_senddly(0xf)
-		| fsi_smode_lbcrr(0x8);
-}
-
-static int fsi_slave_set_smode(struct fsi_master *master, int link, int id)
-{
-	uint32_t smode;
-
-	/* set our smode register with the slave ID field to 0; this enables
-	 * extended slave addressing
-	 */
-	smode = fsi_slave_smode(id);
-	smode = cpu_to_be32(smode);
-
-	return fsi_master_write(master, link, id, FSI_SLAVE_BASE + FSI_SMODE,
-			&smode, sizeof(smode));
-}
-
 static void fsi_slave_release(struct device *dev)
 {
 	struct fsi_slave *slave = to_fsi_slave(dev);
 
+	fsi_free_minor(slave->dev.devt);
 	of_node_put(dev->of_node);
 	kfree(slave);
 }
@@ -659 +670 @@
 	return NULL;
 }
 
+static ssize_t cfam_read(struct file *filep, char __user *buf, size_t count,
+			 loff_t *offset)
+{
+	struct fsi_slave *slave = filep->private_data;
+	size_t total_len, read_len;
+	loff_t off = *offset;
+	ssize_t rc;
+
+	if (off < 0)
+		return -EINVAL;
+
+	if (off > 0xffffffff || count > 0xffffffff || off + count > 0xffffffff)
+		return -EINVAL;
+
+	for (total_len = 0; total_len < count; total_len += read_len) {
+		__be32 data;
+
+		read_len = min_t(size_t, count, 4);
+		read_len -= off & 0x3;
+
+		rc = fsi_slave_read(slave, off, &data, read_len);
+		if (rc)
+			goto fail;
+		rc = copy_to_user(buf + total_len, &data, read_len);
+		if (rc) {
+			rc = -EFAULT;
+			goto fail;
+		}
+		off += read_len;
+	}
+	rc = count;
+ fail:
+	*offset = off;
+	return rc;
+}
+
+static ssize_t cfam_write(struct file *filep, const char __user *buf,
+			  size_t count, loff_t *offset)
+{
+	struct fsi_slave *slave = filep->private_data;
+	size_t total_len, write_len;
+	loff_t off = *offset;
+	ssize_t rc;
+
+	if (off < 0)
+		return -EINVAL;
+
+	if (off > 0xffffffff || count > 0xffffffff || off + count > 0xffffffff)
+		return -EINVAL;
+
+	for (total_len = 0; total_len < count; total_len += write_len) {
+		__be32 data;
+
+		write_len = min_t(size_t, count, 4);
+		write_len -= off & 0x3;
+
+		rc = copy_from_user(&data, buf + total_len, write_len);
+		if (rc) {
+			rc = -EFAULT;
+			goto fail;
+		}
+		rc = fsi_slave_write(slave, off, &data, write_len);
+		if (rc)
+			goto fail;
+		off += write_len;
+	}
+	rc = count;
+ fail:
+	*offset = off;
+	return rc;
+}
+
+static loff_t cfam_llseek(struct file *file, loff_t offset, int whence)
+{
+	switch (whence) {
+	case SEEK_CUR:
+		break;
+	case SEEK_SET:
+		file->f_pos = offset;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	return offset;
+}
+
+static int cfam_open(struct inode *inode, struct file *file)
+{
+	struct fsi_slave *slave =
+		container_of(inode->i_cdev, struct fsi_slave, cdev);
+
+	file->private_data = slave;
+
+	return 0;
+}
+
+static const struct file_operations cfam_fops = {
+	.owner		= THIS_MODULE,
+	.open		= cfam_open,
+	.llseek		= cfam_llseek,
+	.read		= cfam_read,
+	.write		= cfam_write,
+};
+
+static ssize_t send_term_store(struct device *dev,
+			       struct device_attribute *attr,
+			       const char *buf, size_t count)
+{
+	struct fsi_slave *slave = to_fsi_slave(dev);
+	struct fsi_master *master = slave->master;
+
+	if (!master->term)
+		return -ENODEV;
+
+	master->term(master, slave->link, slave->id);
+	return count;
+}
+
+static DEVICE_ATTR_WO(send_term);
+
+static ssize_t slave_send_echo_show(struct device *dev,
+				    struct device_attribute *attr,
+				    char *buf)
+{
+	struct fsi_slave *slave = to_fsi_slave(dev);
+
+	return sprintf(buf, "%u\n", slave->t_send_delay);
+}
+
+static ssize_t slave_send_echo_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct fsi_slave *slave = to_fsi_slave(dev);
+	struct fsi_master *master = slave->master;
+	unsigned long val;
+	int rc;
+
+	if (kstrtoul(buf, 0, &val) < 0)
+		return -EINVAL;
+
+	if (val < 1 || val > 16)
+		return -EINVAL;
+
+	if (!master->link_config)
+		return -ENXIO;
+
+	/* Current HW mandates that send and echo delay are identical */
+	slave->t_send_delay = val;
+	slave->t_echo_delay = val;
+
+	rc = fsi_slave_set_smode(slave);
+	if (rc < 0)
+		return rc;
+	if (master->link_config)
+		master->link_config(master, slave->link,
+				    slave->t_send_delay,
+				    slave->t_echo_delay);
+
+	return count;
+}
+
+static DEVICE_ATTR(send_echo_delays, 0600,
+		   slave_send_echo_show, slave_send_echo_store);
+
+static ssize_t chip_id_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	struct fsi_slave *slave = to_fsi_slave(dev);
+
+	return sprintf(buf, "%d\n", slave->chip_id);
+}
+
+static DEVICE_ATTR_RO(chip_id);
+
+static ssize_t cfam_id_show(struct device *dev,
+			    struct device_attribute *attr,
+			    char *buf)
+{
+	struct fsi_slave *slave = to_fsi_slave(dev);
+
+	return sprintf(buf, "0x%x\n", slave->cfam_id);
+}
+
+static DEVICE_ATTR_RO(cfam_id);
+
+static struct attribute *cfam_attr[] = {
+	&dev_attr_send_echo_delays.attr,
+	&dev_attr_chip_id.attr,
+	&dev_attr_cfam_id.attr,
+	&dev_attr_send_term.attr,
+	NULL,
+};
+
+static const struct attribute_group cfam_attr_group = {
+	.attrs = cfam_attr,
+};
+
+static const struct attribute_group *cfam_attr_groups[] = {
+	&cfam_attr_group,
+	NULL,
+};
+
+static char *cfam_devnode(struct device *dev, umode_t *mode,
+			  kuid_t *uid, kgid_t *gid)
+{
+	struct fsi_slave *slave = to_fsi_slave(dev);
+
+#ifdef CONFIG_FSI_NEW_DEV_NODE
+	return kasprintf(GFP_KERNEL, "fsi/cfam%d", slave->cdev_idx);
+#else
+	return kasprintf(GFP_KERNEL, "cfam%d", slave->cdev_idx);
+#endif
+}
+
+static const struct device_type cfam_type = {
+	.name = "cfam",
+	.devnode = cfam_devnode,
+	.groups = cfam_attr_groups
+};
+
+static char *fsi_cdev_devnode(struct device *dev, umode_t *mode,
+			      kuid_t *uid, kgid_t *gid)
+{
+#ifdef CONFIG_FSI_NEW_DEV_NODE
+	return kasprintf(GFP_KERNEL, "fsi/%s", dev_name(dev));
+#else
+	return kasprintf(GFP_KERNEL, "%s", dev_name(dev));
+#endif
+}
+
+const struct device_type fsi_cdev_type = {
+	.name = "fsi-cdev",
+	.devnode = fsi_cdev_devnode,
+};
+EXPORT_SYMBOL_GPL(fsi_cdev_type);
+
+/* Backward compatible /dev/ numbering in "old style" mode */
+static int fsi_adjust_index(int index)
+{
+#ifdef CONFIG_FSI_NEW_DEV_NODE
+	return index;
+#else
+	return index + 1;
+#endif
+}
+
+static int __fsi_get_new_minor(struct fsi_slave *slave, enum fsi_dev_type type,
+			       dev_t *out_dev, int *out_index)
+{
+	int cid = slave->chip_id;
+	int id;
+
+	/* Check if we qualify for legacy numbering */
+	if (cid >= 0 && cid < 16 && type < 4) {
+		/* Try reserving the legacy number */
+		id = (cid << 4) | type;
+		id = ida_simple_get(&fsi_minor_ida, id, id + 1, GFP_KERNEL);
+		if (id >= 0) {
+			*out_index = fsi_adjust_index(cid);
+			*out_dev = fsi_base_dev + id;
+			return 0;
+		}
+		/* Other failure */
+		if (id != -ENOSPC)
+			return id;
+		/* Fallback to non-legacy allocation */
+	}
+	id = ida_simple_get(&fsi_minor_ida, FSI_CHAR_LEGACY_TOP,
+			    FSI_CHAR_MAX_DEVICES, GFP_KERNEL);
+	if (id < 0)
+		return id;
+	*out_index = fsi_adjust_index(id);
+	*out_dev = fsi_base_dev + id;
+	return 0;
+}
+
+int fsi_get_new_minor(struct fsi_device *fdev, enum fsi_dev_type type,
+		      dev_t *out_dev, int *out_index)
+{
+	return __fsi_get_new_minor(fdev->slave, type, out_dev, out_index);
+}
+EXPORT_SYMBOL_GPL(fsi_get_new_minor);
+
+void fsi_free_minor(dev_t dev)
+{
+	ida_simple_remove(&fsi_minor_ida, MINOR(dev));
+}
+EXPORT_SYMBOL_GPL(fsi_free_minor);
+
 static int fsi_slave_init(struct fsi_master *master, int link, uint8_t id)
 {
-	uint32_t chip_id, llmode;
+	uint32_t cfam_id;
 	struct fsi_slave *slave;
 	uint8_t crc;
+	__be32 data, llmode;
 	int rc;
 
 	/* Currently, we only support single slaves on a link, and use the
@@ -672 +975 @@
 	if (id != 0)
 		return -EINVAL;
 
-	rc = fsi_master_read(master, link, id, 0, &chip_id, sizeof(chip_id));
+	rc = fsi_master_read(master, link, id, 0, &data, sizeof(data));
 	if (rc) {
 		dev_dbg(&master->dev, "can't read slave %02x:%02x %d\n",
 			link, id, rc);
 		return -ENODEV;
 	}
-	chip_id =
be32_to_cpu(chip_id); 681 + cfam_id = be32_to_cpu(data); 985 682 986 - crc = crc4(0, chip_id, 32); 683 + crc = crc4(0, cfam_id, 32); 987 684 if (crc) { 988 - dev_warn(&master->dev, "slave %02x:%02x invalid chip id CRC!\n", 685 + dev_warn(&master->dev, "slave %02x:%02x invalid cfam id CRC!\n", 989 686 link, id); 990 687 return -EIO; 991 688 } 992 689 993 690 dev_dbg(&master->dev, "fsi: found chip %08x at %02x:%02x:%02x\n", 994 - chip_id, master->idx, link, id); 995 - 996 - rc = fsi_slave_set_smode(master, link, id); 997 - if (rc) { 998 - dev_warn(&master->dev, 999 - "can't set smode on slave:%02x:%02x %d\n", 1000 - link, id, rc); 1001 - return -ENODEV; 1002 - } 691 + cfam_id, master->idx, link, id); 1003 692 1004 693 /* If we're behind a master that doesn't provide a self-running bus 1005 694 * clock, put the slave into async mode ··· 1014 719 if (!slave) 1015 720 return -ENOMEM; 1016 721 1017 - slave->master = master; 722 + dev_set_name(&slave->dev, "slave@%02x:%02x", link, id); 723 + slave->dev.type = &cfam_type; 1018 724 slave->dev.parent = &master->dev; 1019 725 slave->dev.of_node = fsi_slave_find_of_node(master, link, id); 1020 726 slave->dev.release = fsi_slave_release; 727 + device_initialize(&slave->dev); 728 + slave->cfam_id = cfam_id; 729 + slave->master = master; 1021 730 slave->link = link; 1022 731 slave->id = id; 1023 732 slave->size = FSI_SLAVE_SIZE_23b; 733 + slave->t_send_delay = 16; 734 + slave->t_echo_delay = 16; 1024 735 1025 - dev_set_name(&slave->dev, "slave@%02x:%02x", link, id); 1026 - rc = device_register(&slave->dev); 1027 - if (rc < 0) { 1028 - dev_warn(&master->dev, "failed to create slave device: %d\n", 1029 - rc); 1030 - put_device(&slave->dev); 1031 - return rc; 736 + /* Get chip ID if any */ 737 + slave->chip_id = -1; 738 + if (slave->dev.of_node) { 739 + uint32_t prop; 740 + if (!of_property_read_u32(slave->dev.of_node, "chip-id", &prop)) 741 + slave->chip_id = prop; 742 + 1032 743 } 1033 744 745 + /* Allocate a minor in the FSI 
space */ 746 + rc = __fsi_get_new_minor(slave, fsi_dev_cfam, &slave->dev.devt, 747 + &slave->cdev_idx); 748 + if (rc) 749 + goto err_free; 750 + 751 + /* Create chardev for userspace access */ 752 + cdev_init(&slave->cdev, &cfam_fops); 753 + rc = cdev_device_add(&slave->cdev, &slave->dev); 754 + if (rc) { 755 + dev_err(&slave->dev, "Error %d creating slave device\n", rc); 756 + goto err_free; 757 + } 758 + 759 + rc = fsi_slave_set_smode(slave); 760 + if (rc) { 761 + dev_warn(&master->dev, 762 + "can't set smode on slave:%02x:%02x %d\n", 763 + link, id, rc); 764 + kfree(slave); 765 + return -ENODEV; 766 + } 767 + if (master->link_config) 768 + master->link_config(master, link, 769 + slave->t_send_delay, 770 + slave->t_echo_delay); 771 + 772 + /* Legacy raw file -> to be removed */ 1034 773 rc = device_create_bin_file(&slave->dev, &fsi_slave_raw_attr); 1035 774 if (rc) 1036 775 dev_warn(&slave->dev, "failed to create raw attr: %d\n", rc); 1037 776 1038 - rc = device_create_bin_file(&slave->dev, &fsi_slave_term_attr); 1039 - if (rc) 1040 - dev_warn(&slave->dev, "failed to create term attr: %d\n", rc); 1041 777 1042 778 rc = fsi_slave_scan(slave); 1043 779 if (rc) 1044 780 dev_dbg(&master->dev, "failed during slave scan with: %d\n", 1045 781 rc); 1046 782 783 + return rc; 784 + 785 + err_free: 786 + put_device(&slave->dev); 1047 787 return rc; 1048 788 } 1049 789 ··· 1144 814 */ 1145 815 static int fsi_master_break(struct fsi_master *master, int link) 1146 816 { 817 + int rc = 0; 818 + 1147 819 trace_fsi_master_break(master, link); 1148 820 1149 821 if (master->send_break) 1150 - return master->send_break(master, link); 822 + rc = master->send_break(master, link); 823 + if (master->link_config) 824 + master->link_config(master, link, 16, 16); 1151 825 1152 - return 0; 826 + return rc; 1153 827 } 1154 828 1155 829 static int fsi_master_scan(struct fsi_master *master) ··· 1188 854 1189 855 static int fsi_master_remove_slave(struct device *dev, void *arg) 1190 856 { 857 + 
struct fsi_slave *slave = to_fsi_slave(dev); 858 + 1191 859 device_for_each_child(dev, NULL, fsi_slave_remove_device); 1192 - device_unregister(dev); 860 + cdev_device_del(&slave->cdev, &slave->dev); 861 + put_device(dev); 1193 862 return 0; 1194 863 } 1195 864 ··· 1203 866 1204 867 int fsi_master_rescan(struct fsi_master *master) 1205 868 { 869 + int rc; 870 + 871 + mutex_lock(&master->scan_lock); 1206 872 fsi_master_unscan(master); 1207 - return fsi_master_scan(master); 873 + rc = fsi_master_scan(master); 874 + mutex_unlock(&master->scan_lock); 875 + 876 + return rc; 1208 877 } 1209 878 EXPORT_SYMBOL_GPL(fsi_master_rescan); 1210 879 ··· 1246 903 int rc; 1247 904 struct device_node *np; 1248 905 1249 - if (!master) 1250 - return -EINVAL; 1251 - 906 + mutex_init(&master->scan_lock); 1252 907 master->idx = ida_simple_get(&master_ida, 0, INT_MAX, GFP_KERNEL); 1253 908 dev_set_name(&master->dev, "fsi%d", master->idx); 1254 909 ··· 1258 917 1259 918 rc = device_create_file(&master->dev, &dev_attr_rescan); 1260 919 if (rc) { 1261 - device_unregister(&master->dev); 920 + device_del(&master->dev); 1262 921 ida_simple_remove(&master_ida, master->idx); 1263 922 return rc; 1264 923 } 1265 924 1266 925 rc = device_create_file(&master->dev, &dev_attr_break); 1267 926 if (rc) { 1268 - device_unregister(&master->dev); 927 + device_del(&master->dev); 1269 928 ida_simple_remove(&master_ida, master->idx); 1270 929 return rc; 1271 930 } 1272 931 1273 932 np = dev_of_node(&master->dev); 1274 - if (!of_property_read_bool(np, "no-scan-on-init")) 933 + if (!of_property_read_bool(np, "no-scan-on-init")) { 934 + mutex_lock(&master->scan_lock); 1275 935 fsi_master_scan(master); 936 + mutex_unlock(&master->scan_lock); 937 + } 1276 938 1277 939 return 0; 1278 940 } ··· 1288 944 master->idx = -1; 1289 945 } 1290 946 947 + mutex_lock(&master->scan_lock); 1291 948 fsi_master_unscan(master); 949 + mutex_unlock(&master->scan_lock); 1292 950 device_unregister(&master->dev); 1293 951 } 1294 952 
EXPORT_SYMBOL_GPL(fsi_master_unregister); ··· 1342 996 1343 997 static int __init fsi_init(void) 1344 998 { 1345 - return bus_register(&fsi_bus_type); 999 + int rc; 1000 + 1001 + rc = alloc_chrdev_region(&fsi_base_dev, 0, FSI_CHAR_MAX_DEVICES, "fsi"); 1002 + if (rc) 1003 + return rc; 1004 + rc = bus_register(&fsi_bus_type); 1005 + if (rc) 1006 + goto fail_bus; 1007 + return 0; 1008 + 1009 + fail_bus: 1010 + unregister_chrdev_region(fsi_base_dev, FSI_CHAR_MAX_DEVICES); 1011 + return rc; 1346 1012 } 1347 1013 postcore_initcall(fsi_init); 1348 1014 1349 1015 static void fsi_exit(void) 1350 1016 { 1351 1017 bus_unregister(&fsi_bus_type); 1018 + unregister_chrdev_region(fsi_base_dev, FSI_CHAR_MAX_DEVICES); 1019 + ida_destroy(&fsi_minor_ida); 1352 1020 } 1353 1021 module_exit(fsi_exit); 1354 1022 module_param(discard_errors, int, 0664);
+1440
drivers/fsi/fsi-master-ast-cf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // Copyright 2018 IBM Corp 3 + /* 4 + * A FSI master controller, using a simple GPIO bit-banging interface 5 + */ 6 + 7 + #include <linux/crc4.h> 8 + #include <linux/delay.h> 9 + #include <linux/device.h> 10 + #include <linux/fsi.h> 11 + #include <linux/gpio/consumer.h> 12 + #include <linux/io.h> 13 + #include <linux/irqflags.h> 14 + #include <linux/module.h> 15 + #include <linux/of.h> 16 + #include <linux/platform_device.h> 17 + #include <linux/slab.h> 18 + #include <linux/regmap.h> 19 + #include <linux/firmware.h> 20 + #include <linux/gpio/aspeed.h> 21 + #include <linux/mfd/syscon.h> 22 + #include <linux/of_address.h> 23 + #include <linux/genalloc.h> 24 + 25 + #include "fsi-master.h" 26 + #include "cf-fsi-fw.h" 27 + 28 + #define FW_FILE_NAME "cf-fsi-fw.bin" 29 + 30 + /* Common SCU based coprocessor control registers */ 31 + #define SCU_COPRO_CTRL 0x100 32 + #define SCU_COPRO_RESET 0x00000002 33 + #define SCU_COPRO_CLK_EN 0x00000001 34 + 35 + /* AST2500 specific ones */ 36 + #define SCU_2500_COPRO_SEG0 0x104 37 + #define SCU_2500_COPRO_SEG1 0x108 38 + #define SCU_2500_COPRO_SEG2 0x10c 39 + #define SCU_2500_COPRO_SEG3 0x110 40 + #define SCU_2500_COPRO_SEG4 0x114 41 + #define SCU_2500_COPRO_SEG5 0x118 42 + #define SCU_2500_COPRO_SEG6 0x11c 43 + #define SCU_2500_COPRO_SEG7 0x120 44 + #define SCU_2500_COPRO_SEG8 0x124 45 + #define SCU_2500_COPRO_SEG_SWAP 0x00000001 46 + #define SCU_2500_COPRO_CACHE_CTL 0x128 47 + #define SCU_2500_COPRO_CACHE_EN 0x00000001 48 + #define SCU_2500_COPRO_SEG0_CACHE_EN 0x00000002 49 + #define SCU_2500_COPRO_SEG1_CACHE_EN 0x00000004 50 + #define SCU_2500_COPRO_SEG2_CACHE_EN 0x00000008 51 + #define SCU_2500_COPRO_SEG3_CACHE_EN 0x00000010 52 + #define SCU_2500_COPRO_SEG4_CACHE_EN 0x00000020 53 + #define SCU_2500_COPRO_SEG5_CACHE_EN 0x00000040 54 + #define SCU_2500_COPRO_SEG6_CACHE_EN 0x00000080 55 + #define SCU_2500_COPRO_SEG7_CACHE_EN 0x00000100 56 + #define SCU_2500_COPRO_SEG8_CACHE_EN 
0x00000200 57 + 58 + #define SCU_2400_COPRO_SEG0 0x104 59 + #define SCU_2400_COPRO_SEG2 0x108 60 + #define SCU_2400_COPRO_SEG4 0x10c 61 + #define SCU_2400_COPRO_SEG6 0x110 62 + #define SCU_2400_COPRO_SEG8 0x114 63 + #define SCU_2400_COPRO_SEG_SWAP 0x80000000 64 + #define SCU_2400_COPRO_CACHE_CTL 0x118 65 + #define SCU_2400_COPRO_CACHE_EN 0x00000001 66 + #define SCU_2400_COPRO_SEG0_CACHE_EN 0x00000002 67 + #define SCU_2400_COPRO_SEG2_CACHE_EN 0x00000004 68 + #define SCU_2400_COPRO_SEG4_CACHE_EN 0x00000008 69 + #define SCU_2400_COPRO_SEG6_CACHE_EN 0x00000010 70 + #define SCU_2400_COPRO_SEG8_CACHE_EN 0x00000020 71 + 72 + /* CVIC registers */ 73 + #define CVIC_EN_REG 0x10 74 + #define CVIC_TRIG_REG 0x18 75 + 76 + /* 77 + * System register base address (needed for configuring the 78 + * coldfire maps) 79 + */ 80 + #define SYSREG_BASE 0x1e600000 81 + 82 + /* Amount of SRAM required */ 83 + #define SRAM_SIZE 0x1000 84 + 85 + #define LAST_ADDR_INVALID 0x1 86 + 87 + struct fsi_master_acf { 88 + struct fsi_master master; 89 + struct device *dev; 90 + struct regmap *scu; 91 + struct mutex lock; /* mutex for command ordering */ 92 + struct gpio_desc *gpio_clk; 93 + struct gpio_desc *gpio_data; 94 + struct gpio_desc *gpio_trans; /* Voltage translator */ 95 + struct gpio_desc *gpio_enable; /* FSI enable */ 96 + struct gpio_desc *gpio_mux; /* Mux control */ 97 + uint16_t gpio_clk_vreg; 98 + uint16_t gpio_clk_dreg; 99 + uint16_t gpio_dat_vreg; 100 + uint16_t gpio_dat_dreg; 101 + uint16_t gpio_tra_vreg; 102 + uint16_t gpio_tra_dreg; 103 + uint8_t gpio_clk_bit; 104 + uint8_t gpio_dat_bit; 105 + uint8_t gpio_tra_bit; 106 + uint32_t cf_mem_addr; 107 + size_t cf_mem_size; 108 + void __iomem *cf_mem; 109 + void __iomem *cvic; 110 + struct gen_pool *sram_pool; 111 + void __iomem *sram; 112 + bool is_ast2500; 113 + bool external_mode; 114 + bool trace_enabled; 115 + uint32_t last_addr; 116 + uint8_t t_send_delay; 117 + uint8_t t_echo_delay; 118 + uint32_t cvic_sw_irq; 119 + }; 120 + 
#define to_fsi_master_acf(m) container_of(m, struct fsi_master_acf, master) 121 + 122 + struct fsi_msg { 123 + uint64_t msg; 124 + uint8_t bits; 125 + }; 126 + 127 + #define CREATE_TRACE_POINTS 128 + #include <trace/events/fsi_master_ast_cf.h> 129 + 130 + static void msg_push_bits(struct fsi_msg *msg, uint64_t data, int bits) 131 + { 132 + msg->msg <<= bits; 133 + msg->msg |= data & ((1ull << bits) - 1); 134 + msg->bits += bits; 135 + } 136 + 137 + static void msg_push_crc(struct fsi_msg *msg) 138 + { 139 + uint8_t crc; 140 + int top; 141 + 142 + top = msg->bits & 0x3; 143 + 144 + /* start bit, and any non-aligned top bits */ 145 + crc = crc4(0, 1 << top | msg->msg >> (msg->bits - top), top + 1); 146 + 147 + /* aligned bits */ 148 + crc = crc4(crc, msg->msg, msg->bits - top); 149 + 150 + msg_push_bits(msg, crc, 4); 151 + } 152 + 153 + static void msg_finish_cmd(struct fsi_msg *cmd) 154 + { 155 + /* Left align message */ 156 + cmd->msg <<= (64 - cmd->bits); 157 + } 158 + 159 + static bool check_same_address(struct fsi_master_acf *master, int id, 160 + uint32_t addr) 161 + { 162 + /* this will also handle LAST_ADDR_INVALID */ 163 + return master->last_addr == (((id & 0x3) << 21) | (addr & ~0x3)); 164 + } 165 + 166 + static bool check_relative_address(struct fsi_master_acf *master, int id, 167 + uint32_t addr, uint32_t *rel_addrp) 168 + { 169 + uint32_t last_addr = master->last_addr; 170 + int32_t rel_addr; 171 + 172 + if (last_addr == LAST_ADDR_INVALID) 173 + return false; 174 + 175 + /* We may be in 23-bit addressing mode, which uses the id as the 176 + * top two address bits. So, if we're referencing a different ID, 177 + * use absolute addresses. 
178 + */ 179 + if (((last_addr >> 21) & 0x3) != id) 180 + return false; 181 + 182 + /* remove the top two bits from any 23-bit addressing */ 183 + last_addr &= (1 << 21) - 1; 184 + 185 + /* We know that the addresses are limited to 21 bits, so this won't 186 + * overflow the signed rel_addr */ 187 + rel_addr = addr - last_addr; 188 + if (rel_addr > 255 || rel_addr < -256) 189 + return false; 190 + 191 + *rel_addrp = (uint32_t)rel_addr; 192 + 193 + return true; 194 + } 195 + 196 + static void last_address_update(struct fsi_master_acf *master, 197 + int id, bool valid, uint32_t addr) 198 + { 199 + if (!valid) 200 + master->last_addr = LAST_ADDR_INVALID; 201 + else 202 + master->last_addr = ((id & 0x3) << 21) | (addr & ~0x3); 203 + } 204 + 205 + /* 206 + * Encode an Absolute/Relative/Same Address command 207 + */ 208 + static void build_ar_command(struct fsi_master_acf *master, 209 + struct fsi_msg *cmd, uint8_t id, 210 + uint32_t addr, size_t size, 211 + const void *data) 212 + { 213 + int i, addr_bits, opcode_bits; 214 + bool write = !!data; 215 + uint8_t ds, opcode; 216 + uint32_t rel_addr; 217 + 218 + cmd->bits = 0; 219 + cmd->msg = 0; 220 + 221 + /* we have 21 bits of address max */ 222 + addr &= ((1 << 21) - 1); 223 + 224 + /* cmd opcodes are variable length - SAME_AR is only two bits */ 225 + opcode_bits = 3; 226 + 227 + if (check_same_address(master, id, addr)) { 228 + /* we still address the byte offset within the word */ 229 + addr_bits = 2; 230 + opcode_bits = 2; 231 + opcode = FSI_CMD_SAME_AR; 232 + trace_fsi_master_acf_cmd_same_addr(master); 233 + 234 + } else if (check_relative_address(master, id, addr, &rel_addr)) { 235 + /* 8 bits plus sign */ 236 + addr_bits = 9; 237 + addr = rel_addr; 238 + opcode = FSI_CMD_REL_AR; 239 + trace_fsi_master_acf_cmd_rel_addr(master, rel_addr); 240 + 241 + } else { 242 + addr_bits = 21; 243 + opcode = FSI_CMD_ABS_AR; 244 + trace_fsi_master_acf_cmd_abs_addr(master, addr); 245 + } 246 + 247 + /* 248 + * The read/write size 
is encoded in the lower bits of the address 249 + * (as it must be naturally-aligned), and the following ds bit. 250 + * 251 + * size addr:1 addr:0 ds 252 + * 1 x x 0 253 + * 2 x 0 1 254 + * 4 0 1 1 255 + * 256 + */ 257 + ds = size > 1 ? 1 : 0; 258 + addr &= ~(size - 1); 259 + if (size == 4) 260 + addr |= 1; 261 + 262 + msg_push_bits(cmd, id, 2); 263 + msg_push_bits(cmd, opcode, opcode_bits); 264 + msg_push_bits(cmd, write ? 0 : 1, 1); 265 + msg_push_bits(cmd, addr, addr_bits); 266 + msg_push_bits(cmd, ds, 1); 267 + for (i = 0; write && i < size; i++) 268 + msg_push_bits(cmd, ((uint8_t *)data)[i], 8); 269 + 270 + msg_push_crc(cmd); 271 + msg_finish_cmd(cmd); 272 + } 273 + 274 + static void build_dpoll_command(struct fsi_msg *cmd, uint8_t slave_id) 275 + { 276 + cmd->bits = 0; 277 + cmd->msg = 0; 278 + 279 + msg_push_bits(cmd, slave_id, 2); 280 + msg_push_bits(cmd, FSI_CMD_DPOLL, 3); 281 + msg_push_crc(cmd); 282 + msg_finish_cmd(cmd); 283 + } 284 + 285 + static void build_epoll_command(struct fsi_msg *cmd, uint8_t slave_id) 286 + { 287 + cmd->bits = 0; 288 + cmd->msg = 0; 289 + 290 + msg_push_bits(cmd, slave_id, 2); 291 + msg_push_bits(cmd, FSI_CMD_EPOLL, 3); 292 + msg_push_crc(cmd); 293 + msg_finish_cmd(cmd); 294 + } 295 + 296 + static void build_term_command(struct fsi_msg *cmd, uint8_t slave_id) 297 + { 298 + cmd->bits = 0; 299 + cmd->msg = 0; 300 + 301 + msg_push_bits(cmd, slave_id, 2); 302 + msg_push_bits(cmd, FSI_CMD_TERM, 6); 303 + msg_push_crc(cmd); 304 + msg_finish_cmd(cmd); 305 + } 306 + 307 + static int do_copro_command(struct fsi_master_acf *master, uint32_t op) 308 + { 309 + uint32_t timeout = 10000000; 310 + uint8_t stat; 311 + 312 + trace_fsi_master_acf_copro_command(master, op); 313 + 314 + /* Send command */ 315 + iowrite32be(op, master->sram + CMD_STAT_REG); 316 + 317 + /* Ring doorbell if any */ 318 + if (master->cvic) 319 + iowrite32(0x2, master->cvic + CVIC_TRIG_REG); 320 + 321 + /* Wait for status to indicate completion (or error) */ 322 + do { 
323 + if (timeout-- == 0) { 324 + dev_warn(master->dev, 325 + "Timeout waiting for coprocessor completion\n"); 326 + return -ETIMEDOUT; 327 + } 328 + stat = ioread8(master->sram + CMD_STAT_REG); 329 + } while(stat < STAT_COMPLETE || stat == 0xff); 330 + 331 + if (stat == STAT_COMPLETE) 332 + return 0; 333 + switch(stat) { 334 + case STAT_ERR_INVAL_CMD: 335 + return -EINVAL; 336 + case STAT_ERR_INVAL_IRQ: 337 + return -EIO; 338 + case STAT_ERR_MTOE: 339 + return -ESHUTDOWN; 340 + } 341 + return -ENXIO; 342 + } 343 + 344 + static int clock_zeros(struct fsi_master_acf *master, int count) 345 + { 346 + while (count) { 347 + int rc, lcnt = min(count, 255); 348 + 349 + rc = do_copro_command(master, 350 + CMD_IDLE_CLOCKS | (lcnt << CMD_REG_CLEN_SHIFT)); 351 + if (rc) 352 + return rc; 353 + count -= lcnt; 354 + } 355 + return 0; 356 + } 357 + 358 + static int send_request(struct fsi_master_acf *master, struct fsi_msg *cmd, 359 + unsigned int resp_bits) 360 + { 361 + uint32_t op; 362 + 363 + trace_fsi_master_acf_send_request(master, cmd, resp_bits); 364 + 365 + /* Store message into SRAM */ 366 + iowrite32be((cmd->msg >> 32), master->sram + CMD_DATA); 367 + iowrite32be((cmd->msg & 0xffffffff), master->sram + CMD_DATA + 4); 368 + 369 + op = CMD_COMMAND; 370 + op |= cmd->bits << CMD_REG_CLEN_SHIFT; 371 + if (resp_bits) 372 + op |= resp_bits << CMD_REG_RLEN_SHIFT; 373 + 374 + return do_copro_command(master, op); 375 + } 376 + 377 + static int read_copro_response(struct fsi_master_acf *master, uint8_t size, 378 + uint32_t *response, u8 *tag) 379 + { 380 + uint8_t rtag = ioread8(master->sram + STAT_RTAG) & 0xf; 381 + uint8_t rcrc = ioread8(master->sram + STAT_RCRC) & 0xf; 382 + uint32_t rdata = 0; 383 + uint32_t crc; 384 + uint8_t ack; 385 + 386 + *tag = ack = rtag & 3; 387 + 388 + /* we have a whole message now; check CRC */ 389 + crc = crc4(0, 1, 1); 390 + crc = crc4(crc, rtag, 4); 391 + if (ack == FSI_RESP_ACK && size) { 392 + rdata = ioread32be(master->sram + RSP_DATA); 393 
+ crc = crc4(crc, rdata, size); 394 + if (response) 395 + *response = rdata; 396 + } 397 + crc = crc4(crc, rcrc, 4); 398 + 399 + trace_fsi_master_acf_copro_response(master, rtag, rcrc, rdata, crc == 0); 400 + 401 + if (crc) { 402 + /* 403 + * Check if it's all 1's or all 0's, that probably means 404 + * the host is off 405 + */ 406 + if ((rtag == 0xf && rcrc == 0xf) || (rtag == 0 && rcrc == 0)) 407 + return -ENODEV; 408 + dev_dbg(master->dev, "Bad response CRC !\n"); 409 + return -EAGAIN; 410 + } 411 + return 0; 412 + } 413 + 414 + static int send_term(struct fsi_master_acf *master, uint8_t slave) 415 + { 416 + struct fsi_msg cmd; 417 + uint8_t tag; 418 + int rc; 419 + 420 + build_term_command(&cmd, slave); 421 + 422 + rc = send_request(master, &cmd, 0); 423 + if (rc) { 424 + dev_warn(master->dev, "Error %d sending term\n", rc); 425 + return rc; 426 + } 427 + 428 + rc = read_copro_response(master, 0, NULL, &tag); 429 + if (rc < 0) { 430 + dev_err(master->dev, 431 + "TERM failed; lost communication with slave\n"); 432 + return -EIO; 433 + } else if (tag != FSI_RESP_ACK) { 434 + dev_err(master->dev, "TERM failed; response %d\n", tag); 435 + return -EIO; 436 + } 437 + return 0; 438 + } 439 + 440 + static void dump_ucode_trace(struct fsi_master_acf *master) 441 + { 442 + char trbuf[52]; 443 + char *p; 444 + int i; 445 + 446 + dev_dbg(master->dev, 447 + "CMDSTAT:%08x RTAG=%02x RCRC=%02x RDATA=%02x #INT=%08x\n", 448 + ioread32be(master->sram + CMD_STAT_REG), 449 + ioread8(master->sram + STAT_RTAG), 450 + ioread8(master->sram + STAT_RCRC), 451 + ioread32be(master->sram + RSP_DATA), 452 + ioread32be(master->sram + INT_CNT)); 453 + 454 + for (i = 0; i < 512; i++) { 455 + uint8_t v; 456 + if ((i % 16) == 0) 457 + p = trbuf; 458 + v = ioread8(master->sram + TRACEBUF + i); 459 + p += sprintf(p, "%02x ", v); 460 + if (((i % 16) == 15) || v == TR_END) 461 + dev_dbg(master->dev, "%s\n", trbuf); 462 + if (v == TR_END) 463 + break; 464 + } 465 + } 466 + 467 + static int 
handle_response(struct fsi_master_acf *master, 468 + uint8_t slave, uint8_t size, void *data) 469 + { 470 + int busy_count = 0, rc; 471 + int crc_err_retries = 0; 472 + struct fsi_msg cmd; 473 + uint32_t response; 474 + uint8_t tag; 475 + retry: 476 + rc = read_copro_response(master, size, &response, &tag); 477 + 478 + /* Handle retries on CRC errors */ 479 + if (rc == -EAGAIN) { 480 + /* Too many retries ? */ 481 + if (crc_err_retries++ > FSI_CRC_ERR_RETRIES) { 482 + /* 483 + * Pass it up as a -EIO otherwise upper level will retry 484 + * the whole command which isn't what we want here. 485 + */ 486 + rc = -EIO; 487 + goto bail; 488 + } 489 + trace_fsi_master_acf_crc_rsp_error(master, crc_err_retries); 490 + if (master->trace_enabled) 491 + dump_ucode_trace(master); 492 + rc = clock_zeros(master, FSI_MASTER_EPOLL_CLOCKS); 493 + if (rc) { 494 + dev_warn(master->dev, 495 + "Error %d clocking zeros for E_POLL\n", rc); 496 + return rc; 497 + } 498 + build_epoll_command(&cmd, slave); 499 + rc = send_request(master, &cmd, size); 500 + if (rc) { 501 + dev_warn(master->dev, "Error %d sending E_POLL\n", rc); 502 + return -EIO; 503 + } 504 + goto retry; 505 + } 506 + if (rc) 507 + return rc; 508 + 509 + switch (tag) { 510 + case FSI_RESP_ACK: 511 + if (size && data) { 512 + if (size == 32) 513 + *(__be32 *)data = cpu_to_be32(response); 514 + else if (size == 16) 515 + *(__be16 *)data = cpu_to_be16(response); 516 + else 517 + *(u8 *)data = response; 518 + } 519 + break; 520 + case FSI_RESP_BUSY: 521 + /* 522 + * It's necessary to clock the slave before issuing 523 + * d-poll, not indicated in the hardware protocol 524 + * spec. < 20 clocks causes slave to hang, 21 ok.
525 + */ 526 + dev_dbg(master->dev, "Busy, retrying...\n"); 527 + if (master->trace_enabled) 528 + dump_ucode_trace(master); 529 + rc = clock_zeros(master, FSI_MASTER_DPOLL_CLOCKS); 530 + if (rc) { 531 + dev_warn(master->dev, 532 + "Error %d clocking zeros for D_POLL\n", rc); 533 + break; 534 + } 535 + if (busy_count++ < FSI_MASTER_MAX_BUSY) { 536 + build_dpoll_command(&cmd, slave); 537 + rc = send_request(master, &cmd, size); 538 + if (rc) { 539 + dev_warn(master->dev, "Error %d sending D_POLL\n", rc); 540 + break; 541 + } 542 + goto retry; 543 + } 544 + dev_dbg(master->dev, 545 + "ERR slave is stuck in busy state, issuing TERM\n"); 546 + send_term(master, slave); 547 + rc = -EIO; 548 + break; 549 + 550 + case FSI_RESP_ERRA: 551 + dev_dbg(master->dev, "ERRA received\n"); 552 + if (master->trace_enabled) 553 + dump_ucode_trace(master); 554 + rc = -EIO; 555 + break; 556 + case FSI_RESP_ERRC: 557 + dev_dbg(master->dev, "ERRC received\n"); 558 + if (master->trace_enabled) 559 + dump_ucode_trace(master); 560 + rc = -EAGAIN; 561 + break; 562 + } 563 + bail: 564 + if (busy_count > 0) { 565 + trace_fsi_master_acf_poll_response_busy(master, busy_count); 566 + } 567 + 568 + return rc; 569 + } 570 + 571 + static int fsi_master_acf_xfer(struct fsi_master_acf *master, uint8_t slave, 572 + struct fsi_msg *cmd, size_t resp_len, void *resp) 573 + { 574 + int rc = -EAGAIN, retries = 0; 575 + 576 + resp_len <<= 3; 577 + while ((retries++) < FSI_CRC_ERR_RETRIES) { 578 + rc = send_request(master, cmd, resp_len); 579 + if (rc) { 580 + if (rc != -ESHUTDOWN) 581 + dev_warn(master->dev, "Error %d sending command\n", rc); 582 + break; 583 + } 584 + rc = handle_response(master, slave, resp_len, resp); 585 + if (rc != -EAGAIN) 586 + break; 587 + rc = -EIO; 588 + dev_dbg(master->dev, "ECRC retry %d\n", retries); 589 + 590 + /* Pace it a bit before retry */ 591 + msleep(1); 592 + } 593 + 594 + return rc; 595 + } 596 + 597 + static int fsi_master_acf_read(struct fsi_master *_master, int link, 
598 + uint8_t id, uint32_t addr, void *val, 599 + size_t size) 600 + { 601 + struct fsi_master_acf *master = to_fsi_master_acf(_master); 602 + struct fsi_msg cmd; 603 + int rc; 604 + 605 + if (link != 0) 606 + return -ENODEV; 607 + 608 + mutex_lock(&master->lock); 609 + dev_dbg(master->dev, "read id %d addr %x size %zd\n", id, addr, size); 610 + build_ar_command(master, &cmd, id, addr, size, NULL); 611 + rc = fsi_master_acf_xfer(master, id, &cmd, size, val); 612 + last_address_update(master, id, rc == 0, addr); 613 + if (rc) 614 + dev_dbg(master->dev, "read id %d addr 0x%08x err: %d\n", 615 + id, addr, rc); 616 + mutex_unlock(&master->lock); 617 + 618 + return rc; 619 + } 620 + 621 + static int fsi_master_acf_write(struct fsi_master *_master, int link, 622 + uint8_t id, uint32_t addr, const void *val, 623 + size_t size) 624 + { 625 + struct fsi_master_acf *master = to_fsi_master_acf(_master); 626 + struct fsi_msg cmd; 627 + int rc; 628 + 629 + if (link != 0) 630 + return -ENODEV; 631 + 632 + mutex_lock(&master->lock); 633 + build_ar_command(master, &cmd, id, addr, size, val); 634 + dev_dbg(master->dev, "write id %d addr %x size %zd raw_data: %08x\n", 635 + id, addr, size, *(uint32_t *)val); 636 + rc = fsi_master_acf_xfer(master, id, &cmd, 0, NULL); 637 + last_address_update(master, id, rc == 0, addr); 638 + if (rc) 639 + dev_dbg(master->dev, "write id %d addr 0x%08x err: %d\n", 640 + id, addr, rc); 641 + mutex_unlock(&master->lock); 642 + 643 + return rc; 644 + } 645 + 646 + static int fsi_master_acf_term(struct fsi_master *_master, 647 + int link, uint8_t id) 648 + { 649 + struct fsi_master_acf *master = to_fsi_master_acf(_master); 650 + struct fsi_msg cmd; 651 + int rc; 652 + 653 + if (link != 0) 654 + return -ENODEV; 655 + 656 + mutex_lock(&master->lock); 657 + build_term_command(&cmd, id); 658 + dev_dbg(master->dev, "term id %d\n", id); 659 + rc = fsi_master_acf_xfer(master, id, &cmd, 0, NULL); 660 + last_address_update(master, id, false, 0); 661 + 
mutex_unlock(&master->lock); 662 + 663 + return rc; 664 + } 665 + 666 + static int fsi_master_acf_break(struct fsi_master *_master, int link) 667 + { 668 + struct fsi_master_acf *master = to_fsi_master_acf(_master); 669 + int rc; 670 + 671 + if (link != 0) 672 + return -ENODEV; 673 + 674 + mutex_lock(&master->lock); 675 + if (master->external_mode) { 676 + mutex_unlock(&master->lock); 677 + return -EBUSY; 678 + } 679 + dev_dbg(master->dev, "sending BREAK\n"); 680 + rc = do_copro_command(master, CMD_BREAK); 681 + last_address_update(master, 0, false, 0); 682 + mutex_unlock(&master->lock); 683 + 684 + /* Wait for logic reset to take effect */ 685 + udelay(200); 686 + 687 + return rc; 688 + } 689 + 690 + static void reset_cf(struct fsi_master_acf *master) 691 + { 692 + regmap_write(master->scu, SCU_COPRO_CTRL, SCU_COPRO_RESET); 693 + usleep_range(20,20); 694 + regmap_write(master->scu, SCU_COPRO_CTRL, 0); 695 + usleep_range(20,20); 696 + } 697 + 698 + static void start_cf(struct fsi_master_acf *master) 699 + { 700 + regmap_write(master->scu, SCU_COPRO_CTRL, SCU_COPRO_CLK_EN); 701 + } 702 + 703 + static void setup_ast2500_cf_maps(struct fsi_master_acf *master) 704 + { 705 + /* 706 + * Note about byteswap setting: the bus is wired backwards, 707 + * so setting the byteswap bit actually makes the ColdFire 708 + * work "normally" for a BE processor, ie, put the MSB in 709 + * the lowest address byte. 710 + * 711 + * We thus need to set the bit for our main memory which 712 + * contains our program code. We create two mappings for 713 + * the register, one with each setting. 714 + * 715 + * Segments 2 and 3 have a "swapped" mapping (BE) 716 + * and 6 and 7 have a non-swapped mapping (LE) which allows 717 + * us to avoid byteswapping register accesses since the 718 + * registers are all LE.
719 + */ 720 + 721 + /* Setup segment 0 to our memory region */ 722 + regmap_write(master->scu, SCU_2500_COPRO_SEG0, master->cf_mem_addr | 723 + SCU_2500_COPRO_SEG_SWAP); 724 + 725 + /* Segments 2 and 3 to sysregs with byteswap (for SRAM) */ 726 + regmap_write(master->scu, SCU_2500_COPRO_SEG2, SYSREG_BASE | 727 + SCU_2500_COPRO_SEG_SWAP); 728 + regmap_write(master->scu, SCU_2500_COPRO_SEG3, SYSREG_BASE | 0x100000 | 729 + SCU_2500_COPRO_SEG_SWAP); 730 + 731 + /* And segments 6 and 7 to sysregs, no byteswap */ 732 + regmap_write(master->scu, SCU_2500_COPRO_SEG6, SYSREG_BASE); 733 + regmap_write(master->scu, SCU_2500_COPRO_SEG7, SYSREG_BASE | 0x100000); 734 + 735 + /* Memory cacheable, regs and SRAM not cacheable */ 736 + regmap_write(master->scu, SCU_2500_COPRO_CACHE_CTL, 737 + SCU_2500_COPRO_SEG0_CACHE_EN | SCU_2500_COPRO_CACHE_EN); 738 + } 739 + 740 + static void setup_ast2400_cf_maps(struct fsi_master_acf *master) 741 + { 742 + /* Setup segment 0 to our memory region */ 743 + regmap_write(master->scu, SCU_2400_COPRO_SEG0, master->cf_mem_addr | 744 + SCU_2400_COPRO_SEG_SWAP); 745 + 746 + /* Segment 2 to sysregs with byteswap (for SRAM) */ 747 + regmap_write(master->scu, SCU_2400_COPRO_SEG2, SYSREG_BASE | 748 + SCU_2400_COPRO_SEG_SWAP); 749 + 750 + /* And segment 6 to sysregs, no byteswap */ 751 + regmap_write(master->scu, SCU_2400_COPRO_SEG6, SYSREG_BASE); 752 + 753 + /* Memory cacheable, regs and SRAM not cacheable */ 754 + regmap_write(master->scu, SCU_2400_COPRO_CACHE_CTL, 755 + SCU_2400_COPRO_SEG0_CACHE_EN | SCU_2400_COPRO_CACHE_EN); 756 + } 757 + 758 + static void setup_common_fw_config(struct fsi_master_acf *master, 759 + void __iomem *base) 760 + { 761 + iowrite16be(master->gpio_clk_vreg, base + HDR_CLOCK_GPIO_VADDR); 762 + iowrite16be(master->gpio_clk_dreg, base + HDR_CLOCK_GPIO_DADDR); 763 + iowrite16be(master->gpio_dat_vreg, base + HDR_DATA_GPIO_VADDR); 764 + iowrite16be(master->gpio_dat_dreg, base + HDR_DATA_GPIO_DADDR); 765 + 
iowrite16be(master->gpio_tra_vreg, base + HDR_TRANS_GPIO_VADDR); 766 + iowrite16be(master->gpio_tra_dreg, base + HDR_TRANS_GPIO_DADDR); 767 + iowrite8(master->gpio_clk_bit, base + HDR_CLOCK_GPIO_BIT); 768 + iowrite8(master->gpio_dat_bit, base + HDR_DATA_GPIO_BIT); 769 + iowrite8(master->gpio_tra_bit, base + HDR_TRANS_GPIO_BIT); 770 + } 771 + 772 + static void setup_ast2500_fw_config(struct fsi_master_acf *master) 773 + { 774 + void __iomem *base = master->cf_mem + HDR_OFFSET; 775 + 776 + setup_common_fw_config(master, base); 777 + iowrite32be(FW_CONTROL_USE_STOP, base + HDR_FW_CONTROL); 778 + } 779 + 780 + static void setup_ast2400_fw_config(struct fsi_master_acf *master) 781 + { 782 + void __iomem *base = master->cf_mem + HDR_OFFSET; 783 + 784 + setup_common_fw_config(master, base); 785 + iowrite32be(FW_CONTROL_CONT_CLOCK | FW_CONTROL_DUMMY_RD, base + HDR_FW_CONTROL); 786 + } 787 + 788 + static int setup_gpios_for_copro(struct fsi_master_acf *master) 789 + { 790 + 791 + int rc; 792 + 793 + /* These aren't under ColdFire control, just set them up appropriately */ 794 + gpiod_direction_output(master->gpio_mux, 1); 795 + gpiod_direction_output(master->gpio_enable, 1); 796 + 797 + /* Those are under ColdFire control, let it configure them */ 798 + rc = aspeed_gpio_copro_grab_gpio(master->gpio_clk, &master->gpio_clk_vreg, 799 + &master->gpio_clk_dreg, &master->gpio_clk_bit); 800 + if (rc) { 801 + dev_err(master->dev, "failed to assign clock gpio to coprocessor\n"); 802 + return rc; 803 + } 804 + rc = aspeed_gpio_copro_grab_gpio(master->gpio_data, &master->gpio_dat_vreg, 805 + &master->gpio_dat_dreg, &master->gpio_dat_bit); 806 + if (rc) { 807 + dev_err(master->dev, "failed to assign data gpio to coprocessor\n"); 808 + aspeed_gpio_copro_release_gpio(master->gpio_clk); 809 + return rc; 810 + } 811 + rc = aspeed_gpio_copro_grab_gpio(master->gpio_trans, &master->gpio_tra_vreg, 812 + &master->gpio_tra_dreg, &master->gpio_tra_bit); 813 + if (rc) { 814 + dev_err(master->dev, 
"failed to assign trans gpio to coprocessor\n"); 815 + aspeed_gpio_copro_release_gpio(master->gpio_clk); 816 + aspeed_gpio_copro_release_gpio(master->gpio_data); 817 + return rc; 818 + } 819 + return 0; 820 + } 821 + 822 + static void release_copro_gpios(struct fsi_master_acf *master) 823 + { 824 + aspeed_gpio_copro_release_gpio(master->gpio_clk); 825 + aspeed_gpio_copro_release_gpio(master->gpio_data); 826 + aspeed_gpio_copro_release_gpio(master->gpio_trans); 827 + } 828 + 829 + static int load_copro_firmware(struct fsi_master_acf *master) 830 + { 831 + const struct firmware *fw; 832 + uint16_t sig = 0, wanted_sig; 833 + const u8 *data; 834 + size_t size = 0; 835 + int rc; 836 + 837 + /* Get the binary */ 838 + rc = request_firmware(&fw, FW_FILE_NAME, master->dev); 839 + if (rc) { 840 + dev_err( 841 + master->dev, "Error %d loading firmware '%s'!\n", 842 + rc, FW_FILE_NAME); 843 + return rc; 844 + } 845 + 846 + /* Which image do we want? (shared vs. split clock/data GPIOs) */ 847 + if (master->gpio_clk_vreg == master->gpio_dat_vreg) 848 + wanted_sig = SYS_SIG_SHARED; 849 + else 850 + wanted_sig = SYS_SIG_SPLIT; 851 + dev_dbg(master->dev, "Looking for image sig %04x\n", wanted_sig); 852 + 853 + /* Try to find it */ 854 + for (data = fw->data; data < (fw->data + fw->size);) { 855 + sig = be16_to_cpup((__be16 *)(data + HDR_OFFSET + HDR_SYS_SIG)); 856 + size = be32_to_cpup((__be32 *)(data + HDR_OFFSET + HDR_FW_SIZE)); 857 + if (sig == wanted_sig) 858 + break; 859 + data += size; 860 + } 861 + if (sig != wanted_sig) { 862 + dev_err(master->dev, "Failed to locate image sig %04x in FW blob\n", 863 + wanted_sig); 864 + rc = -ENODEV; 865 + goto release_fw; 866 + } 867 + if (size > master->cf_mem_size) { 868 + dev_err(master->dev, "FW size (%zd) bigger than memory reserve (%zd)\n", 869 + size, master->cf_mem_size); 870 + rc = -ENOMEM; 871 + } else { 872 + memcpy_toio(master->cf_mem, data, size); 873 + } 874 + 875 + release_fw: 876 + release_firmware(fw); 877 + return 
rc; 878 + } 879 + 880 + static int check_firmware_image(struct fsi_master_acf *master) 881 + { 882 + uint32_t fw_vers, fw_api, fw_options; 883 + 884 + fw_vers = ioread16be(master->cf_mem + HDR_OFFSET + HDR_FW_VERS); 885 + fw_api = ioread16be(master->cf_mem + HDR_OFFSET + HDR_API_VERS); 886 + fw_options = ioread32be(master->cf_mem + HDR_OFFSET + HDR_FW_OPTIONS); 887 + master->trace_enabled = !!(fw_options & FW_OPTION_TRACE_EN); 888 + 889 + /* Check version and signature */ 890 + dev_info(master->dev, "ColdFire initialized, firmware v%d API v%d.%d (trace %s)\n", 891 + fw_vers, fw_api >> 8, fw_api & 0xff, 892 + master->trace_enabled ? "enabled" : "disabled"); 893 + 894 + if ((fw_api >> 8) != API_VERSION_MAJ) { 895 + dev_err(master->dev, "Unsupported coprocessor API version !\n"); 896 + return -ENODEV; 897 + } 898 + 899 + return 0; 900 + } 901 + 902 + static int copro_enable_sw_irq(struct fsi_master_acf *master) 903 + { 904 + int timeout; 905 + uint32_t val; 906 + 907 + /* 908 + * Enable coprocessor interrupt input. I've had problems getting the 909 + * value to stick, so try in a loop 910 + */ 911 + for (timeout = 0; timeout < 10; timeout++) { 912 + iowrite32(0x2, master->cvic + CVIC_EN_REG); 913 + val = ioread32(master->cvic + CVIC_EN_REG); 914 + if (val & 2) 915 + break; 916 + msleep(1); 917 + } 918 + if (!(val & 2)) { 919 + dev_err(master->dev, "Failed to enable coprocessor interrupt !\n"); 920 + return -ENODEV; 921 + } 922 + return 0; 923 + } 924 + 925 + static int fsi_master_acf_setup(struct fsi_master_acf *master) 926 + { 927 + int timeout, rc; 928 + uint32_t val; 929 + 930 + /* Make sure the ColdFire is stopped */ 931 + reset_cf(master); 932 + 933 + /* 934 + * Clear SRAM. This needs to happen before we setup the GPIOs 935 + * as we might start trying to arbitrate as soon as that happens. 
936 + */ 937 + memset_io(master->sram, 0, SRAM_SIZE); 938 + 939 + /* Configure GPIOs */ 940 + rc = setup_gpios_for_copro(master); 941 + if (rc) 942 + return rc; 943 + 944 + /* Load the firmware into the reserved memory */ 945 + rc = load_copro_firmware(master); 946 + if (rc) 947 + return rc; 948 + 949 + /* Read signature and check versions */ 950 + rc = check_firmware_image(master); 951 + if (rc) 952 + return rc; 953 + 954 + /* Setup ColdFire memory map */ 955 + if (master->is_ast2500) { 956 + setup_ast2500_cf_maps(master); 957 + setup_ast2500_fw_config(master); 958 + } else { 959 + setup_ast2400_cf_maps(master); 960 + setup_ast2400_fw_config(master); 961 + } 962 + 963 + /* Start the ColdFire */ 964 + start_cf(master); 965 + 966 + /* Wait for status register to indicate command completion 967 + * which signals the initialization is complete 968 + */ 969 + for (timeout = 0; timeout < 10; timeout++) { 970 + val = ioread8(master->sram + CF_STARTED); 971 + if (val) 972 + break; 973 + msleep(1); 974 + } 975 + if (!val) { 976 + dev_err(master->dev, "Coprocessor startup timeout !\n"); 977 + rc = -ENODEV; 978 + goto err; 979 + } 980 + 981 + /* Configure echo & send delay */ 982 + iowrite8(master->t_send_delay, master->sram + SEND_DLY_REG); 983 + iowrite8(master->t_echo_delay, master->sram + ECHO_DLY_REG); 984 + 985 + /* Enable SW interrupt to copro if any */ 986 + if (master->cvic) { 987 + rc = copro_enable_sw_irq(master); 988 + if (rc) 989 + goto err; 990 + } 991 + return 0; 992 + err: 993 + /* An error occurred, don't leave the coprocessor running */ 994 + reset_cf(master); 995 + 996 + /* Release the GPIOs */ 997 + release_copro_gpios(master); 998 + 999 + return rc; 1000 + } 1001 + 1002 + 1003 + static void fsi_master_acf_terminate(struct fsi_master_acf *master) 1004 + { 1005 + unsigned long flags; 1006 + 1007 + /* 1008 + * A GPIO arbitration request could come in while this is 1009 + * happening. 
To avoid problems, we disable interrupts so it 1010 + * cannot preempt us on this CPU 1011 + */ 1012 + 1013 + local_irq_save(flags); 1014 + 1015 + /* Stop the coprocessor */ 1016 + reset_cf(master); 1017 + 1018 + /* We mark the copro not-started */ 1019 + iowrite32(0, master->sram + CF_STARTED); 1020 + 1021 + /* We mark the ARB register as having given up arbitration to 1022 + * deal with a potential race with the arbitration request 1023 + */ 1024 + iowrite8(ARB_ARM_ACK, master->sram + ARB_REG); 1025 + 1026 + local_irq_restore(flags); 1027 + 1028 + /* Return the GPIOs to the ARM */ 1029 + release_copro_gpios(master); 1030 + } 1031 + 1032 + static void fsi_master_acf_setup_external(struct fsi_master_acf *master) 1033 + { 1034 + /* Setup GPIOs for external FSI master (FSP box) */ 1035 + gpiod_direction_output(master->gpio_mux, 0); 1036 + gpiod_direction_output(master->gpio_trans, 0); 1037 + gpiod_direction_output(master->gpio_enable, 1); 1038 + gpiod_direction_input(master->gpio_clk); 1039 + gpiod_direction_input(master->gpio_data); 1040 + } 1041 + 1042 + static int fsi_master_acf_link_enable(struct fsi_master *_master, int link) 1043 + { 1044 + struct fsi_master_acf *master = to_fsi_master_acf(_master); 1045 + int rc = -EBUSY; 1046 + 1047 + if (link != 0) 1048 + return -ENODEV; 1049 + 1050 + mutex_lock(&master->lock); 1051 + if (!master->external_mode) { 1052 + gpiod_set_value(master->gpio_enable, 1); 1053 + rc = 0; 1054 + } 1055 + mutex_unlock(&master->lock); 1056 + 1057 + return rc; 1058 + } 1059 + 1060 + static int fsi_master_acf_link_config(struct fsi_master *_master, int link, 1061 + u8 t_send_delay, u8 t_echo_delay) 1062 + { 1063 + struct fsi_master_acf *master = to_fsi_master_acf(_master); 1064 + 1065 + if (link != 0) 1066 + return -ENODEV; 1067 + 1068 + mutex_lock(&master->lock); 1069 + master->t_send_delay = t_send_delay; 1070 + master->t_echo_delay = t_echo_delay; 1071 + dev_dbg(master->dev, "Changing delays: send=%d echo=%d\n", 1072 + t_send_delay, 
t_echo_delay); 1073 + iowrite8(master->t_send_delay, master->sram + SEND_DLY_REG); 1074 + iowrite8(master->t_echo_delay, master->sram + ECHO_DLY_REG); 1075 + mutex_unlock(&master->lock); 1076 + 1077 + return 0; 1078 + } 1079 + 1080 + static ssize_t external_mode_show(struct device *dev, 1081 + struct device_attribute *attr, char *buf) 1082 + { 1083 + struct fsi_master_acf *master = dev_get_drvdata(dev); 1084 + 1085 + return snprintf(buf, PAGE_SIZE - 1, "%u\n", 1086 + master->external_mode ? 1 : 0); 1087 + } 1088 + 1089 + static ssize_t external_mode_store(struct device *dev, 1090 + struct device_attribute *attr, const char *buf, size_t count) 1091 + { 1092 + struct fsi_master_acf *master = dev_get_drvdata(dev); 1093 + unsigned long val; 1094 + bool external_mode; 1095 + int err; 1096 + 1097 + err = kstrtoul(buf, 0, &val); 1098 + if (err) 1099 + return err; 1100 + 1101 + external_mode = !!val; 1102 + 1103 + mutex_lock(&master->lock); 1104 + 1105 + if (external_mode == master->external_mode) { 1106 + mutex_unlock(&master->lock); 1107 + return count; 1108 + } 1109 + 1110 + master->external_mode = external_mode; 1111 + if (master->external_mode) { 1112 + fsi_master_acf_terminate(master); 1113 + fsi_master_acf_setup_external(master); 1114 + } else 1115 + fsi_master_acf_setup(master); 1116 + 1117 + mutex_unlock(&master->lock); 1118 + 1119 + fsi_master_rescan(&master->master); 1120 + 1121 + return count; 1122 + } 1123 + 1124 + static DEVICE_ATTR(external_mode, 0664, 1125 + external_mode_show, external_mode_store); 1126 + 1127 + static int fsi_master_acf_gpio_request(void *data) 1128 + { 1129 + struct fsi_master_acf *master = data; 1130 + int timeout; 1131 + u8 val; 1132 + 1133 + /* Note: This doesn't require holding our mutex */ 1134 + 1135 + /* Write request */ 1136 + iowrite8(ARB_ARM_REQ, master->sram + ARB_REG); 1137 + 1138 + /* 1139 + * There is a race (which does happen at boot time) when we get an 1140 + * arbitration request as we are either about to or just 
starting 1141 + * the coprocessor. 1142 + * 1143 + * To handle it, we first check if we are running. If not, we 1144 + * check whether the copro has been started via the SCU. 1145 + * 1146 + * If it's not started, we can basically just assume we have arbitration 1147 + * and return. Otherwise, we wait normally, expecting the arbitration 1148 + * to eventually complete. 1149 + */ 1150 + if (ioread32(master->sram + CF_STARTED) == 0) { 1151 + unsigned int reg = 0; 1152 + 1153 + regmap_read(master->scu, SCU_COPRO_CTRL, &reg); 1154 + if (!(reg & SCU_COPRO_CLK_EN)) 1155 + return 0; 1156 + } 1157 + 1158 + /* Ring doorbell if any */ 1159 + if (master->cvic) 1160 + iowrite32(0x2, master->cvic + CVIC_TRIG_REG); 1161 + 1162 + for (timeout = 0; timeout < 10000; timeout++) { 1163 + val = ioread8(master->sram + ARB_REG); 1164 + if (val != ARB_ARM_REQ) 1165 + break; 1166 + udelay(1); 1167 + } 1168 + 1169 + /* If it failed, override anyway */ 1170 + if (val != ARB_ARM_ACK) 1171 + dev_warn(master->dev, "GPIO request arbitration timeout\n"); 1172 + 1173 + return 0; 1174 + } 1175 + 1176 + static int fsi_master_acf_gpio_release(void *data) 1177 + { 1178 + struct fsi_master_acf *master = data; 1179 + 1180 + /* Write release */ 1181 + iowrite8(0, master->sram + ARB_REG); 1182 + 1183 + /* Ring doorbell if any */ 1184 + if (master->cvic) 1185 + iowrite32(0x2, master->cvic + CVIC_TRIG_REG); 1186 + 1187 + return 0; 1188 + } 1189 + 1190 + static void fsi_master_acf_release(struct device *dev) 1191 + { 1192 + struct fsi_master_acf *master = to_fsi_master_acf(dev_to_fsi_master(dev)); 1193 + 1194 + /* Cleanup, stop coprocessor */ 1195 + mutex_lock(&master->lock); 1196 + fsi_master_acf_terminate(master); 1197 + aspeed_gpio_copro_set_ops(NULL, NULL); 1198 + mutex_unlock(&master->lock); 1199 + 1200 + /* Free resources */ 1201 + gen_pool_free(master->sram_pool, (unsigned long)master->sram, SRAM_SIZE); 1202 + of_node_put(dev_of_node(master->dev)); 1203 + 1204 + kfree(master); 1205 + } 1206 + 1207 + 
static const struct aspeed_gpio_copro_ops fsi_master_acf_gpio_ops = { 1208 + .request_access = fsi_master_acf_gpio_request, 1209 + .release_access = fsi_master_acf_gpio_release, 1210 + }; 1211 + 1212 + static int fsi_master_acf_probe(struct platform_device *pdev) 1213 + { 1214 + struct device_node *np, *mnode = dev_of_node(&pdev->dev); 1215 + struct genpool_data_fixed gpdf; 1216 + struct fsi_master_acf *master; 1217 + struct gpio_desc *gpio; 1218 + struct resource res; 1219 + uint32_t cf_mem_align; 1220 + int rc; 1221 + 1222 + master = kzalloc(sizeof(*master), GFP_KERNEL); 1223 + if (!master) 1224 + return -ENOMEM; 1225 + 1226 + master->dev = &pdev->dev; 1227 + master->master.dev.parent = master->dev; 1228 + master->last_addr = LAST_ADDR_INVALID; 1229 + 1230 + /* AST2400 vs. AST2500 */ 1231 + master->is_ast2500 = of_device_is_compatible(mnode, "aspeed,ast2500-cf-fsi-master"); 1232 + 1233 + /* Grab the SCU, we'll need to access it to configure the coprocessor */ 1234 + if (master->is_ast2500) 1235 + master->scu = syscon_regmap_lookup_by_compatible("aspeed,ast2500-scu"); 1236 + else 1237 + master->scu = syscon_regmap_lookup_by_compatible("aspeed,ast2400-scu"); 1238 + if (IS_ERR(master->scu)) { 1239 + dev_err(&pdev->dev, "failed to find SCU regmap\n"); 1240 + rc = PTR_ERR(master->scu); 1241 + goto err_free; 1242 + } 1243 + 1244 + /* Grab all the GPIOs we need */ 1245 + gpio = devm_gpiod_get(&pdev->dev, "clock", 0); 1246 + if (IS_ERR(gpio)) { 1247 + dev_err(&pdev->dev, "failed to get clock gpio\n"); 1248 + rc = PTR_ERR(gpio); 1249 + goto err_free; 1250 + } 1251 + master->gpio_clk = gpio; 1252 + 1253 + gpio = devm_gpiod_get(&pdev->dev, "data", 0); 1254 + if (IS_ERR(gpio)) { 1255 + dev_err(&pdev->dev, "failed to get data gpio\n"); 1256 + rc = PTR_ERR(gpio); 1257 + goto err_free; 1258 + } 1259 + master->gpio_data = gpio; 1260 + 1261 + /* Optional GPIOs */ 1262 + gpio = devm_gpiod_get_optional(&pdev->dev, "trans", 0); 1263 + if (IS_ERR(gpio)) { 1264 + dev_err(&pdev->dev, 
"failed to get trans gpio\n"); 1265 + rc = PTR_ERR(gpio); 1266 + goto err_free; 1267 + } 1268 + master->gpio_trans = gpio; 1269 + 1270 + gpio = devm_gpiod_get_optional(&pdev->dev, "enable", 0); 1271 + if (IS_ERR(gpio)) { 1272 + dev_err(&pdev->dev, "failed to get enable gpio\n"); 1273 + rc = PTR_ERR(gpio); 1274 + goto err_free; 1275 + } 1276 + master->gpio_enable = gpio; 1277 + 1278 + gpio = devm_gpiod_get_optional(&pdev->dev, "mux", 0); 1279 + if (IS_ERR(gpio)) { 1280 + dev_err(&pdev->dev, "failed to get mux gpio\n"); 1281 + rc = PTR_ERR(gpio); 1282 + goto err_free; 1283 + } 1284 + master->gpio_mux = gpio; 1285 + 1286 + /* Grab the reserved memory region (use DMA API instead ?) */ 1287 + np = of_parse_phandle(mnode, "memory-region", 0); 1288 + if (!np) { 1289 + dev_err(&pdev->dev, "Didn't find reserved memory\n"); 1290 + rc = -EINVAL; 1291 + goto err_free; 1292 + } 1293 + rc = of_address_to_resource(np, 0, &res); 1294 + of_node_put(np); 1295 + if (rc) { 1296 + dev_err(&pdev->dev, "Couldn't convert reserved memory to a resource\n"); 1297 + rc = -ENOMEM; 1298 + goto err_free; 1299 + } 1300 + master->cf_mem_size = resource_size(&res); 1301 + master->cf_mem_addr = (uint32_t)res.start; 1302 + cf_mem_align = master->is_ast2500 ? 
0x00100000 : 0x00200000; 1303 + if (master->cf_mem_addr & (cf_mem_align - 1)) { 1304 + dev_err(&pdev->dev, "Reserved memory has insufficient alignment\n"); 1305 + rc = -ENOMEM; 1306 + goto err_free; 1307 + } 1308 + master->cf_mem = devm_ioremap_resource(&pdev->dev, &res); 1309 + if (IS_ERR(master->cf_mem)) { 1310 + rc = PTR_ERR(master->cf_mem); 1311 + dev_err(&pdev->dev, "Error %d mapping coldfire memory\n", rc); 1312 + goto err_free; 1313 + } 1314 + dev_dbg(&pdev->dev, "DRAM allocation @%x\n", master->cf_mem_addr); 1315 + 1316 + /* AST2500 has a SW interrupt to the coprocessor */ 1317 + if (master->is_ast2500) { 1318 + /* Grab the CVIC (ColdFire interrupts controller) */ 1319 + np = of_parse_phandle(mnode, "aspeed,cvic", 0); 1320 + if (!np) { 1321 + dev_err(&pdev->dev, "Didn't find CVIC\n"); 1322 + rc = -EINVAL; 1323 + goto err_free; 1324 + } 1325 + master->cvic = devm_of_iomap(&pdev->dev, np, 0, NULL); 1326 + if (IS_ERR(master->cvic)) { 1327 + rc = PTR_ERR(master->cvic); 1328 + dev_err(&pdev->dev, "Error %d mapping CVIC\n", rc); 1329 + goto err_free; 1330 + } 1331 + rc = of_property_read_u32(np, "copro-sw-interrupts", 1332 + &master->cvic_sw_irq); 1333 + if (rc) { 1334 + dev_err(&pdev->dev, "Can't find coprocessor SW interrupt\n"); 1335 + goto err_free; 1336 + } 1337 + } 1338 + 1339 + /* Grab the SRAM */ 1340 + master->sram_pool = of_gen_pool_get(dev_of_node(&pdev->dev), "aspeed,sram", 0); 1341 + if (!master->sram_pool) { 1342 + rc = -ENODEV; 1343 + dev_err(&pdev->dev, "Can't find sram pool\n"); 1344 + goto err_free; 1345 + } 1346 + 1347 + /* Current microcode only deals with fixed location in SRAM */ 1348 + gpdf.offset = 0; 1349 + master->sram = (void __iomem *)gen_pool_alloc_algo(master->sram_pool, SRAM_SIZE, 1350 + gen_pool_fixed_alloc, &gpdf); 1351 + if (!master->sram) { 1352 + rc = -ENOMEM; 1353 + dev_err(&pdev->dev, "Failed to allocate sram from pool\n"); 1354 + goto err_free; 1355 + } 1356 + dev_dbg(&pdev->dev, "SRAM allocation @%lx\n", 1357 + (unsigned 
long)gen_pool_virt_to_phys(master->sram_pool, 1358 + (unsigned long)master->sram)); 1359 + 1360 + /* 1361 + * Hookup with the GPIO driver for arbitration of GPIO banks 1362 + * ownership. 1363 + */ 1364 + aspeed_gpio_copro_set_ops(&fsi_master_acf_gpio_ops, master); 1365 + 1366 + /* Default FSI command delays */ 1367 + master->t_send_delay = FSI_SEND_DELAY_CLOCKS; 1368 + master->t_echo_delay = FSI_ECHO_DELAY_CLOCKS; 1369 + master->master.n_links = 1; 1370 + if (master->is_ast2500) 1371 + master->master.flags = FSI_MASTER_FLAG_SWCLOCK; 1372 + master->master.read = fsi_master_acf_read; 1373 + master->master.write = fsi_master_acf_write; 1374 + master->master.term = fsi_master_acf_term; 1375 + master->master.send_break = fsi_master_acf_break; 1376 + master->master.link_enable = fsi_master_acf_link_enable; 1377 + master->master.link_config = fsi_master_acf_link_config; 1378 + master->master.dev.of_node = of_node_get(dev_of_node(master->dev)); 1379 + master->master.dev.release = fsi_master_acf_release; 1380 + platform_set_drvdata(pdev, master); 1381 + mutex_init(&master->lock); 1382 + 1383 + mutex_lock(&master->lock); 1384 + rc = fsi_master_acf_setup(master); 1385 + mutex_unlock(&master->lock); 1386 + if (rc) 1387 + goto release_of_dev; 1388 + 1389 + rc = device_create_file(&pdev->dev, &dev_attr_external_mode); 1390 + if (rc) 1391 + goto stop_copro; 1392 + 1393 + rc = fsi_master_register(&master->master); 1394 + if (!rc) 1395 + return 0; 1396 + 1397 + device_remove_file(master->dev, &dev_attr_external_mode); 1398 + put_device(&master->master.dev); 1399 + return rc; 1400 + 1401 + stop_copro: 1402 + fsi_master_acf_terminate(master); 1403 + release_of_dev: 1404 + aspeed_gpio_copro_set_ops(NULL, NULL); 1405 + gen_pool_free(master->sram_pool, (unsigned long)master->sram, SRAM_SIZE); 1406 + of_node_put(dev_of_node(master->dev)); 1407 + err_free: 1408 + kfree(master); 1409 + return rc; 1410 + } 1411 + 1412 + 1413 + static int fsi_master_acf_remove(struct platform_device *pdev) 
1414 + { 1415 + struct fsi_master_acf *master = platform_get_drvdata(pdev); 1416 + 1417 + device_remove_file(master->dev, &dev_attr_external_mode); 1418 + 1419 + fsi_master_unregister(&master->master); 1420 + 1421 + return 0; 1422 + } 1423 + 1424 + static const struct of_device_id fsi_master_acf_match[] = { 1425 + { .compatible = "aspeed,ast2400-cf-fsi-master" }, 1426 + { .compatible = "aspeed,ast2500-cf-fsi-master" }, 1427 + { }, 1428 + }; 1429 + 1430 + static struct platform_driver fsi_master_acf = { 1431 + .driver = { 1432 + .name = "fsi-master-acf", 1433 + .of_match_table = fsi_master_acf_match, 1434 + }, 1435 + .probe = fsi_master_acf_probe, 1436 + .remove = fsi_master_acf_remove, 1437 + }; 1438 + 1439 + module_platform_driver(fsi_master_acf); 1440 + MODULE_LICENSE("GPL");
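Both this ColdFire-assisted master and the reworked fsi-master-gpio.c later in this series cache the last FSI address per slave so they can emit the shorter SAME/REL address commands instead of a full 21-bit absolute address. A standalone sketch of that selection logic, mirroring check_same_address()/check_relative_address() from the diff (pick_ar_mode() and the AR_* names are illustrative, not driver symbols):

```c
#include <stdint.h>

#define LAST_ADDR_INVALID 0x1	/* bit 0 set: can never match a word address */

/* Cached "last address": slave id in bits 22:21, word address below */
static uint32_t pack_last_addr(int id, uint32_t addr)
{
	return ((id & 0x3) << 21) | (addr & ~0x3);
}

enum ar_mode { AR_SAME, AR_REL, AR_ABS };

/* Pick the shortest address command for (id, addr) given the cache */
static enum ar_mode pick_ar_mode(uint32_t last_addr, int id, uint32_t addr,
				 int32_t *rel)
{
	addr &= (1 << 21) - 1;		/* 21-bit address space */

	/* Same word for the same slave: 2-bit SAME_AR opcode */
	if (last_addr == pack_last_addr(id, addr))
		return AR_SAME;

	/* Within -256..255 of the last word, same slave: 9-bit relative */
	if (last_addr != LAST_ADDR_INVALID &&
	    ((last_addr >> 21) & 0x3) == (uint32_t)id) {
		int32_t d = addr - (last_addr & ((1 << 21) - 1));

		if (d >= -256 && d <= 255) {
			*rel = d;
			return AR_REL;
		}
	}
	return AR_ABS;
}
```

As in the driver, the SAME check wins over REL so byte offsets within the current word never cost extra address bits, and a different slave id always forces an absolute address.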
+341 -136
drivers/fsi/fsi-master-gpio.c
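In the hunks below, serial_in() shifts sampled bits in MSB-first and inverts them because the FSI data line is active low, and the response path treats an all-ones message as a sign the remote host is probably powered off. A minimal standalone sketch of that accumulation (the struct name mirrors the driver; msg_in_bit() and msg_all_ones() are illustrative helpers, not driver functions):

```c
#include <stdbool.h>
#include <stdint.h>

struct fsi_gpio_msg {
	uint64_t msg;	/* accumulated bits, MSB first */
	uint8_t bits;	/* number of valid bits in msg */
};

/* Shift one sampled line level into the message. FSI data is active
 * low, so a low electrical level (0) is a logical 1 bit. */
static void msg_in_bit(struct fsi_gpio_msg *msg, int line_level)
{
	msg->msg = (msg->msg << 1) | (~line_level & 0x1);
	msg->bits++;
}

/* True if every received bit is 1 -- the driver's CRC-error path uses
 * this pattern to guess that the remote host is off. */
static bool msg_all_ones(const struct fsi_gpio_msg *msg)
{
	return ((~msg->msg) & ((1ull << msg->bits) - 1)) == 0;
}
```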
··· 8 8 #include <linux/fsi.h> 9 9 #include <linux/gpio/consumer.h> 10 10 #include <linux/io.h> 11 + #include <linux/irqflags.h> 11 12 #include <linux/module.h> 12 13 #include <linux/of.h> 13 14 #include <linux/platform_device.h> 14 15 #include <linux/slab.h> 15 - #include <linux/spinlock.h> 16 16 17 17 #include "fsi-master.h" 18 18 19 19 #define FSI_GPIO_STD_DLY 1 /* Standard pin delay in nS */ 20 - #define FSI_ECHO_DELAY_CLOCKS 16 /* Number clocks for echo delay */ 21 - #define FSI_PRE_BREAK_CLOCKS 50 /* Number clocks to prep for break */ 22 - #define FSI_BREAK_CLOCKS 256 /* Number of clocks to issue break */ 23 - #define FSI_POST_BREAK_CLOCKS 16000 /* Number clocks to set up cfam */ 24 - #define FSI_INIT_CLOCKS 5000 /* Clock out any old data */ 25 - #define FSI_GPIO_STD_DELAY 10 /* Standard GPIO delay in nS */ 26 - /* todo: adjust down as low as */ 27 - /* possible or eliminate */ 28 - #define FSI_GPIO_CMD_DPOLL 0x2 29 - #define FSI_GPIO_CMD_TERM 0x3f 30 - #define FSI_GPIO_CMD_ABS_AR 0x4 31 - 32 - #define FSI_GPIO_DPOLL_CLOCKS 100 /* < 21 will cause slave to hang */ 33 - 34 - /* Bus errors */ 35 - #define FSI_GPIO_ERR_BUSY 1 /* Slave stuck in busy state */ 36 - #define FSI_GPIO_RESP_ERRA 2 /* Any (misc) Error */ 37 - #define FSI_GPIO_RESP_ERRC 3 /* Slave reports master CRC error */ 38 - #define FSI_GPIO_MTOE 4 /* Master time out error */ 39 - #define FSI_GPIO_CRC_INVAL 5 /* Master reports slave CRC error */ 40 - 41 - /* Normal slave responses */ 42 - #define FSI_GPIO_RESP_BUSY 1 43 - #define FSI_GPIO_RESP_ACK 0 44 - #define FSI_GPIO_RESP_ACKD 4 45 - 46 - #define FSI_GPIO_MAX_BUSY 100 47 - #define FSI_GPIO_MTOE_COUNT 1000 48 - #define FSI_GPIO_DRAIN_BITS 20 49 - #define FSI_GPIO_CRC_SIZE 4 50 - #define FSI_GPIO_MSG_ID_SIZE 2 51 - #define FSI_GPIO_MSG_RESPID_SIZE 2 52 - #define FSI_GPIO_PRIME_SLAVE_CLOCKS 100 20 + #define LAST_ADDR_INVALID 0x1 53 21 54 22 struct fsi_master_gpio { 55 23 struct fsi_master master; 56 24 struct device *dev; 57 - spinlock_t cmd_lock; 
/* Lock for commands */ 25 + struct mutex cmd_lock; /* mutex for command ordering */ 58 26 struct gpio_desc *gpio_clk; 59 27 struct gpio_desc *gpio_data; 60 28 struct gpio_desc *gpio_trans; /* Voltage translator */ 61 29 struct gpio_desc *gpio_enable; /* FSI enable */ 62 30 struct gpio_desc *gpio_mux; /* Mux control */ 63 31 bool external_mode; 32 + bool no_delays; 33 + uint32_t last_addr; 34 + uint8_t t_send_delay; 35 + uint8_t t_echo_delay; 64 36 }; 65 37 66 38 #define CREATE_TRACE_POINTS ··· 50 78 int i; 51 79 52 80 for (i = 0; i < count; i++) { 53 - ndelay(FSI_GPIO_STD_DLY); 81 + if (!master->no_delays) 82 + ndelay(FSI_GPIO_STD_DLY); 54 83 gpiod_set_value(master->gpio_clk, 0); 55 - ndelay(FSI_GPIO_STD_DLY); 84 + if (!master->no_delays) 85 + ndelay(FSI_GPIO_STD_DLY); 56 86 gpiod_set_value(master->gpio_clk, 1); 57 87 } 58 88 } 59 89 60 - static int sda_in(struct fsi_master_gpio *master) 90 + static int sda_clock_in(struct fsi_master_gpio *master) 61 91 { 62 92 int in; 63 93 64 - ndelay(FSI_GPIO_STD_DLY); 94 + if (!master->no_delays) 95 + ndelay(FSI_GPIO_STD_DLY); 96 + gpiod_set_value(master->gpio_clk, 0); 97 + 98 + /* Dummy read to feed the synchronizers */ 99 + gpiod_get_value(master->gpio_data); 100 + 101 + /* Actual data read */ 65 102 in = gpiod_get_value(master->gpio_data); 103 + if (!master->no_delays) 104 + ndelay(FSI_GPIO_STD_DLY); 105 + gpiod_set_value(master->gpio_clk, 1); 66 106 return in ? 
1 : 0; 67 107 } 68 108 ··· 97 113 98 114 static void clock_zeros(struct fsi_master_gpio *master, int count) 99 115 { 116 + trace_fsi_master_gpio_clock_zeros(master, count); 100 117 set_sda_output(master, 1); 101 118 clock_toggle(master, count); 102 119 } 120 + 121 + static void echo_delay(struct fsi_master_gpio *master) 122 + { 123 + clock_zeros(master, master->t_echo_delay); 124 + } 125 + 103 126 104 127 static void serial_in(struct fsi_master_gpio *master, struct fsi_gpio_msg *msg, 105 128 uint8_t num_bits) ··· 116 125 set_sda_input(master); 117 126 118 127 for (bit = 0; bit < num_bits; bit++) { 119 - clock_toggle(master, 1); 120 - in_bit = sda_in(master); 128 + in_bit = sda_clock_in(master); 121 129 msg->msg <<= 1; 122 130 msg->msg |= ~in_bit & 0x1; /* Data is active low */ 123 131 } ··· 181 191 msg_push_bits(msg, crc, 4); 182 192 } 183 193 184 - /* 185 - * Encode an Absolute Address command 186 - */ 187 - static void build_abs_ar_command(struct fsi_gpio_msg *cmd, 188 - uint8_t id, uint32_t addr, size_t size, const void *data) 194 + static bool check_same_address(struct fsi_master_gpio *master, int id, 195 + uint32_t addr) 189 196 { 197 + /* this will also handle LAST_ADDR_INVALID */ 198 + return master->last_addr == (((id & 0x3) << 21) | (addr & ~0x3)); 199 + } 200 + 201 + static bool check_relative_address(struct fsi_master_gpio *master, int id, 202 + uint32_t addr, uint32_t *rel_addrp) 203 + { 204 + uint32_t last_addr = master->last_addr; 205 + int32_t rel_addr; 206 + 207 + if (last_addr == LAST_ADDR_INVALID) 208 + return false; 209 + 210 + /* We may be in 23-bit addressing mode, which uses the id as the 211 + * top two address bits. So, if we're referencing a different ID, 212 + * use absolute addresses. 
213 + */ 214 + if (((last_addr >> 21) & 0x3) != id) 215 + return false; 216 + 217 + /* remove the top two bits from any 23-bit addressing */ 218 + last_addr &= (1 << 21) - 1; 219 + 220 + /* We know that the addresses are limited to 21 bits, so this won't 221 + * overflow the signed rel_addr */ 222 + rel_addr = addr - last_addr; 223 + if (rel_addr > 255 || rel_addr < -256) 224 + return false; 225 + 226 + *rel_addrp = (uint32_t)rel_addr; 227 + 228 + return true; 229 + } 230 + 231 + static void last_address_update(struct fsi_master_gpio *master, 232 + int id, bool valid, uint32_t addr) 233 + { 234 + if (!valid) 235 + master->last_addr = LAST_ADDR_INVALID; 236 + else 237 + master->last_addr = ((id & 0x3) << 21) | (addr & ~0x3); 238 + } 239 + 240 + /* 241 + * Encode an Absolute/Relative/Same Address command 242 + */ 243 + static void build_ar_command(struct fsi_master_gpio *master, 244 + struct fsi_gpio_msg *cmd, uint8_t id, 245 + uint32_t addr, size_t size, const void *data) 246 + { 247 + int i, addr_bits, opcode_bits; 190 248 bool write = !!data; 191 - uint8_t ds; 192 - int i; 249 + uint8_t ds, opcode; 250 + uint32_t rel_addr; 193 251 194 252 cmd->bits = 0; 195 253 cmd->msg = 0; 196 254 197 - msg_push_bits(cmd, id, 2); 198 - msg_push_bits(cmd, FSI_GPIO_CMD_ABS_AR, 3); 199 - msg_push_bits(cmd, write ? 
0 : 1, 1); 255 + /* we have 21 bits of address max */ 256 + addr &= ((1 << 21) - 1); 257 + 258 + /* cmd opcodes are variable length - SAME_AR is only two bits */ 259 + opcode_bits = 3; 260 + 261 + if (check_same_address(master, id, addr)) { 262 + /* we still address the byte offset within the word */ 263 + addr_bits = 2; 264 + opcode_bits = 2; 265 + opcode = FSI_CMD_SAME_AR; 266 + trace_fsi_master_gpio_cmd_same_addr(master); 267 + 268 + } else if (check_relative_address(master, id, addr, &rel_addr)) { 269 + /* 8 bits plus sign */ 270 + addr_bits = 9; 271 + addr = rel_addr; 272 + opcode = FSI_CMD_REL_AR; 273 + trace_fsi_master_gpio_cmd_rel_addr(master, rel_addr); 274 + 275 + } else { 276 + addr_bits = 21; 277 + opcode = FSI_CMD_ABS_AR; 278 + trace_fsi_master_gpio_cmd_abs_addr(master, addr); 279 + } 200 280 201 281 /* 202 282 * The read/write size is encoded in the lower bits of the address ··· 283 223 if (size == 4) 284 224 addr |= 1; 285 225 286 - msg_push_bits(cmd, addr & ((1 << 21) - 1), 21); 226 + msg_push_bits(cmd, id, 2); 227 + msg_push_bits(cmd, opcode, opcode_bits); 228 + msg_push_bits(cmd, write ? 
0 : 1, 1); 229 + msg_push_bits(cmd, addr, addr_bits); 287 230 msg_push_bits(cmd, ds, 1); 288 231 for (i = 0; write && i < size; i++) 289 232 msg_push_bits(cmd, ((uint8_t *)data)[i], 8); ··· 300 237 cmd->msg = 0; 301 238 302 239 msg_push_bits(cmd, slave_id, 2); 303 - msg_push_bits(cmd, FSI_GPIO_CMD_DPOLL, 3); 240 + msg_push_bits(cmd, FSI_CMD_DPOLL, 3); 304 241 msg_push_crc(cmd); 305 242 } 306 243 307 - static void echo_delay(struct fsi_master_gpio *master) 244 + static void build_epoll_command(struct fsi_gpio_msg *cmd, uint8_t slave_id) 308 245 { 309 - set_sda_output(master, 1); 310 - clock_toggle(master, FSI_ECHO_DELAY_CLOCKS); 246 + cmd->bits = 0; 247 + cmd->msg = 0; 248 + 249 + msg_push_bits(cmd, slave_id, 2); 250 + msg_push_bits(cmd, FSI_CMD_EPOLL, 3); 251 + msg_push_crc(cmd); 311 252 } 312 253 313 254 static void build_term_command(struct fsi_gpio_msg *cmd, uint8_t slave_id) ··· 320 253 cmd->msg = 0; 321 254 322 255 msg_push_bits(cmd, slave_id, 2); 323 - msg_push_bits(cmd, FSI_GPIO_CMD_TERM, 6); 256 + msg_push_bits(cmd, FSI_CMD_TERM, 6); 324 257 msg_push_crc(cmd); 325 258 } 326 259 327 260 /* 328 - * Store information on master errors so handler can detect and clean 329 - * up the bus 261 + * Note: callers rely specifically on this returning -EAGAIN for 262 + * a CRC error detected in the response. Use other error code 263 + * for other situations. It will be converted to something else 264 + * higher up the stack before it reaches userspace. 
330 265 */ 331 - static void fsi_master_gpio_error(struct fsi_master_gpio *master, int error) 332 - { 333 - 334 - } 335 - 336 266 static int read_one_response(struct fsi_master_gpio *master, 337 267 uint8_t data_size, struct fsi_gpio_msg *msgp, uint8_t *tagp) 338 268 { 339 269 struct fsi_gpio_msg msg; 340 - uint8_t id, tag; 270 + unsigned long flags; 341 271 uint32_t crc; 272 + uint8_t tag; 342 273 int i; 343 274 275 + local_irq_save(flags); 276 + 344 277 /* wait for the start bit */ 345 - for (i = 0; i < FSI_GPIO_MTOE_COUNT; i++) { 278 + for (i = 0; i < FSI_MASTER_MTOE_COUNT; i++) { 346 279 msg.bits = 0; 347 280 msg.msg = 0; 348 281 serial_in(master, &msg, 1); 349 282 if (msg.msg) 350 283 break; 351 284 } 352 - if (i == FSI_GPIO_MTOE_COUNT) { 285 + if (i == FSI_MASTER_MTOE_COUNT) { 353 286 dev_dbg(master->dev, 354 287 "Master time out waiting for response\n"); 355 - fsi_master_gpio_error(master, FSI_GPIO_MTOE); 356 - return -EIO; 288 + local_irq_restore(flags); 289 + return -ETIMEDOUT; 357 290 } 358 291 359 292 msg.bits = 0; ··· 362 295 /* Read slave ID & response tag */ 363 296 serial_in(master, &msg, 4); 364 297 365 - id = (msg.msg >> FSI_GPIO_MSG_RESPID_SIZE) & 0x3; 366 298 tag = msg.msg & 0x3; 367 299 368 300 /* If we have an ACK and we're expecting data, clock the data in too */ 369 - if (tag == FSI_GPIO_RESP_ACK && data_size) 301 + if (tag == FSI_RESP_ACK && data_size) 370 302 serial_in(master, &msg, data_size * 8); 371 303 372 304 /* read CRC */ 373 - serial_in(master, &msg, FSI_GPIO_CRC_SIZE); 305 + serial_in(master, &msg, FSI_CRC_SIZE); 306 + 307 + local_irq_restore(flags); 374 308 375 309 /* we have a whole message now; check CRC */ 376 310 crc = crc4(0, 1, 1); 377 311 crc = crc4(crc, msg.msg, msg.bits); 378 312 if (crc) { 379 - dev_dbg(master->dev, "ERR response CRC\n"); 380 - fsi_master_gpio_error(master, FSI_GPIO_CRC_INVAL); 381 - return -EIO; 313 + /* Check if it's all 1's, that probably means the host is off */ 314 + if (((~msg.msg) & ((1ull << 
msg.bits) - 1)) == 0) 315 + return -ENODEV; 316 + dev_dbg(master->dev, "ERR response CRC msg: 0x%016llx (%d bits)\n", 317 + msg.msg, msg.bits); 318 + return -EAGAIN; 382 319 } 383 320 384 321 if (msgp) ··· 396 325 static int issue_term(struct fsi_master_gpio *master, uint8_t slave) 397 326 { 398 327 struct fsi_gpio_msg cmd; 328 + unsigned long flags; 399 329 uint8_t tag; 400 330 int rc; 401 331 402 332 build_term_command(&cmd, slave); 333 + 334 + local_irq_save(flags); 403 335 serial_out(master, &cmd); 404 336 echo_delay(master); 337 + local_irq_restore(flags); 405 338 406 339 rc = read_one_response(master, 0, NULL, &tag); 407 340 if (rc < 0) { 408 341 dev_err(master->dev, 409 342 "TERM failed; lost communication with slave\n"); 410 343 return -EIO; 411 - } else if (tag != FSI_GPIO_RESP_ACK) { 344 + } else if (tag != FSI_RESP_ACK) { 412 345 dev_err(master->dev, "TERM failed; response %d\n", tag); 413 346 return -EIO; 414 347 } ··· 425 350 { 426 351 struct fsi_gpio_msg response, cmd; 427 352 int busy_count = 0, rc, i; 353 + unsigned long flags; 428 354 uint8_t tag; 429 355 uint8_t *data_byte = data; 430 - 356 + int crc_err_retries = 0; 431 357 retry: 432 358 rc = read_one_response(master, size, &response, &tag); 433 - if (rc) 434 - return rc; 359 + 360 + /* Handle retries on CRC errors */ 361 + if (rc == -EAGAIN) { 362 + /* Too many retries ? */ 363 + if (crc_err_retries++ > FSI_CRC_ERR_RETRIES) { 364 + /* 365 + * Pass it up as a -EIO otherwise upper level will retry 366 + * the whole command which isn't what we want here. 
367 + */ 368 + rc = -EIO; 369 + goto fail; 370 + } 371 + dev_dbg(master->dev, 372 + "CRC error retry %d\n", crc_err_retries); 373 + trace_fsi_master_gpio_crc_rsp_error(master); 374 + build_epoll_command(&cmd, slave); 375 + local_irq_save(flags); 376 + clock_zeros(master, FSI_MASTER_EPOLL_CLOCKS); 377 + serial_out(master, &cmd); 378 + echo_delay(master); 379 + local_irq_restore(flags); 380 + goto retry; 381 + } else if (rc) 382 + goto fail; 435 383 436 384 switch (tag) { 437 - case FSI_GPIO_RESP_ACK: 385 + case FSI_RESP_ACK: 438 386 if (size && data) { 439 387 uint64_t val = response.msg; 440 388 /* clear crc & mask */ ··· 470 372 } 471 373 } 472 374 break; 473 - case FSI_GPIO_RESP_BUSY: 375 + case FSI_RESP_BUSY: 474 376 /* 475 377 * Its necessary to clock slave before issuing 476 378 * d-poll, not indicated in the hardware protocol 477 379 * spec. < 20 clocks causes slave to hang, 21 ok. 478 380 */ 479 - clock_zeros(master, FSI_GPIO_DPOLL_CLOCKS); 480 - if (busy_count++ < FSI_GPIO_MAX_BUSY) { 381 + if (busy_count++ < FSI_MASTER_MAX_BUSY) { 481 382 build_dpoll_command(&cmd, slave); 383 + local_irq_save(flags); 384 + clock_zeros(master, FSI_MASTER_DPOLL_CLOCKS); 482 385 serial_out(master, &cmd); 483 386 echo_delay(master); 387 + local_irq_restore(flags); 484 388 goto retry; 485 389 } 486 390 dev_warn(master->dev, 487 391 "ERR slave is stuck in busy state, issuing TERM\n"); 392 + local_irq_save(flags); 393 + clock_zeros(master, FSI_MASTER_DPOLL_CLOCKS); 394 + local_irq_restore(flags); 488 395 issue_term(master, slave); 489 396 rc = -EIO; 490 397 break; 491 398 492 - case FSI_GPIO_RESP_ERRA: 493 - case FSI_GPIO_RESP_ERRC: 494 - dev_dbg(master->dev, "ERR%c received: 0x%x\n", 495 - tag == FSI_GPIO_RESP_ERRA ? 
'A' : 'C', 496 - (int)response.msg); 497 - fsi_master_gpio_error(master, response.msg); 399 + case FSI_RESP_ERRA: 400 + dev_dbg(master->dev, "ERRA received: 0x%x\n", (int)response.msg); 498 401 rc = -EIO; 402 + break; 403 + case FSI_RESP_ERRC: 404 + dev_dbg(master->dev, "ERRC received: 0x%x\n", (int)response.msg); 405 + trace_fsi_master_gpio_crc_cmd_error(master); 406 + rc = -EAGAIN; 499 407 break; 500 408 } 501 409 502 - /* Clock the slave enough to be ready for next operation */ 503 - clock_zeros(master, FSI_GPIO_PRIME_SLAVE_CLOCKS); 410 + if (busy_count > 0) 411 + trace_fsi_master_gpio_poll_response_busy(master, busy_count); 412 + fail: 413 + /* 414 + * tSendDelay clocks, avoids signal reflections when switching 415 + * from receive of response back to send of data. 416 + */ 417 + local_irq_save(flags); 418 + clock_zeros(master, master->t_send_delay); 419 + local_irq_restore(flags); 420 + 504 421 return rc; 422 + } 423 + 424 + static int send_request(struct fsi_master_gpio *master, 425 + struct fsi_gpio_msg *cmd) 426 + { 427 + unsigned long flags; 428 + 429 + if (master->external_mode) 430 + return -EBUSY; 431 + 432 + local_irq_save(flags); 433 + serial_out(master, cmd); 434 + echo_delay(master); 435 + local_irq_restore(flags); 436 + 437 + return 0; 505 438 } 506 439 507 440 static int fsi_master_gpio_xfer(struct fsi_master_gpio *master, uint8_t slave, 508 441 struct fsi_gpio_msg *cmd, size_t resp_len, void *resp) 509 442 { 510 - unsigned long flags; 511 - int rc; 443 + int rc = -EAGAIN, retries = 0; 512 444 513 - spin_lock_irqsave(&master->cmd_lock, flags); 445 + while ((retries++) < FSI_CRC_ERR_RETRIES) { 446 + rc = send_request(master, cmd); 447 + if (rc) 448 + break; 449 + rc = poll_for_response(master, slave, resp_len, resp); 450 + if (rc != -EAGAIN) 451 + break; 452 + rc = -EIO; 453 + dev_warn(master->dev, "ECRC retry %d\n", retries); 514 454 515 - if (master->external_mode) { 516 - spin_unlock_irqrestore(&master->cmd_lock, flags); 517 - return -EBUSY; 455 
+ /* Pace it a bit before retry */ 456 + msleep(1); 518 457 } 519 - 520 - serial_out(master, cmd); 521 - echo_delay(master); 522 - rc = poll_for_response(master, slave, resp_len, resp); 523 - spin_unlock_irqrestore(&master->cmd_lock, flags); 524 458 525 459 return rc; 526 460 } ··· 562 432 { 563 433 struct fsi_master_gpio *master = to_fsi_master_gpio(_master); 564 434 struct fsi_gpio_msg cmd; 435 + int rc; 565 436 566 437 if (link != 0) 567 438 return -ENODEV; 568 439 569 - build_abs_ar_command(&cmd, id, addr, size, NULL); 570 - return fsi_master_gpio_xfer(master, id, &cmd, size, val); 440 + mutex_lock(&master->cmd_lock); 441 + build_ar_command(master, &cmd, id, addr, size, NULL); 442 + rc = fsi_master_gpio_xfer(master, id, &cmd, size, val); 443 + last_address_update(master, id, rc == 0, addr); 444 + mutex_unlock(&master->cmd_lock); 445 + 446 + return rc; 571 447 } 572 448 573 449 static int fsi_master_gpio_write(struct fsi_master *_master, int link, ··· 581 445 { 582 446 struct fsi_master_gpio *master = to_fsi_master_gpio(_master); 583 447 struct fsi_gpio_msg cmd; 448 + int rc; 584 449 585 450 if (link != 0) 586 451 return -ENODEV; 587 452 588 - build_abs_ar_command(&cmd, id, addr, size, val); 589 - return fsi_master_gpio_xfer(master, id, &cmd, 0, NULL); 453 + mutex_lock(&master->cmd_lock); 454 + build_ar_command(master, &cmd, id, addr, size, val); 455 + rc = fsi_master_gpio_xfer(master, id, &cmd, 0, NULL); 456 + last_address_update(master, id, rc == 0, addr); 457 + mutex_unlock(&master->cmd_lock); 458 + 459 + return rc; 590 460 } 591 461 592 462 static int fsi_master_gpio_term(struct fsi_master *_master, ··· 600 458 { 601 459 struct fsi_master_gpio *master = to_fsi_master_gpio(_master); 602 460 struct fsi_gpio_msg cmd; 461 + int rc; 603 462 604 463 if (link != 0) 605 464 return -ENODEV; 606 465 466 + mutex_lock(&master->cmd_lock); 607 467 build_term_command(&cmd, id); 608 - return fsi_master_gpio_xfer(master, id, &cmd, 0, NULL); 468 + rc = 
fsi_master_gpio_xfer(master, id, &cmd, 0, NULL); 469 + last_address_update(master, id, false, 0); 470 + mutex_unlock(&master->cmd_lock); 471 + 472 + return rc; 609 473 } 610 474 611 475 static int fsi_master_gpio_break(struct fsi_master *_master, int link) ··· 624 476 625 477 trace_fsi_master_gpio_break(master); 626 478 627 - spin_lock_irqsave(&master->cmd_lock, flags); 479 + mutex_lock(&master->cmd_lock); 628 480 if (master->external_mode) { 629 - spin_unlock_irqrestore(&master->cmd_lock, flags); 481 + mutex_unlock(&master->cmd_lock); 630 482 return -EBUSY; 631 483 } 484 + 485 + local_irq_save(flags); 486 + 632 487 set_sda_output(master, 1); 633 488 sda_out(master, 1); 634 489 clock_toggle(master, FSI_PRE_BREAK_CLOCKS); ··· 640 489 echo_delay(master); 641 490 sda_out(master, 1); 642 491 clock_toggle(master, FSI_POST_BREAK_CLOCKS); 643 - spin_unlock_irqrestore(&master->cmd_lock, flags); 492 + 493 + local_irq_restore(flags); 494 + 495 + last_address_update(master, 0, false, 0); 496 + mutex_unlock(&master->cmd_lock); 644 497 645 498 /* Wait for logic reset to take effect */ 646 499 udelay(200); ··· 654 499 655 500 static void fsi_master_gpio_init(struct fsi_master_gpio *master) 656 501 { 502 + unsigned long flags; 503 + 657 504 gpiod_direction_output(master->gpio_mux, 1); 658 505 gpiod_direction_output(master->gpio_trans, 1); 659 506 gpiod_direction_output(master->gpio_enable, 1); ··· 663 506 gpiod_direction_output(master->gpio_data, 1); 664 507 665 508 /* todo: evaluate if clocks can be reduced */ 509 + local_irq_save(flags); 666 510 clock_zeros(master, FSI_INIT_CLOCKS); 511 + local_irq_restore(flags); 667 512 } 668 513 669 514 static void fsi_master_gpio_init_external(struct fsi_master_gpio *master) ··· 680 521 static int fsi_master_gpio_link_enable(struct fsi_master *_master, int link) 681 522 { 682 523 struct fsi_master_gpio *master = to_fsi_master_gpio(_master); 683 - unsigned long flags; 684 524 int rc = -EBUSY; 685 525 686 526 if (link != 0) 687 527 return 
-ENODEV; 688 528 689 - spin_lock_irqsave(&master->cmd_lock, flags); 529 + mutex_lock(&master->cmd_lock); 690 530 if (!master->external_mode) { 691 531 gpiod_set_value(master->gpio_enable, 1); 692 532 rc = 0; 693 533 } 694 - spin_unlock_irqrestore(&master->cmd_lock, flags); 534 + mutex_unlock(&master->cmd_lock); 695 535 696 536 return rc; 537 + } 538 + 539 + static int fsi_master_gpio_link_config(struct fsi_master *_master, int link, 540 + u8 t_send_delay, u8 t_echo_delay) 541 + { 542 + struct fsi_master_gpio *master = to_fsi_master_gpio(_master); 543 + 544 + if (link != 0) 545 + return -ENODEV; 546 + 547 + mutex_lock(&master->cmd_lock); 548 + master->t_send_delay = t_send_delay; 549 + master->t_echo_delay = t_echo_delay; 550 + mutex_unlock(&master->cmd_lock); 551 + 552 + return 0; 697 553 } 698 554 699 555 static ssize_t external_mode_show(struct device *dev, ··· 724 550 struct device_attribute *attr, const char *buf, size_t count) 725 551 { 726 552 struct fsi_master_gpio *master = dev_get_drvdata(dev); 727 - unsigned long flags, val; 553 + unsigned long val; 728 554 bool external_mode; 729 555 int err; 730 556 ··· 734 560 735 561 external_mode = !!val; 736 562 737 - spin_lock_irqsave(&master->cmd_lock, flags); 563 + mutex_lock(&master->cmd_lock); 738 564 739 565 if (external_mode == master->external_mode) { 740 - spin_unlock_irqrestore(&master->cmd_lock, flags); 566 + mutex_unlock(&master->cmd_lock); 741 567 return count; 742 568 } 743 569 ··· 746 572 fsi_master_gpio_init_external(master); 747 573 else 748 574 fsi_master_gpio_init(master); 749 - spin_unlock_irqrestore(&master->cmd_lock, flags); 575 + 576 + mutex_unlock(&master->cmd_lock); 750 577 751 578 fsi_master_rescan(&master->master); 752 579 ··· 757 582 static DEVICE_ATTR(external_mode, 0664, 758 583 external_mode_show, external_mode_store); 759 584 585 + static void fsi_master_gpio_release(struct device *dev) 586 + { 587 + struct fsi_master_gpio *master = to_fsi_master_gpio(dev_to_fsi_master(dev)); 588 + 
589 + of_node_put(dev_of_node(master->dev)); 590 + 591 + kfree(master); 592 + } 593 + 760 594 static int fsi_master_gpio_probe(struct platform_device *pdev) 761 595 { 762 596 struct fsi_master_gpio *master; 763 597 struct gpio_desc *gpio; 764 598 int rc; 765 599 766 - master = devm_kzalloc(&pdev->dev, sizeof(*master), GFP_KERNEL); 600 + master = kzalloc(sizeof(*master), GFP_KERNEL); 767 601 if (!master) 768 602 return -ENOMEM; 769 603 770 604 master->dev = &pdev->dev; 771 605 master->master.dev.parent = master->dev; 772 606 master->master.dev.of_node = of_node_get(dev_of_node(master->dev)); 607 + master->master.dev.release = fsi_master_gpio_release; 608 + master->last_addr = LAST_ADDR_INVALID; 773 609 774 610 gpio = devm_gpiod_get(&pdev->dev, "clock", 0); 775 611 if (IS_ERR(gpio)) { 776 612 dev_err(&pdev->dev, "failed to get clock gpio\n"); 777 - return PTR_ERR(gpio); 613 + rc = PTR_ERR(gpio); 614 + goto err_free; 778 615 } 779 616 master->gpio_clk = gpio; 780 617 781 618 gpio = devm_gpiod_get(&pdev->dev, "data", 0); 782 619 if (IS_ERR(gpio)) { 783 620 dev_err(&pdev->dev, "failed to get data gpio\n"); 784 - return PTR_ERR(gpio); 621 + rc = PTR_ERR(gpio); 622 + goto err_free; 785 623 } 786 624 master->gpio_data = gpio; 787 625 ··· 802 614 gpio = devm_gpiod_get_optional(&pdev->dev, "trans", 0); 803 615 if (IS_ERR(gpio)) { 804 616 dev_err(&pdev->dev, "failed to get trans gpio\n"); 805 - return PTR_ERR(gpio); 617 + rc = PTR_ERR(gpio); 618 + goto err_free; 806 619 } 807 620 master->gpio_trans = gpio; 808 621 809 622 gpio = devm_gpiod_get_optional(&pdev->dev, "enable", 0); 810 623 if (IS_ERR(gpio)) { 811 624 dev_err(&pdev->dev, "failed to get enable gpio\n"); 812 - return PTR_ERR(gpio); 625 + rc = PTR_ERR(gpio); 626 + goto err_free; 813 627 } 814 628 master->gpio_enable = gpio; 815 629 816 630 gpio = devm_gpiod_get_optional(&pdev->dev, "mux", 0); 817 631 if (IS_ERR(gpio)) { 818 632 dev_err(&pdev->dev, "failed to get mux gpio\n"); 819 - return PTR_ERR(gpio); 633 + rc = 
PTR_ERR(gpio); 634 + goto err_free; 820 635 } 821 636 master->gpio_mux = gpio; 637 + 638 + /* 639 + * Check if GPIO block is slow enough that no extra delays 640 + * are necessary. This improves performance on ast2500 by 641 + * an order of magnitude. 642 + */ 643 + master->no_delays = device_property_present(&pdev->dev, "no-gpio-delays"); 644 + 645 + /* Default FSI command delays */ 646 + master->t_send_delay = FSI_SEND_DELAY_CLOCKS; 647 + master->t_echo_delay = FSI_ECHO_DELAY_CLOCKS; 822 648 823 649 master->master.n_links = 1; 824 650 master->master.flags = FSI_MASTER_FLAG_SWCLOCK; ··· 841 639 master->master.term = fsi_master_gpio_term; 842 640 master->master.send_break = fsi_master_gpio_break; 843 641 master->master.link_enable = fsi_master_gpio_link_enable; 642 + master->master.link_config = fsi_master_gpio_link_config; 844 643 platform_set_drvdata(pdev, master); 845 - spin_lock_init(&master->cmd_lock); 644 + mutex_init(&master->cmd_lock); 846 645 847 646 fsi_master_gpio_init(master); 848 647 849 648 rc = device_create_file(&pdev->dev, &dev_attr_external_mode); 850 649 if (rc) 851 - return rc; 650 + goto err_free; 852 651 853 - return fsi_master_register(&master->master); 652 + rc = fsi_master_register(&master->master); 653 + if (rc) { 654 + device_remove_file(&pdev->dev, &dev_attr_external_mode); 655 + put_device(&master->master.dev); 656 + return rc; 657 + } 658 + return 0; 659 + err_free: 660 + kfree(master); 661 + return rc; 854 662 } 663 + 855 664 856 665 857 666 static int fsi_master_gpio_remove(struct platform_device *pdev) 658 667 { 859 668 struct fsi_master_gpio *master = platform_get_drvdata(pdev); 860 669 861 - devm_gpiod_put(&pdev->dev, master->gpio_clk); 862 - devm_gpiod_put(&pdev->dev, master->gpio_data); 863 - if (master->gpio_trans) 864 - devm_gpiod_put(&pdev->dev, master->gpio_trans); 865 - if (master->gpio_enable) 866 - devm_gpiod_put(&pdev->dev, master->gpio_enable); 867 - if (master->gpio_mux) 868 - devm_gpiod_put(&pdev->dev,
master->gpio_mux); 869 - fsi_master_unregister(&master->master); 670 + device_remove_file(&pdev->dev, &dev_attr_external_mode); 870 671 871 - of_node_put(master->master.dev.of_node); 672 + fsi_master_unregister(&master->master); 872 673 873 674 return 0; 874 675 }
+3 -2
drivers/fsi/fsi-master-hub.c
··· 122 122 123 123 static int hub_master_break(struct fsi_master *master, int link) 124 124 { 125 - uint32_t addr, cmd; 125 + uint32_t addr; 126 + __be32 cmd; 126 127 127 128 addr = 0x4; 128 129 cmd = cpu_to_be32(0xc0de0000); ··· 206 205 if (rc) 207 206 return rc; 208 207 209 - reg = ~0; 208 + reg = cpu_to_be32(~0); 210 209 rc = fsi_device_write(dev, FSI_MSENP0, &reg, sizeof(reg)); 211 210 if (rc) 212 211 return rc;
+37
drivers/fsi/fsi-master.h
··· 18 18 #define DRIVERS_FSI_MASTER_H 19 19 20 20 #include <linux/device.h> 21 + #include <linux/mutex.h> 21 22 23 + /* Various protocol delays */ 24 + #define FSI_ECHO_DELAY_CLOCKS 16 /* Number clocks for echo delay */ 25 + #define FSI_SEND_DELAY_CLOCKS 16 /* Number clocks for send delay */ 26 + #define FSI_PRE_BREAK_CLOCKS 50 /* Number clocks to prep for break */ 27 + #define FSI_BREAK_CLOCKS 256 /* Number of clocks to issue break */ 28 + #define FSI_POST_BREAK_CLOCKS 16000 /* Number clocks to set up cfam */ 29 + #define FSI_INIT_CLOCKS 5000 /* Clock out any old data */ 30 + #define FSI_MASTER_DPOLL_CLOCKS 50 /* < 21 will cause slave to hang */ 31 + #define FSI_MASTER_EPOLL_CLOCKS 50 /* Number of clocks for E_POLL retry */ 32 + 33 + /* Various retry maximums */ 34 + #define FSI_CRC_ERR_RETRIES 10 35 + #define FSI_MASTER_MAX_BUSY 200 36 + #define FSI_MASTER_MTOE_COUNT 1000 37 + 38 + /* Command encodings */ 39 + #define FSI_CMD_DPOLL 0x2 40 + #define FSI_CMD_EPOLL 0x3 41 + #define FSI_CMD_TERM 0x3f 42 + #define FSI_CMD_ABS_AR 0x4 43 + #define FSI_CMD_REL_AR 0x5 44 + #define FSI_CMD_SAME_AR 0x3 /* but only a 2-bit opcode... 
*/ 45 + 46 + /* Slave responses */ 47 + #define FSI_RESP_ACK 0 /* Success */ 48 + #define FSI_RESP_BUSY 1 /* Slave busy */ 49 + #define FSI_RESP_ERRA 2 /* Any (misc) Error */ 50 + #define FSI_RESP_ERRC 3 /* Slave reports master CRC error */ 51 + 52 + /* Misc */ 53 + #define FSI_CRC_SIZE 4 54 + 55 + /* fsi-master definition and flags */ 22 56 #define FSI_MASTER_FLAG_SWCLOCK 0x1 23 57 24 58 struct fsi_master { ··· 60 26 int idx; 61 27 int n_links; 62 28 int flags; 29 + struct mutex scan_lock; 63 30 int (*read)(struct fsi_master *, int link, uint8_t id, 64 31 uint32_t addr, void *val, size_t size); 65 32 int (*write)(struct fsi_master *, int link, uint8_t id, ··· 68 33 int (*term)(struct fsi_master *, int link, uint8_t id); 69 34 int (*send_break)(struct fsi_master *, int link); 70 35 int (*link_enable)(struct fsi_master *, int link); 36 + int (*link_config)(struct fsi_master *, int link, 37 + u8 t_send_delay, u8 t_echo_delay); 71 38 }; 72 39 73 40 #define dev_to_fsi_master(d) container_of(d, struct fsi_master, dev)
+1066
drivers/fsi/fsi-sbefifo.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) IBM Corporation 2017 4 + * 5 + * This program is free software; you can redistribute it and/or modify 6 + * it under the terms of the GNU General Public License version 2 as 7 + * published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope that it will be useful, 10 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 + * GNU General Public License for more details. 13 + */ 14 + 15 + #include <linux/device.h> 16 + #include <linux/errno.h> 17 + #include <linux/fs.h> 18 + #include <linux/fsi.h> 19 + #include <linux/fsi-sbefifo.h> 20 + #include <linux/kernel.h> 21 + #include <linux/cdev.h> 22 + #include <linux/module.h> 23 + #include <linux/mutex.h> 24 + #include <linux/of.h> 25 + #include <linux/of_device.h> 26 + #include <linux/of_platform.h> 27 + #include <linux/sched.h> 28 + #include <linux/slab.h> 29 + #include <linux/uaccess.h> 30 + #include <linux/delay.h> 31 + #include <linux/uio.h> 32 + #include <linux/vmalloc.h> 33 + #include <linux/mm.h> 34 + 35 + /* 36 + * The SBEFIFO is a pipe-like FSI device for communicating with 37 + * the self boot engine on POWER processors.
38 + */ 39 + 40 + #define DEVICE_NAME "sbefifo" 41 + #define FSI_ENGID_SBE 0x22 42 + 43 + /* 44 + * Register layout 45 + */ 46 + 47 + /* Register banks */ 48 + #define SBEFIFO_UP 0x00 /* FSI -> Host */ 49 + #define SBEFIFO_DOWN 0x40 /* Host -> FSI */ 50 + 51 + /* Per-bank registers */ 52 + #define SBEFIFO_FIFO 0x00 /* The FIFO itself */ 53 + #define SBEFIFO_STS 0x04 /* Status register */ 54 + #define SBEFIFO_STS_PARITY_ERR 0x20000000 55 + #define SBEFIFO_STS_RESET_REQ 0x02000000 56 + #define SBEFIFO_STS_GOT_EOT 0x00800000 57 + #define SBEFIFO_STS_MAX_XFER_LIMIT 0x00400000 58 + #define SBEFIFO_STS_FULL 0x00200000 59 + #define SBEFIFO_STS_EMPTY 0x00100000 60 + #define SBEFIFO_STS_ECNT_MASK 0x000f0000 61 + #define SBEFIFO_STS_ECNT_SHIFT 16 62 + #define SBEFIFO_STS_VALID_MASK 0x0000ff00 63 + #define SBEFIFO_STS_VALID_SHIFT 8 64 + #define SBEFIFO_STS_EOT_MASK 0x000000ff 65 + #define SBEFIFO_STS_EOT_SHIFT 0 66 + #define SBEFIFO_EOT_RAISE 0x08 /* (Up only) Set End Of Transfer */ 67 + #define SBEFIFO_REQ_RESET 0x0C /* (Up only) Reset Request */ 68 + #define SBEFIFO_PERFORM_RESET 0x10 /* (Down only) Perform Reset */ 69 + #define SBEFIFO_EOT_ACK 0x14 /* (Down only) Acknowledge EOT */ 70 + #define SBEFIFO_DOWN_MAX 0x18 /* (Down only) Max transfer */ 71 + 72 + /* CFAM GP Mailbox SelfBoot Message register */ 73 + #define CFAM_GP_MBOX_SBM_ADDR 0x2824 /* Converted 0x2809 */ 74 + 75 + #define CFAM_SBM_SBE_BOOTED 0x80000000 76 + #define CFAM_SBM_SBE_ASYNC_FFDC 0x40000000 77 + #define CFAM_SBM_SBE_STATE_MASK 0x00f00000 78 + #define CFAM_SBM_SBE_STATE_SHIFT 20 79 + 80 + enum sbe_state 81 + { 82 + SBE_STATE_UNKNOWN = 0x0, // Unknown, initial state 83 + SBE_STATE_IPLING = 0x1, // IPL'ing - autonomous mode (transient) 84 + SBE_STATE_ISTEP = 0x2, // ISTEP - Running IPL by steps (transient) 85 + SBE_STATE_MPIPL = 0x3, // MPIPL 86 + SBE_STATE_RUNTIME = 0x4, // SBE Runtime 87 + SBE_STATE_DMT = 0x5, // Dead Man Timer State (transient) 88 + SBE_STATE_DUMP = 0x6, // Dumping 89 +
SBE_STATE_FAILURE = 0x7, // Internal SBE failure 90 + SBE_STATE_QUIESCE = 0x8, // Final state - needs SBE reset to get out 91 + }; 92 + 93 + /* FIFO depth */ 94 + #define SBEFIFO_FIFO_DEPTH 8 95 + 96 + /* Helpers */ 97 + #define sbefifo_empty(sts) ((sts) & SBEFIFO_STS_EMPTY) 98 + #define sbefifo_full(sts) ((sts) & SBEFIFO_STS_FULL) 99 + #define sbefifo_parity_err(sts) ((sts) & SBEFIFO_STS_PARITY_ERR) 100 + #define sbefifo_populated(sts) (((sts) & SBEFIFO_STS_ECNT_MASK) >> SBEFIFO_STS_ECNT_SHIFT) 101 + #define sbefifo_vacant(sts) (SBEFIFO_FIFO_DEPTH - sbefifo_populated(sts)) 102 + #define sbefifo_eot_set(sts) (((sts) & SBEFIFO_STS_EOT_MASK) >> SBEFIFO_STS_EOT_SHIFT) 103 + 104 + /* Reset request timeout in ms */ 105 + #define SBEFIFO_RESET_TIMEOUT 10000 106 + 107 + /* Timeouts for commands in ms */ 108 + #define SBEFIFO_TIMEOUT_START_CMD 10000 109 + #define SBEFIFO_TIMEOUT_IN_CMD 1000 110 + #define SBEFIFO_TIMEOUT_START_RSP 10000 111 + #define SBEFIFO_TIMEOUT_IN_RSP 1000 112 + 113 + /* Other constants */ 114 + #define SBEFIFO_MAX_USER_CMD_LEN (0x100000 + PAGE_SIZE) 115 + #define SBEFIFO_RESET_MAGIC 0x52534554 /* "RSET" */ 116 + 117 + struct sbefifo { 118 + uint32_t magic; 119 + #define SBEFIFO_MAGIC 0x53424546 /* "SBEF" */ 120 + struct fsi_device *fsi_dev; 121 + struct device dev; 122 + struct cdev cdev; 123 + struct mutex lock; 124 + bool broken; 125 + bool dead; 126 + bool async_ffdc; 127 + }; 128 + 129 + struct sbefifo_user { 130 + struct sbefifo *sbefifo; 131 + struct mutex file_lock; 132 + void *cmd_page; 133 + void *pending_cmd; 134 + size_t pending_len; 135 + }; 136 + 137 + static DEFINE_MUTEX(sbefifo_ffdc_mutex); 138 + 139 + 140 + static void __sbefifo_dump_ffdc(struct device *dev, const __be32 *ffdc, 141 + size_t ffdc_sz, bool internal) 142 + { 143 + int pack = 0; 144 + #define FFDC_LSIZE 60 145 + static char ffdc_line[FFDC_LSIZE]; 146 + char *p = ffdc_line; 147 + 148 + while (ffdc_sz) { 149 + u32 w0, w1, w2, i; 150 + if (ffdc_sz < 3) { 151 + dev_err(dev, 
"SBE invalid FFDC package size %zd\n", ffdc_sz); 152 + return; 153 + } 154 + w0 = be32_to_cpu(*(ffdc++)); 155 + w1 = be32_to_cpu(*(ffdc++)); 156 + w2 = be32_to_cpu(*(ffdc++)); 157 + ffdc_sz -= 3; 158 + if ((w0 >> 16) != 0xFFDC) { 159 + dev_err(dev, "SBE invalid FFDC package signature %08x %08x %08x\n", 160 + w0, w1, w2); 161 + break; 162 + } 163 + w0 &= 0xffff; 164 + if (w0 > ffdc_sz) { 165 + dev_err(dev, "SBE FFDC package len %d words but only %zd remaining\n", 166 + w0, ffdc_sz); 167 + w0 = ffdc_sz; 168 + break; 169 + } 170 + if (internal) { 171 + dev_warn(dev, "+---- SBE FFDC package %d for async err -----+\n", 172 + pack++); 173 + } else { 174 + dev_warn(dev, "+---- SBE FFDC package %d for cmd %02x:%02x -----+\n", 175 + pack++, (w1 >> 8) & 0xff, w1 & 0xff); 176 + } 177 + dev_warn(dev, "| Response code: %08x |\n", w2); 178 + dev_warn(dev, "|-------------------------------------------|\n"); 179 + for (i = 0; i < w0; i++) { 180 + if ((i & 3) == 0) { 181 + p = ffdc_line; 182 + p += sprintf(p, "| %04x:", i << 4); 183 + } 184 + p += sprintf(p, " %08x", be32_to_cpu(*(ffdc++))); 185 + ffdc_sz--; 186 + if ((i & 3) == 3 || i == (w0 - 1)) { 187 + while ((i & 3) < 3) { 188 + p += sprintf(p, " "); 189 + i++; 190 + } 191 + dev_warn(dev, "%s |\n", ffdc_line); 192 + } 193 + } 194 + dev_warn(dev, "+-------------------------------------------+\n"); 195 + } 196 + } 197 + 198 + static void sbefifo_dump_ffdc(struct device *dev, const __be32 *ffdc, 199 + size_t ffdc_sz, bool internal) 200 + { 201 + mutex_lock(&sbefifo_ffdc_mutex); 202 + __sbefifo_dump_ffdc(dev, ffdc, ffdc_sz, internal); 203 + mutex_unlock(&sbefifo_ffdc_mutex); 204 + } 205 + 206 + int sbefifo_parse_status(struct device *dev, u16 cmd, __be32 *response, 207 + size_t resp_len, size_t *data_len) 208 + { 209 + u32 dh, s0, s1; 210 + size_t ffdc_sz; 211 + 212 + if (resp_len < 3) { 213 + pr_debug("sbefifo: cmd %04x, response too small: %zd\n", 214 + cmd, resp_len); 215 + return -ENXIO; 216 + } 217 + dh = 
be32_to_cpu(response[resp_len - 1]); 218 + if (dh > resp_len || dh < 3) { 219 + dev_err(dev, "SBE cmd %02x:%02x status offset out of range: %d/%zd\n", 220 + cmd >> 8, cmd & 0xff, dh, resp_len); 221 + return -ENXIO; 222 + } 223 + s0 = be32_to_cpu(response[resp_len - dh]); 224 + s1 = be32_to_cpu(response[resp_len - dh + 1]); 225 + if (((s0 >> 16) != 0xC0DE) || ((s0 & 0xffff) != cmd)) { 226 + dev_err(dev, "SBE cmd %02x:%02x, status signature invalid: 0x%08x 0x%08x\n", 227 + cmd >> 8, cmd & 0xff, s0, s1); 228 + return -ENXIO; 229 + } 230 + if (s1 != 0) { 231 + ffdc_sz = dh - 3; 232 + dev_warn(dev, "SBE error cmd %02x:%02x status=%04x:%04x\n", 233 + cmd >> 8, cmd & 0xff, s1 >> 16, s1 & 0xffff); 234 + if (ffdc_sz) 235 + sbefifo_dump_ffdc(dev, &response[resp_len - dh + 2], 236 + ffdc_sz, false); 237 + } 238 + if (data_len) 239 + *data_len = resp_len - dh; 240 + 241 + /* 242 + * The primary status doesn't have the top bit set, so it can't be confused 243 + * with Linux negative error codes; return the status word whole.
244 + */ 245 + return s1; 246 + } 247 + EXPORT_SYMBOL_GPL(sbefifo_parse_status); 248 + 249 + static int sbefifo_regr(struct sbefifo *sbefifo, int reg, u32 *word) 250 + { 251 + __be32 raw_word; 252 + int rc; 253 + 254 + rc = fsi_device_read(sbefifo->fsi_dev, reg, &raw_word, 255 + sizeof(raw_word)); 256 + if (rc) 257 + return rc; 258 + 259 + *word = be32_to_cpu(raw_word); 260 + 261 + return 0; 262 + } 263 + 264 + static int sbefifo_regw(struct sbefifo *sbefifo, int reg, u32 word) 265 + { 266 + __be32 raw_word = cpu_to_be32(word); 267 + 268 + return fsi_device_write(sbefifo->fsi_dev, reg, &raw_word, 269 + sizeof(raw_word)); 270 + } 271 + 272 + static int sbefifo_check_sbe_state(struct sbefifo *sbefifo) 273 + { 274 + __be32 raw_word; 275 + u32 sbm; 276 + int rc; 277 + 278 + rc = fsi_slave_read(sbefifo->fsi_dev->slave, CFAM_GP_MBOX_SBM_ADDR, 279 + &raw_word, sizeof(raw_word)); 280 + if (rc) 281 + return rc; 282 + sbm = be32_to_cpu(raw_word); 283 + 284 + /* SBE booted at all ? */ 285 + if (!(sbm & CFAM_SBM_SBE_BOOTED)) 286 + return -ESHUTDOWN; 287 + 288 + /* Check its state */ 289 + switch ((sbm & CFAM_SBM_SBE_STATE_MASK) >> CFAM_SBM_SBE_STATE_SHIFT) { 290 + case SBE_STATE_UNKNOWN: 291 + return -ESHUTDOWN; 292 + case SBE_STATE_IPLING: 293 + case SBE_STATE_ISTEP: 294 + case SBE_STATE_MPIPL: 295 + case SBE_STATE_DMT: 296 + return -EBUSY; 297 + case SBE_STATE_RUNTIME: 298 + case SBE_STATE_DUMP: /* Not sure about that one */ 299 + break; 300 + case SBE_STATE_FAILURE: 301 + case SBE_STATE_QUIESCE: 302 + return -ESHUTDOWN; 303 + } 304 + 305 + /* Is there async FFDC available ? Remember it */ 306 + if (sbm & CFAM_SBM_SBE_ASYNC_FFDC) 307 + sbefifo->async_ffdc = true; 308 + 309 + return 0; 310 + } 311 + 312 + /* Don't flip endianness of data to/from FIFO, just pass through. 
*/ 313 + static int sbefifo_down_read(struct sbefifo *sbefifo, __be32 *word) 314 + { 315 + return fsi_device_read(sbefifo->fsi_dev, SBEFIFO_DOWN, word, 316 + sizeof(*word)); 317 + } 318 + 319 + static int sbefifo_up_write(struct sbefifo *sbefifo, __be32 word) 320 + { 321 + return fsi_device_write(sbefifo->fsi_dev, SBEFIFO_UP, &word, 322 + sizeof(word)); 323 + } 324 + 325 + static int sbefifo_request_reset(struct sbefifo *sbefifo) 326 + { 327 + struct device *dev = &sbefifo->fsi_dev->dev; 328 + u32 status, timeout; 329 + int rc; 330 + 331 + dev_dbg(dev, "Requesting FIFO reset\n"); 332 + 333 + /* Mark broken first, will be cleared if reset succeeds */ 334 + sbefifo->broken = true; 335 + 336 + /* Send reset request */ 337 + rc = sbefifo_regw(sbefifo, SBEFIFO_UP | SBEFIFO_REQ_RESET, 1); 338 + if (rc) { 339 + dev_err(dev, "Sending reset request failed, rc=%d\n", rc); 340 + return rc; 341 + } 342 + 343 + /* Wait for it to complete */ 344 + for (timeout = 0; timeout < SBEFIFO_RESET_TIMEOUT; timeout++) { 345 + rc = sbefifo_regr(sbefifo, SBEFIFO_UP | SBEFIFO_STS, &status); 346 + if (rc) { 347 + dev_err(dev, "Failed to read UP fifo status during reset" 348 + " , rc=%d\n", rc); 349 + return rc; 350 + } 351 + 352 + if (!(status & SBEFIFO_STS_RESET_REQ)) { 353 + dev_dbg(dev, "FIFO reset done\n"); 354 + sbefifo->broken = false; 355 + return 0; 356 + } 357 + 358 + msleep(1); 359 + } 360 + dev_err(dev, "FIFO reset timed out\n"); 361 + 362 + return -ETIMEDOUT; 363 + } 364 + 365 + static int sbefifo_cleanup_hw(struct sbefifo *sbefifo) 366 + { 367 + struct device *dev = &sbefifo->fsi_dev->dev; 368 + u32 up_status, down_status; 369 + bool need_reset = false; 370 + int rc; 371 + 372 + rc = sbefifo_check_sbe_state(sbefifo); 373 + if (rc) { 374 + dev_dbg(dev, "SBE state=%d\n", rc); 375 + return rc; 376 + } 377 + 378 + /* If broken, we don't need to look at status, go straight to reset */ 379 + if (sbefifo->broken) 380 + goto do_reset; 381 + 382 + rc = sbefifo_regr(sbefifo, SBEFIFO_UP | 
SBEFIFO_STS, &up_status); 383 + if (rc) { 384 + dev_err(dev, "Cleanup: Reading UP status failed, rc=%d\n", rc); 385 + 386 + /* Will try reset again on next attempt at using it */ 387 + sbefifo->broken = true; 388 + return rc; 389 + } 390 + 391 + rc = sbefifo_regr(sbefifo, SBEFIFO_DOWN | SBEFIFO_STS, &down_status); 392 + if (rc) { 393 + dev_err(dev, "Cleanup: Reading DOWN status failed, rc=%d\n", rc); 394 + 395 + /* Will try reset again on next attempt at using it */ 396 + sbefifo->broken = true; 397 + return rc; 398 + } 399 + 400 + /* The FIFO already contains a reset request from the SBE ? */ 401 + if (down_status & SBEFIFO_STS_RESET_REQ) { 402 + dev_info(dev, "Cleanup: FIFO reset request set, resetting\n"); 403 + rc = sbefifo_regw(sbefifo, SBEFIFO_UP, SBEFIFO_PERFORM_RESET); 404 + if (rc) { 405 + sbefifo->broken = true; 406 + dev_err(dev, "Cleanup: Reset reg write failed, rc=%d\n", rc); 407 + return rc; 408 + } 409 + sbefifo->broken = false; 410 + return 0; 411 + } 412 + 413 + /* Parity error on either FIFO ? */ 414 + if ((up_status | down_status) & SBEFIFO_STS_PARITY_ERR) 415 + need_reset = true; 416 + 417 + /* Either FIFO not empty ? */ 418 + if (!((up_status & down_status) & SBEFIFO_STS_EMPTY)) 419 + need_reset = true; 420 + 421 + if (!need_reset) 422 + return 0; 423 + 424 + dev_info(dev, "Cleanup: FIFO not clean (up=0x%08x down=0x%08x)\n", 425 + up_status, down_status); 426 + 427 + do_reset: 428 + 429 + /* Mark broken, will be cleared if/when reset succeeds */ 430 + return sbefifo_request_reset(sbefifo); 431 + } 432 + 433 + static int sbefifo_wait(struct sbefifo *sbefifo, bool up, 434 + u32 *status, unsigned long timeout) 435 + { 436 + struct device *dev = &sbefifo->fsi_dev->dev; 437 + unsigned long end_time; 438 + bool ready = false; 439 + u32 addr, sts = 0; 440 + int rc; 441 + 442 + dev_vdbg(dev, "Wait on %s fifo...\n", up ? "up" : "down"); 443 + 444 + addr = (up ? 
SBEFIFO_UP : SBEFIFO_DOWN) | SBEFIFO_STS; 445 + 446 + end_time = jiffies + timeout; 447 + while (!time_after(jiffies, end_time)) { 448 + cond_resched(); 449 + rc = sbefifo_regr(sbefifo, addr, &sts); 450 + if (rc < 0) { 451 + dev_err(dev, "FSI error %d reading status register\n", rc); 452 + return rc; 453 + } 454 + if (!up && sbefifo_parity_err(sts)) { 455 + dev_err(dev, "Parity error in DOWN FIFO\n"); 456 + return -ENXIO; 457 + } 458 + ready = !(up ? sbefifo_full(sts) : sbefifo_empty(sts)); 459 + if (ready) 460 + break; 461 + } 462 + if (!ready) { 463 + dev_err(dev, "%s FIFO Timeout ! status=%08x\n", up ? "UP" : "DOWN", sts); 464 + return -ETIMEDOUT; 465 + } 466 + dev_vdbg(dev, "End of wait status: %08x\n", sts); 467 + 468 + *status = sts; 469 + 470 + return 0; 471 + } 472 + 473 + static int sbefifo_send_command(struct sbefifo *sbefifo, 474 + const __be32 *command, size_t cmd_len) 475 + { 476 + struct device *dev = &sbefifo->fsi_dev->dev; 477 + size_t len, chunk, vacant = 0, remaining = cmd_len; 478 + unsigned long timeout; 479 + u32 status; 480 + int rc; 481 + 482 + dev_vdbg(dev, "sending command (%zd words, cmd=%04x)\n", 483 + cmd_len, be32_to_cpu(command[1])); 484 + 485 + /* As long as there's something to send */ 486 + timeout = msecs_to_jiffies(SBEFIFO_TIMEOUT_START_CMD); 487 + while (remaining) { 488 + /* Wait for room in the FIFO */ 489 + rc = sbefifo_wait(sbefifo, true, &status, timeout); 490 + if (rc < 0) 491 + return rc; 492 + timeout = msecs_to_jiffies(SBEFIFO_TIMEOUT_IN_CMD); 493 + 494 + vacant = sbefifo_vacant(status); 495 + len = chunk = min(vacant, remaining); 496 + 497 + dev_vdbg(dev, " status=%08x vacant=%zd chunk=%zd\n", 498 + status, vacant, chunk); 499 + 500 + /* Write as much as we can */ 501 + while (len--) { 502 + rc = sbefifo_up_write(sbefifo, *(command++)); 503 + if (rc) { 504 + dev_err(dev, "FSI error %d writing UP FIFO\n", rc); 505 + return rc; 506 + } 507 + } 508 + remaining -= chunk; 509 + vacant -= chunk; 510 + } 511 + 512 + /* If 
there's no room left, wait for some to write EOT */ 513 + if (!vacant) { 514 + rc = sbefifo_wait(sbefifo, true, &status, timeout); 515 + if (rc) 516 + return rc; 517 + } 518 + 519 + /* Send an EOT */ 520 + rc = sbefifo_regw(sbefifo, SBEFIFO_UP | SBEFIFO_EOT_RAISE, 0); 521 + if (rc) 522 + dev_err(dev, "FSI error %d writing EOT\n", rc); 523 + return rc; 524 + } 525 + 526 + static int sbefifo_read_response(struct sbefifo *sbefifo, struct iov_iter *response) 527 + { 528 + struct device *dev = &sbefifo->fsi_dev->dev; 529 + u32 status, eot_set; 530 + unsigned long timeout; 531 + bool overflow = false; 532 + __be32 data; 533 + size_t len; 534 + int rc; 535 + 536 + dev_vdbg(dev, "reading response, buflen = %zd\n", iov_iter_count(response)); 537 + 538 + timeout = msecs_to_jiffies(SBEFIFO_TIMEOUT_START_RSP); 539 + for (;;) { 540 + /* Grab FIFO status (this will handle parity errors) */ 541 + rc = sbefifo_wait(sbefifo, false, &status, timeout); 542 + if (rc < 0) 543 + return rc; 544 + timeout = msecs_to_jiffies(SBEFIFO_TIMEOUT_IN_RSP); 545 + 546 + /* Decode status */ 547 + len = sbefifo_populated(status); 548 + eot_set = sbefifo_eot_set(status); 549 + 550 + dev_vdbg(dev, " chunk size %zd eot_set=0x%x\n", len, eot_set); 551 + 552 + /* Go through the chunk */ 553 + while(len--) { 554 + /* Read the data */ 555 + rc = sbefifo_down_read(sbefifo, &data); 556 + if (rc < 0) 557 + return rc; 558 + 559 + /* Was it an EOT ? */ 560 + if (eot_set & 0x80) { 561 + /* 562 + * There should be nothing else in the FIFO, 563 + * if there is, mark broken, this will force 564 + * a reset on next use, but don't fail the 565 + * command. 
566 + */ 567 + if (len) { 568 + dev_warn(dev, "FIFO read hit" 569 + " EOT with still %zd data\n", 570 + len); 571 + sbefifo->broken = true; 572 + } 573 + 574 + /* We are done */ 575 + rc = sbefifo_regw(sbefifo, 576 + SBEFIFO_DOWN | SBEFIFO_EOT_ACK, 0); 577 + 578 + /* 579 + * If that write fails, still complete the request but mark 580 + * the fifo as broken for subsequent reset (not much else 581 + * we can do here). 582 + */ 583 + if (rc) { 584 + dev_err(dev, "FSI error %d ack'ing EOT\n", rc); 585 + sbefifo->broken = true; 586 + } 587 + 588 + /* Tell whether we overflowed */ 589 + return overflow ? -EOVERFLOW : 0; 590 + } 591 + 592 + /* Store it if there is room */ 593 + if (iov_iter_count(response) >= sizeof(__be32)) { 594 + if (copy_to_iter(&data, sizeof(__be32), response) < sizeof(__be32)) 595 + return -EFAULT; 596 + } else { 597 + dev_vdbg(dev, "Response overflowed !\n"); 598 + 599 + overflow = true; 600 + } 601 + 602 + /* Next EOT bit */ 603 + eot_set <<= 1; 604 + } 605 + } 606 + /* Shouldn't happen */ 607 + return -EIO; 608 + } 609 + 610 + static int sbefifo_do_command(struct sbefifo *sbefifo, 611 + const __be32 *command, size_t cmd_len, 612 + struct iov_iter *response) 613 + { 614 + /* Try sending the command */ 615 + int rc = sbefifo_send_command(sbefifo, command, cmd_len); 616 + if (rc) 617 + return rc; 618 + 619 + /* Now, get the response */ 620 + return sbefifo_read_response(sbefifo, response); 621 + } 622 + 623 + static void sbefifo_collect_async_ffdc(struct sbefifo *sbefifo) 624 + { 625 + struct device *dev = &sbefifo->fsi_dev->dev; 626 + struct iov_iter ffdc_iter; 627 + struct kvec ffdc_iov; 628 + __be32 *ffdc; 629 + size_t ffdc_sz; 630 + __be32 cmd[2]; 631 + int rc; 632 + 633 + sbefifo->async_ffdc = false; 634 + ffdc = vmalloc(SBEFIFO_MAX_FFDC_SIZE); 635 + if (!ffdc) { 636 + dev_err(dev, "Failed to allocate SBE FFDC buffer\n"); 637 + return; 638 + } 639 + ffdc_iov.iov_base = ffdc; 640 + ffdc_iov.iov_len = SBEFIFO_MAX_FFDC_SIZE; 641 +
iov_iter_kvec(&ffdc_iter, WRITE | ITER_KVEC, &ffdc_iov, 1, SBEFIFO_MAX_FFDC_SIZE); 642 + cmd[0] = cpu_to_be32(2); 643 + cmd[1] = cpu_to_be32(SBEFIFO_CMD_GET_SBE_FFDC); 644 + rc = sbefifo_do_command(sbefifo, cmd, 2, &ffdc_iter); 645 + if (rc != 0) { 646 + dev_err(dev, "Error %d retrieving SBE FFDC\n", rc); 647 + goto bail; 648 + } 649 + ffdc_sz = SBEFIFO_MAX_FFDC_SIZE - iov_iter_count(&ffdc_iter); 650 + ffdc_sz /= sizeof(__be32); 651 + rc = sbefifo_parse_status(dev, SBEFIFO_CMD_GET_SBE_FFDC, ffdc, 652 + ffdc_sz, &ffdc_sz); 653 + if (rc != 0) { 654 + dev_err(dev, "Error %d decoding SBE FFDC\n", rc); 655 + goto bail; 656 + } 657 + if (ffdc_sz > 0) 658 + sbefifo_dump_ffdc(dev, ffdc, ffdc_sz, true); 659 + bail: 660 + vfree(ffdc); 661 + 662 + } 663 + 664 + static int __sbefifo_submit(struct sbefifo *sbefifo, 665 + const __be32 *command, size_t cmd_len, 666 + struct iov_iter *response) 667 + { 668 + struct device *dev = &sbefifo->fsi_dev->dev; 669 + int rc; 670 + 671 + if (sbefifo->dead) 672 + return -ENODEV; 673 + 674 + if (cmd_len < 2 || be32_to_cpu(command[0]) != cmd_len) { 675 + dev_vdbg(dev, "Invalid command len %zd (header: %d)\n", 676 + cmd_len, be32_to_cpu(command[0])); 677 + return -EINVAL; 678 + } 679 + 680 + /* First ensure the HW is in a clean state */ 681 + rc = sbefifo_cleanup_hw(sbefifo); 682 + if (rc) 683 + return rc; 684 + 685 + /* Look for async FFDC first if any */ 686 + if (sbefifo->async_ffdc) 687 + sbefifo_collect_async_ffdc(sbefifo); 688 + 689 + rc = sbefifo_do_command(sbefifo, command, cmd_len, response); 690 + if (rc != 0 && rc != -EOVERFLOW) 691 + goto fail; 692 + return rc; 693 + fail: 694 + /* 695 + * On failure, attempt a reset. 
Ignore the result, it will mark 696 + * the fifo broken if the reset fails 697 + */ 698 + sbefifo_request_reset(sbefifo); 699 + 700 + /* Return original error */ 701 + return rc; 702 + } 703 + 704 + /** 705 + * sbefifo_submit() - Submit an SBE fifo command and receive response 706 + * @dev: The sbefifo device 707 + * @command: The raw command data 708 + * @cmd_len: The command size (in 32-bit words) 709 + * @response: The output response buffer 710 + * @resp_len: In: Response buffer size, Out: Response size 711 + * 712 + * This will perform the entire operation. If the response buffer 713 + * overflows, returns -EOVERFLOW 714 + */ 715 + int sbefifo_submit(struct device *dev, const __be32 *command, size_t cmd_len, 716 + __be32 *response, size_t *resp_len) 717 + { 718 + struct sbefifo *sbefifo; 719 + struct iov_iter resp_iter; 720 + struct kvec resp_iov; 721 + size_t rbytes; 722 + int rc; 723 + 724 + if (!dev) 725 + return -ENODEV; 726 + sbefifo = dev_get_drvdata(dev); 727 + if (!sbefifo) 728 + return -ENODEV; 729 + if (WARN_ON_ONCE(sbefifo->magic != SBEFIFO_MAGIC)) 730 + return -ENODEV; 731 + if (!resp_len || !command || !response) 732 + return -EINVAL; 733 + 734 + /* Prepare iov iterator */ 735 + rbytes = (*resp_len) * sizeof(__be32); 736 + resp_iov.iov_base = response; 737 + resp_iov.iov_len = rbytes; 738 + iov_iter_kvec(&resp_iter, WRITE | ITER_KVEC, &resp_iov, 1, rbytes); 739 + 740 + /* Perform the command */ 741 + mutex_lock(&sbefifo->lock); 742 + rc = __sbefifo_submit(sbefifo, command, cmd_len, &resp_iter); 743 + mutex_unlock(&sbefifo->lock); 744 + 745 + /* Extract the response length */ 746 + rbytes -= iov_iter_count(&resp_iter); 747 + *resp_len = rbytes / sizeof(__be32); 748 + 749 + return rc; 750 + } 751 + EXPORT_SYMBOL_GPL(sbefifo_submit); 752 + 753 + /* 754 + * Char device interface 755 + */ 756 + 757 + static void sbefifo_release_command(struct sbefifo_user *user) 758 + { 759 + if (is_vmalloc_addr(user->pending_cmd)) 760 + vfree(user->pending_cmd); 761 +
user->pending_cmd = NULL; 762 + user->pending_len = 0; 763 + } 764 + 765 + static int sbefifo_user_open(struct inode *inode, struct file *file) 766 + { 767 + struct sbefifo *sbefifo = container_of(inode->i_cdev, struct sbefifo, cdev); 768 + struct sbefifo_user *user; 769 + 770 + user = kzalloc(sizeof(struct sbefifo_user), GFP_KERNEL); 771 + if (!user) 772 + return -ENOMEM; 773 + 774 + file->private_data = user; 775 + user->sbefifo = sbefifo; 776 + user->cmd_page = (void *)__get_free_page(GFP_KERNEL); 777 + if (!user->cmd_page) { 778 + kfree(user); 779 + return -ENOMEM; 780 + } 781 + mutex_init(&user->file_lock); 782 + 783 + return 0; 784 + } 785 + 786 + static ssize_t sbefifo_user_read(struct file *file, char __user *buf, 787 + size_t len, loff_t *offset) 788 + { 789 + struct sbefifo_user *user = file->private_data; 790 + struct sbefifo *sbefifo; 791 + struct iov_iter resp_iter; 792 + struct iovec resp_iov; 793 + size_t cmd_len; 794 + int rc; 795 + 796 + if (!user) 797 + return -EINVAL; 798 + sbefifo = user->sbefifo; 799 + if (len & 3) 800 + return -EINVAL; 801 + 802 + mutex_lock(&user->file_lock); 803 + 804 + /* Cronus relies on -EAGAIN after a short read */ 805 + if (user->pending_len == 0) { 806 + rc = -EAGAIN; 807 + goto bail; 808 + } 809 + if (user->pending_len < 8) { 810 + rc = -EINVAL; 811 + goto bail; 812 + } 813 + cmd_len = user->pending_len >> 2; 814 + 815 + /* Prepare iov iterator */ 816 + resp_iov.iov_base = buf; 817 + resp_iov.iov_len = len; 818 + iov_iter_init(&resp_iter, WRITE, &resp_iov, 1, len); 819 + 820 + /* Perform the command */ 821 + mutex_lock(&sbefifo->lock); 822 + rc = __sbefifo_submit(sbefifo, user->pending_cmd, cmd_len, &resp_iter); 823 + mutex_unlock(&sbefifo->lock); 824 + if (rc < 0) 825 + goto bail; 826 + 827 + /* Extract the response length */ 828 + rc = len - iov_iter_count(&resp_iter); 829 + bail: 830 + sbefifo_release_command(user); 831 + mutex_unlock(&user->file_lock); 832 + return rc; 833 + } 834 + 835 + static ssize_t 
sbefifo_user_write(struct file *file, const char __user *buf, 836 + size_t len, loff_t *offset) 837 + { 838 + struct sbefifo_user *user = file->private_data; 839 + struct sbefifo *sbefifo; 840 + int rc = len; 841 + 842 + if (!user) 843 + return -EINVAL; 844 + sbefifo = user->sbefifo; 845 + if (len > SBEFIFO_MAX_USER_CMD_LEN) 846 + return -EINVAL; 847 + if (len & 3) 848 + return -EINVAL; 849 + 850 + mutex_lock(&user->file_lock); 851 + 852 + /* Can we use the pre-allocated buffer? If not, allocate */ 853 + if (len <= PAGE_SIZE) 854 + user->pending_cmd = user->cmd_page; 855 + else 856 + user->pending_cmd = vmalloc(len); 857 + if (!user->pending_cmd) { 858 + rc = -ENOMEM; 859 + goto bail; 860 + } 861 + 862 + /* Copy the command into the staging buffer */ 863 + if (copy_from_user(user->pending_cmd, buf, len)) { 864 + rc = -EFAULT; 865 + goto bail; 866 + } 867 + 868 + /* Check for the magic reset command */ 869 + if (len == 4 && be32_to_cpu(*(__be32 *)user->pending_cmd) == 870 + SBEFIFO_RESET_MAGIC) { 871 + 872 + /* Clear out any pending command */ 873 + user->pending_len = 0; 874 + 875 + /* Trigger reset request */ 876 + mutex_lock(&sbefifo->lock); 877 + rc = sbefifo_request_reset(user->sbefifo); 878 + mutex_unlock(&sbefifo->lock); 879 + if (rc == 0) 880 + rc = 4; 881 + goto bail; 882 + } 883 + 884 + /* Update the staging buffer size */ 885 + user->pending_len = len; 886 + bail: 887 + if (!user->pending_len) 888 + sbefifo_release_command(user); 889 + 890 + mutex_unlock(&user->file_lock); 891 + 892 + /* And that's it, we'll issue the command on a read */ 893 + return rc; 894 + } 895 + 896 + static int sbefifo_user_release(struct inode *inode, struct file *file) 897 + { 898 + struct sbefifo_user *user = file->private_data; 899 + 900 + if (!user) 901 + return -EINVAL; 902 + 903 + sbefifo_release_command(user); 904 + free_page((unsigned long)user->cmd_page); 905 + kfree(user); 906 + 907 + return 0; 908 + } 909 + 910 + static const struct file_operations sbefifo_fops = { 911
+ .owner = THIS_MODULE, 912 + .open = sbefifo_user_open, 913 + .read = sbefifo_user_read, 914 + .write = sbefifo_user_write, 915 + .release = sbefifo_user_release, 916 + }; 917 + 918 + static void sbefifo_free(struct device *dev) 919 + { 920 + struct sbefifo *sbefifo = container_of(dev, struct sbefifo, dev); 921 + 922 + put_device(&sbefifo->fsi_dev->dev); 923 + kfree(sbefifo); 924 + } 925 + 926 + /* 927 + * Probe/remove 928 + */ 929 + 930 + static int sbefifo_probe(struct device *dev) 931 + { 932 + struct fsi_device *fsi_dev = to_fsi_dev(dev); 933 + struct sbefifo *sbefifo; 934 + struct device_node *np; 935 + struct platform_device *child; 936 + char child_name[32]; 937 + int rc, didx, child_idx = 0; 938 + 939 + dev_dbg(dev, "Found sbefifo device\n"); 940 + 941 + sbefifo = kzalloc(sizeof(*sbefifo), GFP_KERNEL); 942 + if (!sbefifo) 943 + return -ENOMEM; 944 + 945 + /* Grab a reference to the device (parent of our cdev), we'll drop it later */ 946 + if (!get_device(dev)) { 947 + kfree(sbefifo); 948 + return -ENODEV; 949 + } 950 + 951 + sbefifo->magic = SBEFIFO_MAGIC; 952 + sbefifo->fsi_dev = fsi_dev; 953 + dev_set_drvdata(dev, sbefifo); 954 + mutex_init(&sbefifo->lock); 955 + 956 + /* 957 + * Try cleaning up the FIFO. If this fails, we still register the 958 + * driver and will try cleaning things up again on the next access. 
959 + */ 960 + rc = sbefifo_cleanup_hw(sbefifo); 961 + if (rc && rc != -ESHUTDOWN) 962 + dev_err(dev, "Initial HW cleanup failed, will retry later\n"); 963 + 964 + /* Create chardev for userspace access */ 965 + sbefifo->dev.type = &fsi_cdev_type; 966 + sbefifo->dev.parent = dev; 967 + sbefifo->dev.release = sbefifo_free; 968 + device_initialize(&sbefifo->dev); 969 + 970 + /* Allocate a minor in the FSI space */ 971 + rc = fsi_get_new_minor(fsi_dev, fsi_dev_sbefifo, &sbefifo->dev.devt, &didx); 972 + if (rc) 973 + goto err; 974 + 975 + dev_set_name(&sbefifo->dev, "sbefifo%d", didx); 976 + cdev_init(&sbefifo->cdev, &sbefifo_fops); 977 + rc = cdev_device_add(&sbefifo->cdev, &sbefifo->dev); 978 + if (rc) { 979 + dev_err(dev, "Error %d creating char device %s\n", 980 + rc, dev_name(&sbefifo->dev)); 981 + goto err_free_minor; 982 + } 983 + 984 + /* Create platform devs for dts child nodes (occ, etc) */ 985 + for_each_available_child_of_node(dev->of_node, np) { 986 + snprintf(child_name, sizeof(child_name), "%s-dev%d", 987 + dev_name(&sbefifo->dev), child_idx++); 988 + child = of_platform_device_create(np, child_name, dev); 989 + if (!child) 990 + dev_warn(dev, "failed to create child %s dev\n", 991 + child_name); 992 + } 993 + 994 + return 0; 995 + err_free_minor: 996 + fsi_free_minor(sbefifo->dev.devt); 997 + err: 998 + put_device(&sbefifo->dev); 999 + return rc; 1000 + } 1001 + 1002 + static int sbefifo_unregister_child(struct device *dev, void *data) 1003 + { 1004 + struct platform_device *child = to_platform_device(dev); 1005 + 1006 + of_device_unregister(child); 1007 + if (dev->of_node) 1008 + of_node_clear_flag(dev->of_node, OF_POPULATED); 1009 + 1010 + return 0; 1011 + } 1012 + 1013 + static int sbefifo_remove(struct device *dev) 1014 + { 1015 + struct sbefifo *sbefifo = dev_get_drvdata(dev); 1016 + 1017 + dev_dbg(dev, "Removing sbefifo device...\n"); 1018 + 1019 + mutex_lock(&sbefifo->lock); 1020 + sbefifo->dead = true; 1021 + mutex_unlock(&sbefifo->lock); 1022 + 
1023 + cdev_device_del(&sbefifo->cdev, &sbefifo->dev); 1024 + fsi_free_minor(sbefifo->dev.devt); 1025 + device_for_each_child(dev, NULL, sbefifo_unregister_child); 1026 + put_device(&sbefifo->dev); 1027 + 1028 + return 0; 1029 + } 1030 + 1031 + static struct fsi_device_id sbefifo_ids[] = { 1032 + { 1033 + .engine_type = FSI_ENGID_SBE, 1034 + .version = FSI_VERSION_ANY, 1035 + }, 1036 + { 0 } 1037 + }; 1038 + 1039 + static struct fsi_driver sbefifo_drv = { 1040 + .id_table = sbefifo_ids, 1041 + .drv = { 1042 + .name = DEVICE_NAME, 1043 + .bus = &fsi_bus_type, 1044 + .probe = sbefifo_probe, 1045 + .remove = sbefifo_remove, 1046 + } 1047 + }; 1048 + 1049 + static int sbefifo_init(void) 1050 + { 1051 + return fsi_driver_register(&sbefifo_drv); 1052 + } 1053 + 1054 + static void sbefifo_exit(void) 1055 + { 1056 + fsi_driver_unregister(&sbefifo_drv); 1057 + } 1058 + 1059 + module_init(sbefifo_init); 1060 + module_exit(sbefifo_exit); 1061 + MODULE_LICENSE("GPL"); 1062 + MODULE_AUTHOR("Brad Bishop <bradleyb@fuzziesquirrel.com>"); 1063 + MODULE_AUTHOR("Eddie James <eajames@linux.vnet.ibm.com>"); 1064 + MODULE_AUTHOR("Andrew Jeffery <andrew@aj.id.au>"); 1065 + MODULE_AUTHOR("Benjamin Herrenschmidt <benh@kernel.crashing.org>"); 1066 + MODULE_DESCRIPTION("Linux device interface to the POWER Self Boot Engine");
drivers/fsi/fsi-scom.c
··· 20 20 #include <linux/fs.h> 21 21 #include <linux/uaccess.h> 22 22 #include <linux/slab.h> 23 - #include <linux/miscdevice.h> 23 + #include <linux/cdev.h> 24 24 #include <linux/list.h> 25 - #include <linux/idr.h> 25 + 26 + #include <uapi/linux/fsi.h> 26 27 27 28 #define FSI_ENGID_SCOM 0x5 28 - 29 - #define SCOM_FSI2PIB_DELAY 50 30 29 31 30 /* SCOM engine register set */ 32 31 #define SCOM_DATA0_REG 0x00 33 32 #define SCOM_DATA1_REG 0x04 34 33 #define SCOM_CMD_REG 0x08 35 - #define SCOM_RESET_REG 0x1C 34 + #define SCOM_FSI2PIB_RESET_REG 0x18 35 + #define SCOM_STATUS_REG 0x1C /* Read */ 36 + #define SCOM_PIB_RESET_REG 0x1C /* Write */ 36 37 37 - #define SCOM_RESET_CMD 0x80000000 38 + /* Command register */ 38 39 #define SCOM_WRITE_CMD 0x80000000 40 + #define SCOM_READ_CMD 0x00000000 41 + 42 + /* Status register bits */ 43 + #define SCOM_STATUS_ERR_SUMMARY 0x80000000 44 + #define SCOM_STATUS_PROTECTION 0x01000000 45 + #define SCOM_STATUS_PARITY 0x04000000 46 + #define SCOM_STATUS_PIB_ABORT 0x00100000 47 + #define SCOM_STATUS_PIB_RESP_MASK 0x00007000 48 + #define SCOM_STATUS_PIB_RESP_SHIFT 12 49 + 50 + #define SCOM_STATUS_ANY_ERR (SCOM_STATUS_ERR_SUMMARY | \ 51 + SCOM_STATUS_PROTECTION | \ 52 + SCOM_STATUS_PARITY | \ 53 + SCOM_STATUS_PIB_ABORT | \ 54 + SCOM_STATUS_PIB_RESP_MASK) 55 + /* SCOM address encodings */ 56 + #define XSCOM_ADDR_IND_FLAG BIT_ULL(63) 57 + #define XSCOM_ADDR_INF_FORM1 BIT_ULL(60) 58 + 59 + /* SCOM indirect stuff */ 60 + #define XSCOM_ADDR_DIRECT_PART 0x7fffffffull 61 + #define XSCOM_ADDR_INDIRECT_PART 0x000fffff00000000ull 62 + #define XSCOM_DATA_IND_READ BIT_ULL(63) 63 + #define XSCOM_DATA_IND_COMPLETE BIT_ULL(31) 64 + #define XSCOM_DATA_IND_ERR_MASK 0x70000000ull 65 + #define XSCOM_DATA_IND_ERR_SHIFT 28 66 + #define XSCOM_DATA_IND_DATA 0x0000ffffull 67 + #define XSCOM_DATA_IND_FORM1_DATA 0x000fffffffffffffull 68 + #define XSCOM_ADDR_FORM1_LOW 0x000ffffffffull 69 + #define XSCOM_ADDR_FORM1_HI 0xfff00000000ull 70 + #define 
XSCOM_ADDR_FORM1_HI_SHIFT 20 71 + 72 + /* Retries */ 73 + #define SCOM_MAX_RETRIES 100 /* Retries on busy */ 74 + #define SCOM_MAX_IND_RETRIES 10 /* Retries indirect not ready */ 39 75 40 76 struct scom_device { 41 77 struct list_head link; 42 78 struct fsi_device *fsi_dev; 43 - struct miscdevice mdev; 44 - char name[32]; 45 - int idx; 79 + struct device dev; 80 + struct cdev cdev; 81 + struct mutex lock; 82 + bool dead; 46 83 }; 47 84 48 - #define to_scom_dev(x) container_of((x), struct scom_device, mdev) 49 - 50 - static struct list_head scom_devices; 51 - 52 - static DEFINE_IDA(scom_ida); 53 - 54 - static int put_scom(struct scom_device *scom_dev, uint64_t value, 55 - uint32_t addr) 85 + static int __put_scom(struct scom_device *scom_dev, uint64_t value, 86 + uint32_t addr, uint32_t *status) 56 87 { 88 + __be32 data, raw_status; 57 89 int rc; 58 - uint32_t data; 59 90 60 91 data = cpu_to_be32((value >> 32) & 0xffffffff); 61 92 rc = fsi_device_write(scom_dev->fsi_dev, SCOM_DATA0_REG, &data, ··· 101 70 return rc; 102 71 103 72 data = cpu_to_be32(SCOM_WRITE_CMD | addr); 104 - return fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data, 105 - sizeof(uint32_t)); 106 - } 107 - 108 - static int get_scom(struct scom_device *scom_dev, uint64_t *value, 109 - uint32_t addr) 110 - { 111 - uint32_t result, data; 112 - int rc; 113 - 114 - *value = 0ULL; 115 - data = cpu_to_be32(addr); 116 73 rc = fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data, 117 74 sizeof(uint32_t)); 118 75 if (rc) 119 76 return rc; 120 - 121 - rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA0_REG, &result, 122 - sizeof(uint32_t)); 77 + rc = fsi_device_read(scom_dev->fsi_dev, SCOM_STATUS_REG, &raw_status, 78 + sizeof(uint32_t)); 123 79 if (rc) 124 80 return rc; 125 - 126 - *value |= (uint64_t)cpu_to_be32(result) << 32; 127 - rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA1_REG, &result, 128 - sizeof(uint32_t)); 129 - if (rc) 130 - return rc; 131 - 132 - *value |= cpu_to_be32(result); 81 + 
*status = be32_to_cpu(raw_status); 133 82 134 83 return 0; 135 84 } 136 85 137 - static ssize_t scom_read(struct file *filep, char __user *buf, size_t len, 138 - loff_t *offset) 86 + static int __get_scom(struct scom_device *scom_dev, uint64_t *value, 87 + uint32_t addr, uint32_t *status) 139 88 { 89 + __be32 data, raw_status; 140 90 int rc; 141 - struct miscdevice *mdev = 142 - (struct miscdevice *)filep->private_data; 143 - struct scom_device *scom = to_scom_dev(mdev); 91 + 92 + 93 + *value = 0ULL; 94 + data = cpu_to_be32(SCOM_READ_CMD | addr); 95 + rc = fsi_device_write(scom_dev->fsi_dev, SCOM_CMD_REG, &data, 96 + sizeof(uint32_t)); 97 + if (rc) 98 + return rc; 99 + rc = fsi_device_read(scom_dev->fsi_dev, SCOM_STATUS_REG, &raw_status, 100 + sizeof(uint32_t)); 101 + if (rc) 102 + return rc; 103 + 104 + /* 105 + * Read the data registers even on error, so we don't have 106 + * to interpret the status register here. 107 + */ 108 + rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA0_REG, &data, 109 + sizeof(uint32_t)); 110 + if (rc) 111 + return rc; 112 + *value |= (uint64_t)be32_to_cpu(data) << 32; 113 + rc = fsi_device_read(scom_dev->fsi_dev, SCOM_DATA1_REG, &data, 114 + sizeof(uint32_t)); 115 + if (rc) 116 + return rc; 117 + *value |= be32_to_cpu(data); 118 + *status = be32_to_cpu(raw_status); 119 + 120 + return rc; 121 + } 122 + 123 + static int put_indirect_scom_form0(struct scom_device *scom, uint64_t value, 124 + uint64_t addr, uint32_t *status) 125 + { 126 + uint64_t ind_data, ind_addr; 127 + int rc, retries, err = 0; 128 + 129 + if (value & ~XSCOM_DATA_IND_DATA) 130 + return -EINVAL; 131 + 132 + ind_addr = addr & XSCOM_ADDR_DIRECT_PART; 133 + ind_data = (addr & XSCOM_ADDR_INDIRECT_PART) | value; 134 + rc = __put_scom(scom, ind_data, ind_addr, status); 135 + if (rc || (*status & SCOM_STATUS_ANY_ERR)) 136 + return rc; 137 + 138 + for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) { 139 + rc = __get_scom(scom, &ind_data, addr, status); 140 + if (rc || 
(*status & SCOM_STATUS_ANY_ERR)) 141 + return rc; 142 + 143 + err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT; 144 + *status = err << SCOM_STATUS_PIB_RESP_SHIFT; 145 + if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED)) 146 + return 0; 147 + 148 + msleep(1); 149 + } 150 + return rc; 151 + } 152 + 153 + static int put_indirect_scom_form1(struct scom_device *scom, uint64_t value, 154 + uint64_t addr, uint32_t *status) 155 + { 156 + uint64_t ind_data, ind_addr; 157 + 158 + if (value & ~XSCOM_DATA_IND_FORM1_DATA) 159 + return -EINVAL; 160 + 161 + ind_addr = addr & XSCOM_ADDR_FORM1_LOW; 162 + ind_data = value | (addr & XSCOM_ADDR_FORM1_HI) << XSCOM_ADDR_FORM1_HI_SHIFT; 163 + return __put_scom(scom, ind_data, ind_addr, status); 164 + } 165 + 166 + static int get_indirect_scom_form0(struct scom_device *scom, uint64_t *value, 167 + uint64_t addr, uint32_t *status) 168 + { 169 + uint64_t ind_data, ind_addr; 170 + int rc, retries, err = 0; 171 + 172 + ind_addr = addr & XSCOM_ADDR_DIRECT_PART; 173 + ind_data = (addr & XSCOM_ADDR_INDIRECT_PART) | XSCOM_DATA_IND_READ; 174 + rc = __put_scom(scom, ind_data, ind_addr, status); 175 + if (rc || (*status & SCOM_STATUS_ANY_ERR)) 176 + return rc; 177 + 178 + for (retries = 0; retries < SCOM_MAX_IND_RETRIES; retries++) { 179 + rc = __get_scom(scom, &ind_data, addr, status); 180 + if (rc || (*status & SCOM_STATUS_ANY_ERR)) 181 + return rc; 182 + 183 + err = (ind_data & XSCOM_DATA_IND_ERR_MASK) >> XSCOM_DATA_IND_ERR_SHIFT; 184 + *status = err << SCOM_STATUS_PIB_RESP_SHIFT; 185 + *value = ind_data & XSCOM_DATA_IND_DATA; 186 + 187 + if ((ind_data & XSCOM_DATA_IND_COMPLETE) || (err != SCOM_PIB_BLOCKED)) 188 + return 0; 189 + 190 + msleep(1); 191 + } 192 + return rc; 193 + } 194 + 195 + static int raw_put_scom(struct scom_device *scom, uint64_t value, 196 + uint64_t addr, uint32_t *status) 197 + { 198 + if (addr & XSCOM_ADDR_IND_FLAG) { 199 + if (addr & XSCOM_ADDR_INF_FORM1) 200 + return 
put_indirect_scom_form1(scom, value, addr, status); 201 + else 202 + return put_indirect_scom_form0(scom, value, addr, status); 203 + } else 204 + return __put_scom(scom, value, addr, status); 205 + } 206 + 207 + static int raw_get_scom(struct scom_device *scom, uint64_t *value, 208 + uint64_t addr, uint32_t *status) 209 + { 210 + if (addr & XSCOM_ADDR_IND_FLAG) { 211 + if (addr & XSCOM_ADDR_INF_FORM1) 212 + return -ENXIO; 213 + return get_indirect_scom_form0(scom, value, addr, status); 214 + } else 215 + return __get_scom(scom, value, addr, status); 216 + } 217 + 218 + static int handle_fsi2pib_status(struct scom_device *scom, uint32_t status) 219 + { 220 + uint32_t dummy = -1; 221 + 222 + if (status & SCOM_STATUS_PROTECTION) 223 + return -EPERM; 224 + if (status & SCOM_STATUS_PARITY) { 225 + fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy, 226 + sizeof(uint32_t)); 227 + return -EIO; 228 + } 229 + /* Return -EBUSY on PIB abort to force a retry */ 230 + if (status & SCOM_STATUS_PIB_ABORT) 231 + return -EBUSY; 232 + if (status & SCOM_STATUS_ERR_SUMMARY) { 233 + fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy, 234 + sizeof(uint32_t)); 235 + return -EIO; 236 + } 237 + return 0; 238 + } 239 + 240 + static int handle_pib_status(struct scom_device *scom, uint8_t status) 241 + { 242 + uint32_t dummy = -1; 243 + 244 + if (status == SCOM_PIB_SUCCESS) 245 + return 0; 246 + if (status == SCOM_PIB_BLOCKED) 247 + return -EBUSY; 248 + 249 + /* Reset the bridge */ 250 + fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy, 251 + sizeof(uint32_t)); 252 + 253 + switch(status) { 254 + case SCOM_PIB_OFFLINE: 255 + return -ENODEV; 256 + case SCOM_PIB_BAD_ADDR: 257 + return -ENXIO; 258 + case SCOM_PIB_TIMEOUT: 259 + return -ETIMEDOUT; 260 + case SCOM_PIB_PARTIAL: 261 + case SCOM_PIB_CLK_ERR: 262 + case SCOM_PIB_PARITY_ERR: 263 + default: 264 + return -EIO; 265 + } 266 + } 267 + 268 + static int put_scom(struct scom_device *scom, uint64_t value, 
269 + uint64_t addr) 270 + { 271 + uint32_t status, dummy = -1; 272 + int rc, retries; 273 + 274 + for (retries = 0; retries < SCOM_MAX_RETRIES; retries++) { 275 + rc = raw_put_scom(scom, value, addr, &status); 276 + if (rc) { 277 + /* Try resetting the bridge if FSI fails */ 278 + if (rc != -ENODEV && retries == 0) { 279 + fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, 280 + &dummy, sizeof(uint32_t)); 281 + rc = -EBUSY; 282 + } else 283 + return rc; 284 + } else 285 + rc = handle_fsi2pib_status(scom, status); 286 + if (rc && rc != -EBUSY) 287 + break; 288 + if (rc == 0) { 289 + rc = handle_pib_status(scom, 290 + (status & SCOM_STATUS_PIB_RESP_MASK) 291 + >> SCOM_STATUS_PIB_RESP_SHIFT); 292 + if (rc && rc != -EBUSY) 293 + break; 294 + } 295 + if (rc == 0) 296 + break; 297 + msleep(1); 298 + } 299 + return rc; 300 + } 301 + 302 + static int get_scom(struct scom_device *scom, uint64_t *value, 303 + uint64_t addr) 304 + { 305 + uint32_t status, dummy = -1; 306 + int rc, retries; 307 + 308 + for (retries = 0; retries < SCOM_MAX_RETRIES; retries++) { 309 + rc = raw_get_scom(scom, value, addr, &status); 310 + if (rc) { 311 + /* Try resetting the bridge if FSI fails */ 312 + if (rc != -ENODEV && retries == 0) { 313 + fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, 314 + &dummy, sizeof(uint32_t)); 315 + rc = -EBUSY; 316 + } else 317 + return rc; 318 + } else 319 + rc = handle_fsi2pib_status(scom, status); 320 + if (rc && rc != -EBUSY) 321 + break; 322 + if (rc == 0) { 323 + rc = handle_pib_status(scom, 324 + (status & SCOM_STATUS_PIB_RESP_MASK) 325 + >> SCOM_STATUS_PIB_RESP_SHIFT); 326 + if (rc && rc != -EBUSY) 327 + break; 328 + } 329 + if (rc == 0) 330 + break; 331 + msleep(1); 332 + } 333 + return rc; 334 + } 335 + 336 + static ssize_t scom_read(struct file *filep, char __user *buf, size_t len, 337 + loff_t *offset) 338 + { 339 + struct scom_device *scom = filep->private_data; 144 340 struct device *dev = &scom->fsi_dev->dev; 145 341 uint64_t val; 342 + 
int rc; 146 343 147 344 if (len != sizeof(uint64_t)) 148 345 return -EINVAL; 149 346 150 - rc = get_scom(scom, &val, *offset); 347 + mutex_lock(&scom->lock); 348 + if (scom->dead) 349 + rc = -ENODEV; 350 + else 351 + rc = get_scom(scom, &val, *offset); 352 + mutex_unlock(&scom->lock); 151 353 if (rc) { 152 354 dev_dbg(dev, "get_scom fail:%d\n", rc); 153 355 return rc; ··· 394 130 } 395 131 396 132 static ssize_t scom_write(struct file *filep, const char __user *buf, 397 - size_t len, loff_t *offset) 133 + size_t len, loff_t *offset) 398 134 { 399 135 int rc; 400 - struct miscdevice *mdev = filep->private_data; 401 - struct scom_device *scom = to_scom_dev(mdev); 136 + struct scom_device *scom = filep->private_data; 402 137 struct device *dev = &scom->fsi_dev->dev; 403 138 uint64_t val; 404 139 ··· 410 147 return -EINVAL; 411 148 } 412 149 413 - rc = put_scom(scom, val, *offset); 150 + mutex_lock(&scom->lock); 151 + if (scom->dead) 152 + rc = -ENODEV; 153 + else 154 + rc = put_scom(scom, val, *offset); 155 + mutex_unlock(&scom->lock); 414 156 if (rc) { 415 157 dev_dbg(dev, "put_scom failed with:%d\n", rc); 416 158 return rc; ··· 439 171 return offset; 440 172 } 441 173 174 + static void raw_convert_status(struct scom_access *acc, uint32_t status) 175 + { 176 + acc->pib_status = (status & SCOM_STATUS_PIB_RESP_MASK) >> 177 + SCOM_STATUS_PIB_RESP_SHIFT; 178 + acc->intf_errors = 0; 179 + 180 + if (status & SCOM_STATUS_PROTECTION) 181 + acc->intf_errors |= SCOM_INTF_ERR_PROTECTION; 182 + else if (status & SCOM_STATUS_PARITY) 183 + acc->intf_errors |= SCOM_INTF_ERR_PARITY; 184 + else if (status & SCOM_STATUS_PIB_ABORT) 185 + acc->intf_errors |= SCOM_INTF_ERR_ABORT; 186 + else if (status & SCOM_STATUS_ERR_SUMMARY) 187 + acc->intf_errors |= SCOM_INTF_ERR_UNKNOWN; 188 + } 189 + 190 + static int scom_raw_read(struct scom_device *scom, void __user *argp) 191 + { 192 + struct scom_access acc; 193 + uint32_t status; 194 + int rc; 195 + 196 + if (copy_from_user(&acc, argp, 
sizeof(struct scom_access))) 197 + return -EFAULT; 198 + 199 + rc = raw_get_scom(scom, &acc.data, acc.addr, &status); 200 + if (rc) 201 + return rc; 202 + raw_convert_status(&acc, status); 203 + if (copy_to_user(argp, &acc, sizeof(struct scom_access))) 204 + return -EFAULT; 205 + return 0; 206 + } 207 + 208 + static int scom_raw_write(struct scom_device *scom, void __user *argp) 209 + { 210 + u64 prev_data, mask, data; 211 + struct scom_access acc; 212 + uint32_t status; 213 + int rc; 214 + 215 + if (copy_from_user(&acc, argp, sizeof(struct scom_access))) 216 + return -EFAULT; 217 + 218 + if (acc.mask) { 219 + rc = raw_get_scom(scom, &prev_data, acc.addr, &status); 220 + if (rc) 221 + return rc; 222 + if (status & SCOM_STATUS_ANY_ERR) 223 + goto fail; 224 + mask = acc.mask; 225 + } else { 226 + prev_data = mask = -1ull; 227 + } 228 + data = (prev_data & ~mask) | (acc.data & mask); 229 + rc = raw_put_scom(scom, data, acc.addr, &status); 230 + if (rc) 231 + return rc; 232 + fail: 233 + raw_convert_status(&acc, status); 234 + if (copy_to_user(argp, &acc, sizeof(struct scom_access))) 235 + return -EFAULT; 236 + return 0; 237 + } 238 + 239 + static int scom_reset(struct scom_device *scom, void __user *argp) 240 + { 241 + uint32_t flags, dummy = -1; 242 + int rc = 0; 243 + 244 + if (get_user(flags, (__u32 __user *)argp)) 245 + return -EFAULT; 246 + if (flags & SCOM_RESET_PIB) 247 + rc = fsi_device_write(scom->fsi_dev, SCOM_PIB_RESET_REG, &dummy, 248 + sizeof(uint32_t)); 249 + if (!rc && (flags & (SCOM_RESET_PIB | SCOM_RESET_INTF))) 250 + rc = fsi_device_write(scom->fsi_dev, SCOM_FSI2PIB_RESET_REG, &dummy, 251 + sizeof(uint32_t)); 252 + return rc; 253 + } 254 + 255 + static int scom_check(struct scom_device *scom, void __user *argp) 256 + { 257 + /* Still need to find out how to get "protected" */ 258 + return put_user(SCOM_CHECK_SUPPORTED, (__u32 __user *)argp); 259 + } 260 + 261 + static long scom_ioctl(struct file *file, unsigned int cmd, unsigned long arg) 262 + { 263 
+ struct scom_device *scom = file->private_data; 264 + void __user *argp = (void __user *)arg; 265 + int rc = -ENOTTY; 266 + 267 + mutex_lock(&scom->lock); 268 + if (scom->dead) { 269 + mutex_unlock(&scom->lock); 270 + return -ENODEV; 271 + } 272 + switch(cmd) { 273 + case FSI_SCOM_CHECK: 274 + rc = scom_check(scom, argp); 275 + break; 276 + case FSI_SCOM_READ: 277 + rc = scom_raw_read(scom, argp); 278 + break; 279 + case FSI_SCOM_WRITE: 280 + rc = scom_raw_write(scom, argp); 281 + break; 282 + case FSI_SCOM_RESET: 283 + rc = scom_reset(scom, argp); 284 + break; 285 + } 286 + mutex_unlock(&scom->lock); 287 + return rc; 288 + } 289 + 290 + static int scom_open(struct inode *inode, struct file *file) 291 + { 292 + struct scom_device *scom = container_of(inode->i_cdev, struct scom_device, cdev); 293 + 294 + file->private_data = scom; 295 + 296 + return 0; 297 + } 298 + 442 299 static const struct file_operations scom_fops = { 443 - .owner = THIS_MODULE, 444 - .llseek = scom_llseek, 445 - .read = scom_read, 446 - .write = scom_write, 300 + .owner = THIS_MODULE, 301 + .open = scom_open, 302 + .llseek = scom_llseek, 303 + .read = scom_read, 304 + .write = scom_write, 305 + .unlocked_ioctl = scom_ioctl, 447 306 }; 307 + 308 + static void scom_free(struct device *dev) 309 + { 310 + struct scom_device *scom = container_of(dev, struct scom_device, dev); 311 + 312 + put_device(&scom->fsi_dev->dev); 313 + kfree(scom); 314 + } 448 315 449 316 static int scom_probe(struct device *dev) 450 317 { 451 - uint32_t data; 452 318 struct fsi_device *fsi_dev = to_fsi_dev(dev); 453 319 struct scom_device *scom; 320 + int rc, didx; 454 321 455 - scom = devm_kzalloc(dev, sizeof(*scom), GFP_KERNEL); 322 + scom = kzalloc(sizeof(*scom), GFP_KERNEL); 456 323 if (!scom) 457 324 return -ENOMEM; 325 + dev_set_drvdata(dev, scom); 326 + mutex_init(&scom->lock); 458 327 459 - scom->idx = ida_simple_get(&scom_ida, 1, INT_MAX, GFP_KERNEL); 460 - snprintf(scom->name, sizeof(scom->name), "scom%d", 
scom->idx); 328 + /* Grab a reference to the device (parent of our cdev), we'll drop it later */ 329 + if (!get_device(dev)) { 330 + kfree(scom); 331 + return -ENODEV; 332 + } 461 333 scom->fsi_dev = fsi_dev; 462 - scom->mdev.minor = MISC_DYNAMIC_MINOR; 463 - scom->mdev.fops = &scom_fops; 464 - scom->mdev.name = scom->name; 465 - scom->mdev.parent = dev; 466 - list_add(&scom->link, &scom_devices); 467 334 468 - data = cpu_to_be32(SCOM_RESET_CMD); 469 - fsi_device_write(fsi_dev, SCOM_RESET_REG, &data, sizeof(uint32_t)); 335 + /* Create chardev for userspace access */ 336 + scom->dev.type = &fsi_cdev_type; 337 + scom->dev.parent = dev; 338 + scom->dev.release = scom_free; 339 + device_initialize(&scom->dev); 470 340 471 - return misc_register(&scom->mdev); 341 + /* Allocate a minor in the FSI space */ 342 + rc = fsi_get_new_minor(fsi_dev, fsi_dev_scom, &scom->dev.devt, &didx); 343 + if (rc) 344 + goto err; 345 + 346 + dev_set_name(&scom->dev, "scom%d", didx); 347 + cdev_init(&scom->cdev, &scom_fops); 348 + rc = cdev_device_add(&scom->cdev, &scom->dev); 349 + if (rc) { 350 + dev_err(dev, "Error %d creating char device %s\n", 351 + rc, dev_name(&scom->dev)); 352 + goto err_free_minor; 353 + } 354 + 355 + return 0; 356 + err_free_minor: 357 + fsi_free_minor(scom->dev.devt); 358 + err: 359 + put_device(&scom->dev); 360 + return rc; 472 361 } 473 362 474 363 static int scom_remove(struct device *dev) 475 364 { 476 - struct scom_device *scom, *scom_tmp; 477 - struct fsi_device *fsi_dev = to_fsi_dev(dev); 365 + struct scom_device *scom = dev_get_drvdata(dev); 478 366 479 - list_for_each_entry_safe(scom, scom_tmp, &scom_devices, link) { 480 - if (scom->fsi_dev == fsi_dev) { 481 - list_del(&scom->link); 482 - ida_simple_remove(&scom_ida, scom->idx); 483 - misc_deregister(&scom->mdev); 484 - } 485 - } 367 + mutex_lock(&scom->lock); 368 + scom->dead = true; 369 + mutex_unlock(&scom->lock); 370 + cdev_device_del(&scom->cdev, &scom->dev); 371 + fsi_free_minor(scom->dev.devt); 372 
+ put_device(&scom->dev); 486 373 487 374 return 0; 488 375 } ··· 662 239 663 240 static int scom_init(void) 664 241 { 665 - INIT_LIST_HEAD(&scom_devices); 666 242 return fsi_driver_register(&scom_drv); 667 243 } 668 244 669 245 static void scom_exit(void) 670 246 { 671 - struct list_head *pos; 672 - struct scom_device *scom; 673 - 674 - list_for_each(pos, &scom_devices) { 675 - scom = list_entry(pos, struct scom_device, link); 676 - misc_deregister(&scom->mdev); 677 - devm_kfree(&scom->fsi_dev->dev, scom); 678 - } 679 247 fsi_driver_unregister(&scom_drv); 680 248 } 681 249
+43
drivers/gnss/Kconfig
··· 1 + # 2 + # GNSS receiver configuration 3 + # 4 + 5 + menuconfig GNSS 6 + tristate "GNSS receiver support" 7 + ---help--- 8 + Say Y here if you have a GNSS receiver (e.g. a GPS receiver). 9 + 10 + To compile this driver as a module, choose M here: the module will 11 + be called gnss. 12 + 13 + if GNSS 14 + 15 + config GNSS_SERIAL 16 + tristate 17 + 18 + config GNSS_SIRF_SERIAL 19 + tristate "SiRFstar GNSS receiver support" 20 + depends on SERIAL_DEV_BUS 21 + ---help--- 22 + Say Y here if you have a SiRFstar-based GNSS receiver which uses a 23 + serial interface. 24 + 25 + To compile this driver as a module, choose M here: the module will 26 + be called gnss-sirf. 27 + 28 + If unsure, say N. 29 + 30 + config GNSS_UBX_SERIAL 31 + tristate "u-blox GNSS receiver support" 32 + depends on SERIAL_DEV_BUS 33 + select GNSS_SERIAL 34 + ---help--- 35 + Say Y here if you have a u-blox GNSS receiver which uses a serial 36 + interface. 37 + 38 + To compile this driver as a module, choose M here: the module will 39 + be called gnss-ubx. 40 + 41 + If unsure, say N. 42 + 43 + endif # GNSS
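As an illustration of how these options compose, a kernel `.config` fragment might enable the whole stack as modules. Note that `GNSS_SERIAL` is a hidden tristate pulled in via `select` by `GNSS_UBX_SERIAL`, so it is normally not set by hand (fragment is illustrative, not from the patch):

```
CONFIG_GNSS=m
CONFIG_GNSS_SIRF_SERIAL=m
CONFIG_GNSS_UBX_SERIAL=m
# selected automatically by GNSS_UBX_SERIAL:
CONFIG_GNSS_SERIAL=m
```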
+16
drivers/gnss/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0 2 + # 3 + # Makefile for the GNSS subsystem. 4 + # 5 + 6 + obj-$(CONFIG_GNSS) += gnss.o 7 + gnss-y := core.o 8 + 9 + obj-$(CONFIG_GNSS_SERIAL) += gnss-serial.o 10 + gnss-serial-y := serial.o 11 + 12 + obj-$(CONFIG_GNSS_SIRF_SERIAL) += gnss-sirf.o 13 + gnss-sirf-y := sirf.o 14 + 15 + obj-$(CONFIG_GNSS_UBX_SERIAL) += gnss-ubx.o 16 + gnss-ubx-y := ubx.o
+420
drivers/gnss/core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * GNSS receiver core 4 + * 5 + * Copyright (C) 2018 Johan Hovold <johan@kernel.org> 6 + */ 7 + 8 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 9 + 10 + #include <linux/cdev.h> 11 + #include <linux/errno.h> 12 + #include <linux/fs.h> 13 + #include <linux/gnss.h> 14 + #include <linux/idr.h> 15 + #include <linux/init.h> 16 + #include <linux/kernel.h> 17 + #include <linux/module.h> 18 + #include <linux/poll.h> 19 + #include <linux/slab.h> 20 + #include <linux/uaccess.h> 21 + #include <linux/wait.h> 22 + 23 + #define GNSS_FLAG_HAS_WRITE_RAW BIT(0) 24 + 25 + #define GNSS_MINORS 16 26 + 27 + static DEFINE_IDA(gnss_minors); 28 + static dev_t gnss_first; 29 + 30 + /* FIFO size must be a power of two */ 31 + #define GNSS_READ_FIFO_SIZE 4096 32 + #define GNSS_WRITE_BUF_SIZE 1024 33 + 34 + #define to_gnss_device(d) container_of((d), struct gnss_device, dev) 35 + 36 + static int gnss_open(struct inode *inode, struct file *file) 37 + { 38 + struct gnss_device *gdev; 39 + int ret = 0; 40 + 41 + gdev = container_of(inode->i_cdev, struct gnss_device, cdev); 42 + 43 + get_device(&gdev->dev); 44 + 45 + nonseekable_open(inode, file); 46 + file->private_data = gdev; 47 + 48 + down_write(&gdev->rwsem); 49 + if (gdev->disconnected) { 50 + ret = -ENODEV; 51 + goto unlock; 52 + } 53 + 54 + if (gdev->count++ == 0) { 55 + ret = gdev->ops->open(gdev); 56 + if (ret) 57 + gdev->count--; 58 + } 59 + unlock: 60 + up_write(&gdev->rwsem); 61 + 62 + if (ret) 63 + put_device(&gdev->dev); 64 + 65 + return ret; 66 + } 67 + 68 + static int gnss_release(struct inode *inode, struct file *file) 69 + { 70 + struct gnss_device *gdev = file->private_data; 71 + 72 + down_write(&gdev->rwsem); 73 + if (gdev->disconnected) 74 + goto unlock; 75 + 76 + if (--gdev->count == 0) { 77 + gdev->ops->close(gdev); 78 + kfifo_reset(&gdev->read_fifo); 79 + } 80 + unlock: 81 + up_write(&gdev->rwsem); 82 + 83 + put_device(&gdev->dev); 84 + 85 + return 0; 86 + } 87 + 88 + 
static ssize_t gnss_read(struct file *file, char __user *buf, 89 + size_t count, loff_t *pos) 90 + { 91 + struct gnss_device *gdev = file->private_data; 92 + unsigned int copied; 93 + int ret; 94 + 95 + mutex_lock(&gdev->read_mutex); 96 + while (kfifo_is_empty(&gdev->read_fifo)) { 97 + mutex_unlock(&gdev->read_mutex); 98 + 99 + if (gdev->disconnected) 100 + return 0; 101 + 102 + if (file->f_flags & O_NONBLOCK) 103 + return -EAGAIN; 104 + 105 + ret = wait_event_interruptible(gdev->read_queue, 106 + gdev->disconnected || 107 + !kfifo_is_empty(&gdev->read_fifo)); 108 + if (ret) 109 + return -ERESTARTSYS; 110 + 111 + mutex_lock(&gdev->read_mutex); 112 + } 113 + 114 + ret = kfifo_to_user(&gdev->read_fifo, buf, count, &copied); 115 + if (ret == 0) 116 + ret = copied; 117 + 118 + mutex_unlock(&gdev->read_mutex); 119 + 120 + return ret; 121 + } 122 + 123 + static ssize_t gnss_write(struct file *file, const char __user *buf, 124 + size_t count, loff_t *pos) 125 + { 126 + struct gnss_device *gdev = file->private_data; 127 + size_t written = 0; 128 + int ret; 129 + 130 + if (gdev->disconnected) 131 + return -EIO; 132 + 133 + if (!count) 134 + return 0; 135 + 136 + if (!(gdev->flags & GNSS_FLAG_HAS_WRITE_RAW)) 137 + return -EIO; 138 + 139 + /* Ignoring O_NONBLOCK, write_raw() is synchronous. */ 140 + 141 + ret = mutex_lock_interruptible(&gdev->write_mutex); 142 + if (ret) 143 + return -ERESTARTSYS; 144 + 145 + for (;;) { 146 + size_t n = count - written; 147 + 148 + if (n > GNSS_WRITE_BUF_SIZE) 149 + n = GNSS_WRITE_BUF_SIZE; 150 + 151 + if (copy_from_user(gdev->write_buf, buf, n)) { 152 + ret = -EFAULT; 153 + goto out_unlock; 154 + } 155 + 156 + /* 157 + * Assumes write_raw can always accept GNSS_WRITE_BUF_SIZE 158 + * bytes. 
159 + * 160 + * FIXME: revisit 161 + */ 162 + down_read(&gdev->rwsem); 163 + if (!gdev->disconnected) 164 + ret = gdev->ops->write_raw(gdev, gdev->write_buf, n); 165 + else 166 + ret = -EIO; 167 + up_read(&gdev->rwsem); 168 + 169 + if (ret < 0) 170 + break; 171 + 172 + written += ret; 173 + buf += ret; 174 + 175 + if (written == count) 176 + break; 177 + } 178 + 179 + if (written) 180 + ret = written; 181 + out_unlock: 182 + mutex_unlock(&gdev->write_mutex); 183 + 184 + return ret; 185 + } 186 + 187 + static __poll_t gnss_poll(struct file *file, poll_table *wait) 188 + { 189 + struct gnss_device *gdev = file->private_data; 190 + __poll_t mask = 0; 191 + 192 + poll_wait(file, &gdev->read_queue, wait); 193 + 194 + if (!kfifo_is_empty(&gdev->read_fifo)) 195 + mask |= EPOLLIN | EPOLLRDNORM; 196 + if (gdev->disconnected) 197 + mask |= EPOLLHUP; 198 + 199 + return mask; 200 + } 201 + 202 + static const struct file_operations gnss_fops = { 203 + .owner = THIS_MODULE, 204 + .open = gnss_open, 205 + .release = gnss_release, 206 + .read = gnss_read, 207 + .write = gnss_write, 208 + .poll = gnss_poll, 209 + .llseek = no_llseek, 210 + }; 211 + 212 + static struct class *gnss_class; 213 + 214 + static void gnss_device_release(struct device *dev) 215 + { 216 + struct gnss_device *gdev = to_gnss_device(dev); 217 + 218 + kfree(gdev->write_buf); 219 + kfifo_free(&gdev->read_fifo); 220 + ida_simple_remove(&gnss_minors, gdev->id); 221 + kfree(gdev); 222 + } 223 + 224 + struct gnss_device *gnss_allocate_device(struct device *parent) 225 + { 226 + struct gnss_device *gdev; 227 + struct device *dev; 228 + int id; 229 + int ret; 230 + 231 + gdev = kzalloc(sizeof(*gdev), GFP_KERNEL); 232 + if (!gdev) 233 + return NULL; 234 + 235 + id = ida_simple_get(&gnss_minors, 0, GNSS_MINORS, GFP_KERNEL); 236 + if (id < 0) { 237 + kfree(gdev); 238 + return NULL; 239 + } 240 + 241 + gdev->id = id; 242 + 243 + dev = &gdev->dev; 244 + device_initialize(dev); 245 + dev->devt = gnss_first + id; 246 + 
dev->class = gnss_class; 247 + dev->parent = parent; 248 + dev->release = gnss_device_release; 249 + dev_set_drvdata(dev, gdev); 250 + dev_set_name(dev, "gnss%d", id); 251 + 252 + init_rwsem(&gdev->rwsem); 253 + mutex_init(&gdev->read_mutex); 254 + mutex_init(&gdev->write_mutex); 255 + init_waitqueue_head(&gdev->read_queue); 256 + 257 + ret = kfifo_alloc(&gdev->read_fifo, GNSS_READ_FIFO_SIZE, GFP_KERNEL); 258 + if (ret) 259 + goto err_put_device; 260 + 261 + gdev->write_buf = kzalloc(GNSS_WRITE_BUF_SIZE, GFP_KERNEL); 262 + if (!gdev->write_buf) 263 + goto err_put_device; 264 + 265 + cdev_init(&gdev->cdev, &gnss_fops); 266 + gdev->cdev.owner = THIS_MODULE; 267 + 268 + return gdev; 269 + 270 + err_put_device: 271 + put_device(dev); 272 + 273 + return NULL; 274 + } 275 + EXPORT_SYMBOL_GPL(gnss_allocate_device); 276 + 277 + void gnss_put_device(struct gnss_device *gdev) 278 + { 279 + put_device(&gdev->dev); 280 + } 281 + EXPORT_SYMBOL_GPL(gnss_put_device); 282 + 283 + int gnss_register_device(struct gnss_device *gdev) 284 + { 285 + int ret; 286 + 287 + /* Set a flag which can be accessed without holding the rwsem. */ 288 + if (gdev->ops->write_raw != NULL) 289 + gdev->flags |= GNSS_FLAG_HAS_WRITE_RAW; 290 + 291 + ret = cdev_device_add(&gdev->cdev, &gdev->dev); 292 + if (ret) { 293 + dev_err(&gdev->dev, "failed to add device: %d\n", ret); 294 + return ret; 295 + } 296 + 297 + return 0; 298 + } 299 + EXPORT_SYMBOL_GPL(gnss_register_device); 300 + 301 + void gnss_deregister_device(struct gnss_device *gdev) 302 + { 303 + down_write(&gdev->rwsem); 304 + gdev->disconnected = true; 305 + if (gdev->count) { 306 + wake_up_interruptible(&gdev->read_queue); 307 + gdev->ops->close(gdev); 308 + } 309 + up_write(&gdev->rwsem); 310 + 311 + cdev_device_del(&gdev->cdev, &gdev->dev); 312 + } 313 + EXPORT_SYMBOL_GPL(gnss_deregister_device); 314 + 315 + /* 316 + * Caller guarantees serialisation. 317 + * 318 + * Must not be called for a closed device. 
319 + */ 320 + int gnss_insert_raw(struct gnss_device *gdev, const unsigned char *buf, 321 + size_t count) 322 + { 323 + int ret; 324 + 325 + ret = kfifo_in(&gdev->read_fifo, buf, count); 326 + 327 + wake_up_interruptible(&gdev->read_queue); 328 + 329 + return ret; 330 + } 331 + EXPORT_SYMBOL_GPL(gnss_insert_raw); 332 + 333 + static const char * const gnss_type_names[GNSS_TYPE_COUNT] = { 334 + [GNSS_TYPE_NMEA] = "NMEA", 335 + [GNSS_TYPE_SIRF] = "SiRF", 336 + [GNSS_TYPE_UBX] = "UBX", 337 + }; 338 + 339 + static const char *gnss_type_name(struct gnss_device *gdev) 340 + { 341 + const char *name = NULL; 342 + 343 + if (gdev->type < GNSS_TYPE_COUNT) 344 + name = gnss_type_names[gdev->type]; 345 + 346 + if (!name) 347 + dev_WARN(&gdev->dev, "type name not defined\n"); 348 + 349 + return name; 350 + } 351 + 352 + static ssize_t type_show(struct device *dev, struct device_attribute *attr, 353 + char *buf) 354 + { 355 + struct gnss_device *gdev = to_gnss_device(dev); 356 + 357 + return sprintf(buf, "%s\n", gnss_type_name(gdev)); 358 + } 359 + static DEVICE_ATTR_RO(type); 360 + 361 + static struct attribute *gnss_attrs[] = { 362 + &dev_attr_type.attr, 363 + NULL, 364 + }; 365 + ATTRIBUTE_GROUPS(gnss); 366 + 367 + static int gnss_uevent(struct device *dev, struct kobj_uevent_env *env) 368 + { 369 + struct gnss_device *gdev = to_gnss_device(dev); 370 + int ret; 371 + 372 + ret = add_uevent_var(env, "GNSS_TYPE=%s", gnss_type_name(gdev)); 373 + if (ret) 374 + return ret; 375 + 376 + return 0; 377 + } 378 + 379 + static int __init gnss_module_init(void) 380 + { 381 + int ret; 382 + 383 + ret = alloc_chrdev_region(&gnss_first, 0, GNSS_MINORS, "gnss"); 384 + if (ret < 0) { 385 + pr_err("failed to allocate device numbers: %d\n", ret); 386 + return ret; 387 + } 388 + 389 + gnss_class = class_create(THIS_MODULE, "gnss"); 390 + if (IS_ERR(gnss_class)) { 391 + ret = PTR_ERR(gnss_class); 392 + pr_err("failed to create class: %d\n", ret); 393 + goto err_unregister_chrdev; 394 + } 395 + 
396 + gnss_class->dev_groups = gnss_groups; 397 + gnss_class->dev_uevent = gnss_uevent; 398 + 399 + pr_info("GNSS driver registered with major %d\n", MAJOR(gnss_first)); 400 + 401 + return 0; 402 + 403 + err_unregister_chrdev: 404 + unregister_chrdev_region(gnss_first, GNSS_MINORS); 405 + 406 + return ret; 407 + } 408 + module_init(gnss_module_init); 409 + 410 + static void __exit gnss_module_exit(void) 411 + { 412 + class_destroy(gnss_class); 413 + unregister_chrdev_region(gnss_first, GNSS_MINORS); 414 + ida_destroy(&gnss_minors); 415 + } 416 + module_exit(gnss_module_exit); 417 + 418 + MODULE_AUTHOR("Johan Hovold <johan@kernel.org>"); 419 + MODULE_DESCRIPTION("GNSS receiver core"); 420 + MODULE_LICENSE("GPL v2");
+275
drivers/gnss/serial.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Generic serial GNSS receiver driver 4 + * 5 + * Copyright (C) 2018 Johan Hovold <johan@kernel.org> 6 + */ 7 + 8 + #include <linux/errno.h> 9 + #include <linux/gnss.h> 10 + #include <linux/init.h> 11 + #include <linux/kernel.h> 12 + #include <linux/module.h> 13 + #include <linux/of.h> 14 + #include <linux/pm.h> 15 + #include <linux/pm_runtime.h> 16 + #include <linux/serdev.h> 17 + #include <linux/slab.h> 18 + 19 + #include "serial.h" 20 + 21 + static int gnss_serial_open(struct gnss_device *gdev) 22 + { 23 + struct gnss_serial *gserial = gnss_get_drvdata(gdev); 24 + struct serdev_device *serdev = gserial->serdev; 25 + int ret; 26 + 27 + ret = serdev_device_open(serdev); 28 + if (ret) 29 + return ret; 30 + 31 + serdev_device_set_baudrate(serdev, gserial->speed); 32 + serdev_device_set_flow_control(serdev, false); 33 + 34 + ret = pm_runtime_get_sync(&serdev->dev); 35 + if (ret < 0) { 36 + pm_runtime_put_noidle(&serdev->dev); 37 + goto err_close; 38 + } 39 + 40 + return 0; 41 + 42 + err_close: 43 + serdev_device_close(serdev); 44 + 45 + return ret; 46 + } 47 + 48 + static void gnss_serial_close(struct gnss_device *gdev) 49 + { 50 + struct gnss_serial *gserial = gnss_get_drvdata(gdev); 51 + struct serdev_device *serdev = gserial->serdev; 52 + 53 + serdev_device_close(serdev); 54 + 55 + pm_runtime_put(&serdev->dev); 56 + } 57 + 58 + static int gnss_serial_write_raw(struct gnss_device *gdev, 59 + const unsigned char *buf, size_t count) 60 + { 61 + struct gnss_serial *gserial = gnss_get_drvdata(gdev); 62 + struct serdev_device *serdev = gserial->serdev; 63 + int ret; 64 + 65 + /* write is only buffered synchronously */ 66 + ret = serdev_device_write(serdev, buf, count, 0); 67 + if (ret < 0) 68 + return ret; 69 + 70 + /* FIXME: determine if interrupted? 
*/ 71 + serdev_device_wait_until_sent(serdev, 0); 72 + 73 + return count; 74 + } 75 + 76 + static const struct gnss_operations gnss_serial_gnss_ops = { 77 + .open = gnss_serial_open, 78 + .close = gnss_serial_close, 79 + .write_raw = gnss_serial_write_raw, 80 + }; 81 + 82 + static int gnss_serial_receive_buf(struct serdev_device *serdev, 83 + const unsigned char *buf, size_t count) 84 + { 85 + struct gnss_serial *gserial = serdev_device_get_drvdata(serdev); 86 + struct gnss_device *gdev = gserial->gdev; 87 + 88 + return gnss_insert_raw(gdev, buf, count); 89 + } 90 + 91 + static const struct serdev_device_ops gnss_serial_serdev_ops = { 92 + .receive_buf = gnss_serial_receive_buf, 93 + .write_wakeup = serdev_device_write_wakeup, 94 + }; 95 + 96 + static int gnss_serial_set_power(struct gnss_serial *gserial, 97 + enum gnss_serial_pm_state state) 98 + { 99 + if (!gserial->ops || !gserial->ops->set_power) 100 + return 0; 101 + 102 + return gserial->ops->set_power(gserial, state); 103 + } 104 + 105 + /* 106 + * FIXME: need to provide subdriver defaults or separate dt parsing from 107 + * allocation. 
108 + */ 109 + static int gnss_serial_parse_dt(struct serdev_device *serdev) 110 + { 111 + struct gnss_serial *gserial = serdev_device_get_drvdata(serdev); 112 + struct device_node *node = serdev->dev.of_node; 113 + u32 speed = 4800; 114 + 115 + of_property_read_u32(node, "current-speed", &speed); 116 + 117 + gserial->speed = speed; 118 + 119 + return 0; 120 + } 121 + 122 + struct gnss_serial *gnss_serial_allocate(struct serdev_device *serdev, 123 + size_t data_size) 124 + { 125 + struct gnss_serial *gserial; 126 + struct gnss_device *gdev; 127 + int ret; 128 + 129 + gserial = kzalloc(sizeof(*gserial) + data_size, GFP_KERNEL); 130 + if (!gserial) 131 + return ERR_PTR(-ENOMEM); 132 + 133 + gdev = gnss_allocate_device(&serdev->dev); 134 + if (!gdev) { 135 + ret = -ENOMEM; 136 + goto err_free_gserial; 137 + } 138 + 139 + gdev->ops = &gnss_serial_gnss_ops; 140 + gnss_set_drvdata(gdev, gserial); 141 + 142 + gserial->serdev = serdev; 143 + gserial->gdev = gdev; 144 + 145 + serdev_device_set_drvdata(serdev, gserial); 146 + serdev_device_set_client_ops(serdev, &gnss_serial_serdev_ops); 147 + 148 + ret = gnss_serial_parse_dt(serdev); 149 + if (ret) 150 + goto err_put_device; 151 + 152 + return gserial; 153 + 154 + err_put_device: 155 + gnss_put_device(gserial->gdev); 156 + err_free_gserial: 157 + kfree(gserial); 158 + 159 + return ERR_PTR(ret); 160 + } 161 + EXPORT_SYMBOL_GPL(gnss_serial_allocate); 162 + 163 + void gnss_serial_free(struct gnss_serial *gserial) 164 + { 165 + gnss_put_device(gserial->gdev); 166 + kfree(gserial); 167 + }; 168 + EXPORT_SYMBOL_GPL(gnss_serial_free); 169 + 170 + int gnss_serial_register(struct gnss_serial *gserial) 171 + { 172 + struct serdev_device *serdev = gserial->serdev; 173 + int ret; 174 + 175 + if (IS_ENABLED(CONFIG_PM)) { 176 + pm_runtime_enable(&serdev->dev); 177 + } else { 178 + ret = gnss_serial_set_power(gserial, GNSS_SERIAL_ACTIVE); 179 + if (ret < 0) 180 + return ret; 181 + } 182 + 183 + ret = gnss_register_device(gserial->gdev); 
184 + if (ret) 185 + goto err_disable_rpm; 186 + 187 + return 0; 188 + 189 + err_disable_rpm: 190 + if (IS_ENABLED(CONFIG_PM)) 191 + pm_runtime_disable(&serdev->dev); 192 + else 193 + gnss_serial_set_power(gserial, GNSS_SERIAL_OFF); 194 + 195 + return ret; 196 + } 197 + EXPORT_SYMBOL_GPL(gnss_serial_register); 198 + 199 + void gnss_serial_deregister(struct gnss_serial *gserial) 200 + { 201 + struct serdev_device *serdev = gserial->serdev; 202 + 203 + gnss_deregister_device(gserial->gdev); 204 + 205 + if (IS_ENABLED(CONFIG_PM)) 206 + pm_runtime_disable(&serdev->dev); 207 + else 208 + gnss_serial_set_power(gserial, GNSS_SERIAL_OFF); 209 + } 210 + EXPORT_SYMBOL_GPL(gnss_serial_deregister); 211 + 212 + #ifdef CONFIG_PM 213 + static int gnss_serial_runtime_suspend(struct device *dev) 214 + { 215 + struct gnss_serial *gserial = dev_get_drvdata(dev); 216 + 217 + return gnss_serial_set_power(gserial, GNSS_SERIAL_STANDBY); 218 + } 219 + 220 + static int gnss_serial_runtime_resume(struct device *dev) 221 + { 222 + struct gnss_serial *gserial = dev_get_drvdata(dev); 223 + 224 + return gnss_serial_set_power(gserial, GNSS_SERIAL_ACTIVE); 225 + } 226 + #endif /* CONFIG_PM */ 227 + 228 + static int gnss_serial_prepare(struct device *dev) 229 + { 230 + if (pm_runtime_suspended(dev)) 231 + return 1; 232 + 233 + return 0; 234 + } 235 + 236 + #ifdef CONFIG_PM_SLEEP 237 + static int gnss_serial_suspend(struct device *dev) 238 + { 239 + struct gnss_serial *gserial = dev_get_drvdata(dev); 240 + int ret = 0; 241 + 242 + /* 243 + * FIXME: serdev currently lacks support for managing the underlying 244 + * device's wakeup settings. A workaround would be to close the serdev 245 + * device here if it is open. 
246 + */ 247 + 248 + if (!pm_runtime_suspended(dev)) 249 + ret = gnss_serial_set_power(gserial, GNSS_SERIAL_STANDBY); 250 + 251 + return ret; 252 + } 253 + 254 + static int gnss_serial_resume(struct device *dev) 255 + { 256 + struct gnss_serial *gserial = dev_get_drvdata(dev); 257 + int ret = 0; 258 + 259 + if (!pm_runtime_suspended(dev)) 260 + ret = gnss_serial_set_power(gserial, GNSS_SERIAL_ACTIVE); 261 + 262 + return ret; 263 + } 264 + #endif /* CONFIG_PM_SLEEP */ 265 + 266 + const struct dev_pm_ops gnss_serial_pm_ops = { 267 + .prepare = gnss_serial_prepare, 268 + SET_SYSTEM_SLEEP_PM_OPS(gnss_serial_suspend, gnss_serial_resume) 269 + SET_RUNTIME_PM_OPS(gnss_serial_runtime_suspend, gnss_serial_runtime_resume, NULL) 270 + }; 271 + EXPORT_SYMBOL_GPL(gnss_serial_pm_ops); 272 + 273 + MODULE_AUTHOR("Johan Hovold <johan@kernel.org>"); 274 + MODULE_DESCRIPTION("Generic serial GNSS receiver driver"); 275 + MODULE_LICENSE("GPL v2");
+47
drivers/gnss/serial.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Generic serial GNSS receiver driver 4 + * 5 + * Copyright (C) 2018 Johan Hovold <johan@kernel.org> 6 + */ 7 + 8 + #ifndef _LINUX_GNSS_SERIAL_H 9 + #define _LINUX_GNSS_SERIAL_H 10 + 11 + #include <asm/termbits.h> 12 + #include <linux/pm.h> 13 + 14 + struct gnss_serial { 15 + struct serdev_device *serdev; 16 + struct gnss_device *gdev; 17 + speed_t speed; 18 + const struct gnss_serial_ops *ops; 19 + unsigned long drvdata[0]; 20 + }; 21 + 22 + enum gnss_serial_pm_state { 23 + GNSS_SERIAL_OFF, 24 + GNSS_SERIAL_ACTIVE, 25 + GNSS_SERIAL_STANDBY, 26 + }; 27 + 28 + struct gnss_serial_ops { 29 + int (*set_power)(struct gnss_serial *gserial, 30 + enum gnss_serial_pm_state state); 31 + }; 32 + 33 + extern const struct dev_pm_ops gnss_serial_pm_ops; 34 + 35 + struct gnss_serial *gnss_serial_allocate(struct serdev_device *gserial, 36 + size_t data_size); 37 + void gnss_serial_free(struct gnss_serial *gserial); 38 + 39 + int gnss_serial_register(struct gnss_serial *gserial); 40 + void gnss_serial_deregister(struct gnss_serial *gserial); 41 + 42 + static inline void *gnss_serial_get_drvdata(struct gnss_serial *gserial) 43 + { 44 + return gserial->drvdata; 45 + } 46 + 47 + #endif /* _LINUX_GNSS_SERIAL_H */
+408
drivers/gnss/sirf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * SiRFstar GNSS receiver driver 4 + * 5 + * Copyright (C) 2018 Johan Hovold <johan@kernel.org> 6 + */ 7 + 8 + #include <linux/errno.h> 9 + #include <linux/gnss.h> 10 + #include <linux/gpio/consumer.h> 11 + #include <linux/init.h> 12 + #include <linux/interrupt.h> 13 + #include <linux/kernel.h> 14 + #include <linux/module.h> 15 + #include <linux/of.h> 16 + #include <linux/pm.h> 17 + #include <linux/pm_runtime.h> 18 + #include <linux/regulator/consumer.h> 19 + #include <linux/serdev.h> 20 + #include <linux/slab.h> 21 + #include <linux/wait.h> 22 + 23 + #define SIRF_BOOT_DELAY 500 24 + #define SIRF_ON_OFF_PULSE_TIME 100 25 + #define SIRF_ACTIVATE_TIMEOUT 200 26 + #define SIRF_HIBERNATE_TIMEOUT 200 27 + 28 + struct sirf_data { 29 + struct gnss_device *gdev; 30 + struct serdev_device *serdev; 31 + speed_t speed; 32 + struct regulator *vcc; 33 + struct gpio_desc *on_off; 34 + struct gpio_desc *wakeup; 35 + int irq; 36 + bool active; 37 + wait_queue_head_t power_wait; 38 + }; 39 + 40 + static int sirf_open(struct gnss_device *gdev) 41 + { 42 + struct sirf_data *data = gnss_get_drvdata(gdev); 43 + struct serdev_device *serdev = data->serdev; 44 + int ret; 45 + 46 + ret = serdev_device_open(serdev); 47 + if (ret) 48 + return ret; 49 + 50 + serdev_device_set_baudrate(serdev, data->speed); 51 + serdev_device_set_flow_control(serdev, false); 52 + 53 + ret = pm_runtime_get_sync(&serdev->dev); 54 + if (ret < 0) { 55 + dev_err(&gdev->dev, "failed to runtime resume: %d\n", ret); 56 + pm_runtime_put_noidle(&serdev->dev); 57 + goto err_close; 58 + } 59 + 60 + return 0; 61 + 62 + err_close: 63 + serdev_device_close(serdev); 64 + 65 + return ret; 66 + } 67 + 68 + static void sirf_close(struct gnss_device *gdev) 69 + { 70 + struct sirf_data *data = gnss_get_drvdata(gdev); 71 + struct serdev_device *serdev = data->serdev; 72 + 73 + serdev_device_close(serdev); 74 + 75 + pm_runtime_put(&serdev->dev); 76 + } 77 + 78 + static int 
sirf_write_raw(struct gnss_device *gdev, const unsigned char *buf, 79 + size_t count) 80 + { 81 + struct sirf_data *data = gnss_get_drvdata(gdev); 82 + struct serdev_device *serdev = data->serdev; 83 + int ret; 84 + 85 + /* write is only buffered synchronously */ 86 + ret = serdev_device_write(serdev, buf, count, 0); 87 + if (ret < 0) 88 + return ret; 89 + 90 + /* FIXME: determine if interrupted? */ 91 + serdev_device_wait_until_sent(serdev, 0); 92 + 93 + return count; 94 + } 95 + 96 + static const struct gnss_operations sirf_gnss_ops = { 97 + .open = sirf_open, 98 + .close = sirf_close, 99 + .write_raw = sirf_write_raw, 100 + }; 101 + 102 + static int sirf_receive_buf(struct serdev_device *serdev, 103 + const unsigned char *buf, size_t count) 104 + { 105 + struct sirf_data *data = serdev_device_get_drvdata(serdev); 106 + struct gnss_device *gdev = data->gdev; 107 + 108 + return gnss_insert_raw(gdev, buf, count); 109 + } 110 + 111 + static const struct serdev_device_ops sirf_serdev_ops = { 112 + .receive_buf = sirf_receive_buf, 113 + .write_wakeup = serdev_device_write_wakeup, 114 + }; 115 + 116 + static irqreturn_t sirf_wakeup_handler(int irq, void *dev_id) 117 + { 118 + struct sirf_data *data = dev_id; 119 + struct device *dev = &data->serdev->dev; 120 + int ret; 121 + 122 + ret = gpiod_get_value_cansleep(data->wakeup); 123 + dev_dbg(dev, "%s - wakeup = %d\n", __func__, ret); 124 + if (ret < 0) 125 + goto out; 126 + 127 + data->active = !!ret; 128 + wake_up_interruptible(&data->power_wait); 129 + out: 130 + return IRQ_HANDLED; 131 + } 132 + 133 + static int sirf_wait_for_power_state(struct sirf_data *data, bool active, 134 + unsigned long timeout) 135 + { 136 + int ret; 137 + 138 + ret = wait_event_interruptible_timeout(data->power_wait, 139 + data->active == active, msecs_to_jiffies(timeout)); 140 + if (ret < 0) 141 + return ret; 142 + 143 + if (ret == 0) { 144 + dev_warn(&data->serdev->dev, "timeout waiting for active state = %d\n", 145 + active); 146 + return 
-ETIMEDOUT; 147 + } 148 + 149 + return 0; 150 + } 151 + 152 + static void sirf_pulse_on_off(struct sirf_data *data) 153 + { 154 + gpiod_set_value_cansleep(data->on_off, 1); 155 + msleep(SIRF_ON_OFF_PULSE_TIME); 156 + gpiod_set_value_cansleep(data->on_off, 0); 157 + } 158 + 159 + static int sirf_set_active(struct sirf_data *data, bool active) 160 + { 161 + unsigned long timeout; 162 + int retries = 3; 163 + int ret; 164 + 165 + if (active) 166 + timeout = SIRF_ACTIVATE_TIMEOUT; 167 + else 168 + timeout = SIRF_HIBERNATE_TIMEOUT; 169 + 170 + while (retries-- > 0) { 171 + sirf_pulse_on_off(data); 172 + ret = sirf_wait_for_power_state(data, active, timeout); 173 + if (ret < 0) { 174 + if (ret == -ETIMEDOUT) 175 + continue; 176 + 177 + return ret; 178 + } 179 + 180 + break; 181 + } 182 + 183 + if (retries == 0) 184 + return -ETIMEDOUT; 185 + 186 + return 0; 187 + } 188 + 189 + static int sirf_runtime_suspend(struct device *dev) 190 + { 191 + struct sirf_data *data = dev_get_drvdata(dev); 192 + 193 + if (!data->on_off) 194 + return regulator_disable(data->vcc); 195 + 196 + return sirf_set_active(data, false); 197 + } 198 + 199 + static int sirf_runtime_resume(struct device *dev) 200 + { 201 + struct sirf_data *data = dev_get_drvdata(dev); 202 + 203 + if (!data->on_off) 204 + return regulator_enable(data->vcc); 205 + 206 + return sirf_set_active(data, true); 207 + } 208 + 209 + static int __maybe_unused sirf_suspend(struct device *dev) 210 + { 211 + struct sirf_data *data = dev_get_drvdata(dev); 212 + int ret = 0; 213 + 214 + if (!pm_runtime_suspended(dev)) 215 + ret = sirf_runtime_suspend(dev); 216 + 217 + if (data->wakeup) 218 + disable_irq(data->irq); 219 + 220 + return ret; 221 + } 222 + 223 + static int __maybe_unused sirf_resume(struct device *dev) 224 + { 225 + struct sirf_data *data = dev_get_drvdata(dev); 226 + int ret = 0; 227 + 228 + if (data->wakeup) 229 + enable_irq(data->irq); 230 + 231 + if (!pm_runtime_suspended(dev)) 232 + ret = sirf_runtime_resume(dev); 
233 + 234 + return ret; 235 + } 236 + 237 + static const struct dev_pm_ops sirf_pm_ops = { 238 + SET_SYSTEM_SLEEP_PM_OPS(sirf_suspend, sirf_resume) 239 + SET_RUNTIME_PM_OPS(sirf_runtime_suspend, sirf_runtime_resume, NULL) 240 + }; 241 + 242 + static int sirf_parse_dt(struct serdev_device *serdev) 243 + { 244 + struct sirf_data *data = serdev_device_get_drvdata(serdev); 245 + struct device_node *node = serdev->dev.of_node; 246 + u32 speed = 9600; 247 + 248 + of_property_read_u32(node, "current-speed", &speed); 249 + 250 + data->speed = speed; 251 + 252 + return 0; 253 + } 254 + 255 + static int sirf_probe(struct serdev_device *serdev) 256 + { 257 + struct device *dev = &serdev->dev; 258 + struct gnss_device *gdev; 259 + struct sirf_data *data; 260 + int ret; 261 + 262 + data = devm_kzalloc(dev, sizeof(*data), GFP_KERNEL); 263 + if (!data) 264 + return -ENOMEM; 265 + 266 + gdev = gnss_allocate_device(dev); 267 + if (!gdev) 268 + return -ENOMEM; 269 + 270 + gdev->type = GNSS_TYPE_SIRF; 271 + gdev->ops = &sirf_gnss_ops; 272 + gnss_set_drvdata(gdev, data); 273 + 274 + data->serdev = serdev; 275 + data->gdev = gdev; 276 + 277 + init_waitqueue_head(&data->power_wait); 278 + 279 + serdev_device_set_drvdata(serdev, data); 280 + serdev_device_set_client_ops(serdev, &sirf_serdev_ops); 281 + 282 + ret = sirf_parse_dt(serdev); 283 + if (ret) 284 + goto err_put_device; 285 + 286 + data->vcc = devm_regulator_get(dev, "vcc"); 287 + if (IS_ERR(data->vcc)) { 288 + ret = PTR_ERR(data->vcc); 289 + goto err_put_device; 290 + } 291 + 292 + data->on_off = devm_gpiod_get_optional(dev, "sirf,onoff", 293 + GPIOD_OUT_LOW); 294 + if (IS_ERR(data->on_off)) 295 + goto err_put_device; 296 + 297 + if (data->on_off) { 298 + data->wakeup = devm_gpiod_get_optional(dev, "sirf,wakeup", 299 + GPIOD_IN); 300 + if (IS_ERR(data->wakeup)) 301 + goto err_put_device; 302 + 303 + /* 304 + * Configurations where WAKEUP has been left not connected, 305 + * are currently not supported. 
306 + */ 307 + if (!data->wakeup) { 308 + dev_err(dev, "no wakeup gpio specified\n"); 309 + ret = -ENODEV; 310 + goto err_put_device; 311 + } 312 + } 313 + 314 + if (data->wakeup) { 315 + ret = gpiod_to_irq(data->wakeup); 316 + if (ret < 0) 317 + goto err_put_device; 318 + 319 + data->irq = ret; 320 + 321 + ret = devm_request_threaded_irq(dev, data->irq, NULL, 322 + sirf_wakeup_handler, 323 + IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING | IRQF_ONESHOT, 324 + "wakeup", data); 325 + if (ret) 326 + goto err_put_device; 327 + } 328 + 329 + if (data->on_off) { 330 + ret = regulator_enable(data->vcc); 331 + if (ret) 332 + goto err_put_device; 333 + 334 + /* Wait for chip to boot into hibernate mode */ 335 + msleep(SIRF_BOOT_DELAY); 336 + } 337 + 338 + if (IS_ENABLED(CONFIG_PM)) { 339 + pm_runtime_set_suspended(dev); /* clear runtime_error flag */ 340 + pm_runtime_enable(dev); 341 + } else { 342 + ret = sirf_runtime_resume(dev); 343 + if (ret < 0) 344 + goto err_disable_vcc; 345 + } 346 + 347 + ret = gnss_register_device(gdev); 348 + if (ret) 349 + goto err_disable_rpm; 350 + 351 + return 0; 352 + 353 + err_disable_rpm: 354 + if (IS_ENABLED(CONFIG_PM)) 355 + pm_runtime_disable(dev); 356 + else 357 + sirf_runtime_suspend(dev); 358 + err_disable_vcc: 359 + if (data->on_off) 360 + regulator_disable(data->vcc); 361 + err_put_device: 362 + gnss_put_device(data->gdev); 363 + 364 + return ret; 365 + } 366 + 367 + static void sirf_remove(struct serdev_device *serdev) 368 + { 369 + struct sirf_data *data = serdev_device_get_drvdata(serdev); 370 + 371 + gnss_deregister_device(data->gdev); 372 + 373 + if (IS_ENABLED(CONFIG_PM)) 374 + pm_runtime_disable(&serdev->dev); 375 + else 376 + sirf_runtime_suspend(&serdev->dev); 377 + 378 + if (data->on_off) 379 + regulator_disable(data->vcc); 380 + 381 + gnss_put_device(data->gdev); 382 + }; 383 + 384 + #ifdef CONFIG_OF 385 + static const struct of_device_id sirf_of_match[] = { 386 + { .compatible = "fastrax,uc430" }, 387 + { .compatible = 
"linx,r4" }, 388 + { .compatible = "wi2wi,w2sg0008i" }, 389 + { .compatible = "wi2wi,w2sg0084i" }, 390 + {}, 391 + }; 392 + MODULE_DEVICE_TABLE(of, sirf_of_match); 393 + #endif 394 + 395 + static struct serdev_device_driver sirf_driver = { 396 + .driver = { 397 + .name = "gnss-sirf", 398 + .of_match_table = of_match_ptr(sirf_of_match), 399 + .pm = &sirf_pm_ops, 400 + }, 401 + .probe = sirf_probe, 402 + .remove = sirf_remove, 403 + }; 404 + module_serdev_device_driver(sirf_driver); 405 + 406 + MODULE_AUTHOR("Johan Hovold <johan@kernel.org>"); 407 + MODULE_DESCRIPTION("SiRFstar GNSS receiver driver"); 408 + MODULE_LICENSE("GPL v2");
+153
drivers/gnss/ubx.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * u-blox GNSS receiver driver 4 + * 5 + * Copyright (C) 2018 Johan Hovold <johan@kernel.org> 6 + */ 7 + 8 + #include <linux/errno.h> 9 + #include <linux/gnss.h> 10 + #include <linux/init.h> 11 + #include <linux/kernel.h> 12 + #include <linux/module.h> 13 + #include <linux/of.h> 14 + #include <linux/regulator/consumer.h> 15 + #include <linux/serdev.h> 16 + 17 + #include "serial.h" 18 + 19 + struct ubx_data { 20 + struct regulator *v_bckp; 21 + struct regulator *vcc; 22 + }; 23 + 24 + static int ubx_set_active(struct gnss_serial *gserial) 25 + { 26 + struct ubx_data *data = gnss_serial_get_drvdata(gserial); 27 + int ret; 28 + 29 + ret = regulator_enable(data->vcc); 30 + if (ret) 31 + return ret; 32 + 33 + return 0; 34 + } 35 + 36 + static int ubx_set_standby(struct gnss_serial *gserial) 37 + { 38 + struct ubx_data *data = gnss_serial_get_drvdata(gserial); 39 + int ret; 40 + 41 + ret = regulator_disable(data->vcc); 42 + if (ret) 43 + return ret; 44 + 45 + return 0; 46 + } 47 + 48 + static int ubx_set_power(struct gnss_serial *gserial, 49 + enum gnss_serial_pm_state state) 50 + { 51 + switch (state) { 52 + case GNSS_SERIAL_ACTIVE: 53 + return ubx_set_active(gserial); 54 + case GNSS_SERIAL_OFF: 55 + case GNSS_SERIAL_STANDBY: 56 + return ubx_set_standby(gserial); 57 + } 58 + 59 + return -EINVAL; 60 + } 61 + 62 + static const struct gnss_serial_ops ubx_gserial_ops = { 63 + .set_power = ubx_set_power, 64 + }; 65 + 66 + static int ubx_probe(struct serdev_device *serdev) 67 + { 68 + struct gnss_serial *gserial; 69 + struct ubx_data *data; 70 + int ret; 71 + 72 + gserial = gnss_serial_allocate(serdev, sizeof(*data)); 73 + if (IS_ERR(gserial)) { 74 + ret = PTR_ERR(gserial); 75 + return ret; 76 + } 77 + 78 + gserial->ops = &ubx_gserial_ops; 79 + 80 + gserial->gdev->type = GNSS_TYPE_UBX; 81 + 82 + data = gnss_serial_get_drvdata(gserial); 83 + 84 + data->vcc = devm_regulator_get(&serdev->dev, "vcc"); 85 + if 
(IS_ERR(data->vcc)) { 86 + ret = PTR_ERR(data->vcc); 87 + goto err_free_gserial; 88 + } 89 + 90 + data->v_bckp = devm_regulator_get_optional(&serdev->dev, "v-bckp"); 91 + if (IS_ERR(data->v_bckp)) { 92 + ret = PTR_ERR(data->v_bckp); 93 + if (ret == -ENODEV) 94 + data->v_bckp = NULL; 95 + else 96 + goto err_free_gserial; 97 + } 98 + 99 + if (data->v_bckp) { 100 + ret = regulator_enable(data->v_bckp); 101 + if (ret) 102 + goto err_free_gserial; 103 + } 104 + 105 + ret = gnss_serial_register(gserial); 106 + if (ret) 107 + goto err_disable_v_bckp; 108 + 109 + return 0; 110 + 111 + err_disable_v_bckp: 112 + if (data->v_bckp) 113 + regulator_disable(data->v_bckp); 114 + err_free_gserial: 115 + gnss_serial_free(gserial); 116 + 117 + return ret; 118 + } 119 + 120 + static void ubx_remove(struct serdev_device *serdev) 121 + { 122 + struct gnss_serial *gserial = serdev_device_get_drvdata(serdev); 123 + struct ubx_data *data = gnss_serial_get_drvdata(gserial); 124 + 125 + gnss_serial_deregister(gserial); 126 + if (data->v_bckp) 127 + regulator_disable(data->v_bckp); 128 + gnss_serial_free(gserial); 129 + }; 130 + 131 + #ifdef CONFIG_OF 132 + static const struct of_device_id ubx_of_match[] = { 133 + { .compatible = "u-blox,neo-8" }, 134 + { .compatible = "u-blox,neo-m8" }, 135 + {}, 136 + }; 137 + MODULE_DEVICE_TABLE(of, ubx_of_match); 138 + #endif 139 + 140 + static struct serdev_device_driver ubx_driver = { 141 + .driver = { 142 + .name = "gnss-ubx", 143 + .of_match_table = of_match_ptr(ubx_of_match), 144 + .pm = &gnss_serial_pm_ops, 145 + }, 146 + .probe = ubx_probe, 147 + .remove = ubx_remove, 148 + }; 149 + module_serdev_device_driver(ubx_driver); 150 + 151 + MODULE_AUTHOR("Johan Hovold <johan@kernel.org>"); 152 + MODULE_DESCRIPTION("u-blox GNSS receiver driver"); 153 + MODULE_LICENSE("GPL v2");
+1
drivers/gpu/drm/mediatek/mtk_cec.c
··· 15 15 #include <linux/delay.h> 16 16 #include <linux/io.h> 17 17 #include <linux/interrupt.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/platform_device.h> 19 20 20 21 #include "mtk_cec.h"
+1
drivers/gpu/drm/sun4i/sun6i_drc.c
··· 12 12 #include <linux/clk.h> 13 13 #include <linux/component.h> 14 14 #include <linux/module.h> 15 + #include <linux/mod_devicetable.h> 15 16 #include <linux/platform_device.h> 16 17 #include <linux/regmap.h> 17 18 #include <linux/reset.h>
+3
drivers/hid/hid-hyperv.c
··· 598 598 .id_table = id_table, 599 599 .probe = mousevsc_probe, 600 600 .remove = mousevsc_remove, 601 + .driver = { 602 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 603 + }, 601 604 }; 602 605 603 606 static int __init mousevsc_init(void)
+1
drivers/hsi/controllers/omap_ssi_port.c
··· 20 20 * 02110-1301 USA 21 21 */ 22 22 23 + #include <linux/mod_devicetable.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/dma-mapping.h> 25 26 #include <linux/pm_runtime.h>
+44 -23
drivers/hv/channel.c
··· 29 29 #include <linux/hyperv.h> 30 30 #include <linux/uio.h> 31 31 #include <linux/interrupt.h> 32 + #include <asm/page.h> 32 33 33 34 #include "hyperv_vmbus.h" 34 35 35 36 #define NUM_PAGES_SPANNED(addr, len) \ 36 37 ((PAGE_ALIGN(addr + len) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT)) 38 + 39 + static unsigned long virt_to_hvpfn(void *addr) 40 + { 41 + unsigned long paddr; 42 + 43 + if (is_vmalloc_addr(addr)) 44 + paddr = page_to_phys(vmalloc_to_page(addr)) + 45 + offset_in_page(addr); 46 + else 47 + paddr = __pa(addr); 48 + 49 + return paddr >> PAGE_SHIFT; 50 + } 37 51 38 52 /* 39 53 * vmbus_setevent- Trigger an event notification on the specified ··· 312 298 gpadl_header->range[0].byte_offset = 0; 313 299 gpadl_header->range[0].byte_count = size; 314 300 for (i = 0; i < pfncount; i++) 315 - gpadl_header->range[0].pfn_array[i] = slow_virt_to_phys( 316 - kbuffer + PAGE_SIZE * i) >> PAGE_SHIFT; 301 + gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn( 302 + kbuffer + PAGE_SIZE * i); 317 303 *msginfo = msgheader; 318 304 319 305 pfnsum = pfncount; ··· 364 350 * so the hypervisor guarantees that this is ok. 
365 351 */ 366 352 for (i = 0; i < pfncurr; i++) 367 - gpadl_body->pfn[i] = slow_virt_to_phys( 368 - kbuffer + PAGE_SIZE * (pfnsum + i)) >> 369 - PAGE_SHIFT; 353 + gpadl_body->pfn[i] = virt_to_hvpfn( 354 + kbuffer + PAGE_SIZE * (pfnsum + i)); 370 355 371 356 /* add to msg header */ 372 357 list_add_tail(&msgbody->msglistentry, ··· 393 380 gpadl_header->range[0].byte_offset = 0; 394 381 gpadl_header->range[0].byte_count = size; 395 382 for (i = 0; i < pagecount; i++) 396 - gpadl_header->range[0].pfn_array[i] = slow_virt_to_phys( 397 - kbuffer + PAGE_SIZE * i) >> PAGE_SHIFT; 383 + gpadl_header->range[0].pfn_array[i] = virt_to_hvpfn( 384 + kbuffer + PAGE_SIZE * i); 398 385 399 386 *msginfo = msgheader; 400 387 } ··· 571 558 channel->onchannel_callback = NULL; 572 559 } 573 560 574 - static int vmbus_close_internal(struct vmbus_channel *channel) 561 + void vmbus_reset_channel_cb(struct vmbus_channel *channel) 575 562 { 576 - struct vmbus_channel_close_channel *msg; 577 - int ret; 578 - 579 563 /* 580 564 * vmbus_on_event(), running in the per-channel tasklet, can race 581 565 * with vmbus_close_internal() in the case of SMP guest, e.g., when ··· 581 571 * first. 
582 572 */ 583 573 tasklet_disable(&channel->callback_event); 574 + 575 + channel->sc_creation_callback = NULL; 576 + 577 + /* Stop the callback asap */ 578 + if (channel->target_cpu != get_cpu()) { 579 + put_cpu(); 580 + smp_call_function_single(channel->target_cpu, reset_channel_cb, 581 + channel, true); 582 + } else { 583 + reset_channel_cb(channel); 584 + put_cpu(); 585 + } 586 + 587 + /* Re-enable tasklet for use on re-open */ 588 + tasklet_enable(&channel->callback_event); 589 + } 590 + 591 + static int vmbus_close_internal(struct vmbus_channel *channel) 592 + { 593 + struct vmbus_channel_close_channel *msg; 594 + int ret; 595 + 596 + vmbus_reset_channel_cb(channel); 584 597 585 598 /* 586 599 * In case a device driver's probe() fails (e.g., ··· 618 585 } 619 586 620 587 channel->state = CHANNEL_OPEN_STATE; 621 - channel->sc_creation_callback = NULL; 622 - /* Stop callback and cancel the timer asap */ 623 - if (channel->target_cpu != get_cpu()) { 624 - put_cpu(); 625 - smp_call_function_single(channel->target_cpu, reset_channel_cb, 626 - channel, true); 627 - } else { 628 - reset_channel_cb(channel); 629 - put_cpu(); 630 - } 631 588 632 589 /* Send a closing message */ 633 590 ··· 662 639 get_order(channel->ringbuffer_pagecount * PAGE_SIZE)); 663 640 664 641 out: 665 - /* re-enable tasklet for use on re-open */ 666 - tasklet_enable(&channel->callback_event); 667 642 return ret; 668 643 } 669 644
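The new virt_to_hvpfn() helper above boils down to resolving a physical address (via __pa(), or vmalloc_to_page() plus the in-page offset) and shifting it down by the page size, and the NUM_PAGES_SPANNED() macro at the top of the file is the matching "how many page frames does this buffer touch" computation. The address arithmetic can be sketched in isolation, assuming the usual 4 KiB page size (names here are illustrative, not kernel API):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  ((uint64_t)1 << PAGE_SHIFT)

/* Offset of an address within its page, as offset_in_page() computes it. */
static uint64_t offset_in_page_sketch(uint64_t addr)
{
	return addr & (PAGE_SIZE - 1);
}

/* Physical address -> page frame number: the final step of virt_to_hvpfn(). */
static uint64_t paddr_to_pfn(uint64_t paddr)
{
	return paddr >> PAGE_SHIFT;
}

/* Round up to the next page boundary, as PAGE_ALIGN() does. */
static uint64_t page_align(uint64_t x)
{
	return (x + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

/* The NUM_PAGES_SPANNED(addr, len) arithmetic from channel.c. */
static uint64_t num_pages_spanned(uint64_t addr, uint64_t len)
{
	return (page_align(addr + len) >> PAGE_SHIFT) - (addr >> PAGE_SHIFT);
}
```

A 0x20-byte buffer starting at 0xFF0 crosses a page boundary and therefore spans two page frames, which is exactly why the GPADL setup walks the buffer in PAGE_SIZE steps when filling the pfn arrays.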
+7 -3
drivers/hv/channel_mgmt.c
··· 527 527 struct hv_device *dev 528 528 = newchannel->primary_channel->device_obj; 529 529 530 - if (vmbus_add_channel_kobj(dev, newchannel)) { 531 - atomic_dec(&vmbus_connection.offer_in_progress); 530 + if (vmbus_add_channel_kobj(dev, newchannel)) 532 531 goto err_free_chan; 533 - } 534 532 535 533 if (channel->sc_creation_callback != NULL) 536 534 channel->sc_creation_callback(newchannel); ··· 891 893 */ 892 894 return; 893 895 } 896 + 897 + /* 898 + * Before setting channel->rescind in vmbus_rescind_cleanup(), we 899 + * should make sure the channel callback is not running any more. 900 + */ 901 + vmbus_reset_channel_cb(channel); 894 902 895 903 /* 896 904 * Now wait for offer handling to complete.
+21 -23
drivers/hv/hv.c
··· 64 64 return -ENOMEM; 65 65 66 66 direct_mode_enabled = ms_hyperv.misc_features & 67 - HV_X64_STIMER_DIRECT_MODE_AVAILABLE; 67 + HV_STIMER_DIRECT_MODE_AVAILABLE; 68 68 return 0; 69 69 } 70 70 ··· 127 127 128 128 current_tick = hyperv_cs->read(NULL); 129 129 current_tick += delta; 130 - hv_init_timer(HV_X64_MSR_STIMER0_COUNT, current_tick); 130 + hv_init_timer(0, current_tick); 131 131 return 0; 132 132 } 133 133 134 134 static int hv_ce_shutdown(struct clock_event_device *evt) 135 135 { 136 - hv_init_timer(HV_X64_MSR_STIMER0_COUNT, 0); 137 - hv_init_timer_config(HV_X64_MSR_STIMER0_CONFIG, 0); 136 + hv_init_timer(0, 0); 137 + hv_init_timer_config(0, 0); 138 138 if (direct_mode_enabled) 139 139 hv_disable_stimer0_percpu_irq(stimer0_irq); 140 140 ··· 164 164 timer_cfg.direct_mode = 0; 165 165 timer_cfg.sintx = VMBUS_MESSAGE_SINT; 166 166 } 167 - hv_init_timer_config(HV_X64_MSR_STIMER0_CONFIG, timer_cfg.as_uint64); 167 + hv_init_timer_config(0, timer_cfg.as_uint64); 168 168 return 0; 169 169 } 170 170 ··· 242 242 243 243 return 0; 244 244 err: 245 + /* 246 + * Any memory allocations that succeeded will be freed when 247 + * the caller cleans up by calling hv_synic_free() 248 + */ 245 249 return -ENOMEM; 246 250 } 247 251 ··· 258 254 struct hv_per_cpu_context *hv_cpu 259 255 = per_cpu_ptr(hv_context.cpu_context, cpu); 260 256 261 - if (hv_cpu->synic_event_page) 262 - free_page((unsigned long)hv_cpu->synic_event_page); 263 - if (hv_cpu->synic_message_page) 264 - free_page((unsigned long)hv_cpu->synic_message_page); 265 - if (hv_cpu->post_msg_page) 266 - free_page((unsigned long)hv_cpu->post_msg_page); 257 + kfree(hv_cpu->clk_evt); 258 + free_page((unsigned long)hv_cpu->synic_event_page); 259 + free_page((unsigned long)hv_cpu->synic_message_page); 260 + free_page((unsigned long)hv_cpu->post_msg_page); 267 261 } 268 262 269 263 kfree(hv_context.hv_numa_map); ··· 300 298 hv_set_siefp(siefp.as_uint64); 301 299 302 300 /* Setup the shared SINT. 
*/ 303 - hv_get_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, 304 - shared_sint.as_uint64); 301 + hv_get_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64); 305 302 306 303 shared_sint.vector = HYPERVISOR_CALLBACK_VECTOR; 307 304 shared_sint.masked = false; 308 - if (ms_hyperv.hints & HV_X64_DEPRECATING_AEOI_RECOMMENDED) 305 + if (ms_hyperv.hints & HV_DEPRECATING_AEOI_RECOMMENDED) 309 306 shared_sint.auto_eoi = false; 310 307 else 311 308 shared_sint.auto_eoi = true; 312 309 313 - hv_set_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, 314 - shared_sint.as_uint64); 310 + hv_set_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64); 315 311 316 312 /* Enable the global synic bit */ 317 313 hv_get_synic_state(sctrl.as_uint64); ··· 322 322 /* 323 323 * Register the per-cpu clockevent source. 324 324 */ 325 - if (ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE) 325 + if (ms_hyperv.features & HV_MSR_SYNTIMER_AVAILABLE) 326 326 clockevents_config_and_register(hv_cpu->clk_evt, 327 327 HV_TIMER_FREQUENCY, 328 328 HV_MIN_DELTA_TICKS, ··· 337 337 { 338 338 int cpu; 339 339 340 - if (!(ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE)) 340 + if (!(ms_hyperv.features & HV_MSR_SYNTIMER_AVAILABLE)) 341 341 return; 342 342 343 343 if (direct_mode_enabled) ··· 396 396 return -EBUSY; 397 397 398 398 /* Turn off clockevent device */ 399 - if (ms_hyperv.features & HV_X64_MSR_SYNTIMER_AVAILABLE) { 399 + if (ms_hyperv.features & HV_MSR_SYNTIMER_AVAILABLE) { 400 400 struct hv_per_cpu_context *hv_cpu 401 401 = this_cpu_ptr(hv_context.cpu_context); 402 402 ··· 405 405 put_cpu_ptr(hv_cpu); 406 406 } 407 407 408 - hv_get_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, 409 - shared_sint.as_uint64); 408 + hv_get_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64); 410 409 411 410 shared_sint.masked = 1; 412 411 413 412 /* Need to correctly cleanup in the case of SMP!!! 
*/ 414 413 /* Disable the interrupt */ 415 - hv_set_synint_state(HV_X64_MSR_SINT0 + VMBUS_MESSAGE_SINT, 416 - shared_sint.as_uint64); 414 + hv_set_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64); 417 415 418 416 hv_get_simp(simp.as_uint64); 419 417 simp.simp_enabled = 0;
+3
drivers/hv/hv_balloon.c
··· 1765 1765 .id_table = id_table, 1766 1766 .probe = balloon_probe, 1767 1767 .remove = balloon_remove, 1768 + .driver = { 1769 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 1770 + }, 1768 1771 }; 1769 1772 1770 1773 static int __init init_balloon_drv(void)
+3
drivers/hv/hv_util.c
··· 487 487 .id_table = id_table, 488 488 .probe = util_probe, 489 489 .remove = util_remove, 490 + .driver = { 491 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 492 + }, 490 493 }; 491 494 492 495 static int hv_ptp_enable(struct ptp_clock_info *info,
+54 -11
drivers/hv/ring_buffer.c
··· 431 431 } 432 432 433 433 /* 434 - * Update host ring buffer after iterating over packets. 434 + * Update host ring buffer after iterating over packets. If the host has 435 + * stopped queuing new entries because it found the ring buffer full, and 436 + * sufficient space is being freed up, signal the host. But be careful to 437 + * only signal the host when necessary, both for performance reasons and 438 + * because Hyper-V protects itself by throttling guests that signal 439 + * inappropriately. 440 + * 441 + * Determining when to signal is tricky. There are three key data inputs 442 + * that must be handled in this order to avoid race conditions: 443 + * 444 + * 1. Update the read_index 445 + * 2. Read the pending_send_sz 446 + * 3. Read the current write_index 447 + * 448 + * The interrupt_mask is not used to determine when to signal. The 449 + * interrupt_mask is used only on the guest->host ring buffer when 450 + * sending requests to the host. The host does not use it on the host-> 451 + * guest ring buffer to indicate whether it should be signaled. 435 452 */ 436 453 void hv_pkt_iter_close(struct vmbus_channel *channel) 437 454 { ··· 464 447 start_read_index = rbi->ring_buffer->read_index; 465 448 rbi->ring_buffer->read_index = rbi->priv_read_index; 466 449 450 + /* 451 + * Older versions of Hyper-V (before WS2012 and Win8) do not 452 + * implement pending_send_sz and simply poll if the host->guest 453 + * ring buffer is full. No signaling is needed or expected. 454 + */ 467 455 if (!rbi->ring_buffer->feature_bits.feat_pending_send_sz) 468 456 return; 469 457 470 458 /* 471 459 * Issue a full memory barrier before making the signaling decision. 472 - * Here is the reason for having this barrier: 473 - * If the reading of the pend_sz (in this function) 474 - * were to be reordered and read before we commit the new read 475 - * index (in the calling function) we could 476 - * have a problem. 
If the host were to set the pending_sz after we 477 - * have sampled pending_sz and go to sleep before we commit the 460 + * If reading pending_send_sz were to be reordered and happen 461 + * before we commit the new read_index, a race could occur. If the 462 + * host were to set the pending_send_sz after we have sampled 463 + * pending_send_sz, and the ring buffer blocks before we commit the 478 464 * read index, we could miss sending the interrupt. Issue a full 479 465 * memory barrier to address this. 480 466 */ 481 467 virt_mb(); 482 468 469 + /* 470 + * If the pending_send_sz is zero, then the ring buffer is not 471 + * blocked and there is no need to signal. This is by far the 472 + * most common case, so exit quickly for best performance. 473 + */ 483 474 pending_sz = READ_ONCE(rbi->ring_buffer->pending_send_sz); 484 475 if (!pending_sz) 485 476 return; ··· 501 476 bytes_read = hv_pkt_iter_bytes_read(rbi, start_read_index); 502 477 503 478 /* 504 - * If there was space before we began iteration, 505 - * then host was not blocked. 479 + * We want to signal the host only if we're transitioning 480 + * from a "not enough free space" state to an "enough free 481 + * space" state. For example, it's possible that this function 482 + * could run and free up enough space to signal the host, and then 483 + * run again and free up additional space before the host has a 484 + * chance to clear the pending_send_sz. The 2nd invocation would 485 + * be a null transition from "enough free space" to "enough free 486 + * space", which doesn't warrant a signal. 487 + * 488 + * Exactly filling the ring buffer is treated as "not enough 489 + * space". The ring buffer always must have at least one byte 490 + * empty so the empty and full conditions are distinguishable. 491 + * hv_get_bytes_to_write() doesn't fully tell the truth in 492 + * this regard. 493 + * 494 + * So first check if we were in the "enough free space" state 495 + * before we began the iteration. 
If so, the host was not 496 + * blocked, and there's no need to signal. 506 497 */ 507 - 508 498 if (curr_write_sz - bytes_read > pending_sz) 509 499 return; 510 500 511 - /* If pending write will not fit, don't give false hope. */ 501 + /* 502 + * Similarly, if the new state is "not enough space", then 503 + * there's no need to signal. 504 + */ 512 505 if (curr_write_sz <= pending_sz) 513 506 return; 514 507
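The three early returns in hv_pkt_iter_close() reduce to a small predicate over the free space available after reading (curr_write_sz), the bytes just freed (bytes_read), and the host's advertised pending_send_sz. A standalone sketch of just that decision, with the read_index update and the memory barrier deliberately left out (the driver must still perform those, in order, before sampling pending_send_sz):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Decide whether freeing ring-buffer space should signal the host.
 * curr_write_sz: free bytes available to the host *after* our reads.
 * bytes_read:    bytes we just consumed, so the free space before
 *                reading was curr_write_sz - bytes_read.
 * pending_sz:    host's advertised pending_send_sz (0 = not blocked).
 */
static bool host_needs_signal(uint32_t curr_write_sz, uint32_t bytes_read,
			      uint32_t pending_sz)
{
	/* Host never indicated it was blocked. */
	if (!pending_sz)
		return false;

	/* Enough space already existed before we read: host wasn't blocked. */
	if (curr_write_sz - bytes_read > pending_sz)
		return false;

	/* Still not enough room for the pending write: don't give false hope. */
	if (curr_write_sz <= pending_sz)
		return false;

	/* Transitioned from "not enough" to "enough": signal the host. */
	return true;
}
```

Only the transition case signals: free space was at or below pending_sz before the reads and strictly above it afterwards, which matches the "null transition" reasoning in the comment block above.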
+119 -1
drivers/hv/vmbus_drv.c
··· 56 56 57 57 static int hyperv_cpuhp_online; 58 58 59 + static void *hv_panic_page; 60 + 59 61 static int hyperv_panic_event(struct notifier_block *nb, unsigned long val, 60 62 void *args) 61 63 { ··· 209 207 return sprintf(buf, "vmbus:%s\n", alias_name); 210 208 } 211 209 static DEVICE_ATTR_RO(modalias); 210 + 211 + #ifdef CONFIG_NUMA 212 + static ssize_t numa_node_show(struct device *dev, 213 + struct device_attribute *attr, char *buf) 214 + { 215 + struct hv_device *hv_dev = device_to_hv_device(dev); 216 + 217 + if (!hv_dev->channel) 218 + return -ENODEV; 219 + 220 + return sprintf(buf, "%d\n", hv_dev->channel->numa_node); 221 + } 222 + static DEVICE_ATTR_RO(numa_node); 223 + #endif 212 224 213 225 static ssize_t server_monitor_pending_show(struct device *dev, 214 226 struct device_attribute *dev_attr, ··· 506 490 &dev_attr_class_id.attr, 507 491 &dev_attr_device_id.attr, 508 492 &dev_attr_modalias.attr, 493 + #ifdef CONFIG_NUMA 494 + &dev_attr_numa_node.attr, 495 + #endif 509 496 &dev_attr_server_monitor_pending.attr, 510 497 &dev_attr_client_monitor_pending.attr, 511 498 &dev_attr_server_monitor_latency.attr, ··· 1037 1018 add_interrupt_randomness(HYPERVISOR_CALLBACK_VECTOR, 0); 1038 1019 } 1039 1020 1021 + /* 1022 + * Boolean to control whether to report panic messages over Hyper-V. 1023 + * 1024 + * It can be set via /proc/sys/kernel/hyperv/record_panic_msg 1025 + */ 1026 + static int sysctl_record_panic_msg = 1; 1027 + 1028 + /* 1029 + * Callback from kmsg_dump. Grab as much as possible from the end of the kmsg 1030 + * buffer and call into Hyper-V to transfer the data. 1031 + */ 1032 + static void hv_kmsg_dump(struct kmsg_dumper *dumper, 1033 + enum kmsg_dump_reason reason) 1034 + { 1035 + size_t bytes_written; 1036 + phys_addr_t panic_pa; 1037 + 1038 + /* We are only interested in panics. 
*/ 1039 + if ((reason != KMSG_DUMP_PANIC) || (!sysctl_record_panic_msg)) 1040 + return; 1041 + 1042 + panic_pa = virt_to_phys(hv_panic_page); 1043 + 1044 + /* 1045 + * Write dump contents to the page. No need to synchronize; panic should 1046 + * be single-threaded. 1047 + */ 1048 + kmsg_dump_get_buffer(dumper, true, hv_panic_page, PAGE_SIZE, 1049 + &bytes_written); 1050 + if (bytes_written) 1051 + hyperv_report_panic_msg(panic_pa, bytes_written); 1052 + } 1053 + 1054 + static struct kmsg_dumper hv_kmsg_dumper = { 1055 + .dump = hv_kmsg_dump, 1056 + }; 1057 + 1058 + static struct ctl_table_header *hv_ctl_table_hdr; 1059 + static int zero; 1060 + static int one = 1; 1061 + 1062 + /* 1063 + * sysctl option to allow the user to control whether kmsg data should be 1064 + * reported to Hyper-V on panic. 1065 + */ 1066 + static struct ctl_table hv_ctl_table[] = { 1067 + { 1068 + .procname = "hyperv_record_panic_msg", 1069 + .data = &sysctl_record_panic_msg, 1070 + .maxlen = sizeof(int), 1071 + .mode = 0644, 1072 + .proc_handler = proc_dointvec_minmax, 1073 + .extra1 = &zero, 1074 + .extra2 = &one 1075 + }, 1076 + {} 1077 + }; 1078 + 1079 + static struct ctl_table hv_root_table[] = { 1080 + { 1081 + .procname = "kernel", 1082 + .mode = 0555, 1083 + .child = hv_ctl_table 1084 + }, 1085 + {} 1086 + }; 1040 1087 1041 1088 /* 1042 1089 * vmbus_bus_init -Main vmbus driver initialization routine. ··· 1150 1065 * Only register if the crash MSRs are available 1151 1066 */ 1152 1067 if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) { 1068 + u64 hyperv_crash_ctl; 1069 + /* 1070 + * Sysctl registration is not fatal, since by default 1071 + * reporting is enabled. 1072 + */ 1073 + hv_ctl_table_hdr = register_sysctl_table(hv_root_table); 1074 + if (!hv_ctl_table_hdr) 1075 + pr_err("Hyper-V: sysctl table register error"); 1076 + 1077 + /* 1078 + * Register for panic kmsg callback only if the right 1079 + * capability is supported by the hypervisor. 
1080 + */ 1081 + hv_get_crash_ctl(hyperv_crash_ctl); 1082 + if (hyperv_crash_ctl & HV_CRASH_CTL_CRASH_NOTIFY_MSG) { 1083 + hv_panic_page = (void *)get_zeroed_page(GFP_KERNEL); 1084 + if (hv_panic_page) { 1085 + ret = kmsg_dump_register(&hv_kmsg_dumper); 1086 + if (ret) 1087 + pr_err("Hyper-V: kmsg dump register " 1088 + "error 0x%x\n", ret); 1089 + } else 1090 + pr_err("Hyper-V: panic message page memory " 1091 + "allocation failed"); 1092 + } 1093 + 1153 1094 register_die_notifier(&hyperv_die_block); 1154 1095 atomic_notifier_chain_register(&panic_notifier_list, 1155 1096 &hyperv_panic_block); ··· 1192 1081 hv_remove_vmbus_irq(); 1193 1082 1194 1083 bus_unregister(&hv_bus); 1195 - 1084 + free_page((unsigned long)hv_panic_page); 1085 + unregister_sysctl_table(hv_ctl_table_hdr); 1086 + hv_ctl_table_hdr = NULL; 1196 1087 return ret; 1197 1088 } 1198 1089 ··· 1898 1785 vmbus_free_channels(); 1899 1786 1900 1787 if (ms_hyperv.misc_features & HV_FEATURE_GUEST_CRASH_MSR_AVAILABLE) { 1788 + kmsg_dump_unregister(&hv_kmsg_dumper); 1901 1789 unregister_die_notifier(&hyperv_die_block); 1902 1790 atomic_notifier_chain_unregister(&panic_notifier_list, 1903 1791 &hyperv_panic_block); 1904 1792 } 1793 + 1794 + free_page((unsigned long)hv_panic_page); 1795 + unregister_sysctl_table(hv_ctl_table_hdr); 1796 + hv_ctl_table_hdr = NULL; 1905 1797 bus_unregister(&hv_bus); 1906 1798 1907 1799 cpuhp_remove_state(hyperv_cpuhp_online);
+1
drivers/hwmon/max197.c
··· 13 13 14 14 #include <linux/kernel.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/init.h> 17 18 #include <linux/err.h> 18 19 #include <linux/slab.h>
+1
drivers/hwmon/mc13783-adc.c
··· 23 23 #include <linux/hwmon-sysfs.h> 24 24 #include <linux/kernel.h> 25 25 #include <linux/module.h> 26 + #include <linux/mod_devicetable.h> 26 27 #include <linux/hwmon.h> 27 28 #include <linux/slab.h> 28 29 #include <linux/init.h>
+11
drivers/hwtracing/coresight/Kconfig
··· 31 31 complies with the generic implementation of the component without 32 32 special enhancement or added features. 33 33 34 + config CORESIGHT_CATU 35 + bool "Coresight Address Translation Unit (CATU) driver" 36 + depends on CORESIGHT_LINK_AND_SINK_TMC 37 + help 38 + Enable support for the Coresight Address Translation Unit (CATU). 39 + CATU supports a scatter gather table of 4K pages, with forward/backward 40 + lookup. CATU helps TMC ETR to use a large physically non-contiguous trace 41 + buffer by translating the addresses used by ETR to the physical address 42 + by looking up the provided table. CATU can also be used in pass-through 43 + mode where the address is not translated. 44 + 34 45 config CORESIGHT_SINK_TPIU 35 46 bool "Coresight generic TPIU driver" 36 47 depends on CORESIGHT_LINKS_AND_SINKS
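The new CATU Kconfig help text above describes the unit's core job: letting the TMC ETR write into a physically non-contiguous buffer by looking up each 4K page's physical base in a table. Stripped of the hardware's two-level SLADDR table layout, the translation itself is just index-plus-offset arithmetic; a hedged sketch over a flat page-pointer table (illustrative names, not the driver's API):

```c
#include <assert.h>
#include <stdint.h>

#define CATU_PAGE_SHIFT 12
#define CATU_PAGE_SIZE  ((uint64_t)1 << CATU_PAGE_SHIFT)

/*
 * Translate an input address to a physical address through a flat
 * page-pointer table: entry i holds the 4K-aligned physical base of
 * data page i, so the translation keeps the low 12 offset bits and
 * swaps in the physical base selected by the high bits.
 */
static uint64_t catu_translate_sketch(const uint64_t *page_table,
				      uint64_t inaddr)
{
	uint64_t page   = inaddr >> CATU_PAGE_SHIFT;       /* which 4K page */
	uint64_t offset = inaddr & (CATU_PAGE_SIZE - 1);   /* within the page */

	return page_table[page] | offset;
}
```

Consecutive input pages can thus map to scattered physical pages, which is what lets the ETR see one large linear buffer; pass-through mode mentioned in the help text corresponds to skipping the lookup entirely.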
+1
drivers/hwtracing/coresight/Makefile
··· 18 18 obj-$(CONFIG_CORESIGHT_DYNAMIC_REPLICATOR) += coresight-dynamic-replicator.o 19 19 obj-$(CONFIG_CORESIGHT_STM) += coresight-stm.o 20 20 obj-$(CONFIG_CORESIGHT_CPU_DEBUG) += coresight-cpu-debug.o 21 + obj-$(CONFIG_CORESIGHT_CATU) += coresight-catu.o
+577
drivers/hwtracing/coresight/coresight-catu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2018 Arm Limited. All rights reserved. 4 + * 5 + * Coresight Address Translation Unit support 6 + * 7 + * Author: Suzuki K Poulose <suzuki.poulose@arm.com> 8 + */ 9 + 10 + #include <linux/amba/bus.h> 11 + #include <linux/device.h> 12 + #include <linux/dma-mapping.h> 13 + #include <linux/io.h> 14 + #include <linux/kernel.h> 15 + #include <linux/slab.h> 16 + 17 + #include "coresight-catu.h" 18 + #include "coresight-priv.h" 19 + #include "coresight-tmc.h" 20 + 21 + #define csdev_to_catu_drvdata(csdev) \ 22 + dev_get_drvdata(csdev->dev.parent) 23 + 24 + /* Verbose output for CATU table contents */ 25 + #ifdef CATU_DEBUG 26 + #define catu_dbg(x, ...) dev_dbg(x, __VA_ARGS__) 27 + #else 28 + #define catu_dbg(x, ...) do {} while (0) 29 + #endif 30 + 31 + struct catu_etr_buf { 32 + struct tmc_sg_table *catu_table; 33 + dma_addr_t sladdr; 34 + }; 35 + 36 + /* 37 + * CATU uses a page size of 4KB for page tables as well as data pages. 38 + * Each 64bit entry in the table has the following format. 39 + * 40 + * 63 12 1 0 41 + * ------------------------------------ 42 + * | Address [63-12] | SBZ | V| 43 + * ------------------------------------ 44 + * 45 + * Where bit[0] V indicates if the address is valid or not. 46 + * Each 4K table pages have upto 256 data page pointers, taking upto 2K 47 + * size. There are two Link pointers, pointing to the previous and next 48 + * table pages respectively at the end of the 4K page. (i.e, entry 510 49 + * and 511). 50 + * E.g, a table of two pages could look like : 51 + * 52 + * Table Page 0 Table Page 1 53 + * SLADDR ===> x------------------x x--> x-----------------x 54 + * INADDR ->| Page 0 | V | | | Page 256 | V | <- INADDR+1M 55 + * |------------------| | |-----------------| 56 + * INADDR+4K ->| Page 1 | V | | | | 57 + * |------------------| | |-----------------| 58 + * | Page 2 | V | | | | 59 + * |------------------| | |-----------------| 60 + * | ... | V | | | ... 
| 61 + * |------------------| | |-----------------| 62 + * INADDR+1020K| Page 255 | V | | | Page 511 | V | 63 + * SLADDR+2K==>|------------------| | |-----------------| 64 + * | UNUSED | | | | | 65 + * |------------------| | | | 66 + * | UNUSED | | | | | 67 + * |------------------| | | | 68 + * | ... | | | | | 69 + * |------------------| | |-----------------| 70 + * | IGNORED | 0 | | | Table Page 0| 1 | 71 + * |------------------| | |-----------------| 72 + * | Table Page 1| 1 |--x | IGNORED | 0 | 73 + * x------------------x x-----------------x 74 + * SLADDR+4K==> 75 + * 76 + * The base input address (used by the ETR, programmed in INADDR_{LO,HI}) 77 + * must be aligned to 1MB (the size addressable by a single page table). 78 + * The CATU maps INADDR{LO:HI} to the first page in the table pointed 79 + * to by SLADDR{LO:HI} and so on. 80 + * 81 + */ 82 + typedef u64 cate_t; 83 + 84 + #define CATU_PAGE_SHIFT 12 85 + #define CATU_PAGE_SIZE (1UL << CATU_PAGE_SHIFT) 86 + #define CATU_PAGES_PER_SYSPAGE (PAGE_SIZE / CATU_PAGE_SIZE) 87 + 88 + /* Page pointers are only allocated in the first 2K half */ 89 + #define CATU_PTRS_PER_PAGE ((CATU_PAGE_SIZE >> 1) / sizeof(cate_t)) 90 + #define CATU_PTRS_PER_SYSPAGE (CATU_PAGES_PER_SYSPAGE * CATU_PTRS_PER_PAGE) 91 + #define CATU_LINK_PREV ((CATU_PAGE_SIZE / sizeof(cate_t)) - 2) 92 + #define CATU_LINK_NEXT ((CATU_PAGE_SIZE / sizeof(cate_t)) - 1) 93 + 94 + #define CATU_ADDR_SHIFT 12 95 + #define CATU_ADDR_MASK ~(((cate_t)1 << CATU_ADDR_SHIFT) - 1) 96 + #define CATU_ENTRY_VALID ((cate_t)0x1) 97 + #define CATU_VALID_ENTRY(addr) \ 98 + (((cate_t)(addr) & CATU_ADDR_MASK) | CATU_ENTRY_VALID) 99 + #define CATU_ENTRY_ADDR(entry) ((cate_t)(entry) & ~((cate_t)CATU_ENTRY_VALID)) 100 + 101 + /* CATU expects the INADDR to be aligned to 1M. */ 102 + #define CATU_DEFAULT_INADDR (1ULL << 20) 103 + 104 + /* 105 + * catu_get_table : Retrieve the table pointers for the given @offset 106 + * within the buffer. 
The buffer is wrapped around to a valid offset. 107 + * 108 + * Returns : The CPU virtual address for the beginning of the table 109 + * containing the data page pointer for @offset. If @daddrp is not NULL, 110 + * @daddrp points the DMA address of the beginning of the table. 111 + */ 112 + static inline cate_t *catu_get_table(struct tmc_sg_table *catu_table, 113 + unsigned long offset, 114 + dma_addr_t *daddrp) 115 + { 116 + unsigned long buf_size = tmc_sg_table_buf_size(catu_table); 117 + unsigned int table_nr, pg_idx, pg_offset; 118 + struct tmc_pages *table_pages = &catu_table->table_pages; 119 + void *ptr; 120 + 121 + /* Make sure offset is within the range */ 122 + offset %= buf_size; 123 + 124 + /* 125 + * Each table can address 1MB and a single kernel page can 126 + * contain "CATU_PAGES_PER_SYSPAGE" CATU tables. 127 + */ 128 + table_nr = offset >> 20; 129 + /* Find the table page where the table_nr lies in */ 130 + pg_idx = table_nr / CATU_PAGES_PER_SYSPAGE; 131 + pg_offset = (table_nr % CATU_PAGES_PER_SYSPAGE) * CATU_PAGE_SIZE; 132 + if (daddrp) 133 + *daddrp = table_pages->daddrs[pg_idx] + pg_offset; 134 + ptr = page_address(table_pages->pages[pg_idx]); 135 + return (cate_t *)((unsigned long)ptr + pg_offset); 136 + } 137 + 138 + #ifdef CATU_DEBUG 139 + static void catu_dump_table(struct tmc_sg_table *catu_table) 140 + { 141 + int i; 142 + cate_t *table; 143 + unsigned long table_end, buf_size, offset = 0; 144 + 145 + buf_size = tmc_sg_table_buf_size(catu_table); 146 + dev_dbg(catu_table->dev, 147 + "Dump table %p, tdaddr: %llx\n", 148 + catu_table, catu_table->table_daddr); 149 + 150 + while (offset < buf_size) { 151 + table_end = offset + SZ_1M < buf_size ? 
152 + offset + SZ_1M : buf_size; 153 + table = catu_get_table(catu_table, offset, NULL); 154 + for (i = 0; offset < table_end; i++, offset += CATU_PAGE_SIZE) 155 + dev_dbg(catu_table->dev, "%d: %llx\n", i, table[i]); 156 + dev_dbg(catu_table->dev, "Prev : %llx, Next: %llx\n", 157 + table[CATU_LINK_PREV], table[CATU_LINK_NEXT]); 158 + dev_dbg(catu_table->dev, "== End of sub-table ==="); 159 + } 160 + dev_dbg(catu_table->dev, "== End of Table ==="); 161 + } 162 + 163 + #else 164 + static inline void catu_dump_table(struct tmc_sg_table *catu_table) 165 + { 166 + } 167 + #endif 168 + 169 + static inline cate_t catu_make_entry(dma_addr_t addr) 170 + { 171 + return addr ? CATU_VALID_ENTRY(addr) : 0; 172 + } 173 + 174 + /* 175 + * catu_populate_table : Populate the given CATU table. 176 + * The table is always populated as a circular table. 177 + * i.e, the "prev" link of the "first" table points to the "last" 178 + * table and the "next" link of the "last" table points to the 179 + * "first" table. The buffer should be made linear by calling 180 + * catu_set_table(). 181 + */ 182 + static void 183 + catu_populate_table(struct tmc_sg_table *catu_table) 184 + { 185 + int i; 186 + int sys_pidx; /* Index to current system data page */ 187 + int catu_pidx; /* Index of CATU page within the system data page */ 188 + unsigned long offset, buf_size, table_end; 189 + dma_addr_t data_daddr; 190 + dma_addr_t prev_taddr, next_taddr, cur_taddr; 191 + cate_t *table_ptr, *next_table; 192 + 193 + buf_size = tmc_sg_table_buf_size(catu_table); 194 + sys_pidx = catu_pidx = 0; 195 + offset = 0; 196 + 197 + table_ptr = catu_get_table(catu_table, 0, &cur_taddr); 198 + prev_taddr = 0; /* Prev link for the first table */ 199 + 200 + while (offset < buf_size) { 201 + /* 202 + * The @offset is always 1M aligned here and we have an 203 + * empty table @table_ptr to fill. Each table can address 204 + * upto 1MB data buffer. 
The last table may have fewer 205 + * entries if the buffer size is not aligned. 206 + */ 207 + table_end = (offset + SZ_1M) < buf_size ? 208 + (offset + SZ_1M) : buf_size; 209 + for (i = 0; offset < table_end; 210 + i++, offset += CATU_PAGE_SIZE) { 211 + 212 + data_daddr = catu_table->data_pages.daddrs[sys_pidx] + 213 + catu_pidx * CATU_PAGE_SIZE; 214 + catu_dbg(catu_table->dev, 215 + "[table %5ld:%03d] 0x%llx\n", 216 + (offset >> 20), i, data_daddr); 217 + table_ptr[i] = catu_make_entry(data_daddr); 218 + /* Move the pointers for data pages */ 219 + catu_pidx = (catu_pidx + 1) % CATU_PAGES_PER_SYSPAGE; 220 + if (catu_pidx == 0) 221 + sys_pidx++; 222 + } 223 + 224 + /* 225 + * If we have finished all the valid entries, fill the rest of 226 + * the table (i.e, last table page) with invalid entries, 227 + * to fail the lookups. 228 + */ 229 + if (offset == buf_size) { 230 + memset(&table_ptr[i], 0, 231 + sizeof(cate_t) * (CATU_PTRS_PER_PAGE - i)); 232 + next_taddr = 0; 233 + } else { 234 + next_table = catu_get_table(catu_table, 235 + offset, &next_taddr); 236 + } 237 + 238 + table_ptr[CATU_LINK_PREV] = catu_make_entry(prev_taddr); 239 + table_ptr[CATU_LINK_NEXT] = catu_make_entry(next_taddr); 240 + 241 + catu_dbg(catu_table->dev, 242 + "[table%5ld]: Cur: 0x%llx Prev: 0x%llx, Next: 0x%llx\n", 243 + (offset >> 20) - 1, cur_taddr, prev_taddr, next_taddr); 244 + 245 + /* Update the prev/next addresses */ 246 + if (next_taddr) { 247 + prev_taddr = cur_taddr; 248 + cur_taddr = next_taddr; 249 + table_ptr = next_table; 250 + } 251 + } 252 + 253 + /* Sync the table for device */ 254 + tmc_sg_table_sync_table(catu_table); 255 + } 256 + 257 + static struct tmc_sg_table * 258 + catu_init_sg_table(struct device *catu_dev, int node, 259 + ssize_t size, void **pages) 260 + { 261 + int nr_tpages; 262 + struct tmc_sg_table *catu_table; 263 + 264 + /* 265 + * Each table can address upto 1MB and we can have 266 + * CATU_PAGES_PER_SYSPAGE tables in a system page. 
267 + */ 268 + nr_tpages = DIV_ROUND_UP(size, SZ_1M) / CATU_PAGES_PER_SYSPAGE; 269 + catu_table = tmc_alloc_sg_table(catu_dev, node, nr_tpages, 270 + size >> PAGE_SHIFT, pages); 271 + if (IS_ERR(catu_table)) 272 + return catu_table; 273 + 274 + catu_populate_table(catu_table); 275 + dev_dbg(catu_dev, 276 + "Setup table %p, size %ldKB, %d table pages\n", 277 + catu_table, (unsigned long)size >> 10, nr_tpages); 278 + catu_dump_table(catu_table); 279 + return catu_table; 280 + } 281 + 282 + static void catu_free_etr_buf(struct etr_buf *etr_buf) 283 + { 284 + struct catu_etr_buf *catu_buf; 285 + 286 + if (!etr_buf || etr_buf->mode != ETR_MODE_CATU || !etr_buf->private) 287 + return; 288 + 289 + catu_buf = etr_buf->private; 290 + tmc_free_sg_table(catu_buf->catu_table); 291 + kfree(catu_buf); 292 + } 293 + 294 + static ssize_t catu_get_data_etr_buf(struct etr_buf *etr_buf, u64 offset, 295 + size_t len, char **bufpp) 296 + { 297 + struct catu_etr_buf *catu_buf = etr_buf->private; 298 + 299 + return tmc_sg_table_get_data(catu_buf->catu_table, offset, len, bufpp); 300 + } 301 + 302 + static void catu_sync_etr_buf(struct etr_buf *etr_buf, u64 rrp, u64 rwp) 303 + { 304 + struct catu_etr_buf *catu_buf = etr_buf->private; 305 + struct tmc_sg_table *catu_table = catu_buf->catu_table; 306 + u64 r_offset, w_offset; 307 + 308 + /* 309 + * ETR started off at etr_buf->hwaddr. Convert the RRP/RWP to 310 + * offsets within the trace buffer. 
311 + */ 312 + r_offset = rrp - etr_buf->hwaddr; 313 + w_offset = rwp - etr_buf->hwaddr; 314 + 315 + if (!etr_buf->full) { 316 + etr_buf->len = w_offset - r_offset; 317 + if (w_offset < r_offset) 318 + etr_buf->len += etr_buf->size; 319 + } else { 320 + etr_buf->len = etr_buf->size; 321 + } 322 + 323 + etr_buf->offset = r_offset; 324 + tmc_sg_table_sync_data_range(catu_table, r_offset, etr_buf->len); 325 + } 326 + 327 + static int catu_alloc_etr_buf(struct tmc_drvdata *tmc_drvdata, 328 + struct etr_buf *etr_buf, int node, void **pages) 329 + { 330 + struct coresight_device *csdev; 331 + struct device *catu_dev; 332 + struct tmc_sg_table *catu_table; 333 + struct catu_etr_buf *catu_buf; 334 + 335 + csdev = tmc_etr_get_catu_device(tmc_drvdata); 336 + if (!csdev) 337 + return -ENODEV; 338 + catu_dev = csdev->dev.parent; 339 + catu_buf = kzalloc(sizeof(*catu_buf), GFP_KERNEL); 340 + if (!catu_buf) 341 + return -ENOMEM; 342 + 343 + catu_table = catu_init_sg_table(catu_dev, node, etr_buf->size, pages); 344 + if (IS_ERR(catu_table)) { 345 + kfree(catu_buf); 346 + return PTR_ERR(catu_table); 347 + } 348 + 349 + etr_buf->mode = ETR_MODE_CATU; 350 + etr_buf->private = catu_buf; 351 + etr_buf->hwaddr = CATU_DEFAULT_INADDR; 352 + 353 + catu_buf->catu_table = catu_table; 354 + /* Get the table base address */ 355 + catu_buf->sladdr = catu_table->table_daddr; 356 + 357 + return 0; 358 + } 359 + 360 + const struct etr_buf_operations etr_catu_buf_ops = { 361 + .alloc = catu_alloc_etr_buf, 362 + .free = catu_free_etr_buf, 363 + .sync = catu_sync_etr_buf, 364 + .get_data = catu_get_data_etr_buf, 365 + }; 366 + 367 + coresight_simple_reg32(struct catu_drvdata, devid, CORESIGHT_DEVID); 368 + coresight_simple_reg32(struct catu_drvdata, control, CATU_CONTROL); 369 + coresight_simple_reg32(struct catu_drvdata, status, CATU_STATUS); 370 + coresight_simple_reg32(struct catu_drvdata, mode, CATU_MODE); 371 + coresight_simple_reg32(struct catu_drvdata, axictrl, CATU_AXICTRL); 372 + 
coresight_simple_reg32(struct catu_drvdata, irqen, CATU_IRQEN); 373 + coresight_simple_reg64(struct catu_drvdata, sladdr, 374 + CATU_SLADDRLO, CATU_SLADDRHI); 375 + coresight_simple_reg64(struct catu_drvdata, inaddr, 376 + CATU_INADDRLO, CATU_INADDRHI); 377 + 378 + static struct attribute *catu_mgmt_attrs[] = { 379 + &dev_attr_devid.attr, 380 + &dev_attr_control.attr, 381 + &dev_attr_status.attr, 382 + &dev_attr_mode.attr, 383 + &dev_attr_axictrl.attr, 384 + &dev_attr_irqen.attr, 385 + &dev_attr_sladdr.attr, 386 + &dev_attr_inaddr.attr, 387 + NULL, 388 + }; 389 + 390 + static const struct attribute_group catu_mgmt_group = { 391 + .attrs = catu_mgmt_attrs, 392 + .name = "mgmt", 393 + }; 394 + 395 + static const struct attribute_group *catu_groups[] = { 396 + &catu_mgmt_group, 397 + NULL, 398 + }; 399 + 400 + 401 + static inline int catu_wait_for_ready(struct catu_drvdata *drvdata) 402 + { 403 + return coresight_timeout(drvdata->base, 404 + CATU_STATUS, CATU_STATUS_READY, 1); 405 + } 406 + 407 + static int catu_enable_hw(struct catu_drvdata *drvdata, void *data) 408 + { 409 + u32 control, mode; 410 + struct etr_buf *etr_buf = data; 411 + 412 + if (catu_wait_for_ready(drvdata)) 413 + dev_warn(drvdata->dev, "Timeout while waiting for READY\n"); 414 + 415 + control = catu_read_control(drvdata); 416 + if (control & BIT(CATU_CONTROL_ENABLE)) { 417 + dev_warn(drvdata->dev, "CATU is already enabled\n"); 418 + return -EBUSY; 419 + } 420 + 421 + control |= BIT(CATU_CONTROL_ENABLE); 422 + 423 + if (etr_buf && etr_buf->mode == ETR_MODE_CATU) { 424 + struct catu_etr_buf *catu_buf = etr_buf->private; 425 + 426 + mode = CATU_MODE_TRANSLATE; 427 + catu_write_axictrl(drvdata, CATU_OS_AXICTRL); 428 + catu_write_sladdr(drvdata, catu_buf->sladdr); 429 + catu_write_inaddr(drvdata, CATU_DEFAULT_INADDR); 430 + } else { 431 + mode = CATU_MODE_PASS_THROUGH; 432 + catu_write_sladdr(drvdata, 0); 433 + catu_write_inaddr(drvdata, 0); 434 + } 435 + 436 + catu_write_irqen(drvdata, 0); 437 + 
catu_write_mode(drvdata, mode); 438 + catu_write_control(drvdata, control); 439 + dev_dbg(drvdata->dev, "Enabled in %s mode\n", 440 + (mode == CATU_MODE_PASS_THROUGH) ? 441 + "Pass through" : 442 + "Translate"); 443 + return 0; 444 + } 445 + 446 + static int catu_enable(struct coresight_device *csdev, void *data) 447 + { 448 + int rc; 449 + struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev); 450 + 451 + CS_UNLOCK(catu_drvdata->base); 452 + rc = catu_enable_hw(catu_drvdata, data); 453 + CS_LOCK(catu_drvdata->base); 454 + return rc; 455 + } 456 + 457 + static int catu_disable_hw(struct catu_drvdata *drvdata) 458 + { 459 + int rc = 0; 460 + 461 + catu_write_control(drvdata, 0); 462 + if (catu_wait_for_ready(drvdata)) { 463 + dev_info(drvdata->dev, "Timeout while waiting for READY\n"); 464 + rc = -EAGAIN; 465 + } 466 + 467 + dev_dbg(drvdata->dev, "Disabled\n"); 468 + return rc; 469 + } 470 + 471 + static int catu_disable(struct coresight_device *csdev, void *__unused) 472 + { 473 + int rc; 474 + struct catu_drvdata *catu_drvdata = csdev_to_catu_drvdata(csdev); 475 + 476 + CS_UNLOCK(catu_drvdata->base); 477 + rc = catu_disable_hw(catu_drvdata); 478 + CS_LOCK(catu_drvdata->base); 479 + return rc; 480 + } 481 + 482 + const struct coresight_ops_helper catu_helper_ops = { 483 + .enable = catu_enable, 484 + .disable = catu_disable, 485 + }; 486 + 487 + const struct coresight_ops catu_ops = { 488 + .helper_ops = &catu_helper_ops, 489 + }; 490 + 491 + static int catu_probe(struct amba_device *adev, const struct amba_id *id) 492 + { 493 + int ret = 0; 494 + u32 dma_mask; 495 + struct catu_drvdata *drvdata; 496 + struct coresight_desc catu_desc; 497 + struct coresight_platform_data *pdata = NULL; 498 + struct device *dev = &adev->dev; 499 + struct device_node *np = dev->of_node; 500 + void __iomem *base; 501 + 502 + if (np) { 503 + pdata = of_get_coresight_platform_data(dev, np); 504 + if (IS_ERR(pdata)) { 505 + ret = PTR_ERR(pdata); 506 + goto out; 507 + } 508 + 
dev->platform_data = pdata; 509 + } 510 + 511 + drvdata = devm_kzalloc(dev, sizeof(*drvdata), GFP_KERNEL); 512 + if (!drvdata) { 513 + ret = -ENOMEM; 514 + goto out; 515 + } 516 + 517 + drvdata->dev = dev; 518 + dev_set_drvdata(dev, drvdata); 519 + base = devm_ioremap_resource(dev, &adev->res); 520 + if (IS_ERR(base)) { 521 + ret = PTR_ERR(base); 522 + goto out; 523 + } 524 + 525 + /* Setup dma mask for the device */ 526 + dma_mask = readl_relaxed(base + CORESIGHT_DEVID) & 0x3f; 527 + switch (dma_mask) { 528 + case 32: 529 + case 40: 530 + case 44: 531 + case 48: 532 + case 52: 533 + case 56: 534 + case 64: 535 + break; 536 + default: 537 + /* Default to the 40bits as supported by TMC-ETR */ 538 + dma_mask = 40; 539 + } 540 + ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(dma_mask)); 541 + if (ret) 542 + goto out; 543 + 544 + drvdata->base = base; 545 + catu_desc.pdata = pdata; 546 + catu_desc.dev = dev; 547 + catu_desc.groups = catu_groups; 548 + catu_desc.type = CORESIGHT_DEV_TYPE_HELPER; 549 + catu_desc.subtype.helper_subtype = CORESIGHT_DEV_SUBTYPE_HELPER_CATU; 550 + catu_desc.ops = &catu_ops; 551 + drvdata->csdev = coresight_register(&catu_desc); 552 + if (IS_ERR(drvdata->csdev)) 553 + ret = PTR_ERR(drvdata->csdev); 554 + out: 555 + pm_runtime_put(&adev->dev); 556 + return ret; 557 + } 558 + 559 + static struct amba_id catu_ids[] = { 560 + { 561 + .id = 0x000bb9ee, 562 + .mask = 0x000fffff, 563 + }, 564 + {}, 565 + }; 566 + 567 + static struct amba_driver catu_driver = { 568 + .drv = { 569 + .name = "coresight-catu", 570 + .owner = THIS_MODULE, 571 + .suppress_bind_attrs = true, 572 + }, 573 + .probe = catu_probe, 574 + .id_table = catu_ids, 575 + }; 576 + 577 + builtin_amba_driver(catu_driver);
+119
drivers/hwtracing/coresight/coresight-catu.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (C) 2018 Arm Limited. All rights reserved. 4 + * 5 + * Author: Suzuki K Poulose <suzuki.poulose@arm.com> 6 + */ 7 + 8 + #ifndef _CORESIGHT_CATU_H 9 + #define _CORESIGHT_CATU_H 10 + 11 + #include "coresight-priv.h" 12 + 13 + /* Register offset from base */ 14 + #define CATU_CONTROL 0x000 15 + #define CATU_MODE 0x004 16 + #define CATU_AXICTRL 0x008 17 + #define CATU_IRQEN 0x00c 18 + #define CATU_SLADDRLO 0x020 19 + #define CATU_SLADDRHI 0x024 20 + #define CATU_INADDRLO 0x028 21 + #define CATU_INADDRHI 0x02c 22 + #define CATU_STATUS 0x100 23 + #define CATU_DEVARCH 0xfbc 24 + 25 + #define CATU_CONTROL_ENABLE 0 26 + 27 + #define CATU_MODE_PASS_THROUGH 0U 28 + #define CATU_MODE_TRANSLATE 1U 29 + 30 + #define CATU_AXICTRL_ARCACHE_SHIFT 4 31 + #define CATU_AXICTRL_ARCACHE_MASK 0xf 32 + #define CATU_AXICTRL_ARPROT_MASK 0x3 33 + #define CATU_AXICTRL_ARCACHE(arcache) \ 34 + (((arcache) & CATU_AXICTRL_ARCACHE_MASK) << CATU_AXICTRL_ARCACHE_SHIFT) 35 + 36 + #define CATU_AXICTRL_VAL(arcache, arprot) \ 37 + (CATU_AXICTRL_ARCACHE(arcache) | ((arprot) & CATU_AXICTRL_ARPROT_MASK)) 38 + 39 + #define AXI3_AxCACHE_WB_READ_ALLOC 0x7 40 + /* 41 + * AXI - ARPROT bits: 42 + * See AMBA AXI & ACE Protocol specification (ARM IHI 0022E) 43 + * sectionA4.7 Access Permissions. 44 + * 45 + * Bit 0: 0 - Unprivileged access, 1 - Privileged access 46 + * Bit 1: 0 - Secure access, 1 - Non-secure access. 47 + * Bit 2: 0 - Data access, 1 - instruction access. 48 + * 49 + * CATU AXICTRL:ARPROT[2] is res0 as we always access data. 
50 + */ 51 + #define CATU_OS_ARPROT 0x2 52 + 53 + #define CATU_OS_AXICTRL \ 54 + CATU_AXICTRL_VAL(AXI3_AxCACHE_WB_READ_ALLOC, CATU_OS_ARPROT) 55 + 56 + #define CATU_STATUS_READY 8 57 + #define CATU_STATUS_ADRERR 0 58 + #define CATU_STATUS_AXIERR 4 59 + 60 + #define CATU_IRQEN_ON 0x1 61 + #define CATU_IRQEN_OFF 0x0 62 + 63 + struct catu_drvdata { 64 + struct device *dev; 65 + void __iomem *base; 66 + struct coresight_device *csdev; 67 + int irq; 68 + }; 69 + 70 + #define CATU_REG32(name, offset) \ 71 + static inline u32 \ 72 + catu_read_##name(struct catu_drvdata *drvdata) \ 73 + { \ 74 + return coresight_read_reg_pair(drvdata->base, offset, -1); \ 75 + } \ 76 + static inline void \ 77 + catu_write_##name(struct catu_drvdata *drvdata, u32 val) \ 78 + { \ 79 + coresight_write_reg_pair(drvdata->base, val, offset, -1); \ 80 + } 81 + 82 + #define CATU_REG_PAIR(name, lo_off, hi_off) \ 83 + static inline u64 \ 84 + catu_read_##name(struct catu_drvdata *drvdata) \ 85 + { \ 86 + return coresight_read_reg_pair(drvdata->base, lo_off, hi_off); \ 87 + } \ 88 + static inline void \ 89 + catu_write_##name(struct catu_drvdata *drvdata, u64 val) \ 90 + { \ 91 + coresight_write_reg_pair(drvdata->base, val, lo_off, hi_off); \ 92 + } 93 + 94 + CATU_REG32(control, CATU_CONTROL); 95 + CATU_REG32(mode, CATU_MODE); 96 + CATU_REG32(irqen, CATU_IRQEN); 97 + CATU_REG32(axictrl, CATU_AXICTRL); 98 + CATU_REG_PAIR(sladdr, CATU_SLADDRLO, CATU_SLADDRHI) 99 + CATU_REG_PAIR(inaddr, CATU_INADDRLO, CATU_INADDRHI) 100 + 101 + static inline bool coresight_is_catu_device(struct coresight_device *csdev) 102 + { 103 + if (!IS_ENABLED(CONFIG_CORESIGHT_CATU)) 104 + return false; 105 + if (csdev->type != CORESIGHT_DEV_TYPE_HELPER) 106 + return false; 107 + if (csdev->subtype.helper_subtype != CORESIGHT_DEV_SUBTYPE_HELPER_CATU) 108 + return false; 109 + return true; 110 + } 111 + 112 + #ifdef CONFIG_CORESIGHT_CATU 113 + extern const struct etr_buf_operations etr_catu_buf_ops; 114 + #else 115 + /* Dummy 
declaration for the CATU ops */ 116 + static const struct etr_buf_operations etr_catu_buf_ops; 117 + #endif 118 + 119 + #endif
+4 -8
drivers/hwtracing/coresight/coresight-etb10.c
··· 195 195 bool lost = false; 196 196 int i; 197 197 u8 *buf_ptr; 198 - const u32 *barrier; 199 198 u32 read_data, depth; 200 199 u32 read_ptr, write_ptr; 201 200 u32 frame_off, frame_endoff; ··· 225 226 226 227 depth = drvdata->buffer_depth; 227 228 buf_ptr = drvdata->buf; 228 - barrier = barrier_pkt; 229 229 for (i = 0; i < depth; i++) { 230 230 read_data = readl_relaxed(drvdata->base + 231 231 ETB_RAM_READ_DATA_REG); 232 - if (lost && *barrier) { 233 - read_data = *barrier; 234 - barrier++; 235 - } 236 - 237 232 *(u32 *)buf_ptr = read_data; 238 233 buf_ptr += 4; 239 234 } 235 + 236 + if (lost) 237 + coresight_insert_barrier_packet(drvdata->buf); 240 238 241 239 if (frame_off) { 242 240 buf_ptr -= (frame_endoff * 4); ··· 443 447 buf_ptr = buf->data_pages[cur] + offset; 444 448 read_data = readl_relaxed(drvdata->base + 445 449 ETB_RAM_READ_DATA_REG); 446 - if (lost && *barrier) { 450 + if (lost && i < CORESIGHT_BARRIER_PKT_SIZE) { 447 451 read_data = *barrier; 448 452 barrier++; 449 453 }
-3
drivers/hwtracing/coresight/coresight-etm.h
··· 168 168 * @seq_curr_state: current value of the sequencer register. 169 169 * @ctxid_idx: index for the context ID registers. 170 170 * @ctxid_pid: value for the context ID to trigger on. 171 - * @ctxid_vpid: Virtual PID seen by users if PID namespace is enabled, otherwise 172 - * the same value of ctxid_pid. 173 171 * @ctxid_mask: mask applicable to all the context IDs. 174 172 * @sync_freq: Synchronisation frequency. 175 173 * @timestamp_event: Defines an event that requests the insertion ··· 200 202 u32 seq_curr_state; 201 203 u8 ctxid_idx; 202 204 u32 ctxid_pid[ETM_MAX_CTXID_CMP]; 203 - u32 ctxid_vpid[ETM_MAX_CTXID_CMP]; 204 205 u32 ctxid_mask; 205 206 u32 sync_freq; 206 207 u32 timestamp_event;
+37 -6
drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
··· 4 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 5 5 */ 6 6 7 + #include <linux/pid_namespace.h> 7 8 #include <linux/pm_runtime.h> 8 9 #include <linux/sysfs.h> 9 10 #include "coresight-etm.h" ··· 1026 1025 struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent); 1027 1026 struct etm_config *config = &drvdata->config; 1028 1027 1028 + /* 1029 + * Don't use contextID tracing if coming from a PID namespace. See 1030 + * comment in ctxid_pid_store(). 1031 + */ 1032 + if (task_active_pid_ns(current) != &init_pid_ns) 1033 + return -EINVAL; 1034 + 1029 1035 spin_lock(&drvdata->spinlock); 1030 - val = config->ctxid_vpid[config->ctxid_idx]; 1036 + val = config->ctxid_pid[config->ctxid_idx]; 1031 1037 spin_unlock(&drvdata->spinlock); 1032 1038 1033 1039 return sprintf(buf, "%#lx\n", val); ··· 1045 1037 const char *buf, size_t size) 1046 1038 { 1047 1039 int ret; 1048 - unsigned long vpid, pid; 1040 + unsigned long pid; 1049 1041 struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent); 1050 1042 struct etm_config *config = &drvdata->config; 1051 1043 1052 - ret = kstrtoul(buf, 16, &vpid); 1044 + /* 1045 + * When contextID tracing is enabled the tracers will insert the 1046 + * value found in the contextID register in the trace stream. But if 1047 + * a process is in a namespace the PID of that process as seen from the 1048 + * namespace won't be what the kernel sees, something that makes the 1049 + * feature confusing and can potentially leak kernel only information. 1050 + * As such refuse to use the feature if @current is not in the initial 1051 + * PID namespace. 
1052 + */ 1053 + if (task_active_pid_ns(current) != &init_pid_ns) 1054 + return -EINVAL; 1055 + 1056 + ret = kstrtoul(buf, 16, &pid); 1053 1057 if (ret) 1054 1058 return ret; 1055 1059 1056 - pid = coresight_vpid_to_pid(vpid); 1057 - 1058 1060 spin_lock(&drvdata->spinlock); 1059 1061 config->ctxid_pid[config->ctxid_idx] = pid; 1060 - config->ctxid_vpid[config->ctxid_idx] = vpid; 1061 1062 spin_unlock(&drvdata->spinlock); 1062 1063 1063 1064 return size; ··· 1080 1063 struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent); 1081 1064 struct etm_config *config = &drvdata->config; 1082 1065 1066 + /* 1067 + * Don't use contextID tracing if coming from a PID namespace. See 1068 + * comment in ctxid_pid_store(). 1069 + */ 1070 + if (task_active_pid_ns(current) != &init_pid_ns) 1071 + return -EINVAL; 1072 + 1083 1073 val = config->ctxid_mask; 1084 1074 return sprintf(buf, "%#lx\n", val); 1085 1075 } ··· 1099 1075 unsigned long val; 1100 1076 struct etm_drvdata *drvdata = dev_get_drvdata(dev->parent); 1101 1077 struct etm_config *config = &drvdata->config; 1078 + 1079 + /* 1080 + * Don't use contextID tracing if coming from a PID namespace. See 1081 + * comment in ctxid_pid_store(). 1082 + */ 1083 + if (task_active_pid_ns(current) != &init_pid_ns) 1084 + return -EINVAL; 1102 1085 1103 1086 ret = kstrtoul(buf, 16, &val); 1104 1087 if (ret)
+1 -3
drivers/hwtracing/coresight/coresight-etm3x.c
··· 230 230 231 231 config->seq_curr_state = 0x0; 232 232 config->ctxid_idx = 0x0; 233 - for (i = 0; i < ETM_MAX_CTXID_CMP; i++) { 233 + for (i = 0; i < ETM_MAX_CTXID_CMP; i++) 234 234 config->ctxid_pid[i] = 0x0; 235 - config->ctxid_vpid[i] = 0x0; 236 - } 237 235 238 236 config->ctxid_mask = 0x0; 239 237 /* Setting default to 1024 as per TRM recommendation */
+38 -9
drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
··· 4 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 5 5 */ 6 6 7 + #include <linux/pid_namespace.h> 7 8 #include <linux/pm_runtime.h> 8 9 #include <linux/sysfs.h> 9 10 #include "coresight-etm4x.h" ··· 251 250 } 252 251 253 252 config->ctxid_idx = 0x0; 254 - for (i = 0; i < drvdata->numcidc; i++) { 253 + for (i = 0; i < drvdata->numcidc; i++) 255 254 config->ctxid_pid[i] = 0x0; 256 - config->ctxid_vpid[i] = 0x0; 257 - } 258 255 259 256 config->ctxid_mask0 = 0x0; 260 257 config->ctxid_mask1 = 0x0; ··· 1636 1637 struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1637 1638 struct etmv4_config *config = &drvdata->config; 1638 1639 1640 + /* 1641 + * Don't use contextID tracing if coming from a PID namespace. See 1642 + * comment in ctxid_pid_store(). 1643 + */ 1644 + if (task_active_pid_ns(current) != &init_pid_ns) 1645 + return -EINVAL; 1646 + 1639 1647 spin_lock(&drvdata->spinlock); 1640 1648 idx = config->ctxid_idx; 1641 - val = (unsigned long)config->ctxid_vpid[idx]; 1649 + val = (unsigned long)config->ctxid_pid[idx]; 1642 1650 spin_unlock(&drvdata->spinlock); 1643 1651 return scnprintf(buf, PAGE_SIZE, "%#lx\n", val); 1644 1652 } ··· 1655 1649 const char *buf, size_t size) 1656 1650 { 1657 1651 u8 idx; 1658 - unsigned long vpid, pid; 1652 + unsigned long pid; 1659 1653 struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1660 1654 struct etmv4_config *config = &drvdata->config; 1655 + 1656 + /* 1657 + * When contextID tracing is enabled the tracers will insert the 1658 + * value found in the contextID register in the trace stream. But if 1659 + * a process is in a namespace the PID of that process as seen from the 1660 + * namespace won't be what the kernel sees, something that makes the 1661 + * feature confusing and can potentially leak kernel only information. 1662 + * As such refuse to use the feature if @current is not in the initial 1663 + * PID namespace. 
1664 + */ 1665 + if (task_active_pid_ns(current) != &init_pid_ns) 1666 + return -EINVAL; 1661 1667 1662 1668 /* 1663 1669 * only implemented when ctxid tracing is enabled, i.e. at least one ··· 1678 1660 */ 1679 1661 if (!drvdata->ctxid_size || !drvdata->numcidc) 1680 1662 return -EINVAL; 1681 - if (kstrtoul(buf, 16, &vpid)) 1663 + if (kstrtoul(buf, 16, &pid)) 1682 1664 return -EINVAL; 1683 - 1684 - pid = coresight_vpid_to_pid(vpid); 1685 1665 1686 1666 spin_lock(&drvdata->spinlock); 1687 1667 idx = config->ctxid_idx; 1688 1668 config->ctxid_pid[idx] = (u64)pid; 1689 - config->ctxid_vpid[idx] = (u64)vpid; 1690 1669 spin_unlock(&drvdata->spinlock); 1691 1670 return size; 1692 1671 } ··· 1696 1681 unsigned long val1, val2; 1697 1682 struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1698 1683 struct etmv4_config *config = &drvdata->config; 1684 + 1685 + /* 1686 + * Don't use contextID tracing if coming from a PID namespace. See 1687 + * comment in ctxid_pid_store(). 1688 + */ 1689 + if (task_active_pid_ns(current) != &init_pid_ns) 1690 + return -EINVAL; 1699 1691 1700 1692 spin_lock(&drvdata->spinlock); 1701 1693 val1 = config->ctxid_mask0; ··· 1719 1697 unsigned long val1, val2, mask; 1720 1698 struct etmv4_drvdata *drvdata = dev_get_drvdata(dev->parent); 1721 1699 struct etmv4_config *config = &drvdata->config; 1700 + 1701 + /* 1702 + * Don't use contextID tracing if coming from a PID namespace. See 1703 + * comment in ctxid_pid_store(). 1704 + */ 1705 + if (task_active_pid_ns(current) != &init_pid_ns) 1706 + return -EINVAL; 1722 1707 1723 1708 /* 1724 1709 * only implemented when ctxid tracing is enabled, i.e. at least one
+14 -17
drivers/hwtracing/coresight/coresight-etm4x.c
··· 1027 1027 } 1028 1028 1029 1029 pm_runtime_put(&adev->dev); 1030 - dev_info(dev, "%s initialized\n", (char *)id->data); 1030 + dev_info(dev, "CPU%d: ETM v%d.%d initialized\n", 1031 + drvdata->cpu, drvdata->arch >> 4, drvdata->arch & 0xf); 1031 1032 1032 1033 if (boot_enable) { 1033 1034 coresight_enable(drvdata->csdev); ··· 1046 1045 return ret; 1047 1046 } 1048 1047 1048 + #define ETM4x_AMBA_ID(pid) \ 1049 + { \ 1050 + .id = pid, \ 1051 + .mask = 0x000fffff, \ 1052 + } 1053 + 1049 1054 static const struct amba_id etm4_ids[] = { 1050 - { /* ETM 4.0 - Cortex-A53 */ 1051 - .id = 0x000bb95d, 1052 - .mask = 0x000fffff, 1053 - .data = "ETM 4.0", 1054 - }, 1055 - { /* ETM 4.0 - Cortex-A57 */ 1056 - .id = 0x000bb95e, 1057 - .mask = 0x000fffff, 1058 - .data = "ETM 4.0", 1059 - }, 1060 - { /* ETM 4.0 - A72, Maia, HiSilicon */ 1061 - .id = 0x000bb95a, 1062 - .mask = 0x000fffff, 1063 - .data = "ETM 4.0", 1064 - }, 1065 - { 0, 0}, 1055 + ETM4x_AMBA_ID(0x000bb95d), /* Cortex-A53 */ 1056 + ETM4x_AMBA_ID(0x000bb95e), /* Cortex-A57 */ 1057 + ETM4x_AMBA_ID(0x000bb95a), /* Cortex-A72 */ 1058 + ETM4x_AMBA_ID(0x000bb959), /* Cortex-A73 */ 1059 + ETM4x_AMBA_ID(0x000bb9da), /* Cortex-A35 */ 1060 + {}, 1066 1061 }; 1067 1062 1068 1063 static struct amba_driver etm4x_driver = {
-3
drivers/hwtracing/coresight/coresight-etm4x.h
··· 230 230 * @addr_type: Current status of the comparator register. 231 231 * @ctxid_idx: Context ID index selector. 232 232 * @ctxid_pid: Value of the context ID comparator. 233 - * @ctxid_vpid: Virtual PID seen by users if PID namespace is enabled, otherwise 234 - * the same value of ctxid_pid. 235 233 * @ctxid_mask0:Context ID comparator mask for comparator 0-3. 236 234 * @ctxid_mask1:Context ID comparator mask for comparator 4-7. 237 235 * @vmid_idx: VM ID index selector. ··· 272 274 u8 addr_type[ETM_MAX_SINGLE_ADDR_CMP]; 273 275 u8 ctxid_idx; 274 276 u64 ctxid_pid[ETMv4_MAX_CTXID_CMP]; 275 - u64 ctxid_vpid[ETMv4_MAX_CTXID_CMP]; 276 277 u32 ctxid_mask0; 277 278 u32 ctxid_mask1; 278 279 u8 vmid_idx;
+9 -1
drivers/hwtracing/coresight/coresight-priv.h
··· 57 57 #define coresight_simple_reg64(type, name, lo_off, hi_off) \ 58 58 __coresight_simple_func(type, NULL, name, lo_off, hi_off) 59 59 60 - extern const u32 barrier_pkt[5]; 60 + extern const u32 barrier_pkt[4]; 61 + #define CORESIGHT_BARRIER_PKT_SIZE (sizeof(barrier_pkt)) 61 62 62 63 enum etm_addr_type { 63 64 ETM_ADDR_TYPE_NONE, ··· 91 90 bool snapshot; 92 91 void **data_pages; 93 92 }; 93 + 94 + static inline void coresight_insert_barrier_packet(void *buf) 95 + { 96 + if (buf) 97 + memcpy(buf, barrier_pkt, CORESIGHT_BARRIER_PKT_SIZE); 98 + } 99 + 94 100 95 101 static inline void CS_LOCK(void __iomem *addr) 96 102 {
+26 -19
drivers/hwtracing/coresight/coresight-tmc-etf.c
···
 static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata)
 {
-	bool lost = false;
 	char *bufp;
-	const u32 *barrier;
-	u32 read_data, status;
+	u32 read_data, lost;
 	int i;

-	/*
-	 * Get a hold of the status register and see if a wrap around
-	 * has occurred.
-	 */
-	status = readl_relaxed(drvdata->base + TMC_STS);
-	if (status & TMC_STS_FULL)
-		lost = true;
-
+	/* Check if the buffer wrapped around. */
+	lost = readl_relaxed(drvdata->base + TMC_STS) & TMC_STS_FULL;
 	bufp = drvdata->buf;
 	drvdata->len = 0;
-	barrier = barrier_pkt;
 	while (1) {
 		for (i = 0; i < drvdata->memwidth; i++) {
 			read_data = readl_relaxed(drvdata->base + TMC_RRD);
 			if (read_data == 0xFFFFFFFF)
-				return;
-
-			if (lost && *barrier) {
-				read_data = *barrier;
-				barrier++;
-			}
-
+				goto done;
 			memcpy(bufp, &read_data, 4);
 			bufp += 4;
 			drvdata->len += 4;
 		}
 	}
+done:
+	if (lost)
+		coresight_insert_barrier_packet(drvdata->buf);
+	return;
 }

 static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata)
···
 	tmc_disable_hw(drvdata);

 	CS_LOCK(drvdata->base);
+}
+
+/*
+ * Return the available trace data in the buffer from @pos, with
+ * a maximum limit of @len, updating the @bufpp on where to
+ * find it.
+ */
+ssize_t tmc_etb_get_sysfs_trace(struct tmc_drvdata *drvdata,
+				loff_t pos, size_t len, char **bufpp)
+{
+	ssize_t actual = len;
+
+	/* Adjust the len to available size @pos */
+	if (pos + actual > drvdata->len)
+		actual = drvdata->len - pos;
+	if (actual > 0)
+		*bufpp = drvdata->buf + pos;
+	return actual;
+}

 static int tmc_enable_etf_sink_sysfs(struct coresight_device *csdev)
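The new tmc_etb_get_sysfs_trace() clamps each sysfs read so it never runs past the captured data. A self-contained model of just that arithmetic, with hypothetical `long` parameters in place of the kernel's `loff_t`/`size_t` types:

```c
/* Model of the clamping in tmc_etb_get_sysfs_trace(): a read of @len
 * bytes at offset @pos is limited to the @buf_len bytes captured.
 * A non-positive return means there is nothing left to read. */
static long clamp_trace_read(long buf_len, long pos, long len)
{
	long actual = len;

	if (pos + actual > buf_len)
		actual = buf_len - pos;
	return actual;
}
```

Callers (the sysfs read path) stop as soon as the returned length is no longer positive, which is how repeated reads walk the buffer to its end.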
+977 -97
drivers/hwtracing/coresight/coresight-tmc-etr.c
···
 #include <linux/coresight.h>
 #include <linux/dma-mapping.h>
+#include <linux/iommu.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include "coresight-catu.h"
 #include "coresight-priv.h"
 #include "coresight-tmc.h"
+
+struct etr_flat_buf {
+	struct device	*dev;
+	dma_addr_t	daddr;
+	void		*vaddr;
+	size_t		size;
+};
+
+/*
+ * The TMC ETR SG has a page size of 4K. The SG table contains pointers
+ * to 4KB buffers. However, the OS may use a PAGE_SIZE different from
+ * 4K (i.e, 16KB or 64KB). This implies that a single OS page could
+ * contain more than one SG buffer and tables.
+ *
+ * A table entry has the following format:
+ *
+ * ---Bit31------------Bit4-------Bit1-----Bit0--
+ * |     Address[39:12]    | SBZ |  Entry Type  |
+ * ----------------------------------------------
+ *
+ * Address: Bits [39:12] of a physical page address. Bits [11:0] are
+ *	    always zero.
+ *
+ * Entry type:
+ *	b00 - Reserved.
+ *	b01 - Last entry in the tables, points to 4K page buffer.
+ *	b10 - Normal entry, points to 4K page buffer.
+ *	b11 - Link. The address points to the base of next table.
+ */
+
+typedef u32 sgte_t;
+
+#define ETR_SG_PAGE_SHIFT		12
+#define ETR_SG_PAGE_SIZE		(1UL << ETR_SG_PAGE_SHIFT)
+#define ETR_SG_PAGES_PER_SYSPAGE	(PAGE_SIZE / ETR_SG_PAGE_SIZE)
+#define ETR_SG_PTRS_PER_PAGE		(ETR_SG_PAGE_SIZE / sizeof(sgte_t))
+#define ETR_SG_PTRS_PER_SYSPAGE		(PAGE_SIZE / sizeof(sgte_t))
+
+#define ETR_SG_ET_MASK			0x3
+#define ETR_SG_ET_LAST			0x1
+#define ETR_SG_ET_NORMAL		0x2
+#define ETR_SG_ET_LINK			0x3
+
+#define ETR_SG_ADDR_SHIFT		4
+
+#define ETR_SG_ENTRY(addr, type) \
+	(sgte_t)((((addr) >> ETR_SG_PAGE_SHIFT) << ETR_SG_ADDR_SHIFT) | \
+		 (type & ETR_SG_ET_MASK))
+
+#define ETR_SG_ADDR(entry) \
+	(((dma_addr_t)(entry) >> ETR_SG_ADDR_SHIFT) << ETR_SG_PAGE_SHIFT)
+#define ETR_SG_ET(entry)		((entry) & ETR_SG_ET_MASK)
+
+/*
+ * struct etr_sg_table : ETR SG Table
+ * @sg_table:	Generic SG Table holding the data/table pages.
+ * @hwaddr:	hwaddress used by the TMC, which is the base
+ *		address of the table.
+ */
+struct etr_sg_table {
+	struct tmc_sg_table	*sg_table;
+	dma_addr_t		hwaddr;
+};
+
+/*
+ * tmc_etr_sg_table_entries: Total number of table entries required to map
+ * @nr_pages system pages.
+ *
+ * We need to map @nr_pages * ETR_SG_PAGES_PER_SYSPAGE data pages.
+ * Each TMC page can map (ETR_SG_PTRS_PER_PAGE - 1) buffer pointers,
+ * with the last entry pointing to another page of table entries.
+ * If we spill over to a new page for mapping 1 entry, we could as
+ * well replace the link entry of the previous page with the last entry.
+ */
+static inline unsigned long __attribute_const__
+tmc_etr_sg_table_entries(int nr_pages)
+{
+	unsigned long nr_sgpages = nr_pages * ETR_SG_PAGES_PER_SYSPAGE;
+	unsigned long nr_sglinks = nr_sgpages / (ETR_SG_PTRS_PER_PAGE - 1);
+	/*
+	 * If we spill over to a new page for 1 entry, we could as well
+	 * make it the LAST entry in the previous page, skipping the Link
+	 * address.
+	 */
+	if (nr_sglinks && (nr_sgpages % (ETR_SG_PTRS_PER_PAGE - 1) < 2))
+		nr_sglinks--;
+	return nr_sgpages + nr_sglinks;
+}
+
+/*
+ * tmc_pages_get_offset: Go through all the pages in the tmc_pages
+ * and map the device address @addr to an offset within the virtual
+ * contiguous buffer.
+ */
+static long
+tmc_pages_get_offset(struct tmc_pages *tmc_pages, dma_addr_t addr)
+{
+	int i;
+	dma_addr_t page_start;
+
+	for (i = 0; i < tmc_pages->nr_pages; i++) {
+		page_start = tmc_pages->daddrs[i];
+		if (addr >= page_start && addr < (page_start + PAGE_SIZE))
+			return i * PAGE_SIZE + (addr - page_start);
+	}
+
+	return -EINVAL;
+}
+
+/*
+ * tmc_pages_free : Unmap and free the pages used by tmc_pages.
+ * If the pages were not allocated in tmc_pages_alloc(), we would
+ * simply drop the refcount.
+ */
+static void tmc_pages_free(struct tmc_pages *tmc_pages,
+			   struct device *dev, enum dma_data_direction dir)
+{
+	int i;
+
+	for (i = 0; i < tmc_pages->nr_pages; i++) {
+		if (tmc_pages->daddrs && tmc_pages->daddrs[i])
+			dma_unmap_page(dev, tmc_pages->daddrs[i],
+				       PAGE_SIZE, dir);
+		if (tmc_pages->pages && tmc_pages->pages[i])
+			__free_page(tmc_pages->pages[i]);
+	}
+
+	kfree(tmc_pages->pages);
+	kfree(tmc_pages->daddrs);
+	tmc_pages->pages = NULL;
+	tmc_pages->daddrs = NULL;
+	tmc_pages->nr_pages = 0;
+}
+
+/*
+ * tmc_pages_alloc : Allocate and map pages for a given @tmc_pages.
+ * If @pages is not NULL, the list of page virtual addresses are
+ * used as the data pages. The pages are then dma_map'ed for @dev
+ * with dma_direction @dir.
+ *
+ * Returns 0 upon success, else the error number.
+ */
+static int tmc_pages_alloc(struct tmc_pages *tmc_pages,
+			   struct device *dev, int node,
+			   enum dma_data_direction dir, void **pages)
+{
+	int i, nr_pages;
+	dma_addr_t paddr;
+	struct page *page;
+
+	nr_pages = tmc_pages->nr_pages;
+	tmc_pages->daddrs = kcalloc(nr_pages, sizeof(*tmc_pages->daddrs),
+				    GFP_KERNEL);
+	if (!tmc_pages->daddrs)
+		return -ENOMEM;
+	tmc_pages->pages = kcalloc(nr_pages, sizeof(*tmc_pages->pages),
+				   GFP_KERNEL);
+	if (!tmc_pages->pages) {
+		kfree(tmc_pages->daddrs);
+		tmc_pages->daddrs = NULL;
+		return -ENOMEM;
+	}
+
+	for (i = 0; i < nr_pages; i++) {
+		if (pages && pages[i]) {
+			page = virt_to_page(pages[i]);
+			/* Hold a refcount on the page */
+			get_page(page);
+		} else {
+			page = alloc_pages_node(node,
+						GFP_KERNEL | __GFP_ZERO, 0);
+		}
+		paddr = dma_map_page(dev, page, 0, PAGE_SIZE, dir);
+		if (dma_mapping_error(dev, paddr))
+			goto err;
+		tmc_pages->daddrs[i] = paddr;
+		tmc_pages->pages[i] = page;
+	}
+	return 0;
+err:
+	tmc_pages_free(tmc_pages, dev, dir);
+	return -ENOMEM;
+}
+
+static inline long
+tmc_sg_get_data_page_offset(struct tmc_sg_table *sg_table, dma_addr_t addr)
+{
+	return tmc_pages_get_offset(&sg_table->data_pages, addr);
+}
+
+static inline void tmc_free_table_pages(struct tmc_sg_table *sg_table)
+{
+	if (sg_table->table_vaddr)
+		vunmap(sg_table->table_vaddr);
+	tmc_pages_free(&sg_table->table_pages, sg_table->dev, DMA_TO_DEVICE);
+}
+
+static void tmc_free_data_pages(struct tmc_sg_table *sg_table)
+{
+	if (sg_table->data_vaddr)
+		vunmap(sg_table->data_vaddr);
+	tmc_pages_free(&sg_table->data_pages, sg_table->dev, DMA_FROM_DEVICE);
+}
+
+void tmc_free_sg_table(struct tmc_sg_table *sg_table)
+{
+	tmc_free_table_pages(sg_table);
+	tmc_free_data_pages(sg_table);
+}
+
+/*
+ * Alloc pages for the table. Since this will be used by the device,
+ * allocate the pages closer to the device (i.e, dev_to_node(dev)
+ * rather than the CPU node).
+ */
+static int tmc_alloc_table_pages(struct tmc_sg_table *sg_table)
+{
+	int rc;
+	struct tmc_pages *table_pages = &sg_table->table_pages;
+
+	rc = tmc_pages_alloc(table_pages, sg_table->dev,
+			     dev_to_node(sg_table->dev),
+			     DMA_TO_DEVICE, NULL);
+	if (rc)
+		return rc;
+	sg_table->table_vaddr = vmap(table_pages->pages,
+				     table_pages->nr_pages,
+				     VM_MAP,
+				     PAGE_KERNEL);
+	if (!sg_table->table_vaddr)
+		rc = -ENOMEM;
+	else
+		sg_table->table_daddr = table_pages->daddrs[0];
+	return rc;
+}
+
+static int tmc_alloc_data_pages(struct tmc_sg_table *sg_table, void **pages)
+{
+	int rc;
+
+	/* Allocate data pages on the node requested by the caller */
+	rc = tmc_pages_alloc(&sg_table->data_pages,
+			     sg_table->dev, sg_table->node,
+			     DMA_FROM_DEVICE, pages);
+	if (!rc) {
+		sg_table->data_vaddr = vmap(sg_table->data_pages.pages,
+					    sg_table->data_pages.nr_pages,
+					    VM_MAP,
+					    PAGE_KERNEL);
+		if (!sg_table->data_vaddr)
+			rc = -ENOMEM;
+	}
+	return rc;
+}
+
+/*
+ * tmc_alloc_sg_table: Allocate and setup dma pages for the TMC SG table
+ * and data buffers. TMC writes to the data buffers and reads from the SG
+ * Table pages.
+ *
+ * @dev		- Device to which page should be DMA mapped.
+ * @node	- Numa node for mem allocations
+ * @nr_tpages	- Number of pages for the table entries.
+ * @nr_dpages	- Number of pages for Data buffer.
+ * @pages	- Optional list of virtual address of pages.
+ */
+struct tmc_sg_table *tmc_alloc_sg_table(struct device *dev,
+					int node,
+					int nr_tpages,
+					int nr_dpages,
+					void **pages)
+{
+	long rc;
+	struct tmc_sg_table *sg_table;
+
+	sg_table = kzalloc(sizeof(*sg_table), GFP_KERNEL);
+	if (!sg_table)
+		return ERR_PTR(-ENOMEM);
+	sg_table->data_pages.nr_pages = nr_dpages;
+	sg_table->table_pages.nr_pages = nr_tpages;
+	sg_table->node = node;
+	sg_table->dev = dev;
+
+	rc = tmc_alloc_data_pages(sg_table, pages);
+	if (!rc)
+		rc = tmc_alloc_table_pages(sg_table);
+	if (rc) {
+		tmc_free_sg_table(sg_table);
+		kfree(sg_table);
+		return ERR_PTR(rc);
+	}
+
+	return sg_table;
+}
+
+/*
+ * tmc_sg_table_sync_data_range: Sync the data buffer written
+ * by the device from @offset upto a @size bytes.
+ */
+void tmc_sg_table_sync_data_range(struct tmc_sg_table *table,
+				  u64 offset, u64 size)
+{
+	int i, index, start;
+	int npages = DIV_ROUND_UP(size, PAGE_SIZE);
+	struct device *dev = table->dev;
+	struct tmc_pages *data = &table->data_pages;
+
+	start = offset >> PAGE_SHIFT;
+	for (i = start; i < (start + npages); i++) {
+		index = i % data->nr_pages;
+		dma_sync_single_for_cpu(dev, data->daddrs[index],
+					PAGE_SIZE, DMA_FROM_DEVICE);
+	}
+}
+
+/* tmc_sg_sync_table: Sync the page table */
+void tmc_sg_table_sync_table(struct tmc_sg_table *sg_table)
+{
+	int i;
+	struct device *dev = sg_table->dev;
+	struct tmc_pages *table_pages = &sg_table->table_pages;
+
+	for (i = 0; i < table_pages->nr_pages; i++)
+		dma_sync_single_for_device(dev, table_pages->daddrs[i],
+					   PAGE_SIZE, DMA_TO_DEVICE);
+}
+
+/*
+ * tmc_sg_table_get_data: Get the buffer pointer for data @offset
+ * in the SG buffer. The @bufpp is updated to point to the buffer.
+ * Returns :
+ *	the length of linear data available at @offset.
+ *	or
+ *	<= 0 if no data is available.
+ */
+ssize_t tmc_sg_table_get_data(struct tmc_sg_table *sg_table,
+			      u64 offset, size_t len, char **bufpp)
+{
+	size_t size;
+	int pg_idx = offset >> PAGE_SHIFT;
+	int pg_offset = offset & (PAGE_SIZE - 1);
+	struct tmc_pages *data_pages = &sg_table->data_pages;
+
+	size = tmc_sg_table_buf_size(sg_table);
+	if (offset >= size)
+		return -EINVAL;
+
+	/* Make sure we don't go beyond the end */
+	len = (len < (size - offset)) ? len : size - offset;
+	/* Respect the page boundaries */
+	len = (len < (PAGE_SIZE - pg_offset)) ? len : (PAGE_SIZE - pg_offset);
+	if (len > 0)
+		*bufpp = page_address(data_pages->pages[pg_idx]) + pg_offset;
+	return len;
+}
+
+#ifdef ETR_SG_DEBUG
+/* Map a dma address to virtual address */
+static unsigned long
+tmc_sg_daddr_to_vaddr(struct tmc_sg_table *sg_table,
+		      dma_addr_t addr, bool table)
+{
+	long offset;
+	unsigned long base;
+	struct tmc_pages *tmc_pages;
+
+	if (table) {
+		tmc_pages = &sg_table->table_pages;
+		base = (unsigned long)sg_table->table_vaddr;
+	} else {
+		tmc_pages = &sg_table->data_pages;
+		base = (unsigned long)sg_table->data_vaddr;
+	}
+
+	offset = tmc_pages_get_offset(tmc_pages, addr);
+	if (offset < 0)
+		return 0;
+	return base + offset;
+}
+
+/* Dump the given sg_table */
+static void tmc_etr_sg_table_dump(struct etr_sg_table *etr_table)
+{
+	sgte_t *ptr;
+	int i = 0;
+	dma_addr_t addr;
+	struct tmc_sg_table *sg_table = etr_table->sg_table;
+
+	ptr = (sgte_t *)tmc_sg_daddr_to_vaddr(sg_table,
+					      etr_table->hwaddr, true);
+	while (ptr) {
+		addr = ETR_SG_ADDR(*ptr);
+		switch (ETR_SG_ET(*ptr)) {
+		case ETR_SG_ET_NORMAL:
+			dev_dbg(sg_table->dev,
+				"%05d: %p\t:[N] 0x%llx\n", i, ptr, addr);
+			ptr++;
+			break;
+		case ETR_SG_ET_LINK:
+			dev_dbg(sg_table->dev,
+				"%05d: *** %p\t:{L} 0x%llx ***\n",
+				i, ptr, addr);
+			ptr = (sgte_t *)tmc_sg_daddr_to_vaddr(sg_table,
+							      addr, true);
+			break;
+		case ETR_SG_ET_LAST:
+			dev_dbg(sg_table->dev,
+				"%05d: ### %p\t:[L] 0x%llx ###\n",
+				i, ptr, addr);
+			return;
+		default:
+			dev_dbg(sg_table->dev,
+				"%05d: xxx %p\t:[INVALID] 0x%llx xxx\n",
+				i, ptr, addr);
+			return;
+		}
+		i++;
+	}
+	dev_dbg(sg_table->dev, "******* End of Table *****\n");
+}
+#else
+static inline void tmc_etr_sg_table_dump(struct etr_sg_table *etr_table) {}
+#endif
+
+/*
+ * Populate the SG Table page table entries from table/data
+ * pages allocated. Each Data page has ETR_SG_PAGES_PER_SYSPAGE SG pages.
+ * So does a Table page. So we keep track of indices of the tables
+ * in each system page and move the pointers accordingly.
+ */
+#define INC_IDX_ROUND(idx, size) ((idx) = ((idx) + 1) % (size))
+static void tmc_etr_sg_table_populate(struct etr_sg_table *etr_table)
+{
+	dma_addr_t paddr;
+	int i, type, nr_entries;
+	int tpidx = 0; /* index to the current system table_page */
+	int sgtidx = 0;	/* index to the sg_table within the current syspage */
+	int sgtentry = 0; /* the entry within the sg_table */
+	int dpidx = 0; /* index to the current system data_page */
+	int spidx = 0; /* index to the SG page within the current data page */
+	sgte_t *ptr; /* pointer to the table entry to fill */
+	struct tmc_sg_table *sg_table = etr_table->sg_table;
+	dma_addr_t *table_daddrs = sg_table->table_pages.daddrs;
+	dma_addr_t *data_daddrs = sg_table->data_pages.daddrs;
+
+	nr_entries = tmc_etr_sg_table_entries(sg_table->data_pages.nr_pages);
+	/*
+	 * Use the contiguous virtual address of the table to update entries.
+	 */
+	ptr = sg_table->table_vaddr;
+	/*
+	 * Fill all the entries, except the last entry to avoid special
+	 * checks within the loop.
+	 */
+	for (i = 0; i < nr_entries - 1; i++) {
+		if (sgtentry == ETR_SG_PTRS_PER_PAGE - 1) {
+			/*
+			 * Last entry in a sg_table page is a link address to
+			 * the next table page. If this sg_table is the last
+			 * one in the system page, it links to the first
+			 * sg_table in the next system page. Otherwise, it
+			 * links to the next sg_table page within the system
+			 * page.
+			 */
+			if (sgtidx == ETR_SG_PAGES_PER_SYSPAGE - 1) {
+				paddr = table_daddrs[tpidx + 1];
+			} else {
+				paddr = table_daddrs[tpidx] +
+					(ETR_SG_PAGE_SIZE * (sgtidx + 1));
+			}
+			type = ETR_SG_ET_LINK;
+		} else {
+			/*
+			 * Update the indices to the data_pages to point to the
+			 * next sg_page in the data buffer.
+			 */
+			type = ETR_SG_ET_NORMAL;
+			paddr = data_daddrs[dpidx] + spidx * ETR_SG_PAGE_SIZE;
+			if (!INC_IDX_ROUND(spidx, ETR_SG_PAGES_PER_SYSPAGE))
+				dpidx++;
+		}
+		*ptr++ = ETR_SG_ENTRY(paddr, type);
+		/*
+		 * Move to the next table pointer, moving the table page index
+		 * if necessary
+		 */
+		if (!INC_IDX_ROUND(sgtentry, ETR_SG_PTRS_PER_PAGE)) {
+			if (!INC_IDX_ROUND(sgtidx, ETR_SG_PAGES_PER_SYSPAGE))
+				tpidx++;
+		}
+	}
+
+	/* Set up the last entry, which is always a data pointer */
+	paddr = data_daddrs[dpidx] + spidx * ETR_SG_PAGE_SIZE;
+	*ptr++ = ETR_SG_ENTRY(paddr, ETR_SG_ET_LAST);
+}
+
+/*
+ * tmc_init_etr_sg_table: Allocate a TMC ETR SG table, data buffer of @size and
+ * populate the table.
+ *
+ * @dev		- Device pointer for the TMC
+ * @node	- NUMA node where the memory should be allocated
+ * @size	- Total size of the data buffer
+ * @pages	- Optional list of page virtual address
+ */
+static struct etr_sg_table *
+tmc_init_etr_sg_table(struct device *dev, int node,
+		      unsigned long size, void **pages)
+{
+	int nr_entries, nr_tpages;
+	int nr_dpages = size >> PAGE_SHIFT;
+	struct tmc_sg_table *sg_table;
+	struct etr_sg_table *etr_table;
+
+	etr_table = kzalloc(sizeof(*etr_table), GFP_KERNEL);
+	if (!etr_table)
+		return ERR_PTR(-ENOMEM);
+	nr_entries = tmc_etr_sg_table_entries(nr_dpages);
+	nr_tpages = DIV_ROUND_UP(nr_entries, ETR_SG_PTRS_PER_SYSPAGE);
+
+	sg_table = tmc_alloc_sg_table(dev, node, nr_tpages, nr_dpages, pages);
+	if (IS_ERR(sg_table)) {
+		kfree(etr_table);
+		return ERR_PTR(PTR_ERR(sg_table));
+	}
+
+	etr_table->sg_table = sg_table;
+	/* TMC should use table base address for DBA */
+	etr_table->hwaddr = sg_table->table_daddr;
+	tmc_etr_sg_table_populate(etr_table);
+	/* Sync the table pages for the HW */
+	tmc_sg_table_sync_table(sg_table);
+	tmc_etr_sg_table_dump(etr_table);
+
+	return etr_table;
+}
+
+/*
+ * tmc_etr_alloc_flat_buf: Allocate a contiguous DMA buffer.
+ */
+static int tmc_etr_alloc_flat_buf(struct tmc_drvdata *drvdata,
+				  struct etr_buf *etr_buf, int node,
+				  void **pages)
+{
+	struct etr_flat_buf *flat_buf;
+
+	/* We cannot reuse existing pages for flat buf */
+	if (pages)
+		return -EINVAL;
+
+	flat_buf = kzalloc(sizeof(*flat_buf), GFP_KERNEL);
+	if (!flat_buf)
+		return -ENOMEM;
+
+	flat_buf->vaddr = dma_alloc_coherent(drvdata->dev, etr_buf->size,
+					     &flat_buf->daddr, GFP_KERNEL);
+	if (!flat_buf->vaddr) {
+		kfree(flat_buf);
+		return -ENOMEM;
+	}
+
+	flat_buf->size = etr_buf->size;
+	flat_buf->dev = drvdata->dev;
+	etr_buf->hwaddr = flat_buf->daddr;
+	etr_buf->mode = ETR_MODE_FLAT;
+	etr_buf->private = flat_buf;
+	return 0;
+}
+
+static void tmc_etr_free_flat_buf(struct etr_buf *etr_buf)
+{
+	struct etr_flat_buf *flat_buf = etr_buf->private;
+
+	if (flat_buf && flat_buf->daddr)
+		dma_free_coherent(flat_buf->dev, flat_buf->size,
+				  flat_buf->vaddr, flat_buf->daddr);
+	kfree(flat_buf);
+}
+
+static void tmc_etr_sync_flat_buf(struct etr_buf *etr_buf, u64 rrp, u64 rwp)
+{
+	/*
+	 * Adjust the buffer to point to the beginning of the trace data
+	 * and update the available trace data.
+	 */
+	etr_buf->offset = rrp - etr_buf->hwaddr;
+	if (etr_buf->full)
+		etr_buf->len = etr_buf->size;
+	else
+		etr_buf->len = rwp - rrp;
+}
+
+static ssize_t tmc_etr_get_data_flat_buf(struct etr_buf *etr_buf,
+					 u64 offset, size_t len, char **bufpp)
+{
+	struct etr_flat_buf *flat_buf = etr_buf->private;
+
+	*bufpp = (char *)flat_buf->vaddr + offset;
+	/*
+	 * tmc_etr_buf_get_data already adjusts the length to handle
+	 * buffer wrapping around.
+	 */
+	return len;
+}
+
+static const struct etr_buf_operations etr_flat_buf_ops = {
+	.alloc = tmc_etr_alloc_flat_buf,
+	.free = tmc_etr_free_flat_buf,
+	.sync = tmc_etr_sync_flat_buf,
+	.get_data = tmc_etr_get_data_flat_buf,
+};
+
+/*
+ * tmc_etr_alloc_sg_buf: Allocate an SG buf @etr_buf. Setup the parameters
+ * appropriately.
+ */
+static int tmc_etr_alloc_sg_buf(struct tmc_drvdata *drvdata,
+				struct etr_buf *etr_buf, int node,
+				void **pages)
+{
+	struct etr_sg_table *etr_table;
+
+	etr_table = tmc_init_etr_sg_table(drvdata->dev, node,
+					  etr_buf->size, pages);
+	if (IS_ERR(etr_table))
+		return -ENOMEM;
+	etr_buf->hwaddr = etr_table->hwaddr;
+	etr_buf->mode = ETR_MODE_ETR_SG;
+	etr_buf->private = etr_table;
+	return 0;
+}
+
+static void tmc_etr_free_sg_buf(struct etr_buf *etr_buf)
+{
+	struct etr_sg_table *etr_table = etr_buf->private;
+
+	if (etr_table) {
+		tmc_free_sg_table(etr_table->sg_table);
+		kfree(etr_table);
+	}
+}
+
+static ssize_t tmc_etr_get_data_sg_buf(struct etr_buf *etr_buf, u64 offset,
+				       size_t len, char **bufpp)
+{
+	struct etr_sg_table *etr_table = etr_buf->private;
+
+	return tmc_sg_table_get_data(etr_table->sg_table, offset, len, bufpp);
+}
+
+static void tmc_etr_sync_sg_buf(struct etr_buf *etr_buf, u64 rrp, u64 rwp)
+{
+	long r_offset, w_offset;
+	struct etr_sg_table *etr_table = etr_buf->private;
+	struct tmc_sg_table *table = etr_table->sg_table;
+
+	/* Convert hw address to offset in the buffer */
+	r_offset = tmc_sg_get_data_page_offset(table, rrp);
+	if (r_offset < 0) {
+		dev_warn(table->dev,
+			 "Unable to map RRP %llx to offset\n", rrp);
+		etr_buf->len = 0;
+		return;
+	}
+
+	w_offset = tmc_sg_get_data_page_offset(table, rwp);
+	if (w_offset < 0) {
+		dev_warn(table->dev,
+			 "Unable to map RWP %llx to offset\n", rwp);
+		etr_buf->len = 0;
+		return;
+	}
+
+	etr_buf->offset = r_offset;
+	if (etr_buf->full)
+		etr_buf->len = etr_buf->size;
+	else
+		etr_buf->len = ((w_offset < r_offset) ? etr_buf->size : 0) +
+				w_offset - r_offset;
+	tmc_sg_table_sync_data_range(table, r_offset, etr_buf->len);
+}
+
+static const struct etr_buf_operations etr_sg_buf_ops = {
+	.alloc = tmc_etr_alloc_sg_buf,
+	.free = tmc_etr_free_sg_buf,
+	.sync = tmc_etr_sync_sg_buf,
+	.get_data = tmc_etr_get_data_sg_buf,
+};
+
+/*
+ * TMC ETR could be connected to a CATU device, which can provide address
+ * translation service. This is represented by the Output port of the TMC
+ * (ETR) connected to the input port of the CATU.
+ *
+ * Returns	: coresight_device ptr for the CATU device if a CATU is found.
+ *		: NULL otherwise.
+ */
+struct coresight_device *
+tmc_etr_get_catu_device(struct tmc_drvdata *drvdata)
+{
+	int i;
+	struct coresight_device *tmp, *etr = drvdata->csdev;
+
+	if (!IS_ENABLED(CONFIG_CORESIGHT_CATU))
+		return NULL;
+
+	for (i = 0; i < etr->nr_outport; i++) {
+		tmp = etr->conns[i].child_dev;
+		if (tmp && coresight_is_catu_device(tmp))
+			return tmp;
+	}
+
+	return NULL;
+}
+
+static inline void tmc_etr_enable_catu(struct tmc_drvdata *drvdata)
+{
+	struct coresight_device *catu = tmc_etr_get_catu_device(drvdata);
+
+	if (catu && helper_ops(catu)->enable)
+		helper_ops(catu)->enable(catu, drvdata->etr_buf);
+}
+
+static inline void tmc_etr_disable_catu(struct tmc_drvdata *drvdata)
+{
+	struct coresight_device *catu = tmc_etr_get_catu_device(drvdata);
+
+	if (catu && helper_ops(catu)->disable)
+		helper_ops(catu)->disable(catu, drvdata->etr_buf);
+}
+
+static const struct etr_buf_operations *etr_buf_ops[] = {
+	[ETR_MODE_FLAT] = &etr_flat_buf_ops,
+	[ETR_MODE_ETR_SG] = &etr_sg_buf_ops,
+	[ETR_MODE_CATU] = &etr_catu_buf_ops,
+};
+
+static inline int tmc_etr_mode_alloc_buf(int mode,
+					 struct tmc_drvdata *drvdata,
+					 struct etr_buf *etr_buf, int node,
+					 void **pages)
+{
+	int rc = -EINVAL;
+
+	switch (mode) {
+	case ETR_MODE_FLAT:
+	case ETR_MODE_ETR_SG:
+	case ETR_MODE_CATU:
+		if (etr_buf_ops[mode]->alloc)
+			rc = etr_buf_ops[mode]->alloc(drvdata, etr_buf,
+						      node, pages);
+		if (!rc)
+			etr_buf->ops = etr_buf_ops[mode];
+		return rc;
+	default:
+		return -EINVAL;
+	}
+}
+
+/*
+ * tmc_alloc_etr_buf: Allocate a buffer use by ETR.
+ * @drvdata	: ETR device details.
+ * @size	: size of the requested buffer.
+ * @flags	: Required properties for the buffer.
+ * @node	: Node for memory allocations.
+ * @pages	: An optional list of pages.
+ */
+static struct etr_buf *tmc_alloc_etr_buf(struct tmc_drvdata *drvdata,
+					 ssize_t size, int flags,
+					 int node, void **pages)
+{
+	int rc = -ENOMEM;
+	bool has_etr_sg, has_iommu;
+	bool has_sg, has_catu;
+	struct etr_buf *etr_buf;
+
+	has_etr_sg = tmc_etr_has_cap(drvdata, TMC_ETR_SG);
+	has_iommu = iommu_get_domain_for_dev(drvdata->dev);
+	has_catu = !!tmc_etr_get_catu_device(drvdata);
+
+	has_sg = has_catu || has_etr_sg;
+
+	etr_buf = kzalloc(sizeof(*etr_buf), GFP_KERNEL);
+	if (!etr_buf)
+		return ERR_PTR(-ENOMEM);
+
+	etr_buf->size = size;
+
+	/*
+	 * If we have to use an existing list of pages, we cannot reliably
+	 * use a contiguous DMA memory (even if we have an IOMMU). Otherwise,
+	 * we use the contiguous DMA memory if at least one of the following
+	 * conditions is true:
+	 * a) The ETR cannot use Scatter-Gather.
+	 * b) we have a backing IOMMU
+	 * c) The requested memory size is smaller (< 1M).
+	 *
+	 * Fallback to available mechanisms.
+	 *
+	 */
+	if (!pages &&
+	    (!has_sg || has_iommu || size < SZ_1M))
+		rc = tmc_etr_mode_alloc_buf(ETR_MODE_FLAT, drvdata,
+					    etr_buf, node, pages);
+	if (rc && has_etr_sg)
+		rc = tmc_etr_mode_alloc_buf(ETR_MODE_ETR_SG, drvdata,
+					    etr_buf, node, pages);
+	if (rc && has_catu)
+		rc = tmc_etr_mode_alloc_buf(ETR_MODE_CATU, drvdata,
+					    etr_buf, node, pages);
+	if (rc) {
+		kfree(etr_buf);
+		return ERR_PTR(rc);
+	}
+
+	dev_dbg(drvdata->dev, "allocated buffer of size %ldKB in mode %d\n",
+		(unsigned long)size >> 10, etr_buf->mode);
+	return etr_buf;
+}
+
+static void tmc_free_etr_buf(struct etr_buf *etr_buf)
+{
+	WARN_ON(!etr_buf->ops || !etr_buf->ops->free);
+	etr_buf->ops->free(etr_buf);
+	kfree(etr_buf);
+}
+
+/*
+ * tmc_etr_buf_get_data: Get the pointer the trace data at @offset
+ * with a maximum of @len bytes.
+ * Returns: The size of the linear data available @pos, with *bufpp
+ * updated to point to the buffer.
+ */
+static ssize_t tmc_etr_buf_get_data(struct etr_buf *etr_buf,
+				    u64 offset, size_t len, char **bufpp)
+{
+	/* Adjust the length to limit this transaction to end of buffer */
+	len = (len < (etr_buf->size - offset)) ? len : etr_buf->size - offset;
+
+	return etr_buf->ops->get_data(etr_buf, (u64)offset, len, bufpp);
+}
+
+static inline s64
+tmc_etr_buf_insert_barrier_packet(struct etr_buf *etr_buf, u64 offset)
+{
+	ssize_t len;
+	char *bufp;
+
+	len = tmc_etr_buf_get_data(etr_buf, offset,
+				   CORESIGHT_BARRIER_PKT_SIZE, &bufp);
+	if (WARN_ON(len < CORESIGHT_BARRIER_PKT_SIZE))
+		return -EINVAL;
+	coresight_insert_barrier_packet(bufp);
+	return offset + CORESIGHT_BARRIER_PKT_SIZE;
+}
+
+/*
+ * tmc_sync_etr_buf: Sync the trace buffer availability with drvdata.
+ * Makes sure the trace data is synced to the memory for consumption.
+ * @etr_buf->offset will hold the offset to the beginning of the trace data
+ * within the buffer, with @etr_buf->len bytes to consume.
+ */
+static void tmc_sync_etr_buf(struct tmc_drvdata *drvdata)
+{
+	struct etr_buf *etr_buf = drvdata->etr_buf;
+	u64 rrp, rwp;
+	u32 status;
+
+	rrp = tmc_read_rrp(drvdata);
+	rwp = tmc_read_rwp(drvdata);
+	status = readl_relaxed(drvdata->base + TMC_STS);
+	etr_buf->full = status & TMC_STS_FULL;
+
+	WARN_ON(!etr_buf->ops || !etr_buf->ops->sync);
+
+	etr_buf->ops->sync(etr_buf, rrp, rwp);
+
+	/* Insert barrier packets at the beginning, if there was an overflow */
+	if (etr_buf->full)
+		tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset);
+}

 static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata)
 {
 	u32 axictl, sts;
+	struct etr_buf *etr_buf = drvdata->etr_buf;

-	/* Zero out the memory to help with debug */
-	memset(drvdata->vaddr, 0, drvdata->size);
+	/*
+	 * If this ETR is connected to a CATU, enable it before we turn
+	 * this on
+	 */
+	tmc_etr_enable_catu(drvdata);

 	CS_UNLOCK(drvdata->base);

 	/* Wait for TMCSReady bit to be set */
 	tmc_wait_for_tmcready(drvdata);

-	writel_relaxed(drvdata->size / 4, drvdata->base + TMC_RSZ);
+	writel_relaxed(etr_buf->size / 4, drvdata->base + TMC_RSZ);
 	writel_relaxed(TMC_MODE_CIRCULAR_BUFFER, drvdata->base + TMC_MODE);

 	axictl = readl_relaxed(drvdata->base + TMC_AXICTL);
···
 		axictl |= TMC_AXICTL_ARCACHE_OS;
 	}

+	if (etr_buf->mode == ETR_MODE_ETR_SG) {
+		if (WARN_ON(!tmc_etr_has_cap(drvdata, TMC_ETR_SG)))
+			return;
+		axictl |= TMC_AXICTL_SCT_GAT_MODE;
+	}
+
 	writel_relaxed(axictl, drvdata->base + TMC_AXICTL);
-	tmc_write_dba(drvdata, drvdata->paddr);
+	tmc_write_dba(drvdata, etr_buf->hwaddr);
 	/*
 	 * If the TMC pointers must be programmed before the session,
 	 * we have to set it properly (i.e, RRP/RWP to base address and
 	 * STS to "not full").
 	 */
 	if (tmc_etr_has_cap(drvdata, TMC_ETR_SAVE_RESTORE)) {
-		tmc_write_rrp(drvdata, drvdata->paddr);
-		tmc_write_rwp(drvdata, drvdata->paddr);
+		tmc_write_rrp(drvdata, etr_buf->hwaddr);
+		tmc_write_rwp(drvdata, etr_buf->hwaddr);
 		sts = readl_relaxed(drvdata->base + TMC_STS) & ~TMC_STS_FULL;
 		writel_relaxed(sts, drvdata->base + TMC_STS);
 	}
···
 	CS_LOCK(drvdata->base);
 }

-static void tmc_etr_dump_hw(struct tmc_drvdata *drvdata)
+/*
+ * Return the available trace data in the buffer (starts at etr_buf->offset,
+ * limited by etr_buf->len) from @pos, with a maximum limit of @len,
+ * also updating the @bufpp on where to find it. Since the trace data
+ * starts at anywhere in the buffer, depending on the RRP, we adjust the
+ * @len returned to handle buffer wrapping around.
+ */
+ssize_t tmc_etr_get_sysfs_trace(struct tmc_drvdata *drvdata,
+				loff_t pos, size_t len, char **bufpp)
 {
-	const u32 *barrier;
-	u32 val;
-	u32 *temp;
-	u64 rwp;
+	s64 offset;
+	ssize_t actual = len;
+	struct etr_buf *etr_buf = drvdata->etr_buf;

-	rwp = tmc_read_rwp(drvdata);
-	val = readl_relaxed(drvdata->base + TMC_STS);
+	if (pos + actual > etr_buf->len)
+		actual = etr_buf->len - pos;
+	if (actual <= 0)
+		return actual;

-	/*
-	 * Adjust the buffer to point to the beginning of the trace data
-	 * and update the available trace data.
-	 */
-	if (val & TMC_STS_FULL) {
-		drvdata->buf = drvdata->vaddr + rwp - drvdata->paddr;
-		drvdata->len = drvdata->size;
+	/* Compute the offset from which we read the data */
+	offset = etr_buf->offset + pos;
+	if (offset >= etr_buf->size)
+		offset -= etr_buf->size;
+	return tmc_etr_buf_get_data(etr_buf, offset, actual, bufpp);
+}

-		barrier = barrier_pkt;
-		temp = (u32 *)drvdata->buf;
+static struct etr_buf *
+tmc_etr_setup_sysfs_buf(struct tmc_drvdata *drvdata)
+{
+	return tmc_alloc_etr_buf(drvdata, drvdata->size,
+				 0, cpu_to_node(0), NULL);
+}

-		while (*barrier) {
-			*temp = *barrier;
-			temp++;
-			barrier++;
-		}
+static void
+tmc_etr_free_sysfs_buf(struct etr_buf *buf)
+{
+	if (buf)
+		tmc_free_etr_buf(buf);
+}

-	} else {
-		drvdata->buf = drvdata->vaddr;
-		drvdata->len = rwp - drvdata->paddr;
-	}
+static void tmc_etr_sync_sysfs_buf(struct tmc_drvdata *drvdata)
+{
+	tmc_sync_etr_buf(drvdata);
 }

 static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata)
···
 	 * read before the TMC is disabled.
 	 */
 	if (drvdata->mode == CS_MODE_SYSFS)
-		tmc_etr_dump_hw(drvdata);
+		tmc_etr_sync_sysfs_buf(drvdata);
+
 	tmc_disable_hw(drvdata);

 	CS_LOCK(drvdata->base);
+
+	/* Disable CATU device if this ETR is connected to one */
+	tmc_etr_disable_catu(drvdata);
 }

 static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev)
 {
 	int ret = 0;
-	bool used = false;
 	unsigned long flags;
-	void __iomem *vaddr = NULL;
-	dma_addr_t paddr = 0;
 	struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent);
+	struct etr_buf *new_buf = NULL, *free_buf = NULL;

 	/*
-	 * If we don't have a buffer release the lock and allocate memory.
-	 * Otherwise keep the lock and move along.
+	 * If we are enabling the ETR from disabled state, we need to make
+	 * sure we have a buffer with the right size. The etr_buf is not reset
+	 * immediately after we stop the tracing in SYSFS mode as we wait for
+	 * the user to collect the data. We may be able to reuse the existing
+	 * buffer, provided the size matches. Any allocation has to be done
+	 * with the lock released.
 	 */
 	spin_lock_irqsave(&drvdata->spinlock, flags);
-	if (!drvdata->vaddr) {
+	if (!drvdata->etr_buf || (drvdata->etr_buf->size != drvdata->size)) {
 		spin_unlock_irqrestore(&drvdata->spinlock, flags);

-		/*
-		 * Contiguous memory can't be allocated while a spinlock is
-		 * held. As such allocate memory here and free it if a buffer
-		 * has already been allocated (from a previous session).
1039 - */ 1040 - vaddr = dma_alloc_coherent(drvdata->dev, drvdata->size, 1041 - &paddr, GFP_KERNEL); 1042 - if (!vaddr) 1043 - return -ENOMEM; 133 + /* Allocate memory with the locks released */ 134 + free_buf = new_buf = tmc_etr_setup_sysfs_buf(drvdata); 135 + if (IS_ERR(new_buf)) 136 + return PTR_ERR(new_buf); 1044 137 1045 138 /* Let's try again */ 1046 139 spin_lock_irqsave(&drvdata->spinlock, flags); 1047 140 } 1048 141 1049 - if (drvdata->reading) { 142 + if (drvdata->reading || drvdata->mode == CS_MODE_PERF) { 1050 143 ret = -EBUSY; 1051 144 goto out; 1052 145 } ··· 1055 146 /* 1056 147 * In sysFS mode we can have multiple writers per sink. Since this 1057 148 * sink is already enabled no memory is needed and the HW need not be 1058 - * touched. 149 + * touched, even if the buffer size has changed. 1059 150 */ 1060 151 if (drvdata->mode == CS_MODE_SYSFS) 1061 152 goto out; 1062 153 1063 154 /* 1064 - * If drvdata::vaddr == NULL, use the memory allocated above. 1065 - * Otherwise a buffer still exists from a previous session, so 1066 - * simply use that. 155 + * If we don't have a buffer or it doesn't match the requested size, 156 + * use the buffer allocated above. Otherwise reuse the existing buffer. 
1067 157 */ 1068 - if (drvdata->vaddr == NULL) { 1069 - used = true; 1070 - drvdata->vaddr = vaddr; 1071 - drvdata->paddr = paddr; 1072 - drvdata->buf = drvdata->vaddr; 158 + if (!drvdata->etr_buf || 159 + (new_buf && drvdata->etr_buf->size != new_buf->size)) { 160 + free_buf = drvdata->etr_buf; 161 + drvdata->etr_buf = new_buf; 1073 162 } 1074 163 1075 164 drvdata->mode = CS_MODE_SYSFS; ··· 1076 169 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1077 170 1078 171 /* Free memory outside the spinlock if need be */ 1079 - if (!used && vaddr) 1080 - dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr); 172 + if (free_buf) 173 + tmc_etr_free_sysfs_buf(free_buf); 1081 174 1082 175 if (!ret) 1083 176 dev_info(drvdata->dev, "TMC-ETR enabled\n"); ··· 1087 180 1088 181 static int tmc_enable_etr_sink_perf(struct coresight_device *csdev) 1089 182 { 1090 - int ret = 0; 1091 - unsigned long flags; 1092 - struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 1093 - 1094 - spin_lock_irqsave(&drvdata->spinlock, flags); 1095 - if (drvdata->reading) { 1096 - ret = -EINVAL; 1097 - goto out; 1098 - } 1099 - 1100 - /* 1101 - * In Perf mode there can be only one writer per sink. There 1102 - * is also no need to continue if the ETR is already operated 1103 - * from sysFS. 1104 - */ 1105 - if (drvdata->mode != CS_MODE_DISABLED) { 1106 - ret = -EINVAL; 1107 - goto out; 1108 - } 1109 - 1110 - drvdata->mode = CS_MODE_PERF; 1111 - tmc_etr_enable_hw(drvdata); 1112 - out: 1113 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 1114 - 1115 - return ret; 183 + /* We don't support perf mode yet ! 
*/ 184 + return -EINVAL; 1116 185 } 1117 186 1118 187 static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode) ··· 1156 273 goto out; 1157 274 } 1158 275 1159 - /* If drvdata::buf is NULL the trace data has been read already */ 1160 - if (drvdata->buf == NULL) { 276 + /* If drvdata::etr_buf is NULL the trace data has been read already */ 277 + if (drvdata->etr_buf == NULL) { 1161 278 ret = -EINVAL; 1162 279 goto out; 1163 280 } ··· 1176 293 int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata) 1177 294 { 1178 295 unsigned long flags; 1179 - dma_addr_t paddr; 1180 - void __iomem *vaddr = NULL; 296 + struct etr_buf *etr_buf = NULL; 1181 297 1182 298 /* config types are set a boot time and never change */ 1183 299 if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR)) ··· 1188 306 if (drvdata->mode == CS_MODE_SYSFS) { 1189 307 /* 1190 308 * The trace run will continue with the same allocated trace 1191 - * buffer. The trace buffer is cleared in tmc_etr_enable_hw(), 1192 - * so we don't have to explicitly clear it. Also, since the 1193 - * tracer is still enabled drvdata::buf can't be NULL. 309 + * buffer. Since the tracer is still enabled drvdata::buf can't 310 + * be NULL. 1194 311 */ 1195 312 tmc_etr_enable_hw(drvdata); 1196 313 } else { ··· 1197 316 * The ETR is not tracing and the buffer was just read. 1198 317 * As such prepare to free the trace buffer. 1199 318 */ 1200 - vaddr = drvdata->vaddr; 1201 - paddr = drvdata->paddr; 1202 - drvdata->buf = drvdata->vaddr = NULL; 319 + etr_buf = drvdata->etr_buf; 320 + drvdata->etr_buf = NULL; 1203 321 } 1204 322 1205 323 drvdata->reading = false; 1206 324 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1207 325 1208 326 /* Free allocated memory out side of the spinlock */ 1209 - if (vaddr) 1210 - dma_free_coherent(drvdata->dev, drvdata->size, vaddr, paddr); 327 + if (etr_buf) 328 + tmc_free_etr_buf(etr_buf); 1211 329 1212 330 return 0; 1213 331 }
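The `tmc_etr_get_sysfs_trace()` hunk above computes where a sysfs read starts inside the circular ETR buffer: the trace data begins at `etr_buf->offset`, and a read position past the end of the buffer wraps back to the start. A minimal sketch of that wrap-around arithmetic (names are illustrative, not the kernel's):

```c
#include <assert.h>

/*
 * Sketch of the offset computation in tmc_etr_get_sysfs_trace():
 * trace data starts at data_offset inside a circular buffer of
 * buf_size bytes; a read at position pos wraps around once.
 */
static long circ_read_offset(long buf_size, long data_offset, long pos)
{
	long offset = data_offset + pos;

	if (offset >= buf_size)
		offset -= buf_size;	/* wrap back to the buffer start */
	return offset;
}
```

A single subtraction suffices because `pos` is already clamped to the available data length, so the sum can exceed `buf_size` at most once.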
+64 -19
drivers/hwtracing/coresight/coresight-tmc.c
··· 12 12 #include <linux/err.h> 13 13 #include <linux/fs.h> 14 14 #include <linux/miscdevice.h> 15 + #include <linux/property.h> 15 16 #include <linux/uaccess.h> 16 17 #include <linux/slab.h> 17 18 #include <linux/dma-mapping.h> ··· 124 123 return 0; 125 124 } 126 125 126 + static inline ssize_t tmc_get_sysfs_trace(struct tmc_drvdata *drvdata, 127 + loff_t pos, size_t len, char **bufpp) 128 + { 129 + switch (drvdata->config_type) { 130 + case TMC_CONFIG_TYPE_ETB: 131 + case TMC_CONFIG_TYPE_ETF: 132 + return tmc_etb_get_sysfs_trace(drvdata, pos, len, bufpp); 133 + case TMC_CONFIG_TYPE_ETR: 134 + return tmc_etr_get_sysfs_trace(drvdata, pos, len, bufpp); 135 + } 136 + 137 + return -EINVAL; 138 + } 139 + 127 140 static ssize_t tmc_read(struct file *file, char __user *data, size_t len, 128 141 loff_t *ppos) 129 142 { 143 + char *bufp; 144 + ssize_t actual; 130 145 struct tmc_drvdata *drvdata = container_of(file->private_data, 131 146 struct tmc_drvdata, miscdev); 132 - char *bufp = drvdata->buf + *ppos; 147 + actual = tmc_get_sysfs_trace(drvdata, *ppos, len, &bufp); 148 + if (actual <= 0) 149 + return 0; 133 150 134 - if (*ppos + len > drvdata->len) 135 - len = drvdata->len - *ppos; 136 - 137 - if (drvdata->config_type == TMC_CONFIG_TYPE_ETR) { 138 - if (bufp == (char *)(drvdata->vaddr + drvdata->size)) 139 - bufp = drvdata->vaddr; 140 - else if (bufp > (char *)(drvdata->vaddr + drvdata->size)) 141 - bufp -= drvdata->size; 142 - if ((bufp + len) > (char *)(drvdata->vaddr + drvdata->size)) 143 - len = (char *)(drvdata->vaddr + drvdata->size) - bufp; 144 - } 145 - 146 - if (copy_to_user(data, bufp, len)) { 151 + if (copy_to_user(data, bufp, actual)) { 147 152 dev_dbg(drvdata->dev, "%s: copy_to_user failed\n", __func__); 148 153 return -EFAULT; 149 154 } 150 155 151 - *ppos += len; 156 + *ppos += actual; 157 + dev_dbg(drvdata->dev, "%zu bytes copied\n", actual); 152 158 153 - dev_dbg(drvdata->dev, "%s: %zu bytes copied, %d bytes left\n", 154 - __func__, len, 
(int)(drvdata->len - *ppos)); 155 - return len; 159 + return actual; 156 160 } 157 161 158 162 static int tmc_release(struct inode *inode, struct file *file) ··· 277 271 } 278 272 static DEVICE_ATTR_RW(trigger_cntr); 279 273 274 + static ssize_t buffer_size_show(struct device *dev, 275 + struct device_attribute *attr, char *buf) 276 + { 277 + struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent); 278 + 279 + return sprintf(buf, "%#x\n", drvdata->size); 280 + } 281 + 282 + static ssize_t buffer_size_store(struct device *dev, 283 + struct device_attribute *attr, 284 + const char *buf, size_t size) 285 + { 286 + int ret; 287 + unsigned long val; 288 + struct tmc_drvdata *drvdata = dev_get_drvdata(dev->parent); 289 + 290 + /* Only permitted for TMC-ETRs */ 291 + if (drvdata->config_type != TMC_CONFIG_TYPE_ETR) 292 + return -EPERM; 293 + 294 + ret = kstrtoul(buf, 0, &val); 295 + if (ret) 296 + return ret; 297 + /* The buffer size should be page aligned */ 298 + if (val & (PAGE_SIZE - 1)) 299 + return -EINVAL; 300 + drvdata->size = val; 301 + return size; 302 + } 303 + 304 + static DEVICE_ATTR_RW(buffer_size); 305 + 280 306 static struct attribute *coresight_tmc_attrs[] = { 281 307 &dev_attr_trigger_cntr.attr, 308 + &dev_attr_buffer_size.attr, 282 309 NULL, 283 310 }; 284 311 ··· 330 291 NULL, 331 292 }; 332 293 294 + static inline bool tmc_etr_can_use_sg(struct tmc_drvdata *drvdata) 295 + { 296 + return fwnode_property_present(drvdata->dev->fwnode, 297 + "arm,scatter-gather"); 298 + } 299 + 333 300 /* Detect and initialise the capabilities of a TMC ETR */ 334 301 static int tmc_etr_setup_caps(struct tmc_drvdata *drvdata, 335 302 u32 devid, void *dev_caps) ··· 345 300 /* Set the unadvertised capabilities */ 346 301 tmc_etr_init_caps(drvdata, (u32)(unsigned long)dev_caps); 347 302 348 - if (!(devid & TMC_DEVID_NOSCAT)) 303 + if (!(devid & TMC_DEVID_NOSCAT) && tmc_etr_can_use_sg(drvdata)) 349 304 tmc_etr_set_cap(drvdata, TMC_ETR_SG); 350 305 351 306 /* Check if the 
AXI address width is available */
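The new `buffer_size_store()` above rejects sizes that are not page aligned with the classic mask test `val & (PAGE_SIZE - 1)`. A sketch of that check, assuming 4 KiB pages (`PAGE_SIZE_SKETCH` stands in for the kernel's `PAGE_SIZE`):

```c
#include <assert.h>

#define PAGE_SIZE_SKETCH 4096UL	/* assumption: 4 KiB pages */

/*
 * Mirrors the alignment test in buffer_size_store(): a size is page
 * aligned iff its low log2(PAGE_SIZE) bits are all zero.
 */
static int is_page_aligned(unsigned long val)
{
	return (val & (PAGE_SIZE_SKETCH - 1)) == 0;
}
```

This works because `PAGE_SIZE` is a power of two, so `PAGE_SIZE - 1` is a mask of exactly the bits that must be clear.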
+104 -9
drivers/hwtracing/coresight/coresight-tmc.h
··· 7 7 #ifndef _CORESIGHT_TMC_H 8 8 #define _CORESIGHT_TMC_H 9 9 10 + #include <linux/dma-mapping.h> 10 11 #include <linux/miscdevice.h> 11 12 12 13 #define TMC_RSZ 0x004 ··· 123 122 #define CORESIGHT_SOC_600_ETR_CAPS \ 124 123 (TMC_ETR_SAVE_RESTORE | TMC_ETR_AXI_ARCACHE) 125 124 125 + enum etr_mode { 126 + ETR_MODE_FLAT, /* Uses contiguous flat buffer */ 127 + ETR_MODE_ETR_SG, /* Uses in-built TMC ETR SG mechanism */ 128 + ETR_MODE_CATU, /* Use SG mechanism in CATU */ 129 + }; 130 + 131 + struct etr_buf_operations; 132 + 133 + /** 134 + * struct etr_buf - Details of the buffer used by ETR 135 + * @mode : Mode of the ETR buffer, contiguous, Scatter Gather etc. 136 + * @full : Trace data overflow 137 + * @size : Size of the buffer. 138 + * @hwaddr : Address to be programmed in the TMC:DBA{LO,HI} 139 + * @offset : Offset of the trace data in the buffer for consumption. 140 + * @len : Available trace data @buf (may round up to the beginning). 141 + * @ops : ETR buffer operations for the mode. 142 + * @private : Backend specific information for the buf 143 + */ 144 + struct etr_buf { 145 + enum etr_mode mode; 146 + bool full; 147 + ssize_t size; 148 + dma_addr_t hwaddr; 149 + unsigned long offset; 150 + s64 len; 151 + const struct etr_buf_operations *ops; 152 + void *private; 153 + }; 154 + 126 155 /** 127 156 * struct tmc_drvdata - specifics associated to an TMC component 128 157 * @base: memory mapped base address for this component. ··· 160 129 * @csdev: component vitals needed by the framework. 161 130 * @miscdev: specifics to handle "/dev/xyz.tmc" entry. 162 131 * @spinlock: only one at a time pls. 163 - * @buf: area of memory where trace data get sent. 164 - * @paddr: DMA start location in RAM. 165 - * @vaddr: virtual representation of @paddr. 166 - * @size: trace buffer size. 167 - * @len: size of the available trace. 132 + * @buf: Snapshot of the trace data for ETF/ETB. 
133 + * @etr_buf: details of buffer used in TMC-ETR 134 + * @len: size of the available trace for ETF/ETB. 135 + * @size: trace buffer size for this TMC (common for all modes). 168 136 * @mode: how this TMC is being used. 169 137 * @config_type: TMC variant, must be of type @tmc_config_type. 170 138 * @memwidth: width of the memory interface databus, in bytes. ··· 178 148 struct miscdevice miscdev; 179 149 spinlock_t spinlock; 180 150 bool reading; 181 - char *buf; 182 - dma_addr_t paddr; 183 - void __iomem *vaddr; 184 - u32 size; 151 + union { 152 + char *buf; /* TMC ETB */ 153 + struct etr_buf *etr_buf; /* TMC ETR */ 154 + }; 185 155 u32 len; 156 + u32 size; 186 157 u32 mode; 187 158 enum tmc_config_type config_type; 188 159 enum tmc_mem_intf_width memwidth; 189 160 u32 trigger_cntr; 190 161 u32 etr_caps; 162 + }; 163 + 164 + struct etr_buf_operations { 165 + int (*alloc)(struct tmc_drvdata *drvdata, struct etr_buf *etr_buf, 166 + int node, void **pages); 167 + void (*sync)(struct etr_buf *etr_buf, u64 rrp, u64 rwp); 168 + ssize_t (*get_data)(struct etr_buf *etr_buf, u64 offset, size_t len, 169 + char **bufpp); 170 + void (*free)(struct etr_buf *etr_buf); 171 + }; 172 + 173 + /** 174 + * struct tmc_pages - Collection of pages used for SG. 175 + * @nr_pages: Number of pages in the list. 176 + * @daddrs: Array of DMA'able page address. 177 + * @pages: Array pages for the buffer. 
178 + */ 179 + struct tmc_pages { 180 + int nr_pages; 181 + dma_addr_t *daddrs; 182 + struct page **pages; 183 + }; 184 + 185 + /* 186 + * struct tmc_sg_table - Generic SG table for TMC 187 + * @dev: Device for DMA allocations 188 + * @table_vaddr: Contiguous Virtual address for PageTable 189 + * @data_vaddr: Contiguous Virtual address for Data Buffer 190 + * @table_daddr: DMA address of the PageTable base 191 + * @node: Node for Page allocations 192 + * @table_pages: List of pages & dma address for Table 193 + * @data_pages: List of pages & dma address for Data 194 + */ 195 + struct tmc_sg_table { 196 + struct device *dev; 197 + void *table_vaddr; 198 + void *data_vaddr; 199 + dma_addr_t table_daddr; 200 + int node; 201 + struct tmc_pages table_pages; 202 + struct tmc_pages data_pages; 191 203 }; 192 204 193 205 /* Generic functions */ ··· 244 172 extern const struct coresight_ops tmc_etb_cs_ops; 245 173 extern const struct coresight_ops tmc_etf_cs_ops; 246 174 175 + ssize_t tmc_etb_get_sysfs_trace(struct tmc_drvdata *drvdata, 176 + loff_t pos, size_t len, char **bufpp); 247 177 /* ETR functions */ 248 178 int tmc_read_prepare_etr(struct tmc_drvdata *drvdata); 249 179 int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata); 250 180 extern const struct coresight_ops tmc_etr_cs_ops; 181 + ssize_t tmc_etr_get_sysfs_trace(struct tmc_drvdata *drvdata, 182 + loff_t pos, size_t len, char **bufpp); 251 183 252 184 253 185 #define TMC_REG_PAIR(name, lo_off, hi_off) \ ··· 286 210 { 287 211 return !!(drvdata->etr_caps & cap); 288 212 } 213 + 214 + struct tmc_sg_table *tmc_alloc_sg_table(struct device *dev, 215 + int node, 216 + int nr_tpages, 217 + int nr_dpages, 218 + void **pages); 219 + void tmc_free_sg_table(struct tmc_sg_table *sg_table); 220 + void tmc_sg_table_sync_table(struct tmc_sg_table *sg_table); 221 + void tmc_sg_table_sync_data_range(struct tmc_sg_table *table, 222 + u64 offset, u64 size); 223 + ssize_t tmc_sg_table_get_data(struct tmc_sg_table *sg_table, 224 
+ u64 offset, size_t len, char **bufpp); 225 + static inline unsigned long 226 + tmc_sg_table_buf_size(struct tmc_sg_table *sg_table) 227 + { 228 + return sg_table->data_pages.nr_pages << PAGE_SHIFT; 229 + } 230 + 231 + struct coresight_device *tmc_etr_get_catu_device(struct tmc_drvdata *drvdata); 289 232 290 233 #endif
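The inline `tmc_sg_table_buf_size()` added above derives the total data-buffer size from the page count with a shift. A sketch of the same computation, assuming a 4 KiB page (`PAGE_SHIFT_SKETCH` stands in for the kernel's `PAGE_SHIFT`):

```c
#include <assert.h>

#define PAGE_SHIFT_SKETCH 12	/* assumption: 4 KiB pages */

/*
 * Mirrors tmc_sg_table_buf_size(): buffer size is the number of data
 * pages shifted left by the page-size exponent (i.e. pages * 4096).
 */
static unsigned long sg_buf_size(int nr_data_pages)
{
	return (unsigned long)nr_data_pages << PAGE_SHIFT_SKETCH;
}
```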
+4 -3
drivers/hwtracing/coresight/coresight-tpiu.c
··· 40 40 41 41 /** register definition **/ 42 42 /* FFSR - 0x300 */ 43 - #define FFSR_FT_STOPPED BIT(1) 43 + #define FFSR_FT_STOPPED_BIT 1 44 44 /* FFCR - 0x304 */ 45 + #define FFCR_FON_MAN_BIT 6 45 46 #define FFCR_FON_MAN BIT(6) 46 47 #define FFCR_STOP_FI BIT(12) 47 48 ··· 87 86 /* Generate manual flush */ 88 87 writel_relaxed(FFCR_STOP_FI | FFCR_FON_MAN, drvdata->base + TPIU_FFCR); 89 88 /* Wait for flush to complete */ 90 - coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN, 0); 89 + coresight_timeout(drvdata->base, TPIU_FFCR, FFCR_FON_MAN_BIT, 0); 91 90 /* Wait for formatter to stop */ 92 - coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED, 1); 91 + coresight_timeout(drvdata->base, TPIU_FFSR, FFSR_FT_STOPPED_BIT, 1); 93 92 94 93 CS_LOCK(drvdata->base); 95 94 }
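The TPIU hunk above replaces `BIT()` masks with raw bit positions because `coresight_timeout()` expects a position, not a mask — passing `BIT(6)` (0x40) where position 6 is wanted polls the wrong bit. A sketch of the distinction (`BIT_SKETCH` and `bit_is_set` are illustrative helpers, not kernel API):

```c
#include <assert.h>

/* A mask encodes the bit as a value; a position encodes it as an index. */
#define BIT_SKETCH(nr)	(1UL << (nr))

/* Test a register bit by position, the convention coresight_timeout() uses. */
static int bit_is_set(unsigned long reg, int position)
{
	return (reg >> position) & 1;
}
```

Confusing the two is silent: `bit_is_set(reg, BIT(6))` would shift by 64 instead of 6, which is why the patch introduces explicit `*_BIT` position macros alongside the masks.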
+47 -6
drivers/hwtracing/coresight/coresight.c
··· 51 51 * beginning of the data collected in a buffer. That way the decoder knows that 52 52 * it needs to look for another sync sequence. 53 53 */ 54 - const u32 barrier_pkt[5] = {0x7fffffff, 0x7fffffff, 55 - 0x7fffffff, 0x7fffffff, 0x0}; 54 + const u32 barrier_pkt[4] = {0x7fffffff, 0x7fffffff, 0x7fffffff, 0x7fffffff}; 56 55 57 56 static int coresight_id_match(struct device *dev, void *data) 58 57 { ··· 107 108 dev_err(&csdev->dev, "couldn't find inport, parent: %s, child: %s\n", 108 109 dev_name(&parent->dev), dev_name(&csdev->dev)); 109 110 110 - return 0; 111 + return -ENODEV; 111 112 } 112 113 113 114 static int coresight_find_link_outport(struct coresight_device *csdev, ··· 125 126 dev_err(&csdev->dev, "couldn't find outport, parent: %s, child: %s\n", 126 127 dev_name(&csdev->dev), dev_name(&child->dev)); 127 128 128 - return 0; 129 + return -ENODEV; 129 130 } 130 131 131 132 static int coresight_enable_sink(struct coresight_device *csdev, u32 mode) ··· 177 178 refport = outport; 178 179 else 179 180 refport = 0; 181 + 182 + if (refport < 0) 183 + return refport; 180 184 181 185 if (atomic_inc_return(&csdev->refcnt[refport]) == 1) { 182 186 if (link_ops(csdev)->enable) { ··· 425 423 return dev ? to_coresight_device(dev) : NULL; 426 424 } 427 425 426 + /* 427 + * coresight_grab_device - Power up this device and any of the helper 428 + * devices connected to it for trace operation. Since the helper devices 429 + * don't appear on the trace path, they should be handled along with the 430 + * the master device. 
431 + */ 432 + static void coresight_grab_device(struct coresight_device *csdev) 433 + { 434 + int i; 435 + 436 + for (i = 0; i < csdev->nr_outport; i++) { 437 + struct coresight_device *child = csdev->conns[i].child_dev; 438 + 439 + if (child && child->type == CORESIGHT_DEV_TYPE_HELPER) 440 + pm_runtime_get_sync(child->dev.parent); 441 + } 442 + pm_runtime_get_sync(csdev->dev.parent); 443 + } 444 + 445 + /* 446 + * coresight_drop_device - Release this device and any of the helper 447 + * devices connected to it. 448 + */ 449 + static void coresight_drop_device(struct coresight_device *csdev) 450 + { 451 + int i; 452 + 453 + pm_runtime_put(csdev->dev.parent); 454 + for (i = 0; i < csdev->nr_outport; i++) { 455 + struct coresight_device *child = csdev->conns[i].child_dev; 456 + 457 + if (child && child->type == CORESIGHT_DEV_TYPE_HELPER) 458 + pm_runtime_put(child->dev.parent); 459 + } 460 + } 461 + 428 462 /** 429 463 * _coresight_build_path - recursively build a path from a @csdev to a sink. 430 464 * @csdev: The device to start from. ··· 509 471 if (!node) 510 472 return -ENOMEM; 511 473 474 + coresight_grab_device(csdev); 512 475 node->csdev = csdev; 513 476 list_add(&node->link, path); 514 - pm_runtime_get_sync(csdev->dev.parent); 515 477 516 478 return 0; 517 479 } ··· 555 517 list_for_each_entry_safe(nd, next, path, link) { 556 518 csdev = nd->csdev; 557 519 558 - pm_runtime_put_sync(csdev->dev.parent); 520 + coresight_drop_device(csdev); 559 521 list_del(&nd->link); 560 522 kfree(nd); 561 523 } ··· 805 767 { 806 768 .name = "source", 807 769 .groups = coresight_source_groups, 770 + }, 771 + { 772 + .name = "helper", 808 773 }, 809 774 }; 810 775
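The coresight.c hunk above shrinks `barrier_pkt` from a zero-terminated five-word array to a fixed four-word array, so a barrier is inserted by copying a known size rather than scanning for a terminator. A sketch of the fixed-size copy (the 0x7fffffff sync words are from the diff; `insert_barrier` is an illustrative stand-in for `coresight_insert_barrier_packet()`):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Four-word barrier packet, as in the patched coresight.c. */
static const uint32_t barrier_pkt[4] = {
	0x7fffffff, 0x7fffffff, 0x7fffffff, 0x7fffffff
};

/* Copy the whole packet at once; no terminator scan needed. */
static void insert_barrier(uint32_t *buf)
{
	memcpy(buf, barrier_pkt, sizeof(barrier_pkt));
}

/* Small self-check helper: first word of a buffer after insertion. */
static uint32_t first_word_after_insert(void)
{
	uint32_t buf[4] = {0};

	insert_barrier(buf);
	return buf[0];
}
```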
+3
drivers/input/serio/hyperv-keyboard.c
··· 424 424 .id_table = id_table, 425 425 .probe = hv_kbd_probe, 426 426 .remove = hv_kbd_remove, 427 + .driver = { 428 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 429 + }, 427 430 }; 428 431 429 432 static int __init hv_kbd_init(void)
+7
drivers/ipack/carriers/tpci200.c
··· 304 304 ioremap_nocache(pci_resource_start(tpci200->info->pdev, 305 305 TPCI200_IP_INTERFACE_BAR), 306 306 TPCI200_IFACE_SIZE); 307 + if (!tpci200->info->interface_regs) { 308 + dev_err(&tpci200->info->pdev->dev, 309 + "(bn 0x%X, sn 0x%X) failed to map driver user space!", 310 + tpci200->info->pdev->bus->number, 311 + tpci200->info->pdev->devfn); 312 + goto out_release_mem8_space; 313 + } 307 314 308 315 /* Initialize lock that protects interface_regs */ 309 316 spin_lock_init(&tpci200->regs_lock);
+1
drivers/media/platform/coda/imx-vdoa.c
··· 18 18 #include <linux/device.h> 19 19 #include <linux/interrupt.h> 20 20 #include <linux/module.h> 21 + #include <linux/mod_devicetable.h> 21 22 #include <linux/dma-mapping.h> 22 23 #include <linux/platform_device.h> 23 24 #include <linux/videodev2.h>
+1
drivers/media/platform/rcar-fcp.c
··· 10 10 #include <linux/device.h> 11 11 #include <linux/list.h> 12 12 #include <linux/module.h> 13 + #include <linux/mod_devicetable.h> 13 14 #include <linux/mutex.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/pm_runtime.h>
+1
drivers/media/platform/vimc/vimc-capture.c
··· 17 17 18 18 #include <linux/component.h> 19 19 #include <linux/module.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/platform_device.h> 21 22 #include <media/v4l2-ioctl.h> 22 23 #include <media/videobuf2-core.h>
+1
drivers/media/platform/vimc/vimc-debayer.c
··· 17 17 18 18 #include <linux/component.h> 19 19 #include <linux/module.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/platform_device.h> 21 22 #include <linux/vmalloc.h> 22 23 #include <linux/v4l2-mediabus.h>
+1
drivers/media/platform/vimc/vimc-scaler.c
··· 17 17 18 18 #include <linux/component.h> 19 19 #include <linux/module.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/platform_device.h> 21 22 #include <linux/vmalloc.h> 22 23 #include <linux/v4l2-mediabus.h>
+1
drivers/media/platform/vimc/vimc-sensor.c
··· 19 19 #include <linux/freezer.h> 20 20 #include <linux/kthread.h> 21 21 #include <linux/module.h> 22 + #include <linux/mod_devicetable.h> 22 23 #include <linux/platform_device.h> 23 24 #include <linux/v4l2-mediabus.h> 24 25 #include <linux/vmalloc.h>
+1
drivers/memory/tegra/tegra186.c
··· 8 8 9 9 #include <linux/io.h> 10 10 #include <linux/module.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/platform_device.h> 12 13 13 14 #include <dt-bindings/memory/tegra186-mc.h>
+1
drivers/mfd/atmel-hlcdc.c
··· 22 22 #include <linux/mfd/atmel-hlcdc.h> 23 23 #include <linux/mfd/core.h> 24 24 #include <linux/module.h> 25 + #include <linux/mod_devicetable.h> 25 26 #include <linux/platform_device.h> 26 27 #include <linux/regmap.h> 27 28
+1
drivers/mfd/cros_ec_dev.c
··· 20 20 #include <linux/fs.h> 21 21 #include <linux/mfd/core.h> 22 22 #include <linux/module.h> 23 + #include <linux/mod_devicetable.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/pm.h> 25 26 #include <linux/slab.h>
+74 -10
drivers/misc/aspeed-lpc-snoop.c
··· 16 16 17 17 #include <linux/bitops.h> 18 18 #include <linux/interrupt.h> 19 + #include <linux/fs.h> 19 20 #include <linux/kfifo.h> 20 21 #include <linux/mfd/syscon.h> 22 + #include <linux/miscdevice.h> 21 23 #include <linux/module.h> 22 24 #include <linux/of.h> 23 25 #include <linux/of_device.h> 24 26 #include <linux/platform_device.h> 27 + #include <linux/poll.h> 25 28 #include <linux/regmap.h> 26 29 27 30 #define DEVICE_NAME "aspeed-lpc-snoop" ··· 62 59 unsigned int has_hicrb_ensnp; 63 60 }; 64 61 62 + struct aspeed_lpc_snoop_channel { 63 + struct kfifo fifo; 64 + wait_queue_head_t wq; 65 + struct miscdevice miscdev; 66 + }; 67 + 65 68 struct aspeed_lpc_snoop { 66 69 struct regmap *regmap; 67 70 int irq; 68 - struct kfifo snoop_fifo[NUM_SNOOP_CHANNELS]; 71 + struct aspeed_lpc_snoop_channel chan[NUM_SNOOP_CHANNELS]; 72 + }; 73 + 74 + static struct aspeed_lpc_snoop_channel *snoop_file_to_chan(struct file *file) 75 + { 76 + return container_of(file->private_data, 77 + struct aspeed_lpc_snoop_channel, 78 + miscdev); 79 + } 80 + 81 + static ssize_t snoop_file_read(struct file *file, char __user *buffer, 82 + size_t count, loff_t *ppos) 83 + { 84 + struct aspeed_lpc_snoop_channel *chan = snoop_file_to_chan(file); 85 + unsigned int copied; 86 + int ret = 0; 87 + 88 + if (kfifo_is_empty(&chan->fifo)) { 89 + if (file->f_flags & O_NONBLOCK) 90 + return -EAGAIN; 91 + ret = wait_event_interruptible(chan->wq, 92 + !kfifo_is_empty(&chan->fifo)); 93 + if (ret == -ERESTARTSYS) 94 + return -EINTR; 95 + } 96 + ret = kfifo_to_user(&chan->fifo, buffer, count, &copied); 97 + 98 + return ret ? ret : copied; 99 + } 100 + 101 + static unsigned int snoop_file_poll(struct file *file, 102 + struct poll_table_struct *pt) 103 + { 104 + struct aspeed_lpc_snoop_channel *chan = snoop_file_to_chan(file); 105 + 106 + poll_wait(file, &chan->wq, pt); 107 + return !kfifo_is_empty(&chan->fifo) ? 
POLLIN : 0; 108 + } 109 + 110 + static const struct file_operations snoop_fops = { 111 + .owner = THIS_MODULE, 112 + .read = snoop_file_read, 113 + .poll = snoop_file_poll, 114 + .llseek = noop_llseek, 69 115 }; 70 116 71 117 /* Save a byte to a FIFO and discard the oldest byte if FIFO is full */ 72 - static void put_fifo_with_discard(struct kfifo *fifo, u8 val) 118 + static void put_fifo_with_discard(struct aspeed_lpc_snoop_channel *chan, u8 val) 73 119 { 74 - if (!kfifo_initialized(fifo)) 120 + if (!kfifo_initialized(&chan->fifo)) 75 121 return; 76 - if (kfifo_is_full(fifo)) 77 - kfifo_skip(fifo); 78 - kfifo_put(fifo, val); 122 + if (kfifo_is_full(&chan->fifo)) 123 + kfifo_skip(&chan->fifo); 124 + kfifo_put(&chan->fifo, val); 125 + wake_up_interruptible(&chan->wq); 79 126 } 80 127 81 128 static irqreturn_t aspeed_lpc_snoop_irq(int irq, void *arg) ··· 150 97 if (reg & HICR6_STR_SNP0W) { 151 98 u8 val = (data & SNPWDR_CH0_MASK) >> SNPWDR_CH0_SHIFT; 152 99 153 - put_fifo_with_discard(&lpc_snoop->snoop_fifo[0], val); 100 + put_fifo_with_discard(&lpc_snoop->chan[0], val); 154 101 } 155 102 if (reg & HICR6_STR_SNP1W) { 156 103 u8 val = (data & SNPWDR_CH1_MASK) >> SNPWDR_CH1_SHIFT; 157 104 158 - put_fifo_with_discard(&lpc_snoop->snoop_fifo[1], val); 105 + put_fifo_with_discard(&lpc_snoop->chan[1], val); 159 106 } 160 107 161 108 return IRQ_HANDLED; ··· 192 139 const struct aspeed_lpc_snoop_model_data *model_data = 193 140 of_device_get_match_data(dev); 194 141 142 + init_waitqueue_head(&lpc_snoop->chan[channel].wq); 195 143 /* Create FIFO datastructure */ 196 - rc = kfifo_alloc(&lpc_snoop->snoop_fifo[channel], 144 + rc = kfifo_alloc(&lpc_snoop->chan[channel].fifo, 197 145 SNOOP_FIFO_SIZE, GFP_KERNEL); 146 + if (rc) 147 + return rc; 148 + 149 + lpc_snoop->chan[channel].miscdev.minor = MISC_DYNAMIC_MINOR; 150 + lpc_snoop->chan[channel].miscdev.name = 151 + devm_kasprintf(dev, GFP_KERNEL, "%s%d", DEVICE_NAME, channel); 152 + lpc_snoop->chan[channel].miscdev.fops = 
&snoop_fops; 153 + lpc_snoop->chan[channel].miscdev.parent = dev; 154 + rc = misc_register(&lpc_snoop->chan[channel].miscdev); 198 155 if (rc) 199 156 return rc; 200 157 ··· 254 191 return; 255 192 } 256 193 257 - kfifo_free(&lpc_snoop->snoop_fifo[channel]); 194 + kfifo_free(&lpc_snoop->chan[channel].fifo); 195 + misc_deregister(&lpc_snoop->chan[channel].miscdev); 258 196 } 259 197 260 198 static int aspeed_lpc_snoop_probe(struct platform_device *pdev)
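`put_fifo_with_discard()` above keeps only the newest bytes: when the kfifo is full, the oldest byte is skipped before the new one is stored. A sketch of that overflow policy, modeled as a tiny ring buffer instead of a kfifo (all names are illustrative):

```c
#include <assert.h>

#define RING_SIZE 4

struct ring {
	unsigned char data[RING_SIZE];
	int head, len;
};

/* On overflow, drop the oldest byte first — as put_fifo_with_discard()
 * does with kfifo_skip() before kfifo_put(). */
static void ring_put_discard(struct ring *r, unsigned char val)
{
	if (r->len == RING_SIZE) {
		r->head = (r->head + 1) % RING_SIZE;
		r->len--;
	}
	r->data[(r->head + r->len) % RING_SIZE] = val;
	r->len++;
}

static unsigned char ring_peek_oldest(const struct ring *r)
{
	return r->data[r->head];
}

/* Push five bytes into a four-slot ring: byte 1 is discarded. */
static unsigned char fill_then_peek(void)
{
	struct ring r = { .head = 0, .len = 0 };

	for (unsigned char v = 1; v <= 5; v++)
		ring_put_discard(&r, v);
	return ring_peek_oldest(&r);
}
```

The same policy is what makes the snoop channel lossy-but-current: readers always see the most recent POST codes even if they fall behind.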
+6 -6
drivers/misc/cardreader/rtsx_pcr.c
··· 80 80 0xFC, 0); 81 81 } 82 82 83 - int rtsx_comm_set_ltr_latency(struct rtsx_pcr *pcr, u32 latency) 83 + static int rtsx_comm_set_ltr_latency(struct rtsx_pcr *pcr, u32 latency) 84 84 { 85 85 rtsx_pci_write_register(pcr, MSGTXDATA0, 86 86 MASK_8_BIT_DEF, (u8) (latency & 0xFF)); ··· 143 143 return 0; 144 144 } 145 145 146 - void rtsx_set_l1off_sub_cfg_d0(struct rtsx_pcr *pcr, int active) 146 + static void rtsx_set_l1off_sub_cfg_d0(struct rtsx_pcr *pcr, int active) 147 147 { 148 148 if (pcr->ops->set_l1off_cfg_sub_d0) 149 149 pcr->ops->set_l1off_cfg_sub_d0(pcr, active); ··· 162 162 rtsx_set_l1off_sub_cfg_d0(pcr, 1); 163 163 } 164 164 165 - void rtsx_pm_full_on(struct rtsx_pcr *pcr) 165 + static void rtsx_pm_full_on(struct rtsx_pcr *pcr) 166 166 { 167 167 if (pcr->ops->full_on) 168 168 pcr->ops->full_on(pcr); ··· 967 967 pcr->slots[RTSX_MS_CARD].p_dev); 968 968 } 969 969 970 - void rtsx_pci_process_ocp(struct rtsx_pcr *pcr) 970 + static void rtsx_pci_process_ocp(struct rtsx_pcr *pcr) 971 971 { 972 972 if (pcr->ops->process_ocp) 973 973 pcr->ops->process_ocp(pcr); 974 974 } 975 975 976 - int rtsx_pci_process_ocp_interrupt(struct rtsx_pcr *pcr) 976 + static int rtsx_pci_process_ocp_interrupt(struct rtsx_pcr *pcr) 977 977 { 978 978 if (pcr->option.ocp_en) 979 979 rtsx_pci_process_ocp(pcr); ··· 1094 1094 rtsx_enable_aspm(pcr); 1095 1095 } 1096 1096 1097 - void rtsx_pm_power_saving(struct rtsx_pcr *pcr) 1097 + static void rtsx_pm_power_saving(struct rtsx_pcr *pcr) 1098 1098 { 1099 1099 if (pcr->ops->power_saving) 1100 1100 pcr->ops->power_saving(pcr);
+4
drivers/misc/eeprom/at25.c
··· 94 94 switch (at25->addrlen) { 95 95 default: /* case 3 */ 96 96 *cp++ = offset >> 16; 97 + /* fall through */ 97 98 case 2: 98 99 *cp++ = offset >> 8; 100 + /* fall through */ 99 101 case 1: 100 102 case 0: /* can't happen: for better codegen */ 101 103 *cp++ = offset >> 0; ··· 182 180 switch (at25->addrlen) { 183 181 default: /* case 3 */ 184 182 *cp++ = offset >> 16; 183 + /* fall through */ 185 184 case 2: 186 185 *cp++ = offset >> 8; 186 + /* fall through */ 187 187 case 1: 188 188 case 0: /* can't happen: for better codegen */ 189 189 *cp++ = offset >> 0;
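The at25.c change above only annotates deliberate fall-through: for a 3-byte address the switch emits offset bits 23..16, falls into the 2-byte case for bits 15..8, then into the 1-byte case for bits 7..0, producing a big-endian address. A sketch of that encoding (`encode_addr` is an illustrative helper, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* Emit addrlen big-endian address bytes via case fall-through,
 * as in at25_ee_read()/at25_ee_write(). Returns bytes written. */
static int encode_addr(uint8_t *cp, int addrlen, uint32_t offset)
{
	uint8_t *start = cp;

	switch (addrlen) {
	default:	/* case 3 */
		*cp++ = offset >> 16;
		/* fall through */
	case 2:
		*cp++ = offset >> 8;
		/* fall through */
	case 1:
	case 0:
		*cp++ = offset >> 0;
	}
	return cp - start;
}

/* Pack the written bytes and count into one word for easy checking. */
static uint32_t encode_test(int addrlen, uint32_t offset)
{
	uint8_t buf[3] = {0, 0, 0};
	int n = encode_addr(buf, addrlen, offset);

	return ((uint32_t)n << 24) | ((uint32_t)buf[0] << 16) |
	       ((uint32_t)buf[1] << 8) | buf[2];
}
```

Each case adds one more high-order byte, which is why only a `/* fall through */` comment (not a `break`) is the correct fix here.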
+2 -4
drivers/misc/eeprom/idt_89hpesx.c
··· 938 938 { 939 939 struct idt_89hpesx_dev *pdev = filep->private_data; 940 940 char *colon_ch, *csraddr_str, *csrval_str; 941 - int ret, csraddr_len, csrval_len; 941 + int ret, csraddr_len; 942 942 u32 csraddr, csrval; 943 943 char *buf; 944 944 ··· 974 974 csraddr_str[csraddr_len] = '\0'; 975 975 /* Register value must follow the colon */ 976 976 csrval_str = colon_ch + 1; 977 - csrval_len = strnlen(csrval_str, count - csraddr_len); 978 977 } else /* if (str_colon == NULL) */ { 979 978 csraddr_str = (char *)buf; /* Just to shut warning up */ 980 979 csraddr_len = strnlen(csraddr_str, count); 981 980 csrval_str = NULL; 982 - csrval_len = 0; 983 981 } 984 982 985 983 /* Convert CSR address to u32 value */ ··· 1128 1130 1129 1131 device_for_each_child_node(dev, fwnode) { 1130 1132 ee_id = idt_ee_match_id(fwnode); 1131 - if (IS_ERR_OR_NULL(ee_id)) { 1133 + if (!ee_id) { 1132 1134 dev_warn(dev, "Skip unsupported EEPROM device"); 1133 1135 continue; 1134 1136 } else
+2 -1
drivers/misc/eeprom/max6875.c
··· 148 148 if (client->addr & 1) 149 149 return -ENODEV; 150 150 151 - if (!(data = kzalloc(sizeof(struct max6875_data), GFP_KERNEL))) 151 + data = kzalloc(sizeof(struct max6875_data), GFP_KERNEL); 152 + if (!data) 152 153 return -ENOMEM; 153 154 154 155 /* A fake client is created on the odd address */
+1 -1
drivers/misc/genwqe/card_base.h
··· 497 497 static inline bool dma_mapping_used(struct dma_mapping *m) 498 498 { 499 499 if (!m) 500 - return 0; 500 + return false; 501 501 return m->size != 0; 502 502 } 503 503
-4
drivers/misc/genwqe/card_debugfs.c
··· 305 305 static int genwqe_info_show(struct seq_file *s, void *unused) 306 306 { 307 307 struct genwqe_dev *cd = s->private; 308 - u16 val16, type; 309 308 u64 app_id, slu_id, bitstream = -1; 310 309 struct pci_dev *pci_dev = cd->pci_dev; 311 310 ··· 313 314 314 315 if (genwqe_is_privileged(cd)) 315 316 bitstream = __genwqe_readq(cd, IO_SLU_BITSTREAM); 316 - 317 - val16 = (u16)(slu_id & 0x0fLLU); 318 - type = (u16)((slu_id >> 20) & 0xffLLU); 319 317 320 318 seq_printf(s, "%s driver version: %s\n" 321 319 " Device Name/Type: %s %s CardIdx: %d\n"
-5
drivers/misc/genwqe/card_dev.c
··· 304 304 { 305 305 struct genwqe_dev *cd; 306 306 struct genwqe_file *cfile; 307 - struct pci_dev *pci_dev; 308 307 309 308 cfile = kzalloc(sizeof(*cfile), GFP_KERNEL); 310 309 if (cfile == NULL) 311 310 return -ENOMEM; 312 311 313 312 cd = container_of(inode->i_cdev, struct genwqe_dev, cdev_genwqe); 314 - pci_dev = cd->pci_dev; 315 313 cfile->cd = cd; 316 314 cfile->filp = filp; 317 315 cfile->client = NULL; ··· 862 864 struct genwqe_dev *cd = cfile->cd; 863 865 struct genwqe_ddcb_cmd *cmd = &req->cmd; 864 866 struct dma_mapping *m; 865 - const char *type = "UNKNOWN"; 866 867 867 868 for (i = 0, asiv_offs = 0x00; asiv_offs <= 0x58; 868 869 i++, asiv_offs += 0x08) { ··· 930 933 931 934 m = genwqe_search_pin(cfile, u_addr, u_size, NULL); 932 935 if (m != NULL) { 933 - type = "PINNING"; 934 936 page_offs = (u_addr - 935 937 (u64)m->u_vaddr)/PAGE_SIZE; 936 938 } else { 937 - type = "MAPPING"; 938 939 m = &req->dma_mappings[i]; 939 940 940 941 genwqe_mapping_init(m,
+1 -1
drivers/misc/ibmvmc.c
··· 273 273 dma_addr_t *dma_handle) 274 274 { 275 275 /* allocate memory */ 276 - void *buffer = kzalloc(size, GFP_KERNEL); 276 + void *buffer = kzalloc(size, GFP_ATOMIC); 277 277 278 278 if (!buffer) { 279 279 *dma_handle = 0;
+83 -9
drivers/misc/mei/bus-fixup.c
··· 1 1 /* 2 2 * 3 3 * Intel Management Engine Interface (Intel MEI) Linux driver 4 - * Copyright (c) 2003-2013, Intel Corporation. 4 + * Copyright (c) 2003-2018, Intel Corporation. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify it 7 7 * under the terms and conditions of the GNU General Public License, ··· 96 96 u8 data[0]; 97 97 } __packed; 98 98 99 + struct mkhi_fw_ver_block { 100 + u16 minor; 101 + u8 major; 102 + u8 platform; 103 + u16 buildno; 104 + u16 hotfix; 105 + } __packed; 106 + 107 + struct mkhi_fw_ver { 108 + struct mkhi_fw_ver_block ver[MEI_MAX_FW_VER_BLOCKS]; 109 + } __packed; 110 + 99 111 #define MKHI_FWCAPS_GROUP_ID 0x3 100 112 #define MKHI_FWCAPS_SET_OS_VER_APP_RULE_CMD 6 113 + #define MKHI_GEN_GROUP_ID 0xFF 114 + #define MKHI_GEN_GET_FW_VERSION_CMD 0x2 101 115 struct mkhi_msg_hdr { 102 116 u8 group_id; 103 117 u8 command; ··· 153 139 return __mei_cl_send(cldev->cl, buf, size, mode); 154 140 } 155 141 142 + #define MKHI_FWVER_BUF_LEN (sizeof(struct mkhi_msg_hdr) + \ 143 + sizeof(struct mkhi_fw_ver)) 144 + #define MKHI_FWVER_LEN(__num) (sizeof(struct mkhi_msg_hdr) + \ 145 + sizeof(struct mkhi_fw_ver_block) * (__num)) 146 + #define MKHI_RCV_TIMEOUT 500 /* receive timeout in msec */ 147 + static int mei_fwver(struct mei_cl_device *cldev) 148 + { 149 + char buf[MKHI_FWVER_BUF_LEN]; 150 + struct mkhi_msg *req; 151 + struct mkhi_fw_ver *fwver; 152 + int bytes_recv, ret, i; 153 + 154 + memset(buf, 0, sizeof(buf)); 155 + 156 + req = (struct mkhi_msg *)buf; 157 + req->hdr.group_id = MKHI_GEN_GROUP_ID; 158 + req->hdr.command = MKHI_GEN_GET_FW_VERSION_CMD; 159 + 160 + ret = __mei_cl_send(cldev->cl, buf, sizeof(struct mkhi_msg_hdr), 161 + MEI_CL_IO_TX_BLOCKING); 162 + if (ret < 0) { 163 + dev_err(&cldev->dev, "Could not send ReqFWVersion cmd\n"); 164 + return ret; 165 + } 166 + 167 + ret = 0; 168 + bytes_recv = __mei_cl_recv(cldev->cl, buf, sizeof(buf), 0, 169 + MKHI_RCV_TIMEOUT); 170 + if (bytes_recv < 0 || 
(size_t)bytes_recv < MKHI_FWVER_LEN(1)) { 171 + /* 172 + * Should be at least one version block, 173 + * error out if nothing found 174 + */ 175 + dev_err(&cldev->dev, "Could not read FW version\n"); 176 + return -EIO; 177 + } 178 + 179 + fwver = (struct mkhi_fw_ver *)req->data; 180 + memset(cldev->bus->fw_ver, 0, sizeof(cldev->bus->fw_ver)); 181 + for (i = 0; i < MEI_MAX_FW_VER_BLOCKS; i++) { 182 + if ((size_t)bytes_recv < MKHI_FWVER_LEN(i + 1)) 183 + break; 184 + dev_dbg(&cldev->dev, "FW version%d %d:%d.%d.%d.%d\n", 185 + i, fwver->ver[i].platform, 186 + fwver->ver[i].major, fwver->ver[i].minor, 187 + fwver->ver[i].hotfix, fwver->ver[i].buildno); 188 + 189 + cldev->bus->fw_ver[i].platform = fwver->ver[i].platform; 190 + cldev->bus->fw_ver[i].major = fwver->ver[i].major; 191 + cldev->bus->fw_ver[i].minor = fwver->ver[i].minor; 192 + cldev->bus->fw_ver[i].hotfix = fwver->ver[i].hotfix; 193 + cldev->bus->fw_ver[i].buildno = fwver->ver[i].buildno; 194 + } 195 + 196 + return ret; 197 + } 198 + 156 199 static void mei_mkhi_fix(struct mei_cl_device *cldev) 157 200 { 158 201 int ret; 159 - 160 - if (!cldev->bus->hbm_f_os_supported) 161 - return; 162 202 163 203 ret = mei_cldev_enable(cldev); 164 204 if (ret) 165 205 return; 166 206 167 - ret = mei_osver(cldev); 207 + ret = mei_fwver(cldev); 168 208 if (ret < 0) 169 - dev_err(&cldev->dev, "OS version command failed %d\n", ret); 209 + dev_err(&cldev->dev, "FW version command failed %d\n", ret); 170 210 211 + if (cldev->bus->hbm_f_os_supported) { 212 + ret = mei_osver(cldev); 213 + if (ret < 0) 214 + dev_err(&cldev->dev, "OS version command failed %d\n", 215 + ret); 216 + } 171 217 mei_cldev_disable(cldev); 172 218 } 173 219 ··· 340 266 return -ENOMEM; 341 267 342 268 ret = 0; 343 - bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0); 344 - if (bytes_recv < if_version_length) { 269 + bytes_recv = __mei_cl_recv(cl, (u8 *)reply, if_version_length, 0, 0); 270 + if (bytes_recv < 0 || (size_t)bytes_recv < 
if_version_length) { 345 271 dev_err(bus->dev, "Could not read IF version\n"); 346 272 ret = -EIO; 347 273 goto err; ··· 484 410 { 485 411 struct mei_fixup *f; 486 412 const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl); 487 - int i; 413 + size_t i; 488 414 489 415 for (i = 0; i < ARRAY_SIZE(mei_fixups); i++) { 490 416
+26 -10
drivers/misc/mei/bus.c
··· 116 116 * @buf: buffer to receive 117 117 * @length: buffer length 118 118 * @mode: io mode 119 + * @timeout: recv timeout, 0 for infinite timeout 119 120 * 120 121 * Return: read size in bytes of < 0 on error 121 122 */ 122 123 ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length, 123 - unsigned int mode) 124 + unsigned int mode, unsigned long timeout) 124 125 { 125 126 struct mei_device *bus; 126 127 struct mei_cl_cb *cb; ··· 159 158 160 159 mutex_unlock(&bus->device_lock); 161 160 162 - if (wait_event_interruptible(cl->rx_wait, 163 - (!list_empty(&cl->rd_completed)) || 164 - (!mei_cl_is_connected(cl)))) { 165 - 166 - if (signal_pending(current)) 167 - return -EINTR; 168 - return -ERESTARTSYS; 161 + if (timeout) { 162 + rets = wait_event_interruptible_timeout 163 + (cl->rx_wait, 164 + (!list_empty(&cl->rd_completed)) || 165 + (!mei_cl_is_connected(cl)), 166 + msecs_to_jiffies(timeout)); 167 + if (rets == 0) 168 + return -ETIME; 169 + if (rets < 0) { 170 + if (signal_pending(current)) 171 + return -EINTR; 172 + return -ERESTARTSYS; 173 + } 174 + } else { 175 + if (wait_event_interruptible 176 + (cl->rx_wait, 177 + (!list_empty(&cl->rd_completed)) || 178 + (!mei_cl_is_connected(cl)))) { 179 + if (signal_pending(current)) 180 + return -EINTR; 181 + return -ERESTARTSYS; 182 + } 169 183 } 170 184 171 185 mutex_lock(&bus->device_lock); ··· 247 231 { 248 232 struct mei_cl *cl = cldev->cl; 249 233 250 - return __mei_cl_recv(cl, buf, length, MEI_CL_IO_RX_NONBLOCK); 234 + return __mei_cl_recv(cl, buf, length, MEI_CL_IO_RX_NONBLOCK, 0); 251 235 } 252 236 EXPORT_SYMBOL_GPL(mei_cldev_recv_nonblock); 253 237 ··· 264 248 { 265 249 struct mei_cl *cl = cldev->cl; 266 250 267 - return __mei_cl_recv(cl, buf, length, 0); 251 + return __mei_cl_recv(cl, buf, length, 0, 0); 268 252 } 269 253 EXPORT_SYMBOL_GPL(mei_cldev_recv); 270 254
+81 -49
drivers/misc/mei/client.c
··· 863 863 int slots; 864 864 int ret; 865 865 866 - msg_slots = mei_data2slots(sizeof(struct hbm_client_connect_request)); 866 + msg_slots = mei_hbm2slots(sizeof(struct hbm_client_connect_request)); 867 867 slots = mei_hbuf_empty_slots(dev); 868 + if (slots < 0) 869 + return -EOVERFLOW; 868 870 869 - if (slots < msg_slots) 871 + if ((u32)slots < msg_slots) 870 872 return -EMSGSIZE; 871 873 872 874 ret = mei_cl_send_disconnect(cl, cb); ··· 1055 1053 int slots; 1056 1054 int rets; 1057 1055 1058 - msg_slots = mei_data2slots(sizeof(struct hbm_client_connect_request)); 1059 - slots = mei_hbuf_empty_slots(dev); 1060 - 1061 1056 if (mei_cl_is_other_connecting(cl)) 1062 1057 return 0; 1063 1058 1064 - if (slots < msg_slots) 1059 + msg_slots = mei_hbm2slots(sizeof(struct hbm_client_connect_request)); 1060 + slots = mei_hbuf_empty_slots(dev); 1061 + if (slots < 0) 1062 + return -EOVERFLOW; 1063 + 1064 + if ((u32)slots < msg_slots) 1065 1065 return -EMSGSIZE; 1066 1066 1067 1067 rets = mei_cl_send_connect(cl, cb); ··· 1298 1294 int ret; 1299 1295 bool request; 1300 1296 1301 - msg_slots = mei_data2slots(sizeof(struct hbm_client_connect_request)); 1297 + msg_slots = mei_hbm2slots(sizeof(struct hbm_client_connect_request)); 1302 1298 slots = mei_hbuf_empty_slots(dev); 1299 + if (slots < 0) 1300 + return -EOVERFLOW; 1303 1301 1304 - if (slots < msg_slots) 1302 + if ((u32)slots < msg_slots) 1305 1303 return -EMSGSIZE; 1306 1304 1307 1305 request = mei_cl_notify_fop2req(cb->fop_type); ··· 1539 1533 } 1540 1534 1541 1535 /** 1536 + * mei_msg_hdr_init - initialize mei message header 1537 + * 1538 + * @mei_hdr: mei message header 1539 + * @cb: message callback structure 1540 + */ 1541 + static void mei_msg_hdr_init(struct mei_msg_hdr *mei_hdr, struct mei_cl_cb *cb) 1542 + { 1543 + mei_hdr->host_addr = mei_cl_host_addr(cb->cl); 1544 + mei_hdr->me_addr = mei_cl_me_id(cb->cl); 1545 + mei_hdr->length = 0; 1546 + mei_hdr->reserved = 0; 1547 + mei_hdr->msg_complete = 0; 1548 + 
mei_hdr->dma_ring = 0; 1549 + mei_hdr->internal = cb->internal; 1550 + } 1551 + 1552 + /** 1542 1553 * mei_cl_irq_write - write a message to device 1543 1554 * from the interrupt thread context 1544 1555 * ··· 1571 1548 struct mei_device *dev; 1572 1549 struct mei_msg_data *buf; 1573 1550 struct mei_msg_hdr mei_hdr; 1551 + size_t hdr_len = sizeof(mei_hdr); 1574 1552 size_t len; 1575 - u32 msg_slots; 1576 - int slots; 1553 + size_t hbuf_len; 1554 + int hbuf_slots; 1577 1555 int rets; 1578 1556 bool first_chunk; 1579 1557 ··· 1596 1572 return 0; 1597 1573 } 1598 1574 1599 - slots = mei_hbuf_empty_slots(dev); 1600 1575 len = buf->size - cb->buf_idx; 1601 - msg_slots = mei_data2slots(len); 1576 + hbuf_slots = mei_hbuf_empty_slots(dev); 1577 + if (hbuf_slots < 0) { 1578 + rets = -EOVERFLOW; 1579 + goto err; 1580 + } 1602 1581 1603 - mei_hdr.host_addr = mei_cl_host_addr(cl); 1604 - mei_hdr.me_addr = mei_cl_me_id(cl); 1605 - mei_hdr.reserved = 0; 1606 - mei_hdr.internal = cb->internal; 1582 + hbuf_len = mei_slots2data(hbuf_slots); 1607 1583 1608 - if (slots >= msg_slots) { 1584 + mei_msg_hdr_init(&mei_hdr, cb); 1585 + 1586 + /** 1587 + * Split the message only if we can write the whole host buffer 1588 + * otherwise wait for next time the host buffer is empty. 
1589 + */ 1590 + if (len + hdr_len <= hbuf_len) { 1609 1591 mei_hdr.length = len; 1610 1592 mei_hdr.msg_complete = 1; 1611 - /* Split the message only if we can write the whole host buffer */ 1612 - } else if (slots == dev->hbuf_depth) { 1613 - msg_slots = slots; 1614 - len = (slots * sizeof(u32)) - sizeof(struct mei_msg_hdr); 1615 - mei_hdr.length = len; 1616 - mei_hdr.msg_complete = 0; 1593 + } else if ((u32)hbuf_slots == mei_hbuf_depth(dev)) { 1594 + mei_hdr.length = hbuf_len - hdr_len; 1617 1595 } else { 1618 - /* wait for next time the host buffer is empty */ 1619 1596 return 0; 1620 1597 } 1621 1598 1622 1599 cl_dbg(dev, cl, "buf: size = %zu idx = %zu\n", 1623 1600 cb->buf.size, cb->buf_idx); 1624 1601 1625 - rets = mei_write_message(dev, &mei_hdr, buf->data + cb->buf_idx); 1602 + rets = mei_write_message(dev, &mei_hdr, hdr_len, 1603 + buf->data + cb->buf_idx, mei_hdr.length); 1626 1604 if (rets) 1627 1605 goto err; 1628 1606 1629 1607 cl->status = 0; 1630 1608 cl->writing_state = MEI_WRITING; 1631 1609 cb->buf_idx += mei_hdr.length; 1632 - cb->completed = mei_hdr.msg_complete == 1; 1633 1610 1634 1611 if (first_chunk) { 1635 1612 if (mei_cl_tx_flow_ctrl_creds_reduce(cl)) { ··· 1659 1634 * 1660 1635 * Return: number of bytes sent on success, <0 on failure. 
1661 1636 */ 1662 - int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb) 1637 + ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb) 1663 1638 { 1664 1639 struct mei_device *dev; 1665 1640 struct mei_msg_data *buf; 1666 1641 struct mei_msg_hdr mei_hdr; 1667 - int size; 1668 - int rets; 1642 + size_t hdr_len = sizeof(mei_hdr); 1643 + size_t len; 1644 + size_t hbuf_len; 1645 + int hbuf_slots; 1646 + ssize_t rets; 1669 1647 bool blocking; 1670 1648 1671 1649 if (WARN_ON(!cl || !cl->dev)) ··· 1680 1652 dev = cl->dev; 1681 1653 1682 1654 buf = &cb->buf; 1683 - size = buf->size; 1655 + len = buf->size; 1684 1656 blocking = cb->blocking; 1685 1657 1686 - cl_dbg(dev, cl, "size=%d\n", size); 1658 + cl_dbg(dev, cl, "len=%zd\n", len); 1687 1659 1688 1660 rets = pm_runtime_get(dev->dev); 1689 1661 if (rets < 0 && rets != -EINPROGRESS) { 1690 1662 pm_runtime_put_noidle(dev->dev); 1691 - cl_err(dev, cl, "rpm: get failed %d\n", rets); 1663 + cl_err(dev, cl, "rpm: get failed %zd\n", rets); 1692 1664 goto free; 1693 1665 } 1694 1666 1695 1667 cb->buf_idx = 0; 1696 1668 cl->writing_state = MEI_IDLE; 1697 1669 1698 - mei_hdr.host_addr = mei_cl_host_addr(cl); 1699 - mei_hdr.me_addr = mei_cl_me_id(cl); 1700 - mei_hdr.reserved = 0; 1701 - mei_hdr.msg_complete = 0; 1702 - mei_hdr.internal = cb->internal; 1703 1670 1704 1671 rets = mei_cl_tx_flow_ctrl_creds(cl); 1705 1672 if (rets < 0) 1706 1673 goto err; 1707 1674 1675 + mei_msg_hdr_init(&mei_hdr, cb); 1676 + 1708 1677 if (rets == 0) { 1709 1678 cl_dbg(dev, cl, "No flow control credentials: not sending.\n"); 1710 - rets = size; 1679 + rets = len; 1711 1680 goto out; 1712 1681 } 1682 + 1713 1683 if (!mei_hbuf_acquire(dev)) { 1714 1684 cl_dbg(dev, cl, "Cannot acquire the host buffer: not sending.\n"); 1715 - rets = size; 1685 + rets = len; 1716 1686 goto out; 1717 1687 } 1718 1688 1719 - /* Check for a maximum length */ 1720 - if (size > mei_hbuf_max_len(dev)) { 1721 - mei_hdr.length = mei_hbuf_max_len(dev); 1722 - 
mei_hdr.msg_complete = 0; 1723 - } else { 1724 - mei_hdr.length = size; 1725 - mei_hdr.msg_complete = 1; 1689 + hbuf_slots = mei_hbuf_empty_slots(dev); 1690 + if (hbuf_slots < 0) { 1691 + rets = -EOVERFLOW; 1692 + goto out; 1726 1693 } 1727 1694 1728 - rets = mei_write_message(dev, &mei_hdr, buf->data); 1695 + hbuf_len = mei_slots2data(hbuf_slots); 1696 + 1697 + if (len + hdr_len <= hbuf_len) { 1698 + mei_hdr.length = len; 1699 + mei_hdr.msg_complete = 1; 1700 + } else { 1701 + mei_hdr.length = hbuf_len - hdr_len; 1702 + } 1703 + 1704 + rets = mei_write_message(dev, &mei_hdr, hdr_len, 1705 + buf->data, mei_hdr.length); 1729 1706 if (rets) 1730 1707 goto err; 1731 1708 ··· 1740 1707 1741 1708 cl->writing_state = MEI_WRITING; 1742 1709 cb->buf_idx = mei_hdr.length; 1743 - cb->completed = mei_hdr.msg_complete == 1; 1744 1710 1745 1711 out: 1746 1712 if (mei_hdr.msg_complete) ··· 1767 1735 } 1768 1736 } 1769 1737 1770 - rets = size; 1738 + rets = buf->size; 1771 1739 err: 1772 1740 cl_dbg(dev, cl, "rpm: autosuspend\n"); 1773 1741 pm_runtime_mark_last_busy(dev->dev);
+1 -1
drivers/misc/mei/client.h
··· 202 202 int mei_cl_irq_connect(struct mei_cl *cl, struct mei_cl_cb *cb, 203 203 struct list_head *cmpl_list); 204 204 int mei_cl_read_start(struct mei_cl *cl, size_t length, const struct file *fp); 205 - int mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb); 205 + ssize_t mei_cl_write(struct mei_cl *cl, struct mei_cl_cb *cb); 206 206 int mei_cl_irq_write(struct mei_cl *cl, struct mei_cl_cb *cb, 207 207 struct list_head *cmpl_list); 208 208
+2
drivers/misc/mei/debugfs.c
··· 183 183 dev->hbm_f_fa_supported); 184 184 pos += scnprintf(buf + pos, bufsz - pos, "\tOS: %01d\n", 185 185 dev->hbm_f_os_supported); 186 + pos += scnprintf(buf + pos, bufsz - pos, "\tDR: %01d\n", 187 + dev->hbm_f_dr_supported); 186 188 } 187 189 188 190 pos += scnprintf(buf + pos, bufsz - pos, "pg: %s, %s\n",
+29 -8
drivers/misc/mei/hbm.c
··· 96 96 } 97 97 98 98 /** 99 + * mei_hbm_write_message - wrapper for sending hbm messages. 100 + * 101 + * @dev: mei device 102 + * @hdr: mei header 103 + * @data: payload 104 + */ 105 + static inline int mei_hbm_write_message(struct mei_device *dev, 106 + struct mei_msg_hdr *hdr, 107 + const void *data) 108 + { 109 + return mei_write_message(dev, hdr, sizeof(*hdr), data, hdr->length); 110 + } 111 + 112 + /** 99 113 * mei_hbm_idle - set hbm to idle state 100 114 * 101 115 * @dev: the device structure ··· 145 131 hdr->me_addr = 0; 146 132 hdr->length = length; 147 133 hdr->msg_complete = 1; 134 + hdr->dma_ring = 0; 148 135 hdr->reserved = 0; 149 136 hdr->internal = 0; 150 137 } ··· 189 174 mei_hbm_hdr(&mei_hdr, len); 190 175 mei_hbm_cl_hdr(cl, hbm_cmd, buf, len); 191 176 192 - return mei_write_message(dev, &mei_hdr, buf); 177 + return mei_hbm_write_message(dev, &mei_hdr, buf); 193 178 } 194 179 195 180 /** ··· 282 267 start_req.host_version.minor_version = HBM_MINOR_VERSION; 283 268 284 269 dev->hbm_state = MEI_HBM_IDLE; 285 - ret = mei_write_message(dev, &mei_hdr, &start_req); 270 + ret = mei_hbm_write_message(dev, &mei_hdr, &start_req); 286 271 if (ret) { 287 272 dev_err(dev->dev, "version message write failed: ret = %d\n", 288 273 ret); ··· 319 304 enum_req.flags |= dev->hbm_f_ie_supported ? 
320 305 MEI_HBM_ENUM_F_IMMEDIATE_ENUM : 0; 321 306 322 - ret = mei_write_message(dev, &mei_hdr, &enum_req); 307 + ret = mei_hbm_write_message(dev, &mei_hdr, &enum_req); 323 308 if (ret) { 324 309 dev_err(dev->dev, "enumeration request write failed: ret = %d.\n", 325 310 ret); ··· 388 373 resp.me_addr = addr; 389 374 resp.status = status; 390 375 391 - ret = mei_write_message(dev, &mei_hdr, &resp); 376 + ret = mei_hbm_write_message(dev, &mei_hdr, &resp); 392 377 if (ret) 393 378 dev_err(dev->dev, "add client response write failed: ret = %d\n", 394 379 ret); ··· 445 430 446 431 req.start = start; 447 432 448 - ret = mei_write_message(dev, &mei_hdr, &req); 433 + ret = mei_hbm_write_message(dev, &mei_hdr, &req); 449 434 if (ret) 450 435 dev_err(dev->dev, "notify request failed: ret = %d\n", ret); 451 436 ··· 570 555 prop_req.hbm_cmd = HOST_CLIENT_PROPERTIES_REQ_CMD; 571 556 prop_req.me_addr = addr; 572 557 573 - ret = mei_write_message(dev, &mei_hdr, &prop_req); 558 + ret = mei_hbm_write_message(dev, &mei_hdr, &prop_req); 574 559 if (ret) { 575 560 dev_err(dev->dev, "properties request write failed: ret = %d\n", 576 561 ret); ··· 607 592 memset(&req, 0, len); 608 593 req.hbm_cmd = pg_cmd; 609 594 610 - ret = mei_write_message(dev, &mei_hdr, &req); 595 + ret = mei_hbm_write_message(dev, &mei_hdr, &req); 611 596 if (ret) 612 597 dev_err(dev->dev, "power gate command write failed.\n"); 613 598 return ret; ··· 633 618 req.hbm_cmd = HOST_STOP_REQ_CMD; 634 619 req.reason = DRIVER_STOP_REQUEST; 635 620 636 - return mei_write_message(dev, &mei_hdr, &req); 621 + return mei_hbm_write_message(dev, &mei_hdr, &req); 637 622 } 638 623 639 624 /** ··· 1007 992 /* OS ver message Support */ 1008 993 if (dev->version.major_version >= HBM_MAJOR_VERSION_OS) 1009 994 dev->hbm_f_os_supported = 1; 995 + 996 + /* DMA Ring Support */ 997 + if (dev->version.major_version > HBM_MAJOR_VERSION_DR || 998 + (dev->version.major_version == HBM_MAJOR_VERSION_DR && 999 + dev->version.minor_version >= 
HBM_MINOR_VERSION_DR)) 1000 + dev->hbm_f_dr_supported = 1; 1010 1001 } 1011 1002 1012 1003 /**
+52 -27
drivers/misc/mei/hw-me.c
··· 19 19 #include <linux/kthread.h> 20 20 #include <linux/interrupt.h> 21 21 #include <linux/pm_runtime.h> 22 + #include <linux/sizes.h> 22 23 23 24 #include "mei_dev.h" 24 25 #include "hbm.h" ··· 229 228 230 229 /* Doesn't change in runtime */ 231 230 hcsr = mei_hcsr_read(dev); 232 - dev->hbuf_depth = (hcsr & H_CBD) >> 24; 231 + hw->hbuf_depth = (hcsr & H_CBD) >> 24; 233 232 234 233 reg = 0; 235 234 pci_read_config_dword(pdev, PCI_CFG_HFS_1, &reg); ··· 491 490 */ 492 491 static int mei_me_hbuf_empty_slots(struct mei_device *dev) 493 492 { 493 + struct mei_me_hw *hw = to_me_hw(dev); 494 494 unsigned char filled_slots, empty_slots; 495 495 496 496 filled_slots = mei_hbuf_filled_slots(dev); 497 - empty_slots = dev->hbuf_depth - filled_slots; 497 + empty_slots = hw->hbuf_depth - filled_slots; 498 498 499 499 /* check for overflow */ 500 - if (filled_slots > dev->hbuf_depth) 500 + if (filled_slots > hw->hbuf_depth) 501 501 return -EOVERFLOW; 502 502 503 503 return empty_slots; 504 504 } 505 505 506 506 /** 507 - * mei_me_hbuf_max_len - returns size of hw buffer. 507 + * mei_me_hbuf_depth - returns depth of the hw buffer. 508 508 * 509 509 * @dev: the device structure 510 510 * 511 - * Return: size of hw buffer in bytes 511 + * Return: size of hw buffer in slots 512 512 */ 513 - static size_t mei_me_hbuf_max_len(const struct mei_device *dev) 513 + static u32 mei_me_hbuf_depth(const struct mei_device *dev) 514 514 { 515 - return dev->hbuf_depth * sizeof(u32) - sizeof(struct mei_msg_hdr); 516 - } 515 + struct mei_me_hw *hw = to_me_hw(dev); 517 516 517 + return hw->hbuf_depth; 518 + } 518 519 519 520 /** 520 521 * mei_me_hbuf_write - writes a message to host hw buffer. 
521 522 * 522 523 * @dev: the device structure 523 - * @header: mei HECI header of message 524 - * @buf: message payload will be written 524 + * @hdr: header of message 525 + * @hdr_len: header length in bytes: must be multiplication of a slot (4bytes) 526 + * @data: payload 527 + * @data_len: payload length in bytes 525 528 * 526 - * Return: -EIO if write has failed 529 + * Return: 0 if success, < 0 - otherwise. 527 530 */ 528 531 static int mei_me_hbuf_write(struct mei_device *dev, 529 - struct mei_msg_hdr *header, 530 - const unsigned char *buf) 532 + const void *hdr, size_t hdr_len, 533 + const void *data, size_t data_len) 531 534 { 532 535 unsigned long rem; 533 - unsigned long length = header->length; 534 - u32 *reg_buf = (u32 *)buf; 536 + unsigned long i; 537 + const u32 *reg_buf; 535 538 u32 dw_cnt; 536 - int i; 537 539 int empty_slots; 538 540 539 - dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(header)); 541 + if (WARN_ON(!hdr || !data || hdr_len & 0x3)) 542 + return -EINVAL; 543 + 544 + dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM((struct mei_msg_hdr *)hdr)); 540 545 541 546 empty_slots = mei_hbuf_empty_slots(dev); 542 547 dev_dbg(dev->dev, "empty slots = %hu.\n", empty_slots); 543 548 544 - dw_cnt = mei_data2slots(length); 545 - if (empty_slots < 0 || dw_cnt > empty_slots) 549 + if (empty_slots < 0) 550 + return -EOVERFLOW; 551 + 552 + dw_cnt = mei_data2slots(hdr_len + data_len); 553 + if (dw_cnt > (u32)empty_slots) 546 554 return -EMSGSIZE; 547 555 548 - mei_me_hcbww_write(dev, *((u32 *) header)); 549 - 550 - for (i = 0; i < length / 4; i++) 556 + reg_buf = hdr; 557 + for (i = 0; i < hdr_len / MEI_SLOT_SIZE; i++) 551 558 mei_me_hcbww_write(dev, reg_buf[i]); 552 559 553 - rem = length & 0x3; 560 + reg_buf = data; 561 + for (i = 0; i < data_len / MEI_SLOT_SIZE; i++) 562 + mei_me_hcbww_write(dev, reg_buf[i]); 563 + 564 + rem = data_len & 0x3; 554 565 if (rem > 0) { 555 566 u32 reg = 0; 556 567 557 - memcpy(&reg, &buf[length - rem], rem); 568 + memcpy(&reg, 
(const u8 *)data + data_len - rem, rem); 558 569 mei_me_hcbww_write(dev, reg); 559 570 } 560 571 ··· 614 601 * Return: always 0 615 602 */ 616 603 static int mei_me_read_slots(struct mei_device *dev, unsigned char *buffer, 617 - unsigned long buffer_length) 604 + unsigned long buffer_length) 618 605 { 619 606 u32 *reg_buf = (u32 *)buffer; 620 607 621 - for (; buffer_length >= sizeof(u32); buffer_length -= sizeof(u32)) 608 + for (; buffer_length >= MEI_SLOT_SIZE; buffer_length -= MEI_SLOT_SIZE) 622 609 *reg_buf++ = mei_me_mecbrw_read(dev); 623 610 624 611 if (buffer_length > 0) { ··· 1327 1314 1328 1315 .hbuf_free_slots = mei_me_hbuf_empty_slots, 1329 1316 .hbuf_is_ready = mei_me_hbuf_is_empty, 1330 - .hbuf_max_len = mei_me_hbuf_max_len, 1317 + .hbuf_depth = mei_me_hbuf_depth, 1331 1318 1332 1319 .write = mei_me_hbuf_write, 1333 1320 ··· 1390 1377 .fw_status.status[4] = PCI_CFG_HFS_5, \ 1391 1378 .fw_status.status[5] = PCI_CFG_HFS_6 1392 1379 1380 + #define MEI_CFG_DMA_128 \ 1381 + .dma_size[DMA_DSCR_HOST] = SZ_128K, \ 1382 + .dma_size[DMA_DSCR_DEVICE] = SZ_128K, \ 1383 + .dma_size[DMA_DSCR_CTRL] = PAGE_SIZE 1384 + 1393 1385 /* ICH Legacy devices */ 1394 1386 static const struct mei_cfg mei_me_ich_cfg = { 1395 1387 MEI_CFG_ICH_HFS, ··· 1427 1409 MEI_CFG_FW_SPS, 1428 1410 }; 1429 1411 1412 + /* Cannon Lake and newer devices */ 1413 + static const struct mei_cfg mei_me_pch12_cfg = { 1414 + MEI_CFG_PCH8_HFS, 1415 + MEI_CFG_DMA_128, 1416 + }; 1417 + 1430 1418 /* 1431 1419 * mei_cfg_list - A list of platform platform specific configurations. 1432 1420 * Note: has to be synchronized with enum mei_cfg_idx. ··· 1445 1421 [MEI_ME_PCH_CPT_PBG_CFG] = &mei_me_pch_cpt_pbg_cfg, 1446 1422 [MEI_ME_PCH8_CFG] = &mei_me_pch8_cfg, 1447 1423 [MEI_ME_PCH8_SPS_CFG] = &mei_me_pch8_sps_cfg, 1424 + [MEI_ME_PCH12_CFG] = &mei_me_pch12_cfg, 1448 1425 }; 1449 1426 1450 1427 const struct mei_cfg *mei_me_get_cfg(kernel_ulong_t idx)
+6
drivers/misc/mei/hw-me.h
··· 31 31 * 32 32 * @fw_status: FW status 33 33 * @quirk_probe: device exclusion quirk 34 + * @dma_size: device DMA buffers size 34 35 */ 35 36 struct mei_cfg { 36 37 const struct mei_fw_status fw_status; 37 38 bool (*quirk_probe)(struct pci_dev *pdev); 39 + size_t dma_size[DMA_DSCR_NUM]; 38 40 }; 39 41 40 42 ··· 54 52 * @mem_addr: io memory address 55 53 * @pg_state: power gating state 56 54 * @d0i3_supported: di03 support 55 + * @hbuf_depth: depth of hardware host/write buffer in slots 57 56 */ 58 57 struct mei_me_hw { 59 58 const struct mei_cfg *cfg; 60 59 void __iomem *mem_addr; 61 60 enum mei_pg_state pg_state; 62 61 bool d0i3_supported; 62 + u8 hbuf_depth; 63 63 }; 64 64 65 65 #define to_me_hw(dev) (struct mei_me_hw *)((dev)->hw) ··· 82 78 * @MEI_ME_PCH8_SPS_CFG: Platform Controller Hub Gen8 and newer 83 79 * servers platforms with quirk for 84 80 * SPS firmware exclusion. 81 + * @MEI_ME_PCH12_CFG: Platform Controller Hub Gen12 and newer 85 82 * @MEI_ME_NUM_CFG: Upper Sentinel. 86 83 */ 87 84 enum mei_cfg_idx { ··· 93 88 MEI_ME_PCH_CPT_PBG_CFG, 94 89 MEI_ME_PCH8_CFG, 95 90 MEI_ME_PCH8_SPS_CFG, 91 + MEI_ME_PCH12_CFG, 96 92 MEI_ME_NUM_CFG, 97 93 }; 98 94
+32 -34
drivers/misc/mei/hw-txe.c
··· 31 31 32 32 #include "mei-trace.h" 33 33 34 + #define TXE_HBUF_DEPTH (PAYLOAD_SIZE / MEI_SLOT_SIZE) 34 35 35 36 /** 36 37 * mei_txe_reg_read - Reads 32bit data from the txe device ··· 682 681 683 682 struct mei_txe_hw *hw = to_txe_hw(dev); 684 683 685 - /* Doesn't change in runtime */ 686 - dev->hbuf_depth = PAYLOAD_SIZE / 4; 687 - 688 684 hw->aliveness = mei_txe_aliveness_get(dev); 689 685 hw->readiness = mei_txe_readiness_get(dev); 690 686 ··· 689 691 hw->aliveness, hw->readiness); 690 692 } 691 693 692 - 693 694 /** 694 695 * mei_txe_write - writes a message to device. 695 696 * 696 697 * @dev: the device structure 697 - * @header: header of message 698 - * @buf: message buffer will be written 698 + * @hdr: header of message 699 + * @hdr_len: header length in bytes - must multiplication of a slot (4bytes) 700 + * @data: payload 701 + * @data_len: paylead length in bytes 699 702 * 700 - * Return: 0 if success, <0 - otherwise. 703 + * Return: 0 if success, < 0 - otherwise. 701 704 */ 702 - 703 705 static int mei_txe_write(struct mei_device *dev, 704 - struct mei_msg_hdr *header, 705 - const unsigned char *buf) 706 + const void *hdr, size_t hdr_len, 707 + const void *data, size_t data_len) 706 708 { 707 709 struct mei_txe_hw *hw = to_txe_hw(dev); 708 710 unsigned long rem; 709 - unsigned long length; 710 - int slots = dev->hbuf_depth; 711 - u32 *reg_buf = (u32 *)buf; 711 + const u32 *reg_buf; 712 + u32 slots = TXE_HBUF_DEPTH; 712 713 u32 dw_cnt; 713 - int i; 714 + unsigned long i, j; 714 715 715 - if (WARN_ON(!header || !buf)) 716 + if (WARN_ON(!hdr || !data || hdr_len & 0x3)) 716 717 return -EINVAL; 717 718 718 - length = header->length; 719 + dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM((struct mei_msg_hdr *)hdr)); 719 720 720 - dev_dbg(dev->dev, MEI_HDR_FMT, MEI_HDR_PRM(header)); 721 - 722 - dw_cnt = mei_data2slots(length); 721 + dw_cnt = mei_data2slots(hdr_len + data_len); 723 722 if (dw_cnt > slots) 724 723 return -EMSGSIZE; 725 724 ··· 734 739 return 
-EAGAIN; 735 740 } 736 741 737 - mei_txe_input_payload_write(dev, 0, *((u32 *)header)); 742 + reg_buf = hdr; 743 + for (i = 0; i < hdr_len / MEI_SLOT_SIZE; i++) 744 + mei_txe_input_payload_write(dev, i, reg_buf[i]); 738 745 739 - for (i = 0; i < length / 4; i++) 740 - mei_txe_input_payload_write(dev, i + 1, reg_buf[i]); 746 + reg_buf = data; 747 + for (j = 0; j < data_len / MEI_SLOT_SIZE; j++) 748 + mei_txe_input_payload_write(dev, i + j, reg_buf[j]); 741 749 742 - rem = length & 0x3; 750 + rem = data_len & 0x3; 743 751 if (rem > 0) { 744 752 u32 reg = 0; 745 753 746 - memcpy(&reg, &buf[length - rem], rem); 747 - mei_txe_input_payload_write(dev, i + 1, reg); 754 + memcpy(&reg, (const u8 *)data + data_len - rem, rem); 755 + mei_txe_input_payload_write(dev, i + j, reg); 748 756 } 749 757 750 758 /* after each write the whole buffer is consumed */ ··· 760 762 } 761 763 762 764 /** 763 - * mei_txe_hbuf_max_len - mimics the me hbuf circular buffer 765 + * mei_txe_hbuf_depth - mimics the me hbuf circular buffer 764 766 * 765 767 * @dev: the device structure 766 768 * 767 - * Return: the PAYLOAD_SIZE - 4 769 + * Return: the TXE_HBUF_DEPTH 768 770 */ 769 - static size_t mei_txe_hbuf_max_len(const struct mei_device *dev) 771 + static u32 mei_txe_hbuf_depth(const struct mei_device *dev) 770 772 { 771 - return PAYLOAD_SIZE - sizeof(struct mei_msg_hdr); 773 + return TXE_HBUF_DEPTH; 772 774 } 773 775 774 776 /** ··· 776 778 * 777 779 * @dev: the device structure 778 780 * 779 - * Return: always hbuf_depth 781 + * Return: always TXE_HBUF_DEPTH 780 782 */ 781 783 static int mei_txe_hbuf_empty_slots(struct mei_device *dev) 782 784 { ··· 795 797 static int mei_txe_count_full_read_slots(struct mei_device *dev) 796 798 { 797 799 /* read buffers has static size */ 798 - return PAYLOAD_SIZE / 4; 800 + return TXE_HBUF_DEPTH; 799 801 } 800 802 801 803 /** ··· 837 839 dev_dbg(dev->dev, "buffer-length = %lu buf[0]0x%08X\n", 838 840 len, mei_txe_out_data_read(dev, 0)); 839 841 840 - for (i 
= 0; i < len / 4; i++) { 842 + for (i = 0; i < len / MEI_SLOT_SIZE; i++) { 841 843 /* skip header: index starts from 1 */ 842 844 reg = mei_txe_out_data_read(dev, i + 1); 843 845 dev_dbg(dev->dev, "buf[%d] = 0x%08X\n", i, reg); ··· 1138 1140 /* Input Ready: Detection if host can write to SeC */ 1139 1141 if (test_and_clear_bit(TXE_INTR_IN_READY_BIT, &hw->intr_cause)) { 1140 1142 dev->hbuf_is_ready = true; 1141 - hw->slots = dev->hbuf_depth; 1143 + hw->slots = TXE_HBUF_DEPTH; 1142 1144 } 1143 1145 1144 1146 if (hw->aliveness && dev->hbuf_is_ready) { ··· 1184 1186 1185 1187 .hbuf_free_slots = mei_txe_hbuf_empty_slots, 1186 1188 .hbuf_is_ready = mei_txe_is_input_ready, 1187 - .hbuf_max_len = mei_txe_hbuf_max_len, 1189 + .hbuf_depth = mei_txe_hbuf_depth, 1188 1190 1189 1191 .write = mei_txe_write, 1190 1192
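The TXE write path above now copies the header and the payload as two separate streams of 32-bit slots, with a zero-padded memcpy for any trailing bytes. A standalone sketch of that slot-filling logic (all names here are illustrative stand-ins, not the kernel API):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define SLOT_SIZE sizeof(uint32_t)

/*
 * Mimic the hw-txe.c pattern: write the header dwords, then the data
 * dwords, then memcpy any trailing remainder into one last zeroed
 * slot. "regs" stands in for the device payload registers.
 */
static size_t fill_slots(uint32_t *regs, const void *hdr, size_t hdr_len,
			 const void *data, size_t data_len)
{
	const uint32_t *reg_buf;
	size_t i, j, rem;

	reg_buf = hdr;
	for (i = 0; i < hdr_len / SLOT_SIZE; i++)
		regs[i] = reg_buf[i];

	reg_buf = data;
	for (j = 0; j < data_len / SLOT_SIZE; j++)
		regs[i + j] = reg_buf[j];

	rem = data_len & 0x3;
	if (rem > 0) {
		uint32_t reg = 0;

		/* zero-init so the unused tail bytes are deterministic */
		memcpy(&reg, (const uint8_t *)data + data_len - rem, rem);
		regs[i + j] = reg;
		j++;
	}
	return i + j;	/* slots consumed */
}
```

The zeroed `reg` before the final memcpy is what keeps stale stack bytes from leaking into the last slot.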
+67 -6
drivers/misc/mei/hw.h
··· 28 28 #define MEI_CL_CONNECT_TIMEOUT 15 /* HPS: Client Connect Timeout */ 29 29 #define MEI_CLIENTS_INIT_TIMEOUT 15 /* HPS: Clients Enumeration Timeout */ 30 30 31 - #define MEI_IAMTHIF_STALL_TIMER 12 /* HPS */ 32 - 33 31 #define MEI_PGI_TIMEOUT 1 /* PG Isolation time response 1 sec */ 34 32 #define MEI_D0I3_TIMEOUT 5 /* D0i3 set/unset max response time */ 35 33 #define MEI_HBM_TIMEOUT 1 /* 1 second */ ··· 80 82 #define HBM_MINOR_VERSION_OS 0 81 83 #define HBM_MAJOR_VERSION_OS 2 82 84 85 + /* 86 + * MEI version with dma ring support 87 + */ 88 + #define HBM_MINOR_VERSION_DR 1 89 + #define HBM_MAJOR_VERSION_DR 2 90 + 83 91 /* Host bus message command opcode */ 84 92 #define MEI_HBM_CMD_OP_MSK 0x7f 85 93 /* Host bus message command RESPONSE */ ··· 127 123 #define MEI_HBM_NOTIFY_REQ_CMD 0x10 128 124 #define MEI_HBM_NOTIFY_RES_CMD 0x90 129 125 #define MEI_HBM_NOTIFICATION_CMD 0x11 126 + 127 + #define MEI_HBM_DMA_SETUP_REQ_CMD 0x12 128 + #define MEI_HBM_DMA_SETUP_RES_CMD 0x92 130 129 131 130 /* 132 131 * MEI Stop Reason ··· 196 189 MEI_CL_DISCONN_SUCCESS = MEI_HBMS_SUCCESS 197 190 }; 198 191 199 - /* 200 - * MEI BUS Interface Section 192 + /** 193 + * struct mei_msg_hdr - MEI BUS Interface Section 194 + * 195 + * @me_addr: device address 196 + * @host_addr: host address 197 + * @length: message length 198 + * @reserved: reserved 199 + * @dma_ring: message is on dma ring 200 + * @internal: message is internal 201 + * @msg_complete: last packet of the message 201 202 */ 202 203 struct mei_msg_hdr { 203 204 u32 me_addr:8; 204 205 u32 host_addr:8; 205 206 u32 length:9; 206 - u32 reserved:5; 207 + u32 reserved:4; 208 + u32 dma_ring:1; 207 209 u32 internal:1; 208 210 u32 msg_complete:1; 209 211 } __packed; 210 - 211 212 212 213 struct mei_bus_message { 213 214 u8 hbm_cmd; ··· 464 449 u8 me_addr; 465 450 u8 host_addr; 466 451 u8 reserved[1]; 452 + } __packed; 453 + 454 + /** 455 + * struct hbm_dma_mem_dscr - dma ring 456 + * 457 + * @addr_hi: the high 32bits of 64 bit 
address 458 + * @addr_lo: the low 32bits of 64 bit address 459 + * @size : size in bytes (must be power of 2) 460 + */ 461 + struct hbm_dma_mem_dscr { 462 + u32 addr_hi; 463 + u32 addr_lo; 464 + u32 size; 465 + } __packed; 466 + 467 + enum { 468 + DMA_DSCR_HOST = 0, 469 + DMA_DSCR_DEVICE = 1, 470 + DMA_DSCR_CTRL = 2, 471 + DMA_DSCR_NUM, 472 + }; 473 + 474 + /** 475 + * struct hbm_dma_setup_request - dma setup request 476 + * 477 + * @hbm_cmd: bus message command header 478 + * @reserved: reserved for alignment 479 + * @dma_dscr: dma descriptor for HOST, DEVICE, and CTRL 480 + */ 481 + struct hbm_dma_setup_request { 482 + u8 hbm_cmd; 483 + u8 reserved[3]; 484 + struct hbm_dma_mem_dscr dma_dscr[DMA_DSCR_NUM]; 485 + } __packed; 486 + 487 + /** 488 + * struct hbm_dma_setup_response - dma setup response 489 + * 490 + * @hbm_cmd: bus message command header 491 + * @status: 0 on success; otherwise DMA setup failed. 492 + * @reserved: reserved for alignment 493 + */ 494 + struct hbm_dma_setup_response { 495 + u8 hbm_cmd; 496 + u8 status; 497 + u8 reserved[2]; 467 498 } __packed; 468 499 469 500 #endif
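The new `dma_ring` flag is carved out of the `reserved` field, so `struct mei_msg_hdr` still packs into a single 32-bit slot. A quick userspace check of that invariant (field names mirror the patch; exact bit placement is compiler-defined, so only the total size is asserted):

```c
#include <stdint.h>

/* Mirror of the patched mei_msg_hdr layout (illustrative copy, not
 * the kernel header): one bit moves from reserved to dma_ring. */
struct mei_msg_hdr {
	uint32_t me_addr:8;
	uint32_t host_addr:8;
	uint32_t length:9;
	uint32_t reserved:4;	/* was 5 before the dma_ring bit */
	uint32_t dma_ring:1;
	uint32_t internal:1;
	uint32_t msg_complete:1;
} __attribute__((packed));

/* 8 + 8 + 9 + 4 + 1 + 1 + 1 = 32 bits: still exactly one slot */
_Static_assert(sizeof(struct mei_msg_hdr) == sizeof(uint32_t),
	       "header must stay one 32-bit slot");
```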
+12 -5
drivers/misc/mei/interrupt.c
··· 173 173 int slots; 174 174 int ret; 175 175 176 + msg_slots = mei_hbm2slots(sizeof(struct hbm_client_connect_response)); 176 177 slots = mei_hbuf_empty_slots(dev); 177 - msg_slots = mei_data2slots(sizeof(struct hbm_client_connect_response)); 178 + if (slots < 0) 179 + return -EOVERFLOW; 178 180 179 - if (slots < msg_slots) 181 + if ((u32)slots < msg_slots) 180 182 return -EMSGSIZE; 181 183 182 184 ret = mei_hbm_cl_disconnect_rsp(dev, cl); ··· 208 206 if (!list_empty(&cl->rd_pending)) 209 207 return 0; 210 208 211 - msg_slots = mei_data2slots(sizeof(struct hbm_flow_control)); 209 + msg_slots = mei_hbm2slots(sizeof(struct hbm_flow_control)); 212 210 slots = mei_hbuf_empty_slots(dev); 211 + if (slots < 0) 212 + return -EOVERFLOW; 213 213 214 - if (slots < msg_slots) 214 + if ((u32)slots < msg_slots) 215 215 return -EMSGSIZE; 216 216 217 217 ret = mei_hbm_cl_flow_control_req(dev, cl); ··· 372 368 return 0; 373 369 374 370 slots = mei_hbuf_empty_slots(dev); 375 - if (slots <= 0) 371 + if (slots < 0) 372 + return -EOVERFLOW; 373 + 374 + if (slots == 0) 376 375 return -EMSGSIZE; 377 376 378 377 /* complete all waiting for write CB */
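The added `slots < 0` checks matter because of the `(u32)` cast that follows: comparing a signed slot count directly against an unsigned message size would promote a negative value (the overflow indicator) to a huge unsigned one. A sketch with hypothetical helpers, not the driver functions:

```c
#include <stdbool.h>
#include <stdint.h>

/* What the old "slots < msg_slots" comparison effectively did; the
 * cast below is the implicit usual-arithmetic conversion written out:
 * slots == -1 becomes UINT32_MAX, so "room" is wrongly reported. */
static bool has_room_naive(int slots, uint32_t msg_slots)
{
	return !((uint32_t)slots < msg_slots);
}

/* The patched shape: reject negative counts first, then compare. */
static int check_room_fixed(int slots, uint32_t msg_slots)
{
	if (slots < 0)
		return -1;	/* -EOVERFLOW in the driver */
	if ((uint32_t)slots < msg_slots)
		return -2;	/* -EMSGSIZE in the driver */
	return 0;
}
```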
+34 -7
drivers/misc/mei/main.c
··· 1 1 /* 2 2 * 3 3 * Intel Management Engine Interface (Intel MEI) Linux driver 4 - * Copyright (c) 2003-2012, Intel Corporation. 4 + * Copyright (c) 2003-2018, Intel Corporation. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify it 7 7 * under the terms and conditions of the GNU General Public License, ··· 137 137 struct mei_device *dev; 138 138 struct mei_cl_cb *cb = NULL; 139 139 bool nonblock = !!(file->f_flags & O_NONBLOCK); 140 - int rets; 140 + ssize_t rets; 141 141 142 142 if (WARN_ON(!cl || !cl->dev)) 143 143 return -ENODEV; ··· 170 170 171 171 rets = mei_cl_read_start(cl, length, file); 172 172 if (rets && rets != -EBUSY) { 173 - cl_dbg(dev, cl, "mei start read failure status = %d\n", rets); 173 + cl_dbg(dev, cl, "mei start read failure status = %zd\n", rets); 174 174 goto out; 175 175 } 176 176 ··· 204 204 /* now copy the data to user space */ 205 205 if (cb->status) { 206 206 rets = cb->status; 207 - cl_dbg(dev, cl, "read operation failed %d\n", rets); 207 + cl_dbg(dev, cl, "read operation failed %zd\n", rets); 208 208 goto free; 209 209 } 210 210 ··· 236 236 *offset = 0; 237 237 238 238 out: 239 - cl_dbg(dev, cl, "end mei read rets = %d\n", rets); 239 + cl_dbg(dev, cl, "end mei read rets = %zd\n", rets); 240 240 mutex_unlock(&dev->device_lock); 241 241 return rets; 242 242 } ··· 256 256 struct mei_cl *cl = file->private_data; 257 257 struct mei_cl_cb *cb; 258 258 struct mei_device *dev; 259 - int rets; 259 + ssize_t rets; 260 260 261 261 if (WARN_ON(!cl || !cl->dev)) 262 262 return -ENODEV; ··· 312 312 } 313 313 } 314 314 315 - *offset = 0; 316 315 cb = mei_cl_alloc_cb(cl, length, MEI_FOP_WRITE, file); 317 316 if (!cb) { 318 317 rets = -ENOMEM; ··· 811 812 } 812 813 static DEVICE_ATTR_RW(tx_queue_limit); 813 814 815 + /** 816 + * fw_ver_show - display ME FW version 817 + * 818 + * @device: device pointer 819 + * @attr: attribute pointer 820 + * @buf: char out buffer 821 + * 822 + * Return: number of the bytes printed 
into buf or error 823 + */ 824 + static ssize_t fw_ver_show(struct device *device, 825 + struct device_attribute *attr, char *buf) 826 + { 827 + struct mei_device *dev = dev_get_drvdata(device); 828 + struct mei_fw_version *ver; 829 + ssize_t cnt = 0; 830 + int i; 831 + 832 + ver = dev->fw_ver; 833 + 834 + for (i = 0; i < MEI_MAX_FW_VER_BLOCKS; i++) 835 + cnt += scnprintf(buf + cnt, PAGE_SIZE - cnt, "%u:%u.%u.%u.%u\n", 836 + ver[i].platform, ver[i].major, ver[i].minor, 837 + ver[i].hotfix, ver[i].buildno); 838 + return cnt; 839 + } 840 + static DEVICE_ATTR_RO(fw_ver); 841 + 814 842 static struct attribute *mei_attrs[] = { 815 843 &dev_attr_fw_status.attr, 816 844 &dev_attr_hbm_ver.attr, 817 845 &dev_attr_hbm_ver_drv.attr, 818 846 &dev_attr_tx_queue_limit.attr, 847 + &dev_attr_fw_ver.attr, 819 848 NULL 820 849 }; 821 850 ATTRIBUTE_GROUPS(mei);
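The new `fw_ver_show()` leans on `scnprintf()` returning the number of bytes actually written, which makes the `buf + cnt` accumulation safe against the page-sized buffer. A userspace approximation (`snprintf()` returns what *would* have fit, so the clamp below emulates the kernel helper; the struct is a stand-in):

```c
#include <stdio.h>
#include <string.h>

struct fw_version {
	unsigned int platform, major, minor, hotfix, buildno;
};

/* Append one "platform:major.minor.hotfix.buildno" line per block,
 * never writing past size bytes, returning the total appended. */
static size_t format_fw_ver(char *buf, size_t size,
			    const struct fw_version *ver, int blocks)
{
	size_t cnt = 0;
	int i, w;

	for (i = 0; i < blocks && cnt + 1 < size; i++) {
		w = snprintf(buf + cnt, size - cnt, "%u:%u.%u.%u.%u\n",
			     ver[i].platform, ver[i].major, ver[i].minor,
			     ver[i].hotfix, ver[i].buildno);
		if (w < 0)
			break;
		/* clamp to what was actually stored, like scnprintf() */
		cnt += ((size_t)w < size - cnt) ? (size_t)w : size - cnt - 1;
	}
	return cnt;
}
```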
+56 -21
drivers/misc/mei/mei_dev.h
··· 1 1 /* 2 2 * 3 3 * Intel Management Engine Interface (Intel MEI) Linux driver 4 - * Copyright (c) 2003-2012, Intel Corporation. 4 + * Copyright (c) 2003-2018, Intel Corporation. 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify it 7 7 * under the terms and conditions of the GNU General Public License, ··· 26 26 #include "hw.h" 27 27 #include "hbm.h" 28 28 29 - #define MEI_RD_MSG_BUF_SIZE (128 * sizeof(u32)) 29 + #define MEI_SLOT_SIZE sizeof(u32) 30 + #define MEI_RD_MSG_BUF_SIZE (128 * MEI_SLOT_SIZE) 30 31 31 32 /* 32 33 * Number of Maximum MEI Clients ··· 175 174 * @status: io status of the cb 176 175 * @internal: communication between driver and FW flag 177 176 * @blocking: transmission blocking mode 178 - * @completed: the transfer or reception has completed 179 177 */ 180 178 struct mei_cl_cb { 181 179 struct list_head list; ··· 186 186 int status; 187 187 u32 internal:1; 188 188 u32 blocking:1; 189 - u32 completed:1; 190 189 }; 191 190 192 191 /** ··· 268 269 * 269 270 * @hbuf_free_slots : query for write buffer empty slots 270 271 * @hbuf_is_ready : query if write buffer is empty 271 - * @hbuf_max_len : query for write buffer max len 272 + * @hbuf_depth : query for write buffer depth 272 273 * 273 274 * @write : write a message to FW 274 275 * ··· 298 299 299 300 int (*hbuf_free_slots)(struct mei_device *dev); 300 301 bool (*hbuf_is_ready)(struct mei_device *dev); 301 - size_t (*hbuf_max_len)(const struct mei_device *dev); 302 + u32 (*hbuf_depth)(const struct mei_device *dev); 302 303 int (*write)(struct mei_device *dev, 303 - struct mei_msg_hdr *hdr, 304 - const unsigned char *buf); 304 + const void *hdr, size_t hdr_len, 305 + const void *data, size_t data_len); 305 306 306 307 int (*rdbuf_full_slots)(struct mei_device *dev); 307 308 ··· 316 317 ssize_t __mei_cl_send(struct mei_cl *cl, u8 *buf, size_t length, 317 318 unsigned int mode); 318 319 ssize_t __mei_cl_recv(struct mei_cl *cl, u8 *buf, size_t length, 319 - unsigned 
int mode); 320 + unsigned int mode, unsigned long timeout); 320 321 bool mei_cl_bus_rx_event(struct mei_cl *cl); 321 322 bool mei_cl_bus_notify_event(struct mei_cl *cl); 322 323 void mei_cl_bus_remove_devices(struct mei_device *bus); ··· 354 355 const char *mei_pg_state_str(enum mei_pg_state state); 355 356 356 357 /** 358 + * struct mei_fw_version - MEI FW version struct 359 + * 360 + * @platform: platform identifier 361 + * @major: major version field 362 + * @minor: minor version field 363 + * @buildno: build number version field 364 + * @hotfix: hotfix number version field 365 + */ 366 + struct mei_fw_version { 367 + u8 platform; 368 + u8 major; 369 + u16 minor; 370 + u16 buildno; 371 + u16 hotfix; 372 + }; 373 + 374 + #define MEI_MAX_FW_VER_BLOCKS 3 375 + 376 + /** 357 377 * struct mei_device - MEI private device struct 358 378 * 359 379 * @dev : device on a bus ··· 408 390 * @rd_msg_buf : control messages buffer 409 391 * @rd_msg_hdr : read message header storage 410 392 * 411 - * @hbuf_depth : depth of hardware host/write buffer is slots 412 393 * @hbuf_is_ready : query if the host host/write buffer is ready 413 394 * 414 395 * @version : HBM protocol version in use ··· 418 401 * @hbm_f_fa_supported : hbm feature fixed address client 419 402 * @hbm_f_ie_supported : hbm feature immediate reply to enum request 420 403 * @hbm_f_os_supported : hbm feature support OS ver message 404 + * @hbm_f_dr_supported : hbm feature dma ring supported 405 + * 406 + * @fw_ver : FW versions 421 407 * 422 408 * @me_clients_rwsem: rw lock over me_clients list 423 409 * @me_clients : list of FW clients ··· 486 466 u32 rd_msg_hdr; 487 467 488 468 /* write buffer */ 489 - u8 hbuf_depth; 490 469 bool hbuf_is_ready; 491 470 492 471 struct hbm_version version; ··· 496 477 unsigned int hbm_f_fa_supported:1; 497 478 unsigned int hbm_f_ie_supported:1; 498 479 unsigned int hbm_f_os_supported:1; 480 + unsigned int hbm_f_dr_supported:1; 481 + 482 + struct mei_fw_version 
fw_ver[MEI_MAX_FW_VER_BLOCKS]; 499 483 500 484 struct rw_semaphore me_clients_rwsem; 501 485 struct list_head me_clients; ··· 530 508 } 531 509 532 510 /** 533 - * mei_data2slots - get slots - number of (dwords) from a message length 534 - * + size of the mei header 511 + * mei_data2slots - get slots number from a message length 535 512 * 536 513 * @length: size of the messages in bytes 537 514 * ··· 538 517 */ 539 518 static inline u32 mei_data2slots(size_t length) 540 519 { 541 - return DIV_ROUND_UP(sizeof(struct mei_msg_hdr) + length, 4); 520 + return DIV_ROUND_UP(length, MEI_SLOT_SIZE); 521 + } 522 + 523 + /** 524 + * mei_hbm2slots - get slots number from a hbm message length 525 + * length + size of the mei message header 526 + * 527 + * @length: size of the messages in bytes 528 + * 529 + * Return: number of slots 530 + */ 531 + static inline u32 mei_hbm2slots(size_t length) 532 + { 533 + return DIV_ROUND_UP(sizeof(struct mei_msg_hdr) + length, MEI_SLOT_SIZE); 542 534 } 543 535 544 536 /** ··· 563 529 */ 564 530 static inline u32 mei_slots2data(int slots) 565 531 { 566 - return slots * 4; 532 + return slots * MEI_SLOT_SIZE; 567 533 } 568 534 569 535 /* ··· 664 630 return dev->ops->hbuf_free_slots(dev); 665 631 } 666 632 667 - static inline size_t mei_hbuf_max_len(const struct mei_device *dev) 633 + static inline u32 mei_hbuf_depth(const struct mei_device *dev) 668 634 { 669 - return dev->ops->hbuf_max_len(dev); 635 + return dev->ops->hbuf_depth(dev); 670 636 } 671 637 672 638 static inline int mei_write_message(struct mei_device *dev, 673 - struct mei_msg_hdr *hdr, const void *buf) 639 + const void *hdr, size_t hdr_len, 640 + const void *data, size_t data_len) 674 641 { 675 - return dev->ops->write(dev, hdr, buf); 642 + return dev->ops->write(dev, hdr, hdr_len, data, data_len); 676 643 } 677 644 678 645 static inline u32 mei_read_hdr(const struct mei_device *dev) ··· 716 681 int mei_register(struct mei_device *dev, struct device *parent); 717 682 void 
mei_deregister(struct mei_device *dev); 718 683 719 - #define MEI_HDR_FMT "hdr:host=%02d me=%02d len=%d internal=%1d comp=%1d" 684 + #define MEI_HDR_FMT "hdr:host=%02d me=%02d len=%d dma=%1d internal=%1d comp=%1d" 720 685 #define MEI_HDR_PRM(hdr) \ 721 686 (hdr)->host_addr, (hdr)->me_addr, \ 722 - (hdr)->length, (hdr)->internal, (hdr)->msg_complete 687 + (hdr)->length, (hdr)->dma_ring, (hdr)->internal, (hdr)->msg_complete 723 688 724 689 ssize_t mei_fw_status2str(struct mei_fw_status *fw_sts, char *buf, size_t len); 725 690 /**
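Splitting `mei_data2slots()` from the new `mei_hbm2slots()` makes the header accounting explicit: plain data is rounded up to 4-byte slots, while HBM messages add the header size before rounding. A sketch with the constants mirrored from the patch:

```c
#include <stddef.h>
#include <stdint.h>

#define MEI_SLOT_SIZE     sizeof(uint32_t)
#define MEI_MSG_HDR_SIZE  4	/* sizeof(struct mei_msg_hdr): one slot */
#define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

/* slots for a bare payload */
static uint32_t mei_data2slots(size_t length)
{
	return DIV_ROUND_UP(length, MEI_SLOT_SIZE);
}

/* slots for an HBM message: header travels in-band, so add it first */
static uint32_t mei_hbm2slots(size_t length)
{
	return DIV_ROUND_UP(MEI_MSG_HDR_SIZE + length, MEI_SLOT_SIZE);
}
```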
+4 -1
drivers/misc/mic/cosm/cosm_main.h
···
 45  45  	u64 id;
 46  46  	union {
 47  47  		u64 shutdown_status;
 48      -		struct timespec64 timespec;
     48  +		struct {
     49  +			u64 tv_sec;
     50  +			u64 tv_nsec;
     51  +		} timespec;
 49  52  	};
 50  53  };
 51  54 
+5 -1
drivers/misc/mic/cosm/cosm_scif_server.c
···
 179 179  static void cosm_send_time(struct cosm_device *cdev)
 180 180  {
 181 181  	struct cosm_msg msg = { .id = COSM_MSG_SYNC_TIME };
     182  +	struct timespec64 ts;
 182 183  	int rc;
 183 184 
 184      -	getnstimeofday64(&msg.timespec);
     185  +	ktime_get_real_ts64(&ts);
     186  +	msg.timespec.tv_sec = ts.tv_sec;
     187  +	msg.timespec.tv_nsec = ts.tv_nsec;
     188  +
 185 189  	rc = scif_send(cdev->epd, &msg, sizeof(msg), SCIF_SEND_BLOCK);
 186 190  	if (rc < 0)
 187 191  		dev_err(&cdev->dev, "%s %d scif_send failed rc %d\n",
+5 -1
drivers/misc/mic/cosm_client/cosm_scif_client.c
···
 63  63  /* Set system time from timespec value received from the host */
 64  64  static void cosm_set_time(struct cosm_msg *msg)
 65  65  {
 66      -	int rc = do_settimeofday64(&msg->timespec);
     66  +	struct timespec64 ts = {
     67  +		.tv_sec = msg->timespec.tv_sec,
     68  +		.tv_nsec = msg->timespec.tv_nsec,
     69  +	};
     70  +	int rc = do_settimeofday64(&ts);
 67  71 
 68  72  	if (rc)
 69  73  		dev_err(&client_spdev->dev, "%s: %d settimeofday rc %d\n",
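Both cosm changes stop shipping a raw `struct timespec64` across SCIF; two explicit u64 fields pin the wire layout regardless of kernel-internal type changes (and stay y2038-safe). A minimal sketch of that pack/unpack, with hypothetical helper names:

```c
#include <stdint.h>

/* Fixed wire layout: two u64s, 16 bytes on every host ABI. */
struct wire_timespec {
	uint64_t tv_sec;
	uint64_t tv_nsec;
};

/* Pack a host timespec-like pair into the wire form. */
static struct wire_timespec ts_pack(int64_t sec, long nsec)
{
	struct wire_timespec w = {
		.tv_sec = (uint64_t)sec,
		.tv_nsec = (uint64_t)nsec,
	};
	return w;
}

/* Unpack on the receiving side before calling the settime API. */
static void ts_unpack(const struct wire_timespec *w,
		      int64_t *sec, long *nsec)
{
	*sec = (int64_t)w->tv_sec;
	*nsec = (long)w->tv_nsec;
}
```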
+10 -11
drivers/misc/mic/scif/scif_api.c
··· 187 187 case SCIFEP_ZOMBIE: 188 188 dev_err(scif_info.mdev.this_device, 189 189 "SCIFAPI close: zombie state unexpected\n"); 190 + /* fall through */ 190 191 case SCIFEP_DISCONNECTED: 191 192 spin_unlock(&ep->lock); 192 193 scif_unregister_all_windows(epd); ··· 371 370 goto scif_bind_exit; 372 371 } 373 372 } else { 374 - pn = scif_get_new_port(); 375 - if (!pn) { 376 - ret = -ENOSPC; 373 + ret = scif_get_new_port(); 374 + if (ret < 0) 377 375 goto scif_bind_exit; 378 - } 376 + pn = ret; 379 377 } 380 378 381 379 ep->state = SCIFEP_BOUND; ··· 648 648 err = -EISCONN; 649 649 break; 650 650 case SCIFEP_UNBOUND: 651 - ep->port.port = scif_get_new_port(); 652 - if (!ep->port.port) { 653 - err = -ENOSPC; 654 - } else { 655 - ep->port.node = scif_info.nodeid; 656 - ep->conn_async_state = ASYNC_CONN_IDLE; 657 - } 651 + err = scif_get_new_port(); 652 + if (err < 0) 653 + break; 654 + ep->port.port = err; 655 + ep->port.node = scif_info.nodeid; 656 + ep->conn_async_state = ASYNC_CONN_IDLE; 658 657 /* Fall through */ 659 658 case SCIFEP_BOUND: 660 659 /*
-3
drivers/misc/sgi-xp/xpc_channel.c
··· 656 656 { 657 657 short partid; 658 658 struct xpc_partition *part; 659 - struct xpc_channel *ch; 660 659 661 660 DBUG_ON(ch_number < 0 || ch_number >= XPC_MAX_NCHANNELS); 662 661 ··· 663 664 part = &xpc_partitions[partid]; 664 665 665 666 if (xpc_part_ref(part)) { 666 - ch = &part->channels[ch_number]; 667 - 668 667 /* 669 668 * Initiate the establishment of a connection on the 670 669 * newly registered channel to the remote partition.
+2 -3
drivers/misc/sgi-xp/xpc_partition.c
··· 415 415 int region_size; 416 416 int max_regions; 417 417 int nasid; 418 - struct xpc_rsvd_page *rp; 419 418 unsigned long *discovered_nasids; 420 419 enum xp_retval ret; 421 420 ··· 431 432 return; 432 433 } 433 434 434 - rp = (struct xpc_rsvd_page *)xpc_rsvd_page; 435 - 436 435 /* 437 436 * The term 'region' in this context refers to the minimum number of 438 437 * nodes that can comprise an access protection grouping. The access ··· 446 449 switch (region_size) { 447 450 case 128: 448 451 max_regions *= 2; 452 + /* fall through */ 449 453 case 64: 450 454 max_regions *= 2; 455 + /* fall through */ 451 456 case 32: 452 457 max_regions *= 2; 453 458 region_size = 16;
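The switch in xpc_partition.c doubles `max_regions` cumulatively through deliberate fall-through, which the patch now annotates for the compiler. A testable sketch of the cascade (the base value below is hypothetical, purely to make the doubling visible):

```c
/* Each larger region_size falls through every smaller case, so 128
 * doubles three times, 64 twice, 32 once. */
static int max_regions_for(int region_size)
{
	int max_regions = 64;	/* illustrative starting value */

	switch (region_size) {
	case 128:
		max_regions *= 2;
		/* fall through */
	case 64:
		max_regions *= 2;
		/* fall through */
	case 32:
		max_regions *= 2;
		break;
	}
	return max_regions;
}
```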
+13 -5
drivers/misc/sram.c
··· 391 391 if (IS_ERR(sram->pool)) 392 392 return PTR_ERR(sram->pool); 393 393 394 - ret = sram_reserve_regions(sram, res); 395 - if (ret) 396 - return ret; 397 - 398 394 sram->clk = devm_clk_get(sram->dev, NULL); 399 395 if (IS_ERR(sram->clk)) 400 396 sram->clk = NULL; 401 397 else 402 398 clk_prepare_enable(sram->clk); 399 + 400 + ret = sram_reserve_regions(sram, res); 401 + if (ret) 402 + goto err_disable_clk; 403 403 404 404 platform_set_drvdata(pdev, sram); 405 405 ··· 407 407 if (init_func) { 408 408 ret = init_func(); 409 409 if (ret) 410 - return ret; 410 + goto err_free_partitions; 411 411 } 412 412 413 413 dev_dbg(sram->dev, "SRAM pool: %zu KiB @ 0x%p\n", 414 414 gen_pool_size(sram->pool) / 1024, sram->virt_base); 415 415 416 416 return 0; 417 + 418 + err_free_partitions: 419 + sram_free_partitions(sram); 420 + err_disable_clk: 421 + if (sram->clk) 422 + clk_disable_unprepare(sram->clk); 423 + 424 + return ret; 417 425 } 418 426 419 427 static int sram_remove(struct platform_device *pdev)
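The sram_probe() reordering acquires the clock before reserving regions, so the error path must now unwind in reverse through goto labels instead of returning early with resources held. A toy model of that pattern (all names are stand-ins for the driver calls):

```c
#include <stdbool.h>

static bool clk_enabled, regions_reserved;

/* Acquire in order, release in reverse order on failure. */
static int fake_probe(bool init_func_fails)
{
	clk_enabled = true;		/* clk_prepare_enable() */
	regions_reserved = true;	/* sram_reserve_regions() */

	if (init_func_fails)
		goto err_free_partitions;
	return 0;

err_free_partitions:
	regions_reserved = false;	/* sram_free_partitions() */
	clk_enabled = false;		/* clk_disable_unprepare() */
	return -1;
}
```

The point of the patch is exactly this shape: before it, a failing `init_func()` returned with the clock still enabled and the partitions still reserved.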
+2 -1
drivers/misc/ti-st/Kconfig
···
  5   5  menu "Texas Instruments shared transport line discipline"
  6   6  config TI_ST
  7   7  	tristate "Shared transport core driver"
  8      -	depends on NET && GPIOLIB && TTY
      8  +	depends on NET && TTY
      9  +	depends on GPIOLIB || COMPILE_TEST
  9  10  	select FW_LOADER
 10  11  	help
 11  12  	  This enables the shared transport core driver for TI
+3 -4
drivers/misc/ti-st/st_kim.c
··· 138 138 const unsigned char *data, long count) 139 139 { 140 140 const unsigned char *ptr; 141 - int len = 0, type = 0; 141 + int len = 0; 142 142 unsigned char *plen; 143 143 144 144 pr_debug("%s", __func__); ··· 183 183 case 0x04: 184 184 kim_gdata->rx_state = ST_W4_HEADER; 185 185 kim_gdata->rx_count = 2; 186 - type = *ptr; 187 186 break; 188 187 default: 189 188 pr_info("unknown packet"); ··· 755 756 err = gpio_request(kim_gdata->nshutdown, "kim"); 756 757 if (unlikely(err)) { 757 758 pr_err(" gpio %d request failed ", kim_gdata->nshutdown); 758 - return err; 759 + goto err_sysfs_group; 759 760 } 760 761 761 762 /* Configure nShutdown GPIO as output=0 */ 762 763 err = gpio_direction_output(kim_gdata->nshutdown, 0); 763 764 if (unlikely(err)) { 764 765 pr_err(" unable to configure gpio %d", kim_gdata->nshutdown); 765 - return err; 766 + goto err_sysfs_group; 766 767 } 767 768 /* get reference of pdev for request_firmware 768 769 */
+1 -1
drivers/misc/tsl2550.c
···
 177 177  		} else
 178 178  			lux = 0;
 179 179  	else
 180      -		return -EAGAIN;
     180  +		return 0;
 181 181 
 182 182  	/* LUX range check */
 183 183  	return lux > TSL2550_MAX_LUX ? TSL2550_MAX_LUX : lux;
+3 -7
drivers/misc/vexpress-syscfg.c
···
 258 258  	INIT_LIST_HEAD(&syscfg->funcs);
 259 259 
 260 260  	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 261      -	if (!devm_request_mem_region(&pdev->dev, res->start,
 262      -				     resource_size(res), pdev->name))
 263      -		return -EBUSY;
 264      -
 265      -	syscfg->base = devm_ioremap(&pdev->dev, res->start, resource_size(res));
 266      -	if (!syscfg->base)
 267      -		return -EFAULT;
     261  +	syscfg->base = devm_ioremap_resource(&pdev->dev, res);
     262  +	if (IS_ERR(syscfg->base))
     263  +		return PTR_ERR(syscfg->base);
 268 264 
 269 265  	/* Must use dev.parent (MFD), as that's where DT phandle points at... */
 270 266  	bridge = vexpress_config_bridge_register(pdev->dev.parent,
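`devm_ioremap_resource()` folds the request-region, ioremap, and error-reporting steps into one call that returns an error pointer rather than NULL. A minimal re-creation of the error-pointer encoding it relies on (simplified from the kernel's err.h; not the real header):

```c
#include <stdbool.h>

#define MAX_ERRNO 4095

/* Errno values occupy the top 4095 addresses, so one pointer can
 * carry either a valid mapping or an error code. */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline bool IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

In the patched probe this shows up as `if (IS_ERR(syscfg->base)) return PTR_ERR(syscfg->base);`, which also propagates the precise errno instead of the old hard-coded -EBUSY/-EFAULT.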
+53 -126
drivers/misc/vmw_balloon.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * VMware Balloon driver. 3 4 * 4 - * Copyright (C) 2000-2014, VMware, Inc. All Rights Reserved. 5 + * Copyright (C) 2000-2018, VMware, Inc. All Rights Reserved. 5 6 * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms of the GNU General Public License as published by the 8 - * Free Software Foundation; version 2 of the License and no later version. 9 - * 10 - * This program is distributed in the hope that it will be useful, but 11 - * WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE, GOOD TITLE or 13 - * NON INFRINGEMENT. See the GNU General Public License for more 14 - * details. 15 - * 16 - * You should have received a copy of the GNU General Public License 17 - * along with this program; if not, write to the Free Software 18 - * Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA. 19 - * 20 - * Maintained by: Xavier Deguillard <xdeguillard@vmware.com> 21 - * Philip Moltmann <moltmann@vmware.com> 22 - */ 23 - 24 - /* 25 7 * This is VMware physical memory management driver for Linux. The driver 26 8 * acts like a "balloon" that can be inflated to reclaim physical pages by 27 9 * reserving them in the guest and invalidating them in the monitor, ··· 35 53 MODULE_ALIAS("dmi:*:svnVMware*:*"); 36 54 MODULE_ALIAS("vmware_vmmemctl"); 37 55 MODULE_LICENSE("GPL"); 38 - 39 - /* 40 - * Various constants controlling rate of inflaint/deflating balloon, 41 - * measured in pages. 42 - */ 43 - 44 - /* 45 - * Rates of memory allocaton when guest experiences memory pressure 46 - * (driver performs sleeping allocations). 47 - */ 48 - #define VMW_BALLOON_RATE_ALLOC_MIN 512U 49 - #define VMW_BALLOON_RATE_ALLOC_MAX 2048U 50 - #define VMW_BALLOON_RATE_ALLOC_INC 16U 51 - 52 - /* 53 - * When guest is under memory pressure, use a reduced page allocation 54 - * rate for next several cycles. 
55 - */ 56 - #define VMW_BALLOON_SLOW_CYCLES 4 57 56 58 57 /* 59 58 * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We don't ··· 247 284 /* reset flag */ 248 285 bool reset_required; 249 286 250 - /* adjustment rates (pages per second) */ 251 - unsigned int rate_alloc; 252 - 253 - /* slowdown page allocations for next few cycles */ 254 - unsigned int slow_allocation_cycles; 255 - 256 287 unsigned long capabilities; 257 288 258 289 struct vmballoon_batch_page *batch_page; ··· 298 341 success = false; 299 342 } 300 343 301 - if (b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS) 344 + /* 345 + * 2MB pages are only supported with batching. If batching is for some 346 + * reason disabled, do not use 2MB pages, since otherwise the legacy 347 + * mechanism is used with 2MB pages, causing a failure. 348 + */ 349 + if ((b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS) && 350 + (b->capabilities & VMW_BALLOON_BATCHED_CMDS)) 302 351 b->supported_page_sizes = 2; 303 352 else 304 353 b->supported_page_sizes = 1; ··· 413 450 414 451 pfn32 = (u32)pfn; 415 452 if (pfn32 != pfn) 416 - return -1; 453 + return -EINVAL; 417 454 418 455 STATS_INC(b->stats.lock[false]); 419 456 ··· 423 460 424 461 pr_debug("%s - ppn %lx, hv returns %ld\n", __func__, pfn, status); 425 462 STATS_INC(b->stats.lock_fail[false]); 426 - return 1; 463 + return -EIO; 427 464 } 428 465 429 466 static int vmballoon_send_batched_lock(struct vmballoon *b, ··· 560 597 561 598 locked = vmballoon_send_lock_page(b, page_to_pfn(page), &hv_status, 562 599 target); 563 - if (locked > 0) { 600 + if (locked) { 564 601 STATS_INC(b->stats.refused_alloc[false]); 565 602 566 - if (hv_status == VMW_BALLOON_ERROR_RESET || 567 - hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED) { 603 + if (locked == -EIO && 604 + (hv_status == VMW_BALLOON_ERROR_RESET || 605 + hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED)) { 568 606 vmballoon_free_page(page, false); 569 607 return -EIO; 570 608 } ··· 581 617 } else { 582 618 vmballoon_free_page(page, 
false); 583 619 } 584 - return -EIO; 620 + return locked; 585 621 } 586 622 587 623 /* track allocated page */ ··· 754 790 */ 755 791 static void vmballoon_inflate(struct vmballoon *b) 756 792 { 757 - unsigned rate; 758 - unsigned int allocations = 0; 759 793 unsigned int num_pages = 0; 760 794 int error = 0; 761 795 gfp_t flags = VMW_PAGE_ALLOC_NOSLEEP; ··· 780 818 * Start with no sleep allocation rate which may be higher 781 819 * than sleeping allocation rate. 782 820 */ 783 - if (b->slow_allocation_cycles) { 784 - rate = b->rate_alloc; 785 - is_2m_pages = false; 786 - } else { 787 - rate = UINT_MAX; 788 - is_2m_pages = 789 - b->supported_page_sizes == VMW_BALLOON_NUM_PAGE_SIZES; 790 - } 821 + is_2m_pages = b->supported_page_sizes == VMW_BALLOON_NUM_PAGE_SIZES; 791 822 792 - pr_debug("%s - goal: %d, no-sleep rate: %u, sleep rate: %d\n", 793 - __func__, b->target - b->size, rate, b->rate_alloc); 823 + pr_debug("%s - goal: %d", __func__, b->target - b->size); 794 824 795 825 while (!b->reset_required && 796 826 b->size + num_pages * vmballoon_page_size(is_2m_pages) ··· 815 861 if (flags == VMW_PAGE_ALLOC_CANSLEEP) { 816 862 /* 817 863 * CANSLEEP page allocation failed, so guest 818 - * is under severe memory pressure. Quickly 819 - * decrease allocation rate. 864 + * is under severe memory pressure. We just log 865 + * the event, but do not stop the inflation 866 + * due to its negative impact on performance. 820 867 */ 821 - b->rate_alloc = max(b->rate_alloc / 2, 822 - VMW_BALLOON_RATE_ALLOC_MIN); 823 868 STATS_INC(b->stats.sleep_alloc_fail); 824 869 break; 825 870 } 826 871 827 872 /* 828 873 * NOSLEEP page allocation failed, so the guest is 829 - * under memory pressure. Let us slow down page 830 - * allocations for next few cycles so that the guest 831 - * gets out of memory pressure. Also, if we already 832 - * allocated b->rate_alloc pages, let's pause, 833 - * otherwise switch to sleeping allocations. 874 + * under memory pressure. 
Slowing down page alloctions 875 + * seems to be reasonable, but doing so might actually 876 + * cause the hypervisor to throttle us down, resulting 877 + * in degraded performance. We will count on the 878 + * scheduler and standard memory management mechanisms 879 + * for now. 834 880 */ 835 - b->slow_allocation_cycles = VMW_BALLOON_SLOW_CYCLES; 836 - 837 - if (allocations >= b->rate_alloc) 838 - break; 839 - 840 881 flags = VMW_PAGE_ALLOC_CANSLEEP; 841 - /* Lower rate for sleeping allocations. */ 842 - rate = b->rate_alloc; 843 882 continue; 844 883 } 845 884 ··· 846 899 } 847 900 848 901 cond_resched(); 849 - 850 - if (allocations >= rate) { 851 - /* We allocated enough pages, let's take a break. */ 852 - break; 853 - } 854 902 } 855 903 856 904 if (num_pages > 0) 857 905 b->ops->lock(b, num_pages, is_2m_pages, &b->target); 858 - 859 - /* 860 - * We reached our goal without failures so try increasing 861 - * allocation rate. 862 - */ 863 - if (error == 0 && allocations >= b->rate_alloc) { 864 - unsigned int mult = allocations / b->rate_alloc; 865 - 866 - b->rate_alloc = 867 - min(b->rate_alloc + mult * VMW_BALLOON_RATE_ALLOC_INC, 868 - VMW_BALLOON_RATE_ALLOC_MAX); 869 - } 870 906 871 907 vmballoon_release_refused_pages(b, true); 872 908 vmballoon_release_refused_pages(b, false); ··· 959 1029 */ 960 1030 static int vmballoon_vmci_init(struct vmballoon *b) 961 1031 { 962 - int error = 0; 1032 + unsigned long error, dummy; 963 1033 964 - if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) != 0) { 965 - error = vmci_doorbell_create(&b->vmci_doorbell, 966 - VMCI_FLAG_DELAYED_CB, 967 - VMCI_PRIVILEGE_FLAG_RESTRICTED, 968 - vmballoon_doorbell, b); 1034 + if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) == 0) 1035 + return 0; 969 1036 970 - if (error == VMCI_SUCCESS) { 971 - VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET, 972 - b->vmci_doorbell.context, 973 - b->vmci_doorbell.resource, error); 974 - STATS_INC(b->stats.doorbell_set); 975 - } 976 - } 1037 + error = 
vmci_doorbell_create(&b->vmci_doorbell, VMCI_FLAG_DELAYED_CB, 1038 + VMCI_PRIVILEGE_FLAG_RESTRICTED, 1039 + vmballoon_doorbell, b); 977 1040 978 - if (error != 0) { 979 - vmballoon_vmci_cleanup(b); 1041 + if (error != VMCI_SUCCESS) 1042 + goto fail; 980 1043 981 - return -EIO; 982 - } 1044 + error = VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET, b->vmci_doorbell.context, 1045 + b->vmci_doorbell.resource, dummy); 1046 + 1047 + STATS_INC(b->stats.doorbell_set); 1048 + 1049 + if (error != VMW_BALLOON_SUCCESS) 1050 + goto fail; 983 1051 984 1052 return 0; 1053 + fail: 1054 + vmballoon_vmci_cleanup(b); 1055 + return -EIO; 985 1056 } 986 1057 987 1058 /* ··· 1045 1114 if (b->reset_required) 1046 1115 vmballoon_reset(b); 1047 1116 1048 - if (b->slow_allocation_cycles > 0) 1049 - b->slow_allocation_cycles--; 1050 - 1051 1117 if (!b->reset_required && vmballoon_send_get_target(b, &target)) { 1052 1118 /* update target, adjust size */ 1053 1119 b->target = target; ··· 1087 1159 "target: %8d pages\n" 1088 1160 "current: %8d pages\n", 1089 1161 b->target, b->size); 1090 - 1091 - /* format rate info */ 1092 - seq_printf(f, 1093 - "rateSleepAlloc: %8d pages/sec\n", 1094 - b->rate_alloc); 1095 1162 1096 1163 seq_printf(f, 1097 1164 "\n" ··· 1194 1271 INIT_LIST_HEAD(&balloon.page_sizes[is_2m_pages].refused_pages); 1195 1272 } 1196 1273 1197 - /* initialize rates */ 1198 - balloon.rate_alloc = VMW_BALLOON_RATE_ALLOC_MAX; 1199 - 1200 1274 INIT_DELAYED_WORK(&balloon.dwork, vmballoon_work); 1201 1275 1202 1276 error = vmballoon_debugfs_init(&balloon); ··· 1209 1289 1210 1290 return 0; 1211 1291 } 1212 - module_init(vmballoon_init); 1292 + 1293 + /* 1294 + * Using late_initcall() instead of module_init() allows the balloon to use the 1295 + * VMCI doorbell even when the balloon is built into the kernel. Otherwise the 1296 + * VMCI is probed only after the balloon is initialized. If the balloon is used 1297 + * as a module, late_initcall() is equivalent to module_init(). 
1298 + */ 1299 + late_initcall(vmballoon_init); 1213 1300 1214 1301 static void __exit vmballoon_exit(void) 1215 1302 {
+2 -7
drivers/misc/vmw_vmci/vmci_queue_pair.c
··· 668 668 retval = get_user_pages_fast((uintptr_t) produce_uva, 669 669 produce_q->kernel_if->num_pages, 1, 670 670 produce_q->kernel_if->u.h.header_page); 671 - if (retval < produce_q->kernel_if->num_pages) { 671 + if (retval < (int)produce_q->kernel_if->num_pages) { 672 672 pr_debug("get_user_pages_fast(produce) failed (retval=%d)", 673 673 retval); 674 674 qp_release_pages(produce_q->kernel_if->u.h.header_page, ··· 680 680 retval = get_user_pages_fast((uintptr_t) consume_uva, 681 681 consume_q->kernel_if->num_pages, 1, 682 682 consume_q->kernel_if->u.h.header_page); 683 - if (retval < consume_q->kernel_if->num_pages) { 683 + if (retval < (int)consume_q->kernel_if->num_pages) { 684 684 pr_debug("get_user_pages_fast(consume) failed (retval=%d)", 685 685 retval); 686 686 qp_release_pages(consume_q->kernel_if->u.h.header_page, ··· 2214 2214 { 2215 2215 struct qp_broker_entry *entry; 2216 2216 const u32 context_id = vmci_ctx_get_id(context); 2217 - bool is_local = false; 2218 2217 int result; 2219 2218 2220 2219 if (vmci_handle_is_invalid(handle) || !context || ··· 2242 2243 goto out; 2243 2244 } 2244 2245 2245 - is_local = entry->qp.flags & VMCI_QPFLAG_LOCAL; 2246 2246 result = VMCI_SUCCESS; 2247 2247 2248 2248 if (context_id != VMCI_HOST_CONTEXT_ID) { ··· 2323 2325 { 2324 2326 struct qp_broker_entry *entry; 2325 2327 const u32 context_id = vmci_ctx_get_id(context); 2326 - bool is_local = false; 2327 2328 int result; 2328 2329 2329 2330 if (vmci_handle_is_invalid(handle) || !context || ··· 2350 2353 result = VMCI_ERROR_QUEUEPAIR_NOTATTACHED; 2351 2354 goto out; 2352 2355 } 2353 - 2354 - is_local = entry->qp.flags & VMCI_QPFLAG_LOCAL; 2355 2356 2356 2357 if (context_id != VMCI_HOST_CONTEXT_ID) { 2357 2358 qp_acquire_queue_mutex(entry->produce_q);
+1
drivers/mtd/nand/raw/brcmnand/brcmstb_nand.c
··· 13 13 14 14 #include <linux/device.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/platform_device.h> 17 18 18 19 #include "brcmnand.h"
+10
drivers/mux/Kconfig
··· 21 21 To compile the driver as a module, choose M here: the module will 22 22 be called mux-adg792a. 23 23 24 + config MUX_ADGS1408 25 + tristate "Analog Devices ADGS1408/ADGS1409 Multiplexers" 26 + depends on SPI 27 + help 28 + ADGS1408 8:1 multiplexer and ADGS1409 double 4:1 multiplexer 29 + switches. 30 + 31 + To compile the driver as a module, choose M here: the module will 32 + be called mux-adgs1408. 33 + 24 34 config MUX_GPIO 25 35 tristate "GPIO-controlled Multiplexer" 26 36 depends on GPIOLIB || COMPILE_TEST
+2
drivers/mux/Makefile
··· 5 5 6 6 mux-core-objs := core.o 7 7 mux-adg792a-objs := adg792a.o 8 + mux-adgs1408-objs := adgs1408.o 8 9 mux-gpio-objs := gpio.o 9 10 mux-mmio-objs := mmio.o 10 11 11 12 obj-$(CONFIG_MULTIPLEXER) += mux-core.o 12 13 obj-$(CONFIG_MUX_ADG792A) += mux-adg792a.o 14 + obj-$(CONFIG_MUX_ADGS1408) += mux-adgs1408.o 13 15 obj-$(CONFIG_MUX_GPIO) += mux-gpio.o 14 16 obj-$(CONFIG_MUX_MMIO) += mux-mmio.o
+131
drivers/mux/adgs1408.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * ADGS1408/ADGS1409 SPI MUX driver 4 + * 5 + * Copyright 2018 Analog Devices Inc. 6 + */ 7 + 8 + #include <linux/err.h> 9 + #include <linux/module.h> 10 + #include <linux/mux/driver.h> 11 + #include <linux/of_platform.h> 12 + #include <linux/property.h> 13 + #include <linux/spi/spi.h> 14 + 15 + #define ADGS1408_SW_DATA (0x01) 16 + #define ADGS1408_REG_READ(reg) ((reg) | 0x80) 17 + #define ADGS1408_DISABLE (0x00) 18 + #define ADGS1408_MUX(state) (((state) << 1) | 1) 19 + 20 + enum adgs1408_chip_id { 21 + ADGS1408 = 1, 22 + ADGS1409, 23 + }; 24 + 25 + static int adgs1408_spi_reg_write(struct spi_device *spi, 26 + u8 reg_addr, u8 reg_data) 27 + { 28 + u8 tx_buf[2]; 29 + 30 + tx_buf[0] = reg_addr; 31 + tx_buf[1] = reg_data; 32 + 33 + return spi_write_then_read(spi, tx_buf, sizeof(tx_buf), NULL, 0); 34 + } 35 + 36 + static int adgs1408_set(struct mux_control *mux, int state) 37 + { 38 + struct spi_device *spi = to_spi_device(mux->chip->dev.parent); 39 + u8 reg; 40 + 41 + if (state == MUX_IDLE_DISCONNECT) 42 + reg = ADGS1408_DISABLE; 43 + else 44 + reg = ADGS1408_MUX(state); 45 + 46 + return adgs1408_spi_reg_write(spi, ADGS1408_SW_DATA, reg); 47 + } 48 + 49 + static const struct mux_control_ops adgs1408_ops = { 50 + .set = adgs1408_set, 51 + }; 52 + 53 + static int adgs1408_probe(struct spi_device *spi) 54 + { 55 + struct device *dev = &spi->dev; 56 + enum adgs1408_chip_id chip_id; 57 + struct mux_chip *mux_chip; 58 + struct mux_control *mux; 59 + s32 idle_state; 60 + int ret; 61 + 62 + chip_id = (enum adgs1408_chip_id)of_device_get_match_data(dev); 63 + if (!chip_id) 64 + chip_id = spi_get_device_id(spi)->driver_data; 65 + 66 + mux_chip = devm_mux_chip_alloc(dev, 1, 0); 67 + if (IS_ERR(mux_chip)) 68 + return PTR_ERR(mux_chip); 69 + 70 + mux_chip->ops = &adgs1408_ops; 71 + 72 + ret = adgs1408_spi_reg_write(spi, ADGS1408_SW_DATA, ADGS1408_DISABLE); 73 + if (ret < 0) 74 + return ret; 75 + 76 + ret = 
device_property_read_u32(dev, "idle-state", (u32 *)&idle_state); 77 + if (ret < 0) 78 + idle_state = MUX_IDLE_AS_IS; 79 + 80 + mux = mux_chip->mux; 81 + 82 + if (chip_id == ADGS1408) 83 + mux->states = 8; 84 + else 85 + mux->states = 4; 86 + 87 + switch (idle_state) { 88 + case MUX_IDLE_DISCONNECT: 89 + case MUX_IDLE_AS_IS: 90 + case 0 ... 7: 91 + /* adgs1409 supports only 4 states */ 92 + if (idle_state < mux->states) { 93 + mux->idle_state = idle_state; 94 + break; 95 + } 96 + /* fall through */ 97 + default: 98 + dev_err(dev, "invalid idle-state %d\n", idle_state); 99 + return -EINVAL; 100 + } 101 + 102 + return devm_mux_chip_register(dev, mux_chip); 103 + } 104 + 105 + static const struct spi_device_id adgs1408_spi_id[] = { 106 + { "adgs1408", ADGS1408 }, 107 + { "adgs1409", ADGS1409 }, 108 + { } 109 + }; 110 + MODULE_DEVICE_TABLE(spi, adgs1408_spi_id); 111 + 112 + static const struct of_device_id adgs1408_of_match[] = { 113 + { .compatible = "adi,adgs1408", .data = (void *)ADGS1408, }, 114 + { .compatible = "adi,adgs1409", .data = (void *)ADGS1409, }, 115 + { } 116 + }; 117 + MODULE_DEVICE_TABLE(of, adgs1408_of_match); 118 + 119 + static struct spi_driver adgs1408_driver = { 120 + .driver = { 121 + .name = "adgs1408", 122 + .of_match_table = of_match_ptr(adgs1408_of_match), 123 + }, 124 + .probe = adgs1408_probe, 125 + .id_table = adgs1408_spi_id, 126 + }; 127 + module_spi_driver(adgs1408_driver); 128 + 129 + MODULE_AUTHOR("Mircea Caprioru <mircea.caprioru@analog.com>"); 130 + MODULE_DESCRIPTION("Analog Devices ADGS1408 MUX driver"); 131 + MODULE_LICENSE("GPL v2");
+1
drivers/net/ethernet/calxeda/xgmac.c
··· 14 14 * this program. If not, see <http://www.gnu.org/licenses/>. 15 15 */ 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/circ_buf.h> 19 20 #include <linux/interrupt.h>
+1
drivers/net/ethernet/faraday/ftmac100.c
··· 29 29 #include <linux/io.h> 30 30 #include <linux/mii.h> 31 31 #include <linux/module.h> 32 + #include <linux/mod_devicetable.h> 32 33 #include <linux/netdevice.h> 33 34 #include <linux/platform_device.h> 34 35
+3
drivers/net/hyperv/netvsc_drv.c
··· 2303 2303 .id_table = id_table, 2304 2304 .probe = netvsc_probe, 2305 2305 .remove = netvsc_remove, 2306 + .driver = { 2307 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 2308 + }, 2306 2309 }; 2307 2310 2308 2311 /*
+1
drivers/net/wireless/ath/ath9k/ahb.c
··· 19 19 #include <linux/nl80211.h> 20 20 #include <linux/platform_device.h> 21 21 #include <linux/module.h> 22 + #include <linux/mod_devicetable.h> 22 23 #include "ath9k.h" 23 24 24 25 static const struct platform_device_id ath9k_platform_id_table[] = {
+1
drivers/net/wireless/ti/wl12xx/main.c
··· 20 20 */ 21 21 22 22 #include <linux/module.h> 23 + #include <linux/mod_devicetable.h> 23 24 #include <linux/platform_device.h> 24 25 25 26 #include <linux/err.h>
+1
drivers/net/wireless/ti/wl18xx/main.c
··· 20 20 */ 21 21 22 22 #include <linux/module.h> 23 + #include <linux/mod_devicetable.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/ip.h> 25 26 #include <linux/firmware.h>
+11
drivers/nvmem/Kconfig
··· 181 181 help 182 182 Say y here to enable Rave SP EEPROM support. 183 183 184 + config SC27XX_EFUSE 185 + tristate "Spreadtrum SC27XX eFuse Support" 186 + depends on MFD_SC27XX_PMIC || COMPILE_TEST 187 + depends on HAS_IOMEM 188 + help 189 + This is a simple driver to dump specified values of Spreadtrum 190 + SC27XX PMICs from eFuse. 191 + 192 + This driver can also be built as a module. If so, the module 193 + will be called nvmem-sc27xx-efuse. 194 + 184 195 endif
+2 -1
drivers/nvmem/Makefile
··· 39 39 nvmem_snvs_lpgpr-y := snvs_lpgpr.o 40 40 obj-$(CONFIG_RAVE_SP_EEPROM) += nvmem-rave-sp-eeprom.o 41 41 nvmem-rave-sp-eeprom-y := rave-sp-eeprom.o 42 - 42 + obj-$(CONFIG_SC27XX_EFUSE) += nvmem-sc27xx-efuse.o 43 + nvmem-sc27xx-efuse-y := sc27xx-efuse.o
-2
drivers/nvmem/core.c
··· 31 31 struct device dev; 32 32 int stride; 33 33 int word_size; 34 - int ncells; 35 34 int id; 36 35 int users; 37 36 size_t size; ··· 388 389 nvmem_cell_add(cells[i]); 389 390 } 390 391 391 - nvmem->ncells = ncells; 392 392 /* remove tmp array */ 393 393 kfree(cells); 394 394
+7
drivers/nvmem/imx-ocotp.c
··· 409 409 .set_timing = imx_ocotp_set_imx6_timing, 410 410 }; 411 411 412 + static const struct ocotp_params imx6sll_params = { 413 + .nregs = 128, 414 + .bank_address_words = 0, 415 + .set_timing = imx_ocotp_set_imx6_timing, 416 + }; 417 + 412 418 static const struct ocotp_params imx6sx_params = { 413 419 .nregs = 128, 414 420 .bank_address_words = 0, ··· 439 433 { .compatible = "fsl,imx6sx-ocotp", .data = &imx6sx_params }, 440 434 { .compatible = "fsl,imx6ul-ocotp", .data = &imx6ul_params }, 441 435 { .compatible = "fsl,imx7d-ocotp", .data = &imx7d_params }, 436 + { .compatible = "fsl,imx6sll-ocotp", .data = &imx6sll_params }, 442 437 { }, 443 438 }; 444 439 MODULE_DEVICE_TABLE(of, imx_ocotp_dt_ids);
+1
drivers/nvmem/lpc18xx_eeprom.c
··· 14 14 #include <linux/err.h> 15 15 #include <linux/io.h> 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/nvmem-provider.h> 18 19 #include <linux/platform_device.h> 19 20 #include <linux/reset.h>
+1
drivers/nvmem/mtk-efuse.c
··· 14 14 15 15 #include <linux/device.h> 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/io.h> 18 19 #include <linux/nvmem-provider.h> 19 20 #include <linux/platform_device.h>
+1
drivers/nvmem/qfprom.c
··· 13 13 14 14 #include <linux/device.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/io.h> 17 18 #include <linux/nvmem-provider.h> 18 19 #include <linux/platform_device.h>
+5 -1
drivers/nvmem/rave-sp-eeprom.c
··· 35 35 RAVE_SP_EEPROM_HEADER_SMALL = 4U, 36 36 RAVE_SP_EEPROM_HEADER_BIG = 5U, 37 37 }; 38 + #define RAVE_SP_EEPROM_HEADER_MAX RAVE_SP_EEPROM_HEADER_BIG 38 39 39 40 #define RAVE_SP_EEPROM_PAGE_SIZE 32U 40 41 ··· 98 97 const unsigned int rsp_size = 99 98 is_write ? sizeof(*page) - sizeof(page->data) : sizeof(*page); 100 99 unsigned int offset = 0; 101 - u8 cmd[cmd_size]; 100 + u8 cmd[RAVE_SP_EEPROM_HEADER_MAX + sizeof(page->data)]; 102 101 int ret; 102 + 103 + if (WARN_ON(cmd_size > sizeof(cmd))) 104 + return -EINVAL; 103 105 104 106 cmd[offset++] = eeprom->address; 105 107 cmd[offset++] = 0;
+264
drivers/nvmem/sc27xx-efuse.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (C) 2018 Spreadtrum Communications Inc. 3 + 4 + #include <linux/hwspinlock.h> 5 + #include <linux/module.h> 6 + #include <linux/of.h> 7 + #include <linux/platform_device.h> 8 + #include <linux/regmap.h> 9 + #include <linux/nvmem-provider.h> 10 + 11 + /* PMIC global registers definition */ 12 + #define SC27XX_MODULE_EN 0xc08 13 + #define SC27XX_EFUSE_EN BIT(6) 14 + 15 + /* Efuse controller registers definition */ 16 + #define SC27XX_EFUSE_GLB_CTRL 0x0 17 + #define SC27XX_EFUSE_DATA_RD 0x4 18 + #define SC27XX_EFUSE_DATA_WR 0x8 19 + #define SC27XX_EFUSE_BLOCK_INDEX 0xc 20 + #define SC27XX_EFUSE_MODE_CTRL 0x10 21 + #define SC27XX_EFUSE_STATUS 0x14 22 + #define SC27XX_EFUSE_WR_TIMING_CTRL 0x20 23 + #define SC27XX_EFUSE_RD_TIMING_CTRL 0x24 24 + #define SC27XX_EFUSE_EFUSE_DEB_CTRL 0x28 25 + 26 + /* Mask definition for SC27XX_EFUSE_BLOCK_INDEX register */ 27 + #define SC27XX_EFUSE_BLOCK_MASK GENMASK(4, 0) 28 + 29 + /* Bits definitions for SC27XX_EFUSE_MODE_CTRL register */ 30 + #define SC27XX_EFUSE_PG_START BIT(0) 31 + #define SC27XX_EFUSE_RD_START BIT(1) 32 + #define SC27XX_EFUSE_CLR_RDDONE BIT(2) 33 + 34 + /* Bits definitions for SC27XX_EFUSE_STATUS register */ 35 + #define SC27XX_EFUSE_PGM_BUSY BIT(0) 36 + #define SC27XX_EFUSE_READ_BUSY BIT(1) 37 + #define SC27XX_EFUSE_STANDBY BIT(2) 38 + #define SC27XX_EFUSE_GLOBAL_PROT BIT(3) 39 + #define SC27XX_EFUSE_RD_DONE BIT(4) 40 + 41 + /* Block number and block width (bytes) definitions */ 42 + #define SC27XX_EFUSE_BLOCK_MAX 32 43 + #define SC27XX_EFUSE_BLOCK_WIDTH 2 44 + 45 + /* Timeout (ms) for the trylock of hardware spinlocks */ 46 + #define SC27XX_EFUSE_HWLOCK_TIMEOUT 5000 47 + 48 + /* Timeout (us) of polling the status */ 49 + #define SC27XX_EFUSE_POLL_TIMEOUT 3000000 50 + #define SC27XX_EFUSE_POLL_DELAY_US 10000 51 + 52 + struct sc27xx_efuse { 53 + struct device *dev; 54 + struct regmap *regmap; 55 + struct hwspinlock *hwlock; 56 + struct mutex mutex; 57 + u32 
base; 58 + }; 59 + 60 + /* 61 + * On Spreadtrum platform, we have multi-subsystems will access the unique 62 + * efuse controller, so we need one hardware spinlock to synchronize between 63 + * the multiple subsystems. 64 + */ 65 + static int sc27xx_efuse_lock(struct sc27xx_efuse *efuse) 66 + { 67 + int ret; 68 + 69 + mutex_lock(&efuse->mutex); 70 + 71 + ret = hwspin_lock_timeout_raw(efuse->hwlock, 72 + SC27XX_EFUSE_HWLOCK_TIMEOUT); 73 + if (ret) { 74 + dev_err(efuse->dev, "timeout to get the hwspinlock\n"); 75 + mutex_unlock(&efuse->mutex); 76 + return ret; 77 + } 78 + 79 + return 0; 80 + } 81 + 82 + static void sc27xx_efuse_unlock(struct sc27xx_efuse *efuse) 83 + { 84 + hwspin_unlock_raw(efuse->hwlock); 85 + mutex_unlock(&efuse->mutex); 86 + } 87 + 88 + static int sc27xx_efuse_poll_status(struct sc27xx_efuse *efuse, u32 bits) 89 + { 90 + int ret; 91 + u32 val; 92 + 93 + ret = regmap_read_poll_timeout(efuse->regmap, 94 + efuse->base + SC27XX_EFUSE_STATUS, 95 + val, (val & bits), 96 + SC27XX_EFUSE_POLL_DELAY_US, 97 + SC27XX_EFUSE_POLL_TIMEOUT); 98 + if (ret) { 99 + dev_err(efuse->dev, "timeout to update the efuse status\n"); 100 + return ret; 101 + } 102 + 103 + return 0; 104 + } 105 + 106 + static int sc27xx_efuse_read(void *context, u32 offset, void *val, size_t bytes) 107 + { 108 + struct sc27xx_efuse *efuse = context; 109 + u32 buf; 110 + int ret; 111 + 112 + if (offset > SC27XX_EFUSE_BLOCK_MAX || bytes > SC27XX_EFUSE_BLOCK_WIDTH) 113 + return -EINVAL; 114 + 115 + ret = sc27xx_efuse_lock(efuse); 116 + if (ret) 117 + return ret; 118 + 119 + /* Enable the efuse controller. */ 120 + ret = regmap_update_bits(efuse->regmap, SC27XX_MODULE_EN, 121 + SC27XX_EFUSE_EN, SC27XX_EFUSE_EN); 122 + if (ret) 123 + goto unlock_efuse; 124 + 125 + /* 126 + * Before reading, we should ensure the efuse controller is in 127 + * standby state. 
128 + */ 129 + ret = sc27xx_efuse_poll_status(efuse, SC27XX_EFUSE_STANDBY); 130 + if (ret) 131 + goto disable_efuse; 132 + 133 + /* Set the block address to be read. */ 134 + ret = regmap_write(efuse->regmap, 135 + efuse->base + SC27XX_EFUSE_BLOCK_INDEX, 136 + offset & SC27XX_EFUSE_BLOCK_MASK); 137 + if (ret) 138 + goto disable_efuse; 139 + 140 + /* Start reading process from efuse memory. */ 141 + ret = regmap_update_bits(efuse->regmap, 142 + efuse->base + SC27XX_EFUSE_MODE_CTRL, 143 + SC27XX_EFUSE_RD_START, 144 + SC27XX_EFUSE_RD_START); 145 + if (ret) 146 + goto disable_efuse; 147 + 148 + /* 149 + * Polling the read done status to make sure the reading process 150 + * is completed, that means the data can be read out now. 151 + */ 152 + ret = sc27xx_efuse_poll_status(efuse, SC27XX_EFUSE_RD_DONE); 153 + if (ret) 154 + goto disable_efuse; 155 + 156 + /* Read data from efuse memory. */ 157 + ret = regmap_read(efuse->regmap, efuse->base + SC27XX_EFUSE_DATA_RD, 158 + &buf); 159 + if (ret) 160 + goto disable_efuse; 161 + 162 + /* Clear the read done flag. */ 163 + ret = regmap_update_bits(efuse->regmap, 164 + efuse->base + SC27XX_EFUSE_MODE_CTRL, 165 + SC27XX_EFUSE_CLR_RDDONE, 166 + SC27XX_EFUSE_CLR_RDDONE); 167 + 168 + disable_efuse: 169 + /* Disable the efuse controller after reading. 
*/ 170 + regmap_update_bits(efuse->regmap, SC27XX_MODULE_EN, SC27XX_EFUSE_EN, 0); 171 + unlock_efuse: 172 + sc27xx_efuse_unlock(efuse); 173 + 174 + if (!ret) 175 + memcpy(val, &buf, bytes); 176 + 177 + return ret; 178 + } 179 + 180 + static int sc27xx_efuse_probe(struct platform_device *pdev) 181 + { 182 + struct device_node *np = pdev->dev.of_node; 183 + struct nvmem_config econfig = { }; 184 + struct nvmem_device *nvmem; 185 + struct sc27xx_efuse *efuse; 186 + int ret; 187 + 188 + efuse = devm_kzalloc(&pdev->dev, sizeof(*efuse), GFP_KERNEL); 189 + if (!efuse) 190 + return -ENOMEM; 191 + 192 + efuse->regmap = dev_get_regmap(pdev->dev.parent, NULL); 193 + if (!efuse->regmap) { 194 + dev_err(&pdev->dev, "failed to get efuse regmap\n"); 195 + return -ENODEV; 196 + } 197 + 198 + ret = of_property_read_u32(np, "reg", &efuse->base); 199 + if (ret) { 200 + dev_err(&pdev->dev, "failed to get efuse base address\n"); 201 + return ret; 202 + } 203 + 204 + ret = of_hwspin_lock_get_id(np, 0); 205 + if (ret < 0) { 206 + dev_err(&pdev->dev, "failed to get hwspinlock id\n"); 207 + return ret; 208 + } 209 + 210 + efuse->hwlock = hwspin_lock_request_specific(ret); 211 + if (!efuse->hwlock) { 212 + dev_err(&pdev->dev, "failed to request hwspinlock\n"); 213 + return -ENXIO; 214 + } 215 + 216 + mutex_init(&efuse->mutex); 217 + efuse->dev = &pdev->dev; 218 + platform_set_drvdata(pdev, efuse); 219 + 220 + econfig.stride = 1; 221 + econfig.word_size = 1; 222 + econfig.read_only = true; 223 + econfig.name = "sc27xx-efuse"; 224 + econfig.size = SC27XX_EFUSE_BLOCK_MAX * SC27XX_EFUSE_BLOCK_WIDTH; 225 + econfig.reg_read = sc27xx_efuse_read; 226 + econfig.priv = efuse; 227 + econfig.dev = &pdev->dev; 228 + nvmem = devm_nvmem_register(&pdev->dev, &econfig); 229 + if (IS_ERR(nvmem)) { 230 + dev_err(&pdev->dev, "failed to register nvmem config\n"); 231 + hwspin_lock_free(efuse->hwlock); 232 + return PTR_ERR(nvmem); 233 + } 234 + 235 + return 0; 236 + } 237 + 238 + static int 
sc27xx_efuse_remove(struct platform_device *pdev) 239 + { 240 + struct sc27xx_efuse *efuse = platform_get_drvdata(pdev); 241 + 242 + hwspin_lock_free(efuse->hwlock); 243 + return 0; 244 + } 245 + 246 + static const struct of_device_id sc27xx_efuse_of_match[] = { 247 + { .compatible = "sprd,sc2731-efuse" }, 248 + { } 249 + }; 250 + 251 + static struct platform_driver sc27xx_efuse_driver = { 252 + .probe = sc27xx_efuse_probe, 253 + .remove = sc27xx_efuse_remove, 254 + .driver = { 255 + .name = "sc27xx-efuse", 256 + .of_match_table = sc27xx_efuse_of_match, 257 + }, 258 + }; 259 + 260 + module_platform_driver(sc27xx_efuse_driver); 261 + 262 + MODULE_AUTHOR("Freeman Liu <freeman.liu@spreadtrum.com>"); 263 + MODULE_DESCRIPTION("Spreadtrum SC27xx efuse driver"); 264 + MODULE_LICENSE("GPL v2");
+1
drivers/nvmem/uniphier-efuse.c
··· 16 16 #include <linux/device.h> 17 17 #include <linux/io.h> 18 18 #include <linux/module.h> 19 + #include <linux/mod_devicetable.h> 19 20 #include <linux/nvmem-provider.h> 20 21 #include <linux/platform_device.h> 21 22
+2 -1
drivers/parport/ieee1284.c
··· 267 267 port->ieee1284.phase = IEEE1284_PH_FWD_IDLE; 268 268 } 269 269 270 - /* fall-though.. */ 270 + /* fall through */ 271 271 272 272 default: 273 273 /* Terminate from all other modes. */ ··· 615 615 case IEEE1284_MODE_NIBBLE: 616 616 case IEEE1284_MODE_BYTE: 617 617 parport_negotiate (port, IEEE1284_MODE_COMPAT); 618 + /* fall through */ 618 619 case IEEE1284_MODE_COMPAT: 619 620 DPRINTK (KERN_DEBUG "%s: Using compatibility mode\n", 620 621 port->name);
+6 -2
drivers/parport/parport_sunbpp.c
··· 286 286 287 287 ops = kmemdup(&parport_sunbpp_ops, sizeof(struct parport_operations), 288 288 GFP_KERNEL); 289 - if (!ops) 289 + if (!ops) { 290 + err = -ENOMEM; 290 291 goto out_unmap; 292 + } 291 293 292 294 dprintk(("register_port\n")); 293 - if (!(p = parport_register_port((unsigned long)base, irq, dma, ops))) 295 + if (!(p = parport_register_port((unsigned long)base, irq, dma, ops))) { 296 + err = -ENOMEM; 294 297 goto out_free_ops; 298 + } 295 299 296 300 p->size = size; 297 301 p->dev = &op->dev;
+1
drivers/perf/arm-ccn.c
··· 17 17 #include <linux/interrupt.h> 18 18 #include <linux/io.h> 19 19 #include <linux/module.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/perf_event.h> 21 22 #include <linux/platform_device.h> 22 23 #include <linux/slab.h>
+1
drivers/pinctrl/intel/pinctrl-merrifield.c
··· 10 10 #include <linux/err.h> 11 11 #include <linux/io.h> 12 12 #include <linux/module.h> 13 + #include <linux/mod_devicetable.h> 13 14 #include <linux/platform_device.h> 14 15 #include <linux/pinctrl/pinconf.h> 15 16 #include <linux/pinctrl/pinconf-generic.h>
+1
drivers/pinctrl/pinctrl-u300.c
··· 13 13 */ 14 14 #include <linux/init.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/platform_device.h> 17 18 #include <linux/io.h> 18 19 #include <linux/slab.h>
+1
drivers/pinctrl/sprd/pinctrl-sprd-sc9860.c
··· 13 13 */ 14 14 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/platform_device.h> 17 18 18 19 #include "pinctrl-sprd.h"
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-ld11.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-ld20.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-ld4.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-ld6b.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-pro4.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-pro5.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-pxs2.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-pxs3.c
··· 15 15 16 16 #include <linux/init.h> 17 17 #include <linux/kernel.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
+1
drivers/pinctrl/uniphier/pinctrl-uniphier-sld8.c
··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/init.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/pinctrl/pinctrl.h> 19 20 #include <linux/platform_device.h> 20 21
-5
drivers/platform/goldfish/Kconfig
··· 10 10 11 11 if GOLDFISH 12 12 13 - config GOLDFISH_BUS 14 - bool "Goldfish platform bus" 15 - ---help--- 16 - This is a virtual bus to host Goldfish Android Virtual Devices. 17 - 18 13 config GOLDFISH_PIPE 19 14 tristate "Goldfish virtual device for QEMU pipes" 20 15 ---help---
-1
drivers/platform/goldfish/Makefile
··· 1 1 # 2 2 # Makefile for Goldfish platform specific drivers 3 3 # 4 - obj-$(CONFIG_GOLDFISH_BUS) += pdev_bus.o 5 4 obj-$(CONFIG_GOLDFISH_PIPE) += goldfish_pipe.o
+2 -1
drivers/platform/goldfish/goldfish_pipe.c
··· 48 48 49 49 50 50 #include <linux/module.h> 51 + #include <linux/mod_devicetable.h> 51 52 #include <linux/interrupt.h> 52 53 #include <linux/kernel.h> 53 54 #include <linux/spinlock.h> ··· 646 645 wake_up_interruptible(&pipe->wake_queue); 647 646 } 648 647 } 649 - DECLARE_TASKLET(goldfish_interrupt_tasklet, goldfish_interrupt_task, 0); 648 + static DECLARE_TASKLET(goldfish_interrupt_tasklet, goldfish_interrupt_task, 0); 650 649 651 650 /* 652 651 * The general idea of the interrupt handling:
-232
drivers/platform/goldfish/pdev_bus.c
··· 1 - /* 2 - * Copyright (C) 2007 Google, Inc. 3 - * Copyright (C) 2011 Intel, Inc. 4 - * Copyright (C) 2013 Intel, Inc. 5 - * 6 - * This software is licensed under the terms of the GNU General Public 7 - * License version 2, as published by the Free Software Foundation, and 8 - * may be copied, distributed, and modified under those terms. 9 - * 10 - * This program is distributed in the hope that it will be useful, 11 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 - * 15 - */ 16 - 17 - #include <linux/kernel.h> 18 - #include <linux/init.h> 19 - #include <linux/interrupt.h> 20 - #include <linux/irq.h> 21 - #include <linux/platform_device.h> 22 - #include <linux/slab.h> 23 - #include <linux/io.h> 24 - 25 - #define PDEV_BUS_OP_DONE (0x00) 26 - #define PDEV_BUS_OP_REMOVE_DEV (0x04) 27 - #define PDEV_BUS_OP_ADD_DEV (0x08) 28 - 29 - #define PDEV_BUS_OP_INIT (0x00) 30 - 31 - #define PDEV_BUS_OP (0x00) 32 - #define PDEV_BUS_GET_NAME (0x04) 33 - #define PDEV_BUS_NAME_LEN (0x08) 34 - #define PDEV_BUS_ID (0x0c) 35 - #define PDEV_BUS_IO_BASE (0x10) 36 - #define PDEV_BUS_IO_SIZE (0x14) 37 - #define PDEV_BUS_IRQ (0x18) 38 - #define PDEV_BUS_IRQ_COUNT (0x1c) 39 - #define PDEV_BUS_GET_NAME_HIGH (0x20) 40 - 41 - struct pdev_bus_dev { 42 - struct list_head list; 43 - struct platform_device pdev; 44 - struct resource resources[0]; 45 - }; 46 - 47 - static void goldfish_pdev_worker(struct work_struct *work); 48 - 49 - static void __iomem *pdev_bus_base; 50 - static unsigned long pdev_bus_addr; 51 - static unsigned long pdev_bus_len; 52 - static u32 pdev_bus_irq; 53 - static LIST_HEAD(pdev_bus_new_devices); 54 - static LIST_HEAD(pdev_bus_registered_devices); 55 - static LIST_HEAD(pdev_bus_removed_devices); 56 - static DECLARE_WORK(pdev_bus_worker, goldfish_pdev_worker); 57 - 58 - 59 - static void goldfish_pdev_worker(struct work_struct *work) 
60 - { 61 - int ret; 62 - struct pdev_bus_dev *pos, *n; 63 - 64 - list_for_each_entry_safe(pos, n, &pdev_bus_removed_devices, list) { 65 - list_del(&pos->list); 66 - platform_device_unregister(&pos->pdev); 67 - kfree(pos); 68 - } 69 - list_for_each_entry_safe(pos, n, &pdev_bus_new_devices, list) { 70 - list_del(&pos->list); 71 - ret = platform_device_register(&pos->pdev); 72 - if (ret) 73 - pr_err("goldfish_pdev_worker failed to register device, %s\n", 74 - pos->pdev.name); 75 - list_add_tail(&pos->list, &pdev_bus_registered_devices); 76 - } 77 - } 78 - 79 - static void goldfish_pdev_remove(void) 80 - { 81 - struct pdev_bus_dev *pos, *n; 82 - u32 base; 83 - 84 - base = readl(pdev_bus_base + PDEV_BUS_IO_BASE); 85 - 86 - list_for_each_entry_safe(pos, n, &pdev_bus_new_devices, list) { 87 - if (pos->resources[0].start == base) { 88 - list_del(&pos->list); 89 - kfree(pos); 90 - return; 91 - } 92 - } 93 - list_for_each_entry_safe(pos, n, &pdev_bus_registered_devices, list) { 94 - if (pos->resources[0].start == base) { 95 - list_del(&pos->list); 96 - list_add_tail(&pos->list, &pdev_bus_removed_devices); 97 - schedule_work(&pdev_bus_worker); 98 - return; 99 - } 100 - }; 101 - pr_err("goldfish_pdev_remove could not find device at %x\n", base); 102 - } 103 - 104 - static int goldfish_new_pdev(void) 105 - { 106 - struct pdev_bus_dev *dev; 107 - u32 name_len; 108 - u32 irq = -1, irq_count; 109 - int resource_count = 2; 110 - u32 base; 111 - char *name; 112 - 113 - base = readl(pdev_bus_base + PDEV_BUS_IO_BASE); 114 - 115 - irq_count = readl(pdev_bus_base + PDEV_BUS_IRQ_COUNT); 116 - name_len = readl(pdev_bus_base + PDEV_BUS_NAME_LEN); 117 - if (irq_count) 118 - resource_count++; 119 - 120 - dev = kzalloc(sizeof(*dev) + 121 - sizeof(struct resource) * resource_count + 122 - name_len + 1 + sizeof(*dev->pdev.dev.dma_mask), GFP_ATOMIC); 123 - if (dev == NULL) 124 - return -ENOMEM; 125 - 126 - dev->pdev.num_resources = resource_count; 127 - dev->pdev.resource = (struct resource 
*)(dev + 1); 128 - dev->pdev.name = name = (char *)(dev->pdev.resource + resource_count); 129 - dev->pdev.dev.coherent_dma_mask = ~0; 130 - dev->pdev.dev.dma_mask = (void *)(dev->pdev.name + name_len + 1); 131 - *dev->pdev.dev.dma_mask = ~0; 132 - 133 - #ifdef CONFIG_64BIT 134 - writel((u32)((u64)name>>32), pdev_bus_base + PDEV_BUS_GET_NAME_HIGH); 135 - #endif 136 - writel((u32)(unsigned long)name, pdev_bus_base + PDEV_BUS_GET_NAME); 137 - name[name_len] = '\0'; 138 - dev->pdev.id = readl(pdev_bus_base + PDEV_BUS_ID); 139 - dev->pdev.resource[0].start = base; 140 - dev->pdev.resource[0].end = base + 141 - readl(pdev_bus_base + PDEV_BUS_IO_SIZE) - 1; 142 - dev->pdev.resource[0].flags = IORESOURCE_MEM; 143 - if (irq_count) { 144 - irq = readl(pdev_bus_base + PDEV_BUS_IRQ); 145 - dev->pdev.resource[1].start = irq; 146 - dev->pdev.resource[1].end = irq + irq_count - 1; 147 - dev->pdev.resource[1].flags = IORESOURCE_IRQ; 148 - } 149 - 150 - pr_debug("goldfish_new_pdev %s at %x irq %d\n", name, base, irq); 151 - list_add_tail(&dev->list, &pdev_bus_new_devices); 152 - schedule_work(&pdev_bus_worker); 153 - 154 - return 0; 155 - } 156 - 157 - static irqreturn_t goldfish_pdev_bus_interrupt(int irq, void *dev_id) 158 - { 159 - irqreturn_t ret = IRQ_NONE; 160 - 161 - while (1) { 162 - u32 op = readl(pdev_bus_base + PDEV_BUS_OP); 163 - 164 - switch (op) { 165 - case PDEV_BUS_OP_REMOVE_DEV: 166 - goldfish_pdev_remove(); 167 - ret = IRQ_HANDLED; 168 - break; 169 - 170 - case PDEV_BUS_OP_ADD_DEV: 171 - goldfish_new_pdev(); 172 - ret = IRQ_HANDLED; 173 - break; 174 - 175 - case PDEV_BUS_OP_DONE: 176 - default: 177 - return ret; 178 - } 179 - } 180 - } 181 - 182 - static int goldfish_pdev_bus_probe(struct platform_device *pdev) 183 - { 184 - int ret; 185 - struct resource *r; 186 - 187 - r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 188 - if (r == NULL) 189 - return -EINVAL; 190 - 191 - pdev_bus_addr = r->start; 192 - pdev_bus_len = resource_size(r); 193 - 194 - pdev_bus_base 
= ioremap(pdev_bus_addr, pdev_bus_len); 195 - if (pdev_bus_base == NULL) { 196 - ret = -ENOMEM; 197 - dev_err(&pdev->dev, "unable to map Goldfish MMIO.\n"); 198 - goto free_resources; 199 - } 200 - 201 - r = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 202 - if (r == NULL) { 203 - ret = -ENOENT; 204 - goto free_map; 205 - } 206 - 207 - pdev_bus_irq = r->start; 208 - 209 - ret = request_irq(pdev_bus_irq, goldfish_pdev_bus_interrupt, 210 - IRQF_SHARED, "goldfish_pdev_bus", pdev); 211 - if (ret) { 212 - dev_err(&pdev->dev, "unable to request Goldfish IRQ\n"); 213 - goto free_map; 214 - } 215 - 216 - writel(PDEV_BUS_OP_INIT, pdev_bus_base + PDEV_BUS_OP); 217 - return 0; 218 - 219 - free_map: 220 - iounmap(pdev_bus_base); 221 - free_resources: 222 - release_mem_region(pdev_bus_addr, pdev_bus_len); 223 - return ret; 224 - } 225 - 226 - static struct platform_driver goldfish_pdev_bus_driver = { 227 - .probe = goldfish_pdev_bus_probe, 228 - .driver = { 229 - .name = "goldfish_pdev_bus" 230 - } 231 - }; 232 - builtin_platform_driver(goldfish_pdev_bus_driver);
+1
drivers/platform/x86/intel_bxtwc_tmu.c
··· 19 19 */ 20 20 21 21 #include <linux/module.h> 22 + #include <linux/mod_devicetable.h> 22 23 #include <linux/interrupt.h> 23 24 #include <linux/platform_device.h> 24 25 #include <linux/mfd/intel_soc_pmic.h>
+1
drivers/power/avs/smartreflex.c
··· 18 18 */ 19 19 20 20 #include <linux/module.h> 21 + #include <linux/mod_devicetable.h> 21 22 #include <linux/interrupt.h> 22 23 #include <linux/clk.h> 23 24 #include <linux/io.h>
+1
drivers/power/reset/ltc2952-poweroff.c
··· 62 62 #include <linux/slab.h> 63 63 #include <linux/kmod.h> 64 64 #include <linux/module.h> 65 + #include <linux/mod_devicetable.h> 65 66 #include <linux/gpio/consumer.h> 66 67 #include <linux/reboot.h> 67 68
+1
drivers/power/supply/max8998_charger.c
··· 21 21 22 22 #include <linux/err.h> 23 23 #include <linux/module.h> 24 + #include <linux/mod_devicetable.h> 24 25 #include <linux/slab.h> 25 26 #include <linux/platform_device.h> 26 27 #include <linux/power_supply.h>
+1
drivers/power/supply/olpc_battery.c
··· 10 10 11 11 #include <linux/kernel.h> 12 12 #include <linux/module.h> 13 + #include <linux/mod_devicetable.h> 13 14 #include <linux/types.h> 14 15 #include <linux/err.h> 15 16 #include <linux/device.h>
+1
drivers/ptp/ptp_dte.c
··· 14 14 #include <linux/err.h> 15 15 #include <linux/io.h> 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/platform_device.h> 18 19 #include <linux/ptp_clock_kernel.h> 19 20 #include <linux/types.h>
+1
drivers/regulator/tps65912-regulator.c
··· 18 18 */ 19 19 20 20 #include <linux/module.h> 21 + #include <linux/mod_devicetable.h> 21 22 #include <linux/platform_device.h> 22 23 #include <linux/regulator/driver.h> 23 24
+1
drivers/reset/reset-ath79.c
··· 17 17 18 18 #include <linux/io.h> 19 19 #include <linux/init.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/platform_device.h> 21 22 #include <linux/reset-controller.h> 22 23 #include <linux/reboot.h>
+1
drivers/reset/reset-axs10x.c
··· 10 10 11 11 #include <linux/io.h> 12 12 #include <linux/module.h> 13 + #include <linux/mod_devicetable.h> 13 14 #include <linux/platform_device.h> 14 15 #include <linux/reset-controller.h> 15 16
+1
drivers/reset/reset-imx7.c
··· 16 16 */ 17 17 18 18 #include <linux/mfd/syscon.h> 19 + #include <linux/mod_devicetable.h> 19 20 #include <linux/platform_device.h> 20 21 #include <linux/reset-controller.h> 21 22 #include <linux/regmap.h>
+1
drivers/rtc/rtc-coh901331.c
··· 8 8 */ 9 9 #include <linux/init.h> 10 10 #include <linux/module.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/rtc.h> 12 13 #include <linux/clk.h> 13 14 #include <linux/interrupt.h>
+1
drivers/rtc/rtc-cpcap.c
··· 24 24 */ 25 25 #include <linux/kernel.h> 26 26 #include <linux/module.h> 27 + #include <linux/mod_devicetable.h> 27 28 #include <linux/init.h> 28 29 #include <linux/device.h> 29 30 #include <linux/platform_device.h>
+1
drivers/rtc/rtc-ftrtc010.c
··· 26 26 #include <linux/platform_device.h> 27 27 #include <linux/kernel.h> 28 28 #include <linux/module.h> 29 + #include <linux/mod_devicetable.h> 29 30 #include <linux/clk.h> 30 31 31 32 #define DRV_NAME "rtc-ftrtc010"
+1
drivers/rtc/rtc-mc13xxx.c
··· 13 13 #include <linux/platform_device.h> 14 14 #include <linux/kernel.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/slab.h> 17 18 #include <linux/rtc.h> 18 19
+1
drivers/rtc/rtc-mxc_v2.c
··· 8 8 #include <linux/clk.h> 9 9 #include <linux/io.h> 10 10 #include <linux/module.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/platform_device.h> 12 13 #include <linux/rtc.h> 13 14
+1
drivers/rtc/rtc-r7301.c
··· 11 11 #include <linux/io.h> 12 12 #include <linux/kernel.h> 13 13 #include <linux/module.h> 14 + #include <linux/mod_devicetable.h> 14 15 #include <linux/delay.h> 15 16 #include <linux/regmap.h> 16 17 #include <linux/platform_device.h>
+1
drivers/rtc/rtc-sh.c
··· 15 15 * for more details. 16 16 */ 17 17 #include <linux/module.h> 18 + #include <linux/mod_devicetable.h> 18 19 #include <linux/kernel.h> 19 20 #include <linux/bcd.h> 20 21 #include <linux/rtc.h>
+1
drivers/rtc/rtc-tegra.c
··· 25 25 #include <linux/irq.h> 26 26 #include <linux/kernel.h> 27 27 #include <linux/module.h> 28 + #include <linux/mod_devicetable.h> 28 29 #include <linux/platform_device.h> 29 30 #include <linux/pm.h> 30 31 #include <linux/rtc.h>
+3
drivers/scsi/storvsc_drv.c
··· 1935 1935 .id_table = id_table, 1936 1936 .probe = storvsc_probe, 1937 1937 .remove = storvsc_remove, 1938 + .driver = { 1939 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 1940 + }, 1938 1941 }; 1939 1942 1940 1943 #if IS_ENABLED(CONFIG_SCSI_FC_ATTRS)
+1
drivers/siox/siox-bus-gpio.c
··· 5 5 6 6 #include <linux/gpio/consumer.h> 7 7 #include <linux/module.h> 8 + #include <linux/mod_devicetable.h> 8 9 #include <linux/platform_device.h> 9 10 10 11 #include <linux/delay.h>
+15 -15
drivers/siox/siox-core.c
··· 215 215 siox_status_clean(status, 216 216 sdevice->status_written_lastcycle); 217 217 218 - /* Check counter bits */ 219 - if (siox_device_counter_error(sdevice, status_clean)) { 220 - bool prev_counter_error; 218 + /* Check counter and type bits */ 219 + if (siox_device_counter_error(sdevice, status_clean) || 220 + siox_device_type_error(sdevice, status_clean)) { 221 + bool prev_error; 221 222 222 223 synced = false; 223 224 224 225 /* only report a new error if the last cycle was ok */ 225 - prev_counter_error = 226 + prev_error = 226 227 siox_device_counter_error(sdevice, 227 - prev_status_clean); 228 - if (!prev_counter_error) { 228 + prev_status_clean) || 229 + siox_device_type_error(sdevice, 230 + prev_status_clean); 231 + 232 + if (!prev_error) { 229 233 sdevice->status_errors++; 230 234 sysfs_notify_dirent(sdevice->status_errors_kn); 231 235 } 232 236 } 233 - 234 - /* Check type bits */ 235 - if (siox_device_type_error(sdevice, status_clean)) 236 - synced = false; 237 237 238 238 /* If the device is unsynced report the watchdog as active */ 239 239 if (!synced) { ··· 715 715 716 716 dev_set_name(&smaster->dev, "siox-%d", smaster->busno); 717 717 718 + mutex_init(&smaster->lock); 719 + INIT_LIST_HEAD(&smaster->devices); 720 + 718 721 smaster->last_poll = jiffies; 719 - smaster->poll_thread = kthread_create(siox_poll_thread, smaster, 720 - "siox-%d", smaster->busno); 722 + smaster->poll_thread = kthread_run(siox_poll_thread, smaster, 723 + "siox-%d", smaster->busno); 721 724 if (IS_ERR(smaster->poll_thread)) { 722 725 smaster->active = 0; 723 726 return PTR_ERR(smaster->poll_thread); 724 727 } 725 - 726 - mutex_init(&smaster->lock); 727 - INIT_LIST_HEAD(&smaster->devices); 728 728 729 729 ret = device_add(&smaster->dev); 730 730 if (ret)
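The siox-core hunk above does two things: it folds the counter-bit and type-bit checks into one error path so either kind of status error bumps `status_errors` exactly once per good-to-bad transition, and it moves `mutex_init()`/`INIT_LIST_HEAD()` ahead of thread creation (switching `kthread_create()` to `kthread_run()`, which starts the thread immediately) so the poll thread never sees uninitialized state. The transition-counting logic can be sketched in plain userspace C — the flag names here are hypothetical stand-ins, not the driver's:

```c
#include <stdbool.h>

/* Hypothetical flags standing in for the SIOX status bits. */
#define ERR_COUNTER 0x1u
#define ERR_TYPE    0x2u

/* Count a new error only on a good-to-bad transition, funnelling
 * counter *and* type errors through one path, as the hunk above does. */
static bool record_error(unsigned int prev_status, unsigned int status,
			 unsigned int *status_errors)
{
	bool error = status & (ERR_COUNTER | ERR_TYPE);
	bool prev_error = prev_status & (ERR_COUNTER | ERR_TYPE);

	if (error && !prev_error)
		(*status_errors)++;

	return error;	/* caller clears "synced" when this is set */
}

/* Three consecutive poll cycles: new error, still failing, recovered. */
static int record_error_demo(void)
{
	unsigned int errors = 0;

	record_error(0, ERR_TYPE, &errors);		/* counted once */
	record_error(ERR_TYPE, ERR_COUNTER, &errors);	/* not counted again */
	record_error(ERR_COUNTER, 0, &errors);		/* recovery: no count */

	return (int)errors;
}
```

The point of the `prev_error` test is rate limiting: a device that stays broken for many poll cycles is counted (and `sysfs_notify_dirent()`ed) once, not once per cycle.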
+11 -1
drivers/slimbus/Kconfig
··· 15 15 # SLIMbus controllers 16 16 config SLIM_QCOM_CTRL 17 17 tristate "Qualcomm SLIMbus Manager Component" 18 - depends on SLIMBUS 19 18 depends on HAS_IOMEM 20 19 help 21 20 Select driver if Qualcomm's SLIMbus Manager Component is 22 21 programmed using Linux kernel. 23 22 23 + config SLIM_QCOM_NGD_CTRL 24 + tristate "Qualcomm SLIMbus Satellite Non-Generic Device Component" 25 + depends on QCOM_QMI_HELPERS 26 + depends on HAS_IOMEM && DMA_ENGINE 27 + help 28 + Select driver if Qualcomm's SLIMbus Satellite Non-Generic Device 29 + Component is programmed using Linux kernel. 30 + This is light-weight slimbus controller driver responsible for 31 + communicating with slave HW directly over the bus using messaging 32 + interface, and communicating with master component residing on ADSP 33 + for bandwidth and data-channel management. 24 34 endif
+4 -1
drivers/slimbus/Makefile
··· 3 3 # Makefile for kernel SLIMbus framework. 4 4 # 5 5 obj-$(CONFIG_SLIMBUS) += slimbus.o 6 - slimbus-y := core.o messaging.o sched.o 6 + slimbus-y := core.o messaging.o sched.o stream.o 7 7 8 8 #Controllers 9 9 obj-$(CONFIG_SLIM_QCOM_CTRL) += slim-qcom-ctrl.o 10 10 slim-qcom-ctrl-y := qcom-ctrl.o 11 + 12 + obj-$(CONFIG_SLIM_QCOM_NGD_CTRL) += slim-qcom-ngd-ctrl.o 13 + slim-qcom-ngd-ctrl-y := qcom-ngd-ctrl.o
+41
drivers/slimbus/core.c
··· 114 114 sbdev->dev.release = slim_dev_release; 115 115 sbdev->dev.driver = NULL; 116 116 sbdev->ctrl = ctrl; 117 + INIT_LIST_HEAD(&sbdev->stream_list); 118 + spin_lock_init(&sbdev->stream_list_lock); 117 119 118 120 if (node) 119 121 sbdev->dev.of_node = of_node_get(node); ··· 357 355 return sbdev; 358 356 } 359 357 EXPORT_SYMBOL_GPL(slim_get_device); 358 + 359 + static int of_slim_match_dev(struct device *dev, void *data) 360 + { 361 + struct device_node *np = data; 362 + struct slim_device *sbdev = to_slim_device(dev); 363 + 364 + return (sbdev->dev.of_node == np); 365 + } 366 + 367 + static struct slim_device *of_find_slim_device(struct slim_controller *ctrl, 368 + struct device_node *np) 369 + { 370 + struct slim_device *sbdev; 371 + struct device *dev; 372 + 373 + dev = device_find_child(ctrl->dev, np, of_slim_match_dev); 374 + if (dev) { 375 + sbdev = to_slim_device(dev); 376 + return sbdev; 377 + } 378 + 379 + return NULL; 380 + } 381 + 382 + /** 383 + * of_slim_get_device() - get handle to a device using dt node. 384 + * 385 + * @ctrl: Controller on which this device will be added/queried 386 + * @np: node pointer to device 387 + * 388 + * Return: pointer to a device if it has already reported. Creates a new 389 + * device and returns pointer to it if the device has not yet enumerated. 390 + */ 391 + struct slim_device *of_slim_get_device(struct slim_controller *ctrl, 392 + struct device_node *np) 393 + { 394 + return of_find_slim_device(ctrl, np); 395 + } 396 + EXPORT_SYMBOL_GPL(of_slim_get_device); 360 397 361 398 static int slim_device_alloc_laddr(struct slim_device *sbdev, 362 399 bool report_present)
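`of_slim_get_device()` above resolves a DT node to an already-enumerated `slim_device` by walking the controller's children with `device_find_child()` and a match callback comparing `of_node` pointers. A minimal userspace analogue of that lookup pattern — `struct dev` and every name below are illustrative, not the slimbus API:

```c
#include <stddef.h>

/* Hypothetical flattened child table; the real code walks the
 * controller's children with device_find_child(). */
struct dev {
	const void *of_node;	/* key compared by the match callback */
};

static int match_node(const struct dev *d, const void *np)
{
	return d->of_node == np;
}

/* Mirror of of_find_slim_device(): first child whose of_node matches,
 * or NULL when the device has not enumerated yet. */
static const struct dev *find_child(const struct dev *devs, size_t n,
				    const void *np,
				    int (*match)(const struct dev *,
						 const void *))
{
	for (size_t i = 0; i < n; i++)
		if (match(&devs[i], np))
			return &devs[i];
	return NULL;
}

static int find_child_demo(void)
{
	static int node_a, node_b, node_c;	/* stand-ins for device_node */
	struct dev devs[] = { { &node_a }, { &node_b } };

	if (find_child(devs, 2, &node_b, match_node) != &devs[1])
		return 0;
	/* a node with no enumerated device yields NULL to the caller */
	if (find_child(devs, 2, &node_c, match_node) != NULL)
		return 0;
	return 1;
}
```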
+55 -24
drivers/slimbus/messaging.c
··· 29 29 30 30 spin_lock_irqsave(&ctrl->txn_lock, flags); 31 31 txn = idr_find(&ctrl->tid_idr, tid); 32 - if (txn == NULL) { 33 - spin_unlock_irqrestore(&ctrl->txn_lock, flags); 32 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 33 + 34 + if (txn == NULL) 34 35 return; 35 - } 36 36 37 37 msg = txn->msg; 38 38 if (msg == NULL || msg->rbuf == NULL) { 39 39 dev_err(ctrl->dev, "Got response to invalid TID:%d, len:%d\n", 40 40 tid, len); 41 - spin_unlock_irqrestore(&ctrl->txn_lock, flags); 42 41 return; 43 42 } 44 43 45 - idr_remove(&ctrl->tid_idr, tid); 46 - spin_unlock_irqrestore(&ctrl->txn_lock, flags); 47 - 44 + slim_free_txn_tid(ctrl, txn); 48 45 memcpy(msg->rbuf, reply, len); 49 46 if (txn->comp) 50 47 complete(txn->comp); ··· 51 54 pm_runtime_put_autosuspend(ctrl->dev); 52 55 } 53 56 EXPORT_SYMBOL_GPL(slim_msg_response); 57 + 58 + /** 59 + * slim_alloc_txn_tid() - Allocate a tid to txn 60 + * 61 + * @ctrl: Controller handle 62 + * @txn: transaction to be allocated with tid. 63 + * 64 + * Return: zero on success with valid txn->tid and error code on failures. 
65 + */ 66 + int slim_alloc_txn_tid(struct slim_controller *ctrl, struct slim_msg_txn *txn) 67 + { 68 + unsigned long flags; 69 + int ret = 0; 70 + 71 + spin_lock_irqsave(&ctrl->txn_lock, flags); 72 + ret = idr_alloc_cyclic(&ctrl->tid_idr, txn, 0, 73 + SLIM_MAX_TIDS, GFP_ATOMIC); 74 + if (ret < 0) { 75 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 76 + return ret; 77 + } 78 + txn->tid = ret; 79 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 80 + return 0; 81 + } 82 + EXPORT_SYMBOL_GPL(slim_alloc_txn_tid); 83 + 84 + /** 85 + * slim_free_txn_tid() - Freee tid of txn 86 + * 87 + * @ctrl: Controller handle 88 + * @txn: transaction whose tid should be freed 89 + */ 90 + void slim_free_txn_tid(struct slim_controller *ctrl, struct slim_msg_txn *txn) 91 + { 92 + unsigned long flags; 93 + 94 + spin_lock_irqsave(&ctrl->txn_lock, flags); 95 + idr_remove(&ctrl->tid_idr, txn->tid); 96 + spin_unlock_irqrestore(&ctrl->txn_lock, flags); 97 + } 98 + EXPORT_SYMBOL_GPL(slim_free_txn_tid); 54 99 55 100 /** 56 101 * slim_do_transfer() - Process a SLIMbus-messaging transaction ··· 111 72 { 112 73 DECLARE_COMPLETION_ONSTACK(done); 113 74 bool need_tid = false, clk_pause_msg = false; 114 - unsigned long flags; 115 - int ret, tid, timeout; 75 + int ret, timeout; 116 76 117 77 /* 118 78 * do not vote for runtime-PM if the transactions are part of clock ··· 135 97 need_tid = slim_tid_txn(txn->mt, txn->mc); 136 98 137 99 if (need_tid) { 138 - spin_lock_irqsave(&ctrl->txn_lock, flags); 139 - tid = idr_alloc(&ctrl->tid_idr, txn, 0, 140 - SLIM_MAX_TIDS, GFP_ATOMIC); 141 - txn->tid = tid; 100 + ret = slim_alloc_txn_tid(ctrl, txn); 101 + if (ret) 102 + return ret; 142 103 143 104 if (!txn->msg->comp) 144 105 txn->comp = &done; 145 106 else 146 107 txn->comp = txn->comp; 147 - 148 - spin_unlock_irqrestore(&ctrl->txn_lock, flags); 149 - 150 - if (tid < 0) 151 - return tid; 152 108 } 153 109 154 110 ret = ctrl->xfer_msg(ctrl, txn); 155 111 156 - if (ret && need_tid && !txn->msg->comp) { 112 + 
if (!ret && need_tid && !txn->msg->comp) { 157 113 unsigned long ms = txn->rl + HZ; 158 114 159 115 timeout = wait_for_completion_timeout(txn->comp, 160 116 msecs_to_jiffies(ms)); 161 117 if (!timeout) { 162 118 ret = -ETIMEDOUT; 163 - spin_lock_irqsave(&ctrl->txn_lock, flags); 164 - idr_remove(&ctrl->tid_idr, tid); 165 - spin_unlock_irqrestore(&ctrl->txn_lock, flags); 119 + slim_free_txn_tid(ctrl, txn); 166 120 } 167 121 } 168 122 ··· 169 139 * if there was error during this transaction 170 140 */ 171 141 pm_runtime_mark_last_busy(ctrl->dev); 172 - pm_runtime_mark_last_busy(ctrl->dev); 142 + pm_runtime_put_autosuspend(ctrl->dev); 173 143 } 174 144 return ret; 175 145 } ··· 276 246 msg->num_bytes = count; 277 247 msg->rbuf = rbuf; 278 248 msg->wbuf = wbuf; 249 + msg->comp = NULL; 279 250 } 280 251 281 252 /** ··· 338 307 { 339 308 struct slim_val_inf msg; 340 309 341 - slim_fill_msg(&msg, addr, count, val, NULL); 310 + slim_fill_msg(&msg, addr, count, NULL, val); 342 311 343 312 return slim_xfer_msg(sdev, &msg, SLIM_MSG_MC_CHANGE_VALUE); 344 313 }
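The messaging.c changes above centralize TID handling in `slim_alloc_txn_tid()`/`slim_free_txn_tid()` and switch to `idr_alloc_cyclic()`, so a just-freed TID is not handed out again immediately and a late response cannot be matched to the wrong transaction (the hunk also fixes the inverted `if (ret && ...)` wait condition and the duplicated `pm_runtime_mark_last_busy()` call). The cyclic-allocation property can be sketched with a toy table — the size and names below are illustrative, not the kernel IDR:

```c
#include <stddef.h>

#define MAX_TIDS 8	/* toy stand-in for SLIM_MAX_TIDS */

/* Hypothetical userspace analogue of the tid IDR: allocation is
 * cyclic, so a just-freed tid is not reused right away and a late
 * response cannot alias a new transaction. */
struct tid_table {
	void *slot[MAX_TIDS];
	int next;		/* where the next search starts */
};

static int tid_alloc(struct tid_table *t, void *txn)
{
	for (int i = 0; i < MAX_TIDS; i++) {
		int tid = (t->next + i) % MAX_TIDS;

		if (!t->slot[tid]) {
			t->slot[tid] = txn;
			t->next = (tid + 1) % MAX_TIDS;
			return tid;
		}
	}
	return -1;	/* table full, like idr_alloc_cyclic() failing */
}

static void tid_free(struct tid_table *t, int tid)
{
	t->slot[tid] = NULL;
}

static int tid_demo(void)
{
	struct tid_table t = { .next = 0 };
	int dummy;

	int a = tid_alloc(&t, &dummy);	/* first tid */
	tid_free(&t, a);
	int b = tid_alloc(&t, &dummy);	/* cyclic: does not reuse a */

	return a == 0 && b == 1;
}
```

`idr_alloc_cyclic()` likewise starts its search after the most recently allocated ID and wraps, which is what makes TID reuse slow enough for stale responses to age out before a TID comes around again.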
+1526
drivers/slimbus/qcom-ngd-ctrl.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) 2011-2017, The Linux Foundation. All rights reserved. 3 + // Copyright (c) 2018, Linaro Limited 4 + 5 + #include <linux/irq.h> 6 + #include <linux/kernel.h> 7 + #include <linux/init.h> 8 + #include <linux/slab.h> 9 + #include <linux/interrupt.h> 10 + #include <linux/platform_device.h> 11 + #include <linux/dma-mapping.h> 12 + #include <linux/dmaengine.h> 13 + #include <linux/slimbus.h> 14 + #include <linux/delay.h> 15 + #include <linux/pm_runtime.h> 16 + #include <linux/of.h> 17 + #include <linux/io.h> 18 + #include <linux/soc/qcom/qmi.h> 19 + #include <net/sock.h> 20 + #include "slimbus.h" 21 + 22 + /* NGD (Non-ported Generic Device) registers */ 23 + #define NGD_CFG 0x0 24 + #define NGD_CFG_ENABLE BIT(0) 25 + #define NGD_CFG_RX_MSGQ_EN BIT(1) 26 + #define NGD_CFG_TX_MSGQ_EN BIT(2) 27 + #define NGD_STATUS 0x4 28 + #define NGD_LADDR BIT(1) 29 + #define NGD_RX_MSGQ_CFG 0x8 30 + #define NGD_INT_EN 0x10 31 + #define NGD_INT_RECFG_DONE BIT(24) 32 + #define NGD_INT_TX_NACKED_2 BIT(25) 33 + #define NGD_INT_MSG_BUF_CONTE BIT(26) 34 + #define NGD_INT_MSG_TX_INVAL BIT(27) 35 + #define NGD_INT_IE_VE_CHG BIT(28) 36 + #define NGD_INT_DEV_ERR BIT(29) 37 + #define NGD_INT_RX_MSG_RCVD BIT(30) 38 + #define NGD_INT_TX_MSG_SENT BIT(31) 39 + #define NGD_INT_STAT 0x14 40 + #define NGD_INT_CLR 0x18 41 + #define DEF_NGD_INT_MASK (NGD_INT_TX_NACKED_2 | NGD_INT_MSG_BUF_CONTE | \ 42 + NGD_INT_MSG_TX_INVAL | NGD_INT_IE_VE_CHG | \ 43 + NGD_INT_DEV_ERR | NGD_INT_TX_MSG_SENT | \ 44 + NGD_INT_RX_MSG_RCVD) 45 + 46 + /* Slimbus QMI service */ 47 + #define SLIMBUS_QMI_SVC_ID 0x0301 48 + #define SLIMBUS_QMI_SVC_V1 1 49 + #define SLIMBUS_QMI_INS_ID 0 50 + #define SLIMBUS_QMI_SELECT_INSTANCE_REQ_V01 0x0020 51 + #define SLIMBUS_QMI_SELECT_INSTANCE_RESP_V01 0x0020 52 + #define SLIMBUS_QMI_POWER_REQ_V01 0x0021 53 + #define SLIMBUS_QMI_POWER_RESP_V01 0x0021 54 + #define SLIMBUS_QMI_CHECK_FRAMER_STATUS_REQ 0x0022 55 + #define 
SLIMBUS_QMI_CHECK_FRAMER_STATUS_RESP 0x0022 56 + #define SLIMBUS_QMI_POWER_REQ_MAX_MSG_LEN 14 57 + #define SLIMBUS_QMI_POWER_RESP_MAX_MSG_LEN 7 58 + #define SLIMBUS_QMI_SELECT_INSTANCE_REQ_MAX_MSG_LEN 14 59 + #define SLIMBUS_QMI_SELECT_INSTANCE_RESP_MAX_MSG_LEN 7 60 + #define SLIMBUS_QMI_CHECK_FRAMER_STAT_RESP_MAX_MSG_LEN 7 61 + /* QMI response timeout of 500ms */ 62 + #define SLIMBUS_QMI_RESP_TOUT 1000 63 + 64 + /* User defined commands */ 65 + #define SLIM_USR_MC_GENERIC_ACK 0x25 66 + #define SLIM_USR_MC_MASTER_CAPABILITY 0x0 67 + #define SLIM_USR_MC_REPORT_SATELLITE 0x1 68 + #define SLIM_USR_MC_ADDR_QUERY 0xD 69 + #define SLIM_USR_MC_ADDR_REPLY 0xE 70 + #define SLIM_USR_MC_DEFINE_CHAN 0x20 71 + #define SLIM_USR_MC_DEF_ACT_CHAN 0x21 72 + #define SLIM_USR_MC_CHAN_CTRL 0x23 73 + #define SLIM_USR_MC_RECONFIG_NOW 0x24 74 + #define SLIM_USR_MC_REQ_BW 0x28 75 + #define SLIM_USR_MC_CONNECT_SRC 0x2C 76 + #define SLIM_USR_MC_CONNECT_SINK 0x2D 77 + #define SLIM_USR_MC_DISCONNECT_PORT 0x2E 78 + #define SLIM_USR_MC_REPEAT_CHANGE_VALUE 0x0 79 + 80 + #define QCOM_SLIM_NGD_AUTOSUSPEND MSEC_PER_SEC 81 + #define SLIM_RX_MSGQ_TIMEOUT_VAL 0x10000 82 + 83 + #define SLIM_LA_MGR 0xFF 84 + #define SLIM_ROOT_FREQ 24576000 85 + #define LADDR_RETRY 5 86 + 87 + /* Per spec.max 40 bytes per received message */ 88 + #define SLIM_MSGQ_BUF_LEN 40 89 + #define QCOM_SLIM_NGD_DESC_NUM 32 90 + 91 + #define SLIM_MSG_ASM_FIRST_WORD(l, mt, mc, dt, ad) \ 92 + ((l) | ((mt) << 5) | ((mc) << 8) | ((dt) << 15) | ((ad) << 16)) 93 + 94 + #define INIT_MX_RETRIES 10 95 + #define DEF_RETRY_MS 10 96 + #define SAT_MAGIC_LSB 0xD9 97 + #define SAT_MAGIC_MSB 0xC5 98 + #define SAT_MSG_VER 0x1 99 + #define SAT_MSG_PROT 0x1 100 + #define to_ngd(d) container_of(d, struct qcom_slim_ngd, dev) 101 + 102 + struct ngd_reg_offset_data { 103 + u32 offset, size; 104 + }; 105 + 106 + static const struct ngd_reg_offset_data ngd_v1_5_offset_info = { 107 + .offset = 0x1000, 108 + .size = 0x1000, 109 + }; 110 + 111 + enum 
qcom_slim_ngd_state { 112 + QCOM_SLIM_NGD_CTRL_AWAKE, 113 + QCOM_SLIM_NGD_CTRL_IDLE, 114 + QCOM_SLIM_NGD_CTRL_ASLEEP, 115 + QCOM_SLIM_NGD_CTRL_DOWN, 116 + }; 117 + 118 + struct qcom_slim_ngd_qmi { 119 + struct qmi_handle qmi; 120 + struct sockaddr_qrtr svc_info; 121 + struct qmi_handle svc_event_hdl; 122 + struct qmi_response_type_v01 resp; 123 + struct qmi_handle *handle; 124 + struct completion qmi_comp; 125 + }; 126 + 127 + struct qcom_slim_ngd_ctrl; 128 + struct qcom_slim_ngd; 129 + 130 + struct qcom_slim_ngd_dma_desc { 131 + struct dma_async_tx_descriptor *desc; 132 + struct qcom_slim_ngd_ctrl *ctrl; 133 + struct completion *comp; 134 + dma_cookie_t cookie; 135 + dma_addr_t phys; 136 + void *base; 137 + }; 138 + 139 + struct qcom_slim_ngd { 140 + struct platform_device *pdev; 141 + void __iomem *base; 142 + int id; 143 + }; 144 + 145 + struct qcom_slim_ngd_ctrl { 146 + struct slim_framer framer; 147 + struct slim_controller ctrl; 148 + struct qcom_slim_ngd_qmi qmi; 149 + struct qcom_slim_ngd *ngd; 150 + struct device *dev; 151 + void __iomem *base; 152 + struct dma_chan *dma_rx_channel; 153 + struct dma_chan *dma_tx_channel; 154 + struct qcom_slim_ngd_dma_desc rx_desc[QCOM_SLIM_NGD_DESC_NUM]; 155 + struct qcom_slim_ngd_dma_desc txdesc[QCOM_SLIM_NGD_DESC_NUM]; 156 + struct completion reconf; 157 + struct work_struct m_work; 158 + struct workqueue_struct *mwq; 159 + spinlock_t tx_buf_lock; 160 + enum qcom_slim_ngd_state state; 161 + dma_addr_t rx_phys_base; 162 + dma_addr_t tx_phys_base; 163 + void *rx_base; 164 + void *tx_base; 165 + int tx_tail; 166 + int tx_head; 167 + u32 ver; 168 + }; 169 + 170 + enum slimbus_mode_enum_type_v01 { 171 + /* To force a 32 bit signed enum. 
Do not change or use*/ 172 + SLIMBUS_MODE_ENUM_TYPE_MIN_ENUM_VAL_V01 = INT_MIN, 173 + SLIMBUS_MODE_SATELLITE_V01 = 1, 174 + SLIMBUS_MODE_MASTER_V01 = 2, 175 + SLIMBUS_MODE_ENUM_TYPE_MAX_ENUM_VAL_V01 = INT_MAX, 176 + }; 177 + 178 + enum slimbus_pm_enum_type_v01 { 179 + /* To force a 32 bit signed enum. Do not change or use*/ 180 + SLIMBUS_PM_ENUM_TYPE_MIN_ENUM_VAL_V01 = INT_MIN, 181 + SLIMBUS_PM_INACTIVE_V01 = 1, 182 + SLIMBUS_PM_ACTIVE_V01 = 2, 183 + SLIMBUS_PM_ENUM_TYPE_MAX_ENUM_VAL_V01 = INT_MAX, 184 + }; 185 + 186 + enum slimbus_resp_enum_type_v01 { 187 + SLIMBUS_RESP_ENUM_TYPE_MIN_VAL_V01 = INT_MIN, 188 + SLIMBUS_RESP_SYNCHRONOUS_V01 = 1, 189 + SLIMBUS_RESP_ENUM_TYPE_MAX_VAL_V01 = INT_MAX, 190 + }; 191 + 192 + struct slimbus_select_inst_req_msg_v01 { 193 + uint32_t instance; 194 + uint8_t mode_valid; 195 + enum slimbus_mode_enum_type_v01 mode; 196 + }; 197 + 198 + struct slimbus_select_inst_resp_msg_v01 { 199 + struct qmi_response_type_v01 resp; 200 + }; 201 + 202 + struct slimbus_power_req_msg_v01 { 203 + enum slimbus_pm_enum_type_v01 pm_req; 204 + uint8_t resp_type_valid; 205 + enum slimbus_resp_enum_type_v01 resp_type; 206 + }; 207 + 208 + struct slimbus_power_resp_msg_v01 { 209 + struct qmi_response_type_v01 resp; 210 + }; 211 + 212 + static struct qmi_elem_info slimbus_select_inst_req_msg_v01_ei[] = { 213 + { 214 + .data_type = QMI_UNSIGNED_4_BYTE, 215 + .elem_len = 1, 216 + .elem_size = sizeof(uint32_t), 217 + .array_type = NO_ARRAY, 218 + .tlv_type = 0x01, 219 + .offset = offsetof(struct slimbus_select_inst_req_msg_v01, 220 + instance), 221 + .ei_array = NULL, 222 + }, 223 + { 224 + .data_type = QMI_OPT_FLAG, 225 + .elem_len = 1, 226 + .elem_size = sizeof(uint8_t), 227 + .array_type = NO_ARRAY, 228 + .tlv_type = 0x10, 229 + .offset = offsetof(struct slimbus_select_inst_req_msg_v01, 230 + mode_valid), 231 + .ei_array = NULL, 232 + }, 233 + { 234 + .data_type = QMI_UNSIGNED_4_BYTE, 235 + .elem_len = 1, 236 + .elem_size = sizeof(enum 
slimbus_mode_enum_type_v01), 237 + .array_type = NO_ARRAY, 238 + .tlv_type = 0x10, 239 + .offset = offsetof(struct slimbus_select_inst_req_msg_v01, 240 + mode), 241 + .ei_array = NULL, 242 + }, 243 + { 244 + .data_type = QMI_EOTI, 245 + .elem_len = 0, 246 + .elem_size = 0, 247 + .array_type = NO_ARRAY, 248 + .tlv_type = 0x00, 249 + .offset = 0, 250 + .ei_array = NULL, 251 + }, 252 + }; 253 + 254 + static struct qmi_elem_info slimbus_select_inst_resp_msg_v01_ei[] = { 255 + { 256 + .data_type = QMI_STRUCT, 257 + .elem_len = 1, 258 + .elem_size = sizeof(struct qmi_response_type_v01), 259 + .array_type = NO_ARRAY, 260 + .tlv_type = 0x02, 261 + .offset = offsetof(struct slimbus_select_inst_resp_msg_v01, 262 + resp), 263 + .ei_array = qmi_response_type_v01_ei, 264 + }, 265 + { 266 + .data_type = QMI_EOTI, 267 + .elem_len = 0, 268 + .elem_size = 0, 269 + .array_type = NO_ARRAY, 270 + .tlv_type = 0x00, 271 + .offset = 0, 272 + .ei_array = NULL, 273 + }, 274 + }; 275 + 276 + static struct qmi_elem_info slimbus_power_req_msg_v01_ei[] = { 277 + { 278 + .data_type = QMI_UNSIGNED_4_BYTE, 279 + .elem_len = 1, 280 + .elem_size = sizeof(enum slimbus_pm_enum_type_v01), 281 + .array_type = NO_ARRAY, 282 + .tlv_type = 0x01, 283 + .offset = offsetof(struct slimbus_power_req_msg_v01, 284 + pm_req), 285 + .ei_array = NULL, 286 + }, 287 + { 288 + .data_type = QMI_OPT_FLAG, 289 + .elem_len = 1, 290 + .elem_size = sizeof(uint8_t), 291 + .array_type = NO_ARRAY, 292 + .tlv_type = 0x10, 293 + .offset = offsetof(struct slimbus_power_req_msg_v01, 294 + resp_type_valid), 295 + }, 296 + { 297 + .data_type = QMI_SIGNED_4_BYTE_ENUM, 298 + .elem_len = 1, 299 + .elem_size = sizeof(enum slimbus_resp_enum_type_v01), 300 + .array_type = NO_ARRAY, 301 + .tlv_type = 0x10, 302 + .offset = offsetof(struct slimbus_power_req_msg_v01, 303 + resp_type), 304 + }, 305 + { 306 + .data_type = QMI_EOTI, 307 + .elem_len = 0, 308 + .elem_size = 0, 309 + .array_type = NO_ARRAY, 310 + .tlv_type = 0x00, 311 + .offset = 
0, 312 + .ei_array = NULL, 313 + }, 314 + }; 315 + 316 + static struct qmi_elem_info slimbus_power_resp_msg_v01_ei[] = { 317 + { 318 + .data_type = QMI_STRUCT, 319 + .elem_len = 1, 320 + .elem_size = sizeof(struct qmi_response_type_v01), 321 + .array_type = NO_ARRAY, 322 + .tlv_type = 0x02, 323 + .offset = offsetof(struct slimbus_power_resp_msg_v01, resp), 324 + .ei_array = qmi_response_type_v01_ei, 325 + }, 326 + { 327 + .data_type = QMI_EOTI, 328 + .elem_len = 0, 329 + .elem_size = 0, 330 + .array_type = NO_ARRAY, 331 + .tlv_type = 0x00, 332 + .offset = 0, 333 + .ei_array = NULL, 334 + }, 335 + }; 336 + 337 + static int qcom_slim_qmi_send_select_inst_req(struct qcom_slim_ngd_ctrl *ctrl, 338 + struct slimbus_select_inst_req_msg_v01 *req) 339 + { 340 + struct slimbus_select_inst_resp_msg_v01 resp = { { 0, 0 } }; 341 + struct qmi_txn txn; 342 + int rc; 343 + 344 + rc = qmi_txn_init(ctrl->qmi.handle, &txn, 345 + slimbus_select_inst_resp_msg_v01_ei, &resp); 346 + if (rc < 0) { 347 + dev_err(ctrl->dev, "QMI TXN init fail: %d\n", rc); 348 + return rc; 349 + } 350 + 351 + rc = qmi_send_request(ctrl->qmi.handle, NULL, &txn, 352 + SLIMBUS_QMI_SELECT_INSTANCE_REQ_V01, 353 + SLIMBUS_QMI_SELECT_INSTANCE_REQ_MAX_MSG_LEN, 354 + slimbus_select_inst_req_msg_v01_ei, req); 355 + if (rc < 0) { 356 + dev_err(ctrl->dev, "QMI send req fail %d\n", rc); 357 + qmi_txn_cancel(&txn); 358 + return rc; 359 + } 360 + 361 + rc = qmi_txn_wait(&txn, SLIMBUS_QMI_RESP_TOUT); 362 + if (rc < 0) { 363 + dev_err(ctrl->dev, "QMI TXN wait fail: %d\n", rc); 364 + return rc; 365 + } 366 + /* Check the response */ 367 + if (resp.resp.result != QMI_RESULT_SUCCESS_V01) { 368 + dev_err(ctrl->dev, "QMI request failed 0x%x\n", 369 + resp.resp.result); 370 + return -EREMOTEIO; 371 + } 372 + 373 + return 0; 374 + } 375 + 376 + static void qcom_slim_qmi_power_resp_cb(struct qmi_handle *handle, 377 + struct sockaddr_qrtr *sq, 378 + struct qmi_txn *txn, const void *data) 379 + { 380 + struct 
slimbus_power_resp_msg_v01 *resp; 381 + 382 + resp = (struct slimbus_power_resp_msg_v01 *)data; 383 + if (resp->resp.result != QMI_RESULT_SUCCESS_V01) 384 + pr_err("QMI power request failed 0x%x\n", 385 + resp->resp.result); 386 + 387 + complete(&txn->completion); 388 + } 389 + 390 + static int qcom_slim_qmi_send_power_request(struct qcom_slim_ngd_ctrl *ctrl, 391 + struct slimbus_power_req_msg_v01 *req) 392 + { 393 + struct slimbus_power_resp_msg_v01 resp = { { 0, 0 } }; 394 + struct qmi_txn txn; 395 + int rc; 396 + 397 + rc = qmi_txn_init(ctrl->qmi.handle, &txn, 398 + slimbus_power_resp_msg_v01_ei, &resp); 399 + 400 + rc = qmi_send_request(ctrl->qmi.handle, NULL, &txn, 401 + SLIMBUS_QMI_POWER_REQ_V01, 402 + SLIMBUS_QMI_POWER_REQ_MAX_MSG_LEN, 403 + slimbus_power_req_msg_v01_ei, req); 404 + if (rc < 0) { 405 + dev_err(ctrl->dev, "QMI send req fail %d\n", rc); 406 + qmi_txn_cancel(&txn); 407 + return rc; 408 + } 409 + 410 + rc = qmi_txn_wait(&txn, SLIMBUS_QMI_RESP_TOUT); 411 + if (rc < 0) { 412 + dev_err(ctrl->dev, "QMI TXN wait fail: %d\n", rc); 413 + return rc; 414 + } 415 + 416 + /* Check the response */ 417 + if (resp.resp.result != QMI_RESULT_SUCCESS_V01) { 418 + dev_err(ctrl->dev, "QMI request failed 0x%x\n", 419 + resp.resp.result); 420 + return -EREMOTEIO; 421 + } 422 + 423 + return 0; 424 + } 425 + 426 + static struct qmi_msg_handler qcom_slim_qmi_msg_handlers[] = { 427 + { 428 + .type = QMI_RESPONSE, 429 + .msg_id = SLIMBUS_QMI_POWER_RESP_V01, 430 + .ei = slimbus_power_resp_msg_v01_ei, 431 + .decoded_size = sizeof(struct slimbus_power_resp_msg_v01), 432 + .fn = qcom_slim_qmi_power_resp_cb, 433 + }, 434 + {} 435 + }; 436 + 437 + static int qcom_slim_qmi_init(struct qcom_slim_ngd_ctrl *ctrl, 438 + bool apps_is_master) 439 + { 440 + struct slimbus_select_inst_req_msg_v01 req; 441 + struct qmi_handle *handle; 442 + int rc; 443 + 444 + handle = devm_kzalloc(ctrl->dev, sizeof(*handle), GFP_KERNEL); 445 + if (!handle) 446 + return -ENOMEM; 447 + 448 + rc = 
qmi_handle_init(handle, SLIMBUS_QMI_POWER_REQ_MAX_MSG_LEN, 449 + NULL, qcom_slim_qmi_msg_handlers); 450 + if (rc < 0) { 451 + dev_err(ctrl->dev, "QMI client init failed: %d\n", rc); 452 + goto qmi_handle_init_failed; 453 + } 454 + 455 + rc = kernel_connect(handle->sock, 456 + (struct sockaddr *)&ctrl->qmi.svc_info, 457 + sizeof(ctrl->qmi.svc_info), 0); 458 + if (rc < 0) { 459 + dev_err(ctrl->dev, "Remote Service connect failed: %d\n", rc); 460 + goto qmi_connect_to_service_failed; 461 + } 462 + 463 + /* Instance is 0 based */ 464 + req.instance = (ctrl->ngd->id >> 1); 465 + req.mode_valid = 1; 466 + 467 + /* Mode indicates the role of the ADSP */ 468 + if (apps_is_master) 469 + req.mode = SLIMBUS_MODE_SATELLITE_V01; 470 + else 471 + req.mode = SLIMBUS_MODE_MASTER_V01; 472 + 473 + ctrl->qmi.handle = handle; 474 + 475 + rc = qcom_slim_qmi_send_select_inst_req(ctrl, &req); 476 + if (rc) { 477 + dev_err(ctrl->dev, "failed to select h/w instance\n"); 478 + goto qmi_select_instance_failed; 479 + } 480 + 481 + return 0; 482 + 483 + qmi_select_instance_failed: 484 + ctrl->qmi.handle = NULL; 485 + qmi_connect_to_service_failed: 486 + qmi_handle_release(handle); 487 + qmi_handle_init_failed: 488 + devm_kfree(ctrl->dev, handle); 489 + return rc; 490 + } 491 + 492 + static void qcom_slim_qmi_exit(struct qcom_slim_ngd_ctrl *ctrl) 493 + { 494 + if (!ctrl->qmi.handle) 495 + return; 496 + 497 + qmi_handle_release(ctrl->qmi.handle); 498 + devm_kfree(ctrl->dev, ctrl->qmi.handle); 499 + ctrl->qmi.handle = NULL; 500 + } 501 + 502 + static int qcom_slim_qmi_power_request(struct qcom_slim_ngd_ctrl *ctrl, 503 + bool active) 504 + { 505 + struct slimbus_power_req_msg_v01 req; 506 + 507 + if (active) 508 + req.pm_req = SLIMBUS_PM_ACTIVE_V01; 509 + else 510 + req.pm_req = SLIMBUS_PM_INACTIVE_V01; 511 + 512 + req.resp_type_valid = 0; 513 + 514 + return qcom_slim_qmi_send_power_request(ctrl, &req); 515 + } 516 + 517 + static u32 *qcom_slim_ngd_tx_msg_get(struct qcom_slim_ngd_ctrl *ctrl, int 
len, 518 + struct completion *comp) 519 + { 520 + struct qcom_slim_ngd_dma_desc *desc; 521 + unsigned long flags; 522 + 523 + spin_lock_irqsave(&ctrl->tx_buf_lock, flags); 524 + 525 + if ((ctrl->tx_tail + 1) % QCOM_SLIM_NGD_DESC_NUM == ctrl->tx_head) { 526 + spin_unlock_irqrestore(&ctrl->tx_buf_lock, flags); 527 + return NULL; 528 + } 529 + desc = &ctrl->txdesc[ctrl->tx_tail]; 530 + desc->base = ctrl->tx_base + ctrl->tx_tail * SLIM_MSGQ_BUF_LEN; 531 + desc->comp = comp; 532 + ctrl->tx_tail = (ctrl->tx_tail + 1) % QCOM_SLIM_NGD_DESC_NUM; 533 + 534 + spin_unlock_irqrestore(&ctrl->tx_buf_lock, flags); 535 + 536 + return desc->base; 537 + } 538 + 539 + static void qcom_slim_ngd_tx_msg_dma_cb(void *args) 540 + { 541 + struct qcom_slim_ngd_dma_desc *desc = args; 542 + struct qcom_slim_ngd_ctrl *ctrl = desc->ctrl; 543 + unsigned long flags; 544 + 545 + spin_lock_irqsave(&ctrl->tx_buf_lock, flags); 546 + 547 + if (desc->comp) { 548 + complete(desc->comp); 549 + desc->comp = NULL; 550 + } 551 + 552 + ctrl->tx_head = (ctrl->tx_head + 1) % QCOM_SLIM_NGD_DESC_NUM; 553 + spin_unlock_irqrestore(&ctrl->tx_buf_lock, flags); 554 + } 555 + 556 + static int qcom_slim_ngd_tx_msg_post(struct qcom_slim_ngd_ctrl *ctrl, 557 + void *buf, int len) 558 + { 559 + struct qcom_slim_ngd_dma_desc *desc; 560 + unsigned long flags; 561 + int index, offset; 562 + 563 + spin_lock_irqsave(&ctrl->tx_buf_lock, flags); 564 + offset = buf - ctrl->tx_base; 565 + index = offset/SLIM_MSGQ_BUF_LEN; 566 + 567 + desc = &ctrl->txdesc[index]; 568 + desc->phys = ctrl->tx_phys_base + offset; 569 + desc->base = ctrl->tx_base + offset; 570 + desc->ctrl = ctrl; 571 + len = (len + 3) & 0xfc; 572 + 573 + desc->desc = dmaengine_prep_slave_single(ctrl->dma_tx_channel, 574 + desc->phys, len, 575 + DMA_MEM_TO_DEV, 576 + DMA_PREP_INTERRUPT); 577 + if (!desc->desc) { 578 + dev_err(ctrl->dev, "unable to prepare channel\n"); 579 + spin_unlock_irqrestore(&ctrl->tx_buf_lock, flags); 580 + return -EINVAL; 581 + } 582 + 583 + 
desc->desc->callback = qcom_slim_ngd_tx_msg_dma_cb; 584 + desc->desc->callback_param = desc; 585 + desc->desc->cookie = dmaengine_submit(desc->desc); 586 + dma_async_issue_pending(ctrl->dma_tx_channel); 587 + spin_unlock_irqrestore(&ctrl->tx_buf_lock, flags); 588 + 589 + return 0; 590 + } 591 + 592 + static void qcom_slim_ngd_rx(struct qcom_slim_ngd_ctrl *ctrl, u8 *buf) 593 + { 594 + u8 mc, mt, len; 595 + 596 + mt = SLIM_HEADER_GET_MT(buf[0]); 597 + len = SLIM_HEADER_GET_RL(buf[0]); 598 + mc = SLIM_HEADER_GET_MC(buf[1]); 599 + 600 + if (mc == SLIM_USR_MC_MASTER_CAPABILITY && 601 + mt == SLIM_MSG_MT_SRC_REFERRED_USER) 602 + queue_work(ctrl->mwq, &ctrl->m_work); 603 + 604 + if (mc == SLIM_MSG_MC_REPLY_INFORMATION || 605 + mc == SLIM_MSG_MC_REPLY_VALUE || (mc == SLIM_USR_MC_ADDR_REPLY && 606 + mt == SLIM_MSG_MT_SRC_REFERRED_USER) || 607 + (mc == SLIM_USR_MC_GENERIC_ACK && 608 + mt == SLIM_MSG_MT_SRC_REFERRED_USER)) { 609 + slim_msg_response(&ctrl->ctrl, &buf[4], buf[3], len - 4); 610 + pm_runtime_mark_last_busy(ctrl->dev); 611 + } 612 + } 613 + 614 + static void qcom_slim_ngd_rx_msgq_cb(void *args) 615 + { 616 + struct qcom_slim_ngd_dma_desc *desc = args; 617 + struct qcom_slim_ngd_ctrl *ctrl = desc->ctrl; 618 + 619 + qcom_slim_ngd_rx(ctrl, (u8 *)desc->base); 620 + /* Add descriptor back to the queue */ 621 + desc->desc = dmaengine_prep_slave_single(ctrl->dma_rx_channel, 622 + desc->phys, SLIM_MSGQ_BUF_LEN, 623 + DMA_DEV_TO_MEM, 624 + DMA_PREP_INTERRUPT); 625 + if (!desc->desc) { 626 + dev_err(ctrl->dev, "Unable to prepare rx channel\n"); 627 + return; 628 + } 629 + 630 + desc->desc->callback = qcom_slim_ngd_rx_msgq_cb; 631 + desc->desc->callback_param = desc; 632 + desc->desc->cookie = dmaengine_submit(desc->desc); 633 + dma_async_issue_pending(ctrl->dma_rx_channel); 634 + } 635 + 636 + static int qcom_slim_ngd_post_rx_msgq(struct qcom_slim_ngd_ctrl *ctrl) 637 + { 638 + struct qcom_slim_ngd_dma_desc *desc; 639 + int i; 640 + 641 + for (i = 0; i < 
QCOM_SLIM_NGD_DESC_NUM; i++) { 642 + desc = &ctrl->rx_desc[i]; 643 + desc->phys = ctrl->rx_phys_base + i * SLIM_MSGQ_BUF_LEN; 644 + desc->ctrl = ctrl; 645 + desc->base = ctrl->rx_base + i * SLIM_MSGQ_BUF_LEN; 646 + desc->desc = dmaengine_prep_slave_single(ctrl->dma_rx_channel, 647 + desc->phys, SLIM_MSGQ_BUF_LEN, 648 + DMA_DEV_TO_MEM, 649 + DMA_PREP_INTERRUPT); 650 + if (!desc->desc) { 651 + dev_err(ctrl->dev, "Unable to prepare rx channel\n"); 652 + return -EINVAL; 653 + } 654 + 655 + desc->desc->callback = qcom_slim_ngd_rx_msgq_cb; 656 + desc->desc->callback_param = desc; 657 + desc->desc->cookie = dmaengine_submit(desc->desc); 658 + } 659 + dma_async_issue_pending(ctrl->dma_rx_channel); 660 + 661 + return 0; 662 + } 663 + 664 + static int qcom_slim_ngd_init_rx_msgq(struct qcom_slim_ngd_ctrl *ctrl) 665 + { 666 + struct device *dev = ctrl->dev; 667 + int ret, size; 668 + 669 + ctrl->dma_rx_channel = dma_request_slave_channel(dev, "rx"); 670 + if (!ctrl->dma_rx_channel) { 671 + dev_err(dev, "Failed to request dma channels"); 672 + return -EINVAL; 673 + } 674 + 675 + size = QCOM_SLIM_NGD_DESC_NUM * SLIM_MSGQ_BUF_LEN; 676 + ctrl->rx_base = dma_alloc_coherent(dev, size, &ctrl->rx_phys_base, 677 + GFP_KERNEL); 678 + if (!ctrl->rx_base) { 679 + dev_err(dev, "dma_alloc_coherent failed\n"); 680 + ret = -ENOMEM; 681 + goto rel_rx; 682 + } 683 + 684 + ret = qcom_slim_ngd_post_rx_msgq(ctrl); 685 + if (ret) { 686 + dev_err(dev, "post_rx_msgq() failed 0x%x\n", ret); 687 + goto rx_post_err; 688 + } 689 + 690 + return 0; 691 + 692 + rx_post_err: 693 + dma_free_coherent(dev, size, ctrl->rx_base, ctrl->rx_phys_base); 694 + rel_rx: 695 + dma_release_channel(ctrl->dma_rx_channel); 696 + return ret; 697 + } 698 + 699 + static int qcom_slim_ngd_init_tx_msgq(struct qcom_slim_ngd_ctrl *ctrl) 700 + { 701 + struct device *dev = ctrl->dev; 702 + unsigned long flags; 703 + int ret = 0; 704 + int size; 705 + 706 + ctrl->dma_tx_channel = dma_request_slave_channel(dev, "tx"); 707 + if 
	(!ctrl->dma_tx_channel) {
		dev_err(dev, "Failed to request dma channels");
		return -EINVAL;
	}

	size = ((QCOM_SLIM_NGD_DESC_NUM + 1) * SLIM_MSGQ_BUF_LEN);
	ctrl->tx_base = dma_alloc_coherent(dev, size, &ctrl->tx_phys_base,
					   GFP_KERNEL);
	if (!ctrl->tx_base) {
		dev_err(dev, "dma_alloc_coherent failed\n");
		ret = -EINVAL;
		goto rel_tx;
	}

	spin_lock_irqsave(&ctrl->tx_buf_lock, flags);
	ctrl->tx_tail = 0;
	ctrl->tx_head = 0;
	spin_unlock_irqrestore(&ctrl->tx_buf_lock, flags);

	return 0;
rel_tx:
	dma_release_channel(ctrl->dma_tx_channel);
	return ret;
}

static int qcom_slim_ngd_init_dma(struct qcom_slim_ngd_ctrl *ctrl)
{
	int ret = 0;

	ret = qcom_slim_ngd_init_rx_msgq(ctrl);
	if (ret) {
		dev_err(ctrl->dev, "rx dma init failed\n");
		return ret;
	}

	ret = qcom_slim_ngd_init_tx_msgq(ctrl);
	if (ret)
		dev_err(ctrl->dev, "tx dma init failed\n");

	return ret;
}

static irqreturn_t qcom_slim_ngd_interrupt(int irq, void *d)
{
	struct qcom_slim_ngd_ctrl *ctrl = d;
	void __iomem *base = ctrl->ngd->base;
	u32 stat = readl(base + NGD_INT_STAT);

	if ((stat & NGD_INT_MSG_BUF_CONTE) ||
	    (stat & NGD_INT_MSG_TX_INVAL) || (stat & NGD_INT_DEV_ERR) ||
	    (stat & NGD_INT_TX_NACKED_2)) {
		dev_err(ctrl->dev, "Error Interrupt received 0x%x\n", stat);
	}

	writel(stat, base + NGD_INT_CLR);

	return IRQ_HANDLED;
}

static int qcom_slim_ngd_xfer_msg(struct slim_controller *sctrl,
				  struct slim_msg_txn *txn)
{
	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(sctrl->dev);
	DECLARE_COMPLETION_ONSTACK(tx_sent);
	DECLARE_COMPLETION_ONSTACK(done);
	int ret, timeout, i;
	u8 wbuf[SLIM_MSGQ_BUF_LEN];
	u8 rbuf[SLIM_MSGQ_BUF_LEN];
	u32 *pbuf;
	u8
	*puc;
	u8 la = txn->la;
	bool usr_msg = false;

	if (txn->mc & SLIM_MSG_CLK_PAUSE_SEQ_FLG)
		return -EPROTONOSUPPORT;

	if (txn->mt == SLIM_MSG_MT_CORE &&
	    (txn->mc >= SLIM_MSG_MC_BEGIN_RECONFIGURATION &&
	     txn->mc <= SLIM_MSG_MC_RECONFIGURE_NOW))
		return 0;

	if (txn->dt == SLIM_MSG_DEST_ENUMADDR)
		return -EPROTONOSUPPORT;

	if (txn->msg->num_bytes > SLIM_MSGQ_BUF_LEN ||
	    txn->rl > SLIM_MSGQ_BUF_LEN) {
		dev_err(ctrl->dev, "msg exceeds HW limit\n");
		return -EINVAL;
	}

	pbuf = qcom_slim_ngd_tx_msg_get(ctrl, txn->rl, &tx_sent);
	if (!pbuf) {
		dev_err(ctrl->dev, "Message buffer unavailable\n");
		return -ENOMEM;
	}

	if (txn->mt == SLIM_MSG_MT_CORE &&
	    (txn->mc == SLIM_MSG_MC_CONNECT_SOURCE ||
	     txn->mc == SLIM_MSG_MC_CONNECT_SINK ||
	     txn->mc == SLIM_MSG_MC_DISCONNECT_PORT)) {
		txn->mt = SLIM_MSG_MT_DEST_REFERRED_USER;
		switch (txn->mc) {
		case SLIM_MSG_MC_CONNECT_SOURCE:
			txn->mc = SLIM_USR_MC_CONNECT_SRC;
			break;
		case SLIM_MSG_MC_CONNECT_SINK:
			txn->mc = SLIM_USR_MC_CONNECT_SINK;
			break;
		case SLIM_MSG_MC_DISCONNECT_PORT:
			txn->mc = SLIM_USR_MC_DISCONNECT_PORT;
			break;
		default:
			return -EINVAL;
		}

		usr_msg = true;
		i = 0;
		wbuf[i++] = txn->la;
		la = SLIM_LA_MGR;
		wbuf[i++] = txn->msg->wbuf[0];
		if (txn->mc != SLIM_USR_MC_DISCONNECT_PORT)
			wbuf[i++] = txn->msg->wbuf[1];

		txn->comp = &done;
		ret = slim_alloc_txn_tid(sctrl, txn);
		if (ret) {
			dev_err(ctrl->dev, "Unable to allocate TID\n");
			return ret;
		}

		wbuf[i++] = txn->tid;

		txn->msg->num_bytes = i;
		txn->msg->wbuf = wbuf;
		txn->msg->rbuf = rbuf;
		txn->rl = txn->msg->num_bytes + 4;
	}

	/* HW expects length field to be excluded */
	txn->rl--;
	puc = (u8 *)pbuf;
	*pbuf = 0;
	if (txn->dt == SLIM_MSG_DEST_LOGICALADDR) {
		*pbuf = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt, txn->mc, 0,
						la);
		puc += 3;
	} else {
		*pbuf = SLIM_MSG_ASM_FIRST_WORD(txn->rl, txn->mt, txn->mc, 1,
						la);
		puc += 2;
	}

	if (slim_tid_txn(txn->mt, txn->mc))
		*(puc++) = txn->tid;

	if (slim_ec_txn(txn->mt, txn->mc)) {
		*(puc++) = (txn->ec & 0xFF);
		*(puc++) = (txn->ec >> 8) & 0xFF;
	}

	if (txn->msg && txn->msg->wbuf)
		memcpy(puc, txn->msg->wbuf, txn->msg->num_bytes);

	ret = qcom_slim_ngd_tx_msg_post(ctrl, pbuf, txn->rl);
	if (ret)
		return ret;

	timeout = wait_for_completion_timeout(&tx_sent, HZ);
	if (!timeout) {
		dev_err(sctrl->dev, "TX timed out:MC:0x%x,mt:0x%x", txn->mc,
			txn->mt);
		return -ETIMEDOUT;
	}

	if (usr_msg) {
		timeout = wait_for_completion_timeout(&done, HZ);
		if (!timeout) {
			dev_err(sctrl->dev, "TX timed out:MC:0x%x,mt:0x%x",
				txn->mc, txn->mt);
			return -ETIMEDOUT;
		}
	}

	return 0;
}

static int qcom_slim_ngd_xfer_msg_sync(struct slim_controller *ctrl,
				       struct slim_msg_txn *txn)
{
	DECLARE_COMPLETION_ONSTACK(done);
	int ret, timeout;

	pm_runtime_get_sync(ctrl->dev);

	txn->comp = &done;

	ret = qcom_slim_ngd_xfer_msg(ctrl, txn);
	if (ret)
		return ret;

	timeout = wait_for_completion_timeout(&done, HZ);
	if (!timeout) {
		dev_err(ctrl->dev, "TX timed out:MC:0x%x,mt:0x%x", txn->mc,
			txn->mt);
		return -ETIMEDOUT;
	}
	return 0;
}

static int qcom_slim_ngd_enable_stream(struct slim_stream_runtime *rt)
{
	struct slim_device *sdev = rt->dev;
	struct slim_controller *ctrl = sdev->ctrl;
	struct slim_val_inf msg = {0};
	u8 wbuf[SLIM_MSGQ_BUF_LEN];
	u8
	rbuf[SLIM_MSGQ_BUF_LEN];
	struct slim_msg_txn txn = {0,};
	int i, ret;

	txn.mt = SLIM_MSG_MT_DEST_REFERRED_USER;
	txn.dt = SLIM_MSG_DEST_LOGICALADDR;
	txn.la = SLIM_LA_MGR;
	txn.ec = 0;
	txn.msg = &msg;
	txn.msg->num_bytes = 0;
	txn.msg->wbuf = wbuf;
	txn.msg->rbuf = rbuf;

	for (i = 0; i < rt->num_ports; i++) {
		struct slim_port *port = &rt->ports[i];

		if (txn.msg->num_bytes == 0) {
			int seg_interval = SLIM_SLOTS_PER_SUPERFRAME / rt->ratem;
			int exp;

			wbuf[txn.msg->num_bytes++] = sdev->laddr;
			wbuf[txn.msg->num_bytes] = rt->bps >> 2 |
						   (port->ch.aux_fmt << 6);

			/* Data channel segment interval not multiple of 3 */
			exp = seg_interval % 3;
			if (exp)
				wbuf[txn.msg->num_bytes] |= BIT(5);

			txn.msg->num_bytes++;
			wbuf[txn.msg->num_bytes++] = exp << 4 | rt->prot;

			if (rt->prot == SLIM_PROTO_ISO)
				wbuf[txn.msg->num_bytes++] =
						port->ch.prrate |
						SLIM_CHANNEL_CONTENT_FL;
			else
				wbuf[txn.msg->num_bytes++] = port->ch.prrate;

			ret = slim_alloc_txn_tid(ctrl, &txn);
			if (ret) {
				dev_err(&sdev->dev, "Failed to allocate TID\n");
				return -ENXIO;
			}
			wbuf[txn.msg->num_bytes++] = txn.tid;
		}
		wbuf[txn.msg->num_bytes++] = port->ch.id;
	}

	txn.mc = SLIM_USR_MC_DEF_ACT_CHAN;
	txn.rl = txn.msg->num_bytes + 4;
	ret = qcom_slim_ngd_xfer_msg_sync(ctrl, &txn);
	if (ret) {
		slim_free_txn_tid(ctrl, &txn);
		dev_err(&sdev->dev, "TX timed out:MC:0x%x,mt:0x%x", txn.mc,
			txn.mt);
		return ret;
	}

	txn.mc = SLIM_USR_MC_RECONFIG_NOW;
	txn.msg->num_bytes = 2;
	wbuf[1] = sdev->laddr;
	txn.rl = txn.msg->num_bytes + 4;

	ret = slim_alloc_txn_tid(ctrl, &txn);
	if (ret) {
		dev_err(ctrl->dev, "Failed to allocate TID\n");
		return ret;
	}

	wbuf[0] = txn.tid;
	ret = qcom_slim_ngd_xfer_msg_sync(ctrl, &txn);
	if (ret) {
		slim_free_txn_tid(ctrl, &txn);
		dev_err(&sdev->dev, "TX timed out:MC:0x%x,mt:0x%x", txn.mc,
			txn.mt);
	}

	return ret;
}

static int qcom_slim_ngd_get_laddr(struct slim_controller *ctrl,
				   struct slim_eaddr *ea, u8 *laddr)
{
	struct slim_val_inf msg = {0};
	struct slim_msg_txn txn;
	u8 wbuf[10] = {0};
	u8 rbuf[10] = {0};
	int ret;

	txn.mt = SLIM_MSG_MT_DEST_REFERRED_USER;
	txn.dt = SLIM_MSG_DEST_LOGICALADDR;
	txn.la = SLIM_LA_MGR;
	txn.ec = 0;

	txn.mc = SLIM_USR_MC_ADDR_QUERY;
	txn.rl = 11;
	txn.msg = &msg;
	txn.msg->num_bytes = 7;
	txn.msg->wbuf = wbuf;
	txn.msg->rbuf = rbuf;

	ret = slim_alloc_txn_tid(ctrl, &txn);
	if (ret < 0)
		return ret;

	wbuf[0] = (u8)txn.tid;
	memcpy(&wbuf[1], ea, sizeof(*ea));

	ret = qcom_slim_ngd_xfer_msg_sync(ctrl, &txn);
	if (ret) {
		slim_free_txn_tid(ctrl, &txn);
		return ret;
	}

	*laddr = rbuf[6];

	return ret;
}

static int qcom_slim_ngd_exit_dma(struct qcom_slim_ngd_ctrl *ctrl)
{
	if (ctrl->dma_rx_channel) {
		dmaengine_terminate_sync(ctrl->dma_rx_channel);
		dma_release_channel(ctrl->dma_rx_channel);
	}

	if (ctrl->dma_tx_channel) {
		dmaengine_terminate_sync(ctrl->dma_tx_channel);
		dma_release_channel(ctrl->dma_tx_channel);
	}

	ctrl->dma_tx_channel = ctrl->dma_rx_channel = NULL;

	return 0;
}

static void qcom_slim_ngd_setup(struct qcom_slim_ngd_ctrl *ctrl)
{
	u32 cfg = readl_relaxed(ctrl->ngd->base);

	if (ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN)
		qcom_slim_ngd_init_dma(ctrl);

	/* By default enable message queues */
	cfg |=
	NGD_CFG_RX_MSGQ_EN;
	cfg |= NGD_CFG_TX_MSGQ_EN;

	/* Enable NGD if it's not already enabled */
	if (!(cfg & NGD_CFG_ENABLE))
		cfg |= NGD_CFG_ENABLE;

	writel_relaxed(cfg, ctrl->ngd->base);
}

static int qcom_slim_ngd_power_up(struct qcom_slim_ngd_ctrl *ctrl)
{
	enum qcom_slim_ngd_state cur_state = ctrl->state;
	struct qcom_slim_ngd *ngd = ctrl->ngd;
	u32 laddr, rx_msgq;
	int timeout, ret = 0;

	if (ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN) {
		timeout = wait_for_completion_timeout(&ctrl->qmi.qmi_comp, HZ);
		if (!timeout)
			return -EREMOTEIO;
	}

	if (ctrl->state == QCOM_SLIM_NGD_CTRL_ASLEEP ||
	    ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN) {
		ret = qcom_slim_qmi_power_request(ctrl, true);
		if (ret) {
			dev_err(ctrl->dev, "SLIM QMI power request failed:%d\n",
				ret);
			return ret;
		}
	}

	ctrl->ver = readl_relaxed(ctrl->base);
	/* Version info in 16 MSbits */
	ctrl->ver >>= 16;

	laddr = readl_relaxed(ngd->base + NGD_STATUS);
	if (laddr & NGD_LADDR) {
		/*
		 * External MDM restart case where ADSP itself was active framer
		 * For example, modem restarted when playback was active
		 */
		if (cur_state == QCOM_SLIM_NGD_CTRL_AWAKE) {
			dev_info(ctrl->dev, "Subsys restart: ADSP active framer\n");
			return 0;
		}
		return 0;
	}

	writel_relaxed(DEF_NGD_INT_MASK, ngd->base + NGD_INT_EN);
	rx_msgq = readl_relaxed(ngd->base + NGD_RX_MSGQ_CFG);

	writel_relaxed(rx_msgq | SLIM_RX_MSGQ_TIMEOUT_VAL,
		       ngd->base + NGD_RX_MSGQ_CFG);
	qcom_slim_ngd_setup(ctrl);

	timeout = wait_for_completion_timeout(&ctrl->reconf, HZ);
	if (!timeout) {
		dev_err(ctrl->dev, "capability exchange timed-out\n");
		return -ETIMEDOUT;
	}

	return 0;
}
static void qcom_slim_ngd_notify_slaves(struct qcom_slim_ngd_ctrl *ctrl)
{
	struct slim_device *sbdev;
	struct device_node *node;

	for_each_child_of_node(ctrl->ngd->pdev->dev.of_node, node) {
		sbdev = of_slim_get_device(&ctrl->ctrl, node);
		if (!sbdev)
			continue;

		if (slim_get_logical_addr(sbdev))
			dev_err(ctrl->dev, "Failed to get logical address\n");
	}
}

static void qcom_slim_ngd_master_worker(struct work_struct *work)
{
	struct qcom_slim_ngd_ctrl *ctrl;
	struct slim_msg_txn txn;
	struct slim_val_inf msg = {0};
	int retries = 0;
	u8 wbuf[8];
	int ret = 0;

	ctrl = container_of(work, struct qcom_slim_ngd_ctrl, m_work);
	txn.dt = SLIM_MSG_DEST_LOGICALADDR;
	txn.ec = 0;
	txn.mc = SLIM_USR_MC_REPORT_SATELLITE;
	txn.mt = SLIM_MSG_MT_SRC_REFERRED_USER;
	txn.la = SLIM_LA_MGR;
	wbuf[0] = SAT_MAGIC_LSB;
	wbuf[1] = SAT_MAGIC_MSB;
	wbuf[2] = SAT_MSG_VER;
	wbuf[3] = SAT_MSG_PROT;
	txn.msg = &msg;
	txn.msg->wbuf = wbuf;
	txn.msg->num_bytes = 4;
	txn.rl = 8;

	dev_info(ctrl->dev, "SLIM SAT: Rcvd master capability\n");

capability_retry:
	ret = qcom_slim_ngd_xfer_msg(&ctrl->ctrl, &txn);
	if (!ret) {
		if (ctrl->state >= QCOM_SLIM_NGD_CTRL_ASLEEP)
			complete(&ctrl->reconf);
		else
			dev_err(ctrl->dev, "unexpected state:%d\n",
				ctrl->state);

		if (ctrl->state == QCOM_SLIM_NGD_CTRL_DOWN)
			qcom_slim_ngd_notify_slaves(ctrl);

	} else if (ret == -EIO) {
		dev_err(ctrl->dev, "capability message NACKed, retrying\n");
		if (retries < INIT_MX_RETRIES) {
			msleep(DEF_RETRY_MS);
			retries++;
			goto capability_retry;
		}
	} else {
		dev_err(ctrl->dev, "SLIM: capability TX failed:%d\n", ret);
	}
}
static int qcom_slim_ngd_runtime_resume(struct device *dev)
{
	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev);
	int ret = 0;

	if (ctrl->state >= QCOM_SLIM_NGD_CTRL_ASLEEP)
		ret = qcom_slim_ngd_power_up(ctrl);
	if (ret) {
		/* Did SSR cause this power up failure? */
		if (ctrl->state != QCOM_SLIM_NGD_CTRL_DOWN)
			ctrl->state = QCOM_SLIM_NGD_CTRL_ASLEEP;
		else
			dev_err(ctrl->dev, "HW wakeup attempt during SSR\n");
	} else {
		ctrl->state = QCOM_SLIM_NGD_CTRL_AWAKE;
	}

	return 0;
}

static int qcom_slim_ngd_enable(struct qcom_slim_ngd_ctrl *ctrl, bool enable)
{
	if (enable) {
		int ret = qcom_slim_qmi_init(ctrl, false);

		if (ret) {
			dev_err(ctrl->dev, "qmi init fail, ret:%d, state:%d\n",
				ret, ctrl->state);
			return ret;
		}
		/* controller state should be in sync with framework state */
		complete(&ctrl->qmi.qmi_comp);
		if (!pm_runtime_enabled(ctrl->dev) ||
		    !pm_runtime_suspended(ctrl->dev))
			qcom_slim_ngd_runtime_resume(ctrl->dev);
		else
			pm_runtime_resume(ctrl->dev);
		pm_runtime_mark_last_busy(ctrl->dev);
		pm_runtime_put(ctrl->dev);
	} else {
		qcom_slim_qmi_exit(ctrl);
	}

	return 0;
}

static int qcom_slim_ngd_qmi_new_server(struct qmi_handle *hdl,
					struct qmi_service *service)
{
	struct qcom_slim_ngd_qmi *qmi =
		container_of(hdl, struct qcom_slim_ngd_qmi, svc_event_hdl);
	struct qcom_slim_ngd_ctrl *ctrl =
		container_of(qmi, struct qcom_slim_ngd_ctrl, qmi);

	qmi->svc_info.sq_family = AF_QIPCRTR;
	qmi->svc_info.sq_node = service->node;
	qmi->svc_info.sq_port = service->port;

	qcom_slim_ngd_enable(ctrl, true);

	return 0;
}

static void qcom_slim_ngd_qmi_del_server(struct
	qmi_handle *hdl,
					struct qmi_service *service)
{
	struct qcom_slim_ngd_qmi *qmi =
		container_of(hdl, struct qcom_slim_ngd_qmi, svc_event_hdl);

	qmi->svc_info.sq_node = 0;
	qmi->svc_info.sq_port = 0;
}

static struct qmi_ops qcom_slim_ngd_qmi_svc_event_ops = {
	.new_server = qcom_slim_ngd_qmi_new_server,
	.del_server = qcom_slim_ngd_qmi_del_server,
};

static int qcom_slim_ngd_qmi_svc_event_init(struct qcom_slim_ngd_ctrl *ctrl)
{
	struct qcom_slim_ngd_qmi *qmi = &ctrl->qmi;
	int ret;

	ret = qmi_handle_init(&qmi->svc_event_hdl, 0,
			      &qcom_slim_ngd_qmi_svc_event_ops, NULL);
	if (ret < 0) {
		dev_err(ctrl->dev, "qmi_handle_init failed: %d\n", ret);
		return ret;
	}

	ret = qmi_add_lookup(&qmi->svc_event_hdl, SLIMBUS_QMI_SVC_ID,
			     SLIMBUS_QMI_SVC_V1, SLIMBUS_QMI_INS_ID);
	if (ret < 0) {
		dev_err(ctrl->dev, "qmi_add_lookup failed: %d\n", ret);
		qmi_handle_release(&qmi->svc_event_hdl);
	}
	return ret;
}

static void qcom_slim_ngd_qmi_svc_event_deinit(struct qcom_slim_ngd_qmi *qmi)
{
	qmi_handle_release(&qmi->svc_event_hdl);
}

static struct platform_driver qcom_slim_ngd_driver;
#define QCOM_SLIM_NGD_DRV_NAME "qcom,slim-ngd"

static const struct of_device_id qcom_slim_ngd_dt_match[] = {
	{
		.compatible = "qcom,slim-ngd-v1.5.0",
		.data = &ngd_v1_5_offset_info,
	},
	{}
};

MODULE_DEVICE_TABLE(of, qcom_slim_ngd_dt_match);

static int of_qcom_slim_ngd_register(struct device *parent,
				     struct qcom_slim_ngd_ctrl *ctrl)
{
	const struct ngd_reg_offset_data *data;
	struct qcom_slim_ngd *ngd;
	struct device_node *node;
	u32 id;

	data = of_match_node(qcom_slim_ngd_dt_match, parent->of_node)->data;
	for_each_available_child_of_node(parent->of_node, node) {
		if (of_property_read_u32(node, "reg", &id))
			continue;

		ngd = kzalloc(sizeof(*ngd), GFP_KERNEL);
		if (!ngd)
			return -ENOMEM;

		ngd->pdev = platform_device_alloc(QCOM_SLIM_NGD_DRV_NAME, id);
		ngd->id = id;
		ngd->pdev->dev.parent = parent;
		ngd->pdev->driver_override = QCOM_SLIM_NGD_DRV_NAME;
		ngd->pdev->dev.of_node = node;
		ctrl->ngd = ngd;
		platform_set_drvdata(ngd->pdev, ctrl);

		platform_device_add(ngd->pdev);
		ngd->base = ctrl->base + ngd->id * data->offset +
					(ngd->id - 1) * data->size;
		ctrl->ngd = ngd;
		platform_driver_register(&qcom_slim_ngd_driver);

		return 0;
	}

	return -ENODEV;
}

static int qcom_slim_ngd_probe(struct platform_device *pdev)
{
	struct qcom_slim_ngd_ctrl *ctrl = platform_get_drvdata(pdev);
	struct device *dev = &pdev->dev;
	int ret;

	ctrl->ctrl.dev = dev;
	ret = slim_register_controller(&ctrl->ctrl);
	if (ret) {
		dev_err(dev, "error adding slim controller\n");
		return ret;
	}

	pm_runtime_use_autosuspend(dev);
	pm_runtime_set_autosuspend_delay(dev, QCOM_SLIM_NGD_AUTOSUSPEND);
	pm_runtime_set_suspended(dev);
	pm_runtime_enable(dev);
	pm_runtime_get_noresume(dev);
	ret = qcom_slim_ngd_qmi_svc_event_init(ctrl);
	if (ret) {
		dev_err(&pdev->dev, "QMI service registration failed:%d", ret);
		goto err;
	}

	INIT_WORK(&ctrl->m_work, qcom_slim_ngd_master_worker);
	ctrl->mwq = create_singlethread_workqueue("ngd_master");
	if (!ctrl->mwq) {
		dev_err(&pdev->dev, "Failed to start master worker\n");
		ret = -ENOMEM;
		goto wq_err;
	}

	return 0;
err:
	slim_unregister_controller(&ctrl->ctrl);
wq_err:
	qcom_slim_ngd_qmi_svc_event_deinit(&ctrl->qmi);
	if (ctrl->mwq)
		destroy_workqueue(ctrl->mwq);

	return ret;
}

static int qcom_slim_ngd_ctrl_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct qcom_slim_ngd_ctrl *ctrl;
	struct resource *res;
	int ret;

	ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL);
	if (!ctrl)
		return -ENOMEM;

	dev_set_drvdata(dev, ctrl);

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	ctrl->base = devm_ioremap_resource(dev, res);
	if (IS_ERR(ctrl->base))
		return PTR_ERR(ctrl->base);

	res = platform_get_resource(pdev, IORESOURCE_IRQ, 0);
	if (!res) {
		dev_err(&pdev->dev, "no slimbus IRQ resource\n");
		return -ENODEV;
	}

	ret = devm_request_irq(dev, res->start, qcom_slim_ngd_interrupt,
			       IRQF_TRIGGER_HIGH, "slim-ngd", ctrl);
	if (ret) {
		dev_err(&pdev->dev, "request IRQ failed\n");
		return ret;
	}

	ctrl->dev = dev;
	ctrl->framer.rootfreq = SLIM_ROOT_FREQ >> 3;
	ctrl->framer.superfreq =
		ctrl->framer.rootfreq / SLIM_CL_PER_SUPERFRAME_DIV8;

	ctrl->ctrl.a_framer = &ctrl->framer;
	ctrl->ctrl.clkgear = SLIM_MAX_CLK_GEAR;
	ctrl->ctrl.get_laddr = qcom_slim_ngd_get_laddr;
	ctrl->ctrl.enable_stream = qcom_slim_ngd_enable_stream;
	ctrl->ctrl.xfer_msg = qcom_slim_ngd_xfer_msg;
	ctrl->ctrl.wakeup = NULL;
	ctrl->state = QCOM_SLIM_NGD_CTRL_DOWN;

	spin_lock_init(&ctrl->tx_buf_lock);
	init_completion(&ctrl->reconf);
	init_completion(&ctrl->qmi.qmi_comp);

	return of_qcom_slim_ngd_register(dev, ctrl);
}

static int qcom_slim_ngd_ctrl_remove(struct platform_device *pdev)
{
	platform_driver_unregister(&qcom_slim_ngd_driver);

	return 0;
}
static int qcom_slim_ngd_remove(struct platform_device *pdev)
{
	struct qcom_slim_ngd_ctrl *ctrl = platform_get_drvdata(pdev);

	pm_runtime_disable(&pdev->dev);
	slim_unregister_controller(&ctrl->ctrl);
	qcom_slim_ngd_exit_dma(ctrl);
	qcom_slim_ngd_qmi_svc_event_deinit(&ctrl->qmi);
	if (ctrl->mwq)
		destroy_workqueue(ctrl->mwq);

	kfree(ctrl->ngd);
	ctrl->ngd = NULL;
	return 0;
}

static int qcom_slim_ngd_runtime_idle(struct device *dev)
{
	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev);

	if (ctrl->state == QCOM_SLIM_NGD_CTRL_AWAKE)
		ctrl->state = QCOM_SLIM_NGD_CTRL_IDLE;
	pm_request_autosuspend(dev);
	return -EAGAIN;
}

#ifdef CONFIG_PM
static int qcom_slim_ngd_runtime_suspend(struct device *dev)
{
	struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev);
	int ret = 0;

	ret = qcom_slim_qmi_power_request(ctrl, false);
	if (ret && ret != -EBUSY)
		dev_info(ctrl->dev, "slim resource not idle:%d\n", ret);
	if (!ret || ret == -ETIMEDOUT)
		ctrl->state = QCOM_SLIM_NGD_CTRL_ASLEEP;

	return ret;
}
#endif

static const struct dev_pm_ops qcom_slim_ngd_dev_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
				pm_runtime_force_resume)
	SET_RUNTIME_PM_OPS(
		qcom_slim_ngd_runtime_suspend,
		qcom_slim_ngd_runtime_resume,
		qcom_slim_ngd_runtime_idle
	)
};

static struct platform_driver qcom_slim_ngd_ctrl_driver = {
	.probe = qcom_slim_ngd_ctrl_probe,
	.remove = qcom_slim_ngd_ctrl_remove,
	.driver = {
		.name = "qcom,slim-ngd-ctrl",
		.of_match_table = qcom_slim_ngd_dt_match,
	},
};

static struct platform_driver qcom_slim_ngd_driver = {
	.probe = qcom_slim_ngd_probe,
	.remove = qcom_slim_ngd_remove,
	.driver = {
		.name = QCOM_SLIM_NGD_DRV_NAME,
		.pm = &qcom_slim_ngd_dev_pm_ops,
	},
};

module_platform_driver(qcom_slim_ngd_ctrl_driver);
MODULE_LICENSE("GPL v2");
MODULE_DESCRIPTION("Qualcomm SLIMBus NGD controller");
drivers/slimbus/slimbus.h
···

/* SLIMbus message types. Related to interpretation of message code. */
#define SLIM_MSG_MT_CORE			0x0
#define SLIM_MSG_MT_DEST_REFERRED_USER		0x2
#define SLIM_MSG_MT_SRC_REFERRED_USER		0x6

/*
 * SLIM Broadcast header format
···
#define SLIM_MSG_MC_ASSIGN_LOGICAL_ADDRESS	0x2
#define SLIM_MSG_MC_REPORT_ABSENT		0xF

/* Data channel management messages */
#define SLIM_MSG_MC_CONNECT_SOURCE		0x10
#define SLIM_MSG_MC_CONNECT_SINK		0x11
#define SLIM_MSG_MC_DISCONNECT_PORT		0x14
#define SLIM_MSG_MC_CHANGE_CONTENT		0x18

/* Clock pause Reconfiguration messages */
#define SLIM_MSG_MC_BEGIN_RECONFIGURATION	0x40
#define SLIM_MSG_MC_NEXT_PAUSE_CLOCK		0x4A
#define SLIM_MSG_MC_NEXT_DEFINE_CHANNEL		0x50
#define SLIM_MSG_MC_NEXT_DEFINE_CONTENT		0x51
#define SLIM_MSG_MC_NEXT_ACTIVATE_CHANNEL	0x54
#define SLIM_MSG_MC_NEXT_DEACTIVATE_CHANNEL	0x55
#define SLIM_MSG_MC_NEXT_REMOVE_CHANNEL		0x58
#define SLIM_MSG_MC_RECONFIGURE_NOW		0x5F

/*
 * Clock pause flag to indicate that the reconfig message
 * corresponds to clock pause sequence
 */
#define SLIM_MSG_CLK_PAUSE_SEQ_FLG		(1U << 8)

/* Clock pause values per SLIMbus spec */
#define SLIM_CLK_FAST				0
···
/* Standard values per SLIMbus spec needed by controllers and devices */
#define SLIM_MAX_CLK_GEAR		10
#define SLIM_MIN_CLK_GEAR		1
#define SLIM_SLOT_LEN_BITS		4

/* Indicate that the frequency of the flow and the bus frequency are locked */
#define SLIM_CHANNEL_CONTENT_FL		BIT(7)

/* Standard values per SLIMbus spec needed by controllers and devices */
#define SLIM_CL_PER_SUPERFRAME		6144
#define SLIM_SLOTS_PER_SUPERFRAME	(SLIM_CL_PER_SUPERFRAME >> 2)
#define SLIM_SL_PER_SUPERFRAME		(SLIM_CL_PER_SUPERFRAME >> 2)
/* Manager's logical address is set to 0xFF per spec */
#define SLIM_LA_MANAGER			0xFF
···
};

/**
 * enum slim_port_direction: SLIMbus port direction
 *
 * @SLIM_PORT_SINK: SLIMbus port is a sink
 * @SLIM_PORT_SOURCE: SLIMbus port is a source
 */
enum slim_port_direction {
	SLIM_PORT_SINK = 0,
	SLIM_PORT_SOURCE,
};

/**
 * enum slim_port_state: SLIMbus Port/Endpoint state machine
 *	according to SLIMbus Spec 2.0
 * @SLIM_PORT_DISCONNECTED: SLIMbus port is disconnected,
 *	entered from the Unconfigured/Configured state after a
 *	DISCONNECT_PORT or REMOVE_CHANNEL core command
 * @SLIM_PORT_UNCONFIGURED: SLIMbus port is in unconfigured state,
 *	entered from the Disconnected state after a CONNECT_SOURCE/SINK
 *	core command
 * @SLIM_PORT_CONFIGURED: SLIMbus port is in configured state,
 *	entered from the Unconfigured state after the DEFINE_CHANNEL,
 *	DEFINE_CONTENT and ACTIVATE_CHANNEL core commands. Ready for
 *	data transmission.
 */
enum slim_port_state {
	SLIM_PORT_DISCONNECTED = 0,
	SLIM_PORT_UNCONFIGURED,
	SLIM_PORT_CONFIGURED,
};

/**
 * enum slim_channel_state: SLIMbus channel state machine used by core.
 * @SLIM_CH_STATE_DISCONNECTED: SLIMbus channel is disconnected
 * @SLIM_CH_STATE_ALLOCATED: SLIMbus channel is allocated
 * @SLIM_CH_STATE_ASSOCIATED: SLIMbus channel is associated with port
 * @SLIM_CH_STATE_DEFINED: SLIMbus channel parameters are defined
 * @SLIM_CH_STATE_CONTENT_DEFINED: SLIMbus channel content is defined
 * @SLIM_CH_STATE_ACTIVE: SLIMbus channel is active and ready for data
 * @SLIM_CH_STATE_REMOVED: SLIMbus channel is inactive and removed
 */
enum slim_channel_state {
	SLIM_CH_STATE_DISCONNECTED = 0,
	SLIM_CH_STATE_ALLOCATED,
	SLIM_CH_STATE_ASSOCIATED,
	SLIM_CH_STATE_DEFINED,
	SLIM_CH_STATE_CONTENT_DEFINED,
	SLIM_CH_STATE_ACTIVE,
	SLIM_CH_STATE_REMOVED,
};

/**
 * enum slim_ch_data_fmt: SLIMbus channel data Type identifiers according to
 *	Table 60 of SLIMbus Spec 1.01.01
 * @SLIM_CH_DATA_FMT_NOT_DEFINED: Undefined
 * @SLIM_CH_DATA_FMT_LPCM_AUDIO: LPCM audio
 * @SLIM_CH_DATA_FMT_IEC61937_COMP_AUDIO: IEC61937 Compressed audio
 * @SLIM_CH_DATA_FMT_PACKED_PDM_AUDIO: Packed PDM audio
 */
enum slim_ch_data_fmt {
	SLIM_CH_DATA_FMT_NOT_DEFINED = 0,
	SLIM_CH_DATA_FMT_LPCM_AUDIO = 1,
	SLIM_CH_DATA_FMT_IEC61937_COMP_AUDIO = 2,
	SLIM_CH_DATA_FMT_PACKED_PDM_AUDIO = 3,
};

/**
 * enum slim_ch_aux_bit_fmt: SLIMbus channel Aux Field format IDs according to
 *	Table 63 of SLIMbus Spec 2.0
 * @SLIM_CH_AUX_FMT_NOT_APPLICABLE: Undefined
 * @SLIM_CH_AUX_FMT_ZCUV_TUNNEL_IEC60958: ZCUV for tunneling IEC60958
 * @SLIM_CH_AUX_FMT_USER_DEFINED: User defined
 */
enum slim_ch_aux_bit_fmt {
	SLIM_CH_AUX_FMT_NOT_APPLICABLE = 0,
	SLIM_CH_AUX_FMT_ZCUV_TUNNEL_IEC60958 = 1,
	SLIM_CH_AUX_FMT_USER_DEFINED = 0xF,
};

/**
 * struct slim_channel - SLIMbus channel, used for state machine
 *
 * @id: ID of channel
 * @prrate:
 *	Presence rate of channel from Table 66 of SLIMbus 2.0 Specs
 * @seg_dist: segment distribution code from Table 20 of SLIMbus 2.0 Specs
 * @data_fmt: Data format of channel.
 * @aux_fmt: Aux format for this channel.
 * @state: channel state machine
 */
struct slim_channel {
	int id;
	int prrate;
	int seg_dist;
	enum slim_ch_data_fmt data_fmt;
	enum slim_ch_aux_bit_fmt aux_fmt;
	enum slim_channel_state state;
};

/**
 * struct slim_port - SLIMbus port
 *
 * @id: Port id
 * @direction: Port direction, Source or Sink.
 * @state: state machine of port.
 * @ch: channel associated with this port.
 */
struct slim_port {
	int id;
	enum slim_port_direction direction;
	enum slim_port_state state;
	struct slim_channel ch;
};

/**
 * enum slim_transport_protocol: SLIMbus Transport protocol list from
 *	Table 47 of SLIMbus 2.0 specs.
 * @SLIM_PROTO_ISO: Isochronous Protocol, no flow control as data rate matches
 *	the channel rate; flow control is embedded in the data.
 * @SLIM_PROTO_PUSH: Pushed Protocol, includes flow control. Used to carry
 *	data whose rate is equal to, or lower than the channel rate.
 * @SLIM_PROTO_PULL: Pulled Protocol, similar usage as pushed protocol
 *	but pull is a unicast.
 * @SLIM_PROTO_LOCKED: Locked Protocol
 * @SLIM_PROTO_ASYNC_SMPLX: Asynchronous Protocol-Simplex
 * @SLIM_PROTO_ASYNC_HALF_DUP: Asynchronous Protocol-Half-duplex
 * @SLIM_PROTO_EXT_SMPLX: Extended Asynchronous Protocol-Simplex
 * @SLIM_PROTO_EXT_HALF_DUP: Extended Asynchronous Protocol-Half-duplex
 */
enum slim_transport_protocol {
	SLIM_PROTO_ISO = 0,
	SLIM_PROTO_PUSH,
	SLIM_PROTO_PULL,
	SLIM_PROTO_LOCKED,
	SLIM_PROTO_ASYNC_SMPLX,
	SLIM_PROTO_ASYNC_HALF_DUP,
	SLIM_PROTO_EXT_SMPLX,
	SLIM_PROTO_EXT_HALF_DUP,
};

/**
 * struct slim_stream_runtime - SLIMbus stream runtime instance
 *
 * @name: Name of the stream
 * @dev: SLIM Device instance associated with this stream
 * @direction: direction of stream
 * @prot: Transport protocol used in this stream
 * @rate: Data rate of samples
 * @bps: bits per sample
 * @ratem: rate multiplier, which is the superframe rate divided by data rate
 * @num_ports: number of ports
 * @ports: pointer to instance of ports
 * @node: list head for stream associated with slim device.
 */
struct slim_stream_runtime {
	const char *name;
	struct slim_device *dev;
	int direction;
	enum slim_transport_protocol prot;
	unsigned int rate;
	unsigned int bps;
	unsigned int ratem;
	int num_ports;
	struct slim_port *ports;
	struct list_head node;
};

/**
 * struct slim_controller - Controls every instance of SLIMbus
 * (similar to 'master' on SPI)
 * @dev: Device interface to this driver
···
 * @wakeup: This function pointer implements controller-specific procedure
 *	to wake it up from clock-pause. Framework will call this to bring
 *	the controller out of clock pause.
 * @enable_stream: This function pointer implements controller-specific procedure
 *	to enable a stream.
193 + * @disable_stream: This function pointer implements controller-specific procedure 194 + * to disable stream. 381 195 * 382 196 * 'Manager device' is responsible for device management, bandwidth 383 197 * allocation, channel setup, and port associations per channel. ··· 423 229 struct slim_eaddr *ea, u8 laddr); 424 230 int (*get_laddr)(struct slim_controller *ctrl, 425 231 struct slim_eaddr *ea, u8 *laddr); 232 + int (*enable_stream)(struct slim_stream_runtime *rt); 233 + int (*disable_stream)(struct slim_stream_runtime *rt); 426 234 int (*wakeup)(struct slim_controller *ctrl); 427 235 }; 428 236 ··· 436 240 void slim_msg_response(struct slim_controller *ctrl, u8 *reply, u8 tid, u8 l); 437 241 int slim_do_transfer(struct slim_controller *ctrl, struct slim_msg_txn *txn); 438 242 int slim_ctrl_clk_pause(struct slim_controller *ctrl, bool wakeup, u8 restart); 243 + int slim_alloc_txn_tid(struct slim_controller *ctrl, struct slim_msg_txn *txn); 244 + void slim_free_txn_tid(struct slim_controller *ctrl, struct slim_msg_txn *txn); 439 245 440 246 static inline bool slim_tid_txn(u8 mt, u8 mc) 441 247 {
+477
drivers/slimbus/stream.c
···
+// SPDX-License-Identifier: GPL-2.0
+// Copyright (c) 2018, Linaro Limited
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/slab.h>
+#include <linux/list.h>
+#include <linux/slimbus.h>
+#include <uapi/sound/asound.h>
+#include "slimbus.h"
+
+/**
+ * struct segdist_code - Segment Distribution code from
+ *	Table 20 of SLIMbus Specs Version 2.0
+ *
+ * @ratem: Channel Rate Multiplier (Segments per Superframe)
+ * @seg_interval: Number of slots between the first Slot of Segment
+ *	and the first slot of the next consecutive Segment.
+ * @segdist_code: Segment Distribution Code SD[11:0]
+ * @seg_offset_mask: Segment offset mask in SD[11:0]
+ * @segdist_codes: List of all possible Segment Distribution codes.
+ */
+static const struct segdist_code {
+	int ratem;
+	int seg_interval;
+	int segdist_code;
+	u32 seg_offset_mask;
+
+} segdist_codes[] = {
+	{1,	1536,	0x200,	0xdff},
+	{2,	768,	0x100,	0xcff},
+	{4,	384,	0x080,	0xc7f},
+	{8,	192,	0x040,	0xc3f},
+	{16,	96,	0x020,	0xc1f},
+	{32,	48,	0x010,	0xc0f},
+	{64,	24,	0x008,	0xc07},
+	{128,	12,	0x004,	0xc03},
+	{256,	6,	0x002,	0xc01},
+	{512,	3,	0x001,	0xc00},
+	{3,	512,	0xe00,	0x1ff},
+	{6,	256,	0xd00,	0x0ff},
+	{12,	128,	0xc80,	0x07f},
+	{24,	64,	0xc40,	0x03f},
+	{48,	32,	0xc20,	0x01f},
+	{96,	16,	0xc10,	0x00f},
+	{192,	8,	0xc08,	0x007},
+	{364,	4,	0xc04,	0x003},
+	{768,	2,	0xc02,	0x001},
+};
+
+/*
+ * Presence Rate table for all Natural Frequencies
+ * The Presence rate of a constant bitrate stream is the mean flow rate of
+ * the stream expressed in occupied Segments of that Data Channel per second.
+ * Table 66 from SLIMbus 2.0 Specs
+ *
+ * Index of the table corresponds to the Presence rate code for the
+ * respective rate in the table.
+ */
+static const int slim_presence_rate_table[] = {
+	0, /* Not Indicated */
+	12000,
+	24000,
+	48000,
+	96000,
+	192000,
+	384000,
+	768000,
+	0, /* Reserved */
+	110250,
+	220500,
+	441000,
+	882000,
+	176400,
+	352800,
+	705600,
+	4000,
+	8000,
+	16000,
+	32000,
+	64000,
+	128000,
+	256000,
+	512000,
+};
+
+/*
+ * slim_stream_allocate() - Allocate a new SLIMbus Stream
+ * @dev: Slim device to be associated with
+ * @name: name of the stream
+ *
+ * This is the very first call for SLIMbus streaming; this API will allocate
+ * a new SLIMbus stream and return a valid stream runtime pointer for the
+ * client to use in subsequent stream APIs. State of the stream is set to
+ * ALLOCATED.
+ *
+ * Return: valid pointer on success and error code on failure.
+ * From the ASoC DPCM framework, this state is linked to the startup()
+ * operation.
+ */
+struct slim_stream_runtime *slim_stream_allocate(struct slim_device *dev,
+						 const char *name)
+{
+	struct slim_stream_runtime *rt;
+
+	rt = kzalloc(sizeof(*rt), GFP_KERNEL);
+	if (!rt)
+		return ERR_PTR(-ENOMEM);
+
+	rt->name = kasprintf(GFP_KERNEL, "slim-%s", name);
+	if (!rt->name) {
+		kfree(rt);
+		return ERR_PTR(-ENOMEM);
+	}
+
+	rt->dev = dev;
+	spin_lock(&dev->stream_list_lock);
+	list_add_tail(&rt->node, &dev->stream_list);
+	spin_unlock(&dev->stream_list_lock);
+
+	return rt;
+}
+EXPORT_SYMBOL_GPL(slim_stream_allocate);
+
+static int slim_connect_port_channel(struct slim_stream_runtime *stream,
+				     struct slim_port *port)
+{
+	struct slim_device *sdev = stream->dev;
+	u8 wbuf[2];
+	struct slim_val_inf msg = {0, 2, NULL, wbuf, NULL};
+	u8 mc = SLIM_MSG_MC_CONNECT_SOURCE;
+	DEFINE_SLIM_LDEST_TXN(txn, mc, 6, stream->dev->laddr, &msg);
+
+	if (port->direction == SLIM_PORT_SINK)
+		txn.mc = SLIM_MSG_MC_CONNECT_SINK;
+
+	wbuf[0] = port->id;
+	wbuf[1] = port->ch.id;
+	port->ch.state = SLIM_CH_STATE_ASSOCIATED;
+	port->state = SLIM_PORT_UNCONFIGURED;
+
+	return slim_do_transfer(sdev->ctrl, &txn);
+}
+
+static int slim_disconnect_port(struct slim_stream_runtime *stream,
+				struct slim_port *port)
+{
+	struct slim_device *sdev = stream->dev;
+	u8 wbuf[1];
+	struct slim_val_inf msg = {0, 1, NULL, wbuf, NULL};
+	u8 mc = SLIM_MSG_MC_DISCONNECT_PORT;
+	DEFINE_SLIM_LDEST_TXN(txn, mc, 5, stream->dev->laddr, &msg);
+
+	wbuf[0] = port->id;
+	port->ch.state = SLIM_CH_STATE_DISCONNECTED;
+	port->state = SLIM_PORT_DISCONNECTED;
+
+	return slim_do_transfer(sdev->ctrl, &txn);
+}
+
+static int slim_deactivate_remove_channel(struct slim_stream_runtime *stream,
+					  struct slim_port *port)
+{
+	struct slim_device *sdev = stream->dev;
+	u8 wbuf[1];
+	struct slim_val_inf msg = {0, 1, NULL, wbuf, NULL};
+	u8 mc = SLIM_MSG_MC_NEXT_DEACTIVATE_CHANNEL;
+	DEFINE_SLIM_LDEST_TXN(txn, mc, 5, stream->dev->laddr, &msg);
+	int ret;
+
+	wbuf[0] = port->ch.id;
+	ret = slim_do_transfer(sdev->ctrl, &txn);
+	if (ret)
+		return ret;
+
+	txn.mc = SLIM_MSG_MC_NEXT_REMOVE_CHANNEL;
+	port->ch.state = SLIM_CH_STATE_REMOVED;
+
+	return slim_do_transfer(sdev->ctrl, &txn);
+}
+
+static int slim_get_prate_code(int rate)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(slim_presence_rate_table); i++) {
+		if (rate == slim_presence_rate_table[i])
+			return i;
+	}
+
+	return -EINVAL;
+}
+
+/*
+ * slim_stream_prepare() - Prepare a SLIMbus Stream
+ *
+ * @rt: instance of slim stream runtime to configure
+ * @cfg: new configuration for the stream
+ *
+ * This API will configure the SLIMbus stream with config parameters from cfg.
+ * Return: zero on success and error code on failure. From the ASoC DPCM
+ * framework, this state is linked to the hw_params() operation.
+ */
+int slim_stream_prepare(struct slim_stream_runtime *rt,
+			struct slim_stream_config *cfg)
+{
+	struct slim_controller *ctrl = rt->dev->ctrl;
+	struct slim_port *port;
+	int num_ports, i, port_id;
+
+	if (rt->ports) {
+		dev_err(&rt->dev->dev, "Stream already Prepared\n");
+		return -EINVAL;
+	}
+
+	num_ports = hweight32(cfg->port_mask);
+	rt->ports = kcalloc(num_ports, sizeof(*port), GFP_KERNEL);
+	if (!rt->ports)
+		return -ENOMEM;
+
+	rt->num_ports = num_ports;
+	rt->rate = cfg->rate;
+	rt->bps = cfg->bps;
+	rt->direction = cfg->direction;
+
+	if (cfg->rate % ctrl->a_framer->superfreq) {
+		/*
+		 * data rate not exactly multiple of super frame,
+		 * use PUSH/PULL protocol
+		 */
+		if (cfg->direction == SNDRV_PCM_STREAM_PLAYBACK)
+			rt->prot = SLIM_PROTO_PUSH;
+		else
+			rt->prot = SLIM_PROTO_PULL;
+	} else {
+		rt->prot = SLIM_PROTO_ISO;
+	}
+
+	rt->ratem = cfg->rate/ctrl->a_framer->superfreq;
+
+	i = 0;
+	for_each_set_bit(port_id, &cfg->port_mask, SLIM_DEVICE_MAX_PORTS) {
+		port = &rt->ports[i];
+		port->state = SLIM_PORT_DISCONNECTED;
+		port->id = port_id;
+		port->ch.prrate = slim_get_prate_code(cfg->rate);
+		port->ch.id = cfg->chs[i];
+		port->ch.data_fmt = SLIM_CH_DATA_FMT_NOT_DEFINED;
+		port->ch.aux_fmt = SLIM_CH_AUX_FMT_NOT_APPLICABLE;
+		port->ch.state = SLIM_CH_STATE_ALLOCATED;
+
+		if (cfg->direction == SNDRV_PCM_STREAM_PLAYBACK)
+			port->direction = SLIM_PORT_SINK;
+		else
+			port->direction = SLIM_PORT_SOURCE;
+
+		slim_connect_port_channel(rt, port);
+		i++;
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(slim_stream_prepare);
+
+static int slim_define_channel_content(struct slim_stream_runtime *stream,
+				       struct slim_port *port)
+{
+	struct slim_device *sdev = stream->dev;
+	u8 wbuf[4];
+	struct slim_val_inf msg = {0, 4, NULL, wbuf, NULL};
+	u8 mc = SLIM_MSG_MC_NEXT_DEFINE_CONTENT;
+	DEFINE_SLIM_LDEST_TXN(txn, mc, 8, stream->dev->laddr, &msg);
+
+	wbuf[0] = port->ch.id;
+	wbuf[1] = port->ch.prrate;
+
+	/* Frequency Locked for ISO Protocol */
+	if (stream->prot != SLIM_PROTO_ISO)
+		wbuf[1] |= SLIM_CHANNEL_CONTENT_FL;
+
+	wbuf[2] = port->ch.data_fmt | (port->ch.aux_fmt << 4);
+	wbuf[3] = stream->bps/SLIM_SLOT_LEN_BITS;
+	port->ch.state = SLIM_CH_STATE_CONTENT_DEFINED;
+
+	return slim_do_transfer(sdev->ctrl, &txn);
+}
+
+static int slim_get_segdist_code(int ratem)
+{
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(segdist_codes); i++) {
+		if (segdist_codes[i].ratem == ratem)
+			return segdist_codes[i].segdist_code;
+	}
+
+	return -EINVAL;
+}
+
+static int slim_define_channel(struct slim_stream_runtime *stream,
+			       struct slim_port *port)
+{
+	struct slim_device *sdev = stream->dev;
+	u8 wbuf[4];
+	struct slim_val_inf msg = {0, 4, NULL, wbuf, NULL};
+	u8 mc = SLIM_MSG_MC_NEXT_DEFINE_CHANNEL;
+	DEFINE_SLIM_LDEST_TXN(txn, mc, 8, stream->dev->laddr, &msg);
+
+	port->ch.seg_dist = slim_get_segdist_code(stream->ratem);
+
+	wbuf[0] = port->ch.id;
+	wbuf[1] = port->ch.seg_dist & 0xFF;
+	wbuf[2] = (stream->prot << 4) | ((port->ch.seg_dist & 0xF00) >> 8);
+	if (stream->prot == SLIM_PROTO_ISO)
+		wbuf[3] = stream->bps/SLIM_SLOT_LEN_BITS;
+	else
+		wbuf[3] = stream->bps/SLIM_SLOT_LEN_BITS + 1;
+
+	port->ch.state = SLIM_CH_STATE_DEFINED;
+
+	return slim_do_transfer(sdev->ctrl, &txn);
+}
+
+static int slim_activate_channel(struct slim_stream_runtime *stream,
+				 struct slim_port *port)
+{
+	struct slim_device *sdev = stream->dev;
+	u8 wbuf[1];
+	struct slim_val_inf msg = {0, 1, NULL, wbuf, NULL};
+	u8 mc = SLIM_MSG_MC_NEXT_ACTIVATE_CHANNEL;
+	DEFINE_SLIM_LDEST_TXN(txn, mc, 5, stream->dev->laddr, &msg);
+
+	txn.msg->num_bytes = 1;
+	txn.msg->wbuf = wbuf;
+	wbuf[0] = port->ch.id;
+	port->ch.state = SLIM_CH_STATE_ACTIVE;
+
+	return slim_do_transfer(sdev->ctrl, &txn);
+}
+
+/*
+ * slim_stream_enable() - Enable a prepared SLIMbus Stream
+ *
+ * @stream: instance of slim stream runtime to enable
+ *
+ * This API will enable all the ports and channels associated with the
+ * SLIMbus stream.
+ *
+ * Return: zero on success and error code on failure. From the ASoC DPCM
+ * framework, this state is linked to the trigger() start operation.
+ */
+int slim_stream_enable(struct slim_stream_runtime *stream)
+{
+	DEFINE_SLIM_BCAST_TXN(txn, SLIM_MSG_MC_BEGIN_RECONFIGURATION,
+				3, SLIM_LA_MANAGER, NULL);
+	struct slim_controller *ctrl = stream->dev->ctrl;
+	int ret, i;
+
+	if (ctrl->enable_stream) {
+		ret = ctrl->enable_stream(stream);
+		if (ret)
+			return ret;
+
+		for (i = 0; i < stream->num_ports; i++)
+			stream->ports[i].ch.state = SLIM_CH_STATE_ACTIVE;
+
+		return ret;
+	}
+
+	ret = slim_do_transfer(ctrl, &txn);
+	if (ret)
+		return ret;
+
+	/* define channels first before activating them */
+	for (i = 0; i < stream->num_ports; i++) {
+		struct slim_port *port = &stream->ports[i];
+
+		slim_define_channel(stream, port);
+		slim_define_channel_content(stream, port);
+	}
+
+	for (i = 0; i < stream->num_ports; i++) {
+		struct slim_port *port = &stream->ports[i];
+
+		slim_activate_channel(stream, port);
+		port->state = SLIM_PORT_CONFIGURED;
+	}
+	txn.mc = SLIM_MSG_MC_RECONFIGURE_NOW;
+
+	return slim_do_transfer(ctrl, &txn);
+}
+EXPORT_SYMBOL_GPL(slim_stream_enable);
+
+/*
+ * slim_stream_disable() - Disable a SLIMbus Stream
+ *
+ * @stream: instance of slim stream runtime to disable
+ *
+ * This API will disable all the ports and channels associated with the
+ * SLIMbus stream.
+ *
+ * Return: zero on success and error code on failure. From the ASoC DPCM
+ * framework, this state is linked to the trigger() pause operation.
+ */
+int slim_stream_disable(struct slim_stream_runtime *stream)
+{
+	DEFINE_SLIM_BCAST_TXN(txn, SLIM_MSG_MC_BEGIN_RECONFIGURATION,
+				3, SLIM_LA_MANAGER, NULL);
+	struct slim_controller *ctrl = stream->dev->ctrl;
+	int ret, i;
+
+	if (ctrl->disable_stream)
+		ctrl->disable_stream(stream);
+
+	ret = slim_do_transfer(ctrl, &txn);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < stream->num_ports; i++)
+		slim_deactivate_remove_channel(stream, &stream->ports[i]);
+
+	txn.mc = SLIM_MSG_MC_RECONFIGURE_NOW;
+
+	return slim_do_transfer(ctrl, &txn);
+}
+EXPORT_SYMBOL_GPL(slim_stream_disable);
+
+/*
+ * slim_stream_unprepare() - Un-prepare a SLIMbus Stream
+ *
+ * @stream: instance of slim stream runtime to unprepare
+ *
+ * This API will release all the ports and channels associated with the
+ * SLIMbus stream.
+ *
+ * Return: zero on success and error code on failure. From the ASoC DPCM
+ * framework, this state is linked to the trigger() stop operation.
+ */
+int slim_stream_unprepare(struct slim_stream_runtime *stream)
+{
+	int i;
+
+	for (i = 0; i < stream->num_ports; i++)
+		slim_disconnect_port(stream, &stream->ports[i]);
+
+	kfree(stream->ports);
+	stream->ports = NULL;
+	stream->num_ports = 0;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(slim_stream_unprepare);
+
+/*
+ * slim_stream_free() - Free a SLIMbus Stream
+ *
+ * @stream: instance of slim stream runtime to free
+ *
+ * This API will free all the memory associated with the slim stream
+ * runtime; the caller must not dereference the stream after this call.
+ *
+ * Return: zero on success and error code on failure. From the ASoC DPCM
+ * framework, this state is linked to the shutdown() operation.
+ */
+int slim_stream_free(struct slim_stream_runtime *stream)
+{
+	struct slim_device *sdev = stream->dev;
+
+	spin_lock(&sdev->stream_list_lock);
+	list_del(&stream->node);
+	spin_unlock(&sdev->stream_list_lock);
+
+	kfree(stream->name);
+	kfree(stream);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(slim_stream_free);
+42 -15
drivers/thunderbolt/domain.c
···
 #include <linux/device.h>
 #include <linux/idr.h>
 #include <linux/module.h>
+#include <linux/pm_runtime.h>
 #include <linux/slab.h>
 #include <linux/random.h>
 #include <crypto/hash.h>
···
 	if (!uuids)
 		return -ENOMEM;
 
+	pm_runtime_get_sync(&tb->dev);
+
 	if (mutex_lock_interruptible(&tb->lock)) {
 		ret = -ERESTARTSYS;
 		goto out;
···
 	}
 
 out:
+	pm_runtime_mark_last_busy(&tb->dev);
+	pm_runtime_put_autosuspend(&tb->dev);
 	kfree(uuids);
+
 	return ret;
 }
 
···
 		goto err_free_acl;
 	}
 
+	pm_runtime_get_sync(&tb->dev);
+
 	if (mutex_lock_interruptible(&tb->lock)) {
 		ret = -ERESTARTSYS;
-		goto err_free_acl;
+		goto err_rpm_put;
 	}
 	ret = tb->cm_ops->set_boot_acl(tb, acl, tb->nboot_acl);
 	if (!ret) {
···
 	}
 	mutex_unlock(&tb->lock);
 
+err_rpm_put:
+	pm_runtime_mark_last_busy(&tb->dev);
+	pm_runtime_put_autosuspend(&tb->dev);
 err_free_acl:
 	kfree(acl);
 err_free_str:
···
 	/* This starts event processing */
 	mutex_unlock(&tb->lock);
 
+	pm_runtime_no_callbacks(&tb->dev);
+	pm_runtime_set_active(&tb->dev);
+	pm_runtime_enable(&tb->dev);
+	pm_runtime_set_autosuspend_delay(&tb->dev, TB_AUTOSUSPEND_DELAY);
+	pm_runtime_mark_last_busy(&tb->dev);
+	pm_runtime_use_autosuspend(&tb->dev);
+
 	return 0;
 
 err_domain_del:
···
 
 int tb_domain_suspend(struct tb *tb)
 {
-	int ret;
-
-	mutex_lock(&tb->lock);
-	if (tb->cm_ops->suspend) {
-		ret = tb->cm_ops->suspend(tb);
-		if (ret) {
-			mutex_unlock(&tb->lock);
-			return ret;
-		}
-	}
-	mutex_unlock(&tb->lock);
-	return 0;
+	return tb->cm_ops->suspend ? tb->cm_ops->suspend(tb) : 0;
 }
 
 void tb_domain_complete(struct tb *tb)
 {
-	mutex_lock(&tb->lock);
 	if (tb->cm_ops->complete)
 		tb->cm_ops->complete(tb);
-	mutex_unlock(&tb->lock);
+}
+
+int tb_domain_runtime_suspend(struct tb *tb)
+{
+	if (tb->cm_ops->runtime_suspend) {
+		int ret = tb->cm_ops->runtime_suspend(tb);
+		if (ret)
+			return ret;
+	}
+	tb_ctl_stop(tb->ctl);
+	return 0;
+}
+
+int tb_domain_runtime_resume(struct tb *tb)
+{
+	tb_ctl_start(tb->ctl);
+	if (tb->cm_ops->runtime_resume) {
+		int ret = tb->cm_ops->runtime_resume(tb);
+		if (ret)
+			return ret;
+	}
+	return 0;
 }
 
 /**
+144 -30
drivers/thunderbolt/icm.c
···
 #include <linux/delay.h>
 #include <linux/mutex.h>
 #include <linux/pci.h>
+#include <linux/pm_runtime.h>
 #include <linux/platform_data/x86/apple.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
···
  *	(only set when @upstream_port is not %NULL)
  * @safe_mode: ICM is in safe mode
  * @max_boot_acl: Maximum number of preboot ACL entries (%0 if not supported)
+ * @rpm: Does the controller support runtime PM (RTD3)
  * @is_supported: Checks if we can support ICM on this controller
  * @get_mode: Read and return the ICM firmware mode (optional)
  * @get_route: Find a route string for given switch
+ * @save_devices: Ask ICM to save devices to ACL when suspending (optional)
  * @driver_ready: Send driver ready message to ICM
  * @device_connected: Handle device connected ICM message
  * @device_disconnected: Handle device disconnected ICM message
···
 	size_t max_boot_acl;
 	int vnd_cap;
 	bool safe_mode;
+	bool rpm;
 	bool (*is_supported)(struct tb *tb);
 	int (*get_mode)(struct tb *tb);
 	int (*get_route)(struct tb *tb, u8 link, u8 depth, u64 *route);
+	void (*save_devices)(struct tb *tb);
 	int (*driver_ready)(struct tb *tb,
 			    enum tb_security_level *security_level,
-			    size_t *nboot_acl);
+			    size_t *nboot_acl, bool *rpm);
 	void (*device_connected)(struct tb *tb,
 				 const struct icm_pkg_header *hdr);
 	void (*device_disconnected)(struct tb *tb,
···
 	struct icm_pkg_header *pkg;
 	struct tb *tb;
 };
+
+struct ep_name_entry {
+	u8 len;
+	u8 type;
+	u8 data[0];
+};
+
+#define EP_NAME_INTEL_VSS	0x10
+
+/* Intel Vendor specific structure */
+struct intel_vss {
+	u16 vendor;
+	u16 model;
+	u8 mc;
+	u8 flags;
+	u16 pci_devid;
+	u32 nvm_version;
+};
+
+#define INTEL_VSS_FLAGS_RTD3	BIT(0)
+
+static const struct intel_vss *parse_intel_vss(const void *ep_name, size_t size)
+{
+	const void *end = ep_name + size;
+
+	while (ep_name < end) {
+		const struct ep_name_entry *ep = ep_name;
+
+		if (!ep->len)
+			break;
+		if (ep_name + ep->len > end)
+			break;
+
+		if (ep->type == EP_NAME_INTEL_VSS)
+			return (const struct intel_vss *)ep->data;
+
+		ep_name += ep->len;
+	}
+
+	return NULL;
+}
 
 static inline struct tb *icm_to_tb(struct icm *icm)
 {
···
 	return ret;
 }
 
+static void icm_fr_save_devices(struct tb *tb)
+{
+	nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_SAVE_DEVS, 0);
+}
+
 static int
 icm_fr_driver_ready(struct tb *tb, enum tb_security_level *security_level,
-		    size_t *nboot_acl)
+		    size_t *nboot_acl, bool *rpm)
 {
 	struct icm_fr_pkg_driver_ready_response reply;
 	struct icm_pkg_driver_ready request = {
···
 }
 
 static void add_switch(struct tb_switch *parent_sw, u64 route,
-		       const uuid_t *uuid, u8 connection_id, u8 connection_key,
+		       const uuid_t *uuid, const u8 *ep_name,
+		       size_t ep_name_size, u8 connection_id, u8 connection_key,
 		       u8 link, u8 depth, enum tb_security_level security_level,
 		       bool authorized, bool boot)
 {
+	const struct intel_vss *vss;
 	struct tb_switch *sw;
+
+	pm_runtime_get_sync(&parent_sw->dev);
 
 	sw = tb_switch_alloc(parent_sw->tb, &parent_sw->dev, route);
 	if (!sw)
-		return;
+		goto out;
 
 	sw->uuid = kmemdup(uuid, sizeof(*uuid), GFP_KERNEL);
 	sw->connection_id = connection_id;
···
 	sw->security_level = security_level;
 	sw->boot = boot;
 
+	vss = parse_intel_vss(ep_name, ep_name_size);
+	if (vss)
+		sw->rpm = !!(vss->flags & INTEL_VSS_FLAGS_RTD3);
+
 	/* Link the two switches now */
 	tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw);
 	tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw);
···
 	if (tb_switch_add(sw)) {
 		tb_port_at(tb_route(sw), parent_sw)->remote = NULL;
 		tb_switch_put(sw);
 	}
+
+out:
+	pm_runtime_mark_last_busy(&parent_sw->dev);
+	pm_runtime_put_autosuspend(&parent_sw->dev);
 }
 
 static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw,
···
 {
 	struct tb_xdomain *xd;
 
+	pm_runtime_get_sync(&sw->dev);
+
 	xd = tb_xdomain_alloc(sw->tb, &sw->dev, route, local_uuid, remote_uuid);
 	if (!xd)
-		return;
+		goto out;
 
 	xd->link = link;
 	xd->depth = depth;
···
 	tb_port_at(route, sw)->xdomain = xd;
 
 	tb_xdomain_add(xd);
+
+out:
+	pm_runtime_mark_last_busy(&sw->dev);
+	pm_runtime_put_autosuspend(&sw->dev);
 }
 
 static void update_xdomain(struct tb_xdomain *xd, u64 route, u8 link)
···
 		return;
 	}
 
-	ret = icm->get_route(tb, link, depth, &route);
-	if (ret) {
-		tb_err(tb, "failed to find route string for switch at %u.%u\n",
-		       link, depth);
-		return;
-	}
-
 	sw = tb_switch_find_by_uuid(tb, &pkg->ep_uuid);
 	if (sw) {
 		u8 phy_port, sw_phy_port;
 
 		parent_sw = tb_to_switch(sw->dev.parent);
-		sw_phy_port = phy_port_from_route(tb_route(sw), sw->depth);
-		phy_port = phy_port_from_route(route, depth);
+		sw_phy_port = tb_phy_port_from_link(sw->link);
+		phy_port = tb_phy_port_from_link(link);
 
 		/*
 		 * On resume ICM will send us connected events for the
···
 		 */
 		if (sw->depth == depth && sw_phy_port == phy_port &&
 		    !!sw->authorized == authorized) {
+			/*
+			 * It was enumerated through another link so update
+			 * route string accordingly.
+			 */
+			if (sw->link != link) {
+				ret = icm->get_route(tb, link, depth, &route);
+				if (ret) {
+					tb_err(tb, "failed to update route string for switch at %u.%u\n",
+					       link, depth);
+					tb_switch_put(sw);
+					return;
+				}
+			} else {
+				route = tb_route(sw);
+			}
+
 			update_switch(parent_sw, sw, route, pkg->connection_id,
 				      pkg->connection_key, link, depth, boot);
 			tb_switch_put(sw);
···
 		return;
 	}
 
-	add_switch(parent_sw, route, &pkg->ep_uuid, pkg->connection_id,
+	ret = icm->get_route(tb, link, depth, &route);
+	if (ret) {
+		tb_err(tb, "failed to find route string for switch at %u.%u\n",
+		       link, depth);
+		tb_switch_put(parent_sw);
+		return;
+	}
+
+	add_switch(parent_sw, route, &pkg->ep_uuid, (const u8 *)pkg->ep_name,
+		   sizeof(pkg->ep_name), pkg->connection_id,
 		   pkg->connection_key, link, depth, security_level,
 		   authorized, boot);
 
···
 	struct tb_xdomain *xd;
 	struct tb_switch *sw;
 	u8 link, depth;
-	bool approved;
 	u64 route;
 
 	/*
···
 	link = pkg->link_info & ICM_LINK_INFO_LINK_MASK;
 	depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >>
 		ICM_LINK_INFO_DEPTH_SHIFT;
-	approved = pkg->link_info & ICM_LINK_INFO_APPROVED;
 
 	if (link > ICM_MAX_LINK || depth > ICM_MAX_DEPTH) {
 		tb_warn(tb, "invalid topology %u.%u, ignoring\n", link, depth);
···
 
 static int
 icm_tr_driver_ready(struct tb *tb, enum tb_security_level *security_level,
-		    size_t *nboot_acl)
+		    size_t *nboot_acl, bool *rpm)
 {
 	struct icm_tr_pkg_driver_ready_response reply;
 	struct icm_pkg_driver_ready request = {
···
 	if (nboot_acl)
 		*nboot_acl = (reply.info & ICM_TR_INFO_BOOT_ACL_MASK) >>
 				ICM_TR_INFO_BOOT_ACL_SHIFT;
+	if (rpm)
+		*rpm = !!(reply.hdr.flags & ICM_TR_FLAGS_RTD3);
+
 	return 0;
 }
 
···
 		return;
 	}
 
-	add_switch(parent_sw, route, &pkg->ep_uuid, pkg->connection_id,
+	add_switch(parent_sw, route, &pkg->ep_uuid, (const u8 *)pkg->ep_name,
+		   sizeof(pkg->ep_name), pkg->connection_id,
 		   0, 0, 0, security_level, authorized, boot);
 
 	tb_switch_put(parent_sw);
···
 
 static int
 icm_ar_driver_ready(struct tb *tb, enum tb_security_level *security_level,
-		    size_t *nboot_acl)
+		    size_t *nboot_acl, bool *rpm)
 {
 	struct icm_ar_pkg_driver_ready_response reply;
 	struct icm_pkg_driver_ready request = {
···
 	if (nboot_acl && (reply.info & ICM_AR_INFO_BOOT_ACL_SUPPORTED))
 		*nboot_acl = (reply.info & ICM_AR_INFO_BOOT_ACL_MASK) >>
 				ICM_AR_INFO_BOOT_ACL_SHIFT;
+	if (rpm)
+		*rpm = !!(reply.hdr.flags & ICM_AR_FLAGS_RTD3);
+
 	return 0;
 }
 
···
 
 static int
 __icm_driver_ready(struct tb *tb, enum tb_security_level *security_level,
-		   size_t *nboot_acl)
+		   size_t *nboot_acl, bool *rpm)
 {
 	struct icm *icm = tb_priv(tb);
 	unsigned int retries = 50;
 	int ret;
 
-	ret = icm->driver_ready(tb, security_level, nboot_acl);
+	ret = icm->driver_ready(tb, security_level, nboot_acl, rpm);
 	if (ret) {
 		tb_err(tb, "failed to send driver ready to ICM\n");
 		return ret;
···
 		return 0;
 	}
 
-	ret = __icm_driver_ready(tb, &tb->security_level, &tb->nboot_acl);
+	ret = __icm_driver_ready(tb, &tb->security_level, &tb->nboot_acl,
+				 &icm->rpm);
 	if (ret)
 		return ret;
 
···
 
 static int icm_suspend(struct tb *tb)
 {
-	int ret;
+	struct icm *icm = tb_priv(tb);
 
-	ret = nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_SAVE_DEVS, 0);
-	if (ret)
-		tb_info(tb, "Ignoring mailbox command error (%d) in %s\n",
-			ret, __func__);
+	if (icm->save_devices)
+		icm->save_devices(tb);
 
+	nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
 	return 0;
 }
 
···
 	 * Now all existing children should be resumed, start events
 	 * from ICM to get updated status.
 	 */
-	__icm_driver_ready(tb, NULL, NULL);
+	__icm_driver_ready(tb, NULL, NULL, NULL);
 
 	/*
 	 * We do not get notifications of devices that have been
···
 	 * if any.
 	 */
 	queue_delayed_work(tb->wq, &icm->rescan_work, msecs_to_jiffies(500));
+}
+
+static int icm_runtime_suspend(struct tb *tb)
+{
+	nhi_mailbox_cmd(tb->nhi, NHI_MAILBOX_DRV_UNLOADS, 0);
+	return 0;
+}
+
+static int icm_runtime_resume(struct tb *tb)
+{
+	/*
+	 * We can reuse the same resume functionality as with system
+	 * suspend.
+	 */
+	icm_complete(tb);
+	return 0;
 }
 
 static int icm_start(struct tb *tb)
···
 	 * prevent root switch NVM upgrade on Macs for now.
 	 */
 	tb->root_switch->no_nvm_upgrade = x86_apple_machine;
+	tb->root_switch->rpm = icm->rpm;
 
 	ret = tb_switch_add(tb->root_switch);
 	if (ret) {
···
 	.stop = icm_stop,
 	.suspend = icm_suspend,
 	.complete = icm_complete,
+	.runtime_suspend = icm_runtime_suspend,
+	.runtime_resume = icm_runtime_resume,
 	.handle_event = icm_handle_event,
 	.get_boot_acl = icm_ar_get_boot_acl,
 	.set_boot_acl = icm_ar_set_boot_acl,
···
 	.stop = icm_stop,
 	.suspend = icm_suspend,
 	.complete = icm_complete,
+	.runtime_suspend = icm_runtime_suspend,
+	.runtime_resume = icm_runtime_resume,
 	.handle_event = icm_handle_event,
 	.get_boot_acl = icm_ar_get_boot_acl,
 	.set_boot_acl = icm_ar_set_boot_acl,
···
 	case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI:
 		icm->is_supported = icm_fr_is_supported;
 		icm->get_route = icm_fr_get_route;
+		icm->save_devices = icm_fr_save_devices;
 		icm->driver_ready = icm_fr_driver_ready;
 		icm->device_connected = icm_fr_device_connected;
 		icm->device_disconnected = icm_fr_device_disconnected;
···
 		icm->is_supported = icm_ar_is_supported;
 		icm->get_mode = icm_ar_get_mode;
 		icm->get_route = icm_ar_get_route;
+		icm->save_devices = icm_fr_save_devices;
 		icm->driver_ready = icm_ar_driver_ready;
 		icm->device_connected = icm_fr_device_connected;
 		icm->device_disconnected = icm_fr_device_disconnected;
+45 -1
drivers/thunderbolt/nhi.c
···
 	struct pci_dev *pdev = to_pci_dev(dev);
 	struct tb *tb = pci_get_drvdata(pdev);
 
-	tb_domain_complete(tb);
+	/*
+	 * If we were runtime suspended when system suspend started,
+	 * schedule runtime resume now. It should bring the domain back
+	 * to functional state.
+	 */
+	if (pm_runtime_suspended(&pdev->dev))
+		pm_runtime_resume(&pdev->dev);
+	else
+		tb_domain_complete(tb);
+}
+
+static int nhi_runtime_suspend(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct tb *tb = pci_get_drvdata(pdev);
+
+	return tb_domain_runtime_suspend(tb);
+}
+
+static int nhi_runtime_resume(struct device *dev)
+{
+	struct pci_dev *pdev = to_pci_dev(dev);
+	struct tb *tb = pci_get_drvdata(pdev);
+
+	nhi_enable_int_throttling(tb->nhi);
+	return tb_domain_runtime_resume(tb);
 }
 
 static void nhi_shutdown(struct tb_nhi *nhi)
···
 
 	spin_lock_init(&nhi->lock);
 
+	res = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+	if (res)
+		res = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(32));
+	if (res) {
+		dev_err(&pdev->dev, "failed to set DMA mask\n");
+		return res;
+	}
+
 	pci_set_master(pdev);
 
 	tb = icm_probe(nhi);
···
 	}
 	pci_set_drvdata(pdev, tb);
 
+	pm_runtime_allow(&pdev->dev);
+	pm_runtime_set_autosuspend_delay(&pdev->dev, TB_AUTOSUSPEND_DELAY);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
+
 	return 0;
 }
 
···
 {
 	struct tb *tb = pci_get_drvdata(pdev);
 	struct tb_nhi *nhi = tb->nhi;
+
+	pm_runtime_get_sync(&pdev->dev);
+	pm_runtime_dont_use_autosuspend(&pdev->dev);
+	pm_runtime_forbid(&pdev->dev);
 
 	tb_domain_remove(tb);
 	nhi_shutdown(nhi);
···
 	.freeze = nhi_suspend,
 	.poweroff = nhi_suspend,
 	.complete = nhi_complete,
+	.runtime_suspend = nhi_runtime_suspend,
+	.runtime_resume = nhi_runtime_resume,
 };
 
 static struct pci_device_id nhi_ids[] = {
+62 -5
drivers/thunderbolt/switch.c
··· 8 8 #include <linux/delay.h> 9 9 #include <linux/idr.h> 10 10 #include <linux/nvmem-provider.h> 11 + #include <linux/pm_runtime.h> 11 12 #include <linux/sizes.h> 12 13 #include <linux/slab.h> 13 14 #include <linux/vmalloc.h> ··· 237 236 size_t bytes) 238 237 { 239 238 struct tb_switch *sw = priv; 239 + int ret; 240 240 241 - return dma_port_flash_read(sw->dma_port, offset, val, bytes); 241 + pm_runtime_get_sync(&sw->dev); 242 + ret = dma_port_flash_read(sw->dma_port, offset, val, bytes); 243 + pm_runtime_mark_last_busy(&sw->dev); 244 + pm_runtime_put_autosuspend(&sw->dev); 245 + 246 + return ret; 242 247 } 243 248 244 249 static int tb_switch_nvm_write(void *priv, unsigned int offset, void *val, ··· 729 722 * the new tunnel too early. 730 723 */ 731 724 pci_lock_rescan_remove(); 725 + pm_runtime_get_sync(&sw->dev); 732 726 733 727 switch (val) { 734 728 /* Approve switch */ ··· 750 742 break; 751 743 } 752 744 745 + pm_runtime_mark_last_busy(&sw->dev); 746 + pm_runtime_put_autosuspend(&sw->dev); 753 747 pci_unlock_rescan_remove(); 754 748 755 749 if (!ret) { ··· 898 888 nvm_clear_auth_status(sw); 899 889 900 890 if (val) { 901 - ret = nvm_validate_and_write(sw); 902 - if (ret) 891 + if (!sw->nvm->buf) { 892 + ret = -EINVAL; 903 893 goto exit_unlock; 894 + } 895 + 896 + pm_runtime_get_sync(&sw->dev); 897 + ret = nvm_validate_and_write(sw); 898 + if (ret) { 899 + pm_runtime_mark_last_busy(&sw->dev); 900 + pm_runtime_put_autosuspend(&sw->dev); 901 + goto exit_unlock; 902 + } 904 903 905 904 sw->nvm->authenticating = true; 906 905 ··· 917 898 ret = nvm_authenticate_host(sw); 918 899 else 919 900 ret = nvm_authenticate_device(sw); 901 + pm_runtime_mark_last_busy(&sw->dev); 902 + pm_runtime_put_autosuspend(&sw->dev); 920 903 } 921 904 922 905 exit_unlock: ··· 1044 1023 kfree(sw); 1045 1024 } 1046 1025 1026 + /* 1027 + * Currently only need to provide the callbacks. Everything else is handled 1028 + * in the connection manager. 
1029 + */ 1030 + static int __maybe_unused tb_switch_runtime_suspend(struct device *dev) 1031 + { 1032 + return 0; 1033 + } 1034 + 1035 + static int __maybe_unused tb_switch_runtime_resume(struct device *dev) 1036 + { 1037 + return 0; 1038 + } 1039 + 1040 + static const struct dev_pm_ops tb_switch_pm_ops = { 1041 + SET_RUNTIME_PM_OPS(tb_switch_runtime_suspend, tb_switch_runtime_resume, 1042 + NULL) 1043 + }; 1044 + 1047 1045 struct device_type tb_switch_type = { 1048 1046 .name = "thunderbolt_device", 1049 1047 .release = tb_switch_release, 1048 + .pm = &tb_switch_pm_ops, 1050 1049 }; 1051 1050 1052 1051 static int tb_switch_get_generation(struct tb_switch *sw) ··· 1406 1365 return ret; 1407 1366 1408 1367 ret = tb_switch_nvm_add(sw); 1409 - if (ret) 1368 + if (ret) { 1410 1369 device_del(&sw->dev); 1370 + return ret; 1371 + } 1411 1372 1412 - return ret; 1373 + pm_runtime_set_active(&sw->dev); 1374 + if (sw->rpm) { 1375 + pm_runtime_set_autosuspend_delay(&sw->dev, TB_AUTOSUSPEND_DELAY); 1376 + pm_runtime_use_autosuspend(&sw->dev); 1377 + pm_runtime_mark_last_busy(&sw->dev); 1378 + pm_runtime_enable(&sw->dev); 1379 + pm_request_autosuspend(&sw->dev); 1380 + } 1381 + 1382 + return 0; 1413 1383 } 1414 1384 1415 1385 /** ··· 1434 1382 void tb_switch_remove(struct tb_switch *sw) 1435 1383 { 1436 1384 int i; 1385 + 1386 + if (sw->rpm) { 1387 + pm_runtime_get_sync(&sw->dev); 1388 + pm_runtime_disable(&sw->dev); 1389 + } 1437 1390 1438 1391 /* port 0 is the switch itself and never has a remote */ 1439 1392 for (i = 1; i <= sw->config.max_port_number; i++) {
+10
drivers/thunderbolt/tb.h
··· 67 67 * @no_nvm_upgrade: Prevent NVM upgrade of this switch 68 68 * @safe_mode: The switch is in safe-mode 69 69 * @boot: Whether the switch was already authorized on boot or not 70 + * @rpm: The switch supports runtime PM 70 71 * @authorized: Whether the switch is authorized by user or policy 71 72 * @work: Work used to automatically authorize a switch 72 73 * @security_level: Switch supported security level ··· 102 101 bool no_nvm_upgrade; 103 102 bool safe_mode; 104 103 bool boot; 104 + bool rpm; 105 105 unsigned int authorized; 106 106 struct work_struct work; 107 107 enum tb_security_level security_level; ··· 201 199 * @resume_noirq: Connection manager specific resume_noirq 202 200 * @suspend: Connection manager specific suspend 203 201 * @complete: Connection manager specific complete 202 + * @runtime_suspend: Connection manager specific runtime_suspend 203 + * @runtime_resume: Connection manager specific runtime_resume 204 204 * @handle_event: Handle thunderbolt event 205 205 * @get_boot_acl: Get boot ACL list 206 206 * @set_boot_acl: Set boot ACL list ··· 221 217 int (*resume_noirq)(struct tb *tb); 222 218 int (*suspend)(struct tb *tb); 223 219 void (*complete)(struct tb *tb); 220 + int (*runtime_suspend)(struct tb *tb); 221 + int (*runtime_resume)(struct tb *tb); 224 222 void (*handle_event)(struct tb *tb, enum tb_cfg_pkg_type, 225 223 const void *buf, size_t size); 226 224 int (*get_boot_acl)(struct tb *tb, uuid_t *uuids, size_t nuuids); ··· 240 234 { 241 235 return (void *)tb->privdata; 242 236 } 237 + 238 + #define TB_AUTOSUSPEND_DELAY 15000 /* ms */ 243 239 244 240 /* helper functions & macros */ 245 241 ··· 372 364 int tb_domain_resume_noirq(struct tb *tb); 373 365 int tb_domain_suspend(struct tb *tb); 374 366 void tb_domain_complete(struct tb *tb); 367 + int tb_domain_runtime_suspend(struct tb *tb); 368 + int tb_domain_runtime_resume(struct tb *tb); 375 369 int tb_domain_approve_switch(struct tb *tb, struct tb_switch *sw); 376 370 int 
tb_domain_approve_switch_key(struct tb *tb, struct tb_switch *sw); 377 371 int tb_domain_challenge_switch_key(struct tb *tb, struct tb_switch *sw);
+4
drivers/thunderbolt/tb_msgs.h
··· 286 286 u16 info; 287 287 }; 288 288 289 + #define ICM_AR_FLAGS_RTD3 BIT(6) 290 + 289 291 #define ICM_AR_INFO_SLEVEL_MASK GENMASK(3, 0) 290 292 #define ICM_AR_INFO_BOOT_ACL_SHIFT 7 291 293 #define ICM_AR_INFO_BOOT_ACL_MASK GENMASK(11, 7) ··· 334 332 u16 device_id; 335 333 u16 reserved2; 336 334 }; 335 + 336 + #define ICM_TR_FLAGS_RTD3 BIT(6) 337 337 338 338 #define ICM_TR_INFO_SLEVEL_MASK GENMASK(2, 0) 339 339 #define ICM_TR_INFO_BOOT_ACL_SHIFT 7
+1 -1
drivers/thunderbolt/tb_regs.h
··· 202 202 /* DWORD 5 */ 203 203 u32 max_in_hop_id:11; 204 204 u32 max_out_hop_id:11; 205 - u32 __unkown4:10; 205 + u32 __unknown4:10; 206 206 /* DWORD 6 */ 207 207 u32 __unknown5; 208 208 /* DWORD 7 */
+18
drivers/thunderbolt/xdomain.c
··· 13 13 #include <linux/device.h> 14 14 #include <linux/kmod.h> 15 15 #include <linux/module.h> 16 + #include <linux/pm_runtime.h> 16 17 #include <linux/utsname.h> 17 18 #include <linux/uuid.h> 18 19 #include <linux/workqueue.h> ··· 1130 1129 xd->dev.groups = xdomain_attr_groups; 1131 1130 dev_set_name(&xd->dev, "%u-%llx", tb->index, route); 1132 1131 1132 + /* 1133 + * This keeps the DMA powered on as long as we have active 1134 + * connection to another host. 1135 + */ 1136 + pm_runtime_set_active(&xd->dev); 1137 + pm_runtime_get_noresume(&xd->dev); 1138 + pm_runtime_enable(&xd->dev); 1139 + 1133 1140 return xd; 1134 1141 1135 1142 err_free_local_uuid: ··· 1182 1173 stop_handshake(xd); 1183 1174 1184 1175 device_for_each_child_reverse(&xd->dev, xd, unregister_service); 1176 + 1177 + /* 1178 + * Undo runtime PM here explicitly because it is possible that 1179 + * the XDomain was never added to the bus and thus device_del() 1180 + * is not called for it (device_del() would handle this otherwise). 1181 + */ 1182 + pm_runtime_disable(&xd->dev); 1183 + pm_runtime_put_noidle(&xd->dev); 1184 + pm_runtime_set_suspended(&xd->dev); 1185 1185 1186 1186 if (!device_is_registered(&xd->dev)) 1187 1187 put_device(&xd->dev);
+1
drivers/tty/goldfish.c
··· 13 13 #include <linux/slab.h> 14 14 #include <linux/io.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/goldfish.h> 17 18 #include <linux/mm.h> 18 19 #include <linux/dma-mapping.h>
+1
drivers/tty/serial/8250/8250_em.c
··· 8 8 #include <linux/device.h> 9 9 #include <linux/io.h> 10 10 #include <linux/module.h> 11 + #include <linux/mod_devicetable.h> 11 12 #include <linux/serial_8250.h> 12 13 #include <linux/serial_reg.h> 13 14 #include <linux/platform_device.h>
+1
drivers/tty/serial/sccnxp.c
··· 14 14 #include <linux/clk.h> 15 15 #include <linux/err.h> 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/device.h> 18 19 #include <linux/console.h> 19 20 #include <linux/serial_core.h>
+9 -15
drivers/uio/uio.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * drivers/uio/uio.c 3 4 * ··· 10 9 * Userspace IO 11 10 * 12 11 * Base Functions 13 - * 14 - * Licensed under the GPLv2 only. 15 12 */ 16 13 17 14 #include <linux/module.h> ··· 624 625 ssize_t retval; 625 626 s32 irq_on; 626 627 628 + if (count != sizeof(s32)) 629 + return -EINVAL; 630 + 631 + if (copy_from_user(&irq_on, buf, count)) 632 + return -EFAULT; 633 + 627 634 mutex_lock(&idev->info_lock); 628 635 if (!idev->info) { 629 636 retval = -EINVAL; ··· 641 636 goto out; 642 637 } 643 638 644 - if (count != sizeof(s32)) { 645 - retval = -EINVAL; 646 - goto out; 647 - } 648 - 649 639 if (!idev->info->irqcontrol) { 650 640 retval = -ENOSYS; 651 - goto out; 652 - } 653 - 654 - if (copy_from_user(&irq_on, buf, count)) { 655 - retval = -EFAULT; 656 641 goto out; 657 642 } 658 643 ··· 809 814 810 815 out: 811 816 mutex_unlock(&idev->info_lock); 812 - return 0; 817 + return ret; 813 818 } 814 819 815 820 static const struct file_operations uio_fops = { ··· 953 958 if (ret) 954 959 goto err_uio_dev_add_attributes; 955 960 956 - info->uio_dev = idev; 957 - 958 961 if (info->irq && (info->irq != UIO_IRQ_CUSTOM)) { 959 962 /* 960 963 * Note that we deliberately don't use devm_request_irq ··· 969 976 goto err_request_irq; 970 977 } 971 978 979 + info->uio_dev = idev; 972 980 return 0; 973 981 974 982 err_request_irq:
+1 -3
drivers/uio/uio_cif.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * UIO Hilscher CIF card driver 3 4 * 4 5 * (C) 2007 Hans J. Koch <hjk@hansjkoch.de> 5 6 * Original code (C) 2005 Benedikt Spranger <b.spranger@linutronix.de> 6 - * 7 - * Licensed under GPL version 2 only. 8 - * 9 7 */ 10 8 11 9 #include <linux/device.h>
+1
drivers/uio/uio_fsl_elbc_gpcm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* uio_fsl_elbc_gpcm: UIO driver for eLBC/GPCM peripherals 2 3 3 4 Copyright (C) 2014 Linutronix GmbH
+1 -3
drivers/uio/uio_hv_generic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * uio_hv_generic - generic UIO driver for VMBus 3 4 * 4 5 * Copyright (c) 2013-2016 Brocade Communications Systems, Inc. 5 6 * Copyright (c) 2016, Microsoft Corporation. 6 - * 7 - * 8 - * This work is licensed under the terms of the GNU GPL, version 2. 9 7 * 10 8 * Since the driver does not declare any device ids, you must allocate 11 9 * id and bind the device to the driver yourself. For example:
+1 -2
drivers/uio/uio_netx.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * UIO driver for Hilscher NetX based fieldbus cards (cifX, comX). 3 4 * See http://www.hilscher.com for details. 4 5 * 5 6 * (C) 2007 Hans J. Koch <hjk@hansjkoch.de> 6 7 * (C) 2008 Manuel Traut <manut@linutronix.de> 7 - * 8 - * Licensed under GPL version 2 only. 9 8 * 10 9 */ 11 10
+1 -2
drivers/uio/uio_pci_generic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* uio_pci_generic - generic UIO driver for PCI 2.3 devices 2 3 * 3 4 * Copyright (C) 2009 Red Hat, Inc. 4 5 * Author: Michael S. Tsirkin <mst@redhat.com> 5 - * 6 - * This work is licensed under the terms of the GNU GPL, version 2. 7 6 * 8 7 * Since the driver does not declare any device ids, you must allocate 9 8 * id and bind the device to the driver yourself. For example:
+45 -24
drivers/uio/uio_pruss.c
··· 122 122 struct uio_pruss_dev *gdev; 123 123 struct resource *regs_prussio; 124 124 struct device *dev = &pdev->dev; 125 - int ret = -ENODEV, cnt = 0, len; 125 + int ret, cnt, i, len; 126 126 struct uio_pruss_pdata *pdata = dev_get_platdata(dev); 127 127 128 128 gdev = kzalloc(sizeof(struct uio_pruss_dev), GFP_KERNEL); ··· 131 131 132 132 gdev->info = kcalloc(MAX_PRUSS_EVT, sizeof(*p), GFP_KERNEL); 133 133 if (!gdev->info) { 134 - kfree(gdev); 135 - return -ENOMEM; 134 + ret = -ENOMEM; 135 + goto err_free_gdev; 136 136 } 137 137 138 138 /* Power on PRU in case its not done as part of boot-loader */ ··· 140 140 if (IS_ERR(gdev->pruss_clk)) { 141 141 dev_err(dev, "Failed to get clock\n"); 142 142 ret = PTR_ERR(gdev->pruss_clk); 143 - kfree(gdev->info); 144 - kfree(gdev); 145 - return ret; 146 - } else { 147 - ret = clk_enable(gdev->pruss_clk); 148 - if (ret) { 149 - dev_err(dev, "Failed to enable clock\n"); 150 - clk_put(gdev->pruss_clk); 151 - kfree(gdev->info); 152 - kfree(gdev); 153 - return ret; 154 - } 143 + goto err_free_info; 144 + } 145 + 146 + ret = clk_enable(gdev->pruss_clk); 147 + if (ret) { 148 + dev_err(dev, "Failed to enable clock\n"); 149 + goto err_clk_put; 155 150 } 156 151 157 152 regs_prussio = platform_get_resource(pdev, IORESOURCE_MEM, 0); 158 153 if (!regs_prussio) { 159 154 dev_err(dev, "No PRUSS I/O resource specified\n"); 160 - goto out_free; 155 + ret = -EIO; 156 + goto err_clk_disable; 161 157 } 162 158 163 159 if (!regs_prussio->start) { 164 160 dev_err(dev, "Invalid memory resource\n"); 165 - goto out_free; 161 + ret = -EIO; 162 + goto err_clk_disable; 166 163 } 167 164 168 165 if (pdata->sram_pool) { ··· 169 172 sram_pool_sz, &gdev->sram_paddr); 170 173 if (!gdev->sram_vaddr) { 171 174 dev_err(dev, "Could not allocate SRAM pool\n"); 172 - goto out_free; 175 + ret = -ENOMEM; 176 + goto err_clk_disable; 173 177 } 174 178 } 175 179 ··· 178 180 &(gdev->ddr_paddr), GFP_KERNEL | GFP_DMA); 179 181 if (!gdev->ddr_vaddr) { 180 182 
dev_err(dev, "Could not allocate external memory\n"); 181 - goto out_free; 183 + ret = -ENOMEM; 184 + goto err_free_sram; 182 185 } 183 186 184 187 len = resource_size(regs_prussio); 185 188 gdev->prussio_vaddr = ioremap(regs_prussio->start, len); 186 189 if (!gdev->prussio_vaddr) { 187 190 dev_err(dev, "Can't remap PRUSS I/O address range\n"); 188 - goto out_free; 191 + ret = -ENOMEM; 192 + goto err_free_ddr_vaddr; 189 193 } 190 194 191 195 gdev->pintc_base = pdata->pintc_base; ··· 215 215 p->priv = gdev; 216 216 217 217 ret = uio_register_device(dev, p); 218 - if (ret < 0) 219 - goto out_free; 218 + if (ret < 0) { 219 + kfree(p->name); 220 + goto err_unloop; 221 + } 220 222 } 221 223 222 224 platform_set_drvdata(pdev, gdev); 223 225 return 0; 224 226 225 - out_free: 226 - pruss_cleanup(dev, gdev); 227 + err_unloop: 228 + for (i = 0, p = gdev->info; i < cnt; i++, p++) { 229 + uio_unregister_device(p); 230 + kfree(p->name); 231 + } 232 + iounmap(gdev->prussio_vaddr); 233 + err_free_ddr_vaddr: 234 + dma_free_coherent(dev, extram_pool_sz, gdev->ddr_vaddr, 235 + gdev->ddr_paddr); 236 + err_free_sram: 237 + if (pdata->sram_pool) 238 + gen_pool_free(gdev->sram_pool, gdev->sram_vaddr, sram_pool_sz); 239 + err_clk_disable: 240 + clk_disable(gdev->pruss_clk); 241 + err_clk_put: 242 + clk_put(gdev->pruss_clk); 243 + err_free_info: 244 + kfree(gdev->info); 245 + err_free_gdev: 246 + kfree(gdev); 247 + 227 248 return ret; 228 249 } 229 250
+1
drivers/uio/uio_sercos3.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* sercos3: UIO driver for the Automata Sercos III PCI card 2 3 3 4 Copyright (C) 2008 Linutronix GmbH
+1
drivers/usb/gadget/udc/fsl_mxc_udc.c
··· 11 11 #include <linux/delay.h> 12 12 #include <linux/err.h> 13 13 #include <linux/fsl_devices.h> 14 + #include <linux/mod_devicetable.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/io.h> 16 17
+6
drivers/video/fbdev/hyperv_fb.c
··· 912 912 .id_table = id_table, 913 913 .probe = hvfb_probe, 914 914 .remove = hvfb_remove, 915 + .driver = { 916 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 917 + }, 915 918 }; 916 919 917 920 static int hvfb_pci_stub_probe(struct pci_dev *pdev, ··· 932 929 .id_table = pci_stub_id_table, 933 930 .probe = hvfb_pci_stub_probe, 934 931 .remove = hvfb_pci_stub_remove, 932 + .driver = { 933 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 934 + } 935 935 }; 936 936 937 937 static int __init hvfb_drv_init(void)
+1
drivers/video/fbdev/omap2/omapfb/displays/encoder-tpd12s015.c
··· 12 12 #include <linux/completion.h> 13 13 #include <linux/delay.h> 14 14 #include <linux/module.h> 15 + #include <linux/mod_devicetable.h> 15 16 #include <linux/slab.h> 16 17 #include <linux/platform_device.h> 17 18 #include <linux/gpio/consumer.h>
-2
drivers/vme/bridges/vme_ca91cx42.c
··· 970 970 { 971 971 u32 result; 972 972 uintptr_t pci_addr; 973 - int i; 974 973 struct ca91cx42_driver *bridge; 975 974 struct device *dev; 976 975 ··· 977 978 dev = image->parent->parent; 978 979 979 980 /* Find the PCI address that maps to the desired VME address */ 980 - i = image->number; 981 981 982 982 /* Locking as we can only do one of these at a time */ 983 983 mutex_lock(&bridge->vme_rmw);
+1 -1
drivers/w1/masters/ds2482.c
··· 71 71 #define DS2482_REG_CFG_APU 0x01 /* active pull-up */ 72 72 73 73 /* extra configurations - e.g. 1WS */ 74 - int extra_config; 74 + static int extra_config; 75 75 76 76 /** 77 77 * Write and verify codes for the CHANNEL_SELECT command (DS2482-800 only).
+7 -9
drivers/w1/masters/ds2490.c
··· 134 134 #define EP_DATA_OUT 2 135 135 #define EP_DATA_IN 3 136 136 137 - struct ds_device 138 - { 137 + struct ds_device { 139 138 struct list_head ds_entry; 140 139 141 140 struct usb_device *udev; ··· 157 158 struct w1_bus_master master; 158 159 }; 159 160 160 - struct ds_status 161 - { 161 + struct ds_status { 162 162 u8 enable; 163 163 u8 speed; 164 164 u8 pullup_dur; ··· 234 236 int i; 235 237 236 238 pr_info("0x%x: count=%d, status: ", dev->ep[EP_STATUS], count); 237 - for (i=0; i<count; ++i) 239 + for (i = 0; i < count; ++i) 238 240 pr_info("%02x ", buf[i]); 239 241 pr_info("\n"); 240 242 ··· 356 358 int i; 357 359 358 360 printk("%s: count=%d: ", __func__, count); 359 - for (i=0; i<count; ++i) 361 + for (i = 0; i < count; ++i) 360 362 printk("%02x ", buf[i]); 361 363 printk("\n"); 362 364 } ··· 402 404 if (err) 403 405 break; 404 406 } 405 - } while(++count < limit); 407 + } while (++count < limit); 406 408 407 409 return err; 408 410 } ··· 445 447 if (err >= 0) { 446 448 int i; 447 449 printk("0x%x: count=%d, status: ", dev->ep[EP_STATUS], err); 448 - for (i=0; i<err; ++i) 450 + for (i = 0; i < err; ++i) 449 451 printk("%02x ", dev->st_buf[i]); 450 452 printk("\n"); 451 453 } ··· 611 613 int err; 612 614 struct ds_status st; 613 615 614 - err = ds_send_control(dev, COMM_BYTE_IO | COMM_IM , 0xff); 616 + err = ds_send_control(dev, COMM_BYTE_IO | COMM_IM, 0xff); 615 617 if (err) 616 618 return err; 617 619
+1
drivers/w1/masters/mxc_w1.c
··· 17 17 #include <linux/io.h> 18 18 #include <linux/jiffies.h> 19 19 #include <linux/module.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/platform_device.h> 21 22 22 23 #include <linux/w1.h>
+1
drivers/watchdog/coh901327_wdt.c
··· 7 7 * Author: Linus Walleij <linus.walleij@stericsson.com> 8 8 */ 9 9 #include <linux/module.h> 10 + #include <linux/mod_devicetable.h> 10 11 #include <linux/types.h> 11 12 #include <linux/watchdog.h> 12 13 #include <linux/interrupt.h>
+1
drivers/watchdog/davinci_wdt.c
··· 13 13 14 14 #include <linux/module.h> 15 15 #include <linux/moduleparam.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/types.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/watchdog.h>
+1
drivers/watchdog/imgpdc_wdt.c
··· 44 44 #include <linux/io.h> 45 45 #include <linux/log2.h> 46 46 #include <linux/module.h> 47 + #include <linux/mod_devicetable.h> 47 48 #include <linux/platform_device.h> 48 49 #include <linux/slab.h> 49 50 #include <linux/watchdog.h>
+1
drivers/watchdog/max63xx_wdt.c
··· 17 17 #include <linux/err.h> 18 18 #include <linux/module.h> 19 19 #include <linux/moduleparam.h> 20 + #include <linux/mod_devicetable.h> 20 21 #include <linux/types.h> 21 22 #include <linux/kernel.h> 22 23 #include <linux/watchdog.h>
+1
drivers/watchdog/max77620_wdt.c
··· 14 14 #include <linux/init.h> 15 15 #include <linux/kernel.h> 16 16 #include <linux/module.h> 17 + #include <linux/mod_devicetable.h> 17 18 #include <linux/mfd/max77620.h> 18 19 #include <linux/platform_device.h> 19 20 #include <linux/regmap.h>
+1
drivers/watchdog/moxart_wdt.c
··· 13 13 #include <linux/clk.h> 14 14 #include <linux/io.h> 15 15 #include <linux/module.h> 16 + #include <linux/mod_devicetable.h> 16 17 #include <linux/err.h> 17 18 #include <linux/kernel.h> 18 19 #include <linux/platform_device.h>
+1
drivers/watchdog/omap_wdt.c
··· 29 29 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 30 30 31 31 #include <linux/module.h> 32 + #include <linux/mod_devicetable.h> 32 33 #include <linux/types.h> 33 34 #include <linux/kernel.h> 34 35 #include <linux/mm.h>
+1
drivers/watchdog/tangox_wdt.c
··· 11 11 #include <linux/kernel.h> 12 12 #include <linux/module.h> 13 13 #include <linux/moduleparam.h> 14 + #include <linux/mod_devicetable.h> 14 15 #include <linux/platform_device.h> 15 16 #include <linux/watchdog.h> 16 17
+37 -31
include/linux/coresight.h
··· 40 40 CORESIGHT_DEV_TYPE_LINK, 41 41 CORESIGHT_DEV_TYPE_LINKSINK, 42 42 CORESIGHT_DEV_TYPE_SOURCE, 43 + CORESIGHT_DEV_TYPE_HELPER, 43 44 }; 44 45 45 46 enum coresight_dev_subtype_sink { ··· 63 62 CORESIGHT_DEV_SUBTYPE_SOURCE_SOFTWARE, 64 63 }; 65 64 65 + enum coresight_dev_subtype_helper { 66 + CORESIGHT_DEV_SUBTYPE_HELPER_NONE, 67 + CORESIGHT_DEV_SUBTYPE_HELPER_CATU, 68 + }; 69 + 66 70 /** 67 - * struct coresight_dev_subtype - further characterisation of a type 71 + * union coresight_dev_subtype - further characterisation of a type 68 72 * @sink_subtype: type of sink this component is, as defined 69 - by @coresight_dev_subtype_sink. 73 + * by @coresight_dev_subtype_sink. 70 74 * @link_subtype: type of link this component is, as defined 71 - by @coresight_dev_subtype_link. 75 + * by @coresight_dev_subtype_link. 72 76 * @source_subtype: type of source this component is, as defined 73 - by @coresight_dev_subtype_source. 77 + * by @coresight_dev_subtype_source. 78 + * @helper_subtype: type of helper this component is, as defined 79 + * by @coresight_dev_subtype_helper. 74 80 */ 75 - struct coresight_dev_subtype { 76 - enum coresight_dev_subtype_sink sink_subtype; 77 - enum coresight_dev_subtype_link link_subtype; 81 + union coresight_dev_subtype { 82 + /* We have some devices which acts as LINK and SINK */ 83 + struct { 84 + enum coresight_dev_subtype_sink sink_subtype; 85 + enum coresight_dev_subtype_link link_subtype; 86 + }; 78 87 enum coresight_dev_subtype_source source_subtype; 88 + enum coresight_dev_subtype_helper helper_subtype; 79 89 }; 80 90 81 91 /** ··· 99 87 * @child_ports:child component port number the current component is 100 88 connected to. 101 89 * @nr_outport: number of output ports for this component. 102 - * @clk: The clock this component is associated to. 
103 90 */ 104 91 struct coresight_platform_data { 105 92 int cpu; ··· 108 97 const char **child_names; 109 98 int *child_ports; 110 99 int nr_outport; 111 - struct clk *clk; 112 100 }; 113 101 114 102 /** ··· 123 113 */ 124 114 struct coresight_desc { 125 115 enum coresight_dev_type type; 126 - struct coresight_dev_subtype subtype; 116 + union coresight_dev_subtype subtype; 127 117 const struct coresight_ops *ops; 128 118 struct coresight_platform_data *pdata; 129 119 struct device *dev; ··· 167 157 int nr_inport; 168 158 int nr_outport; 169 159 enum coresight_dev_type type; 170 - struct coresight_dev_subtype subtype; 160 + union coresight_dev_subtype subtype; 171 161 const struct coresight_ops *ops; 172 162 struct device dev; 173 163 atomic_t *refcnt; ··· 181 171 #define source_ops(csdev) csdev->ops->source_ops 182 172 #define sink_ops(csdev) csdev->ops->sink_ops 183 173 #define link_ops(csdev) csdev->ops->link_ops 174 + #define helper_ops(csdev) csdev->ops->helper_ops 184 175 185 176 /** 186 177 * struct coresight_ops_sink - basic operations for a sink ··· 241 230 struct perf_event *event); 242 231 }; 243 232 233 + /** 234 + * struct coresight_ops_helper - Operations for a helper device. 235 + * 236 + * All operations could pass in a device specific data, which could 237 + * help the helper device to determine what to do. 
238 + * 239 + * @enable : Enable the device 240 + * @disable : Disable the device 241 + */ 242 + struct coresight_ops_helper { 243 + int (*enable)(struct coresight_device *csdev, void *data); 244 + int (*disable)(struct coresight_device *csdev, void *data); 245 + }; 246 + 244 247 struct coresight_ops { 245 248 const struct coresight_ops_sink *sink_ops; 246 249 const struct coresight_ops_link *link_ops; 247 250 const struct coresight_ops_source *source_ops; 251 + const struct coresight_ops_helper *helper_ops; 248 252 }; 249 253 250 254 #ifdef CONFIG_CORESIGHT ··· 291 265 { return 0; } 292 266 static inline struct coresight_platform_data *of_get_coresight_platform_data( 293 267 struct device *dev, const struct device_node *node) { return NULL; } 294 - #endif 295 - 296 - #ifdef CONFIG_PID_NS 297 - static inline unsigned long 298 - coresight_vpid_to_pid(unsigned long vpid) 299 - { 300 - struct task_struct *task = NULL; 301 - unsigned long pid = 0; 302 - 303 - rcu_read_lock(); 304 - task = find_task_by_vpid(vpid); 305 - if (task) 306 - pid = task_pid_nr(task); 307 - rcu_read_unlock(); 308 - 309 - return pid; 310 - } 311 - #else 312 - static inline unsigned long 313 - coresight_vpid_to_pid(unsigned long vpid) { return vpid; } 314 268 #endif 315 269 316 270 #endif
+4
include/linux/device.h
··· 701 701 702 702 void __iomem *devm_ioremap_resource(struct device *dev, struct resource *res); 703 703 704 + void __iomem *devm_of_iomap(struct device *dev, 705 + struct device_node *node, int index, 706 + resource_size_t *size); 707 + 704 708 /* allows to add/remove a custom action to devres stack */ 705 709 int devm_add_action(struct device *dev, void (*action)(void *), void *data); 706 710 void devm_remove_action(struct device *dev, void (*action)(void *), void *data);
+24
include/linux/fpga/fpga-mgr.h
··· 77 77 * @sgt: scatter/gather table containing FPGA image 78 78 * @buf: contiguous buffer containing FPGA image 79 79 * @count: size of buf 80 + * @region_id: id of target region 80 81 * @dev: device that owns this 81 82 * @overlay: Device Tree overlay 82 83 */ ··· 90 89 struct sg_table *sgt; 91 90 const char *buf; 92 91 size_t count; 92 + int region_id; 93 93 struct device *dev; 94 94 #ifdef CONFIG_OF 95 95 struct device_node *overlay; ··· 101 99 * struct fpga_manager_ops - ops for low level fpga manager drivers 102 100 * @initial_header_size: Maximum number of bytes that should be passed into write_init 103 101 * @state: returns an enum value of the FPGA's state 102 + * @status: returns status of the FPGA, including reconfiguration error code 104 103 * @write_init: prepare the FPGA to receive configuration data 105 104 * @write: write count bytes of configuration data to the FPGA 106 105 * @write_sg: write the scatter list of configuration data to the FPGA ··· 116 113 struct fpga_manager_ops { 117 114 size_t initial_header_size; 118 115 enum fpga_mgr_states (*state)(struct fpga_manager *mgr); 116 + u64 (*status)(struct fpga_manager *mgr); 119 117 int (*write_init)(struct fpga_manager *mgr, 120 118 struct fpga_image_info *info, 121 119 const char *buf, size_t count); ··· 128 124 const struct attribute_group **groups; 129 125 }; 130 126 127 + /* FPGA manager status: Partial/Full Reconfiguration errors */ 128 + #define FPGA_MGR_STATUS_OPERATION_ERR BIT(0) 129 + #define FPGA_MGR_STATUS_CRC_ERR BIT(1) 130 + #define FPGA_MGR_STATUS_INCOMPATIBLE_IMAGE_ERR BIT(2) 131 + #define FPGA_MGR_STATUS_IP_PROTOCOL_ERR BIT(3) 132 + #define FPGA_MGR_STATUS_FIFO_OVERFLOW_ERR BIT(4) 133 + 134 + /** 135 + * struct fpga_compat_id - id for compatibility check 136 + * 137 + * @id_h: high 64bit of the compat_id 138 + * @id_l: low 64bit of the compat_id 139 + */ 140 + struct fpga_compat_id { 141 + u64 id_h; 142 + u64 id_l; 143 + }; 144 + 145 + /** 146 + * struct fpga_manager - fpga
manager structure 133 147 * @name: name of low level fpga manager 134 148 * @dev: fpga manager device 135 149 * @ref_mutex: only allows one reference to fpga manager 136 150 * @state: state of fpga manager 151 + * @compat_id: FPGA manager id for compatibility check. 137 152 * @mops: pointer to struct of fpga manager ops 138 153 * @priv: low level driver private data 139 154 */ ··· 161 138 struct device dev; 162 139 struct mutex ref_mutex; 163 140 enum fpga_mgr_states state; 141 + struct fpga_compat_id *compat_id; 164 142 const struct fpga_manager_ops *mops; 165 143 void *priv; 166 144 };
+2
include/linux/fpga/fpga-region.h
··· 14 14 * @bridge_list: list of FPGA bridges specified in region 15 15 * @mgr: FPGA manager 16 16 * @info: FPGA image info 17 + * @compat_id: FPGA region id for compatibility check. 17 18 * @priv: private data 18 19 * @get_bridges: optional function to get bridges to a list 19 20 */ ··· 24 23 struct list_head bridge_list; 25 24 struct fpga_manager *mgr; 26 25 struct fpga_image_info *info; 26 + struct fpga_compat_id *compat_id; 27 27 void *priv; 28 28 int (*get_bridges)(struct fpga_region *region); 29 29 };
+33
include/linux/fsi-sbefifo.h
··· 1 + /* 2 + * SBEFIFO FSI Client device driver 3 + * 4 + * Copyright (C) IBM Corporation 2017 5 + * 6 + * This program is free software; you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + */ 15 + 16 + #ifndef LINUX_FSI_SBEFIFO_H 17 + #define LINUX_FSI_SBEFIFO_H 18 + 19 + #define SBEFIFO_CMD_PUT_OCC_SRAM 0xa404 20 + #define SBEFIFO_CMD_GET_OCC_SRAM 0xa403 21 + #define SBEFIFO_CMD_GET_SBE_FFDC 0xa801 22 + 23 + #define SBEFIFO_MAX_FFDC_SIZE 0x2000 24 + 25 + struct device; 26 + 27 + int sbefifo_submit(struct device *dev, const __be32 *command, size_t cmd_len, 28 + __be32 *response, size_t *resp_len); 29 + 30 + int sbefifo_parse_status(struct device *dev, u16 cmd, __be32 *response, 31 + size_t resp_len, size_t *data_len); 32 + 33 + #endif /* LINUX_FSI_SBEFIFO_H */
+12 -2
include/linux/fsi.h
··· 76 76 extern int fsi_slave_write(struct fsi_slave *slave, uint32_t addr, 77 77 const void *val, size_t size); 78 78 79 - 80 - 81 79 extern struct bus_type fsi_bus_type; 80 + extern const struct device_type fsi_cdev_type; 81 + 82 + enum fsi_dev_type { 83 + fsi_dev_cfam, 84 + fsi_dev_sbefifo, 85 + fsi_dev_scom, 86 + fsi_dev_occ 87 + }; 88 + 89 + extern int fsi_get_new_minor(struct fsi_device *fdev, enum fsi_dev_type type, 90 + dev_t *out_dev, int *out_index); 91 + extern void fsi_free_minor(dev_t dev); 82 92 83 93 #endif /* LINUX_FSI_H */
+75
include/linux/gnss.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * GNSS receiver support 4 + * 5 + * Copyright (C) 2018 Johan Hovold <johan@kernel.org> 6 + */ 7 + 8 + #ifndef _LINUX_GNSS_H 9 + #define _LINUX_GNSS_H 10 + 11 + #include <linux/cdev.h> 12 + #include <linux/device.h> 13 + #include <linux/kfifo.h> 14 + #include <linux/mutex.h> 15 + #include <linux/rwsem.h> 16 + #include <linux/types.h> 17 + #include <linux/wait.h> 18 + 19 + struct gnss_device; 20 + 21 + enum gnss_type { 22 + GNSS_TYPE_NMEA = 0, 23 + GNSS_TYPE_SIRF, 24 + GNSS_TYPE_UBX, 25 + 26 + GNSS_TYPE_COUNT 27 + }; 28 + 29 + struct gnss_operations { 30 + int (*open)(struct gnss_device *gdev); 31 + void (*close)(struct gnss_device *gdev); 32 + int (*write_raw)(struct gnss_device *gdev, const unsigned char *buf, 33 + size_t count); 34 + }; 35 + 36 + struct gnss_device { 37 + struct device dev; 38 + struct cdev cdev; 39 + int id; 40 + 41 + enum gnss_type type; 42 + unsigned long flags; 43 + 44 + struct rw_semaphore rwsem; 45 + const struct gnss_operations *ops; 46 + unsigned int count; 47 + unsigned int disconnected:1; 48 + 49 + struct mutex read_mutex; 50 + struct kfifo read_fifo; 51 + wait_queue_head_t read_queue; 52 + 53 + struct mutex write_mutex; 54 + char *write_buf; 55 + }; 56 + 57 + struct gnss_device *gnss_allocate_device(struct device *parent); 58 + void gnss_put_device(struct gnss_device *gdev); 59 + int gnss_register_device(struct gnss_device *gdev); 60 + void gnss_deregister_device(struct gnss_device *gdev); 61 + 62 + int gnss_insert_raw(struct gnss_device *gdev, const unsigned char *buf, 63 + size_t count); 64 + 65 + static inline void gnss_set_drvdata(struct gnss_device *gdev, void *data) 66 + { 67 + dev_set_drvdata(&gdev->dev, data); 68 + } 69 + 70 + static inline void *gnss_get_drvdata(struct gnss_device *gdev) 71 + { 72 + return dev_get_drvdata(&gdev->dev); 73 + } 74 + 75 + #endif /* _LINUX_GNSS_H */
+10 -4
include/linux/goldfish.h
··· 2 2 #ifndef __LINUX_GOLDFISH_H 3 3 #define __LINUX_GOLDFISH_H 4 4 5 + #include <linux/kernel.h> 6 + #include <linux/types.h> 7 + #include <linux/io.h> 8 + 5 9 /* Helpers for Goldfish virtual platform */ 6 10 7 11 static inline void gf_write_ptr(const void *ptr, void __iomem *portl, 8 12 void __iomem *porth) 9 13 { 10 - writel((u32)(unsigned long)ptr, portl); 14 + const unsigned long addr = (unsigned long)ptr; 15 + 16 + writel(lower_32_bits(addr), portl); 11 17 #ifdef CONFIG_64BIT 12 - writel((unsigned long)ptr >> 32, porth); 18 + writel(upper_32_bits(addr), porth); 13 19 #endif 14 20 } 15 21 ··· 23 17 void __iomem *portl, 24 18 void __iomem *porth) 25 19 { 26 - writel((u32)addr, portl); 20 + writel(lower_32_bits(addr), portl); 27 21 #ifdef CONFIG_ARCH_DMA_ADDR_T_64BIT 28 - writel(addr >> 32, porth); 22 + writel(upper_32_bits(addr), porth); 29 23 #endif 30 24 } 31 25
+25 -8
include/linux/hyperv.h
··· 89 89 u32 interrupt_mask; 90 90 91 91 /* 92 - * Win8 uses some of the reserved bits to implement 93 - * interrupt driven flow management. On the send side 94 - * we can request that the receiver interrupt the sender 95 - * when the ring transitions from being full to being able 96 - * to handle a message of size "pending_send_sz". 92 + * WS2012/Win8 and later versions of Hyper-V implement interrupt 93 + * driven flow management. The feature bit feat_pending_send_sz 94 + * is set by the host on the host->guest ring buffer, and by the 95 + * guest on the guest->host ring buffer. 97 96 * 98 - * Add necessary state for this enhancement. 97 + * The meaning of the feature bit is a bit complex in that it has 98 + * semantics that apply to both ring buffers. If the guest sets 99 + * the feature bit in the guest->host ring buffer, the guest is 100 + * telling the host that: 101 + * 1) It will set the pending_send_sz field in the guest->host ring 102 + * buffer when it is waiting for space to become available, and 103 + * 2) It will read the pending_send_sz field in the host->guest 104 + * ring buffer and interrupt the host when it frees enough space 105 + * 106 + * Similarly, if the host sets the feature bit in the host->guest 107 + * ring buffer, the host is telling the guest that: 108 + * 1) It will set the pending_send_sz field in the host->guest ring 109 + * buffer when it is waiting for space to become available, and 110 + * 2) It will read the pending_send_sz field in the guest->host 111 + * ring buffer and interrupt the guest when it frees enough space 112 + * 113 + * If either the guest or host does not set the feature bit that it 114 + * owns, that guest or host must do polling if it encounters a full 115 + * ring buffer, and not signal the other end with an interrupt. 
99 116 */ 100 117 u32 pending_send_sz; 101 - 102 118 u32 reserved1[12]; 103 - 104 119 union { 105 120 struct { 106 121 u32 feat_pending_send_sz:1; ··· 1060 1045 1061 1046 extern int vmbus_teardown_gpadl(struct vmbus_channel *channel, 1062 1047 u32 gpadl_handle); 1048 + 1049 + void vmbus_reset_channel_cb(struct vmbus_channel *channel); 1063 1050 1064 1051 extern int vmbus_recvpacket(struct vmbus_channel *channel, 1065 1052 void *buffer,
+1 -1
include/linux/platform_device.h
··· 12 12 #define _PLATFORM_DEVICE_H_ 13 13 14 14 #include <linux/device.h> 15 - #include <linux/mod_devicetable.h> 16 15 17 16 #define PLATFORM_DEVID_NONE (-1) 18 17 #define PLATFORM_DEVID_AUTO (-2) 19 18 20 19 struct mfd_cell; 21 20 struct property_entry; 21 + struct platform_device_id; 22 22 23 23 struct platform_device { 24 24 const char *name;
+54 -6
include/linux/slimbus.h
··· 14 14 15 15 /** 16 16 * struct slim_eaddr - Enumeration address for a SLIMbus device 17 - * @manf_id: Manufacturer Id for the device 18 - * @prod_code: Product code 19 - * @dev_index: Device index 20 17 * @instance: Instance value 18 + * @dev_index: Device index 19 + * @prod_code: Product code 20 + * @manf_id: Manufacturer Id for the device 21 21 */ 22 22 struct slim_eaddr { 23 - u16 manf_id; 24 - u16 prod_code; 25 - u8 dev_index; 26 23 u8 instance; 24 + u8 dev_index; 25 + u16 prod_code; 26 + u16 manf_id; 27 27 } __packed; 28 28 29 29 /** ··· 48 48 * @ctrl: slim controller instance. 49 49 * @laddr: 1-byte Logical address of this device. 50 50 * @is_laddr_valid: indicates if the laddr is valid or not 51 + * @stream_list: List of streams on this device 52 + * @stream_list_lock: lock to protect the stream list 51 53 * 52 54 * This is the client/device handle returned when a SLIMbus 53 55 * device is registered with a controller. ··· 62 60 enum slim_device_status status; 63 61 u8 laddr; 64 62 bool is_laddr_valid; 63 + struct list_head stream_list; 64 + spinlock_t stream_list_lock; 65 65 }; 66 66 67 67 #define to_slim_device(d) container_of(d, struct slim_device, dev) ··· 112 108 struct completion *comp; 113 109 }; 114 110 111 + #define SLIM_DEVICE_MAX_CHANNELS 256 112 + /* A SLIMbus Device may have from 0 to 31 Ports (inclusive) */ 113 + #define SLIM_DEVICE_MAX_PORTS 32 114 + 115 + /** 116 + * struct slim_stream_config - SLIMbus stream configuration 117 + * Configuring a stream is done at hw_params or prepare call 118 + * from audio drivers where they have all the required information 119 + * regarding rate, number of channels and so on. 120 + * There is a 1:1 mapping of channels and ports.
121 + * 122 + * @rate: data rate 123 + * @bps: bits per data sample 124 + * @ch_count: number of channels 125 + * @chs: pointer to list of channel numbers 126 + * @port_mask: port mask of ports to use for this stream 127 + * @direction: direction of the stream, SNDRV_PCM_STREAM_PLAYBACK 128 + * or SNDRV_PCM_STREAM_CAPTURE. 129 + */ 130 + struct slim_stream_config { 131 + unsigned int rate; 132 + unsigned int bps; 133 + /* MAX 256 channels */ 134 + unsigned int ch_count; 135 + unsigned int *chs; 136 + /* Max 32 ports per device */ 137 + unsigned long port_mask; 138 + int direction; 139 + }; 140 + 115 141 /* 116 142 * use a macro to avoid include chaining to get THIS_MODULE 117 143 */ ··· 172 138 dev_set_drvdata(&dev->dev, data); 173 139 } 174 140 141 + struct slim_device *of_slim_get_device(struct slim_controller *ctrl, 142 + struct device_node *np); 175 143 struct slim_device *slim_get_device(struct slim_controller *ctrl, 176 144 struct slim_eaddr *e_addr); 177 145 int slim_get_logical_addr(struct slim_device *sbdev); ··· 197 161 int slim_writeb(struct slim_device *sdev, u32 addr, u8 value); 198 162 int slim_read(struct slim_device *sdev, u32 addr, size_t count, u8 *val); 199 163 int slim_write(struct slim_device *sdev, u32 addr, size_t count, u8 *val); 164 + 165 + /* SLIMbus Stream apis */ 166 + struct slim_stream_runtime; 167 + struct slim_stream_runtime *slim_stream_allocate(struct slim_device *dev, 168 + const char *sname); 169 + int slim_stream_prepare(struct slim_stream_runtime *stream, 170 + struct slim_stream_config *c); 171 + int slim_stream_enable(struct slim_stream_runtime *stream); 172 + int slim_stream_disable(struct slim_stream_runtime *stream); 173 + int slim_stream_unprepare(struct slim_stream_runtime *stream); 174 + int slim_stream_free(struct slim_stream_runtime *stream); 175 + 200 176 #endif /* _LINUX_SLIMBUS_H */
+150
include/trace/events/fsi_master_ast_cf.h
··· 1 + 2 + #undef TRACE_SYSTEM 3 + #define TRACE_SYSTEM fsi_master_ast_cf 4 + 5 + #if !defined(_TRACE_FSI_MASTER_ACF_H) || defined(TRACE_HEADER_MULTI_READ) 6 + #define _TRACE_FSI_MASTER_ACF_H 7 + 8 + #include <linux/tracepoint.h> 9 + 10 + TRACE_EVENT(fsi_master_acf_copro_command, 11 + TP_PROTO(const struct fsi_master_acf *master, uint32_t op), 12 + TP_ARGS(master, op), 13 + TP_STRUCT__entry( 14 + __field(int, master_idx) 15 + __field(uint32_t, op) 16 + ), 17 + TP_fast_assign( 18 + __entry->master_idx = master->master.idx; 19 + __entry->op = op; 20 + ), 21 + TP_printk("fsi-acf%d command %08x", 22 + __entry->master_idx, __entry->op 23 + ) 24 + ); 25 + 26 + TRACE_EVENT(fsi_master_acf_send_request, 27 + TP_PROTO(const struct fsi_master_acf *master, const struct fsi_msg *cmd, u8 rbits), 28 + TP_ARGS(master, cmd, rbits), 29 + TP_STRUCT__entry( 30 + __field(int, master_idx) 31 + __field(uint64_t, msg) 32 + __field(u8, bits) 33 + __field(u8, rbits) 34 + ), 35 + TP_fast_assign( 36 + __entry->master_idx = master->master.idx; 37 + __entry->msg = cmd->msg; 38 + __entry->bits = cmd->bits; 39 + __entry->rbits = rbits; 40 + ), 41 + TP_printk("fsi-acf%d cmd: %016llx/%d/%d", 42 + __entry->master_idx, (unsigned long long)__entry->msg, 43 + __entry->bits, __entry->rbits 44 + ) 45 + ); 46 + 47 + TRACE_EVENT(fsi_master_acf_copro_response, 48 + TP_PROTO(const struct fsi_master_acf *master, u8 rtag, u8 rcrc, __be32 rdata, bool crc_ok), 49 + TP_ARGS(master, rtag, rcrc, rdata, crc_ok), 50 + TP_STRUCT__entry( 51 + __field(int, master_idx) 52 + __field(u8, rtag) 53 + __field(u8, rcrc) 54 + __field(u32, rdata) 55 + __field(bool, crc_ok) 56 + ), 57 + TP_fast_assign( 58 + __entry->master_idx = master->master.idx; 59 + __entry->rtag = rtag; 60 + __entry->rcrc = rcrc; 61 + __entry->rdata = be32_to_cpu(rdata); 62 + __entry->crc_ok = crc_ok; 63 + ), 64 + TP_printk("fsi-acf%d rsp: tag=%04x crc=%04x data=%08x %c\n", 65 + __entry->master_idx, __entry->rtag, __entry->rcrc, 66 + __entry->rdata, 
__entry->crc_ok ? ' ' : '!' 67 + ) 68 + ); 69 + 70 + TRACE_EVENT(fsi_master_acf_crc_rsp_error, 71 + TP_PROTO(const struct fsi_master_acf *master, int retries), 72 + TP_ARGS(master, retries), 73 + TP_STRUCT__entry( 74 + __field(int, master_idx) 75 + __field(int, retries) 76 + ), 77 + TP_fast_assign( 78 + __entry->master_idx = master->master.idx; 79 + __entry->retries = retries; 80 + ), 81 + TP_printk("fsi-acf%d CRC error in response retry %d", 82 + __entry->master_idx, __entry->retries 83 + ) 84 + ); 85 + 86 + TRACE_EVENT(fsi_master_acf_poll_response_busy, 87 + TP_PROTO(const struct fsi_master_acf *master, int busy_count), 88 + TP_ARGS(master, busy_count), 89 + TP_STRUCT__entry( 90 + __field(int, master_idx) 91 + __field(int, busy_count) 92 + ), 93 + TP_fast_assign( 94 + __entry->master_idx = master->master.idx; 95 + __entry->busy_count = busy_count; 96 + ), 97 + TP_printk("fsi-acf%d: device reported busy %d times", 98 + __entry->master_idx, __entry->busy_count 99 + ) 100 + ); 101 + 102 + TRACE_EVENT(fsi_master_acf_cmd_abs_addr, 103 + TP_PROTO(const struct fsi_master_acf *master, u32 addr), 104 + TP_ARGS(master, addr), 105 + TP_STRUCT__entry( 106 + __field(int, master_idx) 107 + __field(u32, addr) 108 + ), 109 + TP_fast_assign( 110 + __entry->master_idx = master->master.idx; 111 + __entry->addr = addr; 112 + ), 113 + TP_printk("fsi-acf%d: Sending ABS_ADR %06x", 114 + __entry->master_idx, __entry->addr 115 + ) 116 + ); 117 + 118 + TRACE_EVENT(fsi_master_acf_cmd_rel_addr, 119 + TP_PROTO(const struct fsi_master_acf *master, u32 rel_addr), 120 + TP_ARGS(master, rel_addr), 121 + TP_STRUCT__entry( 122 + __field(int, master_idx) 123 + __field(u32, rel_addr) 124 + ), 125 + TP_fast_assign( 126 + __entry->master_idx = master->master.idx; 127 + __entry->rel_addr = rel_addr; 128 + ), 129 + TP_printk("fsi-acf%d: Sending REL_ADR %03x", 130 + __entry->master_idx, __entry->rel_addr 131 + ) 132 + ); 133 + 134 + TRACE_EVENT(fsi_master_acf_cmd_same_addr, 135 + TP_PROTO(const struct 
fsi_master_acf *master), 136 + TP_ARGS(master), 137 + TP_STRUCT__entry( 138 + __field(int, master_idx) 139 + ), 140 + TP_fast_assign( 141 + __entry->master_idx = master->master.idx; 142 + ), 143 + TP_printk("fsi-acf%d: Sending SAME_ADR", 144 + __entry->master_idx 145 + ) 146 + ); 147 + 148 + #endif /* _TRACE_FSI_MASTER_ACF_H */ 149 + 150 + #include <trace/define_trace.h>
+102
include/trace/events/fsi_master_gpio.h
··· 50 50 ) 51 51 ); 52 52 53 + TRACE_EVENT(fsi_master_gpio_clock_zeros, 54 + TP_PROTO(const struct fsi_master_gpio *master, int clocks), 55 + TP_ARGS(master, clocks), 56 + TP_STRUCT__entry( 57 + __field(int, master_idx) 58 + __field(int, clocks) 59 + ), 60 + TP_fast_assign( 61 + __entry->master_idx = master->master.idx; 62 + __entry->clocks = clocks; 63 + ), 64 + TP_printk("fsi-gpio%d clock %d zeros", 65 + __entry->master_idx, __entry->clocks 66 + ) 67 + ); 68 + 53 69 TRACE_EVENT(fsi_master_gpio_break, 54 70 TP_PROTO(const struct fsi_master_gpio *master), 55 71 TP_ARGS(master), ··· 78 62 TP_printk("fsi-gpio%d ----break---", 79 63 __entry->master_idx 80 64 ) 65 + ); 66 + 67 + TRACE_EVENT(fsi_master_gpio_crc_cmd_error, 68 + TP_PROTO(const struct fsi_master_gpio *master), 69 + TP_ARGS(master), 70 + TP_STRUCT__entry( 71 + __field(int, master_idx) 72 + ), 73 + TP_fast_assign( 74 + __entry->master_idx = master->master.idx; 75 + ), 76 + TP_printk("fsi-gpio%d ----CRC command retry---", 77 + __entry->master_idx 78 + ) 79 + ); 80 + 81 + TRACE_EVENT(fsi_master_gpio_crc_rsp_error, 82 + TP_PROTO(const struct fsi_master_gpio *master), 83 + TP_ARGS(master), 84 + TP_STRUCT__entry( 85 + __field(int, master_idx) 86 + ), 87 + TP_fast_assign( 88 + __entry->master_idx = master->master.idx; 89 + ), 90 + TP_printk("fsi-gpio%d ----CRC response---", 91 + __entry->master_idx 92 + ) 93 + ); 94 + 95 + TRACE_EVENT(fsi_master_gpio_poll_response_busy, 96 + TP_PROTO(const struct fsi_master_gpio *master, int busy), 97 + TP_ARGS(master, busy), 98 + TP_STRUCT__entry( 99 + __field(int, master_idx) 100 + __field(int, busy) 101 + ), 102 + TP_fast_assign( 103 + __entry->master_idx = master->master.idx; 104 + __entry->busy = busy; 105 + ), 106 + TP_printk("fsi-gpio%d: device reported busy %d times", 107 + __entry->master_idx, __entry->busy) 108 + ); 109 + 110 + TRACE_EVENT(fsi_master_gpio_cmd_abs_addr, 111 + TP_PROTO(const struct fsi_master_gpio *master, u32 addr), 112 + TP_ARGS(master, addr), 113 + 
TP_STRUCT__entry( 114 + __field(int, master_idx) 115 + __field(u32, addr) 116 + ), 117 + TP_fast_assign( 118 + __entry->master_idx = master->master.idx; 119 + __entry->addr = addr; 120 + ), 121 + TP_printk("fsi-gpio%d: Sending ABS_ADR %06x", 122 + __entry->master_idx, __entry->addr) 123 + ); 124 + 125 + TRACE_EVENT(fsi_master_gpio_cmd_rel_addr, 126 + TP_PROTO(const struct fsi_master_gpio *master, u32 rel_addr), 127 + TP_ARGS(master, rel_addr), 128 + TP_STRUCT__entry( 129 + __field(int, master_idx) 130 + __field(u32, rel_addr) 131 + ), 132 + TP_fast_assign( 133 + __entry->master_idx = master->master.idx; 134 + __entry->rel_addr = rel_addr; 135 + ), 136 + TP_printk("fsi-gpio%d: Sending REL_ADR %03x", 137 + __entry->master_idx, __entry->rel_addr) 138 + ); 139 + 140 + TRACE_EVENT(fsi_master_gpio_cmd_same_addr, 141 + TP_PROTO(const struct fsi_master_gpio *master), 142 + TP_ARGS(master), 143 + TP_STRUCT__entry( 144 + __field(int, master_idx) 145 + ), 146 + TP_fast_assign( 147 + __entry->master_idx = master->master.idx; 148 + ), 149 + TP_printk("fsi-gpio%d: Sending SAME_ADR", 150 + __entry->master_idx) 81 151 ); 82 152 83 153 #endif /* _TRACE_FSI_MASTER_GPIO_H */
+4 -4
include/uapi/linux/eventpoll.h
··· 42 42 #define EPOLLRDHUP (__force __poll_t)0x00002000 43 43 44 44 /* Set exclusive wakeup mode for the target file descriptor */ 45 - #define EPOLLEXCLUSIVE (__force __poll_t)(1U << 28) 45 + #define EPOLLEXCLUSIVE ((__force __poll_t)(1U << 28)) 46 46 47 47 /* 48 48 * Request the handling of system wakeup events so as to prevent system suspends ··· 54 54 * 55 55 * Requires CAP_BLOCK_SUSPEND 56 56 */ 57 - #define EPOLLWAKEUP (__force __poll_t)(1U << 29) 57 + #define EPOLLWAKEUP ((__force __poll_t)(1U << 29)) 58 58 59 59 /* Set the One Shot behaviour for the target file descriptor */ 60 - #define EPOLLONESHOT (__force __poll_t)(1U << 30) 60 + #define EPOLLONESHOT ((__force __poll_t)(1U << 30)) 61 61 62 62 /* Set the Edge Triggered behaviour for the target file descriptor */ 63 - #define EPOLLET (__force __poll_t)(1U << 31) 63 + #define EPOLLET ((__force __poll_t)(1U << 31)) 64 64 65 65 /* 66 66 * On x86-64 make the 64bit structure have the same alignment as the
+179
include/uapi/linux/fpga-dfl.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */ 2 + /* 3 + * Header File for FPGA DFL User API 4 + * 5 + * Copyright (C) 2017-2018 Intel Corporation, Inc. 6 + * 7 + * Authors: 8 + * Kang Luwei <luwei.kang@intel.com> 9 + * Zhang Yi <yi.z.zhang@intel.com> 10 + * Wu Hao <hao.wu@intel.com> 11 + * Xiao Guangrong <guangrong.xiao@linux.intel.com> 12 + */ 13 + 14 + #ifndef _UAPI_LINUX_FPGA_DFL_H 15 + #define _UAPI_LINUX_FPGA_DFL_H 16 + 17 + #include <linux/types.h> 18 + #include <linux/ioctl.h> 19 + 20 + #define DFL_FPGA_API_VERSION 0 21 + 22 + /* 23 + * The IOCTL interface for DFL based FPGA is designed for extensibility by 24 + * embedding the structure length (argsz) and flags into structures passed 25 + * between kernel and userspace. This design referenced the VFIO IOCTL 26 + * interface (include/uapi/linux/vfio.h). 27 + */ 28 + 29 + #define DFL_FPGA_MAGIC 0xB6 30 + 31 + #define DFL_FPGA_BASE 0 32 + #define DFL_PORT_BASE 0x40 33 + #define DFL_FME_BASE 0x80 34 + 35 + /* Common IOCTLs for both FME and AFU file descriptor */ 36 + 37 + /** 38 + * DFL_FPGA_GET_API_VERSION - _IO(DFL_FPGA_MAGIC, DFL_FPGA_BASE + 0) 39 + * 40 + * Report the version of the driver API. 41 + * Return: Driver API Version. 42 + */ 43 + 44 + #define DFL_FPGA_GET_API_VERSION _IO(DFL_FPGA_MAGIC, DFL_FPGA_BASE + 0) 45 + 46 + /** 47 + * DFL_FPGA_CHECK_EXTENSION - _IO(DFL_FPGA_MAGIC, DFL_FPGA_BASE + 1) 48 + * 49 + * Check whether an extension is supported. 50 + * Return: 0 if not supported, otherwise the extension is supported. 51 + */ 52 + 53 + #define DFL_FPGA_CHECK_EXTENSION _IO(DFL_FPGA_MAGIC, DFL_FPGA_BASE + 1) 54 + 55 + /* IOCTLs for AFU file descriptor */ 56 + 57 + /** 58 + * DFL_FPGA_PORT_RESET - _IO(DFL_FPGA_MAGIC, DFL_PORT_BASE + 0) 59 + * 60 + * Reset the FPGA Port and its AFU. No parameters are supported. 61 + * Userspace can do Port reset at any time, e.g. during DMA or PR. But 62 + * it should never cause any system level issue, only functional failure 63 + * (e.g. 
DMA or PR operation failure) and be recoverable from the failure. 64 + * Return: 0 on success, -errno of failure 65 + */ 66 + 67 + #define DFL_FPGA_PORT_RESET _IO(DFL_FPGA_MAGIC, DFL_PORT_BASE + 0) 68 + 69 + /** 70 + * DFL_FPGA_PORT_GET_INFO - _IOR(DFL_FPGA_MAGIC, DFL_PORT_BASE + 1, 71 + * struct dfl_fpga_port_info) 72 + * 73 + * Retrieve information about the fpga port. 74 + * Driver fills the info in provided struct dfl_fpga_port_info. 75 + * Return: 0 on success, -errno on failure. 76 + */ 77 + struct dfl_fpga_port_info { 78 + /* Input */ 79 + __u32 argsz; /* Structure length */ 80 + /* Output */ 81 + __u32 flags; /* Zero for now */ 82 + __u32 num_regions; /* The number of supported regions */ 83 + __u32 num_umsgs; /* The number of allocated umsgs */ 84 + }; 85 + 86 + #define DFL_FPGA_PORT_GET_INFO _IO(DFL_FPGA_MAGIC, DFL_PORT_BASE + 1) 87 + 88 + /** 89 + * FPGA_PORT_GET_REGION_INFO - _IOWR(FPGA_MAGIC, PORT_BASE + 2, 90 + * struct dfl_fpga_port_region_info) 91 + * 92 + * Retrieve information about a device memory region. 93 + * Caller provides struct dfl_fpga_port_region_info with index value set. 94 + * Driver returns the region info in other fields. 95 + * Return: 0 on success, -errno on failure. 
96 + */ 97 + struct dfl_fpga_port_region_info { 98 + /* input */ 99 + __u32 argsz; /* Structure length */ 100 + /* Output */ 101 + __u32 flags; /* Access permission */ 102 + #define DFL_PORT_REGION_READ (1 << 0) /* Region is readable */ 103 + #define DFL_PORT_REGION_WRITE (1 << 1) /* Region is writable */ 104 + #define DFL_PORT_REGION_MMAP (1 << 2) /* Can be mmaped to userspace */ 105 + /* Input */ 106 + __u32 index; /* Region index */ 107 + #define DFL_PORT_REGION_INDEX_AFU 0 /* AFU */ 108 + #define DFL_PORT_REGION_INDEX_STP 1 /* Signal Tap */ 109 + __u32 padding; 110 + /* Output */ 111 + __u64 size; /* Region size (bytes) */ 112 + __u64 offset; /* Region offset from start of device fd */ 113 + }; 114 + 115 + #define DFL_FPGA_PORT_GET_REGION_INFO _IO(DFL_FPGA_MAGIC, DFL_PORT_BASE + 2) 116 + 117 + /** 118 + * DFL_FPGA_PORT_DMA_MAP - _IOWR(DFL_FPGA_MAGIC, DFL_PORT_BASE + 3, 119 + * struct dfl_fpga_port_dma_map) 120 + * 121 + * Map the dma memory per user_addr and length which are provided by caller. 122 + * Driver fills the iova in provided struct afu_port_dma_map. 123 + * This interface only accepts page-size aligned user memory for dma mapping. 124 + * Return: 0 on success, -errno on failure. 125 + */ 126 + struct dfl_fpga_port_dma_map { 127 + /* Input */ 128 + __u32 argsz; /* Structure length */ 129 + __u32 flags; /* Zero for now */ 130 + __u64 user_addr; /* Process virtual address */ 131 + __u64 length; /* Length of mapping (bytes)*/ 132 + /* Output */ 133 + __u64 iova; /* IO virtual address */ 134 + }; 135 + 136 + #define DFL_FPGA_PORT_DMA_MAP _IO(DFL_FPGA_MAGIC, DFL_PORT_BASE + 3) 137 + 138 + /** 139 + * DFL_FPGA_PORT_DMA_UNMAP - _IOW(FPGA_MAGIC, PORT_BASE + 4, 140 + * struct dfl_fpga_port_dma_unmap) 141 + * 142 + * Unmap the dma memory per iova provided by caller. 143 + * Return: 0 on success, -errno on failure. 
144 + */ 145 + struct dfl_fpga_port_dma_unmap { 146 + /* Input */ 147 + __u32 argsz; /* Structure length */ 148 + __u32 flags; /* Zero for now */ 149 + __u64 iova; /* IO virtual address */ 150 + }; 151 + 152 + #define DFL_FPGA_PORT_DMA_UNMAP _IO(DFL_FPGA_MAGIC, DFL_PORT_BASE + 4) 153 + 154 + /* IOCTLs for FME file descriptor */ 155 + 156 + /** 157 + * DFL_FPGA_FME_PORT_PR - _IOW(DFL_FPGA_MAGIC, DFL_FME_BASE + 0, 158 + * struct dfl_fpga_fme_port_pr) 159 + * 160 + * Driver does Partial Reconfiguration based on Port ID and Buffer (Image) 161 + * provided by caller. 162 + * Return: 0 on success, -errno on failure. 163 + * If DFL_FPGA_FME_PORT_PR returns -EIO, that indicates the HW has detected 164 + * some errors during PR, under this case, the user can fetch HW error info 165 + * from the status of FME's fpga manager. 166 + */ 167 + 168 + struct dfl_fpga_fme_port_pr { 169 + /* Input */ 170 + __u32 argsz; /* Structure length */ 171 + __u32 flags; /* Zero for now */ 172 + __u32 port_id; 173 + __u32 buffer_size; 174 + __u64 buffer_address; /* Userspace address to the buffer for PR */ 175 + }; 176 + 177 + #define DFL_FPGA_FME_PORT_PR _IO(DFL_FPGA_MAGIC, DFL_FME_BASE + 0) 178 + 179 + #endif /* _UAPI_LINUX_FPGA_DFL_H */
+58
include/uapi/linux/fsi.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 + #ifndef _UAPI_LINUX_FSI_H 3 + #define _UAPI_LINUX_FSI_H 4 + 5 + #include <linux/types.h> 6 + #include <linux/ioctl.h> 7 + 8 + /* 9 + * /dev/scom "raw" ioctl interface 10 + * 11 + * The driver supports a high level "read/write" interface which 12 + * handles retries and converts the status to Linux error codes, 13 + * however low level tools and debuggers need to access the "raw" 14 + * HW status information and interpret it themselves, so this 15 + * ioctl interface is also provided for their use case. 16 + */ 17 + 18 + /* Structure for SCOM read/write */ 19 + struct scom_access { 20 + __u64 addr; /* SCOM address, supports indirect */ 21 + __u64 data; /* SCOM data (in for write, out for read) */ 22 + __u64 mask; /* Data mask for writes */ 23 + __u32 intf_errors; /* Interface error flags */ 24 + #define SCOM_INTF_ERR_PARITY 0x00000001 /* Parity error */ 25 + #define SCOM_INTF_ERR_PROTECTION 0x00000002 /* Blocked by secure boot */ 26 + #define SCOM_INTF_ERR_ABORT 0x00000004 /* PIB reset during access */ 27 + #define SCOM_INTF_ERR_UNKNOWN 0x80000000 /* Unknown error */ 28 + /* 29 + * Note: Any other bit set in intf_errors needs to be considered as an 30 + * error. Future implementations may define new error conditions. The 31 + * pib_status below is only valid if intf_errors is 0. 
32 + */ 33 + __u8 pib_status; /* 3-bit PIB status */ 34 + #define SCOM_PIB_SUCCESS 0 /* Access successful */ 35 + #define SCOM_PIB_BLOCKED 1 /* PIB blocked, pls retry */ 36 + #define SCOM_PIB_OFFLINE 2 /* Chiplet offline */ 37 + #define SCOM_PIB_PARTIAL 3 /* Partial good */ 38 + #define SCOM_PIB_BAD_ADDR 4 /* Invalid address */ 39 + #define SCOM_PIB_CLK_ERR 5 /* Clock error */ 40 + #define SCOM_PIB_PARITY_ERR 6 /* Parity error on the PIB bus */ 41 + #define SCOM_PIB_TIMEOUT 7 /* Bus timeout */ 42 + __u8 pad; 43 + }; 44 + 45 + /* Flags for SCOM check */ 46 + #define SCOM_CHECK_SUPPORTED 0x00000001 /* Interface supported */ 47 + #define SCOM_CHECK_PROTECTED 0x00000002 /* Interface blocked by secure boot */ 48 + 49 + /* Flags for SCOM reset */ 50 + #define SCOM_RESET_INTF 0x00000001 /* Reset interface */ 51 + #define SCOM_RESET_PIB 0x00000002 /* Reset PIB */ 52 + 53 + #define FSI_SCOM_CHECK _IOR('s', 0x00, __u32) 54 + #define FSI_SCOM_READ _IOWR('s', 0x01, struct scom_access) 55 + #define FSI_SCOM_WRITE _IOWR('s', 0x02, struct scom_access) 56 + #define FSI_SCOM_RESET _IOW('s', 0x03, __u32) 57 + 58 + #endif /* _UAPI_LINUX_FSI_H */
+36
lib/devres.c
··· 4 4 #include <linux/io.h> 5 5 #include <linux/gfp.h> 6 6 #include <linux/export.h> 7 + #include <linux/of_address.h> 7 8 8 9 enum devm_ioremap_type { 9 10 DEVM_IOREMAP = 0, ··· 162 161 return dest_ptr; 163 162 } 164 163 EXPORT_SYMBOL(devm_ioremap_resource); 164 + 165 + /* 166 + * devm_of_iomap - Requests a resource and maps the memory mapped IO 167 + * for a given device_node managed by a given device 168 + * 169 + * Checks that a resource is a valid memory region, requests the memory 170 + * region and ioremaps it. All operations are managed and will be undone 171 + * on driver detach of the device. 172 + * 173 + * This is to be used when a device requests/maps resources described 174 + * by other device tree nodes (children or otherwise). 175 + * 176 + * @dev: The device "managing" the resource 177 + * @node: The device-tree node where the resource resides 178 + * @index: index of the MMIO range in the "reg" property 179 + * @size: Returns the size of the resource (pass NULL if not needed) 180 + * Returns a pointer to the requested and mapped memory or an ERR_PTR() encoded 181 + * error code on failure. Usage example: 182 + * 183 + * base = devm_of_iomap(&pdev->dev, node, 0, NULL); 184 + * if (IS_ERR(base)) 185 + * return PTR_ERR(base); 186 + */ 187 + void __iomem *devm_of_iomap(struct device *dev, struct device_node *node, int index, 188 + resource_size_t *size) 189 + { 190 + struct resource res; 191 + 192 + if (of_address_to_resource(node, index, &res)) 193 + return IOMEM_ERR_PTR(-EINVAL); 194 + if (size) 195 + *size = resource_size(&res); 196 + return devm_ioremap_resource(dev, &res); 197 + } 198 + EXPORT_SYMBOL(devm_of_iomap); 165 199 166 200 #ifdef CONFIG_HAS_IOPORT_MAP 167 201 /*
+8 -6
scripts/ver_linux
··· 32 32 printversion("Nfs-utils", version("showmount --version")) 33 33 34 34 while (getline <"/proc/self/maps" > 0) { 35 - n = split($0, procmaps, "/") 36 - if (/libc.*so$/ && match(procmaps[n], /[0-9]+([.]?[0-9]+)+/)) { 37 - ver = substr(procmaps[n], RSTART, RLENGTH) 38 - printversion("Linux C Library", ver) 39 - break 35 + if (/libc.*\.so$/) { 36 + n = split($0, procmaps, "/") 37 + if (match(procmaps[n], /[0-9]+([.]?[0-9]+)+/)) { 38 + ver = substr(procmaps[n], RSTART, RLENGTH) 39 + printversion("Linux C Library", ver) 40 + break 41 + } 40 42 } 41 43 } 42 44 ··· 70 68 function version(cmd, ver) { 71 69 cmd = cmd " 2>&1" 72 70 while (cmd | getline > 0) { 73 - if (!/ver_linux/ && match($0, /[0-9]+([.]?[0-9]+)+/)) { 71 + if (match($0, /[0-9]+([.]?[0-9]+)+/)) { 74 72 ver = substr($0, RSTART, RLENGTH) 75 73 break 76 74 }
+61 -4
tools/hv/hv_vss_daemon.c
··· 36 36 #include <linux/hyperv.h> 37 37 #include <syslog.h> 38 38 #include <getopt.h> 39 + #include <stdbool.h> 40 + #include <dirent.h> 39 41 40 42 /* Don't use syslog() in the function since that can cause write to disk */ 41 43 static int vss_do_freeze(char *dir, unsigned int cmd) ··· 70 68 return !!ret; 71 69 } 72 70 71 + static bool is_dev_loop(const char *blkname) 72 + { 73 + char *buffer; 74 + DIR *dir; 75 + struct dirent *entry; 76 + bool ret = false; 77 + 78 + buffer = malloc(PATH_MAX); 79 + if (!buffer) { 80 + syslog(LOG_ERR, "Can't allocate memory!"); 81 + exit(1); 82 + } 83 + 84 + snprintf(buffer, PATH_MAX, "%s/loop", blkname); 85 + if (!access(buffer, R_OK | X_OK)) { 86 + ret = true; 87 + goto free_buffer; 88 + } else if (errno != ENOENT) { 89 + syslog(LOG_ERR, "Can't access: %s; error:%d %s!", 90 + buffer, errno, strerror(errno)); 91 + } 92 + 93 + snprintf(buffer, PATH_MAX, "%s/slaves", blkname); 94 + dir = opendir(buffer); 95 + if (!dir) { 96 + if (errno != ENOENT) 97 + syslog(LOG_ERR, "Can't opendir: %s; error:%d %s!", 98 + buffer, errno, strerror(errno)); 99 + goto free_buffer; 100 + } 101 + 102 + while ((entry = readdir(dir)) != NULL) { 103 + if (strcmp(entry->d_name, ".") == 0 || 104 + strcmp(entry->d_name, "..") == 0) 105 + continue; 106 + 107 + snprintf(buffer, PATH_MAX, "%s/slaves/%s", blkname, 108 + entry->d_name); 109 + if (is_dev_loop(buffer)) { 110 + ret = true; 111 + break; 112 + } 113 + } 114 + closedir(dir); 115 + free_buffer: 116 + free(buffer); 117 + return ret; 118 + } 119 + 73 120 static int vss_operate(int operation) 74 121 { 75 122 char match[] = "/dev/"; ··· 126 75 struct mntent *ent; 127 76 struct stat sb; 128 77 char errdir[1024] = {0}; 78 + char blkdir[23]; /* /sys/dev/block/XXX:XXX */ 129 79 unsigned int cmd; 130 80 int error = 0, root_seen = 0, save_errno = 0; 131 81 ··· 148 96 while ((ent = getmntent(mounts))) { 149 97 if (strncmp(ent->mnt_fsname, match, strlen(match))) 150 98 continue; 151 - if (stat(ent->mnt_fsname, 
&sb) == -1) 152 - continue; 153 - if (S_ISBLK(sb.st_mode) && major(sb.st_rdev) == LOOP_MAJOR) 154 - continue; 99 + if (stat(ent->mnt_fsname, &sb)) { 100 + syslog(LOG_ERR, "Can't stat: %s; error:%d %s!", 101 + ent->mnt_fsname, errno, strerror(errno)); 102 + } else { 103 + sprintf(blkdir, "/sys/dev/block/%d:%d", 104 + major(sb.st_rdev), minor(sb.st_rdev)); 105 + if (is_dev_loop(blkdir)) 106 + continue; 107 + } 155 108 if (hasmntopt(ent, MNTOPT_RO) != NULL) 156 109 continue; 157 110 if (strcmp(ent->mnt_type, "vfat") == 0)
+6 -6
tools/hv/lsvmbus
··· 17 17 18 18 vmbus_sys_path = '/sys/bus/vmbus/devices' 19 19 if not os.path.isdir(vmbus_sys_path): 20 - print "%s doesn't exist: exiting..." % vmbus_sys_path 20 + print("%s doesn't exist: exiting..." % vmbus_sys_path) 21 21 exit(-1) 22 22 23 23 vmbus_dev_dict = { ··· 93 93 94 94 for d in vmbus_dev_list: 95 95 if verbose == 0: 96 - print ('VMBUS ID ' + format0) % (d.vmbus_id, d.dev_desc) 96 + print(('VMBUS ID ' + format0) % (d.vmbus_id, d.dev_desc)) 97 97 elif verbose == 1: 98 - print ('VMBUS ID ' + format1) % \ 99 - (d.vmbus_id, d.class_id, d.dev_desc, d.chn_vp_mapping) 98 + print (('VMBUS ID ' + format1) % \ 99 + (d.vmbus_id, d.class_id, d.dev_desc, d.chn_vp_mapping)) 100 100 else: 101 - print ('VMBUS ID ' + format2) % \ 101 + print (('VMBUS ID ' + format2) % \ 102 102 (d.vmbus_id, d.class_id, d.dev_desc, \ 103 - d.device_id, d.sysfs_path, d.chn_vp_mapping) 103 + d.device_id, d.sysfs_path, d.chn_vp_mapping))