Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
"Here is the big char/misc driver pull request for 5.4-rc1.

As has been happening in previous releases, more and more individual
driver subsystem trees are ending up in here. Now if that is good or
bad I can't tell, but hopefully it makes your life easier as it's more
of an aggregation of trees together to one merge point for you.

Anyway, lots of stuff in here:
- habanalabs driver updates
- thunderbolt driver updates
- misc driver updates
- coresight and intel_th hwtracing driver updates
- fpga driver updates
- extcon driver updates
- some dma driver updates
- char driver updates
- android binder driver updates
- nvmem driver updates
- phy driver updates
- parport driver fixes
- pcmcia driver fix
- uio driver updates
- w1 driver updates
- configfs fixes
- other assorted driver updates

All of these have been in linux-next for a long time with no reported
issues"

* tag 'char-misc-5.4-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (200 commits)
misc: mic: Use PTR_ERR_OR_ZERO rather than its implementation
habanalabs: correctly cast variable to __le32
habanalabs: show correct id in error print
habanalabs: stop using the acronym KMD
habanalabs: display card name as sensors header
habanalabs: add uapi to retrieve aggregate H/W events
habanalabs: add uapi to retrieve device utilization
habanalabs: Make the Coresight timestamp perpetual
habanalabs: explicitly set the queue-id enumerated numbers
habanalabs: print to kernel log when reset is finished
habanalabs: replace __le32_to_cpu with le32_to_cpu
habanalabs: replace __cpu_to_le32/64 with cpu_to_le32/64
habanalabs: Handle HW_IP_INFO if device disabled or in reset
habanalabs: Expose devices after initialization is done
habanalabs: improve security in Debug IOCTL
habanalabs: use default structure for user input in Debug IOCTL
habanalabs: Add descriptive name to PSOC app status register
habanalabs: Add descriptive names to PSOC scratch-pad registers
habanalabs: create two char devices per ASIC
habanalabs: change device_setup_cdev() to be more generic
...

+10377 -2312
+2 -1
Documentation/ABI/testing/sysfs-bus-intel_th-devices-msc
··· 12 12 - "single", for contiguous buffer mode (high-order alloc); 13 13 - "multi", for multiblock mode; 14 14 - "ExI", for DCI handler mode; 15 - - "debug", for debug mode. 15 + - "debug", for debug mode; 16 + - any of the currently loaded buffer sinks. 16 17 If operating mode changes, existing buffer is deallocated, 17 18 provided there are no active users and tracing is not enabled, 18 19 otherwise the write will fail.
Documentation/ABI/testing/sysfs-class-mic.txt → Documentation/ABI/testing/sysfs-class-mic
+128
Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
···
+Intel Stratix10 Remote System Update (RSU) device attributes
+
+What:		/sys/devices/platform/stratix10-rsu.0/current_image
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) the address in flash of currently running image.
+
+What:		/sys/devices/platform/stratix10-rsu.0/fail_image
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) the address in flash of failed image.
+
+What:		/sys/devices/platform/stratix10-rsu.0/state
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) the state of RSU system.
+		The state field has two parts: major error code in
+		upper 16 bits and minor error code in lower 16 bits.
+
+		b[15:0]
+			Currently used only when major error is 0xF006
+			(CPU watchdog timeout), in which case the minor
+			error code is the value reported by CPU to
+			firmware through the RSU notify command before
+			the watchdog timeout occurs.
+
+		b[31:16]
+			0xF001	bitstream error
+			0xF002	hardware access failure
+			0xF003	bitstream corruption
+			0xF004	internal error
+			0xF005	device error
+			0xF006	CPU watchdog timeout
+			0xF007	internal unknown error
+
+What:		/sys/devices/platform/stratix10-rsu.0/version
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) the version number of RSU firmware. 19.3 or later
+		versions include information about the firmware which
+		reported the error.
+
+		pre 19.3:
+		b[31:0]
+			0x0	version number
+
+		19.3 or later:
+		b[15:0]
+			0x1	version number
+		b[31:16]
+			0x0	no error
+			0x0DCF	Decision CMF error
+			0x0ACF	Application CMF error
+
+What:		/sys/devices/platform/stratix10-rsu.0/error_location
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) the error offset inside the image that failed.
+
+What:		/sys/devices/platform/stratix10-rsu.0/error_details
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) error code.
+
+What:		/sys/devices/platform/stratix10-rsu.0/retry_counter
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(RO) the current image's retry counter, which is used by
+		the user to know how many times the image is still allowed
+		to reload itself before giving up and starting the RSU
+		fail-over flow.
+
+What:		/sys/devices/platform/stratix10-rsu.0/reboot_image
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(WO) the address in flash of image to be loaded on next
+		reboot command.
+
+What:		/sys/devices/platform/stratix10-rsu.0/notify
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Richard Gong <richard.gong@linux.intel.com>
+Description:
+		(WO) client to notify firmware with different actions.
+
+		b[15:0]
+			inform firmware of the current software execution
+			stage.
+			0	the first stage bootloader didn't run or
+				didn't reach the point of launching second
+				stage bootloader.
+			1	failed in second bootloader or didn't get
+				to the point of launching the operating
+				system.
+			2	both first and second stage bootloader ran
+				and the operating system launch was
+				attempted.
+
+		b[16]
+			1	firmware to reset current image retry
+				counter.
+			0	no action.
+
+		b[17]
+			1	firmware to clear RSU log
+			0	no action.
+
+		b[18]
+			this is negative logic
+			1	no action
+			0	firmware record the notify code defined
+				in b[15:0].
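The state/notify bit layouts documented above are easy to get wrong when scripting. As an illustration, a small Python sketch (helper names are ours; the field layout and error codes are taken directly from the attribute descriptions above):

```python
# Decode/encode helpers for the stratix10-rsu "state" and "notify"
# sysfs attributes, per the documented bit layout.
RSU_MAJOR_ERRORS = {
    0xF001: "bitstream error",
    0xF002: "hardware access failure",
    0xF003: "bitstream corruption",
    0xF004: "internal error",
    0xF005: "device error",
    0xF006: "CPU watchdog timeout",
    0xF007: "internal unknown error",
}


def decode_rsu_state(state: int) -> tuple[str, int]:
    """Split the 32-bit state value: major error in b[31:16], minor in b[15:0]."""
    major = (state >> 16) & 0xFFFF
    minor = state & 0xFFFF
    return RSU_MAJOR_ERRORS.get(major, "no error"), minor


def encode_rsu_notify(stage: int, reset_retry: bool = False,
                      clear_log: bool = False, no_record: bool = False) -> int:
    """Compose a value for the write-only notify attribute.

    b[15:0] execution stage, b[16] reset image retry counter,
    b[17] clear RSU log, b[18] negative logic: 1 = do not record
    the notify code in b[15:0].
    """
    value = stage & 0xFFFF
    value |= (1 << 16) if reset_retry else 0
    value |= (1 << 17) if clear_log else 0
    value |= (1 << 18) if no_record else 0
    return value
```

A watchdog-timeout state of 0xF0060005, for example, decodes to the major error "CPU watchdog timeout" with minor code 5.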
+3 -9
Documentation/ABI/testing/sysfs-driver-habanalabs
···
 Contact:	oded.gabbay@gmail.com
 Description:	Allows the user to set the maximum clock frequency for MME, TPC
 		and IC when the power management profile is set to "automatic".
+		This property is valid only for the Goya ASIC family
 
 What:		/sys/class/habanalabs/hl<n>/ic_clk
 Date:		Jan 2019
···
 		the max clock frequency to a low value when there are no user
 		processes that are opened on the device's file. In "manual"
 		mode, the user sets the maximum clock frequency by writing to
-		ic_clk, mme_clk and tpc_clk
-
+		ic_clk, mme_clk and tpc_clk. This property is valid only for
+		the Goya ASIC family
 
 What:		/sys/class/habanalabs/hl<n>/preboot_btl_ver
 Date:		Jan 2019
···
 KernelVersion:	5.1
 Contact:	oded.gabbay@gmail.com
 Description:	Version of the u-boot running on the device's CPU
-
-What:		/sys/class/habanalabs/hl<n>/write_open_cnt
-Date:		Jan 2019
-KernelVersion:	5.1
-Contact:	oded.gabbay@gmail.com
-Description:	Displays the total number of user processes that are currently
-		opened on the device's file
+85
Documentation/ABI/testing/sysfs-platform-dfl-fme
···
 Description:	Read-only. It returns Bitstream (static FPGA region) meta
 		data, which includes the synthesis date, seed and other
 		information of this static FPGA region.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/cache_size
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. It returns cache size of this FPGA device.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/fabric_version
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. It returns fabric version of this FPGA device.
+		Userspace applications need this information to select
+		best data channels per different fabric design.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/socket_id
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. It returns socket_id to indicate which socket
+		this FPGA belongs to, only valid for integrated solution.
+		User only needs this information, in case standard numa node
+		can't provide correct information.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/pcie0_errors
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-Write. Read this file for errors detected on pcie0 link.
+		Write this file to clear errors logged in pcie0_errors. Write
+		fails with -EINVAL if input parsing fails or input error code
+		doesn't match.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/pcie1_errors
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-Write. Read this file for errors detected on pcie1 link.
+		Write this file to clear errors logged in pcie1_errors. Write
+		fails with -EINVAL if input parsing fails or input error code
+		doesn't match.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/nonfatal_errors
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. It returns non-fatal errors detected.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/catfatal_errors
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. It returns catastrophic and fatal errors detected.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/inject_errors
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-Write. Read this file to check errors injected. Write this
+		file to inject errors for testing purposes. Write fails with
+		-EINVAL if input parsing fails or input inject error code isn't
+		supported.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/fme_errors
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-Write. Read this file to get errors detected on FME.
+		Write this file to clear errors logged in fme_errors. Write
+		fails with -EINVAL if input parsing fails or input error code
+		doesn't match.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/first_error
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. Read this file to get the first error detected by
+		hardware.
+
+What:		/sys/bus/platform/devices/dfl-fme.0/errors/next_error
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. Read this file to get the second error detected by
+		hardware.
+85
Documentation/ABI/testing/sysfs-platform-dfl-port
···
 		Accelerator Function Unit (AFU) for different functions. It
 		returns uuid which could be used to identify which PR bitstream
 		is programmed in this AFU.
+
+What:		/sys/bus/platform/devices/dfl-port.0/power_state
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. It reports the APx (AFU Power) state, different APx
+		means different throttling level. When reading this file, it
+		returns "0" - Normal / "1" - AP1 / "2" - AP2 / "6" - AP6.
+
+What:		/sys/bus/platform/devices/dfl-port.0/ap1_event
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-write. Read this file for AP1 (AFU Power State 1) event.
+		It's used to indicate transient AP1 state. Write 1 to this
+		file to clear AP1 event.
+
+What:		/sys/bus/platform/devices/dfl-port.0/ap2_event
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-write. Read this file for AP2 (AFU Power State 2) event.
+		It's used to indicate transient AP2 state. Write 1 to this
+		file to clear AP2 event.
+
+What:		/sys/bus/platform/devices/dfl-port.0/ltr
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-write. Read or set AFU latency tolerance reporting value.
+		Set ltr to 1 if the AFU can tolerate latency >= 40us or set it
+		to 0 if it is latency sensitive.
+
+What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqcmd
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Write-only. User writes command to this interface to set
+		userclock to AFU.
+
+What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqsts
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. Read this file to get the status of issued command
+		to userclk_freqcmd.
+
+What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqcntrcmd
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Write-only. User writes command to this interface to set
+		userclock counter.
+
+What:		/sys/bus/platform/devices/dfl-port.0/userclk_freqcntrsts
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. Read this file to get the status of issued command
+		to userclk_freqcntrcmd.
+
+What:		/sys/bus/platform/devices/dfl-port.0/errors/errors
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-Write. Read this file to get errors detected on port and
+		Accelerated Function Unit (AFU). Write error code to this file
+		to clear errors. Write fails with -EINVAL if input parsing
+		fails or input error code doesn't match. Write fails with
+		-EBUSY or -ETIMEDOUT if the error can't be cleared because the
+		hardware is in a low power state (-EBUSY) or not responding
+		(-ETIMEDOUT).
+
+What:		/sys/bus/platform/devices/dfl-port.0/errors/first_error
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. Read this file to get the first error detected by
+		hardware.
+
+What:		/sys/bus/platform/devices/dfl-port.0/errors/first_malformed_req
+Date:		August 2019
+KernelVersion:	5.4
+Contact:	Wu Hao <hao.wu@intel.com>
+Description:	Read-only. Read this file to get the first malformed request
+		captured by hardware.
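For scripts polling power_state, the raw value can be mapped to the documented AP levels. A minimal Python sketch (the helper name is ours; the value set is exactly the one documented for power_state above — higher AP levels mean heavier throttling):

```python
# Map the raw string read from dfl-port.0/power_state to the
# documented AFU power-state names: "0" Normal, "1" AP1, "2" AP2, "6" AP6.
AFU_POWER_STATES = {0: "Normal", 1: "AP1", 2: "AP2", 6: "AP6"}


def afu_power_state(raw: str) -> str:
    """Translate a power_state sysfs read into a human-readable name."""
    value = int(raw.strip())
    return AFU_POWER_STATES.get(value, f"unknown ({value})")
```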
+3 -1
Documentation/devicetree/bindings/arm/freescale/fsl,scu.txt
···
 OCOTP bindings based on SCU Message Protocol
 ------------------------------------------------------------
 Required properties:
-- compatible:		Should be "fsl,imx8qxp-scu-ocotp"
+- compatible:		Should be one of:
+			"fsl,imx8qm-scu-ocotp",
+			"fsl,imx8qxp-scu-ocotp".
 - #address-cells:	Must be 1. Contains byte index
 - #size-cells:		Must be 1. Contains byte length
 
+1 -1
Documentation/devicetree/bindings/extcon/extcon-arizona.txt
···
 		1 2 1 /* MICDET2 MICBIAS2 GPIO=high */
 	>;
 
-	wlf,gpsw = <0>;
+	wlf,gpsw = <ARIZONA_GPSW_OPEN>;
 };
+3 -1
Documentation/devicetree/bindings/extcon/extcon-fsa9480.txt
···
 microphone, and UART data to use a common connector port.
 
 Required properties:
-- compatible	: Must be "fcs,fsa9480"
+- compatible	: Must be one of
+		  "fcs,fsa9480"
+		  "fcs,fsa880"
 - reg		: Specifies i2c slave address. Must be 0x25.
 - interrupts	: Should contain one entry specifying interrupt signal of
 		  interrupt parent to which interrupt pin of the chip is connected.
+1 -4
Documentation/devicetree/bindings/fpga/altera-fpga2sdram-bridge.txt
···
 Required properties:
 - compatible	: Should contain "altr,socfpga-fpga2sdram-bridge"
 
-Optional properties:
-- bridge-enable	: 0 if driver should disable bridge at startup
-		  1 if driver should enable bridge at startup
-		  Default is to leave bridge in current state.
+See Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.
 
 Example:
 	fpga_bridge3: fpga-bridge@ffc25080 {
+1 -4
Documentation/devicetree/bindings/fpga/altera-freeze-bridge.txt
···
 - compatible	: Should contain "altr,freeze-bridge-controller"
 - regs		: base address and size for freeze bridge module
 
-Optional properties:
-- bridge-enable	: 0 if driver should disable bridge at startup
-		  1 if driver should enable bridge at startup
-		  Default is to leave bridge in current state.
+See Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.
 
 Example:
 	freeze-controller@100000450 {
+1 -4
Documentation/devicetree/bindings/fpga/altera-hps2fpga-bridge.txt
···
 - resets	: Phandle and reset specifier for this bridge's reset
 - clocks	: Clocks used by this module.
 
-Optional properties:
-- bridge-enable	: 0 if driver should disable bridge at startup.
-		  1 if driver should enable bridge at startup.
-		  Default is to leave bridge in its current state.
+See Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.
 
 Example:
 	fpga_bridge0: fpga-bridge@ff400000 {
+13
Documentation/devicetree/bindings/fpga/fpga-bridge.txt
···
+FPGA Bridge Device Tree Binding
+
+Optional properties:
+- bridge-enable	: 0 if driver should disable bridge at startup
+		  1 if driver should enable bridge at startup
+		  Default is to leave bridge in current state.
+
+Example:
+	fpga_bridge3: fpga-bridge@ffc25080 {
+		compatible = "altr,socfpga-fpga2sdram-bridge";
+		reg = <0xffc25080 0x4>;
+		bridge-enable = <0>;
+	};
+2 -6
Documentation/devicetree/bindings/fpga/xilinx-pr-decoupler.txt
···
 - clocks	: input clock to IP
 - clock-names	: should contain "aclk"
 
-Optional properties:
-- bridge-enable	: 0 if driver should disable bridge at startup
-		  1 if driver should enable bridge at startup
-		  Default is to leave bridge in current state.
-
-See Documentation/devicetree/bindings/fpga/fpga-region.txt for generic bindings.
+See Documentation/devicetree/bindings/fpga/fpga-region.txt and
+Documentation/devicetree/bindings/fpga/fpga-bridge.txt for generic bindings.
 
 Example:
 	fpga-bridge@100000450 {
+45
Documentation/devicetree/bindings/interconnect/qcom,qcs404.txt
···
+Qualcomm QCS404 Network-On-Chip interconnect driver binding
+-----------------------------------------------------------
+
+Required properties :
+- compatible : shall contain only one of the following:
+			"qcom,qcs404-bimc"
+			"qcom,qcs404-pcnoc"
+			"qcom,qcs404-snoc"
+- #interconnect-cells : should contain 1
+
+reg : specifies the physical base address and size of registers
+clocks : list of phandles and specifiers to all interconnect bus clocks
+clock-names : clock names should include both "bus" and "bus_a"
+
+Example:
+
+soc {
+	...
+	bimc: interconnect@400000 {
+		reg = <0x00400000 0x80000>;
+		compatible = "qcom,qcs404-bimc";
+		#interconnect-cells = <1>;
+		clock-names = "bus", "bus_a";
+		clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
+			<&rpmcc RPM_SMD_BIMC_A_CLK>;
+	};
+
+	pnoc: interconnect@500000 {
+		reg = <0x00500000 0x15080>;
+		compatible = "qcom,qcs404-pcnoc";
+		#interconnect-cells = <1>;
+		clock-names = "bus", "bus_a";
+		clocks = <&rpmcc RPM_SMD_PNOC_CLK>,
+			<&rpmcc RPM_SMD_PNOC_A_CLK>;
+	};
+
+	snoc: interconnect@580000 {
+		reg = <0x00580000 0x23080>;
+		compatible = "qcom,qcs404-snoc";
+		#interconnect-cells = <1>;
+		clock-names = "bus", "bus_a";
+		clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+			<&rpmcc RPM_SMD_SNOC_A_CLK>;
+	};
+};
+2 -1
Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
···
 
 This binding represents the on-chip eFuse OTP controller found on
 i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ, i.MX6SLL,
-i.MX7D/S, i.MX7ULP and i.MX8MQ SoCs.
+i.MX7D/S, i.MX7ULP, i.MX8MQ, i.MX8MM and i.MX8MN SoCs.
 
 Required properties:
 - compatible: should be one of
···
 	"fsl,imx7ulp-ocotp" (i.MX7ULP),
 	"fsl,imx8mq-ocotp" (i.MX8MQ),
 	"fsl,imx8mm-ocotp" (i.MX8MM),
+	"fsl,imx8mn-ocotp" (i.MX8MN),
 	followed by "syscon".
 - #address-cells : Should be 1
 - #size-cells : Should be 1
+8
Documentation/devicetree/bindings/pci/pci-armada8k.txt
···
 	  name must be "core" for the first clock and "reg" for the second
 	  one
 
+Optional properties:
+- phys: phandle(s) to PHY node(s) following the generic PHY bindings.
+	Either 1, 2 or 4 PHYs might be needed depending on the number of
+	PCIe lanes.
+- phy-names: names of the PHYs corresponding to the number of lanes.
+	Must be "cp0-pcie0-x4-lane0-phy", "cp0-pcie0-x4-lane1-phy" for
+	2 PHYs.
+
 Example:
 
 	pcie@f2600000 {
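As a sketch of the optional properties in use (the comphy phandle labels are invented for illustration, and the required properties from the binding's existing example are elided):

```dts
pcie@f2600000 {
	/* ...required properties as in the Example above... */

	/* One PHY per lane; the cp0_comphy* labels are hypothetical. */
	phys = <&cp0_comphy0 0>, <&cp0_comphy1 0>;
	phy-names = "cp0-pcie0-x4-lane0-phy", "cp0-pcie0-x4-lane1-phy";
};
```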
+95
Documentation/devicetree/bindings/phy/lantiq,vrx200-pcie-phy.yaml
···
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/phy/lantiq,vrx200-pcie-phy.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Lantiq VRX200 and ARX300 PCIe PHY Device Tree Bindings
+
+maintainers:
+  - Martin Blumenstingl <martin.blumenstingl@googlemail.com>
+
+properties:
+  "#phy-cells":
+    const: 1
+    description: selects the PHY mode as defined in <dt-bindings/phy/phy-lantiq-vrx200-pcie.h>
+
+  compatible:
+    enum:
+      - lantiq,vrx200-pcie-phy
+      - lantiq,arx300-pcie-phy
+
+  reg:
+    maxItems: 1
+
+  clocks:
+    items:
+      - description: PHY module clock
+      - description: PDI register clock
+
+  clock-names:
+    items:
+      - const: phy
+      - const: pdi
+
+  resets:
+    items:
+      - description: exclusive PHY reset line
+      - description: shared reset line between the PCIe PHY and PCIe controller
+
+  reset-names:
+    items:
+      - const: phy
+      - const: pcie
+
+  lantiq,rcu:
+    $ref: /schemas/types.yaml#/definitions/phandle
+    description: phandle to the RCU syscon
+
+  lantiq,rcu-endian-offset:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: the offset of the endian registers for this PHY instance in the RCU syscon
+
+  lantiq,rcu-big-endian-mask:
+    $ref: /schemas/types.yaml#/definitions/uint32
+    description: the mask to set the PDI (PHY) registers for this PHY instance to big endian
+
+  big-endian:
+    description: Configures the PDI (PHY) registers in big-endian mode
+    type: boolean
+
+  little-endian:
+    description: Configures the PDI (PHY) registers in little-endian mode
+    type: boolean
+
+required:
+  - "#phy-cells"
+  - compatible
+  - reg
+  - clocks
+  - clock-names
+  - resets
+  - reset-names
+  - lantiq,rcu
+  - lantiq,rcu-endian-offset
+  - lantiq,rcu-big-endian-mask
+
+additionalProperties: false
+
+examples:
+  - |
+    pcie0_phy: phy@106800 {
+        compatible = "lantiq,vrx200-pcie-phy";
+        reg = <0x106800 0x100>;
+        lantiq,rcu = <&rcu0>;
+        lantiq,rcu-endian-offset = <0x4c>;
+        lantiq,rcu-big-endian-mask = <0x80>; /* bit 7 */
+        big-endian;
+        clocks = <&pmu 32>, <&pmu 36>;
+        clock-names = "phy", "pdi";
+        resets = <&reset0 12 24>, <&reset0 22 22>;
+        reset-names = "phy", "pcie";
+        #phy-cells = <1>;
+    };
+
+...
+10
Documentation/devicetree/bindings/phy/phy-mvebu-comphy.txt
···
 - #address-cells: should be 1.
 - #size-cells: should be 0.
 
+Optional properties:
+
+- clocks: pointers to the reference clocks for this device (CP110 only),
+  consequently: MG clock, MG Core clock, AXI clock.
+- clock-names: names of used clocks for CP110 only, must be :
+  "mg_clk", "mg_core_clk" and "axi_clk".
+
 A sub-node is required for each comphy lane provided by the comphy.
 
 Required properties (child nodes):
···
 	compatible = "marvell,comphy-cp110";
 	reg = <0x120000 0x6000>;
 	marvell,system-controller = <&cpm_syscon0>;
+	clocks = <&CP110_LABEL(clk) 1 5>, <&CP110_LABEL(clk) 1 6>,
+		 <&CP110_LABEL(clk) 1 18>;
+	clock-names = "mg_clk", "mg_core_clk", "axi_clk";
 	#address-cells = <1>;
 	#size-cells = <0>;
 
+7
Documentation/driver-api/uio-howto.rst
···
 internal registers to create the kernel part of the driver. All you need
 to know is the irq number of the pin the chip is connected to.
 
+When used in a device-tree enabled system, the driver needs to be
+probed with the ``"of_id"`` module parameter set to the ``"compatible"``
+string of the node the driver is supposed to handle. By default, the
+node's name (without the unit address) is exposed as name for the
+UIO device in userspace. To set a custom name, a property named
+``"linux,uio-name"`` may be specified in the DT node.
+
 Using uio_dmem_genirq for platform devices
 ------------------------------------------
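As a sketch of that flow (the node, compatible string, and addresses below are invented for illustration), the driver could be loaded with ``modprobe uio_pdrv_genirq of_id="acme,foo-dev"`` against a node like:

```dts
/* Hypothetical DT node handled by uio_pdrv_genirq when probed with
 * of_id="acme,foo-dev". Without linux,uio-name, userspace would see
 * the node name "foo-controller" as the UIO device name; the property
 * below overrides it with "foo-uio". */
foo-controller@43c00000 {
	compatible = "acme,foo-dev";
	reg = <0x43c00000 0x1000>;
	interrupt-parent = <&intc>;
	interrupts = <0 29 4>;
	linux,uio-name = "foo-uio";
};
```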
+105
Documentation/fpga/dfl.rst
···
 - Get driver API version (DFL_FPGA_GET_API_VERSION)
 - Check for extensions (DFL_FPGA_CHECK_EXTENSION)
 - Program bitstream (DFL_FPGA_FME_PORT_PR)
+- Assign port to PF (DFL_FPGA_FME_PORT_ASSIGN)
+- Release port from PF (DFL_FPGA_FME_PORT_RELEASE)
 
 More functions are exposed through sysfs
 (/sys/class/fpga_region/regionX/dfl-fme.n/):
···
 Read number of ports (ports_num)
     one FPGA device may have more than one port, this sysfs interface indicates
     how many ports the FPGA device has.
+
+Global error reporting management (errors/)
+    error reporting sysfs interfaces allow user to read errors detected by the
+    hardware, and clear the logged errors.
 
 
 FIU - PORT
···
 Read Accelerator GUID (afu_id)
     afu_id indicates which PR bitstream is programmed to this AFU.
+
+Error reporting (errors/)
+    error reporting sysfs interfaces allow user to read port/afu errors
+    detected by the hardware, and clear the logged errors.
 
 
 DFL Framework Overview
···
 the compat_id exposed by the target FPGA region. This check is usually done by
 userspace before calling the reconfiguration IOCTL.
 
+
+FPGA virtualization - PCIe SRIOV
+================================
+This section describes the virtualization support on DFL based FPGA device to
+enable accessing an accelerator from applications running in a virtual machine
+(VM). This section only describes the PCIe based FPGA device with SRIOV support.
+
+Features supported by the particular FPGA device are exposed through Device
+Feature Lists, as illustrated below:
+
+::
+
+    +-------------------------------+  +-------------+
+    |              PF               |  |     VF      |
+    +-------------------------------+  +-------------+
+        ^          ^         ^               ^
+        |          |         |               |
+    +---|----------|---------|---------------|------+
+    |   |          |         |               |      |
+    | +-----+  +-------+ +-------+       +-------+  |
+    | | FME |  | Port0 | | Port1 |       | Port2 |  |
+    | +-----+  +-------+ +-------+       +-------+  |
+    |              ^         ^               ^      |
+    |              |         |               |      |
+    |          +-------+ +-------+       +-------+  |
+    |          |  AFU  | |  AFU  |       |  AFU  |  |
+    |          +-------+ +-------+       +-------+  |
+    |                                               |
+    |           DFL based FPGA PCIe Device          |
+    +-----------------------------------------------+
+
+FME is always accessed through the physical function (PF).
+
+Ports (and related AFUs) are accessed via PF by default, but could be exposed
+through virtual function (VF) devices via PCIe SRIOV. Each VF only contains
+1 Port and 1 AFU for isolation. Users could assign individual VFs (accelerators)
+created via PCIe SRIOV interface, to virtual machines.
+
+The driver organization in virtualization case is illustrated below:
+::
+
+    +-------++------++------+             |
+    | FME   || FME  || FME  |             |
+    | FPGA  || FPGA || FPGA |             |
+    |Manager||Bridge||Region|             |
+    +-------++------++------+             |
+    +-----------------------+  +--------+ |          +--------+
+    |          FME          |  |  AFU   | |          |  AFU   |
+    |         Module        |  | Module | |          | Module |
+    +-----------------------+  +--------+ |          +--------+
+    +-----------------------+             | +-----------------------+
+    | FPGA Container Device |             | | FPGA Container Device |
+    |  (FPGA Base Region)   |             | |  (FPGA Base Region)   |
+    +-----------------------+             | +-----------------------+
+      +------------------+                |   +------------------+
+      | FPGA PCIE Module |        Virtual |   | FPGA PCIE Module |
+      +------------------+     Host       | Machine +------------------+
+    ------------------------------------- | ------------------------------
+      +---------------+                   |     +---------------+
+      | PCI PF Device |                   |     | PCI VF Device |
+      +---------------+                   |     +---------------+
+
+The FPGA PCIe device driver is always loaded first once an FPGA PCIe PF or VF
+device is detected. It:
+
+* Finishes enumeration on both FPGA PCIe PF and VF device using common
+  interfaces from DFL framework.
+* Supports SRIOV.
+
+The FME device driver plays a management role in this driver architecture; it
+provides ioctls to release a Port from the PF and assign a Port back to the PF.
+After a port is released from the PF, it is safe to expose that port through a
+VF via the PCIe SRIOV sysfs interface.
+
+To enable accessing an accelerator from applications running in a VM, the
+respective AFU's port needs to be assigned to a VF using the following steps:
+
+#. The PF owns all AFU ports by default. Any port that needs to be
+   reassigned to a VF must first be released through the
+   DFL_FPGA_FME_PORT_RELEASE ioctl on the FME device.
+
+#. Once N ports are released from the PF, the user can use the command below
+   to enable SRIOV and VFs. Each VF owns only one Port with an AFU.
+
+   ::
+
+      echo N > $PCI_DEVICE_PATH/sriov_numvfs
+
+#. Pass through the VFs to VMs
+
+#. The AFU under VF is accessible from applications in VM (using the
+   same driver inside the VF).
+
+Note that an FME can't be assigned to a VF, thus PR and other management
+functions are only available via the PF.
 
 Device enumeration
 ==================
+1
Documentation/misc-devices/index.rst
···
    isl29003
    lis3lv02d
    max6875
+   xilinx_sdfec
+23
MAINTAINERS
···
 F:	tools/power/x86/intel-speed-select/
 F:	include/uapi/linux/isst_if.h
 
+INTEL STRATIX10 FIRMWARE DRIVERS
+M:	Richard Gong <richard.gong@linux.intel.com>
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+F:	drivers/firmware/stratix10-rsu.c
+F:	drivers/firmware/stratix10-svc.c
+F:	include/linux/firmware/intel/stratix10-smc.h
+F:	include/linux/firmware/intel/stratix10-svc-client.h
+F:	Documentation/ABI/testing/sysfs-devices-platform-stratix10-rsu
+F:	Documentation/devicetree/bindings/firmware/intel,stratix10-svc.txt
+
 INTEL TELEMETRY DRIVER
 M:	Rajneesh Bhardwaj <rajneesh.bhardwaj@linux.intel.com>
 M:	"David E. Box" <david.e.box@linux.intel.com>
···
 S:	Supported
 F:	Documentation/trace/intel_th.rst
 F:	drivers/hwtracing/intel_th/
+F:	include/linux/intel_th.h
 
 INTEL(R) TRUSTED EXECUTION TECHNOLOGY (TXT)
 M:	Ning Sun <ning.sun@intel.com>
···
 F:	Documentation/devicetree/bindings/media/xilinx/
 F:	drivers/media/platform/xilinx/
 F:	include/uapi/linux/xilinx-v4l2-controls.h
+
+XILINX SD-FEC IP CORES
+M:	Derek Kiernan <derek.kiernan@xilinx.com>
+M:	Dragan Cvetic <dragan.cvetic@xilinx.com>
+S:	Maintained
+F:	Documentation/devicetree/bindings/misc/xlnx,sd-fec.txt
+F:	Documentation/misc-devices/xilinx_sdfec.rst
+F:	drivers/misc/xilinx_sdfec.c
+F:	drivers/misc/Kconfig
+F:	drivers/misc/Makefile
+F:	include/uapi/misc/xilinx_sdfec.h
 
 XILLYBUS DRIVER
 M:	Eli Billauer <eli.billauer@gmail.com>
+6
drivers/acpi/property.c
··· 39 39 /* External facing port GUID: efcc06cc-73ac-4bc3-bff0-76143807c389 */ 40 40 GUID_INIT(0xefcc06cc, 0x73ac, 0x4bc3, 41 41 0xbf, 0xf0, 0x76, 0x14, 0x38, 0x07, 0xc3, 0x89), 42 + /* Thunderbolt GUID for IMR_VALID: c44d002f-69f9-4e7d-a904-a7baabdf43f7 */ 43 + GUID_INIT(0xc44d002f, 0x69f9, 0x4e7d, 44 + 0xa9, 0x04, 0xa7, 0xba, 0xab, 0xdf, 0x43, 0xf7), 45 + /* Thunderbolt GUID for WAKE_SUPPORTED: 6c501103-c189-4296-ba72-9bf5a26ebe5d */ 46 + GUID_INIT(0x6c501103, 0xc189, 0x4296, 47 + 0xba, 0x72, 0x9b, 0xf5, 0xa2, 0x6e, 0xbe, 0x5d), 42 48 }; 43 49 44 50 /* ACPI _DSD data subnodes GUID: dbb8e3e6-5886-4ba6-8795-1319f52a966b */
+58 -42
drivers/android/binder.c
··· 122 122 BINDER_DEBUG_FAILED_TRANSACTION | BINDER_DEBUG_DEAD_TRANSACTION; 123 123 module_param_named(debug_mask, binder_debug_mask, uint, 0644); 124 124 125 - static char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES; 125 + char *binder_devices_param = CONFIG_ANDROID_BINDER_DEVICES; 126 126 module_param_named(devices, binder_devices_param, charp, 0444); 127 127 128 128 static DECLARE_WAIT_QUEUE_HEAD(binder_user_error_wait); ··· 196 196 atomic_inc(&binder_stats.obj_created[type]); 197 197 } 198 198 199 - struct binder_transaction_log_entry { 200 - int debug_id; 201 - int debug_id_done; 202 - int call_type; 203 - int from_proc; 204 - int from_thread; 205 - int target_handle; 206 - int to_proc; 207 - int to_thread; 208 - int to_node; 209 - int data_size; 210 - int offsets_size; 211 - int return_error_line; 212 - uint32_t return_error; 213 - uint32_t return_error_param; 214 - const char *context_name; 215 - }; 216 - struct binder_transaction_log { 217 - atomic_t cur; 218 - bool full; 219 - struct binder_transaction_log_entry entry[32]; 220 - }; 221 - static struct binder_transaction_log binder_transaction_log; 222 - static struct binder_transaction_log binder_transaction_log_failed; 199 + struct binder_transaction_log binder_transaction_log; 200 + struct binder_transaction_log binder_transaction_log_failed; 223 201 224 202 static struct binder_transaction_log_entry *binder_transaction_log_add( 225 203 struct binder_transaction_log *log) ··· 458 480 * @inner_lock: can nest under outer_lock and/or node lock 459 481 * @outer_lock: no nesting under innor or node lock 460 482 * Lock order: 1) outer, 2) node, 3) inner 483 + * @binderfs_entry: process-specific binderfs log file 461 484 * 462 485 * Bookkeeping structure for binder processes 463 486 */ ··· 488 509 struct binder_context *context; 489 510 spinlock_t inner_lock; 490 511 spinlock_t outer_lock; 512 + struct dentry *binderfs_entry; 491 513 }; 492 514 493 515 enum { ··· 5210 5230 { 5211 5231 struct 
binder_proc *proc; 5212 5232 struct binder_device *binder_dev; 5233 + struct binderfs_info *info; 5234 + struct dentry *binder_binderfs_dir_entry_proc = NULL; 5213 5235 5214 5236 binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__, 5215 5237 current->group_leader->pid, current->pid); ··· 5226 5244 INIT_LIST_HEAD(&proc->todo); 5227 5245 proc->default_priority = task_nice(current); 5228 5246 /* binderfs stashes devices in i_private */ 5229 - if (is_binderfs_device(nodp)) 5247 + if (is_binderfs_device(nodp)) { 5230 5248 binder_dev = nodp->i_private; 5231 - else 5249 + info = nodp->i_sb->s_fs_info; 5250 + binder_binderfs_dir_entry_proc = info->proc_log_dir; 5251 + } else { 5232 5252 binder_dev = container_of(filp->private_data, 5233 5253 struct binder_device, miscdev); 5254 + } 5234 5255 proc->context = &binder_dev->context; 5235 5256 binder_alloc_init(&proc->alloc); 5236 5257 ··· 5262 5277 binder_debugfs_dir_entry_proc, 5263 5278 (void *)(unsigned long)proc->pid, 5264 5279 &proc_fops); 5280 + } 5281 + 5282 + if (binder_binderfs_dir_entry_proc) { 5283 + char strbuf[11]; 5284 + struct dentry *binderfs_entry; 5285 + 5286 + snprintf(strbuf, sizeof(strbuf), "%u", proc->pid); 5287 + /* 5288 + * Similar to debugfs, the process specific log file is shared 5289 + * between contexts. If the file has already been created for a 5290 + * process, the following binderfs_create_file() call will 5291 + * fail with error code EEXIST if another context of the same 5292 + * process invoked binder_open(). This is ok since same as 5293 + * debugfs, the log file will contain information on all 5294 + * contexts of a given PID. 
5295 + */ 5296 + binderfs_entry = binderfs_create_file(binder_binderfs_dir_entry_proc, 5297 + strbuf, &proc_fops, (void *)(unsigned long)proc->pid); 5298 + if (!IS_ERR(binderfs_entry)) { 5299 + proc->binderfs_entry = binderfs_entry; 5300 + } else { 5301 + int error; 5302 + 5303 + error = PTR_ERR(binderfs_entry); 5304 + if (error != -EEXIST) { 5305 + pr_warn("Unable to create file %s in binderfs (error %d)\n", 5306 + strbuf, error); 5307 + } 5308 + } 5265 5309 } 5266 5310 5267 5311 return 0; ··· 5332 5318 struct binder_proc *proc = filp->private_data; 5333 5319 5334 5320 debugfs_remove(proc->debugfs_entry); 5321 + 5322 + if (proc->binderfs_entry) { 5323 + binderfs_remove_file(proc->binderfs_entry); 5324 + proc->binderfs_entry = NULL; 5325 + } 5326 + 5335 5327 binder_defer_work(proc, BINDER_DEFERRED_RELEASE); 5336 5328 5337 5329 return 0; ··· 5927 5907 } 5928 5908 5929 5909 5930 - static int state_show(struct seq_file *m, void *unused) 5910 + int binder_state_show(struct seq_file *m, void *unused) 5931 5911 { 5932 5912 struct binder_proc *proc; 5933 5913 struct binder_node *node; ··· 5966 5946 return 0; 5967 5947 } 5968 5948 5969 - static int stats_show(struct seq_file *m, void *unused) 5949 + int binder_stats_show(struct seq_file *m, void *unused) 5970 5950 { 5971 5951 struct binder_proc *proc; 5972 5952 ··· 5982 5962 return 0; 5983 5963 } 5984 5964 5985 - static int transactions_show(struct seq_file *m, void *unused) 5965 + int binder_transactions_show(struct seq_file *m, void *unused) 5986 5966 { 5987 5967 struct binder_proc *proc; 5988 5968 ··· 6038 6018 "\n" : " (incomplete)\n"); 6039 6019 } 6040 6020 6041 - static int transaction_log_show(struct seq_file *m, void *unused) 6021 + int binder_transaction_log_show(struct seq_file *m, void *unused) 6042 6022 { 6043 6023 struct binder_transaction_log *log = m->private; 6044 6024 unsigned int log_cur = atomic_read(&log->cur); ··· 6069 6049 .flush = binder_flush, 6070 6050 .release = binder_release, 6071 6051 }; 6072 - 
6073 - DEFINE_SHOW_ATTRIBUTE(state); 6074 - DEFINE_SHOW_ATTRIBUTE(stats); 6075 - DEFINE_SHOW_ATTRIBUTE(transactions); 6076 - DEFINE_SHOW_ATTRIBUTE(transaction_log); 6077 6052 6078 6053 static int __init init_binder_device(const char *name) 6079 6054 { ··· 6123 6108 0444, 6124 6109 binder_debugfs_dir_entry_root, 6125 6110 NULL, 6126 - &state_fops); 6111 + &binder_state_fops); 6127 6112 debugfs_create_file("stats", 6128 6113 0444, 6129 6114 binder_debugfs_dir_entry_root, 6130 6115 NULL, 6131 - &stats_fops); 6116 + &binder_stats_fops); 6132 6117 debugfs_create_file("transactions", 6133 6118 0444, 6134 6119 binder_debugfs_dir_entry_root, 6135 6120 NULL, 6136 - &transactions_fops); 6121 + &binder_transactions_fops); 6137 6122 debugfs_create_file("transaction_log", 6138 6123 0444, 6139 6124 binder_debugfs_dir_entry_root, 6140 6125 &binder_transaction_log, 6141 - &transaction_log_fops); 6126 + &binder_transaction_log_fops); 6142 6127 debugfs_create_file("failed_transaction_log", 6143 6128 0444, 6144 6129 binder_debugfs_dir_entry_root, 6145 6130 &binder_transaction_log_failed, 6146 - &transaction_log_fops); 6131 + &binder_transaction_log_fops); 6147 6132 } 6148 6133 6149 - if (strcmp(binder_devices_param, "") != 0) { 6134 + if (!IS_ENABLED(CONFIG_ANDROID_BINDERFS) && 6135 + strcmp(binder_devices_param, "") != 0) { 6150 6136 /* 6151 6137 * Copy the module_parameter string, because we don't want to 6152 6138 * tokenize it in-place.
+86
drivers/android/binder_internal.h
··· 35 35 struct inode *binderfs_inode; 36 36 }; 37 37 38 + /** 39 + * binderfs_mount_opts - mount options for binderfs 40 + * @max: maximum number of allocatable binderfs binder devices 41 + * @stats_mode: enable binder stats in binderfs. 42 + */ 43 + struct binderfs_mount_opts { 44 + int max; 45 + int stats_mode; 46 + }; 47 + 48 + /** 49 + * binderfs_info - information about a binderfs mount 50 + * @ipc_ns: The ipc namespace the binderfs mount belongs to. 51 + * @control_dentry: This records the dentry of this binderfs mount 52 + * binder-control device. 53 + * @root_uid: uid that needs to be used when a new binder device is 54 + * created. 55 + * @root_gid: gid that needs to be used when a new binder device is 56 + * created. 57 + * @mount_opts: The mount options in use. 58 + * @device_count: The current number of allocated binder devices. 59 + * @proc_log_dir: Pointer to the directory dentry containing process-specific 60 + * logs. 61 + */ 62 + struct binderfs_info { 63 + struct ipc_namespace *ipc_ns; 64 + struct dentry *control_dentry; 65 + kuid_t root_uid; 66 + kgid_t root_gid; 67 + struct binderfs_mount_opts mount_opts; 68 + int device_count; 69 + struct dentry *proc_log_dir; 70 + }; 71 + 38 72 extern const struct file_operations binder_fops; 73 + 74 + extern char *binder_devices_param; 39 75 40 76 #ifdef CONFIG_ANDROID_BINDERFS 41 77 extern bool is_binderfs_device(const struct inode *inode); 78 + extern struct dentry *binderfs_create_file(struct dentry *dir, const char *name, 79 + const struct file_operations *fops, 80 + void *data); 81 + extern void binderfs_remove_file(struct dentry *dentry); 42 82 #else 43 83 static inline bool is_binderfs_device(const struct inode *inode) 44 84 { 45 85 return false; 46 86 } 87 + static inline struct dentry *binderfs_create_file(struct dentry *dir, 88 + const char *name, 89 + const struct file_operations *fops, 90 + void *data) 91 + { 92 + return NULL; 93 + } 94 + static inline void binderfs_remove_file(struct dentry 
*dentry) {} 47 95 #endif 48 96 49 97 #ifdef CONFIG_ANDROID_BINDERFS ··· 103 55 } 104 56 #endif 105 57 58 + int binder_stats_show(struct seq_file *m, void *unused); 59 + DEFINE_SHOW_ATTRIBUTE(binder_stats); 60 + 61 + int binder_state_show(struct seq_file *m, void *unused); 62 + DEFINE_SHOW_ATTRIBUTE(binder_state); 63 + 64 + int binder_transactions_show(struct seq_file *m, void *unused); 65 + DEFINE_SHOW_ATTRIBUTE(binder_transactions); 66 + 67 + int binder_transaction_log_show(struct seq_file *m, void *unused); 68 + DEFINE_SHOW_ATTRIBUTE(binder_transaction_log); 69 + 70 + struct binder_transaction_log_entry { 71 + int debug_id; 72 + int debug_id_done; 73 + int call_type; 74 + int from_proc; 75 + int from_thread; 76 + int target_handle; 77 + int to_proc; 78 + int to_thread; 79 + int to_node; 80 + int data_size; 81 + int offsets_size; 82 + int return_error_line; 83 + uint32_t return_error; 84 + uint32_t return_error_param; 85 + const char *context_name; 86 + }; 87 + 88 + struct binder_transaction_log { 89 + atomic_t cur; 90 + bool full; 91 + struct binder_transaction_log_entry entry[32]; 92 + }; 93 + 94 + extern struct binder_transaction_log binder_transaction_log; 95 + extern struct binder_transaction_log binder_transaction_log_failed; 106 96 #endif /* _LINUX_BINDER_INTERNAL_H */
+255 -35
drivers/android/binderfs.c
··· 48 48 static DEFINE_MUTEX(binderfs_minors_mutex); 49 49 static DEFINE_IDA(binderfs_minors); 50 50 51 - /** 52 - * binderfs_mount_opts - mount options for binderfs 53 - * @max: maximum number of allocatable binderfs binder devices 54 - */ 55 - struct binderfs_mount_opts { 56 - int max; 57 - }; 58 - 59 51 enum { 60 52 Opt_max, 53 + Opt_stats_mode, 61 54 Opt_err 55 + }; 56 + 57 + enum binderfs_stats_mode { 58 + STATS_NONE, 59 + STATS_GLOBAL, 62 60 }; 63 61 64 62 static const match_table_t tokens = { 65 63 { Opt_max, "max=%d" }, 64 + { Opt_stats_mode, "stats=%s" }, 66 65 { Opt_err, NULL } 67 - }; 68 - 69 - /** 70 - * binderfs_info - information about a binderfs mount 71 - * @ipc_ns: The ipc namespace the binderfs mount belongs to. 72 - * @control_dentry: This records the dentry of this binderfs mount 73 - * binder-control device. 74 - * @root_uid: uid that needs to be used when a new binder device is 75 - * created. 76 - * @root_gid: gid that needs to be used when a new binder device is 77 - * created. 78 - * @mount_opts: The mount options in use. 79 - * @device_count: The current number of allocated binder devices. 
80 - */ 81 - struct binderfs_info { 82 - struct ipc_namespace *ipc_ns; 83 - struct dentry *control_dentry; 84 - kuid_t root_uid; 85 - kgid_t root_gid; 86 - struct binderfs_mount_opts mount_opts; 87 - int device_count; 88 66 }; 89 67 90 68 static inline struct binderfs_info *BINDERFS_I(const struct inode *inode) ··· 164 186 req->major = MAJOR(binderfs_dev); 165 187 req->minor = minor; 166 188 167 - ret = copy_to_user(userp, req, sizeof(*req)); 168 - if (ret) { 189 + if (userp && copy_to_user(userp, req, sizeof(*req))) { 169 190 ret = -EFAULT; 170 191 goto err; 171 192 } ··· 249 272 250 273 clear_inode(inode); 251 274 252 - if (!device) 275 + if (!S_ISCHR(inode->i_mode) || !device) 253 276 return; 254 277 255 278 mutex_lock(&binderfs_minors_mutex); ··· 268 291 static int binderfs_parse_mount_opts(char *data, 269 292 struct binderfs_mount_opts *opts) 270 293 { 271 - char *p; 294 + char *p, *stats; 272 295 opts->max = BINDERFS_MAX_MINOR; 296 + opts->stats_mode = STATS_NONE; 273 297 274 298 while ((p = strsep(&data, ",")) != NULL) { 275 299 substring_t args[MAX_OPT_ARGS]; ··· 290 312 291 313 opts->max = max_devices; 292 314 break; 315 + case Opt_stats_mode: 316 + if (!capable(CAP_SYS_ADMIN)) 317 + return -EINVAL; 318 + 319 + stats = match_strdup(&args[0]); 320 + if (!stats) 321 + return -ENOMEM; 322 + 323 + if (strcmp(stats, "global") != 0) { 324 + kfree(stats); 325 + return -EINVAL; 326 + } 327 + 328 + opts->stats_mode = STATS_GLOBAL; 329 + kfree(stats); 330 + break; 293 331 default: 294 332 pr_err("Invalid mount options\n"); 295 333 return -EINVAL; ··· 317 323 318 324 static int binderfs_remount(struct super_block *sb, int *flags, char *data) 319 325 { 326 + int prev_stats_mode, ret; 320 327 struct binderfs_info *info = sb->s_fs_info; 321 - return binderfs_parse_mount_opts(data, &info->mount_opts); 328 + 329 + prev_stats_mode = info->mount_opts.stats_mode; 330 + ret = binderfs_parse_mount_opts(data, &info->mount_opts); 331 + if (ret) 332 + return ret; 333 + 334 + if 
(prev_stats_mode != info->mount_opts.stats_mode) { 335 + pr_err("Binderfs stats mode cannot be changed during a remount\n"); 336 + info->mount_opts.stats_mode = prev_stats_mode; 337 + return -EINVAL; 338 + } 339 + 340 + return 0; 322 341 } 323 342 324 343 static int binderfs_show_mount_opts(struct seq_file *seq, struct dentry *root) ··· 341 334 info = root->d_sb->s_fs_info; 342 335 if (info->mount_opts.max <= BINDERFS_MAX_MINOR) 343 336 seq_printf(seq, ",max=%d", info->mount_opts.max); 337 + if (info->mount_opts.stats_mode == STATS_GLOBAL) 338 + seq_printf(seq, ",stats=global"); 344 339 345 340 return 0; 346 341 } ··· 471 462 .unlink = binderfs_unlink, 472 463 }; 473 464 465 + static struct inode *binderfs_make_inode(struct super_block *sb, int mode) 466 + { 467 + struct inode *ret; 468 + 469 + ret = new_inode(sb); 470 + if (ret) { 471 + ret->i_ino = iunique(sb, BINDERFS_MAX_MINOR + INODE_OFFSET); 472 + ret->i_mode = mode; 473 + ret->i_atime = ret->i_mtime = ret->i_ctime = current_time(ret); 474 + } 475 + return ret; 476 + } 477 + 478 + static struct dentry *binderfs_create_dentry(struct dentry *parent, 479 + const char *name) 480 + { 481 + struct dentry *dentry; 482 + 483 + dentry = lookup_one_len(name, parent, strlen(name)); 484 + if (IS_ERR(dentry)) 485 + return dentry; 486 + 487 + /* Return error if the file/dir already exists. 
*/ 488 + if (d_really_is_positive(dentry)) { 489 + dput(dentry); 490 + return ERR_PTR(-EEXIST); 491 + } 492 + 493 + return dentry; 494 + } 495 + 496 + void binderfs_remove_file(struct dentry *dentry) 497 + { 498 + struct inode *parent_inode; 499 + 500 + parent_inode = d_inode(dentry->d_parent); 501 + inode_lock(parent_inode); 502 + if (simple_positive(dentry)) { 503 + dget(dentry); 504 + simple_unlink(parent_inode, dentry); 505 + d_delete(dentry); 506 + dput(dentry); 507 + } 508 + inode_unlock(parent_inode); 509 + } 510 + 511 + struct dentry *binderfs_create_file(struct dentry *parent, const char *name, 512 + const struct file_operations *fops, 513 + void *data) 514 + { 515 + struct dentry *dentry; 516 + struct inode *new_inode, *parent_inode; 517 + struct super_block *sb; 518 + 519 + parent_inode = d_inode(parent); 520 + inode_lock(parent_inode); 521 + 522 + dentry = binderfs_create_dentry(parent, name); 523 + if (IS_ERR(dentry)) 524 + goto out; 525 + 526 + sb = parent_inode->i_sb; 527 + new_inode = binderfs_make_inode(sb, S_IFREG | 0444); 528 + if (!new_inode) { 529 + dput(dentry); 530 + dentry = ERR_PTR(-ENOMEM); 531 + goto out; 532 + } 533 + 534 + new_inode->i_fop = fops; 535 + new_inode->i_private = data; 536 + d_instantiate(dentry, new_inode); 537 + fsnotify_create(parent_inode, dentry); 538 + 539 + out: 540 + inode_unlock(parent_inode); 541 + return dentry; 542 + } 543 + 544 + static struct dentry *binderfs_create_dir(struct dentry *parent, 545 + const char *name) 546 + { 547 + struct dentry *dentry; 548 + struct inode *new_inode, *parent_inode; 549 + struct super_block *sb; 550 + 551 + parent_inode = d_inode(parent); 552 + inode_lock(parent_inode); 553 + 554 + dentry = binderfs_create_dentry(parent, name); 555 + if (IS_ERR(dentry)) 556 + goto out; 557 + 558 + sb = parent_inode->i_sb; 559 + new_inode = binderfs_make_inode(sb, S_IFDIR | 0755); 560 + if (!new_inode) { 561 + dput(dentry); 562 + dentry = ERR_PTR(-ENOMEM); 563 + goto out; 564 + } 565 + 566 + 
new_inode->i_fop = &simple_dir_operations; 567 + new_inode->i_op = &simple_dir_inode_operations; 568 + 569 + set_nlink(new_inode, 2); 570 + d_instantiate(dentry, new_inode); 571 + inc_nlink(parent_inode); 572 + fsnotify_mkdir(parent_inode, dentry); 573 + 574 + out: 575 + inode_unlock(parent_inode); 576 + return dentry; 577 + } 578 + 579 + static int init_binder_logs(struct super_block *sb) 580 + { 581 + struct dentry *binder_logs_root_dir, *dentry, *proc_log_dir; 582 + struct binderfs_info *info; 583 + int ret = 0; 584 + 585 + binder_logs_root_dir = binderfs_create_dir(sb->s_root, 586 + "binder_logs"); 587 + if (IS_ERR(binder_logs_root_dir)) { 588 + ret = PTR_ERR(binder_logs_root_dir); 589 + goto out; 590 + } 591 + 592 + dentry = binderfs_create_file(binder_logs_root_dir, "stats", 593 + &binder_stats_fops, NULL); 594 + if (IS_ERR(dentry)) { 595 + ret = PTR_ERR(dentry); 596 + goto out; 597 + } 598 + 599 + dentry = binderfs_create_file(binder_logs_root_dir, "state", 600 + &binder_state_fops, NULL); 601 + if (IS_ERR(dentry)) { 602 + ret = PTR_ERR(dentry); 603 + goto out; 604 + } 605 + 606 + dentry = binderfs_create_file(binder_logs_root_dir, "transactions", 607 + &binder_transactions_fops, NULL); 608 + if (IS_ERR(dentry)) { 609 + ret = PTR_ERR(dentry); 610 + goto out; 611 + } 612 + 613 + dentry = binderfs_create_file(binder_logs_root_dir, 614 + "transaction_log", 615 + &binder_transaction_log_fops, 616 + &binder_transaction_log); 617 + if (IS_ERR(dentry)) { 618 + ret = PTR_ERR(dentry); 619 + goto out; 620 + } 621 + 622 + dentry = binderfs_create_file(binder_logs_root_dir, 623 + "failed_transaction_log", 624 + &binder_transaction_log_fops, 625 + &binder_transaction_log_failed); 626 + if (IS_ERR(dentry)) { 627 + ret = PTR_ERR(dentry); 628 + goto out; 629 + } 630 + 631 + proc_log_dir = binderfs_create_dir(binder_logs_root_dir, "proc"); 632 + if (IS_ERR(proc_log_dir)) { 633 + ret = PTR_ERR(proc_log_dir); 634 + goto out; 635 + } 636 + info = sb->s_fs_info; 637 + 
info->proc_log_dir = proc_log_dir; 638 + 639 + out: 640 + return ret; 641 + } 642 + 474 643 static int binderfs_fill_super(struct super_block *sb, void *data, int silent) 475 644 { 476 645 int ret; 477 646 struct binderfs_info *info; 478 647 struct inode *inode = NULL; 648 + struct binderfs_device device_info = { 0 }; 649 + const char *name; 650 + size_t len; 479 651 480 652 sb->s_blocksize = PAGE_SIZE; 481 653 sb->s_blocksize_bits = PAGE_SHIFT; ··· 711 521 if (!sb->s_root) 712 522 return -ENOMEM; 713 523 714 - return binderfs_binder_ctl_create(sb); 524 + ret = binderfs_binder_ctl_create(sb); 525 + if (ret) 526 + return ret; 527 + 528 + name = binder_devices_param; 529 + for (len = strcspn(name, ","); len > 0; len = strcspn(name, ",")) { 530 + strscpy(device_info.name, name, len + 1); 531 + ret = binderfs_binder_device_create(inode, NULL, &device_info); 532 + if (ret) 533 + return ret; 534 + name += len; 535 + if (*name == ',') 536 + name++; 537 + } 538 + 539 + if (info->mount_opts.stats_mode == STATS_GLOBAL) 540 + return init_binder_logs(sb); 541 + 542 + return 0; 715 543 } 716 544 717 545 static struct dentry *binderfs_mount(struct file_system_type *fs_type, ··· 761 553 int __init init_binderfs(void) 762 554 { 763 555 int ret; 556 + const char *name; 557 + size_t len; 558 + 559 + /* Verify that the default binderfs device names are valid. */ 560 + name = binder_devices_param; 561 + for (len = strcspn(name, ","); len > 0; len = strcspn(name, ",")) { 562 + if (len > BINDERFS_MAX_NAME) 563 + return -E2BIG; 564 + name += len; 565 + if (*name == ',') 566 + name++; 567 + } 764 568 765 569 /* Allocate new major number for binderfs. */ 766 570 ret = alloc_chrdev_region(&binderfs_dev, 0, BINDERFS_MAX_MINOR,
+21
drivers/char/mem.c
··· 97 97 } 98 98 #endif 99 99 100 + static inline bool should_stop_iteration(void) 101 + { 102 + if (need_resched()) 103 + cond_resched(); 104 + return fatal_signal_pending(current); 105 + } 106 + 100 107 /* 101 108 * This funcion reads the *physical* memory. The f_pos points directly to the 102 109 * memory location. ··· 182 175 p += sz; 183 176 count -= sz; 184 177 read += sz; 178 + if (should_stop_iteration()) 179 + break; 185 180 } 186 181 kfree(bounce); 187 182 ··· 260 251 p += sz; 261 252 count -= sz; 262 253 written += sz; 254 + if (should_stop_iteration()) 255 + break; 263 256 } 264 257 265 258 *ppos += written; ··· 479 468 read += sz; 480 469 low_count -= sz; 481 470 count -= sz; 471 + if (should_stop_iteration()) { 472 + count = 0; 473 + break; 474 + } 482 475 } 483 476 } 484 477 ··· 507 492 buf += sz; 508 493 read += sz; 509 494 p += sz; 495 + if (should_stop_iteration()) 496 + break; 510 497 } 511 498 free_page((unsigned long)kbuf); 512 499 } ··· 561 544 p += sz; 562 545 count -= sz; 563 546 written += sz; 547 + if (should_stop_iteration()) 548 + break; 564 549 } 565 550 566 551 *ppos += written; ··· 614 595 buf += sz; 615 596 virtr += sz; 616 597 p += sz; 598 + if (should_stop_iteration()) 599 + break; 617 600 } 618 601 free_page((unsigned long)kbuf); 619 602 }
+1 -1
drivers/char/ppdev.c
··· 737 737 "negotiated back to compatibility mode because user-space forgot\n"); 738 738 } 739 739 740 - if (pp->flags & PP_CLAIMED) { 740 + if ((pp->flags & PP_CLAIMED) && pp->pdev) { 741 741 struct ieee1284_info *info; 742 742 743 743 info = &pp->pdev->port->ieee1284;
+4 -4
drivers/char/toshiba.c
··· 373 373 value. This has been verified on a Satellite Pro 430CDT, 374 374 Tecra 750CDT, Tecra 780DVD and Satellite 310CDT. */ 375 375 #if TOSH_DEBUG 376 - printk("toshiba: debugging ID ebx=0x%04x\n", regs.ebx); 376 + pr_debug("toshiba: debugging ID ebx=0x%04x\n", regs.ebx); 377 377 #endif 378 378 bx = 0xe6f5; 379 379 ··· 417 417 418 418 for (i=0;i<7;i++) { 419 419 if (readb(bios+0xe010+i)!=signature[i]) { 420 - printk("toshiba: not a supported Toshiba laptop\n"); 420 + pr_err("toshiba: not a supported Toshiba laptop\n"); 421 421 iounmap(bios); 422 422 return -ENODEV; 423 423 } ··· 433 433 /* if this is not a Toshiba laptop carry flag is set and ah=0x86 */ 434 434 435 435 if ((flag==1) || ((regs.eax & 0xff00)==0x8600)) { 436 - printk("toshiba: not a supported Toshiba laptop\n"); 436 + pr_err("toshiba: not a supported Toshiba laptop\n"); 437 437 iounmap(bios); 438 438 return -ENODEV; 439 439 } ··· 486 486 if (tosh_probe()) 487 487 return -ENODEV; 488 488 489 - printk(KERN_INFO "Toshiba System Management Mode driver v" TOSH_VERSION "\n"); 489 + pr_info("Toshiba System Management Mode driver v" TOSH_VERSION "\n"); 490 490 491 491 /* set the port to use for Fn status if not specified as a parameter */ 492 492 if (tosh_fn==0x00)
+3 -13
drivers/clk/qcom/clk-rpmh.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright (c) 2018, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/clk-provider.h> ··· 12 12 #include <linux/platform_device.h> 13 13 #include <soc/qcom/cmd-db.h> 14 14 #include <soc/qcom/rpmh.h> 15 + #include <soc/qcom/tcs.h> 15 16 16 17 #include <dt-bindings/clock/qcom,rpmh.h> 17 18 18 19 #define CLK_RPMH_ARC_EN_OFFSET 0 19 20 #define CLK_RPMH_VRM_EN_OFFSET 4 20 - 21 - #define BCM_TCS_CMD_COMMIT_MASK 0x40000000 22 - #define BCM_TCS_CMD_VALID_SHIFT 29 23 - #define BCM_TCS_CMD_VOTE_MASK 0x3fff 24 - #define BCM_TCS_CMD_VOTE_SHIFT 0 25 - 26 - #define BCM_TCS_CMD(valid, vote) \ 27 - (BCM_TCS_CMD_COMMIT_MASK | \ 28 - ((valid) << BCM_TCS_CMD_VALID_SHIFT) | \ 29 - ((vote & BCM_TCS_CMD_VOTE_MASK) \ 30 - << BCM_TCS_CMD_VOTE_SHIFT)) 31 21 32 22 /** 33 23 * struct bcm_db - Auxiliary data pertaining to each Bus Clock Manager(BCM) ··· 259 269 } 260 270 261 271 cmd.addr = c->res_addr; 262 - cmd.data = BCM_TCS_CMD(enable, cmd_state); 272 + cmd.data = BCM_TCS_CMD(1, enable, 0, cmd_state); 263 273 264 274 ret = rpmh_write_async(c->dev, RPMH_ACTIVE_ONLY_STATE, &cmd, 1); 265 275 if (ret) {
+1 -3
drivers/extcon/extcon-adc-jack.c
··· 140 140 return err; 141 141 142 142 data->irq = platform_get_irq(pdev, 0); 143 - if (data->irq < 0) { 144 - dev_err(&pdev->dev, "platform_get_irq failed\n"); 143 + if (data->irq < 0) 145 144 return -ENODEV; 146 - } 147 145 148 146 err = request_any_context_irq(data->irq, adc_jack_irq_thread, 149 147 pdata->irq_flags, pdata->name, data);
+1 -1
drivers/extcon/extcon-arizona.c
··· 1253 1253 int i, j; 1254 1254 u32 *vals; 1255 1255 1256 - nconfs = device_property_read_u32_array(arizona->dev, prop, NULL, 0); 1256 + nconfs = device_property_count_u32(arizona->dev, prop); 1257 1257 if (nconfs <= 0) 1258 1258 return 0; 1259 1259
+9 -7
drivers/extcon/extcon-axp288.c
··· 121 121 "Last shutdown caused by PMIC UVLO threshold", 122 122 "Last shutdown caused by SOC initiated cold off", 123 123 "Last shutdown caused by user pressing the power button", 124 - NULL, 125 124 }; 126 125 127 126 /* ··· 129 130 */ 130 131 static void axp288_extcon_log_rsi(struct axp288_extcon_info *info) 131 132 { 132 - const char * const *rsi; 133 133 unsigned int val, i, clear_mask = 0; 134 + unsigned long bits; 134 135 int ret; 135 136 136 137 ret = regmap_read(info->regmap, AXP288_PS_BOOT_REASON_REG, &val); 137 - for (i = 0, rsi = axp288_pwr_up_down_info; *rsi; rsi++, i++) { 138 - if (val & BIT(i)) { 139 - dev_dbg(info->dev, "%s\n", *rsi); 140 - clear_mask |= BIT(i); 141 - } 138 + if (ret < 0) { 139 + dev_err(info->dev, "failed to read reset source indicator\n"); 140 + return; 142 141 } 142 + 143 + bits = val & GENMASK(ARRAY_SIZE(axp288_pwr_up_down_info) - 1, 0); 144 + for_each_set_bit(i, &bits, ARRAY_SIZE(axp288_pwr_up_down_info)) 145 + dev_dbg(info->dev, "%s\n", axp288_pwr_up_down_info[i]); 146 + clear_mask = bits; 143 147 144 148 /* Clear the register value for next reboot (write 1 to clear bit) */ 145 149 regmap_write(info->regmap, AXP288_PS_BOOT_REASON_REG, clear_mask);
+1
drivers/extcon/extcon-fsa9480.c
··· 363 363 364 364 static const struct of_device_id fsa9480_of_match[] = { 365 365 { .compatible = "fcs,fsa9480", }, 366 + { .compatible = "fcs,fsa880", }, 366 367 { }, 367 368 }; 368 369 MODULE_DEVICE_TABLE(of, fsa9480_of_match);
+19 -10
drivers/extcon/extcon-gpio.c
··· 22 22 /** 23 23 * struct gpio_extcon_data - A simple GPIO-controlled extcon device state container. 24 24 * @edev: Extcon device. 25 - * @irq: Interrupt line for the external connector. 26 25 * @work: Work fired by the interrupt. 27 26 * @debounce_jiffies: Number of jiffies to wait for the GPIO to stabilize, from the debounce 28 27 * value. 29 28 * @gpiod: GPIO descriptor for this external connector. 30 29 * @extcon_id: The unique id of specific external connector. 31 30 * @debounce: Debounce time for GPIO IRQ in ms. 32 - * @irq_flags: IRQ Flags (e.g., IRQF_TRIGGER_LOW). 33 31 * @check_on_resume: Boolean describing whether to check the state of gpio 34 32 * while resuming from sleep. 35 33 */ 36 34 struct gpio_extcon_data { 37 35 struct extcon_dev *edev; 38 - int irq; 39 36 struct delayed_work work; 40 37 unsigned long debounce_jiffies; 41 38 struct gpio_desc *gpiod; 42 39 unsigned int extcon_id; 43 40 unsigned long debounce; 44 - unsigned long irq_flags; 45 41 bool check_on_resume; 46 42 }; 47 43 ··· 65 69 { 66 70 struct gpio_extcon_data *data; 67 71 struct device *dev = &pdev->dev; 72 + unsigned long irq_flags; 73 + int irq; 68 74 int ret; 69 75 70 76 data = devm_kzalloc(dev, sizeof(struct gpio_extcon_data), GFP_KERNEL); ··· 80 82 * developed to get the extcon id from device-tree or others. 81 83 * On later, it have to be solved. 
82 84 */ 83 - if (!data->irq_flags || data->extcon_id > EXTCON_NONE) 85 + if (data->extcon_id > EXTCON_NONE) 84 86 return -EINVAL; 85 87 86 88 data->gpiod = devm_gpiod_get(dev, "extcon", GPIOD_IN); 87 89 if (IS_ERR(data->gpiod)) 88 90 return PTR_ERR(data->gpiod); 89 - data->irq = gpiod_to_irq(data->gpiod); 90 - if (data->irq <= 0) 91 - return data->irq; 91 + irq = gpiod_to_irq(data->gpiod); 92 + if (irq <= 0) 93 + return irq; 94 + 95 + /* 96 + * It is unlikely that this is an acknowledged interrupt that goes 97 + * away after handling, what we are looking for are falling edges 98 + * if the signal is active low, and rising edges if the signal is 99 + * active high. 100 + */ 101 + if (gpiod_is_active_low(data->gpiod)) 102 + irq_flags = IRQF_TRIGGER_FALLING; 103 + else 104 + irq_flags = IRQF_TRIGGER_RISING; 92 105 93 106 /* Allocate the memory of extcon devie and register extcon device */ 94 107 data->edev = devm_extcon_dev_allocate(dev, &data->extcon_id); ··· 118 109 * Request the interrupt of gpio to detect whether external connector 119 110 * is attached or detached. 120 111 */ 121 - ret = devm_request_any_context_irq(dev, data->irq, 122 - gpio_irq_handler, data->irq_flags, 112 + ret = devm_request_any_context_irq(dev, irq, 113 + gpio_irq_handler, irq_flags, 123 114 pdev->name, data); 124 115 if (ret < 0) 125 116 return ret;
+3 -3
drivers/extcon/extcon-max77843.c
··· 774 774 { 775 775 int ret; 776 776 777 - max77843->i2c_muic = i2c_new_dummy(max77843->i2c->adapter, 777 + max77843->i2c_muic = i2c_new_dummy_device(max77843->i2c->adapter, 778 778 I2C_ADDR_MUIC); 779 - if (!max77843->i2c_muic) { 779 + if (IS_ERR(max77843->i2c_muic)) { 780 780 dev_err(&max77843->i2c->dev, 781 781 "Cannot allocate I2C device for MUIC\n"); 782 - return -ENOMEM; 782 + return PTR_ERR(max77843->i2c_muic); 783 783 } 784 784 785 785 i2c_set_clientdata(max77843->i2c_muic, max77843);
+1 -1
drivers/extcon/extcon-sm5502.c
··· 597 597 598 598 ret = devm_request_threaded_irq(info->dev, virq, NULL, 599 599 sm5502_muic_irq_handler, 600 - IRQF_NO_SUSPEND, 600 + IRQF_NO_SUSPEND | IRQF_ONESHOT, 601 601 muic_irq->name, info); 602 602 if (ret) { 603 603 dev_err(info->dev,
+18
drivers/firmware/Kconfig
··· 216 216 217 217 Say Y here if you want Stratix10 service layer support. 218 218 219 + config INTEL_STRATIX10_RSU 220 + tristate "Intel Stratix10 Remote System Update" 221 + depends on INTEL_STRATIX10_SERVICE 222 + help 223 + The Intel Remote System Update (RSU) driver exposes interfaces 224 + access through the Intel Service Layer to user space via sysfs 225 + device attribute nodes. The RSU interfaces report/control some of 226 + the optional RSU features of the Stratix 10 SoC FPGA. 227 + 228 + The RSU provides a way for customers to update the boot 229 + configuration of a Stratix 10 SoC device with significantly reduced 230 + risk of corrupting the bitstream storage and bricking the system. 231 + 232 + Enable RSU support if you are using an Intel SoC FPGA with the RSU 233 + feature enabled and you want Linux user space control. 234 + 235 + Say Y here if you want Intel RSU support. 236 + 219 237 config QCOM_SCM 220 238 bool 221 239 depends on ARM || ARM64
+1
drivers/firmware/Makefile
··· 11 11 obj-$(CONFIG_EFI_PCDP) += pcdp.o 12 12 obj-$(CONFIG_DMIID) += dmi-id.o 13 13 obj-$(CONFIG_INTEL_STRATIX10_SERVICE) += stratix10-svc.o 14 + obj-$(CONFIG_INTEL_STRATIX10_RSU) += stratix10-rsu.o 14 15 obj-$(CONFIG_ISCSI_IBFT_FIND) += iscsi_ibft_find.o 15 16 obj-$(CONFIG_ISCSI_IBFT) += iscsi_ibft.o 16 17 obj-$(CONFIG_FIRMWARE_MEMMAP) += memmap.o
+2 -2
drivers/firmware/google/vpd.c
··· 92 92 return VPD_OK; 93 93 } 94 94 95 - static int vpd_section_attrib_add(const u8 *key, s32 key_len, 96 - const u8 *value, s32 value_len, 95 + static int vpd_section_attrib_add(const u8 *key, u32 key_len, 96 + const u8 *value, u32 value_len, 97 97 void *arg) 98 98 { 99 99 int ret;
+33 -24
drivers/firmware/google/vpd_decode.c
··· 9 9 10 10 #include "vpd_decode.h" 11 11 12 - static int vpd_decode_len(const s32 max_len, const u8 *in, 13 - s32 *length, s32 *decoded_len) 12 + static int vpd_decode_len(const u32 max_len, const u8 *in, 13 + u32 *length, u32 *decoded_len) 14 14 { 15 15 u8 more; 16 16 int i = 0; ··· 30 30 } while (more); 31 31 32 32 *decoded_len = i; 33 - 34 33 return VPD_OK; 35 34 } 36 35 37 - int vpd_decode_string(const s32 max_len, const u8 *input_buf, s32 *consumed, 36 + static int vpd_decode_entry(const u32 max_len, const u8 *input_buf, 37 + u32 *_consumed, const u8 **entry, u32 *entry_len) 38 + { 39 + u32 decoded_len; 40 + u32 consumed = *_consumed; 41 + 42 + if (vpd_decode_len(max_len - consumed, &input_buf[consumed], 43 + entry_len, &decoded_len) != VPD_OK) 44 + return VPD_FAIL; 45 + if (max_len - consumed < decoded_len) 46 + return VPD_FAIL; 47 + 48 + consumed += decoded_len; 49 + *entry = input_buf + consumed; 50 + 51 + /* entry_len is untrusted data and must be checked again. */ 52 + if (max_len - consumed < *entry_len) 53 + return VPD_FAIL; 54 + 55 + consumed += decoded_len; 56 + *_consumed = consumed; 57 + return VPD_OK; 58 + } 59 + 60 + int vpd_decode_string(const u32 max_len, const u8 *input_buf, u32 *consumed, 38 61 vpd_decode_callback callback, void *callback_arg) 39 62 { 40 63 int type; 41 - int res; 42 - s32 key_len; 43 - s32 value_len; 44 - s32 decoded_len; 64 + u32 key_len; 65 + u32 value_len; 45 66 const u8 *key; 46 67 const u8 *value; 47 68 ··· 77 56 case VPD_TYPE_STRING: 78 57 (*consumed)++; 79 58 80 - /* key */ 81 - res = vpd_decode_len(max_len - *consumed, &input_buf[*consumed], 82 - &key_len, &decoded_len); 83 - if (res != VPD_OK || *consumed + decoded_len >= max_len) 59 + if (vpd_decode_entry(max_len, input_buf, consumed, &key, 60 + &key_len) != VPD_OK) 84 61 return VPD_FAIL; 85 62 86 - *consumed += decoded_len; 87 - key = &input_buf[*consumed]; 88 - *consumed += key_len; 89 - 90 - /* value */ 91 - res = vpd_decode_len(max_len - *consumed, 
&input_buf[*consumed], 92 - &value_len, &decoded_len); 93 - if (res != VPD_OK || *consumed + decoded_len > max_len) 63 + if (vpd_decode_entry(max_len, input_buf, consumed, &value, 64 + &value_len) != VPD_OK) 94 65 return VPD_FAIL; 95 - 96 - *consumed += decoded_len; 97 - value = &input_buf[*consumed]; 98 - *consumed += value_len; 99 66 100 67 if (type == VPD_TYPE_STRING) 101 68 return callback(key, key_len, value, value_len,
+3 -3
drivers/firmware/google/vpd_decode.h
··· 25 25 }; 26 26 27 27 /* Callback for vpd_decode_string to invoke. */ 28 - typedef int vpd_decode_callback(const u8 *key, s32 key_len, 29 - const u8 *value, s32 value_len, 28 + typedef int vpd_decode_callback(const u8 *key, u32 key_len, 29 + const u8 *value, u32 value_len, 30 30 void *arg); 31 31 32 32 /* ··· 44 44 * If one entry is successfully decoded, sends it to callback and returns the 45 45 * result. 46 46 */ 47 - int vpd_decode_string(const s32 max_len, const u8 *input_buf, s32 *consumed, 47 + int vpd_decode_string(const u32 max_len, const u8 *input_buf, u32 *consumed, 48 48 vpd_decode_callback callback, void *callback_arg); 49 49 50 50 #endif /* __VPD_DECODE_H */
+451
drivers/firmware/stratix10-rsu.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2018-2019, Intel Corporation 4 + */ 5 + 6 + #include <linux/arm-smccc.h> 7 + #include <linux/bitfield.h> 8 + #include <linux/completion.h> 9 + #include <linux/kobject.h> 10 + #include <linux/module.h> 11 + #include <linux/mutex.h> 12 + #include <linux/of.h> 13 + #include <linux/of_platform.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/firmware/intel/stratix10-svc-client.h> 16 + #include <linux/string.h> 17 + #include <linux/sysfs.h> 18 + 19 + #define RSU_STATE_MASK GENMASK_ULL(31, 0) 20 + #define RSU_VERSION_MASK GENMASK_ULL(63, 32) 21 + #define RSU_ERROR_LOCATION_MASK GENMASK_ULL(31, 0) 22 + #define RSU_ERROR_DETAIL_MASK GENMASK_ULL(63, 32) 23 + #define RSU_FW_VERSION_MASK GENMASK_ULL(15, 0) 24 + 25 + #define RSU_TIMEOUT (msecs_to_jiffies(SVC_RSU_REQUEST_TIMEOUT_MS)) 26 + 27 + #define INVALID_RETRY_COUNTER 0xFFFFFFFF 28 + 29 + typedef void (*rsu_callback)(struct stratix10_svc_client *client, 30 + struct stratix10_svc_cb_data *data); 31 + /** 32 + * struct stratix10_rsu_priv - rsu data structure 33 + * @chan: pointer to the allocated service channel 34 + * @client: active service client 35 + * @completion: state for callback completion 36 + * @lock: a mutex to protect callback completion state 37 + * @status.current_image: address of image currently running in flash 38 + * @status.fail_image: address of failed image in flash 39 + * @status.version: the version number of RSU firmware 40 + * @status.state: the state of RSU system 41 + * @status.error_details: error code 42 + * @status.error_location: the error offset inside the image that failed 43 + * @retry_counter: the current image's retry counter 44 + */ 45 + struct stratix10_rsu_priv { 46 + struct stratix10_svc_chan *chan; 47 + struct stratix10_svc_client client; 48 + struct completion completion; 49 + struct mutex lock; 50 + struct { 51 + unsigned long current_image; 52 + unsigned long fail_image; 53 + unsigned int 
version; 54 + unsigned int state; 55 + unsigned int error_details; 56 + unsigned int error_location; 57 + } status; 58 + unsigned int retry_counter; 59 + }; 60 + 61 + /** 62 + * rsu_status_callback() - Status callback from Intel Service Layer 63 + * @client: pointer to service client 64 + * @data: pointer to callback data structure 65 + * 66 + * Callback from Intel service layer for RSU status request. Status is 67 + * only updated after a system reboot, so a get updated status call is 68 + * made during driver probe. 69 + */ 70 + static void rsu_status_callback(struct stratix10_svc_client *client, 71 + struct stratix10_svc_cb_data *data) 72 + { 73 + struct stratix10_rsu_priv *priv = client->priv; 74 + struct arm_smccc_res *res = (struct arm_smccc_res *)data->kaddr1; 75 + 76 + if (data->status == BIT(SVC_STATUS_RSU_OK)) { 77 + priv->status.version = FIELD_GET(RSU_VERSION_MASK, 78 + res->a2); 79 + priv->status.state = FIELD_GET(RSU_STATE_MASK, res->a2); 80 + priv->status.fail_image = res->a1; 81 + priv->status.current_image = res->a0; 82 + priv->status.error_location = 83 + FIELD_GET(RSU_ERROR_LOCATION_MASK, res->a3); 84 + priv->status.error_details = 85 + FIELD_GET(RSU_ERROR_DETAIL_MASK, res->a3); 86 + } else { 87 + dev_err(client->dev, "COMMAND_RSU_STATUS returned 0x%lX\n", 88 + res->a0); 89 + priv->status.version = 0; 90 + priv->status.state = 0; 91 + priv->status.fail_image = 0; 92 + priv->status.current_image = 0; 93 + priv->status.error_location = 0; 94 + priv->status.error_details = 0; 95 + } 96 + 97 + complete(&priv->completion); 98 + } 99 + 100 + /** 101 + * rsu_command_callback() - Update callback from Intel Service Layer 102 + * @client: pointer to client 103 + * @data: pointer to callback data structure 104 + * 105 + * Callback from Intel service layer for RSU commands. 
106 + */ 107 + static void rsu_command_callback(struct stratix10_svc_client *client, 108 + struct stratix10_svc_cb_data *data) 109 + { 110 + struct stratix10_rsu_priv *priv = client->priv; 111 + 112 + if (data->status != BIT(SVC_STATUS_RSU_OK)) 113 + dev_err(client->dev, "RSU returned status is %i\n", 114 + data->status); 115 + complete(&priv->completion); 116 + } 117 + 118 + /** 119 + * rsu_retry_callback() - Callback from Intel service layer for getting 120 + * the current image's retry counter from firmware 121 + * @client: pointer to client 122 + * @data: pointer to callback data structure 123 + * 124 + * Callback from Intel service layer for retry counter, which is used by 125 + * user to know how many times the images is still allowed to reload 126 + * itself before giving up and starting RSU fail-over flow. 127 + */ 128 + static void rsu_retry_callback(struct stratix10_svc_client *client, 129 + struct stratix10_svc_cb_data *data) 130 + { 131 + struct stratix10_rsu_priv *priv = client->priv; 132 + unsigned int *counter = (unsigned int *)data->kaddr1; 133 + 134 + if (data->status == BIT(SVC_STATUS_RSU_OK)) 135 + priv->retry_counter = *counter; 136 + else 137 + dev_err(client->dev, "Failed to get retry counter %i\n", 138 + data->status); 139 + 140 + complete(&priv->completion); 141 + } 142 + 143 + /** 144 + * rsu_send_msg() - send a message to Intel service layer 145 + * @priv: pointer to rsu private data 146 + * @command: RSU status or update command 147 + * @arg: the request argument, the bitstream address or notify status 148 + * @callback: function pointer for the callback (status or update) 149 + * 150 + * Start an Intel service layer transaction to perform the SMC call that 151 + * is necessary to get RSU boot log or set the address of bitstream to 152 + * boot after reboot. 153 + * 154 + * Returns 0 on success or -ETIMEDOUT on error. 
155 + */ 156 + static int rsu_send_msg(struct stratix10_rsu_priv *priv, 157 + enum stratix10_svc_command_code command, 158 + unsigned long arg, 159 + rsu_callback callback) 160 + { 161 + struct stratix10_svc_client_msg msg; 162 + int ret; 163 + 164 + mutex_lock(&priv->lock); 165 + reinit_completion(&priv->completion); 166 + priv->client.receive_cb = callback; 167 + 168 + msg.command = command; 169 + if (arg) 170 + msg.arg[0] = arg; 171 + 172 + ret = stratix10_svc_send(priv->chan, &msg); 173 + if (ret < 0) 174 + goto status_done; 175 + 176 + ret = wait_for_completion_interruptible_timeout(&priv->completion, 177 + RSU_TIMEOUT); 178 + if (!ret) { 179 + dev_err(priv->client.dev, 180 + "timeout waiting for SMC call\n"); 181 + ret = -ETIMEDOUT; 182 + goto status_done; 183 + } else if (ret < 0) { 184 + dev_err(priv->client.dev, 185 + "error %d waiting for SMC call\n", ret); 186 + goto status_done; 187 + } else { 188 + ret = 0; 189 + } 190 + 191 + status_done: 192 + stratix10_svc_done(priv->chan); 193 + mutex_unlock(&priv->lock); 194 + return ret; 195 + } 196 + 197 + /* 198 + * This driver exposes some optional features of the Intel Stratix 10 SoC FPGA. 199 + * The sysfs interfaces exposed here are FPGA Remote System Update (RSU) 200 + * related. They allow user space software to query the configuration system 201 + * status and to request optional reboot behavior specific to Intel FPGAs. 
202 + */ 203 + 204 + static ssize_t current_image_show(struct device *dev, 205 + struct device_attribute *attr, char *buf) 206 + { 207 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 208 + 209 + if (!priv) 210 + return -ENODEV; 211 + 212 + return sprintf(buf, "0x%08lx\n", priv->status.current_image); 213 + } 214 + 215 + static ssize_t fail_image_show(struct device *dev, 216 + struct device_attribute *attr, char *buf) 217 + { 218 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 219 + 220 + if (!priv) 221 + return -ENODEV; 222 + 223 + return sprintf(buf, "0x%08lx\n", priv->status.fail_image); 224 + } 225 + 226 + static ssize_t version_show(struct device *dev, struct device_attribute *attr, 227 + char *buf) 228 + { 229 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 230 + 231 + if (!priv) 232 + return -ENODEV; 233 + 234 + return sprintf(buf, "0x%08x\n", priv->status.version); 235 + } 236 + 237 + static ssize_t state_show(struct device *dev, struct device_attribute *attr, 238 + char *buf) 239 + { 240 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 241 + 242 + if (!priv) 243 + return -ENODEV; 244 + 245 + return sprintf(buf, "0x%08x\n", priv->status.state); 246 + } 247 + 248 + static ssize_t error_location_show(struct device *dev, 249 + struct device_attribute *attr, char *buf) 250 + { 251 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 252 + 253 + if (!priv) 254 + return -ENODEV; 255 + 256 + return sprintf(buf, "0x%08x\n", priv->status.error_location); 257 + } 258 + 259 + static ssize_t error_details_show(struct device *dev, 260 + struct device_attribute *attr, char *buf) 261 + { 262 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 263 + 264 + if (!priv) 265 + return -ENODEV; 266 + 267 + return sprintf(buf, "0x%08x\n", priv->status.error_details); 268 + } 269 + 270 + static ssize_t retry_counter_show(struct device *dev, 271 + struct device_attribute *attr, char *buf) 272 + { 273 + struct stratix10_rsu_priv *priv 
= dev_get_drvdata(dev); 274 + 275 + if (!priv) 276 + return -ENODEV; 277 + 278 + return sprintf(buf, "0x%08x\n", priv->retry_counter); 279 + } 280 + 281 + static ssize_t reboot_image_store(struct device *dev, 282 + struct device_attribute *attr, 283 + const char *buf, size_t count) 284 + { 285 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 286 + unsigned long address; 287 + int ret; 288 + 289 + if (priv == 0) 290 + return -ENODEV; 291 + 292 + ret = kstrtoul(buf, 0, &address); 293 + if (ret) 294 + return ret; 295 + 296 + ret = rsu_send_msg(priv, COMMAND_RSU_UPDATE, 297 + address, rsu_command_callback); 298 + if (ret) { 299 + dev_err(dev, "Error, RSU update returned %i\n", ret); 300 + return ret; 301 + } 302 + 303 + return count; 304 + } 305 + 306 + static ssize_t notify_store(struct device *dev, 307 + struct device_attribute *attr, 308 + const char *buf, size_t count) 309 + { 310 + struct stratix10_rsu_priv *priv = dev_get_drvdata(dev); 311 + unsigned long status; 312 + int ret; 313 + 314 + if (priv == 0) 315 + return -ENODEV; 316 + 317 + ret = kstrtoul(buf, 0, &status); 318 + if (ret) 319 + return ret; 320 + 321 + ret = rsu_send_msg(priv, COMMAND_RSU_NOTIFY, 322 + status, rsu_command_callback); 323 + if (ret) { 324 + dev_err(dev, "Error, RSU notify returned %i\n", ret); 325 + return ret; 326 + } 327 + 328 + /* to get the updated state */ 329 + ret = rsu_send_msg(priv, COMMAND_RSU_STATUS, 330 + 0, rsu_status_callback); 331 + if (ret) { 332 + dev_err(dev, "Error, getting RSU status %i\n", ret); 333 + return ret; 334 + } 335 + 336 + /* only 19.3 or late version FW supports retry counter feature */ 337 + if (FIELD_GET(RSU_FW_VERSION_MASK, priv->status.version)) { 338 + ret = rsu_send_msg(priv, COMMAND_RSU_RETRY, 339 + 0, rsu_retry_callback); 340 + if (ret) { 341 + dev_err(dev, 342 + "Error, getting RSU retry %i\n", ret); 343 + return ret; 344 + } 345 + } 346 + 347 + return count; 348 + } 349 + 350 + static DEVICE_ATTR_RO(current_image); 351 + static 
DEVICE_ATTR_RO(fail_image); 352 + static DEVICE_ATTR_RO(state); 353 + static DEVICE_ATTR_RO(version); 354 + static DEVICE_ATTR_RO(error_location); 355 + static DEVICE_ATTR_RO(error_details); 356 + static DEVICE_ATTR_RO(retry_counter); 357 + static DEVICE_ATTR_WO(reboot_image); 358 + static DEVICE_ATTR_WO(notify); 359 + 360 + static struct attribute *rsu_attrs[] = { 361 + &dev_attr_current_image.attr, 362 + &dev_attr_fail_image.attr, 363 + &dev_attr_state.attr, 364 + &dev_attr_version.attr, 365 + &dev_attr_error_location.attr, 366 + &dev_attr_error_details.attr, 367 + &dev_attr_retry_counter.attr, 368 + &dev_attr_reboot_image.attr, 369 + &dev_attr_notify.attr, 370 + NULL 371 + }; 372 + 373 + ATTRIBUTE_GROUPS(rsu); 374 + 375 + static int stratix10_rsu_probe(struct platform_device *pdev) 376 + { 377 + struct device *dev = &pdev->dev; 378 + struct stratix10_rsu_priv *priv; 379 + int ret; 380 + 381 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 382 + if (!priv) 383 + return -ENOMEM; 384 + 385 + priv->client.dev = dev; 386 + priv->client.receive_cb = NULL; 387 + priv->client.priv = priv; 388 + priv->status.current_image = 0; 389 + priv->status.fail_image = 0; 390 + priv->status.error_location = 0; 391 + priv->status.error_details = 0; 392 + priv->status.version = 0; 393 + priv->status.state = 0; 394 + priv->retry_counter = INVALID_RETRY_COUNTER; 395 + 396 + mutex_init(&priv->lock); 397 + priv->chan = stratix10_svc_request_channel_byname(&priv->client, 398 + SVC_CLIENT_RSU); 399 + if (IS_ERR(priv->chan)) { 400 + dev_err(dev, "couldn't get service channel %s\n", 401 + SVC_CLIENT_RSU); 402 + return PTR_ERR(priv->chan); 403 + } 404 + 405 + init_completion(&priv->completion); 406 + platform_set_drvdata(pdev, priv); 407 + 408 + /* get the initial state from firmware */ 409 + ret = rsu_send_msg(priv, COMMAND_RSU_STATUS, 410 + 0, rsu_status_callback); 411 + if (ret) { 412 + dev_err(dev, "Error, getting RSU status %i\n", ret); 413 + stratix10_svc_free_channel(priv->chan); 
414 + } 415 + 416 + /* only 19.3 or late version FW supports retry counter feature */ 417 + if (FIELD_GET(RSU_FW_VERSION_MASK, priv->status.version)) { 418 + ret = rsu_send_msg(priv, COMMAND_RSU_RETRY, 0, 419 + rsu_retry_callback); 420 + if (ret) { 421 + dev_err(dev, 422 + "Error, getting RSU retry %i\n", ret); 423 + stratix10_svc_free_channel(priv->chan); 424 + } 425 + } 426 + 427 + return ret; 428 + } 429 + 430 + static int stratix10_rsu_remove(struct platform_device *pdev) 431 + { 432 + struct stratix10_rsu_priv *priv = platform_get_drvdata(pdev); 433 + 434 + stratix10_svc_free_channel(priv->chan); 435 + return 0; 436 + } 437 + 438 + static struct platform_driver stratix10_rsu_driver = { 439 + .probe = stratix10_rsu_probe, 440 + .remove = stratix10_rsu_remove, 441 + .driver = { 442 + .name = "stratix10-rsu", 443 + .dev_groups = rsu_groups, 444 + }, 445 + }; 446 + 447 + module_platform_driver(stratix10_rsu_driver); 448 + 449 + MODULE_LICENSE("GPL v2"); 450 + MODULE_DESCRIPTION("Intel Remote System Update Driver"); 451 + MODULE_AUTHOR("Richard Gong <richard.gong@intel.com>");
+73 -3
drivers/firmware/stratix10-svc.c
··· 38 38 #define FPGA_CONFIG_DATA_CLAIM_TIMEOUT_MS 200 39 39 #define FPGA_CONFIG_STATUS_TIMEOUT_SEC 30 40 40 41 + /* stratix10 service layer clients */ 42 + #define STRATIX10_RSU "stratix10-rsu" 43 + 41 44 typedef void (svc_invoke_fn)(unsigned long, unsigned long, unsigned long, 42 45 unsigned long, unsigned long, unsigned long, 43 46 unsigned long, unsigned long, 44 47 struct arm_smccc_res *); 45 48 struct stratix10_svc_chan; 49 + 50 + /** 51 + * struct stratix10_svc - svc private data 52 + * @stratix10_svc_rsu: pointer to stratix10 RSU device 53 + */ 54 + struct stratix10_svc { 55 + struct platform_device *stratix10_svc_rsu; 56 + }; 46 57 47 58 /** 48 59 * struct stratix10_svc_sh_memory - service shared memory structure ··· 307 296 cb_data->status = BIT(SVC_STATUS_RECONFIG_COMPLETED); 308 297 break; 309 298 case COMMAND_RSU_UPDATE: 299 + case COMMAND_RSU_NOTIFY: 310 300 cb_data->status = BIT(SVC_STATUS_RSU_OK); 301 + break; 302 + case COMMAND_RSU_RETRY: 303 + cb_data->status = BIT(SVC_STATUS_RSU_OK); 304 + cb_data->kaddr1 = &res.a1; 311 305 break; 312 306 default: 313 307 pr_warn("it shouldn't happen\n"); ··· 402 386 a1 = pdata->arg[0]; 403 387 a2 = 0; 404 388 break; 389 + case COMMAND_RSU_NOTIFY: 390 + a0 = INTEL_SIP_SMC_RSU_NOTIFY; 391 + a1 = pdata->arg[0]; 392 + a2 = 0; 393 + break; 394 + case COMMAND_RSU_RETRY: 395 + a0 = INTEL_SIP_SMC_RSU_RETRY_COUNTER; 396 + a1 = 0; 397 + a2 = 0; 398 + break; 405 399 default: 406 400 pr_warn("it shouldn't happen\n"); 407 401 break; ··· 464 438 pr_debug("%s: STATUS_REJECTED\n", __func__); 465 439 break; 466 440 case INTEL_SIP_SMC_FPGA_CONFIG_STATUS_ERROR: 441 + case INTEL_SIP_SMC_RSU_ERROR: 467 442 pr_err("%s: STATUS_ERROR\n", __func__); 443 + switch (pdata->command) { 444 + /* for FPGA mgr */ 445 + case COMMAND_RECONFIG_DATA_CLAIM: 446 + case COMMAND_RECONFIG: 447 + case COMMAND_RECONFIG_DATA_SUBMIT: 448 + case COMMAND_RECONFIG_STATUS: 449 + cbdata->status = 450 + BIT(SVC_STATUS_RECONFIG_ERROR); 451 + break; 452 + 453 + /* 
for RSU */ 454 + case COMMAND_RSU_STATUS: 455 + case COMMAND_RSU_UPDATE: 456 + case COMMAND_RSU_NOTIFY: 457 + case COMMAND_RSU_RETRY: 458 + cbdata->status = 459 + BIT(SVC_STATUS_RSU_ERROR); 460 + break; 461 + } 462 + 468 463 cbdata->status = BIT(SVC_STATUS_RECONFIG_ERROR); 469 464 cbdata->kaddr1 = NULL; 470 465 cbdata->kaddr2 = NULL; ··· 577 530 578 531 if (!sh_memory->addr || !sh_memory->size) { 579 532 dev_err(dev, 580 - "fails to get shared memory info from secure world\n"); 533 + "failed to get shared memory info from secure world\n"); 581 534 return -ENOMEM; 582 535 } 583 536 ··· 815 768 "svc_smc_hvc_thread"); 816 769 if (IS_ERR(chan->ctrl->task)) { 817 770 dev_err(chan->ctrl->dev, 818 - "fails to create svc_smc_hvc_thread\n"); 771 + "failed to create svc_smc_hvc_thread\n"); 819 772 kfree(p_data); 820 773 return -EINVAL; 821 774 } ··· 960 913 struct stratix10_svc_chan *chans; 961 914 struct gen_pool *genpool; 962 915 struct stratix10_svc_sh_memory *sh_memory; 916 + struct stratix10_svc *svc; 917 + 963 918 svc_invoke_fn *invoke_fn; 964 919 size_t fifo_size; 965 920 int ret; ··· 1006 957 fifo_size = sizeof(struct stratix10_svc_data) * SVC_NUM_DATA_IN_FIFO; 1007 958 ret = kfifo_alloc(&controller->svc_fifo, fifo_size, GFP_KERNEL); 1008 959 if (ret) { 1009 - dev_err(dev, "fails to allocate FIFO\n"); 960 + dev_err(dev, "failed to allocate FIFO\n"); 1010 961 return ret; 1011 962 } 1012 963 spin_lock_init(&controller->svc_fifo_lock); ··· 1024 975 list_add_tail(&controller->node, &svc_ctrl); 1025 976 platform_set_drvdata(pdev, controller); 1026 977 978 + /* add svc client device(s) */ 979 + svc = devm_kzalloc(dev, sizeof(*svc), GFP_KERNEL); 980 + if (!svc) 981 + return -ENOMEM; 982 + 983 + svc->stratix10_svc_rsu = platform_device_alloc(STRATIX10_RSU, 0); 984 + if (!svc->stratix10_svc_rsu) { 985 + dev_err(dev, "failed to allocate %s device\n", STRATIX10_RSU); 986 + return -ENOMEM; 987 + } 988 + 989 + ret = platform_device_add(svc->stratix10_svc_rsu); 990 + if (ret) { 
991 + platform_device_put(svc->stratix10_svc_rsu); 992 + return ret; 993 + } 994 + dev_set_drvdata(dev, svc); 995 + 1027 996 pr_info("Intel Service Layer Driver Initialized\n"); 1028 997 1029 998 return ret; ··· 1049 982 1050 983 static int stratix10_svc_drv_remove(struct platform_device *pdev) 1051 984 { 985 + struct stratix10_svc *svc = dev_get_drvdata(&pdev->dev); 1052 986 struct stratix10_svc_controller *ctrl = platform_get_drvdata(pdev); 987 + 988 + platform_device_unregister(svc->stratix10_svc_rsu); 1053 989 1054 990 kfifo_free(&ctrl->svc_fifo); 1055 991 if (ctrl->task) {
+3 -3
drivers/fpga/Kconfig
··· 46 46 using the passive serial interface over SPI. 47 47 48 48 config FPGA_MGR_ALTERA_CVP 49 - tristate "Altera Arria-V/Cyclone-V/Stratix-V CvP FPGA Manager" 49 + tristate "Altera CvP FPGA Manager" 50 50 depends on PCI 51 51 help 52 - FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V 53 - and Arria 10 Altera FPGAs using the CvP interface over PCIe. 52 + FPGA manager driver support for Arria-V, Cyclone-V, Stratix-V, 53 + Arria 10 and Stratix10 Altera FPGAs using the CvP interface over PCIe. 54 54 55 55 config FPGA_MGR_ZYNQ_FPGA 56 56 tristate "Xilinx Zynq FPGA"
+2 -1
drivers/fpga/Makefile
··· 39 39 obj-$(CONFIG_FPGA_DFL_FME_REGION) += dfl-fme-region.o 40 40 obj-$(CONFIG_FPGA_DFL_AFU) += dfl-afu.o 41 41 42 - dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o 42 + dfl-fme-objs := dfl-fme-main.o dfl-fme-pr.o dfl-fme-error.o 43 43 dfl-afu-objs := dfl-afu-main.o dfl-afu-region.o dfl-afu-dma-region.o 44 + dfl-afu-objs += dfl-afu-error.o 44 45 45 46 # Drivers for FPGAs which implement DFL 46 47 obj-$(CONFIG_FPGA_DFL_PCI) += dfl-pci.o
+270 -72
drivers/fpga/altera-cvp.c
··· 22 22 #define TIMEOUT_US 2000 /* CVP STATUS timeout for USERMODE polling */ 23 23 24 24 /* Vendor Specific Extended Capability Registers */ 25 - #define VSE_PCIE_EXT_CAP_ID 0x200 25 + #define VSE_PCIE_EXT_CAP_ID 0x0 26 26 #define VSE_PCIE_EXT_CAP_ID_VAL 0x000b /* 16bit */ 27 27 28 - #define VSE_CVP_STATUS 0x21c /* 32bit */ 28 + #define VSE_CVP_STATUS 0x1c /* 32bit */ 29 29 #define VSE_CVP_STATUS_CFG_RDY BIT(18) /* CVP_CONFIG_READY */ 30 30 #define VSE_CVP_STATUS_CFG_ERR BIT(19) /* CVP_CONFIG_ERROR */ 31 31 #define VSE_CVP_STATUS_CVP_EN BIT(20) /* ctrl block is enabling CVP */ ··· 33 33 #define VSE_CVP_STATUS_CFG_DONE BIT(23) /* CVP_CONFIG_DONE */ 34 34 #define VSE_CVP_STATUS_PLD_CLK_IN_USE BIT(24) /* PLD_CLK_IN_USE */ 35 35 36 - #define VSE_CVP_MODE_CTRL 0x220 /* 32bit */ 36 + #define VSE_CVP_MODE_CTRL 0x20 /* 32bit */ 37 37 #define VSE_CVP_MODE_CTRL_CVP_MODE BIT(0) /* CVP (1) or normal mode (0) */ 38 38 #define VSE_CVP_MODE_CTRL_HIP_CLK_SEL BIT(1) /* PMA (1) or fabric clock (0) */ 39 39 #define VSE_CVP_MODE_CTRL_NUMCLKS_OFF 8 /* NUMCLKS bits offset */ 40 40 #define VSE_CVP_MODE_CTRL_NUMCLKS_MASK GENMASK(15, 8) 41 41 42 - #define VSE_CVP_DATA 0x228 /* 32bit */ 43 - #define VSE_CVP_PROG_CTRL 0x22c /* 32bit */ 42 + #define VSE_CVP_DATA 0x28 /* 32bit */ 43 + #define VSE_CVP_PROG_CTRL 0x2c /* 32bit */ 44 44 #define VSE_CVP_PROG_CTRL_CONFIG BIT(0) 45 45 #define VSE_CVP_PROG_CTRL_START_XFER BIT(1) 46 + #define VSE_CVP_PROG_CTRL_MASK GENMASK(1, 0) 46 47 47 - #define VSE_UNCOR_ERR_STATUS 0x234 /* 32bit */ 48 + #define VSE_UNCOR_ERR_STATUS 0x34 /* 32bit */ 48 49 #define VSE_UNCOR_ERR_CVP_CFG_ERR BIT(5) /* CVP_CONFIG_ERROR_LATCHED */ 50 + 51 + #define V1_VSEC_OFFSET 0x200 /* Vendor Specific Offset V1 */ 52 + /* V2 Defines */ 53 + #define VSE_CVP_TX_CREDITS 0x49 /* 8bit */ 54 + 55 + #define V2_CREDIT_TIMEOUT_US 20000 56 + #define V2_CHECK_CREDIT_US 10 57 + #define V2_POLL_TIMEOUT_US 1000000 58 + #define V2_USER_TIMEOUT_US 500000 59 + 60 + #define V1_POLL_TIMEOUT_US 10 49 
61 50 62 #define DRV_NAME "altera-cvp" 51 63 #define ALTERA_CVP_MGR_NAME "Altera CvP FPGA Manager" 52 64 65 + /* Write block sizes */ 66 + #define ALTERA_CVP_V1_SIZE 4 67 + #define ALTERA_CVP_V2_SIZE 4096 68 + 53 69 /* Optional CvP config error status check for debugging */ 54 70 static bool altera_cvp_chkcfg; 71 + 72 + struct cvp_priv; 55 73 56 74 struct altera_cvp_conf { 57 75 struct fpga_manager *mgr; 58 76 struct pci_dev *pci_dev; 59 77 void __iomem *map; 60 - void (*write_data)(struct altera_cvp_conf *, u32); 78 + void (*write_data)(struct altera_cvp_conf *conf, 79 + u32 data); 61 80 char mgr_name[64]; 62 81 u8 numclks; 82 + u32 sent_packets; 83 + u32 vsec_offset; 84 + const struct cvp_priv *priv; 63 85 }; 86 + 87 + struct cvp_priv { 88 + void (*switch_clk)(struct altera_cvp_conf *conf); 89 + int (*clear_state)(struct altera_cvp_conf *conf); 90 + int (*wait_credit)(struct fpga_manager *mgr, u32 blocks); 91 + size_t block_size; 92 + int poll_time_us; 93 + int user_time_us; 94 + }; 95 + 96 + static int altera_read_config_byte(struct altera_cvp_conf *conf, 97 + int where, u8 *val) 98 + { 99 + return pci_read_config_byte(conf->pci_dev, conf->vsec_offset + where, 100 + val); 101 + } 102 + 103 + static int altera_read_config_dword(struct altera_cvp_conf *conf, 104 + int where, u32 *val) 105 + { 106 + return pci_read_config_dword(conf->pci_dev, conf->vsec_offset + where, 107 + val); 108 + } 109 + 110 + static int altera_write_config_dword(struct altera_cvp_conf *conf, 111 + int where, u32 val) 112 + { 113 + return pci_write_config_dword(conf->pci_dev, conf->vsec_offset + where, 114 + val); 115 + } 64 116 65 117 static enum fpga_mgr_states altera_cvp_state(struct fpga_manager *mgr) 66 118 { 67 119 struct altera_cvp_conf *conf = mgr->priv; 68 120 u32 status; 69 121 70 - pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &status); 122 + altera_read_config_dword(conf, VSE_CVP_STATUS, &status); 71 123 72 124 if (status & VSE_CVP_STATUS_CFG_DONE) 73 125 return 
FPGA_MGR_STATE_OPERATING; ··· 137 85 138 86 static void altera_cvp_write_data_config(struct altera_cvp_conf *conf, u32 val) 139 87 { 140 - pci_write_config_dword(conf->pci_dev, VSE_CVP_DATA, val); 88 + pci_write_config_dword(conf->pci_dev, conf->vsec_offset + VSE_CVP_DATA, 89 + val); 141 90 } 142 91 143 92 /* switches between CvP clock and internal clock */ ··· 148 95 u32 val; 149 96 150 97 /* set 1 CVP clock cycle for every CVP Data Register Write */ 151 - pci_read_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, &val); 98 + altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val); 152 99 val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK; 153 100 val |= 1 << VSE_CVP_MODE_CTRL_NUMCLKS_OFF; 154 - pci_write_config_dword(conf->pci_dev, VSE_CVP_MODE_CTRL, val); 101 + altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val); 155 102 156 103 for (i = 0; i < CVP_DUMMY_WR; i++) 157 104 conf->write_data(conf, 0); /* dummy data, could be any value */ ··· 168 115 retries++; 169 116 170 117 do { 171 - pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val); 118 + altera_read_config_dword(conf, VSE_CVP_STATUS, &val); 172 119 if ((val & status_mask) == status_val) 173 120 return 0; 174 121 ··· 179 126 return -ETIMEDOUT; 180 127 } 181 128 129 + static int altera_cvp_chk_error(struct fpga_manager *mgr, size_t bytes) 130 + { 131 + struct altera_cvp_conf *conf = mgr->priv; 132 + u32 val; 133 + int ret; 134 + 135 + /* STEP 10 (optional) - check CVP_CONFIG_ERROR flag */ 136 + ret = altera_read_config_dword(conf, VSE_CVP_STATUS, &val); 137 + if (ret || (val & VSE_CVP_STATUS_CFG_ERR)) { 138 + dev_err(&mgr->dev, "CVP_CONFIG_ERROR after %zu bytes!\n", 139 + bytes); 140 + return -EPROTO; 141 + } 142 + return 0; 143 + } 144 + 145 + /* 146 + * CvP Version2 Functions 147 + * Recent Intel FPGAs use a credit mechanism to throttle incoming 148 + * bitstreams and a different method of clearing the state. 
+ */
+
+static int altera_cvp_v2_clear_state(struct altera_cvp_conf *conf)
+{
+        u32 val;
+        int ret;
+
+        /* Clear the START_XFER and CVP_CONFIG bits */
+        ret = altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
+        if (ret) {
+                dev_err(&conf->pci_dev->dev,
+                        "Error reading CVP Program Control Register\n");
+                return ret;
+        }
+
+        val &= ~VSE_CVP_PROG_CTRL_MASK;
+        ret = altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
+        if (ret) {
+                dev_err(&conf->pci_dev->dev,
+                        "Error writing CVP Program Control Register\n");
+                return ret;
+        }
+
+        return altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0,
+                                      conf->priv->poll_time_us);
+}
+
+static int altera_cvp_v2_wait_for_credit(struct fpga_manager *mgr,
+                                         u32 blocks)
+{
+        u32 timeout = V2_CREDIT_TIMEOUT_US / V2_CHECK_CREDIT_US;
+        struct altera_cvp_conf *conf = mgr->priv;
+        int ret;
+        u8 val;
+
+        do {
+                ret = altera_read_config_byte(conf, VSE_CVP_TX_CREDITS, &val);
+                if (ret) {
+                        dev_err(&conf->pci_dev->dev,
+                                "Error reading CVP Credit Register\n");
+                        return ret;
+                }
+
+                /* Return if there is space in FIFO */
+                if (val - (u8)conf->sent_packets)
+                        return 0;
+
+                ret = altera_cvp_chk_error(mgr, blocks * ALTERA_CVP_V2_SIZE);
+                if (ret) {
+                        dev_err(&conf->pci_dev->dev,
+                                "CE Bit error credit reg[0x%x]:sent[0x%x]\n",
+                                val, conf->sent_packets);
+                        return -EAGAIN;
+                }
+
+                /* Limit the check credit byte traffic */
+                usleep_range(V2_CHECK_CREDIT_US, V2_CHECK_CREDIT_US + 1);
+        } while (timeout--);
+
+        dev_err(&conf->pci_dev->dev, "Timeout waiting for credit\n");
+        return -ETIMEDOUT;
+}
+
+static int altera_cvp_send_block(struct altera_cvp_conf *conf,
+                                 const u32 *data, size_t len)
+{
+        u32 mask, words = len / sizeof(u32);
+        int i, remainder;
+
+        for (i = 0; i < words; i++)
+                conf->write_data(conf, *data++);
+
+        /* write up to 3 trailing bytes, if any */
+        remainder = len % sizeof(u32);
+        if (remainder) {
+                mask = BIT(remainder * 8) - 1;
+                if (mask)
+                        conf->write_data(conf, *data & mask);
+        }
+
+        return 0;
+}
+
 static int altera_cvp_teardown(struct fpga_manager *mgr,
                                struct fpga_image_info *info)
 {
         struct altera_cvp_conf *conf = mgr->priv;
-        struct pci_dev *pdev = conf->pci_dev;
         int ret;
         u32 val;
 
         /* STEP 12 - reset START_XFER bit */
-        pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
+        altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
         val &= ~VSE_CVP_PROG_CTRL_START_XFER;
-        pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
+        altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
 
         /* STEP 13 - reset CVP_CONFIG bit */
         val &= ~VSE_CVP_PROG_CTRL_CONFIG;
-        pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
+        altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
 
         /*
          * STEP 14
          * - set CVP_NUMCLKS to 1 and then issue CVP_DUMMY_WR dummy
          *   writes to the HIP
          */
-        altera_cvp_dummy_write(conf); /* from CVP clock to internal clock */
+        if (conf->priv->switch_clk)
+                conf->priv->switch_clk(conf);
 
         /* STEP 15 - poll CVP_CONFIG_READY bit for 0 with 10us timeout */
-        ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0, 10);
+        ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY, 0,
+                                     conf->priv->poll_time_us);
         if (ret)
                 dev_err(&mgr->dev, "CFG_RDY == 0 timeout\n");
···
                                  const char *buf, size_t count)
 {
         struct altera_cvp_conf *conf = mgr->priv;
-        struct pci_dev *pdev = conf->pci_dev;
         u32 iflags, val;
         int ret;
···
         conf->numclks = 1; /* for uncompressed and unencrypted images */
 
         /* STEP 1 - read CVP status and check CVP_EN flag */
-        pci_read_config_dword(pdev, VSE_CVP_STATUS, &val);
+        altera_read_config_dword(conf, VSE_CVP_STATUS, &val);
         if (!(val & VSE_CVP_STATUS_CVP_EN)) {
                 dev_err(&mgr->dev, "CVP mode off: 0x%04x\n", val);
                 return -ENODEV;
···
          * - set HIP_CLK_SEL and CVP_MODE (must be set in the order mentioned)
          */
         /* switch from fabric to PMA clock */
-        pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
+        altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
         val |= VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
-        pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
+        altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);
 
         /* set CVP mode */
-        pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
+        altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
         val |= VSE_CVP_MODE_CTRL_CVP_MODE;
-        pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
+        altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);
 
         /*
          * STEP 3
          * - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
          */
-        altera_cvp_dummy_write(conf);
+        if (conf->priv->switch_clk)
+                conf->priv->switch_clk(conf);
+
+        if (conf->priv->clear_state) {
+                ret = conf->priv->clear_state(conf);
+                if (ret) {
+                        dev_err(&mgr->dev, "Problem clearing out state\n");
+                        return ret;
+                }
+        }
+
+        conf->sent_packets = 0;
 
         /* STEP 4 - set CVP_CONFIG bit */
-        pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
+        altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
         /* request control block to begin transfer using CVP */
         val |= VSE_CVP_PROG_CTRL_CONFIG;
-        pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
+        altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
 
-        /* STEP 5 - poll CVP_CONFIG READY for 1 with 10us timeout */
+        /* STEP 5 - poll CVP_CONFIG READY for 1 with timeout */
         ret = altera_cvp_wait_status(conf, VSE_CVP_STATUS_CFG_RDY,
-                                     VSE_CVP_STATUS_CFG_RDY, 10);
+                                     VSE_CVP_STATUS_CFG_RDY,
+                                     conf->priv->poll_time_us);
         if (ret) {
                 dev_warn(&mgr->dev, "CFG_RDY == 1 timeout\n");
                 return ret;
···
          * STEP 6
          * - set CVP_NUMCLKS to 1 and issue CVP_DUMMY_WR dummy writes to the HIP
          */
-        altera_cvp_dummy_write(conf);
+        if (conf->priv->switch_clk)
+                conf->priv->switch_clk(conf);
+
+        if (altera_cvp_chkcfg) {
+                ret = altera_cvp_chk_error(mgr, 0);
+                if (ret) {
+                        dev_warn(&mgr->dev, "CFG_RDY == 1 timeout\n");
+                        return ret;
+                }
+        }
 
         /* STEP 7 - set START_XFER */
-        pci_read_config_dword(pdev, VSE_CVP_PROG_CTRL, &val);
+        altera_read_config_dword(conf, VSE_CVP_PROG_CTRL, &val);
         val |= VSE_CVP_PROG_CTRL_START_XFER;
-        pci_write_config_dword(pdev, VSE_CVP_PROG_CTRL, val);
+        altera_write_config_dword(conf, VSE_CVP_PROG_CTRL, val);
 
         /* STEP 8 - start transfer (set CVP_NUMCLKS for bitstream) */
-        pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
-        val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
-        val |= conf->numclks << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
-        pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
-
-        return 0;
-}
-
-static inline int altera_cvp_chk_error(struct fpga_manager *mgr, size_t bytes)
-{
-        struct altera_cvp_conf *conf = mgr->priv;
-        u32 val;
-
-        /* STEP 10 (optional) - check CVP_CONFIG_ERROR flag */
-        pci_read_config_dword(conf->pci_dev, VSE_CVP_STATUS, &val);
-        if (val & VSE_CVP_STATUS_CFG_ERR) {
-                dev_err(&mgr->dev, "CVP_CONFIG_ERROR after %zu bytes!\n",
-                        bytes);
-                return -EPROTO;
+        if (conf->priv->switch_clk) {
+                altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
+                val &= ~VSE_CVP_MODE_CTRL_NUMCLKS_MASK;
+                val |= conf->numclks << VSE_CVP_MODE_CTRL_NUMCLKS_OFF;
+                altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);
         }
         return 0;
 }
···
                             size_t count)
 {
         struct altera_cvp_conf *conf = mgr->priv;
+        size_t done, remaining, len;
         const u32 *data;
-        size_t done, remaining;
         int status = 0;
-        u32 mask;
 
         /* STEP 9 - write 32-bit data from RBF file to CVP data register */
         data = (u32 *)buf;
         remaining = count;
         done = 0;
 
-        while (remaining >= 4) {
-                conf->write_data(conf, *data++);
-                done += 4;
-                remaining -= 4;
+        while (remaining) {
+                /* Use credit throttling if available */
+                if (conf->priv->wait_credit) {
+                        status = conf->priv->wait_credit(mgr, done);
+                        if (status) {
+                                dev_err(&conf->pci_dev->dev,
+                                        "Wait Credit ERR: 0x%x\n", status);
+                                return status;
+                        }
+                }
+
+                len = min(conf->priv->block_size, remaining);
+                altera_cvp_send_block(conf, data, len);
+                data += len / sizeof(u32);
+                done += len;
+                remaining -= len;
+                conf->sent_packets++;
 
                 /*
                  * STEP 10 (optional) and STEP 11
···
                 }
         }
 
-        /* write up to 3 trailing bytes, if any */
-        mask = BIT(remaining * 8) - 1;
-        if (mask)
-                conf->write_data(conf, *data & mask);
-
         if (altera_cvp_chkcfg)
                 status = altera_cvp_chk_error(mgr, count);
···
                                      struct fpga_image_info *info)
 {
         struct altera_cvp_conf *conf = mgr->priv;
-        struct pci_dev *pdev = conf->pci_dev;
+        u32 mask, val;
         int ret;
-        u32 mask;
-        u32 val;
 
         ret = altera_cvp_teardown(mgr, info);
         if (ret)
                 return ret;
 
         /* STEP 16 - check CVP_CONFIG_ERROR_LATCHED bit */
-        pci_read_config_dword(pdev, VSE_UNCOR_ERR_STATUS, &val);
+        altera_read_config_dword(conf, VSE_UNCOR_ERR_STATUS, &val);
         if (val & VSE_UNCOR_ERR_CVP_CFG_ERR) {
                 dev_err(&mgr->dev, "detected CVP_CONFIG_ERROR_LATCHED!\n");
                 return -EPROTO;
         }
 
         /* STEP 17 - reset CVP_MODE and HIP_CLK_SEL bit */
-        pci_read_config_dword(pdev, VSE_CVP_MODE_CTRL, &val);
+        altera_read_config_dword(conf, VSE_CVP_MODE_CTRL, &val);
         val &= ~VSE_CVP_MODE_CTRL_HIP_CLK_SEL;
         val &= ~VSE_CVP_MODE_CTRL_CVP_MODE;
-        pci_write_config_dword(pdev, VSE_CVP_MODE_CTRL, val);
+        altera_write_config_dword(conf, VSE_CVP_MODE_CTRL, val);
 
         /* STEP 18 - poll PLD_CLK_IN_USE and USER_MODE bits */
         mask = VSE_CVP_STATUS_PLD_CLK_IN_USE | VSE_CVP_STATUS_USERMODE;
-        ret = altera_cvp_wait_status(conf, mask, mask, TIMEOUT_US);
+        ret = altera_cvp_wait_status(conf, mask, mask,
+                                     conf->priv->user_time_us);
         if (ret)
                 dev_err(&mgr->dev, "PLD_CLK_IN_USE|USERMODE timeout\n");
···
         .write_init = altera_cvp_write_init,
         .write = altera_cvp_write,
         .write_complete = altera_cvp_write_complete,
+};
+
+static const struct cvp_priv cvp_priv_v1 = {
+        .switch_clk = altera_cvp_dummy_write,
+        .block_size = ALTERA_CVP_V1_SIZE,
+        .poll_time_us = V1_POLL_TIMEOUT_US,
+        .user_time_us = TIMEOUT_US,
+};
+
+static const struct cvp_priv cvp_priv_v2 = {
+        .clear_state = altera_cvp_v2_clear_state,
+        .wait_credit = altera_cvp_v2_wait_for_credit,
+        .block_size = ALTERA_CVP_V2_SIZE,
+        .poll_time_us = V2_POLL_TIMEOUT_US,
+        .user_time_us = V2_USER_TIMEOUT_US,
 };
 
 static ssize_t chkcfg_show(struct device_driver *dev, char *buf)
···
 {
         struct altera_cvp_conf *conf;
         struct fpga_manager *mgr;
+        int ret, offset;
         u16 cmd, val;
         u32 regval;
-        int ret;
+
+        /* Discover the Vendor Specific Offset for this device */
+        offset = pci_find_next_ext_capability(pdev, 0, PCI_EXT_CAP_ID_VNDR);
+        if (!offset) {
+                dev_err(&pdev->dev, "No Vendor Specific Offset.\n");
+                return -ENODEV;
+        }
 
         /*
          * First check if this is the expected FPGA device. PCI config
          * space access works without enabling the PCI device, memory
          * space access is enabled further down.
          */
-        pci_read_config_word(pdev, VSE_PCIE_EXT_CAP_ID, &val);
+        pci_read_config_word(pdev, offset + VSE_PCIE_EXT_CAP_ID, &val);
         if (val != VSE_PCIE_EXT_CAP_ID_VAL) {
                 dev_err(&pdev->dev, "Wrong EXT_CAP_ID value 0x%x\n", val);
                 return -ENODEV;
         }
 
-        pci_read_config_dword(pdev, VSE_CVP_STATUS, &regval);
+        pci_read_config_dword(pdev, offset + VSE_CVP_STATUS, &regval);
         if (!(regval & VSE_CVP_STATUS_CVP_EN)) {
                 dev_err(&pdev->dev,
                         "CVP is disabled for this device: CVP_STATUS Reg 0x%x\n",
···
         conf = devm_kzalloc(&pdev->dev, sizeof(*conf), GFP_KERNEL);
         if (!conf)
                 return -ENOMEM;
+
+        conf->vsec_offset = offset;
 
         /*
          * Enable memory BAR access. We cannot use pci_enable_device() here
···
         conf->pci_dev = pdev;
         conf->write_data = altera_cvp_write_data_iomem;
+
+        if (conf->vsec_offset == V1_VSEC_OFFSET)
+                conf->priv = &cvp_priv_v1;
+        else
+                conf->priv = &cvp_priv_v2;
 
         conf->map = pci_iomap(pdev, CVP_BAR, 0);
         if (!conf->map) {
+3 -1
drivers/fpga/altera-pr-ip-core-plat.c
···
 {
         struct device *dev = &pdev->dev;
 
-        return alt_pr_unregister(dev);
+        alt_pr_unregister(dev);
+
+        return 0;
 }
 
 static const struct of_device_id alt_pr_of_match[] = {
+1 -3
drivers/fpga/altera-pr-ip-core.c
···
 }
 EXPORT_SYMBOL_GPL(alt_pr_register);
 
-int alt_pr_unregister(struct device *dev)
+void alt_pr_unregister(struct device *dev)
 {
         struct fpga_manager *mgr = dev_get_drvdata(dev);
 
         dev_dbg(dev, "%s\n", __func__);
 
         fpga_mgr_unregister(mgr);
-
-        return 0;
 }
 EXPORT_SYMBOL_GPL(alt_pr_unregister);
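The two hunks above implement one common kernel cleanup idiom: once a teardown helper like `alt_pr_unregister` can no longer fail, it returns `void`, and the platform `remove()` callback simply returns 0. A small illustrative sketch of the shape (all names here are mine, not the driver's):

```c
/* Sketch: a cleanup helper that cannot fail returns void, and the
 * remove() callback that wraps it returns 0 unconditionally. */
int fake_unregistered;

void fake_unregister(void *mgr)
{
        (void)mgr;
        fake_unregistered = 1;  /* teardown work that cannot fail */
}

int fake_remove(void *pdev)
{
        fake_unregister(pdev);
        return 0;               /* nothing left for remove() to report */
}
```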
+230
drivers/fpga/dfl-afu-error.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Driver for FPGA Accelerated Function Unit (AFU) Error Reporting
+ *
+ * Copyright 2019 Intel Corporation, Inc.
+ *
+ * Authors:
+ *   Wu Hao <hao.wu@linux.intel.com>
+ *   Xiao Guangrong <guangrong.xiao@linux.intel.com>
+ *   Joseph Grecco <joe.grecco@intel.com>
+ *   Enno Luebbers <enno.luebbers@intel.com>
+ *   Tim Whisonant <tim.whisonant@intel.com>
+ *   Ananda Ravuri <ananda.ravuri@intel.com>
+ *   Mitchel Henry <henry.mitchel@intel.com>
+ */
+
+#include <linux/uaccess.h>
+
+#include "dfl-afu.h"
+
+#define PORT_ERROR_MASK         0x8
+#define PORT_ERROR              0x10
+#define PORT_FIRST_ERROR        0x18
+#define PORT_MALFORMED_REQ0     0x20
+#define PORT_MALFORMED_REQ1     0x28
+
+#define ERROR_MASK              GENMASK_ULL(63, 0)
+
+/* mask or unmask port errors by the error mask register. */
+static void __afu_port_err_mask(struct device *dev, bool mask)
+{
+        void __iomem *base;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+        writeq(mask ? ERROR_MASK : 0, base + PORT_ERROR_MASK);
+}
+
+static void afu_port_err_mask(struct device *dev, bool mask)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+
+        mutex_lock(&pdata->lock);
+        __afu_port_err_mask(dev, mask);
+        mutex_unlock(&pdata->lock);
+}
+
+/* clear port errors. */
+static int afu_port_err_clear(struct device *dev, u64 err)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        struct platform_device *pdev = to_platform_device(dev);
+        void __iomem *base_err, *base_hdr;
+        int ret = -EBUSY;
+        u64 v;
+
+        base_err = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+        base_hdr = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+
+        /*
+         * clear Port Errors
+         *
+         * - Check for AP6 State
+         * - Halt Port by keeping Port in reset
+         * - Set PORT Error mask to all 1 to mask errors
+         * - Clear all errors
+         * - Set Port mask to all 0 to enable errors
+         * - All errors start capturing new errors
+         * - Enable Port by pulling the port out of reset
+         */
+
+        /* if device is still in AP6 power state, can not clear any error. */
+        v = readq(base_hdr + PORT_HDR_STS);
+        if (FIELD_GET(PORT_STS_PWR_STATE, v) == PORT_STS_PWR_STATE_AP6) {
+                dev_err(dev, "Could not clear errors, device in AP6 state.\n");
+                goto done;
+        }
+
+        /* Halt Port by keeping Port in reset */
+        ret = __afu_port_disable(pdev);
+        if (ret)
+                goto done;
+
+        /* Mask all errors */
+        __afu_port_err_mask(dev, true);
+
+        /* Clear errors if err input matches with current port errors. */
+        v = readq(base_err + PORT_ERROR);
+
+        if (v == err) {
+                writeq(v, base_err + PORT_ERROR);
+
+                v = readq(base_err + PORT_FIRST_ERROR);
+                writeq(v, base_err + PORT_FIRST_ERROR);
+        } else {
+                ret = -EINVAL;
+        }
+
+        /* Clear mask */
+        __afu_port_err_mask(dev, false);
+
+        /* Enable the Port by clear the reset */
+        __afu_port_enable(pdev);
+
+done:
+        mutex_unlock(&pdata->lock);
+        return ret;
+}
+
+static ssize_t errors_show(struct device *dev, struct device_attribute *attr,
+                           char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        u64 error;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+        mutex_lock(&pdata->lock);
+        error = readq(base + PORT_ERROR);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "0x%llx\n", (unsigned long long)error);
+}
+
+static ssize_t errors_store(struct device *dev, struct device_attribute *attr,
+                            const char *buff, size_t count)
+{
+        u64 value;
+        int ret;
+
+        if (kstrtou64(buff, 0, &value))
+                return -EINVAL;
+
+        ret = afu_port_err_clear(dev, value);
+
+        return ret ? ret : count;
+}
+static DEVICE_ATTR_RW(errors);
+
+static ssize_t first_error_show(struct device *dev,
+                                struct device_attribute *attr, char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        u64 error;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+        mutex_lock(&pdata->lock);
+        error = readq(base + PORT_FIRST_ERROR);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "0x%llx\n", (unsigned long long)error);
+}
+static DEVICE_ATTR_RO(first_error);
+
+static ssize_t first_malformed_req_show(struct device *dev,
+                                        struct device_attribute *attr,
+                                        char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        u64 req0, req1;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_ERROR);
+
+        mutex_lock(&pdata->lock);
+        req0 = readq(base + PORT_MALFORMED_REQ0);
+        req1 = readq(base + PORT_MALFORMED_REQ1);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "0x%016llx%016llx\n",
+                       (unsigned long long)req1, (unsigned long long)req0);
+}
+static DEVICE_ATTR_RO(first_malformed_req);
+
+static struct attribute *port_err_attrs[] = {
+        &dev_attr_errors.attr,
+        &dev_attr_first_error.attr,
+        &dev_attr_first_malformed_req.attr,
+        NULL,
+};
+
+static umode_t port_err_attrs_visible(struct kobject *kobj,
+                                      struct attribute *attr, int n)
+{
+        struct device *dev = kobj_to_dev(kobj);
+
+        /*
+         * sysfs entries are visible only if related private feature is
+         * enumerated.
+         */
+        if (!dfl_get_feature_by_id(dev, PORT_FEATURE_ID_ERROR))
+                return 0;
+
+        return attr->mode;
+}
+
+const struct attribute_group port_err_group = {
+        .name = "errors",
+        .attrs = port_err_attrs,
+        .is_visible = port_err_attrs_visible,
+};
+
+static int port_err_init(struct platform_device *pdev,
+                         struct dfl_feature *feature)
+{
+        afu_port_err_mask(&pdev->dev, false);
+
+        return 0;
+}
+
+static void port_err_uinit(struct platform_device *pdev,
+                           struct dfl_feature *feature)
+{
+        afu_port_err_mask(&pdev->dev, true);
+}
+
+const struct dfl_feature_id port_err_id_table[] = {
+        {.id = PORT_FEATURE_ID_ERROR,},
+        {0,}
+};
+
+const struct dfl_feature_ops port_err_ops = {
+        .init = port_err_init,
+        .uinit = port_err_uinit,
+};
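The clear sequence in `afu_port_err_clear` above (mask errors, clear only on an exact match with the currently latched value, then unmask) can be modeled in userspace. This sketch is illustrative only: the struct and register fields stand in for the real MMIO layout, and `EINVAL_SIM` stands in for the kernel's `-EINVAL`:

```c
#include <stdint.h>

#define EINVAL_SIM 22

/* Simplified model of the port error registers (not the hardware layout). */
struct port_err_regs {
        uint64_t error_mask;    /* PORT_ERROR_MASK: all-ones masks capture */
        uint64_t error;         /* PORT_ERROR: latched error bits */
        uint64_t first_error;   /* PORT_FIRST_ERROR */
};

/* Clear latched errors only if the caller-supplied value matches what is
 * currently latched; a stale value from userspace changes nothing. */
int port_err_clear_model(struct port_err_regs *r, uint64_t err)
{
        int ret = 0;

        r->error_mask = ~0ULL;          /* mask all errors while clearing */
        if (r->error == err) {
                r->error = 0;           /* write-1-to-clear in hardware */
                r->first_error = 0;
        } else {
                ret = -EINVAL_SIM;
        }
        r->error_mask = 0;              /* re-enable error capture */
        return ret;
}
```

The match-before-clear check avoids a race where new errors latch between userspace reading `errors` and writing the value back.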
+342 -47
drivers/fpga/dfl-afu-main.c
···
 #include "dfl-afu.h"
 
 /**
- * port_enable - enable a port
+ * __afu_port_enable - enable a port by clear reset
  * @pdev: port platform device.
  *
  * Enable Port by clear the port soft reset bit, which is set by default.
  * The AFU is unable to respond to any MMIO access while in reset.
- * port_enable function should only be used after port_disable function.
+ * __afu_port_enable function should only be used after __afu_port_disable
+ * function.
+ *
+ * The caller needs to hold lock for protection.
  */
-static void port_enable(struct platform_device *pdev)
+void __afu_port_enable(struct platform_device *pdev)
 {
         struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
         void __iomem *base;
···
 #define RST_POLL_TIMEOUT 1000 /* us */
 
 /**
- * port_disable - disable a port
+ * __afu_port_disable - disable a port by hold reset
  * @pdev: port platform device.
  *
- * Disable Port by setting the port soft reset bit, it puts the port into
- * reset.
+ * Disable Port by setting the port soft reset bit, it puts the port into reset.
+ *
+ * The caller needs to hold lock for protection.
  */
-static int port_disable(struct platform_device *pdev)
+int __afu_port_disable(struct platform_device *pdev)
 {
         struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
         void __iomem *base;
···
 {
         int ret;
 
-        ret = port_disable(pdev);
+        ret = __afu_port_disable(pdev);
         if (!ret)
-                port_enable(pdev);
+                __afu_port_enable(pdev);
 
         return ret;
 }
···
 }
 static DEVICE_ATTR_RO(id);
 
-static const struct attribute *port_hdr_attrs[] = {
+static ssize_t
+ltr_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        u64 v;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        v = readq(base + PORT_HDR_CTRL);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "%x\n", (u8)FIELD_GET(PORT_CTRL_LATENCY, v));
+}
+
+static ssize_t
+ltr_store(struct device *dev, struct device_attribute *attr,
+          const char *buf, size_t count)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        bool ltr;
+        u64 v;
+
+        if (kstrtobool(buf, &ltr))
+                return -EINVAL;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        v = readq(base + PORT_HDR_CTRL);
+        v &= ~PORT_CTRL_LATENCY;
+        v |= FIELD_PREP(PORT_CTRL_LATENCY, ltr ? 1 : 0);
+        writeq(v, base + PORT_HDR_CTRL);
+        mutex_unlock(&pdata->lock);
+
+        return count;
+}
+static DEVICE_ATTR_RW(ltr);
+
+static ssize_t
+ap1_event_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        u64 v;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        v = readq(base + PORT_HDR_STS);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "%x\n", (u8)FIELD_GET(PORT_STS_AP1_EVT, v));
+}
+
+static ssize_t
+ap1_event_store(struct device *dev, struct device_attribute *attr,
+                const char *buf, size_t count)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        bool clear;
+
+        if (kstrtobool(buf, &clear) || !clear)
+                return -EINVAL;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        writeq(PORT_STS_AP1_EVT, base + PORT_HDR_STS);
+        mutex_unlock(&pdata->lock);
+
+        return count;
+}
+static DEVICE_ATTR_RW(ap1_event);
+
+static ssize_t
+ap2_event_show(struct device *dev, struct device_attribute *attr,
+               char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        u64 v;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        v = readq(base + PORT_HDR_STS);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "%x\n", (u8)FIELD_GET(PORT_STS_AP2_EVT, v));
+}
+
+static ssize_t
+ap2_event_store(struct device *dev, struct device_attribute *attr,
+                const char *buf, size_t count)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        bool clear;
+
+        if (kstrtobool(buf, &clear) || !clear)
+                return -EINVAL;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        writeq(PORT_STS_AP2_EVT, base + PORT_HDR_STS);
+        mutex_unlock(&pdata->lock);
+
+        return count;
+}
+static DEVICE_ATTR_RW(ap2_event);
+
+static ssize_t
+power_state_show(struct device *dev, struct device_attribute *attr, char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        void __iomem *base;
+        u64 v;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        v = readq(base + PORT_HDR_STS);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "0x%x\n", (u8)FIELD_GET(PORT_STS_PWR_STATE, v));
+}
+static DEVICE_ATTR_RO(power_state);
+
+static ssize_t
+userclk_freqcmd_store(struct device *dev, struct device_attribute *attr,
+                      const char *buf, size_t count)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        u64 userclk_freq_cmd;
+        void __iomem *base;
+
+        if (kstrtou64(buf, 0, &userclk_freq_cmd))
+                return -EINVAL;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        writeq(userclk_freq_cmd, base + PORT_HDR_USRCLK_CMD0);
+        mutex_unlock(&pdata->lock);
+
+        return count;
+}
+static DEVICE_ATTR_WO(userclk_freqcmd);
+
+static ssize_t
+userclk_freqcntrcmd_store(struct device *dev, struct device_attribute *attr,
+                          const char *buf, size_t count)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        u64 userclk_freqcntr_cmd;
+        void __iomem *base;
+
+        if (kstrtou64(buf, 0, &userclk_freqcntr_cmd))
+                return -EINVAL;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        writeq(userclk_freqcntr_cmd, base + PORT_HDR_USRCLK_CMD1);
+        mutex_unlock(&pdata->lock);
+
+        return count;
+}
+static DEVICE_ATTR_WO(userclk_freqcntrcmd);
+
+static ssize_t
+userclk_freqsts_show(struct device *dev, struct device_attribute *attr,
+                     char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        u64 userclk_freqsts;
+        void __iomem *base;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        userclk_freqsts = readq(base + PORT_HDR_USRCLK_STS0);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "0x%llx\n", (unsigned long long)userclk_freqsts);
+}
+static DEVICE_ATTR_RO(userclk_freqsts);
+
+static ssize_t
+userclk_freqcntrsts_show(struct device *dev, struct device_attribute *attr,
+                         char *buf)
+{
+        struct dfl_feature_platform_data *pdata = dev_get_platdata(dev);
+        u64 userclk_freqcntrsts;
+        void __iomem *base;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        mutex_lock(&pdata->lock);
+        userclk_freqcntrsts = readq(base + PORT_HDR_USRCLK_STS1);
+        mutex_unlock(&pdata->lock);
+
+        return sprintf(buf, "0x%llx\n",
+                       (unsigned long long)userclk_freqcntrsts);
+}
+static DEVICE_ATTR_RO(userclk_freqcntrsts);
+
+static struct attribute *port_hdr_attrs[] = {
         &dev_attr_id.attr,
+        &dev_attr_ltr.attr,
+        &dev_attr_ap1_event.attr,
+        &dev_attr_ap2_event.attr,
+        &dev_attr_power_state.attr,
+        &dev_attr_userclk_freqcmd.attr,
+        &dev_attr_userclk_freqcntrcmd.attr,
+        &dev_attr_userclk_freqsts.attr,
+        &dev_attr_userclk_freqcntrsts.attr,
         NULL,
+};
+
+static umode_t port_hdr_attrs_visible(struct kobject *kobj,
+                                      struct attribute *attr, int n)
+{
+        struct device *dev = kobj_to_dev(kobj);
+        umode_t mode = attr->mode;
+        void __iomem *base;
+
+        base = dfl_get_feature_ioaddr_by_id(dev, PORT_FEATURE_ID_HEADER);
+
+        if (dfl_feature_revision(base) > 0) {
+                /*
+                 * userclk sysfs interfaces are only visible in case port
+                 * revision is 0, as hardware with revision >0 doesn't
+                 * support this.
+                 */
+                if (attr == &dev_attr_userclk_freqcmd.attr ||
+                    attr == &dev_attr_userclk_freqcntrcmd.attr ||
+                    attr == &dev_attr_userclk_freqsts.attr ||
+                    attr == &dev_attr_userclk_freqcntrsts.attr)
+                        mode = 0;
+        }
+
+        return mode;
+}
+
+static const struct attribute_group port_hdr_group = {
+        .attrs = port_hdr_attrs,
+        .is_visible = port_hdr_attrs_visible,
 };
 
 static int port_hdr_init(struct platform_device *pdev,
                          struct dfl_feature *feature)
 {
-        dev_dbg(&pdev->dev, "PORT HDR Init.\n");
-
         port_reset(pdev);
 
-        return sysfs_create_files(&pdev->dev.kobj, port_hdr_attrs);
-}
-
-static void port_hdr_uinit(struct platform_device *pdev,
-                           struct dfl_feature *feature)
-{
-        dev_dbg(&pdev->dev, "PORT HDR UInit.\n");
-
-        sysfs_remove_files(&pdev->dev.kobj, port_hdr_attrs);
+        return 0;
 }
 
 static long
···
         return ret;
 }
 
+static const struct dfl_feature_id port_hdr_id_table[] = {
+        {.id = PORT_FEATURE_ID_HEADER,},
+        {0,}
+};
+
 static const struct dfl_feature_ops port_hdr_ops = {
         .init = port_hdr_init,
-        .uinit = port_hdr_uinit,
         .ioctl = port_hdr_ioctl,
 };
···
 }
 static DEVICE_ATTR_RO(afu_id);
 
-static const struct attribute *port_afu_attrs[] = {
+static struct attribute *port_afu_attrs[] = {
         &dev_attr_afu_id.attr,
         NULL
+};
+
+static umode_t port_afu_attrs_visible(struct kobject *kobj,
+                                      struct attribute *attr, int n)
+{
+        struct device *dev = kobj_to_dev(kobj);
+
+        /*
+         * sysfs entries are visible only if related private feature is
+         * enumerated.
+         */
+        if (!dfl_get_feature_by_id(dev, PORT_FEATURE_ID_AFU))
+                return 0;
+
+        return attr->mode;
+}
+
+static const struct attribute_group port_afu_group = {
+        .attrs = port_afu_attrs,
+        .is_visible = port_afu_attrs_visible,
 };
 
 static int port_afu_init(struct platform_device *pdev,
                          struct dfl_feature *feature)
 {
         struct resource *res = &pdev->resource[feature->resource_index];
-        int ret;
 
-        dev_dbg(&pdev->dev, "PORT AFU Init.\n");
-
-        ret = afu_mmio_region_add(dev_get_platdata(&pdev->dev),
-                                  DFL_PORT_REGION_INDEX_AFU, resource_size(res),
-                                  res->start, DFL_PORT_REGION_READ |
-                                  DFL_PORT_REGION_WRITE | DFL_PORT_REGION_MMAP);
-        if (ret)
-                return ret;
-
-        return sysfs_create_files(&pdev->dev.kobj, port_afu_attrs);
+        return afu_mmio_region_add(dev_get_platdata(&pdev->dev),
+                                   DFL_PORT_REGION_INDEX_AFU,
+                                   resource_size(res), res->start,
+                                   DFL_PORT_REGION_MMAP | DFL_PORT_REGION_READ |
+                                   DFL_PORT_REGION_WRITE);
 }
 
-static void port_afu_uinit(struct platform_device *pdev,
-                           struct dfl_feature *feature)
-{
-        dev_dbg(&pdev->dev, "PORT AFU UInit.\n");
-
-        sysfs_remove_files(&pdev->dev.kobj, port_afu_attrs);
-}
+static const struct dfl_feature_id port_afu_id_table[] = {
+        {.id = PORT_FEATURE_ID_AFU,},
+        {0,}
+};
 
 static const struct dfl_feature_ops port_afu_ops = {
         .init = port_afu_init,
-        .uinit = port_afu_uinit,
+};
+
+static int port_stp_init(struct platform_device *pdev,
+                         struct dfl_feature *feature)
+{
+        struct resource *res = &pdev->resource[feature->resource_index];
+
+        return afu_mmio_region_add(dev_get_platdata(&pdev->dev),
+                                   DFL_PORT_REGION_INDEX_STP,
+                                   resource_size(res), res->start,
+                                   DFL_PORT_REGION_MMAP | DFL_PORT_REGION_READ |
+                                   DFL_PORT_REGION_WRITE);
+}
+
+static const struct dfl_feature_id port_stp_id_table[] = {
+        {.id = PORT_FEATURE_ID_STP,},
+        {0,}
+};
+
+static const struct dfl_feature_ops port_stp_ops = {
+        .init = port_stp_init,
 };
 
 static struct dfl_feature_driver port_feature_drvs[] = {
         {
-                .id = PORT_FEATURE_ID_HEADER,
+                .id_table = port_hdr_id_table,
                 .ops = &port_hdr_ops,
         },
         {
-                .id = PORT_FEATURE_ID_AFU,
+                .id_table = port_afu_id_table,
                 .ops = &port_afu_ops,
+        },
+        {
+                .id_table = port_err_id_table,
+                .ops = &port_err_ops,
+        },
+        {
+                .id_table = port_stp_id_table,
+                .ops = &port_stp_ops,
         },
         {
                 .ops = NULL,
···
         mutex_lock(&pdata->lock);
         if (enable)
-                __afu_port_enable(pdev);
+                __afu_port_enable(pdev);
         else
-                ret = port_disable(pdev);
+                ret = __afu_port_disable(pdev);
         mutex_unlock(&pdata->lock);
 
         return ret;
···
         return 0;
 }
 
+static const struct attribute_group *afu_dev_groups[] = {
+        &port_hdr_group,
+        &port_afu_group,
+        &port_err_group,
+        NULL
+};
+
 static struct platform_driver afu_driver = {
         .driver = {
-                .name = DFL_FPGA_FEATURE_DEV_PORT,
+                .name = DFL_FPGA_FEATURE_DEV_PORT,
+                .dev_groups = afu_dev_groups,
         },
         .probe = afu_probe,
         .remove = afu_remove,
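A recurring pattern in this file is the `is_visible` callback: `port_hdr_attrs_visible` returns mode 0 to hide the userclk sysfs files when the port header revision is greater than 0. The decision logic can be modeled in plain C; the function name and its boolean `is_userclk_attr` input are mine, standing in for `dfl_feature_revision()` and the attribute-pointer comparisons in the driver:

```c
/* Userspace model of port_hdr_attrs_visible(): an is_visible callback
 * returns the attribute's mode to show it, or 0 to hide it entirely. */
unsigned int userclk_attr_mode(unsigned int revision, int is_userclk_attr,
                               unsigned int default_mode)
{
        /* hardware with header revision > 0 doesn't support userclk */
        if (revision > 0 && is_userclk_attr)
                return 0;
        return default_mode;
}
```

Compared with the removed `sysfs_create_files()` calls in `port_hdr_init`, registering attribute groups via `dev_groups` with an `is_visible` hook lets the driver core create and tear down the files, avoiding init/uinit boilerplate and races during device registration.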
+9
drivers/fpga/dfl-afu.h
···
         struct dfl_feature_platform_data *pdata;
 };
 
+/* hold pdata->lock when call __afu_port_enable/disable */
+void __afu_port_enable(struct platform_device *pdev);
+int __afu_port_disable(struct platform_device *pdev);
+
 void afu_mmio_region_init(struct dfl_feature_platform_data *pdata);
 int afu_mmio_region_add(struct dfl_feature_platform_data *pdata,
                         u32 region_index, u64 region_size, u64 phys, u32 flags);
···
 struct dfl_afu_dma_region *
 afu_dma_region_find(struct dfl_feature_platform_data *pdata,
                     u64 iova, u64 size);
+
+extern const struct dfl_feature_ops port_err_ops;
+extern const struct dfl_feature_id port_err_id_table[];
+extern const struct attribute_group port_err_group;
+
 #endif /* __DFL_AFU_H */
+359
drivers/fpga/dfl-fme-error.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for FPGA Management Engine Error Management 4 + * 5 + * Copyright 2019 Intel Corporation, Inc. 6 + * 7 + * Authors: 8 + * Kang Luwei <luwei.kang@intel.com> 9 + * Xiao Guangrong <guangrong.xiao@linux.intel.com> 10 + * Wu Hao <hao.wu@intel.com> 11 + * Joseph Grecco <joe.grecco@intel.com> 12 + * Enno Luebbers <enno.luebbers@intel.com> 13 + * Tim Whisonant <tim.whisonant@intel.com> 14 + * Ananda Ravuri <ananda.ravuri@intel.com> 15 + * Mitchel, Henry <henry.mitchel@intel.com> 16 + */ 17 + 18 + #include <linux/uaccess.h> 19 + 20 + #include "dfl.h" 21 + #include "dfl-fme.h" 22 + 23 + #define FME_ERROR_MASK 0x8 24 + #define FME_ERROR 0x10 25 + #define MBP_ERROR BIT_ULL(6) 26 + #define PCIE0_ERROR_MASK 0x18 27 + #define PCIE0_ERROR 0x20 28 + #define PCIE1_ERROR_MASK 0x28 29 + #define PCIE1_ERROR 0x30 30 + #define FME_FIRST_ERROR 0x38 31 + #define FME_NEXT_ERROR 0x40 32 + #define RAS_NONFAT_ERROR_MASK 0x48 33 + #define RAS_NONFAT_ERROR 0x50 34 + #define RAS_CATFAT_ERROR_MASK 0x58 35 + #define RAS_CATFAT_ERROR 0x60 36 + #define RAS_ERROR_INJECT 0x68 37 + #define INJECT_ERROR_MASK GENMASK_ULL(2, 0) 38 + 39 + #define ERROR_MASK GENMASK_ULL(63, 0) 40 + 41 + static ssize_t pcie0_errors_show(struct device *dev, 42 + struct device_attribute *attr, char *buf) 43 + { 44 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 45 + void __iomem *base; 46 + u64 value; 47 + 48 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 49 + 50 + mutex_lock(&pdata->lock); 51 + value = readq(base + PCIE0_ERROR); 52 + mutex_unlock(&pdata->lock); 53 + 54 + return sprintf(buf, "0x%llx\n", (unsigned long long)value); 55 + } 56 + 57 + static ssize_t pcie0_errors_store(struct device *dev, 58 + struct device_attribute *attr, 59 + const char *buf, size_t count) 60 + { 61 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 62 + void __iomem *base; 63 + int ret = 0; 64 + u64 v, val; 65 + 66 + if 
(kstrtou64(buf, 0, &val)) 67 + return -EINVAL; 68 + 69 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 70 + 71 + mutex_lock(&pdata->lock); 72 + writeq(GENMASK_ULL(63, 0), base + PCIE0_ERROR_MASK); 73 + 74 + v = readq(base + PCIE0_ERROR); 75 + if (val == v) 76 + writeq(v, base + PCIE0_ERROR); 77 + else 78 + ret = -EINVAL; 79 + 80 + writeq(0ULL, base + PCIE0_ERROR_MASK); 81 + mutex_unlock(&pdata->lock); 82 + return ret ? ret : count; 83 + } 84 + static DEVICE_ATTR_RW(pcie0_errors); 85 + 86 + static ssize_t pcie1_errors_show(struct device *dev, 87 + struct device_attribute *attr, char *buf) 88 + { 89 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 90 + void __iomem *base; 91 + u64 value; 92 + 93 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 94 + 95 + mutex_lock(&pdata->lock); 96 + value = readq(base + PCIE1_ERROR); 97 + mutex_unlock(&pdata->lock); 98 + 99 + return sprintf(buf, "0x%llx\n", (unsigned long long)value); 100 + } 101 + 102 + static ssize_t pcie1_errors_store(struct device *dev, 103 + struct device_attribute *attr, 104 + const char *buf, size_t count) 105 + { 106 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 107 + void __iomem *base; 108 + int ret = 0; 109 + u64 v, val; 110 + 111 + if (kstrtou64(buf, 0, &val)) 112 + return -EINVAL; 113 + 114 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 115 + 116 + mutex_lock(&pdata->lock); 117 + writeq(GENMASK_ULL(63, 0), base + PCIE1_ERROR_MASK); 118 + 119 + v = readq(base + PCIE1_ERROR); 120 + if (val == v) 121 + writeq(v, base + PCIE1_ERROR); 122 + else 123 + ret = -EINVAL; 124 + 125 + writeq(0ULL, base + PCIE1_ERROR_MASK); 126 + mutex_unlock(&pdata->lock); 127 + return ret ? 
ret : count; 128 + } 129 + static DEVICE_ATTR_RW(pcie1_errors); 130 + 131 + static ssize_t nonfatal_errors_show(struct device *dev, 132 + struct device_attribute *attr, char *buf) 133 + { 134 + void __iomem *base; 135 + 136 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 137 + 138 + return sprintf(buf, "0x%llx\n", 139 + (unsigned long long)readq(base + RAS_NONFAT_ERROR)); 140 + } 141 + static DEVICE_ATTR_RO(nonfatal_errors); 142 + 143 + static ssize_t catfatal_errors_show(struct device *dev, 144 + struct device_attribute *attr, char *buf) 145 + { 146 + void __iomem *base; 147 + 148 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 149 + 150 + return sprintf(buf, "0x%llx\n", 151 + (unsigned long long)readq(base + RAS_CATFAT_ERROR)); 152 + } 153 + static DEVICE_ATTR_RO(catfatal_errors); 154 + 155 + static ssize_t inject_errors_show(struct device *dev, 156 + struct device_attribute *attr, char *buf) 157 + { 158 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 159 + void __iomem *base; 160 + u64 v; 161 + 162 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 163 + 164 + mutex_lock(&pdata->lock); 165 + v = readq(base + RAS_ERROR_INJECT); 166 + mutex_unlock(&pdata->lock); 167 + 168 + return sprintf(buf, "0x%llx\n", 169 + (unsigned long long)FIELD_GET(INJECT_ERROR_MASK, v)); 170 + } 171 + 172 + static ssize_t inject_errors_store(struct device *dev, 173 + struct device_attribute *attr, 174 + const char *buf, size_t count) 175 + { 176 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 177 + void __iomem *base; 178 + u8 inject_error; 179 + u64 v; 180 + 181 + if (kstrtou8(buf, 0, &inject_error)) 182 + return -EINVAL; 183 + 184 + if (inject_error & ~INJECT_ERROR_MASK) 185 + return -EINVAL; 186 + 187 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 188 + 189 + mutex_lock(&pdata->lock); 190 + v = readq(base + RAS_ERROR_INJECT); 191 + v &= ~INJECT_ERROR_MASK; 
192 + v |= FIELD_PREP(INJECT_ERROR_MASK, inject_error); 193 + writeq(v, base + RAS_ERROR_INJECT); 194 + mutex_unlock(&pdata->lock); 195 + 196 + return count; 197 + } 198 + static DEVICE_ATTR_RW(inject_errors); 199 + 200 + static ssize_t fme_errors_show(struct device *dev, 201 + struct device_attribute *attr, char *buf) 202 + { 203 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 204 + void __iomem *base; 205 + u64 value; 206 + 207 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 208 + 209 + mutex_lock(&pdata->lock); 210 + value = readq(base + FME_ERROR); 211 + mutex_unlock(&pdata->lock); 212 + 213 + return sprintf(buf, "0x%llx\n", (unsigned long long)value); 214 + } 215 + 216 + static ssize_t fme_errors_store(struct device *dev, 217 + struct device_attribute *attr, 218 + const char *buf, size_t count) 219 + { 220 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 221 + void __iomem *base; 222 + u64 v, val; 223 + int ret = 0; 224 + 225 + if (kstrtou64(buf, 0, &val)) 226 + return -EINVAL; 227 + 228 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 229 + 230 + mutex_lock(&pdata->lock); 231 + writeq(GENMASK_ULL(63, 0), base + FME_ERROR_MASK); 232 + 233 + v = readq(base + FME_ERROR); 234 + if (val == v) 235 + writeq(v, base + FME_ERROR); 236 + else 237 + ret = -EINVAL; 238 + 239 + /* Workaround: disable MBP_ERROR if feature revision is 0 */ 240 + writeq(dfl_feature_revision(base) ? 0ULL : MBP_ERROR, 241 + base + FME_ERROR_MASK); 242 + mutex_unlock(&pdata->lock); 243 + return ret ? 
ret : count; 244 + } 245 + static DEVICE_ATTR_RW(fme_errors); 246 + 247 + static ssize_t first_error_show(struct device *dev, 248 + struct device_attribute *attr, char *buf) 249 + { 250 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 251 + void __iomem *base; 252 + u64 value; 253 + 254 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 255 + 256 + mutex_lock(&pdata->lock); 257 + value = readq(base + FME_FIRST_ERROR); 258 + mutex_unlock(&pdata->lock); 259 + 260 + return sprintf(buf, "0x%llx\n", (unsigned long long)value); 261 + } 262 + static DEVICE_ATTR_RO(first_error); 263 + 264 + static ssize_t next_error_show(struct device *dev, 265 + struct device_attribute *attr, char *buf) 266 + { 267 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 268 + void __iomem *base; 269 + u64 value; 270 + 271 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 272 + 273 + mutex_lock(&pdata->lock); 274 + value = readq(base + FME_NEXT_ERROR); 275 + mutex_unlock(&pdata->lock); 276 + 277 + return sprintf(buf, "0x%llx\n", (unsigned long long)value); 278 + } 279 + static DEVICE_ATTR_RO(next_error); 280 + 281 + static struct attribute *fme_global_err_attrs[] = { 282 + &dev_attr_pcie0_errors.attr, 283 + &dev_attr_pcie1_errors.attr, 284 + &dev_attr_nonfatal_errors.attr, 285 + &dev_attr_catfatal_errors.attr, 286 + &dev_attr_inject_errors.attr, 287 + &dev_attr_fme_errors.attr, 288 + &dev_attr_first_error.attr, 289 + &dev_attr_next_error.attr, 290 + NULL, 291 + }; 292 + 293 + static umode_t fme_global_err_attrs_visible(struct kobject *kobj, 294 + struct attribute *attr, int n) 295 + { 296 + struct device *dev = kobj_to_dev(kobj); 297 + 298 + /* 299 + * sysfs entries are visible only if related private feature is 300 + * enumerated. 
301 + */ 302 + if (!dfl_get_feature_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR)) 303 + return 0; 304 + 305 + return attr->mode; 306 + } 307 + 308 + const struct attribute_group fme_global_err_group = { 309 + .name = "errors", 310 + .attrs = fme_global_err_attrs, 311 + .is_visible = fme_global_err_attrs_visible, 312 + }; 313 + 314 + static void fme_err_mask(struct device *dev, bool mask) 315 + { 316 + struct dfl_feature_platform_data *pdata = dev_get_platdata(dev); 317 + void __iomem *base; 318 + 319 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_GLOBAL_ERR); 320 + 321 + mutex_lock(&pdata->lock); 322 + 323 + /* Workaround: keep MBP_ERROR always masked if revision is 0 */ 324 + if (dfl_feature_revision(base)) 325 + writeq(mask ? ERROR_MASK : 0, base + FME_ERROR_MASK); 326 + else 327 + writeq(mask ? ERROR_MASK : MBP_ERROR, base + FME_ERROR_MASK); 328 + 329 + writeq(mask ? ERROR_MASK : 0, base + PCIE0_ERROR_MASK); 330 + writeq(mask ? ERROR_MASK : 0, base + PCIE1_ERROR_MASK); 331 + writeq(mask ? ERROR_MASK : 0, base + RAS_NONFAT_ERROR_MASK); 332 + writeq(mask ? ERROR_MASK : 0, base + RAS_CATFAT_ERROR_MASK); 333 + 334 + mutex_unlock(&pdata->lock); 335 + } 336 + 337 + static int fme_global_err_init(struct platform_device *pdev, 338 + struct dfl_feature *feature) 339 + { 340 + fme_err_mask(&pdev->dev, false); 341 + 342 + return 0; 343 + } 344 + 345 + static void fme_global_err_uinit(struct platform_device *pdev, 346 + struct dfl_feature *feature) 347 + { 348 + fme_err_mask(&pdev->dev, true); 349 + } 350 + 351 + const struct dfl_feature_id fme_global_err_id_table[] = { 352 + {.id = FME_FEATURE_ID_GLOBAL_ERR,}, 353 + {0,} 354 + }; 355 + 356 + const struct dfl_feature_ops fme_global_err_ops = { 357 + .init = fme_global_err_init, 358 + .uinit = fme_global_err_uinit, 359 + };
+107 -23
drivers/fpga/dfl-fme-main.c
··· 16 16 17 17 #include <linux/kernel.h> 18 18 #include <linux/module.h> 19 + #include <linux/uaccess.h> 19 20 #include <linux/fpga-dfl.h> 20 21 21 22 #include "dfl.h" ··· 73 72 } 74 73 static DEVICE_ATTR_RO(bitstream_metadata); 75 74 76 - static const struct attribute *fme_hdr_attrs[] = { 75 + static ssize_t cache_size_show(struct device *dev, 76 + struct device_attribute *attr, char *buf) 77 + { 78 + void __iomem *base; 79 + u64 v; 80 + 81 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER); 82 + 83 + v = readq(base + FME_HDR_CAP); 84 + 85 + return sprintf(buf, "%u\n", 86 + (unsigned int)FIELD_GET(FME_CAP_CACHE_SIZE, v)); 87 + } 88 + static DEVICE_ATTR_RO(cache_size); 89 + 90 + static ssize_t fabric_version_show(struct device *dev, 91 + struct device_attribute *attr, char *buf) 92 + { 93 + void __iomem *base; 94 + u64 v; 95 + 96 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER); 97 + 98 + v = readq(base + FME_HDR_CAP); 99 + 100 + return sprintf(buf, "%u\n", 101 + (unsigned int)FIELD_GET(FME_CAP_FABRIC_VERID, v)); 102 + } 103 + static DEVICE_ATTR_RO(fabric_version); 104 + 105 + static ssize_t socket_id_show(struct device *dev, 106 + struct device_attribute *attr, char *buf) 107 + { 108 + void __iomem *base; 109 + u64 v; 110 + 111 + base = dfl_get_feature_ioaddr_by_id(dev, FME_FEATURE_ID_HEADER); 112 + 113 + v = readq(base + FME_HDR_CAP); 114 + 115 + return sprintf(buf, "%u\n", 116 + (unsigned int)FIELD_GET(FME_CAP_SOCKET_ID, v)); 117 + } 118 + static DEVICE_ATTR_RO(socket_id); 119 + 120 + static struct attribute *fme_hdr_attrs[] = { 77 121 &dev_attr_ports_num.attr, 78 122 &dev_attr_bitstream_id.attr, 79 123 &dev_attr_bitstream_metadata.attr, 124 + &dev_attr_cache_size.attr, 125 + &dev_attr_fabric_version.attr, 126 + &dev_attr_socket_id.attr, 80 127 NULL, 81 128 }; 82 129 83 - static int fme_hdr_init(struct platform_device *pdev, 84 - struct dfl_feature *feature) 130 + static const struct attribute_group fme_hdr_group = { 131 + .attrs 
= fme_hdr_attrs, 132 + }; 133 + 134 + static long fme_hdr_ioctl_release_port(struct dfl_feature_platform_data *pdata, 135 + unsigned long arg) 85 136 { 86 - void __iomem *base = feature->ioaddr; 87 - int ret; 137 + struct dfl_fpga_cdev *cdev = pdata->dfl_cdev; 138 + int port_id; 88 139 89 - dev_dbg(&pdev->dev, "FME HDR Init.\n"); 90 - dev_dbg(&pdev->dev, "FME cap %llx.\n", 91 - (unsigned long long)readq(base + FME_HDR_CAP)); 140 + if (get_user(port_id, (int __user *)arg)) 141 + return -EFAULT; 92 142 93 - ret = sysfs_create_files(&pdev->dev.kobj, fme_hdr_attrs); 94 - if (ret) 95 - return ret; 96 - 97 - return 0; 143 + return dfl_fpga_cdev_release_port(cdev, port_id); 98 144 } 99 145 100 - static void fme_hdr_uinit(struct platform_device *pdev, 101 - struct dfl_feature *feature) 146 + static long fme_hdr_ioctl_assign_port(struct dfl_feature_platform_data *pdata, 147 + unsigned long arg) 102 148 { 103 - dev_dbg(&pdev->dev, "FME HDR UInit.\n"); 104 - sysfs_remove_files(&pdev->dev.kobj, fme_hdr_attrs); 149 + struct dfl_fpga_cdev *cdev = pdata->dfl_cdev; 150 + int port_id; 151 + 152 + if (get_user(port_id, (int __user *)arg)) 153 + return -EFAULT; 154 + 155 + return dfl_fpga_cdev_assign_port(cdev, port_id); 105 156 } 157 + 158 + static long fme_hdr_ioctl(struct platform_device *pdev, 159 + struct dfl_feature *feature, 160 + unsigned int cmd, unsigned long arg) 161 + { 162 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 163 + 164 + switch (cmd) { 165 + case DFL_FPGA_FME_PORT_RELEASE: 166 + return fme_hdr_ioctl_release_port(pdata, arg); 167 + case DFL_FPGA_FME_PORT_ASSIGN: 168 + return fme_hdr_ioctl_assign_port(pdata, arg); 169 + } 170 + 171 + return -ENODEV; 172 + } 173 + 174 + static const struct dfl_feature_id fme_hdr_id_table[] = { 175 + {.id = FME_FEATURE_ID_HEADER,}, 176 + {0,} 177 + }; 106 178 107 179 static const struct dfl_feature_ops fme_hdr_ops = { 108 - .init = fme_hdr_init, 109 - .uinit = fme_hdr_uinit, 180 + .ioctl = fme_hdr_ioctl, 
110 181 }; 111 182 112 183 static struct dfl_feature_driver fme_feature_drvs[] = { 113 184 { 114 - .id = FME_FEATURE_ID_HEADER, 185 + .id_table = fme_hdr_id_table, 115 186 .ops = &fme_hdr_ops, 116 187 }, 117 188 { 118 - .id = FME_FEATURE_ID_PR_MGMT, 119 - .ops = &pr_mgmt_ops, 189 + .id_table = fme_pr_mgmt_id_table, 190 + .ops = &fme_pr_mgmt_ops, 191 + }, 192 + { 193 + .id_table = fme_global_err_id_table, 194 + .ops = &fme_global_err_ops, 120 195 }, 121 196 { 122 197 .ops = NULL, ··· 340 263 return 0; 341 264 } 342 265 266 + static const struct attribute_group *fme_dev_groups[] = { 267 + &fme_hdr_group, 268 + &fme_global_err_group, 269 + NULL 270 + }; 271 + 343 272 static struct platform_driver fme_driver = { 344 273 .driver = { 345 - .name = DFL_FPGA_FEATURE_DEV_FME, 274 + .name = DFL_FPGA_FEATURE_DEV_FME, 275 + .dev_groups = fme_dev_groups, 346 276 }, 347 277 .probe = fme_probe, 348 278 .remove = fme_remove,
+6 -1
drivers/fpga/dfl-fme-pr.c
··· 470 470 return ret; 471 471 } 472 472 473 - const struct dfl_feature_ops pr_mgmt_ops = { 473 + const struct dfl_feature_id fme_pr_mgmt_id_table[] = { 474 + {.id = FME_FEATURE_ID_PR_MGMT,}, 475 + {0} 476 + }; 477 + 478 + const struct dfl_feature_ops fme_pr_mgmt_ops = { 474 479 .init = pr_mgmt_init, 475 480 .uinit = pr_mgmt_uinit, 476 481 .ioctl = fme_pr_ioctl,
+5 -1
drivers/fpga/dfl-fme.h
··· 33 33 struct dfl_feature_platform_data *pdata; 34 34 }; 35 35 36 - extern const struct dfl_feature_ops pr_mgmt_ops; 36 + extern const struct dfl_feature_ops fme_pr_mgmt_ops; 37 + extern const struct dfl_feature_id fme_pr_mgmt_id_table[]; 38 + extern const struct dfl_feature_ops fme_global_err_ops; 39 + extern const struct dfl_feature_id fme_global_err_id_table[]; 40 + extern const struct attribute_group fme_global_err_group; 37 41 38 42 #endif /* __DFL_FME_H */
+36
drivers/fpga/dfl-pci.c
··· 223 223 return ret; 224 224 } 225 225 226 + static int cci_pci_sriov_configure(struct pci_dev *pcidev, int num_vfs) 227 + { 228 + struct cci_drvdata *drvdata = pci_get_drvdata(pcidev); 229 + struct dfl_fpga_cdev *cdev = drvdata->cdev; 230 + int ret = 0; 231 + 232 + if (!num_vfs) { 233 + /* 234 + * Disable SR-IOV, then put the released ports back into 235 + * default PF access mode. 236 + */ 237 + pci_disable_sriov(pcidev); 238 + 239 + dfl_fpga_cdev_config_ports_pf(cdev); 240 + 241 + } else { 242 + /* 243 + * Before enabling SR-IOV, first put the released ports into 244 + * VF access mode. 245 + */ 246 + ret = dfl_fpga_cdev_config_ports_vf(cdev, num_vfs); 247 + if (ret) 248 + return ret; 249 + 250 + ret = pci_enable_sriov(pcidev, num_vfs); 251 + if (ret) 252 + dfl_fpga_cdev_config_ports_pf(cdev); 253 + } 254 + 255 + return ret; 256 + } 257 + 226 258 static void cci_pci_remove(struct pci_dev *pcidev) 227 259 { 260 + if (dev_is_pf(&pcidev->dev)) 261 + cci_pci_sriov_configure(pcidev, 0); 262 + 228 263 cci_remove_feature_devs(pcidev); 229 264 pci_disable_pcie_error_reporting(pcidev); 230 265 } ··· 269 234 .id_table = cci_pcie_id_tbl, 270 235 .probe = cci_pci_probe, 271 236 .remove = cci_pci_remove, 237 + .sriov_configure = cci_pci_sriov_configure, 272 238 }; 273 239 274 240 module_pci_driver(cci_pci_driver);
+207 -19
drivers/fpga/dfl.c
··· 231 231 */ 232 232 int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id) 233 233 { 234 - struct dfl_fpga_port_ops *port_ops = dfl_fpga_port_ops_get(pdev); 235 - int port_id; 234 + struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev); 235 + struct dfl_fpga_port_ops *port_ops; 236 236 237 + if (pdata->id != FEATURE_DEV_ID_UNUSED) 238 + return pdata->id == *(int *)pport_id; 239 + 240 + port_ops = dfl_fpga_port_ops_get(pdev); 237 241 if (!port_ops || !port_ops->get_id) 238 242 return 0; 239 243 240 - port_id = port_ops->get_id(pdev); 244 + pdata->id = port_ops->get_id(pdev); 241 245 dfl_fpga_port_ops_put(port_ops); 242 246 243 - return port_id == *(int *)pport_id; 247 + return pdata->id == *(int *)pport_id; 244 248 } 245 249 EXPORT_SYMBOL_GPL(dfl_fpga_check_port_id); 246 250 ··· 259 255 260 256 dfl_fpga_dev_for_each_feature(pdata, feature) 261 257 if (feature->ops) { 262 - feature->ops->uinit(pdev, feature); 258 + if (feature->ops->uinit) 259 + feature->ops->uinit(pdev, feature); 263 260 feature->ops = NULL; 264 261 } 265 262 } ··· 271 266 struct dfl_feature *feature, 272 267 struct dfl_feature_driver *drv) 273 268 { 274 - int ret; 269 + int ret = 0; 275 270 276 - ret = drv->ops->init(pdev, feature); 277 - if (ret) 278 - return ret; 271 + if (drv->ops->init) { 272 + ret = drv->ops->init(pdev, feature); 273 + if (ret) 274 + return ret; 275 + } 279 276 280 277 feature->ops = drv->ops; 281 278 282 279 return ret; 280 + } 281 + 282 + static bool dfl_feature_drv_match(struct dfl_feature *feature, 283 + struct dfl_feature_driver *driver) 284 + { 285 + const struct dfl_feature_id *ids = driver->id_table; 286 + 287 + if (ids) { 288 + while (ids->id) { 289 + if (ids->id == feature->id) 290 + return true; 291 + ids++; 292 + } 293 + } 294 + return false; 283 295 } 284 296 285 297 /** ··· 319 297 320 298 while (drv->ops) { 321 299 dfl_fpga_dev_for_each_feature(pdata, feature) { 322 - /* match feature and drv using id */ 323 - if 
(feature->id == drv->id) { 300 + if (dfl_feature_drv_match(feature, drv)) { 324 301 ret = dfl_feature_instance_init(pdev, pdata, 325 302 feature, drv); 326 303 if (ret) ··· 495 474 pdata->dev = fdev; 496 475 pdata->num = binfo->feature_num; 497 476 pdata->dfl_cdev = binfo->cdev; 477 + pdata->id = FEATURE_DEV_ID_UNUSED; 498 478 mutex_init(&pdata->lock); 499 479 lockdep_set_class_and_name(&pdata->lock, &dfl_pdata_keys[type], 500 480 dfl_pdata_key_strings[type]); ··· 995 973 { 996 974 struct dfl_feature_platform_data *pdata, *ptmp; 997 975 998 - remove_feature_devs(cdev); 999 - 1000 976 mutex_lock(&cdev->lock); 1001 - if (cdev->fme_dev) { 1002 - /* the fme should be unregistered. */ 1003 - WARN_ON(device_is_registered(cdev->fme_dev)); 977 + if (cdev->fme_dev) 1004 978 put_device(cdev->fme_dev); 1005 - } 1006 979 1007 980 list_for_each_entry_safe(pdata, ptmp, &cdev->port_dev_list, node) { 1008 981 struct platform_device *port_dev = pdata->dev; 1009 982 1010 - /* the port should be unregistered. */ 1011 - WARN_ON(device_is_registered(&port_dev->dev)); 983 + /* remove released ports */ 984 + if (!device_is_registered(&port_dev->dev)) { 985 + dfl_id_free(feature_dev_id_type(port_dev), 986 + port_dev->id); 987 + platform_device_put(port_dev); 988 + } 989 + 1012 990 list_del(&pdata->node); 1013 991 put_device(&port_dev->dev); 1014 992 } 1015 993 mutex_unlock(&cdev->lock); 994 + 995 + remove_feature_devs(cdev); 1016 996 1017 997 fpga_region_unregister(cdev->region); 1018 998 devm_kfree(cdev->parent, cdev); ··· 1065 1041 1066 1042 return ret; 1067 1043 } 1044 + 1045 + /** 1046 + * dfl_fpga_cdev_release_port - release a port platform device 1047 + * 1048 + * @cdev: parent container device. 1049 + * @port_id: id of the port platform device. 1050 + * 1051 + * This function allows user to release a port platform device. This is a 1052 + * mandatory step before turn a port from PF into VF for SRIOV support. 1053 + * 1054 + * Return: 0 on success, negative error code otherwise. 
1055 + */ 1056 + int dfl_fpga_cdev_release_port(struct dfl_fpga_cdev *cdev, int port_id) 1057 + { 1058 + struct platform_device *port_pdev; 1059 + int ret = -ENODEV; 1060 + 1061 + mutex_lock(&cdev->lock); 1062 + port_pdev = __dfl_fpga_cdev_find_port(cdev, &port_id, 1063 + dfl_fpga_check_port_id); 1064 + if (!port_pdev) 1065 + goto unlock_exit; 1066 + 1067 + if (!device_is_registered(&port_pdev->dev)) { 1068 + ret = -EBUSY; 1069 + goto put_dev_exit; 1070 + } 1071 + 1072 + ret = dfl_feature_dev_use_begin(dev_get_platdata(&port_pdev->dev)); 1073 + if (ret) 1074 + goto put_dev_exit; 1075 + 1076 + platform_device_del(port_pdev); 1077 + cdev->released_port_num++; 1078 + put_dev_exit: 1079 + put_device(&port_pdev->dev); 1080 + unlock_exit: 1081 + mutex_unlock(&cdev->lock); 1082 + return ret; 1083 + } 1084 + EXPORT_SYMBOL_GPL(dfl_fpga_cdev_release_port); 1085 + 1086 + /** 1087 + * dfl_fpga_cdev_assign_port - assign a port platform device back 1088 + * 1089 + * @cdev: parent container device. 1090 + * @port_id: id of the port platform device. 1091 + * 1092 + * This function allows user to assign a port platform device back. This is 1093 + * a mandatory step after disable SRIOV support. 1094 + * 1095 + * Return: 0 on success, negative error code otherwise. 
1096 + */ 1097 + int dfl_fpga_cdev_assign_port(struct dfl_fpga_cdev *cdev, int port_id) 1098 + { 1099 + struct platform_device *port_pdev; 1100 + int ret = -ENODEV; 1101 + 1102 + mutex_lock(&cdev->lock); 1103 + port_pdev = __dfl_fpga_cdev_find_port(cdev, &port_id, 1104 + dfl_fpga_check_port_id); 1105 + if (!port_pdev) 1106 + goto unlock_exit; 1107 + 1108 + if (device_is_registered(&port_pdev->dev)) { 1109 + ret = -EBUSY; 1110 + goto put_dev_exit; 1111 + } 1112 + 1113 + ret = platform_device_add(port_pdev); 1114 + if (ret) 1115 + goto put_dev_exit; 1116 + 1117 + dfl_feature_dev_use_end(dev_get_platdata(&port_pdev->dev)); 1118 + cdev->released_port_num--; 1119 + put_dev_exit: 1120 + put_device(&port_pdev->dev); 1121 + unlock_exit: 1122 + mutex_unlock(&cdev->lock); 1123 + return ret; 1124 + } 1125 + EXPORT_SYMBOL_GPL(dfl_fpga_cdev_assign_port); 1126 + 1127 + static void config_port_access_mode(struct device *fme_dev, int port_id, 1128 + bool is_vf) 1129 + { 1130 + void __iomem *base; 1131 + u64 v; 1132 + 1133 + base = dfl_get_feature_ioaddr_by_id(fme_dev, FME_FEATURE_ID_HEADER); 1134 + 1135 + v = readq(base + FME_HDR_PORT_OFST(port_id)); 1136 + 1137 + v &= ~FME_PORT_OFST_ACC_CTRL; 1138 + v |= FIELD_PREP(FME_PORT_OFST_ACC_CTRL, 1139 + is_vf ? FME_PORT_OFST_ACC_VF : FME_PORT_OFST_ACC_PF); 1140 + 1141 + writeq(v, base + FME_HDR_PORT_OFST(port_id)); 1142 + } 1143 + 1144 + #define config_port_vf_mode(dev, id) config_port_access_mode(dev, id, true) 1145 + #define config_port_pf_mode(dev, id) config_port_access_mode(dev, id, false) 1146 + 1147 + /** 1148 + * dfl_fpga_cdev_config_ports_pf - configure ports to PF access mode 1149 + * 1150 + * @cdev: parent container device. 1151 + * 1152 + * This function is needed in sriov configuration routine. It could be used to 1153 + * configure the all released ports from VF access mode to PF. 
1154 + */ 1155 + void dfl_fpga_cdev_config_ports_pf(struct dfl_fpga_cdev *cdev) 1156 + { 1157 + struct dfl_feature_platform_data *pdata; 1158 + 1159 + mutex_lock(&cdev->lock); 1160 + list_for_each_entry(pdata, &cdev->port_dev_list, node) { 1161 + if (device_is_registered(&pdata->dev->dev)) 1162 + continue; 1163 + 1164 + config_port_pf_mode(cdev->fme_dev, pdata->id); 1165 + } 1166 + mutex_unlock(&cdev->lock); 1167 + } 1168 + EXPORT_SYMBOL_GPL(dfl_fpga_cdev_config_ports_pf); 1169 + 1170 + /** 1171 + * dfl_fpga_cdev_config_ports_vf - configure ports to VF access mode 1172 + * 1173 + * @cdev: parent container device. 1174 + * @num_vfs: VF device number. 1175 + * 1176 + * This function is needed in sriov configuration routine. It could be used to 1177 + * configure the released ports from PF access mode to VF. 1178 + * 1179 + * Return: 0 on success, negative error code otherwise. 1180 + */ 1181 + int dfl_fpga_cdev_config_ports_vf(struct dfl_fpga_cdev *cdev, int num_vfs) 1182 + { 1183 + struct dfl_feature_platform_data *pdata; 1184 + int ret = 0; 1185 + 1186 + mutex_lock(&cdev->lock); 1187 + /* 1188 + * can't turn multiple ports into 1 VF device, only 1 port for 1 VF 1189 + * device, so if released port number doesn't match VF device number, 1190 + * then reject the request with -EINVAL error code. 1191 + */ 1192 + if (cdev->released_port_num != num_vfs) { 1193 + ret = -EINVAL; 1194 + goto done; 1195 + } 1196 + 1197 + list_for_each_entry(pdata, &cdev->port_dev_list, node) { 1198 + if (device_is_registered(&pdata->dev->dev)) 1199 + continue; 1200 + 1201 + config_port_vf_mode(cdev->fme_dev, pdata->id); 1202 + } 1203 + done: 1204 + mutex_unlock(&cdev->lock); 1205 + return ret; 1206 + } 1207 + EXPORT_SYMBOL_GPL(dfl_fpga_cdev_config_ports_vf); 1068 1208 1069 1209 static void __exit dfl_fpga_exit(void) 1070 1210 {
+46 -6
drivers/fpga/dfl.h
··· 30 30 /* plus one for fme device */ 31 31 #define MAX_DFL_FEATURE_DEV_NUM (MAX_DFL_FPGA_PORT_NUM + 1) 32 32 33 - /* Reserved 0x0 for Header Group Register and 0xff for AFU */ 34 - #define FEATURE_ID_FIU_HEADER 0x0 33 + /* Reserved 0xfe for Header Group Register and 0xff for AFU */ 34 + #define FEATURE_ID_FIU_HEADER 0xfe 35 35 #define FEATURE_ID_AFU 0xff 36 36 37 37 #define FME_FEATURE_ID_HEADER FEATURE_ID_FIU_HEADER ··· 119 119 #define PORT_HDR_NEXT_AFU NEXT_AFU 120 120 #define PORT_HDR_CAP 0x30 121 121 #define PORT_HDR_CTRL 0x38 122 + #define PORT_HDR_STS 0x40 123 + #define PORT_HDR_USRCLK_CMD0 0x50 124 + #define PORT_HDR_USRCLK_CMD1 0x58 125 + #define PORT_HDR_USRCLK_STS0 0x60 126 + #define PORT_HDR_USRCLK_STS1 0x68 122 127 123 128 /* Port Capability Register Bitfield */ 124 129 #define PORT_CAP_PORT_NUM GENMASK_ULL(1, 0) /* ID of this port */ ··· 135 130 /* Latency tolerance reporting. '1' >= 40us, '0' < 40us.*/ 136 131 #define PORT_CTRL_LATENCY BIT_ULL(2) 137 132 #define PORT_CTRL_SFTRST_ACK BIT_ULL(4) /* HW ack for reset */ 133 + 134 + /* Port Status Register Bitfield */ 135 + #define PORT_STS_AP2_EVT BIT_ULL(13) /* AP2 event detected */ 136 + #define PORT_STS_AP1_EVT BIT_ULL(12) /* AP1 event detected */ 137 + #define PORT_STS_PWR_STATE GENMASK_ULL(11, 8) /* AFU power states */ 138 + #define PORT_STS_PWR_STATE_NORM 0 139 + #define PORT_STS_PWR_STATE_AP1 1 /* 50% throttling */ 140 + #define PORT_STS_PWR_STATE_AP2 2 /* 90% throttling */ 141 + #define PORT_STS_PWR_STATE_AP6 6 /* 100% throttling */ 142 + 138 143 /** 139 144 * struct dfl_fpga_port_ops - port ops 140 145 * ··· 169 154 int dfl_fpga_check_port_id(struct platform_device *pdev, void *pport_id); 170 155 171 156 /** 172 - * struct dfl_feature_driver - sub feature's driver 157 + * struct dfl_feature_id - dfl private feature id 173 158 * 174 - * @id: sub feature id. 175 - * @ops: ops of this sub feature. 159 + * @id: unique dfl private feature id. 
160 + */ 161 + struct dfl_feature_id { 162 + u64 id; 163 + }; 164 + 165 + /** 166 + * struct dfl_feature_driver - dfl private feature driver 167 + * 168 + * @id_table: id_table for dfl private features supported by this driver. 169 + * @ops: ops of this dfl private feature driver. 176 170 */ 177 171 struct dfl_feature_driver { 178 - u64 id; 172 + const struct dfl_feature_id *id_table; 179 173 const struct dfl_feature_ops *ops; 180 174 }; 181 175 ··· 207 183 208 184 #define DEV_STATUS_IN_USE 0 209 185 186 + #define FEATURE_DEV_ID_UNUSED (-1) 187 + 210 188 /** 211 189 * struct dfl_feature_platform_data - platform data for feature devices 212 190 * ··· 217 191 * @cdev: cdev of feature dev. 218 192 * @dev: ptr to platform device linked with this platform data. 219 193 * @dfl_cdev: ptr to container device. 194 + * @id: id used for this feature device. 220 195 * @disable_count: count for port disable. 221 196 * @num: number for sub features. 222 197 * @dev_status: dev status (e.g. DEV_STATUS_IN_USE). ··· 230 203 struct cdev cdev; 231 204 struct platform_device *dev; 232 205 struct dfl_fpga_cdev *dfl_cdev; 206 + int id; 233 207 unsigned int disable_count; 234 208 unsigned long dev_status; 235 209 void *private; ··· 359 331 (FIELD_GET(DFH_ID, v) == DFH_ID_FIU_PORT); 360 332 } 361 333 334 + static inline u8 dfl_feature_revision(void __iomem *base) 335 + { 336 + return (u8)FIELD_GET(DFH_REVISION, readq(base + DFH)); 337 + } 338 + 362 339 /** 363 340 * struct dfl_fpga_enum_info - DFL FPGA enumeration information 364 341 * ··· 406 373 * @fme_dev: FME feature device under this container device. 407 374 * @lock: mutex lock to protect the port device list. 408 375 * @port_dev_list: list of all port feature devices under this container device. 376 + * @released_port_num: released port number under this container device. 
409 377 */ 410 378 struct dfl_fpga_cdev { 411 379 struct device *parent; ··· 414 380 struct device *fme_dev; 415 381 struct mutex lock; 416 382 struct list_head port_dev_list; 383 + int released_port_num; 417 384 }; 418 385 419 386 struct dfl_fpga_cdev * ··· 442 407 443 408 return pdev; 444 409 } 410 + 411 + int dfl_fpga_cdev_release_port(struct dfl_fpga_cdev *cdev, int port_id); 412 + int dfl_fpga_cdev_assign_port(struct dfl_fpga_cdev *cdev, int port_id); 413 + void dfl_fpga_cdev_config_ports_pf(struct dfl_fpga_cdev *cdev); 414 + int dfl_fpga_cdev_config_ports_vf(struct dfl_fpga_cdev *cdev, int num_vf); 445 415 #endif /* __FPGA_DFL_H */
+16 -17
drivers/hwtracing/coresight/coresight-cpu-debug.c
··· 646 646 return 0; 647 647 } 648 648 649 + static const struct amba_cs_uci_id uci_id_debug[] = { 650 + { 651 + /* CPU Debug UCI data */ 652 + .devarch = 0x47706a15, 653 + .devarch_mask = 0xfff0ffff, 654 + .devtype = 0x00000015, 655 + } 656 + }; 657 + 649 658 static const struct amba_id debug_ids[] = { 650 - { /* Debug for Cortex-A53 */ 651 - .id = 0x000bbd03, 652 - .mask = 0x000fffff, 653 - }, 654 - { /* Debug for Cortex-A57 */ 655 - .id = 0x000bbd07, 656 - .mask = 0x000fffff, 657 - }, 658 - { /* Debug for Cortex-A72 */ 659 - .id = 0x000bbd08, 660 - .mask = 0x000fffff, 661 - }, 662 - { /* Debug for Cortex-A73 */ 663 - .id = 0x000bbd09, 664 - .mask = 0x000fffff, 665 - }, 666 - { 0, 0 }, 659 + CS_AMBA_ID(0x000bbd03), /* Cortex-A53 */ 660 + CS_AMBA_ID(0x000bbd07), /* Cortex-A57 */ 661 + CS_AMBA_ID(0x000bbd08), /* Cortex-A72 */ 662 + CS_AMBA_ID(0x000bbd09), /* Cortex-A73 */ 663 + CS_AMBA_UCI_ID(0x000f0205, uci_id_debug), /* Qualcomm Kryo */ 664 + CS_AMBA_UCI_ID(0x000f0211, uci_id_debug), /* Qualcomm Kryo */ 665 + {}, 667 666 }; 668 667 669 668 static struct amba_driver debug_driver = {
+4 -9
drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
··· 296 296 297 297 spin_lock(&drvdata->spinlock); 298 298 config->mode = val & ETMv4_MODE_ALL; 299 - 300 - if (config->mode & ETM_MODE_EXCLUDE) 301 - etm4_set_mode_exclude(drvdata, true); 302 - else 303 - etm4_set_mode_exclude(drvdata, false); 299 + etm4_set_mode_exclude(drvdata, 300 + config->mode & ETM_MODE_EXCLUDE ? true : false); 304 301 305 302 if (drvdata->instrp0 == true) { 306 303 /* start by clearing instruction P0 field */ ··· 996 999 * Program include or exclude control bits for vinst or vdata 997 1000 * whenever we change addr comparators to ETM_ADDR_TYPE_RANGE 998 1001 */ 999 - if (config->mode & ETM_MODE_EXCLUDE) 1000 - etm4_set_mode_exclude(drvdata, true); 1001 - else 1002 - etm4_set_mode_exclude(drvdata, false); 1002 + etm4_set_mode_exclude(drvdata, 1003 + config->mode & ETM_MODE_EXCLUDE ? true : false); 1003 1004 1004 1005 spin_unlock(&drvdata->spinlock); 1005 1006 return size;
+26 -12
drivers/hwtracing/coresight/coresight-etm4x.c
··· 34 34 #include "coresight-etm-perf.h" 35 35 36 36 static int boot_enable; 37 - module_param_named(boot_enable, boot_enable, int, S_IRUGO); 37 + module_param(boot_enable, int, 0444); 38 + MODULE_PARM_DESC(boot_enable, "Enable tracing on boot"); 38 39 39 40 /* The number of ETMv4 currently registered */ 40 41 static int etm4_count; ··· 48 47 49 48 static void etm4_os_unlock(struct etmv4_drvdata *drvdata) 50 49 { 51 - /* Writing any value to ETMOSLAR unlocks the trace registers */ 50 + /* Writing 0 to TRCOSLAR unlocks the trace registers */ 52 51 writel_relaxed(0x0, drvdata->base + TRCOSLAR); 53 52 drvdata->os_unlock = true; 54 53 isb(); ··· 188 187 if (coresight_timeout(drvdata->base, TRCSTATR, TRCSTATR_IDLE_BIT, 0)) 189 188 dev_err(etm_dev, 190 189 "timeout while waiting for Idle Trace Status\n"); 190 + 191 + /* 192 + * As recommended by section 4.3.7 ("Synchronization when using the 193 + * memory-mapped interface") of ARM IHI 0064D 194 + */ 195 + dsb(sy); 196 + isb(); 191 197 192 198 done: 193 199 CS_LOCK(drvdata->base); ··· 461 453 /* EN, bit[0] Trace unit enable bit */ 462 454 control &= ~0x1; 463 455 464 - /* make sure everything completes before disabling */ 465 - mb(); 456 + /* 457 + * Make sure everything completes before disabling, as recommended 458 + * by section 7.3.77 ("TRCVICTLR, ViewInst Main Control Register, 459 + * SSTATUS") of ARM IHI 0064D 460 + */ 461 + dsb(sy); 466 462 isb(); 467 463 writel_relaxed(control, drvdata->base + TRCPRGCTLR); 468 464 ··· 1059 1047 return 0; 1060 1048 1061 1049 spin_lock(&etmdrvdata[cpu]->spinlock); 1062 - if (!etmdrvdata[cpu]->os_unlock) { 1050 + if (!etmdrvdata[cpu]->os_unlock) 1063 1051 etm4_os_unlock(etmdrvdata[cpu]); 1064 - etmdrvdata[cpu]->os_unlock = true; 1065 - } 1066 1052 1067 1053 if (local_read(&etmdrvdata[cpu]->mode)) 1068 1054 etm4_enable_hw(etmdrvdata[cpu]); ··· 1202 1192 }; 1203 1193 1204 1194 static const struct amba_id etm4_ids[] = { 1205 - CS_AMBA_ID(0x000bb95d), /* Cortex-A53 */ 1206 - 
CS_AMBA_ID(0x000bb95e), /* Cortex-A57 */ 1207 - CS_AMBA_ID(0x000bb95a), /* Cortex-A72 */ 1208 - CS_AMBA_ID(0x000bb959), /* Cortex-A73 */ 1209 - CS_AMBA_UCI_ID(0x000bb9da, uci_id_etm4), /* Cortex-A35 */ 1195 + CS_AMBA_ID(0x000bb95d), /* Cortex-A53 */ 1196 + CS_AMBA_ID(0x000bb95e), /* Cortex-A57 */ 1197 + CS_AMBA_ID(0x000bb95a), /* Cortex-A72 */ 1198 + CS_AMBA_ID(0x000bb959), /* Cortex-A73 */ 1199 + CS_AMBA_UCI_ID(0x000bb9da, uci_id_etm4),/* Cortex-A35 */ 1200 + CS_AMBA_UCI_ID(0x000f0205, uci_id_etm4),/* Qualcomm Kryo */ 1201 + CS_AMBA_UCI_ID(0x000f0211, uci_id_etm4),/* Qualcomm Kryo */ 1202 + CS_AMBA_ID(0x000bb802), /* Qualcomm Kryo 385 Cortex-A55 */ 1203 + CS_AMBA_ID(0x000bb803), /* Qualcomm Kryo 385 Cortex-A75 */ 1210 1204 {}, 1211 1205 }; 1212 1206
+10 -1
drivers/hwtracing/coresight/coresight-funnel.c
··· 5 5 * Description: CoreSight Funnel driver 6 6 */ 7 7 8 + #include <linux/acpi.h> 8 9 #include <linux/kernel.h> 9 10 #include <linux/init.h> 10 11 #include <linux/types.h> ··· 193 192 194 193 if (is_of_node(dev_fwnode(dev)) && 195 194 of_device_is_compatible(dev->of_node, "arm,coresight-funnel")) 196 - pr_warn_once("Uses OBSOLETE CoreSight funnel binding\n"); 195 + dev_warn_once(dev, "Uses OBSOLETE CoreSight funnel binding\n"); 197 196 198 197 desc.name = coresight_alloc_device_name(&funnel_devs, dev); 199 198 if (!desc.name) ··· 303 302 {} 304 303 }; 305 304 305 + #ifdef CONFIG_ACPI 306 + static const struct acpi_device_id static_funnel_ids[] = { 307 + {"ARMHC9FE", 0}, 308 + {}, 309 + }; 310 + #endif 311 + 306 312 static struct platform_driver static_funnel_driver = { 307 313 .probe = static_funnel_probe, 308 314 .driver = { 309 315 .name = "coresight-static-funnel", 310 316 .of_match_table = static_funnel_match, 317 + .acpi_match_table = ACPI_PTR(static_funnel_ids), 311 318 .pm = &funnel_dev_pm_ops, 312 319 .suppress_bind_attrs = true, 313 320 },
+5 -5
drivers/hwtracing/coresight/coresight-priv.h
··· 185 185 } 186 186 187 187 /* coresight AMBA ID, full UCI structure: id table entry. */ 188 - #define CS_AMBA_UCI_ID(pid, uci_ptr) \ 189 - { \ 190 - .id = pid, \ 191 - .mask = 0x000fffff, \ 192 - .data = uci_ptr \ 188 + #define CS_AMBA_UCI_ID(pid, uci_ptr) \ 189 + { \ 190 + .id = pid, \ 191 + .mask = 0x000fffff, \ 192 + .data = (void *)uci_ptr \ 193 193 } 194 194 195 195 /* extract the data value from a UCI structure given amba_id pointer. */
+2 -1
drivers/hwtracing/coresight/coresight-replicator.c
··· 184 184 185 185 if (is_of_node(dev_fwnode(dev)) && 186 186 of_device_is_compatible(dev->of_node, "arm,coresight-replicator")) 187 - pr_warn_once("Uses OBSOLETE CoreSight replicator binding\n"); 187 + dev_warn_once(dev, 188 + "Uses OBSOLETE CoreSight replicator binding\n"); 188 189 189 190 desc.name = coresight_alloc_device_name(&replicator_devs, dev); 190 191 if (!desc.name)
+2 -21
drivers/hwtracing/coresight/coresight-tmc-etf.c
··· 479 479 * traces. 480 480 */ 481 481 if (!buf->snapshot && to_read > handle->size) { 482 - u32 mask = 0; 483 - 484 - /* 485 - * The value written to RRP must be byte-address aligned to 486 - * the width of the trace memory databus _and_ to a frame 487 - * boundary (16 byte), whichever is the biggest. For example, 488 - * for 32-bit, 64-bit and 128-bit wide trace memory, the four 489 - * LSBs must be 0s. For 256-bit wide trace memory, the five 490 - * LSBs must be 0s. 491 - */ 492 - switch (drvdata->memwidth) { 493 - case TMC_MEM_INTF_WIDTH_32BITS: 494 - case TMC_MEM_INTF_WIDTH_64BITS: 495 - case TMC_MEM_INTF_WIDTH_128BITS: 496 - mask = GENMASK(31, 4); 497 - break; 498 - case TMC_MEM_INTF_WIDTH_256BITS: 499 - mask = GENMASK(31, 5); 500 - break; 501 - } 482 + u32 mask = tmc_get_memwidth_mask(drvdata); 502 483 503 484 /* 504 485 * Make sure the new size is aligned in accordance with the 505 - * requirement explained above. 486 + * requirement explained in function tmc_get_memwidth_mask(). 506 487 */ 507 488 to_read = handle->size & mask; 508 489 /* Move the RAM read pointer up */
+63 -30
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 871 871 return ERR_PTR(rc); 872 872 } 873 873 874 + refcount_set(&etr_buf->refcount, 1); 874 875 dev_dbg(dev, "allocated buffer of size %ldKB in mode %d\n", 875 876 (unsigned long)size >> 10, etr_buf->mode); 876 877 return etr_buf; ··· 928 927 rrp = tmc_read_rrp(drvdata); 929 928 rwp = tmc_read_rwp(drvdata); 930 929 status = readl_relaxed(drvdata->base + TMC_STS); 930 + 931 + /* 932 + * If there were memory errors in the session, truncate the 933 + * buffer. 934 + */ 935 + if (WARN_ON_ONCE(status & TMC_STS_MEMERR)) { 936 + dev_dbg(&drvdata->csdev->dev, 937 + "tmc memory error detected, truncating buffer\n"); 938 + etr_buf->len = 0; 939 + etr_buf->full = 0; 940 + return; 941 + } 942 + 931 943 etr_buf->full = status & TMC_STS_FULL; 932 944 933 945 WARN_ON(!etr_buf->ops || !etr_buf->ops->sync); 934 946 935 947 etr_buf->ops->sync(etr_buf, rrp, rwp); 936 - 937 - /* Insert barrier packets at the beginning, if there was an overflow */ 938 - if (etr_buf->full) 939 - tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset); 940 948 } 941 949 942 950 static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata) ··· 1082 1072 drvdata->sysfs_buf = NULL; 1083 1073 } else { 1084 1074 tmc_sync_etr_buf(drvdata); 1075 + /* 1076 + * Insert barrier packets at the beginning, if there was 1077 + * an overflow. 1078 + */ 1079 + if (etr_buf->full) 1080 + tmc_etr_buf_insert_barrier_packet(etr_buf, 1081 + etr_buf->offset); 1085 1082 } 1086 1083 } 1087 1084 ··· 1280 1263 if (IS_ERR(etr_buf)) 1281 1264 return etr_buf; 1282 1265 1283 - refcount_set(&etr_buf->refcount, 1); 1284 - 1285 1266 /* Now that we have a buffer, add it to the IDR. 
*/ 1286 1267 mutex_lock(&drvdata->idr_mutex); 1287 1268 ret = idr_alloc(&drvdata->idr, etr_buf, pid, pid + 1, GFP_KERNEL); ··· 1306 1291 struct perf_event *event, int nr_pages, 1307 1292 void **pages, bool snapshot) 1308 1293 { 1309 - struct etr_buf *etr_buf; 1310 - 1311 1294 /* 1312 1295 * In per-thread mode the etr_buf isn't shared, so just go ahead 1313 1296 * with memory allocation. 1314 1297 */ 1315 - etr_buf = alloc_etr_buf(drvdata, event, nr_pages, pages, snapshot); 1316 - if (IS_ERR(etr_buf)) 1317 - goto out; 1318 - 1319 - refcount_set(&etr_buf->refcount, 1); 1320 - out: 1321 - return etr_buf; 1298 + return alloc_etr_buf(drvdata, event, nr_pages, pages, snapshot); 1322 1299 } 1323 1300 1324 1301 static struct etr_buf * ··· 1417 1410 * tmc_etr_sync_perf_buffer: Copy the actual trace data from the hardware 1418 1411 * buffer to the perf ring buffer. 1419 1412 */ 1420 - static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf) 1413 + static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf, 1414 + unsigned long src_offset, 1415 + unsigned long to_copy) 1421 1416 { 1422 - long bytes, to_copy; 1423 - long pg_idx, pg_offset, src_offset; 1417 + long bytes; 1418 + long pg_idx, pg_offset; 1424 1419 unsigned long head = etr_perf->head; 1425 1420 char **dst_pages, *src_buf; 1426 1421 struct etr_buf *etr_buf = etr_perf->etr_buf; ··· 1431 1422 pg_idx = head >> PAGE_SHIFT; 1432 1423 pg_offset = head & (PAGE_SIZE - 1); 1433 1424 dst_pages = (char **)etr_perf->pages; 1434 - src_offset = etr_buf->offset; 1435 - to_copy = etr_buf->len; 1436 1425 1437 1426 while (to_copy > 0) { 1438 1427 /* ··· 1441 1434 * 3) what is available in the destination page. 1442 1435 * in one iteration. 
1443 1436 */ 1437 + if (src_offset >= etr_buf->size) 1438 + src_offset -= etr_buf->size; 1444 1439 bytes = tmc_etr_buf_get_data(etr_buf, src_offset, to_copy, 1445 1440 &src_buf); 1446 1441 if (WARN_ON_ONCE(bytes <= 0)) ··· 1463 1454 1464 1455 /* Move source pointers */ 1465 1456 src_offset += bytes; 1466 - if (src_offset >= etr_buf->size) 1467 - src_offset -= etr_buf->size; 1468 1457 } 1469 1458 } 1470 1459 ··· 1478 1471 void *config) 1479 1472 { 1480 1473 bool lost = false; 1481 - unsigned long flags, size = 0; 1474 + unsigned long flags, offset, size = 0; 1482 1475 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 1483 1476 struct etr_perf_buffer *etr_perf = config; 1484 1477 struct etr_buf *etr_buf = etr_perf->etr_buf; ··· 1491 1484 goto out; 1492 1485 } 1493 1486 1494 - if (WARN_ON(drvdata->perf_data != etr_perf)) { 1487 + if (WARN_ON(drvdata->perf_buf != etr_buf)) { 1495 1488 lost = true; 1496 1489 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1497 1490 goto out; ··· 1503 1496 tmc_sync_etr_buf(drvdata); 1504 1497 1505 1498 CS_LOCK(drvdata->base); 1506 - /* Reset perf specific data */ 1507 - drvdata->perf_data = NULL; 1508 1499 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1509 1500 1501 + lost = etr_buf->full; 1502 + offset = etr_buf->offset; 1510 1503 size = etr_buf->len; 1511 - tmc_etr_sync_perf_buffer(etr_perf); 1504 + 1505 + /* 1506 + * The ETR buffer may be bigger than the space available in the 1507 + * perf ring buffer (handle->size). If so advance the offset so that we 1508 + * get the latest trace data. In snapshot mode none of that matters 1509 + * since we are expected to clobber stale data in favour of the latest 1510 + * traces. 1511 + */ 1512 + if (!etr_perf->snapshot && size > handle->size) { 1513 + u32 mask = tmc_get_memwidth_mask(drvdata); 1514 + 1515 + /* 1516 + * Make sure the new size is aligned in accordance with the 1517 + * requirement explained in function tmc_get_memwidth_mask(). 
1518 + */ 1519 + size = handle->size & mask; 1520 + offset = etr_buf->offset + etr_buf->len - size; 1521 + 1522 + if (offset >= etr_buf->size) 1523 + offset -= etr_buf->size; 1524 + lost = true; 1525 + } 1526 + 1527 + /* Insert barrier packets at the beginning, if there was an overflow */ 1528 + if (lost) 1529 + tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset); 1530 + tmc_etr_sync_perf_buffer(etr_perf, offset, size); 1512 1531 1513 1532 /* 1514 1533 * In snapshot mode we simply increment the head by the number of byte ··· 1544 1511 */ 1545 1512 if (etr_perf->snapshot) 1546 1513 handle->head += size; 1547 - 1548 - lost |= etr_buf->full; 1549 1514 out: 1550 1515 /* 1551 1516 * Don't set the TRUNCATED flag in snapshot mode because 1) the ··· 1587 1556 } 1588 1557 1589 1558 etr_perf->head = PERF_IDX2OFF(handle->head, etr_perf); 1590 - drvdata->perf_data = etr_perf; 1591 1559 1592 1560 /* 1593 1561 * No HW configuration is needed if the sink is already in ··· 1602 1572 /* Associate with monitored process. */ 1603 1573 drvdata->pid = pid; 1604 1574 drvdata->mode = CS_MODE_PERF; 1575 + drvdata->perf_buf = etr_perf->etr_buf; 1605 1576 atomic_inc(csdev->refcnt); 1606 1577 } 1607 1578 ··· 1648 1617 /* Dissociate from monitored process. */ 1649 1618 drvdata->pid = -1; 1650 1619 drvdata->mode = CS_MODE_DISABLED; 1620 + /* Reset perf specific data */ 1621 + drvdata->perf_buf = NULL; 1651 1622 1652 1623 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1653 1624
+40
drivers/hwtracing/coresight/coresight-tmc.c
··· 70 70 writel_relaxed(0x0, drvdata->base + TMC_CTL); 71 71 } 72 72 73 + u32 tmc_get_memwidth_mask(struct tmc_drvdata *drvdata) 74 + { 75 + u32 mask = 0; 76 + 77 + /* 78 + * When moving RRP or an offset address forward, the new values must 79 + * be byte-address aligned to the width of the trace memory databus 80 + * _and_ to a frame boundary (16 byte), whichever is the biggest. For 81 + * example, for 32-bit, 64-bit and 128-bit wide trace memory, the four 82 + * LSBs must be 0s. For 256-bit wide trace memory, the five LSBs must 83 + * be 0s. 84 + */ 85 + switch (drvdata->memwidth) { 86 + case TMC_MEM_INTF_WIDTH_32BITS: 87 + /* fallthrough */ 88 + case TMC_MEM_INTF_WIDTH_64BITS: 89 + /* fallthrough */ 90 + case TMC_MEM_INTF_WIDTH_128BITS: 91 + mask = GENMASK(31, 4); 92 + break; 93 + case TMC_MEM_INTF_WIDTH_256BITS: 94 + mask = GENMASK(31, 5); 95 + break; 96 + } 97 + 98 + return mask; 99 + } 100 + 73 101 static int tmc_read_prepare(struct tmc_drvdata *drvdata) 74 102 { 75 103 int ret = 0; ··· 264 236 coresight_tmc_reg(mode, TMC_MODE); 265 237 coresight_tmc_reg(pscr, TMC_PSCR); 266 238 coresight_tmc_reg(axictl, TMC_AXICTL); 239 + coresight_tmc_reg(authstatus, TMC_AUTHSTATUS); 267 240 coresight_tmc_reg(devid, CORESIGHT_DEVID); 268 241 coresight_tmc_reg64(rrp, TMC_RRP, TMC_RRPHI); 269 242 coresight_tmc_reg64(rwp, TMC_RWP, TMC_RWPHI); ··· 284 255 &dev_attr_devid.attr, 285 256 &dev_attr_dba.attr, 286 257 &dev_attr_axictl.attr, 258 + &dev_attr_authstatus.attr, 287 259 NULL, 288 260 }; 289 261 ··· 372 342 return fwnode_property_present(dev->fwnode, "arm,scatter-gather"); 373 343 } 374 344 345 + static inline bool tmc_etr_has_non_secure_access(struct tmc_drvdata *drvdata) 346 + { 347 + u32 auth = readl_relaxed(drvdata->base + TMC_AUTHSTATUS); 348 + 349 + return (auth & TMC_AUTH_NSID_MASK) == 0x3; 350 + } 351 + 375 352 /* Detect and initialise the capabilities of a TMC ETR */ 376 353 static int tmc_etr_setup_caps(struct device *parent, u32 devid, void *dev_caps) 377 354 { 
378 355 int rc; 379 356 u32 dma_mask = 0; 380 357 struct tmc_drvdata *drvdata = dev_get_drvdata(parent); 358 + 359 + if (!tmc_etr_has_non_secure_access(drvdata)) 360 + return -EACCES; 381 361 382 362 /* Set the unadvertised capabilities */ 383 363 tmc_etr_init_caps(drvdata, (u32)(unsigned long)dev_caps);
+8 -3
drivers/hwtracing/coresight/coresight-tmc.h
··· 39 39 #define TMC_ITATBCTR2 0xef0 40 40 #define TMC_ITATBCTR1 0xef4 41 41 #define TMC_ITATBCTR0 0xef8 42 + #define TMC_AUTHSTATUS 0xfb8 42 43 43 44 /* register description */ 44 45 /* TMC_CTL - 0x020 */ ··· 48 47 #define TMC_STS_TMCREADY_BIT 2 49 48 #define TMC_STS_FULL BIT(0) 50 49 #define TMC_STS_TRIGGERED BIT(1) 50 + #define TMC_STS_MEMERR BIT(5) 51 51 /* 52 52 * TMC_AXICTL - 0x110 53 53 * ··· 90 88 #define TMC_DEVID_AXIAW_VALID BIT(16) 91 89 #define TMC_DEVID_AXIAW_SHIFT 17 92 90 #define TMC_DEVID_AXIAW_MASK 0x7f 91 + 92 + #define TMC_AUTH_NSID_MASK GENMASK(1, 0) 93 93 94 94 enum tmc_config_type { 95 95 TMC_CONFIG_TYPE_ETB, ··· 182 178 * device configuration register (DEVID) 183 179 * @idr: Holds etr_bufs allocated for this ETR. 184 180 * @idr_mutex: Access serialisation for idr. 185 - * @perf_data: PERF buffer for ETR. 186 - * @sysfs_data: SYSFS buffer for ETR. 181 + * @sysfs_buf: SYSFS buffer for ETR. 182 + * @perf_buf: PERF buffer for ETR. 187 183 */ 188 184 struct tmc_drvdata { 189 185 void __iomem *base; ··· 206 202 struct idr idr; 207 203 struct mutex idr_mutex; 208 204 struct etr_buf *sysfs_buf; 209 - void *perf_data; 205 + struct etr_buf *perf_buf; 210 206 }; 211 207 212 208 struct etr_buf_operations { ··· 255 251 void tmc_flush_and_stop(struct tmc_drvdata *drvdata); 256 252 void tmc_enable_hw(struct tmc_drvdata *drvdata); 257 253 void tmc_disable_hw(struct tmc_drvdata *drvdata); 254 + u32 tmc_get_memwidth_mask(struct tmc_drvdata *drvdata); 258 255 259 256 /* ETB/ETF functions */ 260 257 int tmc_read_prepare_etb(struct tmc_drvdata *drvdata);
+3
drivers/hwtracing/intel_th/Makefile
··· 20 20 21 21 obj-$(CONFIG_INTEL_TH_PTI) += intel_th_pti.o 22 22 intel_th_pti-y := pti.o 23 + 24 + obj-$(CONFIG_INTEL_TH_MSU) += intel_th_msu_sink.o 25 + intel_th_msu_sink-y := msu-sink.o
+116
drivers/hwtracing/intel_th/msu-sink.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * An example software sink buffer for Intel TH MSU. 4 + * 5 + * Copyright (C) 2019 Intel Corporation. 6 + */ 7 + 8 + #include <linux/intel_th.h> 9 + #include <linux/module.h> 10 + #include <linux/slab.h> 11 + #include <linux/device.h> 12 + #include <linux/dma-mapping.h> 13 + 14 + #define MAX_SGTS 16 15 + 16 + struct msu_sink_private { 17 + struct device *dev; 18 + struct sg_table **sgts; 19 + unsigned int nr_sgts; 20 + }; 21 + 22 + static void *msu_sink_assign(struct device *dev, int *mode) 23 + { 24 + struct msu_sink_private *priv; 25 + 26 + priv = kzalloc(sizeof(*priv), GFP_KERNEL); 27 + if (!priv) 28 + return NULL; 29 + 30 + priv->sgts = kcalloc(MAX_SGTS, sizeof(void *), GFP_KERNEL); 31 + if (!priv->sgts) { 32 + kfree(priv); 33 + return NULL; 34 + } 35 + 36 + priv->dev = dev; 37 + *mode = MSC_MODE_MULTI; 38 + 39 + return priv; 40 + } 41 + 42 + static void msu_sink_unassign(void *data) 43 + { 44 + struct msu_sink_private *priv = data; 45 + 46 + kfree(priv->sgts); 47 + kfree(priv); 48 + } 49 + 50 + /* See also: msc.c: __msc_buffer_win_alloc() */ 51 + static int msu_sink_alloc_window(void *data, struct sg_table **sgt, size_t size) 52 + { 53 + struct msu_sink_private *priv = data; 54 + unsigned int nents; 55 + struct scatterlist *sg_ptr; 56 + void *block; 57 + int ret, i; 58 + 59 + if (priv->nr_sgts == MAX_SGTS) 60 + return -ENOMEM; 61 + 62 + nents = DIV_ROUND_UP(size, PAGE_SIZE); 63 + 64 + ret = sg_alloc_table(*sgt, nents, GFP_KERNEL); 65 + if (ret) 66 + return -ENOMEM; 67 + 68 + priv->sgts[priv->nr_sgts++] = *sgt; 69 + 70 + for_each_sg((*sgt)->sgl, sg_ptr, nents, i) { 71 + block = dma_alloc_coherent(priv->dev->parent->parent, 72 + PAGE_SIZE, &sg_dma_address(sg_ptr), 73 + GFP_KERNEL); 74 + sg_set_buf(sg_ptr, block, PAGE_SIZE); 75 + } 76 + 77 + return nents; 78 + } 79 + 80 + /* See also: msc.c: __msc_buffer_win_free() */ 81 + static void msu_sink_free_window(void *data, struct sg_table *sgt) 82 + { 83 + struct 
msu_sink_private *priv = data; 84 + struct scatterlist *sg_ptr; 85 + int i; 86 + 87 + for_each_sg(sgt->sgl, sg_ptr, sgt->nents, i) { 88 + dma_free_coherent(priv->dev->parent->parent, PAGE_SIZE, 89 + sg_virt(sg_ptr), sg_dma_address(sg_ptr)); 90 + } 91 + 92 + sg_free_table(sgt); 93 + priv->nr_sgts--; 94 + } 95 + 96 + static int msu_sink_ready(void *data, struct sg_table *sgt, size_t bytes) 97 + { 98 + struct msu_sink_private *priv = data; 99 + 100 + intel_th_msc_window_unlock(priv->dev, sgt); 101 + 102 + return 0; 103 + } 104 + 105 + static const struct msu_buffer sink_mbuf = { 106 + .name = "sink", 107 + .assign = msu_sink_assign, 108 + .unassign = msu_sink_unassign, 109 + .alloc_window = msu_sink_alloc_window, 110 + .free_window = msu_sink_free_window, 111 + .ready = msu_sink_ready, 112 + }; 113 + 114 + module_intel_th_msu_buffer(sink_mbuf); 115 + 116 + MODULE_LICENSE("GPL v2");
+446 -91
drivers/hwtracing/intel_th/msu.c
··· 17 17 #include <linux/mm.h> 18 18 #include <linux/fs.h> 19 19 #include <linux/io.h> 20 + #include <linux/workqueue.h> 20 21 #include <linux/dma-mapping.h> 21 22 22 23 #ifdef CONFIG_X86 23 24 #include <asm/set_memory.h> 24 25 #endif 25 26 27 + #include <linux/intel_th.h> 26 28 #include "intel_th.h" 27 29 #include "msu.h" 28 30 29 31 #define msc_dev(x) (&(x)->thdev->dev) 30 32 33 + /* 34 + * Lockout state transitions: 35 + * READY -> INUSE -+-> LOCKED -+-> READY -> etc. 36 + * \-----------/ 37 + * WIN_READY: window can be used by HW 38 + * WIN_INUSE: window is in use 39 + * WIN_LOCKED: window is filled up and is being processed by the buffer 40 + * handling code 41 + * 42 + * All state transitions happen automatically, except for the LOCKED->READY, 43 + * which needs to be signalled by the buffer code by calling 44 + * intel_th_msc_window_unlock(). 45 + * 46 + * When the interrupt handler has to switch to the next window, it checks 47 + * whether it's READY, and if it is, it performs the switch and tracing 48 + * continues. If it's LOCKED, it stops the trace. 
49 + */ 50 + enum lockout_state { 51 + WIN_READY = 0, 52 + WIN_INUSE, 53 + WIN_LOCKED 54 + }; 55 + 31 56 /** 32 57 * struct msc_window - multiblock mode window descriptor 33 58 * @entry: window list linkage (msc::win_list) 34 59 * @pgoff: page offset into the buffer that this window starts at 60 + * @lockout: lockout state, see comment below 61 + * @lo_lock: lockout state serialization 35 62 * @nr_blocks: number of blocks (pages) in this window 36 63 * @nr_segs: number of segments in this window (<= @nr_blocks) 37 64 * @_sgt: array of block descriptors ··· 67 40 struct msc_window { 68 41 struct list_head entry; 69 42 unsigned long pgoff; 43 + enum lockout_state lockout; 44 + spinlock_t lo_lock; 70 45 unsigned int nr_blocks; 71 46 unsigned int nr_segs; 72 47 struct msc *msc; ··· 95 66 struct msc_window *start_win; 96 67 struct msc_window *win; 97 68 unsigned long offset; 98 - int start_block; 99 - int block; 69 + struct scatterlist *start_block; 70 + struct scatterlist *block; 100 71 unsigned int block_off; 101 72 unsigned int wrap_count; 102 73 unsigned int eof; ··· 106 77 * struct msc - MSC device representation 107 78 * @reg_base: register window base address 108 79 * @thdev: intel_th_device pointer 80 + * @mbuf: MSU buffer, if assigned 81 + * @mbuf_priv MSU buffer's private data, if @mbuf 109 82 * @win_list: list of windows in multiblock mode 110 83 * @single_sgt: single mode buffer 111 84 * @cur_win: current window ··· 131 100 void __iomem *msu_base; 132 101 struct intel_th_device *thdev; 133 102 103 + const struct msu_buffer *mbuf; 104 + void *mbuf_priv; 105 + 106 + struct work_struct work; 134 107 struct list_head win_list; 135 108 struct sg_table single_sgt; 136 109 struct msc_window *cur_win; ··· 143 108 unsigned int single_wrap : 1; 144 109 void *base; 145 110 dma_addr_t base_addr; 111 + u32 orig_addr; 112 + u32 orig_sz; 146 113 147 114 /* <0: no buffer, 0: no users, >0: active users */ 148 115 atomic_t user_count; ··· 163 126 unsigned int index; 164 127 
}; 165 128 129 + static LIST_HEAD(msu_buffer_list); 130 + static struct mutex msu_buffer_mutex; 131 + 132 + /** 133 + * struct msu_buffer_entry - internal MSU buffer bookkeeping 134 + * @entry: link to msu_buffer_list 135 + * @mbuf: MSU buffer object 136 + * @owner: module that provides this MSU buffer 137 + */ 138 + struct msu_buffer_entry { 139 + struct list_head entry; 140 + const struct msu_buffer *mbuf; 141 + struct module *owner; 142 + }; 143 + 144 + static struct msu_buffer_entry *__msu_buffer_entry_find(const char *name) 145 + { 146 + struct msu_buffer_entry *mbe; 147 + 148 + lockdep_assert_held(&msu_buffer_mutex); 149 + 150 + list_for_each_entry(mbe, &msu_buffer_list, entry) { 151 + if (!strcmp(mbe->mbuf->name, name)) 152 + return mbe; 153 + } 154 + 155 + return NULL; 156 + } 157 + 158 + static const struct msu_buffer * 159 + msu_buffer_get(const char *name) 160 + { 161 + struct msu_buffer_entry *mbe; 162 + 163 + mutex_lock(&msu_buffer_mutex); 164 + mbe = __msu_buffer_entry_find(name); 165 + if (mbe && !try_module_get(mbe->owner)) 166 + mbe = NULL; 167 + mutex_unlock(&msu_buffer_mutex); 168 + 169 + return mbe ? 
mbe->mbuf : NULL; 170 + } 171 + 172 + static void msu_buffer_put(const struct msu_buffer *mbuf) 173 + { 174 + struct msu_buffer_entry *mbe; 175 + 176 + mutex_lock(&msu_buffer_mutex); 177 + mbe = __msu_buffer_entry_find(mbuf->name); 178 + if (mbe) 179 + module_put(mbe->owner); 180 + mutex_unlock(&msu_buffer_mutex); 181 + } 182 + 183 + int intel_th_msu_buffer_register(const struct msu_buffer *mbuf, 184 + struct module *owner) 185 + { 186 + struct msu_buffer_entry *mbe; 187 + int ret = 0; 188 + 189 + mbe = kzalloc(sizeof(*mbe), GFP_KERNEL); 190 + if (!mbe) 191 + return -ENOMEM; 192 + 193 + mutex_lock(&msu_buffer_mutex); 194 + if (__msu_buffer_entry_find(mbuf->name)) { 195 + ret = -EEXIST; 196 + kfree(mbe); 197 + goto unlock; 198 + } 199 + 200 + mbe->mbuf = mbuf; 201 + mbe->owner = owner; 202 + list_add_tail(&mbe->entry, &msu_buffer_list); 203 + unlock: 204 + mutex_unlock(&msu_buffer_mutex); 205 + 206 + return ret; 207 + } 208 + EXPORT_SYMBOL_GPL(intel_th_msu_buffer_register); 209 + 210 + void intel_th_msu_buffer_unregister(const struct msu_buffer *mbuf) 211 + { 212 + struct msu_buffer_entry *mbe; 213 + 214 + mutex_lock(&msu_buffer_mutex); 215 + mbe = __msu_buffer_entry_find(mbuf->name); 216 + if (mbe) { 217 + list_del(&mbe->entry); 218 + kfree(mbe); 219 + } 220 + mutex_unlock(&msu_buffer_mutex); 221 + } 222 + EXPORT_SYMBOL_GPL(intel_th_msu_buffer_unregister); 223 + 166 224 static inline bool msc_block_is_empty(struct msc_block_desc *bdesc) 167 225 { 168 226 /* header hasn't been written */ ··· 271 139 return false; 272 140 } 273 141 274 - static inline struct msc_block_desc * 275 - msc_win_block(struct msc_window *win, unsigned int block) 142 + static inline struct scatterlist *msc_win_base_sg(struct msc_window *win) 276 143 { 277 - return sg_virt(&win->sgt->sgl[block]); 144 + return win->sgt->sgl; 278 145 } 279 146 280 - static inline size_t 281 - msc_win_actual_bsz(struct msc_window *win, unsigned int block) 147 + static inline struct msc_block_desc 
*msc_win_base(struct msc_window *win) 282 148 { 283 - return win->sgt->sgl[block].length; 149 + return sg_virt(msc_win_base_sg(win)); 284 150 } 285 151 286 - static inline dma_addr_t 287 - msc_win_baddr(struct msc_window *win, unsigned int block) 152 + static inline dma_addr_t msc_win_base_dma(struct msc_window *win) 288 153 { 289 - return sg_dma_address(&win->sgt->sgl[block]); 154 + return sg_dma_address(msc_win_base_sg(win)); 290 155 } 291 156 292 157 static inline unsigned long 293 - msc_win_bpfn(struct msc_window *win, unsigned int block) 158 + msc_win_base_pfn(struct msc_window *win) 294 159 { 295 - return msc_win_baddr(win, block) >> PAGE_SHIFT; 160 + return PFN_DOWN(msc_win_base_dma(win)); 296 161 } 297 162 298 163 /** ··· 315 186 entry); 316 187 317 188 return list_next_entry(win, entry); 189 + } 190 + 191 + static size_t msc_win_total_sz(struct msc_window *win) 192 + { 193 + struct scatterlist *sg; 194 + unsigned int blk; 195 + size_t size = 0; 196 + 197 + for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) { 198 + struct msc_block_desc *bdesc = sg_virt(sg); 199 + 200 + if (msc_block_wrapped(bdesc)) 201 + return win->nr_blocks << PAGE_SHIFT; 202 + 203 + size += msc_total_sz(bdesc); 204 + if (msc_block_last_written(bdesc)) 205 + break; 206 + } 207 + 208 + return size; 318 209 } 319 210 320 211 /** ··· 365 216 found++; 366 217 367 218 /* skip the empty ones */ 368 - if (nonempty && msc_block_is_empty(msc_win_block(win, 0))) 219 + if (nonempty && msc_block_is_empty(msc_win_base(win))) 369 220 continue; 370 221 371 222 if (found) ··· 399 250 } 400 251 401 252 /** 402 - * msc_win_oldest_block() - locate the oldest block in a given window 253 + * msc_win_oldest_sg() - locate the oldest block in a given window 403 254 * @win: window to look at 404 255 * 405 256 * Return: index of the block with the oldest data 406 257 */ 407 - static unsigned int msc_win_oldest_block(struct msc_window *win) 258 + static struct scatterlist *msc_win_oldest_sg(struct msc_window *win) 
408 259 { 409 260 unsigned int blk; 410 - struct msc_block_desc *bdesc = msc_win_block(win, 0); 261 + struct scatterlist *sg; 262 + struct msc_block_desc *bdesc = msc_win_base(win); 411 263 412 264 /* without wrapping, first block is the oldest */ 413 265 if (!msc_block_wrapped(bdesc)) 414 - return 0; 266 + return msc_win_base_sg(win); 415 267 416 268 /* 417 269 * with wrapping, last written block contains both the newest and the 418 270 * oldest data for this window. 419 271 */ 420 - for (blk = 0; blk < win->nr_segs; blk++) { 421 - bdesc = msc_win_block(win, blk); 272 + for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) { 273 + struct msc_block_desc *bdesc = sg_virt(sg); 422 274 423 275 if (msc_block_last_written(bdesc)) 424 - return blk; 276 + return sg; 425 277 } 426 278 427 - return 0; 279 + return msc_win_base_sg(win); 428 280 } 429 281 430 282 static struct msc_block_desc *msc_iter_bdesc(struct msc_iter *iter) 431 283 { 432 - return msc_win_block(iter->win, iter->block); 433 - } 434 - 435 - static void msc_iter_init(struct msc_iter *iter) 436 - { 437 - memset(iter, 0, sizeof(*iter)); 438 - iter->start_block = -1; 439 - iter->block = -1; 284 + return sg_virt(iter->block); 440 285 } 441 286 442 287 static struct msc_iter *msc_iter_install(struct msc *msc) ··· 455 312 goto unlock; 456 313 } 457 314 458 - msc_iter_init(iter); 459 315 iter->msc = msc; 460 316 461 317 list_add_tail(&iter->entry, &msc->iter_list); ··· 475 333 476 334 static void msc_iter_block_start(struct msc_iter *iter) 477 335 { 478 - if (iter->start_block != -1) 336 + if (iter->start_block) 479 337 return; 480 338 481 - iter->start_block = msc_win_oldest_block(iter->win); 339 + iter->start_block = msc_win_oldest_sg(iter->win); 482 340 iter->block = iter->start_block; 483 341 iter->wrap_count = 0; 484 342 ··· 502 360 return -EINVAL; 503 361 504 362 iter->win = iter->start_win; 505 - iter->start_block = -1; 363 + iter->start_block = NULL; 506 364 507 365 msc_iter_block_start(iter); 508 366 ··· 512 
370 static int msc_iter_win_advance(struct msc_iter *iter) 513 371 { 514 372 iter->win = msc_next_window(iter->win); 515 - iter->start_block = -1; 373 + iter->start_block = NULL; 516 374 517 375 if (iter->win == iter->start_win) { 518 376 iter->eof++; ··· 542 400 return msc_iter_win_advance(iter); 543 401 544 402 /* block advance */ 545 - if (++iter->block == iter->win->nr_segs) 546 - iter->block = 0; 403 + if (sg_is_last(iter->block)) 404 + iter->block = msc_win_base_sg(iter->win); 405 + else 406 + iter->block = sg_next(iter->block); 547 407 548 408 /* no wrapping, sanity check in case there is no last written block */ 549 409 if (!iter->wrap_count && iter->block == iter->start_block) ··· 650 506 static void msc_buffer_clear_hw_header(struct msc *msc) 651 507 { 652 508 struct msc_window *win; 509 + struct scatterlist *sg; 653 510 654 511 list_for_each_entry(win, &msc->win_list, entry) { 655 512 unsigned int blk; 656 513 size_t hw_sz = sizeof(struct msc_block_desc) - 657 514 offsetof(struct msc_block_desc, hw_tag); 658 515 659 - for (blk = 0; blk < win->nr_segs; blk++) { 660 - struct msc_block_desc *bdesc = msc_win_block(win, blk); 516 + for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) { 517 + struct msc_block_desc *bdesc = sg_virt(sg); 661 518 662 519 memset(&bdesc->hw_tag, 0, hw_sz); 663 520 } ··· 670 525 u32 mintctl, msusts; 671 526 672 527 if (!msc->do_irq) 528 + return 0; 529 + 530 + if (!msc->mbuf) 673 531 return 0; 674 532 675 533 mintctl = ioread32(msc->msu_base + REG_MSU_MINTCTL); ··· 702 554 iowrite32(mintctl, msc->msu_base + REG_MSU_MINTCTL); 703 555 } 704 556 557 + static int msc_win_set_lockout(struct msc_window *win, 558 + enum lockout_state expect, 559 + enum lockout_state new) 560 + { 561 + enum lockout_state old; 562 + unsigned long flags; 563 + int ret = 0; 564 + 565 + if (!win->msc->mbuf) 566 + return 0; 567 + 568 + spin_lock_irqsave(&win->lo_lock, flags); 569 + old = win->lockout; 570 + 571 + if (old != expect) { 572 + ret = -EINVAL; 573 + 
dev_warn_ratelimited(msc_dev(win->msc), 574 + "expected lockout state %d, got %d\n", 575 + expect, old); 576 + goto unlock; 577 + } 578 + 579 + win->lockout = new; 580 + 581 + if (old == expect && new == WIN_LOCKED) 582 + atomic_inc(&win->msc->user_count); 583 + else if (old == expect && old == WIN_LOCKED) 584 + atomic_dec(&win->msc->user_count); 585 + 586 + unlock: 587 + spin_unlock_irqrestore(&win->lo_lock, flags); 588 + 589 + if (ret) { 590 + if (expect == WIN_READY && old == WIN_LOCKED) 591 + return -EBUSY; 592 + 593 + /* from intel_th_msc_window_unlock(), don't warn if not locked */ 594 + if (expect == WIN_LOCKED && old == new) 595 + return 0; 596 + } 597 + 598 + return ret; 599 + } 705 600 /** 706 601 * msc_configure() - set up MSC hardware 707 602 * @msc: the MSC device to configure ··· 762 571 if (msc->mode > MSC_MODE_MULTI) 763 572 return -ENOTSUPP; 764 573 765 - if (msc->mode == MSC_MODE_MULTI) 574 + if (msc->mode == MSC_MODE_MULTI) { 575 + if (msc_win_set_lockout(msc->cur_win, WIN_READY, WIN_INUSE)) 576 + return -EBUSY; 577 + 766 578 msc_buffer_clear_hw_header(msc); 579 + } 580 + 581 + msc->orig_addr = ioread32(msc->reg_base + REG_MSU_MSC0BAR); 582 + msc->orig_sz = ioread32(msc->reg_base + REG_MSU_MSC0SIZE); 767 583 768 584 reg = msc->base_addr >> PAGE_SHIFT; 769 585 iowrite32(reg, msc->reg_base + REG_MSU_MSC0BAR); ··· 792 594 793 595 iowrite32(reg, msc->reg_base + REG_MSU_MSC0CTL); 794 596 597 + intel_th_msu_init(msc); 598 + 795 599 msc->thdev->output.multiblock = msc->mode == MSC_MODE_MULTI; 796 600 intel_th_trace_enable(msc->thdev); 797 601 msc->enabled = 1; 798 602 603 + if (msc->mbuf && msc->mbuf->activate) 604 + msc->mbuf->activate(msc->mbuf_priv); 799 605 800 606 return 0; 801 607 } ··· 813 611 */ 814 612 static void msc_disable(struct msc *msc) 815 613 { 614 + struct msc_window *win = msc->cur_win; 816 615 u32 reg; 817 616 818 617 lockdep_assert_held(&msc->buf_mutex); 819 618 619 + if (msc->mode == MSC_MODE_MULTI) 620 + msc_win_set_lockout(win, 
WIN_INUSE, WIN_LOCKED); 621 + 622 + if (msc->mbuf && msc->mbuf->deactivate) 623 + msc->mbuf->deactivate(msc->mbuf_priv); 624 + intel_th_msu_deinit(msc); 820 625 intel_th_trace_disable(msc->thdev); 821 626 822 627 if (msc->mode == MSC_MODE_SINGLE) { ··· 839 630 reg = ioread32(msc->reg_base + REG_MSU_MSC0CTL); 840 631 reg &= ~MSC_EN; 841 632 iowrite32(reg, msc->reg_base + REG_MSU_MSC0CTL); 633 + 634 + if (msc->mbuf && msc->mbuf->ready) 635 + msc->mbuf->ready(msc->mbuf_priv, win->sgt, 636 + msc_win_total_sz(win)); 637 + 842 638 msc->enabled = 0; 843 639 844 - iowrite32(0, msc->reg_base + REG_MSU_MSC0BAR); 845 - iowrite32(0, msc->reg_base + REG_MSU_MSC0SIZE); 640 + iowrite32(msc->orig_addr, msc->reg_base + REG_MSU_MSC0BAR); 641 + iowrite32(msc->orig_sz, msc->reg_base + REG_MSU_MSC0SIZE); 846 642 847 643 dev_dbg(msc_dev(msc), "MSCnNWSA: %08x\n", 848 644 ioread32(msc->reg_base + REG_MSU_MSC0NWSA)); 849 645 850 646 reg = ioread32(msc->reg_base + REG_MSU_MSC0STS); 851 647 dev_dbg(msc_dev(msc), "MSCnSTS: %08x\n", reg); 648 + 649 + reg = ioread32(msc->reg_base + REG_MSU_MSUSTS); 650 + reg &= msc->index ? 
MSUSTS_MSC1BLAST : MSUSTS_MSC0BLAST; 651 + iowrite32(reg, msc->reg_base + REG_MSU_MSUSTS); 852 652 } 853 653 854 654 static int intel_th_msc_activate(struct intel_th_device *thdev) ··· 1009 791 return nr_segs; 1010 792 1011 793 err_nomem: 1012 - for (i--; i >= 0; i--) 794 + for_each_sg(win->sgt->sgl, sg_ptr, i, ret) 1013 795 dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, 1014 - msc_win_block(win, i), 1015 - msc_win_baddr(win, i)); 796 + sg_virt(sg_ptr), sg_dma_address(sg_ptr)); 1016 797 1017 798 sg_free_table(win->sgt); 1018 799 ··· 1021 804 #ifdef CONFIG_X86 1022 805 static void msc_buffer_set_uc(struct msc_window *win, unsigned int nr_segs) 1023 806 { 807 + struct scatterlist *sg_ptr; 1024 808 int i; 1025 809 1026 - for (i = 0; i < nr_segs; i++) 810 + for_each_sg(win->sgt->sgl, sg_ptr, nr_segs, i) { 1027 811 /* Set the page as uncached */ 1028 - set_memory_uc((unsigned long)msc_win_block(win, i), 1); 812 + set_memory_uc((unsigned long)sg_virt(sg_ptr), 813 + PFN_DOWN(sg_ptr->length)); 814 + } 1029 815 } 1030 816 1031 817 static void msc_buffer_set_wb(struct msc_window *win) 1032 818 { 819 + struct scatterlist *sg_ptr; 1033 820 int i; 1034 821 1035 - for (i = 0; i < win->nr_segs; i++) 822 + for_each_sg(win->sgt->sgl, sg_ptr, win->nr_segs, i) { 1036 823 /* Reset the page to write-back */ 1037 - set_memory_wb((unsigned long)msc_win_block(win, i), 1); 824 + set_memory_wb((unsigned long)sg_virt(sg_ptr), 825 + PFN_DOWN(sg_ptr->length)); 826 + } 1038 827 } 1039 828 #else /* !X86 */ 1040 829 static inline void ··· 1066 843 if (!nr_blocks) 1067 844 return 0; 1068 845 1069 - /* 1070 - * This limitation hold as long as we need random access to the 1071 - * block. When that changes, this can go away. 
1072 - */ 1073 - if (nr_blocks > SG_MAX_SINGLE_ALLOC) 1074 - return -EINVAL; 1075 - 1076 846 win = kzalloc(sizeof(*win), GFP_KERNEL); 1077 847 if (!win) 1078 848 return -ENOMEM; 1079 849 1080 850 win->msc = msc; 1081 851 win->sgt = &win->_sgt; 852 + win->lockout = WIN_READY; 853 + spin_lock_init(&win->lo_lock); 1082 854 1083 855 if (!list_empty(&msc->win_list)) { 1084 856 struct msc_window *prev = list_last_entry(&msc->win_list, ··· 1083 865 win->pgoff = prev->pgoff + prev->nr_blocks; 1084 866 } 1085 867 1086 - ret = __msc_buffer_win_alloc(win, nr_blocks); 1087 - if (ret < 0) 868 + if (msc->mbuf && msc->mbuf->alloc_window) 869 + ret = msc->mbuf->alloc_window(msc->mbuf_priv, &win->sgt, 870 + nr_blocks << PAGE_SHIFT); 871 + else 872 + ret = __msc_buffer_win_alloc(win, nr_blocks); 873 + 874 + if (ret <= 0) 1088 875 goto err_nomem; 1089 876 1090 877 msc_buffer_set_uc(win, ret); ··· 1098 875 win->nr_blocks = nr_blocks; 1099 876 1100 877 if (list_empty(&msc->win_list)) { 1101 - msc->base = msc_win_block(win, 0); 1102 - msc->base_addr = msc_win_baddr(win, 0); 878 + msc->base = msc_win_base(win); 879 + msc->base_addr = msc_win_base_dma(win); 1103 880 msc->cur_win = win; 1104 881 } 1105 882 ··· 1116 893 1117 894 static void __msc_buffer_win_free(struct msc *msc, struct msc_window *win) 1118 895 { 896 + struct scatterlist *sg; 1119 897 int i; 1120 898 1121 - for (i = 0; i < win->nr_segs; i++) { 1122 - struct page *page = sg_page(&win->sgt->sgl[i]); 899 + for_each_sg(win->sgt->sgl, sg, win->nr_segs, i) { 900 + struct page *page = sg_page(sg); 1123 901 1124 902 page->mapping = NULL; 1125 903 dma_free_coherent(msc_dev(win->msc)->parent->parent, PAGE_SIZE, 1126 - msc_win_block(win, i), msc_win_baddr(win, i)); 904 + sg_virt(sg), sg_dma_address(sg)); 1127 905 } 1128 906 sg_free_table(win->sgt); 1129 907 } ··· 1149 925 1150 926 msc_buffer_set_wb(win); 1151 927 1152 - __msc_buffer_win_free(msc, win); 928 + if (msc->mbuf && msc->mbuf->free_window) 929 + 
msc->mbuf->free_window(msc->mbuf_priv, win->sgt); 930 + else 931 + __msc_buffer_win_free(msc, win); 1153 932 1154 933 kfree(win); 1155 934 } ··· 1170 943 1171 944 /* call with msc::mutex locked */ 1172 945 list_for_each_entry(win, &msc->win_list, entry) { 946 + struct scatterlist *sg; 1173 947 unsigned int blk; 1174 948 u32 sw_tag = 0; 1175 949 ··· 1186 958 next_win = list_next_entry(win, entry); 1187 959 } 1188 960 1189 - for (blk = 0; blk < win->nr_segs; blk++) { 1190 - struct msc_block_desc *bdesc = msc_win_block(win, blk); 961 + for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) { 962 + struct msc_block_desc *bdesc = sg_virt(sg); 1191 963 1192 964 memset(bdesc, 0, sizeof(*bdesc)); 1193 965 1194 - bdesc->next_win = msc_win_bpfn(next_win, 0); 966 + bdesc->next_win = msc_win_base_pfn(next_win); 1195 967 1196 968 /* 1197 969 * Similarly to last window, last block should point ··· 1199 971 */ 1200 972 if (blk == win->nr_segs - 1) { 1201 973 sw_tag |= MSC_SW_TAG_LASTBLK; 1202 - bdesc->next_blk = msc_win_bpfn(win, 0); 974 + bdesc->next_blk = msc_win_base_pfn(win); 1203 975 } else { 1204 - bdesc->next_blk = msc_win_bpfn(win, blk + 1); 976 + dma_addr_t addr = sg_dma_address(sg_next(sg)); 977 + 978 + bdesc->next_blk = PFN_DOWN(addr); 1205 979 } 1206 980 1207 981 bdesc->sw_tag = sw_tag; 1208 - bdesc->block_sz = msc_win_actual_bsz(win, blk) / 64; 982 + bdesc->block_sz = sg->length / 64; 1209 983 } 1210 984 } 1211 985 ··· 1366 1136 static struct page *msc_buffer_get_page(struct msc *msc, unsigned long pgoff) 1367 1137 { 1368 1138 struct msc_window *win; 1139 + struct scatterlist *sg; 1369 1140 unsigned int blk; 1370 1141 1371 1142 if (msc->mode == MSC_MODE_SINGLE) ··· 1381 1150 found: 1382 1151 pgoff -= win->pgoff; 1383 1152 1384 - for (blk = 0; blk < win->nr_segs; blk++) { 1385 - struct page *page = sg_page(&win->sgt->sgl[blk]); 1386 - size_t pgsz = PFN_DOWN(msc_win_actual_bsz(win, blk)); 1153 + for_each_sg(win->sgt->sgl, sg, win->nr_segs, blk) { 1154 + struct page *page = 
sg_page(sg); 1155 + size_t pgsz = PFN_DOWN(sg->length); 1387 1156 1388 1157 if (pgoff < pgsz) 1389 1158 return page + pgoff; ··· 1687 1456 else 1688 1457 msc->cur_win = list_next_entry(msc->cur_win, entry); 1689 1458 1690 - msc->base = msc_win_block(msc->cur_win, 0); 1691 - msc->base_addr = msc_win_baddr(msc->cur_win, 0); 1459 + msc->base = msc_win_base(msc->cur_win); 1460 + msc->base_addr = msc_win_base_dma(msc->cur_win); 1692 1461 1693 1462 intel_th_trace_switch(msc->thdev); 1463 + } 1464 + 1465 + /** 1466 + * intel_th_msc_window_unlock - put the window back in rotation 1467 + * @dev: MSC device to which this relates 1468 + * @sgt: buffer's sg_table for the window, does nothing if NULL 1469 + */ 1470 + void intel_th_msc_window_unlock(struct device *dev, struct sg_table *sgt) 1471 + { 1472 + struct msc *msc = dev_get_drvdata(dev); 1473 + struct msc_window *win; 1474 + 1475 + if (!sgt) 1476 + return; 1477 + 1478 + win = msc_find_window(msc, sgt, false); 1479 + if (!win) 1480 + return; 1481 + 1482 + msc_win_set_lockout(win, WIN_LOCKED, WIN_READY); 1483 + } 1484 + EXPORT_SYMBOL_GPL(intel_th_msc_window_unlock); 1485 + 1486 + static void msc_work(struct work_struct *work) 1487 + { 1488 + struct msc *msc = container_of(work, struct msc, work); 1489 + 1490 + intel_th_msc_deactivate(msc->thdev); 1694 1491 } 1695 1492 1696 1493 static irqreturn_t intel_th_msc_interrupt(struct intel_th_device *thdev) ··· 1726 1467 struct msc *msc = dev_get_drvdata(&thdev->dev); 1727 1468 u32 msusts = ioread32(msc->msu_base + REG_MSU_MSUSTS); 1728 1469 u32 mask = msc->index ? MSUSTS_MSC1BLAST : MSUSTS_MSC0BLAST; 1470 + struct msc_window *win, *next_win; 1729 1471 1730 - if (!(msusts & mask)) { 1731 - if (msc->enabled) 1732 - return IRQ_HANDLED; 1472 + if (!msc->do_irq || !msc->mbuf) 1733 1473 return IRQ_NONE; 1474 + 1475 + msusts &= mask; 1476 + 1477 + if (!msusts) 1478 + return msc->enabled ? 
IRQ_HANDLED : IRQ_NONE; 1479 + 1480 + iowrite32(msusts, msc->msu_base + REG_MSU_MSUSTS); 1481 + 1482 + if (!msc->enabled) 1483 + return IRQ_NONE; 1484 + 1485 + /* grab the window before we do the switch */ 1486 + win = msc->cur_win; 1487 + if (!win) 1488 + return IRQ_HANDLED; 1489 + next_win = msc_next_window(win); 1490 + if (!next_win) 1491 + return IRQ_HANDLED; 1492 + 1493 + /* next window: if READY, proceed, if LOCKED, stop the trace */ 1494 + if (msc_win_set_lockout(next_win, WIN_READY, WIN_INUSE)) { 1495 + schedule_work(&msc->work); 1496 + return IRQ_HANDLED; 1734 1497 } 1498 + 1499 + /* current window: INUSE -> LOCKED */ 1500 + msc_win_set_lockout(win, WIN_INUSE, WIN_LOCKED); 1501 + 1502 + msc_win_switch(msc); 1503 + 1504 + if (msc->mbuf && msc->mbuf->ready) 1505 + msc->mbuf->ready(msc->mbuf_priv, win->sgt, 1506 + msc_win_total_sz(win)); 1735 1507 1736 1508 return IRQ_HANDLED; 1737 1509 } ··· 1801 1511 1802 1512 static DEVICE_ATTR_RW(wrap); 1803 1513 1514 + static void msc_buffer_unassign(struct msc *msc) 1515 + { 1516 + lockdep_assert_held(&msc->buf_mutex); 1517 + 1518 + if (!msc->mbuf) 1519 + return; 1520 + 1521 + msc->mbuf->unassign(msc->mbuf_priv); 1522 + msu_buffer_put(msc->mbuf); 1523 + msc->mbuf_priv = NULL; 1524 + msc->mbuf = NULL; 1525 + } 1526 + 1804 1527 static ssize_t 1805 1528 mode_show(struct device *dev, struct device_attribute *attr, char *buf) 1806 1529 { 1807 1530 struct msc *msc = dev_get_drvdata(dev); 1531 + const char *mode = msc_mode[msc->mode]; 1532 + ssize_t ret; 1808 1533 1809 - return scnprintf(buf, PAGE_SIZE, "%s\n", msc_mode[msc->mode]); 1534 + mutex_lock(&msc->buf_mutex); 1535 + if (msc->mbuf) 1536 + mode = msc->mbuf->name; 1537 + ret = scnprintf(buf, PAGE_SIZE, "%s\n", mode); 1538 + mutex_unlock(&msc->buf_mutex); 1539 + 1540 + return ret; 1810 1541 } 1811 1542 1812 1543 static ssize_t 1813 1544 mode_store(struct device *dev, struct device_attribute *attr, const char *buf, 1814 1545 size_t size) 1815 1546 { 1547 + const struct 
msu_buffer *mbuf = NULL; 1816 1548 struct msc *msc = dev_get_drvdata(dev); 1817 1549 size_t len = size; 1818 - char *cp; 1550 + char *cp, *mode; 1819 1551 int i, ret; 1820 1552 1821 1553 if (!capable(CAP_SYS_RAWIO)) ··· 1847 1535 if (cp) 1848 1536 len = cp - buf; 1849 1537 1850 - for (i = 0; i < ARRAY_SIZE(msc_mode); i++) 1851 - if (!strncmp(msc_mode[i], buf, len)) 1852 - goto found; 1538 + mode = kstrndup(buf, len, GFP_KERNEL); 1539 + i = match_string(msc_mode, ARRAY_SIZE(msc_mode), mode); 1540 + if (i >= 0) 1541 + goto found; 1542 + 1543 + /* Buffer sinks only work with a usable IRQ */ 1544 + if (!msc->do_irq) { 1545 + kfree(mode); 1546 + return -EINVAL; 1547 + } 1548 + 1549 + mbuf = msu_buffer_get(mode); 1550 + kfree(mode); 1551 + if (mbuf) 1552 + goto found; 1853 1553 1854 1554 return -EINVAL; 1855 1555 1856 1556 found: 1857 1557 mutex_lock(&msc->buf_mutex); 1558 + ret = 0; 1559 + 1560 + /* Same buffer: do nothing */ 1561 + if (mbuf && mbuf == msc->mbuf) { 1562 + /* put the extra reference we just got */ 1563 + msu_buffer_put(mbuf); 1564 + goto unlock; 1565 + } 1566 + 1858 1567 ret = msc_buffer_unlocked_free_unless_used(msc); 1859 - if (!ret) 1860 - msc->mode = i; 1568 + if (ret) 1569 + goto unlock; 1570 + 1571 + if (mbuf) { 1572 + void *mbuf_priv = mbuf->assign(dev, &i); 1573 + 1574 + if (!mbuf_priv) { 1575 + ret = -ENOMEM; 1576 + goto unlock; 1577 + } 1578 + 1579 + msc_buffer_unassign(msc); 1580 + msc->mbuf_priv = mbuf_priv; 1581 + msc->mbuf = mbuf; 1582 + } else { 1583 + msc_buffer_unassign(msc); 1584 + } 1585 + 1586 + msc->mode = i; 1587 + 1588 + unlock: 1589 + if (ret && mbuf) 1590 + msu_buffer_put(mbuf); 1861 1591 mutex_unlock(&msc->buf_mutex); 1862 1592 1863 1593 return ret ? ret : size; ··· 2021 1667 return -EINVAL; 2022 1668 2023 1669 mutex_lock(&msc->buf_mutex); 2024 - if (msc->mode != MSC_MODE_MULTI) 1670 + /* 1671 + * Window switch can only happen in the "multi" mode. 
1672 + * If an external buffer is engaged, it has full 1673 + * control over window switching. 1674 + */ 1675 + if (msc->mode != MSC_MODE_MULTI || msc->mbuf) 2025 1676 ret = -ENOTSUPP; 2026 1677 else 2027 1678 msc_win_switch(msc); ··· 2079 1720 msc->reg_base = base + msc->index * 0x100; 2080 1721 msc->msu_base = base; 2081 1722 2082 - err = intel_th_msu_init(msc); 2083 - if (err) 2084 - return err; 2085 - 1723 + INIT_WORK(&msc->work, msc_work); 2086 1724 err = intel_th_msc_init(msc); 2087 1725 if (err) 2088 1726 return err; ··· 2095 1739 int ret; 2096 1740 2097 1741 intel_th_msc_deactivate(thdev); 2098 - intel_th_msu_deinit(msc); 2099 1742 2100 1743 /* 2101 1744 * Buffers should not be used at this point except if the
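The msu.c hunks above also introduce a per-window lockout state machine: WIN_READY → WIN_INUSE while the hardware fills the window, WIN_INUSE → WIN_LOCKED while a buffer sink owns the data, then back to WIN_READY on unlock. A minimal userspace sketch of just the transition rules in msc_win_set_lockout(), with the spinlock, the user_count accounting and the msc back-pointer omitted; `win_model` and `win_set_lockout` are illustrative names, not the kernel's:

```c
#include <assert.h>
#include <errno.h>

/* WIN_READY: free for the hardware; WIN_INUSE: being written;
 * WIN_LOCKED: owned by a buffer-sink reader. */
enum lockout_state { WIN_READY = 0, WIN_INUSE, WIN_LOCKED };

struct win_model {
	enum lockout_state lockout;
};

static int win_set_lockout(struct win_model *win,
			   enum lockout_state expect,
			   enum lockout_state new_state)
{
	enum lockout_state old = win->lockout;

	if (old != expect) {
		/* the patch special-cases two mismatches */
		if (expect == WIN_READY && old == WIN_LOCKED)
			return -EBUSY;	/* a reader still owns the window */
		if (expect == WIN_LOCKED && old == new_state)
			return 0;	/* unlocking a never-locked window */
		return -EINVAL;
	}

	win->lockout = new_state;
	return 0;
}
```

The interrupt handler in the hunk above relies on exactly this shape: it attempts WIN_READY → WIN_INUSE on the next window and, on failure, schedules msc_work() to stop the trace instead of switching.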
drivers/hwtracing/intel_th/msu.h (+11 -9)
···
 #define M0BLIE			BIT(16)
 #define M1BLIE			BIT(24)

-/* MSC operating modes (MSC_MODE) */
-enum {
-	MSC_MODE_SINGLE	= 0,
-	MSC_MODE_MULTI,
-	MSC_MODE_EXI,
-	MSC_MODE_DEBUG,
-};
-
 /* MSCnSTS bits */
 #define MSCSTS_WRAPSTAT	BIT(1)	/* Wrap occurred */
 #define MSCSTS_PLE		BIT(2)	/* Pipeline Empty */
···
 	return bdesc->valid_dw * 4 - MSC_BDESC;
 }

+static inline unsigned long msc_total_sz(struct msc_block_desc *bdesc)
+{
+	return bdesc->valid_dw * 4;
+}
+
+static inline unsigned long msc_block_sz(struct msc_block_desc *bdesc)
+{
+	return bdesc->block_sz * 64 - MSC_BDESC;
+}
+
 static inline bool msc_block_wrapped(struct msc_block_desc *bdesc)
 {
 	if (bdesc->hw_tag & (MSC_HW_TAG_BLOCKWRAP | MSC_HW_TAG_WINWRAP))
···
 static inline bool msc_block_last_written(struct msc_block_desc *bdesc)
 {
 	if ((bdesc->hw_tag & MSC_HW_TAG_ENDBIT) ||
-	    (msc_data_sz(bdesc) != DATA_IN_PAGE))
+	    (msc_data_sz(bdesc) != msc_block_sz(bdesc)))
 		return true;

 	return false;
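The new msu.h helpers make "last written" detection work for arbitrary block sizes: valid_dw counts valid 32-bit words (descriptor included), block_sz counts 64-byte units, and a block is completely full exactly when msc_data_sz() equals msc_block_sz(). A standalone sketch of that arithmetic, using 64 as an assumed descriptor-header size (the driver actually uses sizeof(struct msc_block_desc)):

```c
#include <assert.h>

/* Assumed descriptor-header size for illustration only. */
#define MSC_BDESC 64ul

/* bytes of trace data: valid_dw counts valid dwords incl. the descriptor */
static unsigned long data_sz(unsigned long valid_dw)
{
	return valid_dw * 4 - MSC_BDESC;
}

/* bytes the block can hold: block_sz counts 64-byte units */
static unsigned long block_sz(unsigned long block_sz_units)
{
	return block_sz_units * 64 - MSC_BDESC;
}

/* the msc_block_last_written() size check: a partially filled block
 * is the last one the hardware wrote to */
static int partially_filled(unsigned long valid_dw,
			    unsigned long block_sz_units)
{
	return data_sz(valid_dw) != block_sz(block_sz_units);
}
```

For a 4 KiB block (block_sz_units == 64), a fully written block has valid_dw == 1024, so the old `!= DATA_IN_PAGE` check and the new per-block comparison agree on page-sized blocks.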
drivers/interconnect/core.c (+26 -1)
···
  * @req_node: entry in list of requests for the particular @node
  * @node: the interconnect node to which this constraint applies
  * @dev: reference to the device that sets the constraints
+ * @tag: path tag (optional)
  * @avg_bw: an integer describing the average bandwidth in kBps
  * @peak_bw: an integer describing the peak bandwidth in kBps
  */
···
 	struct hlist_node req_node;
 	struct icc_node *node;
 	struct device *dev;
+	u32 tag;
 	u32 avg_bw;
 	u32 peak_bw;
 };
···
 	node->avg_bw = 0;
 	node->peak_bw = 0;

+	if (p->pre_aggregate)
+		p->pre_aggregate(node);
+
 	hlist_for_each_entry(r, &node->req_list, req_node)
-		p->aggregate(node, r->avg_bw, r->peak_bw,
+		p->aggregate(node, r->tag, r->avg_bw, r->peak_bw,
 			     &node->avg_bw, &node->peak_bw);

 	return 0;
···
 	return path;
 }
 EXPORT_SYMBOL_GPL(of_icc_get);
+
+/**
+ * icc_set_tag() - set an optional tag on a path
+ * @path: the path we want to tag
+ * @tag: the tag value
+ *
+ * This function allows consumers to append a tag to the requests associated
+ * with a path, so that a different aggregation could be done based on this tag.
+ */
+void icc_set_tag(struct icc_path *path, u32 tag)
+{
+	int i;
+
+	if (!path)
+		return;
+
+	for (i = 0; i < path->num_nodes; i++)
+		path->reqs[i].tag = tag;
+}
+EXPORT_SYMBOL_GPL(icc_set_tag);

 /**
  * icc_set_bw() - set bandwidth constraints on an interconnect path
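With this change the provider's aggregate() callback also receives the request's tag, and a provider may implement pre_aggregate() to reset its state before the walk over the request list. A condensed userspace model of the per-node pass; the sum-avg/max-peak policy matches the qcom driver elsewhere in this merge, while the struct and function names here are illustrative:

```c
#include <assert.h>

/* one bandwidth request hanging off a node, as in struct icc_req */
struct req_model {
	unsigned int tag;	/* carried for providers that filter on it */
	unsigned int avg_bw;	/* kBps */
	unsigned int peak_bw;	/* kBps */
};

struct agg_model {
	unsigned int avg_bw;
	unsigned int peak_bw;
};

/* one callback invocation, like p->aggregate(node, tag, avg, peak, ...);
 * this policy ignores the tag and does sum/max */
static void aggregate_one(unsigned int tag, unsigned int avg_bw,
			  unsigned int peak_bw, struct agg_model *agg)
{
	(void)tag;
	agg->avg_bw += avg_bw;
	if (peak_bw > agg->peak_bw)
		agg->peak_bw = peak_bw;
}

/* the hlist walk over a node's request list, flattened to an array;
 * zeroing agg first plays the role of pre_aggregate() */
static struct agg_model aggregate_reqs(const struct req_model *reqs, int n)
{
	struct agg_model agg = { 0, 0 };

	for (int i = 0; i < n; i++)
		aggregate_one(reqs[i].tag, reqs[i].avg_bw,
			      reqs[i].peak_bw, &agg);
	return agg;
}
```

A tag-aware provider would branch on `tag` inside aggregate_one(), e.g. to keep separate totals for wake and sleep sets.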
drivers/interconnect/qcom/Kconfig (+12)
···
 	help
 	  Support for Qualcomm's Network-on-Chip interconnect hardware.

+config INTERCONNECT_QCOM_QCS404
+	tristate "Qualcomm QCS404 interconnect driver"
+	depends on INTERCONNECT_QCOM
+	depends on QCOM_SMD_RPM
+	select INTERCONNECT_QCOM_SMD_RPM
+	help
+	  This is a driver for the Qualcomm Network-on-Chip on qcs404-based
+	  platforms.
+
 config INTERCONNECT_QCOM_SDM845
 	tristate "Qualcomm SDM845 interconnect driver"
 	depends on INTERCONNECT_QCOM
···
 	help
 	  This is a driver for the Qualcomm Network-on-Chip on sdm845-based
 	  platforms.
+
+config INTERCONNECT_QCOM_SMD_RPM
+	tristate
drivers/interconnect/qcom/Makefile (+4)
···
 # SPDX-License-Identifier: GPL-2.0

+qnoc-qcs404-objs := qcs404.o
 qnoc-sdm845-objs := sdm845.o
+icc-smd-rpm-objs := smd-rpm.o

+obj-$(CONFIG_INTERCONNECT_QCOM_QCS404) += qnoc-qcs404.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
+obj-$(CONFIG_INTERCONNECT_QCOM_SMD_RPM) += icc-smd-rpm.o
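The new qcs404.c below declares its topology with a DEFINE_QNODE macro whose `num_links` is derived from the variadic link list by materializing it as a compound-literal array, so the stored count can never drift out of sync with `.links`. The counting trick in isolation; `node_model` and `DEFINE_NODE` are trimmed stand-ins for the driver's types:

```c
#include <assert.h>

/* 12 matches the driver's QCS404_MAX_LINKS */
#define MAX_LINKS 12

struct node_model {
	unsigned short num_links;
	unsigned short links[MAX_LINKS];
};

/* count __VA_ARGS__ by measuring an anonymous int array built from it */
#define DEFINE_NODE(_name, ...)						\
	static struct node_model _name = {				\
		.num_links = sizeof((int[]){ __VA_ARGS__ }) / sizeof(int), \
		.links = { __VA_ARGS__ },				\
	}

/* a node with three outgoing links; unused slots stay zero-initialized */
DEFINE_NODE(demo_node, 3, 5, 8);
```

The kernel spells the count as `ARRAY_SIZE(((int[]){ __VA_ARGS__ }))`, which is the same computation; note the macro requires at least one link argument, which the driver's tables always supply (terminal slaves pass a 0 sentinel).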
drivers/interconnect/qcom/qcs404.c (+539)
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (C) 2019 Linaro Ltd 4 + */ 5 + 6 + #include <dt-bindings/interconnect/qcom,qcs404.h> 7 + #include <linux/clk.h> 8 + #include <linux/device.h> 9 + #include <linux/interconnect-provider.h> 10 + #include <linux/io.h> 11 + #include <linux/module.h> 12 + #include <linux/of_device.h> 13 + #include <linux/of_platform.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/slab.h> 16 + 17 + #include "smd-rpm.h" 18 + 19 + #define RPM_BUS_MASTER_REQ 0x73616d62 20 + #define RPM_BUS_SLAVE_REQ 0x766c7362 21 + 22 + enum { 23 + QCS404_MASTER_AMPSS_M0 = 1, 24 + QCS404_MASTER_GRAPHICS_3D, 25 + QCS404_MASTER_MDP_PORT0, 26 + QCS404_SNOC_BIMC_1_MAS, 27 + QCS404_MASTER_TCU_0, 28 + QCS404_MASTER_SPDM, 29 + QCS404_MASTER_BLSP_1, 30 + QCS404_MASTER_BLSP_2, 31 + QCS404_MASTER_XM_USB_HS1, 32 + QCS404_MASTER_CRYPTO_CORE0, 33 + QCS404_MASTER_SDCC_1, 34 + QCS404_MASTER_SDCC_2, 35 + QCS404_SNOC_PNOC_MAS, 36 + QCS404_MASTER_QPIC, 37 + QCS404_MASTER_QDSS_BAM, 38 + QCS404_BIMC_SNOC_MAS, 39 + QCS404_PNOC_SNOC_MAS, 40 + QCS404_MASTER_QDSS_ETR, 41 + QCS404_MASTER_EMAC, 42 + QCS404_MASTER_PCIE, 43 + QCS404_MASTER_USB3, 44 + QCS404_PNOC_INT_0, 45 + QCS404_PNOC_INT_2, 46 + QCS404_PNOC_INT_3, 47 + QCS404_PNOC_SLV_0, 48 + QCS404_PNOC_SLV_1, 49 + QCS404_PNOC_SLV_2, 50 + QCS404_PNOC_SLV_3, 51 + QCS404_PNOC_SLV_4, 52 + QCS404_PNOC_SLV_6, 53 + QCS404_PNOC_SLV_7, 54 + QCS404_PNOC_SLV_8, 55 + QCS404_PNOC_SLV_9, 56 + QCS404_PNOC_SLV_10, 57 + QCS404_PNOC_SLV_11, 58 + QCS404_SNOC_QDSS_INT, 59 + QCS404_SNOC_INT_0, 60 + QCS404_SNOC_INT_1, 61 + QCS404_SNOC_INT_2, 62 + QCS404_SLAVE_EBI_CH0, 63 + QCS404_BIMC_SNOC_SLV, 64 + QCS404_SLAVE_SPDM_WRAPPER, 65 + QCS404_SLAVE_PDM, 66 + QCS404_SLAVE_PRNG, 67 + QCS404_SLAVE_TCSR, 68 + QCS404_SLAVE_SNOC_CFG, 69 + QCS404_SLAVE_MESSAGE_RAM, 70 + QCS404_SLAVE_DISPLAY_CFG, 71 + QCS404_SLAVE_GRAPHICS_3D_CFG, 72 + QCS404_SLAVE_BLSP_1, 73 + QCS404_SLAVE_TLMM_NORTH, 74 + QCS404_SLAVE_PCIE_1, 75 + 
QCS404_SLAVE_EMAC_CFG, 76 + QCS404_SLAVE_BLSP_2, 77 + QCS404_SLAVE_TLMM_EAST, 78 + QCS404_SLAVE_TCU, 79 + QCS404_SLAVE_PMIC_ARB, 80 + QCS404_SLAVE_SDCC_1, 81 + QCS404_SLAVE_SDCC_2, 82 + QCS404_SLAVE_TLMM_SOUTH, 83 + QCS404_SLAVE_USB_HS, 84 + QCS404_SLAVE_USB3, 85 + QCS404_SLAVE_CRYPTO_0_CFG, 86 + QCS404_PNOC_SNOC_SLV, 87 + QCS404_SLAVE_APPSS, 88 + QCS404_SLAVE_WCSS, 89 + QCS404_SNOC_BIMC_1_SLV, 90 + QCS404_SLAVE_OCIMEM, 91 + QCS404_SNOC_PNOC_SLV, 92 + QCS404_SLAVE_QDSS_STM, 93 + QCS404_SLAVE_CATS_128, 94 + QCS404_SLAVE_OCMEM_64, 95 + QCS404_SLAVE_LPASS, 96 + }; 97 + 98 + #define to_qcom_provider(_provider) \ 99 + container_of(_provider, struct qcom_icc_provider, provider) 100 + 101 + static const struct clk_bulk_data bus_clocks[] = { 102 + { .id = "bus" }, 103 + { .id = "bus_a" }, 104 + }; 105 + 106 + /** 107 + * struct qcom_icc_provider - Qualcomm specific interconnect provider 108 + * @provider: generic interconnect provider 109 + * @bus_clks: the clk_bulk_data table of bus clocks 110 + * @num_clks: the total number of clk_bulk_data entries 111 + */ 112 + struct qcom_icc_provider { 113 + struct icc_provider provider; 114 + struct clk_bulk_data *bus_clks; 115 + int num_clks; 116 + }; 117 + 118 + #define QCS404_MAX_LINKS 12 119 + 120 + /** 121 + * struct qcom_icc_node - Qualcomm specific interconnect nodes 122 + * @name: the node name used in debugfs 123 + * @id: a unique node identifier 124 + * @links: an array of nodes where we can go next while traversing 125 + * @num_links: the total number of @links 126 + * @buswidth: width of the interconnect between a node and the bus (bytes) 127 + * @mas_rpm_id: RPM id for devices that are bus masters 128 + * @slv_rpm_id: RPM id for devices that are bus slaves 129 + * @rate: current bus clock rate in Hz 130 + */ 131 + struct qcom_icc_node { 132 + unsigned char *name; 133 + u16 id; 134 + u16 links[QCS404_MAX_LINKS]; 135 + u16 num_links; 136 + u16 buswidth; 137 + int mas_rpm_id; 138 + int slv_rpm_id; 139 + u64 rate; 140 + }; 
141 + 142 + struct qcom_icc_desc { 143 + struct qcom_icc_node **nodes; 144 + size_t num_nodes; 145 + }; 146 + 147 + #define DEFINE_QNODE(_name, _id, _buswidth, _mas_rpm_id, _slv_rpm_id, \ 148 + ...) \ 149 + static struct qcom_icc_node _name = { \ 150 + .name = #_name, \ 151 + .id = _id, \ 152 + .buswidth = _buswidth, \ 153 + .mas_rpm_id = _mas_rpm_id, \ 154 + .slv_rpm_id = _slv_rpm_id, \ 155 + .num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })), \ 156 + .links = { __VA_ARGS__ }, \ 157 + } 158 + 159 + DEFINE_QNODE(mas_apps_proc, QCS404_MASTER_AMPSS_M0, 8, 0, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV); 160 + DEFINE_QNODE(mas_oxili, QCS404_MASTER_GRAPHICS_3D, 8, 6, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV); 161 + DEFINE_QNODE(mas_mdp, QCS404_MASTER_MDP_PORT0, 8, 8, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV); 162 + DEFINE_QNODE(mas_snoc_bimc_1, QCS404_SNOC_BIMC_1_MAS, 8, 76, -1, QCS404_SLAVE_EBI_CH0); 163 + DEFINE_QNODE(mas_tcu_0, QCS404_MASTER_TCU_0, 8, -1, -1, QCS404_SLAVE_EBI_CH0, QCS404_BIMC_SNOC_SLV); 164 + DEFINE_QNODE(mas_spdm, QCS404_MASTER_SPDM, 4, -1, -1, QCS404_PNOC_INT_3); 165 + DEFINE_QNODE(mas_blsp_1, QCS404_MASTER_BLSP_1, 4, 41, -1, QCS404_PNOC_INT_3); 166 + DEFINE_QNODE(mas_blsp_2, QCS404_MASTER_BLSP_2, 4, 39, -1, QCS404_PNOC_INT_3); 167 + DEFINE_QNODE(mas_xi_usb_hs1, QCS404_MASTER_XM_USB_HS1, 8, 138, -1, QCS404_PNOC_INT_0); 168 + DEFINE_QNODE(mas_crypto, QCS404_MASTER_CRYPTO_CORE0, 8, 23, -1, QCS404_PNOC_SNOC_SLV, QCS404_PNOC_INT_2); 169 + DEFINE_QNODE(mas_sdcc_1, QCS404_MASTER_SDCC_1, 8, 33, -1, QCS404_PNOC_INT_0); 170 + DEFINE_QNODE(mas_sdcc_2, QCS404_MASTER_SDCC_2, 8, 35, -1, QCS404_PNOC_INT_0); 171 + DEFINE_QNODE(mas_snoc_pcnoc, QCS404_SNOC_PNOC_MAS, 8, 77, -1, QCS404_PNOC_INT_2); 172 + DEFINE_QNODE(mas_qpic, QCS404_MASTER_QPIC, 4, -1, -1, QCS404_PNOC_INT_0); 173 + DEFINE_QNODE(mas_qdss_bam, QCS404_MASTER_QDSS_BAM, 4, -1, -1, QCS404_SNOC_QDSS_INT); 174 + DEFINE_QNODE(mas_bimc_snoc, QCS404_BIMC_SNOC_MAS, 8, 21, -1, 
QCS404_SLAVE_OCMEM_64, QCS404_SLAVE_CATS_128, QCS404_SNOC_INT_0, QCS404_SNOC_INT_1); 175 + DEFINE_QNODE(mas_pcnoc_snoc, QCS404_PNOC_SNOC_MAS, 8, 29, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_2, QCS404_SNOC_INT_0); 176 + DEFINE_QNODE(mas_qdss_etr, QCS404_MASTER_QDSS_ETR, 8, -1, -1, QCS404_SNOC_QDSS_INT); 177 + DEFINE_QNODE(mas_emac, QCS404_MASTER_EMAC, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1); 178 + DEFINE_QNODE(mas_pcie, QCS404_MASTER_PCIE, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1); 179 + DEFINE_QNODE(mas_usb3, QCS404_MASTER_USB3, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1); 180 + DEFINE_QNODE(pcnoc_int_0, QCS404_PNOC_INT_0, 8, 85, 114, QCS404_PNOC_SNOC_SLV, QCS404_PNOC_INT_2); 181 + DEFINE_QNODE(pcnoc_int_2, QCS404_PNOC_INT_2, 8, 124, 184, QCS404_PNOC_SLV_10, QCS404_SLAVE_TCU, QCS404_PNOC_SLV_11, QCS404_PNOC_SLV_2, QCS404_PNOC_SLV_3, QCS404_PNOC_SLV_0, QCS404_PNOC_SLV_1, QCS404_PNOC_SLV_6, QCS404_PNOC_SLV_7, QCS404_PNOC_SLV_4, QCS404_PNOC_SLV_8, QCS404_PNOC_SLV_9); 182 + DEFINE_QNODE(pcnoc_int_3, QCS404_PNOC_INT_3, 8, 125, 185, QCS404_PNOC_SNOC_SLV); 183 + DEFINE_QNODE(pcnoc_s_0, QCS404_PNOC_SLV_0, 4, 89, 118, QCS404_SLAVE_PRNG, QCS404_SLAVE_SPDM_WRAPPER, QCS404_SLAVE_PDM); 184 + DEFINE_QNODE(pcnoc_s_1, QCS404_PNOC_SLV_1, 4, 90, 119, QCS404_SLAVE_TCSR); 185 + DEFINE_QNODE(pcnoc_s_2, QCS404_PNOC_SLV_2, 4, -1, -1, QCS404_SLAVE_GRAPHICS_3D_CFG); 186 + DEFINE_QNODE(pcnoc_s_3, QCS404_PNOC_SLV_3, 4, 92, 121, QCS404_SLAVE_MESSAGE_RAM); 187 + DEFINE_QNODE(pcnoc_s_4, QCS404_PNOC_SLV_4, 4, 93, 122, QCS404_SLAVE_SNOC_CFG); 188 + DEFINE_QNODE(pcnoc_s_6, QCS404_PNOC_SLV_6, 4, 94, 123, QCS404_SLAVE_BLSP_1, QCS404_SLAVE_TLMM_NORTH, QCS404_SLAVE_EMAC_CFG); 189 + DEFINE_QNODE(pcnoc_s_7, QCS404_PNOC_SLV_7, 4, 95, 124, QCS404_SLAVE_TLMM_SOUTH, QCS404_SLAVE_DISPLAY_CFG, QCS404_SLAVE_SDCC_1, QCS404_SLAVE_PCIE_1, QCS404_SLAVE_SDCC_2); 190 + DEFINE_QNODE(pcnoc_s_8, QCS404_PNOC_SLV_8, 4, 96, 125, QCS404_SLAVE_CRYPTO_0_CFG); 191 + 
DEFINE_QNODE(pcnoc_s_9, QCS404_PNOC_SLV_9, 4, 97, 126, QCS404_SLAVE_BLSP_2, QCS404_SLAVE_TLMM_EAST, QCS404_SLAVE_PMIC_ARB); 192 + DEFINE_QNODE(pcnoc_s_10, QCS404_PNOC_SLV_10, 4, 157, -1, QCS404_SLAVE_USB_HS); 193 + DEFINE_QNODE(pcnoc_s_11, QCS404_PNOC_SLV_11, 4, 158, 246, QCS404_SLAVE_USB3); 194 + DEFINE_QNODE(qdss_int, QCS404_SNOC_QDSS_INT, 8, -1, -1, QCS404_SNOC_BIMC_1_SLV, QCS404_SNOC_INT_1); 195 + DEFINE_QNODE(snoc_int_0, QCS404_SNOC_INT_0, 8, 99, 130, QCS404_SLAVE_LPASS, QCS404_SLAVE_APPSS, QCS404_SLAVE_WCSS); 196 + DEFINE_QNODE(snoc_int_1, QCS404_SNOC_INT_1, 8, 100, 131, QCS404_SNOC_PNOC_SLV, QCS404_SNOC_INT_2); 197 + DEFINE_QNODE(snoc_int_2, QCS404_SNOC_INT_2, 8, 134, 197, QCS404_SLAVE_QDSS_STM, QCS404_SLAVE_OCIMEM); 198 + DEFINE_QNODE(slv_ebi, QCS404_SLAVE_EBI_CH0, 8, -1, 0, 0); 199 + DEFINE_QNODE(slv_bimc_snoc, QCS404_BIMC_SNOC_SLV, 8, -1, 2, QCS404_BIMC_SNOC_MAS); 200 + DEFINE_QNODE(slv_spdm, QCS404_SLAVE_SPDM_WRAPPER, 4, -1, -1, 0); 201 + DEFINE_QNODE(slv_pdm, QCS404_SLAVE_PDM, 4, -1, 41, 0); 202 + DEFINE_QNODE(slv_prng, QCS404_SLAVE_PRNG, 4, -1, 44, 0); 203 + DEFINE_QNODE(slv_tcsr, QCS404_SLAVE_TCSR, 4, -1, 50, 0); 204 + DEFINE_QNODE(slv_snoc_cfg, QCS404_SLAVE_SNOC_CFG, 4, -1, 70, 0); 205 + DEFINE_QNODE(slv_message_ram, QCS404_SLAVE_MESSAGE_RAM, 4, -1, 55, 0); 206 + DEFINE_QNODE(slv_disp_ss_cfg, QCS404_SLAVE_DISPLAY_CFG, 4, -1, -1, 0); 207 + DEFINE_QNODE(slv_gpu_cfg, QCS404_SLAVE_GRAPHICS_3D_CFG, 4, -1, -1, 0); 208 + DEFINE_QNODE(slv_blsp_1, QCS404_SLAVE_BLSP_1, 4, -1, 39, 0); 209 + DEFINE_QNODE(slv_tlmm_north, QCS404_SLAVE_TLMM_NORTH, 4, -1, 214, 0); 210 + DEFINE_QNODE(slv_pcie, QCS404_SLAVE_PCIE_1, 4, -1, -1, 0); 211 + DEFINE_QNODE(slv_ethernet, QCS404_SLAVE_EMAC_CFG, 4, -1, -1, 0); 212 + DEFINE_QNODE(slv_blsp_2, QCS404_SLAVE_BLSP_2, 4, -1, 37, 0); 213 + DEFINE_QNODE(slv_tlmm_east, QCS404_SLAVE_TLMM_EAST, 4, -1, 213, 0); 214 + DEFINE_QNODE(slv_tcu, QCS404_SLAVE_TCU, 8, -1, -1, 0); 215 + DEFINE_QNODE(slv_pmic_arb, QCS404_SLAVE_PMIC_ARB, 4, -1, 59, 0); 
216 + DEFINE_QNODE(slv_sdcc_1, QCS404_SLAVE_SDCC_1, 4, -1, 31, 0); 217 + DEFINE_QNODE(slv_sdcc_2, QCS404_SLAVE_SDCC_2, 4, -1, 33, 0); 218 + DEFINE_QNODE(slv_tlmm_south, QCS404_SLAVE_TLMM_SOUTH, 4, -1, -1, 0); 219 + DEFINE_QNODE(slv_usb_hs, QCS404_SLAVE_USB_HS, 4, -1, 40, 0); 220 + DEFINE_QNODE(slv_usb3, QCS404_SLAVE_USB3, 4, -1, 22, 0); 221 + DEFINE_QNODE(slv_crypto_0_cfg, QCS404_SLAVE_CRYPTO_0_CFG, 4, -1, 52, 0); 222 + DEFINE_QNODE(slv_pcnoc_snoc, QCS404_PNOC_SNOC_SLV, 8, -1, 45, QCS404_PNOC_SNOC_MAS); 223 + DEFINE_QNODE(slv_kpss_ahb, QCS404_SLAVE_APPSS, 4, -1, -1, 0); 224 + DEFINE_QNODE(slv_wcss, QCS404_SLAVE_WCSS, 4, -1, 23, 0); 225 + DEFINE_QNODE(slv_snoc_bimc_1, QCS404_SNOC_BIMC_1_SLV, 8, -1, 104, QCS404_SNOC_BIMC_1_MAS); 226 + DEFINE_QNODE(slv_imem, QCS404_SLAVE_OCIMEM, 8, -1, 26, 0); 227 + DEFINE_QNODE(slv_snoc_pcnoc, QCS404_SNOC_PNOC_SLV, 8, -1, 28, QCS404_SNOC_PNOC_MAS); 228 + DEFINE_QNODE(slv_qdss_stm, QCS404_SLAVE_QDSS_STM, 4, -1, 30, 0); 229 + DEFINE_QNODE(slv_cats_0, QCS404_SLAVE_CATS_128, 16, -1, -1, 0); 230 + DEFINE_QNODE(slv_cats_1, QCS404_SLAVE_OCMEM_64, 8, -1, -1, 0); 231 + DEFINE_QNODE(slv_lpass, QCS404_SLAVE_LPASS, 4, -1, -1, 0); 232 + 233 + static struct qcom_icc_node *qcs404_bimc_nodes[] = { 234 + [MASTER_AMPSS_M0] = &mas_apps_proc, 235 + [MASTER_OXILI] = &mas_oxili, 236 + [MASTER_MDP_PORT0] = &mas_mdp, 237 + [MASTER_SNOC_BIMC_1] = &mas_snoc_bimc_1, 238 + [MASTER_TCU_0] = &mas_tcu_0, 239 + [SLAVE_EBI_CH0] = &slv_ebi, 240 + [SLAVE_BIMC_SNOC] = &slv_bimc_snoc, 241 + }; 242 + 243 + static struct qcom_icc_desc qcs404_bimc = { 244 + .nodes = qcs404_bimc_nodes, 245 + .num_nodes = ARRAY_SIZE(qcs404_bimc_nodes), 246 + }; 247 + 248 + static struct qcom_icc_node *qcs404_pcnoc_nodes[] = { 249 + [MASTER_SPDM] = &mas_spdm, 250 + [MASTER_BLSP_1] = &mas_blsp_1, 251 + [MASTER_BLSP_2] = &mas_blsp_2, 252 + [MASTER_XI_USB_HS1] = &mas_xi_usb_hs1, 253 + [MASTER_CRYPT0] = &mas_crypto, 254 + [MASTER_SDCC_1] = &mas_sdcc_1, 255 + [MASTER_SDCC_2] = &mas_sdcc_2, 256 + 
[MASTER_SNOC_PCNOC] = &mas_snoc_pcnoc, 257 + [MASTER_QPIC] = &mas_qpic, 258 + [PCNOC_INT_0] = &pcnoc_int_0, 259 + [PCNOC_INT_2] = &pcnoc_int_2, 260 + [PCNOC_INT_3] = &pcnoc_int_3, 261 + [PCNOC_S_0] = &pcnoc_s_0, 262 + [PCNOC_S_1] = &pcnoc_s_1, 263 + [PCNOC_S_2] = &pcnoc_s_2, 264 + [PCNOC_S_3] = &pcnoc_s_3, 265 + [PCNOC_S_4] = &pcnoc_s_4, 266 + [PCNOC_S_6] = &pcnoc_s_6, 267 + [PCNOC_S_7] = &pcnoc_s_7, 268 + [PCNOC_S_8] = &pcnoc_s_8, 269 + [PCNOC_S_9] = &pcnoc_s_9, 270 + [PCNOC_S_10] = &pcnoc_s_10, 271 + [PCNOC_S_11] = &pcnoc_s_11, 272 + [SLAVE_SPDM] = &slv_spdm, 273 + [SLAVE_PDM] = &slv_pdm, 274 + [SLAVE_PRNG] = &slv_prng, 275 + [SLAVE_TCSR] = &slv_tcsr, 276 + [SLAVE_SNOC_CFG] = &slv_snoc_cfg, 277 + [SLAVE_MESSAGE_RAM] = &slv_message_ram, 278 + [SLAVE_DISP_SS_CFG] = &slv_disp_ss_cfg, 279 + [SLAVE_GPU_CFG] = &slv_gpu_cfg, 280 + [SLAVE_BLSP_1] = &slv_blsp_1, 281 + [SLAVE_BLSP_2] = &slv_blsp_2, 282 + [SLAVE_TLMM_NORTH] = &slv_tlmm_north, 283 + [SLAVE_PCIE] = &slv_pcie, 284 + [SLAVE_ETHERNET] = &slv_ethernet, 285 + [SLAVE_TLMM_EAST] = &slv_tlmm_east, 286 + [SLAVE_TCU] = &slv_tcu, 287 + [SLAVE_PMIC_ARB] = &slv_pmic_arb, 288 + [SLAVE_SDCC_1] = &slv_sdcc_1, 289 + [SLAVE_SDCC_2] = &slv_sdcc_2, 290 + [SLAVE_TLMM_SOUTH] = &slv_tlmm_south, 291 + [SLAVE_USB_HS] = &slv_usb_hs, 292 + [SLAVE_USB3] = &slv_usb3, 293 + [SLAVE_CRYPTO_0_CFG] = &slv_crypto_0_cfg, 294 + [SLAVE_PCNOC_SNOC] = &slv_pcnoc_snoc, 295 + }; 296 + 297 + static struct qcom_icc_desc qcs404_pcnoc = { 298 + .nodes = qcs404_pcnoc_nodes, 299 + .num_nodes = ARRAY_SIZE(qcs404_pcnoc_nodes), 300 + }; 301 + 302 + static struct qcom_icc_node *qcs404_snoc_nodes[] = { 303 + [MASTER_QDSS_BAM] = &mas_qdss_bam, 304 + [MASTER_BIMC_SNOC] = &mas_bimc_snoc, 305 + [MASTER_PCNOC_SNOC] = &mas_pcnoc_snoc, 306 + [MASTER_QDSS_ETR] = &mas_qdss_etr, 307 + [MASTER_EMAC] = &mas_emac, 308 + [MASTER_PCIE] = &mas_pcie, 309 + [MASTER_USB3] = &mas_usb3, 310 + [QDSS_INT] = &qdss_int, 311 + [SNOC_INT_0] = &snoc_int_0, 312 + [SNOC_INT_1] = 
&snoc_int_1, 313 + [SNOC_INT_2] = &snoc_int_2, 314 + [SLAVE_KPSS_AHB] = &slv_kpss_ahb, 315 + [SLAVE_WCSS] = &slv_wcss, 316 + [SLAVE_SNOC_BIMC_1] = &slv_snoc_bimc_1, 317 + [SLAVE_IMEM] = &slv_imem, 318 + [SLAVE_SNOC_PCNOC] = &slv_snoc_pcnoc, 319 + [SLAVE_QDSS_STM] = &slv_qdss_stm, 320 + [SLAVE_CATS_0] = &slv_cats_0, 321 + [SLAVE_CATS_1] = &slv_cats_1, 322 + [SLAVE_LPASS] = &slv_lpass, 323 + }; 324 + 325 + static struct qcom_icc_desc qcs404_snoc = { 326 + .nodes = qcs404_snoc_nodes, 327 + .num_nodes = ARRAY_SIZE(qcs404_snoc_nodes), 328 + }; 329 + 330 + static int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw, 331 + u32 peak_bw, u32 *agg_avg, u32 *agg_peak) 332 + { 333 + *agg_avg += avg_bw; 334 + *agg_peak = max(*agg_peak, peak_bw); 335 + 336 + return 0; 337 + } 338 + 339 + static int qcom_icc_set(struct icc_node *src, struct icc_node *dst) 340 + { 341 + struct qcom_icc_provider *qp; 342 + struct qcom_icc_node *qn; 343 + struct icc_provider *provider; 344 + struct icc_node *n; 345 + u64 sum_bw; 346 + u64 max_peak_bw; 347 + u64 rate; 348 + u32 agg_avg = 0; 349 + u32 agg_peak = 0; 350 + int ret, i; 351 + 352 + qn = src->data; 353 + provider = src->provider; 354 + qp = to_qcom_provider(provider); 355 + 356 + list_for_each_entry(n, &provider->nodes, node_list) 357 + qcom_icc_aggregate(n, 0, n->avg_bw, n->peak_bw, 358 + &agg_avg, &agg_peak); 359 + 360 + sum_bw = icc_units_to_bps(agg_avg); 361 + max_peak_bw = icc_units_to_bps(agg_peak); 362 + 363 + /* send bandwidth request message to the RPM processor */ 364 + if (qn->mas_rpm_id != -1) { 365 + ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE, 366 + RPM_BUS_MASTER_REQ, 367 + qn->mas_rpm_id, 368 + sum_bw); 369 + if (ret) { 370 + pr_err("qcom_icc_rpm_smd_send mas %d error %d\n", 371 + qn->mas_rpm_id, ret); 372 + return ret; 373 + } 374 + } 375 + 376 + if (qn->slv_rpm_id != -1) { 377 + ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE, 378 + RPM_BUS_SLAVE_REQ, 379 + qn->slv_rpm_id, 380 + sum_bw); 381 + 
if (ret) { 382 + pr_err("qcom_icc_rpm_smd_send slv error %d\n", 383 + ret); 384 + return ret; 385 + } 386 + } 387 + 388 + rate = max(sum_bw, max_peak_bw); 389 + 390 + do_div(rate, qn->buswidth); 391 + 392 + if (qn->rate == rate) 393 + return 0; 394 + 395 + for (i = 0; i < qp->num_clks; i++) { 396 + ret = clk_set_rate(qp->bus_clks[i].clk, rate); 397 + if (ret) { 398 + pr_err("%s clk_set_rate error: %d\n", 399 + qp->bus_clks[i].id, ret); 400 + return ret; 401 + } 402 + } 403 + 404 + qn->rate = rate; 405 + 406 + return 0; 407 + } 408 + 409 + static int qnoc_probe(struct platform_device *pdev) 410 + { 411 + struct device *dev = &pdev->dev; 412 + const struct qcom_icc_desc *desc; 413 + struct icc_onecell_data *data; 414 + struct icc_provider *provider; 415 + struct qcom_icc_node **qnodes; 416 + struct qcom_icc_provider *qp; 417 + struct icc_node *node; 418 + size_t num_nodes, i; 419 + int ret; 420 + 421 + /* wait for the RPM proxy */ 422 + if (!qcom_icc_rpm_smd_available()) 423 + return -EPROBE_DEFER; 424 + 425 + desc = of_device_get_match_data(dev); 426 + if (!desc) 427 + return -EINVAL; 428 + 429 + qnodes = desc->nodes; 430 + num_nodes = desc->num_nodes; 431 + 432 + qp = devm_kzalloc(dev, sizeof(*qp), GFP_KERNEL); 433 + if (!qp) 434 + return -ENOMEM; 435 + 436 + data = devm_kcalloc(dev, num_nodes, sizeof(*node), GFP_KERNEL); 437 + if (!data) 438 + return -ENOMEM; 439 + 440 + qp->bus_clks = devm_kmemdup(dev, bus_clocks, sizeof(bus_clocks), 441 + GFP_KERNEL); 442 + if (!qp->bus_clks) 443 + return -ENOMEM; 444 + 445 + qp->num_clks = ARRAY_SIZE(bus_clocks); 446 + ret = devm_clk_bulk_get(dev, qp->num_clks, qp->bus_clks); 447 + if (ret) 448 + return ret; 449 + 450 + ret = clk_bulk_prepare_enable(qp->num_clks, qp->bus_clks); 451 + if (ret) 452 + return ret; 453 + 454 + provider = &qp->provider; 455 + INIT_LIST_HEAD(&provider->nodes); 456 + provider->dev = dev; 457 + provider->set = qcom_icc_set; 458 + provider->aggregate = qcom_icc_aggregate; 459 + provider->xlate = 
of_icc_xlate_onecell; 460 + provider->data = data; 461 + 462 + ret = icc_provider_add(provider); 463 + if (ret) { 464 + dev_err(dev, "error adding interconnect provider: %d\n", ret); 465 + clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 466 + return ret; 467 + } 468 + 469 + for (i = 0; i < num_nodes; i++) { 470 + size_t j; 471 + 472 + node = icc_node_create(qnodes[i]->id); 473 + if (IS_ERR(node)) { 474 + ret = PTR_ERR(node); 475 + goto err; 476 + } 477 + 478 + node->name = qnodes[i]->name; 479 + node->data = qnodes[i]; 480 + icc_node_add(node, provider); 481 + 482 + dev_dbg(dev, "registered node %s\n", node->name); 483 + 484 + /* populate links */ 485 + for (j = 0; j < qnodes[i]->num_links; j++) 486 + icc_link_create(node, qnodes[i]->links[j]); 487 + 488 + data->nodes[i] = node; 489 + } 490 + data->num_nodes = num_nodes; 491 + 492 + platform_set_drvdata(pdev, qp); 493 + 494 + return 0; 495 + err: 496 + list_for_each_entry(node, &provider->nodes, node_list) { 497 + icc_node_del(node); 498 + icc_node_destroy(node->id); 499 + } 500 + clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 501 + icc_provider_del(provider); 502 + 503 + return ret; 504 + } 505 + 506 + static int qnoc_remove(struct platform_device *pdev) 507 + { 508 + struct qcom_icc_provider *qp = platform_get_drvdata(pdev); 509 + struct icc_provider *provider = &qp->provider; 510 + struct icc_node *n; 511 + 512 + list_for_each_entry(n, &provider->nodes, node_list) { 513 + icc_node_del(n); 514 + icc_node_destroy(n->id); 515 + } 516 + clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 517 + 518 + return icc_provider_del(provider); 519 + } 520 + 521 + static const struct of_device_id qcs404_noc_of_match[] = { 522 + { .compatible = "qcom,qcs404-bimc", .data = &qcs404_bimc }, 523 + { .compatible = "qcom,qcs404-pcnoc", .data = &qcs404_pcnoc }, 524 + { .compatible = "qcom,qcs404-snoc", .data = &qcs404_snoc }, 525 + { }, 526 + }; 527 + MODULE_DEVICE_TABLE(of, qcs404_noc_of_match); 528 + 529 + 
static struct platform_driver qcs404_noc_driver = { 530 + .probe = qnoc_probe, 531 + .remove = qnoc_remove, 532 + .driver = { 533 + .name = "qnoc-qcs404", 534 + .of_match_table = qcs404_noc_of_match, 535 + }, 536 + }; 537 + module_platform_driver(qcs404_noc_driver); 538 + MODULE_DESCRIPTION("Qualcomm QCS404 NoC driver"); 539 + MODULE_LICENSE("GPL v2");
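The arithmetic inside `qcom_icc_set()` above is simple once isolated: sum the average-bandwidth votes, track the highest peak, then divide the larger of the two by the node's bus width (bytes moved per clock cycle) to get a bus clock rate. A minimal userspace sketch of that math (function names are illustrative, not the kernel API; `icc_units_to_bps()` in the kernel scales kB/s to bytes/s):

```c
#include <stdint.h>

/* Kernel's icc_units_to_bps(): votes arrive in kB/s, clocks want bytes/s. */
static uint64_t icc_units_to_bps(uint32_t kbps)
{
	return (uint64_t)kbps * 1000;
}

/*
 * Mirror of the rate derivation in qcom_icc_set(): the bus clock must
 * satisfy both the aggregated average and the highest peak request.
 */
uint64_t qnoc_clk_rate(uint32_t agg_avg, uint32_t agg_peak, uint16_t buswidth)
{
	uint64_t sum_bw = icc_units_to_bps(agg_avg);
	uint64_t max_peak_bw = icc_units_to_bps(agg_peak);
	uint64_t rate = sum_bw > max_peak_bw ? sum_bw : max_peak_bw;

	return rate / buswidth;	/* do_div(rate, qn->buswidth) in the driver */
}
```

The driver then skips the `clk_set_rate()` loop entirely when the computed rate matches the cached `qn->rate`, avoiding redundant clock framework calls.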
+111 -51
drivers/interconnect/qcom/sdm845.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright (c) 2018, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2018-2019, The Linux Foundation. All rights reserved. 4 4 * 5 5 */ 6 6 ··· 19 19 #include <soc/qcom/cmd-db.h> 20 20 #include <soc/qcom/rpmh.h> 21 21 #include <soc/qcom/tcs.h> 22 - 23 - #define BCM_TCS_CMD_COMMIT_SHFT 30 24 - #define BCM_TCS_CMD_COMMIT_MASK 0x40000000 25 - #define BCM_TCS_CMD_VALID_SHFT 29 26 - #define BCM_TCS_CMD_VALID_MASK 0x20000000 27 - #define BCM_TCS_CMD_VOTE_X_SHFT 14 28 - #define BCM_TCS_CMD_VOTE_MASK 0x3fff 29 - #define BCM_TCS_CMD_VOTE_Y_SHFT 0 30 - #define BCM_TCS_CMD_VOTE_Y_MASK 0xfffc000 31 - 32 - #define BCM_TCS_CMD(commit, valid, vote_x, vote_y) \ 33 - (((commit) << BCM_TCS_CMD_COMMIT_SHFT) | \ 34 - ((valid) << BCM_TCS_CMD_VALID_SHFT) | \ 35 - ((cpu_to_le32(vote_x) & \ 36 - BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) | \ 37 - ((cpu_to_le32(vote_y) & \ 38 - BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT)) 39 22 40 23 #define to_qcom_provider(_provider) \ 41 24 container_of(_provider, struct qcom_icc_provider, provider) ··· 49 66 #define SDM845_MAX_BCM_PER_NODE 2 50 67 #define SDM845_MAX_VCD 10 51 68 69 + /* 70 + * The AMC bucket denotes constraints that are applied to hardware when 71 + * icc_set_bw() completes, whereas the WAKE and SLEEP constraints are applied 72 + * when the execution environment transitions between active and low power mode. 
73 + */ 74 + #define QCOM_ICC_BUCKET_AMC 0 75 + #define QCOM_ICC_BUCKET_WAKE 1 76 + #define QCOM_ICC_BUCKET_SLEEP 2 77 + #define QCOM_ICC_NUM_BUCKETS 3 78 + #define QCOM_ICC_TAG_AMC BIT(QCOM_ICC_BUCKET_AMC) 79 + #define QCOM_ICC_TAG_WAKE BIT(QCOM_ICC_BUCKET_WAKE) 80 + #define QCOM_ICC_TAG_SLEEP BIT(QCOM_ICC_BUCKET_SLEEP) 81 + #define QCOM_ICC_TAG_ACTIVE_ONLY (QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE) 82 + #define QCOM_ICC_TAG_ALWAYS (QCOM_ICC_TAG_AMC | QCOM_ICC_TAG_WAKE |\ 83 + QCOM_ICC_TAG_SLEEP) 84 + 52 85 /** 53 86 * struct qcom_icc_node - Qualcomm specific interconnect nodes 54 87 * @name: the node name used in debugfs ··· 85 86 u16 num_links; 86 87 u16 channels; 87 88 u16 buswidth; 88 - u64 sum_avg; 89 - u64 max_peak; 89 + u64 sum_avg[QCOM_ICC_NUM_BUCKETS]; 90 + u64 max_peak[QCOM_ICC_NUM_BUCKETS]; 90 91 struct qcom_icc_bcm *bcms[SDM845_MAX_BCM_PER_NODE]; 91 92 size_t num_bcms; 92 93 }; ··· 111 112 const char *name; 112 113 u32 type; 113 114 u32 addr; 114 - u64 vote_x; 115 - u64 vote_y; 115 + u64 vote_x[QCOM_ICC_NUM_BUCKETS]; 116 + u64 vote_y[QCOM_ICC_NUM_BUCKETS]; 116 117 bool dirty; 117 118 bool keepalive; 118 119 struct bcm_db aux_data; ··· 554 555 cmd->wait = true; 555 556 } 556 557 557 - static void tcs_list_gen(struct list_head *bcm_list, 558 + static void tcs_list_gen(struct list_head *bcm_list, int bucket, 558 559 struct tcs_cmd tcs_list[SDM845_MAX_VCD], 559 560 int n[SDM845_MAX_VCD]) 560 561 { ··· 572 573 commit = true; 573 574 cur_vcd_size = 0; 574 575 } 575 - tcs_cmd_gen(&tcs_list[idx], bcm->vote_x, bcm->vote_y, 576 - bcm->addr, commit); 576 + tcs_cmd_gen(&tcs_list[idx], bcm->vote_x[bucket], 577 + bcm->vote_y[bucket], bcm->addr, commit); 577 578 idx++; 578 579 n[batch]++; 579 580 /* ··· 594 595 595 596 static void bcm_aggregate(struct qcom_icc_bcm *bcm) 596 597 { 597 - size_t i; 598 - u64 agg_avg = 0; 599 - u64 agg_peak = 0; 598 + size_t i, bucket; 599 + u64 agg_avg[QCOM_ICC_NUM_BUCKETS] = {0}; 600 + u64 agg_peak[QCOM_ICC_NUM_BUCKETS] = {0}; 600 601 u64 
temp; 601 602 602 - for (i = 0; i < bcm->num_nodes; i++) { 603 - temp = bcm->nodes[i]->sum_avg * bcm->aux_data.width; 604 - do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels); 605 - agg_avg = max(agg_avg, temp); 603 + for (bucket = 0; bucket < QCOM_ICC_NUM_BUCKETS; bucket++) { 604 + for (i = 0; i < bcm->num_nodes; i++) { 605 + temp = bcm->nodes[i]->sum_avg[bucket] * bcm->aux_data.width; 606 + do_div(temp, bcm->nodes[i]->buswidth * bcm->nodes[i]->channels); 607 + agg_avg[bucket] = max(agg_avg[bucket], temp); 606 608 607 - temp = bcm->nodes[i]->max_peak * bcm->aux_data.width; 608 - do_div(temp, bcm->nodes[i]->buswidth); 609 - agg_peak = max(agg_peak, temp); 609 + temp = bcm->nodes[i]->max_peak[bucket] * bcm->aux_data.width; 610 + do_div(temp, bcm->nodes[i]->buswidth); 611 + agg_peak[bucket] = max(agg_peak[bucket], temp); 612 + } 613 + 614 + temp = agg_avg[bucket] * 1000ULL; 615 + do_div(temp, bcm->aux_data.unit); 616 + bcm->vote_x[bucket] = temp; 617 + 618 + temp = agg_peak[bucket] * 1000ULL; 619 + do_div(temp, bcm->aux_data.unit); 620 + bcm->vote_y[bucket] = temp; 610 621 } 611 622 612 - temp = agg_avg * 1000ULL; 613 - do_div(temp, bcm->aux_data.unit); 614 - bcm->vote_x = temp; 615 - 616 - temp = agg_peak * 1000ULL; 617 - do_div(temp, bcm->aux_data.unit); 618 - bcm->vote_y = temp; 619 - 620 - if (bcm->keepalive && bcm->vote_x == 0 && bcm->vote_y == 0) { 621 - bcm->vote_x = 1; 622 - bcm->vote_y = 1; 623 + if (bcm->keepalive && bcm->vote_x[QCOM_ICC_BUCKET_AMC] == 0 && 624 + bcm->vote_y[QCOM_ICC_BUCKET_AMC] == 0) { 625 + bcm->vote_x[QCOM_ICC_BUCKET_AMC] = 1; 626 + bcm->vote_x[QCOM_ICC_BUCKET_WAKE] = 1; 627 + bcm->vote_y[QCOM_ICC_BUCKET_AMC] = 1; 628 + bcm->vote_y[QCOM_ICC_BUCKET_WAKE] = 1; 623 629 } 624 630 625 631 bcm->dirty = false; 626 632 } 627 633 628 - static int qcom_icc_aggregate(struct icc_node *node, u32 avg_bw, 634 + static void qcom_icc_pre_aggregate(struct icc_node *node) 635 + { 636 + size_t i; 637 + struct qcom_icc_node *qn; 638 + 639 + qn = 
node->data; 640 + 641 + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) { 642 + qn->sum_avg[i] = 0; 643 + qn->max_peak[i] = 0; 644 + } 645 + } 646 + 647 + static int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw, 629 648 u32 peak_bw, u32 *agg_avg, u32 *agg_peak) 630 649 { 631 650 size_t i; ··· 651 634 652 635 qn = node->data; 653 636 637 + if (!tag) 638 + tag = QCOM_ICC_TAG_ALWAYS; 639 + 640 + for (i = 0; i < QCOM_ICC_NUM_BUCKETS; i++) { 641 + if (tag & BIT(i)) { 642 + qn->sum_avg[i] += avg_bw; 643 + qn->max_peak[i] = max_t(u32, qn->max_peak[i], peak_bw); 644 + } 645 + } 646 + 654 647 *agg_avg += avg_bw; 655 648 *agg_peak = max_t(u32, *agg_peak, peak_bw); 656 - 657 - qn->sum_avg = *agg_avg; 658 - qn->max_peak = *agg_peak; 659 649 660 650 for (i = 0; i < qn->num_bcms; i++) 661 651 qn->bcms[i]->dirty = true; ··· 699 675 * Construct the command list based on a pre ordered list of BCMs 700 676 * based on VCD. 701 677 */ 702 - tcs_list_gen(&commit_list, cmds, commit_idx); 678 + tcs_list_gen(&commit_list, QCOM_ICC_BUCKET_AMC, cmds, commit_idx); 703 679 704 680 if (!commit_idx[0]) 705 681 return ret; ··· 714 690 cmds, commit_idx); 715 691 if (ret) { 716 692 pr_err("Error sending AMC RPMH requests (%d)\n", ret); 693 + return ret; 694 + } 695 + 696 + INIT_LIST_HEAD(&commit_list); 697 + 698 + for (i = 0; i < qp->num_bcms; i++) { 699 + /* 700 + * Only generate WAKE and SLEEP commands if a resource's 701 + * requirements change as the execution environment transitions 702 + * between different power states. 
703 + */ 704 + if (qp->bcms[i]->vote_x[QCOM_ICC_BUCKET_WAKE] != 705 + qp->bcms[i]->vote_x[QCOM_ICC_BUCKET_SLEEP] || 706 + qp->bcms[i]->vote_y[QCOM_ICC_BUCKET_WAKE] != 707 + qp->bcms[i]->vote_y[QCOM_ICC_BUCKET_SLEEP]) { 708 + list_add_tail(&qp->bcms[i]->list, &commit_list); 709 + } 710 + } 711 + 712 + if (list_empty(&commit_list)) 713 + return ret; 714 + 715 + tcs_list_gen(&commit_list, QCOM_ICC_BUCKET_WAKE, cmds, commit_idx); 716 + 717 + ret = rpmh_write_batch(qp->dev, RPMH_WAKE_ONLY_STATE, cmds, commit_idx); 718 + if (ret) { 719 + pr_err("Error sending WAKE RPMH requests (%d)\n", ret); 720 + return ret; 721 + } 722 + 723 + tcs_list_gen(&commit_list, QCOM_ICC_BUCKET_SLEEP, cmds, commit_idx); 724 + 725 + ret = rpmh_write_batch(qp->dev, RPMH_SLEEP_STATE, cmds, commit_idx); 726 + if (ret) { 727 + pr_err("Error sending SLEEP RPMH requests (%d)\n", ret); 717 728 return ret; 718 729 } 719 730 ··· 797 738 provider = &qp->provider; 798 739 provider->dev = &pdev->dev; 799 740 provider->set = qcom_icc_set; 741 + provider->pre_aggregate = qcom_icc_pre_aggregate; 800 742 provider->aggregate = qcom_icc_aggregate; 801 743 provider->xlate = of_icc_xlate_onecell; 802 744 INIT_LIST_HEAD(&provider->nodes);
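The sdm845 bucket logic above condenses to one rule: an untagged vote lands in every bucket (`QCOM_ICC_TAG_ALWAYS`), while a tagged vote only accumulates into the buckets whose bits are set. A standalone sketch of that accumulation step (plain C arrays standing in for the kernel's `qcom_icc_node` fields):

```c
#include <stdint.h>

enum { BUCKET_AMC, BUCKET_WAKE, BUCKET_SLEEP, NUM_BUCKETS };

#define TAG_ALWAYS 0x7	/* AMC | WAKE | SLEEP, as in the patch */

/*
 * Accumulate one consumer's vote into the per-bucket totals, the way
 * qcom_icc_aggregate() does after this change: tag 0 means "always".
 */
void bucket_aggregate(uint64_t sum_avg[NUM_BUCKETS],
		      uint64_t max_peak[NUM_BUCKETS],
		      uint32_t tag, uint32_t avg_bw, uint32_t peak_bw)
{
	if (!tag)
		tag = TAG_ALWAYS;

	for (int i = 0; i < NUM_BUCKETS; i++) {
		if (tag & (1u << i)) {
			sum_avg[i] += avg_bw;
			if (peak_bw > max_peak[i])
				max_peak[i] = peak_bw;
		}
	}
}

/*
 * Convenience check: sleep-bucket average after one untagged vote and
 * one wake-only vote. Only the untagged vote (100) should land there.
 */
uint64_t demo_sleep_avg(void)
{
	uint64_t sum_avg[NUM_BUCKETS] = {0}, max_peak[NUM_BUCKETS] = {0};

	bucket_aggregate(sum_avg, max_peak, 0, 100, 200);
	bucket_aggregate(sum_avg, max_peak, 1u << BUCKET_WAKE, 50, 0);

	return sum_avg[BUCKET_SLEEP];
}
```

This is why `qcom_icc_do_aggregate_requests()` can later skip WAKE/SLEEP TCS commands when the two buckets carry identical votes: the RPMh hardware state would not change across the power transition.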
+77
drivers/interconnect/qcom/smd-rpm.c
···
 1 + // SPDX-License-Identifier: GPL-2.0
 2 + /*
 3 +  * RPM over SMD communication wrapper for interconnects
 4 +  *
 5 +  * Copyright (C) 2019 Linaro Ltd
 6 +  * Author: Georgi Djakov <georgi.djakov@linaro.org>
 7 +  */
 8 +
 9 + #include <linux/interconnect-provider.h>
10 + #include <linux/module.h>
11 + #include <linux/of.h>
12 + #include <linux/of_platform.h>
13 + #include <linux/platform_device.h>
14 + #include <linux/soc/qcom/smd-rpm.h>
15 +
16 + #include "smd-rpm.h"
17 +
18 + #define RPM_KEY_BW	0x00007762
19 +
20 + static struct qcom_smd_rpm *icc_smd_rpm;
21 +
22 + struct icc_rpm_smd_req {
23 + 	__le32 key;
24 + 	__le32 nbytes;
25 + 	__le32 value;
26 + };
27 +
28 + bool qcom_icc_rpm_smd_available(void)
29 + {
30 + 	return !!icc_smd_rpm;
31 + }
32 + EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_available);
33 +
34 + int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val)
35 + {
36 + 	struct icc_rpm_smd_req req = {
37 + 		.key = cpu_to_le32(RPM_KEY_BW),
38 + 		.nbytes = cpu_to_le32(sizeof(u32)),
39 + 		.value = cpu_to_le32(val),
40 + 	};
41 +
42 + 	return qcom_rpm_smd_write(icc_smd_rpm, ctx, rsc_type, id, &req,
43 + 				  sizeof(req));
44 + }
45 + EXPORT_SYMBOL_GPL(qcom_icc_rpm_smd_send);
46 +
47 + static int qcom_icc_rpm_smd_remove(struct platform_device *pdev)
48 + {
49 + 	icc_smd_rpm = NULL;
50 +
51 + 	return 0;
52 + }
53 +
54 + static int qcom_icc_rpm_smd_probe(struct platform_device *pdev)
55 + {
56 + 	icc_smd_rpm = dev_get_drvdata(pdev->dev.parent);
57 +
58 + 	if (!icc_smd_rpm) {
59 + 		dev_err(&pdev->dev, "unable to retrieve handle to RPM\n");
60 + 		return -ENODEV;
61 + 	}
62 +
63 + 	return 0;
64 + }
65 +
66 + static struct platform_driver qcom_interconnect_rpm_smd_driver = {
67 + 	.driver = {
68 + 		.name = "icc_smd_rpm",
69 + 	},
70 + 	.probe = qcom_icc_rpm_smd_probe,
71 + 	.remove = qcom_icc_rpm_smd_remove,
72 + };
73 + module_platform_driver(qcom_interconnect_rpm_smd_driver);
74 + MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>");
75 + MODULE_DESCRIPTION("Qualcomm SMD RPM interconnect proxy driver");
76 + MODULE_LICENSE("GPL v2");
77 + MODULE_ALIAS("platform:icc_smd_rpm");
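The request the wrapper sends is just a key/length/value triple, and `RPM_KEY_BW` (0x7762) is the ASCII string "bw" packed little-endian. A hedged sketch of the message construction in host byte order (the driver additionally runs each field through `cpu_to_le32` before the SMD write):

```c
#include <stdint.h>

#define RPM_KEY_BW 0x00007762	/* little-endian ASCII "bw" */

struct icc_rpm_smd_req {
	uint32_t key;
	uint32_t nbytes;
	uint32_t value;
};

/*
 * Build the bandwidth-vote message the way qcom_icc_rpm_smd_send()
 * does, minus the endianness conversion and the qcom_rpm_smd_write()
 * call itself.
 */
struct icc_rpm_smd_req make_bw_req(uint32_t bw)
{
	struct icc_rpm_smd_req req = {
		.key = RPM_KEY_BW,
		.nbytes = sizeof(uint32_t),	/* payload is one u32 */
		.value = bw,
	};

	return req;
}
```

The `qcom_icc_rpm_smd_available()` check exists so the qcs404 driver can return `-EPROBE_DEFER` until the RPM proxy has probed and populated `icc_smd_rpm`.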
+15
drivers/interconnect/qcom/smd-rpm.h
···
 1 + /* SPDX-License-Identifier: GPL-2.0 */
 2 + /*
 3 +  * Copyright (c) 2019, Linaro Ltd.
 4 +  * Author: Georgi Djakov <georgi.djakov@linaro.org>
 5 +  */
 6 +
 7 + #ifndef __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
 8 + #define __DRIVERS_INTERCONNECT_QCOM_SMD_RPM_H
 9 +
10 + #include <linux/soc/qcom/smd-rpm.h>
11 +
12 + bool qcom_icc_rpm_smd_available(void);
13 + int qcom_icc_rpm_smd_send(int ctx, int rsc_type, int id, u32 val);
14 +
15 + #endif
-9
drivers/misc/Kconfig
···
362 362 	  This driver can also be built as a module. If so, the module
363 363 	  will be called ds1682.
364 364
365     - config SPEAR13XX_PCIE_GADGET
366     - 	bool "PCIe gadget support for SPEAr13XX platform"
367     - 	depends on ARCH_SPEAR13XX && BROKEN
368     - 	help
369     - 	  This option enables gadget support for PCIe controller. If
370     - 	  board file defines any controller as PCIe endpoint then a sysfs
371     - 	  entry will be created for that controller. User can use these
372     - 	  sysfs node to configure PCIe EP as per his requirements.
373     -
374 365 config VMWARE_BALLOON
375 366 	tristate "VMware Balloon Driver"
376 367 	depends on VMWARE_VMCI && X86 && HYPERVISOR_GUEST
-1
drivers/misc/Makefile
···
36 36 obj-$(CONFIG_HMC6352)		+= hmc6352.o
37 37 obj-y				+= eeprom/
38 38 obj-y				+= cb710/
39    - obj-$(CONFIG_SPEAR13XX_PCIE_GADGET)	+= spear13xx_pcie_gadget.o
40 39 obj-$(CONFIG_VMWARE_BALLOON)	+= vmw_balloon.o
41 40 obj-$(CONFIG_PCH_PHUB)		+= pch_phub.o
42 41 obj-y				+= ti-st/
+2 -4
drivers/misc/cardreader/alcor_pci.c
···
334 334 #ifdef CONFIG_PM_SLEEP
335 335 static int alcor_suspend(struct device *dev)
336 336 {
337     - 	struct pci_dev *pdev = to_pci_dev(dev);
338     - 	struct alcor_pci_priv *priv = pci_get_drvdata(pdev);
    337 + 	struct alcor_pci_priv *priv = dev_get_drvdata(dev);
339 338
340 339 	alcor_pci_aspm_ctrl(priv, 1);
341 340 	return 0;
···
343 344 static int alcor_resume(struct device *dev)
344 345 {
345 346
346     - 	struct pci_dev *pdev = to_pci_dev(dev);
347     - 	struct alcor_pci_priv *priv = pci_get_drvdata(pdev);
    347 + 	struct alcor_pci_priv *priv = dev_get_drvdata(dev);
348 348
349 349 	alcor_pci_aspm_ctrl(priv, 0);
350 350 	return 0;
+4 -1
drivers/misc/eeprom/Kconfig
···
45 45 	  will be called at25.
46 46
47 47 config EEPROM_LEGACY
48    - 	tristate "Old I2C EEPROM reader"
   48 + 	tristate "Old I2C EEPROM reader (DEPRECATED)"
49 49 	depends on I2C && SYSFS
50 50 	help
51 51 	  If you say yes here you get read-only access to the EEPROM data
52 52 	  available on modern memory DIMMs and Sony Vaio laptops via I2C. Such
53 53 	  EEPROMs could theoretically be available on other devices as well.
   54 +
   55 + 	  This driver is deprecated and will be removed soon, please use the
   56 + 	  better at24 driver instead.
54 57
55 58 	  This driver can also be built as a module. If so, the module
56 59 	  will be called eeprom.
+3 -3
drivers/misc/eeprom/ee1004.c
···
195 195 	mutex_lock(&ee1004_bus_lock);
196 196 	if (++ee1004_dev_count == 1) {
197 197 		for (cnr = 0; cnr < 2; cnr++) {
198     - 			ee1004_set_page[cnr] = i2c_new_dummy(client->adapter,
    198 + 			ee1004_set_page[cnr] = i2c_new_dummy_device(client->adapter,
199 199 					EE1004_ADDR_SET_PAGE + cnr);
200     - 			if (!ee1004_set_page[cnr]) {
    200 + 			if (IS_ERR(ee1004_set_page[cnr])) {
201 201 				dev_err(&client->dev,
202 202 					"address 0x%02x unavailable\n",
203 203 					EE1004_ADDR_SET_PAGE + cnr);
204     - 				err = -EADDRINUSE;
    204 + 				err = PTR_ERR(ee1004_set_page[cnr]);
205 205 				goto err_clients;
206 206 			}
207 207 		}
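The ee1004 and max6875 hunks are the same mechanical conversion: `i2c_new_dummy()` returned NULL on failure, while its replacement `i2c_new_dummy_device()` returns an ERR_PTR that must be tested with `IS_ERR()` and decoded with `PTR_ERR()`. A userspace sketch of that convention (the kernel's real macros live in `<linux/err.h>`; this is a simplified analogue, not the kernel source):

```c
#include <errno.h>

#define MAX_ERRNO 4095

/*
 * Errors travel in the top 4095 values of the pointer space, so a
 * single return slot carries either a valid pointer or a negative
 * errno - no separate out-parameter needed.
 */
static inline void *ERR_PTR(long error)
{
	return (void *)error;
}

static inline long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}
```

The practical win in the ee1004 hunk is that the caller now propagates the real cause (`PTR_ERR(...)`) instead of hard-coding `-EADDRINUSE`.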
+3 -3
drivers/misc/eeprom/max6875.c
···
150 150 		return -ENOMEM;
151 151
152 152 	/* A fake client is created on the odd address */
153     - 	data->fake_client = i2c_new_dummy(client->adapter, client->addr + 1);
154     - 	if (!data->fake_client) {
155     - 		err = -ENOMEM;
    153 + 	data->fake_client = i2c_new_dummy_device(client->adapter, client->addr + 1);
    154 + 	if (IS_ERR(data->fake_client)) {
    155 + 		err = PTR_ERR(data->fake_client);
156 156 		goto exit_kfree;
157 157 	}
158 158
+47 -32
drivers/misc/fastrpc.c
··· 33 33 #define FASTRPC_INIT_HANDLE 1 34 34 #define FASTRPC_CTXID_MASK (0xFF0) 35 35 #define INIT_FILELEN_MAX (64 * 1024 * 1024) 36 - #define INIT_MEMLEN_MAX (8 * 1024 * 1024) 37 36 #define FASTRPC_DEVICE_NAME "fastrpc" 38 37 39 38 /* Retrives number of input buffers from the scalars parameter */ ··· 185 186 struct idr ctx_idr; 186 187 struct list_head users; 187 188 struct miscdevice miscdev; 189 + struct kref refcount; 188 190 }; 189 191 190 192 struct fastrpc_user { ··· 279 279 280 280 buf->virt = dma_alloc_coherent(dev, buf->size, (dma_addr_t *)&buf->phys, 281 281 GFP_KERNEL); 282 - if (!buf->virt) 282 + if (!buf->virt) { 283 + mutex_destroy(&buf->lock); 284 + kfree(buf); 283 285 return -ENOMEM; 286 + } 284 287 285 288 if (fl->sctx && fl->sctx->sid) 286 289 buf->phys += ((u64)fl->sctx->sid << 32); ··· 291 288 *obuf = buf; 292 289 293 290 return 0; 291 + } 292 + 293 + static void fastrpc_channel_ctx_free(struct kref *ref) 294 + { 295 + struct fastrpc_channel_ctx *cctx; 296 + 297 + cctx = container_of(ref, struct fastrpc_channel_ctx, refcount); 298 + 299 + kfree(cctx); 300 + } 301 + 302 + static void fastrpc_channel_ctx_get(struct fastrpc_channel_ctx *cctx) 303 + { 304 + kref_get(&cctx->refcount); 305 + } 306 + 307 + static void fastrpc_channel_ctx_put(struct fastrpc_channel_ctx *cctx) 308 + { 309 + kref_put(&cctx->refcount, fastrpc_channel_ctx_free); 294 310 } 295 311 296 312 static void fastrpc_context_free(struct kref *ref) ··· 335 313 kfree(ctx->maps); 336 314 kfree(ctx->olaps); 337 315 kfree(ctx); 316 + 317 + fastrpc_channel_ctx_put(cctx); 338 318 } 339 319 340 320 static void fastrpc_context_get(struct fastrpc_invoke_ctx *ctx) ··· 443 419 fastrpc_get_buff_overlaps(ctx); 444 420 } 445 421 422 + /* Released in fastrpc_context_put() */ 423 + fastrpc_channel_ctx_get(cctx); 424 + 446 425 ctx->sc = sc; 447 426 ctx->retval = -1; 448 427 ctx->pid = current->pid; ··· 475 448 spin_lock(&user->lock); 476 449 list_del(&ctx->node); 477 450 spin_unlock(&user->lock); 
451 + fastrpc_channel_ctx_put(cctx); 478 452 kfree(ctx->maps); 479 453 kfree(ctx->olaps); 480 454 kfree(ctx); ··· 550 522 mutex_lock(&buffer->lock); 551 523 list_del(&a->node); 552 524 mutex_unlock(&buffer->lock); 525 + sg_free_table(&a->sgt); 553 526 kfree(a); 554 527 } 555 528 ··· 913 884 if (!fl->sctx) 914 885 return -EINVAL; 915 886 887 + if (!fl->cctx->rpdev) 888 + return -EPIPE; 889 + 916 890 ctx = fastrpc_context_alloc(fl, kernel, sc, args); 917 891 if (IS_ERR(ctx)) 918 892 return PTR_ERR(ctx); ··· 1152 1120 } 1153 1121 1154 1122 fastrpc_session_free(cctx, fl->sctx); 1123 + fastrpc_channel_ctx_put(cctx); 1155 1124 1156 1125 mutex_destroy(&fl->mutex); 1157 1126 kfree(fl); ··· 1170 1137 fl = kzalloc(sizeof(*fl), GFP_KERNEL); 1171 1138 if (!fl) 1172 1139 return -ENOMEM; 1140 + 1141 + /* Released in fastrpc_device_release() */ 1142 + fastrpc_channel_ctx_get(cctx); 1173 1143 1174 1144 filp->private_data = fl; 1175 1145 spin_lock_init(&fl->lock); ··· 1195 1159 spin_lock_irqsave(&cctx->lock, flags); 1196 1160 list_add_tail(&fl->user, &cctx->users); 1197 1161 spin_unlock_irqrestore(&cctx->lock, flags); 1198 - 1199 - return 0; 1200 - } 1201 - 1202 - static int fastrpc_dmabuf_free(struct fastrpc_user *fl, char __user *argp) 1203 - { 1204 - struct dma_buf *buf; 1205 - int info; 1206 - 1207 - if (copy_from_user(&info, argp, sizeof(info))) 1208 - return -EFAULT; 1209 - 1210 - buf = dma_buf_get(info); 1211 - if (IS_ERR_OR_NULL(buf)) 1212 - return -EINVAL; 1213 - /* 1214 - * one for the last get and other for the ALLOC_DMA_BUFF ioctl 1215 - */ 1216 - dma_buf_put(buf); 1217 - dma_buf_put(buf); 1218 1162 1219 1163 return 0; 1220 1164 } ··· 1233 1217 dma_buf_put(buf->dmabuf); 1234 1218 return -EFAULT; 1235 1219 } 1236 - 1237 - get_dma_buf(buf->dmabuf); 1238 1220 1239 1221 return 0; 1240 1222 } ··· 1300 1286 break; 1301 1287 case FASTRPC_IOCTL_INIT_CREATE: 1302 1288 err = fastrpc_init_create_process(fl, argp); 1303 - break; 1304 - case FASTRPC_IOCTL_FREE_DMA_BUFF: 1305 - err = 
fastrpc_dmabuf_free(fl, argp); 1306 1289 break; 1307 1290 case FASTRPC_IOCTL_ALLOC_DMA_BUFF: 1308 1291 err = fastrpc_dmabuf_alloc(fl, argp); ··· 1406 1395 int i, err, domain_id = -1; 1407 1396 const char *domain; 1408 1397 1409 - data = devm_kzalloc(rdev, sizeof(*data), GFP_KERNEL); 1410 - if (!data) 1411 - return -ENOMEM; 1412 - 1413 1398 err = of_property_read_string(rdev->of_node, "label", &domain); 1414 1399 if (err) { 1415 1400 dev_info(rdev, "FastRPC Domain not specified in DT\n"); ··· 1424 1417 return -EINVAL; 1425 1418 } 1426 1419 1420 + data = kzalloc(sizeof(*data), GFP_KERNEL); 1421 + if (!data) 1422 + return -ENOMEM; 1423 + 1427 1424 data->miscdev.minor = MISC_DYNAMIC_MINOR; 1428 1425 data->miscdev.name = kasprintf(GFP_KERNEL, "fastrpc-%s", 1429 1426 domains[domain_id]); ··· 1435 1424 err = misc_register(&data->miscdev); 1436 1425 if (err) 1437 1426 return err; 1427 + 1428 + kref_init(&data->refcount); 1438 1429 1439 1430 dev_set_drvdata(&rpdev->dev, data); 1440 1431 dma_set_mask_and_coherent(rdev, DMA_BIT_MASK(32)); ··· 1472 1459 1473 1460 misc_deregister(&cctx->miscdev); 1474 1461 of_platform_depopulate(&rpdev->dev); 1475 - kfree(cctx); 1462 + 1463 + cctx->rpdev = NULL; 1464 + fastrpc_channel_ctx_put(cctx); 1476 1465 } 1477 1466 1478 1467 static int fastrpc_rpmsg_callback(struct rpmsg_device *rpdev, void *data,
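The fastrpc changes above hang the channel context off a kref so it is freed only when the last holder (the rpmsg device, any open file descriptor, any in-flight invoke context) drops its reference; `fastrpc_rpmsg_remove()` now clears `cctx->rpdev` and puts its reference instead of freeing outright. A minimal userspace analogue of that get/put scheme (names illustrative; the kernel uses `kref_init`/`kref_get`/`kref_put`):

```c
#include <stdlib.h>

struct channel_ctx {
	int refcount;
	int *freed;	/* observation hook for this example only */
};

static struct channel_ctx *ctx_alloc(int *freed)
{
	struct channel_ctx *c = malloc(sizeof(*c));

	c->refcount = 1;	/* kref_init() in fastrpc_rpmsg_probe() */
	c->freed = freed;
	return c;
}

static void ctx_get(struct channel_ctx *c)
{
	c->refcount++;		/* fastrpc_channel_ctx_get() */
}

static void ctx_put(struct channel_ctx *c)
{
	if (--c->refcount == 0) {	/* fastrpc_channel_ctx_put() */
		*c->freed = 1;
		free(c);
	}
}

/*
 * Probe, open, rpmsg-remove, release: the context must outlive the
 * rpmsg device while a file descriptor still holds it. Returns 1 if
 * it was freed only at the final put.
 */
int ctx_lifetime_demo(void)
{
	int freed = 0;
	struct channel_ctx *c = ctx_alloc(&freed);	/* probe */

	ctx_get(c);		/* fastrpc_device_open() */
	ctx_put(c);		/* fastrpc_rpmsg_remove() */
	if (freed)
		return 0;	/* would be a use-after-free in the driver */
	ctx_put(c);		/* fastrpc_device_release() */
	return freed;
}
```

This is also why `fastrpc_internal_invoke()` now checks `fl->cctx->rpdev` and returns `-EPIPE`: the context object may legitimately outlive the transport it was created on.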
+1 -1
drivers/misc/habanalabs/asid.c
···
18 18
19 19 	mutex_init(&hdev->asid_mutex);
20 20
21    - 	/* ASID 0 is reserved for KMD and device CPU */
   21 + 	/* ASID 0 is reserved for the kernel driver and device CPU */
22 22 	set_bit(0, hdev->asid_bitmap);
23 23
24 24 	return 0;
+2 -1
drivers/misc/habanalabs/command_buffer.c
···
397 397 	rc = hl_cb_create(hdev, &hdev->kernel_cb_mgr, cb_size, &cb_handle,
398 398 			HL_KERNEL_ASID_ID);
399 399 	if (rc) {
400     - 		dev_err(hdev->dev, "Failed to allocate CB for KMD %d\n", rc);
    400 + 		dev_err(hdev->dev,
    401 + 			"Failed to allocate CB for the kernel driver %d\n", rc);
401 402 		return NULL;
402 403 	}
403 404
+21 -6
drivers/misc/habanalabs/command_submission.c
··· 178 178 179 179 /* We also need to update CI for internal queues */ 180 180 if (cs->submitted) { 181 - int cs_cnt = atomic_dec_return(&hdev->cs_active_cnt); 181 + hdev->asic_funcs->hw_queues_lock(hdev); 182 182 183 - WARN_ONCE((cs_cnt < 0), 184 - "hl%d: error in CS active cnt %d\n", 185 - hdev->id, cs_cnt); 183 + hdev->cs_active_cnt--; 184 + if (!hdev->cs_active_cnt) { 185 + struct hl_device_idle_busy_ts *ts; 186 + 187 + ts = &hdev->idle_busy_ts_arr[hdev->idle_busy_ts_idx++]; 188 + ts->busy_to_idle_ts = ktime_get(); 189 + 190 + if (hdev->idle_busy_ts_idx == HL_IDLE_BUSY_TS_ARR_SIZE) 191 + hdev->idle_busy_ts_idx = 0; 192 + } else if (hdev->cs_active_cnt < 0) { 193 + dev_crit(hdev->dev, "CS active cnt %d is negative\n", 194 + hdev->cs_active_cnt); 195 + } 196 + 197 + hdev->asic_funcs->hw_queues_unlock(hdev); 186 198 187 199 hl_int_hw_queue_update_ci(cs); 188 200 ··· 317 305 other = ctx->cs_pending[fence->cs_seq & (HL_MAX_PENDING_CS - 1)]; 318 306 if ((other) && (!dma_fence_is_signaled(other))) { 319 307 spin_unlock(&ctx->cs_lock); 308 + dev_dbg(hdev->dev, 309 + "Rejecting CS because of too many in-flights CS\n"); 320 310 rc = -EAGAIN; 321 311 goto free_fence; 322 312 } ··· 409 395 return NULL; 410 396 } 411 397 412 - if (hw_queue_prop->kmd_only) { 413 - dev_err(hdev->dev, "Queue index %d is restricted for KMD\n", 398 + if (hw_queue_prop->driver_only) { 399 + dev_err(hdev->dev, 400 + "Queue index %d is restricted for the kernel driver\n", 414 401 chunk->queue_index); 415 402 return NULL; 416 403 } else if (hw_queue_prop->type == QUEUE_TYPE_INT) {
+25 -17
drivers/misc/habanalabs/context.c
··· 26 26 dma_fence_put(ctx->cs_pending[i]); 27 27 28 28 if (ctx->asid != HL_KERNEL_ASID_ID) { 29 - /* 30 - * The engines are stopped as there is no executing CS, but the 29 + /* The engines are stopped as there is no executing CS, but the 31 30 * Coresight might be still working by accessing addresses 32 31 * related to the stopped engines. Hence stop it explicitly. 32 + * Stop only if this is the compute context, as there can be 33 + * only one compute context 33 34 */ 34 - if (hdev->in_debug) 35 + if ((hdev->in_debug) && (hdev->compute_ctx == ctx)) 35 36 hl_device_set_debug_mode(hdev, false); 36 37 37 38 hl_vm_ctx_fini(ctx); ··· 68 67 goto out_err; 69 68 } 70 69 71 - rc = hl_ctx_init(hdev, ctx, false); 72 - if (rc) 73 - goto free_ctx; 74 - 75 - hl_hpriv_get(hpriv); 76 - ctx->hpriv = hpriv; 77 - 78 - /* TODO: remove for multiple contexts */ 79 - hpriv->ctx = ctx; 80 - hdev->user_ctx = ctx; 81 - 82 70 mutex_lock(&mgr->ctx_lock); 83 71 rc = idr_alloc(&mgr->ctx_handles, ctx, 1, 0, GFP_KERNEL); 84 72 mutex_unlock(&mgr->ctx_lock); 85 73 86 74 if (rc < 0) { 87 75 dev_err(hdev->dev, "Failed to allocate IDR for a new CTX\n"); 88 - hl_ctx_free(hdev, ctx); 89 - goto out_err; 76 + goto free_ctx; 90 77 } 78 + 79 + ctx->handle = rc; 80 + 81 + rc = hl_ctx_init(hdev, ctx, false); 82 + if (rc) 83 + goto remove_from_idr; 84 + 85 + hl_hpriv_get(hpriv); 86 + ctx->hpriv = hpriv; 87 + 88 + /* TODO: remove for multiple contexts per process */ 89 + hpriv->ctx = ctx; 90 + 91 + /* TODO: remove the following line for multiple process support */ 92 + hdev->compute_ctx = ctx; 91 93 92 94 return 0; 93 95 96 + remove_from_idr: 97 + mutex_lock(&mgr->ctx_lock); 98 + idr_remove(&mgr->ctx_handles, ctx->handle); 99 + mutex_unlock(&mgr->ctx_lock); 94 100 free_ctx: 95 101 kfree(ctx); 96 102 out_err: ··· 128 120 ctx->thread_ctx_switch_wait_token = 0; 129 121 130 122 if (is_kernel_ctx) { 131 - ctx->asid = HL_KERNEL_ASID_ID; /* KMD gets ASID 0 */ 123 + ctx->asid = HL_KERNEL_ASID_ID; /* Kernel driver 
gets ASID 0 */ 132 124 rc = hl_mmu_ctx_init(ctx); 133 125 if (rc) { 134 126 dev_err(hdev->dev, "Failed to init mmu ctx module\n");
+8 -8
drivers/misc/habanalabs/debugfs.c
···
 29  29
 30  30 	memset(&pkt, 0, sizeof(pkt));
 31  31
 32      - 	pkt.ctl = __cpu_to_le32(ARMCP_PACKET_I2C_RD <<
     32 + 	pkt.ctl = cpu_to_le32(ARMCP_PACKET_I2C_RD <<
 33  33 			ARMCP_PKT_CTL_OPCODE_SHIFT);
 34  34 	pkt.i2c_bus = i2c_bus;
 35  35 	pkt.i2c_addr = i2c_addr;
···
 55  55
 56  56 	memset(&pkt, 0, sizeof(pkt));
 57  57
 58      - 	pkt.ctl = __cpu_to_le32(ARMCP_PACKET_I2C_WR <<
     58 + 	pkt.ctl = cpu_to_le32(ARMCP_PACKET_I2C_WR <<
 59  59 			ARMCP_PKT_CTL_OPCODE_SHIFT);
 60  60 	pkt.i2c_bus = i2c_bus;
 61  61 	pkt.i2c_addr = i2c_addr;
 62  62 	pkt.i2c_reg = i2c_reg;
 63      - 	pkt.value = __cpu_to_le64(val);
     63 + 	pkt.value = cpu_to_le64(val);
 64  64
 65  65 	rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
 66  66 			HL_DEVICE_TIMEOUT_USEC, NULL);
···
 81  81
 82  82 	memset(&pkt, 0, sizeof(pkt));
 83  83
 84      - 	pkt.ctl = __cpu_to_le32(ARMCP_PACKET_LED_SET <<
     84 + 	pkt.ctl = cpu_to_le32(ARMCP_PACKET_LED_SET <<
 85  85 			ARMCP_PKT_CTL_OPCODE_SHIFT);
 86      - 	pkt.led_index = __cpu_to_le32(led);
 87      - 	pkt.value = __cpu_to_le64(state);
     86 + 	pkt.led_index = cpu_to_le32(led);
     87 + 	pkt.value = cpu_to_le64(state);
 88  88
 89  89 	rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt),
 90  90 			HL_DEVICE_TIMEOUT_USEC, NULL);
···
370 370 	if (dev_entry->mmu_asid == HL_KERNEL_ASID_ID)
371 371 		ctx = hdev->kernel_ctx;
372 372 	else
373     - 		ctx = hdev->user_ctx;
    373 + 		ctx = hdev->compute_ctx;
374 374
375 375 	if (!ctx) {
376 376 		dev_err(hdev->dev, "no ctx available\n");
···
533 533 static int device_va_to_pa(struct hl_device *hdev, u64 virt_addr,
534 534 		u64 *phys_addr)
535 535 {
536     - 	struct hl_ctx *ctx = hdev->user_ctx;
    536 + 	struct hl_ctx *ctx = hdev->compute_ctx;
537 537 	u64 hop_addr, hop_pte_addr, hop_pte;
538 538 	u64 offset_mask = HOP4_MASK | OFFSET_MASK;
539 539 	int rc = 0;
+354 -134
drivers/misc/habanalabs/device.c
··· 42 42 { 43 43 struct hl_fpriv *hpriv; 44 44 struct hl_device *hdev; 45 + struct hl_ctx *ctx; 45 46 46 47 hpriv = container_of(ref, struct hl_fpriv, refcount); 47 48 48 49 hdev = hpriv->hdev; 50 + ctx = hpriv->ctx; 49 51 50 52 put_pid(hpriv->taskpid); 51 53 ··· 55 53 56 54 mutex_destroy(&hpriv->restore_phase_mutex); 57 55 56 + mutex_lock(&hdev->fpriv_list_lock); 57 + list_del(&hpriv->dev_node); 58 + hdev->compute_ctx = NULL; 59 + mutex_unlock(&hdev->fpriv_list_lock); 60 + 58 61 kfree(hpriv); 59 - 60 - /* Now the FD is really closed */ 61 - atomic_dec(&hdev->fd_open_cnt); 62 - 63 - /* This allows a new user context to open the device */ 64 - hdev->user_ctx = NULL; 65 62 } 66 63 67 64 void hl_hpriv_get(struct hl_fpriv *hpriv) ··· 95 94 return 0; 96 95 } 97 96 97 + static int hl_device_release_ctrl(struct inode *inode, struct file *filp) 98 + { 99 + struct hl_fpriv *hpriv = filp->private_data; 100 + struct hl_device *hdev; 101 + 102 + filp->private_data = NULL; 103 + 104 + hdev = hpriv->hdev; 105 + 106 + mutex_lock(&hdev->fpriv_list_lock); 107 + list_del(&hpriv->dev_node); 108 + mutex_unlock(&hdev->fpriv_list_lock); 109 + 110 + kfree(hpriv); 111 + 112 + return 0; 113 + } 114 + 98 115 /* 99 116 * hl_mmap - mmap function for habanalabs device 100 117 * ··· 143 124 .compat_ioctl = hl_ioctl 144 125 }; 145 126 127 + static const struct file_operations hl_ctrl_ops = { 128 + .owner = THIS_MODULE, 129 + .open = hl_device_open_ctrl, 130 + .release = hl_device_release_ctrl, 131 + .unlocked_ioctl = hl_ioctl_control, 132 + .compat_ioctl = hl_ioctl_control 133 + }; 134 + 135 + static void device_release_func(struct device *dev) 136 + { 137 + kfree(dev); 138 + } 139 + 146 140 /* 147 - * device_setup_cdev - setup cdev and device for habanalabs device 141 + * device_init_cdev - Initialize cdev and device for habanalabs device 148 142 * 149 143 * @hdev: pointer to habanalabs device structure 150 144 * @hclass: pointer to the class object of the device 151 145 * @minor: minor number 
of the specific device 152 - * @fpos : file operations to install for this device 146 + * @fpos: file operations to install for this device 147 + * @name: name of the device as it will appear in the filesystem 148 + * @cdev: pointer to the char device object that will be initialized 149 + * @dev: pointer to the device object that will be initialized 153 150 * 154 - * Create a cdev and a Linux device for habanalabs's device. Need to be 155 - * called at the end of the habanalabs device initialization process, 156 - * because this function exposes the device to the user 151 + * Initialize a cdev and a Linux device for habanalabs's device. 157 152 */ 158 - static int device_setup_cdev(struct hl_device *hdev, struct class *hclass, 159 - int minor, const struct file_operations *fops) 153 + static int device_init_cdev(struct hl_device *hdev, struct class *hclass, 154 + int minor, const struct file_operations *fops, 155 + char *name, struct cdev *cdev, 156 + struct device **dev) 160 157 { 161 - int err, devno = MKDEV(hdev->major, minor); 162 - struct cdev *hdev_cdev = &hdev->cdev; 163 - char *name; 158 + cdev_init(cdev, fops); 159 + cdev->owner = THIS_MODULE; 164 160 165 - name = kasprintf(GFP_KERNEL, "hl%d", hdev->id); 166 - if (!name) 161 + *dev = kzalloc(sizeof(**dev), GFP_KERNEL); 162 + if (!*dev) 167 163 return -ENOMEM; 168 164 169 - cdev_init(hdev_cdev, fops); 170 - hdev_cdev->owner = THIS_MODULE; 171 - err = cdev_add(hdev_cdev, devno, 1); 172 - if (err) { 173 - pr_err("Failed to add char device %s\n", name); 174 - goto err_cdev_add; 165 + device_initialize(*dev); 166 + (*dev)->devt = MKDEV(hdev->major, minor); 167 + (*dev)->class = hclass; 168 + (*dev)->release = device_release_func; 169 + dev_set_drvdata(*dev, hdev); 170 + dev_set_name(*dev, "%s", name); 171 + 172 + return 0; 173 + } 174 + 175 + static int device_cdev_sysfs_add(struct hl_device *hdev) 176 + { 177 + int rc; 178 + 179 + rc = cdev_device_add(&hdev->cdev, hdev->dev); 180 + if (rc) { 181 + 
dev_err(hdev->dev, 182 + "failed to add a char device to the system\n"); 183 + return rc; 175 184 } 176 185 177 - hdev->dev = device_create(hclass, NULL, devno, NULL, "%s", name); 178 - if (IS_ERR(hdev->dev)) { 179 - pr_err("Failed to create device %s\n", name); 180 - err = PTR_ERR(hdev->dev); 181 - goto err_device_create; 186 + rc = cdev_device_add(&hdev->cdev_ctrl, hdev->dev_ctrl); 187 + if (rc) { 188 + dev_err(hdev->dev, 189 + "failed to add a control char device to the system\n"); 190 + goto delete_cdev_device; 182 191 } 183 192 184 - dev_set_drvdata(hdev->dev, hdev); 193 + /* hl_sysfs_init() must be done after adding the device to the system */ 194 + rc = hl_sysfs_init(hdev); 195 + if (rc) { 196 + dev_err(hdev->dev, "failed to initialize sysfs\n"); 197 + goto delete_ctrl_cdev_device; 198 + } 185 199 186 - kfree(name); 200 + hdev->cdev_sysfs_created = true; 187 201 188 202 return 0; 189 203 190 - err_device_create: 191 - cdev_del(hdev_cdev); 192 - err_cdev_add: 193 - kfree(name); 194 - return err; 204 + delete_ctrl_cdev_device: 205 + cdev_device_del(&hdev->cdev_ctrl, hdev->dev_ctrl); 206 + delete_cdev_device: 207 + cdev_device_del(&hdev->cdev, hdev->dev); 208 + return rc; 209 + } 210 + 211 + static void device_cdev_sysfs_del(struct hl_device *hdev) 212 + { 213 + /* device_release() won't be called so must free devices explicitly */ 214 + if (!hdev->cdev_sysfs_created) { 215 + kfree(hdev->dev_ctrl); 216 + kfree(hdev->dev); 217 + return; 218 + } 219 + 220 + hl_sysfs_fini(hdev); 221 + cdev_device_del(&hdev->cdev_ctrl, hdev->dev_ctrl); 222 + cdev_device_del(&hdev->cdev, hdev->dev); 195 223 } 196 224 197 225 /* ··· 293 227 goto free_eq_wq; 294 228 } 295 229 230 + hdev->idle_busy_ts_arr = kmalloc_array(HL_IDLE_BUSY_TS_ARR_SIZE, 231 + sizeof(struct hl_device_idle_busy_ts), 232 + (GFP_KERNEL | __GFP_ZERO)); 233 + if (!hdev->idle_busy_ts_arr) { 234 + rc = -ENOMEM; 235 + goto free_chip_info; 236 + } 237 + 296 238 hl_cb_mgr_init(&hdev->kernel_cb_mgr); 297 239 298 - 
mutex_init(&hdev->fd_open_cnt_lock); 299 240 mutex_init(&hdev->send_cpu_message_lock); 300 241 mutex_init(&hdev->debug_lock); 301 242 mutex_init(&hdev->mmu_cache_lock); 302 243 INIT_LIST_HEAD(&hdev->hw_queues_mirror_list); 303 244 spin_lock_init(&hdev->hw_queues_mirror_lock); 245 + INIT_LIST_HEAD(&hdev->fpriv_list); 246 + mutex_init(&hdev->fpriv_list_lock); 304 247 atomic_set(&hdev->in_reset, 0); 305 - atomic_set(&hdev->fd_open_cnt, 0); 306 - atomic_set(&hdev->cs_active_cnt, 0); 307 248 308 249 return 0; 309 250 251 + free_chip_info: 252 + kfree(hdev->hl_chip_info); 310 253 free_eq_wq: 311 254 destroy_workqueue(hdev->eq_wq); 312 255 free_cq_wq: ··· 341 266 mutex_destroy(&hdev->debug_lock); 342 267 mutex_destroy(&hdev->send_cpu_message_lock); 343 268 269 + mutex_destroy(&hdev->fpriv_list_lock); 270 + 344 271 hl_cb_mgr_fini(hdev, &hdev->kernel_cb_mgr); 345 272 273 + kfree(hdev->idle_busy_ts_arr); 346 274 kfree(hdev->hl_chip_info); 347 275 348 276 destroy_workqueue(hdev->eq_wq); ··· 355 277 356 278 if (hdev->asic_funcs->early_fini) 357 279 hdev->asic_funcs->early_fini(hdev); 358 - 359 - mutex_destroy(&hdev->fd_open_cnt_lock); 360 280 } 361 281 362 282 static void set_freq_to_low_job(struct work_struct *work) ··· 362 286 struct hl_device *hdev = container_of(work, struct hl_device, 363 287 work_freq.work); 364 288 365 - if (atomic_read(&hdev->fd_open_cnt) == 0) 289 + mutex_lock(&hdev->fpriv_list_lock); 290 + 291 + if (!hdev->compute_ctx) 366 292 hl_device_set_frequency(hdev, PLL_LOW); 293 + 294 + mutex_unlock(&hdev->fpriv_list_lock); 367 295 368 296 schedule_delayed_work(&hdev->work_freq, 369 297 usecs_to_jiffies(HL_PLL_LOW_JOB_FREQ_USEC)); ··· 418 338 hdev->high_pll = hdev->asic_prop.high_pll; 419 339 420 340 /* force setting to low frequency */ 421 - atomic_set(&hdev->curr_pll_profile, PLL_LOW); 341 + hdev->curr_pll_profile = PLL_LOW; 422 342 423 343 if (hdev->pm_mng_profile == PM_AUTO) 424 344 hdev->asic_funcs->set_pll_profile(hdev, PLL_LOW); ··· 461 381 
hdev->late_init_done = false; 462 382 } 463 383 384 + uint32_t hl_device_utilization(struct hl_device *hdev, uint32_t period_ms) 385 + { 386 + struct hl_device_idle_busy_ts *ts; 387 + ktime_t zero_ktime, curr = ktime_get(); 388 + u32 overlap_cnt = 0, last_index = hdev->idle_busy_ts_idx; 389 + s64 period_us, last_start_us, last_end_us, last_busy_time_us, 390 + total_busy_time_us = 0, total_busy_time_ms; 391 + 392 + zero_ktime = ktime_set(0, 0); 393 + period_us = period_ms * USEC_PER_MSEC; 394 + ts = &hdev->idle_busy_ts_arr[last_index]; 395 + 396 + /* check case that device is currently in idle */ 397 + if (!ktime_compare(ts->busy_to_idle_ts, zero_ktime) && 398 + !ktime_compare(ts->idle_to_busy_ts, zero_ktime)) { 399 + 400 + last_index--; 401 + /* Handle case idle_busy_ts_idx was 0 */ 402 + if (last_index > HL_IDLE_BUSY_TS_ARR_SIZE) 403 + last_index = HL_IDLE_BUSY_TS_ARR_SIZE - 1; 404 + 405 + ts = &hdev->idle_busy_ts_arr[last_index]; 406 + } 407 + 408 + while (overlap_cnt < HL_IDLE_BUSY_TS_ARR_SIZE) { 409 + /* Check if we are in last sample case. i.e. if the sample 410 + * begun before the sampling period. This could be a real 411 + * sample or 0 so need to handle both cases 412 + */ 413 + last_start_us = ktime_to_us( 414 + ktime_sub(curr, ts->idle_to_busy_ts)); 415 + 416 + if (last_start_us > period_us) { 417 + 418 + /* First check two cases: 419 + * 1. If the device is currently busy 420 + * 2. If the device was idle during the whole sampling 421 + * period 422 + */ 423 + 424 + if (!ktime_compare(ts->busy_to_idle_ts, zero_ktime)) { 425 + /* Check if the device is currently busy */ 426 + if (ktime_compare(ts->idle_to_busy_ts, 427 + zero_ktime)) 428 + return 100; 429 + 430 + /* We either didn't have any activity or we 431 + * reached an entry which is 0. 
Either way, 432 + * exit and return what was accumulated so far 433 + */ 434 + break; 435 + } 436 + 437 + /* If sample has finished, check it is relevant */ 438 + last_end_us = ktime_to_us( 439 + ktime_sub(curr, ts->busy_to_idle_ts)); 440 + 441 + if (last_end_us > period_us) 442 + break; 443 + 444 + /* It is relevant so add it but with adjustment */ 445 + last_busy_time_us = ktime_to_us( 446 + ktime_sub(ts->busy_to_idle_ts, 447 + ts->idle_to_busy_ts)); 448 + total_busy_time_us += last_busy_time_us - 449 + (last_start_us - period_us); 450 + break; 451 + } 452 + 453 + /* Check if the sample is finished or still open */ 454 + if (ktime_compare(ts->busy_to_idle_ts, zero_ktime)) 455 + last_busy_time_us = ktime_to_us( 456 + ktime_sub(ts->busy_to_idle_ts, 457 + ts->idle_to_busy_ts)); 458 + else 459 + last_busy_time_us = ktime_to_us( 460 + ktime_sub(curr, ts->idle_to_busy_ts)); 461 + 462 + total_busy_time_us += last_busy_time_us; 463 + 464 + last_index--; 465 + /* Handle case idle_busy_ts_idx was 0 */ 466 + if (last_index > HL_IDLE_BUSY_TS_ARR_SIZE) 467 + last_index = HL_IDLE_BUSY_TS_ARR_SIZE - 1; 468 + 469 + ts = &hdev->idle_busy_ts_arr[last_index]; 470 + 471 + overlap_cnt++; 472 + } 473 + 474 + total_busy_time_ms = DIV_ROUND_UP_ULL(total_busy_time_us, 475 + USEC_PER_MSEC); 476 + 477 + return DIV_ROUND_UP_ULL(total_busy_time_ms * 100, period_ms); 478 + } 479 + 464 480 /* 465 481 * hl_device_set_frequency - set the frequency of the device 466 482 * 467 483 * @hdev: pointer to habanalabs device structure 468 484 * @freq: the new frequency value 469 485 * 470 - * Change the frequency if needed. 471 - * We allose to set PLL to low only if there is no user process 472 - * Returns 0 if no change was done, otherwise returns 1; 486 + * Change the frequency if needed. 
This function has no protection against 487 + * concurrency, therefore it is assumed that the calling function has protected 488 + * itself against the case of calling this function from multiple threads with 489 + * different values 490 + * 491 + * Returns 0 if no change was done, otherwise returns 1 473 492 */ 474 493 int hl_device_set_frequency(struct hl_device *hdev, enum hl_pll_frequency freq) 475 494 { 476 - enum hl_pll_frequency old_freq = 477 - (freq == PLL_HIGH) ? PLL_LOW : PLL_HIGH; 478 - int ret; 479 - 480 - if (hdev->pm_mng_profile == PM_MANUAL) 495 + if ((hdev->pm_mng_profile == PM_MANUAL) || 496 + (hdev->curr_pll_profile == freq)) 481 497 return 0; 482 - 483 - ret = atomic_cmpxchg(&hdev->curr_pll_profile, old_freq, freq); 484 - if (ret == freq) 485 - return 0; 486 - 487 - /* 488 - * in case we want to lower frequency, check if device is not 489 - * opened. We must have a check here to workaround race condition with 490 - * hl_device_open 491 - */ 492 - if ((freq == PLL_LOW) && (atomic_read(&hdev->fd_open_cnt) > 0)) { 493 - atomic_set(&hdev->curr_pll_profile, PLL_HIGH); 494 - return 0; 495 - } 496 498 497 499 dev_dbg(hdev->dev, "Changing device frequency to %s\n", 498 500 freq == PLL_HIGH ? "high" : "low"); 499 501 500 502 hdev->asic_funcs->set_pll_profile(hdev, freq); 503 + 504 + hdev->curr_pll_profile = freq; 501 505 502 506 return 1; 503 507 } ··· 613 449 goto out; 614 450 } 615 451 616 - mutex_lock(&hdev->fd_open_cnt_lock); 617 - 618 - if (atomic_read(&hdev->fd_open_cnt) > 1) { 619 - dev_err(hdev->dev, 620 - "Failed to enable debug mode. 
More then a single user is using the device\n"); 621 - rc = -EPERM; 622 - goto unlock_fd_open_lock; 623 - } 624 - 625 452 hdev->in_debug = 1; 626 453 627 - unlock_fd_open_lock: 628 - mutex_unlock(&hdev->fd_open_cnt_lock); 629 454 out: 630 455 mutex_unlock(&hdev->debug_lock); 631 456 ··· 721 568 static void device_kill_open_processes(struct hl_device *hdev) 722 569 { 723 570 u16 pending_total, pending_cnt; 571 + struct hl_fpriv *hpriv; 724 572 struct task_struct *task = NULL; 725 573 726 574 if (hdev->pldm) ··· 729 575 else 730 576 pending_total = HL_PENDING_RESET_PER_SEC; 731 577 732 - pending_cnt = pending_total; 733 - 734 - /* Flush all processes that are inside hl_open */ 735 - mutex_lock(&hdev->fd_open_cnt_lock); 736 - 737 - while ((atomic_read(&hdev->fd_open_cnt)) && (pending_cnt)) { 738 - 739 - pending_cnt--; 740 - 741 - dev_info(hdev->dev, 742 - "Can't HARD reset, waiting for user to close FD\n"); 578 + /* Giving time for user to close FD, and for processes that are inside 579 + * hl_device_open to finish 580 + */ 581 + if (!list_empty(&hdev->fpriv_list)) 743 582 ssleep(1); 744 - } 745 583 746 - if (atomic_read(&hdev->fd_open_cnt)) { 747 - task = get_pid_task(hdev->user_ctx->hpriv->taskpid, 748 - PIDTYPE_PID); 584 + mutex_lock(&hdev->fpriv_list_lock); 585 + 586 + /* This section must be protected because we are dereferencing 587 + * pointers that are freed if the process exits 588 + */ 589 + list_for_each_entry(hpriv, &hdev->fpriv_list, dev_node) { 590 + task = get_pid_task(hpriv->taskpid, PIDTYPE_PID); 749 591 if (task) { 750 - dev_info(hdev->dev, "Killing user processes\n"); 592 + dev_info(hdev->dev, "Killing user process pid=%d\n", 593 + task_pid_nr(task)); 751 594 send_sig(SIGKILL, task, 1); 752 - msleep(100); 595 + usleep_range(1000, 10000); 753 596 754 597 put_task_struct(task); 755 598 } 756 599 } 600 + 601 + mutex_unlock(&hdev->fpriv_list_lock); 757 602 758 603 /* We killed the open users, but because the driver cleans up after the 759 604 * user 
contexts are closed (e.g. mmu mappings), we need to wait again ··· 762 609 763 610 pending_cnt = pending_total; 764 611 765 - while ((atomic_read(&hdev->fd_open_cnt)) && (pending_cnt)) { 612 + while ((!list_empty(&hdev->fpriv_list)) && (pending_cnt)) { 613 + dev_info(hdev->dev, 614 + "Waiting for all unmap operations to finish before hard reset\n"); 766 615 767 616 pending_cnt--; 768 617 769 618 ssleep(1); 770 619 } 771 620 772 - if (atomic_read(&hdev->fd_open_cnt)) 621 + if (!list_empty(&hdev->fpriv_list)) 773 622 dev_crit(hdev->dev, 774 623 "Going to hard reset with open user contexts\n"); 775 - 776 - mutex_unlock(&hdev->fd_open_cnt_lock); 777 - 778 624 } 779 625 780 626 static void device_hard_reset_pending(struct work_struct *work) ··· 781 629 struct hl_device_reset_work *device_reset_work = 782 630 container_of(work, struct hl_device_reset_work, reset_work); 783 631 struct hl_device *hdev = device_reset_work->hdev; 784 - 785 - device_kill_open_processes(hdev); 786 632 787 633 hl_device_reset(hdev, true, true); 788 634 ··· 829 679 /* This also blocks future CS/VM/JOB completion operations */ 830 680 hdev->disabled = true; 831 681 832 - /* 833 - * Flush anyone that is inside the critical section of enqueue 682 + /* Flush anyone that is inside the critical section of enqueue 834 683 * jobs to the H/W 835 684 */ 836 685 hdev->asic_funcs->hw_queues_lock(hdev); 837 686 hdev->asic_funcs->hw_queues_unlock(hdev); 687 + 688 + /* Flush anyone that is inside device open */ 689 + mutex_lock(&hdev->fpriv_list_lock); 690 + mutex_unlock(&hdev->fpriv_list_lock); 838 691 839 692 dev_err(hdev->dev, "Going to RESET device!\n"); 840 693 } ··· 889 736 /* Go over all the queues, release all CS and their jobs */ 890 737 hl_cs_rollback_all(hdev); 891 738 739 + /* Kill processes here after CS rollback. 
This is because the process 740 + * can't really exit until all its CSs are done, which is what we 741 + * do in cs rollback 742 + */ 743 + if (from_hard_reset_thread) 744 + device_kill_open_processes(hdev); 745 + 892 746 /* Release kernel context */ 893 747 if ((hard_reset) && (hl_ctx_put(hdev->kernel_ctx) == 1)) 894 748 hdev->kernel_ctx = NULL; ··· 914 754 for (i = 0 ; i < hdev->asic_prop.completion_queues_count ; i++) 915 755 hl_cq_reset(hdev, &hdev->completion_queue[i]); 916 756 757 + hdev->idle_busy_ts_idx = 0; 758 + hdev->idle_busy_ts_arr[0].busy_to_idle_ts = ktime_set(0, 0); 759 + hdev->idle_busy_ts_arr[0].idle_to_busy_ts = ktime_set(0, 0); 760 + 761 + if (hdev->cs_active_cnt) 762 + dev_crit(hdev->dev, "CS active cnt %d is not 0 during reset\n", 763 + hdev->cs_active_cnt); 764 + 765 + mutex_lock(&hdev->fpriv_list_lock); 766 + 917 767 /* Make sure the context switch phase will run again */ 918 - if (hdev->user_ctx) { 919 - atomic_set(&hdev->user_ctx->thread_ctx_switch_token, 1); 920 - hdev->user_ctx->thread_ctx_switch_wait_token = 0; 768 + if (hdev->compute_ctx) { 769 + atomic_set(&hdev->compute_ctx->thread_ctx_switch_token, 1); 770 + hdev->compute_ctx->thread_ctx_switch_wait_token = 0; 921 771 } 772 + 773 + mutex_unlock(&hdev->fpriv_list_lock); 922 774 923 775 /* Finished tear-down, starting to re-initialize */ 924 776 ··· 960 788 goto out_err; 961 789 } 962 790 963 - hdev->user_ctx = NULL; 791 + hdev->compute_ctx = NULL; 964 792 965 793 rc = hl_ctx_init(hdev, hdev->kernel_ctx, true); 966 794 if (rc) { ··· 1021 849 else 1022 850 hdev->soft_reset_cnt++; 1023 851 852 + dev_warn(hdev->dev, "Successfully finished resetting the device\n"); 853 + 1024 854 return 0; 1025 855 1026 856 out_err: ··· 1057 883 int hl_device_init(struct hl_device *hdev, struct class *hclass) 1058 884 { 1059 885 int i, rc, cq_ready_cnt; 886 + char *name; 887 + bool add_cdev_sysfs_on_err = false; 1060 888 1061 - /* Create device */ 1062 - rc = device_setup_cdev(hdev, hclass, hdev->id, 
&hl_ops); 889 + name = kasprintf(GFP_KERNEL, "hl%d", hdev->id / 2); 890 + if (!name) { 891 + rc = -ENOMEM; 892 + goto out_disabled; 893 + } 894 + 895 + /* Initialize cdev and device structures */ 896 + rc = device_init_cdev(hdev, hclass, hdev->id, &hl_ops, name, 897 + &hdev->cdev, &hdev->dev); 898 + 899 + kfree(name); 1063 900 1064 901 if (rc) 1065 902 goto out_disabled; 1066 903 904 + name = kasprintf(GFP_KERNEL, "hl_controlD%d", hdev->id / 2); 905 + if (!name) { 906 + rc = -ENOMEM; 907 + goto free_dev; 908 + } 909 + 910 + /* Initialize cdev and device structures for control device */ 911 + rc = device_init_cdev(hdev, hclass, hdev->id_control, &hl_ctrl_ops, 912 + name, &hdev->cdev_ctrl, &hdev->dev_ctrl); 913 + 914 + kfree(name); 915 + 916 + if (rc) 917 + goto free_dev; 918 + 1067 919 /* Initialize ASIC function pointers and perform early init */ 1068 920 rc = device_early_init(hdev); 1069 921 if (rc) 1070 - goto release_device; 922 + goto free_dev_ctrl; 1071 923 1072 924 /* 1073 925 * Start calling ASIC initialization. First S/W then H/W and finally ··· 1165 965 goto mmu_fini; 1166 966 } 1167 967 1168 - hdev->user_ctx = NULL; 968 + hdev->compute_ctx = NULL; 1169 969 1170 970 rc = hl_ctx_init(hdev, hdev->kernel_ctx, true); 1171 971 if (rc) { ··· 1180 980 goto release_ctx; 1181 981 } 1182 982 1183 - rc = hl_sysfs_init(hdev); 1184 - if (rc) { 1185 - dev_err(hdev->dev, "failed to initialize sysfs\n"); 1186 - goto free_cb_pool; 1187 - } 1188 - 1189 983 hl_debugfs_add_device(hdev); 1190 984 1191 985 if (hdev->asic_funcs->get_hw_state(hdev) == HL_DEVICE_HW_STATE_DIRTY) { ··· 1187 993 "H/W state is dirty, must reset before initializing\n"); 1188 994 hdev->asic_funcs->hw_fini(hdev, true); 1189 995 } 996 + 997 + /* 998 + * From this point, in case of an error, add char devices and create 999 + * sysfs nodes as part of the error flow, to allow debugging. 
1000 + */ 1001 + add_cdev_sysfs_on_err = true; 1190 1002 1191 1003 rc = hdev->asic_funcs->hw_init(hdev); 1192 1004 if (rc) { ··· 1230 1030 } 1231 1031 1232 1032 /* 1233 - * hl_hwmon_init must be called after device_late_init, because only 1033 + * Expose devices and sysfs nodes to user. 1034 + * From here there is no need to add char devices and create sysfs nodes 1035 + * in case of an error. 1036 + */ 1037 + add_cdev_sysfs_on_err = false; 1038 + rc = device_cdev_sysfs_add(hdev); 1039 + if (rc) { 1040 + dev_err(hdev->dev, 1041 + "Failed to add char devices and sysfs nodes\n"); 1042 + rc = 0; 1043 + goto out_disabled; 1044 + } 1045 + 1046 + /* 1047 + * hl_hwmon_init() must be called after device_late_init(), because only 1234 1048 * there we get the information from the device about which 1235 - * hwmon-related sensors the device supports 1049 + * hwmon-related sensors the device supports. 1050 + * Furthermore, it must be done after adding the device to the system. 1236 1051 */ 1237 1052 rc = hl_hwmon_init(hdev); 1238 1053 if (rc) { ··· 1263 1048 1264 1049 return 0; 1265 1050 1266 - free_cb_pool: 1267 - hl_cb_pool_fini(hdev); 1268 1051 release_ctx: 1269 1052 if (hl_ctx_put(hdev->kernel_ctx) != 1) 1270 1053 dev_err(hdev->dev, ··· 1281 1068 hdev->asic_funcs->sw_fini(hdev); 1282 1069 early_fini: 1283 1070 device_early_fini(hdev); 1284 - release_device: 1285 - device_destroy(hclass, hdev->dev->devt); 1286 - cdev_del(&hdev->cdev); 1071 + free_dev_ctrl: 1072 + kfree(hdev->dev_ctrl); 1073 + free_dev: 1074 + kfree(hdev->dev); 1287 1075 out_disabled: 1288 1076 hdev->disabled = true; 1077 + if (add_cdev_sysfs_on_err) 1078 + device_cdev_sysfs_add(hdev); 1289 1079 if (hdev->pdev) 1290 1080 dev_err(&hdev->pdev->dev, 1291 1081 "Failed to initialize hl%d. Device is NOT usable !\n", 1292 - hdev->id); 1082 + hdev->id / 2); 1293 1083 else 1294 1084 pr_err("Failed to initialize hl%d. 
Device is NOT usable !\n", 1295 - hdev->id); 1085 + hdev->id / 2); 1296 1086 1297 1087 return rc; 1298 1088 } ··· 1336 1120 /* Mark device as disabled */ 1337 1121 hdev->disabled = true; 1338 1122 1339 - /* 1340 - * Flush anyone that is inside the critical section of enqueue 1123 + /* Flush anyone that is inside the critical section of enqueue 1341 1124 * jobs to the H/W 1342 1125 */ 1343 1126 hdev->asic_funcs->hw_queues_lock(hdev); 1344 1127 hdev->asic_funcs->hw_queues_unlock(hdev); 1345 1128 1346 - hdev->hard_reset_pending = true; 1129 + /* Flush anyone that is inside device open */ 1130 + mutex_lock(&hdev->fpriv_list_lock); 1131 + mutex_unlock(&hdev->fpriv_list_lock); 1347 1132 1348 - device_kill_open_processes(hdev); 1133 + hdev->hard_reset_pending = true; 1349 1134 1350 1135 hl_hwmon_fini(hdev); 1351 1136 1352 1137 device_late_fini(hdev); 1353 1138 1354 1139 hl_debugfs_remove_device(hdev); 1355 - 1356 - hl_sysfs_fini(hdev); 1357 1140 1358 1141 /* 1359 1142 * Halt the engines and disable interrupts so we won't get any more ··· 1363 1148 1364 1149 /* Go over all the queues, release all CS and their jobs */ 1365 1150 hl_cs_rollback_all(hdev); 1151 + 1152 + /* Kill processes here after CS rollback. This is because the process 1153 + * can't really exit until all its CSs are done, which is what we 1154 + * do in cs rollback 1155 + */ 1156 + device_kill_open_processes(hdev); 1366 1157 1367 1158 hl_cb_pool_fini(hdev); 1368 1159 ··· 1396 1175 1397 1176 device_early_fini(hdev); 1398 1177 1399 - /* Hide device from user */ 1400 - device_destroy(hdev->dev->class, hdev->dev->devt); 1401 - cdev_del(&hdev->cdev); 1178 + /* Hide devices and sysfs nodes from user */ 1179 + device_cdev_sysfs_del(hdev); 1402 1180 1403 1181 pr_info("removed device successfully\n"); 1404 1182 }
+63 -32
drivers/misc/habanalabs/goya/goya.c
··· 9 9 #include "include/hw_ip/mmu/mmu_general.h" 10 10 #include "include/hw_ip/mmu/mmu_v1_0.h" 11 11 #include "include/goya/asic_reg/goya_masks.h" 12 + #include "include/goya/goya_reg_map.h" 12 13 13 14 #include <linux/pci.h> 14 15 #include <linux/genalloc.h> ··· 42 41 * PQ, CQ and CP are not secured. 43 42 * PQ, CB and the data are on the SRAM/DRAM. 44 43 * 45 - * Since QMAN DMA is secured, KMD is parsing the DMA CB: 46 - * - KMD checks DMA pointer 44 + * Since QMAN DMA is secured, the driver is parsing the DMA CB: 45 + * - checks DMA pointer 47 46 * - WREG, MSG_PROT are not allowed. 48 47 * - MSG_LONG/SHORT are allowed. 49 48 * ··· 56 55 * QMAN DMA: PQ, CQ and CP are secured. 57 56 * MMU is set to bypass on the Secure props register of the QMAN. 58 57 * The reasons we don't enable MMU for PQ, CQ and CP are: 59 - * - PQ entry is in kernel address space and KMD doesn't map it. 58 + * - PQ entry is in kernel address space and the driver doesn't map it. 60 59 * - CP writes to MSIX register and to kernel address space (completion 61 60 * queue). 62 61 * 63 - * DMA is not secured but because CP is secured, KMD still needs to parse the 64 - * CB, but doesn't need to check the DMA addresses. 62 + * DMA is not secured but because CP is secured, the driver still needs to parse 63 + * the CB, but doesn't need to check the DMA addresses. 65 64 * 66 - * For QMAN DMA 0, DMA is also secured because only KMD uses this DMA and KMD 67 - * doesn't map memory in MMU. 65 + * For QMAN DMA 0, DMA is also secured because only the driver uses this DMA and 66 + * the driver doesn't map memory in MMU. 
68 67 * 69 68 * QMAN TPC/MME: PQ, CQ and CP aren't secured (no change from MMU disabled mode) 70 69 * ··· 336 335 337 336 for (i = 0 ; i < NUMBER_OF_EXT_HW_QUEUES ; i++) { 338 337 prop->hw_queues_props[i].type = QUEUE_TYPE_EXT; 339 - prop->hw_queues_props[i].kmd_only = 0; 338 + prop->hw_queues_props[i].driver_only = 0; 340 339 } 341 340 342 341 for (; i < NUMBER_OF_EXT_HW_QUEUES + NUMBER_OF_CPU_HW_QUEUES ; i++) { 343 342 prop->hw_queues_props[i].type = QUEUE_TYPE_CPU; 344 - prop->hw_queues_props[i].kmd_only = 1; 343 + prop->hw_queues_props[i].driver_only = 1; 345 344 } 346 345 347 346 for (; i < NUMBER_OF_EXT_HW_QUEUES + NUMBER_OF_CPU_HW_QUEUES + 348 347 NUMBER_OF_INT_HW_QUEUES; i++) { 349 348 prop->hw_queues_props[i].type = QUEUE_TYPE_INT; 350 - prop->hw_queues_props[i].kmd_only = 0; 349 + prop->hw_queues_props[i].driver_only = 0; 351 350 } 352 351 353 352 for (; i < HL_MAX_QUEUES; i++) ··· 1007 1006 1008 1007 eq = &hdev->event_queue; 1009 1008 1010 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_0, 1011 - lower_32_bits(cpu_pq->bus_address)); 1012 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_1, 1013 - upper_32_bits(cpu_pq->bus_address)); 1009 + WREG32(mmCPU_PQ_BASE_ADDR_LOW, lower_32_bits(cpu_pq->bus_address)); 1010 + WREG32(mmCPU_PQ_BASE_ADDR_HIGH, upper_32_bits(cpu_pq->bus_address)); 1014 1011 1015 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_2, lower_32_bits(eq->bus_address)); 1016 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_3, upper_32_bits(eq->bus_address)); 1012 + WREG32(mmCPU_EQ_BASE_ADDR_LOW, lower_32_bits(eq->bus_address)); 1013 + WREG32(mmCPU_EQ_BASE_ADDR_HIGH, upper_32_bits(eq->bus_address)); 1017 1014 1018 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_8, 1015 + WREG32(mmCPU_CQ_BASE_ADDR_LOW, 1019 1016 lower_32_bits(VA_CPU_ACCESSIBLE_MEM_ADDR)); 1020 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_9, 1017 + WREG32(mmCPU_CQ_BASE_ADDR_HIGH, 1021 1018 upper_32_bits(VA_CPU_ACCESSIBLE_MEM_ADDR)); 1022 1019 1023 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_5, HL_QUEUE_SIZE_IN_BYTES); 1024 - 
WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_4, HL_EQ_SIZE_IN_BYTES); 1025 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_10, HL_CPU_ACCESSIBLE_MEM_SIZE); 1020 + WREG32(mmCPU_PQ_LENGTH, HL_QUEUE_SIZE_IN_BYTES); 1021 + WREG32(mmCPU_EQ_LENGTH, HL_EQ_SIZE_IN_BYTES); 1022 + WREG32(mmCPU_CQ_LENGTH, HL_CPU_ACCESSIBLE_MEM_SIZE); 1026 1023 1027 1024 /* Used for EQ CI */ 1028 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_6, 0); 1025 + WREG32(mmCPU_EQ_CI, 0); 1029 1026 1030 1027 WREG32(mmCPU_IF_PF_PQ_PI, 0); 1031 1028 1032 - WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_7, PQ_INIT_STATUS_READY_FOR_CP); 1029 + WREG32(mmCPU_PQ_INIT_STATUS, PQ_INIT_STATUS_READY_FOR_CP); 1033 1030 1034 1031 WREG32(mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR, 1035 1032 GOYA_ASYNC_EVENT_ID_PI_UPDATE); 1036 1033 1037 1034 err = hl_poll_timeout( 1038 1035 hdev, 1039 - mmPSOC_GLOBAL_CONF_SCRATCHPAD_7, 1036 + mmCPU_PQ_INIT_STATUS, 1040 1037 status, 1041 1038 (status == PQ_INIT_STATUS_READY_FOR_HOST), 1042 1039 1000, ··· 2062 2063 goya->hw_cap_initialized &= ~HW_CAP_MSIX; 2063 2064 } 2064 2065 2066 + static void goya_enable_timestamp(struct hl_device *hdev) 2067 + { 2068 + /* Disable the timestamp counter */ 2069 + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 0); 2070 + 2071 + /* Zero the lower/upper parts of the 64-bit counter */ 2072 + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0xC, 0); 2073 + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0x8, 0); 2074 + 2075 + /* Enable the counter */ 2076 + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 1); 2077 + } 2078 + 2079 + static void goya_disable_timestamp(struct hl_device *hdev) 2080 + { 2081 + /* Disable the timestamp counter */ 2082 + WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 0); 2083 + } 2084 + 2065 2085 static void goya_halt_engines(struct hl_device *hdev, bool hard_reset) 2066 2086 { 2067 2087 u32 wait_timeout_ms, cpu_timeout_ms; ··· 2120 2102 2121 2103 goya_disable_external_queues(hdev); 2122 2104 goya_disable_internal_queues(hdev); 2105 + 2106 + goya_disable_timestamp(hdev); 2123 2107 2124 2108 if 
(hard_reset) {
		goya_disable_msix(hdev);
···

	switch (fwc) {
	case FW_COMP_UBOOT:
-		ver_off = RREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_29);
+		ver_off = RREG32(mmUBOOT_VER_OFFSET);
		dest = hdev->asic_prop.uboot_ver;
		name = "U-Boot";
		break;
	case FW_COMP_PREBOOT:
-		ver_off = RREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_28);
+		ver_off = RREG32(mmPREBOOT_VER_OFFSET);
		dest = hdev->asic_prop.preboot_ver;
		name = "Preboot";
		break;
···
	 * we need to reset the chip before doing H/W init. This register is
	 * cleared by the H/W upon H/W reset
	 */
-	WREG32(mmPSOC_GLOBAL_CONF_APP_STATUS, HL_DEVICE_HW_STATE_DIRTY);
+	WREG32(mmHW_STATE, HL_DEVICE_HW_STATE_DIRTY);

	rc = goya_init_cpu(hdev, GOYA_CPU_TIMEOUT_USEC);
	if (rc) {
···
	goya_init_mme_qmans(hdev);

	goya_init_tpc_qmans(hdev);
+
+	goya_enable_timestamp(hdev);

	/* MSI-X must be enabled before CPU queues are initialized */
	rc = goya_enable_msix(hdev);
···

	if (!hdev->asic_funcs->is_device_idle(hdev, NULL, NULL)) {
		dev_err_ratelimited(hdev->dev,
-			"Can't send KMD job on QMAN0 because the device is not idle\n");
+			"Can't send driver job on QMAN0 because the device is not idle\n");
		return -EBUSY;
	}
···

void goya_update_eq_ci(struct hl_device *hdev, u32 val)
{
-	WREG32(mmPSOC_GLOBAL_CONF_SCRATCHPAD_6, val);
+	WREG32(mmCPU_EQ_CI, val);
}

void goya_restore_phase_topology(struct hl_device *hdev)
···
	struct goya_device *goya = hdev->asic_specific;

	goya->events_stat[event_type]++;
+	goya->events_stat_aggregate[event_type]++;

	switch (event_type) {
	case GOYA_ASYNC_EVENT_ID_PCIE_IF:
···
	}
}

-void *goya_get_events_stat(struct hl_device *hdev, u32 *size)
+void *goya_get_events_stat(struct hl_device *hdev, bool aggregate, u32 *size)
{
	struct goya_device *goya = hdev->asic_specific;

-	*size = (u32) sizeof(goya->events_stat);
+	if (aggregate) {
+		*size = (u32) sizeof(goya->events_stat_aggregate);
+		return goya->events_stat_aggregate;
+	}

+	*size = (u32) sizeof(goya->events_stat);
	return goya->events_stat;
}
···
		prop->dram_end_address = prop->dram_base_address + dram_size;
	}

+	if (!strlen(prop->armcp_info.card_name))
+		strncpy(prop->armcp_info.card_name, GOYA_DEFAULT_CARD_NAME,
+				CARD_NAME_MAX_LEN);
+
	return 0;
}
···

static enum hl_device_hw_state goya_get_hw_state(struct hl_device *hdev)
{
-	return RREG32(mmPSOC_GLOBAL_CONF_APP_STATUS);
+	return RREG32(mmHW_STATE);
}

static const struct hl_asic_funcs goya_funcs = {
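The new aggregate counters above follow a simple pattern: every event bumps both a per-reset array and a lifetime array, and only the first is cleared on reset. A minimal standalone sketch of that pattern (the names and the array size are illustrative stand-ins, not the driver's symbols):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define EVENT_ID_SIZE 8 /* stand-in for GOYA_ASYNC_EVENT_ID_SIZE */

static uint32_t events_stat[EVENT_ID_SIZE];
static uint32_t events_stat_aggregate[EVENT_ID_SIZE];

/* Every event increments both counters, as in goya_handle_eqe() above */
static void record_event(unsigned int event_type)
{
	events_stat[event_type]++;
	events_stat_aggregate[event_type]++;
}

/* Only the per-reset counters are cleared; the aggregate survives resets */
static void reset_events_stat(void)
{
	memset(events_stat, 0, sizeof(events_stat));
}

/* Mirrors the new bool-selected return in goya_get_events_stat() */
static uint32_t *get_events_stat(int aggregate)
{
	return aggregate ? events_stat_aggregate : events_stat;
}
```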
+11 -8
drivers/misc/habanalabs/goya/goyaP.h
···

#define DRAM_PHYS_DEFAULT_SIZE		0x100000000ull	/* 4GB */

+#define GOYA_DEFAULT_CARD_NAME		"HL1000"
+
/* DRAM Memory Map */

#define CPU_FW_IMAGE_SIZE	0x10000000	/* 256MB */
···
					MMU_PAGE_TABLES_SIZE)
#define MMU_CACHE_MNG_ADDR	(MMU_DRAM_DEFAULT_PAGE_ADDR + \
					MMU_DRAM_DEFAULT_PAGE_SIZE)
-#define DRAM_KMD_END_ADDR	(MMU_CACHE_MNG_ADDR + \
+#define DRAM_DRIVER_END_ADDR	(MMU_CACHE_MNG_ADDR + \
					MMU_CACHE_MNG_SIZE)

#define DRAM_BASE_ADDR_USER	0x20000000

-#if (DRAM_KMD_END_ADDR > DRAM_BASE_ADDR_USER)
-#error "KMD must reserve no more than 512MB"
+#if (DRAM_DRIVER_END_ADDR > DRAM_BASE_ADDR_USER)
+#error "Driver must reserve no more than 512MB"
#endif

/*
- * SRAM Memory Map for KMD
+ * SRAM Memory Map for Driver
 *
- * KMD occupies KMD_SRAM_SIZE bytes from the start of SRAM. It is used for
+ * Driver occupies DRIVER_SRAM_SIZE bytes from the start of SRAM. It is used for
 * MME/TPC QMANs
 *
 */
···
#define TPC7_QMAN_BASE_OFFSET	(TPC6_QMAN_BASE_OFFSET + \
				(TPC_QMAN_LENGTH * QMAN_PQ_ENTRY_SIZE))

-#define SRAM_KMD_RES_OFFSET	(TPC7_QMAN_BASE_OFFSET + \
+#define SRAM_DRIVER_RES_OFFSET	(TPC7_QMAN_BASE_OFFSET + \
				(TPC_QMAN_LENGTH * QMAN_PQ_ENTRY_SIZE))

-#if (SRAM_KMD_RES_OFFSET >= GOYA_KMD_SRAM_RESERVED_SIZE_FROM_START)
+#if (SRAM_DRIVER_RES_OFFSET >= GOYA_KMD_SRAM_RESERVED_SIZE_FROM_START)
#error "MME/TPC QMANs SRAM space exceeds limit"
#endif
···

	u64		ddr_bar_cur_addr;
	u32		events_stat[GOYA_ASYNC_EVENT_ID_SIZE];
+	u32		events_stat_aggregate[GOYA_ASYNC_EVENT_ID_SIZE];
	u32		hw_cap_initialized;
	u8		device_cpu_mmu_mappings_done;
};
···
int goya_resume(struct hl_device *hdev);

void goya_handle_eqe(struct hl_device *hdev, struct hl_eq_entry *eq_entry);
-void *goya_get_events_stat(struct hl_device *hdev, u32 *size);
+void *goya_get_events_stat(struct hl_device *hdev, bool aggregate, u32 *size);

void goya_add_end_of_cb_packets(struct hl_device *hdev, u64 kernel_address,
				u32 len, u64 cq_addr, u32 cq_val, u32 msix_vec);
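The DRAM_DRIVER_END_ADDR budget above is enforced at compile time with the preprocessor (`#if … #error`). Outside the kernel the same guard can be written with C11 `_Static_assert`; a minimal sketch with made-up sizes (the DRIVER_RESERVED_* and USER_REGION_START constants are illustrative, not the driver's values):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical memory-map constants, standing in for the header's
 * MMU_CACHE_MNG_ADDR / DRAM_BASE_ADDR_USER budget check.
 */
#define DRIVER_RESERVED_START	0x00000000ull
#define DRIVER_RESERVED_SIZE	0x1F000000ull	/* under 512MB */
#define DRIVER_RESERVED_END	(DRIVER_RESERVED_START + DRIVER_RESERVED_SIZE)
#define USER_REGION_START	0x20000000ull	/* 512MB */

/* Build fails here if the driver region ever grows past the user base,
 * exactly like the #if/#error guard in goyaP.h
 */
_Static_assert(DRIVER_RESERVED_END <= USER_REGION_START,
	       "Driver must reserve no more than 512MB");

static uint64_t driver_reserved_end(void)
{
	return DRIVER_RESERVED_END;
}
```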
+64 -25
drivers/misc/habanalabs/goya/goya_coresight.c
···

#define GOYA_PLDM_CORESIGHT_TIMEOUT_USEC	(CORESIGHT_TIMEOUT_USEC * 100)

+#define SPMU_SECTION_SIZE		DMA_CH_0_CS_SPMU_MAX_OFFSET
+#define SPMU_EVENT_TYPES_OFFSET		0x400
+#define SPMU_MAX_COUNTERS		6
+
static u64 debug_stm_regs[GOYA_STM_LAST + 1] = {
	[GOYA_STM_CPU]		= mmCPU_STM_BASE,
	[GOYA_STM_DMA_CH_0_CS]	= mmDMA_CH_0_CS_STM_BASE,
···
				struct hl_debug_params *params)
{
	struct hl_debug_params_stm *input;
-	u64 base_reg = debug_stm_regs[params->reg_idx] - CFG_BASE;
+	u64 base_reg;
	int rc;
+
+	if (params->reg_idx >= ARRAY_SIZE(debug_stm_regs)) {
+		dev_err(hdev->dev, "Invalid register index in STM\n");
+		return -EINVAL;
+	}
+
+	base_reg = debug_stm_regs[params->reg_idx] - CFG_BASE;

	WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK);
···
				struct hl_debug_params *params)
{
	struct hl_debug_params_etf *input;
-	u64 base_reg = debug_etf_regs[params->reg_idx] - CFG_BASE;
+	u64 base_reg;
	u32 val;
	int rc;
+
+	if (params->reg_idx >= ARRAY_SIZE(debug_etf_regs)) {
+		dev_err(hdev->dev, "Invalid register index in ETF\n");
+		return -EINVAL;
+	}
+
+	base_reg = debug_etf_regs[params->reg_idx] - CFG_BASE;

	WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK);
···
static int goya_config_funnel(struct hl_device *hdev,
				struct hl_debug_params *params)
{
-	WREG32(debug_funnel_regs[params->reg_idx] - CFG_BASE + 0xFB0,
-			CORESIGHT_UNLOCK);
+	u64 base_reg;

-	WREG32(debug_funnel_regs[params->reg_idx] - CFG_BASE,
-			params->enable ? 0x33F : 0);
+	if (params->reg_idx >= ARRAY_SIZE(debug_funnel_regs)) {
+		dev_err(hdev->dev, "Invalid register index in FUNNEL\n");
+		return -EINVAL;
+	}
+
+	base_reg = debug_funnel_regs[params->reg_idx] - CFG_BASE;
+
+	WREG32(base_reg + 0xFB0, CORESIGHT_UNLOCK);
+
+	WREG32(base_reg, params->enable ? 0x33F : 0);

	return 0;
}
···
				struct hl_debug_params *params)
{
	struct hl_debug_params_bmon *input;
-	u64 base_reg = debug_bmon_regs[params->reg_idx] - CFG_BASE;
+	u64 base_reg;
	u32 pcie_base = 0;
+
+	if (params->reg_idx >= ARRAY_SIZE(debug_bmon_regs)) {
+		dev_err(hdev->dev, "Invalid register index in BMON\n");
+		return -EINVAL;
+	}
+
+	base_reg = debug_bmon_regs[params->reg_idx] - CFG_BASE;

	WREG32(base_reg + 0x104, 1);
···
static int goya_config_spmu(struct hl_device *hdev,
				struct hl_debug_params *params)
{
-	u64 base_reg = debug_spmu_regs[params->reg_idx] - CFG_BASE;
+	u64 base_reg;
	struct hl_debug_params_spmu *input = params->input;
	u64 *output;
	u32 output_arr_len;
···
	u32 overflow_idx;
	u32 cycle_cnt_idx;
	int i;
+
+	if (params->reg_idx >= ARRAY_SIZE(debug_spmu_regs)) {
+		dev_err(hdev->dev, "Invalid register index in SPMU\n");
+		return -EINVAL;
+	}
+
+	base_reg = debug_spmu_regs[params->reg_idx] - CFG_BASE;

	if (params->enable) {
		input = params->input;
···

		if (input->event_types_num < 3) {
			dev_err(hdev->dev,
-				"not enough values for SPMU enable\n");
+				"not enough event types values for SPMU enable\n");
+			return -EINVAL;
+		}
+
+		if (input->event_types_num > SPMU_MAX_COUNTERS) {
+			dev_err(hdev->dev,
+				"too many event types values for SPMU enable\n");
			return -EINVAL;
		}
···
		WREG32(base_reg + 0xE04, 0x41013040);

		for (i = 0 ; i < input->event_types_num ; i++)
-			WREG32(base_reg + 0x400 + i * 4, input->event_types[i]);
+			WREG32(base_reg + SPMU_EVENT_TYPES_OFFSET + i * 4,
+				input->event_types[i]);

		WREG32(base_reg + 0xE04, 0x41013041);
		WREG32(base_reg + 0xC00, 0x8000003F);
···
		if (output_arr_len < 3) {
			dev_err(hdev->dev,
				"not enough values for SPMU disable\n");
+			return -EINVAL;
+		}
+
+		if (events_num > SPMU_MAX_COUNTERS) {
+			dev_err(hdev->dev,
+				"too many events values for SPMU disable\n");
			return -EINVAL;
		}
···
	return 0;
}

-static int goya_config_timestamp(struct hl_device *hdev,
-				struct hl_debug_params *params)
-{
-	WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 0);
-	if (params->enable) {
-		WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0xC, 0);
-		WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE + 0x8, 0);
-		WREG32(mmPSOC_TIMESTAMP_BASE - CFG_BASE, 1);
-	}
-
-	return 0;
-}
-
int goya_debug_coresight(struct hl_device *hdev, void *data)
{
	struct hl_debug_params *params = data;
	u32 val;
-	int rc;
+	int rc = 0;

	switch (params->op) {
	case HL_DEBUG_OP_STM:
···
		rc = goya_config_spmu(hdev, params);
		break;
	case HL_DEBUG_OP_TIMESTAMP:
-		rc = goya_config_timestamp(hdev, params);
+		/* Do nothing as this opcode is deprecated */
		break;

	default:
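Each config function above now validates `params->reg_idx` against its register table before indexing it. The guard in isolation, using a hypothetical table and names in place of `debug_stm_regs[]` and friends:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Same idiom as the kernel's ARRAY_SIZE() */
#define ARRAY_SIZE(arr) (sizeof(arr) / sizeof((arr)[0]))

/* Hypothetical register table standing in for debug_stm_regs[] etc. */
static const uint64_t demo_regs[] = { 0x1000, 0x2000, 0x3000 };

/* Return the base register for idx, or -22 (-EINVAL) when idx is out
 * of range -- mirroring the new reg_idx checks in the diff.
 */
static int64_t demo_get_base(size_t idx)
{
	if (idx >= ARRAY_SIZE(demo_regs))
		return -22; /* -EINVAL */

	return (int64_t)demo_regs[idx];
}
```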
+109
drivers/misc/habanalabs/goya/goya_hwmgr.c
···
	return sprintf(buf, "%lu\n", value);
}

+static ssize_t pm_mng_profile_show(struct device *dev,
+		struct device_attribute *attr, char *buf)
+{
+	struct hl_device *hdev = dev_get_drvdata(dev);
+
+	if (hl_device_disabled_or_in_reset(hdev))
+		return -ENODEV;
+
+	return sprintf(buf, "%s\n",
+			(hdev->pm_mng_profile == PM_AUTO) ? "auto" :
+			(hdev->pm_mng_profile == PM_MANUAL) ? "manual" :
+			"unknown");
+}
+
+static ssize_t pm_mng_profile_store(struct device *dev,
+		struct device_attribute *attr, const char *buf, size_t count)
+{
+	struct hl_device *hdev = dev_get_drvdata(dev);
+
+	if (hl_device_disabled_or_in_reset(hdev)) {
+		count = -ENODEV;
+		goto out;
+	}
+
+	mutex_lock(&hdev->fpriv_list_lock);
+
+	if (hdev->compute_ctx) {
+		dev_err(hdev->dev,
+			"Can't change PM profile while compute context is opened on the device\n");
+		count = -EPERM;
+		goto unlock_mutex;
+	}
+
+	if (strncmp("auto", buf, strlen("auto")) == 0) {
+		/* Make sure we are in LOW PLL when changing modes */
+		if (hdev->pm_mng_profile == PM_MANUAL) {
+			hdev->curr_pll_profile = PLL_HIGH;
+			hl_device_set_frequency(hdev, PLL_LOW);
+			hdev->pm_mng_profile = PM_AUTO;
+		}
+	} else if (strncmp("manual", buf, strlen("manual")) == 0) {
+		if (hdev->pm_mng_profile == PM_AUTO) {
+			/* Must release the lock because the work thread also
+			 * takes this lock. But before we release it, set
+			 * the mode to manual so nothing will change if a user
+			 * suddenly opens the device
+			 */
+			hdev->pm_mng_profile = PM_MANUAL;
+
+			mutex_unlock(&hdev->fpriv_list_lock);
+
+			/* Flush the current work so we can return to the user
+			 * knowing that he is the only one changing frequencies
+			 */
+			flush_delayed_work(&hdev->work_freq);
+
+			return count;
+		}
+	} else {
+		dev_err(hdev->dev, "value should be auto or manual\n");
+		count = -EINVAL;
+	}
+
+unlock_mutex:
+	mutex_unlock(&hdev->fpriv_list_lock);
+out:
+	return count;
+}
+
+static ssize_t high_pll_show(struct device *dev, struct device_attribute *attr,
+				char *buf)
+{
+	struct hl_device *hdev = dev_get_drvdata(dev);
+
+	if (hl_device_disabled_or_in_reset(hdev))
+		return -ENODEV;
+
+	return sprintf(buf, "%u\n", hdev->high_pll);
+}
+
+static ssize_t high_pll_store(struct device *dev, struct device_attribute *attr,
+				const char *buf, size_t count)
+{
+	struct hl_device *hdev = dev_get_drvdata(dev);
+	long value;
+	int rc;
+
+	if (hl_device_disabled_or_in_reset(hdev)) {
+		count = -ENODEV;
+		goto out;
+	}
+
+	rc = kstrtoul(buf, 0, &value);
+
+	if (rc) {
+		count = -EINVAL;
+		goto out;
+	}
+
+	hdev->high_pll = value;
+
+out:
+	return count;
+}
+
+static DEVICE_ATTR_RW(high_pll);
static DEVICE_ATTR_RW(ic_clk);
static DEVICE_ATTR_RO(ic_clk_curr);
static DEVICE_ATTR_RW(mme_clk);
static DEVICE_ATTR_RO(mme_clk_curr);
+static DEVICE_ATTR_RW(pm_mng_profile);
static DEVICE_ATTR_RW(tpc_clk);
static DEVICE_ATTR_RO(tpc_clk_curr);

static struct attribute *goya_dev_attrs[] = {
+	&dev_attr_high_pll.attr,
	&dev_attr_ic_clk.attr,
	&dev_attr_ic_clk_curr.attr,
	&dev_attr_mme_clk.attr,
	&dev_attr_mme_clk_curr.attr,
+	&dev_attr_pm_mng_profile.attr,
	&dev_attr_tpc_clk.attr,
	&dev_attr_tpc_clk_curr.attr,
	NULL,
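The new pm_mng_profile store handler matches the written value with `strncmp()` against the keyword length, so the trailing newline from `echo auto > pm_mng_profile` still matches. The parsing step in isolation (`parse_pm_profile` and the enum values are illustrative, not driver symbols):

```c
#include <assert.h>
#include <string.h>

enum pm_profile { PM_AUTO, PM_MANUAL, PM_INVALID };

/* Mirrors the store handler's string matching: sysfs writes usually
 * carry a trailing newline, so only the keyword prefix is compared.
 */
static enum pm_profile parse_pm_profile(const char *buf)
{
	if (strncmp("auto", buf, strlen("auto")) == 0)
		return PM_AUTO;

	if (strncmp("manual", buf, strlen("manual")) == 0)
		return PM_MANUAL;

	return PM_INVALID;
}
```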
+84 -45
drivers/misc/habanalabs/habanalabs.h
···

#define HL_PCI_ELBI_TIMEOUT_MSEC	10 /* 10ms */

+#define HL_SIM_MAX_TIMEOUT_US	10000000 /* 10s */
+
#define HL_MAX_QUEUES		128

#define HL_MAX_JOBS_PER_CS	64

/* MUST BE POWER OF 2 and larger than 1 */
#define HL_MAX_PENDING_CS	64
+
+#define HL_IDLE_BUSY_TS_ARR_SIZE	4096

/* Memory */
#define MEM_HASH_TABLE_BITS	7 /* 1 << 7 buckets */
···
/**
 * struct hw_queue_properties - queue information.
 * @type: queue type.
- * @kmd_only: true if only KMD is allowed to send a job to this queue, false
- *            otherwise.
+ * @driver_only: true if only the driver is allowed to send a job to this queue,
+ *               false otherwise.
 */
struct hw_queue_properties {
	enum hl_queue_type	type;
-	u8			kmd_only;
+	u8			driver_only;
};
···
#define HL_EQ_LENGTH			64
#define HL_EQ_SIZE_IN_BYTES		(HL_EQ_LENGTH * HL_EQ_ENTRY_SIZE)

-/* KMD <-> ArmCP shared memory size */
+/* Host <-> ArmCP shared memory size */
#define HL_CPU_ACCESSIBLE_MEM_SIZE	SZ_2M

/**
···

/**
 * enum hl_pm_mng_profile - power management profile.
- * @PM_AUTO: internal clock is set by KMD.
+ * @PM_AUTO: internal clock is set by the Linux driver.
 * @PM_MANUAL: internal clock is set by the user.
 * @PM_LAST: last power management type.
 */
···
			struct hl_eq_entry *eq_entry);
	void (*set_pll_profile)(struct hl_device *hdev,
			enum hl_pll_frequency freq);
-	void* (*get_events_stat)(struct hl_device *hdev, u32 *size);
+	void* (*get_events_stat)(struct hl_device *hdev, bool aggregate,
+				u32 *size);
	u64 (*read_pte)(struct hl_device *hdev, u64 addr);
	void (*write_pte)(struct hl_device *hdev, u64 addr, u64 val);
	void (*mmu_invalidate_cache)(struct hl_device *hdev, bool is_hard);
···
 *		descriptor (hl_vm_phys_pg_list or hl_userptr).
 * @mmu_phys_hash: holds a mapping from physical address to pgt_info structure.
 * @mmu_shadow_hash: holds a mapping from shadow address to pgt_info structure.
- * @hpriv: pointer to the private (KMD) data of the process (fd).
+ * @hpriv: pointer to the private (Kernel Driver) data of the process (fd).
 * @hdev: pointer to the device structure.
 * @refcount: reference counter for the context. Context is released only when
 *	this hits 0l. It is incremented on CS and CS_WAIT.
···
 *			execution phase before the context switch phase
 *			has finished.
 * @asid: context's unique address space ID in the device's MMU.
+ * @handle: context's opaque handle for user
 */
struct hl_ctx {
	DECLARE_HASHTABLE(mem_hash, MEM_HASH_TABLE_BITS);
···
	atomic_t		thread_ctx_switch_token;
	u32			thread_ctx_switch_wait_token;
	u32			asid;
+	u32			handle;
};

/**
···
 * @hdev: habanalabs device structure.
 * @filp: pointer to the given file structure.
 * @taskpid: current process ID.
- * @ctx: current executing context.
+ * @ctx: current executing context. TODO: remove for multiple ctx per process
 * @ctx_mgr: context manager to handle multiple context for this FD.
 * @cb_mgr: command buffer manager to handle multiple buffers for this FD.
 * @debugfs_list: list of relevant ASIC debugfs.
+ * @dev_node: node in the device list of file private data
 * @refcount: number of related contexts.
 * @restore_phase_mutex: lock for context switch and restore phase.
+ * @is_control: true for control device, false otherwise
 */
struct hl_fpriv {
	struct hl_device	*hdev;
	struct file		*filp;
	struct pid		*taskpid;
-	struct hl_ctx		*ctx; /* TODO: remove for multiple ctx */
+	struct hl_ctx		*ctx;
	struct hl_ctx_mgr	ctx_mgr;
	struct hl_cb_mgr	cb_mgr;
	struct list_head	debugfs_list;
+	struct list_head	dev_node;
	struct kref		refcount;
	struct mutex		restore_phase_mutex;
+	u8			is_control;
};

···
 */

/* Theoretical limit only. A single host can only contain up to 4 or 8 PCIe
- * x16 cards. In extereme cases, there are hosts that can accommodate 16 cards
+ * x16 cards. In extreme cases, there are hosts that can accommodate 16 cards.
 */
#define HL_MAX_MINORS	256
···
	WREG32(mm##reg, (RREG32(mm##reg) & ~REG_FIELD_MASK(reg, field)) | \
			(val) << REG_FIELD_SHIFT(reg, field))

+/* Timeout should be longer when working with simulator but cap the
+ * increased timeout to some maximum
+ */
#define hl_poll_timeout(hdev, addr, val, cond, sleep_us, timeout_us) \
({ \
	ktime_t __timeout; \
-	/* timeout should be longer when working with simulator */ \
	if (hdev->pdev) \
		__timeout = ktime_add_us(ktime_get(), timeout_us); \
	else \
-		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+		__timeout = ktime_add_us(ktime_get(),\
+				min((u64)(timeout_us * 10), \
+					(u64) HL_SIM_MAX_TIMEOUT_US)); \
	might_sleep_if(sleep_us); \
	for (;;) { \
		(val) = RREG32(addr); \
···
		mem_written_by_device) \
({ \
	ktime_t __timeout; \
-	/* timeout should be longer when working with simulator */ \
	if (hdev->pdev) \
		__timeout = ktime_add_us(ktime_get(), timeout_us); \
	else \
-		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+		__timeout = ktime_add_us(ktime_get(),\
+				min((u64)(timeout_us * 10), \
+					(u64) HL_SIM_MAX_TIMEOUT_US)); \
	might_sleep_if(sleep_us); \
	for (;;) { \
		/* Verify we read updates done by other cores or by device */ \
		mb(); \
		(val) = *((u32 *) (uintptr_t) (addr)); \
		if (mem_written_by_device) \
-			(val) = le32_to_cpu(val); \
+			(val) = le32_to_cpu(*(__le32 *) &(val)); \
		if (cond) \
			break; \
		if (timeout_us && ktime_compare(ktime_get(), __timeout) > 0) { \
			(val) = *((u32 *) (uintptr_t) (addr)); \
			if (mem_written_by_device) \
-				(val) = le32_to_cpu(val); \
+				(val) = le32_to_cpu(*(__le32 *) &(val)); \
			break; \
		} \
		if (sleep_us) \
···
		timeout_us) \
({ \
	ktime_t __timeout; \
-	/* timeout should be longer when working with simulator */ \
	if (hdev->pdev) \
		__timeout = ktime_add_us(ktime_get(), timeout_us); \
	else \
-		__timeout = ktime_add_us(ktime_get(), (timeout_us * 10)); \
+		__timeout = ktime_add_us(ktime_get(),\
+				min((u64)(timeout_us * 10), \
+					(u64) HL_SIM_MAX_TIMEOUT_US)); \
	might_sleep_if(sleep_us); \
	for (;;) { \
		(val) = readl(addr); \
···
};

/**
+ * struct hl_device_idle_busy_ts - used for calculating device utilization rate.
+ * @idle_to_busy_ts: timestamp where device changed from idle to busy.
+ * @busy_to_idle_ts: timestamp where device changed from busy to idle.
+ */
+struct hl_device_idle_busy_ts {
+	ktime_t				idle_to_busy_ts;
+	ktime_t				busy_to_idle_ts;
+};
+
+/**
 * struct hl_device - habanalabs device structure.
 * @pdev: pointer to PCI device, can be NULL in case of simulator device.
 * @pcie_bar: array of available PCIe bars.
 * @rmmio: configuration area address on SRAM.
 * @cdev: related char device.
- * @dev: realted kernel basic device structure.
+ * @cdev_ctrl: char device for control operations only (INFO IOCTL)
+ * @dev: related kernel basic device structure.
+ * @dev_ctrl: related kernel device structure for the control device
 * @work_freq: delayed work to lower device frequency if possible.
 * @work_heartbeat: delayed work for ArmCP is-alive check.
 * @asic_name: ASIC specific nmae.
···
 * @completion_queue: array of hl_cq.
 * @cq_wq: work queue of completion queues for executing work in process context
 * @eq_wq: work queue of event queue for executing work in process context.
- * @kernel_ctx: KMD context structure.
+ * @kernel_ctx: Kernel driver context structure.
 * @kernel_queues: array of hl_hw_queue.
 * @hw_queues_mirror_list: CS mirror list for TDR.
 * @hw_queues_mirror_lock: protects hw_queues_mirror_list.
 * @kernel_cb_mgr: command buffer manager for creating/destroying/handling CGs.
 * @event_queue: event queue for IRQ from ArmCP.
 * @dma_pool: DMA pool for small allocations.
- * @cpu_accessible_dma_mem: KMD <-> ArmCP shared memory CPU address.
- * @cpu_accessible_dma_address: KMD <-> ArmCP shared memory DMA address.
- * @cpu_accessible_dma_pool: KMD <-> ArmCP shared memory pool.
+ * @cpu_accessible_dma_mem: Host <-> ArmCP shared memory CPU address.
+ * @cpu_accessible_dma_address: Host <-> ArmCP shared memory DMA address.
+ * @cpu_accessible_dma_pool: Host <-> ArmCP shared memory pool.
 * @asid_bitmap: holds used/available ASIDs.
 * @asid_mutex: protects asid_bitmap.
- * @fd_open_cnt_lock: lock for updating fd_open_cnt in hl_device_open. Although
- *                    fd_open_cnt is atomic, we need this lock to serialize
- *                    the open function because the driver currently supports
- *                    only a single process at a time. In addition, we need a
- *                    lock here so we can flush user processes which are opening
- *                    the device while we are trying to hard reset it
- * @send_cpu_message_lock: enforces only one message in KMD <-> ArmCP queue.
+ * @send_cpu_message_lock: enforces only one message in Host <-> ArmCP queue.
 * @debug_lock: protects critical section of setting debug mode for device
 * @asic_prop: ASIC specific immutable properties.
 * @asic_funcs: ASIC specific functions.
···
 * @hl_debugfs: device's debugfs manager.
 * @cb_pool: list of preallocated CBs.
 * @cb_pool_lock: protects the CB pool.
- * @user_ctx: current user context executing.
+ * @fpriv_list: list of file private data structures. Each structure is created
+ *              when a user opens the device
+ * @fpriv_list_lock: protects the fpriv_list
+ * @compute_ctx: current compute context executing.
+ * @idle_busy_ts_arr: array to hold time stamps of transitions from idle to busy
+ *                    and vice-versa
 * @dram_used_mem: current DRAM memory consumption.
 * @timeout_jiffies: device CS timeout value.
 * @max_power: the max power of the device, as configured by the sysadmin. This
- *             value is saved so in case of hard-reset, KMD will restore this
- *             value and update the F/W after the re-initialization
+ *             value is saved so in case of hard-reset, the driver will restore
+ *             this value and update the F/W after the re-initialization
 * @in_reset: is device in reset flow.
 * @curr_pll_profile: current PLL profile.
- * @fd_open_cnt: number of open user processes.
 * @cs_active_cnt: number of active command submissions on this device (active
 *	means already in H/W queues)
- * @major: habanalabs KMD major.
+ * @major: habanalabs kernel driver major.
 * @high_pll: high PLL profile frequency.
- * @soft_reset_cnt: number of soft reset since KMD loading.
- * @hard_reset_cnt: number of hard reset since KMD loading.
+ * @soft_reset_cnt: number of soft reset since the driver was loaded.
+ * @hard_reset_cnt: number of hard reset since the driver was loaded.
+ * @idle_busy_ts_idx: index of current entry in idle_busy_ts_arr
 * @id: device minor.
+ * @id_control: minor of the control device
 * @disabled: is device disabled.
 * @late_init_done: is late init stage was done during initialization.
 * @hwmon_initialized: is H/W monitor sensors was initialized.
···
 * @mmu_enable: is MMU enabled.
 * @device_cpu_disabled: is the device CPU disabled (due to timeouts)
 * @dma_mask: the dma mask that was set for this device
- * @in_debug: is device under debug. This, together with fd_open_cnt, enforces
+ * @in_debug: is device under debug. This, together with fpriv_list, enforces
 *            that only a single user is configuring the debug infrastructure.
+ * @cdev_sysfs_created: were char devices and sysfs nodes created.
 */
struct hl_device {
	struct pci_dev			*pdev;
	void __iomem			*pcie_bar[6];
	void __iomem			*rmmio;
	struct cdev			cdev;
+	struct cdev			cdev_ctrl;
	struct device			*dev;
+	struct device			*dev_ctrl;
	struct delayed_work		work_freq;
	struct delayed_work		work_heartbeat;
	char				asic_name[16];
···
	struct gen_pool			*cpu_accessible_dma_pool;
	unsigned long			*asid_bitmap;
	struct mutex			asid_mutex;
-	/* TODO: remove fd_open_cnt_lock for multiple process support */
-	struct mutex			fd_open_cnt_lock;
	struct mutex			send_cpu_message_lock;
	struct mutex			debug_lock;
	struct asic_fixed_properties	asic_prop;
···
	struct list_head		cb_pool;
	spinlock_t			cb_pool_lock;

-	/* TODO: remove user_ctx for multiple process support */
-	struct hl_ctx			*user_ctx;
+	struct list_head		fpriv_list;
+	struct mutex			fpriv_list_lock;
+
+	struct hl_ctx			*compute_ctx;
+
+	struct hl_device_idle_busy_ts	*idle_busy_ts_arr;

	atomic64_t			dram_used_mem;
	u64				timeout_jiffies;
	u64				max_power;
	atomic_t			in_reset;
-	atomic_t			curr_pll_profile;
-	atomic_t			fd_open_cnt;
-	atomic_t			cs_active_cnt;
+	enum hl_pll_frequency		curr_pll_profile;
+	int				cs_active_cnt;
	u32				major;
	u32				high_pll;
	u32				soft_reset_cnt;
	u32				hard_reset_cnt;
+	u32				idle_busy_ts_idx;
	u16				id;
+	u16				id_control;
	u8				disabled;
	u8				late_init_done;
	u8				hwmon_initialized;
···
	u8				device_cpu_disabled;
	u8				dma_mask;
	u8				in_debug;
+	u8				cdev_sysfs_created;

	/* Parameters for bring-up */
	u8				mmu_enable;
···
}

int hl_device_open(struct inode *inode, struct file *filp);
+int hl_device_open_ctrl(struct inode *inode, struct file *filp);
bool hl_device_disabled_or_in_reset(struct hl_device *hdev);
enum hl_device_status hl_device_status(struct hl_device *hdev);
int hl_device_set_debug_mode(struct hl_device *hdev, bool enable);
···
void hl_hpriv_get(struct hl_fpriv *hpriv);
void hl_hpriv_put(struct hl_fpriv *hpriv);
int hl_device_set_frequency(struct hl_device *hdev, enum hl_pll_frequency freq);
+uint32_t hl_device_utilization(struct hl_device *hdev, uint32_t period_ms);

int hl_build_hwmon_channel_info(struct hl_device *hdev,
		struct armcp_sensor *sensors_arr);
···

/* IOCTLs */
long hl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg);
+long hl_ioctl_control(struct file *filep, unsigned int cmd, unsigned long arg);
int hl_cb_ioctl(struct hl_fpriv *hpriv, void *data);
int hl_cs_ioctl(struct hl_fpriv *hpriv, void *data);
int hl_cs_wait_ioctl(struct hl_fpriv *hpriv, void *data);
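The reworked hl_poll_timeout macros still scale the caller's timeout by 10 when running against the simulator (no PCI device), but now clamp the result at HL_SIM_MAX_TIMEOUT_US. The clamping arithmetic as a standalone helper (`sim_timeout_us` is a hypothetical name for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define SIM_MAX_TIMEOUT_US 10000000ull /* 10s, as HL_SIM_MAX_TIMEOUT_US */

/* Simulator runs are slower, so timeouts are scaled by 10 -- but now
 * clamped so a long caller timeout cannot balloon to minutes.
 */
static uint64_t sim_timeout_us(uint64_t timeout_us)
{
	uint64_t scaled = timeout_us * 10;

	return scaled < SIM_MAX_TIMEOUT_US ? scaled : SIM_MAX_TIMEOUT_US;
}
```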
+118 -71
drivers/misc/habanalabs/habanalabs_drv.c
···
		return -ENXIO;
	}

-	mutex_lock(&hdev->fd_open_cnt_lock);
-
-	if (hl_device_disabled_or_in_reset(hdev)) {
-		dev_err_ratelimited(hdev->dev,
-			"Can't open %s because it is disabled or in reset\n",
-			dev_name(hdev->dev));
-		mutex_unlock(&hdev->fd_open_cnt_lock);
-		return -EPERM;
-	}
-
-	if (hdev->in_debug) {
-		dev_err_ratelimited(hdev->dev,
-			"Can't open %s because it is being debugged by another user\n",
-			dev_name(hdev->dev));
-		mutex_unlock(&hdev->fd_open_cnt_lock);
-		return -EPERM;
-	}
-
-	if (atomic_read(&hdev->fd_open_cnt)) {
-		dev_info_ratelimited(hdev->dev,
-			"Can't open %s because another user is working on it\n",
-			dev_name(hdev->dev));
-		mutex_unlock(&hdev->fd_open_cnt_lock);
-		return -EBUSY;
-	}
-
-	atomic_inc(&hdev->fd_open_cnt);
-
-	mutex_unlock(&hdev->fd_open_cnt_lock);
-
	hpriv = kzalloc(sizeof(*hpriv), GFP_KERNEL);
-	if (!hpriv) {
-		rc = -ENOMEM;
-		goto close_device;
-	}
+	if (!hpriv)
+		return -ENOMEM;

	hpriv->hdev = hdev;
	filp->private_data = hpriv;
···
	hl_cb_mgr_init(&hpriv->cb_mgr);
	hl_ctx_mgr_init(&hpriv->ctx_mgr);

-	rc = hl_ctx_create(hdev, hpriv);
-	if (rc) {
-		dev_err(hdev->dev, "Failed to open FD (CTX fail)\n");
+	hpriv->taskpid = find_get_pid(current->pid);
+
+	mutex_lock(&hdev->fpriv_list_lock);
+
+	if (hl_device_disabled_or_in_reset(hdev)) {
+		dev_err_ratelimited(hdev->dev,
+			"Can't open %s because it is disabled or in reset\n",
+			dev_name(hdev->dev));
+		rc = -EPERM;
		goto out_err;
	}

-	hpriv->taskpid = find_get_pid(current->pid);
+	if (hdev->in_debug) {
+		dev_err_ratelimited(hdev->dev,
+			"Can't open %s because it is being debugged by another user\n",
+			dev_name(hdev->dev));
+		rc = -EPERM;
+		goto out_err;
+	}

-	/*
-	 * Device is IDLE at this point so it is legal to change PLLs. There
-	 * is no need to check anything because if the PLL is already HIGH, the
-	 * set function will return without doing anything
+	if (hdev->compute_ctx) {
+		dev_dbg_ratelimited(hdev->dev,
+			"Can't open %s because another user is working on it\n",
+			dev_name(hdev->dev));
+		rc = -EBUSY;
+		goto out_err;
+	}
+
+	rc = hl_ctx_create(hdev, hpriv);
+	if (rc) {
+		dev_err(hdev->dev, "Failed to create context %d\n", rc);
+		goto out_err;
+	}
+
+	/* Device is IDLE at this point so it is legal to change PLLs.
+	 * There is no need to check anything because if the PLL is
+	 * already HIGH, the set function will return without doing
+	 * anything
	 */
	hl_device_set_frequency(hdev, PLL_HIGH);
+
+	list_add(&hpriv->dev_node, &hdev->fpriv_list);
+	mutex_unlock(&hdev->fpriv_list_lock);

	hl_debugfs_add_file(hpriv);

	return 0;

out_err:
-	filp->private_data = NULL;
-	hl_ctx_mgr_fini(hpriv->hdev, &hpriv->ctx_mgr);
-	hl_cb_mgr_fini(hpriv->hdev, &hpriv->cb_mgr);
-	mutex_destroy(&hpriv->restore_phase_mutex);
-	kfree(hpriv);
+	mutex_unlock(&hdev->fpriv_list_lock);

-close_device:
-	atomic_dec(&hdev->fd_open_cnt);
+	hl_cb_mgr_fini(hpriv->hdev, &hpriv->cb_mgr);
+	hl_ctx_mgr_fini(hpriv->hdev, &hpriv->ctx_mgr);
+	filp->private_data = NULL;
+	mutex_destroy(&hpriv->restore_phase_mutex);
+	put_pid(hpriv->taskpid);
+
+	kfree(hpriv);
+	return rc;
+}
+
+int hl_device_open_ctrl(struct inode *inode, struct file *filp)
+{
+	struct hl_device *hdev;
+	struct hl_fpriv *hpriv;
+	int rc;
+
+	mutex_lock(&hl_devs_idr_lock);
+	hdev = idr_find(&hl_devs_idr, iminor(inode));
+	mutex_unlock(&hl_devs_idr_lock);
+
+	if (!hdev) {
+		pr_err("Couldn't find device %d:%d\n",
+			imajor(inode), iminor(inode));
+		return -ENXIO;
+	}
+
+	hpriv = kzalloc(sizeof(*hpriv), GFP_KERNEL);
+	if (!hpriv)
+		return -ENOMEM;
+
+	mutex_lock(&hdev->fpriv_list_lock);
+
+	if (hl_device_disabled_or_in_reset(hdev)) {
+		dev_err_ratelimited(hdev->dev_ctrl,
+			"Can't open %s because it is disabled or in reset\n",
+			dev_name(hdev->dev_ctrl));
+		rc = -EPERM;
+		goto out_err;
+	}
+
+	list_add(&hpriv->dev_node, &hdev->fpriv_list);
+	mutex_unlock(&hdev->fpriv_list_lock);
+
+	hpriv->hdev = hdev;
+	filp->private_data = hpriv;
+	hpriv->filp = filp;
+	hpriv->is_control = true;
+	nonseekable_open(inode, filp);
+
+	hpriv->taskpid = find_get_pid(current->pid);
+
+	return 0;
+
+out_err:
+	mutex_unlock(&hdev->fpriv_list_lock);
+	kfree(hpriv);
	return rc;
}

···
		enum hl_asic_type asic_type, int minor)
{
	struct hl_device *hdev;
-	int rc;
+	int rc, main_id, ctrl_id = 0;

	*dev = NULL;
···

	mutex_lock(&hl_devs_idr_lock);

-	if (minor == -1) {
-		rc = idr_alloc(&hl_devs_idr, hdev, 0, HL_MAX_MINORS,
+	/* Always save 2 numbers, 1 for main device and 1 for control.
+	 * They must be consecutive
+	 */
+	main_id = idr_alloc(&hl_devs_idr, hdev, 0, HL_MAX_MINORS,
			GFP_KERNEL);
-	} else {
-		void *old_idr = idr_replace(&hl_devs_idr, hdev, minor);

-		if (IS_ERR_VALUE(old_idr)) {
-			rc = PTR_ERR(old_idr);
-			pr_err("Error %d when trying to replace minor %d\n",
-				rc, minor);
-			mutex_unlock(&hl_devs_idr_lock);
-			goto free_hdev;
-		}
-		rc = minor;
-	}
+	if (main_id >= 0)
+		ctrl_id = idr_alloc(&hl_devs_idr, hdev, main_id + 1,
+				main_id + 2, GFP_KERNEL);

	mutex_unlock(&hl_devs_idr_lock);

-	if (rc < 0) {
-		if (rc == -ENOSPC) {
+	if ((main_id < 0) || (ctrl_id < 0)) {
+		if ((main_id == -ENOSPC) || (ctrl_id == -ENOSPC))
			pr_err("too many devices in the system\n");
-			rc = -EBUSY;
+
+		if (main_id >= 0) {
+			mutex_lock(&hl_devs_idr_lock);
+			idr_remove(&hl_devs_idr, main_id);
+			mutex_unlock(&hl_devs_idr_lock);
		}
+
+		rc = -EBUSY;
		goto free_hdev;
	}

-	hdev->id = rc;
+	hdev->id = main_id;
+	hdev->id_control = ctrl_id;

	*dev = hdev;

···
	/* Remove device from the device list */
	mutex_lock(&hl_devs_idr_lock);
	idr_remove(&hl_devs_idr, hdev->id);
+	idr_remove(&hl_devs_idr, hdev->id_control);
	mutex_unlock(&hl_devs_idr_lock);

	kfree(hdev);
···

static int hl_pmops_suspend(struct device *dev)
{
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct hl_device *hdev = pci_get_drvdata(pdev);
+	struct hl_device *hdev = dev_get_drvdata(dev);

	pr_debug("Going to suspend PCI device\n");
···

static int hl_pmops_resume(struct device *dev)
{
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct hl_device *hdev = pci_get_drvdata(pdev);
+	struct hl_device *hdev = dev_get_drvdata(dev);

	pr_debug("Going
to resume PCI device\n"); 365 316
+132 -48
drivers/misc/habanalabs/habanalabs_ioctl.c
··· 65 65 hw_ip.num_of_events = prop->num_of_events; 66 66 memcpy(hw_ip.armcp_version, 67 67 prop->armcp_info.armcp_version, VERSION_MAX_LEN); 68 - hw_ip.armcp_cpld_version = __le32_to_cpu(prop->armcp_info.cpld_version); 68 + hw_ip.armcp_cpld_version = le32_to_cpu(prop->armcp_info.cpld_version); 69 69 hw_ip.psoc_pci_pll_nr = prop->psoc_pci_pll_nr; 70 70 hw_ip.psoc_pci_pll_nf = prop->psoc_pci_pll_nf; 71 71 hw_ip.psoc_pci_pll_od = prop->psoc_pci_pll_od; ··· 75 75 min((size_t)size, sizeof(hw_ip))) ? -EFAULT : 0; 76 76 } 77 77 78 - static int hw_events_info(struct hl_device *hdev, struct hl_info_args *args) 78 + static int hw_events_info(struct hl_device *hdev, bool aggregate, 79 + struct hl_info_args *args) 79 80 { 80 81 u32 size, max_size = args->return_size; 81 82 void __user *out = (void __user *) (uintptr_t) args->return_pointer; ··· 85 84 if ((!max_size) || (!out)) 86 85 return -EINVAL; 87 86 88 - arr = hdev->asic_funcs->get_events_stat(hdev, &size); 87 + arr = hdev->asic_funcs->get_events_stat(hdev, aggregate, &size); 89 88 90 89 return copy_to_user(out, arr, min(max_size, size)) ? -EFAULT : 0; 91 90 } 92 91 93 - static int dram_usage_info(struct hl_device *hdev, struct hl_info_args *args) 92 + static int dram_usage_info(struct hl_fpriv *hpriv, struct hl_info_args *args) 94 93 { 94 + struct hl_device *hdev = hpriv->hdev; 95 95 struct hl_info_dram_usage dram_usage = {0}; 96 96 u32 max_size = args->return_size; 97 97 void __user *out = (void __user *) (uintptr_t) args->return_pointer; ··· 106 104 prop->dram_base_address); 107 105 dram_usage.dram_free_mem = (prop->dram_size - dram_kmd_size) - 108 106 atomic64_read(&hdev->dram_used_mem); 109 - dram_usage.ctx_dram_mem = atomic64_read(&hdev->user_ctx->dram_phys_mem); 107 + if (hpriv->ctx) 108 + dram_usage.ctx_dram_mem = 109 + atomic64_read(&hpriv->ctx->dram_phys_mem); 110 110 111 111 return copy_to_user(out, &dram_usage, 112 112 min((size_t) max_size, sizeof(dram_usage))) ? 
-EFAULT : 0; ··· 145 141 params->op = args->op; 146 142 147 143 if (args->input_ptr && args->input_size) { 148 - input = memdup_user(u64_to_user_ptr(args->input_ptr), 149 - args->input_size); 150 - if (IS_ERR(input)) { 151 - rc = PTR_ERR(input); 152 - input = NULL; 153 - dev_err(hdev->dev, 154 - "error %d when copying input debug data\n", rc); 144 + input = kzalloc(hl_debug_struct_size[args->op], GFP_KERNEL); 145 + if (!input) { 146 + rc = -ENOMEM; 147 + goto out; 148 + } 149 + 150 + if (copy_from_user(input, u64_to_user_ptr(args->input_ptr), 151 + args->input_size)) { 152 + rc = -EFAULT; 153 + dev_err(hdev->dev, "failed to copy input debug data\n"); 155 154 goto out; 156 155 } 157 156 ··· 198 191 return rc; 199 192 } 200 193 201 - static int hl_info_ioctl(struct hl_fpriv *hpriv, void *data) 194 + static int device_utilization(struct hl_device *hdev, struct hl_info_args *args) 195 + { 196 + struct hl_info_device_utilization device_util = {0}; 197 + u32 max_size = args->return_size; 198 + void __user *out = (void __user *) (uintptr_t) args->return_pointer; 199 + 200 + if ((!max_size) || (!out)) 201 + return -EINVAL; 202 + 203 + if ((args->period_ms < 100) || (args->period_ms > 1000) || 204 + (args->period_ms % 100)) { 205 + dev_err(hdev->dev, 206 + "period %u must be between 100 - 1000 and must be divisible by 100\n", 207 + args->period_ms); 208 + return -EINVAL; 209 + } 210 + 211 + device_util.utilization = hl_device_utilization(hdev, args->period_ms); 212 + 213 + return copy_to_user(out, &device_util, 214 + min((size_t) max_size, sizeof(device_util))) ? 
-EFAULT : 0; 215 + } 216 + 217 + static int _hl_info_ioctl(struct hl_fpriv *hpriv, void *data, 218 + struct device *dev) 202 219 { 203 220 struct hl_info_args *args = data; 204 221 struct hl_device *hdev = hpriv->hdev; 205 222 int rc; 206 223 207 - /* We want to return device status even if it disabled or in reset */ 208 - if (args->op == HL_INFO_DEVICE_STATUS) 224 + /* 225 + * Information is returned for the following opcodes even if the device 226 + * is disabled or in reset. 227 + */ 228 + switch (args->op) { 229 + case HL_INFO_HW_IP_INFO: 230 + return hw_ip_info(hdev, args); 231 + 232 + case HL_INFO_DEVICE_STATUS: 209 233 return device_status_info(hdev, args); 210 234 235 + default: 236 + break; 237 + } 238 + 211 239 if (hl_device_disabled_or_in_reset(hdev)) { 212 - dev_warn_ratelimited(hdev->dev, 240 + dev_warn_ratelimited(dev, 213 241 "Device is %s. Can't execute INFO IOCTL\n", 214 242 atomic_read(&hdev->in_reset) ? "in_reset" : "disabled"); 215 243 return -EBUSY; 216 244 } 217 245 218 246 switch (args->op) { 219 - case HL_INFO_HW_IP_INFO: 220 - rc = hw_ip_info(hdev, args); 221 - break; 222 - 223 247 case HL_INFO_HW_EVENTS: 224 - rc = hw_events_info(hdev, args); 248 + rc = hw_events_info(hdev, false, args); 225 249 break; 226 250 227 251 case HL_INFO_DRAM_USAGE: 228 - rc = dram_usage_info(hdev, args); 252 + rc = dram_usage_info(hpriv, args); 229 253 break; 230 254 231 255 case HL_INFO_HW_IDLE: 232 256 rc = hw_idle(hdev, args); 233 257 break; 234 258 259 + case HL_INFO_DEVICE_UTILIZATION: 260 + rc = device_utilization(hdev, args); 261 + break; 262 + 263 + case HL_INFO_HW_EVENTS_AGGREGATE: 264 + rc = hw_events_info(hdev, true, args); 265 + break; 266 + 235 267 default: 236 - dev_err(hdev->dev, "Invalid request %d\n", args->op); 268 + dev_err(dev, "Invalid request %d\n", args->op); 237 269 rc = -ENOTTY; 238 270 break; 239 271 } 240 272 241 273 return rc; 274 + } 275 + 276 + static int hl_info_ioctl(struct hl_fpriv *hpriv, void *data) 277 + { 278 + return 
_hl_info_ioctl(hpriv, data, hpriv->hdev->dev); 279 + } 280 + 281 + static int hl_info_ioctl_control(struct hl_fpriv *hpriv, void *data) 282 + { 283 + return _hl_info_ioctl(hpriv, data, hpriv->hdev->dev_ctrl); 242 284 } 243 285 244 286 static int hl_debug_ioctl(struct hl_fpriv *hpriv, void *data) ··· 344 288 HL_IOCTL_DEF(HL_IOCTL_DEBUG, hl_debug_ioctl) 345 289 }; 346 290 347 - #define HL_CORE_IOCTL_COUNT ARRAY_SIZE(hl_ioctls) 291 + static const struct hl_ioctl_desc hl_ioctls_control[] = { 292 + HL_IOCTL_DEF(HL_IOCTL_INFO, hl_info_ioctl_control) 293 + }; 348 294 349 - long hl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg) 295 + static long _hl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg, 296 + const struct hl_ioctl_desc *ioctl, struct device *dev) 350 297 { 351 298 struct hl_fpriv *hpriv = filep->private_data; 352 299 struct hl_device *hdev = hpriv->hdev; 353 - hl_ioctl_t *func; 354 - const struct hl_ioctl_desc *ioctl = NULL; 355 300 unsigned int nr = _IOC_NR(cmd); 356 301 char stack_kdata[128] = {0}; 357 302 char *kdata = NULL; 358 303 unsigned int usize, asize; 304 + hl_ioctl_t *func; 305 + u32 hl_size; 359 306 int retcode; 360 307 361 308 if (hdev->hard_reset_pending) { 362 - dev_crit_ratelimited(hdev->dev, 309 + dev_crit_ratelimited(hdev->dev_ctrl, 363 310 "Device HARD reset pending! 
Please close FD\n"); 364 311 return -ENODEV; 365 - } 366 - 367 - if ((nr >= HL_COMMAND_START) && (nr < HL_COMMAND_END)) { 368 - u32 hl_size; 369 - 370 - ioctl = &hl_ioctls[nr]; 371 - 372 - hl_size = _IOC_SIZE(ioctl->cmd); 373 - usize = asize = _IOC_SIZE(cmd); 374 - if (hl_size > asize) 375 - asize = hl_size; 376 - 377 - cmd = ioctl->cmd; 378 - } else { 379 - dev_err(hdev->dev, "invalid ioctl: pid=%d, nr=0x%02x\n", 380 - task_pid_nr(current), nr); 381 - return -ENOTTY; 382 312 } 383 313 384 314 /* Do not trust userspace, use our own definition */ 385 315 func = ioctl->func; 386 316 387 317 if (unlikely(!func)) { 388 - dev_dbg(hdev->dev, "no function\n"); 318 + dev_dbg(dev, "no function\n"); 389 319 retcode = -ENOTTY; 390 320 goto out_err; 391 321 } 322 + 323 + hl_size = _IOC_SIZE(ioctl->cmd); 324 + usize = asize = _IOC_SIZE(cmd); 325 + if (hl_size > asize) 326 + asize = hl_size; 327 + 328 + cmd = ioctl->cmd; 392 329 393 330 if (cmd & (IOC_IN | IOC_OUT)) { 394 331 if (asize <= sizeof(stack_kdata)) { ··· 412 363 413 364 out_err: 414 365 if (retcode) 415 - dev_dbg(hdev->dev, 416 - "error in ioctl: pid=%d, cmd=0x%02x, nr=0x%02x\n", 366 + dev_dbg(dev, "error in ioctl: pid=%d, cmd=0x%02x, nr=0x%02x\n", 417 367 task_pid_nr(current), cmd, nr); 418 368 419 369 if (kdata != stack_kdata) 420 370 kfree(kdata); 421 371 422 372 return retcode; 373 + } 374 + 375 + long hl_ioctl(struct file *filep, unsigned int cmd, unsigned long arg) 376 + { 377 + struct hl_fpriv *hpriv = filep->private_data; 378 + struct hl_device *hdev = hpriv->hdev; 379 + const struct hl_ioctl_desc *ioctl = NULL; 380 + unsigned int nr = _IOC_NR(cmd); 381 + 382 + if ((nr >= HL_COMMAND_START) && (nr < HL_COMMAND_END)) { 383 + ioctl = &hl_ioctls[nr]; 384 + } else { 385 + dev_err(hdev->dev, "invalid ioctl: pid=%d, nr=0x%02x\n", 386 + task_pid_nr(current), nr); 387 + return -ENOTTY; 388 + } 389 + 390 + return _hl_ioctl(filep, cmd, arg, ioctl, hdev->dev); 391 + } 392 + 393 + long hl_ioctl_control(struct file *filep, 
unsigned int cmd, unsigned long arg) 394 + { 395 + struct hl_fpriv *hpriv = filep->private_data; 396 + struct hl_device *hdev = hpriv->hdev; 397 + const struct hl_ioctl_desc *ioctl = NULL; 398 + unsigned int nr = _IOC_NR(cmd); 399 + 400 + if (nr == _IOC_NR(HL_IOCTL_INFO)) { 401 + ioctl = &hl_ioctls_control[nr]; 402 + } else { 403 + dev_err(hdev->dev_ctrl, "invalid ioctl: pid=%d, nr=0x%02x\n", 404 + task_pid_nr(current), nr); 405 + return -ENOTTY; 406 + } 407 + 408 + return _hl_ioctl(filep, cmd, arg, ioctl, hdev->dev_ctrl); 423 409 }
+12 -6
drivers/misc/habanalabs/hw_queue.c
··· 80 80 81 81 bd = (struct hl_bd *) (uintptr_t) q->kernel_address; 82 82 bd += hl_pi_2_offset(q->pi); 83 - bd->ctl = __cpu_to_le32(ctl); 84 - bd->len = __cpu_to_le32(len); 85 - bd->ptr = __cpu_to_le64(ptr); 83 + bd->ctl = cpu_to_le32(ctl); 84 + bd->len = cpu_to_le32(len); 85 + bd->ptr = cpu_to_le64(ptr); 86 86 87 87 q->pi = hl_queue_inc_ptr(q->pi); 88 88 hdev->asic_funcs->ring_doorbell(hdev, q->hw_queue_id, q->pi); ··· 249 249 len = job->job_cb_size; 250 250 ptr = cb->bus_address; 251 251 252 - cq_pkt.data = __cpu_to_le32( 252 + cq_pkt.data = cpu_to_le32( 253 253 ((q->pi << CQ_ENTRY_SHADOW_INDEX_SHIFT) 254 254 & CQ_ENTRY_SHADOW_INDEX_MASK) | 255 255 (1 << CQ_ENTRY_SHADOW_INDEX_VALID_SHIFT) | ··· 267 267 268 268 hdev->asic_funcs->add_end_of_cb_packets(hdev, cb->kernel_address, len, 269 269 cq_addr, 270 - __le32_to_cpu(cq_pkt.data), 270 + le32_to_cpu(cq_pkt.data), 271 271 q->hw_queue_id); 272 272 273 273 q->shadow_queue[hl_pi_2_offset(q->pi)] = job; ··· 364 364 spin_unlock(&hdev->hw_queues_mirror_lock); 365 365 } 366 366 367 - atomic_inc(&hdev->cs_active_cnt); 367 + if (!hdev->cs_active_cnt++) { 368 + struct hl_device_idle_busy_ts *ts; 369 + 370 + ts = &hdev->idle_busy_ts_arr[hdev->idle_busy_ts_idx]; 371 + ts->busy_to_idle_ts = ktime_set(0, 0); 372 + ts->idle_to_busy_ts = ktime_get(); 373 + } 368 374 369 375 list_for_each_entry_safe(job, tmp, &cs->job_list, cs_node) 370 376 if (job->ext_queue)
+13 -11
drivers/misc/habanalabs/hwmon.c
··· 26 26 int rc, i, j; 27 27 28 28 for (i = 0 ; i < ARMCP_MAX_SENSORS ; i++) { 29 - type = __le32_to_cpu(sensors_arr[i].type); 29 + type = le32_to_cpu(sensors_arr[i].type); 30 30 31 31 if ((type == 0) && (sensors_arr[i].flags == 0)) 32 32 break; ··· 58 58 } 59 59 60 60 for (i = 0 ; i < arr_size ; i++) { 61 - type = __le32_to_cpu(sensors_arr[i].type); 61 + type = le32_to_cpu(sensors_arr[i].type); 62 62 curr_arr = sensors_by_type[type]; 63 63 curr_arr[sensors_by_type_next_index[type]++] = 64 - __le32_to_cpu(sensors_arr[i].flags); 64 + le32_to_cpu(sensors_arr[i].flags); 65 65 } 66 66 67 67 channels_info = kcalloc(num_active_sensor_types + 1, ··· 273 273 274 274 memset(&pkt, 0, sizeof(pkt)); 275 275 276 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_TEMPERATURE_GET << 276 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_TEMPERATURE_GET << 277 277 ARMCP_PKT_CTL_OPCODE_SHIFT); 278 278 pkt.sensor_index = __cpu_to_le16(sensor_index); 279 279 pkt.type = __cpu_to_le16(attr); ··· 299 299 300 300 memset(&pkt, 0, sizeof(pkt)); 301 301 302 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_VOLTAGE_GET << 302 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_VOLTAGE_GET << 303 303 ARMCP_PKT_CTL_OPCODE_SHIFT); 304 304 pkt.sensor_index = __cpu_to_le16(sensor_index); 305 305 pkt.type = __cpu_to_le16(attr); ··· 325 325 326 326 memset(&pkt, 0, sizeof(pkt)); 327 327 328 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_CURRENT_GET << 328 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_CURRENT_GET << 329 329 ARMCP_PKT_CTL_OPCODE_SHIFT); 330 330 pkt.sensor_index = __cpu_to_le16(sensor_index); 331 331 pkt.type = __cpu_to_le16(attr); ··· 351 351 352 352 memset(&pkt, 0, sizeof(pkt)); 353 353 354 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_FAN_SPEED_GET << 354 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_FAN_SPEED_GET << 355 355 ARMCP_PKT_CTL_OPCODE_SHIFT); 356 356 pkt.sensor_index = __cpu_to_le16(sensor_index); 357 357 pkt.type = __cpu_to_le16(attr); ··· 377 377 378 378 memset(&pkt, 0, sizeof(pkt)); 379 379 380 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_PWM_GET << 380 + 
pkt.ctl = cpu_to_le32(ARMCP_PACKET_PWM_GET << 381 381 ARMCP_PKT_CTL_OPCODE_SHIFT); 382 382 pkt.sensor_index = __cpu_to_le16(sensor_index); 383 383 pkt.type = __cpu_to_le16(attr); ··· 403 403 404 404 memset(&pkt, 0, sizeof(pkt)); 405 405 406 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_PWM_SET << 406 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_PWM_SET << 407 407 ARMCP_PKT_CTL_OPCODE_SHIFT); 408 408 pkt.sensor_index = __cpu_to_le16(sensor_index); 409 409 pkt.type = __cpu_to_le16(attr); 410 - pkt.value = __cpu_to_le64(value); 410 + pkt.value = cpu_to_le64(value); 411 411 412 412 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 413 413 SENSORS_PKT_TIMEOUT, NULL); ··· 421 421 int hl_hwmon_init(struct hl_device *hdev) 422 422 { 423 423 struct device *dev = hdev->pdev ? &hdev->pdev->dev : hdev->dev; 424 + struct asic_fixed_properties *prop = &hdev->asic_prop; 424 425 int rc; 425 426 426 427 if ((hdev->hwmon_initialized) || !(hdev->fw_loading)) ··· 431 430 hdev->hl_chip_info->ops = &hl_hwmon_ops; 432 431 433 432 hdev->hwmon_dev = hwmon_device_register_with_info(dev, 434 - "habanalabs", hdev, hdev->hl_chip_info, NULL); 433 + prop->armcp_info.card_name, hdev, 434 + hdev->hl_chip_info, NULL); 435 435 if (IS_ERR(hdev->hwmon_dev)) { 436 436 rc = PTR_ERR(hdev->hwmon_dev); 437 437 dev_err(hdev->dev,
+48 -31
drivers/misc/habanalabs/include/armcp_if.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 2 2 * 3 - * Copyright 2016-2018 HabanaLabs, Ltd. 3 + * Copyright 2016-2019 HabanaLabs, Ltd. 4 4 * All Rights Reserved. 5 5 * 6 6 */ ··· 41 41 /* 42 42 * ArmCP Primary Queue Packets 43 43 * 44 - * During normal operation, KMD needs to send various messages to ArmCP, 45 - * usually either to SET some value into a H/W periphery or to GET the current 46 - * value of some H/W periphery. For example, SET the frequency of MME/TPC and 47 - * GET the value of the thermal sensor. 44 + * During normal operation, the host's kernel driver needs to send various 45 + * messages to ArmCP, usually either to SET some value into a H/W periphery or 46 + * to GET the current value of some H/W periphery. For example, SET the 47 + * frequency of MME/TPC and GET the value of the thermal sensor. 48 48 * 49 - * These messages can be initiated either by the User application or by KMD 50 - * itself, e.g. power management code. In either case, the communication from 51 - * KMD to ArmCP will *always* be in synchronous mode, meaning that KMD will 52 - * send a single message and poll until the message was acknowledged and the 53 - * results are ready (if results are needed). 49 + * These messages can be initiated either by the User application or by the 50 + * host's driver itself, e.g. power management code. In either case, the 51 + * communication from the host's driver to ArmCP will *always* be in 52 + * synchronous mode, meaning that the host will send a single message and poll 53 + * until the message was acknowledged and the results are ready (if results are 54 + * needed). 54 55 * 55 - * This means that only a single message can be sent at a time and KMD must 56 - * wait for its result before sending the next message. 
Having said that, 57 - * because these are control messages which are sent in a relatively low 56 + * This means that only a single message can be sent at a time and the host's 57 + * driver must wait for its result before sending the next message. Having said 58 + * that, because these are control messages which are sent in a relatively low 58 59 * frequency, this limitation seems acceptable. It's important to note that 59 60 * in case of multiple devices, messages to different devices *can* be sent 60 61 * at the same time. 61 62 * 62 63 * The message, inputs/outputs (if relevant) and fence object will be located 63 - * on the device DDR at an address that will be determined by KMD. During 64 - * device initialization phase, KMD will pass to ArmCP that address. Most of 65 - * the message types will contain inputs/outputs inside the message itself. 66 - * The common part of each message will contain the opcode of the message (its 67 - * type) and a field representing a fence object. 64 + * on the device DDR at an address that will be determined by the host's driver. 65 + * During device initialization phase, the host will pass to ArmCP that address. 66 + * Most of the message types will contain inputs/outputs inside the message 67 + * itself. The common part of each message will contain the opcode of the 68 + * message (its type) and a field representing a fence object. 68 69 * 69 - * When KMD wishes to send a message to ArmCP, it will write the message 70 - * contents to the device DDR, clear the fence object and then write the 70 + * When the host's driver wishes to send a message to ArmCP, it will write the 71 + * message contents to the device DDR, clear the fence object and then write the 71 72 * value 484 to the mmGIC_DISTRIBUTOR__5_GICD_SETSPI_NSR register to issue 72 73 * the 484 interrupt-id to the ARM core. 73 74 * ··· 79 78 * device DDR and then write to the fence object. 
If an error occurred, ArmCP 80 79 * will fill the rc field with the right error code. 81 80 * 82 - * In the meantime, KMD will poll on the fence object. Once KMD sees that the 83 - * fence object is signaled, it will read the results from the device DDR 84 - * (if relevant) and resume the code execution in KMD. 81 + * In the meantime, the host's driver will poll on the fence object. Once the 82 + * host sees that the fence object is signaled, it will read the results from 83 + * the device DDR (if relevant) and resume the code execution in the host's 84 + * driver. 85 85 * 86 86 * To use QMAN packets, the opcode must be the QMAN opcode, shifted by 8 87 - * so the value being put by the KMD matches the value read by ArmCP 87 + * so the value being put by the host's driver matches the value read by ArmCP 88 88 * 89 89 * Non-QMAN packets should be limited to values 1 through (2^8 - 1) 90 90 * ··· 150 148 * 151 149 * ARMCP_PACKET_INFO_GET - 152 150 * Fetch information from the device as specified in the packet's 153 - * structure. KMD passes the max size it allows the ArmCP to write to 154 - * the structure, to prevent data corruption in case of mismatched 155 - * KMD/FW versions. 151 + * structure. The host's driver passes the max size it allows the ArmCP to 152 + * write to the structure, to prevent data corruption in case of 153 + * mismatched driver/FW versions. 156 154 * 157 155 * ARMCP_PACKET_FLASH_PROGRAM_REMOVED - this packet was removed 158 156 * ··· 185 183 * ARMCP_PACKET_EEPROM_DATA_GET - 186 184 * Get EEPROM data from the ArmCP kernel. The buffer is specified in the 187 185 * addr field. The CPU will put the returned data size in the result 188 - * field. In addition, KMD passes the max size it allows the ArmCP to 189 - * write to the structure, to prevent data corruption in case of 190 - * mismatched KMD/FW versions. 186 + * field. 
In addition, the host's driver passes the max size it allows the 187 + * ArmCP to write to the structure, to prevent data corruption in case of 188 + * mismatched driver/FW versions. 191 189 * 192 190 */ 193 191 ··· 233 231 234 232 __le32 ctl; 235 233 236 - __le32 fence; /* Signal to KMD that message is completed */ 234 + __le32 fence; /* Signal to host that message is completed */ 237 235 238 236 union { 239 237 struct {/* For temperature/current/voltage/fan/pwm get/set */ ··· 312 310 * ArmCP info 313 311 */ 314 312 313 + #define CARD_NAME_MAX_LEN 16 315 314 #define VERSION_MAX_LEN 128 316 315 #define ARMCP_MAX_SENSORS 128 317 316 ··· 321 318 __le32 flags; 322 319 }; 323 320 321 + /** 322 + * struct armcp_info - Info from ArmCP that is necessary to the host's driver 323 + * @sensors: available sensors description. 324 + * @kernel_version: ArmCP linux kernel version. 325 + * @reserved: reserved field. 326 + * @cpld_version: CPLD programmed F/W version. 327 + * @infineon_version: Infineon main DC-DC version. 328 + * @fuse_version: silicon production FUSE information. 329 + * @thermal_version: thermald S/W version. 330 + * @armcp_version: ArmCP S/W version. 331 + * @dram_size: available DRAM size. 332 + * @card_name: card name that will be displayed in HWMON subsystem on the host 333 + */ 324 334 struct armcp_info { 325 335 struct armcp_sensor sensors[ARMCP_MAX_SENSORS]; 326 336 __u8 kernel_version[VERSION_MAX_LEN]; ··· 344 328 __u8 thermal_version[VERSION_MAX_LEN]; 345 329 __u8 armcp_version[VERSION_MAX_LEN]; 346 330 __le64 dram_size; 331 + char card_name[CARD_NAME_MAX_LEN]; 347 332 }; 348 333 349 334 #endif /* ARMCP_IF_H */
+2
drivers/misc/habanalabs/include/goya/goya.h
··· 38 38 39 39 #define TPC_MAX_NUM 8 40 40 41 + #define MME_MAX_NUM 1 42 + 41 43 #endif /* GOYA_H */
+34
drivers/misc/habanalabs/include/goya/goya_reg_map.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 2 + * 3 + * Copyright 2019 HabanaLabs, Ltd. 4 + * All Rights Reserved. 5 + * 6 + */ 7 + 8 + #ifndef GOYA_REG_MAP_H_ 9 + #define GOYA_REG_MAP_H_ 10 + 11 + /* 12 + * PSOC scratch-pad registers 13 + */ 14 + #define mmCPU_PQ_BASE_ADDR_LOW mmPSOC_GLOBAL_CONF_SCRATCHPAD_0 15 + #define mmCPU_PQ_BASE_ADDR_HIGH mmPSOC_GLOBAL_CONF_SCRATCHPAD_1 16 + #define mmCPU_EQ_BASE_ADDR_LOW mmPSOC_GLOBAL_CONF_SCRATCHPAD_2 17 + #define mmCPU_EQ_BASE_ADDR_HIGH mmPSOC_GLOBAL_CONF_SCRATCHPAD_3 18 + #define mmCPU_EQ_LENGTH mmPSOC_GLOBAL_CONF_SCRATCHPAD_4 19 + #define mmCPU_PQ_LENGTH mmPSOC_GLOBAL_CONF_SCRATCHPAD_5 20 + #define mmCPU_EQ_CI mmPSOC_GLOBAL_CONF_SCRATCHPAD_6 21 + #define mmCPU_PQ_INIT_STATUS mmPSOC_GLOBAL_CONF_SCRATCHPAD_7 22 + #define mmCPU_CQ_BASE_ADDR_LOW mmPSOC_GLOBAL_CONF_SCRATCHPAD_8 23 + #define mmCPU_CQ_BASE_ADDR_HIGH mmPSOC_GLOBAL_CONF_SCRATCHPAD_9 24 + #define mmCPU_CQ_LENGTH mmPSOC_GLOBAL_CONF_SCRATCHPAD_10 25 + #define mmUPD_STS mmPSOC_GLOBAL_CONF_SCRATCHPAD_26 26 + #define mmUPD_CMD mmPSOC_GLOBAL_CONF_SCRATCHPAD_27 27 + #define mmPREBOOT_VER_OFFSET mmPSOC_GLOBAL_CONF_SCRATCHPAD_28 28 + #define mmUBOOT_VER_OFFSET mmPSOC_GLOBAL_CONF_SCRATCHPAD_29 29 + #define mmUBOOT_OFFSET mmPSOC_GLOBAL_CONF_SCRATCHPAD_30 30 + #define mmBTL_ID mmPSOC_GLOBAL_CONF_SCRATCHPAD_31 31 + 32 + #define mmHW_STATE mmPSOC_GLOBAL_CONF_APP_STATUS 33 + 34 + #endif /* GOYA_REG_MAP_H_ */
+2 -2
drivers/misc/habanalabs/irq.c
··· 160 160 161 161 while (1) { 162 162 bool entry_ready = 163 - ((__le32_to_cpu(eq_base[eq->ci].hdr.ctl) & 163 + ((le32_to_cpu(eq_base[eq->ci].hdr.ctl) & 164 164 EQ_CTL_READY_MASK) >> EQ_CTL_READY_SHIFT); 165 165 166 166 if (!entry_ready) ··· 194 194 skip_irq: 195 195 /* Clear EQ entry ready bit */ 196 196 eq_entry->hdr.ctl = 197 - __cpu_to_le32(__le32_to_cpu(eq_entry->hdr.ctl) & 197 + cpu_to_le32(le32_to_cpu(eq_entry->hdr.ctl) & 198 198 ~EQ_CTL_READY_MASK); 199 199 200 200 eq->ci = hl_eq_inc_ptr(eq->ci);
+9 -117
drivers/misc/habanalabs/sysfs.c
··· 21 21 memset(&pkt, 0, sizeof(pkt)); 22 22 23 23 if (curr) 24 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_FREQUENCY_CURR_GET << 24 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_FREQUENCY_CURR_GET << 25 25 ARMCP_PKT_CTL_OPCODE_SHIFT); 26 26 else 27 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_FREQUENCY_GET << 27 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_FREQUENCY_GET << 28 28 ARMCP_PKT_CTL_OPCODE_SHIFT); 29 - pkt.pll_index = __cpu_to_le32(pll_index); 29 + pkt.pll_index = cpu_to_le32(pll_index); 30 30 31 31 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 32 32 SET_CLK_PKT_TIMEOUT, &result); ··· 48 48 49 49 memset(&pkt, 0, sizeof(pkt)); 50 50 51 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_FREQUENCY_SET << 51 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_FREQUENCY_SET << 52 52 ARMCP_PKT_CTL_OPCODE_SHIFT); 53 - pkt.pll_index = __cpu_to_le32(pll_index); 54 - pkt.value = __cpu_to_le64(freq); 53 + pkt.pll_index = cpu_to_le32(pll_index); 54 + pkt.value = cpu_to_le64(freq); 55 55 56 56 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 57 57 SET_CLK_PKT_TIMEOUT, NULL); ··· 70 70 71 71 memset(&pkt, 0, sizeof(pkt)); 72 72 73 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_MAX_POWER_GET << 73 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_MAX_POWER_GET << 74 74 ARMCP_PKT_CTL_OPCODE_SHIFT); 75 75 76 76 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), ··· 91 91 92 92 memset(&pkt, 0, sizeof(pkt)); 93 93 94 - pkt.ctl = __cpu_to_le32(ARMCP_PACKET_MAX_POWER_SET << 94 + pkt.ctl = cpu_to_le32(ARMCP_PACKET_MAX_POWER_SET << 95 95 ARMCP_PKT_CTL_OPCODE_SHIFT); 96 - pkt.value = __cpu_to_le64(value); 96 + pkt.value = cpu_to_le64(value); 97 97 98 98 rc = hdev->asic_funcs->send_cpu_message(hdev, (u32 *) &pkt, sizeof(pkt), 99 99 SET_PWR_PKT_TIMEOUT, NULL); 100 100 101 101 if (rc) 102 102 dev_err(hdev->dev, "Failed to set max power, error %d\n", rc); 103 - } 104 - 105 - static ssize_t pm_mng_profile_show(struct device *dev, 106 - struct device_attribute *attr, char *buf) 107 - { 
-	struct hl_device *hdev = dev_get_drvdata(dev);
-
-	if (hl_device_disabled_or_in_reset(hdev))
-		return -ENODEV;
-
-	return sprintf(buf, "%s\n",
-			(hdev->pm_mng_profile == PM_AUTO) ? "auto" :
-			(hdev->pm_mng_profile == PM_MANUAL) ? "manual" :
-			"unknown");
-}
-
-static ssize_t pm_mng_profile_store(struct device *dev,
-		struct device_attribute *attr, const char *buf, size_t count)
-{
-	struct hl_device *hdev = dev_get_drvdata(dev);
-
-	if (hl_device_disabled_or_in_reset(hdev)) {
-		count = -ENODEV;
-		goto out;
-	}
-
-	mutex_lock(&hdev->fd_open_cnt_lock);
-
-	if (atomic_read(&hdev->fd_open_cnt) > 0) {
-		dev_err(hdev->dev,
-			"Can't change PM profile while user process is opened on the device\n");
-		count = -EPERM;
-		goto unlock_mutex;
-	}
-
-	if (strncmp("auto", buf, strlen("auto")) == 0) {
-		/* Make sure we are in LOW PLL when changing modes */
-		if (hdev->pm_mng_profile == PM_MANUAL) {
-			atomic_set(&hdev->curr_pll_profile, PLL_HIGH);
-			hl_device_set_frequency(hdev, PLL_LOW);
-			hdev->pm_mng_profile = PM_AUTO;
-		}
-	} else if (strncmp("manual", buf, strlen("manual")) == 0) {
-		/* Make sure we are in LOW PLL when changing modes */
-		if (hdev->pm_mng_profile == PM_AUTO) {
-			flush_delayed_work(&hdev->work_freq);
-			hdev->pm_mng_profile = PM_MANUAL;
-		}
-	} else {
-		dev_err(hdev->dev, "value should be auto or manual\n");
-		count = -EINVAL;
-		goto unlock_mutex;
-	}
-
-unlock_mutex:
-	mutex_unlock(&hdev->fd_open_cnt_lock);
-out:
-	return count;
-}
-
-static ssize_t high_pll_show(struct device *dev, struct device_attribute *attr,
-		char *buf)
-{
-	struct hl_device *hdev = dev_get_drvdata(dev);
-
-	if (hl_device_disabled_or_in_reset(hdev))
-		return -ENODEV;
-
-	return sprintf(buf, "%u\n", hdev->high_pll);
-}
-
-static ssize_t high_pll_store(struct device *dev, struct device_attribute *attr,
-		const char *buf, size_t count)
-{
-	struct hl_device *hdev = dev_get_drvdata(dev);
-	long value;
-	int rc;
-
-	if (hl_device_disabled_or_in_reset(hdev)) {
-		count = -ENODEV;
-		goto out;
-	}
-
-	rc = kstrtoul(buf, 0, &value);
-
-	if (rc) {
-		count = -EINVAL;
-		goto out;
-	}
-
-	hdev->high_pll = value;
-
-out:
-	return count;
 }
 
 static ssize_t uboot_ver_show(struct device *dev, struct device_attribute *attr,
···
 	return sprintf(buf, "%s\n", str);
 }
 
-static ssize_t write_open_cnt_show(struct device *dev,
-		struct device_attribute *attr, char *buf)
-{
-	struct hl_device *hdev = dev_get_drvdata(dev);
-
-	return sprintf(buf, "%d\n", hdev->user_ctx ? 1 : 0);
-}
-
 static ssize_t soft_reset_cnt_show(struct device *dev,
 		struct device_attribute *attr, char *buf)
 {
···
 static DEVICE_ATTR_RO(fuse_ver);
 static DEVICE_ATTR_WO(hard_reset);
 static DEVICE_ATTR_RO(hard_reset_cnt);
-static DEVICE_ATTR_RW(high_pll);
 static DEVICE_ATTR_RO(infineon_ver);
 static DEVICE_ATTR_RW(max_power);
 static DEVICE_ATTR_RO(pci_addr);
-static DEVICE_ATTR_RW(pm_mng_profile);
 static DEVICE_ATTR_RO(preboot_btl_ver);
 static DEVICE_ATTR_WO(soft_reset);
 static DEVICE_ATTR_RO(soft_reset_cnt);
 static DEVICE_ATTR_RO(status);
 static DEVICE_ATTR_RO(thermal_ver);
 static DEVICE_ATTR_RO(uboot_ver);
-static DEVICE_ATTR_RO(write_open_cnt);
 
 static struct bin_attribute bin_attr_eeprom = {
 	.attr = {.name = "eeprom", .mode = (0444)},
···
 	&dev_attr_fuse_ver.attr,
 	&dev_attr_hard_reset.attr,
 	&dev_attr_hard_reset_cnt.attr,
-	&dev_attr_high_pll.attr,
 	&dev_attr_infineon_ver.attr,
 	&dev_attr_max_power.attr,
 	&dev_attr_pci_addr.attr,
-	&dev_attr_pm_mng_profile.attr,
 	&dev_attr_preboot_btl_ver.attr,
 	&dev_attr_soft_reset.attr,
 	&dev_attr_soft_reset_cnt.attr,
 	&dev_attr_status.attr,
 	&dev_attr_thermal_ver.attr,
 	&dev_attr_uboot_ver.attr,
-	&dev_attr_write_open_cnt.attr,
 	NULL,
 };
+1
drivers/misc/lkdtm/Makefile
···
 lkdtm-$(CONFIG_LKDTM)		+= rodata_objcopy.o
 lkdtm-$(CONFIG_LKDTM)		+= usercopy.o
 lkdtm-$(CONFIG_LKDTM)		+= stackleak.o
+lkdtm-$(CONFIG_LKDTM)		+= cfi.o
 
 KASAN_SANITIZE_stackleak.o	:= n
 KCOV_INSTRUMENT_rodata.o	:= n
+6 -1
drivers/misc/lkdtm/bugs.c
···
 
 void lkdtm_WARNING(void)
 {
-	WARN(1, "Warning message trigger count: %d\n", warn_counter++);
+	WARN_ON(++warn_counter);
+}
+
+void lkdtm_WARNING_MESSAGE(void)
+{
+	WARN(1, "Warning message trigger count: %d\n", ++warn_counter);
 }
 
 void lkdtm_EXCEPTION(void)
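The split of lkdtm_WARNING() into a message-less WARN_ON() and a separate WARN() variant leans on WARN_ON() evaluating its condition and handing the truth value back to the caller. A minimal userspace sketch of that contract (assumption: a simplified stand-in for the kernel macro, not the real <asm-generic/bug.h> implementation):

```c
#include <assert.h>
#include <stdio.h>

/* Userspace stand-in for the kernel's WARN_ON(): print a warning when
 * the condition is non-zero, and return the condition's truth value so
 * callers can write "if (WARN_ON(x)) return;". Uses a GNU statement
 * expression, as the kernel macro does. */
#define WARN_ON(cond) ({					\
	int __warned = !!(cond);				\
	if (__warned)						\
		fprintf(stderr, "WARNING: %s\n", #cond);	\
	__warned;						\
})

static int warn_counter;

/* Mirrors lkdtm_WARNING(): pre-incrementing makes the condition
 * non-zero on every call, so the warning fires each time. */
int trigger_warning(void)
{
	return WARN_ON(++warn_counter);
}
```

Because the pre-increment never yields zero, each call both warns and returns 1, which is exactly why the lkdtm test can drop the explicit `WARN(1, ...)` form.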
+42
drivers/misc/lkdtm/cfi.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * This is for all the tests relating directly to Control Flow Integrity.
+ */
+#include "lkdtm.h"
+
+static int called_count;
+
+/* Function taking one argument, without a return value. */
+static noinline void lkdtm_increment_void(int *counter)
+{
+	(*counter)++;
+}
+
+/* Function taking one argument, returning int. */
+static noinline int lkdtm_increment_int(int *counter)
+{
+	(*counter)++;
+
+	return *counter;
+}
+/*
+ * This tries to call an indirect function with a mismatched prototype.
+ */
+void lkdtm_CFI_FORWARD_PROTO(void)
+{
+	/*
+	 * Matches lkdtm_increment_void()'s prototype, but not
+	 * lkdtm_increment_int()'s prototype.
+	 */
+	void (*func)(int *);
+
+	pr_info("Calling matched prototype ...\n");
+	func = lkdtm_increment_void;
+	func(&called_count);
+
+	pr_info("Calling mismatched prototype ...\n");
+	func = (void *)lkdtm_increment_int;
+	func(&called_count);
+
+	pr_info("Fail: survived mismatched prototype function call!\n");
+}
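The new CFI_FORWARD_PROTO test only traps on kernels built with forward-edge CFI instrumentation (e.g. Clang's scheme); without it, the mismatched indirect call goes through and the "Fail: survived" line prints. A userspace sketch of that uninstrumented baseline (assumptions: no CFI, and a typical C ABI where the discarded return value is harmless — the mismatched call is still undefined behaviour in ISO C, which is the whole point of the test):

```c
#include <assert.h>

static int called_count;

/* Same shape as lkdtm's helpers: one matching, one mismatched prototype. */
static void increment_void(int *counter) { (*counter)++; }
static int increment_int(int *counter) { (*counter)++; return *counter; }

/* Without CFI, both indirect calls land; the caller simply never looks
 * at increment_int()'s return value. With CFI, the second call would be
 * rejected because the function's type signature does not match. */
int run_forward_proto(void)
{
	void (*func)(int *);

	func = increment_void;
	func(&called_count);			/* matched prototype */

	func = (void (*)(int *))increment_int;
	func(&called_count);			/* mismatched prototype */

	return called_count;
}
```

On an uninstrumented build both calls increment the counter, mirroring the failure path the lkdtm test reports.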
+2
drivers/misc/lkdtm/core.c
···
 	CRASHTYPE(PANIC),
 	CRASHTYPE(BUG),
 	CRASHTYPE(WARNING),
+	CRASHTYPE(WARNING_MESSAGE),
 	CRASHTYPE(EXCEPTION),
 	CRASHTYPE(LOOP),
 	CRASHTYPE(EXHAUST_STACK),
···
 	CRASHTYPE(USERCOPY_KERNEL),
 	CRASHTYPE(USERCOPY_KERNEL_DS),
 	CRASHTYPE(STACKLEAK_ERASING),
+	CRASHTYPE(CFI_FORWARD_PROTO),
 };
+4
drivers/misc/lkdtm/lkdtm.h
···
 void lkdtm_PANIC(void);
 void lkdtm_BUG(void);
 void lkdtm_WARNING(void);
+void lkdtm_WARNING_MESSAGE(void);
 void lkdtm_EXCEPTION(void);
 void lkdtm_LOOP(void);
 void lkdtm_EXHAUST_STACK(void);
···
 
 /* lkdtm_stackleak.c */
 void lkdtm_STACKLEAK_ERASING(void);
+
+/* cfi.c */
+void lkdtm_CFI_FORWARD_PROTO(void);
 
 #endif
+8 -11
drivers/misc/mei/pci-me.c
···
 #ifdef CONFIG_PM
 static int mei_me_pm_runtime_idle(struct device *device)
 {
-	struct pci_dev *pdev = to_pci_dev(device);
 	struct mei_device *dev;
 
-	dev_dbg(&pdev->dev, "rpm: me: runtime_idle\n");
+	dev_dbg(device, "rpm: me: runtime_idle\n");
 
-	dev = pci_get_drvdata(pdev);
+	dev = dev_get_drvdata(device);
 	if (!dev)
 		return -ENODEV;
 	if (mei_write_is_idle(dev))
···
 
 static int mei_me_pm_runtime_suspend(struct device *device)
 {
-	struct pci_dev *pdev = to_pci_dev(device);
 	struct mei_device *dev;
 	int ret;
 
-	dev_dbg(&pdev->dev, "rpm: me: runtime suspend\n");
+	dev_dbg(device, "rpm: me: runtime suspend\n");
 
-	dev = pci_get_drvdata(pdev);
+	dev = dev_get_drvdata(device);
 	if (!dev)
 		return -ENODEV;
 
···
 
 	mutex_unlock(&dev->device_lock);
 
-	dev_dbg(&pdev->dev, "rpm: me: runtime suspend ret=%d\n", ret);
+	dev_dbg(device, "rpm: me: runtime suspend ret=%d\n", ret);
 
 	if (ret && ret != -EAGAIN)
 		schedule_work(&dev->reset_work);
···
 
 static int mei_me_pm_runtime_resume(struct device *device)
 {
-	struct pci_dev *pdev = to_pci_dev(device);
 	struct mei_device *dev;
 	int ret;
 
-	dev_dbg(&pdev->dev, "rpm: me: runtime resume\n");
+	dev_dbg(device, "rpm: me: runtime resume\n");
 
-	dev = pci_get_drvdata(pdev);
+	dev = dev_get_drvdata(device);
 	if (!dev)
 		return -ENODEV;
 
···
 
 	mutex_unlock(&dev->device_lock);
 
-	dev_dbg(&pdev->dev, "rpm: me: runtime resume ret = %d\n", ret);
+	dev_dbg(device, "rpm: me: runtime resume ret = %d\n", ret);
 
 	if (ret)
 		schedule_work(&dev->reset_work);
+8 -11
drivers/misc/mei/pci-txe.c
···
 #ifdef CONFIG_PM
 static int mei_txe_pm_runtime_idle(struct device *device)
 {
-	struct pci_dev *pdev = to_pci_dev(device);
 	struct mei_device *dev;
 
-	dev_dbg(&pdev->dev, "rpm: txe: runtime_idle\n");
+	dev_dbg(device, "rpm: txe: runtime_idle\n");
 
-	dev = pci_get_drvdata(pdev);
+	dev = dev_get_drvdata(device);
 	if (!dev)
 		return -ENODEV;
 	if (mei_write_is_idle(dev))
···
 }
 
 static int mei_txe_pm_runtime_suspend(struct device *device)
 {
-	struct pci_dev *pdev = to_pci_dev(device);
 	struct mei_device *dev;
 	int ret;
 
-	dev_dbg(&pdev->dev, "rpm: txe: runtime suspend\n");
+	dev_dbg(device, "rpm: txe: runtime suspend\n");
 
-	dev = pci_get_drvdata(pdev);
+	dev = dev_get_drvdata(device);
 	if (!dev)
 		return -ENODEV;
 
···
 
 	/* keep irq on we are staying in D0 */
 
-	dev_dbg(&pdev->dev, "rpm: txe: runtime suspend ret=%d\n", ret);
+	dev_dbg(device, "rpm: txe: runtime suspend ret=%d\n", ret);
 
 	mutex_unlock(&dev->device_lock);
 
···
 
 static int mei_txe_pm_runtime_resume(struct device *device)
 {
-	struct pci_dev *pdev = to_pci_dev(device);
 	struct mei_device *dev;
 	int ret;
 
-	dev_dbg(&pdev->dev, "rpm: txe: runtime resume\n");
+	dev_dbg(device, "rpm: txe: runtime resume\n");
 
-	dev = pci_get_drvdata(pdev);
+	dev = dev_get_drvdata(device);
 	if (!dev)
 		return -ENODEV;
 
···
 
 	mutex_unlock(&dev->device_lock);
 
-	dev_dbg(&pdev->dev, "rpm: txe: runtime resume ret = %d\n", ret);
+	dev_dbg(device, "rpm: txe: runtime resume ret = %d\n", ret);
 
 	if (ret)
 		schedule_work(&dev->reset_work);
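Both mei conversions rely on `pci_get_drvdata(pdev)` being nothing more than `dev_get_drvdata(&pdev->dev)`, so a runtime-PM callback that already receives a `struct device *` never needs the `to_pci_dev()` round trip. A userspace sketch of that equivalence (assumption: trimmed-down models of `struct device` and `struct pci_dev`, not the real kernel definitions):

```c
#include <assert.h>

/* Trimmed-down models of the kernel structures. */
struct device {
	void *driver_data;
};

struct pci_dev {
	struct device dev;	/* embedded, as in the kernel */
};

static void dev_set_drvdata(struct device *dev, void *data)
{
	dev->driver_data = data;
}

static void *dev_get_drvdata(const struct device *dev)
{
	return dev->driver_data;
}

/* pci_get_drvdata() is only a convenience wrapper over the embedded
 * struct device, which is why the mei callbacks can drop to_pci_dev(). */
static void *pci_get_drvdata(struct pci_dev *pdev)
{
	return dev_get_drvdata(&pdev->dev);
}

/* Store a payload through the device and read it back both ways. */
int drvdata_roundtrip_ok(void)
{
	static struct pci_dev pdev;
	static int payload;

	dev_set_drvdata(&pdev.dev, &payload);

	return pci_get_drvdata(&pdev) == &payload &&
	       dev_get_drvdata(&pdev.dev) == &payload;
}
```

Since both accessors read the same `driver_data` field, dropping the `pdev` temporary changes no behaviour, only removes noise.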
+12 -16
drivers/misc/mic/card/mic_x100.c
···
 	mdrv->dev = &pdev->dev;
 	snprintf(mdrv->name, sizeof(mic_driver_name), mic_driver_name);
 
+	/* FIXME: use dma_set_mask_and_coherent() and check result */
+	dma_coerce_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(64));
+
 	mdev->mmio.pa = MIC_X100_MMIO_BASE;
 	mdev->mmio.len = MIC_X100_MMIO_LEN;
 	mdev->mmio.va = devm_ioremap(&pdev->dev, MIC_X100_MMIO_BASE,
···
 	mic_remove(pdev);
 }
 
-static u64 mic_dma_mask = DMA_BIT_MASK(64);
-
-static struct platform_device mic_platform_dev = {
-	.name = mic_driver_name,
-	.id = 0,
-	.num_resources = 0,
-	.dev = {
-		.dma_mask = &mic_dma_mask,
-		.coherent_dma_mask = DMA_BIT_MASK(64),
-	},
-};
-
 static struct platform_driver __refdata mic_platform_driver = {
 	.probe = mic_probe,
 	.remove = mic_remove,
···
 		.name = mic_driver_name,
 	},
 };
+
+static struct platform_device *mic_platform_dev;
 
 static int __init mic_init(void)
 {
···
 
 	request_module("mic_x100_dma");
 	mic_init_card_debugfs();
-	ret = platform_device_register(&mic_platform_dev);
+
+	mic_platform_dev = platform_device_register_simple(mic_driver_name,
+							   0, NULL, 0);
+	ret = PTR_ERR_OR_ZERO(mic_platform_dev);
 	if (ret) {
-		pr_err("platform_device_register ret %d\n", ret);
+		pr_err("platform_device_register_full ret %d\n", ret);
 		goto cleanup_debugfs;
 	}
 	ret = platform_driver_register(&mic_platform_driver);
···
 	return ret;
 
 device_unregister:
-	platform_device_unregister(&mic_platform_dev);
+	platform_device_unregister(mic_platform_dev);
 cleanup_debugfs:
 	mic_exit_card_debugfs();
 done:
···
 static void __exit mic_exit(void)
 {
 	platform_driver_unregister(&mic_platform_driver);
-	platform_device_unregister(&mic_platform_dev);
+	platform_device_unregister(mic_platform_dev);
 	mic_exit_card_debugfs();
 }
+2 -3
drivers/misc/mic/scif/scif_epd.h
···
 static inline int scif_anon_inode_getfile(scif_epd_t epd)
 {
 	epd->anon = anon_inode_getfile("scif", &scif_anon_fops, NULL, 0);
-	if (IS_ERR(epd->anon))
-		return PTR_ERR(epd->anon);
-	return 0;
+
+	return PTR_ERR_OR_ZERO(epd->anon);
 }
 
 static inline void scif_anon_inode_fput(scif_epd_t epd)
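The scif and mic cleanups both replace the open-coded `IS_ERR()`/`PTR_ERR()` dance with `PTR_ERR_OR_ZERO()`. A userspace sketch of the convention behind it (assumption: a simplified version of the ERR_PTR encoding, not the real `<linux/err.h>` implementation):

```c
#include <assert.h>

#define MAX_ERRNO 4095

/* Errors travel as pointers: the top MAX_ERRNO values of the address
 * space encode negative errno values, which no valid pointer can hit. */
static void *ERR_PTR(long error)
{
	return (void *)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* The helper the diff switches to: collapses
 *	if (IS_ERR(p))
 *		return PTR_ERR(p);
 *	return 0;
 * into a single expression. */
static long PTR_ERR_OR_ZERO(const void *ptr)
{
	if (IS_ERR(ptr))
		return PTR_ERR(ptr);
	return 0;
}
```

With this helper, `scif_anon_inode_getfile()` reduces to one return statement, which is exactly the shape of the cleanup above.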
+2 -2
drivers/misc/sgi-xp/xpc_uv.c
···
 	if (gru_mq_desc == NULL) {
 		gru_mq_desc = kmalloc(sizeof(struct
 					gru_message_queue_desc),
-				      GFP_KERNEL);
+				      GFP_ATOMIC);
 		if (gru_mq_desc == NULL) {
 			ret = xpNoMemory;
 			goto done;
···
 	XPC_DEACTIVATE_PARTITION(&xpc_partitions[ch->partid], ret);
 }
 
-static struct xpc_arch_operations xpc_arch_ops_uv = {
+static const struct xpc_arch_operations xpc_arch_ops_uv = {
 	.setup_partitions = xpc_setup_partitions_uv,
 	.teardown_partitions = xpc_teardown_partitions_uv,
 	.process_activate_IRQ_rcvd = xpc_process_activate_IRQ_rcvd_uv,
-797
drivers/misc/spear13xx_pcie_gadget.c
··· 1 - /* 2 - * drivers/misc/spear13xx_pcie_gadget.c 3 - * 4 - * Copyright (C) 2010 ST Microelectronics 5 - * Pratyush Anand<pratyush.anand@gmail.com> 6 - * 7 - * This file is licensed under the terms of the GNU General Public 8 - * License version 2. This program is licensed "as is" without any 9 - * warranty of any kind, whether express or implied. 10 - */ 11 - 12 - #include <linux/device.h> 13 - #include <linux/clk.h> 14 - #include <linux/slab.h> 15 - #include <linux/delay.h> 16 - #include <linux/io.h> 17 - #include <linux/interrupt.h> 18 - #include <linux/irq.h> 19 - #include <linux/kernel.h> 20 - #include <linux/module.h> 21 - #include <linux/platform_device.h> 22 - #include <linux/pci_regs.h> 23 - #include <linux/configfs.h> 24 - #include <mach/pcie.h> 25 - #include <mach/misc_regs.h> 26 - 27 - #define IN0_MEM_SIZE (200 * 1024 * 1024 - 1) 28 - /* In current implementation address translation is done using IN0 only. 29 - * So IN1 start address and IN0 end address has been kept same 30 - */ 31 - #define IN1_MEM_SIZE (0 * 1024 * 1024 - 1) 32 - #define IN_IO_SIZE (20 * 1024 * 1024 - 1) 33 - #define IN_CFG0_SIZE (12 * 1024 * 1024 - 1) 34 - #define IN_CFG1_SIZE (12 * 1024 * 1024 - 1) 35 - #define IN_MSG_SIZE (12 * 1024 * 1024 - 1) 36 - /* Keep default BAR size as 4K*/ 37 - /* AORAM would be mapped by default*/ 38 - #define INBOUND_ADDR_MASK (SPEAR13XX_SYSRAM1_SIZE - 1) 39 - 40 - #define INT_TYPE_NO_INT 0 41 - #define INT_TYPE_INTX 1 42 - #define INT_TYPE_MSI 2 43 - struct spear_pcie_gadget_config { 44 - void __iomem *base; 45 - void __iomem *va_app_base; 46 - void __iomem *va_dbi_base; 47 - char int_type[10]; 48 - ulong requested_msi; 49 - ulong configured_msi; 50 - ulong bar0_size; 51 - ulong bar0_rw_offset; 52 - void __iomem *va_bar0_address; 53 - }; 54 - 55 - struct pcie_gadget_target { 56 - struct configfs_subsystem subsys; 57 - struct spear_pcie_gadget_config config; 58 - }; 59 - 60 - struct pcie_gadget_target_attr { 61 - struct configfs_attribute attr; 62 - 
ssize_t (*show)(struct spear_pcie_gadget_config *config, 63 - char *buf); 64 - ssize_t (*store)(struct spear_pcie_gadget_config *config, 65 - const char *buf, 66 - size_t count); 67 - }; 68 - 69 - static void enable_dbi_access(struct pcie_app_reg __iomem *app_reg) 70 - { 71 - /* Enable DBI access */ 72 - writel(readl(&app_reg->slv_armisc) | (1 << AXI_OP_DBI_ACCESS_ID), 73 - &app_reg->slv_armisc); 74 - writel(readl(&app_reg->slv_awmisc) | (1 << AXI_OP_DBI_ACCESS_ID), 75 - &app_reg->slv_awmisc); 76 - 77 - } 78 - 79 - static void disable_dbi_access(struct pcie_app_reg __iomem *app_reg) 80 - { 81 - /* disable DBI access */ 82 - writel(readl(&app_reg->slv_armisc) & ~(1 << AXI_OP_DBI_ACCESS_ID), 83 - &app_reg->slv_armisc); 84 - writel(readl(&app_reg->slv_awmisc) & ~(1 << AXI_OP_DBI_ACCESS_ID), 85 - &app_reg->slv_awmisc); 86 - 87 - } 88 - 89 - static void spear_dbi_read_reg(struct spear_pcie_gadget_config *config, 90 - int where, int size, u32 *val) 91 - { 92 - struct pcie_app_reg __iomem *app_reg = config->va_app_base; 93 - ulong va_address; 94 - 95 - /* Enable DBI access */ 96 - enable_dbi_access(app_reg); 97 - 98 - va_address = (ulong)config->va_dbi_base + (where & ~0x3); 99 - 100 - *val = readl(va_address); 101 - 102 - if (size == 1) 103 - *val = (*val >> (8 * (where & 3))) & 0xff; 104 - else if (size == 2) 105 - *val = (*val >> (8 * (where & 3))) & 0xffff; 106 - 107 - /* Disable DBI access */ 108 - disable_dbi_access(app_reg); 109 - } 110 - 111 - static void spear_dbi_write_reg(struct spear_pcie_gadget_config *config, 112 - int where, int size, u32 val) 113 - { 114 - struct pcie_app_reg __iomem *app_reg = config->va_app_base; 115 - ulong va_address; 116 - 117 - /* Enable DBI access */ 118 - enable_dbi_access(app_reg); 119 - 120 - va_address = (ulong)config->va_dbi_base + (where & ~0x3); 121 - 122 - if (size == 4) 123 - writel(val, va_address); 124 - else if (size == 2) 125 - writew(val, va_address + (where & 2)); 126 - else if (size == 1) 127 - writeb(val, va_address 
+ (where & 3)); 128 - 129 - /* Disable DBI access */ 130 - disable_dbi_access(app_reg); 131 - } 132 - 133 - #define PCI_FIND_CAP_TTL 48 134 - 135 - static int pci_find_own_next_cap_ttl(struct spear_pcie_gadget_config *config, 136 - u32 pos, int cap, int *ttl) 137 - { 138 - u32 id; 139 - 140 - while ((*ttl)--) { 141 - spear_dbi_read_reg(config, pos, 1, &pos); 142 - if (pos < 0x40) 143 - break; 144 - pos &= ~3; 145 - spear_dbi_read_reg(config, pos + PCI_CAP_LIST_ID, 1, &id); 146 - if (id == 0xff) 147 - break; 148 - if (id == cap) 149 - return pos; 150 - pos += PCI_CAP_LIST_NEXT; 151 - } 152 - return 0; 153 - } 154 - 155 - static int pci_find_own_next_cap(struct spear_pcie_gadget_config *config, 156 - u32 pos, int cap) 157 - { 158 - int ttl = PCI_FIND_CAP_TTL; 159 - 160 - return pci_find_own_next_cap_ttl(config, pos, cap, &ttl); 161 - } 162 - 163 - static int pci_find_own_cap_start(struct spear_pcie_gadget_config *config, 164 - u8 hdr_type) 165 - { 166 - u32 status; 167 - 168 - spear_dbi_read_reg(config, PCI_STATUS, 2, &status); 169 - if (!(status & PCI_STATUS_CAP_LIST)) 170 - return 0; 171 - 172 - switch (hdr_type) { 173 - case PCI_HEADER_TYPE_NORMAL: 174 - case PCI_HEADER_TYPE_BRIDGE: 175 - return PCI_CAPABILITY_LIST; 176 - case PCI_HEADER_TYPE_CARDBUS: 177 - return PCI_CB_CAPABILITY_LIST; 178 - default: 179 - return 0; 180 - } 181 - 182 - return 0; 183 - } 184 - 185 - /* 186 - * Tell if a device supports a given PCI capability. 187 - * Returns the address of the requested capability structure within the 188 - * device's PCI configuration space or 0 in case the device does not 189 - * support it. 
Possible values for @cap: 190 - * 191 - * %PCI_CAP_ID_PM Power Management 192 - * %PCI_CAP_ID_AGP Accelerated Graphics Port 193 - * %PCI_CAP_ID_VPD Vital Product Data 194 - * %PCI_CAP_ID_SLOTID Slot Identification 195 - * %PCI_CAP_ID_MSI Message Signalled Interrupts 196 - * %PCI_CAP_ID_CHSWP CompactPCI HotSwap 197 - * %PCI_CAP_ID_PCIX PCI-X 198 - * %PCI_CAP_ID_EXP PCI Express 199 - */ 200 - static int pci_find_own_capability(struct spear_pcie_gadget_config *config, 201 - int cap) 202 - { 203 - u32 pos; 204 - u32 hdr_type; 205 - 206 - spear_dbi_read_reg(config, PCI_HEADER_TYPE, 1, &hdr_type); 207 - 208 - pos = pci_find_own_cap_start(config, hdr_type); 209 - if (pos) 210 - pos = pci_find_own_next_cap(config, pos, cap); 211 - 212 - return pos; 213 - } 214 - 215 - static irqreturn_t spear_pcie_gadget_irq(int irq, void *dev_id) 216 - { 217 - return 0; 218 - } 219 - 220 - /* 221 - * configfs interfaces show/store functions 222 - */ 223 - 224 - static struct pcie_gadget_target *to_target(struct config_item *item) 225 - { 226 - return item ? 
227 - container_of(to_configfs_subsystem(to_config_group(item)), 228 - struct pcie_gadget_target, subsys) : NULL; 229 - } 230 - 231 - static ssize_t pcie_gadget_link_show(struct config_item *item, char *buf) 232 - { 233 - struct pcie_app_reg __iomem *app_reg = to_target(item)->va_app_base; 234 - 235 - if (readl(&app_reg->app_status_1) & ((u32)1 << XMLH_LINK_UP_ID)) 236 - return sprintf(buf, "UP"); 237 - else 238 - return sprintf(buf, "DOWN"); 239 - } 240 - 241 - static ssize_t pcie_gadget_link_store(struct config_item *item, 242 - const char *buf, size_t count) 243 - { 244 - struct pcie_app_reg __iomem *app_reg = to_target(item)->va_app_base; 245 - 246 - if (sysfs_streq(buf, "UP")) 247 - writel(readl(&app_reg->app_ctrl_0) | (1 << APP_LTSSM_ENABLE_ID), 248 - &app_reg->app_ctrl_0); 249 - else if (sysfs_streq(buf, "DOWN")) 250 - writel(readl(&app_reg->app_ctrl_0) 251 - & ~(1 << APP_LTSSM_ENABLE_ID), 252 - &app_reg->app_ctrl_0); 253 - else 254 - return -EINVAL; 255 - return count; 256 - } 257 - 258 - static ssize_t pcie_gadget_int_type_show(struct config_item *item, char *buf) 259 - { 260 - return sprintf(buf, "%s", to_target(item)->int_type); 261 - } 262 - 263 - static ssize_t pcie_gadget_int_type_store(struct config_item *item, 264 - const char *buf, size_t count) 265 - { 266 - struct spear_pcie_gadget_config *config = to_target(item) 267 - u32 cap, vec, flags; 268 - ulong vector; 269 - 270 - if (sysfs_streq(buf, "INTA")) 271 - spear_dbi_write_reg(config, PCI_INTERRUPT_LINE, 1, 1); 272 - 273 - else if (sysfs_streq(buf, "MSI")) { 274 - vector = config->requested_msi; 275 - vec = 0; 276 - while (vector > 1) { 277 - vector /= 2; 278 - vec++; 279 - } 280 - spear_dbi_write_reg(config, PCI_INTERRUPT_LINE, 1, 0); 281 - cap = pci_find_own_capability(config, PCI_CAP_ID_MSI); 282 - spear_dbi_read_reg(config, cap + PCI_MSI_FLAGS, 1, &flags); 283 - flags &= ~PCI_MSI_FLAGS_QMASK; 284 - flags |= vec << 1; 285 - spear_dbi_write_reg(config, cap + PCI_MSI_FLAGS, 1, flags); 286 - } 
else 287 - return -EINVAL; 288 - 289 - strcpy(config->int_type, buf); 290 - 291 - return count; 292 - } 293 - 294 - static ssize_t pcie_gadget_no_of_msi_show(struct config_item *item, char *buf) 295 - { 296 - struct spear_pcie_gadget_config *config = to_target(item) 297 - struct pcie_app_reg __iomem *app_reg = to_target(item)->va_app_base; 298 - u32 cap, vec, flags; 299 - ulong vector; 300 - 301 - if ((readl(&app_reg->msg_status) & (1 << CFG_MSI_EN_ID)) 302 - != (1 << CFG_MSI_EN_ID)) 303 - vector = 0; 304 - else { 305 - cap = pci_find_own_capability(config, PCI_CAP_ID_MSI); 306 - spear_dbi_read_reg(config, cap + PCI_MSI_FLAGS, 1, &flags); 307 - flags &= ~PCI_MSI_FLAGS_QSIZE; 308 - vec = flags >> 4; 309 - vector = 1; 310 - while (vec--) 311 - vector *= 2; 312 - } 313 - config->configured_msi = vector; 314 - 315 - return sprintf(buf, "%lu", vector); 316 - } 317 - 318 - static ssize_t pcie_gadget_no_of_msi_store(struct config_item *item, 319 - const char *buf, size_t count) 320 - { 321 - int ret; 322 - 323 - ret = kstrtoul(buf, 0, &to_target(item)->requested_msi); 324 - if (ret) 325 - return ret; 326 - 327 - if (config->requested_msi > 32) 328 - config->requested_msi = 32; 329 - 330 - return count; 331 - } 332 - 333 - static ssize_t pcie_gadget_inta_store(struct config_item *item, 334 - const char *buf, size_t count) 335 - { 336 - struct pcie_app_reg __iomem *app_reg = to_target(item)->va_app_base; 337 - ulong en; 338 - int ret; 339 - 340 - ret = kstrtoul(buf, 0, &en); 341 - if (ret) 342 - return ret; 343 - 344 - if (en) 345 - writel(readl(&app_reg->app_ctrl_0) | (1 << SYS_INT_ID), 346 - &app_reg->app_ctrl_0); 347 - else 348 - writel(readl(&app_reg->app_ctrl_0) & ~(1 << SYS_INT_ID), 349 - &app_reg->app_ctrl_0); 350 - 351 - return count; 352 - } 353 - 354 - static ssize_t pcie_gadget_send_msi_store(struct config_item *item, 355 - const char *buf, size_t count) 356 - { 357 - struct spear_pcie_gadget_config *config = to_target(item) 358 - struct pcie_app_reg __iomem 
*app_reg = config->va_app_base; 359 - ulong vector; 360 - u32 ven_msi; 361 - int ret; 362 - 363 - ret = kstrtoul(buf, 0, &vector); 364 - if (ret) 365 - return ret; 366 - 367 - if (!config->configured_msi) 368 - return -EINVAL; 369 - 370 - if (vector >= config->configured_msi) 371 - return -EINVAL; 372 - 373 - ven_msi = readl(&app_reg->ven_msi_1); 374 - ven_msi &= ~VEN_MSI_FUN_NUM_MASK; 375 - ven_msi |= 0 << VEN_MSI_FUN_NUM_ID; 376 - ven_msi &= ~VEN_MSI_TC_MASK; 377 - ven_msi |= 0 << VEN_MSI_TC_ID; 378 - ven_msi &= ~VEN_MSI_VECTOR_MASK; 379 - ven_msi |= vector << VEN_MSI_VECTOR_ID; 380 - 381 - /* generating interrupt for msi vector */ 382 - ven_msi |= VEN_MSI_REQ_EN; 383 - writel(ven_msi, &app_reg->ven_msi_1); 384 - udelay(1); 385 - ven_msi &= ~VEN_MSI_REQ_EN; 386 - writel(ven_msi, &app_reg->ven_msi_1); 387 - 388 - return count; 389 - } 390 - 391 - static ssize_t pcie_gadget_vendor_id_show(struct config_item *item, char *buf) 392 - { 393 - u32 id; 394 - 395 - spear_dbi_read_reg(to_target(item), PCI_VENDOR_ID, 2, &id); 396 - 397 - return sprintf(buf, "%x", id); 398 - } 399 - 400 - static ssize_t pcie_gadget_vendor_id_store(struct config_item *item, 401 - const char *buf, size_t count) 402 - { 403 - ulong id; 404 - int ret; 405 - 406 - ret = kstrtoul(buf, 0, &id); 407 - if (ret) 408 - return ret; 409 - 410 - spear_dbi_write_reg(to_target(item), PCI_VENDOR_ID, 2, id); 411 - 412 - return count; 413 - } 414 - 415 - static ssize_t pcie_gadget_device_id_show(struct config_item *item, char *buf) 416 - { 417 - u32 id; 418 - 419 - spear_dbi_read_reg(to_target(item), PCI_DEVICE_ID, 2, &id); 420 - 421 - return sprintf(buf, "%x", id); 422 - } 423 - 424 - static ssize_t pcie_gadget_device_id_store(struct config_item *item, 425 - const char *buf, size_t count) 426 - { 427 - ulong id; 428 - int ret; 429 - 430 - ret = kstrtoul(buf, 0, &id); 431 - if (ret) 432 - return ret; 433 - 434 - spear_dbi_write_reg(to_target(item), PCI_DEVICE_ID, 2, id); 435 - 436 - return count; 437 - } 438 - 
439 - static ssize_t pcie_gadget_bar0_size_show(struct config_item *item, char *buf) 440 - { 441 - return sprintf(buf, "%lx", to_target(item)->bar0_size); 442 - } 443 - 444 - static ssize_t pcie_gadget_bar0_size_store(struct config_item *item, 445 - const char *buf, size_t count) 446 - { 447 - struct spear_pcie_gadget_config *config = to_target(item) 448 - ulong size; 449 - u32 pos, pos1; 450 - u32 no_of_bit = 0; 451 - int ret; 452 - 453 - ret = kstrtoul(buf, 0, &size); 454 - if (ret) 455 - return ret; 456 - 457 - /* min bar size is 256 */ 458 - if (size <= 0x100) 459 - size = 0x100; 460 - /* max bar size is 1MB*/ 461 - else if (size >= 0x100000) 462 - size = 0x100000; 463 - else { 464 - pos = 0; 465 - pos1 = 0; 466 - while (pos < 21) { 467 - pos = find_next_bit((ulong *)&size, 21, pos); 468 - if (pos != 21) 469 - pos1 = pos + 1; 470 - pos++; 471 - no_of_bit++; 472 - } 473 - if (no_of_bit == 2) 474 - pos1--; 475 - 476 - size = 1 << pos1; 477 - } 478 - config->bar0_size = size; 479 - spear_dbi_write_reg(config, PCIE_BAR0_MASK_REG, 4, size - 1); 480 - 481 - return count; 482 - } 483 - 484 - static ssize_t pcie_gadget_bar0_address_show(struct config_item *item, 485 - char *buf) 486 - { 487 - struct pcie_app_reg __iomem *app_reg = to_target(item)->va_app_base; 488 - 489 - u32 address = readl(&app_reg->pim0_mem_addr_start); 490 - 491 - return sprintf(buf, "%x", address); 492 - } 493 - 494 - static ssize_t pcie_gadget_bar0_address_store(struct config_item *item, 495 - const char *buf, size_t count) 496 - { 497 - struct spear_pcie_gadget_config *config = to_target(item) 498 - struct pcie_app_reg __iomem *app_reg = config->va_app_base; 499 - ulong address; 500 - int ret; 501 - 502 - ret = kstrtoul(buf, 0, &address); 503 - if (ret) 504 - return ret; 505 - 506 - address &= ~(config->bar0_size - 1); 507 - if (config->va_bar0_address) 508 - iounmap(config->va_bar0_address); 509 - config->va_bar0_address = ioremap(address, config->bar0_size); 510 - if (!config->va_bar0_address) 
511 - return -ENOMEM; 512 - 513 - writel(address, &app_reg->pim0_mem_addr_start); 514 - 515 - return count; 516 - } 517 - 518 - static ssize_t pcie_gadget_bar0_rw_offset_show(struct config_item *item, 519 - char *buf) 520 - { 521 - return sprintf(buf, "%lx", to_target(item)->bar0_rw_offset); 522 - } 523 - 524 - static ssize_t pcie_gadget_bar0_rw_offset_store(struct config_item *item, 525 - const char *buf, size_t count) 526 - { 527 - ulong offset; 528 - int ret; 529 - 530 - ret = kstrtoul(buf, 0, &offset); 531 - if (ret) 532 - return ret; 533 - 534 - if (offset % 4) 535 - return -EINVAL; 536 - 537 - to_target(item)->bar0_rw_offset = offset; 538 - 539 - return count; 540 - } 541 - 542 - static ssize_t pcie_gadget_bar0_data_show(struct config_item *item, char *buf) 543 - { 544 - struct spear_pcie_gadget_config *config = to_target(item) 545 - ulong data; 546 - 547 - if (!config->va_bar0_address) 548 - return -ENOMEM; 549 - 550 - data = readl((ulong)config->va_bar0_address + config->bar0_rw_offset); 551 - 552 - return sprintf(buf, "%lx", data); 553 - } 554 - 555 - static ssize_t pcie_gadget_bar0_data_store(struct config_item *item, 556 - const char *buf, size_t count) 557 - { 558 - struct spear_pcie_gadget_config *config = to_target(item) 559 - ulong data; 560 - int ret; 561 - 562 - ret = kstrtoul(buf, 0, &data); 563 - if (ret) 564 - return ret; 565 - 566 - if (!config->va_bar0_address) 567 - return -ENOMEM; 568 - 569 - writel(data, (ulong)config->va_bar0_address + config->bar0_rw_offset); 570 - 571 - return count; 572 - } 573 - 574 - CONFIGFS_ATTR(pcie_gadget_, link); 575 - CONFIGFS_ATTR(pcie_gadget_, int_type); 576 - CONFIGFS_ATTR(pcie_gadget_, no_of_msi); 577 - CONFIGFS_ATTR_WO(pcie_gadget_, inta); 578 - CONFIGFS_ATTR_WO(pcie_gadget_, send_msi); 579 - CONFIGFS_ATTR(pcie_gadget_, vendor_id); 580 - CONFIGFS_ATTR(pcie_gadget_, device_id); 581 - CONFIGFS_ATTR(pcie_gadget_, bar0_size); 582 - CONFIGFS_ATTR(pcie_gadget_, bar0_address); 583 - CONFIGFS_ATTR(pcie_gadget_, 
bar0_rw_offset); 584 - CONFIGFS_ATTR(pcie_gadget_, bar0_data); 585 - 586 - static struct configfs_attribute *pcie_gadget_target_attrs[] = { 587 - &pcie_gadget_attr_link, 588 - &pcie_gadget_attr_int_type, 589 - &pcie_gadget_attr_no_of_msi, 590 - &pcie_gadget_attr_inta, 591 - &pcie_gadget_attr_send_msi, 592 - &pcie_gadget_attr_vendor_id, 593 - &pcie_gadget_attr_device_id, 594 - &pcie_gadget_attr_bar0_size, 595 - &pcie_gadget_attr_bar0_address, 596 - &pcie_gadget_attr_bar0_rw_offset, 597 - &pcie_gadget_attr_bar0_data, 598 - NULL, 599 - }; 600 - 601 - static struct config_item_type pcie_gadget_target_type = { 602 - .ct_attrs = pcie_gadget_target_attrs, 603 - .ct_owner = THIS_MODULE, 604 - }; 605 - 606 - static void spear13xx_pcie_device_init(struct spear_pcie_gadget_config *config) 607 - { 608 - struct pcie_app_reg __iomem *app_reg = config->va_app_base; 609 - 610 - /*setup registers for outbound translation */ 611 - 612 - writel(config->base, &app_reg->in0_mem_addr_start); 613 - writel(app_reg->in0_mem_addr_start + IN0_MEM_SIZE, 614 - &app_reg->in0_mem_addr_limit); 615 - writel(app_reg->in0_mem_addr_limit + 1, &app_reg->in1_mem_addr_start); 616 - writel(app_reg->in1_mem_addr_start + IN1_MEM_SIZE, 617 - &app_reg->in1_mem_addr_limit); 618 - writel(app_reg->in1_mem_addr_limit + 1, &app_reg->in_io_addr_start); 619 - writel(app_reg->in_io_addr_start + IN_IO_SIZE, 620 - &app_reg->in_io_addr_limit); 621 - writel(app_reg->in_io_addr_limit + 1, &app_reg->in_cfg0_addr_start); 622 - writel(app_reg->in_cfg0_addr_start + IN_CFG0_SIZE, 623 - &app_reg->in_cfg0_addr_limit); 624 - writel(app_reg->in_cfg0_addr_limit + 1, &app_reg->in_cfg1_addr_start); 625 - writel(app_reg->in_cfg1_addr_start + IN_CFG1_SIZE, 626 - &app_reg->in_cfg1_addr_limit); 627 - writel(app_reg->in_cfg1_addr_limit + 1, &app_reg->in_msg_addr_start); 628 - writel(app_reg->in_msg_addr_start + IN_MSG_SIZE, 629 - &app_reg->in_msg_addr_limit); 630 - 631 - writel(app_reg->in0_mem_addr_start, &app_reg->pom0_mem_addr_start); 
632 - writel(app_reg->in1_mem_addr_start, &app_reg->pom1_mem_addr_start); 633 - writel(app_reg->in_io_addr_start, &app_reg->pom_io_addr_start); 634 - 635 - /*setup registers for inbound translation */ 636 - 637 - /* Keep AORAM mapped at BAR0 as default */ 638 - config->bar0_size = INBOUND_ADDR_MASK + 1; 639 - spear_dbi_write_reg(config, PCIE_BAR0_MASK_REG, 4, INBOUND_ADDR_MASK); 640 - spear_dbi_write_reg(config, PCI_BASE_ADDRESS_0, 4, 0xC); 641 - config->va_bar0_address = ioremap(SPEAR13XX_SYSRAM1_BASE, 642 - config->bar0_size); 643 - 644 - writel(SPEAR13XX_SYSRAM1_BASE, &app_reg->pim0_mem_addr_start); 645 - writel(0, &app_reg->pim1_mem_addr_start); 646 - writel(INBOUND_ADDR_MASK + 1, &app_reg->mem0_addr_offset_limit); 647 - 648 - writel(0x0, &app_reg->pim_io_addr_start); 649 - writel(0x0, &app_reg->pim_io_addr_start); 650 - writel(0x0, &app_reg->pim_rom_addr_start); 651 - 652 - writel(DEVICE_TYPE_EP | (1 << MISCTRL_EN_ID) 653 - | ((u32)1 << REG_TRANSLATION_ENABLE), 654 - &app_reg->app_ctrl_0); 655 - /* disable all rx interrupts */ 656 - writel(0, &app_reg->int_mask); 657 - 658 - /* Select INTA as default*/ 659 - spear_dbi_write_reg(config, PCI_INTERRUPT_LINE, 1, 1); 660 - } 661 - 662 - static int spear_pcie_gadget_probe(struct platform_device *pdev) 663 - { 664 - struct resource *res0, *res1; 665 - unsigned int status = 0; 666 - int irq; 667 - struct clk *clk; 668 - static struct pcie_gadget_target *target; 669 - struct spear_pcie_gadget_config *config; 670 - struct config_item *cg_item; 671 - struct configfs_subsystem *subsys; 672 - 673 - target = devm_kzalloc(&pdev->dev, sizeof(*target), GFP_KERNEL); 674 - if (!target) { 675 - dev_err(&pdev->dev, "out of memory\n"); 676 - return -ENOMEM; 677 - } 678 - 679 - cg_item = &target->subsys.su_group.cg_item; 680 - sprintf(cg_item->ci_namebuf, "pcie_gadget.%d", pdev->id); 681 - cg_item->ci_type = &pcie_gadget_target_type; 682 - config = &target->config; 683 - 684 - /* get resource for application registers*/ 685 - res0 = 
platform_get_resource(pdev, IORESOURCE_MEM, 0); 686 - config->va_app_base = devm_ioremap_resource(&pdev->dev, res0); 687 - if (IS_ERR(config->va_app_base)) { 688 - dev_err(&pdev->dev, "ioremap fail\n"); 689 - return PTR_ERR(config->va_app_base); 690 - } 691 - 692 - /* get resource for dbi registers*/ 693 - res1 = platform_get_resource(pdev, IORESOURCE_MEM, 1); 694 - config->base = (void __iomem *)res1->start; 695 - 696 - config->va_dbi_base = devm_ioremap_resource(&pdev->dev, res1); 697 - if (IS_ERR(config->va_dbi_base)) { 698 - dev_err(&pdev->dev, "ioremap fail\n"); 699 - return PTR_ERR(config->va_dbi_base); 700 - } 701 - 702 - platform_set_drvdata(pdev, target); 703 - 704 - irq = platform_get_irq(pdev, 0); 705 - if (irq < 0) { 706 - dev_err(&pdev->dev, "no update irq?\n"); 707 - return irq; 708 - } 709 - 710 - status = devm_request_irq(&pdev->dev, irq, spear_pcie_gadget_irq, 711 - 0, pdev->name, NULL); 712 - if (status) { 713 - dev_err(&pdev->dev, 714 - "pcie gadget interrupt IRQ%d already claimed\n", irq); 715 - return status; 716 - } 717 - 718 - /* Register configfs hooks */ 719 - subsys = &target->subsys; 720 - config_group_init(&subsys->su_group); 721 - mutex_init(&subsys->su_mutex); 722 - status = configfs_register_subsystem(subsys); 723 - if (status) 724 - return status; 725 - 726 - /* 727 - * init basic pcie application registers 728 - * do not enable clock if it is PCIE0.Ideally , all controller should 729 - * have been independent from others with respect to clock. But PCIE1 730 - * and 2 depends on PCIE0.So PCIE0 clk is provided during board init. 731 - */ 732 - if (pdev->id == 1) { 733 - /* 734 - * Ideally CFG Clock should have been also enabled here. 
But 735 - * it is done currently during board init routne 736 - */ 737 - clk = clk_get_sys("pcie1", NULL); 738 - if (IS_ERR(clk)) { 739 - pr_err("%s:couldn't get clk for pcie1\n", __func__); 740 - return PTR_ERR(clk); 741 - } 742 - status = clk_enable(clk); 743 - if (status) { 744 - pr_err("%s:couldn't enable clk for pcie1\n", __func__); 745 - return status; 746 - } 747 - } else if (pdev->id == 2) { 748 - /* 749 - * Ideally CFG Clock should have been also enabled here. But 750 - * it is done currently during board init routne 751 - */ 752 - clk = clk_get_sys("pcie2", NULL); 753 - if (IS_ERR(clk)) { 754 - pr_err("%s:couldn't get clk for pcie2\n", __func__); 755 - return PTR_ERR(clk); 756 - } 757 - status = clk_enable(clk); 758 - if (status) { 759 - pr_err("%s:couldn't enable clk for pcie2\n", __func__); 760 - return status; 761 - } 762 - } 763 - spear13xx_pcie_device_init(config); 764 - 765 - return 0; 766 - } 767 - 768 - static int spear_pcie_gadget_remove(struct platform_device *pdev) 769 - { 770 - static struct pcie_gadget_target *target; 771 - 772 - target = platform_get_drvdata(pdev); 773 - 774 - configfs_unregister_subsystem(&target->subsys); 775 - 776 - return 0; 777 - } 778 - 779 - static void spear_pcie_gadget_shutdown(struct platform_device *pdev) 780 - { 781 - } 782 - 783 - static struct platform_driver spear_pcie_gadget_driver = { 784 - .probe = spear_pcie_gadget_probe, 785 - .remove = spear_pcie_gadget_remove, 786 - .shutdown = spear_pcie_gadget_shutdown, 787 - .driver = { 788 - .name = "pcie-gadget-spear", 789 - .bus = &platform_bus_type 790 - }, 791 - }; 792 - 793 - module_platform_driver(spear_pcie_gadget_driver); 794 - 795 - MODULE_ALIAS("platform:pcie-gadget-spear"); 796 - MODULE_AUTHOR("Pratyush Anand"); 797 - MODULE_LICENSE("GPL");
drivers/misc/xilinx_sdfec.c  +1189 -25
··· 19 19 #include <linux/poll.h> 20 20 #include <linux/slab.h> 21 21 #include <linux/clk.h> 22 + #include <linux/compat.h> 23 + #include <linux/highmem.h> 24 + 25 + #include <uapi/misc/xilinx_sdfec.h> 22 26 23 27 #define DEV_NAME_LEN 12 24 28 25 - static struct idr dev_idr; 26 - static struct mutex dev_idr_lock; 29 + static DEFINE_IDA(dev_nrs); 30 + 31 + /* Xilinx SDFEC Register Map */ 32 + /* CODE_WRI_PROTECT Register */ 33 + #define XSDFEC_CODE_WR_PROTECT_ADDR (0x4) 34 + 35 + /* ACTIVE Register */ 36 + #define XSDFEC_ACTIVE_ADDR (0x8) 37 + #define XSDFEC_IS_ACTIVITY_SET (0x1) 38 + 39 + /* AXIS_WIDTH Register */ 40 + #define XSDFEC_AXIS_WIDTH_ADDR (0xC) 41 + #define XSDFEC_AXIS_DOUT_WORDS_LSB (5) 42 + #define XSDFEC_AXIS_DOUT_WIDTH_LSB (3) 43 + #define XSDFEC_AXIS_DIN_WORDS_LSB (2) 44 + #define XSDFEC_AXIS_DIN_WIDTH_LSB (0) 45 + 46 + /* AXIS_ENABLE Register */ 47 + #define XSDFEC_AXIS_ENABLE_ADDR (0x10) 48 + #define XSDFEC_AXIS_OUT_ENABLE_MASK (0x38) 49 + #define XSDFEC_AXIS_IN_ENABLE_MASK (0x7) 50 + #define XSDFEC_AXIS_ENABLE_MASK \ 51 + (XSDFEC_AXIS_OUT_ENABLE_MASK | XSDFEC_AXIS_IN_ENABLE_MASK) 52 + 53 + /* FEC_CODE Register */ 54 + #define XSDFEC_FEC_CODE_ADDR (0x14) 55 + 56 + /* ORDER Register Map */ 57 + #define XSDFEC_ORDER_ADDR (0x18) 58 + 59 + /* Interrupt Status Register */ 60 + #define XSDFEC_ISR_ADDR (0x1C) 61 + /* Interrupt Status Register Bit Mask */ 62 + #define XSDFEC_ISR_MASK (0x3F) 63 + 64 + /* Write Only - Interrupt Enable Register */ 65 + #define XSDFEC_IER_ADDR (0x20) 66 + /* Write Only - Interrupt Disable Register */ 67 + #define XSDFEC_IDR_ADDR (0x24) 68 + /* Read Only - Interrupt Mask Register */ 69 + #define XSDFEC_IMR_ADDR (0x28) 70 + 71 + /* ECC Interrupt Status Register */ 72 + #define XSDFEC_ECC_ISR_ADDR (0x2C) 73 + /* Single Bit Errors */ 74 + #define XSDFEC_ECC_ISR_SBE_MASK (0x7FF) 75 + /* PL Initialize Single Bit Errors */ 76 + #define XSDFEC_PL_INIT_ECC_ISR_SBE_MASK (0x3C00000) 77 + /* Multi Bit Errors */ 78 + #define 
XSDFEC_ECC_ISR_MBE_MASK (0x3FF800) 79 + /* PL Initialize Multi Bit Errors */ 80 + #define XSDFEC_PL_INIT_ECC_ISR_MBE_MASK (0x3C000000) 81 + /* Multi Bit Error to Event Shift */ 82 + #define XSDFEC_ECC_ISR_MBE_TO_EVENT_SHIFT (11) 83 + /* PL Initialize Multi Bit Error to Event Shift */ 84 + #define XSDFEC_PL_INIT_ECC_ISR_MBE_TO_EVENT_SHIFT (4) 85 + /* ECC Interrupt Status Bit Mask */ 86 + #define XSDFEC_ECC_ISR_MASK (XSDFEC_ECC_ISR_SBE_MASK | XSDFEC_ECC_ISR_MBE_MASK) 87 + /* ECC Interrupt Status PL Initialize Bit Mask */ 88 + #define XSDFEC_PL_INIT_ECC_ISR_MASK \ 89 + (XSDFEC_PL_INIT_ECC_ISR_SBE_MASK | XSDFEC_PL_INIT_ECC_ISR_MBE_MASK) 90 + /* ECC Interrupt Status All Bit Mask */ 91 + #define XSDFEC_ALL_ECC_ISR_MASK \ 92 + (XSDFEC_ECC_ISR_MASK | XSDFEC_PL_INIT_ECC_ISR_MASK) 93 + /* ECC Interrupt Status Single Bit Errors Mask */ 94 + #define XSDFEC_ALL_ECC_ISR_SBE_MASK \ 95 + (XSDFEC_ECC_ISR_SBE_MASK | XSDFEC_PL_INIT_ECC_ISR_SBE_MASK) 96 + /* ECC Interrupt Status Multi Bit Errors Mask */ 97 + #define XSDFEC_ALL_ECC_ISR_MBE_MASK \ 98 + (XSDFEC_ECC_ISR_MBE_MASK | XSDFEC_PL_INIT_ECC_ISR_MBE_MASK) 99 + 100 + /* Write Only - ECC Interrupt Enable Register */ 101 + #define XSDFEC_ECC_IER_ADDR (0x30) 102 + /* Write Only - ECC Interrupt Disable Register */ 103 + #define XSDFEC_ECC_IDR_ADDR (0x34) 104 + /* Read Only - ECC Interrupt Mask Register */ 105 + #define XSDFEC_ECC_IMR_ADDR (0x38) 106 + 107 + /* BYPASS Register */ 108 + #define XSDFEC_BYPASS_ADDR (0x3C) 109 + 110 + /* Turbo Code Register */ 111 + #define XSDFEC_TURBO_ADDR (0x100) 112 + #define XSDFEC_TURBO_SCALE_MASK (0xFFF) 113 + #define XSDFEC_TURBO_SCALE_BIT_POS (8) 114 + #define XSDFEC_TURBO_SCALE_MAX (15) 115 + 116 + /* REG0 Register */ 117 + #define XSDFEC_LDPC_CODE_REG0_ADDR_BASE (0x2000) 118 + #define XSDFEC_LDPC_CODE_REG0_ADDR_HIGH (0x27F0) 119 + #define XSDFEC_REG0_N_MIN (4) 120 + #define XSDFEC_REG0_N_MAX (32768) 121 + #define XSDFEC_REG0_N_MUL_P (256) 122 + #define XSDFEC_REG0_N_LSB (0) 123 + #define 
XSDFEC_REG0_K_MIN (2) 124 + #define XSDFEC_REG0_K_MAX (32766) 125 + #define XSDFEC_REG0_K_MUL_P (256) 126 + #define XSDFEC_REG0_K_LSB (16) 127 + 128 + /* REG1 Register */ 129 + #define XSDFEC_LDPC_CODE_REG1_ADDR_BASE (0x2004) 130 + #define XSDFEC_LDPC_CODE_REG1_ADDR_HIGH (0x27f4) 131 + #define XSDFEC_REG1_PSIZE_MIN (2) 132 + #define XSDFEC_REG1_PSIZE_MAX (512) 133 + #define XSDFEC_REG1_NO_PACKING_MASK (0x400) 134 + #define XSDFEC_REG1_NO_PACKING_LSB (10) 135 + #define XSDFEC_REG1_NM_MASK (0xFF800) 136 + #define XSDFEC_REG1_NM_LSB (11) 137 + #define XSDFEC_REG1_BYPASS_MASK (0x100000) 138 + 139 + /* REG2 Register */ 140 + #define XSDFEC_LDPC_CODE_REG2_ADDR_BASE (0x2008) 141 + #define XSDFEC_LDPC_CODE_REG2_ADDR_HIGH (0x27f8) 142 + #define XSDFEC_REG2_NLAYERS_MIN (1) 143 + #define XSDFEC_REG2_NLAYERS_MAX (256) 144 + #define XSDFEC_REG2_NNMQC_MASK (0xFFE00) 145 + #define XSDFEC_REG2_NMQC_LSB (9) 146 + #define XSDFEC_REG2_NORM_TYPE_MASK (0x100000) 147 + #define XSDFEC_REG2_NORM_TYPE_LSB (20) 148 + #define XSDFEC_REG2_SPECIAL_QC_MASK (0x200000) 149 + #define XSDFEC_REG2_SPEICAL_QC_LSB (21) 150 + #define XSDFEC_REG2_NO_FINAL_PARITY_MASK (0x400000) 151 + #define XSDFEC_REG2_NO_FINAL_PARITY_LSB (22) 152 + #define XSDFEC_REG2_MAX_SCHEDULE_MASK (0x1800000) 153 + #define XSDFEC_REG2_MAX_SCHEDULE_LSB (23) 154 + 155 + /* REG3 Register */ 156 + #define XSDFEC_LDPC_CODE_REG3_ADDR_BASE (0x200C) 157 + #define XSDFEC_LDPC_CODE_REG3_ADDR_HIGH (0x27FC) 158 + #define XSDFEC_REG3_LA_OFF_LSB (8) 159 + #define XSDFEC_REG3_QC_OFF_LSB (16) 160 + 161 + #define XSDFEC_LDPC_REG_JUMP (0x10) 162 + #define XSDFEC_REG_WIDTH_JUMP (4) 163 + 164 + /* The maximum number of pinned pages */ 165 + #define MAX_NUM_PAGES ((XSDFEC_QC_TABLE_DEPTH / PAGE_SIZE) + 1) 27 166 28 167 /** 29 168 * struct xsdfec_clks - For managing SD-FEC clocks ··· 188 49 189 50 /** 190 51 * struct xsdfec_dev - Driver data for SDFEC 52 + * @miscdev: Misc device handle 53 + * @clks: Clocks managed by the SDFEC driver 54 + * @waitq: 
Driver wait queue 55 + * @config: Configuration of the SDFEC device 56 + * @dev_name: Device name 57 + * @flags: spinlock flags 191 58 * @regs: device physical base address 192 59 * @dev: pointer to device struct 193 - * @miscdev: Misc device handle 60 + * @state: State of the SDFEC device 194 61 * @error_data_lock: Error counter and states spinlock 195 - * @clks: Clocks managed by the SDFEC driver 196 - * @dev_name: Device name 197 62 * @dev_id: Device ID 63 + * @isr_err_count: Count of ISR errors 64 + * @cecc_count: Count of Correctable ECC errors (SBE) 65 + * @uecc_count: Count of Uncorrectable ECC errors (MBE) 66 + * @irq: IRQ number 67 + * @state_updated: indicates State updated by interrupt handler 68 + * @stats_updated: indicates Stats updated by interrupt handler 69 + * @intr_enabled: indicates IRQ enabled 198 70 * 199 71 * This structure contains necessary state for SDFEC driver to operate 200 72 */ 201 73 struct xsdfec_dev { 74 + struct miscdevice miscdev; 75 + struct xsdfec_clks clks; 76 + wait_queue_head_t waitq; 77 + struct xsdfec_config config; 78 + char dev_name[DEV_NAME_LEN]; 79 + unsigned long flags; 202 80 void __iomem *regs; 203 81 struct device *dev; 204 - struct miscdevice miscdev; 82 + enum xsdfec_state state; 205 83 /* Spinlock to protect state_updated and stats_updated */ 206 84 spinlock_t error_data_lock; 207 - struct xsdfec_clks clks; 208 - char dev_name[DEV_NAME_LEN]; 209 85 int dev_id; 86 + u32 isr_err_count; 87 + u32 cecc_count; 88 + u32 uecc_count; 89 + int irq; 90 + bool state_updated; 91 + bool stats_updated; 92 + bool intr_enabled; 210 93 }; 94 + 95 + static inline void xsdfec_regwrite(struct xsdfec_dev *xsdfec, u32 addr, 96 + u32 value) 97 + { 98 + dev_dbg(xsdfec->dev, "Writing 0x%x to offset 0x%x", value, addr); 99 + iowrite32(value, xsdfec->regs + addr); 100 + } 101 + 102 + static inline u32 xsdfec_regread(struct xsdfec_dev *xsdfec, u32 addr) 103 + { 104 + u32 rval; 105 + 106 + rval = ioread32(xsdfec->regs + addr); 107 + 
dev_dbg(xsdfec->dev, "Read value = 0x%x from offset 0x%x", rval, addr); 108 + return rval; 109 + } 110 + 111 + static void update_bool_config_from_reg(struct xsdfec_dev *xsdfec, 112 + u32 reg_offset, u32 bit_num, 113 + char *config_value) 114 + { 115 + u32 reg_val; 116 + u32 bit_mask = 1 << bit_num; 117 + 118 + reg_val = xsdfec_regread(xsdfec, reg_offset); 119 + *config_value = (reg_val & bit_mask) > 0; 120 + } 121 + 122 + static void update_config_from_hw(struct xsdfec_dev *xsdfec) 123 + { 124 + u32 reg_value; 125 + bool sdfec_started; 126 + 127 + /* Update the Order */ 128 + reg_value = xsdfec_regread(xsdfec, XSDFEC_ORDER_ADDR); 129 + xsdfec->config.order = reg_value; 130 + 131 + update_bool_config_from_reg(xsdfec, XSDFEC_BYPASS_ADDR, 132 + 0, /* Bit Number, maybe change to mask */ 133 + &xsdfec->config.bypass); 134 + 135 + update_bool_config_from_reg(xsdfec, XSDFEC_CODE_WR_PROTECT_ADDR, 136 + 0, /* Bit Number */ 137 + &xsdfec->config.code_wr_protect); 138 + 139 + reg_value = xsdfec_regread(xsdfec, XSDFEC_IMR_ADDR); 140 + xsdfec->config.irq.enable_isr = (reg_value & XSDFEC_ISR_MASK) > 0; 141 + 142 + reg_value = xsdfec_regread(xsdfec, XSDFEC_ECC_IMR_ADDR); 143 + xsdfec->config.irq.enable_ecc_isr = 144 + (reg_value & XSDFEC_ECC_ISR_MASK) > 0; 145 + 146 + reg_value = xsdfec_regread(xsdfec, XSDFEC_AXIS_ENABLE_ADDR); 147 + sdfec_started = (reg_value & XSDFEC_AXIS_IN_ENABLE_MASK) > 0; 148 + if (sdfec_started) 149 + xsdfec->state = XSDFEC_STARTED; 150 + else 151 + xsdfec->state = XSDFEC_STOPPED; 152 + } 153 + 154 + static int xsdfec_get_status(struct xsdfec_dev *xsdfec, void __user *arg) 155 + { 156 + struct xsdfec_status status; 157 + int err; 158 + 159 + memset(&status, 0, sizeof(status)); 160 + spin_lock_irqsave(&xsdfec->error_data_lock, xsdfec->flags); 161 + status.state = xsdfec->state; 162 + xsdfec->state_updated = false; 163 + spin_unlock_irqrestore(&xsdfec->error_data_lock, xsdfec->flags); 164 + status.activity = (xsdfec_regread(xsdfec, XSDFEC_ACTIVE_ADDR) & 165 
+ XSDFEC_IS_ACTIVITY_SET); 166 + 167 + err = copy_to_user(arg, &status, sizeof(status)); 168 + if (err) 169 + err = -EFAULT; 170 + 171 + return err; 172 + } 173 + 174 + static int xsdfec_get_config(struct xsdfec_dev *xsdfec, void __user *arg) 175 + { 176 + int err; 177 + 178 + err = copy_to_user(arg, &xsdfec->config, sizeof(xsdfec->config)); 179 + if (err) 180 + err = -EFAULT; 181 + 182 + return err; 183 + } 184 + 185 + static int xsdfec_isr_enable(struct xsdfec_dev *xsdfec, bool enable) 186 + { 187 + u32 mask_read; 188 + 189 + if (enable) { 190 + /* Enable */ 191 + xsdfec_regwrite(xsdfec, XSDFEC_IER_ADDR, XSDFEC_ISR_MASK); 192 + mask_read = xsdfec_regread(xsdfec, XSDFEC_IMR_ADDR); 193 + if (mask_read & XSDFEC_ISR_MASK) { 194 + dev_dbg(xsdfec->dev, 195 + "SDFEC enabling irq with IER failed"); 196 + return -EIO; 197 + } 198 + } else { 199 + /* Disable */ 200 + xsdfec_regwrite(xsdfec, XSDFEC_IDR_ADDR, XSDFEC_ISR_MASK); 201 + mask_read = xsdfec_regread(xsdfec, XSDFEC_IMR_ADDR); 202 + if ((mask_read & XSDFEC_ISR_MASK) != XSDFEC_ISR_MASK) { 203 + dev_dbg(xsdfec->dev, 204 + "SDFEC disabling irq with IDR failed"); 205 + return -EIO; 206 + } 207 + } 208 + return 0; 209 + } 210 + 211 + static int xsdfec_ecc_isr_enable(struct xsdfec_dev *xsdfec, bool enable) 212 + { 213 + u32 mask_read; 214 + 215 + if (enable) { 216 + /* Enable */ 217 + xsdfec_regwrite(xsdfec, XSDFEC_ECC_IER_ADDR, 218 + XSDFEC_ALL_ECC_ISR_MASK); 219 + mask_read = xsdfec_regread(xsdfec, XSDFEC_ECC_IMR_ADDR); 220 + if (mask_read & XSDFEC_ALL_ECC_ISR_MASK) { 221 + dev_dbg(xsdfec->dev, 222 + "SDFEC enabling ECC irq with ECC IER failed"); 223 + return -EIO; 224 + } 225 + } else { 226 + /* Disable */ 227 + xsdfec_regwrite(xsdfec, XSDFEC_ECC_IDR_ADDR, 228 + XSDFEC_ALL_ECC_ISR_MASK); 229 + mask_read = xsdfec_regread(xsdfec, XSDFEC_ECC_IMR_ADDR); 230 + if (!(((mask_read & XSDFEC_ALL_ECC_ISR_MASK) == 231 + XSDFEC_ECC_ISR_MASK) || 232 + ((mask_read & XSDFEC_ALL_ECC_ISR_MASK) == 233 + XSDFEC_PL_INIT_ECC_ISR_MASK))) { 
234 + dev_dbg(xsdfec->dev, 235 + "SDFEC disable ECC irq with ECC IDR failed"); 236 + return -EIO; 237 + } 238 + } 239 + return 0; 240 + } 241 + 242 + static int xsdfec_set_irq(struct xsdfec_dev *xsdfec, void __user *arg) 243 + { 244 + struct xsdfec_irq irq; 245 + int err; 246 + int isr_err; 247 + int ecc_err; 248 + 249 + err = copy_from_user(&irq, arg, sizeof(irq)); 250 + if (err) 251 + return -EFAULT; 252 + 253 + /* Setup tlast related IRQ */ 254 + isr_err = xsdfec_isr_enable(xsdfec, irq.enable_isr); 255 + if (!isr_err) 256 + xsdfec->config.irq.enable_isr = irq.enable_isr; 257 + 258 + /* Setup ECC related IRQ */ 259 + ecc_err = xsdfec_ecc_isr_enable(xsdfec, irq.enable_ecc_isr); 260 + if (!ecc_err) 261 + xsdfec->config.irq.enable_ecc_isr = irq.enable_ecc_isr; 262 + 263 + if (isr_err < 0 || ecc_err < 0) 264 + err = -EIO; 265 + 266 + return err; 267 + } 268 + 269 + static int xsdfec_set_turbo(struct xsdfec_dev *xsdfec, void __user *arg) 270 + { 271 + struct xsdfec_turbo turbo; 272 + int err; 273 + u32 turbo_write; 274 + 275 + err = copy_from_user(&turbo, arg, sizeof(turbo)); 276 + if (err) 277 + return -EFAULT; 278 + 279 + if (turbo.alg >= XSDFEC_TURBO_ALG_MAX) 280 + return -EINVAL; 281 + 282 + if (turbo.scale > XSDFEC_TURBO_SCALE_MAX) 283 + return -EINVAL; 284 + 285 + /* Check to see what device tree says about the FEC codes */ 286 + if (xsdfec->config.code == XSDFEC_LDPC_CODE) 287 + return -EIO; 288 + 289 + turbo_write = ((turbo.scale & XSDFEC_TURBO_SCALE_MASK) 290 + << XSDFEC_TURBO_SCALE_BIT_POS) | 291 + turbo.alg; 292 + xsdfec_regwrite(xsdfec, XSDFEC_TURBO_ADDR, turbo_write); 293 + return err; 294 + } 295 + 296 + static int xsdfec_get_turbo(struct xsdfec_dev *xsdfec, void __user *arg) 297 + { 298 + u32 reg_value; 299 + struct xsdfec_turbo turbo_params; 300 + int err; 301 + 302 + if (xsdfec->config.code == XSDFEC_LDPC_CODE) 303 + return -EIO; 304 + 305 + memset(&turbo_params, 0, sizeof(turbo_params)); 306 + reg_value = xsdfec_regread(xsdfec, XSDFEC_TURBO_ADDR); 
307 + 308 + turbo_params.scale = (reg_value & XSDFEC_TURBO_SCALE_MASK) >> 309 + XSDFEC_TURBO_SCALE_BIT_POS; 310 + turbo_params.alg = reg_value & 0x1; 311 + 312 + err = copy_to_user(arg, &turbo_params, sizeof(turbo_params)); 313 + if (err) 314 + err = -EFAULT; 315 + 316 + return err; 317 + } 318 + 319 + static int xsdfec_reg0_write(struct xsdfec_dev *xsdfec, u32 n, u32 k, u32 psize, 320 + u32 offset) 321 + { 322 + u32 wdata; 323 + 324 + if (n < XSDFEC_REG0_N_MIN || n > XSDFEC_REG0_N_MAX || psize == 0 || 325 + (n > XSDFEC_REG0_N_MUL_P * psize) || n <= k || ((n % psize) != 0)) { 326 + dev_dbg(xsdfec->dev, "N value is not in range"); 327 + return -EINVAL; 328 + } 329 + n <<= XSDFEC_REG0_N_LSB; 330 + 331 + if (k < XSDFEC_REG0_K_MIN || k > XSDFEC_REG0_K_MAX || 332 + (k > XSDFEC_REG0_K_MUL_P * psize) || ((k % psize) != 0)) { 333 + dev_dbg(xsdfec->dev, "K value is not in range"); 334 + return -EINVAL; 335 + } 336 + k = k << XSDFEC_REG0_K_LSB; 337 + wdata = k | n; 338 + 339 + if (XSDFEC_LDPC_CODE_REG0_ADDR_BASE + (offset * XSDFEC_LDPC_REG_JUMP) > 340 + XSDFEC_LDPC_CODE_REG0_ADDR_HIGH) { 341 + dev_dbg(xsdfec->dev, "Writing outside of LDPC reg0 space 0x%x", 342 + XSDFEC_LDPC_CODE_REG0_ADDR_BASE + 343 + (offset * XSDFEC_LDPC_REG_JUMP)); 344 + return -EINVAL; 345 + } 346 + xsdfec_regwrite(xsdfec, 347 + XSDFEC_LDPC_CODE_REG0_ADDR_BASE + 348 + (offset * XSDFEC_LDPC_REG_JUMP), 349 + wdata); 350 + return 0; 351 + } 352 + 353 + static int xsdfec_reg1_write(struct xsdfec_dev *xsdfec, u32 psize, 354 + u32 no_packing, u32 nm, u32 offset) 355 + { 356 + u32 wdata; 357 + 358 + if (psize < XSDFEC_REG1_PSIZE_MIN || psize > XSDFEC_REG1_PSIZE_MAX) { 359 + dev_dbg(xsdfec->dev, "Psize is not in range"); 360 + return -EINVAL; 361 + } 362 + 363 + if (no_packing != 0 && no_packing != 1) 364 + dev_dbg(xsdfec->dev, "No-packing bit register invalid"); 365 + no_packing = ((no_packing << XSDFEC_REG1_NO_PACKING_LSB) & 366 + XSDFEC_REG1_NO_PACKING_MASK); 367 + 368 + if (nm & ~(XSDFEC_REG1_NM_MASK >> 
XSDFEC_REG1_NM_LSB)) 369 + dev_dbg(xsdfec->dev, "NM is beyond 10 bits"); 370 + nm = (nm << XSDFEC_REG1_NM_LSB) & XSDFEC_REG1_NM_MASK; 371 + 372 + wdata = nm | no_packing | psize; 373 + if (XSDFEC_LDPC_CODE_REG1_ADDR_BASE + (offset * XSDFEC_LDPC_REG_JUMP) > 374 + XSDFEC_LDPC_CODE_REG1_ADDR_HIGH) { 375 + dev_dbg(xsdfec->dev, "Writing outside of LDPC reg1 space 0x%x", 376 + XSDFEC_LDPC_CODE_REG1_ADDR_BASE + 377 + (offset * XSDFEC_LDPC_REG_JUMP)); 378 + return -EINVAL; 379 + } 380 + xsdfec_regwrite(xsdfec, 381 + XSDFEC_LDPC_CODE_REG1_ADDR_BASE + 382 + (offset * XSDFEC_LDPC_REG_JUMP), 383 + wdata); 384 + return 0; 385 + } 386 + 387 + static int xsdfec_reg2_write(struct xsdfec_dev *xsdfec, u32 nlayers, u32 nmqc, 388 + u32 norm_type, u32 special_qc, u32 no_final_parity, 389 + u32 max_schedule, u32 offset) 390 + { 391 + u32 wdata; 392 + 393 + if (nlayers < XSDFEC_REG2_NLAYERS_MIN || 394 + nlayers > XSDFEC_REG2_NLAYERS_MAX) { 395 + dev_dbg(xsdfec->dev, "Nlayers is not in range"); 396 + return -EINVAL; 397 + } 398 + 399 + if (nmqc & ~(XSDFEC_REG2_NNMQC_MASK >> XSDFEC_REG2_NMQC_LSB)) 400 + dev_dbg(xsdfec->dev, "NMQC exceeds 11 bits"); 401 + nmqc = (nmqc << XSDFEC_REG2_NMQC_LSB) & XSDFEC_REG2_NNMQC_MASK; 402 + 403 + if (norm_type > 1) 404 + dev_dbg(xsdfec->dev, "Norm type is invalid"); 405 + norm_type = ((norm_type << XSDFEC_REG2_NORM_TYPE_LSB) & 406 + XSDFEC_REG2_NORM_TYPE_MASK); 407 + if (special_qc > 1) 408 + dev_dbg(xsdfec->dev, "Special QC in invalid"); 409 + special_qc = ((special_qc << XSDFEC_REG2_SPEICAL_QC_LSB) & 410 + XSDFEC_REG2_SPECIAL_QC_MASK); 411 + 412 + if (no_final_parity > 1) 413 + dev_dbg(xsdfec->dev, "No final parity check invalid"); 414 + no_final_parity = 415 + ((no_final_parity << XSDFEC_REG2_NO_FINAL_PARITY_LSB) & 416 + XSDFEC_REG2_NO_FINAL_PARITY_MASK); 417 + if (max_schedule & 418 + ~(XSDFEC_REG2_MAX_SCHEDULE_MASK >> XSDFEC_REG2_MAX_SCHEDULE_LSB)) 419 + dev_dbg(xsdfec->dev, "Max Schedule exceeds 2 bits"); 420 + max_schedule = ((max_schedule << 
XSDFEC_REG2_MAX_SCHEDULE_LSB) & 421 + XSDFEC_REG2_MAX_SCHEDULE_MASK); 422 + 423 + wdata = (max_schedule | no_final_parity | special_qc | norm_type | 424 + nmqc | nlayers); 425 + 426 + if (XSDFEC_LDPC_CODE_REG2_ADDR_BASE + (offset * XSDFEC_LDPC_REG_JUMP) > 427 + XSDFEC_LDPC_CODE_REG2_ADDR_HIGH) { 428 + dev_dbg(xsdfec->dev, "Writing outside of LDPC reg2 space 0x%x", 429 + XSDFEC_LDPC_CODE_REG2_ADDR_BASE + 430 + (offset * XSDFEC_LDPC_REG_JUMP)); 431 + return -EINVAL; 432 + } 433 + xsdfec_regwrite(xsdfec, 434 + XSDFEC_LDPC_CODE_REG2_ADDR_BASE + 435 + (offset * XSDFEC_LDPC_REG_JUMP), 436 + wdata); 437 + return 0; 438 + } 439 + 440 + static int xsdfec_reg3_write(struct xsdfec_dev *xsdfec, u8 sc_off, u8 la_off, 441 + u16 qc_off, u32 offset) 442 + { 443 + u32 wdata; 444 + 445 + wdata = ((qc_off << XSDFEC_REG3_QC_OFF_LSB) | 446 + (la_off << XSDFEC_REG3_LA_OFF_LSB) | sc_off); 447 + if (XSDFEC_LDPC_CODE_REG3_ADDR_BASE + (offset * XSDFEC_LDPC_REG_JUMP) > 448 + XSDFEC_LDPC_CODE_REG3_ADDR_HIGH) { 449 + dev_dbg(xsdfec->dev, "Writing outside of LDPC reg3 space 0x%x", 450 + XSDFEC_LDPC_CODE_REG3_ADDR_BASE + 451 + (offset * XSDFEC_LDPC_REG_JUMP)); 452 + return -EINVAL; 453 + } 454 + xsdfec_regwrite(xsdfec, 455 + XSDFEC_LDPC_CODE_REG3_ADDR_BASE + 456 + (offset * XSDFEC_LDPC_REG_JUMP), 457 + wdata); 458 + return 0; 459 + } 460 + 461 + static int xsdfec_table_write(struct xsdfec_dev *xsdfec, u32 offset, 462 + u32 *src_ptr, u32 len, const u32 base_addr, 463 + const u32 depth) 464 + { 465 + u32 reg = 0; 466 + u32 res; 467 + u32 n, i; 468 + u32 *addr = NULL; 469 + struct page *page[MAX_NUM_PAGES]; 470 + 471 + /* 472 + * Writes that go beyond the length of 473 + * Shared Scale(SC) table should fail 474 + */ 475 + if (offset > depth / XSDFEC_REG_WIDTH_JUMP || 476 + len > depth / XSDFEC_REG_WIDTH_JUMP || 477 + offset + len > depth / XSDFEC_REG_WIDTH_JUMP) { 478 + dev_dbg(xsdfec->dev, "Write exceeds SC table length"); 479 + return -EINVAL; 480 + } 481 + 482 + n = (len * XSDFEC_REG_WIDTH_JUMP) 
/ PAGE_SIZE; 483 + if ((len * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE) 484 + n += 1; 485 + 486 + res = get_user_pages_fast((unsigned long)src_ptr, n, 0, page); 487 + if (res < n) { 488 + for (i = 0; i < res; i++) 489 + put_page(page[i]); 490 + return -EINVAL; 491 + } 492 + 493 + for (i = 0; i < n; i++) { 494 + addr = kmap(page[i]); 495 + do { 496 + xsdfec_regwrite(xsdfec, 497 + base_addr + ((offset + reg) * 498 + XSDFEC_REG_WIDTH_JUMP), 499 + addr[reg]); 500 + reg++; 501 + } while ((reg < len) && 502 + ((reg * XSDFEC_REG_WIDTH_JUMP) % PAGE_SIZE)); 503 + put_page(page[i]); 504 + } 505 + return reg; 506 + } 507 + 508 + static int xsdfec_add_ldpc(struct xsdfec_dev *xsdfec, void __user *arg) 509 + { 510 + struct xsdfec_ldpc_params *ldpc; 511 + int ret, n; 512 + 513 + ldpc = kzalloc(sizeof(*ldpc), GFP_KERNEL); 514 + if (!ldpc) 515 + return -ENOMEM; 516 + 517 + if (copy_from_user(ldpc, arg, sizeof(*ldpc))) { 518 + ret = -EFAULT; 519 + goto err_out; 520 + } 521 + 522 + if (xsdfec->config.code == XSDFEC_TURBO_CODE) { 523 + ret = -EIO; 524 + goto err_out; 525 + } 526 + 527 + /* Verify Device has not started */ 528 + if (xsdfec->state == XSDFEC_STARTED) { 529 + ret = -EIO; 530 + goto err_out; 531 + } 532 + 533 + if (xsdfec->config.code_wr_protect) { 534 + ret = -EIO; 535 + goto err_out; 536 + } 537 + 538 + /* Write Reg 0 */ 539 + ret = xsdfec_reg0_write(xsdfec, ldpc->n, ldpc->k, ldpc->psize, 540 + ldpc->code_id); 541 + if (ret) 542 + goto err_out; 543 + 544 + /* Write Reg 1 */ 545 + ret = xsdfec_reg1_write(xsdfec, ldpc->psize, ldpc->no_packing, ldpc->nm, 546 + ldpc->code_id); 547 + if (ret) 548 + goto err_out; 549 + 550 + /* Write Reg 2 */ 551 + ret = xsdfec_reg2_write(xsdfec, ldpc->nlayers, ldpc->nmqc, 552 + ldpc->norm_type, ldpc->special_qc, 553 + ldpc->no_final_parity, ldpc->max_schedule, 554 + ldpc->code_id); 555 + if (ret) 556 + goto err_out; 557 + 558 + /* Write Reg 3 */ 559 + ret = xsdfec_reg3_write(xsdfec, ldpc->sc_off, ldpc->la_off, 560 + ldpc->qc_off, ldpc->code_id); 
561 + if (ret) 562 + goto err_out; 563 + 564 + /* Write Shared Codes */ 565 + n = ldpc->nlayers / 4; 566 + if (ldpc->nlayers % 4) 567 + n++; 568 + 569 + ret = xsdfec_table_write(xsdfec, ldpc->sc_off, ldpc->sc_table, n, 570 + XSDFEC_LDPC_SC_TABLE_ADDR_BASE, 571 + XSDFEC_SC_TABLE_DEPTH); 572 + if (ret < 0) 573 + goto err_out; 574 + 575 + ret = xsdfec_table_write(xsdfec, 4 * ldpc->la_off, ldpc->la_table, 576 + ldpc->nlayers, XSDFEC_LDPC_LA_TABLE_ADDR_BASE, 577 + XSDFEC_LA_TABLE_DEPTH); 578 + if (ret < 0) 579 + goto err_out; 580 + 581 + ret = xsdfec_table_write(xsdfec, 4 * ldpc->qc_off, ldpc->qc_table, 582 + ldpc->nqc, XSDFEC_LDPC_QC_TABLE_ADDR_BASE, 583 + XSDFEC_QC_TABLE_DEPTH); 584 + if (ret > 0) 585 + ret = 0; 586 + err_out: 587 + kfree(ldpc); 588 + return ret; 589 + } 590 + 591 + static int xsdfec_set_order(struct xsdfec_dev *xsdfec, void __user *arg) 592 + { 593 + bool order_invalid; 594 + enum xsdfec_order order; 595 + int err; 596 + 597 + err = get_user(order, (enum xsdfec_order *)arg); 598 + if (err) 599 + return -EFAULT; 600 + 601 + order_invalid = (order != XSDFEC_MAINTAIN_ORDER) && 602 + (order != XSDFEC_OUT_OF_ORDER); 603 + if (order_invalid) 604 + return -EINVAL; 605 + 606 + /* Verify Device has not started */ 607 + if (xsdfec->state == XSDFEC_STARTED) 608 + return -EIO; 609 + 610 + xsdfec_regwrite(xsdfec, XSDFEC_ORDER_ADDR, order); 611 + 612 + xsdfec->config.order = order; 613 + 614 + return 0; 615 + } 616 + 617 + static int xsdfec_set_bypass(struct xsdfec_dev *xsdfec, bool __user *arg) 618 + { 619 + bool bypass; 620 + int err; 621 + 622 + err = get_user(bypass, arg); 623 + if (err) 624 + return -EFAULT; 625 + 626 + /* Verify Device has not started */ 627 + if (xsdfec->state == XSDFEC_STARTED) 628 + return -EIO; 629 + 630 + if (bypass) 631 + xsdfec_regwrite(xsdfec, XSDFEC_BYPASS_ADDR, 1); 632 + else 633 + xsdfec_regwrite(xsdfec, XSDFEC_BYPASS_ADDR, 0); 634 + 635 + xsdfec->config.bypass = bypass; 636 + 637 + return 0; 638 + } 639 + 640 + static int 
xsdfec_is_active(struct xsdfec_dev *xsdfec, bool __user *arg) 641 + { 642 + u32 reg_value; 643 + bool is_active; 644 + int err; 645 + 646 + reg_value = xsdfec_regread(xsdfec, XSDFEC_ACTIVE_ADDR); 647 + /* using a double ! operator instead of casting */ 648 + is_active = !!(reg_value & XSDFEC_IS_ACTIVITY_SET); 649 + err = put_user(is_active, arg); 650 + if (err) 651 + return -EFAULT; 652 + 653 + return err; 654 + } 655 + 656 + static u32 657 + xsdfec_translate_axis_width_cfg_val(enum xsdfec_axis_width axis_width_cfg) 658 + { 659 + u32 axis_width_field = 0; 660 + 661 + switch (axis_width_cfg) { 662 + case XSDFEC_1x128b: 663 + axis_width_field = 0; 664 + break; 665 + case XSDFEC_2x128b: 666 + axis_width_field = 1; 667 + break; 668 + case XSDFEC_4x128b: 669 + axis_width_field = 2; 670 + break; 671 + } 672 + 673 + return axis_width_field; 674 + } 675 + 676 + static u32 xsdfec_translate_axis_words_cfg_val(enum xsdfec_axis_word_include 677 + axis_word_inc_cfg) 678 + { 679 + u32 axis_words_field = 0; 680 + 681 + if (axis_word_inc_cfg == XSDFEC_FIXED_VALUE || 682 + axis_word_inc_cfg == XSDFEC_IN_BLOCK) 683 + axis_words_field = 0; 684 + else if (axis_word_inc_cfg == XSDFEC_PER_AXI_TRANSACTION) 685 + axis_words_field = 1; 686 + 687 + return axis_words_field; 688 + } 689 + 690 + static int xsdfec_cfg_axi_streams(struct xsdfec_dev *xsdfec) 691 + { 692 + u32 reg_value; 693 + u32 dout_words_field; 694 + u32 dout_width_field; 695 + u32 din_words_field; 696 + u32 din_width_field; 697 + struct xsdfec_config *config = &xsdfec->config; 698 + 699 + /* translate config info to register values */ 700 + dout_words_field = 701 + xsdfec_translate_axis_words_cfg_val(config->dout_word_include); 702 + dout_width_field = 703 + xsdfec_translate_axis_width_cfg_val(config->dout_width); 704 + din_words_field = 705 + xsdfec_translate_axis_words_cfg_val(config->din_word_include); 706 + din_width_field = 707 + xsdfec_translate_axis_width_cfg_val(config->din_width); 708 + 709 + reg_value = 
dout_words_field << XSDFEC_AXIS_DOUT_WORDS_LSB; 710 + reg_value |= dout_width_field << XSDFEC_AXIS_DOUT_WIDTH_LSB; 711 + reg_value |= din_words_field << XSDFEC_AXIS_DIN_WORDS_LSB; 712 + reg_value |= din_width_field << XSDFEC_AXIS_DIN_WIDTH_LSB; 713 + 714 + xsdfec_regwrite(xsdfec, XSDFEC_AXIS_WIDTH_ADDR, reg_value); 715 + 716 + return 0; 717 + } 718 + 719 + static int xsdfec_dev_open(struct inode *iptr, struct file *fptr) 720 + { 721 + return 0; 722 + } 723 + 724 + static int xsdfec_dev_release(struct inode *iptr, struct file *fptr) 725 + { 726 + return 0; 727 + } 728 + 729 + static int xsdfec_start(struct xsdfec_dev *xsdfec) 730 + { 731 + u32 regread; 732 + 733 + regread = xsdfec_regread(xsdfec, XSDFEC_FEC_CODE_ADDR); 734 + regread &= 0x1; 735 + if (regread != xsdfec->config.code) { 736 + dev_dbg(xsdfec->dev, 737 + "%s SDFEC HW code does not match driver code, reg %d, code %d", 738 + __func__, regread, xsdfec->config.code); 739 + return -EINVAL; 740 + } 741 + 742 + /* Set AXIS enable */ 743 + xsdfec_regwrite(xsdfec, XSDFEC_AXIS_ENABLE_ADDR, 744 + XSDFEC_AXIS_ENABLE_MASK); 745 + /* Done */ 746 + xsdfec->state = XSDFEC_STARTED; 747 + return 0; 748 + } 749 + 750 + static int xsdfec_stop(struct xsdfec_dev *xsdfec) 751 + { 752 + u32 regread; 753 + 754 + if (xsdfec->state != XSDFEC_STARTED) 755 + dev_dbg(xsdfec->dev, "Device not started correctly"); 756 + /* Disable AXIS_ENABLE Input interfaces only */ 757 + regread = xsdfec_regread(xsdfec, XSDFEC_AXIS_ENABLE_ADDR); 758 + regread &= (~XSDFEC_AXIS_IN_ENABLE_MASK); 759 + xsdfec_regwrite(xsdfec, XSDFEC_AXIS_ENABLE_ADDR, regread); 760 + /* Stop */ 761 + xsdfec->state = XSDFEC_STOPPED; 762 + return 0; 763 + } 764 + 765 + static int xsdfec_clear_stats(struct xsdfec_dev *xsdfec) 766 + { 767 + spin_lock_irqsave(&xsdfec->error_data_lock, xsdfec->flags); 768 + xsdfec->isr_err_count = 0; 769 + xsdfec->uecc_count = 0; 770 + xsdfec->cecc_count = 0; 771 + spin_unlock_irqrestore(&xsdfec->error_data_lock, xsdfec->flags); 772 + 773 + 
 	return 0;
+}
+
+static int xsdfec_get_stats(struct xsdfec_dev *xsdfec, void __user *arg)
+{
+	int err;
+	struct xsdfec_stats user_stats;
+
+	spin_lock_irqsave(&xsdfec->error_data_lock, xsdfec->flags);
+	user_stats.isr_err_count = xsdfec->isr_err_count;
+	user_stats.cecc_count = xsdfec->cecc_count;
+	user_stats.uecc_count = xsdfec->uecc_count;
+	xsdfec->stats_updated = false;
+	spin_unlock_irqrestore(&xsdfec->error_data_lock, xsdfec->flags);
+
+	err = copy_to_user(arg, &user_stats, sizeof(user_stats));
+	if (err)
+		err = -EFAULT;
+
+	return err;
+}
+
+static int xsdfec_set_default_config(struct xsdfec_dev *xsdfec)
+{
+	/* Ensure registers are aligned with core configuration */
+	xsdfec_regwrite(xsdfec, XSDFEC_FEC_CODE_ADDR, xsdfec->config.code);
+	xsdfec_cfg_axi_streams(xsdfec);
+	update_config_from_hw(xsdfec);
+
+	return 0;
+}
+
+static long xsdfec_dev_ioctl(struct file *fptr, unsigned int cmd,
+			     unsigned long data)
+{
+	struct xsdfec_dev *xsdfec;
+	void __user *arg = NULL;
+	int rval = -EINVAL;
+
+	xsdfec = container_of(fptr->private_data, struct xsdfec_dev, miscdev);
+
+	/* In failed state allow only reset and get status IOCTLs */
+	if (xsdfec->state == XSDFEC_NEEDS_RESET &&
+	    (cmd != XSDFEC_SET_DEFAULT_CONFIG && cmd != XSDFEC_GET_STATUS &&
+	     cmd != XSDFEC_GET_STATS && cmd != XSDFEC_CLEAR_STATS)) {
+		return -EPERM;
+	}
+
+	if (_IOC_TYPE(cmd) != XSDFEC_MAGIC)
+		return -ENOTTY;
+
+	/* check if ioctl argument is present and valid */
+	if (_IOC_DIR(cmd) != _IOC_NONE) {
+		arg = (void __user *)data;
+		if (!arg)
+			return rval;
+	}
+
+	switch (cmd) {
+	case XSDFEC_START_DEV:
+		rval = xsdfec_start(xsdfec);
+		break;
+	case XSDFEC_STOP_DEV:
+		rval = xsdfec_stop(xsdfec);
+		break;
+	case XSDFEC_CLEAR_STATS:
+		rval = xsdfec_clear_stats(xsdfec);
+		break;
+	case XSDFEC_GET_STATS:
+		rval = xsdfec_get_stats(xsdfec, arg);
+		break;
+	case XSDFEC_GET_STATUS:
+		rval = xsdfec_get_status(xsdfec, arg);
+		break;
+	case XSDFEC_GET_CONFIG:
+		rval = xsdfec_get_config(xsdfec, arg);
+		break;
+	case XSDFEC_SET_DEFAULT_CONFIG:
+		rval = xsdfec_set_default_config(xsdfec);
+		break;
+	case XSDFEC_SET_IRQ:
+		rval = xsdfec_set_irq(xsdfec, arg);
+		break;
+	case XSDFEC_SET_TURBO:
+		rval = xsdfec_set_turbo(xsdfec, arg);
+		break;
+	case XSDFEC_GET_TURBO:
+		rval = xsdfec_get_turbo(xsdfec, arg);
+		break;
+	case XSDFEC_ADD_LDPC_CODE_PARAMS:
+		rval = xsdfec_add_ldpc(xsdfec, arg);
+		break;
+	case XSDFEC_SET_ORDER:
+		rval = xsdfec_set_order(xsdfec, arg);
+		break;
+	case XSDFEC_SET_BYPASS:
+		rval = xsdfec_set_bypass(xsdfec, arg);
+		break;
+	case XSDFEC_IS_ACTIVE:
+		rval = xsdfec_is_active(xsdfec, (bool __user *)arg);
+		break;
+	default:
+		/* Should not get here */
+		break;
+	}
+	return rval;
+}
+
+#ifdef CONFIG_COMPAT
+static long xsdfec_dev_compat_ioctl(struct file *file, unsigned int cmd,
+				    unsigned long data)
+{
+	return xsdfec_dev_ioctl(file, cmd, (unsigned long)compat_ptr(data));
+}
+#endif
+
+static unsigned int xsdfec_poll(struct file *file, poll_table *wait)
+{
+	unsigned int mask = 0;
+	struct xsdfec_dev *xsdfec;
+
+	xsdfec = container_of(file->private_data, struct xsdfec_dev, miscdev);
+
+	if (!xsdfec)
+		return POLLNVAL | POLLHUP;
+
+	poll_wait(file, &xsdfec->waitq, wait);
+
+	/* XSDFEC ISR detected an error */
+	spin_lock_irqsave(&xsdfec->error_data_lock, xsdfec->flags);
+	if (xsdfec->state_updated)
+		mask |= POLLIN | POLLPRI;
+
+	if (xsdfec->stats_updated)
+		mask |= POLLIN | POLLRDNORM;
+
+	spin_unlock_irqrestore(&xsdfec->error_data_lock, xsdfec->flags);
+
+	return mask;
+}
 
 static const struct file_operations xsdfec_fops = {
 	.owner = THIS_MODULE,
+	.open = xsdfec_dev_open,
+	.release = xsdfec_dev_release,
+	.unlocked_ioctl = xsdfec_dev_ioctl,
+	.poll = xsdfec_poll,
+#ifdef CONFIG_COMPAT
+	.compat_ioctl = xsdfec_dev_compat_ioctl,
+#endif
 };
+
+static int xsdfec_parse_of(struct xsdfec_dev *xsdfec)
+{
+	struct device *dev = xsdfec->dev;
+	struct device_node *node = dev->of_node;
+	int rval;
+	const char *fec_code;
+	u32 din_width;
+	u32 din_word_include;
+	u32 dout_width;
+	u32 dout_word_include;
+
+	rval = of_property_read_string(node, "xlnx,sdfec-code", &fec_code);
+	if (rval < 0)
+		return rval;
+
+	if (!strcasecmp(fec_code, "ldpc"))
+		xsdfec->config.code = XSDFEC_LDPC_CODE;
+	else if (!strcasecmp(fec_code, "turbo"))
+		xsdfec->config.code = XSDFEC_TURBO_CODE;
+	else
+		return -EINVAL;
+
+	rval = of_property_read_u32(node, "xlnx,sdfec-din-words",
+				    &din_word_include);
+	if (rval < 0)
+		return rval;
+
+	if (din_word_include < XSDFEC_AXIS_WORDS_INCLUDE_MAX)
+		xsdfec->config.din_word_include = din_word_include;
+	else
+		return -EINVAL;
+
+	rval = of_property_read_u32(node, "xlnx,sdfec-din-width", &din_width);
+	if (rval < 0)
+		return rval;
+
+	switch (din_width) {
+	/* Fall through and set for valid values */
+	case XSDFEC_1x128b:
+	case XSDFEC_2x128b:
+	case XSDFEC_4x128b:
+		xsdfec->config.din_width = din_width;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	rval = of_property_read_u32(node, "xlnx,sdfec-dout-words",
+				    &dout_word_include);
+	if (rval < 0)
+		return rval;
+
+	if (dout_word_include < XSDFEC_AXIS_WORDS_INCLUDE_MAX)
+		xsdfec->config.dout_word_include = dout_word_include;
+	else
+		return -EINVAL;
+
+	rval = of_property_read_u32(node, "xlnx,sdfec-dout-width", &dout_width);
+	if (rval < 0)
+		return rval;
+
+	switch (dout_width) {
+	/* Fall through and set for valid values */
+	case XSDFEC_1x128b:
+	case XSDFEC_2x128b:
+	case XSDFEC_4x128b:
+		xsdfec->config.dout_width = dout_width;
+		break;
+	default:
+		return -EINVAL;
+	}
+
+	/* Write LDPC to CODE Register */
+	xsdfec_regwrite(xsdfec, XSDFEC_FEC_CODE_ADDR, xsdfec->config.code);
+
+	xsdfec_cfg_axi_streams(xsdfec);
+
+	return 0;
+}
+
+static irqreturn_t xsdfec_irq_thread(int irq, void *dev_id)
+{
+	struct xsdfec_dev *xsdfec = dev_id;
+	irqreturn_t ret = IRQ_HANDLED;
+	u32 ecc_err;
+	u32 isr_err;
+	u32 uecc_count;
+	u32 cecc_count;
+	u32 isr_err_count;
+	u32 aecc_count;
+	u32 tmp;
+
+	WARN_ON(xsdfec->irq != irq);
+
+	/* Mask Interrupts */
+	xsdfec_isr_enable(xsdfec, false);
+	xsdfec_ecc_isr_enable(xsdfec, false);
+	/* Read ISR */
+	ecc_err = xsdfec_regread(xsdfec, XSDFEC_ECC_ISR_ADDR);
+	isr_err = xsdfec_regread(xsdfec, XSDFEC_ISR_ADDR);
+	/* Clear the interrupts */
+	xsdfec_regwrite(xsdfec, XSDFEC_ECC_ISR_ADDR, ecc_err);
+	xsdfec_regwrite(xsdfec, XSDFEC_ISR_ADDR, isr_err);
+
+	tmp = ecc_err & XSDFEC_ALL_ECC_ISR_MBE_MASK;
+	/* Count uncorrectable 2-bit errors */
+	uecc_count = hweight32(tmp);
+	/* Count all ECC errors */
+	aecc_count = hweight32(ecc_err);
+	/* Number of correctable 1-bit ECC error */
+	cecc_count = aecc_count - 2 * uecc_count;
+	/* Count ISR errors */
+	isr_err_count = hweight32(isr_err);
+	dev_dbg(xsdfec->dev, "tmp=%x, uecc=%x, aecc=%x, cecc=%x, isr=%x", tmp,
+		uecc_count, aecc_count, cecc_count, isr_err_count);
+	dev_dbg(xsdfec->dev, "uecc=%x, cecc=%x, isr=%x", xsdfec->uecc_count,
+		xsdfec->cecc_count, xsdfec->isr_err_count);
+
+	spin_lock_irqsave(&xsdfec->error_data_lock, xsdfec->flags);
+	/* Add new errors to a 2-bits counter */
+	if (uecc_count)
+		xsdfec->uecc_count += uecc_count;
+	/* Add new errors to a 1-bits counter */
+	if (cecc_count)
+		xsdfec->cecc_count += cecc_count;
+	/* Add new errors to a ISR counter */
+	if (isr_err_count)
+		xsdfec->isr_err_count += isr_err_count;
+
+	/* Update state/stats flag */
+	if (uecc_count) {
+		if (ecc_err & XSDFEC_ECC_ISR_MBE_MASK)
+			xsdfec->state = XSDFEC_NEEDS_RESET;
+		else if (ecc_err & XSDFEC_PL_INIT_ECC_ISR_MBE_MASK)
+			xsdfec->state = XSDFEC_PL_RECONFIGURE;
+		xsdfec->stats_updated = true;
+		xsdfec->state_updated = true;
+	}
+
+	if (cecc_count)
+		xsdfec->stats_updated = true;
+
+	if (isr_err_count) {
+		xsdfec->state = XSDFEC_NEEDS_RESET;
+		xsdfec->stats_updated = true;
+		xsdfec->state_updated = true;
+	}
+
+	spin_unlock_irqrestore(&xsdfec->error_data_lock, xsdfec->flags);
+	dev_dbg(xsdfec->dev, "state=%x, stats=%x", xsdfec->state_updated,
+		xsdfec->stats_updated);
+
+	/* Enable another polling */
+	if (xsdfec->state_updated || xsdfec->stats_updated)
+		wake_up_interruptible(&xsdfec->waitq);
+	else
+		ret = IRQ_NONE;
+
+	/* Unmask Interrupts */
+	xsdfec_isr_enable(xsdfec, true);
+	xsdfec_ecc_isr_enable(xsdfec, true);
+
+	return ret;
+}
 
 static int xsdfec_clk_init(struct platform_device *pdev,
			    struct xsdfec_clks *clks)
···
 	clk_disable_unprepare(clks->axi_clk);
 }
 
-static void xsdfec_idr_remove(struct xsdfec_dev *xsdfec)
-{
-	mutex_lock(&dev_idr_lock);
-	idr_remove(&dev_idr, xsdfec->dev_id);
-	mutex_unlock(&dev_idr_lock);
-}
-
 static int xsdfec_probe(struct platform_device *pdev)
 {
 	struct xsdfec_dev *xsdfec;
 	struct device *dev;
 	struct resource *res;
 	int err;
+	bool irq_enabled = true;
 
 	xsdfec = devm_kzalloc(&pdev->dev, sizeof(*xsdfec), GFP_KERNEL);
 	if (!xsdfec)
···
 		goto err_xsdfec_dev;
 	}
 
+	xsdfec->irq = platform_get_irq(pdev, 0);
+	if (xsdfec->irq < 0) {
+		dev_dbg(dev, "platform_get_irq failed");
+		irq_enabled = false;
+	}
+
+	err = xsdfec_parse_of(xsdfec);
+	if (err < 0)
+		goto err_xsdfec_dev;
+
+	update_config_from_hw(xsdfec);
+
 	/* Save driver private data */
 	platform_set_drvdata(pdev, xsdfec);
 
-	mutex_lock(&dev_idr_lock);
-	err = idr_alloc(&dev_idr, xsdfec->dev_name, 0, 0, GFP_KERNEL);
-	mutex_unlock(&dev_idr_lock);
+	if (irq_enabled) {
+		init_waitqueue_head(&xsdfec->waitq);
+		/* Register IRQ thread */
+		err = devm_request_threaded_irq(dev, xsdfec->irq, NULL,
+						xsdfec_irq_thread, IRQF_ONESHOT,
+						"xilinx-sdfec16", xsdfec);
+		if (err < 0) {
+			dev_err(dev, "unable to request IRQ%d", xsdfec->irq);
+			goto err_xsdfec_dev;
+		}
+	}
+
+	err = ida_alloc(&dev_nrs, GFP_KERNEL);
 	if (err < 0)
 		goto err_xsdfec_dev;
 	xsdfec->dev_id = err;
···
 	err = misc_register(&xsdfec->miscdev);
 	if (err) {
 		dev_err(dev, "error:%d. Unable to register device", err);
-		goto err_xsdfec_idr;
+		goto err_xsdfec_ida;
 	}
 	return 0;
 
-err_xsdfec_idr:
-	xsdfec_idr_remove(xsdfec);
+err_xsdfec_ida:
+	ida_free(&dev_nrs, xsdfec->dev_id);
 err_xsdfec_dev:
 	xsdfec_disable_all_clks(&xsdfec->clks);
 	return err;
···
 
 	xsdfec = platform_get_drvdata(pdev);
 	misc_deregister(&xsdfec->miscdev);
-	xsdfec_idr_remove(xsdfec);
+	ida_free(&dev_nrs, xsdfec->dev_id);
 	xsdfec_disable_all_clks(&xsdfec->clks);
 	return 0;
 }
···
 {
 	int err;
 
-	mutex_init(&dev_idr_lock);
-	idr_init(&dev_idr);
 	err = platform_driver_register(&xsdfec_driver);
 	if (err < 0) {
 		pr_err("%s Unabled to register SDFEC driver", __func__);
···
 static void __exit xsdfec_exit(void)
 {
 	platform_driver_unregister(&xsdfec_driver);
-	idr_destroy(&dev_idr);
 }
 
 module_init(xsdfec_init);
+7
drivers/nvmem/imx-ocotp-scu.c
···
 
 enum ocotp_devtype {
 	IMX8QXP,
+	IMX8QM,
 };
 
 struct ocotp_devtype_data {
···
 
 static struct ocotp_devtype_data imx8qxp_data = {
 	.devtype = IMX8QXP,
+	.nregs = 800,
+};
+
+static struct ocotp_devtype_data imx8qm_data = {
+	.devtype = IMX8QM,
 	.nregs = 800,
 };
 
···
 
 static const struct of_device_id imx_scu_ocotp_dt_ids[] = {
 	{ .compatible = "fsl,imx8qxp-scu-ocotp", (void *)&imx8qxp_data },
+	{ .compatible = "fsl,imx8qm-scu-ocotp", (void *)&imx8qm_data },
 	{ },
 };
 MODULE_DEVICE_TABLE(of, imx_scu_ocotp_dt_ids);
+7
drivers/nvmem/imx-ocotp.c
···
 	.set_timing = imx_ocotp_set_imx6_timing,
 };
 
+static const struct ocotp_params imx8mn_params = {
+	.nregs = 256,
+	.bank_address_words = 0,
+	.set_timing = imx_ocotp_set_imx6_timing,
+};
+
 static const struct of_device_id imx_ocotp_dt_ids[] = {
 	{ .compatible = "fsl,imx6q-ocotp", .data = &imx6q_params },
 	{ .compatible = "fsl,imx6sl-ocotp", .data = &imx6sl_params },
···
 	{ .compatible = "fsl,imx7ulp-ocotp", .data = &imx7ulp_params },
 	{ .compatible = "fsl,imx8mq-ocotp", .data = &imx8mq_params },
 	{ .compatible = "fsl,imx8mm-ocotp", .data = &imx8mm_params },
+	{ .compatible = "fsl,imx8mn-ocotp", .data = &imx8mn_params },
 	{ },
 };
 MODULE_DEVICE_TABLE(of, imx_ocotp_dt_ids);
+2 -1
drivers/nvmem/meson-mx-efuse.c
···
 		if (err)
 			break;
 
-		memcpy(buf + i, &tmp, efuse->config.word_size);
+		memcpy(buf + i, &tmp,
+		       min_t(size_t, bytes - i, efuse->config.word_size));
 	}
 
 	meson_mx_efuse_mask_bits(efuse, MESON_MX_EFUSE_CNTL1,
+1 -1
drivers/nvmem/mxs-ocotp.c
···
 };
 
 module_platform_driver(mxs_ocotp_driver);
-MODULE_AUTHOR("Stefan Wahren <stefan.wahren@i2se.com>");
+MODULE_AUTHOR("Stefan Wahren <wahrenst@gmx.net>");
 MODULE_DESCRIPTION("driver for OCOTP in i.MX23/i.MX28");
 MODULE_LICENSE("GPL v2");
+1
drivers/nvmem/sunxi_sid.c
···
 static const struct sunxi_sid_cfg sun50i_a64_cfg = {
 	.value_offset = 0x200,
 	.size = 0x100,
+	.need_register_readout = true,
 };
 
 static const struct sunxi_sid_cfg sun50i_h6_cfg = {
drivers/parport/Makefile
+2 -4
drivers/parport/parport_serial.c
···
 
 static int __maybe_unused parport_serial_pci_suspend(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct parport_serial_private *priv = pci_get_drvdata(pdev);
+	struct parport_serial_private *priv = dev_get_drvdata(dev);
 
 	if (priv->serial)
 		pciserial_suspend_ports(priv->serial);
···
 
 static int __maybe_unused parport_serial_pci_resume(struct device *dev)
 {
-	struct pci_dev *pdev = to_pci_dev(dev);
-	struct parport_serial_private *priv = pci_get_drvdata(pdev);
+	struct parport_serial_private *priv = dev_get_drvdata(dev);
 
 	if (priv->serial)
 		pciserial_resume_ports(priv->serial);
+3 -3
drivers/pcmcia/i82092.c
···
 
 		if (card_present(i)) {
 			sockets[i].card_state = 3;
-			dprintk(KERN_DEBUG "i82092aa: slot %i is occupied\n",i);
+			dev_dbg(&dev->dev, "i82092aa: slot %i is occupied\n", i);
 		} else {
-			dprintk(KERN_DEBUG "i82092aa: slot %i is vacant\n",i);
+			dev_dbg(&dev->dev, "i82092aa: slot %i is vacant\n", i);
 		}
 	}
 
···
 	pci_write_config_byte(dev, 0x50, configbyte); /* PCI Interrupt Routing Register */
 
 	/* Register the interrupt handler */
-	dprintk(KERN_DEBUG "Requesting interrupt %i \n",dev->irq);
+	dev_dbg(&dev->dev, "Requesting interrupt %i\n", dev->irq);
 	if ((ret = request_irq(dev->irq, i82092aa_interrupt, IRQF_SHARED, "i82092aa", i82092aa_interrupt))) {
 		printk(KERN_ERR "i82092aa: Failed to register IRQ %d, aborting\n", dev->irq);
 		goto err_out_free_res;
+1 -1
drivers/phy/Makefile
···
 obj-$(CONFIG_PHY_PISTACHIO_USB)		+= phy-pistachio-usb.o
 obj-$(CONFIG_ARCH_SUNXI)		+= allwinner/
 obj-$(CONFIG_ARCH_MESON)		+= amlogic/
-obj-$(CONFIG_LANTIQ)			+= lantiq/
 obj-$(CONFIG_ARCH_MEDIATEK)		+= mediatek/
 obj-$(CONFIG_ARCH_RENESAS)		+= renesas/
 obj-$(CONFIG_ARCH_ROCKCHIP)		+= rockchip/
···
 					cadence/	\
 					freescale/	\
 					hisilicon/	\
+					lantiq/		\
 					marvell/	\
 					motorola/	\
 					mscc/		\
+11
drivers/phy/lantiq/Kconfig
···
 #
 # Phy drivers for Lantiq / Intel platforms
 #
+config PHY_LANTIQ_VRX200_PCIE
+	tristate "Lantiq VRX200/ARX300 PCIe PHY"
+	depends on SOC_TYPE_XWAY || COMPILE_TEST
+	depends on OF && HAS_IOMEM
+	select GENERIC_PHY
+	select REGMAP_MMIO
+	help
+	  Support for the PCIe PHY(s) on the Lantiq / Intel VRX200 and ARX300
+	  family SoCs.
+	  If unsure, say N.
+
 config PHY_LANTIQ_RCU_USB2
 	tristate "Lantiq XWAY SoC RCU based USB PHY"
 	depends on OF && (SOC_TYPE_XWAY || COMPILE_TEST)
+1
drivers/phy/lantiq/Makefile
···
 # SPDX-License-Identifier: GPL-2.0-only
 obj-$(CONFIG_PHY_LANTIQ_RCU_USB2)	+= phy-lantiq-rcu-usb2.o
+obj-$(CONFIG_PHY_LANTIQ_VRX200_PCIE)	+= phy-lantiq-vrx200-pcie.o
+494
drivers/phy/lantiq/phy-lantiq-vrx200-pcie.c
···
+// SPDX-License-Identifier: GPL-2.0-only
+/*
+ * PCIe PHY driver for Lantiq VRX200 and ARX300 SoCs.
+ *
+ * Copyright (C) 2019 Martin Blumenstingl <martin.blumenstingl@googlemail.com>
+ *
+ * Based on the BSP (called "UGW") driver:
+ *  Copyright (C) 2009-2015 Lei Chuanhua <chuanhua.lei@lantiq.com>
+ *  Copyright (C) 2016 Intel Corporation
+ *
+ * TODO: PHY modes other than 36MHz (without "SSC")
+ */
+
+#include <linux/bitfield.h>
+#include <linux/bits.h>
+#include <linux/clk.h>
+#include <linux/delay.h>
+#include <linux/mfd/syscon.h>
+#include <linux/module.h>
+#include <linux/of.h>
+#include <linux/phy/phy.h>
+#include <linux/platform_device.h>
+#include <linux/property.h>
+#include <linux/regmap.h>
+#include <linux/reset.h>
+
+#include <dt-bindings/phy/phy-lantiq-vrx200-pcie.h>
+
+#define PCIE_PHY_PLL_CTRL1				0x44
+
+#define PCIE_PHY_PLL_CTRL2				0x46
+#define PCIE_PHY_PLL_CTRL2_CONST_SDM_MASK		GENMASK(7, 0)
+#define PCIE_PHY_PLL_CTRL2_CONST_SDM_EN			BIT(8)
+#define PCIE_PHY_PLL_CTRL2_PLL_SDM_EN			BIT(9)
+
+#define PCIE_PHY_PLL_CTRL3				0x48
+#define PCIE_PHY_PLL_CTRL3_EXT_MMD_DIV_RATIO_EN		BIT(1)
+#define PCIE_PHY_PLL_CTRL3_EXT_MMD_DIV_RATIO_MASK	GENMASK(6, 4)
+
+#define PCIE_PHY_PLL_CTRL4				0x4a
+#define PCIE_PHY_PLL_CTRL5				0x4c
+#define PCIE_PHY_PLL_CTRL6				0x4e
+#define PCIE_PHY_PLL_CTRL7				0x50
+#define PCIE_PHY_PLL_A_CTRL1				0x52
+
+#define PCIE_PHY_PLL_A_CTRL2				0x54
+#define PCIE_PHY_PLL_A_CTRL2_LF_MODE_EN			BIT(14)
+
+#define PCIE_PHY_PLL_A_CTRL3				0x56
+#define PCIE_PHY_PLL_A_CTRL3_MMD_MASK			GENMASK(15, 13)
+
+#define PCIE_PHY_PLL_STATUS				0x58
+
+#define PCIE_PHY_TX1_CTRL1				0x60
+#define PCIE_PHY_TX1_CTRL1_FORCE_EN			BIT(3)
+#define PCIE_PHY_TX1_CTRL1_LOAD_EN			BIT(4)
+
+#define PCIE_PHY_TX1_CTRL2				0x62
+#define PCIE_PHY_TX1_CTRL3				0x64
+#define PCIE_PHY_TX1_A_CTRL1				0x66
+#define PCIE_PHY_TX1_A_CTRL2				0x68
+#define PCIE_PHY_TX1_MOD1				0x6a
+#define PCIE_PHY_TX1_MOD2				0x6c
+#define PCIE_PHY_TX1_MOD3				0x6e
+
+#define PCIE_PHY_TX2_CTRL1				0x70
+#define PCIE_PHY_TX2_CTRL1_LOAD_EN			BIT(4)
+
+#define PCIE_PHY_TX2_CTRL2				0x72
+#define PCIE_PHY_TX2_A_CTRL1				0x76
+#define PCIE_PHY_TX2_A_CTRL2				0x78
+#define PCIE_PHY_TX2_MOD1				0x7a
+#define PCIE_PHY_TX2_MOD2				0x7c
+#define PCIE_PHY_TX2_MOD3				0x7e
+
+#define PCIE_PHY_RX1_CTRL1				0xa0
+#define PCIE_PHY_RX1_CTRL1_LOAD_EN			BIT(1)
+
+#define PCIE_PHY_RX1_CTRL2				0xa2
+#define PCIE_PHY_RX1_CDR				0xa4
+#define PCIE_PHY_RX1_EI					0xa6
+#define PCIE_PHY_RX1_A_CTRL				0xaa
+
+struct ltq_vrx200_pcie_phy_priv {
+	struct phy			*phy;
+	unsigned int			mode;
+	struct device			*dev;
+	struct regmap			*phy_regmap;
+	struct regmap			*rcu_regmap;
+	struct clk			*pdi_clk;
+	struct clk			*phy_clk;
+	struct reset_control		*phy_reset;
+	struct reset_control		*pcie_reset;
+	u32				rcu_ahb_endian_offset;
+	u32				rcu_ahb_endian_big_endian_mask;
+};
+
+static void ltq_vrx200_pcie_phy_common_setup(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+
+	/* PLL Setting */
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_A_CTRL1, 0x120e);
+
+	/* increase the bias reference voltage */
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_A_CTRL2, 0x39d7);
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_A_CTRL3, 0x0900);
+
+	/* Endcnt */
+	regmap_write(priv->phy_regmap, PCIE_PHY_RX1_EI, 0x0004);
+	regmap_write(priv->phy_regmap, PCIE_PHY_RX1_A_CTRL, 0x6803);
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_TX1_CTRL1,
+			   PCIE_PHY_TX1_CTRL1_FORCE_EN,
+			   PCIE_PHY_TX1_CTRL1_FORCE_EN);
+
+	/* predrv_ser_en */
+	regmap_write(priv->phy_regmap, PCIE_PHY_TX1_A_CTRL2, 0x0706);
+
+	/* ctrl_lim */
+	regmap_write(priv->phy_regmap, PCIE_PHY_TX1_CTRL3, 0x1fff);
+
+	/* ctrl */
+	regmap_write(priv->phy_regmap, PCIE_PHY_TX1_A_CTRL1, 0x0810);
+
+	/* predrv_ser_en */
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_TX2_A_CTRL2, 0x7f00,
+			   0x4700);
+
+	/* RTERM */
+	regmap_write(priv->phy_regmap, PCIE_PHY_TX1_CTRL2, 0x2e00);
+
+	/* Improved 100MHz clock output */
+	regmap_write(priv->phy_regmap, PCIE_PHY_TX2_CTRL2, 0x3096);
+	regmap_write(priv->phy_regmap, PCIE_PHY_TX2_A_CTRL2, 0x4707);
+
+	/* Reduced CDR BW to avoid glitches */
+	regmap_write(priv->phy_regmap, PCIE_PHY_RX1_CDR, 0x0235);
+}
+
+static void pcie_phy_36mhz_mode_setup(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_PLL_CTRL3,
+			   PCIE_PHY_PLL_CTRL3_EXT_MMD_DIV_RATIO_EN, 0x0000);
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_PLL_CTRL3,
+			   PCIE_PHY_PLL_CTRL3_EXT_MMD_DIV_RATIO_MASK, 0x0000);
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_PLL_CTRL2,
+			   PCIE_PHY_PLL_CTRL2_PLL_SDM_EN,
+			   PCIE_PHY_PLL_CTRL2_PLL_SDM_EN);
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_PLL_CTRL2,
+			   PCIE_PHY_PLL_CTRL2_CONST_SDM_EN,
+			   PCIE_PHY_PLL_CTRL2_CONST_SDM_EN);
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_PLL_A_CTRL3,
+			   PCIE_PHY_PLL_A_CTRL3_MMD_MASK,
+			   FIELD_PREP(PCIE_PHY_PLL_A_CTRL3_MMD_MASK, 0x1));
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_PLL_A_CTRL2,
+			   PCIE_PHY_PLL_A_CTRL2_LF_MODE_EN, 0x0000);
+
+	/* const_sdm */
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_CTRL1, 0x38e4);
+
+	regmap_update_bits(priv->phy_regmap, PCIE_PHY_PLL_CTRL2,
+			   PCIE_PHY_PLL_CTRL2_CONST_SDM_MASK,
+			   FIELD_PREP(PCIE_PHY_PLL_CTRL2_CONST_SDM_MASK,
+				      0xee));
+
+	/* pllmod */
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_CTRL7, 0x0002);
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_CTRL6, 0x3a04);
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_CTRL5, 0xfae3);
+	regmap_write(priv->phy_regmap, PCIE_PHY_PLL_CTRL4, 0x1b72);
+}
+
+static int ltq_vrx200_pcie_phy_wait_for_pll(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+	unsigned int tmp;
+	int ret;
+
+	ret = regmap_read_poll_timeout(priv->phy_regmap, PCIE_PHY_PLL_STATUS,
+				       tmp, ((tmp & 0x0070) == 0x0070), 10,
+				       10000);
+	if (ret) {
+		dev_err(priv->dev, "PLL Link timeout, PLL status = 0x%04x\n",
+			tmp);
+		return ret;
+	}
+
+	return 0;
+}
+
+static void ltq_vrx200_pcie_phy_apply_workarounds(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+	static const struct reg_default slices[] = {
+		{
+			.reg = PCIE_PHY_TX1_CTRL1,
+			.def = PCIE_PHY_TX1_CTRL1_LOAD_EN,
+		},
+		{
+			.reg = PCIE_PHY_TX2_CTRL1,
+			.def = PCIE_PHY_TX2_CTRL1_LOAD_EN,
+		},
+		{
+			.reg = PCIE_PHY_RX1_CTRL1,
+			.def = PCIE_PHY_RX1_CTRL1_LOAD_EN,
+		}
+	};
+	int i;
+
+	for (i = 0; i < ARRAY_SIZE(slices); i++) {
+		/* enable load_en */
+		regmap_update_bits(priv->phy_regmap, slices[i].reg,
+				   slices[i].def, slices[i].def);
+
+		udelay(1);
+
+		/* disable load_en */
+		regmap_update_bits(priv->phy_regmap, slices[i].reg,
+				   slices[i].def, 0x0);
+	}
+
+	for (i = 0; i < 5; i++) {
+		/* TX2 modulation */
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX2_MOD1, 0x1ffe);
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX2_MOD2, 0xfffe);
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX2_MOD3, 0x0601);
+		usleep_range(1000, 2000);
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX2_MOD3, 0x0001);
+
+		/* TX1 modulation */
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX1_MOD1, 0x1ffe);
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX1_MOD2, 0xfffe);
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX1_MOD3, 0x0601);
+		usleep_range(1000, 2000);
+		regmap_write(priv->phy_regmap, PCIE_PHY_TX1_MOD3, 0x0001);
+	}
+}
+
+static int ltq_vrx200_pcie_phy_init(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	if (of_device_is_big_endian(priv->dev->of_node))
+		regmap_update_bits(priv->rcu_regmap,
+				   priv->rcu_ahb_endian_offset,
+				   priv->rcu_ahb_endian_big_endian_mask,
+				   priv->rcu_ahb_endian_big_endian_mask);
+	else
+		regmap_update_bits(priv->rcu_regmap,
+				   priv->rcu_ahb_endian_offset,
+				   priv->rcu_ahb_endian_big_endian_mask, 0x0);
+
+	ret = reset_control_assert(priv->phy_reset);
+	if (ret)
+		goto err;
+
+	udelay(1);
+
+	ret = reset_control_deassert(priv->phy_reset);
+	if (ret)
+		goto err;
+
+	udelay(1);
+
+	ret = reset_control_deassert(priv->pcie_reset);
+	if (ret)
+		goto err_assert_phy_reset;
+
+	/* Make sure PHY PLL is stable */
+	usleep_range(20, 40);
+
+	return 0;
+
+err_assert_phy_reset:
+	reset_control_assert(priv->phy_reset);
+err:
+	return ret;
+}
+
+static int ltq_vrx200_pcie_phy_exit(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	ret = reset_control_assert(priv->pcie_reset);
+	if (ret)
+		return ret;
+
+	ret = reset_control_assert(priv->phy_reset);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+static int ltq_vrx200_pcie_phy_power_on(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+	int ret;
+
+	/* Enable PDI to access PCIe PHY register */
+	ret = clk_prepare_enable(priv->pdi_clk);
+	if (ret)
+		goto err;
+
+	/* Configure PLL and PHY clock */
+	ltq_vrx200_pcie_phy_common_setup(phy);
+
+	pcie_phy_36mhz_mode_setup(phy);
+
+	/* Enable the PCIe PHY and make PLL setting take effect */
+	ret = clk_prepare_enable(priv->phy_clk);
+	if (ret)
+		goto err_disable_pdi_clk;
+
+	/* Check if we are in "startup ready" status */
+	if (ltq_vrx200_pcie_phy_wait_for_pll(phy) != 0)
+		goto err_disable_phy_clk;
+
+	ltq_vrx200_pcie_phy_apply_workarounds(phy);
+
+	return 0;
+
+err_disable_phy_clk:
+	clk_disable_unprepare(priv->phy_clk);
+err_disable_pdi_clk:
+	clk_disable_unprepare(priv->pdi_clk);
+err:
+	return ret;
+}
+
+static int ltq_vrx200_pcie_phy_power_off(struct phy *phy)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = phy_get_drvdata(phy);
+
+	clk_disable_unprepare(priv->phy_clk);
+	clk_disable_unprepare(priv->pdi_clk);
+
+	return 0;
+}
+
+static struct phy_ops ltq_vrx200_pcie_phy_ops = {
+	.init = ltq_vrx200_pcie_phy_init,
+	.exit = ltq_vrx200_pcie_phy_exit,
+	.power_on = ltq_vrx200_pcie_phy_power_on,
+	.power_off = ltq_vrx200_pcie_phy_power_off,
+	.owner = THIS_MODULE,
+};
+
+static struct phy *ltq_vrx200_pcie_phy_xlate(struct device *dev,
+					     struct of_phandle_args *args)
+{
+	struct ltq_vrx200_pcie_phy_priv *priv = dev_get_drvdata(dev);
+	unsigned int mode;
+
+	if (args->args_count != 1) {
+		dev_err(dev, "invalid number of arguments\n");
+		return ERR_PTR(-EINVAL);
+	}
+
+	mode = args->args[0];
+
+	switch (mode) {
+	case LANTIQ_PCIE_PHY_MODE_36MHZ:
+		priv->mode = mode;
+		break;
+
+	case LANTIQ_PCIE_PHY_MODE_25MHZ:
+	case LANTIQ_PCIE_PHY_MODE_25MHZ_SSC:
+	case LANTIQ_PCIE_PHY_MODE_36MHZ_SSC:
+	case LANTIQ_PCIE_PHY_MODE_100MHZ:
+	case LANTIQ_PCIE_PHY_MODE_100MHZ_SSC:
+		dev_err(dev, "PHY mode not implemented yet: %u\n", mode);
+		return ERR_PTR(-EINVAL);
+
+	default:
+		dev_err(dev, "invalid PHY mode %u\n", mode);
+		return ERR_PTR(-EINVAL);
+	};
+
+	return priv->phy;
+}
+
+static int ltq_vrx200_pcie_phy_probe(struct platform_device *pdev)
+{
+	static const struct regmap_config regmap_config = {
+		.reg_bits = 8,
+		.val_bits = 16,
+		.reg_stride = 2,
+		.max_register = PCIE_PHY_RX1_A_CTRL,
+	};
+	struct ltq_vrx200_pcie_phy_priv *priv;
+	struct device *dev = &pdev->dev;
+	struct phy_provider *provider;
+	struct resource *res;
+	void __iomem *base;
+	int ret;
+
+	priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+	if (!priv)
+		return -ENOMEM;
+
+	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+	base = devm_ioremap_resource(dev, res);
+	if (IS_ERR(base))
+		return PTR_ERR(base);
+
+	priv->phy_regmap = devm_regmap_init_mmio(dev, base, &regmap_config);
+	if (IS_ERR(priv->phy_regmap))
+		return PTR_ERR(priv->phy_regmap);
+
+	priv->rcu_regmap = syscon_regmap_lookup_by_phandle(dev->of_node,
+							   "lantiq,rcu");
+	if (IS_ERR(priv->rcu_regmap))
+		return PTR_ERR(priv->rcu_regmap);
+
+	ret = device_property_read_u32(dev, "lantiq,rcu-endian-offset",
+				       &priv->rcu_ahb_endian_offset);
+	if (ret) {
+		dev_err(dev,
+			"failed to parse the 'lantiq,rcu-endian-offset' property\n");
+		return ret;
+	}
+
+	ret = device_property_read_u32(dev, "lantiq,rcu-big-endian-mask",
+				       &priv->rcu_ahb_endian_big_endian_mask);
+	if (ret) {
+		dev_err(dev,
+			"failed to parse the 'lantiq,rcu-big-endian-mask' property\n");
+		return ret;
+	}
+
+	priv->pdi_clk = devm_clk_get(dev, "pdi");
+	if (IS_ERR(priv->pdi_clk))
+		return PTR_ERR(priv->pdi_clk);
+
+	priv->phy_clk = devm_clk_get(dev, "phy");
+	if (IS_ERR(priv->phy_clk))
+		return PTR_ERR(priv->phy_clk);
+
+	priv->phy_reset = devm_reset_control_get_exclusive(dev, "phy");
+	if (IS_ERR(priv->phy_reset))
+		return PTR_ERR(priv->phy_reset);
+
+	priv->pcie_reset = devm_reset_control_get_shared(dev, "pcie");
+	if (IS_ERR(priv->pcie_reset))
+		return PTR_ERR(priv->pcie_reset);
+
+	priv->dev = dev;
+
+	priv->phy = devm_phy_create(dev, dev->of_node,
+				    &ltq_vrx200_pcie_phy_ops);
+	if (IS_ERR(priv->phy)) {
+		dev_err(dev, "failed to create PHY\n");
+		return PTR_ERR(priv->phy);
+	}
+
+	phy_set_drvdata(priv->phy, priv);
+	dev_set_drvdata(dev, priv);
+
+	provider = devm_of_phy_provider_register(dev,
+						 ltq_vrx200_pcie_phy_xlate);
+
+	return PTR_ERR_OR_ZERO(provider);
+}
+
+static const struct of_device_id ltq_vrx200_pcie_phy_of_match[] = {
+	{ .compatible = "lantiq,vrx200-pcie-phy", },
+	{ .compatible = "lantiq,arx300-pcie-phy", },
+	{ /* sentinel */ },
+};
+MODULE_DEVICE_TABLE(of, ltq_vrx200_pcie_phy_of_match);
+
+static struct platform_driver ltq_vrx200_pcie_phy_driver = {
+	.probe = ltq_vrx200_pcie_phy_probe,
+	.driver = {
+		.name = "ltq-vrx200-pcie-phy",
+		.of_match_table = ltq_vrx200_pcie_phy_of_match,
+	}
+};
+module_platform_driver(ltq_vrx200_pcie_phy_driver);
+
+MODULE_AUTHOR("Martin Blumenstingl <martin.blumenstingl@googlemail.com>");
+MODULE_DESCRIPTION("Lantiq VRX200 and ARX300 PCIe PHY driver");
+MODULE_LICENSE("GPL v2");
+1
drivers/phy/marvell/Kconfig
···
 	tristate "Marvell CP110 comphy driver"
 	depends on ARCH_MVEBU || COMPILE_TEST
 	depends on OF
+	depends on HAVE_ARM_SMCCC
 	select GENERIC_PHY
 	help
 	  This driver allows to control the comphy, an hardware block providing
+3 -1
drivers/phy/marvell/phy-armada38x-comphy.c
···
 	}
 
 	phy = devm_phy_create(&pdev->dev, child, &a38x_comphy_ops);
-	if (IS_ERR(phy))
+	if (IS_ERR(phy)) {
+		of_node_put(child);
 		return PTR_ERR(phy);
+	}
 
 	priv->lane[val].base = base + 0x28 * val;
 	priv->lane[val].priv = priv;
+14 -3
drivers/phy/marvell/phy-mvebu-a3700-comphy.c
···
 #define COMPHY_SIP_POWER_ON	0x82000001
 #define COMPHY_SIP_POWER_OFF	0x82000002
 #define COMPHY_SIP_PLL_LOCK	0x82000003
+#define COMPHY_FW_NOT_SUPPORTED	(-1)
 
 #define COMPHY_FW_MODE_SATA	0x1
 #define COMPHY_FW_MODE_SGMII	0x2
···
 	struct mvebu_a3700_comphy_lane *lane = phy_get_drvdata(phy);
 	u32 fw_param;
 	int fw_mode;
+	int ret;
 
 	fw_mode = mvebu_a3700_comphy_get_fw_mode(lane->id, lane->port,
 						 lane->mode, lane->submode);
···
 		return -ENOTSUPP;
 	}
 
-	return mvebu_a3700_comphy_smc(COMPHY_SIP_POWER_ON, lane->id, fw_param);
+	ret = mvebu_a3700_comphy_smc(COMPHY_SIP_POWER_ON, lane->id, fw_param);
+	if (ret == COMPHY_FW_NOT_SUPPORTED)
+		dev_err(lane->dev,
+			"unsupported SMC call, try updating your firmware\n");
+
+	return ret;
 }
 
 static int mvebu_a3700_comphy_power_off(struct phy *phy)
···
 	}
 
 	lane = devm_kzalloc(&pdev->dev, sizeof(*lane), GFP_KERNEL);
-	if (!lane)
+	if (!lane) {
+		of_node_put(child);
 		return -ENOMEM;
+	}
 
 	phy = devm_phy_create(&pdev->dev, child,
 			      &mvebu_a3700_comphy_ops);
-	if (IS_ERR(phy))
+	if (IS_ERR(phy)) {
+		of_node_put(child);
 		return PTR_ERR(phy);
+	}
 
 	lane->dev = &pdev->dev;
 	lane->mode = PHY_MODE_INVALID;
+475 -52
drivers/phy/marvell/phy-mvebu-cp110-comphy.c
···
  * Antoine Tenart <antoine.tenart@free-electrons.com>
  */
 
+#include <linux/arm-smccc.h>
+#include <linux/clk.h>
 #include <linux/io.h>
 #include <linux/iopoll.h>
 #include <linux/mfd/syscon.h>
···
 #define MVEBU_COMPHY_SERDES_CFG0_PU_RX		BIT(11)
 #define MVEBU_COMPHY_SERDES_CFG0_PU_TX		BIT(12)
 #define MVEBU_COMPHY_SERDES_CFG0_HALF_BUS	BIT(14)
+#define MVEBU_COMPHY_SERDES_CFG0_RXAUI_MODE	BIT(15)
 #define MVEBU_COMPHY_SERDES_CFG1(n)		(0x4 + (n) * 0x1000)
 #define MVEBU_COMPHY_SERDES_CFG1_RESET		BIT(3)
 #define MVEBU_COMPHY_SERDES_CFG1_RX_INIT	BIT(4)
···
 #define MVEBU_COMPHY_TX_SLEW_RATE(n)		(0x974 + (n) * 0x1000)
 #define MVEBU_COMPHY_TX_SLEW_RATE_EMPH(n)	((n) << 5)
 #define MVEBU_COMPHY_TX_SLEW_RATE_SLC(n)	((n) << 10)
-#define MVEBU_COMPHY_DLT_CTRL(n)		(0x984 + (n) * 0x1000)
-#define MVEBU_COMPHY_DLT_CTRL_DTL_FLOOP_EN	BIT(2)
+#define MVEBU_COMPHY_DTL_CTRL(n)		(0x984 + (n) * 0x1000)
+#define MVEBU_COMPHY_DTL_CTRL_DTL_FLOOP_EN	BIT(2)
 #define MVEBU_COMPHY_FRAME_DETECT0(n)		(0xa14 + (n) * 0x1000)
 #define MVEBU_COMPHY_FRAME_DETECT0_PATN(n)	((n) << 7)
 #define MVEBU_COMPHY_FRAME_DETECT3(n)		(0xa20 + (n) * 0x1000)
···
 #define MVEBU_COMPHY_SELECTOR_PHY(n)		((n) * 0x4)
 #define MVEBU_COMPHY_PIPE_SELECTOR		0x1144
 #define MVEBU_COMPHY_PIPE_SELECTOR_PIPE(n)	((n) * 0x4)
+#define MVEBU_COMPHY_SD1_CTRL1			0x1148
+#define MVEBU_COMPHY_SD1_CTRL1_RXAUI1_EN	BIT(26)
+#define MVEBU_COMPHY_SD1_CTRL1_RXAUI0_EN	BIT(27)
 
 #define MVEBU_COMPHY_LANES	6
 #define MVEBU_COMPHY_PORTS	3
+
+#define COMPHY_SIP_POWER_ON	0x82000001
+#define COMPHY_SIP_POWER_OFF	0x82000002
+#define COMPHY_FW_NOT_SUPPORTED	(-1)
+
+/*
+ * A lane is described by the following bitfields:
+ * [ 1- 0]: COMPHY polarity invertion
+ * [ 2- 7]: COMPHY speed
+ * [ 5-11]: COMPHY port index
+ * [12-16]: COMPHY mode
+ * [17]: Clock source
+ * [18-20]: PCIe width (x1, x2, x4)
+ */
+#define COMPHY_FW_POL_OFFSET	0
+#define COMPHY_FW_POL_MASK	GENMASK(1, 0)
+#define COMPHY_FW_SPEED_OFFSET	2
+#define COMPHY_FW_SPEED_MASK	GENMASK(7, 2)
+#define COMPHY_FW_SPEED_MAX	COMPHY_FW_SPEED_MASK
+#define COMPHY_FW_SPEED_1250	0
+#define COMPHY_FW_SPEED_3125	2
+#define COMPHY_FW_SPEED_5000	3
+#define COMPHY_FW_SPEED_103125	6
+#define COMPHY_FW_PORT_OFFSET	8
+#define COMPHY_FW_PORT_MASK	GENMASK(11, 8)
+#define COMPHY_FW_MODE_OFFSET	12
+#define COMPHY_FW_MODE_MASK	GENMASK(16, 12)
+#define COMPHY_FW_WIDTH_OFFSET	18
+#define COMPHY_FW_WIDTH_MASK	GENMASK(20, 18)
+
+#define COMPHY_FW_PARAM_FULL(mode, port, speed, pol, width)		\
+	((((pol) << COMPHY_FW_POL_OFFSET) & COMPHY_FW_POL_MASK) |	\
+	 (((mode) << COMPHY_FW_MODE_OFFSET) & COMPHY_FW_MODE_MASK) |	\
+	 (((port) << COMPHY_FW_PORT_OFFSET) & COMPHY_FW_PORT_MASK) |	\
+	 (((speed) << COMPHY_FW_SPEED_OFFSET) & COMPHY_FW_SPEED_MASK) | \
+	 (((width) << COMPHY_FW_WIDTH_OFFSET) & COMPHY_FW_WIDTH_MASK))
+
+#define COMPHY_FW_PARAM(mode, port)		\
+	COMPHY_FW_PARAM_FULL(mode, port, COMPHY_FW_SPEED_MAX, 0, 0)
+
+#define COMPHY_FW_PARAM_ETH(mode, port, speed)	\
+	COMPHY_FW_PARAM_FULL(mode, port, speed, 0, 0)
+
+#define COMPHY_FW_PARAM_PCIE(mode, port, width)	\
+	COMPHY_FW_PARAM_FULL(mode, port, COMPHY_FW_SPEED_5000, 0, width)
+
+#define COMPHY_FW_MODE_SATA	0x1
+#define COMPHY_FW_MODE_SGMII	0x2	/* SGMII 1G */
+#define COMPHY_FW_MODE_HS_SGMII	0x3	/* SGMII 2.5G */
+#define COMPHY_FW_MODE_USB3H	0x4
+#define COMPHY_FW_MODE_USB3D	0x5
+#define COMPHY_FW_MODE_PCIE	0x6
+#define COMPHY_FW_MODE_RXAUI	0x7
+#define COMPHY_FW_MODE_XFI	0x8	/* SFI: 0x9 (is treated like XFI) */
 
 struct mvebu_comphy_conf {
 	enum phy_mode mode;
···
 	unsigned lane;
 	unsigned port;
 	u32 mux;
+	u32 fw_mode;
 };
 
-#define MVEBU_COMPHY_CONF(_lane, _port, _submode, _mux)	\
+#define ETH_CONF(_lane, _port, _submode, _mux, _fw)	\
 	{						\
 		.lane = _lane,				\
 		.port = _port,				\
 		.mode = PHY_MODE_ETHERNET,		\
 		.submode = _submode,			\
 		.mux = _mux,				\
+		.fw_mode = _fw,				\
+	}
+
+#define GEN_CONF(_lane, _port, _mode, _fw)		\
+	{						\
+		.lane = _lane,				\
+		.port = _port,				\
+		.mode = _mode,				\
+		.submode = PHY_INTERFACE_MODE_NA,	\
+		.mux = -1,				\
+		.fw_mode = _fw,				\
 	}
 
 static const struct mvebu_comphy_conf mvebu_comphy_cp110_modes[] = {
 	/* lane 0 */
-	MVEBU_COMPHY_CONF(0, 1, PHY_INTERFACE_MODE_SGMII, 0x1),
-	MVEBU_COMPHY_CONF(0, 1, PHY_INTERFACE_MODE_2500BASEX, 0x1),
+	GEN_CONF(0, 0, PHY_MODE_PCIE, COMPHY_FW_MODE_PCIE),
+	ETH_CONF(0, 1, PHY_INTERFACE_MODE_SGMII, 0x1, COMPHY_FW_MODE_SGMII),
+	ETH_CONF(0, 1, PHY_INTERFACE_MODE_2500BASEX, 0x1, COMPHY_FW_MODE_HS_SGMII),
+	GEN_CONF(0, 1, PHY_MODE_SATA, COMPHY_FW_MODE_SATA),
 	/* lane 1 */
-	MVEBU_COMPHY_CONF(1, 2, PHY_INTERFACE_MODE_SGMII, 0x1),
-	MVEBU_COMPHY_CONF(1, 2, PHY_INTERFACE_MODE_2500BASEX, 0x1),
+	GEN_CONF(1, 0, PHY_MODE_USB_HOST_SS, COMPHY_FW_MODE_USB3H),
+	GEN_CONF(1, 0, PHY_MODE_USB_DEVICE_SS, COMPHY_FW_MODE_USB3D),
+	GEN_CONF(1, 0, PHY_MODE_SATA, COMPHY_FW_MODE_SATA),
+	GEN_CONF(1, 0, PHY_MODE_PCIE, COMPHY_FW_MODE_PCIE),
+	ETH_CONF(1, 2, PHY_INTERFACE_MODE_SGMII, 0x1, COMPHY_FW_MODE_SGMII),
+	ETH_CONF(1, 2, PHY_INTERFACE_MODE_2500BASEX, 0x1, COMPHY_FW_MODE_HS_SGMII),
 	/* lane 2 */
-	MVEBU_COMPHY_CONF(2, 0, PHY_INTERFACE_MODE_SGMII, 0x1),
-	MVEBU_COMPHY_CONF(2, 0, PHY_INTERFACE_MODE_2500BASEX, 0x1),
-	MVEBU_COMPHY_CONF(2, 0, PHY_INTERFACE_MODE_10GKR, 0x1),
+	ETH_CONF(2, 0, PHY_INTERFACE_MODE_SGMII, 0x1, COMPHY_FW_MODE_SGMII),
+	ETH_CONF(2, 0, PHY_INTERFACE_MODE_2500BASEX, 0x1,
COMPHY_FW_MODE_HS_SGMII), 163 + ETH_CONF(2, 0, PHY_INTERFACE_MODE_RXAUI, 0x1, COMPHY_FW_MODE_RXAUI), 164 + ETH_CONF(2, 0, PHY_INTERFACE_MODE_10GKR, 0x1, COMPHY_FW_MODE_XFI), 165 + GEN_CONF(2, 0, PHY_MODE_USB_HOST_SS, COMPHY_FW_MODE_USB3H), 166 + GEN_CONF(2, 0, PHY_MODE_SATA, COMPHY_FW_MODE_SATA), 167 + GEN_CONF(2, 0, PHY_MODE_PCIE, COMPHY_FW_MODE_PCIE), 206 168 /* lane 3 */ 207 - MVEBU_COMPHY_CONF(3, 1, PHY_INTERFACE_MODE_SGMII, 0x2), 208 - MVEBU_COMPHY_CONF(3, 1, PHY_INTERFACE_MODE_2500BASEX, 0x2), 169 + GEN_CONF(3, 0, PHY_MODE_PCIE, COMPHY_FW_MODE_PCIE), 170 + ETH_CONF(3, 1, PHY_INTERFACE_MODE_SGMII, 0x2, COMPHY_FW_MODE_SGMII), 171 + ETH_CONF(3, 1, PHY_INTERFACE_MODE_2500BASEX, 0x2, COMPHY_FW_MODE_HS_SGMII), 172 + ETH_CONF(3, 1, PHY_INTERFACE_MODE_RXAUI, 0x1, COMPHY_FW_MODE_RXAUI), 173 + GEN_CONF(3, 1, PHY_MODE_USB_HOST_SS, COMPHY_FW_MODE_USB3H), 174 + GEN_CONF(3, 1, PHY_MODE_SATA, COMPHY_FW_MODE_SATA), 209 175 /* lane 4 */ 210 - MVEBU_COMPHY_CONF(4, 0, PHY_INTERFACE_MODE_SGMII, 0x2), 211 - MVEBU_COMPHY_CONF(4, 0, PHY_INTERFACE_MODE_2500BASEX, 0x2), 212 - MVEBU_COMPHY_CONF(4, 0, PHY_INTERFACE_MODE_10GKR, 0x2), 213 - MVEBU_COMPHY_CONF(4, 1, PHY_INTERFACE_MODE_SGMII, 0x1), 176 + ETH_CONF(4, 0, PHY_INTERFACE_MODE_SGMII, 0x2, COMPHY_FW_MODE_SGMII), 177 + ETH_CONF(4, 0, PHY_INTERFACE_MODE_2500BASEX, 0x2, COMPHY_FW_MODE_HS_SGMII), 178 + ETH_CONF(4, 0, PHY_INTERFACE_MODE_10GKR, 0x2, COMPHY_FW_MODE_XFI), 179 + ETH_CONF(4, 0, PHY_INTERFACE_MODE_RXAUI, 0x2, COMPHY_FW_MODE_RXAUI), 180 + GEN_CONF(4, 0, PHY_MODE_USB_DEVICE_SS, COMPHY_FW_MODE_USB3D), 181 + GEN_CONF(4, 1, PHY_MODE_USB_HOST_SS, COMPHY_FW_MODE_USB3H), 182 + GEN_CONF(4, 1, PHY_MODE_PCIE, COMPHY_FW_MODE_PCIE), 183 + ETH_CONF(4, 1, PHY_INTERFACE_MODE_SGMII, 0x1, COMPHY_FW_MODE_SGMII), 184 + ETH_CONF(4, 1, PHY_INTERFACE_MODE_2500BASEX, -1, COMPHY_FW_MODE_HS_SGMII), 185 + ETH_CONF(4, 1, PHY_INTERFACE_MODE_10GKR, -1, COMPHY_FW_MODE_XFI), 214 186 /* lane 5 */ 215 - MVEBU_COMPHY_CONF(5, 2, PHY_INTERFACE_MODE_SGMII, 0x1), 
216 - MVEBU_COMPHY_CONF(5, 2, PHY_INTERFACE_MODE_2500BASEX, 0x1), 187 + ETH_CONF(5, 1, PHY_INTERFACE_MODE_RXAUI, 0x2, COMPHY_FW_MODE_RXAUI), 188 + GEN_CONF(5, 1, PHY_MODE_SATA, COMPHY_FW_MODE_SATA), 189 + ETH_CONF(5, 2, PHY_INTERFACE_MODE_SGMII, 0x1, COMPHY_FW_MODE_SGMII), 190 + ETH_CONF(5, 2, PHY_INTERFACE_MODE_2500BASEX, 0x1, COMPHY_FW_MODE_HS_SGMII), 191 + GEN_CONF(5, 2, PHY_MODE_PCIE, COMPHY_FW_MODE_PCIE), 217 192 }; 218 193 219 194 struct mvebu_comphy_priv { 220 195 void __iomem *base; 221 196 struct regmap *regmap; 222 197 struct device *dev; 198 + struct clk *mg_domain_clk; 199 + struct clk *mg_core_clk; 200 + struct clk *axi_clk; 201 + unsigned long cp_phys; 223 202 }; 224 203 225 204 struct mvebu_comphy_lane { ··· 269 170 int port; 270 171 }; 271 172 272 - static int mvebu_comphy_get_mux(int lane, int port, 273 - enum phy_mode mode, int submode) 173 + static int mvebu_comphy_smc(unsigned long function, unsigned long phys, 174 + unsigned long lane, unsigned long mode) 175 + { 176 + struct arm_smccc_res res; 177 + 178 + arm_smccc_smc(function, phys, lane, mode, 0, 0, 0, 0, &res); 179 + 180 + return res.a0; 181 + } 182 + 183 + static int mvebu_comphy_get_mode(bool fw_mode, int lane, int port, 184 + enum phy_mode mode, int submode) 274 185 { 275 186 int i, n = ARRAY_SIZE(mvebu_comphy_cp110_modes); 187 + /* Ignore PCIe submode: it represents the width */ 188 + bool ignore_submode = (mode == PHY_MODE_PCIE); 189 + const struct mvebu_comphy_conf *conf; 276 190 277 191 /* Unused PHY mux value is 0x0 */ 278 192 if (mode == PHY_MODE_INVALID) 279 193 return 0; 280 194 281 195 for (i = 0; i < n; i++) { 282 - if (mvebu_comphy_cp110_modes[i].lane == lane && 283 - mvebu_comphy_cp110_modes[i].port == port && 284 - mvebu_comphy_cp110_modes[i].mode == mode && 285 - mvebu_comphy_cp110_modes[i].submode == submode) 196 + conf = &mvebu_comphy_cp110_modes[i]; 197 + if (conf->lane == lane && 198 + conf->port == port && 199 + conf->mode == mode && 200 + (conf->submode == submode || 
ignore_submode)) 286 201 break; 287 202 } 288 203 289 204 if (i == n) 290 205 return -EINVAL; 291 206 292 - return mvebu_comphy_cp110_modes[i].mux; 207 + if (fw_mode) 208 + return conf->fw_mode; 209 + else 210 + return conf->mux; 293 211 } 294 212 295 - static void mvebu_comphy_ethernet_init_reset(struct mvebu_comphy_lane *lane) 213 + static inline int mvebu_comphy_get_mux(int lane, int port, 214 + enum phy_mode mode, int submode) 215 + { 216 + return mvebu_comphy_get_mode(false, lane, port, mode, submode); 217 + } 218 + 219 + static inline int mvebu_comphy_get_fw_mode(int lane, int port, 220 + enum phy_mode mode, int submode) 221 + { 222 + return mvebu_comphy_get_mode(true, lane, port, mode, submode); 223 + } 224 + 225 + static int mvebu_comphy_ethernet_init_reset(struct mvebu_comphy_lane *lane) 296 226 { 297 227 struct mvebu_comphy_priv *priv = lane->priv; 298 228 u32 val; ··· 338 210 MVEBU_COMPHY_SERDES_CFG0_PU_TX | 339 211 MVEBU_COMPHY_SERDES_CFG0_HALF_BUS | 340 212 MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0xf) | 341 - MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0xf)); 342 - if (lane->submode == PHY_INTERFACE_MODE_10GKR) 213 + MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0xf) | 214 + MVEBU_COMPHY_SERDES_CFG0_RXAUI_MODE); 215 + 216 + switch (lane->submode) { 217 + case PHY_INTERFACE_MODE_10GKR: 343 218 val |= MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0xe) | 344 219 MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0xe); 345 - else if (lane->submode == PHY_INTERFACE_MODE_2500BASEX) 220 + break; 221 + case PHY_INTERFACE_MODE_RXAUI: 222 + val |= MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0xb) | 223 + MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0xb) | 224 + MVEBU_COMPHY_SERDES_CFG0_RXAUI_MODE; 225 + break; 226 + case PHY_INTERFACE_MODE_2500BASEX: 346 227 val |= MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0x8) | 347 228 MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0x8) | 348 229 MVEBU_COMPHY_SERDES_CFG0_HALF_BUS; 349 - else if (lane->submode == PHY_INTERFACE_MODE_SGMII) 230 + break; 231 + case PHY_INTERFACE_MODE_SGMII: 350 232 val |= MVEBU_COMPHY_SERDES_CFG0_GEN_RX(0x6) | 
351 233 MVEBU_COMPHY_SERDES_CFG0_GEN_TX(0x6) | 352 234 MVEBU_COMPHY_SERDES_CFG0_HALF_BUS; 235 + break; 236 + default: 237 + dev_err(priv->dev, 238 + "unsupported comphy submode (%d) on lane %d\n", 239 + lane->submode, 240 + lane->id); 241 + return -ENOTSUPP; 242 + } 243 + 353 244 writel(val, priv->base + MVEBU_COMPHY_SERDES_CFG0(lane->id)); 245 + 246 + if (lane->submode == PHY_INTERFACE_MODE_RXAUI) { 247 + regmap_read(priv->regmap, MVEBU_COMPHY_SD1_CTRL1, &val); 248 + 249 + switch (lane->id) { 250 + case 2: 251 + case 3: 252 + val |= MVEBU_COMPHY_SD1_CTRL1_RXAUI0_EN; 253 + break; 254 + case 4: 255 + case 5: 256 + val |= MVEBU_COMPHY_SD1_CTRL1_RXAUI1_EN; 257 + break; 258 + default: 259 + dev_err(priv->dev, 260 + "RXAUI is not supported on comphy lane %d\n", 261 + lane->id); 262 + return -EINVAL; 263 + } 264 + 265 + regmap_write(priv->regmap, MVEBU_COMPHY_SD1_CTRL1, val); 266 + } 354 267 355 268 /* reset */ 356 269 val = readl(priv->base + MVEBU_COMPHY_SERDES_CFG1(lane->id)); ··· 433 264 val &= ~MVEBU_COMPHY_LOOPBACK_DBUS_WIDTH(0x7); 434 265 val |= MVEBU_COMPHY_LOOPBACK_DBUS_WIDTH(0x1); 435 266 writel(val, priv->base + MVEBU_COMPHY_LOOPBACK(lane->id)); 267 + 268 + return 0; 436 269 } 437 270 438 271 static int mvebu_comphy_init_plls(struct mvebu_comphy_lane *lane) ··· 483 312 struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 484 313 struct mvebu_comphy_priv *priv = lane->priv; 485 314 u32 val; 315 + int err; 486 316 487 - mvebu_comphy_ethernet_init_reset(lane); 317 + err = mvebu_comphy_ethernet_init_reset(lane); 318 + if (err) 319 + return err; 488 320 489 321 val = readl(priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); 490 322 val &= ~MVEBU_COMPHY_RX_CTRL1_CLK8T_EN; 491 323 val |= MVEBU_COMPHY_RX_CTRL1_RXCLK2X_SEL; 492 324 writel(val, priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); 493 325 494 - val = readl(priv->base + MVEBU_COMPHY_DLT_CTRL(lane->id)); 495 - val &= ~MVEBU_COMPHY_DLT_CTRL_DTL_FLOOP_EN; 496 - writel(val, priv->base + 
MVEBU_COMPHY_DLT_CTRL(lane->id)); 326 + val = readl(priv->base + MVEBU_COMPHY_DTL_CTRL(lane->id)); 327 + val &= ~MVEBU_COMPHY_DTL_CTRL_DTL_FLOOP_EN; 328 + writel(val, priv->base + MVEBU_COMPHY_DTL_CTRL(lane->id)); 497 329 498 330 regmap_read(priv->regmap, MVEBU_COMPHY_CONF1(lane->id), &val); 499 331 val &= ~MVEBU_COMPHY_CONF1_USB_PCIE; ··· 511 337 return mvebu_comphy_init_plls(lane); 512 338 } 513 339 514 - static int mvebu_comphy_set_mode_10gkr(struct phy *phy) 340 + static int mvebu_comphy_set_mode_rxaui(struct phy *phy) 515 341 { 516 342 struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 517 343 struct mvebu_comphy_priv *priv = lane->priv; 518 344 u32 val; 345 + int err; 519 346 520 - mvebu_comphy_ethernet_init_reset(lane); 347 + err = mvebu_comphy_ethernet_init_reset(lane); 348 + if (err) 349 + return err; 521 350 522 351 val = readl(priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); 523 352 val |= MVEBU_COMPHY_RX_CTRL1_RXCLK2X_SEL | 524 353 MVEBU_COMPHY_RX_CTRL1_CLK8T_EN; 525 354 writel(val, priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); 526 355 527 - val = readl(priv->base + MVEBU_COMPHY_DLT_CTRL(lane->id)); 528 - val |= MVEBU_COMPHY_DLT_CTRL_DTL_FLOOP_EN; 529 - writel(val, priv->base + MVEBU_COMPHY_DLT_CTRL(lane->id)); 356 + val = readl(priv->base + MVEBU_COMPHY_DTL_CTRL(lane->id)); 357 + val |= MVEBU_COMPHY_DTL_CTRL_DTL_FLOOP_EN; 358 + writel(val, priv->base + MVEBU_COMPHY_DTL_CTRL(lane->id)); 359 + 360 + val = readl(priv->base + MVEBU_COMPHY_SERDES_CFG2(lane->id)); 361 + val |= MVEBU_COMPHY_SERDES_CFG2_DFE_EN; 362 + writel(val, priv->base + MVEBU_COMPHY_SERDES_CFG2(lane->id)); 363 + 364 + val = readl(priv->base + MVEBU_COMPHY_DFE_RES(lane->id)); 365 + val |= MVEBU_COMPHY_DFE_RES_FORCE_GEN_TBL; 366 + writel(val, priv->base + MVEBU_COMPHY_DFE_RES(lane->id)); 367 + 368 + val = readl(priv->base + MVEBU_COMPHY_GEN1_S0(lane->id)); 369 + val &= ~MVEBU_COMPHY_GEN1_S0_TX_EMPH(0xf); 370 + val |= MVEBU_COMPHY_GEN1_S0_TX_EMPH(0xd); 371 + writel(val, priv->base + 
MVEBU_COMPHY_GEN1_S0(lane->id)); 372 + 373 + val = readl(priv->base + MVEBU_COMPHY_GEN1_S1(lane->id)); 374 + val &= ~(MVEBU_COMPHY_GEN1_S1_RX_MUL_PI(0x7) | 375 + MVEBU_COMPHY_GEN1_S1_RX_MUL_PF(0x7)); 376 + val |= MVEBU_COMPHY_GEN1_S1_RX_MUL_PI(0x1) | 377 + MVEBU_COMPHY_GEN1_S1_RX_MUL_PF(0x1) | 378 + MVEBU_COMPHY_GEN1_S1_RX_DFE_EN; 379 + writel(val, priv->base + MVEBU_COMPHY_GEN1_S1(lane->id)); 380 + 381 + val = readl(priv->base + MVEBU_COMPHY_COEF(lane->id)); 382 + val &= ~(MVEBU_COMPHY_COEF_DFE_EN | MVEBU_COMPHY_COEF_DFE_CTRL); 383 + writel(val, priv->base + MVEBU_COMPHY_COEF(lane->id)); 384 + 385 + val = readl(priv->base + MVEBU_COMPHY_GEN1_S4(lane->id)); 386 + val &= ~MVEBU_COMPHY_GEN1_S4_DFE_RES(0x3); 387 + val |= MVEBU_COMPHY_GEN1_S4_DFE_RES(0x1); 388 + writel(val, priv->base + MVEBU_COMPHY_GEN1_S4(lane->id)); 389 + 390 + return mvebu_comphy_init_plls(lane); 391 + } 392 + 393 + static int mvebu_comphy_set_mode_10gkr(struct phy *phy) 394 + { 395 + struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 396 + struct mvebu_comphy_priv *priv = lane->priv; 397 + u32 val; 398 + int err; 399 + 400 + err = mvebu_comphy_ethernet_init_reset(lane); 401 + if (err) 402 + return err; 403 + 404 + val = readl(priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); 405 + val |= MVEBU_COMPHY_RX_CTRL1_RXCLK2X_SEL | 406 + MVEBU_COMPHY_RX_CTRL1_CLK8T_EN; 407 + writel(val, priv->base + MVEBU_COMPHY_RX_CTRL1(lane->id)); 408 + 409 + val = readl(priv->base + MVEBU_COMPHY_DTL_CTRL(lane->id)); 410 + val |= MVEBU_COMPHY_DTL_CTRL_DTL_FLOOP_EN; 411 + writel(val, priv->base + MVEBU_COMPHY_DTL_CTRL(lane->id)); 530 412 531 413 /* Speed divider */ 532 414 val = readl(priv->base + MVEBU_COMPHY_SPEED_DIV(lane->id)); ··· 706 476 return mvebu_comphy_init_plls(lane); 707 477 } 708 478 709 - static int mvebu_comphy_power_on(struct phy *phy) 479 + static int mvebu_comphy_power_on_legacy(struct phy *phy) 710 480 { 711 481 struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 712 482 struct mvebu_comphy_priv 
*priv = lane->priv; ··· 732 502 case PHY_INTERFACE_MODE_2500BASEX: 733 503 ret = mvebu_comphy_set_mode_sgmii(phy); 734 504 break; 505 + case PHY_INTERFACE_MODE_RXAUI: 506 + ret = mvebu_comphy_set_mode_rxaui(phy); 507 + break; 735 508 case PHY_INTERFACE_MODE_10GKR: 736 509 ret = mvebu_comphy_set_mode_10gkr(phy); 737 510 break; ··· 750 517 return ret; 751 518 } 752 519 520 + static int mvebu_comphy_power_on(struct phy *phy) 521 + { 522 + struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 523 + struct mvebu_comphy_priv *priv = lane->priv; 524 + int fw_mode, fw_speed; 525 + u32 fw_param = 0; 526 + int ret; 527 + 528 + fw_mode = mvebu_comphy_get_fw_mode(lane->id, lane->port, 529 + lane->mode, lane->submode); 530 + if (fw_mode < 0) 531 + goto try_legacy; 532 + 533 + /* Try SMC flow first */ 534 + switch (lane->mode) { 535 + case PHY_MODE_ETHERNET: 536 + switch (lane->submode) { 537 + case PHY_INTERFACE_MODE_RXAUI: 538 + dev_dbg(priv->dev, "set lane %d to RXAUI mode\n", 539 + lane->id); 540 + fw_speed = 0; 541 + break; 542 + case PHY_INTERFACE_MODE_SGMII: 543 + dev_dbg(priv->dev, "set lane %d to 1000BASE-X mode\n", 544 + lane->id); 545 + fw_speed = COMPHY_FW_SPEED_1250; 546 + break; 547 + case PHY_INTERFACE_MODE_2500BASEX: 548 + dev_dbg(priv->dev, "set lane %d to 2500BASE-X mode\n", 549 + lane->id); 550 + fw_speed = COMPHY_FW_SPEED_3125; 551 + break; 552 + case PHY_INTERFACE_MODE_10GKR: 553 + dev_dbg(priv->dev, "set lane %d to 10G-KR mode\n", 554 + lane->id); 555 + fw_speed = COMPHY_FW_SPEED_103125; 556 + break; 557 + default: 558 + dev_err(priv->dev, "unsupported Ethernet mode (%d)\n", 559 + lane->submode); 560 + return -ENOTSUPP; 561 + } 562 + fw_param = COMPHY_FW_PARAM_ETH(fw_mode, lane->port, fw_speed); 563 + break; 564 + case PHY_MODE_USB_HOST_SS: 565 + case PHY_MODE_USB_DEVICE_SS: 566 + dev_dbg(priv->dev, "set lane %d to USB3 mode\n", lane->id); 567 + fw_param = COMPHY_FW_PARAM(fw_mode, lane->port); 568 + break; 569 + case PHY_MODE_SATA: 570 + dev_dbg(priv->dev, 
"set lane %d to SATA mode\n", lane->id); 571 + fw_param = COMPHY_FW_PARAM(fw_mode, lane->port); 572 + break; 573 + case PHY_MODE_PCIE: 574 + dev_dbg(priv->dev, "set lane %d to PCIe mode (x%d)\n", lane->id, 575 + lane->submode); 576 + fw_param = COMPHY_FW_PARAM_PCIE(fw_mode, lane->port, 577 + lane->submode); 578 + break; 579 + default: 580 + dev_err(priv->dev, "unsupported PHY mode (%d)\n", lane->mode); 581 + return -ENOTSUPP; 582 + } 583 + 584 + ret = mvebu_comphy_smc(COMPHY_SIP_POWER_ON, priv->cp_phys, lane->id, 585 + fw_param); 586 + if (!ret) 587 + return ret; 588 + 589 + if (ret == COMPHY_FW_NOT_SUPPORTED) 590 + dev_err(priv->dev, 591 + "unsupported SMC call, try updating your firmware\n"); 592 + 593 + dev_warn(priv->dev, 594 + "Firmware could not configure PHY %d with mode %d (ret: %d), trying legacy method\n", 595 + lane->id, lane->mode, ret); 596 + 597 + try_legacy: 598 + /* Fallback to Linux's implementation */ 599 + return mvebu_comphy_power_on_legacy(phy); 600 + } 601 + 753 602 static int mvebu_comphy_set_mode(struct phy *phy, 754 603 enum phy_mode mode, int submode) 755 604 { 756 605 struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 757 606 758 - if (mode != PHY_MODE_ETHERNET) 759 - return -EINVAL; 760 - 761 607 if (submode == PHY_INTERFACE_MODE_1000BASEX) 762 608 submode = PHY_INTERFACE_MODE_SGMII; 763 609 764 - if (mvebu_comphy_get_mux(lane->id, lane->port, mode, submode) < 0) 610 + if (mvebu_comphy_get_fw_mode(lane->id, lane->port, mode, submode) < 0) 765 611 return -EINVAL; 766 612 767 613 lane->mode = mode; 768 614 lane->submode = submode; 615 + 616 + /* PCIe submode represents the width */ 617 + if (mode == PHY_MODE_PCIE && !lane->submode) 618 + lane->submode = 1; 619 + 769 620 return 0; 770 621 } 771 622 772 - static int mvebu_comphy_power_off(struct phy *phy) 623 + static int mvebu_comphy_power_off_legacy(struct phy *phy) 773 624 { 774 625 struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 775 626 struct mvebu_comphy_priv *priv = 
lane->priv; ··· 874 557 regmap_write(priv->regmap, MVEBU_COMPHY_PIPE_SELECTOR, val); 875 558 876 559 return 0; 560 + } 561 + 562 + static int mvebu_comphy_power_off(struct phy *phy) 563 + { 564 + struct mvebu_comphy_lane *lane = phy_get_drvdata(phy); 565 + struct mvebu_comphy_priv *priv = lane->priv; 566 + int ret; 567 + 568 + ret = mvebu_comphy_smc(COMPHY_SIP_POWER_OFF, priv->cp_phys, 569 + lane->id, 0); 570 + if (!ret) 571 + return ret; 572 + 573 + /* Fallback to Linux's implementation */ 574 + return mvebu_comphy_power_off_legacy(phy); 877 575 } 878 576 879 577 static const struct phy_ops mvebu_comphy_ops = { ··· 917 585 return phy; 918 586 } 919 587 588 + static int mvebu_comphy_init_clks(struct mvebu_comphy_priv *priv) 589 + { 590 + int ret; 591 + 592 + priv->mg_domain_clk = devm_clk_get(priv->dev, "mg_clk"); 593 + if (IS_ERR(priv->mg_domain_clk)) 594 + return PTR_ERR(priv->mg_domain_clk); 595 + 596 + ret = clk_prepare_enable(priv->mg_domain_clk); 597 + if (ret < 0) 598 + return ret; 599 + 600 + priv->mg_core_clk = devm_clk_get(priv->dev, "mg_core_clk"); 601 + if (IS_ERR(priv->mg_core_clk)) { 602 + ret = PTR_ERR(priv->mg_core_clk); 603 + goto dis_mg_domain_clk; 604 + } 605 + 606 + ret = clk_prepare_enable(priv->mg_core_clk); 607 + if (ret < 0) 608 + goto dis_mg_domain_clk; 609 + 610 + priv->axi_clk = devm_clk_get(priv->dev, "axi_clk"); 611 + if (IS_ERR(priv->axi_clk)) { 612 + ret = PTR_ERR(priv->axi_clk); 613 + goto dis_mg_core_clk; 614 + } 615 + 616 + ret = clk_prepare_enable(priv->axi_clk); 617 + if (ret < 0) 618 + goto dis_mg_core_clk; 619 + 620 + return 0; 621 + 622 + dis_mg_core_clk: 623 + clk_disable_unprepare(priv->mg_core_clk); 624 + 625 + dis_mg_domain_clk: 626 + clk_disable_unprepare(priv->mg_domain_clk); 627 + 628 + priv->mg_domain_clk = NULL; 629 + priv->mg_core_clk = NULL; 630 + priv->axi_clk = NULL; 631 + 632 + return ret; 633 + }; 634 + 635 + static void mvebu_comphy_disable_unprepare_clks(struct mvebu_comphy_priv *priv) 636 + { 637 + if 
(priv->axi_clk) 638 + clk_disable_unprepare(priv->axi_clk); 639 + 640 + if (priv->mg_core_clk) 641 + clk_disable_unprepare(priv->mg_core_clk); 642 + 643 + if (priv->mg_domain_clk) 644 + clk_disable_unprepare(priv->mg_domain_clk); 645 + } 646 + 920 647 static int mvebu_comphy_probe(struct platform_device *pdev) 921 648 { 922 649 struct mvebu_comphy_priv *priv; 923 650 struct phy_provider *provider; 924 651 struct device_node *child; 925 652 struct resource *res; 653 + int ret; 926 654 927 655 priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL); 928 656 if (!priv) ··· 999 607 if (IS_ERR(priv->base)) 1000 608 return PTR_ERR(priv->base); 1001 609 610 + /* 611 + * Ignore error if clocks have not been initialized properly for DT 612 + * compatibility reasons. 613 + */ 614 + ret = mvebu_comphy_init_clks(priv); 615 + if (ret) { 616 + if (ret == -EPROBE_DEFER) 617 + return ret; 618 + dev_warn(&pdev->dev, "cannot initialize clocks\n"); 619 + } 620 + 621 + /* 622 + * Hack to retrieve a physical offset relative to this CP that will be 623 + * given to the firmware 624 + */ 625 + priv->cp_phys = res->start; 626 + 1002 627 for_each_available_child_of_node(pdev->dev.of_node, child) { 1003 628 struct mvebu_comphy_lane *lane; 1004 629 struct phy *phy; 1005 - int ret; 1006 630 u32 val; 1007 631 1008 632 ret = of_property_read_u32(child, "reg", &val); ··· 1034 626 } 1035 627 1036 628 lane = devm_kzalloc(&pdev->dev, sizeof(*lane), GFP_KERNEL); 1037 - if (!lane) 1038 - return -ENOMEM; 629 + if (!lane) { 630 + of_node_put(child); 631 + ret = -ENOMEM; 632 + goto disable_clks; 633 + } 1039 634 1040 635 phy = devm_phy_create(&pdev->dev, child, &mvebu_comphy_ops); 1041 - if (IS_ERR(phy)) 1042 - return PTR_ERR(phy); 636 + if (IS_ERR(phy)) { 637 + of_node_put(child); 638 + ret = PTR_ERR(phy); 639 + goto disable_clks; 640 + } 1043 641 1044 642 lane->priv = priv; 1045 643 lane->mode = PHY_MODE_INVALID; 644 + lane->submode = PHY_INTERFACE_MODE_NA; 1046 645 lane->id = val; 1047 646 
lane->port = -1; 1048 647 phy_set_drvdata(phy, lane); 1049 648 1050 649 /* 1051 - * Once all modes are supported in this driver we should call 650 + * All modes are supported in this driver so we could call 1052 651 * mvebu_comphy_power_off(phy) here to avoid relying on the 1053 - * bootloader/firmware configuration. 652 + * bootloader/firmware configuration, but for compatibility 653 + * reasons we cannot de-configure the COMPHY without being sure 654 + * that the firmware is up-to-date and fully-featured. 1054 655 */ 1055 656 } 1056 657 1057 658 dev_set_drvdata(&pdev->dev, priv); 1058 659 provider = devm_of_phy_provider_register(&pdev->dev, 1059 660 mvebu_comphy_xlate); 661 + 1060 662 return PTR_ERR_OR_ZERO(provider); 663 + 664 + disable_clks: 665 + mvebu_comphy_disable_unprepare_clks(priv); 666 + 667 + return ret; 1061 668 } 1062 669 1063 670 static const struct of_device_id mvebu_comphy_of_match_table[] = {
+10
drivers/phy/phy-core.c
··· 394 394 } 395 395 EXPORT_SYMBOL_GPL(phy_reset); 396 396 397 + /** 398 + * phy_calibrate() - Tunes the phy hw parameters for current configuration 399 + * @phy: the phy returned by phy_get() 400 + * 401 + * Used to calibrate phy hardware, typically by adjusting some parameters at 402 + * runtime, which are otherwise lost after host controller reset and cannot 403 + * be applied in phy_init() or phy_power_on(). 404 + * 405 + * Returns: 0 if successful, a negative error code otherwise 406 + */ 397 407 int phy_calibrate(struct phy *phy) 398 408 { 399 409 int ret;
+23 -21
drivers/phy/qualcomm/phy-qcom-qmp.c
··· 35 35 #define PLL_READY_GATE_EN BIT(3) 36 36 /* QPHY_PCS_STATUS bit */ 37 37 #define PHYSTATUS BIT(6) 38 - /* QPHY_COM_PCS_READY_STATUS bit */ 38 + /* QPHY_PCS_READY_STATUS & QPHY_COM_PCS_READY_STATUS bit */ 39 39 #define PCS_READY BIT(0) 40 40 41 41 /* QPHY_V3_DP_COM_RESET_OVRD_CTRL register bits */ ··· 115 115 QPHY_SW_RESET, 116 116 QPHY_START_CTRL, 117 117 QPHY_PCS_READY_STATUS, 118 + QPHY_PCS_STATUS, 118 119 QPHY_PCS_AUTONOMOUS_MODE_CTRL, 119 120 QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR, 120 121 QPHY_PCS_LFPS_RXTERM_IRQ_STATUS, ··· 134 133 [QPHY_FLL_MAN_CODE] = 0xd4, 135 134 [QPHY_SW_RESET] = 0x00, 136 135 [QPHY_START_CTRL] = 0x08, 137 - [QPHY_PCS_READY_STATUS] = 0x174, 136 + [QPHY_PCS_STATUS] = 0x174, 138 137 }; 139 138 140 139 static const unsigned int usb3phy_regs_layout[] = { ··· 145 144 [QPHY_FLL_MAN_CODE] = 0xd0, 146 145 [QPHY_SW_RESET] = 0x00, 147 146 [QPHY_START_CTRL] = 0x08, 148 - [QPHY_PCS_READY_STATUS] = 0x17c, 147 + [QPHY_PCS_STATUS] = 0x17c, 149 148 [QPHY_PCS_AUTONOMOUS_MODE_CTRL] = 0x0d4, 150 149 [QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR] = 0x0d8, 151 150 [QPHY_PCS_LFPS_RXTERM_IRQ_STATUS] = 0x178, ··· 154 153 static const unsigned int qmp_v3_usb3phy_regs_layout[] = { 155 154 [QPHY_SW_RESET] = 0x00, 156 155 [QPHY_START_CTRL] = 0x08, 157 - [QPHY_PCS_READY_STATUS] = 0x174, 156 + [QPHY_PCS_STATUS] = 0x174, 158 157 [QPHY_PCS_AUTONOMOUS_MODE_CTRL] = 0x0d8, 159 158 [QPHY_PCS_LFPS_RXTERM_IRQ_CLEAR] = 0x0dc, 160 159 [QPHY_PCS_LFPS_RXTERM_IRQ_STATUS] = 0x170, ··· 912 911 913 912 unsigned int start_ctrl; 914 913 unsigned int pwrdn_ctrl; 915 - unsigned int mask_pcs_ready; 916 914 unsigned int mask_com_pcs_ready; 917 915 918 916 /* true, if PHY has a separate PHY_COM control block */ ··· 1074 1074 1075 1075 .start_ctrl = PCS_START | PLL_READY_GATE_EN, 1076 1076 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 1077 - .mask_pcs_ready = PHYSTATUS, 1078 1077 .mask_com_pcs_ready = PCS_READY, 1079 1078 1080 1079 .has_phy_com_ctrl = true, ··· 1105 1106 1106 1107 .start_ctrl = SERDES_START 
| PCS_START, 1107 1108 .pwrdn_ctrl = SW_PWRDN, 1108 - .mask_pcs_ready = PHYSTATUS, 1109 1109 }; 1110 1110 1111 1111 /* list of resets */ ··· 1134 1136 1135 1137 .start_ctrl = SERDES_START | PCS_START, 1136 1138 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 1137 - .mask_pcs_ready = PHYSTATUS, 1138 1139 1139 1140 .has_phy_com_ctrl = false, 1140 1141 .has_lane_rst = false, ··· 1164 1167 1165 1168 .start_ctrl = SERDES_START | PCS_START, 1166 1169 .pwrdn_ctrl = SW_PWRDN, 1167 - .mask_pcs_ready = PHYSTATUS, 1168 1170 1169 1171 .has_pwrdn_delay = true, 1170 1172 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 1195 1199 1196 1200 .start_ctrl = SERDES_START | PCS_START, 1197 1201 .pwrdn_ctrl = SW_PWRDN, 1198 - .mask_pcs_ready = PHYSTATUS, 1199 1202 1200 1203 .has_pwrdn_delay = true, 1201 1204 .pwrdn_delay_min = POWER_DOWN_DELAY_US_MIN, ··· 1221 1226 1222 1227 .start_ctrl = SERDES_START, 1223 1228 .pwrdn_ctrl = SW_PWRDN, 1224 - .mask_pcs_ready = PCS_READY, 1225 1229 1226 1230 .is_dual_lane_phy = true, 1227 1231 .no_pcs_sw_reset = true, ··· 1248 1254 1249 1255 .start_ctrl = SERDES_START | PCS_START, 1250 1256 .pwrdn_ctrl = SW_PWRDN | REFCLK_DRV_DSBL, 1251 - .mask_pcs_ready = PHYSTATUS, 1252 1257 }; 1253 1258 1254 1259 static const struct qmp_phy_cfg msm8998_usb3phy_cfg = { ··· 1272 1279 1273 1280 .start_ctrl = SERDES_START | PCS_START, 1274 1281 .pwrdn_ctrl = SW_PWRDN, 1275 - .mask_pcs_ready = PHYSTATUS, 1276 1282 1277 1283 .is_dual_lane_phy = true, 1278 1284 }; ··· 1449 1457 void __iomem *pcs = qphy->pcs; 1450 1458 void __iomem *dp_com = qmp->dp_com; 1451 1459 void __iomem *status; 1452 - unsigned int mask, val; 1460 + unsigned int mask, val, ready; 1453 1461 int ret; 1454 1462 1455 1463 dev_vdbg(qmp->dev, "Initializing QMP phy\n"); ··· 1537 1545 /* start SerDes and Phy-Coding-Sublayer */ 1538 1546 qphy_setbits(pcs, cfg->regs[QPHY_START_CTRL], cfg->start_ctrl); 1539 1547 1540 - status = pcs + cfg->regs[QPHY_PCS_READY_STATUS]; 1541 - mask = cfg->mask_pcs_ready; 1548 + if 
(cfg->type == PHY_TYPE_UFS) { 1549 + status = pcs + cfg->regs[QPHY_PCS_READY_STATUS]; 1550 + mask = PCS_READY; 1551 + ready = PCS_READY; 1552 + } else { 1553 + status = pcs + cfg->regs[QPHY_PCS_STATUS]; 1554 + mask = PHYSTATUS; 1555 + ready = 0; 1556 + } 1542 1557 1543 - ret = readl_poll_timeout(status, val, val & mask, 10, 1558 + ret = readl_poll_timeout(status, val, (val & mask) == ready, 10, 1544 1559 PHY_INIT_COMPLETE_TIMEOUT); 1545 1560 if (ret) { 1546 1561 dev_err(qmp->dev, "phy initialization timed-out\n"); ··· 2092 2093 if (ret) { 2093 2094 dev_err(dev, "failed to create lane%d phy, %d\n", 2094 2095 id, ret); 2095 - pm_runtime_disable(dev); 2096 - return ret; 2096 + goto err_node_put; 2097 2097 } 2098 2098 2099 2099 /* ··· 2103 2105 if (ret) { 2104 2106 dev_err(qmp->dev, 2105 2107 "failed to register pipe clock source\n"); 2106 - pm_runtime_disable(dev); 2107 - return ret; 2108 + goto err_node_put; 2108 2109 } 2109 2110 id++; 2110 2111 } ··· 2115 2118 pm_runtime_disable(dev); 2116 2119 2117 2120 return PTR_ERR_OR_ZERO(phy_provider); 2121 + 2122 + err_node_put: 2123 + pm_runtime_disable(dev); 2124 + of_node_put(child); 2125 + return ret; 2118 2126 } 2119 2127 2120 2128 static struct platform_driver qcom_qmp_phy_driver = {
+2
drivers/phy/renesas/phy-rcar-gen3-usb2.c
··· 61 61 USB2_OBINT_IDDIGCHG) 62 62 63 63 /* VBCTRL */ 64 + #define USB2_VBCTRL_OCCLREN BIT(16) 64 65 #define USB2_VBCTRL_DRVVBUSSEL BIT(8) 65 66 66 67 /* LINECTRL1 */ ··· 375 374 writel(val, usb2_base + USB2_LINECTRL1); 376 375 377 376 val = readl(usb2_base + USB2_VBCTRL); 377 + val &= ~USB2_VBCTRL_OCCLREN; 378 378 writel(val | USB2_VBCTRL_DRVVBUSSEL, usb2_base + USB2_VBCTRL); 379 379 val = readl(usb2_base + USB2_ADPCTRL); 380 380 writel(val | USB2_ADPCTRL_IDPULLUP, usb2_base + USB2_ADPCTRL);
+1 -1
drivers/phy/rockchip/phy-rockchip-inno-hdmi.c
··· 198 198 #define RK3328_BYPASS_TERM_RESISTOR_CALIB BIT(7) 199 199 #define RK3328_TERM_RESISTOR_CALIB_SPEED_14_8(x) UPDATE((x) >> 8, 6, 0) 200 200 /* REG:0xc6 */ 201 - #define RK3328_TERM_RESISTOR_CALIB_SPEED_7_0(x) UPDATE(x, 7, 9) 201 + #define RK3328_TERM_RESISTOR_CALIB_SPEED_7_0(x) UPDATE(x, 7, 0) 202 202 /* REG:0xc7 */ 203 203 #define RK3328_TERM_RESISTOR_50 UPDATE(0, 2, 1) 204 204 #define RK3328_TERM_RESISTOR_62_5 UPDATE(1, 2, 1)
+1
drivers/phy/samsung/phy-exynos-dp-video.c
··· 109 109 .driver = { 110 110 .name = "exynos-dp-video-phy", 111 111 .of_match_table = exynos_dp_video_phy_of_match, 112 + .suppress_bind_attrs = true, 112 113 } 113 114 }; 114 115 module_platform_driver(exynos_dp_video_phy_driver);
+1
drivers/phy/samsung/phy-exynos-mipi-video.c
··· 359 359 .driver = { 360 360 .of_match_table = exynos_mipi_video_phy_of_match, 361 361 .name = "exynos-mipi-video-phy", 362 + .suppress_bind_attrs = true, 362 363 } 363 364 }; 364 365 module_platform_driver(exynos_mipi_video_phy_driver);
+1
drivers/phy/samsung/phy-exynos-pcie.c
··· 272 272 .driver = { 273 273 .of_match_table = exynos_pcie_phy_match, 274 274 .name = "exynos_pcie_phy", 275 + .suppress_bind_attrs = true, 275 276 } 276 277 }; 277 278
+1
drivers/phy/samsung/phy-exynos5-usbdrd.c
··· 953 953 .driver = { 954 954 .of_match_table = exynos5_usbdrd_phy_of_match, 955 955 .name = "exynos5_usb3drd_phy", 956 + .suppress_bind_attrs = true, 956 957 } 957 958 }; 958 959
+1
drivers/phy/samsung/phy-exynos5250-sata.c
··· 237 237 .driver = { 238 238 .of_match_table = exynos_sata_phy_of_match, 239 239 .name = "samsung,sata-phy", 240 + .suppress_bind_attrs = true, 240 241 } 241 242 }; 242 243 module_platform_driver(exynos_sata_phy_driver);
+1
drivers/phy/samsung/phy-samsung-usb2.c
··· 250 250 .driver = { 251 251 .of_match_table = samsung_usb2_phy_of_match, 252 252 .name = "samsung-usb2-phy", 253 + .suppress_bind_attrs = true, 253 254 } 254 255 }; 255 256
+22 -11
drivers/phy/ti/phy-am654-serdes.c
··· 405 405 const __be32 *addr; 406 406 unsigned int reg; 407 407 struct clk *clk; 408 + int ret = 0; 408 409 409 410 mux = devm_kzalloc(dev, sizeof(*mux), GFP_KERNEL); 410 411 if (!mux) ··· 414 413 init = &mux->clk_data; 415 414 416 415 regmap_node = of_parse_phandle(node, "ti,serdes-clk", 0); 417 - of_node_put(regmap_node); 418 416 if (!regmap_node) { 419 417 dev_err(dev, "Fail to get serdes-clk node\n"); 420 - return -ENODEV; 418 + ret = -ENODEV; 419 + goto out_put_node; 421 420 } 422 421 423 422 regmap = syscon_node_to_regmap(regmap_node->parent); 424 423 if (IS_ERR(regmap)) { 425 424 dev_err(dev, "Fail to get Syscon regmap\n"); 426 - return PTR_ERR(regmap); 425 + ret = PTR_ERR(regmap); 426 + goto out_put_node; 427 427 } 428 428 429 429 num_parents = of_clk_get_parent_count(node); 430 430 if (num_parents < 2) { 431 431 dev_err(dev, "SERDES clock must have parents\n"); 432 - return -EINVAL; 432 + ret = -EINVAL; 433 + goto out_put_node; 433 434 } 434 435 435 436 parent_names = devm_kzalloc(dev, (sizeof(char *) * num_parents), 436 437 GFP_KERNEL); 437 - if (!parent_names) 438 - return -ENOMEM; 438 + if (!parent_names) { 439 + ret = -ENOMEM; 440 + goto out_put_node; 441 + } 439 442 440 443 of_clk_parent_fill(node, parent_names, num_parents); 441 444 442 445 addr = of_get_address(regmap_node, 0, NULL, NULL); 443 - if (!addr) 444 - return -EINVAL; 446 + if (!addr) { 447 + ret = -EINVAL; 448 + goto out_put_node; 449 + } 445 450 446 451 reg = be32_to_cpu(*addr); 447 452 ··· 463 456 mux->hw.init = init; 464 457 465 458 clk = devm_clk_register(dev, &mux->hw); 466 - if (IS_ERR(clk)) 467 - return PTR_ERR(clk); 459 + if (IS_ERR(clk)) { 460 + ret = PTR_ERR(clk); 461 + goto out_put_node; 462 + } 468 463 469 464 am654_phy->clks[clock_num] = clk; 470 465 471 - return 0; 466 + out_put_node: 467 + of_node_put(regmap_node); 468 + return ret; 472 469 } 473 470 474 471 static const struct of_device_id serdes_am654_id_table[] = {
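The am654-serdes fix stops dropping the node reference right after `of_parse_phandle()` (the node is still needed by `of_get_address()` further down) and funnels every exit through a single `out_put_node` label. A user-space sketch of that pattern, with a hypothetical refcounted object standing in for a `device_node`:

```c
#include <assert.h>

/* Hypothetical refcounted object standing in for a device_node. */
struct node { int refcount; };

static void node_get(struct node *n) { n->refcount++; }
static void node_put(struct node *n) { n->refcount--; }

/* Single-exit cleanup: every failure path funnels through the label,
 * so the reference taken at the top is dropped exactly once. */
static int setup(struct node *n, int fail_early, int fail_late)
{
	int ret = 0;

	node_get(n);                 /* like of_parse_phandle() */

	if (fail_early) {
		ret = -1;
		goto out_put_node;
	}
	if (fail_late) {             /* node still in use up to here */
		ret = -2;
		goto out_put_node;
	}

out_put_node:
	node_put(n);                 /* like of_node_put() */
	return ret;
}

/* Runs setup() on a fresh node and reports the leftover refcount,
 * which must be zero on every path. */
static int leftover_refs(int fail_early, int fail_late)
{
	struct node n = { 0 };

	setup(&n, fail_early, fail_late);
	return n.refcount;
}
```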
+4 -1
drivers/slimbus/qcom-ngd-ctrl.c
··· 1338 1338 continue; 1339 1339 1340 1340 ngd = kzalloc(sizeof(*ngd), GFP_KERNEL); 1341 - if (!ngd) 1341 + if (!ngd) { 1342 + of_node_put(node); 1342 1343 return -ENOMEM; 1344 + } 1343 1345 1344 1346 ngd->pdev = platform_device_alloc(QCOM_SLIM_NGD_DRV_NAME, id); 1345 1347 if (!ngd->pdev) { 1346 1348 kfree(ngd); 1349 + of_node_put(node); 1347 1350 return -ENOMEM; 1348 1351 } 1349 1352 ngd->id = id;
+1 -1
drivers/slimbus/slimbus.h
··· 439 439 (mc == SLIM_MSG_MC_REQUEST_INFORMATION || 440 440 mc == SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION || 441 441 mc == SLIM_MSG_MC_REQUEST_VALUE || 442 - mc == SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION)); 442 + mc == SLIM_MSG_MC_REQUEST_CHANGE_VALUE)); 443 443 } 444 444 445 445 static inline bool slim_ec_txn(u8 mt, u8 mc)
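The slimbus.h change fixes a duplicated-condition bug: REQUEST_CLEAR_INFORMATION appeared twice in the OR chain, so REQUEST_CHANGE_VALUE messages were never matched. A sketch of the corrected predicate (the message-code values here are hypothetical, chosen only for the test; the real ones live in drivers/slimbus/slimbus.h):

```c
#include <assert.h>

/* Hypothetical message-code values for illustration only. */
enum {
	SLIM_MSG_MC_REQUEST_INFORMATION       = 0x20,
	SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION = 0x21,
	SLIM_MSG_MC_REQUEST_VALUE             = 0x60,
	SLIM_MSG_MC_REQUEST_CHANGE_VALUE      = 0x61,
};

/* Corrected predicate: each of the four codes appears exactly once.
 * The original listed REQUEST_CLEAR_INFORMATION twice, so
 * REQUEST_CHANGE_VALUE transactions were silently rejected. */
static int slim_tid_txn(unsigned int mc)
{
	return mc == SLIM_MSG_MC_REQUEST_INFORMATION ||
	       mc == SLIM_MSG_MC_REQUEST_CLEAR_INFORMATION ||
	       mc == SLIM_MSG_MC_REQUEST_VALUE ||
	       mc == SLIM_MSG_MC_REQUEST_CHANGE_VALUE;
}
```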
+1 -1
drivers/thunderbolt/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0-only 2 2 obj-${CONFIG_THUNDERBOLT} := thunderbolt.o 3 - thunderbolt-objs := nhi.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o 3 + thunderbolt-objs := nhi.o nhi_ops.o ctl.o tb.o switch.o cap.o path.o tunnel.o eeprom.o 4 4 thunderbolt-objs += domain.o dma_port.o icm.o property.o xdomain.o lc.o
+19 -4
drivers/thunderbolt/ctl.c
··· 930 930 return res; 931 931 } 932 932 933 + static int tb_cfg_get_error(struct tb_ctl *ctl, enum tb_cfg_space space, 934 + const struct tb_cfg_result *res) 935 + { 936 + /* 937 + * For unimplemented ports access to port config space may return 938 + * TB_CFG_ERROR_INVALID_CONFIG_SPACE (alternatively their type is 939 + * set to TB_TYPE_INACTIVE). In the former case return -ENODEV so 940 + * that the caller can mark the port as disabled. 941 + */ 942 + if (space == TB_CFG_PORT && 943 + res->tb_error == TB_CFG_ERROR_INVALID_CONFIG_SPACE) 944 + return -ENODEV; 945 + 946 + tb_cfg_print_error(ctl, res); 947 + return -EIO; 948 + } 949 + 933 950 int tb_cfg_read(struct tb_ctl *ctl, void *buffer, u64 route, u32 port, 934 951 enum tb_cfg_space space, u32 offset, u32 length) 935 952 { ··· 959 942 960 943 case 1: 961 944 /* Thunderbolt error, tb_error holds the actual number */ 962 - tb_cfg_print_error(ctl, &res); 963 - return -EIO; 945 + return tb_cfg_get_error(ctl, space, &res); 964 946 965 947 case -ETIMEDOUT: 966 948 tb_ctl_warn(ctl, "timeout reading config space %u from %#x\n", ··· 985 969 986 970 case 1: 987 971 /* Thunderbolt error, tb_error holds the actual number */ 988 - tb_cfg_print_error(ctl, &res); 989 - return -EIO; 972 + return tb_cfg_get_error(ctl, space, &res); 990 973 991 974 case -ETIMEDOUT: 992 975 tb_ctl_warn(ctl, "timeout writing config space %u to %#x\n",
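The new `tb_cfg_get_error()` centralizes error translation: an "invalid config space" reply for a *port* read means the port is unimplemented, so the caller gets -ENODEV and can mark it disabled; every other Thunderbolt error stays -EIO. A simplified model (enum values here are stand-ins, not the real wire encoding):

```c
#include <assert.h>
#include <errno.h>

/* Simplified stand-ins for the Thunderbolt config spaces and errors. */
enum cfg_space { CFG_SWITCH, CFG_PORT };
enum cfg_error { ERROR_PORT_NOT_CONNECTED, ERROR_INVALID_CONFIG_SPACE };

/* Mirrors tb_cfg_get_error(): unimplemented ports answer port config
 * space reads with "invalid config space", which maps to -ENODEV so
 * the caller can disable the port instead of failing the scan. */
static int cfg_error_to_errno(enum cfg_space space, enum cfg_error err)
{
	if (space == CFG_PORT && err == ERROR_INVALID_CONFIG_SPACE)
		return -ENODEV;
	return -EIO;
}
```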
+1 -5
drivers/thunderbolt/eeprom.c
··· 414 414 struct device *dev = &sw->tb->nhi->pdev->dev; 415 415 int len, res; 416 416 417 - len = device_property_read_u8_array(dev, "ThunderboltDROM", NULL, 0); 417 + len = device_property_count_u8(dev, "ThunderboltDROM"); 418 418 if (len < 0 || len < sizeof(struct tb_drom_header)) 419 419 return -EINVAL; 420 420 ··· 524 524 sw->ports[4].link_nr = 1; 525 525 sw->ports[3].dual_link_port = &sw->ports[4]; 526 526 sw->ports[4].dual_link_port = &sw->ports[3]; 527 - 528 - /* Port 5 is inaccessible on this gen 1 controller */ 529 - if (sw->config.device_id == PCI_DEVICE_ID_INTEL_LIGHT_RIDGE) 530 - sw->ports[5].disabled = true; 531 527 532 528 return 0; 533 529 }
+178 -16
drivers/thunderbolt/icm.c
··· 55 55 * @safe_mode: ICM is in safe mode 56 56 * @max_boot_acl: Maximum number of preboot ACL entries (%0 if not supported) 57 57 * @rpm: Does the controller support runtime PM (RTD3) 58 + * @can_upgrade_nvm: Can the NVM firmware be upgrade on this controller 59 + * @veto: Is RTD3 veto in effect 58 60 * @is_supported: Checks if we can support ICM on this controller 59 61 * @cio_reset: Trigger CIO reset 60 62 * @get_mode: Read and return the ICM firmware mode (optional) 61 63 * @get_route: Find a route string for given switch 62 64 * @save_devices: Ask ICM to save devices to ACL when suspending (optional) 63 65 * @driver_ready: Send driver ready message to ICM 66 + * @set_uuid: Set UUID for the root switch (optional) 64 67 * @device_connected: Handle device connected ICM message 65 68 * @device_disconnected: Handle device disconnected ICM message 66 69 * @xdomain_connected - Handle XDomain connected ICM message 67 70 * @xdomain_disconnected - Handle XDomain disconnected ICM message 71 + * @rtd3_veto: Handle RTD3 veto notification ICM message 68 72 */ 69 73 struct icm { 70 74 struct mutex request_lock; ··· 78 74 int vnd_cap; 79 75 bool safe_mode; 80 76 bool rpm; 77 + bool can_upgrade_nvm; 78 + bool veto; 81 79 bool (*is_supported)(struct tb *tb); 82 80 int (*cio_reset)(struct tb *tb); 83 81 int (*get_mode)(struct tb *tb); ··· 88 82 int (*driver_ready)(struct tb *tb, 89 83 enum tb_security_level *security_level, 90 84 size_t *nboot_acl, bool *rpm); 85 + void (*set_uuid)(struct tb *tb); 91 86 void (*device_connected)(struct tb *tb, 92 87 const struct icm_pkg_header *hdr); 93 88 void (*device_disconnected)(struct tb *tb, ··· 97 90 const struct icm_pkg_header *hdr); 98 91 void (*xdomain_disconnected)(struct tb *tb, 99 92 const struct icm_pkg_header *hdr); 93 + void (*rtd3_veto)(struct tb *tb, const struct icm_pkg_header *hdr); 100 94 }; 101 95 102 96 struct icm_notification { ··· 300 292 } while (retries--); 301 293 302 294 return -ETIMEDOUT; 295 + } 296 + 297 + /* 
298 + * If rescan is queued to run (we are resuming), postpone it to give the 299 + * firmware some more time to send device connected notifications for next 300 + * devices in the chain. 301 + */ 302 + static void icm_postpone_rescan(struct tb *tb) 303 + { 304 + struct icm *icm = tb_priv(tb); 305 + 306 + if (delayed_work_pending(&icm->rescan_work)) 307 + mod_delayed_work(tb->wq, &icm->rescan_work, 308 + msecs_to_jiffies(500)); 309 + } 310 + 311 + static void icm_veto_begin(struct tb *tb) 312 + { 313 + struct icm *icm = tb_priv(tb); 314 + 315 + if (!icm->veto) { 316 + icm->veto = true; 317 + /* Keep the domain powered while veto is in effect */ 318 + pm_runtime_get(&tb->dev); 319 + } 320 + } 321 + 322 + static void icm_veto_end(struct tb *tb) 323 + { 324 + struct icm *icm = tb_priv(tb); 325 + 326 + if (icm->veto) { 327 + icm->veto = false; 328 + /* Allow the domain suspend now */ 329 + pm_runtime_mark_last_busy(&tb->dev); 330 + pm_runtime_put_autosuspend(&tb->dev); 331 + } 303 332 } 304 333 305 334 static bool icm_fr_is_supported(struct tb *tb) ··· 562 517 return 0; 563 518 } 564 519 565 - static void add_switch(struct tb_switch *parent_sw, u64 route, 566 - const uuid_t *uuid, const u8 *ep_name, 567 - size_t ep_name_size, u8 connection_id, u8 connection_key, 568 - u8 link, u8 depth, enum tb_security_level security_level, 569 - bool authorized, bool boot) 520 + static struct tb_switch *add_switch(struct tb_switch *parent_sw, u64 route, 521 + const uuid_t *uuid, const u8 *ep_name, 522 + size_t ep_name_size, u8 connection_id, 523 + u8 connection_key, u8 link, u8 depth, 524 + enum tb_security_level security_level, 525 + bool authorized, bool boot) 570 526 { 571 527 const struct intel_vss *vss; 572 528 struct tb_switch *sw; 529 + int ret; 573 530 574 531 pm_runtime_get_sync(&parent_sw->dev); 575 532 ··· 602 555 tb_port_at(route, parent_sw)->remote = tb_upstream_port(sw); 603 556 tb_upstream_port(sw)->remote = tb_port_at(route, parent_sw); 604 557 605 - if 
(tb_switch_add(sw)) { 558 + ret = tb_switch_add(sw); 559 + if (ret) { 606 560 tb_port_at(tb_route(sw), parent_sw)->remote = NULL; 607 561 tb_switch_put(sw); 562 + sw = ERR_PTR(ret); 608 563 } 609 564 610 565 out: 611 566 pm_runtime_mark_last_busy(&parent_sw->dev); 612 567 pm_runtime_put_autosuspend(&parent_sw->dev); 568 + 569 + return sw; 613 570 } 614 571 615 572 static void update_switch(struct tb_switch *parent_sw, struct tb_switch *sw, ··· 704 653 bool boot; 705 654 u64 route; 706 655 int ret; 656 + 657 + icm_postpone_rescan(tb); 707 658 708 659 link = pkg->link_info & ICM_LINK_INFO_LINK_MASK; 709 660 depth = (pkg->link_info & ICM_LINK_INFO_DEPTH_MASK) >> ··· 1137 1084 } 1138 1085 1139 1086 static void 1140 - icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) 1087 + __icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr, 1088 + bool force_rtd3) 1141 1089 { 1142 1090 const struct icm_tr_event_device_connected *pkg = 1143 1091 (const struct icm_tr_event_device_connected *)hdr; ··· 1147 1093 struct tb_xdomain *xd; 1148 1094 bool authorized, boot; 1149 1095 u64 route; 1096 + 1097 + icm_postpone_rescan(tb); 1150 1098 1151 1099 /* 1152 1100 * Currently we don't use the QoS information coming with the ··· 1205 1149 return; 1206 1150 } 1207 1151 1208 - add_switch(parent_sw, route, &pkg->ep_uuid, (const u8 *)pkg->ep_name, 1209 - sizeof(pkg->ep_name), pkg->connection_id, 1210 - 0, 0, 0, security_level, authorized, boot); 1152 + sw = add_switch(parent_sw, route, &pkg->ep_uuid, (const u8 *)pkg->ep_name, 1153 + sizeof(pkg->ep_name), pkg->connection_id, 0, 0, 0, 1154 + security_level, authorized, boot); 1155 + if (!IS_ERR(sw) && force_rtd3) 1156 + sw->rpm = true; 1211 1157 1212 1158 tb_switch_put(parent_sw); 1159 + } 1160 + 1161 + static void 1162 + icm_tr_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) 1163 + { 1164 + __icm_tr_device_connected(tb, hdr, false); 1213 1165 } 1214 1166 1215 1167 static void ··· 1530 
1466 return 0; 1531 1467 } 1532 1468 1469 + static int 1470 + icm_icl_driver_ready(struct tb *tb, enum tb_security_level *security_level, 1471 + size_t *nboot_acl, bool *rpm) 1472 + { 1473 + struct icm_tr_pkg_driver_ready_response reply; 1474 + struct icm_pkg_driver_ready request = { 1475 + .hdr.code = ICM_DRIVER_READY, 1476 + }; 1477 + int ret; 1478 + 1479 + memset(&reply, 0, sizeof(reply)); 1480 + ret = icm_request(tb, &request, sizeof(request), &reply, sizeof(reply), 1481 + 1, 20000); 1482 + if (ret) 1483 + return ret; 1484 + 1485 + /* Ice Lake always supports RTD3 */ 1486 + if (rpm) 1487 + *rpm = true; 1488 + 1489 + return 0; 1490 + } 1491 + 1492 + static void icm_icl_set_uuid(struct tb *tb) 1493 + { 1494 + struct tb_nhi *nhi = tb->nhi; 1495 + u32 uuid[4]; 1496 + 1497 + pci_read_config_dword(nhi->pdev, VS_CAP_10, &uuid[0]); 1498 + pci_read_config_dword(nhi->pdev, VS_CAP_11, &uuid[1]); 1499 + uuid[2] = 0xffffffff; 1500 + uuid[3] = 0xffffffff; 1501 + 1502 + tb->root_switch->uuid = kmemdup(uuid, sizeof(uuid), GFP_KERNEL); 1503 + } 1504 + 1505 + static void 1506 + icm_icl_device_connected(struct tb *tb, const struct icm_pkg_header *hdr) 1507 + { 1508 + __icm_tr_device_connected(tb, hdr, true); 1509 + } 1510 + 1511 + static void icm_icl_rtd3_veto(struct tb *tb, const struct icm_pkg_header *hdr) 1512 + { 1513 + const struct icm_icl_event_rtd3_veto *pkg = 1514 + (const struct icm_icl_event_rtd3_veto *)hdr; 1515 + 1516 + tb_dbg(tb, "ICM rtd3 veto=0x%08x\n", pkg->veto_reason); 1517 + 1518 + if (pkg->veto_reason) 1519 + icm_veto_begin(tb); 1520 + else 1521 + icm_veto_end(tb); 1522 + } 1523 + 1533 1524 static void icm_handle_notification(struct work_struct *work) 1534 1525 { 1535 1526 struct icm_notification *n = container_of(work, typeof(*n), work); ··· 1611 1492 break; 1612 1493 case ICM_EVENT_XDOMAIN_DISCONNECTED: 1613 1494 icm->xdomain_disconnected(tb, n->pkg); 1495 + break; 1496 + case ICM_EVENT_RTD3_VETO: 1497 + icm->rtd3_veto(tb, n->pkg); 1614 1498 break; 1615 1499 
} 1616 1500 } ··· 1973 1851 if (tb->nhi->going_away) 1974 1852 return; 1975 1853 1854 + /* 1855 + * If RTD3 was vetoed before we entered system suspend allow it 1856 + * again now before driver ready is sent. Firmware sends a new RTD3 1857 + * veto if it is still the case after we have sent it driver ready 1858 + * command. 1859 + */ 1860 + icm_veto_end(tb); 1976 1861 icm_unplug_children(tb->root_switch); 1977 1862 1978 1863 /* ··· 2042 1913 if (IS_ERR(tb->root_switch)) 2043 1914 return PTR_ERR(tb->root_switch); 2044 1915 2045 - /* 2046 - * NVM upgrade has not been tested on Apple systems and they 2047 - * don't provide images publicly either. To be on the safe side 2048 - * prevent root switch NVM upgrade on Macs for now. 2049 - */ 2050 - tb->root_switch->no_nvm_upgrade = x86_apple_machine; 1916 + tb->root_switch->no_nvm_upgrade = !icm->can_upgrade_nvm; 2051 1917 tb->root_switch->rpm = icm->rpm; 1918 + 1919 + if (icm->set_uuid) 1920 + icm->set_uuid(tb); 2052 1921 2053 1922 ret = tb_switch_add(tb->root_switch); 2054 1923 if (ret) { ··· 2132 2005 .disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths, 2133 2006 }; 2134 2007 2008 + /* Ice Lake */ 2009 + static const struct tb_cm_ops icm_icl_ops = { 2010 + .driver_ready = icm_driver_ready, 2011 + .start = icm_start, 2012 + .stop = icm_stop, 2013 + .complete = icm_complete, 2014 + .runtime_suspend = icm_runtime_suspend, 2015 + .runtime_resume = icm_runtime_resume, 2016 + .handle_event = icm_handle_event, 2017 + .approve_xdomain_paths = icm_tr_approve_xdomain_paths, 2018 + .disconnect_xdomain_paths = icm_tr_disconnect_xdomain_paths, 2019 + }; 2020 + 2135 2021 struct tb *icm_probe(struct tb_nhi *nhi) 2136 2022 { 2137 2023 struct icm *icm; ··· 2161 2021 switch (nhi->pdev->device) { 2162 2022 case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_2C_NHI: 2163 2023 case PCI_DEVICE_ID_INTEL_FALCON_RIDGE_4C_NHI: 2024 + icm->can_upgrade_nvm = true; 2164 2025 icm->is_supported = icm_fr_is_supported; 2165 2026 icm->get_route = 
icm_fr_get_route; 2166 2027 icm->save_devices = icm_fr_save_devices; ··· 2179 2038 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_4C_NHI: 2180 2039 case PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_2C_NHI: 2181 2040 icm->max_boot_acl = ICM_AR_PREBOOT_ACL_ENTRIES; 2041 + /* 2042 + * NVM upgrade has not been tested on Apple systems and 2043 + * they don't provide images publicly either. To be on 2044 + * the safe side prevent root switch NVM upgrade on Macs 2045 + * for now. 2046 + */ 2047 + icm->can_upgrade_nvm = !x86_apple_machine; 2182 2048 icm->is_supported = icm_ar_is_supported; 2183 2049 icm->cio_reset = icm_ar_cio_reset; 2184 2050 icm->get_mode = icm_ar_get_mode; ··· 2202 2054 case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI: 2203 2055 case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI: 2204 2056 icm->max_boot_acl = ICM_AR_PREBOOT_ACL_ENTRIES; 2057 + icm->can_upgrade_nvm = !x86_apple_machine; 2205 2058 icm->is_supported = icm_ar_is_supported; 2206 2059 icm->cio_reset = icm_tr_cio_reset; 2207 2060 icm->get_mode = icm_ar_get_mode; ··· 2212 2063 icm->xdomain_connected = icm_tr_xdomain_connected; 2213 2064 icm->xdomain_disconnected = icm_tr_xdomain_disconnected; 2214 2065 tb->cm_ops = &icm_tr_ops; 2066 + break; 2067 + 2068 + case PCI_DEVICE_ID_INTEL_ICL_NHI0: 2069 + case PCI_DEVICE_ID_INTEL_ICL_NHI1: 2070 + icm->is_supported = icm_ar_is_supported; 2071 + icm->driver_ready = icm_icl_driver_ready; 2072 + icm->set_uuid = icm_icl_set_uuid; 2073 + icm->device_connected = icm_icl_device_connected; 2074 + icm->device_disconnected = icm_tr_device_disconnected; 2075 + icm->xdomain_connected = icm_tr_xdomain_connected; 2076 + icm->xdomain_disconnected = icm_tr_xdomain_disconnected; 2077 + icm->rtd3_veto = icm_icl_rtd3_veto; 2078 + tb->cm_ops = &icm_icl_ops; 2215 2079 break; 2216 2080 } 2217 2081
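The RTD3 veto handling in icm.c is deliberately idempotent: only the first `icm_veto_begin()` takes a runtime-PM reference and only a begin-matched `icm_veto_end()` drops it, so repeated veto notifications from firmware cannot unbalance the usage count. A user-space sketch of that invariant, with a plain counter standing in for the PM usage count:

```c
#include <assert.h>

/* Stand-in for the runtime-PM usage count the veto pins. */
static int pm_usage;

struct icm { int veto; };

static void veto_begin(struct icm *icm)
{
	if (!icm->veto) {
		icm->veto = 1;
		pm_usage++;          /* like pm_runtime_get() */
	}
}

static void veto_end(struct icm *icm)
{
	if (icm->veto) {
		icm->veto = 0;
		pm_usage--;          /* like pm_runtime_put_autosuspend() */
	}
}

/* Applies a sequence of begin(1)/end(0) events and reports the final
 * usage count; it must return to zero once the veto is lifted, no
 * matter how many duplicate notifications arrived. */
static int run_events(const int *events, int n)
{
	struct icm icm = { 0 };
	int i;

	pm_usage = 0;
	for (i = 0; i < n; i++)
		events[i] ? veto_begin(&icm) : veto_end(&icm);
	return pm_usage;
}
```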
+123 -11
drivers/thunderbolt/nhi.c
··· 16 16 #include <linux/interrupt.h> 17 17 #include <linux/module.h> 18 18 #include <linux/delay.h> 19 + #include <linux/property.h> 19 20 20 21 #include "nhi.h" 21 22 #include "nhi_regs.h" ··· 144 143 return io; 145 144 } 146 145 147 - static void ring_iowrite16desc(struct tb_ring *ring, u32 value, u32 offset) 146 + static void ring_iowrite_cons(struct tb_ring *ring, u16 cons) 148 147 { 149 - iowrite16(value, ring_desc_base(ring) + offset); 148 + /* 149 + * The other 16-bits in the register is read-only and writes to it 150 + * are ignored by the hardware so we can save one ioread32() by 151 + * filling the read-only bits with zeroes. 152 + */ 153 + iowrite32(cons, ring_desc_base(ring) + 8); 154 + } 155 + 156 + static void ring_iowrite_prod(struct tb_ring *ring, u16 prod) 157 + { 158 + /* See ring_iowrite_cons() above for explanation */ 159 + iowrite32(prod << 16, ring_desc_base(ring) + 8); 150 160 } 151 161 152 162 static void ring_iowrite32desc(struct tb_ring *ring, u32 value, u32 offset) ··· 209 197 descriptor->sof = frame->sof; 210 198 } 211 199 ring->head = (ring->head + 1) % ring->size; 212 - ring_iowrite16desc(ring, ring->head, ring->is_tx ? 10 : 8); 200 + if (ring->is_tx) 201 + ring_iowrite_prod(ring, ring->head); 202 + else 203 + ring_iowrite_cons(ring, ring->head); 213 204 } 214 205 } 215 206 ··· 677 662 678 663 ring_iowrite32options(ring, 0, 0); 679 664 ring_iowrite64desc(ring, 0, 0); 680 - ring_iowrite16desc(ring, 0, ring->is_tx ? 
10 : 8); 665 + ring_iowrite32desc(ring, 0, 8); 681 666 ring_iowrite32desc(ring, 0, 12); 682 667 ring->head = 0; 683 668 ring->tail = 0; ··· 860 845 return IRQ_HANDLED; 861 846 } 862 847 863 - static int nhi_suspend_noirq(struct device *dev) 848 + static int __nhi_suspend_noirq(struct device *dev, bool wakeup) 864 849 { 865 850 struct pci_dev *pdev = to_pci_dev(dev); 866 851 struct tb *tb = pci_get_drvdata(pdev); 852 + struct tb_nhi *nhi = tb->nhi; 853 + int ret; 867 854 868 - return tb_domain_suspend_noirq(tb); 855 + ret = tb_domain_suspend_noirq(tb); 856 + if (ret) 857 + return ret; 858 + 859 + if (nhi->ops && nhi->ops->suspend_noirq) { 860 + ret = nhi->ops->suspend_noirq(tb->nhi, wakeup); 861 + if (ret) 862 + return ret; 863 + } 864 + 865 + return 0; 866 + } 867 + 868 + static int nhi_suspend_noirq(struct device *dev) 869 + { 870 + return __nhi_suspend_noirq(dev, device_may_wakeup(dev)); 871 + } 872 + 873 + static bool nhi_wake_supported(struct pci_dev *pdev) 874 + { 875 + u8 val; 876 + 877 + /* 878 + * If power rails are sustainable for wakeup from S4 this 879 + * property is set by the BIOS. 880 + */ 881 + if (device_property_read_u8(&pdev->dev, "WAKE_SUPPORTED", &val)) 882 + return !!val; 883 + 884 + return true; 885 + } 886 + 887 + static int nhi_poweroff_noirq(struct device *dev) 888 + { 889 + struct pci_dev *pdev = to_pci_dev(dev); 890 + bool wakeup; 891 + 892 + wakeup = device_may_wakeup(dev) && nhi_wake_supported(pdev); 893 + return __nhi_suspend_noirq(dev, wakeup); 869 894 } 870 895 871 896 static void nhi_enable_int_throttling(struct tb_nhi *nhi) ··· 928 873 { 929 874 struct pci_dev *pdev = to_pci_dev(dev); 930 875 struct tb *tb = pci_get_drvdata(pdev); 876 + struct tb_nhi *nhi = tb->nhi; 877 + int ret; 931 878 932 879 /* 933 880 * Check that the device is still there. It may be that the user 934 881 * unplugged last device which causes the host controller to go 935 882 * away on PCs. 
936 883 */ 937 - if (!pci_device_is_present(pdev)) 938 - tb->nhi->going_away = true; 939 - else 884 + if (!pci_device_is_present(pdev)) { 885 + nhi->going_away = true; 886 + } else { 887 + if (nhi->ops && nhi->ops->resume_noirq) { 888 + ret = nhi->ops->resume_noirq(nhi); 889 + if (ret) 890 + return ret; 891 + } 940 892 nhi_enable_int_throttling(tb->nhi); 893 + } 941 894 942 895 return tb_domain_resume_noirq(tb); 943 896 } ··· 978 915 { 979 916 struct pci_dev *pdev = to_pci_dev(dev); 980 917 struct tb *tb = pci_get_drvdata(pdev); 918 + struct tb_nhi *nhi = tb->nhi; 919 + int ret; 981 920 982 - return tb_domain_runtime_suspend(tb); 921 + ret = tb_domain_runtime_suspend(tb); 922 + if (ret) 923 + return ret; 924 + 925 + if (nhi->ops && nhi->ops->runtime_suspend) { 926 + ret = nhi->ops->runtime_suspend(tb->nhi); 927 + if (ret) 928 + return ret; 929 + } 930 + return 0; 983 931 } 984 932 985 933 static int nhi_runtime_resume(struct device *dev) 986 934 { 987 935 struct pci_dev *pdev = to_pci_dev(dev); 988 936 struct tb *tb = pci_get_drvdata(pdev); 937 + struct tb_nhi *nhi = tb->nhi; 938 + int ret; 989 939 990 - nhi_enable_int_throttling(tb->nhi); 940 + if (nhi->ops && nhi->ops->runtime_resume) { 941 + ret = nhi->ops->runtime_resume(nhi); 942 + if (ret) 943 + return ret; 944 + } 945 + 946 + nhi_enable_int_throttling(nhi); 991 947 return tb_domain_runtime_resume(tb); 992 948 } 993 949 ··· 1034 952 flush_work(&nhi->interrupt_work); 1035 953 } 1036 954 ida_destroy(&nhi->msix_ida); 955 + 956 + if (nhi->ops && nhi->ops->shutdown) 957 + nhi->ops->shutdown(nhi); 1037 958 } 1038 959 1039 960 static int nhi_init_msi(struct tb_nhi *nhi) ··· 1081 996 return 0; 1082 997 } 1083 998 999 + static bool nhi_imr_valid(struct pci_dev *pdev) 1000 + { 1001 + u8 val; 1002 + 1003 + if (!device_property_read_u8(&pdev->dev, "IMR_VALID", &val)) 1004 + return !!val; 1005 + 1006 + return true; 1007 + } 1008 + 1084 1009 static int nhi_probe(struct pci_dev *pdev, const struct pci_device_id *id) 1085 
1010 { 1086 1011 struct tb_nhi *nhi; 1087 1012 struct tb *tb; 1088 1013 int res; 1014 + 1015 + if (!nhi_imr_valid(pdev)) { 1016 + dev_warn(&pdev->dev, "firmware image not valid, aborting\n"); 1017 + return -ENODEV; 1018 + } 1089 1019 1090 1020 res = pcim_enable_device(pdev); 1091 1021 if (res) { ··· 1119 1019 return -ENOMEM; 1120 1020 1121 1021 nhi->pdev = pdev; 1022 + nhi->ops = (const struct tb_nhi_ops *)id->driver_data; 1122 1023 /* cannot fail - table is allocated bin pcim_iomap_regions */ 1123 1024 nhi->iobase = pcim_iomap_table(pdev)[0]; 1124 1025 nhi->hop_count = ioread32(nhi->iobase + REG_HOP_COUNT) & 0x3ff; ··· 1151 1050 } 1152 1051 1153 1052 pci_set_master(pdev); 1053 + 1054 + if (nhi->ops && nhi->ops->init) { 1055 + res = nhi->ops->init(nhi); 1056 + if (res) 1057 + return res; 1058 + } 1154 1059 1155 1060 tb = icm_probe(nhi); 1156 1061 if (!tb) ··· 1218 1111 .restore_noirq = nhi_resume_noirq, 1219 1112 .suspend = nhi_suspend, 1220 1113 .freeze = nhi_suspend, 1114 + .poweroff_noirq = nhi_poweroff_noirq, 1221 1115 .poweroff = nhi_suspend, 1222 1116 .complete = nhi_complete, 1223 1117 .runtime_suspend = nhi_runtime_suspend, ··· 1266 1158 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ALPINE_RIDGE_C_USBONLY_NHI) }, 1267 1159 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_NHI) }, 1268 1160 { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI) }, 1161 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI0), 1162 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1163 + { PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ICL_NHI1), 1164 + .driver_data = (kernel_ulong_t)&icl_nhi_ops }, 1269 1165 1270 1166 { 0,} 1271 1167 };
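The `ring_iowrite_cons()`/`ring_iowrite_prod()` change in nhi.c replaces a 16-bit write with a full 32-bit write: per the comment in the hunk, the other half of the dword is read-only and ignores writes, so filling it with zeroes saves an ioread32(). The layout implied by the code is consumer in bits [15:0] and producer in bits [31:16] of the dword at ring base + 8:

```c
#include <assert.h>
#include <stdint.h>

/* Layout assumed from the hunk: the dword at ring base + 8 holds
 * [31:16] producer index, [15:0] consumer index. */
static uint32_t iowrite_cons(uint16_t cons) { return cons; }
static uint32_t iowrite_prod(uint16_t prod) { return (uint32_t)prod << 16; }

static uint16_t dword_cons(uint32_t dword) { return dword & 0xffff; }
static uint16_t dword_prod(uint32_t dword) { return dword >> 16; }
```

Because the hardware discards the write to the read-only half, zeroing it is safe, and each helper only ever touches its own 16-bit field.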
+22
drivers/thunderbolt/nhi.h
··· 30 30 int nhi_mailbox_cmd(struct tb_nhi *nhi, enum nhi_mailbox_cmd cmd, u32 data); 31 31 enum nhi_fw_mode nhi_mailbox_mode(struct tb_nhi *nhi); 32 32 33 + /** 34 + * struct tb_nhi_ops - NHI specific optional operations 35 + * @init: NHI specific initialization 36 + * @suspend_noirq: NHI specific suspend_noirq hook 37 + * @resume_noirq: NHI specific resume_noirq hook 38 + * @runtime_suspend: NHI specific runtime_suspend hook 39 + * @runtime_resume: NHI specific runtime_resume hook 40 + * @shutdown: NHI specific shutdown 41 + */ 42 + struct tb_nhi_ops { 43 + int (*init)(struct tb_nhi *nhi); 44 + int (*suspend_noirq)(struct tb_nhi *nhi, bool wakeup); 45 + int (*resume_noirq)(struct tb_nhi *nhi); 46 + int (*runtime_suspend)(struct tb_nhi *nhi); 47 + int (*runtime_resume)(struct tb_nhi *nhi); 48 + void (*shutdown)(struct tb_nhi *nhi); 49 + }; 50 + 51 + extern const struct tb_nhi_ops icl_nhi_ops; 52 + 33 53 /* 34 54 * PCI IDs used in this driver from Win Ridge forward. There is no 35 55 * need for the PCI quirk anymore as we will use ICM also on Apple ··· 71 51 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE 0x15ea 72 52 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_NHI 0x15eb 73 53 #define PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE 0x15ef 54 + #define PCI_DEVICE_ID_INTEL_ICL_NHI1 0x8a0d 55 + #define PCI_DEVICE_ID_INTEL_ICL_NHI0 0x8a17 74 56 75 57 #endif
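The new `struct tb_nhi_ops` follows the optional-ops-table idiom: every hook may be NULL and the core guards each call with `if (ops && ops->hook)`, so controllers without quirks simply omit the table (as the existing PCI IDs do, while the Ice Lake entries carry `icl_nhi_ops` in `driver_data`). A minimal sketch of the pattern:

```c
#include <assert.h>
#include <stddef.h>

struct nhi;

/* Optional hooks: any member, or the whole table, may be NULL. */
struct nhi_ops {
	int (*init)(struct nhi *nhi);
};

struct nhi {
	const struct nhi_ops *ops;
	int inited;
};

static int icl_init(struct nhi *nhi)
{
	nhi->inited = 1;
	return 0;
}

static const struct nhi_ops icl_ops = { .init = icl_init };

/* Core probe path: call the hook only if the platform provided one. */
static int nhi_probe(struct nhi *nhi)
{
	if (nhi->ops && nhi->ops->init)
		return nhi->ops->init(nhi);
	return 0;                    /* no platform hook: nothing to do */
}

static int probe_with(const struct nhi_ops *ops)
{
	struct nhi nhi = { .ops = ops, .inited = 0 };

	nhi_probe(&nhi);
	return nhi.inited;
}
```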
+179
drivers/thunderbolt/nhi_ops.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * NHI specific operations 4 + * 5 + * Copyright (C) 2019, Intel Corporation 6 + * Author: Mika Westerberg <mika.westerberg@linux.intel.com> 7 + */ 8 + 9 + #include <linux/delay.h> 10 + #include <linux/suspend.h> 11 + 12 + #include "nhi.h" 13 + #include "nhi_regs.h" 14 + #include "tb.h" 15 + 16 + /* Ice Lake specific NHI operations */ 17 + 18 + #define ICL_LC_MAILBOX_TIMEOUT 500 /* ms */ 19 + 20 + static int check_for_device(struct device *dev, void *data) 21 + { 22 + return tb_is_switch(dev); 23 + } 24 + 25 + static bool icl_nhi_is_device_connected(struct tb_nhi *nhi) 26 + { 27 + struct tb *tb = pci_get_drvdata(nhi->pdev); 28 + int ret; 29 + 30 + ret = device_for_each_child(&tb->root_switch->dev, NULL, 31 + check_for_device); 32 + return ret > 0; 33 + } 34 + 35 + static int icl_nhi_force_power(struct tb_nhi *nhi, bool power) 36 + { 37 + u32 vs_cap; 38 + 39 + /* 40 + * The Thunderbolt host controller is present always in Ice Lake 41 + * but the firmware may not be loaded and running (depending 42 + * whether there is device connected and so on). Each time the 43 + * controller is used we need to "Force Power" it first and wait 44 + * for the firmware to indicate it is up and running. This "Force 45 + * Power" is really not about actually powering on/off the 46 + * controller so it is accessible even if "Force Power" is off. 47 + * 48 + * The actual power management happens inside shared ACPI power 49 + * resources using standard ACPI methods. 
50 + */ 51 + pci_read_config_dword(nhi->pdev, VS_CAP_22, &vs_cap); 52 + if (power) { 53 + vs_cap &= ~VS_CAP_22_DMA_DELAY_MASK; 54 + vs_cap |= 0x22 << VS_CAP_22_DMA_DELAY_SHIFT; 55 + vs_cap |= VS_CAP_22_FORCE_POWER; 56 + } else { 57 + vs_cap &= ~VS_CAP_22_FORCE_POWER; 58 + } 59 + pci_write_config_dword(nhi->pdev, VS_CAP_22, vs_cap); 60 + 61 + if (power) { 62 + unsigned int retries = 10; 63 + u32 val; 64 + 65 + /* Wait until the firmware tells it is up and running */ 66 + do { 67 + pci_read_config_dword(nhi->pdev, VS_CAP_9, &val); 68 + if (val & VS_CAP_9_FW_READY) 69 + return 0; 70 + msleep(250); 71 + } while (--retries); 72 + 73 + return -ETIMEDOUT; 74 + } 75 + 76 + return 0; 77 + } 78 + 79 + static void icl_nhi_lc_mailbox_cmd(struct tb_nhi *nhi, enum icl_lc_mailbox_cmd cmd) 80 + { 81 + u32 data; 82 + 83 + pci_read_config_dword(nhi->pdev, VS_CAP_19, &data); 84 + data = (cmd << VS_CAP_19_CMD_SHIFT) & VS_CAP_19_CMD_MASK; 85 + pci_write_config_dword(nhi->pdev, VS_CAP_19, data | VS_CAP_19_VALID); 86 + } 87 + 88 + static int icl_nhi_lc_mailbox_cmd_complete(struct tb_nhi *nhi, int timeout) 89 + { 90 + unsigned long end; 91 + u32 data; 92 + 93 + if (!timeout) 94 + goto clear; 95 + 96 + end = jiffies + msecs_to_jiffies(timeout); 97 + do { 98 + pci_read_config_dword(nhi->pdev, VS_CAP_18, &data); 99 + if (data & VS_CAP_18_DONE) 100 + goto clear; 101 + msleep(100); 102 + } while (time_before(jiffies, end)); 103 + 104 + return -ETIMEDOUT; 105 + 106 + clear: 107 + /* Clear the valid bit */ 108 + pci_write_config_dword(nhi->pdev, VS_CAP_19, 0); 109 + return 0; 110 + } 111 + 112 + static void icl_nhi_set_ltr(struct tb_nhi *nhi) 113 + { 114 + u32 max_ltr, ltr; 115 + 116 + pci_read_config_dword(nhi->pdev, VS_CAP_16, &max_ltr); 117 + max_ltr &= 0xffff; 118 + /* Program the same value for both snoop and no-snoop */ 119 + ltr = max_ltr << 16 | max_ltr; 120 + pci_write_config_dword(nhi->pdev, VS_CAP_15, ltr); 121 + } 122 + 123 + static int icl_nhi_suspend(struct tb_nhi *nhi) 124 + { 125 
+ int ret; 126 + 127 + if (icl_nhi_is_device_connected(nhi)) 128 + return 0; 129 + 130 + /* 131 + * If there is no device connected we need to perform both: a 132 + * handshake through LC mailbox and force power down before 133 + * entering D3. 134 + */ 135 + icl_nhi_lc_mailbox_cmd(nhi, ICL_LC_PREPARE_FOR_RESET); 136 + ret = icl_nhi_lc_mailbox_cmd_complete(nhi, ICL_LC_MAILBOX_TIMEOUT); 137 + if (ret) 138 + return ret; 139 + 140 + return icl_nhi_force_power(nhi, false); 141 + } 142 + 143 + static int icl_nhi_suspend_noirq(struct tb_nhi *nhi, bool wakeup) 144 + { 145 + enum icl_lc_mailbox_cmd cmd; 146 + 147 + if (!pm_suspend_via_firmware()) 148 + return icl_nhi_suspend(nhi); 149 + 150 + cmd = wakeup ? ICL_LC_GO2SX : ICL_LC_GO2SX_NO_WAKE; 151 + icl_nhi_lc_mailbox_cmd(nhi, cmd); 152 + return icl_nhi_lc_mailbox_cmd_complete(nhi, ICL_LC_MAILBOX_TIMEOUT); 153 + } 154 + 155 + static int icl_nhi_resume(struct tb_nhi *nhi) 156 + { 157 + int ret; 158 + 159 + ret = icl_nhi_force_power(nhi, true); 160 + if (ret) 161 + return ret; 162 + 163 + icl_nhi_set_ltr(nhi); 164 + return 0; 165 + } 166 + 167 + static void icl_nhi_shutdown(struct tb_nhi *nhi) 168 + { 169 + icl_nhi_force_power(nhi, false); 170 + } 171 + 172 + const struct tb_nhi_ops icl_nhi_ops = { 173 + .init = icl_nhi_resume, 174 + .suspend_noirq = icl_nhi_suspend_noirq, 175 + .resume_noirq = icl_nhi_resume, 176 + .runtime_suspend = icl_nhi_suspend, 177 + .runtime_resume = icl_nhi_resume, 178 + .shutdown = icl_nhi_shutdown, 179 + };
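The LC mailbox write in `icl_nhi_lc_mailbox_cmd()` packs the command into bits [7:1] of VS_CAP_19 and strobes bit 0 (VALID) in the same dword. A sketch of just that encoding, using the shift/mask values from the hunk:

```c
#include <assert.h>
#include <stdint.h>

/* Register layout from the hunk: bit 0 is the VALID strobe, the
 * command lives in bits [7:1]. */
#define VS_CAP_19_VALID      (1u << 0)
#define VS_CAP_19_CMD_SHIFT  1
#define VS_CAP_19_CMD_MASK   0xfeu   /* GENMASK(7, 1) */

/* Mirrors icl_nhi_lc_mailbox_cmd(): place the command in its field,
 * then set VALID to hand it to the link controller. */
static uint32_t lc_mailbox_word(uint32_t cmd)
{
	uint32_t data = (cmd << VS_CAP_19_CMD_SHIFT) & VS_CAP_19_CMD_MASK;

	return data | VS_CAP_19_VALID;
}
```

The completion side then polls VS_CAP_18 for the DONE bit under a jiffies deadline and clears VALID afterwards, which is the standard request/acknowledge mailbox shape.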
+37
drivers/thunderbolt/nhi_regs.h
··· 124 124 #define REG_FW_STS_ICM_EN_INVERT BIT(1) 125 125 #define REG_FW_STS_ICM_EN BIT(0) 126 126 127 + /* ICL NHI VSEC registers */ 128 + 129 + /* FW ready */ 130 + #define VS_CAP_9 0xc8 131 + #define VS_CAP_9_FW_READY BIT(31) 132 + /* UUID */ 133 + #define VS_CAP_10 0xcc 134 + #define VS_CAP_11 0xd0 135 + /* LTR */ 136 + #define VS_CAP_15 0xe0 137 + #define VS_CAP_16 0xe4 138 + /* TBT2PCIe */ 139 + #define VS_CAP_18 0xec 140 + #define VS_CAP_18_DONE BIT(0) 141 + /* PCIe2TBT */ 142 + #define VS_CAP_19 0xf0 143 + #define VS_CAP_19_VALID BIT(0) 144 + #define VS_CAP_19_CMD_SHIFT 1 145 + #define VS_CAP_19_CMD_MASK GENMASK(7, 1) 146 + /* Force power */ 147 + #define VS_CAP_22 0xfc 148 + #define VS_CAP_22_FORCE_POWER BIT(1) 149 + #define VS_CAP_22_DMA_DELAY_MASK GENMASK(31, 24) 150 + #define VS_CAP_22_DMA_DELAY_SHIFT 24 151 + 152 + /** 153 + * enum icl_lc_mailbox_cmd - ICL specific LC mailbox commands 154 + * @ICL_LC_GO2SX: Ask LC to enter Sx without wake 155 + * @ICL_LC_GO2SX_NO_WAKE: Ask LC to enter Sx with wake 156 + * @ICL_LC_PREPARE_FOR_RESET: Prepare LC for reset 157 + */ 158 + enum icl_lc_mailbox_cmd { 159 + ICL_LC_GO2SX = 0x02, 160 + ICL_LC_GO2SX_NO_WAKE = 0x03, 161 + ICL_LC_PREPARE_FOR_RESET = 0x21, 162 + }; 163 + 127 164 #endif
+41 -11
drivers/thunderbolt/switch.c
··· 364 364 nvm->active = nvm_dev; 365 365 } 366 366 367 - nvm_dev = register_nvmem(sw, nvm->id, NVM_MAX_SIZE, false); 368 - if (IS_ERR(nvm_dev)) { 369 - ret = PTR_ERR(nvm_dev); 370 - goto err_nvm_active; 367 + if (!sw->no_nvm_upgrade) { 368 + nvm_dev = register_nvmem(sw, nvm->id, NVM_MAX_SIZE, false); 369 + if (IS_ERR(nvm_dev)) { 370 + ret = PTR_ERR(nvm_dev); 371 + goto err_nvm_active; 372 + } 373 + nvm->non_active = nvm_dev; 371 374 } 372 - nvm->non_active = nvm_dev; 373 375 374 376 sw->nvm = nvm; 375 377 return 0; ··· 400 398 if (!nvm->authenticating) 401 399 nvm_clear_auth_status(sw); 402 400 403 - nvmem_unregister(nvm->non_active); 401 + if (nvm->non_active) 402 + nvmem_unregister(nvm->non_active); 404 403 if (nvm->active) 405 404 nvmem_unregister(nvm->active); 406 405 ida_simple_remove(&nvm_ida, nvm->id); ··· 614 611 int cap; 615 612 616 613 res = tb_port_read(port, &port->config, TB_CFG_PORT, 0, 8); 617 - if (res) 614 + if (res) { 615 + if (res == -ENODEV) { 616 + tb_dbg(port->sw->tb, " Port %d: not implemented\n", 617 + port->port); 618 + return 0; 619 + } 618 620 return res; 621 + } 619 622 620 623 /* Port 0 is the switch itself and has no PHY. 
*/ 621 624 if (port->config.type == TB_TYPE_PORT && port->port != 0) { ··· 1340 1331 struct device *dev = container_of(kobj, struct device, kobj); 1341 1332 struct tb_switch *sw = tb_to_switch(dev); 1342 1333 1343 - if (attr == &dev_attr_key.attr) { 1334 + if (attr == &dev_attr_device.attr) { 1335 + if (!sw->device) 1336 + return 0; 1337 + } else if (attr == &dev_attr_device_name.attr) { 1338 + if (!sw->device_name) 1339 + return 0; 1340 + } else if (attr == &dev_attr_vendor.attr) { 1341 + if (!sw->vendor) 1342 + return 0; 1343 + } else if (attr == &dev_attr_vendor_name.attr) { 1344 + if (!sw->vendor_name) 1345 + return 0; 1346 + } else if (attr == &dev_attr_key.attr) { 1344 1347 if (tb_route(sw) && 1345 1348 sw->tb->security_level == TB_SECURITY_SECURE && 1346 1349 sw->security_level == TB_SECURITY_SECURE) 1347 1350 return attr->mode; 1348 1351 return 0; 1349 - } else if (attr == &dev_attr_nvm_authenticate.attr || 1350 - attr == &dev_attr_nvm_version.attr) { 1352 + } else if (attr == &dev_attr_nvm_authenticate.attr) { 1353 + if (sw->dma_port && !sw->no_nvm_upgrade) 1354 + return attr->mode; 1355 + return 0; 1356 + } else if (attr == &dev_attr_nvm_version.attr) { 1351 1357 if (sw->dma_port) 1352 1358 return attr->mode; 1353 1359 return 0; ··· 1470 1446 case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_2C_BRIDGE: 1471 1447 case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_4C_BRIDGE: 1472 1448 case PCI_DEVICE_ID_INTEL_TITAN_RIDGE_DD_BRIDGE: 1449 + case PCI_DEVICE_ID_INTEL_ICL_NHI0: 1450 + case PCI_DEVICE_ID_INTEL_ICL_NHI1: 1473 1451 return 3; 1474 1452 1475 1453 default: ··· 1715 1689 break; 1716 1690 } 1717 1691 1718 - if (sw->no_nvm_upgrade) 1692 + /* Root switch DMA port requires running firmware */ 1693 + if (!tb_route(sw) && sw->config.enabled) 1719 1694 return 0; 1720 1695 1721 1696 sw->dma_port = dma_port_alloc(sw); 1722 1697 if (!sw->dma_port) 1698 + return 0; 1699 + 1700 + if (sw->no_nvm_upgrade) 1723 1701 return 0; 1724 1702 1725 1703 /*
+12 -4
drivers/thunderbolt/tb_msgs.h
··· 104 104 }; 105 105 106 106 enum icm_event_code { 107 - ICM_EVENT_DEVICE_CONNECTED = 3, 108 - ICM_EVENT_DEVICE_DISCONNECTED = 4, 109 - ICM_EVENT_XDOMAIN_CONNECTED = 6, 110 - ICM_EVENT_XDOMAIN_DISCONNECTED = 7, 107 + ICM_EVENT_DEVICE_CONNECTED = 0x3, 108 + ICM_EVENT_DEVICE_DISCONNECTED = 0x4, 109 + ICM_EVENT_XDOMAIN_CONNECTED = 0x6, 110 + ICM_EVENT_XDOMAIN_DISCONNECTED = 0x7, 111 + ICM_EVENT_RTD3_VETO = 0xa, 111 112 }; 112 113 113 114 struct icm_pkg_header { ··· 462 461 u32 route_hi; 463 462 u32 route_lo; 464 463 uuid_t remote_uuid; 464 + }; 465 + 466 + /* Ice Lake messages */ 467 + 468 + struct icm_icl_event_rtd3_veto { 469 + struct icm_pkg_header hdr; 470 + u32 veto_reason; 465 471 }; 466 472 467 473 /* XDomain messages */
+2 -2
drivers/thunderbolt/tunnel.c
··· 211 211 return NULL; 212 212 } 213 213 tb_pci_init_path(path); 214 - tunnel->paths[TB_PCI_PATH_UP] = path; 214 + tunnel->paths[TB_PCI_PATH_DOWN] = path; 215 215 216 216 path = tb_path_alloc(tb, up, TB_PCI_HOPID, down, TB_PCI_HOPID, 0, 217 217 "PCIe Up"); ··· 220 220 return NULL; 221 221 } 222 222 tb_pci_init_path(path); 223 - tunnel->paths[TB_PCI_PATH_DOWN] = path; 223 + tunnel->paths[TB_PCI_PATH_UP] = path; 224 224 225 225 return tunnel; 226 226 }
+1 -1
drivers/thunderbolt/xdomain.c
··· 636 636 * It should be null terminated but anything else is pretty much 637 637 * allowed. 638 638 */ 639 - return sprintf(buf, "%*pEp\n", (int)strlen(svc->key), svc->key); 639 + return sprintf(buf, "%*pE\n", (int)strlen(svc->key), svc->key); 640 640 } 641 641 static DEVICE_ATTR_RO(key); 642 642
+1 -3
drivers/uio/uio_dmem_genirq.c
··· 200 200 201 201 if (!uioinfo->irq) { 202 202 ret = platform_get_irq(pdev, 0); 203 - if (ret < 0) { 204 - dev_err(&pdev->dev, "failed to get IRQ\n"); 203 + if (ret < 0) 205 204 goto bad1; 206 - } 207 205 uioinfo->irq = ret; 208 206 } 209 207 uiomem = &uioinfo->mem[0];
+11 -3
drivers/uio/uio_pdrv_genirq.c
··· 102 102 static int uio_pdrv_genirq_probe(struct platform_device *pdev) 103 103 { 104 104 struct uio_info *uioinfo = dev_get_platdata(&pdev->dev); 105 + struct device_node *node = pdev->dev.of_node; 105 106 struct uio_pdrv_genirq_platdata *priv; 106 107 struct uio_mem *uiomem; 107 108 int ret = -EINVAL; 108 109 int i; 109 110 110 - if (pdev->dev.of_node) { 111 + if (node) { 112 + const char *name; 113 + 111 114 /* alloc uioinfo for one device */ 112 115 uioinfo = devm_kzalloc(&pdev->dev, sizeof(*uioinfo), 113 116 GFP_KERNEL); ··· 118 115 dev_err(&pdev->dev, "unable to kmalloc\n"); 119 116 return -ENOMEM; 120 117 } 121 - uioinfo->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%pOFn", 122 - pdev->dev.of_node); 118 + 119 + if (!of_property_read_string(node, "linux,uio-name", &name)) 120 + uioinfo->name = devm_kstrdup(&pdev->dev, name, GFP_KERNEL); 121 + else 122 + uioinfo->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, 123 + "%pOFn", node); 124 + 123 125 uioinfo->version = "devicetree"; 124 126 /* Multiple IRQs are not supported */ 125 127 }
+9
drivers/w1/masters/Kconfig
··· 65 65 Say Y here if you want support for the 1-wire or HDQ Interface 66 66 on an OMAP processor. 67 67 68 + config W1_MASTER_SGI 69 + tristate "SGI ASIC driver" 70 + help 71 + Say Y here if you want support for your 1-wire devices using 72 + the SGI ASIC 1-Wire interface. 73 + 74 + This support is also available as a module. If so, the module 75 + will be called sgi_w1. 76 + 68 77 endmenu 69 78
+1
drivers/w1/masters/Makefile
··· 11 11 obj-$(CONFIG_W1_MASTER_DS1WM) += ds1wm.o 12 12 obj-$(CONFIG_W1_MASTER_GPIO) += w1-gpio.o 13 13 obj-$(CONFIG_HDQ_MASTER_OMAP) += omap_hdq.o 14 + obj-$(CONFIG_W1_MASTER_SGI) += sgi_w1.o
+1 -3
drivers/w1/masters/mxc_w1.c
··· 92 92 { 93 93 struct mxc_w1_device *mdev; 94 94 unsigned long clkrate; 95 - struct resource *res; 96 95 unsigned int clkdiv; 97 96 int err; 98 97 ··· 119 120 dev_warn(&pdev->dev, 120 121 "Incorrect time base frequency %lu Hz\n", clkrate); 121 122 122 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 123 - mdev->regs = devm_ioremap_resource(&pdev->dev, res); 123 + mdev->regs = devm_platform_ioremap_resource(pdev, 0); 124 124 if (IS_ERR(mdev->regs)) { 125 125 err = PTR_ERR(mdev->regs); 126 126 goto out_disable_clk;
+1 -3
drivers/w1/masters/omap_hdq.c
··· 660 660 { 661 661 struct device *dev = &pdev->dev; 662 662 struct hdq_data *hdq_data; 663 - struct resource *res; 664 663 int ret, irq; 665 664 u8 rev; 666 665 const char *mode; ··· 673 674 hdq_data->dev = dev; 674 675 platform_set_drvdata(pdev, hdq_data); 675 676 676 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 677 - hdq_data->hdq_base = devm_ioremap_resource(dev, res); 677 + hdq_data->hdq_base = devm_platform_ioremap_resource(pdev, 0); 678 678 if (IS_ERR(hdq_data->hdq_base)) 679 679 return PTR_ERR(hdq_data->hdq_base); 680 680
+130
drivers/w1/masters/sgi_w1.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * sgi_w1.c - w1 master driver for one wire support in SGI ASICs 4 + */ 5 + 6 + #include <linux/clk.h> 7 + #include <linux/delay.h> 8 + #include <linux/io.h> 9 + #include <linux/jiffies.h> 10 + #include <linux/module.h> 11 + #include <linux/mod_devicetable.h> 12 + #include <linux/platform_device.h> 13 + #include <linux/platform_data/sgi-w1.h> 14 + 15 + #include <linux/w1.h> 16 + 17 + #define MCR_RD_DATA BIT(0) 18 + #define MCR_DONE BIT(1) 19 + 20 + #define MCR_PACK(pulse, sample) (((pulse) << 10) | ((sample) << 2)) 21 + 22 + struct sgi_w1_device { 23 + u32 __iomem *mcr; 24 + struct w1_bus_master bus_master; 25 + char dev_id[64]; 26 + }; 27 + 28 + static u8 sgi_w1_wait(u32 __iomem *mcr) 29 + { 30 + u32 mcr_val; 31 + 32 + do { 33 + mcr_val = readl(mcr); 34 + } while (!(mcr_val & MCR_DONE)); 35 + 36 + return (mcr_val & MCR_RD_DATA) ? 1 : 0; 37 + } 38 + 39 + /* 40 + * this is the low level routine to 41 + * reset the device on the One Wire interface 42 + * on the hardware 43 + */ 44 + static u8 sgi_w1_reset_bus(void *data) 45 + { 46 + struct sgi_w1_device *dev = data; 47 + u8 ret; 48 + 49 + writel(MCR_PACK(520, 65), dev->mcr); 50 + ret = sgi_w1_wait(dev->mcr); 51 + udelay(500); /* recovery time */ 52 + return ret; 53 + } 54 + 55 + /* 56 + * this is the low level routine to read/write a bit on the One Wire 57 + * interface on the hardware. It does write 0 if parameter bit is set 58 + * to 0, otherwise a write 1/read. 
59 + */ 60 + static u8 sgi_w1_touch_bit(void *data, u8 bit) 61 + { 62 + struct sgi_w1_device *dev = data; 63 + u8 ret; 64 + 65 + if (bit) 66 + writel(MCR_PACK(6, 13), dev->mcr); 67 + else 68 + writel(MCR_PACK(80, 30), dev->mcr); 69 + 70 + ret = sgi_w1_wait(dev->mcr); 71 + if (bit) 72 + udelay(100); /* recovery */ 73 + return ret; 74 + } 75 + 76 + static int sgi_w1_probe(struct platform_device *pdev) 77 + { 78 + struct sgi_w1_device *sdev; 79 + struct sgi_w1_platform_data *pdata; 80 + struct resource *res; 81 + 82 + sdev = devm_kzalloc(&pdev->dev, sizeof(struct sgi_w1_device), 83 + GFP_KERNEL); 84 + if (!sdev) 85 + return -ENOMEM; 86 + 87 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 88 + sdev->mcr = devm_ioremap_resource(&pdev->dev, res); 89 + if (IS_ERR(sdev->mcr)) 90 + return PTR_ERR(sdev->mcr); 91 + 92 + sdev->bus_master.data = sdev; 93 + sdev->bus_master.reset_bus = sgi_w1_reset_bus; 94 + sdev->bus_master.touch_bit = sgi_w1_touch_bit; 95 + 96 + pdata = dev_get_platdata(&pdev->dev); 97 + if (pdata) { 98 + strlcpy(sdev->dev_id, pdata->dev_id, sizeof(sdev->dev_id)); 99 + sdev->bus_master.dev_id = sdev->dev_id; 100 + } 101 + 102 + platform_set_drvdata(pdev, sdev); 103 + 104 + return w1_add_master_device(&sdev->bus_master); 105 + } 106 + 107 + /* 108 + * disassociate the w1 device from the driver 109 + */ 110 + static int sgi_w1_remove(struct platform_device *pdev) 111 + { 112 + struct sgi_w1_device *sdev = platform_get_drvdata(pdev); 113 + 114 + w1_remove_master_device(&sdev->bus_master); 115 + 116 + return 0; 117 + } 118 + 119 + static struct platform_driver sgi_w1_driver = { 120 + .driver = { 121 + .name = "sgi_w1", 122 + }, 123 + .probe = sgi_w1_probe, 124 + .remove = sgi_w1_remove, 125 + }; 126 + module_platform_driver(sgi_w1_driver); 127 + 128 + MODULE_LICENSE("GPL"); 129 + MODULE_AUTHOR("Thomas Bogendoerfer"); 130 + MODULE_DESCRIPTION("Driver for One-Wire IP in SGI ASICs");
+6
drivers/w1/slaves/Kconfig
··· 101 101 Say Y here if you want to use a 1-wire 102 102 DS2438 Smart Battery Monitor device support 103 103 104 + config W1_SLAVE_DS250X 105 + tristate "512b/1kb/16kb EPROM family support" 106 + help 107 + Say Y here if you want to use a 1-wire 108 + 512b/1kb/16kb EPROM family device (DS250x). 109 + 104 110 config W1_SLAVE_DS2780 105 111 tristate "Dallas 2780 battery monitor chip" 106 112 help
+1
drivers/w1/slaves/Makefile
··· 14 14 obj-$(CONFIG_W1_SLAVE_DS2805) += w1_ds2805.o 15 15 obj-$(CONFIG_W1_SLAVE_DS2433) += w1_ds2433.o 16 16 obj-$(CONFIG_W1_SLAVE_DS2438) += w1_ds2438.o 17 + obj-$(CONFIG_W1_SLAVE_DS250X) += w1_ds250x.o 17 18 obj-$(CONFIG_W1_SLAVE_DS2780) += w1_ds2780.o 18 19 obj-$(CONFIG_W1_SLAVE_DS2781) += w1_ds2781.o 19 20 obj-$(CONFIG_W1_SLAVE_DS28E04) += w1_ds28e04.o
+290
drivers/w1/slaves/w1_ds250x.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * w1_ds250x.c - w1 family 09/0b/89/91 (DS250x) driver 4 + */ 5 + 6 + #include <linux/kernel.h> 7 + #include <linux/module.h> 8 + #include <linux/moduleparam.h> 9 + #include <linux/device.h> 10 + #include <linux/types.h> 11 + #include <linux/delay.h> 12 + #include <linux/slab.h> 13 + #include <linux/crc16.h> 14 + 15 + #include <linux/w1.h> 16 + #include <linux/nvmem-provider.h> 17 + 18 + #define W1_DS2501_UNW_FAMILY 0x91 19 + #define W1_DS2501_SIZE 64 20 + 21 + #define W1_DS2502_FAMILY 0x09 22 + #define W1_DS2502_UNW_FAMILY 0x89 23 + #define W1_DS2502_SIZE 128 24 + 25 + #define W1_DS2505_FAMILY 0x0b 26 + #define W1_DS2505_SIZE 2048 27 + 28 + #define W1_PAGE_SIZE 32 29 + 30 + #define W1_EXT_READ_MEMORY 0xA5 31 + #define W1_READ_DATA_CRC 0xC3 32 + 33 + #define OFF2PG(off) ((off) / W1_PAGE_SIZE) 34 + 35 + #define CRC16_INIT 0 36 + #define CRC16_VALID 0xb001 37 + 38 + struct w1_eprom_data { 39 + size_t size; 40 + int (*read)(struct w1_slave *sl, int pageno); 41 + u8 eprom[W1_DS2505_SIZE]; 42 + DECLARE_BITMAP(page_present, W1_DS2505_SIZE / W1_PAGE_SIZE); 43 + char nvmem_name[64]; 44 + }; 45 + 46 + static int w1_ds2502_read_page(struct w1_slave *sl, int pageno) 47 + { 48 + struct w1_eprom_data *data = sl->family_data; 49 + int pgoff = pageno * W1_PAGE_SIZE; 50 + int ret = -EIO; 51 + u8 buf[3]; 52 + u8 crc8; 53 + 54 + if (test_bit(pageno, data->page_present)) 55 + return 0; /* page already present */ 56 + 57 + mutex_lock(&sl->master->bus_mutex); 58 + 59 + if (w1_reset_select_slave(sl)) 60 + goto err; 61 + 62 + buf[0] = W1_READ_DATA_CRC; 63 + buf[1] = pgoff & 0xff; 64 + buf[2] = pgoff >> 8; 65 + w1_write_block(sl->master, buf, 3); 66 + 67 + crc8 = w1_read_8(sl->master); 68 + if (w1_calc_crc8(buf, 3) != crc8) 69 + goto err; 70 + 71 + w1_read_block(sl->master, &data->eprom[pgoff], W1_PAGE_SIZE); 72 + 73 + crc8 = w1_read_8(sl->master); 74 + if (w1_calc_crc8(&data->eprom[pgoff], W1_PAGE_SIZE) != crc8) 75 + goto err; 76 + 77 
+ set_bit(pageno, data->page_present); /* mark page present */ 78 + ret = 0; 79 + err: 80 + mutex_unlock(&sl->master->bus_mutex); 81 + return ret; 82 + } 83 + 84 + static int w1_ds2505_read_page(struct w1_slave *sl, int pageno) 85 + { 86 + struct w1_eprom_data *data = sl->family_data; 87 + int redir_retries = 16; 88 + int pgoff, epoff; 89 + int ret = -EIO; 90 + u8 buf[6]; 91 + u8 redir; 92 + u16 crc; 93 + 94 + if (test_bit(pageno, data->page_present)) 95 + return 0; /* page already present */ 96 + 97 + epoff = pgoff = pageno * W1_PAGE_SIZE; 98 + mutex_lock(&sl->master->bus_mutex); 99 + 100 + retry: 101 + if (w1_reset_select_slave(sl)) 102 + goto err; 103 + 104 + buf[0] = W1_EXT_READ_MEMORY; 105 + buf[1] = pgoff & 0xff; 106 + buf[2] = pgoff >> 8; 107 + w1_write_block(sl->master, buf, 3); 108 + w1_read_block(sl->master, buf + 3, 3); /* redir, crc16 */ 109 + redir = buf[3]; 110 + crc = crc16(CRC16_INIT, buf, 6); 111 + 112 + if (crc != CRC16_VALID) 113 + goto err; 114 + 115 + 116 + if (redir != 0xff) { 117 + redir_retries--; 118 + if (redir_retries < 0) 119 + goto err; 120 + 121 + pgoff = (redir ^ 0xff) * W1_PAGE_SIZE; 122 + goto retry; 123 + } 124 + 125 + w1_read_block(sl->master, &data->eprom[epoff], W1_PAGE_SIZE); 126 + w1_read_block(sl->master, buf, 2); /* crc16 */ 127 + crc = crc16(CRC16_INIT, &data->eprom[epoff], W1_PAGE_SIZE); 128 + crc = crc16(crc, buf, 2); 129 + 130 + if (crc != CRC16_VALID) 131 + goto err; 132 + 133 + set_bit(pageno, data->page_present); 134 + ret = 0; 135 + err: 136 + mutex_unlock(&sl->master->bus_mutex); 137 + return ret; 138 + } 139 + 140 + static int w1_nvmem_read(void *priv, unsigned int off, void *buf, size_t count) 141 + { 142 + struct w1_slave *sl = priv; 143 + struct w1_eprom_data *data = sl->family_data; 144 + size_t eprom_size = data->size; 145 + int ret; 146 + int i; 147 + 148 + if (off > eprom_size) 149 + return -EINVAL; 150 + 151 + if ((off + count) > eprom_size) 152 + count = eprom_size - off; 153 + 154 + i = OFF2PG(off); 155 + 
do { 156 + ret = data->read(sl, i++); 157 + if (ret < 0) 158 + return ret; 159 + } while (i < OFF2PG(off + count)); 160 + 161 + memcpy(buf, &data->eprom[off], count); 162 + return 0; 163 + } 164 + 165 + static int w1_eprom_add_slave(struct w1_slave *sl) 166 + { 167 + struct w1_eprom_data *data; 168 + struct nvmem_device *nvmem; 169 + struct nvmem_config nvmem_cfg = { 170 + .dev = &sl->dev, 171 + .reg_read = w1_nvmem_read, 172 + .type = NVMEM_TYPE_OTP, 173 + .read_only = true, 174 + .word_size = 1, 175 + .priv = sl, 176 + .id = -1 177 + }; 178 + 179 + data = devm_kzalloc(&sl->dev, sizeof(struct w1_eprom_data), GFP_KERNEL); 180 + if (!data) 181 + return -ENOMEM; 182 + 183 + sl->family_data = data; 184 + switch (sl->family->fid) { 185 + case W1_DS2501_UNW_FAMILY: 186 + data->size = W1_DS2501_SIZE; 187 + data->read = w1_ds2502_read_page; 188 + break; 189 + case W1_DS2502_FAMILY: 190 + case W1_DS2502_UNW_FAMILY: 191 + data->size = W1_DS2502_SIZE; 192 + data->read = w1_ds2502_read_page; 193 + break; 194 + case W1_DS2505_FAMILY: 195 + data->size = W1_DS2505_SIZE; 196 + data->read = w1_ds2505_read_page; 197 + break; 198 + } 199 + 200 + if (sl->master->bus_master->dev_id) 201 + snprintf(data->nvmem_name, sizeof(data->nvmem_name), 202 + "%s-%02x-%012llx", 203 + sl->master->bus_master->dev_id, sl->reg_num.family, 204 + (unsigned long long)sl->reg_num.id); 205 + else 206 + snprintf(data->nvmem_name, sizeof(data->nvmem_name), 207 + "%02x-%012llx", 208 + sl->reg_num.family, 209 + (unsigned long long)sl->reg_num.id); 210 + 211 + nvmem_cfg.name = data->nvmem_name; 212 + nvmem_cfg.size = data->size; 213 + 214 + nvmem = devm_nvmem_register(&sl->dev, &nvmem_cfg); 215 + return PTR_ERR_OR_ZERO(nvmem); 216 + } 217 + 218 + static struct w1_family_ops w1_eprom_fops = { 219 + .add_slave = w1_eprom_add_slave, 220 + }; 221 + 222 + static struct w1_family w1_family_09 = { 223 + .fid = W1_DS2502_FAMILY, 224 + .fops = &w1_eprom_fops, 225 + }; 226 + 227 + static struct w1_family w1_family_0b = { 
228 + .fid = W1_DS2505_FAMILY, 229 + .fops = &w1_eprom_fops, 230 + }; 231 + 232 + static struct w1_family w1_family_89 = { 233 + .fid = W1_DS2502_UNW_FAMILY, 234 + .fops = &w1_eprom_fops, 235 + }; 236 + 237 + static struct w1_family w1_family_91 = { 238 + .fid = W1_DS2501_UNW_FAMILY, 239 + .fops = &w1_eprom_fops, 240 + }; 241 + 242 + static int __init w1_ds250x_init(void) 243 + { 244 + int err; 245 + 246 + err = w1_register_family(&w1_family_09); 247 + if (err) 248 + return err; 249 + 250 + err = w1_register_family(&w1_family_0b); 251 + if (err) 252 + goto err_0b; 253 + 254 + err = w1_register_family(&w1_family_89); 255 + if (err) 256 + goto err_89; 257 + 258 + err = w1_register_family(&w1_family_91); 259 + if (err) 260 + goto err_91; 261 + 262 + return 0; 263 + 264 + err_91: 265 + w1_unregister_family(&w1_family_89); 266 + err_89: 267 + w1_unregister_family(&w1_family_0b); 268 + err_0b: 269 + w1_unregister_family(&w1_family_09); 270 + return err; 271 + } 272 + 273 + static void __exit w1_ds250x_exit(void) 274 + { 275 + w1_unregister_family(&w1_family_09); 276 + w1_unregister_family(&w1_family_0b); 277 + w1_unregister_family(&w1_family_89); 278 + w1_unregister_family(&w1_family_91); 279 + } 280 + 281 + module_init(w1_ds250x_init); 282 + module_exit(w1_ds250x_exit); 283 + 284 + MODULE_AUTHOR("Thomas Bogendoerfer <tbogendoerfe@suse.de>"); 285 + MODULE_DESCRIPTION("w1 family driver for DS250x Add Only Memory"); 286 + MODULE_LICENSE("GPL"); 287 + MODULE_ALIAS("w1-family-" __stringify(W1_DS2502_FAMILY)); 288 + MODULE_ALIAS("w1-family-" __stringify(W1_DS2505_FAMILY)); 289 + MODULE_ALIAS("w1-family-" __stringify(W1_DS2501_UNW_FAMILY)); 290 + MODULE_ALIAS("w1-family-" __stringify(W1_DS2502_UNW_FAMILY));
+88
include/dt-bindings/interconnect/qcom,qcs404.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Qualcomm interconnect IDs 4 + * 5 + * Copyright (c) 2019, Linaro Ltd. 6 + * Author: Georgi Djakov <georgi.djakov@linaro.org> 7 + */ 8 + 9 + #ifndef __DT_BINDINGS_INTERCONNECT_QCOM_QCS404_H 10 + #define __DT_BINDINGS_INTERCONNECT_QCOM_QCS404_H 11 + 12 + #define MASTER_AMPSS_M0 0 13 + #define MASTER_OXILI 1 14 + #define MASTER_MDP_PORT0 2 15 + #define MASTER_SNOC_BIMC_1 3 16 + #define MASTER_TCU_0 4 17 + #define SLAVE_EBI_CH0 5 18 + #define SLAVE_BIMC_SNOC 6 19 + 20 + #define MASTER_SPDM 0 21 + #define MASTER_BLSP_1 1 22 + #define MASTER_BLSP_2 2 23 + #define MASTER_XI_USB_HS1 3 24 + #define MASTER_CRYPT0 4 25 + #define MASTER_SDCC_1 5 26 + #define MASTER_SDCC_2 6 27 + #define MASTER_SNOC_PCNOC 7 28 + #define MASTER_QPIC 8 29 + #define PCNOC_INT_0 9 30 + #define PCNOC_INT_2 10 31 + #define PCNOC_INT_3 11 32 + #define PCNOC_S_0 12 33 + #define PCNOC_S_1 13 34 + #define PCNOC_S_2 14 35 + #define PCNOC_S_3 15 36 + #define PCNOC_S_4 16 37 + #define PCNOC_S_6 17 38 + #define PCNOC_S_7 18 39 + #define PCNOC_S_8 19 40 + #define PCNOC_S_9 20 41 + #define PCNOC_S_10 21 42 + #define PCNOC_S_11 22 43 + #define SLAVE_SPDM 23 44 + #define SLAVE_PDM 24 45 + #define SLAVE_PRNG 25 46 + #define SLAVE_TCSR 26 47 + #define SLAVE_SNOC_CFG 27 48 + #define SLAVE_MESSAGE_RAM 28 49 + #define SLAVE_DISP_SS_CFG 29 50 + #define SLAVE_GPU_CFG 30 51 + #define SLAVE_BLSP_1 31 52 + #define SLAVE_BLSP_2 32 53 + #define SLAVE_TLMM_NORTH 33 54 + #define SLAVE_PCIE 34 55 + #define SLAVE_ETHERNET 35 56 + #define SLAVE_TLMM_EAST 36 57 + #define SLAVE_TCU 37 58 + #define SLAVE_PMIC_ARB 38 59 + #define SLAVE_SDCC_1 39 60 + #define SLAVE_SDCC_2 40 61 + #define SLAVE_TLMM_SOUTH 41 62 + #define SLAVE_USB_HS 42 63 + #define SLAVE_USB3 43 64 + #define SLAVE_CRYPTO_0_CFG 44 65 + #define SLAVE_PCNOC_SNOC 45 66 + 67 + #define MASTER_QDSS_BAM 0 68 + #define MASTER_BIMC_SNOC 1 69 + #define MASTER_PCNOC_SNOC 2 70 + #define MASTER_QDSS_ETR 3 71 + #define MASTER_EMAC 4 72 + #define MASTER_PCIE 5 73 + #define MASTER_USB3 6 74 + #define QDSS_INT 7 75 + #define SNOC_INT_0 8 76 + #define SNOC_INT_1 9 77 + #define SNOC_INT_2 10 78 + #define SLAVE_KPSS_AHB 11 79 + #define SLAVE_WCSS 12 80 + #define SLAVE_SNOC_BIMC_1 13 81 + #define SLAVE_IMEM 14 82 + #define SLAVE_SNOC_PCNOC 15 83 + #define SLAVE_QDSS_STM 16 84 + #define SLAVE_CATS_0 17 85 + #define SLAVE_CATS_1 18 86 + #define SLAVE_LPASS 19 87 + 88 + #endif
+11
include/dt-bindings/phy/phy-lantiq-vrx200-pcie.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (C) 2019 Martin Blumenstingl <martin.blumenstingl@googlemail.com> 4 + */ 5 + 6 + #define LANTIQ_PCIE_PHY_MODE_25MHZ 0 7 + #define LANTIQ_PCIE_PHY_MODE_25MHZ_SSC 1 8 + #define LANTIQ_PCIE_PHY_MODE_36MHZ 2 9 + #define LANTIQ_PCIE_PHY_MODE_36MHZ_SSC 3 10 + #define LANTIQ_PCIE_PHY_MODE_100MHZ 4 11 + #define LANTIQ_PCIE_PHY_MODE_100MHZ_SSC 5
+45 -6
include/linux/firmware/intel/stratix10-smc.h
··· 210 210 #define INTEL_SIP_SMC_FPGA_CONFIG_LOOPBACK \ 211 211 INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_FPGA_CONFIG_LOOPBACK) 212 212 213 - /* 213 + /** 214 214 * Request INTEL_SIP_SMC_REG_READ 215 215 * 216 216 * Read a protected register at EL3 ··· 229 229 #define INTEL_SIP_SMC_REG_READ \ 230 230 INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_READ) 231 231 232 - /* 232 + /** 233 233 * Request INTEL_SIP_SMC_REG_WRITE 234 234 * 235 235 * Write a protected register at EL3 ··· 248 248 #define INTEL_SIP_SMC_REG_WRITE \ 249 249 INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_WRITE) 250 250 251 - /* 251 + /** 252 252 * Request INTEL_SIP_SMC_FUNCID_REG_UPDATE 253 253 * 254 254 * Update one or more bits in a protected register at EL3 using a ··· 269 269 #define INTEL_SIP_SMC_REG_UPDATE \ 270 270 INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_REG_UPDATE) 271 271 272 - /* 272 + /** 273 273 * Request INTEL_SIP_SMC_RSU_STATUS 274 274 * 275 275 * Request remote status update boot log, call is synchronous. 
··· 292 292 #define INTEL_SIP_SMC_RSU_STATUS \ 293 293 INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_STATUS) 294 294 295 - /* 295 + /** 296 296 * Request INTEL_SIP_SMC_RSU_UPDATE 297 297 * 298 298 * Request to set the offset of the bitstream to boot after reboot, call ··· 310 310 #define INTEL_SIP_SMC_RSU_UPDATE \ 311 311 INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_UPDATE) 312 312 313 - /* 313 + /** 314 314 * Request INTEL_SIP_SMC_ECC_DBE 315 315 * 316 316 * Sync call used by service driver at EL1 to alert EL3 that a Double ··· 329 329 INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_ECC_DBE) 330 330 331 331 #endif 332 + 333 + /** 334 + * Request INTEL_SIP_SMC_RSU_NOTIFY 335 + * 336 + * Sync call used by service driver at EL1 to report hard processor 337 + * system execution stage to firmware 338 + * 339 + * Call register usage: 340 + * a0 INTEL_SIP_SMC_RSU_NOTIFY 341 + * a1 32bit value representing hard processor system execution stage 342 + * a2-7 not used 343 + * 344 + * Return status 345 + * a0 INTEL_SIP_SMC_STATUS_OK 346 + */ 347 + #define INTEL_SIP_SMC_FUNCID_RSU_NOTIFY 14 348 + #define INTEL_SIP_SMC_RSU_NOTIFY \ 349 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_NOTIFY) 350 + 351 + /** 352 + * Request INTEL_SIP_SMC_RSU_RETRY_COUNTER 353 + * 354 + * Sync call used by service driver at EL1 to query RSU retry counter 355 + * 356 + * Call register usage: 357 + * a0 INTEL_SIP_SMC_RSU_RETRY_COUNTER 358 + * a1-7 not used 359 + * 360 + * Return status 361 + * a0 INTEL_SIP_SMC_STATUS_OK 362 + * a1 the retry counter 363 + * 364 + * Or 365 + * 366 + * a0 INTEL_SIP_SMC_RSU_ERROR 367 + */ 368 + #define INTEL_SIP_SMC_FUNCID_RSU_RETRY_COUNTER 15 369 + #define INTEL_SIP_SMC_RSU_RETRY_COUNTER \ 370 + INTEL_SIP_SMC_FAST_CALL_VAL(INTEL_SIP_SMC_FUNCID_RSU_RETRY_COUNTER)
+10 -1
include/linux/firmware/intel/stratix10-svc-client.h
··· 95 95 * 96 96 * @COMMAND_RSU_UPDATE: set the offset of the bitstream to boot after reboot, 97 97 * return status is SVC_STATUS_RSU_OK or SVC_STATUS_RSU_ERROR 98 + * 99 + * @COMMAND_RSU_NOTIFY: report the status of hard processor system 100 + * software to firmware, return status is SVC_STATUS_RSU_OK or 101 + * SVC_STATUS_RSU_ERROR 102 + * 103 + * @COMMAND_RSU_RETRY: query firmware for the current image's retry counter, 104 + * return status is SVC_STATUS_RSU_OK or SVC_STATUS_RSU_ERROR 98 105 */ 99 106 enum stratix10_svc_command_code { 100 107 COMMAND_NOOP = 0, ··· 110 103 COMMAND_RECONFIG_DATA_CLAIM, 111 104 COMMAND_RECONFIG_STATUS, 112 105 COMMAND_RSU_STATUS, 113 - COMMAND_RSU_UPDATE 106 + COMMAND_RSU_UPDATE, 107 + COMMAND_RSU_NOTIFY, 108 + COMMAND_RSU_RETRY, 114 109 }; 115 110 116 111 /**
+1 -1
include/linux/fpga/altera-pr-ip-core.h
··· 13 13 #include <linux/io.h> 14 14 15 15 int alt_pr_register(struct device *dev, void __iomem *reg_base); 16 - int alt_pr_unregister(struct device *dev); 16 + void alt_pr_unregister(struct device *dev); 17 17 18 18 #endif /* _ALT_PR_IP_CORE_H */
+79
include/linux/intel_th.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Intel(R) Trace Hub data structures for implementing buffer sinks. 4 + * 5 + * Copyright (C) 2019 Intel Corporation. 6 + */ 7 + 8 + #ifndef _INTEL_TH_H_ 9 + #define _INTEL_TH_H_ 10 + 11 + #include <linux/scatterlist.h> 12 + 13 + /* MSC operating modes (MSC_MODE) */ 14 + enum { 15 + MSC_MODE_SINGLE = 0, 16 + MSC_MODE_MULTI, 17 + MSC_MODE_EXI, 18 + MSC_MODE_DEBUG, 19 + }; 20 + 21 + struct msu_buffer { 22 + const char *name; 23 + /* 24 + * ->assign() called when buffer 'mode' is set to this driver 25 + * (aka mode_store()) 26 + * @device: struct device * of the msc 27 + * @mode: allows the driver to set HW mode (see the enum above) 28 + * Returns: a pointer to a private structure associated with this 29 + * msc or NULL in case of error. This private structure 30 + * will then be passed into all other callbacks. 31 + */ 32 + void *(*assign)(struct device *dev, int *mode); 33 + /* ->unassign(): some other mode is selected, clean up */ 34 + void (*unassign)(void *priv); 35 + /* 36 + * ->alloc_window(): allocate memory for the window of a given 37 + * size 38 + * @sgt: pointer to sg_table, can be overridden by the buffer 39 + * driver, or kept intact 40 + * Returns: number of sg table entries <= number of pages; 41 + * 0 is treated as an allocation failure. 42 + */ 43 + int (*alloc_window)(void *priv, struct sg_table **sgt, 44 + size_t size); 45 + void (*free_window)(void *priv, struct sg_table *sgt); 46 + /* ->activate(): trace has started */ 47 + void (*activate)(void *priv); 48 + /* ->deactivate(): trace is about to stop */ 49 + void (*deactivate)(void *priv); 50 + /* 51 + * ->ready(): window @sgt is filled up to the last block OR 52 + * tracing is stopped by the user; this window contains 53 + * @bytes data. The window in question transitions into 54 + * the "LOCKED" state, indicating that it can't be used 55 + * by hardware. 
To clear this state and make the window 56 + * available to the hardware again, call 57 + * intel_th_msc_window_unlock(). 58 + */ 59 + int (*ready)(void *priv, struct sg_table *sgt, size_t bytes); 60 + }; 61 + 62 + int intel_th_msu_buffer_register(const struct msu_buffer *mbuf, 63 + struct module *owner); 64 + void intel_th_msu_buffer_unregister(const struct msu_buffer *mbuf); 65 + void intel_th_msc_window_unlock(struct device *dev, struct sg_table *sgt); 66 + 67 + #define module_intel_th_msu_buffer(__buffer) \ 68 + static int __init __buffer##_init(void) \ 69 + { \ 70 + return intel_th_msu_buffer_register(&(__buffer), THIS_MODULE); \ 71 + } \ 72 + module_init(__buffer##_init); \ 73 + static void __exit __buffer##_exit(void) \ 74 + { \ 75 + intel_th_msu_buffer_unregister(&(__buffer)); \ 76 + } \ 77 + module_exit(__buffer##_exit); 78 + 79 + #endif /* _INTEL_TH_H_ */
+5 -2
include/linux/interconnect-provider.h
··· 36 36 * @nodes: internal list of the interconnect provider nodes 37 37 * @set: pointer to device specific set operation function 38 38 * @aggregate: pointer to device specific aggregate operation function 39 + * @pre_aggregate: pointer to device specific function that is called 40 + * before the aggregation begins (optional) 39 41 * @xlate: provider-specific callback for mapping nodes from phandle arguments 40 42 * @dev: the device this interconnect provider belongs to 41 43 * @users: count of active users ··· 47 45 struct list_head provider_list; 48 46 struct list_head nodes; 49 47 int (*set)(struct icc_node *src, struct icc_node *dst); 50 - int (*aggregate)(struct icc_node *node, u32 avg_bw, u32 peak_bw, 51 - u32 *agg_avg, u32 *agg_peak); 48 + int (*aggregate)(struct icc_node *node, u32 tag, u32 avg_bw, 49 + u32 peak_bw, u32 *agg_avg, u32 *agg_peak); 50 + void (*pre_aggregate)(struct icc_node *node); 52 51 struct icc_node* (*xlate)(struct of_phandle_args *spec, void *data); 53 52 struct device *dev; 54 53 int users;
+5
include/linux/interconnect.h
··· 30 30 struct icc_path *of_icc_get(struct device *dev, const char *name); 31 31 void icc_put(struct icc_path *path); 32 32 int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw); 33 + void icc_set_tag(struct icc_path *path, u32 tag); 33 34 34 35 #else 35 36 ··· 53 52 static inline int icc_set_bw(struct icc_path *path, u32 avg_bw, u32 peak_bw) 54 53 { 55 54 return 0; 55 + } 56 + 57 + static inline void icc_set_tag(struct icc_path *path, u32 tag) 58 + { 56 59 } 57 60 58 61 #endif /* CONFIG_INTERCONNECT */
+13
include/linux/platform_data/sgi-w1.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * SGI One-Wire (W1) IP 4 + */ 5 + 6 + #ifndef PLATFORM_DATA_SGI_W1_H 7 + #define PLATFORM_DATA_SGI_W1_H 8 + 9 + struct sgi_w1_platform_data { 10 + char dev_id[64]; 11 + }; 12 + 13 + #endif /* PLATFORM_DATA_SGI_W1_H */
+2
include/linux/thunderbolt.h
··· 429 429 * @lock: Must be held during ring creation/destruction. Is acquired by 430 430 * interrupt_work when dispatching interrupts to individual rings. 431 431 * @pdev: Pointer to the PCI device 432 + * @ops: NHI specific optional ops 432 433 * @iobase: MMIO space of the NHI 433 434 * @tx_rings: All Tx rings available on this host controller 434 435 * @rx_rings: All Rx rings available on this host controller ··· 443 442 struct tb_nhi { 444 443 spinlock_t lock; 445 444 struct pci_dev *pdev; 445 + const struct tb_nhi_ops *ops; 446 446 void __iomem *iobase; 447 447 struct tb_ring **tx_rings; 448 448 struct tb_ring **rx_rings;
+5
include/linux/w1.h
··· 118 118 * w1_master* is passed to the slave found callback. 119 119 * u8 is search_type, W1_SEARCH or W1_ALARM_SEARCH 120 120 * 121 + * @dev_id: Optional device id string, which w1 slaves could use for 122 + * creating names, which then give a connection to the w1 master 123 + * 121 124 * Note: read_bit and write_bit are very low level functions and should only 122 125 * be used with hardware that doesn't really support 1-wire operations, 123 126 * like a parallel/serial port. ··· 153 150 154 151 void (*search)(void *, struct w1_master *, 155 152 u8, w1_slave_found_callback); 153 + 154 + char *dev_id; 156 155 }; 157 156 158 157 /**
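The new `@dev_id` field ties together the sgi-w1 platform data earlier in this pull and the w1 core: a bus-master driver can point `dev_id` at the string supplied via platform data so slaves can use it when building names. A userspace sketch of that hand-off, with a cut-down stand-in for `struct w1_bus_master` and an arbitrary id string:

```c
#include <string.h>

/* Mirror of the new platform data from sgi-w1.h above. */
struct sgi_w1_platform_data {
	char dev_id[64];
};

/* Cut-down stand-in for the relevant part of struct w1_bus_master. */
struct w1_bus_master_demo {
	char *dev_id;	/* optional id w1 slaves may use when naming */
};

/* A master driver's probe path would wire the two together like this. */
static void demo_bind(struct w1_bus_master_demo *bm,
		      struct sgi_w1_platform_data *pdata)
{
	bm->dev_id = pdata->dev_id;
}
```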
+19 -1
include/soc/qcom/tcs.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 - * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #ifndef __SOC_QCOM_TCS_H__ ··· 52 52 u32 num_cmds; 53 53 struct tcs_cmd *cmds; 54 54 }; 55 + 56 + #define BCM_TCS_CMD_COMMIT_SHFT 30 57 + #define BCM_TCS_CMD_COMMIT_MASK 0x40000000 58 + #define BCM_TCS_CMD_VALID_SHFT 29 59 + #define BCM_TCS_CMD_VALID_MASK 0x20000000 60 + #define BCM_TCS_CMD_VOTE_X_SHFT 14 61 + #define BCM_TCS_CMD_VOTE_MASK 0x3fff 62 + #define BCM_TCS_CMD_VOTE_Y_SHFT 0 63 + #define BCM_TCS_CMD_VOTE_Y_MASK 0xfffc000 64 + 65 + /* Construct a Bus Clock Manager (BCM) specific TCS command */ 66 + #define BCM_TCS_CMD(commit, valid, vote_x, vote_y) \ 67 + (((commit) << BCM_TCS_CMD_COMMIT_SHFT) | \ 68 + ((valid) << BCM_TCS_CMD_VALID_SHFT) | \ 69 + ((cpu_to_le32(vote_x) & \ 70 + BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) | \ 71 + ((cpu_to_le32(vote_y) & \ 72 + BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT)) 55 73 56 74 #endif /* __SOC_QCOM_TCS_H__ */
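The new `BCM_TCS_CMD()` macro packs a commit bit (bit 30), a valid bit (bit 29), and two 14-bit vote fields (x at bit 14, y at bit 0) into one command word. The function below re-creates that packing in userspace using the shifts and masks from the hunk, assuming a little-endian host so the kernel's `cpu_to_le32()` is the identity:

```c
#include <stdint.h>

/* Field layout copied from the new defines above. */
#define BCM_TCS_CMD_COMMIT_SHFT	30
#define BCM_TCS_CMD_VALID_SHFT	29
#define BCM_TCS_CMD_VOTE_X_SHFT	14
#define BCM_TCS_CMD_VOTE_Y_SHFT	0
#define BCM_TCS_CMD_VOTE_MASK	0x3fff

/* Userspace re-creation of BCM_TCS_CMD(), little-endian host assumed. */
static uint32_t bcm_tcs_cmd(uint32_t commit, uint32_t valid,
			    uint32_t vote_x, uint32_t vote_y)
{
	return (commit << BCM_TCS_CMD_COMMIT_SHFT) |
	       (valid << BCM_TCS_CMD_VALID_SHFT) |
	       ((vote_x & BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_X_SHFT) |
	       ((vote_y & BCM_TCS_CMD_VOTE_MASK) << BCM_TCS_CMD_VOTE_Y_SHFT);
}
```

Note the kernel macro masks both votes with `BCM_TCS_CMD_VOTE_MASK` even though a separate `BCM_TCS_CMD_VOTE_Y_MASK` is defined; the sketch follows the macro as written.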
+18
include/uapi/linux/fpga-dfl.h
··· 176 176 177 177 #define DFL_FPGA_FME_PORT_PR _IO(DFL_FPGA_MAGIC, DFL_FME_BASE + 0) 178 178 179 + /** 180 + * DFL_FPGA_FME_PORT_RELEASE - _IOW(DFL_FPGA_MAGIC, DFL_FME_BASE + 1, 181 + * int port_id) 182 + * 183 + * Driver releases the port per Port ID provided by caller. 184 + * Return: 0 on success, -errno on failure. 185 + */ 186 + #define DFL_FPGA_FME_PORT_RELEASE _IOW(DFL_FPGA_MAGIC, DFL_FME_BASE + 1, int) 187 + 188 + /** 189 + * DFL_FPGA_FME_PORT_ASSIGN - _IOW(DFL_FPGA_MAGIC, DFL_FME_BASE + 2, 190 + * int port_id) 191 + * 192 + * Driver assigns the port back per Port ID provided by caller. 193 + * Return: 0 on success, -errno on failure. 194 + */ 195 + #define DFL_FPGA_FME_PORT_ASSIGN _IOW(DFL_FPGA_MAGIC, DFL_FME_BASE + 2, int) 196 + 179 197 #endif /* _UAPI_LINUX_FPGA_DFL_H */
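Both new FME ioctls are `_IOW` commands carrying an `int` port id. Since `DFL_FPGA_MAGIC` and `DFL_FME_BASE` are defined outside the quoted hunk, the values below are placeholders for illustration only; the `_IOC_*` decoding macros from `<sys/ioctl.h>` are real and show how the direction, size, and consecutive command numbers fall out of the definitions:

```c
#include <sys/ioctl.h>

/* Placeholder values: the real defines live outside the quoted hunk. */
#define DFL_FPGA_MAGIC	0xB6	/* placeholder */
#define DFL_FME_BASE	0x80	/* placeholder */

/* The two new ioctls, as defined in the hunk above. */
#define DFL_FPGA_FME_PORT_RELEASE _IOW(DFL_FPGA_MAGIC, DFL_FME_BASE + 1, int)
#define DFL_FPGA_FME_PORT_ASSIGN  _IOW(DFL_FPGA_MAGIC, DFL_FME_BASE + 2, int)
```

Userspace would issue these as `ioctl(fme_fd, DFL_FPGA_FME_PORT_RELEASE, &port_id)` (pointer-to-int argument, per the `_IOW(..., int)` size encoding).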
+4 -1
include/uapi/linux/ppdev.h
··· 15 15 * Added PPGETMODES/PPGETMODE/PPGETPHASE, Fred Barnes <frmb2@ukc.ac.uk>, 03/01/2001 16 16 */ 17 17 18 + #ifndef _UAPI_LINUX_PPDEV_H 19 + #define _UAPI_LINUX_PPDEV_H 20 + 18 21 #define PP_IOCTL 'p' 19 22 20 23 /* Set mode for read/write (e.g. IEEE1284_MODE_EPP) */ ··· 100 97 /* only masks user-visible flags */ 101 98 #define PP_FLAGMASK (PP_FASTWRITE | PP_FASTREAD | PP_W91284PIC) 102 99 103 - 100 + #endif /* _UAPI_LINUX_PPDEV_H */
+68 -34
include/uapi/misc/habanalabs.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note 2 2 * 3 - * Copyright 2016-2018 HabanaLabs, Ltd. 3 + * Copyright 2016-2019 HabanaLabs, Ltd. 4 4 * All Rights Reserved. 5 5 * 6 6 */ ··· 28 28 29 29 enum goya_queue_id { 30 30 GOYA_QUEUE_ID_DMA_0 = 0, 31 - GOYA_QUEUE_ID_DMA_1, 32 - GOYA_QUEUE_ID_DMA_2, 33 - GOYA_QUEUE_ID_DMA_3, 34 - GOYA_QUEUE_ID_DMA_4, 35 - GOYA_QUEUE_ID_CPU_PQ, 36 - GOYA_QUEUE_ID_MME, /* Internal queues start here */ 37 - GOYA_QUEUE_ID_TPC0, 38 - GOYA_QUEUE_ID_TPC1, 39 - GOYA_QUEUE_ID_TPC2, 40 - GOYA_QUEUE_ID_TPC3, 41 - GOYA_QUEUE_ID_TPC4, 42 - GOYA_QUEUE_ID_TPC5, 43 - GOYA_QUEUE_ID_TPC6, 44 - GOYA_QUEUE_ID_TPC7, 31 + GOYA_QUEUE_ID_DMA_1 = 1, 32 + GOYA_QUEUE_ID_DMA_2 = 2, 33 + GOYA_QUEUE_ID_DMA_3 = 3, 34 + GOYA_QUEUE_ID_DMA_4 = 4, 35 + GOYA_QUEUE_ID_CPU_PQ = 5, 36 + GOYA_QUEUE_ID_MME = 6, /* Internal queues start here */ 37 + GOYA_QUEUE_ID_TPC0 = 7, 38 + GOYA_QUEUE_ID_TPC1 = 8, 39 + GOYA_QUEUE_ID_TPC2 = 9, 40 + GOYA_QUEUE_ID_TPC3 = 10, 41 + GOYA_QUEUE_ID_TPC4 = 11, 42 + GOYA_QUEUE_ID_TPC5 = 12, 43 + GOYA_QUEUE_ID_TPC6 = 13, 44 + GOYA_QUEUE_ID_TPC7 = 14, 45 45 GOYA_QUEUE_ID_SIZE 46 46 }; 47 47 ··· 75 75 HL_DEVICE_STATUS_MALFUNCTION 76 76 }; 77 77 78 - /* Opcode for management ioctl */ 79 - #define HL_INFO_HW_IP_INFO 0 80 - #define HL_INFO_HW_EVENTS 1 81 - #define HL_INFO_DRAM_USAGE 2 82 - #define HL_INFO_HW_IDLE 3 83 - #define HL_INFO_DEVICE_STATUS 4 78 + /* Opcode for management ioctl 79 + * 80 + * HW_IP_INFO - Receive information about different IP blocks in the 81 + * device. 82 + * HL_INFO_HW_EVENTS - Receive an array describing how many times each event 83 + * occurred since the last hard reset. 84 + * HL_INFO_DRAM_USAGE - Retrieve the dram usage inside the device and of the 85 + * specific context. This is relevant only for devices 86 + * where the dram is managed by the kernel driver 87 + * HL_INFO_HW_IDLE - Retrieve information about the idle status of each 88 + * internal engine. 
89 + * HL_INFO_DEVICE_STATUS - Retrieve the device's status. This opcode doesn't 90 + * require an open context. 91 + * HL_INFO_DEVICE_UTILIZATION - Retrieve the total utilization of the device 92 + * over the last period specified by the user. 93 + * The period can be between 100ms to 1s, in 94 + * resolution of 100ms. The return value is a 95 + * percentage of the utilization rate. 96 + * HL_INFO_HW_EVENTS_AGGREGATE - Receive an array describing how many times each 97 + * event occurred since the driver was loaded. 98 + */ 99 + #define HL_INFO_HW_IP_INFO 0 100 + #define HL_INFO_HW_EVENTS 1 101 + #define HL_INFO_DRAM_USAGE 2 102 + #define HL_INFO_HW_IDLE 3 103 + #define HL_INFO_DEVICE_STATUS 4 104 + #define HL_INFO_DEVICE_UTILIZATION 6 105 + #define HL_INFO_HW_EVENTS_AGGREGATE 7 84 106 85 107 #define HL_INFO_VERSION_MAX_LEN 128 86 108 ··· 144 122 __u32 pad; 145 123 }; 146 124 125 + struct hl_info_device_utilization { 126 + __u32 utilization; 127 + __u32 pad; 128 + }; 129 + 147 130 struct hl_info_args { 148 131 /* Location of relevant struct in userspace */ 149 132 __u64 return_pointer; ··· 164 137 /* HL_INFO_* */ 165 138 __u32 op; 166 139 167 - /* Context ID - Currently not in use */ 168 - __u32 ctx_id; 140 + union { 141 + /* Context ID - Currently not in use */ 142 + __u32 ctx_id; 143 + /* Period value for utilization rate (100ms - 1000ms, in 100ms 144 + * resolution. 145 + */ 146 + __u32 period_ms; 147 + }; 148 + 169 149 __u32 pad; 170 150 }; 171 151 ··· 329 295 struct { 330 296 /* 331 297 * Requested virtual address of mapped memory. 332 - * KMD will try to map the requested region to this 333 - * hint address, as long as the address is valid and 334 - * not already mapped. The user should check the 298 + * The driver will try to map the requested region to 299 + * this hint address, as long as the address is valid 300 + * and not already mapped. The user should check the 335 301 * returned address of the IOCTL to make sure he got 336 - * the hint address. 
Passing 0 here means that KMD 337 - * will choose the address itself. 302 + * the hint address. Passing 0 here means that the 303 + * driver will choose the address itself. 338 304 */ 339 305 __u64 hint_addr; 340 306 /* Handle returned from HL_MEM_OP_ALLOC */ ··· 347 313 __u64 host_virt_addr; 348 314 /* 349 315 * Requested virtual address of mapped memory. 350 - * KMD will try to map the requested region to this 351 - * hint address, as long as the address is valid and 352 - * not already mapped. The user should check the 316 + * The driver will try to map the requested region to 317 + * this hint address, as long as the address is valid 318 + * and not already mapped. The user should check the 353 319 * returned address of the IOCTL to make sure he got 354 - * the hint address. Passing 0 here means that KMD 355 - * will choose the address itself. 320 + * the hint address. Passing 0 here means that the 321 + * driver will choose the address itself. 356 322 */ 357 323 __u64 hint_addr; 358 324 /* Size of allocated host memory */ ··· 473 439 #define HL_DEBUG_OP_BMON 4 474 440 /* Opcode for SPMU component */ 475 441 #define HL_DEBUG_OP_SPMU 5 476 - /* Opcode for timestamp */ 442 + /* Opcode for timestamp (deprecated) */ 477 443 #define HL_DEBUG_OP_TIMESTAMP 6 478 444 /* Opcode for setting the device into or out of debug mode. The enable 479 445 * variable should be 1 for enabling debug mode and 0 for disabling it
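The new `HL_INFO_DEVICE_UTILIZATION` opcode reuses the former `ctx_id` slot as `period_ms` via the union added to `struct hl_info_args`, with the documented contract of 100ms to 1s in 100ms resolution. The userspace sketch below mirrors just those fields (the demo struct is a simplification, not the full uapi layout) and encodes that contract as a validity check:

```c
#include <stdint.h>
#include <stdbool.h>

#define HL_INFO_DEVICE_UTILIZATION 6

/* Simplified mirror of the relevant hl_info_args fields. */
struct hl_info_args_demo {
	uint64_t return_pointer;
	uint32_t return_size;
	uint32_t op;
	union {
		uint32_t ctx_id;	/* other opcodes */
		uint32_t period_ms;	/* HL_INFO_DEVICE_UTILIZATION */
	};
	uint32_t pad;
};

/* Documented contract: 100ms - 1000ms, in 100ms resolution. */
static bool hl_period_valid(uint32_t period_ms)
{
	return period_ms >= 100 && period_ms <= 1000 &&
	       period_ms % 100 == 0;
}
```

Because the two names share storage, a caller sets exactly one of them depending on `op`.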
+448
include/uapi/misc/xilinx_sdfec.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ WITH Linux-syscall-note */ 2 + /* 3 + * Xilinx SD-FEC 4 + * 5 + * Copyright (C) 2019 Xilinx, Inc. 6 + * 7 + * Description: 8 + * This driver is developed for SDFEC16 IP. It provides a char device 9 + * in sysfs and supports file operations like open(), close() and ioctl(). 10 + */ 11 + #ifndef __XILINX_SDFEC_H__ 12 + #define __XILINX_SDFEC_H__ 13 + 14 + #include <linux/types.h> 15 + 16 + /* Shared LDPC Tables */ 17 + #define XSDFEC_LDPC_SC_TABLE_ADDR_BASE (0x10000) 18 + #define XSDFEC_LDPC_SC_TABLE_ADDR_HIGH (0x10400) 19 + #define XSDFEC_LDPC_LA_TABLE_ADDR_BASE (0x18000) 20 + #define XSDFEC_LDPC_LA_TABLE_ADDR_HIGH (0x19000) 21 + #define XSDFEC_LDPC_QC_TABLE_ADDR_BASE (0x20000) 22 + #define XSDFEC_LDPC_QC_TABLE_ADDR_HIGH (0x28000) 23 + 24 + /* LDPC tables depth */ 25 + #define XSDFEC_SC_TABLE_DEPTH \ 26 + (XSDFEC_LDPC_SC_TABLE_ADDR_HIGH - XSDFEC_LDPC_SC_TABLE_ADDR_BASE) 27 + #define XSDFEC_LA_TABLE_DEPTH \ 28 + (XSDFEC_LDPC_LA_TABLE_ADDR_HIGH - XSDFEC_LDPC_LA_TABLE_ADDR_BASE) 29 + #define XSDFEC_QC_TABLE_DEPTH \ 30 + (XSDFEC_LDPC_QC_TABLE_ADDR_HIGH - XSDFEC_LDPC_QC_TABLE_ADDR_BASE) 31 + 32 + /** 33 + * enum xsdfec_code - Code Type. 34 + * @XSDFEC_TURBO_CODE: Driver is configured for Turbo mode. 35 + * @XSDFEC_LDPC_CODE: Driver is configured for LDPC mode. 36 + * 37 + * This enum is used to indicate the mode of the driver. The mode is determined 38 + * by checking which codes are set in the driver. Note that the mode cannot be 39 + * changed by the driver. 40 + */ 41 + enum xsdfec_code { 42 + XSDFEC_TURBO_CODE = 0, 43 + XSDFEC_LDPC_CODE, 44 + }; 45 + 46 + /** 47 + * enum xsdfec_order - Order 48 + * @XSDFEC_MAINTAIN_ORDER: Maintain order execution of blocks. 49 + * @XSDFEC_OUT_OF_ORDER: Out-of-order execution of blocks. 50 + * 51 + * This enum is used to indicate whether the order of blocks can change from 52 + * input to output. 
53 +  */ 54 + enum xsdfec_order { 55 + XSDFEC_MAINTAIN_ORDER = 0, 56 + XSDFEC_OUT_OF_ORDER, 57 + }; 58 + 59 + /** 60 + * enum xsdfec_turbo_alg - Turbo Algorithm Type. 61 + * @XSDFEC_MAX_SCALE: Max Log-Map algorithm with extrinsic scaling. When 62 + * no scaling is applied, this is equivalent to the Max Log-Map 63 + * algorithm. 64 + * @XSDFEC_MAX_STAR: Log-Map algorithm. 65 + * @XSDFEC_TURBO_ALG_MAX: Used to indicate out of bound Turbo algorithms. 66 + * 67 + * This enum specifies which Turbo Decode algorithm is in use. 68 + */ 69 + enum xsdfec_turbo_alg { 70 + XSDFEC_MAX_SCALE = 0, 71 + XSDFEC_MAX_STAR, 72 + XSDFEC_TURBO_ALG_MAX, 73 + }; 74 + 75 + /** 76 + * enum xsdfec_state - State. 77 + * @XSDFEC_INIT: Driver is initialized. 78 + * @XSDFEC_STARTED: Driver is started. 79 + * @XSDFEC_STOPPED: Driver is stopped. 80 + * @XSDFEC_NEEDS_RESET: Driver needs to be reset. 81 + * @XSDFEC_PL_RECONFIGURE: Programmable Logic needs to be reconfigured. 82 + * 83 + * This enum is used to indicate the state of the driver. 84 + */ 85 + enum xsdfec_state { 86 + XSDFEC_INIT = 0, 87 + XSDFEC_STARTED, 88 + XSDFEC_STOPPED, 89 + XSDFEC_NEEDS_RESET, 90 + XSDFEC_PL_RECONFIGURE, 91 + }; 92 + 93 + /** 94 + * enum xsdfec_axis_width - AXIS_WIDTH.DIN Setting for 128-bit width. 95 + * @XSDFEC_1x128b: DIN data input stream consists of a 128-bit lane 96 + * @XSDFEC_2x128b: DIN data input stream consists of two 128-bit lanes 97 + * @XSDFEC_4x128b: DIN data input stream consists of four 128-bit lanes 98 + * 99 + * This enum is used to indicate the AXIS_WIDTH.DIN setting for 128-bit width. 100 + * The number of lanes of the DIN data input stream depends upon the 101 + * AXIS_WIDTH.DIN parameter. 102 + */ 103 + enum xsdfec_axis_width { 104 + XSDFEC_1x128b = 1, 105 + XSDFEC_2x128b = 2, 106 + XSDFEC_4x128b = 4, 107 + }; 108 + 109 + /** 110 + * enum xsdfec_axis_word_include - Words Configuration. 
111 + * @XSDFEC_FIXED_VALUE: Fixed, the DIN_WORDS AXI4-Stream interface is removed 112 + * from the IP instance and is driven with the specified 113 + * number of words. 114 + * @XSDFEC_IN_BLOCK: In Block, configures the IP instance to expect a single 115 + * DIN_WORDS value per input code block. The DIN_WORDS 116 + * interface is present. 117 + * @XSDFEC_PER_AXI_TRANSACTION: Per Transaction, configures the IP instance to 118 + * expect one DIN_WORDS value per input transaction on the DIN interface. The 119 + * DIN_WORDS interface is present. 120 + * @XSDFEC_AXIS_WORDS_INCLUDE_MAX: Used to indicate out of bound Words 121 + * Configurations. 122 + * 123 + * This enum is used to specify the DIN_WORDS configuration. 124 + */ 125 + enum xsdfec_axis_word_include { 126 + XSDFEC_FIXED_VALUE = 0, 127 + XSDFEC_IN_BLOCK, 128 + XSDFEC_PER_AXI_TRANSACTION, 129 + XSDFEC_AXIS_WORDS_INCLUDE_MAX, 130 + }; 131 + 132 + /** 133 + * struct xsdfec_turbo - User data for Turbo codes. 134 + * @alg: Specifies which Turbo decode algorithm to use 135 + * @scale: Specifies the extrinsic scaling to apply when the Max Scale algorithm 136 + * has been selected 137 + * 138 + * Turbo code structure to communicate parameters to XSDFEC driver. 139 + */ 140 + struct xsdfec_turbo { 141 + __u32 alg; 142 + __u8 scale; 143 + }; 144 + 145 + /** 146 + * struct xsdfec_ldpc_params - User data for LDPC codes. 
147 + * @n: Number of code word bits 148 + * @k: Number of information bits 149 + * @psize: Size of sub-matrix 150 + * @nlayers: Number of layers in code 151 + * @nqc: Quasi Cyclic Number 152 + * @nmqc: Number of M-sized QC operations in parity check matrix 153 + * @nm: Number of M-size vectors in N 154 + * @norm_type: Normalization required or not 155 + * @no_packing: Determines if multiple QC ops should be performed 156 + * @special_qc: Sub-Matrix property for Circulant weight > 0 157 + * @no_final_parity: Decide if final parity check needs to be performed 158 + * @max_schedule: Experimental code word scheduling limit 159 + * @sc_off: SC offset 160 + * @la_off: LA offset 161 + * @qc_off: QC offset 162 + * @sc_table: Pointer to SC Table which must be page aligned 163 + * @la_table: Pointer to LA Table which must be page aligned 164 + * @qc_table: Pointer to QC Table which must be page aligned 165 + * @code_id: LDPC Code 166 + * 167 + * This structure describes the LDPC code that is passed to the driver by the 168 + * application. 169 + */ 170 + struct xsdfec_ldpc_params { 171 + __u32 n; 172 + __u32 k; 173 + __u32 psize; 174 + __u32 nlayers; 175 + __u32 nqc; 176 + __u32 nmqc; 177 + __u32 nm; 178 + __u32 norm_type; 179 + __u32 no_packing; 180 + __u32 special_qc; 181 + __u32 no_final_parity; 182 + __u32 max_schedule; 183 + __u32 sc_off; 184 + __u32 la_off; 185 + __u32 qc_off; 186 + __u32 *sc_table; 187 + __u32 *la_table; 188 + __u32 *qc_table; 189 + __u16 code_id; 190 + }; 191 + 192 + /** 193 + * struct xsdfec_status - Status of SD-FEC core. 194 + * @state: State of the SD-FEC core 195 + * @activity: Describes if the SD-FEC instance is Active 196 + */ 197 + struct xsdfec_status { 198 + __u32 state; 199 + __s8 activity; 200 + }; 201 + 202 + /** 203 + * struct xsdfec_irq - Enabling or Disabling Interrupts. 
204 + * @enable_isr: If true enables the ISR 205 + * @enable_ecc_isr: If true enables the ECC ISR 206 + */ 207 + struct xsdfec_irq { 208 + __s8 enable_isr; 209 + __s8 enable_ecc_isr; 210 + }; 211 + 212 + /** 213 + * struct xsdfec_config - Configuration of SD-FEC core. 214 + * @code: The codes being used by the SD-FEC instance 215 + * @order: Order of Operation 216 + * @din_width: Width of the DIN AXI4-Stream 217 + * @din_word_include: How DIN_WORDS are inputted 218 + * @dout_width: Width of the DOUT AXI4-Stream 219 + * @dout_word_include: How DOUT_WORDS are outputted 220 + * @irq: Enabling or disabling interrupts 221 + * @bypass: Is the core being bypassed 222 + * @code_wr_protect: Is write protection of LDPC codes enabled 223 + */ 224 + struct xsdfec_config { 225 + __u32 code; 226 + __u32 order; 227 + __u32 din_width; 228 + __u32 din_word_include; 229 + __u32 dout_width; 230 + __u32 dout_word_include; 231 + struct xsdfec_irq irq; 232 + __s8 bypass; 233 + __s8 code_wr_protect; 234 + }; 235 + 236 + /** 237 + * struct xsdfec_stats - Stats retrieved by ioctl XSDFEC_GET_STATS. Used 238 + * to buffer atomic_t variables from struct 239 + * xsdfec_dev. Counts are accumulated until 240 + * the user clears them. 241 + * @isr_err_count: Count of ISR errors 242 + * @cecc_count: Count of Correctable ECC errors (SBE) 243 + * @uecc_count: Count of Uncorrectable ECC errors (MBE) 244 + */ 245 + struct xsdfec_stats { 246 + __u32 isr_err_count; 247 + __u32 cecc_count; 248 + __u32 uecc_count; 249 + }; 250 + 251 + /** 252 + * struct xsdfec_ldpc_param_table_sizes - Used to store sizes of SD-FEC table 253 + * entries for an individual LDPC code 254 + * parameter. 
255 + * @sc_size: Size of SC table used 256 + * @la_size: Size of LA table used 257 + * @qc_size: Size of QC table used 258 + */ 259 + struct xsdfec_ldpc_param_table_sizes { 260 + __u32 sc_size; 261 + __u32 la_size; 262 + __u32 qc_size; 263 + }; 264 + 265 + /* 266 + * XSDFEC IOCTL List 267 + */ 268 + #define XSDFEC_MAGIC 'f' 269 + /** 270 + * DOC: XSDFEC_START_DEV 271 + * 272 + * @Description 273 + * 274 + * ioctl to start SD-FEC core 275 + * 276 + * This fails if the XSDFEC_SET_ORDER ioctl has not been previously called 277 + */ 278 + #define XSDFEC_START_DEV _IO(XSDFEC_MAGIC, 0) 279 + /** 280 + * DOC: XSDFEC_STOP_DEV 281 + * 282 + * @Description 283 + * 284 + * ioctl to stop the SD-FEC core 285 + */ 286 + #define XSDFEC_STOP_DEV _IO(XSDFEC_MAGIC, 1) 287 + /** 288 + * DOC: XSDFEC_GET_STATUS 289 + * 290 + * @Description 291 + * 292 + * ioctl that returns status of SD-FEC core 293 + */ 294 + #define XSDFEC_GET_STATUS _IOR(XSDFEC_MAGIC, 2, struct xsdfec_status) 295 + /** 296 + * DOC: XSDFEC_SET_IRQ 297 + * @Parameters 298 + * 299 + * @struct xsdfec_irq * 300 + * Pointer to the &struct xsdfec_irq that contains the interrupt settings 301 + * for the SD-FEC core 302 + * 303 + * @Description 304 + * 305 + * ioctl to enable or disable irq 306 + */ 307 + #define XSDFEC_SET_IRQ _IOW(XSDFEC_MAGIC, 3, struct xsdfec_irq) 308 + /** 309 + * DOC: XSDFEC_SET_TURBO 310 + * @Parameters 311 + * 312 + * @struct xsdfec_turbo * 313 + * Pointer to the &struct xsdfec_turbo that contains the Turbo decode 314 + * settings for the SD-FEC core 315 + * 316 + * @Description 317 + * 318 + * ioctl that sets the SD-FEC Turbo parameter values 319 + * 320 + * This can only be used when the driver is in the XSDFEC_STOPPED state 321 + */ 322 + #define XSDFEC_SET_TURBO _IOW(XSDFEC_MAGIC, 4, struct xsdfec_turbo) 323 + /** 324 + * DOC: XSDFEC_ADD_LDPC_CODE_PARAMS 325 + * @Parameters 326 + * 327 + * @struct xsdfec_ldpc_params * 328 + * Pointer to the &struct xsdfec_ldpc_params that contains the LDPC code 
329 + parameters to be added to the SD-FEC Block 330 + * 331 + * @Description 332 + * ioctl to add an LDPC code to the SD-FEC LDPC codes 333 + * 334 + * This can only be used when: 335 + * 336 + * - Driver is in the XSDFEC_STOPPED state 337 + * 338 + * - SD-FEC core is configured as LDPC 339 + * 340 + * - SD-FEC Code Write Protection is disabled 341 + */ 342 + #define XSDFEC_ADD_LDPC_CODE_PARAMS \ 343 + _IOW(XSDFEC_MAGIC, 5, struct xsdfec_ldpc_params) 344 + /** 345 + * DOC: XSDFEC_GET_CONFIG 346 + * @Parameters 347 + * 348 + * @struct xsdfec_config * 349 + * Pointer to the &struct xsdfec_config that contains the current 350 + * configuration settings of the SD-FEC Block 351 + * 352 + * @Description 353 + * 354 + * ioctl that returns SD-FEC core configuration 355 + */ 356 + #define XSDFEC_GET_CONFIG _IOR(XSDFEC_MAGIC, 6, struct xsdfec_config) 357 + /** 358 + * DOC: XSDFEC_GET_TURBO 359 + * @Parameters 360 + * 361 + * @struct xsdfec_turbo * 362 + * Pointer to the &struct xsdfec_turbo that contains the current Turbo 363 + * decode settings of the SD-FEC Block 364 + * 365 + * @Description 366 + * 367 + * ioctl that returns SD-FEC turbo param values 368 + */ 369 + #define XSDFEC_GET_TURBO _IOR(XSDFEC_MAGIC, 7, struct xsdfec_turbo) 370 + /** 371 + * DOC: XSDFEC_SET_ORDER 372 + * @Parameters 373 + * 374 + * @struct unsigned long * 375 + * Pointer to the unsigned long that contains a value from the 376 + * @enum xsdfec_order 377 + * 378 + * @Description 379 + * 380 + * ioctl that sets order, if order of blocks can change from input to output 381 + * 382 + * This can only be used when the driver is in the XSDFEC_STOPPED state 383 + */ 384 + #define XSDFEC_SET_ORDER _IOW(XSDFEC_MAGIC, 8, unsigned long) 385 + /** 386 + * DOC: XSDFEC_SET_BYPASS 387 + * @Parameters 388 + * 389 + * @struct bool * 390 + * Pointer to bool that sets the bypass value, where false results in 391 + * normal operation and true results in the SD-FEC performing the 392 + * configured operations (same
number of cycles) but output data matches 393 + * the input data 394 + * 395 + * @Description 396 + * 397 + * ioctl that sets bypass. 398 + * 399 + * This can only be used when the driver is in the XSDFEC_STOPPED state 400 + */ 401 + #define XSDFEC_SET_BYPASS _IOW(XSDFEC_MAGIC, 9, bool) 402 + /** 403 + * DOC: XSDFEC_IS_ACTIVE 404 + * @Parameters 405 + * 406 + * @struct bool * 407 + * Pointer to bool that returns true if the SD-FEC is processing data 408 + * 409 + * @Description 410 + * 411 + * ioctl that determines if SD-FEC is processing data 412 + */ 413 + #define XSDFEC_IS_ACTIVE _IOR(XSDFEC_MAGIC, 10, bool) 414 + /** 415 + * DOC: XSDFEC_CLEAR_STATS 416 + * 417 + * @Description 418 + * 419 + * ioctl that clears error stats collected during interrupts 420 + */ 421 + #define XSDFEC_CLEAR_STATS _IO(XSDFEC_MAGIC, 11) 422 + /** 423 + * DOC: XSDFEC_GET_STATS 424 + * @Parameters 425 + * 426 + * @struct xsdfec_stats * 427 + * Pointer to the &struct xsdfec_stats that will contain the updated stats 428 + * values 429 + * 430 + * @Description 431 + * 432 + * ioctl that returns SD-FEC core stats 433 + * 434 + * This can only be used when the driver is in the XSDFEC_STOPPED state 435 + */ 436 + #define XSDFEC_GET_STATS _IOR(XSDFEC_MAGIC, 12, struct xsdfec_stats) 437 + /** 438 + * DOC: XSDFEC_SET_DEFAULT_CONFIG 439 + * 440 + * @Description 441 + * 442 + * ioctl that returns SD-FEC core to default config, use after a reset 443 + * 444 + * This can only be used when the driver is in the XSDFEC_STOPPED state 445 + */ 446 + #define XSDFEC_SET_DEFAULT_CONFIG _IO(XSDFEC_MAGIC, 13) 447 + 448 + #endif /* __XILINX_SDFEC_H__ */
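The shared-table depth macros near the top of this header are plain address-range differences. As a quick sanity check, reproducing the constants from the hunk in userspace gives depths of 0x400, 0x1000, and 0x8000 for the SC, LA, and QC tables respectively:

```c
#include <stdint.h>

/* Address ranges copied from the header above. */
#define XSDFEC_LDPC_SC_TABLE_ADDR_BASE (0x10000)
#define XSDFEC_LDPC_SC_TABLE_ADDR_HIGH (0x10400)
#define XSDFEC_LDPC_LA_TABLE_ADDR_BASE (0x18000)
#define XSDFEC_LDPC_LA_TABLE_ADDR_HIGH (0x19000)
#define XSDFEC_LDPC_QC_TABLE_ADDR_BASE (0x20000)
#define XSDFEC_LDPC_QC_TABLE_ADDR_HIGH (0x28000)

/* Depth = high address minus base address, exactly as in the header. */
#define XSDFEC_SC_TABLE_DEPTH \
	(XSDFEC_LDPC_SC_TABLE_ADDR_HIGH - XSDFEC_LDPC_SC_TABLE_ADDR_BASE)
#define XSDFEC_LA_TABLE_DEPTH \
	(XSDFEC_LDPC_LA_TABLE_ADDR_HIGH - XSDFEC_LDPC_LA_TABLE_ADDR_BASE)
#define XSDFEC_QC_TABLE_DEPTH \
	(XSDFEC_LDPC_QC_TABLE_ADDR_HIGH - XSDFEC_LDPC_QC_TABLE_ADDR_BASE)
```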
+48 -2
lib/test_firmware.c
··· 18 18 #include <linux/device.h> 19 19 #include <linux/fs.h> 20 20 #include <linux/miscdevice.h> 21 + #include <linux/sizes.h> 21 22 #include <linux/slab.h> 22 23 #include <linux/uaccess.h> 23 24 #include <linux/delay.h> ··· 27 26 28 27 #define TEST_FIRMWARE_NAME "test-firmware.bin" 29 28 #define TEST_FIRMWARE_NUM_REQS 4 29 + #define TEST_FIRMWARE_BUF_SIZE SZ_1K 30 30 31 31 static DEFINE_MUTEX(test_fw_mutex); 32 32 static const struct firmware *test_firmware; ··· 47 45 * test_config - represents configuration for the test for different triggers 48 46 * 49 47 * @name: the name of the firmware file to look for 48 + * @into_buf: when the into_buf is used if this is true 49 + * request_firmware_into_buf() will be used instead. 50 50 * @sync_direct: when the sync trigger is used if this is true 51 51 * request_firmware_direct() will be used instead. 52 52 * @send_uevent: whether or not to send a uevent for async requests ··· 87 83 */ 88 84 struct test_config { 89 85 char *name; 86 + bool into_buf; 90 87 bool sync_direct; 91 88 bool send_uevent; 92 89 u8 num_requests; ··· 181 176 182 177 test_fw_config->num_requests = TEST_FIRMWARE_NUM_REQS; 183 178 test_fw_config->send_uevent = true; 179 + test_fw_config->into_buf = false; 184 180 test_fw_config->sync_direct = false; 185 181 test_fw_config->req_firmware = request_firmware; 186 182 test_fw_config->test_result = 0; ··· 250 244 test_fw_config->send_uevent ? 251 245 "FW_ACTION_HOTPLUG" : 252 246 "FW_ACTION_NOHOTPLUG"); 247 + len += scnprintf(buf+len, PAGE_SIZE - len, 248 + "into_buf:\t\t%s\n", 249 + test_fw_config->into_buf ? "true" : "false"); 253 250 len += scnprintf(buf+len, PAGE_SIZE - len, 254 251 "sync_direct:\t\t%s\n", 255 252 test_fw_config->sync_direct ? 
"true" : "false"); ··· 402 393 } 403 394 static DEVICE_ATTR_RW(config_num_requests); 404 395 396 + static ssize_t config_into_buf_store(struct device *dev, 397 + struct device_attribute *attr, 398 + const char *buf, size_t count) 399 + { 400 + return test_dev_config_update_bool(buf, 401 + count, 402 + &test_fw_config->into_buf); 403 + } 404 + 405 + static ssize_t config_into_buf_show(struct device *dev, 406 + struct device_attribute *attr, 407 + char *buf) 408 + { 409 + return test_dev_config_show_bool(buf, test_fw_config->into_buf); 410 + } 411 + static DEVICE_ATTR_RW(config_into_buf); 412 + 405 413 static ssize_t config_sync_direct_store(struct device *dev, 406 414 struct device_attribute *attr, 407 415 const char *buf, size_t count) ··· 548 522 rc = count; 549 523 } else { 550 524 pr_err("failed to async load firmware\n"); 551 - rc = -ENODEV; 525 + rc = -ENOMEM; 552 526 } 553 527 554 528 out: ··· 611 585 return -EINVAL; 612 586 } 613 587 614 - req->rc = test_fw_config->req_firmware(&req->fw, req->name, req->dev); 588 + if (test_fw_config->into_buf) { 589 + void *test_buf; 590 + 591 + test_buf = kzalloc(TEST_FIRMWARE_BUF_SIZE, GFP_KERNEL); 592 + if (!test_buf) 593 + return -ENOSPC; 594 + 595 + req->rc = request_firmware_into_buf(&req->fw, 596 + req->name, 597 + req->dev, 598 + test_buf, 599 + TEST_FIRMWARE_BUF_SIZE); 600 + if (!req->fw) 601 + kfree(test_buf); 602 + } else { 603 + req->rc = test_fw_config->req_firmware(&req->fw, 604 + req->name, 605 + req->dev); 606 + } 607 + 615 608 if (req->rc) { 616 609 pr_info("#%u: batched sync load failed: %d\n", 617 610 req->idx, req->rc); ··· 894 849 TEST_FW_DEV_ATTR(config), 895 850 TEST_FW_DEV_ATTR(config_name), 896 851 TEST_FW_DEV_ATTR(config_num_requests), 852 + TEST_FW_DEV_ATTR(config_into_buf), 897 853 TEST_FW_DEV_ATTR(config_sync_direct), 898 854 TEST_FW_DEV_ATTR(config_send_uevent), 899 855 TEST_FW_DEV_ATTR(config_read_fw_idx),
+55 -2
tools/testing/selftests/firmware/fw_filesystem.sh
··· 116 116 echo -n $1 > $DIR/config_name 117 117 } 118 118 119 + config_set_into_buf() 120 + { 121 + echo 1 > $DIR/config_into_buf 122 + } 123 + 124 + config_unset_into_buf() 125 + { 126 + echo 0 > $DIR/config_into_buf 127 + } 128 + 119 129 config_set_sync_direct() 120 130 { 121 131 echo 1 > $DIR/config_sync_direct ··· 163 153 164 154 read_firmwares() 165 155 { 166 - if [ "$1" = "xzonly" ]; then 167 - fwfile="${FW}-orig" 156 + if [ "$(cat $DIR/config_into_buf)" == "1" ]; then 157 + fwfile="$FW_INTO_BUF" 168 158 else 169 159 fwfile="$FW" 160 + fi 161 + if [ "$1" = "xzonly" ]; then 162 + fwfile="${fwfile}-orig" 170 163 fi 171 164 for i in $(seq 0 3); do 172 165 config_set_read_fw_idx $i ··· 201 188 echo -n "Batched request_firmware() nofile try #$1: " 202 189 config_reset 203 190 config_set_name nope-test-firmware.bin 191 + config_trigger_sync 192 + read_firmwares_expect_nofile 193 + release_all_firmware 194 + echo "OK" 195 + } 196 + 197 + test_batched_request_firmware_into_buf_nofile() 198 + { 199 + echo -n "Batched request_firmware_into_buf() nofile try #$1: " 200 + config_reset 201 + config_set_name nope-test-firmware.bin 202 + config_set_into_buf 204 203 config_trigger_sync 205 204 read_firmwares_expect_nofile 206 205 release_all_firmware ··· 284 259 echo "OK" 285 260 } 286 261 262 + test_batched_request_firmware_into_buf() 263 + { 264 + echo -n "Batched request_firmware_into_buf() $2 try #$1: " 265 + config_reset 266 + config_set_name $TEST_FIRMWARE_INTO_BUF_FILENAME 267 + config_set_into_buf 268 + config_trigger_sync 269 + read_firmwares $2 270 + release_all_firmware 271 + echo "OK" 272 + } 273 + 287 274 test_batched_request_firmware_direct() 288 275 { 289 276 echo -n "Batched request_firmware_direct() $2 try #$1: " ··· 345 308 done 346 309 347 310 for i in $(seq 1 5); do 311 + test_batched_request_firmware_into_buf $i normal 312 + done 313 + 314 + for i in $(seq 1 5); do 348 315 test_batched_request_firmware_direct $i normal 349 316 done 350 317 ··· 366 325 
echo "Testing with the file missing..." 367 326 for i in $(seq 1 5); do 368 327 test_batched_request_firmware_nofile $i 328 + done 329 + 330 + for i in $(seq 1 5); do 331 + test_batched_request_firmware_into_buf_nofile $i 369 332 done 370 333 371 334 for i in $(seq 1 5); do ··· 396 351 done 397 352 398 353 for i in $(seq 1 5); do 354 + test_batched_request_firmware_into_buf $i both 355 + done 356 + 357 + for i in $(seq 1 5); do 399 358 test_batched_request_firmware_direct $i both 400 359 done 401 360 ··· 417 368 echo "Testing with only xz file present..." 418 369 for i in $(seq 1 5); do 419 370 test_batched_request_firmware $i xzonly 371 + done 372 + 373 + for i in $(seq 1 5); do 374 + test_batched_request_firmware_into_buf $i xzonly 420 375 done 421 376 422 377 for i in $(seq 1 5); do
+11
tools/testing/selftests/firmware/fw_lib.sh
··· 9 9 PROC_CONFIG="/proc/config.gz" 10 10 TEST_DIR=$(dirname $0) 11 11 12 + # We need to load a different file to test request_firmware_into_buf 13 + # I believe the issue is firmware loaded cached vs. non-cached 14 + # with same filename is bungled. 15 + # To reproduce rename this to test-firmware.bin 16 + TEST_FIRMWARE_INTO_BUF_FILENAME=test-firmware-into-buf.bin 17 + 12 18 # Kselftest framework requirement - SKIP code is 4. 13 19 ksft_skip=4 14 20 ··· 114 108 FWPATH=$(mktemp -d) 115 109 FW="$FWPATH/test-firmware.bin" 116 110 echo "ABCD0123" >"$FW" 111 + FW_INTO_BUF="$FWPATH/$TEST_FIRMWARE_INTO_BUF_FILENAME" 112 + echo "EFGH4567" >"$FW_INTO_BUF" 117 113 NAME=$(basename "$FW") 118 114 if [ "$TEST_REQS_FW_SET_CUSTOM_PATH" = "yes" ]; then 119 115 echo -n "$FWPATH" >/sys/module/firmware_class/parameters/path ··· 182 174 fi 183 175 if [ -f $FW ]; then 184 176 rm -f "$FW" 177 + fi 178 + if [ -f $FW_INTO_BUF ]; then 179 + rm -f "$FW_INTO_BUF" 185 180 fi 186 181 if [ -d $FWPATH ]; then 187 182 rm -rf "$FWPATH"