Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
"Here is the "big" char and misc driver patches for 4.18-rc1.

It's not a lot of stuff here, but there are some highlights:

- coreboot driver updates

- soundwire driver updates

- android binder updates

- fpga big sync, mostly documentation

- lots of minor driver updates

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-4.18-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (81 commits)
vmw_balloon: fixing double free when batching mode is off
MAINTAINERS: Add driver-api/fpga path
fpga: clarify that unregister functions also free
documentation: fpga: move fpga-region.txt to driver-api
documentation: fpga: add bridge document to driver-api
documentation: fpga: move fpga-mgr.txt to driver-api
Documentation: fpga: move fpga overview to driver-api
fpga: region: kernel-doc fixes
fpga: bridge: kernel-doc fixes
fpga: mgr: kernel-doc fixes
fpga: use SPDX
fpga: region: change api, add fpga_region_create/free
fpga: bridge: change api, don't use drvdata
fpga: manager: change api, don't use drvdata
fpga: region: don't use drvdata in common fpga code
Drivers: hv: vmbus: Removed an unnecessary cast from void *
ver_linux: Drop redundant calls to system() to test if file is readable
ver_linux: Move stderr redirection from function parameter to function body
misc: IBM Virtual Management Channel Driver (VMC)
rpmsg: Correct support for MODULE_DEVICE_TABLE()
...

+8983 -1236
+20 -20
Documentation/ABI/stable/sysfs-bus-vmbus
···
- What:          /sys/bus/vmbus/devices/vmbus_*/id
+ What:          /sys/bus/vmbus/devices/<UUID>/id
  Date:          Jul 2009
  KernelVersion: 2.6.31
  Contact:       K. Y. Srinivasan <kys@microsoft.com>
  Description:   The VMBus child_relid of the device's primary channel
  Users:         tools/hv/lsvmbus

- What:          /sys/bus/vmbus/devices/vmbus_*/class_id
+ What:          /sys/bus/vmbus/devices/<UUID>/class_id
  Date:          Jul 2009
  KernelVersion: 2.6.31
  Contact:       K. Y. Srinivasan <kys@microsoft.com>
  Description:   The VMBus interface type GUID of the device
  Users:         tools/hv/lsvmbus

- What:          /sys/bus/vmbus/devices/vmbus_*/device_id
+ What:          /sys/bus/vmbus/devices/<UUID>/device_id
  Date:          Jul 2009
  KernelVersion: 2.6.31
  Contact:       K. Y. Srinivasan <kys@microsoft.com>
  Description:   The VMBus interface instance GUID of the device
  Users:         tools/hv/lsvmbus

- What:          /sys/bus/vmbus/devices/vmbus_*/channel_vp_mapping
+ What:          /sys/bus/vmbus/devices/<UUID>/channel_vp_mapping
  Date:          Jul 2015
  KernelVersion: 4.2.0
  Contact:       K. Y. Srinivasan <kys@microsoft.com>
···
  Format:        <channel's child_relid:the bound cpu's number>
  Users:         tools/hv/lsvmbus

- What:          /sys/bus/vmbus/devices/vmbus_*/device
+ What:          /sys/bus/vmbus/devices/<UUID>/device
  Date:          Dec. 2015
  KernelVersion: 4.5
  Contact:       K. Y. Srinivasan <kys@microsoft.com>
  Description:   The 16 bit device ID of the device
  Users:         tools/hv/lsvmbus and user level RDMA libraries

- What:          /sys/bus/vmbus/devices/vmbus_*/vendor
+ What:          /sys/bus/vmbus/devices/<UUID>/vendor
  Date:          Dec. 2015
  KernelVersion: 4.5
  Contact:       K. Y. Srinivasan <kys@microsoft.com>
  Description:   The 16 bit vendor ID of the device
  Users:         tools/hv/lsvmbus and user level RDMA libraries

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Directory for per-channel information
                 NN is the VMBUS relid associated with the channel.

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/cpu
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/cpu
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   VCPU (sub)channel is affinitized to
  Users:         tools/hv/lsvmbus and other debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/in_mask
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/in_mask
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Host to guest channel interrupt mask
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/latency
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/latency
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Channel signaling latency
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/out_mask
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/out_mask
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Guest to host channel interrupt mask
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/pending
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/pending
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Channel interrupt pending state
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/read_avail
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/read_avail
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Bytes available to read
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/write_avail
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/write_avail
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Bytes available to write
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/events
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/events
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Number of times we have signaled the host
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/interrupts
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/interrupts
  Date:          September 2017
  KernelVersion: 4.14
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Number of times we have taken an interrupt (incoming)
  Users:         Debugging tools

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/subchannel_id
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/subchannel_id
  Date:          January 2018
  KernelVersion: 4.16
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Subchannel ID associated with VMBUS channel
  Users:         Debugging tools and userspace drivers

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/monitor_id
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/monitor_id
  Date:          January 2018
  KernelVersion: 4.16
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
  Description:   Monitor bit associated with channel
  Users:         Debugging tools and userspace drivers

- What:          /sys/bus/vmbus/devices/vmbus_*/channels/NN/ring
+ What:          /sys/bus/vmbus/devices/<UUID>/channels/<N>/ring
  Date:          January 2018
  KernelVersion: 4.16
  Contact:       Stephen Hemminger <sthemmin@microsoft.com>
+29
Documentation/devicetree/bindings/fpga/lattice-machxo2-spi.txt
···
Lattice MachXO2 Slave SPI FPGA Manager

Lattice MachXO2 FPGAs support a method of loading the bitstream over
'slave SPI' interface.

See 'MachXO2ProgrammingandConfigurationUsageGuide.pdf' on www.latticesemi.com

Required properties:
- compatible: should contain "lattice,machxo2-slave-spi"
- reg: spi chip select of the FPGA

Example for full FPGA configuration:

	fpga-region0 {
		compatible = "fpga-region";
		fpga-mgr = <&fpga_mgr_spi>;
		#address-cells = <0x1>;
		#size-cells = <0x1>;
	};

	spi1: spi@2000 {
		...

		fpga_mgr_spi: fpga-mgr@0 {
			compatible = "lattice,machxo2-slave-spi";
			spi-max-frequency = <8000000>;
			reg = <0>;
		};
	};
+40
Documentation/devicetree/bindings/nvmem/zii,rave-sp-eeprom.txt
···
Zodiac Inflight Innovations RAVE EEPROM Bindings

RAVE SP EEPROM device is a "MFD cell" device exposing physical EEPROM
attached to RAVE Supervisory Processor. It is expected that its Device
Tree node is specified as a child of the node corresponding to the
parent RAVE SP device (as documented in
Documentation/devicetree/bindings/mfd/zii,rave-sp.txt)

Required properties:

- compatible: Should be "zii,rave-sp-eeprom"

Optional properties:

- zii,eeprom-name: Unique EEPROM identifier describing its function in the
  system. Will be used as created NVMEM device's name.

Data cells:

Data cells are child nodes of the eeprom node, bindings for which are
documented in Documentation/devicetree/bindings/nvmem/nvmem.txt

Example:

	rave-sp {
		compatible = "zii,rave-sp-rdu1";
		current-speed = <38400>;

		eeprom@a4 {
			compatible = "zii,rave-sp-eeprom";
			reg = <0xa4 0x4000>;
			#address-cells = <1>;
			#size-cells = <1>;
			zii,eeprom-name = "main-eeprom";

			wdt_timeout: wdt-timeout@81 {
				reg = <0x81 2>;
			};
		};
	};
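Each NVMEM data cell names a byte range inside the EEPROM: the wdt-timeout cell in the example covers two bytes at offset 0x81 (reg = <0x81 2>). As a purely illustrative sketch of that addressing (the function below is invented for this example; in the kernel the nvmem core performs the equivalent lookup for consumers):

```python
def read_cell(eeprom: bytes, offset: int, length: int) -> bytes:
    """Return the bytes an NVMEM cell with reg = <offset length> covers.

    Illustration only: models the cell addressing described in the binding,
    not the kernel's nvmem implementation.
    """
    if offset < 0 or length < 0 or offset + length > len(eeprom):
        raise ValueError("cell out of range of the EEPROM")
    return eeprom[offset:offset + length]
```

For a 0x4000-byte EEPROM, `read_cell(image, 0x81, 2)` returns the two watchdog-timeout bytes the cell describes.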
+49
Documentation/driver-api/fpga/fpga-bridge.rst
···
FPGA Bridge
===========

API to implement a new FPGA bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. kernel-doc:: include/linux/fpga/fpga-bridge.h
   :functions: fpga_bridge

.. kernel-doc:: include/linux/fpga/fpga-bridge.h
   :functions: fpga_bridge_ops

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_create

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_free

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_register

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_unregister

API to control an FPGA bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

You probably won't need these directly. FPGA regions should handle this.

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: of_fpga_bridge_get

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_get

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_put

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_get_to_list

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: of_fpga_bridge_get_to_list

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_enable

.. kernel-doc:: drivers/fpga/fpga-bridge.c
   :functions: fpga_bridge_disable
+220
Documentation/driver-api/fpga/fpga-mgr.rst
···
FPGA Manager
============

Overview
--------

The FPGA manager core exports a set of functions for programming an FPGA with
an image. The API is manufacturer agnostic. All manufacturer specifics are
hidden away in a low level driver which registers a set of ops with the core.
The FPGA image data itself is very manufacturer specific, but for our purposes
it's just binary data. The FPGA manager core won't parse it.

The FPGA image to be programmed can be in a scatter gather list, a single
contiguous buffer, or a firmware file. Because allocating contiguous kernel
memory for the buffer should be avoided, users are encouraged to use a scatter
gather list instead if possible.

The particulars for programming the image are presented in a structure (struct
fpga_image_info). This struct contains parameters such as pointers to the
FPGA image as well as image-specific particulars such as whether the image was
built for full or partial reconfiguration.

How to support a new FPGA device
--------------------------------

To add another FPGA manager, write a driver that implements a set of ops. The
probe function calls fpga_mgr_register(), such as::

	static const struct fpga_manager_ops socfpga_fpga_ops = {
		.write_init = socfpga_fpga_ops_configure_init,
		.write = socfpga_fpga_ops_configure_write,
		.write_complete = socfpga_fpga_ops_configure_complete,
		.state = socfpga_fpga_ops_state,
	};

	static int socfpga_fpga_probe(struct platform_device *pdev)
	{
		struct device *dev = &pdev->dev;
		struct socfpga_fpga_priv *priv;
		struct fpga_manager *mgr;
		int ret;

		priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
		if (!priv)
			return -ENOMEM;

		/*
		 * do ioremaps, get interrupts, etc. and save
		 * them in priv
		 */

		mgr = fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager",
				      &socfpga_fpga_ops, priv);
		if (!mgr)
			return -ENOMEM;

		platform_set_drvdata(pdev, mgr);

		ret = fpga_mgr_register(mgr);
		if (ret)
			fpga_mgr_free(mgr);

		return ret;
	}

	static int socfpga_fpga_remove(struct platform_device *pdev)
	{
		struct fpga_manager *mgr = platform_get_drvdata(pdev);

		fpga_mgr_unregister(mgr);

		return 0;
	}

The ops will implement whatever device specific register writes are needed to
do the programming sequence for this particular FPGA. These ops return 0 for
success or negative error codes otherwise.

The programming sequence is::

	1. .write_init
	2. .write or .write_sg (may be called once or multiple times)
	3. .write_complete

The .write_init function will prepare the FPGA to receive the image data. The
buffer passed into .write_init will be at most .initial_header_size bytes
long; if the whole bitstream is not immediately available then the core code
will buffer up at least this much before starting.

The .write function writes a buffer to the FPGA. The buffer may contain the
whole FPGA image or may be a smaller chunk of an FPGA image. In the latter
case, this function is called multiple times for successive chunks. This
interface is suitable for drivers which use PIO.

The .write_sg version behaves the same as .write except the input is a
sg_table scatter list. This interface is suitable for drivers which use DMA.

The .write_complete function is called after all the image has been written
to put the FPGA into operating mode.

The ops include a .state function which will read the hardware FPGA manager
and return a code of type enum fpga_mgr_states. It doesn't result in a change
in hardware state.

How to write an image buffer to a supported FPGA
------------------------------------------------

Some sample code::

	#include <linux/fpga/fpga-mgr.h>

	struct fpga_manager *mgr;
	struct fpga_image_info *info;
	int ret;

	/*
	 * Get a reference to FPGA manager. The manager is not locked, so you can
	 * hold onto this reference without it preventing programming.
	 *
	 * This example uses the device node of the manager. Alternatively, use
	 * fpga_mgr_get(dev) instead if you have the device.
	 */
	mgr = of_fpga_mgr_get(mgr_node);

	/* struct with information about the FPGA image to program. */
	info = fpga_image_info_alloc(dev);

	/* flags indicates whether to do full or partial reconfiguration */
	info->flags = FPGA_MGR_PARTIAL_RECONFIG;

	/*
	 * At this point, indicate where the image is. This is pseudo-code; you're
	 * going to use one of these three.
	 */
	if (image is in a scatter gather table) {

		info->sgt = [your scatter gather table]

	} else if (image is in a buffer) {

		info->buf = [your image buffer]
		info->count = [image buffer size]

	} else if (image is in a firmware file) {

		info->firmware_name = devm_kstrdup(dev, firmware_name, GFP_KERNEL);

	}

	/* Get exclusive control of FPGA manager */
	ret = fpga_mgr_lock(mgr);

	/* Load the image to the FPGA */
	ret = fpga_mgr_load(mgr, info);

	/* Release the FPGA manager */
	fpga_mgr_unlock(mgr);
	fpga_mgr_put(mgr);

	/* Deallocate the image info if you're done with it */
	fpga_image_info_free(info);

API for implementing a new FPGA Manager driver
----------------------------------------------

.. kernel-doc:: include/linux/fpga/fpga-mgr.h
   :functions: fpga_manager

.. kernel-doc:: include/linux/fpga/fpga-mgr.h
   :functions: fpga_manager_ops

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_create

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_free

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_register

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_unregister

API for programming an FPGA
---------------------------

.. kernel-doc:: include/linux/fpga/fpga-mgr.h
   :functions: fpga_image_info

.. kernel-doc:: include/linux/fpga/fpga-mgr.h
   :functions: fpga_mgr_states

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_image_info_alloc

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_image_info_free

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: of_fpga_mgr_get

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_get

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_put

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_lock

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_unlock

Note - use :c:func:`fpga_region_program_fpga()` instead of
:c:func:`fpga_mgr_load()`

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_mgr_load
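The three-step .write_init / .write / .write_complete sequence described in fpga-mgr.rst can be modeled outside the kernel. The Python sketch below is only an illustration of the calling order the core follows (buffer a header of .initial_header_size bytes for .write_init, stream PIO chunks through .write, finish with .write_complete); the names `program_fpga` and `RecordingOps` are invented for this example.

```python
def program_fpga(ops, image, initial_header_size=16, chunk=4096):
    """Drive FPGA-manager-style ops in the documented order.

    `ops` is any object with write_init(header), write(buf) and
    write_complete() methods returning 0 on success, mirroring the kernel
    convention of 0 / negative error codes.  Toy model, not kernel code.
    """
    ret = ops.write_init(image[:initial_header_size])  # 1. prepare the device
    if ret:
        return ret
    for off in range(0, len(image), chunk):            # 2. PIO-style chunked writes
        ret = ops.write(image[off:off + chunk])
        if ret:
            return ret
    return ops.write_complete()                        # 3. enter operating mode

class RecordingOps:
    """Fake ops that record the call order, for demonstration."""
    def __init__(self):
        self.calls = []
    def write_init(self, header):
        self.calls.append(("write_init", len(header)))
        return 0
    def write(self, buf):
        self.calls.append(("write", len(buf)))
        return 0
    def write_complete(self):
        self.calls.append(("write_complete",))
        return 0
```

Running `program_fpga(RecordingOps(), bytes(10000))` shows one header-sized .write_init call, several .write chunks, and a final .write_complete, matching the documented sequence.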
+102
Documentation/driver-api/fpga/fpga-region.rst
···
FPGA Region
===========

Overview
--------

This document is meant to be a brief overview of the FPGA region API usage. A
more conceptual look at regions can be found in the Device Tree binding
document [#f1]_.

For the purposes of this API document, let's just say that a region associates
an FPGA Manager and a bridge (or bridges) with a reprogrammable region of an
FPGA or the whole FPGA. The API provides a way to register a region and to
program a region.

Currently the only layer above fpga-region.c in the kernel is the Device Tree
support (of-fpga-region.c) described in [#f1]_. The DT support layer uses
regions to program the FPGA and then DT to handle enumeration. The common
region code is intended to be used by other schemes that have other ways of
accomplishing enumeration after programming.

An fpga-region can be set up to know the following things:

* which FPGA manager to use to do the programming

* which bridges to disable before programming and enable afterwards.

Additional info needed to program the FPGA image is passed in the struct
fpga_image_info including:

* pointers to the image as either a scatter-gather buffer, a contiguous
  buffer, or the name of firmware file

* flags indicating specifics such as whether the image is for partial
  reconfiguration.

How to program an FPGA using a region
-------------------------------------

First, allocate the info struct::

	info = fpga_image_info_alloc(dev);
	if (!info)
		return -ENOMEM;

Set flags as needed, i.e.::

	info->flags |= FPGA_MGR_PARTIAL_RECONFIG;

Point to your FPGA image, such as::

	info->sgt = &sgt;

Add info to region and do the programming::

	region->info = info;
	ret = fpga_region_program_fpga(region);

:c:func:`fpga_region_program_fpga()` operates on info passed in the
fpga_image_info (region->info). This function will attempt to:

* lock the region's mutex
* lock the region's FPGA manager
* build a list of FPGA bridges if a method has been specified to do so
* disable the bridges
* program the FPGA
* re-enable the bridges
* release the locks

Then you will want to enumerate whatever hardware has appeared in the FPGA.

How to add a new FPGA region
----------------------------

An example of usage can be seen in the probe function of [#f2]_.

.. [#f1] ../devicetree/bindings/fpga/fpga-region.txt
.. [#f2] ../../drivers/fpga/of-fpga-region.c

API to program an FPGA
----------------------

.. kernel-doc:: drivers/fpga/fpga-region.c
   :functions: fpga_region_program_fpga

API to add a new FPGA region
----------------------------

.. kernel-doc:: include/linux/fpga/fpga-region.h
   :functions: fpga_region

.. kernel-doc:: drivers/fpga/fpga-region.c
   :functions: fpga_region_create

.. kernel-doc:: drivers/fpga/fpga-region.c
   :functions: fpga_region_free

.. kernel-doc:: drivers/fpga/fpga-region.c
   :functions: fpga_region_register

.. kernel-doc:: drivers/fpga/fpga-region.c
   :functions: fpga_region_unregister
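The lock / disable-bridges / program / re-enable / unlock ordering that fpga-region.rst attributes to fpga_region_program_fpga() can be sketched as a toy model. Everything below (`Region`, `Manager`, `Bridge` and their methods) is invented for illustration; it mirrors the documented ordering, not the kernel implementation.

```python
from contextlib import contextmanager
import threading

class Bridge:
    """Illustrative stand-in for an FPGA bridge."""
    def __init__(self, name):
        self.name = name
        self.enabled = True
    def disable(self):
        self.enabled = False
    def enable(self):
        self.enabled = True

class Manager:
    """Illustrative stand-in for an FPGA manager."""
    def __init__(self):
        self._lock = threading.Lock()
        self.programmed = None
    @contextmanager
    def exclusive(self):          # models fpga_mgr_lock()/fpga_mgr_unlock()
        with self._lock:
            yield
    def load(self, info):
        self.programmed = info

class Region:
    """Toy model of the fpga_region_program_fpga() ordering."""
    def __init__(self, mgr, bridges):
        self._mutex = threading.Lock()   # the region's mutex
        self.mgr = mgr
        self.bridges = bridges
        self.log = []
    def program(self, info):
        with self._mutex:                # lock the region's mutex
            with self.mgr.exclusive():   # lock the region's FPGA manager
                for b in self.bridges:   # disable bridges before programming
                    b.disable()
                    self.log.append(("disable", b.name))
                self.mgr.load(info)      # program the FPGA
                self.log.append(("program",))
                for b in self.bridges:   # re-enable the bridges afterwards
                    b.enable()
                    self.log.append(("enable", b.name))
        return 0                         # locks released on scope exit
```

The recorded log makes the invariant visible: every bridge is disabled before programming and re-enabled after, all under both locks.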
+13
Documentation/driver-api/fpga/index.rst
···
==============
FPGA Subsystem
==============

:Author: Alan Tull

.. toctree::
   :maxdepth: 2

   intro
   fpga-mgr
   fpga-bridge
   fpga-region
+54
Documentation/driver-api/fpga/intro.rst
···
Introduction
============

The FPGA subsystem supports reprogramming FPGAs dynamically under
Linux. Some of the core intentions of the FPGA subsystem are:

* The FPGA subsystem is vendor agnostic.

* The FPGA subsystem separates upper layers (userspace interfaces and
  enumeration) from lower layers that know how to program a specific
  FPGA.

* Code should not be shared between upper and lower layers. This
  should go without saying. If that seems necessary, there's probably
  framework functionality that can be added that will benefit
  other users. Write the linux-fpga mailing list and maintainers and
  seek out a solution that expands the framework for broad reuse.

* Generally, when adding code, think of the future. Plan for re-use.

The framework in the kernel is divided into:

FPGA Manager
------------

If you are adding a new FPGA or a new method of programming an FPGA,
this is the subsystem for you. Low level FPGA manager drivers contain
the knowledge of how to program a specific device. This subsystem
includes the framework in fpga-mgr.c and the low level drivers that
are registered with it.

FPGA Bridge
-----------

FPGA Bridges prevent spurious signals from going out of an FPGA or a
region of an FPGA during programming. They are disabled before
programming begins and re-enabled afterwards. An FPGA bridge may be
actual hard hardware that gates a bus to a CPU, or a soft ("freeze")
bridge in FPGA fabric that surrounds a partial reconfiguration region
of an FPGA. This subsystem includes fpga-bridge.c and the low level
drivers that are registered with it.

FPGA Region
-----------

If you are adding a new interface to the FPGA framework, add it on top
of an FPGA region to allow the most reuse of your interface.

The FPGA Region framework (fpga-region.c) associates managers and
bridges as reconfigurable regions. A region may refer to the whole
FPGA in full reconfiguration or to a partial reconfiguration region.

The Device Tree FPGA Region support (of-fpga-region.c) handles
reprogramming FPGAs when device tree overlays are applied.
+1
Documentation/driver-api/index.rst
···
   dmaengine/index
   slimbus
   soundwire/index
+  fpga/index

.. only:: subproject and html
+65
Documentation/driver-api/soundwire/error_handling.rst
···
========================
SoundWire Error Handling
========================

The SoundWire PHY was designed with care and errors on the bus are going to
be very unlikely, and if they happen it should be limited to single bit
errors. Examples of this design can be found in the synchronization
mechanism (sync loss after two errors) and short CRCs used for the Bulk
Register Access.

The errors can be detected with multiple mechanisms:

1. Bus clash or parity errors: This mechanism relies on low-level detectors
   that are independent of the payload and usages, and they cover both control
   and audio data. The current implementation only logs such errors.
   Improvements could be invalidating an entire programming sequence and
   restarting from a known position. In the case of such errors outside of a
   control/command sequence, there is no concealment or recovery for audio
   data enabled by the SoundWire protocol, the location of the error will also
   impact its audibility (most-significant bits will be more impacted in PCM),
   and after a number of such errors are detected the bus might be reset. Note
   that bus clashes due to programming errors (two streams using the same bit
   slots) or electrical issues during the transmit/receive transition cannot
   be distinguished, although a recurring bus clash when audio is enabled is
   an indication of a bus allocation issue. The interrupt mechanism can also
   help identify Slaves which detected a Bus Clash or a Parity Error, but they
   may not be responsible for the errors so resetting them individually is not
   a viable recovery strategy.

2. Command status: Each command is associated with a status, which only
   covers transmission of the data between devices. The ACK status indicates
   that the command was received and will be executed by the end of the
   current frame. A NAK indicates that the command was in error and will not
   be applied. In case of a bad programming (command sent to non-existent
   Slave or to a non-implemented register) or electrical issue, no response
   signals the command was ignored. Some Master implementations allow for a
   command to be retransmitted several times. If the retransmission fails,
   backtracking and restarting the entire programming sequence might be a
   solution. Alternatively some implementations might directly issue a bus
   reset and re-enumerate all devices.

3. Timeouts: In a number of cases such as ChannelPrepare or
   ClockStopPrepare, the bus driver is supposed to poll a register field until
   it transitions to a NotFinished value of zero. The MIPI SoundWire spec 1.1
   does not define timeouts but the MIPI SoundWire DisCo document adds
   recommendations on timeouts. If such configurations do not complete, the
   driver will return -ETIMEDOUT. Such timeouts are symptoms of a faulty
   Slave device and are likely impossible to recover from.

Errors during global reconfiguration sequences are extremely difficult to
handle:

1. BankSwitch: An error during the last command issuing a BankSwitch is
   difficult to backtrack from. Retransmitting the Bank Switch command may be
   possible in a single segment setup, but this can lead to synchronization
   problems when enabling multiple bus segments (a command with side effects
   such as frame reconfiguration would be handled at different times). A
   global hard-reset might be the best solution.

Note that SoundWire does not provide a mechanism to detect illegal values
written in valid registers. In a number of cases the standard even mentions
that the Slave might behave in implementation-defined ways. The bus
implementation does not provide a recovery mechanism for such errors; Slave
or Master driver implementers are responsible for writing valid values in
valid registers and implementing additional range checking if needed.
+3
Documentation/driver-api/soundwire/index.rst
···
   :maxdepth: 1

   summary
+  stream
+  error_handling
+  locking

.. only:: subproject
+106
Documentation/driver-api/soundwire/locking.rst
···
=================
SoundWire Locking
=================

This document explains the locking mechanism of the SoundWire bus. The bus
uses the following locks in order to avoid race conditions in bus operations
on shared resources.

- Bus lock

- Message lock

Bus lock
========

The SoundWire bus lock is a mutex and is part of the bus data structure
(sdw_bus) which is used for every bus instance. This lock is used to
serialize each of the following operation(s) within a SoundWire bus instance.

- Addition and removal of Slave(s), changing Slave status.

- Prepare, Enable, Disable and De-prepare stream operations.

- Access of Stream data structure.

Message lock
============

SoundWire message transfer lock. This mutex is part of the bus data
structure (sdw_bus). This lock is used to serialize the message transfers
(read/write) within a SoundWire bus instance.

The examples below show how the locks are acquired.

Example 1
---------

Message transfer.

1. For every message transfer

   a. Acquire Message lock.

   b. Transfer message (Read/Write) to Slave1 or broadcast message on
      Bus in case of bank switch.

   c. Release Message lock ::

	+----------+                    +---------+
	|          |                    |         |
	|   Bus    |                    | Master  |
	|          |                    | Driver  |
	|          |                    |         |
	+----+-----+                    +----+----+
	     |                               |
	     |     bus->ops->xfer_msg()      |
	     <-------------------------------+   a. Acquire Message lock
	     |                               |   b. Transfer message
	     |                               |
	     +------------------------------->   c. Release Message lock
	     |     return success/error      |   d. Return success/error
	     |                               |
	     +                               +

Example 2
---------

Prepare operation.

1. Acquire lock for Bus instance associated with Master 1.

2. For every message transfer in Prepare operation

   a. Acquire Message lock.

   b. Transfer message (Read/Write) to Slave1 or broadcast message on
      Bus in case of bank switch.

   c. Release Message lock.

3. Release lock for Bus instance associated with Master 1 ::

	+----------+                    +---------+
	|          |                    |         |
	|   Bus    |                    | Master  |
	|          |                    | Driver  |
	|          |                    |         |
	+----+-----+                    +----+----+
	     |                               |
	     |     sdw_prepare_stream()      |
	     <-------------------------------+   1. Acquire bus lock
	     |                               |   2. Perform stream prepare
	     |                               |
	     |     bus->ops->xfer_msg()      |
	     <-------------------------------+   a. Acquire Message lock
	     |                               |   b. Transfer message
	     |                               |
	     +------------------------------->   c. Release Message lock
	     |     return success/error      |   d. Return success/error
	     |                               |
	     |     return success/error      |   3. Release bus lock
	     +------------------------------->   4. Return success/error
	     |                               |
	     +                               +
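The two-level acquisition shown in the locking examples above (a per-bus lock held for the whole stream operation, with the message lock taken and released around each individual transfer) can be modeled with ordinary mutexes. The `SdwBus` class below is purely illustrative; its names echo sdw_bus but it is not kernel code.

```python
import threading

class SdwBus:
    """Toy model of sdw_bus locking: a bus lock serializing whole stream
    operations and a message lock serializing individual transfers."""
    def __init__(self):
        self.bus_lock = threading.Lock()
        self.msg_lock = threading.Lock()
        self.log = []

    def xfer_msg(self, msg):
        with self.msg_lock:            # Example 1: message lock only
            self.log.append(("xfer", msg))
            return 0

    def prepare_stream(self, msgs):
        with self.bus_lock:            # Example 2: bus lock around the whole op
            self.log.append(("prepare-start",))
            for m in msgs:
                self.xfer_msg(m)       # each transfer takes the message lock
            self.log.append(("prepare-end",))
            return 0
```

Because the two locks are distinct, a transfer issued inside a prepare operation never deadlocks against the outer bus lock, while concurrent transfers on the same bus still serialize against each other.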
+372
Documentation/driver-api/soundwire/stream.rst
··· 1 + ========================= 2 + Audio Stream in SoundWire 3 + ========================= 4 + 5 + An audio stream is a logical or virtual connection created between 6 + 7 + (1) System memory buffer(s) and Codec(s) 8 + 9 + (2) DSP memory buffer(s) and Codec(s) 10 + 11 + (3) FIFO(s) and Codec(s) 12 + 13 + (4) Codec(s) and Codec(s) 14 + 15 + which is typically driven by DMA channel(s) through the data link. An 16 + audio stream contains one or more channels of data. All channels within a 17 + stream must have the same sample rate and the same sample size. 18 + 19 + Assume a stream with two channels (Left & Right) is opened using the SoundWire 20 + interface. Below are some ways a stream can be represented in SoundWire. 21 + 22 + Stream Sample in memory (System memory, DSP memory or FIFOs) :: 23 + 24 + ------------------------- 25 + | L | R | L | R | L | R | 26 + ------------------------- 27 + 28 + Example 1: Stereo Stream with L and R channels is rendered from Master to 29 + Slave. Both Master and Slave use a single port. :: 30 + 31 + +---------------+ Clock Signal +---------------+ 32 + | Master +----------------------------------+ Slave | 33 + | Interface | | Interface | 34 + | | | 1 | 35 + | | Data Signal | | 36 + | L + R +----------------------------------+ L + R | 37 + | (Data) | Data Direction | (Data) | 38 + +---------------+ +-----------------------> +---------------+ 39 + 40 + 41 + Example 2: Stereo Stream with L and R channels is captured from Slave to 42 + Master. Both Master and Slave use a single port. :: 43 + 44 + 45 + +---------------+ Clock Signal +---------------+ 46 + | Master +----------------------------------+ Slave | 47 + | Interface | | Interface | 48 + | | | 1 | 49 + | | Data Signal | | 50 + | L + R +----------------------------------+ L + R | 51 + | (Data) | Data Direction | (Data) | 52 + +---------------+ <-----------------------+ +---------------+ 53 + 54 + 55 + Example 3: Stereo Stream with L and R channels is rendered by Master. 
Each 56 + of the L and R channels is received by a different Slave. Master and both 57 + Slaves are using a single port. :: 58 + 59 + +---------------+ Clock Signal +---------------+ 60 + | Master +---------+------------------------+ Slave | 61 + | Interface | | | Interface | 62 + | | | | 1 | 63 + | | | Data Signal | | 64 + | L + R +---+------------------------------+ L | 65 + | (Data) | | | Data Direction | (Data) | 66 + +---------------+ | | +-------------> +---------------+ 67 + | | 68 + | | 69 + | | +---------------+ 70 + | +----------------------> | Slave | 71 + | | Interface | 72 + | | 2 | 73 + | | | 74 + +----------------------------> | R | 75 + | (Data) | 76 + +---------------+ 77 + 78 + 79 + Example 4: Stereo Stream with L and R channels is rendered by two different 80 + Ports of the Master and is received by a single Port of the Slave 81 + interface. :: 82 + 83 + +--------------------+ 84 + | | 85 + | +--------------+ +----------------+ 86 + | | || | | 87 + | | Data Port || L Channel | | 88 + | | 1 |------------+ | | 89 + | | L Channel || | +-----+----+ | 90 + | | (Data) || | L + R Channel || Data | | 91 + | Master +----------+ | +---+---------> || Port | | 92 + | Interface | | || 1 | | 93 + | +--------------+ | || | | 94 + | | || | +----------+ | 95 + | | Data Port |------------+ | | 96 + | | 2 || R Channel | Slave | 97 + | | R Channel || | Interface | 98 + | | (Data) || | 1 | 99 + | +--------------+ Clock Signal | L + R | 100 + | +---------------------------> | (Data) | 101 + +--------------------+ | | 102 + +----------------+ 103 + 104 + SoundWire Stream Management flow 105 + ================================ 106 + 107 + Stream definitions 108 + ------------------ 109 + 110 + (1) Current stream: the stream on which an operation such as prepare, 111 + enable, disable or de-prepare is being performed. 112 + 113 + (2) Active stream: a stream, other than the current stream, which is already active 114 + on the Bus. 
There can be multiple active streams 115 + on the Bus. 116 + 117 + The SoundWire Bus manages stream operations for each stream getting 118 + rendered/captured on the SoundWire Bus. This section explains the Bus operations 119 + done for each stream allocated/released on the Bus. Following are the 120 + stream states maintained by the Bus for each audio stream. 121 + 122 + 123 + SoundWire stream states 124 + ----------------------- 125 + 126 + Below are the SoundWire stream states and the state transition diagram. :: 127 + 128 + +-----------+ +------------+ +----------+ +----------+ 129 + | ALLOCATED +---->| CONFIGURED +---->| PREPARED +---->| ENABLED | 130 + | STATE | | STATE | | STATE | | STATE | 131 + +-----------+ +------------+ +----------+ +----+-----+ 132 + ^ 133 + | 134 + | 135 + v 136 + +----------+ +------------+ +----+-----+ 137 + | RELEASED |<----------+ DEPREPARED |<-------+ DISABLED | 138 + | STATE | | STATE | | STATE | 139 + +----------+ +------------+ +----------+ 140 + 141 + NOTE: The state transition between prepare and deprepare is supported in the Spec 142 + but not in the software (subsystem). 143 + 144 + NOTE2: Stream state transition checks need to be handled by the caller 145 + framework, for example ALSA/ASoC. No checks for stream transitions exist in the 146 + SoundWire subsystem. 147 + 148 + Stream State Operations 149 + ----------------------- 150 + 151 + The sections below explain the operations done by the Bus on Master(s) and 152 + Slave(s) as part of stream state transitions. 153 + 154 + SDW_STREAM_ALLOCATED 155 + ~~~~~~~~~~~~~~~~~~~~ 156 + 157 + Allocation state of the stream. This is the entry state 158 + of the stream. Operations performed before entering this state: 159 + 160 + (1) A stream runtime is allocated for the stream. This stream 161 + runtime is used as a reference for all the operations performed 162 + on the stream. 163 + 164 + (2) The resources required for holding stream runtime information are 165 + allocated and initialized. 
This holds all stream-related information 166 + such as stream type (PCM/PDM) and parameters, the Master and Slave 167 + interfaces associated with the stream, the stream state, etc. 168 + 169 + After all the above operations are successful, the stream state is set to 170 + ``SDW_STREAM_ALLOCATED``. 171 + 172 + The Bus implements the below API for allocating a stream, which needs to be called once 173 + per stream. From the ASoC DPCM framework, this stream state may be linked to the 174 + .startup() operation. 175 + 176 + .. code-block:: c 177 + int sdw_alloc_stream(char * stream_name); 178 + 179 + 180 + SDW_STREAM_CONFIGURED 181 + ~~~~~~~~~~~~~~~~~~~~~ 182 + 183 + Configuration state of the stream. Operations performed before entering 184 + this state: 185 + 186 + (1) The resources allocated for stream information in the SDW_STREAM_ALLOCATED 187 + state are updated here. This includes stream parameters and the Master(s) 188 + and Slave(s) runtime information associated with the current stream. 189 + 190 + (2) All the Master(s) and Slave(s) associated with the current stream provide 191 + the port information to the Bus, which includes the port numbers allocated by 192 + Master(s) and Slave(s) for the current stream and their channel masks. 193 + 194 + After all the above operations are successful, the stream state is set to 195 + ``SDW_STREAM_CONFIGURED``. 196 + 197 + The Bus implements the below APIs for the CONFIG state, which need to be called by 198 + the respective Master(s) and Slave(s) associated with the stream. These APIs can 199 + only be invoked once by the respective Master(s) and Slave(s). From the ASoC DPCM 200 + framework, this stream state is linked to the .hw_params() operation. 201 + 202 + ..
code-block:: c 203 + int sdw_stream_add_master(struct sdw_bus * bus, 204 + struct sdw_stream_config * stream_config, 205 + struct sdw_ports_config * ports_config, 206 + struct sdw_stream_runtime * stream); 207 + 208 + int sdw_stream_add_slave(struct sdw_slave * slave, 209 + struct sdw_stream_config * stream_config, 210 + struct sdw_ports_config * ports_config, 211 + struct sdw_stream_runtime * stream); 212 + 213 + 214 + SDW_STREAM_PREPARED 215 + ~~~~~~~~~~~~~~~~~~~ 216 + 217 + Prepare state of the stream. Operations performed before entering this state: 218 + 219 + (1) Bus parameters such as bandwidth, frame shape and clock frequency 220 + are computed based on the current stream as well as the already active 221 + stream(s) on the Bus. Re-computation is required to accommodate the current 222 + stream on the Bus. 223 + 224 + (2) Transport and port parameters of all Master(s) and Slave(s) port(s) are 225 + computed for the current as well as the already active stream(s) based on the frame 226 + shape and clock frequency computed in step 1. 227 + 228 + (3) The computed Bus and transport parameters are programmed in the Master(s) and 229 + Slave(s) registers. The banked register programming is done on the 230 + alternate bank (bank currently unused). Port(s) are enabled for the 231 + already active stream(s) on the alternate bank (bank currently unused). 232 + This is done in order not to disrupt already active stream(s). 233 + 234 + (4) Once all the values are programmed, the Bus initiates a switch to the alternate 235 + bank, where all the newly programmed values take effect. 236 + 237 + (5) Ports of the Master(s) and Slave(s) for the current stream are prepared by 238 + programming the PrepareCtrl register. 239 + 240 + After all the above operations are successful, the stream state is set to 241 + ``SDW_STREAM_PREPARED``. 242 + 243 + The Bus implements the below API for the PREPARE state, which needs to be called once per 244 + stream. From the ASoC DPCM framework, this stream state is linked to the 245 + .prepare() operation. 246 + 247 + ..
code-block:: c 248 + int sdw_prepare_stream(struct sdw_stream_runtime * stream); 249 + 250 + 251 + SDW_STREAM_ENABLED 252 + ~~~~~~~~~~~~~~~~~~ 253 + 254 + Enable state of the stream. The data port(s) are enabled upon entering this state. 255 + Operations performed before entering this state: 256 + 257 + (1) All the values computed in the SDW_STREAM_PREPARED state are programmed 258 + in the alternate bank (bank currently unused). This includes programming of the 259 + already active stream(s) as well. 260 + 261 + (2) All the Master(s) and Slave(s) port(s) for the current stream are 262 + enabled on the alternate bank (bank currently unused) by programming the 263 + ChannelEn register. 264 + 265 + (3) Once all the values are programmed, the Bus initiates a switch to the alternate 266 + bank, where all the newly programmed values take effect and the port(s) 267 + associated with the current stream are enabled. 268 + 269 + After all the above operations are successful, the stream state is set to 270 + ``SDW_STREAM_ENABLED``. 271 + 272 + The Bus implements the below API for the ENABLE state, which needs to be called once per 273 + stream. From the ASoC DPCM framework, this stream state is linked to the 274 + .trigger() start operation. 275 + 276 + .. code-block:: c 277 + int sdw_enable_stream(struct sdw_stream_runtime * stream); 278 + 279 + SDW_STREAM_DISABLED 280 + ~~~~~~~~~~~~~~~~~~~ 281 + 282 + Disable state of the stream. The data port(s) are disabled upon exiting this state. 283 + Operations performed before entering this state: 284 + 285 + (1) All the Master(s) and Slave(s) port(s) for the current stream are 286 + disabled on the alternate bank (bank currently unused) by programming the 287 + ChannelEn register. 288 + 289 + (2) All the current configuration of the Bus and active stream(s) is programmed 290 + into the alternate bank (bank currently unused). 
291 + 292 + (3) Once all the values are programmed, the Bus initiates a switch to the alternate 293 + bank, where all the newly programmed values take effect and the port(s) associated 294 + with the current stream are disabled. 295 + 296 + After all the above operations are successful, the stream state is set to 297 + ``SDW_STREAM_DISABLED``. 298 + 299 + The Bus implements the below API for the DISABLED state, which needs to be called once 300 + per stream. From the ASoC DPCM framework, this stream state is linked to the 301 + .trigger() stop operation. 302 + 303 + .. code-block:: c 304 + int sdw_disable_stream(struct sdw_stream_runtime * stream); 305 + 306 + 307 + SDW_STREAM_DEPREPARED 308 + ~~~~~~~~~~~~~~~~~~~~~ 309 + 310 + De-prepare state of the stream. Operations performed before entering this 311 + state: 312 + 313 + (1) All the port(s) of the Master(s) and Slave(s) for the current stream are 314 + de-prepared by programming the PrepareCtrl register. 315 + 316 + (2) The payload bandwidth of the current stream is subtracted from the total 317 + bandwidth requirement of the bus, and the new parameters are calculated and 318 + applied by performing a bank switch, etc. 319 + 320 + After all the above operations are successful, the stream state is set to 321 + ``SDW_STREAM_DEPREPARED``. 322 + 323 + The Bus implements the below API for the DEPREPARED state, which needs to be called once 324 + per stream. From the ASoC DPCM framework, this stream state is linked to the 325 + .trigger() stop operation. 326 + 327 + .. code-block:: c 328 + int sdw_deprepare_stream(struct sdw_stream_runtime * stream); 329 + 330 + 331 + SDW_STREAM_RELEASED 332 + ~~~~~~~~~~~~~~~~~~~ 333 + 334 + Release state of the stream. Operations performed before entering this state: 335 + 336 + (1) Release port resources for all Master(s) and Slave(s) port(s) 337 + associated with the current stream. 338 + 339 + (2) Release Master(s) and Slave(s) runtime resources associated with the 340 + current stream. 341 + 342 + (3) Release stream runtime resources associated with the current stream. 
343 + 344 + After all the above operations are successful, the stream state is set to 345 + ``SDW_STREAM_RELEASED``. 346 + 347 + The Bus implements the below APIs for the RELEASE state, which need to be called by 348 + all the Master(s) and Slave(s) associated with the stream. From the ASoC DPCM 349 + framework, this stream state is linked to the .hw_free() operation. 350 + 351 + .. code-block:: c 352 + int sdw_stream_remove_master(struct sdw_bus * bus, 353 + struct sdw_stream_runtime * stream); 354 + int sdw_stream_remove_slave(struct sdw_slave * slave, 355 + struct sdw_stream_runtime * stream); 356 + 357 + 358 + The .shutdown() ASoC DPCM operation calls the below Bus API to release the 359 + stream assigned as part of the ALLOCATED state. 360 + 361 + In .shutdown() the data structures maintaining the stream state are freed. 362 + 363 + .. code-block:: c 364 + void sdw_release_stream(struct sdw_stream_runtime * stream); 365 + 366 + Not Supported 367 + ============= 368 + 369 + 1. A single port supporting multiple channels cannot be shared between two 370 + streams or across streams. For example, a port with 4 channels cannot be used 371 + to handle 2 independent stereo streams, even though that is possible in theory 372 + in SoundWire.
-199
Documentation/fpga/fpga-mgr.txt
··· 1 - FPGA Manager Core 2 - 3 - Alan Tull 2015 4 - 5 - Overview 6 - ======== 7 - 8 - The FPGA manager core exports a set of functions for programming an FPGA with 9 - an image. The API is manufacturer agnostic. All manufacturer specifics are 10 - hidden away in a low level driver which registers a set of ops with the core. 11 - The FPGA image data itself is very manufacturer specific, but for our purposes 12 - it's just binary data. The FPGA manager core won't parse it. 13 - 14 - The FPGA image to be programmed can be in a scatter gather list, a single 15 - contiguous buffer, or a firmware file. Because allocating contiguous kernel 16 - memory for the buffer should be avoided, users are encouraged to use a scatter 17 - gather list instead if possible. 18 - 19 - The particulars for programming the image are presented in a structure (struct 20 - fpga_image_info). This struct contains parameters such as pointers to the 21 - FPGA image as well as image-specific particulars such as whether the image was 22 - built for full or partial reconfiguration. 23 - 24 - API Functions: 25 - ============== 26 - 27 - To program the FPGA: 28 - -------------------- 29 - 30 - int fpga_mgr_load(struct fpga_manager *mgr, 31 - struct fpga_image_info *info); 32 - 33 - Load the FPGA from an image which is indicated in the info. If successful, 34 - the FPGA ends up in operating mode. Return 0 on success or a negative error 35 - code. 
36 - 37 - To allocate or free a struct fpga_image_info: 38 - --------------------------------------------- 39 - 40 - struct fpga_image_info *fpga_image_info_alloc(struct device *dev); 41 - 42 - void fpga_image_info_free(struct fpga_image_info *info); 43 - 44 - To get/put a reference to a FPGA manager: 45 - ----------------------------------------- 46 - 47 - struct fpga_manager *of_fpga_mgr_get(struct device_node *node); 48 - struct fpga_manager *fpga_mgr_get(struct device *dev); 49 - void fpga_mgr_put(struct fpga_manager *mgr); 50 - 51 - Given a DT node or device, get a reference to a FPGA manager. This pointer 52 - can be saved until you are ready to program the FPGA. fpga_mgr_put releases 53 - the reference. 54 - 55 - 56 - To get exclusive control of a FPGA manager: 57 - ------------------------------------------- 58 - 59 - int fpga_mgr_lock(struct fpga_manager *mgr); 60 - void fpga_mgr_unlock(struct fpga_manager *mgr); 61 - 62 - The user should call fpga_mgr_lock and verify that it returns 0 before 63 - attempting to program the FPGA. Likewise, the user should call 64 - fpga_mgr_unlock when done programming the FPGA. 65 - 66 - 67 - To register or unregister the low level FPGA-specific driver: 68 - ------------------------------------------------------------- 69 - 70 - int fpga_mgr_register(struct device *dev, const char *name, 71 - const struct fpga_manager_ops *mops, 72 - void *priv); 73 - 74 - void fpga_mgr_unregister(struct device *dev); 75 - 76 - Use of these two functions is described below in "How To Support a new FPGA 77 - device." 78 - 79 - 80 - How to write an image buffer to a supported FPGA 81 - ================================================ 82 - #include <linux/fpga/fpga-mgr.h> 83 - 84 - struct fpga_manager *mgr; 85 - struct fpga_image_info *info; 86 - int ret; 87 - 88 - /* 89 - * Get a reference to FPGA manager. The manager is not locked, so you can 90 - * hold onto this reference without it preventing programming. 
91 - * 92 - * This example uses the device node of the manager. Alternatively, use 93 - * fpga_mgr_get(dev) instead if you have the device. 94 - */ 95 - mgr = of_fpga_mgr_get(mgr_node); 96 - 97 - /* struct with information about the FPGA image to program. */ 98 - info = fpga_image_info_alloc(dev); 99 - 100 - /* flags indicates whether to do full or partial reconfiguration */ 101 - info->flags = FPGA_MGR_PARTIAL_RECONFIG; 102 - 103 - /* 104 - * At this point, indicate where the image is. This is pseudo-code; you're 105 - * going to use one of these three. 106 - */ 107 - if (image is in a scatter gather table) { 108 - 109 - info->sgt = [your scatter gather table] 110 - 111 - } else if (image is in a buffer) { 112 - 113 - info->buf = [your image buffer] 114 - info->count = [image buffer size] 115 - 116 - } else if (image is in a firmware file) { 117 - 118 - info->firmware_name = devm_kstrdup(dev, firmware_name, GFP_KERNEL); 119 - 120 - } 121 - 122 - /* Get exclusive control of FPGA manager */ 123 - ret = fpga_mgr_lock(mgr); 124 - 125 - /* Load the buffer to the FPGA */ 126 - ret = fpga_mgr_buf_load(mgr, &info, buf, count); 127 - 128 - /* Release the FPGA manager */ 129 - fpga_mgr_unlock(mgr); 130 - fpga_mgr_put(mgr); 131 - 132 - /* Deallocate the image info if you're done with it */ 133 - fpga_image_info_free(info); 134 - 135 - How to support a new FPGA device 136 - ================================ 137 - To add another FPGA manager, write a driver that implements a set of ops. 
The 138 - probe function calls fpga_mgr_register(), such as: 139 - 140 - static const struct fpga_manager_ops socfpga_fpga_ops = { 141 - .write_init = socfpga_fpga_ops_configure_init, 142 - .write = socfpga_fpga_ops_configure_write, 143 - .write_complete = socfpga_fpga_ops_configure_complete, 144 - .state = socfpga_fpga_ops_state, 145 - }; 146 - 147 - static int socfpga_fpga_probe(struct platform_device *pdev) 148 - { 149 - struct device *dev = &pdev->dev; 150 - struct socfpga_fpga_priv *priv; 151 - int ret; 152 - 153 - priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 154 - if (!priv) 155 - return -ENOMEM; 156 - 157 - /* ... do ioremaps, get interrupts, etc. and save 158 - them in priv... */ 159 - 160 - return fpga_mgr_register(dev, "Altera SOCFPGA FPGA Manager", 161 - &socfpga_fpga_ops, priv); 162 - } 163 - 164 - static int socfpga_fpga_remove(struct platform_device *pdev) 165 - { 166 - fpga_mgr_unregister(&pdev->dev); 167 - 168 - return 0; 169 - } 170 - 171 - 172 - The ops will implement whatever device specific register writes are needed to 173 - do the programming sequence for this particular FPGA. These ops return 0 for 174 - success or negative error codes otherwise. 175 - 176 - The programming sequence is: 177 - 1. .write_init 178 - 2. .write or .write_sg (may be called once or multiple times) 179 - 3. .write_complete 180 - 181 - The .write_init function will prepare the FPGA to receive the image data. The 182 - buffer passed into .write_init will be atmost .initial_header_size bytes long, 183 - if the whole bitstream is not immediately available then the core code will 184 - buffer up at least this much before starting. 185 - 186 - The .write function writes a buffer to the FPGA. The buffer may be contain the 187 - whole FPGA image or may be a smaller chunk of an FPGA image. In the latter 188 - case, this function is called multiple times for successive chunks. This interface 189 - is suitable for drivers which use PIO. 
190 - 191 - The .write_sg version behaves the same as .write except the input is a sg_table 192 - scatter list. This interface is suitable for drivers which use DMA. 193 - 194 - The .write_complete function is called after all the image has been written 195 - to put the FPGA into operating mode. 196 - 197 - The ops include a .state function which will read the hardware FPGA manager and 198 - return a code of type enum fpga_mgr_states. It doesn't result in a change in 199 - hardware state.
-95
Documentation/fpga/fpga-region.txt
··· 1 - FPGA Regions 2 - 3 - Alan Tull 2017 4 - 5 - CONTENTS 6 - - Introduction 7 - - The FPGA region API 8 - - Usage example 9 - 10 - Introduction 11 - ============ 12 - 13 - This document is meant to be an brief overview of the FPGA region API usage. A 14 - more conceptual look at regions can be found in [1]. 15 - 16 - For the purposes of this API document, let's just say that a region associates 17 - an FPGA Manager and a bridge (or bridges) with a reprogrammable region of an 18 - FPGA or the whole FPGA. The API provides a way to register a region and to 19 - program a region. 20 - 21 - Currently the only layer above fpga-region.c in the kernel is the Device Tree 22 - support (of-fpga-region.c) described in [1]. The DT support layer uses regions 23 - to program the FPGA and then DT to handle enumeration. The common region code 24 - is intended to be used by other schemes that have other ways of accomplishing 25 - enumeration after programming. 26 - 27 - An fpga-region can be set up to know the following things: 28 - * which FPGA manager to use to do the programming 29 - * which bridges to disable before programming and enable afterwards. 30 - 31 - Additional info needed to program the FPGA image is passed in the struct 32 - fpga_image_info [2] including: 33 - * pointers to the image as either a scatter-gather buffer, a contiguous 34 - buffer, or the name of firmware file 35 - * flags indicating specifics such as whether the image if for partial 36 - reconfiguration. 
37 - 38 - =================== 39 - The FPGA region API 40 - =================== 41 - 42 - To register or unregister a region: 43 - ----------------------------------- 44 - 45 - int fpga_region_register(struct device *dev, 46 - struct fpga_region *region); 47 - int fpga_region_unregister(struct fpga_region *region); 48 - 49 - An example of usage can be seen in the probe function of [3] 50 - 51 - To program an FPGA: 52 - ------------------- 53 - int fpga_region_program_fpga(struct fpga_region *region); 54 - 55 - This function operates on info passed in the fpga_image_info 56 - (region->info). 57 - 58 - This function will attempt to: 59 - * lock the region's mutex 60 - * lock the region's FPGA manager 61 - * build a list of FPGA bridges if a method has been specified to do so 62 - * disable the bridges 63 - * program the FPGA 64 - * re-enable the bridges 65 - * release the locks 66 - 67 - ============= 68 - Usage example 69 - ============= 70 - 71 - First, allocate the info struct: 72 - 73 - info = fpga_image_info_alloc(dev); 74 - if (!info) 75 - return -ENOMEM; 76 - 77 - Set flags as needed, i.e. 78 - 79 - info->flags |= FPGA_MGR_PARTIAL_RECONFIG; 80 - 81 - Point to your FPGA image, such as: 82 - 83 - info->sgt = &sgt; 84 - 85 - Add info to region and do the programming: 86 - 87 - region->info = info; 88 - ret = fpga_region_program_fpga(region); 89 - 90 - Then enumerate whatever hardware has appeared in the FPGA. 91 - 92 - -- 93 - [1] ../devicetree/bindings/fpga/fpga-region.txt 94 - [2] ./fpga-mgr.txt 95 - [3] ../../drivers/fpga/of-fpga-region.c
-23
Documentation/fpga/overview.txt
··· 1 - Linux kernel FPGA support 2 - 3 - Alan Tull 2017 4 - 5 - The main point of this project has been to separate the out the upper layers 6 - that know when to reprogram a FPGA from the lower layers that know how to 7 - reprogram a specific FPGA device. The intention is to make this manufacturer 8 - agnostic, understanding that of course the FPGA images are very device specific 9 - themselves. 10 - 11 - The framework in the kernel includes: 12 - * low level FPGA manager drivers that know how to program a specific device 13 - * the fpga-mgr framework they are registered with 14 - * low level FPGA bridge drivers for hard/soft bridges which are intended to 15 - be disable during FPGA programming 16 - * the fpga-bridge framework they are registered with 17 - * the fpga-region framework which associates and controls managers and bridges 18 - as reconfigurable regions 19 - * the of-fpga-region support for reprogramming FPGAs when device tree overlays 20 - are applied. 21 - 22 - I would encourage you the user to add code that creates FPGA regions rather 23 - that trying to control managers and bridges separately.
+1
Documentation/ioctl/ioctl-number.txt
··· 328 328 0xCA 80-BF uapi/scsi/cxlflash_ioctl.h 329 329 0xCB 00-1F CBM serial IEC bus in development: 330 330 <mailto:michael.klein@puffin.lb.shuttle.de> 331 + 0xCC 00-0F drivers/misc/ibmvmc.h pseries VMC driver 331 332 0xCD 01 linux/reiserfs_fs.h 332 333 0xCF 02 fs/cifs/ioctl.c 333 334 0xDB 00-0F drivers/char/mwave/mwavepub.h
+226
Documentation/misc-devices/ibmvmc.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0+ 2 + ====================================================== 3 + IBM Virtual Management Channel Kernel Driver (IBMVMC) 4 + ====================================================== 5 + 6 + :Authors: 7 + Dave Engebretsen <engebret@us.ibm.com>, 8 + Adam Reznechek <adreznec@linux.vnet.ibm.com>, 9 + Steven Royer <seroyer@linux.vnet.ibm.com>, 10 + Bryant G. Ly <bryantly@linux.vnet.ibm.com>, 11 + 12 + Introduction 13 + ============ 14 + 15 + Note: Knowledge of virtualization technology is required to understand 16 + this document. 17 + 18 + A good reference document would be: 19 + 20 + https://openpowerfoundation.org/wp-content/uploads/2016/05/LoPAPR_DRAFT_v11_24March2016_cmt1.pdf 21 + 22 + The Virtual Management Channel (VMC) is a logical device which provides an 23 + interface between the hypervisor and a management partition. This interface 24 + is like a message passing interface. This management partition is intended 25 + to provide an alternative for systems that use Hardware Management 26 + Console (HMC)-based system management. 27 + 28 + The primary hardware management solution that is developed by IBM relies 29 + on an appliance server named the Hardware Management Console (HMC), 30 + packaged as an external tower or rack-mounted personal computer. In a 31 + Power Systems environment, a single HMC can manage multiple POWER 32 + processor-based systems. 33 + 34 + Management Application 35 + ---------------------- 36 + 37 + In the management partition, a management application exists which enables 38 + a system administrator to configure the system’s partitioning 39 + characteristics via a command line interface (CLI) or Representational 40 + State Transfer (REST) APIs. 41 + 42 + The management application runs on a Linux logical partition on a 43 + POWER8 or newer processor-based server that is virtualized by PowerVM. 
44 + System configuration, maintenance, and control functions which 45 + traditionally require an HMC can be implemented in the management 46 + application using a combination of HMC to hypervisor interfaces and 47 + existing operating system methods. This tool provides a subset of the 48 + functions implemented by the HMC and enables basic partition configuration. 49 + The set of HMC to hypervisor messages supported by the management 50 + application component is passed to the hypervisor over a VMC interface, 51 + which is defined below. 52 + 53 + The VMC enables the management partition to provide basic partitioning 54 + functions: 55 + 56 + - Logical Partitioning Configuration 57 + - Start and stop actions for individual partitions 58 + - Display of partition status 59 + - Management of virtual Ethernet 60 + - Management of virtual Storage 61 + - Basic system management 62 + 63 + Virtual Management Channel (VMC) 64 + -------------------------------- 65 + 66 + A logical device, called the Virtual Management Channel (VMC), is defined 67 + for communicating between the management application and the hypervisor. It 68 + basically creates the pipes that enable virtualization management 69 + software. This device is presented to a designated management partition as 70 + a virtual device. 71 + 72 + This communication device uses the Command/Response Queue (CRQ) and 73 + Remote Direct Memory Access (RDMA) interfaces. A three-way handshake is 74 + defined that must take place to establish that both the hypervisor and 75 + management partition sides of the channel are running prior to 76 + sending/receiving any of the protocol messages. 77 + 78 + This driver also utilizes Transport Event CRQs. CRQ messages are sent 79 + when the hypervisor detects one of the peer partitions has abnormally 80 + terminated, or one side has called H_FREE_CRQ to close its CRQ. 81 + Two new classes of CRQ messages are introduced for the VMC device. 
VMC 82 + Administrative messages are used by each partition using the VMC to 83 + communicate capabilities to its partner. HMC Interface messages are used 84 + for the actual flow of HMC messages between the management partition and 85 + the hypervisor. As most HMC messages far exceed the size of a CRQ buffer, 86 + a virtual DMA (RDMA) of the HMC message data is done prior to each HMC 87 + Interface CRQ message. Only the management partition drives RDMA 88 + operations; hypervisors never directly cause the movement of message data. 89 + 90 + 91 + Terminology 92 + ----------- 93 + RDMA 94 + Remote Direct Memory Access is a DMA transfer from the server to its 95 + client or from the server to its partner partition. DMA refers 96 + to both physical I/O to and from memory operations and to memory 97 + to memory move operations. 98 + CRQ 99 + Command/Response Queue, a facility which is used to communicate 100 + between partner partitions. Transport events which are signaled 101 + from the hypervisor to a partition are also reported in this queue. 102 + 103 + Example Management Partition VMC Driver Interface 104 + ================================================= 105 + 106 + This section provides an example of the management application 107 + implementation where a device driver is used to interface to the VMC 108 + device. This driver consists of a new device, for example /dev/ibmvmc, 109 + which provides interfaces to open, close, read, write, and perform 110 + ioctls against the VMC device. 111 + 112 + VMC Interface Initialization 113 + ---------------------------- 114 + 115 + The device driver is responsible for initializing the VMC when the driver 116 + is loaded. It first creates and initializes the CRQ. Next, an exchange of 117 + VMC capabilities is performed to indicate the code version and number of 118 + resources available in both the management partition and the hypervisor. 
119 + Finally, the hypervisor requests that the management partition create an 120 + initial pool of VMC buffers, one buffer for each possible HMC connection, 121 + which will be used for management application session initialization. 122 + Prior to completion of this initialization sequence, the device returns 123 + EBUSY to open() calls. EIO is returned for all open() failures. 124 + 125 + :: 126 + 127 + Management Partition Hypervisor 128 + CRQ INIT 129 + ----------------------------------------> 130 + CRQ INIT COMPLETE 131 + <---------------------------------------- 132 + CAPABILITIES 133 + ----------------------------------------> 134 + CAPABILITIES RESPONSE 135 + <---------------------------------------- 136 + ADD BUFFER (HMC IDX=0,1,..) _ 137 + <---------------------------------------- | 138 + ADD BUFFER RESPONSE | - Perform # HMCs Iterations 139 + ----------------------------------------> - 140 + 141 + VMC Interface Open 142 + ------------------ 143 + 144 + After the basic VMC channel has been initialized, an HMC session level 145 + connection can be established. The application layer performs an open() to 146 + the VMC device and executes an ioctl() against it, indicating the HMC ID 147 + (32 bytes of data) for this session. If the VMC device is in an invalid 148 + state, EIO will be returned for the ioctl(). The device driver creates a 149 + new HMC session value (ranging from 1 to 255) and HMC index value (starting 150 + at index 0 and ranging to 254) for this HMC ID. The driver then does an 151 + RDMA of the HMC ID to the hypervisor, and then sends an Interface Open 152 + message to the hypervisor to establish the session over the VMC. After the 153 + hypervisor receives this information, it sends Add Buffer messages to the 154 + management partition to seed an initial pool of buffers for the new HMC 155 + connection. Finally, the hypervisor sends an Interface Open Response 156 + message, to indicate that it is ready for normal runtime messaging. 
The
157 + following illustrates this VMC flow:
158 +
159 + ::
160 +
161 + Management Partition Hypervisor
162 + RDMA HMC ID
163 + ---------------------------------------->
164 + Interface Open
165 + ---------------------------------------->
166 + Add Buffer _
167 + <---------------------------------------- |
168 + Add Buffer Response | - Perform N Iterations
169 + ----------------------------------------> -
170 + Interface Open Response
171 + <----------------------------------------
172 +
173 + VMC Interface Runtime
174 + ---------------------
175 +
176 + During normal runtime, the management application and the hypervisor
177 + exchange HMC messages via the Signal VMC message and RDMA operations. When
178 + sending data to the hypervisor, the management application performs a
179 + write() to the VMC device, and the driver RDMAs the data to the hypervisor
180 + and then sends a Signal Message. If a write() is attempted before VMC
181 + device buffers have been made available by the hypervisor, or no buffers
182 + are currently available, EBUSY is returned in response to the write(). A
183 + write() will return EIO for all other errors, such as an invalid device
184 + state. When the hypervisor sends a message to the management partition,
185 + the data is put into a VMC buffer and a Signal Message is sent to the VMC
186 + driver in the management partition. The driver RDMAs the buffer into the
187 + partition and passes the data up to the appropriate management application
188 + via a read() to the VMC device. The read() request blocks if there is no
189 + buffer available to read. The management application may use select() to
190 + wait for the VMC device to become ready with data to read.
191 +
192 + ::
193 +
194 + Management Partition Hypervisor
195 + MSG RDMA
196 + ---------------------------------------->
197 + SIGNAL MSG
198 + ---------------------------------------->
199 + SIGNAL MSG
200 + <----------------------------------------
201 + MSG RDMA
202 + <----------------------------------------
203 +
204 + VMC Interface Close
205 + -------------------
206 +
207 + HMC session level connections are closed by the management partition when
208 + the application layer performs a close() against the device. This action
209 + results in an Interface Close message flowing to the hypervisor, which
210 + causes the session to be terminated. The device driver must free any
211 + storage allocated for buffers for this HMC connection.
212 +
213 + ::
214 +
215 + Management Partition Hypervisor
216 + INTERFACE CLOSE
217 + ---------------------------------------->
218 + INTERFACE CLOSE RESPONSE
219 + <----------------------------------------
220 +
221 + Additional Information
222 + ======================
223 +
224 + For more information on CRQ Messages, VMC Messages,
225 + HMC Interface Buffers, and Signal Messages, please refer to the Linux on
226 + Power Architecture Platform Reference. Section F.
+2 -2
Documentation/trace/coresight-cpu-debug.txt
··· 177 177 ARM external debug module: 178 178 coresight-cpu-debug 850000.debug: CPU[0]: 179 179 coresight-cpu-debug 850000.debug: EDPRSR: 00000001 (Power:On DLK:Unlock) 180 - coresight-cpu-debug 850000.debug: EDPCSR: [<ffff00000808e9bc>] handle_IPI+0x174/0x1d8 180 + coresight-cpu-debug 850000.debug: EDPCSR: handle_IPI+0x174/0x1d8 181 181 coresight-cpu-debug 850000.debug: EDCIDSR: 00000000 182 182 coresight-cpu-debug 850000.debug: EDVIDSR: 90000000 (State:Non-secure Mode:EL1/0 Width:64bits VMID:0) 183 183 coresight-cpu-debug 852000.debug: CPU[1]: 184 184 coresight-cpu-debug 852000.debug: EDPRSR: 00000001 (Power:On DLK:Unlock) 185 - coresight-cpu-debug 852000.debug: EDPCSR: [<ffff0000087fab34>] debug_notifier_call+0x23c/0x358 185 + coresight-cpu-debug 852000.debug: EDPCSR: debug_notifier_call+0x23c/0x358 186 186 coresight-cpu-debug 852000.debug: EDCIDSR: 00000000 187 187 coresight-cpu-debug 852000.debug: EDVIDSR: 90000000 (State:Non-secure Mode:EL1/0 Width:64bits VMID:0)
+8 -1
MAINTAINERS
··· 5586 5586 T: git git://git.kernel.org/pub/scm/linux/kernel/git/atull/linux-fpga.git 5587 5587 Q: http://patchwork.kernel.org/project/linux-fpga/list/ 5588 5588 F: Documentation/fpga/ 5589 + F: Documentation/driver-api/fpga/ 5589 5590 F: Documentation/devicetree/bindings/fpga/ 5590 5591 F: drivers/fpga/ 5591 5592 F: include/linux/fpga/ ··· 6773 6772 L: linux-scsi@vger.kernel.org 6774 6773 S: Supported 6775 6774 F: drivers/scsi/ibmvscsi/ibmvfc* 6775 + 6776 + IBM Power Virtual Management Channel Driver 6777 + M: Bryant G. Ly <bryantly@linux.vnet.ibm.com> 6778 + M: Steven Royer <seroyer@linux.vnet.ibm.com> 6779 + S: Supported 6780 + F: drivers/misc/ibmvmc.* 6776 6781 6777 6782 IBM Power Virtual SCSI Device Drivers 6778 6783 M: Tyrel Datwyler <tyreld@linux.vnet.ibm.com> ··· 13143 13136 F: sound/ 13144 13137 13145 13138 SOUND - COMPRESSED AUDIO 13146 - M: Vinod Koul <vinod.koul@intel.com> 13139 + M: Vinod Koul <vkoul@kernel.org> 13147 13140 L: alsa-devel@alsa-project.org (moderated for non-subscribers) 13148 13141 T: git git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound.git 13149 13142 S: Supported
+1
arch/powerpc/include/asm/hvcall.h
··· 279 279 #define H_GET_MPP_X 0x314 280 280 #define H_SET_MODE 0x31C 281 281 #define H_CLEAR_HPT 0x358 282 + #define H_REQUEST_VMC 0x360 282 283 #define H_RESIZE_HPT_PREPARE 0x36C 283 284 #define H_RESIZE_HPT_COMMIT 0x370 284 285 #define H_REGISTER_PROC_TBL 0x37C
+2 -2
drivers/amba/bus.c
··· 102 102 if (strlen(driver_override)) { 103 103 dev->driver_override = driver_override; 104 104 } else { 105 - kfree(driver_override); 106 - dev->driver_override = NULL; 105 + kfree(driver_override); 106 + dev->driver_override = NULL; 107 107 } 108 108 device_unlock(_dev); 109 109
+1 -14
drivers/android/Kconfig
··· 10 10 11 11 config ANDROID_BINDER_IPC 12 12 bool "Android Binder IPC Driver" 13 - depends on MMU 13 + depends on MMU && !M68K 14 14 default n 15 15 ---help--- 16 16 Binder is used in Android for both communication between processes, ··· 31 31 that specifies the names of the binder device nodes that will be 32 32 created. Each binder device has its own context manager, and is 33 33 therefore logically separated from the other devices. 34 - 35 - config ANDROID_BINDER_IPC_32BIT 36 - bool "Use old (Android 4.4 and earlier) 32-bit binder API" 37 - depends on !64BIT && ANDROID_BINDER_IPC 38 - default y 39 - ---help--- 40 - The Binder API has been changed to support both 32 and 64bit 41 - applications in a mixed environment. 42 - 43 - Enable this to support an old 32-bit Android user-space (v4.4 and 44 - earlier). 45 - 46 - Note that enabling this will break newer Android user-space. 47 34 48 35 config ANDROID_BINDER_IPC_SELFTEST 49 36 bool "Android Binder IPC Driver Selftest"
+8 -9
drivers/android/binder.c
··· 72 72 #include <linux/security.h> 73 73 #include <linux/spinlock.h> 74 74 75 - #ifdef CONFIG_ANDROID_BINDER_IPC_32BIT 76 - #define BINDER_IPC_32BIT 1 77 - #endif 78 - 79 75 #include <uapi/linux/android/binder.h> 80 76 #include "binder_alloc.h" 81 77 #include "binder_trace.h" ··· 2054 2058 struct binder_object_header *hdr; 2055 2059 size_t object_size = 0; 2056 2060 2057 - if (offset > buffer->data_size - sizeof(*hdr) || 2058 - buffer->data_size < sizeof(*hdr) || 2061 + if (buffer->data_size < sizeof(*hdr) || 2062 + offset > buffer->data_size - sizeof(*hdr) || 2059 2063 !IS_ALIGNED(offset, sizeof(u32))) 2060 2064 return 0; 2061 2065 ··· 3921 3925 binder_inner_proc_unlock(proc); 3922 3926 if (put_user(e->cmd, (uint32_t __user *)ptr)) 3923 3927 return -EFAULT; 3928 + cmd = e->cmd; 3924 3929 e->cmd = BR_OK; 3925 3930 ptr += sizeof(uint32_t); 3926 3931 3927 - binder_stat_br(proc, thread, e->cmd); 3932 + binder_stat_br(proc, thread, cmd); 3928 3933 } break; 3929 3934 case BINDER_WORK_TRANSACTION_COMPLETE: { 3930 3935 binder_inner_proc_unlock(proc); ··· 4693 4696 binder_defer_work(proc, BINDER_DEFERRED_PUT_FILES); 4694 4697 } 4695 4698 4696 - static int binder_vm_fault(struct vm_fault *vmf) 4699 + static vm_fault_t binder_vm_fault(struct vm_fault *vmf) 4697 4700 { 4698 4701 return VM_FAULT_SIGBUS; 4699 4702 } ··· 4727 4730 failure_string = "bad vm_flags"; 4728 4731 goto err_bad_arg; 4729 4732 } 4730 - vma->vm_flags = (vma->vm_flags | VM_DONTCOPY) & ~VM_MAYWRITE; 4733 + vma->vm_flags |= VM_DONTCOPY | VM_MIXEDMAP; 4734 + vma->vm_flags &= ~VM_MAYWRITE; 4735 + 4731 4736 vma->vm_ops = &binder_vm_ops; 4732 4737 vma->vm_private_data = proc; 4733 4738
+3 -3
drivers/android/binder_alloc.c
··· 219 219 mm = alloc->vma_vm_mm; 220 220 221 221 if (mm) { 222 - down_write(&mm->mmap_sem); 222 + down_read(&mm->mmap_sem); 223 223 vma = alloc->vma; 224 224 } 225 225 ··· 288 288 /* vm_insert_page does not seem to increment the refcount */ 289 289 } 290 290 if (mm) { 291 - up_write(&mm->mmap_sem); 291 + up_read(&mm->mmap_sem); 292 292 mmput(mm); 293 293 } 294 294 return 0; ··· 321 321 } 322 322 err_no_vma: 323 323 if (mm) { 324 - up_write(&mm->mmap_sem); 324 + up_read(&mm->mmap_sem); 325 325 mmput(mm); 326 326 } 327 327 return vma ? -ENOMEM : -ESRCH;
+2 -9
drivers/char/mspec.c
··· 191 191 * 192 192 * Creates a mspec page and maps it to user space. 193 193 */ 194 - static int 194 + static vm_fault_t 195 195 mspec_fault(struct vm_fault *vmf) 196 196 { 197 197 unsigned long paddr, maddr; ··· 223 223 224 224 pfn = paddr >> PAGE_SHIFT; 225 225 226 - /* 227 - * vm_insert_pfn can fail with -EBUSY, but in that case it will 228 - * be because another thread has installed the pte first, so it 229 - * is no problem. 230 - */ 231 - vm_insert_pfn(vmf->vma, vmf->address, pfn); 232 - 233 - return VM_FAULT_NOPAGE; 226 + return vmf_insert_pfn(vmf->vma, vmf->address, pfn); 234 227 } 235 228 236 229 static const struct vm_operations_struct mspec_vm_ops = {
+8
drivers/firmware/google/Kconfig
··· 55 55 the EBDA on Google servers. If found, this log is exported to 56 56 userland in the file /sys/firmware/log. 57 57 58 + config GOOGLE_FRAMEBUFFER_COREBOOT 59 + tristate "Coreboot Framebuffer" 60 + depends on FB_SIMPLE 61 + depends on GOOGLE_COREBOOT_TABLE 62 + help 63 + This option enables the kernel to search for a framebuffer in 64 + the coreboot table. If found, it is registered with simplefb. 65 + 58 66 config GOOGLE_MEMCONSOLE_COREBOOT 59 67 tristate "Firmware Memory Console" 60 68 depends on GOOGLE_COREBOOT_TABLE
+1
drivers/firmware/google/Makefile
··· 4 4 obj-$(CONFIG_GOOGLE_COREBOOT_TABLE) += coreboot_table.o 5 5 obj-$(CONFIG_GOOGLE_COREBOOT_TABLE_ACPI) += coreboot_table-acpi.o 6 6 obj-$(CONFIG_GOOGLE_COREBOOT_TABLE_OF) += coreboot_table-of.o 7 + obj-$(CONFIG_GOOGLE_FRAMEBUFFER_COREBOOT) += framebuffer-coreboot.o 7 8 obj-$(CONFIG_GOOGLE_MEMCONSOLE) += memconsole.o 8 9 obj-$(CONFIG_GOOGLE_MEMCONSOLE_COREBOOT) += memconsole-coreboot.o 9 10 obj-$(CONFIG_GOOGLE_MEMCONSOLE_X86_LEGACY) += memconsole-x86-legacy.o
+1 -1
drivers/firmware/google/coreboot_table-acpi.c
··· 53 53 if (!ptr) 54 54 return -ENOMEM; 55 55 56 - return coreboot_table_init(ptr); 56 + return coreboot_table_init(&pdev->dev, ptr); 57 57 } 58 58 59 59 static int coreboot_table_acpi_remove(struct platform_device *pdev)
+1 -1
drivers/firmware/google/coreboot_table-of.c
··· 34 34 if (!ptr) 35 35 return -ENOMEM; 36 36 37 - return coreboot_table_init(ptr); 37 + return coreboot_table_init(&pdev->dev, ptr); 38 38 } 39 39 40 40 static int coreboot_table_of_remove(struct platform_device *pdev)
+97 -33
drivers/firmware/google/coreboot_table.c
··· 4 4 * Module providing coreboot table access. 5 5 * 6 6 * Copyright 2017 Google Inc. 7 + * Copyright 2017 Samuel Holland <samuel@sholland.org> 7 8 * 8 9 * This program is free software; you can redistribute it and/or modify 9 10 * it under the terms of the GNU General Public License v2.0 as published by ··· 16 15 * GNU General Public License for more details. 17 16 */ 18 17 18 + #include <linux/device.h> 19 19 #include <linux/err.h> 20 20 #include <linux/init.h> 21 21 #include <linux/io.h> 22 22 #include <linux/kernel.h> 23 23 #include <linux/module.h> 24 + #include <linux/slab.h> 24 25 25 26 #include "coreboot_table.h" 26 27 27 - struct coreboot_table_entry { 28 - u32 tag; 29 - u32 size; 30 - }; 28 + #define CB_DEV(d) container_of(d, struct coreboot_device, dev) 29 + #define CB_DRV(d) container_of(d, struct coreboot_driver, drv) 31 30 32 31 static struct coreboot_table_header __iomem *ptr_header; 33 32 34 - /* 35 - * This function parses the coreboot table for an entry that contains the base 36 - * address of the given entry tag. The coreboot table consists of a header 37 - * directly followed by a number of small, variable-sized entries, which each 38 - * contain an identifying tag and their length as the first two fields. 
39 - */ 40 - int coreboot_table_find(int tag, void *data, size_t data_size) 33 + static int coreboot_bus_match(struct device *dev, struct device_driver *drv) 41 34 { 42 - struct coreboot_table_header header; 43 - struct coreboot_table_entry entry; 35 + struct coreboot_device *device = CB_DEV(dev); 36 + struct coreboot_driver *driver = CB_DRV(drv); 37 + 38 + return device->entry.tag == driver->tag; 39 + } 40 + 41 + static int coreboot_bus_probe(struct device *dev) 42 + { 43 + int ret = -ENODEV; 44 + struct coreboot_device *device = CB_DEV(dev); 45 + struct coreboot_driver *driver = CB_DRV(dev->driver); 46 + 47 + if (driver->probe) 48 + ret = driver->probe(device); 49 + 50 + return ret; 51 + } 52 + 53 + static int coreboot_bus_remove(struct device *dev) 54 + { 55 + int ret = 0; 56 + struct coreboot_device *device = CB_DEV(dev); 57 + struct coreboot_driver *driver = CB_DRV(dev->driver); 58 + 59 + if (driver->remove) 60 + ret = driver->remove(device); 61 + 62 + return ret; 63 + } 64 + 65 + static struct bus_type coreboot_bus_type = { 66 + .name = "coreboot", 67 + .match = coreboot_bus_match, 68 + .probe = coreboot_bus_probe, 69 + .remove = coreboot_bus_remove, 70 + }; 71 + 72 + static int __init coreboot_bus_init(void) 73 + { 74 + return bus_register(&coreboot_bus_type); 75 + } 76 + module_init(coreboot_bus_init); 77 + 78 + static void coreboot_device_release(struct device *dev) 79 + { 80 + struct coreboot_device *device = CB_DEV(dev); 81 + 82 + kfree(device); 83 + } 84 + 85 + int coreboot_driver_register(struct coreboot_driver *driver) 86 + { 87 + driver->drv.bus = &coreboot_bus_type; 88 + 89 + return driver_register(&driver->drv); 90 + } 91 + EXPORT_SYMBOL(coreboot_driver_register); 92 + 93 + void coreboot_driver_unregister(struct coreboot_driver *driver) 94 + { 95 + driver_unregister(&driver->drv); 96 + } 97 + EXPORT_SYMBOL(coreboot_driver_unregister); 98 + 99 + int coreboot_table_init(struct device *dev, void __iomem *ptr) 100 + { 101 + int i, ret; 44 102 void 
*ptr_entry; 45 - int i; 103 + struct coreboot_device *device; 104 + struct coreboot_table_entry entry; 105 + struct coreboot_table_header header; 46 106 47 - if (!ptr_header) 48 - return -EPROBE_DEFER; 49 - 107 + ptr_header = ptr; 50 108 memcpy_fromio(&header, ptr_header, sizeof(header)); 51 109 52 110 if (strncmp(header.signature, "LBIO", sizeof(header.signature))) { ··· 114 54 } 115 55 116 56 ptr_entry = (void *)ptr_header + header.header_bytes; 117 - 118 57 for (i = 0; i < header.table_entries; i++) { 119 58 memcpy_fromio(&entry, ptr_entry, sizeof(entry)); 120 - if (entry.tag == tag) { 121 - if (data_size < entry.size) 122 - return -EINVAL; 123 59 124 - memcpy_fromio(data, ptr_entry, entry.size); 60 + device = kzalloc(sizeof(struct device) + entry.size, GFP_KERNEL); 61 + if (!device) { 62 + ret = -ENOMEM; 63 + break; 64 + } 125 65 126 - return 0; 66 + dev_set_name(&device->dev, "coreboot%d", i); 67 + device->dev.parent = dev; 68 + device->dev.bus = &coreboot_bus_type; 69 + device->dev.release = coreboot_device_release; 70 + memcpy_fromio(&device->entry, ptr_entry, entry.size); 71 + 72 + ret = device_register(&device->dev); 73 + if (ret) { 74 + put_device(&device->dev); 75 + break; 127 76 } 128 77 129 78 ptr_entry += entry.size; 130 79 } 131 80 132 - return -ENOENT; 133 - } 134 - EXPORT_SYMBOL(coreboot_table_find); 135 - 136 - int coreboot_table_init(void __iomem *ptr) 137 - { 138 - ptr_header = ptr; 139 - 140 - return 0; 81 + return ret; 141 82 } 142 83 EXPORT_SYMBOL(coreboot_table_init); 143 84 144 85 int coreboot_table_exit(void) 145 86 { 146 - if (ptr_header) 87 + if (ptr_header) { 88 + bus_unregister(&coreboot_bus_type); 147 89 iounmap(ptr_header); 90 + ptr_header = NULL; 91 + } 148 92 149 93 return 0; 150 94 }
+61 -11
drivers/firmware/google/coreboot_table.h
··· 3 3 * 4 4 * Internal header for coreboot table access. 5 5 * 6 + * Copyright 2014 Gerd Hoffmann <kraxel@redhat.com> 6 7 * Copyright 2017 Google Inc. 8 + * Copyright 2017 Samuel Holland <samuel@sholland.org> 7 9 * 8 10 * This program is free software; you can redistribute it and/or modify 9 11 * it under the terms of the GNU General Public License v2.0 as published by ··· 22 20 23 21 #include <linux/io.h> 24 22 25 - /* List of coreboot entry structures that is used */ 26 - struct lb_cbmem_ref { 27 - uint32_t tag; 28 - uint32_t size; 29 - 30 - uint64_t cbmem_addr; 31 - }; 32 - 33 23 /* Coreboot table header structure */ 34 24 struct coreboot_table_header { 35 25 char signature[4]; ··· 32 38 u32 table_entries; 33 39 }; 34 40 35 - /* Retrieve coreboot table entry with tag *tag* and copy it to data */ 36 - int coreboot_table_find(int tag, void *data, size_t data_size); 41 + /* List of coreboot entry structures that is used */ 42 + /* Generic */ 43 + struct coreboot_table_entry { 44 + u32 tag; 45 + u32 size; 46 + }; 47 + 48 + /* Points to a CBMEM entry */ 49 + struct lb_cbmem_ref { 50 + u32 tag; 51 + u32 size; 52 + 53 + u64 cbmem_addr; 54 + }; 55 + 56 + /* Describes framebuffer setup by coreboot */ 57 + struct lb_framebuffer { 58 + u32 tag; 59 + u32 size; 60 + 61 + u64 physical_address; 62 + u32 x_resolution; 63 + u32 y_resolution; 64 + u32 bytes_per_line; 65 + u8 bits_per_pixel; 66 + u8 red_mask_pos; 67 + u8 red_mask_size; 68 + u8 green_mask_pos; 69 + u8 green_mask_size; 70 + u8 blue_mask_pos; 71 + u8 blue_mask_size; 72 + u8 reserved_mask_pos; 73 + u8 reserved_mask_size; 74 + }; 75 + 76 + /* A device, additionally with information from coreboot. */ 77 + struct coreboot_device { 78 + struct device dev; 79 + union { 80 + struct coreboot_table_entry entry; 81 + struct lb_cbmem_ref cbmem_ref; 82 + struct lb_framebuffer framebuffer; 83 + }; 84 + }; 85 + 86 + /* A driver for handling devices described in coreboot tables. 
*/ 87 + struct coreboot_driver { 88 + int (*probe)(struct coreboot_device *); 89 + int (*remove)(struct coreboot_device *); 90 + struct device_driver drv; 91 + u32 tag; 92 + }; 93 + 94 + /* Register a driver that uses the data from a coreboot table. */ 95 + int coreboot_driver_register(struct coreboot_driver *driver); 96 + 97 + /* Unregister a driver that uses the data from a coreboot table. */ 98 + void coreboot_driver_unregister(struct coreboot_driver *driver); 37 99 38 100 /* Initialize coreboot table module given a pointer to iomem */ 39 - int coreboot_table_init(void __iomem *ptr); 101 + int coreboot_table_init(struct device *dev, void __iomem *ptr); 40 102 41 103 /* Cleanup coreboot table module */ 42 104 int coreboot_table_exit(void);
+115
drivers/firmware/google/framebuffer-coreboot.c
··· 1 + /* 2 + * framebuffer-coreboot.c 3 + * 4 + * Memory based framebuffer accessed through coreboot table. 5 + * 6 + * Copyright 2012-2013 David Herrmann <dh.herrmann@gmail.com> 7 + * Copyright 2017 Google Inc. 8 + * Copyright 2017 Samuel Holland <samuel@sholland.org> 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License v2.0 as published by 12 + * the Free Software Foundation. 13 + * 14 + * This program is distributed in the hope that it will be useful, 15 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 16 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 17 + * GNU General Public License for more details. 18 + */ 19 + 20 + #include <linux/device.h> 21 + #include <linux/kernel.h> 22 + #include <linux/mm.h> 23 + #include <linux/module.h> 24 + #include <linux/platform_data/simplefb.h> 25 + #include <linux/platform_device.h> 26 + 27 + #include "coreboot_table.h" 28 + 29 + #define CB_TAG_FRAMEBUFFER 0x12 30 + 31 + static const struct simplefb_format formats[] = SIMPLEFB_FORMATS; 32 + 33 + static int framebuffer_probe(struct coreboot_device *dev) 34 + { 35 + int i; 36 + u32 length; 37 + struct lb_framebuffer *fb = &dev->framebuffer; 38 + struct platform_device *pdev; 39 + struct resource res; 40 + struct simplefb_platform_data pdata = { 41 + .width = fb->x_resolution, 42 + .height = fb->y_resolution, 43 + .stride = fb->bytes_per_line, 44 + .format = NULL, 45 + }; 46 + 47 + for (i = 0; i < ARRAY_SIZE(formats); ++i) { 48 + if (fb->bits_per_pixel == formats[i].bits_per_pixel && 49 + fb->red_mask_pos == formats[i].red.offset && 50 + fb->red_mask_size == formats[i].red.length && 51 + fb->green_mask_pos == formats[i].green.offset && 52 + fb->green_mask_size == formats[i].green.length && 53 + fb->blue_mask_pos == formats[i].blue.offset && 54 + fb->blue_mask_size == formats[i].blue.length && 55 + fb->reserved_mask_pos == formats[i].transp.offset && 56 + 
fb->reserved_mask_size == formats[i].transp.length) 57 + pdata.format = formats[i].name; 58 + } 59 + if (!pdata.format) 60 + return -ENODEV; 61 + 62 + memset(&res, 0, sizeof(res)); 63 + res.flags = IORESOURCE_MEM | IORESOURCE_BUSY; 64 + res.name = "Coreboot Framebuffer"; 65 + res.start = fb->physical_address; 66 + length = PAGE_ALIGN(fb->y_resolution * fb->bytes_per_line); 67 + res.end = res.start + length - 1; 68 + if (res.end <= res.start) 69 + return -EINVAL; 70 + 71 + pdev = platform_device_register_resndata(&dev->dev, 72 + "simple-framebuffer", 0, 73 + &res, 1, &pdata, 74 + sizeof(pdata)); 75 + if (IS_ERR(pdev)) 76 + pr_warn("coreboot: could not register framebuffer\n"); 77 + else 78 + dev_set_drvdata(&dev->dev, pdev); 79 + 80 + return PTR_ERR_OR_ZERO(pdev); 81 + } 82 + 83 + static int framebuffer_remove(struct coreboot_device *dev) 84 + { 85 + struct platform_device *pdev = dev_get_drvdata(&dev->dev); 86 + 87 + platform_device_unregister(pdev); 88 + 89 + return 0; 90 + } 91 + 92 + static struct coreboot_driver framebuffer_driver = { 93 + .probe = framebuffer_probe, 94 + .remove = framebuffer_remove, 95 + .drv = { 96 + .name = "framebuffer", 97 + }, 98 + .tag = CB_TAG_FRAMEBUFFER, 99 + }; 100 + 101 + static int __init coreboot_framebuffer_init(void) 102 + { 103 + return coreboot_driver_register(&framebuffer_driver); 104 + } 105 + 106 + static void coreboot_framebuffer_exit(void) 107 + { 108 + coreboot_driver_unregister(&framebuffer_driver); 109 + } 110 + 111 + module_init(coreboot_framebuffer_init); 112 + module_exit(coreboot_framebuffer_exit); 113 + 114 + MODULE_AUTHOR("Samuel Holland <samuel@sholland.org>"); 115 + MODULE_LICENSE("GPL");
+18 -33
drivers/firmware/google/memconsole-coreboot.c
··· 15 15 * GNU General Public License for more details. 16 16 */ 17 17 18 + #include <linux/device.h> 18 19 #include <linux/kernel.h> 19 20 #include <linux/module.h> 20 - #include <linux/platform_device.h> 21 21 22 22 #include "memconsole.h" 23 23 #include "coreboot_table.h" ··· 73 73 return done; 74 74 } 75 75 76 - static int memconsole_coreboot_init(phys_addr_t physaddr) 76 + static int memconsole_probe(struct coreboot_device *dev) 77 77 { 78 78 struct cbmem_cons __iomem *tmp_cbmc; 79 79 80 - tmp_cbmc = memremap(physaddr, sizeof(*tmp_cbmc), MEMREMAP_WB); 80 + tmp_cbmc = memremap(dev->cbmem_ref.cbmem_addr, 81 + sizeof(*tmp_cbmc), MEMREMAP_WB); 81 82 82 83 if (!tmp_cbmc) 83 84 return -ENOMEM; 84 85 85 86 /* Read size only once to prevent overrun attack through /dev/mem. */ 86 87 cbmem_console_size = tmp_cbmc->size_dont_access_after_boot; 87 - cbmem_console = memremap(physaddr, 88 + cbmem_console = memremap(dev->cbmem_ref.cbmem_addr, 88 89 cbmem_console_size + sizeof(*cbmem_console), 89 90 MEMREMAP_WB); 90 91 memunmap(tmp_cbmc); ··· 94 93 return -ENOMEM; 95 94 96 95 memconsole_setup(memconsole_coreboot_read); 97 - return 0; 98 - } 99 - 100 - static int memconsole_probe(struct platform_device *pdev) 101 - { 102 - int ret; 103 - struct lb_cbmem_ref entry; 104 - 105 - ret = coreboot_table_find(CB_TAG_CBMEM_CONSOLE, &entry, sizeof(entry)); 106 - if (ret) 107 - return ret; 108 - 109 - ret = memconsole_coreboot_init(entry.cbmem_addr); 110 - if (ret) 111 - return ret; 112 96 113 97 return memconsole_sysfs_init(); 114 98 } 115 99 116 - static int memconsole_remove(struct platform_device *pdev) 100 + static int memconsole_remove(struct coreboot_device *dev) 117 101 { 118 102 memconsole_exit(); 119 103 ··· 108 122 return 0; 109 123 } 110 124 111 - static struct platform_driver memconsole_driver = { 125 + static struct coreboot_driver memconsole_driver = { 112 126 .probe = memconsole_probe, 113 127 .remove = memconsole_remove, 114 - .driver = { 128 + .drv = { 115 129 .name = 
"memconsole", 116 130 }, 131 + .tag = CB_TAG_CBMEM_CONSOLE, 117 132 }; 118 133 119 - static int __init platform_memconsole_init(void) 134 + static void coreboot_memconsole_exit(void) 120 135 { 121 - struct platform_device *pdev; 122 - 123 - pdev = platform_device_register_simple("memconsole", -1, NULL, 0); 124 - if (IS_ERR(pdev)) 125 - return PTR_ERR(pdev); 126 - 127 - platform_driver_register(&memconsole_driver); 128 - 129 - return 0; 136 + coreboot_driver_unregister(&memconsole_driver); 130 137 } 131 138 132 - module_init(platform_memconsole_init); 139 + static int __init coreboot_memconsole_init(void) 140 + { 141 + return coreboot_driver_register(&memconsole_driver); 142 + } 143 + 144 + module_exit(coreboot_memconsole_exit); 145 + module_init(coreboot_memconsole_init); 133 146 134 147 MODULE_AUTHOR("Google, Inc."); 135 148 MODULE_LICENSE("GPL");
+12 -31
drivers/firmware/google/vpd.c
··· 286 286 return 0; 287 287 } 288 288 289 - static int vpd_probe(struct platform_device *pdev) 289 + static int vpd_probe(struct coreboot_device *dev) 290 290 { 291 291 int ret; 292 - struct lb_cbmem_ref entry; 293 - 294 - ret = coreboot_table_find(CB_TAG_VPD, &entry, sizeof(entry)); 295 - if (ret) 296 - return ret; 297 292 298 293 vpd_kobj = kobject_create_and_add("vpd", firmware_kobj); 299 294 if (!vpd_kobj) 300 295 return -ENOMEM; 301 296 302 - ret = vpd_sections_init(entry.cbmem_addr); 297 + ret = vpd_sections_init(dev->cbmem_ref.cbmem_addr); 303 298 if (ret) { 304 299 kobject_put(vpd_kobj); 305 300 return ret; ··· 303 308 return 0; 304 309 } 305 310 306 - static int vpd_remove(struct platform_device *pdev) 311 + static int vpd_remove(struct coreboot_device *dev) 307 312 { 308 313 vpd_section_destroy(&ro_vpd); 309 314 vpd_section_destroy(&rw_vpd); ··· 313 318 return 0; 314 319 } 315 320 316 - static struct platform_driver vpd_driver = { 321 + static struct coreboot_driver vpd_driver = { 317 322 .probe = vpd_probe, 318 323 .remove = vpd_remove, 319 - .driver = { 324 + .drv = { 320 325 .name = "vpd", 321 326 }, 327 + .tag = CB_TAG_VPD, 322 328 }; 323 329 324 - static struct platform_device *vpd_pdev; 325 - 326 - static int __init vpd_platform_init(void) 330 + static int __init coreboot_vpd_init(void) 327 331 { 328 - int ret; 329 - 330 - ret = platform_driver_register(&vpd_driver); 331 - if (ret) 332 - return ret; 333 - 334 - vpd_pdev = platform_device_register_simple("vpd", -1, NULL, 0); 335 - if (IS_ERR(vpd_pdev)) { 336 - platform_driver_unregister(&vpd_driver); 337 - return PTR_ERR(vpd_pdev); 338 - } 339 - 340 - return 0; 332 + return coreboot_driver_register(&vpd_driver); 341 333 } 342 334 343 - static void __exit vpd_platform_exit(void) 335 + static void __exit coreboot_vpd_exit(void) 344 336 { 345 - platform_device_unregister(vpd_pdev); 346 - platform_driver_unregister(&vpd_driver); 337 + coreboot_driver_unregister(&vpd_driver); 347 338 } 348 339 349 - 
module_init(vpd_platform_init); 350 - module_exit(vpd_platform_exit); 340 + module_init(coreboot_vpd_init); 341 + module_exit(coreboot_vpd_exit); 351 342 352 343 MODULE_AUTHOR("Google, Inc."); 353 344 MODULE_LICENSE("GPL");
+7 -1
drivers/fpga/Kconfig
··· 53 53 config FPGA_MGR_ZYNQ_FPGA 54 54 tristate "Xilinx Zynq FPGA" 55 55 depends on ARCH_ZYNQ || COMPILE_TEST 56 - depends on HAS_DMA 57 56 help 58 57 FPGA manager driver support for Xilinx Zynq FPGAs. 59 58 ··· 68 69 depends on OF && SPI 69 70 help 70 71 FPGA manager driver support for Lattice iCE40 FPGAs over SPI. 72 + 73 + config FPGA_MGR_MACHXO2_SPI 74 + tristate "Lattice MachXO2 SPI" 75 + depends on SPI 76 + help 77 + FPGA manager driver support for Lattice MachXO2 configuration 78 + over slave SPI interface. 71 79 72 80 config FPGA_MGR_TS73XX 73 81 tristate "Technologic Systems TS-73xx SBC FPGA Manager"
+1
drivers/fpga/Makefile
··· 10 10 obj-$(CONFIG_FPGA_MGR_ALTERA_CVP) += altera-cvp.o 11 11 obj-$(CONFIG_FPGA_MGR_ALTERA_PS_SPI) += altera-ps-spi.o 12 12 obj-$(CONFIG_FPGA_MGR_ICE40_SPI) += ice40-spi.o 13 + obj-$(CONFIG_FPGA_MGR_MACHXO2_SPI) += machxo2-spi.o 13 14 obj-$(CONFIG_FPGA_MGR_SOCFPGA) += socfpga.o 14 15 obj-$(CONFIG_FPGA_MGR_SOCFPGA_A10) += socfpga-a10.o 15 16 obj-$(CONFIG_FPGA_MGR_TS73XX) += ts73xx-fpga.o
+14 -5
drivers/fpga/altera-cvp.c
··· 401 401 const struct pci_device_id *dev_id) 402 402 { 403 403 struct altera_cvp_conf *conf; 404 + struct fpga_manager *mgr; 404 405 u16 cmd, val; 405 406 int ret; 406 407 ··· 453 452 snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s @%s", 454 453 ALTERA_CVP_MGR_NAME, pci_name(pdev)); 455 454 456 - ret = fpga_mgr_register(&pdev->dev, conf->mgr_name, 457 - &altera_cvp_ops, conf); 458 - if (ret) 455 + mgr = fpga_mgr_create(&pdev->dev, conf->mgr_name, 456 + &altera_cvp_ops, conf); 457 + if (!mgr) 458 + return -ENOMEM; 459 + 460 + pci_set_drvdata(pdev, mgr); 461 + 462 + ret = fpga_mgr_register(mgr); 463 + if (ret) { 464 + fpga_mgr_free(mgr); 459 465 goto err_unmap; 466 + } 460 467 461 468 ret = driver_create_file(&altera_cvp_driver.driver, 462 469 &driver_attr_chkcfg); 463 470 if (ret) { 464 471 dev_err(&pdev->dev, "Can't create sysfs chkcfg file\n"); 465 - fpga_mgr_unregister(&pdev->dev); 472 + fpga_mgr_unregister(mgr); 466 473 goto err_unmap; 467 474 } 468 475 ··· 492 483 u16 cmd; 493 484 494 485 driver_remove_file(&altera_cvp_driver.driver, &driver_attr_chkcfg); 495 - fpga_mgr_unregister(&pdev->dev); 486 + fpga_mgr_unregister(mgr); 496 487 pci_iounmap(pdev, conf->map); 497 488 pci_release_region(pdev, CVP_BAR); 498 489 pci_read_config_word(pdev, PCI_COMMAND, &cmd);
+17 -17
drivers/fpga/altera-fpga2sdram.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA to SDRAM Bridge Driver for Altera SoCFPGA Devices 3 4 * 4 5 * Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - * 15 - * You should have received a copy of the GNU General Public License along with 16 - * this program. If not, see <http://www.gnu.org/licenses/>. 17 6 */ 18 7 19 8 /* ··· 95 106 { 96 107 struct device *dev = &pdev->dev; 97 108 struct alt_fpga2sdram_data *priv; 109 + struct fpga_bridge *br; 98 110 u32 enable; 99 111 struct regmap *sysmgr; 100 112 int ret = 0; ··· 121 131 /* Get f2s bridge configuration saved in handoff register */ 122 132 regmap_read(sysmgr, SYSMGR_ISWGRP_HANDOFF3, &priv->mask); 123 133 124 - ret = fpga_bridge_register(dev, F2S_BRIDGE_NAME, 125 - &altera_fpga2sdram_br_ops, priv); 126 - if (ret) 134 + br = fpga_bridge_create(dev, F2S_BRIDGE_NAME, 135 + &altera_fpga2sdram_br_ops, priv); 136 + if (!br) 137 + return -ENOMEM; 138 + 139 + platform_set_drvdata(pdev, br); 140 + 141 + ret = fpga_bridge_register(br); 142 + if (ret) { 143 + fpga_bridge_free(br); 127 144 return ret; 145 + } 128 146 129 147 dev_info(dev, "driver initialized with handoff %08x\n", priv->mask); 130 148 ··· 144 146 (enable ? 
"enabling" : "disabling")); 145 147 ret = _alt_fpga2sdram_enable_set(priv, enable); 146 148 if (ret) { 147 - fpga_bridge_unregister(&pdev->dev); 149 + fpga_bridge_unregister(br); 148 150 return ret; 149 151 } 150 152 } ··· 155 157 156 158 static int alt_fpga_bridge_remove(struct platform_device *pdev) 157 159 { 158 - fpga_bridge_unregister(&pdev->dev); 160 + struct fpga_bridge *br = platform_get_drvdata(pdev); 161 + 162 + fpga_bridge_unregister(br); 159 163 160 164 return 0; 161 165 }
+20 -15
drivers/fpga/altera-freeze-bridge.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA Freeze Bridge Controller 3 4 * 4 5 * Copyright (C) 2016 Altera Corporation. All rights reserved. 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - * 15 - * You should have received a copy of the GNU General Public License along with 16 - * this program. If not, see <http://www.gnu.org/licenses/>. 17 6 */ 18 7 #include <linux/delay.h> 19 8 #include <linux/io.h> ··· 210 221 struct device_node *np = pdev->dev.of_node; 211 222 void __iomem *base_addr; 212 223 struct altera_freeze_br_data *priv; 224 + struct fpga_bridge *br; 213 225 struct resource *res; 214 226 u32 status, revision; 227 + int ret; 215 228 216 229 if (!np) 217 230 return -ENODEV; ··· 245 254 246 255 priv->base_addr = base_addr; 247 256 248 - return fpga_bridge_register(dev, FREEZE_BRIDGE_NAME, 249 - &altera_freeze_br_br_ops, priv); 257 + br = fpga_bridge_create(dev, FREEZE_BRIDGE_NAME, 258 + &altera_freeze_br_br_ops, priv); 259 + if (!br) 260 + return -ENOMEM; 261 + 262 + platform_set_drvdata(pdev, br); 263 + 264 + ret = fpga_bridge_register(br); 265 + if (ret) { 266 + fpga_bridge_free(br); 267 + return ret; 268 + } 269 + 270 + return 0; 250 271 } 251 272 252 273 static int altera_freeze_br_remove(struct platform_device *pdev) 253 274 { 254 - fpga_bridge_unregister(&pdev->dev); 275 + struct fpga_bridge *br = platform_get_drvdata(pdev); 276 + 277 + fpga_bridge_unregister(br); 255 278 256 279 return 0; 257 280 }
+20 -17
drivers/fpga/altera-hps2fpga.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA to/from HPS Bridge Driver for Altera SoCFPGA Devices 3 4 * ··· 7 6 * Includes this patch from the mailing list: 8 7 * fpga: altera-hps2fpga: fix HPS2FPGA bridge visibility to L3 masters 9 8 * Signed-off-by: Anatolij Gustschin <agust@denx.de> 10 - * 11 - * This program is free software; you can redistribute it and/or modify it 12 - * under the terms and conditions of the GNU General Public License, 13 - * version 2, as published by the Free Software Foundation. 14 - * 15 - * This program is distributed in the hope it will be useful, but WITHOUT 16 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 17 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 18 - * more details. 19 - * 20 - * You should have received a copy of the GNU General Public License along with 21 - * this program. If not, see <http://www.gnu.org/licenses/>. 22 9 */ 23 10 24 11 /* ··· 128 139 struct device *dev = &pdev->dev; 129 140 struct altera_hps2fpga_data *priv; 130 141 const struct of_device_id *of_id; 142 + struct fpga_bridge *br; 131 143 u32 enable; 132 144 int ret; 133 145 ··· 180 190 } 181 191 } 182 192 183 - ret = fpga_bridge_register(dev, priv->name, &altera_hps2fpga_br_ops, 184 - priv); 185 - err: 193 + br = fpga_bridge_create(dev, priv->name, &altera_hps2fpga_br_ops, priv); 194 + if (!br) { 195 + ret = -ENOMEM; 196 + goto err; 197 + } 198 + 199 + platform_set_drvdata(pdev, br); 200 + 201 + ret = fpga_bridge_register(br); 186 202 if (ret) 187 - clk_disable_unprepare(priv->clk); 203 + goto err_free; 204 + 205 + return 0; 206 + 207 + err_free: 208 + fpga_bridge_free(br); 209 + err: 210 + clk_disable_unprepare(priv->clk); 188 211 189 212 return ret; 190 213 } ··· 207 204 struct fpga_bridge *bridge = platform_get_drvdata(pdev); 208 205 struct altera_hps2fpga_data *priv = bridge->priv; 209 206 210 - fpga_bridge_unregister(&pdev->dev); 207 + fpga_bridge_unregister(bridge); 211 208 212 
209 clk_disable_unprepare(priv->clk); 213 210
+1 -12
drivers/fpga/altera-pr-ip-core-plat.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Driver for Altera Partial Reconfiguration IP Core 3 4 * ··· 6 5 * 7 6 * Based on socfpga-a10.c Copyright (C) 2015-2016 Altera Corporation 8 7 * by Alan Tull <atull@opensource.altera.com> 9 - * 10 - * This program is free software; you can redistribute it and/or modify it 11 - * under the terms and conditions of the GNU General Public License, 12 - * version 2, as published by the Free Software Foundation. 13 - * 14 - * This program is distributed in the hope it will be useful, but WITHOUT 15 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 16 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 17 - * more details. 18 - * 19 - * You should have received a copy of the GNU General Public License along with 20 - * this program. If not, see <http://www.gnu.org/licenses/>. 21 8 */ 22 9 #include <linux/fpga/altera-pr-ip-core.h> 23 10 #include <linux/module.h>
+17 -14
drivers/fpga/altera-pr-ip-core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Driver for Altera Partial Reconfiguration IP Core 3 4 * ··· 6 5 * 7 6 * Based on socfpga-a10.c Copyright (C) 2015-2016 Altera Corporation 8 7 * by Alan Tull <atull@opensource.altera.com> 9 - * 10 - * This program is free software; you can redistribute it and/or modify it 11 - * under the terms and conditions of the GNU General Public License, 12 - * version 2, as published by the Free Software Foundation. 13 - * 14 - * This program is distributed in the hope it will be useful, but WITHOUT 15 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 16 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 17 - * more details. 18 - * 19 - * You should have received a copy of the GNU General Public License along with 20 - * this program. If not, see <http://www.gnu.org/licenses/>. 21 8 */ 22 9 #include <linux/delay.h> 23 10 #include <linux/fpga/altera-pr-ip-core.h> ··· 176 187 int alt_pr_register(struct device *dev, void __iomem *reg_base) 177 188 { 178 189 struct alt_pr_priv *priv; 190 + struct fpga_manager *mgr; 191 + int ret; 179 192 u32 val; 180 193 181 194 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); ··· 192 201 (val & ALT_PR_CSR_STATUS_MSK) >> ALT_PR_CSR_STATUS_SFT, 193 202 (int)(val & ALT_PR_CSR_PR_START)); 194 203 195 - return fpga_mgr_register(dev, dev_name(dev), &alt_pr_ops, priv); 204 + mgr = fpga_mgr_create(dev, dev_name(dev), &alt_pr_ops, priv); 205 + if (!mgr) 206 + return -ENOMEM; 207 + 208 + dev_set_drvdata(dev, mgr); 209 + 210 + ret = fpga_mgr_register(mgr); 211 + if (ret) 212 + fpga_mgr_free(mgr); 213 + 214 + return ret; 196 215 } 197 216 EXPORT_SYMBOL_GPL(alt_pr_register); 198 217 199 218 int alt_pr_unregister(struct device *dev) 200 219 { 220 + struct fpga_manager *mgr = dev_get_drvdata(dev); 221 + 201 222 dev_dbg(dev, "%s\n", __func__); 202 223 203 - fpga_mgr_unregister(dev); 224 + fpga_mgr_unregister(mgr); 204 225 205 226 return 0; 206 227 }
+17 -3
drivers/fpga/altera-ps-spi.c
··· 238 238 { 239 239 struct altera_ps_conf *conf; 240 240 const struct of_device_id *of_id; 241 + struct fpga_manager *mgr; 242 + int ret; 241 243 242 244 conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL); 243 245 if (!conf) ··· 275 273 snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s %s", 276 274 dev_driver_string(&spi->dev), dev_name(&spi->dev)); 277 275 278 - return fpga_mgr_register(&spi->dev, conf->mgr_name, 279 - &altera_ps_ops, conf); 276 + mgr = fpga_mgr_create(&spi->dev, conf->mgr_name, 277 + &altera_ps_ops, conf); 278 + if (!mgr) 279 + return -ENOMEM; 280 + 281 + spi_set_drvdata(spi, mgr); 282 + 283 + ret = fpga_mgr_register(mgr); 284 + if (ret) 285 + fpga_mgr_free(mgr); 286 + 287 + return ret; 280 288 } 281 289 282 290 static int altera_ps_remove(struct spi_device *spi) 283 291 { 284 - fpga_mgr_unregister(&spi->dev); 292 + struct fpga_manager *mgr = spi_get_drvdata(spi); 293 + 294 + fpga_mgr_unregister(mgr); 285 295 286 296 return 0; 287 297 }
+49 -37
drivers/fpga/fpga-bridge.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA Bridge Framework Driver 3 4 * 4 5 * Copyright (C) 2013-2016 Altera Corporation, All Rights Reserved. 5 6 * Copyright (C) 2017 Intel Corporation 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms and conditions of the GNU General Public License, 9 - * version 2, as published by the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope it will be useful, but WITHOUT 12 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 - * more details. 15 - * 16 - * You should have received a copy of the GNU General Public License along with 17 - * this program. If not, see <http://www.gnu.org/licenses/>. 18 7 */ 19 8 #include <linux/fpga/fpga-bridge.h> 20 9 #include <linux/idr.h> ··· 121 132 /** 122 133 * fpga_bridge_get - get an exclusive reference to a fpga bridge 123 134 * @dev: parent device that fpga bridge was registered with 135 + * @info: fpga manager info 124 136 * 125 137 * Given a device, get an exclusive reference to a fpga bridge. 126 138 * ··· 318 328 ATTRIBUTE_GROUPS(fpga_bridge); 319 329 320 330 /** 321 - * fpga_bridge_register - register a fpga bridge driver 331 + * fpga_bridge_create - create and initialize a struct fpga_bridge 322 332 * @dev: FPGA bridge device from pdev 323 333 * @name: FPGA bridge name 324 334 * @br_ops: pointer to structure of fpga bridge ops 325 335 * @priv: FPGA bridge private data 326 336 * 327 - * Return: 0 for success, error code otherwise. 
337 + * Return: struct fpga_bridge or NULL 328 338 */ 329 - int fpga_bridge_register(struct device *dev, const char *name, 330 - const struct fpga_bridge_ops *br_ops, void *priv) 339 + struct fpga_bridge *fpga_bridge_create(struct device *dev, const char *name, 340 + const struct fpga_bridge_ops *br_ops, 341 + void *priv) 331 342 { 332 343 struct fpga_bridge *bridge; 333 344 int id, ret = 0; 334 345 335 346 if (!name || !strlen(name)) { 336 347 dev_err(dev, "Attempt to register with no name!\n"); 337 - return -EINVAL; 348 + return NULL; 338 349 } 339 350 340 351 bridge = kzalloc(sizeof(*bridge), GFP_KERNEL); 341 352 if (!bridge) 342 - return -ENOMEM; 353 + return NULL; 343 354 344 355 id = ida_simple_get(&fpga_bridge_ida, 0, 0, GFP_KERNEL); 345 356 if (id < 0) { ··· 361 370 bridge->dev.parent = dev; 362 371 bridge->dev.of_node = dev->of_node; 363 372 bridge->dev.id = id; 364 - dev_set_drvdata(dev, bridge); 365 373 366 374 ret = dev_set_name(&bridge->dev, "br%d", id); 367 375 if (ret) 368 376 goto error_device; 369 377 370 - ret = device_add(&bridge->dev); 371 - if (ret) 372 - goto error_device; 373 - 374 - of_platform_populate(dev->of_node, NULL, NULL, dev); 375 - 376 - dev_info(bridge->dev.parent, "fpga bridge [%s] registered\n", 377 - bridge->name); 378 - 379 - return 0; 378 + return bridge; 380 379 381 380 error_device: 382 381 ida_simple_remove(&fpga_bridge_ida, id); 383 382 error_kfree: 384 383 kfree(bridge); 385 384 386 - return ret; 385 + return NULL; 386 + } 387 + EXPORT_SYMBOL_GPL(fpga_bridge_create); 388 + 389 + /** 390 + * fpga_bridge_free - free a fpga bridge and its id 391 + * @bridge: FPGA bridge struct created by fpga_bridge_create 392 + */ 393 + void fpga_bridge_free(struct fpga_bridge *bridge) 394 + { 395 + ida_simple_remove(&fpga_bridge_ida, bridge->dev.id); 396 + kfree(bridge); 397 + } 398 + EXPORT_SYMBOL_GPL(fpga_bridge_free); 399 + 400 + /** 401 + * fpga_bridge_register - register a fpga bridge 402 + * @bridge: FPGA bridge struct created by 
fpga_bridge_create 403 + * 404 + * Return: 0 for success, error code otherwise. 405 + */ 406 + int fpga_bridge_register(struct fpga_bridge *bridge) 407 + { 408 + struct device *dev = &bridge->dev; 409 + int ret; 410 + 411 + ret = device_add(dev); 412 + if (ret) 413 + return ret; 414 + 415 + of_platform_populate(dev->of_node, NULL, NULL, dev); 416 + 417 + dev_info(dev->parent, "fpga bridge [%s] registered\n", bridge->name); 418 + 419 + return 0; 387 420 } 388 421 EXPORT_SYMBOL_GPL(fpga_bridge_register); 389 422 390 423 /** 391 - * fpga_bridge_unregister - unregister a fpga bridge driver 392 - * @dev: FPGA bridge device from pdev 424 + * fpga_bridge_unregister - unregister and free a fpga bridge 425 + * @bridge: FPGA bridge struct created by fpga_bridge_create 393 426 */ 394 - void fpga_bridge_unregister(struct device *dev) 427 + void fpga_bridge_unregister(struct fpga_bridge *bridge) 395 428 { 396 - struct fpga_bridge *bridge = dev_get_drvdata(dev); 397 - 398 429 /* 399 430 * If the low level driver provides a method for putting bridge into 400 431 * a desired state upon unregister, do it. ··· 432 419 { 433 420 struct fpga_bridge *bridge = to_fpga_bridge(dev); 434 421 435 - ida_simple_remove(&fpga_bridge_ida, bridge->dev.id); 436 - kfree(bridge); 422 + fpga_bridge_free(bridge); 437 423 } 438 424 439 425 static int __init fpga_bridge_dev_init(void)
+82 -47
drivers/fpga/fpga-mgr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA Manager Core 3 4 * ··· 7 6 * 8 7 * With code from the mailing list: 9 8 * Copyright (C) 2013 Xilinx, Inc. 10 - * 11 - * This program is free software; you can redistribute it and/or modify it 12 - * under the terms and conditions of the GNU General Public License, 13 - * version 2, as published by the Free Software Foundation. 14 - * 15 - * This program is distributed in the hope it will be useful, but WITHOUT 16 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 17 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 18 - * more details. 19 - * 20 - * You should have received a copy of the GNU General Public License along with 21 - * this program. If not, see <http://www.gnu.org/licenses/>. 22 9 */ 23 10 #include <linux/firmware.h> 24 11 #include <linux/fpga/fpga-mgr.h> ··· 21 32 static DEFINE_IDA(fpga_mgr_ida); 22 33 static struct class *fpga_mgr_class; 23 34 35 + /** 36 + * fpga_image_info_alloc - Allocate a FPGA image info struct 37 + * @dev: owning device 38 + * 39 + * Return: struct fpga_image_info or NULL 40 + */ 24 41 struct fpga_image_info *fpga_image_info_alloc(struct device *dev) 25 42 { 26 43 struct fpga_image_info *info; ··· 45 50 } 46 51 EXPORT_SYMBOL_GPL(fpga_image_info_alloc); 47 52 53 + /** 54 + * fpga_image_info_free - Free a FPGA image info struct 55 + * @info: FPGA image info struct to free 56 + */ 48 57 void fpga_image_info_free(struct fpga_image_info *info) 49 58 { 50 59 struct device *dev; ··· 233 234 /** 234 235 * fpga_mgr_buf_load - load fpga from image in buffer 235 236 * @mgr: fpga manager 236 - * @flags: flags setting fpga confuration modes 237 + * @info: fpga image info 237 238 * @buf: buffer contain fpga image 238 239 * @count: byte count of buf 239 240 * ··· 342 343 return ret; 343 344 } 344 345 346 + /** 347 + * fpga_mgr_load - load FPGA from scatter/gather table, buffer, or firmware 348 + * @mgr: fpga manager 349 + * @info: 
fpga image information. 350 + * 351 + * Load the FPGA from an image which is indicated in @info. If successful, the 352 + * FPGA ends up in operating mode. 353 + * 354 + * Return: 0 on success, negative error code otherwise. 355 + */ 345 356 int fpga_mgr_load(struct fpga_manager *mgr, struct fpga_image_info *info) 346 357 { 347 358 if (info->sgt) ··· 438 429 } 439 430 440 431 /** 441 - * fpga_mgr_get - get a reference to a fpga mgr 432 + * fpga_mgr_get - Given a device, get a reference to a fpga mgr. 442 433 * @dev: parent device that fpga mgr was registered with 443 - * 444 - * Given a device, get a reference to a fpga mgr. 445 434 * 446 435 * Return: fpga manager struct or IS_ERR() condition containing error code. 447 436 */ ··· 460 453 } 461 454 462 455 /** 463 - * of_fpga_mgr_get - get a reference to a fpga mgr 464 - * @node: device node 456 + * of_fpga_mgr_get - Given a device node, get a reference to a fpga mgr. 465 457 * 466 - * Given a device node, get a reference to a fpga mgr. 458 + * @node: device node 467 459 * 468 460 * Return: fpga manager struct or IS_ERR() condition containing error code. 469 461 */ ··· 495 489 * @mgr: fpga manager 496 490 * 497 491 * Given a pointer to FPGA Manager (from fpga_mgr_get() or 498 - * of_fpga_mgr_put()) attempt to get the mutex. 492 + * of_fpga_mgr_put()) attempt to get the mutex. The user should call 493 + * fpga_mgr_lock() and verify that it returns 0 before attempting to 494 + * program the FPGA. Likewise, the user should call fpga_mgr_unlock 495 + * when done programming the FPGA. 
499 496 * 500 497 * Return: 0 for success or -EBUSY 501 498 */ ··· 514 505 EXPORT_SYMBOL_GPL(fpga_mgr_lock); 515 506 516 507 /** 517 - * fpga_mgr_unlock - Unlock FPGA manager 508 + * fpga_mgr_unlock - Unlock FPGA manager after done programming 518 509 * @mgr: fpga manager 519 510 */ 520 511 void fpga_mgr_unlock(struct fpga_manager *mgr) ··· 524 515 EXPORT_SYMBOL_GPL(fpga_mgr_unlock); 525 516 526 517 /** 527 - * fpga_mgr_register - register a low level fpga manager driver 518 + * fpga_mgr_create - create and initialize a FPGA manager struct 528 519 * @dev: fpga manager device from pdev 529 520 * @name: fpga manager name 530 521 * @mops: pointer to structure of fpga manager ops 531 522 * @priv: fpga manager private data 532 523 * 533 - * Return: 0 on success, negative error code otherwise. 524 + * Return: pointer to struct fpga_manager or NULL 534 525 */ 535 - int fpga_mgr_register(struct device *dev, const char *name, 536 - const struct fpga_manager_ops *mops, 537 - void *priv) 526 + struct fpga_manager *fpga_mgr_create(struct device *dev, const char *name, 527 + const struct fpga_manager_ops *mops, 528 + void *priv) 538 529 { 539 530 struct fpga_manager *mgr; 540 531 int id, ret; ··· 543 534 !mops->write_init || (!mops->write && !mops->write_sg) || 544 535 (mops->write && mops->write_sg)) { 545 536 dev_err(dev, "Attempt to register without fpga_manager_ops\n"); 546 - return -EINVAL; 537 + return NULL; 547 538 } 548 539 549 540 if (!name || !strlen(name)) { 550 541 dev_err(dev, "Attempt to register with no name!\n"); 551 - return -EINVAL; 542 + return NULL; 552 543 } 553 544 554 545 mgr = kzalloc(sizeof(*mgr), GFP_KERNEL); 555 546 if (!mgr) 556 - return -ENOMEM; 547 + return NULL; 557 548 558 549 id = ida_simple_get(&fpga_mgr_ida, 0, 0, GFP_KERNEL); 559 550 if (id < 0) { ··· 567 558 mgr->mops = mops; 568 559 mgr->priv = priv; 569 560 570 - /* 571 - * Initialize framework state by requesting low level driver read state 572 - * from device. 
FPGA may be in reset mode or may have been programmed 573 - * by bootloader or EEPROM. 574 - */ 575 - mgr->state = mgr->mops->state(mgr); 576 - 577 561 device_initialize(&mgr->dev); 578 562 mgr->dev.class = fpga_mgr_class; 579 563 mgr->dev.groups = mops->groups; 580 564 mgr->dev.parent = dev; 581 565 mgr->dev.of_node = dev->of_node; 582 566 mgr->dev.id = id; 583 - dev_set_drvdata(dev, mgr); 584 567 585 568 ret = dev_set_name(&mgr->dev, "fpga%d", id); 586 569 if (ret) 587 570 goto error_device; 571 + 572 + return mgr; 573 + 574 + error_device: 575 + ida_simple_remove(&fpga_mgr_ida, id); 576 + error_kfree: 577 + kfree(mgr); 578 + 579 + return NULL; 580 + } 581 + EXPORT_SYMBOL_GPL(fpga_mgr_create); 582 + 583 + /** 584 + * fpga_mgr_free - deallocate a FPGA manager 585 + * @mgr: fpga manager struct created by fpga_mgr_create 586 + */ 587 + void fpga_mgr_free(struct fpga_manager *mgr) 588 + { 589 + ida_simple_remove(&fpga_mgr_ida, mgr->dev.id); 590 + kfree(mgr); 591 + } 592 + EXPORT_SYMBOL_GPL(fpga_mgr_free); 593 + 594 + /** 595 + * fpga_mgr_register - register a FPGA manager 596 + * @mgr: fpga manager struct created by fpga_mgr_create 597 + * 598 + * Return: 0 on success, negative error code otherwise. 599 + */ 600 + int fpga_mgr_register(struct fpga_manager *mgr) 601 + { 602 + int ret; 603 + 604 + /* 605 + * Initialize framework state by requesting low level driver read state 606 + * from device. FPGA may be in reset mode or may have been programmed 607 + * by bootloader or EEPROM. 
608 + */ 609 + mgr->state = mgr->mops->state(mgr); 588 610 589 611 ret = device_add(&mgr->dev); 590 612 if (ret) ··· 626 586 return 0; 627 587 628 588 error_device: 629 - ida_simple_remove(&fpga_mgr_ida, id); 630 - error_kfree: 631 - kfree(mgr); 589 + ida_simple_remove(&fpga_mgr_ida, mgr->dev.id); 632 590 633 591 return ret; 634 592 } 635 593 EXPORT_SYMBOL_GPL(fpga_mgr_register); 636 594 637 595 /** 638 - * fpga_mgr_unregister - unregister a low level fpga manager driver 639 - * @dev: fpga manager device from pdev 596 + * fpga_mgr_unregister - unregister and free a FPGA manager 597 + * @mgr: fpga manager struct 640 598 */ 641 - void fpga_mgr_unregister(struct device *dev) 599 + void fpga_mgr_unregister(struct fpga_manager *mgr) 642 600 { 643 - struct fpga_manager *mgr = dev_get_drvdata(dev); 644 - 645 601 dev_info(&mgr->dev, "%s %s\n", __func__, mgr->name); 646 602 647 603 /* ··· 655 619 { 656 620 struct fpga_manager *mgr = to_fpga_manager(dev); 657 621 658 - ida_simple_remove(&fpga_mgr_ida, mgr->dev.id); 659 - kfree(mgr); 622 + fpga_mgr_free(mgr); 660 623 } 661 624 662 625 static int __init fpga_mgr_class_init(void)
+64 -27
drivers/fpga/fpga-region.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA Region - Device Tree support for FPGA programming under Linux 3 4 * 4 5 * Copyright (C) 2013-2016 Altera Corporation 5 6 * Copyright (C) 2017 Intel Corporation 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms and conditions of the GNU General Public License, 9 - * version 2, as published by the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope it will be useful, but WITHOUT 12 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 - * more details. 15 - * 16 - * You should have received a copy of the GNU General Public License along with 17 - * this program. If not, see <http://www.gnu.org/licenses/>. 18 7 */ 19 - 20 8 #include <linux/fpga/fpga-bridge.h> 21 9 #include <linux/fpga/fpga-mgr.h> 22 10 #include <linux/fpga/fpga-region.h> ··· 81 93 82 94 /** 83 95 * fpga_region_program_fpga - program FPGA 96 + * 84 97 * @region: FPGA region 98 + * 85 99 * Program an FPGA using fpga image info (region->info). 100 + * If the region has a get_bridges function, the exclusive reference for the 101 + * bridges will be held if programming succeeds. This is intended to prevent 102 + * reprogramming the region until the caller considers it safe to do so. 103 + * The caller will need to call fpga_bridges_put() before attempting to 104 + * reprogram the region. 105 + * 86 106 * Return 0 for success or negative error code. 
87 107 */ 88 108 int fpga_region_program_fpga(struct fpga_region *region) ··· 158 162 } 159 163 EXPORT_SYMBOL_GPL(fpga_region_program_fpga); 160 164 161 - int fpga_region_register(struct device *dev, struct fpga_region *region) 165 + /** 166 + * fpga_region_create - alloc and init a struct fpga_region 167 + * @dev: device parent 168 + * @mgr: manager that programs this region 169 + * @get_bridges: optional function to get bridges to a list 170 + * 171 + * Return: struct fpga_region or NULL 172 + */ 173 + struct fpga_region 174 + *fpga_region_create(struct device *dev, 175 + struct fpga_manager *mgr, 176 + int (*get_bridges)(struct fpga_region *)) 162 177 { 178 + struct fpga_region *region; 163 179 int id, ret = 0; 180 + 181 + region = kzalloc(sizeof(*region), GFP_KERNEL); 182 + if (!region) 183 + return NULL; 164 184 165 185 id = ida_simple_get(&fpga_region_ida, 0, 0, GFP_KERNEL); 166 186 if (id < 0) 167 - return id; 187 + goto err_free; 168 188 189 + region->mgr = mgr; 190 + region->get_bridges = get_bridges; 169 191 mutex_init(&region->mutex); 170 192 INIT_LIST_HEAD(&region->bridge_list); 193 + 171 194 device_initialize(&region->dev); 172 - region->dev.groups = region->groups; 173 195 region->dev.class = fpga_region_class; 174 196 region->dev.parent = dev; 175 197 region->dev.of_node = dev->of_node; 176 198 region->dev.id = id; 177 - dev_set_drvdata(dev, region); 178 199 179 200 ret = dev_set_name(&region->dev, "region%d", id); 180 201 if (ret) 181 202 goto err_remove; 182 203 183 - ret = device_add(&region->dev); 184 - if (ret) 185 - goto err_remove; 186 - 187 - return 0; 204 + return region; 188 205 189 206 err_remove: 190 207 ida_simple_remove(&fpga_region_ida, id); 191 - return ret; 208 + err_free: 209 + kfree(region); 210 + 211 + return NULL; 212 + } 213 + EXPORT_SYMBOL_GPL(fpga_region_create); 214 + 215 + /** 216 + * fpga_region_free - free a struct fpga_region 217 + * @region: FPGA region created by fpga_region_create 218 + */ 219 + void 
fpga_region_free(struct fpga_region *region) 220 + { 221 + ida_simple_remove(&fpga_region_ida, region->dev.id); 222 + kfree(region); 223 + } 224 + EXPORT_SYMBOL_GPL(fpga_region_free); 225 + 226 + /** 227 + * fpga_region_register - register a FPGA region 228 + * @region: FPGA region created by fpga_region_create 229 + * Return: 0 or -errno 230 + */ 231 + int fpga_region_register(struct fpga_region *region) 232 + { 233 + return device_add(&region->dev); 234 + 192 235 } 193 236 EXPORT_SYMBOL_GPL(fpga_region_register); 194 237 195 - int fpga_region_unregister(struct fpga_region *region) 238 + /** 239 + * fpga_region_unregister - unregister and free a FPGA region 240 + * @region: FPGA region 241 + */ 242 + void fpga_region_unregister(struct fpga_region *region) 196 243 { 197 244 device_unregister(&region->dev); 198 - 199 - return 0; 200 245 } 201 246 EXPORT_SYMBOL_GPL(fpga_region_unregister); 202 247 ··· 245 208 { 246 209 struct fpga_region *region = to_fpga_region(dev); 247 210 248 - ida_simple_remove(&fpga_region_ida, region->dev.id); 211 + fpga_region_free(region); 249 212 } 250 213 251 214 /**
+17 -4
drivers/fpga/ice40-spi.c
··· 133 133 { 134 134 struct device *dev = &spi->dev; 135 135 struct ice40_fpga_priv *priv; 136 + struct fpga_manager *mgr; 136 137 int ret; 137 138 138 139 priv = devm_kzalloc(&spi->dev, sizeof(*priv), GFP_KERNEL); ··· 175 174 return ret; 176 175 } 177 176 178 - /* Register with the FPGA manager */ 179 - return fpga_mgr_register(dev, "Lattice iCE40 FPGA Manager", 180 - &ice40_fpga_ops, priv); 177 + mgr = fpga_mgr_create(dev, "Lattice iCE40 FPGA Manager", 178 + &ice40_fpga_ops, priv); 179 + if (!mgr) 180 + return -ENOMEM; 181 + 182 + spi_set_drvdata(spi, mgr); 183 + 184 + ret = fpga_mgr_register(mgr); 185 + if (ret) 186 + fpga_mgr_free(mgr); 187 + 188 + return ret; 181 189 } 182 190 183 191 static int ice40_fpga_remove(struct spi_device *spi) 184 192 { 185 - fpga_mgr_unregister(&spi->dev); 193 + struct fpga_manager *mgr = spi_get_drvdata(spi); 194 + 195 + fpga_mgr_unregister(mgr); 196 + 186 197 return 0; 187 198 } 188 199
+415
drivers/fpga/machxo2-spi.c
···
1 + // SPDX-License-Identifier: GPL-2.0
2 + /*
3 +  * Lattice MachXO2 Slave SPI Driver
4 +  *
5 +  * Manage Lattice FPGA firmware that is loaded over SPI using
6 +  * the slave serial configuration interface.
7 +  *
8 +  * Copyright (C) 2018 Paolo Pisati <p.pisati@gmail.com>
9 +  */
10 +
11 + #include <linux/delay.h>
12 + #include <linux/fpga/fpga-mgr.h>
13 + #include <linux/gpio/consumer.h>
14 + #include <linux/module.h>
15 + #include <linux/of.h>
16 + #include <linux/spi/spi.h>
17 +
18 + /* MachXO2 Programming Guide - sysCONFIG Programming Commands */
19 + #define IDCODE_PUB {0xe0, 0x00, 0x00, 0x00}
20 + #define ISC_ENABLE {0xc6, 0x08, 0x00, 0x00}
21 + #define ISC_ERASE {0x0e, 0x04, 0x00, 0x00}
22 + #define ISC_PROGRAMDONE {0x5e, 0x00, 0x00, 0x00}
23 + #define LSC_INITADDRESS {0x46, 0x00, 0x00, 0x00}
24 + #define LSC_PROGINCRNV {0x70, 0x00, 0x00, 0x01}
25 + #define LSC_READ_STATUS {0x3c, 0x00, 0x00, 0x00}
26 + #define LSC_REFRESH {0x79, 0x00, 0x00, 0x00}
27 +
28 + /*
29 +  * Max CCLK in Slave SPI mode according to 'MachXO2 Family Data
30 +  * Sheet' sysCONFIG Port Timing Specifications (3-36)
31 +  */
32 + #define MACHXO2_MAX_SPEED 66000000
33 +
34 + #define MACHXO2_LOW_DELAY_USEC 5
35 + #define MACHXO2_HIGH_DELAY_USEC 200
36 + #define MACHXO2_REFRESH_USEC 4800
37 + #define MACHXO2_MAX_BUSY_LOOP 128
38 + #define MACHXO2_MAX_REFRESH_LOOP 16
39 +
40 + #define MACHXO2_PAGE_SIZE 16
41 + #define MACHXO2_BUF_SIZE (MACHXO2_PAGE_SIZE + 4)
42 +
43 + /* Status register bits, errors and error mask */
44 + #define BUSY 12
45 + #define DONE 8
46 + #define DVER 27
47 + #define ENAB 9
48 + #define ERRBITS 23
49 + #define ERRMASK 7
50 + #define FAIL 13
51 +
52 + #define ENOERR 0 /* no error */
53 + #define EID 1
54 + #define ECMD 2
55 + #define ECRC 3
56 + #define EPREAM 4 /* preamble error */
57 + #define EABRT 5 /* abort error */
58 + #define EOVERFL 6 /* overflow error */
59 + #define ESDMEOF 7 /* SDM EOF */
60 +
61 + static inline u8 get_err(unsigned long *status)
62 + {
63 +         return (*status >> ERRBITS) & ERRMASK;
64 + }
65 +
66 + static int get_status(struct spi_device *spi, unsigned long *status)
67 + {
68 +         struct spi_message msg;
69 +         struct spi_transfer rx, tx;
70 +         static const u8 cmd[] = LSC_READ_STATUS;
71 +         int ret;
72 +
73 +         memset(&rx, 0, sizeof(rx));
74 +         memset(&tx, 0, sizeof(tx));
75 +         tx.tx_buf = cmd;
76 +         tx.len = sizeof(cmd);
77 +         rx.rx_buf = status;
78 +         rx.len = 4;
79 +         spi_message_init(&msg);
80 +         spi_message_add_tail(&tx, &msg);
81 +         spi_message_add_tail(&rx, &msg);
82 +         ret = spi_sync(spi, &msg);
83 +         if (ret)
84 +                 return ret;
85 +
86 +         *status = be32_to_cpu(*status);
87 +
88 +         return 0;
89 + }
90 +
91 + #ifdef DEBUG
92 + static const char *get_err_string(u8 err)
93 + {
94 +         switch (err) {
95 +         case ENOERR: return "No Error";
96 +         case EID: return "ID ERR";
97 +         case ECMD: return "CMD ERR";
98 +         case ECRC: return "CRC ERR";
99 +         case EPREAM: return "Preamble ERR";
100 +         case EABRT: return "Abort ERR";
101 +         case EOVERFL: return "Overflow ERR";
102 +         case ESDMEOF: return "SDM EOF";
103 +         }
104 +
105 +         return "Default switch case";
106 + }
107 + #endif
108 +
109 + static void dump_status_reg(unsigned long *status)
110 + {
111 + #ifdef DEBUG
112 +         pr_debug("machxo2 status: 0x%08lX - done=%d, cfgena=%d, busy=%d, fail=%d, devver=%d, err=%s\n",
113 +                  *status, test_bit(DONE, status), test_bit(ENAB, status),
114 +                  test_bit(BUSY, status), test_bit(FAIL, status),
115 +                  test_bit(DVER, status), get_err_string(get_err(status)));
116 + #endif
117 + }
118 +
119 + static int wait_until_not_busy(struct spi_device *spi)
120 + {
121 +         unsigned long status;
122 +         int ret, loop = 0;
123 +
124 +         do {
125 +                 ret = get_status(spi, &status);
126 +                 if (ret)
127 +                         return ret;
128 +                 if (++loop >= MACHXO2_MAX_BUSY_LOOP)
129 +                         return -EBUSY;
130 +         } while (test_bit(BUSY, &status));
131 +
132 +         return 0;
133 + }
134 +
135 + static int machxo2_cleanup(struct fpga_manager *mgr)
136 + {
137 +         struct spi_device *spi = mgr->priv;
138 +         struct spi_message msg;
139 +         struct spi_transfer tx[2];
140 +         static const u8 erase[] = ISC_ERASE;
141 +         static const u8 refresh[] = LSC_REFRESH;
142 +         int ret;
143 +
144 +         memset(tx, 0, sizeof(tx));
145 +         spi_message_init(&msg);
146 +         tx[0].tx_buf = &erase;
147 +         tx[0].len = sizeof(erase);
148 +         spi_message_add_tail(&tx[0], &msg);
149 +         ret = spi_sync(spi, &msg);
150 +         if (ret)
151 +                 goto fail;
152 +
153 +         ret = wait_until_not_busy(spi);
154 +         if (ret)
155 +                 goto fail;
156 +
157 +         spi_message_init(&msg);
158 +         tx[1].tx_buf = &refresh;
159 +         tx[1].len = sizeof(refresh);
160 +         tx[1].delay_usecs = MACHXO2_REFRESH_USEC;
161 +         spi_message_add_tail(&tx[1], &msg);
162 +         ret = spi_sync(spi, &msg);
163 +         if (ret)
164 +                 goto fail;
165 +
166 +         return 0;
167 + fail:
168 +         dev_err(&mgr->dev, "Cleanup failed\n");
169 +
170 +         return ret;
171 + }
172 +
173 + static enum fpga_mgr_states machxo2_spi_state(struct fpga_manager *mgr)
174 + {
175 +         struct spi_device *spi = mgr->priv;
176 +         unsigned long status;
177 +
178 +         get_status(spi, &status);
179 +         if (!test_bit(BUSY, &status) && test_bit(DONE, &status) &&
180 +             get_err(&status) == ENOERR)
181 +                 return FPGA_MGR_STATE_OPERATING;
182 +
183 +         return FPGA_MGR_STATE_UNKNOWN;
184 + }
185 +
186 + static int machxo2_write_init(struct fpga_manager *mgr,
187 +                               struct fpga_image_info *info,
188 +                               const char *buf, size_t count)
189 + {
190 +         struct spi_device *spi = mgr->priv;
191 +         struct spi_message msg;
192 +         struct spi_transfer tx[3];
193 +         static const u8 enable[] = ISC_ENABLE;
194 +         static const u8 erase[] = ISC_ERASE;
195 +         static const u8 initaddr[] = LSC_INITADDRESS;
196 +         unsigned long status;
197 +         int ret;
198 +
199 +         if ((info->flags & FPGA_MGR_PARTIAL_RECONFIG)) {
200 +                 dev_err(&mgr->dev,
201 +                         "Partial reconfiguration is not supported\n");
202 +                 return -ENOTSUPP;
203 +         }
204 +
205 +         get_status(spi, &status);
206 +         dump_status_reg(&status);
207 +         memset(tx, 0, sizeof(tx));
208 +         spi_message_init(&msg);
209 +         tx[0].tx_buf = &enable;
210 +         tx[0].len = sizeof(enable);
211 +         tx[0].delay_usecs = MACHXO2_LOW_DELAY_USEC;
212 +         spi_message_add_tail(&tx[0], &msg);
213 +
214 +         tx[1].tx_buf = &erase;
215 +         tx[1].len = sizeof(erase);
216 +         spi_message_add_tail(&tx[1], &msg);
217 +         ret = spi_sync(spi, &msg);
218 +         if (ret)
219 +                 goto fail;
220 +
221 +         ret = wait_until_not_busy(spi);
222 +         if (ret)
223 +                 goto fail;
224 +
225 +         get_status(spi, &status);
226 +         if (test_bit(FAIL, &status))
227 +                 goto fail;
228 +         dump_status_reg(&status);
229 +
230 +         spi_message_init(&msg);
231 +         tx[2].tx_buf = &initaddr;
232 +         tx[2].len = sizeof(initaddr);
233 +         spi_message_add_tail(&tx[2], &msg);
234 +         ret = spi_sync(spi, &msg);
235 +         if (ret)
236 +                 goto fail;
237 +
238 +         get_status(spi, &status);
239 +         dump_status_reg(&status);
240 +
241 +         return 0;
242 + fail:
243 +         dev_err(&mgr->dev, "Error during FPGA init.\n");
244 +
245 +         return ret;
246 + }
247 +
248 + static int machxo2_write(struct fpga_manager *mgr, const char *buf,
249 +                          size_t count)
250 + {
251 +         struct spi_device *spi = mgr->priv;
252 +         struct spi_message msg;
253 +         struct spi_transfer tx;
254 +         static const u8 progincr[] = LSC_PROGINCRNV;
255 +         u8 payload[MACHXO2_BUF_SIZE];
256 +         unsigned long status;
257 +         int i, ret;
258 +
259 +         if (count % MACHXO2_PAGE_SIZE != 0) {
260 +                 dev_err(&mgr->dev, "Malformed payload.\n");
261 +                 return -EINVAL;
262 +         }
263 +         get_status(spi, &status);
264 +         dump_status_reg(&status);
265 +         memcpy(payload, &progincr, sizeof(progincr));
266 +         for (i = 0; i < count; i += MACHXO2_PAGE_SIZE) {
267 +                 memcpy(&payload[sizeof(progincr)], &buf[i], MACHXO2_PAGE_SIZE);
268 +                 memset(&tx, 0, sizeof(tx));
269 +                 spi_message_init(&msg);
270 +                 tx.tx_buf = payload;
271 +                 tx.len = MACHXO2_BUF_SIZE;
272 +                 tx.delay_usecs = MACHXO2_HIGH_DELAY_USEC;
273 +                 spi_message_add_tail(&tx, &msg);
274 +                 ret = spi_sync(spi, &msg);
275 +                 if (ret) {
276 +                         dev_err(&mgr->dev, "Error loading the bitstream.\n");
277 +                         return ret;
278 +                 }
279 +         }
280 +         get_status(spi, &status);
281 +         dump_status_reg(&status);
282 +
283 +         return 0;
284 + }
285 +
286 + static int machxo2_write_complete(struct fpga_manager *mgr,
287 +                                   struct fpga_image_info *info)
288 + {
289 +         struct spi_device *spi = mgr->priv;
290 +         struct spi_message msg;
291 +         struct spi_transfer tx[2];
292 +         static const u8 progdone[] = ISC_PROGRAMDONE;
293 +         static const u8 refresh[] = LSC_REFRESH;
294 +         unsigned long status;
295 +         int ret, refreshloop = 0;
296 +
297 +         memset(tx, 0, sizeof(tx));
298 +         spi_message_init(&msg);
299 +         tx[0].tx_buf = &progdone;
300 +         tx[0].len = sizeof(progdone);
301 +         spi_message_add_tail(&tx[0], &msg);
302 +         ret = spi_sync(spi, &msg);
303 +         if (ret)
304 +                 goto fail;
305 +         ret = wait_until_not_busy(spi);
306 +         if (ret)
307 +                 goto fail;
308 +
309 +         get_status(spi, &status);
310 +         dump_status_reg(&status);
311 +         if (!test_bit(DONE, &status)) {
312 +                 machxo2_cleanup(mgr);
313 +                 goto fail;
314 +         }
315 +
316 +         do {
317 +                 spi_message_init(&msg);
318 +                 tx[1].tx_buf = &refresh;
319 +                 tx[1].len = sizeof(refresh);
320 +                 tx[1].delay_usecs = MACHXO2_REFRESH_USEC;
321 +                 spi_message_add_tail(&tx[1], &msg);
322 +                 ret = spi_sync(spi, &msg);
323 +                 if (ret)
324 +                         goto fail;
325 +
326 +                 /* check refresh status */
327 +                 get_status(spi, &status);
328 +                 dump_status_reg(&status);
329 +                 if (!test_bit(BUSY, &status) && test_bit(DONE, &status) &&
330 +                     get_err(&status) == ENOERR)
331 +                         break;
332 +                 if (++refreshloop == MACHXO2_MAX_REFRESH_LOOP) {
333 +                         machxo2_cleanup(mgr);
334 +                         goto fail;
335 +                 }
336 +         } while (1);
337 +
338 +         get_status(spi, &status);
339 +         dump_status_reg(&status);
340 +
341 +         return 0;
342 + fail:
343 +         dev_err(&mgr->dev, "Refresh failed.\n");
344 +
345 +         return ret;
346 + }
347 +
348 + static const struct fpga_manager_ops machxo2_ops = {
349 +         .state = machxo2_spi_state,
350 +         .write_init = machxo2_write_init,
351 +         .write = machxo2_write,
352 +         .write_complete = machxo2_write_complete,
353 + };
354 +
355 + static int machxo2_spi_probe(struct spi_device *spi)
356 + {
357 +         struct device *dev = &spi->dev;
358 +         struct fpga_manager *mgr;
359 +         int ret;
360 +
361 +         if (spi->max_speed_hz > MACHXO2_MAX_SPEED) {
362 +                 dev_err(dev, "Speed is too high\n");
363 +                 return -EINVAL;
364 +         }
365 +
366 +         mgr = fpga_mgr_create(dev, "Lattice MachXO2 SPI FPGA Manager",
367 +                               &machxo2_ops, spi);
368 +         if (!mgr)
369 +                 return -ENOMEM;
370 +
371 +         spi_set_drvdata(spi, mgr);
372 +
373 +         ret = fpga_mgr_register(mgr);
374 +         if (ret)
375 +                 fpga_mgr_free(mgr);
376 +
377 +         return ret;
378 + }
379 +
380 + static int machxo2_spi_remove(struct spi_device *spi)
381 + {
382 +         struct fpga_manager *mgr = spi_get_drvdata(spi);
383 +
384 +         fpga_mgr_unregister(mgr);
385 +
386 +         return 0;
387 + }
388 +
389 + static const struct of_device_id of_match[] = {
390 +         { .compatible = "lattice,machxo2-slave-spi", },
391 +         {}
392 + };
393 + MODULE_DEVICE_TABLE(of, of_match);
394 +
395 + static const struct spi_device_id lattice_ids[] = {
396 +         { "machxo2-slave-spi", 0 },
397 +         { },
398 + };
399 + MODULE_DEVICE_TABLE(spi, lattice_ids);
400 +
401 + static struct spi_driver machxo2_spi_driver = {
402 +         .driver = {
403 +                 .name = "machxo2-slave-spi",
404 +                 .of_match_table = of_match_ptr(of_match),
405 +         },
406 +         .probe = machxo2_spi_probe,
407 +         .remove = machxo2_spi_remove,
408 +         .id_table = lattice_ids,
409 + };
410 +
411 + module_spi_driver(machxo2_spi_driver)
412 +
413 + MODULE_AUTHOR("Paolo Pisati <p.pisati@gmail.com>");
414 + MODULE_DESCRIPTION("Load Lattice FPGA firmware over SPI");
415 + MODULE_LICENSE("GPL v2");
+7 -21
drivers/fpga/of-fpga-region.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA Region - Device Tree support for FPGA programming under Linux 3 4 * 4 5 * Copyright (C) 2013-2016 Altera Corporation 5 6 * Copyright (C) 2017 Intel Corporation 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms and conditions of the GNU General Public License, 9 - * version 2, as published by the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope it will be useful, but WITHOUT 12 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 - * more details. 15 - * 16 - * You should have received a copy of the GNU General Public License along with 17 - * this program. If not, see <http://www.gnu.org/licenses/>. 18 7 */ 19 - 20 8 #include <linux/fpga/fpga-bridge.h> 21 9 #include <linux/fpga/fpga-mgr.h> 22 10 #include <linux/fpga/fpga-region.h> ··· 410 422 if (IS_ERR(mgr)) 411 423 return -EPROBE_DEFER; 412 424 413 - region = devm_kzalloc(dev, sizeof(*region), GFP_KERNEL); 425 + region = fpga_region_create(dev, mgr, of_fpga_region_get_bridges); 414 426 if (!region) { 415 427 ret = -ENOMEM; 416 428 goto eprobe_mgr_put; 417 429 } 418 430 419 - region->mgr = mgr; 420 - 421 - /* Specify how to get bridges for this type of region. */ 422 - region->get_bridges = of_fpga_region_get_bridges; 423 - 424 - ret = fpga_region_register(dev, region); 431 + ret = fpga_region_register(region); 425 432 if (ret) 426 - goto eprobe_mgr_put; 433 + goto eprobe_free; 427 434 428 435 of_platform_populate(np, fpga_region_of_match, NULL, &region->dev); 436 + dev_set_drvdata(dev, region); 429 437 430 438 dev_info(dev, "FPGA Region probed\n"); 431 439 432 440 return 0; 433 441 442 + eprobe_free: 443 + fpga_region_free(region); 434 444 eprobe_mgr_put: 435 445 fpga_mgr_put(mgr); 436 446 return ret;
+12 -16
drivers/fpga/socfpga-a10.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA Manager Driver for Altera Arria10 SoCFPGA 3 4 * 4 5 * Copyright (C) 2015-2016 Altera Corporation 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - * 15 - * You should have received a copy of the GNU General Public License along with 16 - * this program. If not, see <http://www.gnu.org/licenses/>. 17 6 */ 18 - 19 7 #include <linux/clk.h> 20 8 #include <linux/device.h> 21 9 #include <linux/delay.h> ··· 470 482 struct device *dev = &pdev->dev; 471 483 struct a10_fpga_priv *priv; 472 484 void __iomem *reg_base; 485 + struct fpga_manager *mgr; 473 486 struct resource *res; 474 487 int ret; 475 488 ··· 508 519 return -EBUSY; 509 520 } 510 521 511 - ret = fpga_mgr_register(dev, "SoCFPGA Arria10 FPGA Manager", 512 - &socfpga_a10_fpga_mgr_ops, priv); 522 + mgr = fpga_mgr_create(dev, "SoCFPGA Arria10 FPGA Manager", 523 + &socfpga_a10_fpga_mgr_ops, priv); 524 + if (!mgr) 525 + return -ENOMEM; 526 + 527 + platform_set_drvdata(pdev, mgr); 528 + 529 + ret = fpga_mgr_register(mgr); 513 530 if (ret) { 531 + fpga_mgr_free(mgr); 514 532 clk_disable_unprepare(priv->clk); 515 533 return ret; 516 534 } ··· 530 534 struct fpga_manager *mgr = platform_get_drvdata(pdev); 531 535 struct a10_fpga_priv *priv = mgr->priv; 532 536 533 - fpga_mgr_unregister(&pdev->dev); 537 + fpga_mgr_unregister(mgr); 534 538 clk_disable_unprepare(priv->clk); 535 539 536 540 return 0;
+17 -15
drivers/fpga/socfpga.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * FPGA Manager Driver for Altera SOCFPGA 3 4 * 4 5 * Copyright (C) 2013-2015 Altera Corporation 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms and conditions of the GNU General Public License, 8 - * version 2, as published by the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - * 15 - * You should have received a copy of the GNU General Public License along with 16 - * this program. If not, see <http://www.gnu.org/licenses/>. 17 6 */ 18 7 #include <linux/completion.h> 19 8 #include <linux/delay.h> ··· 544 555 { 545 556 struct device *dev = &pdev->dev; 546 557 struct socfpga_fpga_priv *priv; 558 + struct fpga_manager *mgr; 547 559 struct resource *res; 548 560 int ret; 549 561 ··· 571 581 if (ret) 572 582 return ret; 573 583 574 - return fpga_mgr_register(dev, "Altera SOCFPGA FPGA Manager", 575 - &socfpga_fpga_ops, priv); 584 + mgr = fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager", 585 + &socfpga_fpga_ops, priv); 586 + if (!mgr) 587 + return -ENOMEM; 588 + 589 + platform_set_drvdata(pdev, mgr); 590 + 591 + ret = fpga_mgr_register(mgr); 592 + if (ret) 593 + fpga_mgr_free(mgr); 594 + 595 + return ret; 576 596 } 577 597 578 598 static int socfpga_fpga_remove(struct platform_device *pdev) 579 599 { 580 - fpga_mgr_unregister(&pdev->dev); 600 + struct fpga_manager *mgr = platform_get_drvdata(pdev); 601 + 602 + fpga_mgr_unregister(mgr); 581 603 582 604 return 0; 583 605 }
+17 -3
drivers/fpga/ts73xx-fpga.c
··· 116 116 { 117 117 struct device *kdev = &pdev->dev; 118 118 struct ts73xx_fpga_priv *priv; 119 + struct fpga_manager *mgr; 119 120 struct resource *res; 121 + int ret; 120 122 121 123 priv = devm_kzalloc(kdev, sizeof(*priv), GFP_KERNEL); 122 124 if (!priv) ··· 133 131 return PTR_ERR(priv->io_base); 134 132 } 135 133 136 - return fpga_mgr_register(kdev, "TS-73xx FPGA Manager", 137 - &ts73xx_fpga_ops, priv); 134 + mgr = fpga_mgr_create(kdev, "TS-73xx FPGA Manager", 135 + &ts73xx_fpga_ops, priv); 136 + if (!mgr) 137 + return -ENOMEM; 138 + 139 + platform_set_drvdata(pdev, mgr); 140 + 141 + ret = fpga_mgr_register(mgr); 142 + if (ret) 143 + fpga_mgr_free(mgr); 144 + 145 + return ret; 138 146 } 139 147 140 148 static int ts73xx_fpga_remove(struct platform_device *pdev) 141 149 { 142 - fpga_mgr_unregister(&pdev->dev); 150 + struct fpga_manager *mgr = platform_get_drvdata(pdev); 151 + 152 + fpga_mgr_unregister(mgr); 143 153 144 154 return 0; 145 155 }
+17 -5
drivers/fpga/xilinx-pr-decoupler.c
··· 94 94 static int xlnx_pr_decoupler_probe(struct platform_device *pdev) 95 95 { 96 96 struct xlnx_pr_decoupler_data *priv; 97 + struct fpga_bridge *br; 97 98 int err; 98 99 struct resource *res; 99 100 ··· 121 120 122 121 clk_disable(priv->clk); 123 122 124 - err = fpga_bridge_register(&pdev->dev, "Xilinx PR Decoupler", 125 - &xlnx_pr_decoupler_br_ops, priv); 123 + br = fpga_bridge_create(&pdev->dev, "Xilinx PR Decoupler", 124 + &xlnx_pr_decoupler_br_ops, priv); 125 + if (!br) { 126 + err = -ENOMEM; 127 + goto err_clk; 128 + } 126 129 130 + platform_set_drvdata(pdev, br); 131 + 132 + err = fpga_bridge_register(br); 127 133 if (err) { 128 134 dev_err(&pdev->dev, "unable to register Xilinx PR Decoupler"); 129 - clk_unprepare(priv->clk); 130 - return err; 135 + goto err_clk; 131 136 } 132 137 133 138 return 0; 139 + 140 + err_clk: 141 + clk_unprepare(priv->clk); 142 + 143 + return err; 134 144 } 135 145 136 146 static int xlnx_pr_decoupler_remove(struct platform_device *pdev) ··· 149 137 struct fpga_bridge *bridge = platform_get_drvdata(pdev); 150 138 struct xlnx_pr_decoupler_data *p = bridge->priv; 151 139 152 - fpga_bridge_unregister(&pdev->dev); 140 + fpga_bridge_unregister(bridge); 153 141 154 142 clk_unprepare(p->clk); 155 143
+17 -3
drivers/fpga/xilinx-spi.c
··· 143 143 static int xilinx_spi_probe(struct spi_device *spi) 144 144 { 145 145 struct xilinx_spi_conf *conf; 146 + struct fpga_manager *mgr; 147 + int ret; 146 148 147 149 conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL); 148 150 if (!conf) ··· 167 165 return PTR_ERR(conf->done); 168 166 } 169 167 170 - return fpga_mgr_register(&spi->dev, "Xilinx Slave Serial FPGA Manager", 171 - &xilinx_spi_ops, conf); 168 + mgr = fpga_mgr_create(&spi->dev, "Xilinx Slave Serial FPGA Manager", 169 + &xilinx_spi_ops, conf); 170 + if (!mgr) 171 + return -ENOMEM; 172 + 173 + spi_set_drvdata(spi, mgr); 174 + 175 + ret = fpga_mgr_register(mgr); 176 + if (ret) 177 + fpga_mgr_free(mgr); 178 + 179 + return ret; 172 180 } 173 181 174 182 static int xilinx_spi_remove(struct spi_device *spi) 175 183 { 176 - fpga_mgr_unregister(&spi->dev); 184 + struct fpga_manager *mgr = spi_get_drvdata(spi); 185 + 186 + fpga_mgr_unregister(mgr); 177 187 178 188 return 0; 179 189 }
+11 -3
drivers/fpga/zynq-fpga.c
··· 558 558 { 559 559 struct device *dev = &pdev->dev; 560 560 struct zynq_fpga_priv *priv; 561 + struct fpga_manager *mgr; 561 562 struct resource *res; 562 563 int err; 563 564 ··· 614 613 615 614 clk_disable(priv->clk); 616 615 617 - err = fpga_mgr_register(dev, "Xilinx Zynq FPGA Manager", 618 - &zynq_fpga_ops, priv); 616 + mgr = fpga_mgr_create(dev, "Xilinx Zynq FPGA Manager", 617 + &zynq_fpga_ops, priv); 618 + if (!mgr) 619 + return -ENOMEM; 620 + 621 + platform_set_drvdata(pdev, mgr); 622 + 623 + err = fpga_mgr_register(mgr); 619 624 if (err) { 620 625 dev_err(dev, "unable to register FPGA manager\n"); 626 + fpga_mgr_free(mgr); 621 627 clk_unprepare(priv->clk); 622 628 return err; 623 629 } ··· 640 632 mgr = platform_get_drvdata(pdev); 641 633 priv = mgr->priv; 642 634 643 - fpga_mgr_unregister(&pdev->dev); 635 + fpga_mgr_unregister(mgr); 644 636 645 637 clk_unprepare(priv->clk); 646 638
+42 -2
drivers/hv/connection.c
··· 63 63 case (VERSION_WIN10): 64 64 return VERSION_WIN8_1; 65 65 66 + case (VERSION_WIN10_V5): 67 + return VERSION_WIN10; 68 + 66 69 case (VERSION_WS2008): 67 70 default: 68 71 return VERSION_INVAL; ··· 83 80 84 81 msg = (struct vmbus_channel_initiate_contact *)msginfo->msg; 85 82 83 + memset(msg, 0, sizeof(*msg)); 86 84 msg->header.msgtype = CHANNELMSG_INITIATE_CONTACT; 87 85 msg->vmbus_version_requested = version; 88 - msg->interrupt_page = virt_to_phys(vmbus_connection.int_page); 86 + 87 + /* 88 + * VMBus protocol 5.0 (VERSION_WIN10_V5) requires that we must use 89 + * VMBUS_MESSAGE_CONNECTION_ID_4 for the Initiate Contact Message, 90 + * and for subsequent messages, we must use the Message Connection ID 91 + * field in the host-returned Version Response Message. And, with 92 + * VERSION_WIN10_V5, we don't use msg->interrupt_page, but we tell 93 + * the host explicitly that we still use VMBUS_MESSAGE_SINT(2) for 94 + * compatibility. 95 + * 96 + * On old hosts, we should always use VMBUS_MESSAGE_CONNECTION_ID (1). 
97 + */ 98 + if (version >= VERSION_WIN10_V5) { 99 + msg->msg_sint = VMBUS_MESSAGE_SINT; 100 + vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID_4; 101 + } else { 102 + msg->interrupt_page = virt_to_phys(vmbus_connection.int_page); 103 + vmbus_connection.msg_conn_id = VMBUS_MESSAGE_CONNECTION_ID; 104 + } 105 + 89 106 msg->monitor_page1 = virt_to_phys(vmbus_connection.monitor_pages[0]); 90 107 msg->monitor_page2 = virt_to_phys(vmbus_connection.monitor_pages[1]); 91 108 /* ··· 160 137 /* Check if successful */ 161 138 if (msginfo->response.version_response.version_supported) { 162 139 vmbus_connection.conn_state = CONNECTED; 140 + 141 + if (version >= VERSION_WIN10_V5) 142 + vmbus_connection.msg_conn_id = 143 + msginfo->response.version_response.msg_conn_id; 163 144 } else { 164 145 return -ECONNREFUSED; 165 146 } ··· 381 354 */ 382 355 int vmbus_post_msg(void *buffer, size_t buflen, bool can_sleep) 383 356 { 357 + struct vmbus_channel_message_header *hdr; 384 358 union hv_connection_id conn_id; 385 359 int ret = 0; 386 360 int retries = 0; 387 361 u32 usec = 1; 388 362 389 363 conn_id.asu32 = 0; 390 - conn_id.u.id = VMBUS_MESSAGE_CONNECTION_ID; 364 + conn_id.u.id = vmbus_connection.msg_conn_id; 391 365 392 366 /* 393 367 * hv_post_message() can have transient failures because of ··· 400 372 401 373 switch (ret) { 402 374 case HV_STATUS_INVALID_CONNECTION_ID: 375 + /* 376 + * See vmbus_negotiate_version(): VMBus protocol 5.0 377 + * requires that we must use 378 + * VMBUS_MESSAGE_CONNECTION_ID_4 for the Initiate 379 + * Contact message, but on old hosts that only 380 + * support VMBus protocol 4.0 or lower, here we get 381 + * HV_STATUS_INVALID_CONNECTION_ID and we should 382 + * return an error immediately without retrying. 383 + */ 384 + hdr = buffer; 385 + if (hdr->msgtype == CHANNELMSG_INITIATE_CONTACT) 386 + return -EINVAL; 403 387 /* 404 388 * We could get this if we send messages too 405 389 * frequently.
+3
drivers/hv/hyperv_vmbus.h
··· 187 187 188 188 enum { 189 189 VMBUS_MESSAGE_CONNECTION_ID = 1, 190 + VMBUS_MESSAGE_CONNECTION_ID_4 = 4, 190 191 VMBUS_MESSAGE_PORT_ID = 1, 191 192 VMBUS_EVENT_CONNECTION_ID = 2, 192 193 VMBUS_EVENT_PORT_ID = 2, ··· 302 301 * CPU on which the initial host contact was made. 303 302 */ 304 303 int connect_cpu; 304 + 305 + u32 msg_conn_id; 305 306 306 307 atomic_t offer_in_progress; 307 308
+2 -14
drivers/hwtracing/coresight/coresight-cpu-debug.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (c) 2017 Linaro Limited. All rights reserved. 3 4 * 4 5 * Author: Leo Yan <leo.yan@linaro.org> 5 - * 6 - * This program is free software; you can redistribute it and/or modify it 7 - * under the terms of the GNU General Public License version 2 as published by 8 - * the Free Software Foundation. 9 - * 10 - * This program is distributed in the hope that it will be useful, but WITHOUT 11 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 12 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 13 - * more details. 14 - * 15 - * You should have received a copy of the GNU General Public License along with 16 - * this program. If not, see <http://www.gnu.org/licenses/>. 17 - * 18 6 */ 19 7 #include <linux/amba/bus.h> 20 8 #include <linux/coresight.h> ··· 303 315 } 304 316 305 317 pc = debug_adjust_pc(drvdata); 306 - dev_emerg(dev, " EDPCSR: [<%px>] %pS\n", (void *)pc, (void *)pc); 318 + dev_emerg(dev, " EDPCSR: %pS\n", (void *)pc); 307 319 308 320 if (drvdata->edcidsr_present) 309 321 dev_emerg(dev, " EDCIDSR: %08x\n", drvdata->edcidsr);
+1 -9
drivers/hwtracing/coresight/coresight-dynamic-replicator.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (c) 2011-2015, The Linux Foundation. All rights reserved. 3 - * 4 - * This program is free software; you can redistribute it and/or modify 5 - * it under the terms of the GNU General Public License version 2 and 6 - * only version 2 as published by the Free Software Foundation. 7 - * 8 - * This program is distributed in the hope that it will be useful, 9 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 10 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 11 - * GNU General Public License for more details. 12 4 */ 13 5 14 6 #include <linux/amba/bus.h>
+3 -10
drivers/hwtracing/coresight/coresight-etb10.c
··· 1 - /* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 2 4 * 3 5 * Description: CoreSight Embedded Trace Buffer driver 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 and 7 - * only version 2 as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 6 */ 14 7 15 8 #include <asm/local.h>
+3 -10
drivers/hwtracing/coresight/coresight-etm-cp14.c
··· 1 - /* Copyright (c) 2012, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2012, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+1 -12
drivers/hwtracing/coresight/coresight-etm-perf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright(C) 2015 Linaro Limited. All rights reserved. 3 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms of the GNU General Public License version 2 as published by 7 - * the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along with 15 - * this program. If not, see <http://www.gnu.org/licenses/>. 16 5 */ 17 6 18 7 #include <linux/coresight.h>
+1 -12
drivers/hwtracing/coresight/coresight-etm-perf.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Copyright(C) 2015 Linaro Limited. All rights reserved. 3 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms of the GNU General Public License version 2 as published by 7 - * the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along with 15 - * this program. If not, see <http://www.gnu.org/licenses/>. 16 5 */ 17 6 18 7 #ifndef _CORESIGHT_ETM_PERF_H
+3 -10
drivers/hwtracing/coresight/coresight-etm.h
··· 1 - /* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2014-2015, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #ifndef _CORESIGHT_CORESIGHT_ETM_H
+1 -12
drivers/hwtracing/coresight/coresight-etm3x-sysfs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright(C) 2015 Linaro Limited. All rights reserved. 3 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms of the GNU General Public License version 2 as published by 7 - * the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along with 15 - * this program. If not, see <http://www.gnu.org/licenses/>. 16 5 */ 17 6 18 7 #include <linux/pm_runtime.h>
+3 -10
drivers/hwtracing/coresight/coresight-etm3x.c
··· 1 - /* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 2 4 * 3 5 * Description: CoreSight Program Flow Trace driver 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 and 7 - * only version 2 as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 6 */ 14 7 15 8 #include <linux/kernel.h>
+1 -12
drivers/hwtracing/coresight/coresight-etm4x-sysfs.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright(C) 2015 Linaro Limited. All rights reserved. 3 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms of the GNU General Public License version 2 as published by 7 - * the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along with 15 - * this program. If not, see <http://www.gnu.org/licenses/>. 16 5 */ 17 6 18 7 #include <linux/pm_runtime.h>
+3 -10
drivers/hwtracing/coresight/coresight-etm4x.c
··· 1 - /* Copyright (c) 2014, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2014, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #include <linux/kernel.h>
+3 -10
drivers/hwtracing/coresight/coresight-etm4x.h
··· 1 - /* Copyright (c) 2014-2015, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2014-2015, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #ifndef _CORESIGHT_CORESIGHT_ETM_H
+3 -10
drivers/hwtracing/coresight/coresight-funnel.c
··· 1 - /* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 2 4 * 3 5 * Description: CoreSight Funnel driver 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 and 7 - * only version 2 as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 6 */ 14 7 15 8 #include <linux/kernel.h>
+3 -10
drivers/hwtracing/coresight/coresight-priv.h
··· 1 - /* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #ifndef _CORESIGHT_PRIV_H
+3 -10
drivers/hwtracing/coresight/coresight-replicator.c
··· 1 - /* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 2 4 * 3 5 * Description: CoreSight Replicator driver 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 and 7 - * only version 2 as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 6 */ 14 7 15 8 #include <linux/kernel.h>
+3 -10
drivers/hwtracing/coresight/coresight-stm.c
··· 1 - /* Copyright (c) 2015-2016, The Linux Foundation. All rights reserved. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2015-2016, The Linux Foundation. All rights reserved. 2 4 * 3 5 * Description: CoreSight System Trace Macrocell driver 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 and 7 - * only version 2 as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 6 * 14 7 * Initial implementation by Pratik Patel 15 8 * (C) 2014-2015 Pratik Patel <pratikp@codeaurora.org>
+1 -12
drivers/hwtracing/coresight/coresight-tmc-etf.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright(C) 2016 Linaro Limited. All rights reserved. 3 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms of the GNU General Public License version 2 as published by 7 - * the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along with 15 - * this program. If not, see <http://www.gnu.org/licenses/>. 16 5 */ 17 6 18 7 #include <linux/circ_buf.h>
+4 -16
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright(C) 2016 Linaro Limited. All rights reserved. 3 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms of the GNU General Public License version 2 as published by 7 - * the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along with 15 - * this program. If not, see <http://www.gnu.org/licenses/>. 16 5 */ 17 6 18 7 #include <linux/coresight.h> ··· 113 124 bool used = false; 114 125 unsigned long flags; 115 126 void __iomem *vaddr = NULL; 116 - dma_addr_t paddr; 127 + dma_addr_t paddr = 0; 117 128 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 118 - 119 129 120 130 /* 121 131 * If we don't have a buffer release the lock and allocate memory. ··· 152 164 goto out; 153 165 154 166 /* 155 - * If drvdata::buf == NULL, use the memory allocated above. 167 + * If drvdata::vaddr == NULL, use the memory allocated above. 156 168 * Otherwise a buffer still exists from a previous session, so 157 169 * simply use that. 158 170 */ 159 - if (drvdata->buf == NULL) { 171 + if (drvdata->vaddr == NULL) { 160 172 used = true; 161 173 drvdata->vaddr = vaddr; 162 174 drvdata->paddr = paddr;
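Besides the SPDX conversion, the tmc-etr hunk above fixes an "allocate outside the lock, install under the lock" path: `paddr` gets a deterministic initial value, and the check is corrected to test `drvdata->vaddr`, the field that actually records the installed buffer, instead of `drvdata->buf`. A minimal userspace sketch of that pattern; the names (`struct drvdata`, `enable_sink`) are hypothetical, and the driver's spinlock and DMA allocation are elided:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical, simplified stand-in for tmc_drvdata: just the field the
 * corrected NULL check looks at. */
struct drvdata {
    void *vaddr;   /* installed trace buffer, NULL until first enable */
};

/* Allocate speculatively before taking the lock (elided here), then
 * install only if no buffer was installed in the meantime.  The fixed
 * check tests drvdata->vaddr -- the field actually written below --
 * rather than an unrelated sibling field. */
static void *enable_sink(struct drvdata *d)
{
    void *vaddr = NULL;

    if (!d->vaddr)
        vaddr = calloc(1, 4096);     /* speculative allocation */

    /* ...spin_lock_irqsave() in the real driver... */
    if (d->vaddr == NULL) {
        d->vaddr = vaddr;            /* first caller installs its buffer */
        vaddr = NULL;
    }
    /* ...spin_unlock_irqrestore()... */

    free(vaddr);                     /* raced and lost: discard the spare */
    return d->vaddr;
}
```

The first caller adopts its speculative allocation; later callers discard theirs and reuse the installed buffer, which is the behavior the corrected NULL check restores.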
+1 -9
drivers/hwtracing/coresight/coresight-tmc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* Copyright (c) 2012, The Linux Foundation. All rights reserved. 2 3 * 3 4 * Description: CoreSight Trace Memory Controller driver 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 and 7 - * only version 2 as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 5 */ 14 6 15 7 #include <linux/kernel.h>
+1 -12
drivers/hwtracing/coresight/coresight-tmc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Copyright(C) 2015 Linaro Limited. All rights reserved. 3 4 * Author: Mathieu Poirier <mathieu.poirier@linaro.org> 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms of the GNU General Public License version 2 as published by 7 - * the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 - * 14 - * You should have received a copy of the GNU General Public License along with 15 - * this program. If not, see <http://www.gnu.org/licenses/>. 16 5 */ 17 6 18 7 #ifndef _CORESIGHT_TMC_H
+3 -10
drivers/hwtracing/coresight/coresight-tpiu.c
··· 1 - /* Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2011-2012, The Linux Foundation. All rights reserved. 2 4 * 3 5 * Description: CoreSight Trace Port Interface Unit driver 4 - * 5 - * This program is free software; you can redistribute it and/or modify 6 - * it under the terms of the GNU General Public License version 2 and 7 - * only version 2 as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope that it will be useful, 10 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 11 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 12 - * GNU General Public License for more details. 13 6 */ 14 7 15 8 #include <linux/kernel.h>
+7 -14
drivers/hwtracing/coresight/coresight.c
··· 1 - /* Copyright (c) 2012, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2012, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #include <linux/kernel.h> ··· 1019 1026 dev_set_name(&csdev->dev, "%s", desc->pdata->name); 1020 1027 1021 1028 ret = device_register(&csdev->dev); 1022 - if (ret) 1023 - goto err_device_register; 1029 + if (ret) { 1030 + put_device(&csdev->dev); 1031 + goto err_kzalloc_csdev; 1032 + } 1024 1033 1025 1034 mutex_lock(&coresight_mutex); 1026 1035 ··· 1033 1038 1034 1039 return csdev; 1035 1040 1036 - err_device_register: 1037 - kfree(conns); 1038 1041 err_kzalloc_conns: 1039 1042 kfree(refcnts); 1040 1043 err_kzalloc_refcnts:
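The coresight.c error-path change above replaces `kfree(conns)` with `put_device()` once `device_register()` has been attempted: from that point the device's release callback owns the embedded allocations, so the only safe way to free them is to drop the reference. A toy userspace analog of that ownership rule; `struct object`, `put_object()`, and the `released` counter are illustrative, not kernel API:

```c
#include <assert.h>
#include <stdlib.h>

struct object {
    int refcount;
    void (*release)(struct object *);
};

static int released;     /* test hook: counts release callback runs */

static void object_release(struct object *obj)
{
    released++;
    free(obj);           /* the callback, not the caller, frees memory */
}

/* Analog of put_device(): drop a reference and let the release
 * callback reclaim the object when the count reaches zero. */
static void put_object(struct object *obj)
{
    if (--obj->refcount == 0)
        obj->release(obj);
}
```

On a failed registration, dropping the reference runs the release path exactly once; freeing the memory directly would bypass the callback and double-free when the last reference is later dropped.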
+3 -10
drivers/hwtracing/coresight/of_coresight.c
··· 1 - /* Copyright (c) 2012, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright (c) 2012, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #include <linux/types.h>
+1 -9
drivers/hwtracing/stm/ftrace.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Simple kernel driver to link kernel Ftrace and an STM device 3 4 * Copyright (c) 2016, Linaro Ltd. 4 - * 5 - * This program is free software; you can redistribute it and/or modify it 6 - * under the terms and conditions of the GNU General Public License, 7 - * version 2, as published by the Free Software Foundation. 8 - * 9 - * This program is distributed in the hope it will be useful, but WITHOUT 10 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 - * more details. 13 5 * 14 6 * STM Ftrace will be registered as a trace_export. 15 7 */
+14
drivers/misc/Kconfig
··· 113 113 for information on the specific driver level and support statement 114 114 for your IBM server. 115 115 116 + config IBMVMC 117 + tristate "IBM Virtual Management Channel support" 118 + depends on PPC_PSERIES 119 + help 120 + This is the IBM POWER Virtual Management Channel 121 + 122 + This driver is to be used for the POWER Virtual 123 + Management Channel virtual adapter on the PowerVM 124 + platform. It provides both request/response and 125 + async message support through the /dev/ibmvmc node. 126 + 127 + To compile this driver as a module, choose M here: the 128 + module will be called ibmvmc. 129 + 116 130 config PHANTOM 117 131 tristate "Sensable PHANToM (PCI)" 118 132 depends on PCI
+1
drivers/misc/Makefile
··· 4 4 # 5 5 6 6 obj-$(CONFIG_IBM_ASM) += ibmasm/ 7 + obj-$(CONFIG_IBMVMC) += ibmvmc.o 7 8 obj-$(CONFIG_AD525X_DPOT) += ad525x_dpot.o 8 9 obj-$(CONFIG_AD525X_DPOT_I2C) += ad525x_dpot-i2c.o 9 10 obj-$(CONFIG_AD525X_DPOT_SPI) += ad525x_dpot-spi.o
+4 -3
drivers/misc/cxl/context.c
··· 128 128 mutex_unlock(&ctx->mapping_lock); 129 129 } 130 130 131 - static int cxl_mmap_fault(struct vm_fault *vmf) 131 + static vm_fault_t cxl_mmap_fault(struct vm_fault *vmf) 132 132 { 133 133 struct vm_area_struct *vma = vmf->vma; 134 134 struct cxl_context *ctx = vma->vm_file->private_data; 135 135 u64 area, offset; 136 + vm_fault_t ret; 136 137 137 138 offset = vmf->pgoff << PAGE_SHIFT; 138 139 ··· 170 169 return VM_FAULT_SIGBUS; 171 170 } 172 171 173 - vm_insert_pfn(vma, vmf->address, (area + offset) >> PAGE_SHIFT); 172 + ret = vmf_insert_pfn(vma, vmf->address, (area + offset) >> PAGE_SHIFT); 174 173 175 174 mutex_unlock(&ctx->status_mutex); 176 175 177 - return VM_FAULT_NOPAGE; 176 + return ret; 178 177 } 179 178 180 179 static const struct vm_operations_struct cxl_mmap_vmops = {
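The cxl hunk is part of the tree-wide `vm_fault_t` conversion: the fault handler's return type changes, and the result of `vmf_insert_pfn()` is propagated instead of unconditionally returning `VM_FAULT_NOPAGE`. A compilable userspace sketch of the propagation, where the typedef, constants, and `vmf_insert_pfn_stub()` are stand-ins with illustrative values, not the kernel definitions:

```c
#include <assert.h>

/* Stand-ins for the kernel types and flags (values illustrative only). */
typedef unsigned int vm_fault_t;
#define VM_FAULT_NOPAGE 0x0100
#define VM_FAULT_OOM    0x0001

/* Stub for vmf_insert_pfn(): the real helper reports success as
 * VM_FAULT_NOPAGE or a specific failure code. */
static vm_fault_t vmf_insert_pfn_stub(int fail)
{
    return fail ? VM_FAULT_OOM : VM_FAULT_NOPAGE;
}

/* After the conversion, the fault handler forwards whatever the insert
 * helper reported rather than assuming the mapping succeeded. */
static vm_fault_t mmap_fault(int fail)
{
    return vmf_insert_pfn_stub(fail);
}
```

Callers (here, the page-fault core in the real kernel) then see insertion failures instead of a spurious "no page" success.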
+2418
drivers/misc/ibmvmc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * IBM Power Systems Virtual Management Channel Support. 4 + * 5 + * Copyright (c) 2004, 2018 IBM Corp. 6 + * Dave Engebretsen engebret@us.ibm.com 7 + * Steven Royer seroyer@linux.vnet.ibm.com 8 + * Adam Reznechek adreznec@linux.vnet.ibm.com 9 + * Bryant G. Ly <bryantly@linux.vnet.ibm.com> 10 + */ 11 + 12 + #include <linux/module.h> 13 + #include <linux/kernel.h> 14 + #include <linux/kthread.h> 15 + #include <linux/major.h> 16 + #include <linux/string.h> 17 + #include <linux/fcntl.h> 18 + #include <linux/slab.h> 19 + #include <linux/poll.h> 20 + #include <linux/init.h> 21 + #include <linux/fs.h> 22 + #include <linux/interrupt.h> 23 + #include <linux/spinlock.h> 24 + #include <linux/percpu.h> 25 + #include <linux/delay.h> 26 + #include <linux/uaccess.h> 27 + #include <linux/io.h> 28 + #include <linux/miscdevice.h> 29 + #include <linux/sched/signal.h> 30 + 31 + #include <asm/byteorder.h> 32 + #include <asm/irq.h> 33 + #include <asm/vio.h> 34 + 35 + #include "ibmvmc.h" 36 + 37 + #define IBMVMC_DRIVER_VERSION "1.0" 38 + 39 + /* 40 + * Static global variables 41 + */ 42 + static DECLARE_WAIT_QUEUE_HEAD(ibmvmc_read_wait); 43 + 44 + static const char ibmvmc_driver_name[] = "ibmvmc"; 45 + 46 + static struct ibmvmc_struct ibmvmc; 47 + static struct ibmvmc_hmc hmcs[MAX_HMCS]; 48 + static struct crq_server_adapter ibmvmc_adapter; 49 + 50 + static int ibmvmc_max_buf_pool_size = DEFAULT_BUF_POOL_SIZE; 51 + static int ibmvmc_max_hmcs = DEFAULT_HMCS; 52 + static int ibmvmc_max_mtu = DEFAULT_MTU; 53 + 54 + static inline long h_copy_rdma(s64 length, u64 sliobn, u64 slioba, 55 + u64 dliobn, u64 dlioba) 56 + { 57 + long rc = 0; 58 + 59 + /* Ensure all writes to source memory are visible before hcall */ 60 + dma_wmb(); 61 + pr_debug("ibmvmc: h_copy_rdma(0x%llx, 0x%llx, 0x%llx, 0x%llx, 0x%llx\n", 62 + length, sliobn, slioba, dliobn, dlioba); 63 + rc = plpar_hcall_norets(H_COPY_RDMA, length, sliobn, slioba, 64 + dliobn, dlioba); 65 
+ pr_debug("ibmvmc: h_copy_rdma rc = 0x%lx\n", rc); 66 + 67 + return rc; 68 + } 69 + 70 + static inline void h_free_crq(uint32_t unit_address) 71 + { 72 + long rc = 0; 73 + 74 + do { 75 + if (H_IS_LONG_BUSY(rc)) 76 + msleep(get_longbusy_msecs(rc)); 77 + 78 + rc = plpar_hcall_norets(H_FREE_CRQ, unit_address); 79 + } while ((rc == H_BUSY) || (H_IS_LONG_BUSY(rc))); 80 + } 81 + 82 + /** 83 + * h_request_vmc: - request a hypervisor virtual management channel device 84 + * @vmc_index: drc index of the vmc device created 85 + * 86 + * Requests the hypervisor create a new virtual management channel device, 87 + * allowing this partition to send hypervisor virtualization control 88 + * commands. 89 + * 90 + * Return: 91 + * 0 - Success 92 + * Non-zero - Failure 93 + */ 94 + static inline long h_request_vmc(u32 *vmc_index) 95 + { 96 + long rc = 0; 97 + unsigned long retbuf[PLPAR_HCALL_BUFSIZE]; 98 + 99 + do { 100 + if (H_IS_LONG_BUSY(rc)) 101 + msleep(get_longbusy_msecs(rc)); 102 + 103 + /* Call to request the VMC device from phyp */ 104 + rc = plpar_hcall(H_REQUEST_VMC, retbuf); 105 + pr_debug("ibmvmc: %s rc = 0x%lx\n", __func__, rc); 106 + *vmc_index = retbuf[0]; 107 + } while ((rc == H_BUSY) || (H_IS_LONG_BUSY(rc))); 108 + 109 + return rc; 110 + } 111 + 112 + /* routines for managing a command/response queue */ 113 + /** 114 + * ibmvmc_handle_event: - Interrupt handler for crq events 115 + * @irq: number of irq to handle, not used 116 + * @dev_instance: crq_server_adapter that received interrupt 117 + * 118 + * Disables interrupts and schedules ibmvmc_task 119 + * 120 + * Always returns IRQ_HANDLED 121 + */ 122 + static irqreturn_t ibmvmc_handle_event(int irq, void *dev_instance) 123 + { 124 + struct crq_server_adapter *adapter = 125 + (struct crq_server_adapter *)dev_instance; 126 + 127 + vio_disable_interrupts(to_vio_dev(adapter->dev)); 128 + tasklet_schedule(&adapter->work_task); 129 + 130 + return IRQ_HANDLED; 131 + } 132 + 133 + /** 134 + * ibmvmc_release_crq_queue - 
Release CRQ Queue 135 + * 136 + * @adapter: crq_server_adapter struct 137 + * 138 + * Return: 139 + * 0 - Success 140 + * Non-Zero - Failure 141 + */ 142 + static void ibmvmc_release_crq_queue(struct crq_server_adapter *adapter) 143 + { 144 + struct vio_dev *vdev = to_vio_dev(adapter->dev); 145 + struct crq_queue *queue = &adapter->queue; 146 + 147 + free_irq(vdev->irq, (void *)adapter); 148 + tasklet_kill(&adapter->work_task); 149 + 150 + if (adapter->reset_task) 151 + kthread_stop(adapter->reset_task); 152 + 153 + h_free_crq(vdev->unit_address); 154 + dma_unmap_single(adapter->dev, 155 + queue->msg_token, 156 + queue->size * sizeof(*queue->msgs), DMA_BIDIRECTIONAL); 157 + free_page((unsigned long)queue->msgs); 158 + } 159 + 160 + /** 161 + * ibmvmc_reset_crq_queue - Reset CRQ Queue 162 + * 163 + * @adapter: crq_server_adapter struct 164 + * 165 + * This function calls h_free_crq and then calls H_REG_CRQ and does all the 166 + * bookkeeping to get us back to where we can communicate. 167 + * 168 + * Return: 169 + * 0 - Success 170 + * Non-Zero - Failure 171 + */ 172 + static int ibmvmc_reset_crq_queue(struct crq_server_adapter *adapter) 173 + { 174 + struct vio_dev *vdev = to_vio_dev(adapter->dev); 175 + struct crq_queue *queue = &adapter->queue; 176 + int rc = 0; 177 + 178 + /* Close the CRQ */ 179 + h_free_crq(vdev->unit_address); 180 + 181 + /* Clean out the queue */ 182 + memset(queue->msgs, 0x00, PAGE_SIZE); 183 + queue->cur = 0; 184 + 185 + /* And re-open it again */ 186 + rc = plpar_hcall_norets(H_REG_CRQ, 187 + vdev->unit_address, 188 + queue->msg_token, PAGE_SIZE); 189 + if (rc == 2) 190 + /* Adapter is good, but other end is not ready */ 191 + dev_warn(adapter->dev, "Partner adapter not ready\n"); 192 + else if (rc != 0) 193 + dev_err(adapter->dev, "couldn't register crq--rc 0x%x\n", rc); 194 + 195 + return rc; 196 + } 197 + 198 + /** 199 + * crq_queue_next_crq: - Returns the next entry in message queue 200 + * @queue: crq_queue to use 201 + * 202 + * 
Returns pointer to next entry in queue, or NULL if there are no new 203 + * entries in the CRQ. 204 + */ 205 + static struct ibmvmc_crq_msg *crq_queue_next_crq(struct crq_queue *queue) 206 + { 207 + struct ibmvmc_crq_msg *crq; 208 + unsigned long flags; 209 + 210 + spin_lock_irqsave(&queue->lock, flags); 211 + crq = &queue->msgs[queue->cur]; 212 + if (crq->valid & 0x80) { 213 + if (++queue->cur == queue->size) 214 + queue->cur = 0; 215 + 216 + /* Ensure the read of the valid bit occurs before reading any 217 + * other bits of the CRQ entry 218 + */ 219 + dma_rmb(); 220 + } else { 221 + crq = NULL; 222 + } 223 + 224 + spin_unlock_irqrestore(&queue->lock, flags); 225 + 226 + return crq; 227 + } 228 + 229 + /** 230 + * ibmvmc_send_crq - Send CRQ 231 + * 232 + * @adapter: crq_server_adapter struct 233 + * @word1: Word1 Data field 234 + * @word2: Word2 Data field 235 + * 236 + * Return: 237 + * 0 - Success 238 + * Non-Zero - Failure 239 + */ 240 + static long ibmvmc_send_crq(struct crq_server_adapter *adapter, 241 + u64 word1, u64 word2) 242 + { 243 + struct vio_dev *vdev = to_vio_dev(adapter->dev); 244 + long rc = 0; 245 + 246 + dev_dbg(adapter->dev, "(0x%x, 0x%016llx, 0x%016llx)\n", 247 + vdev->unit_address, word1, word2); 248 + 249 + /* 250 + * Ensure the command buffer is flushed to memory before handing it 251 + * over to the other side to prevent it from fetching any stale data. 252 + */ 253 + dma_wmb(); 254 + rc = plpar_hcall_norets(H_SEND_CRQ, vdev->unit_address, word1, word2); 255 + dev_dbg(adapter->dev, "rc = 0x%lx\n", rc); 256 + 257 + return rc; 258 + } 259 + 260 + /** 261 + * alloc_dma_buffer - Create DMA Buffer 262 + * 263 + * @vdev: vio_dev struct 264 + * @size: Size field 265 + * @dma_handle: DMA address field 266 + * 267 + * Allocates memory for the command queue and maps remote memory into an 268 + * ioba. 
269 + * 270 + * Returns a pointer to the buffer 271 + */ 272 + static void *alloc_dma_buffer(struct vio_dev *vdev, size_t size, 273 + dma_addr_t *dma_handle) 274 + { 275 + /* allocate memory */ 276 + void *buffer = kzalloc(size, GFP_KERNEL); 277 + 278 + if (!buffer) { 279 + *dma_handle = 0; 280 + return NULL; 281 + } 282 + 283 + /* DMA map */ 284 + *dma_handle = dma_map_single(&vdev->dev, buffer, size, 285 + DMA_BIDIRECTIONAL); 286 + 287 + if (dma_mapping_error(&vdev->dev, *dma_handle)) { 288 + *dma_handle = 0; 289 + kzfree(buffer); 290 + return NULL; 291 + } 292 + 293 + return buffer; 294 + } 295 + 296 + /** 297 + * free_dma_buffer - Free DMA Buffer 298 + * 299 + * @vdev: vio_dev struct 300 + * @size: Size field 301 + * @vaddr: Address field 302 + * @dma_handle: DMA address field 303 + * 304 + * Releases memory for a command queue and unmaps mapped remote memory. 305 + */ 306 + static void free_dma_buffer(struct vio_dev *vdev, size_t size, void *vaddr, 307 + dma_addr_t dma_handle) 308 + { 309 + /* DMA unmap */ 310 + dma_unmap_single(&vdev->dev, dma_handle, size, DMA_BIDIRECTIONAL); 311 + 312 + /* deallocate memory */ 313 + kzfree(vaddr); 314 + } 315 + 316 + /** 317 + * ibmvmc_get_valid_hmc_buffer - Retrieve Valid HMC Buffer 318 + * 319 + * @hmc_index: HMC Index Field 320 + * 321 + * Return: 322 + * Pointer to ibmvmc_buffer 323 + */ 324 + static struct ibmvmc_buffer *ibmvmc_get_valid_hmc_buffer(u8 hmc_index) 325 + { 326 + struct ibmvmc_buffer *buffer; 327 + struct ibmvmc_buffer *ret_buf = NULL; 328 + unsigned long i; 329 + 330 + if (hmc_index > ibmvmc.max_hmc_index) 331 + return NULL; 332 + 333 + buffer = hmcs[hmc_index].buffer; 334 + 335 + for (i = 0; i < ibmvmc_max_buf_pool_size; i++) { 336 + if (buffer[i].valid && buffer[i].free && 337 + buffer[i].owner == VMC_BUF_OWNER_ALPHA) { 338 + buffer[i].free = 0; 339 + ret_buf = &buffer[i]; 340 + break; 341 + } 342 + } 343 + 344 + return ret_buf; 345 + } 346 + 347 + /** 348 + * ibmvmc_get_free_hmc_buffer - Get Free HMC 
Buffer 349 + * 350 + * @adapter: crq_server_adapter struct 351 + * @hmc_index: Hmc Index field 352 + * 353 + * Return: 354 + * Pointer to ibmvmc_buffer 355 + */ 356 + static struct ibmvmc_buffer *ibmvmc_get_free_hmc_buffer(struct crq_server_adapter *adapter, 357 + u8 hmc_index) 358 + { 359 + struct ibmvmc_buffer *buffer; 360 + struct ibmvmc_buffer *ret_buf = NULL; 361 + unsigned long i; 362 + 363 + if (hmc_index > ibmvmc.max_hmc_index) { 364 + dev_info(adapter->dev, "get_free_hmc_buffer: invalid hmc_index=0x%x\n", 365 + hmc_index); 366 + return NULL; 367 + } 368 + 369 + buffer = hmcs[hmc_index].buffer; 370 + 371 + for (i = 0; i < ibmvmc_max_buf_pool_size; i++) { 372 + if (buffer[i].free && 373 + buffer[i].owner == VMC_BUF_OWNER_ALPHA) { 374 + buffer[i].free = 0; 375 + ret_buf = &buffer[i]; 376 + break; 377 + } 378 + } 379 + 380 + return ret_buf; 381 + } 382 + 383 + /** 384 + * ibmvmc_free_hmc_buffer - Free an HMC Buffer 385 + * 386 + * @hmc: ibmvmc_hmc struct 387 + * @buffer: ibmvmc_buffer struct 388 + * 389 + */ 390 + static void ibmvmc_free_hmc_buffer(struct ibmvmc_hmc *hmc, 391 + struct ibmvmc_buffer *buffer) 392 + { 393 + unsigned long flags; 394 + 395 + spin_lock_irqsave(&hmc->lock, flags); 396 + buffer->free = 1; 397 + spin_unlock_irqrestore(&hmc->lock, flags); 398 + } 399 + 400 + /** 401 + * ibmvmc_count_hmc_buffers - Count HMC Buffers 402 + * 403 + * @hmc_index: HMC Index field 404 + * @valid: Valid number of buffers field 405 + * @free: Free number of buffers field 406 + * 407 + */ 408 + static void ibmvmc_count_hmc_buffers(u8 hmc_index, unsigned int *valid, 409 + unsigned int *free) 410 + { 411 + struct ibmvmc_buffer *buffer; 412 + unsigned long i; 413 + unsigned long flags; 414 + 415 + if (hmc_index > ibmvmc.max_hmc_index) 416 + return; 417 + 418 + if (!valid || !free) 419 + return; 420 + 421 + *valid = 0; *free = 0; 422 + 423 + buffer = hmcs[hmc_index].buffer; 424 + spin_lock_irqsave(&hmcs[hmc_index].lock, flags); 425 + 426 + for (i = 0; i < 
ibmvmc_max_buf_pool_size; i++) { 427 + if (buffer[i].valid) { 428 + *valid = *valid + 1; 429 + if (buffer[i].free) 430 + *free = *free + 1; 431 + } 432 + } 433 + 434 + spin_unlock_irqrestore(&hmcs[hmc_index].lock, flags); 435 + } 436 + 437 + /** 438 + * ibmvmc_get_free_hmc - Get Free HMC 439 + * 440 + * Return: 441 + * Pointer to an available HMC Connection 442 + * Null otherwise 443 + */ 444 + static struct ibmvmc_hmc *ibmvmc_get_free_hmc(void) 445 + { 446 + unsigned long i; 447 + unsigned long flags; 448 + 449 + /* 450 + * Find an available HMC connection. 451 + */ 452 + for (i = 0; i <= ibmvmc.max_hmc_index; i++) { 453 + spin_lock_irqsave(&hmcs[i].lock, flags); 454 + if (hmcs[i].state == ibmhmc_state_free) { 455 + hmcs[i].index = i; 456 + hmcs[i].state = ibmhmc_state_initial; 457 + spin_unlock_irqrestore(&hmcs[i].lock, flags); 458 + return &hmcs[i]; 459 + } 460 + spin_unlock_irqrestore(&hmcs[i].lock, flags); 461 + } 462 + 463 + return NULL; 464 + } 465 + 466 + /** 467 + * ibmvmc_return_hmc - Return an HMC Connection 468 + * 469 + * @hmc: ibmvmc_hmc struct 470 + * @release_readers: Number of readers connected to session 471 + * 472 + * This function releases the HMC connections back into the pool. 
473 + * 474 + * Return: 475 + * 0 - Success 476 + * Non-zero - Failure 477 + */ 478 + static int ibmvmc_return_hmc(struct ibmvmc_hmc *hmc, bool release_readers) 479 + { 480 + struct ibmvmc_buffer *buffer; 481 + struct crq_server_adapter *adapter; 482 + struct vio_dev *vdev; 483 + unsigned long i; 484 + unsigned long flags; 485 + 486 + if (!hmc || !hmc->adapter) 487 + return -EIO; 488 + 489 + if (release_readers) { 490 + if (hmc->file_session) { 491 + struct ibmvmc_file_session *session = hmc->file_session; 492 + 493 + session->valid = 0; 494 + wake_up_interruptible(&ibmvmc_read_wait); 495 + } 496 + } 497 + 498 + adapter = hmc->adapter; 499 + vdev = to_vio_dev(adapter->dev); 500 + 501 + spin_lock_irqsave(&hmc->lock, flags); 502 + hmc->index = 0; 503 + hmc->state = ibmhmc_state_free; 504 + hmc->queue_head = 0; 505 + hmc->queue_tail = 0; 506 + buffer = hmc->buffer; 507 + for (i = 0; i < ibmvmc_max_buf_pool_size; i++) { 508 + if (buffer[i].valid) { 509 + free_dma_buffer(vdev, 510 + ibmvmc.max_mtu, 511 + buffer[i].real_addr_local, 512 + buffer[i].dma_addr_local); 513 + dev_dbg(adapter->dev, "Forgot buffer id 0x%lx\n", i); 514 + } 515 + memset(&buffer[i], 0, sizeof(struct ibmvmc_buffer)); 516 + 517 + hmc->queue_outbound_msgs[i] = VMC_INVALID_BUFFER_ID; 518 + } 519 + 520 + spin_unlock_irqrestore(&hmc->lock, flags); 521 + 522 + return 0; 523 + } 524 + 525 + /** 526 + * ibmvmc_send_open - Interface Open 527 + * @buffer: Pointer to ibmvmc_buffer struct 528 + * @hmc: Pointer to ibmvmc_hmc struct 529 + * 530 + * This command is sent by the management partition as the result of a 531 + * management partition device request. It causes the hypervisor to 532 + * prepare a set of data buffers for the management application connection 533 + * indicated HMC idx. A unique HMC Idx would be used if multiple management 534 + * applications running concurrently were desired. 
Before responding to this 535 + * command, the hypervisor must provide the management partition with at 536 + * least one of these new buffers via the Add Buffer. This indicates whether 537 + * the messages are inbound or outbound from the hypervisor. 538 + * 539 + * Return: 540 + * 0 - Success 541 + * Non-zero - Failure 542 + */ 543 + static int ibmvmc_send_open(struct ibmvmc_buffer *buffer, 544 + struct ibmvmc_hmc *hmc) 545 + { 546 + struct ibmvmc_crq_msg crq_msg; 547 + struct crq_server_adapter *adapter; 548 + __be64 *crq_as_u64 = (__be64 *)&crq_msg; 549 + int rc = 0; 550 + 551 + if (!hmc || !hmc->adapter) 552 + return -EIO; 553 + 554 + adapter = hmc->adapter; 555 + 556 + dev_dbg(adapter->dev, "send_open: 0x%lx 0x%lx 0x%lx 0x%lx 0x%lx\n", 557 + (unsigned long)buffer->size, (unsigned long)adapter->liobn, 558 + (unsigned long)buffer->dma_addr_local, 559 + (unsigned long)adapter->riobn, 560 + (unsigned long)buffer->dma_addr_remote); 561 + 562 + rc = h_copy_rdma(buffer->size, 563 + adapter->liobn, 564 + buffer->dma_addr_local, 565 + adapter->riobn, 566 + buffer->dma_addr_remote); 567 + if (rc) { 568 + dev_err(adapter->dev, "Error: In send_open, h_copy_rdma rc 0x%x\n", 569 + rc); 570 + return -EIO; 571 + } 572 + 573 + hmc->state = ibmhmc_state_opening; 574 + 575 + crq_msg.valid = 0x80; 576 + crq_msg.type = VMC_MSG_OPEN; 577 + crq_msg.status = 0; 578 + crq_msg.var1.rsvd = 0; 579 + crq_msg.hmc_session = hmc->session; 580 + crq_msg.hmc_index = hmc->index; 581 + crq_msg.var2.buffer_id = cpu_to_be16(buffer->id); 582 + crq_msg.rsvd = 0; 583 + crq_msg.var3.rsvd = 0; 584 + 585 + ibmvmc_send_crq(adapter, be64_to_cpu(crq_as_u64[0]), 586 + be64_to_cpu(crq_as_u64[1])); 587 + 588 + return rc; 589 + } 590 + 591 + /** 592 + * ibmvmc_send_close - Interface Close 593 + * @hmc: Pointer to ibmvmc_hmc struct 594 + * 595 + * This command is sent by the management partition to terminate a 596 + * management application to hypervisor connection. 
When this command is 597 + * sent, the management partition has quiesced all I/O operations to all 598 + * buffers associated with this management application connection, and 599 + * has freed any storage for these buffers. 600 + * 601 + * Return: 602 + * 0 - Success 603 + * Non-zero - Failure 604 + */ 605 + static int ibmvmc_send_close(struct ibmvmc_hmc *hmc) 606 + { 607 + struct ibmvmc_crq_msg crq_msg; 608 + struct crq_server_adapter *adapter; 609 + __be64 *crq_as_u64 = (__be64 *)&crq_msg; 610 + int rc = 0; 611 + 612 + if (!hmc || !hmc->adapter) 613 + return -EIO; 614 + 615 + adapter = hmc->adapter; 616 + 617 + dev_info(adapter->dev, "CRQ send: close\n"); 618 + 619 + crq_msg.valid = 0x80; 620 + crq_msg.type = VMC_MSG_CLOSE; 621 + crq_msg.status = 0; 622 + crq_msg.var1.rsvd = 0; 623 + crq_msg.hmc_session = hmc->session; 624 + crq_msg.hmc_index = hmc->index; 625 + crq_msg.var2.rsvd = 0; 626 + crq_msg.rsvd = 0; 627 + crq_msg.var3.rsvd = 0; 628 + 629 + ibmvmc_send_crq(adapter, be64_to_cpu(crq_as_u64[0]), 630 + be64_to_cpu(crq_as_u64[1])); 631 + 632 + return rc; 633 + } 634 + 635 + /** 636 + * ibmvmc_send_capabilities - Send VMC Capabilities 637 + * 638 + * @adapter: crq_server_adapter struct 639 + * 640 + * The capabilities message is an administrative message sent after the CRQ 641 + * initialization sequence of messages and is used to exchange VMC capabilities 642 + * between the management partition and the hypervisor. The management 643 + * partition must send this message and the hypervisor must respond with a VMC 644 + * capabilities Response message before HMC interface messages can begin. Any 645 + * HMC interface messages received before the exchange of capabilities has 646 + * completed are dropped. 
647 + * 648 + * Return: 649 + * 0 - Success 650 + */ 651 + static int ibmvmc_send_capabilities(struct crq_server_adapter *adapter) 652 + { 653 + struct ibmvmc_admin_crq_msg crq_msg; 654 + __be64 *crq_as_u64 = (__be64 *)&crq_msg; 655 + 656 + dev_dbg(adapter->dev, "ibmvmc: CRQ send: capabilities\n"); 657 + crq_msg.valid = 0x80; 658 + crq_msg.type = VMC_MSG_CAP; 659 + crq_msg.status = 0; 660 + crq_msg.rsvd[0] = 0; 661 + crq_msg.rsvd[1] = 0; 662 + crq_msg.max_hmc = ibmvmc_max_hmcs; 663 + crq_msg.max_mtu = cpu_to_be32(ibmvmc_max_mtu); 664 + crq_msg.pool_size = cpu_to_be16(ibmvmc_max_buf_pool_size); 665 + crq_msg.crq_size = cpu_to_be16(adapter->queue.size); 666 + crq_msg.version = cpu_to_be16(IBMVMC_PROTOCOL_VERSION); 667 + 668 + ibmvmc_send_crq(adapter, be64_to_cpu(crq_as_u64[0]), 669 + be64_to_cpu(crq_as_u64[1])); 670 + 671 + ibmvmc.state = ibmvmc_state_capabilities; 672 + 673 + return 0; 674 + } 675 + 676 + /** 677 + * ibmvmc_send_add_buffer_resp - Add Buffer Response 678 + * 679 + * @adapter: crq_server_adapter struct 680 + * @status: Status field 681 + * @hmc_session: HMC Session field 682 + * @hmc_index: HMC Index field 683 + * @buffer_id: Buffer Id field 684 + * 685 + * This command is sent by the management partition to the hypervisor in 686 + * response to the Add Buffer message. The Status field indicates the result of 687 + * the command. 
688 + * 689 + * Return: 690 + * 0 - Success 691 + */ 692 + static int ibmvmc_send_add_buffer_resp(struct crq_server_adapter *adapter, 693 + u8 status, u8 hmc_session, 694 + u8 hmc_index, u16 buffer_id) 695 + { 696 + struct ibmvmc_crq_msg crq_msg; 697 + __be64 *crq_as_u64 = (__be64 *)&crq_msg; 698 + 699 + dev_dbg(adapter->dev, "CRQ send: add_buffer_resp\n"); 700 + crq_msg.valid = 0x80; 701 + crq_msg.type = VMC_MSG_ADD_BUF_RESP; 702 + crq_msg.status = status; 703 + crq_msg.var1.rsvd = 0; 704 + crq_msg.hmc_session = hmc_session; 705 + crq_msg.hmc_index = hmc_index; 706 + crq_msg.var2.buffer_id = cpu_to_be16(buffer_id); 707 + crq_msg.rsvd = 0; 708 + crq_msg.var3.rsvd = 0; 709 + 710 + ibmvmc_send_crq(adapter, be64_to_cpu(crq_as_u64[0]), 711 + be64_to_cpu(crq_as_u64[1])); 712 + 713 + return 0; 714 + } 715 + 716 + /** 717 + * ibmvmc_send_rem_buffer_resp - Remove Buffer Response 718 + * 719 + * @adapter: crq_server_adapter struct 720 + * @status: Status field 721 + * @hmc_session: HMC Session field 722 + * @hmc_index: HMC Index field 723 + * @buffer_id: Buffer Id field 724 + * 725 + * This command is sent by the management partition to the hypervisor in 726 + * response to the Remove Buffer message. The Buffer ID field indicates 727 + * which buffer the management partition selected to remove. The Status 728 + * field indicates the result of the command. 
729 + * 730 + * Return: 731 + * 0 - Success 732 + */ 733 + static int ibmvmc_send_rem_buffer_resp(struct crq_server_adapter *adapter, 734 + u8 status, u8 hmc_session, 735 + u8 hmc_index, u16 buffer_id) 736 + { 737 + struct ibmvmc_crq_msg crq_msg; 738 + __be64 *crq_as_u64 = (__be64 *)&crq_msg; 739 + 740 + dev_dbg(adapter->dev, "CRQ send: rem_buffer_resp\n"); 741 + crq_msg.valid = 0x80; 742 + crq_msg.type = VMC_MSG_REM_BUF_RESP; 743 + crq_msg.status = status; 744 + crq_msg.var1.rsvd = 0; 745 + crq_msg.hmc_session = hmc_session; 746 + crq_msg.hmc_index = hmc_index; 747 + crq_msg.var2.buffer_id = cpu_to_be16(buffer_id); 748 + crq_msg.rsvd = 0; 749 + crq_msg.var3.rsvd = 0; 750 + 751 + ibmvmc_send_crq(adapter, be64_to_cpu(crq_as_u64[0]), 752 + be64_to_cpu(crq_as_u64[1])); 753 + 754 + return 0; 755 + } 756 + 757 + /** 758 + * ibmvmc_send_msg - Signal Message 759 + * 760 + * @adapter: crq_server_adapter struct 761 + * @buffer: ibmvmc_buffer struct 762 + * @hmc: ibmvmc_hmc struct 763 + * @msg_length: message length field 764 + * 765 + * This command is sent between the management partition and the hypervisor 766 + * in order to signal the arrival of an HMC protocol message. The command 767 + * can be sent by both the management partition and the hypervisor. It is 768 + * used for all traffic between the management application and the hypervisor, 769 + * regardless of who initiated the communication. 770 + * 771 + * There is no response to this message. 
772 + * 773 + * Return: 774 + * 0 - Success 775 + * Non-zero - Failure 776 + */ 777 + static int ibmvmc_send_msg(struct crq_server_adapter *adapter, 778 + struct ibmvmc_buffer *buffer, 779 + struct ibmvmc_hmc *hmc, int msg_len) 780 + { 781 + struct ibmvmc_crq_msg crq_msg; 782 + __be64 *crq_as_u64 = (__be64 *)&crq_msg; 783 + int rc = 0; 784 + 785 + dev_dbg(adapter->dev, "CRQ send: rdma to HV\n"); 786 + rc = h_copy_rdma(msg_len, 787 + adapter->liobn, 788 + buffer->dma_addr_local, 789 + adapter->riobn, 790 + buffer->dma_addr_remote); 791 + if (rc) { 792 + dev_err(adapter->dev, "Error in send_msg, h_copy_rdma rc 0x%x\n", 793 + rc); 794 + return rc; 795 + } 796 + 797 + crq_msg.valid = 0x80; 798 + crq_msg.type = VMC_MSG_SIGNAL; 799 + crq_msg.status = 0; 800 + crq_msg.var1.rsvd = 0; 801 + crq_msg.hmc_session = hmc->session; 802 + crq_msg.hmc_index = hmc->index; 803 + crq_msg.var2.buffer_id = cpu_to_be16(buffer->id); 804 + crq_msg.var3.msg_len = cpu_to_be32(msg_len); 805 + dev_dbg(adapter->dev, "CRQ send: msg to HV 0x%llx 0x%llx\n", 806 + be64_to_cpu(crq_as_u64[0]), be64_to_cpu(crq_as_u64[1])); 807 + 808 + buffer->owner = VMC_BUF_OWNER_HV; 809 + ibmvmc_send_crq(adapter, be64_to_cpu(crq_as_u64[0]), 810 + be64_to_cpu(crq_as_u64[1])); 811 + 812 + return rc; 813 + } 814 + 815 + /** 816 + * ibmvmc_open - Open Session 817 + * 818 + * @inode: inode struct 819 + * @file: file struct 820 + * 821 + * Return: 822 + * 0 - Success + * Non-zero - Failure 823 + */ 824 + static int ibmvmc_open(struct inode *inode, struct file *file) 825 + { 826 + struct ibmvmc_file_session *session; 827 + int rc = 0; 828 + 829 + pr_debug("%s: inode = 0x%lx, file = 0x%lx, state = 0x%x\n", __func__, 830 + (unsigned long)inode, (unsigned long)file, 831 + ibmvmc.state); 832 + 833 + session = kzalloc(sizeof(*session), GFP_KERNEL); + if (!session) + return -ENOMEM; + 834 + session->file = file; 835 + file->private_data = session; 836 + 837 + return rc; 838 + } 839 + 840 + /** 841 + * ibmvmc_close - Close Session 842 + * 843 + * @inode: inode struct 844 + * @file: file
struct 845 + * 846 + * Return: 847 + * 0 - Success 848 + * Non-zero - Failure 849 + */ 850 + static int ibmvmc_close(struct inode *inode, struct file *file) 851 + { 852 + struct ibmvmc_file_session *session; 853 + struct ibmvmc_hmc *hmc; 854 + int rc = 0; 855 + unsigned long flags; 856 + 857 + pr_debug("%s: file = 0x%lx, state = 0x%x\n", __func__, 858 + (unsigned long)file, ibmvmc.state); 859 + 860 + session = file->private_data; 861 + if (!session) 862 + return -EIO; 863 + 864 + hmc = session->hmc; 865 + if (hmc) { 866 + if (!hmc->adapter) 867 + return -EIO; 868 + 869 + if (ibmvmc.state == ibmvmc_state_failed) { 870 + dev_warn(hmc->adapter->dev, "close: state_failed\n"); 871 + return -EIO; 872 + } 873 + 874 + spin_lock_irqsave(&hmc->lock, flags); 875 + if (hmc->state >= ibmhmc_state_opening) { 876 + rc = ibmvmc_send_close(hmc); 877 + if (rc) 878 + dev_warn(hmc->adapter->dev, "close: send_close failed.\n"); 879 + } 880 + spin_unlock_irqrestore(&hmc->lock, flags); 881 + } 882 + 883 + kzfree(session); 884 + 885 + return rc; 886 + } 887 + 888 + /** 889 + * ibmvmc_read - Read 890 + * 891 + * @file: file struct 892 + * @buf: Character buffer 893 + * @nbytes: Size in bytes 894 + * @ppos: Offset 895 + * 896 + * Return: 897 + * 0 - Success 898 + * Non-zero - Failure 899 + */ 900 + static ssize_t ibmvmc_read(struct file *file, char *buf, size_t nbytes, 901 + loff_t *ppos) 902 + { 903 + struct ibmvmc_file_session *session; 904 + struct ibmvmc_hmc *hmc; 905 + struct crq_server_adapter *adapter; 906 + struct ibmvmc_buffer *buffer; 907 + ssize_t n; 908 + ssize_t retval = 0; 909 + unsigned long flags; 910 + DEFINE_WAIT(wait); 911 + 912 + pr_debug("ibmvmc: read: file = 0x%lx, buf = 0x%lx, nbytes = 0x%lx\n", 913 + (unsigned long)file, (unsigned long)buf, 914 + (unsigned long)nbytes); 915 + 916 + if (nbytes == 0) 917 + return 0; 918 + 919 + if (nbytes > ibmvmc.max_mtu) { 920 + pr_warn("ibmvmc: read: nbytes invalid 0x%x\n", 921 + (unsigned int)nbytes); 922 + return -EINVAL; 923 + } 
924 + 925 + session = file->private_data; 926 + if (!session) { 927 + pr_warn("ibmvmc: read: no session\n"); 928 + return -EIO; 929 + } 930 + 931 + hmc = session->hmc; 932 + if (!hmc) { 933 + pr_warn("ibmvmc: read: no hmc\n"); 934 + return -EIO; 935 + } 936 + 937 + adapter = hmc->adapter; 938 + if (!adapter) { 939 + pr_warn("ibmvmc: read: no adapter\n"); 940 + return -EIO; 941 + } 942 + 943 + do { 944 + prepare_to_wait(&ibmvmc_read_wait, &wait, TASK_INTERRUPTIBLE); 945 + 946 + spin_lock_irqsave(&hmc->lock, flags); 947 + if (hmc->queue_tail != hmc->queue_head) 948 + /* Data is available */ 949 + break; 950 + 951 + spin_unlock_irqrestore(&hmc->lock, flags); 952 + 953 + if (!session->valid) { 954 + retval = -EBADFD; 955 + goto out; 956 + } 957 + if (file->f_flags & O_NONBLOCK) { 958 + retval = -EAGAIN; 959 + goto out; 960 + } 961 + 962 + schedule(); 963 + 964 + if (signal_pending(current)) { 965 + retval = -ERESTARTSYS; 966 + goto out; 967 + } 968 + } while (1); 969 + 970 + buffer = &(hmc->buffer[hmc->queue_outbound_msgs[hmc->queue_tail]]); 971 + hmc->queue_tail++; 972 + if (hmc->queue_tail == ibmvmc_max_buf_pool_size) 973 + hmc->queue_tail = 0; 974 + spin_unlock_irqrestore(&hmc->lock, flags); 975 + 976 + nbytes = min_t(size_t, nbytes, buffer->msg_len); 977 + n = copy_to_user((void *)buf, buffer->real_addr_local, nbytes); 978 + dev_dbg(adapter->dev, "read: copy to user nbytes = 0x%lx.\n", nbytes); 979 + ibmvmc_free_hmc_buffer(hmc, buffer); 980 + retval = nbytes; 981 + 982 + if (n) { 983 + dev_warn(adapter->dev, "read: copy to user failed.\n"); 984 + retval = -EFAULT; 985 + } 986 + 987 + out: 988 + finish_wait(&ibmvmc_read_wait, &wait); 989 + dev_dbg(adapter->dev, "read: out %ld\n", retval); 990 + return retval; 991 + } 992 + 993 + /** 994 + * ibmvmc_poll - Poll 995 + * 996 + * @file: file struct 997 + * @wait: Poll Table 998 + * 999 + * Return: 1000 + * poll.h return values 1001 + */ 1002 + static unsigned int ibmvmc_poll(struct file *file, poll_table *wait) 1003 + { 
1004 + struct ibmvmc_file_session *session; 1005 + struct ibmvmc_hmc *hmc; 1006 + unsigned int mask = 0; 1007 + 1008 + session = file->private_data; 1009 + if (!session) 1010 + return 0; 1011 + 1012 + hmc = session->hmc; 1013 + if (!hmc) 1014 + return 0; 1015 + 1016 + poll_wait(file, &ibmvmc_read_wait, wait); 1017 + 1018 + if (hmc->queue_head != hmc->queue_tail) 1019 + mask |= POLLIN | POLLRDNORM; 1020 + 1021 + return mask; 1022 + } 1023 + 1024 + /** 1025 + * ibmvmc_write - Write 1026 + * 1027 + * @file: file struct 1028 + * @buf: Character buffer 1029 + * @count: Count field 1030 + * @ppos: Offset 1031 + * 1032 + * Return: 1033 + * 0 - Success 1034 + * Non-zero - Failure 1035 + */ 1036 + static ssize_t ibmvmc_write(struct file *file, const char *buffer, 1037 + size_t count, loff_t *ppos) 1038 + { 1039 + struct ibmvmc_buffer *vmc_buffer; 1040 + struct ibmvmc_file_session *session; 1041 + struct crq_server_adapter *adapter; 1042 + struct ibmvmc_hmc *hmc; 1043 + unsigned char *buf; 1044 + unsigned long flags; 1045 + size_t bytes; 1046 + const char *p = buffer; 1047 + size_t c = count; 1048 + int ret = 0; 1049 + 1050 + session = file->private_data; 1051 + if (!session) 1052 + return -EIO; 1053 + 1054 + hmc = session->hmc; 1055 + if (!hmc) 1056 + return -EIO; 1057 + 1058 + spin_lock_irqsave(&hmc->lock, flags); 1059 + if (hmc->state == ibmhmc_state_free) { 1060 + /* HMC connection is not valid (possibly was reset under us). 
*/ 1061 + ret = -EIO; 1062 + goto out; 1063 + } 1064 + 1065 + adapter = hmc->adapter; 1066 + if (!adapter) { 1067 + ret = -EIO; 1068 + goto out; 1069 + } 1070 + 1071 + if (count > ibmvmc.max_mtu) { 1072 + dev_warn(adapter->dev, "invalid buffer size 0x%lx\n", 1073 + (unsigned long)count); 1074 + ret = -EIO; 1075 + goto out; 1076 + } 1077 + 1078 + /* Waiting for the open resp message to the ioctl(1) - retry */ 1079 + if (hmc->state == ibmhmc_state_opening) { 1080 + ret = -EBUSY; 1081 + goto out; 1082 + } 1083 + 1084 + /* Make sure the ioctl() was called & the open msg sent, and that 1085 + * the HMC connection has not failed. 1086 + */ 1087 + if (hmc->state != ibmhmc_state_ready) { 1088 + ret = -EIO; 1089 + goto out; 1090 + } 1091 + 1092 + vmc_buffer = ibmvmc_get_valid_hmc_buffer(hmc->index); 1093 + if (!vmc_buffer) { 1094 + /* No buffer available for the msg send, or we have not yet 1095 + * completed the open/open_resp sequence. Retry until this is 1096 + * complete. 1097 + */ 1098 + ret = -EBUSY; 1099 + goto out; 1100 + } 1101 + if (!vmc_buffer->real_addr_local) { 1102 + dev_err(adapter->dev, "no buffer storage assigned\n"); 1103 + ret = -EIO; 1104 + goto out; 1105 + } 1106 + buf = vmc_buffer->real_addr_local; 1107 + 1108 + while (c > 0) { 1109 + bytes = min_t(size_t, c, vmc_buffer->size); 1110 + 1111 + bytes -= copy_from_user(buf, p, bytes); 1112 + if (!bytes) { 1113 + ret = -EFAULT; 1114 + goto out; 1115 + } 1116 + c -= bytes; 1117 + p += bytes; 1118 + } 1119 + if (p == buffer) 1120 + goto out; 1121 + 1122 + file->f_path.dentry->d_inode->i_mtime = current_time(file_inode(file)); 1123 + mark_inode_dirty(file->f_path.dentry->d_inode); 1124 + 1125 + dev_dbg(adapter->dev, "write: file = 0x%lx, count = 0x%lx\n", 1126 + (unsigned long)file, (unsigned long)count); 1127 + 1128 + ibmvmc_send_msg(adapter, vmc_buffer, hmc, count); 1129 + ret = p - buffer; 1130 + out: 1131 + spin_unlock_irqrestore(&hmc->lock, flags); 1132 + return (ssize_t)(ret); 1133 + } 1134 + 1135 + /** 
1136 + * ibmvmc_setup_hmc - Setup the HMC 1137 + * 1138 + * @session: ibmvmc_file_session struct 1139 + * 1140 + * Return: 1141 + * 0 - Success 1142 + * Non-zero - Failure 1143 + */ 1144 + static long ibmvmc_setup_hmc(struct ibmvmc_file_session *session) 1145 + { 1146 + struct ibmvmc_hmc *hmc; 1147 + unsigned int valid, free, index; 1148 + 1149 + if (ibmvmc.state == ibmvmc_state_failed) { 1150 + pr_warn("ibmvmc: Reserve HMC: state_failed\n"); 1151 + return -EIO; 1152 + } 1153 + 1154 + if (ibmvmc.state < ibmvmc_state_ready) { 1155 + pr_warn("ibmvmc: Reserve HMC: not state_ready\n"); 1156 + return -EAGAIN; 1157 + } 1158 + 1159 + /* Device is busy until capabilities have been exchanged and we 1160 + * have a generic buffer for each possible HMC connection. 1161 + */ 1162 + for (index = 0; index <= ibmvmc.max_hmc_index; index++) { 1163 + valid = 0; 1164 + ibmvmc_count_hmc_buffers(index, &valid, &free); 1165 + if (valid == 0) { 1166 + pr_warn("ibmvmc: buffers not ready for index %d\n", 1167 + index); 1168 + return -ENOBUFS; 1169 + } 1170 + } 1171 + 1172 + /* Get an hmc object, and transition to ibmhmc_state_initial */ 1173 + hmc = ibmvmc_get_free_hmc(); 1174 + if (!hmc) { 1175 + pr_warn("%s: free hmc not found\n", __func__); 1176 + return -EBUSY; 1177 + } 1178 + 1179 + hmc->session = hmc->session + 1; 1180 + if (hmc->session == 0xff) 1181 + hmc->session = 1; 1182 + 1183 + session->hmc = hmc; 1184 + hmc->adapter = &ibmvmc_adapter; 1185 + hmc->file_session = session; 1186 + session->valid = 1; 1187 + 1188 + return 0; 1189 + } 1190 + 1191 + /** 1192 + * ibmvmc_ioctl_sethmcid - IOCTL Set HMC ID 1193 + * 1194 + * @session: ibmvmc_file_session struct 1195 + * @new_hmc_id: HMC id field 1196 + * 1197 + * IOCTL command to setup the hmc id 1198 + * 1199 + * Return: 1200 + * 0 - Success 1201 + * Non-zero - Failure 1202 + */ 1203 + static long ibmvmc_ioctl_sethmcid(struct ibmvmc_file_session *session, 1204 + unsigned char __user *new_hmc_id) 1205 + { 1206 + struct ibmvmc_hmc *hmc; 
1207 + struct ibmvmc_buffer *buffer; 1208 + size_t bytes; 1209 + char print_buffer[HMC_ID_LEN + 1]; 1210 + unsigned long flags; 1211 + long rc = 0; 1212 + 1213 + /* Reserve HMC session */ 1214 + hmc = session->hmc; 1215 + if (!hmc) { 1216 + rc = ibmvmc_setup_hmc(session); 1217 + if (rc) 1218 + return rc; 1219 + 1220 + hmc = session->hmc; 1221 + if (!hmc) { 1222 + pr_err("ibmvmc: setup_hmc success but no hmc\n"); 1223 + return -EIO; 1224 + } 1225 + } 1226 + 1227 + if (hmc->state != ibmhmc_state_initial) { 1228 + pr_warn("ibmvmc: sethmcid: invalid state to send open 0x%x\n", 1229 + hmc->state); 1230 + return -EIO; 1231 + } 1232 + 1233 + bytes = copy_from_user(hmc->hmc_id, new_hmc_id, HMC_ID_LEN); 1234 + if (bytes) 1235 + return -EFAULT; 1236 + 1237 + /* Send Open Session command */ 1238 + spin_lock_irqsave(&hmc->lock, flags); 1239 + buffer = ibmvmc_get_valid_hmc_buffer(hmc->index); 1240 + spin_unlock_irqrestore(&hmc->lock, flags); 1241 + 1242 + if (!buffer || !buffer->real_addr_local) { 1243 + pr_warn("ibmvmc: sethmcid: no buffer available\n"); 1244 + return -EIO; 1245 + } 1246 + 1247 + /* Make sure buffer is NULL terminated before trying to print it */ 1248 + memset(print_buffer, 0, HMC_ID_LEN + 1); 1249 + strncpy(print_buffer, hmc->hmc_id, HMC_ID_LEN); 1250 + pr_info("ibmvmc: sethmcid: Set HMC ID: \"%s\"\n", print_buffer); 1251 + 1252 + memcpy(buffer->real_addr_local, hmc->hmc_id, HMC_ID_LEN); 1253 + /* RDMA over ID, send open msg, change state to ibmhmc_state_opening */ 1254 + rc = ibmvmc_send_open(buffer, hmc); 1255 + 1256 + return rc; 1257 + } 1258 + 1259 + /** 1260 + * ibmvmc_ioctl_query - IOCTL Query 1261 + * 1262 + * @session: ibmvmc_file_session struct 1263 + * @ret_struct: ibmvmc_query_struct 1264 + * 1265 + * Return: 1266 + * 0 - Success 1267 + * Non-zero - Failure 1268 + */ 1269 + static long ibmvmc_ioctl_query(struct ibmvmc_file_session *session, 1270 + struct ibmvmc_query_struct __user *ret_struct) 1271 + { 1272 + struct ibmvmc_query_struct 
query_struct; 1273 + size_t bytes; 1274 + 1275 + memset(&query_struct, 0, sizeof(query_struct)); 1276 + query_struct.have_vmc = (ibmvmc.state > ibmvmc_state_initial); 1277 + query_struct.state = ibmvmc.state; 1278 + query_struct.vmc_drc_index = ibmvmc.vmc_drc_index; 1279 + 1280 + bytes = copy_to_user(ret_struct, &query_struct, 1281 + sizeof(query_struct)); 1282 + if (bytes) 1283 + return -EFAULT; 1284 + 1285 + return 0; 1286 + } 1287 + 1288 + /** 1289 + * ibmvmc_ioctl_requestvmc - IOCTL Request VMC 1290 + * 1291 + * @session: ibmvmc_file_session struct 1292 + * @ret_vmc_index: VMC Index 1293 + * 1294 + * Return: 1295 + * 0 - Success 1296 + * Non-zero - Failure 1297 + */ 1298 + static long ibmvmc_ioctl_requestvmc(struct ibmvmc_file_session *session, 1299 + u32 __user *ret_vmc_index) 1300 + { 1301 + /* TODO: (adreznec) Add locking to control multiple process access */ 1302 + size_t bytes; 1303 + long rc; 1304 + u32 vmc_drc_index; 1305 + 1306 + /* Call to request the VMC device from phyp*/ 1307 + rc = h_request_vmc(&vmc_drc_index); 1308 + pr_debug("ibmvmc: requestvmc: H_REQUEST_VMC rc = 0x%lx\n", rc); 1309 + 1310 + if (rc == H_SUCCESS) { 1311 + rc = 0; 1312 + } else if (rc == H_FUNCTION) { 1313 + pr_err("ibmvmc: requestvmc: h_request_vmc not supported\n"); 1314 + return -EPERM; 1315 + } else if (rc == H_AUTHORITY) { 1316 + pr_err("ibmvmc: requestvmc: hypervisor denied vmc request\n"); 1317 + return -EPERM; 1318 + } else if (rc == H_HARDWARE) { 1319 + pr_err("ibmvmc: requestvmc: hypervisor hardware fault\n"); 1320 + return -EIO; 1321 + } else if (rc == H_RESOURCE) { 1322 + pr_err("ibmvmc: requestvmc: vmc resource unavailable\n"); 1323 + return -ENODEV; 1324 + } else if (rc == H_NOT_AVAILABLE) { 1325 + pr_err("ibmvmc: requestvmc: system cannot be vmc managed\n"); 1326 + return -EPERM; 1327 + } else if (rc == H_PARAMETER) { 1328 + pr_err("ibmvmc: requestvmc: invalid parameter\n"); 1329 + return -EINVAL; 1330 + } 1331 + 1332 + /* Success, set the vmc index in global 
struct */ 1333 + ibmvmc.vmc_drc_index = vmc_drc_index; 1334 + 1335 + bytes = copy_to_user(ret_vmc_index, &vmc_drc_index, 1336 + sizeof(*ret_vmc_index)); 1337 + if (bytes) { 1338 + pr_warn("ibmvmc: requestvmc: copy to user failed.\n"); 1339 + return -EFAULT; 1340 + } 1341 + return rc; 1342 + } 1343 + 1344 + /** 1345 + * ibmvmc_ioctl - IOCTL 1346 + * 1347 + * @file: file struct 1348 + * @cmd: cmd field 1349 + * @arg: Argument field 1350 + * 1351 + * Return: 1352 + * 0 - Success 1353 + * Non-zero - Failure 1354 + */ 1355 + static long ibmvmc_ioctl(struct file *file, 1356 + unsigned int cmd, unsigned long arg) 1357 + { 1358 + struct ibmvmc_file_session *session = file->private_data; 1359 + 1360 + pr_debug("ibmvmc: ioctl file=0x%lx, cmd=0x%x, arg=0x%lx, ses=0x%lx\n", 1361 + (unsigned long)file, cmd, arg, 1362 + (unsigned long)session); 1363 + 1364 + if (!session) { 1365 + pr_warn("ibmvmc: ioctl: no session\n"); 1366 + return -EIO; 1367 + } 1368 + 1369 + switch (cmd) { 1370 + case VMC_IOCTL_SETHMCID: 1371 + return ibmvmc_ioctl_sethmcid(session, 1372 + (unsigned char __user *)arg); 1373 + case VMC_IOCTL_QUERY: 1374 + return ibmvmc_ioctl_query(session, 1375 + (struct ibmvmc_query_struct __user *)arg); 1376 + case VMC_IOCTL_REQUESTVMC: 1377 + return ibmvmc_ioctl_requestvmc(session, 1378 + (unsigned int __user *)arg); 1379 + default: 1380 + pr_warn("ibmvmc: unknown ioctl 0x%x\n", cmd); 1381 + return -EINVAL; 1382 + } 1383 + } 1384 + 1385 + static const struct file_operations ibmvmc_fops = { 1386 + .owner = THIS_MODULE, 1387 + .read = ibmvmc_read, 1388 + .write = ibmvmc_write, 1389 + .poll = ibmvmc_poll, 1390 + .unlocked_ioctl = ibmvmc_ioctl, 1391 + .open = ibmvmc_open, 1392 + .release = ibmvmc_close, 1393 + }; 1394 + 1395 + /** 1396 + * ibmvmc_add_buffer - Add Buffer 1397 + * 1398 + * @adapter: crq_server_adapter struct 1399 + * @crq: ibmvmc_crq_msg struct 1400 + * 1401 + * This message transfers a buffer from hypervisor ownership to management 1402 + *
partition ownership. The LIOBA is obtained from the virtual TCE table 1403 + * associated with the hypervisor side of the VMC device, and points to a 1404 + * buffer of size MTU (as established in the capabilities exchange). 1405 + * 1406 + * Typical flow for adding buffers: 1407 + * 1. A new management application connection is opened by the management 1408 + * partition. 1409 + * 2. The hypervisor assigns new buffers for the traffic associated with 1410 + * that connection. 1411 + * 3. The hypervisor sends VMC Add Buffer messages to the management 1412 + * partition, informing it of the new buffers. 1413 + * 4. The hypervisor sends an HMC protocol message (to the management 1414 + * application) notifying it of the new buffers. This informs the 1415 + * application that it has buffers available for sending HMC 1416 + * commands. 1417 + * 1418 + * Return: 1419 + * 0 - Success 1420 + * Non-zero - Failure 1421 + */ 1422 + static int ibmvmc_add_buffer(struct crq_server_adapter *adapter, 1423 + struct ibmvmc_crq_msg *crq) 1424 + { 1425 + struct ibmvmc_buffer *buffer; 1426 + u8 hmc_index; 1427 + u8 hmc_session; 1428 + u16 buffer_id; 1429 + unsigned long flags; 1430 + int rc = 0; 1431 + 1432 + if (!crq) 1433 + return -1; 1434 + 1435 + hmc_session = crq->hmc_session; 1436 + hmc_index = crq->hmc_index; 1437 + buffer_id = be16_to_cpu(crq->var2.buffer_id); 1438 + 1439 + if (hmc_index > ibmvmc.max_hmc_index) { 1440 + dev_err(adapter->dev, "add_buffer: invalid hmc_index = 0x%x\n", 1441 + hmc_index); 1442 + ibmvmc_send_add_buffer_resp(adapter, VMC_MSG_INVALID_HMC_INDEX, 1443 + hmc_session, hmc_index, buffer_id); 1444 + return -1; 1445 + } 1446 + 1447 + if (buffer_id >= ibmvmc.max_buffer_pool_size) { 1448 + dev_err(adapter->dev, "add_buffer: invalid buffer_id = 0x%x\n", 1449 + buffer_id); 1450 + ibmvmc_send_add_buffer_resp(adapter, VMC_MSG_INVALID_BUFFER_ID, 1451 + hmc_session, hmc_index, buffer_id); 1452 + return -1; 1453 + } 1454 + 1455 +
spin_lock_irqsave(&hmcs[hmc_index].lock, flags); 1456 + buffer = &hmcs[hmc_index].buffer[buffer_id]; 1457 + 1458 + if (buffer->real_addr_local || buffer->dma_addr_local) { 1459 + dev_warn(adapter->dev, "add_buffer: already allocated id = 0x%lx\n", 1460 + (unsigned long)buffer_id); 1461 + spin_unlock_irqrestore(&hmcs[hmc_index].lock, flags); 1462 + ibmvmc_send_add_buffer_resp(adapter, VMC_MSG_INVALID_BUFFER_ID, 1463 + hmc_session, hmc_index, buffer_id); 1464 + return -1; 1465 + } 1466 + 1467 + buffer->real_addr_local = alloc_dma_buffer(to_vio_dev(adapter->dev), 1468 + ibmvmc.max_mtu, 1469 + &buffer->dma_addr_local); 1470 + 1471 + if (!buffer->real_addr_local) { 1472 + dev_err(adapter->dev, "add_buffer: alloc_dma_buffer failed.\n"); 1473 + spin_unlock_irqrestore(&hmcs[hmc_index].lock, flags); 1474 + ibmvmc_send_add_buffer_resp(adapter, VMC_MSG_INTERFACE_FAILURE, 1475 + hmc_session, hmc_index, buffer_id); 1476 + return -1; 1477 + } 1478 + 1479 + buffer->dma_addr_remote = be32_to_cpu(crq->var3.lioba); 1480 + buffer->size = ibmvmc.max_mtu; 1481 + buffer->owner = crq->var1.owner; 1482 + buffer->free = 1; 1483 + /* Must ensure valid==1 is observable only after all other fields are */ 1484 + dma_wmb(); 1485 + buffer->valid = 1; 1486 + buffer->id = buffer_id; 1487 + 1488 + dev_dbg(adapter->dev, "add_buffer: successfully added a buffer:\n"); 1489 + dev_dbg(adapter->dev, " index: %d, session: %d, buffer: 0x%x, owner: %d\n", 1490 + hmc_index, hmc_session, buffer_id, buffer->owner); 1491 + dev_dbg(adapter->dev, " local: 0x%x, remote: 0x%x\n", 1492 + (u32)buffer->dma_addr_local, 1493 + (u32)buffer->dma_addr_remote); 1494 + spin_unlock_irqrestore(&hmcs[hmc_index].lock, flags); 1495 + 1496 + ibmvmc_send_add_buffer_resp(adapter, VMC_MSG_SUCCESS, hmc_session, 1497 + hmc_index, buffer_id); 1498 + 1499 + return rc; 1500 + } 1501 + 1502 + /** 1503 + * ibmvmc_rem_buffer - Remove Buffer 1504 + * 1505 + * @adapter: crq_server_adapter struct 1506 + * @crq: ibmvmc_crq_msg struct 1507 + * 
1508 + * This message requests an HMC buffer to be transferred from management 1509 + * partition ownership to hypervisor ownership. The management partition may 1510 + * not be able to satisfy the request at a particular point in time if all its 1511 + * buffers are in use. The management partition requires a depth of at least 1512 + * one inbound buffer to allow management application commands to flow to the 1513 + * hypervisor. It is, therefore, an interface error for the hypervisor to 1514 + * attempt to remove the management partition's last buffer. 1515 + * 1516 + * The hypervisor is expected to manage buffer usage with the management 1517 + * application directly and inform the management partition when buffers may be 1518 + * removed. The typical flow for removing buffers: 1519 + * 1520 + * 1. The management application no longer needs a communication path to a 1521 + * particular hypervisor function. That function is closed. 1522 + * 2. The hypervisor and the management application quiesce all traffic to that 1523 + * function. The hypervisor requests a reduction in buffer pool size. 1524 + * 3. The management application acknowledges the reduction in buffer pool size. 1525 + * 4. The hypervisor sends a Remove Buffer message to the management partition, 1526 + * informing it of the reduction in buffers. 1527 + * 5. The management partition verifies it can remove the buffer. This is 1528 + * possible if buffers have been quiesced. 1529 + * 1530 + * Return: 1531 + * 0 - Success 1532 + * Non-zero - Failure 1533 + */ 1534 + /* 1535 + * The hypervisor requested that we pick an unused buffer, and return it. 1536 + * Before sending the buffer back, we free any storage associated with the 1537 + * buffer. 
1538 + */ 1539 + static int ibmvmc_rem_buffer(struct crq_server_adapter *adapter, 1540 + struct ibmvmc_crq_msg *crq) 1541 + { 1542 + struct ibmvmc_buffer *buffer; 1543 + u8 hmc_index; 1544 + u8 hmc_session; 1545 + u16 buffer_id = 0; 1546 + unsigned long flags; 1547 + int rc = 0; 1548 + 1549 + if (!crq) 1550 + return -1; 1551 + 1552 + hmc_session = crq->hmc_session; 1553 + hmc_index = crq->hmc_index; 1554 + 1555 + if (hmc_index > ibmvmc.max_hmc_index) { 1556 + dev_warn(adapter->dev, "rem_buffer: invalid hmc_index = 0x%x\n", 1557 + hmc_index); 1558 + ibmvmc_send_rem_buffer_resp(adapter, VMC_MSG_INVALID_HMC_INDEX, 1559 + hmc_session, hmc_index, buffer_id); 1560 + return -1; 1561 + } 1562 + 1563 + spin_lock_irqsave(&hmcs[hmc_index].lock, flags); 1564 + buffer = ibmvmc_get_free_hmc_buffer(adapter, hmc_index); 1565 + if (!buffer) { 1566 + dev_info(adapter->dev, "rem_buffer: no buffer to remove\n"); 1567 + spin_unlock_irqrestore(&hmcs[hmc_index].lock, flags); 1568 + ibmvmc_send_rem_buffer_resp(adapter, VMC_MSG_NO_BUFFER, 1569 + hmc_session, hmc_index, 1570 + VMC_INVALID_BUFFER_ID); 1571 + return -1; 1572 + } 1573 + 1574 + buffer_id = buffer->id; 1575 + 1576 + if (buffer->valid) 1577 + free_dma_buffer(to_vio_dev(adapter->dev), 1578 + ibmvmc.max_mtu, 1579 + buffer->real_addr_local, 1580 + buffer->dma_addr_local); 1581 + 1582 + memset(buffer, 0, sizeof(struct ibmvmc_buffer)); 1583 + spin_unlock_irqrestore(&hmcs[hmc_index].lock, flags); 1584 + 1585 + dev_dbg(adapter->dev, "rem_buffer: removed buffer 0x%x.\n", buffer_id); 1586 + ibmvmc_send_rem_buffer_resp(adapter, VMC_MSG_SUCCESS, hmc_session, 1587 + hmc_index, buffer_id); 1588 + 1589 + return rc; 1590 + } 1591 + 1592 + static int ibmvmc_recv_msg(struct crq_server_adapter *adapter, 1593 + struct ibmvmc_crq_msg *crq) 1594 + { 1595 + struct ibmvmc_buffer *buffer; 1596 + struct ibmvmc_hmc *hmc; 1597 + unsigned long msg_len; 1598 + u8 hmc_index; 1599 + u8 hmc_session; 1600 + u16 buffer_id; 1601 + unsigned long flags; 1602 + int 
rc = 0; 1603 + 1604 + if (!crq) 1605 + return -1; 1606 + 1607 + /* Hypervisor writes CRQs directly into our memory in big endian */ 1608 + dev_dbg(adapter->dev, "Recv_msg: msg from HV 0x%016llx 0x%016llx\n", 1609 + be64_to_cpu(*((unsigned long *)crq)), 1610 + be64_to_cpu(*(((unsigned long *)crq) + 1))); 1611 + 1612 + hmc_session = crq->hmc_session; 1613 + hmc_index = crq->hmc_index; 1614 + buffer_id = be16_to_cpu(crq->var2.buffer_id); 1615 + msg_len = be32_to_cpu(crq->var3.msg_len); 1616 + 1617 + if (hmc_index > ibmvmc.max_hmc_index) { 1618 + dev_err(adapter->dev, "Recv_msg: invalid hmc_index = 0x%x\n", 1619 + hmc_index); 1620 + ibmvmc_send_add_buffer_resp(adapter, VMC_MSG_INVALID_HMC_INDEX, 1621 + hmc_session, hmc_index, buffer_id); 1622 + return -1; 1623 + } 1624 + 1625 + if (buffer_id >= ibmvmc.max_buffer_pool_size) { 1626 + dev_err(adapter->dev, "Recv_msg: invalid buffer_id = 0x%x\n", 1627 + buffer_id); 1628 + ibmvmc_send_add_buffer_resp(adapter, VMC_MSG_INVALID_BUFFER_ID, 1629 + hmc_session, hmc_index, buffer_id); 1630 + return -1; 1631 + } 1632 + 1633 + hmc = &hmcs[hmc_index]; 1634 + spin_lock_irqsave(&hmc->lock, flags); 1635 + 1636 + if (hmc->state == ibmhmc_state_free) { 1637 + dev_err(adapter->dev, "Recv_msg: invalid hmc state = 0x%x\n", 1638 + hmc->state); 1639 + /* HMC connection is not valid (possibly was reset under us). */ 1640 + spin_unlock_irqrestore(&hmc->lock, flags); 1641 + return -1; 1642 + } 1643 + 1644 + buffer = &hmc->buffer[buffer_id]; 1645 + 1646 + if (buffer->valid == 0 || buffer->owner == VMC_BUF_OWNER_ALPHA) { 1647 + dev_err(adapter->dev, "Recv_msg: not valid, or not HV. 0x%x 0x%x\n", 1648 + buffer->valid, buffer->owner); 1649 + spin_unlock_irqrestore(&hmc->lock, flags); 1650 + return -1; 1651 + } 1652 + 1653 + /* RDMA the data into the partition. 
*/ 1654 + rc = h_copy_rdma(msg_len, 1655 + adapter->riobn, 1656 + buffer->dma_addr_remote, 1657 + adapter->liobn, 1658 + buffer->dma_addr_local); 1659 + 1660 + dev_dbg(adapter->dev, "Recv_msg: msg_len = 0x%x, buffer_id = 0x%x, queue_head = 0x%x, hmc_idx = 0x%x\n", 1661 + (unsigned int)msg_len, (unsigned int)buffer_id, 1662 + (unsigned int)hmc->queue_head, (unsigned int)hmc_index); 1663 + buffer->msg_len = msg_len; 1664 + buffer->free = 0; 1665 + buffer->owner = VMC_BUF_OWNER_ALPHA; 1666 + 1667 + if (rc) { 1668 + dev_err(adapter->dev, "Failure in recv_msg: h_copy_rdma = 0x%x\n", 1669 + rc); 1670 + spin_unlock_irqrestore(&hmc->lock, flags); 1671 + return -1; 1672 + } 1673 + 1674 + /* Must be locked because read operates on the same data */ 1675 + hmc->queue_outbound_msgs[hmc->queue_head] = buffer_id; 1676 + hmc->queue_head++; 1677 + if (hmc->queue_head == ibmvmc_max_buf_pool_size) 1678 + hmc->queue_head = 0; 1679 + 1680 + if (hmc->queue_head == hmc->queue_tail) 1681 + dev_err(adapter->dev, "outbound buffer queue wrapped.\n"); 1682 + 1683 + spin_unlock_irqrestore(&hmc->lock, flags); 1684 + 1685 + wake_up_interruptible(&ibmvmc_read_wait); 1686 + 1687 + return 0; 1688 + } 1689 + 1690 + /** 1691 + * ibmvmc_process_capabilities - Process Capabilities 1692 + * 1693 + * @adapter: crq_server_adapter struct 1694 + * @crqp: ibmvmc_crq_msg struct 1695 + * 1696 + */ 1697 + static void ibmvmc_process_capabilities(struct crq_server_adapter *adapter, 1698 + struct ibmvmc_crq_msg *crqp) 1699 + { 1700 + struct ibmvmc_admin_crq_msg *crq = (struct ibmvmc_admin_crq_msg *)crqp; 1701 + 1702 + if ((be16_to_cpu(crq->version) >> 8) != 1703 + (IBMVMC_PROTOCOL_VERSION >> 8)) { 1704 + dev_err(adapter->dev, "init failed, incompatible versions 0x%x 0x%x\n", 1705 + be16_to_cpu(crq->version), 1706 + IBMVMC_PROTOCOL_VERSION); 1707 + ibmvmc.state = ibmvmc_state_failed; 1708 + return; 1709 + } 1710 + 1711 + ibmvmc.max_mtu = min_t(u32, ibmvmc_max_mtu, be32_to_cpu(crq->max_mtu)); 1712 + 
ibmvmc.max_buffer_pool_size = min_t(u16, ibmvmc_max_buf_pool_size, 1713 + be16_to_cpu(crq->pool_size)); 1714 + ibmvmc.max_hmc_index = min_t(u8, ibmvmc_max_hmcs, crq->max_hmc) - 1; 1715 + ibmvmc.state = ibmvmc_state_ready; 1716 + 1717 + dev_info(adapter->dev, "Capabilities: mtu=0x%x, pool_size=0x%x, max_hmc=0x%x\n", 1718 + ibmvmc.max_mtu, ibmvmc.max_buffer_pool_size, 1719 + ibmvmc.max_hmc_index); 1720 + } 1721 + 1722 + /** 1723 + * ibmvmc_validate_hmc_session - Validate HMC Session 1724 + * 1725 + * @adapter: crq_server_adapter struct 1726 + * @crq: ibmvmc_crq_msg struct 1727 + * 1728 + * Return: 1729 + * 0 - Success 1730 + * Non-zero - Failure 1731 + */ 1732 + static int ibmvmc_validate_hmc_session(struct crq_server_adapter *adapter, 1733 + struct ibmvmc_crq_msg *crq) 1734 + { 1735 + unsigned char hmc_index; 1736 + 1737 + hmc_index = crq->hmc_index; 1738 + 1739 + if (crq->hmc_session == 0) 1740 + return 0; 1741 + 1742 + if (hmc_index > ibmvmc.max_hmc_index) 1743 + return -1; 1744 + 1745 + if (hmcs[hmc_index].session != crq->hmc_session) { 1746 + dev_warn(adapter->dev, "Drop, bad session: expected 0x%x, recv 0x%x\n", 1747 + hmcs[hmc_index].session, crq->hmc_session); 1748 + return -1; 1749 + } 1750 + 1751 + return 0; 1752 + } 1753 + 1754 + /** 1755 + * ibmvmc_reset - Reset 1756 + * 1757 + * @adapter: crq_server_adapter struct 1758 + * @xport_event: If true, the partner closed their CRQ and no CRQ reset is 1759 + * needed; if false, a CRQ reset must be scheduled. 1760 + * 1761 + * Closes all HMC sessions and conditionally schedules a CRQ reset. 1762 +
1763 + */ 1764 + static void ibmvmc_reset(struct crq_server_adapter *adapter, bool xport_event) 1765 + { 1766 + int i; 1767 + 1768 + if (ibmvmc.state != ibmvmc_state_sched_reset) { 1769 + dev_info(adapter->dev, "*** Reset to initial state.\n"); 1770 + for (i = 0; i < ibmvmc_max_hmcs; i++) 1771 + ibmvmc_return_hmc(&hmcs[i], xport_event); 1772 + 1773 + if (xport_event) { 1774 + /* CRQ was closed by the partner. We don't need to do 1775 + * anything except set ourself to the correct state to 1776 + * handle init msgs. 1777 + */ 1778 + ibmvmc.state = ibmvmc_state_crqinit; 1779 + } else { 1780 + /* The partner did not close their CRQ - instead, we're 1781 + * closing the CRQ on our end. Need to schedule this 1782 + * for process context, because CRQ reset may require a 1783 + * sleep. 1784 + * 1785 + * Setting ibmvmc.state here immediately prevents 1786 + * ibmvmc_open from completing until the reset 1787 + * completes in process context. 1788 + */ 1789 + ibmvmc.state = ibmvmc_state_sched_reset; 1790 + dev_dbg(adapter->dev, "Device reset scheduled"); 1791 + wake_up_interruptible(&adapter->reset_wait_queue); 1792 + } 1793 + } 1794 + } 1795 + 1796 + /** 1797 + * ibmvmc_reset_task - Reset Task 1798 + * 1799 + * @data: Data field 1800 + * 1801 + * Performs a CRQ reset of the VMC device in process context. 1802 + * NOTE: This function should not be called directly, use ibmvmc_reset. 
1803 + */ 1804 + static int ibmvmc_reset_task(void *data) 1805 + { 1806 + struct crq_server_adapter *adapter = data; 1807 + int rc; 1808 + 1809 + set_user_nice(current, -20); 1810 + 1811 + while (!kthread_should_stop()) { 1812 + wait_event_interruptible(adapter->reset_wait_queue, 1813 + (ibmvmc.state == ibmvmc_state_sched_reset) || 1814 + kthread_should_stop()); 1815 + 1816 + if (kthread_should_stop()) 1817 + break; 1818 + 1819 + dev_dbg(adapter->dev, "CRQ resetting in process context"); 1820 + tasklet_disable(&adapter->work_task); 1821 + 1822 + rc = ibmvmc_reset_crq_queue(adapter); 1823 + 1824 + if (rc != H_SUCCESS && rc != H_RESOURCE) { 1825 + dev_err(adapter->dev, "Error initializing CRQ. rc = 0x%x\n", 1826 + rc); 1827 + ibmvmc.state = ibmvmc_state_failed; 1828 + } else { 1829 + ibmvmc.state = ibmvmc_state_crqinit; 1830 + 1831 + if (ibmvmc_send_crq(adapter, 0xC001000000000000LL, 0) 1832 + != 0 && rc != H_RESOURCE) 1833 + dev_warn(adapter->dev, "Failed to send initialize CRQ message\n"); 1834 + } 1835 + 1836 + vio_enable_interrupts(to_vio_dev(adapter->dev)); 1837 + tasklet_enable(&adapter->work_task); 1838 + } 1839 + 1840 + return 0; 1841 + } 1842 + 1843 + /** 1844 + * ibmvmc_process_open_resp - Process Open Response 1845 + * 1846 + * @crq: ibmvmc_crq_msg struct 1847 + * @adapter: crq_server_adapter struct 1848 + * 1849 + * This command is sent by the hypervisor in response to the Interface 1850 + * Open message. When this message is received, the indicated buffer is 1851 + * again available for management partition use. 1852 + */ 1853 + static void ibmvmc_process_open_resp(struct ibmvmc_crq_msg *crq, 1854 + struct crq_server_adapter *adapter) 1855 + { 1856 + unsigned char hmc_index; 1857 + unsigned short buffer_id; 1858 + 1859 + hmc_index = crq->hmc_index; 1860 + if (hmc_index > ibmvmc.max_hmc_index) { 1861 + /* Why would PHYP give an index > max negotiated? 
*/ 1862 + ibmvmc_reset(adapter, false); 1863 + return; 1864 + } 1865 + 1866 + if (crq->status) { 1867 + dev_warn(adapter->dev, "open_resp: failed - status 0x%x\n", 1868 + crq->status); 1869 + ibmvmc_return_hmc(&hmcs[hmc_index], false); 1870 + return; 1871 + } 1872 + 1873 + if (hmcs[hmc_index].state == ibmhmc_state_opening) { 1874 + buffer_id = be16_to_cpu(crq->var2.buffer_id); 1875 + if (buffer_id >= ibmvmc.max_buffer_pool_size) { 1876 + dev_err(adapter->dev, "open_resp: invalid buffer_id = 0x%x\n", 1877 + buffer_id); 1878 + hmcs[hmc_index].state = ibmhmc_state_failed; 1879 + } else { 1880 + ibmvmc_free_hmc_buffer(&hmcs[hmc_index], 1881 + &hmcs[hmc_index].buffer[buffer_id]); 1882 + hmcs[hmc_index].state = ibmhmc_state_ready; 1883 + dev_dbg(adapter->dev, "open_resp: set hmc state = ready\n"); 1884 + } 1885 + } else { 1886 + dev_warn(adapter->dev, "open_resp: invalid hmc state (0x%x)\n", 1887 + hmcs[hmc_index].state); 1888 + } 1889 + } 1890 + 1891 + /** 1892 + * ibmvmc_process_close_resp - Process Close Response 1893 + * 1894 + * @crq: ibmvmc_crq_msg struct 1895 + * @adapter: crq_server_adapter struct 1896 + * 1897 + * This command is sent by the hypervisor in response to the management 1898 + * application Interface Close message. 1899 + * 1900 + * If the close fails, simply reset the entire driver as the state of the VMC 1901 + * must be in tough shape.
1902 + */ 1903 + static void ibmvmc_process_close_resp(struct ibmvmc_crq_msg *crq, 1904 + struct crq_server_adapter *adapter) 1905 + { 1906 + unsigned char hmc_index; 1907 + 1908 + hmc_index = crq->hmc_index; 1909 + if (hmc_index > ibmvmc.max_hmc_index) { 1910 + ibmvmc_reset(adapter, false); 1911 + return; 1912 + } 1913 + 1914 + if (crq->status) { 1915 + dev_warn(adapter->dev, "close_resp: failed - status 0x%x\n", 1916 + crq->status); 1917 + ibmvmc_reset(adapter, false); 1918 + return; 1919 + } 1920 + 1921 + ibmvmc_return_hmc(&hmcs[hmc_index], false); 1922 + } 1923 + 1924 + /** 1925 + * ibmvmc_crq_process - Process CRQ 1926 + * 1927 + * @adapter: crq_server_adapter struct 1928 + * @crq: ibmvmc_crq_msg struct 1929 + * 1930 + * Process the CRQ message based upon the type of message received. 1931 + * 1932 + */ 1933 + static void ibmvmc_crq_process(struct crq_server_adapter *adapter, 1934 + struct ibmvmc_crq_msg *crq) 1935 + { 1936 + switch (crq->type) { 1937 + case VMC_MSG_CAP_RESP: 1938 + dev_dbg(adapter->dev, "CRQ recv: capabilities resp (0x%x)\n", 1939 + crq->type); 1940 + if (ibmvmc.state == ibmvmc_state_capabilities) 1941 + ibmvmc_process_capabilities(adapter, crq); 1942 + else 1943 + dev_warn(adapter->dev, "caps msg invalid in state 0x%x\n", 1944 + ibmvmc.state); 1945 + break; 1946 + case VMC_MSG_OPEN_RESP: 1947 + dev_dbg(adapter->dev, "CRQ recv: open resp (0x%x)\n", 1948 + crq->type); 1949 + if (ibmvmc_validate_hmc_session(adapter, crq) == 0) 1950 + ibmvmc_process_open_resp(crq, adapter); 1951 + break; 1952 + case VMC_MSG_ADD_BUF: 1953 + dev_dbg(adapter->dev, "CRQ recv: add buf (0x%x)\n", 1954 + crq->type); 1955 + if (ibmvmc_validate_hmc_session(adapter, crq) == 0) 1956 + ibmvmc_add_buffer(adapter, crq); 1957 + break; 1958 + case VMC_MSG_REM_BUF: 1959 + dev_dbg(adapter->dev, "CRQ recv: rem buf (0x%x)\n", 1960 + crq->type); 1961 + if (ibmvmc_validate_hmc_session(adapter, crq) == 0) 1962 + ibmvmc_rem_buffer(adapter, crq); 1963 + break; 1964 + case 
VMC_MSG_SIGNAL: 1965 + dev_dbg(adapter->dev, "CRQ recv: signal msg (0x%x)\n", 1966 + crq->type); 1967 + if (ibmvmc_validate_hmc_session(adapter, crq) == 0) 1968 + ibmvmc_recv_msg(adapter, crq); 1969 + break; 1970 + case VMC_MSG_CLOSE_RESP: 1971 + dev_dbg(adapter->dev, "CRQ recv: close resp (0x%x)\n", 1972 + crq->type); 1973 + if (ibmvmc_validate_hmc_session(adapter, crq) == 0) 1974 + ibmvmc_process_close_resp(crq, adapter); 1975 + break; 1976 + case VMC_MSG_CAP: 1977 + case VMC_MSG_OPEN: 1978 + case VMC_MSG_CLOSE: 1979 + case VMC_MSG_ADD_BUF_RESP: 1980 + case VMC_MSG_REM_BUF_RESP: 1981 + dev_warn(adapter->dev, "CRQ recv: unexpected msg (0x%x)\n", 1982 + crq->type); 1983 + break; 1984 + default: 1985 + dev_warn(adapter->dev, "CRQ recv: unknown msg (0x%x)\n", 1986 + crq->type); 1987 + break; 1988 + } 1989 + } 1990 + 1991 + /** 1992 + * ibmvmc_handle_crq_init - Handle CRQ Init 1993 + * 1994 + * @crq: ibmvmc_crq_msg struct 1995 + * @adapter: crq_server_adapter struct 1996 + * 1997 + * Handle the type of crq initialization based on whether 1998 + * it is a message or a response. 
1999 + * 2000 + */ 2001 + static void ibmvmc_handle_crq_init(struct ibmvmc_crq_msg *crq, 2002 + struct crq_server_adapter *adapter) 2003 + { 2004 + switch (crq->type) { 2005 + case 0x01: /* Initialization message */ 2006 + dev_dbg(adapter->dev, "CRQ recv: CRQ init msg - state 0x%x\n", 2007 + ibmvmc.state); 2008 + if (ibmvmc.state == ibmvmc_state_crqinit) { 2009 + /* Send back a response */ 2010 + if (ibmvmc_send_crq(adapter, 0xC002000000000000, 2011 + 0) == 0) 2012 + ibmvmc_send_capabilities(adapter); 2013 + else 2014 + dev_err(adapter->dev, " Unable to send init rsp\n"); 2015 + } else { 2016 + dev_err(adapter->dev, "Invalid state 0x%x mtu = 0x%x\n", 2017 + ibmvmc.state, ibmvmc.max_mtu); 2018 + } 2019 + 2020 + break; 2021 + case 0x02: /* Initialization response */ 2022 + dev_dbg(adapter->dev, "CRQ recv: initialization resp msg - state 0x%x\n", 2023 + ibmvmc.state); 2024 + if (ibmvmc.state == ibmvmc_state_crqinit) 2025 + ibmvmc_send_capabilities(adapter); 2026 + break; 2027 + default: 2028 + dev_warn(adapter->dev, "Unknown crq message type 0x%lx\n", 2029 + (unsigned long)crq->type); 2030 + } 2031 + } 2032 + 2033 + /** 2034 + * ibmvmc_handle_crq - Handle CRQ 2035 + * 2036 + * @crq: ibmvmc_crq_msg struct 2037 + * @adapter: crq_server_adapter struct 2038 + * 2039 + * Read the command elements from the command queue and execute the 2040 + * requests based upon the type of crq message. 
2041 + * 2042 + */ 2043 + static void ibmvmc_handle_crq(struct ibmvmc_crq_msg *crq, 2044 + struct crq_server_adapter *adapter) 2045 + { 2046 + switch (crq->valid) { 2047 + case 0xC0: /* initialization */ 2048 + ibmvmc_handle_crq_init(crq, adapter); 2049 + break; 2050 + case 0xFF: /* Hypervisor telling us the connection is closed */ 2051 + dev_warn(adapter->dev, "CRQ recv: virtual adapter failed - resetting.\n"); 2052 + ibmvmc_reset(adapter, true); 2053 + break; 2054 + case 0x80: /* real payload */ 2055 + ibmvmc_crq_process(adapter, crq); 2056 + break; 2057 + default: 2058 + dev_warn(adapter->dev, "CRQ recv: unknown msg 0x%02x.\n", 2059 + crq->valid); 2060 + break; 2061 + } 2062 + } 2063 + 2064 + static void ibmvmc_task(unsigned long data) 2065 + { 2066 + struct crq_server_adapter *adapter = 2067 + (struct crq_server_adapter *)data; 2068 + struct vio_dev *vdev = to_vio_dev(adapter->dev); 2069 + struct ibmvmc_crq_msg *crq; 2070 + int done = 0; 2071 + 2072 + while (!done) { 2073 + /* Pull all the valid messages off the CRQ */ 2074 + while ((crq = crq_queue_next_crq(&adapter->queue)) != NULL) { 2075 + ibmvmc_handle_crq(crq, adapter); 2076 + crq->valid = 0x00; 2077 + /* CRQ reset was requested, stop processing CRQs. 2078 + * Interrupts will be re-enabled by the reset task. 2079 + */ 2080 + if (ibmvmc.state == ibmvmc_state_sched_reset) 2081 + return; 2082 + } 2083 + 2084 + vio_enable_interrupts(vdev); 2085 + crq = crq_queue_next_crq(&adapter->queue); 2086 + if (crq) { 2087 + vio_disable_interrupts(vdev); 2088 + ibmvmc_handle_crq(crq, adapter); 2089 + crq->valid = 0x00; 2090 + /* CRQ reset was requested, stop processing CRQs. 2091 + * Interrupts will be re-enabled by the reset task. 
2092 + */ 2093 + if (ibmvmc.state == ibmvmc_state_sched_reset) 2094 + return; 2095 + } else { 2096 + done = 1; 2097 + } 2098 + } 2099 + } 2100 + 2101 + /** 2102 + * ibmvmc_init_crq_queue - Init CRQ Queue 2103 + * 2104 + * @adapter: crq_server_adapter struct 2105 + * 2106 + * Return: 2107 + * 0 - Success 2108 + * Non-zero - Failure 2109 + */ 2110 + static int ibmvmc_init_crq_queue(struct crq_server_adapter *adapter) 2111 + { 2112 + struct vio_dev *vdev = to_vio_dev(adapter->dev); 2113 + struct crq_queue *queue = &adapter->queue; 2114 + int rc = 0; 2115 + int retrc = 0; 2116 + 2117 + queue->msgs = (struct ibmvmc_crq_msg *)get_zeroed_page(GFP_KERNEL); 2118 + 2119 + if (!queue->msgs) 2120 + goto malloc_failed; 2121 + 2122 + queue->size = PAGE_SIZE / sizeof(*queue->msgs); 2123 + 2124 + queue->msg_token = dma_map_single(adapter->dev, queue->msgs, 2125 + queue->size * sizeof(*queue->msgs), 2126 + DMA_BIDIRECTIONAL); 2127 + 2128 + if (dma_mapping_error(adapter->dev, queue->msg_token)) 2129 + goto map_failed; 2130 + 2131 + rc = plpar_hcall_norets(H_REG_CRQ, 2132 + vdev->unit_address, 2133 + queue->msg_token, PAGE_SIZE); 2134 + retrc = rc; 2135 + 2136 + if (rc == H_RESOURCE) 2137 + rc = ibmvmc_reset_crq_queue(adapter); 2138 + 2139 + if (rc == 2) { 2140 + dev_warn(adapter->dev, "Partner adapter not ready\n"); 2141 + retrc = 0; 2142 + } else if (rc != 0) { 2143 + dev_err(adapter->dev, "Error %d opening adapter\n", rc); 2144 + goto reg_crq_failed; 2145 + } 2146 + 2147 + queue->cur = 0; 2148 + spin_lock_init(&queue->lock); 2149 + 2150 + tasklet_init(&adapter->work_task, ibmvmc_task, (unsigned long)adapter); 2151 + 2152 + if (request_irq(vdev->irq, 2153 + ibmvmc_handle_event, 2154 + 0, "ibmvmc", (void *)adapter) != 0) { 2155 + dev_err(adapter->dev, "couldn't register irq 0x%x\n", 2156 + vdev->irq); 2157 + goto req_irq_failed; 2158 + } 2159 + 2160 + rc = vio_enable_interrupts(vdev); 2161 + if (rc != 0) { 2162 + dev_err(adapter->dev, "Error %d enabling interrupts!!!\n", rc);
2163 + goto req_irq_failed; 2164 + } 2165 + 2166 + return retrc; 2167 + 2168 + req_irq_failed: 2169 + /* Cannot have any work since we either never got our IRQ registered, 2170 + * or never got interrupts enabled 2171 + */ 2172 + tasklet_kill(&adapter->work_task); 2173 + h_free_crq(vdev->unit_address); 2174 + reg_crq_failed: 2175 + dma_unmap_single(adapter->dev, 2176 + queue->msg_token, 2177 + queue->size * sizeof(*queue->msgs), DMA_BIDIRECTIONAL); 2178 + map_failed: 2179 + free_page((unsigned long)queue->msgs); 2180 + malloc_failed: 2181 + return -ENOMEM; 2182 + } 2183 + 2184 + /* Fill in the liobn and riobn fields on the adapter */ 2185 + static int read_dma_window(struct vio_dev *vdev, 2186 + struct crq_server_adapter *adapter) 2187 + { 2188 + const __be32 *dma_window; 2189 + const __be32 *prop; 2190 + 2191 + /* TODO Using of_parse_dma_window would be better, but it doesn't give 2192 + * a way to read multiple windows without already knowing the size of 2193 + * a window or the number of windows 2194 + */ 2195 + dma_window = 2196 + (const __be32 *)vio_get_attribute(vdev, "ibm,my-dma-window", 2197 + NULL); 2198 + if (!dma_window) { 2199 + dev_warn(adapter->dev, "Couldn't find ibm,my-dma-window property\n"); 2200 + return -1; 2201 + } 2202 + 2203 + adapter->liobn = be32_to_cpu(*dma_window); 2204 + dma_window++; 2205 + 2206 + prop = (const __be32 *)vio_get_attribute(vdev, "ibm,#dma-address-cells", 2207 + NULL); 2208 + if (!prop) { 2209 + dev_warn(adapter->dev, "Couldn't find ibm,#dma-address-cells property\n"); 2210 + dma_window++; 2211 + } else { 2212 + dma_window += be32_to_cpu(*prop); 2213 + } 2214 + 2215 + prop = (const __be32 *)vio_get_attribute(vdev, "ibm,#dma-size-cells", 2216 + NULL); 2217 + if (!prop) { 2218 + dev_warn(adapter->dev, "Couldn't find ibm,#dma-size-cells property\n"); 2219 + dma_window++; 2220 + } else { 2221 + dma_window += be32_to_cpu(*prop); 2222 + } 2223 + 2224 + /* dma_window should point to the second window now */ 2225 + adapter->riobn 
= be32_to_cpu(*dma_window); 2226 + 2227 + return 0; 2228 + } 2229 + 2230 + static int ibmvmc_probe(struct vio_dev *vdev, const struct vio_device_id *id) 2231 + { 2232 + struct crq_server_adapter *adapter = &ibmvmc_adapter; 2233 + int rc; 2234 + 2235 + dev_set_drvdata(&vdev->dev, NULL); 2236 + memset(adapter, 0, sizeof(*adapter)); 2237 + adapter->dev = &vdev->dev; 2238 + 2239 + dev_info(adapter->dev, "Probe for UA 0x%x\n", vdev->unit_address); 2240 + 2241 + rc = read_dma_window(vdev, adapter); 2242 + if (rc != 0) { 2243 + ibmvmc.state = ibmvmc_state_failed; 2244 + return -1; 2245 + } 2246 + 2247 + dev_dbg(adapter->dev, "Probe: liobn 0x%x, riobn 0x%x\n", 2248 + adapter->liobn, adapter->riobn); 2249 + 2250 + init_waitqueue_head(&adapter->reset_wait_queue); 2251 + adapter->reset_task = kthread_run(ibmvmc_reset_task, adapter, "ibmvmc"); 2252 + if (IS_ERR(adapter->reset_task)) { 2253 + dev_err(adapter->dev, "Failed to start reset thread\n"); 2254 + ibmvmc.state = ibmvmc_state_failed; 2255 + rc = PTR_ERR(adapter->reset_task); 2256 + adapter->reset_task = NULL; 2257 + return rc; 2258 + } 2259 + 2260 + rc = ibmvmc_init_crq_queue(adapter); 2261 + if (rc != 0 && rc != H_RESOURCE) { 2262 + dev_err(adapter->dev, "Error initializing CRQ. rc = 0x%x\n", 2263 + rc); 2264 + ibmvmc.state = ibmvmc_state_failed; 2265 + goto crq_failed; 2266 + } 2267 + 2268 + ibmvmc.state = ibmvmc_state_crqinit; 2269 + 2270 + /* Try to send an initialization message. Note that this is allowed 2271 + * to fail if the other end is not active. In that case we just wait 2272 + * for the other side to initialize.
2273 + */ 2274 + if (ibmvmc_send_crq(adapter, 0xC001000000000000LL, 0) != 0 && 2275 + rc != H_RESOURCE) 2276 + dev_warn(adapter->dev, "Failed to send initialize CRQ message\n"); 2277 + 2278 + dev_set_drvdata(&vdev->dev, adapter); 2279 + 2280 + return 0; 2281 + 2282 + crq_failed: 2283 + kthread_stop(adapter->reset_task); 2284 + adapter->reset_task = NULL; 2285 + return -EPERM; 2286 + } 2287 + 2288 + static int ibmvmc_remove(struct vio_dev *vdev) 2289 + { 2290 + struct crq_server_adapter *adapter = dev_get_drvdata(&vdev->dev); 2291 + 2292 + dev_info(adapter->dev, "Entering remove for UA 0x%x\n", 2293 + vdev->unit_address); 2294 + ibmvmc_release_crq_queue(adapter); 2295 + 2296 + return 0; 2297 + } 2298 + 2299 + static struct vio_device_id ibmvmc_device_table[] = { 2300 + { "ibm,vmc", "IBM,vmc" }, 2301 + { "", "" } 2302 + }; 2303 + MODULE_DEVICE_TABLE(vio, ibmvmc_device_table); 2304 + 2305 + static struct vio_driver ibmvmc_driver = { 2306 + .name = ibmvmc_driver_name, 2307 + .id_table = ibmvmc_device_table, 2308 + .probe = ibmvmc_probe, 2309 + .remove = ibmvmc_remove, 2310 + }; 2311 + 2312 + static void __init ibmvmc_scrub_module_parms(void) 2313 + { 2314 + if (ibmvmc_max_mtu > MAX_MTU) { 2315 + pr_warn("ibmvmc: Max MTU reduced to %d\n", MAX_MTU); 2316 + ibmvmc_max_mtu = MAX_MTU; 2317 + } else if (ibmvmc_max_mtu < MIN_MTU) { 2318 + pr_warn("ibmvmc: Max MTU increased to %d\n", MIN_MTU); 2319 + ibmvmc_max_mtu = MIN_MTU; 2320 + } 2321 + 2322 + if (ibmvmc_max_buf_pool_size > MAX_BUF_POOL_SIZE) { 2323 + pr_warn("ibmvmc: Max buffer pool size reduced to %d\n", 2324 + MAX_BUF_POOL_SIZE); 2325 + ibmvmc_max_buf_pool_size = MAX_BUF_POOL_SIZE; 2326 + } else if (ibmvmc_max_buf_pool_size < MIN_BUF_POOL_SIZE) { 2327 + pr_warn("ibmvmc: Max buffer pool size increased to %d\n", 2328 + MIN_BUF_POOL_SIZE); 2329 + ibmvmc_max_buf_pool_size = MIN_BUF_POOL_SIZE; 2330 + } 2331 + 2332 + if (ibmvmc_max_hmcs > MAX_HMCS) { 2333 + pr_warn("ibmvmc: Max HMCs reduced to %d\n", MAX_HMCS); 2334 + 
ibmvmc_max_hmcs = MAX_HMCS; 2335 + } else if (ibmvmc_max_hmcs < MIN_HMCS) { 2336 + pr_warn("ibmvmc: Max HMCs increased to %d\n", MIN_HMCS); 2337 + ibmvmc_max_hmcs = MIN_HMCS; 2338 + } 2339 + } 2340 + 2341 + static struct miscdevice ibmvmc_miscdev = { 2342 + .name = ibmvmc_driver_name, 2343 + .minor = MISC_DYNAMIC_MINOR, 2344 + .fops = &ibmvmc_fops, 2345 + }; 2346 + 2347 + static int __init ibmvmc_module_init(void) 2348 + { 2349 + int rc, i, j; 2350 + 2351 + ibmvmc.state = ibmvmc_state_initial; 2352 + pr_info("ibmvmc: version %s\n", IBMVMC_DRIVER_VERSION); 2353 + 2354 + rc = misc_register(&ibmvmc_miscdev); 2355 + if (rc) { 2356 + pr_err("ibmvmc: misc registration failed\n"); 2357 + goto misc_register_failed; 2358 + } 2359 + pr_info("ibmvmc: node %d:%d\n", MISC_MAJOR, 2360 + ibmvmc_miscdev.minor); 2361 + 2362 + /* Initialize data structures */ 2363 + memset(hmcs, 0, sizeof(struct ibmvmc_hmc) * MAX_HMCS); 2364 + for (i = 0; i < MAX_HMCS; i++) { 2365 + spin_lock_init(&hmcs[i].lock); 2366 + hmcs[i].state = ibmhmc_state_free; 2367 + for (j = 0; j < MAX_BUF_POOL_SIZE; j++) 2368 + hmcs[i].queue_outbound_msgs[j] = VMC_INVALID_BUFFER_ID; 2369 + } 2370 + 2371 + /* Sanity check module parms */ 2372 + ibmvmc_scrub_module_parms(); 2373 + 2374 + /* 2375 + * Initialize some reasonable values. Might be negotiated smaller 2376 + * values during the capabilities exchange. 
2377 + */ 2378 + ibmvmc.max_mtu = ibmvmc_max_mtu; 2379 + ibmvmc.max_buffer_pool_size = ibmvmc_max_buf_pool_size; 2380 + ibmvmc.max_hmc_index = ibmvmc_max_hmcs - 1; 2381 + 2382 + rc = vio_register_driver(&ibmvmc_driver); 2383 + 2384 + if (rc) { 2385 + pr_err("ibmvmc: rc %d from vio_register_driver\n", rc); 2386 + goto vio_reg_failed; 2387 + } 2388 + 2389 + return 0; 2390 + 2391 + vio_reg_failed: 2392 + misc_deregister(&ibmvmc_miscdev); 2393 + misc_register_failed: 2394 + return rc; 2395 + } 2396 + 2397 + static void __exit ibmvmc_module_exit(void) 2398 + { 2399 + pr_info("ibmvmc: module exit\n"); 2400 + vio_unregister_driver(&ibmvmc_driver); 2401 + misc_deregister(&ibmvmc_miscdev); 2402 + } 2403 + 2404 + module_init(ibmvmc_module_init); 2405 + module_exit(ibmvmc_module_exit); 2406 + 2407 + module_param_named(buf_pool_size, ibmvmc_max_buf_pool_size, 2408 + int, 0644); 2409 + MODULE_PARM_DESC(buf_pool_size, "Buffer pool size"); 2410 + module_param_named(max_hmcs, ibmvmc_max_hmcs, int, 0644); 2411 + MODULE_PARM_DESC(max_hmcs, "Max HMCs"); 2412 + module_param_named(max_mtu, ibmvmc_max_mtu, int, 0644); 2413 + MODULE_PARM_DESC(max_mtu, "Max MTU"); 2414 + 2415 + MODULE_AUTHOR("Steven Royer <seroyer@linux.vnet.ibm.com>"); 2416 + MODULE_DESCRIPTION("IBM VMC"); 2417 + MODULE_VERSION(IBMVMC_DRIVER_VERSION); 2418 + MODULE_LICENSE("GPL v2");
+209
drivers/misc/ibmvmc.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0+ 2 + * 3 + * linux/drivers/misc/ibmvmc.h 4 + * 5 + * IBM Power Systems Virtual Management Channel Support. 6 + * 7 + * Copyright (c) 2004, 2018 IBM Corp. 8 + * Dave Engebretsen engebret@us.ibm.com 9 + * Steven Royer seroyer@linux.vnet.ibm.com 10 + * Adam Reznechek adreznec@linux.vnet.ibm.com 11 + * Bryant G. Ly <bryantly@linux.vnet.ibm.com> 12 + */ 13 + #ifndef IBMVMC_H 14 + #define IBMVMC_H 15 + 16 + #include <linux/types.h> 17 + #include <linux/cdev.h> 18 + 19 + #include <asm/vio.h> 20 + 21 + #define IBMVMC_PROTOCOL_VERSION 0x0101 22 + 23 + #define MIN_BUF_POOL_SIZE 16 24 + #define MIN_HMCS 1 25 + #define MIN_MTU 4096 26 + #define MAX_BUF_POOL_SIZE 64 27 + #define MAX_HMCS 2 28 + #define MAX_MTU (4 * 4096) 29 + #define DEFAULT_BUF_POOL_SIZE 32 30 + #define DEFAULT_HMCS 1 31 + #define DEFAULT_MTU 4096 32 + #define HMC_ID_LEN 32 33 + 34 + #define VMC_INVALID_BUFFER_ID 0xFFFF 35 + 36 + /* ioctl numbers */ 37 + #define VMC_BASE 0xCC 38 + #define VMC_IOCTL_SETHMCID _IOW(VMC_BASE, 0x00, unsigned char *) 39 + #define VMC_IOCTL_QUERY _IOR(VMC_BASE, 0x01, struct ibmvmc_query_struct) 40 + #define VMC_IOCTL_REQUESTVMC _IOR(VMC_BASE, 0x02, u32) 41 + 42 + #define VMC_MSG_CAP 0x01 43 + #define VMC_MSG_CAP_RESP 0x81 44 + #define VMC_MSG_OPEN 0x02 45 + #define VMC_MSG_OPEN_RESP 0x82 46 + #define VMC_MSG_CLOSE 0x03 47 + #define VMC_MSG_CLOSE_RESP 0x83 48 + #define VMC_MSG_ADD_BUF 0x04 49 + #define VMC_MSG_ADD_BUF_RESP 0x84 50 + #define VMC_MSG_REM_BUF 0x05 51 + #define VMC_MSG_REM_BUF_RESP 0x85 52 + #define VMC_MSG_SIGNAL 0x06 53 + 54 + #define VMC_MSG_SUCCESS 0 55 + #define VMC_MSG_INVALID_HMC_INDEX 1 56 + #define VMC_MSG_INVALID_BUFFER_ID 2 57 + #define VMC_MSG_CLOSED_HMC 3 58 + #define VMC_MSG_INTERFACE_FAILURE 4 59 + #define VMC_MSG_NO_BUFFER 5 60 + 61 + #define VMC_BUF_OWNER_ALPHA 0 62 + #define VMC_BUF_OWNER_HV 1 63 + 64 + enum ibmvmc_states { 65 + ibmvmc_state_sched_reset = -1, 66 + ibmvmc_state_initial = 0, 67 + 
ibmvmc_state_crqinit = 1, 68 + ibmvmc_state_capabilities = 2, 69 + ibmvmc_state_ready = 3, 70 + ibmvmc_state_failed = 4, 71 + }; 72 + 73 + enum ibmhmc_states { 74 + /* HMC connection not established */ 75 + ibmhmc_state_free = 0, 76 + 77 + /* HMC connection established (open called) */ 78 + ibmhmc_state_initial = 1, 79 + 80 + /* open msg sent to HV, due to ioctl(1) call */ 81 + ibmhmc_state_opening = 2, 82 + 83 + /* HMC connection ready, open resp msg from HV */ 84 + ibmhmc_state_ready = 3, 85 + 86 + /* HMC connection failure */ 87 + ibmhmc_state_failed = 4, 88 + }; 89 + 90 + struct ibmvmc_buffer { 91 + u8 valid; /* 1 when DMA storage allocated to buffer */ 92 + u8 free; /* 1 when buffer available for the Alpha Partition */ 93 + u8 owner; 94 + u16 id; 95 + u32 size; 96 + u32 msg_len; 97 + dma_addr_t dma_addr_local; 98 + dma_addr_t dma_addr_remote; 99 + void *real_addr_local; 100 + }; 101 + 102 + struct ibmvmc_admin_crq_msg { 103 + u8 valid; /* RPA Defined */ 104 + u8 type; /* ibmvmc msg type */ 105 + u8 status; /* Response msg status. Zero is success and on failure, 106 + * either 1 - General Failure, or 2 - Invalid Version is 107 + * returned. 108 + */ 109 + u8 rsvd[2]; 110 + u8 max_hmc; /* Max # of independent HMC connections supported */ 111 + __be16 pool_size; /* Maximum number of buffers supported per HMC 112 + * connection 113 + */ 114 + __be32 max_mtu; /* Maximum message size supported (bytes) */ 115 + __be16 crq_size; /* # of entries available in the CRQ for the 116 + * source partition. The target partition must 117 + * limit the number of outstanding messages to 118 + * one half or less. 119 + */ 120 + __be16 version; /* Indicates the code level of the management partition 121 + * or the hypervisor with the high-order byte 122 + * indicating a major version and the low-order byte 123 + * indicating a minor version. 
124 + */ 125 + }; 126 + 127 + struct ibmvmc_crq_msg { 128 + u8 valid; /* RPA Defined */ 129 + u8 type; /* ibmvmc msg type */ 130 + u8 status; /* Response msg status */ 131 + union { 132 + u8 rsvd; /* Reserved */ 133 + u8 owner; 134 + } var1; 135 + u8 hmc_session; /* Session Identifier for the current VMC connection */ 136 + u8 hmc_index; /* A unique HMC Idx would be used if multiple management 137 + * applications running concurrently were desired 138 + */ 139 + union { 140 + __be16 rsvd; 141 + __be16 buffer_id; 142 + } var2; 143 + __be32 rsvd; 144 + union { 145 + __be32 rsvd; 146 + __be32 lioba; 147 + __be32 msg_len; 148 + } var3; 149 + }; 150 + 151 + /* an RPA command/response transport queue */ 152 + struct crq_queue { 153 + struct ibmvmc_crq_msg *msgs; 154 + int size, cur; 155 + dma_addr_t msg_token; 156 + spinlock_t lock; 157 + }; 158 + 159 + /* VMC server adapter settings */ 160 + struct crq_server_adapter { 161 + struct device *dev; 162 + struct crq_queue queue; 163 + u32 liobn; 164 + u32 riobn; 165 + struct tasklet_struct work_task; 166 + wait_queue_head_t reset_wait_queue; 167 + struct task_struct *reset_task; 168 + }; 169 + 170 + /* Driver wide settings */ 171 + struct ibmvmc_struct { 172 + u32 state; 173 + u32 max_mtu; 174 + u32 max_buffer_pool_size; 175 + u32 max_hmc_index; 176 + struct crq_server_adapter *adapter; 177 + struct cdev cdev; 178 + u32 vmc_drc_index; 179 + }; 180 + 181 + struct ibmvmc_file_session; 182 + 183 + /* Connection specific settings */ 184 + struct ibmvmc_hmc { 185 + u8 session; 186 + u8 index; 187 + u32 state; 188 + struct crq_server_adapter *adapter; 189 + spinlock_t lock; 190 + unsigned char hmc_id[HMC_ID_LEN]; 191 + struct ibmvmc_buffer buffer[MAX_BUF_POOL_SIZE]; 192 + unsigned short queue_outbound_msgs[MAX_BUF_POOL_SIZE]; 193 + int queue_head, queue_tail; 194 + struct ibmvmc_file_session *file_session; 195 + }; 196 + 197 + struct ibmvmc_file_session { 198 + struct file *file; 199 + struct ibmvmc_hmc *hmc; 200 + bool valid; 201 
+ }; 202 + 203 + struct ibmvmc_query_struct { 204 + int have_vmc; 205 + int state; 206 + int vmc_drc_index; 207 + }; 208 + 209 + #endif /* IBMVMC_H */
+1 -1
drivers/misc/sgi-gru/grumain.c
··· 926 926 * 927 927 * Note: gru segments alway mmaped on GRU_GSEG_PAGESIZE boundaries. 928 928 */ 929 - int gru_fault(struct vm_fault *vmf) 929 + vm_fault_t gru_fault(struct vm_fault *vmf) 930 930 { 931 931 struct vm_area_struct *vma = vmf->vma; 932 932 struct gru_thread_state *gts;
+2 -1
drivers/misc/sgi-gru/grutables.h
··· 147 147 #include <linux/mutex.h> 148 148 #include <linux/wait.h> 149 149 #include <linux/mmu_notifier.h> 150 + #include <linux/mm_types.h> 150 151 #include "gru.h" 151 152 #include "grulib.h" 152 153 #include "gruhandles.h" ··· 666 665 int cbr_au_count, char *cbmap); 667 666 extern unsigned long gru_reserve_ds_resources(struct gru_state *gru, 668 667 int dsr_au_count, char *dsmap); 669 - extern int gru_fault(struct vm_fault *vmf); 668 + extern vm_fault_t gru_fault(struct vm_fault *vmf); 670 669 extern struct gru_mm_struct *gru_register_mmu_notifier(void); 671 670 extern void gru_drop_mmu_notifier(struct gru_mm_struct *gms); 672 671
+1 -1
drivers/misc/sgi-xp/xpnet.c
··· 407 407 * destination partid. If the destination partid octets are 0xffff, 408 408 * this packet is to be broadcast to all connected partitions. 409 409 */ 410 - static int 410 + static netdev_tx_t 411 411 xpnet_dev_hard_start_xmit(struct sk_buff *skb, struct net_device *dev) 412 412 { 413 413 struct xpnet_pending_msg *queued_msg;
+1 -1
drivers/misc/ti-st/st_kim.c
··· 735 735 st_kim_devices[0] = pdev; 736 736 } 737 737 738 - kim_gdata = kzalloc(sizeof(struct kim_data_s), GFP_ATOMIC); 738 + kim_gdata = kzalloc(sizeof(struct kim_data_s), GFP_KERNEL); 739 739 if (!kim_gdata) { 740 740 pr_err("no mem to allocate"); 741 741 return -ENOMEM;
+5 -1
drivers/misc/tifm_7xx1.c
··· 239 239 unsigned long timeout; 240 240 unsigned int good_sockets = 0, bad_sockets = 0; 241 241 unsigned long flags; 242 - unsigned char new_ids[fm->num_sockets]; 242 + /* Maximum number of entries is 4 */ 243 + unsigned char new_ids[4]; 243 244 DECLARE_COMPLETION_ONSTACK(finish_resume); 245 + 246 + if (WARN_ON(fm->num_sockets > ARRAY_SIZE(new_ids))) 247 + return -ENXIO; 244 248 245 249 pci_set_power_state(dev, PCI_D0); 246 250 pci_restore_state(dev);
+8 -17
drivers/misc/vmw_balloon.c
··· 576 576 } 577 577 } 578 578 579 - if (b->batch_page) { 580 - vunmap(b->batch_page); 581 - b->batch_page = NULL; 582 - } 583 - 584 - if (b->page) { 585 - __free_page(b->page); 586 - b->page = NULL; 587 - } 579 + /* Clearing the batch_page unconditionally has no adverse effect */ 580 + free_page((unsigned long)b->batch_page); 581 + b->batch_page = NULL; 588 582 } 589 583 590 584 /* ··· 985 991 986 992 static bool vmballoon_init_batching(struct vmballoon *b) 987 993 { 988 - b->page = alloc_page(VMW_PAGE_ALLOC_NOSLEEP); 989 - if (!b->page) 994 + struct page *page; 995 + 996 + page = alloc_page(GFP_KERNEL | __GFP_ZERO); 997 + if (!page) 990 998 return false; 991 999 992 - b->batch_page = vmap(&b->page, 1, VM_MAP, PAGE_KERNEL); 993 - if (!b->batch_page) { 994 - __free_page(b->page); 995 - return false; 996 - } 997 - 1000 + b->batch_page = page_address(page); 998 1001 return true; 999 1002 } 1000 1003
+2 -3
drivers/mux/adg792a.c
··· 58 58 .set = adg792a_set, 59 59 }; 60 60 61 - static int adg792a_probe(struct i2c_client *i2c, 62 - const struct i2c_device_id *id) 61 + static int adg792a_probe(struct i2c_client *i2c) 63 62 { 64 63 struct device *dev = &i2c->dev; 65 64 struct mux_chip *mux_chip; ··· 143 144 .name = "adg792a", 144 145 .of_match_table = of_match_ptr(adg792a_of_match), 145 146 }, 146 - .probe = adg792a_probe, 147 + .probe_new = adg792a_probe, 147 148 .id_table = adg792a_id, 148 149 }; 149 150 module_i2c_driver(adg792a_driver);
+7 -12
drivers/nubus/bus.c
··· 63 63 .init_name = "nubus", 64 64 }; 65 65 66 - int __init nubus_bus_register(void) 66 + static int __init nubus_bus_register(void) 67 67 { 68 - int err; 68 + return bus_register(&nubus_bus_type); 69 + } 70 + postcore_initcall(nubus_bus_register); 69 71 70 - err = device_register(&nubus_parent); 71 - if (err) 72 - return err; 73 - 74 - err = bus_register(&nubus_bus_type); 75 - if (!err) 76 - return 0; 77 - 78 - device_unregister(&nubus_parent); 79 - return err; 72 + int __init nubus_parent_device_register(void) 73 + { 74 + return device_register(&nubus_parent); 80 75 } 81 76 82 77 static void nubus_device_release(struct device *dev)
+1 -1
drivers/nubus/nubus.c
··· 875 875 return 0; 876 876 877 877 nubus_proc_init(); 878 - err = nubus_bus_register(); 878 + err = nubus_parent_device_register(); 879 879 if (err) 880 880 return err; 881 881 nubus_scan_bus();
+6
drivers/nvmem/Kconfig
··· 175 175 This driver can also be built as a module. If so, the module 176 176 will be called nvmem-snvs-lpgpr. 177 177 178 + config RAVE_SP_EEPROM 179 + tristate "Rave SP EEPROM Support" 180 + depends on RAVE_SP_CORE 181 + help 182 + Say y here to enable Rave SP EEPROM support. 183 + 178 184 endif
+3
drivers/nvmem/Makefile
··· 37 37 nvmem_meson_mx_efuse-y := meson-mx-efuse.o 38 38 obj-$(CONFIG_NVMEM_SNVS_LPGPR) += nvmem_snvs_lpgpr.o 39 39 nvmem_snvs_lpgpr-y := snvs_lpgpr.o 40 + obj-$(CONFIG_RAVE_SP_EEPROM) += nvmem-rave-sp-eeprom.o 41 + nvmem-rave-sp-eeprom-y := rave-sp-eeprom.o 42 +
+26 -7
drivers/nvmem/core.c
··· 353 353 return 0; 354 354 } 355 355 356 - static int nvmem_add_cells(struct nvmem_device *nvmem, 357 - const struct nvmem_config *cfg) 356 + /** 357 + * nvmem_add_cells() - Add cell information to an nvmem device 358 + * 359 + * @nvmem: nvmem device to add cells to. 360 + * @info: nvmem cell info to add to the device 361 + * @ncells: number of cells in info 362 + * 363 + * Return: 0 or negative error code on failure. 364 + */ 365 + int nvmem_add_cells(struct nvmem_device *nvmem, 366 + const struct nvmem_cell_info *info, 367 + int ncells) 358 368 { 359 369 struct nvmem_cell **cells; 360 - const struct nvmem_cell_info *info = cfg->cells; 361 370 int i, rval; 362 371 363 - cells = kcalloc(cfg->ncells, sizeof(*cells), GFP_KERNEL); 372 + cells = kcalloc(ncells, sizeof(*cells), GFP_KERNEL); 364 373 if (!cells) 365 374 return -ENOMEM; 366 375 367 - for (i = 0; i < cfg->ncells; i++) { 376 + for (i = 0; i < ncells; i++) { 368 377 cells[i] = kzalloc(sizeof(**cells), GFP_KERNEL); 369 378 if (!cells[i]) { 370 379 rval = -ENOMEM; ··· 389 380 nvmem_cell_add(cells[i]); 390 381 } 391 382 392 - nvmem->ncells = cfg->ncells; 383 + nvmem->ncells = ncells; 393 384 /* remove tmp array */ 394 385 kfree(cells); 395 386 ··· 402 393 403 394 return rval; 404 395 } 396 + EXPORT_SYMBOL_GPL(nvmem_add_cells); 405 397 406 398 /* 407 399 * nvmem_setup_compat() - Create an additional binary entry in ··· 519 509 } 520 510 521 511 if (config->cells) 522 - nvmem_add_cells(nvmem, config->cells, config->ncells); 523 513 524 514 return nvmem; 525 515 ··· 569 559 * nvmem_config. 570 560 * Also creates a binary entry in /sys/bus/nvmem/devices/dev-name/nvmem 571 561 * 562 + * @dev: Device that uses the nvmem device. 572 563 * @config: nvmem device configuration with which nvmem device is created. 
573 564 * 574 565 * Return: Will be an ERR_PTR() on error or a valid pointer to nvmem_device ··· 608 597 * devm_nvmem_unregister() - Unregister previously registered managed nvmem 609 598 * device. 610 599 * 600 + * @dev: Device that uses the nvmem device. 611 601 * @nvmem: Pointer to previously registered nvmem device. 612 602 * 613 603 * Return: Will be negative on error or zero on success. ··· 1119 1107 1120 1108 /* setup the first byte with lsb bits from nvmem */ 1121 1109 rc = nvmem_reg_read(nvmem, cell->offset, &v, 1); 1110 + if (rc) 1111 + goto err; 1122 1112 *b++ |= GENMASK(bit_offset - 1, 0) & v; 1123 1113 1124 1114 /* setup rest of the byte if any */ ··· 1139 1125 /* setup the last byte with msb bits from nvmem */ 1140 1126 rc = nvmem_reg_read(nvmem, 1141 1127 cell->offset + cell->bytes - 1, &v, 1); 1128 + if (rc) 1129 + goto err; 1142 1130 *p |= GENMASK(7, (nbits + bit_offset) % BITS_PER_BYTE) & v; 1143 1131 1144 1132 } 1145 1133 1146 1134 return buf; 1135 + err: 1136 + kfree(buf); 1137 + return ERR_PTR(rc); 1147 1138 } 1148 1139 1149 1140 /**
+22 -19
drivers/nvmem/meson-efuse.c
··· 24 24 static int meson_efuse_read(void *context, unsigned int offset, 25 25 void *val, size_t bytes) 26 26 { 27 - u8 *buf = val; 28 - int ret; 29 - 30 - ret = meson_sm_call_read(buf, bytes, SM_EFUSE_READ, offset, 31 - bytes, 0, 0, 0); 32 - if (ret < 0) 33 - return ret; 34 - 35 - return 0; 27 + return meson_sm_call_read((u8 *)val, bytes, SM_EFUSE_READ, offset, 28 + bytes, 0, 0, 0); 36 29 } 37 30 38 - static struct nvmem_config econfig = { 39 - .name = "meson-efuse", 40 - .stride = 1, 41 - .word_size = 1, 42 - .read_only = true, 43 - }; 31 + static int meson_efuse_write(void *context, unsigned int offset, 32 + void *val, size_t bytes) 33 + { 34 + return meson_sm_call_write((u8 *)val, bytes, SM_EFUSE_WRITE, offset, 35 + bytes, 0, 0, 0); 36 + } 44 37 45 38 static const struct of_device_id meson_efuse_match[] = { 46 39 { .compatible = "amlogic,meson-gxbb-efuse", }, ··· 43 50 44 51 static int meson_efuse_probe(struct platform_device *pdev) 45 52 { 53 + struct device *dev = &pdev->dev; 46 54 struct nvmem_device *nvmem; 55 + struct nvmem_config *econfig; 47 56 unsigned int size; 48 57 49 58 if (meson_sm_call(SM_EFUSE_USER_MAX, &size, 0, 0, 0, 0, 0) < 0) 50 59 return -EINVAL; 51 60 52 - econfig.dev = &pdev->dev; 53 - econfig.reg_read = meson_efuse_read; 54 - econfig.size = size; 61 + econfig = devm_kzalloc(dev, sizeof(*econfig), GFP_KERNEL); 62 + if (!econfig) 63 + return -ENOMEM; 55 64 56 - nvmem = devm_nvmem_register(&pdev->dev, &econfig); 65 + econfig->dev = dev; 66 + econfig->name = dev_name(dev); 67 + econfig->stride = 1; 68 + econfig->word_size = 1; 69 + econfig->reg_read = meson_efuse_read; 70 + econfig->reg_write = meson_efuse_write; 71 + econfig->size = size; 72 + 73 + nvmem = devm_nvmem_register(&pdev->dev, econfig); 57 74 58 75 return PTR_ERR_OR_ZERO(nvmem); 59 76 }
+357
drivers/nvmem/rave-sp-eeprom.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + 3 + /* 4 + * EEPROM driver for RAVE SP 5 + * 6 + * Copyright (C) 2018 Zodiac Inflight Innovations 7 + * 8 + */ 9 + #include <linux/kernel.h> 10 + #include <linux/mfd/rave-sp.h> 11 + #include <linux/module.h> 12 + #include <linux/nvmem-provider.h> 13 + #include <linux/of_device.h> 14 + #include <linux/platform_device.h> 15 + #include <linux/sizes.h> 16 + 17 + /** 18 + * enum rave_sp_eeprom_access_type - Supported types of EEPROM access 19 + * 20 + * @RAVE_SP_EEPROM_WRITE: EEPROM write 21 + * @RAVE_SP_EEPROM_READ: EEPROM read 22 + */ 23 + enum rave_sp_eeprom_access_type { 24 + RAVE_SP_EEPROM_WRITE = 0, 25 + RAVE_SP_EEPROM_READ = 1, 26 + }; 27 + 28 + /** 29 + * enum rave_sp_eeprom_header_size - EEPROM command header sizes 30 + * 31 + * @RAVE_SP_EEPROM_HEADER_SMALL: EEPROM header size for "small" devices (< 8K) 32 + * @RAVE_SP_EEPROM_HEADER_BIG: EEPROM header size for "big" devices (> 8K) 33 + */ 34 + enum rave_sp_eeprom_header_size { 35 + RAVE_SP_EEPROM_HEADER_SMALL = 4U, 36 + RAVE_SP_EEPROM_HEADER_BIG = 5U, 37 + }; 38 + 39 + #define RAVE_SP_EEPROM_PAGE_SIZE 32U 40 + 41 + /** 42 + * struct rave_sp_eeprom_page - RAVE SP EEPROM page 43 + * 44 + * @type: Access type (see enum rave_sp_eeprom_access_type) 45 + * @success: Success flag (Success = 1, Failure = 0) 46 + * @data: Read data 47 + * 48 + * Note this structure corresponds to RSP_*_EEPROM payload from RAVE 49 + * SP ICD 50 + */ 51 + struct rave_sp_eeprom_page { 52 + u8 type; 53 + u8 success; 54 + u8 data[RAVE_SP_EEPROM_PAGE_SIZE]; 55 + } __packed; 56 + 57 + /** 58 + * struct rave_sp_eeprom - RAVE SP EEPROM device 59 + * 60 + * @sp: Pointer to parent RAVE SP device 61 + * @mutex: Lock protecting access to EEPROM 62 + * @address: EEPROM device address 63 + * @header_size: Size of EEPROM command header for this device 64 + * @dev: Pointer to corresponding struct device used for logging 65 + */ 66 + struct rave_sp_eeprom { 67 + struct rave_sp *sp; 68 + struct mutex mutex; 
69 + u8 address; 70 + unsigned int header_size; 71 + struct device *dev; 72 + }; 73 + 74 + /** 75 + * rave_sp_eeprom_io - Low-level part of EEPROM page access 76 + * 77 + * @eeprom: EEPROM device to write to 78 + * @type: EEPROM access type (read or write) 79 + * @idx: number of the EEPROM page 80 + * @page: Data to write or buffer to store result (via page->data) 81 + * 82 + * This function does all of the low-level work required to perform an 83 + * EEPROM access. This includes formatting the correct command payload, 84 + * sending it and checking received results. 85 + * 86 + * Returns zero in case of success or negative error code in 87 + * case of failure. 88 + */ 89 + static int rave_sp_eeprom_io(struct rave_sp_eeprom *eeprom, 90 + enum rave_sp_eeprom_access_type type, 91 + u16 idx, 92 + struct rave_sp_eeprom_page *page) 93 + { 94 + const bool is_write = type == RAVE_SP_EEPROM_WRITE; 95 + const unsigned int data_size = is_write ? sizeof(page->data) : 0; 96 + const unsigned int cmd_size = eeprom->header_size + data_size; 97 + const unsigned int rsp_size = 98 + is_write ? sizeof(*page) - sizeof(page->data) : sizeof(*page); 99 + unsigned int offset = 0; 100 + u8 cmd[cmd_size]; 101 + int ret; 102 + 103 + cmd[offset++] = eeprom->address; 104 + cmd[offset++] = 0; 105 + cmd[offset++] = type; 106 + cmd[offset++] = idx; 107 + 108 + /* 109 + * If there's still room in this command's header it means we 110 + * are talking to an EEPROM that uses 16-bit page numbers and we 111 + * have to specify index's MSB in payload as well. 112 + */ 113 + if (offset < eeprom->header_size) 114 + cmd[offset++] = idx >> 8; 115 + /* 116 + * Copy our data to write to command buffer first. 
In case of 117 + * a read data_size should be zero and memcpy would become a 118 + * no-op 119 + */ 120 + memcpy(&cmd[offset], page->data, data_size); 121 + 122 + ret = rave_sp_exec(eeprom->sp, cmd, cmd_size, page, rsp_size); 123 + if (ret) 124 + return ret; 125 + 126 + if (page->type != type) 127 + return -EPROTO; 128 + 129 + if (!page->success) 130 + return -EIO; 131 + 132 + return 0; 133 + } 134 + 135 + /** 136 + * rave_sp_eeprom_page_access - Access single EEPROM page 137 + * 138 + * @eeprom: EEPROM device to access 139 + * @type: Access type to perform (read or write) 140 + * @offset: Offset within EEPROM to access 141 + * @data: Data buffer 142 + * @data_len: Size of the data buffer 143 + * 144 + * This function performs a generic access to a single page or a 145 + * portion thereof. Requested access MUST NOT cross the EEPROM page 146 + * boundary. 147 + * 148 + * Returns zero in case of success or negative error code in 149 + * case of failure. 150 + */ 151 + static int 152 + rave_sp_eeprom_page_access(struct rave_sp_eeprom *eeprom, 153 + enum rave_sp_eeprom_access_type type, 154 + unsigned int offset, u8 *data, 155 + size_t data_len) 156 + { 157 + const unsigned int page_offset = offset % RAVE_SP_EEPROM_PAGE_SIZE; 158 + const unsigned int page_nr = offset / RAVE_SP_EEPROM_PAGE_SIZE; 159 + struct rave_sp_eeprom_page page; 160 + int ret; 161 + 162 + /* 163 + * This function will not work if data access we've been asked 164 + * to do is crossing EEPROM page boundary. Normally this 165 + * should never happen and getting here would indicate a bug 166 + * in the code. 167 + */ 168 + if (WARN_ON(data_len > sizeof(page.data) - page_offset)) 169 + return -EINVAL; 170 + 171 + if (type == RAVE_SP_EEPROM_WRITE) { 172 + /* 173 + * If doing a partial write we need to do a read first 174 + * to fill the rest of the page with correct data. 
175 + */ 176 + if (data_len < RAVE_SP_EEPROM_PAGE_SIZE) { 177 + ret = rave_sp_eeprom_io(eeprom, RAVE_SP_EEPROM_READ, 178 + page_nr, &page); 179 + if (ret) 180 + return ret; 181 + } 182 + 183 + memcpy(&page.data[page_offset], data, data_len); 184 + } 185 + 186 + ret = rave_sp_eeprom_io(eeprom, type, page_nr, &page); 187 + if (ret) 188 + return ret; 189 + 190 + /* 191 + * Since we receive the result of the read via 'page.data' 192 + * buffer we need to copy that to 'data' 193 + */ 194 + if (type == RAVE_SP_EEPROM_READ) 195 + memcpy(data, &page.data[page_offset], data_len); 196 + 197 + return 0; 198 + } 199 + 200 + /** 201 + * rave_sp_eeprom_access - Access EEPROM data 202 + * 203 + * @eeprom: EEPROM device to access 204 + * @type: Access type to perform (read or write) 205 + * @offset: Offset within EEPROM to access 206 + * @data: Data buffer 207 + * @data_len: Size of the data buffer 208 + * 209 + * This function performs a generic access (either read or write) at 210 + * an arbitrary offset (not necessarily page aligned) of arbitrary length 211 + * (is not constrained by EEPROM page size). 212 + * 213 + * Returns zero in case of success or negative error code in case of 214 + * failure. 
215 + */ 216 + static int rave_sp_eeprom_access(struct rave_sp_eeprom *eeprom, 217 + enum rave_sp_eeprom_access_type type, 218 + unsigned int offset, u8 *data, 219 + unsigned int data_len) 220 + { 221 + unsigned int residue; 222 + unsigned int chunk; 223 + unsigned int head; 224 + int ret; 225 + 226 + mutex_lock(&eeprom->mutex); 227 + 228 + head = offset % RAVE_SP_EEPROM_PAGE_SIZE; 229 + residue = data_len; 230 + 231 + do { 232 + /* 233 + * On the first iteration, if we are doing an access that is 234 + * not 32-byte aligned, we need to access only data up 235 + * to a page boundary to avoid crossing it in 236 + * rave_sp_eeprom_page_access() 237 + */ 238 + if (unlikely(head)) { 239 + chunk = RAVE_SP_EEPROM_PAGE_SIZE - head; 240 + /* 241 + * This can only happen once per 242 + * rave_sp_eeprom_access() call, so we set 243 + * head to zero to process all the other 244 + * iterations normally. 245 + */ 246 + head = 0; 247 + } else { 248 + chunk = RAVE_SP_EEPROM_PAGE_SIZE; 249 + } 250 + 251 + /* 252 + * We should never read more than 'residue' bytes 253 + */ 254 + chunk = min(chunk, residue); 255 + ret = rave_sp_eeprom_page_access(eeprom, type, offset, 256 + data, chunk); 257 + if (ret) 258 + goto out; 259 + 260 + residue -= chunk; 261 + offset += chunk; 262 + data += chunk; 263 + } while (residue); 264 + out: 265 + mutex_unlock(&eeprom->mutex); 266 + return ret; 267 + } 268 + 269 + static int rave_sp_eeprom_reg_read(void *eeprom, unsigned int offset, 270 + void *val, size_t bytes) 271 + { 272 + return rave_sp_eeprom_access(eeprom, RAVE_SP_EEPROM_READ, 273 + offset, val, bytes); 274 + } 275 + 276 + static int rave_sp_eeprom_reg_write(void *eeprom, unsigned int offset, 277 + void *val, size_t bytes) 278 + { 279 + return rave_sp_eeprom_access(eeprom, RAVE_SP_EEPROM_WRITE, 280 + offset, val, bytes); 281 + } 282 + 283 + static int rave_sp_eeprom_probe(struct platform_device *pdev) 284 + { 285 + struct device *dev = &pdev->dev; 286 + struct rave_sp *sp = 
dev_get_drvdata(dev->parent); 287 + struct device_node *np = dev->of_node; 288 + struct nvmem_config config = { 0 }; 289 + struct rave_sp_eeprom *eeprom; 290 + struct nvmem_device *nvmem; 291 + u32 reg[2], size; 292 + 293 + if (of_property_read_u32_array(np, "reg", reg, ARRAY_SIZE(reg))) { 294 + dev_err(dev, "Failed to parse \"reg\" property\n"); 295 + return -EINVAL; 296 + } 297 + 298 + size = reg[1]; 299 + /* 300 + * Per ICD, we have no more than 2 bytes to specify EEPROM 301 + * page. 302 + */ 303 + if (size > U16_MAX * RAVE_SP_EEPROM_PAGE_SIZE) { 304 + dev_err(dev, "Specified size is too big\n"); 305 + return -EINVAL; 306 + } 307 + 308 + eeprom = devm_kzalloc(dev, sizeof(*eeprom), GFP_KERNEL); 309 + if (!eeprom) 310 + return -ENOMEM; 311 + 312 + eeprom->address = reg[0]; 313 + eeprom->sp = sp; 314 + eeprom->dev = dev; 315 + 316 + if (size > SZ_8K) 317 + eeprom->header_size = RAVE_SP_EEPROM_HEADER_BIG; 318 + else 319 + eeprom->header_size = RAVE_SP_EEPROM_HEADER_SMALL; 320 + 321 + mutex_init(&eeprom->mutex); 322 + 323 + config.id = -1; 324 + of_property_read_string(np, "zii,eeprom-name", &config.name); 325 + config.priv = eeprom; 326 + config.dev = dev; 327 + config.size = size; 328 + config.reg_read = rave_sp_eeprom_reg_read; 329 + config.reg_write = rave_sp_eeprom_reg_write; 330 + config.word_size = 1; 331 + config.stride = 1; 332 + 333 + nvmem = devm_nvmem_register(dev, &config); 334 + 335 + return PTR_ERR_OR_ZERO(nvmem); 336 + } 337 + 338 + static const struct of_device_id rave_sp_eeprom_of_match[] = { 339 + { .compatible = "zii,rave-sp-eeprom" }, 340 + {} 341 + }; 342 + MODULE_DEVICE_TABLE(of, rave_sp_eeprom_of_match); 343 + 344 + static struct platform_driver rave_sp_eeprom_driver = { 345 + .probe = rave_sp_eeprom_probe, 346 + .driver = { 347 + .name = KBUILD_MODNAME, 348 + .of_match_table = rave_sp_eeprom_of_match, 349 + }, 350 + }; 351 + module_platform_driver(rave_sp_eeprom_driver); 352 + 353 + MODULE_LICENSE("GPL"); 354 + MODULE_AUTHOR("Andrey 
Vostrikov <andrey.vostrikov@cogentembedded.com>"); 355 + MODULE_AUTHOR("Nikita Yushchenko <nikita.yoush@cogentembedded.com>"); 356 + MODULE_AUTHOR("Andrey Smirnov <andrew.smirnov@gmail.com>"); 357 + MODULE_DESCRIPTION("RAVE SP EEPROM driver");
+1 -2
drivers/slimbus/qcom-ctrl.c
··· 439 439 static void qcom_slim_rxwq(struct work_struct *work) 440 440 { 441 441 u8 buf[SLIM_MSGQ_BUF_LEN]; 442 - u8 mc, mt, len; 442 + u8 mc, mt; 443 443 int ret; 444 444 struct qcom_slim_ctrl *ctrl = container_of(work, struct qcom_slim_ctrl, 445 445 wd); 446 446 447 447 while ((slim_get_current_rxbuf(ctrl, buf)) != -ENODATA) { 448 - len = SLIM_HEADER_GET_RL(buf[0]); 449 448 mt = SLIM_HEADER_GET_MT(buf[0]); 450 449 mc = SLIM_HEADER_GET_MC(buf[1]); 451 450 if (mt == SLIM_MSG_MT_CORE &&
+1 -1
drivers/soundwire/Kconfig
··· 27 27 tristate "Intel SoundWire Master driver" 28 28 select SOUNDWIRE_CADENCE 29 29 select SOUNDWIRE_BUS 30 - depends on X86 && ACPI 30 + depends on X86 && ACPI && SND_SOC 31 31 ---help--- 32 32 SoundWire Intel Master driver. 33 33 If you have an Intel platform which has a SoundWire Master then
+1 -1
drivers/soundwire/Makefile
··· 3 3 # 4 4 5 5 #Bus Objs 6 - soundwire-bus-objs := bus_type.o bus.o slave.o mipi_disco.o 6 + soundwire-bus-objs := bus_type.o bus.o slave.o mipi_disco.o stream.o 7 7 obj-$(CONFIG_SOUNDWIRE_BUS) += soundwire-bus.o 8 8 9 9 #Cadence Objs
+43
drivers/soundwire/bus.c
··· 17 17 */ 18 18 int sdw_add_bus_master(struct sdw_bus *bus) 19 19 { 20 + struct sdw_master_prop *prop = NULL; 20 21 int ret; 21 22 22 23 if (!bus->dev) { ··· 33 32 mutex_init(&bus->msg_lock); 34 33 mutex_init(&bus->bus_lock); 35 34 INIT_LIST_HEAD(&bus->slaves); 35 + INIT_LIST_HEAD(&bus->m_rt_list); 36 36 37 37 if (bus->ops->read_prop) { 38 38 ret = bus->ops->read_prop(bus); ··· 78 76 dev_err(bus->dev, "Finding slaves failed:%d\n", ret); 79 77 return ret; 80 78 } 79 + 80 + /* 81 + * Initialize clock values based on Master properties. The max 82 + * frequency is read from max_freq property. Current assumption 83 + * is that the bus will start at highest clock frequency when 84 + * powered on. 85 + * 86 + * Default active bank will be 0 as out of reset the Slaves have 87 + * to start with bank 0 (Table 40 of Spec) 88 + */ 89 + prop = &bus->prop; 90 + bus->params.max_dr_freq = prop->max_freq * SDW_DOUBLE_RATE_FACTOR; 91 + bus->params.curr_dr_freq = bus->params.max_dr_freq; 92 + bus->params.curr_bank = SDW_BANK0; 93 + bus->params.next_bank = SDW_BANK1; 81 94 82 95 return 0; 83 96 } ··· 591 574 mutex_lock(&slave->bus->bus_lock); 592 575 slave->status = status; 593 576 mutex_unlock(&slave->bus->bus_lock); 577 + } 578 + 579 + int sdw_configure_dpn_intr(struct sdw_slave *slave, 580 + int port, bool enable, int mask) 581 + { 582 + u32 addr; 583 + int ret; 584 + u8 val = 0; 585 + 586 + addr = SDW_DPN_INTMASK(port); 587 + 588 + /* Set/Clear port ready interrupt mask */ 589 + if (enable) { 590 + val |= mask; 591 + val |= SDW_DPN_INT_PORT_READY; 592 + } else { 593 + val &= ~(mask); 594 + val &= ~SDW_DPN_INT_PORT_READY; 595 + } 596 + 597 + ret = sdw_update(slave, addr, (mask | SDW_DPN_INT_PORT_READY), val); 598 + if (ret < 0) 599 + dev_err(slave->bus->dev, 600 + "SDW_DPN_INTMASK write failed:%d\n", ret); 601 + 602 + return ret; 594 603 } 595 604 596 605 static int sdw_initialize_slave(struct sdw_slave *slave)
+72
drivers/soundwire/bus.h
··· 45 45 bool page; 46 46 }; 47 47 48 + #define SDW_DOUBLE_RATE_FACTOR 2 49 + 50 + extern int rows[SDW_FRAME_ROWS]; 51 + extern int cols[SDW_FRAME_COLS]; 52 + 53 + /** 54 + * sdw_port_runtime: Runtime port parameters for Master or Slave 55 + * 56 + * @num: Port number. For audio streams, valid port number ranges from 57 + * [1,14] 58 + * @ch_mask: Channel mask 59 + * @transport_params: Transport parameters 60 + * @port_params: Port parameters 61 + * @port_node: List node for Master or Slave port_list 62 + * 63 + * SoundWire spec has no mention of ports for Master interface but the 64 + * concept is logically extended. 65 + */ 66 + struct sdw_port_runtime { 67 + int num; 68 + int ch_mask; 69 + struct sdw_transport_params transport_params; 70 + struct sdw_port_params port_params; 71 + struct list_head port_node; 72 + }; 73 + 74 + /** 75 + * sdw_slave_runtime: Runtime Stream parameters for Slave 76 + * 77 + * @slave: Slave handle 78 + * @direction: Data direction for Slave 79 + * @ch_count: Number of channels handled by the Slave for 80 + * this stream 81 + * @m_rt_node: sdw_master_runtime list node 82 + * @port_list: List of Slave Ports configured for this stream 83 + */ 84 + struct sdw_slave_runtime { 85 + struct sdw_slave *slave; 86 + enum sdw_data_direction direction; 87 + unsigned int ch_count; 88 + struct list_head m_rt_node; 89 + struct list_head port_list; 90 + }; 91 + 92 + /** 93 + * sdw_master_runtime: Runtime stream parameters for Master 94 + * 95 + * @bus: Bus handle 96 + * @stream: Stream runtime handle 97 + * @direction: Data direction for Master 98 + * @ch_count: Number of channels handled by the Master for 99 + * this stream, can be zero. 100 + * @slave_rt_list: Slave runtime list 101 + * @port_list: List of Master Ports configured for this stream, can be zero. 
102 + * @bus_node: sdw_bus m_rt_list node 103 + */ 104 + struct sdw_master_runtime { 105 + struct sdw_bus *bus; 106 + struct sdw_stream_runtime *stream; 107 + enum sdw_data_direction direction; 108 + unsigned int ch_count; 109 + struct list_head slave_rt_list; 110 + struct list_head port_list; 111 + struct list_head bus_node; 112 + }; 113 + 114 + struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave, 115 + enum sdw_data_direction direction, 116 + unsigned int port_num); 117 + int sdw_configure_dpn_intr(struct sdw_slave *slave, int port, 118 + bool enable, int mask); 119 + 48 120 int sdw_transfer(struct sdw_bus *bus, struct sdw_msg *msg); 49 121 int sdw_transfer_defer(struct sdw_bus *bus, struct sdw_msg *msg, 50 122 struct sdw_defer *defer);
+442 -9
drivers/soundwire/cadence_master.c
··· 13 13 #include <linux/mod_devicetable.h> 14 14 #include <linux/soundwire/sdw_registers.h> 15 15 #include <linux/soundwire/sdw.h> 16 + #include <sound/pcm_params.h> 17 + #include <sound/soc.h> 16 18 #include "bus.h" 17 19 #include "cadence_master.h" 18 20 ··· 398 396 return 0; 399 397 } 400 398 401 - static enum sdw_command_response 399 + enum sdw_command_response 402 400 cdns_xfer_msg(struct sdw_bus *bus, struct sdw_msg *msg) 403 401 { 404 402 struct sdw_cdns *cdns = bus_to_cdns(bus); ··· 424 422 exit: 425 423 return ret; 426 424 } 425 + EXPORT_SYMBOL(cdns_xfer_msg); 427 426 428 - static enum sdw_command_response 427 + enum sdw_command_response 429 428 cdns_xfer_msg_defer(struct sdw_bus *bus, 430 429 struct sdw_msg *msg, struct sdw_defer *defer) 431 430 { ··· 446 443 447 444 return _cdns_xfer_msg(cdns, msg, cmd, 0, msg->len, true); 448 445 } 446 + EXPORT_SYMBOL(cdns_xfer_msg_defer); 449 447 450 - static enum sdw_command_response 448 + enum sdw_command_response 451 449 cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num) 452 450 { 453 451 struct sdw_cdns *cdns = bus_to_cdns(bus); ··· 460 456 461 457 return cdns_program_scp_addr(cdns, &msg); 462 458 } 459 + EXPORT_SYMBOL(cdns_reset_page_addr); 463 460 464 461 /* 465 462 * IRQ handling ··· 671 666 } 672 667 EXPORT_SYMBOL(sdw_cdns_enable_interrupt); 673 668 669 + static int cdns_allocate_pdi(struct sdw_cdns *cdns, 670 + struct sdw_cdns_pdi **stream, 671 + u32 num, u32 pdi_offset) 672 + { 673 + struct sdw_cdns_pdi *pdi; 674 + int i; 675 + 676 + if (!num) 677 + return 0; 678 + 679 + pdi = devm_kcalloc(cdns->dev, num, sizeof(*pdi), GFP_KERNEL); 680 + if (!pdi) 681 + return -ENOMEM; 682 + 683 + for (i = 0; i < num; i++) { 684 + pdi[i].num = i + pdi_offset; 685 + pdi[i].assigned = false; 686 + } 687 + 688 + *stream = pdi; 689 + return 0; 690 + } 691 + 692 + /** 693 + * sdw_cdns_pdi_init() - PDI initialization routine 694 + * 695 + * @cdns: Cadence instance 696 + * @config: Stream configurations 697 + */ 698 + 
int sdw_cdns_pdi_init(struct sdw_cdns *cdns, 699 + struct sdw_cdns_stream_config config) 700 + { 701 + struct sdw_cdns_streams *stream; 702 + int offset, i, ret; 703 + 704 + cdns->pcm.num_bd = config.pcm_bd; 705 + cdns->pcm.num_in = config.pcm_in; 706 + cdns->pcm.num_out = config.pcm_out; 707 + cdns->pdm.num_bd = config.pdm_bd; 708 + cdns->pdm.num_in = config.pdm_in; 709 + cdns->pdm.num_out = config.pdm_out; 710 + 711 + /* Allocate PDIs for PCMs */ 712 + stream = &cdns->pcm; 713 + 714 + /* First two PDIs are reserved for bulk transfers */ 715 + stream->num_bd -= CDNS_PCM_PDI_OFFSET; 716 + offset = CDNS_PCM_PDI_OFFSET; 717 + 718 + ret = cdns_allocate_pdi(cdns, &stream->bd, 719 + stream->num_bd, offset); 720 + if (ret) 721 + return ret; 722 + 723 + offset += stream->num_bd; 724 + 725 + ret = cdns_allocate_pdi(cdns, &stream->in, 726 + stream->num_in, offset); 727 + if (ret) 728 + return ret; 729 + 730 + offset += stream->num_in; 731 + 732 + ret = cdns_allocate_pdi(cdns, &stream->out, 733 + stream->num_out, offset); 734 + if (ret) 735 + return ret; 736 + 737 + /* Update total number of PCM PDIs */ 738 + stream->num_pdi = stream->num_bd + stream->num_in + stream->num_out; 739 + cdns->num_ports = stream->num_pdi; 740 + 741 + /* Allocate PDIs for PDMs */ 742 + stream = &cdns->pdm; 743 + offset = CDNS_PDM_PDI_OFFSET; 744 + ret = cdns_allocate_pdi(cdns, &stream->bd, 745 + stream->num_bd, offset); 746 + if (ret) 747 + return ret; 748 + 749 + offset += stream->num_bd; 750 + 751 + ret = cdns_allocate_pdi(cdns, &stream->in, 752 + stream->num_in, offset); 753 + if (ret) 754 + return ret; 755 + 756 + offset += stream->num_in; 757 + 758 + ret = cdns_allocate_pdi(cdns, &stream->out, 759 + stream->num_out, offset); 760 + if (ret) 761 + return ret; 762 + 763 + /* Update total number of PDM PDIs */ 764 + stream->num_pdi = stream->num_bd + stream->num_in + stream->num_out; 765 + cdns->num_ports += stream->num_pdi; 766 + 767 + cdns->ports = devm_kcalloc(cdns->dev, cdns->num_ports, 768 + 
sizeof(*cdns->ports), GFP_KERNEL); 769 + if (!cdns->ports) { 770 + ret = -ENOMEM; 771 + return ret; 772 + } 773 + 774 + for (i = 0; i < cdns->num_ports; i++) { 775 + cdns->ports[i].assigned = false; 776 + cdns->ports[i].num = i + 1; /* Port 0 reserved for bulk */ 777 + } 778 + 779 + return 0; 780 + } 781 + EXPORT_SYMBOL(sdw_cdns_pdi_init); 782 + 674 783 /** 675 784 * sdw_cdns_init() - Cadence initialization 676 785 * @cdns: Cadence instance ··· 846 727 } 847 728 EXPORT_SYMBOL(sdw_cdns_init); 848 729 849 - struct sdw_master_ops sdw_cdns_master_ops = { 850 - .read_prop = sdw_master_read_prop, 851 - .xfer_msg = cdns_xfer_msg, 852 - .xfer_msg_defer = cdns_xfer_msg_defer, 853 - .reset_page_addr = cdns_reset_page_addr, 730 + int cdns_bus_conf(struct sdw_bus *bus, struct sdw_bus_params *params) 731 + { 732 + struct sdw_cdns *cdns = bus_to_cdns(bus); 733 + int mcp_clkctrl_off, mcp_clkctrl; 734 + int divider; 735 + 736 + if (!params->curr_dr_freq) { 737 + dev_err(cdns->dev, "NULL curr_dr_freq"); 738 + return -EINVAL; 739 + } 740 + 741 + divider = (params->max_dr_freq / params->curr_dr_freq) - 1; 742 + 743 + if (params->next_bank) 744 + mcp_clkctrl_off = CDNS_MCP_CLK_CTRL1; 745 + else 746 + mcp_clkctrl_off = CDNS_MCP_CLK_CTRL0; 747 + 748 + mcp_clkctrl = cdns_readl(cdns, mcp_clkctrl_off); 749 + mcp_clkctrl |= divider; 750 + cdns_writel(cdns, mcp_clkctrl_off, mcp_clkctrl); 751 + 752 + return 0; 753 + } 754 + EXPORT_SYMBOL(cdns_bus_conf); 755 + 756 + static int cdns_port_params(struct sdw_bus *bus, 757 + struct sdw_port_params *p_params, unsigned int bank) 758 + { 759 + struct sdw_cdns *cdns = bus_to_cdns(bus); 760 + int dpn_config = 0, dpn_config_off; 761 + 762 + if (bank) 763 + dpn_config_off = CDNS_DPN_B1_CONFIG(p_params->num); 764 + else 765 + dpn_config_off = CDNS_DPN_B0_CONFIG(p_params->num); 766 + 767 + dpn_config = cdns_readl(cdns, dpn_config_off); 768 + 769 + dpn_config |= ((p_params->bps - 1) << 770 + SDW_REG_SHIFT(CDNS_DPN_CONFIG_WL)); 771 + dpn_config |= 
(p_params->flow_mode << 772 + SDW_REG_SHIFT(CDNS_DPN_CONFIG_PORT_FLOW)); 773 + dpn_config |= (p_params->data_mode << 774 + SDW_REG_SHIFT(CDNS_DPN_CONFIG_PORT_DAT)); 775 + 776 + cdns_writel(cdns, dpn_config_off, dpn_config); 777 + 778 + return 0; 779 + } 780 + 781 + static int cdns_transport_params(struct sdw_bus *bus, 782 + struct sdw_transport_params *t_params, 783 + enum sdw_reg_bank bank) 784 + { 785 + struct sdw_cdns *cdns = bus_to_cdns(bus); 786 + int dpn_offsetctrl = 0, dpn_offsetctrl_off; 787 + int dpn_config = 0, dpn_config_off; 788 + int dpn_hctrl = 0, dpn_hctrl_off; 789 + int num = t_params->port_num; 790 + int dpn_samplectrl_off; 791 + 792 + /* 793 + * Note: Only full data port is supported on the Master side for 794 + * both PCM and PDM ports. 795 + */ 796 + 797 + if (bank) { 798 + dpn_config_off = CDNS_DPN_B1_CONFIG(num); 799 + dpn_samplectrl_off = CDNS_DPN_B1_SAMPLE_CTRL(num); 800 + dpn_hctrl_off = CDNS_DPN_B1_HCTRL(num); 801 + dpn_offsetctrl_off = CDNS_DPN_B1_OFFSET_CTRL(num); 802 + } else { 803 + dpn_config_off = CDNS_DPN_B0_CONFIG(num); 804 + dpn_samplectrl_off = CDNS_DPN_B0_SAMPLE_CTRL(num); 805 + dpn_hctrl_off = CDNS_DPN_B0_HCTRL(num); 806 + dpn_offsetctrl_off = CDNS_DPN_B0_OFFSET_CTRL(num); 807 + } 808 + 809 + dpn_config = cdns_readl(cdns, dpn_config_off); 810 + 811 + dpn_config |= (t_params->blk_grp_ctrl << 812 + SDW_REG_SHIFT(CDNS_DPN_CONFIG_BGC)); 813 + dpn_config |= (t_params->blk_pkg_mode << 814 + SDW_REG_SHIFT(CDNS_DPN_CONFIG_BPM)); 815 + cdns_writel(cdns, dpn_config_off, dpn_config); 816 + 817 + dpn_offsetctrl |= (t_params->offset1 << 818 + SDW_REG_SHIFT(CDNS_DPN_OFFSET_CTRL_1)); 819 + dpn_offsetctrl |= (t_params->offset2 << 820 + SDW_REG_SHIFT(CDNS_DPN_OFFSET_CTRL_2)); 821 + cdns_writel(cdns, dpn_offsetctrl_off, dpn_offsetctrl); 822 + 823 + dpn_hctrl |= (t_params->hstart << 824 + SDW_REG_SHIFT(CDNS_DPN_HCTRL_HSTART)); 825 + dpn_hctrl |= (t_params->hstop << SDW_REG_SHIFT(CDNS_DPN_HCTRL_HSTOP)); 826 + dpn_hctrl |= (t_params->lane_ctrl << 
827 + SDW_REG_SHIFT(CDNS_DPN_HCTRL_LCTRL)); 828 + 829 + cdns_writel(cdns, dpn_hctrl_off, dpn_hctrl); 830 + cdns_writel(cdns, dpn_samplectrl_off, (t_params->sample_interval - 1)); 831 + 832 + return 0; 833 + } 834 + 835 + static int cdns_port_enable(struct sdw_bus *bus, 836 + struct sdw_enable_ch *enable_ch, unsigned int bank) 837 + { 838 + struct sdw_cdns *cdns = bus_to_cdns(bus); 839 + int dpn_chnen_off, ch_mask; 840 + 841 + if (bank) 842 + dpn_chnen_off = CDNS_DPN_B1_CH_EN(enable_ch->port_num); 843 + else 844 + dpn_chnen_off = CDNS_DPN_B0_CH_EN(enable_ch->port_num); 845 + 846 + ch_mask = enable_ch->ch_mask * enable_ch->enable; 847 + cdns_writel(cdns, dpn_chnen_off, ch_mask); 848 + 849 + return 0; 850 + } 851 + 852 + static const struct sdw_master_port_ops cdns_port_ops = { 853 + .dpn_set_port_params = cdns_port_params, 854 + .dpn_set_port_transport_params = cdns_transport_params, 855 + .dpn_port_enable_ch = cdns_port_enable, 854 856 }; 855 - EXPORT_SYMBOL(sdw_cdns_master_ops); 856 857 857 858 /** 858 859 * sdw_cdns_probe() - Cadence probe routine ··· 981 742 int sdw_cdns_probe(struct sdw_cdns *cdns) 982 743 { 983 744 init_completion(&cdns->tx_complete); 745 + cdns->bus.port_ops = &cdns_port_ops; 984 746 985 747 return 0; 986 748 } 987 749 EXPORT_SYMBOL(sdw_cdns_probe); 750 + 751 + int cdns_set_sdw_stream(struct snd_soc_dai *dai, 752 + void *stream, bool pcm, int direction) 753 + { 754 + struct sdw_cdns *cdns = snd_soc_dai_get_drvdata(dai); 755 + struct sdw_cdns_dma_data *dma; 756 + 757 + dma = kzalloc(sizeof(*dma), GFP_KERNEL); 758 + if (!dma) 759 + return -ENOMEM; 760 + 761 + if (pcm) 762 + dma->stream_type = SDW_STREAM_PCM; 763 + else 764 + dma->stream_type = SDW_STREAM_PDM; 765 + 766 + dma->bus = &cdns->bus; 767 + dma->link_id = cdns->instance; 768 + 769 + dma->stream = stream; 770 + 771 + if (direction == SNDRV_PCM_STREAM_PLAYBACK) 772 + dai->playback_dma_data = dma; 773 + else 774 + dai->capture_dma_data = dma; 775 + 776 + return 0; 777 + } 778 + 
EXPORT_SYMBOL(cdns_set_sdw_stream); 779 + 780 + /** 781 + * cdns_find_pdi() - Find a free PDI 782 + * 783 + * @cdns: Cadence instance 784 + * @num: Number of PDIs 785 + * @pdi: PDI instances 786 + * 787 + * Find and return a free PDI for a given PDI array 788 + */ 789 + static struct sdw_cdns_pdi *cdns_find_pdi(struct sdw_cdns *cdns, 790 + unsigned int num, struct sdw_cdns_pdi *pdi) 791 + { 792 + int i; 793 + 794 + for (i = 0; i < num; i++) { 795 + if (pdi[i].assigned == true) 796 + continue; 797 + pdi[i].assigned = true; 798 + return &pdi[i]; 799 + } 800 + 801 + return NULL; 802 + } 803 + 804 + /** 805 + * sdw_cdns_config_stream: Configure a stream 806 + * 807 + * @cdns: Cadence instance 808 + * @port: Cadence data port 809 + * @ch: Channel count 810 + * @dir: Data direction 811 + * @pdi: PDI to be used 812 + */ 813 + void sdw_cdns_config_stream(struct sdw_cdns *cdns, 814 + struct sdw_cdns_port *port, 815 + u32 ch, u32 dir, struct sdw_cdns_pdi *pdi) 816 + { 817 + u32 offset, val = 0; 818 + 819 + if (dir == SDW_DATA_DIR_RX) 820 + val = CDNS_PORTCTRL_DIRN; 821 + 822 + offset = CDNS_PORTCTRL + port->num * CDNS_PORT_OFFSET; 823 + cdns_updatel(cdns, offset, CDNS_PORTCTRL_DIRN, val); 824 + 825 + val = port->num; 826 + val |= ((1 << ch) - 1) << SDW_REG_SHIFT(CDNS_PDI_CONFIG_CHANNEL); 827 + cdns_writel(cdns, CDNS_PDI_CONFIG(pdi->num), val); 828 + } 829 + EXPORT_SYMBOL(sdw_cdns_config_stream); 830 + 831 + /** 832 + * cdns_get_num_pdi() - Get number of PDIs required 833 + * 834 + * @cdns: Cadence instance 835 + * @pdi: PDI to be used 836 + * @num: Number of PDIs 837 + * @ch_count: Channel count 838 + */ 839 + static int cdns_get_num_pdi(struct sdw_cdns *cdns, 840 + struct sdw_cdns_pdi *pdi, 841 + unsigned int num, u32 ch_count) 842 + { 843 + int i, pdis = 0; 844 + 845 + for (i = 0; i < num; i++) { 846 + if (pdi[i].assigned == true) 847 + continue; 848 + 849 + if (pdi[i].ch_count < ch_count) 850 + ch_count -= pdi[i].ch_count; 851 + else 852 + ch_count = 0; 853 + 854 + 
pdis++; 855 + 856 + if (!ch_count) 857 + break; 858 + } 859 + 860 + if (ch_count) 861 + return 0; 862 + 863 + return pdis; 864 + } 865 + 866 + /** 867 + * sdw_cdns_get_stream() - Get stream information 868 + * 869 + * @cdns: Cadence instance 870 + * @stream: Stream to be allocated 871 + * @ch: Channel count 872 + * @dir: Data direction 873 + */ 874 + int sdw_cdns_get_stream(struct sdw_cdns *cdns, 875 + struct sdw_cdns_streams *stream, 876 + u32 ch, u32 dir) 877 + { 878 + int pdis = 0; 879 + 880 + if (dir == SDW_DATA_DIR_RX) 881 + pdis = cdns_get_num_pdi(cdns, stream->in, stream->num_in, ch); 882 + else 883 + pdis = cdns_get_num_pdi(cdns, stream->out, stream->num_out, ch); 884 + 885 + /* check if we found PDI, else find in bi-directional */ 886 + if (!pdis) 887 + pdis = cdns_get_num_pdi(cdns, stream->bd, stream->num_bd, ch); 888 + 889 + return pdis; 890 + } 891 + EXPORT_SYMBOL(sdw_cdns_get_stream); 892 + 893 + /** 894 + * sdw_cdns_alloc_stream() - Allocate a stream 895 + * 896 + * @cdns: Cadence instance 897 + * @stream: Stream to be allocated 898 + * @port: Cadence data port 899 + * @ch: Channel count 900 + * @dir: Data direction 901 + */ 902 + int sdw_cdns_alloc_stream(struct sdw_cdns *cdns, 903 + struct sdw_cdns_streams *stream, 904 + struct sdw_cdns_port *port, u32 ch, u32 dir) 905 + { 906 + struct sdw_cdns_pdi *pdi = NULL; 907 + 908 + if (dir == SDW_DATA_DIR_RX) 909 + pdi = cdns_find_pdi(cdns, stream->num_in, stream->in); 910 + else 911 + pdi = cdns_find_pdi(cdns, stream->num_out, stream->out); 912 + 913 + /* check if we found a PDI, else find in bi-directional */ 914 + if (!pdi) 915 + pdi = cdns_find_pdi(cdns, stream->num_bd, stream->bd); 916 + 917 + if (!pdi) 918 + return -EIO; 919 + 920 + port->pdi = pdi; 921 + pdi->l_ch_num = 0; 922 + pdi->h_ch_num = ch - 1; 923 + pdi->dir = dir; 924 + pdi->ch_count = ch; 925 + 926 + return 0; 927 + } 928 + EXPORT_SYMBOL(sdw_cdns_alloc_stream); 929 + 930 + void sdw_cdns_shutdown(struct snd_pcm_substream *substream, 931 + 
struct snd_soc_dai *dai) 932 + { 933 + struct sdw_cdns_dma_data *dma; 934 + 935 + dma = snd_soc_dai_get_dma_data(dai, substream); 936 + if (!dma) 937 + return; 938 + 939 + snd_soc_dai_set_dma_data(dai, substream, NULL); 940 + kfree(dma); 941 + } 942 + EXPORT_SYMBOL(sdw_cdns_shutdown); 988 943 989 944 MODULE_LICENSE("Dual BSD/GPL"); 990 945 MODULE_DESCRIPTION("Cadence Soundwire Library");
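The PDI bookkeeping in cadence_master.c above (cdns_find_pdi()) is a plain first-fit scan over an array of descriptors: claim the first unassigned entry, or give up. A standalone sketch of that scan, using a stand-in struct rather than the kernel's struct sdw_cdns_pdi:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Stand-in for struct sdw_cdns_pdi: only the field the scan needs. */
struct pdi_slot {
	bool assigned;
	int num;
};

/* First-fit scan mirroring cdns_find_pdi(): mark and return the first
 * unassigned slot, or NULL when the array is exhausted. */
static struct pdi_slot *find_free_pdi(struct pdi_slot *pdi, unsigned int num)
{
	unsigned int i;

	for (i = 0; i < num; i++) {
		if (pdi[i].assigned)
			continue;
		pdi[i].assigned = true;
		return &pdi[i];
	}

	return NULL;
}
```

The NULL return is what lets sdw_cdns_alloc_stream() fall back from the directional arrays to the bi-directional one before giving up with -EIO.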
+151
drivers/soundwire/cadence_master.h
··· 1 1 // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 2 // Copyright(c) 2015-17 Intel Corporation. 3 + #include <sound/soc.h> 3 4 4 5 #ifndef __SDW_CADENCE_H 5 6 #define __SDW_CADENCE_H 7 + 8 + /** 9 + * struct sdw_cdns_pdi: PDI (Physical Data Interface) instance 10 + * 11 + * @assigned: pdi assigned 12 + * @num: pdi number 13 + * @intel_alh_id: link identifier 14 + * @l_ch_num: low channel for PDI 15 + * @h_ch_num: high channel for PDI 16 + * @ch_count: total channel count for PDI 17 + * @dir: data direction 18 + * @type: stream type, PDM or PCM 19 + */ 20 + struct sdw_cdns_pdi { 21 + bool assigned; 22 + int num; 23 + int intel_alh_id; 24 + int l_ch_num; 25 + int h_ch_num; 26 + int ch_count; 27 + enum sdw_data_direction dir; 28 + enum sdw_stream_type type; 29 + }; 30 + 31 + /** 32 + * struct sdw_cdns_port: Cadence port structure 33 + * 34 + * @num: port number 35 + * @assigned: port assigned 36 + * @ch: channel count 37 + * @direction: data port direction 38 + * @pdi: pdi for this port 39 + */ 40 + struct sdw_cdns_port { 41 + unsigned int num; 42 + bool assigned; 43 + unsigned int ch; 44 + enum sdw_data_direction direction; 45 + struct sdw_cdns_pdi *pdi; 46 + }; 47 + 48 + /** 49 + * struct sdw_cdns_streams: Cadence stream data structure 50 + * 51 + * @num_bd: number of bidirectional streams 52 + * @num_in: number of input streams 53 + * @num_out: number of output streams 54 + * @num_ch_bd: number of bidirectional stream channels 55 + * @num_ch_in: number of input stream channels 56 + * @num_ch_out: number of output stream channels 57 + * @num_pdi: total number of PDIs 58 + * @bd: bidirectional streams 59 + * @in: input streams 60 + * @out: output streams 61 + */ 62 + struct sdw_cdns_streams { 63 + unsigned int num_bd; 64 + unsigned int num_in; 65 + unsigned int num_out; 66 + unsigned int num_ch_bd; 67 + unsigned int num_ch_in; 68 + unsigned int num_ch_out; 69 + unsigned int num_pdi; 70 + struct sdw_cdns_pdi *bd; 71 + struct sdw_cdns_pdi *in; 72 + struct
sdw_cdns_pdi *out; 73 + }; 74 + 75 + /** 76 + * struct sdw_cdns_stream_config: stream configuration 77 + * 78 + * @pcm_bd: number of bidirectional PCM streams supported 79 + * @pcm_in: number of input PCM streams supported 80 + * @pcm_out: number of output PCM streams supported 81 + * @pdm_bd: number of bidirectional PDM streams supported 82 + * @pdm_in: number of input PDM streams supported 83 + * @pdm_out: number of output PDM streams supported 84 + */ 85 + struct sdw_cdns_stream_config { 86 + unsigned int pcm_bd; 87 + unsigned int pcm_in; 88 + unsigned int pcm_out; 89 + unsigned int pdm_bd; 90 + unsigned int pdm_in; 91 + unsigned int pdm_out; 92 + }; 93 + 94 + /** 95 + * struct sdw_cdns_dma_data: Cadence DMA data 96 + * 97 + * @name: SoundWire stream name 98 + * @nr_ports: Number of ports 99 + * @port: Ports 100 + * @bus: Bus handle 101 + * @stream_type: Stream type 102 + * @link_id: Master link id 103 + */ 104 + struct sdw_cdns_dma_data { 105 + char *name; 106 + struct sdw_stream_runtime *stream; 107 + int nr_ports; 108 + struct sdw_cdns_port **port; 109 + struct sdw_bus *bus; 110 + enum sdw_stream_type stream_type; 111 + int link_id; 112 + }; 6 113 7 114 /** 8 115 * struct sdw_cdns - Cadence driver context ··· 119 12 * @response_buf: SoundWire response buffer 120 13 * @tx_complete: Tx completion 121 14 * @defer: Defer pointer 15 + * @ports: Data ports 16 + * @num_ports: Total number of data ports 17 + * @pcm: PCM streams 18 + * @pdm: PDM streams 122 19 * @registers: Cadence registers 123 20 * @link_up: Link status 124 21 * @msg_count: Messages sent on bus ··· 135 24 u32 response_buf[0x80]; 136 25 struct completion tx_complete; 137 26 struct sdw_defer *defer; 27 + 28 + struct sdw_cdns_port *ports; 29 + int num_ports; 30 + 31 + struct sdw_cdns_streams pcm; 32 + struct sdw_cdns_streams pdm; 138 33 139 34 void __iomem *registers; 140 35 ··· 159 42 irqreturn_t sdw_cdns_thread(int irq, void *dev_id); 160 43 161 44 int sdw_cdns_init(struct sdw_cdns *cdns); 45 + int 
sdw_cdns_pdi_init(struct sdw_cdns *cdns, 46 + struct sdw_cdns_stream_config config); 162 47 int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns); 163 48 49 + int sdw_cdns_get_stream(struct sdw_cdns *cdns, 50 + struct sdw_cdns_streams *stream, 51 + u32 ch, u32 dir); 52 + int sdw_cdns_alloc_stream(struct sdw_cdns *cdns, 53 + struct sdw_cdns_streams *stream, 54 + struct sdw_cdns_port *port, u32 ch, u32 dir); 55 + void sdw_cdns_config_stream(struct sdw_cdns *cdns, struct sdw_cdns_port *port, 56 + u32 ch, u32 dir, struct sdw_cdns_pdi *pdi); 164 57 58 + void sdw_cdns_shutdown(struct snd_pcm_substream *substream, 59 + struct snd_soc_dai *dai); 60 + int sdw_cdns_pcm_set_stream(struct snd_soc_dai *dai, 61 + void *stream, int direction); 62 + int sdw_cdns_pdm_set_stream(struct snd_soc_dai *dai, 63 + void *stream, int direction); 64 + 65 + enum sdw_command_response 66 + cdns_reset_page_addr(struct sdw_bus *bus, unsigned int dev_num); 67 + 68 + enum sdw_command_response 69 + cdns_xfer_msg(struct sdw_bus *bus, struct sdw_msg *msg); 70 + 71 + enum sdw_command_response 72 + cdns_xfer_msg_defer(struct sdw_bus *bus, 73 + struct sdw_msg *msg, struct sdw_defer *defer); 74 + 75 + int cdns_bus_conf(struct sdw_bus *bus, struct sdw_bus_params *params); 76 + 77 + int cdns_set_sdw_stream(struct snd_soc_dai *dai, 78 + void *stream, bool pcm, int direction); 165 79 #endif /* __SDW_CADENCE_H */
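The GENMASK-defined register fields used throughout these files are read with the (val & MASK) >> SDW_REG_SHIFT(MASK) pattern, e.g. when intel_pdi_init() decodes PCMSCAP. A userspace sketch of that idiom, assuming SDW_REG_SHIFT() resolves to the bit position of the mask's lowest set bit (the helper names here are illustrative, not the kernel macros):

```c
#include <assert.h>
#include <stdint.h>

/* Shift of a contiguous bitfield: position of the mask's lowest set
 * bit, the same value SDW_REG_SHIFT() is assumed to produce. */
static unsigned int mask_shift(uint32_t mask)
{
	unsigned int shift = 0;

	while (mask && !(mask & 1)) {
		mask >>= 1;
		shift++;
	}
	return shift;
}

/* Extract a field: mask it out, then shift it down to bit 0. */
static uint32_t field_get(uint32_t reg, uint32_t mask)
{
	return (reg & mask) >> mask_shift(mask);
}
```

With a GENMASK(7, 4)-style mask of 0xF0, a raw register value of 0xA5 yields a field value of 0xA, which is the shape of every SDW_SHIM_PCMSCAP_* decode in intel.c.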
+523 -2
drivers/soundwire/intel.c
··· 9 9 #include <linux/delay.h> 10 10 #include <linux/interrupt.h> 11 11 #include <linux/platform_device.h> 12 + #include <sound/pcm_params.h> 13 + #include <sound/soc.h> 12 14 #include <linux/soundwire/sdw_registers.h> 13 15 #include <linux/soundwire/sdw.h> 14 16 #include <linux/soundwire/sdw_intel.h> ··· 86 84 #define SDW_ALH_STRMZCFG_DMAT_VAL 0x3 87 85 #define SDW_ALH_STRMZCFG_DMAT GENMASK(7, 0) 88 86 #define SDW_ALH_STRMZCFG_CHN GENMASK(19, 16) 87 + 88 + enum intel_pdi_type { 89 + INTEL_PDI_IN = 0, 90 + INTEL_PDI_OUT = 1, 91 + INTEL_PDI_BD = 2, 92 + }; 89 93 90 94 struct sdw_intel { 91 95 struct sdw_cdns cdns; ··· 242 234 return ret; 243 235 } 244 236 237 + /* 238 + * PDI routines 239 + */ 240 + static void intel_pdi_init(struct sdw_intel *sdw, 241 + struct sdw_cdns_stream_config *config) 242 + { 243 + void __iomem *shim = sdw->res->shim; 244 + unsigned int link_id = sdw->instance; 245 + int pcm_cap, pdm_cap; 246 + 247 + /* PCM Stream Capability */ 248 + pcm_cap = intel_readw(shim, SDW_SHIM_PCMSCAP(link_id)); 249 + 250 + config->pcm_bd = (pcm_cap & SDW_SHIM_PCMSCAP_BSS) >> 251 + SDW_REG_SHIFT(SDW_SHIM_PCMSCAP_BSS); 252 + config->pcm_in = (pcm_cap & SDW_SHIM_PCMSCAP_ISS) >> 253 + SDW_REG_SHIFT(SDW_SHIM_PCMSCAP_ISS); 254 + config->pcm_out = (pcm_cap & SDW_SHIM_PCMSCAP_OSS) >> 255 + SDW_REG_SHIFT(SDW_SHIM_PCMSCAP_OSS); 256 + 257 + /* PDM Stream Capability */ 258 + pdm_cap = intel_readw(shim, SDW_SHIM_PDMSCAP(link_id)); 259 + 260 + config->pdm_bd = (pdm_cap & SDW_SHIM_PDMSCAP_BSS) >> 261 + SDW_REG_SHIFT(SDW_SHIM_PDMSCAP_BSS); 262 + config->pdm_in = (pdm_cap & SDW_SHIM_PDMSCAP_ISS) >> 263 + SDW_REG_SHIFT(SDW_SHIM_PDMSCAP_ISS); 264 + config->pdm_out = (pdm_cap & SDW_SHIM_PDMSCAP_OSS) >> 265 + SDW_REG_SHIFT(SDW_SHIM_PDMSCAP_OSS); 266 + } 267 + 268 + static int 269 + intel_pdi_get_ch_cap(struct sdw_intel *sdw, unsigned int pdi_num, bool pcm) 270 + { 271 + void __iomem *shim = sdw->res->shim; 272 + unsigned int link_id = sdw->instance; 273 + int count; 274 + 275 + if 
(pcm) { 276 + count = intel_readw(shim, SDW_SHIM_PCMSYCHC(link_id, pdi_num)); 277 + } else { 278 + count = intel_readw(shim, SDW_SHIM_PDMSCAP(link_id)); 279 + count = ((count & SDW_SHIM_PDMSCAP_CPSS) >> 280 + SDW_REG_SHIFT(SDW_SHIM_PDMSCAP_CPSS)); 281 + } 282 + 283 + /* zero based values for channel count in register */ 284 + count++; 285 + 286 + return count; 287 + } 288 + 289 + static int intel_pdi_get_ch_update(struct sdw_intel *sdw, 290 + struct sdw_cdns_pdi *pdi, 291 + unsigned int num_pdi, 292 + unsigned int *num_ch, bool pcm) 293 + { 294 + int i, ch_count = 0; 295 + 296 + for (i = 0; i < num_pdi; i++) { 297 + pdi->ch_count = intel_pdi_get_ch_cap(sdw, pdi->num, pcm); 298 + ch_count += pdi->ch_count; 299 + pdi++; 300 + } 301 + 302 + *num_ch = ch_count; 303 + return 0; 304 + } 305 + 306 + static int intel_pdi_stream_ch_update(struct sdw_intel *sdw, 307 + struct sdw_cdns_streams *stream, bool pcm) 308 + { 309 + intel_pdi_get_ch_update(sdw, stream->bd, stream->num_bd, 310 + &stream->num_ch_bd, pcm); 311 + 312 + intel_pdi_get_ch_update(sdw, stream->in, stream->num_in, 313 + &stream->num_ch_in, pcm); 314 + 315 + intel_pdi_get_ch_update(sdw, stream->out, stream->num_out, 316 + &stream->num_ch_out, pcm); 317 + 318 + return 0; 319 + } 320 + 321 + static int intel_pdi_ch_update(struct sdw_intel *sdw) 322 + { 323 + /* First update PCM streams followed by PDM streams */ 324 + intel_pdi_stream_ch_update(sdw, &sdw->cdns.pcm, true); 325 + intel_pdi_stream_ch_update(sdw, &sdw->cdns.pdm, false); 326 + 327 + return 0; 328 + } 329 + 330 + static void 331 + intel_pdi_shim_configure(struct sdw_intel *sdw, struct sdw_cdns_pdi *pdi) 332 + { 333 + void __iomem *shim = sdw->res->shim; 334 + unsigned int link_id = sdw->instance; 335 + int pdi_conf = 0; 336 + 337 + pdi->intel_alh_id = (link_id * 16) + pdi->num + 5; 338 + 339 + /* 340 + * Program stream parameters to stream SHIM register 341 + * This is applicable for PCM stream only. 
342 + */ 343 + if (pdi->type != SDW_STREAM_PCM) 344 + return; 345 + 346 + if (pdi->dir == SDW_DATA_DIR_RX) 347 + pdi_conf |= SDW_SHIM_PCMSYCM_DIR; 348 + else 349 + pdi_conf &= ~(SDW_SHIM_PCMSYCM_DIR); 350 + 351 + pdi_conf |= (pdi->intel_alh_id << 352 + SDW_REG_SHIFT(SDW_SHIM_PCMSYCM_STREAM)); 353 + pdi_conf |= (pdi->l_ch_num << SDW_REG_SHIFT(SDW_SHIM_PCMSYCM_LCHN)); 354 + pdi_conf |= (pdi->h_ch_num << SDW_REG_SHIFT(SDW_SHIM_PCMSYCM_HCHN)); 355 + 356 + intel_writew(shim, SDW_SHIM_PCMSYCHM(link_id, pdi->num), pdi_conf); 357 + } 358 + 359 + static void 360 + intel_pdi_alh_configure(struct sdw_intel *sdw, struct sdw_cdns_pdi *pdi) 361 + { 362 + void __iomem *alh = sdw->res->alh; 363 + unsigned int link_id = sdw->instance; 364 + unsigned int conf; 365 + 366 + pdi->intel_alh_id = (link_id * 16) + pdi->num + 5; 367 + 368 + /* Program Stream config ALH register */ 369 + conf = intel_readl(alh, SDW_ALH_STRMZCFG(pdi->intel_alh_id)); 370 + 371 + conf |= (SDW_ALH_STRMZCFG_DMAT_VAL << 372 + SDW_REG_SHIFT(SDW_ALH_STRMZCFG_DMAT)); 373 + 374 + conf |= ((pdi->ch_count - 1) << 375 + SDW_REG_SHIFT(SDW_ALH_STRMZCFG_CHN)); 376 + 377 + intel_writel(alh, SDW_ALH_STRMZCFG(pdi->intel_alh_id), conf); 378 + } 379 + 380 + static int intel_config_stream(struct sdw_intel *sdw, 381 + struct snd_pcm_substream *substream, 382 + struct snd_soc_dai *dai, 383 + struct snd_pcm_hw_params *hw_params, int link_id) 384 + { 385 + if (sdw->res->ops && sdw->res->ops->config_stream) 386 + return sdw->res->ops->config_stream(sdw->res->arg, 387 + substream, dai, hw_params, link_id); 388 + 389 + return -EIO; 390 + } 391 + 392 + /* 393 + * DAI routines 394 + */ 395 + 396 + static struct sdw_cdns_port *intel_alloc_port(struct sdw_intel *sdw, 397 + u32 ch, u32 dir, bool pcm) 398 + { 399 + struct sdw_cdns *cdns = &sdw->cdns; 400 + struct sdw_cdns_port *port = NULL; 401 + int i, ret = 0; 402 + 403 + for (i = 0; i < cdns->num_ports; i++) { 404 + if (cdns->ports[i].assigned == true) 405 + continue; 406 + 407 + port = 
&cdns->ports[i]; 408 + port->assigned = true; 409 + port->direction = dir; 410 + port->ch = ch; 411 + break; 412 + } 413 + 414 + if (!port) { 415 + dev_err(cdns->dev, "Unable to find a free port\n"); 416 + return NULL; 417 + } 418 + 419 + if (pcm) { 420 + ret = sdw_cdns_alloc_stream(cdns, &cdns->pcm, port, ch, dir); 421 + if (ret) 422 + goto out; 423 + 424 + intel_pdi_shim_configure(sdw, port->pdi); 425 + sdw_cdns_config_stream(cdns, port, ch, dir, port->pdi); 426 + 427 + intel_pdi_alh_configure(sdw, port->pdi); 428 + 429 + } else { 430 + ret = sdw_cdns_alloc_stream(cdns, &cdns->pdm, port, ch, dir); 431 + } 432 + 433 + out: 434 + if (ret) { 435 + port->assigned = false; 436 + port = NULL; 437 + } 438 + 439 + return port; 440 + } 441 + 442 + static void intel_port_cleanup(struct sdw_cdns_dma_data *dma) 443 + { 444 + int i; 445 + 446 + for (i = 0; i < dma->nr_ports; i++) { 447 + if (dma->port[i]) { 448 + dma->port[i]->pdi->assigned = false; 449 + dma->port[i]->pdi = NULL; 450 + dma->port[i]->assigned = false; 451 + dma->port[i] = NULL; 452 + } 453 + } 454 + } 455 + 456 + static int intel_hw_params(struct snd_pcm_substream *substream, 457 + struct snd_pcm_hw_params *params, 458 + struct snd_soc_dai *dai) 459 + { 460 + struct sdw_cdns *cdns = snd_soc_dai_get_drvdata(dai); 461 + struct sdw_intel *sdw = cdns_to_intel(cdns); 462 + struct sdw_cdns_dma_data *dma; 463 + struct sdw_stream_config sconfig; 464 + struct sdw_port_config *pconfig; 465 + int ret, i, ch, dir; 466 + bool pcm = true; 467 + 468 + dma = snd_soc_dai_get_dma_data(dai, substream); 469 + if (!dma) 470 + return -EIO; 471 + 472 + ch = params_channels(params); 473 + if (substream->stream == SNDRV_PCM_STREAM_CAPTURE) 474 + dir = SDW_DATA_DIR_RX; 475 + else 476 + dir = SDW_DATA_DIR_TX; 477 + 478 + if (dma->stream_type == SDW_STREAM_PDM) { 479 + /* TODO: Check whether PDM decimator is already in use */ 480 + dma->nr_ports = sdw_cdns_get_stream(cdns, &cdns->pdm, ch, dir); 481 + pcm = false; 482 + } else { 483 + 
dma->nr_ports = sdw_cdns_get_stream(cdns, &cdns->pcm, ch, dir); 484 + } 485 + 486 + if (!dma->nr_ports) { 487 + dev_err(dai->dev, "ports/resources not available"); 488 + return -EINVAL; 489 + } 490 + 491 + dma->port = kcalloc(dma->nr_ports, sizeof(*dma->port), GFP_KERNEL); 492 + if (!dma->port) 493 + return -ENOMEM; 494 + 495 + for (i = 0; i < dma->nr_ports; i++) { 496 + dma->port[i] = intel_alloc_port(sdw, ch, dir, pcm); 497 + if (!dma->port[i]) { 498 + ret = -EINVAL; 499 + goto port_error; 500 + } 501 + } 502 + 503 + /* Inform DSP about PDI stream number */ 504 + for (i = 0; i < dma->nr_ports; i++) { 505 + ret = intel_config_stream(sdw, substream, dai, params, 506 + dma->port[i]->pdi->intel_alh_id); 507 + if (ret) 508 + goto port_error; 509 + } 510 + 511 + sconfig.direction = dir; 512 + sconfig.ch_count = ch; 513 + sconfig.frame_rate = params_rate(params); 514 + sconfig.type = dma->stream_type; 515 + 516 + if (dma->stream_type == SDW_STREAM_PDM) { 517 + sconfig.frame_rate *= 50; 518 + sconfig.bps = 1; 519 + } else { 520 + sconfig.bps = snd_pcm_format_width(params_format(params)); 521 + } 522 + 523 + /* Port configuration */ 524 + pconfig = kcalloc(dma->nr_ports, sizeof(*pconfig), GFP_KERNEL); 525 + if (!pconfig) { 526 + ret = -ENOMEM; 527 + goto port_error; 528 + } 529 + 530 + for (i = 0; i < dma->nr_ports; i++) { 531 + pconfig[i].num = dma->port[i]->num; 532 + pconfig[i].ch_mask = (1 << ch) - 1; 533 + } 534 + 535 + ret = sdw_stream_add_master(&cdns->bus, &sconfig, 536 + pconfig, dma->nr_ports, dma->stream); 537 + if (ret) { 538 + dev_err(cdns->dev, "add master to stream failed:%d", ret); 539 + goto stream_error; 540 + } 541 + 542 + kfree(pconfig); 543 + return ret; 544 + 545 + stream_error: 546 + kfree(pconfig); 547 + port_error: 548 + intel_port_cleanup(dma); 549 + kfree(dma->port); 550 + return ret; 551 + } 552 + 553 + static int 554 + intel_hw_free(struct snd_pcm_substream *substream, struct snd_soc_dai *dai) 555 + { 556 + struct sdw_cdns *cdns = 
snd_soc_dai_get_drvdata(dai); 557 + struct sdw_cdns_dma_data *dma; 558 + int ret; 559 + 560 + dma = snd_soc_dai_get_dma_data(dai, substream); 561 + if (!dma) 562 + return -EIO; 563 + 564 + ret = sdw_stream_remove_master(&cdns->bus, dma->stream); 565 + if (ret < 0) 566 + dev_err(dai->dev, "remove master from stream %s failed: %d", 567 + dma->stream->name, ret); 568 + 569 + intel_port_cleanup(dma); 570 + kfree(dma->port); 571 + return ret; 572 + } 573 + 574 + static int intel_pcm_set_sdw_stream(struct snd_soc_dai *dai, 575 + void *stream, int direction) 576 + { 577 + return cdns_set_sdw_stream(dai, stream, true, direction); 578 + } 579 + 580 + static int intel_pdm_set_sdw_stream(struct snd_soc_dai *dai, 581 + void *stream, int direction) 582 + { 583 + return cdns_set_sdw_stream(dai, stream, false, direction); 584 + } 585 + 586 + static struct snd_soc_dai_ops intel_pcm_dai_ops = { 587 + .hw_params = intel_hw_params, 588 + .hw_free = intel_hw_free, 589 + .shutdown = sdw_cdns_shutdown, 590 + .set_sdw_stream = intel_pcm_set_sdw_stream, 591 + }; 592 + 593 + static struct snd_soc_dai_ops intel_pdm_dai_ops = { 594 + .hw_params = intel_hw_params, 595 + .hw_free = intel_hw_free, 596 + .shutdown = sdw_cdns_shutdown, 597 + .set_sdw_stream = intel_pdm_set_sdw_stream, 598 + }; 599 + 600 + static const struct snd_soc_component_driver dai_component = { 601 + .name = "soundwire", 602 + }; 603 + 604 + static int intel_create_dai(struct sdw_cdns *cdns, 605 + struct snd_soc_dai_driver *dais, 606 + enum intel_pdi_type type, 607 + u32 num, u32 off, u32 max_ch, bool pcm) 608 + { 609 + int i; 610 + 611 + if (num == 0) 612 + return 0; 613 + 614 + /* TODO: Read supported rates/formats from hardware */ 615 + for (i = off; i < (off + num); i++) { 616 + dais[i].name = kasprintf(GFP_KERNEL, "SDW%d Pin%d", 617 + cdns->instance, i); 618 + if (!dais[i].name) 619 + return -ENOMEM; 620 + 621 + if (type == INTEL_PDI_BD || type == INTEL_PDI_OUT) { 622 + dais[i].playback.stream_name = 
kasprintf(GFP_KERNEL, 623 + "SDW%d Tx%d", 624 + cdns->instance, i); 625 + if (!dais[i].playback.stream_name) { 626 + kfree(dais[i].name); 627 + return -ENOMEM; 628 + } 629 + 630 + dais[i].playback.channels_min = 1; 631 + dais[i].playback.channels_max = max_ch; 632 + dais[i].playback.rates = SNDRV_PCM_RATE_48000; 633 + dais[i].playback.formats = SNDRV_PCM_FMTBIT_S16_LE; 634 + } 635 + 636 + if (type == INTEL_PDI_BD || type == INTEL_PDI_IN) { 637 + dais[i].capture.stream_name = kasprintf(GFP_KERNEL, 638 + "SDW%d Rx%d", 639 + cdns->instance, i); 640 + if (!dais[i].capture.stream_name) { 641 + kfree(dais[i].name); 642 + kfree(dais[i].playback.stream_name); 643 + return -ENOMEM; 644 + } 645 + 646 + dais[i].capture.channels_min = 1; 647 + dais[i].capture.channels_max = max_ch; 648 + dais[i].capture.rates = SNDRV_PCM_RATE_48000; 649 + dais[i].capture.formats = SNDRV_PCM_FMTBIT_S16_LE; 650 + } 651 + 652 + dais[i].id = SDW_DAI_ID_RANGE_START + i; 653 + 654 + if (pcm) 655 + dais[i].ops = &intel_pcm_dai_ops; 656 + else 657 + dais[i].ops = &intel_pdm_dai_ops; 658 + } 659 + 660 + return 0; 661 + } 662 + 663 + static int intel_register_dai(struct sdw_intel *sdw) 664 + { 665 + struct sdw_cdns *cdns = &sdw->cdns; 666 + struct sdw_cdns_streams *stream; 667 + struct snd_soc_dai_driver *dais; 668 + int num_dai, ret, off = 0; 669 + 670 + /* DAIs are created based on total number of PDIs supported */ 671 + num_dai = cdns->pcm.num_pdi + cdns->pdm.num_pdi; 672 + 673 + dais = devm_kcalloc(cdns->dev, num_dai, sizeof(*dais), GFP_KERNEL); 674 + if (!dais) 675 + return -ENOMEM; 676 + 677 + /* Create PCM DAIs */ 678 + stream = &cdns->pcm; 679 + 680 + ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, 681 + stream->num_in, off, stream->num_ch_in, true); 682 + if (ret) 683 + return ret; 684 + 685 + off += cdns->pcm.num_in; 686 + ret = intel_create_dai(cdns, dais, INTEL_PDI_OUT, 687 + cdns->pcm.num_out, off, stream->num_ch_out, true); 688 + if (ret) 689 + return ret; 690 + 691 + off +=
cdns->pcm.num_out; 692 + ret = intel_create_dai(cdns, dais, INTEL_PDI_BD, 693 + cdns->pcm.num_bd, off, stream->num_ch_bd, true); 694 + if (ret) 695 + return ret; 696 + 697 + /* Create PDM DAIs */ 698 + stream = &cdns->pdm; 699 + off += cdns->pcm.num_bd; 700 + ret = intel_create_dai(cdns, dais, INTEL_PDI_IN, 701 + cdns->pdm.num_in, off, stream->num_ch_in, false); 702 + if (ret) 703 + return ret; 704 + 705 + off += cdns->pdm.num_in; 706 + ret = intel_create_dai(cdns, dais, INTEL_PDI_OUT, 707 + cdns->pdm.num_out, off, stream->num_ch_out, false); 708 + if (ret) 709 + return ret; 710 + 711 + off += cdns->pdm.num_out; 712 + ret = intel_create_dai(cdns, dais, INTEL_PDI_BD, 713 + cdns->pdm.num_bd, off, stream->num_ch_bd, false); 714 + if (ret) 715 + return ret; 716 + 717 + return snd_soc_register_component(cdns->dev, &dai_component, 718 + dais, num_dai); 719 + } 720 + 245 721 static int intel_prop_read(struct sdw_bus *bus) 246 722 { 247 723 /* Initialize with default handler to read all DisCo properties */ ··· 744 252 return 0; 745 253 } 746 254 255 + static struct sdw_master_ops sdw_intel_ops = { 256 + .read_prop = sdw_master_read_prop, 257 + .xfer_msg = cdns_xfer_msg, 258 + .xfer_msg_defer = cdns_xfer_msg_defer, 259 + .reset_page_addr = cdns_reset_page_addr, 260 + .set_bus_conf = cdns_bus_conf, 261 + }; 262 + 747 263 /* 748 264 * probe and init 749 265 */ 750 266 static int intel_probe(struct platform_device *pdev) 751 267 { 268 + struct sdw_cdns_stream_config config; 752 269 struct sdw_intel *sdw; 753 270 int ret; 754 271 ··· 777 276 sdw_cdns_probe(&sdw->cdns); 778 277 779 278 /* Set property read ops */ 780 - sdw_cdns_master_ops.read_prop = intel_prop_read; 781 - sdw->cdns.bus.ops = &sdw_cdns_master_ops; 279 + sdw_intel_ops.read_prop = intel_prop_read; 280 + sdw->cdns.bus.ops = &sdw_intel_ops; 782 281 783 282 platform_set_drvdata(pdev, sdw); 784 283 ··· 800 296 goto
err_init; 801 297 802 298 ret = sdw_cdns_enable_interrupt(&sdw->cdns); 299 + 300 + /* Read the PDI config and initialize cadence PDI */ 301 + intel_pdi_init(sdw, &config); 302 + ret = sdw_cdns_pdi_init(&sdw->cdns, config); 803 303 if (ret) 804 304 goto err_init; 305 + 306 + intel_pdi_ch_update(sdw); 805 307 806 308 /* Acquire IRQ */ 807 309 ret = request_threaded_irq(sdw->res->irq, sdw_cdns_irq, ··· 819 309 goto err_init; 820 310 } 821 311 312 + /* Register DAIs */ 313 + ret = intel_register_dai(sdw); 314 + if (ret) { 315 + dev_err(sdw->cdns.dev, "DAI registration failed: %d", ret); 316 + snd_soc_unregister_component(sdw->cdns.dev); 317 + goto err_dai; 318 + } 319 + 822 320 return 0; 823 321 322 + err_dai: 323 + free_irq(sdw->res->irq, sdw); 824 324 err_init: 825 325 sdw_delete_bus_master(&sdw->cdns.bus); 826 326 err_master_reg: ··· 844 324 sdw = platform_get_drvdata(pdev); 845 325 846 326 free_irq(sdw->res->irq, sdw); 327 + snd_soc_unregister_component(sdw->cdns.dev); 847 328 sdw_delete_bus_master(&sdw->cdns.bus); 848 329 849 330 return 0;
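intel_hw_params() above fills each sdw_port_config.ch_mask with a dense mask covering the lowest ch channels, the same (1 << ch) - 1 pattern the Cadence PDI config uses. The pattern in isolation, as a standalone sketch:

```c
#include <assert.h>
#include <stdint.h>

/* Dense channel mask for the lowest `ch` channels, e.g. 2 channels
 * -> 0b11, 8 channels -> 0xFF. This is the ch_mask computation from
 * intel_hw_params(), lifted out for illustration. */
static uint32_t dense_ch_mask(unsigned int ch)
{
	return (1u << ch) - 1;
}
```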
+4
drivers/soundwire/intel.h
··· 10 10 * @shim: Audio shim pointer 11 11 * @alh: ALH (Audio Link Hub) pointer 12 12 * @irq: Interrupt line 13 + * @ops: Shim callback ops 14 + * @arg: Shim callback ops argument 13 15 * 14 16 * This is set as pdata for each link instance. 15 17 */ ··· 20 18 void __iomem *shim; 21 19 void __iomem *alh; 22 20 int irq; 21 + const struct sdw_intel_ops *ops; 22 + void *arg; 23 23 }; 24 24 25 25 #endif /* __SDW_INTEL_LOCAL_H */
+3
drivers/soundwire/intel_init.c
··· 111 111 link->res.shim = res->mmio_base + SDW_SHIM_BASE; 112 112 link->res.alh = res->mmio_base + SDW_ALH_BASE; 113 113 114 + link->res.ops = res->ops; 115 + link->res.arg = res->arg; 116 + 114 117 memset(&pdevinfo, 0, sizeof(pdevinfo)); 115 118 116 119 pdevinfo.parent = res->parent;
+1479
drivers/soundwire/stream.c
··· 1 + // SPDX-License-Identifier: (GPL-2.0 OR BSD-3-Clause) 2 + // Copyright(c) 2015-18 Intel Corporation. 3 + 4 + /* 5 + * stream.c - SoundWire Bus stream operations. 6 + */ 7 + 8 + #include <linux/delay.h> 9 + #include <linux/device.h> 10 + #include <linux/init.h> 11 + #include <linux/module.h> 12 + #include <linux/mod_devicetable.h> 13 + #include <linux/slab.h> 14 + #include <linux/soundwire/sdw_registers.h> 15 + #include <linux/soundwire/sdw.h> 16 + #include "bus.h" 17 + 18 + /* 19 + * Array of supported rows and columns as per MIPI SoundWire Specification 1.1 20 + * 21 + * The rows are arranged as per the array index value programmed 22 + * in register. The index 15 has dummy value 0 in order to fill hole. 23 + */ 24 + int rows[SDW_FRAME_ROWS] = {48, 50, 60, 64, 75, 80, 125, 147, 25 + 96, 100, 120, 128, 150, 160, 250, 0, 26 + 192, 200, 240, 256, 72, 144, 90, 180}; 27 + 28 + int cols[SDW_FRAME_COLS] = {2, 4, 6, 8, 10, 12, 14, 16}; 29 + 30 + static int sdw_find_col_index(int col) 31 + { 32 + int i; 33 + 34 + for (i = 0; i < SDW_FRAME_COLS; i++) { 35 + if (cols[i] == col) 36 + return i; 37 + } 38 + 39 + pr_warn("Requested column not found, selecting lowest column no: 2\n"); 40 + return 0; 41 + } 42 + 43 + static int sdw_find_row_index(int row) 44 + { 45 + int i; 46 + 47 + for (i = 0; i < SDW_FRAME_ROWS; i++) { 48 + if (rows[i] == row) 49 + return i; 50 + } 51 + 52 + pr_warn("Requested row not found, selecting lowest row no: 48\n"); 53 + return 0; 54 + } 55 + static int _sdw_program_slave_port_params(struct sdw_bus *bus, 56 + struct sdw_slave *slave, 57 + struct sdw_transport_params *t_params, 58 + enum sdw_dpn_type type) 59 + { 60 + u32 addr1, addr2, addr3, addr4; 61 + int ret; 62 + u16 wbuf; 63 + 64 + if (bus->params.next_bank) { 65 + addr1 = SDW_DPN_OFFSETCTRL2_B1(t_params->port_num); 66 + addr2 = SDW_DPN_BLOCKCTRL3_B1(t_params->port_num); 67 + addr3 = SDW_DPN_SAMPLECTRL2_B1(t_params->port_num); 68 + addr4 = SDW_DPN_HCTRL_B1(t_params->port_num); 69 + } else { 
70 + addr1 = SDW_DPN_OFFSETCTRL2_B0(t_params->port_num); 71 + addr2 = SDW_DPN_BLOCKCTRL3_B0(t_params->port_num); 72 + addr3 = SDW_DPN_SAMPLECTRL2_B0(t_params->port_num); 73 + addr4 = SDW_DPN_HCTRL_B0(t_params->port_num); 74 + } 75 + 76 + /* Program DPN_OffsetCtrl2 registers */ 77 + ret = sdw_write(slave, addr1, t_params->offset2); 78 + if (ret < 0) { 79 + dev_err(bus->dev, "DPN_OffsetCtrl2 register write failed"); 80 + return ret; 81 + } 82 + 83 + /* Program DPN_BlockCtrl3 register */ 84 + ret = sdw_write(slave, addr2, t_params->blk_pkg_mode); 85 + if (ret < 0) { 86 + dev_err(bus->dev, "DPN_BlockCtrl3 register write failed"); 87 + return ret; 88 + } 89 + 90 + /* 91 + * Data ports are FULL, SIMPLE and REDUCED. This function handles 92 + * FULL and REDUCED only, and beyond this point only FULL is 93 + * handled, so bail out if we are not a FULL data port type 94 + */ 95 + if (type != SDW_DPN_FULL) 96 + return ret; 97 + 98 + /* Program DPN_SampleCtrl2 register */ 99 + wbuf = (t_params->sample_interval - 1); 100 + wbuf &= SDW_DPN_SAMPLECTRL_HIGH; 101 + wbuf >>= SDW_REG_SHIFT(SDW_DPN_SAMPLECTRL_HIGH); 102 + 103 + ret = sdw_write(slave, addr3, wbuf); 104 + if (ret < 0) { 105 + dev_err(bus->dev, "DPN_SampleCtrl2 register write failed"); 106 + return ret; 107 + } 108 + 109 + /* Program DPN_HCtrl register */ 110 + wbuf = t_params->hstart; 111 + wbuf <<= SDW_REG_SHIFT(SDW_DPN_HCTRL_HSTART); 112 + wbuf |= t_params->hstop; 113 + 114 + ret = sdw_write(slave, addr4, wbuf); 115 + if (ret < 0) 116 + dev_err(bus->dev, "DPN_HCtrl register write failed"); 117 + 118 + return ret; 119 + } 120 + 121 + static int sdw_program_slave_port_params(struct sdw_bus *bus, 122 + struct sdw_slave_runtime *s_rt, 123 + struct sdw_port_runtime *p_rt) 124 + { 125 + struct sdw_transport_params *t_params = &p_rt->transport_params; 126 + struct sdw_port_params *p_params = &p_rt->port_params; 127 + struct sdw_slave_prop *slave_prop = &s_rt->slave->prop; 128 + u32 addr1, addr2, addr3, addr4, addr5, addr6;
129 + struct sdw_dpn_prop *dpn_prop; 130 + int ret; 131 + u8 wbuf; 132 + 133 + dpn_prop = sdw_get_slave_dpn_prop(s_rt->slave, 134 + s_rt->direction, 135 + t_params->port_num); 136 + if (!dpn_prop) 137 + return -EINVAL; 138 + 139 + addr1 = SDW_DPN_PORTCTRL(t_params->port_num); 140 + addr2 = SDW_DPN_BLOCKCTRL1(t_params->port_num); 141 + 142 + if (bus->params.next_bank) { 143 + addr3 = SDW_DPN_SAMPLECTRL1_B1(t_params->port_num); 144 + addr4 = SDW_DPN_OFFSETCTRL1_B1(t_params->port_num); 145 + addr5 = SDW_DPN_BLOCKCTRL2_B1(t_params->port_num); 146 + addr6 = SDW_DPN_LANECTRL_B1(t_params->port_num); 147 + 148 + } else { 149 + addr3 = SDW_DPN_SAMPLECTRL1_B0(t_params->port_num); 150 + addr4 = SDW_DPN_OFFSETCTRL1_B0(t_params->port_num); 151 + addr5 = SDW_DPN_BLOCKCTRL2_B0(t_params->port_num); 152 + addr6 = SDW_DPN_LANECTRL_B0(t_params->port_num); 153 + } 154 + 155 + /* Program DPN_PortCtrl register */ 156 + wbuf = p_params->data_mode << SDW_REG_SHIFT(SDW_DPN_PORTCTRL_DATAMODE); 157 + wbuf |= p_params->flow_mode; 158 + 159 + ret = sdw_update(s_rt->slave, addr1, 0xF, wbuf); 160 + if (ret < 0) { 161 + dev_err(&s_rt->slave->dev, 162 + "DPN_PortCtrl register write failed for port %d", 163 + t_params->port_num); 164 + return ret; 165 + } 166 + 167 + /* Program DPN_BlockCtrl1 register */ 168 + ret = sdw_write(s_rt->slave, addr2, (p_params->bps - 1)); 169 + if (ret < 0) { 170 + dev_err(&s_rt->slave->dev, 171 + "DPN_BlockCtrl1 register write failed for port %d", 172 + t_params->port_num); 173 + return ret; 174 + } 175 + 176 + /* Program DPN_SampleCtrl1 register */ 177 + wbuf = (t_params->sample_interval - 1) & SDW_DPN_SAMPLECTRL_LOW; 178 + ret = sdw_write(s_rt->slave, addr3, wbuf); 179 + if (ret < 0) { 180 + dev_err(&s_rt->slave->dev, 181 + "DPN_SampleCtrl1 register write failed for port %d", 182 + t_params->port_num); 183 + return ret; 184 + } 185 + 186 + /* Program DPN_OffsetCtrl1 registers */ 187 + ret = sdw_write(s_rt->slave, addr4, t_params->offset1); 188 + if (ret < 0) { 189 + 
dev_err(&s_rt->slave->dev, 190 + "DPN_OffsetCtrl1 register write failed for port %d", 191 + t_params->port_num); 192 + return ret; 193 + } 194 + 195 + /* Program DPN_BlockCtrl2 register*/ 196 + if (t_params->blk_grp_ctrl_valid) { 197 + ret = sdw_write(s_rt->slave, addr5, t_params->blk_grp_ctrl); 198 + if (ret < 0) { 199 + dev_err(&s_rt->slave->dev, 200 + "DPN_BlockCtrl2 reg write failed for port %d", 201 + t_params->port_num); 202 + return ret; 203 + } 204 + } 205 + 206 + /* program DPN_LaneCtrl register */ 207 + if (slave_prop->lane_control_support) { 208 + ret = sdw_write(s_rt->slave, addr6, t_params->lane_ctrl); 209 + if (ret < 0) { 210 + dev_err(&s_rt->slave->dev, 211 + "DPN_LaneCtrl register write failed for port %d", 212 + t_params->port_num); 213 + return ret; 214 + } 215 + } 216 + 217 + if (dpn_prop->type != SDW_DPN_SIMPLE) { 218 + ret = _sdw_program_slave_port_params(bus, s_rt->slave, 219 + t_params, dpn_prop->type); 220 + if (ret < 0) 221 + dev_err(&s_rt->slave->dev, 222 + "Transport reg write failed for port: %d", 223 + t_params->port_num); 224 + } 225 + 226 + return ret; 227 + } 228 + 229 + static int sdw_program_master_port_params(struct sdw_bus *bus, 230 + struct sdw_port_runtime *p_rt) 231 + { 232 + int ret; 233 + 234 + /* 235 + * we need to set transport and port parameters for the port. 236 + * Transport parameters refers to the smaple interval, offsets and 237 + * hstart/stop etc of the data. 
Port parameters refer to word 238 + * length, flow mode etc of the port 239 + */ 240 + ret = bus->port_ops->dpn_set_port_transport_params(bus, 241 + &p_rt->transport_params, 242 + bus->params.next_bank); 243 + if (ret < 0) 244 + return ret; 245 + 246 + return bus->port_ops->dpn_set_port_params(bus, 247 + &p_rt->port_params, 248 + bus->params.next_bank); 249 + } 250 + 251 + /** 252 + * sdw_program_port_params() - Programs transport parameters of Master(s) 253 + * and Slave(s) 254 + * 255 + * @m_rt: Master stream runtime 256 + */ 257 + static int sdw_program_port_params(struct sdw_master_runtime *m_rt) 258 + { 259 + struct sdw_slave_runtime *s_rt = NULL; 260 + struct sdw_bus *bus = m_rt->bus; 261 + struct sdw_port_runtime *p_rt; 262 + int ret = 0; 263 + 264 + /* Program transport & port parameters for Slave(s) */ 265 + list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { 266 + list_for_each_entry(p_rt, &s_rt->port_list, port_node) { 267 + ret = sdw_program_slave_port_params(bus, s_rt, p_rt); 268 + if (ret < 0) 269 + return ret; 270 + } 271 + } 272 + 273 + /* Program transport & port parameters for Master(s) */ 274 + list_for_each_entry(p_rt, &m_rt->port_list, port_node) { 275 + ret = sdw_program_master_port_params(bus, p_rt); 276 + if (ret < 0) 277 + return ret; 278 + } 279 + 280 + return 0; 281 + } 282 + 283 + /** 284 + * sdw_enable_disable_slave_ports() - Enable/disable slave data port 285 + * 286 + * @bus: bus instance 287 + * @s_rt: slave runtime 288 + * @p_rt: port runtime 289 + * @en: enable or disable operation 290 + * 291 + * This function only sets the enable/disable bits in the relevant bank, the 292 + * actual enable/disable is done with a bank switch 293 + */ 294 + static int sdw_enable_disable_slave_ports(struct sdw_bus *bus, 295 + struct sdw_slave_runtime *s_rt, 296 + struct sdw_port_runtime *p_rt, bool en) 297 + { 298 + struct sdw_transport_params *t_params = &p_rt->transport_params; 299 + u32 addr; 300 + int ret; 301 + 302 + if
(bus->params.next_bank) 303 + addr = SDW_DPN_CHANNELEN_B1(p_rt->num); 304 + else 305 + addr = SDW_DPN_CHANNELEN_B0(p_rt->num); 306 + 307 + /* 308 + * Since bus doesn't support sharing a port across two streams, 309 + * it is safe to reset this register 310 + */ 311 + if (en) 312 + ret = sdw_update(s_rt->slave, addr, 0xFF, p_rt->ch_mask); 313 + else 314 + ret = sdw_update(s_rt->slave, addr, 0xFF, 0x0); 315 + 316 + if (ret < 0) 317 + dev_err(&s_rt->slave->dev, 318 + "Slave chn_en reg write failed:%d port:%d", 319 + ret, t_params->port_num); 320 + 321 + return ret; 322 + } 323 + 324 + static int sdw_enable_disable_master_ports(struct sdw_master_runtime *m_rt, 325 + struct sdw_port_runtime *p_rt, bool en) 326 + { 327 + struct sdw_transport_params *t_params = &p_rt->transport_params; 328 + struct sdw_bus *bus = m_rt->bus; 329 + struct sdw_enable_ch enable_ch; 330 + int ret = 0; 331 + 332 + enable_ch.port_num = p_rt->num; 333 + enable_ch.ch_mask = p_rt->ch_mask; 334 + enable_ch.enable = en; 335 + 336 + /* Perform Master port channel(s) enable/disable */ 337 + if (bus->port_ops->dpn_port_enable_ch) { 338 + ret = bus->port_ops->dpn_port_enable_ch(bus, 339 + &enable_ch, bus->params.next_bank); 340 + if (ret < 0) { 341 + dev_err(bus->dev, 342 + "Master chn_en write failed:%d port:%d", 343 + ret, t_params->port_num); 344 + return ret; 345 + } 346 + } else { 347 + dev_err(bus->dev, 348 + "dpn_port_enable_ch not supported, %s failed\n", 349 + en ? 
"enable" : "disable"); 350 + return -EINVAL; 351 + } 352 + 353 + return 0; 354 + } 355 + 356 + /** 357 + * sdw_enable_disable_ports() - Enable/disable port(s) for Master and 358 + * Slave(s) 359 + * 360 + * @m_rt: Master stream runtime 361 + * @en: mode (enable/disable) 362 + */ 363 + static int sdw_enable_disable_ports(struct sdw_master_runtime *m_rt, bool en) 364 + { 365 + struct sdw_port_runtime *s_port, *m_port; 366 + struct sdw_slave_runtime *s_rt = NULL; 367 + int ret = 0; 368 + 369 + /* Enable/Disable Slave port(s) */ 370 + list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { 371 + list_for_each_entry(s_port, &s_rt->port_list, port_node) { 372 + ret = sdw_enable_disable_slave_ports(m_rt->bus, s_rt, 373 + s_port, en); 374 + if (ret < 0) 375 + return ret; 376 + } 377 + } 378 + 379 + /* Enable/Disable Master port(s) */ 380 + list_for_each_entry(m_port, &m_rt->port_list, port_node) { 381 + ret = sdw_enable_disable_master_ports(m_rt, m_port, en); 382 + if (ret < 0) 383 + return ret; 384 + } 385 + 386 + return 0; 387 + } 388 + 389 + static int sdw_do_port_prep(struct sdw_slave_runtime *s_rt, 390 + struct sdw_prepare_ch prep_ch, enum sdw_port_prep_ops cmd) 391 + { 392 + const struct sdw_slave_ops *ops = s_rt->slave->ops; 393 + int ret; 394 + 395 + if (ops->port_prep) { 396 + ret = ops->port_prep(s_rt->slave, &prep_ch, cmd); 397 + if (ret < 0) { 398 + dev_err(&s_rt->slave->dev, 399 + "Slave Port Prep cmd %d failed: %d", cmd, ret); 400 + return ret; 401 + } 402 + } 403 + 404 + return 0; 405 + } 406 + 407 + static int sdw_prep_deprep_slave_ports(struct sdw_bus *bus, 408 + struct sdw_slave_runtime *s_rt, 409 + struct sdw_port_runtime *p_rt, bool prep) 410 + { 411 + struct completion *port_ready = NULL; 412 + struct sdw_dpn_prop *dpn_prop; 413 + struct sdw_prepare_ch prep_ch; 414 + unsigned int time_left; 415 + bool intr = false; 416 + int ret = 0, val; 417 + u32 addr; 418 + 419 + prep_ch.num = p_rt->num; 420 + prep_ch.ch_mask = p_rt->ch_mask; 421 + 422 + 
dpn_prop = sdw_get_slave_dpn_prop(s_rt->slave, 423 + s_rt->direction, 424 + prep_ch.num); 425 + if (!dpn_prop) { 426 + dev_err(bus->dev, 427 + "Slave Port:%d properties not found", prep_ch.num); 428 + return -EINVAL; 429 + } 430 + 431 + prep_ch.prepare = prep; 432 + 433 + prep_ch.bank = bus->params.next_bank; 434 + 435 + if (dpn_prop->device_interrupts || !dpn_prop->simple_ch_prep_sm) 436 + intr = true; 437 + 438 + /* 439 + * Enable interrupt before Port prepare. 440 + * For Port de-prepare, it is assumed that port 441 + * was prepared earlier 442 + */ 443 + if (prep && intr) { 444 + ret = sdw_configure_dpn_intr(s_rt->slave, p_rt->num, prep, 445 + dpn_prop->device_interrupts); 446 + if (ret < 0) 447 + return ret; 448 + } 449 + 450 + /* Inform slave about the impending port prepare */ 451 + sdw_do_port_prep(s_rt, prep_ch, SDW_OPS_PORT_PRE_PREP); 452 + 453 + /* Prepare Slave port implementing CP_SM */ 454 + if (!dpn_prop->simple_ch_prep_sm) { 455 + addr = SDW_DPN_PREPARECTRL(p_rt->num); 456 + 457 + if (prep) 458 + ret = sdw_update(s_rt->slave, addr, 459 + 0xFF, p_rt->ch_mask); 460 + else 461 + ret = sdw_update(s_rt->slave, addr, 0xFF, 0x0); 462 + 463 + if (ret < 0) { 464 + dev_err(&s_rt->slave->dev, 465 + "Slave prep_ctrl reg write failed"); 466 + return ret; 467 + } 468 + 469 + /* Wait for completion on port ready */ 470 + port_ready = &s_rt->slave->port_ready[prep_ch.num]; 471 + time_left = wait_for_completion_timeout(port_ready, 472 + msecs_to_jiffies(dpn_prop->ch_prep_timeout)); 473 + 474 + val = sdw_read(s_rt->slave, SDW_DPN_PREPARESTATUS(p_rt->num)); 475 + val &= p_rt->ch_mask; 476 + if (!time_left || val) { 477 + dev_err(&s_rt->slave->dev, 478 + "Chn prep failed for port:%d", prep_ch.num); 479 + return -ETIMEDOUT; 480 + } 481 + } 482 + 483 + /* Inform slaves about ports prepared */ 484 + sdw_do_port_prep(s_rt, prep_ch, SDW_OPS_PORT_POST_PREP); 485 + 486 + /* Disable interrupt after Port de-prepare */ 487 + if (!prep && intr) 488 + ret = 
sdw_configure_dpn_intr(s_rt->slave, p_rt->num, prep, 489 + dpn_prop->device_interrupts); 490 + 491 + return ret; 492 + } 493 + 494 + static int sdw_prep_deprep_master_ports(struct sdw_master_runtime *m_rt, 495 + struct sdw_port_runtime *p_rt, bool prep) 496 + { 497 + struct sdw_transport_params *t_params = &p_rt->transport_params; 498 + struct sdw_bus *bus = m_rt->bus; 499 + const struct sdw_master_port_ops *ops = bus->port_ops; 500 + struct sdw_prepare_ch prep_ch; 501 + int ret = 0; 502 + 503 + prep_ch.num = p_rt->num; 504 + prep_ch.ch_mask = p_rt->ch_mask; 505 + prep_ch.prepare = prep; /* Prepare/De-prepare */ 506 + prep_ch.bank = bus->params.next_bank; 507 + 508 + /* Pre-prepare/Pre-deprepare port(s) */ 509 + if (ops->dpn_port_prep) { 510 + ret = ops->dpn_port_prep(bus, &prep_ch); 511 + if (ret < 0) { 512 + dev_err(bus->dev, "Port prepare failed for port:%d", 513 + t_params->port_num); 514 + return ret; 515 + } 516 + } 517 + 518 + return ret; 519 + } 520 + 521 + /** 522 + * sdw_prep_deprep_ports() - Prepare/De-prepare port(s) for Master(s) and 523 + * Slave(s) 524 + * 525 + * @m_rt: Master runtime handle 526 + * @prep: Prepare or De-prepare 527 + */ 528 + static int sdw_prep_deprep_ports(struct sdw_master_runtime *m_rt, bool prep) 529 + { 530 + struct sdw_slave_runtime *s_rt = NULL; 531 + struct sdw_port_runtime *p_rt; 532 + int ret = 0; 533 + 534 + /* Prepare/De-prepare Slave port(s) */ 535 + list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { 536 + list_for_each_entry(p_rt, &s_rt->port_list, port_node) { 537 + ret = sdw_prep_deprep_slave_ports(m_rt->bus, s_rt, 538 + p_rt, prep); 539 + if (ret < 0) 540 + return ret; 541 + } 542 + } 543 + 544 + /* Prepare/De-prepare Master port(s) */ 545 + list_for_each_entry(p_rt, &m_rt->port_list, port_node) { 546 + ret = sdw_prep_deprep_master_ports(m_rt, p_rt, prep); 547 + if (ret < 0) 548 + return ret; 549 + } 550 + 551 + return ret; 552 + } 553 + 554 + /** 555 + * sdw_notify_config() - Notify bus configuration 556 
+ * 557 + * @m_rt: Master runtime handle 558 + * 559 + * This function notifies the Master(s) and Slave(s) of the 560 + * new bus configuration. 561 + */ 562 + static int sdw_notify_config(struct sdw_master_runtime *m_rt) 563 + { 564 + struct sdw_slave_runtime *s_rt; 565 + struct sdw_bus *bus = m_rt->bus; 566 + struct sdw_slave *slave; 567 + int ret = 0; 568 + 569 + if (bus->ops->set_bus_conf) { 570 + ret = bus->ops->set_bus_conf(bus, &bus->params); 571 + if (ret < 0) 572 + return ret; 573 + } 574 + 575 + list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { 576 + slave = s_rt->slave; 577 + 578 + if (slave->ops->bus_config) { 579 + ret = slave->ops->bus_config(slave, &bus->params); 580 + if (ret < 0) { 581 + dev_err(bus->dev, "Notify Slave: %d failed", 582 + slave->dev_num); 583 + return ret; 584 + } 585 + } 586 + } 587 + 588 + return ret; 589 + } 590 + /** 591 + * sdw_program_params() - Program transport and port parameters for Master(s) 592 + * and Slave(s) 593 + * 594 + * @bus: SDW bus instance 595 + */ 596 + static int sdw_program_params(struct sdw_bus *bus) 597 + { 598 + struct sdw_master_runtime *m_rt = NULL; 599 + int ret = 0; 600 + 601 + list_for_each_entry(m_rt, &bus->m_rt_list, bus_node) { 602 + ret = sdw_program_port_params(m_rt); 603 + if (ret < 0) { 604 + dev_err(bus->dev, 605 + "Program transport params failed: %d", ret); 606 + return ret; 607 + } 608 + 609 + ret = sdw_notify_config(m_rt); 610 + if (ret < 0) { 611 + dev_err(bus->dev, "Notify bus config failed: %d", ret); 612 + return ret; 613 + } 614 + 615 + /* Enable port(s) on alternate bank for all active streams */ 616 + if (m_rt->stream->state != SDW_STREAM_ENABLED) 617 + continue; 618 + 619 + ret = sdw_enable_disable_ports(m_rt, true); 620 + if (ret < 0) { 621 + dev_err(bus->dev, "Enable channel failed: %d", ret); 622 + return ret; 623 + } 624 + } 625 + 626 + return ret; 627 + } 628 + 629 + static int sdw_bank_switch(struct sdw_bus *bus) 630 + { 631 + int col_index, row_index; 632 + struct
sdw_msg *wr_msg; 633 + u8 *wbuf = NULL; 634 + int ret = 0; 635 + u16 addr; 636 + 637 + wr_msg = kzalloc(sizeof(*wr_msg), GFP_KERNEL); 638 + if (!wr_msg) 639 + return -ENOMEM; 640 + 641 + wbuf = kzalloc(sizeof(*wbuf), GFP_KERNEL); 642 + if (!wbuf) { 643 + ret = -ENOMEM; 644 + goto error_1; 645 + } 646 + 647 + /* Get row and column index to program register */ 648 + col_index = sdw_find_col_index(bus->params.col); 649 + row_index = sdw_find_row_index(bus->params.row); 650 + wbuf[0] = col_index | (row_index << 3); 651 + 652 + if (bus->params.next_bank) 653 + addr = SDW_SCP_FRAMECTRL_B1; 654 + else 655 + addr = SDW_SCP_FRAMECTRL_B0; 656 + 657 + sdw_fill_msg(wr_msg, NULL, addr, 1, SDW_BROADCAST_DEV_NUM, 658 + SDW_MSG_FLAG_WRITE, wbuf); 659 + wr_msg->ssp_sync = true; 660 + 661 + ret = sdw_transfer(bus, wr_msg); 662 + if (ret < 0) { 663 + dev_err(bus->dev, "Slave frame_ctrl reg write failed"); 664 + goto error; 665 + } 666 + 667 + kfree(wr_msg); 668 + kfree(wbuf); 669 + bus->defer_msg.msg = NULL; 670 + bus->params.curr_bank = !bus->params.curr_bank; 671 + bus->params.next_bank = !bus->params.next_bank; 672 + 673 + return 0; 674 + 675 + error: 676 + kfree(wbuf); 677 + error_1: 678 + kfree(wr_msg); 679 + return ret; 680 + } 681 + 682 + static int do_bank_switch(struct sdw_stream_runtime *stream) 683 + { 684 + struct sdw_master_runtime *m_rt = stream->m_rt; 685 + const struct sdw_master_ops *ops; 686 + struct sdw_bus *bus = m_rt->bus; 687 + int ret = 0; 688 + 689 + ops = bus->ops; 690 + 691 + /* Pre-bank switch */ 692 + if (ops->pre_bank_switch) { 693 + ret = ops->pre_bank_switch(bus); 694 + if (ret < 0) { 695 + dev_err(bus->dev, "Pre bank switch op failed: %d", ret); 696 + return ret; 697 + } 698 + } 699 + 700 + /* Bank switch */ 701 + ret = sdw_bank_switch(bus); 702 + if (ret < 0) { 703 + dev_err(bus->dev, "Bank switch failed: %d", ret); 704 + return ret; 705 + } 706 + 707 + /* Post-bank switch */ 708 + if (ops->post_bank_switch) { 709 + ret = ops->post_bank_switch(bus); 
710 + if (ret < 0) { 711 + dev_err(bus->dev, 712 + "Post bank switch op failed: %d", ret); 713 + } 714 + } 715 + 716 + return ret; 717 + } 718 + 719 + /** 720 + * sdw_release_stream() - Free the assigned stream runtime 721 + * 722 + * @stream: SoundWire stream runtime 723 + * 724 + * sdw_release_stream should be called only once per stream 725 + */ 726 + void sdw_release_stream(struct sdw_stream_runtime *stream) 727 + { 728 + kfree(stream); 729 + } 730 + EXPORT_SYMBOL(sdw_release_stream); 731 + 732 + /** 733 + * sdw_alloc_stream() - Allocate and return stream runtime 734 + * 735 + * @stream_name: SoundWire stream name 736 + * 737 + * Allocates a SoundWire stream runtime instance. 738 + * sdw_alloc_stream should be called only once per stream. Typically 739 + * invoked from ALSA/ASoC machine/platform driver. 740 + */ 741 + struct sdw_stream_runtime *sdw_alloc_stream(char *stream_name) 742 + { 743 + struct sdw_stream_runtime *stream; 744 + 745 + stream = kzalloc(sizeof(*stream), GFP_KERNEL); 746 + if (!stream) 747 + return NULL; 748 + 749 + stream->name = stream_name; 750 + stream->state = SDW_STREAM_ALLOCATED; 751 + 752 + return stream; 753 + } 754 + EXPORT_SYMBOL(sdw_alloc_stream); 755 + 756 + /** 757 + * sdw_alloc_master_rt() - Allocate and initialize Master runtime handle 758 + * 759 + * @bus: SDW bus instance 760 + * @stream_config: Stream configuration 761 + * @stream: Stream runtime handle. 762 + * 763 + * This function is to be called with bus_lock held.
764 + */ 765 + static struct sdw_master_runtime 766 + *sdw_alloc_master_rt(struct sdw_bus *bus, 767 + struct sdw_stream_config *stream_config, 768 + struct sdw_stream_runtime *stream) 769 + { 770 + struct sdw_master_runtime *m_rt; 771 + 772 + m_rt = stream->m_rt; 773 + 774 + /* 775 + * check if Master is already allocated (as a result of Slave adding 776 + * it first), if so skip allocation and go to configure 777 + */ 778 + if (m_rt) 779 + goto stream_config; 780 + 781 + m_rt = kzalloc(sizeof(*m_rt), GFP_KERNEL); 782 + if (!m_rt) 783 + return NULL; 784 + 785 + /* Initialization of Master runtime handle */ 786 + INIT_LIST_HEAD(&m_rt->port_list); 787 + INIT_LIST_HEAD(&m_rt->slave_rt_list); 788 + stream->m_rt = m_rt; 789 + 790 + list_add_tail(&m_rt->bus_node, &bus->m_rt_list); 791 + 792 + stream_config: 793 + m_rt->ch_count = stream_config->ch_count; 794 + m_rt->bus = bus; 795 + m_rt->stream = stream; 796 + m_rt->direction = stream_config->direction; 797 + 798 + return m_rt; 799 + } 800 + 801 + /** 802 + * sdw_alloc_slave_rt() - Allocate and initialize Slave runtime handle. 803 + * 804 + * @slave: Slave handle 805 + * @stream_config: Stream configuration 806 + * @stream: Stream runtime handle 807 + * 808 + * This function is to be called with bus_lock held. 
809 + */ 810 + static struct sdw_slave_runtime 811 + *sdw_alloc_slave_rt(struct sdw_slave *slave, 812 + struct sdw_stream_config *stream_config, 813 + struct sdw_stream_runtime *stream) 814 + { 815 + struct sdw_slave_runtime *s_rt = NULL; 816 + 817 + s_rt = kzalloc(sizeof(*s_rt), GFP_KERNEL); 818 + if (!s_rt) 819 + return NULL; 820 + 821 + INIT_LIST_HEAD(&s_rt->port_list); 822 + s_rt->ch_count = stream_config->ch_count; 823 + s_rt->direction = stream_config->direction; 824 + s_rt->slave = slave; 825 + 826 + return s_rt; 827 + } 828 + 829 + static void sdw_master_port_release(struct sdw_bus *bus, 830 + struct sdw_master_runtime *m_rt) 831 + { 832 + struct sdw_port_runtime *p_rt, *_p_rt; 833 + 834 + list_for_each_entry_safe(p_rt, _p_rt, 835 + &m_rt->port_list, port_node) { 836 + list_del(&p_rt->port_node); 837 + kfree(p_rt); 838 + } 839 + } 840 + 841 + static void sdw_slave_port_release(struct sdw_bus *bus, 842 + struct sdw_slave *slave, 843 + struct sdw_stream_runtime *stream) 844 + { 845 + struct sdw_port_runtime *p_rt, *_p_rt; 846 + struct sdw_master_runtime *m_rt = stream->m_rt; 847 + struct sdw_slave_runtime *s_rt; 848 + 849 + list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { 850 + if (s_rt->slave != slave) 851 + continue; 852 + 853 + list_for_each_entry_safe(p_rt, _p_rt, 854 + &s_rt->port_list, port_node) { 855 + list_del(&p_rt->port_node); 856 + kfree(p_rt); 857 + } 858 + } 859 + } 860 + 861 + /** 862 + * sdw_release_slave_stream() - Free Slave(s) runtime handle 863 + * 864 + * @slave: Slave handle. 865 + * @stream: Stream runtime handle. 866 + * 867 + * This function is to be called with bus_lock held. 
868 + */ 869 + static void sdw_release_slave_stream(struct sdw_slave *slave, 870 + struct sdw_stream_runtime *stream) 871 + { 872 + struct sdw_slave_runtime *s_rt, *_s_rt; 873 + struct sdw_master_runtime *m_rt = stream->m_rt; 874 + 875 + /* Retrieve Slave runtime handle */ 876 + list_for_each_entry_safe(s_rt, _s_rt, 877 + &m_rt->slave_rt_list, m_rt_node) { 878 + 879 + if (s_rt->slave == slave) { 880 + list_del(&s_rt->m_rt_node); 881 + kfree(s_rt); 882 + return; 883 + } 884 + } 885 + } 886 + 887 + /** 888 + * sdw_release_master_stream() - Free Master runtime handle 889 + * 890 + * @stream: Stream runtime handle. 891 + * 892 + * This function is to be called with bus_lock held 893 + * It frees the Master runtime handle and associated Slave(s) runtime 894 + * handle. If this is called first then sdw_release_slave_stream() will have 895 + * no effect as Slave(s) runtime handle would already be freed up. 896 + */ 897 + static void sdw_release_master_stream(struct sdw_stream_runtime *stream) 898 + { 899 + struct sdw_master_runtime *m_rt = stream->m_rt; 900 + struct sdw_slave_runtime *s_rt, *_s_rt; 901 + 902 + list_for_each_entry_safe(s_rt, _s_rt, 903 + &m_rt->slave_rt_list, m_rt_node) 904 + sdw_stream_remove_slave(s_rt->slave, stream); 905 + 906 + list_del(&m_rt->bus_node); 907 + } 908 + 909 + /** 910 + * sdw_stream_remove_master() - Remove master from sdw_stream 911 + * 912 + * @bus: SDW Bus instance 913 + * @stream: SoundWire stream 914 + * 915 + * This removes and frees port_rt and master_rt from a stream 916 + */ 917 + int sdw_stream_remove_master(struct sdw_bus *bus, 918 + struct sdw_stream_runtime *stream) 919 + { 920 + mutex_lock(&bus->bus_lock); 921 + 922 + sdw_release_master_stream(stream); 923 + sdw_master_port_release(bus, stream->m_rt); 924 + stream->state = SDW_STREAM_RELEASED; 925 + kfree(stream->m_rt); 926 + stream->m_rt = NULL; 927 + 928 + mutex_unlock(&bus->bus_lock); 929 + 930 + return 0; 931 + } 932 + EXPORT_SYMBOL(sdw_stream_remove_master); 933 + 934 
+ /** 935 + * sdw_stream_remove_slave() - Remove slave from sdw_stream 936 + * 937 + * @slave: SDW Slave instance 938 + * @stream: SoundWire stream 939 + * 940 + * This removes and frees port_rt and slave_rt from a stream 941 + */ 942 + int sdw_stream_remove_slave(struct sdw_slave *slave, 943 + struct sdw_stream_runtime *stream) 944 + { 945 + mutex_lock(&slave->bus->bus_lock); 946 + 947 + sdw_slave_port_release(slave->bus, slave, stream); 948 + sdw_release_slave_stream(slave, stream); 949 + 950 + mutex_unlock(&slave->bus->bus_lock); 951 + 952 + return 0; 953 + } 954 + EXPORT_SYMBOL(sdw_stream_remove_slave); 955 + 956 + /** 957 + * sdw_config_stream() - Configure the allocated stream 958 + * 959 + * @dev: SDW device 960 + * @stream: SoundWire stream 961 + * @stream_config: Stream configuration for audio stream 962 + * @is_slave: is API called from Slave or Master 963 + * 964 + * This function is to be called with bus_lock held. 965 + */ 966 + static int sdw_config_stream(struct device *dev, 967 + struct sdw_stream_runtime *stream, 968 + struct sdw_stream_config *stream_config, bool is_slave) 969 + { 970 + /* 971 + * Update the stream rate, channel and bps based on data 972 + * source. For more than one data source (multilink), 973 + * match the rate, bps, stream type and increment number of channels. 
974 + * 975 + * If rate/bps is zero, it means the values are not set, so skip 976 + * comparison and allow the value to be set and stored in stream 977 + */ 978 + if (stream->params.rate && 979 + stream->params.rate != stream_config->frame_rate) { 980 + dev_err(dev, "rate not matching, stream:%s", stream->name); 981 + return -EINVAL; 982 + } 983 + 984 + if (stream->params.bps && 985 + stream->params.bps != stream_config->bps) { 986 + dev_err(dev, "bps not matching, stream:%s", stream->name); 987 + return -EINVAL; 988 + } 989 + 990 + stream->type = stream_config->type; 991 + stream->params.rate = stream_config->frame_rate; 992 + stream->params.bps = stream_config->bps; 993 + 994 + /* TODO: Update this check during Device-device support */ 995 + if (is_slave) 996 + stream->params.ch_count += stream_config->ch_count; 997 + 998 + return 0; 999 + } 1000 + 1001 + static int sdw_is_valid_port_range(struct device *dev, 1002 + struct sdw_port_runtime *p_rt) 1003 + { 1004 + if (!SDW_VALID_PORT_RANGE(p_rt->num)) { 1005 + dev_err(dev, 1006 + "SoundWire: Invalid port number :%d", p_rt->num); 1007 + return -EINVAL; 1008 + } 1009 + 1010 + return 0; 1011 + } 1012 + 1013 + static struct sdw_port_runtime *sdw_port_alloc(struct device *dev, 1014 + struct sdw_port_config *port_config, 1015 + int port_index) 1016 + { 1017 + struct sdw_port_runtime *p_rt; 1018 + 1019 + p_rt = kzalloc(sizeof(*p_rt), GFP_KERNEL); 1020 + if (!p_rt) 1021 + return NULL; 1022 + 1023 + p_rt->ch_mask = port_config[port_index].ch_mask; 1024 + p_rt->num = port_config[port_index].num; 1025 + 1026 + return p_rt; 1027 + } 1028 + 1029 + static int sdw_master_port_config(struct sdw_bus *bus, 1030 + struct sdw_master_runtime *m_rt, 1031 + struct sdw_port_config *port_config, 1032 + unsigned int num_ports) 1033 + { 1034 + struct sdw_port_runtime *p_rt; 1035 + int i; 1036 + 1037 + /* Iterate for number of ports to perform initialization */ 1038 + for (i = 0; i < num_ports; i++) { 1039 + p_rt = sdw_port_alloc(bus->dev, 
port_config, i); 1040 + if (!p_rt) 1041 + return -ENOMEM; 1042 + 1043 + /* 1044 + * TODO: Check port capabilities for requested 1045 + * configuration (audio mode support) 1046 + */ 1047 + 1048 + list_add_tail(&p_rt->port_node, &m_rt->port_list); 1049 + } 1050 + 1051 + return 0; 1052 + } 1053 + 1054 + static int sdw_slave_port_config(struct sdw_slave *slave, 1055 + struct sdw_slave_runtime *s_rt, 1056 + struct sdw_port_config *port_config, 1057 + unsigned int num_config) 1058 + { 1059 + struct sdw_port_runtime *p_rt; 1060 + int i, ret; 1061 + 1062 + /* Iterate for number of ports to perform initialization */ 1063 + for (i = 0; i < num_config; i++) { 1064 + p_rt = sdw_port_alloc(&slave->dev, port_config, i); 1065 + if (!p_rt) 1066 + return -ENOMEM; 1067 + 1068 + /* 1069 + * TODO: Check valid port range as defined by DisCo/ 1070 + * slave 1071 + */ 1072 + ret = sdw_is_valid_port_range(&slave->dev, p_rt); 1073 + if (ret < 0) { 1074 + kfree(p_rt); 1075 + return ret; 1076 + } 1077 + 1078 + /* 1079 + * TODO: Check port capabilities for requested 1080 + * configuration (audio mode support) 1081 + */ 1082 + 1083 + list_add_tail(&p_rt->port_node, &s_rt->port_list); 1084 + } 1085 + 1086 + return 0; 1087 + } 1088 + 1089 + /** 1090 + * sdw_stream_add_master() - Allocate and add master runtime to a stream 1091 + * 1092 + * @bus: SDW Bus instance 1093 + * @stream_config: Stream configuration for audio stream 1094 + * @port_config: Port configuration for audio stream 1095 + * @num_ports: Number of ports 1096 + * @stream: SoundWire stream 1097 + */ 1098 + int sdw_stream_add_master(struct sdw_bus *bus, 1099 + struct sdw_stream_config *stream_config, 1100 + struct sdw_port_config *port_config, 1101 + unsigned int num_ports, 1102 + struct sdw_stream_runtime *stream) 1103 + { 1104 + struct sdw_master_runtime *m_rt = NULL; 1105 + int ret; 1106 + 1107 + mutex_lock(&bus->bus_lock); 1108 + 1109 + m_rt = sdw_alloc_master_rt(bus, stream_config, stream); 1110 + if (!m_rt) { 1111 + 
dev_err(bus->dev, 1112 + "Master runtime config failed for stream:%s", 1113 + stream->name); 1114 + ret = -ENOMEM; 1115 + goto error; 1116 + } 1117 + 1118 + ret = sdw_config_stream(bus->dev, stream, stream_config, false); 1119 + if (ret) 1120 + goto stream_error; 1121 + 1122 + ret = sdw_master_port_config(bus, m_rt, port_config, num_ports); 1123 + if (ret) 1124 + goto stream_error; 1125 + 1126 + stream->state = SDW_STREAM_CONFIGURED; 1127 + goto error; 1128 + 1129 + stream_error: 1130 + sdw_release_master_stream(stream); 1131 + error: 1132 + mutex_unlock(&bus->bus_lock); 1133 + return ret; 1134 + } 1135 + EXPORT_SYMBOL(sdw_stream_add_master); 1136 + 1137 + /** 1138 + * sdw_stream_add_slave() - Allocate and add master/slave runtime to a stream 1139 + * 1140 + * @slave: SDW Slave instance 1141 + * @stream_config: Stream configuration for audio stream 1142 + * @stream: SoundWire stream 1143 + * @port_config: Port configuration for audio stream 1144 + * @num_ports: Number of ports 1145 + */ 1146 + int sdw_stream_add_slave(struct sdw_slave *slave, 1147 + struct sdw_stream_config *stream_config, 1148 + struct sdw_port_config *port_config, 1149 + unsigned int num_ports, 1150 + struct sdw_stream_runtime *stream) 1151 + { 1152 + struct sdw_slave_runtime *s_rt; 1153 + struct sdw_master_runtime *m_rt; 1154 + int ret; 1155 + 1156 + mutex_lock(&slave->bus->bus_lock); 1157 + 1158 + /* 1159 + * If this API is invoked by Slave first then m_rt is not valid. 1160 + * So, allocate m_rt and add Slave to it.
1160 + */ 1161 + m_rt = sdw_alloc_master_rt(slave->bus, stream_config, stream); 1162 + if (!m_rt) { 1163 + dev_err(&slave->dev, 1164 + "alloc master runtime failed for stream:%s", 1165 + stream->name); 1166 + ret = -ENOMEM; 1167 + goto error; 1168 + } 1169 + 1170 + s_rt = sdw_alloc_slave_rt(slave, stream_config, stream); 1171 + if (!s_rt) { 1172 + dev_err(&slave->dev, 1173 + "Slave runtime config failed for stream:%s", 1174 + stream->name); 1175 + ret = -ENOMEM; 1176 + goto stream_error; 1177 + } 1178 + 1179 + ret = sdw_config_stream(&slave->dev, stream, stream_config, true); 1180 + if (ret) 1181 + goto stream_error; 1182 + 1183 + list_add_tail(&s_rt->m_rt_node, &m_rt->slave_rt_list); 1184 + 1185 + ret = sdw_slave_port_config(slave, s_rt, port_config, num_ports); 1186 + if (ret) 1187 + goto stream_error; 1188 + 1189 + stream->state = SDW_STREAM_CONFIGURED; 1190 + goto error; 1191 + 1192 + stream_error: 1193 + /* 1194 + * we hit error so cleanup the stream, release all Slave(s) and 1195 + * Master runtime 1196 + */ 1197 + sdw_release_master_stream(stream); 1198 + error: 1199 + mutex_unlock(&slave->bus->bus_lock); 1200 + return ret; 1201 + } 1202 + EXPORT_SYMBOL(sdw_stream_add_slave); 1203 + 1204 + /** 1205 + * sdw_get_slave_dpn_prop() - Get Slave port capabilities 1206 + * 1207 + * @slave: Slave handle 1208 + * @direction: Data direction. 
1209 + * @port_num: Port number 1210 + */ 1211 + struct sdw_dpn_prop *sdw_get_slave_dpn_prop(struct sdw_slave *slave, 1212 + enum sdw_data_direction direction, 1213 + unsigned int port_num) 1214 + { 1215 + struct sdw_dpn_prop *dpn_prop; 1216 + u8 num_ports; 1217 + int i; 1218 + 1219 + if (direction == SDW_DATA_DIR_TX) { 1220 + num_ports = hweight32(slave->prop.source_ports); 1221 + dpn_prop = slave->prop.src_dpn_prop; 1222 + } else { 1223 + num_ports = hweight32(slave->prop.sink_ports); 1224 + dpn_prop = slave->prop.sink_dpn_prop; 1225 + } 1226 + 1227 + for (i = 0; i < num_ports; i++) { 1228 + if (dpn_prop[i].num == port_num) 1229 + return &dpn_prop[i]; 1230 + } 1231 + 1232 + return NULL; 1233 + } 1234 + 1235 + 1236 + 1237 + static int _sdw_prepare_stream(struct sdw_stream_runtime *stream) 1238 + { 1239 + struct sdw_master_runtime *m_rt = stream->m_rt; 1240 + struct sdw_bus *bus = m_rt->bus; 1241 + struct sdw_master_prop *prop = NULL; 1242 + struct sdw_bus_params params; 1243 + int ret; 1244 + 1245 + prop = &bus->prop; 1246 + memcpy(&params, &bus->params, sizeof(params)); 1247 + 1248 + /* TODO: Support Asynchronous mode */ 1249 + if ((prop->max_freq % stream->params.rate) != 0) { 1250 + dev_err(bus->dev, "Async mode not supported"); 1251 + return -EINVAL; 1252 + } 1253 + 1254 + /* Increment cumulative bus bandwidth */ 1255 + /* TODO: Update this during Device-Device support */ 1256 + bus->params.bandwidth += m_rt->stream->params.rate * 1257 + m_rt->ch_count * m_rt->stream->params.bps; 1258 + 1259 + /* Program params */ 1260 + ret = sdw_program_params(bus); 1261 + if (ret < 0) { 1262 + dev_err(bus->dev, "Program params failed: %d", ret); 1263 + goto restore_params; 1264 + } 1265 + 1266 + ret = do_bank_switch(stream); 1267 + if (ret < 0) { 1268 + dev_err(bus->dev, "Bank switch failed: %d", ret); 1269 + goto restore_params; 1270 + } 1271 + 1272 + /* Prepare port(s) on the new clock configuration */ 1273 + ret = sdw_prep_deprep_ports(m_rt, true);
1274 + if (ret < 0) { 1275 + dev_err(bus->dev, "Prepare port(s) failed ret = %d", 1276 + ret); 1277 + return ret; 1278 + } 1279 + 1280 + stream->state = SDW_STREAM_PREPARED; 1281 + 1282 + return ret; 1283 + 1284 + restore_params: 1285 + memcpy(&bus->params, &params, sizeof(params)); 1286 + return ret; 1287 + } 1288 + 1289 + /** 1290 + * sdw_prepare_stream() - Prepare SoundWire stream 1291 + * 1292 + * @stream: Soundwire stream 1293 + * 1294 + * Documentation/soundwire/stream.txt explains this API in detail 1295 + */ 1296 + int sdw_prepare_stream(struct sdw_stream_runtime *stream) 1297 + { 1298 + int ret = 0; 1299 + 1300 + if (!stream) { 1301 + pr_err("SoundWire: Handle not found for stream"); 1302 + return -EINVAL; 1303 + } 1304 + 1305 + mutex_lock(&stream->m_rt->bus->bus_lock); 1306 + 1307 + ret = _sdw_prepare_stream(stream); 1308 + if (ret < 0) 1309 + pr_err("Prepare for stream:%s failed: %d", stream->name, ret); 1310 + 1311 + mutex_unlock(&stream->m_rt->bus->bus_lock); 1312 + return ret; 1313 + } 1314 + EXPORT_SYMBOL(sdw_prepare_stream); 1315 + 1316 + static int _sdw_enable_stream(struct sdw_stream_runtime *stream) 1317 + { 1318 + struct sdw_master_runtime *m_rt = stream->m_rt; 1319 + struct sdw_bus *bus = m_rt->bus; 1320 + int ret; 1321 + 1322 + /* Program params */ 1323 + ret = sdw_program_params(bus); 1324 + if (ret < 0) { 1325 + dev_err(bus->dev, "Program params failed: %d", ret); 1326 + return ret; 1327 + } 1328 + 1329 + /* Enable port(s) */ 1330 + ret = sdw_enable_disable_ports(m_rt, true); 1331 + if (ret < 0) { 1332 + dev_err(bus->dev, "Enable port(s) failed ret: %d", ret); 1333 + return ret; 1334 + } 1335 + 1336 + ret = do_bank_switch(stream); 1337 + if (ret < 0) { 1338 + dev_err(bus->dev, "Bank switch failed: %d", ret); 1339 + return ret; 1340 + } 1341 + 1342 + stream->state = SDW_STREAM_ENABLED; 1343 + return 0; 1344 + } 1345 + 1346 + /** 1347 + * sdw_enable_stream() - Enable SoundWire stream 1348 + * 1349 + * @stream: Soundwire stream 1350 + * 1351 + 
* Documentation/soundwire/stream.txt explains this API in detail 1352 + */ 1353 + int sdw_enable_stream(struct sdw_stream_runtime *stream) 1354 + { 1355 + int ret = 0; 1356 + 1357 + if (!stream) { 1358 + pr_err("SoundWire: Handle not found for stream"); 1359 + return -EINVAL; 1360 + } 1361 + 1362 + mutex_lock(&stream->m_rt->bus->bus_lock); 1363 + 1364 + ret = _sdw_enable_stream(stream); 1365 + if (ret < 0) 1366 + pr_err("Enable for stream:%s failed: %d", stream->name, ret); 1367 + 1368 + mutex_unlock(&stream->m_rt->bus->bus_lock); 1369 + return ret; 1370 + } 1371 + EXPORT_SYMBOL(sdw_enable_stream); 1372 + 1373 + static int _sdw_disable_stream(struct sdw_stream_runtime *stream) 1374 + { 1375 + struct sdw_master_runtime *m_rt = stream->m_rt; 1376 + struct sdw_bus *bus = m_rt->bus; 1377 + int ret; 1378 + 1379 + /* Disable port(s) */ 1380 + ret = sdw_enable_disable_ports(m_rt, false); 1381 + if (ret < 0) { 1382 + dev_err(bus->dev, "Disable port(s) failed: %d", ret); 1383 + return ret; 1384 + } 1385 + 1386 + stream->state = SDW_STREAM_DISABLED; 1387 + 1388 + /* Program params */ 1389 + ret = sdw_program_params(bus); 1390 + if (ret < 0) { 1391 + dev_err(bus->dev, "Program params failed: %d", ret); 1392 + return ret; 1393 + } 1394 + 1395 + return do_bank_switch(stream); 1396 + } 1397 + 1398 + /** 1399 + * sdw_disable_stream() - Disable SoundWire stream 1400 + * 1401 + * @stream: Soundwire stream 1402 + * 1403 + * Documentation/soundwire/stream.txt explains this API in detail 1404 + */ 1405 + int sdw_disable_stream(struct sdw_stream_runtime *stream) 1406 + { 1407 + int ret = 0; 1408 + 1409 + if (!stream) { 1410 + pr_err("SoundWire: Handle not found for stream"); 1411 + return -EINVAL; 1412 + } 1413 + 1414 + mutex_lock(&stream->m_rt->bus->bus_lock); 1415 + 1416 + ret = _sdw_disable_stream(stream); 1417 + if (ret < 0) 1418 + pr_err("Disable for stream:%s failed: %d", stream->name, ret); 1419 + 1420 + mutex_unlock(&stream->m_rt->bus->bus_lock); 1421 + return ret; 1422 + } 
1423 + EXPORT_SYMBOL(sdw_disable_stream); 1424 + 1425 + static int _sdw_deprepare_stream(struct sdw_stream_runtime *stream) 1426 + { 1427 + struct sdw_master_runtime *m_rt = stream->m_rt; 1428 + struct sdw_bus *bus = m_rt->bus; 1429 + int ret = 0; 1430 + 1431 + /* De-prepare port(s) */ 1432 + ret = sdw_prep_deprep_ports(m_rt, false); 1433 + if (ret < 0) { 1434 + dev_err(bus->dev, "De-prepare port(s) failed: %d", ret); 1435 + return ret; 1436 + } 1437 + 1438 + stream->state = SDW_STREAM_DEPREPARED; 1439 + 1440 + /* TODO: Update this during Device-Device support */ 1441 + bus->params.bandwidth -= m_rt->stream->params.rate * 1442 + m_rt->ch_count * m_rt->stream->params.bps; 1443 + 1444 + /* Program params */ 1445 + ret = sdw_program_params(bus); 1446 + if (ret < 0) { 1447 + dev_err(bus->dev, "Program params failed: %d", ret); 1448 + return ret; 1449 + } 1450 + 1451 + return do_bank_switch(stream); 1452 + } 1453 + 1454 + /** 1455 + * sdw_deprepare_stream() - Deprepare SoundWire stream 1456 + * 1457 + * @stream: Soundwire stream 1458 + * 1459 + * Documentation/soundwire/stream.txt explains this API in detail 1460 + */ 1461 + int sdw_deprepare_stream(struct sdw_stream_runtime *stream) 1462 + { 1463 + int ret = 0; 1464 + 1465 + if (!stream) { 1466 + pr_err("SoundWire: Handle not found for stream"); 1467 + return -EINVAL; 1468 + } 1469 + 1470 + mutex_lock(&stream->m_rt->bus->bus_lock); 1471 + 1472 + ret = _sdw_deprepare_stream(stream); 1473 + if (ret < 0) 1474 + pr_err("De-prepare for stream:%s failed: %d", stream->name, ret); 1475 + 1476 + mutex_unlock(&stream->m_rt->bus->bus_lock); 1477 + return ret; 1478 + } 1479 + EXPORT_SYMBOL(sdw_deprepare_stream);
+89 -34
drivers/uio/uio.c
··· 270 270 if (!map_found) { 271 271 map_found = 1; 272 272 idev->map_dir = kobject_create_and_add("maps", 273 - &idev->dev->kobj); 273 + &idev->dev.kobj); 274 274 if (!idev->map_dir) { 275 275 ret = -ENOMEM; 276 276 goto err_map; ··· 299 299 if (!portio_found) { 300 300 portio_found = 1; 301 301 idev->portio_dir = kobject_create_and_add("portio", 302 - &idev->dev->kobj); 302 + &idev->dev.kobj); 303 303 if (!idev->portio_dir) { 304 304 ret = -ENOMEM; 305 305 goto err_portio; ··· 342 342 kobject_put(&map->kobj); 343 343 } 344 344 kobject_put(idev->map_dir); 345 - dev_err(idev->dev, "error creating sysfs files (%d)\n", ret); 345 + dev_err(&idev->dev, "error creating sysfs files (%d)\n", ret); 346 346 return ret; 347 347 } 348 348 ··· 379 379 idev->minor = retval; 380 380 retval = 0; 381 381 } else if (retval == -ENOSPC) { 382 - dev_err(idev->dev, "too many uio devices\n"); 382 + dev_err(&idev->dev, "too many uio devices\n"); 383 383 retval = -EINVAL; 384 384 } 385 385 mutex_unlock(&minor_lock); ··· 433 433 struct uio_device *idev; 434 434 struct uio_listener *listener; 435 435 int ret = 0; 436 + unsigned long flags; 436 437 437 438 mutex_lock(&minor_lock); 438 439 idev = idr_find(&uio_idr, iminor(inode)); ··· 443 442 goto out; 444 443 } 445 444 445 + get_device(&idev->dev); 446 + 446 447 if (!try_module_get(idev->owner)) { 447 448 ret = -ENODEV; 448 - goto out; 449 + goto err_module_get; 449 450 } 450 451 451 452 listener = kmalloc(sizeof(*listener), GFP_KERNEL); ··· 460 457 listener->event_count = atomic_read(&idev->event); 461 458 filep->private_data = listener; 462 459 463 - if (idev->info->open) { 460 + spin_lock_irqsave(&idev->info_lock, flags); 461 + if (idev->info && idev->info->open) 464 462 ret = idev->info->open(idev->info, inode); 465 - if (ret) 466 - goto err_infoopen; 467 - } 463 + spin_unlock_irqrestore(&idev->info_lock, flags); 464 + if (ret) 465 + goto err_infoopen; 466 + 468 467 return 0; 469 468 470 469 err_infoopen: ··· 474 469 475 470 
err_alloc_listener: 476 471 module_put(idev->owner); 472 + 473 + err_module_get: 474 + put_device(&idev->dev); 477 475 478 476 out: 479 477 return ret; ··· 495 487 int ret = 0; 496 488 struct uio_listener *listener = filep->private_data; 497 489 struct uio_device *idev = listener->dev; 490 + unsigned long flags; 498 491 499 - if (idev->info->release) 492 + spin_lock_irqsave(&idev->info_lock, flags); 493 + if (idev->info && idev->info->release) 500 494 ret = idev->info->release(idev->info, inode); 495 + spin_unlock_irqrestore(&idev->info_lock, flags); 501 496 502 497 module_put(idev->owner); 503 498 kfree(listener); 499 + put_device(&idev->dev); 504 500 return ret; 505 501 } 506 502 ··· 512 500 { 513 501 struct uio_listener *listener = filep->private_data; 514 502 struct uio_device *idev = listener->dev; 503 + __poll_t ret = 0; 504 + unsigned long flags; 515 505 516 - if (!idev->info->irq) 517 - return -EIO; 506 + spin_lock_irqsave(&idev->info_lock, flags); 507 + if (!idev->info || !idev->info->irq) 508 + ret = -EIO; 509 + spin_unlock_irqrestore(&idev->info_lock, flags); 510 + 511 + if (ret) 512 + return ret; 518 513 519 514 poll_wait(filep, &idev->wait, wait); 520 515 if (listener->event_count != atomic_read(&idev->event)) ··· 535 516 struct uio_listener *listener = filep->private_data; 536 517 struct uio_device *idev = listener->dev; 537 518 DECLARE_WAITQUEUE(wait, current); 538 - ssize_t retval; 519 + ssize_t retval = 0; 539 520 s32 event_count; 521 + unsigned long flags; 540 522 541 - if (!idev->info->irq) 542 - return -EIO; 523 + spin_lock_irqsave(&idev->info_lock, flags); 524 + if (!idev->info || !idev->info->irq) 525 + retval = -EIO; 526 + spin_unlock_irqrestore(&idev->info_lock, flags); 527 + 528 + if (retval) 529 + return retval; 543 530 544 531 if (count != sizeof(s32)) 545 532 return -EINVAL; ··· 592 567 struct uio_device *idev = listener->dev; 593 568 ssize_t retval; 594 569 s32 irq_on; 570 + unsigned long flags; 595 571 596 - if (!idev->info->irq) 597 - 
return -EIO; 572 + spin_lock_irqsave(&idev->info_lock, flags); 573 + if (!idev->info || !idev->info->irq) { 574 + retval = -EIO; 575 + goto out; 576 + } 598 577 599 - if (count != sizeof(s32)) 600 - return -EINVAL; 578 + if (count != sizeof(s32)) { 579 + retval = -EINVAL; 580 + goto out; 581 + } 601 582 602 - if (!idev->info->irqcontrol) 603 - return -ENOSYS; 583 + if (!idev->info->irqcontrol) { 584 + retval = -ENOSYS; 585 + goto out; 586 + } 604 587 605 - if (copy_from_user(&irq_on, buf, count)) 606 - return -EFAULT; 588 + if (copy_from_user(&irq_on, buf, count)) { 589 + retval = -EFAULT; 590 + goto out; 591 + } 607 592 608 593 retval = idev->info->irqcontrol(idev->info, irq_on); 609 594 595 + out: 596 + spin_unlock_irqrestore(&idev->info_lock, flags); 610 597 return retval ? retval : sizeof(s32); 611 598 } 612 599 ··· 634 597 return -1; 635 598 } 636 599 637 - static int uio_vma_fault(struct vm_fault *vmf) 600 + static vm_fault_t uio_vma_fault(struct vm_fault *vmf) 638 601 { 639 602 struct uio_device *idev = vmf->vma->vm_private_data; 640 603 struct page *page; ··· 831 794 uio_major_cleanup(); 832 795 } 833 796 797 + static void uio_device_release(struct device *dev) 798 + { 799 + struct uio_device *idev = dev_get_drvdata(dev); 800 + 801 + kfree(idev); 802 + } 803 + 834 804 /** 835 805 * uio_register_device - register a new userspace IO device 836 806 * @owner: module that creates the new device ··· 858 814 859 815 info->uio_dev = NULL; 860 816 861 - idev = devm_kzalloc(parent, sizeof(*idev), GFP_KERNEL); 817 + idev = kzalloc(sizeof(*idev), GFP_KERNEL); 862 818 if (!idev) { 863 819 return -ENOMEM; 864 820 } 865 821 866 822 idev->owner = owner; 867 823 idev->info = info; 824 + spin_lock_init(&idev->info_lock); 868 825 init_waitqueue_head(&idev->wait); 869 826 atomic_set(&idev->event, 0); 870 827 ··· 873 828 if (ret) 874 829 return ret; 875 830 876 - idev->dev = device_create(&uio_class, parent, 877 - MKDEV(uio_major, idev->minor), idev, 878 - "uio%d", 
idev->minor); 879 - if (IS_ERR(idev->dev)) { 880 - printk(KERN_ERR "UIO: device register failed\n"); 881 - ret = PTR_ERR(idev->dev); 831 + idev->dev.devt = MKDEV(uio_major, idev->minor); 832 + idev->dev.class = &uio_class; 833 + idev->dev.parent = parent; 834 + idev->dev.release = uio_device_release; 835 + dev_set_drvdata(&idev->dev, idev); 836 + 837 + ret = dev_set_name(&idev->dev, "uio%d", idev->minor); 838 + if (ret) 882 839 goto err_device_create; 883 - } 840 + 841 + ret = device_register(&idev->dev); 842 + if (ret) 843 + goto err_device_create; 884 844 885 845 ret = uio_dev_add_attributes(idev); 886 846 if (ret) ··· 913 863 err_request_irq: 914 864 uio_dev_del_attributes(idev); 915 865 err_uio_dev_add_attributes: 916 - device_destroy(&uio_class, MKDEV(uio_major, idev->minor)); 866 + device_unregister(&idev->dev); 917 867 err_device_create: 918 868 uio_free_minor(idev); 919 869 return ret; ··· 928 878 void uio_unregister_device(struct uio_info *info) 929 879 { 930 880 struct uio_device *idev; 881 + unsigned long flags; 931 882 932 883 if (!info || !info->uio_dev) 933 884 return; ··· 942 891 if (info->irq && info->irq != UIO_IRQ_CUSTOM) 943 892 free_irq(info->irq, idev); 944 893 945 - device_destroy(&uio_class, MKDEV(uio_major, idev->minor)); 894 + spin_lock_irqsave(&idev->info_lock, flags); 895 + idev->info = NULL; 896 + spin_unlock_irqrestore(&idev->info_lock, flags); 897 + 898 + device_unregister(&idev->dev); 946 899 947 900 return; 948 901 }
-1
drivers/uio/uio_fsl_elbc_gpcm.c
··· 475 475 static struct platform_driver uio_fsl_elbc_gpcm_driver = { 476 476 .driver = { 477 477 .name = "fsl,elbc-gpcm-uio", 478 - .owner = THIS_MODULE, 479 478 .of_match_table = uio_fsl_elbc_gpcm_match, 480 479 }, 481 480 .probe = uio_fsl_elbc_gpcm_probe,
+3 -1
drivers/virt/vboxguest/vboxguest_linux.c
··· 121 121 if (!buf) 122 122 return -ENOMEM; 123 123 124 - if (copy_from_user(buf, (void *)arg, hdr.size_in)) { 124 + *((struct vbg_ioctl_hdr *)buf) = hdr; 125 + if (copy_from_user(buf + sizeof(hdr), (void *)arg + sizeof(hdr), 126 + hdr.size_in - sizeof(hdr))) { 125 127 ret = -EFAULT; 126 128 goto out; 127 129 }
+13 -7
drivers/w1/masters/mxc_w1.c
··· 112 112 if (IS_ERR(mdev->clk)) 113 113 return PTR_ERR(mdev->clk); 114 114 115 + err = clk_prepare_enable(mdev->clk); 116 + if (err) 117 + return err; 118 + 115 119 clkrate = clk_get_rate(mdev->clk); 116 120 if (clkrate < 10000000) 117 121 dev_warn(&pdev->dev, ··· 129 125 130 126 res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 131 127 mdev->regs = devm_ioremap_resource(&pdev->dev, res); 132 - if (IS_ERR(mdev->regs)) 133 - return PTR_ERR(mdev->regs); 134 - 135 - err = clk_prepare_enable(mdev->clk); 136 - if (err) 137 - return err; 128 + if (IS_ERR(mdev->regs)) { 129 + err = PTR_ERR(mdev->regs); 130 + goto out_disable_clk; 131 + } 138 132 139 133 /* Software reset 1-Wire module */ 140 134 writeb(MXC_W1_RESET_RST, mdev->regs + MXC_W1_RESET); ··· 148 146 149 147 err = w1_add_master_device(&mdev->bus_master); 150 148 if (err) 151 - clk_disable_unprepare(mdev->clk); 149 + goto out_disable_clk; 152 150 151 + return 0; 152 + 153 + out_disable_clk: 154 + clk_disable_unprepare(mdev->clk); 153 155 return err; 154 156 } 155 157
+1 -1
drivers/w1/w1.c
··· 751 751 752 752 /* slave modules need to be loaded in a context with unlocked mutex */ 753 753 mutex_unlock(&dev->mutex); 754 - request_module("w1-family-0x%02x", rn->family); 754 + request_module("w1-family-0x%02X", rn->family); 755 755 mutex_lock(&dev->mutex); 756 756 757 757 spin_lock(&w1_flock);
+3 -10
include/linux/coresight.h
··· 1 - /* Copyright (c) 2012, The Linux Foundation. All rights reserved. 2 - * 3 - * This program is free software; you can redistribute it and/or modify 4 - * it under the terms of the GNU General Public License version 2 and 5 - * only version 2 as published by the Free Software Foundation. 6 - * 7 - * This program is distributed in the hope that it will be useful, 8 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 9 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 10 - * GNU General Public License for more details. 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Copyright (c) 2012, The Linux Foundation. All rights reserved. 11 4 */ 12 5 13 6 #ifndef _LINUX_CORESIGHT_H
+1 -12
include/linux/fpga/altera-pr-ip-core.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Driver for Altera Partial Reconfiguration IP Core 3 4 * ··· 6 5 * 7 6 * Based on socfpga-a10.c Copyright (C) 2015-2016 Altera Corporation 8 7 * by Alan Tull <atull@opensource.altera.com> 9 - * 10 - * This program is free software; you can redistribute it and/or modify it 11 - * under the terms and conditions of the GNU General Public License, 12 - * version 2, as published by the Free Software Foundation. 13 - * 14 - * This program is distributed in the hope it will be useful, but WITHOUT 15 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 16 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 17 - * more details. 18 - * 19 - * You should have received a copy of the GNU General Public License along with 20 - * this program. If not, see <http://www.gnu.org/licenses/>. 21 8 */ 22 9 23 10 #ifndef _ALT_PR_IP_CORE_H
+6 -3
include/linux/fpga/fpga-bridge.h
··· 62 62 struct fpga_image_info *info, 63 63 struct list_head *bridge_list); 64 64 65 - int fpga_bridge_register(struct device *dev, const char *name, 66 - const struct fpga_bridge_ops *br_ops, void *priv); 67 - void fpga_bridge_unregister(struct device *dev); 65 + struct fpga_bridge *fpga_bridge_create(struct device *dev, const char *name, 66 + const struct fpga_bridge_ops *br_ops, 67 + void *priv); 68 + void fpga_bridge_free(struct fpga_bridge *br); 69 + int fpga_bridge_register(struct fpga_bridge *br); 70 + void fpga_bridge_unregister(struct fpga_bridge *br); 68 71 69 72 #endif /* _LINUX_FPGA_BRIDGE_H */
+7 -16
include/linux/fpga/fpga-mgr.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * FPGA Framework 3 4 * 4 5 * Copyright (C) 2013-2016 Altera Corporation 5 6 * Copyright (C) 2017 Intel Corporation 6 - * 7 - * This program is free software; you can redistribute it and/or modify it 8 - * under the terms and conditions of the GNU General Public License, 9 - * version 2, as published by the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope it will be useful, but WITHOUT 12 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 13 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 14 - * more details. 15 - * 16 - * You should have received a copy of the GNU General Public License along with 17 - * this program. If not, see <http://www.gnu.org/licenses/>. 18 7 */ 19 8 #ifndef _LINUX_FPGA_MGR_H 20 9 #define _LINUX_FPGA_MGR_H ··· 159 170 160 171 void fpga_mgr_put(struct fpga_manager *mgr); 161 172 162 - int fpga_mgr_register(struct device *dev, const char *name, 163 - const struct fpga_manager_ops *mops, void *priv); 164 - 165 - void fpga_mgr_unregister(struct device *dev); 173 + struct fpga_manager *fpga_mgr_create(struct device *dev, const char *name, 174 + const struct fpga_manager_ops *mops, 175 + void *priv); 176 + void fpga_mgr_free(struct fpga_manager *mgr); 177 + int fpga_mgr_register(struct fpga_manager *mgr); 178 + void fpga_mgr_unregister(struct fpga_manager *mgr); 166 179 167 180 #endif /*_LINUX_FPGA_MGR_H */
+9 -4
include/linux/fpga/fpga-region.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + 1 3 #ifndef _FPGA_REGION_H 2 4 #define _FPGA_REGION_H 3 5 ··· 16 14 * @info: FPGA image info 17 15 * @priv: private data 18 16 * @get_bridges: optional function to get bridges to a list 19 - * @groups: optional attribute groups. 20 17 */ 21 18 struct fpga_region { 22 19 struct device dev; ··· 25 24 struct fpga_image_info *info; 26 25 void *priv; 27 26 int (*get_bridges)(struct fpga_region *region); 28 - const struct attribute_group **groups; 29 27 }; 30 28 31 29 #define to_fpga_region(d) container_of(d, struct fpga_region, dev) ··· 34 34 int (*match)(struct device *, const void *)); 35 35 36 36 int fpga_region_program_fpga(struct fpga_region *region); 37 - int fpga_region_register(struct device *dev, struct fpga_region *region); 38 - int fpga_region_unregister(struct fpga_region *region); 37 + 38 + struct fpga_region 39 + *fpga_region_create(struct device *dev, struct fpga_manager *mgr, 40 + int (*get_bridges)(struct fpga_region *)); 41 + void fpga_region_free(struct fpga_region *region); 42 + int fpga_region_register(struct fpga_region *region); 43 + void fpga_region_unregister(struct fpga_region *region); 39 44 40 45 #endif /* _FPGA_REGION_H */
+24 -2
include/linux/hyperv.h
··· 163 163 * 2 . 4 (Windows 8) 164 164 * 3 . 0 (Windows 8 R2) 165 165 * 4 . 0 (Windows 10) 166 + * 5 . 0 (Newer Windows 10) 166 167 */ 167 168 168 169 #define VERSION_WS2008 ((0 << 16) | (13)) ··· 171 170 #define VERSION_WIN8 ((2 << 16) | (4)) 172 171 #define VERSION_WIN8_1 ((3 << 16) | (0)) 173 172 #define VERSION_WIN10 ((4 << 16) | (0)) 173 + #define VERSION_WIN10_V5 ((5 << 16) | (0)) 174 174 175 175 #define VERSION_INVAL -1 176 176 177 - #define VERSION_CURRENT VERSION_WIN10 177 + #define VERSION_CURRENT VERSION_WIN10_V5 178 178 179 179 /* Make maximum size of pipe payload of 16K */ 180 180 #define MAX_PIPE_DATA_PAYLOAD (sizeof(u8) * 16384) ··· 572 570 struct vmbus_channel_message_header header; 573 571 u32 vmbus_version_requested; 574 572 u32 target_vcpu; /* The VCPU the host should respond to */ 575 - u64 interrupt_page; 573 + union { 574 + u64 interrupt_page; 575 + struct { 576 + u8 msg_sint; 577 + u8 padding1[3]; 578 + u32 padding2; 579 + }; 580 + }; 576 581 u64 monitor_page1; 577 582 u64 monitor_page2; 578 583 } __packed; ··· 594 585 struct vmbus_channel_version_response { 595 586 struct vmbus_channel_message_header header; 596 587 u8 version_supported; 588 + 589 + u8 connection_state; 590 + u16 padding; 591 + 592 + /* 593 + * On new hosts that support VMBus protocol 5.0, we must use 594 + * VMBUS_MESSAGE_CONNECTION_ID_4 for the Initiate Contact Message, 595 + * and for subsequent messages, we must use the Message Connection ID 596 + * field in the host-returned Version Response Message. 597 + * 598 + * On old hosts, we should always use VMBUS_MESSAGE_CONNECTION_ID (1). 599 + */ 600 + u32 msg_conn_id; 597 601 } __packed; 598 602 599 603 enum vmbus_channel_state {
+1 -1
include/linux/nubus.h
··· 163 163 unsigned char *nubus_dirptr(const struct nubus_dirent *nd); 164 164 165 165 /* Declarations relating to driver model objects */ 166 - int nubus_bus_register(void); 166 + int nubus_parent_device_register(void); 167 167 int nubus_device_register(struct nubus_board *board); 168 168 int nubus_driver_register(struct nubus_driver *ndrv); 169 169 void nubus_driver_unregister(struct nubus_driver *ndrv);
+11
include/linux/nvmem-provider.h
··· 77 77 78 78 int devm_nvmem_unregister(struct device *dev, struct nvmem_device *nvmem); 79 79 80 + int nvmem_add_cells(struct nvmem_device *nvmem, 81 + const struct nvmem_cell_info *info, 82 + int ncells); 80 83 #else 81 84 82 85 static inline struct nvmem_device *nvmem_register(const struct nvmem_config *c) ··· 102 99 devm_nvmem_unregister(struct device *dev, struct nvmem_device *nvmem) 103 100 { 104 101 return nvmem_unregister(nvmem); 102 + 103 + } 104 + 105 + static inline int nvmem_add_cells(struct nvmem_device *nvmem, 106 + const struct nvmem_cell_info *info, 107 + int ncells) 108 + { 109 + return -ENOSYS; 105 110 } 106 111 107 112 #endif /* CONFIG_NVMEM */
+331 -1
include/linux/soundwire/sdw.h
··· 23 23 #define SDW_MASTER_DEV_NUM 14 24 24 25 25 #define SDW_NUM_DEV_ID_REGISTERS 6 26 + /* frame shape defines */ 26 27 28 + /* 29 + * Note: The maximum row define in SoundWire spec 1.1 is 23. In order to 30 + * fill hole with 0, one more dummy entry is added 31 + */ 32 + #define SDW_FRAME_ROWS 24 33 + #define SDW_FRAME_COLS 8 34 + #define SDW_FRAME_ROW_COLS (SDW_FRAME_ROWS * SDW_FRAME_COLS) 35 + 36 + #define SDW_FRAME_CTRL_BITS 48 27 37 #define SDW_MAX_DEVICES 11 38 + 39 + #define SDW_VALID_PORT_RANGE(n) (n <= 14 && n >= 1) 40 + 41 + #define SDW_DAI_ID_RANGE_START 100 42 + #define SDW_DAI_ID_RANGE_END 200 28 43 29 44 /** 30 45 * enum sdw_slave_status - Slave status ··· 74 59 SDW_CMD_FAIL = 2, 75 60 SDW_CMD_TIMEOUT = 3, 76 61 SDW_CMD_FAIL_OTHER = 4, 62 + }; 63 + 64 + /** 65 + * enum sdw_stream_type: data stream type 66 + * 67 + * @SDW_STREAM_PCM: PCM data stream 68 + * @SDW_STREAM_PDM: PDM data stream 69 + * 70 + * spec doesn't define this, but is used in implementation 71 + */ 72 + enum sdw_stream_type { 73 + SDW_STREAM_PCM = 0, 74 + SDW_STREAM_PDM = 1, 75 + }; 76 + 77 + /** 78 + * enum sdw_data_direction: Data direction 79 + * 80 + * @SDW_DATA_DIR_RX: Data into Port 81 + * @SDW_DATA_DIR_TX: Data out of Port 82 + */ 83 + enum sdw_data_direction { 84 + SDW_DATA_DIR_RX = 0, 85 + SDW_DATA_DIR_TX = 1, 77 86 }; 78 87 79 88 /* ··· 380 341 }; 381 342 382 343 /** 383 - * struct sdw_slave_ops - Slave driver callback ops 344 + * sdw_reg_bank - SoundWire register banks 345 + * @SDW_BANK0: Soundwire register bank 0 346 + * @SDW_BANK1: Soundwire register bank 1 347 + */ 348 + enum sdw_reg_bank { 349 + SDW_BANK0, 350 + SDW_BANK1, 351 + }; 352 + 353 + /** 354 + * struct sdw_bus_conf: Bus configuration 355 + * 356 + * @clk_freq: Clock frequency, in Hz 357 + * @num_rows: Number of rows in frame 358 + * @num_cols: Number of columns in frame 359 + * @bank: Next register bank 360 + */ 361 + struct sdw_bus_conf { 362 + unsigned int clk_freq; 363 + unsigned int num_rows; 364 + 
unsigned int num_cols; 365 + unsigned int bank; 366 + }; 367 + 368 + /** 369 + * struct sdw_prepare_ch: Prepare/De-prepare Data Port channel 370 + * 371 + * @num: Port number 372 + * @ch_mask: Active channel mask 373 + * @prepare: Prepare (true) /de-prepare (false) channel 374 + * @bank: Register bank, which bank Slave/Master driver should program for 375 + * implementation defined registers. This is always updated to next_bank 376 + * value read from bus params. 377 + * 378 + */ 379 + struct sdw_prepare_ch { 380 + unsigned int num; 381 + unsigned int ch_mask; 382 + bool prepare; 383 + unsigned int bank; 384 + }; 385 + 386 + /** 387 + * enum sdw_port_prep_ops: Prepare operations for Data Port 388 + * 389 + * @SDW_OPS_PORT_PRE_PREP: Pre prepare operation for the Port 390 + * @SDW_OPS_PORT_PREP: Prepare operation for the Port 391 + * @SDW_OPS_PORT_POST_PREP: Post prepare operation for the Port 392 + */ 393 + enum sdw_port_prep_ops { 394 + SDW_OPS_PORT_PRE_PREP = 0, 395 + SDW_OPS_PORT_PREP = 1, 396 + SDW_OPS_PORT_POST_PREP = 2, 397 + }; 398 + 399 + /** 400 + * struct sdw_bus_params: Structure holding bus configuration 401 + * 402 + * @curr_bank: Current bank in use (BANK0/BANK1) 403 + * @next_bank: Next bank to use (BANK0/BANK1). 
next_bank will always be 404 + * set to !curr_bank 405 + * @max_dr_freq: Maximum double rate clock frequency supported, in Hz 406 + * @curr_dr_freq: Current double rate clock frequency, in Hz 407 + * @bandwidth: Current bandwidth 408 + * @col: Active columns 409 + * @row: Active rows 410 + */ 411 + struct sdw_bus_params { 412 + enum sdw_reg_bank curr_bank; 413 + enum sdw_reg_bank next_bank; 414 + unsigned int max_dr_freq; 415 + unsigned int curr_dr_freq; 416 + unsigned int bandwidth; 417 + unsigned int col; 418 + unsigned int row; 419 + }; 420 + 421 + /** 422 + * struct sdw_slave_ops: Slave driver callback ops 423 + * 384 424 * @read_prop: Read Slave properties 385 425 * @interrupt_callback: Device interrupt notification (invoked in thread 386 426 * context) 387 427 * @update_status: Update Slave status 428 + * @bus_config: Update the bus config for Slave 429 + * @port_prep: Prepare the port with parameters 388 430 */ 389 431 struct sdw_slave_ops { 390 432 int (*read_prop)(struct sdw_slave *sdw); ··· 473 353 struct sdw_slave_intr_status *status); 474 354 int (*update_status)(struct sdw_slave *slave, 475 355 enum sdw_slave_status status); 356 + int (*bus_config)(struct sdw_slave *slave, 357 + struct sdw_bus_params *params); 358 + int (*port_prep)(struct sdw_slave *slave, 359 + struct sdw_prepare_ch *prepare_ch, 360 + enum sdw_port_prep_ops pre_ops); 476 361 }; 477 362 478 363 /** ··· 531 406 * SDW master structures and APIs 532 407 */ 533 408 409 + /** 410 + * struct sdw_port_params: Data Port parameters 411 + * 412 + * @num: Port number 413 + * @bps: Word length of the Port 414 + * @flow_mode: Port Data flow mode 415 + * @data_mode: Test modes or normal mode 416 + * 417 + * This is used to program the Data Port based on Data Port stream 418 + * parameters. 
419 + */ 420 + struct sdw_port_params { 421 + unsigned int num; 422 + unsigned int bps; 423 + unsigned int flow_mode; 424 + unsigned int data_mode; 425 + }; 426 + 427 + /** 428 + * struct sdw_transport_params: Data Port Transport Parameters 429 + * 430 + * @blk_grp_ctrl_valid: Port implements block group control 431 + * @num: Port number 432 + * @blk_grp_ctrl: Block group control value 433 + * @sample_interval: Sample interval 434 + * @offset1: Blockoffset of the payload data 435 + * @offset2: Blockoffset of the payload data 436 + * @hstart: Horizontal start of the payload data 437 + * @hstop: Horizontal stop of the payload data 438 + * @blk_pkg_mode: Block per channel or block per port 439 + * @lane_ctrl: Data lane Port uses for Data transfer. Currently only single 440 + * data lane is supported in bus 441 + * 442 + * This is used to program the Data Port based on Data Port transport 443 + * parameters. All these parameters are banked and can be modified 444 + * during a bank switch without any artifacts in audio stream. 445 + */ 446 + struct sdw_transport_params { 447 + bool blk_grp_ctrl_valid; 448 + unsigned int port_num; 449 + unsigned int blk_grp_ctrl; 450 + unsigned int sample_interval; 451 + unsigned int offset1; 452 + unsigned int offset2; 453 + unsigned int hstart; 454 + unsigned int hstop; 455 + unsigned int blk_pkg_mode; 456 + unsigned int lane_ctrl; 457 + }; 458 + 459 + /** 460 + * struct sdw_enable_ch: Enable/disable Data Port channel 461 + * 462 + * @num: Port number 463 + * @ch_mask: Active channel mask 464 + * @enable: Enable (true) /disable (false) channel 465 + */ 466 + struct sdw_enable_ch { 467 + unsigned int port_num; 468 + unsigned int ch_mask; 469 + bool enable; 470 + }; 471 + 472 + /** 473 + * struct sdw_master_port_ops: Callback functions from bus to Master 474 + * driver to set Master Data ports. 475 + * 476 + * @dpn_set_port_params: Set the Port parameters for the Master Port. 
477 + * Mandatory callback 478 + * @dpn_set_port_transport_params: Set transport parameters for the Master 479 + * Port. Mandatory callback 480 + * @dpn_port_prep: Port prepare operations for the Master Data Port. 481 + * @dpn_port_enable_ch: Enable the channels of Master Port. 482 + */ 483 + struct sdw_master_port_ops { 484 + int (*dpn_set_port_params)(struct sdw_bus *bus, 485 + struct sdw_port_params *port_params, 486 + unsigned int bank); 487 + int (*dpn_set_port_transport_params)(struct sdw_bus *bus, 488 + struct sdw_transport_params *transport_params, 489 + enum sdw_reg_bank bank); 490 + int (*dpn_port_prep)(struct sdw_bus *bus, 491 + struct sdw_prepare_ch *prepare_ch); 492 + int (*dpn_port_enable_ch)(struct sdw_bus *bus, 493 + struct sdw_enable_ch *enable_ch, unsigned int bank); 494 + }; 495 + 534 496 struct sdw_msg; 535 497 536 498 /** ··· 638 426 * @xfer_msg: Transfer message callback 639 427 * @xfer_msg_defer: Defer version of transfer message callback 640 428 * @reset_page_addr: Reset the SCP page address registers 429 + * @set_bus_conf: Set the bus configuration 430 + * @pre_bank_switch: Callback for pre bank switch 431 + * @post_bank_switch: Callback for post bank switch 641 432 */ 642 433 struct sdw_master_ops { 643 434 int (*read_prop)(struct sdw_bus *bus); ··· 652 437 struct sdw_defer *defer); 653 438 enum sdw_command_response (*reset_page_addr) 654 439 (struct sdw_bus *bus, unsigned int dev_num); 440 + int (*set_bus_conf)(struct sdw_bus *bus, 441 + struct sdw_bus_params *params); 442 + int (*pre_bank_switch)(struct sdw_bus *bus); 443 + int (*post_bank_switch)(struct sdw_bus *bus); 444 + 655 445 }; 656 446 657 447 /** ··· 669 449 * @bus_lock: bus lock 670 450 * @msg_lock: message lock 671 451 * @ops: Master callback ops 452 + * @port_ops: Master port callback ops 453 + * @params: Current bus parameters 672 454 * @prop: Master properties 455 + * @m_rt_list: List of Master instance of all stream(s) running on Bus. 
This 456 + * is used to compute and program bus bandwidth, clock, frame shape, 457 + * transport and port parameters 673 458 * @defer_msg: Defer message 674 459 * @clk_stop_timeout: Clock stop timeout computed 460 + * @bank_switch_timeout: Bank switch timeout computed 675 461 */ 676 462 struct sdw_bus { 677 463 struct device *dev; ··· 687 461 struct mutex bus_lock; 688 462 struct mutex msg_lock; 689 463 const struct sdw_master_ops *ops; 464 + const struct sdw_master_port_ops *port_ops; 465 + struct sdw_bus_params params; 690 466 struct sdw_master_prop prop; 467 + struct list_head m_rt_list; 691 468 struct sdw_defer defer_msg; 692 469 unsigned int clk_stop_timeout; 470 + u32 bank_switch_timeout; 693 471 }; 694 472 695 473 int sdw_add_bus_master(struct sdw_bus *bus); 696 474 void sdw_delete_bus_master(struct sdw_bus *bus); 475 + 476 + /** 477 + * sdw_port_config: Master or Slave Port configuration 478 + * 479 + * @num: Port number 480 + * @ch_mask: channels mask for port 481 + */ 482 + struct sdw_port_config { 483 + unsigned int num; 484 + unsigned int ch_mask; 485 + }; 486 + 487 + /** 488 + * sdw_stream_config: Master or Slave stream configuration 489 + * 490 + * @frame_rate: Audio frame rate of the stream, in Hz 491 + * @ch_count: Channel count of the stream 492 + * @bps: Number of bits per audio sample 493 + * @direction: Data direction 494 + * @type: Stream type PCM or PDM 495 + */ 496 + struct sdw_stream_config { 497 + unsigned int frame_rate; 498 + unsigned int ch_count; 499 + unsigned int bps; 500 + enum sdw_data_direction direction; 501 + enum sdw_stream_type type; 502 + }; 503 + 504 + /** 505 + * sdw_stream_state: Stream states 506 + * 507 + * @SDW_STREAM_ALLOCATED: New stream allocated. 
508 + * @SDW_STREAM_CONFIGURED: Stream configured 509 + * @SDW_STREAM_PREPARED: Stream prepared 510 + * @SDW_STREAM_ENABLED: Stream enabled 511 + * @SDW_STREAM_DISABLED: Stream disabled 512 + * @SDW_STREAM_DEPREPARED: Stream de-prepared 513 + * @SDW_STREAM_RELEASED: Stream released 514 + */ 515 + enum sdw_stream_state { 516 + SDW_STREAM_ALLOCATED = 0, 517 + SDW_STREAM_CONFIGURED = 1, 518 + SDW_STREAM_PREPARED = 2, 519 + SDW_STREAM_ENABLED = 3, 520 + SDW_STREAM_DISABLED = 4, 521 + SDW_STREAM_DEPREPARED = 5, 522 + SDW_STREAM_RELEASED = 6, 523 + }; 524 + 525 + /** 526 + * sdw_stream_params: Stream parameters 527 + * 528 + * @rate: Sampling frequency, in Hz 529 + * @ch_count: Number of channels 530 + * @bps: bits per channel sample 531 + */ 532 + struct sdw_stream_params { 533 + unsigned int rate; 534 + unsigned int ch_count; 535 + unsigned int bps; 536 + }; 537 + 538 + /** 539 + * sdw_stream_runtime: Runtime stream parameters 540 + * 541 + * @name: SoundWire stream name 542 + * @params: Stream parameters 543 + * @state: Current state of the stream 544 + * @type: Stream type PCM or PDM 545 + * @m_rt: Master runtime 546 + */ 547 + struct sdw_stream_runtime { 548 + char *name; 549 + struct sdw_stream_params params; 550 + enum sdw_stream_state state; 551 + enum sdw_stream_type type; 552 + struct sdw_master_runtime *m_rt; 553 + }; 554 + 555 + struct sdw_stream_runtime *sdw_alloc_stream(char *stream_name); 556 + void sdw_release_stream(struct sdw_stream_runtime *stream); 557 + int sdw_stream_add_master(struct sdw_bus *bus, 558 + struct sdw_stream_config *stream_config, 559 + struct sdw_port_config *port_config, 560 + unsigned int num_ports, 561 + struct sdw_stream_runtime *stream); 562 + int sdw_stream_add_slave(struct sdw_slave *slave, 563 + struct sdw_stream_config *stream_config, 564 + struct sdw_port_config *port_config, 565 + unsigned int num_ports, 566 + struct sdw_stream_runtime *stream); 567 + int sdw_stream_remove_master(struct sdw_bus *bus, 568 + struct 
sdw_stream_runtime *stream); 569 + int sdw_stream_remove_slave(struct sdw_slave *slave, 570 + struct sdw_stream_runtime *stream); 571 + int sdw_prepare_stream(struct sdw_stream_runtime *stream); 572 + int sdw_enable_stream(struct sdw_stream_runtime *stream); 573 + int sdw_disable_stream(struct sdw_stream_runtime *stream); 574 + int sdw_deprepare_stream(struct sdw_stream_runtime *stream); 697 575 698 576 /* messaging and data APIs */ 699 577
+14
include/linux/soundwire/sdw_intel.h
··· 5 5 #define __SDW_INTEL_H
6 6
7 7 /**
8 + * struct sdw_intel_ops: Intel audio driver callback ops
9 + *
10 + * @config_stream: configure the stream with the hw_params
11 + */
12 + struct sdw_intel_ops {
13 + 	int (*config_stream)(void *arg, void *substream,
14 + 			void *dai, void *hw_params, int stream_num);
15 + };
16 +
17 + /**
8 18 * struct sdw_intel_res - Soundwire Intel resource structure
9 19 * @mmio_base: mmio base of SoundWire registers
10 20 * @irq: interrupt number
11 21 * @handle: ACPI parent handle
12 22 * @parent: parent device
23 + * @ops: callback ops
24 + * @arg: callback arg
13 25 */
14 26 struct sdw_intel_res {
15 27 	void __iomem *mmio_base;
16 28 	int irq;
17 29 	acpi_handle handle;
18 30 	struct device *parent;
31 + 	const struct sdw_intel_ops *ops;
32 + 	void *arg;
19 33 };
20 34
21 35 void *sdw_intel_init(acpi_handle *parent_handle, struct sdw_intel_res *res);
+3 -1
include/linux/uio_driver.h
··· 14 14 #ifndef _UIO_DRIVER_H_
15 15 #define _UIO_DRIVER_H_
16 16
17 + #include <linux/device.h>
17 18 #include <linux/fs.h>
18 19 #include <linux/interrupt.h>
19 20
··· 69 68
70 69 struct uio_device {
71 70 	struct module *owner;
72 - 	struct device *dev;
71 + 	struct device dev;
73 72 	int minor;
74 73 	atomic_t event;
75 74 	struct fasync_struct *async_queue;
76 75 	wait_queue_head_t wait;
77 76 	struct uio_info *info;
77 + 	spinlock_t info_lock;
78 78 	struct kobject *map_dir;
79 79 	struct kobject *portio_dir;
80 80 };
+23
include/sound/soc-dai.h
··· 170 170 		unsigned int rx_num, unsigned int *rx_slot);
171 171 	int (*set_tristate)(struct snd_soc_dai *dai, int tristate);
172 172
173 + 	int (*set_sdw_stream)(struct snd_soc_dai *dai,
174 + 			void *stream, int direction);
173 175 	/*
174 176 	 * DAI digital mute - optional.
175 177 	 * Called by soc-core to minimise any pops.
··· 358 356 static inline void *snd_soc_dai_get_drvdata(struct snd_soc_dai *dai)
359 357 {
360 358 	return dev_get_drvdata(dai->dev);
359 + }
360 +
361 + /**
362 + * snd_soc_dai_set_sdw_stream() - Configures a DAI for SDW stream operation
363 + * @dai: DAI
364 + * @stream: STREAM
365 + * @direction: Stream direction(Playback/Capture)
366 + * SoundWire subsystem doesn't have a notion of direction and we reuse
367 + * the ASoC stream direction to configure sink/source ports.
368 + * Playback maps to source ports and Capture for sink ports.
369 + *
370 + * This should be invoked with NULL to clear the stream set previously.
371 + * Returns 0 on success, a negative error code otherwise.
372 + */
373 + static inline int snd_soc_dai_set_sdw_stream(struct snd_soc_dai *dai,
374 + 				void *stream, int direction)
375 + {
376 + 	if (dai->driver->ops->set_sdw_stream)
377 + 		return dai->driver->ops->set_sdw_stream(dai, stream, direction);
378 + 	else
379 + 		return -ENOTSUPP;
361 380 }
362 381
363 382 #endif
+3
scripts/mod/devicetable-offsets.c
··· 139 139 	DEVID(hv_vmbus_device_id);
140 140 	DEVID_FIELD(hv_vmbus_device_id, guid);
141 141
142 + 	DEVID(rpmsg_device_id);
143 + 	DEVID_FIELD(rpmsg_device_id, name);
144 +
142 145 	DEVID(i2c_device_id);
143 146 	DEVID_FIELD(i2c_device_id, name);
144 147
+11
scripts/mod/file2alias.c
··· 944 944 }
945 945 ADD_TO_DEVTABLE("vmbus", hv_vmbus_device_id, do_vmbus_entry);
946 946
947 + /* Looks like: rpmsg:S */
948 + static int do_rpmsg_entry(const char *filename, void *symval,
949 + 			  char *alias)
950 + {
951 + 	DEF_FIELD_ADDR(symval, rpmsg_device_id, name);
952 + 	sprintf(alias, RPMSG_DEVICE_MODALIAS_FMT, *name);
953 +
954 + 	return 1;
955 + }
956 + ADD_TO_DEVTABLE("rpmsg", rpmsg_device_id, do_rpmsg_entry);
957 +
947 958 /* Looks like: i2c:S */
948 959 static int do_i2c_entry(const char *filename, void *symval,
949 960 			char *alias)
+38 -43
scripts/ver_linux
··· 13 13 system("uname -a")
14 14 printf("\n")
15 15
16 - printversion("GNU C", version("gcc -dumpversion 2>&1"))
17 - printversion("GNU Make", version("make --version 2>&1"))
18 - printversion("Binutils", version("ld -v 2>&1"))
19 - printversion("Util-linux", version("mount --version 2>&1"))
20 - printversion("Mount", version("mount --version 2>&1"))
21 - printversion("Module-init-tools", version("depmod -V 2>&1"))
22 - printversion("E2fsprogs", version("tune2fs 2>&1"))
23 - printversion("Jfsutils", version("fsck.jfs -V 2>&1"))
24 - printversion("Reiserfsprogs", version("reiserfsck -V 2>&1"))
25 - printversion("Reiser4fsprogs", version("fsck.reiser4 -V 2>&1"))
26 - printversion("Xfsprogs", version("xfs_db -V 2>&1"))
27 - printversion("Pcmciautils", version("pccardctl -V 2>&1"))
28 - printversion("Pcmcia-cs", version("cardmgr -V 2>&1"))
29 - printversion("Quota-tools", version("quota -V 2>&1"))
30 - printversion("PPP", version("pppd --version 2>&1"))
31 - printversion("Isdn4k-utils", version("isdnctrl 2>&1"))
32 - printversion("Nfs-utils", version("showmount --version 2>&1"))
16 + printversion("GNU C", version("gcc -dumpversion"))
17 + printversion("GNU Make", version("make --version"))
18 + printversion("Binutils", version("ld -v"))
19 + printversion("Util-linux", version("mount --version"))
20 + printversion("Mount", version("mount --version"))
21 + printversion("Module-init-tools", version("depmod -V"))
22 + printversion("E2fsprogs", version("tune2fs"))
23 + printversion("Jfsutils", version("fsck.jfs -V"))
24 + printversion("Reiserfsprogs", version("reiserfsck -V"))
25 + printversion("Reiser4fsprogs", version("fsck.reiser4 -V"))
26 + printversion("Xfsprogs", version("xfs_db -V"))
27 + printversion("Pcmciautils", version("pccardctl -V"))
28 + printversion("Pcmcia-cs", version("cardmgr -V"))
29 + printversion("Quota-tools", version("quota -V"))
30 + printversion("PPP", version("pppd --version"))
31 + printversion("Isdn4k-utils", version("isdnctrl"))
32 + printversion("Nfs-utils", version("showmount --version"))
33 33
34 - if (system("test -r /proc/self/maps") == 0) {
35 - 	while (getline <"/proc/self/maps" > 0) {
36 - 		n = split($0, procmaps, "/")
37 - 		if (/libc.*so$/ && match(procmaps[n], /[0-9]+([.]?[0-9]+)+/)) {
38 - 			ver = substr(procmaps[n], RSTART, RLENGTH)
39 - 			printversion("Linux C Library", ver)
40 - 			break
41 - 		}
34 + while (getline <"/proc/self/maps" > 0) {
35 + 	n = split($0, procmaps, "/")
36 + 	if (/libc.*so$/ && match(procmaps[n], /[0-9]+([.]?[0-9]+)+/)) {
37 + 		ver = substr(procmaps[n], RSTART, RLENGTH)
38 + 		printversion("Linux C Library", ver)
39 + 		break
40 + 	}
42 - 	}
43 41 }
44 42
45 - printversion("Dynamic linker (ldd)", version("ldd --version 2>&1"))
43 + printversion("Dynamic linker (ldd)", version("ldd --version"))
46 44
47 45 while ("ldconfig -p 2>/dev/null" | getline > 0) {
48 46 	if (/(libg|stdc)[+]+\.so/) {
···
48 50 		break
49 51 	}
50 52 }
51 - if (system("test -r " libcpp) == 0)
52 - 	printversion("Linux C++ Library", version("readlink " libcpp))
53 + printversion("Linux C++ Library", version("readlink " libcpp))
54 + printversion("Procps", version("ps --version"))
55 + printversion("Net-tools", version("ifconfig --version"))
56 + printversion("Kbd", version("loadkeys -V"))
57 + printversion("Console-tools", version("loadkeys -V"))
58 + printversion("Oprofile", version("oprofiled --version"))
59 + printversion("Sh-utils", version("expr --v"))
60 + printversion("Udev", version("udevadm --version"))
61 + printversion("Wireless-tools", version("iwconfig --version"))
53 62
54 - printversion("Procps", version("ps --version 2>&1"))
55 - printversion("Net-tools", version("ifconfig --version 2>&1"))
56 - printversion("Kbd", version("loadkeys -V 2>&1"))
57 - printversion("Console-tools", version("loadkeys -V 2>&1"))
58 - printversion("Oprofile", version("oprofiled --version 2>&1"))
59 - printversion("Sh-utils", version("expr --v 2>&1"))
60 - printversion("Udev", version("udevadm --version 2>&1"))
61 - printversion("Wireless-tools", version("iwconfig --version 2>&1"))
62 -
63 - if (system("test -r /proc/modules") == 0) {
64 - 	while ("sort /proc/modules" | getline > 0) {
65 - 		mods = mods sep $1
66 - 		sep = " "
67 - 	}
68 - 	printversion("Modules Loaded", mods)
63 + while ("sort /proc/modules" | getline > 0) {
64 + 	mods = mods sep $1
65 + 	sep = " "
69 66 }
67 + printversion("Modules Loaded", mods)
70 68 }
71 69
72 70 function version(cmd, ver) {
71 + 	cmd = cmd " 2>&1"
73 72 	while (cmd | getline > 0) {
74 73 		if (!/ver_linux/ && match($0, /[0-9]+([.]?[0-9]+)+/)) {
75 74 			ver = substr($0, RSTART, RLENGTH)