Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
"Here is the big set of char/misc patches for 4.20-rc1.

Loads of things here, we have new code in all of these driver
subsystems:
- fpga
- stm
- extcon
- nvmem
- eeprom
- hyper-v
- gsmi
- coresight
- thunderbolt
- vmw_balloon
- goldfish
- soundwire
along with lots of fixes and minor changes to other small drivers.

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-4.20-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (245 commits)
Documentation/security-bugs: Clarify treatment of embargoed information
lib: Fix ia64 bootloader linkage
MAINTAINERS: Clarify UIO vs UIOVEC maintainer
docs/uio: fix a grammar nitpick
docs: fpga: document programming fpgas using regions
fpga: add devm_fpga_region_create
fpga: bridge: add devm_fpga_bridge_create
fpga: mgr: add devm_fpga_mgr_create
hv_balloon: Replace spin_is_locked() with lockdep
sgi-xp: Replace spin_is_locked() with lockdep
eeprom: New ee1004 driver for DDR4 memory
eeprom: at25: remove unneeded 'at25_remove'
w1: IAD Register is yet readable trough iad sys file. Fix snprintf (%u for unsigned, count for max size).
misc: mic: scif: remove set but not used variables 'src_dma_addr, dst_dma_addr'
misc: mic: fix a DMA pool free failure
platform: goldfish: pipe: Add a blank line to separate varibles and code
platform: goldfish: pipe: Remove redundant casting
platform: goldfish: pipe: Call misc_deregister if init fails
platform: goldfish: pipe: Move the file-scope goldfish_pipe_dev variable into the driver state
platform: goldfish: pipe: Move the file-scope goldfish_pipe_miscdev variable into the driver state
...

+6928 -3401
+41
Documentation/ABI/testing/configfs-stp-policy-p_sys-t
What:		/config/stp-policy/<device>:p_sys-t.<policy>/<node>/uuid
Date:		June 2018
KernelVersion:	4.19
Description:
		UUID source identifier string, RW.
		Default value is randomly generated at the mkdir <node> time.
		Data coming from trace sources that use this <node> will be
		tagged with this UUID in the MIPI SyS-T packet stream, to
		allow the decoder to discern between different sources
		within the same master/channel range, and identify the
		higher level decoders that may be needed for each source.

What:		/config/stp-policy/<device>:p_sys-t.<policy>/<node>/do_len
Date:		June 2018
KernelVersion:	4.19
Description:
		Include payload length in the MIPI SyS-T header, boolean.
		If enabled, the SyS-T protocol encoder will include payload
		length in each packet's metadata. This is normally redundant
		if the underlying transport protocol supports marking message
		boundaries (which STP does), so this is off by default.

What:		/config/stp-policy/<device>:p_sys-t.<policy>/<node>/ts_interval
Date:		June 2018
KernelVersion:	4.19
Description:
		Time interval in milliseconds. Include a timestamp in the
		MIPI SyS-T packet metadata, if this many milliseconds have
		passed since the previous packet from this source. Zero is
		the default and stands for "never send the timestamp".

What:		/config/stp-policy/<device>:p_sys-t.<policy>/<node>/clocksync_interval
Date:		June 2018
KernelVersion:	4.19
Description:
		Time interval in milliseconds. Send a CLOCKSYNC packet if
		this many milliseconds have passed since the previous
		CLOCKSYNC packet from this source. Zero is the default and
		stands for "never send the CLOCKSYNC". It makes sense to
		use this option with sources that generate constant and/or
		periodic data, like stm_heartbeat.
+21
Documentation/ABI/testing/sysfs-bus-vmbus
What:		/sys/bus/vmbus/devices/.../driver_override
Date:		August 2019
Contact:	Stephen Hemminger <sthemmin@microsoft.com>
Description:
		This file allows the driver for a device to be specified which
		will override standard static and dynamic ID matching. When
		specified, only a driver with a name matching the value written
		to driver_override will have an opportunity to bind to the
		device. The override is specified by writing a string to the
		driver_override file (echo uio_hv_generic > driver_override) and
		may be cleared with an empty string (echo > driver_override).
		This returns the device to standard matching rules binding.
		Writing to driver_override does not automatically unbind the
		device from its current driver or make any attempt to
		automatically load the specified driver. If no driver with a
		matching name is currently loaded in the kernel, the device
		will not bind to any driver. This also allows devices to
		opt-out of driver binding using a driver_override name such as
		"none". Only a single driver may be specified in the override,
		there is no support for parsing delimiters.
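The write-then-clear mechanics described above can be sketched as a small userspace helper. This is not part of the patch; the helper name and error convention are mine, and any bus that implements driver_override works the same way as vmbus:

```c
#include <stdio.h>
#include <string.h>

/* Write `driver` to the driver_override attribute at `path`; pass "" to
 * clear the override (equivalent to `echo > driver_override`).
 * Returns 0 on success, -1 on error. */
static int write_override(const char *path, const char *driver)
{
	FILE *f = fopen(path, "w");

	if (!f)
		return -1;
	if (fputs(driver, f) == EOF) {
		fclose(f);
		return -1;
	}
	return fclose(f) == 0 ? 0 : -1;
}
```

Pointing it at `/sys/bus/vmbus/devices/.../driver_override` with `"uio_hv_generic"` mirrors the echo example in the ABI text; as the description notes, the device must still be unbound and re-bound for the override to take effect.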
+27 -16
Documentation/admin-guide/security-bugs.rst
 be released without consent from the reporter unless it has already been
 made public.

-Disclosure
-----------
+Disclosure and embargoed information
+------------------------------------

-The goal of the Linux kernel security team is to work with the bug
-submitter to understand and fix the bug. We prefer to publish the fix as
-soon as possible, but try to avoid public discussion of the bug itself
-and leave that to others.
+The security list is not a disclosure channel. For that, see Coordination
+below.

-Publishing the fix may be delayed when the bug or the fix is not yet
-fully understood, the solution is not well-tested or for vendor
-coordination. However, we expect these delays to be short, measurable in
-days, not weeks or months. A release date is negotiated by the security
-team working with the bug submitter as well as vendors. However, the
-kernel security team holds the final say when setting a timeframe. The
-timeframe varies from immediate (esp. if it's already publicly known bug)
-to a few weeks. As a basic default policy, we expect report date to
-release date to be on the order of 7 days.
+Once a robust fix has been developed, our preference is to release the
+fix in a timely fashion, treating it no differently than any of the other
+thousands of changes and fixes the Linux kernel project releases every
+month.
+
+However, at the request of the reporter, we will postpone releasing the
+fix for up to 5 business days after the date of the report or after the
+embargo has lifted; whichever comes first. The only exception to that
+rule is if the bug is publicly known, in which case the preference is to
+release the fix as soon as it's available.
+
+Whilst embargoed information may be shared with trusted individuals in
+order to develop a fix, such information will not be published alongside
+the fix or on any other disclosure channel without the permission of the
+reporter. This includes but is not limited to the original bug report
+and followup discussions (if any), exploits, CVE information or the
+identity of the reporter.
+
+In other words our only interest is in getting bugs fixed. All other
+information submitted to the security list and any followup discussions
+of the report are treated confidentially even after the embargo has been
+lifted, in perpetuity.

 Coordination
 ------------
···
 assigned ahead of public disclosure, they will need to contact the private
 linux-distros list, described above. When such a CVE identifier is known
 before a patch is provided, it is desirable to mention it in the commit
-message, though.
+message if the reporter agrees.

 Non-disclosure agreements
 -------------------------
+65 -55
Documentation/devicetree/bindings/arm/coresight.txt
 clocks the core of that coresight component. The latter clock
 is optional.

-  * port or ports: The representation of the component's port
-    layout using the generic DT graph presentation found in
-    "bindings/graph.txt".
+  * port or ports: see "Graph bindings for Coresight" below.

 * Additional required properties for System Trace Macrocells (STM):
   * reg: along with the physical base address and length of the register
···
   AMBA markee):
	- "arm,coresight-replicator"

-  * port or ports: same as above.
+  * port or ports: see "Graph bindings for Coresight" below.

 * Optional properties for ETM/PTMs:
···
 * interrupts : Exactly one SPI may be listed for reporting the address
   error

+Graph bindings for Coresight
+-------------------------------
+
+Coresight components are interconnected to create a data path for the flow of
+trace data generated from the "sources" to their collection points "sink".
+Each coresight component must describe the "input" and "output" connections.
+The connections must be described via generic DT graph bindings as described
+by the "bindings/graph.txt", where each "port" along with an "endpoint"
+component represents a hardware port and the connection.
+
+ * All output ports must be listed inside a child node named "out-ports"
+ * All input ports must be listed inside a child node named "in-ports".
+ * Port address must match the hardware port number.
+
 Example:

 1. Sinks
···
	clocks = <&oscclk6a>;
	clock-names = "apb_pclk";
-	port {
-		etb_in_port: endpoint@0 {
-			slave-mode;
-			remote-endpoint = <&replicator_out_port0>;
+	in-ports {
+		port {
+			etb_in_port: endpoint@0 {
+				remote-endpoint = <&replicator_out_port0>;
+			};
		};
	};
 };
···
	clocks = <&oscclk6a>;
	clock-names = "apb_pclk";
-	port {
-		tpiu_in_port: endpoint@0 {
-			slave-mode;
-			remote-endpoint = <&replicator_out_port1>;
+	in-ports {
+		port {
+			tpiu_in_port: endpoint@0 {
+				remote-endpoint = <&replicator_out_port1>;
+			};
		};
	};
 };
···
	clocks = <&oscclk6a>;
	clock-names = "apb_pclk";
-	ports {
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		/* input port */
-		port@0 {
-			reg = <0>;
+	in-ports {
+		port {
			etr_in_port: endpoint {
-				slave-mode;
				remote-endpoint = <&replicator2_out_port0>;
			};
		};
+	};

-		/* CATU link represented by output port */
-		port@1 {
-			reg = <1>;
+	out-ports {
+		port {
			etr_out_port: endpoint {
				remote-endpoint = <&catu_in_port>;
			};
···
	 */
	compatible = "arm,coresight-replicator";

-	ports {
+	out-ports {
		#address-cells = <1>;
		#size-cells = <0>;
···
				remote-endpoint = <&tpiu_in_port>;
			};
		};
+	};

-		/* replicator input port */
-		port@2 {
-			reg = <0>;
+	in-ports {
+		port {
			replicator_in_port0: endpoint {
-				slave-mode;
				remote-endpoint = <&funnel_out_port0>;
			};
		};
···
	clocks = <&oscclk6a>;
	clock-names = "apb_pclk";
-	ports {
-		#address-cells = <1>;
-		#size-cells = <0>;
-
-		/* funnel output port */
-		port@0 {
-			reg = <0>;
+	out-ports {
+		port {
			funnel_out_port0: endpoint {
				remote-endpoint =
					<&replicator_in_port0>;
			};
		};
+	};

-		/* funnel input ports */
-		port@1 {
+	in-ports {
+		#address-cells = <1>;
+		#size-cells = <0>;
+
+		port@0 {
			reg = <0>;
			funnel_in_port0: endpoint {
-				slave-mode;
				remote-endpoint = <&ptm0_out_port>;
			};
		};

-		port@2 {
+		port@1 {
			reg = <1>;
			funnel_in_port1: endpoint {
-				slave-mode;
				remote-endpoint = <&ptm1_out_port>;
			};
		};

-		port@3 {
+		port@2 {
			reg = <2>;
			funnel_in_port2: endpoint {
-				slave-mode;
				remote-endpoint = <&etm0_out_port>;
			};
		};
···
	cpu = <&cpu0>;
	clocks = <&oscclk6a>;
	clock-names = "apb_pclk";
-	port {
-		ptm0_out_port: endpoint {
-			remote-endpoint = <&funnel_in_port0>;
+	out-ports {
+		port {
+			ptm0_out_port: endpoint {
+				remote-endpoint = <&funnel_in_port0>;
+			};
		};
	};
 };
···
	cpu = <&cpu1>;
	clocks = <&oscclk6a>;
	clock-names = "apb_pclk";
-	port {
-		ptm1_out_port: endpoint {
-			remote-endpoint = <&funnel_in_port1>;
+	out-ports {
+		port {
+			ptm1_out_port: endpoint {
+				remote-endpoint = <&funnel_in_port1>;
+			};
		};
	};
 };
···
	clocks = <&soc_smc50mhz>;
	clock-names = "apb_pclk";
-	port {
-		stm_out_port: endpoint {
-			remote-endpoint = <&main_funnel_in_port2>;
+	out-ports {
+		port {
+			stm_out_port: endpoint {
+				remote-endpoint = <&main_funnel_in_port2>;
+			};
		};
	};
 };
···
	clock-names = "apb_pclk";

	interrupts = <GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>;
-	port {
-		catu_in_port: endpoint {
-			slave-mode;
-			remote-endpoint = <&etr_out_port>;
+	in-ports {
+		port {
+			catu_in_port: endpoint {
+				remote-endpoint = <&etr_out_port>;
+			};
		};
	};
 };
+7 -30
Documentation/driver-api/fpga/fpga-bridge.rst
 API to implement a new FPGA bridge
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

+* struct :c:type:`fpga_bridge` — The FPGA Bridge structure
+* struct :c:type:`fpga_bridge_ops` — Low level Bridge driver ops
+* :c:func:`devm_fpga_bridge_create()` — Allocate and init a bridge struct
+* :c:func:`fpga_bridge_register()` — Register a bridge
+* :c:func:`fpga_bridge_unregister()` — Unregister a bridge
+
 .. kernel-doc:: include/linux/fpga/fpga-bridge.h
    :functions: fpga_bridge
···
    :functions: fpga_bridge_ops

 .. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: fpga_bridge_create
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: fpga_bridge_free
+   :functions: devm_fpga_bridge_create

 .. kernel-doc:: drivers/fpga/fpga-bridge.c
    :functions: fpga_bridge_register

 .. kernel-doc:: drivers/fpga/fpga-bridge.c
    :functions: fpga_bridge_unregister
-
-API to control an FPGA bridge
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-You probably won't need these directly. FPGA regions should handle this.
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: of_fpga_bridge_get
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: fpga_bridge_get
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: fpga_bridge_put
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: fpga_bridge_get_to_list
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: of_fpga_bridge_get_to_list
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: fpga_bridge_enable
-
-.. kernel-doc:: drivers/fpga/fpga-bridge.c
-   :functions: fpga_bridge_disable
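A bridge driver probe using the devm_ variant introduced by this series would look roughly like the following sketch (not taken from the patch; the "my_bridge" names, ops, and priv pointer are hypothetical):

```c
/* Hypothetical bridge driver probe; my_bridge_enable_set/show and priv
 * are placeholders for the driver's own ops and state. */
static const struct fpga_bridge_ops my_bridge_ops = {
	.enable_set = my_bridge_enable_set,
	.enable_show = my_bridge_enable_show,
};

static int my_bridge_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct fpga_bridge *br;

	br = devm_fpga_bridge_create(dev, "My FPGA Bridge",
				     &my_bridge_ops, priv);
	if (!br)
		return -ENOMEM;

	platform_set_drvdata(pdev, br);

	/* No fpga_bridge_free() on the error path: the devm_ allocation
	 * is released automatically when the device goes away. */
	return fpga_bridge_register(br);
}
```

The point of the devm_ API, as this series documents, is exactly that simplified error path: the old create/free pairing disappears from the driver.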
+14 -112
Documentation/driver-api/fpga/fpga-mgr.rst
	 * them in priv
	 */

-	mgr = fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager",
-			      &socfpga_fpga_ops, priv);
+	mgr = devm_fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager",
+				   &socfpga_fpga_ops, priv);
	if (!mgr)
		return -ENOMEM;

	platform_set_drvdata(pdev, mgr);

-	ret = fpga_mgr_register(mgr);
-	if (ret)
-		fpga_mgr_free(mgr);
-
-	return ret;
+	return fpga_mgr_register(mgr);
 }

 static int socfpga_fpga_remove(struct platform_device *pdev)
···
 and return a code of type enum fpga_mgr_states. It doesn't result in a change
 in state.

-How to write an image buffer to a supported FPGA
-------------------------------------------------
-
-Some sample code::
-
-	#include <linux/fpga/fpga-mgr.h>
-
-	struct fpga_manager *mgr;
-	struct fpga_image_info *info;
-	int ret;
-
-	/*
-	 * Get a reference to FPGA manager. The manager is not locked, so you can
-	 * hold onto this reference without it preventing programming.
-	 *
-	 * This example uses the device node of the manager. Alternatively, use
-	 * fpga_mgr_get(dev) instead if you have the device.
-	 */
-	mgr = of_fpga_mgr_get(mgr_node);
-
-	/* struct with information about the FPGA image to program. */
-	info = fpga_image_info_alloc(dev);
-
-	/* flags indicates whether to do full or partial reconfiguration */
-	info->flags = FPGA_MGR_PARTIAL_RECONFIG;
-
-	/*
-	 * At this point, indicate where the image is. This is pseudo-code; you're
-	 * going to use one of these three.
-	 */
-	if (image is in a scatter gather table) {
-
-		info->sgt = [your scatter gather table]
-
-	} else if (image is in a buffer) {
-
-		info->buf = [your image buffer]
-		info->count = [image buffer size]
-
-	} else if (image is in a firmware file) {
-
-		info->firmware_name = devm_kstrdup(dev, firmware_name, GFP_KERNEL);
-
-	}
-
-	/* Get exclusive control of FPGA manager */
-	ret = fpga_mgr_lock(mgr);
-
-	/* Load the buffer to the FPGA */
-	ret = fpga_mgr_buf_load(mgr, &info, buf, count);
-
-	/* Release the FPGA manager */
-	fpga_mgr_unlock(mgr);
-	fpga_mgr_put(mgr);
-
-	/* Deallocate the image info if you're done with it */
-	fpga_image_info_free(info);
-
 API for implementing a new FPGA Manager driver
 ----------------------------------------------
+
+* ``fpga_mgr_states`` — Values for :c:member:`fpga_manager->state`.
+* struct :c:type:`fpga_manager` — the FPGA manager struct
+* struct :c:type:`fpga_manager_ops` — Low level FPGA manager driver ops
+* :c:func:`devm_fpga_mgr_create` — Allocate and init a manager struct
+* :c:func:`fpga_mgr_register` — Register an FPGA manager
+* :c:func:`fpga_mgr_unregister` — Unregister an FPGA manager
+
+.. kernel-doc:: include/linux/fpga/fpga-mgr.h
+   :functions: fpga_mgr_states

 .. kernel-doc:: include/linux/fpga/fpga-mgr.h
    :functions: fpga_manager
···
    :functions: fpga_manager_ops

 .. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_mgr_create
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_mgr_free
+   :functions: devm_fpga_mgr_create

 .. kernel-doc:: drivers/fpga/fpga-mgr.c
    :functions: fpga_mgr_register

 .. kernel-doc:: drivers/fpga/fpga-mgr.c
    :functions: fpga_mgr_unregister
-
-API for programming an FPGA
----------------------------
-
-FPGA Manager flags
-
-.. kernel-doc:: include/linux/fpga/fpga-mgr.h
-   :doc: FPGA Manager flags
-
-.. kernel-doc:: include/linux/fpga/fpga-mgr.h
-   :functions: fpga_image_info
-
-.. kernel-doc:: include/linux/fpga/fpga-mgr.h
-   :functions: fpga_mgr_states
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_image_info_alloc
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_image_info_free
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: of_fpga_mgr_get
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_mgr_get
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_mgr_put
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_mgr_lock
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_mgr_unlock
-
-.. kernel-doc:: include/linux/fpga/fpga-mgr.h
-   :functions: fpga_mgr_states
-
-Note - use :c:func:`fpga_region_program_fpga()` instead of :c:func:`fpga_mgr_load()`
-
-.. kernel-doc:: drivers/fpga/fpga-mgr.c
-   :functions: fpga_mgr_load
+107
Documentation/driver-api/fpga/fpga-programming.rst
In-kernel API for FPGA Programming
==================================

Overview
--------

The in-kernel API for FPGA programming is a combination of APIs from
FPGA manager, bridge, and regions. The actual function used to
trigger FPGA programming is :c:func:`fpga_region_program_fpga()`.

:c:func:`fpga_region_program_fpga()` uses functionality supplied by
the FPGA manager and bridges. It will:

 * lock the region's mutex
 * lock the mutex of the region's FPGA manager
 * build a list of FPGA bridges if a method has been specified to do so
 * disable the bridges
 * program the FPGA using info passed in :c:member:`fpga_region->info`.
 * re-enable the bridges
 * release the locks

The struct fpga_image_info specifies what FPGA image to program. It is
allocated/freed by :c:func:`fpga_image_info_alloc()` and freed with
:c:func:`fpga_image_info_free()`

How to program an FPGA using a region
-------------------------------------

When the FPGA region driver probed, it was given a pointer to an FPGA manager
driver so it knows which manager to use. The region also either has a list of
bridges to control during programming or it has a pointer to a function that
will generate that list. Here's some sample code of what to do next::

	#include <linux/fpga/fpga-mgr.h>
	#include <linux/fpga/fpga-region.h>

	struct fpga_image_info *info;
	int ret;

	/*
	 * First, alloc the struct with information about the FPGA image to
	 * program.
	 */
	info = fpga_image_info_alloc(dev);
	if (!info)
		return -ENOMEM;

	/* Set flags as needed, such as: */
	info->flags = FPGA_MGR_PARTIAL_RECONFIG;

	/*
	 * Indicate where the FPGA image is. This is pseudo-code; you're
	 * going to use one of these three.
	 */
	if (image is in a scatter gather table) {

		info->sgt = [your scatter gather table]

	} else if (image is in a buffer) {

		info->buf = [your image buffer]
		info->count = [image buffer size]

	} else if (image is in a firmware file) {

		info->firmware_name = devm_kstrdup(dev, firmware_name,
						   GFP_KERNEL);

	}

	/* Add info to region and do the programming */
	region->info = info;
	ret = fpga_region_program_fpga(region);

	/* Deallocate the image info if you're done with it */
	region->info = NULL;
	fpga_image_info_free(info);

	if (ret)
		return ret;

	/* Now enumerate whatever hardware has appeared in the FPGA. */

API for programming an FPGA
---------------------------

* :c:func:`fpga_region_program_fpga` — Program an FPGA
* :c:type:`fpga_image_info` — Specifies what FPGA image to program
* :c:func:`fpga_image_info_alloc()` — Allocate an FPGA image info struct
* :c:func:`fpga_image_info_free()` — Free an FPGA image info struct

.. kernel-doc:: drivers/fpga/fpga-region.c
   :functions: fpga_region_program_fpga

FPGA Manager flags

.. kernel-doc:: include/linux/fpga/fpga-mgr.h
   :doc: FPGA Manager flags

.. kernel-doc:: include/linux/fpga/fpga-mgr.h
   :functions: fpga_image_info

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_image_info_alloc

.. kernel-doc:: drivers/fpga/fpga-mgr.c
   :functions: fpga_image_info_free
+46 -45
Documentation/driver-api/fpga/fpga-region.rst
 * flags indicating specifics such as whether the image is for partial
   reconfiguration.

-How to program an FPGA using a region
--------------------------------------
-
-First, allocate the info struct::
-
-	info = fpga_image_info_alloc(dev);
-	if (!info)
-		return -ENOMEM;
-
-Set flags as needed, i.e.::
-
-	info->flags |= FPGA_MGR_PARTIAL_RECONFIG;
-
-Point to your FPGA image, such as::
-
-	info->sgt = &sgt;
-
-Add info to region and do the programming::
-
-	region->info = info;
-	ret = fpga_region_program_fpga(region);
-
-:c:func:`fpga_region_program_fpga()` operates on info passed in the
-fpga_image_info (region->info). This function will attempt to:
-
-* lock the region's mutex
-* lock the region's FPGA manager
-* build a list of FPGA bridges if a method has been specified to do so
-* disable the bridges
-* program the FPGA
-* re-enable the bridges
-* release the locks
-
-Then you will want to enumerate whatever hardware has appeared in the FPGA.
-
 How to add a new FPGA region
 ----------------------------
···
 .. [#f1] ../devicetree/bindings/fpga/fpga-region.txt
 .. [#f2] ../../drivers/fpga/of-fpga-region.c

-API to program an FPGA
-----------------------
-
-.. kernel-doc:: drivers/fpga/fpga-region.c
-   :functions: fpga_region_program_fpga
-
 API to add a new FPGA region
 ----------------------------
+
+* struct :c:type:`fpga_region` — The FPGA region struct
+* :c:func:`devm_fpga_region_create` — Allocate and init a region struct
+* :c:func:`fpga_region_register` — Register an FPGA region
+* :c:func:`fpga_region_unregister` — Unregister an FPGA region
+
+The FPGA region's probe function will need to get a reference to the FPGA
+Manager it will be using to do the programming. This usually would happen
+during the region's probe function.
+
+* :c:func:`fpga_mgr_get` — Get a reference to an FPGA manager, raise ref count
+* :c:func:`of_fpga_mgr_get` — Get a reference to an FPGA manager, raise ref
+  count, given a device node.
+* :c:func:`fpga_mgr_put` — Put an FPGA manager
+
+The FPGA region will need to specify which bridges to control while programming
+the FPGA. The region driver can build a list of bridges during probe time
+(:c:member:`fpga_region->bridge_list`) or it can have a function that creates
+the list of bridges to program just before programming
+(:c:member:`fpga_region->get_bridges`). The FPGA bridge framework supplies the
+following APIs to handle building or tearing down that list.
+
+* :c:func:`fpga_bridge_get_to_list` — Get a ref of an FPGA bridge, add it to a
+  list
+* :c:func:`of_fpga_bridge_get_to_list` — Get a ref of an FPGA bridge, add it
+  to a list, given a device node
+* :c:func:`fpga_bridges_put` — Given a list of bridges, put them

 .. kernel-doc:: include/linux/fpga/fpga-region.h
    :functions: fpga_region

 .. kernel-doc:: drivers/fpga/fpga-region.c
-   :functions: fpga_region_create
-
-.. kernel-doc:: drivers/fpga/fpga-region.c
-   :functions: fpga_region_free
+   :functions: devm_fpga_region_create

 .. kernel-doc:: drivers/fpga/fpga-region.c
    :functions: fpga_region_register

 .. kernel-doc:: drivers/fpga/fpga-region.c
    :functions: fpga_region_unregister
+
+.. kernel-doc:: drivers/fpga/fpga-mgr.c
+   :functions: fpga_mgr_get
+
+.. kernel-doc:: drivers/fpga/fpga-mgr.c
+   :functions: of_fpga_mgr_get
+
+.. kernel-doc:: drivers/fpga/fpga-mgr.c
+   :functions: fpga_mgr_put
+
+.. kernel-doc:: drivers/fpga/fpga-bridge.c
+   :functions: fpga_bridge_get_to_list
+
+.. kernel-doc:: drivers/fpga/fpga-bridge.c
+   :functions: of_fpga_bridge_get_to_list
+
+.. kernel-doc:: drivers/fpga/fpga-bridge.c
+   :functions: fpga_bridges_put
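Tying those region APIs together, a region driver probe might look like the following sketch (not from the patch; my_mgr_node and my_get_bridges are hypothetical names for the manager's device node and the region's get_bridges callback):

```c
/* Hypothetical region driver probe: get the manager, create the region
 * with the devm_ variant added in this series, then register it. */
static int my_region_probe(struct platform_device *pdev)
{
	struct device *dev = &pdev->dev;
	struct fpga_manager *mgr;
	struct fpga_region *region;
	int ret;

	/* my_mgr_node is assumed to come from this region's DT node */
	mgr = of_fpga_mgr_get(my_mgr_node);
	if (IS_ERR(mgr))
		return PTR_ERR(mgr);

	region = devm_fpga_region_create(dev, mgr, my_get_bridges);
	if (!region) {
		ret = -ENOMEM;
		goto err_put;
	}

	platform_set_drvdata(pdev, region);

	ret = fpga_region_register(region);
	if (ret)
		goto err_put;

	return 0;

err_put:
	fpga_mgr_put(mgr);	/* drop the manager ref taken above */
	return ret;
}
```

The manager reference is the only resource still needing a manual put; the region itself is devm-managed.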
+2
Documentation/driver-api/fpga/index.rst
    fpga-mgr
    fpga-bridge
    fpga-region
+   fpga-programming
+1 -1
Documentation/driver-api/fpga/intro.rst
 -----------

 If you are adding a new interface to the FPGA framework, add it on top
-of an FPGA region to allow the most reuse of your interface.
+of an FPGA region.

 The FPGA Region framework (fpga-region.c) associates managers and
 bridges as reconfigurable regions. A region may refer to the whole
+36
Documentation/driver-api/soundwire/stream.rst
 +--------------------+          |                    |
                                 +----------------+

+Example 5: Stereo Stream with L and R channel is rendered by 2 Masters, each
+rendering one channel, and is received by two different Slaves, each
+receiving one channel. Both Masters and both Slaves are using single port. ::
+
+	+---------------+  Clock Signal  +---------------+
+	|    Master     +----------------+     Slave     |
+	|   Interface   |                |   Interface   |
+	|       1       |                |       1       |
+	|               |  Data Signal   |               |
+	|       L       +----------------+       L       |
+	|    (Data)     | Data Direction |    (Data)     |
+	+---------------+ +------------> +---------------+
+
+	+---------------+  Clock Signal  +---------------+
+	|    Master     +----------------+     Slave     |
+	|   Interface   |                |   Interface   |
+	|       2       |                |       2       |
+	|               |  Data Signal   |               |
+	|       R       +----------------+       R       |
+	|    (Data)     | Data Direction |    (Data)     |
+	+---------------+ +------------> +---------------+
+
+Note: In multi-link cases like above, to lock, one would acquire a global
+lock and then go on locking bus instances. But, in this case the caller
+framework(ASoC DPCM) guarantees that stream operations on a card are
+always serialized. So, there is no race condition and hence no need for
+global lock.
+
 SoundWire Stream Management flow
 ================================
···
 .startup() operation.

 .. code-block:: c
+
   int sdw_alloc_stream(char * stream_name);

···
 framework, this stream state is linked to .hw_params() operation.

 .. code-block:: c
+
   int sdw_stream_add_master(struct sdw_bus * bus,
		struct sdw_stream_config * stream_config,
		struct sdw_ports_config * ports_config,
···
 .prepare() operation.

 .. code-block:: c
+
   int sdw_prepare_stream(struct sdw_stream_runtime * stream);

···
 .trigger() start operation.

 .. code-block:: c
+
   int sdw_enable_stream(struct sdw_stream_runtime * stream);

 SDW_STREAM_DISABLED
···
 .trigger() stop operation.

 .. code-block:: c
+
   int sdw_disable_stream(struct sdw_stream_runtime * stream);

···
 .trigger() stop operation.

 .. code-block:: c
+
   int sdw_deprepare_stream(struct sdw_stream_runtime * stream);

···
 framework, this stream state is linked to .hw_free() operation.

 .. code-block:: c
+
   int sdw_stream_remove_master(struct sdw_bus * bus,
		struct sdw_stream_runtime * stream);
   int sdw_stream_remove_slave(struct sdw_slave * slave,
···
 In .shutdown() the data structure maintaining stream state are freed up.

 .. code-block:: c
+
   void sdw_release_stream(struct sdw_stream_runtime * stream);

 Not Supported
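The state flow that these hunks document can be summarized as one call sequence, mapped to the ALSA callbacks named in the text. This is a sketch only; the bus, slave, and config variables are placeholders and error handling is omitted:

```c
/* Sketch of the SoundWire stream lifecycle in ALSA callback order.
 * bus, slave, stream_config, ports_config, num_ports are placeholders. */
struct sdw_stream_runtime *stream;

/* .startup(): SDW_STREAM_ALLOCATED */
stream = sdw_alloc_stream("my-playback");

/* .hw_params(): SDW_STREAM_CONFIGURED */
sdw_stream_add_master(bus, &stream_config, &ports_config, num_ports, stream);

/* .prepare(): SDW_STREAM_PREPARED */
sdw_prepare_stream(stream);

/* .trigger(START): SDW_STREAM_ENABLED */
sdw_enable_stream(stream);

/* .trigger(STOP): SDW_STREAM_DISABLED, then SDW_STREAM_DEPREPARED */
sdw_disable_stream(stream);
sdw_deprepare_stream(stream);

/* .hw_free(): back toward SDW_STREAM_ALLOCATED */
sdw_stream_remove_master(bus, stream);
sdw_stream_remove_slave(slave, stream);

/* .shutdown(): SDW_STREAM_RELEASED */
sdw_release_stream(stream);
```

Each call corresponds to one state transition in the document's state machine; in the multi-master Example 5 the add/remove calls are simply repeated per bus.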
+2 -2
Documentation/driver-api/uio-howto.rst
 Information about all UIO devices is available in sysfs. The first thing
 you should do in your driver is check ``name`` and ``version`` to make
-sure your talking to the right device and that its kernel driver has the
-version you expect.
+sure you're talking to the right device and that its kernel driver has
+the version you expect.

 You should also make sure that the memory mapping you need exists and
 has the size you expect.
+31
Documentation/nvmem/nvmem.txt
··· 58 58 It is mandatory that the NVMEM provider has a regmap associated with its 59 59 struct device. Failure to do would return error code from nvmem_register(). 60 60 61 + Users of board files can define and register nvmem cells using the 62 + nvmem_cell_table struct: 63 + 64 + static struct nvmem_cell_info foo_nvmem_cells[] = { 65 + { 66 + .name = "macaddr", 67 + .offset = 0x7f00, 68 + .bytes = ETH_ALEN, 69 + } 70 + }; 71 + 72 + static struct nvmem_cell_table foo_nvmem_cell_table = { 73 + .nvmem_name = "i2c-eeprom", 74 + .cells = foo_nvmem_cells, 75 + .ncells = ARRAY_SIZE(foo_nvmem_cells), 76 + }; 77 + 78 + nvmem_add_cell_table(&foo_nvmem_cell_table); 79 + 80 + Additionally it is possible to create nvmem cell lookup entries and register 81 + them with the nvmem framework from machine code as shown in the example below: 82 + 83 + static struct nvmem_cell_lookup foo_nvmem_lookup = { 84 + .nvmem_name = "i2c-eeprom", 85 + .cell_name = "macaddr", 86 + .dev_id = "foo_mac.0", 87 + .con_id = "mac-address", 88 + }; 89 + 90 + nvmem_add_cell_lookups(&foo_nvmem_lookup, 1); 91 + 61 92 NVMEM Consumers 62 93 +++++++++++++++ 63 94
+29 -9
Documentation/trace/stm.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 1 3 =================== 2 4 System Trace Module 3 5 =================== ··· 55 53 be used for trace sources with the id string of "user/dummy". 56 54 57 55 Trace sources have to open the stm class device's node and write their 58 - trace data into its file descriptor. In order to identify themselves 59 - to the policy, they need to do a STP_POLICY_ID_SET ioctl on this file 60 - descriptor providing their id string. Otherwise, they will be 61 - automatically allocated a master/channel pair upon first write to this 62 - file descriptor according to the "default" rule of the policy, if such 63 - exists. 56 + trace data into its file descriptor. 57 + 58 + In order to find an appropriate policy node for a given trace source, 59 + several mechanisms can be used. First, a trace source can explicitly 60 + identify itself by calling an STP_POLICY_ID_SET ioctl on the character 61 + device's file descriptor, providing its id string, before it writes 62 + any data there. Second, if it chooses not to perform the explicit 63 + identification (for instance, to avoid patching existing software 64 + to do this), it can just start writing the data, at which point the 65 + stm core will try to find a policy node with the name matching the 66 + task's name (e.g., "syslogd") and if one exists, it will be used. 67 + Third, if the task name can't be found among the policy nodes, the 68 + catch-all entry "default" will be used, if it exists. This entry also 69 + needs to be created and configured by the system administrator or 70 + whatever tools are taking care of the policy configuration. Finally, 71 + if all the above steps fail, the write() to an stm file descriptor 72 + will return an error (EINVAL).
73 + 74 + Previously, if no policy nodes were found for a trace source, the stm 75 + class would silently fall back to allocating the first available 76 + contiguous range of master/channels from the beginning of the device's 77 + master/channel range. The new requirement for a policy node to exist 78 + will help programmers and sysadmins identify gaps in configuration 79 + and have better control over the unidentified sources. 64 80 65 81 Some STM devices may allow direct mapping of the channel mmio regions 66 82 to userspace for zero-copy writing. One mappable page (in terms of ··· 112 92 there's a node in the root of the policy directory that matches the 113 93 stm_source device's name (for example, "console"), this node will be 114 94 used to allocate master and channel numbers. If there's no such policy 115 - node, the stm core will pick the first contiguous chunk of channels 116 - within the first available master. Note that the node must exist 117 - before the stm_source device is connected to its stm device. 95 + node, the stm core will use the catch-all entry "default", if one 96 + exists. If neither policy node exists, the write() to stm_source_link 97 + will return an error. 118 98 119 99 stm_console 120 100 ===========
+62
Documentation/trace/sys-t.rst
··· 1 + .. SPDX-License-Identifier: GPL-2.0 2 + 3 + =================== 4 + MIPI SyS-T over STP 5 + =================== 6 + 7 + The MIPI SyS-T protocol driver can be used with STM class devices to 8 + generate a standardized trace stream. Aside from being a standard, it 9 + provides better trace source identification and timestamp correlation. 10 + 11 + In order to use the MIPI SyS-T protocol driver with your STM device, 12 + you'll first need to enable CONFIG_STM_PROTO_SYS_T. 13 + 14 + Now, you can select which protocol driver you want to use when you create 15 + a policy for your STM device, by specifying it in the policy name: 16 + 17 + # mkdir /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/ 18 + 19 + In other words, the policy name format is extended like this: 20 + 21 + <device_name>:<protocol_name>.<policy_name> 22 + 23 + With Intel TH, for example, it can look like "0-sth:p_sys-t.my-policy". 24 + 25 + If the protocol name is omitted, the STM class will choose whichever 26 + protocol driver was loaded first. 27 + 28 + You can also double-check that everything is working as expected by 29 + 30 + # cat /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/protocol 31 + p_sys-t 32 + 33 + Now, with the MIPI SyS-T protocol driver, each policy node in the 34 + configfs gets a few additional attributes, which determine per-source 35 + parameters specific to the protocol: 36 + 37 + # mkdir /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/default 38 + # ls /config/stp-policy/dummy_stm.0:p_sys-t.my-policy/default 39 + channels 40 + clocksync_interval 41 + do_len 42 + masters 43 + ts_interval 44 + uuid 45 + 46 + The most important one here is the "uuid", which determines the UUID 47 + that will be used to tag all data coming from this source. It is 48 + automatically generated when a new node is created, but it is likely 49 + that you would want to change it. 50 + 51 + do_len switches on/off the additional "payload length" field in the 52 + MIPI SyS-T message header.
It is off by default as the STP already 53 + marks message boundaries. 54 + 55 + ts_interval and clocksync_interval determine how much time in milliseconds 56 + can pass before we need to include a protocol (not transport, aka STP) 57 + timestamp in a message header or send a CLOCKSYNC packet, respectively. 58 + 59 + See Documentation/ABI/testing/configfs-stp-policy-p_sys-t for more 60 + details. 61 + 62 + * [1] https://www.mipi.org/specifications/sys-t
+9 -2
MAINTAINERS
··· 932 932 M: Arve Hjønnevåg <arve@android.com> 933 933 M: Todd Kjos <tkjos@android.com> 934 934 M: Martijn Coenen <maco@android.com> 935 + M: Joel Fernandes <joel@joelfernandes.org> 935 936 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/staging.git 936 937 L: devel@driverdev.osuosl.org 937 938 S: Supported ··· 13758 13757 F: include/sound/soc* 13759 13758 13760 13759 SOUNDWIRE SUBSYSTEM 13761 - M: Vinod Koul <vinod.koul@intel.com> 13760 + M: Vinod Koul <vkoul@kernel.org> 13762 13761 M: Sanyog Kale <sanyog.r.kale@intel.com> 13763 13762 R: Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com> 13764 13763 L: alsa-devel@alsa-project.org (moderated for non-subscribers) ··· 15524 15523 F: fs/hostfs/ 15525 15524 F: fs/hppfs/ 15526 15525 15526 + USERSPACE COPYIN/COPYOUT (UIOVEC) 15527 + M: Alexander Viro <viro@zeniv.linux.org.uk> 15528 + S: Maintained 15529 + F: lib/iov_iter.c 15530 + F: include/linux/uio.h 15531 + 15527 15532 USERSPACE I/O (UIO) 15528 15533 M: Greg Kroah-Hartman <gregkh@linuxfoundation.org> 15529 15534 S: Maintained 15530 15535 T: git git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc.git 15531 15536 F: Documentation/driver-api/uio-howto.rst 15532 15537 F: drivers/uio/ 15533 - F: include/linux/uio*.h 15538 + F: include/linux/uio_driver.h 15534 15539 15535 15540 UTIL-LINUX PACKAGE 15536 15541 M: Karel Zak <kzak@redhat.com>
+1 -1
drivers/android/Kconfig
··· 10 10 11 11 config ANDROID_BINDER_IPC 12 12 bool "Android Binder IPC Driver" 13 - depends on MMU 13 + depends on MMU && !CPU_CACHE_VIVT 14 14 default n 15 15 ---help--- 16 16 Binder is used in Android for both communication between processes,
+320 -169
drivers/android/binder.c
··· 71 71 #include <linux/security.h> 72 72 #include <linux/spinlock.h> 73 73 #include <linux/ratelimit.h> 74 + #include <linux/syscalls.h> 74 75 75 76 #include <uapi/linux/android/binder.h> 76 77 ··· 458 457 }; 459 458 460 459 enum binder_deferred_state { 461 - BINDER_DEFERRED_PUT_FILES = 0x01, 462 - BINDER_DEFERRED_FLUSH = 0x02, 463 - BINDER_DEFERRED_RELEASE = 0x04, 460 + BINDER_DEFERRED_FLUSH = 0x01, 461 + BINDER_DEFERRED_RELEASE = 0x02, 464 462 }; 465 463 466 464 /** ··· 480 480 * (invariant after initialized) 481 481 * @tsk task_struct for group_leader of process 482 482 * (invariant after initialized) 483 - * @files files_struct for process 484 - * (protected by @files_lock) 485 - * @files_lock mutex to protect @files 486 483 * @deferred_work_node: element for binder_deferred_list 487 484 * (protected by binder_deferred_lock) 488 485 * @deferred_work: bitmap of deferred work to perform ··· 524 527 struct list_head waiting_threads; 525 528 int pid; 526 529 struct task_struct *tsk; 527 - struct files_struct *files; 528 - struct mutex files_lock; 529 530 struct hlist_node deferred_work_node; 530 531 int deferred_work; 531 532 bool is_dead; ··· 606 611 bool is_dead; 607 612 }; 608 613 614 + /** 615 + * struct binder_txn_fd_fixup - transaction fd fixup list element 616 + * @fixup_entry: list entry 617 + * @file: struct file to be associated with new fd 618 + * @offset: offset in buffer data to this fixup 619 + * 620 + * List element for fd fixups in a transaction. Since file 621 + * descriptors need to be allocated in the context of the 622 + * target process, we pass each fd to be processed in this 623 + * struct. 
624 + */ 625 + struct binder_txn_fd_fixup { 626 + struct list_head fixup_entry; 627 + struct file *file; 628 + size_t offset; 629 + }; 630 + 609 631 struct binder_transaction { 610 632 int debug_id; 611 633 struct binder_work work; ··· 640 628 long priority; 641 629 long saved_priority; 642 630 kuid_t sender_euid; 631 + struct list_head fd_fixups; 643 632 /** 644 633 * @lock: protects @from, @to_proc, and @to_thread 645 634 * ··· 835 822 binder_enqueue_deferred_thread_work_ilocked(struct binder_thread *thread, 836 823 struct binder_work *work) 837 824 { 825 + WARN_ON(!list_empty(&thread->waiting_thread_node)); 838 826 binder_enqueue_work_ilocked(work, &thread->todo); 839 827 } 840 828 ··· 853 839 binder_enqueue_thread_work_ilocked(struct binder_thread *thread, 854 840 struct binder_work *work) 855 841 { 842 + WARN_ON(!list_empty(&thread->waiting_thread_node)); 856 843 binder_enqueue_work_ilocked(work, &thread->todo); 857 844 thread->process_todo = true; 858 845 } ··· 934 919 static void binder_free_thread(struct binder_thread *thread); 935 920 static void binder_free_proc(struct binder_proc *proc); 936 921 static void binder_inc_node_tmpref_ilocked(struct binder_node *node); 937 - 938 - static int task_get_unused_fd_flags(struct binder_proc *proc, int flags) 939 - { 940 - unsigned long rlim_cur; 941 - unsigned long irqs; 942 - int ret; 943 - 944 - mutex_lock(&proc->files_lock); 945 - if (proc->files == NULL) { 946 - ret = -ESRCH; 947 - goto err; 948 - } 949 - if (!lock_task_sighand(proc->tsk, &irqs)) { 950 - ret = -EMFILE; 951 - goto err; 952 - } 953 - rlim_cur = task_rlimit(proc->tsk, RLIMIT_NOFILE); 954 - unlock_task_sighand(proc->tsk, &irqs); 955 - 956 - ret = __alloc_fd(proc->files, 0, rlim_cur, flags); 957 - err: 958 - mutex_unlock(&proc->files_lock); 959 - return ret; 960 - } 961 - 962 - /* 963 - * copied from fd_install 964 - */ 965 - static void task_fd_install( 966 - struct binder_proc *proc, unsigned int fd, struct file *file) 967 - { 968 - 
mutex_lock(&proc->files_lock); 969 - if (proc->files) 970 - __fd_install(proc->files, fd, file); 971 - mutex_unlock(&proc->files_lock); 972 - } 973 - 974 - /* 975 - * copied from sys_close 976 - */ 977 - static long task_close_fd(struct binder_proc *proc, unsigned int fd) 978 - { 979 - int retval; 980 - 981 - mutex_lock(&proc->files_lock); 982 - if (proc->files == NULL) { 983 - retval = -ESRCH; 984 - goto err; 985 - } 986 - retval = __close_fd(proc->files, fd); 987 - /* can't restart close syscall because file table entry was cleared */ 988 - if (unlikely(retval == -ERESTARTSYS || 989 - retval == -ERESTARTNOINTR || 990 - retval == -ERESTARTNOHAND || 991 - retval == -ERESTART_RESTARTBLOCK)) 992 - retval = -EINTR; 993 - err: 994 - mutex_unlock(&proc->files_lock); 995 - return retval; 996 - } 997 922 998 923 static bool binder_has_work_ilocked(struct binder_thread *thread, 999 924 bool do_proc_work) ··· 1225 1270 } else 1226 1271 node->local_strong_refs++; 1227 1272 if (!node->has_strong_ref && target_list) { 1273 + struct binder_thread *thread = container_of(target_list, 1274 + struct binder_thread, todo); 1228 1275 binder_dequeue_work_ilocked(&node->work); 1229 - /* 1230 - * Note: this function is the only place where we queue 1231 - * directly to a thread->todo without using the 1232 - * corresponding binder_enqueue_thread_work() helper 1233 - * functions; in this case it's ok to not set the 1234 - * process_todo flag, since we know this node work will 1235 - * always be followed by other work that starts queue 1236 - * processing: in case of synchronous transactions, a 1237 - * BR_REPLY or BR_ERROR; in case of oneway 1238 - * transactions, a BR_TRANSACTION_COMPLETE. 
1239 - */ 1240 - binder_enqueue_work_ilocked(&node->work, target_list); 1276 + BUG_ON(&thread->todo != target_list); 1277 + binder_enqueue_deferred_thread_work_ilocked(thread, 1278 + &node->work); 1241 1279 } 1242 1280 } else { 1243 1281 if (!internal) ··· 1906 1958 return NULL; 1907 1959 } 1908 1960 1961 + /** 1962 + * binder_free_txn_fixups() - free unprocessed fd fixups 1963 + * @t: binder transaction for t->from 1964 + * 1965 + * If the transaction is being torn down prior to being 1966 + * processed by the target process, free all of the 1967 + * fd fixups and fput the file structs. It is safe to 1968 + * call this function after the fixups have been 1969 + * processed -- in that case, the list will be empty. 1970 + */ 1971 + static void binder_free_txn_fixups(struct binder_transaction *t) 1972 + { 1973 + struct binder_txn_fd_fixup *fixup, *tmp; 1974 + 1975 + list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) { 1976 + fput(fixup->file); 1977 + list_del(&fixup->fixup_entry); 1978 + kfree(fixup); 1979 + } 1980 + } 1981 + 1909 1982 static void binder_free_transaction(struct binder_transaction *t) 1910 1983 { 1911 1984 if (t->buffer) 1912 1985 t->buffer->transaction = NULL; 1986 + binder_free_txn_fixups(t); 1913 1987 kfree(t); 1914 1988 binder_stats_deleted(BINDER_STAT_TRANSACTION); 1915 1989 } ··· 2232 2262 } break; 2233 2263 2234 2264 case BINDER_TYPE_FD: { 2235 - struct binder_fd_object *fp = to_binder_fd_object(hdr); 2236 - 2237 - binder_debug(BINDER_DEBUG_TRANSACTION, 2238 - " fd %d\n", fp->fd); 2239 - if (failed_at) 2240 - task_close_fd(proc, fp->fd); 2265 + /* 2266 + * No need to close the file here since user-space 2267 + * closes it for successfully delivered 2268 + * transactions. For transactions that weren't 2269 + * delivered, the new fd was never allocated so 2270 + * there is no need to close and the fput on the 2271 + * file is done when the transaction is torn 2272 + * down.
2273 + */ 2274 + WARN_ON(failed_at && 2275 + proc->tsk == current->group_leader); 2241 2276 } break; 2242 2277 case BINDER_TYPE_PTR: 2243 2278 /* ··· 2257 2282 u32 *fd_array; 2258 2283 size_t fd_index; 2259 2284 binder_size_t fd_buf_size; 2285 + 2286 + if (proc->tsk != current->group_leader) { 2287 + /* 2288 + * Nothing to do if running in sender context 2289 + * The fd fixups have not been applied so no 2290 + * fds need to be closed. 2291 + */ 2292 + continue; 2293 + } 2260 2294 2261 2295 fda = to_binder_fd_array_object(hdr); 2262 2296 parent = binder_validate_ptr(buffer, fda->parent, ··· 2299 2315 } 2300 2316 fd_array = (u32 *)(parent_buffer + (uintptr_t)fda->parent_offset); 2301 2317 for (fd_index = 0; fd_index < fda->num_fds; fd_index++) 2302 - task_close_fd(proc, fd_array[fd_index]); 2318 + ksys_close(fd_array[fd_index]); 2303 2319 } break; 2304 2320 default: 2305 2321 pr_err("transaction release %d bad object type %x\n", ··· 2431 2447 return ret; 2432 2448 } 2433 2449 2434 - static int binder_translate_fd(int fd, 2450 + static int binder_translate_fd(u32 *fdp, 2435 2451 struct binder_transaction *t, 2436 2452 struct binder_thread *thread, 2437 2453 struct binder_transaction *in_reply_to) 2438 2454 { 2439 2455 struct binder_proc *proc = thread->proc; 2440 2456 struct binder_proc *target_proc = t->to_proc; 2441 - int target_fd; 2457 + struct binder_txn_fd_fixup *fixup; 2442 2458 struct file *file; 2443 - int ret; 2459 + int ret = 0; 2444 2460 bool target_allows_fd; 2461 + int fd = *fdp; 2445 2462 2446 2463 if (in_reply_to) 2447 2464 target_allows_fd = !!(in_reply_to->flags & TF_ACCEPT_FDS); ··· 2470 2485 goto err_security; 2471 2486 } 2472 2487 2473 - target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC); 2474 - if (target_fd < 0) { 2488 + /* 2489 + * Add fixup record for this transaction. The allocation 2490 + * of the fd in the target needs to be done from a 2491 + * target thread. 
2492 + */ 2493 + fixup = kzalloc(sizeof(*fixup), GFP_KERNEL); 2494 + if (!fixup) { 2475 2495 ret = -ENOMEM; 2476 - goto err_get_unused_fd; 2496 + goto err_alloc; 2477 2497 } 2478 - task_fd_install(target_proc, target_fd, file); 2479 - trace_binder_transaction_fd(t, fd, target_fd); 2480 - binder_debug(BINDER_DEBUG_TRANSACTION, " fd %d -> %d\n", 2481 - fd, target_fd); 2498 + fixup->file = file; 2499 + fixup->offset = (uintptr_t)fdp - (uintptr_t)t->buffer->data; 2500 + trace_binder_transaction_fd_send(t, fd, fixup->offset); 2501 + list_add_tail(&fixup->fixup_entry, &t->fd_fixups); 2482 2502 2483 - return target_fd; 2503 + return ret; 2484 2504 2485 - err_get_unused_fd: 2505 + err_alloc: 2486 2506 err_security: 2487 2507 fput(file); 2488 2508 err_fget: ··· 2501 2511 struct binder_thread *thread, 2502 2512 struct binder_transaction *in_reply_to) 2503 2513 { 2504 - binder_size_t fdi, fd_buf_size, num_installed_fds; 2505 - int target_fd; 2514 + binder_size_t fdi, fd_buf_size; 2506 2515 uintptr_t parent_buffer; 2507 2516 u32 *fd_array; 2508 2517 struct binder_proc *proc = thread->proc; ··· 2533 2544 return -EINVAL; 2534 2545 } 2535 2546 for (fdi = 0; fdi < fda->num_fds; fdi++) { 2536 - target_fd = binder_translate_fd(fd_array[fdi], t, thread, 2547 + int ret = binder_translate_fd(&fd_array[fdi], t, thread, 2537 2548 in_reply_to); 2538 - if (target_fd < 0) 2539 - goto err_translate_fd_failed; 2540 - fd_array[fdi] = target_fd; 2549 + if (ret < 0) 2550 + return ret; 2541 2551 } 2542 2552 return 0; 2543 - 2544 - err_translate_fd_failed: 2545 - /* 2546 - * Failed to allocate fd or security error, free fds 2547 - * installed so far. 
2548 - */ 2549 - num_installed_fds = fdi; 2550 - for (fdi = 0; fdi < num_installed_fds; fdi++) 2551 - task_close_fd(target_proc, fd_array[fdi]); 2552 - return target_fd; 2553 2553 } 2554 2554 2555 2555 static int binder_fixup_parent(struct binder_transaction *t, ··· 2701 2723 { 2702 2724 int ret; 2703 2725 struct binder_transaction *t; 2726 + struct binder_work *w; 2704 2727 struct binder_work *tcomplete; 2705 2728 binder_size_t *offp, *off_end, *off_start; 2706 2729 binder_size_t off_min; ··· 2843 2864 goto err_invalid_target_handle; 2844 2865 } 2845 2866 binder_inner_proc_lock(proc); 2867 + 2868 + w = list_first_entry_or_null(&thread->todo, 2869 + struct binder_work, entry); 2870 + if (!(tr->flags & TF_ONE_WAY) && w && 2871 + w->type == BINDER_WORK_TRANSACTION) { 2872 + /* 2873 + * Do not allow new outgoing transaction from a 2874 + * thread that has a transaction at the head of 2875 + * its todo list. Only need to check the head 2876 + * because binder_select_thread_ilocked picks a 2877 + * thread from proc->waiting_threads to enqueue 2878 + * the transaction, and nothing is queued to the 2879 + * todo list while the thread is on waiting_threads. 
2880 + */ 2881 + binder_user_error("%d:%d new transaction not allowed when there is a transaction on thread todo\n", 2882 + proc->pid, thread->pid); 2883 + binder_inner_proc_unlock(proc); 2884 + return_error = BR_FAILED_REPLY; 2885 + return_error_param = -EPROTO; 2886 + return_error_line = __LINE__; 2887 + goto err_bad_todo_list; 2888 + } 2889 + 2846 2890 if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) { 2847 2891 struct binder_transaction *tmp; 2848 2892 ··· 2913 2911 return_error_line = __LINE__; 2914 2912 goto err_alloc_t_failed; 2915 2913 } 2914 + INIT_LIST_HEAD(&t->fd_fixups); 2916 2915 binder_stats_created(BINDER_STAT_TRANSACTION); 2917 2916 spin_lock_init(&t->lock); 2918 2917 ··· 3069 3066 3070 3067 case BINDER_TYPE_FD: { 3071 3068 struct binder_fd_object *fp = to_binder_fd_object(hdr); 3072 - int target_fd = binder_translate_fd(fp->fd, t, thread, 3073 - in_reply_to); 3069 + int ret = binder_translate_fd(&fp->fd, t, thread, 3070 + in_reply_to); 3074 3071 3075 - if (target_fd < 0) { 3072 + if (ret < 0) { 3076 3073 return_error = BR_FAILED_REPLY; 3077 - return_error_param = target_fd; 3074 + return_error_param = ret; 3078 3075 return_error_line = __LINE__; 3079 3076 goto err_translate_failed; 3080 3077 } 3081 3078 fp->pad_binder = 0; 3082 - fp->fd = target_fd; 3083 3079 } break; 3084 3080 case BINDER_TYPE_FDA: { 3085 3081 struct binder_fd_array_object *fda = ··· 3235 3233 err_bad_offset: 3236 3234 err_bad_parent: 3237 3235 err_copy_data_failed: 3236 + binder_free_txn_fixups(t); 3238 3237 trace_binder_transaction_failed_buffer_release(t->buffer); 3239 3238 binder_transaction_buffer_release(target_proc, t->buffer, offp); 3240 3239 if (target_node) ··· 3250 3247 kfree(t); 3251 3248 binder_stats_deleted(BINDER_STAT_TRANSACTION); 3252 3249 err_alloc_t_failed: 3250 + err_bad_todo_list: 3253 3251 err_bad_call_stack: 3254 3252 err_empty_call_stack: 3255 3253 err_dead_binder: ··· 3296 3292 thread->return_error.cmd = return_error; 3297 3293 
binder_enqueue_thread_work(thread, &thread->return_error.work); 3298 3294 } 3295 + } 3296 + 3297 + /** 3298 + * binder_free_buf() - free the specified buffer 3299 + * @proc: binder proc that owns buffer 3300 + * @buffer: buffer to be freed 3301 + * 3302 + * If buffer for an async transaction, enqueue the next async 3303 + * transaction from the node. 3304 + * 3305 + * Cleanup buffer and free it. 3306 + */ 3307 + static void 3308 + binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer) 3309 + { 3310 + if (buffer->transaction) { 3311 + buffer->transaction->buffer = NULL; 3312 + buffer->transaction = NULL; 3313 + } 3314 + if (buffer->async_transaction && buffer->target_node) { 3315 + struct binder_node *buf_node; 3316 + struct binder_work *w; 3317 + 3318 + buf_node = buffer->target_node; 3319 + binder_node_inner_lock(buf_node); 3320 + BUG_ON(!buf_node->has_async_transaction); 3321 + BUG_ON(buf_node->proc != proc); 3322 + w = binder_dequeue_work_head_ilocked( 3323 + &buf_node->async_todo); 3324 + if (!w) { 3325 + buf_node->has_async_transaction = false; 3326 + } else { 3327 + binder_enqueue_work_ilocked( 3328 + w, &proc->todo); 3329 + binder_wakeup_proc_ilocked(proc); 3330 + } 3331 + binder_node_inner_unlock(buf_node); 3332 + } 3333 + trace_binder_transaction_buffer_release(buffer); 3334 + binder_transaction_buffer_release(proc, buffer, NULL); 3335 + binder_alloc_free_buf(&proc->alloc, buffer); 3299 3336 } 3300 3337 3301 3338 static int binder_thread_write(struct binder_proc *proc, ··· 3525 3480 proc->pid, thread->pid, (u64)data_ptr, 3526 3481 buffer->debug_id, 3527 3482 buffer->transaction ? 
"active" : "finished"); 3528 - 3529 - if (buffer->transaction) { 3530 - buffer->transaction->buffer = NULL; 3531 - buffer->transaction = NULL; 3532 - } 3533 - if (buffer->async_transaction && buffer->target_node) { 3534 - struct binder_node *buf_node; 3535 - struct binder_work *w; 3536 - 3537 - buf_node = buffer->target_node; 3538 - binder_node_inner_lock(buf_node); 3539 - BUG_ON(!buf_node->has_async_transaction); 3540 - BUG_ON(buf_node->proc != proc); 3541 - w = binder_dequeue_work_head_ilocked( 3542 - &buf_node->async_todo); 3543 - if (!w) { 3544 - buf_node->has_async_transaction = false; 3545 - } else { 3546 - binder_enqueue_work_ilocked( 3547 - w, &proc->todo); 3548 - binder_wakeup_proc_ilocked(proc); 3549 - } 3550 - binder_node_inner_unlock(buf_node); 3551 - } 3552 - trace_binder_transaction_buffer_release(buffer); 3553 - binder_transaction_buffer_release(proc, buffer, NULL); 3554 - binder_alloc_free_buf(&proc->alloc, buffer); 3483 + binder_free_buf(proc, buffer); 3555 3484 break; 3556 3485 } 3557 3486 ··· 3848 3829 return ret; 3849 3830 } 3850 3831 3832 + /** 3833 + * binder_apply_fd_fixups() - finish fd translation 3834 + * @t: binder transaction with list of fd fixups 3835 + * 3836 + * Now that we are in the context of the transaction target 3837 + * process, we can allocate and install fds. Process the 3838 + * list of fds to translate and fixup the buffer with the 3839 + * new fds. 3840 + * 3841 + * If we fail to allocate an fd, then free the resources by 3842 + * fput'ing files that have not been processed and ksys_close'ing 3843 + * any fds that have already been allocated. 
3844 + */ 3845 + static int binder_apply_fd_fixups(struct binder_transaction *t) 3846 + { 3847 + struct binder_txn_fd_fixup *fixup, *tmp; 3848 + int ret = 0; 3849 + 3850 + list_for_each_entry(fixup, &t->fd_fixups, fixup_entry) { 3851 + int fd = get_unused_fd_flags(O_CLOEXEC); 3852 + u32 *fdp; 3853 + 3854 + if (fd < 0) { 3855 + binder_debug(BINDER_DEBUG_TRANSACTION, 3856 + "failed fd fixup txn %d fd %d\n", 3857 + t->debug_id, fd); 3858 + ret = -ENOMEM; 3859 + break; 3860 + } 3861 + binder_debug(BINDER_DEBUG_TRANSACTION, 3862 + "fd fixup txn %d fd %d\n", 3863 + t->debug_id, fd); 3864 + trace_binder_transaction_fd_recv(t, fd, fixup->offset); 3865 + fd_install(fd, fixup->file); 3866 + fixup->file = NULL; 3867 + fdp = (u32 *)(t->buffer->data + fixup->offset); 3868 + /* 3869 + * This store can cause problems for CPUs with a 3870 + * VIVT cache (eg ARMv5) since the cache cannot 3871 + * detect virtual aliases to the same physical cacheline. 3872 + * To support VIVT, this address and the user-space VA 3873 + * would both need to be flushed. Since this kernel 3874 + * VA is not constructed via page_to_virt(), we can't 3875 + * use flush_dcache_page() on it, so we'd have to use 3876 + * an internal function. If devices with VIVT ever 3877 + * need to run Android, we'll either need to go back 3878 + * to patching the translated fd from the sender side 3879 + * (using the non-standard kernel functions), or rework 3880 + * how the kernel uses the buffer to use page_to_virt() 3881 + * addresses instead of allocating in our own vm area. 3882 + * 3883 + * For now, we disable compilation if CONFIG_CPU_CACHE_VIVT. 
3884 + */ 3885 + *fdp = fd; 3886 + } 3887 + list_for_each_entry_safe(fixup, tmp, &t->fd_fixups, fixup_entry) { 3888 + if (fixup->file) { 3889 + fput(fixup->file); 3890 + } else if (ret) { 3891 + u32 *fdp = (u32 *)(t->buffer->data + fixup->offset); 3892 + 3893 + ksys_close(*fdp); 3894 + } 3895 + list_del(&fixup->fixup_entry); 3896 + kfree(fixup); 3897 + } 3898 + 3899 + return ret; 3900 + } 3901 + 3851 3902 static int binder_thread_read(struct binder_proc *proc, 3852 3903 struct binder_thread *thread, 3853 3904 binder_uintptr_t binder_buffer, size_t size, ··· 4199 4110 tr.sender_pid = 0; 4200 4111 } 4201 4112 4113 + ret = binder_apply_fd_fixups(t); 4114 + if (ret) { 4115 + struct binder_buffer *buffer = t->buffer; 4116 + bool oneway = !!(t->flags & TF_ONE_WAY); 4117 + int tid = t->debug_id; 4118 + 4119 + if (t_from) 4120 + binder_thread_dec_tmpref(t_from); 4121 + buffer->transaction = NULL; 4122 + binder_cleanup_transaction(t, "fd fixups failed", 4123 + BR_FAILED_REPLY); 4124 + binder_free_buf(proc, buffer); 4125 + binder_debug(BINDER_DEBUG_FAILED_TRANSACTION, 4126 + "%d:%d %stransaction %d fd fixups failed %d/%d, line %d\n", 4127 + proc->pid, thread->pid, 4128 + oneway ? "async " : 4129 + (cmd == BR_REPLY ? 
"reply " : ""), 4130 + tid, BR_FAILED_REPLY, ret, __LINE__); 4131 + if (cmd == BR_REPLY) { 4132 + cmd = BR_FAILED_REPLY; 4133 + if (put_user(cmd, (uint32_t __user *)ptr)) 4134 + return -EFAULT; 4135 + ptr += sizeof(uint32_t); 4136 + binder_stat_br(proc, thread, cmd); 4137 + break; 4138 + } 4139 + continue; 4140 + } 4202 4141 tr.data_size = t->buffer->data_size; 4203 4142 tr.offsets_size = t->buffer->offsets_size; 4204 4143 tr.data.ptr.buffer = (binder_uintptr_t) ··· 4661 4544 return ret; 4662 4545 } 4663 4546 4547 + static int binder_ioctl_get_node_info_for_ref(struct binder_proc *proc, 4548 + struct binder_node_info_for_ref *info) 4549 + { 4550 + struct binder_node *node; 4551 + struct binder_context *context = proc->context; 4552 + __u32 handle = info->handle; 4553 + 4554 + if (info->strong_count || info->weak_count || info->reserved1 || 4555 + info->reserved2 || info->reserved3) { 4556 + binder_user_error("%d BINDER_GET_NODE_INFO_FOR_REF: only handle may be non-zero.", 4557 + proc->pid); 4558 + return -EINVAL; 4559 + } 4560 + 4561 + /* This ioctl may only be used by the context manager */ 4562 + mutex_lock(&context->context_mgr_node_lock); 4563 + if (!context->binder_context_mgr_node || 4564 + context->binder_context_mgr_node->proc != proc) { 4565 + mutex_unlock(&context->context_mgr_node_lock); 4566 + return -EPERM; 4567 + } 4568 + mutex_unlock(&context->context_mgr_node_lock); 4569 + 4570 + node = binder_get_node_from_ref(proc, handle, true, NULL); 4571 + if (!node) 4572 + return -EINVAL; 4573 + 4574 + info->strong_count = node->local_strong_refs + 4575 + node->internal_strong_refs; 4576 + info->weak_count = node->local_weak_refs; 4577 + 4578 + binder_put_node(node); 4579 + 4580 + return 0; 4581 + } 4582 + 4664 4583 static int binder_ioctl_get_node_debug_info(struct binder_proc *proc, 4665 4584 struct binder_node_debug_info *info) 4666 4585 { ··· 4791 4638 } 4792 4639 break; 4793 4640 } 4641 + case BINDER_GET_NODE_INFO_FOR_REF: { 4642 + struct 
binder_node_info_for_ref info; 4643 + 4644 + if (copy_from_user(&info, ubuf, sizeof(info))) { 4645 + ret = -EFAULT; 4646 + goto err; 4647 + } 4648 + 4649 + ret = binder_ioctl_get_node_info_for_ref(proc, &info); 4650 + if (ret < 0) 4651 + goto err; 4652 + 4653 + if (copy_to_user(ubuf, &info, sizeof(info))) { 4654 + ret = -EFAULT; 4655 + goto err; 4656 + } 4657 + 4658 + break; 4659 + } 4794 4660 case BINDER_GET_NODE_DEBUG_INFO: { 4795 4661 struct binder_node_debug_info info; 4796 4662 ··· 4865 4693 (vma->vm_end - vma->vm_start) / SZ_1K, vma->vm_flags, 4866 4694 (unsigned long)pgprot_val(vma->vm_page_prot)); 4867 4695 binder_alloc_vma_close(&proc->alloc); 4868 - binder_defer_work(proc, BINDER_DEFERRED_PUT_FILES); 4869 4696 } 4870 4697 4871 4698 static vm_fault_t binder_vm_fault(struct vm_fault *vmf) ··· 4910 4739 ret = binder_alloc_mmap_handler(&proc->alloc, vma); 4911 4740 if (ret) 4912 4741 return ret; 4913 - mutex_lock(&proc->files_lock); 4914 - proc->files = get_files_struct(current); 4915 - mutex_unlock(&proc->files_lock); 4916 4742 return 0; 4917 4743 4918 4744 err_bad_arg: ··· 4933 4765 spin_lock_init(&proc->outer_lock); 4934 4766 get_task_struct(current->group_leader); 4935 4767 proc->tsk = current->group_leader; 4936 - mutex_init(&proc->files_lock); 4937 4768 INIT_LIST_HEAD(&proc->todo); 4938 4769 proc->default_priority = task_nice(current); 4939 4770 binder_dev = container_of(filp->private_data, struct binder_device, ··· 5082 4915 struct rb_node *n; 5083 4916 int threads, nodes, incoming_refs, outgoing_refs, active_transactions; 5084 4917 5085 - BUG_ON(proc->files); 5086 - 5087 4918 mutex_lock(&binder_procs_lock); 5088 4919 hlist_del(&proc->proc_node); 5089 4920 mutex_unlock(&binder_procs_lock); ··· 5163 4998 static void binder_deferred_func(struct work_struct *work) 5164 4999 { 5165 5000 struct binder_proc *proc; 5166 - struct files_struct *files; 5167 5001 5168 5002 int defer; 5169 5003 ··· 5180 5016 } 5181 5017 mutex_unlock(&binder_deferred_lock); 5182 
5018 5183 - files = NULL; 5184 - if (defer & BINDER_DEFERRED_PUT_FILES) { 5185 - mutex_lock(&proc->files_lock); 5186 - files = proc->files; 5187 - if (files) 5188 - proc->files = NULL; 5189 - mutex_unlock(&proc->files_lock); 5190 - } 5191 - 5192 5019 if (defer & BINDER_DEFERRED_FLUSH) 5193 5020 binder_deferred_flush(proc); 5194 5021 5195 5022 if (defer & BINDER_DEFERRED_RELEASE) 5196 5023 binder_deferred_release(proc); /* frees proc */ 5197 - 5198 - if (files) 5199 - put_files_struct(files); 5200 5024 } while (proc); 5201 5025 } 5202 5026 static DECLARE_WORK(binder_deferred_work, binder_deferred_func); ··· 5819 5667 * Copy the module_parameter string, because we don't want to 5820 5668 * tokenize it in-place. 5821 5669 */ 5822 - device_names = kzalloc(strlen(binder_devices_param) + 1, GFP_KERNEL); 5670 + device_names = kstrdup(binder_devices_param, GFP_KERNEL); 5823 5671 if (!device_names) { 5824 5672 ret = -ENOMEM; 5825 5673 goto err_alloc_device_names_failed; 5826 5674 } 5827 - strcpy(device_names, binder_devices_param); 5828 5675 5829 5676 device_tmp = device_names; 5830 5677 while ((device_name = strsep(&device_tmp, ","))) {
+27 -9
drivers/android/binder_trace.h
··· 223 223 __entry->dest_ref_debug_id, __entry->dest_ref_desc) 224 224 ); 225 225 226 - TRACE_EVENT(binder_transaction_fd, 227 - TP_PROTO(struct binder_transaction *t, int src_fd, int dest_fd), 228 - TP_ARGS(t, src_fd, dest_fd), 226 + TRACE_EVENT(binder_transaction_fd_send, 227 + TP_PROTO(struct binder_transaction *t, int fd, size_t offset), 228 + TP_ARGS(t, fd, offset), 229 229 230 230 TP_STRUCT__entry( 231 231 __field(int, debug_id) 232 - __field(int, src_fd) 233 - __field(int, dest_fd) 232 + __field(int, fd) 233 + __field(size_t, offset) 234 234 ), 235 235 TP_fast_assign( 236 236 __entry->debug_id = t->debug_id; 237 - __entry->src_fd = src_fd; 238 - __entry->dest_fd = dest_fd; 237 + __entry->fd = fd; 238 + __entry->offset = offset; 239 239 ), 240 - TP_printk("transaction=%d src_fd=%d ==> dest_fd=%d", 241 - __entry->debug_id, __entry->src_fd, __entry->dest_fd) 240 + TP_printk("transaction=%d src_fd=%d offset=%zu", 241 + __entry->debug_id, __entry->fd, __entry->offset) 242 + ); 243 + 244 + TRACE_EVENT(binder_transaction_fd_recv, 245 + TP_PROTO(struct binder_transaction *t, int fd, size_t offset), 246 + TP_ARGS(t, fd, offset), 247 + 248 + TP_STRUCT__entry( 249 + __field(int, debug_id) 250 + __field(int, fd) 251 + __field(size_t, offset) 252 + ), 253 + TP_fast_assign( 254 + __entry->debug_id = t->debug_id; 255 + __entry->fd = fd; 256 + __entry->offset = offset; 257 + ), 258 + TP_printk("transaction=%d dest_fd=%d offset=%zu", 259 + __entry->debug_id, __entry->fd, __entry->offset) 242 260 ); 243 261 244 262 DECLARE_EVENT_CLASS(binder_buffer_class,
+29 -31
drivers/extcon/extcon-intel-cht-wc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Extcon charger detection driver for Intel Cherrytrail Whiskey Cove PMIC 3 4 * Copyright (C) 2017 Hans de Goede <hdegoede@redhat.com> 4 5 * 5 6 * Based on various non upstream patches to support the CHT Whiskey Cove PMIC: 6 7 * Copyright (C) 2013-2015 Intel Corporation. All rights reserved. 7 - * 8 - * This program is free software; you can redistribute it and/or modify it 9 - * under the terms and conditions of the GNU General Public License, 10 - * version 2, as published by the Free Software Foundation. 11 - * 12 - * This program is distributed in the hope it will be useful, but WITHOUT 13 - * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 14 - * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 15 - * more details. 16 8 */ 17 9 18 10 #include <linux/extcon-provider.h> ··· 24 32 #define CHT_WC_CHGRCTRL0_EMRGCHREN BIT(1) 25 33 #define CHT_WC_CHGRCTRL0_EXTCHRDIS BIT(2) 26 34 #define CHT_WC_CHGRCTRL0_SWCONTROL BIT(3) 27 - #define CHT_WC_CHGRCTRL0_TTLCK_MASK BIT(4) 28 - #define CHT_WC_CHGRCTRL0_CCSM_OFF_MASK BIT(5) 29 - #define CHT_WC_CHGRCTRL0_DBPOFF_MASK BIT(6) 30 - #define CHT_WC_CHGRCTRL0_WDT_NOKICK BIT(7) 35 + #define CHT_WC_CHGRCTRL0_TTLCK BIT(4) 36 + #define CHT_WC_CHGRCTRL0_CCSM_OFF BIT(5) 37 + #define CHT_WC_CHGRCTRL0_DBPOFF BIT(6) 38 + #define CHT_WC_CHGRCTRL0_CHR_WDT_NOKICK BIT(7) 31 39 32 40 #define CHT_WC_CHGRCTRL1 0x5e17 33 41 ··· 44 52 #define CHT_WC_USBSRC_TYPE_ACA 4 45 53 #define CHT_WC_USBSRC_TYPE_SE1 5 46 54 #define CHT_WC_USBSRC_TYPE_MHL 6 47 - #define CHT_WC_USBSRC_TYPE_FLOAT_DP_DN 7 55 + #define CHT_WC_USBSRC_TYPE_FLOATING 7 48 56 #define CHT_WC_USBSRC_TYPE_OTHER 8 49 57 #define CHT_WC_USBSRC_TYPE_DCP_EXTPHY 9 50 58 ··· 53 61 #define CHT_WC_PWRSRC_STS 0x6e1e 54 62 #define CHT_WC_PWRSRC_VBUS BIT(0) 55 63 #define CHT_WC_PWRSRC_DC BIT(1) 56 - #define CHT_WC_PWRSRC_BAT BIT(2) 57 - #define CHT_WC_PWRSRC_ID_GND BIT(3) 58 - #define 
CHT_WC_PWRSRC_ID_FLOAT BIT(4) 64 + #define CHT_WC_PWRSRC_BATT BIT(2) 65 + #define CHT_WC_PWRSRC_USBID_MASK GENMASK(4, 3) 66 + #define CHT_WC_PWRSRC_USBID_SHIFT 3 67 + #define CHT_WC_PWRSRC_RID_ACA 0 68 + #define CHT_WC_PWRSRC_RID_GND 1 69 + #define CHT_WC_PWRSRC_RID_FLOAT 2 59 70 60 71 #define CHT_WC_VBUS_GPIO_CTLO 0x6e2d 61 72 #define CHT_WC_VBUS_GPIO_CTLO_OUTPUT BIT(0) ··· 99 104 100 105 static int cht_wc_extcon_get_id(struct cht_wc_extcon_data *ext, int pwrsrc_sts) 101 106 { 102 - if (pwrsrc_sts & CHT_WC_PWRSRC_ID_GND) 107 + switch ((pwrsrc_sts & CHT_WC_PWRSRC_USBID_MASK) >> CHT_WC_PWRSRC_USBID_SHIFT) { 108 + case CHT_WC_PWRSRC_RID_GND: 103 109 return USB_ID_GND; 104 - if (pwrsrc_sts & CHT_WC_PWRSRC_ID_FLOAT) 110 + case CHT_WC_PWRSRC_RID_FLOAT: 105 111 return USB_ID_FLOAT; 106 - 107 - /* 108 - * Once we have iio support for the gpadc we should read the USBID 109 - * gpadc channel here and determine ACA role based on that. 110 - */ 111 - return USB_ID_FLOAT; 112 + case CHT_WC_PWRSRC_RID_ACA: 113 + default: 114 + /* 115 + * Once we have IIO support for the GPADC we should read 116 + * the USBID GPADC channel here and determine ACA role 117 + * based on that. 118 + */ 119 + return USB_ID_FLOAT; 120 + } 112 121 } 113 122 114 123 static int cht_wc_extcon_get_charger(struct cht_wc_extcon_data *ext, ··· 155 156 dev_warn(ext->dev, 156 157 "Unhandled charger type %d, defaulting to SDP\n", 157 158 ret); 158 - /* Fall through, treat as SDP */ 159 + return EXTCON_CHG_USB_SDP; 159 160 case CHT_WC_USBSRC_TYPE_SDP: 160 - case CHT_WC_USBSRC_TYPE_FLOAT_DP_DN: 161 + case CHT_WC_USBSRC_TYPE_FLOATING: 161 162 case CHT_WC_USBSRC_TYPE_OTHER: 162 163 return EXTCON_CHG_USB_SDP; 163 164 case CHT_WC_USBSRC_TYPE_CDP: ··· 278 279 { 279 280 int ret, mask, val; 280 281 281 - mask = CHT_WC_CHGRCTRL0_SWCONTROL | CHT_WC_CHGRCTRL0_CCSM_OFF_MASK; 282 + mask = CHT_WC_CHGRCTRL0_SWCONTROL | CHT_WC_CHGRCTRL0_CCSM_OFF; 282 283 val = enable ? 
mask : 0; 283 284 ret = regmap_update_bits(ext->regmap, CHT_WC_CHGRCTRL0, mask, val); 284 285 if (ret) ··· 291 292 { 292 293 struct intel_soc_pmic *pmic = dev_get_drvdata(pdev->dev.parent); 293 294 struct cht_wc_extcon_data *ext; 295 + unsigned long mask = ~(CHT_WC_PWRSRC_VBUS | CHT_WC_PWRSRC_USBID_MASK); 294 296 int irq, ret; 295 297 296 298 irq = platform_get_irq(pdev, 0); ··· 352 352 } 353 353 354 354 /* Unmask irqs */ 355 - ret = regmap_write(ext->regmap, CHT_WC_PWRSRC_IRQ_MASK, 356 - (int)~(CHT_WC_PWRSRC_VBUS | CHT_WC_PWRSRC_ID_GND | 357 - CHT_WC_PWRSRC_ID_FLOAT)); 355 + ret = regmap_write(ext->regmap, CHT_WC_PWRSRC_IRQ_MASK, mask); 358 356 if (ret) { 359 357 dev_err(ext->dev, "Error writing irq-mask: %d\n", ret); 360 358 goto disable_sw_control;
+2 -10
drivers/extcon/extcon-intel-int3496.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Intel INT3496 ACPI device extcon driver 3 4 * ··· 8 7 * 9 8 * Copyright (c) 2014, Intel Corporation. 10 9 * Author: David Cohen <david.a.cohen@linux.intel.com> 11 - * 12 - * This program is free software; you can redistribute it and/or modify 13 - * it under the terms of the GNU General Public License version 2 as 14 - * published by the Free Software Foundation. 15 - * 16 - * This program is distributed in the hope that it will be useful, 17 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 18 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 19 - * GNU General Public License for more details. 20 10 */ 21 11 22 12 #include <linux/acpi.h> ··· 184 192 185 193 MODULE_AUTHOR("Hans de Goede <hdegoede@redhat.com>"); 186 194 MODULE_DESCRIPTION("Intel INT3496 ACPI device extcon driver"); 187 - MODULE_LICENSE("GPL"); 195 + MODULE_LICENSE("GPL v2");
+7 -17
drivers/extcon/extcon-max14577.c
··· 1 - /* 2 - * extcon-max14577.c - MAX14577/77836 extcon driver to support MUIC 3 - * 4 - * Copyright (C) 2013,2014 Samsung Electronics 5 - * Chanwoo Choi <cw00.choi@samsung.com> 6 - * Krzysztof Kozlowski <krzk@kernel.org> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License, or 11 - * (at your option) any later version. 12 - * 13 - * This program is distributed in the hope that it will be useful, 14 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 15 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 16 - * GNU General Public License for more details. 17 - */ 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // 3 + // extcon-max14577.c - MAX14577/77836 extcon driver to support MUIC 4 + // 5 + // Copyright (C) 2013,2014 Samsung Electronics 6 + // Chanwoo Choi <cw00.choi@samsung.com> 7 + // Krzysztof Kozlowski <krzk@kernel.org> 18 8 19 9 #include <linux/kernel.h> 20 10 #include <linux/module.h>
+6 -16
drivers/extcon/extcon-max77693.c
··· 1 - /* 2 - * extcon-max77693.c - MAX77693 extcon driver to support MAX77693 MUIC 3 - * 4 - * Copyright (C) 2012 Samsung Electrnoics 5 - * Chanwoo Choi <cw00.choi@samsung.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License as published by 9 - * the Free Software Foundation; either version 2 of the License, or 10 - * (at your option) any later version. 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - */ 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // 3 + // extcon-max77693.c - MAX77693 extcon driver to support MAX77693 MUIC 4 + // 5 + // Copyright (C) 2012 Samsung Electrnoics 6 + // Chanwoo Choi <cw00.choi@samsung.com> 17 7 18 8 #include <linux/kernel.h> 19 9 #include <linux/module.h>
+7 -12
drivers/extcon/extcon-max77843.c
··· 1 - /* 2 - * extcon-max77843.c - Maxim MAX77843 extcon driver to support 3 - * MUIC(Micro USB Interface Controller) 4 - * 5 - * Copyright (C) 2015 Samsung Electronics 6 - * Author: Jaewon Kim <jaewon02.kim@samsung.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License as published by 10 - * the Free Software Foundation; either version 2 of the License, or 11 - * (at your option) any later version. 12 - */ 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // 3 + // extcon-max77843.c - Maxim MAX77843 extcon driver to support 4 + // MUIC(Micro USB Interface Controller) 5 + // 6 + // Copyright (C) 2015 Samsung Electronics 7 + // Author: Jaewon Kim <jaewon02.kim@samsung.com> 13 8 14 9 #include <linux/extcon-provider.h> 15 10 #include <linux/i2c.h>
+6 -16
drivers/extcon/extcon-max8997.c
··· 1 - /* 2 - * extcon-max8997.c - MAX8997 extcon driver to support MAX8997 MUIC 3 - * 4 - * Copyright (C) 2012 Samsung Electronics 5 - * Donggeun Kim <dg77.kim@samsung.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License as published by 9 - * the Free Software Foundation; either version 2 of the License, or 10 - * (at your option) any later version. 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - */ 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + // 3 + // extcon-max8997.c - MAX8997 extcon driver to support MAX8997 MUIC 4 + // 5 + // Copyright (C) 2012 Samsung Electronics 6 + // Donggeun Kim <dg77.kim@samsung.com> 17 7 18 8 #include <linux/kernel.h> 19 9 #include <linux/module.h>
+4 -11
drivers/extcon/extcon.c
··· 628 628 unsigned long flags; 629 629 int index, ret = 0; 630 630 631 - *prop_val = (union extcon_property_value)(0); 631 + *prop_val = (union extcon_property_value){0}; 632 632 633 633 if (!edev) 634 634 return -EINVAL; ··· 1123 1123 (unsigned long)atomic_inc_return(&edev_no)); 1124 1124 1125 1125 if (edev->max_supported) { 1126 - char buf[10]; 1127 1126 char *str; 1128 1127 struct extcon_cable *cable; 1129 1128 ··· 1136 1137 for (index = 0; index < edev->max_supported; index++) { 1137 1138 cable = &edev->cables[index]; 1138 1139 1139 - snprintf(buf, 10, "cable.%d", index); 1140 - str = kzalloc(strlen(buf) + 1, 1141 - GFP_KERNEL); 1140 + str = kasprintf(GFP_KERNEL, "cable.%d", index); 1142 1141 if (!str) { 1143 1142 for (index--; index >= 0; index--) { 1144 1143 cable = &edev->cables[index]; ··· 1146 1149 1147 1150 goto err_alloc_cables; 1148 1151 } 1149 - strcpy(str, buf); 1150 1152 1151 1153 cable->edev = edev; 1152 1154 cable->cable_index = index; ··· 1168 1172 } 1169 1173 1170 1174 if (edev->max_supported && edev->mutually_exclusive) { 1171 - char buf[80]; 1172 1175 char *name; 1173 1176 1174 1177 /* Count the size of mutually_exclusive array */ ··· 1192 1197 } 1193 1198 1194 1199 for (index = 0; edev->mutually_exclusive[index]; index++) { 1195 - sprintf(buf, "0x%x", edev->mutually_exclusive[index]); 1196 - name = kzalloc(strlen(buf) + 1, 1197 - GFP_KERNEL); 1200 + name = kasprintf(GFP_KERNEL, "0x%x", 1201 + edev->mutually_exclusive[index]); 1198 1202 if (!name) { 1199 1203 for (index--; index >= 0; index--) { 1200 1204 kfree(edev->d_attrs_muex[index].attr. ··· 1204 1210 ret = -ENOMEM; 1205 1211 goto err_muex; 1206 1212 } 1207 - strcpy(name, buf); 1208 1213 sysfs_attr_init(&edev->d_attrs_muex[index].attr); 1209 1214 edev->d_attrs_muex[index].attr.name = name; 1210 1215 edev->d_attrs_muex[index].attr.mode = 0000;
+14 -20
drivers/firmware/google/Kconfig
··· 10 10 11 11 config GOOGLE_SMI 12 12 tristate "SMI interface for Google platforms" 13 - depends on X86 && ACPI && DMI && EFI 14 - select EFI_VARS 13 + depends on X86 && ACPI && DMI 15 14 help 16 15 Say Y here if you want to enable SMI callbacks for Google 17 16 platforms. This provides an interface for writing to and 18 - clearing the EFI event log and reading and writing NVRAM 17 + clearing the event log. If EFI_VARS is also enabled this 18 + driver provides an interface for reading and writing NVRAM 19 19 variables. 20 20 21 21 config GOOGLE_COREBOOT_TABLE 22 - tristate 23 - depends on GOOGLE_COREBOOT_TABLE_ACPI || GOOGLE_COREBOOT_TABLE_OF 24 - 25 - config GOOGLE_COREBOOT_TABLE_ACPI 26 - tristate "Coreboot Table Access - ACPI" 27 - depends on ACPI 28 - select GOOGLE_COREBOOT_TABLE 22 + tristate "Coreboot Table Access" 23 + depends on ACPI || OF 29 24 help 30 25 This option enables the coreboot_table module, which provides other 31 - firmware modules to access to the coreboot table. The coreboot table 32 - pointer is accessed through the ACPI "GOOGCB00" object. 26 + firmware modules access to the coreboot table. The coreboot table 27 + pointer is accessed through the ACPI "GOOGCB00" object or the 28 + device tree node /firmware/coreboot. 33 29 If unsure say N. 34 30 35 - config GOOGLE_COREBOOT_TABLE_OF 36 - tristate "Coreboot Table Access - Device Tree" 37 - depends on OF 31 + config GOOGLE_COREBOOT_TABLE_ACPI 32 + tristate 38 33 select GOOGLE_COREBOOT_TABLE 39 - help 40 - This option enable the coreboot_table module, which provide other 41 - firmware modules to access coreboot table. The coreboot table pointer 42 - is accessed through the device tree node /firmware/coreboot. 43 - If unsure say N. 34 + 35 + config GOOGLE_COREBOOT_TABLE_OF 36 + tristate 37 + select GOOGLE_COREBOOT_TABLE 44 38 45 39 config GOOGLE_MEMCONSOLE 46 40 tristate
-2
drivers/firmware/google/Makefile
··· 2 2 3 3 obj-$(CONFIG_GOOGLE_SMI) += gsmi.o 4 4 obj-$(CONFIG_GOOGLE_COREBOOT_TABLE) += coreboot_table.o 5 - obj-$(CONFIG_GOOGLE_COREBOOT_TABLE_ACPI) += coreboot_table-acpi.o 6 - obj-$(CONFIG_GOOGLE_COREBOOT_TABLE_OF) += coreboot_table-of.o 7 5 obj-$(CONFIG_GOOGLE_FRAMEBUFFER_COREBOOT) += framebuffer-coreboot.o 8 6 obj-$(CONFIG_GOOGLE_MEMCONSOLE) += memconsole.o 9 7 obj-$(CONFIG_GOOGLE_MEMCONSOLE_COREBOOT) += memconsole-coreboot.o
-88
drivers/firmware/google/coreboot_table-acpi.c
··· 1 - /* 2 - * coreboot_table-acpi.c 3 - * 4 - * Using ACPI to locate Coreboot table and provide coreboot table access. 5 - * 6 - * Copyright 2017 Google Inc. 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License v2.0 as published by 10 - * the Free Software Foundation. 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - */ 17 - 18 - #include <linux/acpi.h> 19 - #include <linux/device.h> 20 - #include <linux/err.h> 21 - #include <linux/init.h> 22 - #include <linux/io.h> 23 - #include <linux/kernel.h> 24 - #include <linux/module.h> 25 - #include <linux/platform_device.h> 26 - 27 - #include "coreboot_table.h" 28 - 29 - static int coreboot_table_acpi_probe(struct platform_device *pdev) 30 - { 31 - phys_addr_t phyaddr; 32 - resource_size_t len; 33 - struct coreboot_table_header __iomem *header = NULL; 34 - struct resource *res; 35 - void __iomem *ptr = NULL; 36 - 37 - res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 38 - if (!res) 39 - return -EINVAL; 40 - 41 - len = resource_size(res); 42 - if (!res->start || !len) 43 - return -EINVAL; 44 - 45 - phyaddr = res->start; 46 - header = ioremap_cache(phyaddr, sizeof(*header)); 47 - if (header == NULL) 48 - return -ENOMEM; 49 - 50 - ptr = ioremap_cache(phyaddr, 51 - header->header_bytes + header->table_bytes); 52 - iounmap(header); 53 - if (!ptr) 54 - return -ENOMEM; 55 - 56 - return coreboot_table_init(&pdev->dev, ptr); 57 - } 58 - 59 - static int coreboot_table_acpi_remove(struct platform_device *pdev) 60 - { 61 - return coreboot_table_exit(); 62 - } 63 - 64 - static const struct acpi_device_id cros_coreboot_acpi_match[] = { 65 - { "GOOGCB00", 0 }, 66 - { "BOOT0000", 0 }, 67 - { } 68 - }; 69 - 
MODULE_DEVICE_TABLE(acpi, cros_coreboot_acpi_match); 70 - 71 - static struct platform_driver coreboot_table_acpi_driver = { 72 - .probe = coreboot_table_acpi_probe, 73 - .remove = coreboot_table_acpi_remove, 74 - .driver = { 75 - .name = "coreboot_table_acpi", 76 - .acpi_match_table = ACPI_PTR(cros_coreboot_acpi_match), 77 - }, 78 - }; 79 - 80 - static int __init coreboot_table_acpi_init(void) 81 - { 82 - return platform_driver_register(&coreboot_table_acpi_driver); 83 - } 84 - 85 - module_init(coreboot_table_acpi_init); 86 - 87 - MODULE_AUTHOR("Google, Inc."); 88 - MODULE_LICENSE("GPL");
-82
drivers/firmware/google/coreboot_table-of.c
··· 1 - /* 2 - * coreboot_table-of.c 3 - * 4 - * Coreboot table access through open firmware. 5 - * 6 - * Copyright 2017 Google Inc. 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License v2.0 as published by 10 - * the Free Software Foundation. 11 - * 12 - * This program is distributed in the hope that it will be useful, 13 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 14 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 15 - * GNU General Public License for more details. 16 - */ 17 - 18 - #include <linux/device.h> 19 - #include <linux/io.h> 20 - #include <linux/module.h> 21 - #include <linux/of_address.h> 22 - #include <linux/of_platform.h> 23 - #include <linux/platform_device.h> 24 - 25 - #include "coreboot_table.h" 26 - 27 - static int coreboot_table_of_probe(struct platform_device *pdev) 28 - { 29 - struct device_node *fw_dn = pdev->dev.of_node; 30 - void __iomem *ptr; 31 - 32 - ptr = of_iomap(fw_dn, 0); 33 - of_node_put(fw_dn); 34 - if (!ptr) 35 - return -ENOMEM; 36 - 37 - return coreboot_table_init(&pdev->dev, ptr); 38 - } 39 - 40 - static int coreboot_table_of_remove(struct platform_device *pdev) 41 - { 42 - return coreboot_table_exit(); 43 - } 44 - 45 - static const struct of_device_id coreboot_of_match[] = { 46 - { .compatible = "coreboot" }, 47 - {}, 48 - }; 49 - 50 - static struct platform_driver coreboot_table_of_driver = { 51 - .probe = coreboot_table_of_probe, 52 - .remove = coreboot_table_of_remove, 53 - .driver = { 54 - .name = "coreboot_table_of", 55 - .of_match_table = coreboot_of_match, 56 - }, 57 - }; 58 - 59 - static int __init platform_coreboot_table_of_init(void) 60 - { 61 - struct platform_device *pdev; 62 - struct device_node *of_node; 63 - 64 - /* Limit device creation to the presence of /firmware/coreboot node */ 65 - of_node = of_find_node_by_path("/firmware/coreboot"); 66 - if (!of_node) 67 - return -ENODEV; 68 - 69 
- if (!of_match_node(coreboot_of_match, of_node)) 70 - return -ENODEV; 71 - 72 - pdev = of_platform_device_create(of_node, "coreboot_table_of", NULL); 73 - if (!pdev) 74 - return -ENODEV; 75 - 76 - return platform_driver_register(&coreboot_table_of_driver); 77 - } 78 - 79 - module_init(platform_coreboot_table_of_init); 80 - 81 - MODULE_AUTHOR("Google, Inc."); 82 - MODULE_LICENSE("GPL");
+93 -43
drivers/firmware/google/coreboot_table.c
··· 16 16 * GNU General Public License for more details. 17 17 */ 18 18 19 + #include <linux/acpi.h> 19 20 #include <linux/device.h> 20 21 #include <linux/err.h> 21 22 #include <linux/init.h> 22 23 #include <linux/io.h> 23 24 #include <linux/kernel.h> 24 25 #include <linux/module.h> 26 + #include <linux/of.h> 27 + #include <linux/platform_device.h> 25 28 #include <linux/slab.h> 26 29 27 30 #include "coreboot_table.h" 28 31 29 32 #define CB_DEV(d) container_of(d, struct coreboot_device, dev) 30 33 #define CB_DRV(d) container_of(d, struct coreboot_driver, drv) 31 - 32 - static struct coreboot_table_header __iomem *ptr_header; 33 34 34 35 static int coreboot_bus_match(struct device *dev, struct device_driver *drv) 35 36 { ··· 71 70 .remove = coreboot_bus_remove, 72 71 }; 73 72 74 - static int __init coreboot_bus_init(void) 75 - { 76 - return bus_register(&coreboot_bus_type); 77 - } 78 - module_init(coreboot_bus_init); 79 - 80 73 static void coreboot_device_release(struct device *dev) 81 74 { 82 75 struct coreboot_device *device = CB_DEV(dev); ··· 92 97 } 93 98 EXPORT_SYMBOL(coreboot_driver_unregister); 94 99 95 - int coreboot_table_init(struct device *dev, void __iomem *ptr) 100 + static int coreboot_table_populate(struct device *dev, void *ptr) 96 101 { 97 102 int i, ret; 98 103 void *ptr_entry; 99 104 struct coreboot_device *device; 100 - struct coreboot_table_entry entry; 101 - struct coreboot_table_header header; 105 + struct coreboot_table_entry *entry; 106 + struct coreboot_table_header *header = ptr; 102 107 103 - ptr_header = ptr; 104 - memcpy_fromio(&header, ptr_header, sizeof(header)); 108 + ptr_entry = ptr + header->header_bytes; 109 + for (i = 0; i < header->table_entries; i++) { 110 + entry = ptr_entry; 105 111 106 - if (strncmp(header.signature, "LBIO", sizeof(header.signature))) { 107 - pr_warn("coreboot_table: coreboot table missing or corrupt!\n"); 108 - return -ENODEV; 109 - } 110 - 111 - ptr_entry = (void *)ptr_header + header.header_bytes; 112 - 
for (i = 0; i < header.table_entries; i++) { 113 - memcpy_fromio(&entry, ptr_entry, sizeof(entry)); 114 - 115 - device = kzalloc(sizeof(struct device) + entry.size, GFP_KERNEL); 116 - if (!device) { 117 - ret = -ENOMEM; 118 - break; 119 - } 112 + device = kzalloc(sizeof(struct device) + entry->size, GFP_KERNEL); 113 + if (!device) 114 + return -ENOMEM; 120 115 121 116 dev_set_name(&device->dev, "coreboot%d", i); 122 117 device->dev.parent = dev; 123 118 device->dev.bus = &coreboot_bus_type; 124 119 device->dev.release = coreboot_device_release; 125 - memcpy_fromio(&device->entry, ptr_entry, entry.size); 120 + memcpy(&device->entry, ptr_entry, entry->size); 126 121 127 122 ret = device_register(&device->dev); 128 123 if (ret) { 129 124 put_device(&device->dev); 130 - break; 125 + return ret; 131 126 } 132 127 133 - ptr_entry += entry.size; 134 - } 135 - 136 - return ret; 137 - } 138 - EXPORT_SYMBOL(coreboot_table_init); 139 - 140 - int coreboot_table_exit(void) 141 - { 142 - if (ptr_header) { 143 - bus_unregister(&coreboot_bus_type); 144 - iounmap(ptr_header); 145 - ptr_header = NULL; 128 + ptr_entry += entry->size; 146 129 } 147 130 148 131 return 0; 149 132 } 150 - EXPORT_SYMBOL(coreboot_table_exit); 151 133 134 + static int coreboot_table_probe(struct platform_device *pdev) 135 + { 136 + resource_size_t len; 137 + struct coreboot_table_header *header; 138 + struct resource *res; 139 + struct device *dev = &pdev->dev; 140 + void *ptr; 141 + int ret; 142 + 143 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 144 + if (!res) 145 + return -EINVAL; 146 + 147 + len = resource_size(res); 148 + if (!res->start || !len) 149 + return -EINVAL; 150 + 151 + /* Check just the header first to make sure things are sane */ 152 + header = memremap(res->start, sizeof(*header), MEMREMAP_WB); 153 + if (!header) 154 + return -ENOMEM; 155 + 156 + len = header->header_bytes + header->table_bytes; 157 + ret = strncmp(header->signature, "LBIO", sizeof(header->signature)); 158 + 
memunmap(header); 159 + if (ret) { 160 + dev_warn(dev, "coreboot table missing or corrupt!\n"); 161 + return -ENODEV; 162 + } 163 + 164 + ptr = memremap(res->start, len, MEMREMAP_WB); 165 + if (!ptr) 166 + return -ENOMEM; 167 + 168 + ret = bus_register(&coreboot_bus_type); 169 + if (!ret) { 170 + ret = coreboot_table_populate(dev, ptr); 171 + if (ret) 172 + bus_unregister(&coreboot_bus_type); 173 + } 174 + memunmap(ptr); 175 + 176 + return ret; 177 + } 178 + 179 + static int coreboot_table_remove(struct platform_device *pdev) 180 + { 181 + bus_unregister(&coreboot_bus_type); 182 + return 0; 183 + } 184 + 185 + #ifdef CONFIG_ACPI 186 + static const struct acpi_device_id cros_coreboot_acpi_match[] = { 187 + { "GOOGCB00", 0 }, 188 + { "BOOT0000", 0 }, 189 + { } 190 + }; 191 + MODULE_DEVICE_TABLE(acpi, cros_coreboot_acpi_match); 192 + #endif 193 + 194 + #ifdef CONFIG_OF 195 + static const struct of_device_id coreboot_of_match[] = { 196 + { .compatible = "coreboot" }, 197 + {} 198 + }; 199 + MODULE_DEVICE_TABLE(of, coreboot_of_match); 200 + #endif 201 + 202 + static struct platform_driver coreboot_table_driver = { 203 + .probe = coreboot_table_probe, 204 + .remove = coreboot_table_remove, 205 + .driver = { 206 + .name = "coreboot_table", 207 + .acpi_match_table = ACPI_PTR(cros_coreboot_acpi_match), 208 + .of_match_table = of_match_ptr(coreboot_of_match), 209 + }, 210 + }; 211 + module_platform_driver(coreboot_table_driver); 152 212 MODULE_AUTHOR("Google, Inc."); 153 213 MODULE_LICENSE("GPL");
-6
drivers/firmware/google/coreboot_table.h
··· 91 91 /* Unregister a driver that uses the data from a coreboot table. */ 92 92 void coreboot_driver_unregister(struct coreboot_driver *driver); 93 93 94 - /* Initialize coreboot table module given a pointer to iomem */ 95 - int coreboot_table_init(struct device *dev, void __iomem *ptr); 96 - 97 - /* Cleanup coreboot table module */ 98 - int coreboot_table_exit(void); 99 - 100 94 #endif /* __COREBOOT_TABLE_H */
+113 -9
drivers/firmware/google/gsmi.c
··· 29 29 #include <linux/efi.h> 30 30 #include <linux/module.h> 31 31 #include <linux/ucs2_string.h> 32 + #include <linux/suspend.h> 32 33 33 34 #define GSMI_SHUTDOWN_CLEAN 0 /* Clean Shutdown */ 34 35 /* TODO(mikew@google.com): Tie in HARDLOCKUP_DETECTOR with NMIWDT */ ··· 71 70 #define GSMI_CMD_SET_NVRAM_VAR 0x03 72 71 #define GSMI_CMD_SET_EVENT_LOG 0x08 73 72 #define GSMI_CMD_CLEAR_EVENT_LOG 0x09 73 + #define GSMI_CMD_LOG_S0IX_SUSPEND 0x0a 74 + #define GSMI_CMD_LOG_S0IX_RESUME 0x0b 74 75 #define GSMI_CMD_CLEAR_CONFIG 0x20 75 76 #define GSMI_CMD_HANDSHAKE_TYPE 0xC1 76 77 ··· 87 84 u32 address; /* physical address of buffer */ 88 85 }; 89 86 90 - struct gsmi_device { 87 + static struct gsmi_device { 91 88 struct platform_device *pdev; /* platform device */ 92 89 struct gsmi_buf *name_buf; /* variable name buffer */ 93 90 struct gsmi_buf *data_buf; /* generic data buffer */ ··· 125 122 u32 instance; 126 123 } __packed; 127 124 128 - 129 125 /* 130 126 * Some platforms don't have explicit SMI handshake 131 127 * and need to wait for SMI to complete. ··· 134 132 module_param(spincount, uint, 0600); 135 133 MODULE_PARM_DESC(spincount, 136 134 "The number of loop iterations to use when using the spin handshake."); 135 + 136 + /* 137 + * Platforms might not support S0ix logging in their GSMI handlers. In order to 138 + * avoid any side-effects of generating an SMI for S0ix logging, use the S0ix 139 + * related GSMI commands only for those platforms that explicitly enable this 140 + * option. 
141 + */ 142 + static bool s0ix_logging_enable; 143 + module_param(s0ix_logging_enable, bool, 0600); 137 144 138 145 static struct gsmi_buf *gsmi_buf_alloc(void) 139 146 { ··· 299 288 300 289 return rc; 301 290 } 291 + 292 + #ifdef CONFIG_EFI_VARS 293 + 294 + static struct efivars efivars; 302 295 303 296 static efi_status_t gsmi_get_variable(efi_char16_t *name, 304 297 efi_guid_t *vendor, u32 *attr, ··· 481 466 .get_next_variable = gsmi_get_next_variable, 482 467 }; 483 468 469 + #endif /* CONFIG_EFI_VARS */ 470 + 484 471 static ssize_t eventlog_write(struct file *filp, struct kobject *kobj, 485 472 struct bin_attribute *bin_attr, 486 473 char *buf, loff_t pos, size_t count) ··· 497 480 if (count < sizeof(u32)) 498 481 return -EINVAL; 499 482 param.type = *(u32 *)buf; 500 - count -= sizeof(u32); 501 483 buf += sizeof(u32); 502 484 503 485 /* The remaining buffer is the data payload */ 504 - if (count > gsmi_dev.data_buf->length) 486 + if ((count - sizeof(u32)) > gsmi_dev.data_buf->length) 505 487 return -EINVAL; 506 488 param.data_len = count - sizeof(u32); 507 489 ··· 520 504 521 505 spin_unlock_irqrestore(&gsmi_dev.lock, flags); 522 506 523 - return rc; 507 + return (rc == 0) ? 
count : rc; 524 508 525 509 } 526 510 ··· 732 716 DMI_MATCH(DMI_BOARD_VENDOR, "Google, Inc."), 733 717 }, 734 718 }, 719 + { 720 + .ident = "Coreboot Firmware", 721 + .matches = { 722 + DMI_MATCH(DMI_BIOS_VENDOR, "coreboot"), 723 + }, 724 + }, 735 725 {} 736 726 }; 737 727 MODULE_DEVICE_TABLE(dmi, gsmi_dmi_table); ··· 784 762 } 785 763 786 764 static struct kobject *gsmi_kobj; 787 - static struct efivars efivars; 788 765 789 766 static const struct platform_device_info gsmi_dev_info = { 790 767 .name = "gsmi", ··· 791 770 /* SMI callbacks require 32bit addresses */ 792 771 .dma_mask = DMA_BIT_MASK(32), 793 772 }; 773 + 774 + #ifdef CONFIG_PM 775 + static void gsmi_log_s0ix_info(u8 cmd) 776 + { 777 + unsigned long flags; 778 + 779 + /* 780 + * If platform has not enabled S0ix logging, then no action is 781 + * necessary. 782 + */ 783 + if (!s0ix_logging_enable) 784 + return; 785 + 786 + spin_lock_irqsave(&gsmi_dev.lock, flags); 787 + 788 + memset(gsmi_dev.param_buf->start, 0, gsmi_dev.param_buf->length); 789 + 790 + gsmi_exec(GSMI_CALLBACK, cmd); 791 + 792 + spin_unlock_irqrestore(&gsmi_dev.lock, flags); 793 + } 794 + 795 + static int gsmi_log_s0ix_suspend(struct device *dev) 796 + { 797 + /* 798 + * If system is not suspending via firmware using the standard ACPI Sx 799 + * types, then make a GSMI call to log the suspend info. 800 + */ 801 + if (!pm_suspend_via_firmware()) 802 + gsmi_log_s0ix_info(GSMI_CMD_LOG_S0IX_SUSPEND); 803 + 804 + /* 805 + * Always return success, since we do not want suspend 806 + * to fail just because of logging failure. 807 + */ 808 + return 0; 809 + } 810 + 811 + static int gsmi_log_s0ix_resume(struct device *dev) 812 + { 813 + /* 814 + * If system did not resume via firmware, then make a GSMI call to log 815 + * the resume info and wake source. 
816 + */ 817 + if (!pm_resume_via_firmware()) 818 + gsmi_log_s0ix_info(GSMI_CMD_LOG_S0IX_RESUME); 819 + 820 + /* 821 + * Always return success, since we do not want resume 822 + * to fail just because of logging failure. 823 + */ 824 + return 0; 825 + } 826 + 827 + static const struct dev_pm_ops gsmi_pm_ops = { 828 + .suspend_noirq = gsmi_log_s0ix_suspend, 829 + .resume_noirq = gsmi_log_s0ix_resume, 830 + }; 831 + 832 + static int gsmi_platform_driver_probe(struct platform_device *dev) 833 + { 834 + return 0; 835 + } 836 + 837 + static struct platform_driver gsmi_driver_info = { 838 + .driver = { 839 + .name = "gsmi", 840 + .pm = &gsmi_pm_ops, 841 + }, 842 + .probe = gsmi_platform_driver_probe, 843 + }; 844 + #endif 794 845 795 846 static __init int gsmi_init(void) 796 847 { ··· 874 781 return ret; 875 782 876 783 gsmi_dev.smi_cmd = acpi_gbl_FADT.smi_command; 784 + 785 + #ifdef CONFIG_PM 786 + ret = platform_driver_register(&gsmi_driver_info); 787 + if (unlikely(ret)) { 788 + printk(KERN_ERR "gsmi: unable to register platform driver\n"); 789 + return ret; 790 + } 791 + #endif 877 792 878 793 /* register device */ 879 794 gsmi_dev.pdev = platform_device_register_full(&gsmi_dev_info); ··· 987 886 goto out_remove_bin_file; 988 887 } 989 888 889 + #ifdef CONFIG_EFI_VARS 990 890 ret = efivars_register(&efivars, &efivar_ops, gsmi_kobj); 991 891 if (ret) { 992 892 printk(KERN_INFO "gsmi: Failed to register efivars\n"); 993 - goto out_remove_sysfs_files; 893 + sysfs_remove_files(gsmi_kobj, gsmi_attrs); 894 + goto out_remove_bin_file; 994 895 } 896 + #endif 995 897 996 898 register_reboot_notifier(&gsmi_reboot_notifier); 997 899 register_die_notifier(&gsmi_die_notifier); ··· 1005 901 1006 902 return 0; 1007 903 1008 - out_remove_sysfs_files: 1009 - sysfs_remove_files(gsmi_kobj, gsmi_attrs); 1010 904 out_remove_bin_file: 1011 905 sysfs_remove_bin_file(gsmi_kobj, &eventlog_bin_attr); 1012 906 out_err: ··· 1024 922 unregister_die_notifier(&gsmi_die_notifier); 1025 923 
atomic_notifier_chain_unregister(&panic_notifier_list, 1026 924 &gsmi_panic_notifier); 925 + #ifdef CONFIG_EFI_VARS 1027 926 efivars_unregister(&efivars); 927 + #endif 1028 928 1029 929 sysfs_remove_files(gsmi_kobj, gsmi_attrs); 1030 930 sysfs_remove_bin_file(gsmi_kobj, &eventlog_bin_attr);
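The eventlog_write changes above do two related things: the bounds check now compares `count - sizeof(u32)` instead of shrinking `count` in place, and the function returns the full `count` on success rather than the SMI return code. A minimal userspace sketch of that validation logic follows — the function name `eventlog_payload_len` and the explicit `data_buf_len` parameter are hypothetical stand-ins for the driver's `gsmi_dev.data_buf->length`, not the real API:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical model of the gsmi eventlog_write bounds check: the write
 * buffer starts with a u32 event type, followed by the payload.  The old
 * code shrank 'count' before checking and later returned the shrunken
 * value; keeping 'count' intact lets the caller report the full number
 * of bytes consumed.  Returns the payload length, or -1 on bad input.
 */
static long eventlog_payload_len(size_t count, size_t data_buf_len)
{
	/* Need at least the u32 type header */
	if (count < sizeof(uint32_t))
		return -1;

	/* The remaining bytes are the payload; they must fit the buffer */
	if (count - sizeof(uint32_t) > data_buf_len)
		return -1;

	return (long)(count - sizeof(uint32_t));
}
```

Returning `count` instead of `rc` on success matters because a sysfs/bin-file write handler that returns 0 bytes consumed makes userspace retry the same write indefinitely.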
+1 -1
drivers/firmware/google/vpd.c
··· 198 198 199 199 sec->name = name; 200 200 201 - /* We want to export the raw partion with name ${name}_raw */ 201 + /* We want to export the raw partition with name ${name}_raw */ 202 202 sec->raw_name = kasprintf(GFP_KERNEL, "%s_raw", name); 203 203 if (!sec->raw_name) { 204 204 err = -ENOMEM;
+3 -5
drivers/fpga/altera-cvp.c
··· 453 453 snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s @%s", 454 454 ALTERA_CVP_MGR_NAME, pci_name(pdev)); 455 455 456 - mgr = fpga_mgr_create(&pdev->dev, conf->mgr_name, 457 - &altera_cvp_ops, conf); 456 + mgr = devm_fpga_mgr_create(&pdev->dev, conf->mgr_name, 457 + &altera_cvp_ops, conf); 458 458 if (!mgr) { 459 459 ret = -ENOMEM; 460 460 goto err_unmap; ··· 463 463 pci_set_drvdata(pdev, mgr); 464 464 465 465 ret = fpga_mgr_register(mgr); 466 - if (ret) { 467 - fpga_mgr_free(mgr); 466 + if (ret) 468 467 goto err_unmap; 469 - } 470 468 471 469 ret = driver_create_file(&altera_cvp_driver.driver, 472 470 &driver_attr_chkcfg);
+3 -5
drivers/fpga/altera-fpga2sdram.c
··· 121 121 /* Get f2s bridge configuration saved in handoff register */ 122 122 regmap_read(sysmgr, SYSMGR_ISWGRP_HANDOFF3, &priv->mask); 123 123 124 - br = fpga_bridge_create(dev, F2S_BRIDGE_NAME, 125 - &altera_fpga2sdram_br_ops, priv); 124 + br = devm_fpga_bridge_create(dev, F2S_BRIDGE_NAME, 125 + &altera_fpga2sdram_br_ops, priv); 126 126 if (!br) 127 127 return -ENOMEM; 128 128 129 129 platform_set_drvdata(pdev, br); 130 130 131 131 ret = fpga_bridge_register(br); 132 - if (ret) { 133 - fpga_bridge_free(br); 132 + if (ret) 134 133 return ret; 135 - } 136 134 137 135 dev_info(dev, "driver initialized with handoff %08x\n", priv->mask); 138 136
+3 -10
drivers/fpga/altera-freeze-bridge.c
··· 213 213 struct fpga_bridge *br; 214 214 struct resource *res; 215 215 u32 status, revision; 216 - int ret; 217 216 218 217 if (!np) 219 218 return -ENODEV; ··· 244 245 245 246 priv->base_addr = base_addr; 246 247 247 - br = fpga_bridge_create(dev, FREEZE_BRIDGE_NAME, 248 - &altera_freeze_br_br_ops, priv); 248 + br = devm_fpga_bridge_create(dev, FREEZE_BRIDGE_NAME, 249 + &altera_freeze_br_br_ops, priv); 249 250 if (!br) 250 251 return -ENOMEM; 251 252 252 253 platform_set_drvdata(pdev, br); 253 254 254 - ret = fpga_bridge_register(br); 255 - if (ret) { 256 - fpga_bridge_free(br); 257 - return ret; 258 - } 259 - 260 - return 0; 255 + return fpga_bridge_register(br); 261 256 } 262 257 263 258 static int altera_freeze_br_remove(struct platform_device *pdev)
+3 -4
drivers/fpga/altera-hps2fpga.c
··· 180 180 } 181 181 } 182 182 183 - br = fpga_bridge_create(dev, priv->name, &altera_hps2fpga_br_ops, priv); 183 + br = devm_fpga_bridge_create(dev, priv->name, 184 + &altera_hps2fpga_br_ops, priv); 184 185 if (!br) { 185 186 ret = -ENOMEM; 186 187 goto err; ··· 191 190 192 191 ret = fpga_bridge_register(br); 193 192 if (ret) 194 - goto err_free; 193 + goto err; 195 194 196 195 return 0; 197 196 198 - err_free: 199 - fpga_bridge_free(br); 200 197 err: 201 198 clk_disable_unprepare(priv->clk); 202 199
+2 -7
drivers/fpga/altera-pr-ip-core.c
··· 177 177 { 178 178 struct alt_pr_priv *priv; 179 179 struct fpga_manager *mgr; 180 - int ret; 181 180 u32 val; 182 181 183 182 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); ··· 191 192 (val & ALT_PR_CSR_STATUS_MSK) >> ALT_PR_CSR_STATUS_SFT, 192 193 (int)(val & ALT_PR_CSR_PR_START)); 193 194 194 - mgr = fpga_mgr_create(dev, dev_name(dev), &alt_pr_ops, priv); 195 + mgr = devm_fpga_mgr_create(dev, dev_name(dev), &alt_pr_ops, priv); 195 196 if (!mgr) 196 197 return -ENOMEM; 197 198 198 199 dev_set_drvdata(dev, mgr); 199 200 200 - ret = fpga_mgr_register(mgr); 201 - if (ret) 202 - fpga_mgr_free(mgr); 203 - 204 - return ret; 201 + return fpga_mgr_register(mgr); 205 202 } 206 203 EXPORT_SYMBOL_GPL(alt_pr_register); 207 204
+3 -8
drivers/fpga/altera-ps-spi.c
··· 239 239 struct altera_ps_conf *conf; 240 240 const struct of_device_id *of_id; 241 241 struct fpga_manager *mgr; 242 - int ret; 243 242 244 243 conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL); 245 244 if (!conf) ··· 274 275 snprintf(conf->mgr_name, sizeof(conf->mgr_name), "%s %s", 275 276 dev_driver_string(&spi->dev), dev_name(&spi->dev)); 276 277 277 - mgr = fpga_mgr_create(&spi->dev, conf->mgr_name, 278 - &altera_ps_ops, conf); 278 + mgr = devm_fpga_mgr_create(&spi->dev, conf->mgr_name, 279 + &altera_ps_ops, conf); 279 280 if (!mgr) 280 281 return -ENOMEM; 281 282 282 283 spi_set_drvdata(spi, mgr); 283 284 284 - ret = fpga_mgr_register(mgr); 285 - if (ret) 286 - fpga_mgr_free(mgr); 287 - 288 - return ret; 285 + return fpga_mgr_register(mgr); 289 286 } 290 287 291 288 static int altera_ps_remove(struct spi_device *spi)
+1 -1
drivers/fpga/dfl-afu-dma-region.c
··· 70 70 dev_dbg(dev, "[%d] RLIMIT_MEMLOCK %c%ld %ld/%ld%s\n", current->pid, 71 71 incr ? '+' : '-', npages << PAGE_SHIFT, 72 72 current->mm->locked_vm << PAGE_SHIFT, rlimit(RLIMIT_MEMLOCK), 73 - ret ? "- execeeded" : ""); 73 + ret ? "- exceeded" : ""); 74 74 75 75 up_write(&current->mm->mmap_sem); 76 76
+3 -8
drivers/fpga/dfl-fme-br.c
··· 61 61 struct device *dev = &pdev->dev; 62 62 struct fme_br_priv *priv; 63 63 struct fpga_bridge *br; 64 - int ret; 65 64 66 65 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 67 66 if (!priv) ··· 68 69 69 70 priv->pdata = dev_get_platdata(dev); 70 71 71 - br = fpga_bridge_create(dev, "DFL FPGA FME Bridge", 72 - &fme_bridge_ops, priv); 72 + br = devm_fpga_bridge_create(dev, "DFL FPGA FME Bridge", 73 + &fme_bridge_ops, priv); 73 74 if (!br) 74 75 return -ENOMEM; 75 76 76 77 platform_set_drvdata(pdev, br); 77 78 78 - ret = fpga_bridge_register(br); 79 - if (ret) 80 - fpga_bridge_free(br); 81 - 82 - return ret; 79 + return fpga_bridge_register(br); 83 80 } 84 81 85 82 static int fme_br_remove(struct platform_device *pdev)
+4 -9
drivers/fpga/dfl-fme-mgr.c
··· 201 201 } 202 202 203 203 if (count < 4) { 204 - dev_err(dev, "Invaild PR bitstream size\n"); 204 + dev_err(dev, "Invalid PR bitstream size\n"); 205 205 return -EINVAL; 206 206 } 207 207 ··· 287 287 struct fme_mgr_priv *priv; 288 288 struct fpga_manager *mgr; 289 289 struct resource *res; 290 - int ret; 291 290 292 291 priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL); 293 292 if (!priv) ··· 308 309 309 310 fme_mgr_get_compat_id(priv->ioaddr, compat_id); 310 311 311 - mgr = fpga_mgr_create(dev, "DFL FME FPGA Manager", 312 - &fme_mgr_ops, priv); 312 + mgr = devm_fpga_mgr_create(dev, "DFL FME FPGA Manager", 313 + &fme_mgr_ops, priv); 313 314 if (!mgr) 314 315 return -ENOMEM; 315 316 316 317 mgr->compat_id = compat_id; 317 318 platform_set_drvdata(pdev, mgr); 318 319 319 - ret = fpga_mgr_register(mgr); 320 - if (ret) 321 - fpga_mgr_free(mgr); 322 - 323 - return ret; 320 + return fpga_mgr_register(mgr); 324 321 } 325 322 326 323 static int fme_mgr_remove(struct platform_device *pdev)
+2 -4
drivers/fpga/dfl-fme-region.c
··· 39 39 if (IS_ERR(mgr)) 40 40 return -EPROBE_DEFER; 41 41 42 - region = fpga_region_create(dev, mgr, fme_region_get_bridges); 42 + region = devm_fpga_region_create(dev, mgr, fme_region_get_bridges); 43 43 if (!region) { 44 44 ret = -ENOMEM; 45 45 goto eprobe_mgr_put; ··· 51 51 52 52 ret = fpga_region_register(region); 53 53 if (ret) 54 - goto region_free; 54 + goto eprobe_mgr_put; 55 55 56 56 dev_dbg(dev, "DFL FME FPGA Region probed\n"); 57 57 58 58 return 0; 59 59 60 - region_free: 61 - fpga_region_free(region); 62 60 eprobe_mgr_put: 63 61 fpga_mgr_put(mgr); 64 62 return ret;
+2 -4
drivers/fpga/dfl.c
··· 899 899 if (!cdev) 900 900 return ERR_PTR(-ENOMEM); 901 901 902 - cdev->region = fpga_region_create(info->dev, NULL, NULL); 902 + cdev->region = devm_fpga_region_create(info->dev, NULL, NULL); 903 903 if (!cdev->region) { 904 904 ret = -ENOMEM; 905 905 goto free_cdev_exit; ··· 911 911 912 912 ret = fpga_region_register(cdev->region); 913 913 if (ret) 914 - goto free_region_exit; 914 + goto free_cdev_exit; 915 915 916 916 /* create and init build info for enumeration */ 917 917 binfo = devm_kzalloc(info->dev, sizeof(*binfo), GFP_KERNEL); ··· 942 942 943 943 unregister_region_exit: 944 944 fpga_region_unregister(cdev->region); 945 - free_region_exit: 946 - fpga_region_free(cdev->region); 947 945 free_cdev_exit: 948 946 devm_kfree(info->dev, cdev); 949 947 return ERR_PTR(ret);
+59 -9
drivers/fpga/fpga-bridge.c
··· 324 324 * @br_ops: pointer to structure of fpga bridge ops 325 325 * @priv: FPGA bridge private data 326 326 * 327 + * The caller of this function is responsible for freeing the bridge with 328 + * fpga_bridge_free(). Using devm_fpga_bridge_create() instead is recommended. 329 + * 327 330 * Return: struct fpga_bridge or NULL 328 331 */ 329 332 struct fpga_bridge *fpga_bridge_create(struct device *dev, const char *name, ··· 381 378 EXPORT_SYMBOL_GPL(fpga_bridge_create); 382 379 383 380 /** 384 - * fpga_bridge_free - free a fpga bridge and its id 385 - * @bridge: FPGA bridge struct created by fpga_bridge_create 381 + * fpga_bridge_free - free a fpga bridge created by fpga_bridge_create() 382 + * @bridge: FPGA bridge struct 386 383 */ 387 384 void fpga_bridge_free(struct fpga_bridge *bridge) 388 385 { ··· 391 388 } 392 389 EXPORT_SYMBOL_GPL(fpga_bridge_free); 393 390 391 + static void devm_fpga_bridge_release(struct device *dev, void *res) 392 + { 393 + struct fpga_bridge *bridge = *(struct fpga_bridge **)res; 394 + 395 + fpga_bridge_free(bridge); 396 + } 397 + 394 398 /** 395 - * fpga_bridge_register - register a fpga bridge 396 - * @bridge: FPGA bridge struct created by fpga_bridge_create 399 + * devm_fpga_bridge_create - create and init a managed struct fpga_bridge 400 + * @dev: FPGA bridge device from pdev 401 + * @name: FPGA bridge name 402 + * @br_ops: pointer to structure of fpga bridge ops 403 + * @priv: FPGA bridge private data 404 + * 405 + * This function is intended for use in a FPGA bridge driver's probe function. 406 + * After the bridge driver creates the struct with devm_fpga_bridge_create(), it 407 + * should register the bridge with fpga_bridge_register(). The bridge driver's 408 + * remove function should call fpga_bridge_unregister(). The bridge struct 409 + * allocated with this function will be freed automatically on driver detach. 
410 + * This includes the case of a probe function returning error before calling 411 + * fpga_bridge_register(), the struct will still get cleaned up. 412 + * 413 + * Return: struct fpga_bridge or NULL 414 + */ 415 + struct fpga_bridge 416 + *devm_fpga_bridge_create(struct device *dev, const char *name, 417 + const struct fpga_bridge_ops *br_ops, void *priv) 418 + { 419 + struct fpga_bridge **ptr, *bridge; 420 + 421 + ptr = devres_alloc(devm_fpga_bridge_release, sizeof(*ptr), GFP_KERNEL); 422 + if (!ptr) 423 + return NULL; 424 + 425 + bridge = fpga_bridge_create(dev, name, br_ops, priv); 426 + if (!bridge) { 427 + devres_free(ptr); 428 + } else { 429 + *ptr = bridge; 430 + devres_add(dev, ptr); 431 + } 432 + 433 + return bridge; 434 + } 435 + EXPORT_SYMBOL_GPL(devm_fpga_bridge_create); 436 + 437 + /** 438 + * fpga_bridge_register - register a FPGA bridge 439 + * 440 + * @bridge: FPGA bridge struct 397 441 * 398 442 * Return: 0 for success, error code otherwise. 399 443 */ ··· 462 412 EXPORT_SYMBOL_GPL(fpga_bridge_register); 463 413 464 414 /** 465 - * fpga_bridge_unregister - unregister and free a fpga bridge 466 - * @bridge: FPGA bridge struct created by fpga_bridge_create 415 + * fpga_bridge_unregister - unregister a FPGA bridge 416 + * 417 + * @bridge: FPGA bridge struct 418 + * 419 + * This function is intended for use in a FPGA bridge driver's remove function. 467 420 */ 468 421 void fpga_bridge_unregister(struct fpga_bridge *bridge) 469 422 { ··· 483 430 484 431 static void fpga_bridge_dev_release(struct device *dev) 485 432 { 486 - struct fpga_bridge *bridge = to_fpga_bridge(dev); 487 - 488 - fpga_bridge_free(bridge); 489 433 } 490 434 491 435 static int __init fpga_bridge_dev_init(void)
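The new `devm_fpga_bridge_create()` above (and its manager/region siblings later in this series) ties the allocated struct's lifetime to the device via devres: `devres_alloc()` queues a release callback, `devres_add()` attaches it to the device, and driver detach runs the callbacks in reverse order. A compact userspace model of that mechanism — simplified (no locking, no group matching), with hypothetical names like `devm_add_action` that only approximate the kernel API:

```c
#include <stdlib.h>

/* Minimal userspace model of the kernel devres pattern behind
 * devm_fpga_bridge_create(): cleanup actions are queued on a device
 * and run newest-first when the device is detached.
 */
struct devres {
	void (*release)(void *data);
	void *data;
	struct devres *next;
};

struct device {
	struct devres *devres_head;
};

/* Rough analogue of devres_alloc() + devres_add() folded together */
static int devm_add_action(struct device *dev,
			   void (*release)(void *data), void *data)
{
	struct devres *dr = malloc(sizeof(*dr));

	if (!dr)
		return -1;
	dr->release = release;
	dr->data = data;
	dr->next = dev->devres_head;	/* LIFO: last added runs first */
	dev->devres_head = dr;
	return 0;
}

/* Analogue of driver detach: run every release callback, newest first */
static void device_detach(struct device *dev)
{
	struct devres *dr = dev->devres_head;

	while (dr) {
		struct devres *next = dr->next;

		dr->release(dr->data);
		free(dr);
		dr = next;
	}
	dev->devres_head = NULL;
}

static int released_count;

static void count_release(void *data)
{
	(void)data;
	released_count++;
}

/* Register two managed resources, detach, and report how many released */
static int demo_devres(void)
{
	struct device dev = { 0 };

	released_count = 0;
	devm_add_action(&dev, count_release, NULL);
	devm_add_action(&dev, count_release, NULL);
	device_detach(&dev);
	return released_count;
}
```

This is why the probe functions converted in this series can collapse to `return fpga_bridge_register(br);` with no error unwinding: if registration fails and probe returns an error, driver-core detach still runs the queued release and frees the bridge.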
+56 -8
drivers/fpga/fpga-mgr.c
··· 558 558 * @mops: pointer to structure of fpga manager ops 559 559 * @priv: fpga manager private data 560 560 * 561 + * The caller of this function is responsible for freeing the struct with 562 + * fpga_mgr_free(). Using devm_fpga_mgr_create() instead is recommended. 563 + * 561 564 * Return: pointer to struct fpga_manager or NULL 562 565 */ 563 566 struct fpga_manager *fpga_mgr_create(struct device *dev, const char *name, ··· 621 618 EXPORT_SYMBOL_GPL(fpga_mgr_create); 622 619 623 620 /** 624 - * fpga_mgr_free - deallocate a FPGA manager 625 - * @mgr: fpga manager struct created by fpga_mgr_create 621 + * fpga_mgr_free - free a FPGA manager created with fpga_mgr_create() 622 + * @mgr: fpga manager struct 626 623 */ 627 624 void fpga_mgr_free(struct fpga_manager *mgr) 628 625 { ··· 631 628 } 632 629 EXPORT_SYMBOL_GPL(fpga_mgr_free); 633 630 631 + static void devm_fpga_mgr_release(struct device *dev, void *res) 632 + { 633 + struct fpga_manager *mgr = *(struct fpga_manager **)res; 634 + 635 + fpga_mgr_free(mgr); 636 + } 637 + 638 + /** 639 + * devm_fpga_mgr_create - create and initialize a managed FPGA manager struct 640 + * @dev: fpga manager device from pdev 641 + * @name: fpga manager name 642 + * @mops: pointer to structure of fpga manager ops 643 + * @priv: fpga manager private data 644 + * 645 + * This function is intended for use in a FPGA manager driver's probe function. 646 + * After the manager driver creates the manager struct with 647 + * devm_fpga_mgr_create(), it should register it with fpga_mgr_register(). The 648 + * manager driver's remove function should call fpga_mgr_unregister(). The 649 + * manager struct allocated with this function will be freed automatically on 650 + * driver detach. This includes the case of a probe function returning error 651 + * before calling fpga_mgr_register(), the struct will still get cleaned up. 
652 + * 653 + * Return: pointer to struct fpga_manager or NULL 654 + */ 655 + struct fpga_manager *devm_fpga_mgr_create(struct device *dev, const char *name, 656 + const struct fpga_manager_ops *mops, 657 + void *priv) 658 + { 659 + struct fpga_manager **ptr, *mgr; 660 + 661 + ptr = devres_alloc(devm_fpga_mgr_release, sizeof(*ptr), GFP_KERNEL); 662 + if (!ptr) 663 + return NULL; 664 + 665 + mgr = fpga_mgr_create(dev, name, mops, priv); 666 + if (!mgr) { 667 + devres_free(ptr); 668 + } else { 669 + *ptr = mgr; 670 + devres_add(dev, ptr); 671 + } 672 + 673 + return mgr; 674 + } 675 + EXPORT_SYMBOL_GPL(devm_fpga_mgr_create); 676 + 634 677 /** 635 678 * fpga_mgr_register - register a FPGA manager 636 - * @mgr: fpga manager struct created by fpga_mgr_create 679 + * @mgr: fpga manager struct 637 680 * 638 681 * Return: 0 on success, negative error code otherwise. 639 682 */ ··· 710 661 EXPORT_SYMBOL_GPL(fpga_mgr_register); 711 662 712 663 /** 713 - * fpga_mgr_unregister - unregister and free a FPGA manager 714 - * @mgr: fpga manager struct 664 + * fpga_mgr_unregister - unregister a FPGA manager 665 + * @mgr: fpga manager struct 666 + * 667 + * This function is intended for use in a FPGA manager driver's remove function. 715 668 */ 716 669 void fpga_mgr_unregister(struct fpga_manager *mgr) 717 670 { ··· 732 681 733 682 static void fpga_mgr_dev_release(struct device *dev) 734 683 { 735 - struct fpga_manager *mgr = to_fpga_manager(dev); 736 - 737 - fpga_mgr_free(mgr); 738 684 } 739 685 740 686 static int __init fpga_mgr_class_init(void)
+57 -8
drivers/fpga/fpga-region.c
··· 185 185 * @mgr: manager that programs this region 186 186 * @get_bridges: optional function to get bridges to a list 187 187 * 188 + * The caller of this function is responsible for freeing the resulting region 189 + * struct with fpga_region_free(). Using devm_fpga_region_create() instead is 190 + * recommended. 191 + * 188 192 * Return: struct fpga_region or NULL 189 193 */ 190 194 struct fpga_region ··· 234 230 EXPORT_SYMBOL_GPL(fpga_region_create); 235 231 236 232 /** 237 - * fpga_region_free - free a struct fpga_region 238 - * @region: FPGA region created by fpga_region_create 233 + * fpga_region_free - free a FPGA region created by fpga_region_create() 234 + * @region: FPGA region 239 235 */ 240 236 void fpga_region_free(struct fpga_region *region) 241 237 { ··· 244 240 } 245 241 EXPORT_SYMBOL_GPL(fpga_region_free); 246 242 243 + static void devm_fpga_region_release(struct device *dev, void *res) 244 + { 245 + struct fpga_region *region = *(struct fpga_region **)res; 246 + 247 + fpga_region_free(region); 248 + } 249 + 250 + /** 251 + * devm_fpga_region_create - create and initialize a managed FPGA region struct 252 + * @dev: device parent 253 + * @mgr: manager that programs this region 254 + * @get_bridges: optional function to get bridges to a list 255 + * 256 + * This function is intended for use in a FPGA region driver's probe function. 257 + * After the region driver creates the region struct with 258 + * devm_fpga_region_create(), it should register it with fpga_region_register(). 259 + * The region driver's remove function should call fpga_region_unregister(). 260 + * The region struct allocated with this function will be freed automatically on 261 + * driver detach. This includes the case of a probe function returning error 262 + * before calling fpga_region_register(), the struct will still get cleaned up. 
263 + * 264 + * Return: struct fpga_region or NULL 265 + */ 266 + struct fpga_region 267 + *devm_fpga_region_create(struct device *dev, 268 + struct fpga_manager *mgr, 269 + int (*get_bridges)(struct fpga_region *)) 270 + { 271 + struct fpga_region **ptr, *region; 272 + 273 + ptr = devres_alloc(devm_fpga_region_release, sizeof(*ptr), GFP_KERNEL); 274 + if (!ptr) 275 + return NULL; 276 + 277 + region = fpga_region_create(dev, mgr, get_bridges); 278 + if (!region) { 279 + devres_free(ptr); 280 + } else { 281 + *ptr = region; 282 + devres_add(dev, ptr); 283 + } 284 + 285 + return region; 286 + } 287 + EXPORT_SYMBOL_GPL(devm_fpga_region_create); 288 + 247 289 /** 248 290 * fpga_region_register - register a FPGA region 249 - * @region: FPGA region created by fpga_region_create 291 + * @region: FPGA region 292 + * 250 293 * Return: 0 or -errno 251 294 */ 252 295 int fpga_region_register(struct fpga_region *region) 253 296 { 254 297 return device_add(&region->dev); 255 - 256 298 } 257 299 EXPORT_SYMBOL_GPL(fpga_region_register); 258 300 259 301 /** 260 - * fpga_region_unregister - unregister and free a FPGA region 302 + * fpga_region_unregister - unregister a FPGA region 261 303 * @region: FPGA region 304 + * 305 + * This function is intended for use in a FPGA region driver's remove function. 262 306 */ 263 307 void fpga_region_unregister(struct fpga_region *region) 264 308 { ··· 316 264 317 265 static void fpga_region_dev_release(struct device *dev) 318 266 { 319 - struct fpga_region *region = to_fpga_region(dev); 320 - 321 - fpga_region_free(region); 322 267 } 323 268 324 269 /**
+3 -7
drivers/fpga/ice40-spi.c
··· 175 175 return ret; 176 176 } 177 177 178 - mgr = fpga_mgr_create(dev, "Lattice iCE40 FPGA Manager", 179 - &ice40_fpga_ops, priv); 178 + mgr = devm_fpga_mgr_create(dev, "Lattice iCE40 FPGA Manager", 179 + &ice40_fpga_ops, priv); 180 180 if (!mgr) 181 181 return -ENOMEM; 182 182 183 183 spi_set_drvdata(spi, mgr); 184 184 185 - ret = fpga_mgr_register(mgr); 186 - if (ret) 187 - fpga_mgr_free(mgr); 188 - 189 - return ret; 185 + return fpga_mgr_register(mgr); 190 186 } 191 187 192 188 static int ice40_fpga_remove(struct spi_device *spi)
+3 -8
drivers/fpga/machxo2-spi.c
··· 356 356 { 357 357 struct device *dev = &spi->dev; 358 358 struct fpga_manager *mgr; 359 - int ret; 360 359 361 360 if (spi->max_speed_hz > MACHXO2_MAX_SPEED) { 362 361 dev_err(dev, "Speed is too high\n"); 363 362 return -EINVAL; 364 363 } 365 364 366 - mgr = fpga_mgr_create(dev, "Lattice MachXO2 SPI FPGA Manager", 367 - &machxo2_ops, spi); 365 + mgr = devm_fpga_mgr_create(dev, "Lattice MachXO2 SPI FPGA Manager", 366 + &machxo2_ops, spi); 368 367 if (!mgr) 369 368 return -ENOMEM; 370 369 371 370 spi_set_drvdata(spi, mgr); 372 371 373 - ret = fpga_mgr_register(mgr); 374 - if (ret) 375 - fpga_mgr_free(mgr); 376 - 377 - return ret; 372 + return fpga_mgr_register(mgr); 378 373 } 379 374 380 375 static int machxo2_spi_remove(struct spi_device *spi)
+2 -4
drivers/fpga/of-fpga-region.c
··· 410 410 if (IS_ERR(mgr)) 411 411 return -EPROBE_DEFER; 412 412 413 - region = fpga_region_create(dev, mgr, of_fpga_region_get_bridges); 413 + region = devm_fpga_region_create(dev, mgr, of_fpga_region_get_bridges); 414 414 if (!region) { 415 415 ret = -ENOMEM; 416 416 goto eprobe_mgr_put; ··· 418 418 419 419 ret = fpga_region_register(region); 420 420 if (ret) 421 - goto eprobe_free; 421 + goto eprobe_mgr_put; 422 422 423 423 of_platform_populate(np, fpga_region_of_match, NULL, &region->dev); 424 424 dev_set_drvdata(dev, region); ··· 427 427 428 428 return 0; 429 429 430 - eprobe_free: 431 - fpga_region_free(region); 432 430 eprobe_mgr_put: 433 431 fpga_mgr_put(mgr); 434 432 return ret;
+2 -3
drivers/fpga/socfpga-a10.c
··· 508 508 return -EBUSY; 509 509 } 510 510 511 - mgr = fpga_mgr_create(dev, "SoCFPGA Arria10 FPGA Manager", 512 - &socfpga_a10_fpga_mgr_ops, priv); 511 + mgr = devm_fpga_mgr_create(dev, "SoCFPGA Arria10 FPGA Manager", 512 + &socfpga_a10_fpga_mgr_ops, priv); 513 513 if (!mgr) 514 514 return -ENOMEM; 515 515 ··· 517 517 518 518 ret = fpga_mgr_register(mgr); 519 519 if (ret) { 520 - fpga_mgr_free(mgr); 521 520 clk_disable_unprepare(priv->clk); 522 521 return ret; 523 522 }
+3 -7
drivers/fpga/socfpga.c
··· 571 571 if (ret) 572 572 return ret; 573 573 574 - mgr = fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager", 575 - &socfpga_fpga_ops, priv); 574 + mgr = devm_fpga_mgr_create(dev, "Altera SOCFPGA FPGA Manager", 575 + &socfpga_fpga_ops, priv); 576 576 if (!mgr) 577 577 return -ENOMEM; 578 578 579 579 platform_set_drvdata(pdev, mgr); 580 580 581 - ret = fpga_mgr_register(mgr); 582 - if (ret) 583 - fpga_mgr_free(mgr); 584 - 585 - return ret; 581 + return fpga_mgr_register(mgr); 586 582 } 587 583 588 584 static int socfpga_fpga_remove(struct platform_device *pdev)
+3 -8
drivers/fpga/ts73xx-fpga.c
··· 118 118 struct ts73xx_fpga_priv *priv; 119 119 struct fpga_manager *mgr; 120 120 struct resource *res; 121 - int ret; 122 121 123 122 priv = devm_kzalloc(kdev, sizeof(*priv), GFP_KERNEL); 124 123 if (!priv) ··· 132 133 return PTR_ERR(priv->io_base); 133 134 } 134 135 135 - mgr = fpga_mgr_create(kdev, "TS-73xx FPGA Manager", 136 - &ts73xx_fpga_ops, priv); 136 + mgr = devm_fpga_mgr_create(kdev, "TS-73xx FPGA Manager", 137 + &ts73xx_fpga_ops, priv); 137 138 if (!mgr) 138 139 return -ENOMEM; 139 140 140 141 platform_set_drvdata(pdev, mgr); 141 142 142 - ret = fpga_mgr_register(mgr); 143 - if (ret) 144 - fpga_mgr_free(mgr); 145 - 146 - return ret; 143 + return fpga_mgr_register(mgr); 147 144 } 148 145 149 146 static int ts73xx_fpga_remove(struct platform_device *pdev)
+2 -2
drivers/fpga/xilinx-pr-decoupler.c
··· 121 121 122 122 clk_disable(priv->clk); 123 123 124 - br = fpga_bridge_create(&pdev->dev, "Xilinx PR Decoupler", 125 - &xlnx_pr_decoupler_br_ops, priv); 124 + br = devm_fpga_bridge_create(&pdev->dev, "Xilinx PR Decoupler", 125 + &xlnx_pr_decoupler_br_ops, priv); 126 126 if (!br) { 127 127 err = -ENOMEM; 128 128 goto err_clk;
+4 -8
drivers/fpga/xilinx-spi.c
··· 144 144 { 145 145 struct xilinx_spi_conf *conf; 146 146 struct fpga_manager *mgr; 147 - int ret; 148 147 149 148 conf = devm_kzalloc(&spi->dev, sizeof(*conf), GFP_KERNEL); 150 149 if (!conf) ··· 166 167 return PTR_ERR(conf->done); 167 168 } 168 169 169 - mgr = fpga_mgr_create(&spi->dev, "Xilinx Slave Serial FPGA Manager", 170 - &xilinx_spi_ops, conf); 170 + mgr = devm_fpga_mgr_create(&spi->dev, 171 + "Xilinx Slave Serial FPGA Manager", 172 + &xilinx_spi_ops, conf); 171 173 if (!mgr) 172 174 return -ENOMEM; 173 175 174 176 spi_set_drvdata(spi, mgr); 175 177 176 - ret = fpga_mgr_register(mgr); 177 - if (ret) 178 - fpga_mgr_free(mgr); 179 - 180 - return ret; 178 + return fpga_mgr_register(mgr); 181 179 } 182 180 183 181 static int xilinx_spi_remove(struct spi_device *spi)
+2 -3
drivers/fpga/zynq-fpga.c
··· 614 614 615 615 clk_disable(priv->clk); 616 616 617 - mgr = fpga_mgr_create(dev, "Xilinx Zynq FPGA Manager", 618 - &zynq_fpga_ops, priv); 617 + mgr = devm_fpga_mgr_create(dev, "Xilinx Zynq FPGA Manager", 618 + &zynq_fpga_ops, priv); 619 619 if (!mgr) 620 620 return -ENOMEM; 621 621 ··· 624 624 err = fpga_mgr_register(mgr); 625 625 if (err) { 626 626 dev_err(dev, "unable to register FPGA manager\n"); 627 - fpga_mgr_free(mgr); 628 627 clk_unprepare(priv->clk); 629 628 return err; 630 629 }
+174 -138
drivers/hv/channel.c
··· 79 79 } 80 80 EXPORT_SYMBOL_GPL(vmbus_setevent); 81 81 82 - /* 83 - * vmbus_open - Open the specified channel. 84 - */ 85 - int vmbus_open(struct vmbus_channel *newchannel, u32 send_ringbuffer_size, 86 - u32 recv_ringbuffer_size, void *userdata, u32 userdatalen, 87 - void (*onchannelcallback)(void *context), void *context) 82 + /* vmbus_free_ring - drop mapping of ring buffer */ 83 + void vmbus_free_ring(struct vmbus_channel *channel) 84 + { 85 + hv_ringbuffer_cleanup(&channel->outbound); 86 + hv_ringbuffer_cleanup(&channel->inbound); 87 + 88 + if (channel->ringbuffer_page) { 89 + __free_pages(channel->ringbuffer_page, 90 + get_order(channel->ringbuffer_pagecount 91 + << PAGE_SHIFT)); 92 + channel->ringbuffer_page = NULL; 93 + } 94 + } 95 + EXPORT_SYMBOL_GPL(vmbus_free_ring); 96 + 97 + /* vmbus_alloc_ring - allocate and map pages for ring buffer */ 98 + int vmbus_alloc_ring(struct vmbus_channel *newchannel, 99 + u32 send_size, u32 recv_size) 100 + { 101 + struct page *page; 102 + int order; 103 + 104 + if (send_size % PAGE_SIZE || recv_size % PAGE_SIZE) 105 + return -EINVAL; 106 + 107 + /* Allocate the ring buffer */ 108 + order = get_order(send_size + recv_size); 109 + page = alloc_pages_node(cpu_to_node(newchannel->target_cpu), 110 + GFP_KERNEL|__GFP_ZERO, order); 111 + 112 + if (!page) 113 + page = alloc_pages(GFP_KERNEL|__GFP_ZERO, order); 114 + 115 + if (!page) 116 + return -ENOMEM; 117 + 118 + newchannel->ringbuffer_page = page; 119 + newchannel->ringbuffer_pagecount = (send_size + recv_size) >> PAGE_SHIFT; 120 + newchannel->ringbuffer_send_offset = send_size >> PAGE_SHIFT; 121 + 122 + return 0; 123 + } 124 + EXPORT_SYMBOL_GPL(vmbus_alloc_ring); 125 + 126 + static int __vmbus_open(struct vmbus_channel *newchannel, 127 + void *userdata, u32 userdatalen, 128 + void (*onchannelcallback)(void *context), void *context) 88 129 { 89 130 struct vmbus_channel_open_channel *open_msg; 90 131 struct vmbus_channel_msginfo *open_info = NULL; 132 + struct page *page = 
newchannel->ringbuffer_page; 133 + u32 send_pages, recv_pages; 91 134 unsigned long flags; 92 - int ret, err = 0; 93 - struct page *page; 135 + int err; 94 136 95 - if (send_ringbuffer_size % PAGE_SIZE || 96 - recv_ringbuffer_size % PAGE_SIZE) 137 + if (userdatalen > MAX_USER_DEFINED_BYTES) 97 138 return -EINVAL; 98 139 140 + send_pages = newchannel->ringbuffer_send_offset; 141 + recv_pages = newchannel->ringbuffer_pagecount - send_pages; 142 + 99 143 spin_lock_irqsave(&newchannel->lock, flags); 100 - if (newchannel->state == CHANNEL_OPEN_STATE) { 101 - newchannel->state = CHANNEL_OPENING_STATE; 102 - } else { 144 + if (newchannel->state != CHANNEL_OPEN_STATE) { 103 145 spin_unlock_irqrestore(&newchannel->lock, flags); 104 146 return -EINVAL; 105 147 } 106 148 spin_unlock_irqrestore(&newchannel->lock, flags); 107 149 150 + newchannel->state = CHANNEL_OPENING_STATE; 108 151 newchannel->onchannel_callback = onchannelcallback; 109 152 newchannel->channel_callback_context = context; 110 153 111 - /* Allocate the ring buffer */ 112 - page = alloc_pages_node(cpu_to_node(newchannel->target_cpu), 113 - GFP_KERNEL|__GFP_ZERO, 114 - get_order(send_ringbuffer_size + 115 - recv_ringbuffer_size)); 154 + err = hv_ringbuffer_init(&newchannel->outbound, page, send_pages); 155 + if (err) 156 + goto error_clean_ring; 116 157 117 - if (!page) 118 - page = alloc_pages(GFP_KERNEL|__GFP_ZERO, 119 - get_order(send_ringbuffer_size + 120 - recv_ringbuffer_size)); 121 - 122 - if (!page) { 123 - err = -ENOMEM; 124 - goto error_set_chnstate; 125 - } 126 - 127 - newchannel->ringbuffer_pages = page_address(page); 128 - newchannel->ringbuffer_pagecount = (send_ringbuffer_size + 129 - recv_ringbuffer_size) >> PAGE_SHIFT; 130 - 131 - ret = hv_ringbuffer_init(&newchannel->outbound, page, 132 - send_ringbuffer_size >> PAGE_SHIFT); 133 - 134 - if (ret != 0) { 135 - err = ret; 136 - goto error_free_pages; 137 - } 138 - 139 - ret = hv_ringbuffer_init(&newchannel->inbound, 140 - 
&page[send_ringbuffer_size >> PAGE_SHIFT], 141 - recv_ringbuffer_size >> PAGE_SHIFT); 142 - if (ret != 0) { 143 - err = ret; 144 - goto error_free_pages; 145 - } 146 - 158 + err = hv_ringbuffer_init(&newchannel->inbound, 159 + &page[send_pages], recv_pages); 160 + if (err) 161 + goto error_clean_ring; 147 162 148 163 /* Establish the gpadl for the ring buffer */ 149 164 newchannel->ringbuffer_gpadlhandle = 0; 150 165 151 - ret = vmbus_establish_gpadl(newchannel, 152 - page_address(page), 153 - send_ringbuffer_size + 154 - recv_ringbuffer_size, 166 + err = vmbus_establish_gpadl(newchannel, 167 + page_address(newchannel->ringbuffer_page), 168 + (send_pages + recv_pages) << PAGE_SHIFT, 155 169 &newchannel->ringbuffer_gpadlhandle); 156 - 157 - if (ret != 0) { 158 - err = ret; 159 - goto error_free_pages; 160 - } 170 + if (err) 171 + goto error_clean_ring; 161 172 162 173 /* Create and init the channel open message */ 163 174 open_info = kmalloc(sizeof(*open_info) + ··· 187 176 open_msg->openid = newchannel->offermsg.child_relid; 188 177 open_msg->child_relid = newchannel->offermsg.child_relid; 189 178 open_msg->ringbuffer_gpadlhandle = newchannel->ringbuffer_gpadlhandle; 190 - open_msg->downstream_ringbuffer_pageoffset = send_ringbuffer_size >> 191 - PAGE_SHIFT; 179 + open_msg->downstream_ringbuffer_pageoffset = newchannel->ringbuffer_send_offset; 192 180 open_msg->target_vp = newchannel->target_vp; 193 - 194 - if (userdatalen > MAX_USER_DEFINED_BYTES) { 195 - err = -EINVAL; 196 - goto error_free_gpadl; 197 - } 198 181 199 182 if (userdatalen) 200 183 memcpy(open_msg->userdata, userdata, userdatalen); ··· 200 195 201 196 if (newchannel->rescind) { 202 197 err = -ENODEV; 203 - goto error_free_gpadl; 198 + goto error_free_info; 204 199 } 205 200 206 - ret = vmbus_post_msg(open_msg, 201 + err = vmbus_post_msg(open_msg, 207 202 sizeof(struct vmbus_channel_open_channel), true); 208 203 209 - trace_vmbus_open(open_msg, ret); 204 + trace_vmbus_open(open_msg, err); 210 205 211 
- if (ret != 0) { 212 - err = ret; 206 + if (err != 0) 213 207 goto error_clean_msglist; 214 - } 215 208 216 209 wait_for_completion(&open_info->waitevent); 217 210 ··· 219 216 220 217 if (newchannel->rescind) { 221 218 err = -ENODEV; 222 - goto error_free_gpadl; 219 + goto error_free_info; 223 220 } 224 221 225 222 if (open_info->response.open_result.status) { 226 223 err = -EAGAIN; 227 - goto error_free_gpadl; 224 + goto error_free_info; 228 225 } 229 226 230 227 newchannel->state = CHANNEL_OPENED_STATE; ··· 235 232 spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags); 236 233 list_del(&open_info->msglistentry); 237 234 spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags); 238 - 235 + error_free_info: 236 + kfree(open_info); 239 237 error_free_gpadl: 240 238 vmbus_teardown_gpadl(newchannel, newchannel->ringbuffer_gpadlhandle); 241 - kfree(open_info); 242 - error_free_pages: 239 + newchannel->ringbuffer_gpadlhandle = 0; 240 + error_clean_ring: 243 241 hv_ringbuffer_cleanup(&newchannel->outbound); 244 242 hv_ringbuffer_cleanup(&newchannel->inbound); 245 - __free_pages(page, 246 - get_order(send_ringbuffer_size + recv_ringbuffer_size)); 247 - error_set_chnstate: 248 243 newchannel->state = CHANNEL_OPEN_STATE; 244 + return err; 245 + } 246 + 247 + /* 248 + * vmbus_connect_ring - Open the channel but reuse ring buffer 249 + */ 250 + int vmbus_connect_ring(struct vmbus_channel *newchannel, 251 + void (*onchannelcallback)(void *context), void *context) 252 + { 253 + return __vmbus_open(newchannel, NULL, 0, onchannelcallback, context); 254 + } 255 + EXPORT_SYMBOL_GPL(vmbus_connect_ring); 256 + 257 + /* 258 + * vmbus_open - Open the specified channel. 
259 + */ 260 + int vmbus_open(struct vmbus_channel *newchannel, 261 + u32 send_ringbuffer_size, u32 recv_ringbuffer_size, 262 + void *userdata, u32 userdatalen, 263 + void (*onchannelcallback)(void *context), void *context) 264 + { 265 + int err; 266 + 267 + err = vmbus_alloc_ring(newchannel, send_ringbuffer_size, 268 + recv_ringbuffer_size); 269 + if (err) 270 + return err; 271 + 272 + err = __vmbus_open(newchannel, userdata, userdatalen, 273 + onchannelcallback, context); 274 + if (err) 275 + vmbus_free_ring(newchannel); 276 + 249 277 return err; 250 278 } 251 279 EXPORT_SYMBOL_GPL(vmbus_open); ··· 646 612 * in Hyper-V Manager), the driver's remove() invokes vmbus_close(): 647 613 * here we should skip most of the below cleanup work. 648 614 */ 649 - if (channel->state != CHANNEL_OPENED_STATE) { 650 - ret = -EINVAL; 651 - goto out; 652 - } 615 + if (channel->state != CHANNEL_OPENED_STATE) 616 + return -EINVAL; 653 617 654 618 channel->state = CHANNEL_OPEN_STATE; 655 619 ··· 669 637 * If we failed to post the close msg, 670 638 * it is perhaps better to leak memory. 671 639 */ 672 - goto out; 673 640 } 674 641 675 642 /* Tear down the gpadl for the channel's ring buffer */ 676 - if (channel->ringbuffer_gpadlhandle) { 643 + else if (channel->ringbuffer_gpadlhandle) { 677 644 ret = vmbus_teardown_gpadl(channel, 678 645 channel->ringbuffer_gpadlhandle); 679 646 if (ret) { ··· 681 650 * If we failed to teardown gpadl, 682 651 * it is perhaps better to leak memory. 
683 652 */ 684 - goto out; 685 653 } 654 + 655 + channel->ringbuffer_gpadlhandle = 0; 686 656 } 687 657 688 - /* Cleanup the ring buffers for this channel */ 689 - hv_ringbuffer_cleanup(&channel->outbound); 690 - hv_ringbuffer_cleanup(&channel->inbound); 691 - 692 - free_pages((unsigned long)channel->ringbuffer_pages, 693 - get_order(channel->ringbuffer_pagecount * PAGE_SIZE)); 694 - 695 - out: 696 658 return ret; 697 659 } 660 + 661 + /* disconnect ring - close all channels */ 662 + int vmbus_disconnect_ring(struct vmbus_channel *channel) 663 + { 664 + struct vmbus_channel *cur_channel, *tmp; 665 + unsigned long flags; 666 + LIST_HEAD(list); 667 + int ret; 668 + 669 + if (channel->primary_channel != NULL) 670 + return -EINVAL; 671 + 672 + /* Snapshot the list of subchannels */ 673 + spin_lock_irqsave(&channel->lock, flags); 674 + list_splice_init(&channel->sc_list, &list); 675 + channel->num_sc = 0; 676 + spin_unlock_irqrestore(&channel->lock, flags); 677 + 678 + list_for_each_entry_safe(cur_channel, tmp, &list, sc_list) { 679 + if (cur_channel->rescind) 680 + wait_for_completion(&cur_channel->rescind_event); 681 + 682 + mutex_lock(&vmbus_connection.channel_mutex); 683 + if (vmbus_close_internal(cur_channel) == 0) { 684 + vmbus_free_ring(cur_channel); 685 + 686 + if (cur_channel->rescind) 687 + hv_process_channel_removal(cur_channel); 688 + } 689 + mutex_unlock(&vmbus_connection.channel_mutex); 690 + } 691 + 692 + /* 693 + * Now close the primary. 
694 + */ 695 + mutex_lock(&vmbus_connection.channel_mutex); 696 + ret = vmbus_close_internal(channel); 697 + mutex_unlock(&vmbus_connection.channel_mutex); 698 + 699 + return ret; 700 + } 701 + EXPORT_SYMBOL_GPL(vmbus_disconnect_ring); 698 702 699 703 /* 700 704 * vmbus_close - Close the specified channel 701 705 */ 702 706 void vmbus_close(struct vmbus_channel *channel) 703 707 { 704 - struct list_head *cur, *tmp; 705 - struct vmbus_channel *cur_channel; 706 - 707 - if (channel->primary_channel != NULL) { 708 - /* 709 - * We will only close sub-channels when 710 - * the primary is closed. 711 - */ 712 - return; 713 - } 714 - /* 715 - * Close all the sub-channels first and then close the 716 - * primary channel. 717 - */ 718 - list_for_each_safe(cur, tmp, &channel->sc_list) { 719 - cur_channel = list_entry(cur, struct vmbus_channel, sc_list); 720 - if (cur_channel->rescind) { 721 - wait_for_completion(&cur_channel->rescind_event); 722 - mutex_lock(&vmbus_connection.channel_mutex); 723 - vmbus_close_internal(cur_channel); 724 - hv_process_channel_removal( 725 - cur_channel->offermsg.child_relid); 726 - } else { 727 - mutex_lock(&vmbus_connection.channel_mutex); 728 - vmbus_close_internal(cur_channel); 729 - } 730 - mutex_unlock(&vmbus_connection.channel_mutex); 731 - } 732 - /* 733 - * Now close the primary. 734 - */ 735 - mutex_lock(&vmbus_connection.channel_mutex); 736 - vmbus_close_internal(channel); 737 - mutex_unlock(&vmbus_connection.channel_mutex); 708 + if (vmbus_disconnect_ring(channel) == 0) 709 + vmbus_free_ring(channel); 738 710 } 739 711 EXPORT_SYMBOL_GPL(vmbus_close); 740 712 741 713 /** 742 714 * vmbus_sendpacket() - Send the specified buffer on the given channel 743 - * @channel: Pointer to vmbus_channel structure. 744 - * @buffer: Pointer to the buffer you want to receive the data into. 
745 - * @bufferlen: Maximum size of what the the buffer will hold 715 + * @channel: Pointer to vmbus_channel structure 716 + * @buffer: Pointer to the buffer you want to send the data from. 717 + * @bufferlen: Maximum size of what the buffer holds. 746 718 * @requestid: Identifier of the request 747 - * @type: Type of packet that is being send e.g. negotiate, time 748 - * packet etc. 719 + * @type: Type of packet that is being sent e.g. negotiate, time 720 + * packet etc. 721 + * @flags: 0 or VMBUS_DATA_PACKET_FLAG_COMPLETION_REQUESTED 749 722 * 750 - * Sends data in @buffer directly to hyper-v via the vmbus 751 - * This will send the data unparsed to hyper-v. 723 + * Sends data in @buffer directly to Hyper-V via the vmbus. 724 + * This will send the data unparsed to Hyper-V. 752 725 * 753 726 * Mainly used by Hyper-V drivers. 754 727 */ ··· 885 850 EXPORT_SYMBOL_GPL(vmbus_sendpacket_mpb_desc); 886 851 887 852 /** 888 - * vmbus_recvpacket() - Retrieve the user packet on the specified channel 889 - * @channel: Pointer to vmbus_channel structure. 853 + * __vmbus_recvpacket() - Retrieve the user packet on the specified channel 854 + * @channel: Pointer to vmbus_channel structure 890 855 * @buffer: Pointer to the buffer you want to receive the data into. 891 - * @bufferlen: Maximum size of what the the buffer will hold 892 - * @buffer_actual_len: The actual size of the data after it was received 856 + * @bufferlen: Maximum size of what the buffer can hold. 857 + * @buffer_actual_len: The actual size of the data after it was received. 893 858 * @requestid: Identifier of the request 859 + * @raw: true means keep the vmpacket_descriptor header in the received data. 894 860 * 895 861 * Receives directly from the hyper-v vmbus and puts the data it received 896 862 * into Buffer. This will receive the data unparsed from hyper-v.
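The channel.c rework above splits ring allocation out of vmbus_open() and unwinds failures in __vmbus_open() through a ladder of goto labels (error_free_info, error_free_gpadl, error_clean_ring), each undoing only the steps already completed. A minimal userspace sketch of that unwind pattern, with hypothetical names and malloc() standing in for the kernel allocations:

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical two-stage setup mirroring __vmbus_open()'s unwind ladder:
 * a failure jumps to a label that undoes only what was already done. */
struct chan { char *ring; char *info; };

static int chan_setup(struct chan *c, int fail_at)
{
	int err = 0;

	c->ring = malloc(64);           /* stage 1: ring buffer */
	if (!c->ring)
		return -1;

	if (fail_at == 1) {
		err = -1;
		goto error_clean_ring;  /* only stage 1 to undo */
	}

	c->info = malloc(32);           /* stage 2: open_info message */
	if (!c->info || fail_at == 2) {
		err = -1;
		goto error_free_info;   /* undo stage 2, fall through to 1 */
	}
	return 0;

error_free_info:
	free(c->info);
	c->info = NULL;
error_clean_ring:
	free(c->ring);
	c->ring = NULL;
	return err;
}
```

The label order matters: later stages are listed first so each error path falls through the remaining cleanup steps, which is exactly why the patch reorders error_free_info ahead of error_free_gpadl.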
+23 -31
drivers/hv/channel_mgmt.c
··· 198 198 } 199 199 200 200 /** 201 - * vmbus_prep_negotiate_resp() - Create default response for Hyper-V Negotiate message 201 + * vmbus_prep_negotiate_resp() - Create default response for Negotiate message 202 202 * @icmsghdrp: Pointer to msg header structure 203 - * @icmsg_negotiate: Pointer to negotiate message structure 204 203 * @buf: Raw buffer channel data 204 + * @fw_version: The framework versions we can support. 205 + * @fw_vercnt: The size of @fw_version. 206 + * @srv_version: The service versions we can support. 207 + * @srv_vercnt: The size of @srv_version. 208 + * @nego_fw_version: The selected framework version. 209 + * @nego_srv_version: The selected service version. 205 210 * 206 - * @icmsghdrp is of type &struct icmsg_hdr. 211 + * Note: Versions are given in decreasing order. 212 + * 207 213 * Set up and fill in default negotiate response message. 208 - * 209 - * The fw_version and fw_vercnt specifies the framework version that 210 - * we can support. 211 - * 212 - * The srv_version and srv_vercnt specifies the service 213 - * versions we can support. 214 - * 215 - * Versions are given in decreasing order. 216 - * 217 - * nego_fw_version and nego_srv_version store the selected protocol versions. 218 - * 219 214 * Mainly used by Hyper-V drivers. 220 215 */ 221 216 bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp, ··· 380 385 trace_vmbus_release_relid(&msg, ret); 381 386 } 382 387 383 - void hv_process_channel_removal(u32 relid) 388 + void hv_process_channel_removal(struct vmbus_channel *channel) 384 389 { 390 + struct vmbus_channel *primary_channel; 385 391 unsigned long flags; 386 - struct vmbus_channel *primary_channel, *channel; 387 392 388 393 BUG_ON(!mutex_is_locked(&vmbus_connection.channel_mutex)); 389 - 390 - /* 391 - * Make sure channel is valid as we may have raced. 
392 - */ 393 - channel = relid2channel(relid); 394 - if (!channel) 395 - return; 396 - 397 394 BUG_ON(!channel->rescind); 395 + 398 396 if (channel->target_cpu != get_cpu()) { 399 397 put_cpu(); 400 398 smp_call_function_single(channel->target_cpu, ··· 417 429 cpumask_clear_cpu(channel->target_cpu, 418 430 &primary_channel->alloced_cpus_in_node); 419 431 420 - vmbus_release_relid(relid); 432 + vmbus_release_relid(channel->offermsg.child_relid); 421 433 422 434 free_channel(channel); 423 435 } ··· 594 606 bool perf_chn = vmbus_devs[dev_type].perf_device; 595 607 struct vmbus_channel *primary = channel->primary_channel; 596 608 int next_node; 597 - struct cpumask available_mask; 609 + cpumask_var_t available_mask; 598 610 struct cpumask *alloced_mask; 599 611 600 612 if ((vmbus_proto_version == VERSION_WS2008) || 601 - (vmbus_proto_version == VERSION_WIN7) || (!perf_chn)) { 613 + (vmbus_proto_version == VERSION_WIN7) || (!perf_chn) || 614 + !alloc_cpumask_var(&available_mask, GFP_KERNEL)) { 602 615 /* 603 616 * Prior to win8, all channel interrupts are 604 617 * delivered on cpu 0. 605 618 * Also if the channel is not a performance critical 606 619 * channel, bind it to cpu 0. 620 + * In case alloc_cpumask_var() fails, bind it to cpu 0. 
607 621 */ 608 622 channel->numa_node = 0; 609 623 channel->target_cpu = 0; ··· 643 653 cpumask_clear(alloced_mask); 644 654 } 645 655 646 - cpumask_xor(&available_mask, alloced_mask, 656 + cpumask_xor(available_mask, alloced_mask, 647 657 cpumask_of_node(primary->numa_node)); 648 658 649 659 cur_cpu = -1; ··· 661 671 } 662 672 663 673 while (true) { 664 - cur_cpu = cpumask_next(cur_cpu, &available_mask); 674 + cur_cpu = cpumask_next(cur_cpu, available_mask); 665 675 if (cur_cpu >= nr_cpu_ids) { 666 676 cur_cpu = -1; 667 - cpumask_copy(&available_mask, 677 + cpumask_copy(available_mask, 668 678 cpumask_of_node(primary->numa_node)); 669 679 continue; 670 680 } ··· 694 704 695 705 channel->target_cpu = cur_cpu; 696 706 channel->target_vp = hv_cpu_number_to_vp_number(cur_cpu); 707 + 708 + free_cpumask_var(available_mask); 697 709 } 698 710 699 711 static void vmbus_wait_for_unload(void) ··· 935 943 * The channel is currently not open; 936 944 * it is safe for us to cleanup the channel. 937 945 */ 938 - hv_process_channel_removal(rescind->child_relid); 946 + hv_process_channel_removal(channel); 939 947 } else { 940 948 complete(&channel->rescind_event); 941 949 }
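The channel_mgmt.c hunk moves the large cpumask off the stack (cpumask_var_t plus alloc_cpumask_var()) and, if that allocation fails, falls back to binding the channel to CPU 0, the same path used for pre-win8 hosts. A reduced sketch of the selection logic with a plain heap-allocated byte mask (NCPUS and the helpers are illustrative, not the kernel API):

```c
#include <assert.h>
#include <stdlib.h>

#define NCPUS 8

/* Pick the first CPU in this NUMA node that has not been used yet;
 * on allocation failure, fall back to CPU 0 as the patch does. */
static int pick_cpu(const unsigned char *alloced, const unsigned char *node)
{
	unsigned char *avail = calloc(NCPUS, 1);
	int cpu, pick = 0;

	if (!avail)
		return 0;   /* mask allocation failed: bind to CPU 0 */

	/* available = alloced XOR node: node CPUs not yet allocated */
	for (cpu = 0; cpu < NCPUS; cpu++)
		avail[cpu] = alloced[cpu] ^ node[cpu];

	for (cpu = 0; cpu < NCPUS; cpu++)
		if (avail[cpu]) {
			pick = cpu;
			break;
		}

	free(avail);
	return pick;
}
```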
+12 -3
drivers/hv/hv.c
··· 189 189 int hv_synic_alloc(void) 190 190 { 191 191 int cpu; 192 + struct hv_per_cpu_context *hv_cpu; 193 + 194 + /* 195 + * First, zero all per-cpu memory areas so hv_synic_free() can 196 + * detect what memory has been allocated and cleanup properly 197 + * after any failures. 198 + */ 199 + for_each_present_cpu(cpu) { 200 + hv_cpu = per_cpu_ptr(hv_context.cpu_context, cpu); 201 + memset(hv_cpu, 0, sizeof(*hv_cpu)); 202 + } 192 203 193 204 hv_context.hv_numa_map = kcalloc(nr_node_ids, sizeof(struct cpumask), 194 205 GFP_KERNEL); ··· 209 198 } 210 199 211 200 for_each_present_cpu(cpu) { 212 - struct hv_per_cpu_context *hv_cpu 213 - = per_cpu_ptr(hv_context.cpu_context, cpu); 201 + hv_cpu = per_cpu_ptr(hv_context.cpu_context, cpu); 214 202 215 - memset(hv_cpu, 0, sizeof(*hv_cpu)); 216 203 tasklet_init(&hv_cpu->msg_dpc, 217 204 vmbus_on_msg_dpc, (unsigned long) hv_cpu); 218 205
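The hv.c change zeroes every per-cpu context before any allocation happens, so that a failure partway through the loop leaves hv_synic_free() able to tell allocated slots from untouched ones. A standalone sketch of that zero-first discipline (struct and names hypothetical, malloc() in place of the per-cpu allocations):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define NR 4
struct percpu { char *msg_page; };

static void synic_free(struct percpu *ctx)
{
	for (int i = 0; i < NR; i++) {
		free(ctx[i].msg_page);   /* free(NULL) is a no-op */
		ctx[i].msg_page = NULL;
	}
}

static int synic_alloc(struct percpu *ctx, int fail_at)
{
	/* Zero all slots first, as the patch does, so the error path
	 * below only ever sees valid pointers or NULL. */
	memset(ctx, 0, NR * sizeof(*ctx));

	for (int i = 0; i < NR; i++) {
		if (i == fail_at)        /* simulate a mid-loop failure */
			goto err;
		ctx[i].msg_page = malloc(16);
		if (!ctx[i].msg_page)
			goto err;
	}
	return 0;
err:
	synic_free(ctx);
	return -1;
}
```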
+1 -1
drivers/hv/hv_balloon.c
··· 689 689 __online_page_increment_counters(pg); 690 690 __online_page_free(pg); 691 691 692 - WARN_ON_ONCE(!spin_is_locked(&dm_device.ha_lock)); 692 + lockdep_assert_held(&dm_device.ha_lock); 693 693 dm_device.num_pages_onlined++; 694 694 } 695 695
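The hv_balloon one-liner swaps spin_is_locked(), which only reports that *somebody* holds the lock, for lockdep_assert_held(), which checks that the *current* context holds it. A toy model with explicit context ids shows the difference the patch cares about (illustrative, not the kernel API):

```c
#include <assert.h>

struct lock { int held; int owner; };

static void lock_acquire(struct lock *l, int ctx)
{
	l->held = 1;
	l->owner = ctx;
}

static void lock_release(struct lock *l)
{
	l->held = 0;
}

/* spin_is_locked()-style: true even when another context owns the lock */
static int is_locked(const struct lock *l)
{
	return l->held;
}

/* lockdep_assert_held()-style: must be held by *this* context */
static int held_by(const struct lock *l, int ctx)
{
	return l->held && l->owner == ctx;
}
```

With the old check, context 2 taking the lock would silence a warning meant for context 1; the ownership check catches that misuse.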
+8 -6
drivers/hv/hv_kvp.c
··· 353 353 354 354 out->body.kvp_ip_val.dhcp_enabled = in->kvp_ip_val.dhcp_enabled; 355 355 356 - default: 357 356 utf16s_to_utf8s((wchar_t *)in->kvp_ip_val.adapter_id, 358 357 MAX_ADAPTER_ID_SIZE, 359 358 UTF16_LITTLE_ENDIAN, ··· 405 406 process_ib_ipinfo(in_msg, message, KVP_OP_SET_IP_INFO); 406 407 break; 407 408 case KVP_OP_GET_IP_INFO: 408 - process_ib_ipinfo(in_msg, message, KVP_OP_GET_IP_INFO); 409 + /* We only need to pass on message->kvp_hdr.operation. */ 409 410 break; 410 411 case KVP_OP_SET: 411 412 switch (in_msg->body.kvp_set.data.value_type) { ··· 420 421 UTF16_LITTLE_ENDIAN, 421 422 message->body.kvp_set.data.value, 422 423 HV_KVP_EXCHANGE_MAX_VALUE_SIZE - 1) + 1; 423 - break; 424 + break; 424 425 425 426 case REG_U32: 426 427 /* ··· 445 446 break; 446 447 447 448 } 449 + 450 + break; 451 + 448 452 case KVP_OP_GET: 449 453 message->body.kvp_set.data.key_size = 450 454 utf16s_to_utf8s( ··· 456 454 UTF16_LITTLE_ENDIAN, 457 455 message->body.kvp_set.data.key, 458 456 HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1; 459 - break; 457 + break; 460 458 461 459 case KVP_OP_DELETE: 462 460 message->body.kvp_delete.key_size = ··· 466 464 UTF16_LITTLE_ENDIAN, 467 465 message->body.kvp_delete.key, 468 466 HV_KVP_EXCHANGE_MAX_KEY_SIZE - 1) + 1; 469 - break; 467 + break; 470 468 471 469 case KVP_OP_ENUMERATE: 472 470 message->body.kvp_enum_data.index = 473 471 in_msg->body.kvp_enum_data.index; 474 - break; 472 + break; 475 473 } 476 474 477 475 kvp_transaction.state = HVUTIL_USERSPACE_REQ;
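The hv_kvp hunk adds a break that was missing after the KVP_OP_SET case, which had been falling through into KVP_OP_GET. A reduced sketch of why that break matters (op codes and fields are illustrative):

```c
#include <assert.h>

enum { OP_SET, OP_GET, OP_DELETE };

static int handle(int op)
{
	int copied_key = 0, copied_value = 0;

	switch (op) {
	case OP_SET:
		copied_value = 1;
		break;          /* the break the patch adds: without it,
				 * OP_SET also executed the OP_GET body */
	case OP_GET:
		copied_key = 1;
		break;
	case OP_DELETE:
		break;
	}
	return copied_value * 10 + copied_key;
}
```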
+1
drivers/hv/ring_buffer.c
··· 241 241 void hv_ringbuffer_cleanup(struct hv_ring_buffer_info *ring_info) 242 242 { 243 243 vunmap(ring_info->ring_buffer); 244 + ring_info->ring_buffer = NULL; 244 245 } 245 246 246 247 /* Write to the ring buffer. */
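The ring_buffer.c one-liner NULLs the mapping after vunmap(), making hv_ringbuffer_cleanup() safe to call twice, which the reworked open/close paths now do. The pattern in a userspace sketch, with free() standing in for vunmap():

```c
#include <assert.h>
#include <stdlib.h>

struct ring_info { void *ring_buffer; };

static void ring_cleanup(struct ring_info *ri)
{
	free(ri->ring_buffer);      /* free(NULL) is defined as a no-op */
	ri->ring_buffer = NULL;     /* makes a second cleanup call safe */
}
```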
+97 -21
drivers/hv/vmbus_drv.c
··· 498 498 } 499 499 static DEVICE_ATTR_RO(device); 500 500 501 + static ssize_t driver_override_store(struct device *dev, 502 + struct device_attribute *attr, 503 + const char *buf, size_t count) 504 + { 505 + struct hv_device *hv_dev = device_to_hv_device(dev); 506 + char *driver_override, *old, *cp; 507 + 508 + /* We need to keep extra room for a newline */ 509 + if (count >= (PAGE_SIZE - 1)) 510 + return -EINVAL; 511 + 512 + driver_override = kstrndup(buf, count, GFP_KERNEL); 513 + if (!driver_override) 514 + return -ENOMEM; 515 + 516 + cp = strchr(driver_override, '\n'); 517 + if (cp) 518 + *cp = '\0'; 519 + 520 + device_lock(dev); 521 + old = hv_dev->driver_override; 522 + if (strlen(driver_override)) { 523 + hv_dev->driver_override = driver_override; 524 + } else { 525 + kfree(driver_override); 526 + hv_dev->driver_override = NULL; 527 + } 528 + device_unlock(dev); 529 + 530 + kfree(old); 531 + 532 + return count; 533 + } 534 + 535 + static ssize_t driver_override_show(struct device *dev, 536 + struct device_attribute *attr, char *buf) 537 + { 538 + struct hv_device *hv_dev = device_to_hv_device(dev); 539 + ssize_t len; 540 + 541 + device_lock(dev); 542 + len = snprintf(buf, PAGE_SIZE, "%s\n", hv_dev->driver_override); 543 + device_unlock(dev); 544 + 545 + return len; 546 + } 547 + static DEVICE_ATTR_RW(driver_override); 548 + 501 549 /* Set up per device attributes in /sys/bus/vmbus/devices/<bus device> */ 502 550 static struct attribute *vmbus_dev_attrs[] = { 503 551 &dev_attr_id.attr, ··· 576 528 &dev_attr_channel_vp_mapping.attr, 577 529 &dev_attr_vendor.attr, 578 530 &dev_attr_device.attr, 531 + &dev_attr_driver_override.attr, 579 532 NULL, 580 533 }; 581 534 ATTRIBUTE_GROUPS(vmbus_dev); ··· 612 563 return true; 613 564 } 614 565 615 - /* 616 - * Return a matching hv_vmbus_device_id pointer. 617 - * If there is no match, return NULL. 
618 - */ 619 - static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv, 620 - const uuid_le *guid) 566 + static const struct hv_vmbus_device_id * 567 + hv_vmbus_dev_match(const struct hv_vmbus_device_id *id, const uuid_le *guid) 568 + 569 + { 570 + if (id == NULL) 571 + return NULL; /* empty device table */ 572 + 573 + for (; !is_null_guid(&id->guid); id++) 574 + if (!uuid_le_cmp(id->guid, *guid)) 575 + return id; 576 + 577 + return NULL; 578 + } 579 + 580 + static const struct hv_vmbus_device_id * 581 + hv_vmbus_dynid_match(struct hv_driver *drv, const uuid_le *guid) 621 582 { 622 583 const struct hv_vmbus_device_id *id = NULL; 623 584 struct vmbus_dynid *dynid; 624 585 625 - /* Look at the dynamic ids first, before the static ones */ 626 586 spin_lock(&drv->dynids.lock); 627 587 list_for_each_entry(dynid, &drv->dynids.list, node) { 628 588 if (!uuid_le_cmp(dynid->id.guid, *guid)) { ··· 641 583 } 642 584 spin_unlock(&drv->dynids.lock); 643 585 644 - if (id) 645 - return id; 586 + return id; 587 + } 646 588 647 - id = drv->id_table; 648 - if (id == NULL) 649 - return NULL; /* empty device table */ 589 + static const struct hv_vmbus_device_id vmbus_device_null = { 590 + .guid = NULL_UUID_LE, 591 + }; 650 592 651 - for (; !is_null_guid(&id->guid); id++) 652 - if (!uuid_le_cmp(id->guid, *guid)) 653 - return id; 593 + /* 594 + * Return a matching hv_vmbus_device_id pointer. 595 + * If there is no match, return NULL. 
596 + */ 597 + static const struct hv_vmbus_device_id *hv_vmbus_get_id(struct hv_driver *drv, 598 + struct hv_device *dev) 599 + { 600 + const uuid_le *guid = &dev->dev_type; 601 + const struct hv_vmbus_device_id *id; 654 602 655 - return NULL; 603 + /* When driver_override is set, only bind to the matching driver */ 604 + if (dev->driver_override && strcmp(dev->driver_override, drv->name)) 605 + return NULL; 606 + 607 + /* Look at the dynamic ids first, before the static ones */ 608 + id = hv_vmbus_dynid_match(drv, guid); 609 + if (!id) 610 + id = hv_vmbus_dev_match(drv->id_table, guid); 611 + 612 + /* driver_override will always match, send a dummy id */ 613 + if (!id && dev->driver_override) 614 + id = &vmbus_device_null; 615 + 616 + return id; 656 617 } 657 618 658 619 /* vmbus_add_dynid - add a new device ID to this driver and re-probe devices */ ··· 720 643 if (retval) 721 644 return retval; 722 645 723 - if (hv_vmbus_get_id(drv, &guid)) 646 + if (hv_vmbus_dynid_match(drv, &guid)) 724 647 return -EEXIST; 725 648 726 649 retval = vmbus_add_dynid(drv, &guid); ··· 785 708 if (is_hvsock_channel(hv_dev->channel)) 786 709 return drv->hvsock; 787 710 788 - if (hv_vmbus_get_id(drv, &hv_dev->dev_type)) 711 + if (hv_vmbus_get_id(drv, hv_dev)) 789 712 return 1; 790 713 791 714 return 0; ··· 802 725 struct hv_device *dev = device_to_hv_device(child_device); 803 726 const struct hv_vmbus_device_id *dev_id; 804 727 805 - dev_id = hv_vmbus_get_id(drv, &dev->dev_type); 728 + dev_id = hv_vmbus_get_id(drv, dev); 806 729 if (drv->probe) { 807 730 ret = drv->probe(dev, dev_id); 808 731 if (ret != 0) ··· 864 787 struct vmbus_channel *channel = hv_dev->channel; 865 788 866 789 mutex_lock(&vmbus_connection.channel_mutex); 867 - hv_process_channel_removal(channel->offermsg.child_relid); 790 + hv_process_channel_removal(channel); 868 791 mutex_unlock(&vmbus_connection.channel_mutex); 869 792 kfree(hv_dev); 870 - 871 793 } 872 794 873 795 /* The one and only one */
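The vmbus_drv.c additions give vmbus devices a driver_override sysfs attribute with the usual bus semantics: the store copies the user buffer, strips a trailing newline, swaps the pointer under the device lock, and treats an empty write as clearing the override. A standalone sketch of just the string handling (the helper and the sample driver name are hypothetical; locking is elided):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static int set_override(char **override, const char *buf, size_t count)
{
	char *dup, *cp;

	dup = malloc(count + 1);    /* kstrndup() analogue */
	if (!dup)
		return -1;
	memcpy(dup, buf, count);
	dup[count] = '\0';

	cp = strchr(dup, '\n');     /* echo appends a newline; drop it */
	if (cp)
		*cp = '\0';

	free(*override);            /* release any previous override */
	if (*dup) {
		*override = dup;
	} else {
		free(dup);          /* empty write clears the override */
		*override = NULL;
	}
	return 0;
}
```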
+6
drivers/hwtracing/coresight/coresight-catu.c
··· 406 406 407 407 static int catu_enable_hw(struct catu_drvdata *drvdata, void *data) 408 408 { 409 + int rc; 409 410 u32 control, mode; 410 411 struct etr_buf *etr_buf = data; 411 412 ··· 418 417 dev_warn(drvdata->dev, "CATU is already enabled\n"); 419 418 return -EBUSY; 420 419 } 420 + 421 + rc = coresight_claim_device_unlocked(drvdata->base); 422 + if (rc) 423 + return rc; 421 424 422 425 control |= BIT(CATU_CONTROL_ENABLE); 423 426 ··· 464 459 int rc = 0; 465 460 466 461 catu_write_control(drvdata, 0); 462 + coresight_disclaim_device_unlocked(drvdata->base); 467 463 if (catu_wait_for_ready(drvdata)) { 468 464 dev_info(drvdata->dev, "Timeout while waiting for READY\n"); 469 465 rc = -EAGAIN;
+62 -19
drivers/hwtracing/coresight/coresight-dynamic-replicator.c
··· 34 34 struct coresight_device *csdev; 35 35 }; 36 36 37 + /* 38 + * replicator_reset : Reset the replicator configuration to sane values. 39 + */ 40 + static void replicator_reset(struct replicator_state *drvdata) 41 + { 42 + CS_UNLOCK(drvdata->base); 43 + 44 + if (!coresight_claim_device_unlocked(drvdata->base)) { 45 + writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0); 46 + writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1); 47 + coresight_disclaim_device_unlocked(drvdata->base); 48 + } 49 + 50 + CS_LOCK(drvdata->base); 51 + } 52 + 37 53 static int replicator_enable(struct coresight_device *csdev, int inport, 38 54 int outport) 39 55 { 56 + int rc = 0; 57 + u32 reg; 40 58 struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent); 59 + 60 + switch (outport) { 61 + case 0: 62 + reg = REPLICATOR_IDFILTER0; 63 + break; 64 + case 1: 65 + reg = REPLICATOR_IDFILTER1; 66 + break; 67 + default: 68 + WARN_ON(1); 69 + return -EINVAL; 70 + } 41 71 42 72 CS_UNLOCK(drvdata->base); 43 73 44 - /* 45 - * Ensure that the other port is disabled 46 - * 0x00 - passing through the replicator unimpeded 47 - * 0xff - disable (or impede) the flow of ATB data 48 - */ 49 - if (outport == 0) { 50 - writel_relaxed(0x00, drvdata->base + REPLICATOR_IDFILTER0); 51 - writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1); 52 - } else { 53 - writel_relaxed(0x00, drvdata->base + REPLICATOR_IDFILTER1); 54 - writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0); 74 + if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) && 75 + (readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff)) 76 + rc = coresight_claim_device_unlocked(drvdata->base); 77 + 78 + /* Ensure that the outport is enabled. 
*/ 79 + if (!rc) { 80 + writel_relaxed(0x00, drvdata->base + reg); 81 + dev_dbg(drvdata->dev, "REPLICATOR enabled\n"); 55 82 } 56 83 57 84 CS_LOCK(drvdata->base); 58 85 59 - dev_info(drvdata->dev, "REPLICATOR enabled\n"); 60 - return 0; 86 + return rc; 61 87 } 62 88 63 89 static void replicator_disable(struct coresight_device *csdev, int inport, 64 90 int outport) 65 91 { 92 + u32 reg; 66 93 struct replicator_state *drvdata = dev_get_drvdata(csdev->dev.parent); 94 + 95 + switch (outport) { 96 + case 0: 97 + reg = REPLICATOR_IDFILTER0; 98 + break; 99 + case 1: 100 + reg = REPLICATOR_IDFILTER1; 101 + break; 102 + default: 103 + WARN_ON(1); 104 + return; 105 + } 67 106 68 107 CS_UNLOCK(drvdata->base); 69 108 70 109 /* disable the flow of ATB data through port */ 71 - if (outport == 0) 72 - writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER0); 73 - else 74 - writel_relaxed(0xff, drvdata->base + REPLICATOR_IDFILTER1); 110 + writel_relaxed(0xff, drvdata->base + reg); 75 111 112 + if ((readl_relaxed(drvdata->base + REPLICATOR_IDFILTER0) == 0xff) && 113 + (readl_relaxed(drvdata->base + REPLICATOR_IDFILTER1) == 0xff)) 114 + coresight_disclaim_device_unlocked(drvdata->base); 76 115 CS_LOCK(drvdata->base); 77 116 78 - dev_info(drvdata->dev, "REPLICATOR disabled\n"); 117 + dev_dbg(drvdata->dev, "REPLICATOR disabled\n"); 79 118 } 80 119 81 120 static const struct coresight_ops_link replicator_link_ops = { ··· 195 156 desc.groups = replicator_groups; 196 157 drvdata->csdev = coresight_register(&desc); 197 158 198 - return PTR_ERR_OR_ZERO(drvdata->csdev); 159 + if (!IS_ERR(drvdata->csdev)) { 160 + replicator_reset(drvdata); 161 + return 0; 162 + } 163 + return PTR_ERR(drvdata->csdev); 199 164 } 200 165 201 166 #ifdef CONFIG_PM
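The dynamic-replicator changes claim the device when the first output port is enabled (both ID filters at 0xff means the replicator is fully idle) and disclaim it again once the last port is disabled. A reduced two-port model of that claim/disclaim bookkeeping (the struct and the claimed flag are illustrative; the real code uses the coresight claim-tag registers):

```c
#include <assert.h>

struct repl { unsigned filter[2]; int claimed; };

static void repl_init(struct repl *r)
{
	r->filter[0] = r->filter[1] = 0xff;   /* 0xff blocks a port */
	r->claimed = 0;
}

static void repl_enable(struct repl *r, int port)
{
	if (r->filter[0] == 0xff && r->filter[1] == 0xff)
		r->claimed = 1;               /* first user claims */
	r->filter[port] = 0x00;               /* 0x00 passes ATB data */
}

static void repl_disable(struct repl *r, int port)
{
	r->filter[port] = 0xff;
	if (r->filter[0] == 0xff && r->filter[1] == 0xff)
		r->claimed = 0;               /* last user disclaims */
}
```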
+110 -75
drivers/hwtracing/coresight/coresight-etb10.c
··· 5 5 * Description: CoreSight Embedded Trace Buffer driver 6 6 */ 7 7 8 - #include <asm/local.h> 9 8 #include <linux/kernel.h> 10 9 #include <linux/init.h> 11 10 #include <linux/types.h> ··· 27 28 28 29 29 30 #include "coresight-priv.h" 31 + #include "coresight-etm-perf.h" 30 32 31 33 #define ETB_RAM_DEPTH_REG 0x004 32 34 #define ETB_STATUS_REG 0x00c ··· 71 71 * @miscdev: specifics to handle "/dev/xyz.etb" entry. 72 72 * @spinlock: only one at a time pls. 73 73 * @reading: synchronise user space access to etb buffer. 74 - * @mode: this ETB is being used. 75 74 * @buf: area of memory where ETB buffer content gets sent. 75 + * @mode: this ETB is being used. 76 76 * @buffer_depth: size of @buf. 77 77 * @trigger_cntr: amount of words to store after a trigger. 78 78 */ ··· 84 84 struct miscdevice miscdev; 85 85 spinlock_t spinlock; 86 86 local_t reading; 87 - local_t mode; 88 87 u8 *buf; 88 + u32 mode; 89 89 u32 buffer_depth; 90 90 u32 trigger_cntr; 91 91 }; 92 + 93 + static int etb_set_buffer(struct coresight_device *csdev, 94 + struct perf_output_handle *handle); 92 95 93 96 static unsigned int etb_get_buffer_depth(struct etb_drvdata *drvdata) 94 97 { ··· 106 103 return depth; 107 104 } 108 105 109 - static void etb_enable_hw(struct etb_drvdata *drvdata) 106 + static void __etb_enable_hw(struct etb_drvdata *drvdata) 110 107 { 111 108 int i; 112 109 u32 depth; ··· 134 131 CS_LOCK(drvdata->base); 135 132 } 136 133 137 - static int etb_enable(struct coresight_device *csdev, u32 mode) 134 + static int etb_enable_hw(struct etb_drvdata *drvdata) 138 135 { 139 - u32 val; 136 + __etb_enable_hw(drvdata); 137 + return 0; 138 + } 139 + 140 + static int etb_enable_sysfs(struct coresight_device *csdev) 141 + { 142 + int ret = 0; 140 143 unsigned long flags; 141 144 struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 142 145 143 - val = local_cmpxchg(&drvdata->mode, 144 - CS_MODE_DISABLED, mode); 145 - /* 146 - * When accessing from Perf, a HW buffer can be handled 
147 - * by a single trace entity. In sysFS mode many tracers 148 - * can be logging to the same HW buffer. 149 - */ 150 - if (val == CS_MODE_PERF) 151 - return -EBUSY; 146 + spin_lock_irqsave(&drvdata->spinlock, flags); 147 + 148 + /* Don't messup with perf sessions. */ 149 + if (drvdata->mode == CS_MODE_PERF) { 150 + ret = -EBUSY; 151 + goto out; 152 + } 152 153 153 154 /* Nothing to do, the tracer is already enabled. */ 154 - if (val == CS_MODE_SYSFS) 155 + if (drvdata->mode == CS_MODE_SYSFS) 155 156 goto out; 156 157 157 - spin_lock_irqsave(&drvdata->spinlock, flags); 158 - etb_enable_hw(drvdata); 159 - spin_unlock_irqrestore(&drvdata->spinlock, flags); 158 + ret = etb_enable_hw(drvdata); 159 + if (!ret) 160 + drvdata->mode = CS_MODE_SYSFS; 160 161 161 162 out: 162 - dev_info(drvdata->dev, "ETB enabled\n"); 163 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 164 + return ret; 165 + } 166 + 167 + static int etb_enable_perf(struct coresight_device *csdev, void *data) 168 + { 169 + int ret = 0; 170 + unsigned long flags; 171 + struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 172 + 173 + spin_lock_irqsave(&drvdata->spinlock, flags); 174 + 175 + /* No need to continue if the component is already in use. */ 176 + if (drvdata->mode != CS_MODE_DISABLED) { 177 + ret = -EBUSY; 178 + goto out; 179 + } 180 + 181 + /* 182 + * We don't have an internal state to clean up if we fail to setup 183 + * the perf buffer. So we can perform the step before we turn the 184 + * ETB on and leave without cleaning up. 
185 + */ 186 + ret = etb_set_buffer(csdev, (struct perf_output_handle *)data); 187 + if (ret) 188 + goto out; 189 + 190 + ret = etb_enable_hw(drvdata); 191 + if (!ret) 192 + drvdata->mode = CS_MODE_PERF; 193 + 194 + out: 195 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 196 + return ret; 197 + } 198 + 199 + static int etb_enable(struct coresight_device *csdev, u32 mode, void *data) 200 + { 201 + int ret; 202 + struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 203 + 204 + switch (mode) { 205 + case CS_MODE_SYSFS: 206 + ret = etb_enable_sysfs(csdev); 207 + break; 208 + case CS_MODE_PERF: 209 + ret = etb_enable_perf(csdev, data); 210 + break; 211 + default: 212 + ret = -EINVAL; 213 + break; 214 + } 215 + 216 + if (ret) 217 + return ret; 218 + 219 + dev_dbg(drvdata->dev, "ETB enabled\n"); 163 220 return 0; 164 221 } 165 222 ··· 319 256 unsigned long flags; 320 257 321 258 spin_lock_irqsave(&drvdata->spinlock, flags); 322 - etb_disable_hw(drvdata); 323 - etb_dump_hw(drvdata); 259 + 260 + /* Disable the ETB only if it needs to */ 261 + if (drvdata->mode != CS_MODE_DISABLED) { 262 + etb_disable_hw(drvdata); 263 + etb_dump_hw(drvdata); 264 + drvdata->mode = CS_MODE_DISABLED; 265 + } 324 266 spin_unlock_irqrestore(&drvdata->spinlock, flags); 325 267 326 - local_set(&drvdata->mode, CS_MODE_DISABLED); 327 - 328 - dev_info(drvdata->dev, "ETB disabled\n"); 268 + dev_dbg(drvdata->dev, "ETB disabled\n"); 329 269 } 330 270 331 271 static void *etb_alloc_buffer(struct coresight_device *csdev, int cpu, ··· 360 294 } 361 295 362 296 static int etb_set_buffer(struct coresight_device *csdev, 363 - struct perf_output_handle *handle, 364 - void *sink_config) 297 + struct perf_output_handle *handle) 365 298 { 366 299 int ret = 0; 367 300 unsigned long head; 368 - struct cs_buffers *buf = sink_config; 301 + struct cs_buffers *buf = etm_perf_sink_config(handle); 302 + 303 + if (!buf) 304 + return -EINVAL; 369 305 370 306 /* wrap head around to the amount of space we 
have */ 371 307 head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1); ··· 383 315 return ret; 384 316 } 385 317 386 - static unsigned long etb_reset_buffer(struct coresight_device *csdev, 387 - struct perf_output_handle *handle, 388 - void *sink_config) 389 - { 390 - unsigned long size = 0; 391 - struct cs_buffers *buf = sink_config; 392 - 393 - if (buf) { 394 - /* 395 - * In snapshot mode ->data_size holds the new address of the 396 - * ring buffer's head. The size itself is the whole address 397 - * range since we want the latest information. 398 - */ 399 - if (buf->snapshot) 400 - handle->head = local_xchg(&buf->data_size, 401 - buf->nr_pages << PAGE_SHIFT); 402 - 403 - /* 404 - * Tell the tracer PMU how much we got in this run and if 405 - * something went wrong along the way. Nobody else can use 406 - * this cs_buffers instance until we are done. As such 407 - * resetting parameters here and squaring off with the ring 408 - * buffer API in the tracer PMU is fine. 409 - */ 410 - size = local_xchg(&buf->data_size, 0); 411 - } 412 - 413 - return size; 414 - } 415 - 416 - static void etb_update_buffer(struct coresight_device *csdev, 318 + static unsigned long etb_update_buffer(struct coresight_device *csdev, 417 319 struct perf_output_handle *handle, 418 320 void *sink_config) 419 321 { ··· 392 354 u8 *buf_ptr; 393 355 const u32 *barrier; 394 356 u32 read_ptr, write_ptr, capacity; 395 - u32 status, read_data, to_read; 396 - unsigned long offset; 357 + u32 status, read_data; 358 + unsigned long offset, to_read; 397 359 struct cs_buffers *buf = sink_config; 398 360 struct etb_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 399 361 400 362 if (!buf) 401 - return; 363 + return 0; 402 364 403 365 capacity = drvdata->buffer_depth * ETB_FRAME_SIZE_WORDS; 404 366 ··· 503 465 writel_relaxed(0x0, drvdata->base + ETB_RAM_WRITE_POINTER); 504 466 505 467 /* 506 - * In snapshot mode all we have to do is communicate to 507 - * perf_aux_output_end() the address of the 
current head. In full 508 - * trace mode the same function expects a size to move rb->aux_head 509 - * forward. 468 + * In snapshot mode we have to update the handle->head to point 469 + * to the new location. 510 470 */ 511 - if (buf->snapshot) 512 - local_set(&buf->data_size, (cur * PAGE_SIZE) + offset); 513 - else 514 - local_add(to_read, &buf->data_size); 515 - 471 + if (buf->snapshot) { 472 + handle->head = (cur * PAGE_SIZE) + offset; 473 + to_read = buf->nr_pages << PAGE_SHIFT; 474 + } 516 475 etb_enable_hw(drvdata); 517 476 CS_LOCK(drvdata->base); 477 + 478 + return to_read; 518 479 } 519 480 520 481 static const struct coresight_ops_sink etb_sink_ops = { ··· 521 484 .disable = etb_disable, 522 485 .alloc_buffer = etb_alloc_buffer, 523 486 .free_buffer = etb_free_buffer, 524 - .set_buffer = etb_set_buffer, 525 - .reset_buffer = etb_reset_buffer, 526 487 .update_buffer = etb_update_buffer, 527 488 }; 528 489 ··· 533 498 unsigned long flags; 534 499 535 500 spin_lock_irqsave(&drvdata->spinlock, flags); 536 - if (local_read(&drvdata->mode) == CS_MODE_SYSFS) { 501 + if (drvdata->mode == CS_MODE_SYSFS) { 537 502 etb_disable_hw(drvdata); 538 503 etb_dump_hw(drvdata); 539 504 etb_enable_hw(drvdata); 540 505 } 541 506 spin_unlock_irqrestore(&drvdata->spinlock, flags); 542 507 543 - dev_info(drvdata->dev, "ETB dumped\n"); 508 + dev_dbg(drvdata->dev, "ETB dumped\n"); 544 509 } 545 510 546 511 static int etb_open(struct inode *inode, struct file *file)
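The etb10 hunk above folds the old `->reset_buffer()` step into `->update_buffer()`: the sink callback now returns the number of bytes for `perf_aux_output_end()` directly and, in snapshot mode, writes the new head straight into the handle instead of parking it in `buf->data_size`. A minimal userspace sketch of that contract (not kernel code; `fake_handle`/`fake_buf` are hypothetical stand-ins for `perf_output_handle` and `cs_buffers`):

```c
#include <assert.h>
#include <stddef.h>

struct fake_handle { unsigned long head; };
struct fake_buf    { int snapshot; unsigned long nr_bytes; };

/* Consolidated update_buffer: returns the size to report upstream. */
static unsigned long fake_update_buffer(struct fake_handle *handle,
					struct fake_buf *buf,
					unsigned long cur_offset,
					unsigned long to_read)
{
	if (!buf)
		return 0;
	/*
	 * In snapshot mode the new head is communicated directly through
	 * the handle, and the reported size is the whole buffer, since
	 * the caller wants the latest full window of trace data.
	 */
	if (buf->snapshot) {
		handle->head = cur_offset;
		to_read = buf->nr_bytes;
	}
	return to_read;
}
```

This removes the `local_xchg()` dance on `data_size` that the deleted `etb_reset_buffer()` needed.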
+72 -60
drivers/hwtracing/coresight/coresight-etm-perf.c
··· 12 12 #include <linux/mm.h> 13 13 #include <linux/init.h> 14 14 #include <linux/perf_event.h> 15 + #include <linux/percpu-defs.h> 15 16 #include <linux/slab.h> 16 17 #include <linux/types.h> 17 18 #include <linux/workqueue.h> ··· 22 21 23 22 static struct pmu etm_pmu; 24 23 static bool etm_perf_up; 25 - 26 - /** 27 - * struct etm_event_data - Coresight specifics associated to an event 28 - * @work: Handle to free allocated memory outside IRQ context. 29 - * @mask: Hold the CPU(s) this event was set for. 30 - * @snk_config: The sink configuration. 31 - * @path: An array of path, each slot for one CPU. 32 - */ 33 - struct etm_event_data { 34 - struct work_struct work; 35 - cpumask_t mask; 36 - void *snk_config; 37 - struct list_head **path; 38 - }; 39 24 40 25 static DEFINE_PER_CPU(struct perf_output_handle, ctx_handle); 41 26 static DEFINE_PER_CPU(struct coresight_device *, csdev_src); ··· 47 60 &etm_pmu_format_group, 48 61 NULL, 49 62 }; 63 + 64 + static inline struct list_head ** 65 + etm_event_cpu_path_ptr(struct etm_event_data *data, int cpu) 66 + { 67 + return per_cpu_ptr(data->path, cpu); 68 + } 69 + 70 + static inline struct list_head * 71 + etm_event_cpu_path(struct etm_event_data *data, int cpu) 72 + { 73 + return *etm_event_cpu_path_ptr(data, cpu); 74 + } 50 75 51 76 static void etm_event_read(struct perf_event *event) {} 52 77 ··· 113 114 114 115 event_data = container_of(work, struct etm_event_data, work); 115 116 mask = &event_data->mask; 116 - /* 117 - * First deal with the sink configuration. See comment in 118 - * etm_setup_aux() about why we take the first available path. 
119 - */ 120 - if (event_data->snk_config) { 117 + 118 + /* Free the sink buffers, if there are any */ 119 + if (event_data->snk_config && !WARN_ON(cpumask_empty(mask))) { 121 120 cpu = cpumask_first(mask); 122 - sink = coresight_get_sink(event_data->path[cpu]); 121 + sink = coresight_get_sink(etm_event_cpu_path(event_data, cpu)); 123 122 if (sink_ops(sink)->free_buffer) 124 123 sink_ops(sink)->free_buffer(event_data->snk_config); 125 124 } 126 125 127 126 for_each_cpu(cpu, mask) { 128 - if (!(IS_ERR_OR_NULL(event_data->path[cpu]))) 129 - coresight_release_path(event_data->path[cpu]); 127 + struct list_head **ppath; 128 + 129 + ppath = etm_event_cpu_path_ptr(event_data, cpu); 130 + if (!(IS_ERR_OR_NULL(*ppath))) 131 + coresight_release_path(*ppath); 132 + *ppath = NULL; 130 133 } 131 134 132 - kfree(event_data->path); 135 + free_percpu(event_data->path); 133 136 kfree(event_data); 134 137 } 135 138 136 139 static void *alloc_event_data(int cpu) 137 140 { 138 - int size; 139 141 cpumask_t *mask; 140 142 struct etm_event_data *event_data; 141 143 ··· 145 145 if (!event_data) 146 146 return NULL; 147 147 148 - /* Make sure nothing disappears under us */ 149 - get_online_cpus(); 150 - size = num_online_cpus(); 151 148 152 149 mask = &event_data->mask; 153 150 if (cpu != -1) 154 151 cpumask_set_cpu(cpu, mask); 155 152 else 156 - cpumask_copy(mask, cpu_online_mask); 157 - put_online_cpus(); 153 + cpumask_copy(mask, cpu_present_mask); 158 154 159 155 /* 160 156 * Each CPU has a single path between source and destination. As such ··· 160 164 * unused memory when dealing with single CPU trace scenarios is small 161 165 * compared to the cost of searching through an optimized array. 162 166 */ 163 - event_data->path = kcalloc(size, 164 - sizeof(struct list_head *), GFP_KERNEL); 167 + event_data->path = alloc_percpu(struct list_head *); 168 + 165 169 if (!event_data->path) { 166 170 kfree(event_data); 167 171 return NULL; ··· 202 206 * on the cmd line. 
As such the "enable_sink" flag in sysFS is reset. 203 207 */ 204 208 sink = coresight_get_enabled_sink(true); 205 - if (!sink) 209 + if (!sink || !sink_ops(sink)->alloc_buffer) 206 210 goto err; 207 211 208 212 mask = &event_data->mask; 209 213 210 - /* Setup the path for each CPU in a trace session */ 214 + /* 215 + * Setup the path for each CPU in a trace session. We try to build 216 + * trace path for each CPU in the mask. If we don't find an ETM 217 + * for the CPU or fail to build a path, we clear the CPU from the 218 + * mask and continue with the rest. If ever we try to trace on those 219 + * CPUs, we can handle it and fail the session. 220 + */ 211 221 for_each_cpu(cpu, mask) { 222 + struct list_head *path; 212 223 struct coresight_device *csdev; 213 224 214 225 csdev = per_cpu(csdev_src, cpu); 215 - if (!csdev) 216 - goto err; 226 + /* 227 + * If there is no ETM associated with this CPU clear it from 228 + * the mask and continue with the rest. If ever we try to trace 229 + * on this CPU, we handle it accordingly. 230 + */ 231 + if (!csdev) { 232 + cpumask_clear_cpu(cpu, mask); 233 + continue; 234 + } 217 235 218 236 /* 219 237 * Building a path doesn't enable it, it simply builds a 220 238 * list of devices from source to sink that can be 221 239 * referenced later when the path is actually needed. 
222 240 */ 223 - event_data->path[cpu] = coresight_build_path(csdev, sink); 224 - if (IS_ERR(event_data->path[cpu])) 225 - goto err; 241 + path = coresight_build_path(csdev, sink); 242 + if (IS_ERR(path)) { 243 + cpumask_clear_cpu(cpu, mask); 244 + continue; 245 + } 246 + 247 + *etm_event_cpu_path_ptr(event_data, cpu) = path; 226 248 } 227 249 228 - if (!sink_ops(sink)->alloc_buffer) 250 + /* If we don't have any CPUs ready for tracing, abort */ 251 + cpu = cpumask_first(mask); 252 + if (cpu >= nr_cpu_ids) 229 253 goto err; 230 254 231 - cpu = cpumask_first(mask); 232 - /* Get the AUX specific data from the sink buffer */ 255 + /* Allocate the sink buffer for this session */ 233 256 event_data->snk_config = 234 257 sink_ops(sink)->alloc_buffer(sink, cpu, pages, 235 258 nr_pages, overwrite); ··· 270 255 struct etm_event_data *event_data; 271 256 struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle); 272 257 struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu); 258 + struct list_head *path; 273 259 274 260 if (!csdev) 275 261 goto fail; ··· 283 267 if (!event_data) 284 268 goto fail; 285 269 270 + path = etm_event_cpu_path(event_data, cpu); 286 271 /* We need a sink, no need to continue without one */ 287 - sink = coresight_get_sink(event_data->path[cpu]); 288 - if (WARN_ON_ONCE(!sink || !sink_ops(sink)->set_buffer)) 289 - goto fail_end_stop; 290 - 291 - /* Configure the sink */ 292 - if (sink_ops(sink)->set_buffer(sink, handle, 293 - event_data->snk_config)) 272 + sink = coresight_get_sink(path); 273 + if (WARN_ON_ONCE(!sink)) 294 274 goto fail_end_stop; 295 275 296 276 /* Nothing will happen without a path */ 297 - if (coresight_enable_path(event_data->path[cpu], CS_MODE_PERF)) 277 + if (coresight_enable_path(path, CS_MODE_PERF, handle)) 298 278 goto fail_end_stop; 299 279 300 280 /* Tell the perf core the event is alive */ ··· 298 286 299 287 /* Finally enable the tracer */ 300 288 if (source_ops(csdev)->enable(csdev, event, CS_MODE_PERF)) 301 - 
goto fail_end_stop; 289 + goto fail_disable_path; 302 290 303 291 out: 304 292 return; 305 293 294 + fail_disable_path: 295 + coresight_disable_path(path); 306 296 fail_end_stop: 307 297 perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); 308 298 perf_aux_output_end(handle, 0); ··· 320 306 struct coresight_device *sink, *csdev = per_cpu(csdev_src, cpu); 321 307 struct perf_output_handle *handle = this_cpu_ptr(&ctx_handle); 322 308 struct etm_event_data *event_data = perf_get_aux(handle); 309 + struct list_head *path; 323 310 324 311 if (event->hw.state == PERF_HES_STOPPED) 325 312 return; ··· 328 313 if (!csdev) 329 314 return; 330 315 331 - sink = coresight_get_sink(event_data->path[cpu]); 316 + path = etm_event_cpu_path(event_data, cpu); 317 + if (!path) 318 + return; 319 + 320 + sink = coresight_get_sink(path); 332 321 if (!sink) 333 322 return; 334 323 ··· 350 331 if (!sink_ops(sink)->update_buffer) 351 332 return; 352 333 353 - sink_ops(sink)->update_buffer(sink, handle, 334 + size = sink_ops(sink)->update_buffer(sink, handle, 354 335 event_data->snk_config); 355 - 356 - if (!sink_ops(sink)->reset_buffer) 357 - return; 358 - 359 - size = sink_ops(sink)->reset_buffer(sink, handle, 360 - event_data->snk_config); 361 - 362 336 perf_aux_output_end(handle, size); 363 337 } 364 338 365 339 /* Disabling the path make its elements available to other sessions */ 366 - coresight_disable_path(event_data->path[cpu]); 340 + coresight_disable_path(path); 367 341 } 368 342 369 343 static int etm_event_add(struct perf_event *event, int mode)
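The etm-perf.c hunk changes two policies at once: paths move from a `kcalloc(num_online_cpus())` array to per-CPU storage, and a CPU without an ETM (or whose path build fails) is now dropped from the event mask instead of failing the whole session; setup only aborts if no CPU survives. A userspace sketch of that drop-and-continue policy, with `alloc_percpu()` modelled as a plain array and `NR_FAKE_CPUS`/`has_etm[]` purely illustrative:

```c
#include <assert.h>
#include <stddef.h>

#define NR_FAKE_CPUS 4

static int setup_paths(const int has_etm[NR_FAKE_CPUS],
		       void *paths[NR_FAKE_CPUS],
		       unsigned int *mask /* bitmask of requested CPUs */)
{
	static int dummy_path; /* stands in for a built coresight path */

	for (int cpu = 0; cpu < NR_FAKE_CPUS; cpu++) {
		if (!(*mask & (1u << cpu)))
			continue;
		if (!has_etm[cpu]) {
			*mask &= ~(1u << cpu); /* drop this CPU, keep going */
			paths[cpu] = NULL;
			continue;
		}
		paths[cpu] = &dummy_path;  /* "coresight_build_path()" */
	}
	return *mask ? 0 : -1;             /* abort only if nothing is left */
}
```

If tracing is later attempted on a dropped CPU, `etm_event_start()` simply finds no path and fails that one event rather than the whole session.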
+26
drivers/hwtracing/coresight/coresight-etm-perf.h
··· 7 7 #ifndef _CORESIGHT_ETM_PERF_H 8 8 #define _CORESIGHT_ETM_PERF_H 9 9 10 + #include <linux/percpu-defs.h> 10 11 #include "coresight-priv.h" 11 12 12 13 struct coresight_device; ··· 43 42 bool ssstatus; 44 43 }; 45 44 45 + /** 46 + * struct etm_event_data - Coresight specifics associated to an event 47 + * @work: Handle to free allocated memory outside IRQ context. 48 + * @mask: Hold the CPU(s) this event was set for. 49 + * @snk_config: The sink configuration. 50 + * @path: An array of path, each slot for one CPU. 51 + */ 52 + struct etm_event_data { 53 + struct work_struct work; 54 + cpumask_t mask; 55 + void *snk_config; 56 + struct list_head * __percpu *path; 57 + }; 46 58 47 59 #ifdef CONFIG_CORESIGHT 48 60 int etm_perf_symlink(struct coresight_device *csdev, bool link); 61 + static inline void *etm_perf_sink_config(struct perf_output_handle *handle) 62 + { 63 + struct etm_event_data *data = perf_get_aux(handle); 49 64 65 + if (data) 66 + return data->snk_config; 67 + return NULL; 68 + } 50 69 #else 51 70 static inline int etm_perf_symlink(struct coresight_device *csdev, bool link) 52 71 { return -EINVAL; } 72 + 73 + static inline void *etm_perf_sink_config(struct perf_output_handle *handle) 74 + { 75 + return NULL; 76 + } 53 77 54 78 #endif /* CONFIG_CORESIGHT */ 55 79
+41 -17
drivers/hwtracing/coresight/coresight-etm3x.c
··· 355 355 return 0; 356 356 } 357 357 358 - static void etm_enable_hw(void *info) 358 + static int etm_enable_hw(struct etm_drvdata *drvdata) 359 359 { 360 - int i; 360 + int i, rc; 361 361 u32 etmcr; 362 - struct etm_drvdata *drvdata = info; 363 362 struct etm_config *config = &drvdata->config; 364 363 365 364 CS_UNLOCK(drvdata->base); ··· 369 370 etm_set_pwrup(drvdata); 370 371 /* Make sure all registers are accessible */ 371 372 etm_os_unlock(drvdata); 373 + rc = coresight_claim_device_unlocked(drvdata->base); 374 + if (rc) 375 + goto done; 372 376 373 377 etm_set_prog(drvdata); 374 378 ··· 420 418 etm_writel(drvdata, 0x0, ETMVMIDCVR); 421 419 422 420 etm_clr_prog(drvdata); 421 + 422 + done: 423 + if (rc) 424 + etm_set_pwrdwn(drvdata); 423 425 CS_LOCK(drvdata->base); 424 426 425 - dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu); 427 + dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n", 428 + drvdata->cpu, rc); 429 + return rc; 430 + } 431 + 432 + struct etm_enable_arg { 433 + struct etm_drvdata *drvdata; 434 + int rc; 435 + }; 436 + 437 + static void etm_enable_hw_smp_call(void *info) 438 + { 439 + struct etm_enable_arg *arg = info; 440 + 441 + if (WARN_ON(!arg)) 442 + return; 443 + arg->rc = etm_enable_hw(arg->drvdata); 426 444 } 427 445 428 446 static int etm_cpu_id(struct coresight_device *csdev) ··· 497 475 /* Configure the tracer based on the session's specifics */ 498 476 etm_parse_event_config(drvdata, event); 499 477 /* And enable it */ 500 - etm_enable_hw(drvdata); 501 - 502 - return 0; 478 + return etm_enable_hw(drvdata); 503 479 } 504 480 505 481 static int etm_enable_sysfs(struct coresight_device *csdev) 506 482 { 507 483 struct etm_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 484 + struct etm_enable_arg arg = { 0 }; 508 485 int ret; 509 486 510 487 spin_lock(&drvdata->spinlock); ··· 513 492 * hw configuration will take place on the local CPU during bring up. 
514 493 */ 515 494 if (cpu_online(drvdata->cpu)) { 495 + arg.drvdata = drvdata; 516 496 ret = smp_call_function_single(drvdata->cpu, 517 - etm_enable_hw, drvdata, 1); 518 - if (ret) 519 - goto err; 497 + etm_enable_hw_smp_call, &arg, 1); 498 + if (!ret) 499 + ret = arg.rc; 500 + if (!ret) 501 + drvdata->sticky_enable = true; 502 + } else { 503 + ret = -ENODEV; 520 504 } 521 505 522 - drvdata->sticky_enable = true; 523 506 spin_unlock(&drvdata->spinlock); 524 507 525 - dev_info(drvdata->dev, "ETM tracing enabled\n"); 526 - return 0; 527 - 528 - err: 529 - spin_unlock(&drvdata->spinlock); 508 + if (!ret) 509 + dev_dbg(drvdata->dev, "ETM tracing enabled\n"); 530 510 return ret; 531 511 } 532 512 ··· 576 554 577 555 for (i = 0; i < drvdata->nr_cntr; i++) 578 556 config->cntr_val[i] = etm_readl(drvdata, ETMCNTVRn(i)); 557 + 558 + coresight_disclaim_device_unlocked(drvdata->base); 579 559 580 560 etm_set_pwrdwn(drvdata); 581 561 CS_LOCK(drvdata->base); ··· 628 604 spin_unlock(&drvdata->spinlock); 629 605 cpus_read_unlock(); 630 606 631 - dev_info(drvdata->dev, "ETM tracing disabled\n"); 607 + dev_dbg(drvdata->dev, "ETM tracing disabled\n"); 632 608 } 633 609 634 610 static void etm_disable(struct coresight_device *csdev,
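The `etm_enable_hw_smp_call()` wrapper exists because `smp_call_function_single()` only accepts a `void (*)(void *)` callback, so the enable routine's return code has to travel back through the argument struct. A sketch of the pattern (userspace model; `run_on_cpu()` is a hypothetical stand-in for the kernel cross-call, and the `int input` replaces `struct etm_drvdata *`):

```c
#include <assert.h>

struct enable_arg {
	int input;  /* stands in for struct etm_drvdata * */
	int rc;     /* filled in by the callback */
};

static int do_enable_hw(int input)
{
	return input < 0 ? -22 /* -EINVAL */ : 0;
}

/* void(void *) shim: captures the real return code in the arg struct. */
static void enable_hw_smp_call(void *info)
{
	struct enable_arg *arg = info;

	arg->rc = do_enable_hw(arg->input);
}

static int run_on_cpu(void (*fn)(void *), void *info)
{
	fn(info);  /* the real helper runs fn on a remote CPU */
	return 0;  /* 0 = the cross-call itself succeeded */
}

static int enable_sysfs(int input)
{
	struct enable_arg arg = { .input = input };
	int ret = run_on_cpu(enable_hw_smp_call, &arg);

	if (!ret)  /* cross-call ok: now report the callback's own rc */
		ret = arg.rc;
	return ret;
}
```

Note the two distinct failure channels: the cross-call can fail, and independently the hardware enable can fail; the caller checks both, which is exactly what the new `if (!ret) ret = arg.rc;` sequence does.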
+57 -36
drivers/hwtracing/coresight/coresight-etm4x.c
··· 28 28 #include <linux/pm_runtime.h> 29 29 #include <asm/sections.h> 30 30 #include <asm/local.h> 31 + #include <asm/virt.h> 31 32 32 33 #include "coresight-etm4x.h" 33 34 #include "coresight-etm-perf.h" ··· 78 77 return drvdata->trcid; 79 78 } 80 79 81 - static void etm4_enable_hw(void *info) 80 + struct etm4_enable_arg { 81 + struct etmv4_drvdata *drvdata; 82 + int rc; 83 + }; 84 + 85 + static int etm4_enable_hw(struct etmv4_drvdata *drvdata) 82 86 { 83 - int i; 84 - struct etmv4_drvdata *drvdata = info; 87 + int i, rc; 85 88 struct etmv4_config *config = &drvdata->config; 86 89 87 90 CS_UNLOCK(drvdata->base); 88 91 89 92 etm4_os_unlock(drvdata); 93 + 94 + rc = coresight_claim_device_unlocked(drvdata->base); 95 + if (rc) 96 + goto done; 90 97 91 98 /* Disable the trace unit before programming trace registers */ 92 99 writel_relaxed(0, drvdata->base + TRCPRGCTLR); ··· 183 174 dev_err(drvdata->dev, 184 175 "timeout while waiting for Idle Trace Status\n"); 185 176 177 + done: 186 178 CS_LOCK(drvdata->base); 187 179 188 - dev_dbg(drvdata->dev, "cpu: %d enable smp call done\n", drvdata->cpu); 180 + dev_dbg(drvdata->dev, "cpu: %d enable smp call done: %d\n", 181 + drvdata->cpu, rc); 182 + return rc; 183 + } 184 + 185 + static void etm4_enable_hw_smp_call(void *info) 186 + { 187 + struct etm4_enable_arg *arg = info; 188 + 189 + if (WARN_ON(!arg)) 190 + return; 191 + arg->rc = etm4_enable_hw(arg->drvdata); 189 192 } 190 193 191 194 static int etm4_parse_event_config(struct etmv4_drvdata *drvdata, ··· 263 242 if (ret) 264 243 goto out; 265 244 /* And enable it */ 266 - etm4_enable_hw(drvdata); 245 + ret = etm4_enable_hw(drvdata); 267 246 268 247 out: 269 248 return ret; ··· 272 251 static int etm4_enable_sysfs(struct coresight_device *csdev) 273 252 { 274 253 struct etmv4_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 254 + struct etm4_enable_arg arg = { 0 }; 275 255 int ret; 276 256 277 257 spin_lock(&drvdata->spinlock); ··· 281 259 * Executing etm4_enable_hw 
on the cpu whose ETM is being enabled 282 260 * ensures that register writes occur when cpu is powered. 283 261 */ 262 + arg.drvdata = drvdata; 284 263 ret = smp_call_function_single(drvdata->cpu, 285 - etm4_enable_hw, drvdata, 1); 286 - if (ret) 287 - goto err; 288 - 289 - drvdata->sticky_enable = true; 264 + etm4_enable_hw_smp_call, &arg, 1); 265 + if (!ret) 266 + ret = arg.rc; 267 + if (!ret) 268 + drvdata->sticky_enable = true; 290 269 spin_unlock(&drvdata->spinlock); 291 270 292 - dev_info(drvdata->dev, "ETM tracing enabled\n"); 293 - return 0; 294 - 295 - err: 296 - spin_unlock(&drvdata->spinlock); 271 + if (!ret) 272 + dev_dbg(drvdata->dev, "ETM tracing enabled\n"); 297 273 return ret; 298 274 } 299 275 ··· 348 328 isb(); 349 329 writel_relaxed(control, drvdata->base + TRCPRGCTLR); 350 330 331 + coresight_disclaim_device_unlocked(drvdata->base); 332 + 351 333 CS_LOCK(drvdata->base); 352 334 353 335 dev_dbg(drvdata->dev, "cpu: %d disable smp call done\n", drvdata->cpu); ··· 402 380 spin_unlock(&drvdata->spinlock); 403 381 cpus_read_unlock(); 404 382 405 - dev_info(drvdata->dev, "ETM tracing disabled\n"); 383 + dev_dbg(drvdata->dev, "ETM tracing disabled\n"); 406 384 } 407 385 408 386 static void etm4_disable(struct coresight_device *csdev, ··· 627 605 config->vinst_ctrl |= BIT(0); 628 606 } 629 607 630 - static u64 etm4_get_access_type(struct etmv4_config *config) 608 + static u64 etm4_get_ns_access_type(struct etmv4_config *config) 631 609 { 632 610 u64 access_type = 0; 633 611 ··· 638 616 * Bit[13] Exception level 1 - OS 639 617 * Bit[14] Exception level 2 - Hypervisor 640 618 * Bit[15] Never implemented 641 - * 642 - * Always stay away from hypervisor mode. 
643 619 */ 644 - access_type = ETM_EXLEVEL_NS_HYP; 645 - 646 - if (config->mode & ETM_MODE_EXCL_KERN) 647 - access_type |= ETM_EXLEVEL_NS_OS; 620 + if (!is_kernel_in_hyp_mode()) { 621 + /* Stay away from hypervisor mode for non-VHE */ 622 + access_type = ETM_EXLEVEL_NS_HYP; 623 + if (config->mode & ETM_MODE_EXCL_KERN) 624 + access_type |= ETM_EXLEVEL_NS_OS; 625 + } else if (config->mode & ETM_MODE_EXCL_KERN) { 626 + access_type = ETM_EXLEVEL_NS_HYP; 627 + } 648 628 649 629 if (config->mode & ETM_MODE_EXCL_USER) 650 630 access_type |= ETM_EXLEVEL_NS_APP; 631 + 632 + return access_type; 633 + } 634 + 635 + static u64 etm4_get_access_type(struct etmv4_config *config) 636 + { 637 + u64 access_type = etm4_get_ns_access_type(config); 651 638 652 639 /* 653 640 * EXLEVEL_S, bits[11:8], don't trace anything happening ··· 911 880 912 881 addr_acc = config->addr_acc[ETM_DEFAULT_ADDR_COMP]; 913 882 /* clear default config */ 914 - addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS); 883 + addr_acc &= ~(ETM_EXLEVEL_NS_APP | ETM_EXLEVEL_NS_OS | 884 + ETM_EXLEVEL_NS_HYP); 915 885 916 - /* 917 - * EXLEVEL_NS, bits[15:12] 918 - * The Exception levels are: 919 - * Bit[12] Exception level 0 - Application 920 - * Bit[13] Exception level 1 - OS 921 - * Bit[14] Exception level 2 - Hypervisor 922 - * Bit[15] Never implemented 923 - */ 924 - if (mode & ETM_MODE_EXCL_KERN) 925 - addr_acc |= ETM_EXLEVEL_NS_OS; 926 - else 927 - addr_acc |= ETM_EXLEVEL_NS_APP; 886 + addr_acc |= etm4_get_ns_access_type(config); 928 887 929 888 config->addr_acc[ETM_DEFAULT_ADDR_COMP] = addr_acc; 930 889 config->addr_acc[ETM_DEFAULT_ADDR_COMP + 1] = addr_acc;
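The new `etm4_get_ns_access_type()` makes the exception-level filter VHE-aware: on non-VHE the kernel runs at EL1 so EL2 is always excluded, while on VHE the kernel itself runs at EL2, so "exclude kernel" must exclude EL2 instead. A sketch of that decision table (bit positions mirror the EXLEVEL_NS comment removed by the hunk; the boolean flags are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define EXLEVEL_NS_APP (1u << 12) /* EL0 - Application */
#define EXLEVEL_NS_OS  (1u << 13) /* EL1 - OS */
#define EXLEVEL_NS_HYP (1u << 14) /* EL2 - Hypervisor */

static uint32_t ns_access_type(int excl_kern, int excl_user, int vhe)
{
	uint32_t access = 0;

	if (!vhe) {
		access = EXLEVEL_NS_HYP;  /* never trace EL2 on non-VHE */
		if (excl_kern)
			access |= EXLEVEL_NS_OS;
	} else if (excl_kern) {
		access = EXLEVEL_NS_HYP;  /* VHE kernel lives at EL2 */
	}
	if (excl_user)
		access |= EXLEVEL_NS_APP;
	return access;
}
```

The companion change in the address-comparator path simply reuses this helper after clearing all three NS bits, instead of open-coding the OS/APP choice.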
+22 -6
drivers/hwtracing/coresight/coresight-funnel.c
··· 25 25 #define FUNNEL_HOLDTIME_MASK 0xf00 26 26 #define FUNNEL_HOLDTIME_SHFT 0x8 27 27 #define FUNNEL_HOLDTIME (0x7 << FUNNEL_HOLDTIME_SHFT) 28 + #define FUNNEL_ENSx_MASK 0xff 28 29 29 30 /** 30 31 * struct funnel_drvdata - specifics associated to a funnel component ··· 43 42 unsigned long priority; 44 43 }; 45 44 46 - static void funnel_enable_hw(struct funnel_drvdata *drvdata, int port) 45 + static int funnel_enable_hw(struct funnel_drvdata *drvdata, int port) 47 46 { 48 47 u32 functl; 48 + int rc = 0; 49 49 50 50 CS_UNLOCK(drvdata->base); 51 51 52 52 functl = readl_relaxed(drvdata->base + FUNNEL_FUNCTL); 53 + /* Claim the device only when we enable the first slave */ 54 + if (!(functl & FUNNEL_ENSx_MASK)) { 55 + rc = coresight_claim_device_unlocked(drvdata->base); 56 + if (rc) 57 + goto done; 58 + } 59 + 53 60 functl &= ~FUNNEL_HOLDTIME_MASK; 54 61 functl |= FUNNEL_HOLDTIME; 55 62 functl |= (1 << port); 56 63 writel_relaxed(functl, drvdata->base + FUNNEL_FUNCTL); 57 64 writel_relaxed(drvdata->priority, drvdata->base + FUNNEL_PRICTL); 58 - 65 + done: 59 66 CS_LOCK(drvdata->base); 67 + return rc; 60 68 } 61 69 62 70 static int funnel_enable(struct coresight_device *csdev, int inport, 63 71 int outport) 64 72 { 73 + int rc; 65 74 struct funnel_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 66 75 67 - funnel_enable_hw(drvdata, inport); 76 + rc = funnel_enable_hw(drvdata, inport); 68 77 69 - dev_info(drvdata->dev, "FUNNEL inport %d enabled\n", inport); 70 - return 0; 78 + if (!rc) 79 + dev_dbg(drvdata->dev, "FUNNEL inport %d enabled\n", inport); 80 + return rc; 71 81 } 72 82 73 83 static void funnel_disable_hw(struct funnel_drvdata *drvdata, int inport) ··· 91 79 functl &= ~(1 << inport); 92 80 writel_relaxed(functl, drvdata->base + FUNNEL_FUNCTL); 93 81 82 + /* Disclaim the device if none of the slaves are now active */ 83 + if (!(functl & FUNNEL_ENSx_MASK)) 84 + coresight_disclaim_device_unlocked(drvdata->base); 85 + 94 86 CS_LOCK(drvdata->base); 95 87 } 
96 88 ··· 105 89 106 90 funnel_disable_hw(drvdata, inport); 107 91 108 - dev_info(drvdata->dev, "FUNNEL inport %d disabled\n", inport); 92 + dev_dbg(drvdata->dev, "FUNNEL inport %d disabled\n", inport); 109 93 } 110 94 111 95 static const struct coresight_ops_link funnel_link_ops = {
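The funnel hunk claims the device only on the 0-to-nonzero transition of the FUNCTL enable bits and disclaims it on the nonzero-to-0 transition, so a shared funnel is owned exactly while at least one input port is active. A userspace model of that edge-triggered ownership (the `claimed` flag stands in for the CLAIM tag registers; `ENSX_MASK` mirrors the new `FUNNEL_ENSx_MASK`):

```c
#include <assert.h>
#include <stdint.h>

#define ENSX_MASK 0xffu

struct fake_funnel {
	uint32_t functl; /* models the FUNCTL enable bits */
	int claimed;     /* models the CLAIM tag state */
};

static int funnel_enable_port(struct fake_funnel *f, int port)
{
	/* Claim only when enabling the first slave port. */
	if (!(f->functl & ENSX_MASK)) {
		if (f->claimed)
			return -16; /* -EBUSY: another agent owns it */
		f->claimed = 1;
	}
	f->functl |= 1u << port;
	return 0;
}

static void funnel_disable_port(struct fake_funnel *f, int port)
{
	f->functl &= ~(1u << port);
	/* Disclaim once no slave port remains active. */
	if (!(f->functl & ENSX_MASK))
		f->claimed = 0;
}
```

This is why `funnel_enable()` now propagates a return code where it previously always returned 0.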
+8 -1
drivers/hwtracing/coresight/coresight-priv.h
··· 25 25 #define CORESIGHT_DEVID 0xfc8 26 26 #define CORESIGHT_DEVTYPE 0xfcc 27 27 28 + 29 + /* 30 + * Coresight device CLAIM protocol. 31 + * See PSCI - ARM DEN 0022D, Section: 6.8.1 Debug and Trace save and restore. 32 + */ 33 + #define CORESIGHT_CLAIM_SELF_HOSTED BIT(1) 34 + 28 35 #define TIMEOUT_US 100 29 36 #define BMVAL(val, lsb, msb) ((val & GENMASK(msb, lsb)) >> lsb) 30 37 ··· 144 137 } 145 138 146 139 void coresight_disable_path(struct list_head *path); 147 - int coresight_enable_path(struct list_head *path, u32 mode); 140 + int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data); 148 141 struct coresight_device *coresight_get_sink(struct list_head *path); 149 142 struct coresight_device *coresight_get_enabled_sink(bool reset); 150 143 struct list_head *coresight_build_path(struct coresight_device *csdev,
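The `CORESIGHT_CLAIM_SELF_HOSTED` tag added to coresight-priv.h implements the PSCI (ARM DEN 0022D, 6.8.1) handshake between a self-hosted agent and an external debugger: set your tag bit, then verify no other agent's tag is also set, backing off if it is. A minimal model of that handshake (the raw `uint32_t` stands in for the CLAIMSET/CLAIMCLR register pair, and any bit other than SELF_HOSTED is treated as an external agent here):

```c
#include <assert.h>
#include <stdint.h>

#define CLAIM_SELF_HOSTED (1u << 1)

static int claim_device(uint32_t *claim_tags)
{
	*claim_tags |= CLAIM_SELF_HOSTED;          /* CLAIMSET write */
	if (*claim_tags != CLAIM_SELF_HOSTED) {    /* another agent set? */
		*claim_tags &= ~CLAIM_SELF_HOSTED; /* back off */
		return -16;                        /* -EBUSY */
	}
	return 0;
}

static void disclaim_device(uint32_t *claim_tags)
{
	*claim_tags &= ~CLAIM_SELF_HOSTED;         /* CLAIMCLR write */
}
```

The sinks, sources and links in this series all route their enable paths through this claim before touching the component, which is what the new `coresight_claim_device_unlocked()` call sites implement.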
+2 -2
drivers/hwtracing/coresight/coresight-replicator.c
··· 35 35 { 36 36 struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 37 37 38 - dev_info(drvdata->dev, "REPLICATOR enabled\n"); 38 + dev_dbg(drvdata->dev, "REPLICATOR enabled\n"); 39 39 return 0; 40 40 } 41 41 ··· 44 44 { 45 45 struct replicator_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 46 46 47 - dev_info(drvdata->dev, "REPLICATOR disabled\n"); 47 + dev_dbg(drvdata->dev, "REPLICATOR disabled\n"); 48 48 } 49 49 50 50 static const struct coresight_ops_link replicator_link_ops = {
+2 -2
drivers/hwtracing/coresight/coresight-stm.c
··· 211 211 stm_enable_hw(drvdata); 212 212 spin_unlock(&drvdata->spinlock); 213 213 214 - dev_info(drvdata->dev, "STM tracing enabled\n"); 214 + dev_dbg(drvdata->dev, "STM tracing enabled\n"); 215 215 return 0; 216 216 } 217 217 ··· 274 274 pm_runtime_put(drvdata->dev); 275 275 276 276 local_set(&drvdata->mode, CS_MODE_DISABLED); 277 - dev_info(drvdata->dev, "STM tracing disabled\n"); 277 + dev_dbg(drvdata->dev, "STM tracing disabled\n"); 278 278 } 279 279 } 280 280
+102 -96
drivers/hwtracing/coresight/coresight-tmc-etf.c
··· 10 10 #include <linux/slab.h> 11 11 #include "coresight-priv.h" 12 12 #include "coresight-tmc.h" 13 + #include "coresight-etm-perf.h" 13 14 14 - static void tmc_etb_enable_hw(struct tmc_drvdata *drvdata) 15 + static int tmc_set_etf_buffer(struct coresight_device *csdev, 16 + struct perf_output_handle *handle); 17 + 18 + static void __tmc_etb_enable_hw(struct tmc_drvdata *drvdata) 15 19 { 16 20 CS_UNLOCK(drvdata->base); 17 21 ··· 34 30 CS_LOCK(drvdata->base); 35 31 } 36 32 33 + static int tmc_etb_enable_hw(struct tmc_drvdata *drvdata) 34 + { 35 + int rc = coresight_claim_device(drvdata->base); 36 + 37 + if (rc) 38 + return rc; 39 + 40 + __tmc_etb_enable_hw(drvdata); 41 + return 0; 42 + } 43 + 37 44 static void tmc_etb_dump_hw(struct tmc_drvdata *drvdata) 38 45 { 39 46 char *bufp; 40 47 u32 read_data, lost; 41 - int i; 42 48 43 49 /* Check if the buffer wrapped around. */ 44 50 lost = readl_relaxed(drvdata->base + TMC_STS) & TMC_STS_FULL; 45 51 bufp = drvdata->buf; 46 52 drvdata->len = 0; 47 53 while (1) { 48 - for (i = 0; i < drvdata->memwidth; i++) { 49 - read_data = readl_relaxed(drvdata->base + TMC_RRD); 50 - if (read_data == 0xFFFFFFFF) 51 - goto done; 52 - memcpy(bufp, &read_data, 4); 53 - bufp += 4; 54 - drvdata->len += 4; 55 - } 54 + read_data = readl_relaxed(drvdata->base + TMC_RRD); 55 + if (read_data == 0xFFFFFFFF) 56 + break; 57 + memcpy(bufp, &read_data, 4); 58 + bufp += 4; 59 + drvdata->len += 4; 56 60 } 57 - done: 61 + 58 62 if (lost) 59 63 coresight_insert_barrier_packet(drvdata->buf); 60 64 return; 61 65 } 62 66 63 - static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata) 67 + static void __tmc_etb_disable_hw(struct tmc_drvdata *drvdata) 64 68 { 65 69 CS_UNLOCK(drvdata->base); 66 70 ··· 84 72 CS_LOCK(drvdata->base); 85 73 } 86 74 87 - static void tmc_etf_enable_hw(struct tmc_drvdata *drvdata) 75 + static void tmc_etb_disable_hw(struct tmc_drvdata *drvdata) 76 + { 77 + coresight_disclaim_device(drvdata); 78 + __tmc_etb_disable_hw(drvdata); 79 
+ } 80 + 81 + static void __tmc_etf_enable_hw(struct tmc_drvdata *drvdata) 88 82 { 89 83 CS_UNLOCK(drvdata->base); 90 84 ··· 106 88 CS_LOCK(drvdata->base); 107 89 } 108 90 91 + static int tmc_etf_enable_hw(struct tmc_drvdata *drvdata) 92 + { 93 + int rc = coresight_claim_device(drvdata->base); 94 + 95 + if (rc) 96 + return rc; 97 + 98 + __tmc_etf_enable_hw(drvdata); 99 + return 0; 100 + } 101 + 109 102 static void tmc_etf_disable_hw(struct tmc_drvdata *drvdata) 110 103 { 111 104 CS_UNLOCK(drvdata->base); 112 105 113 106 tmc_flush_and_stop(drvdata); 114 107 tmc_disable_hw(drvdata); 115 - 108 + coresight_disclaim_device_unlocked(drvdata->base); 116 109 CS_LOCK(drvdata->base); 117 110 } 118 111 ··· 199 170 drvdata->buf = buf; 200 171 } 201 172 202 - drvdata->mode = CS_MODE_SYSFS; 203 - tmc_etb_enable_hw(drvdata); 173 + ret = tmc_etb_enable_hw(drvdata); 174 + if (!ret) 175 + drvdata->mode = CS_MODE_SYSFS; 176 + else 177 + /* Free up the buffer if we failed to enable */ 178 + used = false; 204 179 out: 205 180 spin_unlock_irqrestore(&drvdata->spinlock, flags); 206 181 ··· 215 182 return ret; 216 183 } 217 184 218 - static int tmc_enable_etf_sink_perf(struct coresight_device *csdev) 185 + static int tmc_enable_etf_sink_perf(struct coresight_device *csdev, void *data) 219 186 { 220 187 int ret = 0; 221 188 unsigned long flags; 222 189 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 190 + struct perf_output_handle *handle = data; 223 191 224 192 spin_lock_irqsave(&drvdata->spinlock, flags); 225 - if (drvdata->reading) { 193 + do { 226 194 ret = -EINVAL; 227 - goto out; 228 - } 195 + if (drvdata->reading) 196 + break; 197 + /* 198 + * In Perf mode there can be only one writer per sink. There 199 + * is also no need to continue if the ETB/ETF is already 200 + * operated from sysFS. 201 + */ 202 + if (drvdata->mode != CS_MODE_DISABLED) 203 + break; 229 204 230 - /* 231 - * In Perf mode there can be only one writer per sink. 
There 232 - * is also no need to continue if the ETB/ETR is already operated 233 - * from sysFS. 234 - */ 235 - if (drvdata->mode != CS_MODE_DISABLED) { 236 - ret = -EINVAL; 237 - goto out; 238 - } 239 - 240 - drvdata->mode = CS_MODE_PERF; 241 - tmc_etb_enable_hw(drvdata); 242 - out: 205 + ret = tmc_set_etf_buffer(csdev, handle); 206 + if (ret) 207 + break; 208 + ret = tmc_etb_enable_hw(drvdata); 209 + if (!ret) 210 + drvdata->mode = CS_MODE_PERF; 211 + } while (0); 243 212 spin_unlock_irqrestore(&drvdata->spinlock, flags); 244 213 245 214 return ret; 246 215 } 247 216 248 - static int tmc_enable_etf_sink(struct coresight_device *csdev, u32 mode) 217 + static int tmc_enable_etf_sink(struct coresight_device *csdev, 218 + u32 mode, void *data) 249 219 { 250 220 int ret; 251 221 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); ··· 258 222 ret = tmc_enable_etf_sink_sysfs(csdev); 259 223 break; 260 224 case CS_MODE_PERF: 261 - ret = tmc_enable_etf_sink_perf(csdev); 225 + ret = tmc_enable_etf_sink_perf(csdev, data); 262 226 break; 263 227 /* We shouldn't be here */ 264 228 default: ··· 269 233 if (ret) 270 234 return ret; 271 235 272 - dev_info(drvdata->dev, "TMC-ETB/ETF enabled\n"); 236 + dev_dbg(drvdata->dev, "TMC-ETB/ETF enabled\n"); 273 237 return 0; 274 238 } 275 239 ··· 292 256 293 257 spin_unlock_irqrestore(&drvdata->spinlock, flags); 294 258 295 - dev_info(drvdata->dev, "TMC-ETB/ETF disabled\n"); 259 + dev_dbg(drvdata->dev, "TMC-ETB/ETF disabled\n"); 296 260 } 297 261 298 262 static int tmc_enable_etf_link(struct coresight_device *csdev, 299 263 int inport, int outport) 300 264 { 265 + int ret; 301 266 unsigned long flags; 302 267 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 303 268 ··· 308 271 return -EBUSY; 309 272 } 310 273 311 - tmc_etf_enable_hw(drvdata); 312 - drvdata->mode = CS_MODE_SYSFS; 274 + ret = tmc_etf_enable_hw(drvdata); 275 + if (!ret) 276 + drvdata->mode = CS_MODE_SYSFS; 313 277 
spin_unlock_irqrestore(&drvdata->spinlock, flags); 314 278 315 - dev_info(drvdata->dev, "TMC-ETF enabled\n"); 316 - return 0; 279 + if (!ret) 280 + dev_dbg(drvdata->dev, "TMC-ETF enabled\n"); 281 + return ret; 317 282 } 318 283 319 284 static void tmc_disable_etf_link(struct coresight_device *csdev, ··· 334 295 drvdata->mode = CS_MODE_DISABLED; 335 296 spin_unlock_irqrestore(&drvdata->spinlock, flags); 336 297 337 - dev_info(drvdata->dev, "TMC-ETF disabled\n"); 298 + dev_dbg(drvdata->dev, "TMC-ETF disabled\n"); 338 299 } 339 300 340 301 static void *tmc_alloc_etf_buffer(struct coresight_device *csdev, int cpu, ··· 367 328 } 368 329 369 330 static int tmc_set_etf_buffer(struct coresight_device *csdev, 370 - struct perf_output_handle *handle, 371 - void *sink_config) 331 + struct perf_output_handle *handle) 372 332 { 373 333 int ret = 0; 374 334 unsigned long head; 375 - struct cs_buffers *buf = sink_config; 335 + struct cs_buffers *buf = etm_perf_sink_config(handle); 336 + 337 + if (!buf) 338 + return -EINVAL; 376 339 377 340 /* wrap head around to the amount of space we have */ 378 341 head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1); ··· 390 349 return ret; 391 350 } 392 351 393 - static unsigned long tmc_reset_etf_buffer(struct coresight_device *csdev, 394 - struct perf_output_handle *handle, 395 - void *sink_config) 396 - { 397 - long size = 0; 398 - struct cs_buffers *buf = sink_config; 399 - 400 - if (buf) { 401 - /* 402 - * In snapshot mode ->data_size holds the new address of the 403 - * ring buffer's head. The size itself is the whole address 404 - * range since we want the latest information. 405 - */ 406 - if (buf->snapshot) 407 - handle->head = local_xchg(&buf->data_size, 408 - buf->nr_pages << PAGE_SHIFT); 409 - /* 410 - * Tell the tracer PMU how much we got in this run and if 411 - * something went wrong along the way. Nobody else can use 412 - * this cs_buffers instance until we are done. 
As such 413 - * resetting parameters here and squaring off with the ring 414 - * buffer API in the tracer PMU is fine. 415 - */ 416 - size = local_xchg(&buf->data_size, 0); 417 - } 418 - 419 - return size; 420 - } 421 - 422 - static void tmc_update_etf_buffer(struct coresight_device *csdev, 352 + static unsigned long tmc_update_etf_buffer(struct coresight_device *csdev, 423 353 struct perf_output_handle *handle, 424 354 void *sink_config) 425 355 { ··· 399 387 const u32 *barrier; 400 388 u32 *buf_ptr; 401 389 u64 read_ptr, write_ptr; 402 - u32 status, to_read; 403 - unsigned long offset; 390 + u32 status; 391 + unsigned long offset, to_read; 404 392 struct cs_buffers *buf = sink_config; 405 393 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 406 394 407 395 if (!buf) 408 - return; 396 + return 0; 409 397 410 398 /* This shouldn't happen */ 411 399 if (WARN_ON_ONCE(drvdata->mode != CS_MODE_PERF)) 412 - return; 400 + return 0; 413 401 414 402 CS_UNLOCK(drvdata->base); 415 403 ··· 450 438 case TMC_MEM_INTF_WIDTH_32BITS: 451 439 case TMC_MEM_INTF_WIDTH_64BITS: 452 440 case TMC_MEM_INTF_WIDTH_128BITS: 453 - mask = GENMASK(31, 5); 441 + mask = GENMASK(31, 4); 454 442 break; 455 443 case TMC_MEM_INTF_WIDTH_256BITS: 456 - mask = GENMASK(31, 6); 444 + mask = GENMASK(31, 5); 457 445 break; 458 446 } 459 447 ··· 498 486 } 499 487 } 500 488 501 - /* 502 - * In snapshot mode all we have to do is communicate to 503 - * perf_aux_output_end() the address of the current head. In full 504 - * trace mode the same function expects a size to move rb->aux_head 505 - * forward. 
506 - */ 507 - if (buf->snapshot) 508 - local_set(&buf->data_size, (cur * PAGE_SIZE) + offset); 509 - else 510 - local_add(to_read, &buf->data_size); 511 - 489 + /* In snapshot mode we have to update the head */ 490 + if (buf->snapshot) { 491 + handle->head = (cur * PAGE_SIZE) + offset; 492 + to_read = buf->nr_pages << PAGE_SHIFT; 493 + } 512 494 CS_LOCK(drvdata->base); 495 + 496 + return to_read; 513 497 } 514 498 515 499 static const struct coresight_ops_sink tmc_etf_sink_ops = { ··· 513 505 .disable = tmc_disable_etf_sink, 514 506 .alloc_buffer = tmc_alloc_etf_buffer, 515 507 .free_buffer = tmc_free_etf_buffer, 516 - .set_buffer = tmc_set_etf_buffer, 517 - .reset_buffer = tmc_reset_etf_buffer, 518 508 .update_buffer = tmc_update_etf_buffer, 519 509 }; 520 510 ··· 569 563 570 564 /* Disable the TMC if need be */ 571 565 if (drvdata->mode == CS_MODE_SYSFS) 572 - tmc_etb_disable_hw(drvdata); 566 + __tmc_etb_disable_hw(drvdata); 573 567 574 568 drvdata->reading = true; 575 569 out: ··· 609 603 * can't be NULL. 610 604 */ 611 605 memset(drvdata->buf, 0, drvdata->size); 612 - tmc_etb_enable_hw(drvdata); 606 + __tmc_etb_enable_hw(drvdata); 613 607 } else { 614 608 /* 615 609 * The ETB/ETF is not tracing and the buffer was just read.
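The ETB/ETF perf path above wraps `handle->head` with `(buf->nr_pages << PAGE_SHIFT) - 1`, relying on perf AUX buffers always being a power-of-two number of pages (the ETR side of this series uses a modulo, `PERF_IDX2OFF`, for the same job). A minimal userspace sketch of that arithmetic, assuming 4K pages:

```c
#include <assert.h>

#define PAGE_SHIFT 12  /* assume 4K pages for this sketch */

/* Wrap an ever-growing perf AUX head into the buffer, mirroring:
 *   head = handle->head & ((buf->nr_pages << PAGE_SHIFT) - 1);
 * perf guarantees nr_pages is a power of two, so the mask is valid. */
static unsigned long wrap_head(unsigned long head, unsigned int nr_pages)
{
	return head & (((unsigned long)nr_pages << PAGE_SHIFT) - 1);
}
```

A head that has advanced past the end of a 4-page (16K) buffer wraps back to its offset within the buffer, which is what lets the driver write trace data at the right page and offset without ever resetting the head.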
+337 -48
drivers/hwtracing/coresight/coresight-tmc-etr.c
··· 10 10 #include <linux/slab.h> 11 11 #include <linux/vmalloc.h> 12 12 #include "coresight-catu.h" 13 + #include "coresight-etm-perf.h" 13 14 #include "coresight-priv.h" 14 15 #include "coresight-tmc.h" 15 16 ··· 20 19 void *vaddr; 21 20 size_t size; 22 21 }; 22 + 23 + /* 24 + * etr_perf_buffer - Perf buffer used for ETR 25 + * @etr_buf - Actual buffer used by the ETR 26 + * @snaphost - Perf session mode 27 + * @head - handle->head at the beginning of the session. 28 + * @nr_pages - Number of pages in the ring buffer. 29 + * @pages - Array of Pages in the ring buffer. 30 + */ 31 + struct etr_perf_buffer { 32 + struct etr_buf *etr_buf; 33 + bool snapshot; 34 + unsigned long head; 35 + int nr_pages; 36 + void **pages; 37 + }; 38 + 39 + /* Convert the perf index to an offset within the ETR buffer */ 40 + #define PERF_IDX2OFF(idx, buf) ((idx) % ((buf)->nr_pages << PAGE_SHIFT)) 41 + 42 + /* Lower limit for ETR hardware buffer */ 43 + #define TMC_ETR_PERF_MIN_BUF_SIZE SZ_1M 23 44 24 45 /* 25 46 * The TMC ETR SG has a page size of 4K. 
The SG table contains pointers ··· 559 536 sg_table = tmc_alloc_sg_table(dev, node, nr_tpages, nr_dpages, pages); 560 537 if (IS_ERR(sg_table)) { 561 538 kfree(etr_table); 562 - return ERR_PTR(PTR_ERR(sg_table)); 539 + return ERR_CAST(sg_table); 563 540 } 564 541 565 542 etr_table->sg_table = sg_table; ··· 751 728 return NULL; 752 729 } 753 730 754 - static inline void tmc_etr_enable_catu(struct tmc_drvdata *drvdata) 731 + static inline int tmc_etr_enable_catu(struct tmc_drvdata *drvdata, 732 + struct etr_buf *etr_buf) 755 733 { 756 734 struct coresight_device *catu = tmc_etr_get_catu_device(drvdata); 757 735 758 736 if (catu && helper_ops(catu)->enable) 759 - helper_ops(catu)->enable(catu, drvdata->etr_buf); 737 + return helper_ops(catu)->enable(catu, etr_buf); 738 + return 0; 760 739 } 761 740 762 741 static inline void tmc_etr_disable_catu(struct tmc_drvdata *drvdata) ··· 920 895 tmc_etr_buf_insert_barrier_packet(etr_buf, etr_buf->offset); 921 896 } 922 897 923 - static void tmc_etr_enable_hw(struct tmc_drvdata *drvdata) 898 + static void __tmc_etr_enable_hw(struct tmc_drvdata *drvdata) 924 899 { 925 900 u32 axictl, sts; 926 901 struct etr_buf *etr_buf = drvdata->etr_buf; 927 - 928 - /* 929 - * If this ETR is connected to a CATU, enable it before we turn 930 - * this on 931 - */ 932 - tmc_etr_enable_catu(drvdata); 933 902 934 903 CS_UNLOCK(drvdata->base); 935 904 ··· 943 924 axictl |= TMC_AXICTL_ARCACHE_OS; 944 925 } 945 926 946 - if (etr_buf->mode == ETR_MODE_ETR_SG) { 947 - if (WARN_ON(!tmc_etr_has_cap(drvdata, TMC_ETR_SG))) 948 - return; 927 + if (etr_buf->mode == ETR_MODE_ETR_SG) 949 928 axictl |= TMC_AXICTL_SCT_GAT_MODE; 950 - } 951 929 952 930 writel_relaxed(axictl, drvdata->base + TMC_AXICTL); 953 931 tmc_write_dba(drvdata, etr_buf->hwaddr); ··· 970 954 CS_LOCK(drvdata->base); 971 955 } 972 956 957 + static int tmc_etr_enable_hw(struct tmc_drvdata *drvdata, 958 + struct etr_buf *etr_buf) 959 + { 960 + int rc; 961 + 962 + /* Callers should provide an 
appropriate buffer for use */ 963 + if (WARN_ON(!etr_buf)) 964 + return -EINVAL; 965 + 966 + if ((etr_buf->mode == ETR_MODE_ETR_SG) && 967 + WARN_ON(!tmc_etr_has_cap(drvdata, TMC_ETR_SG))) 968 + return -EINVAL; 969 + 970 + if (WARN_ON(drvdata->etr_buf)) 971 + return -EBUSY; 972 + 973 + /* 974 + * If this ETR is connected to a CATU, enable it before we turn 975 + * this on. 976 + */ 977 + rc = tmc_etr_enable_catu(drvdata, etr_buf); 978 + if (rc) 979 + return rc; 980 + rc = coresight_claim_device(drvdata->base); 981 + if (!rc) { 982 + drvdata->etr_buf = etr_buf; 983 + __tmc_etr_enable_hw(drvdata); 984 + } 985 + 986 + return rc; 987 + } 988 + 973 989 /* 974 990 * Return the available trace data in the buffer (starts at etr_buf->offset, 975 991 * limited by etr_buf->len) from @pos, with a maximum limit of @len, 976 992 * also updating the @bufpp on where to find it. Since the trace data 977 993 * starts at anywhere in the buffer, depending on the RRP, we adjust the 978 994 * @len returned to handle buffer wrapping around. 995 + * 996 + * We are protected here by drvdata->reading != 0, which ensures the 997 + * sysfs_buf stays alive. 
979 998 */ 980 999 ssize_t tmc_etr_get_sysfs_trace(struct tmc_drvdata *drvdata, 981 1000 loff_t pos, size_t len, char **bufpp) 982 1001 { 983 1002 s64 offset; 984 1003 ssize_t actual = len; 985 - struct etr_buf *etr_buf = drvdata->etr_buf; 1004 + struct etr_buf *etr_buf = drvdata->sysfs_buf; 986 1005 987 1006 if (pos + actual > etr_buf->len) 988 1007 actual = etr_buf->len - pos; ··· 1047 996 1048 997 static void tmc_etr_sync_sysfs_buf(struct tmc_drvdata *drvdata) 1049 998 { 1050 - tmc_sync_etr_buf(drvdata); 999 + struct etr_buf *etr_buf = drvdata->etr_buf; 1000 + 1001 + if (WARN_ON(drvdata->sysfs_buf != etr_buf)) { 1002 + tmc_etr_free_sysfs_buf(drvdata->sysfs_buf); 1003 + drvdata->sysfs_buf = NULL; 1004 + } else { 1005 + tmc_sync_etr_buf(drvdata); 1006 + } 1051 1007 } 1052 1008 1053 - static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata) 1009 + static void __tmc_etr_disable_hw(struct tmc_drvdata *drvdata) 1054 1010 { 1055 1011 CS_UNLOCK(drvdata->base); 1056 1012 ··· 1073 1015 1074 1016 CS_LOCK(drvdata->base); 1075 1017 1018 + } 1019 + 1020 + static void tmc_etr_disable_hw(struct tmc_drvdata *drvdata) 1021 + { 1022 + __tmc_etr_disable_hw(drvdata); 1076 1023 /* Disable CATU device if this ETR is connected to one */ 1077 1024 tmc_etr_disable_catu(drvdata); 1025 + coresight_disclaim_device(drvdata->base); 1026 + /* Reset the ETR buf used by hardware */ 1027 + drvdata->etr_buf = NULL; 1078 1028 } 1079 1029 1080 1030 static int tmc_enable_etr_sink_sysfs(struct coresight_device *csdev) ··· 1090 1024 int ret = 0; 1091 1025 unsigned long flags; 1092 1026 struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 1093 - struct etr_buf *new_buf = NULL, *free_buf = NULL; 1027 + struct etr_buf *sysfs_buf = NULL, *new_buf = NULL, *free_buf = NULL; 1094 1028 1095 1029 /* 1096 1030 * If we are enabling the ETR from disabled state, we need to make ··· 1101 1035 * with the lock released. 
1102 1036 */ 1103 1037 spin_lock_irqsave(&drvdata->spinlock, flags); 1104 - if (!drvdata->etr_buf || (drvdata->etr_buf->size != drvdata->size)) { 1038 + sysfs_buf = READ_ONCE(drvdata->sysfs_buf); 1039 + if (!sysfs_buf || (sysfs_buf->size != drvdata->size)) { 1105 1040 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1106 1041 1107 1042 /* Allocate memory with the locks released */ ··· 1131 1064 * If we don't have a buffer or it doesn't match the requested size, 1132 1065 * use the buffer allocated above. Otherwise reuse the existing buffer. 1133 1066 */ 1134 - if (!drvdata->etr_buf || 1135 - (new_buf && drvdata->etr_buf->size != new_buf->size)) { 1136 - free_buf = drvdata->etr_buf; 1137 - drvdata->etr_buf = new_buf; 1067 + sysfs_buf = READ_ONCE(drvdata->sysfs_buf); 1068 + if (!sysfs_buf || (new_buf && sysfs_buf->size != new_buf->size)) { 1069 + free_buf = sysfs_buf; 1070 + drvdata->sysfs_buf = new_buf; 1138 1071 } 1139 1072 1140 - drvdata->mode = CS_MODE_SYSFS; 1141 - tmc_etr_enable_hw(drvdata); 1073 + ret = tmc_etr_enable_hw(drvdata, drvdata->sysfs_buf); 1074 + if (!ret) 1075 + drvdata->mode = CS_MODE_SYSFS; 1142 1076 out: 1143 1077 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1144 1078 ··· 1148 1080 tmc_etr_free_sysfs_buf(free_buf); 1149 1081 1150 1082 if (!ret) 1151 - dev_info(drvdata->dev, "TMC-ETR enabled\n"); 1083 + dev_dbg(drvdata->dev, "TMC-ETR enabled\n"); 1152 1084 1153 1085 return ret; 1154 1086 } 1155 1087 1156 - static int tmc_enable_etr_sink_perf(struct coresight_device *csdev) 1088 + /* 1089 + * tmc_etr_setup_perf_buf: Allocate ETR buffer for use by perf. 1090 + * The size of the hardware buffer is dependent on the size configured 1091 + * via sysfs and the perf ring buffer size. We prefer to allocate the 1092 + * largest possible size, scaling down the size by half until it 1093 + * reaches a minimum limit (1M), beyond which we give up. 
1094 + */ 1095 + static struct etr_perf_buffer * 1096 + tmc_etr_setup_perf_buf(struct tmc_drvdata *drvdata, int node, int nr_pages, 1097 + void **pages, bool snapshot) 1157 1098 { 1158 - /* We don't support perf mode yet ! */ 1159 - return -EINVAL; 1099 + struct etr_buf *etr_buf; 1100 + struct etr_perf_buffer *etr_perf; 1101 + unsigned long size; 1102 + 1103 + etr_perf = kzalloc_node(sizeof(*etr_perf), GFP_KERNEL, node); 1104 + if (!etr_perf) 1105 + return ERR_PTR(-ENOMEM); 1106 + 1107 + /* 1108 + * Try to match the perf ring buffer size if it is larger 1109 + * than the size requested via sysfs. 1110 + */ 1111 + if ((nr_pages << PAGE_SHIFT) > drvdata->size) { 1112 + etr_buf = tmc_alloc_etr_buf(drvdata, (nr_pages << PAGE_SHIFT), 1113 + 0, node, NULL); 1114 + if (!IS_ERR(etr_buf)) 1115 + goto done; 1116 + } 1117 + 1118 + /* 1119 + * Else switch to configured size for this ETR 1120 + * and scale down until we hit the minimum limit. 1121 + */ 1122 + size = drvdata->size; 1123 + do { 1124 + etr_buf = tmc_alloc_etr_buf(drvdata, size, 0, node, NULL); 1125 + if (!IS_ERR(etr_buf)) 1126 + goto done; 1127 + size /= 2; 1128 + } while (size >= TMC_ETR_PERF_MIN_BUF_SIZE); 1129 + 1130 + kfree(etr_perf); 1131 + return ERR_PTR(-ENOMEM); 1132 + 1133 + done: 1134 + etr_perf->etr_buf = etr_buf; 1135 + return etr_perf; 1160 1136 } 1161 1137 1162 - static int tmc_enable_etr_sink(struct coresight_device *csdev, u32 mode) 1138 + 1139 + static void *tmc_alloc_etr_buffer(struct coresight_device *csdev, 1140 + int cpu, void **pages, int nr_pages, 1141 + bool snapshot) 1142 + { 1143 + struct etr_perf_buffer *etr_perf; 1144 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 1145 + 1146 + if (cpu == -1) 1147 + cpu = smp_processor_id(); 1148 + 1149 + etr_perf = tmc_etr_setup_perf_buf(drvdata, cpu_to_node(cpu), 1150 + nr_pages, pages, snapshot); 1151 + if (IS_ERR(etr_perf)) { 1152 + dev_dbg(drvdata->dev, "Unable to allocate ETR buffer\n"); 1153 + return NULL; 1154 + } 1155 + 
1156 + etr_perf->snapshot = snapshot; 1157 + etr_perf->nr_pages = nr_pages; 1158 + etr_perf->pages = pages; 1159 + 1160 + return etr_perf; 1161 + } 1162 + 1163 + static void tmc_free_etr_buffer(void *config) 1164 + { 1165 + struct etr_perf_buffer *etr_perf = config; 1166 + 1167 + if (etr_perf->etr_buf) 1168 + tmc_free_etr_buf(etr_perf->etr_buf); 1169 + kfree(etr_perf); 1170 + } 1171 + 1172 + /* 1173 + * tmc_etr_sync_perf_buffer: Copy the actual trace data from the hardware 1174 + * buffer to the perf ring buffer. 1175 + */ 1176 + static void tmc_etr_sync_perf_buffer(struct etr_perf_buffer *etr_perf) 1177 + { 1178 + long bytes, to_copy; 1179 + long pg_idx, pg_offset, src_offset; 1180 + unsigned long head = etr_perf->head; 1181 + char **dst_pages, *src_buf; 1182 + struct etr_buf *etr_buf = etr_perf->etr_buf; 1183 + 1184 + head = etr_perf->head; 1185 + pg_idx = head >> PAGE_SHIFT; 1186 + pg_offset = head & (PAGE_SIZE - 1); 1187 + dst_pages = (char **)etr_perf->pages; 1188 + src_offset = etr_buf->offset; 1189 + to_copy = etr_buf->len; 1190 + 1191 + while (to_copy > 0) { 1192 + /* 1193 + * In one iteration, we can copy minimum of : 1194 + * 1) what is available in the source buffer, 1195 + * 2) what is available in the source buffer, before it 1196 + * wraps around. 1197 + * 3) what is available in the destination page. 1198 + * in one iteration. 
1199 + */ 1200 + bytes = tmc_etr_buf_get_data(etr_buf, src_offset, to_copy, 1201 + &src_buf); 1202 + if (WARN_ON_ONCE(bytes <= 0)) 1203 + break; 1204 + bytes = min(bytes, (long)(PAGE_SIZE - pg_offset)); 1205 + 1206 + memcpy(dst_pages[pg_idx] + pg_offset, src_buf, bytes); 1207 + 1208 + to_copy -= bytes; 1209 + 1210 + /* Move destination pointers */ 1211 + pg_offset += bytes; 1212 + if (pg_offset == PAGE_SIZE) { 1213 + pg_offset = 0; 1214 + if (++pg_idx == etr_perf->nr_pages) 1215 + pg_idx = 0; 1216 + } 1217 + 1218 + /* Move source pointers */ 1219 + src_offset += bytes; 1220 + if (src_offset >= etr_buf->size) 1221 + src_offset -= etr_buf->size; 1222 + } 1223 + } 1224 + 1225 + /* 1226 + * tmc_update_etr_buffer : Update the perf ring buffer with the 1227 + * available trace data. We use software double buffering at the moment. 1228 + * 1229 + * TODO: Add support for reusing the perf ring buffer. 1230 + */ 1231 + static unsigned long 1232 + tmc_update_etr_buffer(struct coresight_device *csdev, 1233 + struct perf_output_handle *handle, 1234 + void *config) 1235 + { 1236 + bool lost = false; 1237 + unsigned long flags, size = 0; 1238 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 1239 + struct etr_perf_buffer *etr_perf = config; 1240 + struct etr_buf *etr_buf = etr_perf->etr_buf; 1241 + 1242 + spin_lock_irqsave(&drvdata->spinlock, flags); 1243 + if (WARN_ON(drvdata->perf_data != etr_perf)) { 1244 + lost = true; 1245 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 1246 + goto out; 1247 + } 1248 + 1249 + CS_UNLOCK(drvdata->base); 1250 + 1251 + tmc_flush_and_stop(drvdata); 1252 + tmc_sync_etr_buf(drvdata); 1253 + 1254 + CS_LOCK(drvdata->base); 1255 + /* Reset perf specific data */ 1256 + drvdata->perf_data = NULL; 1257 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 1258 + 1259 + size = etr_buf->len; 1260 + tmc_etr_sync_perf_buffer(etr_perf); 1261 + 1262 + /* 1263 + * Update handle->head in snapshot mode. 
Also update the size to the 1264 + * hardware buffer size if there was an overflow. 1265 + */ 1266 + if (etr_perf->snapshot) { 1267 + handle->head += size; 1268 + if (etr_buf->full) 1269 + size = etr_buf->size; 1270 + } 1271 + 1272 + lost |= etr_buf->full; 1273 + out: 1274 + if (lost) 1275 + perf_aux_output_flag(handle, PERF_AUX_FLAG_TRUNCATED); 1276 + return size; 1277 + } 1278 + 1279 + static int tmc_enable_etr_sink_perf(struct coresight_device *csdev, void *data) 1280 + { 1281 + int rc = 0; 1282 + unsigned long flags; 1283 + struct tmc_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 1284 + struct perf_output_handle *handle = data; 1285 + struct etr_perf_buffer *etr_perf = etm_perf_sink_config(handle); 1286 + 1287 + spin_lock_irqsave(&drvdata->spinlock, flags); 1288 + /* 1289 + * There can be only one writer per sink in perf mode. If the sink 1290 + * is already open in SYSFS mode, we can't use it. 1291 + */ 1292 + if (drvdata->mode != CS_MODE_DISABLED || WARN_ON(drvdata->perf_data)) { 1293 + rc = -EBUSY; 1294 + goto unlock_out; 1295 + } 1296 + 1297 + if (WARN_ON(!etr_perf || !etr_perf->etr_buf)) { 1298 + rc = -EINVAL; 1299 + goto unlock_out; 1300 + } 1301 + 1302 + etr_perf->head = PERF_IDX2OFF(handle->head, etr_perf); 1303 + drvdata->perf_data = etr_perf; 1304 + rc = tmc_etr_enable_hw(drvdata, etr_perf->etr_buf); 1305 + if (!rc) 1306 + drvdata->mode = CS_MODE_PERF; 1307 + 1308 + unlock_out: 1309 + spin_unlock_irqrestore(&drvdata->spinlock, flags); 1310 + return rc; 1311 + } 1312 + 1313 + static int tmc_enable_etr_sink(struct coresight_device *csdev, 1314 + u32 mode, void *data) 1163 1315 { 1164 1316 switch (mode) { 1165 1317 case CS_MODE_SYSFS: 1166 1318 return tmc_enable_etr_sink_sysfs(csdev); 1167 1319 case CS_MODE_PERF: 1168 - return tmc_enable_etr_sink_perf(csdev); 1320 + return tmc_enable_etr_sink_perf(csdev, data); 1169 1321 } 1170 1322 1171 1323 /* We shouldn't be here */ ··· 1411 1123 1412 1124 spin_unlock_irqrestore(&drvdata->spinlock, flags); 
1413 1125 1414 - dev_info(drvdata->dev, "TMC-ETR disabled\n"); 1126 + dev_dbg(drvdata->dev, "TMC-ETR disabled\n"); 1415 1127 } 1416 1128 1417 1129 static const struct coresight_ops_sink tmc_etr_sink_ops = { 1418 1130 .enable = tmc_enable_etr_sink, 1419 1131 .disable = tmc_disable_etr_sink, 1132 + .alloc_buffer = tmc_alloc_etr_buffer, 1133 + .update_buffer = tmc_update_etr_buffer, 1134 + .free_buffer = tmc_free_etr_buffer, 1420 1135 }; 1421 1136 1422 1137 const struct coresight_ops tmc_etr_cs_ops = { ··· 1441 1150 goto out; 1442 1151 } 1443 1152 1444 - /* Don't interfere if operated from Perf */ 1445 - if (drvdata->mode == CS_MODE_PERF) { 1153 + /* 1154 + * We can safely allow reads even if the ETR is operating in PERF mode, 1155 + * since the sysfs session is captured in mode specific data. 1156 + * If drvdata::sysfs_data is NULL the trace data has been read already. 1157 + */ 1158 + if (!drvdata->sysfs_buf) { 1446 1159 ret = -EINVAL; 1447 1160 goto out; 1448 1161 } 1449 1162 1450 - /* If drvdata::etr_buf is NULL the trace data has been read already */ 1451 - if (drvdata->etr_buf == NULL) { 1452 - ret = -EINVAL; 1453 - goto out; 1454 - } 1455 - 1456 - /* Disable the TMC if need be */ 1163 + /* Disable the TMC if we are trying to read from a running session. */ 1457 1164 if (drvdata->mode == CS_MODE_SYSFS) 1458 - tmc_etr_disable_hw(drvdata); 1165 + __tmc_etr_disable_hw(drvdata); 1459 1166 1460 1167 drvdata->reading = true; 1461 1168 out: ··· 1465 1176 int tmc_read_unprepare_etr(struct tmc_drvdata *drvdata) 1466 1177 { 1467 1178 unsigned long flags; 1468 - struct etr_buf *etr_buf = NULL; 1179 + struct etr_buf *sysfs_buf = NULL; 1469 1180 1470 1181 /* config types are set a boot time and never change */ 1471 1182 if (WARN_ON_ONCE(drvdata->config_type != TMC_CONFIG_TYPE_ETR)) ··· 1480 1191 * buffer. Since the tracer is still enabled drvdata::buf can't 1481 1192 * be NULL. 
1482 1193 */ 1483 - tmc_etr_enable_hw(drvdata); 1194 + __tmc_etr_enable_hw(drvdata); 1484 1195 } else { 1485 1196 /* 1486 1197 * The ETR is not tracing and the buffer was just read. 1487 1198 * As such prepare to free the trace buffer. 1488 1199 */ 1489 - etr_buf = drvdata->etr_buf; 1490 - drvdata->etr_buf = NULL; 1200 + sysfs_buf = drvdata->sysfs_buf; 1201 + drvdata->sysfs_buf = NULL; 1491 1202 } 1492 1203 1493 1204 drvdata->reading = false; 1494 1205 spin_unlock_irqrestore(&drvdata->spinlock, flags); 1495 1206 1496 1207 /* Free allocated memory out side of the spinlock */ 1497 - if (etr_buf) 1498 - tmc_free_etr_buf(etr_buf); 1208 + if (sysfs_buf) 1209 + tmc_etr_free_sysfs_buf(sysfs_buf); 1499 1210 1500 1211 return 0; 1501 1212 }
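`tmc_etr_setup_perf_buf()` above prefers a hardware buffer matching the perf ring-buffer size, then falls back to the sysfs-configured size, halving until the 1M floor (`TMC_ETR_PERF_MIN_BUF_SIZE`). A sketch of that fallback loop with a mock allocator standing in for `tmc_alloc_etr_buf()`; the `limit` parameter used to simulate allocation failure is an assumption for illustration only:

```c
#include <assert.h>

#define SZ_1M (1024UL * 1024)
#define TMC_ETR_PERF_MIN_BUF_SIZE SZ_1M

/* Stand-in for tmc_alloc_etr_buf(): pretend allocations above
 * 'limit' fail, e.g. due to exhausted or fragmented DMA memory. */
static int mock_alloc(unsigned long size, unsigned long limit)
{
	return size <= limit;	/* 1 = success */
}

/* Mirror the fallback: try the perf size only if it is larger than
 * the sysfs size, then halve the sysfs size down to the 1M floor.
 * Returns the size obtained, or 0 on failure. */
static unsigned long pick_etr_buf_size(unsigned long perf_size,
				       unsigned long sysfs_size,
				       unsigned long limit)
{
	unsigned long size;

	if (perf_size > sysfs_size && mock_alloc(perf_size, limit))
		return perf_size;

	for (size = sysfs_size; size >= TMC_ETR_PERF_MIN_BUF_SIZE; size /= 2)
		if (mock_alloc(size, limit))
			return size;

	return 0;
}
```

The halving keeps the ETR usable on systems where a contiguous (or SG) buffer of the full requested size cannot be built, while refusing to run with less than 1M of trace memory.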
+2 -2
drivers/hwtracing/coresight/coresight-tmc.c
··· 81 81 } 82 82 83 83 if (!ret) 84 - dev_info(drvdata->dev, "TMC read start\n"); 84 + dev_dbg(drvdata->dev, "TMC read start\n"); 85 85 86 86 return ret; 87 87 } ··· 103 103 } 104 104 105 105 if (!ret) 106 - dev_info(drvdata->dev, "TMC read end\n"); 106 + dev_dbg(drvdata->dev, "TMC read end\n"); 107 107 108 108 return ret; 109 109 }
+4
drivers/hwtracing/coresight/coresight-tmc.h
··· 170 170 * @trigger_cntr: amount of words to store after a trigger. 171 171 * @etr_caps: Bitmask of capabilities of the TMC ETR, inferred from the 172 172 * device configuration register (DEVID) 173 + * @perf_data: PERF buffer for ETR. 174 + * @sysfs_data: SYSFS buffer for ETR. 173 175 */ 174 176 struct tmc_drvdata { 175 177 void __iomem *base; ··· 191 189 enum tmc_mem_intf_width memwidth; 192 190 u32 trigger_cntr; 193 191 u32 etr_caps; 192 + struct etr_buf *sysfs_buf; 193 + void *perf_data; 194 194 }; 195 195 196 196 struct etr_buf_operations {
+3 -3
drivers/hwtracing/coresight/coresight-tpiu.c
··· 68 68 CS_LOCK(drvdata->base); 69 69 } 70 70 71 - static int tpiu_enable(struct coresight_device *csdev, u32 mode) 71 + static int tpiu_enable(struct coresight_device *csdev, u32 mode, void *__unused) 72 72 { 73 73 struct tpiu_drvdata *drvdata = dev_get_drvdata(csdev->dev.parent); 74 74 75 75 tpiu_enable_hw(drvdata); 76 76 77 - dev_info(drvdata->dev, "TPIU enabled\n"); 77 + dev_dbg(drvdata->dev, "TPIU enabled\n"); 78 78 return 0; 79 79 } 80 80 ··· 100 100 101 101 tpiu_disable_hw(drvdata); 102 102 103 - dev_info(drvdata->dev, "TPIU disabled\n"); 103 + dev_dbg(drvdata->dev, "TPIU disabled\n"); 104 104 } 105 105 106 106 static const struct coresight_ops_sink tpiu_sink_ops = {
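Across this series the sink `.enable` callback gains a `(mode, data)` pair: `data` carries the perf output handle for `CS_MODE_PERF` and is unused for sysfs (hence TPIU's `void *__unused`). A stripped-down model of the dispatch and busy-checking pattern, using simplified types rather than the kernel API:

```c
#include <assert.h>
#include <stddef.h>

enum cs_mode { CS_MODE_DISABLED, CS_MODE_SYSFS, CS_MODE_PERF };

struct sink {
	enum cs_mode mode;
	void *perf_data;	/* perf_output_handle in the real driver */
};

/* Route by mode, hand the opaque data only to the perf path, and
 * refuse to enable a sink that is already owned by the other mode. */
static int sink_enable(struct sink *s, enum cs_mode mode, void *data)
{
	switch (mode) {
	case CS_MODE_SYSFS:
		if (s->mode == CS_MODE_PERF)
			return -1;	/* -EBUSY in the kernel */
		s->mode = CS_MODE_SYSFS;
		return 0;
	case CS_MODE_PERF:
		if (s->mode != CS_MODE_DISABLED || !data)
			return -1;
		s->perf_data = data;
		s->mode = CS_MODE_PERF;
		return 0;
	default:
		return -1;	/* we shouldn't be here */
	}
}
```

This is only a model of the pattern visible in `tmc_enable_etr_sink()` and friends; the real drivers additionally take `drvdata->spinlock` and enable the hardware before committing the mode change.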
+142 -42
drivers/hwtracing/coresight/coresight.c
··· 128 128 return -ENODEV; 129 129 } 130 130 131 - static int coresight_enable_sink(struct coresight_device *csdev, u32 mode) 131 + static inline u32 coresight_read_claim_tags(void __iomem *base) 132 + { 133 + return readl_relaxed(base + CORESIGHT_CLAIMCLR); 134 + } 135 + 136 + static inline bool coresight_is_claimed_self_hosted(void __iomem *base) 137 + { 138 + return coresight_read_claim_tags(base) == CORESIGHT_CLAIM_SELF_HOSTED; 139 + } 140 + 141 + static inline bool coresight_is_claimed_any(void __iomem *base) 142 + { 143 + return coresight_read_claim_tags(base) != 0; 144 + } 145 + 146 + static inline void coresight_set_claim_tags(void __iomem *base) 147 + { 148 + writel_relaxed(CORESIGHT_CLAIM_SELF_HOSTED, base + CORESIGHT_CLAIMSET); 149 + isb(); 150 + } 151 + 152 + static inline void coresight_clear_claim_tags(void __iomem *base) 153 + { 154 + writel_relaxed(CORESIGHT_CLAIM_SELF_HOSTED, base + CORESIGHT_CLAIMCLR); 155 + isb(); 156 + } 157 + 158 + /* 159 + * coresight_claim_device_unlocked : Claim the device for self-hosted usage 160 + * to prevent an external tool from touching this device. As per PSCI 161 + * standards, section "Preserving the execution context" => "Debug and Trace 162 + * save and Restore", DBGCLAIM[1] is reserved for Self-hosted debug/trace and 163 + * DBGCLAIM[0] is reserved for external tools. 164 + * 165 + * Called with CS_UNLOCKed for the component. 
166 + * Returns : 0 on success 167 + */ 168 + int coresight_claim_device_unlocked(void __iomem *base) 169 + { 170 + if (coresight_is_claimed_any(base)) 171 + return -EBUSY; 172 + 173 + coresight_set_claim_tags(base); 174 + if (coresight_is_claimed_self_hosted(base)) 175 + return 0; 176 + /* There was a race setting the tags, clean up and fail */ 177 + coresight_clear_claim_tags(base); 178 + return -EBUSY; 179 + } 180 + 181 + int coresight_claim_device(void __iomem *base) 182 + { 183 + int rc; 184 + 185 + CS_UNLOCK(base); 186 + rc = coresight_claim_device_unlocked(base); 187 + CS_LOCK(base); 188 + 189 + return rc; 190 + } 191 + 192 + /* 193 + * coresight_disclaim_device_unlocked : Clear the claim tags for the device. 194 + * Called with CS_UNLOCKed for the component. 195 + */ 196 + void coresight_disclaim_device_unlocked(void __iomem *base) 197 + { 198 + 199 + if (coresight_is_claimed_self_hosted(base)) 200 + coresight_clear_claim_tags(base); 201 + else 202 + /* 203 + * The external agent may have not honoured our claim 204 + * and has manipulated it. Or something else has seriously 205 + * gone wrong in our driver. 206 + */ 207 + WARN_ON_ONCE(1); 208 + } 209 + 210 + void coresight_disclaim_device(void __iomem *base) 211 + { 212 + CS_UNLOCK(base); 213 + coresight_disclaim_device_unlocked(base); 214 + CS_LOCK(base); 215 + } 216 + 217 + static int coresight_enable_sink(struct coresight_device *csdev, 218 + u32 mode, void *data) 132 219 { 133 220 int ret; 134 221 135 - if (!csdev->enable) { 136 - if (sink_ops(csdev)->enable) { 137 - ret = sink_ops(csdev)->enable(csdev, mode); 138 - if (ret) 139 - return ret; 140 - } 222 + /* 223 + * We need to make sure the "new" session is compatible with the 224 + * existing "mode" of operation. 
225 + */ 226 + if (sink_ops(csdev)->enable) { 227 + ret = sink_ops(csdev)->enable(csdev, mode, data); 228 + if (ret) 229 + return ret; 141 230 csdev->enable = true; 142 231 } 143 232 ··· 273 184 if (atomic_inc_return(&csdev->refcnt[refport]) == 1) { 274 185 if (link_ops(csdev)->enable) { 275 186 ret = link_ops(csdev)->enable(csdev, inport, outport); 276 - if (ret) 187 + if (ret) { 188 + atomic_dec(&csdev->refcnt[refport]); 277 189 return ret; 190 + } 278 191 } 279 192 } 280 193 ··· 365 274 return !csdev->enable; 366 275 } 367 276 368 - void coresight_disable_path(struct list_head *path) 277 + /* 278 + * coresight_disable_path_from : Disable components in the given path beyond 279 + * @nd in the list. If @nd is NULL, all the components, except the SOURCE are 280 + * disabled. 281 + */ 282 + static void coresight_disable_path_from(struct list_head *path, 283 + struct coresight_node *nd) 369 284 { 370 285 u32 type; 371 - struct coresight_node *nd; 372 286 struct coresight_device *csdev, *parent, *child; 373 287 374 - list_for_each_entry(nd, path, link) { 288 + if (!nd) 289 + nd = list_first_entry(path, struct coresight_node, link); 290 + 291 + list_for_each_entry_continue(nd, path, link) { 375 292 csdev = nd->csdev; 376 293 type = csdev->type; 377 294 ··· 399 300 coresight_disable_sink(csdev); 400 301 break; 401 302 case CORESIGHT_DEV_TYPE_SOURCE: 402 - /* sources are disabled from either sysFS or Perf */ 303 + /* 304 + * We skip the first node in the path assuming that it 305 + * is the source. So we don't expect a source device in 306 + * the middle of a path. 
307 + */ 308 + WARN_ON(1); 403 309 break; 404 310 case CORESIGHT_DEV_TYPE_LINK: 405 311 parent = list_prev_entry(nd, link)->csdev; ··· 417 313 } 418 314 } 419 315 420 - int coresight_enable_path(struct list_head *path, u32 mode) 316 + void coresight_disable_path(struct list_head *path) 317 + { 318 + coresight_disable_path_from(path, NULL); 319 + } 320 + 321 + int coresight_enable_path(struct list_head *path, u32 mode, void *sink_data) 421 322 { 422 323 423 324 int ret = 0; ··· 447 338 448 339 switch (type) { 449 340 case CORESIGHT_DEV_TYPE_SINK: 450 - ret = coresight_enable_sink(csdev, mode); 341 + ret = coresight_enable_sink(csdev, mode, sink_data); 342 + /* 343 + * Sink is the first component turned on. If we 344 + * failed to enable the sink, there are no components 345 + * that need disabling. Disabling the path here 346 + * would mean we could disrupt an existing session. 347 + */ 451 348 if (ret) 452 - goto err; 349 + goto out; 453 350 break; 454 351 case CORESIGHT_DEV_TYPE_SOURCE: 455 352 /* sources are enabled from either sysFS or Perf */ ··· 475 360 out: 476 361 return ret; 477 362 err: 478 - coresight_disable_path(path); 363 + coresight_disable_path_from(path, nd); 479 364 goto out; 480 365 } 481 366 ··· 750 635 goto out; 751 636 } 752 637 753 - ret = coresight_enable_path(path, CS_MODE_SYSFS); 638 + ret = coresight_enable_path(path, CS_MODE_SYSFS, NULL); 754 639 if (ret) 755 640 goto err_path; 756 641 ··· 1110 995 1111 996 struct coresight_device *coresight_register(struct coresight_desc *desc) 1112 997 { 1113 - int i; 1114 998 int ret; 1115 999 int link_subtype; 1116 1000 int nr_refcnts = 1; 1117 1001 atomic_t *refcnts = NULL; 1118 1002 struct coresight_device *csdev; 1119 - struct coresight_connection *conns = NULL; 1120 1003 1121 1004 csdev = kzalloc(sizeof(*csdev), GFP_KERNEL); 1122 1005 if (!csdev) { 1123 1006 ret = -ENOMEM; 1124 - goto err_kzalloc_csdev; 1007 + goto err_out; 1125 1008 } 1126 1009 1127 1010 if (desc->type == CORESIGHT_DEV_TYPE_LINK 
|| ··· 1135 1022 refcnts = kcalloc(nr_refcnts, sizeof(*refcnts), GFP_KERNEL); 1136 1023 if (!refcnts) { 1137 1024 ret = -ENOMEM; 1138 - goto err_kzalloc_refcnts; 1025 + goto err_free_csdev; 1139 1026 } 1140 1027 1141 1028 csdev->refcnt = refcnts; ··· 1143 1030 csdev->nr_inport = desc->pdata->nr_inport; 1144 1031 csdev->nr_outport = desc->pdata->nr_outport; 1145 1032 1146 - /* Initialise connections if there is at least one outport */ 1147 - if (csdev->nr_outport) { 1148 - conns = kcalloc(csdev->nr_outport, sizeof(*conns), GFP_KERNEL); 1149 - if (!conns) { 1150 - ret = -ENOMEM; 1151 - goto err_kzalloc_conns; 1152 - } 1153 - 1154 - for (i = 0; i < csdev->nr_outport; i++) { 1155 - conns[i].outport = desc->pdata->outports[i]; 1156 - conns[i].child_name = desc->pdata->child_names[i]; 1157 - conns[i].child_port = desc->pdata->child_ports[i]; 1158 - } 1159 - } 1160 - 1161 - csdev->conns = conns; 1033 + csdev->conns = desc->pdata->conns; 1162 1034 1163 1035 csdev->type = desc->type; 1164 1036 csdev->subtype = desc->subtype; ··· 1160 1062 ret = device_register(&csdev->dev); 1161 1063 if (ret) { 1162 1064 put_device(&csdev->dev); 1163 - goto err_kzalloc_csdev; 1065 + /* 1066 + * All resources are free'd explicitly via 1067 + * coresight_device_release(), triggered from put_device(). 1068 + */ 1069 + goto err_out; 1164 1070 } 1165 1071 1166 1072 mutex_lock(&coresight_mutex); ··· 1176 1074 1177 1075 return csdev; 1178 1076 1179 - err_kzalloc_conns: 1180 - kfree(refcnts); 1181 - err_kzalloc_refcnts: 1077 + err_free_csdev: 1182 1078 kfree(csdev); 1183 - err_kzalloc_csdev: 1079 + err_out: 1184 1080 return ERR_PTR(ret); 1185 1081 } 1186 1082 EXPORT_SYMBOL_GPL(coresight_register);
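`coresight_claim_device_unlocked()` above is a small handshake over the DBGCLAIM register: bail out if any tag is already set, set the self-hosted tag (DBGCLAIM[1]), read back, and clear the tag again if an external agent raced in. A userspace model with the register as a plain word; the race is only simulated here, there is no real MMIO or `isb()`:

```c
#include <assert.h>

#define CLAIM_EXTERNAL    (1U << 0)	/* DBGCLAIM[0], external tools */
#define CLAIM_SELF_HOSTED (1U << 1)	/* DBGCLAIM[1], self-hosted, per PSCI */

/* Models the CLAIM tag state; CLAIMSET sets bits, CLAIMCLR clears them. */
static unsigned int claim_tags;

static int claim_device(void)
{
	if (claim_tags != 0)
		return -1;		/* already claimed: -EBUSY */

	claim_tags |= CLAIM_SELF_HOSTED;	/* CLAIMSET write */
	if (claim_tags == CLAIM_SELF_HOSTED)
		return 0;		/* we own it exclusively */

	/* Someone set their tag between our check and ours: back off. */
	claim_tags &= ~CLAIM_SELF_HOSTED;	/* CLAIMCLR write */
	return -1;
}

static void disclaim_device(void)
{
	claim_tags &= ~CLAIM_SELF_HOSTED;
}
```

The read-back after CLAIMSET is what makes the protocol safe against a concurrent external debugger: both sides can set their own bit, but only the party that ends up as the sole tag holder proceeds.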
+182 -80
drivers/hwtracing/coresight/of_coresight.c
··· 45 45 endpoint, of_dev_node_match); 46 46 } 47 47 48 - static void of_coresight_get_ports(const struct device_node *node, 49 - int *nr_inport, int *nr_outport) 48 + static inline bool of_coresight_legacy_ep_is_input(struct device_node *ep) 49 + { 50 + return of_property_read_bool(ep, "slave-mode"); 51 + } 52 + 53 + static void of_coresight_get_ports_legacy(const struct device_node *node, 54 + int *nr_inport, int *nr_outport) 50 55 { 51 56 struct device_node *ep = NULL; 52 57 int in = 0, out = 0; ··· 61 56 if (!ep) 62 57 break; 63 58 64 - if (of_property_read_bool(ep, "slave-mode")) 59 + if (of_coresight_legacy_ep_is_input(ep)) 65 60 in++; 66 61 else 67 62 out++; ··· 72 67 *nr_outport = out; 73 68 } 74 69 70 + static struct device_node *of_coresight_get_port_parent(struct device_node *ep) 71 + { 72 + struct device_node *parent = of_graph_get_port_parent(ep); 73 + 74 + /* 75 + * Skip one-level up to the real device node, if we 76 + * are using the new bindings. 77 + */ 78 + if (!of_node_cmp(parent->name, "in-ports") || 79 + !of_node_cmp(parent->name, "out-ports")) 80 + parent = of_get_next_parent(parent); 81 + 82 + return parent; 83 + } 84 + 85 + static inline struct device_node * 86 + of_coresight_get_input_ports_node(const struct device_node *node) 87 + { 88 + return of_get_child_by_name(node, "in-ports"); 89 + } 90 + 91 + static inline struct device_node * 92 + of_coresight_get_output_ports_node(const struct device_node *node) 93 + { 94 + return of_get_child_by_name(node, "out-ports"); 95 + } 96 + 97 + static inline int 98 + of_coresight_count_ports(struct device_node *port_parent) 99 + { 100 + int i = 0; 101 + struct device_node *ep = NULL; 102 + 103 + while ((ep = of_graph_get_next_endpoint(port_parent, ep))) 104 + i++; 105 + return i; 106 + } 107 + 108 + static void of_coresight_get_ports(const struct device_node *node, 109 + int *nr_inport, int *nr_outport) 110 + { 111 + struct device_node *input_ports = NULL, *output_ports = NULL; 112 + 113 + input_ports 
= of_coresight_get_input_ports_node(node); 114 + output_ports = of_coresight_get_output_ports_node(node); 115 + 116 + if (input_ports || output_ports) { 117 + if (input_ports) { 118 + *nr_inport = of_coresight_count_ports(input_ports); 119 + of_node_put(input_ports); 120 + } 121 + if (output_ports) { 122 + *nr_outport = of_coresight_count_ports(output_ports); 123 + of_node_put(output_ports); 124 + } 125 + } else { 126 + /* Fall back to legacy DT bindings parsing */ 127 + of_coresight_get_ports_legacy(node, nr_inport, nr_outport); 128 + } 129 + } 130 + 75 131 static int of_coresight_alloc_memory(struct device *dev, 76 132 struct coresight_platform_data *pdata) 77 133 { 78 - /* List of output port on this component */ 79 - pdata->outports = devm_kcalloc(dev, 80 - pdata->nr_outport, 81 - sizeof(*pdata->outports), 82 - GFP_KERNEL); 83 - if (!pdata->outports) 84 - return -ENOMEM; 85 - 86 - /* Children connected to this component via @outports */ 87 - pdata->child_names = devm_kcalloc(dev, 88 - pdata->nr_outport, 89 - sizeof(*pdata->child_names), 90 - GFP_KERNEL); 91 - if (!pdata->child_names) 92 - return -ENOMEM; 93 - 94 - /* Port number on the child this component is connected to */ 95 - pdata->child_ports = devm_kcalloc(dev, 96 - pdata->nr_outport, 97 - sizeof(*pdata->child_ports), 98 - GFP_KERNEL); 99 - if (!pdata->child_ports) 100 - return -ENOMEM; 134 + if (pdata->nr_outport) { 135 + pdata->conns = devm_kzalloc(dev, pdata->nr_outport * 136 + sizeof(*pdata->conns), 137 + GFP_KERNEL); 138 + if (!pdata->conns) 139 + return -ENOMEM; 140 + } 101 141 102 142 return 0; 103 143 } ··· 164 114 } 165 115 EXPORT_SYMBOL_GPL(of_coresight_get_cpu); 166 116 117 + /* 118 + * of_coresight_parse_endpoint : Parse the given output endpoint @ep 119 + * and fill the connection information in @conn 120 + * 121 + * Parses the local port, remote device name and the remote port. 
122 + * 123 + * Returns : 124 + * 1 - If the parsing is successful and a connection record 125 + * was created for an output connection. 126 + * 0 - If the parsing completed without any fatal errors. 127 + * -Errno - Fatal error, abort the scanning. 128 + */ 129 + static int of_coresight_parse_endpoint(struct device *dev, 130 + struct device_node *ep, 131 + struct coresight_connection *conn) 132 + { 133 + int ret = 0; 134 + struct of_endpoint endpoint, rendpoint; 135 + struct device_node *rparent = NULL; 136 + struct device_node *rep = NULL; 137 + struct device *rdev = NULL; 138 + 139 + do { 140 + /* Parse the local port details */ 141 + if (of_graph_parse_endpoint(ep, &endpoint)) 142 + break; 143 + /* 144 + * Get a handle on the remote endpoint and the device it is 145 + * attached to. 146 + */ 147 + rep = of_graph_get_remote_endpoint(ep); 148 + if (!rep) 149 + break; 150 + rparent = of_coresight_get_port_parent(rep); 151 + if (!rparent) 152 + break; 153 + if (of_graph_parse_endpoint(rep, &rendpoint)) 154 + break; 155 + 156 + /* If the remote device is not available, defer probing */ 157 + rdev = of_coresight_get_endpoint_device(rparent); 158 + if (!rdev) { 159 + ret = -EPROBE_DEFER; 160 + break; 161 + } 162 + 163 + conn->outport = endpoint.port; 164 + conn->child_name = devm_kstrdup(dev, 165 + dev_name(rdev), 166 + GFP_KERNEL); 167 + conn->child_port = rendpoint.port; 168 + /* Connection record updated */ 169 + ret = 1; 170 + } while (0); 171 + 172 + of_node_put(rparent); 173 + of_node_put(rep); 174 + put_device(rdev); 175 + 176 + return ret; 177 + } 178 + 167 179 struct coresight_platform_data * 168 180 of_get_coresight_platform_data(struct device *dev, 169 181 const struct device_node *node) 170 182 { 171 - int i = 0, ret = 0; 183 + int ret = 0; 172 184 struct coresight_platform_data *pdata; 173 - struct of_endpoint endpoint, rendpoint; 174 - struct device *rdev; 185 + struct coresight_connection *conn; 175 186 struct device_node *ep = NULL; 176 - struct 
device_node *rparent = NULL; 177 - struct device_node *rport = NULL; 187 + const struct device_node *parent = NULL; 188 + bool legacy_binding = false; 178 189 179 190 pdata = devm_kzalloc(dev, sizeof(*pdata), GFP_KERNEL); 180 191 if (!pdata) ··· 243 132 244 133 /* Use device name as sysfs handle */ 245 134 pdata->name = dev_name(dev); 135 + pdata->cpu = of_coresight_get_cpu(node); 246 136 247 137 /* Get the number of input and output port for this component */ 248 138 of_coresight_get_ports(node, &pdata->nr_inport, &pdata->nr_outport); 249 139 250 - if (pdata->nr_outport) { 251 - ret = of_coresight_alloc_memory(dev, pdata); 252 - if (ret) 253 - return ERR_PTR(ret); 140 + /* If there are no output connections, we are done */ 141 + if (!pdata->nr_outport) 142 + return pdata; 254 143 255 - /* Iterate through each port to discover topology */ 256 - do { 257 - /* Get a handle on a port */ 258 - ep = of_graph_get_next_endpoint(node, ep); 259 - if (!ep) 260 - break; 144 + ret = of_coresight_alloc_memory(dev, pdata); 145 + if (ret) 146 + return ERR_PTR(ret); 261 147 262 - /* 263 - * No need to deal with input ports, processing for as 264 - * processing for output ports will deal with them. 265 - */ 266 - if (of_find_property(ep, "slave-mode", NULL)) 267 - continue; 268 - 269 - /* Get a handle on the local endpoint */ 270 - ret = of_graph_parse_endpoint(ep, &endpoint); 271 - 272 - if (ret) 273 - continue; 274 - 275 - /* The local out port number */ 276 - pdata->outports[i] = endpoint.port; 277 - 278 - /* 279 - * Get a handle on the remote port and parent 280 - * attached to it. 
281 - */ 282 - rparent = of_graph_get_remote_port_parent(ep); 283 - rport = of_graph_get_remote_port(ep); 284 - 285 - if (!rparent || !rport) 286 - continue; 287 - 288 - if (of_graph_parse_endpoint(rport, &rendpoint)) 289 - continue; 290 - 291 - rdev = of_coresight_get_endpoint_device(rparent); 292 - if (!rdev) 293 - return ERR_PTR(-EPROBE_DEFER); 294 - 295 - pdata->child_names[i] = dev_name(rdev); 296 - pdata->child_ports[i] = rendpoint.id; 297 - 298 - i++; 299 - } while (ep); 148 + parent = of_coresight_get_output_ports_node(node); 149 + /* 150 + * If the DT uses obsoleted bindings, the ports are listed 151 + * under the device and we need to filter out the input 152 + * ports. 153 + */ 154 + if (!parent) { 155 + legacy_binding = true; 156 + parent = node; 157 + dev_warn_once(dev, "Uses obsolete Coresight DT bindings\n"); 300 158 } 301 159 302 - pdata->cpu = of_coresight_get_cpu(node); 160 + conn = pdata->conns; 161 + 162 + /* Iterate through each output port to discover topology */ 163 + while ((ep = of_graph_get_next_endpoint(parent, ep))) { 164 + /* 165 + * Legacy binding mixes input/output ports under the 166 + * same parent. So, skip the input ports if we are dealing 167 + * with legacy binding, as they processed with their 168 + * connected output ports. 169 + */ 170 + if (legacy_binding && of_coresight_legacy_ep_is_input(ep)) 171 + continue; 172 + 173 + ret = of_coresight_parse_endpoint(dev, ep, conn); 174 + switch (ret) { 175 + case 1: 176 + conn++; /* Fall through */ 177 + case 0: 178 + break; 179 + default: 180 + return ERR_PTR(ret); 181 + } 182 + } 303 183 304 184 return pdata; 305 185 }
+29
drivers/hwtracing/stm/Kconfig
··· 11 11 12 12 if STM 13 13 14 + config STM_PROTO_BASIC 15 + tristate "Basic STM framing protocol driver" 16 + default CONFIG_STM 17 + help 18 + This is a simple framing protocol for sending data over STM 19 + devices. This was the protocol that the STM framework used 20 + exclusively until the MIPI SyS-T support was added. Use this 21 + driver for compatibility with your existing STM setup. 22 + 23 + The receiving side only needs to be able to decode the MIPI 24 + STP protocol in order to extract the data. 25 + 26 + If you want to be able to use the basic protocol or want the 27 + backwards compatibility for your existing setup, say Y. 28 + 29 + config STM_PROTO_SYS_T 30 + tristate "MIPI SyS-T STM framing protocol driver" 31 + default CONFIG_STM 32 + help 33 + This is an implementation of MIPI SyS-T protocol to be used 34 + over the STP transport. In addition to the data payload, it 35 + also carries additional metadata for time correlation, better 36 + means of trace source identification, etc. 37 + 38 + The receiving side must be able to decode this protocol in 39 + addition to the MIPI STP, in order to extract the data. 40 + 41 + If you don't know what this is, say N. 42 + 14 43 config STM_DUMMY 15 44 tristate "Dummy STM driver" 16 45 help
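Both new protocol drivers are tristate and default to the value of `CONFIG_STM`, so an existing STM configuration picks up the basic framing driver automatically on `make olddefconfig`. As an illustrative (not authoritative) `.config` fragment building the core and both framing protocols as modules:

```
CONFIG_STM=m
CONFIG_STM_PROTO_BASIC=m
CONFIG_STM_PROTO_SYS_T=m
```

Per the Makefile in this series, the resulting protocol modules are named `stm_p_basic.ko` and `stm_p_sys-t.ko`; the core additionally requests `stm_p_basic` at init time so older setups need not load it by hand.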
+6
drivers/hwtracing/stm/Makefile
··· 3 3 4 4 stm_core-y := core.o policy.o 5 5 6 + obj-$(CONFIG_STM_PROTO_BASIC) += stm_p_basic.o 7 + obj-$(CONFIG_STM_PROTO_SYS_T) += stm_p_sys-t.o 8 + 9 + stm_p_basic-y := p_basic.o 10 + stm_p_sys-t-y := p_sys-t.o 11 + 6 12 obj-$(CONFIG_STM_DUMMY) += dummy_stm.o 7 13 8 14 obj-$(CONFIG_STM_SOURCE_CONSOLE) += stm_console.o
+238 -54
drivers/hwtracing/stm/core.c
··· 293 293 if (width > stm->data->sw_nchannels) 294 294 return -EINVAL; 295 295 296 - if (policy_node) { 297 - stp_policy_node_get_ranges(policy_node, 298 - &midx, &mend, &cidx, &cend); 299 - } else { 300 - midx = stm->data->sw_start; 301 - cidx = 0; 302 - mend = stm->data->sw_end; 303 - cend = stm->data->sw_nchannels - 1; 304 - } 296 + /* We no longer accept policy_node==NULL here */ 297 + if (WARN_ON_ONCE(!policy_node)) 298 + return -EINVAL; 299 + 300 + /* 301 + * Also, the caller holds reference to policy_node, so it won't 302 + * disappear on us. 303 + */ 304 + stp_policy_node_get_ranges(policy_node, &midx, &mend, &cidx, &cend); 305 305 306 306 spin_lock(&stm->mc_lock); 307 307 spin_lock(&output->lock); ··· 316 316 output->master = midx; 317 317 output->channel = cidx; 318 318 output->nr_chans = width; 319 + if (stm->pdrv->output_open) { 320 + void *priv = stp_policy_node_priv(policy_node); 321 + 322 + if (WARN_ON_ONCE(!priv)) 323 + goto unlock; 324 + 325 + /* configfs subsys mutex is held by the caller */ 326 + ret = stm->pdrv->output_open(priv, output); 327 + if (ret) 328 + goto unlock; 329 + } 330 + 319 331 stm_output_claim(stm, output); 320 332 dev_dbg(&stm->dev, "assigned %u:%u (+%u)\n", midx, cidx, width); 321 333 322 334 ret = 0; 323 335 unlock: 336 + if (ret) 337 + output->nr_chans = 0; 338 + 324 339 spin_unlock(&output->lock); 325 340 spin_unlock(&stm->mc_lock); 326 341 ··· 348 333 spin_lock(&output->lock); 349 334 if (output->nr_chans) 350 335 stm_output_disclaim(stm, output); 336 + if (stm->pdrv && stm->pdrv->output_close) 337 + stm->pdrv->output_close(output); 351 338 spin_unlock(&output->lock); 352 339 spin_unlock(&stm->mc_lock); 353 340 } ··· 364 347 unsigned int major = *(unsigned int *)data; 365 348 366 349 return MAJOR(dev->devt) == major; 350 + } 351 + 352 + /* 353 + * Framing protocol management 354 + * Modules can implement STM protocol drivers and (un-)register them 355 + * with the STM class framework. 
356 + */ 357 + static struct list_head stm_pdrv_head; 358 + static struct mutex stm_pdrv_mutex; 359 + 360 + struct stm_pdrv_entry { 361 + struct list_head entry; 362 + const struct stm_protocol_driver *pdrv; 363 + const struct config_item_type *node_type; 364 + }; 365 + 366 + static const struct stm_pdrv_entry * 367 + __stm_lookup_protocol(const char *name) 368 + { 369 + struct stm_pdrv_entry *pe; 370 + 371 + /* 372 + * If no name is given (NULL or ""), fall back to "p_basic". 373 + */ 374 + if (!name || !*name) 375 + name = "p_basic"; 376 + 377 + list_for_each_entry(pe, &stm_pdrv_head, entry) { 378 + if (!strcmp(name, pe->pdrv->name)) 379 + return pe; 380 + } 381 + 382 + return NULL; 383 + } 384 + 385 + int stm_register_protocol(const struct stm_protocol_driver *pdrv) 386 + { 387 + struct stm_pdrv_entry *pe = NULL; 388 + int ret = -ENOMEM; 389 + 390 + mutex_lock(&stm_pdrv_mutex); 391 + 392 + if (__stm_lookup_protocol(pdrv->name)) { 393 + ret = -EEXIST; 394 + goto unlock; 395 + } 396 + 397 + pe = kzalloc(sizeof(*pe), GFP_KERNEL); 398 + if (!pe) 399 + goto unlock; 400 + 401 + if (pdrv->policy_attr) { 402 + pe->node_type = get_policy_node_type(pdrv->policy_attr); 403 + if (!pe->node_type) 404 + goto unlock; 405 + } 406 + 407 + list_add_tail(&pe->entry, &stm_pdrv_head); 408 + pe->pdrv = pdrv; 409 + 410 + ret = 0; 411 + unlock: 412 + mutex_unlock(&stm_pdrv_mutex); 413 + 414 + if (ret) 415 + kfree(pe); 416 + 417 + return ret; 418 + } 419 + EXPORT_SYMBOL_GPL(stm_register_protocol); 420 + 421 + void stm_unregister_protocol(const struct stm_protocol_driver *pdrv) 422 + { 423 + struct stm_pdrv_entry *pe, *iter; 424 + 425 + mutex_lock(&stm_pdrv_mutex); 426 + 427 + list_for_each_entry_safe(pe, iter, &stm_pdrv_head, entry) { 428 + if (pe->pdrv == pdrv) { 429 + list_del(&pe->entry); 430 + 431 + if (pe->node_type) { 432 + kfree(pe->node_type->ct_attrs); 433 + kfree(pe->node_type); 434 + } 435 + kfree(pe); 436 + break; 437 + } 438 + } 439 + 440 + mutex_unlock(&stm_pdrv_mutex); 
441 + } 442 + EXPORT_SYMBOL_GPL(stm_unregister_protocol); 443 + 444 + static bool stm_get_protocol(const struct stm_protocol_driver *pdrv) 445 + { 446 + return try_module_get(pdrv->owner); 447 + } 448 + 449 + void stm_put_protocol(const struct stm_protocol_driver *pdrv) 450 + { 451 + module_put(pdrv->owner); 452 + } 453 + 454 + int stm_lookup_protocol(const char *name, 455 + const struct stm_protocol_driver **pdrv, 456 + const struct config_item_type **node_type) 457 + { 458 + const struct stm_pdrv_entry *pe; 459 + 460 + mutex_lock(&stm_pdrv_mutex); 461 + 462 + pe = __stm_lookup_protocol(name); 463 + if (pe && pe->pdrv && stm_get_protocol(pe->pdrv)) { 464 + *pdrv = pe->pdrv; 465 + *node_type = pe->node_type; 466 + } 467 + 468 + mutex_unlock(&stm_pdrv_mutex); 469 + 470 + return pe ? 0 : -ENOENT; 367 471 } 368 472 369 473 static int stm_char_open(struct inode *inode, struct file *file) ··· 543 405 return 0; 544 406 } 545 407 546 - static int stm_file_assign(struct stm_file *stmf, char *id, unsigned int width) 408 + static int 409 + stm_assign_first_policy(struct stm_device *stm, struct stm_output *output, 410 + char **ids, unsigned int width) 547 411 { 548 - struct stm_device *stm = stmf->stm; 549 - int ret; 412 + struct stp_policy_node *pn; 413 + int err, n; 550 414 551 - stmf->policy_node = stp_policy_node_lookup(stm, id); 415 + /* 416 + * On success, stp_policy_node_lookup() will return holding the 417 + * configfs subsystem mutex, which is then released in 418 + * stp_policy_node_put(). This allows the pdrv->output_open() in 419 + * stm_output_assign() to serialize against the attribute accessors. 
420 + */ 421 + for (n = 0, pn = NULL; ids[n] && !pn; n++) 422 + pn = stp_policy_node_lookup(stm, ids[n]); 552 423 553 - ret = stm_output_assign(stm, width, stmf->policy_node, &stmf->output); 424 + if (!pn) 425 + return -EINVAL; 554 426 555 - if (stmf->policy_node) 556 - stp_policy_node_put(stmf->policy_node); 427 + err = stm_output_assign(stm, width, pn, output); 557 428 558 - return ret; 429 + stp_policy_node_put(pn); 430 + 431 + return err; 559 432 } 560 433 561 - static ssize_t notrace stm_write(struct stm_data *data, unsigned int master, 562 - unsigned int channel, const char *buf, size_t count) 434 + /** 435 + * stm_data_write() - send the given payload as data packets 436 + * @data: stm driver's data 437 + * @m: STP master 438 + * @c: STP channel 439 + * @ts_first: timestamp the first packet 440 + * @buf: data payload buffer 441 + * @count: data payload size 442 + */ 443 + ssize_t notrace stm_data_write(struct stm_data *data, unsigned int m, 444 + unsigned int c, bool ts_first, const void *buf, 445 + size_t count) 563 446 { 564 - unsigned int flags = STP_PACKET_TIMESTAMPED; 565 - const unsigned char *p = buf, nil = 0; 566 - size_t pos; 447 + unsigned int flags = ts_first ? STP_PACKET_TIMESTAMPED : 0; 567 448 ssize_t sz; 449 + size_t pos; 568 450 569 - for (pos = 0, p = buf; count > pos; pos += sz, p += sz) { 451 + for (pos = 0, sz = 0; pos < count; pos += sz) { 570 452 sz = min_t(unsigned int, count - pos, 8); 571 - sz = data->packet(data, master, channel, STP_PACKET_DATA, flags, 572 - sz, p); 573 - flags = 0; 574 - 575 - if (sz < 0) 453 + sz = data->packet(data, m, c, STP_PACKET_DATA, flags, sz, 454 + &((u8 *)buf)[pos]); 455 + if (sz <= 0) 576 456 break; 457 + 458 + if (ts_first) { 459 + flags = 0; 460 + ts_first = false; 461 + } 577 462 } 578 463 579 - data->packet(data, master, channel, STP_PACKET_FLAG, 0, 0, &nil); 464 + return sz < 0 ? 
sz : pos; 465 + } 466 + EXPORT_SYMBOL_GPL(stm_data_write); 580 467 581 - return pos; 468 + static ssize_t notrace 469 + stm_write(struct stm_device *stm, struct stm_output *output, 470 + unsigned int chan, const char *buf, size_t count) 471 + { 472 + int err; 473 + 474 + /* stm->pdrv is serialized against policy_mutex */ 475 + if (!stm->pdrv) 476 + return -ENODEV; 477 + 478 + err = stm->pdrv->write(stm->data, output, chan, buf, count); 479 + if (err < 0) 480 + return err; 481 + 482 + return err; 582 483 } 583 484 584 485 static ssize_t stm_char_write(struct file *file, const char __user *buf, ··· 632 455 count = PAGE_SIZE - 1; 633 456 634 457 /* 635 - * if no m/c have been assigned to this writer up to this 636 - * point, use "default" policy entry 458 + * If no m/c have been assigned to this writer up to this 459 + * point, try to use the task name and "default" policy entries. 637 460 */ 638 461 if (!stmf->output.nr_chans) { 639 - err = stm_file_assign(stmf, "default", 1); 462 + char comm[sizeof(current->comm)]; 463 + char *ids[] = { comm, "default", NULL }; 464 + 465 + get_task_comm(comm, current); 466 + 467 + err = stm_assign_first_policy(stmf->stm, &stmf->output, ids, 1); 640 468 /* 641 469 * EBUSY means that somebody else just assigned this 642 470 * output, which is just fine for write() 643 471 */ 644 - if (err && err != -EBUSY) 472 + if (err) 645 473 return err; 646 474 } 647 475 ··· 662 480 663 481 pm_runtime_get_sync(&stm->dev); 664 482 665 - count = stm_write(stm->data, stmf->output.master, stmf->output.channel, 666 - kbuf, count); 483 + count = stm_write(stm, &stmf->output, 0, kbuf, count); 667 484 668 485 pm_runtime_mark_last_busy(&stm->dev); 669 486 pm_runtime_put_autosuspend(&stm->dev); ··· 731 550 { 732 551 struct stm_device *stm = stmf->stm; 733 552 struct stp_policy_id *id; 553 + char *ids[] = { NULL, NULL }; 734 554 int ret = -EINVAL; 735 555 u32 size; 736 556 ··· 764 582 id->width > PAGE_SIZE / stm->data->sw_mmiosz) 765 583 goto err_free; 766 
584 767 - ret = stm_file_assign(stmf, id->id, id->width); 585 + ids[0] = id->id; 586 + ret = stm_assign_first_policy(stmf->stm, &stmf->output, ids, 587 + id->width); 768 588 if (ret) 769 589 goto err_free; 770 590 ··· 1002 818 static int stm_source_link_add(struct stm_source_device *src, 1003 819 struct stm_device *stm) 1004 820 { 1005 - char *id; 1006 - int err; 821 + char *ids[] = { NULL, "default", NULL }; 822 + int err = -ENOMEM; 1007 823 1008 824 mutex_lock(&stm->link_mutex); 1009 825 spin_lock(&stm->link_lock); ··· 1017 833 spin_unlock(&stm->link_lock); 1018 834 mutex_unlock(&stm->link_mutex); 1019 835 1020 - id = kstrdup(src->data->name, GFP_KERNEL); 1021 - if (id) { 1022 - src->policy_node = 1023 - stp_policy_node_lookup(stm, id); 836 + ids[0] = kstrdup(src->data->name, GFP_KERNEL); 837 + if (!ids[0]) 838 + goto fail_detach; 1024 839 1025 - kfree(id); 1026 - } 1027 - 1028 - err = stm_output_assign(stm, src->data->nr_chans, 1029 - src->policy_node, &src->output); 1030 - 1031 - if (src->policy_node) 1032 - stp_policy_node_put(src->policy_node); 840 + err = stm_assign_first_policy(stm, &src->output, ids, 841 + src->data->nr_chans); 842 + kfree(ids[0]); 1033 843 1034 844 if (err) 1035 845 goto fail_detach; ··· 1312 1134 1313 1135 stm = srcu_dereference(src->link, &stm_source_srcu); 1314 1136 if (stm) 1315 - count = stm_write(stm->data, src->output.master, 1316 - src->output.channel + chan, 1317 - buf, count); 1137 + count = stm_write(stm, &src->output, chan, buf, count); 1318 1138 else 1319 1139 count = -ENODEV; 1320 1140 ··· 1339 1163 goto err_src; 1340 1164 1341 1165 init_srcu_struct(&stm_source_srcu); 1166 + INIT_LIST_HEAD(&stm_pdrv_head); 1167 + mutex_init(&stm_pdrv_mutex); 1342 1168 1169 + /* 1170 + * So as to not confuse existing users with a requirement 1171 + * to load yet another module, do it here. 
1172 + */ 1173 + if (IS_ENABLED(CONFIG_STM_PROTO_BASIC)) 1174 + (void)request_module_nowait("stm_p_basic"); 1343 1175 stm_core_up++; 1344 1176 1345 1177 return 0;
+1 -1
drivers/hwtracing/stm/heartbeat.c
··· 76 76 goto fail_unregister; 77 77 78 78 stm_heartbeat[i].data.nr_chans = 1; 79 - stm_heartbeat[i].data.link = stm_heartbeat_link; 79 + stm_heartbeat[i].data.link = stm_heartbeat_link; 80 80 stm_heartbeat[i].data.unlink = stm_heartbeat_unlink; 81 81 hrtimer_init(&stm_heartbeat[i].hrtimer, CLOCK_MONOTONIC, 82 82 HRTIMER_MODE_ABS);
+48
drivers/hwtracing/stm/p_basic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Basic framing protocol for STM devices. 4 + * Copyright (c) 2018, Intel Corporation. 5 + */ 6 + 7 + #include <linux/module.h> 8 + #include <linux/device.h> 9 + #include <linux/stm.h> 10 + #include "stm.h" 11 + 12 + static ssize_t basic_write(struct stm_data *data, struct stm_output *output, 13 + unsigned int chan, const char *buf, size_t count) 14 + { 15 + unsigned int c = output->channel + chan; 16 + unsigned int m = output->master; 17 + const unsigned char nil = 0; 18 + ssize_t sz; 19 + 20 + sz = stm_data_write(data, m, c, true, buf, count); 21 + if (sz > 0) 22 + data->packet(data, m, c, STP_PACKET_FLAG, 0, 0, &nil); 23 + 24 + return sz; 25 + } 26 + 27 + static const struct stm_protocol_driver basic_pdrv = { 28 + .owner = THIS_MODULE, 29 + .name = "p_basic", 30 + .write = basic_write, 31 + }; 32 + 33 + static int basic_stm_init(void) 34 + { 35 + return stm_register_protocol(&basic_pdrv); 36 + } 37 + 38 + static void basic_stm_exit(void) 39 + { 40 + stm_unregister_protocol(&basic_pdrv); 41 + } 42 + 43 + module_init(basic_stm_init); 44 + module_exit(basic_stm_exit); 45 + 46 + MODULE_LICENSE("GPL v2"); 47 + MODULE_DESCRIPTION("Basic STM framing protocol driver"); 48 + MODULE_AUTHOR("Alexander Shishkin <alexander.shishkin@linux.intel.com>");
+382
drivers/hwtracing/stm/p_sys-t.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * MIPI SyS-T framing protocol for STM devices. 4 + * Copyright (c) 2018, Intel Corporation. 5 + */ 6 + 7 + #include <linux/configfs.h> 8 + #include <linux/module.h> 9 + #include <linux/device.h> 10 + #include <linux/slab.h> 11 + #include <linux/uuid.h> 12 + #include <linux/stm.h> 13 + #include "stm.h" 14 + 15 + enum sys_t_message_type { 16 + MIPI_SYST_TYPE_BUILD = 0, 17 + MIPI_SYST_TYPE_SHORT32, 18 + MIPI_SYST_TYPE_STRING, 19 + MIPI_SYST_TYPE_CATALOG, 20 + MIPI_SYST_TYPE_RAW = 6, 21 + MIPI_SYST_TYPE_SHORT64, 22 + MIPI_SYST_TYPE_CLOCK, 23 + }; 24 + 25 + enum sys_t_message_severity { 26 + MIPI_SYST_SEVERITY_MAX = 0, 27 + MIPI_SYST_SEVERITY_FATAL, 28 + MIPI_SYST_SEVERITY_ERROR, 29 + MIPI_SYST_SEVERITY_WARNING, 30 + MIPI_SYST_SEVERITY_INFO, 31 + MIPI_SYST_SEVERITY_USER1, 32 + MIPI_SYST_SEVERITY_USER2, 33 + MIPI_SYST_SEVERITY_DEBUG, 34 + }; 35 + 36 + enum sys_t_message_build_subtype { 37 + MIPI_SYST_BUILD_ID_COMPACT32 = 0, 38 + MIPI_SYST_BUILD_ID_COMPACT64, 39 + MIPI_SYST_BUILD_ID_LONG, 40 + }; 41 + 42 + enum sys_t_message_clock_subtype { 43 + MIPI_SYST_CLOCK_TRANSPORT_SYNC = 1, 44 + }; 45 + 46 + enum sys_t_message_string_subtype { 47 + MIPI_SYST_STRING_GENERIC = 1, 48 + MIPI_SYST_STRING_FUNCTIONENTER, 49 + MIPI_SYST_STRING_FUNCTIONEXIT, 50 + MIPI_SYST_STRING_INVALIDPARAM = 5, 51 + MIPI_SYST_STRING_ASSERT = 7, 52 + MIPI_SYST_STRING_PRINTF_32 = 11, 53 + MIPI_SYST_STRING_PRINTF_64 = 12, 54 + }; 55 + 56 + #define MIPI_SYST_TYPE(t) ((u32)(MIPI_SYST_TYPE_ ## t)) 57 + #define MIPI_SYST_SEVERITY(s) ((u32)(MIPI_SYST_SEVERITY_ ## s) << 4) 58 + #define MIPI_SYST_OPT_LOC BIT(8) 59 + #define MIPI_SYST_OPT_LEN BIT(9) 60 + #define MIPI_SYST_OPT_CHK BIT(10) 61 + #define MIPI_SYST_OPT_TS BIT(11) 62 + #define MIPI_SYST_UNIT(u) ((u32)(u) << 12) 63 + #define MIPI_SYST_ORIGIN(o) ((u32)(o) << 16) 64 + #define MIPI_SYST_OPT_GUID BIT(23) 65 + #define MIPI_SYST_SUBTYPE(s) ((u32)(MIPI_SYST_ ## s) << 24) 66 + #define MIPI_SYST_UNITLARGE(u) 
(MIPI_SYST_UNIT(u & 0xf) | \ 67 + MIPI_SYST_ORIGIN(u >> 4)) 68 + #define MIPI_SYST_TYPES(t, s) (MIPI_SYST_TYPE(t) | \ 69 + MIPI_SYST_SUBTYPE(t ## _ ## s)) 70 + 71 + #define DATA_HEADER (MIPI_SYST_TYPES(STRING, GENERIC) | \ 72 + MIPI_SYST_SEVERITY(INFO) | \ 73 + MIPI_SYST_OPT_GUID) 74 + 75 + #define CLOCK_SYNC_HEADER (MIPI_SYST_TYPES(CLOCK, TRANSPORT_SYNC) | \ 76 + MIPI_SYST_SEVERITY(MAX)) 77 + 78 + struct sys_t_policy_node { 79 + uuid_t uuid; 80 + bool do_len; 81 + unsigned long ts_interval; 82 + unsigned long clocksync_interval; 83 + }; 84 + 85 + struct sys_t_output { 86 + struct sys_t_policy_node node; 87 + unsigned long ts_jiffies; 88 + unsigned long clocksync_jiffies; 89 + }; 90 + 91 + static void sys_t_policy_node_init(void *priv) 92 + { 93 + struct sys_t_policy_node *pn = priv; 94 + 95 + generate_random_uuid(pn->uuid.b); 96 + } 97 + 98 + static int sys_t_output_open(void *priv, struct stm_output *output) 99 + { 100 + struct sys_t_policy_node *pn = priv; 101 + struct sys_t_output *opriv; 102 + 103 + opriv = kzalloc(sizeof(*opriv), GFP_ATOMIC); 104 + if (!opriv) 105 + return -ENOMEM; 106 + 107 + memcpy(&opriv->node, pn, sizeof(opriv->node)); 108 + output->pdrv_private = opriv; 109 + 110 + return 0; 111 + } 112 + 113 + static void sys_t_output_close(struct stm_output *output) 114 + { 115 + kfree(output->pdrv_private); 116 + } 117 + 118 + static ssize_t sys_t_policy_uuid_show(struct config_item *item, 119 + char *page) 120 + { 121 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 122 + 123 + return sprintf(page, "%pU\n", &pn->uuid); 124 + } 125 + 126 + static ssize_t 127 + sys_t_policy_uuid_store(struct config_item *item, const char *page, 128 + size_t count) 129 + { 130 + struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex; 131 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 132 + int ret; 133 + 134 + mutex_lock(mutexp); 135 + ret = uuid_parse(page, &pn->uuid); 136 + mutex_unlock(mutexp); 137 + 138 + return ret < 0 ? 
ret : count; 139 + } 140 + 141 + CONFIGFS_ATTR(sys_t_policy_, uuid); 142 + 143 + static ssize_t sys_t_policy_do_len_show(struct config_item *item, 144 + char *page) 145 + { 146 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 147 + 148 + return sprintf(page, "%d\n", pn->do_len); 149 + } 150 + 151 + static ssize_t 152 + sys_t_policy_do_len_store(struct config_item *item, const char *page, 153 + size_t count) 154 + { 155 + struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex; 156 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 157 + int ret; 158 + 159 + mutex_lock(mutexp); 160 + ret = kstrtobool(page, &pn->do_len); 161 + mutex_unlock(mutexp); 162 + 163 + return ret ? ret : count; 164 + } 165 + 166 + CONFIGFS_ATTR(sys_t_policy_, do_len); 167 + 168 + static ssize_t sys_t_policy_ts_interval_show(struct config_item *item, 169 + char *page) 170 + { 171 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 172 + 173 + return sprintf(page, "%u\n", jiffies_to_msecs(pn->ts_interval)); 174 + } 175 + 176 + static ssize_t 177 + sys_t_policy_ts_interval_store(struct config_item *item, const char *page, 178 + size_t count) 179 + { 180 + struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex; 181 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 182 + unsigned int ms; 183 + int ret; 184 + 185 + mutex_lock(mutexp); 186 + ret = kstrtouint(page, 10, &ms); 187 + mutex_unlock(mutexp); 188 + 189 + if (!ret) { 190 + pn->ts_interval = msecs_to_jiffies(ms); 191 + return count; 192 + } 193 + 194 + return ret; 195 + } 196 + 197 + CONFIGFS_ATTR(sys_t_policy_, ts_interval); 198 + 199 + static ssize_t sys_t_policy_clocksync_interval_show(struct config_item *item, 200 + char *page) 201 + { 202 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 203 + 204 + return sprintf(page, "%u\n", jiffies_to_msecs(pn->clocksync_interval)); 205 + } 206 + 207 + static ssize_t 208 + sys_t_policy_clocksync_interval_store(struct config_item *item, 209 + 
const char *page, size_t count) 210 + { 211 + struct mutex *mutexp = &item->ci_group->cg_subsys->su_mutex; 212 + struct sys_t_policy_node *pn = to_pdrv_policy_node(item); 213 + unsigned int ms; 214 + int ret; 215 + 216 + mutex_lock(mutexp); 217 + ret = kstrtouint(page, 10, &ms); 218 + mutex_unlock(mutexp); 219 + 220 + if (!ret) { 221 + pn->clocksync_interval = msecs_to_jiffies(ms); 222 + return count; 223 + } 224 + 225 + return ret; 226 + } 227 + 228 + CONFIGFS_ATTR(sys_t_policy_, clocksync_interval); 229 + 230 + static struct configfs_attribute *sys_t_policy_attrs[] = { 231 + &sys_t_policy_attr_uuid, 232 + &sys_t_policy_attr_do_len, 233 + &sys_t_policy_attr_ts_interval, 234 + &sys_t_policy_attr_clocksync_interval, 235 + NULL, 236 + }; 237 + 238 + static inline bool sys_t_need_ts(struct sys_t_output *op) 239 + { 240 + if (op->node.ts_interval && 241 + time_after(op->ts_jiffies + op->node.ts_interval, jiffies)) { 242 + op->ts_jiffies = jiffies; 243 + 244 + return true; 245 + } 246 + 247 + return false; 248 + } 249 + 250 + static bool sys_t_need_clock_sync(struct sys_t_output *op) 251 + { 252 + if (op->node.clocksync_interval && 253 + time_after(op->clocksync_jiffies + op->node.clocksync_interval, 254 + jiffies)) { 255 + op->clocksync_jiffies = jiffies; 256 + 257 + return true; 258 + } 259 + 260 + return false; 261 + } 262 + 263 + static ssize_t 264 + sys_t_clock_sync(struct stm_data *data, unsigned int m, unsigned int c) 265 + { 266 + u32 header = CLOCK_SYNC_HEADER; 267 + const unsigned char nil = 0; 268 + u64 payload[2]; /* Clock value and frequency */ 269 + ssize_t sz; 270 + 271 + sz = data->packet(data, m, c, STP_PACKET_DATA, STP_PACKET_TIMESTAMPED, 272 + 4, (u8 *)&header); 273 + if (sz <= 0) 274 + return sz; 275 + 276 + payload[0] = ktime_get_real_ns(); 277 + payload[1] = NSEC_PER_SEC; 278 + sz = stm_data_write(data, m, c, false, &payload, sizeof(payload)); 279 + if (sz <= 0) 280 + return sz; 281 + 282 + data->packet(data, m, c, STP_PACKET_FLAG, 0, 0, &nil); 283 
+ 284 + return sizeof(header) + sizeof(payload); 285 + } 286 + 287 + static ssize_t sys_t_write(struct stm_data *data, struct stm_output *output, 288 + unsigned int chan, const char *buf, size_t count) 289 + { 290 + struct sys_t_output *op = output->pdrv_private; 291 + unsigned int c = output->channel + chan; 292 + unsigned int m = output->master; 293 + const unsigned char nil = 0; 294 + u32 header = DATA_HEADER; 295 + ssize_t sz; 296 + 297 + /* We require an existing policy node to proceed */ 298 + if (!op) 299 + return -EINVAL; 300 + 301 + if (sys_t_need_clock_sync(op)) { 302 + sz = sys_t_clock_sync(data, m, c); 303 + if (sz <= 0) 304 + return sz; 305 + } 306 + 307 + if (op->node.do_len) 308 + header |= MIPI_SYST_OPT_LEN; 309 + if (sys_t_need_ts(op)) 310 + header |= MIPI_SYST_OPT_TS; 311 + 312 + /* 313 + * STP framing rules for SyS-T frames: 314 + * * the first packet of the SyS-T frame is timestamped; 315 + * * the last packet is a FLAG. 316 + */ 317 + /* Message layout: HEADER / GUID / [LENGTH /][TIMESTAMP /] DATA */ 318 + /* HEADER */ 319 + sz = data->packet(data, m, c, STP_PACKET_DATA, STP_PACKET_TIMESTAMPED, 320 + 4, (u8 *)&header); 321 + if (sz <= 0) 322 + return sz; 323 + 324 + /* GUID */ 325 + sz = stm_data_write(data, m, c, false, op->node.uuid.b, UUID_SIZE); 326 + if (sz <= 0) 327 + return sz; 328 + 329 + /* [LENGTH] */ 330 + if (op->node.do_len) { 331 + u16 length = count; 332 + 333 + sz = data->packet(data, m, c, STP_PACKET_DATA, 0, 2, 334 + (u8 *)&length); 335 + if (sz <= 0) 336 + return sz; 337 + } 338 + 339 + /* [TIMESTAMP] */ 340 + if (header & MIPI_SYST_OPT_TS) { 341 + u64 ts = ktime_get_real_ns(); 342 + 343 + sz = stm_data_write(data, m, c, false, &ts, sizeof(ts)); 344 + if (sz <= 0) 345 + return sz; 346 + } 347 + 348 + /* DATA */ 349 + sz = stm_data_write(data, m, c, false, buf, count); 350 + if (sz > 0) 351 + data->packet(data, m, c, STP_PACKET_FLAG, 0, 0, &nil); 352 + 353 + return sz; 354 + } 355 + 356 + static const struct 
stm_protocol_driver sys_t_pdrv = { 357 + .owner = THIS_MODULE, 358 + .name = "p_sys-t", 359 + .priv_sz = sizeof(struct sys_t_policy_node), 360 + .write = sys_t_write, 361 + .policy_attr = sys_t_policy_attrs, 362 + .policy_node_init = sys_t_policy_node_init, 363 + .output_open = sys_t_output_open, 364 + .output_close = sys_t_output_close, 365 + }; 366 + 367 + static int sys_t_stm_init(void) 368 + { 369 + return stm_register_protocol(&sys_t_pdrv); 370 + } 371 + 372 + static void sys_t_stm_exit(void) 373 + { 374 + stm_unregister_protocol(&sys_t_pdrv); 375 + } 376 + 377 + module_init(sys_t_stm_init); 378 + module_exit(sys_t_stm_exit); 379 + 380 + MODULE_LICENSE("GPL v2"); 381 + MODULE_DESCRIPTION("MIPI SyS-T STM framing protocol driver"); 382 + MODULE_AUTHOR("Alexander Shishkin <alexander.shishkin@linux.intel.com>");
+120 -27
drivers/hwtracing/stm/policy.c
··· 33 33 unsigned int last_master; 34 34 unsigned int first_channel; 35 35 unsigned int last_channel; 36 + /* this is the one that's exposed to the attributes */ 37 + unsigned char priv[0]; 36 38 }; 39 + 40 + void *stp_policy_node_priv(struct stp_policy_node *pn) 41 + { 42 + if (!pn) 43 + return NULL; 44 + 45 + return pn->priv; 46 + } 37 47 38 48 static struct configfs_subsystem stp_policy_subsys; 39 49 ··· 77 67 group) : 78 68 NULL; 79 69 } 70 + 71 + void *to_pdrv_policy_node(struct config_item *item) 72 + { 73 + struct stp_policy_node *node = to_stp_policy_node(item); 74 + 75 + return stp_policy_node_priv(node); 76 + } 77 + EXPORT_SYMBOL_GPL(to_pdrv_policy_node); 80 78 81 79 static ssize_t 82 80 stp_policy_node_masters_show(struct config_item *item, char *page) ··· 181 163 182 164 static void stp_policy_node_release(struct config_item *item) 183 165 { 184 - kfree(to_stp_policy_node(item)); 166 + struct stp_policy_node *node = to_stp_policy_node(item); 167 + 168 + kfree(node); 185 169 } 186 170 187 171 static struct configfs_item_operations stp_policy_node_item_ops = { ··· 202 182 static const struct config_item_type stp_policy_type; 203 183 static const struct config_item_type stp_policy_node_type; 204 184 185 + const struct config_item_type * 186 + get_policy_node_type(struct configfs_attribute **attrs) 187 + { 188 + struct config_item_type *type; 189 + struct configfs_attribute **merged; 190 + 191 + type = kmemdup(&stp_policy_node_type, sizeof(stp_policy_node_type), 192 + GFP_KERNEL); 193 + if (!type) 194 + return NULL; 195 + 196 + merged = memcat_p(stp_policy_node_attrs, attrs); 197 + if (!merged) { 198 + kfree(type); 199 + return NULL; 200 + } 201 + 202 + type->ct_attrs = merged; 203 + 204 + return type; 205 + } 206 + 205 207 static struct config_group * 206 208 stp_policy_node_make(struct config_group *group, const char *name) 207 209 { 210 + const struct config_item_type *type = &stp_policy_node_type; 208 211 struct stp_policy_node *policy_node, 
*parent_node; 212 + const struct stm_protocol_driver *pdrv; 209 213 struct stp_policy *policy; 210 214 211 215 if (group->cg_item.ci_type == &stp_policy_type) { ··· 243 199 if (!policy->stm) 244 200 return ERR_PTR(-ENODEV); 245 201 246 - policy_node = kzalloc(sizeof(struct stp_policy_node), GFP_KERNEL); 202 + pdrv = policy->stm->pdrv; 203 + policy_node = 204 + kzalloc(offsetof(struct stp_policy_node, priv[pdrv->priv_sz]), 205 + GFP_KERNEL); 247 206 if (!policy_node) 248 207 return ERR_PTR(-ENOMEM); 249 208 250 - config_group_init_type_name(&policy_node->group, name, 251 - &stp_policy_node_type); 209 + if (pdrv->policy_node_init) 210 + pdrv->policy_node_init((void *)policy_node->priv); 211 + 212 + if (policy->stm->pdrv_node_type) 213 + type = policy->stm->pdrv_node_type; 214 + 215 + config_group_init_type_name(&policy_node->group, name, type); 252 216 253 217 policy_node->policy = policy; 254 218 ··· 306 254 307 255 CONFIGFS_ATTR_RO(stp_policy_, device); 308 256 257 + static ssize_t stp_policy_protocol_show(struct config_item *item, 258 + char *page) 259 + { 260 + struct stp_policy *policy = to_stp_policy(item); 261 + ssize_t count; 262 + 263 + count = sprintf(page, "%s\n", 264 + (policy && policy->stm) ? 
265 + policy->stm->pdrv->name : 266 + "<none>"); 267 + 268 + return count; 269 + } 270 + 271 + CONFIGFS_ATTR_RO(stp_policy_, protocol); 272 + 309 273 static struct configfs_attribute *stp_policy_attrs[] = { 310 274 &stp_policy_attr_device, 275 + &stp_policy_attr_protocol, 311 276 NULL, 312 277 }; 313 278 ··· 345 276 stm->policy = NULL; 346 277 policy->stm = NULL; 347 278 279 + stm_put_protocol(stm->pdrv); 348 280 stm_put_device(stm); 349 281 } 350 282 ··· 381 311 }; 382 312 383 313 static struct config_group * 384 - stp_policies_make(struct config_group *group, const char *name) 314 + stp_policy_make(struct config_group *group, const char *name) 385 315 { 316 + const struct config_item_type *pdrv_node_type; 317 + const struct stm_protocol_driver *pdrv; 318 + char *devname, *proto, *p; 386 319 struct config_group *ret; 387 320 struct stm_device *stm; 388 - char *devname, *p; 321 + int err; 389 322 390 323 devname = kasprintf(GFP_KERNEL, "%s", name); 391 324 if (!devname) ··· 399 326 * <device_name> is the name of an existing stm device; may 400 327 * contain dots; 401 328 * <policy_name> is an arbitrary string; may not contain dots 329 + * <device_name>:<protocol_name>.<policy_name> 402 330 */ 403 331 p = strrchr(devname, '.'); 404 332 if (!p) { ··· 409 335 410 336 *p = '\0'; 411 337 338 + /* 339 + * look for ":<protocol_name>": 340 + * + no protocol suffix: fall back to whatever is available; 341 + * + unknown protocol: fail the whole thing 342 + */ 343 + proto = strrchr(devname, ':'); 344 + if (proto) 345 + *proto++ = '\0'; 346 + 412 347 stm = stm_find_device(devname); 348 + if (!stm) { 349 + kfree(devname); 350 + return ERR_PTR(-ENODEV); 351 + } 352 + 353 + err = stm_lookup_protocol(proto, &pdrv, &pdrv_node_type); 413 354 kfree(devname); 414 355 415 - if (!stm) 356 + if (err) { 357 + stm_put_device(stm); 416 358 return ERR_PTR(-ENODEV); 359 + } 417 360 418 361 mutex_lock(&stm->policy_mutex); 419 362 if (stm->policy) { ··· 440 349 441 350 stm->policy = 
kzalloc(sizeof(*stm->policy), GFP_KERNEL); 442 351 if (!stm->policy) { 443 - ret = ERR_PTR(-ENOMEM); 444 - goto unlock_policy; 352 + mutex_unlock(&stm->policy_mutex); 353 + stm_put_protocol(pdrv); 354 + stm_put_device(stm); 355 + return ERR_PTR(-ENOMEM); 445 356 } 446 357 447 358 config_group_init_type_name(&stm->policy->group, name, 448 359 &stp_policy_type); 449 - stm->policy->stm = stm; 450 360 361 + stm->pdrv = pdrv; 362 + stm->pdrv_node_type = pdrv_node_type; 363 + stm->policy->stm = stm; 451 364 ret = &stm->policy->group; 452 365 453 366 unlock_policy: 454 367 mutex_unlock(&stm->policy_mutex); 455 368 456 - if (IS_ERR(ret)) 369 + if (IS_ERR(ret)) { 370 + stm_put_protocol(stm->pdrv); 457 371 stm_put_device(stm); 372 + } 458 373 459 374 return ret; 460 375 } 461 376 462 - static struct configfs_group_operations stp_policies_group_ops = { 463 - .make_group = stp_policies_make, 377 + static struct configfs_group_operations stp_policy_root_group_ops = { 378 + .make_group = stp_policy_make, 464 379 }; 465 380 466 - static const struct config_item_type stp_policies_type = { 467 - .ct_group_ops = &stp_policies_group_ops, 381 + static const struct config_item_type stp_policy_root_type = { 382 + .ct_group_ops = &stp_policy_root_group_ops, 468 383 .ct_owner = THIS_MODULE, 469 384 }; 470 385 ··· 478 381 .su_group = { 479 382 .cg_item = { 480 383 .ci_namebuf = "stp-policy", 481 - .ci_type = &stp_policies_type, 384 + .ci_type = &stp_policy_root_type, 482 385 }, 483 386 }, 484 387 }; ··· 489 392 static struct stp_policy_node * 490 393 __stp_policy_node_lookup(struct stp_policy *policy, char *s) 491 394 { 492 - struct stp_policy_node *policy_node, *ret; 395 + struct stp_policy_node *policy_node, *ret = NULL; 493 396 struct list_head *head = &policy->group.cg_children; 494 397 struct config_item *item; 495 398 char *start, *end = s; 496 399 497 400 if (list_empty(head)) 498 401 return NULL; 499 - 500 - /* return the first entry if everything else fails */ 501 - item = 
list_entry(head->next, struct config_item, ci_entry); 502 - ret = to_stp_policy_node(item); 503 402 504 403 next: 505 404 for (;;) { ··· 542 449 543 450 if (policy_node) 544 451 config_item_get(&policy_node->group.cg_item); 545 - mutex_unlock(&stp_policy_subsys.su_mutex); 452 + else 453 + mutex_unlock(&stp_policy_subsys.su_mutex); 546 454 547 455 return policy_node; 548 456 } 549 457 550 458 void stp_policy_node_put(struct stp_policy_node *policy_node) 551 459 { 460 + lockdep_assert_held(&stp_policy_subsys.su_mutex); 461 + 462 + mutex_unlock(&stp_policy_subsys.su_mutex); 552 463 config_item_put(&policy_node->group.cg_item); 553 464 } 554 465 555 466 int __init stp_configfs_init(void) 556 467 { 557 - int err; 558 - 559 468 config_group_init(&stp_policy_subsys.su_group); 560 469 mutex_init(&stp_policy_subsys.su_mutex); 561 - err = configfs_register_subsystem(&stp_policy_subsys); 562 - 563 - return err; 470 + return configfs_register_subsystem(&stp_policy_subsys); 564 471 } 565 472 566 473 void __exit stp_configfs_exit(void)
+46 -10
drivers/hwtracing/stm/stm.h
··· 10 10 #ifndef _STM_STM_H_ 11 11 #define _STM_STM_H_ 12 12 13 + #include <linux/configfs.h> 14 + 13 15 struct stp_policy; 14 16 struct stp_policy_node; 17 + struct stm_protocol_driver; 15 18 16 - struct stp_policy_node * 17 - stp_policy_node_lookup(struct stm_device *stm, char *s); 18 - void stp_policy_node_put(struct stp_policy_node *policy_node); 19 - void stp_policy_unbind(struct stp_policy *policy); 20 - 21 - void stp_policy_node_get_ranges(struct stp_policy_node *policy_node, 22 - unsigned int *mstart, unsigned int *mend, 23 - unsigned int *cstart, unsigned int *cend); 24 19 int stp_configfs_init(void); 25 20 void stp_configfs_exit(void); 21 + 22 + void *stp_policy_node_priv(struct stp_policy_node *pn); 26 23 27 24 struct stp_master { 28 25 unsigned int nr_free; ··· 37 40 struct mutex link_mutex; 38 41 spinlock_t link_lock; 39 42 struct list_head link_list; 43 + /* framing protocol in use */ 44 + const struct stm_protocol_driver *pdrv; 45 + const struct config_item_type *pdrv_node_type; 40 46 /* master allocation */ 41 47 spinlock_t mc_lock; 42 48 struct stp_master *masters[0]; ··· 48 48 #define to_stm_device(_d) \ 49 49 container_of((_d), struct stm_device, dev) 50 50 51 + struct stp_policy_node * 52 + stp_policy_node_lookup(struct stm_device *stm, char *s); 53 + void stp_policy_node_put(struct stp_policy_node *policy_node); 54 + void stp_policy_unbind(struct stp_policy *policy); 55 + 56 + void stp_policy_node_get_ranges(struct stp_policy_node *policy_node, 57 + unsigned int *mstart, unsigned int *mend, 58 + unsigned int *cstart, unsigned int *cend); 59 + 60 + const struct config_item_type * 61 + get_policy_node_type(struct configfs_attribute **attrs); 62 + 51 63 struct stm_output { 52 64 spinlock_t lock; 53 65 unsigned int master; 54 66 unsigned int channel; 55 67 unsigned int nr_chans; 68 + void *pdrv_private; 56 69 }; 57 70 58 71 struct stm_file { 59 72 struct stm_device *stm; 60 - struct stp_policy_node *policy_node; 61 73 struct stm_output output; 62 
74 }; 63 75 ··· 83 71 struct stm_device __rcu *link; 84 72 struct list_head link_entry; 85 73 /* one output per stm_source device */ 86 - struct stp_policy_node *policy_node; 87 74 struct stm_output output; 88 75 }; 89 76 90 77 #define to_stm_source_device(_d) \ 91 78 container_of((_d), struct stm_source_device, dev) 79 + 80 + void *to_pdrv_policy_node(struct config_item *item); 81 + 82 + struct stm_protocol_driver { 83 + struct module *owner; 84 + const char *name; 85 + ssize_t (*write)(struct stm_data *data, 86 + struct stm_output *output, unsigned int chan, 87 + const char *buf, size_t count); 88 + void (*policy_node_init)(void *arg); 89 + int (*output_open)(void *priv, struct stm_output *output); 90 + void (*output_close)(struct stm_output *output); 91 + ssize_t priv_sz; 92 + struct configfs_attribute **policy_attr; 93 + }; 94 + 95 + int stm_register_protocol(const struct stm_protocol_driver *pdrv); 96 + void stm_unregister_protocol(const struct stm_protocol_driver *pdrv); 97 + int stm_lookup_protocol(const char *name, 98 + const struct stm_protocol_driver **pdrv, 99 + const struct config_item_type **type); 100 + void stm_put_protocol(const struct stm_protocol_driver *pdrv); 101 + ssize_t stm_data_write(struct stm_data *data, unsigned int m, 102 + unsigned int c, bool ts_first, const void *buf, 103 + size_t count); 92 104 93 105 #endif /* _STM_STM_H_ */
+1 -1
drivers/misc/ad525x_dpot-i2c.c
··· 114 114 115 115 module_i2c_driver(ad_dpot_i2c_driver); 116 116 117 - MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>"); 117 + MODULE_AUTHOR("Michael Hennerich <michael.hennerich@analog.com>"); 118 118 MODULE_DESCRIPTION("digital potentiometer I2C bus driver"); 119 119 MODULE_LICENSE("GPL");
+1 -1
drivers/misc/ad525x_dpot-spi.c
··· 140 140 141 141 module_spi_driver(ad_dpot_spi_driver); 142 142 143 - MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>"); 143 + MODULE_AUTHOR("Michael Hennerich <michael.hennerich@analog.com>"); 144 144 MODULE_DESCRIPTION("digital potentiometer SPI bus driver"); 145 145 MODULE_LICENSE("GPL"); 146 146 MODULE_ALIAS("spi:ad_dpot");
+3 -3
drivers/misc/ad525x_dpot.c
··· 1 1 /* 2 2 * ad525x_dpot: Driver for the Analog Devices digital potentiometers 3 3 * Copyright (c) 2009-2010 Analog Devices, Inc. 4 - * Author: Michael Hennerich <hennerich@blackfin.uclinux.org> 4 + * Author: Michael Hennerich <michael.hennerich@analog.com> 5 5 * 6 6 * DEVID #Wipers #Positions Resistor Options (kOhm) 7 7 * AD5258 1 64 1, 10, 50, 100 ··· 64 64 * Author: Chris Verges <chrisv@cyberswitching.com> 65 65 * 66 66 * derived from ad5252.c 67 - * Copyright (c) 2006-2011 Michael Hennerich <hennerich@blackfin.uclinux.org> 67 + * Copyright (c) 2006-2011 Michael Hennerich <michael.hennerich@analog.com> 68 68 * 69 69 * Licensed under the GPL-2 or later. 70 70 */ ··· 760 760 761 761 762 762 MODULE_AUTHOR("Chris Verges <chrisv@cyberswitching.com>, " 763 - "Michael Hennerich <hennerich@blackfin.uclinux.org>"); 763 + "Michael Hennerich <michael.hennerich@analog.com>"); 764 764 MODULE_DESCRIPTION("Digital potentiometer driver"); 765 765 MODULE_LICENSE("GPL");
-1
drivers/misc/apds990x.c
··· 188 188 #define APDS_LUX_DEFAULT_RATE 200 189 189 190 190 static const u8 again[] = {1, 8, 16, 120}; /* ALS gain steps */ 191 - static const u8 ir_currents[] = {100, 50, 25, 12}; /* IRled currents in mA */ 192 191 193 192 /* Following two tables must match i.e 10Hz rate means 1 as persistence value */ 194 193 static const u16 arates_hz[] = {10, 5, 2, 1};
-3
drivers/misc/bh1770glc.c
··· 180 180 static const s16 prox_rates_hz[] = {100, 50, 33, 25, 14, 10, 5, 2}; 181 181 static const s16 prox_rates_ms[] = {10, 20, 30, 40, 70, 100, 200, 500}; 182 182 183 - /* Supported IR-led currents in mA */ 184 - static const u8 prox_curr_ma[] = {5, 10, 20, 50, 100, 150, 200}; 185 - 186 183 /* 187 184 * Supported stand alone rates in ms from chip data sheet 188 185 * {100, 200, 500, 1000, 2000};
+2 -2
drivers/misc/cxl/flash.c
··· 92 92 93 93 val = (u32 *)new_prop->value; 94 94 rc = cxl_update_properties(dn, new_prop); 95 - pr_devel("%s: update property (%s, length: %i, value: %#x)\n", 96 - dn->name, name, vd, be32_to_cpu(*val)); 95 + pr_devel("%pOFn: update property (%s, length: %i, value: %#x)\n", 96 + dn, name, vd, be32_to_cpu(*val)); 97 97 98 98 if (rc) { 99 99 kfree(new_prop->name);
-2
drivers/misc/cxl/guest.c
··· 1018 1018 1019 1019 void cxl_guest_remove_afu(struct cxl_afu *afu) 1020 1020 { 1021 - pr_devel("in %s - AFU(%d)\n", __func__, afu->slice); 1022 - 1023 1021 if (!afu) 1024 1022 return; 1025 1023
+1 -1
drivers/misc/echo/echo.c
··· 381 381 */ 382 382 ec->factor = 0; 383 383 ec->shift = 0; 384 - if ((ec->nonupdate_dwell == 0)) { 384 + if (!ec->nonupdate_dwell) { 385 385 int p, logp, shift; 386 386 387 387 /* Determine:
+11
drivers/misc/eeprom/Kconfig
··· 111 111 This driver can also be built as a module. If so, the module 112 112 will be called idt_89hpesx. 113 113 114 + config EEPROM_EE1004 115 + tristate "SPD EEPROMs on DDR4 memory modules" 116 + depends on I2C && SYSFS 117 + help 118 + Enable this driver to get read support for SPD EEPROMs following 119 + the JEDEC EE1004 standard. These are typically found on DDR4 120 + SDRAM memory modules. 121 + 122 + This driver can also be built as a module. If so, the module 123 + will be called ee1004. 124 + 114 125 endmenu
+1
drivers/misc/eeprom/Makefile
··· 7 7 obj-$(CONFIG_EEPROM_93XX46) += eeprom_93xx46.o 8 8 obj-$(CONFIG_EEPROM_DIGSY_MTC_CFG) += digsy_mtc_eeprom.o 9 9 obj-$(CONFIG_EEPROM_IDT_89HPESX) += idt_89hpesx.o 10 + obj-$(CONFIG_EEPROM_EE1004) += ee1004.o
+1 -12
drivers/misc/eeprom/at25.c
··· 366 366 at25->nvmem_config.word_size = 1; 367 367 at25->nvmem_config.size = chip.byte_len; 368 368 369 - at25->nvmem = nvmem_register(&at25->nvmem_config); 369 + at25->nvmem = devm_nvmem_register(&spi->dev, &at25->nvmem_config); 370 370 if (IS_ERR(at25->nvmem)) 371 371 return PTR_ERR(at25->nvmem); 372 372 ··· 376 376 at25->chip.name, 377 377 (chip.flags & EE_READONLY) ? " (readonly)" : "", 378 378 at25->chip.page_size); 379 - return 0; 380 - } 381 - 382 - static int at25_remove(struct spi_device *spi) 383 - { 384 - struct at25_data *at25; 385 - 386 - at25 = spi_get_drvdata(spi); 387 - nvmem_unregister(at25->nvmem); 388 - 389 379 return 0; 390 380 } 391 381 ··· 393 403 .of_match_table = at25_of_match, 394 404 }, 395 405 .probe = at25_probe, 396 - .remove = at25_remove, 397 406 }; 398 407 399 408 module_spi_driver(at25_driver);
+281
drivers/misc/eeprom/ee1004.c
··· 1 + /* 2 + * ee1004 - driver for DDR4 SPD EEPROMs 3 + * 4 + * Copyright (C) 2017 Jean Delvare 5 + * 6 + * Based on the at24 driver: 7 + * Copyright (C) 2005-2007 David Brownell 8 + * Copyright (C) 2008 Wolfram Sang, Pengutronix 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License as published by 12 + * the Free Software Foundation; either version 2 of the License, or 13 + * (at your option) any later version. 14 + */ 15 + 16 + #include <linux/i2c.h> 17 + #include <linux/init.h> 18 + #include <linux/kernel.h> 19 + #include <linux/mod_devicetable.h> 20 + #include <linux/module.h> 21 + #include <linux/mutex.h> 22 + 23 + /* 24 + * DDR4 memory modules use special EEPROMs following the JEDEC EE1004 25 + * specification. These are 512-byte EEPROMs using a single I2C address 26 + * in the 0x50-0x57 range for data. One of two 256-byte pages is selected 27 + * by writing a command to I2C address 0x36 or 0x37 on the same I2C bus. 28 + * 29 + * Therefore we need to request these 2 additional addresses, and serialize 30 + * access to all such EEPROMs with a single mutex. 31 + * 32 + * We assume it is safe to read up to 32 bytes at once from these EEPROMs. 33 + * We use SMBus access even if I2C is available; these EEPROMs are small 34 + * enough, and reading from them infrequent enough, that we favor simplicity 35 + * over performance. 36 + */ 37 + 38 + #define EE1004_ADDR_SET_PAGE 0x36 39 + #define EE1004_EEPROM_SIZE 512 40 + #define EE1004_PAGE_SIZE 256 41 + #define EE1004_PAGE_SHIFT 8 42 + 43 + /* 44 + * Mutex protects ee1004_set_page and ee1004_dev_count, and must be held 45 + * from page selection to end of read. 
46 + */ 47 + static DEFINE_MUTEX(ee1004_bus_lock); 48 + static struct i2c_client *ee1004_set_page[2]; 49 + static unsigned int ee1004_dev_count; 50 + static int ee1004_current_page; 51 + 52 + static const struct i2c_device_id ee1004_ids[] = { 53 + { "ee1004", 0 }, 54 + { } 55 + }; 56 + MODULE_DEVICE_TABLE(i2c, ee1004_ids); 57 + 58 + /*-------------------------------------------------------------------------*/ 59 + 60 + static ssize_t ee1004_eeprom_read(struct i2c_client *client, char *buf, 61 + unsigned int offset, size_t count) 62 + { 63 + int status; 64 + 65 + if (count > I2C_SMBUS_BLOCK_MAX) 66 + count = I2C_SMBUS_BLOCK_MAX; 67 + /* Can't cross page boundaries */ 68 + if (unlikely(offset + count > EE1004_PAGE_SIZE)) 69 + count = EE1004_PAGE_SIZE - offset; 70 + 71 + status = i2c_smbus_read_i2c_block_data_or_emulated(client, offset, 72 + count, buf); 73 + dev_dbg(&client->dev, "read %zu@%d --> %d\n", count, offset, status); 74 + 75 + return status; 76 + } 77 + 78 + static ssize_t ee1004_read(struct file *filp, struct kobject *kobj, 79 + struct bin_attribute *bin_attr, 80 + char *buf, loff_t off, size_t count) 81 + { 82 + struct device *dev = kobj_to_dev(kobj); 83 + struct i2c_client *client = to_i2c_client(dev); 84 + size_t requested = count; 85 + int page; 86 + 87 + if (unlikely(!count)) 88 + return count; 89 + 90 + page = off >> EE1004_PAGE_SHIFT; 91 + if (unlikely(page > 1)) 92 + return 0; 93 + off &= (1 << EE1004_PAGE_SHIFT) - 1; 94 + 95 + /* 96 + * Read data from chip, protecting against concurrent access to 97 + * other EE1004 SPD EEPROMs on the same adapter. 
98 + */ 99 + mutex_lock(&ee1004_bus_lock); 100 + 101 + while (count) { 102 + int status; 103 + 104 + /* Select page */ 105 + if (page != ee1004_current_page) { 106 + /* Data is ignored */ 107 + status = i2c_smbus_write_byte(ee1004_set_page[page], 108 + 0x00); 109 + if (status < 0) { 110 + dev_err(dev, "Failed to select page %d (%d)\n", 111 + page, status); 112 + mutex_unlock(&ee1004_bus_lock); 113 + return status; 114 + } 115 + dev_dbg(dev, "Selected page %d\n", page); 116 + ee1004_current_page = page; 117 + } 118 + 119 + status = ee1004_eeprom_read(client, buf, off, count); 120 + if (status < 0) { 121 + mutex_unlock(&ee1004_bus_lock); 122 + return status; 123 + } 124 + buf += status; 125 + off += status; 126 + count -= status; 127 + 128 + if (off == EE1004_PAGE_SIZE) { 129 + page++; 130 + off = 0; 131 + } 132 + } 133 + 134 + mutex_unlock(&ee1004_bus_lock); 135 + 136 + return requested; 137 + } 138 + 139 + static const struct bin_attribute eeprom_attr = { 140 + .attr = { 141 + .name = "eeprom", 142 + .mode = 0444, 143 + }, 144 + .size = EE1004_EEPROM_SIZE, 145 + .read = ee1004_read, 146 + }; 147 + 148 + static int ee1004_probe(struct i2c_client *client, 149 + const struct i2c_device_id *id) 150 + { 151 + int err, cnr = 0; 152 + const char *slow = NULL; 153 + 154 + /* Make sure we can operate on this adapter */ 155 + if (!i2c_check_functionality(client->adapter, 156 + I2C_FUNC_SMBUS_READ_BYTE | 157 + I2C_FUNC_SMBUS_READ_I2C_BLOCK)) { 158 + if (i2c_check_functionality(client->adapter, 159 + I2C_FUNC_SMBUS_READ_BYTE | 160 + I2C_FUNC_SMBUS_READ_WORD_DATA)) 161 + slow = "word"; 162 + else if (i2c_check_functionality(client->adapter, 163 + I2C_FUNC_SMBUS_READ_BYTE | 164 + I2C_FUNC_SMBUS_READ_BYTE_DATA)) 165 + slow = "byte"; 166 + else 167 + return -EPFNOSUPPORT; 168 + } 169 + 170 + /* Use 2 dummy devices for page select command */ 171 + mutex_lock(&ee1004_bus_lock); 172 + if (++ee1004_dev_count == 1) { 173 + for (cnr = 0; cnr < 2; cnr++) { 174 + ee1004_set_page[cnr] = 
i2c_new_dummy(client->adapter, 175 + EE1004_ADDR_SET_PAGE + cnr); 176 + if (!ee1004_set_page[cnr]) { 177 + dev_err(&client->dev, 178 + "address 0x%02x unavailable\n", 179 + EE1004_ADDR_SET_PAGE + cnr); 180 + err = -EADDRINUSE; 181 + goto err_clients; 182 + } 183 + } 184 + } else if (i2c_adapter_id(client->adapter) != 185 + i2c_adapter_id(ee1004_set_page[0]->adapter)) { 186 + dev_err(&client->dev, 187 + "Driver only supports devices on a single I2C bus\n"); 188 + err = -EOPNOTSUPP; 189 + goto err_clients; 190 + } 191 + 192 + /* Remember current page to avoid unneeded page select */ 193 + err = i2c_smbus_read_byte(ee1004_set_page[0]); 194 + if (err == -ENXIO) { 195 + /* Nack means page 1 is selected */ 196 + ee1004_current_page = 1; 197 + } else if (err < 0) { 198 + /* Anything else is a real error, bail out */ 199 + goto err_clients; 200 + } else { 201 + /* Ack means page 0 is selected, returned value meaningless */ 202 + ee1004_current_page = 0; 203 + } 204 + dev_dbg(&client->dev, "Currently selected page: %d\n", 205 + ee1004_current_page); 206 + mutex_unlock(&ee1004_bus_lock); 207 + 208 + /* Create the sysfs eeprom file */ 209 + err = sysfs_create_bin_file(&client->dev.kobj, &eeprom_attr); 210 + if (err) 211 + goto err_clients_lock; 212 + 213 + dev_info(&client->dev, 214 + "%u byte EE1004-compliant SPD EEPROM, read-only\n", 215 + EE1004_EEPROM_SIZE); 216 + if (slow) 217 + dev_notice(&client->dev, 218 + "Falling back to %s reads, performance will suffer\n", 219 + slow); 220 + 221 + return 0; 222 + 223 + err_clients_lock: 224 + mutex_lock(&ee1004_bus_lock); 225 + err_clients: 226 + if (--ee1004_dev_count == 0) { 227 + for (cnr--; cnr >= 0; cnr--) { 228 + i2c_unregister_device(ee1004_set_page[cnr]); 229 + ee1004_set_page[cnr] = NULL; 230 + } 231 + } 232 + mutex_unlock(&ee1004_bus_lock); 233 + 234 + return err; 235 + } 236 + 237 + static int ee1004_remove(struct i2c_client *client) 238 + { 239 + int i; 240 + 241 + sysfs_remove_bin_file(&client->dev.kobj, 
&eeprom_attr); 242 + 243 + /* Remove page select clients if this is the last device */ 244 + mutex_lock(&ee1004_bus_lock); 245 + if (--ee1004_dev_count == 0) { 246 + for (i = 0; i < 2; i++) { 247 + i2c_unregister_device(ee1004_set_page[i]); 248 + ee1004_set_page[i] = NULL; 249 + } 250 + } 251 + mutex_unlock(&ee1004_bus_lock); 252 + 253 + return 0; 254 + } 255 + 256 + /*-------------------------------------------------------------------------*/ 257 + 258 + static struct i2c_driver ee1004_driver = { 259 + .driver = { 260 + .name = "ee1004", 261 + }, 262 + .probe = ee1004_probe, 263 + .remove = ee1004_remove, 264 + .id_table = ee1004_ids, 265 + }; 266 + 267 + static int __init ee1004_init(void) 268 + { 269 + return i2c_add_driver(&ee1004_driver); 270 + } 271 + module_init(ee1004_init); 272 + 273 + static void __exit ee1004_exit(void) 274 + { 275 + i2c_del_driver(&ee1004_driver); 276 + } 277 + module_exit(ee1004_exit); 278 + 279 + MODULE_DESCRIPTION("Driver for EE1004-compliant DDR4 SPD EEPROMs"); 280 + MODULE_AUTHOR("Jean Delvare"); 281 + MODULE_LICENSE("GPL");
+5 -14
drivers/misc/eeprom/eeprom_93xx46.c
··· 439 439 return -ENODEV; 440 440 } 441 441 442 - edev = kzalloc(sizeof(*edev), GFP_KERNEL); 442 + edev = devm_kzalloc(&spi->dev, sizeof(*edev), GFP_KERNEL); 443 443 if (!edev) 444 444 return -ENOMEM; 445 445 ··· 449 449 edev->addrlen = 6; 450 450 else { 451 451 dev_err(&spi->dev, "unspecified address type\n"); 452 - err = -EINVAL; 453 - goto fail; 452 + return -EINVAL; 454 453 } 455 454 456 455 mutex_init(&edev->lock); ··· 472 473 edev->nvmem_config.word_size = 1; 473 474 edev->nvmem_config.size = edev->size; 474 475 475 - edev->nvmem = nvmem_register(&edev->nvmem_config); 476 - if (IS_ERR(edev->nvmem)) { 477 - err = PTR_ERR(edev->nvmem); 478 - goto fail; 479 - } 476 + edev->nvmem = devm_nvmem_register(&spi->dev, &edev->nvmem_config); 477 + if (IS_ERR(edev->nvmem)) 478 + return PTR_ERR(edev->nvmem); 480 479 481 480 dev_info(&spi->dev, "%d-bit eeprom %s\n", 482 481 (pd->flags & EE_ADDR8) ? 8 : 16, ··· 487 490 488 491 spi_set_drvdata(spi, edev); 489 492 return 0; 490 - fail: 491 - kfree(edev); 492 - return err; 493 493 } 494 494 495 495 static int eeprom_93xx46_remove(struct spi_device *spi) 496 496 { 497 497 struct eeprom_93xx46_dev *edev = spi_get_drvdata(spi); 498 498 499 - nvmem_unregister(edev->nvmem); 500 - 501 499 if (!(edev->pdata->flags & EE_READONLY)) 502 500 device_remove_file(&spi->dev, &dev_attr_erase); 503 501 504 - kfree(edev); 505 502 return 0; 506 503 } 507 504
-1
drivers/misc/genwqe/card_base.c
··· 24 24 * controlled from here. 25 25 */ 26 26 27 - #include <linux/module.h> 28 27 #include <linux/types.h> 29 28 #include <linux/pci.h> 30 29 #include <linux/err.h>
-1
drivers/misc/genwqe/card_ddcb.c
··· 27 27 */ 28 28 29 29 #include <linux/types.h> 30 - #include <linux/module.h> 31 30 #include <linux/sched.h> 32 31 #include <linux/wait.h> 33 32 #include <linux/pci.h>
+7 -8
drivers/misc/genwqe/card_utils.c
··· 23 23 */ 24 24 25 25 #include <linux/kernel.h> 26 - #include <linux/dma-mapping.h> 27 26 #include <linux/sched.h> 28 27 #include <linux/vmalloc.h> 29 28 #include <linux/page-flags.h> 30 29 #include <linux/scatterlist.h> 31 30 #include <linux/hugetlb.h> 32 31 #include <linux/iommu.h> 33 - #include <linux/delay.h> 34 32 #include <linux/pci.h> 35 33 #include <linux/dma-mapping.h> 36 34 #include <linux/ctype.h> ··· 296 298 int genwqe_alloc_sync_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl, 297 299 void __user *user_addr, size_t user_size, int write) 298 300 { 299 - int rc; 301 + int ret = -ENOMEM; 300 302 struct pci_dev *pci_dev = cd->pci_dev; 301 303 302 304 sgl->fpage_offs = offset_in_page((unsigned long)user_addr); ··· 316 318 if (get_order(sgl->sgl_size) > MAX_ORDER) { 317 319 dev_err(&pci_dev->dev, 318 320 "[%s] err: too much memory requested!\n", __func__); 319 - return -ENOMEM; 321 + return ret; 320 322 } 321 323 322 324 sgl->sgl = __genwqe_alloc_consistent(cd, sgl->sgl_size, ··· 324 326 if (sgl->sgl == NULL) { 325 327 dev_err(&pci_dev->dev, 326 328 "[%s] err: no memory available!\n", __func__); 327 - return -ENOMEM; 329 + return ret; 328 330 } 329 331 330 332 /* Only use buffering on incomplete pages */ ··· 337 339 /* Sync with user memory */ 338 340 if (copy_from_user(sgl->fpage + sgl->fpage_offs, 339 341 user_addr, sgl->fpage_size)) { 340 - rc = -EFAULT; 342 + ret = -EFAULT; 341 343 goto err_out; 342 344 } 343 345 } ··· 350 352 /* Sync with user memory */ 351 353 if (copy_from_user(sgl->lpage, user_addr + user_size - 352 354 sgl->lpage_size, sgl->lpage_size)) { 353 - rc = -EFAULT; 355 + ret = -EFAULT; 354 356 goto err_out2; 355 357 } 356 358 } ··· 372 374 sgl->sgl = NULL; 373 375 sgl->sgl_dma_addr = 0; 374 376 sgl->sgl_size = 0; 375 - return -ENOMEM; 377 + 378 + return ret; 376 379 } 377 380 378 381 int genwqe_setup_sgl(struct genwqe_dev *cd, struct genwqe_sgl *sgl,
+6 -10
drivers/misc/kgdbts.c
··· 985 985 int nmi_sleep = 0; 986 986 int i; 987 987 988 + verbose = 0; 989 + if (strstr(config, "V1")) 990 + verbose = 1; 991 + if (strstr(config, "V2")) 992 + verbose = 2; 993 + 988 994 ptr = strchr(config, 'F'); 989 995 if (ptr) 990 996 fork_test = simple_strtol(ptr + 1, NULL, 10); ··· 1074 1068 return -ENOSPC; 1075 1069 } 1076 1070 strcpy(config, opt); 1077 - 1078 - verbose = 0; 1079 - if (strstr(config, "V1")) 1080 - verbose = 1; 1081 - if (strstr(config, "V2")) 1082 - verbose = 2; 1083 - 1084 1071 return 0; 1085 1072 } 1086 1073 ··· 1084 1085 int err = 0; 1085 1086 1086 1087 if (!strlen(config) || isspace(config[0])) 1087 - goto noconfig; 1088 - err = kgdbts_option_setup(config); 1089 - if (err) 1090 1088 goto noconfig; 1091 1089 1092 1090 final_ack = 0;
+1 -1
drivers/misc/lkdtm/usercopy.c
··· 18 18 * hardened usercopy checks by added "unconst" to all the const copies, 19 19 * and making sure "cache_size" isn't optimized into a const. 20 20 */ 21 - static volatile size_t unconst = 0; 21 + static volatile size_t unconst; 22 22 static volatile size_t cache_size = 1024; 23 23 static struct kmem_cache *whitelist_cache; 24 24
-1
drivers/misc/mei/bus-fixup.c
··· 17 17 #include <linux/kernel.h> 18 18 #include <linux/sched.h> 19 19 #include <linux/module.h> 20 - #include <linux/moduleparam.h> 21 20 #include <linux/device.h> 22 21 #include <linux/slab.h> 23 22 #include <linux/uuid.h>
+2 -2
drivers/misc/mei/main.c
··· 599 599 mei_cl_read_start(cl, mei_cl_mtu(cl), file); 600 600 } 601 601 602 - if (req_events & (POLLOUT | POLLWRNORM)) { 602 + if (req_events & (EPOLLOUT | EPOLLWRNORM)) { 603 603 poll_wait(file, &cl->tx_wait, wait); 604 604 if (cl->tx_cb_queued < dev->tx_queue_limit) 605 - mask |= POLLOUT | POLLWRNORM; 605 + mask |= EPOLLOUT | EPOLLWRNORM; 606 606 } 607 607 608 608 out:
+1 -8
drivers/misc/mic/scif/scif_dma.c
··· 1035 1035 } 1036 1036 dma_async_issue_pending(chan); 1037 1037 } 1038 - if (ret < 0) 1039 - goto err; 1040 1038 offset += loop_len; 1041 1039 temp += loop_len; 1042 1040 temp_phys += loop_len; ··· 1551 1553 int src_cache_off, dst_cache_off; 1552 1554 s64 src_offset = work->src_offset, dst_offset = work->dst_offset; 1553 1555 u8 *temp = NULL; 1554 - bool src_local = true, dst_local = false; 1556 + bool src_local = true; 1555 1557 struct scif_dma_comp_cb *comp_cb; 1556 - dma_addr_t src_dma_addr, dst_dma_addr; 1557 1558 int err; 1558 1559 1559 1560 if (is_dma_copy_aligned(chan->device, 1, 1, 1)) ··· 1566 1569 1567 1570 if (work->loopback) 1568 1571 return scif_rma_list_cpu_copy(work); 1569 - src_dma_addr = __scif_off_to_dma_addr(work->src_window, src_offset); 1570 - dst_dma_addr = __scif_off_to_dma_addr(work->dst_window, dst_offset); 1571 1572 src_local = work->src_window->type == SCIF_WINDOW_SELF; 1572 - dst_local = work->dst_window->type == SCIF_WINDOW_SELF; 1573 1573 1574 - dst_local = dst_local; 1575 1574 /* Allocate dma_completion cb */ 1576 1575 comp_cb = kzalloc(sizeof(*comp_cb), GFP_KERNEL); 1577 1576 if (!comp_cb)
+1 -1
drivers/misc/mic/scif/scif_fence.c
··· 272 272 dma_fail: 273 273 if (!x100) 274 274 dma_pool_free(ep->remote_dev->signal_pool, status, 275 - status->src_dma_addr); 275 + src - offsetof(struct scif_status, val)); 276 276 alloc_fail: 277 277 return err; 278 278 }
+2 -2
drivers/misc/sgi-gru/grukservices.c
··· 634 634 break; 635 635 case CBSS_PAGE_OVERFLOW: 636 636 STAT(mesq_noop_page_overflow); 637 - /* fallthru */ 637 + /* fall through */ 638 638 default: 639 639 BUG(); 640 640 } ··· 792 792 break; 793 793 case CBSS_PAGE_OVERFLOW: 794 794 STAT(mesq_page_overflow); 795 - /* fallthru */ 795 + /* fall through */ 796 796 default: 797 797 BUG(); 798 798 }
+3 -3
drivers/misc/sgi-xp/xpc_channel.c
··· 28 28 { 29 29 enum xp_retval ret; 30 30 31 - DBUG_ON(!spin_is_locked(&ch->lock)); 31 + lockdep_assert_held(&ch->lock); 32 32 33 33 if (!(ch->flags & XPC_C_OPENREQUEST) || 34 34 !(ch->flags & XPC_C_ROPENREQUEST)) { ··· 82 82 struct xpc_partition *part = &xpc_partitions[ch->partid]; 83 83 u32 channel_was_connected = (ch->flags & XPC_C_WASCONNECTED); 84 84 85 - DBUG_ON(!spin_is_locked(&ch->lock)); 85 + lockdep_assert_held(&ch->lock); 86 86 87 87 if (!(ch->flags & XPC_C_DISCONNECTING)) 88 88 return; ··· 755 755 { 756 756 u32 channel_was_connected = (ch->flags & XPC_C_CONNECTED); 757 757 758 - DBUG_ON(!spin_is_locked(&ch->lock)); 758 + lockdep_assert_held(&ch->lock); 759 759 760 760 if (ch->flags & (XPC_C_DISCONNECTING | XPC_C_DISCONNECTED)) 761 761 return;
+1 -2
drivers/misc/sgi-xp/xpc_partition.c
··· 98 98 len = L1_CACHE_ALIGN(len); 99 99 100 100 if (len > buf_len) { 101 - if (buf_base != NULL) 102 - kfree(buf_base); 101 + kfree(buf_base); 103 102 buf_len = L1_CACHE_ALIGN(len); 104 103 buf = xpc_kmalloc_cacheline_aligned(buf_len, GFP_KERNEL, 105 104 &buf_base);
+1 -1
drivers/misc/sgi-xp/xpc_sn2.c
··· 1671 1671 { 1672 1672 struct xpc_channel_sn2 *ch_sn2 = &ch->sn.sn2; 1673 1673 1674 - DBUG_ON(!spin_is_locked(&ch->lock)); 1674 + lockdep_assert_held(&ch->lock); 1675 1675 1676 1676 ch_sn2->remote_msgqueue_pa = 0; 1677 1677
+1 -1
drivers/misc/sgi-xp/xpc_uv.c
··· 1183 1183 { 1184 1184 struct xpc_channel_uv *ch_uv = &ch->sn.uv; 1185 1185 1186 - DBUG_ON(!spin_is_locked(&ch->lock)); 1186 + lockdep_assert_held(&ch->lock); 1187 1187 1188 1188 kfree(ch_uv->cached_notify_gru_mq_desc); 1189 1189 ch_uv->cached_notify_gru_mq_desc = NULL;
+2 -4
drivers/misc/sram.c
··· 323 323 cur_start = block->start + block->size; 324 324 } 325 325 326 - err_chunks: 327 - if (child) 328 - of_node_put(child); 329 - 326 + err_chunks: 327 + of_node_put(child); 330 328 kfree(rblocks); 331 329 332 330 return ret;
+1111 -767
drivers/misc/vmw_balloon.c
··· 25 25 #include <linux/workqueue.h> 26 26 #include <linux/debugfs.h> 27 27 #include <linux/seq_file.h> 28 + #include <linux/rwsem.h> 29 + #include <linux/slab.h> 30 + #include <linux/spinlock.h> 28 31 #include <linux/vmw_vmci_defs.h> 29 32 #include <linux/vmw_vmci_api.h> 30 33 #include <asm/hypervisor.h> ··· 40 37 MODULE_LICENSE("GPL"); 41 38 42 39 /* 43 - * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We don't 44 - * allow wait (__GFP_RECLAIM) for NOSLEEP page allocations. Use 45 - * __GFP_NOWARN, to suppress page allocation failure warnings. 40 + * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We don't allow wait 41 + * (__GFP_RECLAIM) for huge page allocations. Use __GFP_NOWARN, to suppress page 42 + * allocation failure warnings. Disallow access to emergency low-memory pools. 46 43 */ 47 - #define VMW_PAGE_ALLOC_NOSLEEP (__GFP_HIGHMEM|__GFP_NOWARN) 44 + #define VMW_HUGE_PAGE_ALLOC_FLAGS (__GFP_HIGHMEM|__GFP_NOWARN| \ 45 + __GFP_NOMEMALLOC) 48 46 49 47 /* 50 - * Use GFP_HIGHUSER when executing in a separate kernel thread 51 - * context and allocation can sleep. This is less stressful to 52 - * the guest memory system, since it allows the thread to block 53 - * while memory is reclaimed, and won't take pages from emergency 54 - * low-memory pools. 48 + * Use __GFP_HIGHMEM to allow pages from HIGHMEM zone. We allow lightweight 49 + * reclamation (__GFP_NORETRY). Use __GFP_NOWARN, to suppress page allocation 50 + * failure warnings. Disallow access to emergency low-memory pools. 
55 51 */ 56 - #define VMW_PAGE_ALLOC_CANSLEEP (GFP_HIGHUSER) 52 + #define VMW_PAGE_ALLOC_FLAGS (__GFP_HIGHMEM|__GFP_NOWARN| \ 53 + __GFP_NOMEMALLOC|__GFP_NORETRY) 57 54 58 55 /* Maximum number of refused pages we accumulate during inflation cycle */ 59 56 #define VMW_BALLOON_MAX_REFUSED 16 ··· 80 77 | VMW_BALLOON_BATCHED_2M_CMDS \ 81 78 | VMW_BALLOON_SIGNALLED_WAKEUP_CMD) 82 79 83 - #define VMW_BALLOON_2M_SHIFT (9) 84 - #define VMW_BALLOON_NUM_PAGE_SIZES (2) 80 + #define VMW_BALLOON_2M_ORDER (PMD_SHIFT - PAGE_SHIFT) 85 81 86 - /* 87 - * Backdoor commands availability: 82 + enum vmballoon_page_size_type { 83 + VMW_BALLOON_4K_PAGE, 84 + VMW_BALLOON_2M_PAGE, 85 + VMW_BALLOON_LAST_SIZE = VMW_BALLOON_2M_PAGE 86 + }; 87 + 88 + #define VMW_BALLOON_NUM_PAGE_SIZES (VMW_BALLOON_LAST_SIZE + 1) 89 + 90 + static const char * const vmballoon_page_size_names[] = { 91 + [VMW_BALLOON_4K_PAGE] = "4k", 92 + [VMW_BALLOON_2M_PAGE] = "2M" 93 + }; 94 + 95 + enum vmballoon_op { 96 + VMW_BALLOON_INFLATE, 97 + VMW_BALLOON_DEFLATE 98 + }; 99 + 100 + enum vmballoon_op_stat_type { 101 + VMW_BALLOON_OP_STAT, 102 + VMW_BALLOON_OP_FAIL_STAT 103 + }; 104 + 105 + #define VMW_BALLOON_OP_STAT_TYPES (VMW_BALLOON_OP_FAIL_STAT + 1) 106 + 107 + /** 108 + * enum vmballoon_cmd_type - backdoor commands. 88 109 * 89 - * START, GET_TARGET and GUEST_ID are always available, 110 + * Availability of the commands is as followed: 90 111 * 91 - * VMW_BALLOON_BASIC_CMDS: 92 - * LOCK and UNLOCK commands, 93 - * VMW_BALLOON_BATCHED_CMDS: 94 - * BATCHED_LOCK and BATCHED_UNLOCK commands. 95 - * VMW BALLOON_BATCHED_2M_CMDS: 96 - * BATCHED_2M_LOCK and BATCHED_2M_UNLOCK commands, 97 - * VMW VMW_BALLOON_SIGNALLED_WAKEUP_CMD: 98 - * VMW_BALLOON_CMD_VMCI_DOORBELL_SET command. 112 + * %VMW_BALLOON_CMD_START, %VMW_BALLOON_CMD_GET_TARGET and 113 + * %VMW_BALLOON_CMD_GUEST_ID are always available. 
114 + * 115 + * If the host reports %VMW_BALLOON_BASIC_CMDS are supported then 116 + * %VMW_BALLOON_CMD_LOCK and %VMW_BALLOON_CMD_UNLOCK commands are available. 117 + * 118 + * If the host reports %VMW_BALLOON_BATCHED_CMDS are supported then 119 + * %VMW_BALLOON_CMD_BATCHED_LOCK and VMW_BALLOON_CMD_BATCHED_UNLOCK commands 120 + * are available. 121 + * 122 + * If the host reports %VMW_BALLOON_BATCHED_2M_CMDS are supported then 123 + * %VMW_BALLOON_CMD_BATCHED_2M_LOCK and %VMW_BALLOON_CMD_BATCHED_2M_UNLOCK 124 + * are supported. 125 + * 126 + * If the host reports VMW_BALLOON_SIGNALLED_WAKEUP_CMD is supported then 127 + * VMW_BALLOON_CMD_VMCI_DOORBELL_SET command is supported. 128 + * 129 + * @VMW_BALLOON_CMD_START: Communicating supported version with the hypervisor. 130 + * @VMW_BALLOON_CMD_GET_TARGET: Gets the balloon target size. 131 + * @VMW_BALLOON_CMD_LOCK: Informs the hypervisor about a ballooned page. 132 + * @VMW_BALLOON_CMD_UNLOCK: Informs the hypervisor about a page that is about 133 + * to be deflated from the balloon. 134 + * @VMW_BALLOON_CMD_GUEST_ID: Informs the hypervisor about the type of OS that 135 + * runs in the VM. 136 + * @VMW_BALLOON_CMD_BATCHED_LOCK: Inform the hypervisor about a batch of 137 + * ballooned pages (up to 512). 138 + * @VMW_BALLOON_CMD_BATCHED_UNLOCK: Inform the hypervisor about a batch of 139 + * pages that are about to be deflated from the 140 + * balloon (up to 512). 141 + * @VMW_BALLOON_CMD_BATCHED_2M_LOCK: Similar to @VMW_BALLOON_CMD_BATCHED_LOCK 142 + * for 2MB pages. 143 + * @VMW_BALLOON_CMD_BATCHED_2M_UNLOCK: Similar to 144 + * @VMW_BALLOON_CMD_BATCHED_UNLOCK for 2MB 145 + * pages. 146 + * @VMW_BALLOON_CMD_VMCI_DOORBELL_SET: A command to set doorbell notification 147 + * that would be invoked when the balloon 148 + * size changes. 149 + * @VMW_BALLOON_CMD_LAST: Value of the last command. 
99 150 */ 100 - #define VMW_BALLOON_CMD_START 0 101 - #define VMW_BALLOON_CMD_GET_TARGET 1 102 - #define VMW_BALLOON_CMD_LOCK 2 103 - #define VMW_BALLOON_CMD_UNLOCK 3 104 - #define VMW_BALLOON_CMD_GUEST_ID 4 105 - #define VMW_BALLOON_CMD_BATCHED_LOCK 6 106 - #define VMW_BALLOON_CMD_BATCHED_UNLOCK 7 107 - #define VMW_BALLOON_CMD_BATCHED_2M_LOCK 8 108 - #define VMW_BALLOON_CMD_BATCHED_2M_UNLOCK 9 109 - #define VMW_BALLOON_CMD_VMCI_DOORBELL_SET 10 151 + enum vmballoon_cmd_type { 152 + VMW_BALLOON_CMD_START, 153 + VMW_BALLOON_CMD_GET_TARGET, 154 + VMW_BALLOON_CMD_LOCK, 155 + VMW_BALLOON_CMD_UNLOCK, 156 + VMW_BALLOON_CMD_GUEST_ID, 157 + /* No command 5 */ 158 + VMW_BALLOON_CMD_BATCHED_LOCK = 6, 159 + VMW_BALLOON_CMD_BATCHED_UNLOCK, 160 + VMW_BALLOON_CMD_BATCHED_2M_LOCK, 161 + VMW_BALLOON_CMD_BATCHED_2M_UNLOCK, 162 + VMW_BALLOON_CMD_VMCI_DOORBELL_SET, 163 + VMW_BALLOON_CMD_LAST = VMW_BALLOON_CMD_VMCI_DOORBELL_SET, 164 + }; 110 165 166 + #define VMW_BALLOON_CMD_NUM (VMW_BALLOON_CMD_LAST + 1) 111 167 112 - /* error codes */ 113 - #define VMW_BALLOON_SUCCESS 0 114 - #define VMW_BALLOON_FAILURE -1 115 - #define VMW_BALLOON_ERROR_CMD_INVALID 1 116 - #define VMW_BALLOON_ERROR_PPN_INVALID 2 117 - #define VMW_BALLOON_ERROR_PPN_LOCKED 3 118 - #define VMW_BALLOON_ERROR_PPN_UNLOCKED 4 119 - #define VMW_BALLOON_ERROR_PPN_PINNED 5 120 - #define VMW_BALLOON_ERROR_PPN_NOTNEEDED 6 121 - #define VMW_BALLOON_ERROR_RESET 7 122 - #define VMW_BALLOON_ERROR_BUSY 8 168 + enum vmballoon_error_codes { 169 + VMW_BALLOON_SUCCESS, 170 + VMW_BALLOON_ERROR_CMD_INVALID, 171 + VMW_BALLOON_ERROR_PPN_INVALID, 172 + VMW_BALLOON_ERROR_PPN_LOCKED, 173 + VMW_BALLOON_ERROR_PPN_UNLOCKED, 174 + VMW_BALLOON_ERROR_PPN_PINNED, 175 + VMW_BALLOON_ERROR_PPN_NOTNEEDED, 176 + VMW_BALLOON_ERROR_RESET, 177 + VMW_BALLOON_ERROR_BUSY 178 + }; 123 179 124 180 #define VMW_BALLOON_SUCCESS_WITH_CAPABILITIES (0x03000000) 125 181 126 - /* Batch page description */ 182 + #define VMW_BALLOON_CMD_WITH_TARGET_MASK \ 183 + ((1UL << 
VMW_BALLOON_CMD_GET_TARGET) | \ 184 + (1UL << VMW_BALLOON_CMD_LOCK) | \ 185 + (1UL << VMW_BALLOON_CMD_UNLOCK) | \ 186 + (1UL << VMW_BALLOON_CMD_BATCHED_LOCK) | \ 187 + (1UL << VMW_BALLOON_CMD_BATCHED_UNLOCK) | \ 188 + (1UL << VMW_BALLOON_CMD_BATCHED_2M_LOCK) | \ 189 + (1UL << VMW_BALLOON_CMD_BATCHED_2M_UNLOCK)) 127 190 128 - /* 129 - * Layout of a page in the batch page: 130 - * 131 - * +-------------+----------+--------+ 132 - * | | | | 133 - * | Page number | Reserved | Status | 134 - * | | | | 135 - * +-------------+----------+--------+ 136 - * 64 PAGE_SHIFT 6 0 137 - * 138 - * The reserved field should be set to 0. 139 - */ 140 - #define VMW_BALLOON_BATCH_MAX_PAGES (PAGE_SIZE / sizeof(u64)) 141 - #define VMW_BALLOON_BATCH_STATUS_MASK ((1UL << 5) - 1) 142 - #define VMW_BALLOON_BATCH_PAGE_MASK (~((1UL << PAGE_SHIFT) - 1)) 143 - 144 - struct vmballoon_batch_page { 145 - u64 pages[VMW_BALLOON_BATCH_MAX_PAGES]; 191 + static const char * const vmballoon_cmd_names[] = { 192 + [VMW_BALLOON_CMD_START] = "start", 193 + [VMW_BALLOON_CMD_GET_TARGET] = "target", 194 + [VMW_BALLOON_CMD_LOCK] = "lock", 195 + [VMW_BALLOON_CMD_UNLOCK] = "unlock", 196 + [VMW_BALLOON_CMD_GUEST_ID] = "guestType", 197 + [VMW_BALLOON_CMD_BATCHED_LOCK] = "batchLock", 198 + [VMW_BALLOON_CMD_BATCHED_UNLOCK] = "batchUnlock", 199 + [VMW_BALLOON_CMD_BATCHED_2M_LOCK] = "2m-lock", 200 + [VMW_BALLOON_CMD_BATCHED_2M_UNLOCK] = "2m-unlock", 201 + [VMW_BALLOON_CMD_VMCI_DOORBELL_SET] = "doorbellSet" 146 202 }; 147 203 148 - static u64 vmballoon_batch_get_pa(struct vmballoon_batch_page *batch, int idx) 149 - { 150 - return batch->pages[idx] & VMW_BALLOON_BATCH_PAGE_MASK; 151 - } 152 - 153 - static int vmballoon_batch_get_status(struct vmballoon_batch_page *batch, 154 - int idx) 155 - { 156 - return (int)(batch->pages[idx] & VMW_BALLOON_BATCH_STATUS_MASK); 157 - } 158 - 159 - static void vmballoon_batch_set_pa(struct vmballoon_batch_page *batch, int idx, 160 - u64 pa) 161 - { 162 - batch->pages[idx] = pa; 163 - } 
164 - 165 - 166 - #define VMWARE_BALLOON_CMD(cmd, arg1, arg2, result) \ 167 - ({ \ 168 - unsigned long __status, __dummy1, __dummy2, __dummy3; \ 169 - __asm__ __volatile__ ("inl %%dx" : \ 170 - "=a"(__status), \ 171 - "=c"(__dummy1), \ 172 - "=d"(__dummy2), \ 173 - "=b"(result), \ 174 - "=S" (__dummy3) : \ 175 - "0"(VMW_BALLOON_HV_MAGIC), \ 176 - "1"(VMW_BALLOON_CMD_##cmd), \ 177 - "2"(VMW_BALLOON_HV_PORT), \ 178 - "3"(arg1), \ 179 - "4" (arg2) : \ 180 - "memory"); \ 181 - if (VMW_BALLOON_CMD_##cmd == VMW_BALLOON_CMD_START) \ 182 - result = __dummy1; \ 183 - result &= -1UL; \ 184 - __status & -1UL; \ 185 - }) 186 - 187 - #ifdef CONFIG_DEBUG_FS 188 - struct vmballoon_stats { 189 - unsigned int timer; 190 - unsigned int doorbell; 191 - 192 - /* allocation statistics */ 193 - unsigned int alloc[VMW_BALLOON_NUM_PAGE_SIZES]; 194 - unsigned int alloc_fail[VMW_BALLOON_NUM_PAGE_SIZES]; 195 - unsigned int sleep_alloc; 196 - unsigned int sleep_alloc_fail; 197 - unsigned int refused_alloc[VMW_BALLOON_NUM_PAGE_SIZES]; 198 - unsigned int refused_free[VMW_BALLOON_NUM_PAGE_SIZES]; 199 - unsigned int free[VMW_BALLOON_NUM_PAGE_SIZES]; 200 - 201 - /* monitor operations */ 202 - unsigned int lock[VMW_BALLOON_NUM_PAGE_SIZES]; 203 - unsigned int lock_fail[VMW_BALLOON_NUM_PAGE_SIZES]; 204 - unsigned int unlock[VMW_BALLOON_NUM_PAGE_SIZES]; 205 - unsigned int unlock_fail[VMW_BALLOON_NUM_PAGE_SIZES]; 206 - unsigned int target; 207 - unsigned int target_fail; 208 - unsigned int start; 209 - unsigned int start_fail; 210 - unsigned int guest_type; 211 - unsigned int guest_type_fail; 212 - unsigned int doorbell_set; 213 - unsigned int doorbell_unset; 204 + enum vmballoon_stat_page { 205 + VMW_BALLOON_PAGE_STAT_ALLOC, 206 + VMW_BALLOON_PAGE_STAT_ALLOC_FAIL, 207 + VMW_BALLOON_PAGE_STAT_REFUSED_ALLOC, 208 + VMW_BALLOON_PAGE_STAT_REFUSED_FREE, 209 + VMW_BALLOON_PAGE_STAT_FREE, 210 + VMW_BALLOON_PAGE_STAT_LAST = VMW_BALLOON_PAGE_STAT_FREE 214 211 }; 215 212 216 - #define STATS_INC(stat) (stat)++ 
217 - #else 218 - #define STATS_INC(stat) 219 - #endif 213 + #define VMW_BALLOON_PAGE_STAT_NUM (VMW_BALLOON_PAGE_STAT_LAST + 1) 220 214 221 - struct vmballoon; 215 + enum vmballoon_stat_general { 216 + VMW_BALLOON_STAT_TIMER, 217 + VMW_BALLOON_STAT_DOORBELL, 218 + VMW_BALLOON_STAT_RESET, 219 + VMW_BALLOON_STAT_LAST = VMW_BALLOON_STAT_RESET 220 + }; 222 221 223 - struct vmballoon_ops { 224 - void (*add_page)(struct vmballoon *b, int idx, struct page *p); 225 - int (*lock)(struct vmballoon *b, unsigned int num_pages, 226 - bool is_2m_pages, unsigned int *target); 227 - int (*unlock)(struct vmballoon *b, unsigned int num_pages, 228 - bool is_2m_pages, unsigned int *target); 222 + #define VMW_BALLOON_STAT_NUM (VMW_BALLOON_STAT_LAST + 1) 223 + 224 + 225 + static DEFINE_STATIC_KEY_TRUE(vmw_balloon_batching); 226 + static DEFINE_STATIC_KEY_FALSE(balloon_stat_enabled); 227 + 228 + struct vmballoon_ctl { 229 + struct list_head pages; 230 + struct list_head refused_pages; 231 + unsigned int n_refused_pages; 232 + unsigned int n_pages; 233 + enum vmballoon_page_size_type page_size; 234 + enum vmballoon_op op; 229 235 }; 230 236 231 237 struct vmballoon_page_size { 232 238 /* list of reserved physical pages */ 233 239 struct list_head pages; 234 - 235 - /* transient list of non-balloonable pages */ 236 - struct list_head refused_pages; 237 - unsigned int n_refused_pages; 238 240 }; 241 + 242 + /** 243 + * struct vmballoon_batch_entry - a batch entry for lock or unlock. 244 + * 245 + * @status: the status of the operation, which is written by the hypervisor. 246 + * @reserved: reserved for future use. Must be set to zero. 247 + * @pfn: the physical frame number of the page to be locked or unlocked. 
248 + */ 249 + struct vmballoon_batch_entry { 250 + u64 status : 5; 251 + u64 reserved : PAGE_SHIFT - 5; 252 + u64 pfn : 52; 253 + } __packed; 239 254 240 255 struct vmballoon { 241 256 struct vmballoon_page_size page_sizes[VMW_BALLOON_NUM_PAGE_SIZES]; 242 257 243 - /* supported page sizes. 1 == 4k pages only, 2 == 4k and 2m pages */ 244 - unsigned supported_page_sizes; 258 + /** 259 + * @max_page_size: maximum supported page size for ballooning. 260 + * 261 + * Protected by @conf_sem 262 + */ 263 + enum vmballoon_page_size_type max_page_size; 245 264 246 - /* balloon size in pages */ 247 - unsigned int size; 248 - unsigned int target; 265 + /** 266 + * @size: balloon actual size in basic page size (frames). 267 + * 268 + * While we currently do not support size which is bigger than 32-bit, 269 + * in preparation for future support, use 64-bits. 270 + */ 271 + atomic64_t size; 249 272 250 - /* reset flag */ 273 + /** 274 + * @target: balloon target size in basic page size (frames). 275 + * 276 + * We do not protect the target under the assumption that setting the 277 + * value is always done through a single write. If this assumption ever 278 + * breaks, we would have to use X_ONCE for accesses, and suffer the less 279 + * optimized code. Although we may read stale target value if multiple 280 + * accesses happen at once, the performance impact should be minor. 281 + */ 282 + unsigned long target; 283 + 284 + /** 285 + * @reset_required: reset flag 286 + * 287 + * Setting this flag may introduce races, but the code is expected to 288 + * handle them gracefully. In the worst case, another operation will 289 + * fail as reset did not take place. Clearing the flag is done while 290 + * holding @conf_sem for write. 291 + */ 251 292 bool reset_required; 252 293 294 + /** 295 + * @capabilities: hypervisor balloon capabilities. 296 + * 297 + * Protected by @conf_sem. 
298 + */ 253 299 unsigned long capabilities; 254 300 255 - struct vmballoon_batch_page *batch_page; 301 + /** 302 + * @batch_page: pointer to communication batch page. 303 + * 304 + * When batching is used, batch_page points to a page, which holds up to 305 + * %VMW_BALLOON_BATCH_MAX_PAGES entries for locking or unlocking. 306 + */ 307 + struct vmballoon_batch_entry *batch_page; 308 + 309 + /** 310 + * @batch_max_pages: maximum pages that can be locked/unlocked. 311 + * 312 + * Indicates the number of pages that the hypervisor can lock or unlock 313 + * at once, according to whether batching is enabled. If batching is 314 + * disabled, only a single page can be locked/unlock on each operation. 315 + * 316 + * Protected by @conf_sem. 317 + */ 256 318 unsigned int batch_max_pages; 319 + 320 + /** 321 + * @page: page to be locked/unlocked by the hypervisor 322 + * 323 + * @page is only used when batching is disabled and a single page is 324 + * reclaimed on each iteration. 325 + * 326 + * Protected by @comm_lock. 327 + */ 257 328 struct page *page; 258 329 259 - const struct vmballoon_ops *ops; 330 + /* statistics */ 331 + struct vmballoon_stats *stats; 260 332 261 333 #ifdef CONFIG_DEBUG_FS 262 - /* statistics */ 263 - struct vmballoon_stats stats; 264 - 265 334 /* debugfs file exporting statistics */ 266 335 struct dentry *dbg_entry; 267 336 #endif 268 337 269 - struct sysinfo sysinfo; 270 - 271 338 struct delayed_work dwork; 272 339 340 + /** 341 + * @vmci_doorbell. 342 + * 343 + * Protected by @conf_sem. 344 + */ 273 345 struct vmci_handle vmci_doorbell; 346 + 347 + /** 348 + * @conf_sem: semaphore to protect the configuration and the statistics. 349 + */ 350 + struct rw_semaphore conf_sem; 351 + 352 + /** 353 + * @comm_lock: lock to protect the communication with the host. 354 + * 355 + * Lock ordering: @conf_sem -> @comm_lock . 
356 + */ 357 + spinlock_t comm_lock; 274 358 }; 275 359 276 360 static struct vmballoon balloon; 361 + 362 + struct vmballoon_stats { 363 + /* timer / doorbell operations */ 364 + atomic64_t general_stat[VMW_BALLOON_STAT_NUM]; 365 + 366 + /* allocation statistics for huge and small pages */ 367 + atomic64_t 368 + page_stat[VMW_BALLOON_PAGE_STAT_NUM][VMW_BALLOON_NUM_PAGE_SIZES]; 369 + 370 + /* Monitor operations: total operations, and failures */ 371 + atomic64_t ops[VMW_BALLOON_CMD_NUM][VMW_BALLOON_OP_STAT_TYPES]; 372 + }; 373 + 374 + static inline bool is_vmballoon_stats_on(void) 375 + { 376 + return IS_ENABLED(CONFIG_DEBUG_FS) && 377 + static_branch_unlikely(&balloon_stat_enabled); 378 + } 379 + 380 + static inline void vmballoon_stats_op_inc(struct vmballoon *b, unsigned int op, 381 + enum vmballoon_op_stat_type type) 382 + { 383 + if (is_vmballoon_stats_on()) 384 + atomic64_inc(&b->stats->ops[op][type]); 385 + } 386 + 387 + static inline void vmballoon_stats_gen_inc(struct vmballoon *b, 388 + enum vmballoon_stat_general stat) 389 + { 390 + if (is_vmballoon_stats_on()) 391 + atomic64_inc(&b->stats->general_stat[stat]); 392 + } 393 + 394 + static inline void vmballoon_stats_gen_add(struct vmballoon *b, 395 + enum vmballoon_stat_general stat, 396 + unsigned int val) 397 + { 398 + if (is_vmballoon_stats_on()) 399 + atomic64_add(val, &b->stats->general_stat[stat]); 400 + } 401 + 402 + static inline void vmballoon_stats_page_inc(struct vmballoon *b, 403 + enum vmballoon_stat_page stat, 404 + enum vmballoon_page_size_type size) 405 + { 406 + if (is_vmballoon_stats_on()) 407 + atomic64_inc(&b->stats->page_stat[stat][size]); 408 + } 409 + 410 + static inline void vmballoon_stats_page_add(struct vmballoon *b, 411 + enum vmballoon_stat_page stat, 412 + enum vmballoon_page_size_type size, 413 + unsigned int val) 414 + { 415 + if (is_vmballoon_stats_on()) 416 + atomic64_add(val, &b->stats->page_stat[stat][size]); 417 + } 418 + 419 + static inline unsigned long 420 + 
__vmballoon_cmd(struct vmballoon *b, unsigned long cmd, unsigned long arg1, 421 + unsigned long arg2, unsigned long *result) 422 + { 423 + unsigned long status, dummy1, dummy2, dummy3, local_result; 424 + 425 + vmballoon_stats_op_inc(b, cmd, VMW_BALLOON_OP_STAT); 426 + 427 + asm volatile ("inl %%dx" : 428 + "=a"(status), 429 + "=c"(dummy1), 430 + "=d"(dummy2), 431 + "=b"(local_result), 432 + "=S"(dummy3) : 433 + "0"(VMW_BALLOON_HV_MAGIC), 434 + "1"(cmd), 435 + "2"(VMW_BALLOON_HV_PORT), 436 + "3"(arg1), 437 + "4"(arg2) : 438 + "memory"); 439 + 440 + /* update the result if needed */ 441 + if (result) 442 + *result = (cmd == VMW_BALLOON_CMD_START) ? dummy1 : 443 + local_result; 444 + 445 + /* update target when applicable */ 446 + if (status == VMW_BALLOON_SUCCESS && 447 + ((1ul << cmd) & VMW_BALLOON_CMD_WITH_TARGET_MASK)) 448 + WRITE_ONCE(b->target, local_result); 449 + 450 + if (status != VMW_BALLOON_SUCCESS && 451 + status != VMW_BALLOON_SUCCESS_WITH_CAPABILITIES) { 452 + vmballoon_stats_op_inc(b, cmd, VMW_BALLOON_OP_FAIL_STAT); 453 + pr_debug("%s: %s [0x%lx,0x%lx) failed, returned %ld\n", 454 + __func__, vmballoon_cmd_names[cmd], arg1, arg2, 455 + status); 456 + } 457 + 458 + /* mark reset required accordingly */ 459 + if (status == VMW_BALLOON_ERROR_RESET) 460 + b->reset_required = true; 461 + 462 + return status; 463 + } 464 + 465 + static __always_inline unsigned long 466 + vmballoon_cmd(struct vmballoon *b, unsigned long cmd, unsigned long arg1, 467 + unsigned long arg2) 468 + { 469 + unsigned long dummy; 470 + 471 + return __vmballoon_cmd(b, cmd, arg1, arg2, &dummy); 472 + } 277 473 278 474 /* 279 475 * Send "start" command to the host, communicating supported version 280 476 * of the protocol. 
281 477 */ 282 - static bool vmballoon_send_start(struct vmballoon *b, unsigned long req_caps) 478 + static int vmballoon_send_start(struct vmballoon *b, unsigned long req_caps) 283 479 { 284 - unsigned long status, capabilities, dummy = 0; 285 - bool success; 480 + unsigned long status, capabilities; 286 481 287 - STATS_INC(b->stats.start); 288 - 289 - status = VMWARE_BALLOON_CMD(START, req_caps, dummy, capabilities); 482 + status = __vmballoon_cmd(b, VMW_BALLOON_CMD_START, req_caps, 0, 483 + &capabilities); 290 484 291 485 switch (status) { 292 486 case VMW_BALLOON_SUCCESS_WITH_CAPABILITIES: 293 487 b->capabilities = capabilities; 294 - success = true; 295 488 break; 296 489 case VMW_BALLOON_SUCCESS: 297 490 b->capabilities = VMW_BALLOON_BASIC_CMDS; 298 - success = true; 299 491 break; 300 492 default: 301 - success = false; 493 + return -EIO; 302 494 } 303 495 304 496 /* ··· 501 303 * reason disabled, do not use 2MB pages, since otherwise the legacy 502 304 * mechanism is used with 2MB pages, causing a failure. 503 305 */ 306 + b->max_page_size = VMW_BALLOON_4K_PAGE; 504 307 if ((b->capabilities & VMW_BALLOON_BATCHED_2M_CMDS) && 505 308 (b->capabilities & VMW_BALLOON_BATCHED_CMDS)) 506 - b->supported_page_sizes = 2; 507 - else 508 - b->supported_page_sizes = 1; 309 + b->max_page_size = VMW_BALLOON_2M_PAGE; 509 310 510 - if (!success) { 511 - pr_debug("%s - failed, hv returns %ld\n", __func__, status); 512 - STATS_INC(b->stats.start_fail); 513 - } 514 - return success; 311 + 312 + return 0; 515 313 } 516 314 517 - static bool vmballoon_check_status(struct vmballoon *b, unsigned long status) 518 - { 519 - switch (status) { 520 - case VMW_BALLOON_SUCCESS: 521 - return true; 522 - 523 - case VMW_BALLOON_ERROR_RESET: 524 - b->reset_required = true; 525 - /* fall through */ 526 - 527 - default: 528 - return false; 529 - } 530 - } 531 - 532 - /* 315 + /** 316 + * vmballoon_send_guest_id - communicate guest type to the host. 317 + * 318 + * @b: pointer to the balloon. 
319 + * 533 320 * Communicate guest type to the host so that it can adjust ballooning 534 321 * algorithm to the one most appropriate for the guest. This command 535 322 * is normally issued after sending "start" command and is part of 536 323 * standard reset sequence. 324 + * 325 + * Return: zero on success or appropriate error code. 537 326 */ 538 - static bool vmballoon_send_guest_id(struct vmballoon *b) 539 - { 540 - unsigned long status, dummy = 0; 541 - 542 - status = VMWARE_BALLOON_CMD(GUEST_ID, VMW_BALLOON_GUEST_ID, dummy, 543 - dummy); 544 - 545 - STATS_INC(b->stats.guest_type); 546 - 547 - if (vmballoon_check_status(b, status)) 548 - return true; 549 - 550 - pr_debug("%s - failed, hv returns %ld\n", __func__, status); 551 - STATS_INC(b->stats.guest_type_fail); 552 - return false; 553 - } 554 - 555 - static u16 vmballoon_page_size(bool is_2m_page) 556 - { 557 - if (is_2m_page) 558 - return 1 << VMW_BALLOON_2M_SHIFT; 559 - 560 - return 1; 561 - } 562 - 563 - /* 564 - * Retrieve desired balloon size from the host. 565 - */ 566 - static bool vmballoon_send_get_target(struct vmballoon *b, u32 *new_target) 327 + static int vmballoon_send_guest_id(struct vmballoon *b) 567 328 { 568 329 unsigned long status; 569 - unsigned long target; 570 - unsigned long limit; 571 - unsigned long dummy = 0; 572 - u32 limit32; 573 330 574 - /* 575 - * si_meminfo() is cheap. Moreover, we want to provide dynamic 576 - * max balloon size later. So let us call si_meminfo() every 577 - * iteration. 
578 - */ 579 - si_meminfo(&b->sysinfo); 580 - limit = b->sysinfo.totalram; 331 + status = vmballoon_cmd(b, VMW_BALLOON_CMD_GUEST_ID, 332 + VMW_BALLOON_GUEST_ID, 0); 581 333 582 - /* Ensure limit fits in 32-bits */ 583 - limit32 = (u32)limit; 584 - if (limit != limit32) 585 - return false; 586 - 587 - /* update stats */ 588 - STATS_INC(b->stats.target); 589 - 590 - status = VMWARE_BALLOON_CMD(GET_TARGET, limit, dummy, target); 591 - if (vmballoon_check_status(b, status)) { 592 - *new_target = target; 593 - return true; 594 - } 595 - 596 - pr_debug("%s - failed, hv returns %ld\n", __func__, status); 597 - STATS_INC(b->stats.target_fail); 598 - return false; 334 + return status == VMW_BALLOON_SUCCESS ? 0 : -EIO; 599 335 } 600 336 601 - /* 602 - * Notify the host about allocated page so that host can use it without 603 - * fear that guest will need it. Host may reject some pages, we need to 604 - * check the return value and maybe submit a different page. 337 + /** 338 + * vmballoon_page_order() - return the order of the page 339 + * @page_size: the size of the page. 340 + * 341 + * Return: the allocation order. 605 342 */ 606 - static int vmballoon_send_lock_page(struct vmballoon *b, unsigned long pfn, 607 - unsigned int *hv_status, unsigned int *target) 343 + static inline 344 + unsigned int vmballoon_page_order(enum vmballoon_page_size_type page_size) 608 345 { 609 - unsigned long status, dummy = 0; 610 - u32 pfn32; 346 + return page_size == VMW_BALLOON_2M_PAGE ? VMW_BALLOON_2M_ORDER : 0; 347 + } 611 348 612 - pfn32 = (u32)pfn; 613 - if (pfn32 != pfn) 349 + /** 350 + * vmballoon_page_in_frames() - returns the number of frames in a page. 351 + * @page_size: the size of the page. 352 + * 353 + * Return: the number of 4k frames. 
354 + */ 355 + static inline unsigned int 356 + vmballoon_page_in_frames(enum vmballoon_page_size_type page_size) 357 + { 358 + return 1 << vmballoon_page_order(page_size); 359 + } 360 + 361 + /** 362 + * vmballoon_send_get_target() - Retrieve desired balloon size from the host. 363 + * 364 + * @b: pointer to the balloon. 365 + * 366 + * Return: zero on success, EINVAL if limit does not fit in 32-bit, as required 367 + * by the host-guest protocol and EIO if an error occurred in communicating with 368 + * the host. 369 + */ 370 + static int vmballoon_send_get_target(struct vmballoon *b) 371 + { 372 + unsigned long status; 373 + unsigned long limit; 374 + 375 + limit = totalram_pages; 376 + 377 + /* Ensure limit fits in 32-bits */ 378 + if (limit != (u32)limit) 614 379 return -EINVAL; 615 380 616 - STATS_INC(b->stats.lock[false]); 381 + status = vmballoon_cmd(b, VMW_BALLOON_CMD_GET_TARGET, limit, 0); 617 382 618 - *hv_status = status = VMWARE_BALLOON_CMD(LOCK, pfn, dummy, *target); 619 - if (vmballoon_check_status(b, status)) 383 + return status == VMW_BALLOON_SUCCESS ? 0 : -EIO; 384 + } 385 + 386 + /** 387 + * vmballoon_alloc_page_list - allocates a list of pages. 388 + * 389 + * @b: pointer to the balloon. 390 + * @ctl: pointer for the %struct vmballoon_ctl, which defines the operation. 391 + * @req_n_pages: the number of requested pages. 392 + * 393 + * Tries to allocate @req_n_pages. Add them to the list of balloon pages in 394 + * @ctl.pages and updates @ctl.n_pages to reflect the number of pages. 395 + * 396 + * Return: zero on success or error code otherwise. 
397 + */ 398 + static int vmballoon_alloc_page_list(struct vmballoon *b, 399 + struct vmballoon_ctl *ctl, 400 + unsigned int req_n_pages) 401 + { 402 + struct page *page; 403 + unsigned int i; 404 + 405 + for (i = 0; i < req_n_pages; i++) { 406 + if (ctl->page_size == VMW_BALLOON_2M_PAGE) 407 + page = alloc_pages(VMW_HUGE_PAGE_ALLOC_FLAGS, 408 + VMW_BALLOON_2M_ORDER); 409 + else 410 + page = alloc_page(VMW_PAGE_ALLOC_FLAGS); 411 + 412 + /* Update statistics */ 413 + vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_ALLOC, 414 + ctl->page_size); 415 + 416 + if (page) { 417 + /* Success. Add the page to the list and continue. */ 418 + list_add(&page->lru, &ctl->pages); 419 + continue; 420 + } 421 + 422 + /* Allocation failed. Update statistics and stop. */ 423 + vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_ALLOC_FAIL, 424 + ctl->page_size); 425 + break; 426 + } 427 + 428 + ctl->n_pages = i; 429 + 430 + return req_n_pages == ctl->n_pages ? 0 : -ENOMEM; 431 + } 432 + 433 + /** 434 + * vmballoon_handle_one_result - Handle lock/unlock result for a single page. 435 + * 436 + * @b: pointer for %struct vmballoon. 437 + * @page: pointer for the page whose result should be handled. 438 + * @page_size: size of the page. 439 + * @status: status of the operation as provided by the hypervisor. 440 + */ 441 + static int vmballoon_handle_one_result(struct vmballoon *b, struct page *page, 442 + enum vmballoon_page_size_type page_size, 443 + unsigned long status) 444 + { 445 + /* On success do nothing. The page is already on the balloon list. 
*/ 446 + if (likely(status == VMW_BALLOON_SUCCESS)) 620 447 return 0; 621 448 622 - pr_debug("%s - ppn %lx, hv returns %ld\n", __func__, pfn, status); 623 - STATS_INC(b->stats.lock_fail[false]); 449 + pr_debug("%s: failed comm pfn %lx status %lu page_size %s\n", __func__, 450 + page_to_pfn(page), status, 451 + vmballoon_page_size_names[page_size]); 452 + 453 + /* Error occurred */ 454 + vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_REFUSED_ALLOC, 455 + page_size); 456 + 624 457 return -EIO; 625 458 } 626 459 627 - static int vmballoon_send_batched_lock(struct vmballoon *b, 628 - unsigned int num_pages, bool is_2m_pages, unsigned int *target) 460 + /** 461 + * vmballoon_status_page - returns the status of (un)lock operation 462 + * 463 + * @b: pointer to the balloon. 464 + * @idx: index for the page for which the operation is performed. 465 + * @p: pointer to where the page struct is returned. 466 + * 467 + * Following a lock or unlock operation, returns the status of the operation for 468 + * an individual page. Provides the page that the operation was performed on in 469 + * the @p argument. 470 + * 471 + * Returns: The status of a lock or unlock operation for an individual page. 472 + */ 473 + static unsigned long vmballoon_status_page(struct vmballoon *b, int idx, 474 + struct page **p) 629 475 { 630 - unsigned long status; 631 - unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page)); 476 + if (static_branch_likely(&vmw_balloon_batching)) { 477 + /* batching mode */ 478 + *p = pfn_to_page(b->batch_page[idx].pfn); 479 + return b->batch_page[idx].status; 480 + } 632 481 633 - STATS_INC(b->stats.lock[is_2m_pages]); 482 + /* non-batching mode */ 483 + *p = b->page; 634 484 635 - if (is_2m_pages) 636 - status = VMWARE_BALLOON_CMD(BATCHED_2M_LOCK, pfn, num_pages, 637 - *target); 485 + /* 486 + * If a failure occurs, the indication will be provided in the status 487 + * of the entire operation, which is considered before the individual 488 + * page status. 
So for non-batching mode, the indication is always of 489 + * success. 490 + */ 491 + return VMW_BALLOON_SUCCESS; 492 + } 493 + 494 + /** 495 + * vmballoon_lock_op - notifies the host about inflated/deflated pages. 496 + * @b: pointer to the balloon. 497 + * @num_pages: number of inflated/deflated pages. 498 + * @page_size: size of the page. 499 + * @op: the type of operation (lock or unlock). 500 + * 501 + * Notify the host about page(s) that were ballooned (or removed from the 502 + * balloon) so that host can use it without fear that guest will need it (or 503 + * stop using them since the VM does). Host may reject some pages, we need to 504 + * check the return value and maybe submit a different page. The pages that are 505 + * inflated/deflated are pointed by @b->page. 506 + * 507 + * Return: result as provided by the hypervisor. 508 + */ 509 + static unsigned long vmballoon_lock_op(struct vmballoon *b, 510 + unsigned int num_pages, 511 + enum vmballoon_page_size_type page_size, 512 + enum vmballoon_op op) 513 + { 514 + unsigned long cmd, pfn; 515 + 516 + lockdep_assert_held(&b->comm_lock); 517 + 518 + if (static_branch_likely(&vmw_balloon_batching)) { 519 + if (op == VMW_BALLOON_INFLATE) 520 + cmd = page_size == VMW_BALLOON_2M_PAGE ? 521 + VMW_BALLOON_CMD_BATCHED_2M_LOCK : 522 + VMW_BALLOON_CMD_BATCHED_LOCK; 523 + else 524 + cmd = page_size == VMW_BALLOON_2M_PAGE ? 525 + VMW_BALLOON_CMD_BATCHED_2M_UNLOCK : 526 + VMW_BALLOON_CMD_BATCHED_UNLOCK; 527 + 528 + pfn = PHYS_PFN(virt_to_phys(b->batch_page)); 529 + } else { 530 + cmd = op == VMW_BALLOON_INFLATE ? VMW_BALLOON_CMD_LOCK : 531 + VMW_BALLOON_CMD_UNLOCK; 532 + pfn = page_to_pfn(b->page); 533 + 534 + /* In non-batching mode, PFNs must fit in 32-bit */ 535 + if (unlikely(pfn != (u32)pfn)) 536 + return VMW_BALLOON_ERROR_PPN_INVALID; 537 + } 538 + 539 + return vmballoon_cmd(b, cmd, pfn, num_pages); 540 + } 541 + 542 + /** 543 + * vmballoon_add_page - adds a page towards lock/unlock operation. 
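In the non-batching path above, the protocol carries the PFN in a 32-bit field, so `vmballoon_lock_op()` rejects frames that would be truncated with `pfn != (u32)pfn`. The round-trip-cast idiom in isolation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A PFN fits the legacy 32-bit protocol field iff casting it down to
 * u32 and back is lossless - the same check vmballoon_lock_op() uses
 * before returning VMW_BALLOON_ERROR_PPN_INVALID. */
static bool pfn_fits_u32(uint64_t pfn)
{
	return pfn == (uint32_t)pfn;
}
```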
544 + * 545 + * @b: pointer to the balloon. 546 + * @idx: index of the page to be ballooned in this batch. 547 + * @p: pointer to the page that is about to be ballooned. 548 + * 549 + * Adds the page to be ballooned. Must be called while holding @comm_lock. 550 + */ 551 + static void vmballoon_add_page(struct vmballoon *b, unsigned int idx, 552 + struct page *p) 553 + { 554 + lockdep_assert_held(&b->comm_lock); 555 + 556 + if (static_branch_likely(&vmw_balloon_batching)) 557 + b->batch_page[idx] = (struct vmballoon_batch_entry) 558 + { .pfn = page_to_pfn(p) }; 638 559 else 639 - status = VMWARE_BALLOON_CMD(BATCHED_LOCK, pfn, num_pages, 640 - *target); 560 + b->page = p; 561 + } 641 562 642 - if (vmballoon_check_status(b, status)) 563 + /** 564 + * vmballoon_lock - lock or unlock a batch of pages. 565 + * 566 + * @b: pointer to the balloon. 567 + * @ctl: pointer for the %struct vmballoon_ctl, which defines the operation. 568 + * 569 + * Notifies the host of about ballooned pages (after inflation or deflation, 570 + * according to @ctl). If the host rejects the page put it on the 571 + * @ctl refuse list. These refused page are then released when moving to the 572 + * next size of pages. 573 + * 574 + * Note that we neither free any @page here nor put them back on the ballooned 575 + * pages list. Instead we queue it for later processing. We do that for several 576 + * reasons. First, we do not want to free the page under the lock. Second, it 577 + * allows us to unify the handling of lock and unlock. In the inflate case, the 578 + * caller will check if there are too many refused pages and release them. 579 + * Although it is not identical to the past behavior, it should not affect 580 + * performance. 
581 + */ 582 + static int vmballoon_lock(struct vmballoon *b, struct vmballoon_ctl *ctl) 583 + { 584 + unsigned long batch_status; 585 + struct page *page; 586 + unsigned int i, num_pages; 587 + 588 + num_pages = ctl->n_pages; 589 + if (num_pages == 0) 643 590 return 0; 644 591 645 - pr_debug("%s - batch ppn %lx, hv returns %ld\n", __func__, pfn, status); 646 - STATS_INC(b->stats.lock_fail[is_2m_pages]); 647 - return 1; 648 - } 592 + /* communication with the host is done under the communication lock */ 593 + spin_lock(&b->comm_lock); 649 594 650 - /* 651 - * Notify the host that guest intends to release given page back into 652 - * the pool of available (to the guest) pages. 653 - */ 654 - static bool vmballoon_send_unlock_page(struct vmballoon *b, unsigned long pfn, 655 - unsigned int *target) 656 - { 657 - unsigned long status, dummy = 0; 658 - u32 pfn32; 595 + i = 0; 596 + list_for_each_entry(page, &ctl->pages, lru) 597 + vmballoon_add_page(b, i++, page); 659 598 660 - pfn32 = (u32)pfn; 661 - if (pfn32 != pfn) 662 - return false; 599 + batch_status = vmballoon_lock_op(b, ctl->n_pages, ctl->page_size, 600 + ctl->op); 663 601 664 - STATS_INC(b->stats.unlock[false]); 602 + /* 603 + * Iterate over the pages in the provided list. Since we are changing 604 + * @ctl->n_pages we are saving the original value in @num_pages and 605 + * use this value to bound the loop. 
606 + */ 607 + for (i = 0; i < num_pages; i++) { 608 + unsigned long status; 665 609 666 - status = VMWARE_BALLOON_CMD(UNLOCK, pfn, dummy, *target); 667 - if (vmballoon_check_status(b, status)) 668 - return true; 669 - 670 - pr_debug("%s - ppn %lx, hv returns %ld\n", __func__, pfn, status); 671 - STATS_INC(b->stats.unlock_fail[false]); 672 - return false; 673 - } 674 - 675 - static bool vmballoon_send_batched_unlock(struct vmballoon *b, 676 - unsigned int num_pages, bool is_2m_pages, unsigned int *target) 677 - { 678 - unsigned long status; 679 - unsigned long pfn = PHYS_PFN(virt_to_phys(b->batch_page)); 680 - 681 - STATS_INC(b->stats.unlock[is_2m_pages]); 682 - 683 - if (is_2m_pages) 684 - status = VMWARE_BALLOON_CMD(BATCHED_2M_UNLOCK, pfn, num_pages, 685 - *target); 686 - else 687 - status = VMWARE_BALLOON_CMD(BATCHED_UNLOCK, pfn, num_pages, 688 - *target); 689 - 690 - if (vmballoon_check_status(b, status)) 691 - return true; 692 - 693 - pr_debug("%s - batch ppn %lx, hv returns %ld\n", __func__, pfn, status); 694 - STATS_INC(b->stats.unlock_fail[is_2m_pages]); 695 - return false; 696 - } 697 - 698 - static struct page *vmballoon_alloc_page(gfp_t flags, bool is_2m_page) 699 - { 700 - if (is_2m_page) 701 - return alloc_pages(flags, VMW_BALLOON_2M_SHIFT); 702 - 703 - return alloc_page(flags); 704 - } 705 - 706 - static void vmballoon_free_page(struct page *page, bool is_2m_page) 707 - { 708 - if (is_2m_page) 709 - __free_pages(page, VMW_BALLOON_2M_SHIFT); 710 - else 711 - __free_page(page); 712 - } 713 - 714 - /* 715 - * Quickly release all pages allocated for the balloon. This function is 716 - * called when host decides to "reset" balloon for one reason or another. 717 - * Unlike normal "deflate" we do not (shall not) notify host of the pages 718 - * being released. 
719 - */ 720 - static void vmballoon_pop(struct vmballoon *b) 721 - { 722 - struct page *page, *next; 723 - unsigned is_2m_pages; 724 - 725 - for (is_2m_pages = 0; is_2m_pages < VMW_BALLOON_NUM_PAGE_SIZES; 726 - is_2m_pages++) { 727 - struct vmballoon_page_size *page_size = 728 - &b->page_sizes[is_2m_pages]; 729 - u16 size_per_page = vmballoon_page_size(is_2m_pages); 730 - 731 - list_for_each_entry_safe(page, next, &page_size->pages, lru) { 732 - list_del(&page->lru); 733 - vmballoon_free_page(page, is_2m_pages); 734 - STATS_INC(b->stats.free[is_2m_pages]); 735 - b->size -= size_per_page; 736 - cond_resched(); 737 - } 738 - } 739 - 740 - /* Clearing the batch_page unconditionally has no adverse effect */ 741 - free_page((unsigned long)b->batch_page); 742 - b->batch_page = NULL; 743 - } 744 - 745 - /* 746 - * Notify the host of a ballooned page. If host rejects the page put it on the 747 - * refuse list, those refused page are then released at the end of the 748 - * inflation cycle. 749 - */ 750 - static int vmballoon_lock_page(struct vmballoon *b, unsigned int num_pages, 751 - bool is_2m_pages, unsigned int *target) 752 - { 753 - int locked, hv_status; 754 - struct page *page = b->page; 755 - struct vmballoon_page_size *page_size = &b->page_sizes[false]; 756 - 757 - /* is_2m_pages can never happen as 2m pages support implies batching */ 758 - 759 - locked = vmballoon_send_lock_page(b, page_to_pfn(page), &hv_status, 760 - target); 761 - if (locked) { 762 - STATS_INC(b->stats.refused_alloc[false]); 763 - 764 - if (locked == -EIO && 765 - (hv_status == VMW_BALLOON_ERROR_RESET || 766 - hv_status == VMW_BALLOON_ERROR_PPN_NOTNEEDED)) { 767 - vmballoon_free_page(page, false); 768 - return -EIO; 769 - } 610 + status = vmballoon_status_page(b, i, &page); 770 611 771 612 /* 772 - * Place page on the list of non-balloonable pages 773 - * and retry allocation, unless we already accumulated 774 - * too many of them, in which case take a breather. 
613 + * Failure of the whole batch overrides a single operation 614 + * results. 775 615 */ 776 - if (page_size->n_refused_pages < VMW_BALLOON_MAX_REFUSED) { 777 - page_size->n_refused_pages++; 778 - list_add(&page->lru, &page_size->refused_pages); 779 - } else { 780 - vmballoon_free_page(page, false); 781 - } 782 - return locked; 616 + if (batch_status != VMW_BALLOON_SUCCESS) 617 + status = batch_status; 618 + 619 + /* Continue if no error happened */ 620 + if (!vmballoon_handle_one_result(b, page, ctl->page_size, 621 + status)) 622 + continue; 623 + 624 + /* 625 + * Error happened. Move the pages to the refused list and update 626 + * the pages number. 627 + */ 628 + list_move(&page->lru, &ctl->refused_pages); 629 + ctl->n_pages--; 630 + ctl->n_refused_pages++; 783 631 } 784 632 785 - /* track allocated page */ 786 - list_add(&page->lru, &page_size->pages); 633 + spin_unlock(&b->comm_lock); 787 634 788 - /* update balloon size */ 789 - b->size++; 790 - 791 - return 0; 635 + return batch_status == VMW_BALLOON_SUCCESS ? 
0 : -EIO; 792 636 } 793 637 794 - static int vmballoon_lock_batched_page(struct vmballoon *b, 795 - unsigned int num_pages, bool is_2m_pages, unsigned int *target) 796 - { 797 - int locked, i; 798 - u16 size_per_page = vmballoon_page_size(is_2m_pages); 799 - 800 - locked = vmballoon_send_batched_lock(b, num_pages, is_2m_pages, 801 - target); 802 - if (locked > 0) { 803 - for (i = 0; i < num_pages; i++) { 804 - u64 pa = vmballoon_batch_get_pa(b->batch_page, i); 805 - struct page *p = pfn_to_page(pa >> PAGE_SHIFT); 806 - 807 - vmballoon_free_page(p, is_2m_pages); 808 - } 809 - 810 - return -EIO; 811 - } 812 - 813 - for (i = 0; i < num_pages; i++) { 814 - u64 pa = vmballoon_batch_get_pa(b->batch_page, i); 815 - struct page *p = pfn_to_page(pa >> PAGE_SHIFT); 816 - struct vmballoon_page_size *page_size = 817 - &b->page_sizes[is_2m_pages]; 818 - 819 - locked = vmballoon_batch_get_status(b->batch_page, i); 820 - 821 - switch (locked) { 822 - case VMW_BALLOON_SUCCESS: 823 - list_add(&p->lru, &page_size->pages); 824 - b->size += size_per_page; 825 - break; 826 - case VMW_BALLOON_ERROR_PPN_PINNED: 827 - case VMW_BALLOON_ERROR_PPN_INVALID: 828 - if (page_size->n_refused_pages 829 - < VMW_BALLOON_MAX_REFUSED) { 830 - list_add(&p->lru, &page_size->refused_pages); 831 - page_size->n_refused_pages++; 832 - break; 833 - } 834 - /* Fallthrough */ 835 - case VMW_BALLOON_ERROR_RESET: 836 - case VMW_BALLOON_ERROR_PPN_NOTNEEDED: 837 - vmballoon_free_page(p, is_2m_pages); 838 - break; 839 - default: 840 - /* This should never happen */ 841 - WARN_ON_ONCE(true); 842 - } 843 - } 844 - 845 - return 0; 846 - } 847 - 848 - /* 849 - * Release the page allocated for the balloon. Note that we first notify 850 - * the host so it can make sure the page will be available for the guest 851 - * to use, if needed. 638 + /** 639 + * vmballoon_release_page_list() - Releases a page list 640 + * 641 + * @page_list: list of pages to release. 642 + * @n_pages: pointer to the number of pages. 
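The result loop in `vmballoon_lock()` lets a failed batch status override every per-page status, and counts refused pages off the active list. A compact model of that precedence rule (status codes here are illustrative, not the real hypervisor values):

```c
#include <assert.h>

#define BALLOON_SUCCESS 0UL	/* illustrative stand-in for VMW_BALLOON_SUCCESS */

/* Model of the per-page result handling in vmballoon_lock(): failure of
 * the whole batch overrides individual page results; every page that
 * ends up with a non-success status is counted as refused (the driver
 * moves it to ctl->refused_pages). */
static unsigned int count_refused(unsigned long batch_status,
				  const unsigned long *page_status,
				  unsigned int n_pages)
{
	unsigned int refused = 0, i;

	for (i = 0; i < n_pages; i++) {
		unsigned long status = page_status[i];

		/* Batch failure overrides the individual result. */
		if (batch_status != BALLOON_SUCCESS)
			status = batch_status;

		if (status != BALLOON_SUCCESS)
			refused++;
	}

	return refused;
}
```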
643 + * @page_size: whether the pages in the list are 2MB (or else 4KB). 644 + * 645 + * Releases the list of pages and zeros the number of pages. 852 646 */ 853 - static int vmballoon_unlock_page(struct vmballoon *b, unsigned int num_pages, 854 - bool is_2m_pages, unsigned int *target) 647 + static void vmballoon_release_page_list(struct list_head *page_list, 648 + int *n_pages, 649 + enum vmballoon_page_size_type page_size) 855 650 { 856 - struct page *page = b->page; 857 - struct vmballoon_page_size *page_size = &b->page_sizes[false]; 651 + struct page *page, *tmp; 858 652 859 - /* is_2m_pages can never happen as 2m pages support implies batching */ 860 - 861 - if (!vmballoon_send_unlock_page(b, page_to_pfn(page), target)) { 862 - list_add(&page->lru, &page_size->pages); 863 - return -EIO; 653 + list_for_each_entry_safe(page, tmp, page_list, lru) { 654 + list_del(&page->lru); 655 + __free_pages(page, vmballoon_page_order(page_size)); 864 656 } 865 657 866 - /* deallocate page */ 867 - vmballoon_free_page(page, false); 868 - STATS_INC(b->stats.free[false]); 869 - 870 - /* update balloon size */ 871 - b->size--; 872 - 873 - return 0; 658 + *n_pages = 0; 874 659 } 875 660 876 - static int vmballoon_unlock_batched_page(struct vmballoon *b, 877 - unsigned int num_pages, bool is_2m_pages, 878 - unsigned int *target) 879 - { 880 - int locked, i, ret = 0; 881 - bool hv_success; 882 - u16 size_per_page = vmballoon_page_size(is_2m_pages); 883 - 884 - hv_success = vmballoon_send_batched_unlock(b, num_pages, is_2m_pages, 885 - target); 886 - if (!hv_success) 887 - ret = -EIO; 888 - 889 - for (i = 0; i < num_pages; i++) { 890 - u64 pa = vmballoon_batch_get_pa(b->batch_page, i); 891 - struct page *p = pfn_to_page(pa >> PAGE_SHIFT); 892 - struct vmballoon_page_size *page_size = 893 - &b->page_sizes[is_2m_pages]; 894 - 895 - locked = vmballoon_batch_get_status(b->batch_page, i); 896 - if (!hv_success || locked != VMW_BALLOON_SUCCESS) { 897 - /* 898 - * That page wasn't 
successfully unlocked by the 899 - * hypervisor, re-add it to the list of pages owned by 900 - * the balloon driver. 901 - */ 902 - list_add(&p->lru, &page_size->pages); 903 - } else { 904 - /* deallocate page */ 905 - vmballoon_free_page(p, is_2m_pages); 906 - STATS_INC(b->stats.free[is_2m_pages]); 907 - 908 - /* update balloon size */ 909 - b->size -= size_per_page; 910 - } 911 - } 912 - 913 - return ret; 914 - } 915 661 916 662 /* 917 663 * Release pages that were allocated while attempting to inflate the 918 664 * balloon but were refused by the host for one reason or another. 919 665 */ 920 666 static void vmballoon_release_refused_pages(struct vmballoon *b, 921 - bool is_2m_pages) 667 + struct vmballoon_ctl *ctl) 922 668 { 923 - struct page *page, *next; 924 - struct vmballoon_page_size *page_size = 925 - &b->page_sizes[is_2m_pages]; 669 + vmballoon_stats_page_inc(b, VMW_BALLOON_PAGE_STAT_REFUSED_FREE, 670 + ctl->page_size); 926 671 927 - list_for_each_entry_safe(page, next, &page_size->refused_pages, lru) { 928 - list_del(&page->lru); 929 - vmballoon_free_page(page, is_2m_pages); 930 - STATS_INC(b->stats.refused_free[is_2m_pages]); 672 + vmballoon_release_page_list(&ctl->refused_pages, &ctl->n_refused_pages, 673 + ctl->page_size); 674 + } 675 + 676 + /** 677 + * vmballoon_change - retrieve the required balloon change 678 + * 679 + * @b: pointer for the balloon. 680 + * 681 + * Return: the required change for the balloon size. A positive number 682 + * indicates inflation, a negative number indicates a deflation. 
683 + */ 684 + static int64_t vmballoon_change(struct vmballoon *b) 685 + { 686 + int64_t size, target; 687 + 688 + size = atomic64_read(&b->size); 689 + target = READ_ONCE(b->target); 690 + 691 + /* 692 + * We must cast first because of int sizes 693 + * Otherwise we might get huge positives instead of negatives 694 + */ 695 + 696 + if (b->reset_required) 697 + return 0; 698 + 699 + /* consider a 2MB slack on deflate, unless the balloon is emptied */ 700 + if (target < size && target != 0 && 701 + size - target < vmballoon_page_in_frames(VMW_BALLOON_2M_PAGE)) 702 + return 0; 703 + 704 + return target - size; 705 + } 706 + 707 + /** 708 + * vmballoon_enqueue_page_list() - Enqueues list of pages after inflation. 709 + * 710 + * @b: pointer to balloon. 711 + * @pages: list of pages to enqueue. 712 + * @n_pages: pointer to number of pages in list. The value is zeroed. 713 + * @page_size: whether the pages are 2MB or 4KB pages. 714 + * 715 + * Enqueues the provides list of pages in the ballooned page list, clears the 716 + * list and zeroes the number of pages that was provided. 717 + */ 718 + static void vmballoon_enqueue_page_list(struct vmballoon *b, 719 + struct list_head *pages, 720 + unsigned int *n_pages, 721 + enum vmballoon_page_size_type page_size) 722 + { 723 + struct vmballoon_page_size *page_size_info = &b->page_sizes[page_size]; 724 + 725 + list_splice_init(pages, &page_size_info->pages); 726 + *n_pages = 0; 727 + } 728 + 729 + /** 730 + * vmballoon_dequeue_page_list() - Dequeues page lists for deflation. 731 + * 732 + * @b: pointer to balloon. 733 + * @pages: list of pages to enqueue. 734 + * @n_pages: pointer to number of pages in list. The value is zeroed. 735 + * @page_size: whether the pages are 2MB or 4KB pages. 736 + * @n_req_pages: the number of requested pages. 737 + * 738 + * Dequeues the number of requested pages from the balloon for deflation. 
The 739 + * number of dequeued pages may be lower, if not enough pages in the requested 740 + * size are available. 741 + */ 742 + static void vmballoon_dequeue_page_list(struct vmballoon *b, 743 + struct list_head *pages, 744 + unsigned int *n_pages, 745 + enum vmballoon_page_size_type page_size, 746 + unsigned int n_req_pages) 747 + { 748 + struct vmballoon_page_size *page_size_info = &b->page_sizes[page_size]; 749 + struct page *page, *tmp; 750 + unsigned int i = 0; 751 + 752 + list_for_each_entry_safe(page, tmp, &page_size_info->pages, lru) { 753 + list_move(&page->lru, pages); 754 + if (++i == n_req_pages) 755 + break; 931 756 } 932 - 933 - page_size->n_refused_pages = 0; 757 + *n_pages = i; 934 758 } 935 759 936 - static void vmballoon_add_page(struct vmballoon *b, int idx, struct page *p) 937 - { 938 - b->page = p; 939 - } 940 - 941 - static void vmballoon_add_batched_page(struct vmballoon *b, int idx, 942 - struct page *p) 943 - { 944 - vmballoon_batch_set_pa(b->batch_page, idx, 945 - (u64)page_to_pfn(p) << PAGE_SHIFT); 946 - } 947 - 948 - /* 949 - * Inflate the balloon towards its target size. Note that we try to limit 950 - * the rate of allocation to make sure we are not choking the rest of the 951 - * system. 760 + /** 761 + * vmballoon_inflate() - Inflate the balloon towards its target size. 762 + * 763 + * @b: pointer to the balloon. 
952 764 */ 953 765 static void vmballoon_inflate(struct vmballoon *b) 954 766 { 955 - unsigned int num_pages = 0; 956 - int error = 0; 957 - gfp_t flags = VMW_PAGE_ALLOC_NOSLEEP; 958 - bool is_2m_pages; 767 + int64_t to_inflate_frames; 768 + struct vmballoon_ctl ctl = { 769 + .pages = LIST_HEAD_INIT(ctl.pages), 770 + .refused_pages = LIST_HEAD_INIT(ctl.refused_pages), 771 + .page_size = b->max_page_size, 772 + .op = VMW_BALLOON_INFLATE 773 + }; 959 774 960 - pr_debug("%s - size: %d, target %d\n", __func__, b->size, b->target); 775 + while ((to_inflate_frames = vmballoon_change(b)) > 0) { 776 + unsigned int to_inflate_pages, page_in_frames; 777 + int alloc_error, lock_error = 0; 961 778 962 - /* 963 - * First try NOSLEEP page allocations to inflate balloon. 964 - * 965 - * If we do not throttle nosleep allocations, we can drain all 966 - * free pages in the guest quickly (if the balloon target is high). 967 - * As a side-effect, draining free pages helps to inform (force) 968 - * the guest to start swapping if balloon target is not met yet, 969 - * which is a desired behavior. However, balloon driver can consume 970 - * all available CPU cycles if too many pages are allocated in a 971 - * second. Therefore, we throttle nosleep allocations even when 972 - * the guest is not under memory pressure. OTOH, if we have already 973 - * predicted that the guest is under memory pressure, then we 974 - * slowdown page allocations considerably. 975 - */ 779 + VM_BUG_ON(!list_empty(&ctl.pages)); 780 + VM_BUG_ON(ctl.n_pages != 0); 976 781 977 - /* 978 - * Start with no sleep allocation rate which may be higher 979 - * than sleeping allocation rate. 
980 - */ 981 - is_2m_pages = b->supported_page_sizes == VMW_BALLOON_NUM_PAGE_SIZES; 782 + page_in_frames = vmballoon_page_in_frames(ctl.page_size); 982 783 983 - pr_debug("%s - goal: %d", __func__, b->target - b->size); 784 + to_inflate_pages = min_t(unsigned long, b->batch_max_pages, 785 + DIV_ROUND_UP_ULL(to_inflate_frames, 786 + page_in_frames)); 984 787 985 - while (!b->reset_required && 986 - b->size + num_pages * vmballoon_page_size(is_2m_pages) 987 - < b->target) { 988 - struct page *page; 788 + /* Start by allocating */ 789 + alloc_error = vmballoon_alloc_page_list(b, &ctl, 790 + to_inflate_pages); 989 791 990 - if (flags == VMW_PAGE_ALLOC_NOSLEEP) 991 - STATS_INC(b->stats.alloc[is_2m_pages]); 992 - else 993 - STATS_INC(b->stats.sleep_alloc); 792 + /* Actually lock the pages by telling the hypervisor */ 793 + lock_error = vmballoon_lock(b, &ctl); 994 794 995 - page = vmballoon_alloc_page(flags, is_2m_pages); 996 - if (!page) { 997 - STATS_INC(b->stats.alloc_fail[is_2m_pages]); 795 + /* 796 + * If an error indicates that something serious went wrong, 797 + * stop the inflation. 798 + */ 799 + if (lock_error) 800 + break; 998 801 999 - if (is_2m_pages) { 1000 - b->ops->lock(b, num_pages, true, &b->target); 802 + /* Update the balloon size */ 803 + atomic64_add(ctl.n_pages * page_in_frames, &b->size); 1001 804 1002 - /* 1003 - * ignore errors from locking as we now switch 1004 - * to 4k pages and we might get different 1005 - * errors. 1006 - */ 805 + vmballoon_enqueue_page_list(b, &ctl.pages, &ctl.n_pages, 806 + ctl.page_size); 1007 807 1008 - num_pages = 0; 1009 - is_2m_pages = false; 1010 - continue; 1011 - } 1012 - 1013 - if (flags == VMW_PAGE_ALLOC_CANSLEEP) { 1014 - /* 1015 - * CANSLEEP page allocation failed, so guest 1016 - * is under severe memory pressure. We just log 1017 - * the event, but do not stop the inflation 1018 - * due to its negative impact on performance. 
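Both the inflate and deflate loops size each chunk the same way: convert the outstanding frame deficit into pages of the current size with a round-up division, then cap at `batch_max_pages`. That is what the `min_t(..., DIV_ROUND_UP_ULL(...))` expression computes; spelled out:

```c
#include <assert.h>
#include <stdint.h>

/* Model of the chunk sizing in vmballoon_inflate()/vmballoon_deflate():
 * frames still needed, expressed in pages of the current size (rounded
 * up), limited to what one batch page can describe. */
static unsigned int chunk_pages(uint64_t frames, unsigned int page_in_frames,
				unsigned int batch_max_pages)
{
	uint64_t pages = (frames + page_in_frames - 1) / page_in_frames;

	return pages < batch_max_pages ? (unsigned int)pages : batch_max_pages;
}
```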
1019 - */ 1020 - STATS_INC(b->stats.sleep_alloc_fail); 808 + /* 809 + * If allocation failed or the number of refused pages exceeds 810 + * the maximum allowed, move to the next page size. 811 + */ 812 + if (alloc_error || 813 + ctl.n_refused_pages >= VMW_BALLOON_MAX_REFUSED) { 814 + if (ctl.page_size == VMW_BALLOON_4K_PAGE) 1021 815 break; 1022 - } 1023 816 1024 817 /* 1025 - * NOSLEEP page allocation failed, so the guest is 1026 - * under memory pressure. Slowing down page alloctions 1027 - * seems to be reasonable, but doing so might actually 1028 - * cause the hypervisor to throttle us down, resulting 1029 - * in degraded performance. We will count on the 1030 - * scheduler and standard memory management mechanisms 1031 - * for now. 818 + * Ignore errors from locking as we now switch to 4k 819 + * pages and we might get different errors. 1032 820 */ 1033 - flags = VMW_PAGE_ALLOC_CANSLEEP; 1034 - continue; 1035 - } 1036 - 1037 - b->ops->add_page(b, num_pages++, page); 1038 - if (num_pages == b->batch_max_pages) { 1039 - error = b->ops->lock(b, num_pages, is_2m_pages, 1040 - &b->target); 1041 - num_pages = 0; 1042 - if (error) 1043 - break; 821 + vmballoon_release_refused_pages(b, &ctl); 822 + ctl.page_size--; 1044 823 } 1045 824 1046 825 cond_resched(); 1047 826 } 1048 827 1049 - if (num_pages > 0) 1050 - b->ops->lock(b, num_pages, is_2m_pages, &b->target); 1051 - 1052 - vmballoon_release_refused_pages(b, true); 1053 - vmballoon_release_refused_pages(b, false); 828 + /* 829 + * Release pages that were allocated while attempting to inflate the 830 + * balloon but were refused by the host for one reason or another, 831 + * and update the statistics. 832 + */ 833 + if (ctl.n_refused_pages != 0) 834 + vmballoon_release_refused_pages(b, &ctl); 1054 835 } 1055 836 1056 - /* 837 + /** 838 + * vmballoon_deflate() - Decrease the size of the balloon. 839 + * 840 + * @b: pointer to the balloon 841 + * @n_frames: the number of frames to deflate. 
If zero, automatically 842 + * calculated according to the target size. 843 + * @coordinated: whether to coordinate with the host 844 + * 1057 845 * Decrease the size of the balloon allowing guest to use more memory. 846 + * 847 + * Return: The number of deflated frames (i.e., basic page size units) 1058 848 */ 1059 - static void vmballoon_deflate(struct vmballoon *b) 849 + static unsigned long vmballoon_deflate(struct vmballoon *b, uint64_t n_frames, 850 + bool coordinated) 1060 851 { 1061 - unsigned is_2m_pages; 1062 - 1063 - pr_debug("%s - size: %d, target %d\n", __func__, b->size, b->target); 852 + unsigned long deflated_frames = 0; 853 + unsigned long tried_frames = 0; 854 + struct vmballoon_ctl ctl = { 855 + .pages = LIST_HEAD_INIT(ctl.pages), 856 + .refused_pages = LIST_HEAD_INIT(ctl.refused_pages), 857 + .page_size = VMW_BALLOON_4K_PAGE, 858 + .op = VMW_BALLOON_DEFLATE 859 + }; 1064 860 1065 861 /* free pages to reach target */ 1066 - for (is_2m_pages = 0; is_2m_pages < b->supported_page_sizes; 1067 - is_2m_pages++) { 1068 - struct page *page, *next; 1069 - unsigned int num_pages = 0; 1070 - struct vmballoon_page_size *page_size = 1071 - &b->page_sizes[is_2m_pages]; 862 + while (true) { 863 + unsigned int to_deflate_pages, n_unlocked_frames; 864 + unsigned int page_in_frames; 865 + int64_t to_deflate_frames; 866 + bool deflated_all; 1072 867 1073 - list_for_each_entry_safe(page, next, &page_size->pages, lru) { 1074 - if (b->reset_required || 1075 - (b->target > 0 && 1076 - b->size - num_pages 1077 - * vmballoon_page_size(is_2m_pages) 1078 - < b->target + vmballoon_page_size(true))) 868 + page_in_frames = vmballoon_page_in_frames(ctl.page_size); 869 + 870 + VM_BUG_ON(!list_empty(&ctl.pages)); 871 + VM_BUG_ON(ctl.n_pages); 872 + VM_BUG_ON(!list_empty(&ctl.refused_pages)); 873 + VM_BUG_ON(ctl.n_refused_pages); 874 + 875 + /* 876 + * If we were requested a specific number of frames, we try to 877 + * deflate this number of frames. 
Otherwise, deflation is 878 + * performed according to the target and balloon size. 879 + */ 880 + to_deflate_frames = n_frames ? n_frames - tried_frames : 881 + -vmballoon_change(b); 882 + 883 + /* break if no work to do */ 884 + if (to_deflate_frames <= 0) 885 + break; 886 + 887 + /* 888 + * Calculate the number of frames based on current page size, 889 + * but limit the deflated frames to a single chunk 890 + */ 891 + to_deflate_pages = min_t(unsigned long, b->batch_max_pages, 892 + DIV_ROUND_UP_ULL(to_deflate_frames, 893 + page_in_frames)); 894 + 895 + /* First take the pages from the balloon pages. */ 896 + vmballoon_dequeue_page_list(b, &ctl.pages, &ctl.n_pages, 897 + ctl.page_size, to_deflate_pages); 898 + 899 + /* 900 + * Before pages are moving to the refused list, count their 901 + * frames as frames that we tried to deflate. 902 + */ 903 + tried_frames += ctl.n_pages * page_in_frames; 904 + 905 + /* 906 + * Unlock the pages by communicating with the hypervisor if the 907 + * communication is coordinated (i.e., not pop). We ignore the 908 + * return code. Instead we check if all the pages we manage to 909 + * unlock all the pages. If we failed, we will move to the next 910 + * page size, and would eventually try again later. 911 + */ 912 + if (coordinated) 913 + vmballoon_lock(b, &ctl); 914 + 915 + /* 916 + * Check if we deflated enough. We will move to the next page 917 + * size if we did not manage to do so. This calculation takes 918 + * place now, as once the pages are released, the number of 919 + * pages is zeroed. 
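The deflate loop drives itself off `-vmballoon_change(b)`, whose signed 64-bit arithmetic (defined earlier in this patch) avoids the huge positive values an unsigned `target - size` would produce, and whose 2MB slack suppresses tiny deflations unless the balloon is being emptied. A standalone model of that decision:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGES_2M_IN_FRAMES 512	/* 2MB / 4KB; assumes 4KB base pages */

/* Model of vmballoon_change(): positive result means inflate, negative
 * means deflate, zero means no work. Small deflations within a 2MB
 * slack are skipped unless the target is zero (full deflation). */
static int64_t balloon_change(int64_t size, int64_t target, bool reset_required)
{
	if (reset_required)
		return 0;

	/* consider a 2MB slack on deflate, unless the balloon is emptied */
	if (target < size && target != 0 &&
	    size - target < PAGES_2M_IN_FRAMES)
		return 0;

	return target - size;
}
```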
920 + */ 921 + deflated_all = (ctl.n_pages == to_deflate_pages); 922 + 923 + /* Update local and global counters */ 924 + n_unlocked_frames = ctl.n_pages * page_in_frames; 925 + atomic64_sub(n_unlocked_frames, &b->size); 926 + deflated_frames += n_unlocked_frames; 927 + 928 + vmballoon_stats_page_add(b, VMW_BALLOON_PAGE_STAT_FREE, 929 + ctl.page_size, ctl.n_pages); 930 + 931 + /* free the ballooned pages */ 932 + vmballoon_release_page_list(&ctl.pages, &ctl.n_pages, 933 + ctl.page_size); 934 + 935 + /* Return the refused pages to the ballooned list. */ 936 + vmballoon_enqueue_page_list(b, &ctl.refused_pages, 937 + &ctl.n_refused_pages, 938 + ctl.page_size); 939 + 940 + /* If we failed to unlock all the pages, move to next size. */ 941 + if (!deflated_all) { 942 + if (ctl.page_size == b->max_page_size) 1079 943 break; 1080 - 1081 - list_del(&page->lru); 1082 - b->ops->add_page(b, num_pages++, page); 1083 - 1084 - if (num_pages == b->batch_max_pages) { 1085 - int error; 1086 - 1087 - error = b->ops->unlock(b, num_pages, 1088 - is_2m_pages, &b->target); 1089 - num_pages = 0; 1090 - if (error) 1091 - return; 1092 - } 1093 - 1094 - cond_resched(); 944 + ctl.page_size++; 1095 945 } 1096 946 1097 - if (num_pages > 0) 1098 - b->ops->unlock(b, num_pages, is_2m_pages, &b->target); 947 + cond_resched(); 1099 948 } 949 + 950 + return deflated_frames; 1100 951 } 1101 952 1102 - static const struct vmballoon_ops vmballoon_basic_ops = { 1103 - .add_page = vmballoon_add_page, 1104 - .lock = vmballoon_lock_page, 1105 - .unlock = vmballoon_unlock_page 1106 - }; 953 + /** 954 + * vmballoon_deinit_batching - disables batching mode. 955 + * 956 + * @b: pointer to &struct vmballoon. 957 + * 958 + * Disables batching, by deallocating the page for communication with the 959 + * hypervisor and disabling the static key to indicate that batching is off. 
960 + */ 961 + static void vmballoon_deinit_batching(struct vmballoon *b) 962 + { 963 + free_page((unsigned long)b->batch_page); 964 + b->batch_page = NULL; 965 + static_branch_disable(&vmw_balloon_batching); 966 + b->batch_max_pages = 1; 967 + } 1107 968 1108 - static const struct vmballoon_ops vmballoon_batched_ops = { 1109 - .add_page = vmballoon_add_batched_page, 1110 - .lock = vmballoon_lock_batched_page, 1111 - .unlock = vmballoon_unlock_batched_page 1112 - }; 1113 - 1114 - static bool vmballoon_init_batching(struct vmballoon *b) 969 + /** 970 + * vmballoon_init_batching - enable batching mode. 971 + * 972 + * @b: pointer to &struct vmballoon. 973 + * 974 + * Enables batching, by allocating a page for communication with the hypervisor 975 + * and enabling the static_key to use batching. 976 + * 977 + * Return: zero on success or an appropriate error-code. 978 + */ 979 + static int vmballoon_init_batching(struct vmballoon *b) 1115 980 { 1116 981 struct page *page; 1117 982 1118 983 page = alloc_page(GFP_KERNEL | __GFP_ZERO); 1119 984 if (!page) 1120 - return false; 985 + return -ENOMEM; 1121 986 1122 987 b->batch_page = page_address(page); 1123 - return true; 988 + b->batch_max_pages = PAGE_SIZE / sizeof(struct vmballoon_batch_entry); 989 + 990 + static_branch_enable(&vmw_balloon_batching); 991 + 992 + return 0; 1124 993 } 1125 994 1126 995 /* ··· 1197 932 { 1198 933 struct vmballoon *b = client_data; 1199 934 1200 - STATS_INC(b->stats.doorbell); 935 + vmballoon_stats_gen_inc(b, VMW_BALLOON_STAT_DOORBELL); 1201 936 1202 937 mod_delayed_work(system_freezable_wq, &b->dwork, 0); 1203 938 } ··· 1207 942 */ 1208 943 static void vmballoon_vmci_cleanup(struct vmballoon *b) 1209 944 { 1210 - int error; 1211 - 1212 - VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET, VMCI_INVALID_ID, 1213 - VMCI_INVALID_ID, error); 1214 - STATS_INC(b->stats.doorbell_unset); 945 + vmballoon_cmd(b, VMW_BALLOON_CMD_VMCI_DOORBELL_SET, 946 + VMCI_INVALID_ID, VMCI_INVALID_ID); 1215 947 1216 948 if 
(!vmci_handle_is_invalid(b->vmci_doorbell)) { 1217 949 vmci_doorbell_destroy(b->vmci_doorbell); ··· 1216 954 } 1217 955 } 1218 956 1219 - /* 1220 - * Initialize vmci doorbell, to get notified as soon as balloon changes 957 + /** 958 + * vmballoon_vmci_init - Initialize vmci doorbell. 959 + * 960 + * @b: pointer to the balloon. 961 + * 962 + * Return: zero on success or when wakeup command not supported. Error-code 963 + * otherwise. 964 + * 965 + * Initialize vmci doorbell, to get notified as soon as balloon changes. 1221 966 */ 1222 967 static int vmballoon_vmci_init(struct vmballoon *b) 1223 968 { 1224 - unsigned long error, dummy; 969 + unsigned long error; 1225 970 1226 971 if ((b->capabilities & VMW_BALLOON_SIGNALLED_WAKEUP_CMD) == 0) 1227 972 return 0; ··· 1240 971 if (error != VMCI_SUCCESS) 1241 972 goto fail; 1242 973 1243 - error = VMWARE_BALLOON_CMD(VMCI_DOORBELL_SET, b->vmci_doorbell.context, 1244 - b->vmci_doorbell.resource, dummy); 1245 - 1246 - STATS_INC(b->stats.doorbell_set); 974 + error = __vmballoon_cmd(b, VMW_BALLOON_CMD_VMCI_DOORBELL_SET, 975 + b->vmci_doorbell.context, 976 + b->vmci_doorbell.resource, NULL); 1247 977 1248 978 if (error != VMW_BALLOON_SUCCESS) 1249 979 goto fail; ··· 1251 983 fail: 1252 984 vmballoon_vmci_cleanup(b); 1253 985 return -EIO; 986 + } 987 + 988 + /** 989 + * vmballoon_pop - Quickly release all pages allocate for the balloon. 990 + * 991 + * @b: pointer to the balloon. 992 + * 993 + * This function is called when host decides to "reset" balloon for one reason 994 + * or another. Unlike normal "deflate" we do not (shall not) notify host of the 995 + * pages being released. 
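`vmballoon_init_batching()` above derives `batch_max_pages` as `PAGE_SIZE / sizeof(struct vmballoon_batch_entry)`. Assuming the 64-bit batch entry layout this patch series introduces (a 52-bit PFN plus status bits) and 4KB pages, one communication page describes 512 entries:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096		/* assumption: 4KB base pages */

/* Assumed layout of one batch entry: 52-bit PFN plus status bits,
 * packed into a single u64, so each entry occupies 8 bytes. */
struct batch_entry {
	uint64_t status : 5;
	uint64_t reserved : 5;
	uint64_t pfn : 52;
};

/* Mirrors the batch_max_pages computation in vmballoon_init_batching(). */
static size_t batch_max_pages(void)
{
	return PAGE_SIZE / sizeof(struct batch_entry);
}
```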
996 + */ 997 + static void vmballoon_pop(struct vmballoon *b) 998 + { 999 + unsigned long size; 1000 + 1001 + while ((size = atomic64_read(&b->size))) 1002 + vmballoon_deflate(b, size, false); 1254 1003 } 1255 1004 1256 1005 /* ··· 1279 994 { 1280 995 int error; 1281 996 997 + down_write(&b->conf_sem); 998 + 1282 999 vmballoon_vmci_cleanup(b); 1283 1000 1284 1001 /* free all pages, skipping monitor unlock */ 1285 1002 vmballoon_pop(b); 1286 1003 1287 - if (!vmballoon_send_start(b, VMW_BALLOON_CAPABILITIES)) 1004 + if (vmballoon_send_start(b, VMW_BALLOON_CAPABILITIES)) 1288 1005 return; 1289 1006 1290 1007 if ((b->capabilities & VMW_BALLOON_BATCHED_CMDS) != 0) { 1291 - b->ops = &vmballoon_batched_ops; 1292 - b->batch_max_pages = VMW_BALLOON_BATCH_MAX_PAGES; 1293 - if (!vmballoon_init_batching(b)) { 1008 + if (vmballoon_init_batching(b)) { 1294 1009 /* 1295 1010 * We failed to initialize batching, inform the monitor 1296 1011 * about it by sending a null capability. ··· 1301 1016 return; 1302 1017 } 1303 1018 } else if ((b->capabilities & VMW_BALLOON_BASIC_CMDS) != 0) { 1304 - b->ops = &vmballoon_basic_ops; 1305 - b->batch_max_pages = 1; 1019 + vmballoon_deinit_batching(b); 1306 1020 } 1307 1021 1022 + vmballoon_stats_gen_inc(b, VMW_BALLOON_STAT_RESET); 1308 1023 b->reset_required = false; 1309 1024 1310 1025 error = vmballoon_vmci_init(b); 1311 1026 if (error) 1312 1027 pr_err("failed to initialize vmci doorbell\n"); 1313 1028 1314 - if (!vmballoon_send_guest_id(b)) 1029 + if (vmballoon_send_guest_id(b)) 1315 1030 pr_err("failed to send guest ID to the host\n"); 1031 + 1032 + up_write(&b->conf_sem); 1316 1033 } 1317 1034 1318 - /* 1319 - * Balloon work function: reset protocol, if needed, get the new size and 1320 - * adjust balloon as needed. Repeat in 1 sec. 1035 + /** 1036 + * vmballoon_work - periodic balloon worker for reset, inflation and deflation. 1037 + * 1038 + * @work: pointer to the &work_struct which is provided by the workqueue. 
1039 + * 1040 + * Resets the protocol if needed, gets the new size and adjusts balloon as 1041 + * needed. Repeat in 1 sec. 1321 1042 */ 1322 1043 static void vmballoon_work(struct work_struct *work) 1323 1044 { 1324 1045 struct delayed_work *dwork = to_delayed_work(work); 1325 1046 struct vmballoon *b = container_of(dwork, struct vmballoon, dwork); 1326 - unsigned int target; 1327 - 1328 - STATS_INC(b->stats.timer); 1047 + int64_t change = 0; 1329 1048 1330 1049 if (b->reset_required) 1331 1050 vmballoon_reset(b); 1332 1051 1333 - if (!b->reset_required && vmballoon_send_get_target(b, &target)) { 1334 - /* update target, adjust size */ 1335 - b->target = target; 1052 + down_read(&b->conf_sem); 1336 1053 1337 - if (b->size < target) 1054 + /* 1055 + * Update the stats while holding the semaphore to ensure that 1056 + * @stats_enabled is consistent with whether the stats are actually 1057 + * enabled 1058 + */ 1059 + vmballoon_stats_gen_inc(b, VMW_BALLOON_STAT_TIMER); 1060 + 1061 + if (!vmballoon_send_get_target(b)) 1062 + change = vmballoon_change(b); 1063 + 1064 + if (change != 0) { 1065 + pr_debug("%s - size: %llu, target %lu\n", __func__, 1066 + atomic64_read(&b->size), READ_ONCE(b->target)); 1067 + 1068 + if (change > 0) 1338 1069 vmballoon_inflate(b); 1339 - else if (target == 0 || 1340 - b->size > target + vmballoon_page_size(true)) 1341 - vmballoon_deflate(b); 1070 + else /* (change < 0) */ 1071 + vmballoon_deflate(b, 0, true); 1342 1072 } 1073 + 1074 + up_read(&b->conf_sem); 1343 1075 1344 1076 /* 1345 1077 * We are using a freezable workqueue so that balloon operations are ··· 1364 1062 */ 1365 1063 queue_delayed_work(system_freezable_wq, 1366 1064 dwork, round_jiffies_relative(HZ)); 1065 + 1367 1066 } 1368 1067 1369 1068 /* ··· 1372 1069 */ 1373 1070 #ifdef CONFIG_DEBUG_FS 1374 1071 1072 + static const char * const vmballoon_stat_page_names[] = { 1073 + [VMW_BALLOON_PAGE_STAT_ALLOC] = "alloc", 1074 + [VMW_BALLOON_PAGE_STAT_ALLOC_FAIL] = "allocFail", 1075 
+ [VMW_BALLOON_PAGE_STAT_REFUSED_ALLOC] = "errAlloc", 1076 + [VMW_BALLOON_PAGE_STAT_REFUSED_FREE] = "errFree", 1077 + [VMW_BALLOON_PAGE_STAT_FREE] = "free" 1078 + }; 1079 + 1080 + static const char * const vmballoon_stat_names[] = { 1081 + [VMW_BALLOON_STAT_TIMER] = "timer", 1082 + [VMW_BALLOON_STAT_DOORBELL] = "doorbell", 1083 + [VMW_BALLOON_STAT_RESET] = "reset", 1084 + }; 1085 + 1086 + static int vmballoon_enable_stats(struct vmballoon *b) 1087 + { 1088 + int r = 0; 1089 + 1090 + down_write(&b->conf_sem); 1091 + 1092 + /* did we somehow race with another reader which enabled stats? */ 1093 + if (b->stats) 1094 + goto out; 1095 + 1096 + b->stats = kzalloc(sizeof(*b->stats), GFP_KERNEL); 1097 + 1098 + if (!b->stats) { 1099 + /* allocation failed */ 1100 + r = -ENOMEM; 1101 + goto out; 1102 + } 1103 + static_key_enable(&balloon_stat_enabled.key); 1104 + out: 1105 + up_write(&b->conf_sem); 1106 + return r; 1107 + } 1108 + 1109 + /** 1110 + * vmballoon_debug_show - shows statistics of balloon operations. 1111 + * @f: pointer to the &struct seq_file. 1112 + * @offset: ignored. 1113 + * 1114 + * Provides the statistics that can be accessed in vmmemctl in the debugfs. 1115 + * To avoid the overhead - mainly that of memory - of collecting the statistics, 1116 + * we only collect statistics after the first time the counters are read. 1117 + * 1118 + * Return: zero on success or an error code. 
1119 + */ 1375 1120 static int vmballoon_debug_show(struct seq_file *f, void *offset) 1376 1121 { 1377 1122 struct vmballoon *b = f->private; 1378 - struct vmballoon_stats *stats = &b->stats; 1123 + int i, j; 1124 + 1125 + /* enables stats if they are disabled */ 1126 + if (!b->stats) { 1127 + int r = vmballoon_enable_stats(b); 1128 + 1129 + if (r) 1130 + return r; 1131 + } 1379 1132 1380 1133 /* format capabilities info */ 1381 - seq_printf(f, 1382 - "balloon capabilities: %#4x\n" 1383 - "used capabilities: %#4lx\n" 1384 - "is resetting: %c\n", 1385 - VMW_BALLOON_CAPABILITIES, b->capabilities, 1386 - b->reset_required ? 'y' : 'n'); 1134 + seq_printf(f, "%-22s: %#16x\n", "balloon capabilities", 1135 + VMW_BALLOON_CAPABILITIES); 1136 + seq_printf(f, "%-22s: %#16lx\n", "used capabilities", b->capabilities); 1137 + seq_printf(f, "%-22s: %16s\n", "is resetting", 1138 + b->reset_required ? "y" : "n"); 1387 1139 1388 1140 /* format size info */ 1389 - seq_printf(f, 1390 - "target: %8d pages\n" 1391 - "current: %8d pages\n", 1392 - b->target, b->size); 1141 + seq_printf(f, "%-22s: %16lu\n", "target", READ_ONCE(b->target)); 1142 + seq_printf(f, "%-22s: %16llu\n", "current", atomic64_read(&b->size)); 1393 1143 1394 - seq_printf(f, 1395 - "\n" 1396 - "timer: %8u\n" 1397 - "doorbell: %8u\n" 1398 - "start: %8u (%4u failed)\n" 1399 - "guestType: %8u (%4u failed)\n" 1400 - "2m-lock: %8u (%4u failed)\n" 1401 - "lock: %8u (%4u failed)\n" 1402 - "2m-unlock: %8u (%4u failed)\n" 1403 - "unlock: %8u (%4u failed)\n" 1404 - "target: %8u (%4u failed)\n" 1405 - "prim2mAlloc: %8u (%4u failed)\n" 1406 - "primNoSleepAlloc: %8u (%4u failed)\n" 1407 - "primCanSleepAlloc: %8u (%4u failed)\n" 1408 - "prim2mFree: %8u\n" 1409 - "primFree: %8u\n" 1410 - "err2mAlloc: %8u\n" 1411 - "errAlloc: %8u\n" 1412 - "err2mFree: %8u\n" 1413 - "errFree: %8u\n" 1414 - "doorbellSet: %8u\n" 1415 - "doorbellUnset: %8u\n", 1416 - stats->timer, 1417 - stats->doorbell, 1418 - stats->start, stats->start_fail, 1419 - 
stats->guest_type, stats->guest_type_fail, 1420 - stats->lock[true], stats->lock_fail[true], 1421 - stats->lock[false], stats->lock_fail[false], 1422 - stats->unlock[true], stats->unlock_fail[true], 1423 - stats->unlock[false], stats->unlock_fail[false], 1424 - stats->target, stats->target_fail, 1425 - stats->alloc[true], stats->alloc_fail[true], 1426 - stats->alloc[false], stats->alloc_fail[false], 1427 - stats->sleep_alloc, stats->sleep_alloc_fail, 1428 - stats->free[true], 1429 - stats->free[false], 1430 - stats->refused_alloc[true], stats->refused_alloc[false], 1431 - stats->refused_free[true], stats->refused_free[false], 1432 - stats->doorbell_set, stats->doorbell_unset); 1144 + for (i = 0; i < VMW_BALLOON_CMD_NUM; i++) { 1145 + if (vmballoon_cmd_names[i] == NULL) 1146 + continue; 1147 + 1148 + seq_printf(f, "%-22s: %16llu (%llu failed)\n", 1149 + vmballoon_cmd_names[i], 1150 + atomic64_read(&b->stats->ops[i][VMW_BALLOON_OP_STAT]), 1151 + atomic64_read(&b->stats->ops[i][VMW_BALLOON_OP_FAIL_STAT])); 1152 + } 1153 + 1154 + for (i = 0; i < VMW_BALLOON_STAT_NUM; i++) 1155 + seq_printf(f, "%-22s: %16llu\n", 1156 + vmballoon_stat_names[i], 1157 + atomic64_read(&b->stats->general_stat[i])); 1158 + 1159 + for (i = 0; i < VMW_BALLOON_PAGE_STAT_NUM; i++) { 1160 + for (j = 0; j < VMW_BALLOON_NUM_PAGE_SIZES; j++) 1161 + seq_printf(f, "%-18s(%s): %16llu\n", 1162 + vmballoon_stat_page_names[i], 1163 + vmballoon_page_size_names[j], 1164 + atomic64_read(&b->stats->page_stat[i][j])); 1165 + } 1433 1166 1434 1167 return 0; 1435 1168 } ··· 1500 1161 1501 1162 static void __exit vmballoon_debugfs_exit(struct vmballoon *b) 1502 1163 { 1164 + static_key_disable(&balloon_stat_enabled.key); 1503 1165 debugfs_remove(b->dbg_entry); 1166 + kfree(b->stats); 1167 + b->stats = NULL; 1504 1168 } 1505 1169 1506 1170 #else ··· 1521 1179 1522 1180 static int __init vmballoon_init(void) 1523 1181 { 1182 + enum vmballoon_page_size_type page_size; 1524 1183 int error; 1525 - unsigned is_2m_pages; 
1184 + 1526 1185 /* 1527 1186 * Check if we are running on VMware's hypervisor and bail out 1528 1187 * if we are not. ··· 1531 1188 if (x86_hyper_type != X86_HYPER_VMWARE) 1532 1189 return -ENODEV; 1533 1190 1534 - for (is_2m_pages = 0; is_2m_pages < VMW_BALLOON_NUM_PAGE_SIZES; 1535 - is_2m_pages++) { 1536 - INIT_LIST_HEAD(&balloon.page_sizes[is_2m_pages].pages); 1537 - INIT_LIST_HEAD(&balloon.page_sizes[is_2m_pages].refused_pages); 1538 - } 1191 + for (page_size = VMW_BALLOON_4K_PAGE; 1192 + page_size <= VMW_BALLOON_LAST_SIZE; page_size++) 1193 + INIT_LIST_HEAD(&balloon.page_sizes[page_size].pages); 1194 + 1539 1195 1540 1196 INIT_DELAYED_WORK(&balloon.dwork, vmballoon_work); 1541 1197 ··· 1542 1200 if (error) 1543 1201 return error; 1544 1202 1203 + spin_lock_init(&balloon.comm_lock); 1204 + init_rwsem(&balloon.conf_sem); 1545 1205 balloon.vmci_doorbell = VMCI_INVALID_HANDLE; 1546 1206 balloon.batch_page = NULL; 1547 1207 balloon.page = NULL;
+1 -1
drivers/misc/vmw_vmci/vmci_driver.c
··· 113 113 114 114 MODULE_AUTHOR("VMware, Inc."); 115 115 MODULE_DESCRIPTION("VMware Virtual Machine Communication Interface."); 116 - MODULE_VERSION("1.1.5.0-k"); 116 + MODULE_VERSION("1.1.6.0-k"); 117 117 MODULE_LICENSE("GPL v2");
-4
drivers/misc/vmw_vmci/vmci_host.c
··· 15 15 16 16 #include <linux/vmw_vmci_defs.h> 17 17 #include <linux/vmw_vmci_api.h> 18 - #include <linux/moduleparam.h> 19 18 #include <linux/miscdevice.h> 20 19 #include <linux/interrupt.h> 21 20 #include <linux/highmem.h> ··· 447 448 struct vmci_handle handle; 448 449 int vmci_status; 449 450 int __user *retptr; 450 - u32 cid; 451 451 452 452 if (vmci_host_dev->ct_type != VMCIOBJ_CONTEXT) { 453 453 vmci_ioctl_err("only valid for contexts\n"); 454 454 return -EINVAL; 455 455 } 456 - 457 - cid = vmci_ctx_get_id(vmci_host_dev->context); 458 456 459 457 if (vmci_host_dev->user_version < VMCI_VERSION_NOVMVM) { 460 458 struct vmci_qp_alloc_info_vmvm alloc_info;
+2 -1
drivers/misc/vmw_vmci/vmci_resource.c
··· 57 57 58 58 if (r->type == type && 59 59 rid == handle.resource && 60 - (cid == handle.context || cid == VMCI_INVALID_ID)) { 60 + (cid == handle.context || cid == VMCI_INVALID_ID || 61 + handle.context == VMCI_INVALID_ID)) { 61 62 resource = r; 62 63 break; 63 64 }
+358 -175
drivers/nvmem/core.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * nvmem framework core. 3 4 * 4 5 * Copyright (C) 2015 Srinivas Kandagatla <srinivas.kandagatla@linaro.org> 5 6 * Copyright (C) 2013 Maxime Ripard <maxime.ripard@free-electrons.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 and 9 - * only version 2 as published by the Free Software Foundation. 10 - * 11 - * This program is distributed in the hope that it will be useful, 12 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 13 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 14 - * GNU General Public License for more details. 15 7 */ 16 8 17 9 #include <linux/device.h> ··· 11 19 #include <linux/fs.h> 12 20 #include <linux/idr.h> 13 21 #include <linux/init.h> 22 + #include <linux/kref.h> 14 23 #include <linux/module.h> 15 24 #include <linux/nvmem-consumer.h> 16 25 #include <linux/nvmem-provider.h> ··· 19 26 #include <linux/slab.h> 20 27 21 28 struct nvmem_device { 22 - const char *name; 23 29 struct module *owner; 24 30 struct device dev; 25 31 int stride; 26 32 int word_size; 27 33 int id; 28 - int users; 34 + struct kref refcnt; 29 35 size_t size; 30 36 bool read_only; 31 37 int flags; 32 38 struct bin_attribute eeprom; 33 39 struct device *base_dev; 40 + struct list_head cells; 34 41 nvmem_reg_read_t reg_read; 35 42 nvmem_reg_write_t reg_write; 36 43 void *priv; ··· 51 58 static DEFINE_MUTEX(nvmem_mutex); 52 59 static DEFINE_IDA(nvmem_ida); 53 60 54 - static LIST_HEAD(nvmem_cells); 55 - static DEFINE_MUTEX(nvmem_cells_mutex); 61 + static DEFINE_MUTEX(nvmem_cell_mutex); 62 + static LIST_HEAD(nvmem_cell_tables); 63 + 64 + static DEFINE_MUTEX(nvmem_lookup_mutex); 65 + static LIST_HEAD(nvmem_lookup_list); 66 + 67 + static BLOCKING_NOTIFIER_HEAD(nvmem_notifier); 56 68 57 69 #ifdef CONFIG_DEBUG_LOCK_ALLOC 58 70 static struct lock_class_key eeprom_lock_key; ··· 154 156 static 
struct bin_attribute bin_attr_rw_nvmem = { 155 157 .attr = { 156 158 .name = "nvmem", 157 - .mode = S_IWUSR | S_IRUGO, 159 + .mode = 0644, 158 160 }, 159 161 .read = bin_attr_nvmem_read, 160 162 .write = bin_attr_nvmem_write, ··· 178 180 static struct bin_attribute bin_attr_ro_nvmem = { 179 181 .attr = { 180 182 .name = "nvmem", 181 - .mode = S_IRUGO, 183 + .mode = 0444, 182 184 }, 183 185 .read = bin_attr_nvmem_read, 184 186 }; ··· 201 203 static struct bin_attribute bin_attr_rw_root_nvmem = { 202 204 .attr = { 203 205 .name = "nvmem", 204 - .mode = S_IWUSR | S_IRUSR, 206 + .mode = 0600, 205 207 }, 206 208 .read = bin_attr_nvmem_read, 207 209 .write = bin_attr_nvmem_write, ··· 225 227 static struct bin_attribute bin_attr_ro_root_nvmem = { 226 228 .attr = { 227 229 .name = "nvmem", 228 - .mode = S_IRUSR, 230 + .mode = 0400, 229 231 }, 230 232 .read = bin_attr_nvmem_read, 231 233 }; ··· 280 282 return to_nvmem_device(d); 281 283 } 282 284 283 - static struct nvmem_cell *nvmem_find_cell(const char *cell_id) 285 + static struct nvmem_device *nvmem_find(const char *name) 284 286 { 285 - struct nvmem_cell *p; 287 + struct device *d; 286 288 287 - mutex_lock(&nvmem_cells_mutex); 289 + d = bus_find_device_by_name(&nvmem_bus_type, NULL, name); 288 290 289 - list_for_each_entry(p, &nvmem_cells, node) 290 - if (!strcmp(p->name, cell_id)) { 291 - mutex_unlock(&nvmem_cells_mutex); 292 - return p; 293 - } 291 + if (!d) 292 + return NULL; 294 293 295 - mutex_unlock(&nvmem_cells_mutex); 296 - 297 - return NULL; 294 + return to_nvmem_device(d); 298 295 } 299 296 300 297 static void nvmem_cell_drop(struct nvmem_cell *cell) 301 298 { 302 - mutex_lock(&nvmem_cells_mutex); 299 + blocking_notifier_call_chain(&nvmem_notifier, NVMEM_CELL_REMOVE, cell); 300 + mutex_lock(&nvmem_mutex); 303 301 list_del(&cell->node); 304 - mutex_unlock(&nvmem_cells_mutex); 302 + mutex_unlock(&nvmem_mutex); 303 + kfree(cell->name); 305 304 kfree(cell); 306 305 } 307 306 308 307 static void 
nvmem_device_remove_all_cells(const struct nvmem_device *nvmem) 309 308 { 310 - struct nvmem_cell *cell; 311 - struct list_head *p, *n; 309 + struct nvmem_cell *cell, *p; 312 310 313 - list_for_each_safe(p, n, &nvmem_cells) { 314 - cell = list_entry(p, struct nvmem_cell, node); 315 - if (cell->nvmem == nvmem) 316 - nvmem_cell_drop(cell); 317 - } 311 + list_for_each_entry_safe(cell, p, &nvmem->cells, node) 312 + nvmem_cell_drop(cell); 318 313 } 319 314 320 315 static void nvmem_cell_add(struct nvmem_cell *cell) 321 316 { 322 - mutex_lock(&nvmem_cells_mutex); 323 - list_add_tail(&cell->node, &nvmem_cells); 324 - mutex_unlock(&nvmem_cells_mutex); 317 + mutex_lock(&nvmem_mutex); 318 + list_add_tail(&cell->node, &cell->nvmem->cells); 319 + mutex_unlock(&nvmem_mutex); 320 + blocking_notifier_call_chain(&nvmem_notifier, NVMEM_CELL_ADD, cell); 325 321 } 326 322 327 323 static int nvmem_cell_info_to_nvmem_cell(struct nvmem_device *nvmem, ··· 353 361 * 354 362 * Return: 0 or negative error code on failure. 355 363 */ 356 - int nvmem_add_cells(struct nvmem_device *nvmem, 364 + static int nvmem_add_cells(struct nvmem_device *nvmem, 357 365 const struct nvmem_cell_info *info, 358 366 int ncells) 359 367 { ··· 392 400 393 401 return rval; 394 402 } 395 - EXPORT_SYMBOL_GPL(nvmem_add_cells); 396 403 397 404 /* 398 405 * nvmem_setup_compat() - Create an additional binary entry in ··· 431 440 } 432 441 433 442 /** 443 + * nvmem_register_notifier() - Register a notifier block for nvmem events. 444 + * 445 + * @nb: notifier block to be called on nvmem events. 446 + * 447 + * Return: 0 on success, negative error number on failure. 448 + */ 449 + int nvmem_register_notifier(struct notifier_block *nb) 450 + { 451 + return blocking_notifier_chain_register(&nvmem_notifier, nb); 452 + } 453 + EXPORT_SYMBOL_GPL(nvmem_register_notifier); 454 + 455 + /** 456 + * nvmem_unregister_notifier() - Unregister a notifier block for nvmem events. 457 + * 458 + * @nb: notifier block to be unregistered. 
459 + * 460 + * Return: 0 on success, negative error number on failure. 461 + */ 462 + int nvmem_unregister_notifier(struct notifier_block *nb) 463 + { 464 + return blocking_notifier_chain_unregister(&nvmem_notifier, nb); 465 + } 466 + EXPORT_SYMBOL_GPL(nvmem_unregister_notifier); 467 + 468 + static int nvmem_add_cells_from_table(struct nvmem_device *nvmem) 469 + { 470 + const struct nvmem_cell_info *info; 471 + struct nvmem_cell_table *table; 472 + struct nvmem_cell *cell; 473 + int rval = 0, i; 474 + 475 + mutex_lock(&nvmem_cell_mutex); 476 + list_for_each_entry(table, &nvmem_cell_tables, node) { 477 + if (strcmp(nvmem_dev_name(nvmem), table->nvmem_name) == 0) { 478 + for (i = 0; i < table->ncells; i++) { 479 + info = &table->cells[i]; 480 + 481 + cell = kzalloc(sizeof(*cell), GFP_KERNEL); 482 + if (!cell) { 483 + rval = -ENOMEM; 484 + goto out; 485 + } 486 + 487 + rval = nvmem_cell_info_to_nvmem_cell(nvmem, 488 + info, 489 + cell); 490 + if (rval) { 491 + kfree(cell); 492 + goto out; 493 + } 494 + 495 + nvmem_cell_add(cell); 496 + } 497 + } 498 + } 499 + 500 + out: 501 + mutex_unlock(&nvmem_cell_mutex); 502 + return rval; 503 + } 504 + 505 + static struct nvmem_cell * 506 + nvmem_find_cell_by_name(struct nvmem_device *nvmem, const char *cell_id) 507 + { 508 + struct nvmem_cell *cell = NULL; 509 + 510 + mutex_lock(&nvmem_mutex); 511 + list_for_each_entry(cell, &nvmem->cells, node) { 512 + if (strcmp(cell_id, cell->name) == 0) 513 + break; 514 + } 515 + mutex_unlock(&nvmem_mutex); 516 + 517 + return cell; 518 + } 519 + 520 + static int nvmem_add_cells_from_of(struct nvmem_device *nvmem) 521 + { 522 + struct device_node *parent, *child; 523 + struct device *dev = &nvmem->dev; 524 + struct nvmem_cell *cell; 525 + const __be32 *addr; 526 + int len; 527 + 528 + parent = dev->of_node; 529 + 530 + for_each_child_of_node(parent, child) { 531 + addr = of_get_property(child, "reg", &len); 532 + if (!addr || (len < 2 * sizeof(u32))) { 533 + dev_err(dev, "nvmem: invalid reg 
on %pOF\n", child); 534 + return -EINVAL; 535 + } 536 + 537 + cell = kzalloc(sizeof(*cell), GFP_KERNEL); 538 + if (!cell) 539 + return -ENOMEM; 540 + 541 + cell->nvmem = nvmem; 542 + cell->offset = be32_to_cpup(addr++); 543 + cell->bytes = be32_to_cpup(addr); 544 + cell->name = kasprintf(GFP_KERNEL, "%pOFn", child); 545 + 546 + addr = of_get_property(child, "bits", &len); 547 + if (addr && len == (2 * sizeof(u32))) { 548 + cell->bit_offset = be32_to_cpup(addr++); 549 + cell->nbits = be32_to_cpup(addr); 550 + } 551 + 552 + if (cell->nbits) 553 + cell->bytes = DIV_ROUND_UP( 554 + cell->nbits + cell->bit_offset, 555 + BITS_PER_BYTE); 556 + 557 + if (!IS_ALIGNED(cell->offset, nvmem->stride)) { 558 + dev_err(dev, "cell %s unaligned to nvmem stride %d\n", 559 + cell->name, nvmem->stride); 560 + /* Cells already added will be freed later. */ 561 + kfree(cell->name); 562 + kfree(cell); 563 + return -EINVAL; 564 + } 565 + 566 + nvmem_cell_add(cell); 567 + } 568 + 569 + return 0; 570 + } 571 + 572 + /** 434 573 * nvmem_register() - Register a nvmem device for given nvmem_config. 
435 574 * Also creates an binary entry in /sys/bus/nvmem/devices/dev-name/nvmem 436 575 * ··· 587 466 kfree(nvmem); 588 467 return ERR_PTR(rval); 589 468 } 469 + 470 + kref_init(&nvmem->refcnt); 471 + INIT_LIST_HEAD(&nvmem->cells); 590 472 591 473 nvmem->id = rval; 592 474 nvmem->owner = config->owner; ··· 640 516 goto err_device_del; 641 517 } 642 518 643 - if (config->cells) 644 - nvmem_add_cells(nvmem, config->cells, config->ncells); 519 + if (config->cells) { 520 + rval = nvmem_add_cells(nvmem, config->cells, config->ncells); 521 + if (rval) 522 + goto err_teardown_compat; 523 + } 524 + 525 + rval = nvmem_add_cells_from_table(nvmem); 526 + if (rval) 527 + goto err_remove_cells; 528 + 529 + rval = nvmem_add_cells_from_of(nvmem); 530 + if (rval) 531 + goto err_remove_cells; 532 + 533 + rval = blocking_notifier_call_chain(&nvmem_notifier, NVMEM_ADD, nvmem); 534 + if (rval) 535 + goto err_remove_cells; 645 536 646 537 return nvmem; 647 538 539 + err_remove_cells: 540 + nvmem_device_remove_all_cells(nvmem); 541 + err_teardown_compat: 542 + if (config->compat) 543 + device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom); 648 544 err_device_del: 649 545 device_del(&nvmem->dev); 650 546 err_put_device: ··· 674 530 } 675 531 EXPORT_SYMBOL_GPL(nvmem_register); 676 532 677 - /** 678 - * nvmem_unregister() - Unregister previously registered nvmem device 679 - * 680 - * @nvmem: Pointer to previously registered nvmem device. 681 - * 682 - * Return: Will be an negative on error or a zero on success. 
683 - */ 684 - int nvmem_unregister(struct nvmem_device *nvmem) 533 + static void nvmem_device_release(struct kref *kref) 685 534 { 686 - mutex_lock(&nvmem_mutex); 687 - if (nvmem->users) { 688 - mutex_unlock(&nvmem_mutex); 689 - return -EBUSY; 690 - } 691 - mutex_unlock(&nvmem_mutex); 535 + struct nvmem_device *nvmem; 536 + 537 + nvmem = container_of(kref, struct nvmem_device, refcnt); 538 + 539 + blocking_notifier_call_chain(&nvmem_notifier, NVMEM_REMOVE, nvmem); 692 540 693 541 if (nvmem->flags & FLAG_COMPAT) 694 542 device_remove_bin_file(nvmem->base_dev, &nvmem->eeprom); ··· 688 552 nvmem_device_remove_all_cells(nvmem); 689 553 device_del(&nvmem->dev); 690 554 put_device(&nvmem->dev); 555 + } 691 556 692 - return 0; 557 + /** 558 + * nvmem_unregister() - Unregister previously registered nvmem device 559 + * 560 + * @nvmem: Pointer to previously registered nvmem device. 561 + */ 562 + void nvmem_unregister(struct nvmem_device *nvmem) 563 + { 564 + kref_put(&nvmem->refcnt, nvmem_device_release); 693 565 } 694 566 EXPORT_SYMBOL_GPL(nvmem_unregister); 695 567 696 568 static void devm_nvmem_release(struct device *dev, void *res) 697 569 { 698 - WARN_ON(nvmem_unregister(*(struct nvmem_device **)res)); 570 + nvmem_unregister(*(struct nvmem_device **)res); 699 571 } 700 572 701 573 /** ··· 761 617 } 762 618 EXPORT_SYMBOL(devm_nvmem_unregister); 763 619 764 - 765 620 static struct nvmem_device *__nvmem_device_get(struct device_node *np, 766 - struct nvmem_cell **cellp, 767 - const char *cell_id) 621 + const char *nvmem_name) 768 622 { 769 623 struct nvmem_device *nvmem = NULL; 770 624 771 625 mutex_lock(&nvmem_mutex); 772 - 773 - if (np) { 774 - nvmem = of_nvmem_find(np); 775 - if (!nvmem) { 776 - mutex_unlock(&nvmem_mutex); 777 - return ERR_PTR(-EPROBE_DEFER); 778 - } 779 - } else { 780 - struct nvmem_cell *cell = nvmem_find_cell(cell_id); 781 - 782 - if (cell) { 783 - nvmem = cell->nvmem; 784 - *cellp = cell; 785 - } 786 - 787 - if (!nvmem) { 788 - 
mutex_unlock(&nvmem_mutex); 789 - return ERR_PTR(-ENOENT); 790 - } 791 - } 792 - 793 - nvmem->users++; 626 + nvmem = np ? of_nvmem_find(np) : nvmem_find(nvmem_name); 794 627 mutex_unlock(&nvmem_mutex); 628 + if (!nvmem) 629 + return ERR_PTR(-EPROBE_DEFER); 795 630 796 631 if (!try_module_get(nvmem->owner)) { 797 632 dev_err(&nvmem->dev, 798 633 "could not increase module refcount for cell %s\n", 799 - nvmem->name); 800 - 801 - mutex_lock(&nvmem_mutex); 802 - nvmem->users--; 803 - mutex_unlock(&nvmem_mutex); 634 + nvmem_dev_name(nvmem)); 804 635 805 636 return ERR_PTR(-EINVAL); 806 637 } 638 + 639 + kref_get(&nvmem->refcnt); 807 640 808 641 return nvmem; 809 642 } ··· 788 667 static void __nvmem_device_put(struct nvmem_device *nvmem) 789 668 { 790 669 module_put(nvmem->owner); 791 - mutex_lock(&nvmem_mutex); 792 - nvmem->users--; 793 - mutex_unlock(&nvmem_mutex); 794 - } 795 - 796 - static struct nvmem_device *nvmem_find(const char *name) 797 - { 798 - struct device *d; 799 - 800 - d = bus_find_device_by_name(&nvmem_bus_type, NULL, name); 801 - 802 - if (!d) 803 - return NULL; 804 - 805 - return to_nvmem_device(d); 670 + kref_put(&nvmem->refcnt, nvmem_device_release); 806 671 } 807 672 808 673 #if IS_ENABLED(CONFIG_OF) ··· 813 706 if (!nvmem_np) 814 707 return ERR_PTR(-EINVAL); 815 708 816 - return __nvmem_device_get(nvmem_np, NULL, NULL); 709 + return __nvmem_device_get(nvmem_np, NULL); 817 710 } 818 711 EXPORT_SYMBOL_GPL(of_nvmem_device_get); 819 712 #endif ··· 917 810 } 918 811 EXPORT_SYMBOL_GPL(devm_nvmem_device_get); 919 812 920 - static struct nvmem_cell *nvmem_cell_get_from_list(const char *cell_id) 813 + static struct nvmem_cell * 814 + nvmem_cell_get_from_lookup(struct device *dev, const char *con_id) 921 815 { 922 - struct nvmem_cell *cell = NULL; 816 + struct nvmem_cell *cell = ERR_PTR(-ENOENT); 817 + struct nvmem_cell_lookup *lookup; 923 818 struct nvmem_device *nvmem; 819 + const char *dev_id; 924 820 925 - nvmem = __nvmem_device_get(NULL, &cell, 
cell_id); 926 - if (IS_ERR(nvmem)) 927 - return ERR_CAST(nvmem); 821 + if (!dev) 822 + return ERR_PTR(-EINVAL); 928 823 824 + dev_id = dev_name(dev); 825 + 826 + mutex_lock(&nvmem_lookup_mutex); 827 + 828 + list_for_each_entry(lookup, &nvmem_lookup_list, node) { 829 + if ((strcmp(lookup->dev_id, dev_id) == 0) && 830 + (strcmp(lookup->con_id, con_id) == 0)) { 831 + /* This is the right entry. */ 832 + nvmem = __nvmem_device_get(NULL, lookup->nvmem_name); 833 + if (IS_ERR(nvmem)) { 834 + /* Provider may not be registered yet. */ 835 + cell = ERR_CAST(nvmem); 836 + goto out; 837 + } 838 + 839 + cell = nvmem_find_cell_by_name(nvmem, 840 + lookup->cell_name); 841 + if (!cell) { 842 + __nvmem_device_put(nvmem); 843 + cell = ERR_PTR(-ENOENT); 844 + goto out; 845 + } 846 + } 847 + } 848 + 849 + out: 850 + mutex_unlock(&nvmem_lookup_mutex); 929 851 return cell; 930 852 } 931 853 932 854 #if IS_ENABLED(CONFIG_OF) 855 + static struct nvmem_cell * 856 + nvmem_find_cell_by_index(struct nvmem_device *nvmem, int index) 857 + { 858 + struct nvmem_cell *cell = NULL; 859 + int i = 0; 860 + 861 + mutex_lock(&nvmem_mutex); 862 + list_for_each_entry(cell, &nvmem->cells, node) { 863 + if (index == i++) 864 + break; 865 + } 866 + mutex_unlock(&nvmem_mutex); 867 + 868 + return cell; 869 + } 870 + 933 871 /** 934 872 * of_nvmem_cell_get() - Get a nvmem cell from given device node and cell id 935 873 * 936 874 * @np: Device tree node that uses the nvmem cell. 937 - * @name: nvmem cell name from nvmem-cell-names property, or NULL 938 - * for the cell at index 0 (the lone cell with no accompanying 939 - * nvmem-cell-names property). 875 + * @id: nvmem cell name from nvmem-cell-names property, or NULL 876 + * for the cell at index 0 (the lone cell with no accompanying 877 + * nvmem-cell-names property). 940 878 * 941 879 * Return: Will be an ERR_PTR() on error or a valid pointer 942 880 * to a struct nvmem_cell. The nvmem_cell will be freed by the 943 881 * nvmem_cell_put(). 
944 882 */ 945 - struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, 946 - const char *name) 883 + struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, const char *id) 947 884 { 948 885 struct device_node *cell_np, *nvmem_np; 949 - struct nvmem_cell *cell; 950 886 struct nvmem_device *nvmem; 951 - const __be32 *addr; 952 - int rval, len; 887 + struct nvmem_cell *cell; 953 888 int index = 0; 954 889 955 890 /* if cell name exists, find index to the name */ 956 - if (name) 957 - index = of_property_match_string(np, "nvmem-cell-names", name); 891 + if (id) 892 + index = of_property_match_string(np, "nvmem-cell-names", id); 958 893 959 894 cell_np = of_parse_phandle(np, "nvmem-cells", index); 960 895 if (!cell_np) ··· 1006 857 if (!nvmem_np) 1007 858 return ERR_PTR(-EINVAL); 1008 859 1009 - nvmem = __nvmem_device_get(nvmem_np, NULL, NULL); 860 + nvmem = __nvmem_device_get(nvmem_np, NULL); 1010 861 of_node_put(nvmem_np); 1011 862 if (IS_ERR(nvmem)) 1012 863 return ERR_CAST(nvmem); 1013 864 1014 - addr = of_get_property(cell_np, "reg", &len); 1015 - if (!addr || (len < 2 * sizeof(u32))) { 1016 - dev_err(&nvmem->dev, "nvmem: invalid reg on %pOF\n", 1017 - cell_np); 1018 - rval = -EINVAL; 1019 - goto err_mem; 1020 - } 1021 - 1022 - cell = kzalloc(sizeof(*cell), GFP_KERNEL); 865 + cell = nvmem_find_cell_by_index(nvmem, index); 1023 866 if (!cell) { 1024 - rval = -ENOMEM; 1025 - goto err_mem; 867 + __nvmem_device_put(nvmem); 868 + return ERR_PTR(-ENOENT); 1026 869 } 1027 - 1028 - cell->nvmem = nvmem; 1029 - cell->offset = be32_to_cpup(addr++); 1030 - cell->bytes = be32_to_cpup(addr); 1031 - cell->name = cell_np->name; 1032 - 1033 - addr = of_get_property(cell_np, "bits", &len); 1034 - if (addr && len == (2 * sizeof(u32))) { 1035 - cell->bit_offset = be32_to_cpup(addr++); 1036 - cell->nbits = be32_to_cpup(addr); 1037 - } 1038 - 1039 - if (cell->nbits) 1040 - cell->bytes = DIV_ROUND_UP(cell->nbits + cell->bit_offset, 1041 - BITS_PER_BYTE); 1042 - 1043 - if 
(!IS_ALIGNED(cell->offset, nvmem->stride)) { 1044 - dev_err(&nvmem->dev, 1045 - "cell %s unaligned to nvmem stride %d\n", 1046 - cell->name, nvmem->stride); 1047 - rval = -EINVAL; 1048 - goto err_sanity; 1049 - } 1050 - 1051 - nvmem_cell_add(cell); 1052 870 1053 871 return cell; 1054 - 1055 - err_sanity: 1056 - kfree(cell); 1057 - 1058 - err_mem: 1059 - __nvmem_device_put(nvmem); 1060 - 1061 - return ERR_PTR(rval); 1062 872 } 1063 873 EXPORT_SYMBOL_GPL(of_nvmem_cell_get); 1064 874 #endif ··· 1026 918 * nvmem_cell_get() - Get nvmem cell of device form a given cell name 1027 919 * 1028 920 * @dev: Device that requests the nvmem cell. 1029 - * @cell_id: nvmem cell name to get. 921 + * @id: nvmem cell name to get (this corresponds with the name from the 922 + * nvmem-cell-names property for DT systems and with the con_id from 923 + * the lookup entry for non-DT systems). 1030 924 * 1031 925 * Return: Will be an ERR_PTR() on error or a valid pointer 1032 926 * to a struct nvmem_cell. The nvmem_cell will be freed by the 1033 927 * nvmem_cell_put(). 
1034 928 */ 1035 - struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *cell_id) 929 + struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *id) 1036 930 { 1037 931 struct nvmem_cell *cell; 1038 932 1039 933 if (dev->of_node) { /* try dt first */ 1040 - cell = of_nvmem_cell_get(dev->of_node, cell_id); 934 + cell = of_nvmem_cell_get(dev->of_node, id); 1041 935 if (!IS_ERR(cell) || PTR_ERR(cell) == -EPROBE_DEFER) 1042 936 return cell; 1043 937 } 1044 938 1045 - /* NULL cell_id only allowed for device tree; invalid otherwise */ 1046 - if (!cell_id) 939 + /* NULL cell id only allowed for device tree; invalid otherwise */ 940 + if (!id) 1047 941 return ERR_PTR(-EINVAL); 1048 942 1049 - return nvmem_cell_get_from_list(cell_id); 943 + return nvmem_cell_get_from_lookup(dev, id); 1050 944 } 1051 945 EXPORT_SYMBOL_GPL(nvmem_cell_get); 1052 946 ··· 1125 1015 struct nvmem_device *nvmem = cell->nvmem; 1126 1016 1127 1017 __nvmem_device_put(nvmem); 1128 - nvmem_cell_drop(cell); 1129 1018 } 1130 1019 EXPORT_SYMBOL_GPL(nvmem_cell_put); 1131 1020 ··· 1376 1267 * @buf: buffer to be written to cell. 1377 1268 * 1378 1269 * Return: length of bytes written or negative error code on failure. 1379 - * */ 1270 + */ 1380 1271 int nvmem_device_cell_write(struct nvmem_device *nvmem, 1381 1272 struct nvmem_cell_info *info, void *buf) 1382 1273 { ··· 1432 1323 * @buf: buffer to be written. 1433 1324 * 1434 1325 * Return: length of bytes written or negative error code on failure. 
1435 - * */ 1326 + */ 1436 1327 int nvmem_device_write(struct nvmem_device *nvmem, 1437 1328 unsigned int offset, 1438 1329 size_t bytes, void *buf) ··· 1451 1342 return bytes; 1452 1343 } 1453 1344 EXPORT_SYMBOL_GPL(nvmem_device_write); 1345 + 1346 + /** 1347 + * nvmem_add_cell_table() - register a table of cell info entries 1348 + * 1349 + * @table: table of cell info entries 1350 + */ 1351 + void nvmem_add_cell_table(struct nvmem_cell_table *table) 1352 + { 1353 + mutex_lock(&nvmem_cell_mutex); 1354 + list_add_tail(&table->node, &nvmem_cell_tables); 1355 + mutex_unlock(&nvmem_cell_mutex); 1356 + } 1357 + EXPORT_SYMBOL_GPL(nvmem_add_cell_table); 1358 + 1359 + /** 1360 + * nvmem_del_cell_table() - remove a previously registered cell info table 1361 + * 1362 + * @table: table of cell info entries 1363 + */ 1364 + void nvmem_del_cell_table(struct nvmem_cell_table *table) 1365 + { 1366 + mutex_lock(&nvmem_cell_mutex); 1367 + list_del(&table->node); 1368 + mutex_unlock(&nvmem_cell_mutex); 1369 + } 1370 + EXPORT_SYMBOL_GPL(nvmem_del_cell_table); 1371 + 1372 + /** 1373 + * nvmem_add_cell_lookups() - register a list of cell lookup entries 1374 + * 1375 + * @entries: array of cell lookup entries 1376 + * @nentries: number of cell lookup entries in the array 1377 + */ 1378 + void nvmem_add_cell_lookups(struct nvmem_cell_lookup *entries, size_t nentries) 1379 + { 1380 + int i; 1381 + 1382 + mutex_lock(&nvmem_lookup_mutex); 1383 + for (i = 0; i < nentries; i++) 1384 + list_add_tail(&entries[i].node, &nvmem_lookup_list); 1385 + mutex_unlock(&nvmem_lookup_mutex); 1386 + } 1387 + EXPORT_SYMBOL_GPL(nvmem_add_cell_lookups); 1388 + 1389 + /** 1390 + * nvmem_del_cell_lookups() - remove a list of previously added cell lookup 1391 + * entries 1392 + * 1393 + * @entries: array of cell lookup entries 1394 + * @nentries: number of cell lookup entries in the array 1395 + */ 1396 + void nvmem_del_cell_lookups(struct nvmem_cell_lookup *entries, size_t nentries) 1397 + { 1398 + int i; 1399 
+ 1400 + mutex_lock(&nvmem_lookup_mutex); 1401 + for (i = 0; i < nentries; i++) 1402 + list_del(&entries[i].node); 1403 + mutex_unlock(&nvmem_lookup_mutex); 1404 + } 1405 + EXPORT_SYMBOL_GPL(nvmem_del_cell_lookups); 1406 + 1407 + /** 1408 + * nvmem_dev_name() - Get the name of a given nvmem device. 1409 + * 1410 + * @nvmem: nvmem device. 1411 + * 1412 + * Return: name of the nvmem device. 1413 + */ 1414 + const char *nvmem_dev_name(struct nvmem_device *nvmem) 1415 + { 1416 + return dev_name(&nvmem->dev); 1417 + } 1418 + EXPORT_SYMBOL_GPL(nvmem_dev_name); 1454 1419 1455 1420 static int __init nvmem_init(void) 1456 1421 {
+1 -6
drivers/nvmem/lpc18xx_eeprom.c
··· 236 236 lpc18xx_nvmem_config.dev = dev; 237 237 lpc18xx_nvmem_config.priv = eeprom; 238 238 239 - eeprom->nvmem = nvmem_register(&lpc18xx_nvmem_config); 239 + eeprom->nvmem = devm_nvmem_register(dev, &lpc18xx_nvmem_config); 240 240 if (IS_ERR(eeprom->nvmem)) { 241 241 ret = PTR_ERR(eeprom->nvmem); 242 242 goto err_clk; ··· 255 255 static int lpc18xx_eeprom_remove(struct platform_device *pdev) 256 256 { 257 257 struct lpc18xx_eeprom_dev *eeprom = platform_get_drvdata(pdev); 258 - int ret; 259 - 260 - ret = nvmem_unregister(eeprom->nvmem); 261 - if (ret < 0) 262 - return ret; 263 258 264 259 clk_disable_unprepare(eeprom->clk); 265 260
+2 -2
drivers/nvmem/mxs-ocotp.c
··· 177 177 ocotp_config.size = data->size; 178 178 ocotp_config.priv = otp; 179 179 ocotp_config.dev = dev; 180 - otp->nvmem = nvmem_register(&ocotp_config); 180 + otp->nvmem = devm_nvmem_register(dev, &ocotp_config); 181 181 if (IS_ERR(otp->nvmem)) { 182 182 ret = PTR_ERR(otp->nvmem); 183 183 goto err_clk; ··· 199 199 200 200 clk_unprepare(otp->clk); 201 201 202 - return nvmem_unregister(otp->nvmem); 202 + return 0; 203 203 } 204 204 205 205 static struct platform_driver mxs_ocotp_driver = {
+4 -18
drivers/nvmem/sunxi_sid.c
··· 154 154 struct resource *res; 155 155 struct nvmem_device *nvmem; 156 156 struct sunxi_sid *sid; 157 - int ret, i, size; 157 + int i, size; 158 158 char *randomness; 159 159 const struct sunxi_sid_cfg *cfg; 160 160 ··· 181 181 else 182 182 econfig.reg_read = sunxi_sid_read; 183 183 econfig.priv = sid; 184 - nvmem = nvmem_register(&econfig); 184 + nvmem = devm_nvmem_register(dev, &econfig); 185 185 if (IS_ERR(nvmem)) 186 186 return PTR_ERR(nvmem); 187 187 188 188 randomness = kzalloc(size, GFP_KERNEL); 189 - if (!randomness) { 190 - ret = -EINVAL; 191 - goto err_unreg_nvmem; 192 - } 189 + if (!randomness) 190 + return -ENOMEM; 193 191 194 192 for (i = 0; i < size; i++) 195 193 econfig.reg_read(sid, i, &randomness[i], 1); ··· 198 200 platform_set_drvdata(pdev, nvmem); 199 201 200 202 return 0; 201 - 202 - err_unreg_nvmem: 203 - nvmem_unregister(nvmem); 204 - return ret; 205 - } 206 - 207 - static int sunxi_sid_remove(struct platform_device *pdev) 208 - { 209 - struct nvmem_device *nvmem = platform_get_drvdata(pdev); 210 - 211 - return nvmem_unregister(nvmem); 212 203 } 213 204 214 205 static const struct sunxi_sid_cfg sun4i_a10_cfg = { ··· 230 243 231 244 static struct platform_driver sunxi_sid_driver = { 232 245 .probe = sunxi_sid_probe, 233 - .remove = sunxi_sid_remove, 234 246 .driver = { 235 247 .name = "eeprom-sunxi-sid", 236 248 .of_match_table = sunxi_sid_of_match,
+191 -206
drivers/platform/goldfish/goldfish_pipe.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Copyright (C) 2012 Intel, Inc. 3 4 * Copyright (C) 2013 Intel, Inc. ··· 47 46 * exchange is properly mapped during a transfer. 48 47 */ 49 48 50 - 51 49 #include <linux/module.h> 52 50 #include <linux/mod_devicetable.h> 53 51 #include <linux/interrupt.h> ··· 59 59 #include <linux/bitops.h> 60 60 #include <linux/slab.h> 61 61 #include <linux/io.h> 62 - #include <linux/goldfish.h> 63 62 #include <linux/dma-mapping.h> 64 63 #include <linux/mm.h> 65 64 #include <linux/acpi.h> 65 + #include <linux/bug.h> 66 + #include "goldfish_pipe_qemu.h" 66 67 67 68 /* 68 69 * Update this when something changes in the driver's behavior so the host ··· 74 73 PIPE_CURRENT_DEVICE_VERSION = 2 75 74 }; 76 75 77 - /* 78 - * IMPORTANT: The following constants must match the ones used and defined 79 - * in external/qemu/hw/goldfish_pipe.c in the Android source tree. 80 - */ 81 - 82 - /* List of bitflags returned in status of CMD_POLL command */ 83 - enum PipePollFlags { 84 - PIPE_POLL_IN = 1 << 0, 85 - PIPE_POLL_OUT = 1 << 1, 86 - PIPE_POLL_HUP = 1 << 2 87 - }; 88 - 89 - /* Possible status values used to signal errors - see goldfish_pipe_error_convert */ 90 - enum PipeErrors { 91 - PIPE_ERROR_INVAL = -1, 92 - PIPE_ERROR_AGAIN = -2, 93 - PIPE_ERROR_NOMEM = -3, 94 - PIPE_ERROR_IO = -4 95 - }; 96 - 97 - /* Bit-flags used to signal events from the emulator */ 98 - enum PipeWakeFlags { 99 - PIPE_WAKE_CLOSED = 1 << 0, /* emulator closed pipe */ 100 - PIPE_WAKE_READ = 1 << 1, /* pipe can now be read from */ 101 - PIPE_WAKE_WRITE = 1 << 2 /* pipe can now be written to */ 102 - }; 103 - 104 - /* Bit flags for the 'flags' field */ 105 - enum PipeFlagsBits { 106 - BIT_CLOSED_ON_HOST = 0, /* pipe closed by host */ 107 - BIT_WAKE_ON_WRITE = 1, /* want to be woken on writes */ 108 - BIT_WAKE_ON_READ = 2, /* want to be woken on reads */ 109 - }; 110 - 111 - enum PipeRegs { 112 - PIPE_REG_CMD = 0, 113 - 114 - PIPE_REG_SIGNAL_BUFFER_HIGH = 4, 115 - 
PIPE_REG_SIGNAL_BUFFER = 8, 116 - PIPE_REG_SIGNAL_BUFFER_COUNT = 12, 117 - 118 - PIPE_REG_OPEN_BUFFER_HIGH = 20, 119 - PIPE_REG_OPEN_BUFFER = 24, 120 - 121 - PIPE_REG_VERSION = 36, 122 - 123 - PIPE_REG_GET_SIGNALLED = 48, 124 - }; 125 - 126 - enum PipeCmdCode { 127 - PIPE_CMD_OPEN = 1, /* to be used by the pipe device itself */ 128 - PIPE_CMD_CLOSE, 129 - PIPE_CMD_POLL, 130 - PIPE_CMD_WRITE, 131 - PIPE_CMD_WAKE_ON_WRITE, 132 - PIPE_CMD_READ, 133 - PIPE_CMD_WAKE_ON_READ, 134 - 135 - /* 136 - * TODO(zyy): implement a deferred read/write execution to allow 137 - * parallel processing of pipe operations on the host. 138 - */ 139 - PIPE_CMD_WAKE_ON_DONE_IO, 140 - }; 141 - 142 76 enum { 143 77 MAX_BUFFERS_PER_COMMAND = 336, 144 78 MAX_SIGNALLED_PIPES = 64, ··· 81 145 }; 82 146 83 147 struct goldfish_pipe_dev; 84 - struct goldfish_pipe; 85 - struct goldfish_pipe_command; 86 148 87 149 /* A per-pipe command structure, shared with the host */ 88 150 struct goldfish_pipe_command { 89 - s32 cmd; /* PipeCmdCode, guest -> host */ 90 - s32 id; /* pipe id, guest -> host */ 91 - s32 status; /* command execution status, host -> guest */ 151 + s32 cmd; /* PipeCmdCode, guest -> host */ 152 + s32 id; /* pipe id, guest -> host */ 153 + s32 status; /* command execution status, host -> guest */ 92 154 s32 reserved; /* to pad to 64-bit boundary */ 93 155 union { 94 156 /* Parameters for PIPE_CMD_{READ,WRITE} */ ··· 118 184 /* Device-level set of buffers shared with the host */ 119 185 struct goldfish_pipe_dev_buffers { 120 186 struct open_command_param open_command_params; 121 - struct signalled_pipe_buffer signalled_pipe_buffers[ 122 - MAX_SIGNALLED_PIPES]; 187 + struct signalled_pipe_buffer 188 + signalled_pipe_buffers[MAX_SIGNALLED_PIPES]; 123 189 }; 124 190 125 191 /* This data type models a given pipe instance */ 126 192 struct goldfish_pipe { 127 193 /* pipe ID - index into goldfish_pipe_dev::pipes array */ 128 194 u32 id; 195 + 129 196 /* The wake flags pipe is waiting for 130 197 
* Note: not protected with any lock, uses atomic operations 131 198 * and barriers to make it thread-safe. 132 199 */ 133 200 unsigned long flags; 201 + 134 202 /* wake flags host have signalled, 135 203 * - protected by goldfish_pipe_dev::lock 136 204 */ ··· 156 220 157 221 /* A wake queue for sleeping until host signals an event */ 158 222 wait_queue_head_t wake_queue; 223 + 159 224 /* Pointer to the parent goldfish_pipe_dev instance */ 160 225 struct goldfish_pipe_dev *dev; 226 + 227 + /* A buffer of pages, too large to fit into a stack frame */ 228 + struct page *pages[MAX_BUFFERS_PER_COMMAND]; 161 229 }; 162 230 163 231 /* The global driver data. Holds a reference to the i/o page used to ··· 169 229 * waiting to be awoken. 170 230 */ 171 231 struct goldfish_pipe_dev { 232 + /* A magic number to check if this is an instance of this struct */ 233 + void *magic; 234 + 172 235 /* 173 236 * Global device spinlock. Protects the following members: 174 237 * - pipes, pipes_capacity ··· 204 261 /* Head of a doubly linked list of signalled pipes */ 205 262 struct goldfish_pipe *first_signalled_pipe; 206 263 264 + /* ptr to platform device's device struct */ 265 + struct device *pdev_dev; 266 + 207 267 /* Some device-specific data */ 208 268 int irq; 209 269 int version; 210 270 unsigned char __iomem *base; 271 + 272 + /* an irq tasklet to run goldfish_interrupt_task */ 273 + struct tasklet_struct irq_tasklet; 274 + 275 + struct miscdevice miscdev; 211 276 }; 212 277 213 - static struct goldfish_pipe_dev pipe_dev[1] = {}; 214 - 215 - static int goldfish_cmd_locked(struct goldfish_pipe *pipe, enum PipeCmdCode cmd) 278 + static int goldfish_pipe_cmd_locked(struct goldfish_pipe *pipe, 279 + enum PipeCmdCode cmd) 216 280 { 217 281 pipe->command_buffer->cmd = cmd; 218 282 /* failure by default */ ··· 228 278 return pipe->command_buffer->status; 229 279 } 230 280 231 - static int goldfish_cmd(struct goldfish_pipe *pipe, enum PipeCmdCode cmd) 281 + static int 
goldfish_pipe_cmd(struct goldfish_pipe *pipe, enum PipeCmdCode cmd) 232 282 { 233 283 int status; 234 284 235 285 if (mutex_lock_interruptible(&pipe->lock)) 236 286 return PIPE_ERROR_IO; 237 - status = goldfish_cmd_locked(pipe, cmd); 287 + status = goldfish_pipe_cmd_locked(pipe, cmd); 238 288 mutex_unlock(&pipe->lock); 239 289 return status; 240 290 } ··· 257 307 } 258 308 } 259 309 260 - static int pin_user_pages(unsigned long first_page, unsigned long last_page, 261 - unsigned int last_page_size, int is_write, 262 - struct page *pages[MAX_BUFFERS_PER_COMMAND], 263 - unsigned int *iter_last_page_size) 310 + static int pin_user_pages(unsigned long first_page, 311 + unsigned long last_page, 312 + unsigned int last_page_size, 313 + int is_write, 314 + struct page *pages[MAX_BUFFERS_PER_COMMAND], 315 + unsigned int *iter_last_page_size) 264 316 { 265 317 int ret; 266 318 int requested_pages = ((last_page - first_page) >> PAGE_SHIFT) + 1; ··· 274 322 *iter_last_page_size = last_page_size; 275 323 } 276 324 277 - ret = get_user_pages_fast( 278 - first_page, requested_pages, !is_write, pages); 325 + ret = get_user_pages_fast(first_page, requested_pages, !is_write, 326 + pages); 279 327 if (ret <= 0) 280 328 return -EFAULT; 281 329 if (ret < requested_pages) 282 330 *iter_last_page_size = PAGE_SIZE; 283 - return ret; 284 331 332 + return ret; 285 333 } 286 334 287 335 static void release_user_pages(struct page **pages, int pages_count, 288 - int is_write, s32 consumed_size) 336 + int is_write, s32 consumed_size) 289 337 { 290 338 int i; 291 339 ··· 297 345 } 298 346 299 347 /* Populate the call parameters, merging adjacent pages together */ 300 - static void populate_rw_params( 301 - struct page **pages, int pages_count, 302 - unsigned long address, unsigned long address_end, 303 - unsigned long first_page, unsigned long last_page, 304 - unsigned int iter_last_page_size, int is_write, 305 - struct goldfish_pipe_command *command) 348 + static void populate_rw_params(struct 
page **pages, 349 + int pages_count, 350 + unsigned long address, 351 + unsigned long address_end, 352 + unsigned long first_page, 353 + unsigned long last_page, 354 + unsigned int iter_last_page_size, 355 + int is_write, 356 + struct goldfish_pipe_command *command) 306 357 { 307 358 /* 308 359 * Process the first page separately - it's the only page that ··· 337 382 } 338 383 339 384 static int transfer_max_buffers(struct goldfish_pipe *pipe, 340 - unsigned long address, unsigned long address_end, int is_write, 341 - unsigned long last_page, unsigned int last_page_size, 342 - s32 *consumed_size, int *status) 385 + unsigned long address, 386 + unsigned long address_end, 387 + int is_write, 388 + unsigned long last_page, 389 + unsigned int last_page_size, 390 + s32 *consumed_size, 391 + int *status) 343 392 { 344 - static struct page *pages[MAX_BUFFERS_PER_COMMAND]; 345 393 unsigned long first_page = address & PAGE_MASK; 346 394 unsigned int iter_last_page_size; 347 - int pages_count = pin_user_pages(first_page, last_page, 348 - last_page_size, is_write, 349 - pages, &iter_last_page_size); 350 - 351 - if (pages_count < 0) 352 - return pages_count; 395 + int pages_count; 353 396 354 397 /* Serialize access to the pipe command buffers */ 355 398 if (mutex_lock_interruptible(&pipe->lock)) 356 399 return -ERESTARTSYS; 357 400 358 - populate_rw_params(pages, pages_count, address, address_end, 359 - first_page, last_page, iter_last_page_size, is_write, 360 - pipe->command_buffer); 401 + pages_count = pin_user_pages(first_page, last_page, 402 + last_page_size, is_write, 403 + pipe->pages, &iter_last_page_size); 404 + if (pages_count < 0) { 405 + mutex_unlock(&pipe->lock); 406 + return pages_count; 407 + } 408 + 409 + populate_rw_params(pipe->pages, pages_count, address, address_end, 410 + first_page, last_page, iter_last_page_size, is_write, 411 + pipe->command_buffer); 361 412 362 413 /* Transfer the data */ 363 - *status = goldfish_cmd_locked(pipe, 414 + *status = 
goldfish_pipe_cmd_locked(pipe, 364 415 is_write ? PIPE_CMD_WRITE : PIPE_CMD_READ); 365 416 366 417 *consumed_size = pipe->command_buffer->rw_params.consumed_size; 367 418 368 - release_user_pages(pages, pages_count, is_write, *consumed_size); 419 + release_user_pages(pipe->pages, pages_count, is_write, *consumed_size); 369 420 370 421 mutex_unlock(&pipe->lock); 371 - 372 422 return 0; 373 423 } 374 424 375 425 static int wait_for_host_signal(struct goldfish_pipe *pipe, int is_write) 376 426 { 377 - u32 wakeBit = is_write ? BIT_WAKE_ON_WRITE : BIT_WAKE_ON_READ; 427 + u32 wake_bit = is_write ? BIT_WAKE_ON_WRITE : BIT_WAKE_ON_READ; 378 428 379 - set_bit(wakeBit, &pipe->flags); 429 + set_bit(wake_bit, &pipe->flags); 380 430 381 431 /* Tell the emulator we're going to wait for a wake event */ 382 - (void)goldfish_cmd(pipe, 432 + goldfish_pipe_cmd(pipe, 383 433 is_write ? PIPE_CMD_WAKE_ON_WRITE : PIPE_CMD_WAKE_ON_READ); 384 434 385 - while (test_bit(wakeBit, &pipe->flags)) { 386 - if (wait_event_interruptible( 387 - pipe->wake_queue, 388 - !test_bit(wakeBit, &pipe->flags))) 435 + while (test_bit(wake_bit, &pipe->flags)) { 436 + if (wait_event_interruptible(pipe->wake_queue, 437 + !test_bit(wake_bit, &pipe->flags))) 389 438 return -ERESTARTSYS; 390 439 391 440 if (test_bit(BIT_CLOSED_ON_HOST, &pipe->flags)) ··· 400 441 } 401 442 402 443 static ssize_t goldfish_pipe_read_write(struct file *filp, 403 - char __user *buffer, size_t bufflen, int is_write) 444 + char __user *buffer, 445 + size_t bufflen, 446 + int is_write) 404 447 { 405 448 struct goldfish_pipe *pipe = filp->private_data; 406 449 int count = 0, ret = -EINVAL; ··· 417 456 return 0; 418 457 /* Check the buffer range for access */ 419 458 if (unlikely(!access_ok(is_write ? 
VERIFY_WRITE : VERIFY_READ, 420 - buffer, bufflen))) 459 + buffer, bufflen))) 421 460 return -EFAULT; 422 461 423 462 address = (unsigned long)buffer; ··· 430 469 int status; 431 470 432 471 ret = transfer_max_buffers(pipe, address, address_end, is_write, 433 - last_page, last_page_size, &consumed_size, 434 - &status); 472 + last_page, last_page_size, 473 + &consumed_size, &status); 435 474 if (ret < 0) 436 475 break; 437 476 ··· 457 496 * err. 458 497 */ 459 498 if (status != PIPE_ERROR_AGAIN) 460 - pr_info_ratelimited("goldfish_pipe: backend error %d on %s\n", 499 + dev_err_ratelimited(pipe->dev->pdev_dev, 500 + "backend error %d on %s\n", 461 501 status, is_write ? "write" : "read"); 462 502 break; 463 503 } ··· 484 522 } 485 523 486 524 static ssize_t goldfish_pipe_read(struct file *filp, char __user *buffer, 487 - size_t bufflen, loff_t *ppos) 525 + size_t bufflen, loff_t *ppos) 488 526 { 489 527 return goldfish_pipe_read_write(filp, buffer, bufflen, 490 - /* is_write */ 0); 528 + /* is_write */ 0); 491 529 } 492 530 493 531 static ssize_t goldfish_pipe_write(struct file *filp, 494 - const char __user *buffer, size_t bufflen, 495 - loff_t *ppos) 532 + const char __user *buffer, size_t bufflen, 533 + loff_t *ppos) 496 534 { 497 - return goldfish_pipe_read_write(filp, 498 - /* cast away the const */(char __user *)buffer, bufflen, 499 - /* is_write */ 1); 535 + /* cast away the const */ 536 + char __user *no_const_buffer = (char __user *)buffer; 537 + 538 + return goldfish_pipe_read_write(filp, no_const_buffer, bufflen, 539 + /* is_write */ 1); 500 540 } 501 541 502 542 static __poll_t goldfish_pipe_poll(struct file *filp, poll_table *wait) ··· 509 545 510 546 poll_wait(filp, &pipe->wake_queue, wait); 511 547 512 - status = goldfish_cmd(pipe, PIPE_CMD_POLL); 548 + status = goldfish_pipe_cmd(pipe, PIPE_CMD_POLL); 513 549 if (status < 0) 514 550 return -ERESTARTSYS; 515 551 ··· 526 562 } 527 563 528 564 static void signalled_pipes_add_locked(struct 
goldfish_pipe_dev *dev, 529 - u32 id, u32 flags) 565 + u32 id, u32 flags) 530 566 { 531 567 struct goldfish_pipe *pipe; 532 568 ··· 538 574 return; 539 575 pipe->signalled_flags |= flags; 540 576 541 - if (pipe->prev_signalled || pipe->next_signalled 542 - || dev->first_signalled_pipe == pipe) 577 + if (pipe->prev_signalled || pipe->next_signalled || 578 + dev->first_signalled_pipe == pipe) 543 579 return; /* already in the list */ 544 580 pipe->next_signalled = dev->first_signalled_pipe; 545 581 if (dev->first_signalled_pipe) ··· 548 584 } 549 585 550 586 static void signalled_pipes_remove_locked(struct goldfish_pipe_dev *dev, 551 - struct goldfish_pipe *pipe) { 587 + struct goldfish_pipe *pipe) 588 + { 552 589 if (pipe->prev_signalled) 553 590 pipe->prev_signalled->next_signalled = pipe->next_signalled; 554 591 if (pipe->next_signalled) ··· 588 623 return pipe; 589 624 } 590 625 591 - static void goldfish_interrupt_task(unsigned long unused) 626 + static void goldfish_interrupt_task(unsigned long dev_addr) 592 627 { 593 - struct goldfish_pipe_dev *dev = pipe_dev; 594 628 /* Iterate over the signalled pipes and wake them one by one */ 629 + struct goldfish_pipe_dev *dev = (struct goldfish_pipe_dev *)dev_addr; 595 630 struct goldfish_pipe *pipe; 596 631 int wakes; 597 632 ··· 611 646 wake_up_interruptible(&pipe->wake_queue); 612 647 } 613 648 } 614 - static DECLARE_TASKLET(goldfish_interrupt_tasklet, goldfish_interrupt_task, 0); 649 + 650 + static void goldfish_pipe_device_deinit(struct platform_device *pdev, 651 + struct goldfish_pipe_dev *dev); 615 652 616 653 /* 617 654 * The general idea of the interrupt handling: ··· 635 668 unsigned long flags; 636 669 struct goldfish_pipe_dev *dev = dev_id; 637 670 638 - if (dev != pipe_dev) 671 + if (dev->magic != &goldfish_pipe_device_deinit) 639 672 return IRQ_NONE; 640 673 641 674 /* Request the signalled pipes from the device */ ··· 656 689 657 690 spin_unlock_irqrestore(&dev->lock, flags); 658 691 659 - 
tasklet_schedule(&goldfish_interrupt_tasklet); 692 + tasklet_schedule(&dev->irq_tasklet); 660 693 return IRQ_HANDLED; 661 694 } 662 695 ··· 669 702 return id; 670 703 671 704 { 672 - /* Reallocate the array */ 705 + /* Reallocate the array. 706 + * Since get_free_pipe_id_locked runs with interrupts disabled, 707 + * we don't want to make calls that could lead to sleep. 708 + */ 673 709 u32 new_capacity = 2 * dev->pipes_capacity; 674 710 struct goldfish_pipe **pipes = 675 711 kcalloc(new_capacity, sizeof(*pipes), GFP_ATOMIC); ··· 685 715 dev->pipes_capacity = new_capacity; 686 716 } 687 717 return id; 718 + } 719 + 720 + /* A helper function to get the instance of goldfish_pipe_dev from file */ 721 + static struct goldfish_pipe_dev *to_goldfish_pipe_dev(struct file *file) 722 + { 723 + struct miscdevice *miscdev = file->private_data; 724 + 725 + return container_of(miscdev, struct goldfish_pipe_dev, miscdev); 688 726 } 689 727 690 728 /** ··· 708 730 */ 709 731 static int goldfish_pipe_open(struct inode *inode, struct file *file) 710 732 { 711 - struct goldfish_pipe_dev *dev = pipe_dev; 733 + struct goldfish_pipe_dev *dev = to_goldfish_pipe_dev(file); 712 734 unsigned long flags; 713 735 int id; 714 736 int status; 715 737 716 738 /* Allocate new pipe kernel object */ 717 739 struct goldfish_pipe *pipe = kzalloc(sizeof(*pipe), GFP_KERNEL); 718 - if (pipe == NULL) 740 + 741 + if (!pipe) 719 742 return -ENOMEM; 720 743 721 744 pipe->dev = dev; ··· 727 748 * Command buffer needs to be allocated on its own page to make sure 728 749 * it is physically contiguous in host's address space. 
729 750 */ 751 + BUILD_BUG_ON(sizeof(struct goldfish_pipe_command) > PAGE_SIZE); 730 752 pipe->command_buffer = 731 753 (struct goldfish_pipe_command *)__get_free_page(GFP_KERNEL); 732 754 if (!pipe->command_buffer) { ··· 752 772 MAX_BUFFERS_PER_COMMAND; 753 773 dev->buffers->open_command_params.command_buffer_ptr = 754 774 (u64)(unsigned long)__pa(pipe->command_buffer); 755 - status = goldfish_cmd_locked(pipe, PIPE_CMD_OPEN); 775 + status = goldfish_pipe_cmd_locked(pipe, PIPE_CMD_OPEN); 756 776 spin_unlock_irqrestore(&dev->lock, flags); 757 777 if (status < 0) 758 778 goto err_cmd; ··· 778 798 struct goldfish_pipe_dev *dev = pipe->dev; 779 799 780 800 /* The guest is closing the channel, so tell the emulator right now */ 781 - (void)goldfish_cmd(pipe, PIPE_CMD_CLOSE); 801 + goldfish_pipe_cmd(pipe, PIPE_CMD_CLOSE); 782 802 783 803 spin_lock_irqsave(&dev->lock, flags); 784 804 dev->pipes[pipe->id] = NULL; ··· 800 820 .release = goldfish_pipe_release, 801 821 }; 802 822 803 - static struct miscdevice goldfish_pipe_dev = { 804 - .minor = MISC_DYNAMIC_MINOR, 805 - .name = "goldfish_pipe", 806 - .fops = &goldfish_pipe_fops, 807 - }; 808 - 809 - static int goldfish_pipe_device_init(struct platform_device *pdev) 823 + static void init_miscdevice(struct miscdevice *miscdev) 810 824 { 811 - char *page; 812 - struct goldfish_pipe_dev *dev = pipe_dev; 813 - int err = devm_request_irq(&pdev->dev, dev->irq, 814 - goldfish_pipe_interrupt, 815 - IRQF_SHARED, "goldfish_pipe", dev); 825 + memset(miscdev, 0, sizeof(*miscdev)); 826 + 827 + miscdev->minor = MISC_DYNAMIC_MINOR; 828 + miscdev->name = "goldfish_pipe"; 829 + miscdev->fops = &goldfish_pipe_fops; 830 + } 831 + 832 + static void write_pa_addr(void *addr, void __iomem *portl, void __iomem *porth) 833 + { 834 + const unsigned long paddr = __pa(addr); 835 + 836 + writel(upper_32_bits(paddr), porth); 837 + writel(lower_32_bits(paddr), portl); 838 + } 839 + 840 + static int goldfish_pipe_device_init(struct platform_device *pdev, 
841 + struct goldfish_pipe_dev *dev) 842 + { 843 + int err; 844 + 845 + tasklet_init(&dev->irq_tasklet, &goldfish_interrupt_task, 846 + (unsigned long)dev); 847 + 848 + err = devm_request_irq(&pdev->dev, dev->irq, 849 + goldfish_pipe_interrupt, 850 + IRQF_SHARED, "goldfish_pipe", dev); 816 851 if (err) { 817 852 dev_err(&pdev->dev, "unable to allocate IRQ for v2\n"); 818 853 return err; 819 854 } 820 855 821 - err = misc_register(&goldfish_pipe_dev); 856 + init_miscdevice(&dev->miscdev); 857 + err = misc_register(&dev->miscdev); 822 858 if (err) { 823 859 dev_err(&pdev->dev, "unable to register v2 device\n"); 824 860 return err; 825 861 } 826 862 863 + dev->pdev_dev = &pdev->dev; 827 864 dev->first_signalled_pipe = NULL; 828 865 dev->pipes_capacity = INITIAL_PIPES_CAPACITY; 829 866 dev->pipes = kcalloc(dev->pipes_capacity, sizeof(*dev->pipes), 830 - GFP_KERNEL); 831 - if (!dev->pipes) 867 + GFP_KERNEL); 868 + if (!dev->pipes) { 869 + misc_deregister(&dev->miscdev); 832 870 return -ENOMEM; 871 + } 833 872 834 873 /* 835 874 * We're going to pass two buffers, open_command_params and ··· 856 857 * needs to be contained in a single physical page. The easiest choice 857 858 * is to just allocate a page and place the buffers in it. 
858 859 */ 859 - if (WARN_ON(sizeof(*dev->buffers) > PAGE_SIZE)) 860 - return -ENOMEM; 861 - 862 - page = (char *)__get_free_page(GFP_KERNEL); 863 - if (!page) { 860 + BUILD_BUG_ON(sizeof(struct goldfish_pipe_dev_buffers) > PAGE_SIZE); 861 + dev->buffers = (struct goldfish_pipe_dev_buffers *) 862 + __get_free_page(GFP_KERNEL); 863 + if (!dev->buffers) { 864 864 kfree(dev->pipes); 865 + misc_deregister(&dev->miscdev); 865 866 return -ENOMEM; 866 867 } 867 - dev->buffers = (struct goldfish_pipe_dev_buffers *)page; 868 868 869 869 /* Send the buffer addresses to the host */ 870 - { 871 - u64 paddr = __pa(&dev->buffers->signalled_pipe_buffers); 870 + write_pa_addr(&dev->buffers->signalled_pipe_buffers, 871 + dev->base + PIPE_REG_SIGNAL_BUFFER, 872 + dev->base + PIPE_REG_SIGNAL_BUFFER_HIGH); 872 873 873 - writel((u32)(unsigned long)(paddr >> 32), 874 - dev->base + PIPE_REG_SIGNAL_BUFFER_HIGH); 875 - writel((u32)(unsigned long)paddr, 876 - dev->base + PIPE_REG_SIGNAL_BUFFER); 877 - writel((u32)MAX_SIGNALLED_PIPES, 878 - dev->base + PIPE_REG_SIGNAL_BUFFER_COUNT); 874 + writel(MAX_SIGNALLED_PIPES, 875 + dev->base + PIPE_REG_SIGNAL_BUFFER_COUNT); 879 876 880 - paddr = __pa(&dev->buffers->open_command_params); 881 - writel((u32)(unsigned long)(paddr >> 32), 882 - dev->base + PIPE_REG_OPEN_BUFFER_HIGH); 883 - writel((u32)(unsigned long)paddr, 884 - dev->base + PIPE_REG_OPEN_BUFFER); 885 - } 877 + write_pa_addr(&dev->buffers->open_command_params, 878 + dev->base + PIPE_REG_OPEN_BUFFER, 879 + dev->base + PIPE_REG_OPEN_BUFFER_HIGH); 880 + 881 + platform_set_drvdata(pdev, dev); 886 882 return 0; 887 883 } 888 884 889 - static void goldfish_pipe_device_deinit(struct platform_device *pdev) 885 + static void goldfish_pipe_device_deinit(struct platform_device *pdev, 886 + struct goldfish_pipe_dev *dev) 890 887 { 891 - struct goldfish_pipe_dev *dev = pipe_dev; 892 - 893 - misc_deregister(&goldfish_pipe_dev); 888 + misc_deregister(&dev->miscdev); 889 + tasklet_kill(&dev->irq_tasklet); 
894 890 kfree(dev->pipes); 895 891 free_page((unsigned long)dev->buffers); 896 892 } 897 893 898 894 static int goldfish_pipe_probe(struct platform_device *pdev) 899 895 { 900 - int err; 901 896 struct resource *r; 902 - struct goldfish_pipe_dev *dev = pipe_dev; 897 + struct goldfish_pipe_dev *dev; 903 898 904 - if (WARN_ON(sizeof(struct goldfish_pipe_command) > PAGE_SIZE)) 899 + dev = devm_kzalloc(&pdev->dev, sizeof(*dev), GFP_KERNEL); 900 + if (!dev) 905 901 return -ENOMEM; 906 902 907 - /* not thread safe, but this should not happen */ 908 - WARN_ON(dev->base != NULL); 909 - 903 + dev->magic = &goldfish_pipe_device_deinit; 910 904 spin_lock_init(&dev->lock); 911 905 912 906 r = platform_get_resource(pdev, IORESOURCE_MEM, 0); 913 - if (r == NULL || resource_size(r) < PAGE_SIZE) { 907 + if (!r || resource_size(r) < PAGE_SIZE) { 914 908 dev_err(&pdev->dev, "can't allocate i/o page\n"); 915 909 return -EINVAL; 916 910 } 917 911 dev->base = devm_ioremap(&pdev->dev, r->start, PAGE_SIZE); 918 - if (dev->base == NULL) { 912 + if (!dev->base) { 919 913 dev_err(&pdev->dev, "ioremap failed\n"); 920 914 return -EINVAL; 921 915 } 922 916 923 917 r = platform_get_resource(pdev, IORESOURCE_IRQ, 0); 924 - if (r == NULL) { 925 - err = -EINVAL; 926 - goto error; 927 - } 918 + if (!r) 919 + return -EINVAL; 920 + 928 921 dev->irq = r->start; 929 922 930 923 /* ··· 926 935 * reading device version back: this allows the host implementation to 927 936 * detect the old driver (if there was no version write before read). 
928 937 */ 929 - writel((u32)PIPE_DRIVER_VERSION, dev->base + PIPE_REG_VERSION); 938 + writel(PIPE_DRIVER_VERSION, dev->base + PIPE_REG_VERSION); 930 939 dev->version = readl(dev->base + PIPE_REG_VERSION); 931 940 if (WARN_ON(dev->version < PIPE_CURRENT_DEVICE_VERSION)) 932 941 return -EINVAL; 933 942 934 - err = goldfish_pipe_device_init(pdev); 935 - if (!err) 936 - return 0; 937 - 938 - error: 939 - dev->base = NULL; 940 - return err; 943 + return goldfish_pipe_device_init(pdev, dev); 941 944 } 942 945 943 946 static int goldfish_pipe_remove(struct platform_device *pdev) 944 947 { 945 - struct goldfish_pipe_dev *dev = pipe_dev; 946 - goldfish_pipe_device_deinit(pdev); 947 - dev->base = NULL; 948 + struct goldfish_pipe_dev *dev = platform_get_drvdata(pdev); 949 + 950 + goldfish_pipe_device_deinit(pdev, dev); 948 951 return 0; 949 952 } 950 953 ··· 966 981 967 982 module_platform_driver(goldfish_pipe_driver); 968 983 MODULE_AUTHOR("David Turner <digit@google.com>"); 969 - MODULE_LICENSE("GPL"); 984 + MODULE_LICENSE("GPL v2");
+98
drivers/platform/goldfish/goldfish_pipe_qemu.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * IMPORTANT: The following constants must match the ones used and defined in 4 + * external/qemu/include/hw/misc/goldfish_pipe.h 5 + */ 6 + 7 + #ifndef GOLDFISH_PIPE_QEMU_H 8 + #define GOLDFISH_PIPE_QEMU_H 9 + 10 + /* List of bitflags returned in status of CMD_POLL command */ 11 + enum PipePollFlags { 12 + PIPE_POLL_IN = 1 << 0, 13 + PIPE_POLL_OUT = 1 << 1, 14 + PIPE_POLL_HUP = 1 << 2 15 + }; 16 + 17 + /* Possible status values used to signal errors */ 18 + enum PipeErrors { 19 + PIPE_ERROR_INVAL = -1, 20 + PIPE_ERROR_AGAIN = -2, 21 + PIPE_ERROR_NOMEM = -3, 22 + PIPE_ERROR_IO = -4 23 + }; 24 + 25 + /* Bit-flags used to signal events from the emulator */ 26 + enum PipeWakeFlags { 27 + /* emulator closed pipe */ 28 + PIPE_WAKE_CLOSED = 1 << 0, 29 + 30 + /* pipe can now be read from */ 31 + PIPE_WAKE_READ = 1 << 1, 32 + 33 + /* pipe can now be written to */ 34 + PIPE_WAKE_WRITE = 1 << 2, 35 + 36 + /* unlock this pipe's DMA buffer */ 37 + PIPE_WAKE_UNLOCK_DMA = 1 << 3, 38 + 39 + /* unlock DMA buffer of the pipe shared to this pipe */ 40 + PIPE_WAKE_UNLOCK_DMA_SHARED = 1 << 4, 41 + }; 42 + 43 + /* Possible pipe closing reasons */ 44 + enum PipeCloseReason { 45 + /* guest sent a close command */ 46 + PIPE_CLOSE_GRACEFUL = 0, 47 + 48 + /* guest rebooted, we're closing the pipes */ 49 + PIPE_CLOSE_REBOOT = 1, 50 + 51 + /* close old pipes on snapshot load */ 52 + PIPE_CLOSE_LOAD_SNAPSHOT = 2, 53 + 54 + /* some unrecoverable error on the pipe */ 55 + PIPE_CLOSE_ERROR = 3, 56 + }; 57 + 58 + /* Bit flags for the 'flags' field */ 59 + enum PipeFlagsBits { 60 + BIT_CLOSED_ON_HOST = 0, /* pipe closed by host */ 61 + BIT_WAKE_ON_WRITE = 1, /* want to be woken on writes */ 62 + BIT_WAKE_ON_READ = 2, /* want to be woken on reads */ 63 + }; 64 + 65 + enum PipeRegs { 66 + PIPE_REG_CMD = 0, 67 + 68 + PIPE_REG_SIGNAL_BUFFER_HIGH = 4, 69 + PIPE_REG_SIGNAL_BUFFER = 8, 70 + PIPE_REG_SIGNAL_BUFFER_COUNT = 12, 71 + 72 + 
PIPE_REG_OPEN_BUFFER_HIGH = 20, 73 + PIPE_REG_OPEN_BUFFER = 24, 74 + 75 + PIPE_REG_VERSION = 36, 76 + 77 + PIPE_REG_GET_SIGNALLED = 48, 78 + }; 79 + 80 + enum PipeCmdCode { 81 + /* to be used by the pipe device itself */ 82 + PIPE_CMD_OPEN = 1, 83 + 84 + PIPE_CMD_CLOSE, 85 + PIPE_CMD_POLL, 86 + PIPE_CMD_WRITE, 87 + PIPE_CMD_WAKE_ON_WRITE, 88 + PIPE_CMD_READ, 89 + PIPE_CMD_WAKE_ON_READ, 90 + 91 + /* 92 + * TODO(zyy): implement a deferred read/write execution to allow 93 + * parallel processing of pipe operations on the host. 94 + */ 95 + PIPE_CMD_WAKE_ON_DONE_IO, 96 + }; 97 + 98 + #endif /* GOLDFISH_PIPE_QEMU_H */
+35 -2
drivers/slimbus/core.c
··· 9 9 #include <linux/init.h> 10 10 #include <linux/idr.h> 11 11 #include <linux/of.h> 12 + #include <linux/of_device.h> 12 13 #include <linux/pm_runtime.h> 13 14 #include <linux/slimbus.h> 14 15 #include "slimbus.h" ··· 33 32 struct slim_device *sbdev = to_slim_device(dev); 34 33 struct slim_driver *sbdrv = to_slim_driver(drv); 35 34 35 + /* Attempt an OF style match first */ 36 + if (of_driver_match_device(dev, drv)) 37 + return 1; 38 + 36 39 return !!slim_match(sbdrv->id_table, sbdev); 37 40 } 38 41 ··· 44 39 { 45 40 struct slim_device *sbdev = to_slim_device(dev); 46 41 struct slim_driver *sbdrv = to_slim_driver(dev->driver); 42 + int ret; 47 43 48 - return sbdrv->probe(sbdev); 44 + ret = sbdrv->probe(sbdev); 45 + if (ret) 46 + return ret; 47 + 48 + /* try getting the logical address after probe */ 49 + ret = slim_get_logical_addr(sbdev); 50 + if (!ret) { 51 + if (sbdrv->device_status) 52 + sbdrv->device_status(sbdev, sbdev->status); 53 + } else { 54 + dev_err(&sbdev->dev, "Failed to get logical address\n"); 55 + ret = -EPROBE_DEFER; 56 + } 57 + 58 + return ret; 49 59 } 50 60 51 61 static int slim_device_remove(struct device *dev) ··· 77 57 return 0; 78 58 } 79 59 60 + static int slim_device_uevent(struct device *dev, struct kobj_uevent_env *env) 61 + { 62 + struct slim_device *sbdev = to_slim_device(dev); 63 + int ret; 64 + 65 + ret = of_device_uevent_modalias(dev, env); 66 + if (ret != -ENODEV) 67 + return ret; 68 + 69 + return add_uevent_var(env, "MODALIAS=slim:%s", dev_name(&sbdev->dev)); 70 + } 71 + 80 72 struct bus_type slimbus_bus = { 81 73 .name = "slimbus", 82 74 .match = slim_device_match, 83 75 .probe = slim_device_probe, 84 76 .remove = slim_device_remove, 77 + .uevent = slim_device_uevent, 85 78 }; 86 79 EXPORT_SYMBOL_GPL(slimbus_bus); 87 80 ··· 110 77 int __slim_driver_register(struct slim_driver *drv, struct module *owner) 111 78 { 112 79 /* ID table and probe are mandatory */ 113 - if (!drv->id_table || !drv->probe) 80 + if 
(!(drv->driver.of_match_table || drv->id_table) || !drv->probe) 114 81 return -EINVAL; 115 82 116 83 drv->driver.bus = &slimbus_bus;
+19 -15
drivers/slimbus/qcom-ngd-ctrl.c
··· 1004 1004 struct slim_eaddr *ea, u8 *laddr) 1005 1005 { 1006 1006 struct slim_val_inf msg = {0}; 1007 + u8 failed_ea[6] = {0, 0, 0, 0, 0, 0}; 1007 1008 struct slim_msg_txn txn; 1008 1009 u8 wbuf[10] = {0}; 1009 1010 u8 rbuf[10] = {0}; ··· 1034 1033 slim_free_txn_tid(ctrl, &txn); 1035 1034 return ret; 1036 1035 } 1036 + 1037 + if (!memcmp(rbuf, failed_ea, 6)) 1038 + return -ENXIO; 1037 1039 1038 1040 *laddr = rbuf[6]; 1039 1041 ··· 1238 1234 pm_runtime_resume(ctrl->dev); 1239 1235 pm_runtime_mark_last_busy(ctrl->dev); 1240 1236 pm_runtime_put(ctrl->dev); 1237 + 1238 + ret = slim_register_controller(&ctrl->ctrl); 1239 + if (ret) { 1240 + dev_err(ctrl->dev, "error adding slim controller\n"); 1241 + return ret; 1242 + } 1243 + 1244 + dev_info(ctrl->dev, "SLIM controller Registered\n"); 1241 1245 } else { 1242 1246 qcom_slim_qmi_exit(ctrl); 1247 + slim_unregister_controller(&ctrl->ctrl); 1243 1248 } 1244 1249 1245 1250 return 0; ··· 1355 1342 ngd->base = ctrl->base + ngd->id * data->offset + 1356 1343 (ngd->id - 1) * data->size; 1357 1344 ctrl->ngd = ngd; 1358 - platform_driver_register(&qcom_slim_ngd_driver); 1359 1345 1360 1346 return 0; 1361 1347 } ··· 1369 1357 int ret; 1370 1358 1371 1359 ctrl->ctrl.dev = dev; 1372 - ret = slim_register_controller(&ctrl->ctrl); 1373 - if (ret) { 1374 - dev_err(dev, "error adding slim controller\n"); 1375 - return ret; 1376 - } 1377 1360 1378 1361 pm_runtime_use_autosuspend(dev); 1379 1362 pm_runtime_set_autosuspend_delay(dev, QCOM_SLIM_NGD_AUTOSUSPEND); ··· 1378 1371 ret = qcom_slim_ngd_qmi_svc_event_init(ctrl); 1379 1372 if (ret) { 1380 1373 dev_err(&pdev->dev, "QMI service registration failed:%d", ret); 1381 - goto err; 1374 + return ret; 1382 1375 } 1383 1376 1384 1377 INIT_WORK(&ctrl->m_work, qcom_slim_ngd_master_worker); ··· 1390 1383 } 1391 1384 1392 1385 return 0; 1393 - err: 1394 - slim_unregister_controller(&ctrl->ctrl); 1395 1386 wq_err: 1396 1387 qcom_slim_ngd_qmi_svc_event_deinit(&ctrl->qmi); 1397 1388 if 
(ctrl->mwq) 1398 1389 destroy_workqueue(ctrl->mwq); 1399 1390 1400 - return 0; 1391 + return ret; 1401 1392 } 1402 1393 1403 1394 static int qcom_slim_ngd_ctrl_probe(struct platform_device *pdev) ··· 1446 1441 init_completion(&ctrl->reconf); 1447 1442 init_completion(&ctrl->qmi.qmi_comp); 1448 1443 1444 + platform_driver_register(&qcom_slim_ngd_driver); 1449 1445 return of_qcom_slim_ngd_register(dev, ctrl); 1450 1446 } 1451 1447 ··· 1462 1456 struct qcom_slim_ngd_ctrl *ctrl = platform_get_drvdata(pdev); 1463 1457 1464 1458 pm_runtime_disable(&pdev->dev); 1465 - slim_unregister_controller(&ctrl->ctrl); 1459 + qcom_slim_ngd_enable(ctrl, false); 1466 1460 qcom_slim_ngd_exit_dma(ctrl); 1467 1461 qcom_slim_ngd_qmi_svc_event_deinit(&ctrl->qmi); 1468 1462 if (ctrl->mwq) ··· 1473 1467 return 0; 1474 1468 } 1475 1469 1476 - static int qcom_slim_ngd_runtime_idle(struct device *dev) 1470 + static int __maybe_unused qcom_slim_ngd_runtime_idle(struct device *dev) 1477 1471 { 1478 1472 struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev); 1479 1473 ··· 1483 1477 return -EAGAIN; 1484 1478 } 1485 1479 1486 - #ifdef CONFIG_PM 1487 - static int qcom_slim_ngd_runtime_suspend(struct device *dev) 1480 + static int __maybe_unused qcom_slim_ngd_runtime_suspend(struct device *dev) 1488 1481 { 1489 1482 struct qcom_slim_ngd_ctrl *ctrl = dev_get_drvdata(dev); 1490 1483 int ret = 0; ··· 1496 1491 1497 1492 return ret; 1498 1493 } 1499 - #endif 1500 1494 1501 1495 static const struct dev_pm_ops qcom_slim_ngd_dev_pm_ops = { 1502 1496 SET_SYSTEM_SLEEP_PM_OPS(pm_runtime_force_suspend,
+6
drivers/soundwire/bus.c
··· 35 35 INIT_LIST_HEAD(&bus->slaves); 36 36 INIT_LIST_HEAD(&bus->m_rt_list); 37 37 38 + /* 39 + * Initialize multi_link flag 40 + * TODO: populate this flag by reading property from FW node 41 + */ 42 + bus->multi_link = false; 38 43 if (bus->ops->read_prop) { 39 44 ret = bus->ops->read_prop(bus); 40 45 if (ret < 0) { ··· 180 175 181 176 defer->msg = msg; 182 177 defer->length = msg->len; 178 + init_completion(&defer->complete); 183 179 184 180 for (i = 0; i <= retry; i++) { 185 181 resp = bus->ops->xfer_msg_defer(bus, msg, defer);
+4
drivers/soundwire/bus.h
··· 4 4 #ifndef __SDW_BUS_H 5 5 #define __SDW_BUS_H 6 6 7 + #define DEFAULT_BANK_SWITCH_TIMEOUT 3000 8 + 7 9 #if IS_ENABLED(CONFIG_ACPI) 8 10 int sdw_acpi_find_slaves(struct sdw_bus *bus); 9 11 #else ··· 101 99 * this stream, can be zero. 102 100 * @slave_rt_list: Slave runtime list 103 101 * @port_list: List of Master Ports configured for this stream, can be zero. 102 + * @stream_node: sdw_stream_runtime master_list node 104 103 * @bus_node: sdw_bus m_rt_list node 105 104 */ 106 105 struct sdw_master_runtime { ··· 111 108 unsigned int ch_count; 112 109 struct list_head slave_rt_list; 113 110 struct list_head port_list; 111 + struct list_head stream_node; 114 112 struct list_head bus_node; 115 113 }; 116 114
+65 -3
drivers/soundwire/intel.c
··· 398 398 } 399 399 400 400 /* 401 + * bank switch routines 402 + */ 403 + 404 + static int intel_pre_bank_switch(struct sdw_bus *bus) 405 + { 406 + struct sdw_cdns *cdns = bus_to_cdns(bus); 407 + struct sdw_intel *sdw = cdns_to_intel(cdns); 408 + void __iomem *shim = sdw->res->shim; 409 + int sync_reg; 410 + 411 + /* Write to register only for multi-link */ 412 + if (!bus->multi_link) 413 + return 0; 414 + 415 + /* Read SYNC register */ 416 + sync_reg = intel_readl(shim, SDW_SHIM_SYNC); 417 + sync_reg |= SDW_SHIM_SYNC_CMDSYNC << sdw->instance; 418 + intel_writel(shim, SDW_SHIM_SYNC, sync_reg); 419 + 420 + return 0; 421 + } 422 + 423 + static int intel_post_bank_switch(struct sdw_bus *bus) 424 + { 425 + struct sdw_cdns *cdns = bus_to_cdns(bus); 426 + struct sdw_intel *sdw = cdns_to_intel(cdns); 427 + void __iomem *shim = sdw->res->shim; 428 + int sync_reg, ret; 429 + 430 + /* Write to register only for multi-link */ 431 + if (!bus->multi_link) 432 + return 0; 433 + 434 + /* Read SYNC register */ 435 + sync_reg = intel_readl(shim, SDW_SHIM_SYNC); 436 + 437 + /* 438 + * post_bank_switch() ops is called from the bus in loop for 439 + * all the Masters in the steam with the expectation that 440 + * we trigger the bankswitch for the only first Master in the list 441 + * and do nothing for the other Masters 442 + * 443 + * So, set the SYNCGO bit only if CMDSYNC bit is set for any Master. 444 + */ 445 + if (!(sync_reg & SDW_SHIM_SYNC_CMDSYNC_MASK)) 446 + return 0; 447 + 448 + /* 449 + * Set SyncGO bit to synchronously trigger a bank switch for 450 + * all the masters. A write to SYNCGO bit clears CMDSYNC bit for all 451 + * the Masters. 
452 + */ 453 + sync_reg |= SDW_SHIM_SYNC_SYNCGO; 454 + 455 + ret = intel_clear_bit(shim, SDW_SHIM_SYNC, sync_reg, 456 + SDW_SHIM_SYNC_SYNCGO); 457 + if (ret < 0) 458 + dev_err(sdw->cdns.dev, "Post bank switch failed: %d", ret); 459 + 460 + return ret; 461 + } 462 + 463 + /* 401 464 * DAI routines 402 465 */ 403 466 ··· 813 750 .xfer_msg_defer = cdns_xfer_msg_defer, 814 751 .reset_page_addr = cdns_reset_page_addr, 815 752 .set_bus_conf = cdns_bus_conf, 753 + .pre_bank_switch = intel_pre_bank_switch, 754 + .post_bank_switch = intel_post_bank_switch, 816 755 }; 817 756 818 757 /* ··· 842 777 sdw_cdns_probe(&sdw->cdns); 843 778 844 779 /* Set property read ops */ 845 - sdw_intel_ops.read_prop = intel_prop_read; 846 - sdw->cdns.bus.ops = &sdw_intel_ops; 847 - 848 780 sdw_intel_ops.read_prop = intel_prop_read; 849 781 sdw->cdns.bus.ops = &sdw_intel_ops; 850 782
+1 -1
drivers/soundwire/intel_init.c
··· 151 151 struct acpi_device *adev; 152 152 153 153 if (acpi_bus_get_device(handle, &adev)) { 154 - dev_err(&adev->dev, "Couldn't find ACPI handle\n"); 154 + pr_err("%s: Couldn't find ACPI handle\n", __func__); 155 155 return AE_NOT_FOUND; 156 156 } 157 157
+361 -131
drivers/soundwire/stream.c
··· 626 626 return ret; 627 627 } 628 628 629 - static int sdw_bank_switch(struct sdw_bus *bus) 629 + static int sdw_bank_switch(struct sdw_bus *bus, int m_rt_count) 630 630 { 631 631 int col_index, row_index; 632 + bool multi_link; 632 633 struct sdw_msg *wr_msg; 633 634 u8 *wbuf = NULL; 634 635 int ret = 0; ··· 638 637 wr_msg = kzalloc(sizeof(*wr_msg), GFP_KERNEL); 639 638 if (!wr_msg) 640 639 return -ENOMEM; 640 + 641 + bus->defer_msg.msg = wr_msg; 641 642 642 643 wbuf = kzalloc(sizeof(*wbuf), GFP_KERNEL); 643 644 if (!wbuf) { ··· 661 658 SDW_MSG_FLAG_WRITE, wbuf); 662 659 wr_msg->ssp_sync = true; 663 660 664 - ret = sdw_transfer(bus, wr_msg); 661 + /* 662 + * Set the multi_link flag only when both the hardware supports 663 + * and there is a stream handled by multiple masters 664 + */ 665 + multi_link = bus->multi_link && (m_rt_count > 1); 666 + 667 + if (multi_link) 668 + ret = sdw_transfer_defer(bus, wr_msg, &bus->defer_msg); 669 + else 670 + ret = sdw_transfer(bus, wr_msg); 671 + 665 672 if (ret < 0) { 666 673 dev_err(bus->dev, "Slave frame_ctrl reg write failed"); 667 674 goto error; 668 675 } 669 676 670 - kfree(wr_msg); 671 - kfree(wbuf); 672 - bus->defer_msg.msg = NULL; 673 - bus->params.curr_bank = !bus->params.curr_bank; 674 - bus->params.next_bank = !bus->params.next_bank; 677 + if (!multi_link) { 678 + kfree(wr_msg); 679 + kfree(wbuf); 680 + bus->defer_msg.msg = NULL; 681 + bus->params.curr_bank = !bus->params.curr_bank; 682 + bus->params.next_bank = !bus->params.next_bank; 683 + } 675 684 676 685 return 0; 677 686 ··· 694 679 return ret; 695 680 } 696 681 682 + /** 683 + * sdw_ml_sync_bank_switch: Multilink register bank switch 684 + * 685 + * @bus: SDW bus instance 686 + * 687 + * Caller function should free the buffers on error 688 + */ 689 + static int sdw_ml_sync_bank_switch(struct sdw_bus *bus) 690 + { 691 + unsigned long time_left; 692 + 693 + if (!bus->multi_link) 694 + return 0; 695 + 696 + /* Wait for completion of transfer */ 697 + 
time_left = wait_for_completion_timeout(&bus->defer_msg.complete, 698 + bus->bank_switch_timeout); 699 + 700 + if (!time_left) { 701 + dev_err(bus->dev, "Controller Timed out on bank switch"); 702 + return -ETIMEDOUT; 703 + } 704 + 705 + bus->params.curr_bank = !bus->params.curr_bank; 706 + bus->params.next_bank = !bus->params.next_bank; 707 + 708 + if (bus->defer_msg.msg) { 709 + kfree(bus->defer_msg.msg->buf); 710 + kfree(bus->defer_msg.msg); 711 + } 712 + 713 + return 0; 714 + } 715 + 697 716 static int do_bank_switch(struct sdw_stream_runtime *stream) 698 717 { 699 - struct sdw_master_runtime *m_rt = stream->m_rt; 718 + struct sdw_master_runtime *m_rt = NULL; 700 719 const struct sdw_master_ops *ops; 701 - struct sdw_bus *bus = m_rt->bus; 720 + struct sdw_bus *bus = NULL; 721 + bool multi_link = false; 702 722 int ret = 0; 703 723 704 - ops = bus->ops; 724 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 725 + bus = m_rt->bus; 726 + ops = bus->ops; 705 727 706 - /* Pre-bank switch */ 707 - if (ops->pre_bank_switch) { 708 - ret = ops->pre_bank_switch(bus); 728 + if (bus->multi_link) { 729 + multi_link = true; 730 + mutex_lock(&bus->msg_lock); 731 + } 732 + 733 + /* Pre-bank switch */ 734 + if (ops->pre_bank_switch) { 735 + ret = ops->pre_bank_switch(bus); 736 + if (ret < 0) { 737 + dev_err(bus->dev, 738 + "Pre bank switch op failed: %d", ret); 739 + goto msg_unlock; 740 + } 741 + } 742 + 743 + /* 744 + * Perform Bank switch operation. 745 + * For multi link cases, the actual bank switch is 746 + * synchronized across all Masters and happens later as a 747 + * part of post_bank_switch ops. 
748 + */ 749 + ret = sdw_bank_switch(bus, stream->m_rt_count); 709 750 if (ret < 0) { 710 - dev_err(bus->dev, "Pre bank switch op failed: %d", ret); 711 - return ret; 751 + dev_err(bus->dev, "Bank switch failed: %d", ret); 752 + goto error; 753 + 712 754 } 713 755 } 714 756 715 - /* Bank switch */ 716 - ret = sdw_bank_switch(bus); 717 - if (ret < 0) { 718 - dev_err(bus->dev, "Bank switch failed: %d", ret); 719 - return ret; 720 - } 757 + /* 758 + * For multi link cases, it is expected that the bank switch is 759 + * triggered by the post_bank_switch for the first Master in the list 760 + * and for the other Masters the post_bank_switch() should return doing 761 + * nothing. 762 + */ 763 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 764 + bus = m_rt->bus; 765 + ops = bus->ops; 721 766 722 - /* Post-bank switch */ 723 - if (ops->post_bank_switch) { 724 - ret = ops->post_bank_switch(bus); 767 + /* Post-bank switch */ 768 + if (ops->post_bank_switch) { 769 + ret = ops->post_bank_switch(bus); 770 + if (ret < 0) { 771 + dev_err(bus->dev, 772 + "Post bank switch op failed: %d", ret); 773 + goto error; 774 + } 775 + } else if (bus->multi_link && stream->m_rt_count > 1) { 776 + dev_err(bus->dev, 777 + "Post bank switch ops not implemented"); 778 + goto error; 779 + } 780 + 781 + /* Set the bank switch timeout to default, if not set */ 782 + if (!bus->bank_switch_timeout) 783 + bus->bank_switch_timeout = DEFAULT_BANK_SWITCH_TIMEOUT; 784 + 785 + /* Check if bank switch was successful */ 786 + ret = sdw_ml_sync_bank_switch(bus); 725 787 if (ret < 0) { 726 788 dev_err(bus->dev, 727 - "Post bank switch op failed: %d", ret); 789 + "multi link bank switch failed: %d", ret); 790 + goto error; 791 + } 792 + 793 + mutex_unlock(&bus->msg_lock); 794 + } 795 + 796 + return ret; 797 + 798 + error: 799 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 800 + 801 + bus = m_rt->bus; 802 + 803 + kfree(bus->defer_msg.msg->buf); 804 + kfree(bus->defer_msg.msg); 
805 + } 806 + 807 + msg_unlock: 808 + 809 + if (multi_link) { 810 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 811 + bus = m_rt->bus; 812 + if (mutex_is_locked(&bus->msg_lock)) 813 + mutex_unlock(&bus->msg_lock); 728 814 } 729 815 } 730 816 ··· 863 747 return NULL; 864 748 865 749 stream->name = stream_name; 750 + INIT_LIST_HEAD(&stream->master_list); 866 751 stream->state = SDW_STREAM_ALLOCATED; 752 + stream->m_rt_count = 0; 867 753 868 754 return stream; 869 755 } 870 756 EXPORT_SYMBOL(sdw_alloc_stream); 757 + 758 + static struct sdw_master_runtime 759 + *sdw_find_master_rt(struct sdw_bus *bus, 760 + struct sdw_stream_runtime *stream) 761 + { 762 + struct sdw_master_runtime *m_rt = NULL; 763 + 764 + /* Retrieve Bus handle if already available */ 765 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 766 + if (m_rt->bus == bus) 767 + return m_rt; 768 + } 769 + 770 + return NULL; 771 + } 871 772 872 773 /** 873 774 * sdw_alloc_master_rt() - Allocates and initialize Master runtime handle ··· 902 769 { 903 770 struct sdw_master_runtime *m_rt; 904 771 905 - m_rt = stream->m_rt; 906 - 907 772 /* 908 773 * check if Master is already allocated (as a result of Slave adding 909 774 * it first), if so skip allocation and go to configure 910 775 */ 776 + m_rt = sdw_find_master_rt(bus, stream); 911 777 if (m_rt) 912 778 goto stream_config; 913 779 ··· 917 785 /* Initialization of Master runtime handle */ 918 786 INIT_LIST_HEAD(&m_rt->port_list); 919 787 INIT_LIST_HEAD(&m_rt->slave_rt_list); 920 - stream->m_rt = m_rt; 788 + list_add_tail(&m_rt->stream_node, &stream->master_list); 921 789 922 790 list_add_tail(&m_rt->bus_node, &bus->m_rt_list); 923 791 ··· 975 843 struct sdw_stream_runtime *stream) 976 844 { 977 845 struct sdw_port_runtime *p_rt, *_p_rt; 978 - struct sdw_master_runtime *m_rt = stream->m_rt; 846 + struct sdw_master_runtime *m_rt; 979 847 struct sdw_slave_runtime *s_rt; 980 848 981 - list_for_each_entry(s_rt, 
&m_rt->slave_rt_list, m_rt_node) { 982 - if (s_rt->slave != slave) 983 - continue; 849 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 850 + list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) { 984 851 985 - list_for_each_entry_safe(p_rt, _p_rt, 986 - &s_rt->port_list, port_node) { 987 - list_del(&p_rt->port_node); 988 - kfree(p_rt); 852 + if (s_rt->slave != slave) 853 + continue; 854 + 855 + list_for_each_entry_safe(p_rt, _p_rt, 856 + &s_rt->port_list, port_node) { 857 + 858 + list_del(&p_rt->port_node); 859 + kfree(p_rt); 860 + } 989 861 } 990 862 } 991 863 } ··· 1006 870 struct sdw_stream_runtime *stream) 1007 871 { 1008 872 struct sdw_slave_runtime *s_rt, *_s_rt; 1009 - struct sdw_master_runtime *m_rt = stream->m_rt; 873 + struct sdw_master_runtime *m_rt; 1010 874 1011 - /* Retrieve Slave runtime handle */ 1012 - list_for_each_entry_safe(s_rt, _s_rt, 1013 - &m_rt->slave_rt_list, m_rt_node) { 875 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 876 + /* Retrieve Slave runtime handle */ 877 + list_for_each_entry_safe(s_rt, _s_rt, 878 + &m_rt->slave_rt_list, m_rt_node) { 1014 879 1015 - if (s_rt->slave == slave) { 1016 - list_del(&s_rt->m_rt_node); 1017 - kfree(s_rt); 1018 - return; 880 + if (s_rt->slave == slave) { 881 + list_del(&s_rt->m_rt_node); 882 + kfree(s_rt); 883 + return; 884 + } 1019 885 } 1020 886 } 1021 887 } ··· 1025 887 /** 1026 888 * sdw_release_master_stream() - Free Master runtime handle 1027 889 * 890 + * @m_rt: Master runtime node 1028 891 * @stream: Stream runtime handle. 1029 892 * 1030 893 * This function is to be called with bus_lock held ··· 1033 894 * handle. If this is called first then sdw_release_slave_stream() will have 1034 895 * no effect as Slave(s) runtime handle would already be freed up. 
1035 896 */ 1036 - static void sdw_release_master_stream(struct sdw_stream_runtime *stream) 897 + static void sdw_release_master_stream(struct sdw_master_runtime *m_rt, 898 + struct sdw_stream_runtime *stream) 1037 899 { 1038 - struct sdw_master_runtime *m_rt = stream->m_rt; 1039 900 struct sdw_slave_runtime *s_rt, *_s_rt; 1040 901 1041 902 list_for_each_entry_safe(s_rt, _s_rt, &m_rt->slave_rt_list, m_rt_node) { ··· 1043 904 sdw_release_slave_stream(s_rt->slave, stream); 1044 905 } 1045 906 907 + list_del(&m_rt->stream_node); 1046 908 list_del(&m_rt->bus_node); 909 + kfree(m_rt); 1047 910 } 1048 911 1049 912 /** ··· 1059 918 int sdw_stream_remove_master(struct sdw_bus *bus, 1060 919 struct sdw_stream_runtime *stream) 1061 920 { 921 + struct sdw_master_runtime *m_rt, *_m_rt; 922 + 1062 923 mutex_lock(&bus->bus_lock); 1063 924 1064 - sdw_release_master_stream(stream); 1065 - sdw_master_port_release(bus, stream->m_rt); 1066 - stream->state = SDW_STREAM_RELEASED; 1067 - kfree(stream->m_rt); 1068 - stream->m_rt = NULL; 925 + list_for_each_entry_safe(m_rt, _m_rt, 926 + &stream->master_list, stream_node) { 927 + 928 + if (m_rt->bus != bus) 929 + continue; 930 + 931 + sdw_master_port_release(bus, m_rt); 932 + sdw_release_master_stream(m_rt, stream); 933 + stream->m_rt_count--; 934 + } 935 + 936 + if (list_empty(&stream->master_list)) 937 + stream->state = SDW_STREAM_RELEASED; 1069 938 1070 939 mutex_unlock(&bus->bus_lock); 1071 940 ··· 1258 1107 1259 1108 mutex_lock(&bus->bus_lock); 1260 1109 1110 + /* 1111 + * For multi link streams, add the second master only if 1112 + * the bus supports it. 
1113 + * Check if bus->multi_link is set 1114 + */ 1115 + if (!bus->multi_link && stream->m_rt_count > 0) { 1116 + dev_err(bus->dev, 1117 + "Multilink not supported, link %d", bus->link_id); 1118 + ret = -EINVAL; 1119 + goto unlock; 1120 + } 1121 + 1261 1122 m_rt = sdw_alloc_master_rt(bus, stream_config, stream); 1262 1123 if (!m_rt) { 1263 1124 dev_err(bus->dev, ··· 1287 1124 if (ret) 1288 1125 goto stream_error; 1289 1126 1127 + stream->m_rt_count++; 1128 + 1290 1129 goto unlock; 1291 1130 1292 1131 stream_error: 1293 - sdw_release_master_stream(stream); 1132 + sdw_release_master_stream(m_rt, stream); 1294 1133 unlock: 1295 1134 mutex_unlock(&bus->bus_lock); 1296 1135 return ret; ··· 1370 1205 * we hit error so cleanup the stream, release all Slave(s) and 1371 1206 * Master runtime 1372 1207 */ 1373 - sdw_release_master_stream(stream); 1208 + sdw_release_master_stream(m_rt, stream); 1374 1209 error: 1375 1210 mutex_unlock(&slave->bus->bus_lock); 1376 1211 return ret; ··· 1410 1245 return NULL; 1411 1246 } 1412 1247 1248 + /** 1249 + * sdw_acquire_bus_lock: Acquire bus lock for all Master runtime(s) 1250 + * 1251 + * @stream: SoundWire stream 1252 + * 1253 + * Acquire bus_lock for each of the master runtime(m_rt) part of this 1254 + * stream to reconfigure the bus. 1255 + * NOTE: This function is called from SoundWire stream ops and is 1256 + * expected that a global lock is held before acquiring bus_lock. 
1257 + */ 1258 + static void sdw_acquire_bus_lock(struct sdw_stream_runtime *stream) 1259 + { 1260 + struct sdw_master_runtime *m_rt = NULL; 1261 + struct sdw_bus *bus = NULL; 1262 + 1263 + /* Iterate for all Master(s) in Master list */ 1264 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 1265 + bus = m_rt->bus; 1266 + 1267 + mutex_lock(&bus->bus_lock); 1268 + } 1269 + } 1270 + 1271 + /** 1272 + * sdw_release_bus_lock: Release bus lock for all Master runtime(s) 1273 + * 1274 + * @stream: SoundWire stream 1275 + * 1276 + * Release the previously held bus_lock after reconfiguring the bus. 1277 + * NOTE: This function is called from SoundWire stream ops and is 1278 + * expected that a global lock is held before releasing bus_lock. 1279 + */ 1280 + static void sdw_release_bus_lock(struct sdw_stream_runtime *stream) 1281 + { 1282 + struct sdw_master_runtime *m_rt = NULL; 1283 + struct sdw_bus *bus = NULL; 1284 + 1285 + /* Iterate for all Master(s) in Master list */ 1286 + list_for_each_entry_reverse(m_rt, &stream->master_list, stream_node) { 1287 + bus = m_rt->bus; 1288 + mutex_unlock(&bus->bus_lock); 1289 + } 1290 + } 1291 + 1413 1292 static int _sdw_prepare_stream(struct sdw_stream_runtime *stream) 1414 1293 { 1415 - struct sdw_master_runtime *m_rt = stream->m_rt; 1416 - struct sdw_bus *bus = m_rt->bus; 1294 + struct sdw_master_runtime *m_rt = NULL; 1295 + struct sdw_bus *bus = NULL; 1417 1296 struct sdw_master_prop *prop = NULL; 1418 1297 struct sdw_bus_params params; 1419 1298 int ret; 1420 1299 1421 - prop = &bus->prop; 1422 - memcpy(&params, &bus->params, sizeof(params)); 1300 + /* Prepare Master(s) and Slave(s) port(s) associated with stream */ 1301 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 1302 + bus = m_rt->bus; 1303 + prop = &bus->prop; 1304 + memcpy(&params, &bus->params, sizeof(params)); 1423 1305 1424 - /* TODO: Support Asynchronous mode */ 1425 - if ((prop->max_freq % stream->params.rate) != 0) { 1426 - 
dev_err(bus->dev, "Async mode not supported"); 1427 - return -EINVAL; 1428 - } 1306 + /* TODO: Support Asynchronous mode */ 1307 + if ((prop->max_freq % stream->params.rate) != 0) { 1308 + dev_err(bus->dev, "Async mode not supported"); 1309 + return -EINVAL; 1310 + } 1429 1311 1430 - /* Increment cumulative bus bandwidth */ 1431 - /* TODO: Update this during Device-Device support */ 1432 - bus->params.bandwidth += m_rt->stream->params.rate * 1433 - m_rt->ch_count * m_rt->stream->params.bps; 1312 + /* Increment cumulative bus bandwidth */ 1313 + /* TODO: Update this during Device-Device support */ 1314 + bus->params.bandwidth += m_rt->stream->params.rate * 1315 + m_rt->ch_count * m_rt->stream->params.bps; 1434 1316 1435 - /* Program params */ 1436 - ret = sdw_program_params(bus); 1437 - if (ret < 0) { 1438 - dev_err(bus->dev, "Program params failed: %d", ret); 1439 - goto restore_params; 1317 + /* Program params */ 1318 + ret = sdw_program_params(bus); 1319 + if (ret < 0) { 1320 + dev_err(bus->dev, "Program params failed: %d", ret); 1321 + goto restore_params; 1322 + } 1323 + 1440 1324 } 1441 1325 1442 1326 ret = do_bank_switch(stream); ··· 1494 1280 goto restore_params; 1495 1281 } 1496 1282 1497 - /* Prepare port(s) on the new clock configuration */ 1498 - ret = sdw_prep_deprep_ports(m_rt, true); 1499 - if (ret < 0) { 1500 - dev_err(bus->dev, "Prepare port(s) failed ret = %d", 1501 - ret); 1502 - return ret; 1283 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 1284 + bus = m_rt->bus; 1285 + 1286 + /* Prepare port(s) on the new clock configuration */ 1287 + ret = sdw_prep_deprep_ports(m_rt, true); 1288 + if (ret < 0) { 1289 + dev_err(bus->dev, "Prepare port(s) failed ret = %d", 1290 + ret); 1291 + return ret; 1292 + } 1503 1293 } 1504 1294 1505 1295 stream->state = SDW_STREAM_PREPARED; ··· 1531 1313 return -EINVAL; 1532 1314 } 1533 1315 1534 - mutex_lock(&stream->m_rt->bus->bus_lock); 1316 + sdw_acquire_bus_lock(stream); 1535 1317 1536 1318 ret = 
_sdw_prepare_stream(stream); 1537 1319 if (ret < 0) 1538 1320 pr_err("Prepare for stream:%s failed: %d", stream->name, ret); 1539 1321 1540 - mutex_unlock(&stream->m_rt->bus->bus_lock); 1322 + sdw_release_bus_lock(stream); 1541 1323 return ret; 1542 1324 } 1543 1325 EXPORT_SYMBOL(sdw_prepare_stream); 1544 1326 1545 1327 static int _sdw_enable_stream(struct sdw_stream_runtime *stream) 1546 1328 { 1547 - struct sdw_master_runtime *m_rt = stream->m_rt; 1548 - struct sdw_bus *bus = m_rt->bus; 1329 + struct sdw_master_runtime *m_rt = NULL; 1330 + struct sdw_bus *bus = NULL; 1549 1331 int ret; 1550 1332 1551 - /* Program params */ 1552 - ret = sdw_program_params(bus); 1553 - if (ret < 0) { 1554 - dev_err(bus->dev, "Program params failed: %d", ret); 1555 - return ret; 1556 - } 1333 + /* Enable Master(s) and Slave(s) port(s) associated with stream */ 1334 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 1335 + bus = m_rt->bus; 1557 1336 1558 - /* Enable port(s) */ 1559 - ret = sdw_enable_disable_ports(m_rt, true); 1560 - if (ret < 0) { 1561 - dev_err(bus->dev, "Enable port(s) failed ret: %d", ret); 1562 - return ret; 1337 + /* Program params */ 1338 + ret = sdw_program_params(bus); 1339 + if (ret < 0) { 1340 + dev_err(bus->dev, "Program params failed: %d", ret); 1341 + return ret; 1342 + } 1343 + 1344 + /* Enable port(s) */ 1345 + ret = sdw_enable_disable_ports(m_rt, true); 1346 + if (ret < 0) { 1347 + dev_err(bus->dev, "Enable port(s) failed ret: %d", ret); 1348 + return ret; 1349 + } 1563 1350 } 1564 1351 1565 1352 ret = do_bank_switch(stream); ··· 1593 1370 return -EINVAL; 1594 1371 } 1595 1372 1596 - mutex_lock(&stream->m_rt->bus->bus_lock); 1373 + sdw_acquire_bus_lock(stream); 1597 1374 1598 1375 ret = _sdw_enable_stream(stream); 1599 1376 if (ret < 0) 1600 1377 pr_err("Enable for stream:%s failed: %d", stream->name, ret); 1601 1378 1602 - mutex_unlock(&stream->m_rt->bus->bus_lock); 1379 + sdw_release_bus_lock(stream); 1603 1380 return ret; 1604 1381 } 
1605 1382 EXPORT_SYMBOL(sdw_enable_stream); 1606 1383 1607 1384 static int _sdw_disable_stream(struct sdw_stream_runtime *stream) 1608 1385 { 1609 - struct sdw_master_runtime *m_rt = stream->m_rt; 1610 - struct sdw_bus *bus = m_rt->bus; 1386 + struct sdw_master_runtime *m_rt = NULL; 1387 + struct sdw_bus *bus = NULL; 1611 1388 int ret; 1612 1389 1613 - /* Disable port(s) */ 1614 - ret = sdw_enable_disable_ports(m_rt, false); 1615 - if (ret < 0) { 1616 - dev_err(bus->dev, "Disable port(s) failed: %d", ret); 1617 - return ret; 1390 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 1391 + bus = m_rt->bus; 1392 + /* Disable port(s) */ 1393 + ret = sdw_enable_disable_ports(m_rt, false); 1394 + if (ret < 0) { 1395 + dev_err(bus->dev, "Disable port(s) failed: %d", ret); 1396 + return ret; 1397 + } 1618 1398 } 1619 - 1620 1399 stream->state = SDW_STREAM_DISABLED; 1621 1400 1622 - /* Program params */ 1623 - ret = sdw_program_params(bus); 1624 - if (ret < 0) { 1625 - dev_err(bus->dev, "Program params failed: %d", ret); 1626 - return ret; 1401 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 1402 + bus = m_rt->bus; 1403 + /* Program params */ 1404 + ret = sdw_program_params(bus); 1405 + if (ret < 0) { 1406 + dev_err(bus->dev, "Program params failed: %d", ret); 1407 + return ret; 1408 + } 1627 1409 } 1628 1410 1629 1411 return do_bank_switch(stream); ··· 1650 1422 return -EINVAL; 1651 1423 } 1652 1424 1653 - mutex_lock(&stream->m_rt->bus->bus_lock); 1425 + sdw_acquire_bus_lock(stream); 1654 1426 1655 1427 ret = _sdw_disable_stream(stream); 1656 1428 if (ret < 0) 1657 1429 pr_err("Disable for stream:%s failed: %d", stream->name, ret); 1658 1430 1659 - mutex_unlock(&stream->m_rt->bus->bus_lock); 1431 + sdw_release_bus_lock(stream); 1660 1432 return ret; 1661 1433 } 1662 1434 EXPORT_SYMBOL(sdw_disable_stream); 1663 1435 1664 1436 static int _sdw_deprepare_stream(struct sdw_stream_runtime *stream) 1665 1437 { 1666 - struct sdw_master_runtime *m_rt = 
stream->m_rt; 1667 - struct sdw_bus *bus = m_rt->bus; 1438 + struct sdw_master_runtime *m_rt = NULL; 1439 + struct sdw_bus *bus = NULL; 1668 1440 int ret = 0; 1669 1441 1670 - /* De-prepare port(s) */ 1671 - ret = sdw_prep_deprep_ports(m_rt, false); 1672 - if (ret < 0) { 1673 - dev_err(bus->dev, "De-prepare port(s) failed: %d", ret); 1674 - return ret; 1442 + list_for_each_entry(m_rt, &stream->master_list, stream_node) { 1443 + bus = m_rt->bus; 1444 + /* De-prepare port(s) */ 1445 + ret = sdw_prep_deprep_ports(m_rt, false); 1446 + if (ret < 0) { 1447 + dev_err(bus->dev, "De-prepare port(s) failed: %d", ret); 1448 + return ret; 1449 + } 1450 + 1451 + /* TODO: Update this during Device-Device support */ 1452 + bus->params.bandwidth -= m_rt->stream->params.rate * 1453 + m_rt->ch_count * m_rt->stream->params.bps; 1454 + 1455 + /* Program params */ 1456 + ret = sdw_program_params(bus); 1457 + if (ret < 0) { 1458 + dev_err(bus->dev, "Program params failed: %d", ret); 1459 + return ret; 1460 + } 1461 + 1675 1462 } 1676 1463 1677 1464 stream->state = SDW_STREAM_DEPREPARED; 1678 - 1679 - /* TODO: Update this during Device-Device support */ 1680 - bus->params.bandwidth -= m_rt->stream->params.rate * 1681 - m_rt->ch_count * m_rt->stream->params.bps; 1682 - 1683 - /* Program params */ 1684 - ret = sdw_program_params(bus); 1685 - if (ret < 0) { 1686 - dev_err(bus->dev, "Program params failed: %d", ret); 1687 - return ret; 1688 - } 1689 - 1690 1465 return do_bank_switch(stream); 1691 1466 } 1692 1467 ··· 1709 1478 return -EINVAL; 1710 1479 } 1711 1480 1712 - mutex_lock(&stream->m_rt->bus->bus_lock); 1713 - 1481 + sdw_acquire_bus_lock(stream); 1714 1482 ret = _sdw_deprepare_stream(stream); 1715 1483 if (ret < 0) 1716 1484 pr_err("De-prepare for stream:%d failed: %d", ret, ret); 1717 1485 1718 - mutex_unlock(&stream->m_rt->bus->bus_lock); 1486 + sdw_release_bus_lock(stream); 1719 1487 return ret; 1720 1488 } 1721 1489 EXPORT_SYMBOL(sdw_deprepare_stream);
+2 -1
drivers/thunderbolt/cap.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - capabilities lookup 3 + * Thunderbolt driver - capabilities lookup 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #include <linux/slab.h>
+6 -6
drivers/thunderbolt/ctl.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - control channel and configuration commands 3 + * Thunderbolt driver - control channel and configuration commands 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #include <linux/crc32.h> ··· 632 631 ctl->rx_packets[i]->frame.callback = tb_ctl_rx_callback; 633 632 } 634 633 635 - tb_ctl_info(ctl, "control channel created\n"); 634 + tb_ctl_dbg(ctl, "control channel created\n"); 636 635 return ctl; 637 636 err: 638 637 tb_ctl_free(ctl); ··· 663 662 tb_ctl_pkg_free(ctl->rx_packets[i]); 664 663 665 664 666 - if (ctl->frame_pool) 667 - dma_pool_destroy(ctl->frame_pool); 665 + dma_pool_destroy(ctl->frame_pool); 668 666 kfree(ctl); 669 667 } 670 668 ··· 673 673 void tb_ctl_start(struct tb_ctl *ctl) 674 674 { 675 675 int i; 676 - tb_ctl_info(ctl, "control channel starting...\n"); 676 + tb_ctl_dbg(ctl, "control channel starting...\n"); 677 677 tb_ring_start(ctl->tx); /* is used to ack hotplug packets, start first */ 678 678 tb_ring_start(ctl->rx); 679 679 for (i = 0; i < TB_CTL_RX_PKG_COUNT; i++) ··· 702 702 if (!list_empty(&ctl->request_queue)) 703 703 tb_ctl_WARN(ctl, "dangling request in request_queue\n"); 704 704 INIT_LIST_HEAD(&ctl->request_queue); 705 - tb_ctl_info(ctl, "control channel stopped\n"); 705 + tb_ctl_dbg(ctl, "control channel stopped\n"); 706 706 } 707 707 708 708 /* public interface, commands */
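The `ctl.c` cleanup above drops the `if (ctl->frame_pool)` guard because `dma_pool_destroy()`, like most kernel release helpers, is a no-op on a NULL argument; that is the same contract userspace `free()` gives. A tiny sketch of the idiom (the `frame_pool` type is hypothetical):

```c
#include <stdlib.h>

struct frame_pool { char *mem; };

/* Release helper that tolerates NULL, so error paths can call it
 * unconditionally, as tb_ctl_free() now does with dma_pool_destroy().
 * Returns 1 if something was freed, 0 on the NULL no-op. */
static int pool_destroy(struct frame_pool *pool)
{
    if (!pool)
        return 0;            /* NULL-safe: callers need no guard */
    free(pool->mem);         /* free(NULL) is itself a no-op */
    free(pool);
    return 1;
}
```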
+2 -1
drivers/thunderbolt/ctl.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - control channel and configuration commands 3 + * Thunderbolt driver - control channel and configuration commands 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #ifndef _TB_CFG
+1 -4
drivers/thunderbolt/dma_port.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Thunderbolt DMA configuration based mailbox support 3 4 * 4 5 * Copyright (C) 2017, Intel Corporation 5 6 * Authors: Michael Jamet <michael.jamet@intel.com> 6 7 * Mika Westerberg <mika.westerberg@linux.intel.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #include <linux/delay.h>
+1 -4
drivers/thunderbolt/dma_port.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Thunderbolt DMA configuration based mailbox support 3 4 * 4 5 * Copyright (C) 2017, Intel Corporation 5 6 * Authors: Michael Jamet <michael.jamet@intel.com> 6 7 * Mika Westerberg <mika.westerberg@linux.intel.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #ifndef DMA_PORT_H_
+2 -5
drivers/thunderbolt/domain.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Thunderbolt bus support 3 4 * 4 5 * Copyright (C) 2017, Intel Corporation 5 - * Author: Mika Westerberg <mika.westerberg@linux.intel.com> 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 6 + * Author: Mika Westerberg <mika.westerberg@linux.intel.com> 10 7 */ 11 8 12 9 #include <linux/device.h>
+3 -2
drivers/thunderbolt/eeprom.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - eeprom access 3 + * Thunderbolt driver - eeprom access 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #include <linux/crc32.h> ··· 541 540 return res; 542 541 size &= 0x3ff; 543 542 size += TB_DROM_DATA_START; 544 - tb_sw_info(sw, "reading drom (length: %#x)\n", size); 543 + tb_sw_dbg(sw, "reading drom (length: %#x)\n", size); 545 544 if (size < sizeof(*header)) { 546 545 tb_sw_warn(sw, "drom too small, aborting\n"); 547 546 return -EIO;
+1 -4
drivers/thunderbolt/icm.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Internal Thunderbolt Connection Manager. This is a firmware running on 3 4 * the Thunderbolt host controller performing most of the low-level ··· 7 6 * Copyright (C) 2017, Intel Corporation 8 7 * Authors: Michael Jamet <michael.jamet@intel.com> 9 8 * Mika Westerberg <mika.westerberg@linux.intel.com> 10 - * 11 - * This program is free software; you can redistribute it and/or modify 12 - * it under the terms of the GNU General Public License version 2 as 13 - * published by the Free Software Foundation. 14 9 */ 15 10 16 11 #include <linux/delay.h>
+17 -16
drivers/thunderbolt/nhi.c
··· 1 1 /* 2 - * Thunderbolt Cactus Ridge driver - NHI driver 2 + * Thunderbolt driver - NHI driver 3 3 * 4 4 * The NHI (native host interface) is the pci device that allows us to send and 5 5 * receive frames from the thunderbolt bus. 6 6 * 7 7 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 8 + * Copyright (C) 2018, Intel Corporation 8 9 */ 9 10 10 11 #include <linux/pm_runtime.h> ··· 96 95 else 97 96 new = old & ~mask; 98 97 99 - dev_info(&ring->nhi->pdev->dev, 100 - "%s interrupt at register %#x bit %d (%#x -> %#x)\n", 101 - active ? "enabling" : "disabling", reg, bit, old, new); 98 + dev_dbg(&ring->nhi->pdev->dev, 99 + "%s interrupt at register %#x bit %d (%#x -> %#x)\n", 100 + active ? "enabling" : "disabling", reg, bit, old, new); 102 101 103 102 if (new == old) 104 103 dev_WARN(&ring->nhi->pdev->dev, ··· 477 476 void *poll_data) 478 477 { 479 478 struct tb_ring *ring = NULL; 480 - dev_info(&nhi->pdev->dev, "allocating %s ring %d of size %d\n", 481 - transmit ? "TX" : "RX", hop, size); 479 + 480 + dev_dbg(&nhi->pdev->dev, "allocating %s ring %d of size %d\n", 481 + transmit ? 
"TX" : "RX", hop, size); 482 482 483 483 /* Tx Ring 2 is reserved for E2E workaround */ 484 484 if (transmit && hop == RING_E2E_UNUSED_HOPID) ··· 587 585 dev_WARN(&ring->nhi->pdev->dev, "ring already started\n"); 588 586 goto err; 589 587 } 590 - dev_info(&ring->nhi->pdev->dev, "starting %s %d\n", 591 - RING_TYPE(ring), ring->hop); 588 + dev_dbg(&ring->nhi->pdev->dev, "starting %s %d\n", 589 + RING_TYPE(ring), ring->hop); 592 590 593 591 if (ring->flags & RING_FLAG_FRAME) { 594 592 /* Means 4096 */ ··· 649 647 { 650 648 spin_lock_irq(&ring->nhi->lock); 651 649 spin_lock(&ring->lock); 652 - dev_info(&ring->nhi->pdev->dev, "stopping %s %d\n", 653 - RING_TYPE(ring), ring->hop); 650 + dev_dbg(&ring->nhi->pdev->dev, "stopping %s %d\n", 651 + RING_TYPE(ring), ring->hop); 654 652 if (ring->nhi->going_away) 655 653 goto err; 656 654 if (!ring->running) { ··· 718 716 ring->descriptors_dma = 0; 719 717 720 718 721 - dev_info(&ring->nhi->pdev->dev, 722 - "freeing %s %d\n", 723 - RING_TYPE(ring), 724 - ring->hop); 719 + dev_dbg(&ring->nhi->pdev->dev, "freeing %s %d\n", RING_TYPE(ring), 720 + ring->hop); 725 721 726 722 /** 727 723 * ring->work can no longer be scheduled (it is scheduled only ··· 931 931 static void nhi_shutdown(struct tb_nhi *nhi) 932 932 { 933 933 int i; 934 - dev_info(&nhi->pdev->dev, "shutdown\n"); 934 + 935 + dev_dbg(&nhi->pdev->dev, "shutdown\n"); 935 936 936 937 for (i = 0; i < nhi->hop_count; i++) { 937 938 if (nhi->tx_rings[i]) ··· 1060 1059 return -ENODEV; 1061 1060 } 1062 1061 1063 - dev_info(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n"); 1062 + dev_dbg(&nhi->pdev->dev, "NHI initialized, starting thunderbolt\n"); 1064 1063 1065 1064 res = tb_domain_add(tb); 1066 1065 if (res) {
+2 -1
drivers/thunderbolt/nhi.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - NHI driver 3 + * Thunderbolt driver - NHI driver 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #ifndef DSL3510_H_
+1
drivers/thunderbolt/nhi_regs.h
··· 3 3 * Thunderbolt driver - NHI registers 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #ifndef NHI_REGS_H_
+13 -13
drivers/thunderbolt/path.c
··· 13 13 14 14 static void tb_dump_hop(struct tb_port *port, struct tb_regs_hop *hop) 15 15 { 16 - tb_port_info(port, " Hop through port %d to hop %d (%s)\n", 17 - hop->out_port, hop->next_hop, 18 - hop->enable ? "enabled" : "disabled"); 19 - tb_port_info(port, " Weight: %d Priority: %d Credits: %d Drop: %d\n", 20 - hop->weight, hop->priority, 21 - hop->initial_credits, hop->drop_packages); 22 - tb_port_info(port, " Counter enabled: %d Counter index: %d\n", 23 - hop->counter_enable, hop->counter); 24 - tb_port_info(port, " Flow Control (In/Eg): %d/%d Shared Buffer (In/Eg): %d/%d\n", 25 - hop->ingress_fc, hop->egress_fc, 26 - hop->ingress_shared_buffer, hop->egress_shared_buffer); 27 - tb_port_info(port, " Unknown1: %#x Unknown2: %#x Unknown3: %#x\n", 28 - hop->unknown1, hop->unknown2, hop->unknown3); 16 + tb_port_dbg(port, " Hop through port %d to hop %d (%s)\n", 17 + hop->out_port, hop->next_hop, 18 + hop->enable ? "enabled" : "disabled"); 19 + tb_port_dbg(port, " Weight: %d Priority: %d Credits: %d Drop: %d\n", 20 + hop->weight, hop->priority, 21 + hop->initial_credits, hop->drop_packages); 22 + tb_port_dbg(port, " Counter enabled: %d Counter index: %d\n", 23 + hop->counter_enable, hop->counter); 24 + tb_port_dbg(port, " Flow Control (In/Eg): %d/%d Shared Buffer (In/Eg): %d/%d\n", 25 + hop->ingress_fc, hop->egress_fc, 26 + hop->ingress_shared_buffer, hop->egress_shared_buffer); 27 + tb_port_dbg(port, " Unknown1: %#x Unknown2: %#x Unknown3: %#x\n", 28 + hop->unknown1, hop->unknown2, hop->unknown3); 29 29 } 30 30 31 31 /**
+1 -4
drivers/thunderbolt/property.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Thunderbolt XDomain property support 3 4 * 4 5 * Copyright (C) 2017, Intel Corporation 5 6 * Authors: Michael Jamet <michael.jamet@intel.com> 6 7 * Mika Westerberg <mika.westerberg@linux.intel.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #include <linux/err.h>
+40 -31
drivers/thunderbolt/switch.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - switch/port utility functions 3 + * Thunderbolt driver - switch/port utility functions 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #include <linux/delay.h> ··· 437 436 438 437 static void tb_dump_port(struct tb *tb, struct tb_regs_port_header *port) 439 438 { 440 - tb_info(tb, 441 - " Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n", 442 - port->port_number, port->vendor_id, port->device_id, 443 - port->revision, port->thunderbolt_version, tb_port_type(port), 444 - port->type); 445 - tb_info(tb, " Max hop id (in/out): %d/%d\n", 446 - port->max_in_hop_id, port->max_out_hop_id); 447 - tb_info(tb, " Max counters: %d\n", port->max_counters); 448 - tb_info(tb, " NFC Credits: %#x\n", port->nfc_credits); 439 + tb_dbg(tb, 440 + " Port %d: %x:%x (Revision: %d, TB Version: %d, Type: %s (%#x))\n", 441 + port->port_number, port->vendor_id, port->device_id, 442 + port->revision, port->thunderbolt_version, tb_port_type(port), 443 + port->type); 444 + tb_dbg(tb, " Max hop id (in/out): %d/%d\n", 445 + port->max_in_hop_id, port->max_out_hop_id); 446 + tb_dbg(tb, " Max counters: %d\n", port->max_counters); 447 + tb_dbg(tb, " NFC Credits: %#x\n", port->nfc_credits); 449 448 } 450 449 451 450 /** ··· 606 605 607 606 static void tb_dump_switch(struct tb *tb, struct tb_regs_switch_header *sw) 608 607 { 609 - tb_info(tb, 610 - " Switch: %x:%x (Revision: %d, TB Version: %d)\n", 611 - sw->vendor_id, sw->device_id, sw->revision, 612 - sw->thunderbolt_version); 613 - tb_info(tb, " Max Port Number: %d\n", sw->max_port_number); 614 - tb_info(tb, " Config:\n"); 615 - tb_info(tb, 608 + tb_dbg(tb, " Switch: %x:%x (Revision: %d, TB Version: %d)\n", 609 + sw->vendor_id, sw->device_id, sw->revision, 610 + sw->thunderbolt_version); 611 + tb_dbg(tb, " Max Port Number: %d\n", sw->max_port_number); 612 + 
tb_dbg(tb, " Config:\n"); 613 + tb_dbg(tb, 616 614 " Upstream Port Number: %d Depth: %d Route String: %#llx Enabled: %d, PlugEventsDelay: %dms\n", 617 - sw->upstream_port_number, sw->depth, 618 - (((u64) sw->route_hi) << 32) | sw->route_lo, 619 - sw->enabled, sw->plug_events_delay); 620 - tb_info(tb, 621 - " unknown1: %#x unknown4: %#x\n", 622 - sw->__unknown1, sw->__unknown4); 615 + sw->upstream_port_number, sw->depth, 616 + (((u64) sw->route_hi) << 32) | sw->route_lo, 617 + sw->enabled, sw->plug_events_delay); 618 + tb_dbg(tb, " unknown1: %#x unknown4: %#x\n", 619 + sw->__unknown1, sw->__unknown4); 623 620 } 624 621 625 622 /** ··· 633 634 header.route_lo = route, 634 635 header.enabled = true, 635 636 }; 636 - tb_info(tb, "resetting switch at %llx\n", route); 637 + tb_dbg(tb, "resetting switch at %llx\n", route); 637 638 res.err = tb_cfg_write(tb->ctl, ((u32 *) &header) + 2, route, 638 639 0, 2, 2, 2); 639 640 if (res.err) ··· 1138 1139 if (tb_cfg_read(tb->ctl, &sw->config, route, 0, TB_CFG_SWITCH, 0, 5)) 1139 1140 goto err_free_sw_ports; 1140 1141 1141 - tb_info(tb, "current switch config:\n"); 1142 + tb_dbg(tb, "current switch config:\n"); 1142 1143 tb_dump_switch(tb, &sw->config); 1143 1144 1144 1145 /* configure switch */ ··· 1245 1246 int ret; 1246 1247 1247 1248 route = tb_route(sw); 1248 - tb_info(tb, 1249 - "initializing Switch at %#llx (depth: %d, up port: %d)\n", 1250 - route, tb_route_length(route), sw->config.upstream_port_number); 1249 + tb_dbg(tb, "initializing Switch at %#llx (depth: %d, up port: %d)\n", 1250 + route, tb_route_length(route), sw->config.upstream_port_number); 1251 1251 1252 1252 if (sw->config.vendor_id != PCI_VENDOR_ID_INTEL) 1253 1253 tb_sw_warn(sw, "unknown switch vendor id %#x\n", ··· 1384 1386 tb_sw_warn(sw, "tb_eeprom_read_rom failed\n"); 1385 1387 return ret; 1386 1388 } 1387 - tb_sw_info(sw, "uid: %#llx\n", sw->uid); 1389 + tb_sw_dbg(sw, "uid: %#llx\n", sw->uid); 1388 1390 1389 1391 tb_switch_set_uuid(sw); 1390 1392 1391 
1393 for (i = 0; i <= sw->config.max_port_number; i++) { 1392 1394 if (sw->ports[i].disabled) { 1393 - tb_port_info(&sw->ports[i], "disabled by eeprom\n"); 1395 + tb_port_dbg(&sw->ports[i], "disabled by eeprom\n"); 1394 1396 continue; 1395 1397 } 1396 1398 ret = tb_init_port(&sw->ports[i]); ··· 1402 1404 ret = device_add(&sw->dev); 1403 1405 if (ret) 1404 1406 return ret; 1407 + 1408 + if (tb_route(sw)) { 1409 + dev_info(&sw->dev, "new device found, vendor=%#x device=%#x\n", 1410 + sw->vendor, sw->device); 1411 + if (sw->vendor_name && sw->device_name) 1412 + dev_info(&sw->dev, "%s %s\n", sw->vendor_name, 1413 + sw->device_name); 1414 + } 1405 1415 1406 1416 ret = tb_switch_nvm_add(sw); 1407 1417 if (ret) { ··· 1462 1456 tb_plug_events_active(sw, false); 1463 1457 1464 1458 tb_switch_nvm_remove(sw); 1459 + 1460 + if (tb_route(sw)) 1461 + dev_info(&sw->dev, "device disconnected\n"); 1465 1462 device_unregister(&sw->dev); 1466 1463 } 1467 1464 ··· 1492 1483 int tb_switch_resume(struct tb_switch *sw) 1493 1484 { 1494 1485 int i, err; 1495 - tb_sw_info(sw, "resuming switch\n"); 1486 + tb_sw_dbg(sw, "resuming switch\n"); 1496 1487 1497 1488 /* 1498 1489 * Check for UID of the connected switches except for root
+5 -5
drivers/thunderbolt/tb.c
··· 404 404 { 405 405 struct tb_cm *tcm = tb_priv(tb); 406 406 407 - tb_info(tb, "suspending...\n"); 407 + tb_dbg(tb, "suspending...\n"); 408 408 tb_switch_suspend(tb->root_switch); 409 409 tcm->hotplug_active = false; /* signal tb_handle_hotplug to quit */ 410 - tb_info(tb, "suspend finished\n"); 410 + tb_dbg(tb, "suspend finished\n"); 411 411 412 412 return 0; 413 413 } ··· 417 417 struct tb_cm *tcm = tb_priv(tb); 418 418 struct tb_pci_tunnel *tunnel, *n; 419 419 420 - tb_info(tb, "resuming...\n"); 420 + tb_dbg(tb, "resuming...\n"); 421 421 422 422 /* remove any pci devices the firmware might have setup */ 423 423 tb_switch_reset(tb, 0); ··· 432 432 * the pcie links need some time to get going. 433 433 * 100ms works for me... 434 434 */ 435 - tb_info(tb, "tunnels restarted, sleeping for 100ms\n"); 435 + tb_dbg(tb, "tunnels restarted, sleeping for 100ms\n"); 436 436 msleep(100); 437 437 } 438 438 /* Allow tb_handle_hotplug to progress events */ 439 439 tcm->hotplug_active = true; 440 - tb_info(tb, "resume finished\n"); 440 + tb_dbg(tb, "resume finished\n"); 441 441 442 442 return 0; 443 443 }
+6 -3
drivers/thunderbolt/tb.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - bus logic (NHI independent) 3 + * Thunderbolt driver - bus logic (NHI independent) 4 4 * 5 5 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 6 + * Copyright (C) 2018, Intel Corporation 6 7 */ 7 8 8 9 #ifndef TB_H_ ··· 328 327 #define tb_WARN(tb, fmt, arg...) dev_WARN(&(tb)->nhi->pdev->dev, fmt, ## arg) 329 328 #define tb_warn(tb, fmt, arg...) dev_warn(&(tb)->nhi->pdev->dev, fmt, ## arg) 330 329 #define tb_info(tb, fmt, arg...) dev_info(&(tb)->nhi->pdev->dev, fmt, ## arg) 331 - 330 + #define tb_dbg(tb, fmt, arg...) dev_dbg(&(tb)->nhi->pdev->dev, fmt, ## arg) 332 331 333 332 #define __TB_SW_PRINT(level, sw, fmt, arg...) \ 334 333 do { \ ··· 339 338 #define tb_sw_WARN(sw, fmt, arg...) __TB_SW_PRINT(tb_WARN, sw, fmt, ##arg) 340 339 #define tb_sw_warn(sw, fmt, arg...) __TB_SW_PRINT(tb_warn, sw, fmt, ##arg) 341 340 #define tb_sw_info(sw, fmt, arg...) __TB_SW_PRINT(tb_info, sw, fmt, ##arg) 342 - 341 + #define tb_sw_dbg(sw, fmt, arg...) __TB_SW_PRINT(tb_dbg, sw, fmt, ##arg) 343 342 344 343 #define __TB_PORT_PRINT(level, _port, fmt, arg...) \ 345 344 do { \ ··· 353 352 __TB_PORT_PRINT(tb_warn, port, fmt, ##arg) 354 353 #define tb_port_info(port, fmt, arg...) \ 355 354 __TB_PORT_PRINT(tb_info, port, fmt, ##arg) 355 + #define tb_port_dbg(port, fmt, arg...) \ 356 + __TB_PORT_PRINT(tb_dbg, port, fmt, ##arg) 356 357 357 358 struct tb *icm_probe(struct tb_nhi *nhi); 358 359 struct tb *tb_probe(struct tb_nhi *nhi);
+1 -4
drivers/thunderbolt/tb_msgs.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Thunderbolt control channel messages 3 4 * 4 5 * Copyright (C) 2014 Andreas Noever <andreas.noever@gmail.com> 5 6 * Copyright (C) 2017, Intel Corporation 6 - * 7 - * This program is free software; you can redistribute it and/or modify 8 - * it under the terms of the GNU General Public License version 2 as 9 - * published by the Free Software Foundation. 10 7 */ 11 8 12 9 #ifndef _TB_MSGS
+2 -1
drivers/thunderbolt/tb_regs.h
··· 1 1 /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 - * Thunderbolt Cactus Ridge driver - Port/Switch config area registers 3 + * Thunderbolt driver - Port/Switch config area registers 4 4 * 5 5 * Every thunderbolt device consists (logically) of a switch with multiple 6 6 * ports. Every port contains up to four config regions (HOPS, PORT, SWITCH, 7 7 * COUNTERS) which are used to configure the device. 8 8 * 9 9 * Copyright (c) 2014 Andreas Noever <andreas.noever@gmail.com> 10 + * Copyright (C) 2018, Intel Corporation 10 11 */ 11 12 12 13 #ifndef _TB_REGS
+1 -4
drivers/thunderbolt/xdomain.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Thunderbolt XDomain discovery protocol support 3 4 * 4 5 * Copyright (C) 2017, Intel Corporation 5 6 * Authors: Michael Jamet <michael.jamet@intel.com> 6 7 * Mika Westerberg <mika.westerberg@linux.intel.com> 7 - * 8 - * This program is free software; you can redistribute it and/or modify 9 - * it under the terms of the GNU General Public License version 2 as 10 - * published by the Free Software Foundation. 11 8 */ 12 9 13 10 #include <linux/device.h>
+23 -12
drivers/uio/uio.c
··· 274 274 .dev_groups = uio_groups, 275 275 }; 276 276 277 + static bool uio_class_registered; 278 + 277 279 /* 278 280 * device functions 279 281 */ ··· 670 668 struct page *page; 671 669 unsigned long offset; 672 670 void *addr; 673 - int ret = 0; 671 + vm_fault_t ret = 0; 674 672 int mi; 675 673 676 674 mutex_lock(&idev->info_lock); ··· 738 736 return -EINVAL; 739 737 740 738 vma->vm_ops = &uio_physical_vm_ops; 741 - vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 739 + if (idev->info->mem[mi].memtype == UIO_MEM_PHYS) 740 + vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot); 742 741 743 742 /* 744 743 * We cannot use the vm_iomap_memory() helper here, ··· 796 793 } 797 794 798 795 switch (idev->info->mem[mi].memtype) { 799 - case UIO_MEM_PHYS: 800 - ret = uio_mmap_physical(vma); 801 - break; 802 - case UIO_MEM_LOGICAL: 803 - case UIO_MEM_VIRTUAL: 804 - ret = uio_mmap_logical(vma); 805 - break; 806 - default: 807 - ret = -EINVAL; 796 + case UIO_MEM_IOVA: 797 + case UIO_MEM_PHYS: 798 + ret = uio_mmap_physical(vma); 799 + break; 800 + case UIO_MEM_LOGICAL: 801 + case UIO_MEM_VIRTUAL: 802 + ret = uio_mmap_logical(vma); 803 + break; 804 + default: 805 + ret = -EINVAL; 808 806 } 809 807 810 - out: 808 + out: 811 809 mutex_unlock(&idev->info_lock); 812 810 return ret; 813 811 } ··· 880 876 printk(KERN_ERR "class_register failed for uio\n"); 881 877 goto err_class_register; 882 878 } 879 + 880 + uio_class_registered = true; 881 + 883 882 return 0; 884 883 885 884 err_class_register: ··· 893 886 894 887 static void release_uio_class(void) 895 888 { 889 + uio_class_registered = false; 896 890 class_unregister(&uio_class); 897 891 uio_major_cleanup(); 898 892 } ··· 919 911 { 920 912 struct uio_device *idev; 921 913 int ret = 0; 914 + 915 + if (!uio_class_registered) 916 + return -EPROBE_DEFER; 922 917 923 918 if (!parent || !info || !info->name || !info->version) 924 919 return -EINVAL;
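`__uio_register_device()` above now returns `-EPROBE_DEFER` while the uio class is not yet registered, telling the driver core to retry the probe later instead of failing outright. A userspace sketch of that gating (the numeric value 517 matches the kernel's internal `EPROBE_DEFER` but is an assumption here, not a portable errno):

```c
#include <assert.h>

#define EPROBE_DEFER 517     /* kernel-internal errno, not userspace errno */
#define EINVAL_ERR    22

static int class_registered;

/* Mirrors the uio_class_registered check added to __uio_register_device():
 * defer until the class exists, then validate arguments as before. */
static int register_device(const char *name)
{
    if (!class_registered)
        return -EPROBE_DEFER;   /* driver core will re-probe later */
    if (!name)
        return -EINVAL_ERR;
    return 0;
}
```

The point of deferring rather than failing is that probe ordering is not guaranteed; a uio driver may probe before `init_uio_class()` has run.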
+2 -1
drivers/uio/uio_dmem_genirq.c
··· 163 163 dev_err(&pdev->dev, "unable to kmalloc\n"); 164 164 goto bad2; 165 165 } 166 - uioinfo->name = pdev->dev.of_node->name; 166 + uioinfo->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%pOFn", 167 + pdev->dev.of_node); 167 168 uioinfo->version = "devicetree"; 168 169 169 170 /* Multiple IRQs are not supported */
+1 -2
drivers/uio/uio_fsl_elbc_gpcm.c
··· 382 382 } 383 383 384 384 /* set all UIO data */ 385 - if (node->name) 386 - info->mem[0].name = kstrdup(node->name, GFP_KERNEL); 385 + info->mem[0].name = kasprintf(GFP_KERNEL, "%pOFn", node); 387 386 info->mem[0].addr = res.start; 388 387 info->mem[0].size = resource_size(&res); 389 388 info->mem[0].memtype = UIO_MEM_PHYS;
+80 -36
drivers/uio/uio_hv_generic.c
··· 17 17 * # echo -n "ed963694-e847-4b2a-85af-bc9cfc11d6f3" \ 18 18 * > /sys/bus/vmbus/drivers/uio_hv_generic/bind 19 19 */ 20 - #define DEBUG 1 21 20 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 22 21 23 22 #include <linux/device.h> ··· 32 33 33 34 #include "../hv/hyperv_vmbus.h" 34 35 35 - #define DRIVER_VERSION "0.02.0" 36 + #define DRIVER_VERSION "0.02.1" 36 37 #define DRIVER_AUTHOR "Stephen Hemminger <sthemmin at microsoft.com>" 37 38 #define DRIVER_DESC "Generic UIO driver for VMBus devices" 38 39 39 40 #define HV_RING_SIZE 512 /* pages */ 40 - #define SEND_BUFFER_SIZE (15 * 1024 * 1024) 41 - #define RECV_BUFFER_SIZE (15 * 1024 * 1024) 41 + #define SEND_BUFFER_SIZE (16 * 1024 * 1024) 42 + #define RECV_BUFFER_SIZE (31 * 1024 * 1024) 42 43 43 44 /* 44 45 * List of resources to be mapped to user space ··· 55 56 struct hv_uio_private_data { 56 57 struct uio_info info; 57 58 struct hv_device *device; 59 + atomic_t refcnt; 58 60 59 61 void *recv_buf; 60 62 u32 recv_gpadl; ··· 129 129 { 130 130 struct vmbus_channel *channel 131 131 = container_of(kobj, struct vmbus_channel, kobj); 132 - struct hv_device *dev = channel->primary_channel->device_obj; 133 - u16 q_idx = channel->offermsg.offer.sub_channel_index; 132 + void *ring_buffer = page_address(channel->ringbuffer_page); 134 133 135 - dev_dbg(&dev->device, "mmap channel %u pages %#lx at %#lx\n", 136 - q_idx, vma_pages(vma), vma->vm_pgoff); 134 + if (channel->state != CHANNEL_OPENED_STATE) 135 + return -ENODEV; 137 136 138 - return vm_iomap_memory(vma, virt_to_phys(channel->ringbuffer_pages), 137 + return vm_iomap_memory(vma, virt_to_phys(ring_buffer), 139 138 channel->ringbuffer_pagecount << PAGE_SHIFT); 140 139 } 141 140 ··· 175 176 } 176 177 } 177 178 179 + /* free the reserved buffers for send and receive */ 178 180 static void 179 181 hv_uio_cleanup(struct hv_device *dev, struct hv_uio_private_data *pdata) 180 182 { 181 - if (pdata->send_gpadl) 183 + if (pdata->send_gpadl) { 182 184 
vmbus_teardown_gpadl(dev->channel, pdata->send_gpadl); 183 - vfree(pdata->send_buf); 185 + pdata->send_gpadl = 0; 186 + vfree(pdata->send_buf); 187 + } 184 188 185 - if (pdata->recv_gpadl) 189 + if (pdata->recv_gpadl) { 186 190 vmbus_teardown_gpadl(dev->channel, pdata->recv_gpadl); 187 - vfree(pdata->recv_buf); 191 + pdata->recv_gpadl = 0; 192 + vfree(pdata->recv_buf); 193 + } 194 + } 195 + 196 + /* VMBus primary channel is opened on first use */ 197 + static int 198 + hv_uio_open(struct uio_info *info, struct inode *inode) 199 + { 200 + struct hv_uio_private_data *pdata 201 + = container_of(info, struct hv_uio_private_data, info); 202 + struct hv_device *dev = pdata->device; 203 + int ret; 204 + 205 + if (atomic_inc_return(&pdata->refcnt) != 1) 206 + return 0; 207 + 208 + ret = vmbus_connect_ring(dev->channel, 209 + hv_uio_channel_cb, dev->channel); 210 + 211 + if (ret == 0) 212 + dev->channel->inbound.ring_buffer->interrupt_mask = 1; 213 + else 214 + atomic_dec(&pdata->refcnt); 215 + 216 + return ret; 217 + } 218 + 219 + /* VMBus primary channel is closed on last close */ 220 + static int 221 + hv_uio_release(struct uio_info *info, struct inode *inode) 222 + { 223 + struct hv_uio_private_data *pdata 224 + = container_of(info, struct hv_uio_private_data, info); 225 + struct hv_device *dev = pdata->device; 226 + int ret = 0; 227 + 228 + if (atomic_dec_and_test(&pdata->refcnt)) 229 + ret = vmbus_disconnect_ring(dev->channel); 230 + 231 + return ret; 188 232 } 189 233 190 234 static int 191 235 hv_uio_probe(struct hv_device *dev, 192 236 const struct hv_vmbus_device_id *dev_id) 193 237 { 238 + struct vmbus_channel *channel = dev->channel; 194 239 struct hv_uio_private_data *pdata; 240 + void *ring_buffer; 195 241 int ret; 242 + 243 + /* Communicating with host has to be via shared memory not hypercall */ 244 + if (!channel->offermsg.monitor_allocated) { 245 + dev_err(&dev->device, "vmbus channel requires hypercall\n"); 246 + return -ENOTSUPP; 247 + } 196 248 197 249 
pdata = kzalloc(sizeof(*pdata), GFP_KERNEL); 198 250 if (!pdata) 199 251 return -ENOMEM; 200 252 201 - ret = vmbus_open(dev->channel, HV_RING_SIZE * PAGE_SIZE, 202 - HV_RING_SIZE * PAGE_SIZE, NULL, 0, 203 - hv_uio_channel_cb, dev->channel); 253 + ret = vmbus_alloc_ring(channel, HV_RING_SIZE * PAGE_SIZE, 254 + HV_RING_SIZE * PAGE_SIZE); 204 255 if (ret) 205 256 goto fail; 206 257 207 - /* Communicating with host has to be via shared memory not hypercall */ 208 - if (!dev->channel->offermsg.monitor_allocated) { 209 - dev_err(&dev->device, "vmbus channel requires hypercall\n"); 210 - ret = -ENOTSUPP; 211 - goto fail_close; 212 - } 213 - 214 - dev->channel->inbound.ring_buffer->interrupt_mask = 1; 215 - set_channel_read_mode(dev->channel, HV_CALL_ISR); 258 + set_channel_read_mode(channel, HV_CALL_ISR); 216 259 217 260 /* Fill general uio info */ 218 261 pdata->info.name = "uio_hv_generic"; 219 262 pdata->info.version = DRIVER_VERSION; 220 263 pdata->info.irqcontrol = hv_uio_irqcontrol; 264 + pdata->info.open = hv_uio_open; 265 + pdata->info.release = hv_uio_release; 221 266 pdata->info.irq = UIO_IRQ_CUSTOM; 267 + atomic_set(&pdata->refcnt, 0); 222 268 223 269 /* mem resources */ 224 270 pdata->info.mem[TXRX_RING_MAP].name = "txrx_rings"; 271 + ring_buffer = page_address(channel->ringbuffer_page); 225 272 pdata->info.mem[TXRX_RING_MAP].addr 226 - = (uintptr_t)dev->channel->ringbuffer_pages; 273 + = (uintptr_t)virt_to_phys(ring_buffer); 227 274 pdata->info.mem[TXRX_RING_MAP].size 228 - = dev->channel->ringbuffer_pagecount << PAGE_SHIFT; 229 - pdata->info.mem[TXRX_RING_MAP].memtype = UIO_MEM_LOGICAL; 275 + = channel->ringbuffer_pagecount << PAGE_SHIFT; 276 + pdata->info.mem[TXRX_RING_MAP].memtype = UIO_MEM_IOVA; 230 277 231 278 pdata->info.mem[INT_PAGE_MAP].name = "int_page"; 232 279 pdata->info.mem[INT_PAGE_MAP].addr ··· 292 247 goto fail_close; 293 248 } 294 249 295 - ret = vmbus_establish_gpadl(dev->channel, pdata->recv_buf, 250 + ret = vmbus_establish_gpadl(channel, 
pdata->recv_buf, 296 251 RECV_BUFFER_SIZE, &pdata->recv_gpadl); 297 252 if (ret) 298 253 goto fail_close; ··· 306 261 pdata->info.mem[RECV_BUF_MAP].size = RECV_BUFFER_SIZE; 307 262 pdata->info.mem[RECV_BUF_MAP].memtype = UIO_MEM_VIRTUAL; 308 263 309 - 310 264 pdata->send_buf = vzalloc(SEND_BUFFER_SIZE); 311 265 if (pdata->send_buf == NULL) { 312 266 ret = -ENOMEM; 313 267 goto fail_close; 314 268 } 315 269 316 - ret = vmbus_establish_gpadl(dev->channel, pdata->send_buf, 270 + ret = vmbus_establish_gpadl(channel, pdata->send_buf, 317 271 SEND_BUFFER_SIZE, &pdata->send_gpadl); 318 272 if (ret) 319 273 goto fail_close; ··· 334 290 goto fail_close; 335 291 } 336 292 337 - vmbus_set_chn_rescind_callback(dev->channel, hv_uio_rescind); 338 - vmbus_set_sc_create_callback(dev->channel, hv_uio_new_channel); 293 + vmbus_set_chn_rescind_callback(channel, hv_uio_rescind); 294 + vmbus_set_sc_create_callback(channel, hv_uio_new_channel); 339 295 340 - ret = sysfs_create_bin_file(&dev->channel->kobj, &ring_buffer_bin_attr); 296 + ret = sysfs_create_bin_file(&channel->kobj, &ring_buffer_bin_attr); 341 297 if (ret) 342 298 dev_notice(&dev->device, 343 299 "sysfs create ring bin file failed; %d\n", ret); ··· 348 304 349 305 fail_close: 350 306 hv_uio_cleanup(dev, pdata); 351 - vmbus_close(dev->channel); 352 307 fail: 353 308 kfree(pdata); 354 309 ··· 365 322 uio_unregister_device(&pdata->info); 366 323 hv_uio_cleanup(dev, pdata); 367 324 hv_set_drvdata(dev, NULL); 368 - vmbus_close(dev->channel); 325 + 326 + vmbus_free_ring(dev->channel); 369 327 kfree(pdata); 370 328 return 0; 371 329 }
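The new `hv_uio_open()`/`hv_uio_release()` pair above connects the VMBus ring on the first open and tears it down on the last close, using an atomic reference count. A sketch of that first-open/last-close pattern with C11 atomics (function and variable names are illustrative):

```c
#include <assert.h>
#include <stdatomic.h>

static atomic_int refcnt;
static int ring_connected;

/* First opener (refcount 0 -> 1) connects the ring; later openers
 * just bump the count, as in hv_uio_open(). */
static int uio_open(void)
{
    if (atomic_fetch_add(&refcnt, 1) + 1 != 1)
        return 0;                 /* not the first opener */
    ring_connected = 1;           /* vmbus_connect_ring() */
    return 0;
}

/* Last closer (refcount 1 -> 0) disconnects, as in hv_uio_release(). */
static int uio_release(void)
{
    if (atomic_fetch_sub(&refcnt, 1) - 1 == 0)
        ring_connected = 0;       /* vmbus_disconnect_ring() */
    return 0;
}
```

`atomic_fetch_add()` returns the previous value, so `+ 1` reconstructs the new count, matching the kernel's `atomic_inc_return()` semantics.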
+2 -1
drivers/uio/uio_pdrv_genirq.c
··· 118 118 dev_err(&pdev->dev, "unable to kmalloc\n"); 119 119 return -ENOMEM; 120 120 } 121 - uioinfo->name = pdev->dev.of_node->name; 121 + uioinfo->name = devm_kasprintf(&pdev->dev, GFP_KERNEL, "%pOFn", 122 + pdev->dev.of_node); 122 123 uioinfo->version = "devicetree"; 123 124 /* Multiple IRQs are not supported */ 124 125 }
-1
drivers/vme/vme.c
··· 1890 1890 1891 1891 err_reg: 1892 1892 put_device(&vdev->dev); 1893 - kfree(vdev); 1894 1893 err_devalloc: 1895 1894 list_for_each_entry_safe(vdev, tmp, &drv->devices, drv_list) { 1896 1895 list_del(&vdev->drv_list);
+2
drivers/w1/masters/omap_hdq.c
··· 763 763 /* remove module dependency */ 764 764 pm_runtime_disable(&pdev->dev); 765 765 766 + w1_remove_master_device(&omap_w1_master); 767 + 766 768 return 0; 767 769 } 768 770
+52 -14
drivers/w1/slaves/w1_ds2438.c
··· 186 186 return -1; 187 187 } 188 188 189 - static uint16_t w1_ds2438_get_voltage(struct w1_slave *sl, 190 - int adc_input, uint16_t *voltage) 189 + static int w1_ds2438_get_voltage(struct w1_slave *sl, 190 + int adc_input, uint16_t *voltage) 191 191 { 192 192 unsigned int retries = W1_DS2438_RETRIES; 193 193 u8 w1_buf[DS2438_PAGE_SIZE + 1 /*for CRC*/]; ··· 235 235 return ret; 236 236 } 237 237 238 + static int w1_ds2438_get_current(struct w1_slave *sl, int16_t *voltage) 239 + { 240 + u8 w1_buf[DS2438_PAGE_SIZE + 1 /*for CRC*/]; 241 + int ret; 242 + 243 + mutex_lock(&sl->master->bus_mutex); 244 + 245 + if (w1_ds2438_get_page(sl, 0, w1_buf) == 0) { 246 + /* The voltage measured across current sense resistor RSENS. */ 247 + *voltage = (((int16_t) w1_buf[DS2438_CURRENT_MSB]) << 8) | ((int16_t) w1_buf[DS2438_CURRENT_LSB]); 248 + ret = 0; 249 + } else 250 + ret = -1; 251 + 252 + mutex_unlock(&sl->master->bus_mutex); 253 + 254 + return ret; 255 + } 256 + 238 257 static ssize_t iad_write(struct file *filp, struct kobject *kobj, 239 258 struct bin_attribute *bin_attr, char *buf, 240 259 loff_t off, size_t count) ··· 276 257 return ret; 277 258 } 278 259 260 + static ssize_t iad_read(struct file *filp, struct kobject *kobj, 261 + struct bin_attribute *bin_attr, char *buf, 262 + loff_t off, size_t count) 263 + { 264 + struct w1_slave *sl = kobj_to_w1_slave(kobj); 265 + int ret; 266 + int16_t voltage; 267 + 268 + if (off != 0) 269 + return 0; 270 + if (!buf) 271 + return -EINVAL; 272 + 273 + if (w1_ds2438_get_current(sl, &voltage) == 0) { 274 + ret = snprintf(buf, count, "%i\n", voltage); 275 + } else 276 + ret = -EIO; 277 + 278 + return ret; 279 + } 280 + 279 281 static ssize_t page0_read(struct file *filp, struct kobject *kobj, 280 282 struct bin_attribute *bin_attr, char *buf, 281 283 loff_t off, size_t count) ··· 312 272 313 273 mutex_lock(&sl->master->bus_mutex); 314 274 275 + /* Read no more than page0 size */ 276 + if (count > DS2438_PAGE_SIZE) 277 + count = 
DS2438_PAGE_SIZE; 278 + 315 279 if (w1_ds2438_get_page(sl, 0, w1_buf) == 0) { 316 - memcpy(buf, &w1_buf, DS2438_PAGE_SIZE); 317 - ret = DS2438_PAGE_SIZE; 280 + memcpy(buf, &w1_buf, count); 281 + ret = count; 318 282 } else 319 283 ret = -EIO; 320 284 ··· 333 289 { 334 290 struct w1_slave *sl = kobj_to_w1_slave(kobj); 335 291 int ret; 336 - ssize_t c = PAGE_SIZE; 337 292 int16_t temp; 338 293 339 294 if (off != 0) ··· 341 298 return -EINVAL; 342 299 343 300 if (w1_ds2438_get_temperature(sl, &temp) == 0) { 344 - c -= snprintf(buf + PAGE_SIZE - c, c, "%d\n", temp); 345 - ret = PAGE_SIZE - c; 301 + ret = snprintf(buf, count, "%i\n", temp); 346 302 } else 347 303 ret = -EIO; 348 304 ··· 354 312 { 355 313 struct w1_slave *sl = kobj_to_w1_slave(kobj); 356 314 int ret; 357 - ssize_t c = PAGE_SIZE; 358 315 uint16_t voltage; 359 316 360 317 if (off != 0) ··· 362 321 return -EINVAL; 363 322 364 323 if (w1_ds2438_get_voltage(sl, DS2438_ADC_INPUT_VAD, &voltage) == 0) { 365 - c -= snprintf(buf + PAGE_SIZE - c, c, "%d\n", voltage); 366 - ret = PAGE_SIZE - c; 324 + ret = snprintf(buf, count, "%u\n", voltage); 367 325 } else 368 326 ret = -EIO; 369 327 ··· 375 335 { 376 336 struct w1_slave *sl = kobj_to_w1_slave(kobj); 377 337 int ret; 378 - ssize_t c = PAGE_SIZE; 379 338 uint16_t voltage; 380 339 381 340 if (off != 0) ··· 383 344 return -EINVAL; 384 345 385 346 if (w1_ds2438_get_voltage(sl, DS2438_ADC_INPUT_VDD, &voltage) == 0) { 386 - c -= snprintf(buf + PAGE_SIZE - c, c, "%d\n", voltage); 387 - ret = PAGE_SIZE - c; 347 + ret = snprintf(buf, count, "%u\n", voltage); 388 348 } else 389 349 ret = -EIO; 390 350 391 351 return ret; 392 352 } 393 353 394 - static BIN_ATTR(iad, S_IRUGO | S_IWUSR | S_IWGRP, NULL, iad_write, 1); 354 + static BIN_ATTR(iad, S_IRUGO | S_IWUSR | S_IWGRP, iad_read, iad_write, 0); 395 355 static BIN_ATTR_RO(page0, DS2438_PAGE_SIZE); 396 356 static BIN_ATTR_RO(temperature, 0/* real length varies */); 397 357 static BIN_ATTR_RO(vad, 0/* real length varies */);
+24 -17
include/linux/coresight.h
··· 94 94 * @cpu: the CPU a source belongs to. Only applicable for ETM/PTMs. 95 95 * @name: name of the component as shown under sysfs. 96 96 * @nr_inport: number of input ports for this component. 97 - * @outports: list of remote endpoint port number. 98 - * @child_names:name of all child components connected to this device. 99 - * @child_ports:child component port number the current component is 100 - connected to. 101 97 * @nr_outport: number of output ports for this component. 98 + * @conns: Array of nr_outport connections from this component 102 99 */ 103 100 struct coresight_platform_data { 104 101 int cpu; 105 102 const char *name; 106 103 int nr_inport; 107 - int *outports; 108 - const char **child_names; 109 - int *child_ports; 110 104 int nr_outport; 105 + struct coresight_connection *conns; 111 106 }; 112 107 113 108 /** ··· 185 190 * @disable: disables the sink. 186 191 * @alloc_buffer: initialises perf's ring buffer for trace collection. 187 192 * @free_buffer: release memory allocated in @get_config. 188 - * @set_buffer: initialises buffer mechanic before a trace session. 189 - * @reset_buffer: finalises buffer mechanic after a trace session. 190 193 * @update_buffer: update buffer pointers after a trace session. 
191 194 */ 192 195 struct coresight_ops_sink { 193 - int (*enable)(struct coresight_device *csdev, u32 mode); 196 + int (*enable)(struct coresight_device *csdev, u32 mode, void *data); 194 197 void (*disable)(struct coresight_device *csdev); 195 198 void *(*alloc_buffer)(struct coresight_device *csdev, int cpu, 196 199 void **pages, int nr_pages, bool overwrite); 197 200 void (*free_buffer)(void *config); 198 - int (*set_buffer)(struct coresight_device *csdev, 199 - struct perf_output_handle *handle, 200 - void *sink_config); 201 - unsigned long (*reset_buffer)(struct coresight_device *csdev, 202 - struct perf_output_handle *handle, 203 - void *sink_config); 204 - void (*update_buffer)(struct coresight_device *csdev, 201 + unsigned long (*update_buffer)(struct coresight_device *csdev, 205 202 struct perf_output_handle *handle, 206 203 void *sink_config); 207 204 }; ··· 257 270 extern void coresight_disable(struct coresight_device *csdev); 258 271 extern int coresight_timeout(void __iomem *addr, u32 offset, 259 272 int position, int value); 273 + 274 + extern int coresight_claim_device(void __iomem *base); 275 + extern int coresight_claim_device_unlocked(void __iomem *base); 276 + 277 + extern void coresight_disclaim_device(void __iomem *base); 278 + extern void coresight_disclaim_device_unlocked(void __iomem *base); 279 + 260 280 #else 261 281 static inline struct coresight_device * 262 282 coresight_register(struct coresight_desc *desc) { return NULL; } ··· 273 279 static inline void coresight_disable(struct coresight_device *csdev) {} 274 280 static inline int coresight_timeout(void __iomem *addr, u32 offset, 275 281 int position, int value) { return 1; } 282 + static inline int coresight_claim_device_unlocked(void __iomem *base) 283 + { 284 + return -EINVAL; 285 + } 286 + 287 + static inline int coresight_claim_device(void __iomem *base) 288 + { 289 + return -EINVAL; 290 + } 291 + 292 + static inline void coresight_disclaim_device(void __iomem *base) {} 293 + 
static inline void coresight_disclaim_device_unlocked(void __iomem *base) {} 294 + 276 295 #endif 277 296 278 297 #ifdef CONFIG_OF
+4
include/linux/fpga/fpga-bridge.h
··· 69 69 int fpga_bridge_register(struct fpga_bridge *br); 70 70 void fpga_bridge_unregister(struct fpga_bridge *br); 71 71 72 + struct fpga_bridge 73 + *devm_fpga_bridge_create(struct device *dev, const char *name, 74 + const struct fpga_bridge_ops *br_ops, void *priv); 75 + 72 76 #endif /* _LINUX_FPGA_BRIDGE_H */
+4
include/linux/fpga/fpga-mgr.h
··· 198 198 int fpga_mgr_register(struct fpga_manager *mgr); 199 199 void fpga_mgr_unregister(struct fpga_manager *mgr); 200 200 201 + struct fpga_manager *devm_fpga_mgr_create(struct device *dev, const char *name, 202 + const struct fpga_manager_ops *mops, 203 + void *priv); 204 + 201 205 #endif /*_LINUX_FPGA_MGR_H */
+4
include/linux/fpga/fpga-region.h
··· 44 44 int fpga_region_register(struct fpga_region *region); 45 45 void fpga_region_unregister(struct fpga_region *region); 46 46 47 + struct fpga_region 48 + *devm_fpga_region_create(struct device *dev, struct fpga_manager *mgr, 49 + int (*get_bridges)(struct fpga_region *)); 50 + 47 51 #endif /* _FPGA_REGION_H */
+12 -2
include/linux/hyperv.h
··· 739 739 u32 ringbuffer_gpadlhandle; 740 740 741 741 /* Allocated memory for ring buffer */ 742 - void *ringbuffer_pages; 742 + struct page *ringbuffer_page; 743 743 u32 ringbuffer_pagecount; 744 + u32 ringbuffer_send_offset; 744 745 struct hv_ring_buffer_info outbound; /* send to parent */ 745 746 struct hv_ring_buffer_info inbound; /* receive from parent */ 746 747 ··· 1022 1021 struct hv_mpb_array range; 1023 1022 } __packed; 1024 1023 1024 + int vmbus_alloc_ring(struct vmbus_channel *channel, 1025 + u32 send_size, u32 recv_size); 1026 + void vmbus_free_ring(struct vmbus_channel *channel); 1027 + 1028 + int vmbus_connect_ring(struct vmbus_channel *channel, 1029 + void (*onchannel_callback)(void *context), 1030 + void *context); 1031 + int vmbus_disconnect_ring(struct vmbus_channel *channel); 1025 1032 1026 1033 extern int vmbus_open(struct vmbus_channel *channel, 1027 1034 u32 send_ringbuffersize, ··· 1134 1125 u16 device_id; 1135 1126 1136 1127 struct device device; 1128 + char *driver_override; /* Driver name to force a match */ 1137 1129 1138 1130 struct vmbus_channel *channel; 1139 1131 struct kset *channels_kset; ··· 1452 1442 const int *srv_version, int srv_vercnt, 1453 1443 int *nego_fw_version, int *nego_srv_version); 1454 1444 1455 - void hv_process_channel_removal(u32 relid); 1445 + void hv_process_channel_removal(struct vmbus_channel *channel); 1456 1446 1457 1447 void vmbus_setevent(struct vmbus_channel *channel); 1458 1448 /*
+77 -23
include/linux/nvmem-consumer.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * nvmem framework consumer. 3 4 * 4 5 * Copyright (C) 2015 Srinivas Kandagatla <srinivas.kandagatla@linaro.org> 5 6 * Copyright (C) 2013 Maxime Ripard <maxime.ripard@free-electrons.com> 6 - * 7 - * This file is licensed under the terms of the GNU General Public 8 - * License version 2. This program is licensed "as is" without any 9 - * warranty of any kind, whether express or implied. 10 7 */ 11 8 12 9 #ifndef _LINUX_NVMEM_CONSUMER_H ··· 11 14 12 15 #include <linux/err.h> 13 16 #include <linux/errno.h> 17 + #include <linux/notifier.h> 14 18 15 19 struct device; 16 20 struct device_node; ··· 27 29 unsigned int nbits; 28 30 }; 29 31 32 + /** 33 + * struct nvmem_cell_lookup - cell lookup entry 34 + * 35 + * @nvmem_name: Name of the provider. 36 + * @cell_name: Name of the nvmem cell as defined in the name field of 37 + * struct nvmem_cell_info. 38 + * @dev_id: Name of the consumer device that will be associated with 39 + * this cell. 40 + * @con_id: Connector id for this cell lookup. 
41 + */ 42 + struct nvmem_cell_lookup { 43 + const char *nvmem_name; 44 + const char *cell_name; 45 + const char *dev_id; 46 + const char *con_id; 47 + struct list_head node; 48 + }; 49 + 50 + enum { 51 + NVMEM_ADD = 1, 52 + NVMEM_REMOVE, 53 + NVMEM_CELL_ADD, 54 + NVMEM_CELL_REMOVE, 55 + }; 56 + 30 57 #if IS_ENABLED(CONFIG_NVMEM) 31 58 32 59 /* Cell based interface */ 33 - struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *name); 34 - struct nvmem_cell *devm_nvmem_cell_get(struct device *dev, const char *name); 60 + struct nvmem_cell *nvmem_cell_get(struct device *dev, const char *id); 61 + struct nvmem_cell *devm_nvmem_cell_get(struct device *dev, const char *id); 35 62 void nvmem_cell_put(struct nvmem_cell *cell); 36 63 void devm_nvmem_cell_put(struct device *dev, struct nvmem_cell *cell); 37 64 void *nvmem_cell_read(struct nvmem_cell *cell, size_t *len); ··· 78 55 int nvmem_device_cell_write(struct nvmem_device *nvmem, 79 56 struct nvmem_cell_info *info, void *buf); 80 57 58 + const char *nvmem_dev_name(struct nvmem_device *nvmem); 59 + 60 + void nvmem_add_cell_lookups(struct nvmem_cell_lookup *entries, 61 + size_t nentries); 62 + void nvmem_del_cell_lookups(struct nvmem_cell_lookup *entries, 63 + size_t nentries); 64 + 65 + int nvmem_register_notifier(struct notifier_block *nb); 66 + int nvmem_unregister_notifier(struct notifier_block *nb); 67 + 81 68 #else 82 69 83 70 static inline struct nvmem_cell *nvmem_cell_get(struct device *dev, 84 - const char *name) 71 + const char *id) 85 72 { 86 - return ERR_PTR(-ENOSYS); 73 + return ERR_PTR(-EOPNOTSUPP); 87 74 } 88 75 89 76 static inline struct nvmem_cell *devm_nvmem_cell_get(struct device *dev, 90 - const char *name) 77 + const char *id) 91 78 { 92 - return ERR_PTR(-ENOSYS); 79 + return ERR_PTR(-EOPNOTSUPP); 93 80 } 94 81 95 82 static inline void devm_nvmem_cell_put(struct device *dev, ··· 113 80 114 81 static inline void *nvmem_cell_read(struct nvmem_cell *cell, size_t *len) 115 82 { 116 - return 
ERR_PTR(-ENOSYS); 83 + return ERR_PTR(-EOPNOTSUPP); 117 84 } 118 85 119 86 static inline int nvmem_cell_write(struct nvmem_cell *cell, 120 87 const char *buf, size_t len) 121 88 { 122 - return -ENOSYS; 89 + return -EOPNOTSUPP; 123 90 } 124 91 125 92 static inline int nvmem_cell_read_u32(struct device *dev, 126 93 const char *cell_id, u32 *val) 127 94 { 128 - return -ENOSYS; 95 + return -EOPNOTSUPP; 129 96 } 130 97 131 98 static inline struct nvmem_device *nvmem_device_get(struct device *dev, 132 99 const char *name) 133 100 { 134 - return ERR_PTR(-ENOSYS); 101 + return ERR_PTR(-EOPNOTSUPP); 135 102 } 136 103 137 104 static inline struct nvmem_device *devm_nvmem_device_get(struct device *dev, 138 105 const char *name) 139 106 { 140 - return ERR_PTR(-ENOSYS); 107 + return ERR_PTR(-EOPNOTSUPP); 141 108 } 142 109 143 110 static inline void nvmem_device_put(struct nvmem_device *nvmem) ··· 153 120 struct nvmem_cell_info *info, 154 121 void *buf) 155 122 { 156 - return -ENOSYS; 123 + return -EOPNOTSUPP; 157 124 } 158 125 159 126 static inline int nvmem_device_cell_write(struct nvmem_device *nvmem, 160 127 struct nvmem_cell_info *info, 161 128 void *buf) 162 129 { 163 - return -ENOSYS; 130 + return -EOPNOTSUPP; 164 131 } 165 132 166 133 static inline int nvmem_device_read(struct nvmem_device *nvmem, 167 134 unsigned int offset, size_t bytes, 168 135 void *buf) 169 136 { 170 - return -ENOSYS; 137 + return -EOPNOTSUPP; 171 138 } 172 139 173 140 static inline int nvmem_device_write(struct nvmem_device *nvmem, 174 141 unsigned int offset, size_t bytes, 175 142 void *buf) 176 143 { 177 - return -ENOSYS; 144 + return -EOPNOTSUPP; 178 145 } 146 + 147 + static inline const char *nvmem_dev_name(struct nvmem_device *nvmem) 148 + { 149 + return NULL; 150 + } 151 + 152 + static inline void 153 + nvmem_add_cell_lookups(struct nvmem_cell_lookup *entries, size_t nentries) {} 154 + static inline void 155 + nvmem_del_cell_lookups(struct nvmem_cell_lookup *entries, size_t nentries) {} 156 + 
157 + static inline int nvmem_register_notifier(struct notifier_block *nb) 158 + { 159 + return -EOPNOTSUPP; 160 + } 161 + 162 + static inline int nvmem_unregister_notifier(struct notifier_block *nb) 163 + { 164 + return -EOPNOTSUPP; 165 + } 166 + 179 167 #endif /* CONFIG_NVMEM */ 180 168 181 169 #if IS_ENABLED(CONFIG_NVMEM) && IS_ENABLED(CONFIG_OF) 182 170 struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, 183 - const char *name); 171 + const char *id); 184 172 struct nvmem_device *of_nvmem_device_get(struct device_node *np, 185 173 const char *name); 186 174 #else 187 175 static inline struct nvmem_cell *of_nvmem_cell_get(struct device_node *np, 188 - const char *name) 176 + const char *id) 189 177 { 190 - return ERR_PTR(-ENOSYS); 178 + return ERR_PTR(-EOPNOTSUPP); 191 179 } 192 180 193 181 static inline struct nvmem_device *of_nvmem_device_get(struct device_node *np, 194 182 const char *name) 195 183 { 196 - return ERR_PTR(-ENOSYS); 184 + return ERR_PTR(-EOPNOTSUPP); 197 185 } 198 186 #endif /* CONFIG_NVMEM && CONFIG_OF */ 199 187
+29 -21
include/linux/nvmem-provider.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * nvmem framework provider. 3 4 * 4 5 * Copyright (C) 2015 Srinivas Kandagatla <srinivas.kandagatla@linaro.org> 5 6 * Copyright (C) 2013 Maxime Ripard <maxime.ripard@free-electrons.com> 6 - * 7 - * This file is licensed under the terms of the GNU General Public 8 - * License version 2. This program is licensed "as is" without any 9 - * warranty of any kind, whether express or implied. 10 7 */ 11 8 12 9 #ifndef _LINUX_NVMEM_PROVIDER_H ··· 64 67 struct device *base_dev; 65 68 }; 66 69 70 + /** 71 + * struct nvmem_cell_table - NVMEM cell definitions for given provider 72 + * 73 + * @nvmem_name: Provider name. 74 + * @cells: Array of cell definitions. 75 + * @ncells: Number of cell definitions in the array. 76 + * @node: List node. 77 + * 78 + * This structure together with related helper functions is provided for users 79 + * that can't access the nvmem provided structure but wish to register 80 + * cell definitions for it e.g. board files registering an EEPROM device.
81 + */ 82 + struct nvmem_cell_table { 83 + const char *nvmem_name; 84 + const struct nvmem_cell_info *cells; 85 + size_t ncells; 86 + struct list_head node; 87 + }; 88 + 67 89 #if IS_ENABLED(CONFIG_NVMEM) 68 90 69 91 struct nvmem_device *nvmem_register(const struct nvmem_config *cfg); 70 - int nvmem_unregister(struct nvmem_device *nvmem); 92 + void nvmem_unregister(struct nvmem_device *nvmem); 71 93 72 94 struct nvmem_device *devm_nvmem_register(struct device *dev, 73 95 const struct nvmem_config *cfg); 74 96 75 97 int devm_nvmem_unregister(struct device *dev, struct nvmem_device *nvmem); 76 98 77 - int nvmem_add_cells(struct nvmem_device *nvmem, 78 - const struct nvmem_cell_info *info, 79 - int ncells); 99 + void nvmem_add_cell_table(struct nvmem_cell_table *table); 100 + void nvmem_del_cell_table(struct nvmem_cell_table *table); 101 + 80 102 #else 81 103 82 104 static inline struct nvmem_device *nvmem_register(const struct nvmem_config *c) 83 105 { 84 - return ERR_PTR(-ENOSYS); 106 + return ERR_PTR(-EOPNOTSUPP); 85 107 } 86 108 87 - static inline int nvmem_unregister(struct nvmem_device *nvmem) 88 - { 89 - return -ENOSYS; 90 - } 109 + static inline void nvmem_unregister(struct nvmem_device *nvmem) {} 91 110 92 111 static inline struct nvmem_device * 93 112 devm_nvmem_register(struct device *dev, const struct nvmem_config *c) ··· 114 101 static inline int 115 102 devm_nvmem_unregister(struct device *dev, struct nvmem_device *nvmem) 116 103 { 117 - return nvmem_unregister(nvmem); 118 - 104 + return -EOPNOTSUPP; 119 105 } 120 106 121 - static inline int nvmem_add_cells(struct nvmem_device *nvmem, 122 - const struct nvmem_cell_info *info, 123 - int ncells) 124 - { 125 - return -ENOSYS; 126 - } 107 + static inline void nvmem_add_cell_table(struct nvmem_cell_table *table) {} 108 + static inline void nvmem_del_cell_table(struct nvmem_cell_table *table) {} 127 109 128 110 #endif /* CONFIG_NVMEM */ 129 111 #endif /* ifndef _LINUX_NVMEM_PROVIDER_H */
+10 -2
include/linux/soundwire/sdw.h
··· 678 678 * @defer_msg: Defer message 679 679 * @clk_stop_timeout: Clock stop timeout computed 680 680 * @bank_switch_timeout: Bank switch timeout computed 681 + * @multi_link: Store bus property that indicates if multi links 682 + * are supported. This flag is populated by drivers after reading 683 + * appropriate firmware (ACPI/DT). 681 684 */ 682 685 struct sdw_bus { 683 686 struct device *dev; ··· 697 694 struct sdw_defer defer_msg; 698 695 unsigned int clk_stop_timeout; 699 696 u32 bank_switch_timeout; 697 + bool multi_link; 700 698 }; 701 699 702 700 int sdw_add_bus_master(struct sdw_bus *bus); ··· 772 768 * @params: Stream parameters 773 769 * @state: Current state of the stream 774 770 * @type: Stream type PCM or PDM 775 - * @m_rt: Master runtime 771 + * @master_list: List of Master runtime(s) in this stream. 772 + * master_list can contain only one m_rt per Master instance 773 + * for a stream 774 + * @m_rt_count: Count of Master runtime(s) in this stream 776 775 */ 777 776 struct sdw_stream_runtime { 778 777 char *name; 779 778 struct sdw_stream_params params; 780 779 enum sdw_stream_state state; 781 780 enum sdw_stream_type type; 782 - struct sdw_master_runtime *m_rt; 781 + struct list_head master_list; 782 + int m_rt_count; 783 783 }; 784 784 785 785 struct sdw_stream_runtime *sdw_alloc_stream(char *stream_name);
+7
include/linux/string.h
··· 131 131 return memset64((uint64_t *)p, (uintptr_t)v, n); 132 132 } 133 133 134 + extern void **__memcat_p(void **a, void **b); 135 + #define memcat_p(a, b) ({ \ 136 + BUILD_BUG_ON_MSG(!__same_type(*(a), *(b)), \ 137 + "type mismatch in memcat_p()"); \ 138 + (typeof(*a) *)__memcat_p((void **)(a), (void **)(b)); \ 139 + }) 140 + 134 141 #ifndef __HAVE_ARCH_MEMCPY 135 142 extern void * memcpy(void *,const void *,__kernel_size_t); 136 143 #endif
+1 -4
include/linux/thunderbolt.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 1 2 /* 2 3 * Thunderbolt service API 3 4 * ··· 6 5 * Copyright (C) 2017, Intel Corporation 7 6 * Authors: Michael Jamet <michael.jamet@intel.com> 8 7 * Mika Westerberg <mika.westerberg@linux.intel.com> 9 - * 10 - * This program is free software; you can redistribute it and/or modify 11 - * it under the terms of the GNU General Public License version 2 as 12 - * published by the Free Software Foundation. 13 8 */ 14 9 15 10 #ifndef THUNDERBOLT_H_
+1
include/linux/uio_driver.h
··· 133 133 #define UIO_MEM_PHYS 1 134 134 #define UIO_MEM_LOGICAL 2 135 135 #define UIO_MEM_VIRTUAL 3 136 + #define UIO_MEM_IOVA 4 136 137 137 138 /* defines for uio_port->porttype */ 138 139 #define UIO_PORT_NONE 0
+10
include/uapi/linux/android/binder.h
··· 200 200 __u32 has_weak_ref; 201 201 }; 202 202 203 + struct binder_node_info_for_ref { 204 + __u32 handle; 205 + __u32 strong_count; 206 + __u32 weak_count; 207 + __u32 reserved1; 208 + __u32 reserved2; 209 + __u32 reserved3; 210 + }; 211 + 203 212 #define BINDER_WRITE_READ _IOWR('b', 1, struct binder_write_read) 204 213 #define BINDER_SET_IDLE_TIMEOUT _IOW('b', 3, __s64) 205 214 #define BINDER_SET_MAX_THREADS _IOW('b', 5, __u32) ··· 217 208 #define BINDER_THREAD_EXIT _IOW('b', 8, __s32) 218 209 #define BINDER_VERSION _IOWR('b', 9, struct binder_version) 219 210 #define BINDER_GET_NODE_DEBUG_INFO _IOWR('b', 11, struct binder_node_debug_info) 211 + #define BINDER_GET_NODE_INFO_FOR_REF _IOWR('b', 12, struct binder_node_info_for_ref) 220 212 221 213 /* 222 214 * NOTE: Two special error codes you should check for when calling
+8
lib/Kconfig.debug
··· 1965 1965 1966 1966 If unsure, say N. 1967 1967 1968 + config TEST_MEMCAT_P 1969 + tristate "Test memcat_p() helper function" 1970 + help 1971 + Test the memcat_p() helper for correctly merging two 1972 + pointer arrays together. 1973 + 1974 + If unsure, say N. 1975 + 1968 1976 endif # RUNTIME_TESTING_MENU 1969 1977 1970 1978 config MEMTEST
+2 -1
lib/Makefile
··· 24 24 flex_proportions.o ratelimit.o show_mem.o \ 25 25 is_single_threaded.o plist.o decompress.o kobject_uevent.o \ 26 26 earlycpio.o seq_buf.o siphash.o dec_and_lock.o \ 27 - nmi_backtrace.o nodemask.o win_minmax.o 27 + nmi_backtrace.o nodemask.o win_minmax.o memcat_p.o 28 28 29 29 lib-$(CONFIG_PRINTK) += dump_stack.o 30 30 lib-$(CONFIG_MMU) += ioremap.o ··· 71 71 obj-$(CONFIG_TEST_PARMAN) += test_parman.o 72 72 obj-$(CONFIG_TEST_KMOD) += test_kmod.o 73 73 obj-$(CONFIG_TEST_DEBUG_VIRTUAL) += test_debug_virtual.o 74 + obj-$(CONFIG_TEST_MEMCAT_P) += test_memcat_p.o 74 75 75 76 ifeq ($(CONFIG_DEBUG_KOBJECT),y) 76 77 CFLAGS_kobject.o += -DDEBUG
+34
lib/memcat_p.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/slab.h> 4 + 5 + /* 6 + * Merge two NULL-terminated pointer arrays into a newly allocated 7 + * array, which is also NULL-terminated. Nomenclature is inspired by 8 + * memset_p() and memcat() found elsewhere in the kernel source tree. 9 + */ 10 + void **__memcat_p(void **a, void **b) 11 + { 12 + void **p = a, **new; 13 + int nr; 14 + 15 + /* count the elements in both arrays */ 16 + for (nr = 0, p = a; *p; nr++, p++) 17 + ; 18 + for (p = b; *p; nr++, p++) 19 + ; 20 + /* one for the NULL-terminator */ 21 + nr++; 22 + 23 + new = kmalloc_array(nr, sizeof(void *), GFP_KERNEL); 24 + if (!new) 25 + return NULL; 26 + 27 + /* nr -> last index; p points to NULL in b[] */ 28 + for (nr--; nr >= 0; nr--, p = p == b ? &a[nr] : p - 1) 29 + new[nr] = *p; 30 + 31 + return new; 32 + } 33 + EXPORT_SYMBOL_GPL(__memcat_p); 34 +
+1
lib/string.c
··· 27 27 #include <linux/export.h> 28 28 #include <linux/bug.h> 29 29 #include <linux/errno.h> 30 + #include <linux/slab.h> 30 31 31 32 #include <asm/byteorder.h> 32 33 #include <asm/word-at-a-time.h>
+115
lib/test_memcat_p.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Test cases for memcat_p() in lib/memcat_p.c 4 + */ 5 + #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt 6 + 7 + #include <linux/string.h> 8 + #include <linux/slab.h> 9 + #include <linux/module.h> 10 + 11 + struct test_struct { 12 + int num; 13 + unsigned int magic; 14 + }; 15 + 16 + #define MAGIC 0xf00ff00f 17 + /* Size of each of the NULL-terminated input arrays */ 18 + #define INPUT_MAX 128 19 + /* Expected number of non-NULL elements in the output array */ 20 + #define EXPECT (INPUT_MAX * 2 - 2) 21 + 22 + static int __init test_memcat_p_init(void) 23 + { 24 + struct test_struct **in0, **in1, **out, **p; 25 + int err = -ENOMEM, i, r, total = 0; 26 + 27 + in0 = kcalloc(INPUT_MAX, sizeof(*in0), GFP_KERNEL); 28 + if (!in0) 29 + return err; 30 + 31 + in1 = kcalloc(INPUT_MAX, sizeof(*in1), GFP_KERNEL); 32 + if (!in1) 33 + goto err_free_in0; 34 + 35 + for (i = 0, r = 1; i < INPUT_MAX - 1; i++) { 36 + in0[i] = kmalloc(sizeof(**in0), GFP_KERNEL); 37 + if (!in0[i]) 38 + goto err_free_elements; 39 + 40 + in1[i] = kmalloc(sizeof(**in1), GFP_KERNEL); 41 + if (!in1[i]) { 42 + kfree(in0[i]); 43 + goto err_free_elements; 44 + } 45 + 46 + /* lifted from test_sort.c */ 47 + r = (r * 725861) % 6599; 48 + in0[i]->num = r; 49 + in1[i]->num = -r; 50 + in0[i]->magic = MAGIC; 51 + in1[i]->magic = MAGIC; 52 + } 53 + 54 + in0[i] = in1[i] = NULL; 55 + 56 + out = memcat_p(in0, in1); 57 + if (!out) 58 + goto err_free_all_elements; 59 + 60 + err = -EINVAL; 61 + for (i = 0, p = out; *p && (i < INPUT_MAX * 2 - 1); p++, i++) { 62 + total += (*p)->num; 63 + 64 + if ((*p)->magic != MAGIC) { 65 + pr_err("test failed: wrong magic at %d: %u\n", i, 66 + (*p)->magic); 67 + goto err_free_out; 68 + } 69 + } 70 + 71 + if (total) { 72 + pr_err("test failed: expected zero total, got %d\n", total); 73 + goto err_free_out; 74 + } 75 + 76 + if (i != EXPECT) { 77 + pr_err("test failed: expected output size %d, got %d\n", 78 + EXPECT, i); 79 + goto 
err_free_out; 80 + } 81 + 82 + for (i = 0; i < INPUT_MAX - 1; i++) 83 + if (out[i] != in0[i] || out[i + INPUT_MAX - 1] != in1[i]) { 84 + pr_err("test failed: wrong element order at %d\n", i); 85 + goto err_free_out; 86 + } 87 + 88 + err = 0; 89 + pr_info("test passed\n"); 90 + 91 + err_free_out: 92 + kfree(out); 93 + err_free_all_elements: 94 + i = INPUT_MAX; 95 + err_free_elements: 96 + for (i--; i >= 0; i--) { 97 + kfree(in1[i]); 98 + kfree(in0[i]); 99 + } 100 + 101 + kfree(in1); 102 + err_free_in0: 103 + kfree(in0); 104 + 105 + return err; 106 + } 107 + 108 + static void __exit test_memcat_p_exit(void) 109 + { 110 + } 111 + 112 + module_init(test_memcat_p_init); 113 + module_exit(test_memcat_p_exit); 114 + 115 + MODULE_LICENSE("GPL");
+1 -1
samples/mei/mei-amt-version.c
··· 370 370 unsigned int expected_sz) 371 371 { 372 372 uint32_t in_buf_sz; 373 - uint32_t out_buf_sz; 373 + ssize_t out_buf_sz; 374 374 ssize_t written; 375 375 uint32_t status; 376 376 struct amt_host_if_resp_header *msg_hdr;