Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'char-misc-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc

Pull char/misc driver updates from Greg KH:
"Here is the big char/misc/whatever driver changes for 5.6-rc1

Included in here are loads of things from a variety of different
driver subsystems:
- soundwire updates
- binder updates
- nvmem updates
- firmware drivers updates
- extcon driver updates
- various misc driver updates
- fpga driver updates
- interconnect subsystem and driver updates
- bus driver updates
- uio driver updates
- mei driver updates
- w1 driver cleanups
- various other small driver updates

All of these have been in linux-next for a while with no reported
issues"

* tag 'char-misc-5.6-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/char-misc: (86 commits)
mei: me: add jasper point DID
char: hpet: Use flexible-array member
binder: fix log spam for existing debugfs file creation.
mei: me: add comet point (lake) H device ids
nvmem: add QTI SDAM driver
dt-bindings: nvmem: add binding for QTI SPMI SDAM
dt-bindings: imx-ocotp: Add i.MX8MP compatible
dt-bindings: soundwire: fix example
soundwire: cadence: fix kernel-doc parameter descriptions
soundwire: intel: report slave_ids for each link to SOF driver
siox: Use the correct style for SPDX License Identifier
w1: omap-hdq: Simplify driver with PM runtime autosuspend
firmware: stratix10-svc: Remove unneeded semicolon
firmware: google: Probe for a GSMI handler in firmware
firmware: google: Unregister driver_info on failure and exit in gsmi
firmware: google: Release devices before unregistering the bus
slimbus: qcom: add missed clk_disable_unprepare in remove
slimbus: Use the correct style for SPDX License Identifier
slimbus: qcom-ngd-ctrl: Use dma_request_chan() instead dma_request_slave_channel()
dt-bindings: SLIMBus: add slim devices optional properties
...

+3133 -702
+77
Documentation/devicetree/bindings/interconnect/qcom,msm8916.yaml
+# SPDX-License-Identifier: (GPL-2.0 OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/interconnect/qcom,msm8916.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm MSM8916 Network-On-Chip interconnect
+
+maintainers:
+  - Georgi Djakov <georgi.djakov@linaro.org>
+
+description: |
+  The Qualcomm MSM8916 interconnect providers support adjusting the
+  bandwidth requirements between the various NoC fabrics.
+
+properties:
+  compatible:
+    enum:
+      - qcom,msm8916-bimc
+      - qcom,msm8916-pcnoc
+      - qcom,msm8916-snoc
+
+  reg:
+    maxItems: 1
+
+  '#interconnect-cells':
+    const: 1
+
+  clock-names:
+    items:
+      - const: bus
+      - const: bus_a
+
+  clocks:
+    items:
+      - description: Bus Clock
+      - description: Bus A Clock
+
+required:
+  - compatible
+  - reg
+  - '#interconnect-cells'
+  - clock-names
+  - clocks
+
+additionalProperties: false
+
+examples:
+  - |
+    #include <dt-bindings/clock/qcom,rpmcc.h>
+
+    bimc: interconnect@400000 {
+        compatible = "qcom,msm8916-bimc";
+        reg = <0x00400000 0x62000>;
+        #interconnect-cells = <1>;
+        clock-names = "bus", "bus_a";
+        clocks = <&rpmcc RPM_SMD_BIMC_CLK>,
+                 <&rpmcc RPM_SMD_BIMC_A_CLK>;
+    };
+
+    pcnoc: interconnect@500000 {
+        compatible = "qcom,msm8916-pcnoc";
+        reg = <0x00500000 0x11000>;
+        #interconnect-cells = <1>;
+        clock-names = "bus", "bus_a";
+        clocks = <&rpmcc RPM_SMD_PCNOC_CLK>,
+                 <&rpmcc RPM_SMD_PCNOC_A_CLK>;
+    };
+
+    snoc: interconnect@580000 {
+        compatible = "qcom,msm8916-snoc";
+        reg = <0x00580000 0x14000>;
+        #interconnect-cells = <1>;
+        clock-names = "bus", "bus_a";
+        clocks = <&rpmcc RPM_SMD_SNOC_CLK>,
+                 <&rpmcc RPM_SMD_SNOC_A_CLK>;
+    };
+2 -1
Documentation/devicetree/bindings/nvmem/imx-ocotp.txt
 
 This binding represents the on-chip eFuse OTP controller found on
 i.MX6Q/D, i.MX6DL/S, i.MX6SL, i.MX6SX, i.MX6UL, i.MX6ULL/ULZ, i.MX6SLL,
-i.MX7D/S, i.MX7ULP, i.MX8MQ, i.MX8MM and i.MX8MN SoCs.
+i.MX7D/S, i.MX7ULP, i.MX8MQ, i.MX8MM, i.MX8MN and i.MX8MP SoCs.
 
 Required properties:
 - compatible: should be one of
···
     "fsl,imx8mq-ocotp" (i.MX8MQ),
     "fsl,imx8mm-ocotp" (i.MX8MM),
     "fsl,imx8mn-ocotp" (i.MX8MN),
+    "fsl,imx8mp-ocotp" (i.MX8MP),
     followed by "syscon".
 - #address-cells : Should be 1
 - #size-cells : Should be 1
+84
Documentation/devicetree/bindings/nvmem/qcom,spmi-sdam.yaml
+# SPDX-License-Identifier: (GPL-2.0-only OR BSD-2-Clause)
+%YAML 1.2
+---
+$id: http://devicetree.org/schemas/nvmem/qcom,spmi-sdam.yaml#
+$schema: http://devicetree.org/meta-schemas/core.yaml#
+
+title: Qualcomm Technologies, Inc. SPMI SDAM DT bindings
+
+maintainers:
+  - Shyam Kumar Thella <sthella@codeaurora.org>
+
+description: |
+  The SDAM provides scratch register space for the PMIC clients. This
+  memory can be used by software to store information or communicate
+  to/from the PBUS.
+
+allOf:
+  - $ref: "nvmem.yaml#"
+
+properties:
+  compatible:
+    enum:
+      - qcom,spmi-sdam
+
+  reg:
+    maxItems: 1
+
+  "#address-cells":
+    const: 1
+
+  "#size-cells":
+    const: 1
+
+  ranges: true
+
+required:
+  - compatible
+  - reg
+  - ranges
+
+patternProperties:
+  "^.*@[0-9a-f]+$":
+    type: object
+
+    properties:
+      reg:
+        maxItems: 1
+        description:
+          Offset and size in bytes within the storage device.
+
+      bits:
+        $ref: /schemas/types.yaml#/definitions/uint32-array
+        maxItems: 1
+        items:
+          items:
+            - minimum: 0
+              maximum: 7
+              description:
+                Offset in bit within the address range specified by reg.
+            - minimum: 1
+              description:
+                Size in bit within the address range specified by reg.
+
+    required:
+      - reg
+
+additionalProperties: false
+
+examples:
+  - |
+    sdam_1: nvram@b000 {
+        #address-cells = <1>;
+        #size-cells = <1>;
+        compatible = "qcom,spmi-sdam";
+        reg = <0xb000 0x100>;
+        ranges = <0 0xb000 0x100>;
+
+        /* Data cells */
+        restart_reason: restart@50 {
+            reg = <0x50 0x1>;
+            bits = <6 2>;
+        };
+    };
+...
+10
Documentation/devicetree/bindings/slimbus/bus.txt
     Product Code, shall be in lower case hexadecimal with leading
     zeroes suppressed
 
+Optional property for SLIMbus child node if it is present:
+- slim-ifc-dev	- Should be phandle to SLIMBus Interface device.
+		Required for devices which deal with streams.
+
 SLIMbus example for Qualcomm's slimbus manager component:
 
 	slim@28080000 {
···
 		#address-cells = <2>;
 		#size-cell = <0>;
 
+		codec_ifd: ifd@0,0{
+			compatible = "slim217,60";
+			reg = <0 0>;
+		};
+
 		codec: wcd9310@1,0{
 			compatible = "slim217,60";
 			reg = <1 0>;
+			slim-ifc-dev = <&codec_ifd>;
 		};
 	};
+167
Documentation/devicetree/bindings/soundwire/qcom,sdw.txt
+Qualcomm SoundWire Controller Bindings
+
+
+This binding describes the Qualcomm SoundWire Controller along with its
+board specific bus parameters.
+
+- compatible:
+	Usage: required
+	Value type: <stringlist>
+	Definition: must be "qcom,soundwire-v<MAJOR>.<MINOR>.<STEP>",
+		    Example:
+			"qcom,soundwire-v1.3.0"
+			"qcom,soundwire-v1.5.0"
+			"qcom,soundwire-v1.6.0"
+- reg:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: the base address and size of SoundWire controller
+		    address space.
+
+- interrupts:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: should specify the SoundWire Controller IRQ
+
+- clock-names:
+	Usage: required
+	Value type: <stringlist>
+	Definition: should be "iface" for SoundWire Controller interface clock
+
+- clocks:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: should specify the SoundWire Controller interface clock
+
+- #sound-dai-cells:
+	Usage: required
+	Value type: <u32>
+	Definition: must be 1 for digital audio interfaces on the controller.
+
+- qcom,dout-ports:
+	Usage: required
+	Value type: <u32>
+	Definition: must be count of data out ports
+
+- qcom,din-ports:
+	Usage: required
+	Value type: <u32>
+	Definition: must be count of data in ports
+
+- qcom,ports-offset1:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: should specify payload transport window offset1 of each
+		    data port. Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-offset2:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: should specify payload transport window offset2 of each
+		    data port. Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-sinterval-low:
+	Usage: required
+	Value type: <prop-encoded-array>
+	Definition: should be sample interval low of each data port.
+		    Out ports followed by In ports. Used for Sample Interval
+		    calculation.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-word-length:
+	Usage: optional
+	Value type: <prop-encoded-array>
+	Definition: should be size of payload channel sample.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-block-pack-mode:
+	Usage: optional
+	Value type: <prop-encoded-array>
+	Definition: should be 0 or 1 to indicate the block packing mode.
+		    0 to indicate Blocks are per Channel
+		    1 to indicate Blocks are per Port.
+		    Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-block-group-count:
+	Usage: optional
+	Value type: <prop-encoded-array>
+	Definition: should be in range 1 to 4 to indicate how many sample
+		    intervals are combined into a payload.
+		    Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-lane-control:
+	Usage: optional
+	Value type: <prop-encoded-array>
+	Definition: should be in range 0 to 7 to identify which data lane
+		    the data port uses.
+		    Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-hstart:
+	Usage: optional
+	Value type: <prop-encoded-array>
+	Definition: should be number identifying lowerst numbered coloum in
+		    SoundWire Frame, i.e. left edge of the Transport sub-frame
+		    for each port. Values between 0 and 15 are valid.
+		    Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,ports-hstop:
+	Usage: optional
+	Value type: <prop-encoded-array>
+	Definition: should be number identifying highest numbered coloum in
+		    SoundWire Frame, i.e. the right edge of the Transport
+		    sub-frame for each port. Values between 0 and 15 are valid.
+		    Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+- qcom,dports-type:
+	Usage: optional
+	Value type: <prop-encoded-array>
+	Definition: should be one of the following types
+		    0 for reduced port
+		    1 for simple ports
+		    2 for full port
+		    Out ports followed by In ports.
+		    More info in MIPI Alliance SoundWire 1.0 Specifications.
+
+Note:
+	More Information on detail of encoding of these fields can be
+	found in MIPI Alliance SoundWire 1.0 Specifications.
+
+= SoundWire devices
+Each subnode of the bus represents SoundWire device attached to it.
+The properties of these nodes are defined by the individual bindings.
+
+= EXAMPLE
+The following example represents a SoundWire controller on DB845c board
+which has controller integrated inside WCD934x codec on SDM845 SoC.
+
+soundwire: soundwire@c85 {
+	compatible = "qcom,soundwire-v1.3.0";
+	reg = <0xc85 0x20>;
+	interrupts = <20 IRQ_TYPE_EDGE_RISING>;
+	clocks = <&wcc>;
+	clock-names = "iface";
+	#sound-dai-cells = <1>;
+	qcom,dports-type = <0>;
+	qcom,dout-ports = <6>;
+	qcom,din-ports = <2>;
+	qcom,ports-sinterval-low = /bits/ 8 <0x07 0x1F 0x3F 0x7 0x1F 0x3F 0x0F 0x0F>;
+	qcom,ports-offset1 = /bits/ 8 <0x01 0x02 0x0C 0x6 0x12 0x0D 0x07 0x0A >;
+	qcom,ports-offset2 = /bits/ 8 <0x00 0x00 0x1F 0x00 0x00 0x1F 0x00 0x00>;
+
+	/* Left Speaker */
+	left{
+		....
+	};
+
+	/* Right Speaker */
+	right{
+		....
+	};
+};
+2
Documentation/devicetree/bindings/soundwire/soundwire-controller.yaml
 		reg = <0 1>;
 		powerdown-gpios = <&wcdpinctrl 2 0>;
 		#thermal-sensor-cells = <0>;
+		#sound-dai-cells = <0>;
 	};
 
 	speaker@0,2 {
···
 		reg = <0 2>;
 		powerdown-gpios = <&wcdpinctrl 2 0>;
 		#thermal-sensor-cells = <0>;
+		#sound-dai-cells = <0>;
 	};
 };
+22
Documentation/driver-api/interconnect.rst
 get paths between endpoints and set their bandwidth/latency/QoS requirements
 for these interconnect paths. These interfaces are not currently
 documented.
+
+Interconnect debugfs interfaces
+-------------------------------
+
+Like several other subsystems interconnect will create some files for debugging
+and introspection. Files in debugfs are not considered ABI so application
+software shouldn't rely on format details change between kernel versions.
+
+``/sys/kernel/debug/interconnect/interconnect_summary``:
+
+Show all interconnect nodes in the system with their aggregated bandwidth
+request. Indented under each node show bandwidth requests from each device.
+
+``/sys/kernel/debug/interconnect/interconnect_graph``:
+
+Show the interconnect graph in the graphviz dot format. It shows all
+interconnect nodes and links in the system and groups together nodes from the
+same provider as subgraphs. The format is human-readable and can also be piped
+through dot to generate diagrams in many graphical formats::
+
+	$ cat /sys/kernel/debug/interconnect/interconnect_graph | \
+		dot -Tsvg > interconnect_graph.svg
+19 -18
drivers/android/binder.c
 
 static int binder_open(struct inode *nodp, struct file *filp)
 {
-	struct binder_proc *proc;
+	struct binder_proc *proc, *itr;
 	struct binder_device *binder_dev;
 	struct binderfs_info *info;
 	struct dentry *binder_binderfs_dir_entry_proc = NULL;
+	bool existing_pid = false;
 
 	binder_debug(BINDER_DEBUG_OPEN_CLOSE, "%s: %d:%d\n", __func__,
 		     current->group_leader->pid, current->pid);
···
 	filp->private_data = proc;
 
 	mutex_lock(&binder_procs_lock);
+	hlist_for_each_entry(itr, &binder_procs, proc_node) {
+		if (itr->pid == proc->pid) {
+			existing_pid = true;
+			break;
+		}
+	}
 	hlist_add_head(&proc->proc_node, &binder_procs);
 	mutex_unlock(&binder_procs_lock);
 
-	if (binder_debugfs_dir_entry_proc) {
+	if (binder_debugfs_dir_entry_proc && !existing_pid) {
 		char strbuf[11];
 
 		snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
 		/*
-		 * proc debug entries are shared between contexts, so
-		 * this will fail if the process tries to open the driver
-		 * again with a different context. The priting code will
-		 * anyway print all contexts that a given PID has, so this
-		 * is not a problem.
+		 * proc debug entries are shared between contexts.
+		 * Only create for the first PID to avoid debugfs log spamming
+		 * The printing code will anyway print all contexts for a given
+		 * PID so this is not a problem.
 		 */
 		proc->debugfs_entry = debugfs_create_file(strbuf, 0444,
 			binder_debugfs_dir_entry_proc,
···
 			&proc_fops);
 	}
 
-	if (binder_binderfs_dir_entry_proc) {
+	if (binder_binderfs_dir_entry_proc && !existing_pid) {
 		char strbuf[11];
 		struct dentry *binderfs_entry;
 
 		snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
 		/*
 		 * Similar to debugfs, the process specific log file is shared
-		 * between contexts. If the file has already been created for a
-		 * process, the following binderfs_create_file() call will
-		 * fail with error code EEXIST if another context of the same
-		 * process invoked binder_open(). This is ok since same as
-		 * debugfs, the log file will contain information on all
-		 * contexts of a given PID.
+		 * between contexts. Only create for the first PID.
+		 * This is ok since same as debugfs, the log file will contain
+		 * information on all contexts of a given PID.
 		 */
 		binderfs_entry = binderfs_create_file(binder_binderfs_dir_entry_proc,
 			strbuf, &proc_fops, (void *)(unsigned long)proc->pid);
···
 			int error;
 
 			error = PTR_ERR(binderfs_entry);
-			if (error != -EEXIST) {
-				pr_warn("Unable to create file %s in binderfs (error %d)\n",
-					strbuf, error);
-			}
+			pr_warn("Unable to create file %s in binderfs (error %d)\n",
+				strbuf, error);
 		}
 	}
 
+3 -3
drivers/bus/fsl-mc/fsl-mc-bus.c
 struct fsl_mc_device *fsl_mc_get_endpoint(struct fsl_mc_device *mc_dev)
 {
 	struct fsl_mc_device *mc_bus_dev, *endpoint;
-	struct fsl_mc_obj_desc endpoint_desc = { 0 };
-	struct dprc_endpoint endpoint1 = { 0 };
-	struct dprc_endpoint endpoint2 = { 0 };
+	struct fsl_mc_obj_desc endpoint_desc = {{ 0 }};
+	struct dprc_endpoint endpoint1 = {{ 0 }};
+	struct dprc_endpoint endpoint2 = {{ 0 }};
 	int state, err;
 
 	mc_bus_dev = to_fsl_mc_device(mc_dev->dev.parent);
+1 -1
drivers/char/hpet.c
 	unsigned long hp_delta;
 	unsigned int hp_ntimer;
 	unsigned int hp_which;
-	struct hpet_dev hp_dev[1];
+	struct hpet_dev hp_dev[];
 };
 
 static struct hpets *hpets;
+195 -161
drivers/extcon/extcon-arizona.c
 	const struct arizona_micd_range *micd_ranges;
 	int num_micd_ranges;
 
-	int micd_timeout;
-
 	bool micd_reva;
 	bool micd_clamp;
···
 	}
 
 	if (info->micd_reva) {
-		regmap_write(arizona->regmap, 0x80, 0x3);
-		regmap_write(arizona->regmap, 0x294, 0);
-		regmap_write(arizona->regmap, 0x80, 0x0);
+		const struct reg_sequence reva[] = {
+			{ 0x80,  0x3 },
+			{ 0x294, 0x0 },
+			{ 0x80,  0x0 },
+		};
+
+		regmap_multi_reg_write(arizona->regmap, reva, ARRAY_SIZE(reva));
 	}
 
 	if (info->detecting && arizona->pdata.micd_software_compare)
···
 	snd_soc_dapm_sync(dapm);
 
 	if (info->micd_reva) {
-		regmap_write(arizona->regmap, 0x80, 0x3);
-		regmap_write(arizona->regmap, 0x294, 2);
-		regmap_write(arizona->regmap, 0x80, 0x0);
+		const struct reg_sequence reva[] = {
+			{ 0x80,  0x3 },
+			{ 0x294, 0x2 },
+			{ 0x80,  0x0 },
+		};
+
+		regmap_multi_reg_write(arizona->regmap, reva, ARRAY_SIZE(reva));
 	}
 
 	ret = regulator_allow_bypass(info->micvdd, true);
···
 	struct arizona *arizona = info->arizona;
 	int id_gpio = arizona->pdata.hpdet_id_gpio;
 
+	if (!arizona->pdata.hpdet_acc_id)
+		return 0;
+
 	/*
 	 * If we're using HPDET for accessory identification we need
 	 * to take multiple measurements, step through them in sequence.
 	 */
-	if (arizona->pdata.hpdet_acc_id) {
-		info->hpdet_res[info->num_hpdet_res++] = *reading;
+	info->hpdet_res[info->num_hpdet_res++] = *reading;
 
-		/* Only check the mic directly if we didn't already ID it */
-		if (id_gpio && info->num_hpdet_res == 1) {
-			dev_dbg(arizona->dev, "Measuring mic\n");
+	/* Only check the mic directly if we didn't already ID it */
+	if (id_gpio && info->num_hpdet_res == 1) {
+		dev_dbg(arizona->dev, "Measuring mic\n");
 
-			regmap_update_bits(arizona->regmap,
-					   ARIZONA_ACCESSORY_DETECT_MODE_1,
-					   ARIZONA_ACCDET_MODE_MASK |
-					   ARIZONA_ACCDET_SRC,
-					   ARIZONA_ACCDET_MODE_HPR |
-					   info->micd_modes[0].src);
-
-			gpio_set_value_cansleep(id_gpio, 1);
-
-			regmap_update_bits(arizona->regmap,
-					   ARIZONA_HEADPHONE_DETECT_1,
-					   ARIZONA_HP_POLL, ARIZONA_HP_POLL);
-			return -EAGAIN;
-		}
-
-		/* OK, got both. Now, compare... */
-		dev_dbg(arizona->dev, "HPDET measured %d %d\n",
-			info->hpdet_res[0], info->hpdet_res[1]);
-
-		/* Take the headphone impedance for the main report */
-		*reading = info->hpdet_res[0];
-
-		/* Sometimes we get false readings due to slow insert */
-		if (*reading >= ARIZONA_HPDET_MAX && !info->hpdet_retried) {
-			dev_dbg(arizona->dev, "Retrying high impedance\n");
-			info->num_hpdet_res = 0;
-			info->hpdet_retried = true;
-			arizona_start_hpdet_acc_id(info);
-			pm_runtime_put(info->dev);
-			return -EAGAIN;
-		}
-
-		/*
-		 * If we measure the mic as high impedance
-		 */
-		if (!id_gpio || info->hpdet_res[1] > 50) {
-			dev_dbg(arizona->dev, "Detected mic\n");
-			*mic = true;
-			info->detecting = true;
-		} else {
-			dev_dbg(arizona->dev, "Detected headphone\n");
-		}
-
-		/* Make sure everything is reset back to the real polarity */
-		regmap_update_bits(arizona->regmap,
-				   ARIZONA_ACCESSORY_DETECT_MODE_1,
-				   ARIZONA_ACCDET_SRC,
-				   info->micd_modes[0].src);
+		regmap_update_bits(arizona->regmap,
+				   ARIZONA_ACCESSORY_DETECT_MODE_1,
+				   ARIZONA_ACCDET_MODE_MASK |
+				   ARIZONA_ACCDET_SRC,
+				   ARIZONA_ACCDET_MODE_HPR |
+				   info->micd_modes[0].src);
+
+		gpio_set_value_cansleep(id_gpio, 1);
+
+		regmap_update_bits(arizona->regmap, ARIZONA_HEADPHONE_DETECT_1,
+				   ARIZONA_HP_POLL, ARIZONA_HP_POLL);
+		return -EAGAIN;
 	}
+
+	/* OK, got both. Now, compare... */
+	dev_dbg(arizona->dev, "HPDET measured %d %d\n",
+		info->hpdet_res[0], info->hpdet_res[1]);
+
+	/* Take the headphone impedance for the main report */
+	*reading = info->hpdet_res[0];
+
+	/* Sometimes we get false readings due to slow insert */
+	if (*reading >= ARIZONA_HPDET_MAX && !info->hpdet_retried) {
+		dev_dbg(arizona->dev, "Retrying high impedance\n");
+		info->num_hpdet_res = 0;
+		info->hpdet_retried = true;
+		arizona_start_hpdet_acc_id(info);
+		pm_runtime_put(info->dev);
+		return -EAGAIN;
+	}
+
+	/*
+	 * If we measure the mic as high impedance
+	 */
+	if (!id_gpio || info->hpdet_res[1] > 50) {
+		dev_dbg(arizona->dev, "Detected mic\n");
+		*mic = true;
+		info->detecting = true;
+	} else {
+		dev_dbg(arizona->dev, "Detected headphone\n");
+	}
+
+	/* Make sure everything is reset back to the real polarity */
+	regmap_update_bits(arizona->regmap, ARIZONA_ACCESSORY_DETECT_MODE_1,
+			   ARIZONA_ACCDET_SRC, info->micd_modes[0].src);
 
 	return 0;
 }
···
 	if (id_gpio)
 		gpio_set_value_cansleep(id_gpio, 0);
 
-	/* Revert back to MICDET mode */
-	regmap_update_bits(arizona->regmap,
-			   ARIZONA_ACCESSORY_DETECT_MODE_1,
-			   ARIZONA_ACCDET_MODE_MASK, ARIZONA_ACCDET_MODE_MIC);
-
 	/* If we have a mic then reenable MICDET */
 	if (mic || info->mic)
 		arizona_start_mic(info);
···
 
 	info->hpdet_active = true;
 
-	if (info->mic)
-		arizona_stop_mic(info);
+	arizona_stop_mic(info);
 
 	arizona_extcon_hp_clamp(info, true);
···
 		return;
 
 err:
-	regmap_update_bits(arizona->regmap, ARIZONA_ACCESSORY_DETECT_MODE_1,
-			   ARIZONA_ACCDET_MODE_MASK, ARIZONA_ACCDET_MODE_MIC);
+	arizona_extcon_hp_clamp(info, false);
+	pm_runtime_put_autosuspend(info->dev);
 
 	/* Just report headphone */
 	ret = extcon_set_state_sync(info->edev, EXTCON_JACK_HEADPHONE, true);
···
 		return;
 
 err:
-	regmap_update_bits(arizona->regmap, ARIZONA_ACCESSORY_DETECT_MODE_1,
-			   ARIZONA_ACCDET_MODE_MASK, ARIZONA_ACCDET_MODE_MIC);
-
 	/* Just report headphone */
 	ret = extcon_set_state_sync(info->edev, EXTCON_JACK_HEADPHONE, true);
 	if (ret != 0)
···
 
 	arizona_identify_headphone(info);
 
-	arizona_stop_mic(info);
-
 	mutex_unlock(&info->lock);
 }
 
-static void arizona_micd_detect(struct work_struct *work)
+static int arizona_micd_adc_read(struct arizona_extcon_info *info)
 {
-	struct arizona_extcon_info *info = container_of(work,
-						struct arizona_extcon_info,
-						micd_detect_work.work);
 	struct arizona *arizona = info->arizona;
-	unsigned int val = 0, lvl;
-	int ret, i, key;
+	unsigned int val;
+	int ret;
 
-	cancel_delayed_work_sync(&info->micd_timeout_work);
+	/* Must disable MICD before we read the ADCVAL */
+	regmap_update_bits(arizona->regmap, ARIZONA_MIC_DETECT_1,
+			   ARIZONA_MICD_ENA, 0);
 
-	mutex_lock(&info->lock);
-
-	/* If the cable was removed while measuring ignore the result */
-	ret = extcon_get_state(info->edev, EXTCON_MECHANICAL);
-	if (ret < 0) {
-		dev_err(arizona->dev, "Failed to check cable state: %d\n",
-			ret);
-		mutex_unlock(&info->lock);
-		return;
-	} else if (!ret) {
-		dev_dbg(arizona->dev, "Ignoring MICDET for removed cable\n");
-		mutex_unlock(&info->lock);
-		return;
+	ret = regmap_read(arizona->regmap, ARIZONA_MIC_DETECT_4, &val);
+	if (ret != 0) {
+		dev_err(arizona->dev,
+			"Failed to read MICDET_ADCVAL: %d\n", ret);
+		return ret;
 	}
 
-	if (info->detecting && arizona->pdata.micd_software_compare) {
-		/* Must disable MICD before we read the ADCVAL */
-		regmap_update_bits(arizona->regmap, ARIZONA_MIC_DETECT_1,
-				   ARIZONA_MICD_ENA, 0);
-		ret = regmap_read(arizona->regmap, ARIZONA_MIC_DETECT_4, &val);
-		if (ret != 0) {
-			dev_err(arizona->dev,
-				"Failed to read MICDET_ADCVAL: %d\n",
-				ret);
-			mutex_unlock(&info->lock);
-			return;
-		}
+	dev_dbg(arizona->dev, "MICDET_ADCVAL: %x\n", val);
 
-		dev_dbg(arizona->dev, "MICDET_ADCVAL: %x\n", val);
+	val &= ARIZONA_MICDET_ADCVAL_MASK;
+	if (val < ARRAY_SIZE(arizona_micd_levels))
+		val = arizona_micd_levels[val];
+	else
+		val = INT_MAX;
 
-		val &= ARIZONA_MICDET_ADCVAL_MASK;
-		if (val < ARRAY_SIZE(arizona_micd_levels))
-			val = arizona_micd_levels[val];
-		else
-			val = INT_MAX;
+	if (val <= QUICK_HEADPHONE_MAX_OHM)
+		val = ARIZONA_MICD_STS | ARIZONA_MICD_LVL_0;
+	else if (val <= MICROPHONE_MIN_OHM)
+		val = ARIZONA_MICD_STS | ARIZONA_MICD_LVL_1;
+	else if (val <= MICROPHONE_MAX_OHM)
+		val = ARIZONA_MICD_STS | ARIZONA_MICD_LVL_8;
+	else
+		val = ARIZONA_MICD_LVL_8;
 
-		if (val <= QUICK_HEADPHONE_MAX_OHM)
-			val = ARIZONA_MICD_STS | ARIZONA_MICD_LVL_0;
-		else if (val <= MICROPHONE_MIN_OHM)
-			val = ARIZONA_MICD_STS | ARIZONA_MICD_LVL_1;
-		else if (val <= MICROPHONE_MAX_OHM)
-			val = ARIZONA_MICD_STS | ARIZONA_MICD_LVL_8;
-		else
-			val = ARIZONA_MICD_LVL_8;
-	}
+	return val;
+}
+
+static int arizona_micd_read(struct arizona_extcon_info *info)
+{
+	struct arizona *arizona = info->arizona;
+	unsigned int val = 0;
+	int ret, i;
 
 	for (i = 0; i < 10 && !(val & MICD_LVL_0_TO_8); i++) {
 		ret = regmap_read(arizona->regmap, ARIZONA_MIC_DETECT_3, &val);
 		if (ret != 0) {
 			dev_err(arizona->dev,
 				"Failed to read MICDET: %d\n", ret);
-			mutex_unlock(&info->lock);
-			return;
+			return ret;
 		}
 
 		dev_dbg(arizona->dev, "MICDET: %x\n", val);
···
 		if (!(val & ARIZONA_MICD_VALID)) {
 			dev_warn(arizona->dev,
 				 "Microphone detection state invalid\n");
-			mutex_unlock(&info->lock);
-			return;
+			return -EINVAL;
 		}
 	}
 
 	if (i == 10 && !(val & MICD_LVL_0_TO_8)) {
 		dev_err(arizona->dev, "Failed to get valid MICDET value\n");
-		mutex_unlock(&info->lock);
-		return;
+		return -EINVAL;
 	}
+
+	return val;
+}
+
+static int arizona_micdet_reading(void *priv)
+{
+	struct arizona_extcon_info *info = priv;
+	struct arizona *arizona = info->arizona;
+	int ret, val;
+
+	if (info->detecting && arizona->pdata.micd_software_compare)
+		ret = arizona_micd_adc_read(info);
+	else
+		ret = arizona_micd_read(info);
+	if (ret < 0)
+		return ret;
+
+	val = ret;
 
 	/* Due to jack detect this should never happen */
 	if (!(val & ARIZONA_MICD_STS)) {
 		dev_warn(arizona->dev, "Detected open circuit\n");
 		info->mic = false;
-		arizona_stop_mic(info);
 		info->detecting = false;
 		arizona_identify_headphone(info);
-		goto handled;
+		return 0;
 	}
 
 	/* If we got a high impedence we should have a headset, report it. */
-	if (info->detecting && (val & ARIZONA_MICD_LVL_8)) {
+	if (val & ARIZONA_MICD_LVL_8) {
 		info->mic = true;
 		info->detecting = false;
···
 				ret);
 		}
 
-		goto handled;
+		return 0;
 	}
 
 	/* If we detected a lower impedence during initial startup
···
 	 * plain headphones. If both polarities report a low
 	 * impedence then give up and report headphones.
 	 */
-	if (info->detecting && (val & MICD_LVL_1_TO_7)) {
+	if (val & MICD_LVL_1_TO_7) {
 		if (info->jack_flips >= info->micd_num_modes * 10) {
 			dev_dbg(arizona->dev, "Detected HP/line\n");
 
 			info->detecting = false;
 
 			arizona_identify_headphone(info);
-
-			arizona_stop_mic(info);
 		} else {
 			info->micd_mode++;
 			if (info->micd_mode == info->micd_num_modes)
···
 			arizona_extcon_set_mode(info, info->micd_mode);
 
 			info->jack_flips++;
+
+			if (arizona->pdata.micd_software_compare)
+				regmap_update_bits(arizona->regmap,
+						   ARIZONA_MIC_DETECT_1,
+						   ARIZONA_MICD_ENA,
+						   ARIZONA_MICD_ENA);
+
+			queue_delayed_work(system_power_efficient_wq,
+					   &info->micd_timeout_work,
+					   msecs_to_jiffies(arizona->pdata.micd_timeout));
 		}
 
-		goto handled;
+		return 0;
 	}
+
+	/*
+	 * If we're still detecting and we detect a short then we've
+	 * got a headphone.
+	 */
+	dev_dbg(arizona->dev, "Headphone detected\n");
+	info->detecting = false;
+
+	arizona_identify_headphone(info);
+
+	return 0;
+}
+
+static int arizona_button_reading(void *priv)
+{
+	struct arizona_extcon_info *info = priv;
+	struct arizona *arizona = info->arizona;
+	int val, key, lvl, i;
+
+	val = arizona_micd_read(info);
+	if (val < 0)
+		return val;
 
 	/*
 	 * If we're still detecting and we detect a short then we've
···
 			input_report_key(info->input,
 					 info->micd_ranges[i].key, 0);
 
-		WARN_ON(!lvl);
-		WARN_ON(ffs(lvl) - 1 >= info->num_micd_ranges);
 		if (lvl && ffs(lvl) - 1 < info->num_micd_ranges) {
 			key = info->micd_ranges[ffs(lvl) - 1].key;
 			input_report_key(info->input, key, 1);
 			input_sync(info->input);
+		} else {
+			dev_err(arizona->dev, "Button out of range\n");
 		}
-
-	} else if (info->detecting) {
-		dev_dbg(arizona->dev, "Headphone detected\n");
-		info->detecting = false;
-		arizona_stop_mic(info);
-
-		arizona_identify_headphone(info);
 	} else {
 		dev_warn(arizona->dev, "Button with no mic: %x\n",
 			 val);
···
 		arizona_extcon_pulse_micbias(info);
 	}
 
-handled:
-	if (info->detecting) {
-		if (arizona->pdata.micd_software_compare)
-			regmap_update_bits(arizona->regmap,
-					   ARIZONA_MIC_DETECT_1,
-					   ARIZONA_MICD_ENA,
-					   ARIZONA_MICD_ENA);
+	return 0;
+}
 
-		queue_delayed_work(system_power_efficient_wq,
-				   &info->micd_timeout_work,
-				   msecs_to_jiffies(info->micd_timeout));
+static void arizona_micd_detect(struct work_struct *work)
+{
+	struct arizona_extcon_info *info = container_of(work,
+						struct arizona_extcon_info,
+						micd_detect_work.work);
+	struct arizona *arizona = info->arizona;
+	int ret;
+
+	cancel_delayed_work_sync(&info->micd_timeout_work);
+
+	mutex_lock(&info->lock);
+
+	/* If the cable was removed while measuring ignore the result */
+	ret = extcon_get_state(info->edev, EXTCON_MECHANICAL);
+	if (ret < 0) {
+		dev_err(arizona->dev, "Failed to check cable state: %d\n",
+			ret);
+		mutex_unlock(&info->lock);
+		return;
+	} else if (!ret) {
+		dev_dbg(arizona->dev, "Ignoring MICDET for removed cable\n");
+		mutex_unlock(&info->lock);
+		return;
 	}
+
+	if (info->detecting)
+		arizona_micdet_reading(info);
+	else
+		arizona_button_reading(info);
 
 	pm_runtime_mark_last_busy(info->dev);
 	mutex_unlock(&info->lock);
···
 				   msecs_to_jiffies(HPDET_DEBOUNCE));
 
 	if (cancelled_mic) {
-		int micd_timeout = info->micd_timeout;
+		int micd_timeout = arizona->pdata.micd_timeout;
 
 		queue_delayed_work(system_power_efficient_wq,
 				   &info->micd_timeout_work,
···
 			dev_err(arizona->dev, "Mechanical report failed: %d\n",
 				ret);
 
-		if (!arizona->pdata.hpdet_acc_id) {
-			info->detecting = true;
-			info->mic = false;
-			info->jack_flips = 0;
+		info->detecting = true;
+		info->mic = false;
+		info->jack_flips = 0;
 
+		if (!arizona->pdata.hpdet_acc_id) {
 			arizona_start_mic(info);
 		} else {
 			queue_delayed_work(system_power_efficient_wq,
···
 				   ARIZONA_MICD_CLAMP_DB | ARIZONA_JD1_DB,
 				   ARIZONA_MICD_CLAMP_DB | ARIZONA_JD1_DB);
 	}
-
-	if (arizona->pdata.micd_timeout)
-		info->micd_timeout = arizona->pdata.micd_timeout;
-	else
-		info->micd_timeout = DEFAULT_MICD_TIMEOUT;
 
 out:
 	/* Clear trig_sts to make sure DCVDD is not forced up */
···
 
 	info->input->name = "Headset";
 	info->input->phys = "arizona/extcon";
1437 + 1438 + if (!pdata->micd_timeout) 1439 + pdata->micd_timeout = DEFAULT_MICD_TIMEOUT; 1468 1440 1469 1441 if (pdata->num_micd_configs) { 1470 1442 info->micd_modes = pdata->micd_configs;
+5 -5
drivers/extcon/extcon-sm5502.c
···
 		dev_err(info->dev, "Unknown DM_CON/DP_CON switch type (%d)\n",
 			con_sw);
 		return -EINVAL;
-	};
+	}

 	switch (vbus_sw) {
 	case VBUSIN_SWITCH_OPEN:
···
 	default:
 		dev_err(info->dev, "Unknown VBUS switch type (%d)\n", vbus_sw);
 		return -EINVAL;
-	};
+	}

 	return 0;
 }
···
 				"cannot identify the cable type: adc(0x%x)\n",
 				adc);
 			return -EINVAL;
-		};
+		}
 		break;
 	default:
 		dev_err(info->dev,
 			"failed to identify the cable type: adc(0x%x)\n", adc);
 		return -EINVAL;
-	};
+	}

 	return cable_type;
 }
···
 		dev_dbg(info->dev,
 			"cannot handle this cable_type (0x%x)\n", cable_type);
 		return 0;
-	};
+	}

 	/* Change internal hardware path(DM_CON/DP_CON, VBUSIN) */
 	ret = sm5502_muic_set_path(info, con_sw, vbus_sw, attached);
+7
drivers/firmware/google/coreboot_table.c
···
 	return ret;
 }

+static int __cb_dev_unregister(struct device *dev, void *dummy)
+{
+	device_unregister(dev);
+	return 0;
+}
+
 static int coreboot_table_remove(struct platform_device *pdev)
 {
+	bus_for_each_dev(&coreboot_bus_type, NULL, NULL, __cb_dev_unregister);
 	bus_unregister(&coreboot_bus_type);
 	return 0;
 }
+25
drivers/firmware/google/gsmi.c
···
 #define GSMI_CMD_LOG_S0IX_RESUME	0x0b
 #define GSMI_CMD_CLEAR_CONFIG		0x20
 #define GSMI_CMD_HANDSHAKE_TYPE		0xC1
+#define GSMI_CMD_RESERVED		0xff

 /* Magic entry type for kernel events */
 #define GSMI_LOG_ENTRY_TYPE_KERNEL	0xDEAD
···
 static __init int gsmi_system_valid(void)
 {
 	u32 hash;
+	u16 cmd, result;

 	if (!dmi_check_system(gsmi_dmi_table))
 		return -ENODEV;
···
 	/* check for valid SMI command port in ACPI FADT */
 	if (acpi_gbl_FADT.smi_command == 0) {
 		pr_info("gsmi: missing smi_command\n");
+		return -ENODEV;
+	}
+
+	/* Test the smihandler with a bogus command. If it leaves the
+	 * calling argument in %ax untouched, there is no handler for
+	 * GSMI commands.
+	 */
+	cmd = GSMI_CALLBACK | GSMI_CMD_RESERVED << 8;
+	asm volatile (
+		"outb %%al, %%dx\n\t"
+		: "=a" (result)
+		: "0" (cmd),
+		  "d" (acpi_gbl_FADT.smi_command)
+		: "memory", "cc"
+		);
+	if (cmd == result) {
+		pr_info("gsmi: no gsmi handler in firmware\n");
 		return -ENODEV;
 	}
···
 	dma_pool_destroy(gsmi_dev.dma_pool);
 	platform_device_unregister(gsmi_dev.pdev);
 	pr_info("gsmi: failed to load: %d\n", ret);
+#ifdef CONFIG_PM
+	platform_driver_unregister(&gsmi_driver_info);
+#endif
 	return ret;
 }
···
 	gsmi_buf_free(gsmi_dev.name_buf);
 	dma_pool_destroy(gsmi_dev.dma_pool);
 	platform_device_unregister(gsmi_dev.pdev);
+#ifdef CONFIG_PM
+	platform_driver_unregister(&gsmi_driver_info);
+#endif
 }

 module_init(gsmi_init);
+2 -2
drivers/firmware/stratix10-svc.c
···
 		 */
 		msleep(1000);
 		count_in_sec--;
-	};
+	}

 	if (res.a0 == INTEL_SIP_SMC_STATUS_OK && count_in_sec)
 		cb_data->status = BIT(SVC_STATUS_RECONFIG_COMPLETED);
···
 			break;

 		}
-	};
+	}

 	kfree(cbdata);
 	kfree(pdata);
-2
drivers/fpga/dfl-afu-main.c
···
 static int afu_dev_destroy(struct platform_device *pdev)
 {
 	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
-	struct dfl_afu *afu;

 	mutex_lock(&pdata->lock);
-	afu = dfl_fpga_pdata_get_private(pdata);
 	afu_mmio_region_destroy(pdata);
 	afu_dma_region_destroy(pdata);
 	dfl_fpga_pdata_set_private(pdata, NULL);
-2
drivers/fpga/dfl-fme-main.c
···
 static void fme_dev_destroy(struct platform_device *pdev)
 {
 	struct dfl_feature_platform_data *pdata = dev_get_platdata(&pdev->dev);
-	struct dfl_fme *fme;

 	mutex_lock(&pdata->lock);
-	fme = dfl_fpga_pdata_get_private(pdata);
 	dfl_fpga_pdata_set_private(pdata, NULL);
 	mutex_unlock(&pdata->lock);
 }
+1 -3
drivers/fpga/ts73xx-fpga.c
···
 	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
 	priv->io_base = devm_ioremap_resource(kdev, res);
-	if (IS_ERR(priv->io_base)) {
-		dev_err(kdev, "unable to remap registers\n");
+	if (IS_ERR(priv->io_base))
 		return PTR_ERR(priv->io_base);
-	}

 	mgr = devm_fpga_mgr_create(kdev, "TS-73xx FPGA Manager",
 				   &ts73xx_fpga_ops, priv);
+2 -1
drivers/fpga/xilinx-pr-decoupler.c
···
 	priv->clk = devm_clk_get(&pdev->dev, "aclk");
 	if (IS_ERR(priv->clk)) {
-		dev_err(&pdev->dev, "input clock not found\n");
+		if (PTR_ERR(priv->clk) != -EPROBE_DEFER)
+			dev_err(&pdev->dev, "input clock not found\n");
 		return PTR_ERR(priv->clk);
 	}
+1
drivers/interconnect/Makefile
···
 # SPDX-License-Identifier: GPL-2.0

+CFLAGS_core.o := -I$(src)
 icc-core-objs := core.o

 obj-$(CONFIG_INTERCONNECT) += icc-core.o
+131 -37
drivers/interconnect/core.c
···
 #include <linux/of.h>
 #include <linux/overflow.h>

+#include "internal.h"
+
+#define CREATE_TRACE_POINTS
+#include "trace.h"
+
 static DEFINE_IDR(icc_idr);
 static LIST_HEAD(icc_providers);
 static DEFINE_MUTEX(icc_lock);
 static struct dentry *icc_debugfs_dir;
-
-/**
- * struct icc_req - constraints that are attached to each node
- * @req_node: entry in list of requests for the particular @node
- * @node: the interconnect node to which this constraint applies
- * @dev: reference to the device that sets the constraints
- * @tag: path tag (optional)
- * @avg_bw: an integer describing the average bandwidth in kBps
- * @peak_bw: an integer describing the peak bandwidth in kBps
- */
-struct icc_req {
-	struct hlist_node req_node;
-	struct icc_node *node;
-	struct device *dev;
-	u32 tag;
-	u32 avg_bw;
-	u32 peak_bw;
-};
-
-/**
- * struct icc_path - interconnect path structure
- * @num_nodes: number of hops (nodes)
- * @reqs: array of the requests applicable to this path of nodes
- */
-struct icc_path {
-	size_t num_nodes;
-	struct icc_req reqs[];
-};

 static void icc_summary_show_one(struct seq_file *s, struct icc_node *n)
 {
 	if (!n)
 		return;

-	seq_printf(s, "%-30s %12u %12u\n",
+	seq_printf(s, "%-42s %12u %12u\n",
 		   n->name, n->avg_bw, n->peak_bw);
 }
···
 {
 	struct icc_provider *provider;

-	seq_puts(s, " node avg peak\n");
-	seq_puts(s, "--------------------------------------------------------\n");
+	seq_puts(s, " node tag avg peak\n");
+	seq_puts(s, "--------------------------------------------------------------------\n");

 	mutex_lock(&icc_lock);
···
 			if (!r->dev)
 				continue;

-			seq_printf(s, "  %-26s %12u %12u\n",
-				   dev_name(r->dev), r->avg_bw,
+			seq_printf(s, "  %-27s %12u %12u %12u\n",
+				   dev_name(r->dev), r->tag, r->avg_bw,
 				   r->peak_bw);
 		}
 	}
···
 	return 0;
 }
 DEFINE_SHOW_ATTRIBUTE(icc_summary);
+
+static void icc_graph_show_link(struct seq_file *s, int level,
+				struct icc_node *n, struct icc_node *m)
+{
+	seq_printf(s, "%s\"%d:%s\" -> \"%d:%s\"\n",
+		   level == 2 ? "\t\t" : "\t",
+		   n->id, n->name, m->id, m->name);
+}
+
+static void icc_graph_show_node(struct seq_file *s, struct icc_node *n)
+{
+	seq_printf(s, "\t\t\"%d:%s\" [label=\"%d:%s",
+		   n->id, n->name, n->id, n->name);
+	seq_printf(s, "\n\t\t\t|avg_bw=%ukBps", n->avg_bw);
+	seq_printf(s, "\n\t\t\t|peak_bw=%ukBps", n->peak_bw);
+	seq_puts(s, "\"]\n");
+}
+
+static int icc_graph_show(struct seq_file *s, void *data)
+{
+	struct icc_provider *provider;
+	struct icc_node *n;
+	int cluster_index = 0;
+	int i;
+
+	seq_puts(s, "digraph {\n\trankdir = LR\n\tnode [shape = record]\n");
+	mutex_lock(&icc_lock);
+
+	/* draw providers as cluster subgraphs */
+	cluster_index = 0;
+	list_for_each_entry(provider, &icc_providers, provider_list) {
+		seq_printf(s, "\tsubgraph cluster_%d {\n", ++cluster_index);
+		if (provider->dev)
+			seq_printf(s, "\t\tlabel = \"%s\"\n",
+				   dev_name(provider->dev));
+
+		/* draw nodes */
+		list_for_each_entry(n, &provider->nodes, node_list)
+			icc_graph_show_node(s, n);
+
+		/* draw internal links */
+		list_for_each_entry(n, &provider->nodes, node_list)
+			for (i = 0; i < n->num_links; ++i)
+				if (n->provider == n->links[i]->provider)
+					icc_graph_show_link(s, 2, n,
+							    n->links[i]);
+
+		seq_puts(s, "\t}\n");
+	}
+
+	/* draw external links */
+	list_for_each_entry(provider, &icc_providers, provider_list)
+		list_for_each_entry(n, &provider->nodes, node_list)
+			for (i = 0; i < n->num_links; ++i)
+				if (n->provider != n->links[i]->provider)
+					icc_graph_show_link(s, 1, n,
+							    n->links[i]);
+
+	mutex_unlock(&icc_lock);
+	seq_puts(s, "}");
+
+	return 0;
+}
+DEFINE_SHOW_ATTRIBUTE(icc_graph);

 static struct icc_node *node_find(const int id)
 {
···
 	return ret;
 }

+int icc_std_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+		      u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	*agg_avg += avg_bw;
+	*agg_peak = max(*agg_peak, peak_bw);
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(icc_std_aggregate);
+
 /* of_icc_xlate_onecell() - Translate function using a single index.
  * @spec: OF phandle args to map into an interconnect node.
  * @data: private data (pointer to struct icc_onecell_data)
···
 	mutex_lock(&icc_lock);
 	path = path_find(dev, src_node, dst_node);
-	if (IS_ERR(path))
-		dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
 	mutex_unlock(&icc_lock);
+	if (IS_ERR(path)) {
+		dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
+		return path;
+	}
+
+	if (name)
+		path->name = kstrdup_const(name, GFP_KERNEL);
+	else
+		path->name = kasprintf(GFP_KERNEL, "%s-%s",
+				       src_node->name, dst_node->name);

 	return path;
 }
···
 	size_t i;
 	int ret;

-	if (!path || !path->num_nodes)
+	if (!path)
 		return 0;
+
+	if (WARN_ON(IS_ERR(path) || !path->num_nodes))
+		return -EINVAL;

 	mutex_lock(&icc_lock);
···
 		/* aggregate requests for this node */
 		aggregate_requests(node);
+
+		trace_icc_set_bw(path, node, i, avg_bw, peak_bw);
 	}

 	ret = apply_constraints(path);
···
 	}

 	mutex_unlock(&icc_lock);
+
+	trace_icc_set_bw_end(path, ret);

 	return ret;
 }
···
 		goto out;

 	path = path_find(dev, src, dst);
-	if (IS_ERR(path))
+	if (IS_ERR(path)) {
 		dev_err(dev, "%s: invalid path=%ld\n", __func__, PTR_ERR(path));
+		goto out;
+	}

+	path->name = kasprintf(GFP_KERNEL, "%s-%s", src->name, dst->name);
 out:
 	mutex_unlock(&icc_lock);
 	return path;
···
 	}
 	mutex_unlock(&icc_lock);

+	kfree_const(path->name);
 	kfree(path);
 }
 EXPORT_SYMBOL_GPL(icc_put);
···
 EXPORT_SYMBOL_GPL(icc_node_del);

 /**
+ * icc_nodes_remove() - remove all previously added nodes from provider
+ * @provider: the interconnect provider we are removing nodes from
+ *
+ * Return: 0 on success, or an error code otherwise
+ */
+int icc_nodes_remove(struct icc_provider *provider)
+{
+	struct icc_node *n, *tmp;
+
+	if (WARN_ON(IS_ERR_OR_NULL(provider)))
+		return -EINVAL;
+
+	list_for_each_entry_safe_reverse(n, tmp, &provider->nodes, node_list) {
+		icc_node_del(n);
+		icc_node_destroy(n->id);
+	}
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(icc_nodes_remove);
+
+/**
  * icc_provider_add() - add a new interconnect provider
  * @provider: the interconnect provider that will be added into topology
  *
···
 	icc_debugfs_dir = debugfs_create_dir("interconnect", NULL);
 	debugfs_create_file("interconnect_summary", 0444,
 			    icc_debugfs_dir, NULL, &icc_summary_fops);
+	debugfs_create_file("interconnect_graph", 0444,
+			    icc_debugfs_dir, NULL, &icc_graph_fops);
 	return 0;
 }
+42
drivers/interconnect/internal.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Interconnect framework internal structs
+ *
+ * Copyright (c) 2019, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __DRIVERS_INTERCONNECT_INTERNAL_H
+#define __DRIVERS_INTERCONNECT_INTERNAL_H
+
+/**
+ * struct icc_req - constraints that are attached to each node
+ * @req_node: entry in list of requests for the particular @node
+ * @node: the interconnect node to which this constraint applies
+ * @dev: reference to the device that sets the constraints
+ * @tag: path tag (optional)
+ * @avg_bw: an integer describing the average bandwidth in kBps
+ * @peak_bw: an integer describing the peak bandwidth in kBps
+ */
+struct icc_req {
+	struct hlist_node req_node;
+	struct icc_node *node;
+	struct device *dev;
+	u32 tag;
+	u32 avg_bw;
+	u32 peak_bw;
+};
+
+/**
+ * struct icc_path - interconnect path structure
+ * @name: a string name of the path (useful for ftrace)
+ * @num_nodes: number of hops (nodes)
+ * @reqs: array of the requests applicable to this path of nodes
+ */
+struct icc_path {
+	const char *name;
+	size_t num_nodes;
+	struct icc_req reqs[];
+};
+
+#endif
+9
drivers/interconnect/qcom/Kconfig
···
 	help
 	  Support for Qualcomm's Network-on-Chip interconnect hardware.

+config INTERCONNECT_QCOM_MSM8916
+	tristate "Qualcomm MSM8916 interconnect driver"
+	depends on INTERCONNECT_QCOM
+	depends on QCOM_SMD_RPM
+	select INTERCONNECT_QCOM_SMD_RPM
+	help
+	  This is a driver for the Qualcomm Network-on-Chip on msm8916-based
+	  platforms.
+
 config INTERCONNECT_QCOM_MSM8974
 	tristate "Qualcomm MSM8974 interconnect driver"
 	depends on INTERCONNECT_QCOM
+2
drivers/interconnect/qcom/Makefile
···
 # SPDX-License-Identifier: GPL-2.0

+qnoc-msm8916-objs			:= msm8916.o
 qnoc-msm8974-objs			:= msm8974.o
 qnoc-qcs404-objs			:= qcs404.o
 qnoc-sdm845-objs			:= sdm845.o
 icc-smd-rpm-objs			:= smd-rpm.o

+obj-$(CONFIG_INTERCONNECT_QCOM_MSM8916) += qnoc-msm8916.o
 obj-$(CONFIG_INTERCONNECT_QCOM_MSM8974) += qnoc-msm8974.o
 obj-$(CONFIG_INTERCONNECT_QCOM_QCS404) += qnoc-qcs404.o
 obj-$(CONFIG_INTERCONNECT_QCOM_SDM845) += qnoc-sdm845.o
+554
drivers/interconnect/qcom/msm8916.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2018-2020 Linaro Ltd
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#include <linux/clk.h>
+#include <linux/device.h>
+#include <linux/interconnect-provider.h>
+#include <linux/io.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/of_device.h>
+
+#include <dt-bindings/interconnect/qcom,msm8916.h>
+
+#include "smd-rpm.h"
+
+#define RPM_BUS_MASTER_REQ	0x73616d62
+#define RPM_BUS_SLAVE_REQ	0x766c7362
+
+enum {
+	MSM8916_BIMC_SNOC_MAS = 1,
+	MSM8916_BIMC_SNOC_SLV,
+	MSM8916_MASTER_AMPSS_M0,
+	MSM8916_MASTER_LPASS,
+	MSM8916_MASTER_BLSP_1,
+	MSM8916_MASTER_DEHR,
+	MSM8916_MASTER_GRAPHICS_3D,
+	MSM8916_MASTER_JPEG,
+	MSM8916_MASTER_MDP_PORT0,
+	MSM8916_MASTER_CRYPTO_CORE0,
+	MSM8916_MASTER_SDCC_1,
+	MSM8916_MASTER_SDCC_2,
+	MSM8916_MASTER_QDSS_BAM,
+	MSM8916_MASTER_QDSS_ETR,
+	MSM8916_MASTER_SNOC_CFG,
+	MSM8916_MASTER_SPDM,
+	MSM8916_MASTER_TCU0,
+	MSM8916_MASTER_TCU1,
+	MSM8916_MASTER_USB_HS,
+	MSM8916_MASTER_VFE,
+	MSM8916_MASTER_VIDEO_P0,
+	MSM8916_SNOC_MM_INT_0,
+	MSM8916_SNOC_MM_INT_1,
+	MSM8916_SNOC_MM_INT_2,
+	MSM8916_SNOC_MM_INT_BIMC,
+	MSM8916_PNOC_INT_0,
+	MSM8916_PNOC_INT_1,
+	MSM8916_PNOC_MAS_0,
+	MSM8916_PNOC_MAS_1,
+	MSM8916_PNOC_SLV_0,
+	MSM8916_PNOC_SLV_1,
+	MSM8916_PNOC_SLV_2,
+	MSM8916_PNOC_SLV_3,
+	MSM8916_PNOC_SLV_4,
+	MSM8916_PNOC_SLV_8,
+	MSM8916_PNOC_SLV_9,
+	MSM8916_PNOC_SNOC_MAS,
+	MSM8916_PNOC_SNOC_SLV,
+	MSM8916_SNOC_QDSS_INT,
+	MSM8916_SLAVE_AMPSS_L2,
+	MSM8916_SLAVE_APSS,
+	MSM8916_SLAVE_LPASS,
+	MSM8916_SLAVE_BIMC_CFG,
+	MSM8916_SLAVE_BLSP_1,
+	MSM8916_SLAVE_BOOT_ROM,
+	MSM8916_SLAVE_CAMERA_CFG,
+	MSM8916_SLAVE_CATS_128,
+	MSM8916_SLAVE_OCMEM_64,
+	MSM8916_SLAVE_CLK_CTL,
+	MSM8916_SLAVE_CRYPTO_0_CFG,
+	MSM8916_SLAVE_DEHR_CFG,
+	MSM8916_SLAVE_DISPLAY_CFG,
+	MSM8916_SLAVE_EBI_CH0,
+	MSM8916_SLAVE_GRAPHICS_3D_CFG,
+	MSM8916_SLAVE_IMEM_CFG,
+	MSM8916_SLAVE_IMEM,
+	MSM8916_SLAVE_MPM,
+	MSM8916_SLAVE_MSG_RAM,
+	MSM8916_SLAVE_MSS,
+	MSM8916_SLAVE_PDM,
+	MSM8916_SLAVE_PMIC_ARB,
+	MSM8916_SLAVE_PNOC_CFG,
+	MSM8916_SLAVE_PRNG,
+	MSM8916_SLAVE_QDSS_CFG,
+	MSM8916_SLAVE_QDSS_STM,
+	MSM8916_SLAVE_RBCPR_CFG,
+	MSM8916_SLAVE_SDCC_1,
+	MSM8916_SLAVE_SDCC_2,
+	MSM8916_SLAVE_SECURITY,
+	MSM8916_SLAVE_SNOC_CFG,
+	MSM8916_SLAVE_SPDM,
+	MSM8916_SLAVE_SRVC_SNOC,
+	MSM8916_SLAVE_TCSR,
+	MSM8916_SLAVE_TLMM,
+	MSM8916_SLAVE_USB_HS,
+	MSM8916_SLAVE_VENUS_CFG,
+	MSM8916_SNOC_BIMC_0_MAS,
+	MSM8916_SNOC_BIMC_0_SLV,
+	MSM8916_SNOC_BIMC_1_MAS,
+	MSM8916_SNOC_BIMC_1_SLV,
+	MSM8916_SNOC_INT_0,
+	MSM8916_SNOC_INT_1,
+	MSM8916_SNOC_INT_BIMC,
+	MSM8916_SNOC_PNOC_MAS,
+	MSM8916_SNOC_PNOC_SLV,
+};
+
+#define to_msm8916_provider(_provider) \
+	container_of(_provider, struct msm8916_icc_provider, provider)
+
+static const struct clk_bulk_data msm8916_bus_clocks[] = {
+	{ .id = "bus" },
+	{ .id = "bus_a" },
+};
+
+/**
+ * struct msm8916_icc_provider - Qualcomm specific interconnect provider
+ * @provider: generic interconnect provider
+ * @bus_clks: the clk_bulk_data table of bus clocks
+ * @num_clks: the total number of clk_bulk_data entries
+ */
+struct msm8916_icc_provider {
+	struct icc_provider provider;
+	struct clk_bulk_data *bus_clks;
+	int num_clks;
+};
+
+#define MSM8916_MAX_LINKS	8
+
+/**
+ * struct msm8916_icc_node - Qualcomm specific interconnect nodes
+ * @name: the node name used in debugfs
+ * @id: a unique node identifier
+ * @links: an array of nodes where we can go next while traversing
+ * @num_links: the total number of @links
+ * @buswidth: width of the interconnect between a node and the bus (bytes)
+ * @mas_rpm_id: RPM ID for devices that are bus masters
+ * @slv_rpm_id: RPM ID for devices that are bus slaves
+ * @rate: current bus clock rate in Hz
+ */
+struct msm8916_icc_node {
+	unsigned char *name;
+	u16 id;
+	u16 links[MSM8916_MAX_LINKS];
+	u16 num_links;
+	u16 buswidth;
+	int mas_rpm_id;
+	int slv_rpm_id;
+	u64 rate;
+};
+
+struct msm8916_icc_desc {
+	struct msm8916_icc_node **nodes;
+	size_t num_nodes;
+};
+
+#define DEFINE_QNODE(_name, _id, _buswidth, _mas_rpm_id, _slv_rpm_id, \
+		     ...) \
+	static struct msm8916_icc_node _name = { \
+		.name = #_name, \
+		.id = _id, \
+		.buswidth = _buswidth, \
+		.mas_rpm_id = _mas_rpm_id, \
+		.slv_rpm_id = _slv_rpm_id, \
+		.num_links = ARRAY_SIZE(((int[]){ __VA_ARGS__ })), \
+		.links = { __VA_ARGS__ }, \
+	}
+
+DEFINE_QNODE(bimc_snoc_mas, MSM8916_BIMC_SNOC_MAS, 8, -1, -1, MSM8916_BIMC_SNOC_SLV);
+DEFINE_QNODE(bimc_snoc_slv, MSM8916_BIMC_SNOC_SLV, 8, -1, -1, MSM8916_SNOC_INT_0, MSM8916_SNOC_INT_1);
+DEFINE_QNODE(mas_apss, MSM8916_MASTER_AMPSS_M0, 8, -1, -1, MSM8916_SLAVE_EBI_CH0, MSM8916_BIMC_SNOC_MAS, MSM8916_SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_audio, MSM8916_MASTER_LPASS, 4, -1, -1, MSM8916_PNOC_MAS_0);
+DEFINE_QNODE(mas_blsp_1, MSM8916_MASTER_BLSP_1, 4, -1, -1, MSM8916_PNOC_MAS_1);
+DEFINE_QNODE(mas_dehr, MSM8916_MASTER_DEHR, 4, -1, -1, MSM8916_PNOC_MAS_0);
+DEFINE_QNODE(mas_gfx, MSM8916_MASTER_GRAPHICS_3D, 8, -1, -1, MSM8916_SLAVE_EBI_CH0, MSM8916_BIMC_SNOC_MAS, MSM8916_SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_jpeg, MSM8916_MASTER_JPEG, 16, -1, -1, MSM8916_SNOC_MM_INT_0, MSM8916_SNOC_MM_INT_2);
+DEFINE_QNODE(mas_mdp, MSM8916_MASTER_MDP_PORT0, 16, -1, -1, MSM8916_SNOC_MM_INT_0, MSM8916_SNOC_MM_INT_2);
+DEFINE_QNODE(mas_pcnoc_crypto_0, MSM8916_MASTER_CRYPTO_CORE0, 8, -1, -1, MSM8916_PNOC_INT_1);
+DEFINE_QNODE(mas_pcnoc_sdcc_1, MSM8916_MASTER_SDCC_1, 8, -1, -1, MSM8916_PNOC_INT_1);
+DEFINE_QNODE(mas_pcnoc_sdcc_2, MSM8916_MASTER_SDCC_2, 8, -1, -1, MSM8916_PNOC_INT_1);
+DEFINE_QNODE(mas_qdss_bam, MSM8916_MASTER_QDSS_BAM, 8, -1, -1, MSM8916_SNOC_QDSS_INT);
+DEFINE_QNODE(mas_qdss_etr, MSM8916_MASTER_QDSS_ETR, 8, -1, -1, MSM8916_SNOC_QDSS_INT);
+DEFINE_QNODE(mas_snoc_cfg, MSM8916_MASTER_SNOC_CFG, 4, 20, -1, MSM8916_SNOC_QDSS_INT);
+DEFINE_QNODE(mas_spdm, MSM8916_MASTER_SPDM, 4, -1, -1, MSM8916_PNOC_MAS_0);
+DEFINE_QNODE(mas_tcu0, MSM8916_MASTER_TCU0, 8, -1, -1, MSM8916_SLAVE_EBI_CH0, MSM8916_BIMC_SNOC_MAS, MSM8916_SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_tcu1, MSM8916_MASTER_TCU1, 8, -1, -1, MSM8916_SLAVE_EBI_CH0, MSM8916_BIMC_SNOC_MAS, MSM8916_SLAVE_AMPSS_L2);
+DEFINE_QNODE(mas_usb_hs, MSM8916_MASTER_USB_HS, 4, -1, -1, MSM8916_PNOC_MAS_1);
+DEFINE_QNODE(mas_vfe, MSM8916_MASTER_VFE, 16, -1, -1, MSM8916_SNOC_MM_INT_1, MSM8916_SNOC_MM_INT_2);
+DEFINE_QNODE(mas_video, MSM8916_MASTER_VIDEO_P0, 16, -1, -1, MSM8916_SNOC_MM_INT_0, MSM8916_SNOC_MM_INT_2);
+DEFINE_QNODE(mm_int_0, MSM8916_SNOC_MM_INT_0, 16, -1, -1, MSM8916_SNOC_MM_INT_BIMC);
+DEFINE_QNODE(mm_int_1, MSM8916_SNOC_MM_INT_1, 16, -1, -1, MSM8916_SNOC_MM_INT_BIMC);
+DEFINE_QNODE(mm_int_2, MSM8916_SNOC_MM_INT_2, 16, -1, -1, MSM8916_SNOC_INT_0);
+DEFINE_QNODE(mm_int_bimc, MSM8916_SNOC_MM_INT_BIMC, 16, -1, -1, MSM8916_SNOC_BIMC_1_MAS);
+DEFINE_QNODE(pcnoc_int_0, MSM8916_PNOC_INT_0, 8, -1, -1, MSM8916_PNOC_SNOC_MAS, MSM8916_PNOC_SLV_0, MSM8916_PNOC_SLV_1, MSM8916_PNOC_SLV_2, MSM8916_PNOC_SLV_3, MSM8916_PNOC_SLV_4, MSM8916_PNOC_SLV_8, MSM8916_PNOC_SLV_9);
+DEFINE_QNODE(pcnoc_int_1, MSM8916_PNOC_INT_1, 8, -1, -1, MSM8916_PNOC_SNOC_MAS);
+DEFINE_QNODE(pcnoc_m_0, MSM8916_PNOC_MAS_0, 8, -1, -1, MSM8916_PNOC_INT_0);
+DEFINE_QNODE(pcnoc_m_1, MSM8916_PNOC_MAS_1, 8, -1, -1, MSM8916_PNOC_SNOC_MAS);
+DEFINE_QNODE(pcnoc_s_0, MSM8916_PNOC_SLV_0, 8, -1, -1, MSM8916_SLAVE_CLK_CTL, MSM8916_SLAVE_TLMM, MSM8916_SLAVE_TCSR, MSM8916_SLAVE_SECURITY, MSM8916_SLAVE_MSS);
+DEFINE_QNODE(pcnoc_s_1, MSM8916_PNOC_SLV_1, 8, -1, -1, MSM8916_SLAVE_IMEM_CFG, MSM8916_SLAVE_CRYPTO_0_CFG, MSM8916_SLAVE_MSG_RAM, MSM8916_SLAVE_PDM, MSM8916_SLAVE_PRNG);
+DEFINE_QNODE(pcnoc_s_2, MSM8916_PNOC_SLV_2, 8, -1, -1, MSM8916_SLAVE_SPDM, MSM8916_SLAVE_BOOT_ROM, MSM8916_SLAVE_BIMC_CFG, MSM8916_SLAVE_PNOC_CFG, MSM8916_SLAVE_PMIC_ARB);
+DEFINE_QNODE(pcnoc_s_3, MSM8916_PNOC_SLV_3, 8, -1, -1, MSM8916_SLAVE_MPM, MSM8916_SLAVE_SNOC_CFG, MSM8916_SLAVE_RBCPR_CFG, MSM8916_SLAVE_QDSS_CFG, MSM8916_SLAVE_DEHR_CFG);
+DEFINE_QNODE(pcnoc_s_4, MSM8916_PNOC_SLV_4, 8, -1, -1, MSM8916_SLAVE_VENUS_CFG, MSM8916_SLAVE_CAMERA_CFG, MSM8916_SLAVE_DISPLAY_CFG);
+DEFINE_QNODE(pcnoc_s_8, MSM8916_PNOC_SLV_8, 8, -1, -1, MSM8916_SLAVE_USB_HS, MSM8916_SLAVE_SDCC_1, MSM8916_SLAVE_BLSP_1);
+DEFINE_QNODE(pcnoc_s_9, MSM8916_PNOC_SLV_9, 8, -1, -1, MSM8916_SLAVE_SDCC_2, MSM8916_SLAVE_LPASS, MSM8916_SLAVE_GRAPHICS_3D_CFG);
+DEFINE_QNODE(pcnoc_snoc_mas, MSM8916_PNOC_SNOC_MAS, 8, 29, -1, MSM8916_PNOC_SNOC_SLV);
+DEFINE_QNODE(pcnoc_snoc_slv, MSM8916_PNOC_SNOC_SLV, 8, -1, 45, MSM8916_SNOC_INT_0, MSM8916_SNOC_INT_BIMC, MSM8916_SNOC_INT_1);
+DEFINE_QNODE(qdss_int, MSM8916_SNOC_QDSS_INT, 8, -1, -1, MSM8916_SNOC_INT_0, MSM8916_SNOC_INT_BIMC);
+DEFINE_QNODE(slv_apps_l2, MSM8916_SLAVE_AMPSS_L2, 8, -1, -1, 0);
+DEFINE_QNODE(slv_apss, MSM8916_SLAVE_APSS, 4, -1, 20, 0);
+DEFINE_QNODE(slv_audio, MSM8916_SLAVE_LPASS, 4, -1, -1, 0);
+DEFINE_QNODE(slv_bimc_cfg, MSM8916_SLAVE_BIMC_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_blsp_1, MSM8916_SLAVE_BLSP_1, 4, -1, -1, 0);
+DEFINE_QNODE(slv_boot_rom, MSM8916_SLAVE_BOOT_ROM, 4, -1, -1, 0);
+DEFINE_QNODE(slv_camera_cfg, MSM8916_SLAVE_CAMERA_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_cats_0, MSM8916_SLAVE_CATS_128, 16, -1, 106, 0);
+DEFINE_QNODE(slv_cats_1, MSM8916_SLAVE_OCMEM_64, 8, -1, 107, 0);
+DEFINE_QNODE(slv_clk_ctl, MSM8916_SLAVE_CLK_CTL, 4, -1, -1, 0);
+DEFINE_QNODE(slv_crypto_0_cfg, MSM8916_SLAVE_CRYPTO_0_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_dehr_cfg, MSM8916_SLAVE_DEHR_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_display_cfg, MSM8916_SLAVE_DISPLAY_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_ebi_ch0, MSM8916_SLAVE_EBI_CH0, 8, -1, 0, 0);
+DEFINE_QNODE(slv_gfx_cfg, MSM8916_SLAVE_GRAPHICS_3D_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_imem_cfg, MSM8916_SLAVE_IMEM_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_imem, MSM8916_SLAVE_IMEM, 8, -1, 26, 0);
+DEFINE_QNODE(slv_mpm, MSM8916_SLAVE_MPM, 4, -1, -1, 0);
+DEFINE_QNODE(slv_msg_ram, MSM8916_SLAVE_MSG_RAM, 4, -1, -1, 0);
+DEFINE_QNODE(slv_mss, MSM8916_SLAVE_MSS, 4, -1, -1, 0);
+DEFINE_QNODE(slv_pdm, MSM8916_SLAVE_PDM, 4, -1, -1, 0);
+DEFINE_QNODE(slv_pmic_arb, MSM8916_SLAVE_PMIC_ARB, 4, -1, -1, 0);
+DEFINE_QNODE(slv_pcnoc_cfg, MSM8916_SLAVE_PNOC_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_prng, MSM8916_SLAVE_PRNG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_qdss_cfg, MSM8916_SLAVE_QDSS_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_qdss_stm, MSM8916_SLAVE_QDSS_STM, 4, -1, 30, 0);
+DEFINE_QNODE(slv_rbcpr_cfg, MSM8916_SLAVE_RBCPR_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_sdcc_1, MSM8916_SLAVE_SDCC_1, 4, -1, -1, 0);
+DEFINE_QNODE(slv_sdcc_2, MSM8916_SLAVE_SDCC_2, 4, -1, -1, 0);
+DEFINE_QNODE(slv_security, MSM8916_SLAVE_SECURITY, 4, -1, -1, 0);
+DEFINE_QNODE(slv_snoc_cfg, MSM8916_SLAVE_SNOC_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(slv_spdm, MSM8916_SLAVE_SPDM, 4, -1, -1, 0);
+DEFINE_QNODE(slv_srvc_snoc, MSM8916_SLAVE_SRVC_SNOC, 8, -1, 29, 0);
+DEFINE_QNODE(slv_tcsr, MSM8916_SLAVE_TCSR, 4, -1, -1, 0);
+DEFINE_QNODE(slv_tlmm, MSM8916_SLAVE_TLMM, 4, -1, -1, 0);
+DEFINE_QNODE(slv_usb_hs, MSM8916_SLAVE_USB_HS, 4, -1, -1, 0);
+DEFINE_QNODE(slv_venus_cfg, MSM8916_SLAVE_VENUS_CFG, 4, -1, -1, 0);
+DEFINE_QNODE(snoc_bimc_0_mas, MSM8916_SNOC_BIMC_0_MAS, 8, 3, -1, MSM8916_SNOC_BIMC_0_SLV);
+DEFINE_QNODE(snoc_bimc_0_slv, MSM8916_SNOC_BIMC_0_SLV, 8, -1, 24, MSM8916_SLAVE_EBI_CH0);
+DEFINE_QNODE(snoc_bimc_1_mas, MSM8916_SNOC_BIMC_1_MAS, 16, -1, -1, MSM8916_SNOC_BIMC_1_SLV);
+DEFINE_QNODE(snoc_bimc_1_slv, MSM8916_SNOC_BIMC_1_SLV, 8, -1, -1, MSM8916_SLAVE_EBI_CH0);
+DEFINE_QNODE(snoc_int_0, MSM8916_SNOC_INT_0, 8, 99, 130, MSM8916_SLAVE_QDSS_STM, MSM8916_SLAVE_IMEM, MSM8916_SNOC_PNOC_MAS);
+DEFINE_QNODE(snoc_int_1, MSM8916_SNOC_INT_1, 8, 100, 131, MSM8916_SLAVE_APSS, MSM8916_SLAVE_CATS_128, MSM8916_SLAVE_OCMEM_64);
+DEFINE_QNODE(snoc_int_bimc, MSM8916_SNOC_INT_BIMC, 8, 101, 132, MSM8916_SNOC_BIMC_0_MAS);
+DEFINE_QNODE(snoc_pcnoc_mas, MSM8916_SNOC_PNOC_MAS, 8, -1, -1, MSM8916_SNOC_PNOC_SLV);
+DEFINE_QNODE(snoc_pcnoc_slv, MSM8916_SNOC_PNOC_SLV, 8, -1, -1, MSM8916_PNOC_INT_0);
+
+static struct msm8916_icc_node *msm8916_snoc_nodes[] = {
+	[BIMC_SNOC_SLV] = &bimc_snoc_slv,
+	[MASTER_JPEG] = &mas_jpeg,
+	[MASTER_MDP_PORT0] = &mas_mdp,
+	[MASTER_QDSS_BAM] = &mas_qdss_bam,
+	[MASTER_QDSS_ETR] = &mas_qdss_etr,
+	[MASTER_SNOC_CFG] = &mas_snoc_cfg,
+	[MASTER_VFE] = &mas_vfe,
+	[MASTER_VIDEO_P0] = &mas_video,
+	[SNOC_MM_INT_0] = &mm_int_0,
+	[SNOC_MM_INT_1] = &mm_int_1,
+	[SNOC_MM_INT_2] = &mm_int_2,
+	[SNOC_MM_INT_BIMC] = &mm_int_bimc,
+	[PCNOC_SNOC_SLV] = &pcnoc_snoc_slv,
+	[SLAVE_APSS] = &slv_apss,
+	[SLAVE_CATS_128] = &slv_cats_0,
+	[SLAVE_OCMEM_64] = &slv_cats_1,
+	[SLAVE_IMEM] = &slv_imem,
+	[SLAVE_QDSS_STM] = &slv_qdss_stm,
+	[SLAVE_SRVC_SNOC] = &slv_srvc_snoc,
+	[SNOC_BIMC_0_MAS] = &snoc_bimc_0_mas,
+	[SNOC_BIMC_1_MAS] = &snoc_bimc_1_mas,
+	[SNOC_INT_0] = &snoc_int_0,
+	[SNOC_INT_1] = &snoc_int_1,
+	[SNOC_INT_BIMC] = &snoc_int_bimc,
+	[SNOC_PCNOC_MAS] = &snoc_pcnoc_mas,
+	[SNOC_QDSS_INT] = &qdss_int,
+};
+
+static struct msm8916_icc_desc msm8916_snoc = {
+	.nodes = msm8916_snoc_nodes,
+	.num_nodes = ARRAY_SIZE(msm8916_snoc_nodes),
+};
+
+static struct msm8916_icc_node *msm8916_bimc_nodes[] = {
+	[BIMC_SNOC_MAS] = &bimc_snoc_mas,
+	[MASTER_AMPSS_M0] = &mas_apss,
+	[MASTER_GRAPHICS_3D] = &mas_gfx,
+	[MASTER_TCU0] = &mas_tcu0,
+	[MASTER_TCU1] = &mas_tcu1,
+	[SLAVE_AMPSS_L2] = &slv_apps_l2,
+	[SLAVE_EBI_CH0] = &slv_ebi_ch0,
+	[SNOC_BIMC_0_SLV] = &snoc_bimc_0_slv,
+	[SNOC_BIMC_1_SLV] = &snoc_bimc_1_slv,
+};
+
+static struct msm8916_icc_desc msm8916_bimc = {
+	.nodes = msm8916_bimc_nodes,
+	.num_nodes = ARRAY_SIZE(msm8916_bimc_nodes),
+};
+
+static struct msm8916_icc_node *msm8916_pcnoc_nodes[] = {
+	[MASTER_BLSP_1] = &mas_blsp_1,
+	[MASTER_DEHR] = &mas_dehr,
+	[MASTER_LPASS] = &mas_audio,
+	[MASTER_CRYPTO_CORE0] = &mas_pcnoc_crypto_0,
+	[MASTER_SDCC_1] = &mas_pcnoc_sdcc_1,
+	[MASTER_SDCC_2] = &mas_pcnoc_sdcc_2,
+	[MASTER_SPDM] = &mas_spdm,
+	[MASTER_USB_HS] = &mas_usb_hs,
+	[PCNOC_INT_0] = &pcnoc_int_0,
+	[PCNOC_INT_1] = &pcnoc_int_1,
+	[PCNOC_MAS_0] = &pcnoc_m_0,
+	[PCNOC_MAS_1] = &pcnoc_m_1,
+	[PCNOC_SLV_0] = &pcnoc_s_0,
+	[PCNOC_SLV_1] = &pcnoc_s_1,
+	[PCNOC_SLV_2] = &pcnoc_s_2,
+	[PCNOC_SLV_3] = &pcnoc_s_3,
+	[PCNOC_SLV_4] = &pcnoc_s_4,
+	[PCNOC_SLV_8] = &pcnoc_s_8,
+	[PCNOC_SLV_9] = &pcnoc_s_9,
+	[PCNOC_SNOC_MAS] = &pcnoc_snoc_mas,
+	[SLAVE_BIMC_CFG] = &slv_bimc_cfg,
+	[SLAVE_BLSP_1] = &slv_blsp_1,
+	[SLAVE_BOOT_ROM] = &slv_boot_rom,
+	[SLAVE_CAMERA_CFG] = &slv_camera_cfg,
+	[SLAVE_CLK_CTL] = &slv_clk_ctl,
+	[SLAVE_CRYPTO_0_CFG] = &slv_crypto_0_cfg,
+	[SLAVE_DEHR_CFG] = &slv_dehr_cfg,
+	[SLAVE_DISPLAY_CFG] = &slv_display_cfg,
+	[SLAVE_GRAPHICS_3D_CFG] = &slv_gfx_cfg,
+	[SLAVE_IMEM_CFG] = &slv_imem_cfg,
+	[SLAVE_LPASS] = &slv_audio,
+	[SLAVE_MPM] = &slv_mpm,
+	[SLAVE_MSG_RAM] = &slv_msg_ram,
+	[SLAVE_MSS] = &slv_mss,
+	[SLAVE_PDM] = &slv_pdm,
+	[SLAVE_PMIC_ARB] = &slv_pmic_arb,
+	[SLAVE_PCNOC_CFG] = &slv_pcnoc_cfg,
+	[SLAVE_PRNG] = &slv_prng,
+	[SLAVE_QDSS_CFG] = &slv_qdss_cfg,
+	[SLAVE_RBCPR_CFG] = &slv_rbcpr_cfg,
+	[SLAVE_SDCC_1] = &slv_sdcc_1,
+	[SLAVE_SDCC_2] = &slv_sdcc_2,
+	[SLAVE_SECURITY] = &slv_security,
+	[SLAVE_SNOC_CFG] = &slv_snoc_cfg,
+	[SLAVE_SPDM] = &slv_spdm,
+	[SLAVE_TCSR] = &slv_tcsr,
+	[SLAVE_TLMM] = &slv_tlmm,
+	[SLAVE_USB_HS] = &slv_usb_hs,
+	[SLAVE_VENUS_CFG] = &slv_venus_cfg,
+	[SNOC_PCNOC_SLV] = &snoc_pcnoc_slv,
+};
+
+static struct msm8916_icc_desc msm8916_pcnoc = {
+	.nodes = msm8916_pcnoc_nodes,
+	.num_nodes = ARRAY_SIZE(msm8916_pcnoc_nodes),
+};
+
+static int msm8916_icc_set(struct icc_node *src, struct icc_node *dst)
+{
+	struct msm8916_icc_provider *qp;
+	struct msm8916_icc_node *qn;
+	u64 sum_bw, max_peak_bw, rate;
+	u32 agg_avg = 0, agg_peak = 0;
+	struct icc_provider *provider;
+	struct icc_node *n;
+	int ret, i;
+
+	qn = src->data;
+	provider = src->provider;
+	qp = to_msm8916_provider(provider);
+
+	list_for_each_entry(n, &provider->nodes, node_list)
+		provider->aggregate(n, 0, n->avg_bw, n->peak_bw,
+				    &agg_avg, &agg_peak);
+
+	sum_bw = icc_units_to_bps(agg_avg);
+	max_peak_bw = icc_units_to_bps(agg_peak);
+
+	/* send bandwidth request message to the RPM processor */
+	if (qn->mas_rpm_id != -1) {
+		ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+					    RPM_BUS_MASTER_REQ,
+					    qn->mas_rpm_id,
+					    sum_bw);
+		if (ret) {
+			pr_err("qcom_icc_rpm_smd_send mas %d error %d\n",
+			       qn->mas_rpm_id, ret);
+			return ret;
+		}
+	}
+
+	if (qn->slv_rpm_id != -1) {
+		ret = qcom_icc_rpm_smd_send(QCOM_SMD_RPM_ACTIVE_STATE,
+					    RPM_BUS_SLAVE_REQ,
+					    qn->slv_rpm_id,
+					    sum_bw);
+		if (ret) {
+			pr_err("qcom_icc_rpm_smd_send slv error %d\n",
+			       ret);
+			return ret;
+		}
+	}
+
+	rate = max(sum_bw, max_peak_bw);
+
+	do_div(rate, qn->buswidth);
+
+	if (qn->rate == rate)
+		return 0;
+
+	for (i = 0; i < qp->num_clks; i++) {
+		ret = clk_set_rate(qp->bus_clks[i].clk, rate);
+		if (ret) {
+			pr_err("%s clk_set_rate error: %d\n",
+			       qp->bus_clks[i].id, ret);
+			return ret;
+		}
+	}
+
+	qn->rate = rate;
+
+	return 0;
+}
+
+static int msm8916_qnoc_probe(struct platform_device *pdev)
+{
+	const struct msm8916_icc_desc *desc;
+	struct msm8916_icc_node **qnodes;
+	struct msm8916_icc_provider *qp;
+	struct device *dev = &pdev->dev;
+	struct icc_onecell_data *data;
+	struct icc_provider *provider;
+	struct icc_node *node;
+	size_t num_nodes, i;
+	int ret;
+
+	/* wait for the RPM proxy */
+	if (!qcom_icc_rpm_smd_available())
+		return -EPROBE_DEFER;
+
+	desc = of_device_get_match_data(dev);
+	if (!desc)
+		return -EINVAL;
+
+	qnodes = desc->nodes;
+	num_nodes = desc->num_nodes;
+
+	qp = devm_kzalloc(dev, sizeof(*qp), GFP_KERNEL);
+	if (!qp)
+		return -ENOMEM;
+
+	data = devm_kzalloc(dev, struct_size(data, nodes, num_nodes),
+			    GFP_KERNEL);
+	if (!data)
+		return -ENOMEM;
+
+	qp->bus_clks = devm_kmemdup(dev, msm8916_bus_clocks,
+				    sizeof(msm8916_bus_clocks), GFP_KERNEL);
+	if (!qp->bus_clks)
+		return -ENOMEM;
+
+	qp->num_clks = ARRAY_SIZE(msm8916_bus_clocks);
+	ret = devm_clk_bulk_get(dev, qp->num_clks, qp->bus_clks);
+	if (ret)
+		return ret;
+
+	ret = clk_bulk_prepare_enable(qp->num_clks, qp->bus_clks);
+	if (ret)
+		return ret;
+
+	provider = &qp->provider;
+
INIT_LIST_HEAD(&provider->nodes); 481 + provider->dev = dev; 482 + provider->set = msm8916_icc_set; 483 + provider->aggregate = icc_std_aggregate; 484 + provider->xlate = of_icc_xlate_onecell; 485 + provider->data = data; 486 + 487 + ret = icc_provider_add(provider); 488 + if (ret) { 489 + dev_err(dev, "error adding interconnect provider: %d\n", ret); 490 + clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 491 + return ret; 492 + } 493 + 494 + for (i = 0; i < num_nodes; i++) { 495 + size_t j; 496 + 497 + node = icc_node_create(qnodes[i]->id); 498 + if (IS_ERR(node)) { 499 + ret = PTR_ERR(node); 500 + goto err; 501 + } 502 + 503 + node->name = qnodes[i]->name; 504 + node->data = qnodes[i]; 505 + icc_node_add(node, provider); 506 + 507 + for (j = 0; j < qnodes[i]->num_links; j++) 508 + icc_link_create(node, qnodes[i]->links[j]); 509 + 510 + data->nodes[i] = node; 511 + } 512 + data->num_nodes = num_nodes; 513 + 514 + platform_set_drvdata(pdev, qp); 515 + 516 + return 0; 517 + 518 + err: 519 + icc_nodes_remove(provider); 520 + icc_provider_del(provider); 521 + clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 522 + 523 + return ret; 524 + } 525 + 526 + static int msm8916_qnoc_remove(struct platform_device *pdev) 527 + { 528 + struct msm8916_icc_provider *qp = platform_get_drvdata(pdev); 529 + 530 + icc_nodes_remove(&qp->provider); 531 + clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 532 + return icc_provider_del(&qp->provider); 533 + } 534 + 535 + static const struct of_device_id msm8916_noc_of_match[] = { 536 + { .compatible = "qcom,msm8916-bimc", .data = &msm8916_bimc }, 537 + { .compatible = "qcom,msm8916-pcnoc", .data = &msm8916_pcnoc }, 538 + { .compatible = "qcom,msm8916-snoc", .data = &msm8916_snoc }, 539 + { } 540 + }; 541 + MODULE_DEVICE_TABLE(of, msm8916_noc_of_match); 542 + 543 + static struct platform_driver msm8916_noc_driver = { 544 + .probe = msm8916_qnoc_probe, 545 + .remove = msm8916_qnoc_remove, 546 + .driver = { 547 + .name = 
"qnoc-msm8916", 548 + .of_match_table = msm8916_noc_of_match, 549 + }, 550 + }; 551 + module_platform_driver(msm8916_noc_driver); 552 + MODULE_AUTHOR("Georgi Djakov <georgi.djakov@linaro.org>"); 553 + MODULE_DESCRIPTION("Qualcomm MSM8916 NoC driver"); 554 + MODULE_LICENSE("GPL v2");
+7 -25
drivers/interconnect/qcom/msm8974.c
··· 550 550 .num_nodes = ARRAY_SIZE(msm8974_snoc_nodes), 551 551 }; 552 552 553 - static int msm8974_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw, 554 - u32 peak_bw, u32 *agg_avg, u32 *agg_peak) 555 - { 556 - *agg_avg += avg_bw; 557 - *agg_peak = max(*agg_peak, peak_bw); 558 - 559 - return 0; 560 - } 561 - 562 553 static void msm8974_icc_rpm_smd_send(struct device *dev, int rsc_type, 563 554 char *name, int id, u64 val) 564 555 { ··· 594 603 qp = to_msm8974_icc_provider(provider); 595 604 596 605 list_for_each_entry(n, &provider->nodes, node_list) 597 - msm8974_icc_aggregate(n, 0, n->avg_bw, n->peak_bw, 598 - &agg_avg, &agg_peak); 606 + provider->aggregate(n, 0, n->avg_bw, n->peak_bw, 607 + &agg_avg, &agg_peak); 599 608 600 609 sum_bw = icc_units_to_bps(agg_avg); 601 610 max_peak_bw = icc_units_to_bps(agg_peak); ··· 643 652 struct device *dev = &pdev->dev; 644 653 struct icc_onecell_data *data; 645 654 struct icc_provider *provider; 646 - struct icc_node *node, *tmp; 655 + struct icc_node *node; 647 656 size_t num_nodes, i; 648 657 int ret; 649 658 ··· 685 694 INIT_LIST_HEAD(&provider->nodes); 686 695 provider->dev = dev; 687 696 provider->set = msm8974_icc_set; 688 - provider->aggregate = msm8974_icc_aggregate; 697 + provider->aggregate = icc_std_aggregate; 689 698 provider->xlate = of_icc_xlate_onecell; 690 699 provider->data = data; 691 700 ··· 723 732 return 0; 724 733 725 734 err_del_icc: 726 - list_for_each_entry_safe(node, tmp, &provider->nodes, node_list) { 727 - icc_node_del(node); 728 - icc_node_destroy(node->id); 729 - } 735 + icc_nodes_remove(provider); 730 736 icc_provider_del(provider); 731 737 732 738 err_disable_clks: ··· 735 747 static int msm8974_icc_remove(struct platform_device *pdev) 736 748 { 737 749 struct msm8974_icc_provider *qp = platform_get_drvdata(pdev); 738 - struct icc_provider *provider = &qp->provider; 739 - struct icc_node *n, *tmp; 740 750 741 - list_for_each_entry_safe(n, tmp, &provider->nodes, node_list) { 742 - 
icc_node_del(n); 743 - icc_node_destroy(n->id); 744 - } 751 + icc_nodes_remove(&qp->provider); 745 752 clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 746 - 747 - return icc_provider_del(provider); 753 + return icc_provider_del(&qp->provider); 748 754 } 749 755 750 756 static const struct of_device_id msm8974_noc_of_match[] = {
+7 -25
drivers/interconnect/qcom/qcs404.c
··· 327 327 .num_nodes = ARRAY_SIZE(qcs404_snoc_nodes), 328 328 }; 329 329 330 - static int qcom_icc_aggregate(struct icc_node *node, u32 tag, u32 avg_bw, 331 - u32 peak_bw, u32 *agg_avg, u32 *agg_peak) 332 - { 333 - *agg_avg += avg_bw; 334 - *agg_peak = max(*agg_peak, peak_bw); 335 - 336 - return 0; 337 - } 338 - 339 330 static int qcom_icc_set(struct icc_node *src, struct icc_node *dst) 340 331 { 341 332 struct qcom_icc_provider *qp; ··· 345 354 qp = to_qcom_provider(provider); 346 355 347 356 list_for_each_entry(n, &provider->nodes, node_list) 348 - qcom_icc_aggregate(n, 0, n->avg_bw, n->peak_bw, 349 - &agg_avg, &agg_peak); 357 + provider->aggregate(n, 0, n->avg_bw, n->peak_bw, 358 + &agg_avg, &agg_peak); 350 359 351 360 sum_bw = icc_units_to_bps(agg_avg); 352 361 max_peak_bw = icc_units_to_bps(agg_peak); ··· 405 414 struct icc_provider *provider; 406 415 struct qcom_icc_node **qnodes; 407 416 struct qcom_icc_provider *qp; 408 - struct icc_node *node, *tmp; 417 + struct icc_node *node; 409 418 size_t num_nodes, i; 410 419 int ret; 411 420 ··· 447 456 INIT_LIST_HEAD(&provider->nodes); 448 457 provider->dev = dev; 449 458 provider->set = qcom_icc_set; 450 - provider->aggregate = qcom_icc_aggregate; 459 + provider->aggregate = icc_std_aggregate; 451 460 provider->xlate = of_icc_xlate_onecell; 452 461 provider->data = data; 453 462 ··· 485 494 486 495 return 0; 487 496 err: 488 - list_for_each_entry_safe(node, tmp, &provider->nodes, node_list) { 489 - icc_node_del(node); 490 - icc_node_destroy(node->id); 491 - } 497 + icc_nodes_remove(provider); 492 498 clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 493 499 icc_provider_del(provider); 494 500 ··· 495 507 static int qnoc_remove(struct platform_device *pdev) 496 508 { 497 509 struct qcom_icc_provider *qp = platform_get_drvdata(pdev); 498 - struct icc_provider *provider = &qp->provider; 499 - struct icc_node *n, *tmp; 500 510 501 - list_for_each_entry_safe(n, tmp, &provider->nodes, node_list) { 502 - 
icc_node_del(n); 503 - icc_node_destroy(n->id); 504 - } 511 + icc_nodes_remove(&qp->provider); 505 512 clk_bulk_disable_unprepare(qp->num_clks, qp->bus_clks); 506 - 507 - return icc_provider_del(provider); 513 + return icc_provider_del(&qp->provider); 508 514 } 509 515 510 516 static const struct of_device_id qcs404_noc_of_match[] = {
+3 -13
drivers/interconnect/qcom/sdm845.c
··· 855 855 856 856 return ret; 857 857 err: 858 - list_for_each_entry(node, &provider->nodes, node_list) { 859 - icc_node_del(node); 860 - icc_node_destroy(node->id); 861 - } 862 - 858 + icc_nodes_remove(provider); 863 859 icc_provider_del(provider); 864 860 return ret; 865 861 } ··· 863 867 static int qnoc_remove(struct platform_device *pdev) 864 868 { 865 869 struct qcom_icc_provider *qp = platform_get_drvdata(pdev); 866 - struct icc_provider *provider = &qp->provider; 867 - struct icc_node *n, *tmp; 868 870 869 - list_for_each_entry_safe(n, tmp, &provider->nodes, node_list) { 870 - icc_node_del(n); 871 - icc_node_destroy(n->id); 872 - } 873 - 874 - return icc_provider_del(provider); 871 + icc_nodes_remove(&qp->provider); 872 + return icc_provider_del(&qp->provider); 875 873 } 876 874 877 875 static const struct of_device_id qnoc_of_match[] = {
+88
drivers/interconnect/trace.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Interconnect framework tracepoints 4 + * Copyright (c) 2019, Linaro Ltd. 5 + * Author: Georgi Djakov <georgi.djakov@linaro.org> 6 + */ 7 + 8 + #undef TRACE_SYSTEM 9 + #define TRACE_SYSTEM interconnect 10 + 11 + #if !defined(_TRACE_INTERCONNECT_H) || defined(TRACE_HEADER_MULTI_READ) 12 + #define _TRACE_INTERCONNECT_H 13 + 14 + #include <linux/interconnect.h> 15 + #include <linux/tracepoint.h> 16 + 17 + TRACE_EVENT(icc_set_bw, 18 + 19 + TP_PROTO(struct icc_path *p, struct icc_node *n, int i, 20 + u32 avg_bw, u32 peak_bw), 21 + 22 + TP_ARGS(p, n, i, avg_bw, peak_bw), 23 + 24 + TP_STRUCT__entry( 25 + __string(path_name, p->name) 26 + __string(dev, dev_name(p->reqs[i].dev)) 27 + __string(node_name, n->name) 28 + __field(u32, avg_bw) 29 + __field(u32, peak_bw) 30 + __field(u32, node_avg_bw) 31 + __field(u32, node_peak_bw) 32 + ), 33 + 34 + TP_fast_assign( 35 + __assign_str(path_name, p->name); 36 + __assign_str(dev, dev_name(p->reqs[i].dev)); 37 + __assign_str(node_name, n->name); 38 + __entry->avg_bw = avg_bw; 39 + __entry->peak_bw = peak_bw; 40 + __entry->node_avg_bw = n->avg_bw; 41 + __entry->node_peak_bw = n->peak_bw; 42 + ), 43 + 44 + TP_printk("path=%s dev=%s node=%s avg_bw=%u peak_bw=%u agg_avg=%u agg_peak=%u", 45 + __get_str(path_name), 46 + __get_str(dev), 47 + __get_str(node_name), 48 + __entry->avg_bw, 49 + __entry->peak_bw, 50 + __entry->node_avg_bw, 51 + __entry->node_peak_bw) 52 + ); 53 + 54 + TRACE_EVENT(icc_set_bw_end, 55 + 56 + TP_PROTO(struct icc_path *p, int ret), 57 + 58 + TP_ARGS(p, ret), 59 + 60 + TP_STRUCT__entry( 61 + __string(path_name, p->name) 62 + __string(dev, dev_name(p->reqs[0].dev)) 63 + __field(int, ret) 64 + ), 65 + 66 + TP_fast_assign( 67 + __assign_str(path_name, p->name); 68 + __assign_str(dev, dev_name(p->reqs[0].dev)); 69 + __entry->ret = ret; 70 + ), 71 + 72 + TP_printk("path=%s dev=%s ret=%d", 73 + __get_str(path_name), 74 + __get_str(dev), 75 + __entry->ret) 76 + ); 77 + 
78 + #endif /* _TRACE_INTERCONNECT_H */ 79 + 80 + /* This part must be outside protection */ 81 + 82 + #undef TRACE_INCLUDE_PATH 83 + #define TRACE_INCLUDE_PATH . 84 + 85 + #undef TRACE_INCLUDE_FILE 86 + #define TRACE_INCLUDE_FILE trace 87 + 88 + #include <trace/define_trace.h>
+7 -1
drivers/misc/cardreader/alcor_pci.c
··· 38 38 .dma = 1, 39 39 }; 40 40 41 + static const struct alcor_dev_cfg au6625_cfg = { 42 + .dma = 0, 43 + }; 44 + 41 45 static const struct pci_device_id pci_ids[] = { 42 46 { PCI_DEVICE(PCI_ID_ALCOR_MICRO, PCI_ID_AU6601), 43 47 .driver_data = (kernel_ulong_t)&alcor_cfg }, 44 48 { PCI_DEVICE(PCI_ID_ALCOR_MICRO, PCI_ID_AU6621), 45 49 .driver_data = (kernel_ulong_t)&au6621_cfg }, 46 - { }, 50 + { PCI_DEVICE(PCI_ID_ALCOR_MICRO, PCI_ID_AU6625), 51 + .driver_data = (kernel_ulong_t)&au6625_cfg }, 52 + {}, 47 53 }; 48 54 MODULE_DEVICE_TABLE(pci, pci_ids); 49 55
+6 -5
drivers/misc/cardreader/rts5261.c
··· 628 628 u8 ssc_depth, bool initial_mode, bool double_clk, bool vpclk) 629 629 { 630 630 int err, clk; 631 - u8 n, clk_divider, mcu_cnt, div; 631 + u16 n; 632 + u8 clk_divider, mcu_cnt, div; 632 633 static const u8 depth[] = { 633 634 [RTSX_SSC_DEPTH_4M] = RTS5261_SSC_DEPTH_4M, 634 635 [RTSX_SSC_DEPTH_2M] = RTS5261_SSC_DEPTH_2M, ··· 662 661 return 0; 663 662 664 663 if (pcr->ops->conv_clk_and_div_n) 665 - n = (u8)pcr->ops->conv_clk_and_div_n(clk, CLK_TO_DIV_N); 664 + n = pcr->ops->conv_clk_and_div_n(clk, CLK_TO_DIV_N); 666 665 else 667 - n = (u8)(clk - 4); 666 + n = clk - 4; 668 667 if ((clk <= 4) || (n > 396)) 669 668 return -EINVAL; 670 669 671 - mcu_cnt = (u8)(125/clk + 3); 670 + mcu_cnt = 125/clk + 3; 672 671 if (mcu_cnt > 15) 673 672 mcu_cnt = 15; 674 673 ··· 677 676 if (pcr->ops->conv_clk_and_div_n) { 678 677 int dbl_clk = pcr->ops->conv_clk_and_div_n(n, 679 678 DIV_N_TO_CLK) * 2; 680 - n = (u8)pcr->ops->conv_clk_and_div_n(dbl_clk, 679 + n = pcr->ops->conv_clk_and_div_n(dbl_clk, 681 680 CLK_TO_DIV_N); 682 681 } else { 683 682 n = (n + 4) * 2 - 4;
+1 -1
drivers/misc/cxl/context.c
··· 352 352 void cxl_context_mm_count_get(struct cxl_context *ctx) 353 353 { 354 354 if (ctx->mm) 355 - atomic_inc(&ctx->mm->mm_count); 355 + mmgrab(ctx->mm); 356 356 } 357 357 358 358 void cxl_context_mm_count_put(struct cxl_context *ctx)
+4 -4
drivers/misc/genwqe/card_ddcb.c
··· 1084 1084 queue->ddcb_daddr); 1085 1085 queue->ddcb_vaddr = NULL; 1086 1086 queue->ddcb_daddr = 0ull; 1087 - return -ENODEV; 1087 + return rc; 1088 1088 1089 1089 } 1090 1090 ··· 1179 1179 */ 1180 1180 static int genwqe_card_thread(void *data) 1181 1181 { 1182 - int should_stop = 0, rc = 0; 1182 + int should_stop = 0; 1183 1183 struct genwqe_dev *cd = (struct genwqe_dev *)data; 1184 1184 1185 1185 while (!kthread_should_stop()) { ··· 1187 1187 genwqe_check_ddcb_queue(cd, &cd->queue); 1188 1188 1189 1189 if (GENWQE_POLLING_ENABLED) { 1190 - rc = wait_event_interruptible_timeout( 1190 + wait_event_interruptible_timeout( 1191 1191 cd->queue_waitq, 1192 1192 genwqe_ddcbs_in_flight(cd) || 1193 1193 (should_stop = kthread_should_stop()), 1); 1194 1194 } else { 1195 - rc = wait_event_interruptible_timeout( 1195 + wait_event_interruptible_timeout( 1196 1196 cd->queue_waitq, 1197 1197 genwqe_next_ddcb_ready(cd) || 1198 1198 (should_stop = kthread_should_stop()), HZ);
+1
drivers/misc/isl29020.c
··· 173 173 174 174 static int isl29020_remove(struct i2c_client *client) 175 175 { 176 + pm_runtime_disable(&client->dev); 176 177 sysfs_remove_group(&client->dev.kobj, &m_als_gr); 177 178 return 0; 178 179 }
+5 -5
drivers/misc/mei/bus.c
··· 765 765 struct mei_cl_device *cldev = to_mei_cl_device(dev); 766 766 const uuid_le *uuid = mei_me_cl_uuid(cldev->me_cl); 767 767 768 - return scnprintf(buf, PAGE_SIZE, "%pUl", uuid); 768 + return sprintf(buf, "%pUl", uuid); 769 769 } 770 770 static DEVICE_ATTR_RO(uuid); 771 771 ··· 775 775 struct mei_cl_device *cldev = to_mei_cl_device(dev); 776 776 u8 version = mei_me_cl_ver(cldev->me_cl); 777 777 778 - return scnprintf(buf, PAGE_SIZE, "%02X", version); 778 + return sprintf(buf, "%02X", version); 779 779 } 780 780 static DEVICE_ATTR_RO(version); 781 781 ··· 797 797 struct mei_cl_device *cldev = to_mei_cl_device(dev); 798 798 u8 maxconn = mei_me_cl_max_conn(cldev->me_cl); 799 799 800 - return scnprintf(buf, PAGE_SIZE, "%d", maxconn); 800 + return sprintf(buf, "%d", maxconn); 801 801 } 802 802 static DEVICE_ATTR_RO(max_conn); 803 803 ··· 807 807 struct mei_cl_device *cldev = to_mei_cl_device(dev); 808 808 u8 fixed = mei_me_cl_fixed(cldev->me_cl); 809 809 810 - return scnprintf(buf, PAGE_SIZE, "%d", fixed); 810 + return sprintf(buf, "%d", fixed); 811 811 } 812 812 static DEVICE_ATTR_RO(fixed); 813 813 ··· 817 817 struct mei_cl_device *cldev = to_mei_cl_device(dev); 818 818 u32 maxlen = mei_me_cl_max_len(cldev->me_cl); 819 819 820 - return scnprintf(buf, PAGE_SIZE, "%u", maxlen); 820 + return sprintf(buf, "%u", maxlen); 821 821 } 822 822 static DEVICE_ATTR_RO(max_len); 823 823
+30 -3
drivers/misc/mei/hdcp/mei_hdcp.c
··· 757 757 .unbind = mei_component_master_unbind, 758 758 }; 759 759 760 + /** 761 + * mei_hdcp_component_match - compare function for matching mei hdcp. 762 + * 763 + * The function checks if the driver is i915, the subcomponent is HDCP 764 + * and the grand parent of hdcp and the parent of i915 are the same 765 + * PCH device. 766 + * 767 + * @dev: master device 768 + * @subcomponent: subcomponent to match (I915_COMPONENT_HDCP) 769 + * @data: compare data (mei hdcp device) 770 + * 771 + * Return: 772 + * * 1 - if components match 773 + * * 0 - otherwise 774 + */ 760 775 static int mei_hdcp_component_match(struct device *dev, int subcomponent, 761 776 void *data) 762 777 { 763 - return !strcmp(dev->driver->name, "i915") && 764 - subcomponent == I915_COMPONENT_HDCP; 778 + struct device *base = data; 779 + 780 + if (strcmp(dev->driver->name, "i915") || 781 + subcomponent != I915_COMPONENT_HDCP) 782 + return 0; 783 + 784 + base = base->parent; 785 + if (!base) 786 + return 0; 787 + 788 + base = base->parent; 789 + dev = dev->parent; 790 + 791 + return (base && dev && dev == base); 765 792 } 766 793 767 794 static int mei_hdcp_probe(struct mei_cl_device *cldev, ··· 812 785 813 786 master_match = NULL; 814 787 component_match_add_typed(&cldev->dev, &master_match, 815 - mei_hdcp_component_match, comp_master); 788 + mei_hdcp_component_match, &cldev->dev); 816 789 if (IS_ERR_OR_NULL(master_match)) { 817 790 ret = -ENOMEM; 818 791 goto err_exit;
+6
drivers/misc/mei/hw-me-regs.h
··· 81 81 82 82 #define MEI_DEV_ID_CMP_LP 0x02e0 /* Comet Point LP */ 83 83 #define MEI_DEV_ID_CMP_LP_3 0x02e4 /* Comet Point LP 3 (iTouch) */ 84 + 84 85 #define MEI_DEV_ID_CMP_V 0xA3BA /* Comet Point Lake V */ 85 86 87 + #define MEI_DEV_ID_CMP_H 0x06e0 /* Comet Lake H */ 88 + #define MEI_DEV_ID_CMP_H_3 0x06e4 /* Comet Lake H 3 (iTouch) */ 89 + 86 90 #define MEI_DEV_ID_ICP_LP 0x34E0 /* Ice Lake Point LP */ 91 + 92 + #define MEI_DEV_ID_JSP_N 0x4DE0 /* Jasper Lake Point N */ 87 93 88 94 #define MEI_DEV_ID_TGP_LP 0xA0E0 /* Tiger Lake Point LP */ 89 95
+4
drivers/misc/mei/pci-me.c
··· 99 99 {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP, MEI_ME_PCH12_CFG)}, 100 100 {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_LP_3, MEI_ME_PCH8_CFG)}, 101 101 {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_V, MEI_ME_PCH12_CFG)}, 102 + {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H, MEI_ME_PCH12_CFG)}, 103 + {MEI_PCI_DEVICE(MEI_DEV_ID_CMP_H_3, MEI_ME_PCH8_CFG)}, 102 104 103 105 {MEI_PCI_DEVICE(MEI_DEV_ID_ICP_LP, MEI_ME_PCH12_CFG)}, 104 106 105 107 {MEI_PCI_DEVICE(MEI_DEV_ID_TGP_LP, MEI_ME_PCH15_CFG)}, 108 + 109 + {MEI_PCI_DEVICE(MEI_DEV_ID_JSP_N, MEI_ME_PCH15_CFG)}, 106 110 107 111 {MEI_PCI_DEVICE(MEI_DEV_ID_MCC, MEI_ME_PCH15_CFG)}, 108 112 {MEI_PCI_DEVICE(MEI_DEV_ID_MCC_4, MEI_ME_PCH8_CFG)},
-3
drivers/misc/mic/card/mic_debugfs.c
··· 65 65 */ 66 66 void mic_delete_card_debug_dir(struct mic_driver *mdrv) 67 67 { 68 - if (!mdrv->dbg_dir) 69 - return; 70 - 71 68 debugfs_remove_recursive(mdrv->dbg_dir); 72 69 } 73 70
-3
drivers/misc/mic/cosm/cosm_debugfs.c
··· 102 102 103 103 void cosm_delete_debug_dir(struct cosm_device *cdev) 104 104 { 105 - if (!cdev->dbg_dir) 106 - return; 107 - 108 105 debugfs_remove_recursive(cdev->dbg_dir); 109 106 } 110 107
-3
drivers/misc/mic/host/mic_debugfs.c
··· 129 129 */ 130 130 void mic_delete_debug_dir(struct mic_device *mdev) 131 131 { 132 - if (!mdev->dbg_dir) 133 - return; 134 - 135 132 debugfs_remove_recursive(mdev->dbg_dir); 136 133 } 137 134
+2 -2
drivers/misc/pti.c
··· 792 792 const struct pci_device_id *ent) 793 793 { 794 794 unsigned int a; 795 - int retval = -EINVAL; 795 + int retval; 796 796 int pci_bar = 1; 797 797 798 798 dev_dbg(&pdev->dev, "%s %s(%d): PTI PCI ID %04x:%04x\n", __FILE__, ··· 910 910 */ 911 911 static int __init pti_init(void) 912 912 { 913 - int retval = -EINVAL; 913 + int retval; 914 914 915 915 /* First register module as tty device */ 916 916
+9 -3
drivers/misc/pvpanic.c
··· 10 10 11 11 #include <linux/acpi.h> 12 12 #include <linux/kernel.h> 13 + #include <linux/kexec.h> 13 14 #include <linux/module.h> 14 15 #include <linux/of.h> 15 16 #include <linux/of_address.h> 16 17 #include <linux/platform_device.h> 17 18 #include <linux/types.h> 19 + #include <uapi/misc/pvpanic.h> 18 20 19 21 static void __iomem *base; 20 - 21 - #define PVPANIC_PANICKED (1 << 0) 22 22 23 23 MODULE_AUTHOR("Hu Tao <hutao@cn.fujitsu.com>"); 24 24 MODULE_DESCRIPTION("pvpanic device driver"); ··· 34 34 pvpanic_panic_notify(struct notifier_block *nb, unsigned long code, 35 35 void *unused) 36 36 { 37 - pvpanic_send_event(PVPANIC_PANICKED); 37 + unsigned int event = PVPANIC_PANICKED; 38 + 39 + if (kexec_crash_loaded()) 40 + event = PVPANIC_CRASH_LOADED; 41 + 42 + pvpanic_send_event(event); 43 + 38 44 return NOTIFY_DONE; 39 45 } 40 46
+17 -4
drivers/misc/sram-exec.c
··· 85 85 unsigned long base; 86 86 int pages; 87 87 void *dst_cpy; 88 + int ret; 88 89 89 90 mutex_lock(&exec_pool_list_mutex); 90 91 list_for_each_entry(p, &exec_pool_list, list) { ··· 105 104 106 105 mutex_lock(&part->lock); 107 106 108 - set_memory_nx((unsigned long)base, pages); 109 - set_memory_rw((unsigned long)base, pages); 107 + ret = set_memory_nx((unsigned long)base, pages); 108 + if (ret) 109 + goto error_out; 110 + ret = set_memory_rw((unsigned long)base, pages); 111 + if (ret) 112 + goto error_out; 110 113 111 114 dst_cpy = fncpy(dst, src, size); 112 115 113 - set_memory_ro((unsigned long)base, pages); 114 - set_memory_x((unsigned long)base, pages); 116 + ret = set_memory_ro((unsigned long)base, pages); 117 + if (ret) 118 + goto error_out; 119 + ret = set_memory_x((unsigned long)base, pages); 120 + if (ret) 121 + goto error_out; 115 122 116 123 mutex_unlock(&part->lock); 117 124 118 125 return dst_cpy; 126 + 127 + error_out: 128 + mutex_unlock(&part->lock); 129 + return NULL; 119 130 } 120 131 EXPORT_SYMBOL_GPL(sram_exec_copy);
+2 -2
drivers/misc/ti-st/st_core.c
··· 736 736 737 737 static void st_tty_close(struct tty_struct *tty) 738 738 { 739 - unsigned char i = ST_MAX_CHANNELS; 740 - unsigned long flags = 0; 739 + unsigned char i; 740 + unsigned long flags; 741 741 struct st_data_s *st_gdata = tty->disc_data; 742 742 743 743 pr_info("%s ", __func__);
+5 -7
drivers/misc/tsl2550.c
··· 148 148 u16 c0 = count_lut[ch0]; 149 149 u16 c1 = count_lut[ch1]; 150 150 151 - /* 152 - * Calculate ratio. 153 - * Note: the "128" is a scaling factor 154 - */ 155 - u8 r = 128; 156 - 157 151 /* Avoid division by 0 and count 1 cannot be greater than count 0 */ 158 152 if (c1 <= c0) 159 153 if (c0) { 160 - r = c1 * 128 / c0; 154 + /* 155 + * Calculate ratio. 156 + * Note: the "128" is a scaling factor 157 + */ 158 + u8 r = c1 * 128 / c0; 161 159 162 160 /* Calculate LUX */ 163 161 lux = ((c0 - c1) * ratio_lut[r]) / 256;
+6 -6
drivers/misc/xilinx_sdfec.c
··· 733 733 enum xsdfec_order order; 734 734 int err; 735 735 736 - err = get_user(order, (enum xsdfec_order *)arg); 736 + err = get_user(order, (enum xsdfec_order __user *)arg); 737 737 if (err) 738 738 return -EFAULT; 739 739 ··· 1025 1025 } 1026 1026 #endif 1027 1027 1028 - static unsigned int xsdfec_poll(struct file *file, poll_table *wait) 1028 + static __poll_t xsdfec_poll(struct file *file, poll_table *wait) 1029 1029 { 1030 - unsigned int mask = 0; 1030 + __poll_t mask = 0; 1031 1031 struct xsdfec_dev *xsdfec; 1032 1032 1033 1033 xsdfec = container_of(file->private_data, struct xsdfec_dev, miscdev); 1034 1034 1035 1035 if (!xsdfec) 1036 - return POLLNVAL | POLLHUP; 1036 + return EPOLLNVAL | EPOLLHUP; 1037 1037 1038 1038 poll_wait(file, &xsdfec->waitq, wait); 1039 1039 1040 1040 /* XSDFEC ISR detected an error */ 1041 1041 spin_lock_irqsave(&xsdfec->error_data_lock, xsdfec->flags); 1042 1042 if (xsdfec->state_updated) 1043 - mask |= POLLIN | POLLPRI; 1043 + mask |= EPOLLIN | EPOLLPRI; 1044 1044 1045 1045 if (xsdfec->stats_updated) 1046 - mask |= POLLIN | POLLRDNORM; 1046 + mask |= EPOLLIN | EPOLLRDNORM; 1047 1047 spin_unlock_irqrestore(&xsdfec->error_data_lock, xsdfec->flags); 1048 1048 1049 1049 return mask;
+8
drivers/nvmem/Kconfig
··· 109 109 This driver can also be built as a module. If so, the module 110 110 will be called nvmem_qfprom. 111 111 112 + config NVMEM_SPMI_SDAM 113 + tristate "SPMI SDAM Support" 114 + depends on SPMI 115 + help 116 + This driver supports the Shared Direct Access Memory Module on 117 + Qualcomm Technologies, Inc. PMICs. It provides the clients 118 + an interface to read/write to the SDAM module's shared memory. 119 + 112 120 config ROCKCHIP_EFUSE 113 121 tristate "Rockchip eFuse Support" 114 122 depends on ARCH_ROCKCHIP || COMPILE_TEST
+2
drivers/nvmem/Makefile
··· 28 28 nvmem_mtk-efuse-y := mtk-efuse.o 29 29 obj-$(CONFIG_QCOM_QFPROM) += nvmem_qfprom.o 30 30 nvmem_qfprom-y := qfprom.o 31 + obj-$(CONFIG_NVMEM_SPMI_SDAM) += nvmem_qcom-spmi-sdam.o 32 + nvmem_qcom-spmi-sdam-y += qcom-spmi-sdam.o 31 33 obj-$(CONFIG_ROCKCHIP_EFUSE) += nvmem_rockchip_efuse.o 32 34 nvmem_rockchip_efuse-y := rockchip-efuse.o 33 35 obj-$(CONFIG_ROCKCHIP_OTP) += nvmem-rockchip-otp.o
+5 -3
drivers/nvmem/core.c
··· 83 83 list_del(&cell->node); 84 84 mutex_unlock(&nvmem_mutex); 85 85 of_node_put(cell->np); 86 - kfree(cell->name); 86 + kfree_const(cell->name); 87 87 kfree(cell); 88 88 } 89 89 ··· 110 110 cell->nvmem = nvmem; 111 111 cell->offset = info->offset; 112 112 cell->bytes = info->bytes; 113 - cell->name = info->name; 113 + cell->name = kstrdup_const(info->name, GFP_KERNEL); 114 + if (!cell->name) 115 + return -ENOMEM; 114 116 115 117 cell->bit_offset = info->bit_offset; 116 118 cell->nbits = info->nbits; ··· 302 300 dev_err(dev, "cell %s unaligned to nvmem stride %d\n", 303 301 cell->name, nvmem->stride); 304 302 /* Cells already added will be freed later. */ 305 - kfree(cell->name); 303 + kfree_const(cell->name); 306 304 kfree(cell); 307 305 return -EINVAL; 308 306 }
+7 -9
drivers/nvmem/imx-ocotp-scu.c
··· 15 15 #include <linux/platform_device.h> 16 16 #include <linux/slab.h> 17 17 18 - #define IMX_SIP_OTP 0xC200000A 19 - #define IMX_SIP_OTP_WRITE 0x2 18 + #define IMX_SIP_OTP_WRITE 0xc200000B 20 19 21 20 enum ocotp_devtype { 22 21 IMX8QXP, ··· 138 139 void *p; 139 140 int i, ret; 140 141 141 - index = offset >> 2; 142 - num_bytes = round_up((offset % 4) + bytes, 4); 142 + index = offset; 143 + num_bytes = round_up(bytes, 4); 143 144 count = num_bytes >> 2; 144 145 145 146 if (count > (priv->data->nregs - index)) ··· 168 169 buf++; 169 170 } 170 171 171 - memcpy(val, (u8 *)p + offset % 4, bytes); 172 + memcpy(val, (u8 *)p, bytes); 172 173 173 174 mutex_unlock(&scu_ocotp_mutex); 174 175 ··· 188 189 int ret; 189 190 190 191 /* allow only writing one complete OTP word at a time */ 191 - if ((bytes != 4) || (offset % 4)) 192 + if (bytes != 4) 192 193 return -EINVAL; 193 194 194 - index = offset >> 2; 195 + index = offset; 195 196 196 197 if (in_hole(context, index)) 197 198 return -EINVAL; ··· 211 212 212 213 mutex_lock(&scu_ocotp_mutex); 213 214 214 - arm_smccc_smc(IMX_SIP_OTP, IMX_SIP_OTP_WRITE, index, *buf, 215 - 0, 0, 0, 0, &res); 215 + arm_smccc_smc(IMX_SIP_OTP_WRITE, index, *buf, 0, 0, 0, 0, 0, &res); 216 216 217 217 mutex_unlock(&scu_ocotp_mutex); 218 218
+57 -22
drivers/nvmem/imx-ocotp.c
··· 44 44 #define IMX_OCOTP_BM_CTRL_ERROR 0x00000200 45 45 #define IMX_OCOTP_BM_CTRL_REL_SHADOWS 0x00000400 46 46 47 + #define IMX_OCOTP_BM_CTRL_DEFAULT \ 48 + { \ 49 + .bm_addr = IMX_OCOTP_BM_CTRL_ADDR, \ 50 + .bm_busy = IMX_OCOTP_BM_CTRL_BUSY, \ 51 + .bm_error = IMX_OCOTP_BM_CTRL_ERROR, \ 52 + .bm_rel_shadows = IMX_OCOTP_BM_CTRL_REL_SHADOWS,\ 53 + } 54 + 47 55 #define TIMING_STROBE_PROG_US 10 /* Min time to blow a fuse */ 48 56 #define TIMING_STROBE_READ_NS 37 /* Min time before read */ 49 57 #define TIMING_RELAX_NS 17 ··· 70 62 struct nvmem_config *config; 71 63 }; 72 64 65 + struct ocotp_ctrl_reg { 66 + u32 bm_addr; 67 + u32 bm_busy; 68 + u32 bm_error; 69 + u32 bm_rel_shadows; 70 + }; 71 + 73 72 struct ocotp_params { 74 73 unsigned int nregs; 75 74 unsigned int bank_address_words; 76 75 void (*set_timing)(struct ocotp_priv *priv); 76 + struct ocotp_ctrl_reg ctrl; 77 77 }; 78 78 79 - static int imx_ocotp_wait_for_busy(void __iomem *base, u32 flags) 79 + static int imx_ocotp_wait_for_busy(struct ocotp_priv *priv, u32 flags) 80 80 { 81 81 int count; 82 82 u32 c, mask; 83 + u32 bm_ctrl_busy, bm_ctrl_error; 84 + void __iomem *base = priv->base; 83 85 84 - mask = IMX_OCOTP_BM_CTRL_BUSY | IMX_OCOTP_BM_CTRL_ERROR | flags; 86 + bm_ctrl_busy = priv->params->ctrl.bm_busy; 87 + bm_ctrl_error = priv->params->ctrl.bm_error; 88 + 89 + mask = bm_ctrl_busy | bm_ctrl_error | flags; 85 90 86 91 for (count = 10000; count >= 0; count--) { 87 92 c = readl(base + IMX_OCOTP_ADDR_CTRL); ··· 118 97 * - A read is performed to from a fuse word which has been read 119 98 * locked. 
120 99 */ 121 - if (c & IMX_OCOTP_BM_CTRL_ERROR) 100 + if (c & bm_ctrl_error) 122 101 return -EPERM; 123 102 return -ETIMEDOUT; 124 103 } ··· 126 105 return 0; 127 106 } 128 107 129 - static void imx_ocotp_clr_err_if_set(void __iomem *base) 108 + static void imx_ocotp_clr_err_if_set(struct ocotp_priv *priv) 130 109 { 131 - u32 c; 110 + u32 c, bm_ctrl_error; 111 + void __iomem *base = priv->base; 112 + 113 + bm_ctrl_error = priv->params->ctrl.bm_error; 132 114 133 115 c = readl(base + IMX_OCOTP_ADDR_CTRL); 134 - if (!(c & IMX_OCOTP_BM_CTRL_ERROR)) 116 + if (!(c & bm_ctrl_error)) 135 117 return; 136 118 137 - writel(IMX_OCOTP_BM_CTRL_ERROR, base + IMX_OCOTP_ADDR_CTRL_CLR); 119 + writel(bm_ctrl_error, base + IMX_OCOTP_ADDR_CTRL_CLR); 138 120 } 139 121 140 122 static int imx_ocotp_read(void *context, unsigned int offset, ··· 164 140 return ret; 165 141 } 166 142 167 - ret = imx_ocotp_wait_for_busy(priv->base, 0); 143 + ret = imx_ocotp_wait_for_busy(priv, 0); 168 144 if (ret < 0) { 169 145 dev_err(priv->dev, "timeout during read setup\n"); 170 146 goto read_end; ··· 181 157 * issued 182 158 */ 183 159 if (*(buf - 1) == IMX_OCOTP_READ_LOCKED_VAL) 184 - imx_ocotp_clr_err_if_set(priv->base); 160 + imx_ocotp_clr_err_if_set(priv); 185 161 } 186 162 ret = 0; 187 163 ··· 298 274 * write or reload must be completed before a write access can be 299 275 * requested. 300 276 */ 301 - ret = imx_ocotp_wait_for_busy(priv->base, 0); 277 + ret = imx_ocotp_wait_for_busy(priv, 0); 302 278 if (ret < 0) { 303 279 dev_err(priv->dev, "timeout during timing setup\n"); 304 280 goto write_end; ··· 330 306 } 331 307 332 308 ctrl = readl(priv->base + IMX_OCOTP_ADDR_CTRL); 333 - ctrl &= ~IMX_OCOTP_BM_CTRL_ADDR; 334 - ctrl |= waddr & IMX_OCOTP_BM_CTRL_ADDR; 309 + ctrl &= ~priv->params->ctrl.bm_addr; 310 + ctrl |= waddr & priv->params->ctrl.bm_addr; 335 311 ctrl |= IMX_OCOTP_WR_UNLOCK; 336 312 337 313 writel(ctrl, priv->base + IMX_OCOTP_ADDR_CTRL); ··· 398 374 * be set. 
It must be cleared by software before any new write access 399 375 * can be issued. 400 376 */ 401 - ret = imx_ocotp_wait_for_busy(priv->base, 0); 377 + ret = imx_ocotp_wait_for_busy(priv, 0); 402 378 if (ret < 0) { 403 379 if (ret == -EPERM) { 404 380 dev_err(priv->dev, "failed write to locked region"); 405 - imx_ocotp_clr_err_if_set(priv->base); 381 + imx_ocotp_clr_err_if_set(priv); 406 382 } else { 407 383 dev_err(priv->dev, "timeout during data write\n"); 408 384 } ··· 418 394 udelay(2); 419 395 420 396 /* reload all shadow registers */ 421 - writel(IMX_OCOTP_BM_CTRL_REL_SHADOWS, 397 + writel(priv->params->ctrl.bm_rel_shadows, 422 398 priv->base + IMX_OCOTP_ADDR_CTRL_SET); 423 - ret = imx_ocotp_wait_for_busy(priv->base, 424 - IMX_OCOTP_BM_CTRL_REL_SHADOWS); 399 + ret = imx_ocotp_wait_for_busy(priv, 400 + priv->params->ctrl.bm_rel_shadows); 425 401 if (ret < 0) { 426 402 dev_err(priv->dev, "timeout during shadow register reload\n"); 427 403 goto write_end; ··· 448 424 .nregs = 128, 449 425 .bank_address_words = 0, 450 426 .set_timing = imx_ocotp_set_imx6_timing, 427 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 451 428 }; 452 429 453 430 static const struct ocotp_params imx6sl_params = { 454 431 .nregs = 64, 455 432 .bank_address_words = 0, 456 433 .set_timing = imx_ocotp_set_imx6_timing, 434 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 457 435 }; 458 436 459 437 static const struct ocotp_params imx6sll_params = { 460 438 .nregs = 128, 461 439 .bank_address_words = 0, 462 440 .set_timing = imx_ocotp_set_imx6_timing, 441 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 463 442 }; 464 443 465 444 static const struct ocotp_params imx6sx_params = { 466 445 .nregs = 128, 467 446 .bank_address_words = 0, 468 447 .set_timing = imx_ocotp_set_imx6_timing, 448 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 469 449 }; 470 450 471 451 static const struct ocotp_params imx6ul_params = { 472 452 .nregs = 128, 473 453 .bank_address_words = 0, 474 454 .set_timing = imx_ocotp_set_imx6_timing, 455 + .ctrl = 
IMX_OCOTP_BM_CTRL_DEFAULT, 475 456 }; 476 457 477 458 static const struct ocotp_params imx6ull_params = { 478 459 .nregs = 64, 479 460 .bank_address_words = 0, 480 461 .set_timing = imx_ocotp_set_imx6_timing, 462 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 481 463 }; 482 464 483 465 static const struct ocotp_params imx7d_params = { 484 466 .nregs = 64, 485 467 .bank_address_words = 4, 486 468 .set_timing = imx_ocotp_set_imx7_timing, 469 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 487 470 }; 488 471 489 472 static const struct ocotp_params imx7ulp_params = { 490 473 .nregs = 256, 491 474 .bank_address_words = 0, 475 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 492 476 }; 493 477 494 478 static const struct ocotp_params imx8mq_params = { 495 479 .nregs = 256, 496 480 .bank_address_words = 0, 497 481 .set_timing = imx_ocotp_set_imx6_timing, 482 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 498 483 }; 499 484 500 485 static const struct ocotp_params imx8mm_params = { 501 486 .nregs = 256, 502 487 .bank_address_words = 0, 503 488 .set_timing = imx_ocotp_set_imx6_timing, 489 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 504 490 }; 505 491 506 492 static const struct ocotp_params imx8mn_params = { 507 493 .nregs = 256, 508 494 .bank_address_words = 0, 509 495 .set_timing = imx_ocotp_set_imx6_timing, 496 + .ctrl = IMX_OCOTP_BM_CTRL_DEFAULT, 510 497 }; 511 498 512 499 static const struct of_device_id imx_ocotp_dt_ids[] = { ··· 556 521 if (IS_ERR(priv->clk)) 557 522 return PTR_ERR(priv->clk); 558 523 559 - clk_prepare_enable(priv->clk); 560 - imx_ocotp_clr_err_if_set(priv->base); 561 - clk_disable_unprepare(priv->clk); 562 - 563 524 priv->params = of_device_get_match_data(&pdev->dev); 564 525 imx_ocotp_nvmem_config.size = 4 * priv->params->nregs; 565 526 imx_ocotp_nvmem_config.dev = dev; 566 527 imx_ocotp_nvmem_config.priv = priv; 567 528 priv->config = &imx_ocotp_nvmem_config; 568 - nvmem = devm_nvmem_register(dev, &imx_ocotp_nvmem_config); 569 529 530 + clk_prepare_enable(priv->clk); 531 + 
imx_ocotp_clr_err_if_set(priv); 532 + clk_disable_unprepare(priv->clk); 533 + 534 + nvmem = devm_nvmem_register(dev, &imx_ocotp_nvmem_config); 570 535 571 536 return PTR_ERR_OR_ZERO(nvmem); 572 537 }
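For reference, the core change in this imx-ocotp diff is that the busy/error bitmasks move from fixed `IMX_OCOTP_BM_CTRL_*` defines into a per-SoC `ocotp_ctrl_reg` inside `ocotp_params`, so `imx_ocotp_wait_for_busy()` now takes the `priv` and looks the masks up. Below is a stand-alone user-space sketch of that parameterized polling loop; the struct fields mirror the diff, but the register values and the `read_reg` callback are illustrative, not from any i.MX datasheet.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/*
 * Illustrative stand-in for the per-SoC mask struct the patch introduces
 * (struct ocotp_ctrl_reg); values here are made up for the sketch.
 */
struct ctrl_layout {
	uint32_t bm_busy;
	uint32_t bm_error;
};

static uint32_t read_u32(void *cookie)
{
	return *(uint32_t *)cookie;	/* fake MMIO read for the sketch */
}

/*
 * Mirrors the imx_ocotp_wait_for_busy() flow: spin until busy/error (plus
 * any caller-supplied flags) deassert; on exhaustion, a latched error bit
 * maps to -EPERM, anything else is a plain timeout.
 */
static int wait_for_busy(const struct ctrl_layout *c,
			 uint32_t (*read_reg)(void *), void *cookie,
			 uint32_t flags, int max_polls)
{
	uint32_t mask = c->bm_busy | c->bm_error | flags;
	uint32_t v = 0;
	int count;

	for (count = max_polls; count >= 0; count--) {
		v = read_reg(cookie);
		if (!(v & mask))
			break;
	}

	if (count < 0) {
		if (v & c->bm_error)
			return -EPERM;
		return -ETIMEDOUT;
	}

	return 0;
}
```

Because the masks travel with `ocotp_params`, a future SoC with a different CTRL layout only needs a new table entry, not new code paths.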
+192
drivers/nvmem/qcom-spmi-sdam.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2017 The Linux Foundation. All rights reserved. 4 + */ 5 + 6 + #include <linux/device.h> 7 + #include <linux/module.h> 8 + #include <linux/of.h> 9 + #include <linux/of_platform.h> 10 + #include <linux/nvmem-provider.h> 11 + #include <linux/regmap.h> 12 + 13 + #define SDAM_MEM_START 0x40 14 + #define REGISTER_MAP_ID 0x40 15 + #define REGISTER_MAP_VERSION 0x41 16 + #define SDAM_SIZE 0x44 17 + #define SDAM_PBS_TRIG_SET 0xE5 18 + #define SDAM_PBS_TRIG_CLR 0xE6 19 + 20 + struct sdam_chip { 21 + struct platform_device *pdev; 22 + struct regmap *regmap; 23 + struct nvmem_config sdam_config; 24 + unsigned int base; 25 + unsigned int size; 26 + }; 27 + 28 + /* read only register offsets */ 29 + static const u8 sdam_ro_map[] = { 30 + REGISTER_MAP_ID, 31 + REGISTER_MAP_VERSION, 32 + SDAM_SIZE 33 + }; 34 + 35 + static bool sdam_is_valid(struct sdam_chip *sdam, unsigned int offset, 36 + size_t len) 37 + { 38 + unsigned int sdam_mem_end = SDAM_MEM_START + sdam->size - 1; 39 + 40 + if (!len) 41 + return false; 42 + 43 + if (offset >= SDAM_MEM_START && offset <= sdam_mem_end 44 + && (offset + len - 1) <= sdam_mem_end) 45 + return true; 46 + else if ((offset == SDAM_PBS_TRIG_SET || offset == SDAM_PBS_TRIG_CLR) 47 + && (len == 1)) 48 + return true; 49 + 50 + return false; 51 + } 52 + 53 + static bool sdam_is_ro(unsigned int offset, size_t len) 54 + { 55 + int i; 56 + 57 + for (i = 0; i < ARRAY_SIZE(sdam_ro_map); i++) 58 + if (offset <= sdam_ro_map[i] && (offset + len) > sdam_ro_map[i]) 59 + return true; 60 + 61 + return false; 62 + } 63 + 64 + static int sdam_read(void *priv, unsigned int offset, void *val, 65 + size_t bytes) 66 + { 67 + struct sdam_chip *sdam = priv; 68 + struct device *dev = &sdam->pdev->dev; 69 + int rc; 70 + 71 + if (!sdam_is_valid(sdam, offset, bytes)) { 72 + dev_err(dev, "Invalid SDAM offset %#x len=%zd\n", 73 + offset, bytes); 74 + return -EINVAL; 75 + } 76 + 77 + rc = 
regmap_bulk_read(sdam->regmap, sdam->base + offset, val, bytes); 78 + if (rc < 0) 79 + dev_err(dev, "Failed to read SDAM offset %#x len=%zd, rc=%d\n", 80 + offset, bytes, rc); 81 + 82 + return rc; 83 + } 84 + 85 + static int sdam_write(void *priv, unsigned int offset, void *val, 86 + size_t bytes) 87 + { 88 + struct sdam_chip *sdam = priv; 89 + struct device *dev = &sdam->pdev->dev; 90 + int rc; 91 + 92 + if (!sdam_is_valid(sdam, offset, bytes)) { 93 + dev_err(dev, "Invalid SDAM offset %#x len=%zd\n", 94 + offset, bytes); 95 + return -EINVAL; 96 + } 97 + 98 + if (sdam_is_ro(offset, bytes)) { 99 + dev_err(dev, "Invalid write offset %#x len=%zd\n", 100 + offset, bytes); 101 + return -EINVAL; 102 + } 103 + 104 + rc = regmap_bulk_write(sdam->regmap, sdam->base + offset, val, bytes); 105 + if (rc < 0) 106 + dev_err(dev, "Failed to write SDAM offset %#x len=%zd, rc=%d\n", 107 + offset, bytes, rc); 108 + 109 + return rc; 110 + } 111 + 112 + static int sdam_probe(struct platform_device *pdev) 113 + { 114 + struct sdam_chip *sdam; 115 + struct nvmem_device *nvmem; 116 + unsigned int val; 117 + int rc; 118 + 119 + sdam = devm_kzalloc(&pdev->dev, sizeof(*sdam), GFP_KERNEL); 120 + if (!sdam) 121 + return -ENOMEM; 122 + 123 + sdam->regmap = dev_get_regmap(pdev->dev.parent, NULL); 124 + if (!sdam->regmap) { 125 + dev_err(&pdev->dev, "Failed to get regmap handle\n"); 126 + return -ENXIO; 127 + } 128 + 129 + rc = of_property_read_u32(pdev->dev.of_node, "reg", &sdam->base); 130 + if (rc < 0) { 131 + dev_err(&pdev->dev, "Failed to get SDAM base, rc=%d\n", rc); 132 + return -EINVAL; 133 + } 134 + 135 + rc = regmap_read(sdam->regmap, sdam->base + SDAM_SIZE, &val); 136 + if (rc < 0) { 137 + dev_err(&pdev->dev, "Failed to read SDAM_SIZE rc=%d\n", rc); 138 + return -EINVAL; 139 + } 140 + sdam->size = val * 32; 141 + 142 + sdam->sdam_config.dev = &pdev->dev; 143 + sdam->sdam_config.name = "spmi_sdam"; 144 + sdam->sdam_config.id = pdev->id; 145 + sdam->sdam_config.owner = THIS_MODULE, 146 
+ sdam->sdam_config.stride = 1; 147 + sdam->sdam_config.word_size = 1; 148 + sdam->sdam_config.reg_read = sdam_read; 149 + sdam->sdam_config.reg_write = sdam_write; 150 + sdam->sdam_config.priv = sdam; 151 + 152 + nvmem = devm_nvmem_register(&pdev->dev, &sdam->sdam_config); 153 + if (IS_ERR(nvmem)) { 154 + dev_err(&pdev->dev, 155 + "Failed to register SDAM nvmem device rc=%ld\n", 156 + PTR_ERR(nvmem)); 157 + return -ENXIO; 158 + } 159 + dev_dbg(&pdev->dev, 160 + "SDAM base=%#x size=%u registered successfully\n", 161 + sdam->base, sdam->size); 162 + 163 + return 0; 164 + } 165 + 166 + static const struct of_device_id sdam_match_table[] = { 167 + { .compatible = "qcom,spmi-sdam" }, 168 + {}, 169 + }; 170 + 171 + static struct platform_driver sdam_driver = { 172 + .driver = { 173 + .name = "qcom,spmi-sdam", 174 + .of_match_table = sdam_match_table, 175 + }, 176 + .probe = sdam_probe, 177 + }; 178 + 179 + static int __init sdam_init(void) 180 + { 181 + return platform_driver_register(&sdam_driver); 182 + } 183 + subsys_initcall(sdam_init); 184 + 185 + static void __exit sdam_exit(void) 186 + { 187 + return platform_driver_unregister(&sdam_driver); 188 + } 189 + module_exit(sdam_exit); 190 + 191 + MODULE_DESCRIPTION("QCOM SPMI SDAM driver"); 192 + MODULE_LICENSE("GPL v2");
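The access-validation logic in the new SDAM driver is self-contained enough to lift out and exercise directly: an access is legal if it falls entirely inside `[SDAM_MEM_START, SDAM_MEM_START + size - 1]`, or is a single-byte poke at one of the two PBS trigger registers. This is a user-space extraction of `sdam_is_valid()` with the size passed as a plain argument instead of via `struct sdam_chip`; the constants match the diff.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define SDAM_MEM_START		0x40
#define SDAM_PBS_TRIG_SET	0xE5
#define SDAM_PBS_TRIG_CLR	0xE6

/* Same checks as the driver's sdam_is_valid(), minus the chip struct. */
static bool sdam_is_valid(unsigned int size, unsigned int offset, size_t len)
{
	unsigned int sdam_mem_end = SDAM_MEM_START + size - 1;

	if (!len)
		return false;

	/* whole access must sit inside the SDAM window... */
	if (offset >= SDAM_MEM_START && offset <= sdam_mem_end &&
	    (offset + len - 1) <= sdam_mem_end)
		return true;

	/* ...or be a one-byte write to a PBS trigger register */
	if ((offset == SDAM_PBS_TRIG_SET || offset == SDAM_PBS_TRIG_CLR) &&
	    len == 1)
		return true;

	return false;
}
```

Note the `size` used here is the value the probe derives as `val * 32` from the `SDAM_SIZE` register, so the window scales per chip.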
+1 -1
drivers/siox/siox.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Copyright (C) 2015-2017 Pengutronix, Uwe Kleine-König <kernel@pengutronix.de> 4 4 */
+2
drivers/slimbus/qcom-ctrl.c
··· 641 641 642 642 pm_runtime_disable(&pdev->dev); 643 643 slim_unregister_controller(&ctrl->ctrl); 644 + clk_disable_unprepare(ctrl->rclk); 645 + clk_disable_unprepare(ctrl->hclk); 644 646 destroy_workqueue(ctrl->rxwq); 645 647 return 0; 646 648 }
+12 -8
drivers/slimbus/qcom-ngd-ctrl.c
··· 666 666 struct device *dev = ctrl->dev; 667 667 int ret, size; 668 668 669 - ctrl->dma_rx_channel = dma_request_slave_channel(dev, "rx"); 670 - if (!ctrl->dma_rx_channel) { 671 - dev_err(dev, "Failed to request dma channels"); 672 - return -EINVAL; 669 + ctrl->dma_rx_channel = dma_request_chan(dev, "rx"); 670 + if (IS_ERR(ctrl->dma_rx_channel)) { 671 + dev_err(dev, "Failed to request RX dma channel"); 672 + ret = PTR_ERR(ctrl->dma_rx_channel); 673 + ctrl->dma_rx_channel = NULL; 674 + return ret; 673 675 } 674 676 675 677 size = QCOM_SLIM_NGD_DESC_NUM * SLIM_MSGQ_BUF_LEN; ··· 705 703 int ret = 0; 706 704 int size; 707 705 708 - ctrl->dma_tx_channel = dma_request_slave_channel(dev, "tx"); 709 - if (!ctrl->dma_tx_channel) { 710 - dev_err(dev, "Failed to request dma channels"); 711 - return -EINVAL; 706 + ctrl->dma_tx_channel = dma_request_chan(dev, "tx"); 707 + if (IS_ERR(ctrl->dma_tx_channel)) { 708 + dev_err(dev, "Failed to request TX dma channel"); 709 + ret = PTR_ERR(ctrl->dma_tx_channel); 710 + ctrl->dma_tx_channel = NULL; 711 + return ret; 712 712 } 713 713 714 714 size = ((QCOM_SLIM_NGD_DESC_NUM + 1) * SLIM_MSGQ_BUF_LEN);
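The qcom-ngd-ctrl hunk above is a contract change, not just a rename: `dma_request_slave_channel()` returns NULL on failure, while `dma_request_chan()` returns an `ERR_PTR`, so the check must become `IS_ERR()`/`PTR_ERR()`. The sketch below models the kernel's error-pointer convention (a simplified user-space take on `include/linux/err.h`) to show why a NULL test would silently pass a failed request through; `request_chan_stub()` is a made-up stand-in, not the real dmaengine API.

```c
#include <assert.h>
#include <errno.h>

/*
 * Simplified model of the kernel's ERR_PTR machinery: errors are encoded
 * in the top MAX_ERRNO values of the pointer range, so they are non-NULL
 * yet distinguishable from valid pointers.
 */
#define MAX_ERRNO 4095

static void *ERR_PTR(long error)
{
	return (void *)error;
}

static long PTR_ERR(const void *ptr)
{
	return (long)ptr;
}

static int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

static int dummy_chan;

/* Stand-in for dma_request_chan(): failure is an ERR_PTR, never NULL. */
static void *request_chan_stub(int fail)
{
	return fail ? ERR_PTR(-ENODEV) : (void *)&dummy_chan;
}
```

The diff also nulls `ctrl->dma_rx_channel`/`ctrl->dma_tx_channel` after stashing the error, so later teardown paths that do NULL checks on the stored pointer stay safe.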
+1 -1
drivers/slimbus/slimbus.h
··· 1 - // SPDX-License-Identifier: GPL-2.0 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 2 /* 3 3 * Copyright (c) 2011-2017, The Linux Foundation 4 4 */
+9
drivers/soundwire/Kconfig
··· 31 31 enable this config option to get the SoundWire support for that 32 32 device. 33 33 34 + config SOUNDWIRE_QCOM 35 + tristate "Qualcomm SoundWire Master driver" 36 + depends on SLIMBUS 37 + depends on SND_SOC 38 + help 39 + SoundWire Qualcomm Master driver. 40 + If you have a Qualcomm platform which has a SoundWire Master then 41 + enable this config option to get the SoundWire support for that 42 + device. 34 43 endif
+4
drivers/soundwire/Makefile
··· 21 21 22 22 soundwire-intel-init-objs := intel_init.o 23 23 obj-$(CONFIG_SOUNDWIRE_INTEL) += soundwire-intel-init.o 24 + 25 + #Qualcomm driver 26 + soundwire-qcom-objs := qcom.o 27 + obj-$(CONFIG_SOUNDWIRE_QCOM) += soundwire-qcom.o
+41 -14
drivers/soundwire/bus.c
··· 456 456 static int sdw_assign_device_num(struct sdw_slave *slave) 457 457 { 458 458 int ret, dev_num; 459 + bool new_device = false; 459 460 460 461 /* check first if device number is assigned, if so reuse that */ 461 462 if (!slave->dev_num) { 462 - mutex_lock(&slave->bus->bus_lock); 463 - dev_num = sdw_get_device_num(slave); 464 - mutex_unlock(&slave->bus->bus_lock); 465 - if (dev_num < 0) { 466 - dev_err(slave->bus->dev, "Get dev_num failed: %d\n", 467 - dev_num); 468 - return dev_num; 463 + if (!slave->dev_num_sticky) { 464 + mutex_lock(&slave->bus->bus_lock); 465 + dev_num = sdw_get_device_num(slave); 466 + mutex_unlock(&slave->bus->bus_lock); 467 + if (dev_num < 0) { 468 + dev_err(slave->bus->dev, "Get dev_num failed: %d\n", 469 + dev_num); 470 + return dev_num; 471 + } 472 + slave->dev_num = dev_num; 473 + slave->dev_num_sticky = dev_num; 474 + new_device = true; 475 + } else { 476 + slave->dev_num = slave->dev_num_sticky; 469 477 } 470 - } else { 478 + } 479 + 480 + if (!new_device) 471 481 dev_info(slave->bus->dev, 472 - "Slave already registered dev_num:%d\n", 482 + "Slave already registered, reusing dev_num:%d\n", 473 483 slave->dev_num); 474 484 475 - /* Clear the slave->dev_num to transfer message on device 0 */ 476 - dev_num = slave->dev_num; 477 - slave->dev_num = 0; 478 - } 485 + /* Clear the slave->dev_num to transfer message on device 0 */ 486 + dev_num = slave->dev_num; 487 + slave->dev_num = 0; 479 488 480 489 ret = sdw_write(slave, SDW_SCP_DEVNUMBER, dev_num); 481 490 if (ret < 0) { ··· 494 485 } 495 486 496 487 /* After xfer of msg, restore dev_num */ 497 - slave->dev_num = dev_num; 488 + slave->dev_num = slave->dev_num_sticky; 498 489 499 490 return 0; 500 491 } ··· 987 978 enum sdw_slave_status prev_status; 988 979 struct sdw_slave *slave; 989 980 int i, ret = 0; 981 + 982 + /* first check if any Slaves fell off the bus */ 983 + for (i = 1; i <= SDW_MAX_DEVICES; i++) { 984 + mutex_lock(&bus->bus_lock); 985 + if (test_bit(i, 
bus->assigned) == false) { 986 + mutex_unlock(&bus->bus_lock); 987 + continue; 988 + } 989 + mutex_unlock(&bus->bus_lock); 990 + 991 + slave = sdw_get_slave(bus, i); 992 + if (!slave) 993 + continue; 994 + 995 + if (status[i] == SDW_SLAVE_UNATTACHED && 996 + slave->status != SDW_SLAVE_UNATTACHED) 997 + sdw_modify_slave_status(slave, SDW_SLAVE_UNATTACHED); 998 + } 990 999 991 1000 if (status[0] == SDW_SLAVE_ATTACHED) { 992 1001 dev_dbg(bus->dev, "Slave attached, programming device number\n");
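The `dev_num_sticky` change in bus.c above has a simple invariant worth spelling out: the first enumeration allocates a number and records it; if the Slave later falls off the bus and re-attaches, it gets the recorded number back instead of consuming a fresh one. A toy model, with the allocator faked as a `next_free` argument in place of `sdw_get_device_num()` (field names mirror the diff, the rest is illustrative):

```c
#include <assert.h>
#include <stdbool.h>

/* Minimal slave state for the sketch. */
struct slave_model {
	int dev_num;		/* current number, 0 = not yet programmed */
	int dev_num_sticky;	/* first number ever granted, 0 = never */
};

/*
 * Mirrors the decision tree sdw_assign_device_num() now implements;
 * returns true only when a brand-new number was allocated.
 */
static bool assign_dev_num(struct slave_model *s, int next_free)
{
	if (s->dev_num)
		return false;			/* already registered */

	if (s->dev_num_sticky) {
		s->dev_num = s->dev_num_sticky;	/* re-attach: reuse */
		return false;
	}

	s->dev_num = next_free;			/* first attach: allocate */
	s->dev_num_sticky = next_free;
	return true;
}
```

This also explains the companion hunk that restores `slave->dev_num` from `dev_num_sticky` after the device-0 message transfer: the sticky copy is the source of truth once assigned.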
+54 -12
drivers/soundwire/cadence_master.c
··· 74 74 #define CDNS_MCP_INTMASK 0x48 75 75 76 76 #define CDNS_MCP_INT_IRQ BIT(31) 77 + #define CDNS_MCP_INT_RESERVED1 GENMASK(30, 17) 77 78 #define CDNS_MCP_INT_WAKEUP BIT(16) 78 79 #define CDNS_MCP_INT_SLAVE_RSVD BIT(15) 79 80 #define CDNS_MCP_INT_SLAVE_ALERT BIT(14) ··· 86 85 #define CDNS_MCP_INT_DATA_CLASH BIT(9) 87 86 #define CDNS_MCP_INT_PARITY BIT(8) 88 87 #define CDNS_MCP_INT_CMD_ERR BIT(7) 88 + #define CDNS_MCP_INT_RESERVED2 GENMASK(6, 4) 89 89 #define CDNS_MCP_INT_RX_NE BIT(3) 90 90 #define CDNS_MCP_INT_RX_WL BIT(2) 91 91 #define CDNS_MCP_INT_TXE BIT(1) 92 92 #define CDNS_MCP_INT_TXF BIT(0) 93 + #define CDNS_MCP_INT_RESERVED (CDNS_MCP_INT_RESERVED1 | CDNS_MCP_INT_RESERVED2) 93 94 94 95 #define CDNS_MCP_INTSET 0x4C 95 96 ··· 447 444 time = wait_for_completion_timeout(&cdns->tx_complete, 448 445 msecs_to_jiffies(CDNS_TX_TIMEOUT)); 449 446 if (!time) { 450 - dev_err(cdns->dev, "IO transfer timed out\n"); 447 + dev_err(cdns->dev, "IO transfer timed out, cmd %d device %d addr %x len %d\n", 448 + cmd, msg->dev_num, msg->addr, msg->len); 451 449 msg->len = 0; 452 450 return SDW_CMD_TIMEOUT; 453 451 } ··· 676 672 677 673 /* first check if Slave reported multiple status */ 678 674 if (set_status > 1) { 675 + u32 val; 676 + 679 677 dev_warn_ratelimited(cdns->dev, 680 - "Slave reported multiple Status: %d\n", 681 - mask); 682 - /* 683 - * TODO: we need to reread the status here by 684 - * issuing a PING cmd 685 - */ 678 + "Slave %d reported multiple Status: %d\n", 679 + i, mask); 680 + 681 + /* check latest status extracted from PING commands */ 682 + val = cdns_readl(cdns, CDNS_MCP_SLAVE_STAT); 683 + val >>= (i * 2); 684 + 685 + switch (val & 0x3) { 686 + case 0: 687 + status[i] = SDW_SLAVE_UNATTACHED; 688 + break; 689 + case 1: 690 + status[i] = SDW_SLAVE_ATTACHED; 691 + break; 692 + case 2: 693 + status[i] = SDW_SLAVE_ALERT; 694 + break; 695 + case 3: 696 + default: 697 + status[i] = SDW_SLAVE_RESERVED; 698 + break; 699 + } 700 + 701 + 
dev_warn_ratelimited(cdns->dev, 702 + "Slave %d status updated to %d\n", 703 + i, status[i]); 704 + 686 705 } 687 706 } 688 707 ··· 731 704 return IRQ_NONE; 732 705 733 706 int_status = cdns_readl(cdns, CDNS_MCP_INTSTAT); 707 + 708 + /* check for reserved values read as zero */ 709 + if (int_status & CDNS_MCP_INT_RESERVED) 710 + return IRQ_NONE; 734 711 735 712 if (!(int_status & CDNS_MCP_INT_IRQ)) 736 713 return IRQ_NONE; ··· 843 812 EXPORT_SYMBOL(sdw_cdns_exit_reset); 844 813 845 814 /** 846 - * sdw_cdns_enable_interrupt() - Enable SDW interrupts and update config 815 + * sdw_cdns_enable_interrupt() - Enable SDW interrupts 847 816 * @cdns: Cadence instance 817 + * @state: True if we are trying to enable interrupt. 848 818 */ 849 819 int sdw_cdns_enable_interrupt(struct sdw_cdns *cdns, bool state) 850 820 { ··· 881 849 mask = interrupt_mask; 882 850 883 851 update_masks: 852 + /* clear slave interrupt status before enabling interrupt */ 853 + if (state) { 854 + u32 slave_state; 855 + 856 + slave_state = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT0); 857 + cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT0, slave_state); 858 + slave_state = cdns_readl(cdns, CDNS_MCP_SLAVE_INTSTAT1); 859 + cdns_writel(cdns, CDNS_MCP_SLAVE_INTSTAT1, slave_state); 860 + } 861 + 884 862 cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK0, slave_intmask0); 885 863 cdns_writel(cdns, CDNS_MCP_SLAVE_INTMASK1, slave_intmask1); 886 864 cdns_writel(cdns, CDNS_MCP_INTMASK, mask); 887 865 888 - /* commit changes */ 889 - return cdns_update_config(cdns); 866 + return 0; 890 867 } 891 868 EXPORT_SYMBOL(sdw_cdns_enable_interrupt); 892 869 ··· 988 947 989 948 ret = cdns_allocate_pdi(cdns, &stream->out, 990 949 stream->num_out, offset); 991 - 992 - offset += stream->num_out; 993 950 994 951 if (ret) 995 952 return ret; ··· 1263 1224 * cdns_find_pdi() - Find a free PDI 1264 1225 * 1265 1226 * @cdns: Cadence instance 1227 + * @offset: Starting offset 1266 1228 * @num: Number of PDIs 1267 1229 * @pdi: PDI instances 1230 + * 
@dai_id: DAI id 1268 1231 * 1269 1232 * Find a PDI for a given PDI array. The PDI num and dai_id are 1270 1233 * expected to match, return NULL otherwise. ··· 1318 1277 * @stream: Stream to be allocated 1319 1278 * @ch: Channel count 1320 1279 * @dir: Data direction 1280 + * @dai_id: DAI id 1321 1281 */ 1322 1282 struct sdw_cdns_pdi *sdw_cdns_alloc_pdi(struct sdw_cdns *cdns, 1323 1283 struct sdw_cdns_streams *stream,
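When a Slave reports multiple status bits, the cadence_master.c change above re-reads `CDNS_MCP_SLAVE_STAT` (which PING commands keep current) and decodes device `i`'s two bits. That decode is just `(val >> (i * 2)) & 0x3` fed through a four-way switch, sketched below; the enum names are illustrative stand-ins for the kernel's `enum sdw_slave_status` values used in the diff.

```c
#include <assert.h>

/* Illustrative names; the driver uses SDW_SLAVE_UNATTACHED et al. */
enum slave_status { UNATTACHED, ATTACHED, ALERT, RESERVED };

/*
 * Each device owns a 2-bit field in the PING status register:
 * 0 = unattached, 1 = attached, 2 = alert, 3 = reserved.
 */
static enum slave_status decode_ping_status(unsigned int stat_reg, int dev)
{
	switch ((stat_reg >> (dev * 2)) & 0x3) {
	case 0:
		return UNATTACHED;
	case 1:
		return ATTACHED;
	case 2:
		return ALERT;
	default:
		return RESERVED;
	}
}
```

Using the PING-maintained register as the tiebreaker is what lets the driver drop the old "TODO: reread the status" comment: the fresh value simply overwrites the ambiguous per-Slave report in `status[i]`.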
+3
drivers/soundwire/intel.c
··· 880 880 "intel-sdw-ip-clock", 881 881 &prop->mclk_freq); 882 882 883 + /* the values reported by BIOS are the 2x clock, not the bus clock */ 884 + prop->mclk_freq /= 2; 885 + 883 886 fwnode_property_read_u32(link, 884 887 "intel-quirk-mask", 885 888 &quirk_mask);
+861
drivers/soundwire/qcom.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + // Copyright (c) 2019, Linaro Limited 3 + 4 + #include <linux/clk.h> 5 + #include <linux/completion.h> 6 + #include <linux/interrupt.h> 7 + #include <linux/io.h> 8 + #include <linux/kernel.h> 9 + #include <linux/module.h> 10 + #include <linux/of.h> 11 + #include <linux/of_irq.h> 12 + #include <linux/of_device.h> 13 + #include <linux/regmap.h> 14 + #include <linux/slab.h> 15 + #include <linux/slimbus.h> 16 + #include <linux/soundwire/sdw.h> 17 + #include <linux/soundwire/sdw_registers.h> 18 + #include <sound/pcm_params.h> 19 + #include <sound/soc.h> 20 + #include "bus.h" 21 + 22 + #define SWRM_COMP_HW_VERSION 0x00 23 + #define SWRM_COMP_CFG_ADDR 0x04 24 + #define SWRM_COMP_CFG_IRQ_LEVEL_OR_PULSE_MSK BIT(1) 25 + #define SWRM_COMP_CFG_ENABLE_MSK BIT(0) 26 + #define SWRM_COMP_PARAMS 0x100 27 + #define SWRM_COMP_PARAMS_DOUT_PORTS_MASK GENMASK(4, 0) 28 + #define SWRM_COMP_PARAMS_DIN_PORTS_MASK GENMASK(9, 5) 29 + #define SWRM_INTERRUPT_STATUS 0x200 30 + #define SWRM_INTERRUPT_STATUS_RMSK GENMASK(16, 0) 31 + #define SWRM_INTERRUPT_STATUS_NEW_SLAVE_ATTACHED BIT(1) 32 + #define SWRM_INTERRUPT_STATUS_CHANGE_ENUM_SLAVE_STATUS BIT(2) 33 + #define SWRM_INTERRUPT_STATUS_CMD_ERROR BIT(7) 34 + #define SWRM_INTERRUPT_STATUS_SPECIAL_CMD_ID_FINISHED BIT(10) 35 + #define SWRM_INTERRUPT_MASK_ADDR 0x204 36 + #define SWRM_INTERRUPT_CLEAR 0x208 37 + #define SWRM_CMD_FIFO_WR_CMD 0x300 38 + #define SWRM_CMD_FIFO_RD_CMD 0x304 39 + #define SWRM_CMD_FIFO_CMD 0x308 40 + #define SWRM_CMD_FIFO_STATUS 0x30C 41 + #define SWRM_CMD_FIFO_CFG_ADDR 0x314 42 + #define SWRM_RD_WR_CMD_RETRIES 0x7 43 + #define SWRM_CMD_FIFO_RD_FIFO_ADDR 0x318 44 + #define SWRM_ENUMERATOR_CFG_ADDR 0x500 45 + #define SWRM_MCP_FRAME_CTRL_BANK_ADDR(m) (0x101C + 0x40 * (m)) 46 + #define SWRM_MCP_FRAME_CTRL_BANK_ROW_CTRL_SHFT 3 47 + #define SWRM_MCP_FRAME_CTRL_BANK_COL_CTRL_BMSK GENMASK(2, 0) 48 + #define SWRM_MCP_FRAME_CTRL_BANK_ROW_CTRL_BMSK GENMASK(7, 3) 49 + #define 
SWRM_MCP_FRAME_CTRL_BANK_COL_CTRL_SHFT 0 50 + #define SWRM_MCP_CFG_ADDR 0x1048 51 + #define SWRM_MCP_CFG_MAX_NUM_OF_CMD_NO_PINGS_BMSK GENMASK(21, 17) 52 + #define SWRM_MCP_CFG_MAX_NUM_OF_CMD_NO_PINGS_SHFT 0x11 53 + #define SWRM_DEF_CMD_NO_PINGS 0x1f 54 + #define SWRM_MCP_STATUS 0x104C 55 + #define SWRM_MCP_STATUS_BANK_NUM_MASK BIT(0) 56 + #define SWRM_MCP_SLV_STATUS 0x1090 57 + #define SWRM_MCP_SLV_STATUS_MASK GENMASK(1, 0) 58 + #define SWRM_DP_PORT_CTRL_BANK(n, m) (0x1124 + 0x100 * (n - 1) + 0x40 * m) 59 + #define SWRM_DP_PORT_CTRL_EN_CHAN_SHFT 0x18 60 + #define SWRM_DP_PORT_CTRL_OFFSET2_SHFT 0x10 61 + #define SWRM_DP_PORT_CTRL_OFFSET1_SHFT 0x08 62 + #define SWRM_AHB_BRIDGE_WR_DATA_0 0xc85 63 + #define SWRM_AHB_BRIDGE_WR_ADDR_0 0xc89 64 + #define SWRM_AHB_BRIDGE_RD_ADDR_0 0xc8d 65 + #define SWRM_AHB_BRIDGE_RD_DATA_0 0xc91 66 + 67 + #define SWRM_REG_VAL_PACK(data, dev, id, reg) \ 68 + ((reg) | ((id) << 16) | ((dev) << 20) | ((data) << 24)) 69 + 70 + #define SWRM_MAX_ROW_VAL 0 /* Rows = 48 */ 71 + #define SWRM_DEFAULT_ROWS 48 72 + #define SWRM_MIN_COL_VAL 0 /* Cols = 2 */ 73 + #define SWRM_DEFAULT_COL 16 74 + #define SWRM_MAX_COL_VAL 7 75 + #define SWRM_SPECIAL_CMD_ID 0xF 76 + #define MAX_FREQ_NUM 1 77 + #define TIMEOUT_MS (2 * HZ) 78 + #define QCOM_SWRM_MAX_RD_LEN 0xf 79 + #define QCOM_SDW_MAX_PORTS 14 80 + #define DEFAULT_CLK_FREQ 9600000 81 + #define SWRM_MAX_DAIS 0xF 82 + 83 + struct qcom_swrm_port_config { 84 + u8 si; 85 + u8 off1; 86 + u8 off2; 87 + }; 88 + 89 + struct qcom_swrm_ctrl { 90 + struct sdw_bus bus; 91 + struct device *dev; 92 + struct regmap *regmap; 93 + struct completion *comp; 94 + struct work_struct slave_work; 95 + /* read/write lock */ 96 + spinlock_t comp_lock; 97 + /* Port alloc/free lock */ 98 + struct mutex port_lock; 99 + struct clk *hclk; 100 + u8 wr_cmd_id; 101 + u8 rd_cmd_id; 102 + int irq; 103 + unsigned int version; 104 + int num_din_ports; 105 + int num_dout_ports; 106 + unsigned long dout_port_mask; 107 + unsigned long 
din_port_mask; 108 + struct qcom_swrm_port_config pconfig[QCOM_SDW_MAX_PORTS]; 109 + struct sdw_stream_runtime *sruntime[SWRM_MAX_DAIS]; 110 + enum sdw_slave_status status[SDW_MAX_DEVICES]; 111 + int (*reg_read)(struct qcom_swrm_ctrl *ctrl, int reg, u32 *val); 112 + int (*reg_write)(struct qcom_swrm_ctrl *ctrl, int reg, int val); 113 + }; 114 + 115 + #define to_qcom_sdw(b) container_of(b, struct qcom_swrm_ctrl, bus) 116 + 117 + static int qcom_swrm_abh_reg_read(struct qcom_swrm_ctrl *ctrl, int reg, 118 + u32 *val) 119 + { 120 + struct regmap *wcd_regmap = ctrl->regmap; 121 + int ret; 122 + 123 + /* pg register + offset */ 124 + ret = regmap_bulk_write(wcd_regmap, SWRM_AHB_BRIDGE_RD_ADDR_0, 125 + (u8 *)&reg, 4); 126 + if (ret < 0) 127 + return SDW_CMD_FAIL; 128 + 129 + ret = regmap_bulk_read(wcd_regmap, SWRM_AHB_BRIDGE_RD_DATA_0, 130 + val, 4); 131 + if (ret < 0) 132 + return SDW_CMD_FAIL; 133 + 134 + return SDW_CMD_OK; 135 + } 136 + 137 + static int qcom_swrm_ahb_reg_write(struct qcom_swrm_ctrl *ctrl, 138 + int reg, int val) 139 + { 140 + struct regmap *wcd_regmap = ctrl->regmap; 141 + int ret; 142 + /* pg register + offset */ 143 + ret = regmap_bulk_write(wcd_regmap, SWRM_AHB_BRIDGE_WR_DATA_0, 144 + (u8 *)&val, 4); 145 + if (ret) 146 + return SDW_CMD_FAIL; 147 + 148 + /* write address register */ 149 + ret = regmap_bulk_write(wcd_regmap, SWRM_AHB_BRIDGE_WR_ADDR_0, 150 + (u8 *)&reg, 4); 151 + if (ret) 152 + return SDW_CMD_FAIL; 153 + 154 + return SDW_CMD_OK; 155 + } 156 + 157 + static int qcom_swrm_cmd_fifo_wr_cmd(struct qcom_swrm_ctrl *ctrl, u8 cmd_data, 158 + u8 dev_addr, u16 reg_addr) 159 + { 160 + DECLARE_COMPLETION_ONSTACK(comp); 161 + unsigned long flags; 162 + u32 val; 163 + int ret; 164 + 165 + spin_lock_irqsave(&ctrl->comp_lock, flags); 166 + ctrl->comp = &comp; 167 + spin_unlock_irqrestore(&ctrl->comp_lock, flags); 168 + val = SWRM_REG_VAL_PACK(cmd_data, dev_addr, 169 + SWRM_SPECIAL_CMD_ID, reg_addr); 170 + ret = ctrl->reg_write(ctrl, 
SWRM_CMD_FIFO_WR_CMD, val); 171 + if (ret) 172 + goto err; 173 + 174 + ret = wait_for_completion_timeout(ctrl->comp, 175 + msecs_to_jiffies(TIMEOUT_MS)); 176 + 177 + if (!ret) 178 + ret = SDW_CMD_IGNORED; 179 + else 180 + ret = SDW_CMD_OK; 181 + err: 182 + spin_lock_irqsave(&ctrl->comp_lock, flags); 183 + ctrl->comp = NULL; 184 + spin_unlock_irqrestore(&ctrl->comp_lock, flags); 185 + 186 + return ret; 187 + } 188 + 189 + static int qcom_swrm_cmd_fifo_rd_cmd(struct qcom_swrm_ctrl *ctrl, 190 + u8 dev_addr, u16 reg_addr, 191 + u32 len, u8 *rval) 192 + { 193 + int i, ret; 194 + u32 val; 195 + DECLARE_COMPLETION_ONSTACK(comp); 196 + unsigned long flags; 197 + 198 + spin_lock_irqsave(&ctrl->comp_lock, flags); 199 + ctrl->comp = &comp; 200 + spin_unlock_irqrestore(&ctrl->comp_lock, flags); 201 + 202 + val = SWRM_REG_VAL_PACK(len, dev_addr, SWRM_SPECIAL_CMD_ID, reg_addr); 203 + ret = ctrl->reg_write(ctrl, SWRM_CMD_FIFO_RD_CMD, val); 204 + if (ret) 205 + goto err; 206 + 207 + ret = wait_for_completion_timeout(ctrl->comp, 208 + msecs_to_jiffies(TIMEOUT_MS)); 209 + 210 + if (!ret) { 211 + ret = SDW_CMD_IGNORED; 212 + goto err; 213 + } else { 214 + ret = SDW_CMD_OK; 215 + } 216 + 217 + for (i = 0; i < len; i++) { 218 + ctrl->reg_read(ctrl, SWRM_CMD_FIFO_RD_FIFO_ADDR, &val); 219 + rval[i] = val & 0xFF; 220 + } 221 + 222 + err: 223 + spin_lock_irqsave(&ctrl->comp_lock, flags); 224 + ctrl->comp = NULL; 225 + spin_unlock_irqrestore(&ctrl->comp_lock, flags); 226 + 227 + return ret; 228 + } 229 + 230 + static void qcom_swrm_get_device_status(struct qcom_swrm_ctrl *ctrl) 231 + { 232 + u32 val; 233 + int i; 234 + 235 + ctrl->reg_read(ctrl, SWRM_MCP_SLV_STATUS, &val); 236 + 237 + for (i = 0; i < SDW_MAX_DEVICES; i++) { 238 + u32 s; 239 + 240 + s = (val >> (i * 2)); 241 + s &= SWRM_MCP_SLV_STATUS_MASK; 242 + ctrl->status[i] = s; 243 + } 244 + } 245 + 246 + static irqreturn_t qcom_swrm_irq_handler(int irq, void *dev_id) 247 + { 248 + struct qcom_swrm_ctrl *ctrl = dev_id; 249 + u32 sts, 
value; 250 + unsigned long flags; 251 + 252 + ctrl->reg_read(ctrl, SWRM_INTERRUPT_STATUS, &sts); 253 + 254 + if (sts & SWRM_INTERRUPT_STATUS_CMD_ERROR) { 255 + ctrl->reg_read(ctrl, SWRM_CMD_FIFO_STATUS, &value); 256 + dev_err_ratelimited(ctrl->dev, 257 + "CMD error, fifo status 0x%x\n", 258 + value); 259 + ctrl->reg_write(ctrl, SWRM_CMD_FIFO_CMD, 0x1); 260 + } 261 + 262 + if ((sts & SWRM_INTERRUPT_STATUS_NEW_SLAVE_ATTACHED) || 263 + sts & SWRM_INTERRUPT_STATUS_CHANGE_ENUM_SLAVE_STATUS) 264 + schedule_work(&ctrl->slave_work); 265 + 266 + /* 267 + * Clear the interrupt before complete() is called: complete() can 268 + * schedule new reads/writes that rely on interrupts, so clearing 269 + * here avoids missing an interrupt in that case. 270 + */ 271 + ctrl->reg_write(ctrl, SWRM_INTERRUPT_CLEAR, sts); 272 + 273 + if (sts & SWRM_INTERRUPT_STATUS_SPECIAL_CMD_ID_FINISHED) { 274 + spin_lock_irqsave(&ctrl->comp_lock, flags); 275 + if (ctrl->comp) 276 + complete(ctrl->comp); 277 + spin_unlock_irqrestore(&ctrl->comp_lock, flags); 278 + } 279 + 280 + return IRQ_HANDLED; 281 + } 282 + static int qcom_swrm_init(struct qcom_swrm_ctrl *ctrl) 283 + { 284 + u32 val; 285 + 286 + /* Clear Rows and Cols */ 287 + val = (SWRM_MAX_ROW_VAL << SWRM_MCP_FRAME_CTRL_BANK_ROW_CTRL_SHFT | 288 + SWRM_MIN_COL_VAL << SWRM_MCP_FRAME_CTRL_BANK_COL_CTRL_SHFT); 289 + 290 + ctrl->reg_write(ctrl, SWRM_MCP_FRAME_CTRL_BANK_ADDR(0), val); 291 + 292 + /* Disable Auto enumeration */ 293 + ctrl->reg_write(ctrl, SWRM_ENUMERATOR_CFG_ADDR, 0); 294 + 295 + /* Mask soundwire interrupts */ 296 + ctrl->reg_write(ctrl, SWRM_INTERRUPT_MASK_ADDR, 297 + SWRM_INTERRUPT_STATUS_RMSK); 298 + 299 + /* Configure No pings */ 300 + ctrl->reg_read(ctrl, SWRM_MCP_CFG_ADDR, &val); 301 + val &= ~SWRM_MCP_CFG_MAX_NUM_OF_CMD_NO_PINGS_BMSK; 302 + val |= (SWRM_DEF_CMD_NO_PINGS << 303 + SWRM_MCP_CFG_MAX_NUM_OF_CMD_NO_PINGS_SHFT); 304 + ctrl->reg_write(ctrl, SWRM_MCP_CFG_ADDR, val); 305 + 306 + /* Configure number of retries
+of a read/write cmd */
+	ctrl->reg_write(ctrl, SWRM_CMD_FIFO_CFG_ADDR, SWRM_RD_WR_CMD_RETRIES);
+
+	/* Set IRQ to PULSE */
+	ctrl->reg_write(ctrl, SWRM_COMP_CFG_ADDR,
+			SWRM_COMP_CFG_IRQ_LEVEL_OR_PULSE_MSK |
+			SWRM_COMP_CFG_ENABLE_MSK);
+	return 0;
+}
+
+static enum sdw_command_response qcom_swrm_xfer_msg(struct sdw_bus *bus,
+						    struct sdw_msg *msg)
+{
+	struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus);
+	int ret, i, len;
+
+	if (msg->flags == SDW_MSG_FLAG_READ) {
+		for (i = 0; i < msg->len;) {
+			if ((msg->len - i) < QCOM_SWRM_MAX_RD_LEN)
+				len = msg->len - i;
+			else
+				len = QCOM_SWRM_MAX_RD_LEN;
+
+			ret = qcom_swrm_cmd_fifo_rd_cmd(ctrl, msg->dev_num,
+							msg->addr + i, len,
+							&msg->buf[i]);
+			if (ret)
+				return ret;
+
+			i = i + len;
+		}
+	} else if (msg->flags == SDW_MSG_FLAG_WRITE) {
+		for (i = 0; i < msg->len; i++) {
+			ret = qcom_swrm_cmd_fifo_wr_cmd(ctrl, msg->buf[i],
+							msg->dev_num,
+							msg->addr + i);
+			if (ret)
+				return SDW_CMD_IGNORED;
+		}
+	}
+
+	return SDW_CMD_OK;
+}
+
+static int qcom_swrm_pre_bank_switch(struct sdw_bus *bus)
+{
+	u32 reg = SWRM_MCP_FRAME_CTRL_BANK_ADDR(bus->params.next_bank);
+	struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus);
+	u32 val;
+
+	ctrl->reg_read(ctrl, reg, &val);
+
+	val &= ~SWRM_MCP_FRAME_CTRL_BANK_COL_CTRL_BMSK;
+	val &= ~SWRM_MCP_FRAME_CTRL_BANK_ROW_CTRL_BMSK;
+
+	val |= (SWRM_MAX_ROW_VAL << SWRM_MCP_FRAME_CTRL_BANK_ROW_CTRL_SHFT |
+		SWRM_MAX_COL_VAL << SWRM_MCP_FRAME_CTRL_BANK_COL_CTRL_SHFT);
+
+	return ctrl->reg_write(ctrl, reg, val);
+}
+
+static int qcom_swrm_port_params(struct sdw_bus *bus,
+				 struct sdw_port_params *p_params,
+				 unsigned int bank)
+{
+	/* TBD */
+	return 0;
+}
+
+static int qcom_swrm_transport_params(struct sdw_bus *bus,
+				      struct sdw_transport_params *params,
+				      enum sdw_reg_bank bank)
+{
+	struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus);
+	u32 value;
+
+	value = params->offset1 << SWRM_DP_PORT_CTRL_OFFSET1_SHFT;
+	value |= params->offset2 << SWRM_DP_PORT_CTRL_OFFSET2_SHFT;
+	value |= params->sample_interval - 1;
+
+	return ctrl->reg_write(ctrl,
+			       SWRM_DP_PORT_CTRL_BANK((params->port_num), bank),
+			       value);
+}
+
+static int qcom_swrm_port_enable(struct sdw_bus *bus,
+				 struct sdw_enable_ch *enable_ch,
+				 unsigned int bank)
+{
+	u32 reg = SWRM_DP_PORT_CTRL_BANK(enable_ch->port_num, bank);
+	struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus);
+	u32 val;
+
+	ctrl->reg_read(ctrl, reg, &val);
+
+	if (enable_ch->enable)
+		val |= (enable_ch->ch_mask << SWRM_DP_PORT_CTRL_EN_CHAN_SHFT);
+	else
+		val &= ~(0xff << SWRM_DP_PORT_CTRL_EN_CHAN_SHFT);
+
+	return ctrl->reg_write(ctrl, reg, val);
+}
+
+static struct sdw_master_port_ops qcom_swrm_port_ops = {
+	.dpn_set_port_params = qcom_swrm_port_params,
+	.dpn_set_port_transport_params = qcom_swrm_transport_params,
+	.dpn_port_enable_ch = qcom_swrm_port_enable,
+};
+
+static struct sdw_master_ops qcom_swrm_ops = {
+	.xfer_msg = qcom_swrm_xfer_msg,
+	.pre_bank_switch = qcom_swrm_pre_bank_switch,
+};
+
+static int qcom_swrm_compute_params(struct sdw_bus *bus)
+{
+	struct qcom_swrm_ctrl *ctrl = to_qcom_sdw(bus);
+	struct sdw_master_runtime *m_rt;
+	struct sdw_slave_runtime *s_rt;
+	struct sdw_port_runtime *p_rt;
+	struct qcom_swrm_port_config *pcfg;
+	int i = 0;
+
+	list_for_each_entry(m_rt, &bus->m_rt_list, bus_node) {
+		list_for_each_entry(p_rt, &m_rt->port_list, port_node) {
+			pcfg = &ctrl->pconfig[p_rt->num - 1];
+			p_rt->transport_params.port_num = p_rt->num;
+			p_rt->transport_params.sample_interval = pcfg->si + 1;
+			p_rt->transport_params.offset1 = pcfg->off1;
+			p_rt->transport_params.offset2 = pcfg->off2;
+		}
+
+		list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) {
+			list_for_each_entry(p_rt, &s_rt->port_list, port_node) {
+				pcfg = &ctrl->pconfig[i];
+				p_rt->transport_params.port_num = p_rt->num;
+				p_rt->transport_params.sample_interval =
+					pcfg->si + 1;
+				p_rt->transport_params.offset1 = pcfg->off1;
+				p_rt->transport_params.offset2 = pcfg->off2;
+				i++;
+			}
+		}
+	}
+
+	return 0;
+}
+
+static u32 qcom_swrm_freq_tbl[MAX_FREQ_NUM] = {
+	DEFAULT_CLK_FREQ,
+};
+
+static void qcom_swrm_slave_wq(struct work_struct *work)
+{
+	struct qcom_swrm_ctrl *ctrl =
+			container_of(work, struct qcom_swrm_ctrl, slave_work);
+
+	qcom_swrm_get_device_status(ctrl);
+	sdw_handle_slave_status(&ctrl->bus, ctrl->status);
+}
+
+static void qcom_swrm_stream_free_ports(struct qcom_swrm_ctrl *ctrl,
+					struct sdw_stream_runtime *stream)
+{
+	struct sdw_master_runtime *m_rt;
+	struct sdw_port_runtime *p_rt;
+	unsigned long *port_mask;
+
+	mutex_lock(&ctrl->port_lock);
+
+	list_for_each_entry(m_rt, &stream->master_list, stream_node) {
+		if (m_rt->direction == SDW_DATA_DIR_RX)
+			port_mask = &ctrl->dout_port_mask;
+		else
+			port_mask = &ctrl->din_port_mask;
+
+		list_for_each_entry(p_rt, &m_rt->port_list, port_node)
+			clear_bit(p_rt->num - 1, port_mask);
+	}
+
+	mutex_unlock(&ctrl->port_lock);
+}
+
+static int qcom_swrm_stream_alloc_ports(struct qcom_swrm_ctrl *ctrl,
+					struct sdw_stream_runtime *stream,
+					struct snd_pcm_hw_params *params,
+					int direction)
+{
+	struct sdw_port_config pconfig[QCOM_SDW_MAX_PORTS];
+	struct sdw_stream_config sconfig;
+	struct sdw_master_runtime *m_rt;
+	struct sdw_slave_runtime *s_rt;
+	struct sdw_port_runtime *p_rt;
+	unsigned long *port_mask;
+	int i, maxport, pn, nports = 0, ret = 0;
+
+	mutex_lock(&ctrl->port_lock);
+	list_for_each_entry(m_rt, &stream->master_list, stream_node) {
+		if (m_rt->direction == SDW_DATA_DIR_RX) {
+			maxport = ctrl->num_dout_ports;
+			port_mask = &ctrl->dout_port_mask;
+		} else {
+			maxport = ctrl->num_din_ports;
+			port_mask = &ctrl->din_port_mask;
+		}
+
+		list_for_each_entry(s_rt, &m_rt->slave_rt_list, m_rt_node) {
+			list_for_each_entry(p_rt, &s_rt->port_list, port_node) {
+				/* Port numbers start from 1 - 14 */
+				pn = find_first_zero_bit(port_mask, maxport);
+				if (pn > (maxport - 1)) {
+					dev_err(ctrl->dev, "All ports busy\n");
+					ret = -EBUSY;
+					goto err;
+				}
+				set_bit(pn, port_mask);
+				pconfig[nports].num = pn + 1;
+				pconfig[nports].ch_mask = p_rt->ch_mask;
+				nports++;
+			}
+		}
+	}
+
+	if (direction == SNDRV_PCM_STREAM_CAPTURE)
+		sconfig.direction = SDW_DATA_DIR_TX;
+	else
+		sconfig.direction = SDW_DATA_DIR_RX;
+
+	/* hw parameters will be ignored as we only support PDM */
+	sconfig.ch_count = 1;
+	sconfig.frame_rate = params_rate(params);
+	sconfig.type = stream->type;
+	sconfig.bps = 1;
+	sdw_stream_add_master(&ctrl->bus, &sconfig, pconfig,
+			      nports, stream);
+err:
+	if (ret) {
+		for (i = 0; i < nports; i++)
+			clear_bit(pconfig[i].num - 1, port_mask);
+	}
+
+	mutex_unlock(&ctrl->port_lock);
+
+	return ret;
+}
+
+static int qcom_swrm_hw_params(struct snd_pcm_substream *substream,
+			       struct snd_pcm_hw_params *params,
+			       struct snd_soc_dai *dai)
+{
+	struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(dai->dev);
+	struct sdw_stream_runtime *sruntime = ctrl->sruntime[dai->id];
+	int ret;
+
+	ret = qcom_swrm_stream_alloc_ports(ctrl, sruntime, params,
+					   substream->stream);
+	if (ret)
+		qcom_swrm_stream_free_ports(ctrl, sruntime);
+
+	return ret;
+}
+
+static int qcom_swrm_hw_free(struct snd_pcm_substream *substream,
+			     struct snd_soc_dai *dai)
+{
+	struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(dai->dev);
+	struct sdw_stream_runtime *sruntime = ctrl->sruntime[dai->id];
+
+	qcom_swrm_stream_free_ports(ctrl, sruntime);
+	sdw_stream_remove_master(&ctrl->bus, sruntime);
+
+	return 0;
+}
+
+static int qcom_swrm_set_sdw_stream(struct snd_soc_dai *dai,
+				    void *stream, int direction)
+{
+	struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(dai->dev);
+
+	ctrl->sruntime[dai->id] = stream;
+
+	return 0;
+}
+
+static int qcom_swrm_startup(struct snd_pcm_substream *substream,
+			     struct snd_soc_dai *dai)
+{
+	struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(dai->dev);
+	struct snd_soc_pcm_runtime *rtd = substream->private_data;
+	struct sdw_stream_runtime *sruntime;
+	int ret, i;
+
+	sruntime = sdw_alloc_stream(dai->name);
+	if (!sruntime)
+		return -ENOMEM;
+
+	ctrl->sruntime[dai->id] = sruntime;
+
+	for (i = 0; i < rtd->num_codecs; i++) {
+		ret = snd_soc_dai_set_sdw_stream(rtd->codec_dais[i], sruntime,
+						 substream->stream);
+		if (ret < 0 && ret != -ENOTSUPP) {
+			dev_err(dai->dev, "Failed to set sdw stream on %s",
+				rtd->codec_dais[i]->name);
+			sdw_release_stream(sruntime);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static void qcom_swrm_shutdown(struct snd_pcm_substream *substream,
+			       struct snd_soc_dai *dai)
+{
+	struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(dai->dev);
+
+	sdw_release_stream(ctrl->sruntime[dai->id]);
+	ctrl->sruntime[dai->id] = NULL;
+}
+
+static const struct snd_soc_dai_ops qcom_swrm_pdm_dai_ops = {
+	.hw_params = qcom_swrm_hw_params,
+	.hw_free = qcom_swrm_hw_free,
+	.startup = qcom_swrm_startup,
+	.shutdown = qcom_swrm_shutdown,
+	.set_sdw_stream = qcom_swrm_set_sdw_stream,
+};
+
+static const struct snd_soc_component_driver qcom_swrm_dai_component = {
+	.name = "soundwire",
+};
+
+static int qcom_swrm_register_dais(struct qcom_swrm_ctrl *ctrl)
+{
+	int num_dais = ctrl->num_dout_ports + ctrl->num_din_ports;
+	struct snd_soc_dai_driver *dais;
+	struct snd_soc_pcm_stream *stream;
+	struct device *dev = ctrl->dev;
+	int i;
+
+	/* PDM dais are only tested for now */
+	dais = devm_kcalloc(dev, num_dais, sizeof(*dais), GFP_KERNEL);
+	if (!dais)
+		return -ENOMEM;
+
+	for (i = 0; i < num_dais; i++) {
+		dais[i].name = devm_kasprintf(dev, GFP_KERNEL, "SDW Pin%d", i);
+		if (!dais[i].name)
+			return -ENOMEM;
+
+		if (i < ctrl->num_dout_ports)
+			stream = &dais[i].playback;
+		else
+			stream = &dais[i].capture;
+
+		stream->channels_min = 1;
+		stream->channels_max = 1;
+		stream->rates = SNDRV_PCM_RATE_48000;
+		stream->formats = SNDRV_PCM_FMTBIT_S16_LE;
+
+		dais[i].ops = &qcom_swrm_pdm_dai_ops;
+		dais[i].id = i;
+	}
+
+	return devm_snd_soc_register_component(ctrl->dev,
+					       &qcom_swrm_dai_component,
+					       dais, num_dais);
+}
+
+static int qcom_swrm_get_port_config(struct qcom_swrm_ctrl *ctrl)
+{
+	struct device_node *np = ctrl->dev->of_node;
+	u8 off1[QCOM_SDW_MAX_PORTS];
+	u8 off2[QCOM_SDW_MAX_PORTS];
+	u8 si[QCOM_SDW_MAX_PORTS];
+	int i, ret, nports, val;
+
+	ctrl->reg_read(ctrl, SWRM_COMP_PARAMS, &val);
+
+	ctrl->num_dout_ports = val & SWRM_COMP_PARAMS_DOUT_PORTS_MASK;
+	ctrl->num_din_ports = (val & SWRM_COMP_PARAMS_DIN_PORTS_MASK) >> 5;
+
+	ret = of_property_read_u32(np, "qcom,din-ports", &val);
+	if (ret)
+		return ret;
+
+	if (val > ctrl->num_din_ports)
+		return -EINVAL;
+
+	ctrl->num_din_ports = val;
+
+	ret = of_property_read_u32(np, "qcom,dout-ports", &val);
+	if (ret)
+		return ret;
+
+	if (val > ctrl->num_dout_ports)
+		return -EINVAL;
+
+	ctrl->num_dout_ports = val;
+
+	nports = ctrl->num_dout_ports + ctrl->num_din_ports;
+
+	ret = of_property_read_u8_array(np, "qcom,ports-offset1",
+					off1, nports);
+	if (ret)
+		return ret;
+
+	ret = of_property_read_u8_array(np, "qcom,ports-offset2",
+					off2, nports);
+	if (ret)
+		return ret;
+
+	ret = of_property_read_u8_array(np, "qcom,ports-sinterval-low",
+					si, nports);
+	if (ret)
+		return ret;
+
+	for (i = 0; i < nports; i++) {
+		ctrl->pconfig[i].si = si[i];
+		ctrl->pconfig[i].off1 = off1[i];
+		ctrl->pconfig[i].off2 = off2[i];
+	}
+
+	return 0;
+}
+
+static int qcom_swrm_probe(struct platform_device *pdev)
+{
+	struct device *dev = &pdev->dev;
+	struct sdw_master_prop *prop;
+	struct sdw_bus_params *params;
+	struct qcom_swrm_ctrl *ctrl;
+	int ret;
+	u32 val;
+
+	ctrl = devm_kzalloc(dev, sizeof(*ctrl), GFP_KERNEL);
+	if (!ctrl)
+		return -ENOMEM;
+
+	if (dev->parent->bus == &slimbus_bus) {
+		ctrl->reg_read = qcom_swrm_abh_reg_read;
+		ctrl->reg_write = qcom_swrm_ahb_reg_write;
+		ctrl->regmap = dev_get_regmap(dev->parent, NULL);
+		if (!ctrl->regmap)
+			return -EINVAL;
+	} else {
+		/* Only WCD based SoundWire controller is supported */
+		return -ENOTSUPP;
+	}
+
+	ctrl->irq = of_irq_get(dev->of_node, 0);
+	if (ctrl->irq < 0)
+		return ctrl->irq;
+
+	ctrl->hclk = devm_clk_get(dev, "iface");
+	if (IS_ERR(ctrl->hclk))
+		return PTR_ERR(ctrl->hclk);
+
+	clk_prepare_enable(ctrl->hclk);
+
+	ctrl->dev = dev;
+	dev_set_drvdata(&pdev->dev, ctrl);
+	spin_lock_init(&ctrl->comp_lock);
+	mutex_init(&ctrl->port_lock);
+	INIT_WORK(&ctrl->slave_work, qcom_swrm_slave_wq);
+
+	ctrl->bus.dev = dev;
+	ctrl->bus.ops = &qcom_swrm_ops;
+	ctrl->bus.port_ops = &qcom_swrm_port_ops;
+	ctrl->bus.compute_params = &qcom_swrm_compute_params;
+
+	ret = qcom_swrm_get_port_config(ctrl);
+	if (ret)
+		return ret;
+
+	params = &ctrl->bus.params;
+	params->max_dr_freq = DEFAULT_CLK_FREQ;
+	params->curr_dr_freq = DEFAULT_CLK_FREQ;
+	params->col = SWRM_DEFAULT_COL;
+	params->row = SWRM_DEFAULT_ROWS;
+	ctrl->reg_read(ctrl, SWRM_MCP_STATUS, &val);
+	params->curr_bank = val & SWRM_MCP_STATUS_BANK_NUM_MASK;
+	params->next_bank = !params->curr_bank;
+
+	prop = &ctrl->bus.prop;
+	prop->max_clk_freq = DEFAULT_CLK_FREQ;
+	prop->num_clk_gears = 0;
+	prop->num_clk_freq = MAX_FREQ_NUM;
+	prop->clk_freq = &qcom_swrm_freq_tbl[0];
+	prop->default_col = SWRM_DEFAULT_COL;
+	prop->default_row = SWRM_DEFAULT_ROWS;
+
+	ctrl->reg_read(ctrl, SWRM_COMP_HW_VERSION, &ctrl->version);
+
+	ret = devm_request_threaded_irq(dev, ctrl->irq, NULL,
+					qcom_swrm_irq_handler,
+					IRQF_TRIGGER_RISING,
+					"soundwire", ctrl);
+	if (ret) {
+		dev_err(dev, "Failed to request soundwire irq\n");
+		goto err;
+	}
+
+	ret = sdw_add_bus_master(&ctrl->bus);
+	if (ret) {
+		dev_err(dev, "Failed to register Soundwire controller (%d)\n",
+			ret);
+		goto err;
+	}
+
+	qcom_swrm_init(ctrl);
+	ret = qcom_swrm_register_dais(ctrl);
+	if (ret)
+		goto err;
+
+	dev_info(dev, "Qualcomm Soundwire controller v%x.%x.%x Registered\n",
+		 (ctrl->version >> 24) & 0xff, (ctrl->version >> 16) & 0xff,
+		 ctrl->version & 0xffff);
+
+	return 0;
+err:
+	clk_disable_unprepare(ctrl->hclk);
+	return ret;
+}
+
+static int qcom_swrm_remove(struct platform_device *pdev)
+{
+	struct qcom_swrm_ctrl *ctrl = dev_get_drvdata(&pdev->dev);
+
+	sdw_delete_bus_master(&ctrl->bus);
+	clk_disable_unprepare(ctrl->hclk);
+
+	return 0;
+}
+
+static const struct of_device_id qcom_swrm_of_match[] = {
+	{ .compatible = "qcom,soundwire-v1.3.0", },
+	{/* sentinel */},
+};
+
+MODULE_DEVICE_TABLE(of, qcom_swrm_of_match);
+
+static struct platform_driver qcom_swrm_driver = {
+	.probe = &qcom_swrm_probe,
+	.remove = &qcom_swrm_remove,
+	.driver = {
+		.name = "qcom-soundwire",
+		.of_match_table = qcom_swrm_of_match,
+	}
+};
+module_platform_driver(qcom_swrm_driver);
+
+MODULE_DESCRIPTION("Qualcomm soundwire driver");
+MODULE_LICENSE("GPL v2");
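The port bookkeeping in qcom_swrm_stream_alloc_ports()/qcom_swrm_stream_free_ports() above is a classic bitmap allocator: find-first-zero picks the lowest free port, the bit is set while the port is in use, and the hardware port number is the bit index plus one. A minimal userspace sketch of that scheme (plain C standing in for the kernel's find_first_zero_bit()/set_bit()/clear_bit(); alloc_port() and free_port() are hypothetical helper names, not driver functions):

```c
#include <assert.h>

/* Return the 1-based hardware port number of the first free port, or -1
 * if all 'maxport' ports are busy. Marks the chosen port busy. */
static int alloc_port(unsigned long *port_mask, int maxport)
{
	int pn;

	for (pn = 0; pn < maxport; pn++) {         /* find_first_zero_bit() */
		if (!(*port_mask & (1UL << pn))) {
			*port_mask |= 1UL << pn;   /* set_bit() */
			return pn + 1;             /* ports are numbered from 1 */
		}
	}
	return -1;                                 /* "All ports busy" case */
}

/* Release a previously allocated port (clear_bit() on index num - 1). */
static void free_port(unsigned long *port_mask, int port_num)
{
	*port_mask &= ~(1UL << (port_num - 1));
}
```

On error the driver walks the ports it already claimed and clears them again, which is the same free_port() loop applied to the partially built pconfig[] array.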
-8
drivers/soundwire/stream.c
···
 	sdw_acquire_bus_lock(stream);
 
 	ret = _sdw_prepare_stream(stream);
-	if (ret < 0)
-		pr_err("Prepare for stream:%s failed: %d\n", stream->name, ret);
 
 	sdw_release_bus_lock(stream);
 	return ret;
···
 	sdw_acquire_bus_lock(stream);
 
 	ret = _sdw_enable_stream(stream);
-	if (ret < 0)
-		pr_err("Enable for stream:%s failed: %d\n", stream->name, ret);
 
 	sdw_release_bus_lock(stream);
 	return ret;
···
 	sdw_acquire_bus_lock(stream);
 
 	ret = _sdw_disable_stream(stream);
-	if (ret < 0)
-		pr_err("Disable for stream:%s failed: %d\n", stream->name, ret);
 
 	sdw_release_bus_lock(stream);
 	return ret;
···
 
 	sdw_acquire_bus_lock(stream);
 	ret = _sdw_deprepare_stream(stream);
-	if (ret < 0)
-		pr_err("De-prepare for stream:%d failed: %d\n", ret, ret);
 
 	sdw_release_bus_lock(stream);
 	return ret;
+4 -2
drivers/uio/uio_dmem_genirq.c
···
 	if (irq_on) {
 		if (test_and_clear_bit(0, &priv->flags))
 			enable_irq(dev_info->irq);
+		spin_unlock_irqrestore(&priv->lock, flags);
 	} else {
-		if (!test_and_set_bit(0, &priv->flags))
+		if (!test_and_set_bit(0, &priv->flags)) {
+			spin_unlock_irqrestore(&priv->lock, flags);
 			disable_irq(dev_info->irq);
+		}
 	}
-	spin_unlock_irqrestore(&priv->lock, flags);
 
 	return 0;
 }
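The uio_dmem_genirq hunk moves the spin_unlock_irqrestore() so that disable_irq(), which can sleep waiting for in-flight handlers, is no longer called under a spinlock. The general pattern (decide under the lock, act after dropping it) can be sketched in userspace with pthreads; all names here (irq_off(), slow_disable(), irq_masked) are hypothetical stand-ins, not the driver's API:

```c
#include <assert.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int irq_masked;     /* models test_and_set_bit(0, &priv->flags) */
static int disable_calls;  /* counts potentially blocking calls */

/* Stand-in for disable_irq(): may block, so must run without the lock. */
static void slow_disable(void)
{
	disable_calls++;
}

static void irq_off(void)
{
	pthread_mutex_lock(&lock);
	if (!irq_masked) {
		irq_masked = 1;
		pthread_mutex_unlock(&lock);  /* drop the lock first... */
		slow_disable();               /* ...then make the blocking call */
		return;
	}
	pthread_mutex_unlock(&lock);
}
```

The state transition is still atomic (it happens under the lock), but the slow operation runs lock-free, so no other thread can deadlock or sleep-in-atomic behind it.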
+2
drivers/uio/uio_pdrv_genirq.c
···
 		uioinfo->irq = ret;
 		if (ret == -ENXIO && pdev->dev.of_node)
 			uioinfo->irq = UIO_IRQ_NONE;
+		else if (ret == -EPROBE_DEFER)
+			return ret;
 		else if (ret < 0) {
 			dev_err(&pdev->dev, "failed to get IRQ\n");
 			return ret;
+7 -4
drivers/visorbus/visorchipset.c
···
 {
 	struct controlvm_message local_crash_bus_msg;
 	struct controlvm_message local_crash_dev_msg;
-	struct controlvm_message msg;
+	struct controlvm_message msg = {
+		.hdr.id = CONTROLVM_CHIPSET_INIT,
+		.cmd.init_chipset = {
+			.bus_count = 23,
+			.switch_count = 0,
+		},
+	};
 	u32 local_crash_msg_offset;
 	u16 local_crash_msg_count;
 
 	/* send init chipset msg */
-	msg.hdr.id = CONTROLVM_CHIPSET_INIT;
-	msg.cmd.init_chipset.bus_count = 23;
-	msg.cmd.init_chipset.switch_count = 0;
 	chipset_init(&msg);
 	/* get saved message count */
 	if (visorchannel_read(chipset_dev->controlvm_channel,
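The visorchipset change replaces field-by-field assignment with a designated initializer because C guarantees that every member not named in the initializer list is zeroed, so the message no longer carries uninitialized stack contents. A small self-contained illustration (the struct below is a stand-in, not the real controlvm_message layout):

```c
#include <assert.h>
#include <string.h>

struct init_chipset_cmd {
	unsigned int bus_count;
	unsigned int switch_count;
};

struct message {
	unsigned int hdr_id;
	struct init_chipset_cmd cmd;
	unsigned char payload[32];  /* would be stack garbage without an initializer */
};

static struct message make_init_msg(void)
{
	/* Members not listed here (payload[]) are implicitly zero-initialized */
	struct message msg = {
		.hdr_id = 1,  /* stand-in for CONTROLVM_CHIPSET_INIT */
		.cmd = { .bus_count = 23, .switch_count = 0 },
	};
	return msg;
}
```

With plain `struct message msg;` followed by assignments, payload[] would be indeterminate; the initializer form makes the whole object well-defined in one step.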
+18 -12
drivers/vme/bridges/vme_fake.c
···
 	}
 }
 
-static u8 fake_vmeread8(struct fake_driver *bridge, unsigned long long addr,
-			u32 aspace, u32 cycle)
+static noinline_for_stack u8 fake_vmeread8(struct fake_driver *bridge,
+					   unsigned long long addr,
+					   u32 aspace, u32 cycle)
 {
 	u8 retval = 0xff;
 	int i;
···
 	return retval;
 }
 
-static u16 fake_vmeread16(struct fake_driver *bridge, unsigned long long addr,
-			  u32 aspace, u32 cycle)
+static noinline_for_stack u16 fake_vmeread16(struct fake_driver *bridge,
+					     unsigned long long addr,
+					     u32 aspace, u32 cycle)
 {
 	u16 retval = 0xffff;
 	int i;
···
 	return retval;
 }
 
-static u32 fake_vmeread32(struct fake_driver *bridge, unsigned long long addr,
-			  u32 aspace, u32 cycle)
+static noinline_for_stack u32 fake_vmeread32(struct fake_driver *bridge,
+					     unsigned long long addr,
+					     u32 aspace, u32 cycle)
 {
 	u32 retval = 0xffffffff;
 	int i;
···
 	return retval;
 }
 
-static void fake_vmewrite8(struct fake_driver *bridge, u8 *buf,
-			   unsigned long long addr, u32 aspace, u32 cycle)
+static noinline_for_stack void fake_vmewrite8(struct fake_driver *bridge,
+					      u8 *buf, unsigned long long addr,
+					      u32 aspace, u32 cycle)
 {
 	int i;
 	unsigned long long start, end, offset;
···
 
 }
 
-static void fake_vmewrite16(struct fake_driver *bridge, u16 *buf,
-			    unsigned long long addr, u32 aspace, u32 cycle)
+static noinline_for_stack void fake_vmewrite16(struct fake_driver *bridge,
+					       u16 *buf, unsigned long long addr,
+					       u32 aspace, u32 cycle)
 {
 	int i;
 	unsigned long long start, end, offset;
···
 
 }
 
-static void fake_vmewrite32(struct fake_driver *bridge, u32 *buf,
-			    unsigned long long addr, u32 aspace, u32 cycle)
+static noinline_for_stack
+void fake_vmewrite32(struct fake_driver *bridge,
+		     u32 *buf, unsigned long long addr,
+		     u32 aspace, u32 cycle)
 {
 	int i;
 	unsigned long long start, end, offset;
+117 -239
drivers/w1/masters/omap_hdq.c
··· 38 38 #define OMAP_HDQ_INT_STATUS_TXCOMPLETE BIT(2) 39 39 #define OMAP_HDQ_INT_STATUS_RXCOMPLETE BIT(1) 40 40 #define OMAP_HDQ_INT_STATUS_TIMEOUT BIT(0) 41 - #define OMAP_HDQ_SYSCONFIG 0x14 42 - #define OMAP_HDQ_SYSCONFIG_SOFTRESET BIT(1) 43 - #define OMAP_HDQ_SYSCONFIG_AUTOIDLE BIT(0) 44 - #define OMAP_HDQ_SYSCONFIG_NOIDLE 0x0 45 - #define OMAP_HDQ_SYSSTATUS 0x18 46 - #define OMAP_HDQ_SYSSTATUS_RESETDONE BIT(0) 47 41 48 42 #define OMAP_HDQ_FLAG_CLEAR 0 49 43 #define OMAP_HDQ_FLAG_SET 1 ··· 56 62 void __iomem *hdq_base; 57 63 /* lock status update */ 58 64 struct mutex hdq_mutex; 59 - int hdq_usecount; 60 65 u8 hdq_irqstatus; 61 66 /* device lock */ 62 67 spinlock_t hdq_spinlock; 63 - /* 64 - * Used to control the call to omap_hdq_get and omap_hdq_put. 65 - * HDQ Protocol: Write the CMD|REG_address first, followed by 66 - * the data wrire or read. 67 - */ 68 - int init_trans; 69 - int rrw; 70 68 /* mode: 0-HDQ 1-W1 */ 71 69 int mode; 72 70 ··· 83 97 __raw_writel(new_val, hdq_data->hdq_base + offset); 84 98 85 99 return new_val; 86 - } 87 - 88 - static void hdq_disable_interrupt(struct hdq_data *hdq_data, u32 offset, 89 - u32 mask) 90 - { 91 - u32 ie; 92 - 93 - ie = readl(hdq_data->hdq_base + offset); 94 - writel(ie & mask, hdq_data->hdq_base + offset); 95 100 } 96 101 97 102 /* ··· 119 142 return ret; 120 143 } 121 144 145 + /* Clear saved irqstatus after using an interrupt */ 146 + static void hdq_reset_irqstatus(struct hdq_data *hdq_data) 147 + { 148 + unsigned long irqflags; 149 + 150 + spin_lock_irqsave(&hdq_data->hdq_spinlock, irqflags); 151 + hdq_data->hdq_irqstatus = 0; 152 + spin_unlock_irqrestore(&hdq_data->hdq_spinlock, irqflags); 153 + } 154 + 122 155 /* write out a byte and fill *status with HDQ_INT_STATUS */ 123 156 static int hdq_write_byte(struct hdq_data *hdq_data, u8 val, u8 *status) 124 157 { 125 158 int ret; 126 159 u8 tmp_status; 127 - unsigned long irqflags; 128 160 129 161 *status = 0; 130 - 131 - spin_lock_irqsave(&hdq_data->hdq_spinlock, 
irqflags); 132 - /* clear interrupt flags via a dummy read */ 133 - hdq_reg_in(hdq_data, OMAP_HDQ_INT_STATUS); 134 - /* ISR loads it with new INT_STATUS */ 135 - hdq_data->hdq_irqstatus = 0; 136 - spin_unlock_irqrestore(&hdq_data->hdq_spinlock, irqflags); 137 162 138 163 hdq_reg_out(hdq_data, OMAP_HDQ_TX_DATA, val); 139 164 ··· 170 191 } 171 192 172 193 out: 194 + hdq_reset_irqstatus(hdq_data); 173 195 return ret; 174 196 } 175 197 ··· 217 237 slave_found(master_dev, id); 218 238 } 219 239 220 - static int _omap_hdq_reset(struct hdq_data *hdq_data) 221 - { 222 - int ret; 223 - u8 tmp_status; 224 - 225 - hdq_reg_out(hdq_data, OMAP_HDQ_SYSCONFIG, 226 - OMAP_HDQ_SYSCONFIG_SOFTRESET); 227 - /* 228 - * Select HDQ/1W mode & enable clocks. 229 - * It is observed that INT flags can't be cleared via a read and GO/INIT 230 - * won't return to zero if interrupt is disabled. So we always enable 231 - * interrupt. 232 - */ 233 - hdq_reg_out(hdq_data, OMAP_HDQ_CTRL_STATUS, 234 - OMAP_HDQ_CTRL_STATUS_CLOCKENABLE | 235 - OMAP_HDQ_CTRL_STATUS_INTERRUPTMASK); 236 - 237 - /* wait for reset to complete */ 238 - ret = hdq_wait_for_flag(hdq_data, OMAP_HDQ_SYSSTATUS, 239 - OMAP_HDQ_SYSSTATUS_RESETDONE, OMAP_HDQ_FLAG_SET, &tmp_status); 240 - if (ret) 241 - dev_dbg(hdq_data->dev, "timeout waiting HDQ reset, %x", 242 - tmp_status); 243 - else { 244 - hdq_reg_out(hdq_data, OMAP_HDQ_CTRL_STATUS, 245 - OMAP_HDQ_CTRL_STATUS_CLOCKENABLE | 246 - OMAP_HDQ_CTRL_STATUS_INTERRUPTMASK | 247 - hdq_data->mode); 248 - hdq_reg_out(hdq_data, OMAP_HDQ_SYSCONFIG, 249 - OMAP_HDQ_SYSCONFIG_AUTOIDLE); 250 - } 251 - 252 - return ret; 253 - } 254 - 255 240 /* Issue break pulse to the device */ 256 241 static int omap_hdq_break(struct hdq_data *hdq_data) 257 242 { 258 243 int ret = 0; 259 244 u8 tmp_status; 260 - unsigned long irqflags; 261 245 262 246 ret = mutex_lock_interruptible(&hdq_data->hdq_mutex); 263 247 if (ret < 0) { ··· 229 285 ret = -EINTR; 230 286 goto rtn; 231 287 } 232 - 233 - 
spin_lock_irqsave(&hdq_data->hdq_spinlock, irqflags); 234 - /* clear interrupt flags via a dummy read */ 235 - hdq_reg_in(hdq_data, OMAP_HDQ_INT_STATUS); 236 - /* ISR loads it with new INT_STATUS */ 237 - hdq_data->hdq_irqstatus = 0; 238 - spin_unlock_irqrestore(&hdq_data->hdq_spinlock, irqflags); 239 288 240 289 /* set the INIT and GO bit */ 241 290 hdq_reg_merge(hdq_data, OMAP_HDQ_CTRL_STATUS, ··· 278 341 " return to zero, %x", tmp_status); 279 342 280 343 out: 344 + hdq_reset_irqstatus(hdq_data); 281 345 mutex_unlock(&hdq_data->hdq_mutex); 282 346 rtn: 283 347 return ret; ··· 295 357 goto rtn; 296 358 } 297 359 298 - if (!hdq_data->hdq_usecount) { 360 + if (pm_runtime_suspended(hdq_data->dev)) { 299 361 ret = -EINVAL; 300 362 goto out; 301 363 } ··· 326 388 /* the data is ready. Read it in! */ 327 389 *val = hdq_reg_in(hdq_data, OMAP_HDQ_RX_DATA); 328 390 out: 391 + hdq_reset_irqstatus(hdq_data); 329 392 mutex_unlock(&hdq_data->hdq_mutex); 330 393 rtn: 331 394 return ret; 332 395 333 - } 334 - 335 - /* Enable clocks and set the controller to HDQ/1W mode */ 336 - static int omap_hdq_get(struct hdq_data *hdq_data) 337 - { 338 - int ret = 0; 339 - 340 - ret = mutex_lock_interruptible(&hdq_data->hdq_mutex); 341 - if (ret < 0) { 342 - ret = -EINTR; 343 - goto rtn; 344 - } 345 - 346 - if (OMAP_HDQ_MAX_USER == hdq_data->hdq_usecount) { 347 - dev_dbg(hdq_data->dev, "attempt to exceed the max use count"); 348 - ret = -EINVAL; 349 - goto out; 350 - } else { 351 - hdq_data->hdq_usecount++; 352 - try_module_get(THIS_MODULE); 353 - if (1 == hdq_data->hdq_usecount) { 354 - 355 - pm_runtime_get_sync(hdq_data->dev); 356 - 357 - /* make sure HDQ/1W is out of reset */ 358 - if (!(hdq_reg_in(hdq_data, OMAP_HDQ_SYSSTATUS) & 359 - OMAP_HDQ_SYSSTATUS_RESETDONE)) { 360 - ret = _omap_hdq_reset(hdq_data); 361 - if (ret) 362 - /* back up the count */ 363 - hdq_data->hdq_usecount--; 364 - } else { 365 - /* select HDQ/1W mode & enable clocks */ 366 - hdq_reg_out(hdq_data, 
OMAP_HDQ_CTRL_STATUS, 367 - OMAP_HDQ_CTRL_STATUS_CLOCKENABLE | 368 - OMAP_HDQ_CTRL_STATUS_INTERRUPTMASK | 369 - hdq_data->mode); 370 - hdq_reg_out(hdq_data, OMAP_HDQ_SYSCONFIG, 371 - OMAP_HDQ_SYSCONFIG_NOIDLE); 372 - hdq_reg_in(hdq_data, OMAP_HDQ_INT_STATUS); 373 - } 374 - } 375 - } 376 - 377 - out: 378 - mutex_unlock(&hdq_data->hdq_mutex); 379 - rtn: 380 - return ret; 381 - } 382 - 383 - /* Disable clocks to the module */ 384 - static int omap_hdq_put(struct hdq_data *hdq_data) 385 - { 386 - int ret = 0; 387 - 388 - ret = mutex_lock_interruptible(&hdq_data->hdq_mutex); 389 - if (ret < 0) 390 - return -EINTR; 391 - 392 - hdq_reg_out(hdq_data, OMAP_HDQ_SYSCONFIG, 393 - OMAP_HDQ_SYSCONFIG_AUTOIDLE); 394 - if (0 == hdq_data->hdq_usecount) { 395 - dev_dbg(hdq_data->dev, "attempt to decrement use count" 396 - " when it is zero"); 397 - ret = -EINVAL; 398 - } else { 399 - hdq_data->hdq_usecount--; 400 - module_put(THIS_MODULE); 401 - if (0 == hdq_data->hdq_usecount) 402 - pm_runtime_put_sync(hdq_data->dev); 403 - } 404 - mutex_unlock(&hdq_data->hdq_mutex); 405 - 406 - return ret; 407 396 } 408 397 409 398 /* ··· 347 482 OMAP_HDQ_CTRL_STATUS_INTERRUPTMASK; 348 483 u8 mask = ctrl | OMAP_HDQ_CTRL_STATUS_DIR; 349 484 350 - omap_hdq_get(_hdq); 485 + err = pm_runtime_get_sync(hdq_data->dev); 486 + if (err < 0) { 487 + pm_runtime_put_noidle(hdq_data->dev); 488 + 489 + return err; 490 + } 351 491 352 492 err = mutex_lock_interruptible(&hdq_data->hdq_mutex); 353 493 if (err < 0) { ··· 360 490 goto rtn; 361 491 } 362 492 363 - hdq_data->hdq_irqstatus = 0; 364 493 /* read id_bit */ 365 494 hdq_reg_merge(_hdq, OMAP_HDQ_CTRL_STATUS, 366 495 ctrl | OMAP_HDQ_CTRL_STATUS_DIR, mask); ··· 373 504 } 374 505 id_bit = (hdq_reg_in(_hdq, OMAP_HDQ_RX_DATA) & 0x01); 375 506 376 - hdq_data->hdq_irqstatus = 0; 507 + /* Must clear irqstatus for another RXCOMPLETE interrupt */ 508 + hdq_reset_irqstatus(hdq_data); 509 + 377 510 /* read comp_bit */ 378 511 hdq_reg_merge(_hdq, OMAP_HDQ_CTRL_STATUS, 379 
512 ctrl | OMAP_HDQ_CTRL_STATUS_DIR, mask); ··· 418 547 OMAP_HDQ_CTRL_STATUS_SINGLE); 419 548 420 549 out: 550 + hdq_reset_irqstatus(hdq_data); 421 551 mutex_unlock(&hdq_data->hdq_mutex); 422 552 rtn: 423 - omap_hdq_put(_hdq); 553 + pm_runtime_mark_last_busy(hdq_data->dev); 554 + pm_runtime_put_autosuspend(hdq_data->dev); 555 + 424 556 return ret; 425 557 } 426 558 427 559 /* reset callback */ 428 560 static u8 omap_w1_reset_bus(void *_hdq) 429 561 { 430 - omap_hdq_get(_hdq); 431 - omap_hdq_break(_hdq); 432 - omap_hdq_put(_hdq); 562 + struct hdq_data *hdq_data = _hdq; 563 + int err; 564 + 565 + err = pm_runtime_get_sync(hdq_data->dev); 566 + if (err < 0) { 567 + pm_runtime_put_noidle(hdq_data->dev); 568 + 569 + return err; 570 + } 571 + 572 + omap_hdq_break(hdq_data); 573 + 574 + pm_runtime_mark_last_busy(hdq_data->dev); 575 + pm_runtime_put_autosuspend(hdq_data->dev); 576 + 433 577 return 0; 434 578 } 435 579 ··· 455 569 u8 val = 0; 456 570 int ret; 457 571 458 - /* First write to initialize the transfer */ 459 - if (hdq_data->init_trans == 0) 460 - omap_hdq_get(hdq_data); 572 + ret = pm_runtime_get_sync(hdq_data->dev); 573 + if (ret < 0) { 574 + pm_runtime_put_noidle(hdq_data->dev); 461 575 462 - ret = hdq_read_byte(hdq_data, &val); 463 - if (ret) { 464 - ret = mutex_lock_interruptible(&hdq_data->hdq_mutex); 465 - if (ret < 0) { 466 - dev_dbg(hdq_data->dev, "Could not acquire mutex\n"); 467 - return -EINTR; 468 - } 469 - hdq_data->init_trans = 0; 470 - mutex_unlock(&hdq_data->hdq_mutex); 471 - omap_hdq_put(hdq_data); 472 576 return -1; 473 577 } 474 578 475 - hdq_disable_interrupt(hdq_data, OMAP_HDQ_CTRL_STATUS, 476 - ~OMAP_HDQ_CTRL_STATUS_INTERRUPTMASK); 579 + ret = hdq_read_byte(hdq_data, &val); 580 + if (ret) 581 + ret = -1; 477 582 478 - /* Write followed by a read, release the module */ 479 - if (hdq_data->init_trans) { 480 - ret = mutex_lock_interruptible(&hdq_data->hdq_mutex); 481 - if (ret < 0) { 482 - dev_dbg(hdq_data->dev, "Could not acquire mutex\n"); 
483 - return -EINTR; 484 - } 485 - hdq_data->init_trans = 0; 486 - mutex_unlock(&hdq_data->hdq_mutex); 487 - omap_hdq_put(hdq_data); 488 - } 583 + pm_runtime_mark_last_busy(hdq_data->dev); 584 + pm_runtime_put_autosuspend(hdq_data->dev); 489 585 490 586 return val; 491 587 } ··· 479 611 int ret; 480 612 u8 status; 481 613 482 - /* First write to initialize the transfer */ 483 - if (hdq_data->init_trans == 0) 484 - omap_hdq_get(hdq_data); 614 + ret = pm_runtime_get_sync(hdq_data->dev); 615 + if (ret < 0) { 616 + pm_runtime_put_noidle(hdq_data->dev); 617 + 618 + return; 619 + } 485 620 486 621 /* 487 622 * We need to reset the slave before ··· 494 623 if (byte == W1_SKIP_ROM) 495 624 omap_hdq_break(hdq_data); 496 625 497 - ret = mutex_lock_interruptible(&hdq_data->hdq_mutex); 498 - if (ret < 0) { 499 - dev_dbg(hdq_data->dev, "Could not acquire mutex\n"); 500 - return; 501 - } 502 - hdq_data->init_trans++; 503 - mutex_unlock(&hdq_data->hdq_mutex); 504 - 505 626 ret = hdq_write_byte(hdq_data, byte, &status); 506 627 if (ret < 0) { 507 628 dev_dbg(hdq_data->dev, "TX failure:Ctrl status %x\n", status); 508 - return; 629 + goto out_err; 509 630 } 510 631 511 - /* Second write, data transferred. 
Release the module */ 512 - if (hdq_data->init_trans > 1) { 513 - omap_hdq_put(hdq_data); 514 - ret = mutex_lock_interruptible(&hdq_data->hdq_mutex); 515 - if (ret < 0) { 516 - dev_dbg(hdq_data->dev, "Could not acquire mutex\n"); 517 - return; 518 - } 519 - hdq_data->init_trans = 0; 520 - mutex_unlock(&hdq_data->hdq_mutex); 521 - } 632 + out_err: 633 + pm_runtime_mark_last_busy(hdq_data->dev); 634 + pm_runtime_put_autosuspend(hdq_data->dev); 522 635 } 523 636 524 637 static struct w1_bus_master omap_w1_master = { 525 638 .read_byte = omap_w1_read_byte, 526 639 .write_byte = omap_w1_write_byte, 527 640 .reset_bus = omap_w1_reset_bus, 641 + }; 642 + 643 + static int __maybe_unused omap_hdq_runtime_suspend(struct device *dev) 644 + { 645 + struct hdq_data *hdq_data = dev_get_drvdata(dev); 646 + 647 + hdq_reg_out(hdq_data, 0, hdq_data->mode); 648 + hdq_reg_in(hdq_data, OMAP_HDQ_INT_STATUS); 649 + 650 + return 0; 651 + } 652 + 653 + static int __maybe_unused omap_hdq_runtime_resume(struct device *dev) 654 + { 655 + struct hdq_data *hdq_data = dev_get_drvdata(dev); 656 + 657 + /* select HDQ/1W mode & enable clocks */ 658 + hdq_reg_out(hdq_data, OMAP_HDQ_CTRL_STATUS, 659 + OMAP_HDQ_CTRL_STATUS_CLOCKENABLE | 660 + OMAP_HDQ_CTRL_STATUS_INTERRUPTMASK | 661 + hdq_data->mode); 662 + hdq_reg_in(hdq_data, OMAP_HDQ_INT_STATUS); 663 + 664 + return 0; 665 + } 666 + 667 + static const struct dev_pm_ops omap_hdq_pm_ops = { 668 + SET_RUNTIME_PM_OPS(omap_hdq_runtime_suspend, 669 + omap_hdq_runtime_resume, NULL) 528 670 }; 529 671 530 672 static int omap_hdq_probe(struct platform_device *pdev) ··· 561 677 if (IS_ERR(hdq_data->hdq_base)) 562 678 return PTR_ERR(hdq_data->hdq_base); 563 679 564 - hdq_data->hdq_usecount = 0; 565 - hdq_data->rrw = 0; 566 680 mutex_init(&hdq_data->hdq_mutex); 567 681 568 - pm_runtime_enable(&pdev->dev); 569 - ret = pm_runtime_get_sync(&pdev->dev); 570 - if (ret < 0) { 571 - dev_dbg(&pdev->dev, "pm_runtime_get_sync failed\n"); 572 - goto err_w1; 682 + ret = 
+	ret = of_property_read_string(pdev->dev.of_node, "ti,mode", &mode);
+	if (ret < 0 || !strcmp(mode, "hdq")) {
+		hdq_data->mode = 0;
+		omap_w1_master.search = omap_w1_search_bus;
+	} else {
+		hdq_data->mode = 1;
+		omap_w1_master.triplet = omap_w1_triplet;
 	}
 
-	ret = _omap_hdq_reset(hdq_data);
-	if (ret) {
-		dev_dbg(&pdev->dev, "reset failed\n");
-		goto err_irq;
+	pm_runtime_enable(&pdev->dev);
+	pm_runtime_use_autosuspend(&pdev->dev);
+	pm_runtime_set_autosuspend_delay(&pdev->dev, 300);
+	ret = pm_runtime_get_sync(&pdev->dev);
+	if (ret < 0) {
+		pm_runtime_put_noidle(&pdev->dev);
+		dev_dbg(&pdev->dev, "pm_runtime_get_sync failed\n");
+		goto err_w1;
 	}
 
 	rev = hdq_reg_in(hdq_data, OMAP_HDQ_REVISION);
···
 	omap_hdq_break(hdq_data);
 
-	pm_runtime_put_sync(&pdev->dev);
-
-	ret = of_property_read_string(pdev->dev.of_node, "ti,mode", &mode);
-	if (ret < 0 || !strcmp(mode, "hdq")) {
-		hdq_data->mode = 0;
-		omap_w1_master.search = omap_w1_search_bus;
-	} else {
-		hdq_data->mode = 1;
-		omap_w1_master.triplet = omap_w1_triplet;
-	}
+	pm_runtime_mark_last_busy(&pdev->dev);
+	pm_runtime_put_autosuspend(&pdev->dev);
 
 	omap_w1_master.data = hdq_data;
 
···
 err_irq:
 	pm_runtime_put_sync(&pdev->dev);
 err_w1:
+	pm_runtime_dont_use_autosuspend(&pdev->dev);
 	pm_runtime_disable(&pdev->dev);
 
 	return ret;
···
 static int omap_hdq_remove(struct platform_device *pdev)
 {
-	struct hdq_data *hdq_data = platform_get_drvdata(pdev);
+	int active;
 
-	mutex_lock(&hdq_data->hdq_mutex);
-
-	if (hdq_data->hdq_usecount) {
-		dev_dbg(&pdev->dev, "removed when use count is not zero\n");
-		mutex_unlock(&hdq_data->hdq_mutex);
-		return -EBUSY;
-	}
-
-	mutex_unlock(&hdq_data->hdq_mutex);
-
-	/* remove module dependency */
-	pm_runtime_disable(&pdev->dev);
+	active = pm_runtime_get_sync(&pdev->dev);
+	if (active < 0)
+		pm_runtime_put_noidle(&pdev->dev);
 
 	w1_remove_master_device(&omap_w1_master);
+
+	pm_runtime_dont_use_autosuspend(&pdev->dev);
+	if (active >= 0)
+		pm_runtime_put_sync(&pdev->dev);
+	pm_runtime_disable(&pdev->dev);
 
 	return 0;
 }
···
 	.driver = {
 		.name = "omap_hdq",
 		.of_match_table = omap_hdq_dt_ids,
+		.pm = &omap_hdq_pm_ops,
 	},
 };
 module_platform_driver(omap_hdq_driver);
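The probe/remove rework above replaces the driver's hand-rolled `hdq_usecount` bookkeeping with runtime-PM usage counting plus autosuspend. A rough, self-contained userspace model of that counting discipline follows; `struct rpm_dev` and the `rpm_*` names are hypothetical stand-ins for the kernel's `pm_runtime_*` helpers, and the autosuspend delay is modeled as immediate:

```c
#include <assert.h>

/* Hypothetical model of a runtime-PM managed device: a usage count
 * and a suspended flag. */
struct rpm_dev {
	int usage_count;
	int suspended;
};

/* Models pm_runtime_get_sync(): take a usage reference, resume. */
static int rpm_get_sync(struct rpm_dev *d)
{
	d->usage_count++;
	d->suspended = 0;
	return 0;
}

/* Models pm_runtime_put_autosuspend(): drop the reference; once the
 * count reaches zero the device is allowed to suspend. */
static void rpm_put_autosuspend(struct rpm_dev *d)
{
	if (--d->usage_count == 0)
		d->suspended = 1;
}

/* Models pm_runtime_put_noidle(): drop the reference taken by a failed
 * get_sync() without triggering a suspend transition. */
static void rpm_put_noidle(struct rpm_dev *d)
{
	d->usage_count--;
}
```

The invariant the diff maintains is that every get is balanced by exactly one put on every exit path (success, error, and remove), so the controller can autosuspend between transfers.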
+100
include/dt-bindings/interconnect/qcom,msm8916.h
···
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Qualcomm interconnect IDs
+ *
+ * Copyright (c) 2019, Linaro Ltd.
+ * Author: Georgi Djakov <georgi.djakov@linaro.org>
+ */
+
+#ifndef __DT_BINDINGS_INTERCONNECT_QCOM_MSM8916_H
+#define __DT_BINDINGS_INTERCONNECT_QCOM_MSM8916_H
+
+#define BIMC_SNOC_SLV		0
+#define MASTER_JPEG		1
+#define MASTER_MDP_PORT0	2
+#define MASTER_QDSS_BAM		3
+#define MASTER_QDSS_ETR		4
+#define MASTER_SNOC_CFG		5
+#define MASTER_VFE		6
+#define MASTER_VIDEO_P0		7
+#define SNOC_MM_INT_0		8
+#define SNOC_MM_INT_1		9
+#define SNOC_MM_INT_2		10
+#define SNOC_MM_INT_BIMC	11
+#define PCNOC_SNOC_SLV		12
+#define SLAVE_APSS		13
+#define SLAVE_CATS_128		14
+#define SLAVE_OCMEM_64		15
+#define SLAVE_IMEM		16
+#define SLAVE_QDSS_STM		17
+#define SLAVE_SRVC_SNOC		18
+#define SNOC_BIMC_0_MAS		19
+#define SNOC_BIMC_1_MAS		20
+#define SNOC_INT_0		21
+#define SNOC_INT_1		22
+#define SNOC_INT_BIMC		23
+#define SNOC_PCNOC_MAS		24
+#define SNOC_QDSS_INT		25
+
+#define BIMC_SNOC_MAS		0
+#define MASTER_AMPSS_M0		1
+#define MASTER_GRAPHICS_3D	2
+#define MASTER_TCU0		3
+#define MASTER_TCU1		4
+#define SLAVE_AMPSS_L2		5
+#define SLAVE_EBI_CH0		6
+#define SNOC_BIMC_0_SLV		7
+#define SNOC_BIMC_1_SLV		8
+
+#define MASTER_BLSP_1		0
+#define MASTER_DEHR		1
+#define MASTER_LPASS		2
+#define MASTER_CRYPTO_CORE0	3
+#define MASTER_SDCC_1		4
+#define MASTER_SDCC_2		5
+#define MASTER_SPDM		6
+#define MASTER_USB_HS		7
+#define PCNOC_INT_0		8
+#define PCNOC_INT_1		9
+#define PCNOC_MAS_0		10
+#define PCNOC_MAS_1		11
+#define PCNOC_SLV_0		12
+#define PCNOC_SLV_1		13
+#define PCNOC_SLV_2		14
+#define PCNOC_SLV_3		15
+#define PCNOC_SLV_4		16
+#define PCNOC_SLV_8		17
+#define PCNOC_SLV_9		18
+#define PCNOC_SNOC_MAS		19
+#define SLAVE_BIMC_CFG		20
+#define SLAVE_BLSP_1		21
+#define SLAVE_BOOT_ROM		22
+#define SLAVE_CAMERA_CFG	23
+#define SLAVE_CLK_CTL		24
+#define SLAVE_CRYPTO_0_CFG	25
+#define SLAVE_DEHR_CFG		26
+#define SLAVE_DISPLAY_CFG	27
+#define SLAVE_GRAPHICS_3D_CFG	28
+#define SLAVE_IMEM_CFG		29
+#define SLAVE_LPASS		30
+#define SLAVE_MPM		31
+#define SLAVE_MSG_RAM		32
+#define SLAVE_MSS		33
+#define SLAVE_PDM		34
+#define SLAVE_PMIC_ARB		35
+#define SLAVE_PCNOC_CFG		36
+#define SLAVE_PRNG		37
+#define SLAVE_QDSS_CFG		38
+#define SLAVE_RBCPR_CFG		39
+#define SLAVE_SDCC_1		40
+#define SLAVE_SDCC_2		41
+#define SLAVE_SECURITY		42
+#define SLAVE_SNOC_CFG		43
+#define SLAVE_SPDM		44
+#define SLAVE_TCSR		45
+#define SLAVE_TLMM		46
+#define SLAVE_USB_HS		47
+#define SLAVE_VENUS_CFG		48
+#define SNOC_PCNOC_SLV		49
+
+#endif
+1
include/linux/alcor_pci.h
···
 #define PCI_ID_ALCOR_MICRO	0x1AEA
 #define PCI_ID_AU6601		0x6601
 #define PCI_ID_AU6621		0x6621
+#define PCI_ID_AU6625		0x6625
 
 #define MHZ_TO_HZ(freq)	((freq) * 1000 * 1000)
 
+15 -15
include/linux/extcon.h
···
  * Following APIs get the connected state of each external connector.
  * The 'id' argument indicates the defined external connector.
  */
-extern int extcon_get_state(struct extcon_dev *edev, unsigned int id);
+int extcon_get_state(struct extcon_dev *edev, unsigned int id);
 
 /*
  * Following APIs get the property of each external connector.
···
  * for each external connector. They are used to get the capability of the
  * property of each external connector based on the id and property.
  */
-extern int extcon_get_property(struct extcon_dev *edev, unsigned int id,
+int extcon_get_property(struct extcon_dev *edev, unsigned int id,
 			       unsigned int prop,
 			       union extcon_property_value *prop_val);
-extern int extcon_get_property_capability(struct extcon_dev *edev,
+int extcon_get_property_capability(struct extcon_dev *edev,
 					  unsigned int id, unsigned int prop);
 
 /*
···
  * extcon_register_notifier_all(*edev, *nb) : Register a notifier block
  *			for all supported external connectors of the extcon.
  */
-extern int extcon_register_notifier(struct extcon_dev *edev, unsigned int id,
+int extcon_register_notifier(struct extcon_dev *edev, unsigned int id,
 				    struct notifier_block *nb);
-extern int extcon_unregister_notifier(struct extcon_dev *edev, unsigned int id,
+int extcon_unregister_notifier(struct extcon_dev *edev, unsigned int id,
 				      struct notifier_block *nb);
-extern int devm_extcon_register_notifier(struct device *dev,
+int devm_extcon_register_notifier(struct device *dev,
 					 struct extcon_dev *edev, unsigned int id,
 					 struct notifier_block *nb);
-extern void devm_extcon_unregister_notifier(struct device *dev,
+void devm_extcon_unregister_notifier(struct device *dev,
 					    struct extcon_dev *edev, unsigned int id,
 					    struct notifier_block *nb);
 
-extern int extcon_register_notifier_all(struct extcon_dev *edev,
+int extcon_register_notifier_all(struct extcon_dev *edev,
 					struct notifier_block *nb);
-extern int extcon_unregister_notifier_all(struct extcon_dev *edev,
+int extcon_unregister_notifier_all(struct extcon_dev *edev,
 					  struct notifier_block *nb);
-extern int devm_extcon_register_notifier_all(struct device *dev,
+int devm_extcon_register_notifier_all(struct device *dev,
 					     struct extcon_dev *edev,
 					     struct notifier_block *nb);
-extern void devm_extcon_unregister_notifier_all(struct device *dev,
+void devm_extcon_unregister_notifier_all(struct device *dev,
 						struct extcon_dev *edev,
 						struct notifier_block *nb);
 
 /*
  * Following APIs get the extcon_dev from devicetree or by through extcon name.
  */
-extern struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name);
-extern struct extcon_dev *extcon_find_edev_by_node(struct device_node *node);
-extern struct extcon_dev *extcon_get_edev_by_phandle(struct device *dev,
+struct extcon_dev *extcon_get_extcon_dev(const char *extcon_name);
+struct extcon_dev *extcon_find_edev_by_node(struct device_node *node);
+struct extcon_dev *extcon_get_edev_by_phandle(struct device *dev,
 						     int index);
 
 /* Following API get the name of extcon device. */
-extern const char *extcon_get_edev_name(struct extcon_dev *edev);
+const char *extcon_get_edev_name(struct extcon_dev *edev);
 
 #else /* CONFIG_EXTCON */
 static inline int extcon_get_state(struct extcon_dev *edev, unsigned int id)
+14
include/linux/interconnect-provider.h
···
 
 #if IS_ENABLED(CONFIG_INTERCONNECT)
 
+int icc_std_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+		      u32 peak_bw, u32 *agg_avg, u32 *agg_peak);
 struct icc_node *icc_node_create(int id);
 void icc_node_destroy(int id);
 int icc_link_create(struct icc_node *node, const int dst_id);
 int icc_link_destroy(struct icc_node *src, struct icc_node *dst);
 void icc_node_add(struct icc_node *node, struct icc_provider *provider);
 void icc_node_del(struct icc_node *node);
+int icc_nodes_remove(struct icc_provider *provider);
 int icc_provider_add(struct icc_provider *provider);
 int icc_provider_del(struct icc_provider *provider);
 
 #else
 
+static inline int icc_std_aggregate(struct icc_node *node, u32 tag, u32 avg_bw,
+				    u32 peak_bw, u32 *agg_avg, u32 *agg_peak)
+{
+	return -ENOTSUPP;
+}
+
 static inline struct icc_node *icc_node_create(int id)
 {
···
 static inline void icc_node_del(struct icc_node *node)
 {
 }
+
+static inline int icc_nodes_remove(struct icc_provider *provider)
+{
+	return -ENOTSUPP;
+}
 
 static inline int icc_provider_add(struct icc_provider *provider)
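The new `icc_std_aggregate()` gives interconnect providers a stock bandwidth-aggregation callback. A minimal userspace sketch of the conventional policy it implements, i.e. sum the average-bandwidth requests and keep the maximum of the peak requests (`std_aggregate` here is a hypothetical free-standing version; the real helper also takes the node and tag):

```c
#include <stdint.h>

/* Sum average bandwidth across requests; keep the largest peak. */
static int std_aggregate(uint32_t avg_bw, uint32_t peak_bw,
			 uint32_t *agg_avg, uint32_t *agg_peak)
{
	*agg_avg += avg_bw;
	if (peak_bw > *agg_peak)
		*agg_peak = peak_bw;
	return 0;
}
```

Averages add because concurrent consumers load the bus cumulatively, while only the single largest burst constrains the peak.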
+3 -1
include/linux/soundwire/sdw.h
···
  * @debugfs: Slave debugfs
  * @node: node for bus list
  * @port_ready: Port ready completion flag for each Slave port
- * @dev_num: Device Number assigned by Bus
+ * @dev_num: Current Device Number, values can be 0 or dev_num_sticky
+ * @dev_num_sticky: one-time static Device Number assigned by Bus
  * @probed: boolean tracking driver state
  * @probe_complete: completion utility to control potential races
  * on startup between driver probe/initialization and SoundWire
···
 	struct list_head node;
 	struct completion *port_ready;
 	u16 dev_num;
+	u16 dev_num_sticky;
 	bool probed;
 	struct completion probe_complete;
 	struct completion enumeration_complete;
+11
include/linux/soundwire/sdw_intel.h
···
 #define __SDW_INTEL_H
 
 #include <linux/irqreturn.h>
+#include <linux/soundwire/sdw.h>
 
 /**
  * struct sdw_intel_stream_params_data: configuration passed during
···
  */
 #define SDW_INTEL_CLK_STOP_BUS_RESET		BIT(3)
 
+struct sdw_intel_slave_id {
+	int link_id;
+	struct sdw_slave_id id;
+};
+
 /**
  * struct sdw_intel_ctx - context allocated by the controller
  * driver probe
···
  * hardware capabilities after all power dependencies are settled.
  * @link_mask: bit-wise mask listing SoundWire links reported by the
  * Controller
+ * @num_slaves: total number of devices exposed across all enabled links
  * @handle: ACPI parent handle
  * @links: information for each link (controller-specific and kept
  * opaque here)
+ * @ids: array of slave_id, representing Slaves exposed across all enabled
+ * links
  * @link_list: list to handle interrupts across all links
  * @shim_lock: mutex to handle concurrent rmw access to shared SHIM registers.
  */
···
 	int count;
 	void __iomem *mmio_base;
 	u32 link_mask;
+	int num_slaves;
 	acpi_handle handle;
 	struct sdw_intel_link_res *links;
+	struct sdw_intel_slave_id *ids;
 	struct list_head link_list;
 	struct mutex shim_lock; /* lock for access to shared SHIM registers */
 };
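The new `ids[]` array lets a consumer such as the SOF driver see which Slaves were reported on which link before streaming starts. A hedged sketch of walking that table; `struct sdw_slave_id` is stubbed to a single field here (its real definition lives in `<linux/soundwire/sdw.h>`), and `slaves_on_link` is a hypothetical helper, not a kernel API:

```c
#include <assert.h>

/* Stub of the real struct sdw_slave_id, reduced for illustration. */
struct sdw_slave_id {
	unsigned int mfg_id;
};

/* Mirrors the new sdw_intel_slave_id from the diff: one entry per
 * Slave, tagged with the link it was found on. */
struct sdw_intel_slave_id {
	int link_id;
	struct sdw_slave_id id;
};

/* Count how many reported Slaves live on a given link. */
static int slaves_on_link(const struct sdw_intel_slave_id *ids,
			  int num_slaves, int link_id)
{
	int i, n = 0;

	for (i = 0; i < num_slaves; i++)
		if (ids[i].link_id == link_id)
			n++;
	return n;
}
```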
+9
include/uapi/misc/pvpanic.h
···
+/* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
+
+#ifndef __PVPANIC_H__
+#define __PVPANIC_H__
+
+#define PVPANIC_PANICKED	(1 << 0)
+#define PVPANIC_CRASH_LOADED	(1 << 1)
+
+#endif /* __PVPANIC_H__ */
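The two event bits above are what a guest writes to the pvpanic device to tell the hypervisor it has panicked, or that a crash kernel has been loaded. A small userspace sketch of composing the event byte; the actual write to the device (traditionally an `outb()` to port 0x505 on QEMU's ISA variant) is out of scope here, and `pvpanic_event` is an illustrative helper, not part of the UAPI:

```c
#include <stdint.h>

#define PVPANIC_PANICKED	(1 << 0)
#define PVPANIC_CRASH_LOADED	(1 << 1)

/* Build the event byte a guest would write to the pvpanic port. */
static uint8_t pvpanic_event(int panicked, int crash_loaded)
{
	uint8_t ev = 0;

	if (panicked)
		ev |= PVPANIC_PANICKED;
	if (crash_loaded)
		ev |= PVPANIC_CRASH_LOADED;
	return ev;
}
```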