Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-drivers-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull SoC driver updates from Arnd Bergmann:
"The highlights for the driver support this time are

- Qualcomm platforms gain support for the Qualcomm Secure Execution
Environment firmware interface to access EFI variables on certain
devices, and new features for multiple platform and firmware
drivers.

- Arm FF-A firmware support gains v1.1 specification features, in
particular notification and memory transaction descriptor changes.

- SCMI firmware support now includes v3.2 features for clock and DVFS
configuration and a new transport for Qualcomm platforms.

- Minor cleanups and bugfixes are added to pretty much all the active
platforms: qualcomm, broadcom, dove, ti-k3, rockchip, sifive,
amlogic, atmel, tegra, aspeed, vexpress, mediatek, samsung and
more.

In particular, this contains portions of the treewide conversion to
use __counted_by annotations and the device_get_match_data helper"

* tag 'soc-drivers-6.7' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (156 commits)
soc: qcom: pmic_glink_altmode: Print return value on error
firmware: qcom: scm: remove unneeded 'extern' specifiers
firmware: qcom: scm: add a missing forward declaration for struct device
firmware: qcom: move Qualcomm code into its own directory
soc: samsung: exynos-chipid: Convert to platform remove callback returning void
soc: qcom: apr: Add __counted_by for struct apr_rx_buf and use struct_size()
soc: qcom: pmic_glink: fix connector type to be DisplayPort
soc: ti: k3-socinfo: Avoid overriding return value
soc: ti: k3-socinfo: Fix typo in bitfield documentation
soc: ti: knav_qmss_queue: Use device_get_match_data()
firmware: ti_sci: Use device_get_match_data()
firmware: qcom: qseecom: add missing include guards
soc/pxa: ssp: Convert to platform remove callback returning void
soc/mediatek: mtk-mmsys: Convert to platform remove callback returning void
soc/mediatek: mtk-devapc: Convert to platform remove callback returning void
soc/loongson: loongson2_guts: Convert to platform remove callback returning void
soc/litex: litex_soc_ctrl: Convert to platform remove callback returning void
soc/ixp4xx: ixp4xx-qmgr: Convert to platform remove callback returning void
soc/ixp4xx: ixp4xx-npe: Convert to platform remove callback returning void
soc/hisilicon: kunpeng_hccs: Convert to platform remove callback returning void
...

+4259 -697
+3 -1
Documentation/devicetree/bindings/arm/cpus.yaml
···
     power-domains property.

     For PSCI based platforms, the name corresponding to the index of the PSCI
-    PM domain provider, must be "psci".
+    PM domain provider, must be "psci". For SCMI based platforms, the name
+    corresponding to the index of an SCMI performance domain provider, must be
+    "perf".

   qcom,saw:
     $ref: /schemas/types.yaml#/definitions/phandle
+10
Documentation/devicetree/bindings/cache/qcom,llcc.yaml
···
 properties:
   compatible:
     enum:
+      - qcom,qdu1000-llcc
       - qcom,sc7180-llcc
       - qcom,sc7280-llcc
       - qcom,sc8180x-llcc
···
   interrupts:
     maxItems: 1
+
+  nvmem-cells:
+    items:
+      - description: Reference to an nvmem node for multi channel DDR
+
+  nvmem-cell-names:
+    items:
+      - const: multi-chan-ddr

 required:
   - compatible
···
       compatible:
         contains:
           enum:
+            - qcom,qdu1000-llcc
             - qcom,sc8180x-llcc
             - qcom,sc8280xp-llcc
       then:
+13 -2
Documentation/devicetree/bindings/firmware/arm,scmi.yaml
···
         with shmem address(4KB-page, offset) as parameters
       items:
         - const: arm,scmi-smc-param
+    - description: SCMI compliant firmware with Qualcomm SMC/HVC transport
+      items:
+        - const: qcom,scmi-smc
     - description: SCMI compliant firmware with SCMI Virtio transport.
         The virtio transport only supports a single device.
       items:
···
       '#clock-cells':
         const: 1

-      required:
-        - '#clock-cells'
+      '#power-domain-cells':
+        const: 1
+
+      oneOf:
+        - required:
+            - '#clock-cells'
+
+        - required:
+            - '#power-domain-cells'

     protocol@14:
       $ref: '#/$defs/protocol-node'
···
             enum:
               - arm,scmi-smc
               - arm,scmi-smc-param
+              - qcom,scmi-smc
         then:
           required:
             - arm,smc-id
+10
Documentation/devicetree/bindings/firmware/qcom,scm.yaml
···
       - qcom,scm-apq8064
       - qcom,scm-apq8084
       - qcom,scm-ipq4019
+      - qcom,scm-ipq5018
       - qcom,scm-ipq5332
       - qcom,scm-ipq6018
       - qcom,scm-ipq806x
···
       - qcom,scm-sm6125
       - qcom,scm-sm6350
       - qcom,scm-sm6375
+      - qcom,scm-sm7150
       - qcom,scm-sm8150
       - qcom,scm-sm8250
       - qcom,scm-sm8350
···
       The wait-queue interrupt that firmware raises as part of handshake
       protocol to handle sleeping SCM calls.
     maxItems: 1
+
+  qcom,sdi-enabled:
+    description:
+      Indicates that the SDI (Secure Debug Image) has been enabled by TZ
+      by default and it needs to be disabled.
+      If not disabled WDT assertion or reboot will cause the board to hang
+      in the debug mode.
+    type: boolean

   qcom,dload-mode:
     $ref: /schemas/types.yaml#/definitions/phandle-array
+1
Documentation/devicetree/bindings/memory-controllers/ingenic,nemc.yaml
···
   ".*@[0-9]+$":
     type: object
     $ref: mc-peripheral-props.yaml#
+    additionalProperties: true

 required:
   - compatible
+2
Documentation/devicetree/bindings/memory-controllers/renesas,rpc-if.yaml
···
 patternProperties:
   "flash@[0-9a-f]+$":
     type: object
+    additionalProperties: true
+
     properties:
       compatible:
         contains:
+1 -1
Documentation/devicetree/bindings/memory-controllers/ti,gpmc.yaml
···
       bus. The device can be a NAND chip, SRAM device, NOR device
       or an ASIC.
     $ref: ti,gpmc-child.yaml
-
+    additionalProperties: true

 required:
   - compatible
+9 -8
Documentation/devicetree/bindings/power/power-domain.yaml
···
 description: |+
   System on chip designs are often divided into multiple PM domains that can be
-  used for power gating of selected IP blocks for power saving by reduced leakage
-  current.
+  used for power gating of selected IP blocks for power saving by reduced
+  leakage current. Moreover, in some cases the similar PM domains may also be
+  capable of scaling performance for a group of IP blocks.

 This device tree binding can be used to bind PM domain consumer devices with
 their PM domains provided by PM domain providers. A PM domain provider can be
···
 properties:
   $nodename:
-    pattern: "^(power-controller|power-domain)([@-].*)?$"
+    pattern: "^(power-controller|power-domain|performance-domain)([@-].*)?$"

   domain-idle-states:
     $ref: /schemas/types.yaml#/definitions/phandle-array
···
   operating-points-v2:
     description:
-      Phandles to the OPP tables of power domains provided by a power domain
-      provider. If the provider provides a single power domain only or all
-      the power domains provided by the provider have identical OPP tables,
-      then this shall contain a single phandle. Refer to ../opp/opp-v2-base.yaml
-      for more information.
+      Phandles to the OPP tables of power domains that are capable of scaling
+      performance, provided by a power domain provider. If the provider provides
+      a single power domain only or all the power domains provided by the
+      provider have identical OPP tables, then this shall contain a single
+      phandle. Refer to ../opp/opp-v2-base.yaml for more information.

   "#power-domain-cells":
     description:
+11
Documentation/devicetree/bindings/reserved-memory/qcom,rmtfs-mem.yaml
···
     description: >
       identifier of the client to use this region for buffers

+  qcom,use-guard-pages:
+    type: boolean
+    description: >
+      Indicates that the firmware, or hardware, does not gracefully handle
+      memory protection of this region when placed adjacent to other protected
+      memory regions, and that padding around the used portion of the memory
+      region is necessary.
+
+      When this is set, the first and last page should be left unused, and the
+      effective size of the region will thereby shrink with two pages.
+
   qcom,vmid:
     $ref: /schemas/types.yaml#/definitions/uint32-array
     description: >
+1
Documentation/devicetree/bindings/soc/mediatek/mtk-svs.yaml
···
   compatible:
     enum:
       - mediatek,mt8183-svs
+      - mediatek,mt8188-svs
       - mediatek,mt8192-svs

   reg:
+2
Documentation/devicetree/bindings/soc/qcom/qcom,geni-se.yaml
···
   iommus:
     maxItems: 1

+  dma-coherent: true
+
 required:
   - compatible
   - reg
+13
MAINTAINERS
···
 F: Documentation/devicetree/bindings/mtd/qcom,nandc.yaml
 F: drivers/mtd/nand/raw/qcom_nandc.c

+QUALCOMM QSEECOM DRIVER
+M: Maximilian Luz <luzmaximilian@gmail.com>
+L: linux-arm-msm@vger.kernel.org
+S: Maintained
+F: drivers/firmware/qcom/qcom_qseecom.c
+
+QUALCOMM QSEECOM UEFISECAPP DRIVER
+M: Maximilian Luz <luzmaximilian@gmail.com>
+L: linux-arm-msm@vger.kernel.org
+S: Maintained
+F: drivers/firmware/qcom/qcom_qseecom_uefisecapp.c
+
 QUALCOMM RMNET DRIVER
 M: Subash Abhinov Kasiviswanathan <quic_subashab@quicinc.com>
 M: Sean Tranchetti <quic_stranche@quicinc.com>
···
 F: drivers/cpufreq/sc[mp]i-cpufreq.c
 F: drivers/firmware/arm_scmi/
 F: drivers/firmware/arm_scpi.c
+F: drivers/pmdomain/arm/
 F: drivers/powercap/arm_scmi_powercap.c
 F: drivers/regulator/scmi-regulator.c
 F: drivers/reset/reset-scmi.c
+8 -2
arch/arm64/kvm/hyp/nvhe/ffa.c
···
     DECLARE_REG(u32, fraglen, ctxt, 2);
     DECLARE_REG(u64, addr_mbz, ctxt, 3);
     DECLARE_REG(u32, npages_mbz, ctxt, 4);
+    struct ffa_mem_region_attributes *ep_mem_access;
     struct ffa_composite_mem_region *reg;
     struct ffa_mem_region *buf;
     u32 offset, nr_ranges;
···
     buf = hyp_buffers.tx;
     memcpy(buf, host_buffers.tx, fraglen);

-    offset = buf->ep_mem_access[0].composite_off;
+    ep_mem_access = (void *)buf +
+            ffa_mem_desc_offset(buf, 0, FFA_VERSION_1_0);
+    offset = ep_mem_access->composite_off;
     if (!offset || buf->ep_count != 1 || buf->sender_id != HOST_FFA_ID) {
         ret = FFA_RET_INVALID_PARAMETERS;
         goto out_unlock;
···
     DECLARE_REG(u32, handle_lo, ctxt, 1);
     DECLARE_REG(u32, handle_hi, ctxt, 2);
     DECLARE_REG(u32, flags, ctxt, 3);
+    struct ffa_mem_region_attributes *ep_mem_access;
     struct ffa_composite_mem_region *reg;
     u32 offset, len, fraglen, fragoff;
     struct ffa_mem_region *buf;
···
     len = res->a1;
     fraglen = res->a2;

-    offset = buf->ep_mem_access[0].composite_off;
+    ep_mem_access = (void *)buf +
+            ffa_mem_desc_offset(buf, 0, FFA_VERSION_1_0);
+    offset = ep_mem_access->composite_off;
     /*
      * We can trust the SPMD to get this right, but let's at least
      * check that we end up with something that doesn't look _completely_
+1
arch/riscv/Kconfig.socs
···
     bool "StarFive SoCs"
     select PINCTRL
     select RESET_CONTROLLER
+    select ARM_AMBA
     help
       This enables support for StarFive SoC platform hardware.
+6 -5
drivers/base/power/domain.c
···
 #define genpd_is_active_wakeup(genpd)  (genpd->flags & GENPD_FLAG_ACTIVE_WAKEUP)
 #define genpd_is_cpu_domain(genpd)     (genpd->flags & GENPD_FLAG_CPU_DOMAIN)
 #define genpd_is_rpm_always_on(genpd)  (genpd->flags & GENPD_FLAG_RPM_ALWAYS_ON)
+#define genpd_is_opp_table_fw(genpd)   (genpd->flags & GENPD_FLAG_OPP_TABLE_FW)

 static inline bool irq_safe_dev_in_sleep_domain(struct device *dev,
                                                 const struct generic_pm_domain *genpd)
···
     genpd->dev.of_node = np;

     /* Parse genpd OPP table */
-    if (genpd->set_performance_state) {
+    if (!genpd_is_opp_table_fw(genpd) && genpd->set_performance_state) {
         ret = dev_pm_opp_of_add_table(&genpd->dev);
         if (ret)
             return dev_err_probe(&genpd->dev, ret, "Failed to add OPP table\n");
···
     ret = genpd_add_provider(np, genpd_xlate_simple, genpd);
     if (ret) {
-        if (genpd->set_performance_state) {
+        if (!genpd_is_opp_table_fw(genpd) && genpd->set_performance_state) {
             dev_pm_opp_put_opp_table(genpd->opp_table);
             dev_pm_opp_of_remove_table(&genpd->dev);
         }
···
     genpd->dev.of_node = np;

     /* Parse genpd OPP table */
-    if (genpd->set_performance_state) {
+    if (!genpd_is_opp_table_fw(genpd) && genpd->set_performance_state) {
         ret = dev_pm_opp_of_add_table_indexed(&genpd->dev, i);
         if (ret) {
             dev_err_probe(&genpd->dev, ret,
···
     genpd->provider = NULL;
     genpd->has_provider = false;

-    if (genpd->set_performance_state) {
+    if (!genpd_is_opp_table_fw(genpd) && genpd->set_performance_state) {
         dev_pm_opp_put_opp_table(genpd->opp_table);
         dev_pm_opp_of_remove_table(&genpd->dev);
     }
···
         if (gpd->provider == &np->fwnode) {
             gpd->has_provider = false;

-            if (!gpd->set_performance_state)
+            if (genpd_is_opp_table_fw(gpd) || !gpd->set_performance_state)
                 continue;

             dev_pm_opp_put_opp_table(gpd->opp_table);
+1 -1
drivers/bus/Kconfig
···
 config BRCMSTB_GISB_ARB
     tristate "Broadcom STB GISB bus arbiter"
-    depends on ARM || ARM64 || MIPS
+    depends on ARCH_BRCMSTB || BMIPS_GENERIC
     default ARCH_BRCMSTB || BMIPS_GENERIC
     help
       Driver for the Broadcom Set Top Box System-on-a-chip internal bus
+1 -1
drivers/bus/vexpress-config.c
···
     struct vexpress_syscfg *syscfg;
     struct regmap *regmap;
     int num_templates;
-    u32 template[]; /* Keep it last! */
+    u32 template[] __counted_by(num_templates); /* Keep it last! */
 };

 struct vexpress_config_bridge_ops {
+88 -8
drivers/clk/clk-scmi.c
···
 #include <linux/scmi_protocol.h>
 #include <asm/div64.h>

+#define NOT_ATOMIC false
+#define ATOMIC true
+
 static const struct scmi_clk_proto_ops *scmi_proto_clk_ops;

 struct scmi_clk {
     u32 id;
+    struct device *dev;
     struct clk_hw hw;
     const struct scmi_clock_info *info;
     const struct scmi_protocol_handle *ph;
+    struct clk_parent_data *parent_data;
 };

 #define to_scmi_clk(clk) container_of(clk, struct scmi_clk, hw)
···
     return scmi_proto_clk_ops->rate_set(clk->ph, clk->id, rate);
 }

+static int scmi_clk_set_parent(struct clk_hw *hw, u8 parent_index)
+{
+    struct scmi_clk *clk = to_scmi_clk(hw);
+
+    return scmi_proto_clk_ops->parent_set(clk->ph, clk->id, parent_index);
+}
+
+static u8 scmi_clk_get_parent(struct clk_hw *hw)
+{
+    struct scmi_clk *clk = to_scmi_clk(hw);
+    u32 parent_id, p_idx;
+    int ret;
+
+    ret = scmi_proto_clk_ops->parent_get(clk->ph, clk->id, &parent_id);
+    if (ret)
+        return 0;
+
+    for (p_idx = 0; p_idx < clk->info->num_parents; p_idx++) {
+        if (clk->parent_data[p_idx].index == parent_id)
+            break;
+    }
+
+    if (p_idx == clk->info->num_parents)
+        return 0;
+
+    return p_idx;
+}
+
+static int scmi_clk_determine_rate(struct clk_hw *hw, struct clk_rate_request *req)
+{
+    /*
+     * Suppose all the requested rates are supported, and let firmware
+     * to handle the left work.
+     */
+    return 0;
+}
+
 static int scmi_clk_enable(struct clk_hw *hw)
 {
     struct scmi_clk *clk = to_scmi_clk(hw);

-    return scmi_proto_clk_ops->enable(clk->ph, clk->id);
+    return scmi_proto_clk_ops->enable(clk->ph, clk->id, NOT_ATOMIC);
 }

 static void scmi_clk_disable(struct clk_hw *hw)
 {
     struct scmi_clk *clk = to_scmi_clk(hw);

-    scmi_proto_clk_ops->disable(clk->ph, clk->id);
+    scmi_proto_clk_ops->disable(clk->ph, clk->id, NOT_ATOMIC);
 }

 static int scmi_clk_atomic_enable(struct clk_hw *hw)
 {
     struct scmi_clk *clk = to_scmi_clk(hw);

-    return scmi_proto_clk_ops->enable_atomic(clk->ph, clk->id);
+    return scmi_proto_clk_ops->enable(clk->ph, clk->id, ATOMIC);
 }

 static void scmi_clk_atomic_disable(struct clk_hw *hw)
 {
     struct scmi_clk *clk = to_scmi_clk(hw);

-    scmi_proto_clk_ops->disable_atomic(clk->ph, clk->id);
+    scmi_proto_clk_ops->disable(clk->ph, clk->id, ATOMIC);
+}
+
+static int scmi_clk_atomic_is_enabled(struct clk_hw *hw)
+{
+    int ret;
+    bool enabled = false;
+    struct scmi_clk *clk = to_scmi_clk(hw);
+
+    ret = scmi_proto_clk_ops->state_get(clk->ph, clk->id, &enabled, ATOMIC);
+    if (ret)
+        dev_warn(clk->dev,
+                 "Failed to get state for clock ID %d\n", clk->id);
+
+    return !!enabled;
 }

 /*
- * We can provide enable/disable atomic callbacks only if the underlying SCMI
- * transport for an SCMI instance is configured to handle SCMI commands in an
- * atomic manner.
+ * We can provide enable/disable/is_enabled atomic callbacks only if the
+ * underlying SCMI transport for an SCMI instance is configured to handle
+ * SCMI commands in an atomic manner.
  *
  * When no SCMI atomic transport support is available we instead provide only
  * the prepare/unprepare API, as allowed by the clock framework when atomic
···
     .set_rate = scmi_clk_set_rate,
     .prepare = scmi_clk_enable,
     .unprepare = scmi_clk_disable,
+    .set_parent = scmi_clk_set_parent,
+    .get_parent = scmi_clk_get_parent,
+    .determine_rate = scmi_clk_determine_rate,
 };

 static const struct clk_ops scmi_atomic_clk_ops = {
···
     .set_rate = scmi_clk_set_rate,
     .enable = scmi_clk_atomic_enable,
     .disable = scmi_clk_atomic_disable,
+    .is_enabled = scmi_clk_atomic_is_enabled,
+    .set_parent = scmi_clk_set_parent,
+    .get_parent = scmi_clk_get_parent,
+    .determine_rate = scmi_clk_determine_rate,
 };

 static int scmi_clk_ops_init(struct device *dev, struct scmi_clk *sclk,
···
     struct clk_init_data init = {
         .flags = CLK_GET_RATE_NOCACHE,
-        .num_parents = 0,
+        .num_parents = sclk->info->num_parents,
         .ops = scmi_ops,
         .name = sclk->info->name,
+        .parent_data = sclk->parent_data,
     };

     sclk->hw.init = &init;
···
         sclk->info = scmi_proto_clk_ops->info_get(ph, idx);
         if (!sclk->info) {
             dev_dbg(dev, "invalid clock info for idx %d\n", idx);
+            devm_kfree(dev, sclk);
             continue;
         }

         sclk->id = idx;
         sclk->ph = ph;
+        sclk->dev = dev;

         /*
          * Note that when transport is atomic but SCMI protocol did not
···
         else
             scmi_ops = &scmi_clk_ops;

+        /* Initialize clock parent data. */
+        if (sclk->info->num_parents > 0) {
+            sclk->parent_data = devm_kcalloc(dev, sclk->info->num_parents,
+                                             sizeof(*sclk->parent_data), GFP_KERNEL);
+            if (!sclk->parent_data)
+                return -ENOMEM;
+
+            for (int i = 0; i < sclk->info->num_parents; i++) {
+                sclk->parent_data[i].index = sclk->info->parents[i];
+                sclk->parent_data[i].hw = hws[sclk->info->parents[i]];
+            }
+        }
+
         err = scmi_clk_ops_init(dev, sclk, scmi_ops);
         if (err) {
             dev_err(dev, "failed to register clock %d\n", idx);
+            devm_kfree(dev, sclk->parent_data);
             devm_kfree(dev, sclk);
             hws[idx] = NULL;
         } else {
+39 -15
drivers/cpufreq/scmi-cpufreq.c
···
     return 0;
 }

-static int
-scmi_get_sharing_cpus(struct device *cpu_dev, struct cpumask *cpumask)
+static int scmi_cpu_domain_id(struct device *cpu_dev)
 {
-    int cpu, domain, tdomain;
-    struct device *tcpu_dev;
+    struct device_node *np = cpu_dev->of_node;
+    struct of_phandle_args domain_id;
+    int index;

-    domain = perf_ops->device_domain_id(cpu_dev);
-    if (domain < 0)
-        return domain;
+    if (of_parse_phandle_with_args(np, "clocks", "#clock-cells", 0,
+                                   &domain_id)) {
+        /* Find the corresponding index for power-domain "perf". */
+        index = of_property_match_string(np, "power-domain-names",
+                                         "perf");
+        if (index < 0)
+            return -EINVAL;
+
+        if (of_parse_phandle_with_args(np, "power-domains",
+                                       "#power-domain-cells", index,
+                                       &domain_id))
+            return -EINVAL;
+    }
+
+    return domain_id.args[0];
+}
+
+static int
+scmi_get_sharing_cpus(struct device *cpu_dev, int domain,
+                      struct cpumask *cpumask)
+{
+    int cpu, tdomain;
+    struct device *tcpu_dev;

     for_each_possible_cpu(cpu) {
         if (cpu == cpu_dev->id)
···
         if (!tcpu_dev)
             continue;

-        tdomain = perf_ops->device_domain_id(tcpu_dev);
+        tdomain = scmi_cpu_domain_id(tcpu_dev);
         if (tdomain == domain)
             cpumask_set_cpu(cpu, cpumask);
     }
···
     unsigned long Hz;
     int ret, domain;

-    domain = perf_ops->device_domain_id(cpu_dev);
+    domain = scmi_cpu_domain_id(cpu_dev);
     if (domain < 0)
         return domain;
···
 static int scmi_cpufreq_init(struct cpufreq_policy *policy)
 {
-    int ret, nr_opp;
+    int ret, nr_opp, domain;
     unsigned int latency;
     struct device *cpu_dev;
     struct scmi_data *priv;
···
         return -ENODEV;
     }

+    domain = scmi_cpu_domain_id(cpu_dev);
+    if (domain < 0)
+        return domain;
+
     priv = kzalloc(sizeof(*priv), GFP_KERNEL);
     if (!priv)
         return -ENOMEM;
···
     }

     /* Obtain CPUs that share SCMI performance controls */
-    ret = scmi_get_sharing_cpus(cpu_dev, policy->cpus);
+    ret = scmi_get_sharing_cpus(cpu_dev, domain, policy->cpus);
     if (ret) {
         dev_warn(cpu_dev, "failed to get sharing cpumask\n");
         goto out_free_cpumask;
···
      */
     nr_opp = dev_pm_opp_get_opp_count(cpu_dev);
     if (nr_opp <= 0) {
-        ret = perf_ops->device_opps_add(ph, cpu_dev);
+        ret = perf_ops->device_opps_add(ph, cpu_dev, domain);
         if (ret) {
             dev_warn(cpu_dev, "failed to add opps to the device\n");
             goto out_free_cpumask;
···
     }

     priv->cpu_dev = cpu_dev;
-    priv->domain_id = perf_ops->device_domain_id(cpu_dev);
+    priv->domain_id = domain;

     policy->driver_data = priv;
     policy->freq_table = freq_table;
···
     /* SCMI allows DVFS request for any domain from any CPU */
     policy->dvfs_possible_from_any_cpu = true;

-    latency = perf_ops->transition_latency_get(ph, cpu_dev);
+    latency = perf_ops->transition_latency_get(ph, domain);
     if (!latency)
         latency = CPUFREQ_ETERNAL;

     policy->cpuinfo.transition_latency = latency;

     policy->fast_switch_possible =
-        perf_ops->fast_switch_possible(ph, cpu_dev);
+        perf_ops->fast_switch_possible(ph, domain);

     return 0;
+1 -14
drivers/firmware/Kconfig
···
       ADSP exists on some mtk processors.
       Client might use shared memory to exchange information with ADSP.

-config QCOM_SCM
-    tristate
-
-config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
-    bool "Qualcomm download mode enabled by default"
-    depends on QCOM_SCM
-    help
-      A device with "download mode" enabled will upon an unexpected
-      warm-restart enter a special debug mode that allows the user to
-      "download" memory content over USB for offline postmortem analysis.
-      The feature can be enabled/disabled on the kernel command line.
-
-      Say Y here to enable "download mode" by default.
-
 config SYSFB
     bool
     select BOOT_VESA_SUPPORT
···
 source "drivers/firmware/imx/Kconfig"
 source "drivers/firmware/meson/Kconfig"
 source "drivers/firmware/psci/Kconfig"
+source "drivers/firmware/qcom/Kconfig"
 source "drivers/firmware/smccc/Kconfig"
 source "drivers/firmware/tegra/Kconfig"
 source "drivers/firmware/xilinx/Kconfig"
+1 -2
drivers/firmware/Makefile
···
 obj-$(CONFIG_MTK_ADSP_IPC) += mtk-adsp-ipc.o
 obj-$(CONFIG_RASPBERRYPI_FIRMWARE) += raspberrypi.o
 obj-$(CONFIG_FW_CFG_SYSFS) += qemu_fw_cfg.o
-obj-$(CONFIG_QCOM_SCM) += qcom-scm.o
-qcom-scm-objs += qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o
 obj-$(CONFIG_SYSFB) += sysfb.o
 obj-$(CONFIG_SYSFB_SIMPLEFB) += sysfb_simplefb.o
 obj-$(CONFIG_TI_SCI_PROTOCOL) += ti_sci.o
···
 obj-y += efi/
 obj-y += imx/
 obj-y += psci/
+obj-y += qcom/
 obj-y += smccc/
 obj-y += tegra/
 obj-y += xilinx/
+15 -1
drivers/firmware/arm_ffa/bus.c
···
 #include "common.h"

+#define SCMI_UEVENT_MODALIAS_FMT "arm_ffa:%04x:%pUb"
+
 static DEFINE_IDA(ffa_bus_id);

 static int ffa_device_match(struct device *dev, struct device_driver *drv)
···
 {
     const struct ffa_device *ffa_dev = to_ffa_dev(dev);

-    return add_uevent_var(env, "MODALIAS=arm_ffa:%04x:%pUb",
+    return add_uevent_var(env, "MODALIAS=" SCMI_UEVENT_MODALIAS_FMT,
                           ffa_dev->vm_id, &ffa_dev->uuid);
 }
+
+static ssize_t modalias_show(struct device *dev,
+                             struct device_attribute *attr, char *buf)
+{
+    struct ffa_device *ffa_dev = to_ffa_dev(dev);
+
+    return sysfs_emit(buf, SCMI_UEVENT_MODALIAS_FMT, ffa_dev->vm_id,
+                      &ffa_dev->uuid);
+}
+static DEVICE_ATTR_RO(modalias);

 static ssize_t partition_id_show(struct device *dev,
                                  struct device_attribute *attr, char *buf)
···
 static struct attribute *ffa_device_attributes_attrs[] = {
     &dev_attr_partition_id.attr,
     &dev_attr_uuid.attr,
+    &dev_attr_modalias.attr,
     NULL,
 };
 ATTRIBUTE_GROUPS(ffa_device_attributes);
···
     dev->release = ffa_release_device;
     dev_set_name(&ffa_dev->dev, "arm-ffa-%d", id);

+    ffa_dev->id = id;
     ffa_dev->vm_id = vm_id;
     ffa_dev->ops = ops;
     uuid_copy(&ffa_dev->uuid, uuid);
+750 -20
drivers/firmware/arm_ffa/driver.c
···
 #define DRIVER_NAME "ARM FF-A"
 #define pr_fmt(fmt) DRIVER_NAME ": " fmt

+#include <linux/acpi.h>
 #include <linux/arm_ffa.h>
 #include <linux/bitfield.h>
+#include <linux/cpuhotplug.h>
 #include <linux/device.h>
+#include <linux/hashtable.h>
+#include <linux/interrupt.h>
 #include <linux/io.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/mm.h>
+#include <linux/mutex.h>
+#include <linux/of_irq.h>
 #include <linux/scatterlist.h>
 #include <linux/slab.h>
+#include <linux/smp.h>
 #include <linux/uuid.h>
+#include <linux/xarray.h>

 #include "common.h"

-#define FFA_DRIVER_VERSION  FFA_VERSION_1_0
+#define FFA_DRIVER_VERSION  FFA_VERSION_1_1
 #define FFA_MIN_VERSION     FFA_VERSION_1_0

 #define SENDER_ID_MASK      GENMASK(31, 16)
···
  */
 #define RXTX_BUFFER_SIZE    SZ_4K

+#define FFA_MAX_NOTIFICATIONS       64
+
 static ffa_fn *invoke_ffa_fn;

 static const int ffa_linux_errmap[] = {
···
     -EACCES,    /* FFA_RET_DENIED */
     -EAGAIN,    /* FFA_RET_RETRY */
     -ECANCELED, /* FFA_RET_ABORTED */
+    -ENODATA,   /* FFA_RET_NO_DATA */
 };

 static inline int ffa_to_linux_errno(int errno)
···
     return -EINVAL;
 }

+struct ffa_pcpu_irq {
+    struct ffa_drv_info *info;
+};
+
 struct ffa_drv_info {
     u32 version;
     u16 vm_id;
···
     void *rx_buffer;
     void *tx_buffer;
     bool mem_ops_native;
+    bool bitmap_created;
+    unsigned int sched_recv_irq;
+    unsigned int cpuhp_state;
+    struct ffa_pcpu_irq __percpu *irq_pcpu;
+    struct workqueue_struct *notif_pcpu_wq;
+    struct work_struct notif_pcpu_work;
+    struct work_struct irq_work;
+    struct xarray partition_info;
+    unsigned int partition_count;
+    DECLARE_HASHTABLE(notifier_hash, ilog2(FFA_MAX_NOTIFICATIONS));
+    struct mutex notify_lock; /* lock to protect notifier hashtable */
 };

 static struct ffa_drv_info *drv_info;
···
     return num_pages;
 }

-static u8 ffa_memory_attributes_get(u32 func_id)
+static u16 ffa_memory_attributes_get(u32 func_id)
 {
     /*
      * For the memory lend or donate operation, if the receiver is a PE or
···
 {
     int rc = 0;
     bool first = true;
+    u32 composite_offset;
     phys_addr_t addr = 0;
+    struct ffa_mem_region *mem_region = buffer;
     struct ffa_composite_mem_region *composite;
     struct ffa_mem_region_addr_range *constituents;
     struct ffa_mem_region_attributes *ep_mem_access;
-    struct ffa_mem_region *mem_region = buffer;
     u32 idx, frag_len, length, buf_sz = 0, num_entries = sg_nents(args->sg);

     mem_region->tag = args->tag;
     mem_region->flags = args->flags;
     mem_region->sender_id = drv_info->vm_id;
     mem_region->attributes = ffa_memory_attributes_get(func_id);
-    ep_mem_access = &mem_region->ep_mem_access[0];
+    ep_mem_access = buffer +
+            ffa_mem_desc_offset(buffer, 0, drv_info->version);
+    composite_offset = ffa_mem_desc_offset(buffer, args->nattrs,
+                                           drv_info->version);

     for (idx = 0; idx < args->nattrs; idx++, ep_mem_access++) {
         ep_mem_access->receiver = args->attrs[idx].receiver;
         ep_mem_access->attrs = args->attrs[idx].attrs;
-        ep_mem_access->composite_off = COMPOSITE_OFFSET(args->nattrs);
+        ep_mem_access->composite_off = composite_offset;
         ep_mem_access->flag = 0;
         ep_mem_access->reserved = 0;
     }
     mem_region->handle = 0;
-    mem_region->reserved_0 = 0;
-    mem_region->reserved_1 = 0;
     mem_region->ep_count = args->nattrs;
+    if (drv_info->version <= FFA_VERSION_1_0) {
+        mem_region->ep_mem_size = 0;
+    } else {
+        mem_region->ep_mem_size = sizeof(*ep_mem_access);
+        mem_region->ep_mem_offset = sizeof(*mem_region);
+        memset(mem_region->reserved, 0, 12);
+    }

-    composite = buffer + COMPOSITE_OFFSET(args->nattrs);
+    composite = buffer + composite_offset;
     composite->total_pg_cnt = ffa_get_num_pages_sg(args->sg);
     composite->addr_range_cnt = num_entries;
     composite->reserved = 0;

-    length = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, num_entries);
-    frag_len = COMPOSITE_CONSTITUENTS_OFFSET(args->nattrs, 0);
+    length = composite_offset + CONSTITUENTS_OFFSET(num_entries);
+    frag_len = composite_offset + CONSTITUENTS_OFFSET(0);
     if (frag_len > max_fragsize)
         return -ENXIO;
···
     return 0;
 }

+static int ffa_notification_bitmap_create(void)
+{
+    ffa_value_t ret;
+    u16 vcpu_count = nr_cpu_ids;
+
+    invoke_ffa_fn((ffa_value_t){
+              .a0 = FFA_NOTIFICATION_BITMAP_CREATE,
+              .a1 = drv_info->vm_id, .a2 = vcpu_count,
+              }, &ret);
+
+    if (ret.a0 == FFA_ERROR)
+        return ffa_to_linux_errno((int)ret.a2);
+
+    return 0;
+}
+
+static int ffa_notification_bitmap_destroy(void)
+{
+    ffa_value_t ret;
+
+    invoke_ffa_fn((ffa_value_t){
+              .a0 = FFA_NOTIFICATION_BITMAP_DESTROY,
+              .a1 = drv_info->vm_id,
+              }, &ret);
+
+    if (ret.a0 == FFA_ERROR)
+        return ffa_to_linux_errno((int)ret.a2);
+
+    return 0;
+}
+
+#define NOTIFICATION_LOW_MASK       GENMASK(31, 0)
+#define NOTIFICATION_HIGH_MASK      GENMASK(63, 32)
+#define NOTIFICATION_BITMAP_HIGH(x) \
+    ((u32)(FIELD_GET(NOTIFICATION_HIGH_MASK, (x))))
+#define NOTIFICATION_BITMAP_LOW(x)  \
+    ((u32)(FIELD_GET(NOTIFICATION_LOW_MASK, (x))))
+#define PACK_NOTIFICATION_BITMAP(low, high) \
+    (FIELD_PREP(NOTIFICATION_LOW_MASK, (low)) | \
+     FIELD_PREP(NOTIFICATION_HIGH_MASK, (high)))
+
+#define RECEIVER_VCPU_MASK          GENMASK(31, 16)
+#define PACK_NOTIFICATION_GET_RECEIVER_INFO(vcpu_r, r) \
+    (FIELD_PREP(RECEIVER_VCPU_MASK, (vcpu_r)) | \
+     FIELD_PREP(RECEIVER_ID_MASK, (r)))
+
+#define NOTIFICATION_INFO_GET_MORE_PEND_MASK    BIT(0)
+#define NOTIFICATION_INFO_GET_ID_COUNT          GENMASK(11, 7)
+#define ID_LIST_MASK_64                         GENMASK(51, 12)
+#define ID_LIST_MASK_32                         GENMASK(31, 12)
+#define MAX_IDS_64                              20
+#define MAX_IDS_32                              10
+
+#define PER_VCPU_NOTIFICATION_FLAG  BIT(0)
+#define SECURE_PARTITION_BITMAP     BIT(0)
+#define NON_SECURE_VM_BITMAP        BIT(1)
+#define SPM_FRAMEWORK_BITMAP        BIT(2)
+#define NS_HYP_FRAMEWORK_BITMAP     BIT(3)
+
+static int ffa_notification_bind_common(u16 dst_id, u64 bitmap,
+                                        u32 flags, bool is_bind)
+{
+    ffa_value_t ret;
+    u32 func, src_dst_ids = PACK_TARGET_INFO(dst_id, drv_info->vm_id);
+
+    func = is_bind ? FFA_NOTIFICATION_BIND : FFA_NOTIFICATION_UNBIND;
+
+    invoke_ffa_fn((ffa_value_t){
+              .a0 = func, .a1 = src_dst_ids, .a2 = flags,
+              .a3 = NOTIFICATION_BITMAP_LOW(bitmap),
+              .a4 = NOTIFICATION_BITMAP_HIGH(bitmap),
+              }, &ret);
+
+    if (ret.a0 == FFA_ERROR)
+        return ffa_to_linux_errno((int)ret.a2);
+    else if (ret.a0 != FFA_SUCCESS)
+        return -EINVAL;
+
+    return 0;
+}
+
+static
+int ffa_notification_set(u16 src_id, u16 dst_id, u32 flags, u64 bitmap)
+{
+    ffa_value_t ret;
+    u32 src_dst_ids = PACK_TARGET_INFO(dst_id, src_id);
+
+    invoke_ffa_fn((ffa_value_t) {
+              .a0 = FFA_NOTIFICATION_SET, .a1 = src_dst_ids, .a2 = flags,
+              .a3 = NOTIFICATION_BITMAP_LOW(bitmap),
+              .a4 = NOTIFICATION_BITMAP_HIGH(bitmap),
+              }, &ret);
+
+    if (ret.a0 == FFA_ERROR)
+        return ffa_to_linux_errno((int)ret.a2);
+    else if (ret.a0 != FFA_SUCCESS)
+        return -EINVAL;
+
+    return 0;
+}
+
+struct ffa_notify_bitmaps {
+    u64 sp_map;
+    u64 vm_map;
+    u64 arch_map;
+};
+
+static int ffa_notification_get(u32 flags, struct ffa_notify_bitmaps *notify)
+{
+    ffa_value_t ret;
+    u16 src_id = drv_info->vm_id;
+    u16 cpu_id = smp_processor_id();
+    u32 rec_vcpu_ids = PACK_NOTIFICATION_GET_RECEIVER_INFO(cpu_id, src_id);
+
+    invoke_ffa_fn((ffa_value_t){
+              .a0 = FFA_NOTIFICATION_GET, .a1 = rec_vcpu_ids, .a2 = flags,
+              }, &ret);
+
+    if (ret.a0 == FFA_ERROR)
+        return ffa_to_linux_errno((int)ret.a2);
+    else if (ret.a0 != FFA_SUCCESS)
+        return -EINVAL; /* Something else went wrong. */
+
+    notify->sp_map = PACK_NOTIFICATION_BITMAP(ret.a2, ret.a3);
+    notify->vm_map = PACK_NOTIFICATION_BITMAP(ret.a4, ret.a5);
+    notify->arch_map = PACK_NOTIFICATION_BITMAP(ret.a6, ret.a7);
+
+    return 0;
+}
+
+struct ffa_dev_part_info {
+    ffa_sched_recv_cb callback;
+    void *cb_data;
+    rwlock_t rw_lock;
+};
+
+static void __do_sched_recv_cb(u16 part_id, u16 vcpu, bool is_per_vcpu)
+{
+    struct ffa_dev_part_info *partition;
+    ffa_sched_recv_cb callback;
+    void *cb_data;
+
+    partition = xa_load(&drv_info->partition_info, part_id);
+    read_lock(&partition->rw_lock);
+    callback = partition->callback;
+    cb_data = partition->cb_data;
+    read_unlock(&partition->rw_lock);
+
+    if (callback)
+        callback(vcpu, is_per_vcpu, cb_data);
+}
+
+static void ffa_notification_info_get(void)
+{
+    int idx, list, max_ids, lists_cnt, ids_processed, ids_count[MAX_IDS_64];
+    bool is_64b_resp;
+    ffa_value_t ret;
+    u64 id_list;
+
+    do {
+        invoke_ffa_fn((ffa_value_t){
+              .a0 = FFA_FN_NATIVE(NOTIFICATION_INFO_GET),
+              }, &ret);
+
+        if (ret.a0 != FFA_FN_NATIVE(SUCCESS) && ret.a0 != FFA_SUCCESS) {
+            if (ret.a2 != FFA_RET_NO_DATA)
+                pr_err("Notification Info fetch failed: 0x%lx (0x%lx)",
+                       ret.a0, ret.a2);
+            return;
+        }
+
+        is_64b_resp = (ret.a0 == FFA_FN64_SUCCESS);
+
+        ids_processed = 0;
+ lists_cnt = FIELD_GET(NOTIFICATION_INFO_GET_ID_COUNT, ret.a2); 732 + if (is_64b_resp) { 733 + max_ids = MAX_IDS_64; 734 + id_list = FIELD_GET(ID_LIST_MASK_64, ret.a2); 735 + } else { 736 + max_ids = MAX_IDS_32; 737 + id_list = FIELD_GET(ID_LIST_MASK_32, ret.a2); 738 + } 739 + 740 + for (idx = 0; idx < lists_cnt; idx++, id_list >>= 2) 741 + ids_count[idx] = (id_list & 0x3) + 1; 742 + 743 + /* Process IDs */ 744 + for (list = 0; list < lists_cnt; list++) { 745 + u16 vcpu_id, part_id, *packed_id_list = (u16 *)&ret.a3; 746 + 747 + if (ids_processed >= max_ids - 1) 748 + break; 749 + 750 + part_id = packed_id_list[++ids_processed]; 751 + 752 + if (!ids_count[list]) { /* Global Notification */ 753 + __do_sched_recv_cb(part_id, 0, false); 754 + continue; 755 + } 756 + 757 + /* Per vCPU Notification */ 758 + for (idx = 0; idx < ids_count[list]; idx++) { 759 + if (ids_processed >= max_ids - 1) 760 + break; 761 + 762 + vcpu_id = packed_id_list[++ids_processed]; 763 + 764 + __do_sched_recv_cb(part_id, vcpu_id, true); 765 + } 766 + } 767 + } while (ret.a2 & NOTIFICATION_INFO_GET_MORE_PEND_MASK); 768 + } 769 + 770 + static int ffa_run(struct ffa_device *dev, u16 vcpu) 771 + { 772 + ffa_value_t ret; 773 + u32 target = dev->vm_id << 16 | vcpu; 774 + 775 + invoke_ffa_fn((ffa_value_t){ .a0 = FFA_RUN, .a1 = target, }, &ret); 776 + 777 + while (ret.a0 == FFA_INTERRUPT) 778 + invoke_ffa_fn((ffa_value_t){ .a0 = FFA_RUN, .a1 = ret.a1, }, 779 + &ret); 780 + 781 + if (ret.a0 == FFA_ERROR) 782 + return ffa_to_linux_errno((int)ret.a2); 783 + 784 + return 0; 785 + } 786 + 592 787 static void ffa_set_up_mem_ops_native_flag(void) 593 788 { 594 789 if (!ffa_features(FFA_FN_NATIVE(MEM_LEND), 0, NULL, NULL) || ··· 852 587 return 0; 853 588 } 854 589 855 - static void _ffa_mode_32bit_set(struct ffa_device *dev) 856 - { 857 - dev->mode_32bit = true; 858 - } 859 - 860 590 static void ffa_mode_32bit_set(struct ffa_device *dev) 861 591 { 862 - if (drv_info->version > FFA_VERSION_1_0) 863 - return; 
864 - 865 - _ffa_mode_32bit_set(dev); 592 + dev->mode_32bit = true; 866 593 } 867 594 868 595 static int ffa_sync_send_receive(struct ffa_device *dev, ··· 887 630 return ffa_memory_ops(FFA_MEM_LEND, args); 888 631 } 889 632 633 + #define FFA_SECURE_PARTITION_ID_FLAG BIT(15) 634 + 635 + enum notify_type { 636 + NON_SECURE_VM, 637 + SECURE_PARTITION, 638 + FRAMEWORK, 639 + }; 640 + 641 + struct notifier_cb_info { 642 + struct hlist_node hnode; 643 + ffa_notifier_cb cb; 644 + void *cb_data; 645 + enum notify_type type; 646 + }; 647 + 648 + static int ffa_sched_recv_cb_update(u16 part_id, ffa_sched_recv_cb callback, 649 + void *cb_data, bool is_registration) 650 + { 651 + struct ffa_dev_part_info *partition; 652 + bool cb_valid; 653 + 654 + partition = xa_load(&drv_info->partition_info, part_id); 655 + write_lock(&partition->rw_lock); 656 + 657 + cb_valid = !!partition->callback; 658 + if (!(is_registration ^ cb_valid)) { 659 + write_unlock(&partition->rw_lock); 660 + return -EINVAL; 661 + } 662 + 663 + partition->callback = callback; 664 + partition->cb_data = cb_data; 665 + 666 + write_unlock(&partition->rw_lock); 667 + return 0; 668 + } 669 + 670 + static int ffa_sched_recv_cb_register(struct ffa_device *dev, 671 + ffa_sched_recv_cb cb, void *cb_data) 672 + { 673 + return ffa_sched_recv_cb_update(dev->vm_id, cb, cb_data, true); 674 + } 675 + 676 + static int ffa_sched_recv_cb_unregister(struct ffa_device *dev) 677 + { 678 + return ffa_sched_recv_cb_update(dev->vm_id, NULL, NULL, false); 679 + } 680 + 681 + static int ffa_notification_bind(u16 dst_id, u64 bitmap, u32 flags) 682 + { 683 + return ffa_notification_bind_common(dst_id, bitmap, flags, true); 684 + } 685 + 686 + static int ffa_notification_unbind(u16 dst_id, u64 bitmap) 687 + { 688 + return ffa_notification_bind_common(dst_id, bitmap, 0, false); 689 + } 690 + 691 + /* Should be called while the notify_lock is taken */ 692 + static struct notifier_cb_info * 693 + notifier_hash_node_get(u16 notify_id, enum 
notify_type type) 694 + { 695 + struct notifier_cb_info *node; 696 + 697 + hash_for_each_possible(drv_info->notifier_hash, node, hnode, notify_id) 698 + if (type == node->type) 699 + return node; 700 + 701 + return NULL; 702 + } 703 + 704 + static int 705 + update_notifier_cb(int notify_id, enum notify_type type, ffa_notifier_cb cb, 706 + void *cb_data, bool is_registration) 707 + { 708 + struct notifier_cb_info *cb_info = NULL; 709 + bool cb_found; 710 + 711 + cb_info = notifier_hash_node_get(notify_id, type); 712 + cb_found = !!cb_info; 713 + 714 + if (!(is_registration ^ cb_found)) 715 + return -EINVAL; 716 + 717 + if (is_registration) { 718 + cb_info = kzalloc(sizeof(*cb_info), GFP_KERNEL); 719 + if (!cb_info) 720 + return -ENOMEM; 721 + 722 + cb_info->type = type; 723 + cb_info->cb = cb; 724 + cb_info->cb_data = cb_data; 725 + 726 + hash_add(drv_info->notifier_hash, &cb_info->hnode, notify_id); 727 + } else { 728 + hash_del(&cb_info->hnode); 729 + } 730 + 731 + return 0; 732 + } 733 + 734 + static enum notify_type ffa_notify_type_get(u16 vm_id) 735 + { 736 + if (vm_id & FFA_SECURE_PARTITION_ID_FLAG) 737 + return SECURE_PARTITION; 738 + else 739 + return NON_SECURE_VM; 740 + } 741 + 742 + static int ffa_notify_relinquish(struct ffa_device *dev, int notify_id) 743 + { 744 + int rc; 745 + enum notify_type type = ffa_notify_type_get(dev->vm_id); 746 + 747 + if (notify_id >= FFA_MAX_NOTIFICATIONS) 748 + return -EINVAL; 749 + 750 + mutex_lock(&drv_info->notify_lock); 751 + 752 + rc = update_notifier_cb(notify_id, type, NULL, NULL, false); 753 + if (rc) { 754 + pr_err("Could not unregister notification callback\n"); 755 + mutex_unlock(&drv_info->notify_lock); 756 + return rc; 757 + } 758 + 759 + rc = ffa_notification_unbind(dev->vm_id, BIT(notify_id)); 760 + 761 + mutex_unlock(&drv_info->notify_lock); 762 + 763 + return rc; 764 + } 765 + 766 + static int ffa_notify_request(struct ffa_device *dev, bool is_per_vcpu, 767 + ffa_notifier_cb cb, void *cb_data, int 
notify_id) 768 + { 769 + int rc; 770 + u32 flags = 0; 771 + enum notify_type type = ffa_notify_type_get(dev->vm_id); 772 + 773 + if (notify_id >= FFA_MAX_NOTIFICATIONS) 774 + return -EINVAL; 775 + 776 + mutex_lock(&drv_info->notify_lock); 777 + 778 + if (is_per_vcpu) 779 + flags = PER_VCPU_NOTIFICATION_FLAG; 780 + 781 + rc = ffa_notification_bind(dev->vm_id, BIT(notify_id), flags); 782 + if (rc) { 783 + mutex_unlock(&drv_info->notify_lock); 784 + return rc; 785 + } 786 + 787 + rc = update_notifier_cb(notify_id, type, cb, cb_data, true); 788 + if (rc) { 789 + pr_err("Failed to register callback for %d - %d\n", 790 + notify_id, rc); 791 + ffa_notification_unbind(dev->vm_id, BIT(notify_id)); 792 + } 793 + mutex_unlock(&drv_info->notify_lock); 794 + 795 + return rc; 796 + } 797 + 798 + static int ffa_notify_send(struct ffa_device *dev, int notify_id, 799 + bool is_per_vcpu, u16 vcpu) 800 + { 801 + u32 flags = 0; 802 + 803 + if (is_per_vcpu) 804 + flags |= (PER_VCPU_NOTIFICATION_FLAG | vcpu << 16); 805 + 806 + return ffa_notification_set(dev->vm_id, drv_info->vm_id, flags, 807 + BIT(notify_id)); 808 + } 809 + 810 + static void handle_notif_callbacks(u64 bitmap, enum notify_type type) 811 + { 812 + int notify_id; 813 + struct notifier_cb_info *cb_info = NULL; 814 + 815 + for (notify_id = 0; notify_id <= FFA_MAX_NOTIFICATIONS && bitmap; 816 + notify_id++, bitmap >>= 1) { 817 + if (!(bitmap & 1)) 818 + continue; 819 + 820 + mutex_lock(&drv_info->notify_lock); 821 + cb_info = notifier_hash_node_get(notify_id, type); 822 + mutex_unlock(&drv_info->notify_lock); 823 + 824 + if (cb_info && cb_info->cb) 825 + cb_info->cb(notify_id, cb_info->cb_data); 826 + } 827 + } 828 + 829 + static void notif_pcpu_irq_work_fn(struct work_struct *work) 830 + { 831 + int rc; 832 + struct ffa_notify_bitmaps bitmaps; 833 + 834 + rc = ffa_notification_get(SECURE_PARTITION_BITMAP | 835 + SPM_FRAMEWORK_BITMAP, &bitmaps); 836 + if (rc) { 837 + pr_err("Failed to retrieve notifications with %d!\n", 
rc); 838 + return; 839 + } 840 + 841 + handle_notif_callbacks(bitmaps.vm_map, NON_SECURE_VM); 842 + handle_notif_callbacks(bitmaps.sp_map, SECURE_PARTITION); 843 + handle_notif_callbacks(bitmaps.arch_map, FRAMEWORK); 844 + } 845 + 846 + static void 847 + ffa_self_notif_handle(u16 vcpu, bool is_per_vcpu, void *cb_data) 848 + { 849 + struct ffa_drv_info *info = cb_data; 850 + 851 + if (!is_per_vcpu) 852 + notif_pcpu_irq_work_fn(&info->notif_pcpu_work); 853 + else 854 + queue_work_on(vcpu, info->notif_pcpu_wq, 855 + &info->notif_pcpu_work); 856 + } 857 + 890 858 static const struct ffa_info_ops ffa_drv_info_ops = { 891 859 .api_version_get = ffa_api_version_get, 892 860 .partition_info_get = ffa_partition_info_get, ··· 1128 646 .memory_lend = ffa_memory_lend, 1129 647 }; 1130 648 649 + static const struct ffa_cpu_ops ffa_drv_cpu_ops = { 650 + .run = ffa_run, 651 + }; 652 + 653 + static const struct ffa_notifier_ops ffa_drv_notifier_ops = { 654 + .sched_recv_cb_register = ffa_sched_recv_cb_register, 655 + .sched_recv_cb_unregister = ffa_sched_recv_cb_unregister, 656 + .notify_request = ffa_notify_request, 657 + .notify_relinquish = ffa_notify_relinquish, 658 + .notify_send = ffa_notify_send, 659 + }; 660 + 1131 661 static const struct ffa_ops ffa_drv_ops = { 1132 662 .info_ops = &ffa_drv_info_ops, 1133 663 .msg_ops = &ffa_drv_msg_ops, 1134 664 .mem_ops = &ffa_drv_mem_ops, 665 + .cpu_ops = &ffa_drv_cpu_ops, 666 + .notifier_ops = &ffa_drv_notifier_ops, 1135 667 }; 1136 668 1137 669 void ffa_device_match_uuid(struct ffa_device *ffa_dev, const uuid_t *uuid) ··· 1176 680 int count, idx; 1177 681 uuid_t uuid; 1178 682 struct ffa_device *ffa_dev; 683 + struct ffa_dev_part_info *info; 1179 684 struct ffa_partition_info *pbuf, *tpbuf; 1180 685 1181 686 count = ffa_partition_probe(&uuid_null, &pbuf); ··· 1185 688 return; 1186 689 } 1187 690 691 + xa_init(&drv_info->partition_info); 1188 692 for (idx = 0, tpbuf = pbuf; idx < count; idx++, tpbuf++) { 1189 693 import_uuid(&uuid, 
(u8 *)tpbuf->uuid); 1190 694 ··· 1204 706 1205 707 if (drv_info->version > FFA_VERSION_1_0 && 1206 708 !(tpbuf->properties & FFA_PARTITION_AARCH64_EXEC)) 1207 - _ffa_mode_32bit_set(ffa_dev); 709 + ffa_mode_32bit_set(ffa_dev); 710 + 711 + info = kzalloc(sizeof(*info), GFP_KERNEL); 712 + if (!info) { 713 + ffa_device_unregister(ffa_dev); 714 + continue; 715 + } 716 + xa_store(&drv_info->partition_info, tpbuf->id, info, GFP_KERNEL); 1208 717 } 718 + drv_info->partition_count = count; 719 + 1209 720 kfree(pbuf); 721 + 722 + /* Allocate for the host */ 723 + info = kzalloc(sizeof(*info), GFP_KERNEL); 724 + if (!info) 725 + return; 726 + xa_store(&drv_info->partition_info, drv_info->vm_id, info, GFP_KERNEL); 727 + drv_info->partition_count++; 728 + } 729 + 730 + static void ffa_partitions_cleanup(void) 731 + { 732 + struct ffa_dev_part_info **info; 733 + int idx, count = drv_info->partition_count; 734 + 735 + if (!count) 736 + return; 737 + 738 + info = kcalloc(count, sizeof(**info), GFP_KERNEL); 739 + if (!info) 740 + return; 741 + 742 + xa_extract(&drv_info->partition_info, (void **)info, 0, VM_ID_MASK, 743 + count, XA_PRESENT); 744 + 745 + for (idx = 0; idx < count; idx++) 746 + kfree(info[idx]); 747 + kfree(info); 748 + 749 + drv_info->partition_count = 0; 750 + xa_destroy(&drv_info->partition_info); 751 + } 752 + 753 + /* FFA FEATURE IDs */ 754 + #define FFA_FEAT_NOTIFICATION_PENDING_INT (1) 755 + #define FFA_FEAT_SCHEDULE_RECEIVER_INT (2) 756 + #define FFA_FEAT_MANAGED_EXIT_INT (3) 757 + 758 + static irqreturn_t irq_handler(int irq, void *irq_data) 759 + { 760 + struct ffa_pcpu_irq *pcpu = irq_data; 761 + struct ffa_drv_info *info = pcpu->info; 762 + 763 + queue_work(info->notif_pcpu_wq, &info->irq_work); 764 + 765 + return IRQ_HANDLED; 766 + } 767 + 768 + static void ffa_sched_recv_irq_work_fn(struct work_struct *work) 769 + { 770 + ffa_notification_info_get(); 771 + } 772 + 773 + static int ffa_sched_recv_irq_map(void) 774 + { 775 + int ret, irq, sr_intid; 776 + 
777 + /* The returned sr_intid is assumed to be SGI donated to NS world */ 778 + ret = ffa_features(FFA_FEAT_SCHEDULE_RECEIVER_INT, 0, &sr_intid, NULL); 779 + if (ret < 0) { 780 + if (ret != -EOPNOTSUPP) 781 + pr_err("Failed to retrieve scheduler Rx interrupt\n"); 782 + return ret; 783 + } 784 + 785 + if (acpi_disabled) { 786 + struct of_phandle_args oirq = {}; 787 + struct device_node *gic; 788 + 789 + /* Only GICv3 supported currently with the device tree */ 790 + gic = of_find_compatible_node(NULL, NULL, "arm,gic-v3"); 791 + if (!gic) 792 + return -ENXIO; 793 + 794 + oirq.np = gic; 795 + oirq.args_count = 1; 796 + oirq.args[0] = sr_intid; 797 + irq = irq_create_of_mapping(&oirq); 798 + of_node_put(gic); 799 + #ifdef CONFIG_ACPI 800 + } else { 801 + irq = acpi_register_gsi(NULL, sr_intid, ACPI_EDGE_SENSITIVE, 802 + ACPI_ACTIVE_HIGH); 803 + #endif 804 + } 805 + 806 + if (irq <= 0) { 807 + pr_err("Failed to create IRQ mapping!\n"); 808 + return -ENODATA; 809 + } 810 + 811 + return irq; 812 + } 813 + 814 + static void ffa_sched_recv_irq_unmap(void) 815 + { 816 + if (drv_info->sched_recv_irq) 817 + irq_dispose_mapping(drv_info->sched_recv_irq); 818 + } 819 + 820 + static int ffa_cpuhp_pcpu_irq_enable(unsigned int cpu) 821 + { 822 + enable_percpu_irq(drv_info->sched_recv_irq, IRQ_TYPE_NONE); 823 + return 0; 824 + } 825 + 826 + static int ffa_cpuhp_pcpu_irq_disable(unsigned int cpu) 827 + { 828 + disable_percpu_irq(drv_info->sched_recv_irq); 829 + return 0; 830 + } 831 + 832 + static void ffa_uninit_pcpu_irq(void) 833 + { 834 + if (drv_info->cpuhp_state) 835 + cpuhp_remove_state(drv_info->cpuhp_state); 836 + 837 + if (drv_info->notif_pcpu_wq) 838 + destroy_workqueue(drv_info->notif_pcpu_wq); 839 + 840 + if (drv_info->sched_recv_irq) 841 + free_percpu_irq(drv_info->sched_recv_irq, drv_info->irq_pcpu); 842 + 843 + if (drv_info->irq_pcpu) 844 + free_percpu(drv_info->irq_pcpu); 845 + } 846 + 847 + static int ffa_init_pcpu_irq(unsigned int irq) 848 + { 849 + struct 
ffa_pcpu_irq __percpu *irq_pcpu; 850 + int ret, cpu; 851 + 852 + irq_pcpu = alloc_percpu(struct ffa_pcpu_irq); 853 + if (!irq_pcpu) 854 + return -ENOMEM; 855 + 856 + for_each_present_cpu(cpu) 857 + per_cpu_ptr(irq_pcpu, cpu)->info = drv_info; 858 + 859 + drv_info->irq_pcpu = irq_pcpu; 860 + 861 + ret = request_percpu_irq(irq, irq_handler, "ARM-FFA", irq_pcpu); 862 + if (ret) { 863 + pr_err("Error registering notification IRQ %d: %d\n", irq, ret); 864 + return ret; 865 + } 866 + 867 + INIT_WORK(&drv_info->irq_work, ffa_sched_recv_irq_work_fn); 868 + INIT_WORK(&drv_info->notif_pcpu_work, notif_pcpu_irq_work_fn); 869 + drv_info->notif_pcpu_wq = create_workqueue("ffa_pcpu_irq_notification"); 870 + if (!drv_info->notif_pcpu_wq) 871 + return -EINVAL; 872 + 873 + ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN, "ffa/pcpu-irq:starting", 874 + ffa_cpuhp_pcpu_irq_enable, 875 + ffa_cpuhp_pcpu_irq_disable); 876 + 877 + if (ret < 0) 878 + return ret; 879 + 880 + drv_info->cpuhp_state = ret; 881 + return 0; 882 + } 883 + 884 + static void ffa_notifications_cleanup(void) 885 + { 886 + ffa_uninit_pcpu_irq(); 887 + ffa_sched_recv_irq_unmap(); 888 + 889 + if (drv_info->bitmap_created) { 890 + ffa_notification_bitmap_destroy(); 891 + drv_info->bitmap_created = false; 892 + } 893 + } 894 + 895 + static int ffa_notifications_setup(void) 896 + { 897 + int ret, irq; 898 + 899 + ret = ffa_features(FFA_NOTIFICATION_BITMAP_CREATE, 0, NULL, NULL); 900 + if (ret) { 901 + pr_err("Notifications not supported, continuing with it ..\n"); 902 + return 0; 903 + } 904 + 905 + ret = ffa_notification_bitmap_create(); 906 + if (ret) { 907 + pr_err("notification_bitmap_create error %d\n", ret); 908 + return ret; 909 + } 910 + drv_info->bitmap_created = true; 911 + 912 + irq = ffa_sched_recv_irq_map(); 913 + if (irq <= 0) { 914 + ret = irq; 915 + goto cleanup; 916 + } 917 + 918 + drv_info->sched_recv_irq = irq; 919 + 920 + ret = ffa_init_pcpu_irq(irq); 921 + if (ret) 922 + goto cleanup; 923 + 924 + 
hash_init(drv_info->notifier_hash); 925 + mutex_init(&drv_info->notify_lock); 926 + 927 + /* Register internal scheduling callback */ 928 + ret = ffa_sched_recv_cb_update(drv_info->vm_id, ffa_self_notif_handle, 929 + drv_info, true); 930 + if (!ret) 931 + return ret; 932 + cleanup: 933 + ffa_notifications_cleanup(); 934 + return ret; 1210 935 } 1211 936 1212 937 static int __init ffa_init(void) ··· 1487 766 1488 767 ffa_set_up_mem_ops_native_flag(); 1489 768 769 + ret = ffa_notifications_setup(); 770 + if (ret) 771 + goto partitions_cleanup; 772 + 1490 773 return 0; 774 + partitions_cleanup: 775 + ffa_partitions_cleanup(); 1491 776 free_pages: 1492 777 if (drv_info->tx_buffer) 1493 778 free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE); ··· 1508 781 1509 782 static void __exit ffa_exit(void) 1510 783 { 784 + ffa_notifications_cleanup(); 785 + ffa_partitions_cleanup(); 1511 786 ffa_rxtx_unmap(drv_info->vm_id); 1512 787 free_pages_exact(drv_info->tx_buffer, RXTX_BUFFER_SIZE); 1513 788 free_pages_exact(drv_info->rx_buffer, RXTX_BUFFER_SIZE); 789 + xa_destroy(&drv_info->partition_info); 1514 790 kfree(drv_info); 1515 791 arm_ffa_bus_exit(); 1516 792 }
drivers/firmware/arm_scmi/Kconfig | +12
··· 181 181 will be called scmi_pm_domain. Note this may needed early in boot 182 182 before rootfs may be available. 183 183 184 + config ARM_SCMI_PERF_DOMAIN 185 + tristate "SCMI performance domain driver" 186 + depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF) 187 + default y 188 + select PM_GENERIC_DOMAINS if PM 189 + help 190 + This enables support for the SCMI performance domains which can be 191 + enabled or disabled via the SCP firmware. 192 + 193 + This driver can also be built as a module. If so, the module will be 194 + called scmi_perf_domain. 195 + 184 196 config ARM_SCMI_POWER_CONTROL 185 197 tristate "SCMI system power control driver" 186 198 depends on ARM_SCMI_PROTOCOL || (COMPILE_TEST && OF)
drivers/firmware/arm_scmi/Makefile | -1
··· 16 16 obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-core.o 17 17 obj-$(CONFIG_ARM_SCMI_PROTOCOL) += scmi-module.o 18 18 19 - obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o 20 19 obj-$(CONFIG_ARM_SCMI_POWER_CONTROL) += scmi_power_control.o 21 20 22 21 ifeq ($(CONFIG_THUMB2_KERNEL)$(CONFIG_CC_IS_CLANG),yy)
drivers/firmware/arm_scmi/clock.c | +379 -23
··· 21 21 CLOCK_NAME_GET = 0x8, 22 22 CLOCK_RATE_NOTIFY = 0x9, 23 23 CLOCK_RATE_CHANGE_REQUESTED_NOTIFY = 0xA, 24 + CLOCK_CONFIG_GET = 0xB, 25 + CLOCK_POSSIBLE_PARENTS_GET = 0xC, 26 + CLOCK_PARENT_SET = 0xD, 27 + CLOCK_PARENT_GET = 0xE, 28 + }; 29 + 30 + enum clk_state { 31 + CLK_STATE_DISABLE, 32 + CLK_STATE_ENABLE, 33 + CLK_STATE_RESERVED, 34 + CLK_STATE_UNCHANGED, 24 35 }; 25 36 26 37 struct scmi_msg_resp_clock_protocol_attributes { ··· 42 31 43 32 struct scmi_msg_resp_clock_attributes { 44 33 __le32 attributes; 45 - #define CLOCK_ENABLE BIT(0) 46 34 #define SUPPORTS_RATE_CHANGED_NOTIF(x) ((x) & BIT(31)) 47 35 #define SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(x) ((x) & BIT(30)) 48 36 #define SUPPORTS_EXTENDED_NAMES(x) ((x) & BIT(29)) 37 + #define SUPPORTS_PARENT_CLOCK(x) ((x) & BIT(28)) 49 38 u8 name[SCMI_SHORT_NAME_MAX_SIZE]; 50 39 __le32 clock_enable_latency; 51 40 }; 52 41 53 - struct scmi_clock_set_config { 42 + struct scmi_msg_clock_possible_parents { 43 + __le32 id; 44 + __le32 skip_parents; 45 + }; 46 + 47 + struct scmi_msg_resp_clock_possible_parents { 48 + __le32 num_parent_flags; 49 + #define NUM_PARENTS_RETURNED(x) ((x) & 0xff) 50 + #define NUM_PARENTS_REMAINING(x) ((x) >> 24) 51 + __le32 possible_parents[]; 52 + }; 53 + 54 + struct scmi_msg_clock_set_parent { 55 + __le32 id; 56 + __le32 parent_id; 57 + }; 58 + 59 + struct scmi_msg_clock_config_set { 54 60 __le32 id; 55 61 __le32 attributes; 62 + }; 63 + 64 + /* Valid only from SCMI clock v2.1 */ 65 + struct scmi_msg_clock_config_set_v2 { 66 + __le32 id; 67 + __le32 attributes; 68 + #define NULL_OEM_TYPE 0 69 + #define REGMASK_OEM_TYPE_SET GENMASK(23, 16) 70 + #define REGMASK_CLK_STATE GENMASK(1, 0) 71 + __le32 oem_config_val; 72 + }; 73 + 74 + struct scmi_msg_clock_config_get { 75 + __le32 id; 76 + __le32 flags; 77 + #define REGMASK_OEM_TYPE_GET GENMASK(7, 0) 78 + }; 79 + 80 + struct scmi_msg_resp_clock_config_get { 81 + __le32 attributes; 82 + __le32 config; 83 + #define IS_CLK_ENABLED(x) 
le32_get_bits((x), BIT(0)) 84 + __le32 oem_config_val; 56 85 }; 57 86 58 87 struct scmi_msg_clock_describe_rates { ··· 151 100 int max_async_req; 152 101 atomic_t cur_async_req; 153 102 struct scmi_clock_info *clk; 103 + int (*clock_config_set)(const struct scmi_protocol_handle *ph, 104 + u32 clk_id, enum clk_state state, 105 + u8 oem_type, u32 oem_val, bool atomic); 106 + int (*clock_config_get)(const struct scmi_protocol_handle *ph, 107 + u32 clk_id, u8 oem_type, u32 *attributes, 108 + bool *enabled, u32 *oem_val, bool atomic); 154 109 }; 155 110 156 111 static enum scmi_clock_protocol_cmd evt_2_cmd[] = { ··· 186 129 } 187 130 188 131 ph->xops->xfer_put(ph, t); 132 + return ret; 133 + } 134 + 135 + struct scmi_clk_ipriv { 136 + struct device *dev; 137 + u32 clk_id; 138 + struct scmi_clock_info *clk; 139 + }; 140 + 141 + static void iter_clk_possible_parents_prepare_message(void *message, unsigned int desc_index, 142 + const void *priv) 143 + { 144 + struct scmi_msg_clock_possible_parents *msg = message; 145 + const struct scmi_clk_ipriv *p = priv; 146 + 147 + msg->id = cpu_to_le32(p->clk_id); 148 + /* Set the number of OPPs to be skipped/already read */ 149 + msg->skip_parents = cpu_to_le32(desc_index); 150 + } 151 + 152 + static int iter_clk_possible_parents_update_state(struct scmi_iterator_state *st, 153 + const void *response, void *priv) 154 + { 155 + const struct scmi_msg_resp_clock_possible_parents *r = response; 156 + struct scmi_clk_ipriv *p = priv; 157 + struct device *dev = ((struct scmi_clk_ipriv *)p)->dev; 158 + u32 flags; 159 + 160 + flags = le32_to_cpu(r->num_parent_flags); 161 + st->num_returned = NUM_PARENTS_RETURNED(flags); 162 + st->num_remaining = NUM_PARENTS_REMAINING(flags); 163 + 164 + /* 165 + * num parents is not declared previously anywhere so we 166 + * assume it's returned+remaining on first call. 
167 + */ 168 + if (!st->max_resources) { 169 + p->clk->num_parents = st->num_returned + st->num_remaining; 170 + p->clk->parents = devm_kcalloc(dev, p->clk->num_parents, 171 + sizeof(*p->clk->parents), 172 + GFP_KERNEL); 173 + if (!p->clk->parents) { 174 + p->clk->num_parents = 0; 175 + return -ENOMEM; 176 + } 177 + st->max_resources = st->num_returned + st->num_remaining; 178 + } 179 + 180 + return 0; 181 + } 182 + 183 + static int iter_clk_possible_parents_process_response(const struct scmi_protocol_handle *ph, 184 + const void *response, 185 + struct scmi_iterator_state *st, 186 + void *priv) 187 + { 188 + const struct scmi_msg_resp_clock_possible_parents *r = response; 189 + struct scmi_clk_ipriv *p = priv; 190 + 191 + u32 *parent = &p->clk->parents[st->desc_index + st->loop_idx]; 192 + 193 + *parent = le32_to_cpu(r->possible_parents[st->loop_idx]); 194 + 195 + return 0; 196 + } 197 + 198 + static int scmi_clock_possible_parents(const struct scmi_protocol_handle *ph, u32 clk_id, 199 + struct scmi_clock_info *clk) 200 + { 201 + struct scmi_iterator_ops ops = { 202 + .prepare_message = iter_clk_possible_parents_prepare_message, 203 + .update_state = iter_clk_possible_parents_update_state, 204 + .process_response = iter_clk_possible_parents_process_response, 205 + }; 206 + 207 + struct scmi_clk_ipriv ppriv = { 208 + .clk_id = clk_id, 209 + .clk = clk, 210 + .dev = ph->dev, 211 + }; 212 + void *iter; 213 + int ret; 214 + 215 + iter = ph->hops->iter_response_init(ph, &ops, 0, 216 + CLOCK_POSSIBLE_PARENTS_GET, 217 + sizeof(struct scmi_msg_clock_possible_parents), 218 + &ppriv); 219 + if (IS_ERR(iter)) 220 + return PTR_ERR(iter); 221 + 222 + ret = ph->hops->iter_response_run(iter); 223 + 189 224 return ret; 190 225 } 191 226 ··· 325 176 clk->rate_changed_notifications = true; 326 177 if (SUPPORTS_RATE_CHANGE_REQUESTED_NOTIF(attributes)) 327 178 clk->rate_change_requested_notifications = true; 179 + if (SUPPORTS_PARENT_CLOCK(attributes)) 180 + 
		scmi_clock_possible_parents(ph, clk_id, clk);
 }

 	return ret;
···
 	else
 		return 1;
 }
-
-struct scmi_clk_ipriv {
-	struct device *dev;
-	u32 clk_id;
-	struct scmi_clock_info *clk;
-};

 static void iter_clk_describe_prepare_message(void *message,
 					      const unsigned int desc_index,
···

 static int
 scmi_clock_config_set(const struct scmi_protocol_handle *ph, u32 clk_id,
-		      u32 config, bool atomic)
+		      enum clk_state state, u8 __unused0, u32 __unused1,
+		      bool atomic)
 {
 	int ret;
 	struct scmi_xfer *t;
-	struct scmi_clock_set_config *cfg;
+	struct scmi_msg_clock_config_set *cfg;
+
+	if (state >= CLK_STATE_RESERVED)
+		return -EINVAL;

 	ret = ph->xops->xfer_get_init(ph, CLOCK_CONFIG_SET,
 				      sizeof(*cfg), 0, &t);
···

 	cfg = t->tx.buf;
 	cfg->id = cpu_to_le32(clk_id);
-	cfg->attributes = cpu_to_le32(config);
+	cfg->attributes = cpu_to_le32(state);

 	ret = ph->xops->do_xfer(ph, t);
···
 	return ret;
 }

-static int scmi_clock_enable(const struct scmi_protocol_handle *ph, u32 clk_id)
+static int
+scmi_clock_set_parent(const struct scmi_protocol_handle *ph, u32 clk_id,
+		      u32 parent_id)
 {
-	return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE, false);
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_clock_set_parent *cfg;
+	struct clock_info *ci = ph->get_priv(ph);
+	struct scmi_clock_info *clk;
+
+	if (clk_id >= ci->num_clocks)
+		return -EINVAL;
+
+	clk = ci->clk + clk_id;
+
+	if (parent_id >= clk->num_parents)
+		return -EINVAL;
+
+	ret = ph->xops->xfer_get_init(ph, CLOCK_PARENT_SET,
+				      sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	t->hdr.poll_completion = false;
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(clk_id);
+	cfg->parent_id = cpu_to_le32(clk->parents[parent_id]);
+
+	ret = ph->xops->do_xfer(ph, t);
+
+	ph->xops->xfer_put(ph, t);
+
+	return ret;
 }

-static int scmi_clock_disable(const struct scmi_protocol_handle *ph, u32 clk_id)
+static int
+scmi_clock_get_parent(const struct scmi_protocol_handle *ph, u32 clk_id,
+		      u32 *parent_id)
 {
-	return scmi_clock_config_set(ph, clk_id, 0, false);
+	int ret;
+	struct scmi_xfer *t;
+
+	ret = ph->xops->xfer_get_init(ph, CLOCK_PARENT_GET,
+				      sizeof(__le32), sizeof(u32), &t);
+	if (ret)
+		return ret;
+
+	put_unaligned_le32(clk_id, t->tx.buf);
+
+	ret = ph->xops->do_xfer(ph, t);
+	if (!ret)
+		*parent_id = get_unaligned_le32(t->rx.buf);
+
+	ph->xops->xfer_put(ph, t);
+	return ret;
 }

-static int scmi_clock_enable_atomic(const struct scmi_protocol_handle *ph,
-				    u32 clk_id)
+/* For SCMI clock v2.1 and onwards */
+static int
+scmi_clock_config_set_v2(const struct scmi_protocol_handle *ph, u32 clk_id,
+			 enum clk_state state, u8 oem_type, u32 oem_val,
+			 bool atomic)
 {
-	return scmi_clock_config_set(ph, clk_id, CLOCK_ENABLE, true);
+	int ret;
+	u32 attrs;
+	struct scmi_xfer *t;
+	struct scmi_msg_clock_config_set_v2 *cfg;
+
+	if (state == CLK_STATE_RESERVED ||
+	    (!oem_type && state == CLK_STATE_UNCHANGED))
+		return -EINVAL;
+
+	ret = ph->xops->xfer_get_init(ph, CLOCK_CONFIG_SET,
+				      sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	t->hdr.poll_completion = atomic;
+
+	attrs = FIELD_PREP(REGMASK_OEM_TYPE_SET, oem_type) |
+		FIELD_PREP(REGMASK_CLK_STATE, state);
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(clk_id);
+	cfg->attributes = cpu_to_le32(attrs);
+	/* Clear in any case */
+	cfg->oem_config_val = cpu_to_le32(0);
+	if (oem_type)
+		cfg->oem_config_val = cpu_to_le32(oem_val);
+
+	ret = ph->xops->do_xfer(ph, t);
+
+	ph->xops->xfer_put(ph, t);
+	return ret;
 }

-static int scmi_clock_disable_atomic(const struct scmi_protocol_handle *ph,
-				     u32 clk_id)
+static int scmi_clock_enable(const struct scmi_protocol_handle *ph, u32 clk_id,
+			     bool atomic)
 {
-	return scmi_clock_config_set(ph, clk_id, 0, true);
+	struct clock_info *ci = ph->get_priv(ph);
+
+	return ci->clock_config_set(ph, clk_id, CLK_STATE_ENABLE,
+				    NULL_OEM_TYPE, 0, atomic);
+}
+
+static int scmi_clock_disable(const struct scmi_protocol_handle *ph, u32 clk_id,
+			      bool atomic)
+{
+	struct clock_info *ci = ph->get_priv(ph);
+
+	return ci->clock_config_set(ph, clk_id, CLK_STATE_DISABLE,
+				    NULL_OEM_TYPE, 0, atomic);
+}
+
+/* For SCMI clock v2.1 and onwards */
+static int
+scmi_clock_config_get_v2(const struct scmi_protocol_handle *ph, u32 clk_id,
+			 u8 oem_type, u32 *attributes, bool *enabled,
+			 u32 *oem_val, bool atomic)
+{
+	int ret;
+	u32 flags;
+	struct scmi_xfer *t;
+	struct scmi_msg_clock_config_get *cfg;
+
+	ret = ph->xops->xfer_get_init(ph, CLOCK_CONFIG_GET,
+				      sizeof(*cfg), 0, &t);
+	if (ret)
+		return ret;
+
+	t->hdr.poll_completion = atomic;
+
+	flags = FIELD_PREP(REGMASK_OEM_TYPE_GET, oem_type);
+
+	cfg = t->tx.buf;
+	cfg->id = cpu_to_le32(clk_id);
+	cfg->flags = cpu_to_le32(flags);
+
+	ret = ph->xops->do_xfer(ph, t);
+	if (!ret) {
+		struct scmi_msg_resp_clock_config_get *resp = t->rx.buf;
+
+		if (attributes)
+			*attributes = le32_to_cpu(resp->attributes);
+
+		if (enabled)
+			*enabled = IS_CLK_ENABLED(resp->config);
+
+		if (oem_val && oem_type)
+			*oem_val = le32_to_cpu(resp->oem_config_val);
+	}
+
+	ph->xops->xfer_put(ph, t);
+
+	return ret;
+}
+
+static int
+scmi_clock_config_get(const struct scmi_protocol_handle *ph, u32 clk_id,
+		      u8 oem_type, u32 *attributes, bool *enabled,
+		      u32 *oem_val, bool atomic)
+{
+	int ret;
+	struct scmi_xfer *t;
+	struct scmi_msg_resp_clock_attributes *resp;
+
+	if (!enabled)
+		return -EINVAL;
+
+	ret = ph->xops->xfer_get_init(ph, CLOCK_ATTRIBUTES,
+				      sizeof(clk_id), sizeof(*resp), &t);
+	if (ret)
+		return ret;
+
+	t->hdr.poll_completion = atomic;
+	put_unaligned_le32(clk_id, t->tx.buf);
+	resp = t->rx.buf;
+
+	ret = ph->xops->do_xfer(ph, t);
+	if (!ret)
+		*enabled = IS_CLK_ENABLED(resp->attributes);
+
+	ph->xops->xfer_put(ph, t);
+
+	return ret;
+}
+
+static int scmi_clock_state_get(const struct scmi_protocol_handle *ph,
+				u32 clk_id, bool *enabled, bool atomic)
+{
+	struct clock_info *ci = ph->get_priv(ph);
+
+	return ci->clock_config_get(ph, clk_id, NULL_OEM_TYPE, NULL,
+				    enabled, NULL, atomic);
+}
+
+static int scmi_clock_config_oem_set(const struct scmi_protocol_handle *ph,
+				     u32 clk_id, u8 oem_type, u32 oem_val,
+				     bool atomic)
+{
+	struct clock_info *ci = ph->get_priv(ph);
+
+	return ci->clock_config_set(ph, clk_id, CLK_STATE_UNCHANGED,
+				    oem_type, oem_val, atomic);
+}
+
+static int scmi_clock_config_oem_get(const struct scmi_protocol_handle *ph,
+				     u32 clk_id, u8 oem_type, u32 *oem_val,
+				     u32 *attributes, bool atomic)
+{
+	struct clock_info *ci = ph->get_priv(ph);
+
+	return ci->clock_config_get(ph, clk_id, oem_type, attributes,
+				    NULL, oem_val, atomic);
 }

 static int scmi_clock_count_get(const struct scmi_protocol_handle *ph)
···
 	.rate_set = scmi_clock_rate_set,
 	.enable = scmi_clock_enable,
 	.disable = scmi_clock_disable,
-	.enable_atomic = scmi_clock_enable_atomic,
-	.disable_atomic = scmi_clock_disable_atomic,
+	.state_get = scmi_clock_state_get,
+	.config_oem_get = scmi_clock_config_oem_get,
+	.config_oem_set = scmi_clock_config_oem_set,
+	.parent_set = scmi_clock_set_parent,
+	.parent_get = scmi_clock_get_parent,
 };

 static int scmi_clk_rate_notify(const struct scmi_protocol_handle *ph,
···
 		ret = scmi_clock_attributes_get(ph, clkid, clk, version);
 		if (!ret)
 			scmi_clock_describe_rates_get(ph, clkid, clk);
+	}
+
+	if (PROTOCOL_REV_MAJOR(version) >= 0x2 &&
+	    PROTOCOL_REV_MINOR(version) >= 0x1) {
+		cinfo->clock_config_set = scmi_clock_config_set_v2;
+		cinfo->clock_config_get = scmi_clock_config_get_v2;
+	} else {
+		cinfo->clock_config_set = scmi_clock_config_set;
+		cinfo->clock_config_get = scmi_clock_config_get;
 	}

 	cinfo->version = version;
drivers/firmware/arm_scmi/driver.c (+1)
···
 #ifdef CONFIG_ARM_SCMI_TRANSPORT_SMC
 	{ .compatible = "arm,scmi-smc", .data = &scmi_smc_desc},
 	{ .compatible = "arm,scmi-smc-param", .data = &scmi_smc_desc},
+	{ .compatible = "qcom,scmi-smc", .data = &scmi_smc_desc},
 #endif
 #ifdef CONFIG_ARM_SCMI_TRANSPORT_VIRTIO
 	{ .compatible = "arm,scmi-virtio", .data = &scmi_virtio_desc},
drivers/firmware/arm_scmi/perf.c (+51 -61)
···
 struct perf_dom_info {
 	u32 id;
 	bool set_limits;
-	bool set_perf;
 	bool perf_limit_notify;
 	bool perf_level_notify;
 	bool perf_fastchannels;
···
 	u32 sustained_freq_khz;
 	u32 sustained_perf_level;
 	u32 mult_factor;
-	char name[SCMI_MAX_STR_SIZE];
+	struct scmi_perf_domain_info info;
 	struct scmi_opp opp[MAX_OPPS];
 	struct scmi_fc_info *fc_info;
 	struct xarray opps_by_idx;
···
 	flags = le32_to_cpu(attr->flags);

 	dom_info->set_limits = SUPPORTS_SET_LIMITS(flags);
-	dom_info->set_perf = SUPPORTS_SET_PERF_LVL(flags);
+	dom_info->info.set_perf = SUPPORTS_SET_PERF_LVL(flags);
 	dom_info->perf_limit_notify = SUPPORTS_PERF_LIMIT_NOTIFY(flags);
 	dom_info->perf_level_notify = SUPPORTS_PERF_LEVEL_NOTIFY(flags);
 	dom_info->perf_fastchannels = SUPPORTS_PERF_FASTCHANNELS(flags);
···
 		dom_info->mult_factor =
 			(dom_info->sustained_freq_khz * 1000) /
 			dom_info->sustained_perf_level;
-	strscpy(dom_info->name, attr->name, SCMI_SHORT_NAME_MAX_SIZE);
+	strscpy(dom_info->info.name, attr->name,
+		SCMI_SHORT_NAME_MAX_SIZE);

 	ph->xops->xfer_put(ph, t);
···
 	if (!ret && PROTOCOL_REV_MAJOR(version) >= 0x3 &&
 	    SUPPORTS_EXTENDED_NAMES(flags))
 		ph->hops->extended_name_get(ph, PERF_DOMAIN_NAME_GET,
-					    dom_info->id, dom_info->name,
+					    dom_info->id, dom_info->info.name,
 					    SCMI_MAX_STR_SIZE);

 	if (dom_info->level_indexing_mode) {
···
 	return ret;
 }

+static int scmi_perf_num_domains_get(const struct scmi_protocol_handle *ph)
+{
+	struct scmi_perf_info *pi = ph->get_priv(ph);
+
+	return pi->num_domains;
+}
+
+static inline struct perf_dom_info *
+scmi_perf_domain_lookup(const struct scmi_protocol_handle *ph, u32 domain)
+{
+	struct scmi_perf_info *pi = ph->get_priv(ph);
+
+	if (domain >= pi->num_domains)
+		return ERR_PTR(-EINVAL);
+
+	return pi->dom_info + domain;
+}
+
+static const struct scmi_perf_domain_info *
+scmi_perf_info_get(const struct scmi_protocol_handle *ph, u32 domain)
+{
+	struct perf_dom_info *dom;
+
+	dom = scmi_perf_domain_lookup(ph, domain);
+	if (IS_ERR(dom))
+		return ERR_PTR(-EINVAL);
+
+	return &dom->info;
+}
+
 static int scmi_perf_msg_limits_set(const struct scmi_protocol_handle *ph,
 				    u32 domain, u32 max_perf, u32 min_perf)
 {
···

 	ph->xops->xfer_put(ph, t);
 	return ret;
-}
-
-static inline struct perf_dom_info *
-scmi_perf_domain_lookup(const struct scmi_protocol_handle *ph, u32 domain)
-{
-	struct scmi_perf_info *pi = ph->get_priv(ph);
-
-	if (domain >= pi->num_domains)
-		return ERR_PTR(-EINVAL);
-
-	return pi->dom_info + domain;
 }

 static int __scmi_perf_limits_set(const struct scmi_protocol_handle *ph,
···
 	*p_fc = fc;
 }

-/* Device specific ops */
-static int scmi_dev_domain_id(struct device *dev)
-{
-	struct of_phandle_args clkspec;
-
-	if (of_parse_phandle_with_args(dev->of_node, "clocks", "#clock-cells",
-				       0, &clkspec))
-		return -EINVAL;
-
-	return clkspec.args[0];
-}
-
 static int scmi_dvfs_device_opps_add(const struct scmi_protocol_handle *ph,
-				     struct device *dev)
+				     struct device *dev, u32 domain)
 {
-	int idx, ret, domain;
+	int idx, ret;
 	unsigned long freq;
-	struct scmi_opp *opp;
+	struct dev_pm_opp_data data = {};
 	struct perf_dom_info *dom;
-
-	domain = scmi_dev_domain_id(dev);
-	if (domain < 0)
-		return -EINVAL;

 	dom = scmi_perf_domain_lookup(ph, domain);
 	if (IS_ERR(dom))
 		return PTR_ERR(dom);

-	for (opp = dom->opp, idx = 0; idx < dom->opp_count; idx++, opp++) {
+	for (idx = 0; idx < dom->opp_count; idx++) {
 		if (!dom->level_indexing_mode)
-			freq = opp->perf * dom->mult_factor;
+			freq = dom->opp[idx].perf * dom->mult_factor;
 		else
-			freq = opp->indicative_freq * 1000;
+			freq = dom->opp[idx].indicative_freq * 1000;

-		ret = dev_pm_opp_add(dev, freq, 0);
+		data.level = dom->opp[idx].perf;
+		data.freq = freq;
+
+		ret = dev_pm_opp_add_dynamic(dev, &data);
 		if (ret) {
 			dev_warn(dev, "failed to add opp %luHz\n", freq);
-
-			while (idx-- > 0) {
-				if (!dom->level_indexing_mode)
-					freq = (--opp)->perf * dom->mult_factor;
-				else
-					freq = (--opp)->indicative_freq * 1000;
-				dev_pm_opp_remove(dev, freq);
-			}
+			dev_pm_opp_remove_all_dynamic(dev);
 			return ret;
 		}

 		dev_dbg(dev, "[%d][%s]:: Registered OPP[%d] %lu\n",
-			domain, dom->name, idx, freq);
+			domain, dom->info.name, idx, freq);
 	}
 	return 0;
 }

 static int
 scmi_dvfs_transition_latency_get(const struct scmi_protocol_handle *ph,
-				 struct device *dev)
+				 u32 domain)
 {
-	int domain;
 	struct perf_dom_info *dom;
-
-	domain = scmi_dev_domain_id(dev);
-	if (domain < 0)
-		return -EINVAL;

 	dom = scmi_perf_domain_lookup(ph, domain);
 	if (IS_ERR(dom))
···
 }

 static bool scmi_fast_switch_possible(const struct scmi_protocol_handle *ph,
-				      struct device *dev)
+				      u32 domain)
 {
-	int domain;
 	struct perf_dom_info *dom;
-
-	domain = scmi_dev_domain_id(dev);
-	if (domain < 0)
-		return false;

 	dom = scmi_perf_domain_lookup(ph, domain);
 	if (IS_ERR(dom))
···
 }

 static const struct scmi_perf_proto_ops perf_proto_ops = {
+	.num_domains_get = scmi_perf_num_domains_get,
+	.info_get = scmi_perf_info_get,
 	.limits_set = scmi_perf_limits_set,
 	.limits_get = scmi_perf_limits_get,
 	.level_set = scmi_perf_level_set,
 	.level_get = scmi_perf_level_get,
-	.device_domain_id = scmi_dev_domain_id,
 	.transition_latency_get = scmi_dvfs_transition_latency_get,
 	.device_opps_add = scmi_dvfs_device_opps_add,
 	.freq_set = scmi_dvfs_freq_set,
drivers/firmware/arm_scmi/powercap.c (+2 -2)
···
 	msg = t->tx.buf;
 	msg->domain = cpu_to_le32(pc->id);
 	msg->flags =
-		cpu_to_le32(FIELD_PREP(CAP_SET_ASYNC, !!pc->async_powercap_cap_set) |
-			    FIELD_PREP(CAP_SET_IGNORE_DRESP, !!ignore_dresp));
+		cpu_to_le32(FIELD_PREP(CAP_SET_ASYNC, pc->async_powercap_cap_set) |
+			    FIELD_PREP(CAP_SET_IGNORE_DRESP, ignore_dresp));
 	msg->value = cpu_to_le32(power_cap);

 	if (!pc->async_powercap_cap_set || ignore_dresp) {
drivers/firmware/arm_scmi/scmi_pm_domain.c → drivers/pmdomain/arm/scmi_pm_domain.c (renamed)
drivers/firmware/arm_scmi/smc.c (+28 -7)
···
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
+#include <linux/limits.h>
 #include <linux/processor.h>
 #include <linux/slab.h>
···
  * @func_id: smc/hvc call function id
  * @param_page: 4K page number of the shmem channel
  * @param_offset: Offset within the 4K page of the shmem channel
+ * @cap_id: smc/hvc doorbell's capability id to be used on Qualcomm virtual
+ *	    platforms
  */

 struct scmi_smc {
···
 	struct mutex shmem_lock;
 #define INFLIGHT_NONE	MSG_TOKEN_MAX
 	atomic_t inflight;
-	u32 func_id;
-	u32 param_page;
-	u32 param_offset;
+	unsigned long func_id;
+	unsigned long param_page;
+	unsigned long param_offset;
+	unsigned long cap_id;
 };

 static irqreturn_t smc_msg_done_isr(int irq, void *data)
···
 			  bool tx)
 {
 	struct device *cdev = cinfo->dev;
+	unsigned long cap_id = ULONG_MAX;
 	struct scmi_smc *scmi_info;
 	resource_size_t size;
 	struct resource res;
···
 	if (ret < 0)
 		return ret;

+	if (of_device_is_compatible(dev->of_node, "qcom,scmi-smc")) {
+		void __iomem *ptr = (void __iomem *)scmi_info->shmem + size - 8;
+		/* The capability-id is kept in last 8 bytes of shmem.
+		 *     +-------+ <-- 0
+		 *     | shmem |
+		 *     +-------+ <-- size - 8
+		 *     | capId |
+		 *     +-------+ <-- size
+		 */
+		memcpy_fromio(&cap_id, ptr, sizeof(cap_id));
+	}
+
 	if (of_device_is_compatible(dev->of_node, "arm,scmi-smc-param")) {
 		scmi_info->param_page = SHMEM_PAGE(res.start);
 		scmi_info->param_offset = SHMEM_OFFSET(res.start);
···
 	}

 	scmi_info->func_id = func_id;
+	scmi_info->cap_id = cap_id;
 	scmi_info->cinfo = cinfo;
 	smc_channel_lock_init(scmi_info);
 	cinfo->transport_info = scmi_info;
···
 {
 	struct scmi_smc *scmi_info = cinfo->transport_info;
 	struct arm_smccc_res res;
-	unsigned long page = scmi_info->param_page;
-	unsigned long offset = scmi_info->param_offset;

 	/*
 	 * Channel will be released only once response has been
···

 	shmem_tx_prepare(scmi_info->shmem, xfer, cinfo);

-	arm_smccc_1_1_invoke(scmi_info->func_id, page, offset, 0, 0, 0, 0, 0,
-			     &res);
+	if (scmi_info->cap_id != ULONG_MAX)
+		arm_smccc_1_1_invoke(scmi_info->func_id, scmi_info->cap_id, 0,
+				     0, 0, 0, 0, 0, &res);
+	else
+		arm_smccc_1_1_invoke(scmi_info->func_id, scmi_info->param_page,
+				     scmi_info->param_offset, 0, 0, 0, 0, 0,
+				     &res);

 	/* Only SMCCC_RET_NOT_SUPPORTED is valid error code */
 	if (res.a0) {
drivers/firmware/arm_scpi.c (+5 -8)
···
 #include <linux/list.h>
 #include <linux/mailbox_client.h>
 #include <linux/module.h>
+#include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_platform.h>
+#include <linux/platform_device.h>
 #include <linux/printk.h>
+#include <linux/property.h>
 #include <linux/pm_opp.h>
 #include <linux/scpi_protocol.h>
 #include <linux/slab.h>
···
 	return 0;
 }

-static const struct of_device_id legacy_scpi_of_match[] = {
-	{.compatible = "arm,scpi-pre-1.0"},
-	{},
-};
-
 static const struct of_device_id shmem_of_match[] __maybe_unused = {
 	{ .compatible = "amlogic,meson-gxbb-scp-shmem", },
 	{ .compatible = "amlogic,meson-axg-scp-shmem", },
···
 	if (!scpi_drvinfo)
 		return -ENOMEM;

-	if (of_match_device(legacy_scpi_of_match, &pdev->dev))
-		scpi_drvinfo->is_legacy = true;
+	scpi_drvinfo->is_legacy = !!device_get_match_data(dev);

 	count = of_count_phandle_with_args(np, "mboxes", "#mbox-cells");
 	if (count < 0) {
···

 static const struct of_device_id scpi_of_match[] = {
 	{.compatible = "arm,scpi"},
-	{.compatible = "arm,scpi-pre-1.0"},
+	{.compatible = "arm,scpi-pre-1.0", .data = (void *)1UL },
 	{},
 };
drivers/firmware/meson/meson_sm.c (+16 -9)
···
 #include <linux/io.h>
 #include <linux/module.h>
 #include <linux/of.h>
-#include <linux/of_device.h>
+#include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/printk.h>
+#include <linux/property.h>
 #include <linux/types.h>
 #include <linux/sizes.h>
 #include <linux/slab.h>
···
 	return cmd->smc_id;
 }

-static u32 __meson_sm_call(u32 cmd, u32 arg0, u32 arg1, u32 arg2,
+static s32 __meson_sm_call(u32 cmd, u32 arg0, u32 arg1, u32 arg2,
 			   u32 arg3, u32 arg4)
 {
 	struct arm_smccc_res res;
···
  * Return: 0 on success, a negative value on error
  */
 int meson_sm_call(struct meson_sm_firmware *fw, unsigned int cmd_index,
-		  u32 *ret, u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4)
+		  s32 *ret, u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4)
 {
-	u32 cmd, lret;
+	u32 cmd;
+	s32 lret;

 	if (!fw->chip)
 		return -ENOENT;
···
 		      unsigned int bsize, unsigned int cmd_index, u32 arg0,
 		      u32 arg1, u32 arg2, u32 arg3, u32 arg4)
 {
-	u32 size;
+	s32 size;
 	int ret;

 	if (!fw->chip)
···
 	if (meson_sm_call(fw, cmd_index, &size, arg0, arg1, arg2, arg3, arg4) < 0)
 		return -EINVAL;

-	if (size > bsize)
+	if (size < 0 || size > bsize)
 		return -EINVAL;

 	ret = size;

+	/* In some cases (for example GET_CHIP_ID command),
+	 * SMC doesn't return the number of bytes read, even
+	 * though the bytes were actually read into sm_shmem_out.
+	 * So this check is needed.
+	 */
 	if (!size)
 		size = bsize;
···
 		       unsigned int size, unsigned int cmd_index, u32 arg0,
 		       u32 arg1, u32 arg2, u32 arg3, u32 arg4)
 {
-	u32 written;
+	s32 written;

 	if (!fw->chip)
···
 	if (meson_sm_call(fw, cmd_index, &written, arg0, arg1, arg2, arg3, arg4) < 0)
 		return -EINVAL;

-	if (!written)
+	if (written <= 0 || written > size)
 		return -EINVAL;

 	return written;
···
 	if (!fw)
 		return -ENOMEM;

-	chip = of_match_device(meson_sm_ids, dev)->data;
+	chip = device_get_match_data(dev);
 	if (!chip)
 		return -EINVAL;
drivers/firmware/qcom/Kconfig (+56, new file)
···
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# For a description of the syntax of this configuration file,
+# see Documentation/kbuild/kconfig-language.rst.
+#
+
+menu "Qualcomm firmware drivers"
+
+config QCOM_SCM
+	tristate
+
+config QCOM_SCM_DOWNLOAD_MODE_DEFAULT
+	bool "Qualcomm download mode enabled by default"
+	depends on QCOM_SCM
+	help
+	  A device with "download mode" enabled will upon an unexpected
+	  warm-restart enter a special debug mode that allows the user to
+	  "download" memory content over USB for offline postmortem analysis.
+	  The feature can be enabled/disabled on the kernel command line.
+
+	  Say Y here to enable "download mode" by default.
+
+config QCOM_QSEECOM
+	bool "Qualcomm QSEECOM interface driver"
+	depends on QCOM_SCM=y
+	select AUXILIARY_BUS
+	help
+	  Various Qualcomm SoCs have a Secure Execution Environment (SEE) running
+	  in the Trust Zone. This module provides an interface to that via the
+	  QSEECOM mechanism, using SCM calls.
+
+	  The QSEECOM interface allows, among other things, access to applications
+	  running in the SEE. An example of such an application is 'uefisecapp',
+	  which is required to access UEFI variables on certain systems. If
+	  selected, the interface will also attempt to detect and register client
+	  devices for supported applications.
+
+	  Select Y here to enable the QSEECOM interface driver.
+
+config QCOM_QSEECOM_UEFISECAPP
+	bool "Qualcomm SEE UEFI Secure App client driver"
+	depends on QCOM_QSEECOM
+	depends on EFI
+	help
+	  Various Qualcomm SoCs do not allow direct access to EFI variables.
+	  Instead, these need to be accessed via the UEFI Secure Application
+	  (uefisecapp), residing in the Secure Execution Environment (SEE).
+
+	  This module provides a client driver for uefisecapp, installing efivar
+	  operations to allow the kernel accessing EFI variables, and via that also
+	  provide user-space with access to EFI variables via efivarfs.
+
+	  Select Y here to provide access to EFI variables on the aforementioned
+	  platforms.
+
+endmenu
drivers/firmware/qcom/Makefile (+9, new file)
···
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the linux kernel.
+#
+
+obj-$(CONFIG_QCOM_SCM)	+= qcom-scm.o
+qcom-scm-objs += qcom_scm.o qcom_scm-smc.o qcom_scm-legacy.o
+obj-$(CONFIG_QCOM_QSEECOM)	+= qcom_qseecom.o
+obj-$(CONFIG_QCOM_QSEECOM_UEFISECAPP)	+= qcom_qseecom_uefisecapp.o
drivers/firmware/qcom/qcom_qseecom.c (+120, new file)
···
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * Driver for Qualcomm Secure Execution Environment (SEE) interface (QSEECOM).
+ * Responsible for setting up and managing QSEECOM client devices.
+ *
+ * Copyright (C) 2023 Maximilian Luz <luzmaximilian@gmail.com>
+ */
+#include <linux/auxiliary_bus.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+#include <linux/types.h>
+
+#include <linux/firmware/qcom/qcom_qseecom.h>
+#include <linux/firmware/qcom/qcom_scm.h>
+
+struct qseecom_app_desc {
+	const char *app_name;
+	const char *dev_name;
+};
+
+static void qseecom_client_release(struct device *dev)
+{
+	struct qseecom_client *client;
+
+	client = container_of(dev, struct qseecom_client, aux_dev.dev);
+	kfree(client);
+}
+
+static void qseecom_client_remove(void *data)
+{
+	struct qseecom_client *client = data;
+
+	auxiliary_device_delete(&client->aux_dev);
+	auxiliary_device_uninit(&client->aux_dev);
+}
+
+static int qseecom_client_register(struct platform_device *qseecom_dev,
+				   const struct qseecom_app_desc *desc)
+{
+	struct qseecom_client *client;
+	u32 app_id;
+	int ret;
+
+	/* Try to find the app ID, skip device if not found */
+	ret = qcom_scm_qseecom_app_get_id(desc->app_name, &app_id);
+	if (ret)
+		return ret == -ENOENT ? 0 : ret;
+
+	dev_info(&qseecom_dev->dev, "setting up client for %s\n", desc->app_name);
+
+	/* Allocate and set-up the client device */
+	client = kzalloc(sizeof(*client), GFP_KERNEL);
+	if (!client)
+		return -ENOMEM;
+
+	client->aux_dev.name = desc->dev_name;
+	client->aux_dev.dev.parent = &qseecom_dev->dev;
+	client->aux_dev.dev.release = qseecom_client_release;
+	client->app_id = app_id;
+
+	ret = auxiliary_device_init(&client->aux_dev);
+	if (ret) {
+		kfree(client);
+		return ret;
+	}
+
+	ret = auxiliary_device_add(&client->aux_dev);
+	if (ret) {
+		auxiliary_device_uninit(&client->aux_dev);
+		return ret;
+	}
+
+	ret = devm_add_action_or_reset(&qseecom_dev->dev, qseecom_client_remove, client);
+	if (ret)
+		return ret;
+
+	return 0;
+}
+
+/*
+ * List of supported applications. One client device will be created per entry,
+ * assuming the app has already been loaded (usually by firmware bootloaders)
+ * and its ID can be queried successfully.
+ */
+static const struct qseecom_app_desc qcom_qseecom_apps[] = {
+	{ "qcom.tz.uefisecapp", "uefisecapp" },
+};
+
+static int qcom_qseecom_probe(struct platform_device *qseecom_dev)
+{
+	int ret;
+	int i;
+
+	/* Set up client devices for each base application */
+	for (i = 0; i < ARRAY_SIZE(qcom_qseecom_apps); i++) {
+		ret = qseecom_client_register(qseecom_dev, &qcom_qseecom_apps[i]);
+		if (ret)
+			return ret;
+	}
+
+	return 0;
+}
+
+static struct platform_driver qcom_qseecom_driver = {
+	.driver = {
+		.name = "qcom_qseecom",
+	},
+	.probe = qcom_qseecom_probe,
+};
+
+static int __init qcom_qseecom_init(void)
+{
+	return platform_driver_register(&qcom_qseecom_driver);
+}
+subsys_initcall(qcom_qseecom_init);
+
+MODULE_AUTHOR("Maximilian Luz <luzmaximilian@gmail.com>");
+MODULE_DESCRIPTION("Driver for the Qualcomm SEE (QSEECOM) interface");
+MODULE_LICENSE("GPL");
drivers/firmware/qcom/qcom_qseecom_uefisecapp.c (+871, new file)
··· 1 + // SPDX-License-Identifier: GPL-2.0-or-later 2 + /* 3 + * Client driver for Qualcomm UEFI Secure Application (qcom.tz.uefisecapp). 4 + * Provides access to UEFI variables on platforms where they are secured by the 5 + * aforementioned Secure Execution Environment (SEE) application. 6 + * 7 + * Copyright (C) 2023 Maximilian Luz <luzmaximilian@gmail.com> 8 + */ 9 + 10 + #include <linux/efi.h> 11 + #include <linux/kernel.h> 12 + #include <linux/module.h> 13 + #include <linux/mutex.h> 14 + #include <linux/of.h> 15 + #include <linux/platform_device.h> 16 + #include <linux/slab.h> 17 + #include <linux/types.h> 18 + #include <linux/ucs2_string.h> 19 + 20 + #include <linux/firmware/qcom/qcom_qseecom.h> 21 + 22 + /* -- Qualcomm "uefisecapp" interface definitions. -------------------------- */ 23 + 24 + /* Maximum length of name string with null-terminator */ 25 + #define QSEE_MAX_NAME_LEN 1024 26 + 27 + #define QSEE_CMD_UEFI(x) (0x8000 | (x)) 28 + #define QSEE_CMD_UEFI_GET_VARIABLE QSEE_CMD_UEFI(0) 29 + #define QSEE_CMD_UEFI_SET_VARIABLE QSEE_CMD_UEFI(1) 30 + #define QSEE_CMD_UEFI_GET_NEXT_VARIABLE QSEE_CMD_UEFI(2) 31 + #define QSEE_CMD_UEFI_QUERY_VARIABLE_INFO QSEE_CMD_UEFI(3) 32 + 33 + /** 34 + * struct qsee_req_uefi_get_variable - Request for GetVariable command. 35 + * @command_id: The ID of the command. Must be %QSEE_CMD_UEFI_GET_VARIABLE. 36 + * @length: Length of the request in bytes, including this struct and any 37 + * parameters (name, GUID) stored after it as well as any padding 38 + * thereof for alignment. 39 + * @name_offset: Offset from the start of this struct to where the variable 40 + * name is stored (as utf-16 string), in bytes. 41 + * @name_size: Size of the name parameter in bytes, including null-terminator. 42 + * @guid_offset: Offset from the start of this struct to where the GUID 43 + * parameter is stored, in bytes. 44 + * @guid_size: Size of the GUID parameter in bytes, i.e. sizeof(efi_guid_t). 
45 + * @data_size: Size of the output buffer, in bytes. 46 + */ 47 + struct qsee_req_uefi_get_variable { 48 + u32 command_id; 49 + u32 length; 50 + u32 name_offset; 51 + u32 name_size; 52 + u32 guid_offset; 53 + u32 guid_size; 54 + u32 data_size; 55 + } __packed; 56 + 57 + /** 58 + * struct qsee_rsp_uefi_get_variable - Response for GetVariable command. 59 + * @command_id: The ID of the command. Should be %QSEE_CMD_UEFI_GET_VARIABLE. 60 + * @length: Length of the response in bytes, including this struct and the 61 + * returned data. 62 + * @status: Status of this command. 63 + * @attributes: EFI variable attributes. 64 + * @data_offset: Offset from the start of this struct to where the data is 65 + * stored, in bytes. 66 + * @data_size: Size of the returned data, in bytes. In case status indicates 67 + * that the buffer is too small, this will be the size required 68 + * to store the EFI variable data. 69 + */ 70 + struct qsee_rsp_uefi_get_variable { 71 + u32 command_id; 72 + u32 length; 73 + u32 status; 74 + u32 attributes; 75 + u32 data_offset; 76 + u32 data_size; 77 + } __packed; 78 + 79 + /** 80 + * struct qsee_req_uefi_set_variable - Request for the SetVariable command. 81 + * @command_id: The ID of the command. Must be %QSEE_CMD_UEFI_SET_VARIABLE. 82 + * @length: Length of the request in bytes, including this struct and any 83 + * parameters (name, GUID, data) stored after it as well as any 84 + * padding thereof required for alignment. 85 + * @name_offset: Offset from the start of this struct to where the variable 86 + * name is stored (as utf-16 string), in bytes. 87 + * @name_size: Size of the name parameter in bytes, including null-terminator. 88 + * @guid_offset: Offset from the start of this struct to where the GUID 89 + * parameter is stored, in bytes. 90 + * @guid_size: Size of the GUID parameter in bytes, i.e. sizeof(efi_guid_t). 91 + * @attributes: The EFI variable attributes to set for this variable. 
92 + * @data_offset: Offset from the start of this struct to where the EFI variable 93 + * data is stored, in bytes. 94 + * @data_size: Size of EFI variable data, in bytes. 95 + * 96 + */ 97 + struct qsee_req_uefi_set_variable { 98 + u32 command_id; 99 + u32 length; 100 + u32 name_offset; 101 + u32 name_size; 102 + u32 guid_offset; 103 + u32 guid_size; 104 + u32 attributes; 105 + u32 data_offset; 106 + u32 data_size; 107 + } __packed; 108 + 109 + /** 110 + * struct qsee_rsp_uefi_set_variable - Response for the SetVariable command. 111 + * @command_id: The ID of the command. Should be %QSEE_CMD_UEFI_SET_VARIABLE. 112 + * @length: The length of this response, i.e. the size of this struct in 113 + * bytes. 114 + * @status: Status of this command. 115 + * @_unknown1: Unknown response field. 116 + * @_unknown2: Unknown response field. 117 + */ 118 + struct qsee_rsp_uefi_set_variable { 119 + u32 command_id; 120 + u32 length; 121 + u32 status; 122 + u32 _unknown1; 123 + u32 _unknown2; 124 + } __packed; 125 + 126 + /** 127 + * struct qsee_req_uefi_get_next_variable - Request for the 128 + * GetNextVariableName command. 129 + * @command_id: The ID of the command. Must be 130 + * %QSEE_CMD_UEFI_GET_NEXT_VARIABLE. 131 + * @length: Length of the request in bytes, including this struct and any 132 + * parameters (name, GUID) stored after it as well as any padding 133 + * thereof for alignment. 134 + * @guid_offset: Offset from the start of this struct to where the GUID 135 + * parameter is stored, in bytes. 136 + * @guid_size: Size of the GUID parameter in bytes, i.e. sizeof(efi_guid_t). 137 + * @name_offset: Offset from the start of this struct to where the variable 138 + * name is stored (as utf-16 string), in bytes. 139 + * @name_size: Size of the name parameter in bytes, including null-terminator. 
140 + */ 141 + struct qsee_req_uefi_get_next_variable { 142 + u32 command_id; 143 + u32 length; 144 + u32 guid_offset; 145 + u32 guid_size; 146 + u32 name_offset; 147 + u32 name_size; 148 + } __packed; 149 + 150 + /** 151 + * struct qsee_rsp_uefi_get_next_variable - Response for the 152 + * GetNextVariableName command. 153 + * @command_id: The ID of the command. Should be 154 + * %QSEE_CMD_UEFI_GET_NEXT_VARIABLE. 155 + * @length: Length of the response in bytes, including this struct and any 156 + * parameters (name, GUID) stored after it as well as any padding 157 + * thereof for alignment. 158 + * @status: Status of this command. 159 + * @guid_offset: Offset from the start of this struct to where the GUID 160 + * parameter is stored, in bytes. 161 + * @guid_size: Size of the GUID parameter in bytes, i.e. sizeof(efi_guid_t). 162 + * @name_offset: Offset from the start of this struct to where the variable 163 + * name is stored (as utf-16 string), in bytes. 164 + * @name_size: Size of the name parameter in bytes, including null-terminator. 165 + */ 166 + struct qsee_rsp_uefi_get_next_variable { 167 + u32 command_id; 168 + u32 length; 169 + u32 status; 170 + u32 guid_offset; 171 + u32 guid_size; 172 + u32 name_offset; 173 + u32 name_size; 174 + } __packed; 175 + 176 + /** 177 + * struct qsee_req_uefi_query_variable_info - Response for the 178 + * GetNextVariableName command. 179 + * @command_id: The ID of the command. Must be 180 + * %QSEE_CMD_UEFI_QUERY_VARIABLE_INFO. 181 + * @length: The length of this request, i.e. the size of this struct in 182 + * bytes. 183 + * @attributes: The storage attributes to query the info for. 184 + */ 185 + struct qsee_req_uefi_query_variable_info { 186 + u32 command_id; 187 + u32 length; 188 + u32 attributes; 189 + } __packed; 190 + 191 + /** 192 + * struct qsee_rsp_uefi_query_variable_info - Response for the 193 + * GetNextVariableName command. 194 + * @command_id: The ID of the command. 
Must be 195 + * %QSEE_CMD_UEFI_QUERY_VARIABLE_INFO. 196 + * @length: The length of this response, i.e. the size of this 197 + * struct in bytes. 198 + * @status: Status of this command. 199 + * @_pad: Padding. 200 + * @storage_space: Full storage space size, in bytes. 201 + * @remaining_space: Free storage space available, in bytes. 202 + * @max_variable_size: Maximum variable data size, in bytes. 203 + */ 204 + struct qsee_rsp_uefi_query_variable_info { 205 + u32 command_id; 206 + u32 length; 207 + u32 status; 208 + u32 _pad; 209 + u64 storage_space; 210 + u64 remaining_space; 211 + u64 max_variable_size; 212 + } __packed; 213 + 214 + /* -- Alignment helpers ----------------------------------------------------- */ 215 + 216 + /* 217 + * Helper macro to ensure proper alignment of types (fields and arrays) when 218 + * stored in some (contiguous) buffer. 219 + * 220 + * Note: The driver from which this one has been reverse-engineered expects an 221 + * alignment of 8 bytes (64 bits) for GUIDs. Our definition of efi_guid_t, 222 + * however, has an alignment of 4 bytes (32 bits). So far, this seems to work 223 + * fine here. See also the comment on the typedef of efi_guid_t. 224 + */ 225 + #define qcuefi_buf_align_fields(fields...) \ 226 + ({ \ 227 + size_t __len = 0; \ 228 + fields \ 229 + __len; \ 230 + }) 231 + 232 + #define __field_impl(size, align, offset) \ 233 + ({ \ 234 + size_t *__offset = (offset); \ 235 + size_t __aligned; \ 236 + \ 237 + __aligned = ALIGN(__len, align); \ 238 + __len = __aligned + (size); \ 239 + \ 240 + if (__offset) \ 241 + *__offset = __aligned; \ 242 + }); 243 + 244 + #define __array_offs(type, count, offset) \ 245 + __field_impl(sizeof(type) * (count), __alignof__(type), offset) 246 + 247 + #define __array(type, count) __array_offs(type, count, NULL) 248 + #define __field_offs(type, offset) __array_offs(type, 1, offset) 249 + #define __field(type) __array_offs(type, 1, NULL) 250 + 251 + /* -- UEFI app interface.
--------------------------------------------------- */ 252 + 253 + struct qcuefi_client { 254 + struct qseecom_client *client; 255 + struct efivars efivars; 256 + }; 257 + 258 + static struct device *qcuefi_dev(struct qcuefi_client *qcuefi) 259 + { 260 + return &qcuefi->client->aux_dev.dev; 261 + } 262 + 263 + static efi_status_t qsee_uefi_status_to_efi(u32 status) 264 + { 265 + u64 category = status & 0xf0000000; 266 + u64 code = status & 0x0fffffff; 267 + 268 + return category << (BITS_PER_LONG - 32) | code; 269 + } 270 + 271 + static efi_status_t qsee_uefi_get_variable(struct qcuefi_client *qcuefi, const efi_char16_t *name, 272 + const efi_guid_t *guid, u32 *attributes, 273 + unsigned long *data_size, void *data) 274 + { 275 + struct qsee_req_uefi_get_variable *req_data; 276 + struct qsee_rsp_uefi_get_variable *rsp_data; 277 + unsigned long buffer_size = *data_size; 278 + efi_status_t efi_status = EFI_SUCCESS; 279 + unsigned long name_length; 280 + size_t guid_offs; 281 + size_t name_offs; 282 + size_t req_size; 283 + size_t rsp_size; 284 + ssize_t status; 285 + 286 + if (!name || !guid) 287 + return EFI_INVALID_PARAMETER; 288 + 289 + name_length = ucs2_strnlen(name, QSEE_MAX_NAME_LEN) + 1; 290 + if (name_length > QSEE_MAX_NAME_LEN) 291 + return EFI_INVALID_PARAMETER; 292 + 293 + if (buffer_size && !data) 294 + return EFI_INVALID_PARAMETER; 295 + 296 + req_size = qcuefi_buf_align_fields( 297 + __field(*req_data) 298 + __array_offs(*name, name_length, &name_offs) 299 + __field_offs(*guid, &guid_offs) 300 + ); 301 + 302 + rsp_size = qcuefi_buf_align_fields( 303 + __field(*rsp_data) 304 + __array(u8, buffer_size) 305 + ); 306 + 307 + req_data = kzalloc(req_size, GFP_KERNEL); 308 + if (!req_data) { 309 + efi_status = EFI_OUT_OF_RESOURCES; 310 + goto out; 311 + } 312 + 313 + rsp_data = kzalloc(rsp_size, GFP_KERNEL); 314 + if (!rsp_data) { 315 + efi_status = EFI_OUT_OF_RESOURCES; 316 + goto out_free_req; 317 + } 318 + 319 + req_data->command_id = 
QSEE_CMD_UEFI_GET_VARIABLE; 320 + req_data->data_size = buffer_size; 321 + req_data->name_offset = name_offs; 322 + req_data->name_size = name_length * sizeof(*name); 323 + req_data->guid_offset = guid_offs; 324 + req_data->guid_size = sizeof(*guid); 325 + req_data->length = req_size; 326 + 327 + status = ucs2_strscpy(((void *)req_data) + req_data->name_offset, name, name_length); 328 + if (status < 0) { 329 + efi_status = EFI_INVALID_PARAMETER; goto out_free; } 330 + 331 + memcpy(((void *)req_data) + req_data->guid_offset, guid, req_data->guid_size); 332 + 333 + status = qcom_qseecom_app_send(qcuefi->client, req_data, req_size, rsp_data, rsp_size); 334 + if (status) { 335 + efi_status = EFI_DEVICE_ERROR; 336 + goto out_free; 337 + } 338 + 339 + if (rsp_data->command_id != QSEE_CMD_UEFI_GET_VARIABLE) { 340 + efi_status = EFI_DEVICE_ERROR; 341 + goto out_free; 342 + } 343 + 344 + if (rsp_data->length < sizeof(*rsp_data)) { 345 + efi_status = EFI_DEVICE_ERROR; 346 + goto out_free; 347 + } 348 + 349 + if (rsp_data->status) { 350 + dev_dbg(qcuefi_dev(qcuefi), "%s: uefisecapp error: 0x%x\n", 351 + __func__, rsp_data->status); 352 + efi_status = qsee_uefi_status_to_efi(rsp_data->status); 353 + 354 + /* Update size and attributes in case buffer is too small. */ 355 + if (efi_status == EFI_BUFFER_TOO_SMALL) { 356 + *data_size = rsp_data->data_size; 357 + if (attributes) 358 + *attributes = rsp_data->attributes; 359 + } 360 + 361 + goto out_free; 362 + } 363 + 364 + if (rsp_data->length > rsp_size) { 365 + efi_status = EFI_DEVICE_ERROR; 366 + goto out_free; 367 + } 368 + 369 + if (rsp_data->data_offset + rsp_data->data_size > rsp_data->length) { 370 + efi_status = EFI_DEVICE_ERROR; 371 + goto out_free; 372 + } 373 + 374 + /* 375 + * Note: We need to set attributes and data size even if the buffer is 376 + * too small and we won't copy any data.
This is described in the spec, so 377 + * that callers can either allocate a buffer properly (with two calls 378 + * to this function) or just read back attributes without having to 379 + * deal with that. 380 + * 381 + * Specifically: 382 + * - If we have a buffer size of zero and no buffer, just return the 383 + * attributes, required size, and indicate success. 384 + * - If the buffer size is nonzero but too small, indicate that as an 385 + * error. 386 + * - Otherwise, we are good to copy the data. 387 + * 388 + * Note that we have already ensured above that the buffer pointer is 389 + * non-NULL if its size is nonzero. 390 + */ 391 + *data_size = rsp_data->data_size; 392 + if (attributes) 393 + *attributes = rsp_data->attributes; 394 + 395 + if (buffer_size == 0 && !data) { 396 + efi_status = EFI_SUCCESS; 397 + goto out_free; 398 + } 399 + 400 + if (buffer_size < rsp_data->data_size) { 401 + efi_status = EFI_BUFFER_TOO_SMALL; 402 + goto out_free; 403 + } 404 + 405 + memcpy(data, ((void *)rsp_data) + rsp_data->data_offset, rsp_data->data_size); 406 + 407 + out_free: 408 + kfree(rsp_data); 409 + out_free_req: 410 + kfree(req_data); 411 + out: 412 + return efi_status; 413 + } 414 + 415 + static efi_status_t qsee_uefi_set_variable(struct qcuefi_client *qcuefi, const efi_char16_t *name, 416 + const efi_guid_t *guid, u32 attributes, 417 + unsigned long data_size, const void *data) 418 + { 419 + struct qsee_req_uefi_set_variable *req_data; 420 + struct qsee_rsp_uefi_set_variable *rsp_data; 421 + efi_status_t efi_status = EFI_SUCCESS; 422 + unsigned long name_length; 423 + size_t name_offs; 424 + size_t guid_offs; 425 + size_t data_offs; 426 + size_t req_size; 427 + ssize_t status; 428 + 429 + if (!name || !guid) 430 + return EFI_INVALID_PARAMETER; 431 + 432 + name_length = ucs2_strnlen(name, QSEE_MAX_NAME_LEN) + 1; 433 + if (name_length > QSEE_MAX_NAME_LEN) 434 + return EFI_INVALID_PARAMETER; 435 + 436 + /* 437 + * Make sure we have some data if data_size is nonzero.
Note that using 438 + * a size of zero is a valid use-case described in spec and deletes the 439 + * variable. 440 + */ 441 + if (data_size && !data) 442 + return EFI_INVALID_PARAMETER; 443 + 444 + req_size = qcuefi_buf_align_fields( 445 + __field(*req_data) 446 + __array_offs(*name, name_length, &name_offs) 447 + __field_offs(*guid, &guid_offs) 448 + __array_offs(u8, data_size, &data_offs) 449 + ); 450 + 451 + req_data = kzalloc(req_size, GFP_KERNEL); 452 + if (!req_data) { 453 + efi_status = EFI_OUT_OF_RESOURCES; 454 + goto out; 455 + } 456 + 457 + rsp_data = kzalloc(sizeof(*rsp_data), GFP_KERNEL); 458 + if (!rsp_data) { 459 + efi_status = EFI_OUT_OF_RESOURCES; 460 + goto out_free_req; 461 + } 462 + 463 + req_data->command_id = QSEE_CMD_UEFI_SET_VARIABLE; 464 + req_data->attributes = attributes; 465 + req_data->name_offset = name_offs; 466 + req_data->name_size = name_length * sizeof(*name); 467 + req_data->guid_offset = guid_offs; 468 + req_data->guid_size = sizeof(*guid); 469 + req_data->data_offset = data_offs; 470 + req_data->data_size = data_size; 471 + req_data->length = req_size; 472 + 473 + status = ucs2_strscpy(((void *)req_data) + req_data->name_offset, name, name_length); 474 + if (status < 0) 475 + return EFI_INVALID_PARAMETER; 476 + 477 + memcpy(((void *)req_data) + req_data->guid_offset, guid, req_data->guid_size); 478 + 479 + if (data_size) 480 + memcpy(((void *)req_data) + req_data->data_offset, data, req_data->data_size); 481 + 482 + status = qcom_qseecom_app_send(qcuefi->client, req_data, req_size, rsp_data, 483 + sizeof(*rsp_data)); 484 + if (status) { 485 + efi_status = EFI_DEVICE_ERROR; 486 + goto out_free; 487 + } 488 + 489 + if (rsp_data->command_id != QSEE_CMD_UEFI_SET_VARIABLE) { 490 + efi_status = EFI_DEVICE_ERROR; 491 + goto out_free; 492 + } 493 + 494 + if (rsp_data->length != sizeof(*rsp_data)) { 495 + efi_status = EFI_DEVICE_ERROR; 496 + goto out_free; 497 + } 498 + 499 + if (rsp_data->status) { 500 + dev_dbg(qcuefi_dev(qcuefi), "%s: 
uefisecapp error: 0x%x\n", 501 + __func__, rsp_data->status); 502 + efi_status = qsee_uefi_status_to_efi(rsp_data->status); 503 + } 504 + 505 + out_free: 506 + kfree(rsp_data); 507 + out_free_req: 508 + kfree(req_data); 509 + out: 510 + return efi_status; 511 + } 512 + 513 + static efi_status_t qsee_uefi_get_next_variable(struct qcuefi_client *qcuefi, 514 + unsigned long *name_size, efi_char16_t *name, 515 + efi_guid_t *guid) 516 + { 517 + struct qsee_req_uefi_get_next_variable *req_data; 518 + struct qsee_rsp_uefi_get_next_variable *rsp_data; 519 + efi_status_t efi_status = EFI_SUCCESS; 520 + size_t guid_offs; 521 + size_t name_offs; 522 + size_t req_size; 523 + size_t rsp_size; 524 + ssize_t status; 525 + 526 + if (!name_size || !name || !guid) 527 + return EFI_INVALID_PARAMETER; 528 + 529 + if (*name_size == 0) 530 + return EFI_INVALID_PARAMETER; 531 + 532 + req_size = qcuefi_buf_align_fields( 533 + __field(*req_data) 534 + __field_offs(*guid, &guid_offs) 535 + __array_offs(*name, *name_size / sizeof(*name), &name_offs) 536 + ); 537 + 538 + rsp_size = qcuefi_buf_align_fields( 539 + __field(*rsp_data) 540 + __field(*guid) 541 + __array(*name, *name_size / sizeof(*name)) 542 + ); 543 + 544 + req_data = kzalloc(req_size, GFP_KERNEL); 545 + if (!req_data) { 546 + efi_status = EFI_OUT_OF_RESOURCES; 547 + goto out; 548 + } 549 + 550 + rsp_data = kzalloc(rsp_size, GFP_KERNEL); 551 + if (!rsp_data) { 552 + efi_status = EFI_OUT_OF_RESOURCES; 553 + goto out_free_req; 554 + } 555 + 556 + req_data->command_id = QSEE_CMD_UEFI_GET_NEXT_VARIABLE; 557 + req_data->guid_offset = guid_offs; 558 + req_data->guid_size = sizeof(*guid); 559 + req_data->name_offset = name_offs; 560 + req_data->name_size = *name_size; 561 + req_data->length = req_size; 562 + 563 + memcpy(((void *)req_data) + req_data->guid_offset, guid, req_data->guid_size); 564 + status = ucs2_strscpy(((void *)req_data) + req_data->name_offset, name, 565 + *name_size / sizeof(*name)); 566 + if (status < 0) { 567 + efi_status =
EFI_INVALID_PARAMETER; goto out_free; } 568 + 569 + status = qcom_qseecom_app_send(qcuefi->client, req_data, req_size, rsp_data, rsp_size); 570 + if (status) { 571 + efi_status = EFI_DEVICE_ERROR; 572 + goto out_free; 573 + } 574 + 575 + if (rsp_data->command_id != QSEE_CMD_UEFI_GET_NEXT_VARIABLE) { 576 + efi_status = EFI_DEVICE_ERROR; 577 + goto out_free; 578 + } 579 + 580 + if (rsp_data->length < sizeof(*rsp_data)) { 581 + efi_status = EFI_DEVICE_ERROR; 582 + goto out_free; 583 + } 584 + 585 + if (rsp_data->status) { 586 + dev_dbg(qcuefi_dev(qcuefi), "%s: uefisecapp error: 0x%x\n", 587 + __func__, rsp_data->status); 588 + efi_status = qsee_uefi_status_to_efi(rsp_data->status); 589 + 590 + /* 591 + * If the buffer to hold the name is too small, update the 592 + * name_size with the required size, so that callers can 593 + * reallocate it accordingly. 594 + */ 595 + if (efi_status == EFI_BUFFER_TOO_SMALL) 596 + *name_size = rsp_data->name_size; 597 + 598 + goto out_free; 599 + } 600 + 601 + if (rsp_data->length > rsp_size) { 602 + efi_status = EFI_DEVICE_ERROR; 603 + goto out_free; 604 + } 605 + 606 + if (rsp_data->name_offset + rsp_data->name_size > rsp_data->length) { 607 + efi_status = EFI_DEVICE_ERROR; 608 + goto out_free; 609 + } 610 + 611 + if (rsp_data->guid_offset + rsp_data->guid_size > rsp_data->length) { 612 + efi_status = EFI_DEVICE_ERROR; 613 + goto out_free; 614 + } 615 + 616 + if (rsp_data->name_size > *name_size) { 617 + *name_size = rsp_data->name_size; 618 + efi_status = EFI_BUFFER_TOO_SMALL; 619 + goto out_free; 620 + } 621 + 622 + if (rsp_data->guid_size != sizeof(*guid)) { 623 + efi_status = EFI_DEVICE_ERROR; 624 + goto out_free; 625 + } 626 + 627 + memcpy(guid, ((void *)rsp_data) + rsp_data->guid_offset, rsp_data->guid_size); 628 + status = ucs2_strscpy(name, ((void *)rsp_data) + rsp_data->name_offset, 629 + rsp_data->name_size / sizeof(*name)); 630 + *name_size = rsp_data->name_size; 631 + 632 + if (status < 0) { 633 + /* 634 + * Return EFI_DEVICE_ERROR here
because the buffer size should 635 + * have already been validated above, causing this function to 636 + * bail with EFI_BUFFER_TOO_SMALL. 637 + */ 638 + efi_status = EFI_DEVICE_ERROR; 639 + } 640 + 641 + out_free: 642 + kfree(rsp_data); 643 + out_free_req: 644 + kfree(req_data); 645 + out: 646 + return efi_status; 647 + } 648 + 649 + static efi_status_t qsee_uefi_query_variable_info(struct qcuefi_client *qcuefi, u32 attr, 650 + u64 *storage_space, u64 *remaining_space, 651 + u64 *max_variable_size) 652 + { 653 + struct qsee_req_uefi_query_variable_info *req_data; 654 + struct qsee_rsp_uefi_query_variable_info *rsp_data; 655 + efi_status_t efi_status = EFI_SUCCESS; 656 + int status; 657 + 658 + req_data = kzalloc(sizeof(*req_data), GFP_KERNEL); 659 + if (!req_data) { 660 + efi_status = EFI_OUT_OF_RESOURCES; 661 + goto out; 662 + } 663 + 664 + rsp_data = kzalloc(sizeof(*rsp_data), GFP_KERNEL); 665 + if (!rsp_data) { 666 + efi_status = EFI_OUT_OF_RESOURCES; 667 + goto out_free_req; 668 + } 669 + 670 + req_data->command_id = QSEE_CMD_UEFI_QUERY_VARIABLE_INFO; 671 + req_data->attributes = attr; 672 + req_data->length = sizeof(*req_data); 673 + 674 + status = qcom_qseecom_app_send(qcuefi->client, req_data, sizeof(*req_data), rsp_data, 675 + sizeof(*rsp_data)); 676 + if (status) { 677 + efi_status = EFI_DEVICE_ERROR; 678 + goto out_free; 679 + } 680 + 681 + if (rsp_data->command_id != QSEE_CMD_UEFI_QUERY_VARIABLE_INFO) { 682 + efi_status = EFI_DEVICE_ERROR; 683 + goto out_free; 684 + } 685 + 686 + if (rsp_data->length != sizeof(*rsp_data)) { 687 + efi_status = EFI_DEVICE_ERROR; 688 + goto out_free; 689 + } 690 + 691 + if (rsp_data->status) { 692 + dev_dbg(qcuefi_dev(qcuefi), "%s: uefisecapp error: 0x%x\n", 693 + __func__, rsp_data->status); 694 + efi_status = qsee_uefi_status_to_efi(rsp_data->status); 695 + goto out_free; 696 + } 697 + 698 + if (storage_space) 699 + *storage_space = rsp_data->storage_space; 700 + 701 + if (remaining_space) 702 + *remaining_space =
rsp_data->remaining_space; 703 + 704 + if (max_variable_size) 705 + *max_variable_size = rsp_data->max_variable_size; 706 + 707 + out_free: 708 + kfree(rsp_data); 709 + out_free_req: 710 + kfree(req_data); 711 + out: 712 + return efi_status; 713 + } 714 + 715 + /* -- Global efivar interface. ---------------------------------------------- */ 716 + 717 + static struct qcuefi_client *__qcuefi; 718 + static DEFINE_MUTEX(__qcuefi_lock); 719 + 720 + static int qcuefi_set_reference(struct qcuefi_client *qcuefi) 721 + { 722 + mutex_lock(&__qcuefi_lock); 723 + 724 + if (qcuefi && __qcuefi) { 725 + mutex_unlock(&__qcuefi_lock); 726 + return -EEXIST; 727 + } 728 + 729 + __qcuefi = qcuefi; 730 + 731 + mutex_unlock(&__qcuefi_lock); 732 + return 0; 733 + } 734 + 735 + static struct qcuefi_client *qcuefi_acquire(void) 736 + { 737 + mutex_lock(&__qcuefi_lock); 738 + return __qcuefi; 739 + } 740 + 741 + static void qcuefi_release(void) 742 + { 743 + mutex_unlock(&__qcuefi_lock); 744 + } 745 + 746 + static efi_status_t qcuefi_get_variable(efi_char16_t *name, efi_guid_t *vendor, u32 *attr, 747 + unsigned long *data_size, void *data) 748 + { 749 + struct qcuefi_client *qcuefi; 750 + efi_status_t status; 751 + 752 + qcuefi = qcuefi_acquire(); 753 + if (!qcuefi) 754 + return EFI_NOT_READY; 755 + 756 + status = qsee_uefi_get_variable(qcuefi, name, vendor, attr, data_size, data); 757 + 758 + qcuefi_release(); 759 + return status; 760 + } 761 + 762 + static efi_status_t qcuefi_set_variable(efi_char16_t *name, efi_guid_t *vendor, 763 + u32 attr, unsigned long data_size, void *data) 764 + { 765 + struct qcuefi_client *qcuefi; 766 + efi_status_t status; 767 + 768 + qcuefi = qcuefi_acquire(); 769 + if (!qcuefi) 770 + return EFI_NOT_READY; 771 + 772 + status = qsee_uefi_set_variable(qcuefi, name, vendor, attr, data_size, data); 773 + 774 + qcuefi_release(); 775 + return status; 776 + } 777 + 778 + static efi_status_t qcuefi_get_next_variable(unsigned long *name_size, efi_char16_t *name, 779 + 
efi_guid_t *vendor) 780 + { 781 + struct qcuefi_client *qcuefi; 782 + efi_status_t status; 783 + 784 + qcuefi = qcuefi_acquire(); 785 + if (!qcuefi) 786 + return EFI_NOT_READY; 787 + 788 + status = qsee_uefi_get_next_variable(qcuefi, name_size, name, vendor); 789 + 790 + qcuefi_release(); 791 + return status; 792 + } 793 + 794 + static efi_status_t qcuefi_query_variable_info(u32 attr, u64 *storage_space, u64 *remaining_space, 795 + u64 *max_variable_size) 796 + { 797 + struct qcuefi_client *qcuefi; 798 + efi_status_t status; 799 + 800 + qcuefi = qcuefi_acquire(); 801 + if (!qcuefi) 802 + return EFI_NOT_READY; 803 + 804 + status = qsee_uefi_query_variable_info(qcuefi, attr, storage_space, remaining_space, 805 + max_variable_size); 806 + 807 + qcuefi_release(); 808 + return status; 809 + } 810 + 811 + static const struct efivar_operations qcom_efivar_ops = { 812 + .get_variable = qcuefi_get_variable, 813 + .set_variable = qcuefi_set_variable, 814 + .get_next_variable = qcuefi_get_next_variable, 815 + .query_variable_info = qcuefi_query_variable_info, 816 + }; 817 + 818 + /* -- Driver setup. 
--------------------------------------------------------- */ 819 + 820 + static int qcom_uefisecapp_probe(struct auxiliary_device *aux_dev, 821 + const struct auxiliary_device_id *aux_dev_id) 822 + { 823 + struct qcuefi_client *qcuefi; 824 + int status; 825 + 826 + qcuefi = devm_kzalloc(&aux_dev->dev, sizeof(*qcuefi), GFP_KERNEL); 827 + if (!qcuefi) 828 + return -ENOMEM; 829 + 830 + qcuefi->client = container_of(aux_dev, struct qseecom_client, aux_dev); 831 + 832 + auxiliary_set_drvdata(aux_dev, qcuefi); 833 + status = qcuefi_set_reference(qcuefi); 834 + if (status) 835 + return status; 836 + 837 + status = efivars_register(&qcuefi->efivars, &qcom_efivar_ops); 838 + if (status) 839 + qcuefi_set_reference(NULL); 840 + 841 + return status; 842 + } 843 + 844 + static void qcom_uefisecapp_remove(struct auxiliary_device *aux_dev) 845 + { 846 + struct qcuefi_client *qcuefi = auxiliary_get_drvdata(aux_dev); 847 + 848 + efivars_unregister(&qcuefi->efivars); 849 + qcuefi_set_reference(NULL); 850 + } 851 + 852 + static const struct auxiliary_device_id qcom_uefisecapp_id_table[] = { 853 + { .name = "qcom_qseecom.uefisecapp" }, 854 + {} 855 + }; 856 + MODULE_DEVICE_TABLE(auxiliary, qcom_uefisecapp_id_table); 857 + 858 + static struct auxiliary_driver qcom_uefisecapp_driver = { 859 + .probe = qcom_uefisecapp_probe, 860 + .remove = qcom_uefisecapp_remove, 861 + .id_table = qcom_uefisecapp_id_table, 862 + .driver = { 863 + .name = "qcom_qseecom_uefisecapp", 864 + .probe_type = PROBE_PREFER_ASYNCHRONOUS, 865 + }, 866 + }; 867 + module_auxiliary_driver(qcom_uefisecapp_driver); 868 + 869 + MODULE_AUTHOR("Maximilian Luz <luzmaximilian@gmail.com>"); 870 + MODULE_DESCRIPTION("Client driver for Qualcomm SEE UEFI Secure App"); 871 + MODULE_LICENSE("GPL");
drivers/firmware/qcom_scm-legacy.c drivers/firmware/qcom/qcom_scm-legacy.c
drivers/firmware/qcom_scm-smc.c drivers/firmware/qcom/qcom_scm-smc.c
+441 -9
drivers/firmware/qcom_scm.c drivers/firmware/qcom/qcom_scm.c
··· 2 2 /* Copyright (c) 2010,2015,2019 The Linux Foundation. All rights reserved. 3 3 * Copyright (C) 2015 Linaro Ltd. 4 4 */ 5 - #include <linux/platform_device.h> 6 - #include <linux/init.h> 7 - #include <linux/interrupt.h> 5 + 6 + #include <linux/arm-smccc.h> 7 + #include <linux/clk.h> 8 8 #include <linux/completion.h> 9 9 #include <linux/cpumask.h> 10 - #include <linux/export.h> 11 10 #include <linux/dma-mapping.h> 12 - #include <linux/interconnect.h> 13 - #include <linux/module.h> 14 - #include <linux/types.h> 11 + #include <linux/export.h> 15 12 #include <linux/firmware/qcom/qcom_scm.h> 13 + #include <linux/init.h> 14 + #include <linux/interconnect.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/module.h> 16 17 #include <linux/of.h> 17 18 #include <linux/of_address.h> 18 19 #include <linux/of_irq.h> 19 20 #include <linux/of_platform.h> 20 - #include <linux/clk.h> 21 + #include <linux/platform_device.h> 21 22 #include <linux/reset-controller.h> 22 - #include <linux/arm-smccc.h> 23 + #include <linux/types.h> 23 24 24 25 #include "qcom_scm.h" 25 26 ··· 55 54 __le64 mem_addr; 56 55 __le64 mem_size; 57 56 }; 57 + 58 + /** 59 + * struct qcom_scm_qseecom_resp - QSEECOM SCM call response. 60 + * @result: Result or status of the SCM call. See &enum qcom_scm_qseecom_result. 61 + * @resp_type: Type of the response. See &enum qcom_scm_qseecom_resp_type. 62 + * @data: Response data. The type of this data is given in @resp_type. 
63 + */ 64 + struct qcom_scm_qseecom_resp { 65 + u64 result; 66 + u64 resp_type; 67 + u64 data; 68 + }; 69 + 70 + enum qcom_scm_qseecom_result { 71 + QSEECOM_RESULT_SUCCESS = 0, 72 + QSEECOM_RESULT_INCOMPLETE = 1, 73 + QSEECOM_RESULT_BLOCKED_ON_LISTENER = 2, 74 + QSEECOM_RESULT_FAILURE = 0xFFFFFFFF, 75 + }; 76 + 77 + enum qcom_scm_qseecom_resp_type { 78 + QSEECOM_SCM_RES_APP_ID = 0xEE01, 79 + QSEECOM_SCM_RES_QSEOS_LISTENER_ID = 0xEE02, 80 + }; 81 + 82 + enum qcom_scm_qseecom_tz_owner { 83 + QSEECOM_TZ_OWNER_SIP = 2, 84 + QSEECOM_TZ_OWNER_TZ_APPS = 48, 85 + QSEECOM_TZ_OWNER_QSEE_OS = 50 86 + }; 87 + 88 + enum qcom_scm_qseecom_tz_svc { 89 + QSEECOM_TZ_SVC_APP_ID_PLACEHOLDER = 0, 90 + QSEECOM_TZ_SVC_APP_MGR = 1, 91 + QSEECOM_TZ_SVC_INFO = 6, 92 + }; 93 + 94 + enum qcom_scm_qseecom_tz_cmd_app { 95 + QSEECOM_TZ_CMD_APP_SEND = 1, 96 + QSEECOM_TZ_CMD_APP_LOOKUP = 3, 97 + }; 98 + 99 + enum qcom_scm_qseecom_tz_cmd_info { 100 + QSEECOM_TZ_CMD_INFO_VERSION = 3, 101 + }; 102 + 103 + #define QSEECOM_MAX_APP_NAME_SIZE 64 58 104 59 105 /* Each bit configures cold/warm boot address for one of the 4 CPUs */ 60 106 static const u8 qcom_scm_cpu_cold_bits[QCOM_SCM_BOOT_MAX_CPUS] = { ··· 216 168 return qcom_scm_convention; 217 169 218 170 /* 171 + * Per the "SMC calling convention specification", the 64-bit calling 172 + * convention can only be used when the client is 64-bit, otherwise 173 + * the system will encounter undefined behaviour. 174 + */ 175 + #if IS_ENABLED(CONFIG_ARM64) 176 + /* 219 177 * Device isn't required as there is only one argument - no device 220 178 * needed to dma_map_single to secure world 221 179 */ ··· 241 187 forced = true; 242 188 goto found; 243 189 } 190 + #endif 244 191 245 192 probed_convention = SMC_CONVENTION_ARM_32; 246 193 ret = __scm_smc_call(NULL, &desc, probed_convention, &res, true); ··· 457 402 return ret ?
: res.result[0]; 458 403 } 459 404 EXPORT_SYMBOL_GPL(qcom_scm_set_remote_state); 405 + 406 + static int qcom_scm_disable_sdi(void) 407 + { 408 + int ret; 409 + struct qcom_scm_desc desc = { 410 + .svc = QCOM_SCM_SVC_BOOT, 411 + .cmd = QCOM_SCM_BOOT_SDI_CONFIG, 412 + .args[0] = 1, /* Disable watchdog debug */ 413 + .args[1] = 0, /* Disable SDI */ 414 + .arginfo = QCOM_SCM_ARGS(2), 415 + .owner = ARM_SMCCC_OWNER_SIP, 416 + }; 417 + struct qcom_scm_res res; 418 + 419 + ret = qcom_scm_clk_enable(); 420 + if (ret) 421 + return ret; 422 + ret = qcom_scm_call(__scm->dev, &desc, &res); 423 + 424 + qcom_scm_clk_disable(); 425 + 426 + return ret ? : res.result[0]; 427 + } 460 428 461 429 static int __qcom_scm_set_dload_mode(struct device *dev, bool enable) 462 430 { ··· 1399 1321 return 0; 1400 1322 } 1401 1323 1324 + #ifdef CONFIG_QCOM_QSEECOM 1325 + 1326 + /* Lock for QSEECOM SCM call executions */ 1327 + static DEFINE_MUTEX(qcom_scm_qseecom_call_lock); 1328 + 1329 + static int __qcom_scm_qseecom_call(const struct qcom_scm_desc *desc, 1330 + struct qcom_scm_qseecom_resp *res) 1331 + { 1332 + struct qcom_scm_res scm_res = {}; 1333 + int status; 1334 + 1335 + /* 1336 + * QSEECOM SCM calls should not be executed concurrently. Therefore, we 1337 + * require the respective call lock to be held. 1338 + */ 1339 + lockdep_assert_held(&qcom_scm_qseecom_call_lock); 1340 + 1341 + status = qcom_scm_call(__scm->dev, desc, &scm_res); 1342 + 1343 + res->result = scm_res.result[0]; 1344 + res->resp_type = scm_res.result[1]; 1345 + res->data = scm_res.result[2]; 1346 + 1347 + if (status) 1348 + return status; 1349 + 1350 + return 0; 1351 + } 1352 + 1353 + /** 1354 + * qcom_scm_qseecom_call() - Perform a QSEECOM SCM call. 1355 + * @desc: SCM call descriptor. 1356 + * @res: SCM call response (output). 1357 + * 1358 + * Performs the QSEECOM SCM call described by @desc, returning the response in 1359 + * @res. 1360 + * 1361 + * Return: Zero on success, nonzero on failure.
1362 + */ 1363 + static int qcom_scm_qseecom_call(const struct qcom_scm_desc *desc, 1364 + struct qcom_scm_qseecom_resp *res) 1365 + { 1366 + int status; 1367 + 1368 + /* 1369 + * Note: Multiple QSEECOM SCM calls should not be executed at the same 1370 + * time, so lock things here. This needs to be extended to callback/listener 1371 + * handling when support for that is implemented. 1372 + */ 1373 + 1374 + mutex_lock(&qcom_scm_qseecom_call_lock); 1375 + status = __qcom_scm_qseecom_call(desc, res); 1376 + mutex_unlock(&qcom_scm_qseecom_call_lock); 1377 + 1378 + dev_dbg(__scm->dev, "%s: owner=%x, svc=%x, cmd=%x, result=%lld, type=%llx, data=%llx\n", 1379 + __func__, desc->owner, desc->svc, desc->cmd, res->result, 1380 + res->resp_type, res->data); 1381 + 1382 + if (status) { 1383 + dev_err(__scm->dev, "qseecom: scm call failed with error %d\n", status); 1384 + return status; 1385 + } 1386 + 1387 + /* 1388 + * TODO: Handle incomplete and blocked calls: 1389 + * 1390 + * Incomplete and blocked calls are not supported yet. Some devices 1391 + * and/or commands require those, some don't. Let's warn about them 1392 + * prominently in case someone attempts to use these commands with a 1393 + * device/command combination that isn't supported yet. 1394 + */ 1395 + WARN_ON(res->result == QSEECOM_RESULT_INCOMPLETE); 1396 + WARN_ON(res->result == QSEECOM_RESULT_BLOCKED_ON_LISTENER); 1397 + 1398 + return 0; 1399 + } 1400 + 1401 + /** 1402 + * qcom_scm_qseecom_get_version() - Query the QSEECOM version. 1403 + * @version: Pointer where the QSEECOM version will be stored. 1404 + * 1405 + * Performs a QSEECOM SCM call querying the QSEECOM version currently running 1406 + * in the TrustZone. 1407 + * 1408 + * Return: Zero on success, nonzero on failure.
1409 + */ 1410 + static int qcom_scm_qseecom_get_version(u32 *version) 1411 + { 1412 + struct qcom_scm_desc desc = {}; 1413 + struct qcom_scm_qseecom_resp res = {}; 1414 + u32 feature = 10; 1415 + int ret; 1416 + 1417 + desc.owner = QSEECOM_TZ_OWNER_SIP; 1418 + desc.svc = QSEECOM_TZ_SVC_INFO; 1419 + desc.cmd = QSEECOM_TZ_CMD_INFO_VERSION; 1420 + desc.arginfo = QCOM_SCM_ARGS(1, QCOM_SCM_VAL); 1421 + desc.args[0] = feature; 1422 + 1423 + ret = qcom_scm_qseecom_call(&desc, &res); 1424 + if (ret) 1425 + return ret; 1426 + 1427 + *version = res.result; 1428 + return 0; 1429 + } 1430 + 1431 + /** 1432 + * qcom_scm_qseecom_app_get_id() - Query the app ID for a given QSEE app name. 1433 + * @app_name: The name of the app. 1434 + * @app_id: The returned app ID. 1435 + * 1436 + * Query and return the application ID of the QSEE app identified by the given 1437 + * name. The returned ID is the unique identifier of the app required for 1438 + * subsequent communication. 1439 + * 1440 + * Return: Zero on success, nonzero on failure, -ENOENT if the app has not been 1441 + * loaded or could not be found.
1442 + */ 1443 + int qcom_scm_qseecom_app_get_id(const char *app_name, u32 *app_id) 1444 + { 1445 + unsigned long name_buf_size = QSEECOM_MAX_APP_NAME_SIZE; 1446 + unsigned long app_name_len = strlen(app_name); 1447 + struct qcom_scm_desc desc = {}; 1448 + struct qcom_scm_qseecom_resp res = {}; 1449 + dma_addr_t name_buf_phys; 1450 + char *name_buf; 1451 + int status; 1452 + 1453 + if (app_name_len >= name_buf_size) 1454 + return -EINVAL; 1455 + 1456 + name_buf = kzalloc(name_buf_size, GFP_KERNEL); 1457 + if (!name_buf) 1458 + return -ENOMEM; 1459 + 1460 + memcpy(name_buf, app_name, app_name_len); 1461 + 1462 + name_buf_phys = dma_map_single(__scm->dev, name_buf, name_buf_size, DMA_TO_DEVICE); 1463 + status = dma_mapping_error(__scm->dev, name_buf_phys); 1464 + if (status) { 1465 + kfree(name_buf); 1466 + dev_err(__scm->dev, "qseecom: failed to map dma address\n"); 1467 + return status; 1468 + } 1469 + 1470 + desc.owner = QSEECOM_TZ_OWNER_QSEE_OS; 1471 + desc.svc = QSEECOM_TZ_SVC_APP_MGR; 1472 + desc.cmd = QSEECOM_TZ_CMD_APP_LOOKUP; 1473 + desc.arginfo = QCOM_SCM_ARGS(2, QCOM_SCM_RW, QCOM_SCM_VAL); 1474 + desc.args[0] = name_buf_phys; 1475 + desc.args[1] = app_name_len; 1476 + 1477 + status = qcom_scm_qseecom_call(&desc, &res); 1478 + dma_unmap_single(__scm->dev, name_buf_phys, name_buf_size, DMA_TO_DEVICE); 1479 + kfree(name_buf); 1480 + 1481 + if (status) 1482 + return status; 1483 + 1484 + if (res.result == QSEECOM_RESULT_FAILURE) 1485 + return -ENOENT; 1486 + 1487 + if (res.result != QSEECOM_RESULT_SUCCESS) 1488 + return -EINVAL; 1489 + 1490 + if (res.resp_type != QSEECOM_SCM_RES_APP_ID) 1491 + return -EINVAL; 1492 + 1493 + *app_id = res.data; 1494 + return 0; 1495 + } 1496 + EXPORT_SYMBOL_GPL(qcom_scm_qseecom_app_get_id); 1497 + 1498 + /** 1499 + * qcom_scm_qseecom_app_send() - Send to and receive data from a given QSEE app. 1500 + * @app_id: The ID of the target app. 1501 + * @req: Request buffer sent to the app (must be DMA-mappable). 
1502 + * @req_size: Size of the request buffer. 1503 + * @rsp: Response buffer, written to by the app (must be DMA-mappable). 1504 + * @rsp_size: Size of the response buffer. 1505 + * 1506 + * Sends a request to the QSEE app associated with the given ID and reads back 1507 + * its response. The caller must provide two DMA memory regions, one for the 1508 + * request and one for the response, and fill out the @req region with the 1509 + * respective (app-specific) request data. The QSEE app reads this and returns 1510 + * its response in the @rsp region. 1511 + * 1512 + * Return: Zero on success, nonzero on failure. 1513 + */ 1514 + int qcom_scm_qseecom_app_send(u32 app_id, void *req, size_t req_size, void *rsp, 1515 + size_t rsp_size) 1516 + { 1517 + struct qcom_scm_qseecom_resp res = {}; 1518 + struct qcom_scm_desc desc = {}; 1519 + dma_addr_t req_phys; 1520 + dma_addr_t rsp_phys; 1521 + int status; 1522 + 1523 + /* Map request buffer */ 1524 + req_phys = dma_map_single(__scm->dev, req, req_size, DMA_TO_DEVICE); 1525 + status = dma_mapping_error(__scm->dev, req_phys); 1526 + if (status) { 1527 + dev_err(__scm->dev, "qseecom: failed to map request buffer\n"); 1528 + return status; 1529 + } 1530 + 1531 + /* Map response buffer */ 1532 + rsp_phys = dma_map_single(__scm->dev, rsp, rsp_size, DMA_FROM_DEVICE); 1533 + status = dma_mapping_error(__scm->dev, rsp_phys); 1534 + if (status) { 1535 + dma_unmap_single(__scm->dev, req_phys, req_size, DMA_TO_DEVICE); 1536 + dev_err(__scm->dev, "qseecom: failed to map response buffer\n"); 1537 + return status; 1538 + } 1539 + 1540 + /* Set up SCM call data */ 1541 + desc.owner = QSEECOM_TZ_OWNER_TZ_APPS; 1542 + desc.svc = QSEECOM_TZ_SVC_APP_ID_PLACEHOLDER; 1543 + desc.cmd = QSEECOM_TZ_CMD_APP_SEND; 1544 + desc.arginfo = QCOM_SCM_ARGS(5, QCOM_SCM_VAL, 1545 + QCOM_SCM_RW, QCOM_SCM_VAL, 1546 + QCOM_SCM_RW, QCOM_SCM_VAL); 1547 + desc.args[0] = app_id; 1548 + desc.args[1] = req_phys; 1549 + desc.args[2] = req_size; 1550 + desc.args[3] =
rsp_phys; 1551 + desc.args[4] = rsp_size; 1552 + 1553 + /* Perform call */ 1554 + status = qcom_scm_qseecom_call(&desc, &res); 1555 + 1556 + /* Unmap buffers */ 1557 + dma_unmap_single(__scm->dev, rsp_phys, rsp_size, DMA_FROM_DEVICE); 1558 + dma_unmap_single(__scm->dev, req_phys, req_size, DMA_TO_DEVICE); 1559 + 1560 + if (status) 1561 + return status; 1562 + 1563 + if (res.result != QSEECOM_RESULT_SUCCESS) 1564 + return -EIO; 1565 + 1566 + return 0; 1567 + } 1568 + EXPORT_SYMBOL_GPL(qcom_scm_qseecom_app_send); 1569 + 1570 + /* 1571 + * We do not yet support re-entrant calls via the qseecom interface. To prevent 1572 + * any potential issues with this, only allow validated machines for now. 1573 + */ 1574 + static const struct of_device_id qcom_scm_qseecom_allowlist[] = { 1575 + { .compatible = "lenovo,thinkpad-x13s", }, 1576 + { } 1577 + }; 1578 + 1579 + static bool qcom_scm_qseecom_machine_is_allowed(void) 1580 + { 1581 + struct device_node *np; 1582 + bool match; 1583 + 1584 + np = of_find_node_by_path("/"); 1585 + if (!np) 1586 + return false; 1587 + 1588 + match = of_match_node(qcom_scm_qseecom_allowlist, np); 1589 + of_node_put(np); 1590 + 1591 + return match; 1592 + } 1593 + 1594 + static void qcom_scm_qseecom_free(void *data) 1595 + { 1596 + struct platform_device *qseecom_dev = data; 1597 + 1598 + platform_device_del(qseecom_dev); 1599 + platform_device_put(qseecom_dev); 1600 + } 1601 + 1602 + static int qcom_scm_qseecom_init(struct qcom_scm *scm) 1603 + { 1604 + struct platform_device *qseecom_dev; 1605 + u32 version; 1606 + int ret; 1607 + 1608 + /* 1609 + * Note: We do two steps of validation here: First, we try to query the 1610 + * QSEECOM version as a check to see if the interface exists on this 1611 + * device. Second, we check against known good devices due to current 1612 + * driver limitations (see comment in qcom_scm_qseecom_allowlist). 
1613 + * 1614 + * Note that we deliberately do the machine check after the version 1615 + * check so that we can log potentially supported devices. This should 1616 + * be safe as downstream sources indicate that the version query is 1617 + * neither blocking nor reentrant. 1618 + */ 1619 + ret = qcom_scm_qseecom_get_version(&version); 1620 + if (ret) 1621 + return 0; 1622 + 1623 + dev_info(scm->dev, "qseecom: found qseecom with version 0x%x\n", version); 1624 + 1625 + if (!qcom_scm_qseecom_machine_is_allowed()) { 1626 + dev_info(scm->dev, "qseecom: untested machine, skipping\n"); 1627 + return 0; 1628 + } 1629 + 1630 + /* 1631 + * Set up QSEECOM interface device. All application clients will be 1632 + * set up and managed by the corresponding driver for it. 1633 + */ 1634 + qseecom_dev = platform_device_alloc("qcom_qseecom", -1); 1635 + if (!qseecom_dev) 1636 + return -ENOMEM; 1637 + 1638 + qseecom_dev->dev.parent = scm->dev; 1639 + 1640 + ret = platform_device_add(qseecom_dev); 1641 + if (ret) { 1642 + platform_device_put(qseecom_dev); 1643 + return ret; 1644 + } 1645 + 1646 + return devm_add_action_or_reset(scm->dev, qcom_scm_qseecom_free, qseecom_dev); 1647 + } 1648 + 1649 + #else /* CONFIG_QCOM_QSEECOM */ 1650 + 1651 + static int qcom_scm_qseecom_init(struct qcom_scm *scm) 1652 + { 1653 + return 0; 1654 + } 1655 + 1656 + #endif /* CONFIG_QCOM_QSEECOM */ 1657 + 1402 1658 /** 1403 1659 * qcom_scm_is_available() - Checks if SCM is available 1404 1660 */ ··· 1879 1467 */ 1880 1468 if (download_mode) 1881 1469 qcom_scm_set_download_mode(true); 1470 + 1471 + 1472 + /* 1473 + * Disable SDI if indicated by DT that it is enabled by default. 1474 + */ 1475 + if (of_property_read_bool(pdev->dev.of_node, "qcom,sdi-enabled")) 1476 + qcom_scm_disable_sdi(); 1477 + 1478 + /* 1479 + * Initialize the QSEECOM interface. 
1480 + * 1481 + * Note: QSEECOM is fairly self-contained and this only adds the 1482 + * interface device (the driver of which does most of the heavy 1483 + * lifting). So any errors returned here should be either -ENOMEM or 1484 + * -EINVAL (with the latter only in case there's a bug in our code). 1485 + * This means that there is no need to bring down the whole SCM driver. 1486 + * Just log the error instead and let SCM live. 1487 + */ 1488 + ret = qcom_scm_qseecom_init(scm); 1489 + WARN(ret < 0, "failed to initialize qseecom: %d\n", ret); 1882 1490 1883 1491 return 0; 1884 1492 }
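The DMA handling in qcom_scm_qseecom_app_send() above follows a strict map/unwind order: map the request buffer, map the response buffer (unmapping the request again if that fails), and after the SCM call unmap both in reverse order. The ordering can be checked in isolation with a standalone sketch; everything below (fake_map, fake_unmap, the counters) is invented scaffolding standing in for dma_map_single()/dma_unmap_single(), not a kernel API:

```c
#include <assert.h>

/* Toy bookkeeping for outstanding "mappings". */
static int outstanding_maps;
static int map_calls;
static int fail_on_call;	/* 1-based index of the map call to fail; 0 = never */

static int fake_map(void)
{
	map_calls++;
	if (map_calls == fail_on_call)
		return -1;	/* simulated dma_mapping_error() */
	outstanding_maps++;
	return 0;
}

static void fake_unmap(void)
{
	outstanding_maps--;
}

/* Mirrors the error paths of the app_send sequence: a failed response
 * mapping unwinds the request mapping; success unmaps both afterwards. */
static int send_sketch(void)
{
	if (fake_map() < 0)	/* request buffer */
		return -1;
	if (fake_map() < 0) {	/* response buffer */
		fake_unmap();	/* unwind the request mapping */
		return -1;
	}
	/* ... the SCM call would happen here ... */
	fake_unmap();		/* response */
	fake_unmap();		/* request */
	return 0;
}
```

Whatever path is taken, `outstanding_maps` ends at zero, which is the invariant the real function maintains.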
+9 -7
drivers/firmware/qcom_scm.h drivers/firmware/qcom/qcom_scm.h
··· 4 4 #ifndef __QCOM_SCM_INT_H 5 5 #define __QCOM_SCM_INT_H 6 6 7 + struct device; 8 + 7 9 enum qcom_scm_convention { 8 10 SMC_CONVENTION_UNKNOWN, 9 11 SMC_CONVENTION_LEGACY, ··· 66 64 int scm_get_wq_ctx(u32 *wq_ctx, u32 *flags, u32 *more_pending); 67 65 68 66 #define SCM_SMC_FNID(s, c) ((((s) & 0xFF) << 8) | ((c) & 0xFF)) 69 - extern int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc, 70 - enum qcom_scm_convention qcom_convention, 71 - struct qcom_scm_res *res, bool atomic); 67 + int __scm_smc_call(struct device *dev, const struct qcom_scm_desc *desc, 68 + enum qcom_scm_convention qcom_convention, 69 + struct qcom_scm_res *res, bool atomic); 72 70 #define scm_smc_call(dev, desc, res, atomic) \ 73 71 __scm_smc_call((dev), (desc), qcom_scm_convention, (res), (atomic)) 74 72 75 73 #define SCM_LEGACY_FNID(s, c) (((s) << 10) | ((c) & 0x3ff)) 76 - extern int scm_legacy_call_atomic(struct device *dev, 77 - const struct qcom_scm_desc *desc, 78 - struct qcom_scm_res *res); 79 - extern int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc, 74 + int scm_legacy_call_atomic(struct device *dev, const struct qcom_scm_desc *desc, 80 75 struct qcom_scm_res *res); 76 + int scm_legacy_call(struct device *dev, const struct qcom_scm_desc *desc, 77 + struct qcom_scm_res *res); 81 78 82 79 #define QCOM_SCM_SVC_BOOT 0x01 83 80 #define QCOM_SCM_BOOT_SET_ADDR 0x01 84 81 #define QCOM_SCM_BOOT_TERMINATE_PC 0x02 82 + #define QCOM_SCM_BOOT_SDI_CONFIG 0x09 85 83 #define QCOM_SCM_BOOT_SET_DLOAD_MODE 0x10 86 84 #define QCOM_SCM_BOOT_SET_ADDR_MC 0x11 87 85 #define QCOM_SCM_BOOT_SET_REMOTE_STATE 0x0a
+1
drivers/firmware/raspberrypi.c
··· 378 378 379 379 /** 380 380 * devm_rpi_firmware_get - Get pointer to rpi_firmware structure. 381 + * @dev: The firmware device structure 381 382 * @firmware_node: Pointer to the firmware Device Tree node. 382 383 * 383 384 * Returns NULL if the firmware device is not ready.
+30
drivers/firmware/tegra/bpmp.c
··· 313 313 return __tegra_bpmp_channel_write(channel, mrq, flags, data, size); 314 314 } 315 315 316 + static int __maybe_unused tegra_bpmp_resume(struct device *dev); 317 + 316 318 int tegra_bpmp_transfer_atomic(struct tegra_bpmp *bpmp, 317 319 struct tegra_bpmp_message *msg) 318 320 { ··· 326 324 327 325 if (!tegra_bpmp_message_valid(msg)) 328 326 return -EINVAL; 327 + 328 + if (bpmp->suspended) { 329 + /* Reset BPMP IPC channels during resume based on flags passed */ 330 + if (msg->flags & TEGRA_BPMP_MESSAGE_RESET) 331 + tegra_bpmp_resume(bpmp->dev); 332 + else 333 + return -EAGAIN; 334 + } 329 335 330 336 channel = bpmp->tx_channel; 331 337 ··· 373 363 374 364 if (!tegra_bpmp_message_valid(msg)) 375 365 return -EINVAL; 366 + 367 + if (bpmp->suspended) { 368 + /* Reset BPMP IPC channels during resume based on flags passed */ 369 + if (msg->flags & TEGRA_BPMP_MESSAGE_RESET) 370 + tegra_bpmp_resume(bpmp->dev); 371 + else 372 + return -EAGAIN; 373 + } 376 374 377 375 channel = tegra_bpmp_write_threaded(bpmp, msg->mrq, msg->tx.data, 378 376 msg->tx.size); ··· 814 796 return err; 815 797 } 816 798 799 + static int __maybe_unused tegra_bpmp_suspend(struct device *dev) 800 + { 801 + struct tegra_bpmp *bpmp = dev_get_drvdata(dev); 802 + 803 + bpmp->suspended = true; 804 + 805 + return 0; 806 + } 807 + 817 808 static int __maybe_unused tegra_bpmp_resume(struct device *dev) 818 809 { 819 810 struct tegra_bpmp *bpmp = dev_get_drvdata(dev); 811 + 812 + bpmp->suspended = false; 820 813 821 814 if (bpmp->soc->ops->resume) 822 815 return bpmp->soc->ops->resume(bpmp); ··· 836 807 } 837 808 838 809 static const struct dev_pm_ops tegra_bpmp_pm_ops = { 810 + .suspend_noirq = tegra_bpmp_suspend, 839 811 .resume_noirq = tegra_bpmp_resume, 840 812 }; 841 813
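The tegra_bpmp change above gates transfers on a new `suspended` flag: while suspended, only a message carrying TEGRA_BPMP_MESSAGE_RESET proceeds (after re-initializing the channels via the resume path), and anything else bounces with -EAGAIN. A toy model of that gate (all names below are invented; this is not the driver's API):

```c
#include <assert.h>

#define MSG_FLAG_RESET	0x1	/* stands in for TEGRA_BPMP_MESSAGE_RESET */
#define ERR_AGAIN	11	/* stands in for EAGAIN */

struct toy_bpmp {
	int suspended;
	int resumes;	/* counts channel re-initializations */
};

/* Mirrors the check added to tegra_bpmp_transfer(): while suspended,
 * a RESET-flagged message resumes the channels and proceeds; any other
 * message is refused with -EAGAIN. */
static int toy_transfer(struct toy_bpmp *b, unsigned int flags)
{
	if (b->suspended) {
		if (flags & MSG_FLAG_RESET) {
			b->suspended = 0;	/* tegra_bpmp_resume() */
			b->resumes++;
		} else {
			return -ERR_AGAIN;
		}
	}
	return 0;	/* the transfer would proceed here */
}
```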
+9 -60
drivers/firmware/ti_sci.c
··· 16 16 #include <linux/kernel.h> 17 17 #include <linux/mailbox_client.h> 18 18 #include <linux/module.h> 19 - #include <linux/of_device.h> 19 + #include <linux/of.h> 20 + #include <linux/of_platform.h> 21 + #include <linux/platform_device.h> 22 + #include <linux/property.h> 20 23 #include <linux/semaphore.h> 21 24 #include <linux/slab.h> 22 25 #include <linux/soc/ti/ti-msgmgr.h> ··· 193 190 return 0; 194 191 } 195 192 196 - /** 197 - * ti_sci_debugfs_destroy() - clean up log debug file 198 - * @pdev: platform device pointer 199 - * @info: Pointer to SCI entity information 200 - */ 201 - static void ti_sci_debugfs_destroy(struct platform_device *pdev, 202 - struct ti_sci_info *info) 203 - { 204 - if (IS_ERR(info->debug_region)) 205 - return; 206 - 207 - debugfs_remove(info->d); 208 - } 209 193 #else /* CONFIG_DEBUG_FS */ 210 194 static inline int ti_sci_debugfs_create(struct platform_device *dev, 211 195 struct ti_sci_info *info) ··· 475 485 ver->abi_major = rev_info->abi_major; 476 486 ver->abi_minor = rev_info->abi_minor; 477 487 ver->firmware_revision = rev_info->firmware_revision; 478 - strncpy(ver->firmware_description, rev_info->firmware_description, 488 + strscpy(ver->firmware_description, rev_info->firmware_description, 479 489 sizeof(ver->firmware_description)); 480 490 481 491 fail: ··· 2876 2886 const struct ti_sci_handle *ti_sci_get_handle(struct device *dev) 2877 2887 { 2878 2888 struct device_node *ti_sci_np; 2879 - struct list_head *p; 2880 2889 struct ti_sci_handle *handle = NULL; 2881 2890 struct ti_sci_info *info; 2882 2891 ··· 2890 2901 } 2891 2902 2892 2903 mutex_lock(&ti_sci_list_mutex); 2893 - list_for_each(p, &ti_sci_list) { 2894 - info = list_entry(p, struct ti_sci_info, node); 2904 + list_for_each_entry(info, &ti_sci_list, node) { 2895 2905 if (ti_sci_np == info->dev->of_node) { 2896 2906 handle = &info->handle; 2897 2907 info->users++; ··· 3000 3012 struct ti_sci_handle *handle = NULL; 3001 3013 struct device_node *ti_sci_np; 3002 3014 
struct ti_sci_info *info; 3003 - struct list_head *p; 3004 3015 3005 3016 if (!np) { 3006 3017 pr_err("I need a device pointer\n"); ··· 3011 3024 return ERR_PTR(-ENODEV); 3012 3025 3013 3026 mutex_lock(&ti_sci_list_mutex); 3014 - list_for_each(p, &ti_sci_list) { 3015 - info = list_entry(p, struct ti_sci_info, node); 3027 + list_for_each_entry(info, &ti_sci_list, node) { 3016 3028 if (ti_sci_np == info->dev->of_node) { 3017 3029 handle = &info->handle; 3018 3030 info->users++; ··· 3296 3310 static int ti_sci_probe(struct platform_device *pdev) 3297 3311 { 3298 3312 struct device *dev = &pdev->dev; 3299 - const struct of_device_id *of_id; 3300 3313 const struct ti_sci_desc *desc; 3301 3314 struct ti_sci_xfer *xfer; 3302 3315 struct ti_sci_info *info = NULL; ··· 3306 3321 int reboot = 0; 3307 3322 u32 h_id; 3308 3323 3309 - of_id = of_match_device(ti_sci_of_match, dev); 3310 - if (!of_id) { 3311 - dev_err(dev, "OF data missing\n"); 3312 - return -EINVAL; 3313 - } 3314 - desc = of_id->data; 3324 + desc = device_get_match_data(dev); 3315 3325 3316 3326 info = devm_kzalloc(dev, sizeof(*info), GFP_KERNEL); 3317 3327 if (!info) ··· 3429 3449 return ret; 3430 3450 } 3431 3451 3432 - static int ti_sci_remove(struct platform_device *pdev) 3433 - { 3434 - struct ti_sci_info *info; 3435 - struct device *dev = &pdev->dev; 3436 - int ret = 0; 3437 - 3438 - of_platform_depopulate(dev); 3439 - 3440 - info = platform_get_drvdata(pdev); 3441 - 3442 - if (info->nb.notifier_call) 3443 - unregister_restart_handler(&info->nb); 3444 - 3445 - mutex_lock(&ti_sci_list_mutex); 3446 - if (info->users) 3447 - ret = -EBUSY; 3448 - else 3449 - list_del(&info->node); 3450 - mutex_unlock(&ti_sci_list_mutex); 3451 - 3452 - if (!ret) { 3453 - ti_sci_debugfs_destroy(pdev, info); 3454 - 3455 - /* Safe to free channels since no more users */ 3456 - mbox_free_channel(info->chan_tx); 3457 - mbox_free_channel(info->chan_rx); 3458 - } 3459 - 3460 - return ret; 3461 - } 3462 - 3463 3452 static struct 
platform_driver ti_sci_driver = { 3464 3453 .probe = ti_sci_probe, 3465 - .remove = ti_sci_remove, 3466 3454 .driver = { 3467 3455 .name = "ti-sci", 3468 3456 .of_match_table = of_match_ptr(ti_sci_of_match), 3457 + .suppress_bind_attrs = true, 3469 3458 }, 3470 3459 }; 3471 3460 module_platform_driver(ti_sci_driver);
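Among its cleanups, the ti_sci diff switches strncpy() to strscpy(), which always NUL-terminates and reports truncation instead of silently producing an unterminated buffer. A userspace sketch of strscpy()-like behavior, assuming the usual contract (returns the number of characters copied, or -E2BIG on truncation); toy_strscpy is an invented name, not the kernel implementation:

```c
#include <assert.h>
#include <string.h>
#include <stddef.h>

#define ERR_2BIG 7	/* stands in for E2BIG */

/* Copy src into dst (capacity size), always NUL-terminating.
 * Returns characters copied, or -E2BIG if src had to be truncated. */
static long toy_strscpy(char *dst, const char *src, size_t size)
{
	size_t len;

	if (!size)
		return -ERR_2BIG;
	len = strlen(src);
	if (len >= size) {
		memcpy(dst, src, size - 1);
		dst[size - 1] = '\0';	/* strncpy() would not guarantee this */
		return -ERR_2BIG;
	}
	memcpy(dst, src, len + 1);
	return (long)len;
}
```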
+8 -8
drivers/memory/atmel-ebi.c
··· 12 12 #include <linux/mfd/syscon/atmel-matrix.h> 13 13 #include <linux/mfd/syscon/atmel-smc.h> 14 14 #include <linux/init.h> 15 - #include <linux/of_device.h> 15 + #include <linux/of.h> 16 + #include <linux/of_platform.h> 17 + #include <linux/platform_device.h> 18 + #include <linux/property.h> 16 19 #include <linux/regmap.h> 17 20 #include <soc/at91/atmel-sfr.h> 18 21 ··· 33 30 struct atmel_ebi *ebi; 34 31 u32 mode; 35 32 int numcs; 36 - struct atmel_ebi_dev_config configs[]; 33 + struct atmel_ebi_dev_config configs[] __counted_by(numcs); 37 34 }; 38 35 39 36 struct atmel_ebi_caps { ··· 518 515 { 519 516 struct device *dev = &pdev->dev; 520 517 struct device_node *child, *np = dev->of_node, *smc_np; 521 - const struct of_device_id *match; 522 518 struct atmel_ebi *ebi; 523 519 int ret, reg_cells; 524 520 struct clk *clk; 525 521 u32 val; 526 - 527 - match = of_match_device(atmel_ebi_id_table, dev); 528 - if (!match || !match->data) 529 - return -EINVAL; 530 522 531 523 ebi = devm_kzalloc(dev, sizeof(*ebi), GFP_KERNEL); 532 524 if (!ebi) ··· 530 532 platform_set_drvdata(pdev, ebi); 531 533 532 534 INIT_LIST_HEAD(&ebi->devs); 533 - ebi->caps = match->data; 535 + ebi->caps = device_get_match_data(dev); 536 + if (!ebi->caps) 537 + return -EINVAL; 534 538 ebi->dev = dev; 535 539 536 540 clk = devm_clk_get(dev, NULL);
+3 -6
drivers/memory/brcmstb_memc.c
··· 8 8 #include <linux/io.h> 9 9 #include <linux/kernel.h> 10 10 #include <linux/module.h> 11 - #include <linux/of_device.h> 11 + #include <linux/of.h> 12 12 #include <linux/platform_device.h> 13 + #include <linux/property.h> 13 14 14 15 #define REG_MEMC_CNTRLR_CONFIG 0x00 15 16 #define CNTRLR_CONFIG_LPDDR4_SHIFT 5 ··· 122 121 .attrs = dev_attrs, 123 122 }; 124 123 125 - static const struct of_device_id brcmstb_memc_of_match[]; 126 - 127 124 static int brcmstb_memc_probe(struct platform_device *pdev) 128 125 { 129 126 const struct brcmstb_memc_data *memc_data; 130 - const struct of_device_id *of_id; 131 127 struct device *dev = &pdev->dev; 132 128 struct brcmstb_memc *memc; 133 129 int ret; ··· 135 137 136 138 dev_set_drvdata(dev, memc); 137 139 138 - of_id = of_match_device(brcmstb_memc_of_match, dev); 139 - memc_data = of_id->data; 140 + memc_data = device_get_match_data(dev); 140 141 memc->srpd_offset = memc_data->srpd_offset; 141 142 142 143 memc->ddr_ctrl = devm_platform_ioremap_resource(pdev, 0);
+2 -9
drivers/memory/fsl-corenet-cf.c
··· 10 10 #include <linux/irq.h> 11 11 #include <linux/module.h> 12 12 #include <linux/of.h> 13 - #include <linux/of_address.h> 14 - #include <linux/of_device.h> 15 - #include <linux/of_irq.h> 16 13 #include <linux/platform_device.h> 14 + #include <linux/property.h> 17 15 18 16 enum ccf_version { 19 17 CCF1, ··· 170 172 static int ccf_probe(struct platform_device *pdev) 171 173 { 172 174 struct ccf_private *ccf; 173 - const struct of_device_id *match; 174 175 u32 errinten; 175 176 int ret, irq; 176 - 177 - match = of_match_device(ccf_matches, &pdev->dev); 178 - if (WARN_ON(!match)) 179 - return -ENODEV; 180 177 181 178 ccf = devm_kzalloc(&pdev->dev, sizeof(*ccf), GFP_KERNEL); 182 179 if (!ccf) ··· 182 189 return PTR_ERR(ccf->regs); 183 190 184 191 ccf->dev = &pdev->dev; 185 - ccf->info = match->data; 192 + ccf->info = device_get_match_data(&pdev->dev); 186 193 ccf->err_regs = ccf->regs + ccf->info->err_reg_offs; 187 194 188 195 if (ccf->info->has_brr) {
+64
drivers/memory/tegra/tegra234.c
··· 450 450 }, 451 451 }, 452 452 }, { 453 + .id = TEGRA234_MEMORY_CLIENT_VIW, 454 + .name = "viw", 455 + .bpmp_id = TEGRA_ICC_BPMP_VI, 456 + .type = TEGRA_ICC_ISO_VI, 457 + .sid = TEGRA234_SID_ISO_VI, 458 + .regs = { 459 + .sid = { 460 + .override = 0x390, 461 + .security = 0x394, 462 + }, 463 + }, 464 + }, { 453 465 .id = TEGRA234_MEMORY_CLIENT_NVDECSRD, 454 466 .name = "nvdecsrd", 455 467 .bpmp_id = TEGRA_ICC_BPMP_NVDEC, ··· 634 622 }, 635 623 }, 636 624 }, { 625 + .id = TEGRA234_MEMORY_CLIENT_VIFALR, 626 + .name = "vifalr", 627 + .bpmp_id = TEGRA_ICC_BPMP_VIFAL, 628 + .type = TEGRA_ICC_ISO_VIFAL, 629 + .sid = TEGRA234_SID_ISO_VIFALC, 630 + .regs = { 631 + .sid = { 632 + .override = 0x5e0, 633 + .security = 0x5e4, 634 + }, 635 + }, 636 + }, { 637 + .id = TEGRA234_MEMORY_CLIENT_VIFALW, 638 + .name = "vifalw", 639 + .bpmp_id = TEGRA_ICC_BPMP_VIFAL, 640 + .type = TEGRA_ICC_ISO_VIFAL, 641 + .sid = TEGRA234_SID_ISO_VIFALC, 642 + .regs = { 643 + .sid = { 644 + .override = 0x5e8, 645 + .security = 0x5ec, 646 + }, 647 + }, 648 + }, { 637 649 .id = TEGRA234_MEMORY_CLIENT_DLA0RDA, 638 650 .name = "dla0rda", 639 651 .sid = TEGRA234_SID_NVDLA0, ··· 735 699 .sid = { 736 700 .override = 0x628, 737 701 .security = 0x62c, 702 + }, 703 + }, 704 + }, { 705 + .id = TEGRA234_MEMORY_CLIENT_RCER, 706 + .name = "rcer", 707 + .bpmp_id = TEGRA_ICC_BPMP_RCE, 708 + .type = TEGRA_ICC_NISO, 709 + .sid = TEGRA234_SID_RCE, 710 + .regs = { 711 + .sid = { 712 + .override = 0x690, 713 + .security = 0x694, 714 + }, 715 + }, 716 + }, { 717 + .id = TEGRA234_MEMORY_CLIENT_RCEW, 718 + .name = "rcew", 719 + .bpmp_id = TEGRA_ICC_BPMP_RCE, 720 + .type = TEGRA_ICC_NISO, 721 + .sid = TEGRA234_SID_RCE, 722 + .regs = { 723 + .sid = { 724 + .override = 0x698, 725 + .security = 0x69c, 738 726 }, 739 727 }, 740 728 }, { ··· 1045 985 msg.tx.size = sizeof(bwmgr_req); 1046 986 msg.rx.data = &bwmgr_resp; 1047 987 msg.rx.size = sizeof(bwmgr_resp); 988 + 989 + if (pclient->bpmp_id >= TEGRA_ICC_BPMP_CPU_CLUSTER0 && 
990 + pclient->bpmp_id <= TEGRA_ICC_BPMP_CPU_CLUSTER2) 991 + msg.flags = TEGRA_BPMP_MESSAGE_RESET; 1048 992 1049 993 ret = tegra_bpmp_transfer(mc->bpmp, &msg); 1050 994 if (ret < 0) {
+1
drivers/pmdomain/Makefile
··· 2 2 obj-y += actions/ 3 3 obj-y += amlogic/ 4 4 obj-y += apple/ 5 + obj-y += arm/ 5 6 obj-y += bcm/ 6 7 obj-y += imx/ 7 8 obj-y += mediatek/
+4
drivers/pmdomain/arm/Makefile
··· 1 + # SPDX-License-Identifier: GPL-2.0-only 2 + 3 + obj-$(CONFIG_ARM_SCMI_PERF_DOMAIN) += scmi_perf_domain.o 4 + obj-$(CONFIG_ARM_SCMI_POWER_DOMAIN) += scmi_pm_domain.o
+184
drivers/pmdomain/arm/scmi_perf_domain.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * SCMI performance domain support. 4 + * 5 + * Copyright (C) 2023 Linaro Ltd. 6 + */ 7 + 8 + #include <linux/err.h> 9 + #include <linux/device.h> 10 + #include <linux/module.h> 11 + #include <linux/pm_domain.h> 12 + #include <linux/pm_opp.h> 13 + #include <linux/scmi_protocol.h> 14 + #include <linux/slab.h> 15 + 16 + struct scmi_perf_domain { 17 + struct generic_pm_domain genpd; 18 + const struct scmi_perf_proto_ops *perf_ops; 19 + const struct scmi_protocol_handle *ph; 20 + const struct scmi_perf_domain_info *info; 21 + u32 domain_id; 22 + }; 23 + 24 + #define to_scmi_pd(pd) container_of(pd, struct scmi_perf_domain, genpd) 25 + 26 + static int 27 + scmi_pd_set_perf_state(struct generic_pm_domain *genpd, unsigned int state) 28 + { 29 + struct scmi_perf_domain *pd = to_scmi_pd(genpd); 30 + int ret; 31 + 32 + if (!pd->info->set_perf) 33 + return 0; 34 + 35 + if (!state) 36 + return -EINVAL; 37 + 38 + ret = pd->perf_ops->level_set(pd->ph, pd->domain_id, state, true); 39 + if (ret) 40 + dev_warn(&genpd->dev, "Failed with %d when trying to set %d perf level", 41 + ret, state); 42 + 43 + return ret; 44 + } 45 + 46 + static int 47 + scmi_pd_attach_dev(struct generic_pm_domain *genpd, struct device *dev) 48 + { 49 + struct scmi_perf_domain *pd = to_scmi_pd(genpd); 50 + int ret; 51 + 52 + /* 53 + * Allow the device to be attached, but don't add the OPP table unless 54 + * the performance level can be changed. 
55 + */ 56 + if (!pd->info->set_perf) 57 + return 0; 58 + 59 + ret = pd->perf_ops->device_opps_add(pd->ph, dev, pd->domain_id); 60 + if (ret) 61 + dev_warn(dev, "failed to add OPPs for the device\n"); 62 + 63 + return ret; 64 + } 65 + 66 + static void 67 + scmi_pd_detach_dev(struct generic_pm_domain *genpd, struct device *dev) 68 + { 69 + struct scmi_perf_domain *pd = to_scmi_pd(genpd); 70 + 71 + if (!pd->info->set_perf) 72 + return; 73 + 74 + dev_pm_opp_remove_all_dynamic(dev); 75 + } 76 + 77 + static int scmi_perf_domain_probe(struct scmi_device *sdev) 78 + { 79 + struct device *dev = &sdev->dev; 80 + const struct scmi_handle *handle = sdev->handle; 81 + const struct scmi_perf_proto_ops *perf_ops; 82 + struct scmi_protocol_handle *ph; 83 + struct scmi_perf_domain *scmi_pd; 84 + struct genpd_onecell_data *scmi_pd_data; 85 + struct generic_pm_domain **domains; 86 + int num_domains, i, ret = 0; 87 + 88 + if (!handle) 89 + return -ENODEV; 90 + 91 + /* The OF node must specify us as a power-domain provider. 
*/ 92 + if (!of_find_property(dev->of_node, "#power-domain-cells", NULL)) 93 + return 0; 94 + 95 + perf_ops = handle->devm_protocol_get(sdev, SCMI_PROTOCOL_PERF, &ph); 96 + if (IS_ERR(perf_ops)) 97 + return PTR_ERR(perf_ops); 98 + 99 + num_domains = perf_ops->num_domains_get(ph); 100 + if (num_domains < 0) { 101 + dev_warn(dev, "Failed with %d when getting num perf domains\n", 102 + num_domains); 103 + return num_domains; 104 + } else if (!num_domains) { 105 + return 0; 106 + } 107 + 108 + scmi_pd = devm_kcalloc(dev, num_domains, sizeof(*scmi_pd), GFP_KERNEL); 109 + if (!scmi_pd) 110 + return -ENOMEM; 111 + 112 + scmi_pd_data = devm_kzalloc(dev, sizeof(*scmi_pd_data), GFP_KERNEL); 113 + if (!scmi_pd_data) 114 + return -ENOMEM; 115 + 116 + domains = devm_kcalloc(dev, num_domains, sizeof(*domains), GFP_KERNEL); 117 + if (!domains) 118 + return -ENOMEM; 119 + 120 + for (i = 0; i < num_domains; i++, scmi_pd++) { 121 + scmi_pd->info = perf_ops->info_get(ph, i); 122 + 123 + scmi_pd->domain_id = i; 124 + scmi_pd->perf_ops = perf_ops; 125 + scmi_pd->ph = ph; 126 + scmi_pd->genpd.name = scmi_pd->info->name; 127 + scmi_pd->genpd.flags = GENPD_FLAG_ALWAYS_ON | 128 + GENPD_FLAG_OPP_TABLE_FW; 129 + scmi_pd->genpd.set_performance_state = scmi_pd_set_perf_state; 130 + scmi_pd->genpd.attach_dev = scmi_pd_attach_dev; 131 + scmi_pd->genpd.detach_dev = scmi_pd_detach_dev; 132 + 133 + ret = pm_genpd_init(&scmi_pd->genpd, NULL, false); 134 + if (ret) 135 + goto err; 136 + 137 + domains[i] = &scmi_pd->genpd; 138 + } 139 + 140 + scmi_pd_data->domains = domains; 141 + scmi_pd_data->num_domains = num_domains; 142 + 143 + ret = of_genpd_add_provider_onecell(dev->of_node, scmi_pd_data); 144 + if (ret) 145 + goto err; 146 + 147 + dev_set_drvdata(dev, scmi_pd_data); 148 + dev_info(dev, "Initialized %d performance domains", num_domains); 149 + return 0; 150 + err: 151 + for (i--; i >= 0; i--) 152 + pm_genpd_remove(domains[i]); 153 + return ret; 154 + } 155 + 156 + static void 
scmi_perf_domain_remove(struct scmi_device *sdev) 157 + { 158 + struct device *dev = &sdev->dev; 159 + struct genpd_onecell_data *scmi_pd_data = dev_get_drvdata(dev); 160 + int i; 161 + 162 + of_genpd_del_provider(dev->of_node); 163 + 164 + for (i = 0; i < scmi_pd_data->num_domains; i++) 165 + pm_genpd_remove(scmi_pd_data->domains[i]); 166 + } 167 + 168 + static const struct scmi_device_id scmi_id_table[] = { 169 + { SCMI_PROTOCOL_PERF, "perf" }, 170 + { }, 171 + }; 172 + MODULE_DEVICE_TABLE(scmi, scmi_id_table); 173 + 174 + static struct scmi_driver scmi_perf_domain_driver = { 175 + .name = "scmi-perf-domain", 176 + .probe = scmi_perf_domain_probe, 177 + .remove = scmi_perf_domain_remove, 178 + .id_table = scmi_id_table, 179 + }; 180 + module_scmi_driver(scmi_perf_domain_driver); 181 + 182 + MODULE_AUTHOR("Ulf Hansson <ulf.hansson@linaro.org>"); 183 + MODULE_DESCRIPTION("ARM SCMI perf domain driver"); 184 + MODULE_LICENSE("GPL v2");
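scmi_perf_domain_probe() above unwinds partially initialized domains on error with `for (i--; i >= 0; i--)`, removing only the genpds that were successfully set up, in reverse order. A self-contained sketch of that rollback idiom with stubbed init/teardown (the toy_* names are invented):

```c
#include <assert.h>

#define N 5

static int inited[N];	/* tracks which units are currently set up */

static int toy_init(int i, int fail_at)
{
	if (i == fail_at)
		return -1;	/* simulated pm_genpd_init() failure */
	inited[i] = 1;
	return 0;
}

static void toy_remove(int i)
{
	inited[i] = 0;
}

/* Initialize N units; on failure, tear down only the units that were
 * already set up, newest first -- the probe error-path idiom above. */
static int toy_probe(int fail_at)
{
	int i, ret = 0;

	for (i = 0; i < N; i++) {
		ret = toy_init(i, fail_at);
		if (ret)
			goto err;
	}
	return 0;
err:
	for (i--; i >= 0; i--)	/* skip the unit that failed to init */
		toy_remove(i);
	return ret;
}
```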
+2 -4
drivers/soc/aspeed/aspeed-lpc-ctrl.c
··· 332 332 return rc; 333 333 } 334 334 335 - static int aspeed_lpc_ctrl_remove(struct platform_device *pdev) 335 + static void aspeed_lpc_ctrl_remove(struct platform_device *pdev) 336 336 { 337 337 struct aspeed_lpc_ctrl *lpc_ctrl = dev_get_drvdata(&pdev->dev); 338 338 339 339 misc_deregister(&lpc_ctrl->miscdev); 340 340 clk_disable_unprepare(lpc_ctrl->clk); 341 - 342 - return 0; 343 341 } 344 342 345 343 static const struct of_device_id aspeed_lpc_ctrl_match[] = { ··· 353 355 .of_match_table = aspeed_lpc_ctrl_match, 354 356 }, 355 357 .probe = aspeed_lpc_ctrl_probe, 356 - .remove = aspeed_lpc_ctrl_remove, 358 + .remove_new = aspeed_lpc_ctrl_remove, 357 359 }; 358 360 359 361 module_platform_driver(aspeed_lpc_ctrl_driver);
+2 -4
drivers/soc/aspeed/aspeed-lpc-snoop.c
··· 331 331 return rc; 332 332 } 333 333 334 - static int aspeed_lpc_snoop_remove(struct platform_device *pdev) 334 + static void aspeed_lpc_snoop_remove(struct platform_device *pdev) 335 335 { 336 336 struct aspeed_lpc_snoop *lpc_snoop = dev_get_drvdata(&pdev->dev); 337 337 ··· 340 340 aspeed_lpc_disable_snoop(lpc_snoop, 1); 341 341 342 342 clk_disable_unprepare(lpc_snoop->clk); 343 - 344 - return 0; 345 343 } 346 344 347 345 static const struct aspeed_lpc_snoop_model_data ast2400_model_data = { ··· 366 368 .of_match_table = aspeed_lpc_snoop_match, 367 369 }, 368 370 .probe = aspeed_lpc_snoop_probe, 369 - .remove = aspeed_lpc_snoop_remove, 371 + .remove_new = aspeed_lpc_snoop_remove, 370 372 }; 371 373 372 374 module_platform_driver(aspeed_lpc_snoop_driver);
+2 -4
drivers/soc/aspeed/aspeed-p2a-ctrl.c
··· 383 383 return rc; 384 384 } 385 385 386 - static int aspeed_p2a_ctrl_remove(struct platform_device *pdev) 386 + static void aspeed_p2a_ctrl_remove(struct platform_device *pdev) 387 387 { 388 388 struct aspeed_p2a_ctrl *p2a_ctrl = dev_get_drvdata(&pdev->dev); 389 389 390 390 misc_deregister(&p2a_ctrl->miscdev); 391 - 392 - return 0; 393 391 } 394 392 395 393 #define SCU2C_DRAM BIT(25) ··· 431 433 .of_match_table = aspeed_p2a_ctrl_match, 432 434 }, 433 435 .probe = aspeed_p2a_ctrl_probe, 434 - .remove = aspeed_p2a_ctrl_remove, 436 + .remove_new = aspeed_p2a_ctrl_remove, 435 437 }; 436 438 437 439 module_platform_driver(aspeed_p2a_ctrl_driver);
+2 -4
drivers/soc/aspeed/aspeed-uart-routing.c
··· 565 565 return 0; 566 566 } 567 567 568 - static int aspeed_uart_routing_remove(struct platform_device *pdev) 568 + static void aspeed_uart_routing_remove(struct platform_device *pdev) 569 569 { 570 570 struct device *dev = &pdev->dev; 571 571 struct aspeed_uart_routing *uart_routing = platform_get_drvdata(pdev); 572 572 573 573 sysfs_remove_group(&dev->kobj, uart_routing->attr_grp); 574 - 575 - return 0; 576 574 } 577 575 578 576 static const struct of_device_id aspeed_uart_routing_table[] = { ··· 589 591 .of_match_table = aspeed_uart_routing_table, 590 592 }, 591 593 .probe = aspeed_uart_routing_probe, 592 - .remove = aspeed_uart_routing_remove, 594 + .remove_new = aspeed_uart_routing_remove, 593 595 }; 594 596 595 597 module_platform_driver(aspeed_uart_routing_driver);
+1 -1
drivers/soc/bcm/Kconfig
··· 3 3 4 4 config SOC_BRCMSTB 5 5 bool "Broadcom STB SoC drivers" 6 - depends on ARM || ARM64 || BMIPS_GENERIC || COMPILE_TEST 6 + depends on ARCH_BRCMSTB || BMIPS_GENERIC || COMPILE_TEST 7 7 select SOC_BUS 8 8 help 9 9 Enables drivers for the Broadcom Set-Top Box (STB) series of chips.
+4 -1
drivers/soc/dove/pmu.c
··· 410 410 struct pmu_domain *domain; 411 411 412 412 domain = kzalloc(sizeof(*domain), GFP_KERNEL); 413 - if (!domain) 413 + if (!domain) { 414 + of_node_put(np); 414 415 break; 416 + } 415 417 416 418 domain->pmu = pmu; 417 419 domain->base.name = kasprintf(GFP_KERNEL, "%pOFn", np); 418 420 if (!domain->base.name) { 419 421 kfree(domain); 422 + of_node_put(np); 420 423 break; 421 424 } 422 425
+2 -4
drivers/soc/fsl/dpaa2-console.c
··· 300 300 return error; 301 301 } 302 302 303 - static int dpaa2_console_remove(struct platform_device *pdev) 303 + static void dpaa2_console_remove(struct platform_device *pdev) 304 304 { 305 305 misc_deregister(&dpaa2_mc_console_dev); 306 306 misc_deregister(&dpaa2_aiop_console_dev); 307 - 308 - return 0; 309 307 } 310 308 311 309 static const struct of_device_id dpaa2_console_match_table[] = { ··· 320 322 .of_match_table = dpaa2_console_match_table, 321 323 }, 322 324 .probe = dpaa2_console_probe, 323 - .remove = dpaa2_console_remove, 325 + .remove_new = dpaa2_console_remove, 324 326 }; 325 327 module_platform_driver(dpaa2_console_driver); 326 328
+2 -4
drivers/soc/fsl/qe/qmc.c
··· 1415 1415 return ret; 1416 1416 } 1417 1417 1418 - static int qmc_remove(struct platform_device *pdev) 1418 + static void qmc_remove(struct platform_device *pdev) 1419 1419 { 1420 1420 struct qmc *qmc = platform_get_drvdata(pdev); 1421 1421 ··· 1427 1427 1428 1428 /* Disconnect the serial from TSA */ 1429 1429 tsa_serial_disconnect(qmc->tsa_serial); 1430 - 1431 - return 0; 1432 1430 } 1433 1431 1434 1432 static const struct of_device_id qmc_id_table[] = { ··· 1441 1443 .of_match_table = of_match_ptr(qmc_id_table), 1442 1444 }, 1443 1445 .probe = qmc_probe, 1444 - .remove = qmc_remove, 1446 + .remove_new = qmc_remove, 1445 1447 }; 1446 1448 module_platform_driver(qmc_driver); 1447 1449
+2 -3
drivers/soc/fsl/qe/tsa.c
··· 706 706 return 0; 707 707 } 708 708 709 - static int tsa_remove(struct platform_device *pdev) 709 + static void tsa_remove(struct platform_device *pdev) 710 710 { 711 711 struct tsa *tsa = platform_get_drvdata(pdev); 712 712 int i; ··· 729 729 clk_put(tsa->tdm[i].l1rclk_clk); 730 730 } 731 731 } 732 - return 0; 733 732 } 734 733 735 734 static const struct of_device_id tsa_id_table[] = { ··· 743 744 .of_match_table = of_match_ptr(tsa_id_table), 744 745 }, 745 746 .probe = tsa_probe, 746 - .remove = tsa_remove, 747 + .remove_new = tsa_remove, 747 748 }; 748 749 module_platform_driver(tsa_driver); 749 750
+2 -4
drivers/soc/fujitsu/a64fx-diag.c
··· 116 116 return 0; 117 117 } 118 118 119 - static int a64fx_diag_remove(struct platform_device *pdev) 119 + static void a64fx_diag_remove(struct platform_device *pdev) 120 120 { 121 121 struct a64fx_diag_priv *priv = platform_get_drvdata(pdev); 122 122 ··· 127 127 free_nmi(priv->irq, NULL); 128 128 else 129 129 free_irq(priv->irq, NULL); 130 - 131 - return 0; 132 130 } 133 131 134 132 static const struct acpi_device_id a64fx_diag_acpi_match[] = { ··· 142 144 .acpi_match_table = ACPI_PTR(a64fx_diag_acpi_match), 143 145 }, 144 146 .probe = a64fx_diag_probe, 145 - .remove = a64fx_diag_remove, 147 + .remove_new = a64fx_diag_remove, 146 148 }; 147 149 148 150 module_platform_driver(a64fx_diag_driver);
+2 -4
drivers/soc/hisilicon/kunpeng_hccs.c
··· 1240 1240 return rc; 1241 1241 } 1242 1242 1243 - static int hccs_remove(struct platform_device *pdev) 1243 + static void hccs_remove(struct platform_device *pdev) 1244 1244 { 1245 1245 struct hccs_dev *hdev = platform_get_drvdata(pdev); 1246 1246 1247 1247 hccs_remove_topo_dirs(hdev); 1248 1248 hccs_unregister_pcc_channel(hdev); 1249 - 1250 - return 0; 1251 1249 } 1252 1250 1253 1251 static const struct acpi_device_id hccs_acpi_match[] = { ··· 1256 1258 1257 1259 static struct platform_driver hccs_driver = { 1258 1260 .probe = hccs_probe, 1259 - .remove = hccs_remove, 1261 + .remove_new = hccs_remove, 1260 1262 .driver = { 1261 1263 .name = "kunpeng_hccs", 1262 1264 .acpi_match_table = hccs_acpi_match,
+2 -4
drivers/soc/ixp4xx/ixp4xx-npe.c
··· 736 736 return 0; 737 737 } 738 738 739 - static int ixp4xx_npe_remove(struct platform_device *pdev) 739 + static void ixp4xx_npe_remove(struct platform_device *pdev) 740 740 { 741 741 int i; 742 742 ··· 744 744 if (npe_tab[i].regs) { 745 745 npe_reset(&npe_tab[i]); 746 746 } 747 - 748 - return 0; 749 747 } 750 748 751 749 static const struct of_device_id ixp4xx_npe_of_match[] = { ··· 759 761 .of_match_table = ixp4xx_npe_of_match, 760 762 }, 761 763 .probe = ixp4xx_npe_probe, 762 - .remove = ixp4xx_npe_remove, 764 + .remove_new = ixp4xx_npe_remove, 763 765 }; 764 766 module_platform_driver(ixp4xx_npe_driver); 765 767
+2 -3
drivers/soc/ixp4xx/ixp4xx-qmgr.c
···
 	return 0;
 }
 
-static int ixp4xx_qmgr_remove(struct platform_device *pdev)
+static void ixp4xx_qmgr_remove(struct platform_device *pdev)
 {
 	synchronize_irq(qmgr_irq_1);
 	synchronize_irq(qmgr_irq_2);
-	return 0;
 }
 
 static const struct of_device_id ixp4xx_qmgr_of_match[] = {
···
 		.of_match_table = ixp4xx_qmgr_of_match,
 	},
 	.probe = ixp4xx_qmgr_probe,
-	.remove = ixp4xx_qmgr_remove,
+	.remove_new = ixp4xx_qmgr_remove,
 };
 module_platform_driver(ixp4xx_qmgr_driver);
+2 -3
drivers/soc/litex/litex_soc_ctrl.c
···
 	return 0;
 }
 
-static int litex_soc_ctrl_remove(struct platform_device *pdev)
+static void litex_soc_ctrl_remove(struct platform_device *pdev)
 {
 	struct litex_soc_ctrl_device *soc_ctrl_dev = platform_get_drvdata(pdev);
 
 	unregister_restart_handler(&soc_ctrl_dev->reset_nb);
-	return 0;
 }
 
 static struct platform_driver litex_soc_ctrl_driver = {
···
 		.of_match_table = of_match_ptr(litex_soc_ctrl_of_match)
 	},
 	.probe = litex_soc_ctrl_probe,
-	.remove = litex_soc_ctrl_remove,
+	.remove_new = litex_soc_ctrl_remove,
 };
 
 module_platform_driver(litex_soc_ctrl_driver);
+2 -4
drivers/soc/loongson/loongson2_guts.c
···
 	return 0;
 }
 
-static int loongson2_guts_remove(struct platform_device *dev)
+static void loongson2_guts_remove(struct platform_device *dev)
 {
 	soc_device_unregister(soc_dev);
-
-	return 0;
 }
 
 /*
···
 		.of_match_table = loongson2_guts_of_match,
 	},
 	.probe = loongson2_guts_probe,
-	.remove = loongson2_guts_remove,
+	.remove_new = loongson2_guts_remove,
 };
 
 static int __init loongson2_guts_init(void)
+2 -4
drivers/soc/mediatek/mtk-devapc.c
···
 	return 0;
 }
 
-static int mtk_devapc_remove(struct platform_device *pdev)
+static void mtk_devapc_remove(struct platform_device *pdev)
 {
 	struct mtk_devapc_context *ctx = platform_get_drvdata(pdev);
 
 	stop_devapc(ctx);
-
-	return 0;
 }
 
 static struct platform_driver mtk_devapc_driver = {
 	.probe = mtk_devapc_probe,
-	.remove = mtk_devapc_remove,
+	.remove_new = mtk_devapc_remove,
 	.driver = {
 		.name = "mtk-devapc",
 		.of_match_table = mtk_devapc_dt_match,
+2 -4
drivers/soc/mediatek/mtk-mmsys.c
···
 	return 0;
 }
 
-static int mtk_mmsys_remove(struct platform_device *pdev)
+static void mtk_mmsys_remove(struct platform_device *pdev)
 {
 	struct mtk_mmsys *mmsys = platform_get_drvdata(pdev);
 
 	platform_device_unregister(mmsys->drm_pdev);
 	platform_device_unregister(mmsys->clks_pdev);
-
-	return 0;
 }
 
 static const struct of_device_id of_match_mtk_mmsys[] = {
···
 		.of_match_table = of_match_mtk_mmsys,
 	},
 	.probe = mtk_mmsys_probe,
-	.remove = mtk_mmsys_remove,
+	.remove_new = mtk_mmsys_remove,
 };
 module_platform_driver(mtk_mmsys_drv);
+180 -4
drivers/soc/mediatek/mtk-svs.c
···
  * @dcbdet: svs efuse data
  * @dcmdet: svs efuse data
  * @turn_pt: 2-line turn point tells which opp_volt calculated by high/low bank
+ * @vbin_turn_pt: voltage bin turn point helps know which svsb_volt should be overridden
  * @type: bank type to represent it is 2-line (high/low) bank or 1-line bank
  *
  * Svs bank will generate suitalbe voltages by below general math equation
···
 	u32 dcbdet;
 	u32 dcmdet;
 	u32 turn_pt;
+	u32 vbin_turn_pt;
 	u32 type;
 };
 
···
 
 	ret = thermal_zone_get_temp(svsb->tzd, &tzone_temp);
 	if (ret)
-		seq_printf(m, "%s: temperature ignore, turn_pt = %u\n",
-			   svsb->name, svsb->turn_pt);
+		seq_printf(m, "%s: temperature ignore, vbin_turn_pt = %u, turn_pt = %u\n",
+			   svsb->name, svsb->vbin_turn_pt, svsb->turn_pt);
 	else
-		seq_printf(m, "%s: temperature = %d, turn_pt = %u\n",
-			   svsb->name, tzone_temp, svsb->turn_pt);
+		seq_printf(m, "%s: temperature = %d, vbin_turn_pt = %u, turn_pt = %u\n",
+			   svsb->name, tzone_temp, svsb->vbin_turn_pt,
+			   svsb->turn_pt);
 
 	for (i = 0; i < svsb->opp_count; i++) {
 		opp = dev_pm_opp_find_freq_exact(svsb->opp_dev,
···
 	for (i = opp_start; i < opp_stop; i++)
 		if (svsb->volt_flags & SVSB_REMOVE_DVTFIXED_VOLT)
 			svsb->volt[i] -= svsb->dvt_fixed;
+
+	/* For voltage bin support */
+	if (svsb->opp_dfreq[0] > svsb->freq_base) {
+		svsb->volt[0] = svs_opp_volt_to_bank_volt(svsb->opp_dvolt[0],
+							  svsb->volt_step,
+							  svsb->volt_base);
+
+		/* Find voltage bin turn point */
+		for (i = 0; i < svsb->opp_count; i++) {
+			if (svsb->opp_dfreq[i] <= svsb->freq_base) {
+				svsb->vbin_turn_pt = i;
+				break;
+			}
+		}
+
+		/* Override svs bank voltages */
+		for (i = 1; i < svsb->vbin_turn_pt; i++)
+			svsb->volt[i] = interpolate(svsb->freq_pct[0],
+						    svsb->freq_pct[svsb->vbin_turn_pt],
+						    svsb->volt[0],
+						    svsb->volt[svsb->vbin_turn_pt],
+						    svsb->freq_pct[i]);
+	}
 }
 
 static void svs_set_bank_freq_pct_v3(struct svs_platform *svsp)
···
 
 	for (i = 0; i < svsb->opp_count; i++)
 		svsb->volt[i] += svsb->volt_od;
+
+	/* For voltage bin support */
+	if (svsb->opp_dfreq[0] > svsb->freq_base) {
+		svsb->volt[0] = svs_opp_volt_to_bank_volt(svsb->opp_dvolt[0],
+							  svsb->volt_step,
+							  svsb->volt_base);
+
+		/* Find voltage bin turn point */
+		for (i = 0; i < svsb->opp_count; i++) {
+			if (svsb->opp_dfreq[i] <= svsb->freq_base) {
+				svsb->vbin_turn_pt = i;
+				break;
+			}
+		}
+
+		/* Override svs bank voltages */
+		for (i = 1; i < svsb->vbin_turn_pt; i++)
+			svsb->volt[i] = interpolate(svsb->freq_pct[0],
+						    svsb->freq_pct[svsb->vbin_turn_pt],
+						    svsb->volt[0],
+						    svsb->volt[svsb->vbin_turn_pt],
+						    svsb->freq_pct[i]);
+	}
 }
 
 static void svs_set_bank_freq_pct_v2(struct svs_platform *svsp)
···
 	return true;
 }
 
+static bool svs_mt8188_efuse_parsing(struct svs_platform *svsp)
+{
+	struct svs_bank *svsb;
+	u32 idx, i, golden_temp;
+	int ret;
+
+	for (i = 0; i < svsp->efuse_max; i++)
+		if (svsp->efuse[i])
+			dev_info(svsp->dev, "M_HW_RES%d: 0x%08x\n",
+				 i, svsp->efuse[i]);
+
+	if (!svsp->efuse[5]) {
+		dev_notice(svsp->dev, "svs_efuse[5] = 0x0?\n");
+		return false;
+	}
+
+	/* Svs efuse parsing */
+	for (idx = 0; idx < svsp->bank_max; idx++) {
+		svsb = &svsp->banks[idx];
+
+		if (svsb->type == SVSB_LOW) {
+			svsb->mtdes = svsp->efuse[5] & GENMASK(7, 0);
+			svsb->bdes = (svsp->efuse[5] >> 16) & GENMASK(7, 0);
+			svsb->mdes = (svsp->efuse[5] >> 24) & GENMASK(7, 0);
+			svsb->dcbdet = (svsp->efuse[15] >> 16) & GENMASK(7, 0);
+			svsb->dcmdet = (svsp->efuse[15] >> 24) & GENMASK(7, 0);
+		} else if (svsb->type == SVSB_HIGH) {
+			svsb->mtdes = svsp->efuse[4] & GENMASK(7, 0);
+			svsb->bdes = (svsp->efuse[4] >> 16) & GENMASK(7, 0);
+			svsb->mdes = (svsp->efuse[4] >> 24) & GENMASK(7, 0);
+			svsb->dcbdet = svsp->efuse[14] & GENMASK(7, 0);
+			svsb->dcmdet = (svsp->efuse[14] >> 8) & GENMASK(7, 0);
+		}
+
+		svsb->vmax += svsb->dvt_fixed;
+	}
+
+	ret = svs_get_efuse_data(svsp, "t-calibration-data",
+				 &svsp->tefuse, &svsp->tefuse_max);
+	if (ret)
+		return false;
+
+	for (i = 0; i < svsp->tefuse_max; i++)
+		if (svsp->tefuse[i] != 0)
+			break;
+
+	if (i == svsp->tefuse_max)
+		golden_temp = 50; /* All thermal efuse data are 0 */
+	else
+		golden_temp = (svsp->tefuse[0] >> 24) & GENMASK(7, 0);
+
+	for (idx = 0; idx < svsp->bank_max; idx++) {
+		svsb = &svsp->banks[idx];
+		svsb->mts = 500;
+		svsb->bts = (((500 * golden_temp + 250460) / 1000) - 25) * 4;
+	}
+
+	return true;
+}
+
 static bool svs_mt8183_efuse_parsing(struct svs_platform *svsp)
 {
 	struct svs_bank *svsb;
···
 	},
 };
 
+static struct svs_bank svs_mt8188_banks[] = {
+	{
+		.sw_id = SVSB_GPU,
+		.type = SVSB_LOW,
+		.set_freq_pct = svs_set_bank_freq_pct_v3,
+		.get_volts = svs_get_bank_volts_v3,
+		.volt_flags = SVSB_REMOVE_DVTFIXED_VOLT,
+		.mode_support = SVSB_MODE_INIT02,
+		.opp_count = MAX_OPP_ENTRIES,
+		.freq_base = 640000000,
+		.turn_freq_base = 640000000,
+		.volt_step = 6250,
+		.volt_base = 400000,
+		.vmax = 0x38,
+		.vmin = 0x1c,
+		.age_config = 0x555555,
+		.dc_config = 0x555555,
+		.dvt_fixed = 0x1,
+		.vco = 0x10,
+		.chk_shift = 0x87,
+		.core_sel = 0x0fff0000,
+		.int_st = BIT(0),
+		.ctl0 = 0x00100003,
+	},
+	{
+		.sw_id = SVSB_GPU,
+		.type = SVSB_HIGH,
+		.set_freq_pct = svs_set_bank_freq_pct_v3,
+		.get_volts = svs_get_bank_volts_v3,
+		.tzone_name = "gpu1",
+		.volt_flags = SVSB_REMOVE_DVTFIXED_VOLT |
+			      SVSB_MON_VOLT_IGNORE,
+		.mode_support = SVSB_MODE_INIT02 | SVSB_MODE_MON,
+		.opp_count = MAX_OPP_ENTRIES,
+		.freq_base = 880000000,
+		.turn_freq_base = 640000000,
+		.volt_step = 6250,
+		.volt_base = 400000,
+		.vmax = 0x38,
+		.vmin = 0x1c,
+		.age_config = 0x555555,
+		.dc_config = 0x555555,
+		.dvt_fixed = 0x4,
+		.vco = 0x10,
+		.chk_shift = 0x87,
+		.core_sel = 0x0fff0001,
+		.int_st = BIT(1),
+		.ctl0 = 0x00100003,
+		.tzone_htemp = 85000,
+		.tzone_htemp_voffset = 0,
+		.tzone_ltemp = 25000,
+		.tzone_ltemp_voffset = 7,
+	},
+};
+
 static struct svs_bank svs_mt8183_banks[] = {
 	{
 		.sw_id = SVSB_CPU_LITTLE,
···
 	.bank_max = ARRAY_SIZE(svs_mt8192_banks),
 };
 
+static const struct svs_platform_data svs_mt8188_platform_data = {
+	.name = "mt8188-svs",
+	.banks = svs_mt8188_banks,
+	.efuse_parsing = svs_mt8188_efuse_parsing,
+	.probe = svs_mt8192_platform_probe,
+	.regs = svs_regs_v2,
+	.bank_max = ARRAY_SIZE(svs_mt8188_banks),
+};
+
 static const struct svs_platform_data svs_mt8183_platform_data = {
 	.name = "mt8183-svs",
 	.banks = svs_mt8183_banks,
···
 	{
 		.compatible = "mediatek,mt8192-svs",
 		.data = &svs_mt8192_platform_data,
+	}, {
+		.compatible = "mediatek,mt8188-svs",
+		.data = &svs_mt8188_platform_data,
 	}, {
 		.compatible = "mediatek,mt8183-svs",
 		.data = &svs_mt8183_platform_data,
+2 -4
drivers/soc/microchip/mpfs-sys-controller.c
···
 	return 0;
 }
 
-static int mpfs_sys_controller_remove(struct platform_device *pdev)
+static void mpfs_sys_controller_remove(struct platform_device *pdev)
 {
 	struct mpfs_sys_controller *sys_controller = platform_get_drvdata(pdev);
 
 	mpfs_sys_controller_put(sys_controller);
-
-	return 0;
 }
 
 static const struct of_device_id mpfs_sys_controller_of_match[] = {
···
 		.of_match_table = mpfs_sys_controller_of_match,
 	},
 	.probe = mpfs_sys_controller_probe,
-	.remove = mpfs_sys_controller_remove,
+	.remove_new = mpfs_sys_controller_remove,
 };
 module_platform_driver(mpfs_sys_controller_driver);
+2 -4
drivers/soc/pxa/ssp.c
···
 	return 0;
 }
 
-static int pxa_ssp_remove(struct platform_device *pdev)
+static void pxa_ssp_remove(struct platform_device *pdev)
 {
 	struct ssp_device *ssp = platform_get_drvdata(pdev);
 
 	mutex_lock(&ssp_lock);
 	list_del(&ssp->node);
 	mutex_unlock(&ssp_lock);
-
-	return 0;
 }
 
 static const struct platform_device_id ssp_id_table[] = {
···
 
 static struct platform_driver pxa_ssp_driver = {
 	.probe = pxa_ssp_probe,
-	.remove = pxa_ssp_remove,
+	.remove_new = pxa_ssp_remove,
 	.driver = {
 		.name = "pxa2xx-ssp",
 		.of_match_table = of_match_ptr(pxa_ssp_of_ids),
+2 -2
drivers/soc/qcom/apr.c
···
 struct apr_rx_buf {
 	struct list_head node;
 	int len;
-	uint8_t buf[];
+	uint8_t buf[] __counted_by(len);
 };
 
 /**
···
 		return -EINVAL;
 	}
 
-	abuf = kzalloc(sizeof(*abuf) + len, GFP_ATOMIC);
+	abuf = kzalloc(struct_size(abuf, buf, len), GFP_ATOMIC);
 	if (!abuf)
 		return -ENOMEM;
+4 -4
drivers/soc/qcom/cmd-db.c
···
 
 	return 0;
 }
-EXPORT_SYMBOL(cmd_db_ready);
+EXPORT_SYMBOL_GPL(cmd_db_ready);
 
 static int cmd_db_get_header(const char *id, const struct entry_header **eh,
 			     const struct rsc_hdr **rh)
···
 
 	return ret < 0 ? 0 : le32_to_cpu(ent->addr);
 }
-EXPORT_SYMBOL(cmd_db_read_addr);
+EXPORT_SYMBOL_GPL(cmd_db_read_addr);
 
 /**
  * cmd_db_read_aux_data() - Query command db for aux data.
···
 
 	return rsc_offset(rsc_hdr, ent);
 }
-EXPORT_SYMBOL(cmd_db_read_aux_data);
+EXPORT_SYMBOL_GPL(cmd_db_read_aux_data);
 
 /**
  * cmd_db_read_slave_id - Get the slave ID for a given resource address
···
 
 	addr = le32_to_cpu(ent->addr);
 	return (addr >> SLAVE_ID_SHIFT) & SLAVE_ID_MASK;
 }
-EXPORT_SYMBOL(cmd_db_read_slave_id);
+EXPORT_SYMBOL_GPL(cmd_db_read_slave_id);
 
 #ifdef CONFIG_DEBUG_FS
 static int cmd_db_debugfs_dump(struct seq_file *seq, void *p)
+2 -4
drivers/soc/qcom/icc-bwmon.c
···
 	return 0;
 }
 
-static int bwmon_remove(struct platform_device *pdev)
+static void bwmon_remove(struct platform_device *pdev)
 {
 	struct icc_bwmon *bwmon = platform_get_drvdata(pdev);
 
 	bwmon_disable(bwmon);
-
-	return 0;
 }
 
 static const struct icc_bwmon_data msm8998_bwmon_data = {
···
 
 static struct platform_driver bwmon_driver = {
 	.probe = bwmon_probe,
-	.remove = bwmon_remove,
+	.remove_new = bwmon_remove,
 	.driver = {
 		.name = "qcom-bwmon",
 		.of_match_table = bwmon_of_match,
+2 -2
drivers/soc/qcom/kryo-l2-accessors.c
···
 	isb();
 	raw_spin_unlock_irqrestore(&l2_access_lock, flags);
 }
-EXPORT_SYMBOL(kryo_l2_set_indirect_reg);
+EXPORT_SYMBOL_GPL(kryo_l2_set_indirect_reg);
 
 /**
  * kryo_l2_get_indirect_reg() - read an L2 register value
···
 
 	return val;
 }
-EXPORT_SYMBOL(kryo_l2_get_indirect_reg);
+EXPORT_SYMBOL_GPL(kryo_l2_get_indirect_reg);
+277 -90
drivers/soc/qcom/llcc-qcom.c
···
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
+#include <linux/nvmem-consumer.h>
 #include <linux/of.h>
 #include <linux/regmap.h>
 #include <linux/sizes.h>
···
 	bool no_edac;
 };
 
+struct qcom_sct_config {
+	const struct qcom_llcc_config *llcc_config;
+	int num_config;
+};
+
 enum llcc_reg_offset {
 	LLCC_COMMON_HW_INFO,
 	LLCC_COMMON_STATUS0,
···
 	{ LLCC_MMUHWT, 13, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
 	{ LLCC_DISP, 16, 6144, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
 	{ LLCC_AUDHW, 22, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
-	{ LLCC_DRE, 26, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_ECC, 26, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
 	{ LLCC_CVP, 28, 512, 3, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0 },
 	{ LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0x1, 1, 0, 0, 1, 0, 0 },
 	{ LLCC_WRCACHE, 31, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 0, 1, 0 },
···
 	{LLCC_VIDVSP, 28, 256, 4, 1, 0xFFFFFF, 0x0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, },
 };
 
+static const struct llcc_slice_config qdu1000_data_2ch[] = {
+	{ LLCC_MDMHPGRW, 7, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_MODHW, 9, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_MDMPNG, 21, 256, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_ECC, 26, 512, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_MODPE, 29, 256, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_APTCM, 30, 256, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_WRCACHE, 31, 128, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+};
+
+static const struct llcc_slice_config qdu1000_data_4ch[] = {
+	{ LLCC_MDMHPGRW, 7, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_MODHW, 9, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_MDMPNG, 21, 512, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_ECC, 26, 1024, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_MODPE, 29, 512, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_APTCM, 30, 512, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_WRCACHE, 31, 256, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+};
+
+static const struct llcc_slice_config qdu1000_data_8ch[] = {
+	{ LLCC_MDMHPGRW, 7, 2048, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_MODHW, 9, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_MDMPNG, 21, 1024, 0, 1, 0x3, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_ECC, 26, 2048, 3, 1, 0xffc, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+	{ LLCC_MODPE, 29, 1024, 1, 1, 0xfff, 0x0, 0, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_APTCM, 30, 1024, 3, 1, 0x0, 0xc, 1, 0, 0, 1, 0, 0, 0 },
+	{ LLCC_WRCACHE, 31, 512, 1, 1, 0x3, 0x0, 0, 0, 0, 0, 1, 0, 0 },
+};
+
 static const struct llcc_edac_reg_offset llcc_v1_edac_reg_offset = {
 	.trp_ecc_error_status0 = 0x20344,
 	.trp_ecc_error_status1 = 0x20348,
···
 	[LLCC_COMMON_STATUS0] = 0x0003400c,
 };
 
-static const struct qcom_llcc_config sc7180_cfg = {
-	.sct_data = sc7180_data,
-	.size = ARRAY_SIZE(sc7180_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config qdu1000_cfg[] = {
+	{
+		.sct_data = qdu1000_data_8ch,
+		.size = ARRAY_SIZE(qdu1000_data_8ch),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v2_1_reg_offset,
+		.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+	},
+	{
+		.sct_data = qdu1000_data_4ch,
+		.size = ARRAY_SIZE(qdu1000_data_4ch),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v2_1_reg_offset,
+		.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+	},
+	{
+		.sct_data = qdu1000_data_4ch,
+		.size = ARRAY_SIZE(qdu1000_data_4ch),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v2_1_reg_offset,
+		.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+	},
+	{
+		.sct_data = qdu1000_data_2ch,
+		.size = ARRAY_SIZE(qdu1000_data_2ch),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v2_1_reg_offset,
+		.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sc7280_cfg = {
-	.sct_data = sc7280_data,
-	.size = ARRAY_SIZE(sc7280_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sc7180_cfg[] = {
+	{
+		.sct_data = sc7180_data,
+		.size = ARRAY_SIZE(sc7180_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sc8180x_cfg = {
-	.sct_data = sc8180x_data,
-	.size = ARRAY_SIZE(sc8180x_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sc7280_cfg[] = {
+	{
+		.sct_data = sc7280_data,
+		.size = ARRAY_SIZE(sc7280_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sc8280xp_cfg = {
-	.sct_data = sc8280xp_data,
-	.size = ARRAY_SIZE(sc8280xp_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sc8180x_cfg[] = {
+	{
+		.sct_data = sc8180x_data,
+		.size = ARRAY_SIZE(sc8180x_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sdm845_cfg = {
-	.sct_data = sdm845_data,
-	.size = ARRAY_SIZE(sdm845_data),
-	.need_llcc_cfg = false,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
-	.no_edac = true,
+static const struct qcom_llcc_config sc8280xp_cfg[] = {
+	{
+		.sct_data = sc8280xp_data,
+		.size = ARRAY_SIZE(sc8280xp_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sm6350_cfg = {
-	.sct_data = sm6350_data,
-	.size = ARRAY_SIZE(sm6350_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sdm845_cfg[] = {
+	{
+		.sct_data = sdm845_data,
+		.size = ARRAY_SIZE(sdm845_data),
+		.need_llcc_cfg = false,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+		.no_edac = true,
+	},
 };
 
-static const struct qcom_llcc_config sm7150_cfg = {
-	.sct_data = sm7150_data,
-	.size = ARRAY_SIZE(sm7150_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sm6350_cfg[] = {
+	{
+		.sct_data = sm6350_data,
+		.size = ARRAY_SIZE(sm6350_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sm8150_cfg = {
-	.sct_data = sm8150_data,
-	.size = ARRAY_SIZE(sm8150_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sm7150_cfg[] = {
+	{
+		.sct_data = sm7150_data,
+		.size = ARRAY_SIZE(sm7150_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sm8250_cfg = {
-	.sct_data = sm8250_data,
-	.size = ARRAY_SIZE(sm8250_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sm8150_cfg[] = {
+	{
+		.sct_data = sm8150_data,
+		.size = ARRAY_SIZE(sm8150_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sm8350_cfg = {
-	.sct_data = sm8350_data,
-	.size = ARRAY_SIZE(sm8350_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v1_reg_offset,
-	.edac_reg_offset = &llcc_v1_edac_reg_offset,
+static const struct qcom_llcc_config sm8250_cfg[] = {
+	{
+		.sct_data = sm8250_data,
+		.size = ARRAY_SIZE(sm8250_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sm8450_cfg = {
-	.sct_data = sm8450_data,
-	.size = ARRAY_SIZE(sm8450_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v2_1_reg_offset,
-	.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+static const struct qcom_llcc_config sm8350_cfg[] = {
+	{
+		.sct_data = sm8350_data,
+		.size = ARRAY_SIZE(sm8350_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v1_reg_offset,
+		.edac_reg_offset = &llcc_v1_edac_reg_offset,
+	},
 };
 
-static const struct qcom_llcc_config sm8550_cfg = {
-	.sct_data = sm8550_data,
-	.size = ARRAY_SIZE(sm8550_data),
-	.need_llcc_cfg = true,
-	.reg_offset = llcc_v2_1_reg_offset,
-	.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+static const struct qcom_llcc_config sm8450_cfg[] = {
+	{
+		.sct_data = sm8450_data,
+		.size = ARRAY_SIZE(sm8450_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v2_1_reg_offset,
+		.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+	},
+};
+
+static const struct qcom_llcc_config sm8550_cfg[] = {
+	{
+		.sct_data = sm8550_data,
+		.size = ARRAY_SIZE(sm8550_data),
+		.need_llcc_cfg = true,
+		.reg_offset = llcc_v2_1_reg_offset,
+		.edac_reg_offset = &llcc_v2_1_edac_reg_offset,
+	},
+};
+
+static const struct qcom_sct_config qdu1000_cfgs = {
+	.llcc_config	= qdu1000_cfg,
+	.num_config	= ARRAY_SIZE(qdu1000_cfg),
+};
+
+static const struct qcom_sct_config sc7180_cfgs = {
+	.llcc_config	= sc7180_cfg,
+	.num_config	= ARRAY_SIZE(sc7180_cfg),
+};
+
+static const struct qcom_sct_config sc7280_cfgs = {
+	.llcc_config	= sc7280_cfg,
+	.num_config	= ARRAY_SIZE(sc7280_cfg),
+};
+
+static const struct qcom_sct_config sc8180x_cfgs = {
+	.llcc_config	= sc8180x_cfg,
+	.num_config	= ARRAY_SIZE(sc8180x_cfg),
+};
+
+static const struct qcom_sct_config sc8280xp_cfgs = {
+	.llcc_config	= sc8280xp_cfg,
+	.num_config	= ARRAY_SIZE(sc8280xp_cfg),
+};
+
+static const struct qcom_sct_config sdm845_cfgs = {
+	.llcc_config	= sdm845_cfg,
+	.num_config	= ARRAY_SIZE(sdm845_cfg),
+};
+
+static const struct qcom_sct_config sm6350_cfgs = {
+	.llcc_config	= sm6350_cfg,
+	.num_config	= ARRAY_SIZE(sm6350_cfg),
+};
+
+static const struct qcom_sct_config sm7150_cfgs = {
+	.llcc_config	= sm7150_cfg,
+	.num_config	= ARRAY_SIZE(sm7150_cfg),
+};
+
+static const struct qcom_sct_config sm8150_cfgs = {
+	.llcc_config	= sm8150_cfg,
+	.num_config	= ARRAY_SIZE(sm8150_cfg),
+};
+
+static const struct qcom_sct_config sm8250_cfgs = {
+	.llcc_config	= sm8250_cfg,
+	.num_config	= ARRAY_SIZE(sm8250_cfg),
+};
+
+static const struct qcom_sct_config sm8350_cfgs = {
+	.llcc_config	= sm8350_cfg,
+	.num_config	= ARRAY_SIZE(sm8350_cfg),
+};
+
+static const struct qcom_sct_config sm8450_cfgs = {
+	.llcc_config	= sm8450_cfg,
+	.num_config	= ARRAY_SIZE(sm8450_cfg),
+};
+
+static const struct qcom_sct_config sm8550_cfgs = {
+	.llcc_config	= sm8550_cfg,
+	.num_config	= ARRAY_SIZE(sm8550_cfg),
 };
 
 static struct llcc_drv_data *drv_data = (void *) -EPROBE_DEFER;
···
 	return ret;
 }
 
-static int qcom_llcc_remove(struct platform_device *pdev)
+static int qcom_llcc_get_cfg_index(struct platform_device *pdev, u8 *cfg_index, int num_config)
+{
+	int ret;
+
+	ret = nvmem_cell_read_u8(&pdev->dev, "multi-chan-ddr", cfg_index);
+	if (ret == -ENOENT || ret == -EOPNOTSUPP) {
+		if (num_config > 1)
+			return -EINVAL;
+		*cfg_index = 0;
+		return 0;
+	}
+
+	if (!ret && *cfg_index >= num_config)
+		ret = -EINVAL;
+
+	return ret;
+}
+
+static void qcom_llcc_remove(struct platform_device *pdev)
 {
 	/* Set the global pointer to a error code to avoid referencing it */
 	drv_data = ERR_PTR(-ENODEV);
-	return 0;
 }
 
 static struct regmap *qcom_llcc_init_mmio(struct platform_device *pdev, u8 index,
···
 	struct device *dev = &pdev->dev;
 	int ret, i;
 	struct platform_device *llcc_edac;
+	const struct qcom_sct_config *cfgs;
 	const struct qcom_llcc_config *cfg;
 	const struct llcc_slice_config *llcc_cfg;
 	u32 sz;
+	u8 cfg_index;
 	u32 version;
 	struct regmap *regmap;
+
+	if (!IS_ERR(drv_data))
+		return -EBUSY;
 
 	drv_data = devm_kzalloc(dev, sizeof(*drv_data), GFP_KERNEL);
 	if (!drv_data) {
···
 		goto err;
 	}
 
-	cfg = of_device_get_match_data(&pdev->dev);
+	cfgs = of_device_get_match_data(&pdev->dev);
+	if (!cfgs) {
+		ret = -EINVAL;
+		goto err;
+	}
+	ret = qcom_llcc_get_cfg_index(pdev, &cfg_index, cfgs->num_config);
+	if (ret)
+		goto err;
+	cfg = &cfgs->llcc_config[cfg_index];
 
 	ret = regmap_read(regmap, cfg->reg_offset[LLCC_COMMON_STATUS0], &num_banks);
 	if (ret)
···
 }
 
 static const struct of_device_id qcom_llcc_of_match[] = {
-	{ .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfg },
-	{ .compatible = "qcom,sc7280-llcc", .data = &sc7280_cfg },
-	{ .compatible = "qcom,sc8180x-llcc", .data = &sc8180x_cfg },
-	{ .compatible = "qcom,sc8280xp-llcc", .data = &sc8280xp_cfg },
-	{ .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfg },
-	{ .compatible = "qcom,sm6350-llcc", .data = &sm6350_cfg },
-	{ .compatible = "qcom,sm7150-llcc", .data = &sm7150_cfg },
-	{ .compatible = "qcom,sm8150-llcc", .data = &sm8150_cfg },
-	{ .compatible = "qcom,sm8250-llcc", .data = &sm8250_cfg },
-	{ .compatible = "qcom,sm8350-llcc", .data = &sm8350_cfg },
-	{ .compatible = "qcom,sm8450-llcc", .data = &sm8450_cfg },
-	{ .compatible = "qcom,sm8550-llcc", .data = &sm8550_cfg },
+	{ .compatible = "qcom,qdu1000-llcc", .data = &qdu1000_cfgs},
+	{ .compatible = "qcom,sc7180-llcc", .data = &sc7180_cfgs },
+	{ .compatible = "qcom,sc7280-llcc", .data = &sc7280_cfgs },
+	{ .compatible = "qcom,sc8180x-llcc", .data = &sc8180x_cfgs },
+	{ .compatible = "qcom,sc8280xp-llcc", .data = &sc8280xp_cfgs },
+	{ .compatible = "qcom,sdm845-llcc", .data = &sdm845_cfgs },
+	{ .compatible = "qcom,sm6350-llcc", .data = &sm6350_cfgs },
+	{ .compatible = "qcom,sm7150-llcc", .data = &sm7150_cfgs },
+	{ .compatible = "qcom,sm8150-llcc", .data = &sm8150_cfgs },
+	{ .compatible = "qcom,sm8250-llcc", .data = &sm8250_cfgs },
+	{ .compatible = "qcom,sm8350-llcc", .data = &sm8350_cfgs },
+	{ .compatible = "qcom,sm8450-llcc", .data = &sm8450_cfgs },
+	{ .compatible = "qcom,sm8550-llcc", .data = &sm8550_cfgs },
 	{ }
 };
 MODULE_DEVICE_TABLE(of, qcom_llcc_of_match);
···
 		.of_match_table = qcom_llcc_of_match,
 	},
 	.probe = qcom_llcc_probe,
-	.remove = qcom_llcc_remove,
+	.remove_new = qcom_llcc_remove,
 };
 module_platform_driver(qcom_llcc_driver);
+5 -7
drivers/soc/qcom/ocmem.c
···
 	}
 	return ocmem;
 }
-EXPORT_SYMBOL(of_get_ocmem);
+EXPORT_SYMBOL_GPL(of_get_ocmem);
 
 struct ocmem_buf *ocmem_allocate(struct ocmem *ocmem, enum ocmem_client client,
 				 unsigned long size)
···
 
 	return ERR_PTR(ret);
 }
-EXPORT_SYMBOL(ocmem_allocate);
+EXPORT_SYMBOL_GPL(ocmem_allocate);
 
 void ocmem_free(struct ocmem *ocmem, enum ocmem_client client,
 		struct ocmem_buf *buf)
···
 
 	clear_bit_unlock(BIT(client), &ocmem->active_allocations);
 }
-EXPORT_SYMBOL(ocmem_free);
+EXPORT_SYMBOL_GPL(ocmem_free);
 
 static int ocmem_dev_probe(struct platform_device *pdev)
 {
···
 	return ret;
 }
 
-static int ocmem_dev_remove(struct platform_device *pdev)
+static void ocmem_dev_remove(struct platform_device *pdev)
 {
 	struct ocmem *ocmem = platform_get_drvdata(pdev);
 
 	clk_disable_unprepare(ocmem->core_clk);
 	clk_disable_unprepare(ocmem->iface_clk);
-
-	return 0;
 }
 
 static const struct ocmem_config ocmem_8226_config = {
···
 
 static struct platform_driver ocmem_driver = {
 	.probe = ocmem_dev_probe,
-	.remove = ocmem_dev_remove,
+	.remove_new = ocmem_dev_remove,
 	.driver = {
 		.name = "ocmem",
 		.of_match_table = ocmem_of_match,
+4 -4
drivers/soc/qcom/pdr_interface.c
··· 554 554 kfree(pds); 555 555 return ERR_PTR(ret); 556 556 } 557 - EXPORT_SYMBOL(pdr_add_lookup); 557 + EXPORT_SYMBOL_GPL(pdr_add_lookup); 558 558 559 559 /** 560 560 * pdr_restart_pd() - restart PD ··· 634 634 635 635 return 0; 636 636 } 637 - EXPORT_SYMBOL(pdr_restart_pd); 637 + EXPORT_SYMBOL_GPL(pdr_restart_pd); 638 638 639 639 /** 640 640 * pdr_handle_alloc() - initialize the PDR client handle ··· 715 715 716 716 return ERR_PTR(ret); 717 717 } 718 - EXPORT_SYMBOL(pdr_handle_alloc); 718 + EXPORT_SYMBOL_GPL(pdr_handle_alloc); 719 719 720 720 /** 721 721 * pdr_handle_release() - release the PDR client handle ··· 749 749 750 750 kfree(pdr); 751 751 } 752 - EXPORT_SYMBOL(pdr_handle_release); 752 + EXPORT_SYMBOL_GPL(pdr_handle_release); 753 753 754 754 MODULE_LICENSE("GPL v2"); 755 755 MODULE_DESCRIPTION("Qualcomm Protection Domain Restart helpers");
+2 -4
drivers/soc/qcom/pmic_glink.c
··· 318 318 return ret; 319 319 } 320 320 321 - static int pmic_glink_remove(struct platform_device *pdev) 321 + static void pmic_glink_remove(struct platform_device *pdev) 322 322 { 323 323 struct pmic_glink *pg = dev_get_drvdata(&pdev->dev); 324 324 ··· 334 334 mutex_lock(&__pmic_glink_lock); 335 335 __pmic_glink = NULL; 336 336 mutex_unlock(&__pmic_glink_lock); 337 - 338 - return 0; 339 337 } 340 338 341 339 static const unsigned long pmic_glink_sm8450_client_mask = BIT(PMIC_GLINK_CLIENT_BATT) | ··· 350 352 351 353 static struct platform_driver pmic_glink_driver = { 352 354 .probe = pmic_glink_probe, 353 - .remove = pmic_glink_remove, 355 + .remove_new = pmic_glink_remove, 354 356 .driver = { 355 357 .name = "qcom_pmic_glink", 356 358 .of_match_table = pmic_glink_of_match,
+19 -19
drivers/soc/qcom/qcom-geni-se.c
··· 199 199 200 200 return readl_relaxed(wrapper->base + QUP_HW_VER_REG); 201 201 } 202 - EXPORT_SYMBOL(geni_se_get_qup_hw_version); 202 + EXPORT_SYMBOL_GPL(geni_se_get_qup_hw_version); 203 203 204 204 static void geni_se_io_set_mode(void __iomem *base) 205 205 { ··· 272 272 val |= S_COMMON_GENI_S_IRQ_EN; 273 273 writel_relaxed(val, se->base + SE_GENI_S_IRQ_EN); 274 274 } 275 - EXPORT_SYMBOL(geni_se_init); 275 + EXPORT_SYMBOL_GPL(geni_se_init); 276 276 277 277 static void geni_se_select_fifo_mode(struct geni_se *se) 278 278 { ··· 364 364 break; 365 365 } 366 366 } 367 - EXPORT_SYMBOL(geni_se_select_mode); 367 + EXPORT_SYMBOL_GPL(geni_se_select_mode); 368 368 369 369 /** 370 370 * DOC: Overview ··· 481 481 if (pack_words || bpw == 32) 482 482 writel_relaxed(bpw / 16, se->base + SE_GENI_BYTE_GRAN); 483 483 } 484 - EXPORT_SYMBOL(geni_se_config_packing); 484 + EXPORT_SYMBOL_GPL(geni_se_config_packing); 485 485 486 486 static void geni_se_clks_off(struct geni_se *se) 487 487 { ··· 512 512 geni_se_clks_off(se); 513 513 return 0; 514 514 } 515 - EXPORT_SYMBOL(geni_se_resources_off); 515 + EXPORT_SYMBOL_GPL(geni_se_resources_off); 516 516 517 517 static int geni_se_clks_on(struct geni_se *se) 518 518 { ··· 553 553 554 554 return ret; 555 555 } 556 - EXPORT_SYMBOL(geni_se_resources_on); 556 + EXPORT_SYMBOL_GPL(geni_se_resources_on); 557 557 558 558 /** 559 559 * geni_se_clk_tbl_get() - Get the clock table to program DFS ··· 594 594 *tbl = se->clk_perf_tbl; 595 595 return se->num_clk_levels; 596 596 } 597 - EXPORT_SYMBOL(geni_se_clk_tbl_get); 597 + EXPORT_SYMBOL_GPL(geni_se_clk_tbl_get); 598 598 599 599 /** 600 600 * geni_se_clk_freq_match() - Get the matching or closest SE clock frequency ··· 656 656 657 657 return 0; 658 658 } 659 - EXPORT_SYMBOL(geni_se_clk_freq_match); 659 + EXPORT_SYMBOL_GPL(geni_se_clk_freq_match); 660 660 661 661 #define GENI_SE_DMA_DONE_EN BIT(0) 662 662 #define GENI_SE_DMA_EOT_EN BIT(1) ··· 684 684 writel_relaxed(GENI_SE_DMA_EOT_BUF, se->base + SE_DMA_TX_ATTR); 685 685 writel(len, se->base + SE_DMA_TX_LEN); 686 686 } 687 - EXPORT_SYMBOL(geni_se_tx_init_dma); 687 + EXPORT_SYMBOL_GPL(geni_se_tx_init_dma); 688 688 689 689 /** 690 690 * geni_se_tx_dma_prep() - Prepare the serial engine for TX DMA transfer ··· 712 712 geni_se_tx_init_dma(se, *iova, len); 713 713 return 0; 714 714 } 715 - EXPORT_SYMBOL(geni_se_tx_dma_prep); 715 + EXPORT_SYMBOL_GPL(geni_se_tx_dma_prep); 716 716 717 717 /** 718 718 * geni_se_rx_init_dma() - Initiate RX DMA transfer on the serial engine ··· 736 736 writel_relaxed(0, se->base + SE_DMA_RX_ATTR); 737 737 writel(len, se->base + SE_DMA_RX_LEN); 738 738 } 739 - EXPORT_SYMBOL(geni_se_rx_init_dma); 739 + EXPORT_SYMBOL_GPL(geni_se_rx_init_dma); 740 740 741 741 /** 742 742 * geni_se_rx_dma_prep() - Prepare the serial engine for RX DMA transfer ··· 764 764 geni_se_rx_init_dma(se, *iova, len); 765 765 return 0; 766 766 } 767 - EXPORT_SYMBOL(geni_se_rx_dma_prep); 767 + EXPORT_SYMBOL_GPL(geni_se_rx_dma_prep); 768 768 769 769 /** 770 770 * geni_se_tx_dma_unprep() - Unprepare the serial engine after TX DMA transfer ··· 781 781 if (!dma_mapping_error(wrapper->dev, iova)) 782 782 dma_unmap_single(wrapper->dev, iova, len, DMA_TO_DEVICE); 783 783 } 784 - EXPORT_SYMBOL(geni_se_tx_dma_unprep); 784 + EXPORT_SYMBOL_GPL(geni_se_tx_dma_unprep); 785 785 786 786 /** 787 787 * geni_se_rx_dma_unprep() - Unprepare the serial engine after RX DMA transfer ··· 798 798 if (!dma_mapping_error(wrapper->dev, iova)) 799 799 dma_unmap_single(wrapper->dev, iova, len, DMA_FROM_DEVICE); 800 800 } 801 - EXPORT_SYMBOL(geni_se_rx_dma_unprep); 801 + EXPORT_SYMBOL_GPL(geni_se_rx_dma_unprep); 802 802 803 803 int geni_icc_get(struct geni_se *se, const char *icc_ddr) 804 804 { ··· 827 827 return err; 828 828 829 829 } 830 - EXPORT_SYMBOL(geni_icc_get); 830 + EXPORT_SYMBOL_GPL(geni_icc_get); 831 831 832 832 int geni_icc_set_bw(struct geni_se *se) 833 833 { ··· 845 845 846 846 return 0; 847 847 } 848 - EXPORT_SYMBOL(geni_icc_set_bw); 848 + EXPORT_SYMBOL_GPL(geni_icc_set_bw); 849 849 850 850 void geni_icc_set_tag(struct geni_se *se, u32 tag) 851 851 { ··· 854 854 for (i = 0; i < ARRAY_SIZE(se->icc_paths); i++) 855 855 icc_set_tag(se->icc_paths[i].path, tag); 856 856 } 857 - EXPORT_SYMBOL(geni_icc_set_tag); 857 + EXPORT_SYMBOL_GPL(geni_icc_set_tag); 858 858 859 859 /* To do: Replace this by icc_bulk_enable once it's implemented in ICC core */ 860 860 int geni_icc_enable(struct geni_se *se) ··· 872 872 873 873 return 0; 874 874 } 875 - EXPORT_SYMBOL(geni_icc_enable); 875 + EXPORT_SYMBOL_GPL(geni_icc_enable); 876 876 877 877 int geni_icc_disable(struct geni_se *se) 878 878 { ··· 889 889 890 890 return 0; 891 891 } 892 - EXPORT_SYMBOL(geni_icc_disable); 892 + EXPORT_SYMBOL_GPL(geni_icc_disable); 893 893 894 894 static int geni_se_probe(struct platform_device *pdev) 895 895 {
+5 -7
drivers/soc/qcom/qcom_aoss.c
··· 260 260 261 261 return ret; 262 262 } 263 - EXPORT_SYMBOL(qmp_send); 263 + EXPORT_SYMBOL_GPL(qmp_send); 264 264 265 265 static int qmp_qdss_clk_prepare(struct clk_hw *hw) 266 266 { ··· 458 458 } 459 459 return qmp; 460 460 } 461 - EXPORT_SYMBOL(qmp_get); 461 + EXPORT_SYMBOL_GPL(qmp_get); 462 462 463 463 /** 464 464 * qmp_put() - release a qmp handle ··· 473 473 if (!IS_ERR_OR_NULL(qmp)) 474 474 put_device(qmp->dev); 475 475 } 476 - EXPORT_SYMBOL(qmp_put); 476 + EXPORT_SYMBOL_GPL(qmp_put); 477 477 478 478 static int qmp_probe(struct platform_device *pdev) 479 479 { ··· 533 533 return ret; 534 534 } 535 535 536 - static int qmp_remove(struct platform_device *pdev) 536 + static void qmp_remove(struct platform_device *pdev) 537 537 { 538 538 struct qmp *qmp = platform_get_drvdata(pdev); 539 539 ··· 542 542 543 543 qmp_close(qmp); 544 544 mbox_free_channel(qmp->mbox_chan); 545 - 546 - return 0; 547 545 } 548 546 549 547 static const struct of_device_id qmp_dt_match[] = { ··· 563 565 .suppress_bind_attrs = true, 564 566 }, 565 567 .probe = qmp_probe, 566 - .remove = qmp_remove, 568 + .remove_new = qmp_remove, 567 569 }; 568 570 module_platform_driver(qmp_driver); 569 571
+2 -4
drivers/soc/qcom/qcom_gsbi.c
··· 212 212 return of_platform_populate(node, NULL, NULL, &pdev->dev); 213 213 } 214 214 215 - static int gsbi_remove(struct platform_device *pdev) 215 + static void gsbi_remove(struct platform_device *pdev) 216 216 { 217 217 struct gsbi_info *gsbi = platform_get_drvdata(pdev); 218 218 219 219 clk_disable_unprepare(gsbi->hclk); 220 - 221 - return 0; 222 220 } 223 221 224 222 static const struct of_device_id gsbi_dt_match[] = { ··· 232 234 .of_match_table = gsbi_dt_match, 233 235 }, 234 236 .probe = gsbi_probe, 235 - .remove = gsbi_remove, 237 + .remove_new = gsbi_remove, 236 238 }; 237 239 238 240 module_platform_driver(gsbi_driver);
+2 -4
drivers/soc/qcom/qcom_stats.c
··· 216 216 return 0; 217 217 } 218 218 219 - static int qcom_stats_remove(struct platform_device *pdev) 219 + static void qcom_stats_remove(struct platform_device *pdev) 220 220 { 221 221 struct dentry *root = platform_get_drvdata(pdev); 222 222 223 223 debugfs_remove_recursive(root); 224 - 225 - return 0; 226 224 } 227 225 228 226 static const struct stats_config rpm_data = { ··· 270 272 271 273 static struct platform_driver qcom_stats = { 272 274 .probe = qcom_stats_probe, 273 - .remove = qcom_stats_remove, 275 + .remove_new = qcom_stats_remove, 274 276 .driver = { 275 277 .name = "qcom_stats", 276 278 .of_match_table = qcom_stats_table,
+3 -3
drivers/soc/qcom/qmi_encdec.c
··· 754 754 755 755 return msg; 756 756 } 757 - EXPORT_SYMBOL(qmi_encode_message); 757 + EXPORT_SYMBOL_GPL(qmi_encode_message); 758 758 759 759 /** 760 760 * qmi_decode_message() - Decode QMI encoded message to C structure ··· 778 778 return qmi_decode(ei, c_struct, buf + sizeof(struct qmi_header), 779 779 len - sizeof(struct qmi_header), 1); 780 780 } 781 - EXPORT_SYMBOL(qmi_decode_message); 781 + EXPORT_SYMBOL_GPL(qmi_decode_message); 782 782 783 783 /* Common header in all QMI responses */ 784 784 const struct qmi_elem_info qmi_response_type_v01_ei[] = { ··· 810 810 .ei_array = NULL, 811 811 }, 812 812 }; 813 - EXPORT_SYMBOL(qmi_response_type_v01_ei); 813 + EXPORT_SYMBOL_GPL(qmi_response_type_v01_ei); 814 814 815 815 MODULE_DESCRIPTION("QMI encoder/decoder helper"); 816 816 MODULE_LICENSE("GPL v2");
+10 -10
drivers/soc/qcom/qmi_interface.c
··· 223 223 224 224 return 0; 225 225 } 226 - EXPORT_SYMBOL(qmi_add_lookup); 226 + EXPORT_SYMBOL_GPL(qmi_add_lookup); 227 227 228 228 static void qmi_send_new_server(struct qmi_handle *qmi, struct qmi_service *svc) 229 229 { ··· 287 287 288 288 return 0; 289 289 } 290 - EXPORT_SYMBOL(qmi_add_server); 290 + EXPORT_SYMBOL_GPL(qmi_add_server); 291 291 292 292 /** 293 293 * qmi_txn_init() - allocate transaction id within the given QMI handle ··· 328 328 329 329 return ret; 330 330 } 331 - EXPORT_SYMBOL(qmi_txn_init); 331 + EXPORT_SYMBOL_GPL(qmi_txn_init); 332 332 333 333 /** 334 334 * qmi_txn_wait() - wait for a response on a transaction ··· 359 359 else 360 360 return txn->result; 361 361 } 362 - EXPORT_SYMBOL(qmi_txn_wait); 362 + EXPORT_SYMBOL_GPL(qmi_txn_wait); 363 363 364 364 /** 365 365 * qmi_txn_cancel() - cancel an ongoing transaction ··· 375 375 mutex_unlock(&txn->lock); 376 376 mutex_unlock(&qmi->txn_lock); 377 377 } 378 - EXPORT_SYMBOL(qmi_txn_cancel); 378 + EXPORT_SYMBOL_GPL(qmi_txn_cancel); 379 379 380 380 /** 381 381 * qmi_invoke_handler() - find and invoke a handler for a message ··· 676 676 677 677 return ret; 678 678 } 679 - EXPORT_SYMBOL(qmi_handle_init); 679 + EXPORT_SYMBOL_GPL(qmi_handle_init); 680 680 681 681 /** 682 682 * qmi_handle_release() - release the QMI client handle ··· 717 717 kfree(svc); 718 718 } 719 719 } 720 - EXPORT_SYMBOL(qmi_handle_release); 720 + EXPORT_SYMBOL_GPL(qmi_handle_release); 721 721 722 722 /** 723 723 * qmi_send_message() - send a QMI message ··· 796 796 return qmi_send_message(qmi, sq, txn, QMI_REQUEST, msg_id, len, ei, 797 797 c_struct); 798 798 } 799 - EXPORT_SYMBOL(qmi_send_request); 799 + EXPORT_SYMBOL_GPL(qmi_send_request); 800 800 801 801 /** 802 802 * qmi_send_response() - send a response QMI message ··· 817 817 return qmi_send_message(qmi, sq, txn, QMI_RESPONSE, msg_id, len, ei, 818 818 c_struct); 819 819 } 820 - EXPORT_SYMBOL(qmi_send_response); 820 + EXPORT_SYMBOL_GPL(qmi_send_response); 821 821 822 822 /** 823 823 * qmi_send_indication() - send an indication QMI message ··· 851 851 852 852 return rval; 853 853 } 854 - EXPORT_SYMBOL(qmi_send_indication); 854 + EXPORT_SYMBOL_GPL(qmi_send_indication);
+11 -4
drivers/soc/qcom/rmtfs_mem.c
··· 200 200 rmtfs_mem->client_id = client_id; 201 201 rmtfs_mem->size = rmem->size; 202 202 203 + /* 204 + * If requested, discard the first and last 4k block in order to ensure 205 + * that the rmtfs region isn't adjacent to other protected regions. 206 + */ 207 + if (of_property_read_bool(node, "qcom,use-guard-pages")) { 208 + rmtfs_mem->addr += SZ_4K; 209 + rmtfs_mem->size -= 2 * SZ_4K; 210 + } 211 + 203 212 device_initialize(&rmtfs_mem->dev); 204 213 rmtfs_mem->dev.parent = &pdev->dev; 205 214 rmtfs_mem->dev.groups = qcom_rmtfs_mem_groups; ··· 290 281 return ret; 291 282 } 292 283 293 - static int qcom_rmtfs_mem_remove(struct platform_device *pdev) 284 + static void qcom_rmtfs_mem_remove(struct platform_device *pdev) 294 285 { 295 286 struct qcom_rmtfs_mem *rmtfs_mem = dev_get_drvdata(&pdev->dev); 296 287 struct qcom_scm_vmperm perm; ··· 305 296 306 297 cdev_device_del(&rmtfs_mem->cdev, &rmtfs_mem->dev); 307 298 put_device(&rmtfs_mem->dev); 308 - 309 - return 0; 310 299 } 311 300 312 301 static const struct of_device_id qcom_rmtfs_mem_of_match[] = { ··· 315 308 316 309 static struct platform_driver qcom_rmtfs_mem_driver = { 317 310 .probe = qcom_rmtfs_mem_probe, 318 - .remove = qcom_rmtfs_mem_remove, 311 + .remove_new = qcom_rmtfs_mem_remove, 319 312 .driver = { 320 313 .name = "qcom_rmtfs_mem", 321 314 .of_match_table = qcom_rmtfs_mem_of_match,
+4 -4
drivers/soc/qcom/rpmh.c
··· 239 239 240 240 return __rpmh_write(dev, state, rpm_msg); 241 241 } 242 - EXPORT_SYMBOL(rpmh_write_async); 242 + EXPORT_SYMBOL_GPL(rpmh_write_async); 243 243 244 244 /** 245 245 * rpmh_write: Write a set of RPMH commands and block until response ··· 270 270 WARN_ON(!ret); 271 271 return (ret > 0) ? 0 : -ETIMEDOUT; 272 272 } 273 - EXPORT_SYMBOL(rpmh_write); 273 + EXPORT_SYMBOL_GPL(rpmh_write); 274 274 275 275 static void cache_batch(struct rpmh_ctrlr *ctrlr, struct batch_cache_req *req) 276 276 { ··· 395 395 396 396 return ret; 397 397 } 398 - EXPORT_SYMBOL(rpmh_write_batch); 398 + EXPORT_SYMBOL_GPL(rpmh_write_batch); 399 399 400 400 static int is_req_valid(struct cache_req *req) 401 401 { ··· 500 500 ctrlr->dirty = true; 501 501 spin_unlock_irqrestore(&ctrlr->cache_lock, flags); 502 502 } 503 - EXPORT_SYMBOL(rpmh_invalidate); 503 + EXPORT_SYMBOL_GPL(rpmh_invalidate);
+1 -1
drivers/soc/qcom/smd-rpm.c
··· 142 142 mutex_unlock(&rpm->lock); 143 143 return ret; 144 144 } 145 - EXPORT_SYMBOL(qcom_rpm_smd_write); 145 + EXPORT_SYMBOL_GPL(qcom_rpm_smd_write); 146 146 147 147 static int qcom_smd_rpm_callback(struct rpmsg_device *rpdev, 148 148 void *data,
+4 -6
drivers/soc/qcom/smem.c
··· 285 285 struct smem_partition partitions[SMEM_HOST_COUNT]; 286 286 287 287 unsigned num_regions; 288 - struct smem_region regions[]; 288 + struct smem_region regions[] __counted_by(num_regions); 289 289 }; 290 290 291 291 static void * ··· 368 368 { 369 369 return !!__smem; 370 370 } 371 - EXPORT_SYMBOL(qcom_smem_is_available); 371 + EXPORT_SYMBOL_GPL(qcom_smem_is_available); 372 372 373 373 static int qcom_smem_alloc_private(struct qcom_smem *smem, 374 374 struct smem_partition *part, ··· 1187 1187 return 0; 1188 1188 } 1189 1189 1190 - static int qcom_smem_remove(struct platform_device *pdev) 1190 + static void qcom_smem_remove(struct platform_device *pdev) 1191 1191 { 1192 1192 platform_device_unregister(__smem->socinfo); 1193 1193 1194 1194 hwspin_lock_free(__smem->hwlock); 1195 1195 __smem = NULL; 1196 - 1197 - return 0; 1198 1196 } 1199 1197 1200 1198 static const struct of_device_id qcom_smem_of_match[] = { ··· 1203 1205 1204 1206 static struct platform_driver qcom_smem_driver = { 1205 1207 .probe = qcom_smem_probe, 1206 - .remove = qcom_smem_remove, 1208 + .remove_new = qcom_smem_remove, 1207 1209 .driver = { 1208 1210 .name = "qcom-smem", 1209 1211 .of_match_table = qcom_smem_of_match,
+2 -4
drivers/soc/qcom/smp2p.c
··· 660 660 return -EINVAL; 661 661 } 662 662 663 - static int qcom_smp2p_remove(struct platform_device *pdev) 663 + static void qcom_smp2p_remove(struct platform_device *pdev) 664 664 { 665 665 struct qcom_smp2p *smp2p = platform_get_drvdata(pdev); 666 666 struct smp2p_entry *entry; ··· 676 676 mbox_free_channel(smp2p->mbox_chan); 677 677 678 678 smp2p->out->valid_entries = 0; 679 - 680 - return 0; 681 679 } 682 680 683 681 static const struct of_device_id qcom_smp2p_of_match[] = { ··· 686 688 687 689 static struct platform_driver qcom_smp2p_driver = { 688 690 .probe = qcom_smp2p_probe, 689 - .remove = qcom_smp2p_remove, 691 + .remove_new = qcom_smp2p_remove, 690 692 .driver = { 691 693 .name = "qcom_smp2p", 692 694 .of_match_table = qcom_smp2p_of_match,
+2 -4
drivers/soc/qcom/smsm.c
··· 613 613 return ret; 614 614 } 615 615 616 - static int qcom_smsm_remove(struct platform_device *pdev) 616 + static void qcom_smsm_remove(struct platform_device *pdev) 617 617 { 618 618 struct qcom_smsm *smsm = platform_get_drvdata(pdev); 619 619 unsigned id; ··· 623 623 irq_domain_remove(smsm->entries[id].domain); 624 624 625 625 qcom_smem_state_unregister(smsm->state); 626 - 627 - return 0; 628 626 } 629 627 630 628 static const struct of_device_id qcom_smsm_of_match[] = { ··· 633 635 634 636 static struct platform_driver qcom_smsm_driver = { 635 637 .probe = qcom_smsm_probe, 636 - .remove = qcom_smsm_remove, 638 + .remove_new = qcom_smsm_remove, 637 639 .driver = { 638 640 .name = "qcom-smsm", 639 641 .of_match_table = qcom_smsm_of_match,
+13 -4
drivers/soc/qcom/socinfo.c
··· 117 117 [55] = "PM2250", 118 118 [58] = "PM8450", 119 119 [65] = "PM8010", 120 + [69] = "PM8550VS", 121 + [70] = "PM8550VE", 122 + [71] = "PM8550B", 123 + [72] = "PMR735D", 124 + [73] = "PM8550", 125 + [74] = "PMK8550", 120 126 }; 121 127 122 128 struct socinfo_params { ··· 355 349 { qcom_board_id(SDA439) }, 356 350 { qcom_board_id(SDA429) }, 357 351 { qcom_board_id(SM7150) }, 352 + { qcom_board_id(SM7150P) }, 358 353 { qcom_board_id(IPQ8070) }, 359 354 { qcom_board_id(IPQ8071) }, 360 355 { qcom_board_id(QM215) }, ··· 366 359 { qcom_board_id(SM6125) }, 367 360 { qcom_board_id(IPQ8070A) }, 368 361 { qcom_board_id(IPQ8071A) }, 362 + { qcom_board_id(IPQ8172) }, 363 + { qcom_board_id(IPQ8173) }, 364 + { qcom_board_id(IPQ8174) }, 369 365 { qcom_board_id(IPQ6018) }, 370 366 { qcom_board_id(IPQ6028) }, 371 367 { qcom_board_id(SDM429W) }, ··· 399 389 { qcom_board_id_named(SM8450_3, "SM8450") }, 400 390 { qcom_board_id(SC7280) }, 401 391 { qcom_board_id(SC7180P) }, 392 + { qcom_board_id(QCM6490) }, 402 393 { qcom_board_id(IPQ5000) }, 403 394 { qcom_board_id(IPQ0509) }, 404 395 { qcom_board_id(IPQ0518) }, ··· 787 776 return 0; 788 777 } 789 778 790 - static int qcom_socinfo_remove(struct platform_device *pdev) 779 + static void qcom_socinfo_remove(struct platform_device *pdev) 791 780 { 792 781 struct qcom_socinfo *qs = platform_get_drvdata(pdev); 793 782 794 783 soc_device_unregister(qs->soc_dev); 795 784 796 785 socinfo_debugfs_exit(qs); 797 - 798 - return 0; 799 786 } 800 787 801 788 static struct platform_driver qcom_socinfo_driver = { 802 789 .probe = qcom_socinfo_probe, 803 - .remove = qcom_socinfo_remove, 790 + .remove_new = qcom_socinfo_remove, 804 791 .driver = { 805 792 .name = "qcom-socinfo", 806 793 },
+1 -2
drivers/soc/qcom/wcnss_ctrl.c
··· 287 287 288 288 return rpmsg_create_ept(_wcnss->channel->rpdev, cb, priv, chinfo); 289 289 } 290 - EXPORT_SYMBOL(qcom_wcnss_open_channel); 290 + EXPORT_SYMBOL_GPL(qcom_wcnss_open_channel); 291 291 292 292 static void wcnss_async_probe(struct work_struct *work) 293 293 { ··· 355 355 .callback = wcnss_ctrl_smd_callback, 356 356 .drv = { 357 357 .name = "qcom_wcnss_ctrl", 358 - .owner = THIS_MODULE, 359 358 .of_match_table = wcnss_ctrl_of_match, 360 359 }, 361 360 };
+6
drivers/soc/renesas/Kconfig
··· 319 319 help 320 320 This enables support for the Renesas RZ/V2L SoC variants. 321 321 322 + config ARCH_R9A08G045 323 + bool "ARM64 Platform support for RZ/G3S" 324 + select ARCH_RZG2L 325 + help 326 + This enables support for the Renesas RZ/G3S SoC variants. 327 + 322 328 config ARCH_R9A09G011 323 329 bool "ARM64 Platform support for RZ/V2M" 324 330 select PM
+13 -2
drivers/soc/renesas/renesas-soc.c
··· 12 12 #include <linux/string.h> 13 13 #include <linux/sys_soc.h> 14 14 15 - 16 15 struct renesas_family { 17 16 const char name[16]; 18 17 u32 reg; /* CCCR or PRR, if not in DT */ ··· 71 72 .name = "RZ/G2UL", 72 73 }; 73 74 75 + static const struct renesas_family fam_rzg3s __initconst __maybe_unused = { 76 + .name = "RZ/G3S", 77 + }; 78 + 74 79 static const struct renesas_family fam_rzv2l __initconst __maybe_unused = { 75 80 .name = "RZ/V2L", 76 81 }; ··· 87 84 .name = "SH-Mobile", 88 85 .reg = 0xe600101c, /* CCCR (Common Chip Code Register) */ 89 86 }; 90 - 91 87 92 88 struct renesas_soc { 93 89 const struct renesas_family *family; ··· 170 168 static const struct renesas_soc soc_rz_g2ul __initconst __maybe_unused = { 171 169 .family = &fam_rzg2ul, 172 170 .id = 0x8450447, 171 + }; 172 + 173 + static const struct renesas_soc soc_rz_g3s __initconst __maybe_unused = { 174 + .family = &fam_rzg3s, 175 + .id = 0x85e0447, 173 176 }; 174 177 175 178 static const struct renesas_soc soc_rz_v2l __initconst __maybe_unused = { ··· 393 386 #ifdef CONFIG_ARCH_R9A07G054 394 387 { .compatible = "renesas,r9a07g054", .data = &soc_rz_v2l }, 395 388 #endif 389 + #ifdef CONFIG_ARCH_R9A08G045 390 + { .compatible = "renesas,r9a08g045", .data = &soc_rz_g3s }, 391 + #endif 396 392 #ifdef CONFIG_ARCH_R9A09G011 397 393 { .compatible = "renesas,r9a09g011", .data = &soc_rz_v2m }, 398 394 #endif ··· 439 429 { .compatible = "renesas,r9a07g043-sysc", .data = &id_rzg2l }, 440 430 { .compatible = "renesas,r9a07g044-sysc", .data = &id_rzg2l }, 441 431 { .compatible = "renesas,r9a07g054-sysc", .data = &id_rzg2l }, 432 + { .compatible = "renesas,r9a08g045-sysc", .data = &id_rzg2l }, 442 433 { .compatible = "renesas,r9a09g011-sys", .data = &id_rzv2m }, 443 434 { .compatible = "renesas,prr", .data = &id_prr }, 444 435 { /* sentinel */ }
+2 -4
drivers/soc/rockchip/io-domain.c
··· 687 687 return ret; 688 688 } 689 689 690 - static int rockchip_iodomain_remove(struct platform_device *pdev) 690 + static void rockchip_iodomain_remove(struct platform_device *pdev) 691 691 { 692 692 struct rockchip_iodomain *iod = platform_get_drvdata(pdev); 693 693 int i; ··· 699 699 regulator_unregister_notifier(io_supply->reg, 700 700 &io_supply->nb); 701 701 } 702 - 703 - return 0; 704 702 } 705 703 706 704 static struct platform_driver rockchip_iodomain_driver = { 707 705 .probe = rockchip_iodomain_probe, 708 - .remove = rockchip_iodomain_remove, 706 + .remove_new = rockchip_iodomain_remove, 709 707 .driver = { 710 708 .name = "rockchip-iodomain", 711 709 .of_match_table = rockchip_iodomain_match,
+2 -4
drivers/soc/samsung/exynos-chipid.c
··· 158 158 return ret; 159 159 } 160 160 161 - static int exynos_chipid_remove(struct platform_device *pdev) 161 + static void exynos_chipid_remove(struct platform_device *pdev) 162 162 { 163 163 struct soc_device *soc_dev = platform_get_drvdata(pdev); 164 164 165 165 soc_device_unregister(soc_dev); 166 - 167 - return 0; 168 166 } 169 167 170 168 static const struct exynos_chipid_variant exynos4210_chipid_drv_data = { ··· 195 197 .of_match_table = exynos_chipid_of_device_ids, 196 198 }, 197 199 .probe = exynos_chipid_probe, 198 - .remove = exynos_chipid_remove, 200 + .remove_new = exynos_chipid_remove, 199 201 }; 200 202 module_platform_driver(exynos_chipid_driver); 201 203
+1 -1
drivers/soc/sifive/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 3 - if SOC_SIFIVE || SOC_STARFIVE 3 + if ARCH_SIFIVE || ARCH_STARFIVE 4 4 5 5 config SIFIVE_CCACHE 6 6 bool "Sifive Composable Cache controller"
+2 -4
drivers/soc/tegra/cbb/tegra194-cbb.c
··· 2293 2293 return tegra_cbb_register(&cbb->base); 2294 2294 } 2295 2295 2296 - static int tegra194_cbb_remove(struct platform_device *pdev) 2296 + static void tegra194_cbb_remove(struct platform_device *pdev) 2297 2297 { 2298 2298 struct tegra194_cbb *cbb = platform_get_drvdata(pdev); 2299 2299 struct tegra_cbb *noc, *tmp; ··· 2311 2311 } 2312 2312 2313 2313 spin_unlock_irqrestore(&cbb_lock, flags); 2314 - 2315 - return 0; 2316 2314 } 2317 2315 2318 2316 static int __maybe_unused tegra194_cbb_resume_noirq(struct device *dev) ··· 2330 2332 2331 2333 static struct platform_driver tegra194_cbb_driver = { 2332 2334 .probe = tegra194_cbb_probe, 2333 - .remove = tegra194_cbb_remove, 2335 + .remove_new = tegra194_cbb_remove, 2334 2336 .driver = { 2335 2337 .name = "tegra194-cbb", 2336 2338 .of_match_table = of_match_ptr(tegra194_cbb_match),
-8
drivers/soc/tegra/pmc.c
··· 1393 1393 return 0; 1394 1394 } 1395 1395 1396 - static unsigned int 1397 - tegra_pmc_core_pd_opp_to_performance_state(struct generic_pm_domain *genpd, 1398 - struct dev_pm_opp *opp) 1399 - { 1400 - return dev_pm_opp_get_level(opp); 1401 - } 1402 - 1403 1396 static int tegra_pmc_core_pd_add(struct tegra_pmc *pmc, struct device_node *np) 1404 1397 { 1405 1398 struct generic_pm_domain *genpd; ··· 1405 1412 1406 1413 genpd->name = "core"; 1407 1414 genpd->set_performance_state = tegra_pmc_core_pd_set_performance_state; 1408 - genpd->opp_to_performance_state = tegra_pmc_core_pd_opp_to_performance_state; 1409 1415 1410 1416 err = devm_pm_opp_set_regulators(pmc->dev, rname); 1411 1417 if (err)
+2 -3
drivers/soc/ti/k3-ringacc.c
··· 1551 1551 return 0; 1552 1552 } 1553 1553 1554 - static int k3_ringacc_remove(struct platform_device *pdev) 1554 + static void k3_ringacc_remove(struct platform_device *pdev) 1555 1555 { 1556 1556 struct k3_ringacc *ringacc = dev_get_drvdata(&pdev->dev); 1557 1557 1558 1558 mutex_lock(&k3_ringacc_list_lock); 1559 1559 list_del(&ringacc->list); 1560 1560 mutex_unlock(&k3_ringacc_list_lock); 1561 - return 0; 1562 1561 } 1563 1562 1564 1563 static struct platform_driver k3_ringacc_driver = { 1565 1564 .probe = k3_ringacc_probe, 1566 - .remove = k3_ringacc_remove, 1565 + .remove_new = k3_ringacc_remove, 1567 1566 .driver = { 1568 1567 .name = "k3-ringacc", 1569 1568 .of_match_table = k3_ringacc_of_match,
+3 -4
drivers/soc/ti/k3-socinfo.c
··· 20 20 * 31-28 VARIANT Device variant 21 21 * 27-12 PARTNO Part number 22 22 * 11-1 MFG Indicates TI as manufacturer (0x17) 23 - * 1 Always 1 23 + * 0 Always 1 24 24 */ 25 25 #define CTRLMMR_WKUP_JTAGID_VARIANT_SHIFT (28) 26 26 #define CTRLMMR_WKUP_JTAGID_VARIANT_MASK GENMASK(31, 28) ··· 60 60 return 0; 61 61 } 62 62 63 - return -EINVAL; 63 + return -ENODEV; 64 64 } 65 65 66 66 static int k3_chipinfo_probe(struct platform_device *pdev) ··· 111 111 112 112 ret = k3_chipinfo_partno_to_names(partno_id, soc_dev_attr); 113 113 if (ret) { 114 - dev_err(dev, "Unknown SoC JTAGID[0x%08X]\n", jtag_id); 115 - ret = -ENODEV; 114 + dev_err(dev, "Unknown SoC JTAGID[0x%08X]: %d\n", jtag_id, ret); 116 115 goto err_free_rev; 117 116 } 118 117
+2 -4
drivers/soc/ti/knav_dma.c
··· 773 773 return ret; 774 774 } 775 775 776 - static int knav_dma_remove(struct platform_device *pdev) 776 + static void knav_dma_remove(struct platform_device *pdev) 777 777 { 778 778 struct knav_dma_device *dma; 779 779 ··· 784 784 785 785 pm_runtime_put_sync(&pdev->dev); 786 786 pm_runtime_disable(&pdev->dev); 787 - 788 - return 0; 789 787 } 790 788 791 789 static struct of_device_id of_match[] = { ··· 795 797 796 798 static struct platform_driver knav_dma_driver = { 797 799 .probe = knav_dma_probe, 798 - .remove = knav_dma_remove, 800 + .remove_new = knav_dma_remove, 799 801 .driver = { 800 802 .name = "keystone-navigator-dma", 801 803 .of_match_table = of_match,
+6 -7
drivers/soc/ti/knav_qmss_queue.c
··· 14 14 #include <linux/interrupt.h> 15 15 #include <linux/io.h> 16 16 #include <linux/module.h> 17 + #include <linux/of.h> 17 18 #include <linux/of_address.h> 18 - #include <linux/of_device.h> 19 19 #include <linux/of_irq.h> 20 + #include <linux/platform_device.h> 20 21 #include <linux/pm_runtime.h> 22 + #include <linux/property.h> 21 23 #include <linux/slab.h> 22 24 #include <linux/soc/ti/knav_qmss.h> 23 25 ··· 1756 1754 { 1757 1755 struct device_node *node = pdev->dev.of_node; 1758 1756 struct device_node *qmgrs, *queue_pools, *regions, *pdsps; 1759 - const struct of_device_id *match; 1760 1757 struct device *dev = &pdev->dev; 1761 1758 u32 temp[2]; 1762 1759 int ret; ··· 1771 1770 return -ENOMEM; 1772 1771 } 1773 1772 1774 - match = of_match_device(of_match_ptr(keystone_qmss_of_match), dev); 1775 - if (match && match->data) 1773 + if (device_get_match_data(dev)) 1776 1774 kdev->version = QMSS_66AK2G; 1777 1775 1778 1776 platform_set_drvdata(pdev, kdev); ··· 1884 1884 return ret; 1885 1885 } 1886 1886 1887 - static int knav_queue_remove(struct platform_device *pdev) 1887 + static void knav_queue_remove(struct platform_device *pdev) 1888 1888 { 1889 1889 /* TODO: Free resources */ 1890 1890 pm_runtime_put_sync(&pdev->dev); 1891 1891 pm_runtime_disable(&pdev->dev); 1892 - return 0; 1893 1892 } 1894 1893 1895 1894 static struct platform_driver keystone_qmss_driver = { 1896 1895 .probe = knav_queue_probe, 1897 - .remove = knav_queue_remove, 1896 + .remove_new = knav_queue_remove, 1898 1897 .driver = { 1899 1898 .name = "keystone-navigator-qmss", 1900 1899 .of_match_table = keystone_qmss_of_match,
+2 -3
drivers/soc/ti/pm33xx.c
··· 583 583 return ret; 584 584 } 585 585 586 - static int am33xx_pm_remove(struct platform_device *pdev) 586 + static void am33xx_pm_remove(struct platform_device *pdev) 587 587 { 588 588 pm_runtime_put_sync(&pdev->dev); 589 589 pm_runtime_disable(&pdev->dev); ··· 594 594 am33xx_pm_free_sram(); 595 595 iounmap(rtc_base_virt); 596 596 clk_put(rtc_fck); 597 - return 0; 598 597 } 599 598 600 599 static struct platform_driver am33xx_pm_driver = { ··· 601 602 .name = "pm33xx", 602 603 }, 603 604 .probe = am33xx_pm_probe, 604 - .remove = am33xx_pm_remove, 605 + .remove_new = am33xx_pm_remove, 605 606 }; 606 607 module_platform_driver(am33xx_pm_driver); 607 608
+2 -4
drivers/soc/ti/pruss.c
··· 565 565 return ret; 566 566 } 567 567 568 - static int pruss_remove(struct platform_device *pdev) 568 + static void pruss_remove(struct platform_device *pdev) 569 569 { 570 570 struct device *dev = &pdev->dev; 571 571 ··· 573 573 574 574 pm_runtime_put_sync(dev); 575 575 pm_runtime_disable(dev); 576 - 577 - return 0; 578 576 } 579 577 580 578 /* instance-specific driver private data */ ··· 608 610 .of_match_table = pruss_of_match, 609 611 }, 610 612 .probe = pruss_probe, 611 - .remove = pruss_remove, 613 + .remove_new = pruss_remove, 612 614 }; 613 615 module_platform_driver(pruss_driver); 614 616
+2 -3
drivers/soc/ti/smartreflex.c
··· 933 933 return ret; 934 934 } 935 935 936 - static int omap_sr_remove(struct platform_device *pdev) 936 + static void omap_sr_remove(struct platform_device *pdev) 937 937 { 938 938 struct device *dev = &pdev->dev; 939 939 struct omap_sr *sr_info = platform_get_drvdata(pdev); ··· 945 945 pm_runtime_disable(dev); 946 946 clk_unprepare(sr_info->fck); 947 947 list_del(&sr_info->node); 948 - return 0; 949 948 } 950 949 951 950 static void omap_sr_shutdown(struct platform_device *pdev) ··· 969 970 970 971 static struct platform_driver smartreflex_driver = { 971 972 .probe = omap_sr_probe, 972 - .remove = omap_sr_remove, 973 + .remove_new = omap_sr_remove, 973 974 .shutdown = omap_sr_shutdown, 974 975 .driver = { 975 976 .name = DRIVER_NAME,
+2 -4
drivers/soc/ti/wkup_m3_ipc.c
··· 713 713 return ret; 714 714 } 715 715 716 - static int wkup_m3_ipc_remove(struct platform_device *pdev) 716 + static void wkup_m3_ipc_remove(struct platform_device *pdev) 717 717 { 718 718 wkup_m3_ipc_dbg_destroy(m3_ipc_state); 719 719 ··· 723 723 rproc_put(m3_ipc_state->rproc); 724 724 725 725 m3_ipc_state = NULL; 726 - 727 - return 0; 728 726 } 729 727 730 728 static int __maybe_unused wkup_m3_ipc_suspend(struct device *dev) ··· 758 760 759 761 static struct platform_driver wkup_m3_ipc_driver = { 760 762 .probe = wkup_m3_ipc_probe, 761 - .remove = wkup_m3_ipc_remove, 763 + .remove_new = wkup_m3_ipc_remove, 762 764 .driver = { 763 765 .name = "wkup_m3_ipc", 764 766 .of_match_table = wkup_m3_ipc_of_match,
+5
include/dt-bindings/arm/qcom,ids.h
··· 193 193 #define QCOM_ID_SDA439 363 194 194 #define QCOM_ID_SDA429 364 195 195 #define QCOM_ID_SM7150 365 196 + #define QCOM_ID_SM7150P 366 196 197 #define QCOM_ID_IPQ8070 375 197 198 #define QCOM_ID_IPQ8071 376 198 199 #define QCOM_ID_QM215 386 ··· 204 203 #define QCOM_ID_SM6125 394 205 204 #define QCOM_ID_IPQ8070A 395 206 205 #define QCOM_ID_IPQ8071A 396 206 + #define QCOM_ID_IPQ8172 397 207 + #define QCOM_ID_IPQ8173 398 208 + #define QCOM_ID_IPQ8174 399 207 209 #define QCOM_ID_IPQ6018 402 208 210 #define QCOM_ID_IPQ6028 403 209 211 #define QCOM_ID_SDM429W 416 ··· 237 233 #define QCOM_ID_SM8450_3 482 238 234 #define QCOM_ID_SC7280 487 239 235 #define QCOM_ID_SC7180P 495 236 + #define QCOM_ID_QCM6490 497 240 237 #define QCOM_ID_IPQ5000 503 241 238 #define QCOM_ID_IPQ0509 504 242 239 #define QCOM_ID_IPQ0518 505
+67 -12
include/linux/arm_ffa.h
··· 6 6 #ifndef _LINUX_ARM_FFA_H 7 7 #define _LINUX_ARM_FFA_H 8 8 9 + #include <linux/bitfield.h> 9 10 #include <linux/device.h> 10 11 #include <linux/module.h> 11 12 #include <linux/types.h> ··· 21 20 22 21 #define FFA_ERROR FFA_SMC_32(0x60) 23 22 #define FFA_SUCCESS FFA_SMC_32(0x61) 23 + #define FFA_FN64_SUCCESS FFA_SMC_64(0x61) 24 24 #define FFA_INTERRUPT FFA_SMC_32(0x62) 25 25 #define FFA_VERSION FFA_SMC_32(0x63) 26 26 #define FFA_FEATURES FFA_SMC_32(0x64) ··· 56 54 #define FFA_MEM_FRAG_RX FFA_SMC_32(0x7A) 57 55 #define FFA_MEM_FRAG_TX FFA_SMC_32(0x7B) 58 56 #define FFA_NORMAL_WORLD_RESUME FFA_SMC_32(0x7C) 57 + #define FFA_NOTIFICATION_BITMAP_CREATE FFA_SMC_32(0x7D) 58 + #define FFA_NOTIFICATION_BITMAP_DESTROY FFA_SMC_32(0x7E) 59 + #define FFA_NOTIFICATION_BIND FFA_SMC_32(0x7F) 60 + #define FFA_NOTIFICATION_UNBIND FFA_SMC_32(0x80) 61 + #define FFA_NOTIFICATION_SET FFA_SMC_32(0x81) 62 + #define FFA_NOTIFICATION_GET FFA_SMC_32(0x82) 63 + #define FFA_NOTIFICATION_INFO_GET FFA_SMC_32(0x83) 64 + #define FFA_FN64_NOTIFICATION_INFO_GET FFA_SMC_64(0x83) 65 + #define FFA_RX_ACQUIRE FFA_SMC_32(0x84) 66 + #define FFA_SPM_ID_GET FFA_SMC_32(0x85) 67 + #define FFA_MSG_SEND2 FFA_SMC_32(0x86) 68 + #define FFA_SECONDARY_EP_REGISTER FFA_SMC_32(0x87) 69 + #define FFA_FN64_SECONDARY_EP_REGISTER FFA_SMC_64(0x87) 70 + #define FFA_MEM_PERM_GET FFA_SMC_32(0x88) 71 + #define FFA_FN64_MEM_PERM_GET FFA_SMC_64(0x88) 72 + #define FFA_MEM_PERM_SET FFA_SMC_32(0x89) 73 + #define FFA_FN64_MEM_PERM_SET FFA_SMC_64(0x89) 59 74 60 75 /* 61 76 * For some calls it is necessary to use SMC64 to pass or return 64-bit values. 
··· 95 76 #define FFA_RET_DENIED (-6) 96 77 #define FFA_RET_RETRY (-7) 97 78 #define FFA_RET_ABORTED (-8) 79 + #define FFA_RET_NO_DATA (-9) 98 80 99 81 /* FFA version encoding */ 100 82 #define FFA_MAJOR_VERSION_MASK GENMASK(30, 16) ··· 106 86 (FIELD_PREP(FFA_MAJOR_VERSION_MASK, (major)) | \ 107 87 FIELD_PREP(FFA_MINOR_VERSION_MASK, (minor))) 108 88 #define FFA_VERSION_1_0 FFA_PACK_VERSION_INFO(1, 0) 89 + #define FFA_VERSION_1_1 FFA_PACK_VERSION_INFO(1, 1) 109 90 110 91 /** 111 92 * FF-A specification mentions explicitly about '4K pages'. This should ··· 299 278 #define FFA_MEM_NON_SHAREABLE (0) 300 279 #define FFA_MEM_OUTER_SHAREABLE (2) 301 280 #define FFA_MEM_INNER_SHAREABLE (3) 302 - u8 attributes; 303 - u8 reserved_0; 281 + /* Memory region attributes, upper byte MBZ pre v1.1 */ 282 + u16 attributes; 304 283 /* 305 284 * Clear memory region contents after unmapping it from the sender and 306 285 * before mapping it for any receiver. ··· 338 317 * memory region. 339 318 */ 340 319 u64 tag; 341 - u32 reserved_1; 320 + /* Size of each endpoint memory access descriptor, MBZ pre v1.1 */ 321 + u32 ep_mem_size; 342 322 /* 343 323 * The number of `ffa_mem_region_attributes` entries included in this 344 324 * transaction. 345 325 */ 346 326 u32 ep_count; 347 327 /* 348 - * An array of endpoint memory access descriptors. 349 - * Each one specifies a memory region offset, an endpoint and the 350 - * attributes with which this memory region should be mapped in that 351 - * endpoint's page table. 
328 + * 16-byte aligned offset from the base address of this descriptor 329 + * to the first element of the endpoint memory access descriptor array 330 + * Valid only from v1.1 352 331 */ 353 - struct ffa_mem_region_attributes ep_mem_access[]; 332 + u32 ep_mem_offset; 333 + /* MBZ, valid only from v1.1 */ 334 + u32 reserved[3]; 354 335 }; 355 336 356 - #define COMPOSITE_OFFSET(x) \ 357 - (offsetof(struct ffa_mem_region, ep_mem_access[x])) 358 337 #define CONSTITUENTS_OFFSET(x) \ 359 338 (offsetof(struct ffa_composite_mem_region, constituents[x])) 360 - #define COMPOSITE_CONSTITUENTS_OFFSET(x, y) \ 361 - (COMPOSITE_OFFSET(x) + CONSTITUENTS_OFFSET(y)) 339 + 340 + static inline u32 341 + ffa_mem_desc_offset(struct ffa_mem_region *buf, int count, u32 ffa_version) 342 + { 343 + u32 offset = count * sizeof(struct ffa_mem_region_attributes); 344 + /* 345 + * Earlier to v1.1, the endpoint memory descriptor array started at 346 + * offset 32(i.e. offset of ep_mem_offset in the current structure) 347 + */ 348 + if (ffa_version <= FFA_VERSION_1_0) 349 + offset += offsetof(struct ffa_mem_region, ep_mem_offset); 350 + else 351 + offset += sizeof(struct ffa_mem_region); 352 + 353 + return offset; 354 + } 362 355 363 356 struct ffa_mem_ops_args { 364 357 bool use_txbuf; ··· 402 367 int (*memory_lend)(struct ffa_mem_ops_args *args); 403 368 }; 404 369 370 + struct ffa_cpu_ops { 371 + int (*run)(struct ffa_device *dev, u16 vcpu); 372 + }; 373 + 374 + typedef void (*ffa_sched_recv_cb)(u16 vcpu, bool is_per_vcpu, void *cb_data); 375 + typedef void (*ffa_notifier_cb)(int notify_id, void *cb_data); 376 + 377 + struct ffa_notifier_ops { 378 + int (*sched_recv_cb_register)(struct ffa_device *dev, 379 + ffa_sched_recv_cb cb, void *cb_data); 380 + int (*sched_recv_cb_unregister)(struct ffa_device *dev); 381 + int (*notify_request)(struct ffa_device *dev, bool per_vcpu, 382 + ffa_notifier_cb cb, void *cb_data, int notify_id); 383 + int (*notify_relinquish)(struct ffa_device *dev, int 
notify_id); 384 + int (*notify_send)(struct ffa_device *dev, int notify_id, bool per_vcpu, 385 + u16 vcpu); 386 + }; 387 + 405 388 struct ffa_ops { 406 389 const struct ffa_info_ops *info_ops; 407 390 const struct ffa_msg_ops *msg_ops; 408 391 const struct ffa_mem_ops *mem_ops; 392 + const struct ffa_cpu_ops *cpu_ops; 393 + const struct ffa_notifier_ops *notifier_ops; 409 394 }; 410 395 411 396 #endif /* _LINUX_ARM_FFA_H */
+1 -1
include/linux/firmware/meson/meson_sm.h
··· 19 19 struct meson_sm_firmware; 20 20 21 21 int meson_sm_call(struct meson_sm_firmware *fw, unsigned int cmd_index, 22 - u32 *ret, u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4); 22 + s32 *ret, u32 arg0, u32 arg1, u32 arg2, u32 arg3, u32 arg4); 23 23 int meson_sm_call_write(struct meson_sm_firmware *fw, void *buffer, 24 24 unsigned int b_size, unsigned int cmd_index, u32 arg0, 25 25 u32 arg1, u32 arg2, u32 arg3, u32 arg4);
+52
include/linux/firmware/qcom/qcom_qseecom.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 + /* 3 + * Driver for Qualcomm Secure Execution Environment (SEE) interface (QSEECOM). 4 + * Responsible for setting up and managing QSEECOM client devices. 5 + * 6 + * Copyright (C) 2023 Maximilian Luz <luzmaximilian@gmail.com> 7 + */ 8 + 9 + #ifndef __QCOM_QSEECOM_H 10 + #define __QCOM_QSEECOM_H 11 + 12 + #include <linux/auxiliary_bus.h> 13 + #include <linux/types.h> 14 + 15 + #include <linux/firmware/qcom/qcom_scm.h> 16 + 17 + /** 18 + * struct qseecom_client - QSEECOM client device. 19 + * @aux_dev: Underlying auxiliary device. 20 + * @app_id: ID of the loaded application. 21 + */ 22 + struct qseecom_client { 23 + struct auxiliary_device aux_dev; 24 + u32 app_id; 25 + }; 26 + 27 + /** 28 + * qcom_qseecom_app_send() - Send to and receive data from a given QSEE app. 29 + * @client: The QSEECOM client associated with the target app. 30 + * @req: Request buffer sent to the app (must be DMA-mappable). 31 + * @req_size: Size of the request buffer. 32 + * @rsp: Response buffer, written to by the app (must be DMA-mappable). 33 + * @rsp_size: Size of the response buffer. 34 + * 35 + * Sends a request to the QSEE app associated with the given client and read 36 + * back its response. The caller must provide two DMA memory regions, one for 37 + * the request and one for the response, and fill out the @req region with the 38 + * respective (app-specific) request data. The QSEE app reads this and returns 39 + * its response in the @rsp region. 40 + * 41 + * Note: This is a convenience wrapper around qcom_scm_qseecom_app_send(). 42 + * Clients should prefer to use this wrapper. 43 + * 44 + * Return: Zero on success, nonzero on failure. 45 + */ 46 + static inline int qcom_qseecom_app_send(struct qseecom_client *client, void *req, size_t req_size, 47 + void *rsp, size_t rsp_size) 48 + { 49 + return qcom_scm_qseecom_app_send(client->app_id, req, req_size, rsp, rsp_size); 50 + } 51 + 52 + #endif /* __QCOM_QSEECOM_H */
+62 -47
include/linux/firmware/qcom/qcom_scm.h
··· 59 59 #define QCOM_SCM_PERM_RW (QCOM_SCM_PERM_READ | QCOM_SCM_PERM_WRITE) 60 60 #define QCOM_SCM_PERM_RWX (QCOM_SCM_PERM_RW | QCOM_SCM_PERM_EXEC) 61 61 62 - extern bool qcom_scm_is_available(void); 62 + bool qcom_scm_is_available(void); 63 63 64 - extern int qcom_scm_set_cold_boot_addr(void *entry); 65 - extern int qcom_scm_set_warm_boot_addr(void *entry); 66 - extern void qcom_scm_cpu_power_down(u32 flags); 67 - extern int qcom_scm_set_remote_state(u32 state, u32 id); 64 + int qcom_scm_set_cold_boot_addr(void *entry); 65 + int qcom_scm_set_warm_boot_addr(void *entry); 66 + void qcom_scm_cpu_power_down(u32 flags); 67 + int qcom_scm_set_remote_state(u32 state, u32 id); 68 68 69 69 struct qcom_scm_pas_metadata { 70 70 void *ptr; ··· 72 72 ssize_t size; 73 73 }; 74 74 75 - extern int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, 76 - size_t size, 77 - struct qcom_scm_pas_metadata *ctx); 78 - extern void qcom_scm_pas_metadata_release(struct qcom_scm_pas_metadata *ctx); 79 - extern int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, 80 - phys_addr_t size); 81 - extern int qcom_scm_pas_auth_and_reset(u32 peripheral); 82 - extern int qcom_scm_pas_shutdown(u32 peripheral); 83 - extern bool qcom_scm_pas_supported(u32 peripheral); 75 + int qcom_scm_pas_init_image(u32 peripheral, const void *metadata, size_t size, 76 + struct qcom_scm_pas_metadata *ctx); 77 + void qcom_scm_pas_metadata_release(struct qcom_scm_pas_metadata *ctx); 78 + int qcom_scm_pas_mem_setup(u32 peripheral, phys_addr_t addr, phys_addr_t size); 79 + int qcom_scm_pas_auth_and_reset(u32 peripheral); 80 + int qcom_scm_pas_shutdown(u32 peripheral); 81 + bool qcom_scm_pas_supported(u32 peripheral); 84 82 85 - extern int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val); 86 - extern int qcom_scm_io_writel(phys_addr_t addr, unsigned int val); 83 + int qcom_scm_io_readl(phys_addr_t addr, unsigned int *val); 84 + int qcom_scm_io_writel(phys_addr_t addr, unsigned int val); 87 85 88 - 
extern bool qcom_scm_restore_sec_cfg_available(void); 89 - extern int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare); 90 - extern int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size); 91 - extern int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare); 92 - extern int qcom_scm_iommu_set_cp_pool_size(u32 spare, u32 size); 93 - extern int qcom_scm_mem_protect_video_var(u32 cp_start, u32 cp_size, 94 - u32 cp_nonpixel_start, 95 - u32 cp_nonpixel_size); 96 - extern int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, 97 - u64 *src, 98 - const struct qcom_scm_vmperm *newvm, 99 - unsigned int dest_cnt); 86 + bool qcom_scm_restore_sec_cfg_available(void); 87 + int qcom_scm_restore_sec_cfg(u32 device_id, u32 spare); 88 + int qcom_scm_iommu_secure_ptbl_size(u32 spare, size_t *size); 89 + int qcom_scm_iommu_secure_ptbl_init(u64 addr, u32 size, u32 spare); 90 + int qcom_scm_iommu_set_cp_pool_size(u32 spare, u32 size); 91 + int qcom_scm_mem_protect_video_var(u32 cp_start, u32 cp_size, 92 + u32 cp_nonpixel_start, u32 cp_nonpixel_size); 93 + int qcom_scm_assign_mem(phys_addr_t mem_addr, size_t mem_sz, u64 *src, 94 + const struct qcom_scm_vmperm *newvm, 95 + unsigned int dest_cnt); 100 96 101 - extern bool qcom_scm_ocmem_lock_available(void); 102 - extern int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, 103 - u32 size, u32 mode); 104 - extern int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, 105 - u32 size); 97 + bool qcom_scm_ocmem_lock_available(void); 98 + int qcom_scm_ocmem_lock(enum qcom_scm_ocmem_client id, u32 offset, u32 size, 99 + u32 mode); 100 + int qcom_scm_ocmem_unlock(enum qcom_scm_ocmem_client id, u32 offset, u32 size); 106 101 107 - extern bool qcom_scm_ice_available(void); 108 - extern int qcom_scm_ice_invalidate_key(u32 index); 109 - extern int qcom_scm_ice_set_key(u32 index, const u8 *key, u32 key_size, 110 - enum qcom_scm_ice_cipher cipher, 111 - u32 data_unit_size); 102 + bool 
qcom_scm_ice_available(void); 103 + int qcom_scm_ice_invalidate_key(u32 index); 104 + int qcom_scm_ice_set_key(u32 index, const u8 *key, u32 key_size, 105 + enum qcom_scm_ice_cipher cipher, u32 data_unit_size); 112 106 113 - extern bool qcom_scm_hdcp_available(void); 114 - extern int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, 115 - u32 *resp); 107 + bool qcom_scm_hdcp_available(void); 108 + int qcom_scm_hdcp_req(struct qcom_scm_hdcp_req *req, u32 req_cnt, u32 *resp); 116 109 117 - extern int qcom_scm_iommu_set_pt_format(u32 sec_id, u32 ctx_num, u32 pt_fmt); 118 - extern int qcom_scm_qsmmu500_wait_safe_toggle(bool en); 110 + int qcom_scm_iommu_set_pt_format(u32 sec_id, u32 ctx_num, u32 pt_fmt); 111 + int qcom_scm_qsmmu500_wait_safe_toggle(bool en); 119 112 120 - extern int qcom_scm_lmh_dcvsh(u32 payload_fn, u32 payload_reg, u32 payload_val, 121 - u64 limit_node, u32 node_id, u64 version); 122 - extern int qcom_scm_lmh_profile_change(u32 profile_id); 123 - extern bool qcom_scm_lmh_dcvsh_available(void); 113 + int qcom_scm_lmh_dcvsh(u32 payload_fn, u32 payload_reg, u32 payload_val, 114 + u64 limit_node, u32 node_id, u64 version); 115 + int qcom_scm_lmh_profile_change(u32 profile_id); 116 + bool qcom_scm_lmh_dcvsh_available(void); 117 + 118 + #ifdef CONFIG_QCOM_QSEECOM 119 + 120 + int qcom_scm_qseecom_app_get_id(const char *app_name, u32 *app_id); 121 + int qcom_scm_qseecom_app_send(u32 app_id, void *req, size_t req_size, void *rsp, 122 + size_t rsp_size); 123 + 124 + #else /* CONFIG_QCOM_QSEECOM */ 125 + 126 + static inline int qcom_scm_qseecom_app_get_id(const char *app_name, u32 *app_id) 127 + { 128 + return -EINVAL; 129 + } 130 + 131 + static inline int qcom_scm_qseecom_app_send(u32 app_id, void *req, 132 + size_t req_size, void *rsp, 133 + size_t rsp_size) 134 + { 135 + return -EINVAL; 136 + } 137 + 138 + #endif /* CONFIG_QCOM_QSEECOM */ 124 139 125 140 #endif
+6
include/linux/nvmem-consumer.h
··· 127 127 return -EOPNOTSUPP; 128 128 } 129 129 130 + static inline int nvmem_cell_read_u8(struct device *dev, 131 + const char *cell_id, u8 *val) 132 + { 133 + return -EOPNOTSUPP; 134 + } 135 + 130 136 static inline int nvmem_cell_read_u16(struct device *dev, 131 137 const char *cell_id, u16 *val) 132 138 {
+5
include/linux/pm_domain.h
··· 61 61 * GENPD_FLAG_MIN_RESIDENCY: Enable the genpd governor to consider its 62 62 * components' next wakeup when determining the 63 63 * optimal idle state. 64 + * 65 + * GENPD_FLAG_OPP_TABLE_FW: The genpd provider supports performance states, 66 + * but its corresponding OPP tables are not 67 + * described in DT, but are given directly by FW. 64 68 */ 65 69 #define GENPD_FLAG_PM_CLK (1U << 0) 66 70 #define GENPD_FLAG_IRQ_SAFE (1U << 1) ··· 73 69 #define GENPD_FLAG_CPU_DOMAIN (1U << 4) 74 70 #define GENPD_FLAG_RPM_ALWAYS_ON (1U << 5) 75 71 #define GENPD_FLAG_MIN_RESIDENCY (1U << 6) 72 + #define GENPD_FLAG_OPP_TABLE_FW (1U << 7) 76 73 77 74 enum gpd_status { 78 75 GENPD_STATE_ON = 0, /* PM domain is on */
+33 -10
include/linux/scmi_protocol.h
··· 58 58 u64 step_size; 59 59 } range; 60 60 }; 61 + int num_parents; 62 + u32 *parents; 61 63 }; 62 64 63 65 enum scmi_power_scale { ··· 82 80 * @rate_set: set the clock rate of a clock 83 81 * @enable: enables the specified clock 84 82 * @disable: disables the specified clock 83 + * @state_get: get the status of the specified clock 84 + * @config_oem_get: get the value of an OEM specific clock config 85 + * @config_oem_set: set the value of an OEM specific clock config 86 + * @parent_get: get the parent id of a clk 87 + * @parent_set: set the parent of a clock 85 88 */ 86 89 struct scmi_clk_proto_ops { 87 90 int (*count_get)(const struct scmi_protocol_handle *ph); ··· 97 90 u64 *rate); 98 91 int (*rate_set)(const struct scmi_protocol_handle *ph, u32 clk_id, 99 92 u64 rate); 100 - int (*enable)(const struct scmi_protocol_handle *ph, u32 clk_id); 101 - int (*disable)(const struct scmi_protocol_handle *ph, u32 clk_id); 102 - int (*enable_atomic)(const struct scmi_protocol_handle *ph, u32 clk_id); 103 - int (*disable_atomic)(const struct scmi_protocol_handle *ph, 104 - u32 clk_id); 93 + int (*enable)(const struct scmi_protocol_handle *ph, u32 clk_id, 94 + bool atomic); 95 + int (*disable)(const struct scmi_protocol_handle *ph, u32 clk_id, 96 + bool atomic); 97 + int (*state_get)(const struct scmi_protocol_handle *ph, u32 clk_id, 98 + bool *enabled, bool atomic); 99 + int (*config_oem_get)(const struct scmi_protocol_handle *ph, u32 clk_id, 100 + u8 oem_type, u32 *oem_val, u32 *attributes, 101 + bool atomic); 102 + int (*config_oem_set)(const struct scmi_protocol_handle *ph, u32 clk_id, 103 + u8 oem_type, u32 oem_val, bool atomic); 104 + int (*parent_get)(const struct scmi_protocol_handle *ph, u32 clk_id, u32 *parent_id); 105 + int (*parent_set)(const struct scmi_protocol_handle *ph, u32 clk_id, u32 parent_id); 106 + }; 107 + 108 + struct scmi_perf_domain_info { 109 + char name[SCMI_MAX_STR_SIZE]; 110 + bool set_perf; 105 111 }; 106 112 107 113 /** 108 114 * struct 
scmi_perf_proto_ops - represents the various operations provided 109 115 * by SCMI Performance Protocol 110 116 * 117 + * @num_domains_get: gets the number of supported performance domains 118 + * @info_get: get the information of a performance domain 111 119 * @limits_set: sets limits on the performance level of a domain 112 120 * @limits_get: gets limits on the performance level of a domain 113 121 * @level_set: sets the performance level of a domain 114 122 * @level_get: gets the performance level of a domain 115 - * @device_domain_id: gets the scmi domain id for a given device 116 123 * @transition_latency_get: gets the DVFS transition latency for a given device 117 124 * @device_opps_add: adds all the OPPs for a given device 118 125 * @freq_set: sets the frequency for a given device using sustained frequency ··· 141 120 * or in some other (abstract) scale 142 121 */ 143 122 struct scmi_perf_proto_ops { 123 + int (*num_domains_get)(const struct scmi_protocol_handle *ph); 124 + const struct scmi_perf_domain_info __must_check *(*info_get) 125 + (const struct scmi_protocol_handle *ph, u32 domain); 144 126 int (*limits_set)(const struct scmi_protocol_handle *ph, u32 domain, 145 127 u32 max_perf, u32 min_perf); 146 128 int (*limits_get)(const struct scmi_protocol_handle *ph, u32 domain, ··· 152 128 u32 level, bool poll); 153 129 int (*level_get)(const struct scmi_protocol_handle *ph, u32 domain, 154 130 u32 *level, bool poll); 155 - int (*device_domain_id)(struct device *dev); 156 131 int (*transition_latency_get)(const struct scmi_protocol_handle *ph, 157 - struct device *dev); 132 + u32 domain); 158 133 int (*device_opps_add)(const struct scmi_protocol_handle *ph, 159 - struct device *dev); 134 + struct device *dev, u32 domain); 160 135 int (*freq_set)(const struct scmi_protocol_handle *ph, u32 domain, 161 136 unsigned long rate, bool poll); 162 137 int (*freq_get)(const struct scmi_protocol_handle *ph, u32 domain, ··· 163 140 int (*est_power_get)(const struct 
scmi_protocol_handle *ph, u32 domain, 164 141 unsigned long *rate, unsigned long *power); 165 142 bool (*fast_switch_possible)(const struct scmi_protocol_handle *ph, 166 - struct device *dev); 143 + u32 domain); 167 144 enum scmi_power_scale (*power_scale_get)(const struct scmi_protocol_handle *ph); 168 145 }; 169 146
+1 -1
include/linux/soc/qcom/llcc-qcom.h
··· 30 30 #define LLCC_NPU 23 31 31 #define LLCC_WLHW 24 32 32 #define LLCC_PIMEM 25 33 - #define LLCC_DRE 26 33 + #define LLCC_ECC 26 34 34 #define LLCC_CVP 28 35 35 #define LLCC_MODPE 29 36 36 #define LLCC_APTCM 30
+1
include/linux/ucs2_string.h
··· 10 10 unsigned long ucs2_strnlen(const ucs2_char_t *s, size_t maxlength); 11 11 unsigned long ucs2_strlen(const ucs2_char_t *s); 12 12 unsigned long ucs2_strsize(const ucs2_char_t *data, unsigned long maxlength); 13 + ssize_t ucs2_strscpy(ucs2_char_t *dst, const ucs2_char_t *src, size_t count); 13 14 int ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len); 14 15 15 16 unsigned long ucs2_utf8size(const ucs2_char_t *src);
+1 -1
include/soc/tegra/bpmp-abi.h
··· 1194 1194 */ 1195 1195 struct cmd_clk_is_enabled_response { 1196 1196 /** 1197 - * @brief The state of the clock that has been succesfully 1197 + * @brief The state of the clock that has been successfully 1198 1198 * requested with CMD_CLK_ENABLE or CMD_CLK_DISABLE by the 1199 1199 * master invoking the command earlier. 1200 1200 *
+6
include/soc/tegra/bpmp.h
··· 102 102 #ifdef CONFIG_DEBUG_FS 103 103 struct dentry *debugfs_mirror; 104 104 #endif 105 + 106 + bool suspended; 105 107 }; 108 + 109 + #define TEGRA_BPMP_MESSAGE_RESET BIT(0) 106 110 107 111 struct tegra_bpmp_message { 108 112 unsigned int mrq; ··· 121 117 size_t size; 122 118 int ret; 123 119 } rx; 120 + 121 + unsigned long flags; 124 122 }; 125 123 126 124 #if IS_ENABLED(CONFIG_TEGRA_BPMP)
+52
lib/ucs2_string.c
··· 32 32 } 33 33 EXPORT_SYMBOL(ucs2_strsize); 34 34 35 + /** 36 + * ucs2_strscpy() - Copy a UCS2 string into a sized buffer. 37 + * 38 + * @dst: Pointer to the destination buffer where to copy the string to. 39 + * @src: Pointer to the source buffer where to copy the string from. 40 + * @count: Size of the destination buffer, in UCS2 (16-bit) characters. 41 + * 42 + * Like strscpy(), only for UCS2 strings. 43 + * 44 + * Copy the source string @src, or as much of it as fits, into the destination 45 + * buffer @dst. The behavior is undefined if the string buffers overlap. The 46 + * destination buffer @dst is always NUL-terminated, unless it's zero-sized. 47 + * 48 + * Return: The number of characters copied into @dst (excluding the trailing 49 + * %NUL terminator) or -E2BIG if @count is 0 or @src was truncated due to the 50 + * destination buffer being too small. 51 + */ 52 + ssize_t ucs2_strscpy(ucs2_char_t *dst, const ucs2_char_t *src, size_t count) 53 + { 54 + long res; 55 + 56 + /* 57 + * Ensure that we have a valid amount of space. We need to store at 58 + * least one NUL-character. 59 + */ 60 + if (count == 0 || WARN_ON_ONCE(count > INT_MAX / sizeof(*dst))) 61 + return -E2BIG; 62 + 63 + /* 64 + * Copy at most 'count' characters, return early if we find a 65 + * NUL-terminator. 66 + */ 67 + for (res = 0; res < count; res++) { 68 + ucs2_char_t c; 69 + 70 + c = src[res]; 71 + dst[res] = c; 72 + 73 + if (!c) 74 + return res; 75 + } 76 + 77 + /* 78 + * The loop above terminated without finding a NUL-terminator, 79 + * exceeding the 'count': Enforce proper NUL-termination and return 80 + * error. 81 + */ 82 + dst[count - 1] = 0; 83 + return -E2BIG; 84 + } 85 + EXPORT_SYMBOL(ucs2_strscpy); 86 + 35 87 int 36 88 ucs2_strncmp(const ucs2_char_t *a, const ucs2_char_t *b, size_t len) 37 89 {