Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'rproc-v5.7' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc

Pull remoteproc updates from Bjorn Andersson:

- a range of improvements to the OMAP remoteproc driver; among other
  things adding devicetree, suspend/resume and watchdog support, and
  adding support for the remote processors in the DRA7xx SoC

- support for 64-bit firmware, extending the ELF loader accordingly,
  and fixes for a number of race conditions in the recovery handling

- a generic mechanism to allow remoteproc drivers to sync state with
  remote processors during a panic, using this to prepare Qualcomm
  remote processors for post mortem analysis

- fixes to cleanly recover from crashes in the modem firmware on
production Qualcomm devices

* tag 'rproc-v5.7' of git://git.kernel.org/pub/scm/linux/kernel/git/andersson/remoteproc: (37 commits)
remoteproc/omap: Switch to SPDX license identifiers
remoteproc/omap: Add watchdog functionality for remote processors
remoteproc/omap: Report device exceptions and trigger recovery
remoteproc/omap: Add support for runtime auto-suspend/resume
remoteproc/omap: Add support for system suspend/resume
remoteproc/omap: Request a timer(s) for remoteproc usage
remoteproc/omap: Check for undefined mailbox messages
remoteproc/omap: Remove the platform_data header
remoteproc/omap: Add support for DRA7xx remote processors
remoteproc/omap: Initialize and assign reserved memory node
remoteproc/omap: Add the rproc ops .da_to_va() implementation
remoteproc/omap: Add support to parse internal memories from DT
remoteproc/omap: Add a sanity check for DSP boot address alignment
remoteproc/omap: Add device tree support
dt-bindings: remoteproc: Add OMAP remoteproc bindings
remoteproc: qcom: Introduce panic handler for PAS and ADSP
remoteproc: qcom: q6v5: Add common panic handler
remoteproc: Introduce "panic" callback in ops
remoteproc: Traverse rproc_list under RCU read lock
remoteproc: Fix NULL pointer dereference in rproc_virtio_notify
...

+2077 -273
+324
Documentation/devicetree/bindings/remoteproc/ti,omap-remoteproc.yaml
··· 1 + # SPDX-License-Identifier: (GPL-2.0-only or BSD-2-Clause) 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/remoteproc/ti,omap-remoteproc.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: OMAP4+ Remoteproc Devices 8 + 9 + maintainers: 10 + - Suman Anna <s-anna@ti.com> 11 + 12 + description: 13 + The OMAP family of SoCs usually have one or more slave processor sub-systems 14 + that are used to offload some of the processor-intensive tasks, or to manage 15 + other hardware accelerators, for achieving various system level goals. 16 + 17 + The processor cores in the sub-system are usually behind an IOMMU, and may 18 + contain additional sub-modules like Internal RAM and/or ROMs, L1 and/or L2 19 + caches, an Interrupt Controller, a Cache Controller etc. 20 + 21 + The OMAP SoCs usually have a DSP processor sub-system and/or an IPU processor 22 + sub-system. The DSP processor sub-system can contain any of the TI's C64x, 23 + C66x or C67x family of DSP cores as the main execution unit. The IPU processor 24 + sub-system usually contains either a Dual-Core Cortex-M3 or Dual-Core 25 + Cortex-M4 processors. 26 + 27 + Each remote processor sub-system is represented as a single DT node. Each node 28 + has a number of required or optional properties that enable the OS running on 29 + the host processor (MPU) to perform the device management of the remote 30 + processor and to communicate with the remote processor. The various properties 31 + can be classified as constant or variable. The constant properties are 32 + dictated by the SoC and does not change from one board to another having the 33 + same SoC. Examples of constant properties include 'iommus', 'reg'. The 34 + variable properties are dictated by the system integration aspects such as 35 + memory on the board, or configuration used within the corresponding firmware 36 + image. 
Examples of variable properties include 'mboxes', 'memory-region', 37 + 'timers', 'watchdog-timers' etc. 38 + 39 + properties: 40 + compatible: 41 + enum: 42 + - ti,omap4-dsp 43 + - ti,omap5-dsp 44 + - ti,dra7-dsp 45 + - ti,omap4-ipu 46 + - ti,omap5-ipu 47 + - ti,dra7-ipu 48 + 49 + iommus: 50 + minItems: 1 51 + maxItems: 2 52 + description: | 53 + phandles to OMAP IOMMU nodes, that need to be programmed 54 + for this remote processor to access any external RAM memory or 55 + other peripheral device address spaces. This property usually 56 + has only a single phandle. Multiple phandles are used only in 57 + cases where the sub-system has different ports for different 58 + sub-modules within the processor sub-system (eg: DRA7 DSPs), 59 + and need the same programming in both the MMUs. 60 + 61 + mboxes: 62 + minItems: 1 63 + maxItems: 2 64 + description: | 65 + OMAP Mailbox specifier denoting the sub-mailbox, to be used for 66 + communication with the remote processor. The specifier format is 67 + as per the bindings, 68 + Documentation/devicetree/bindings/mailbox/omap-mailbox.txt 69 + This property should match with the sub-mailbox node used in 70 + the firmware image. 71 + 72 + clocks: 73 + description: | 74 + Main functional clock for the remote processor 75 + 76 + resets: 77 + description: | 78 + Reset handles for the remote processor 79 + 80 + firmware-name: 81 + description: | 82 + Default name of the firmware to load to the remote processor. 83 + 84 + # Optional properties: 85 + # -------------------- 86 + # Some of these properties are mandatory on some SoCs, and some are optional 87 + # depending on the configuration of the firmware image to be executed on the 88 + # remote processor. The conditions are mentioned for each property. 
89 + # 90 + # The following are the optional properties: 91 + 92 + memory-region: 93 + $ref: /schemas/types.yaml#/definitions/phandle 94 + description: | 95 + phandle to the reserved memory node to be associated 96 + with the remoteproc device. The reserved memory node 97 + can be a CMA memory node, and should be defined as 98 + per the bindings, 99 + Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt 100 + 101 + reg: 102 + description: | 103 + Address space for any remoteproc memories present on 104 + the SoC. Should contain an entry for each value in 105 + 'reg-names'. These are mandatory for all DSP and IPU 106 + processors that have them (OMAP4/OMAP5 DSPs do not have 107 + any RAMs) 108 + 109 + reg-names: 110 + description: | 111 + Required names for each of the address spaces defined in 112 + the 'reg' property. Expects the names from the following 113 + list, in the specified order, each representing the corresponding 114 + internal RAM memory region. 115 + minItems: 1 116 + maxItems: 3 117 + items: 118 + - const: l2ram 119 + - const: l1pram 120 + - const: l1dram 121 + 122 + ti,bootreg: 123 + $ref: /schemas/types.yaml#/definitions/phandle-array 124 + description: | 125 + Should be a triple of the phandle to the System Control 126 + Configuration region that contains the boot address 127 + register, the register offset of the boot address 128 + register within the System Control module, and the bit 129 + shift within the register. This property is required for 130 + all the DSP instances on OMAP4, OMAP5 and DRA7xx SoCs. 131 + 132 + ti,autosuspend-delay-ms: 133 + description: | 134 + Custom autosuspend delay for the remoteproc in milliseconds. 135 + Recommended values is preferable to be in the order of couple 136 + of seconds. A negative value can also be used to disable the 137 + autosuspend behavior. 
138 + 139 + ti,timers: 140 + $ref: /schemas/types.yaml#/definitions/phandle-array 141 + description: | 142 + One or more phandles to OMAP DMTimer nodes, that serve 143 + as System/Tick timers for the OS running on the remote 144 + processors. This will usually be a single timer if the 145 + processor sub-system is running in SMP mode, or one per 146 + core in the processor sub-system. This can also be used 147 + to reserve specific timers to be dedicated to the 148 + remote processors. 149 + 150 + This property is mandatory on remote processors requiring 151 + external tick wakeup, and to support Power Management 152 + features. The timers to be used should match with the 153 + timers used in the firmware image. 154 + 155 + ti,watchdog-timers: 156 + $ref: /schemas/types.yaml#/definitions/phandle-array 157 + description: | 158 + One or more phandles to OMAP DMTimer nodes, used to 159 + serve as Watchdog timers for the processor cores. This 160 + will usually be one per executing processor core, even 161 + if the processor sub-system is running a SMP OS. 162 + 163 + The timers to be used should match with the watchdog 164 + timers used in the firmware image. 
165 + 166 + if: 167 + properties: 168 + compatible: 169 + enum: 170 + - ti,dra7-dsp 171 + then: 172 + properties: 173 + reg: 174 + minItems: 3 175 + maxItems: 3 176 + required: 177 + - reg 178 + - reg-names 179 + - ti,bootreg 180 + 181 + else: 182 + if: 183 + properties: 184 + compatible: 185 + enum: 186 + - ti,omap4-ipu 187 + - ti,omap5-ipu 188 + - ti,dra7-ipu 189 + then: 190 + properties: 191 + reg: 192 + minItems: 1 193 + maxItems: 1 194 + ti,bootreg: false 195 + required: 196 + - reg 197 + - reg-names 198 + 199 + else: 200 + properties: 201 + reg: false 202 + required: 203 + - ti,bootreg 204 + 205 + required: 206 + - compatible 207 + - iommus 208 + - mboxes 209 + - clocks 210 + - resets 211 + - firmware-name 212 + 213 + additionalProperties: false 214 + 215 + examples: 216 + - | 217 + 218 + //Example 1: OMAP4 DSP 219 + 220 + /* DSP Reserved Memory node */ 221 + #include <dt-bindings/clock/omap4.h> 222 + reserved-memory { 223 + #address-cells = <1>; 224 + #size-cells = <1>; 225 + 226 + dsp_memory_region: dsp-memory@98000000 { 227 + compatible = "shared-dma-pool"; 228 + reg = <0x98000000 0x800000>; 229 + reusable; 230 + }; 231 + }; 232 + 233 + /* DSP node */ 234 + ocp { 235 + dsp: dsp { 236 + compatible = "ti,omap4-dsp"; 237 + ti,bootreg = <&scm_conf 0x304 0>; 238 + iommus = <&mmu_dsp>; 239 + mboxes = <&mailbox &mbox_dsp>; 240 + memory-region = <&dsp_memory_region>; 241 + ti,timers = <&timer5>; 242 + ti,watchdog-timers = <&timer6>; 243 + clocks = <&tesla_clkctrl OMAP4_DSP_CLKCTRL 0>; 244 + resets = <&prm_tesla 0>, <&prm_tesla 1>; 245 + firmware-name = "omap4-dsp-fw.xe64T"; 246 + }; 247 + }; 248 + 249 + - |+ 250 + 251 + //Example 2: OMAP5 IPU 252 + 253 + /* IPU Reserved Memory node */ 254 + #include <dt-bindings/clock/omap5.h> 255 + reserved-memory { 256 + #address-cells = <2>; 257 + #size-cells = <2>; 258 + 259 + ipu_memory_region: ipu-memory@95800000 { 260 + compatible = "shared-dma-pool"; 261 + reg = <0 0x95800000 0 0x3800000>; 262 + reusable; 263 + }; 264 + }; 
265 + 266 + /* IPU node */ 267 + ocp { 268 + #address-cells = <1>; 269 + #size-cells = <1>; 270 + 271 + ipu: ipu@55020000 { 272 + compatible = "ti,omap5-ipu"; 273 + reg = <0x55020000 0x10000>; 274 + reg-names = "l2ram"; 275 + iommus = <&mmu_ipu>; 276 + mboxes = <&mailbox &mbox_ipu>; 277 + memory-region = <&ipu_memory_region>; 278 + ti,timers = <&timer3>, <&timer4>; 279 + ti,watchdog-timers = <&timer9>, <&timer11>; 280 + clocks = <&ipu_clkctrl OMAP5_MMU_IPU_CLKCTRL 0>; 281 + resets = <&prm_core 2>; 282 + firmware-name = "omap5-ipu-fw.xem4"; 283 + }; 284 + }; 285 + 286 + - |+ 287 + 288 + //Example 3: DRA7xx/AM57xx DSP 289 + 290 + /* DSP1 Reserved Memory node */ 291 + #include <dt-bindings/clock/dra7.h> 292 + reserved-memory { 293 + #address-cells = <2>; 294 + #size-cells = <2>; 295 + 296 + dsp1_memory_region: dsp1-memory@99000000 { 297 + compatible = "shared-dma-pool"; 298 + reg = <0x0 0x99000000 0x0 0x4000000>; 299 + reusable; 300 + }; 301 + }; 302 + 303 + /* DSP1 node */ 304 + ocp { 305 + #address-cells = <1>; 306 + #size-cells = <1>; 307 + 308 + dsp1: dsp@40800000 { 309 + compatible = "ti,dra7-dsp"; 310 + reg = <0x40800000 0x48000>, 311 + <0x40e00000 0x8000>, 312 + <0x40f00000 0x8000>; 313 + reg-names = "l2ram", "l1pram", "l1dram"; 314 + ti,bootreg = <&scm_conf 0x55c 0>; 315 + iommus = <&mmu0_dsp1>, <&mmu1_dsp1>; 316 + mboxes = <&mailbox5 &mbox_dsp1_ipc3x>; 317 + memory-region = <&dsp1_memory_region>; 318 + ti,timers = <&timer5>; 319 + ti,watchdog-timers = <&timer10>; 320 + resets = <&prm_dsp1 0>; 321 + clocks = <&dsp1_clkctrl DRA7_DSP1_MMU0_DSP1_CLKCTRL 0>; 322 + firmware-name = "dra7-dsp1-fw.xe66"; 323 + }; 324 + };
+1 -1
Documentation/remoteproc.txt
··· 230 230 Binary Firmware Structure 231 231 ========================= 232 232 233 - At this point remoteproc only supports ELF32 firmware binaries. However, 233 + At this point remoteproc supports ELF32 and ELF64 firmware binaries. However, 234 234 it is quite expected that other platforms/devices which we'd want to 235 235 support with this framework will be based on different binary formats. 236 236
+13 -1
drivers/remoteproc/Kconfig
··· 35 35 36 36 config OMAP_REMOTEPROC 37 37 tristate "OMAP remoteproc support" 38 - depends on ARCH_OMAP4 || SOC_OMAP5 38 + depends on ARCH_OMAP4 || SOC_OMAP5 || SOC_DRA7XX 39 39 depends on OMAP_IOMMU 40 40 select MAILBOX 41 41 select OMAP2PLUS_MBOX ··· 51 51 52 52 It's safe to say N here if you're not interested in multimedia 53 53 offloading or just want a bare minimum kernel. 54 + 55 + config OMAP_REMOTEPROC_WATCHDOG 56 + bool "OMAP remoteproc watchdog timer" 57 + depends on OMAP_REMOTEPROC 58 + default n 59 + help 60 + Say Y here to enable watchdog timer for remote processors. 61 + 62 + This option controls the watchdog functionality for the remote 63 + processors in OMAP. Dedicated OMAP DMTimers are used by the remote 64 + processors and triggers the timer interrupt upon a watchdog 65 + detection. 54 66 55 67 config WKUP_M3_RPROC 56 68 tristate "AMx3xx Wakeup M3 remoteproc support"
+6 -5
drivers/remoteproc/imx_rproc.c
··· 186 186 } 187 187 188 188 static int imx_rproc_da_to_sys(struct imx_rproc *priv, u64 da, 189 - int len, u64 *sys) 189 + size_t len, u64 *sys) 190 190 { 191 191 const struct imx_rproc_dcfg *dcfg = priv->dcfg; 192 192 int i; ··· 203 203 } 204 204 } 205 205 206 - dev_warn(priv->dev, "Translation failed: da = 0x%llx len = 0x%x\n", 206 + dev_warn(priv->dev, "Translation failed: da = 0x%llx len = 0x%zx\n", 207 207 da, len); 208 208 return -ENOENT; 209 209 } 210 210 211 - static void *imx_rproc_da_to_va(struct rproc *rproc, u64 da, int len) 211 + static void *imx_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 212 212 { 213 213 struct imx_rproc *priv = rproc->priv; 214 214 void *va = NULL; 215 215 u64 sys; 216 216 int i; 217 217 218 - if (len <= 0) 218 + if (len == 0) 219 219 return NULL; 220 220 221 221 /* ··· 235 235 } 236 236 } 237 237 238 - dev_dbg(&rproc->dev, "da = 0x%llx len = 0x%x va = 0x%p\n", da, len, va); 238 + dev_dbg(&rproc->dev, "da = 0x%llx len = 0x%zx va = 0x%p\n", 239 + da, len, va); 239 240 240 241 return va; 241 242 }
+2 -2
drivers/remoteproc/keystone_remoteproc.c
··· 246 246 * can be used either by the remoteproc core for loading (when using kernel 247 247 * remoteproc loader), or by any rpmsg bus drivers. 248 248 */ 249 - static void *keystone_rproc_da_to_va(struct rproc *rproc, u64 da, int len) 249 + static void *keystone_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 250 250 { 251 251 struct keystone_rproc *ksproc = rproc->priv; 252 252 void __iomem *va = NULL; ··· 255 255 size_t size; 256 256 int i; 257 257 258 - if (len <= 0) 258 + if (len == 0) 259 259 return NULL; 260 260 261 261 for (i = 0; i < ksproc->num_mems; i++) {
+1 -1
drivers/remoteproc/mtk_scp.c
··· 320 320 return ret; 321 321 } 322 322 323 - static void *scp_da_to_va(struct rproc *rproc, u64 da, int len) 323 + static void *scp_da_to_va(struct rproc *rproc, u64 da, size_t len) 324 324 { 325 325 struct mtk_scp *scp = (struct mtk_scp *)rproc->priv; 326 326 int offset;
+1182 -18
drivers/remoteproc/omap_remoteproc.c
··· 2 2 /* 3 3 * OMAP Remote Processor driver 4 4 * 5 - * Copyright (C) 2011 Texas Instruments, Inc. 5 + * Copyright (C) 2011-2020 Texas Instruments Incorporated - http://www.ti.com/ 6 6 * Copyright (C) 2011 Google, Inc. 7 7 * 8 8 * Ohad Ben-Cohen <ohad@wizery.com> ··· 15 15 16 16 #include <linux/kernel.h> 17 17 #include <linux/module.h> 18 + #include <linux/clk.h> 19 + #include <linux/clk/ti.h> 18 20 #include <linux/err.h> 21 + #include <linux/io.h> 22 + #include <linux/of_device.h> 23 + #include <linux/of_reserved_mem.h> 19 24 #include <linux/platform_device.h> 25 + #include <linux/pm_runtime.h> 20 26 #include <linux/dma-mapping.h> 27 + #include <linux/interrupt.h> 21 28 #include <linux/remoteproc.h> 22 29 #include <linux/mailbox_client.h> 30 + #include <linux/omap-iommu.h> 23 31 #include <linux/omap-mailbox.h> 32 + #include <linux/regmap.h> 33 + #include <linux/mfd/syscon.h> 34 + #include <linux/reset.h> 35 + #include <clocksource/timer-ti-dm.h> 24 36 25 - #include <linux/platform_data/remoteproc-omap.h> 37 + #include <linux/platform_data/dmtimer-omap.h> 26 38 27 39 #include "omap_remoteproc.h" 28 40 #include "remoteproc_internal.h" 41 + 42 + /* default auto-suspend delay (ms) */ 43 + #define DEFAULT_AUTOSUSPEND_DELAY 10000 44 + 45 + /** 46 + * struct omap_rproc_boot_data - boot data structure for the DSP omap rprocs 47 + * @syscon: regmap handle for the system control configuration module 48 + * @boot_reg: boot register offset within the @syscon regmap 49 + * @boot_reg_shift: bit-field shift required for the boot address value in 50 + * @boot_reg 51 + */ 52 + struct omap_rproc_boot_data { 53 + struct regmap *syscon; 54 + unsigned int boot_reg; 55 + unsigned int boot_reg_shift; 56 + }; 57 + 58 + /** 59 + * struct omap_rproc_mem - internal memory structure 60 + * @cpu_addr: MPU virtual address of the memory region 61 + * @bus_addr: bus address used to access the memory region 62 + * @dev_addr: device address of the memory region from DSP view 63 + * @size: size 
of the memory region 64 + */ 65 + struct omap_rproc_mem { 66 + void __iomem *cpu_addr; 67 + phys_addr_t bus_addr; 68 + u32 dev_addr; 69 + size_t size; 70 + }; 71 + 72 + /** 73 + * struct omap_rproc_timer - data structure for a timer used by a omap rproc 74 + * @odt: timer pointer 75 + * @timer_ops: OMAP dmtimer ops for @odt timer 76 + * @irq: timer irq 77 + */ 78 + struct omap_rproc_timer { 79 + struct omap_dm_timer *odt; 80 + const struct omap_dm_timer_ops *timer_ops; 81 + int irq; 82 + }; 29 83 30 84 /** 31 85 * struct omap_rproc - omap remote processor state 32 86 * @mbox: mailbox channel handle 33 87 * @client: mailbox client to request the mailbox channel 88 + * @boot_data: boot data structure for setting processor boot address 89 + * @mem: internal memory regions data 90 + * @num_mems: number of internal memory regions 91 + * @num_timers: number of rproc timer(s) 92 + * @num_wd_timers: number of rproc watchdog timers 93 + * @timers: timer(s) info used by rproc 94 + * @autosuspend_delay: auto-suspend delay value to be used for runtime pm 95 + * @need_resume: if true a resume is needed in the system resume callback 34 96 * @rproc: rproc handle 97 + * @reset: reset handle 98 + * @pm_comp: completion primitive to sync for suspend response 99 + * @fck: functional clock for the remoteproc 100 + * @suspend_acked: state machine flag to store the suspend request ack 35 101 */ 36 102 struct omap_rproc { 37 103 struct mbox_chan *mbox; 38 104 struct mbox_client client; 105 + struct omap_rproc_boot_data *boot_data; 106 + struct omap_rproc_mem *mem; 107 + int num_mems; 108 + int num_timers; 109 + int num_wd_timers; 110 + struct omap_rproc_timer *timers; 111 + int autosuspend_delay; 112 + bool need_resume; 39 113 struct rproc *rproc; 114 + struct reset_control *reset; 115 + struct completion pm_comp; 116 + struct clk *fck; 117 + bool suspend_acked; 40 118 }; 119 + 120 + /** 121 + * struct omap_rproc_mem_data - memory definitions for an omap remote processor 122 + * @name: 
name for this memory entry 123 + * @dev_addr: device address for the memory entry 124 + */ 125 + struct omap_rproc_mem_data { 126 + const char *name; 127 + const u32 dev_addr; 128 + }; 129 + 130 + /** 131 + * struct omap_rproc_dev_data - device data for the omap remote processor 132 + * @device_name: device name of the remote processor 133 + * @mems: memory definitions for this remote processor 134 + */ 135 + struct omap_rproc_dev_data { 136 + const char *device_name; 137 + const struct omap_rproc_mem_data *mems; 138 + }; 139 + 140 + /** 141 + * omap_rproc_request_timer() - request a timer for a remoteproc 142 + * @dev: device requesting the timer 143 + * @np: device node pointer to the desired timer 144 + * @timer: handle to a struct omap_rproc_timer to return the timer handle 145 + * 146 + * This helper function is used primarily to request a timer associated with 147 + * a remoteproc. The returned handle is stored in the .odt field of the 148 + * @timer structure passed in, and is used to invoke other timer specific 149 + * ops (like starting a timer either during device initialization or during 150 + * a resume operation, or for stopping/freeing a timer). 
151 + * 152 + * Return: 0 on success, otherwise an appropriate failure 153 + */ 154 + static int omap_rproc_request_timer(struct device *dev, struct device_node *np, 155 + struct omap_rproc_timer *timer) 156 + { 157 + int ret; 158 + 159 + timer->odt = timer->timer_ops->request_by_node(np); 160 + if (!timer->odt) { 161 + dev_err(dev, "request for timer node %p failed\n", np); 162 + return -EBUSY; 163 + } 164 + 165 + ret = timer->timer_ops->set_source(timer->odt, OMAP_TIMER_SRC_SYS_CLK); 166 + if (ret) { 167 + dev_err(dev, "error setting OMAP_TIMER_SRC_SYS_CLK as source for timer node %p\n", 168 + np); 169 + timer->timer_ops->free(timer->odt); 170 + return ret; 171 + } 172 + 173 + /* clean counter, remoteproc code will set the value */ 174 + timer->timer_ops->set_load(timer->odt, 0, 0); 175 + 176 + return 0; 177 + } 178 + 179 + /** 180 + * omap_rproc_start_timer() - start a timer for a remoteproc 181 + * @timer: handle to a OMAP rproc timer 182 + * 183 + * This helper function is used to start a timer associated with a remoteproc, 184 + * obtained using the request_timer ops. The helper function needs to be 185 + * invoked by the driver to start the timer (during device initialization) 186 + * or to just resume the timer. 187 + * 188 + * Return: 0 on success, otherwise a failure as returned by DMTimer ops 189 + */ 190 + static inline int omap_rproc_start_timer(struct omap_rproc_timer *timer) 191 + { 192 + return timer->timer_ops->start(timer->odt); 193 + } 194 + 195 + /** 196 + * omap_rproc_stop_timer() - stop a timer for a remoteproc 197 + * @timer: handle to a OMAP rproc timer 198 + * 199 + * This helper function is used to disable a timer associated with a 200 + * remoteproc, and needs to be called either during a device shutdown 201 + * or suspend operation. The separate helper function allows the driver 202 + * to just stop a timer without having to release the timer during a 203 + * suspend operation. 
204 + * 205 + * Return: 0 on success, otherwise a failure as returned by DMTimer ops 206 + */ 207 + static inline int omap_rproc_stop_timer(struct omap_rproc_timer *timer) 208 + { 209 + return timer->timer_ops->stop(timer->odt); 210 + } 211 + 212 + /** 213 + * omap_rproc_release_timer() - release a timer for a remoteproc 214 + * @timer: handle to a OMAP rproc timer 215 + * 216 + * This helper function is used primarily to release a timer associated 217 + * with a remoteproc. The dmtimer will be available for other clients to 218 + * use once released. 219 + * 220 + * Return: 0 on success, otherwise a failure as returned by DMTimer ops 221 + */ 222 + static inline int omap_rproc_release_timer(struct omap_rproc_timer *timer) 223 + { 224 + return timer->timer_ops->free(timer->odt); 225 + } 226 + 227 + /** 228 + * omap_rproc_get_timer_irq() - get the irq for a timer 229 + * @timer: handle to a OMAP rproc timer 230 + * 231 + * This function is used to get the irq associated with a watchdog timer. The 232 + * function is called by the OMAP remoteproc driver to register a interrupt 233 + * handler to handle watchdog events on the remote processor. 234 + * 235 + * Return: irq id on success, otherwise a failure as returned by DMTimer ops 236 + */ 237 + static inline int omap_rproc_get_timer_irq(struct omap_rproc_timer *timer) 238 + { 239 + return timer->timer_ops->get_irq(timer->odt); 240 + } 241 + 242 + /** 243 + * omap_rproc_ack_timer_irq() - acknowledge a timer irq 244 + * @timer: handle to a OMAP rproc timer 245 + * 246 + * This function is used to clear the irq associated with a watchdog timer. The 247 + * The function is called by the OMAP remoteproc upon a watchdog event on the 248 + * remote processor to clear the interrupt status of the watchdog timer. 
249 + */ 250 + static inline void omap_rproc_ack_timer_irq(struct omap_rproc_timer *timer) 251 + { 252 + timer->timer_ops->write_status(timer->odt, OMAP_TIMER_INT_OVERFLOW); 253 + } 254 + 255 + /** 256 + * omap_rproc_watchdog_isr() - Watchdog ISR handler for remoteproc device 257 + * @irq: IRQ number associated with a watchdog timer 258 + * @data: IRQ handler data 259 + * 260 + * This ISR routine executes the required necessary low-level code to 261 + * acknowledge a watchdog timer interrupt. There can be multiple watchdog 262 + * timers associated with a rproc (like IPUs which have 2 watchdog timers, 263 + * one per Cortex M3/M4 core), so a lookup has to be performed to identify 264 + * the timer to acknowledge its interrupt. 265 + * 266 + * The function also invokes rproc_report_crash to report the watchdog event 267 + * to the remoteproc driver core, to trigger a recovery. 268 + * 269 + * Return: IRQ_HANDLED on success, otherwise IRQ_NONE 270 + */ 271 + static irqreturn_t omap_rproc_watchdog_isr(int irq, void *data) 272 + { 273 + struct rproc *rproc = data; 274 + struct omap_rproc *oproc = rproc->priv; 275 + struct device *dev = rproc->dev.parent; 276 + struct omap_rproc_timer *timers = oproc->timers; 277 + struct omap_rproc_timer *wd_timer = NULL; 278 + int num_timers = oproc->num_timers + oproc->num_wd_timers; 279 + int i; 280 + 281 + for (i = oproc->num_timers; i < num_timers; i++) { 282 + if (timers[i].irq > 0 && irq == timers[i].irq) { 283 + wd_timer = &timers[i]; 284 + break; 285 + } 286 + } 287 + 288 + if (!wd_timer) { 289 + dev_err(dev, "invalid timer\n"); 290 + return IRQ_NONE; 291 + } 292 + 293 + omap_rproc_ack_timer_irq(wd_timer); 294 + 295 + rproc_report_crash(rproc, RPROC_WATCHDOG); 296 + 297 + return IRQ_HANDLED; 298 + } 299 + 300 + /** 301 + * omap_rproc_enable_timers() - enable the timers for a remoteproc 302 + * @rproc: handle of a remote processor 303 + * @configure: boolean flag used to acquire and configure the timer handle 304 + * 305 + * 
This function is used primarily to enable the timers associated with 306 + * a remoteproc. The configure flag is provided to allow the driver to 307 + * to either acquire and start a timer (during device initialization) or 308 + * to just start a timer (during a resume operation). 309 + * 310 + * Return: 0 on success, otherwise an appropriate failure 311 + */ 312 + static int omap_rproc_enable_timers(struct rproc *rproc, bool configure) 313 + { 314 + int i; 315 + int ret = 0; 316 + struct platform_device *tpdev; 317 + struct dmtimer_platform_data *tpdata; 318 + const struct omap_dm_timer_ops *timer_ops; 319 + struct omap_rproc *oproc = rproc->priv; 320 + struct omap_rproc_timer *timers = oproc->timers; 321 + struct device *dev = rproc->dev.parent; 322 + struct device_node *np = NULL; 323 + int num_timers = oproc->num_timers + oproc->num_wd_timers; 324 + 325 + if (!num_timers) 326 + return 0; 327 + 328 + if (!configure) 329 + goto start_timers; 330 + 331 + for (i = 0; i < num_timers; i++) { 332 + if (i < oproc->num_timers) 333 + np = of_parse_phandle(dev->of_node, "ti,timers", i); 334 + else 335 + np = of_parse_phandle(dev->of_node, 336 + "ti,watchdog-timers", 337 + (i - oproc->num_timers)); 338 + if (!np) { 339 + ret = -ENXIO; 340 + dev_err(dev, "device node lookup for timer at index %d failed: %d\n", 341 + i < oproc->num_timers ? 
i : 342 + i - oproc->num_timers, ret); 343 + goto free_timers; 344 + } 345 + 346 + tpdev = of_find_device_by_node(np); 347 + if (!tpdev) { 348 + ret = -ENODEV; 349 + dev_err(dev, "could not get timer platform device\n"); 350 + goto put_node; 351 + } 352 + 353 + tpdata = dev_get_platdata(&tpdev->dev); 354 + put_device(&tpdev->dev); 355 + if (!tpdata) { 356 + ret = -EINVAL; 357 + dev_err(dev, "dmtimer pdata structure NULL\n"); 358 + goto put_node; 359 + } 360 + 361 + timer_ops = tpdata->timer_ops; 362 + if (!timer_ops || !timer_ops->request_by_node || 363 + !timer_ops->set_source || !timer_ops->set_load || 364 + !timer_ops->free || !timer_ops->start || 365 + !timer_ops->stop || !timer_ops->get_irq || 366 + !timer_ops->write_status) { 367 + ret = -EINVAL; 368 + dev_err(dev, "device does not have required timer ops\n"); 369 + goto put_node; 370 + } 371 + 372 + timers[i].irq = -1; 373 + timers[i].timer_ops = timer_ops; 374 + ret = omap_rproc_request_timer(dev, np, &timers[i]); 375 + if (ret) { 376 + dev_err(dev, "request for timer %p failed: %d\n", np, 377 + ret); 378 + goto put_node; 379 + } 380 + of_node_put(np); 381 + 382 + if (i >= oproc->num_timers) { 383 + timers[i].irq = omap_rproc_get_timer_irq(&timers[i]); 384 + if (timers[i].irq < 0) { 385 + dev_err(dev, "get_irq for timer %p failed: %d\n", 386 + np, timers[i].irq); 387 + ret = -EBUSY; 388 + goto free_timers; 389 + } 390 + 391 + ret = request_irq(timers[i].irq, 392 + omap_rproc_watchdog_isr, IRQF_SHARED, 393 + "rproc-wdt", rproc); 394 + if (ret) { 395 + dev_err(dev, "error requesting irq for timer %p\n", 396 + np); 397 + omap_rproc_release_timer(&timers[i]); 398 + timers[i].odt = NULL; 399 + timers[i].timer_ops = NULL; 400 + timers[i].irq = -1; 401 + goto free_timers; 402 + } 403 + } 404 + } 405 + 406 + start_timers: 407 + for (i = 0; i < num_timers; i++) { 408 + ret = omap_rproc_start_timer(&timers[i]); 409 + if (ret) { 410 + dev_err(dev, "start timer %p failed failed: %d\n", np, 411 + ret); 412 + break; 413 
+ } 414 + } 415 + if (ret) { 416 + while (i >= 0) { 417 + omap_rproc_stop_timer(&timers[i]); 418 + i--; 419 + } 420 + goto put_node; 421 + } 422 + return 0; 423 + 424 + put_node: 425 + if (configure) 426 + of_node_put(np); 427 + free_timers: 428 + while (i--) { 429 + if (i >= oproc->num_timers) 430 + free_irq(timers[i].irq, rproc); 431 + omap_rproc_release_timer(&timers[i]); 432 + timers[i].odt = NULL; 433 + timers[i].timer_ops = NULL; 434 + timers[i].irq = -1; 435 + } 436 + 437 + return ret; 438 + } 439 + 440 + /** 441 + * omap_rproc_disable_timers() - disable the timers for a remoteproc 442 + * @rproc: handle of a remote processor 443 + * @configure: boolean flag used to release the timer handle 444 + * 445 + * This function is used primarily to disable the timers associated with 446 + * a remoteproc. The configure flag is provided to allow the driver to 447 + * to either stop and release a timer (during device shutdown) or to just 448 + * stop a timer (during a suspend operation). 449 + * 450 + * Return: 0 on success or no timers 451 + */ 452 + static int omap_rproc_disable_timers(struct rproc *rproc, bool configure) 453 + { 454 + int i; 455 + struct omap_rproc *oproc = rproc->priv; 456 + struct omap_rproc_timer *timers = oproc->timers; 457 + int num_timers = oproc->num_timers + oproc->num_wd_timers; 458 + 459 + if (!num_timers) 460 + return 0; 461 + 462 + for (i = 0; i < num_timers; i++) { 463 + omap_rproc_stop_timer(&timers[i]); 464 + if (configure) { 465 + if (i >= oproc->num_timers) 466 + free_irq(timers[i].irq, rproc); 467 + omap_rproc_release_timer(&timers[i]); 468 + timers[i].odt = NULL; 469 + timers[i].timer_ops = NULL; 470 + timers[i].irq = -1; 471 + } 472 + } 473 + 474 + return 0; 475 + } 41 476 42 477 /** 43 478 * omap_rproc_mbox_callback() - inbound mailbox message handler ··· 500 65 501 66 switch (msg) { 502 67 case RP_MBOX_CRASH: 503 - /* just log this for now. 
later, we'll also do recovery */ 68 + /* 69 + * remoteproc detected an exception, notify the rproc core. 70 + * The remoteproc core will handle the recovery. 71 + */ 504 72 dev_err(dev, "omap rproc %s crashed\n", name); 73 + rproc_report_crash(oproc->rproc, RPROC_FATAL_ERROR); 505 74 break; 506 75 case RP_MBOX_ECHO_REPLY: 507 76 dev_info(dev, "received echo reply from %s\n", name); 508 77 break; 78 + case RP_MBOX_SUSPEND_ACK: 79 + /* Fall through */ 80 + case RP_MBOX_SUSPEND_CANCEL: 81 + oproc->suspend_acked = msg == RP_MBOX_SUSPEND_ACK; 82 + complete(&oproc->pm_comp); 83 + break; 509 84 default: 85 + if (msg >= RP_MBOX_READY && msg < RP_MBOX_END_MSG) 86 + return; 87 + if (msg > oproc->rproc->max_notifyid) { 88 + dev_dbg(dev, "dropping unknown message 0x%x", msg); 89 + return; 90 + } 510 91 /* msg contains the index of the triggered vring */ 511 92 if (rproc_vq_interrupt(oproc->rproc, msg) == IRQ_NONE) 512 93 dev_dbg(dev, "no message was found in vqid %d\n", msg); ··· 536 85 struct device *dev = rproc->dev.parent; 537 86 int ret; 538 87 88 + /* wake up the rproc before kicking it */ 89 + ret = pm_runtime_get_sync(dev); 90 + if (WARN_ON(ret < 0)) { 91 + dev_err(dev, "pm_runtime_get_sync() failed during kick, ret = %d\n", 92 + ret); 93 + pm_runtime_put_noidle(dev); 94 + return; 95 + } 96 + 539 97 /* send the index of the triggered virtqueue in the mailbox payload */ 540 98 ret = mbox_send_message(oproc->mbox, (void *)vqid); 541 99 if (ret < 0) 542 100 dev_err(dev, "failed to send mailbox message, status = %d\n", 543 101 ret); 102 + 103 + pm_runtime_mark_last_busy(dev); 104 + pm_runtime_put_autosuspend(dev); 105 + } 106 + 107 + /** 108 + * omap_rproc_write_dsp_boot_addr() - set boot address for DSP remote processor 109 + * @rproc: handle of a remote processor 110 + * 111 + * Set boot address for a supported DSP remote processor. 
112 + * 113 + * Return: 0 on success, or -EINVAL if boot address is not aligned properly 114 + */ 115 + static int omap_rproc_write_dsp_boot_addr(struct rproc *rproc) 116 + { 117 + struct device *dev = rproc->dev.parent; 118 + struct omap_rproc *oproc = rproc->priv; 119 + struct omap_rproc_boot_data *bdata = oproc->boot_data; 120 + u32 offset = bdata->boot_reg; 121 + u32 value; 122 + u32 mask; 123 + 124 + if (rproc->bootaddr & (SZ_1K - 1)) { 125 + dev_err(dev, "invalid boot address 0x%llx, must be aligned on a 1KB boundary\n", 126 + rproc->bootaddr); 127 + return -EINVAL; 128 + } 129 + 130 + value = rproc->bootaddr >> bdata->boot_reg_shift; 131 + mask = ~(SZ_1K - 1) >> bdata->boot_reg_shift; 132 + 133 + return regmap_update_bits(bdata->syscon, offset, mask, value); 544 134 } 545 135 546 136 /* ··· 595 103 { 596 104 struct omap_rproc *oproc = rproc->priv; 597 105 struct device *dev = rproc->dev.parent; 598 - struct platform_device *pdev = to_platform_device(dev); 599 - struct omap_rproc_pdata *pdata = pdev->dev.platform_data; 600 106 int ret; 601 107 struct mbox_client *client = &oproc->client; 602 108 603 - if (pdata->set_bootaddr) 604 - pdata->set_bootaddr(rproc->bootaddr); 109 + if (oproc->boot_data) { 110 + ret = omap_rproc_write_dsp_boot_addr(rproc); 111 + if (ret) 112 + return ret; 113 + } 605 114 606 115 client->dev = dev; 607 116 client->tx_done = NULL; ··· 610 117 client->tx_block = false; 611 118 client->knows_txdone = false; 612 119 613 - oproc->mbox = omap_mbox_request_channel(client, pdata->mbox_name); 120 + oproc->mbox = mbox_request_channel(client, 0); 614 121 if (IS_ERR(oproc->mbox)) { 615 122 ret = -EBUSY; 616 123 dev_err(dev, "mbox_request_channel failed: %ld\n", ··· 631 138 goto put_mbox; 632 139 } 633 140 634 - ret = pdata->device_enable(pdev); 141 + ret = omap_rproc_enable_timers(rproc, true); 635 142 if (ret) { 636 - dev_err(dev, "omap_device_enable failed: %d\n", ret); 143 + dev_err(dev, "omap_rproc_enable_timers failed: %d\n", ret); 637 144 
goto put_mbox; 638 145 } 639 146 147 + ret = reset_control_deassert(oproc->reset); 148 + if (ret) { 149 + dev_err(dev, "reset control deassert failed: %d\n", ret); 150 + goto disable_timers; 151 + } 152 + 153 + /* 154 + * remote processor is up, so update the runtime pm status and 155 + * enable the auto-suspend. The device usage count is incremented 156 + * manually for balancing it for auto-suspend 157 + */ 158 + pm_runtime_set_active(dev); 159 + pm_runtime_use_autosuspend(dev); 160 + pm_runtime_get_noresume(dev); 161 + pm_runtime_enable(dev); 162 + pm_runtime_mark_last_busy(dev); 163 + pm_runtime_put_autosuspend(dev); 164 + 640 165 return 0; 641 166 167 + disable_timers: 168 + omap_rproc_disable_timers(rproc, true); 642 169 put_mbox: 643 170 mbox_free_channel(oproc->mbox); 644 171 return ret; ··· 668 155 static int omap_rproc_stop(struct rproc *rproc) 669 156 { 670 157 struct device *dev = rproc->dev.parent; 671 - struct platform_device *pdev = to_platform_device(dev); 672 - struct omap_rproc_pdata *pdata = pdev->dev.platform_data; 673 158 struct omap_rproc *oproc = rproc->priv; 674 159 int ret; 675 160 676 - ret = pdata->device_shutdown(pdev); 677 - if (ret) 161 + /* 162 + * cancel any possible scheduled runtime suspend by incrementing 163 + * the device usage count, and resuming the device. 
The remoteproc 164 + * also needs to be woken up if suspended, to keep the remoteproc 165 + * OS from continuing to remember any context that it has saved, and 166 + * to avoid potential issues in misidentifying a subsequent device 167 + * reboot as a power restore boot 168 + */ 169 + ret = pm_runtime_get_sync(dev); 170 + if (ret < 0) { 171 + pm_runtime_put_noidle(dev); 678 172 return ret; 173 + } 174 + 175 + ret = reset_control_assert(oproc->reset); 176 + if (ret) 177 + goto out; 178 + 179 + ret = omap_rproc_disable_timers(rproc, true); 180 + if (ret) 181 + goto enable_device; 679 182 680 183 mbox_free_channel(oproc->mbox); 681 184 185 + /* 186 + * update the runtime pm states and status now that the remoteproc 187 + * has stopped 188 + */ 189 + pm_runtime_disable(dev); 190 + pm_runtime_dont_use_autosuspend(dev); 191 + pm_runtime_put_noidle(dev); 192 + pm_runtime_set_suspended(dev); 193 + 682 194 return 0; 195 + 196 + enable_device: 197 + reset_control_deassert(oproc->reset); 198 + out: 199 + /* schedule the next auto-suspend */ 200 + pm_runtime_mark_last_busy(dev); 201 + pm_runtime_put_autosuspend(dev); 202 + return ret; 203 + } 204 + 205 + /** 206 + * omap_rproc_da_to_va() - internal memory translation helper 207 + * @rproc: remote processor to apply the address translation for 208 + * @da: device address to translate 209 + * @len: length of the memory buffer 210 + * 211 + * Custom function implementing the rproc .da_to_va ops to provide address 212 + * translation (device address to kernel virtual address) for internal RAMs 213 + * present in a DSP or IPU device. The translated addresses can be used 214 + * either by the remoteproc core for loading, or by any rpmsg bus drivers. 215 + * 216 + * Return: translated virtual address in kernel memory space on success, 217 + * or NULL on failure.
218 + */ 219 + static void *omap_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 220 + { 221 + struct omap_rproc *oproc = rproc->priv; 222 + int i; 223 + u32 offset; 224 + 225 + if (len <= 0) 226 + return NULL; 227 + 228 + if (!oproc->num_mems) 229 + return NULL; 230 + 231 + for (i = 0; i < oproc->num_mems; i++) { 232 + if (da >= oproc->mem[i].dev_addr && da + len <= 233 + oproc->mem[i].dev_addr + oproc->mem[i].size) { 234 + offset = da - oproc->mem[i].dev_addr; 235 + /* __force to make sparse happy with type conversion */ 236 + return (__force void *)(oproc->mem[i].cpu_addr + 237 + offset); 238 + } 239 + } 240 + 241 + return NULL; 683 242 } 684 243 685 244 static const struct rproc_ops omap_rproc_ops = { 686 245 .start = omap_rproc_start, 687 246 .stop = omap_rproc_stop, 688 247 .kick = omap_rproc_kick, 248 + .da_to_va = omap_rproc_da_to_va, 689 249 }; 250 + 251 + #ifdef CONFIG_PM 252 + static bool _is_rproc_in_standby(struct omap_rproc *oproc) 253 + { 254 + return ti_clk_is_in_standby(oproc->fck); 255 + } 256 + 257 + /* 1 sec is long enough time to let the remoteproc side suspend the device */ 258 + #define DEF_SUSPEND_TIMEOUT 1000 259 + static int _omap_rproc_suspend(struct rproc *rproc, bool auto_suspend) 260 + { 261 + struct device *dev = rproc->dev.parent; 262 + struct omap_rproc *oproc = rproc->priv; 263 + unsigned long to = msecs_to_jiffies(DEF_SUSPEND_TIMEOUT); 264 + unsigned long ta = jiffies + to; 265 + u32 suspend_msg = auto_suspend ? 
266 + RP_MBOX_SUSPEND_AUTO : RP_MBOX_SUSPEND_SYSTEM; 267 + int ret; 268 + 269 + reinit_completion(&oproc->pm_comp); 270 + oproc->suspend_acked = false; 271 + ret = mbox_send_message(oproc->mbox, (void *)suspend_msg); 272 + if (ret < 0) { 273 + dev_err(dev, "PM mbox_send_message failed: %d\n", ret); 274 + return ret; 275 + } 276 + 277 + ret = wait_for_completion_timeout(&oproc->pm_comp, to); 278 + if (!oproc->suspend_acked) 279 + return -EBUSY; 280 + 281 + /* 282 + * The remoteproc side is returning the ACK message before saving the 283 + * context, because the context saving is performed within a SYS/BIOS 284 + * function, and it cannot have any inter-dependencies against the IPC 285 + * layer. Also, as the SYS/BIOS needs to preserve properly the processor 286 + * register set, sending this ACK or signalling the completion of the 287 + * context save through a shared memory variable can never be the 288 + * absolute last thing to be executed on the remoteproc side, and the 289 + * MPU cannot use the ACK message as a sync point to put the remoteproc 290 + * into reset. The only way to ensure that the remote processor has 291 + * completed saving the context is to check that the module has reached 292 + * STANDBY state (after saving the context, the SYS/BIOS executes the 293 + * appropriate target-specific WFI instruction causing the module to 294 + * enter STANDBY). 295 + */ 296 + while (!_is_rproc_in_standby(oproc)) { 297 + if (time_after(jiffies, ta)) 298 + return -ETIME; 299 + schedule(); 300 + } 301 + 302 + ret = reset_control_assert(oproc->reset); 303 + if (ret) { 304 + dev_err(dev, "reset assert during suspend failed %d\n", ret); 305 + return ret; 306 + } 307 + 308 + ret = omap_rproc_disable_timers(rproc, false); 309 + if (ret) { 310 + dev_err(dev, "disabling timers during suspend failed %d\n", 311 + ret); 312 + goto enable_device; 313 + } 314 + 315 + /* 316 + * IOMMUs would have to be disabled specifically for runtime suspend. 
317 + * They are handled automatically through System PM callbacks for 318 + * regular system suspend 319 + */ 320 + if (auto_suspend) { 321 + ret = omap_iommu_domain_deactivate(rproc->domain); 322 + if (ret) { 323 + dev_err(dev, "iommu domain deactivate failed %d\n", 324 + ret); 325 + goto enable_timers; 326 + } 327 + } 328 + 329 + return 0; 330 + 331 + enable_timers: 332 + /* ignore errors on re-enabling code */ 333 + omap_rproc_enable_timers(rproc, false); 334 + enable_device: 335 + reset_control_deassert(oproc->reset); 336 + return ret; 337 + } 338 + 339 + static int _omap_rproc_resume(struct rproc *rproc, bool auto_suspend) 340 + { 341 + struct device *dev = rproc->dev.parent; 342 + struct omap_rproc *oproc = rproc->priv; 343 + int ret; 344 + 345 + /* 346 + * IOMMUs would have to be enabled specifically for runtime resume. 347 + * They would have been already enabled automatically through System 348 + * PM callbacks for regular system resume 349 + */ 350 + if (auto_suspend) { 351 + ret = omap_iommu_domain_activate(rproc->domain); 352 + if (ret) { 353 + dev_err(dev, "omap_iommu activate failed %d\n", ret); 354 + goto out; 355 + } 356 + } 357 + 358 + /* boot address could be lost after suspend, so restore it */ 359 + if (oproc->boot_data) { 360 + ret = omap_rproc_write_dsp_boot_addr(rproc); 361 + if (ret) { 362 + dev_err(dev, "boot address restore failed %d\n", ret); 363 + goto suspend_iommu; 364 + } 365 + } 366 + 367 + ret = omap_rproc_enable_timers(rproc, false); 368 + if (ret) { 369 + dev_err(dev, "enabling timers during resume failed %d\n", ret); 370 + goto suspend_iommu; 371 + } 372 + 373 + ret = reset_control_deassert(oproc->reset); 374 + if (ret) { 375 + dev_err(dev, "reset deassert during resume failed %d\n", ret); 376 + goto disable_timers; 377 + } 378 + 379 + return 0; 380 + 381 + disable_timers: 382 + omap_rproc_disable_timers(rproc, false); 383 + suspend_iommu: 384 + if (auto_suspend) 385 + omap_iommu_domain_deactivate(rproc->domain); 386 + out: 387 
+ return ret; 388 + } 389 + 390 + static int __maybe_unused omap_rproc_suspend(struct device *dev) 391 + { 392 + struct platform_device *pdev = to_platform_device(dev); 393 + struct rproc *rproc = platform_get_drvdata(pdev); 394 + struct omap_rproc *oproc = rproc->priv; 395 + int ret = 0; 396 + 397 + mutex_lock(&rproc->lock); 398 + if (rproc->state == RPROC_OFFLINE) 399 + goto out; 400 + 401 + if (rproc->state == RPROC_SUSPENDED) 402 + goto out; 403 + 404 + if (rproc->state != RPROC_RUNNING) { 405 + ret = -EBUSY; 406 + goto out; 407 + } 408 + 409 + ret = _omap_rproc_suspend(rproc, false); 410 + if (ret) { 411 + dev_err(dev, "suspend failed %d\n", ret); 412 + goto out; 413 + } 414 + 415 + /* 416 + * remoteproc is running at the time of system suspend, so remember 417 + * it so as to wake it up during system resume 418 + */ 419 + oproc->need_resume = true; 420 + rproc->state = RPROC_SUSPENDED; 421 + 422 + out: 423 + mutex_unlock(&rproc->lock); 424 + return ret; 425 + } 426 + 427 + static int __maybe_unused omap_rproc_resume(struct device *dev) 428 + { 429 + struct platform_device *pdev = to_platform_device(dev); 430 + struct rproc *rproc = platform_get_drvdata(pdev); 431 + struct omap_rproc *oproc = rproc->priv; 432 + int ret = 0; 433 + 434 + mutex_lock(&rproc->lock); 435 + if (rproc->state == RPROC_OFFLINE) 436 + goto out; 437 + 438 + if (rproc->state != RPROC_SUSPENDED) { 439 + ret = -EBUSY; 440 + goto out; 441 + } 442 + 443 + /* 444 + * remoteproc was auto-suspended at the time of system suspend, 445 + * so no need to wake-up the processor (leave it in suspended 446 + * state, will be woken up during a subsequent runtime_resume) 447 + */ 448 + if (!oproc->need_resume) 449 + goto out; 450 + 451 + ret = _omap_rproc_resume(rproc, false); 452 + if (ret) { 453 + dev_err(dev, "resume failed %d\n", ret); 454 + goto out; 455 + } 456 + 457 + oproc->need_resume = false; 458 + rproc->state = RPROC_RUNNING; 459 + 460 + pm_runtime_mark_last_busy(dev); 461 + out: 462 + 
mutex_unlock(&rproc->lock); 463 + return ret; 464 + } 465 + 466 + static int omap_rproc_runtime_suspend(struct device *dev) 467 + { 468 + struct rproc *rproc = dev_get_drvdata(dev); 469 + struct omap_rproc *oproc = rproc->priv; 470 + int ret; 471 + 472 + mutex_lock(&rproc->lock); 473 + if (rproc->state == RPROC_CRASHED) { 474 + dev_dbg(dev, "rproc cannot be runtime suspended when crashed!\n"); 475 + ret = -EBUSY; 476 + goto out; 477 + } 478 + 479 + if (WARN_ON(rproc->state != RPROC_RUNNING)) { 480 + dev_err(dev, "rproc cannot be runtime suspended when not running!\n"); 481 + ret = -EBUSY; 482 + goto out; 483 + } 484 + 485 + /* 486 + * do not even attempt suspend if the remote processor is not 487 + * idled for runtime auto-suspend 488 + */ 489 + if (!_is_rproc_in_standby(oproc)) { 490 + ret = -EBUSY; 491 + goto abort; 492 + } 493 + 494 + ret = _omap_rproc_suspend(rproc, true); 495 + if (ret) 496 + goto abort; 497 + 498 + rproc->state = RPROC_SUSPENDED; 499 + mutex_unlock(&rproc->lock); 500 + return 0; 501 + 502 + abort: 503 + pm_runtime_mark_last_busy(dev); 504 + out: 505 + mutex_unlock(&rproc->lock); 506 + return ret; 507 + } 508 + 509 + static int omap_rproc_runtime_resume(struct device *dev) 510 + { 511 + struct rproc *rproc = dev_get_drvdata(dev); 512 + int ret; 513 + 514 + mutex_lock(&rproc->lock); 515 + if (WARN_ON(rproc->state != RPROC_SUSPENDED)) { 516 + dev_err(dev, "rproc cannot be runtime resumed if not suspended! 
state=%d\n", 517 + rproc->state); 518 + ret = -EBUSY; 519 + goto out; 520 + } 521 + 522 + ret = _omap_rproc_resume(rproc, true); 523 + if (ret) { 524 + dev_err(dev, "runtime resume failed %d\n", ret); 525 + goto out; 526 + } 527 + 528 + rproc->state = RPROC_RUNNING; 529 + out: 530 + mutex_unlock(&rproc->lock); 531 + return ret; 532 + } 533 + #endif /* CONFIG_PM */ 534 + 535 + static const struct omap_rproc_mem_data ipu_mems[] = { 536 + { .name = "l2ram", .dev_addr = 0x20000000 }, 537 + { }, 538 + }; 539 + 540 + static const struct omap_rproc_mem_data dra7_dsp_mems[] = { 541 + { .name = "l2ram", .dev_addr = 0x800000 }, 542 + { .name = "l1pram", .dev_addr = 0xe00000 }, 543 + { .name = "l1dram", .dev_addr = 0xf00000 }, 544 + { }, 545 + }; 546 + 547 + static const struct omap_rproc_dev_data omap4_dsp_dev_data = { 548 + .device_name = "dsp", 549 + }; 550 + 551 + static const struct omap_rproc_dev_data omap4_ipu_dev_data = { 552 + .device_name = "ipu", 553 + .mems = ipu_mems, 554 + }; 555 + 556 + static const struct omap_rproc_dev_data omap5_dsp_dev_data = { 557 + .device_name = "dsp", 558 + }; 559 + 560 + static const struct omap_rproc_dev_data omap5_ipu_dev_data = { 561 + .device_name = "ipu", 562 + .mems = ipu_mems, 563 + }; 564 + 565 + static const struct omap_rproc_dev_data dra7_dsp_dev_data = { 566 + .device_name = "dsp", 567 + .mems = dra7_dsp_mems, 568 + }; 569 + 570 + static const struct omap_rproc_dev_data dra7_ipu_dev_data = { 571 + .device_name = "ipu", 572 + .mems = ipu_mems, 573 + }; 574 + 575 + static const struct of_device_id omap_rproc_of_match[] = { 576 + { 577 + .compatible = "ti,omap4-dsp", 578 + .data = &omap4_dsp_dev_data, 579 + }, 580 + { 581 + .compatible = "ti,omap4-ipu", 582 + .data = &omap4_ipu_dev_data, 583 + }, 584 + { 585 + .compatible = "ti,omap5-dsp", 586 + .data = &omap5_dsp_dev_data, 587 + }, 588 + { 589 + .compatible = "ti,omap5-ipu", 590 + .data = &omap5_ipu_dev_data, 591 + }, 592 + { 593 + .compatible = "ti,dra7-dsp", 594 + .data = 
&dra7_dsp_dev_data, 595 + }, 596 + { 597 + .compatible = "ti,dra7-ipu", 598 + .data = &dra7_ipu_dev_data, 599 + }, 600 + { 601 + /* end */ 602 + }, 603 + }; 604 + MODULE_DEVICE_TABLE(of, omap_rproc_of_match); 605 + 606 + static const char *omap_rproc_get_firmware(struct platform_device *pdev) 607 + { 608 + const char *fw_name; 609 + int ret; 610 + 611 + ret = of_property_read_string(pdev->dev.of_node, "firmware-name", 612 + &fw_name); 613 + if (ret) 614 + return ERR_PTR(ret); 615 + 616 + return fw_name; 617 + } 618 + 619 + static int omap_rproc_get_boot_data(struct platform_device *pdev, 620 + struct rproc *rproc) 621 + { 622 + struct device_node *np = pdev->dev.of_node; 623 + struct omap_rproc *oproc = rproc->priv; 624 + const struct omap_rproc_dev_data *data; 625 + int ret; 626 + 627 + data = of_device_get_match_data(&pdev->dev); 628 + if (!data) 629 + return -ENODEV; 630 + 631 + if (!of_property_read_bool(np, "ti,bootreg")) 632 + return 0; 633 + 634 + oproc->boot_data = devm_kzalloc(&pdev->dev, sizeof(*oproc->boot_data), 635 + GFP_KERNEL); 636 + if (!oproc->boot_data) 637 + return -ENOMEM; 638 + 639 + oproc->boot_data->syscon = 640 + syscon_regmap_lookup_by_phandle(np, "ti,bootreg"); 641 + if (IS_ERR(oproc->boot_data->syscon)) { 642 + ret = PTR_ERR(oproc->boot_data->syscon); 643 + return ret; 644 + } 645 + 646 + if (of_property_read_u32_index(np, "ti,bootreg", 1, 647 + &oproc->boot_data->boot_reg)) { 648 + dev_err(&pdev->dev, "couldn't get the boot register\n"); 649 + return -EINVAL; 650 + } 651 + 652 + of_property_read_u32_index(np, "ti,bootreg", 2, 653 + &oproc->boot_data->boot_reg_shift); 654 + 655 + return 0; 656 + } 657 + 658 + static int omap_rproc_of_get_internal_memories(struct platform_device *pdev, 659 + struct rproc *rproc) 660 + { 661 + struct omap_rproc *oproc = rproc->priv; 662 + struct device *dev = &pdev->dev; 663 + const struct omap_rproc_dev_data *data; 664 + struct resource *res; 665 + int num_mems; 666 + int i; 667 + 668 + data = 
of_device_get_match_data(dev); 669 + if (!data) 670 + return -ENODEV; 671 + 672 + if (!data->mems) 673 + return 0; 674 + 675 + num_mems = of_property_count_elems_of_size(dev->of_node, "reg", 676 + sizeof(u32)) / 2; 677 + 678 + oproc->mem = devm_kcalloc(dev, num_mems, sizeof(*oproc->mem), 679 + GFP_KERNEL); 680 + if (!oproc->mem) 681 + return -ENOMEM; 682 + 683 + for (i = 0; data->mems[i].name; i++) { 684 + res = platform_get_resource_byname(pdev, IORESOURCE_MEM, 685 + data->mems[i].name); 686 + if (!res) { 687 + dev_err(dev, "no memory defined for %s\n", 688 + data->mems[i].name); 689 + return -ENOMEM; 690 + } 691 + oproc->mem[i].cpu_addr = devm_ioremap_resource(dev, res); 692 + if (IS_ERR(oproc->mem[i].cpu_addr)) { 693 + dev_err(dev, "failed to parse and map %s memory\n", 694 + data->mems[i].name); 695 + return PTR_ERR(oproc->mem[i].cpu_addr); 696 + } 697 + oproc->mem[i].bus_addr = res->start; 698 + oproc->mem[i].dev_addr = data->mems[i].dev_addr; 699 + oproc->mem[i].size = resource_size(res); 700 + 701 + dev_dbg(dev, "memory %8s: bus addr %pa size 0x%x va %pK da 0x%x\n", 702 + data->mems[i].name, &oproc->mem[i].bus_addr, 703 + oproc->mem[i].size, oproc->mem[i].cpu_addr, 704 + oproc->mem[i].dev_addr); 705 + } 706 + oproc->num_mems = num_mems; 707 + 708 + return 0; 709 + } 710 + 711 + #ifdef CONFIG_OMAP_REMOTEPROC_WATCHDOG 712 + static int omap_rproc_count_wdog_timers(struct device *dev) 713 + { 714 + struct device_node *np = dev->of_node; 715 + int ret; 716 + 717 + ret = of_count_phandle_with_args(np, "ti,watchdog-timers", NULL); 718 + if (ret <= 0) { 719 + dev_dbg(dev, "device does not have watchdog timers, status = %d\n", 720 + ret); 721 + ret = 0; 722 + } 723 + 724 + return ret; 725 + } 726 + #else 727 + static int omap_rproc_count_wdog_timers(struct device *dev) 728 + { 729 + return 0; 730 + } 731 + #endif 732 + 733 + static int omap_rproc_of_get_timers(struct platform_device *pdev, 734 + struct rproc *rproc) 735 + { 736 + struct device_node *np = 
pdev->dev.of_node; 737 + struct omap_rproc *oproc = rproc->priv; 738 + struct device *dev = &pdev->dev; 739 + int num_timers; 740 + 741 + /* 742 + * Timer nodes are directly used in client nodes as phandles, so 743 + * retrieve the count using appropriate size 744 + */ 745 + oproc->num_timers = of_count_phandle_with_args(np, "ti,timers", NULL); 746 + if (oproc->num_timers <= 0) { 747 + dev_dbg(dev, "device does not have timers, status = %d\n", 748 + oproc->num_timers); 749 + oproc->num_timers = 0; 750 + } 751 + 752 + oproc->num_wd_timers = omap_rproc_count_wdog_timers(dev); 753 + 754 + num_timers = oproc->num_timers + oproc->num_wd_timers; 755 + if (num_timers) { 756 + oproc->timers = devm_kcalloc(dev, num_timers, 757 + sizeof(*oproc->timers), 758 + GFP_KERNEL); 759 + if (!oproc->timers) 760 + return -ENOMEM; 761 + 762 + dev_dbg(dev, "device has %d tick timers and %d watchdog timers\n", 763 + oproc->num_timers, oproc->num_wd_timers); 764 + } 765 + 766 + return 0; 767 + } 690 768 691 769 static int omap_rproc_probe(struct platform_device *pdev) 692 770 { 693 - struct omap_rproc_pdata *pdata = pdev->dev.platform_data; 771 + struct device_node *np = pdev->dev.of_node; 694 772 struct omap_rproc *oproc; 695 773 struct rproc *rproc; 774 + const char *firmware; 696 775 int ret; 776 + struct reset_control *reset; 777 + 778 + if (!np) { 779 + dev_err(&pdev->dev, "only DT-based devices are supported\n"); 780 + return -ENODEV; 781 + } 782 + 783 + reset = devm_reset_control_array_get_exclusive(&pdev->dev); 784 + if (IS_ERR(reset)) 785 + return PTR_ERR(reset); 786 + 787 + firmware = omap_rproc_get_firmware(pdev); 788 + if (IS_ERR(firmware)) 789 + return PTR_ERR(firmware); 697 790 698 791 ret = dma_set_coherent_mask(&pdev->dev, DMA_BIT_MASK(32)); 699 792 if (ret) { ··· 1307 188 return ret; 1308 189 } 1309 190 1310 - rproc = rproc_alloc(&pdev->dev, pdata->name, &omap_rproc_ops, 1311 - pdata->firmware, sizeof(*oproc)); 191 + rproc = rproc_alloc(&pdev->dev, dev_name(&pdev->dev), 
&omap_rproc_ops, 192 + firmware, sizeof(*oproc)); 1312 193 if (!rproc) 1313 194 return -ENOMEM; 1314 195 1315 196 oproc = rproc->priv; 1316 197 oproc->rproc = rproc; 198 + oproc->reset = reset; 1317 199 /* All existing OMAP IPU and DSP processors have an MMU */ 1318 200 rproc->has_iommu = true; 201 + 202 + ret = omap_rproc_of_get_internal_memories(pdev, rproc); 203 + if (ret) 204 + goto free_rproc; 205 + 206 + ret = omap_rproc_get_boot_data(pdev, rproc); 207 + if (ret) 208 + goto free_rproc; 209 + 210 + ret = omap_rproc_of_get_timers(pdev, rproc); 211 + if (ret) 212 + goto free_rproc; 213 + 214 + init_completion(&oproc->pm_comp); 215 + oproc->autosuspend_delay = DEFAULT_AUTOSUSPEND_DELAY; 216 + 217 + of_property_read_u32(pdev->dev.of_node, "ti,autosuspend-delay-ms", 218 + &oproc->autosuspend_delay); 219 + 220 + pm_runtime_set_autosuspend_delay(&pdev->dev, oproc->autosuspend_delay); 221 + 222 + oproc->fck = devm_clk_get(&pdev->dev, 0); 223 + if (IS_ERR(oproc->fck)) { 224 + ret = PTR_ERR(oproc->fck); 225 + goto free_rproc; 226 + } 227 + 228 + ret = of_reserved_mem_device_init(&pdev->dev); 229 + if (ret) { 230 + dev_warn(&pdev->dev, "device does not have specific CMA pool.\n"); 231 + dev_warn(&pdev->dev, "Typically this should be provided,\n"); 232 + dev_warn(&pdev->dev, "only omit if you know what you are doing.\n"); 233 + } 1319 234 1320 235 platform_set_drvdata(pdev, rproc); 1321 236 1322 237 ret = rproc_add(rproc); 1323 238 if (ret) 1324 - goto free_rproc; 239 + goto release_mem; 1325 240 1326 241 return 0; 1327 242 243 + release_mem: 244 + of_reserved_mem_device_release(&pdev->dev); 1328 245 free_rproc: 1329 246 rproc_free(rproc); 1330 247 return ret; ··· 1372 217 1373 218 rproc_del(rproc); 1374 219 rproc_free(rproc); 220 + of_reserved_mem_device_release(&pdev->dev); 1375 221 1376 222 return 0; 1377 223 } 224 + 225 + static const struct dev_pm_ops omap_rproc_pm_ops = { 226 + SET_SYSTEM_SLEEP_PM_OPS(omap_rproc_suspend, omap_rproc_resume) 227 + 
SET_RUNTIME_PM_OPS(omap_rproc_runtime_suspend, 228 + omap_rproc_runtime_resume, NULL) 229 + }; 1378 230 1379 231 static struct platform_driver omap_rproc_driver = { 1380 232 .probe = omap_rproc_probe, 1381 233 .remove = omap_rproc_remove, 1382 234 .driver = { 1383 235 .name = "omap-rproc", 236 + .pm = &omap_rproc_pm_ops, 237 + .of_match_table = omap_rproc_of_match, 1384 238 }, 1385 239 }; 1386 240
+23 -27
drivers/remoteproc/omap_remoteproc.h
··· 1 + /* SPDX-License-Identifier: BSD-3-Clause */ 1 2 /* 2 3 * Remote processor messaging 3 4 * 4 - * Copyright (C) 2011 Texas Instruments, Inc. 5 + * Copyright (C) 2011-2020 Texas Instruments, Inc. 5 6 * Copyright (C) 2011 Google, Inc. 6 7 * All rights reserved. 7 - * 8 - * Redistribution and use in source and binary forms, with or without 9 - * modification, are permitted provided that the following conditions 10 - * are met: 11 - * 12 - * * Redistributions of source code must retain the above copyright 13 - * notice, this list of conditions and the following disclaimer. 14 - * * Redistributions in binary form must reproduce the above copyright 15 - * notice, this list of conditions and the following disclaimer in 16 - * the documentation and/or other materials provided with the 17 - * distribution. 18 - * * Neither the name Texas Instruments nor the names of its 19 - * contributors may be used to endorse or promote products derived 20 - * from this software without specific prior written permission. 21 - * 22 - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS 23 - * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT 24 - * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR 25 - * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT 26 - * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, 27 - * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT 28 - * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, 29 - * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY 30 - * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 31 - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 32 - * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
33 8 */ 34 9 35 10 #ifndef _OMAP_RPMSG_H ··· 31 56 * 32 57 * @RP_MBOX_ABORT_REQUEST: a "please crash" request, used for testing the 33 58 * recovery mechanism (to some extent). 59 + * 60 + * @RP_MBOX_SUSPEND_AUTO: auto suspend request for the remote processor 61 + * 62 + * @RP_MBOX_SUSPEND_SYSTEM: system suspend request for the remote processor 63 + * 64 + * @RP_MBOX_SUSPEND_ACK: successful response from remote processor for a 65 + * suspend request 66 + * 67 + * @RP_MBOX_SUSPEND_CANCEL: a cancel suspend response from a remote processor 68 + * on a suspend request 69 + * 70 + * Introduce new message definitions if any here. 71 + * 72 + * @RP_MBOX_END_MSG: Indicates end of known/defined messages from remote core 73 + * This should be the last definition. 74 + * 34 75 */ 35 76 enum omap_rp_mbox_messages { 36 77 RP_MBOX_READY = 0xFFFFFF00, ··· 55 64 RP_MBOX_ECHO_REQUEST = 0xFFFFFF03, 56 65 RP_MBOX_ECHO_REPLY = 0xFFFFFF04, 57 66 RP_MBOX_ABORT_REQUEST = 0xFFFFFF05, 67 + RP_MBOX_SUSPEND_AUTO = 0xFFFFFF10, 68 + RP_MBOX_SUSPEND_SYSTEM = 0xFFFFFF11, 69 + RP_MBOX_SUSPEND_ACK = 0xFFFFFF12, 70 + RP_MBOX_SUSPEND_CANCEL = 0xFFFFFF13, 71 + RP_MBOX_END_MSG = 0xFFFFFF14, 58 72 }; 59 73 60 74 #endif /* _OMAP_RPMSG_H */
+20
drivers/remoteproc/qcom_q6v5.c
··· 15 15 #include <linux/remoteproc.h> 16 16 #include "qcom_q6v5.h" 17 17 18 + #define Q6V5_PANIC_DELAY_MS 200 19 + 18 20 /** 19 21 * qcom_q6v5_prepare() - reinitialize the qcom_q6v5 context before start 20 22 * @q6v5: reference to qcom_q6v5 context to be reinitialized ··· 163 161 return ret == 0 ? -ETIMEDOUT : 0; 164 162 } 165 163 EXPORT_SYMBOL_GPL(qcom_q6v5_request_stop); 164 + 165 + /** 166 + * qcom_q6v5_panic() - panic handler to invoke a stop on the remote 167 + * @q6v5: reference to qcom_q6v5 context 168 + * 169 + * Set the stop bit and sleep in order to allow the remote processor to flush 170 + * its caches etc for post mortem debugging. 171 + * 172 + * Return: 200ms 173 + */ 174 + unsigned long qcom_q6v5_panic(struct qcom_q6v5 *q6v5) 175 + { 176 + qcom_smem_state_update_bits(q6v5->state, 177 + BIT(q6v5->stop_bit), BIT(q6v5->stop_bit)); 178 + 179 + return Q6V5_PANIC_DELAY_MS; 180 + } 181 + EXPORT_SYMBOL_GPL(qcom_q6v5_panic); 166 182 167 183 /** 168 184 * qcom_q6v5_init() - initializer of the q6v5 common struct
+1
drivers/remoteproc/qcom_q6v5.h
··· 42 42 int qcom_q6v5_unprepare(struct qcom_q6v5 *q6v5); 43 43 int qcom_q6v5_request_stop(struct qcom_q6v5 *q6v5); 44 44 int qcom_q6v5_wait_for_start(struct qcom_q6v5 *q6v5, int timeout); 45 + unsigned long qcom_q6v5_panic(struct qcom_q6v5 *q6v5); 45 46 46 47 #endif
+9 -1
drivers/remoteproc/qcom_q6v5_adsp.c
··· 270 270 return ret; 271 271 } 272 272 273 - static void *adsp_da_to_va(struct rproc *rproc, u64 da, int len) 273 + static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len) 274 274 { 275 275 struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv; 276 276 int offset; ··· 282 282 return adsp->mem_region + offset; 283 283 } 284 284 285 + static unsigned long adsp_panic(struct rproc *rproc) 286 + { 287 + struct qcom_adsp *adsp = rproc->priv; 288 + 289 + return qcom_q6v5_panic(&adsp->q6v5); 290 + } 291 + 285 292 static const struct rproc_ops adsp_ops = { 286 293 .start = adsp_start, 287 294 .stop = adsp_stop, 288 295 .da_to_va = adsp_da_to_va, 289 296 .parse_fw = qcom_register_dump_segments, 290 297 .load = adsp_load, 298 + .panic = adsp_panic, 291 299 }; 292 300 293 301 static int adsp_init_clock(struct qcom_adsp *adsp, const char **clk_ids)
+98 -35
drivers/remoteproc/qcom_q6v5_mss.c
··· 381 381 } 382 382 383 383 static int q6v5_xfer_mem_ownership(struct q6v5 *qproc, int *current_perm, 384 - bool remote_owner, phys_addr_t addr, 384 + bool local, bool remote, phys_addr_t addr, 385 385 size_t size) 386 386 { 387 - struct qcom_scm_vmperm next; 387 + struct qcom_scm_vmperm next[2]; 388 + int perms = 0; 388 389 389 390 if (!qproc->need_mem_protection) 390 391 return 0; 391 - if (remote_owner && *current_perm == BIT(QCOM_SCM_VMID_MSS_MSA)) 392 - return 0; 393 - if (!remote_owner && *current_perm == BIT(QCOM_SCM_VMID_HLOS)) 392 + 393 + if (local == !!(*current_perm & BIT(QCOM_SCM_VMID_HLOS)) && 394 + remote == !!(*current_perm & BIT(QCOM_SCM_VMID_MSS_MSA))) 394 395 return 0; 395 396 396 - next.vmid = remote_owner ? QCOM_SCM_VMID_MSS_MSA : QCOM_SCM_VMID_HLOS; 397 - next.perm = remote_owner ? QCOM_SCM_PERM_RW : QCOM_SCM_PERM_RWX; 397 + if (local) { 398 + next[perms].vmid = QCOM_SCM_VMID_HLOS; 399 + next[perms].perm = QCOM_SCM_PERM_RWX; 400 + perms++; 401 + } 402 + 403 + if (remote) { 404 + next[perms].vmid = QCOM_SCM_VMID_MSS_MSA; 405 + next[perms].perm = QCOM_SCM_PERM_RW; 406 + perms++; 407 + } 398 408 399 409 return qcom_scm_assign_mem(addr, ALIGN(size, SZ_4K), 400 - current_perm, &next, 1); 410 + current_perm, next, perms); 401 411 } 402 412 403 413 static int q6v5_load(struct rproc *rproc, const struct firmware *fw) ··· 813 803 814 804 /* Hypervisor mapping to access metadata by modem */ 815 805 mdata_perm = BIT(QCOM_SCM_VMID_HLOS); 816 - ret = q6v5_xfer_mem_ownership(qproc, &mdata_perm, true, phys, size); 806 + ret = q6v5_xfer_mem_ownership(qproc, &mdata_perm, false, true, 807 + phys, size); 817 808 if (ret) { 818 809 dev_err(qproc->dev, 819 810 "assigning Q6 access to metadata failed: %d\n", ret); ··· 832 821 dev_err(qproc->dev, "MPSS header authentication failed: %d\n", ret); 833 822 834 823 /* Metadata authentication done, remove modem access */ 835 - xferop_ret = q6v5_xfer_mem_ownership(qproc, &mdata_perm, false, phys, size); 824 + xferop_ret = 
q6v5_xfer_mem_ownership(qproc, &mdata_perm, true, false, 825 + phys, size); 836 826 if (xferop_ret) 837 827 dev_warn(qproc->dev, 838 828 "mdt buffer not reclaimed system may become unstable\n"); ··· 920 908 } 921 909 922 910 /* Assign MBA image access in DDR to q6 */ 923 - ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true, 911 + ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, false, true, 924 912 qproc->mba_phys, qproc->mba_size); 925 913 if (ret) { 926 914 dev_err(qproc->dev, ··· 957 945 q6v5proc_halt_axi_port(qproc, qproc->halt_map, qproc->halt_nc); 958 946 959 947 reclaim_mba: 960 - xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, false, 961 - qproc->mba_phys, 948 + xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true, 949 + false, qproc->mba_phys, 962 950 qproc->mba_size); 963 951 if (xfermemop_ret) { 964 952 dev_err(qproc->dev, ··· 1015 1003 writel(val, qproc->reg_base + QDSP6SS_PWR_CTL_REG); 1016 1004 } 1017 1005 1018 - ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, 1019 - false, qproc->mpss_phys, 1020 - qproc->mpss_size); 1021 - WARN_ON(ret); 1022 - 1023 1006 q6v5_reset_assert(qproc); 1024 1007 1025 1008 q6v5_clk_disable(qproc->dev, qproc->reset_clks, ··· 1028 1021 /* In case of failure or coredump scenario where reclaiming MBA memory 1029 1022 * could not happen reclaim it here. 
1030 1023 */ 1031 - ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, false, 1024 + ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true, false, 1032 1025 qproc->mba_phys, 1033 1026 qproc->mba_size); 1034 1027 WARN_ON(ret); ··· 1044 1037 } 1045 1038 } 1046 1039 1040 + static int q6v5_reload_mba(struct rproc *rproc) 1041 + { 1042 + struct q6v5 *qproc = rproc->priv; 1043 + const struct firmware *fw; 1044 + int ret; 1045 + 1046 + ret = request_firmware(&fw, rproc->firmware, qproc->dev); 1047 + if (ret < 0) 1048 + return ret; 1049 + 1050 + q6v5_load(rproc, fw); 1051 + ret = q6v5_mba_load(qproc); 1052 + release_firmware(fw); 1053 + 1054 + return ret; 1055 + } 1056 + 1047 1057 static int q6v5_mpss_load(struct q6v5 *qproc) 1048 1058 { 1049 1059 const struct elf32_phdr *phdrs; ··· 1072 1048 phys_addr_t boot_addr; 1073 1049 phys_addr_t min_addr = PHYS_ADDR_MAX; 1074 1050 phys_addr_t max_addr = 0; 1051 + u32 code_length; 1075 1052 bool relocate = false; 1076 1053 char *fw_name; 1077 1054 size_t fw_name_len; ··· 1120 1095 1121 1096 if (phdr->p_paddr + phdr->p_memsz > max_addr) 1122 1097 max_addr = ALIGN(phdr->p_paddr + phdr->p_memsz, SZ_4K); 1098 + } 1099 + 1100 + /** 1101 + * In case of a modem subsystem restart on secure devices, the modem 1102 + * memory can be reclaimed only after MBA is loaded. For modem cold 1103 + * boot this will be a nop 1104 + */ 1105 + q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, true, false, 1106 + qproc->mpss_phys, qproc->mpss_size); 1107 + 1108 + /* Share ownership between Linux and MSS, during segment loading */ 1109 + ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, true, true, 1110 + qproc->mpss_phys, qproc->mpss_size); 1111 + if (ret) { 1112 + dev_err(qproc->dev, 1113 + "assigning Q6 access to mpss memory failed: %d\n", ret); 1114 + ret = -EAGAIN; 1115 + goto release_firmware; 1123 1116 } 1124 1117 1125 1118 mpss_reloc = relocate ? 
min_addr : qproc->mpss_phys; ··· 1188 1145 phdr->p_memsz - phdr->p_filesz); 1189 1146 } 1190 1147 size += phdr->p_memsz; 1148 + 1149 + code_length = readl(qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG); 1150 + if (!code_length) { 1151 + boot_addr = relocate ? qproc->mpss_phys : min_addr; 1152 + writel(boot_addr, qproc->rmb_base + RMB_PMI_CODE_START_REG); 1153 + writel(RMB_CMD_LOAD_READY, qproc->rmb_base + RMB_MBA_COMMAND_REG); 1154 + } 1155 + writel(size, qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG); 1156 + 1157 + ret = readl(qproc->rmb_base + RMB_MBA_STATUS_REG); 1158 + if (ret < 0) { 1159 + dev_err(qproc->dev, "MPSS authentication failed: %d\n", 1160 + ret); 1161 + goto release_firmware; 1162 + } 1191 1163 } 1192 1164 1193 1165 /* Transfer ownership of modem ddr region to q6 */ 1194 - ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, true, 1166 + ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, false, true, 1195 1167 qproc->mpss_phys, qproc->mpss_size); 1196 1168 if (ret) { 1197 1169 dev_err(qproc->dev, ··· 1214 1156 ret = -EAGAIN; 1215 1157 goto release_firmware; 1216 1158 } 1217 - 1218 - boot_addr = relocate ? 
qproc->mpss_phys : min_addr; 1219 - writel(boot_addr, qproc->rmb_base + RMB_PMI_CODE_START_REG); 1220 - writel(RMB_CMD_LOAD_READY, qproc->rmb_base + RMB_MBA_COMMAND_REG); 1221 - writel(size, qproc->rmb_base + RMB_PMI_CODE_LENGTH_REG); 1222 1159 1223 1160 ret = q6v5_rmb_mba_wait(qproc, RMB_MBA_AUTH_COMPLETE, 10000); 1224 1161 if (ret == -ETIMEDOUT) ··· 1239 1186 void *ptr = rproc_da_to_va(rproc, segment->da, segment->size); 1240 1187 1241 1188 /* Unlock mba before copying segments */ 1242 - if (!qproc->dump_mba_loaded) 1243 - ret = q6v5_mba_load(qproc); 1189 + if (!qproc->dump_mba_loaded) { 1190 + ret = q6v5_reload_mba(rproc); 1191 + if (!ret) { 1192 + /* Reset ownership back to Linux to copy segments */ 1193 + ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, 1194 + true, false, 1195 + qproc->mpss_phys, 1196 + qproc->mpss_size); 1197 + } 1198 + } 1244 1199 1245 1200 if (!ptr || ret) 1246 1201 memset(dest, 0xff, segment->size); ··· 1259 1198 1260 1199 /* Reclaim mba after copying segments */ 1261 1200 if (qproc->dump_segment_mask == qproc->dump_complete_mask) { 1262 - if (qproc->dump_mba_loaded) 1201 + if (qproc->dump_mba_loaded) { 1202 + /* Try to reset ownership back to Q6 */ 1203 + q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, 1204 + false, true, 1205 + qproc->mpss_phys, 1206 + qproc->mpss_size); 1263 1207 q6v5_mba_reclaim(qproc); 1208 + } 1264 1209 } 1265 1210 } 1266 1211 ··· 1292 1225 goto reclaim_mpss; 1293 1226 } 1294 1227 1295 - xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, false, 1296 - qproc->mba_phys, 1228 + xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mba_perm, true, 1229 + false, qproc->mba_phys, 1297 1230 qproc->mba_size); 1298 1231 if (xfermemop_ret) 1299 1232 dev_err(qproc->dev, ··· 1306 1239 return 0; 1307 1240 1308 1241 reclaim_mpss: 1309 - xfermemop_ret = q6v5_xfer_mem_ownership(qproc, &qproc->mpss_perm, 1310 - false, qproc->mpss_phys, 1311 - qproc->mpss_size); 1312 - WARN_ON(xfermemop_ret); 1313 1242 
q6v5_mba_reclaim(qproc); 1314 1243 1315 1244 return ret; ··· 1327 1264 return 0; 1328 1265 } 1329 1266 1330 - static void *q6v5_da_to_va(struct rproc *rproc, u64 da, int len) 1267 + static void *q6v5_da_to_va(struct rproc *rproc, u64 da, size_t len) 1331 1268 { 1332 1269 struct q6v5 *qproc = rproc->priv; 1333 1270 int offset;
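The central change to `q6v5_xfer_mem_ownership()` above is that ownership is no longer exclusive: the old single `remote_owner` flag becomes independent `local`/`remote` booleans, so MPSS memory can be shared between Linux and the modem (e.g. during segment loading) by passing both as true. A minimal userspace sketch of the permission-list construction, with illustrative stand-ins for the SCM constants and `struct qcom_scm_vmperm`:

```c
#include <stdbool.h>

/* Stand-ins for the Qualcomm SCM vmid/permission constants */
#define VMID_HLOS    3
#define VMID_MSS_MSA 15
#define PERM_R 0x4
#define PERM_W 0x2
#define PERM_X 0x1

struct vmperm {
	int vmid;
	int perm;
};

/*
 * Build the new-owner list handed to the assign call: Linux (HLOS)
 * keeps RWX, the modem (MSS) gets RW. Returns the entry count,
 * mirroring the "perms" counter in the driver.
 */
int build_perm_list(bool local, bool remote, struct vmperm next[2])
{
	int perms = 0;

	if (local) {
		next[perms].vmid = VMID_HLOS;
		next[perms].perm = PERM_R | PERM_W | PERM_X;
		perms++;
	}
	if (remote) {
		next[perms].vmid = VMID_MSS_MSA;
		next[perms].perm = PERM_R | PERM_W;
		perms++;
	}
	return perms;
}
```

With both flags set the call assigns two entries at once, which is what lets the driver hand the modem read access while Linux still copies segments in.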
+9 -1
drivers/remoteproc/qcom_q6v5_pas.c
··· 222 222 return ret; 223 223 } 224 224 225 - static void *adsp_da_to_va(struct rproc *rproc, u64 da, int len) 225 + static void *adsp_da_to_va(struct rproc *rproc, u64 da, size_t len) 226 226 { 227 227 struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv; 228 228 int offset; ··· 234 234 return adsp->mem_region + offset; 235 235 } 236 236 237 + static unsigned long adsp_panic(struct rproc *rproc) 238 + { 239 + struct qcom_adsp *adsp = (struct qcom_adsp *)rproc->priv; 240 + 241 + return qcom_q6v5_panic(&adsp->q6v5); 242 + } 243 + 237 244 static const struct rproc_ops adsp_ops = { 238 245 .start = adsp_start, 239 246 .stop = adsp_stop, 240 247 .da_to_va = adsp_da_to_va, 241 248 .parse_fw = qcom_register_dump_segments, 242 249 .load = adsp_load, 250 + .panic = adsp_panic, 243 251 }; 244 252 245 253 static int adsp_init_clock(struct qcom_adsp *adsp)
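`adsp_panic()` above simply wires the common `qcom_q6v5_panic()` helper into the new optional `.panic` member of `rproc_ops`; the core only invokes the hook when it is non-NULL and interprets the return value as a delay in milliseconds. A toy sketch of that optional-callback pattern (all names here are illustrative, not the kernel API):

```c
#include <stddef.h>

struct toy_rproc;

struct toy_rproc_ops {
	int (*start)(struct toy_rproc *rproc);
	/* Optional: returns how long (in ms) the core should wait on panic */
	unsigned long (*panic)(struct toy_rproc *rproc);
};

struct toy_rproc {
	const struct toy_rproc_ops *ops;
};

/* Invoke the panic hook if the driver provides one, else request no delay */
unsigned long toy_rproc_panic(struct toy_rproc *rproc)
{
	if (!rproc->ops->panic)
		return 0;
	return rproc->ops->panic(rproc);
}

/* A driver hook asking the core to wait 200 ms after notifying the remote */
unsigned long toy_panic_200ms(struct toy_rproc *rproc)
{
	(void)rproc;
	return 200;
}
```

Drivers that have nothing to flush simply leave `.panic` unset and the core skips them.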
+1 -1
drivers/remoteproc/qcom_q6v5_wcss.c
··· 406 406 return 0;
407 407 }
408 408
409 - static void *q6v5_wcss_da_to_va(struct rproc *rproc, u64 da, int len)
409 + static void *q6v5_wcss_da_to_va(struct rproc *rproc, u64 da, size_t len)
410 410 {
411 411 struct q6v5_wcss *wcss = rproc->priv;
412 412 int offset;
+1 -1
drivers/remoteproc/qcom_wcnss.c
··· 287 287 return ret;
288 288 }
289 289
290 - static void *wcnss_da_to_va(struct rproc *rproc, u64 da, int len)
290 + static void *wcnss_da_to_va(struct rproc *rproc, u64 da, size_t len)
291 291 {
292 292 struct qcom_wcnss *wcnss = (struct qcom_wcnss *)rproc->priv;
293 293 int offset;
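The `int len` → `size_t len` conversion repeated across these `da_to_va()` implementations matters for the bounds check each one performs: a very large length passed as a signed `int` could go negative and slip past the comparison. A standalone sketch of the usual translate-with-bounds-check, with an illustrative region struct in place of the drivers' private state:

```c
#include <stddef.h>
#include <stdint.h>

struct toy_mem_region {
	uint64_t da;   /* device address where the carveout starts */
	size_t size;   /* carveout size in bytes */
	char *va;      /* kernel virtual address of the carveout */
};

/*
 * Translate a device-address window to a local pointer, or return NULL
 * if [da, da + len) is not fully contained in the region.
 */
void *toy_da_to_va(const struct toy_mem_region *mem, uint64_t da, size_t len)
{
	uint64_t offset;

	if (da < mem->da)
		return NULL;
	offset = da - mem->da;
	if (offset >= mem->size || len > mem->size - offset)
		return NULL;
	return mem->va + offset;
}
```

Writing the check as `len > mem->size - offset` (rather than `offset + len > mem->size`) also avoids unsigned overflow in the addition.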
+111 -48
drivers/remoteproc/remoteproc_core.c
··· 16 16 17 17 #define pr_fmt(fmt) "%s: " fmt, __func__ 18 18 19 + #include <linux/delay.h> 19 20 #include <linux/kernel.h> 20 21 #include <linux/module.h> 21 22 #include <linux/device.h> ··· 27 26 #include <linux/string.h> 28 27 #include <linux/debugfs.h> 29 28 #include <linux/devcoredump.h> 29 + #include <linux/rculist.h> 30 30 #include <linux/remoteproc.h> 31 31 #include <linux/iommu.h> 32 32 #include <linux/idr.h> ··· 40 38 #include <linux/platform_device.h> 41 39 42 40 #include "remoteproc_internal.h" 41 + #include "remoteproc_elf_helpers.h" 43 42 44 43 #define HIGH_BITS_MASK 0xFFFFFFFF00000000ULL 45 44 46 45 static DEFINE_MUTEX(rproc_list_mutex); 47 46 static LIST_HEAD(rproc_list); 47 + static struct notifier_block rproc_panic_nb; 48 48 49 49 typedef int (*rproc_handle_resource_t)(struct rproc *rproc, 50 50 void *, int offset, int avail); ··· 189 185 * here the output of the DMA API for the carveouts, which should be more 190 186 * correct. 191 187 */ 192 - void *rproc_da_to_va(struct rproc *rproc, u64 da, int len) 188 + void *rproc_da_to_va(struct rproc *rproc, u64 da, size_t len) 193 189 { 194 190 struct rproc_mem_entry *carveout; 195 191 void *ptr = NULL; ··· 228 224 /** 229 225 * rproc_find_carveout_by_name() - lookup the carveout region by a name 230 226 * @rproc: handle of a remote processor 231 - * @name,..: carveout name to find (standard printf format) 227 + * @name: carveout name to find (format string) 228 + * @...: optional parameters matching @name string 232 229 * 233 230 * Platform driver has the capability to register some pre-allacoted carveout 234 231 * (physically contiguous memory regions) before rproc firmware loading and ··· 323 318 struct device *dev = &rproc->dev; 324 319 struct rproc_vring *rvring = &rvdev->vring[i]; 325 320 struct fw_rsc_vdev *rsc; 326 - int ret, size, notifyid; 321 + int ret, notifyid; 327 322 struct rproc_mem_entry *mem; 323 + size_t size; 328 324 329 325 /* actual size of vring (in bytes) */ 330 326 size = 
PAGE_ALIGN(vring_size(rvring->len, rvring->align)); ··· 451 445 * rproc_handle_vdev() - handle a vdev fw resource 452 446 * @rproc: the remote processor 453 447 * @rsc: the vring resource descriptor 448 + * @offset: offset of the resource entry 454 449 * @avail: size of available data (for sanity checking the image) 455 450 * 456 451 * This resource entry requests the host to statically register a virtio ··· 594 587 * rproc_handle_trace() - handle a shared trace buffer resource 595 588 * @rproc: the remote processor 596 589 * @rsc: the trace resource descriptor 590 + * @offset: offset of the resource entry 597 591 * @avail: size of available data (for sanity checking the image) 598 592 * 599 593 * In case the remote processor dumps trace logs into memory, ··· 660 652 * rproc_handle_devmem() - handle devmem resource entry 661 653 * @rproc: remote processor handle 662 654 * @rsc: the devmem resource entry 655 + * @offset: offset of the resource entry 663 656 * @avail: size of available data (for sanity checking the image) 664 657 * 665 658 * Remote processors commonly need to access certain on-chip peripherals. 
··· 755 746 va = dma_alloc_coherent(dev->parent, mem->len, &dma, GFP_KERNEL); 756 747 if (!va) { 757 748 dev_err(dev->parent, 758 - "failed to allocate dma memory: len 0x%x\n", mem->len); 749 + "failed to allocate dma memory: len 0x%zx\n", 750 + mem->len); 759 751 return -ENOMEM; 760 752 } 761 753 762 - dev_dbg(dev, "carveout va %pK, dma %pad, len 0x%x\n", 754 + dev_dbg(dev, "carveout va %pK, dma %pad, len 0x%zx\n", 763 755 va, &dma, mem->len); 764 756 765 757 if (mem->da != FW_RSC_ADDR_ANY && !rproc->domain) { ··· 863 853 * rproc_handle_carveout() - handle phys contig memory allocation requests 864 854 * @rproc: rproc handle 865 855 * @rsc: the resource entry 856 + * @offset: offset of the resource entry 866 857 * @avail: size of available data (for image validation) 867 858 * 868 859 * This function will handle firmware requests for allocation of physically ··· 968 957 */ 969 958 struct rproc_mem_entry * 970 959 rproc_mem_entry_init(struct device *dev, 971 - void *va, dma_addr_t dma, int len, u32 da, 960 + void *va, dma_addr_t dma, size_t len, u32 da, 972 961 int (*alloc)(struct rproc *, struct rproc_mem_entry *), 973 962 int (*release)(struct rproc *, struct rproc_mem_entry *), 974 963 const char *name, ...) ··· 1010 999 * provided by client. 1011 1000 */ 1012 1001 struct rproc_mem_entry * 1013 - rproc_of_resm_mem_entry_init(struct device *dev, u32 of_resm_idx, int len, 1002 + rproc_of_resm_mem_entry_init(struct device *dev, u32 of_resm_idx, size_t len, 1014 1003 u32 da, const char *name, ...) 1015 1004 { 1016 1005 struct rproc_mem_entry *mem; ··· 1033 1022 } 1034 1023 EXPORT_SYMBOL(rproc_of_resm_mem_entry_init); 1035 1024 1036 - /** 1025 + /* 1037 1026 * A lookup table for resource handlers. The indices are defined in 1038 1027 * enum fw_resource_type. 
1039 1028 */ ··· 1281 1270 unmapped = iommu_unmap(rproc->domain, entry->da, entry->len); 1282 1271 if (unmapped != entry->len) { 1283 1272 /* nothing much to do besides complaining */ 1284 - dev_err(dev, "failed to unmap %u/%zu\n", entry->len, 1273 + dev_err(dev, "failed to unmap %zx/%zu\n", entry->len, 1285 1274 unmapped); 1286 1275 } 1287 1276 ··· 1575 1564 static void rproc_coredump(struct rproc *rproc) 1576 1565 { 1577 1566 struct rproc_dump_segment *segment; 1578 - struct elf32_phdr *phdr; 1579 - struct elf32_hdr *ehdr; 1567 + void *phdr; 1568 + void *ehdr; 1580 1569 size_t data_size; 1581 1570 size_t offset; 1582 1571 void *data; 1583 1572 void *ptr; 1573 + u8 class = rproc->elf_class; 1584 1574 int phnum = 0; 1585 1575 1586 1576 if (list_empty(&rproc->dump_segments)) 1587 1577 return; 1588 1578 1589 - data_size = sizeof(*ehdr); 1579 + data_size = elf_size_of_hdr(class); 1590 1580 list_for_each_entry(segment, &rproc->dump_segments, node) { 1591 - data_size += sizeof(*phdr) + segment->size; 1581 + data_size += elf_size_of_phdr(class) + segment->size; 1592 1582 1593 1583 phnum++; 1594 1584 } ··· 1600 1588 1601 1589 ehdr = data; 1602 1590 1603 - memset(ehdr, 0, sizeof(*ehdr)); 1604 - memcpy(ehdr->e_ident, ELFMAG, SELFMAG); 1605 - ehdr->e_ident[EI_CLASS] = ELFCLASS32; 1606 - ehdr->e_ident[EI_DATA] = ELFDATA2LSB; 1607 - ehdr->e_ident[EI_VERSION] = EV_CURRENT; 1608 - ehdr->e_ident[EI_OSABI] = ELFOSABI_NONE; 1609 - ehdr->e_type = ET_CORE; 1610 - ehdr->e_machine = EM_NONE; 1611 - ehdr->e_version = EV_CURRENT; 1612 - ehdr->e_entry = rproc->bootaddr; 1613 - ehdr->e_phoff = sizeof(*ehdr); 1614 - ehdr->e_ehsize = sizeof(*ehdr); 1615 - ehdr->e_phentsize = sizeof(*phdr); 1616 - ehdr->e_phnum = phnum; 1591 + memset(ehdr, 0, elf_size_of_hdr(class)); 1592 + /* e_ident field is common for both elf32 and elf64 */ 1593 + elf_hdr_init_ident(ehdr, class); 1617 1594 1618 - phdr = data + ehdr->e_phoff; 1619 - offset = ehdr->e_phoff + sizeof(*phdr) * ehdr->e_phnum; 1595 + 
elf_hdr_set_e_type(class, ehdr, ET_CORE); 1596 + elf_hdr_set_e_machine(class, ehdr, EM_NONE); 1597 + elf_hdr_set_e_version(class, ehdr, EV_CURRENT); 1598 + elf_hdr_set_e_entry(class, ehdr, rproc->bootaddr); 1599 + elf_hdr_set_e_phoff(class, ehdr, elf_size_of_hdr(class)); 1600 + elf_hdr_set_e_ehsize(class, ehdr, elf_size_of_hdr(class)); 1601 + elf_hdr_set_e_phentsize(class, ehdr, elf_size_of_phdr(class)); 1602 + elf_hdr_set_e_phnum(class, ehdr, phnum); 1603 + 1604 + phdr = data + elf_hdr_get_e_phoff(class, ehdr); 1605 + offset = elf_hdr_get_e_phoff(class, ehdr); 1606 + offset += elf_size_of_phdr(class) * elf_hdr_get_e_phnum(class, ehdr); 1607 + 1620 1608 list_for_each_entry(segment, &rproc->dump_segments, node) { 1621 - memset(phdr, 0, sizeof(*phdr)); 1622 - phdr->p_type = PT_LOAD; 1623 - phdr->p_offset = offset; 1624 - phdr->p_vaddr = segment->da; 1625 - phdr->p_paddr = segment->da; 1626 - phdr->p_filesz = segment->size; 1627 - phdr->p_memsz = segment->size; 1628 - phdr->p_flags = PF_R | PF_W | PF_X; 1629 - phdr->p_align = 0; 1609 + memset(phdr, 0, elf_size_of_phdr(class)); 1610 + elf_phdr_set_p_type(class, phdr, PT_LOAD); 1611 + elf_phdr_set_p_offset(class, phdr, offset); 1612 + elf_phdr_set_p_vaddr(class, phdr, segment->da); 1613 + elf_phdr_set_p_paddr(class, phdr, segment->da); 1614 + elf_phdr_set_p_filesz(class, phdr, segment->size); 1615 + elf_phdr_set_p_memsz(class, phdr, segment->size); 1616 + elf_phdr_set_p_flags(class, phdr, PF_R | PF_W | PF_X); 1617 + elf_phdr_set_p_align(class, phdr, 0); 1630 1618 1631 1619 if (segment->dump) { 1632 1620 segment->dump(rproc, segment, data + offset); ··· 1642 1630 } 1643 1631 } 1644 1632 1645 - offset += phdr->p_filesz; 1646 - phdr++; 1633 + offset += elf_phdr_get_p_filesz(class, phdr); 1634 + phdr += elf_size_of_phdr(class); 1647 1635 } 1648 1636 1649 1637 dev_coredumpv(&rproc->dev, data, data_size, GFP_KERNEL); ··· 1665 1653 struct device *dev = &rproc->dev; 1666 1654 int ret; 1667 1655 1668 - dev_err(dev, "recovering 
%s\n", rproc->name); 1669 - 1670 1656 ret = mutex_lock_interruptible(&rproc->lock); 1671 1657 if (ret) 1672 1658 return ret; 1659 + 1660 + /* State could have changed before we got the mutex */ 1661 + if (rproc->state != RPROC_CRASHED) 1662 + goto unlock_mutex; 1663 + 1664 + dev_err(dev, "recovering %s\n", rproc->name); 1673 1665 1674 1666 ret = rproc_stop(rproc, true); 1675 1667 if (ret) ··· 1701 1685 1702 1686 /** 1703 1687 * rproc_crash_handler_work() - handle a crash 1688 + * @work: work treating the crash 1704 1689 * 1705 1690 * This function needs to handle everything related to a crash, like cpu 1706 1691 * registers and stack dump, information to help to debug the fatal error, etc. ··· 1871 1854 if (!np) 1872 1855 return NULL; 1873 1856 1874 - mutex_lock(&rproc_list_mutex); 1875 - list_for_each_entry(r, &rproc_list, node) { 1857 + rcu_read_lock(); 1858 + list_for_each_entry_rcu(r, &rproc_list, node) { 1876 1859 if (r->dev.parent && r->dev.parent->of_node == np) { 1877 1860 /* prevent underlying implementation from being removed */ 1878 1861 if (!try_module_get(r->dev.parent->driver->owner)) { ··· 1885 1868 break; 1886 1869 } 1887 1870 } 1888 - mutex_unlock(&rproc_list_mutex); 1871 + rcu_read_unlock(); 1889 1872 1890 1873 of_node_put(np); 1891 1874 ··· 1942 1925 1943 1926 /* expose to rproc_get_by_phandle users */ 1944 1927 mutex_lock(&rproc_list_mutex); 1945 - list_add(&rproc->node, &rproc_list); 1928 + list_add_rcu(&rproc->node, &rproc_list); 1946 1929 mutex_unlock(&rproc_list_mutex); 1947 1930 1948 1931 return 0; ··· 2046 2029 rproc->name = name; 2047 2030 rproc->priv = &rproc[1]; 2048 2031 rproc->auto_boot = true; 2032 + rproc->elf_class = ELFCLASS32; 2049 2033 2050 2034 device_initialize(&rproc->dev); 2051 2035 rproc->dev.parent = dev; ··· 2071 2053 rproc->ops->load = rproc_elf_load_segments; 2072 2054 rproc->ops->parse_fw = rproc_elf_load_rsc_table; 2073 2055 rproc->ops->find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table; 2074 - 
rproc->ops->sanity_check = rproc_elf_sanity_check; 2056 + if (!rproc->ops->sanity_check) 2057 + rproc->ops->sanity_check = rproc_elf32_sanity_check; 2075 2058 rproc->ops->get_boot_addr = rproc_elf_get_boot_addr; 2076 2059 } 2077 2060 ··· 2159 2140 2160 2141 /* the rproc is downref'ed as soon as it's removed from the klist */ 2161 2142 mutex_lock(&rproc_list_mutex); 2162 - list_del(&rproc->node); 2143 + list_del_rcu(&rproc->node); 2163 2144 mutex_unlock(&rproc_list_mutex); 2145 + 2146 + /* Ensure that no readers of rproc_list are still active */ 2147 + synchronize_rcu(); 2164 2148 2165 2149 device_del(&rproc->dev); 2166 2150 ··· 2238 2216 } 2239 2217 EXPORT_SYMBOL(rproc_report_crash); 2240 2218 2219 + static int rproc_panic_handler(struct notifier_block *nb, unsigned long event, 2220 + void *ptr) 2221 + { 2222 + unsigned int longest = 0; 2223 + struct rproc *rproc; 2224 + unsigned int d; 2225 + 2226 + rcu_read_lock(); 2227 + list_for_each_entry_rcu(rproc, &rproc_list, node) { 2228 + if (!rproc->ops->panic || rproc->state != RPROC_RUNNING) 2229 + continue; 2230 + 2231 + d = rproc->ops->panic(rproc); 2232 + longest = max(longest, d); 2233 + } 2234 + rcu_read_unlock(); 2235 + 2236 + /* 2237 + * Delay for the longest requested duration before returning. This can 2238 + * be used by the remoteproc drivers to give the remote processor time 2239 + * to perform any requested operations (such as flush caches), when 2240 + * it's not possible to signal the Linux side due to the panic. 
2241 + */ 2242 + mdelay(longest); 2243 + 2244 + return NOTIFY_DONE; 2245 + } 2246 + 2247 + static void __init rproc_init_panic(void) 2248 + { 2249 + rproc_panic_nb.notifier_call = rproc_panic_handler; 2250 + atomic_notifier_chain_register(&panic_notifier_list, &rproc_panic_nb); 2251 + } 2252 + 2253 + static void __exit rproc_exit_panic(void) 2254 + { 2255 + atomic_notifier_chain_unregister(&panic_notifier_list, &rproc_panic_nb); 2256 + } 2257 + 2241 2258 static int __init remoteproc_init(void) 2242 2259 { 2243 2260 rproc_init_sysfs(); 2244 2261 rproc_init_debugfs(); 2262 + rproc_init_panic(); 2245 2263 2246 2264 return 0; 2247 2265 } ··· 2291 2229 { 2292 2230 ida_destroy(&rproc_dev_index); 2293 2231 2232 + rproc_exit_panic(); 2294 2233 rproc_exit_debugfs(); 2295 2234 rproc_exit_sysfs(); 2296 2235 }
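The panic notifier added above walks `rproc_list` under `rcu_read_lock()`, asks each running instance's `.panic` hook for a delay, and then calls `mdelay()` once for the maximum rather than sleeping once per device. The aggregation step, reduced to plain C with the list replaced by an array of hook return values:

```c
/* Each element stands in for one running remoteproc's panic() return value */
unsigned int longest_panic_delay(const unsigned int *delays, int n)
{
	unsigned int longest = 0;
	int i;

	for (i = 0; i < n; i++) {
		if (delays[i] > longest)
			longest = delays[i];
	}
	/* the core then busy-waits (mdelay) once for this duration */
	return longest;
}
```

Waiting once for the longest request keeps the total panic-path delay bounded by the slowest remote processor instead of the sum of all of them.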
+8 -8
drivers/remoteproc/remoteproc_debugfs.c
··· 138 138 buf[count - 1] = '\0'; 139 139 140 140 if (!strncmp(buf, "enabled", count)) { 141 + /* change the flag and begin the recovery process if needed */ 141 142 rproc->recovery_disabled = false; 142 - /* if rproc has crashed, trigger recovery */ 143 - if (rproc->state == RPROC_CRASHED) 144 - rproc_trigger_recovery(rproc); 143 + rproc_trigger_recovery(rproc); 145 144 } else if (!strncmp(buf, "disabled", count)) { 146 145 rproc->recovery_disabled = true; 147 146 } else if (!strncmp(buf, "recover", count)) { 148 - /* if rproc has crashed, trigger recovery */ 149 - if (rproc->state == RPROC_CRASHED) 150 - rproc_trigger_recovery(rproc); 147 + /* begin the recovery process without changing the flag */ 148 + rproc_trigger_recovery(rproc); 149 + } else { 150 + return -EINVAL; 151 151 } 152 152 153 153 return count; ··· 293 293 seq_printf(seq, "\tVirtual address: %pK\n", carveout->va); 294 294 seq_printf(seq, "\tDMA address: %pad\n", &carveout->dma); 295 295 seq_printf(seq, "\tDevice address: 0x%x\n", carveout->da); 296 - seq_printf(seq, "\tLength: 0x%x Bytes\n\n", carveout->len); 296 + seq_printf(seq, "\tLength: 0x%zx Bytes\n\n", carveout->len); 297 297 } 298 298 299 299 return 0; ··· 349 349 350 350 debugfs_create_file("name", 0400, rproc->dbg_dir, 351 351 rproc, &rproc_name_ops); 352 - debugfs_create_file("recovery", 0400, rproc->dbg_dir, 352 + debugfs_create_file("recovery", 0600, rproc->dbg_dir, 353 353 rproc, &rproc_recovery_ops); 354 354 debugfs_create_file("crash", 0200, rproc->dbg_dir, 355 355 rproc, &rproc_crash_ops);
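The debugfs `recovery` file now accepts three commands and, new in this change, rejects anything else with `-EINVAL` instead of silently accepting it; the crashed-state check moves into `rproc_trigger_recovery()` itself, so the file can call it unconditionally. The dispatch logic, sketched standalone (the real code compares with `strncmp()` against the user buffer; the struct and helper here are simplified stand-ins):

```c
#include <string.h>
#include <errno.h>
#include <stdbool.h>

struct toy_rproc {
	bool recovery_disabled;
	int recoveries;  /* counts trigger-recovery calls for illustration */
};

/* Stand-in for rproc_trigger_recovery(); the real one checks the state */
void toy_trigger_recovery(struct toy_rproc *rproc)
{
	rproc->recoveries++;
}

int toy_recovery_write(struct toy_rproc *rproc, const char *buf)
{
	if (!strcmp(buf, "enabled")) {
		/* change the flag and begin the recovery process if needed */
		rproc->recovery_disabled = false;
		toy_trigger_recovery(rproc);
	} else if (!strcmp(buf, "disabled")) {
		rproc->recovery_disabled = true;
	} else if (!strcmp(buf, "recover")) {
		/* begin the recovery process without changing the flag */
		toy_trigger_recovery(rproc);
	} else {
		return -EINVAL;
	}
	return 0;
}
```

Note the mode change from 0400 to 0600 in the same hunk: a write-parsing file needs to be writable in the first place.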
+96
drivers/remoteproc/remoteproc_elf_helpers.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Remote processor elf helpers defines 4 + * 5 + * Copyright (C) 2020 Kalray, Inc. 6 + */ 7 + 8 + #ifndef REMOTEPROC_ELF_LOADER_H 9 + #define REMOTEPROC_ELF_LOADER_H 10 + 11 + #include <linux/elf.h> 12 + #include <linux/types.h> 13 + 14 + /** 15 + * fw_elf_get_class - Get elf class 16 + * @fw: the ELF firmware image 17 + * 18 + * Note that we use and elf32_hdr to access the class since the start of the 19 + * struct is the same for both elf class 20 + * 21 + * Return: elf class of the firmware 22 + */ 23 + static inline u8 fw_elf_get_class(const struct firmware *fw) 24 + { 25 + struct elf32_hdr *ehdr = (struct elf32_hdr *)fw->data; 26 + 27 + return ehdr->e_ident[EI_CLASS]; 28 + } 29 + 30 + static inline void elf_hdr_init_ident(struct elf32_hdr *hdr, u8 class) 31 + { 32 + memcpy(hdr->e_ident, ELFMAG, SELFMAG); 33 + hdr->e_ident[EI_CLASS] = class; 34 + hdr->e_ident[EI_DATA] = ELFDATA2LSB; 35 + hdr->e_ident[EI_VERSION] = EV_CURRENT; 36 + hdr->e_ident[EI_OSABI] = ELFOSABI_NONE; 37 + } 38 + 39 + /* Generate getter and setter for a specific elf struct/field */ 40 + #define ELF_GEN_FIELD_GET_SET(__s, __field, __type) \ 41 + static inline __type elf_##__s##_get_##__field(u8 class, const void *arg) \ 42 + { \ 43 + if (class == ELFCLASS32) \ 44 + return (__type) ((const struct elf32_##__s *) arg)->__field; \ 45 + else \ 46 + return (__type) ((const struct elf64_##__s *) arg)->__field; \ 47 + } \ 48 + static inline void elf_##__s##_set_##__field(u8 class, void *arg, \ 49 + __type value) \ 50 + { \ 51 + if (class == ELFCLASS32) \ 52 + ((struct elf32_##__s *) arg)->__field = (__type) value; \ 53 + else \ 54 + ((struct elf64_##__s *) arg)->__field = (__type) value; \ 55 + } 56 + 57 + ELF_GEN_FIELD_GET_SET(hdr, e_entry, u64) 58 + ELF_GEN_FIELD_GET_SET(hdr, e_phnum, u16) 59 + ELF_GEN_FIELD_GET_SET(hdr, e_shnum, u16) 60 + ELF_GEN_FIELD_GET_SET(hdr, e_phoff, u64) 61 + ELF_GEN_FIELD_GET_SET(hdr, e_shoff, u64) 62 + 
ELF_GEN_FIELD_GET_SET(hdr, e_shstrndx, u16) 63 + ELF_GEN_FIELD_GET_SET(hdr, e_machine, u16) 64 + ELF_GEN_FIELD_GET_SET(hdr, e_type, u16) 65 + ELF_GEN_FIELD_GET_SET(hdr, e_version, u32) 66 + ELF_GEN_FIELD_GET_SET(hdr, e_ehsize, u32) 67 + ELF_GEN_FIELD_GET_SET(hdr, e_phentsize, u16) 68 + 69 + ELF_GEN_FIELD_GET_SET(phdr, p_paddr, u64) 70 + ELF_GEN_FIELD_GET_SET(phdr, p_vaddr, u64) 71 + ELF_GEN_FIELD_GET_SET(phdr, p_filesz, u64) 72 + ELF_GEN_FIELD_GET_SET(phdr, p_memsz, u64) 73 + ELF_GEN_FIELD_GET_SET(phdr, p_type, u32) 74 + ELF_GEN_FIELD_GET_SET(phdr, p_offset, u64) 75 + ELF_GEN_FIELD_GET_SET(phdr, p_flags, u32) 76 + ELF_GEN_FIELD_GET_SET(phdr, p_align, u64) 77 + 78 + ELF_GEN_FIELD_GET_SET(shdr, sh_size, u64) 79 + ELF_GEN_FIELD_GET_SET(shdr, sh_offset, u64) 80 + ELF_GEN_FIELD_GET_SET(shdr, sh_name, u32) 81 + ELF_GEN_FIELD_GET_SET(shdr, sh_addr, u64) 82 + 83 + #define ELF_STRUCT_SIZE(__s) \ 84 + static inline unsigned long elf_size_of_##__s(u8 class) \ 85 + { \ 86 + if (class == ELFCLASS32)\ 87 + return sizeof(struct elf32_##__s); \ 88 + else \ 89 + return sizeof(struct elf64_##__s); \ 90 + } 91 + 92 + ELF_STRUCT_SIZE(shdr) 93 + ELF_STRUCT_SIZE(phdr) 94 + ELF_STRUCT_SIZE(hdr) 95 + 96 + #endif /* REMOTEPROC_ELF_LOADER_H */
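The helper header above avoids duplicating every loader path for ELF32 and ELF64 by generating per-field accessors that dispatch on the class byte at runtime; since `e_ident` is laid out identically in both header variants, the class can always be read through the 32-bit struct. The same trick compiles in userspace against glibc's `<elf.h>` types (type names differ from the kernel's `struct elf32_hdr`, and only two fields are generated here for brevity):

```c
#include <elf.h>

/* Generate a class-dispatching getter for one ELF header field */
#define GEN_GET(__field, __type)                                        \
__type ehdr_get_##__field(unsigned char class, const void *ehdr)        \
{                                                                       \
	if (class == ELFCLASS32)                                        \
		return (__type)((const Elf32_Ehdr *)ehdr)->__field;     \
	else                                                            \
		return (__type)((const Elf64_Ehdr *)ehdr)->__field;     \
}

GEN_GET(e_phnum, unsigned short)
GEN_GET(e_entry, unsigned long long)

/* Equivalent of elf_size_of_hdr() in the kernel header */
unsigned long ehdr_size(unsigned char class)
{
	return class == ELFCLASS32 ? sizeof(Elf32_Ehdr)
				   : sizeof(Elf64_Ehdr);
}
```

Each getter widens to the ELF64 field type (e.g. `u64` for addresses and offsets in the kernel version), so callers can stay class-agnostic.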
+131 -54
drivers/remoteproc/remoteproc_elf_loader.c
··· 23 23 #include <linux/elf.h> 24 24 25 25 #include "remoteproc_internal.h" 26 + #include "remoteproc_elf_helpers.h" 26 27 27 28 /** 28 - * rproc_elf_sanity_check() - Sanity Check ELF firmware image 29 + * rproc_elf_sanity_check() - Sanity Check for ELF32/ELF64 firmware image 29 30 * @rproc: the remote processor handle 30 31 * @fw: the ELF firmware image 31 32 * 32 - * Make sure this fw image is sane. 33 + * Make sure this fw image is sane (ie a correct ELF32/ELF64 file). 33 34 */ 34 35 int rproc_elf_sanity_check(struct rproc *rproc, const struct firmware *fw) 35 36 { 36 37 const char *name = rproc->firmware; 37 38 struct device *dev = &rproc->dev; 39 + /* 40 + * Elf files are beginning with the same structure. Thus, to simplify 41 + * header parsing, we can use the elf32_hdr one for both elf64 and 42 + * elf32. 43 + */ 38 44 struct elf32_hdr *ehdr; 45 + u32 elf_shdr_get_size; 46 + u64 phoff, shoff; 39 47 char class; 48 + u16 phnum; 40 49 41 50 if (!fw) { 42 51 dev_err(dev, "failed to load %s\n", name); ··· 59 50 60 51 ehdr = (struct elf32_hdr *)fw->data; 61 52 62 - /* We only support ELF32 at this point */ 53 + if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG)) { 54 + dev_err(dev, "Image is corrupted (bad magic)\n"); 55 + return -EINVAL; 56 + } 57 + 63 58 class = ehdr->e_ident[EI_CLASS]; 64 - if (class != ELFCLASS32) { 59 + if (class != ELFCLASS32 && class != ELFCLASS64) { 65 60 dev_err(dev, "Unsupported class: %d\n", class); 61 + return -EINVAL; 62 + } 63 + 64 + if (class == ELFCLASS64 && fw->size < sizeof(struct elf64_hdr)) { 65 + dev_err(dev, "elf64 header is too small\n"); 66 66 return -EINVAL; 67 67 } 68 68 ··· 85 67 return -EINVAL; 86 68 } 87 69 88 - if (fw->size < ehdr->e_shoff + sizeof(struct elf32_shdr)) { 70 + phoff = elf_hdr_get_e_phoff(class, fw->data); 71 + shoff = elf_hdr_get_e_shoff(class, fw->data); 72 + phnum = elf_hdr_get_e_phnum(class, fw->data); 73 + elf_shdr_get_size = elf_size_of_shdr(class); 74 + 75 + if (fw->size < shoff + elf_shdr_get_size) { 
89 76 dev_err(dev, "Image is too small\n"); 90 77 return -EINVAL; 91 78 } 92 79 93 - if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG)) { 94 - dev_err(dev, "Image is corrupted (bad magic)\n"); 95 - return -EINVAL; 96 - } 97 - 98 - if (ehdr->e_phnum == 0) { 80 + if (phnum == 0) { 99 81 dev_err(dev, "No loadable segments\n"); 100 82 return -EINVAL; 101 83 } 102 84 103 - if (ehdr->e_phoff > fw->size) { 85 + if (phoff > fw->size) { 104 86 dev_err(dev, "Firmware size is too small\n"); 105 87 return -EINVAL; 106 88 } 107 89 90 + dev_dbg(dev, "Firmware is an elf%d file\n", 91 + class == ELFCLASS32 ? 32 : 64); 92 + 108 93 return 0; 109 94 } 110 95 EXPORT_SYMBOL(rproc_elf_sanity_check); 96 + 97 + /** 98 + * rproc_elf_sanity_check() - Sanity Check ELF32 firmware image 99 + * @rproc: the remote processor handle 100 + * @fw: the ELF32 firmware image 101 + * 102 + * Make sure this fw image is sane. 103 + */ 104 + int rproc_elf32_sanity_check(struct rproc *rproc, const struct firmware *fw) 105 + { 106 + int ret = rproc_elf_sanity_check(rproc, fw); 107 + 108 + if (ret) 109 + return ret; 110 + 111 + if (fw_elf_get_class(fw) == ELFCLASS32) 112 + return 0; 113 + 114 + return -EINVAL; 115 + } 116 + EXPORT_SYMBOL(rproc_elf32_sanity_check); 111 117 112 118 /** 113 119 * rproc_elf_get_boot_addr() - Get rproc's boot address. ··· 144 102 * Note that the boot address is not a configurable property of all remote 145 103 * processors. Some will always boot at a specific hard-coded address. 
 */
-u32 rproc_elf_get_boot_addr(struct rproc *rproc, const struct firmware *fw)
+u64 rproc_elf_get_boot_addr(struct rproc *rproc, const struct firmware *fw)
 {
-	struct elf32_hdr *ehdr = (struct elf32_hdr *)fw->data;
-
-	return ehdr->e_entry;
+	return elf_hdr_get_e_entry(fw_elf_get_class(fw), fw->data);
 }
 EXPORT_SYMBOL(rproc_elf_get_boot_addr);
···
 int rproc_elf_load_segments(struct rproc *rproc, const struct firmware *fw)
 {
 	struct device *dev = &rproc->dev;
-	struct elf32_hdr *ehdr;
-	struct elf32_phdr *phdr;
+	const void *ehdr, *phdr;
 	int i, ret = 0;
+	u16 phnum;
 	const u8 *elf_data = fw->data;
+	u8 class = fw_elf_get_class(fw);
+	u32 elf_phdr_get_size = elf_size_of_phdr(class);

-	ehdr = (struct elf32_hdr *)elf_data;
-	phdr = (struct elf32_phdr *)(elf_data + ehdr->e_phoff);
+	ehdr = elf_data;
+	phnum = elf_hdr_get_e_phnum(class, ehdr);
+	phdr = elf_data + elf_hdr_get_e_phoff(class, ehdr);

 	/* go through the available ELF segments */
-	for (i = 0; i < ehdr->e_phnum; i++, phdr++) {
-		u32 da = phdr->p_paddr;
-		u32 memsz = phdr->p_memsz;
-		u32 filesz = phdr->p_filesz;
-		u32 offset = phdr->p_offset;
+	for (i = 0; i < phnum; i++, phdr += elf_phdr_get_size) {
+		u64 da = elf_phdr_get_p_paddr(class, phdr);
+		u64 memsz = elf_phdr_get_p_memsz(class, phdr);
+		u64 filesz = elf_phdr_get_p_filesz(class, phdr);
+		u64 offset = elf_phdr_get_p_offset(class, phdr);
+		u32 type = elf_phdr_get_p_type(class, phdr);
 		void *ptr;

-		if (phdr->p_type != PT_LOAD)
+		if (type != PT_LOAD)
 			continue;

-		dev_dbg(dev, "phdr: type %d da 0x%x memsz 0x%x filesz 0x%x\n",
-			phdr->p_type, da, memsz, filesz);
+		dev_dbg(dev, "phdr: type %d da 0x%llx memsz 0x%llx filesz 0x%llx\n",
+			type, da, memsz, filesz);

 		if (filesz > memsz) {
-			dev_err(dev, "bad phdr filesz 0x%x memsz 0x%x\n",
+			dev_err(dev, "bad phdr filesz 0x%llx memsz 0x%llx\n",
 				filesz, memsz);
 			ret = -EINVAL;
 			break;
 		}

 		if (offset + filesz > fw->size) {
-			dev_err(dev, "truncated fw: need 0x%x avail 0x%zx\n",
+			dev_err(dev, "truncated fw: need 0x%llx avail 0x%zx\n",
 				offset + filesz, fw->size);
 			ret = -EINVAL;
+			break;
+		}
+
+		if (!rproc_u64_fit_in_size_t(memsz)) {
+			dev_err(dev, "size (%llx) does not fit in size_t type\n",
+				memsz);
+			ret = -EOVERFLOW;
 			break;
 		}

 		/* grab the kernel address for this device address */
 		ptr = rproc_da_to_va(rproc, da, memsz);
 		if (!ptr) {
-			dev_err(dev, "bad phdr da 0x%x mem 0x%x\n", da, memsz);
+			dev_err(dev, "bad phdr da 0x%llx mem 0x%llx\n", da,
+				memsz);
 			ret = -EINVAL;
 			break;
 		}

 		/* put the segment where the remote processor expects it */
-		if (phdr->p_filesz)
-			memcpy(ptr, elf_data + phdr->p_offset, filesz);
+		if (filesz)
+			memcpy(ptr, elf_data + offset, filesz);

 		/*
 		 * Zero out remaining memory for this segment.
···
 			memset(ptr + filesz, 0, memsz - filesz);
 	}

+	if (ret == 0)
+		rproc->elf_class = class;
+
 	return ret;
 }
 EXPORT_SYMBOL(rproc_elf_load_segments);

-static struct elf32_shdr *
-find_table(struct device *dev, struct elf32_hdr *ehdr, size_t fw_size)
+static const void *
+find_table(struct device *dev, const struct firmware *fw)
 {
-	struct elf32_shdr *shdr;
+	const void *shdr, *name_table_shdr;
 	int i;
 	const char *name_table;
 	struct resource_table *table = NULL;
-	const u8 *elf_data = (void *)ehdr;
+	const u8 *elf_data = (void *)fw->data;
+	u8 class = fw_elf_get_class(fw);
+	size_t fw_size = fw->size;
+	const void *ehdr = elf_data;
+	u16 shnum = elf_hdr_get_e_shnum(class, ehdr);
+	u32 elf_shdr_get_size = elf_size_of_shdr(class);
+	u16 shstrndx = elf_hdr_get_e_shstrndx(class, ehdr);

 	/* look for the resource table and handle it */
-	shdr = (struct elf32_shdr *)(elf_data + ehdr->e_shoff);
-	name_table = elf_data + shdr[ehdr->e_shstrndx].sh_offset;
+	/* First, get the section header according to the elf class */
+	shdr = elf_data + elf_hdr_get_e_shoff(class, ehdr);
+	/* Compute name table section header entry in shdr array */
+	name_table_shdr = shdr + (shstrndx * elf_shdr_get_size);
+	/* Finally, compute the name table section address in elf */
+	name_table = elf_data + elf_shdr_get_sh_offset(class, name_table_shdr);

-	for (i = 0; i < ehdr->e_shnum; i++, shdr++) {
-		u32 size = shdr->sh_size;
-		u32 offset = shdr->sh_offset;
+	for (i = 0; i < shnum; i++, shdr += elf_shdr_get_size) {
+		u64 size = elf_shdr_get_sh_size(class, shdr);
+		u64 offset = elf_shdr_get_sh_offset(class, shdr);
+		u32 name = elf_shdr_get_sh_name(class, shdr);

-		if (strcmp(name_table + shdr->sh_name, ".resource_table"))
+		if (strcmp(name_table + name, ".resource_table"))
 			continue;

 		table = (struct resource_table *)(elf_data + offset);
···
 */
 int rproc_elf_load_rsc_table(struct rproc *rproc, const struct firmware *fw)
 {
-	struct elf32_hdr *ehdr;
-	struct elf32_shdr *shdr;
+	const void *shdr;
 	struct device *dev = &rproc->dev;
 	struct resource_table *table = NULL;
 	const u8 *elf_data = fw->data;
 	size_t tablesz;
+	u8 class = fw_elf_get_class(fw);
+	u64 sh_offset;

-	ehdr = (struct elf32_hdr *)elf_data;
-
-	shdr = find_table(dev, ehdr, fw->size);
+	shdr = find_table(dev, fw);
 	if (!shdr)
 		return -EINVAL;

-	table = (struct resource_table *)(elf_data + shdr->sh_offset);
-	tablesz = shdr->sh_size;
+	sh_offset = elf_shdr_get_sh_offset(class, shdr);
+	table = (struct resource_table *)(elf_data + sh_offset);
+	tablesz = elf_shdr_get_sh_size(class, shdr);

 	/*
 	 * Create a copy of the resource table. When a virtio device starts
···
 struct resource_table *rproc_elf_find_loaded_rsc_table(struct rproc *rproc,
 						       const struct firmware *fw)
 {
-	struct elf32_hdr *ehdr = (struct elf32_hdr *)fw->data;
-	struct elf32_shdr *shdr;
+	const void *shdr;
+	u64 sh_addr, sh_size;
+	u8 class = fw_elf_get_class(fw);
+	struct device *dev = &rproc->dev;

-	shdr = find_table(&rproc->dev, ehdr, fw->size);
+	shdr = find_table(&rproc->dev, fw);
 	if (!shdr)
 		return NULL;

-	return rproc_da_to_va(rproc, shdr->sh_addr, shdr->sh_size);
+	sh_addr = elf_shdr_get_sh_addr(class, shdr);
+	sh_size = elf_shdr_get_sh_size(class, shdr);
+
+	if (!rproc_u64_fit_in_size_t(sh_size)) {
+		dev_err(dev, "size (%llx) does not fit in size_t type\n",
+			sh_size);
+		return NULL;
+	}
+
+	return rproc_da_to_va(rproc, sh_addr, sh_size);
 }
 EXPORT_SYMBOL(rproc_elf_find_loaded_rsc_table);
+13 -3
drivers/remoteproc/remoteproc_internal.h
···
 void rproc_free_vring(struct rproc_vring *rvring);
 int rproc_alloc_vring(struct rproc_vdev *rvdev, int i);

-void *rproc_da_to_va(struct rproc *rproc, u64 da, int len);
+void *rproc_da_to_va(struct rproc *rproc, u64 da, size_t len);
 phys_addr_t rproc_va_to_pa(void *cpu_addr);
 int rproc_trigger_recovery(struct rproc *rproc);

+int rproc_elf32_sanity_check(struct rproc *rproc, const struct firmware *fw);
 int rproc_elf_sanity_check(struct rproc *rproc, const struct firmware *fw);
-u32 rproc_elf_get_boot_addr(struct rproc *rproc, const struct firmware *fw);
+u64 rproc_elf_get_boot_addr(struct rproc *rproc, const struct firmware *fw);
 int rproc_elf_load_segments(struct rproc *rproc, const struct firmware *fw);
 int rproc_elf_load_rsc_table(struct rproc *rproc, const struct firmware *fw);
 struct resource_table *rproc_elf_find_loaded_rsc_table(struct rproc *rproc,
···
 }

 static inline
-u32 rproc_get_boot_addr(struct rproc *rproc, const struct firmware *fw)
+u64 rproc_get_boot_addr(struct rproc *rproc, const struct firmware *fw)
 {
 	if (rproc->ops->get_boot_addr)
 		return rproc->ops->get_boot_addr(rproc, fw);
···
 		return rproc->ops->find_loaded_rsc_table(rproc, fw);

 	return NULL;
+}
+
+static inline
+bool rproc_u64_fit_in_size_t(u64 val)
+{
+	if (sizeof(size_t) == sizeof(u64))
+		return true;
+
+	return (val <= (size_t) -1);
 }

 #endif /* REMOTEPROC_INTERNAL_H */
+8
drivers/remoteproc/remoteproc_virtio.c
···
 /**
  * rproc_add_virtio_dev() - register an rproc-induced virtio device
  * @rvdev: the remote vdev
+ * @id: the device type identification (used to match it with a driver).
  *
  * This function registers a virtio device. This vdev's parent is
  * the rproc device.
···
 	struct virtio_device *vdev;
 	struct rproc_mem_entry *mem;
 	int ret;
+
+	if (rproc->ops->kick == NULL) {
+		ret = -EINVAL;
+		dev_err(dev, ".kick method not defined for %s",
+			rproc->name);
+		goto out;
+	}

 	/* Try to find dedicated vdev buffer carveout */
 	mem = rproc_find_carveout_by_name(rproc, "vdev%dbuffer", rvdev->index);
+2 -2
drivers/remoteproc/st_remoteproc.c
···
 		}
 	}

-	dev_info(&rproc->dev, "Started from 0x%x\n", rproc->bootaddr);
+	dev_info(&rproc->dev, "Started from 0x%llx\n", rproc->bootaddr);

 	return 0;
···
 	.parse_fw = st_rproc_parse_fw,
 	.load = rproc_elf_load_segments,
 	.find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
-	.sanity_check = rproc_elf_sanity_check,
+	.sanity_check = rproc_elf32_sanity_check,
 	.get_boot_addr = rproc_elf_get_boot_addr,
 };
+3 -3
drivers/remoteproc/st_slim_rproc.c
···
 	return 0;
 }

-static void *slim_rproc_da_to_va(struct rproc *rproc, u64 da, int len)
+static void *slim_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
 {
 	struct st_slim_rproc *slim_rproc = rproc->priv;
 	void *va = NULL;
···
 		}
 	}

-	dev_dbg(&rproc->dev, "da = 0x%llx len = 0x%x va = 0x%pK\n",
+	dev_dbg(&rproc->dev, "da = 0x%llx len = 0x%zx va = 0x%pK\n",
 		da, len, va);

 	return va;
···
 	.da_to_va = slim_rproc_da_to_va,
 	.get_boot_addr = rproc_elf_get_boot_addr,
 	.load = rproc_elf_load_segments,
-	.sanity_check = rproc_elf_sanity_check,
+	.sanity_check = rproc_elf32_sanity_check,
 };

 /**
+2 -2
drivers/remoteproc/stm32_rproc.c
···
 	.load = rproc_elf_load_segments,
 	.parse_fw = stm32_rproc_parse_fw,
 	.find_loaded_rsc_table = rproc_elf_find_loaded_rsc_table,
-	.sanity_check = rproc_elf_sanity_check,
+	.sanity_check = rproc_elf32_sanity_check,
 	.get_boot_addr = rproc_elf_get_boot_addr,
 };
···
 	err = stm32_rproc_get_syscon(np, "st,syscfg-pdds", &ddata->pdds);
 	if (err)
-		dev_warn(dev, "failed to get pdds\n");
+		dev_info(dev, "failed to get pdds\n");

 	rproc->auto_boot = of_property_read_bool(np, "st,auto-boot");
+2 -2
drivers/remoteproc/wkup_m3_rproc.c
···
 	return 0;
 }

-static void *wkup_m3_rproc_da_to_va(struct rproc *rproc, u64 da, int len)
+static void *wkup_m3_rproc_da_to_va(struct rproc *rproc, u64 da, size_t len)
 {
 	struct wkup_m3_rproc *wkupm3 = rproc->priv;
 	void *va = NULL;
 	int i;
 	u32 offset;

-	if (len <= 0)
+	if (len == 0)
 		return NULL;

 	for (i = 0; i < WKUPM3_MEM_MAX; i++) {
-51
include/linux/platform_data/remoteproc-omap.h
···
-/* SPDX-License-Identifier: GPL-2.0-only */
-/*
- * Remote Processor - omap-specific bits
- *
- * Copyright (C) 2011 Texas Instruments, Inc.
- * Copyright (C) 2011 Google, Inc.
- */
-
-#ifndef _PLAT_REMOTEPROC_H
-#define _PLAT_REMOTEPROC_H
-
-struct rproc_ops;
-struct platform_device;
-
-/*
- * struct omap_rproc_pdata - omap remoteproc's platform data
- * @name: the remoteproc's name
- * @oh_name: omap hwmod device
- * @oh_name_opt: optional, secondary omap hwmod device
- * @firmware: name of firmware file to load
- * @mbox_name: name of omap mailbox device to use with this rproc
- * @ops: start/stop rproc handlers
- * @device_enable: omap-specific handler for enabling a device
- * @device_shutdown: omap-specific handler for shutting down a device
- * @set_bootaddr: omap-specific handler for setting the rproc boot address
- */
-struct omap_rproc_pdata {
-	const char *name;
-	const char *oh_name;
-	const char *oh_name_opt;
-	const char *firmware;
-	const char *mbox_name;
-	const struct rproc_ops *ops;
-	int (*device_enable)(struct platform_device *pdev);
-	int (*device_shutdown)(struct platform_device *pdev);
-	void (*set_bootaddr)(u32);
-};
-
-#if defined(CONFIG_OMAP_REMOTEPROC) || defined(CONFIG_OMAP_REMOTEPROC_MODULE)
-
-void __init omap_rproc_reserve_cma(void);
-
-#else
-
-static inline void __init omap_rproc_reserve_cma(void)
-{
-}
-
-#endif
-
-#endif /* _PLAT_REMOTEPROC_H */
+10 -6
include/linux/remoteproc.h
···
 struct rproc_mem_entry {
 	void *va;
 	dma_addr_t dma;
-	int len;
+	size_t len;
 	u32 da;
 	void *priv;
 	char name[32];
···
  *	expects to find it
  * @sanity_check: sanity check the fw image
  * @get_boot_addr: get boot address to entry point specified in firmware
+ * @panic: optional callback to react to system panic, core will delay
+ *	panic at least the returned number of milliseconds
  */
 struct rproc_ops {
 	int (*start)(struct rproc *rproc);
 	int (*stop)(struct rproc *rproc);
 	void (*kick)(struct rproc *rproc, int vqid);
-	void * (*da_to_va)(struct rproc *rproc, u64 da, int len);
+	void * (*da_to_va)(struct rproc *rproc, u64 da, size_t len);
 	int (*parse_fw)(struct rproc *rproc, const struct firmware *fw);
 	int (*handle_rsc)(struct rproc *rproc, u32 rsc_type, void *rsc,
 			  int offset, int avail);
···
 			struct rproc *rproc, const struct firmware *fw);
 	int (*load)(struct rproc *rproc, const struct firmware *fw);
 	int (*sanity_check)(struct rproc *rproc, const struct firmware *fw);
-	u32 (*get_boot_addr)(struct rproc *rproc, const struct firmware *fw);
+	u64 (*get_boot_addr)(struct rproc *rproc, const struct firmware *fw);
+	unsigned long (*panic)(struct rproc *rproc);
 };

 /**
···
 	int num_traces;
 	struct list_head carveouts;
 	struct list_head mappings;
-	u32 bootaddr;
+	u64 bootaddr;
 	struct list_head rvdevs;
 	struct list_head subdevs;
 	struct idr notifyids;
···
 	bool auto_boot;
 	struct list_head dump_segments;
 	int nb_vdev;
+	u8 elf_class;
 };

 /**
···
 struct rproc_mem_entry *
 rproc_mem_entry_init(struct device *dev,
-		     void *va, dma_addr_t dma, int len, u32 da,
+		     void *va, dma_addr_t dma, size_t len, u32 da,
 		     int (*alloc)(struct rproc *, struct rproc_mem_entry *),
 		     int (*release)(struct rproc *, struct rproc_mem_entry *),
 		     const char *name, ...);

 struct rproc_mem_entry *
-rproc_of_resm_mem_entry_init(struct device *dev, u32 of_resm_idx, int len,
+rproc_of_resm_mem_entry_init(struct device *dev, u32 of_resm_idx, size_t len,
 			     u32 da, const char *name, ...);

 int rproc_boot(struct rproc *rproc);