Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'soc-drivers-6.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc

Pull more SoC driver updates from Arnd Bergmann:
"These updates came a little late, or were based on a later 6.18-rc tag
than the others:

- A new driver for cache management on CXL devices with memory shared
in a coherent cluster. This is part of the drivers/cache/ tree, but
unlike the other drivers that back the dma-mapping interfaces, this
one is needed only during CPU hotplug.

- A shared branch for reset controllers using swnode infrastructure

- Added support for new SoC variants in the Amlogic soc_device
identification

- Minor updates in Freescale, Microchip, Samsung, and Apple SoC
drivers"

* tag 'soc-drivers-6.19-2' of git://git.kernel.org/pub/scm/linux/kernel/git/soc/soc: (24 commits)
soc: samsung: exynos-pmu: fix device leak on regmap lookup
soc: samsung: exynos-pmu: Fix structure initialization
soc: fsl: qbman: use kmalloc_array() instead of kmalloc()
soc: fsl: qbman: add WQ_PERCPU to alloc_workqueue users
MAINTAINERS: Update email address for Christophe Leroy
MAINTAINERS: refer to intended file in STANDALONE CACHE CONTROLLER DRIVERS
cache: Support cache maintenance for HiSilicon SoC Hydra Home Agent
cache: Make top level Kconfig menu a boolean dependent on RISCV
MAINTAINERS: Add Jonathan Cameron to drivers/cache and add lib/cache_maint.c + header
arm64: Select GENERIC_CPU_CACHE_MAINTENANCE
lib: Support ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION
soc: amlogic: meson-gx-socinfo: add new SoCs id
dt-bindings: arm: amlogic: meson-gx-ao-secure: support more SoCs
memregion: Support fine grained invalidate by cpu_cache_invalidate_memregion()
memregion: Drop unused IORES_DESC_* parameter from cpu_cache_invalidate_memregion()
dt-bindings: cache: sifive,ccache0: add a pic64gx compatible
MAINTAINERS: rename Microchip RISC-V entry
MAINTAINERS: add new soc drivers to Microchip RISC-V entry
soc: microchip: add mfd drivers for two syscon regions on PolarFire SoC
dt-bindings: soc: microchip: document the simple-mfd syscon on PolarFire SoC
...

+648 -47
+3
.mailmap
··· 186 186 Christian Brauner <brauner@kernel.org> <christian.brauner@canonical.com> 187 187 Christian Brauner <brauner@kernel.org> <christian.brauner@ubuntu.com> 188 188 Christian Marangi <ansuelsmth@gmail.com> 189 + Christophe Leroy <chleroy@kernel.org> <christophe.leroy@c-s.fr> 190 + Christophe Leroy <chleroy@kernel.org> <christophe.leroy@csgroup.eu> 191 + Christophe Leroy <chleroy@kernel.org> <christophe.leroy2@cs-soprasteria.com> 189 192 Christophe Ricard <christophe.ricard@gmail.com> 190 193 Christopher Obbard <christopher.obbard@linaro.org> <chris.obbard@collabora.com> 191 194 Christoph Hellwig <hch@lst.de>
+3
Documentation/devicetree/bindings/arm/amlogic/amlogic,meson-gx-ao-secure.yaml
··· 34 34 - amlogic,a4-ao-secure 35 35 - amlogic,c3-ao-secure 36 36 - amlogic,s4-ao-secure 37 + - amlogic,s6-ao-secure 38 + - amlogic,s7-ao-secure 39 + - amlogic,s7d-ao-secure 37 40 - amlogic,t7-ao-secure 38 41 - const: amlogic,meson-gx-ao-secure 39 42 - const: syscon
+5
Documentation/devicetree/bindings/cache/sifive,ccache0.yaml
··· 48 48 - const: microchip,mpfs-ccache 49 49 - const: sifive,fu540-c000-ccache 50 50 - const: cache 51 + - items: 52 + - const: microchip,pic64gx-ccache 53 + - const: microchip,mpfs-ccache 54 + - const: sifive,fu540-c000-ccache 55 + - const: cache 51 56 52 57 cache-block-size: 53 58 const: 64
+47
Documentation/devicetree/bindings/soc/microchip/microchip,mpfs-mss-top-sysreg.yaml
··· 1 + # SPDX-License-Identifier: GPL-2.0-only OR BSD-2-Clause 2 + %YAML 1.2 3 + --- 4 + $id: http://devicetree.org/schemas/soc/microchip/microchip,mpfs-mss-top-sysreg.yaml# 5 + $schema: http://devicetree.org/meta-schemas/core.yaml# 6 + 7 + title: Microchip PolarFire SoC Microprocessor Subsystem (MSS) sysreg register region 8 + 9 + maintainers: 10 + - Conor Dooley <conor.dooley@microchip.com> 11 + 12 + description: 13 + A wide assortment of registers that control elements of the MSS on PolarFire 14 + SoC, including pinmuxing, resets and clocks among others. 15 + 16 + properties: 17 + compatible: 18 + items: 19 + - const: microchip,mpfs-mss-top-sysreg 20 + - const: syscon 21 + 22 + reg: 23 + maxItems: 1 24 + 25 + '#reset-cells': 26 + description: 27 + The AHB/AXI peripherals on the PolarFire SoC have reset support, 28 + from CLK_ENVM to CLK_CFM. The reset consumer should specify the 29 + desired peripheral via the clock ID in its "resets" phandle cell. 30 + See include/dt-bindings/clock/microchip,mpfs-clock.h for the full list 31 + of PolarFire clock/reset IDs. 32 + const: 1 33 + 34 + required: 35 + - compatible 36 + - reg 37 + 38 + additionalProperties: false 39 + 40 + examples: 41 + - | 42 + syscon@20002000 { 43 + compatible = "microchip,mpfs-mss-top-sysreg", "syscon"; 44 + reg = <0x20002000 0x1000>; 45 + #reset-cells = <1>; 46 + }; 47 +
+11 -6
MAINTAINERS
··· 4590 4590 4591 4591 BPF JIT for POWERPC (32-BIT AND 64-BIT) 4592 4592 M: Hari Bathini <hbathini@linux.ibm.com> 4593 - M: Christophe Leroy <christophe.leroy@csgroup.eu> 4593 + M: Christophe Leroy (CS GROUP) <chleroy@kernel.org> 4594 4594 R: Naveen N Rao <naveen@kernel.org> 4595 4595 L: bpf@vger.kernel.org 4596 4596 S: Supported ··· 10082 10082 10083 10083 FREESCALE QUICC ENGINE LIBRARY 10084 10084 M: Qiang Zhao <qiang.zhao@nxp.com> 10085 - M: Christophe Leroy <christophe.leroy@csgroup.eu> 10085 + M: Christophe Leroy (CS GROUP) <chleroy@kernel.org> 10086 10086 L: linuxppc-dev@lists.ozlabs.org 10087 10087 S: Maintained 10088 10088 F: drivers/soc/fsl/qe/ ··· 10135 10135 F: drivers/tty/serial/ucc_uart.c 10136 10136 10137 10137 FREESCALE SOC DRIVERS 10138 - M: Christophe Leroy <christophe.leroy@csgroup.eu> 10138 + M: Christophe Leroy (CS GROUP) <chleroy@kernel.org> 10139 10139 L: linuxppc-dev@lists.ozlabs.org 10140 10140 L: linux-arm-kernel@lists.infradead.org (moderated for non-subscribers) 10141 10141 S: Maintained ··· 14400 14400 M: Madhavan Srinivasan <maddy@linux.ibm.com> 14401 14401 M: Michael Ellerman <mpe@ellerman.id.au> 14402 14402 R: Nicholas Piggin <npiggin@gmail.com> 14403 - R: Christophe Leroy <christophe.leroy@csgroup.eu> 14403 + R: Christophe Leroy (CS GROUP) <chleroy@kernel.org> 14404 14404 L: linuxppc-dev@lists.ozlabs.org 14405 14405 S: Supported 14406 14406 W: https://github.com/linuxppc/wiki/wiki ··· 14456 14456 F: arch/powerpc/platforms/85xx/ 14457 14457 14458 14458 LINUX FOR POWERPC EMBEDDED PPC8XX AND PPC83XX 14459 - M: Christophe Leroy <christophe.leroy@csgroup.eu> 14459 + M: Christophe Leroy (CS GROUP) <chleroy@kernel.org> 14460 14460 L: linuxppc-dev@lists.ozlabs.org 14461 14461 S: Maintained 14462 14462 F: arch/powerpc/platforms/8xx/ ··· 22316 22316 F: Documentation/devicetree/bindings/iommu/riscv,iommu.yaml 22317 22317 F: drivers/iommu/riscv/ 22318 22318 22319 - RISC-V MICROCHIP FPGA SUPPORT 22319 + RISC-V MICROCHIP SUPPORT 22320 22320 M: Conor Dooley <conor.dooley@microchip.com> 22321 22321 M: Daire McNamara <daire.mcnamara@microchip.com> 22322 22322 L: linux-riscv@lists.infradead.org ··· 22343 22343 F: drivers/pwm/pwm-microchip-core.c 22344 22344 F: drivers/reset/reset-mpfs.c 22345 22345 F: drivers/rtc/rtc-mpfs.c 22346 + F: drivers/soc/microchip/mpfs-control-scb.c 22347 + F: drivers/soc/microchip/mpfs-mss-top-sysreg.c 22346 22348 F: drivers/soc/microchip/mpfs-sys-controller.c 22347 22349 F: drivers/spi/spi-microchip-core-qspi.c 22348 22350 F: drivers/spi/spi-mpfs.c ··· 24698 24696 24699 24697 STANDALONE CACHE CONTROLLER DRIVERS 24700 24698 M: Conor Dooley <conor@kernel.org> 24699 + M: Jonathan Cameron <jonathan.cameron@huawei.com> 24701 24700 S: Maintained 24702 24701 T: git https://git.kernel.org/pub/scm/linux/kernel/git/conor/linux.git/ 24703 24702 F: Documentation/devicetree/bindings/cache/ 24704 24703 F: drivers/cache 24704 + F: include/linux/cache_coherency.h 24705 + F: lib/cache_maint.c 24705 24706 24706 24707 STARFIRE/DURALAN NETWORK DRIVER 24707 24708 M: Ion Badulescu <ionut@badula.org>
+2
arch/arm64/Kconfig
··· 21 21 select ARCH_ENABLE_THP_MIGRATION if TRANSPARENT_HUGEPAGE 22 22 select ARCH_HAS_CACHE_LINE_SIZE 23 23 select ARCH_HAS_CC_PLATFORM 24 + select ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION 24 25 select ARCH_HAS_CURRENT_STACK_POINTER 25 26 select ARCH_HAS_DEBUG_VIRTUAL 26 27 select ARCH_HAS_DEBUG_VM_PGTABLE ··· 149 148 select GENERIC_ARCH_TOPOLOGY 150 149 select GENERIC_CLOCKEVENTS_BROADCAST 151 150 select GENERIC_CPU_AUTOPROBE 151 + select GENERIC_CPU_CACHE_MAINTENANCE 152 152 select GENERIC_CPU_DEVICES 153 153 select GENERIC_CPU_VULNERABILITIES 154 154 select GENERIC_EARLY_IOREMAP
+1 -1
arch/x86/mm/pat/set_memory.c
··· 368 368 } 369 369 EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, "DEVMEM"); 370 370 371 - int cpu_cache_invalidate_memregion(int res_desc) 371 + int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len) 372 372 { 373 373 if (WARN_ON_ONCE(!cpu_cache_has_invalidate_memregion())) 374 374 return -ENXIO;
+33 -4
drivers/cache/Kconfig
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 - menu "Cache Drivers" 2 + 3 + menuconfig CACHEMAINT_FOR_DMA 4 + bool "Cache management for noncoherent DMA" 5 + depends on RISCV 6 + default y 7 + help 8 + These drivers implement support for noncoherent DMA master devices 9 + on platforms that lack the standard CPU interfaces for this. 10 + 11 + if CACHEMAINT_FOR_DMA 3 12 4 13 config AX45MP_L2_CACHE 5 14 bool "Andes Technology AX45MP L2 Cache controller" 6 - depends on RISCV 7 15 select RISCV_NONSTANDARD_CACHE_OPS 8 16 help 9 17 Support for the L2 cache controller on Andes Technology AX45MP platforms. ··· 24 16 25 17 config STARFIVE_STARLINK_CACHE 26 18 bool "StarFive StarLink Cache controller" 27 - depends on RISCV 28 19 depends on ARCH_STARFIVE 29 20 depends on 64BIT 30 21 select RISCV_DMA_NONCOHERENT ··· 31 24 help 32 25 Support for the StarLink cache controller IP from StarFive. 33 26 34 - endmenu 27 + endif #CACHEMAINT_FOR_DMA 28 + 29 + menuconfig CACHEMAINT_FOR_HOTPLUG 30 + bool "Cache management for memory hot plug like operations" 31 + depends on GENERIC_CPU_CACHE_MAINTENANCE 32 + help 33 + These drivers implement cache management for flows where it is necessary 34 + to flush data from all host caches. 35 + 36 + if CACHEMAINT_FOR_HOTPLUG 37 + 38 + config HISI_SOC_HHA 39 + tristate "HiSilicon Hydra Home Agent (HHA) device driver" 40 + depends on (ARM64 && ACPI) || COMPILE_TEST 41 + help 42 + The Hydra Home Agent (HHA) is responsible for cache coherency 43 + on the SoC. This driver enables the cache maintenance functions of 44 + the HHA. 45 + 46 + This driver can be built as a module. If so, the module will be 47 + called hisi_soc_hha. 48 + 49 + endif #CACHEMAINT_FOR_HOTPLUG
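The Kconfig split above leaves the noncoherent-DMA cache drivers behind a RISC-V-only menu and puts the hotplug-flow drivers behind a new menu gated on GENERIC_CPU_CACHE_MAINTENANCE (which arm64 now selects). On an arm64 ACPI system, enabling the new HiSilicon driver as a module would look roughly like this illustrative .config fragment:

```
CONFIG_GENERIC_CPU_CACHE_MAINTENANCE=y
CONFIG_CACHEMAINT_FOR_HOTPLUG=y
CONFIG_HISI_SOC_HHA=m
```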
+2
drivers/cache/Makefile
··· 3 3 obj-$(CONFIG_AX45MP_L2_CACHE) += ax45mp_cache.o 4 4 obj-$(CONFIG_SIFIVE_CCACHE) += sifive_ccache.o 5 5 obj-$(CONFIG_STARFIVE_STARLINK_CACHE) += starfive_starlink_cache.o 6 + 7 + obj-$(CONFIG_HISI_SOC_HHA) += hisi_soc_hha.o
+194
drivers/cache/hisi_soc_hha.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Driver for HiSilicon Hydra Home Agent (HHA). 4 + * 5 + * Copyright (c) 2025 HiSilicon Technologies Co., Ltd. 6 + * Author: Yicong Yang <yangyicong@hisilicon.com> 7 + * Yushan Wang <wangyushan12@huawei.com> 8 + * 9 + * A system typically contains multiple HHAs. Each is responsible for a subset 10 + * of the physical addresses in the system, but interleave can make the mapping 11 + * from a particular cache line to a responsible HHA complex. As such no 12 + * filtering is done in the driver, with the hardware being responsible for 13 + * responding with success even if it was not responsible for any addresses 14 + * in the range on which the operation was requested. 15 + */ 16 + 17 + #include <linux/bitfield.h> 18 + #include <linux/cache_coherency.h> 19 + #include <linux/dev_printk.h> 20 + #include <linux/init.h> 21 + #include <linux/io.h> 22 + #include <linux/iopoll.h> 23 + #include <linux/kernel.h> 24 + #include <linux/memregion.h> 25 + #include <linux/module.h> 26 + #include <linux/mod_devicetable.h> 27 + #include <linux/mutex.h> 28 + #include <linux/platform_device.h> 29 + 30 + #define HISI_HHA_CTRL 0x5004 31 + #define HISI_HHA_CTRL_EN BIT(0) 32 + #define HISI_HHA_CTRL_RANGE BIT(1) 33 + #define HISI_HHA_CTRL_TYPE GENMASK(3, 2) 34 + #define HISI_HHA_START_L 0x5008 35 + #define HISI_HHA_START_H 0x500c 36 + #define HISI_HHA_LEN_L 0x5010 37 + #define HISI_HHA_LEN_H 0x5014 38 + 39 + /* The maintenance operation is performed at a 128 byte granularity */ 40 + #define HISI_HHA_MAINT_ALIGN 128 41 + 42 + #define HISI_HHA_POLL_GAP_US 10 43 + #define HISI_HHA_POLL_TIMEOUT_US 50000 44 + 45 + struct hisi_soc_hha { 46 + /* Must be first element */ 47 + struct cache_coherency_ops_inst cci; 48 + /* Locks HHA instance to forbid overlapping access. */ 49 + struct mutex lock; 50 + void __iomem *base; 51 + }; 52 + 53 + static bool hisi_hha_cache_maintain_wait_finished(struct hisi_soc_hha *soc_hha) 54 + { 55 + u32 val; 56 + 57 + return !readl_poll_timeout_atomic(soc_hha->base + HISI_HHA_CTRL, val, 58 + !(val & HISI_HHA_CTRL_EN), 59 + HISI_HHA_POLL_GAP_US, 60 + HISI_HHA_POLL_TIMEOUT_US); 61 + } 62 + 63 + static int hisi_soc_hha_wbinv(struct cache_coherency_ops_inst *cci, 64 + struct cc_inval_params *invp) 65 + { 66 + struct hisi_soc_hha *soc_hha = 67 + container_of(cci, struct hisi_soc_hha, cci); 68 + phys_addr_t top, addr = invp->addr; 69 + size_t size = invp->size; 70 + u32 reg; 71 + 72 + if (!size) 73 + return -EINVAL; 74 + 75 + addr = ALIGN_DOWN(addr, HISI_HHA_MAINT_ALIGN); 76 + top = ALIGN(addr + size, HISI_HHA_MAINT_ALIGN); 77 + size = top - addr; 78 + 79 + guard(mutex)(&soc_hha->lock); 80 + 81 + if (!hisi_hha_cache_maintain_wait_finished(soc_hha)) 82 + return -EBUSY; 83 + 84 + /* 85 + * Hardware will search for addresses ranging [addr, addr + size - 1], 86 + * last byte included, and perform maintenance in 128 byte granules 87 + * on those cachelines which contain the addresses. If a given instance 88 + * is either not responsible for a cacheline or that cacheline is not 89 + * currently present then the search will fail, no operation will be 90 + * necessary and the device will report success. 91 + */ 92 + size -= 1; 93 + 94 + writel(lower_32_bits(addr), soc_hha->base + HISI_HHA_START_L); 95 + writel(upper_32_bits(addr), soc_hha->base + HISI_HHA_START_H); 96 + writel(lower_32_bits(size), soc_hha->base + HISI_HHA_LEN_L); 97 + writel(upper_32_bits(size), soc_hha->base + HISI_HHA_LEN_H); 98 + 99 + reg = FIELD_PREP(HISI_HHA_CTRL_TYPE, 1); /* Clean Invalid */ 100 + reg |= HISI_HHA_CTRL_RANGE | HISI_HHA_CTRL_EN; 101 + writel(reg, soc_hha->base + HISI_HHA_CTRL); 102 + 103 + return 0; 104 + } 105 + 106 + static int hisi_soc_hha_done(struct cache_coherency_ops_inst *cci) 107 + { 108 + struct hisi_soc_hha *soc_hha = 109 + container_of(cci, struct hisi_soc_hha, cci); 110 + 111 + guard(mutex)(&soc_hha->lock); 112 + if (!hisi_hha_cache_maintain_wait_finished(soc_hha)) 113 + return -ETIMEDOUT; 114 + 115 + return 0; 116 + } 117 + 118 + static const struct cache_coherency_ops hha_ops = { 119 + .wbinv = hisi_soc_hha_wbinv, 120 + .done = hisi_soc_hha_done, 121 + }; 122 + 123 + static int hisi_soc_hha_probe(struct platform_device *pdev) 124 + { 125 + struct hisi_soc_hha *soc_hha; 126 + struct resource *mem; 127 + int ret; 128 + 129 + soc_hha = cache_coherency_ops_instance_alloc(&hha_ops, 130 + struct hisi_soc_hha, cci); 131 + if (!soc_hha) 132 + return -ENOMEM; 133 + 134 + platform_set_drvdata(pdev, soc_hha); 135 + 136 + mutex_init(&soc_hha->lock); 137 + 138 + mem = platform_get_resource(pdev, IORESOURCE_MEM, 0); 139 + if (!mem) { 140 + ret = -ENOMEM; 141 + goto err_free_cci; 142 + } 143 + 144 + soc_hha->base = ioremap(mem->start, resource_size(mem)); 145 + if (!soc_hha->base) { 146 + ret = dev_err_probe(&pdev->dev, -ENOMEM, 147 + "failed to remap io memory"); 148 + goto err_free_cci; 149 + } 150 + 151 + ret = cache_coherency_ops_instance_register(&soc_hha->cci); 152 + if (ret) 153 + goto err_iounmap; 154 + 155 + return 0; 156 + 157 + err_iounmap: 158 + iounmap(soc_hha->base); 159 + err_free_cci: 160 + cache_coherency_ops_instance_put(&soc_hha->cci); 161 + return ret; 162 + } 163 + 164 + static void hisi_soc_hha_remove(struct platform_device *pdev) 165 + { 166 + struct hisi_soc_hha *soc_hha = platform_get_drvdata(pdev); 167 + 168 + cache_coherency_ops_instance_unregister(&soc_hha->cci); 169 + iounmap(soc_hha->base); 170 + cache_coherency_ops_instance_put(&soc_hha->cci); 171 + } 172 + 173 + static const struct acpi_device_id hisi_soc_hha_ids[] = { 174 + { "HISI0511", }, 175 + { } 176 + }; 177 + MODULE_DEVICE_TABLE(acpi, hisi_soc_hha_ids); 178 + 179 + static struct platform_driver hisi_soc_hha_driver = { 180 + .driver = { 181 + .name = "hisi_soc_hha", 182 + .acpi_match_table = hisi_soc_hha_ids, 183 + }, 184 + .probe = hisi_soc_hha_probe, 185 + .remove = hisi_soc_hha_remove, 186 + }; 187 + 188 + module_platform_driver(hisi_soc_hha_driver); 189 + 190 + MODULE_IMPORT_NS("CACHE_COHERENCY"); 191 + MODULE_DESCRIPTION("HiSilicon Hydra Home Agent driver supporting cache maintenance"); 192 + MODULE_AUTHOR("Yicong Yang <yangyicong@hisilicon.com>"); 193 + MODULE_AUTHOR("Yushan Wang <wangyushan12@huawei.com>"); 194 + MODULE_LICENSE("GPL");
+4 -1
drivers/cxl/core/region.c
··· 236 236 return -ENXIO; 237 237 } 238 238 239 - cpu_cache_invalidate_memregion(IORES_DESC_CXL); 239 + if (!cxlr->params.res) 240 + return -ENXIO; 241 + cpu_cache_invalidate_memregion(cxlr->params.res->start, 242 + resource_size(cxlr->params.res)); 240 243 return 0; 241 244 } 242 245
+1 -1
drivers/nvdimm/region.c
··· 110 110 * here is ok. 111 111 */ 112 112 if (cpu_cache_has_invalidate_memregion()) 113 - cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY); 113 + cpu_cache_invalidate_all(); 114 114 } 115 115 116 116 static int child_notify(struct device *dev, void *data)
+1 -1
drivers/nvdimm/region_devs.c
··· 90 90 } 91 91 } 92 92 93 - cpu_cache_invalidate_memregion(IORES_DESC_PERSISTENT_MEMORY); 93 + cpu_cache_invalidate_all(); 94 94 out: 95 95 for (i = 0; i < nd_region->ndr_mappings; i++) { 96 96 struct nd_mapping *nd_mapping = &nd_region->mapping[i];
+4 -8
drivers/soc/amlogic/meson-canvas.c
··· 60 60 return ERR_PTR(-ENODEV); 61 61 62 62 canvas_pdev = of_find_device_by_node(canvas_node); 63 - if (!canvas_pdev) { 64 - of_node_put(canvas_node); 65 - return ERR_PTR(-EPROBE_DEFER); 66 - } 67 - 68 63 of_node_put(canvas_node); 64 + if (!canvas_pdev) 65 + return ERR_PTR(-EPROBE_DEFER); 69 66 70 67 /* 71 68 * If priv is NULL, it's probably because the canvas hasn't ··· 70 73 * current state, this driver probe cannot return -EPROBE_DEFER 71 74 */ 72 75 canvas = dev_get_drvdata(&canvas_pdev->dev); 73 - if (!canvas) { 74 - put_device(&canvas_pdev->dev); 76 + put_device(&canvas_pdev->dev); 77 + if (!canvas) 75 78 return ERR_PTR(-EINVAL); 76 - } 77 79 78 80 return canvas; 79 81 }
+6
drivers/soc/amlogic/meson-gx-socinfo.c
··· 46 46 { "A5", 0x3c }, 47 47 { "C3", 0x3d }, 48 48 { "A4", 0x40 }, 49 + { "S7", 0x46 }, 50 + { "S7D", 0x47 }, 51 + { "S6", 0x48 }, 49 52 }; 50 53 51 54 static const struct meson_gx_package_id { ··· 89 86 { "A311D2", 0x36, 0x1, 0xf }, 90 87 { "A113X2", 0x3c, 0x1, 0xf }, 91 88 { "A113L2", 0x40, 0x1, 0xf }, 89 + { "S805X3", 0x46, 0x3, 0xf }, 90 + { "S905X5M", 0x47, 0x1, 0xf }, 91 + { "S905X5", 0x48, 0x1, 0xf }, 92 92 }; 93 93 94 94 static inline unsigned int socinfo_to_major(u32 socinfo)
+11 -4
drivers/soc/apple/mailbox.c
··· 302 302 return ERR_PTR(-EPROBE_DEFER); 303 303 304 304 mbox = platform_get_drvdata(pdev); 305 - if (!mbox) 306 - return ERR_PTR(-EPROBE_DEFER); 305 + if (!mbox) { 306 + mbox = ERR_PTR(-EPROBE_DEFER); 307 + goto out_put_pdev; 308 + } 307 309 308 - if (!device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_CONSUMER)) 309 - return ERR_PTR(-ENODEV); 310 + if (!device_link_add(dev, &pdev->dev, DL_FLAG_AUTOREMOVE_CONSUMER)) { 311 + mbox = ERR_PTR(-ENODEV); 312 + goto out_put_pdev; 313 + } 314 + 315 + out_put_pdev: 316 + put_device(&pdev->dev); 310 317 311 318 return mbox; 312 319 }
+2 -11
drivers/soc/apple/sart.c
··· 214 214 return 0; 215 215 } 216 216 217 - static void apple_sart_put_device(void *dev) 218 - { 219 - put_device(dev); 220 - } 221 - 222 217 struct apple_sart *devm_apple_sart_get(struct device *dev) 223 218 { 224 219 struct device_node *sart_node; 225 220 struct platform_device *sart_pdev; 226 221 struct apple_sart *sart; 227 - int ret; 228 222 229 223 sart_node = of_parse_phandle(dev->of_node, "apple,sart", 0); 230 224 if (!sart_node) ··· 236 242 return ERR_PTR(-EPROBE_DEFER); 237 243 } 238 244 239 - ret = devm_add_action_or_reset(dev, apple_sart_put_device, 240 - &sart_pdev->dev); 241 - if (ret) 242 - return ERR_PTR(ret); 243 - 244 245 device_link_add(dev, &sart_pdev->dev, 245 246 DL_FLAG_PM_RUNTIME | DL_FLAG_AUTOREMOVE_SUPPLIER); 247 + 248 + put_device(&sart_pdev->dev); 246 249 247 250 return sart; 248 251 }
+1 -1
drivers/soc/fsl/qbman/qman.c
··· 1073 1073 1074 1074 int qman_wq_alloc(void) 1075 1075 { 1076 - qm_portal_wq = alloc_workqueue("qman_portal_wq", 0, 1); 1076 + qm_portal_wq = alloc_workqueue("qman_portal_wq", WQ_PERCPU, 1); 1077 1077 if (!qm_portal_wq) 1078 1078 return -ENOMEM; 1079 1079 return 0;
+1 -1
drivers/soc/fsl/qbman/qman_test_stash.c
··· 219 219 220 220 pcfg = qman_get_qm_portal_config(qman_dma_portal); 221 221 222 - __frame_ptr = kmalloc(4 * HP_NUM_WORDS, GFP_KERNEL); 222 + __frame_ptr = kmalloc_array(4, HP_NUM_WORDS, GFP_KERNEL); 223 223 if (!__frame_ptr) 224 224 return -ENOMEM; 225 225
+12
drivers/soc/microchip/Kconfig
··· 9 9 module will be called mpfs_system_controller. 10 10 11 11 If unsure, say N. 12 + 13 + config POLARFIRE_SOC_SYSCONS 14 + bool "PolarFire SoC (MPFS) syscon drivers" 15 + default y 16 + depends on ARCH_MICROCHIP 17 + select MFD_CORE 18 + help 19 + These drivers add support for the syscons on PolarFire SoC (MPFS). 20 + Without these drivers core parts of the kernel such as clocks 21 + and resets will not function correctly. 22 + 23 + If unsure, and on a PolarFire SoC, say y.
+1
drivers/soc/microchip/Makefile
··· 1 1 obj-$(CONFIG_POLARFIRE_SOC_SYS_CTRL) += mpfs-sys-controller.o 2 + obj-$(CONFIG_POLARFIRE_SOC_SYSCONS) += mpfs-control-scb.o mpfs-mss-top-sysreg.o
+38
drivers/soc/microchip/mpfs-control-scb.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/array_size.h> 4 + #include <linux/of.h> 5 + #include <linux/mfd/core.h> 6 + #include <linux/mfd/syscon.h> 7 + #include <linux/platform_device.h> 8 + 9 + static const struct mfd_cell mpfs_control_scb_devs[] = { 10 + MFD_CELL_NAME("mpfs-tvs"), 11 + }; 12 + 13 + static int mpfs_control_scb_probe(struct platform_device *pdev) 14 + { 15 + struct device *dev = &pdev->dev; 16 + 17 + return mfd_add_devices(dev, PLATFORM_DEVID_NONE, mpfs_control_scb_devs, 18 + ARRAY_SIZE(mpfs_control_scb_devs), NULL, 0, NULL); 19 + } 20 + 21 + static const struct of_device_id mpfs_control_scb_of_match[] = { 22 + { .compatible = "microchip,mpfs-control-scb", }, 23 + {}, 24 + }; 25 + MODULE_DEVICE_TABLE(of, mpfs_control_scb_of_match); 26 + 27 + static struct platform_driver mpfs_control_scb_driver = { 28 + .driver = { 29 + .name = "mpfs-control-scb", 30 + .of_match_table = mpfs_control_scb_of_match, 31 + }, 32 + .probe = mpfs_control_scb_probe, 33 + }; 34 + module_platform_driver(mpfs_control_scb_driver); 35 + 36 + MODULE_LICENSE("GPL"); 37 + MODULE_AUTHOR("Conor Dooley <conor.dooley@microchip.com>"); 38 + MODULE_DESCRIPTION("PolarFire SoC control scb driver");
+44
drivers/soc/microchip/mpfs-mss-top-sysreg.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #include <linux/array_size.h> 4 + #include <linux/of.h> 5 + #include <linux/mfd/core.h> 6 + #include <linux/mfd/syscon.h> 7 + #include <linux/of_platform.h> 8 + #include <linux/platform_device.h> 9 + 10 + static const struct mfd_cell mpfs_mss_top_sysreg_devs[] = { 11 + MFD_CELL_NAME("mpfs-reset"), 12 + }; 13 + 14 + static int mpfs_mss_top_sysreg_probe(struct platform_device *pdev) 15 + { 16 + struct device *dev = &pdev->dev; 17 + int ret; 18 + 19 + ret = mfd_add_devices(dev, PLATFORM_DEVID_NONE, mpfs_mss_top_sysreg_devs, 20 + ARRAY_SIZE(mpfs_mss_top_sysreg_devs) , NULL, 0, NULL); 21 + if (ret) 22 + return ret; 23 + 24 + return devm_of_platform_populate(dev); 25 + } 26 + 27 + static const struct of_device_id mpfs_mss_top_sysreg_of_match[] = { 28 + { .compatible = "microchip,mpfs-mss-top-sysreg", }, 29 + {}, 30 + }; 31 + MODULE_DEVICE_TABLE(of, mpfs_mss_top_sysreg_of_match); 32 + 33 + static struct platform_driver mpfs_mss_top_sysreg_driver = { 34 + .driver = { 35 + .name = "mpfs-mss-top-sysreg", 36 + .of_match_table = mpfs_mss_top_sysreg_of_match, 37 + }, 38 + .probe = mpfs_mss_top_sysreg_probe, 39 + }; 40 + module_platform_driver(mpfs_mss_top_sysreg_driver); 41 + 42 + MODULE_LICENSE("GPL"); 43 + MODULE_AUTHOR("Conor Dooley <conor.dooley@microchip.com>"); 44 + MODULE_DESCRIPTION("PolarFire SoC mss top sysreg driver");
+5 -4
drivers/soc/samsung/exynos-pmu.c
··· 213 213 if (!dev) 214 214 return ERR_PTR(-EPROBE_DEFER); 215 215 216 + put_device(dev); 217 + 216 218 return syscon_node_to_regmap(pmu_np); 217 219 } 218 220 EXPORT_SYMBOL_GPL(exynos_get_pmu_regmap_by_phandle); ··· 456 454 if (!pmu_context->in_cpuhp) 457 455 return -ENOMEM; 458 456 459 - raw_spin_lock_init(&pmu_context->cpupm_lock); 460 - pmu_context->sys_inreboot = false; 461 - pmu_context->sys_insuspend = false; 462 - 463 457 /* set PMU to power on */ 464 458 for_each_online_cpu(cpu) 465 459 gs101_cpuhp_pmu_online(cpu); ··· 527 529 528 530 pmu_context->pmureg = regmap; 529 531 pmu_context->dev = dev; 532 + raw_spin_lock_init(&pmu_context->cpupm_lock); 533 + pmu_context->sys_inreboot = false; 534 + pmu_context->sys_insuspend = false; 530 535 531 536 if (pmu_context->pmu_data && pmu_context->pmu_data->pmu_cpuhp) { 532 537 ret = setup_cpuhp_and_cpuidle(dev);
+61
include/linux/cache_coherency.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0 */ 2 + /* 3 + * Cache coherency maintenance operation device drivers 4 + * 5 + * Copyright Huawei 2025 6 + */ 7 + #ifndef _LINUX_CACHE_COHERENCY_H_ 8 + #define _LINUX_CACHE_COHERENCY_H_ 9 + 10 + #include <linux/list.h> 11 + #include <linux/kref.h> 12 + #include <linux/types.h> 13 + 14 + struct cc_inval_params { 15 + phys_addr_t addr; 16 + size_t size; 17 + }; 18 + 19 + struct cache_coherency_ops_inst; 20 + 21 + struct cache_coherency_ops { 22 + int (*wbinv)(struct cache_coherency_ops_inst *cci, 23 + struct cc_inval_params *invp); 24 + int (*done)(struct cache_coherency_ops_inst *cci); 25 + }; 26 + 27 + struct cache_coherency_ops_inst { 28 + struct kref kref; 29 + struct list_head node; 30 + const struct cache_coherency_ops *ops; 31 + }; 32 + 33 + int cache_coherency_ops_instance_register(struct cache_coherency_ops_inst *cci); 34 + void cache_coherency_ops_instance_unregister(struct cache_coherency_ops_inst *cci); 35 + 36 + struct cache_coherency_ops_inst * 37 + _cache_coherency_ops_instance_alloc(const struct cache_coherency_ops *ops, 38 + size_t size); 39 + /** 40 + * cache_coherency_ops_instance_alloc - Allocate cache coherency ops instance 41 + * @ops: Cache maintenance operations 42 + * @drv_struct: structure that contains the struct cache_coherency_ops_inst 43 + * @member: Name of the struct cache_coherency_ops_inst member in @drv_struct. 44 + * 45 + * This allocates a driver specific structure and initializes the 46 + * cache_coherency_ops_inst embedded in the drv_struct. Upon success the 47 + * pointer must be freed via cache_coherency_ops_instance_put(). 48 + * 49 + * Returns a &drv_struct * on success, %NULL on error. 50 + */ 51 + #define cache_coherency_ops_instance_alloc(ops, drv_struct, member) \ 52 + ({ \ 53 + static_assert(__same_type(struct cache_coherency_ops_inst, \ 54 + ((drv_struct *)NULL)->member)); \ 55 + static_assert(offsetof(drv_struct, member) == 0); \ 56 + (drv_struct *)_cache_coherency_ops_instance_alloc(ops, \ 57 + sizeof(drv_struct)); \ 58 + }) 59 + void cache_coherency_ops_instance_put(struct cache_coherency_ops_inst *cci); 60 + 61 + #endif
+12 -4
include/linux/memregion.h
··· 26 26 27 27 /** 28 28 * cpu_cache_invalidate_memregion - drop any CPU cached data for 29 - * memregions described by @res_desc 30 - * @res_desc: one of the IORES_DESC_* types 29 + * memregion 30 + * @start: start physical address of the target memory region. 31 + * @len: length of the target memory region. -1 for all the regions of 32 + * the target type. 31 33 * 32 34 * Perform cache maintenance after a memory event / operation that 33 35 * changes the contents of physical memory in a cache-incoherent manner. ··· 48 46 * the cache maintenance. 49 47 */ 50 48 #ifdef CONFIG_ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION 51 - int cpu_cache_invalidate_memregion(int res_desc); 49 + int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len); 52 50 bool cpu_cache_has_invalidate_memregion(void); 53 51 #else 54 52 static inline bool cpu_cache_has_invalidate_memregion(void) ··· 56 54 return false; 57 55 } 58 56 59 - static inline int cpu_cache_invalidate_memregion(int res_desc) 57 + static inline int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len) 60 58 { 61 59 WARN_ON_ONCE("CPU cache invalidation required"); 62 60 return -ENXIO; 63 61 } 64 62 #endif 63 + 64 + static inline int cpu_cache_invalidate_all(void) 65 + { 66 + return cpu_cache_invalidate_memregion(0, -1); 67 + } 68 + 65 69 #endif /* _MEMREGION_H_ */
+3
lib/Kconfig
··· 542 542 config ARCH_HAS_CPU_CACHE_INVALIDATE_MEMREGION 543 543 bool 544 544 545 + config GENERIC_CPU_CACHE_MAINTENANCE 546 + bool 547 + 545 548 config ARCH_HAS_MEMREMAP_COMPAT_ALIGN 546 549 bool 547 550
+2
lib/Makefile
··· 127 127 obj-$(CONFIG_CHECK_SIGNATURE) += check_signature.o 128 128 obj-$(CONFIG_DEBUG_LOCKING_API_SELFTESTS) += locking-selftest.o 129 129 130 + obj-$(CONFIG_GENERIC_CPU_CACHE_MAINTENANCE) += cache_maint.o 131 + 130 132 lib-y += logic_pio.o 131 133 132 134 lib-$(CONFIG_INDIRECT_IOMEM) += logic_iomem.o
+138
lib/cache_maint.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Generic support for Memory System Cache Maintenance operations. 4 + * 5 + * Coherency maintenance drivers register with this simple framework that will 6 + * iterate over each registered instance to first kick off invalidation and 7 + * then to wait until it is complete. 8 + * 9 + * If no implementations are registered yet cpu_cache_has_invalidate_memregion() 10 + * will return false. If this runs concurrently with unregistration then a 11 + * race exists but this is no worse than the case where the operations instance 12 + * responsible for a given memory region has not yet registered. 13 + */ 14 + #include <linux/cache_coherency.h> 15 + #include <linux/cleanup.h> 16 + #include <linux/container_of.h> 17 + #include <linux/export.h> 18 + #include <linux/kref.h> 19 + #include <linux/list.h> 20 + #include <linux/memregion.h> 21 + #include <linux/module.h> 22 + #include <linux/rwsem.h> 23 + #include <linux/slab.h> 24 + 25 + static LIST_HEAD(cache_ops_instance_list); 26 + static DECLARE_RWSEM(cache_ops_instance_list_lock); 27 + 28 + static void __cache_coherency_ops_instance_free(struct kref *kref) 29 + { 30 + struct cache_coherency_ops_inst *cci = 31 + container_of(kref, struct cache_coherency_ops_inst, kref); 32 + kfree(cci); 33 + } 34 + 35 + void cache_coherency_ops_instance_put(struct cache_coherency_ops_inst *cci) 36 + { 37 + kref_put(&cci->kref, __cache_coherency_ops_instance_free); 38 + } 39 + EXPORT_SYMBOL_GPL(cache_coherency_ops_instance_put); 40 + 41 + static int cache_inval_one(struct cache_coherency_ops_inst *cci, void *data) 42 + { 43 + if (!cci->ops) 44 + return -EINVAL; 45 + 46 + return cci->ops->wbinv(cci, data); 47 + } 48 + 49 + static int cache_inval_done_one(struct cache_coherency_ops_inst *cci) 50 + { 51 + if (!cci->ops) 52 + return -EINVAL; 53 + 54 + if (!cci->ops->done) 55 + return 0; 56 + 57 + return cci->ops->done(cci); 58 + } 59 + 60 + static int cache_invalidate_memregion(phys_addr_t addr, size_t size) 61 + { 62 + int ret; 63 + struct cache_coherency_ops_inst *cci; 64 + struct cc_inval_params params = { 65 + .addr = addr, 66 + .size = size, 67 + }; 68 + 69 + guard(rwsem_read)(&cache_ops_instance_list_lock); 70 + list_for_each_entry(cci, &cache_ops_instance_list, node) { 71 + ret = cache_inval_one(cci, &params); 72 + if (ret) 73 + return ret; 74 + } 75 + list_for_each_entry(cci, &cache_ops_instance_list, node) { 76 + ret = cache_inval_done_one(cci); 77 + if (ret) 78 + return ret; 79 + } 80 + 81 + return 0; 82 + } 83 + 84 + struct cache_coherency_ops_inst * 85 + _cache_coherency_ops_instance_alloc(const struct cache_coherency_ops *ops, 86 + size_t size) 87 + { 88 + struct cache_coherency_ops_inst *cci; 89 + 90 + if (!ops || !ops->wbinv) 91 + return NULL; 92 + 93 + cci = kzalloc(size, GFP_KERNEL); 94 + if (!cci) 95 + return NULL; 96 + 97 + cci->ops = ops; 98 + INIT_LIST_HEAD(&cci->node); 99 + kref_init(&cci->kref); 100 + 101 + return cci; 102 + } 103 + EXPORT_SYMBOL_NS_GPL(_cache_coherency_ops_instance_alloc, "CACHE_COHERENCY"); 104 + 105 + int cache_coherency_ops_instance_register(struct cache_coherency_ops_inst *cci) 106 + { 107 + guard(rwsem_write)(&cache_ops_instance_list_lock); 108 + list_add(&cci->node, &cache_ops_instance_list); 109 + 110 + return 0; 111 + } 112 + EXPORT_SYMBOL_NS_GPL(cache_coherency_ops_instance_register, "CACHE_COHERENCY"); 113 + 114 + void cache_coherency_ops_instance_unregister(struct cache_coherency_ops_inst *cci) 115 + { 116 + guard(rwsem_write)(&cache_ops_instance_list_lock); 117 + list_del(&cci->node); 118 + } 119 + EXPORT_SYMBOL_NS_GPL(cache_coherency_ops_instance_unregister, "CACHE_COHERENCY"); 120 + 121 + int cpu_cache_invalidate_memregion(phys_addr_t start, size_t len) 122 + { 123 + return cache_invalidate_memregion(start, len); 124 + } 125 + EXPORT_SYMBOL_NS_GPL(cpu_cache_invalidate_memregion, "DEVMEM"); 126 + 127 + /* 128 + * Used for optimization / debug purposes only as removal can race 129 + * 130 + * Machines that do not support invalidation, e.g. VMs, will not have any 131 + * operations instance to register and so this will always return false. 132 + */ 133 + bool cpu_cache_has_invalidate_memregion(void) 134 + { 135 + guard(rwsem_read)(&cache_ops_instance_list_lock); 136 + return !list_empty(&cache_ops_instance_list); 137 + } 138 + EXPORT_SYMBOL_NS_GPL(cpu_cache_has_invalidate_memregion, "DEVMEM");