Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
"The irq department delivers:

- Rework the irqdomain core infrastructure to accommodate ACPI based
systems. This is required to support ARM64 without creating
artificial device tree nodes.

- Sanitize the ACPI based ARM GIC initialization by making use of the
new firmware independent irqdomain core

- Further improvements to the generic MSI management

- Generalize the irq migration on CPU hotplug

- Improvements to the threaded interrupt infrastructure

- Allow the migration of "chained" low level interrupt handlers

- Allow optional force masking of interrupts in disable_irq[_nosync]

- Support for two new interrupt chips - Sigh!

- A larger set of errata fixes for ARM gicv3

- The usual pile of fixes, updates, improvements and cleanups all
over the place"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (71 commits)
Document that IRQ_NONE should be returned when IRQ not actually handled
PCI/MSI: Allow the MSI domain to be device-specific
PCI: Add per-device MSI domain hook
of/irq: Use the msi-map property to provide device-specific MSI domain
of/irq: Split of_msi_map_rid to reuse msi-map lookup
irqchip/gic-v3-its: Parse new version of msi-parent property
PCI/MSI: Use of_msi_get_domain instead of open-coded "msi-parent" parsing
of/irq: Use of_msi_get_domain instead of open-coded "msi-parent" parsing
of/irq: Add support code for multi-parent version of "msi-parent"
irqchip/gic-v3-its: Add handling of PCI requester id.
PCI/MSI: Add helper function pci_msi_domain_get_msi_rid().
of/irq: Add new function of_msi_map_rid()
Docs: dt: Add PCI MSI map bindings
irqchip/gic-v2m: Add support for multiple MSI frames
irqchip/gic-v3: Fix translation of LPIs after conversion to irq_fwspec
irqchip/mxs: Add Alphascale ASM9260 support
irqchip/mxs: Prepare driver for hardware with different offsets
irqchip/mxs: Panic if ioremap or domain creation fails
irqdomain: Documentation updates
irqdomain/msi: Use fwnode instead of of_node
...

+2577 -890
+4 -4
Documentation/IRQ-domain.txt
··· 32 32 preferred over interrupt controller drivers open coding their own 33 33 reverse mapping scheme. 34 34 35 - irq_domain also implements translation from Device Tree interrupt 36 - specifiers to hwirq numbers, and can be easily extended to support 37 - other IRQ topology data sources. 35 + irq_domain also implements translation from an abstract irq_fwspec 36 + structure to hwirq numbers (Device Tree and ACPI GSI so far), and can 37 + be easily extended to support other IRQ topology data sources. 38 38 39 39 === irq_domain usage === 40 40 An interrupt controller driver creates and registers an irq_domain by ··· 184 184 related resources associated with these interrupts. 185 185 3) irq_domain_activate_irq(): activate interrupt controller hardware to 186 186 deliver the interrupt. 187 - 3) irq_domain_deactivate_irq(): deactivate interrupt controller hardware 187 + 4) irq_domain_deactivate_irq(): deactivate interrupt controller hardware 188 188 to stop delivering the interrupt. 189 189 190 190 Following changes are needed to support hierarchy irq_domain.
+10 -1
Documentation/arm64/booting.txt
··· 173 173 the kernel image will be entered must be initialised by software at a 174 174 higher exception level to prevent execution in an UNKNOWN state. 175 175 176 - For systems with a GICv3 interrupt controller: 176 + For systems with a GICv3 interrupt controller to be used in v3 mode: 177 177 - If EL3 is present: 178 178 ICC_SRE_EL3.Enable (bit 3) must be initialiased to 0b1. 179 179 ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b1. 180 180 - If the kernel is entered at EL1: 181 181 ICC.SRE_EL2.Enable (bit 3) must be initialised to 0b1 182 182 ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b1. 183 + - The DT or ACPI tables must describe a GICv3 interrupt controller. 184 + 185 + For systems with a GICv3 interrupt controller to be used in 186 + compatibility (v2) mode: 187 + - If EL3 is present: 188 + ICC_SRE_EL3.SRE (bit 0) must be initialised to 0b0. 189 + - If the kernel is entered at EL1: 190 + ICC_SRE_EL2.SRE (bit 0) must be initialised to 0b0. 191 + - The DT or ACPI tables must describe a GICv2 interrupt controller. 183 192 184 193 The requirements described above for CPU mode, caches, MMUs, architected 185 194 timers, coherency and system registers apply to all CPUs. All CPUs must
+22 -6
Documentation/devicetree/bindings/arm/gic.txt
··· 11 11 Main node required properties: 12 12 13 13 - compatible : should be one of: 14 - "arm,gic-400" 15 - "arm,cortex-a15-gic" 16 - "arm,cortex-a9-gic" 17 - "arm,cortex-a7-gic" 18 - "arm,arm11mp-gic" 19 - "brcm,brahma-b15-gic" 20 14 "arm,arm1176jzf-devchip-gic" 15 + "arm,arm11mp-gic" 16 + "arm,cortex-a15-gic" 17 + "arm,cortex-a7-gic" 18 + "arm,cortex-a9-gic" 19 + "arm,gic-400" 20 + "arm,pl390" 21 + "brcm,brahma-b15-gic" 21 22 "qcom,msm-8660-qgic" 22 23 "qcom,msm-qgic2" 23 24 - interrupt-controller : Identifies the node as an interrupt controller ··· 58 57 - cpu-offset : per-cpu offset within the distributor and cpu interface 59 58 regions, used when the GIC doesn't have banked registers. The offset is 60 59 cpu-offset * cpu-nr. 60 + 61 + - clocks : List of phandle and clock-specific pairs, one for each entry 62 + in clock-names. 63 + - clock-names : List of names for the GIC clock input(s). Valid clock names 64 + depend on the GIC variant: 65 + "ic_clk" (for "arm,arm11mp-gic") 66 + "PERIPHCLKEN" (for "arm,cortex-a15-gic") 67 + "PERIPHCLK", "PERIPHCLKEN" (for "arm,cortex-a9-gic") 68 + "clk" (for "arm,gic-400") 69 + "gclk" (for "arm,pl390") 70 + 71 + - power-domains : A phandle and PM domain specifier as defined by bindings of 72 + the power controller specified by phandle, used when the GIC 73 + is part of a Power or Clock Domain. 74 + 61 75 62 76 Example: 63 77
+1
Documentation/devicetree/bindings/interrupt-controller/renesas,irqc.txt
··· 10 10 - "renesas,irqc-r8a7792" (R-Car V2H) 11 11 - "renesas,irqc-r8a7793" (R-Car M2-N) 12 12 - "renesas,irqc-r8a7794" (R-Car E2) 13 + - "renesas,intc-ex-r8a7795" (R-Car H3) 13 14 - #interrupt-cells: has to be <2>: an interrupt index and flags, as defined in 14 15 interrupts.txt in this directory 15 16 - clocks: Must contain a reference to the functional clock.
+220
Documentation/devicetree/bindings/pci/pci-msi.txt
··· 1 + This document describes the generic device tree binding for describing the 2 + relationship between PCI devices and MSI controllers. 3 + 4 + Each PCI device under a root complex is uniquely identified by its Requester ID 5 + (AKA RID). A Requester ID is a triplet of a Bus number, Device number, and 6 + Function number. 7 + 8 + For the purpose of this document, when treated as a numeric value, a RID is 9 + formatted such that: 10 + 11 + * Bits [15:8] are the Bus number. 12 + * Bits [7:3] are the Device number. 13 + * Bits [2:0] are the Function number. 14 + * Any other bits required for padding must be zero. 15 + 16 + MSIs may be distinguished in part through the use of sideband data accompanying 17 + writes. In the case of PCI devices, this sideband data may be derived from the 18 + Requester ID. A mechanism is required to associate a device with both the MSI 19 + controllers it can address, and the sideband data that will be associated with 20 + its writes to those controllers. 21 + 22 + For generic MSI bindings, see 23 + Documentation/devicetree/bindings/interrupt-controller/msi.txt. 24 + 25 + 26 + PCI root complex 27 + ================ 28 + 29 + Optional properties 30 + ------------------- 31 + 32 + - msi-map: Maps a Requester ID to an MSI controller and associated 33 + msi-specifier data. The property is an arbitrary number of tuples of 34 + (rid-base,msi-controller,msi-base,length), where: 35 + 36 + * rid-base is a single cell describing the first RID matched by the entry. 37 + 38 + * msi-controller is a single phandle to an MSI controller 39 + 40 + * msi-base is an msi-specifier describing the msi-specifier produced for the 41 + first RID matched by the entry. 42 + 43 + * length is a single cell describing how many consecutive RIDs are matched 44 + following the rid-base. 45 + 46 + Any RID r in the interval [rid-base, rid-base + length) is associated with 47 + the listed msi-controller, with the msi-specifier (r - rid-base + msi-base). 
48 + 49 + - msi-map-mask: A mask to be applied to each Requester ID prior to being mapped 50 + to an msi-specifier per the msi-map property. 51 + 52 + - msi-parent: Describes the MSI parent of the root complex itself. Where 53 + the root complex and MSI controller do not pass sideband data with MSI 54 + writes, this property may be used to describe the MSI controller(s) 55 + used by PCI devices under the root complex, if defined as such in the 56 + binding for the root complex. 57 + 58 + 59 + Example (1) 60 + =========== 61 + 62 + / { 63 + #address-cells = <1>; 64 + #size-cells = <1>; 65 + 66 + msi: msi-controller@a { 67 + reg = <0xa 0x1>; 68 + compatible = "vendor,some-controller"; 69 + msi-controller; 70 + #msi-cells = <1>; 71 + }; 72 + 73 + pci: pci@f { 74 + reg = <0xf 0x1>; 75 + compatible = "vendor,pcie-root-complex"; 76 + device_type = "pci"; 77 + 78 + /* 79 + * The sideband data provided to the MSI controller is 80 + * the RID, identity-mapped. 81 + */ 82 + msi-map = <0x0 &msi_a 0x0 0x10000>, 83 + }; 84 + }; 85 + 86 + 87 + Example (2) 88 + =========== 89 + 90 + / { 91 + #address-cells = <1>; 92 + #size-cells = <1>; 93 + 94 + msi: msi-controller@a { 95 + reg = <0xa 0x1>; 96 + compatible = "vendor,some-controller"; 97 + msi-controller; 98 + #msi-cells = <1>; 99 + }; 100 + 101 + pci: pci@f { 102 + reg = <0xf 0x1>; 103 + compatible = "vendor,pcie-root-complex"; 104 + device_type = "pci"; 105 + 106 + /* 107 + * The sideband data provided to the MSI controller is 108 + * the RID, masked to only the device and function bits. 
109 + */ 110 + msi-map = <0x0 &msi_a 0x0 0x100>, 111 + msi-map-mask = <0xff> 112 + }; 113 + }; 114 + 115 + 116 + Example (3) 117 + =========== 118 + 119 + / { 120 + #address-cells = <1>; 121 + #size-cells = <1>; 122 + 123 + msi: msi-controller@a { 124 + reg = <0xa 0x1>; 125 + compatible = "vendor,some-controller"; 126 + msi-controller; 127 + #msi-cells = <1>; 128 + }; 129 + 130 + pci: pci@f { 131 + reg = <0xf 0x1>; 132 + compatible = "vendor,pcie-root-complex"; 133 + device_type = "pci"; 134 + 135 + /* 136 + * The sideband data provided to the MSI controller is 137 + * the RID, but the high bit of the bus number is 138 + * ignored. 139 + */ 140 + msi-map = <0x0000 &msi 0x0000 0x8000>, 141 + <0x8000 &msi 0x0000 0x8000>; 142 + }; 143 + }; 144 + 145 + 146 + Example (4) 147 + =========== 148 + 149 + / { 150 + #address-cells = <1>; 151 + #size-cells = <1>; 152 + 153 + msi: msi-controller@a { 154 + reg = <0xa 0x1>; 155 + compatible = "vendor,some-controller"; 156 + msi-controller; 157 + #msi-cells = <1>; 158 + }; 159 + 160 + pci: pci@f { 161 + reg = <0xf 0x1>; 162 + compatible = "vendor,pcie-root-complex"; 163 + device_type = "pci"; 164 + 165 + /* 166 + * The sideband data provided to the MSI controller is 167 + * the RID, but the high bit of the bus number is 168 + * negated. 
169 + */ 170 + msi-map = <0x0000 &msi 0x8000 0x8000>, 171 + <0x8000 &msi 0x0000 0x8000>; 172 + }; 173 + }; 174 + 175 + 176 + Example (5) 177 + =========== 178 + 179 + / { 180 + #address-cells = <1>; 181 + #size-cells = <1>; 182 + 183 + msi_a: msi-controller@a { 184 + reg = <0xa 0x1>; 185 + compatible = "vendor,some-controller"; 186 + msi-controller; 187 + #msi-cells = <1>; 188 + }; 189 + 190 + msi_b: msi-controller@b { 191 + reg = <0xb 0x1>; 192 + compatible = "vendor,some-controller"; 193 + msi-controller; 194 + #msi-cells = <1>; 195 + }; 196 + 197 + msi_c: msi-controller@c { 198 + reg = <0xc 0x1>; 199 + compatible = "vendor,some-controller"; 200 + msi-controller; 201 + #msi-cells = <1>; 202 + }; 203 + 204 + pci: pci@c { 205 + reg = <0xf 0x1>; 206 + compatible = "vendor,pcie-root-complex"; 207 + device_type = "pci"; 208 + 209 + /* 210 + * The sideband data provided to MSI controller a is the 211 + * RID, but the high bit of the bus number is negated. 212 + * The sideband data provided to MSI controller b is the 213 + * RID, identity-mapped. 214 + * MSI controller c is not addressable. 215 + */ 216 + msi-map = <0x0000 &msi_a 0x8000 0x08000>, 217 + <0x8000 &msi_a 0x0000 0x08000>, 218 + <0x0000 &msi_b 0x0000 0x10000>; 219 + }; 220 + };
+1
arch/arm/Kconfig
··· 820 820 bool "Dummy Virtual Machine" if ARCH_MULTI_V7 821 821 select ARM_AMBA 822 822 select ARM_GIC 823 + select ARM_GIC_V3 823 824 select ARM_PSCI 824 825 select HAVE_ARM_ARCH_TIMER 825 826
+188
arch/arm/include/asm/arch_gicv3.h
··· 1 + /* 2 + * arch/arm/include/asm/arch_gicv3.h 3 + * 4 + * Copyright (C) 2015 ARM Ltd. 5 + * 6 + * This program is free software: you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + #ifndef __ASM_ARCH_GICV3_H 19 + #define __ASM_ARCH_GICV3_H 20 + 21 + #ifndef __ASSEMBLY__ 22 + 23 + #include <linux/io.h> 24 + 25 + #define __ACCESS_CP15(CRn, Op1, CRm, Op2) p15, Op1, %0, CRn, CRm, Op2 26 + #define __ACCESS_CP15_64(Op1, CRm) p15, Op1, %Q0, %R0, CRm 27 + 28 + #define ICC_EOIR1 __ACCESS_CP15(c12, 0, c12, 1) 29 + #define ICC_DIR __ACCESS_CP15(c12, 0, c11, 1) 30 + #define ICC_IAR1 __ACCESS_CP15(c12, 0, c12, 0) 31 + #define ICC_SGI1R __ACCESS_CP15_64(0, c12) 32 + #define ICC_PMR __ACCESS_CP15(c4, 0, c6, 0) 33 + #define ICC_CTLR __ACCESS_CP15(c12, 0, c12, 4) 34 + #define ICC_SRE __ACCESS_CP15(c12, 0, c12, 5) 35 + #define ICC_IGRPEN1 __ACCESS_CP15(c12, 0, c12, 7) 36 + 37 + #define ICC_HSRE __ACCESS_CP15(c12, 4, c9, 5) 38 + 39 + #define ICH_VSEIR __ACCESS_CP15(c12, 4, c9, 4) 40 + #define ICH_HCR __ACCESS_CP15(c12, 4, c11, 0) 41 + #define ICH_VTR __ACCESS_CP15(c12, 4, c11, 1) 42 + #define ICH_MISR __ACCESS_CP15(c12, 4, c11, 2) 43 + #define ICH_EISR __ACCESS_CP15(c12, 4, c11, 3) 44 + #define ICH_ELSR __ACCESS_CP15(c12, 4, c11, 5) 45 + #define ICH_VMCR __ACCESS_CP15(c12, 4, c11, 7) 46 + 47 + #define __LR0(x) __ACCESS_CP15(c12, 4, c12, x) 48 + #define __LR8(x) __ACCESS_CP15(c12, 4, c13, x) 49 + 50 + #define ICH_LR0 __LR0(0) 51 + #define ICH_LR1 
__LR0(1) 52 + #define ICH_LR2 __LR0(2) 53 + #define ICH_LR3 __LR0(3) 54 + #define ICH_LR4 __LR0(4) 55 + #define ICH_LR5 __LR0(5) 56 + #define ICH_LR6 __LR0(6) 57 + #define ICH_LR7 __LR0(7) 58 + #define ICH_LR8 __LR8(0) 59 + #define ICH_LR9 __LR8(1) 60 + #define ICH_LR10 __LR8(2) 61 + #define ICH_LR11 __LR8(3) 62 + #define ICH_LR12 __LR8(4) 63 + #define ICH_LR13 __LR8(5) 64 + #define ICH_LR14 __LR8(6) 65 + #define ICH_LR15 __LR8(7) 66 + 67 + /* LR top half */ 68 + #define __LRC0(x) __ACCESS_CP15(c12, 4, c14, x) 69 + #define __LRC8(x) __ACCESS_CP15(c12, 4, c15, x) 70 + 71 + #define ICH_LRC0 __LRC0(0) 72 + #define ICH_LRC1 __LRC0(1) 73 + #define ICH_LRC2 __LRC0(2) 74 + #define ICH_LRC3 __LRC0(3) 75 + #define ICH_LRC4 __LRC0(4) 76 + #define ICH_LRC5 __LRC0(5) 77 + #define ICH_LRC6 __LRC0(6) 78 + #define ICH_LRC7 __LRC0(7) 79 + #define ICH_LRC8 __LRC8(0) 80 + #define ICH_LRC9 __LRC8(1) 81 + #define ICH_LRC10 __LRC8(2) 82 + #define ICH_LRC11 __LRC8(3) 83 + #define ICH_LRC12 __LRC8(4) 84 + #define ICH_LRC13 __LRC8(5) 85 + #define ICH_LRC14 __LRC8(6) 86 + #define ICH_LRC15 __LRC8(7) 87 + 88 + #define __AP0Rx(x) __ACCESS_CP15(c12, 4, c8, x) 89 + #define ICH_AP0R0 __AP0Rx(0) 90 + #define ICH_AP0R1 __AP0Rx(1) 91 + #define ICH_AP0R2 __AP0Rx(2) 92 + #define ICH_AP0R3 __AP0Rx(3) 93 + 94 + #define __AP1Rx(x) __ACCESS_CP15(c12, 4, c9, x) 95 + #define ICH_AP1R0 __AP1Rx(0) 96 + #define ICH_AP1R1 __AP1Rx(1) 97 + #define ICH_AP1R2 __AP1Rx(2) 98 + #define ICH_AP1R3 __AP1Rx(3) 99 + 100 + /* Low-level accessors */ 101 + 102 + static inline void gic_write_eoir(u32 irq) 103 + { 104 + asm volatile("mcr " __stringify(ICC_EOIR1) : : "r" (irq)); 105 + isb(); 106 + } 107 + 108 + static inline void gic_write_dir(u32 val) 109 + { 110 + asm volatile("mcr " __stringify(ICC_DIR) : : "r" (val)); 111 + isb(); 112 + } 113 + 114 + static inline u32 gic_read_iar(void) 115 + { 116 + u32 irqstat; 117 + 118 + asm volatile("mrc " __stringify(ICC_IAR1) : "=r" (irqstat)); 119 + return irqstat; 120 + } 121 + 
122 + static inline void gic_write_pmr(u32 val) 123 + { 124 + asm volatile("mcr " __stringify(ICC_PMR) : : "r" (val)); 125 + } 126 + 127 + static inline void gic_write_ctlr(u32 val) 128 + { 129 + asm volatile("mcr " __stringify(ICC_CTLR) : : "r" (val)); 130 + isb(); 131 + } 132 + 133 + static inline void gic_write_grpen1(u32 val) 134 + { 135 + asm volatile("mcr " __stringify(ICC_IGRPEN1) : : "r" (val)); 136 + isb(); 137 + } 138 + 139 + static inline void gic_write_sgi1r(u64 val) 140 + { 141 + asm volatile("mcrr " __stringify(ICC_SGI1R) : : "r" (val)); 142 + } 143 + 144 + static inline u32 gic_read_sre(void) 145 + { 146 + u32 val; 147 + 148 + asm volatile("mrc " __stringify(ICC_SRE) : "=r" (val)); 149 + return val; 150 + } 151 + 152 + static inline void gic_write_sre(u32 val) 153 + { 154 + asm volatile("mcr " __stringify(ICC_SRE) : : "r" (val)); 155 + isb(); 156 + } 157 + 158 + /* 159 + * Even in 32bit systems that use LPAE, there is no guarantee that the I/O 160 + * interface provides true 64bit atomic accesses, so using strd/ldrd doesn't 161 + * make much sense. 162 + * Moreover, 64bit I/O emulation is extremely difficult to implement on 163 + * AArch32, since the syndrome register doesn't provide any information for 164 + * them. 165 + * Consequently, the following IO helpers use 32bit accesses. 166 + * 167 + * There are only two registers that need 64bit accesses in this driver: 168 + * - GICD_IROUTERn, contain the affinity values associated to each interrupt. 169 + * The upper-word (aff3) will always be 0, so there is no need for a lock. 170 + * - GICR_TYPER is an ID register and doesn't need atomicity. 
171 + */ 172 + static inline void gic_write_irouter(u64 val, volatile void __iomem *addr) 173 + { 174 + writel_relaxed((u32)val, addr); 175 + writel_relaxed((u32)(val >> 32), addr + 4); 176 + } 177 + 178 + static inline u64 gic_read_typer(const volatile void __iomem *addr) 179 + { 180 + u64 val; 181 + 182 + val = readl_relaxed(addr); 183 + val |= (u64)readl_relaxed(addr + 4) << 32; 184 + return val; 185 + } 186 + 187 + #endif /* !__ASSEMBLY__ */ 188 + #endif /* !__ASM_ARCH_GICV3_H */
+29 -26
arch/arm/mach-exynos/suspend.c
··· 177 177 #endif 178 178 }; 179 179 180 - static int exynos_pmu_domain_xlate(struct irq_domain *domain, 181 - struct device_node *controller, 182 - const u32 *intspec, 183 - unsigned int intsize, 184 - unsigned long *out_hwirq, 185 - unsigned int *out_type) 180 + static int exynos_pmu_domain_translate(struct irq_domain *d, 181 + struct irq_fwspec *fwspec, 182 + unsigned long *hwirq, 183 + unsigned int *type) 186 184 { 187 - if (domain->of_node != controller) 188 - return -EINVAL; /* Shouldn't happen, really... */ 189 - if (intsize != 3) 190 - return -EINVAL; /* Not GIC compliant */ 191 - if (intspec[0] != 0) 192 - return -EINVAL; /* No PPI should point to this domain */ 185 + if (is_of_node(fwspec->fwnode)) { 186 + if (fwspec->param_count != 3) 187 + return -EINVAL; 193 188 194 - *out_hwirq = intspec[1]; 195 - *out_type = intspec[2]; 196 - return 0; 189 + /* No PPI should point to this domain */ 190 + if (fwspec->param[0] != 0) 191 + return -EINVAL; 192 + 193 + *hwirq = fwspec->param[1]; 194 + *type = fwspec->param[2]; 195 + return 0; 196 + } 197 + 198 + return -EINVAL; 197 199 } 198 200 199 201 static int exynos_pmu_domain_alloc(struct irq_domain *domain, 200 202 unsigned int virq, 201 203 unsigned int nr_irqs, void *data) 202 204 { 203 - struct of_phandle_args *args = data; 204 - struct of_phandle_args parent_args; 205 + struct irq_fwspec *fwspec = data; 206 + struct irq_fwspec parent_fwspec; 205 207 irq_hw_number_t hwirq; 206 208 int i; 207 209 208 - if (args->args_count != 3) 210 + if (fwspec->param_count != 3) 209 211 return -EINVAL; /* Not GIC compliant */ 210 - if (args->args[0] != 0) 212 + if (fwspec->param[0] != 0) 211 213 return -EINVAL; /* No PPI should point to this domain */ 212 214 213 - hwirq = args->args[1]; 215 + hwirq = fwspec->param[1]; 214 216 215 217 for (i = 0; i < nr_irqs; i++) 216 218 irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i, 217 219 &exynos_pmu_chip, NULL); 218 220 219 - parent_args = *args; 220 - parent_args.np = 
domain->parent->of_node; 221 - return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &parent_args); 221 + parent_fwspec = *fwspec; 222 + parent_fwspec.fwnode = domain->parent->fwnode; 223 + return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, 224 + &parent_fwspec); 222 225 } 223 226 224 227 static const struct irq_domain_ops exynos_pmu_domain_ops = { 225 - .xlate = exynos_pmu_domain_xlate, 226 - .alloc = exynos_pmu_domain_alloc, 227 - .free = irq_domain_free_irqs_common, 228 + .translate = exynos_pmu_domain_translate, 229 + .alloc = exynos_pmu_domain_alloc, 230 + .free = irq_domain_free_irqs_common, 228 231 }; 229 232 230 233 static int __init exynos_pmu_irq_init(struct device_node *node,
+29 -26
arch/arm/mach-imx/gpc.c
··· 181 181 #endif 182 182 }; 183 183 184 - static int imx_gpc_domain_xlate(struct irq_domain *domain, 185 - struct device_node *controller, 186 - const u32 *intspec, 187 - unsigned int intsize, 188 - unsigned long *out_hwirq, 189 - unsigned int *out_type) 184 + static int imx_gpc_domain_translate(struct irq_domain *d, 185 + struct irq_fwspec *fwspec, 186 + unsigned long *hwirq, 187 + unsigned int *type) 190 188 { 191 - if (domain->of_node != controller) 192 - return -EINVAL; /* Shouldn't happen, really... */ 193 - if (intsize != 3) 194 - return -EINVAL; /* Not GIC compliant */ 195 - if (intspec[0] != 0) 196 - return -EINVAL; /* No PPI should point to this domain */ 189 + if (is_of_node(fwspec->fwnode)) { 190 + if (fwspec->param_count != 3) 191 + return -EINVAL; 197 192 198 - *out_hwirq = intspec[1]; 199 - *out_type = intspec[2]; 200 - return 0; 193 + /* No PPI should point to this domain */ 194 + if (fwspec->param[0] != 0) 195 + return -EINVAL; 196 + 197 + *hwirq = fwspec->param[1]; 198 + *type = fwspec->param[2]; 199 + return 0; 200 + } 201 + 202 + return -EINVAL; 201 203 } 202 204 203 205 static int imx_gpc_domain_alloc(struct irq_domain *domain, 204 206 unsigned int irq, 205 207 unsigned int nr_irqs, void *data) 206 208 { 207 - struct of_phandle_args *args = data; 208 - struct of_phandle_args parent_args; 209 + struct irq_fwspec *fwspec = data; 210 + struct irq_fwspec parent_fwspec; 209 211 irq_hw_number_t hwirq; 210 212 int i; 211 213 212 - if (args->args_count != 3) 214 + if (fwspec->param_count != 3) 213 215 return -EINVAL; /* Not GIC compliant */ 214 - if (args->args[0] != 0) 216 + if (fwspec->param[0] != 0) 215 217 return -EINVAL; /* No PPI should point to this domain */ 216 218 217 - hwirq = args->args[1]; 219 + hwirq = fwspec->param[1]; 218 220 if (hwirq >= GPC_MAX_IRQS) 219 221 return -EINVAL; /* Can't deal with this */ 220 222 ··· 224 222 irq_domain_set_hwirq_and_chip(domain, irq + i, hwirq + i, 225 223 &imx_gpc_chip, NULL); 226 224 227 - parent_args = 
*args; 228 - parent_args.np = domain->parent->of_node; 229 - return irq_domain_alloc_irqs_parent(domain, irq, nr_irqs, &parent_args); 225 + parent_fwspec = *fwspec; 226 + parent_fwspec.fwnode = domain->parent->fwnode; 227 + return irq_domain_alloc_irqs_parent(domain, irq, nr_irqs, 228 + &parent_fwspec); 230 229 } 231 230 232 231 static const struct irq_domain_ops imx_gpc_domain_ops = { 233 - .xlate = imx_gpc_domain_xlate, 234 - .alloc = imx_gpc_domain_alloc, 235 - .free = irq_domain_free_irqs_common, 232 + .translate = imx_gpc_domain_translate, 233 + .alloc = imx_gpc_domain_alloc, 234 + .free = irq_domain_free_irqs_common, 236 235 }; 237 236 238 237 static int __init imx_gpc_init(struct device_node *node,
+29 -26
arch/arm/mach-omap2/omap-wakeupgen.c
··· 399 399 #endif 400 400 }; 401 401 402 - static int wakeupgen_domain_xlate(struct irq_domain *domain, 403 - struct device_node *controller, 404 - const u32 *intspec, 405 - unsigned int intsize, 406 - unsigned long *out_hwirq, 407 - unsigned int *out_type) 402 + static int wakeupgen_domain_translate(struct irq_domain *d, 403 + struct irq_fwspec *fwspec, 404 + unsigned long *hwirq, 405 + unsigned int *type) 408 406 { 409 - if (domain->of_node != controller) 410 - return -EINVAL; /* Shouldn't happen, really... */ 411 - if (intsize != 3) 412 - return -EINVAL; /* Not GIC compliant */ 413 - if (intspec[0] != 0) 414 - return -EINVAL; /* No PPI should point to this domain */ 407 + if (is_of_node(fwspec->fwnode)) { 408 + if (fwspec->param_count != 3) 409 + return -EINVAL; 415 410 416 - *out_hwirq = intspec[1]; 417 - *out_type = intspec[2]; 418 - return 0; 411 + /* No PPI should point to this domain */ 412 + if (fwspec->param[0] != 0) 413 + return -EINVAL; 414 + 415 + *hwirq = fwspec->param[1]; 416 + *type = fwspec->param[2]; 417 + return 0; 418 + } 419 + 420 + return -EINVAL; 419 421 } 420 422 421 423 static int wakeupgen_domain_alloc(struct irq_domain *domain, 422 424 unsigned int virq, 423 425 unsigned int nr_irqs, void *data) 424 426 { 425 - struct of_phandle_args *args = data; 426 - struct of_phandle_args parent_args; 427 + struct irq_fwspec *fwspec = data; 428 + struct irq_fwspec parent_fwspec; 427 429 irq_hw_number_t hwirq; 428 430 int i; 429 431 430 - if (args->args_count != 3) 432 + if (fwspec->param_count != 3) 431 433 return -EINVAL; /* Not GIC compliant */ 432 - if (args->args[0] != 0) 434 + if (fwspec->param[0] != 0) 433 435 return -EINVAL; /* No PPI should point to this domain */ 434 436 435 - hwirq = args->args[1]; 437 + hwirq = fwspec->param[1]; 436 438 if (hwirq >= MAX_IRQS) 437 439 return -EINVAL; /* Can't deal with this */ 438 440 ··· 442 440 irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i, 443 441 &wakeupgen_chip, NULL); 444 442 445 - 
parent_args = *args; 446 - parent_args.np = domain->parent->of_node; 447 - return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &parent_args); 443 + parent_fwspec = *fwspec; 444 + parent_fwspec.fwnode = domain->parent->fwnode; 445 + return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, 446 + &parent_fwspec); 448 447 } 449 448 450 449 static const struct irq_domain_ops wakeupgen_domain_ops = { 451 - .xlate = wakeupgen_domain_xlate, 452 - .alloc = wakeupgen_domain_alloc, 453 - .free = irq_domain_free_irqs_common, 450 + .translate = wakeupgen_domain_translate, 451 + .alloc = wakeupgen_domain_alloc, 452 + .free = irq_domain_free_irqs_common, 454 453 }; 455 454 456 455 /*
+27
arch/arm64/Kconfig
··· 348 348 349 349 If unsure, say Y. 350 350 351 + config CAVIUM_ERRATUM_22375 352 + bool "Cavium erratum 22375, 24313" 353 + default y 354 + help 355 + Enable workaround for erratum 22375, 24313. 356 + 357 + This implements two gicv3-its errata workarounds for ThunderX. Both 358 + with small impact affecting only ITS table allocation. 359 + 360 + erratum 22375: only alloc 8MB table size 361 + erratum 24313: ignore memory access type 362 + 363 + The fixes are in ITS initialization and basically ignore memory access 364 + type and table size provided by the TYPER and BASER registers. 365 + 366 + If unsure, say Y. 367 + 368 + config CAVIUM_ERRATUM_23154 369 + bool "Cavium erratum 23154: Access to ICC_IAR1_EL1 is not sync'ed" 370 + default y 371 + help 372 + The gicv3 of ThunderX requires a modified version for 373 + reading the IAR status to ensure data synchronization 374 + (access to icc_iar1_el1 is not sync'ed before and after). 375 + 376 + If unsure, say Y. 377 + 351 378 endmenu 352 379 353 380
+170
arch/arm64/include/asm/arch_gicv3.h
··· 1 + /* 2 + * arch/arm64/include/asm/arch_gicv3.h 3 + * 4 + * Copyright (C) 2015 ARM Ltd. 5 + * 6 + * This program is free software: you can redistribute it and/or modify 7 + * it under the terms of the GNU General Public License version 2 as 8 + * published by the Free Software Foundation. 9 + * 10 + * This program is distributed in the hope that it will be useful, 11 + * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 + * GNU General Public License for more details. 14 + * 15 + * You should have received a copy of the GNU General Public License 16 + * along with this program. If not, see <http://www.gnu.org/licenses/>. 17 + */ 18 + #ifndef __ASM_ARCH_GICV3_H 19 + #define __ASM_ARCH_GICV3_H 20 + 21 + #include <asm/sysreg.h> 22 + 23 + #define ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1) 24 + #define ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1) 25 + #define ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0) 26 + #define ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5) 27 + #define ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0) 28 + #define ICC_CTLR_EL1 sys_reg(3, 0, 12, 12, 4) 29 + #define ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5) 30 + #define ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) 31 + 32 + #define ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5) 33 + 34 + /* 35 + * System register definitions 36 + */ 37 + #define ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4) 38 + #define ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0) 39 + #define ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1) 40 + #define ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2) 41 + #define ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3) 42 + #define ICH_ELSR_EL2 sys_reg(3, 4, 12, 11, 5) 43 + #define ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7) 44 + 45 + #define __LR0_EL2(x) sys_reg(3, 4, 12, 12, x) 46 + #define __LR8_EL2(x) sys_reg(3, 4, 12, 13, x) 47 + 48 + #define ICH_LR0_EL2 __LR0_EL2(0) 49 + #define ICH_LR1_EL2 __LR0_EL2(1) 50 + #define ICH_LR2_EL2 __LR0_EL2(2) 51 + #define ICH_LR3_EL2 __LR0_EL2(3) 52 + #define ICH_LR4_EL2 
__LR0_EL2(4) 53 + #define ICH_LR5_EL2 __LR0_EL2(5) 54 + #define ICH_LR6_EL2 __LR0_EL2(6) 55 + #define ICH_LR7_EL2 __LR0_EL2(7) 56 + #define ICH_LR8_EL2 __LR8_EL2(0) 57 + #define ICH_LR9_EL2 __LR8_EL2(1) 58 + #define ICH_LR10_EL2 __LR8_EL2(2) 59 + #define ICH_LR11_EL2 __LR8_EL2(3) 60 + #define ICH_LR12_EL2 __LR8_EL2(4) 61 + #define ICH_LR13_EL2 __LR8_EL2(5) 62 + #define ICH_LR14_EL2 __LR8_EL2(6) 63 + #define ICH_LR15_EL2 __LR8_EL2(7) 64 + 65 + #define __AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x) 66 + #define ICH_AP0R0_EL2 __AP0Rx_EL2(0) 67 + #define ICH_AP0R1_EL2 __AP0Rx_EL2(1) 68 + #define ICH_AP0R2_EL2 __AP0Rx_EL2(2) 69 + #define ICH_AP0R3_EL2 __AP0Rx_EL2(3) 70 + 71 + #define __AP1Rx_EL2(x) sys_reg(3, 4, 12, 9, x) 72 + #define ICH_AP1R0_EL2 __AP1Rx_EL2(0) 73 + #define ICH_AP1R1_EL2 __AP1Rx_EL2(1) 74 + #define ICH_AP1R2_EL2 __AP1Rx_EL2(2) 75 + #define ICH_AP1R3_EL2 __AP1Rx_EL2(3) 76 + 77 + #ifndef __ASSEMBLY__ 78 + 79 + #include <linux/stringify.h> 80 + 81 + /* 82 + * Low-level accessors 83 + * 84 + * These system registers are 32 bits, but we make sure that the compiler 85 + * sets the GP register's most significant bits to 0 with an explicit cast. 86 + */ 87 + 88 + static inline void gic_write_eoir(u32 irq) 89 + { 90 + asm volatile("msr_s " __stringify(ICC_EOIR1_EL1) ", %0" : : "r" ((u64)irq)); 91 + isb(); 92 + } 93 + 94 + static inline void gic_write_dir(u32 irq) 95 + { 96 + asm volatile("msr_s " __stringify(ICC_DIR_EL1) ", %0" : : "r" ((u64)irq)); 97 + isb(); 98 + } 99 + 100 + static inline u64 gic_read_iar_common(void) 101 + { 102 + u64 irqstat; 103 + 104 + asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat)); 105 + return irqstat; 106 + } 107 + 108 + /* 109 + * Cavium ThunderX erratum 23154 110 + * 111 + * The gicv3 of ThunderX requires a modified version for reading the 112 + * IAR status to ensure data synchronization (access to icc_iar1_el1 113 + * is not sync'ed before and after). 
114 + */ 115 + static inline u64 gic_read_iar_cavium_thunderx(void) 116 + { 117 + u64 irqstat; 118 + 119 + asm volatile( 120 + "nop;nop;nop;nop\n\t" 121 + "nop;nop;nop;nop\n\t" 122 + "mrs_s %0, " __stringify(ICC_IAR1_EL1) "\n\t" 123 + "nop;nop;nop;nop" 124 + : "=r" (irqstat)); 125 + mb(); 126 + 127 + return irqstat; 128 + } 129 + 130 + static inline void gic_write_pmr(u32 val) 131 + { 132 + asm volatile("msr_s " __stringify(ICC_PMR_EL1) ", %0" : : "r" ((u64)val)); 133 + } 134 + 135 + static inline void gic_write_ctlr(u32 val) 136 + { 137 + asm volatile("msr_s " __stringify(ICC_CTLR_EL1) ", %0" : : "r" ((u64)val)); 138 + isb(); 139 + } 140 + 141 + static inline void gic_write_grpen1(u32 val) 142 + { 143 + asm volatile("msr_s " __stringify(ICC_GRPEN1_EL1) ", %0" : : "r" ((u64)val)); 144 + isb(); 145 + } 146 + 147 + static inline void gic_write_sgi1r(u64 val) 148 + { 149 + asm volatile("msr_s " __stringify(ICC_SGI1R_EL1) ", %0" : : "r" (val)); 150 + } 151 + 152 + static inline u32 gic_read_sre(void) 153 + { 154 + u64 val; 155 + 156 + asm volatile("mrs_s %0, " __stringify(ICC_SRE_EL1) : "=r" (val)); 157 + return val; 158 + } 159 + 160 + static inline void gic_write_sre(u32 val) 161 + { 162 + asm volatile("msr_s " __stringify(ICC_SRE_EL1) ", %0" : : "r" ((u64)val)); 163 + isb(); 164 + } 165 + 166 + #define gic_read_typer(c) readq_relaxed(c) 167 + #define gic_write_irouter(v, c) writeq_relaxed(v, c) 168 + 169 + #endif /* __ASSEMBLY__ */ 170 + #endif /* __ASM_ARCH_GICV3_H */
+2 -1
arch/arm64/include/asm/cpufeature.h
··· 27 27 #define ARM64_HAS_SYSREG_GIC_CPUIF 3 28 28 #define ARM64_HAS_PAN 4 29 29 #define ARM64_HAS_LSE_ATOMICS 5 30 + #define ARM64_WORKAROUND_CAVIUM_23154 6 30 31 31 - #define ARM64_NCAPS 6 32 + #define ARM64_NCAPS 7 32 33 33 34 #ifndef __ASSEMBLY__ 34 35
+10 -7
arch/arm64/include/asm/cputype.h
··· 62 62 (0xf << MIDR_ARCHITECTURE_SHIFT) | \ 63 63 ((partnum) << MIDR_PARTNUM_SHIFT)) 64 64 65 - #define ARM_CPU_IMP_ARM 0x41 66 - #define ARM_CPU_IMP_APM 0x50 65 + #define ARM_CPU_IMP_ARM 0x41 66 + #define ARM_CPU_IMP_APM 0x50 67 + #define ARM_CPU_IMP_CAVIUM 0x43 67 68 68 - #define ARM_CPU_PART_AEM_V8 0xD0F 69 - #define ARM_CPU_PART_FOUNDATION 0xD00 70 - #define ARM_CPU_PART_CORTEX_A57 0xD07 71 - #define ARM_CPU_PART_CORTEX_A53 0xD03 69 + #define ARM_CPU_PART_AEM_V8 0xD0F 70 + #define ARM_CPU_PART_FOUNDATION 0xD00 71 + #define ARM_CPU_PART_CORTEX_A57 0xD07 72 + #define ARM_CPU_PART_CORTEX_A53 0xD03 72 73 73 - #define APM_CPU_PART_POTENZA 0x000 74 + #define APM_CPU_PART_POTENZA 0x000 75 + 76 + #define CAVIUM_CPU_PART_THUNDERX 0x0A1 74 77 75 78 #define ID_AA64MMFR0_BIGENDEL0_SHIFT 16 76 79 #define ID_AA64MMFR0_BIGENDEL0_MASK (0xf << ID_AA64MMFR0_BIGENDEL0_SHIFT)
+9
arch/arm64/kernel/cpu_errata.c
··· 23 23 24 24 #define MIDR_CORTEX_A53 MIDR_CPU_PART(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A53) 25 25 #define MIDR_CORTEX_A57 MIDR_CPU_PART(ARM_CPU_IMP_ARM, ARM_CPU_PART_CORTEX_A57) 26 + #define MIDR_THUNDERX MIDR_CPU_PART(ARM_CPU_IMP_CAVIUM, CAVIUM_CPU_PART_THUNDERX) 26 27 27 28 #define CPU_MODEL_MASK (MIDR_IMPLEMENTOR_MASK | MIDR_PARTNUM_MASK | \ 28 29 MIDR_ARCHITECTURE_MASK) ··· 81 80 .desc = "ARM erratum 845719", 82 81 .capability = ARM64_WORKAROUND_845719, 83 82 MIDR_RANGE(MIDR_CORTEX_A53, 0x00, 0x04), 83 + }, 84 + #endif 85 + #ifdef CONFIG_CAVIUM_ERRATUM_23154 86 + { 87 + /* Cavium ThunderX, pass 1.x */ 88 + .desc = "Cavium erratum 23154", 89 + .capability = ARM64_WORKAROUND_CAVIUM_23154, 90 + MIDR_RANGE(MIDR_THUNDERX, 0x00, 0x01), 84 91 }, 85 92 #endif 86 93 {
+18 -1
arch/arm64/kernel/cpufeature.c
··· 23 23 #include <asm/cpufeature.h> 24 24 #include <asm/processor.h> 25 25 26 + #include <linux/irqchip/arm-gic-v3.h> 27 + 26 28 static bool 27 29 feature_matches(u64 reg, const struct arm64_cpu_capabilities *entry) 28 30 { ··· 47 45 __ID_FEAT_CHK(id_aa64mmfr1); 48 46 __ID_FEAT_CHK(id_aa64isar0); 49 47 48 + static bool has_useable_gicv3_cpuif(const struct arm64_cpu_capabilities *entry) 49 + { 50 + bool has_sre; 51 + 52 + if (!has_id_aa64pfr0_feature(entry)) 53 + return false; 54 + 55 + has_sre = gic_enable_sre(); 56 + if (!has_sre) 57 + pr_warn_once("%s present but disabled by higher exception level\n", 58 + entry->desc); 59 + 60 + return has_sre; 61 + } 62 + 50 63 static const struct arm64_cpu_capabilities arm64_features[] = { 51 64 { 52 65 .desc = "GIC system register CPU interface", 53 66 .capability = ARM64_HAS_SYSREG_GIC_CPUIF, 54 - .matches = has_id_aa64pfr0_feature, 67 + .matches = has_useable_gicv3_cpuif, 55 68 .field_pos = 24, 56 69 .min_field_value = 1, 57 70 },
+2
arch/arm64/kernel/head.S
··· 498 498 orr x0, x0, #ICC_SRE_EL2_ENABLE // Set ICC_SRE_EL2.Enable==1 499 499 msr_s ICC_SRE_EL2, x0 500 500 isb // Make sure SRE is now set 501 + mrs_s x0, ICC_SRE_EL2 // Read SRE back, 502 + tbz x0, #0, 3f // and check that it sticks 501 503 msr_s ICH_HCR_EL2, xzr // Reset ICC_HCR_EL2 to defaults 502 504 503 505 3:
+4
arch/arm64/kvm/Kconfig
··· 16 16 17 17 if VIRTUALIZATION 18 18 19 + config KVM_ARM_VGIC_V3 20 + bool 21 + 19 22 config KVM 20 23 bool "Kernel-based Virtual Machine (KVM) support" 21 24 depends on OF ··· 34 31 select KVM_VFIO 35 32 select HAVE_KVM_EVENTFD 36 33 select HAVE_KVM_IRQFD 34 + select KVM_ARM_VGIC_V3 37 35 ---help--- 38 36 Support hosting virtualized guest machines. 39 37
+1 -1
arch/c6x/platforms/megamod-pic.c
··· 178 178 static void __init parse_priority_map(struct megamod_pic *pic, 179 179 int *mapping, int size) 180 180 { 181 - struct device_node *np = pic->irqhost->of_node; 181 + struct device_node *np = irq_domain_get_of_node(pic->irqhost); 182 182 const __be32 *map; 183 183 int i, maplen; 184 184 u32 val;
+2 -2
arch/mips/cavium-octeon/octeon-irq.c
··· 1094 1094 unsigned int pin; 1095 1095 unsigned int trigger; 1096 1096 1097 - if (d->of_node != node) 1097 + if (irq_domain_get_of_node(d) != node) 1098 1098 return -EINVAL; 1099 1099 1100 1100 if (intsize < 2) ··· 2163 2163 2164 2164 if (hw >= host_data->max_bits) { 2165 2165 pr_err("ERROR: %s mapping %u is too big!\n", 2166 - d->of_node->name, (unsigned)hw); 2166 + irq_domain_get_of_node(d)->name, (unsigned)hw); 2167 2167 return -EINVAL; 2168 2168 } 2169 2169
+1 -1
arch/powerpc/platforms/cell/axon_msi.c
··· 327 327 u32 tmp; 328 328 329 329 pr_devel("axon_msi: disabling %s\n", 330 - msic->irq_domain->of_node->full_name); 330 + irq_domain_get_of_node(msic->irq_domain)->full_name); 331 331 tmp = dcr_read(msic->dcr_host, MSIC_CTRL_REG); 332 332 tmp &= ~MSIC_CTRL_ENABLE & ~MSIC_CTRL_IRQ_ENABLE; 333 333 msic_dcr_write(msic, MSIC_CTRL_REG, tmp);
+6 -3
arch/powerpc/platforms/cell/spider-pic.c
··· 231 231 const u32 *imap, *tmp; 232 232 int imaplen, intsize, unit; 233 233 struct device_node *iic; 234 + struct device_node *of_node; 235 + 236 + of_node = irq_domain_get_of_node(pic->host); 234 237 235 238 /* First, we check whether we have a real "interrupts" in the device 236 239 * tree in case the device-tree is ever fixed 237 240 */ 238 - virq = irq_of_parse_and_map(pic->host->of_node, 0); 241 + virq = irq_of_parse_and_map(of_node, 0); 239 242 if (virq) 240 243 return virq; 241 244 242 245 /* Now do the horrible hacks */ 243 - tmp = of_get_property(pic->host->of_node, "#interrupt-cells", NULL); 246 + tmp = of_get_property(of_node, "#interrupt-cells", NULL); 244 247 if (tmp == NULL) 245 248 return NO_IRQ; 246 249 intsize = *tmp; 247 - imap = of_get_property(pic->host->of_node, "interrupt-map", &imaplen); 250 + imap = of_get_property(of_node, "interrupt-map", &imaplen); 248 251 if (imap == NULL || imaplen < (intsize + 1)) 249 252 return NO_IRQ; 250 253 iic = of_find_node_by_phandle(imap[intsize]);
+4 -2
arch/powerpc/platforms/pasemi/msi.c
··· 144 144 { 145 145 int rc; 146 146 struct pci_controller *phb; 147 + struct device_node *of_node; 147 148 148 - if (!mpic->irqhost->of_node || 149 - !of_device_is_compatible(mpic->irqhost->of_node, 149 + of_node = irq_domain_get_of_node(mpic->irqhost); 150 + if (!of_node || 151 + !of_device_is_compatible(of_node, 150 152 "pasemi,pwrficient-openpic")) 151 153 return -ENODEV; 152 154
+1 -1
arch/powerpc/platforms/powernv/opal-irqchip.c
··· 137 137 static int opal_event_match(struct irq_domain *h, struct device_node *node, 138 138 enum irq_domain_bus_token bus_token) 139 139 { 140 - return h->of_node == node; 140 + return irq_domain_get_of_node(h) == node; 141 141 } 142 142 143 143 static int opal_event_xlate(struct irq_domain *h, struct device_node *np,
+2 -1
arch/powerpc/sysdev/ehv_pic.c
··· 181 181 enum irq_domain_bus_token bus_token) 182 182 { 183 183 /* Exact match, unless ehv_pic node is NULL */ 184 - return h->of_node == NULL || h->of_node == node; 184 + struct device_node *of_node = irq_domain_get_of_node(h); 185 + return of_node == NULL || of_node == node; 185 186 } 186 187 187 188 static int ehv_pic_host_map(struct irq_domain *h, unsigned int virq,
+1 -1
arch/powerpc/sysdev/fsl_msi.c
··· 110 110 int rc, hwirq; 111 111 112 112 rc = msi_bitmap_alloc(&msi_data->bitmap, NR_MSI_IRQS_MAX, 113 - msi_data->irqhost->of_node); 113 + irq_domain_get_of_node(msi_data->irqhost)); 114 114 if (rc) 115 115 return rc; 116 116
+2 -1
arch/powerpc/sysdev/i8259.c
··· 165 165 static int i8259_host_match(struct irq_domain *h, struct device_node *node, 166 166 enum irq_domain_bus_token bus_token) 167 167 { 168 - return h->of_node == NULL || h->of_node == node; 168 + struct device_node *of_node = irq_domain_get_of_node(h); 169 + return of_node == NULL || of_node == node; 169 170 } 170 171 171 172 static int i8259_host_map(struct irq_domain *h, unsigned int virq,
+2 -1
arch/powerpc/sysdev/ipic.c
··· 675 675 enum irq_domain_bus_token bus_token) 676 676 { 677 677 /* Exact match, unless ipic node is NULL */ 678 - return h->of_node == NULL || h->of_node == node; 678 + struct device_node *of_node = irq_domain_get_of_node(h); 679 + return of_node == NULL || of_node == node; 679 680 } 680 681 681 682 static int ipic_host_map(struct irq_domain *h, unsigned int virq,
+2 -1
arch/powerpc/sysdev/mpic.c
··· 1011 1011 enum irq_domain_bus_token bus_token) 1012 1012 { 1013 1013 /* Exact match, unless mpic node is NULL */ 1014 - return h->of_node == NULL || h->of_node == node; 1014 + struct device_node *of_node = irq_domain_get_of_node(h); 1015 + return of_node == NULL || of_node == node; 1015 1016 } 1016 1017 1017 1018 static int mpic_host_map(struct irq_domain *h, unsigned int virq,
+1 -1
arch/powerpc/sysdev/mpic_msi.c
··· 84 84 int rc; 85 85 86 86 rc = msi_bitmap_alloc(&mpic->msi_bitmap, mpic->num_sources, 87 - mpic->irqhost->of_node); 87 + irq_domain_get_of_node(mpic->irqhost)); 88 88 if (rc) 89 89 return rc; 90 90
+2 -1
arch/powerpc/sysdev/qe_lib/qe_ic.c
··· 248 248 enum irq_domain_bus_token bus_token) 249 249 { 250 250 /* Exact match, unless qe_ic node is NULL */ 251 - return h->of_node == NULL || h->of_node == node; 251 + struct device_node *of_node = irq_domain_get_of_node(h); 252 + return of_node == NULL || of_node == node; 252 253 } 253 254 254 255 static int qe_ic_host_map(struct irq_domain *h, unsigned int virq,
+33 -21
drivers/acpi/gsi.c
··· 11 11 #include <linux/acpi.h> 12 12 #include <linux/irq.h> 13 13 #include <linux/irqdomain.h> 14 + #include <linux/of.h> 14 15 15 16 enum acpi_irq_model_id acpi_irq_model; 17 + 18 + static struct fwnode_handle *acpi_gsi_domain_id; 16 19 17 20 static unsigned int acpi_gsi_get_irq_type(int trigger, int polarity) 18 21 { ··· 48 45 */ 49 46 int acpi_gsi_to_irq(u32 gsi, unsigned int *irq) 50 47 { 51 - /* 52 - * Only default domain is supported at present, always find 53 - * the mapping corresponding to default domain by passing NULL 54 - * as irq_domain parameter 55 - */ 56 - *irq = irq_find_mapping(NULL, gsi); 48 + struct irq_domain *d = irq_find_matching_fwnode(acpi_gsi_domain_id, 49 + DOMAIN_BUS_ANY); 50 + 51 + *irq = irq_find_mapping(d, gsi); 57 52 /* 58 53 * *irq == 0 means no mapping, that should 59 54 * be reported as a failure ··· 73 72 int acpi_register_gsi(struct device *dev, u32 gsi, int trigger, 74 73 int polarity) 75 74 { 76 - unsigned int irq; 77 - unsigned int irq_type = acpi_gsi_get_irq_type(trigger, polarity); 75 + struct irq_fwspec fwspec; 78 76 79 - /* 80 - * There is no way at present to look-up the IRQ domain on ACPI, 81 - * hence always create mapping referring to the default domain 82 - * by passing NULL as irq_domain parameter 83 - */ 84 - irq = irq_create_mapping(NULL, gsi); 85 - if (!irq) 77 + if (WARN_ON(!acpi_gsi_domain_id)) { 78 + pr_warn("GSI: No registered irqchip, giving up\n"); 86 79 return -EINVAL; 80 + } 87 81 88 - /* Set irq type if specified and different than the current one */ 89 - if (irq_type != IRQ_TYPE_NONE && 90 - irq_type != irq_get_trigger_type(irq)) 91 - irq_set_irq_type(irq, irq_type); 92 - return irq; 82 + fwspec.fwnode = acpi_gsi_domain_id; 83 + fwspec.param[0] = gsi; 84 + fwspec.param[1] = acpi_gsi_get_irq_type(trigger, polarity); 85 + fwspec.param_count = 2; 86 + 87 + return irq_create_fwspec_mapping(&fwspec); 93 88 } 94 89 EXPORT_SYMBOL_GPL(acpi_register_gsi); 95 90 ··· 95 98 */ 96 99 void acpi_unregister_gsi(u32 
gsi) 97 100 { 98 - int irq = irq_find_mapping(NULL, gsi); 101 + struct irq_domain *d = irq_find_matching_fwnode(acpi_gsi_domain_id, 102 + DOMAIN_BUS_ANY); 103 + int irq = irq_find_mapping(d, gsi); 99 104 100 105 irq_dispose_mapping(irq); 101 106 } 102 107 EXPORT_SYMBOL_GPL(acpi_unregister_gsi); 108 + 109 + /** 110 + * acpi_set_irq_model - Setup the GSI irqdomain information 111 + * @model: the value assigned to acpi_irq_model 112 + * @fwnode: the irq_domain identifier for mapping and looking up 113 + * GSI interrupts 114 + */ 115 + void __init acpi_set_irq_model(enum acpi_irq_model_id model, 116 + struct fwnode_handle *fwnode) 117 + { 118 + acpi_irq_model = model; 119 + acpi_gsi_domain_id = fwnode; 120 + }
+3 -3
drivers/base/platform-msi.c
··· 152 152 153 153 /** 154 154 * platform_msi_create_irq_domain - Create a platform MSI interrupt domain 155 - * @np: Optional device-tree node of the interrupt controller 155 + * @fwnode: Optional fwnode of the interrupt controller 156 156 * @info: MSI domain info 157 157 * @parent: Parent irq domain 158 158 * ··· 162 162 * Returns: 163 163 * A domain pointer or NULL in case of failure. 164 164 */ 165 - struct irq_domain *platform_msi_create_irq_domain(struct device_node *np, 165 + struct irq_domain *platform_msi_create_irq_domain(struct fwnode_handle *fwnode, 166 166 struct msi_domain_info *info, 167 167 struct irq_domain *parent) 168 168 { ··· 173 173 if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS) 174 174 platform_msi_update_chip_ops(info); 175 175 176 - domain = msi_create_irq_domain(np, info, parent); 176 + domain = msi_create_irq_domain(fwnode, info, parent); 177 177 if (domain) 178 178 domain->bus_token = DOMAIN_BUS_PLATFORM_MSI; 179 179
+1 -1
drivers/gpio/gpio-sodaville.c
··· 102 102 { 103 103 u32 line, type; 104 104 105 - if (node != h->of_node) 105 + if (node != irq_domain_get_of_node(h)) 106 106 return -EINVAL; 107 107 108 108 if (intsize < 2)
+6
drivers/irqchip/Kconfig
··· 123 123 124 124 config RENESAS_IRQC 125 125 bool 126 + select GENERIC_IRQ_CHIP 126 127 select IRQ_DOMAIN 127 128 128 129 config ST_IRQCHIP ··· 188 187 select IRQ_DOMAIN 189 188 help 190 189 Enables the wakeup IRQs for IMX platforms with GPCv2 block 190 + 191 + config IRQ_MXS 192 + def_bool y if MACH_ASM9260 || ARCH_MXS 193 + select IRQ_DOMAIN 194 + select STMP_DEVICE
+1 -1
drivers/irqchip/Makefile
··· 6 6 obj-$(CONFIG_ARCH_HIP04) += irq-hip04.o 7 7 obj-$(CONFIG_ARCH_MMP) += irq-mmp.o 8 8 obj-$(CONFIG_ARCH_MVEBU) += irq-armada-370-xp.o 9 - obj-$(CONFIG_ARCH_MXS) += irq-mxs.o 9 + obj-$(CONFIG_IRQ_MXS) += irq-mxs.o 10 10 obj-$(CONFIG_ARCH_TEGRA) += irq-tegra.o 11 11 obj-$(CONFIG_ARCH_S3C24XX) += irq-s3c24xx.o 12 12 obj-$(CONFIG_DW_APB_ICTL) += irq-dw-apb-ictl.o
+109
drivers/irqchip/alphascale_asm9260-icoll.h
··· 1 + /* 2 + * Copyright (C) 2014 Oleksij Rempel <linux@rempel-privat.de> 3 + * 4 + * This program is free software; you can redistribute it and/or modify 5 + * it under the terms of the GNU General Public License as published by 6 + * the Free Software Foundation; either version 2 of the License, or 7 + * (at your option) any later version. 8 + */ 9 + 10 + #ifndef _ALPHASCALE_ASM9260_ICOLL_H 11 + #define _ALPHASCALE_ASM9260_ICOLL_H 12 + 13 + #define ASM9260_NUM_IRQS 64 14 + /* 15 + * this device provides 4 offsets for each register: 16 + * 0x0 - plain read write mode 17 + * 0x4 - set mode, OR logic. 18 + * 0x8 - clr mode, XOR logic. 19 + * 0xc - toggle mode. 20 + */ 21 + 22 + #define ASM9260_HW_ICOLL_VECTOR 0x0000 23 + /* 24 + * bits 31:2 25 + * This register presents the vector address for the interrupt currently 26 + * active on the CPU IRQ input. Writing to this register notifies the 27 + * interrupt collector that the interrupt service routine for the current 28 + * interrupt has been entered. 29 + * The exception trap should have a LDPC instruction from this address: 30 + * LDPC ASM9260_HW_ICOLL_VECTOR_ADDR; IRQ exception at 0xffff0018 31 + */ 32 + 33 + /* 34 + * The Interrupt Collector Level Acknowledge Register is used by software to 35 + * indicate the completion of an interrupt on a specific level. 36 + * This register is written at the very end of an interrupt service routine. If 37 + * nesting is used then the CPU irq must be turned on before writing to this 38 + * register to avoid a race condition in the CPU interrupt hardware. 39 + */ 40 + #define ASM9260_HW_ICOLL_LEVELACK 0x0010 41 + #define ASM9260_BM_LEVELn(nr) BIT(nr) 42 + 43 + #define ASM9260_HW_ICOLL_CTRL 0x0020 44 + /* 45 + * ASM9260_BM_CTRL_SFTRST and ASM9260_BM_CTRL_CLKGATE are not available on 46 + * asm9260. 
47 + */ 48 + #define ASM9260_BM_CTRL_SFTRST BIT(31) 49 + #define ASM9260_BM_CTRL_CLKGATE BIT(30) 50 + /* disable interrupt level nesting */ 51 + #define ASM9260_BM_CTRL_NO_NESTING BIT(19) 52 + /* 53 + * Set this bit to one to enable the RISC32-style read side effect associated with 54 + * the vector address register. In this mode, interrupt in-service is signaled 55 + * by the read of the ASM9260_HW_ICOLL_VECTOR register to acquire the interrupt 56 + * vector address. Set this bit to zero for normal operation, in which the ISR 57 + * signals in-service explicitly by means of a write to the 58 + * ASM9260_HW_ICOLL_VECTOR register. 59 + * 0 - Must Write to Vector register to go in-service. 60 + * 1 - Go in-service as a read side effect 61 + */ 62 + #define ASM9260_BM_CTRL_ARM_RSE_MODE BIT(18) 63 + #define ASM9260_BM_CTRL_IRQ_ENABLE BIT(16) 64 + 65 + #define ASM9260_HW_ICOLL_STAT_OFFSET 0x0030 66 + /* 67 + * bits 5:0 68 + * Vector number of current interrupt. Multiply by 4 and add to vector base 69 + * address to obtain the value in ASM9260_HW_ICOLL_VECTOR. 70 + */ 71 + 72 + /* 73 + * RAW0 and RAW1 provide a read-only view of the raw interrupt request lines 74 + * coming from various parts of the chip. Its purpose is to improve diagnostic 75 + * observability. 76 + */ 77 + #define ASM9260_HW_ICOLL_RAW0 0x0040 78 + #define ASM9260_HW_ICOLL_RAW1 0x0050 79 + 80 + #define ASM9260_HW_ICOLL_INTERRUPT0 0x0060 81 + #define ASM9260_HW_ICOLL_INTERRUPTn(n) (0x0060 + ((n) >> 2) * 0x10) 82 + /* 83 + * WARNING: Modifying the priority of an enabled interrupt may result in 84 + * undefined behavior. 
85 + */ 86 + #define ASM9260_BM_INT_PRIORITY_MASK 0x3 87 + #define ASM9260_BM_INT_ENABLE BIT(2) 88 + #define ASM9260_BM_INT_SOFTIRQ BIT(3) 89 + 90 + #define ASM9260_BM_ICOLL_INTERRUPTn_SHIFT(n) (((n) & 0x3) << 3) 91 + #define ASM9260_BM_ICOLL_INTERRUPTn_ENABLE(n) (1 << (2 + \ 92 + ASM9260_BM_ICOLL_INTERRUPTn_SHIFT(n))) 93 + 94 + #define ASM9260_HW_ICOLL_VBASE 0x0160 95 + /* 96 + * bits 31:2 97 + * This bitfield holds the upper 30 bits of the base address of the vector 98 + * table. 99 + */ 100 + 101 + #define ASM9260_HW_ICOLL_CLEAR0 0x01d0 102 + #define ASM9260_HW_ICOLL_CLEAR1 0x01e0 103 + #define ASM9260_HW_ICOLL_CLEARn(n) (((n >> 5) * 0x10) \ 104 + + SET_REG) 105 + #define ASM9260_BM_CLEAR_BIT(n) BIT(n & 0x1f) 106 + 107 + /* Scratchpad */ 108 + #define ASM9260_HW_ICOLL_UNDEF_VECTOR 0x01f0 109 + #endif
+1 -1
drivers/irqchip/exynos-combiner.c
··· 144 144 unsigned long *out_hwirq, 145 145 unsigned int *out_type) 146 146 { 147 - if (d->of_node != controller) 147 + if (irq_domain_get_of_node(d) != controller) 148 148 return -EINVAL; 149 149 150 150 if (intsize < 2)
+1 -1
drivers/irqchip/irq-atmel-aic-common.c
··· 114 114 115 115 static void __init aic_common_ext_irq_of_init(struct irq_domain *domain) 116 116 { 117 - struct device_node *node = domain->of_node; 117 + struct device_node *node = irq_domain_get_of_node(domain); 118 118 struct irq_chip_generic *gc; 119 119 struct aic_chip_data *aic; 120 120 struct property *prop;
+27 -35
drivers/irqchip/irq-atmel-aic5.c
··· 70 70 static asmlinkage void __exception_irq_entry 71 71 aic5_handle(struct pt_regs *regs) 72 72 { 73 - struct irq_domain_chip_generic *dgc = aic5_domain->gc; 74 - struct irq_chip_generic *gc = dgc->gc[0]; 73 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(aic5_domain, 0); 75 74 u32 irqnr; 76 75 u32 irqstat; 77 76 78 - irqnr = irq_reg_readl(gc, AT91_AIC5_IVR); 79 - irqstat = irq_reg_readl(gc, AT91_AIC5_ISR); 77 + irqnr = irq_reg_readl(bgc, AT91_AIC5_IVR); 78 + irqstat = irq_reg_readl(bgc, AT91_AIC5_ISR); 80 79 81 80 if (!irqstat) 82 - irq_reg_writel(gc, 0, AT91_AIC5_EOICR); 81 + irq_reg_writel(bgc, 0, AT91_AIC5_EOICR); 83 82 else 84 83 handle_domain_irq(aic5_domain, irqnr, regs); 85 84 } ··· 86 87 static void aic5_mask(struct irq_data *d) 87 88 { 88 89 struct irq_domain *domain = d->domain; 89 - struct irq_domain_chip_generic *dgc = domain->gc; 90 - struct irq_chip_generic *bgc = dgc->gc[0]; 90 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(domain, 0); 91 91 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 92 92 93 93 /* ··· 103 105 static void aic5_unmask(struct irq_data *d) 104 106 { 105 107 struct irq_domain *domain = d->domain; 106 - struct irq_domain_chip_generic *dgc = domain->gc; 107 - struct irq_chip_generic *bgc = dgc->gc[0]; 108 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(domain, 0); 108 109 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 109 110 110 111 /* ··· 120 123 static int aic5_retrigger(struct irq_data *d) 121 124 { 122 125 struct irq_domain *domain = d->domain; 123 - struct irq_domain_chip_generic *dgc = domain->gc; 124 - struct irq_chip_generic *gc = dgc->gc[0]; 126 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(domain, 0); 125 127 126 128 /* Enable interrupt on AIC5 */ 127 - irq_gc_lock(gc); 128 - irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR); 129 - irq_reg_writel(gc, 1, AT91_AIC5_ISCR); 130 - irq_gc_unlock(gc); 129 + irq_gc_lock(bgc); 130 + irq_reg_writel(bgc, 
d->hwirq, AT91_AIC5_SSR); 131 + irq_reg_writel(bgc, 1, AT91_AIC5_ISCR); 132 + irq_gc_unlock(bgc); 131 133 132 134 return 0; 133 135 } ··· 134 138 static int aic5_set_type(struct irq_data *d, unsigned type) 135 139 { 136 140 struct irq_domain *domain = d->domain; 137 - struct irq_domain_chip_generic *dgc = domain->gc; 138 - struct irq_chip_generic *gc = dgc->gc[0]; 141 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(domain, 0); 139 142 unsigned int smr; 140 143 int ret; 141 144 142 - irq_gc_lock(gc); 143 - irq_reg_writel(gc, d->hwirq, AT91_AIC5_SSR); 144 - smr = irq_reg_readl(gc, AT91_AIC5_SMR); 145 + irq_gc_lock(bgc); 146 + irq_reg_writel(bgc, d->hwirq, AT91_AIC5_SSR); 147 + smr = irq_reg_readl(bgc, AT91_AIC5_SMR); 145 148 ret = aic_common_set_type(d, type, &smr); 146 149 if (!ret) 147 - irq_reg_writel(gc, smr, AT91_AIC5_SMR); 148 - irq_gc_unlock(gc); 150 + irq_reg_writel(bgc, smr, AT91_AIC5_SMR); 151 + irq_gc_unlock(bgc); 149 152 150 153 return ret; 151 154 } ··· 154 159 { 155 160 struct irq_domain *domain = d->domain; 156 161 struct irq_domain_chip_generic *dgc = domain->gc; 157 - struct irq_chip_generic *bgc = dgc->gc[0]; 162 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(domain, 0); 158 163 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 159 164 int i; 160 165 u32 mask; ··· 178 183 { 179 184 struct irq_domain *domain = d->domain; 180 185 struct irq_domain_chip_generic *dgc = domain->gc; 181 - struct irq_chip_generic *bgc = dgc->gc[0]; 186 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(domain, 0); 182 187 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 183 188 int i; 184 189 u32 mask; ··· 202 207 { 203 208 struct irq_domain *domain = d->domain; 204 209 struct irq_domain_chip_generic *dgc = domain->gc; 205 - struct irq_chip_generic *bgc = dgc->gc[0]; 210 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(domain, 0); 206 211 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 207 
212 int i; 208 213 ··· 257 262 irq_hw_number_t *out_hwirq, 258 263 unsigned int *out_type) 259 264 { 260 - struct irq_domain_chip_generic *dgc = d->gc; 261 - struct irq_chip_generic *gc; 265 + struct irq_chip_generic *bgc = irq_get_domain_generic_chip(d, 0); 262 266 unsigned smr; 263 267 int ret; 264 268 265 - if (!dgc) 269 + if (!bgc) 266 270 return -EINVAL; 267 271 268 272 ret = aic_common_irq_domain_xlate(d, ctrlr, intspec, intsize, ··· 269 275 if (ret) 270 276 return ret; 271 277 272 - gc = dgc->gc[0]; 273 - 274 - irq_gc_lock(gc); 275 - irq_reg_writel(gc, *out_hwirq, AT91_AIC5_SSR); 276 - smr = irq_reg_readl(gc, AT91_AIC5_SMR); 278 + irq_gc_lock(bgc); 279 + irq_reg_writel(bgc, *out_hwirq, AT91_AIC5_SSR); 280 + smr = irq_reg_readl(bgc, AT91_AIC5_SMR); 277 281 ret = aic_common_set_priority(intspec[2], &smr); 278 282 if (!ret) 279 - irq_reg_writel(gc, intspec[2] | smr, AT91_AIC5_SMR); 280 - irq_gc_unlock(gc); 283 + irq_reg_writel(bgc, intspec[2] | smr, AT91_AIC5_SMR); 284 + irq_gc_unlock(bgc); 281 285 282 286 return ret; 283 287 }
+34 -28
drivers/irqchip/irq-crossbar.c
··· 78 78 static int allocate_gic_irq(struct irq_domain *domain, unsigned virq, 79 79 irq_hw_number_t hwirq) 80 80 { 81 - struct of_phandle_args args; 81 + struct irq_fwspec fwspec; 82 82 int i; 83 83 int err; 84 + 85 + if (!irq_domain_get_of_node(domain->parent)) 86 + return -EINVAL; 84 87 85 88 raw_spin_lock(&cb->lock); 86 89 for (i = cb->int_max - 1; i >= 0; i--) { ··· 97 94 if (i < 0) 98 95 return -ENODEV; 99 96 100 - args.np = domain->parent->of_node; 101 - args.args_count = 3; 102 - args.args[0] = 0; /* SPI */ 103 - args.args[1] = i; 104 - args.args[2] = IRQ_TYPE_LEVEL_HIGH; 97 + fwspec.fwnode = domain->parent->fwnode; 98 + fwspec.param_count = 3; 99 + fwspec.param[0] = 0; /* SPI */ 100 + fwspec.param[1] = i; 101 + fwspec.param[2] = IRQ_TYPE_LEVEL_HIGH; 105 102 106 - err = irq_domain_alloc_irqs_parent(domain, virq, 1, &args); 103 + err = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec); 107 104 if (err) 108 105 cb->irq_map[i] = IRQ_FREE; 109 106 else ··· 115 112 static int crossbar_domain_alloc(struct irq_domain *d, unsigned int virq, 116 113 unsigned int nr_irqs, void *data) 117 114 { 118 - struct of_phandle_args *args = data; 115 + struct irq_fwspec *fwspec = data; 119 116 irq_hw_number_t hwirq; 120 117 int i; 121 118 122 - if (args->args_count != 3) 119 + if (fwspec->param_count != 3) 123 120 return -EINVAL; /* Not GIC compliant */ 124 - if (args->args[0] != 0) 121 + if (fwspec->param[0] != 0) 125 122 return -EINVAL; /* No PPI should point to this domain */ 126 123 127 - hwirq = args->args[1]; 124 + hwirq = fwspec->param[1]; 128 125 if ((hwirq + nr_irqs) > cb->max_crossbar_sources) 129 126 return -EINVAL; /* Can't deal with this */ 130 127 ··· 169 166 raw_spin_unlock(&cb->lock); 170 167 } 171 168 172 - static int crossbar_domain_xlate(struct irq_domain *d, 173 - struct device_node *controller, 174 - const u32 *intspec, unsigned int intsize, 175 - unsigned long *out_hwirq, 176 - unsigned int *out_type) 169 + static int crossbar_domain_translate(struct 
irq_domain *d, 170 + struct irq_fwspec *fwspec, 171 + unsigned long *hwirq, 172 + unsigned int *type) 177 173 { 178 - if (d->of_node != controller) 179 - return -EINVAL; /* Shouldn't happen, really... */ 180 - if (intsize != 3) 181 - return -EINVAL; /* Not GIC compliant */ 182 - if (intspec[0] != 0) 183 - return -EINVAL; /* No PPI should point to this domain */ 174 + if (is_of_node(fwspec->fwnode)) { 175 + if (fwspec->param_count != 3) 176 + return -EINVAL; 184 177 185 - *out_hwirq = intspec[1]; 186 - *out_type = intspec[2]; 187 - return 0; 178 + /* No PPI should point to this domain */ 179 + if (fwspec->param[0] != 0) 180 + return -EINVAL; 181 + 182 + *hwirq = fwspec->param[1]; 183 + *type = fwspec->param[2]; 184 + return 0; 185 + } 186 + 187 + return -EINVAL; 188 188 } 189 189 190 190 static const struct irq_domain_ops crossbar_domain_ops = { 191 - .alloc = crossbar_domain_alloc, 192 - .free = crossbar_domain_free, 193 - .xlate = crossbar_domain_xlate, 191 + .alloc = crossbar_domain_alloc, 192 + .free = crossbar_domain_free, 193 + .translate = crossbar_domain_translate, 194 194 }; 195 195 196 196 static int __init crossbar_of_init(struct device_node *node)
+11
drivers/irqchip/irq-gic-common.c
··· 21 21 22 22 #include "irq-gic-common.h" 23 23 24 + void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks, 25 + void *data) 26 + { 27 + for (; quirks->desc; quirks++) { 28 + if (quirks->iidr != (quirks->mask & iidr)) 29 + continue; 30 + quirks->init(data); 31 + pr_info("GIC: enabling workaround for %s\n", quirks->desc); 32 + } 33 + } 34 + 24 35 int gic_configure_irq(unsigned int irq, unsigned int type, 25 36 void __iomem *base, void (*sync_access)(void)) 26 37 {
+9
drivers/irqchip/irq-gic-common.h
··· 20 20 #include <linux/of.h> 21 21 #include <linux/irqdomain.h> 22 22 23 + struct gic_quirk { 24 + const char *desc; 25 + void (*init)(void *data); 26 + u32 iidr; 27 + u32 mask; 28 + }; 29 + 23 30 int gic_configure_irq(unsigned int irq, unsigned int type, 24 31 void __iomem *base, void (*sync_access)(void)); 25 32 void gic_dist_config(void __iomem *base, int gic_irqs, 26 33 void (*sync_access)(void)); 27 34 void gic_cpu_config(void __iomem *base, void (*sync_access)(void)); 35 + void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks, 36 + void *data); 28 37 29 38 #endif /* _IRQ_GIC_COMMON_H */
+111 -52
drivers/irqchip/irq-gic-v2m.c
···
 #define V2M_MSI_SETSPI_NS	0x040
 #define V2M_MIN_SPI		32
 #define V2M_MAX_SPI		1019
+#define V2M_MSI_IIDR		0xFCC

 #define V2M_MSI_TYPER_BASE_SPI(x) \
		(((x) >> V2M_MSI_TYPER_BASE_SHIFT) & V2M_MSI_TYPER_BASE_MASK)

 #define V2M_MSI_TYPER_NUM_SPI(x)	((x) & V2M_MSI_TYPER_NUM_MASK)

+/* APM X-Gene with GICv2m MSI_IIDR register value */
+#define XGENE_GICV2M_MSI_IIDR		0x06000170
+
+/* List of flags for specific v2m implementation */
+#define GICV2M_NEEDS_SPI_OFFSET		0x00000001
+
+static LIST_HEAD(v2m_nodes);
+static DEFINE_SPINLOCK(v2m_lock);
+
 struct v2m_data {
-    spinlock_t msi_cnt_lock;
+    struct list_head entry;
+    struct device_node *node;
     struct resource res;	/* GICv2m resource */
     void __iomem *base;		/* GICv2m virt address */
     u32 spi_start;		/* The SPI number that MSIs start */
     u32 nr_spis;		/* The number of SPIs for MSIs */
     unsigned long *bm;		/* MSI vector bitmap */
+    u32 flags;			/* v2m flags for specific implementation */
 };

 static void gicv2m_mask_msi_irq(struct irq_data *d)
···
     msg->address_hi = upper_32_bits(addr);
     msg->address_lo = lower_32_bits(addr);
     msg->data = data->hwirq;
+
+    if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET)
+        msg->data -= v2m->spi_start;
 }

 static struct irq_chip gicv2m_irq_chip = {
···
                                    unsigned int virq,
                                    irq_hw_number_t hwirq)
 {
-    struct of_phandle_args args;
+    struct irq_fwspec fwspec;
     struct irq_data *d;
     int err;

-    args.np = domain->parent->of_node;
-    args.args_count = 3;
-    args.args[0] = 0;
-    args.args[1] = hwirq - 32;
-    args.args[2] = IRQ_TYPE_EDGE_RISING;
+    if (is_of_node(domain->parent->fwnode)) {
+        fwspec.fwnode = domain->parent->fwnode;
+        fwspec.param_count = 3;
+        fwspec.param[0] = 0;
+        fwspec.param[1] = hwirq - 32;
+        fwspec.param[2] = IRQ_TYPE_EDGE_RISING;
+    } else {
+        return -EINVAL;
+    }

-    err = irq_domain_alloc_irqs_parent(domain, virq, 1, &args);
+    err = irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);
     if (err)
         return err;
···
         return;
     }

-    spin_lock(&v2m->msi_cnt_lock);
+    spin_lock(&v2m_lock);
     __clear_bit(pos, v2m->bm);
-    spin_unlock(&v2m->msi_cnt_lock);
+    spin_unlock(&v2m_lock);
 }

 static int gicv2m_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
                                    unsigned int nr_irqs, void *args)
 {
-    struct v2m_data *v2m = domain->host_data;
+    struct v2m_data *v2m = NULL, *tmp;
     int hwirq, offset, err = 0;

-    spin_lock(&v2m->msi_cnt_lock);
-    offset = find_first_zero_bit(v2m->bm, v2m->nr_spis);
-    if (offset < v2m->nr_spis)
-        __set_bit(offset, v2m->bm);
-    else
-        err = -ENOSPC;
-    spin_unlock(&v2m->msi_cnt_lock);
+    spin_lock(&v2m_lock);
+    list_for_each_entry(tmp, &v2m_nodes, entry) {
+        offset = find_first_zero_bit(tmp->bm, tmp->nr_spis);
+        if (offset < tmp->nr_spis) {
+            __set_bit(offset, tmp->bm);
+            v2m = tmp;
+            break;
+        }
+    }
+    spin_unlock(&v2m_lock);

-    if (err)
-        return err;
+    if (!v2m)
+        return -ENOSPC;

     hwirq = v2m->spi_start + offset;
···
     .chip	= &gicv2m_pmsi_irq_chip,
 };

+static void gicv2m_teardown(void)
+{
+    struct v2m_data *v2m, *tmp;
+
+    list_for_each_entry_safe(v2m, tmp, &v2m_nodes, entry) {
+        list_del(&v2m->entry);
+        kfree(v2m->bm);
+        iounmap(v2m->base);
+        of_node_put(v2m->node);
+        kfree(v2m);
+    }
+}
+
+static int gicv2m_allocate_domains(struct irq_domain *parent)
+{
+    struct irq_domain *inner_domain, *pci_domain, *plat_domain;
+    struct v2m_data *v2m;
+
+    v2m = list_first_entry_or_null(&v2m_nodes, struct v2m_data, entry);
+    if (!v2m)
+        return 0;
+
+    inner_domain = irq_domain_create_tree(of_node_to_fwnode(v2m->node),
+                                          &gicv2m_domain_ops, v2m);
+    if (!inner_domain) {
+        pr_err("Failed to create GICv2m domain\n");
+        return -ENOMEM;
+    }
+
+    inner_domain->bus_token = DOMAIN_BUS_NEXUS;
+    inner_domain->parent = parent;
+    pci_domain = pci_msi_create_irq_domain(of_node_to_fwnode(v2m->node),
+                                           &gicv2m_msi_domain_info,
+                                           inner_domain);
+    plat_domain = platform_msi_create_irq_domain(of_node_to_fwnode(v2m->node),
+                                                 &gicv2m_pmsi_domain_info,
+                                                 inner_domain);
+    if (!pci_domain || !plat_domain) {
+        pr_err("Failed to create MSI domains\n");
+        if (plat_domain)
+            irq_domain_remove(plat_domain);
+        if (pci_domain)
+            irq_domain_remove(pci_domain);
+        irq_domain_remove(inner_domain);
+        return -ENOMEM;
+    }
+
+    return 0;
+}
+
 static int __init gicv2m_init_one(struct device_node *node,
                                   struct irq_domain *parent)
 {
     int ret;
     struct v2m_data *v2m;
-    struct irq_domain *inner_domain, *pci_domain, *plat_domain;

     v2m = kzalloc(sizeof(struct v2m_data), GFP_KERNEL);
     if (!v2m) {
         pr_err("Failed to allocate struct v2m_data.\n");
         return -ENOMEM;
     }
+
+    INIT_LIST_HEAD(&v2m->entry);
+    v2m->node = node;

     ret = of_address_to_resource(node, 0, &v2m->res);
     if (ret) {
···
         goto err_iounmap;
     }

+    /*
+     * APM X-Gene GICv2m implementation has an erratum where
+     * the MSI data needs to be the offset from the spi_start
+     * in order to trigger the correct MSI interrupt. This is
+     * different from the standard GICv2m implementation where
+     * the MSI data is the absolute value within the range from
+     * spi_start to (spi_start + num_spis).
+     */
+    if (readl_relaxed(v2m->base + V2M_MSI_IIDR) == XGENE_GICV2M_MSI_IIDR)
+        v2m->flags |= GICV2M_NEEDS_SPI_OFFSET;
+
     v2m->bm = kzalloc(sizeof(long) * BITS_TO_LONGS(v2m->nr_spis),
                       GFP_KERNEL);
     if (!v2m->bm) {
···
         goto err_iounmap;
     }

-    inner_domain = irq_domain_add_tree(node, &gicv2m_domain_ops, v2m);
-    if (!inner_domain) {
-        pr_err("Failed to create GICv2m domain\n");
-        ret = -ENOMEM;
-        goto err_free_bm;
-    }
-
-    inner_domain->bus_token = DOMAIN_BUS_NEXUS;
-    inner_domain->parent = parent;
-    pci_domain = pci_msi_create_irq_domain(node, &gicv2m_msi_domain_info,
-                                           inner_domain);
-    plat_domain = platform_msi_create_irq_domain(node,
-                                                 &gicv2m_pmsi_domain_info,
-                                                 inner_domain);
-    if (!pci_domain || !plat_domain) {
-        pr_err("Failed to create MSI domains\n");
-        ret = -ENOMEM;
-        goto err_free_domains;
-    }
-
-    spin_lock_init(&v2m->msi_cnt_lock);
-
+    list_add_tail(&v2m->entry, &v2m_nodes);
     pr_info("Node %s: range[%#lx:%#lx], SPI[%d:%d]\n", node->name,
             (unsigned long)v2m->res.start, (unsigned long)v2m->res.end,
             v2m->spi_start, (v2m->spi_start + v2m->nr_spis));

     return 0;

-err_free_domains:
-    if (plat_domain)
-        irq_domain_remove(plat_domain);
-    if (pci_domain)
-        irq_domain_remove(pci_domain);
-    if (inner_domain)
-        irq_domain_remove(inner_domain);
-err_free_bm:
-    kfree(v2m->bm);
 err_iounmap:
     iounmap(v2m->base);
 err_free_v2m:
···
     }
 }

+    if (!ret)
+        ret = gicv2m_allocate_domains(parent);
+    if (ret)
+        gicv2m_teardown();
     return ret;
 }
drivers/irqchip/irq-gic-v3-its-pci-msi.c (+3 -4)
···

 struct its_pci_alias {
     struct pci_dev	*pdev;
-    u32			dev_id;
     u32			count;
 };
···
 {
     struct its_pci_alias *dev_alias = data;

-    dev_alias->dev_id = alias;
     if (pdev != dev_alias->pdev)
         dev_alias->count += its_pci_msi_vec_count(pdev);
···
     pci_for_each_dma_alias(pdev, its_get_pci_alias, &dev_alias);

     /* ITS specific DeviceID, as the core ITS ignores dev. */
-    info->scratchpad[0].ul = dev_alias.dev_id;
+    info->scratchpad[0].ul = pci_msi_domain_get_msi_rid(domain, pdev);

     return msi_info->ops->msi_prepare(domain->parent,
                                       dev, dev_alias.count, info);
···
             continue;
         }

-        if (!pci_msi_create_irq_domain(np, &its_pci_msi_domain_info,
+        if (!pci_msi_create_irq_domain(of_node_to_fwnode(np),
+                                       &its_pci_msi_domain_info,
                                        parent)) {
             pr_err("%s: unable to create PCI domain\n",
                    np->full_name);
drivers/irqchip/irq-gic-v3-its-platform-msi.c (+17 -4)
···
 {
     struct msi_domain_info *msi_info;
     u32 dev_id;
-    int ret;
+    int ret, index = 0;

     msi_info = msi_get_domain_info(domain->parent);

     /* Suck the DeviceID out of the msi-parent property */
-    ret = of_property_read_u32_index(dev->of_node, "msi-parent",
-                                     1, &dev_id);
+    do {
+        struct of_phandle_args args;
+
+        ret = of_parse_phandle_with_args(dev->of_node,
+                                         "msi-parent", "#msi-cells",
+                                         index, &args);
+        if (args.np == irq_domain_get_of_node(domain)) {
+            if (WARN_ON(args.args_count != 1))
+                return -EINVAL;
+            dev_id = args.args[0];
+            break;
+        }
+    } while (!ret);
+
     if (ret)
         return ret;
···
             continue;
         }

-        if (!platform_msi_create_irq_domain(np, &its_pmsi_domain_info,
+        if (!platform_msi_create_irq_domain(of_node_to_fwnode(np),
+                                            &its_pmsi_domain_info,
                                             parent)) {
             pr_err("%s: unable to create platform domain\n",
                    np->full_name);
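The loop above walks the multi-parent form of "msi-parent", where each phandle now carries one DeviceID cell (the parent advertises #msi-cells = <1>). An illustrative binding fragment; the labels, addresses and DeviceID are invented for the example:

```dts
its0: msi-controller@2c200000 {
	compatible = "arm,gic-v3-its";
	msi-controller;
	#msi-cells = <1>;
};

dma@7ffb0000 {
	compatible = "vendor,example-dma";
	/* new form: one (phandle, DeviceID) pair per possible parent */
	msi-parent = <&its0 0x4600>;
};
```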
drivers/irqchip/irq-gic-v3-its.c (+70 -13)
···
 #include <asm/cputype.h>
 #include <asm/exception.h>

-#define ITS_FLAGS_CMDQ_NEEDS_FLUSHING		(1 << 0)
+#include "irq-gic-common.h"
+
+#define ITS_FLAGS_CMDQ_NEEDS_FLUSHING		(1ULL << 0)
+#define ITS_FLAGS_WORKAROUND_CAVIUM_22375	(1ULL << 1)

 #define RDIST_FLAGS_PROPBASE_NEEDS_FLUSHING	(1 << 0)
···
     int i;
     int psz = SZ_64K;
     u64 shr = GITS_BASER_InnerShareable;
-    u64 cache = GITS_BASER_WaWb;
+    u64 cache;
+    u64 typer;
+    u32 ids;
+
+    if (its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_22375) {
+        /*
+         * erratum 22375: only alloc 8MB table size
+         * erratum 24313: ignore memory access type
+         */
+        cache = 0;
+        ids = 0x14;	/* 20 bits, 8MB */
+    } else {
+        cache = GITS_BASER_WaWb;
+        typer = readq_relaxed(its->base + GITS_TYPER);
+        ids = GITS_TYPER_DEVBITS(typer);
+    }

     for (i = 0; i < GITS_BASER_NR_REGS; i++) {
         u64 val = readq_relaxed(its->base + GITS_BASER + i * 8);
···
         u64 entry_size = GITS_BASER_ENTRY_SIZE(val);
         int order = get_order(psz);
         int alloc_size;
+        int alloc_pages;
         u64 tmp;
         void *base;
···
          * For other tables, only allocate a single page.
          */
         if (type == GITS_BASER_TYPE_DEVICE) {
-            u64 typer = readq_relaxed(its->base + GITS_TYPER);
-            u32 ids = GITS_TYPER_DEVBITS(typer);
-
             /*
              * 'order' was initialized earlier to the default page
              * granule of the ITS. We can't have an allocation
···
         }

         alloc_size = (1 << order) * PAGE_SIZE;
+        alloc_pages = (alloc_size / psz);
+        if (alloc_pages > GITS_BASER_PAGES_MAX) {
+            alloc_pages = GITS_BASER_PAGES_MAX;
+            order = get_order(GITS_BASER_PAGES_MAX * psz);
+            pr_warn("%s: Device Table too large, reduce its page order to %u (%u pages)\n",
+                    node_name, order, alloc_pages);
+        }
+
         base = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO, order);
         if (!base) {
             err = -ENOMEM;
···
             break;
         }

-        val |= (alloc_size / psz) - 1;
+        val |= alloc_pages - 1;

         writeq_relaxed(val, its->base + GITS_BASER + i * 8);
         tmp = readq_relaxed(its->base + GITS_BASER + i * 8);
···
                                     unsigned int virq,
                                     irq_hw_number_t hwirq)
 {
-    struct of_phandle_args args;
+    struct irq_fwspec fwspec;

-    args.np = domain->parent->of_node;
-    args.args_count = 3;
-    args.args[0] = GIC_IRQ_TYPE_LPI;
-    args.args[1] = hwirq;
-    args.args[2] = IRQ_TYPE_EDGE_RISING;
+    if (irq_domain_get_of_node(domain->parent)) {
+        fwspec.fwnode = domain->parent->fwnode;
+        fwspec.param_count = 3;
+        fwspec.param[0] = GIC_IRQ_TYPE_LPI;
+        fwspec.param[1] = hwirq;
+        fwspec.param[2] = IRQ_TYPE_EDGE_RISING;
+    } else {
+        return -EINVAL;
+    }

-    return irq_domain_alloc_irqs_parent(domain, virq, 1, &args);
+    return irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);
 }

 static int its_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
···
     }
 }

+static void __maybe_unused its_enable_quirk_cavium_22375(void *data)
+{
+    struct its_node *its = data;
+
+    its->flags |= ITS_FLAGS_WORKAROUND_CAVIUM_22375;
+}
+
+static const struct gic_quirk its_quirks[] = {
+#ifdef CONFIG_CAVIUM_ERRATUM_22375
+    {
+        .desc	= "ITS: Cavium errata 22375, 24313",
+        .iidr	= 0xa100034c,	/* ThunderX pass 1.x */
+        .mask	= 0xffff0fff,
+        .init	= its_enable_quirk_cavium_22375,
+    },
+#endif
+    {
+    }
+};
+
+static void its_enable_quirks(struct its_node *its)
+{
+    u32 iidr = readl_relaxed(its->base + GITS_IIDR);
+
+    gic_enable_quirks(iidr, its_quirks, its);
+}
+
 static int its_probe(struct device_node *node, struct irq_domain *parent)
 {
     struct resource res;
···
         goto out_free_its;
     }
     its->cmd_write = its->cmd_base;
+
+    its_enable_quirks(its);

     err = its_alloc_tables(node->full_name, its);
     if (err)
drivers/irqchip/irq-gic-v3.c (+69 -92)
···
     gic_do_wait_for_rwp(gic_data_rdist_rd_base());
 }

-/* Low level accessors */
+#ifdef CONFIG_ARM64
+static DEFINE_STATIC_KEY_FALSE(is_cavium_thunderx);
+
 static u64 __maybe_unused gic_read_iar(void)
 {
-    u64 irqstat;
-
-    asm volatile("mrs_s %0, " __stringify(ICC_IAR1_EL1) : "=r" (irqstat));
-    return irqstat;
+    if (static_branch_unlikely(&is_cavium_thunderx))
+        return gic_read_iar_cavium_thunderx();
+    else
+        return gic_read_iar_common();
 }
-
-static void __maybe_unused gic_write_pmr(u64 val)
-{
-    asm volatile("msr_s " __stringify(ICC_PMR_EL1) ", %0" : : "r" (val));
-}
-
-static void __maybe_unused gic_write_ctlr(u64 val)
-{
-    asm volatile("msr_s " __stringify(ICC_CTLR_EL1) ", %0" : : "r" (val));
-    isb();
-}
-
-static void __maybe_unused gic_write_grpen1(u64 val)
-{
-    asm volatile("msr_s " __stringify(ICC_GRPEN1_EL1) ", %0" : : "r" (val));
-    isb();
-}
-
-static void __maybe_unused gic_write_sgi1r(u64 val)
-{
-    asm volatile("msr_s " __stringify(ICC_SGI1R_EL1) ", %0" : : "r" (val));
-}
-
-static void gic_enable_sre(void)
-{
-    u64 val;
-
-    asm volatile("mrs_s %0, " __stringify(ICC_SRE_EL1) : "=r" (val));
-    val |= ICC_SRE_EL1_SRE;
-    asm volatile("msr_s " __stringify(ICC_SRE_EL1) ", %0" : : "r" (val));
-    isb();
-
-    /*
-     * Need to check that the SRE bit has actually been set. If
-     * not, it means that SRE is disabled at EL2. We're going to
-     * die painfully, and there is nothing we can do about it.
-     *
-     * Kindly inform the luser.
-     */
-    asm volatile("mrs_s %0, " __stringify(ICC_SRE_EL1) : "=r" (val));
-    if (!(val & ICC_SRE_EL1_SRE))
-        pr_err("GIC: unable to set SRE (disabled at EL2), panic ahead\n");
-}
+#endif

 static void gic_enable_redist(bool enable)
 {
···
     return 0;
 }

-static u64 gic_mpidr_to_affinity(u64 mpidr)
+static u64 gic_mpidr_to_affinity(unsigned long mpidr)
 {
     u64 aff;

-    aff = (MPIDR_AFFINITY_LEVEL(mpidr, 3) << 32 |
+    aff = ((u64)MPIDR_AFFINITY_LEVEL(mpidr, 3) << 32 |
            MPIDR_AFFINITY_LEVEL(mpidr, 2) << 16 |
            MPIDR_AFFINITY_LEVEL(mpidr, 1) << 8  |
            MPIDR_AFFINITY_LEVEL(mpidr, 0));
···

 static asmlinkage void __exception_irq_entry gic_handle_irq(struct pt_regs *regs)
 {
-    u64 irqnr;
+    u32 irqnr;

     do {
         irqnr = gic_read_iar();
···
      */
     affinity = gic_mpidr_to_affinity(cpu_logical_map(smp_processor_id()));
     for (i = 32; i < gic_data.irq_nr; i++)
-        writeq_relaxed(affinity, base + GICD_IROUTER + i * 8);
+        gic_write_irouter(affinity, base + GICD_IROUTER + i * 8);
 }

 static int gic_populate_rdist(void)
 {
-    u64 mpidr = cpu_logical_map(smp_processor_id());
+    unsigned long mpidr = cpu_logical_map(smp_processor_id());
     u64 typer;
     u32 aff;
     int i;
···
     }

     do {
-        typer = readq_relaxed(ptr + GICR_TYPER);
+        typer = gic_read_typer(ptr + GICR_TYPER);
         if ((typer >> 32) == aff) {
             u64 offset = ptr - gic_data.redist_regions[i].redist_base;
             gic_data_rdist_rd_base() = ptr;
             gic_data_rdist()->phys_base = gic_data.redist_regions[i].phys_base + offset;
-            pr_info("CPU%d: found redistributor %llx region %d:%pa\n",
-                    smp_processor_id(),
-                    (unsigned long long)mpidr,
-                    i, &gic_data_rdist()->phys_base);
+            pr_info("CPU%d: found redistributor %lx region %d:%pa\n",
+                    smp_processor_id(), mpidr, i,
+                    &gic_data_rdist()->phys_base);
             return 0;
         }
···
     }

     /* We couldn't even deal with ourselves... */
-    WARN(true, "CPU%d: mpidr %llx has no re-distributor!\n",
-         smp_processor_id(), (unsigned long long)mpidr);
+    WARN(true, "CPU%d: mpidr %lx has no re-distributor!\n",
+         smp_processor_id(), mpidr);
     return -ENODEV;
 }

 static void gic_cpu_sys_reg_init(void)
 {
-    /* Enable system registers */
-    gic_enable_sre();
+    /*
+     * Need to check that the SRE bit has actually been set. If
+     * not, it means that SRE is disabled at EL2. We're going to
+     * die painfully, and there is nothing we can do about it.
+     *
+     * Kindly inform the luser.
+     */
+    if (!gic_enable_sre())
+        pr_err("GIC: unable to set SRE (disabled at EL2), panic ahead\n");

     /* Set priority mask register */
     gic_write_pmr(DEFAULT_PMR_VALUE);
···
 };

 static u16 gic_compute_target_list(int *base_cpu, const struct cpumask *mask,
-                                   u64 cluster_id)
+                                   unsigned long cluster_id)
 {
     int cpu = *base_cpu;
-    u64 mpidr = cpu_logical_map(cpu);
+    unsigned long mpidr = cpu_logical_map(cpu);
     u16 tlist = 0;

     while (cpu < nr_cpu_ids) {
···
     smp_wmb();

     for_each_cpu(cpu, mask) {
-        u64 cluster_id = cpu_logical_map(cpu) & ~0xffUL;
+        unsigned long cluster_id = cpu_logical_map(cpu) & ~0xffUL;
         u16 tlist;

         tlist = gic_compute_target_list(&cpu, mask, cluster_id);
···
     reg = gic_dist_base(d) + GICD_IROUTER + (gic_irq(d) * 8);
     val = gic_mpidr_to_affinity(cpu_logical_map(cpu));

-    writeq_relaxed(val, reg);
+    gic_write_irouter(val, reg);

     /*
      * If the interrupt was enabled, enabled it again. Otherwise,
···
     return 0;
 }

-static int gic_irq_domain_xlate(struct irq_domain *d,
-                                struct device_node *controller,
-                                const u32 *intspec, unsigned int intsize,
-                                unsigned long *out_hwirq, unsigned int *out_type)
+static int gic_irq_domain_translate(struct irq_domain *d,
+                                    struct irq_fwspec *fwspec,
+                                    unsigned long *hwirq,
+                                    unsigned int *type)
 {
-    if (d->of_node != controller)
-        return -EINVAL;
-    if (intsize < 3)
-        return -EINVAL;
+    if (is_of_node(fwspec->fwnode)) {
+        if (fwspec->param_count < 3)
+            return -EINVAL;

-    switch(intspec[0]) {
-    case 0:			/* SPI */
-        *out_hwirq = intspec[1] + 32;
-        break;
-    case 1:			/* PPI */
-        *out_hwirq = intspec[1] + 16;
-        break;
-    case GIC_IRQ_TYPE_LPI:	/* LPI */
-        *out_hwirq = intspec[1];
-        break;
-    default:
-        return -EINVAL;
+        switch (fwspec->param[0]) {
+        case 0:			/* SPI */
+            *hwirq = fwspec->param[1] + 32;
+            break;
+        case 1:			/* PPI */
+            *hwirq = fwspec->param[1] + 16;
+            break;
+        case GIC_IRQ_TYPE_LPI:	/* LPI */
+            *hwirq = fwspec->param[1];
+            break;
+        default:
+            return -EINVAL;
+        }
+
+        *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
+        return 0;
     }

-    *out_type = intspec[2] & IRQ_TYPE_SENSE_MASK;
-    return 0;
+    return -EINVAL;
 }

 static int gic_irq_domain_alloc(struct irq_domain *domain, unsigned int virq,
···
     int i, ret;
     irq_hw_number_t hwirq;
     unsigned int type = IRQ_TYPE_NONE;
-    struct of_phandle_args *irq_data = arg;
+    struct irq_fwspec *fwspec = arg;

-    ret = gic_irq_domain_xlate(domain, irq_data->np, irq_data->args,
-                               irq_data->args_count, &hwirq, &type);
+    ret = gic_irq_domain_translate(domain, fwspec, &hwirq, &type);
     if (ret)
         return ret;
···
 }

 static const struct irq_domain_ops gic_irq_domain_ops = {
-    .xlate = gic_irq_domain_xlate,
+    .translate = gic_irq_domain_translate,
     .alloc = gic_irq_domain_alloc,
     .free = gic_irq_domain_free,
 };
+
+static void gicv3_enable_quirks(void)
+{
+#ifdef CONFIG_ARM64
+    if (cpus_have_cap(ARM64_WORKAROUND_CAVIUM_23154))
+        static_branch_enable(&is_cavium_thunderx);
+#endif
+}

 static int __init gic_of_init(struct device_node *node, struct device_node *parent)
 {
···
     gic_data.redist_regions = rdist_regs;
     gic_data.nr_redist_regions = nr_redist_regions;
     gic_data.redist_stride = redist_stride;
+
+    gicv3_enable_quirks();

     /*
      * Find out how many interrupts are supported.
drivers/irqchip/irq-gic.c (+70 -40)
···

 #include "irq-gic-common.h"

+#ifdef CONFIG_ARM64
+#include <asm/cpufeature.h>
+
+static void gic_check_cpu_features(void)
+{
+    WARN_TAINT_ONCE(cpus_have_cap(ARM64_HAS_SYSREG_GIC_CPUIF),
+                    TAINT_CPU_OUT_OF_SPEC,
+                    "GICv3 system registers enabled, broken firmware!\n");
+}
+#else
+#define gic_check_cpu_features()	do { } while(0)
+#endif
+
 union gic_base {
     void __iomem *common_base;
     void __percpu * __iomem *percpu_base;
···
 {
 }

-static int gic_irq_domain_xlate(struct irq_domain *d,
-                                struct device_node *controller,
-                                const u32 *intspec, unsigned int intsize,
-                                unsigned long *out_hwirq, unsigned int *out_type)
+static int gic_irq_domain_translate(struct irq_domain *d,
+                                    struct irq_fwspec *fwspec,
+                                    unsigned long *hwirq,
+                                    unsigned int *type)
 {
-    unsigned long ret = 0;
+    if (is_of_node(fwspec->fwnode)) {
+        if (fwspec->param_count < 3)
+            return -EINVAL;

-    if (d->of_node != controller)
-        return -EINVAL;
-    if (intsize < 3)
-        return -EINVAL;
+        /* Get the interrupt number and add 16 to skip over SGIs */
+        *hwirq = fwspec->param[1] + 16;

-    /* Get the interrupt number and add 16 to skip over SGIs */
-    *out_hwirq = intspec[1] + 16;
+        /*
+         * For SPIs, we need to add 16 more to get the GIC irq
+         * ID number
+         */
+        if (!fwspec->param[0])
+            *hwirq += 16;

-    /* For SPIs, we need to add 16 more to get the GIC irq ID number */
-    if (!intspec[0])
-        *out_hwirq += 16;
+        *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
+        return 0;
+    }

-    *out_type = intspec[2] & IRQ_TYPE_SENSE_MASK;
+    if (fwspec->fwnode->type == FWNODE_IRQCHIP) {
+        if (fwspec->param_count != 2)
+            return -EINVAL;

-    return ret;
+        *hwirq = fwspec->param[0];
+        *type = fwspec->param[1];
+        return 0;
+    }
+
+    return -EINVAL;
 }

 #ifdef CONFIG_SMP
···
     int i, ret;
     irq_hw_number_t hwirq;
     unsigned int type = IRQ_TYPE_NONE;
-    struct of_phandle_args *irq_data = arg;
+    struct irq_fwspec *fwspec = arg;

-    ret = gic_irq_domain_xlate(domain, irq_data->np, irq_data->args,
-                               irq_data->args_count, &hwirq, &type);
+    ret = gic_irq_domain_translate(domain, fwspec, &hwirq, &type);
     if (ret)
         return ret;
···
 }

 static const struct irq_domain_ops gic_irq_domain_hierarchy_ops = {
-    .xlate = gic_irq_domain_xlate,
+    .translate = gic_irq_domain_translate,
     .alloc = gic_irq_domain_alloc,
     .free = irq_domain_free_irqs_top,
 };
···
 static const struct irq_domain_ops gic_irq_domain_ops = {
     .map = gic_irq_domain_map,
     .unmap = gic_irq_domain_unmap,
-    .xlate = gic_irq_domain_xlate,
 };

 static void __init __gic_init_bases(unsigned int gic_nr, int irq_start,
                                     void __iomem *dist_base, void __iomem *cpu_base,
-                                    u32 percpu_offset, struct device_node *node)
+                                    u32 percpu_offset, struct fwnode_handle *handle)
 {
     irq_hw_number_t hwirq_base;
     struct gic_chip_data *gic;
     int gic_irqs, irq_base, i;

     BUG_ON(gic_nr >= MAX_GIC_NR);
+
+    gic_check_cpu_features();

     gic = &gic_data[gic_nr];
 #ifdef CONFIG_GIC_NON_BANKED
···
         gic_irqs = 1020;
     gic->gic_irqs = gic_irqs;

-    if (node) {		/* DT case */
-        gic->domain = irq_domain_add_linear(node, gic_irqs,
-                                            &gic_irq_domain_hierarchy_ops,
-                                            gic);
-    } else {		/* Non-DT case */
+    if (handle) {		/* DT/ACPI */
+        gic->domain = irq_domain_create_linear(handle, gic_irqs,
+                                               &gic_irq_domain_hierarchy_ops,
+                                               gic);
+    } else {		/* Legacy support */
         /*
          * For primary GICs, skip over SGIs.
          * For secondary GICs, skip over PPIs, too.
···
             irq_base = irq_start;
         }

-        gic->domain = irq_domain_add_legacy(node, gic_irqs, irq_base,
+        gic->domain = irq_domain_add_legacy(NULL, gic_irqs, irq_base,
                                             hwirq_base, &gic_irq_domain_ops, gic);
     }
···
     gic_pm_init(gic);
 }

-void __init gic_init_bases(unsigned int gic_nr, int irq_start,
-                           void __iomem *dist_base, void __iomem *cpu_base,
-                           u32 percpu_offset, struct device_node *node)
+void __init gic_init(unsigned int gic_nr, int irq_start,
+                     void __iomem *dist_base, void __iomem *cpu_base)
 {
     /*
      * Non-DT/ACPI systems won't run a hypervisor, so let's not
      * bother with these...
      */
     static_key_slow_dec(&supports_deactivate);
-    __gic_init_bases(gic_nr, irq_start, dist_base, cpu_base,
-                     percpu_offset, node);
+    __gic_init_bases(gic_nr, irq_start, dist_base, cpu_base, 0, NULL);
 }

 #ifdef CONFIG_OF
···
     if (of_property_read_u32(node, "cpu-offset", &percpu_offset))
         percpu_offset = 0;

-    __gic_init_bases(gic_cnt, -1, dist_base, cpu_base, percpu_offset, node);
+    __gic_init_bases(gic_cnt, -1, dist_base, cpu_base, percpu_offset,
+                     &node->fwnode);
     if (!gic_cnt)
         gic_init_physaddr(node);
···
 IRQCHIP_DECLARE(cortex_a7_gic, "arm,cortex-a7-gic", gic_of_init);
 IRQCHIP_DECLARE(msm_8660_qgic, "qcom,msm-8660-qgic", gic_of_init);
 IRQCHIP_DECLARE(msm_qgic2, "qcom,msm-qgic2", gic_of_init);
+IRQCHIP_DECLARE(pl390, "arm,pl390", gic_of_init);

 #endif
···
 gic_v2_acpi_init(struct acpi_table_header *table)
 {
     void __iomem *cpu_base, *dist_base;
+    struct fwnode_handle *domain_handle;
     int count;

     /* Collect CPU base addresses */
···
     static_key_slow_dec(&supports_deactivate);

     /*
-     * Initialize zero GIC instance (no multi-GIC support). Also, set GIC
-     * as default IRQ domain to allow for GSI registration and GSI to IRQ
-     * number translation (see acpi_register_gsi() and acpi_gsi_to_irq()).
+     * Initialize GIC instance zero (no multi-GIC support).
      */
-    __gic_init_bases(0, -1, dist_base, cpu_base, 0, NULL);
-    irq_set_default_host(gic_data[0].domain);
+    domain_handle = irq_domain_alloc_fwnode(dist_base);
+    if (!domain_handle) {
+        pr_err("Unable to allocate domain handle\n");
+        iounmap(cpu_base);
+        iounmap(dist_base);
+        return -ENOMEM;
+    }

-    acpi_irq_model = ACPI_IRQ_MODEL_GIC;
+    __gic_init_bases(0, -1, dist_base, cpu_base, 0, domain_handle);
+
+    acpi_set_irq_model(ACPI_IRQ_MODEL_GIC, domain_handle);
     return 0;
 }
 #endif
drivers/irqchip/irq-hip04.c (+1 -1)
···
 {
     unsigned long ret = 0;

-    if (d->of_node != controller)
+    if (irq_domain_get_of_node(d) != controller)
         return -EINVAL;
     if (intsize < 3)
         return -EINVAL;
drivers/irqchip/irq-i8259.c (+2 -2)
···
     }

     domain = __init_i8259_irqs(node);
-    irq_set_handler_data(parent_irq, domain);
-    irq_set_chained_handler(parent_irq, i8259_irq_dispatch);
+    irq_set_chained_handler_and_data(parent_irq, i8259_irq_dispatch,
+                                     domain);
     return 0;
 }
 IRQCHIP_DECLARE(i8259, "intel,i8259", i8259_of_init);
drivers/irqchip/irq-imx-gpcv2.c (+29 -35)
···
 #endif
 };

-static int imx_gpcv2_domain_xlate(struct irq_domain *domain,
-                                  struct device_node *controller,
-                                  const u32 *intspec,
-                                  unsigned int intsize,
-                                  unsigned long *out_hwirq,
-                                  unsigned int *out_type)
+static int imx_gpcv2_domain_translate(struct irq_domain *d,
+                                      struct irq_fwspec *fwspec,
+                                      unsigned long *hwirq,
+                                      unsigned int *type)
 {
-    /* Shouldn't happen, really... */
-    if (domain->of_node != controller)
-        return -EINVAL;
+    if (is_of_node(fwspec->fwnode)) {
+        if (fwspec->param_count != 3)
+            return -EINVAL;

-    /* Not GIC compliant */
-    if (intsize != 3)
-        return -EINVAL;
+        /* No PPI should point to this domain */
+        if (fwspec->param[0] != 0)
+            return -EINVAL;

-    /* No PPI should point to this domain */
-    if (intspec[0] != 0)
-        return -EINVAL;
+        *hwirq = fwspec->param[1];
+        *type = fwspec->param[2];
+        return 0;
+    }

-    *out_hwirq = intspec[1];
-    *out_type = intspec[2];
-    return 0;
+    return -EINVAL;
 }

 static int imx_gpcv2_domain_alloc(struct irq_domain *domain,
                                   unsigned int irq, unsigned int nr_irqs,
                                   void *data)
 {
-    struct of_phandle_args *args = data;
-    struct of_phandle_args parent_args;
+    struct irq_fwspec *fwspec = data;
+    struct irq_fwspec parent_fwspec;
     irq_hw_number_t hwirq;
+    unsigned int type;
+    int err;
     int i;

-    /* Not GIC compliant */
-    if (args->args_count != 3)
-        return -EINVAL;
+    err = imx_gpcv2_domain_translate(domain, fwspec, &hwirq, &type);
+    if (err)
+        return err;

-    /* No PPI should point to this domain */
-    if (args->args[0] != 0)
-        return -EINVAL;
-
-    /* Can't deal with this */
-    hwirq = args->args[1];
     if (hwirq >= GPC_MAX_IRQS)
         return -EINVAL;
···
                                       &gpcv2_irqchip_data_chip, domain->host_data);
     }

-    parent_args = *args;
-    parent_args.np = domain->parent->of_node;
-    return irq_domain_alloc_irqs_parent(domain, irq, nr_irqs, &parent_args);
+    parent_fwspec = *fwspec;
+    parent_fwspec.fwnode = domain->parent->fwnode;
+    return irq_domain_alloc_irqs_parent(domain, irq, nr_irqs,
+                                        &parent_fwspec);
 }

 static struct irq_domain_ops gpcv2_irqchip_data_domain_ops = {
-    .xlate	= imx_gpcv2_domain_xlate,
-    .alloc	= imx_gpcv2_domain_alloc,
-    .free	= irq_domain_free_irqs_common,
+    .translate	= imx_gpcv2_domain_translate,
+    .alloc	= imx_gpcv2_domain_alloc,
+    .free	= irq_domain_free_irqs_common,
 };

 static int __init imx_gpcv2_irqchip_init(struct device_node *node,
drivers/irqchip/irq-mtk-sysirq.c (+26 -23)
··· 67 67 .irq_set_affinity = irq_chip_set_affinity_parent, 68 68 }; 69 69 70 - static int mtk_sysirq_domain_xlate(struct irq_domain *d, 71 - struct device_node *controller, 72 - const u32 *intspec, unsigned int intsize, 73 - unsigned long *out_hwirq, 74 - unsigned int *out_type) 70 + static int mtk_sysirq_domain_translate(struct irq_domain *d, 71 + struct irq_fwspec *fwspec, 72 + unsigned long *hwirq, 73 + unsigned int *type) 75 74 { 76 - if (intsize != 3) 77 - return -EINVAL; 75 + if (is_of_node(fwspec->fwnode)) { 76 + if (fwspec->param_count != 3) 77 + return -EINVAL; 78 78 79 - /* sysirq doesn't support PPI */ 80 - if (intspec[0]) 81 - return -EINVAL; 79 + /* No PPI should point to this domain */ 80 + if (fwspec->param[0] != 0) 81 + return -EINVAL; 82 82 83 - *out_hwirq = intspec[1]; 84 - *out_type = intspec[2] & IRQ_TYPE_SENSE_MASK; 85 - return 0; 83 + *hwirq = fwspec->param[1]; 84 + *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK; 85 + return 0; 86 + } 87 + 88 + return -EINVAL; 86 89 } 87 90 88 91 static int mtk_sysirq_domain_alloc(struct irq_domain *domain, unsigned int virq, ··· 93 90 { 94 91 int i; 95 92 irq_hw_number_t hwirq; 96 - struct of_phandle_args *irq_data = arg; 97 - struct of_phandle_args gic_data = *irq_data; 93 + struct irq_fwspec *fwspec = arg; 94 + struct irq_fwspec gic_fwspec = *fwspec; 98 95 99 - if (irq_data->args_count != 3) 96 + if (fwspec->param_count != 3) 100 97 return -EINVAL; 101 98 102 99 /* sysirq doesn't support PPI */ 103 - if (irq_data->args[0]) 100 + if (fwspec->param[0]) 104 101 return -EINVAL; 105 102 106 - hwirq = irq_data->args[1]; 103 + hwirq = fwspec->param[1]; 107 104 for (i = 0; i < nr_irqs; i++) 108 105 irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i, 109 106 &mtk_sysirq_chip, 110 107 domain->host_data); 111 108 112 - gic_data.np = domain->parent->of_node; 113 - return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &gic_data); 109 + gic_fwspec.fwnode = domain->parent->fwnode; 110 + return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &gic_fwspec); 114 111 } 115 112 116 113 static const struct irq_domain_ops sysirq_domain_ops = { 117 - .xlate = mtk_sysirq_domain_xlate, 118 - .alloc = mtk_sysirq_domain_alloc, 119 - .free = irq_domain_free_irqs_common, 114 + .translate = mtk_sysirq_domain_translate, 115 + .alloc = mtk_sysirq_domain_alloc, 116 + .free = irq_domain_free_irqs_common, 120 117 }; 121 118 122 119 static int __init mtk_sysirq_of_init(struct device_node *node,
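The hunk above is the recurring pattern of this series: the OF-only `.xlate` callback is replaced by a firmware-agnostic `.translate` that decodes an `irq_fwspec`. A standalone sketch of that decoding logic follows; the `fake_` types, the `is_of_node` flag, and the `-1` error code are local stand-ins for the kernel's `struct irq_fwspec`, `is_of_node()` and `-EINVAL`, not the real `<linux/irqdomain.h>` API.

```c
#define IRQ_TYPE_SENSE_MASK 0x0000000f

/* Local stand-in for struct irq_fwspec. */
struct fake_fwspec {
	int is_of_node;		/* models is_of_node(fwspec->fwnode) */
	int param_count;
	unsigned int param[3];
};

/* Mirrors the shape of mtk_sysirq_domain_translate() above. */
static int fake_translate(const struct fake_fwspec *fwspec,
			  unsigned long *hwirq, unsigned int *type)
{
	if (fwspec->is_of_node) {
		if (fwspec->param_count != 3)
			return -1;

		/* No PPI should point to this domain */
		if (fwspec->param[0] != 0)
			return -1;

		*hwirq = fwspec->param[1];
		*type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
		return 0;
	}

	return -1;
}
```

A three-cell specifier `<0 42 0x13>` thus yields hwirq 42 with only the sense bits (`0x3`) kept as the trigger type.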
+153 -18
drivers/irqchip/irq-mxs.c
··· 1 1 /* 2 2 * Copyright (C) 2009-2010 Freescale Semiconductor, Inc. All Rights Reserved. 3 + * Copyright (C) 2014 Oleksij Rempel <linux@rempel-privat.de> 4 + * Add Alphascale ASM9260 support. 3 5 * 4 6 * This program is free software; you can redistribute it and/or modify 5 7 * it under the terms of the GNU General Public License as published by ··· 30 28 #include <linux/stmp_device.h> 31 29 #include <asm/exception.h> 32 30 31 + #include "alphascale_asm9260-icoll.h" 32 + 33 + /* 34 + * this device provides 4 offsets for each register: 35 + * 0x0 - plain read write mode 36 + * 0x4 - set mode, OR logic. 37 + * 0x8 - clr mode, XOR logic. 38 + * 0xc - toggle mode. 39 + */ 40 + #define SET_REG 4 41 + #define CLR_REG 8 42 + 33 43 #define HW_ICOLL_VECTOR 0x0000 34 44 #define HW_ICOLL_LEVELACK 0x0010 35 45 #define HW_ICOLL_CTRL 0x0020 36 46 #define HW_ICOLL_STAT_OFFSET 0x0070 37 - #define HW_ICOLL_INTERRUPTn_SET(n) (0x0124 + (n) * 0x10) 38 - #define HW_ICOLL_INTERRUPTn_CLR(n) (0x0128 + (n) * 0x10) 39 - #define BM_ICOLL_INTERRUPTn_ENABLE 0x00000004 47 + #define HW_ICOLL_INTERRUPT0 0x0120 48 + #define HW_ICOLL_INTERRUPTn(n) ((n) * 0x10) 49 + #define BM_ICOLL_INTR_ENABLE BIT(2) 40 50 #define BV_ICOLL_LEVELACK_IRQLEVELACK__LEVEL0 0x1 41 51 42 52 #define ICOLL_NUM_IRQS 128 43 53 44 - static void __iomem *icoll_base; 54 + enum icoll_type { 55 + ICOLL, 56 + ASM9260_ICOLL, 57 + }; 58 + 59 + struct icoll_priv { 60 + void __iomem *vector; 61 + void __iomem *levelack; 62 + void __iomem *ctrl; 63 + void __iomem *stat; 64 + void __iomem *intr; 65 + void __iomem *clear; 66 + enum icoll_type type; 67 + }; 68 + 69 + static struct icoll_priv icoll_priv; 45 70 static struct irq_domain *icoll_domain; 71 + 72 + /* calculate bit offset depending on number of interrupts per register */ 73 + static u32 icoll_intr_bitshift(struct irq_data *d, u32 bit) 74 + { 75 + /* 76 + * mask lower part of hwirq to convert it 77 + * into 0, 1, 2 or 3 and then multiply it by 8 (or shift by 3) 78 + */ 79 + return bit << ((d->hwirq & 3) << 3); 80 + } 81 + 82 + /* calculate mem offset depending on number of interrupts per register */ 83 + static void __iomem *icoll_intr_reg(struct irq_data *d) 84 + { 85 + /* offset = hwirq / intr_per_reg * 0x10 */ 86 + return icoll_priv.intr + ((d->hwirq >> 2) * 0x10); 87 + } 46 88 47 89 static void icoll_ack_irq(struct irq_data *d) 48 90 { ··· 96 50 * BV_ICOLL_LEVELACK_IRQLEVELACK__LEVEL0 unconditionally. 97 51 */ 98 52 __raw_writel(BV_ICOLL_LEVELACK_IRQLEVELACK__LEVEL0, 99 - icoll_base + HW_ICOLL_LEVELACK); 53 + icoll_priv.levelack); 100 54 } 101 55 102 56 static void icoll_mask_irq(struct irq_data *d) 103 57 { 104 - __raw_writel(BM_ICOLL_INTERRUPTn_ENABLE, 105 - icoll_base + HW_ICOLL_INTERRUPTn_CLR(d->hwirq)); 58 + __raw_writel(BM_ICOLL_INTR_ENABLE, 59 + icoll_priv.intr + CLR_REG + HW_ICOLL_INTERRUPTn(d->hwirq)); 106 60 } 107 61 108 62 static void icoll_unmask_irq(struct irq_data *d) 109 63 { 110 - __raw_writel(BM_ICOLL_INTERRUPTn_ENABLE, 111 - icoll_base + HW_ICOLL_INTERRUPTn_SET(d->hwirq)); 64 + __raw_writel(BM_ICOLL_INTR_ENABLE, 65 + icoll_priv.intr + SET_REG + HW_ICOLL_INTERRUPTn(d->hwirq)); 66 + } 67 + 68 + static void asm9260_mask_irq(struct irq_data *d) 69 + { 70 + __raw_writel(icoll_intr_bitshift(d, BM_ICOLL_INTR_ENABLE), 71 + icoll_intr_reg(d) + CLR_REG); 72 + } 73 + 74 + static void asm9260_unmask_irq(struct irq_data *d) 75 + { 76 + __raw_writel(ASM9260_BM_CLEAR_BIT(d->hwirq), 77 + icoll_priv.clear + 78 + ASM9260_HW_ICOLL_CLEARn(d->hwirq)); 79 + 80 + __raw_writel(icoll_intr_bitshift(d, BM_ICOLL_INTR_ENABLE), 81 + icoll_intr_reg(d) + SET_REG); 112 82 } 113 84 static struct irq_chip mxs_icoll_chip = { ··· 133 71 .irq_unmask = icoll_unmask_irq, 134 72 }; 135 73 74 + static struct irq_chip asm9260_icoll_chip = { 75 + .irq_ack = icoll_ack_irq, 76 + .irq_mask = asm9260_mask_irq, 77 + .irq_unmask = asm9260_unmask_irq, 78 + }; 79 + 136 80 asmlinkage void __exception_irq_entry icoll_handle_irq(struct pt_regs *regs) 137 81 { 138 82 u32 irqnr; 139 83 140 - irqnr = __raw_readl(icoll_base + HW_ICOLL_STAT_OFFSET); 141 - __raw_writel(irqnr, icoll_base + HW_ICOLL_VECTOR); 84 + irqnr = __raw_readl(icoll_priv.stat); 85 + __raw_writel(irqnr, icoll_priv.vector); 142 86 handle_domain_irq(icoll_domain, irqnr, regs); 143 87 } 144 88 145 89 static int icoll_irq_domain_map(struct irq_domain *d, unsigned int virq, 146 90 irq_hw_number_t hw) 147 91 { 148 - irq_set_chip_and_handler(virq, &mxs_icoll_chip, handle_level_irq); 92 + struct irq_chip *chip; 93 + 94 + if (icoll_priv.type == ICOLL) 95 + chip = &mxs_icoll_chip; 96 + else 97 + chip = &asm9260_icoll_chip; 98 + 99 + irq_set_chip_and_handler(virq, chip, handle_level_irq); 149 100 150 101 return 0; 151 102 } ··· 168 93 .xlate = irq_domain_xlate_onecell, 169 94 }; 170 95 96 + static void __init icoll_add_domain(struct device_node *np, 97 + int num) 98 + { 99 + icoll_domain = irq_domain_add_linear(np, num, 100 + &icoll_irq_domain_ops, NULL); 101 + 102 + if (!icoll_domain) 103 + panic("%s: unable to create irq domain", np->full_name); 104 + } 105 + 106 + static void __iomem * __init icoll_init_iobase(struct device_node *np) 107 + { 108 + void __iomem *icoll_base; 109 + 110 + icoll_base = of_io_request_and_map(np, 0, np->name); 111 + if (!icoll_base) 112 + panic("%s: unable to map resource", np->full_name); 113 + return icoll_base; 114 + } 115 + 171 116 static int __init icoll_of_init(struct device_node *np, 172 117 struct device_node *interrupt_parent) 173 118 { 174 - icoll_base = of_iomap(np, 0); 175 - WARN_ON(!icoll_base); 119 + void __iomem *icoll_base; 120 + 121 + icoll_priv.type = ICOLL; 122 + 123 + icoll_base = icoll_init_iobase(np); 124 + icoll_priv.vector = icoll_base + HW_ICOLL_VECTOR; 125 + icoll_priv.levelack = icoll_base + HW_ICOLL_LEVELACK; 126 + icoll_priv.ctrl = icoll_base + HW_ICOLL_CTRL; 127 + icoll_priv.stat = icoll_base + HW_ICOLL_STAT_OFFSET; 128 + icoll_priv.intr = icoll_base + HW_ICOLL_INTERRUPT0; 129 + icoll_priv.clear = NULL; 176 130 177 131 /* 178 132 * Interrupt Collector reset, which initializes the priority 179 133 * for each irq to level 0. 180 134 */ 181 - stmp_reset_block(icoll_base + HW_ICOLL_CTRL); 135 + stmp_reset_block(icoll_priv.ctrl); 182 136 183 - icoll_domain = irq_domain_add_linear(np, ICOLL_NUM_IRQS, 184 - &icoll_irq_domain_ops, NULL); 185 - return icoll_domain ? 0 : -ENODEV; 137 + icoll_add_domain(np, ICOLL_NUM_IRQS); 138 + 139 + return 0; 186 140 } 187 141 IRQCHIP_DECLARE(mxs, "fsl,icoll", icoll_of_init); 142 + 143 + static int __init asm9260_of_init(struct device_node *np, 144 + struct device_node *interrupt_parent) 145 + { 146 + void __iomem *icoll_base; 147 + int i; 148 + 149 + icoll_priv.type = ASM9260_ICOLL; 150 + 151 + icoll_base = icoll_init_iobase(np); 152 + icoll_priv.vector = icoll_base + ASM9260_HW_ICOLL_VECTOR; 153 + icoll_priv.levelack = icoll_base + ASM9260_HW_ICOLL_LEVELACK; 154 + icoll_priv.ctrl = icoll_base + ASM9260_HW_ICOLL_CTRL; 155 + icoll_priv.stat = icoll_base + ASM9260_HW_ICOLL_STAT_OFFSET; 156 + icoll_priv.intr = icoll_base + ASM9260_HW_ICOLL_INTERRUPT0; 157 + icoll_priv.clear = icoll_base + ASM9260_HW_ICOLL_CLEAR0; 158 + 159 + writel_relaxed(ASM9260_BM_CTRL_IRQ_ENABLE, 160 + icoll_priv.ctrl); 161 + /* 162 + * ASM9260 doesn't provide a reset bit. So, we need to set level 0 163 + * manually. 164 + */ 165 + for (i = 0; i < 16 * 0x10; i += 0x10) 166 + writel(0, icoll_priv.intr + i); 167 + 168 + icoll_add_domain(np, ASM9260_NUM_IRQS); 169 + 170 + return 0; 171 + } 172 + IRQCHIP_DECLARE(asm9260, "alphascale,asm9260-icoll", asm9260_of_init);
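The ASM9260 variant added above packs four interrupts into each 32-bit register, one byte lane per interrupt, with the set/clear/toggle pages at fixed offsets. The two new helpers reduce to plain arithmetic, shown here as a standalone sketch (plain `unsigned int` stands in for the kernel's `irq_data`/`u32` types):

```c
/* Four interrupts share each 32-bit register, eight bits per interrupt. */
static unsigned int intr_bitshift(unsigned int hwirq, unsigned int bit)
{
	/* hwirq & 3 picks the byte lane; << 3 multiplies the lane by 8 */
	return bit << ((hwirq & 3) << 3);
}

/* Each register occupies 0x10 bytes (plain/set/clr/toggle pages). */
static unsigned int intr_reg_offset(unsigned int hwirq)
{
	/* offset = hwirq / 4 * 0x10 */
	return (hwirq >> 2) * 0x10;
}
```

With `BM_ICOLL_INTR_ENABLE` being `BIT(2)` (0x4), hwirq 5 lands in the second register's second byte lane: the bit becomes 0x400 at register offset 0x10.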
+14 -4
drivers/irqchip/irq-nvic.c
··· 48 48 handle_IRQ(irq, regs); 49 49 } 50 50 51 + static int nvic_irq_domain_translate(struct irq_domain *d, 52 + struct irq_fwspec *fwspec, 53 + unsigned long *hwirq, unsigned int *type) 54 + { 55 + if (WARN_ON(fwspec->param_count < 1)) 56 + return -EINVAL; 57 + *hwirq = fwspec->param[0]; 58 + *type = IRQ_TYPE_NONE; 59 + return 0; 60 + } 61 + 51 62 static int nvic_irq_domain_alloc(struct irq_domain *domain, unsigned int virq, 52 63 unsigned int nr_irqs, void *arg) 53 64 { 54 65 int i, ret; 55 66 irq_hw_number_t hwirq; 56 67 unsigned int type = IRQ_TYPE_NONE; 57 - struct of_phandle_args *irq_data = arg; 68 + struct irq_fwspec *fwspec = arg; 58 69 59 - ret = irq_domain_xlate_onecell(domain, irq_data->np, irq_data->args, 60 - irq_data->args_count, &hwirq, &type); 70 + ret = nvic_irq_domain_translate(domain, fwspec, &hwirq, &type); 61 71 if (ret) 62 72 return ret; 63 73 ··· 78 68 } 79 69 80 70 static const struct irq_domain_ops nvic_irq_domain_ops = { 81 - .xlate = irq_domain_xlate_onecell, 71 + .translate = nvic_irq_domain_translate, 82 72 .alloc = nvic_irq_domain_alloc, 83 73 .free = irq_domain_free_irqs_top, 84 74 };
+4 -2
drivers/irqchip/irq-renesas-intc-irqpin.c
··· 361 361 .xlate = irq_domain_xlate_twocell, 362 362 }; 363 363 364 - static const struct intc_irqpin_irlm_config intc_irqpin_irlm_r8a7779 = { 364 + static const struct intc_irqpin_irlm_config intc_irqpin_irlm_r8a777x = { 365 365 .irlm_bit = 23, /* ICR0.IRLM0 */ 366 366 }; 367 367 368 368 static const struct of_device_id intc_irqpin_dt_ids[] = { 369 369 { .compatible = "renesas,intc-irqpin", }, 370 + { .compatible = "renesas,intc-irqpin-r8a7778", 371 + .data = &intc_irqpin_irlm_r8a777x }, 370 372 { .compatible = "renesas,intc-irqpin-r8a7779", 371 - .data = &intc_irqpin_irlm_r8a7779 }, 373 + .data = &intc_irqpin_irlm_r8a777x }, 372 374 {}, 373 375 }; 374 376 MODULE_DEVICE_TABLE(of, intc_irqpin_dt_ids);
+30 -56
drivers/irqchip/irq-renesas-irqc.c
··· 62 62 struct irqc_irq irq[IRQC_IRQ_MAX]; 63 63 unsigned int number_of_irqs; 64 64 struct platform_device *pdev; 65 - struct irq_chip irq_chip; 65 + struct irq_chip_generic *gc; 66 66 struct irq_domain *irq_domain; 67 67 struct clk *clk; 68 68 }; 69 + 70 + static struct irqc_priv *irq_data_to_priv(struct irq_data *data) 71 + { 72 + return data->domain->host_data; 73 + } 69 74 70 75 static void irqc_dbg(struct irqc_irq *i, char *str) 71 76 { 72 77 dev_dbg(&i->p->pdev->dev, "%s (%d:%d)\n", 73 78 str, i->requested_irq, i->hw_irq); 74 - } 75 - 76 - static void irqc_irq_enable(struct irq_data *d) 77 - { 78 - struct irqc_priv *p = irq_data_get_irq_chip_data(d); 79 - int hw_irq = irqd_to_hwirq(d); 80 - 81 - irqc_dbg(&p->irq[hw_irq], "enable"); 82 - iowrite32(BIT(hw_irq), p->cpu_int_base + IRQC_EN_SET); 83 - } 84 - 85 - static void irqc_irq_disable(struct irq_data *d) 86 - { 87 - struct irqc_priv *p = irq_data_get_irq_chip_data(d); 88 - int hw_irq = irqd_to_hwirq(d); 89 - 90 - irqc_dbg(&p->irq[hw_irq], "disable"); 91 - iowrite32(BIT(hw_irq), p->cpu_int_base + IRQC_EN_STS); 92 79 } 93 80 94 81 static unsigned char irqc_sense[IRQ_TYPE_SENSE_MASK + 1] = { ··· 88 101 89 102 static int irqc_irq_set_type(struct irq_data *d, unsigned int type) 90 103 { 91 - struct irqc_priv *p = irq_data_get_irq_chip_data(d); 104 + struct irqc_priv *p = irq_data_to_priv(d); 92 105 int hw_irq = irqd_to_hwirq(d); 93 106 unsigned char value = irqc_sense[type & IRQ_TYPE_SENSE_MASK]; 94 107 u32 tmp; ··· 107 120 108 121 static int irqc_irq_set_wake(struct irq_data *d, unsigned int on) 109 122 { 110 - struct irqc_priv *p = irq_data_get_irq_chip_data(d); 123 + struct irqc_priv *p = irq_data_to_priv(d); 111 124 int hw_irq = irqd_to_hwirq(d); 112 125 113 126 irq_set_irq_wake(p->irq[hw_irq].requested_irq, on); ··· 140 153 return IRQ_NONE; 141 154 } 142 155 143 - /* 144 - * This lock class tells lockdep that IRQC irqs are in a different 145 - * category than their parents, so it won't report false recursion. 146 - */ 147 - static struct lock_class_key irqc_irq_lock_class; 148 - 149 - static int irqc_irq_domain_map(struct irq_domain *h, unsigned int virq, 150 - irq_hw_number_t hw) 151 - { 152 - struct irqc_priv *p = h->host_data; 153 - 154 - irqc_dbg(&p->irq[hw], "map"); 155 - irq_set_chip_data(virq, h->host_data); 156 - irq_set_lockdep_class(virq, &irqc_irq_lock_class); 157 - irq_set_chip_and_handler(virq, &p->irq_chip, handle_level_irq); 158 - return 0; 159 - } 160 - 161 - static const struct irq_domain_ops irqc_irq_domain_ops = { 162 - .map = irqc_irq_domain_map, 163 - .xlate = irq_domain_xlate_twocell, 164 - }; 165 - 166 156 static int irqc_probe(struct platform_device *pdev) 167 157 { 168 158 struct irqc_priv *p; 169 159 struct resource *io; 170 160 struct resource *irq; 171 - struct irq_chip *irq_chip; 172 161 const char *name = dev_name(&pdev->dev); 173 162 int ret; 174 163 int k; ··· 204 241 205 242 p->cpu_int_base = p->iomem + IRQC_INT_CPU_BASE(0); /* SYS-SPI */ 206 243 207 - irq_chip = &p->irq_chip; 208 - irq_chip->name = name; 209 - irq_chip->irq_mask = irqc_irq_disable; 210 - irq_chip->irq_unmask = irqc_irq_enable; 211 - irq_chip->irq_set_type = irqc_irq_set_type; 212 - irq_chip->irq_set_wake = irqc_irq_set_wake; 213 - irq_chip->flags = IRQCHIP_MASK_ON_SUSPEND; 214 - 215 244 p->irq_domain = irq_domain_add_linear(pdev->dev.of_node, 216 245 p->number_of_irqs, 217 - &irqc_irq_domain_ops, p); 246 + &irq_generic_chip_ops, p); 218 247 if (!p->irq_domain) { 219 248 ret = -ENXIO; 220 249 dev_err(&pdev->dev, "cannot initialize irq domain\n"); 221 250 goto err2; 222 251 } 252 + 253 + ret = irq_alloc_domain_generic_chips(p->irq_domain, p->number_of_irqs, 254 + 1, name, handle_level_irq, 255 + 0, 0, IRQ_GC_INIT_NESTED_LOCK); 256 + if (ret) { 257 + dev_err(&pdev->dev, "cannot allocate generic chip\n"); 258 + goto err3; 259 + } 260 + 261 + p->gc = irq_get_domain_generic_chip(p->irq_domain, 0); 262 + p->gc->reg_base = p->cpu_int_base; 263 + p->gc->chip_types[0].regs.enable = IRQC_EN_SET; 264 + p->gc->chip_types[0].regs.disable = IRQC_EN_STS; 265 + p->gc->chip_types[0].chip.irq_mask = irq_gc_mask_disable_reg; 266 + p->gc->chip_types[0].chip.irq_unmask = irq_gc_unmask_enable_reg; 267 + p->gc->chip_types[0].chip.irq_set_type = irqc_irq_set_type; 268 + p->gc->chip_types[0].chip.irq_set_wake = irqc_irq_set_wake; 269 + p->gc->chip_types[0].chip.flags = IRQCHIP_MASK_ON_SUSPEND; 223 270 224 271 /* request interrupts one by one */ 225 272 for (k = 0; k < p->number_of_irqs; k++) { ··· 237 264 0, name, &p->irq[k])) { 238 265 dev_err(&pdev->dev, "failed to request IRQ\n"); 239 266 ret = -ENOENT; 240 - goto err3; 267 + goto err4; 241 268 } 242 269 } 243 270 244 271 dev_info(&pdev->dev, "driving %d irqs\n", p->number_of_irqs); 245 272 246 273 return 0; 247 - err3: 274 + err4: 248 275 while (--k >= 0) 249 276 free_irq(p->irq[k].requested_irq, &p->irq[k]); 250 277 278 + err3: 251 279 irq_domain_remove(p->irq_domain); 252 280 err2: 253 281 iounmap(p->iomem);
+2 -2
drivers/irqchip/irq-s3c24xx.c
··· 311 311 * and one big domain for the dt case where the subintc 312 312 * starts at hwirq number 32. 313 313 */ 314 - offset = (intc->domain->of_node) ? 32 : 0; 314 + offset = irq_domain_get_of_node(intc->domain) ? 32 : 0; 315 315 316 316 chained_irq_enter(chip, desc); 317 317 ··· 342 342 return false; 343 343 344 344 /* non-dt machines use individual domains */ 345 - if (!intc->domain->of_node) 345 + if (!irq_domain_get_of_node(intc->domain)) 346 346 intc_offset = 0; 347 347 348 348 /* We have a problem that the INTOFFSET register does not always
+12 -10
drivers/irqchip/irq-sunxi-nmi.c
··· 8 8 * warranty of any kind, whether express or implied. 9 9 */ 10 10 11 + #define DRV_NAME "sunxi-nmi" 12 + #define pr_fmt(fmt) DRV_NAME ": " fmt 13 + 11 14 #include <linux/bitops.h> 12 15 #include <linux/device.h> 13 16 #include <linux/io.h> ··· 99 96 break; 100 97 default: 101 98 irq_gc_unlock(gc); 102 - pr_err("%s: Cannot assign multiple trigger modes to IRQ %d.\n", 103 - __func__, data->irq); 99 + pr_err("Cannot assign multiple trigger modes to IRQ %d.\n", 100 + data->irq); 104 101 return -EBADR; 105 102 } 106 103 ··· 133 130 134 131 domain = irq_domain_add_linear(node, 1, &irq_generic_chip_ops, NULL); 135 132 if (!domain) { 136 - pr_err("%s: Could not register interrupt domain.\n", node->name); 133 + pr_err("Could not register interrupt domain.\n"); 137 134 return -ENOMEM; 138 135 } 139 136 140 - ret = irq_alloc_domain_generic_chips(domain, 1, 2, node->name, 137 + ret = irq_alloc_domain_generic_chips(domain, 1, 2, DRV_NAME, 141 138 handle_fasteoi_irq, clr, 0, 142 139 IRQ_GC_INIT_MASK_CACHE); 143 140 if (ret) { 144 - pr_err("%s: Could not allocate generic interrupt chip.\n", 145 - node->name); 146 - goto fail_irqd_remove; 141 + pr_err("Could not allocate generic interrupt chip.\n"); 142 + goto fail_irqd_remove; 147 143 } 148 144 149 145 irq = irq_of_parse_and_map(node, 0); 150 146 if (irq <= 0) { 151 - pr_err("%s: unable to parse irq\n", node->name); 147 + pr_err("unable to parse irq\n"); 152 148 ret = -EINVAL; 153 149 goto fail_irqd_remove; 154 150 } 155 151 156 152 gc = irq_get_domain_generic_chip(domain, 0); 157 - gc->reg_base = of_iomap(node, 0); 153 + gc->reg_base = of_io_request_and_map(node, 0, of_node_full_name(node)); 158 154 if (!gc->reg_base) { 159 - pr_err("%s: unable to map resource\n", node->name); 155 + pr_err("unable to map resource\n"); 160 156 ret = -ENOMEM; 161 157 goto fail_irqd_remove; 162 158 }
+29 -26
drivers/irqchip/irq-tegra.c
··· 221 221 #endif 222 222 }; 223 223 224 - static int tegra_ictlr_domain_xlate(struct irq_domain *domain, 225 - struct device_node *controller, 226 - const u32 *intspec, 227 - unsigned int intsize, 228 - unsigned long *out_hwirq, 229 - unsigned int *out_type) 224 + static int tegra_ictlr_domain_translate(struct irq_domain *d, 225 + struct irq_fwspec *fwspec, 226 + unsigned long *hwirq, 227 + unsigned int *type) 230 228 { 231 - if (domain->of_node != controller) 232 - return -EINVAL; /* Shouldn't happen, really... */ 233 - if (intsize != 3) 234 - return -EINVAL; /* Not GIC compliant */ 235 - if (intspec[0] != GIC_SPI) 236 - return -EINVAL; /* No PPI should point to this domain */ 229 + if (is_of_node(fwspec->fwnode)) { 230 + if (fwspec->param_count != 3) 231 + return -EINVAL; 237 232 238 - *out_hwirq = intspec[1]; 239 - *out_type = intspec[2]; 240 - return 0; 233 + /* No PPI should point to this domain */ 234 + if (fwspec->param[0] != 0) 235 + return -EINVAL; 236 + 237 + *hwirq = fwspec->param[1]; 238 + *type = fwspec->param[2]; 239 + return 0; 240 + } 241 + 242 + return -EINVAL; 241 243 } 242 244 243 245 static int tegra_ictlr_domain_alloc(struct irq_domain *domain, 244 246 unsigned int virq, 245 247 unsigned int nr_irqs, void *data) 246 248 { 247 - struct of_phandle_args *args = data; 248 - struct of_phandle_args parent_args; 249 + struct irq_fwspec *fwspec = data; 250 + struct irq_fwspec parent_fwspec; 249 251 struct tegra_ictlr_info *info = domain->host_data; 250 252 irq_hw_number_t hwirq; 251 253 unsigned int i; 252 254 253 - if (args->args_count != 3) 255 + if (fwspec->param_count != 3) 254 256 return -EINVAL; /* Not GIC compliant */ 255 - if (args->args[0] != GIC_SPI) 257 + if (fwspec->param[0] != GIC_SPI) 256 258 return -EINVAL; /* No PPI should point to this domain */ 257 259 258 - hwirq = args->args[1]; 260 + hwirq = fwspec->param[1]; 259 261 if (hwirq >= (num_ictlrs * 32)) 260 262 return -EINVAL; 261 263 ··· 269 267 info->base[ictlr]); 270 268 } 271 269 
272 - parent_args = *args; 273 - parent_args.np = domain->parent->of_node; 274 - return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &parent_args); 270 + parent_fwspec = *fwspec; 271 + parent_fwspec.fwnode = domain->parent->fwnode; 272 + return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, 273 + &parent_fwspec); 275 274 } 276 275 277 276 static void tegra_ictlr_domain_free(struct irq_domain *domain, ··· 288 285 } 289 286 290 287 static const struct irq_domain_ops tegra_ictlr_domain_ops = { 291 - .xlate = tegra_ictlr_domain_xlate, 292 - .alloc = tegra_ictlr_domain_alloc, 293 - .free = tegra_ictlr_domain_free, 288 + .translate = tegra_ictlr_domain_translate, 289 + .alloc = tegra_ictlr_domain_alloc, 290 + .free = tegra_ictlr_domain_free, 294 291 }; 295 292 296 293 static int __init tegra_ictlr_init(struct device_node *node,
+31 -14
drivers/irqchip/irq-vf610-mscm-ir.c
··· 130 130 { 131 131 int i; 132 132 irq_hw_number_t hwirq; 133 - struct of_phandle_args *irq_data = arg; 134 - struct of_phandle_args gic_data; 133 + struct irq_fwspec *fwspec = arg; 134 + struct irq_fwspec parent_fwspec; 135 135 136 - if (irq_data->args_count != 2) 136 + if (!irq_domain_get_of_node(domain->parent)) 137 137 return -EINVAL; 138 138 139 - hwirq = irq_data->args[0]; 139 + if (fwspec->param_count != 2) 140 + return -EINVAL; 141 + 142 + hwirq = fwspec->param[0]; 140 143 for (i = 0; i < nr_irqs; i++) 141 144 irq_domain_set_hwirq_and_chip(domain, virq + i, hwirq + i, 142 145 &vf610_mscm_ir_irq_chip, 143 146 domain->host_data); 144 147 145 - gic_data.np = domain->parent->of_node; 148 + parent_fwspec.fwnode = domain->parent->fwnode; 146 149 147 150 if (mscm_ir_data->is_nvic) { 148 - gic_data.args_count = 1; 149 - gic_data.args[0] = irq_data->args[0]; 151 + parent_fwspec.param_count = 1; 152 + parent_fwspec.param[0] = fwspec->param[0]; 150 153 } else { 151 - gic_data.args_count = 3; 152 - gic_data.args[0] = GIC_SPI; 153 - gic_data.args[1] = irq_data->args[0]; 154 - gic_data.args[2] = irq_data->args[1]; 154 + parent_fwspec.param_count = 3; 155 + parent_fwspec.param[0] = GIC_SPI; 156 + parent_fwspec.param[1] = fwspec->param[0]; 157 + parent_fwspec.param[2] = fwspec->param[1]; 155 158 } 156 159 157 - return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &gic_data); 160 + return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, 161 + &parent_fwspec); 162 + } 163 + 164 + static int vf610_mscm_ir_domain_translate(struct irq_domain *d, 165 + struct irq_fwspec *fwspec, 166 + unsigned long *hwirq, 167 + unsigned int *type) 168 + { 169 + if (WARN_ON(fwspec->param_count < 2)) 170 + return -EINVAL; 171 + *hwirq = fwspec->param[0]; 172 + *type = fwspec->param[1] & IRQ_TYPE_SENSE_MASK; 173 + return 0; 174 + } 175 + 176 + static const struct irq_domain_ops mscm_irq_domain_ops = { 161 177 .translate = vf610_mscm_ir_domain_translate, 162 178 .alloc = vf610_mscm_ir_domain_alloc, 163 179 .free = irq_domain_free_irqs_common, 164 180 }; ··· 221 205 goto out_unmap; 222 206 } 223 207 224 - if (of_device_is_compatible(domain->parent->of_node, "arm,armv7m-nvic")) 208 + if (of_device_is_compatible(irq_domain_get_of_node(domain->parent), 209 + "arm,armv7m-nvic")) 225 210 mscm_ir_data->is_nvic = true; 226 211 227 212 cpu_pm_register_notifier(&mscm_ir_notifier_block);
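The vf610 allocator above shows the other half of the conversion: a stacked domain repackages its own two-cell specifier into the parent's format, one cell for an NVIC parent and three GIC cells otherwise. The repacking itself is simple struct surgery, sketched standalone below (`fake_fwspec` is a local stand-in for `struct irq_fwspec`):

```c
#define GIC_SPI 0

/* Local stand-in for struct irq_fwspec. */
struct fake_fwspec {
	int param_count;
	unsigned int param[3];
};

/* Repackage a two-cell MSCM-IR specifier for the parent domain,
 * following the shape of vf610_mscm_ir_domain_alloc() above. */
static struct fake_fwspec repack_for_parent(const struct fake_fwspec *in,
					    int parent_is_nvic)
{
	struct fake_fwspec out = { 0, { 0, 0, 0 } };

	if (parent_is_nvic) {
		/* NVIC wants a single cell: the raw interrupt number */
		out.param_count = 1;
		out.param[0] = in->param[0];
	} else {
		/* GIC wants <GIC_SPI hwirq flags> */
		out.param_count = 3;
		out.param[0] = GIC_SPI;
		out.param[1] = in->param[0];
		out.param[2] = in->param[1];
	}
	return out;
}
```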
+176 -11
drivers/of/irq.c
··· 579 579 } 580 580 } 581 581 582 + static u32 __of_msi_map_rid(struct device *dev, struct device_node **np, 583 + u32 rid_in) 584 + { 585 + struct device *parent_dev; 586 + struct device_node *msi_controller_node; 587 + struct device_node *msi_np = *np; 588 + u32 map_mask, masked_rid, rid_base, msi_base, rid_len, phandle; 589 + int msi_map_len; 590 + bool matched; 591 + u32 rid_out = rid_in; 592 + const __be32 *msi_map = NULL; 593 + 594 + /* 595 + * Walk up the device parent links looking for one with a 596 + * "msi-map" property. 597 + */ 598 + for (parent_dev = dev; parent_dev; parent_dev = parent_dev->parent) { 599 + if (!parent_dev->of_node) 600 + continue; 601 + 602 + msi_map = of_get_property(parent_dev->of_node, 603 + "msi-map", &msi_map_len); 604 + if (!msi_map) 605 + continue; 606 + 607 + if (msi_map_len % (4 * sizeof(__be32))) { 608 + dev_err(parent_dev, "Error: Bad msi-map length: %d\n", 609 + msi_map_len); 610 + return rid_out; 611 + } 612 + /* We have a good parent_dev and msi_map, let's use them. */ 613 + break; 614 + } 615 + if (!msi_map) 616 + return rid_out; 617 + 618 + /* The default is to select all bits. */ 619 + map_mask = 0xffffffff; 620 + 621 + /* 622 + * Can be overridden by "msi-map-mask" property. If 623 + * of_property_read_u32() fails, the default is used. 
624 + */ 625 + of_property_read_u32(parent_dev->of_node, "msi-map-mask", &map_mask); 626 + 627 + masked_rid = map_mask & rid_in; 628 + matched = false; 629 + while (!matched && msi_map_len >= 4 * sizeof(__be32)) { 630 + rid_base = be32_to_cpup(msi_map + 0); 631 + phandle = be32_to_cpup(msi_map + 1); 632 + msi_base = be32_to_cpup(msi_map + 2); 633 + rid_len = be32_to_cpup(msi_map + 3); 634 + 635 + msi_controller_node = of_find_node_by_phandle(phandle); 636 + 637 + matched = (masked_rid >= rid_base && 638 + masked_rid < rid_base + rid_len); 639 + if (msi_np) 640 + matched &= msi_np == msi_controller_node; 641 + 642 + if (matched && !msi_np) { 643 + *np = msi_np = msi_controller_node; 644 + break; 645 + } 646 + 647 + of_node_put(msi_controller_node); 648 + msi_map_len -= 4 * sizeof(__be32); 649 + msi_map += 4; 650 + } 651 + if (!matched) 652 + return rid_out; 653 + 654 + rid_out = masked_rid + msi_base; 655 + dev_dbg(dev, 656 + "msi-map at: %s, using mask %08x, rid-base: %08x, msi-base: %08x, length: %08x, rid: %08x -> %08x\n", 657 + dev_name(parent_dev), map_mask, rid_base, msi_base, 658 + rid_len, rid_in, rid_out); 659 + 660 + return rid_out; 661 + } 662 + 663 + /** 664 + * of_msi_map_rid - Map a MSI requester ID for a device. 665 + * @dev: device for which the mapping is to be done. 666 + * @msi_np: device node of the expected msi controller. 667 + * @rid_in: unmapped MSI requester ID for the device. 668 + * 669 + * Walk up the device hierarchy looking for devices with a "msi-map" 670 + * property. If found, apply the mapping to @rid_in. 671 + * 672 + * Returns the mapped MSI requester ID. 
673 + */ 674 + u32 of_msi_map_rid(struct device *dev, struct device_node *msi_np, u32 rid_in) 675 + { 676 + return __of_msi_map_rid(dev, &msi_np, rid_in); 677 + } 678 + 679 + static struct irq_domain *__of_get_msi_domain(struct device_node *np, 680 + enum irq_domain_bus_token token) 681 + { 682 + struct irq_domain *d; 683 + 684 + d = irq_find_matching_host(np, token); 685 + if (!d) 686 + d = irq_find_host(np); 687 + 688 + return d; 689 + } 690 + 691 + /** 692 + * of_msi_map_get_device_domain - Use msi-map to find the relevant MSI domain 693 + * @dev: device for which the mapping is to be done. 694 + * @rid: Requester ID for the device. 695 + * 696 + * Walk up the device hierarchy looking for devices with a "msi-map" 697 + * property. 698 + * 699 + * Returns: the MSI domain for this device (or NULL on failure) 700 + */ 701 + struct irq_domain *of_msi_map_get_device_domain(struct device *dev, u32 rid) 702 + { 703 + struct device_node *np = NULL; 704 + 705 + __of_msi_map_rid(dev, &np, rid); 706 + return __of_get_msi_domain(np, DOMAIN_BUS_PCI_MSI); 707 + } 708 + 709 + /** 710 + * of_msi_get_domain - Use msi-parent to find the relevant MSI domain 711 + * @dev: device for which the domain is requested 712 + * @np: device node for @dev 713 + * @token: bus type for this domain 714 + * 715 + * Parse the msi-parent property (both the simple and the complex 716 + * versions), and return the corresponding MSI domain. 717 + * 718 + * Returns: the MSI domain for this device (or NULL on failure). 
719 + */ 720 + struct irq_domain *of_msi_get_domain(struct device *dev, 721 + struct device_node *np, 722 + enum irq_domain_bus_token token) 723 + { 724 + struct device_node *msi_np; 725 + struct irq_domain *d; 726 + 727 + /* Check for a single msi-parent property */ 728 + msi_np = of_parse_phandle(np, "msi-parent", 0); 729 + if (msi_np && !of_property_read_bool(msi_np, "#msi-cells")) { 730 + d = __of_get_msi_domain(msi_np, token); 731 + if (!d) 732 + of_node_put(msi_np); 733 + return d; 734 + } 735 + 736 + if (token == DOMAIN_BUS_PLATFORM_MSI) { 737 + /* Check for the complex msi-parent version */ 738 + struct of_phandle_args args; 739 + int index = 0; 740 + 741 + while (!of_parse_phandle_with_args(np, "msi-parent", 742 + "#msi-cells", 743 + index, &args)) { 744 + d = __of_get_msi_domain(args.np, token); 745 + if (d) 746 + return d; 747 + 748 + of_node_put(args.np); 749 + index++; 750 + } 751 + } 752 + 753 + return NULL; 754 + } 755 + 582 756 /** 583 757 * of_msi_configure - Set the msi_domain field of a device 584 758 * @dev: device structure to associate with an MSI irq domain ··· 760 586 */ 761 587 void of_msi_configure(struct device *dev, struct device_node *np) 762 588 { 763 - struct device_node *msi_np; 764 - struct irq_domain *d; 765 - 766 - msi_np = of_parse_phandle(np, "msi-parent", 0); 767 - if (!msi_np) 768 - return; 769 - 770 - d = irq_find_matching_host(msi_np, DOMAIN_BUS_PLATFORM_MSI); 771 - if (!d) 772 - d = irq_find_host(msi_np); 773 - dev_set_msi_domain(dev, d); 589 + dev_set_msi_domain(dev, 590 + of_msi_get_domain(dev, np, DOMAIN_BUS_PLATFORM_MSI)); 774 591 }
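The core of `__of_msi_map_rid()` above is a table walk: mask the incoming requester ID with `msi-map-mask`, find the `(rid-base, phandle, msi-base, length)` row that covers it, and rebase it exactly as the code does (`masked_rid + msi_base`). A standalone sketch of just that arithmetic, with the phandle matching left out (`msi_map_row` and `map_rid` are illustrative names, not kernel API):

```c
#include <stdint.h>

/* One (rid-base, msi-base, length) row of an msi-map table; the
 * msi-controller phandle cell is omitted in this sketch. */
struct msi_map_row {
	uint32_t rid_base;
	uint32_t msi_base;
	uint32_t rid_len;
};

/* Mirrors the arithmetic in __of_msi_map_rid() above: mask the input
 * RID, find the covering row, rebase into the controller's ID space. */
static uint32_t map_rid(const struct msi_map_row *rows, int nrows,
			uint32_t map_mask, uint32_t rid_in)
{
	uint32_t masked = rid_in & map_mask;
	int i;

	for (i = 0; i < nrows; i++) {
		if (masked >= rows[i].rid_base &&
		    masked < rows[i].rid_base + rows[i].rid_len)
			return masked + rows[i].msi_base;
	}
	return rid_in;	/* unmatched RIDs pass through unmapped */
}
```

With a single row `{0x100, 0x1000, 0x10}` and mask `0xffff`, RID `0x10105` masks to `0x105`, falls inside the row, and maps to `0x1105`; a RID outside every row is returned unchanged, matching the `rid_out = rid_in` default above.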
+1 -1
drivers/pci/host/pci-xgene-msi.c
··· 256 256 if (!msi->inner_domain) 257 257 return -ENOMEM; 258 258 259 - msi->msi_domain = pci_msi_create_irq_domain(msi->node, 259 + msi->msi_domain = pci_msi_create_irq_domain(of_node_to_fwnode(msi->node), 260 260 &xgene_msi_domain_info, 261 261 msi->inner_domain); 262 262
+56 -7
drivers/pci/msi.c
··· 20 20 #include <linux/io.h> 21 21 #include <linux/slab.h> 22 22 #include <linux/irqdomain.h> 23 + #include <linux/of_irq.h> 23 24 24 25 #include "pci.h" 25 26 ··· 1251 1250 } 1252 1251 1253 1252 /** 1254 - * pci_msi_create_irq_domain - Creat a MSI interrupt domain 1255 - * @node: Optional device-tree node of the interrupt controller 1253 + * pci_msi_create_irq_domain - Create a MSI interrupt domain 1254 + * @fwnode: Optional fwnode of the interrupt controller 1256 1255 * @info: MSI domain info 1257 1256 * @parent: Parent irq domain 1258 1257 * ··· 1261 1260 * Returns: 1262 1261 * A domain pointer or NULL in case of failure. 1263 1262 */ 1264 - struct irq_domain *pci_msi_create_irq_domain(struct device_node *node, 1263 + struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, 1265 1264 struct msi_domain_info *info, 1266 1265 struct irq_domain *parent) 1267 1266 { ··· 1272 1271 if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS) 1273 1272 pci_msi_domain_update_chip_ops(info); 1274 1273 1275 - domain = msi_create_irq_domain(node, info, parent); 1274 + domain = msi_create_irq_domain(fwnode, info, parent); 1276 1275 if (!domain) 1277 1276 return NULL; 1278 1277 ··· 1308 1307 1309 1308 /** 1310 1309 * pci_msi_create_default_irq_domain - Create a default MSI interrupt domain 1311 - * @node: Optional device-tree node of the interrupt controller 1310 + * @fwnode: Optional fwnode of the interrupt controller 1312 1311 * @info: MSI domain info 1313 1312 * @parent: Parent irq domain 1314 1313 * 1315 1314 * Returns: A domain pointer or NULL in case of failure. If successful 1316 1315 * the default PCI/MSI irqdomain pointer is updated. 
1317 - struct irq_domain *pci_msi_create_default_irq_domain(struct device_node *node, 1318 + struct irq_domain *pci_msi_create_default_irq_domain(struct fwnode_handle *fwnode, 1319 1318 struct msi_domain_info *info, struct irq_domain *parent) 1320 1319 { 1321 1320 struct irq_domain *domain; ··· 1325 1324 pr_err("PCI: default irq domain for PCI MSI has already been created.\n"); 1326 1325 domain = NULL; 1327 1326 } else { 1328 - domain = pci_msi_create_irq_domain(node, info, parent); 1327 + domain = pci_msi_create_irq_domain(fwnode, info, parent); 1329 1328 pci_msi_default_domain = domain; 1330 1329 } 1331 1330 mutex_unlock(&pci_msi_domain_lock); 1332 1331 1333 1332 return domain; 1333 + } 1334 + 1335 + static int get_msi_id_cb(struct pci_dev *pdev, u16 alias, void *data) 1336 + { 1337 + u32 *pa = data; 1338 + 1339 + *pa = alias; 1340 + return 0; 1341 + } 1342 + /** 1343 + * pci_msi_domain_get_msi_rid - Get the MSI requester id (RID) 1344 + * @domain: The interrupt domain 1345 + * @pdev: The PCI device. 1346 + * 1347 + * The RID for a device is formed from the alias, with a firmware 1348 + * supplied mapping applied. 1349 + * 1350 + * Returns: The RID. 1351 + */ 1352 + u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev) 1353 + { 1354 + struct device_node *of_node; 1355 + u32 rid = 0; 1356 + 1357 + pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); 1358 + 1359 + of_node = irq_domain_get_of_node(domain); 1360 + if (of_node) 1361 + rid = of_msi_map_rid(&pdev->dev, of_node, rid); 1362 + 1363 + return rid; 1364 + } 1365 + 1366 + /** 1367 + * pci_msi_get_device_domain - Get the MSI domain for a given PCI device 1368 + * @pdev: The PCI device 1369 + * 1370 + * Use the firmware data to find a device-specific MSI domain 1371 + * (i.e. not one that is set as a default). 1372 + * 1373 + * Returns: The corresponding MSI domain or NULL if none has been found. 
1374 + */ 1375 + struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) 1376 + { 1377 + u32 rid = 0; 1378 + 1379 + pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); 1380 + return of_msi_map_get_device_domain(&pdev->dev, rid); 1334 1381 } 1335 1382 #endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */
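The new RID lookup above has two steps: walk the device's DMA aliases (the callback always returns 0, so the last alias visited wins) and then push that RID through a firmware-supplied translation. A minimal userspace sketch of that flow, with all names (`for_each_dma_alias`, `remember_alias`, `msi_map_rid`, `get_msi_rid`) being illustrative stand-ins for the kernel helpers in the diff, and the msi-map entry values purely invented:

```c
#include <assert.h>
#include <stdint.h>

typedef int (*alias_cb)(uint16_t alias, void *data);

/* Mimics pci_for_each_dma_alias(): visit every requester ID the device
 * may appear as on the bus (e.g. when aliased by a PCI bridge). */
static void for_each_dma_alias(const uint16_t *aliases, int n,
                               alias_cb cb, void *data)
{
    for (int i = 0; i < n; i++)
        if (cb(aliases[i], data))
            return;
}

/* Same shape as get_msi_id_cb in the diff: it always returns 0, so the
 * walk continues and the *last* alias visited ends up in *pa. */
static int remember_alias(uint16_t alias, void *data)
{
    uint32_t *pa = data;

    *pa = alias;
    return 0;
}

/* Stand-in for the of_msi_map_rid() step: apply one linear
 * (rid-base, msi-base, length) entry; identity if nothing matches. */
static uint32_t msi_map_rid(uint32_t rid, uint32_t rid_base,
                            uint32_t msi_base, uint32_t length)
{
    if (rid >= rid_base && rid < rid_base + length)
        return rid - rid_base + msi_base;
    return rid;
}

static uint32_t get_msi_rid(const uint16_t *aliases, int n)
{
    uint32_t rid = 0;

    for_each_dma_alias(aliases, n, remember_alias, &rid);
    /* Hypothetical msi-map entry: RIDs 0x0100-0x01ff -> 0x1100-0x11ff. */
    return msi_map_rid(rid, 0x0100, 0x1100, 0x100);
}
```

This mirrors why `pci_msi_domain_get_msi_rid()` initializes `rid` to 0 and passes it by reference: the traversal overwrites it with each alias, and only the final value is handed to the firmware map.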
+6 -7
drivers/pci/of.c
··· 13 13 #include <linux/kernel.h> 14 14 #include <linux/pci.h> 15 15 #include <linux/of.h> 16 + #include <linux/of_irq.h> 16 17 #include <linux/of_pci.h> 17 18 #include "pci.h" 18 19 ··· 65 64 struct irq_domain *pci_host_bridge_of_msi_domain(struct pci_bus *bus) 66 65 { 67 66 #ifdef CONFIG_IRQ_DOMAIN 68 - struct device_node *np; 69 67 struct irq_domain *d; 70 68 71 69 if (!bus->dev.of_node) 72 70 return NULL; 73 71 74 72 /* Start looking for a phandle to an MSI controller. */ 75 - np = of_parse_phandle(bus->dev.of_node, "msi-parent", 0); 73 + d = of_msi_get_domain(&bus->dev, bus->dev.of_node, DOMAIN_BUS_PCI_MSI); 74 + if (d) 75 + return d; 76 76 77 77 /* 78 78 * If we don't have an msi-parent property, look for a domain 79 79 * directly attached to the host bridge. 80 80 */ 81 - if (!np) 82 - np = bus->dev.of_node; 83 - 84 - d = irq_find_matching_host(np, DOMAIN_BUS_PCI_MSI); 81 + d = irq_find_matching_host(bus->dev.of_node, DOMAIN_BUS_PCI_MSI); 85 82 if (d) 86 83 return d; 87 84 88 - return irq_find_host(np); 85 + return irq_find_host(bus->dev.of_node); 89 86 #else 90 87 return NULL; 91 88 #endif
+38 -5
drivers/pci/probe.c
··· 1622 1622 pci_enable_acs(dev); 1623 1623 } 1624 1624 1625 + /* 1626 + * This is the equivalent of pci_host_bridge_msi_domain that acts on 1627 + * devices. Firmware interfaces that can select the MSI domain on a 1628 + * per-device basis should be called from here. 1629 + */ 1630 + static struct irq_domain *pci_dev_msi_domain(struct pci_dev *dev) 1631 + { 1632 + struct irq_domain *d; 1633 + 1634 + /* 1635 + * If a domain has been set through the pcibios_add_device 1636 + * callback, then this is the one (platform code knows best). 1637 + */ 1638 + d = dev_get_msi_domain(&dev->dev); 1639 + if (d) 1640 + return d; 1641 + 1642 + /* 1643 + * Let's see if we have a firmware interface able to provide 1644 + * the domain. 1645 + */ 1646 + d = pci_msi_get_device_domain(dev); 1647 + if (d) 1648 + return d; 1649 + 1650 + return NULL; 1651 + } 1652 + 1625 1653 static void pci_set_msi_domain(struct pci_dev *dev) 1626 1654 { 1655 + struct irq_domain *d; 1656 + 1627 1657 /* 1628 - * If no domain has been set through the pcibios_add_device 1629 - * callback, inherit the default from the bus device. 1658 + * If the platform or firmware interfaces cannot supply a 1659 + * device-specific MSI domain, then inherit the default domain 1660 + * from the host bridge itself. 1630 1661 */ 1631 - if (!dev_get_msi_domain(&dev->dev)) 1632 - dev_set_msi_domain(&dev->dev, 1633 - dev_get_msi_domain(&dev->bus->dev)); 1662 + d = pci_dev_msi_domain(dev); 1663 + if (!d) 1664 + d = dev_get_msi_domain(&dev->bus->dev); 1665 + 1666 + dev_set_msi_domain(&dev->dev, d); 1634 1667 } 1635 1668 1636 1669 void pci_device_add(struct pci_dev *dev, struct pci_bus *bus)
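The probe.c hunk establishes a strict precedence for choosing a device's MSI domain. A compact sketch of that fallback chain (the function name `pick_msi_domain` and the struct contents are illustrative, not kernel API):

```c
#include <assert.h>
#include <stddef.h>

struct irq_domain { const char *name; };

/* Mirrors the precedence in pci_dev_msi_domain()/pci_set_msi_domain():
 * a platform-assigned domain (pcibios_add_device) wins, then a
 * firmware-described one (e.g. via an msi-map property), and only
 * then the default inherited from the host bridge. */
static struct irq_domain *pick_msi_domain(struct irq_domain *platform,
                                          struct irq_domain *firmware,
                                          struct irq_domain *bus_default)
{
    if (platform)
        return platform;
    if (firmware)
        return firmware;
    return bus_default;
}
```

The design point is that the per-device paths are consulted before falling back to the bus, so firmware that can distinguish devices (the msi-map case) overrides the one-domain-per-bridge assumption of the old code.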
+1 -1
drivers/spmi/spmi-pmic-arb.c
··· 657 657 "intspec[0] 0x%1x intspec[1] 0x%02x intspec[2] 0x%02x\n", 658 658 intspec[0], intspec[1], intspec[2]); 659 659 660 - if (d->of_node != controller) 660 + if (irq_domain_get_of_node(d) != controller) 661 661 return -EINVAL; 662 662 if (intsize != 4) 663 663 return -EINVAL;
+2 -2
include/kvm/arm_vgic.h
··· 282 282 }; 283 283 284 284 struct vgic_v3_cpu_if { 285 - #ifdef CONFIG_ARM_GIC_V3 285 + #ifdef CONFIG_KVM_ARM_VGIC_V3 286 286 u32 vgic_hcr; 287 287 u32 vgic_vmcr; 288 288 u32 vgic_sre; /* Restored only, change ignored */ ··· 364 364 int vgic_v2_probe(struct device_node *vgic_node, 365 365 const struct vgic_ops **ops, 366 366 const struct vgic_params **params); 367 - #ifdef CONFIG_ARM_GIC_V3 367 + #ifdef CONFIG_KVM_ARM_VGIC_V3 368 368 int vgic_v3_probe(struct device_node *vgic_node, 369 369 const struct vgic_ops **ops, 370 370 const struct vgic_params **params);
+3
include/linux/acpi.h
··· 201 201 int acpi_gsi_to_irq (u32 gsi, unsigned int *irq); 202 202 int acpi_isa_irq_to_gsi (unsigned isa_irq, u32 *gsi); 203 203 204 + void acpi_set_irq_model(enum acpi_irq_model_id model, 205 + struct fwnode_handle *fwnode); 206 + 204 207 #ifdef CONFIG_X86_IO_APIC 205 208 extern int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity); 206 209 #else
+1
include/linux/fwnode.h
··· 17 17 FWNODE_OF, 18 18 FWNODE_ACPI, 19 19 FWNODE_PDATA, 20 + FWNODE_IRQCHIP, 20 21 }; 21 22 22 23 struct fwnode_handle {
+2
include/linux/interrupt.h
··· 102 102 * @flags: flags (see IRQF_* above) 103 103 * @thread_fn: interrupt handler function for threaded interrupts 104 104 * @thread: thread pointer for threaded interrupts 105 + * @secondary: pointer to secondary irqaction (force threading) 105 106 * @thread_flags: flags related to @thread 106 107 * @thread_mask: bitmask for keeping track of @thread activity 107 108 * @dir: pointer to the proc/irq/NN/name entry ··· 114 113 struct irqaction *next; 115 114 irq_handler_t thread_fn; 116 115 struct task_struct *thread; 116 + struct irqaction *secondary; 117 117 unsigned int irq; 118 118 unsigned int flags; 119 119 unsigned long thread_flags;
+6 -17
include/linux/irq.h
··· 67 67 * request/setup_irq() 68 68 * IRQ_NO_BALANCING - Interrupt cannot be balanced (affinity set) 69 69 * IRQ_MOVE_PCNTXT - Interrupt can be migrated from process context 70 - * IRQ_NESTED_TRHEAD - Interrupt nests into another thread 70 + * IRQ_NESTED_THREAD - Interrupt nests into another thread 71 71 * IRQ_PER_CPU_DEVID - Dev_id is a per-cpu variable 72 72 * IRQ_IS_POLLED - Always polled by another interrupt. Exclude 73 73 * it from the spurious interrupt detection 74 74 * mechanism and from core side polling. 75 + * IRQ_DISABLE_UNLAZY - Disable lazy irq disable 75 76 */ 76 77 enum { 77 78 IRQ_TYPE_NONE = 0x00000000, ··· 98 97 IRQ_NOTHREAD = (1 << 16), 99 98 IRQ_PER_CPU_DEVID = (1 << 17), 100 99 IRQ_IS_POLLED = (1 << 18), 100 + IRQ_DISABLE_UNLAZY = (1 << 19), 101 101 }; 102 102 103 103 #define IRQF_MODIFY_MASK \ 104 104 (IRQ_TYPE_SENSE_MASK | IRQ_NOPROBE | IRQ_NOREQUEST | \ 105 105 IRQ_NOAUTOEN | IRQ_MOVE_PCNTXT | IRQ_LEVEL | IRQ_NO_BALANCING | \ 106 106 IRQ_PER_CPU | IRQ_NESTED_THREAD | IRQ_NOTHREAD | IRQ_PER_CPU_DEVID | \ 107 - IRQ_IS_POLLED) 107 + IRQ_IS_POLLED | IRQ_DISABLE_UNLAZY) 108 108 109 109 #define IRQ_NO_BALANCING_MASK (IRQ_PER_CPU | IRQ_NO_BALANCING) 110 110 ··· 299 297 __irqd_to_state(d) &= ~IRQD_FORWARDED_TO_VCPU; 300 298 } 301 299 302 - /* 303 - * Functions for chained handlers which can be enabled/disabled by the 304 - * standard disable_irq/enable_irq calls. Must be called with 305 - * irq_desc->lock held. 
306 - */ 307 - static inline void irqd_set_chained_irq_inprogress(struct irq_data *d) 308 - { 309 - __irqd_to_state(d) |= IRQD_IRQ_INPROGRESS; 310 - } 311 - 312 - static inline void irqd_clr_chained_irq_inprogress(struct irq_data *d) 313 - { 314 - __irqd_to_state(d) &= ~IRQD_IRQ_INPROGRESS; 315 - } 316 - 317 300 static inline irq_hw_number_t irqd_to_hwirq(struct irq_data *d) 318 301 { 319 302 return d->hwirq; ··· 438 451 extern int irq_set_affinity_locked(struct irq_data *data, 439 452 const struct cpumask *cpumask, bool force); 440 453 extern int irq_set_vcpu_affinity(unsigned int irq, void *vcpu_info); 454 + 455 + extern void irq_migrate_all_off_this_cpu(void); 441 456 442 457 #if defined(CONFIG_SMP) && defined(CONFIG_GENERIC_PENDING_IRQ) 443 458 void irq_move_irq(struct irq_data *data);
+25 -78
include/linux/irqchip/arm-gic-v3.h
··· 18 18 #ifndef __LINUX_IRQCHIP_ARM_GIC_V3_H 19 19 #define __LINUX_IRQCHIP_ARM_GIC_V3_H 20 20 21 - #include <asm/sysreg.h> 22 - 23 21 /* 24 22 * Distributor registers. We assume we're running non-secure, with ARE 25 23 * being set. Secure-only and non-ARE registers are not described. ··· 229 231 #define GITS_BASER_PAGE_SIZE_16K (1UL << GITS_BASER_PAGE_SIZE_SHIFT) 230 232 #define GITS_BASER_PAGE_SIZE_64K (2UL << GITS_BASER_PAGE_SIZE_SHIFT) 231 233 #define GITS_BASER_PAGE_SIZE_MASK (3UL << GITS_BASER_PAGE_SIZE_SHIFT) 234 + #define GITS_BASER_PAGES_MAX 256 232 235 233 236 #define GITS_BASER_TYPE_NONE 0 234 237 #define GITS_BASER_TYPE_DEVICE 1 ··· 265 266 /* 266 267 * Hypervisor interface registers (SRE only) 267 268 */ 268 - #define ICH_LR_VIRTUAL_ID_MASK ((1UL << 32) - 1) 269 + #define ICH_LR_VIRTUAL_ID_MASK ((1ULL << 32) - 1) 269 270 270 - #define ICH_LR_EOI (1UL << 41) 271 - #define ICH_LR_GROUP (1UL << 60) 272 - #define ICH_LR_HW (1UL << 61) 273 - #define ICH_LR_STATE (3UL << 62) 274 - #define ICH_LR_PENDING_BIT (1UL << 62) 275 - #define ICH_LR_ACTIVE_BIT (1UL << 63) 271 + #define ICH_LR_EOI (1ULL << 41) 272 + #define ICH_LR_GROUP (1ULL << 60) 273 + #define ICH_LR_HW (1ULL << 61) 274 + #define ICH_LR_STATE (3ULL << 62) 275 + #define ICH_LR_PENDING_BIT (1ULL << 62) 276 + #define ICH_LR_ACTIVE_BIT (1ULL << 63) 276 277 #define ICH_LR_PHYS_ID_SHIFT 32 277 - #define ICH_LR_PHYS_ID_MASK (0x3ffUL << ICH_LR_PHYS_ID_SHIFT) 278 + #define ICH_LR_PHYS_ID_MASK (0x3ffULL << ICH_LR_PHYS_ID_SHIFT) 278 279 279 280 #define ICH_MISR_EOI (1 << 0) 280 281 #define ICH_MISR_U (1 << 1) ··· 291 292 #define ICH_VMCR_PMR_SHIFT 24 292 293 #define ICH_VMCR_PMR_MASK (0xffUL << ICH_VMCR_PMR_SHIFT) 293 294 294 - #define ICC_EOIR1_EL1 sys_reg(3, 0, 12, 12, 1) 295 - #define ICC_DIR_EL1 sys_reg(3, 0, 12, 11, 1) 296 - #define ICC_IAR1_EL1 sys_reg(3, 0, 12, 12, 0) 297 - #define ICC_SGI1R_EL1 sys_reg(3, 0, 12, 11, 5) 298 - #define ICC_PMR_EL1 sys_reg(3, 0, 4, 6, 0) 299 - #define ICC_CTLR_EL1 
sys_reg(3, 0, 12, 12, 4) 300 - #define ICC_SRE_EL1 sys_reg(3, 0, 12, 12, 5) 301 - #define ICC_GRPEN1_EL1 sys_reg(3, 0, 12, 12, 7) 302 - 303 295 #define ICC_IAR1_EL1_SPURIOUS 0x3ff 304 - 305 - #define ICC_SRE_EL2 sys_reg(3, 4, 12, 9, 5) 306 296 307 297 #define ICC_SRE_EL2_SRE (1 << 0) 308 298 #define ICC_SRE_EL2_ENABLE (1 << 3) ··· 308 320 #define ICC_SGI1R_AFFINITY_3_SHIFT 48 309 321 #define ICC_SGI1R_AFFINITY_3_MASK (0xffULL << ICC_SGI1R_AFFINITY_1_SHIFT) 310 322 311 - /* 312 - * System register definitions 313 - */ 314 - #define ICH_VSEIR_EL2 sys_reg(3, 4, 12, 9, 4) 315 - #define ICH_HCR_EL2 sys_reg(3, 4, 12, 11, 0) 316 - #define ICH_VTR_EL2 sys_reg(3, 4, 12, 11, 1) 317 - #define ICH_MISR_EL2 sys_reg(3, 4, 12, 11, 2) 318 - #define ICH_EISR_EL2 sys_reg(3, 4, 12, 11, 3) 319 - #define ICH_ELSR_EL2 sys_reg(3, 4, 12, 11, 5) 320 - #define ICH_VMCR_EL2 sys_reg(3, 4, 12, 11, 7) 321 - 322 - #define __LR0_EL2(x) sys_reg(3, 4, 12, 12, x) 323 - #define __LR8_EL2(x) sys_reg(3, 4, 12, 13, x) 324 - 325 - #define ICH_LR0_EL2 __LR0_EL2(0) 326 - #define ICH_LR1_EL2 __LR0_EL2(1) 327 - #define ICH_LR2_EL2 __LR0_EL2(2) 328 - #define ICH_LR3_EL2 __LR0_EL2(3) 329 - #define ICH_LR4_EL2 __LR0_EL2(4) 330 - #define ICH_LR5_EL2 __LR0_EL2(5) 331 - #define ICH_LR6_EL2 __LR0_EL2(6) 332 - #define ICH_LR7_EL2 __LR0_EL2(7) 333 - #define ICH_LR8_EL2 __LR8_EL2(0) 334 - #define ICH_LR9_EL2 __LR8_EL2(1) 335 - #define ICH_LR10_EL2 __LR8_EL2(2) 336 - #define ICH_LR11_EL2 __LR8_EL2(3) 337 - #define ICH_LR12_EL2 __LR8_EL2(4) 338 - #define ICH_LR13_EL2 __LR8_EL2(5) 339 - #define ICH_LR14_EL2 __LR8_EL2(6) 340 - #define ICH_LR15_EL2 __LR8_EL2(7) 341 - 342 - #define __AP0Rx_EL2(x) sys_reg(3, 4, 12, 8, x) 343 - #define ICH_AP0R0_EL2 __AP0Rx_EL2(0) 344 - #define ICH_AP0R1_EL2 __AP0Rx_EL2(1) 345 - #define ICH_AP0R2_EL2 __AP0Rx_EL2(2) 346 - #define ICH_AP0R3_EL2 __AP0Rx_EL2(3) 347 - 348 - #define __AP1Rx_EL2(x) sys_reg(3, 4, 12, 9, x) 349 - #define ICH_AP1R0_EL2 __AP1Rx_EL2(0) 350 - #define ICH_AP1R1_EL2 
__AP1Rx_EL2(1) 351 - #define ICH_AP1R2_EL2 __AP1Rx_EL2(2) 352 - #define ICH_AP1R3_EL2 __AP1Rx_EL2(3) 323 + #include <asm/arch_gicv3.h> 353 324 354 325 #ifndef __ASSEMBLY__ 355 - 356 - #include <linux/stringify.h> 357 - #include <asm/msi.h> 358 326 359 327 /* 360 328 * We need a value to serve as a irq-type for LPIs. Choose one that will ··· 329 385 u64 flags; 330 386 }; 331 387 332 - static inline void gic_write_eoir(u64 irq) 333 - { 334 - asm volatile("msr_s " __stringify(ICC_EOIR1_EL1) ", %0" : : "r" (irq)); 335 - isb(); 336 - } 337 - 338 - static inline void gic_write_dir(u64 irq) 339 - { 340 - asm volatile("msr_s " __stringify(ICC_DIR_EL1) ", %0" : : "r" (irq)); 341 - isb(); 342 - } 343 - 344 388 struct irq_domain; 345 389 int its_cpu_init(void); 346 390 int its_init(struct device_node *node, struct rdists *rdists, 347 391 struct irq_domain *domain); 392 + 393 + static inline bool gic_enable_sre(void) 394 + { 395 + u32 val; 396 + 397 + val = gic_read_sre(); 398 + if (val & ICC_SRE_EL1_SRE) 399 + return true; 400 + 401 + val |= ICC_SRE_EL1_SRE; 402 + gic_write_sre(val); 403 + val = gic_read_sre(); 404 + 405 + return !!(val & ICC_SRE_EL1_SRE); 406 + } 348 407 349 408 #endif 350 409
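One subtle fix in this hunk is the switch of the ICH_LR_* masks from `1UL` to `1ULL`: these are bits 41-63 of a 64-bit list register, and on a 32-bit kernel `1UL << 62` shifts past the width of `unsigned long`, which is undefined behaviour. A sketch using the constants from the diff (the helper `lr_state` is illustrative, not a kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Constants as defined in the diff: bits 62-63 of a 64-bit ICH_LR
 * list register value. */
#define ICH_LR_STATE       (3ULL << 62)
#define ICH_LR_PENDING_BIT (1ULL << 62)
#define ICH_LR_ACTIVE_BIT  (1ULL << 63)

/* Extract the two-bit state field. Had the masks stayed 1UL, a 32-bit
 * unsigned long would make these shifts undefined (and the masks
 * effectively useless), which is what the ULL conversion fixes. */
static unsigned int lr_state(uint64_t lr)
{
    return (unsigned int)((lr & ICH_LR_STATE) >> 62);
}
```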
+2 -7
include/linux/irqchip/arm-gic.h
··· 100 100 101 101 struct device_node; 102 102 103 - void gic_init_bases(unsigned int, int, void __iomem *, void __iomem *, 104 - u32 offset, struct device_node *); 105 103 void gic_cascade_irq(unsigned int gic_nr, unsigned int irq); 106 104 int gic_cpu_if_down(unsigned int gic_nr); 107 105 108 - static inline void gic_init(unsigned int nr, int start, 109 - void __iomem *dist , void __iomem *cpu) 110 - { 111 - gic_init_bases(nr, start, dist, cpu, 0, NULL); 112 - } 106 + void gic_init(unsigned int nr, int start, 107 + void __iomem *dist , void __iomem *cpu); 113 108 114 109 int gicv2m_of_init(struct device_node *node, struct irq_domain *parent); 115 110
+83 -23
include/linux/irqdomain.h
··· 5 5 * helpful for interrupt controllers to implement mapping between hardware 6 6 * irq numbers and the Linux irq number space. 7 7 * 8 - * irq_domains also have a hook for translating device tree interrupt 9 - * representation into a hardware irq number that can be mapped back to a 10 - * Linux irq number without any extra platform support code. 8 + * irq_domains also have hooks for translating device tree or other 9 + * firmware interrupt representations into a hardware irq number that 10 + * can be mapped back to a Linux irq number without any extra platform 11 + * support code. 11 12 * 12 13 * Interrupt controller "domain" data structure. This could be defined as a 13 14 * irq domain controller. That is, it handles the mapping between hardware ··· 18 17 * model). It's the domain callbacks that are responsible for setting the 19 18 * irq_chip on a given irq_desc after it's been mapped. 20 19 * 21 - * The host code and data structures are agnostic to whether or not 22 - * we use an open firmware device-tree. We do have references to struct 23 - * device_node in two places: in irq_find_host() to find the host matching 24 - * a given interrupt controller node, and of course as an argument to its 25 - * counterpart domain->ops->match() callback. However, those are treated as 26 - * generic pointers by the core and the fact that it's actually a device-node 27 - * pointer is purely a convention between callers and implementation. This 28 - * code could thus be used on other architectures by replacing those two 29 - * by some sort of arch-specific void * "token" used to identify interrupt 30 - * controllers. 20 + * The host code and data structures use a fwnode_handle pointer to 21 + * identify the domain. In some cases, and in order to preserve source 22 + * code compatibility, this fwnode pointer is "upgraded" to a DT 23 + * device_node. 
For those firmware infrastructures that do not provide 24 + * a unique identifier for an interrupt controller, the irq_domain 25 + * code offers a fwnode allocator. 31 26 */ 32 27 33 28 #ifndef _LINUX_IRQDOMAIN_H ··· 31 34 32 35 #include <linux/types.h> 33 36 #include <linux/irqhandler.h> 37 + #include <linux/of.h> 34 38 #include <linux/radix-tree.h> 35 39 36 40 struct device_node; ··· 42 44 43 45 /* Number of irqs reserved for a legacy isa controller */ 44 46 #define NUM_ISA_INTERRUPTS 16 47 + 48 + #define IRQ_DOMAIN_IRQ_SPEC_PARAMS 16 49 + 50 + /** 51 + * struct irq_fwspec - generic IRQ specifier structure 52 + * 53 + * @fwnode: Pointer to a firmware-specific descriptor 54 + * @param_count: Number of device-specific parameters 55 + * @param: Device-specific parameters 56 + * 57 + * This structure, directly modeled after of_phandle_args, is used to 58 + * pass a device-specific description of an interrupt. 59 + */ 60 + struct irq_fwspec { 61 + struct fwnode_handle *fwnode; 62 + int param_count; 63 + u32 param[IRQ_DOMAIN_IRQ_SPEC_PARAMS]; 64 + }; 45 65 46 66 /* 47 67 * Should several domains have the same device node, but serve ··· 107 91 unsigned int nr_irqs); 108 92 void (*activate)(struct irq_domain *d, struct irq_data *irq_data); 109 93 void (*deactivate)(struct irq_domain *d, struct irq_data *irq_data); 94 + int (*translate)(struct irq_domain *d, struct irq_fwspec *fwspec, 95 + unsigned long *out_hwirq, unsigned int *out_type); 110 96 #endif 111 97 }; 112 98 ··· 148 130 unsigned int flags; 149 131 150 132 /* Optional data */ 151 - struct device_node *of_node; 133 + struct fwnode_handle *fwnode; 152 134 enum irq_domain_bus_token bus_token; 153 135 struct irq_domain_chip_generic *gc; 154 136 #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY ··· 181 163 182 164 static inline struct device_node *irq_domain_get_of_node(struct irq_domain *d) 183 165 { 184 - return d->of_node; 166 + return to_of_node(d->fwnode); 185 167 } 186 168 187 169 #ifdef CONFIG_IRQ_DOMAIN 188 - struct 
irq_domain *__irq_domain_add(struct device_node *of_node, int size, 170 + struct fwnode_handle *irq_domain_alloc_fwnode(void *data); 171 + void irq_domain_free_fwnode(struct fwnode_handle *fwnode); 172 + struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size, 189 173 irq_hw_number_t hwirq_max, int direct_max, 190 174 const struct irq_domain_ops *ops, 191 175 void *host_data); ··· 202 182 irq_hw_number_t first_hwirq, 203 183 const struct irq_domain_ops *ops, 204 184 void *host_data); 205 - extern struct irq_domain *irq_find_matching_host(struct device_node *node, 206 - enum irq_domain_bus_token bus_token); 185 + extern struct irq_domain *irq_find_matching_fwnode(struct fwnode_handle *fwnode, 186 + enum irq_domain_bus_token bus_token); 207 187 extern void irq_set_default_host(struct irq_domain *host); 188 + 189 + static inline struct fwnode_handle *of_node_to_fwnode(struct device_node *node) 190 + { 191 + return node ? &node->fwnode : NULL; 192 + } 193 + 194 + static inline struct irq_domain *irq_find_matching_host(struct device_node *node, 195 + enum irq_domain_bus_token bus_token) 196 + { 197 + return irq_find_matching_fwnode(of_node_to_fwnode(node), bus_token); 198 + } 208 199 209 200 static inline struct irq_domain *irq_find_host(struct device_node *node) 210 201 { ··· 234 203 const struct irq_domain_ops *ops, 235 204 void *host_data) 236 205 { 237 - return __irq_domain_add(of_node, size, size, 0, ops, host_data); 206 + return __irq_domain_add(of_node_to_fwnode(of_node), size, size, 0, ops, host_data); 238 207 } 239 208 static inline struct irq_domain *irq_domain_add_nomap(struct device_node *of_node, 240 209 unsigned int max_irq, 241 210 const struct irq_domain_ops *ops, 242 211 void *host_data) 243 212 { 244 - return __irq_domain_add(of_node, 0, max_irq, max_irq, ops, host_data); 213 + return __irq_domain_add(of_node_to_fwnode(of_node), 0, max_irq, max_irq, ops, host_data); 245 214 } 246 215 static inline struct irq_domain 
*irq_domain_add_legacy_isa( 247 216 struct device_node *of_node, ··· 255 224 const struct irq_domain_ops *ops, 256 225 void *host_data) 257 226 { 258 - return __irq_domain_add(of_node, 0, ~0, 0, ops, host_data); 227 + return __irq_domain_add(of_node_to_fwnode(of_node), 0, ~0, 0, ops, host_data); 228 + } 229 + 230 + static inline struct irq_domain *irq_domain_create_linear(struct fwnode_handle *fwnode, 231 + unsigned int size, 232 + const struct irq_domain_ops *ops, 233 + void *host_data) 234 + { 235 + return __irq_domain_add(fwnode, size, size, 0, ops, host_data); 236 + } 237 + 238 + static inline struct irq_domain *irq_domain_create_tree(struct fwnode_handle *fwnode, 239 + const struct irq_domain_ops *ops, 240 + void *host_data) 241 + { 242 + return __irq_domain_add(fwnode, 0, ~0, 0, ops, host_data); 259 243 } 260 244 261 245 extern void irq_domain_remove(struct irq_domain *host); ··· 285 239 286 240 extern unsigned int irq_create_mapping(struct irq_domain *host, 287 241 irq_hw_number_t hwirq); 242 + extern unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec); 288 243 extern void irq_dispose_mapping(unsigned int virq); 289 244 290 245 /** ··· 337 290 void *chip_data, irq_flow_handler_t handler, 338 291 void *handler_data, const char *handler_name); 339 292 #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY 340 - extern struct irq_domain *irq_domain_add_hierarchy(struct irq_domain *parent, 293 + extern struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent, 341 294 unsigned int flags, unsigned int size, 342 - struct device_node *node, 295 + struct fwnode_handle *fwnode, 343 296 const struct irq_domain_ops *ops, void *host_data); 297 + 298 + static inline struct irq_domain *irq_domain_add_hierarchy(struct irq_domain *parent, 299 + unsigned int flags, 300 + unsigned int size, 301 + struct device_node *node, 302 + const struct irq_domain_ops *ops, 303 + void *host_data) 304 + { 305 + return irq_domain_create_hierarchy(parent, flags, size, 306 + 
of_node_to_fwnode(node), 307 + ops, host_data); 308 + } 309 + 344 310 extern int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base, 345 311 unsigned int nr_irqs, int node, void *arg, 346 312 bool realloc);
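The centerpiece of the irqdomain.h changes is `struct irq_fwspec`, which generalizes `of_phandle_args`: an opaque `fwnode_handle` identifies the controller and up to 16 u32 cells carry the firmware-specific interrupt description, consumed by the new `->translate()` callback. A sketch of how such a callback might decode a hypothetical two-cell binding (the binding layout and `demo_translate` are assumptions for illustration; the struct mirrors the diff):

```c
#include <assert.h>
#include <stdint.h>

#define IRQ_DOMAIN_IRQ_SPEC_PARAMS 16

struct fwnode_handle { int type; };

/* Same shape as the irq_fwspec added by the diff, directly modeled
 * after of_phandle_args. */
struct irq_fwspec {
    struct fwnode_handle *fwnode;
    int param_count;
    uint32_t param[IRQ_DOMAIN_IRQ_SPEC_PARAMS];
};

/* Sketch of a domain ->translate() callback for a hypothetical
 * two-cell binding: param[0] is the hwirq, param[1] the trigger type. */
static int demo_translate(const struct irq_fwspec *fwspec,
                          unsigned long *out_hwirq, unsigned int *out_type)
{
    if (fwspec->param_count != 2)
        return -22; /* -EINVAL */
    *out_hwirq = fwspec->param[0];
    *out_type = fwspec->param[1];
    return 0;
}
```

Because the specifier no longer assumes a device-tree node, the same path serves DT, ACPI, and the new raw-fwnode irqchips, which is what lets `irq_create_fwspec_mapping()` replace the OF-only mapping entry point.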
+1 -1
include/linux/irqreturn.h
··· 3 3 4 4 /** 5 5 * enum irqreturn 6 - * @IRQ_NONE interrupt was not from this device 6 + * @IRQ_NONE interrupt was not from this device or was not handled 7 7 * @IRQ_HANDLED interrupt was handled by this device 8 8 * @IRQ_WAKE_THREAD handler requests to wake the handler thread 9 9 */
+12 -4
include/linux/msi.h
··· 174 174 struct irq_domain; 175 175 struct irq_chip; 176 176 struct device_node; 177 + struct fwnode_handle; 177 178 struct msi_domain_info; 178 179 179 180 /** ··· 263 262 int msi_domain_set_affinity(struct irq_data *data, const struct cpumask *mask, 264 263 bool force); 265 264 266 - struct irq_domain *msi_create_irq_domain(struct device_node *of_node, 265 + struct irq_domain *msi_create_irq_domain(struct fwnode_handle *fwnode, 267 266 struct msi_domain_info *info, 268 267 struct irq_domain *parent); 269 268 int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev, ··· 271 270 void msi_domain_free_irqs(struct irq_domain *domain, struct device *dev); 272 271 struct msi_domain_info *msi_get_domain_info(struct irq_domain *domain); 273 272 274 - struct irq_domain *platform_msi_create_irq_domain(struct device_node *np, 273 + struct irq_domain *platform_msi_create_irq_domain(struct fwnode_handle *fwnode, 275 274 struct msi_domain_info *info, 276 275 struct irq_domain *parent); 277 276 int platform_msi_domain_alloc_irqs(struct device *dev, unsigned int nvec, ··· 281 280 282 281 #ifdef CONFIG_PCI_MSI_IRQ_DOMAIN 283 282 void pci_msi_domain_write_msg(struct irq_data *irq_data, struct msi_msg *msg); 284 - struct irq_domain *pci_msi_create_irq_domain(struct device_node *node, 283 + struct irq_domain *pci_msi_create_irq_domain(struct fwnode_handle *fwnode, 285 284 struct msi_domain_info *info, 286 285 struct irq_domain *parent); 287 286 int pci_msi_domain_alloc_irqs(struct irq_domain *domain, struct pci_dev *dev, 288 287 int nvec, int type); 289 288 void pci_msi_domain_free_irqs(struct irq_domain *domain, struct pci_dev *dev); 290 - struct irq_domain *pci_msi_create_default_irq_domain(struct device_node *node, 289 + struct irq_domain *pci_msi_create_default_irq_domain(struct fwnode_handle *fwnode, 291 290 struct msi_domain_info *info, struct irq_domain *parent); 292 291 293 292 irq_hw_number_t pci_msi_domain_calc_hwirq(struct pci_dev *dev, 294 293 struct 
msi_desc *desc); 295 294 int pci_msi_domain_check_cap(struct irq_domain *domain, 296 295 struct msi_domain_info *info, struct device *dev); 296 + u32 pci_msi_domain_get_msi_rid(struct irq_domain *domain, struct pci_dev *pdev); 297 + struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev); 298 + #else 299 + static inline struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) 300 + { 301 + return NULL; 302 + } 297 303 #endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */ 298 304 299 305 #endif /* LINUX_MSI_H */
+23
include/linux/of_irq.h
··· 46 46 extern int of_irq_get_byname(struct device_node *dev, const char *name); 47 47 extern int of_irq_to_resource_table(struct device_node *dev, 48 48 struct resource *res, int nr_irqs); 49 + extern struct irq_domain *of_msi_get_domain(struct device *dev, 50 + struct device_node *np, 51 + enum irq_domain_bus_token token); 52 + extern struct irq_domain *of_msi_map_get_device_domain(struct device *dev, 53 + u32 rid); 49 54 #else 50 55 static inline int of_irq_count(struct device_node *dev) 51 56 { ··· 69 64 { 70 65 return 0; 71 66 } 67 + static inline struct irq_domain *of_msi_get_domain(struct device *dev, 68 + struct device_node *np, 69 + enum irq_domain_bus_token token) 70 + { 71 + return NULL; 72 + } 73 + static inline struct irq_domain *of_msi_map_get_device_domain(struct device *dev, 74 + u32 rid) 75 + { 76 + return NULL; 77 + } 72 78 #endif 73 79 74 80 #if defined(CONFIG_OF) ··· 91 75 extern unsigned int irq_of_parse_and_map(struct device_node *node, int index); 92 76 extern struct device_node *of_irq_find_parent(struct device_node *child); 93 77 extern void of_msi_configure(struct device *dev, struct device_node *np); 78 + u32 of_msi_map_rid(struct device *dev, struct device_node *msi_np, u32 rid_in); 94 79 95 80 #else /* !CONFIG_OF */ 96 81 static inline unsigned int irq_of_parse_and_map(struct device_node *dev, ··· 103 86 static inline void *of_irq_find_parent(struct device_node *child) 104 87 { 105 88 return NULL; 89 + } 90 + 91 + static inline u32 of_msi_map_rid(struct device *dev, 92 + struct device_node *msi_np, u32 rid_in) 93 + { 94 + return rid_in; 106 95 } 107 96 #endif /* !CONFIG_OF */ 108 97
+4
kernel/irq/Kconfig
··· 30 30 config GENERIC_PENDING_IRQ 31 31 bool 32 32 33 + # Support for generic irq migrating off cpu before the cpu is offline. 34 + config GENERIC_IRQ_MIGRATION 35 + bool 36 + 33 37 # Alpha specific irq affinity mechanism 34 38 config AUTO_IRQ_AFFINITY 35 39 bool
+1
kernel/irq/Makefile
··· 5 5 obj-$(CONFIG_IRQ_DOMAIN) += irqdomain.o 6 6 obj-$(CONFIG_PROC_FS) += proc.o 7 7 obj-$(CONFIG_GENERIC_PENDING_IRQ) += migration.o 8 + obj-$(CONFIG_GENERIC_IRQ_MIGRATION) += cpuhotplug.o 8 9 obj-$(CONFIG_PM_SLEEP) += pm.o 9 10 obj-$(CONFIG_GENERIC_MSI_IRQ) += msi.o
+27 -1
kernel/irq/chip.c
··· 21 21 22 22 #include "internals.h" 23 23 24 + static irqreturn_t bad_chained_irq(int irq, void *dev_id) 25 + { 26 + WARN_ONCE(1, "Chained irq %d should not call an action\n", irq); 27 + return IRQ_NONE; 28 + } 29 + 30 + /* 31 + * Chained handlers should never call action on their IRQ. This default 32 + * action will emit a warning if such a thing happens. 33 + */ 34 + struct irqaction chained_action = { 35 + .handler = bad_chained_irq, 36 + }; 37 + 24 38 /** 25 39 * irq_set_chip - set the irq chip for an irq 26 40 * @irq: irq number ··· 241 227 * disabled. If an interrupt happens, then the interrupt flow 242 228 * handler masks the line at the hardware level and marks it 243 229 * pending. 230 + * 231 + * If the interrupt chip does not implement the irq_disable callback, 232 + * a driver can disable the lazy approach for a particular irq line by 233 + * calling 'irq_set_status_flags(irq, IRQ_DISABLE_UNLAZY)'. This can 234 + * be used for devices which cannot disable the interrupt at the 235 + * device level under certain circumstances and have to use 236 + * disable_irq[_nosync] instead. 
244 237 */ 245 238 void irq_disable(struct irq_desc *desc) 246 239 { ··· 255 234 if (desc->irq_data.chip->irq_disable) { 256 235 desc->irq_data.chip->irq_disable(&desc->irq_data); 257 236 irq_state_set_masked(desc); 237 + } else if (irq_settings_disable_unlazy(desc)) { 238 + mask_irq(desc); 258 239 } 259 240 } 260 241 ··· 692 669 if (chip->irq_ack) 693 670 chip->irq_ack(&desc->irq_data); 694 671 695 - handle_irq_event_percpu(desc, desc->action); 672 + handle_irq_event_percpu(desc); 696 673 697 674 if (chip->irq_eoi) 698 675 chip->irq_eoi(&desc->irq_data); ··· 769 746 if (desc->irq_data.chip != &no_irq_chip) 770 747 mask_ack_irq(desc); 771 748 irq_state_set_disabled(desc); 749 + if (is_chained) 750 + desc->action = NULL; 772 751 desc->depth = 1; 773 752 } 774 753 desc->handle_irq = handle; ··· 780 755 irq_settings_set_noprobe(desc); 781 756 irq_settings_set_norequest(desc); 782 757 irq_settings_set_nothread(desc); 758 + desc->action = &chained_action; 783 759 irq_startup(desc, true); 784 760 } 785 761 }
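The `chained_action` sentinel above exists so that a chained descriptor never carries a NULL action: if generic code ever tries to run the action of a chained irq, it hits a callable handler that warns and reports the interrupt as unhandled. The pattern in miniature (types reduced to stand-ins; the kernel's `WARN_ONCE` is replaced here by a counter for testability):

```c
#include <assert.h>
#include <stddef.h>

typedef int irqreturn_t;
#define IRQ_NONE    0
#define IRQ_HANDLED 1

struct irqaction {
    irqreturn_t (*handler)(int irq, void *dev_id);
};

static int bad_calls;

/* Counterpart of bad_chained_irq(): record the misuse (the kernel
 * WARNs once here) and report the interrupt as unhandled. */
static irqreturn_t bad_chained_irq(int irq, void *dev_id)
{
    (void)irq;
    (void)dev_id;
    bad_calls++;
    return IRQ_NONE;
}

/* The sentinel installed on chained descriptors, so that a stray
 * handle_irq_event() finds a callable action instead of NULL. */
static struct irqaction chained_action = {
    .handler = bad_chained_irq,
};

static irqreturn_t run_action(const struct irqaction *action, int irq)
{
    return action->handler(irq, NULL);
}
```

A default object that fails loudly is cheaper than sprinkling NULL checks through the hot interrupt path, which is the design choice the diff makes.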
+82
kernel/irq/cpuhotplug.c
··· 1 + /* 2 + * Generic cpu hotunplug interrupt migration code copied from the 3 + * arch/arm implementation 4 + * 5 + * Copyright (C) Russell King 6 + * 7 + * This program is free software; you can redistribute it and/or modify 8 + * it under the terms of the GNU General Public License version 2 as 9 + * published by the Free Software Foundation. 10 + */ 11 + #include <linux/interrupt.h> 12 + #include <linux/ratelimit.h> 13 + #include <linux/irq.h> 14 + 15 + #include "internals.h" 16 + 17 + static bool migrate_one_irq(struct irq_desc *desc) 18 + { 19 + struct irq_data *d = irq_desc_get_irq_data(desc); 20 + const struct cpumask *affinity = d->common->affinity; 21 + struct irq_chip *c; 22 + bool ret = false; 23 + 24 + /* 25 + * If this is a per-CPU interrupt, or the affinity does not 26 + * include this CPU, then we have nothing to do. 27 + */ 28 + if (irqd_is_per_cpu(d) || 29 + !cpumask_test_cpu(smp_processor_id(), affinity)) 30 + return false; 31 + 32 + if (cpumask_any_and(affinity, cpu_online_mask) >= nr_cpu_ids) { 33 + affinity = cpu_online_mask; 34 + ret = true; 35 + } 36 + 37 + c = irq_data_get_irq_chip(d); 38 + if (!c->irq_set_affinity) { 39 + pr_warn_ratelimited("IRQ%u: unable to set affinity\n", d->irq); 40 + } else { 41 + int r = irq_do_set_affinity(d, affinity, false); 42 + if (r) 43 + pr_warn_ratelimited("IRQ%u: set affinity failed(%d).\n", 44 + d->irq, r); 45 + } 46 + 47 + return ret; 48 + } 49 + 50 + /** 51 + * irq_migrate_all_off_this_cpu - Migrate irqs away from offline cpu 52 + * 53 + * The current CPU has been marked offline. Migrate IRQs off this CPU. 54 + * If the affinity settings do not allow other CPUs, force them onto any 55 + * available CPU. 56 + * 57 + * Note: we must iterate over all IRQs, whether they have an attached 58 + * action structure or not, as we need to get chained interrupts too. 
59 + */ 60 + void irq_migrate_all_off_this_cpu(void) 61 + { 62 + unsigned int irq; 63 + struct irq_desc *desc; 64 + unsigned long flags; 65 + 66 + local_irq_save(flags); 67 + 68 + for_each_active_irq(irq) { 69 + bool affinity_broken; 70 + 71 + desc = irq_to_desc(irq); 72 + raw_spin_lock(&desc->lock); 73 + affinity_broken = migrate_one_irq(desc); 74 + raw_spin_unlock(&desc->lock); 75 + 76 + if (affinity_broken) 77 + pr_warn_ratelimited("IRQ%u no longer affine to CPU%u\n", 78 + irq, smp_processor_id()); 79 + } 80 + 81 + local_irq_restore(flags); 82 + }
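The core decision in `migrate_one_irq()` can be stated as a pure function over cpumasks: leave per-CPU irqs and irqs not targeting the dying CPU alone; keep the affinity mask if another online CPU remains in it; otherwise force the irq onto any online CPU and report the affinity as broken. A sketch with masks shrunk to a 32-bit bitmap (the function name and signature are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Cpumasks reduced to a 32-bit bitmap for illustration. */
typedef uint32_t cpumask_t;

/* Decision logic of migrate_one_irq(): `cpu` is going offline and is
 * already absent from `online`. Returns true when the configured
 * affinity had to be broken (irq forced onto the online mask). */
static bool pick_new_affinity(cpumask_t affinity, cpumask_t online,
                              unsigned int cpu, bool per_cpu,
                              cpumask_t *out)
{
    /* Per-CPU irqs, and irqs not targeting this CPU, are untouched. */
    if (per_cpu || !(affinity & (1u << cpu))) {
        *out = affinity;
        return false;
    }

    /* Another online CPU is still allowed: keep the affinity mask. */
    if (affinity & online) {
        *out = affinity;
        return false;
    }

    /* Nothing online left in the mask: fall back to all online CPUs. */
    *out = online;
    return true;
}
```

Note the asymmetry the code preserves: a still-valid mask is re-applied as-is (letting the chip pick among the remaining CPUs), while only the hopeless case widens the mask and triggers the "no longer affine" warning.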
+3 -4
kernel/irq/handle.c
··· 132 132 wake_up_process(action->thread); 133 133 } 134 134 135 - irqreturn_t 136 - handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action) 135 + irqreturn_t handle_irq_event_percpu(struct irq_desc *desc) 137 136 { 138 137 irqreturn_t retval = IRQ_NONE; 139 138 unsigned int flags = 0, irq = desc->irq_data.irq; 139 + struct irqaction *action = desc->action; 140 140 141 141 do { 142 142 irqreturn_t res; ··· 184 184 185 185 irqreturn_t handle_irq_event(struct irq_desc *desc) 186 186 { 187 - struct irqaction *action = desc->action; 188 187 irqreturn_t ret; 189 188 190 189 desc->istate &= ~IRQS_PENDING; 191 190 irqd_set(&desc->irq_data, IRQD_IRQ_INPROGRESS); 192 191 raw_spin_unlock(&desc->lock); 193 192 194 - ret = handle_irq_event_percpu(desc, action); 193 + ret = handle_irq_event_percpu(desc); 195 194 196 195 raw_spin_lock(&desc->lock); 197 196 irqd_clear(&desc->irq_data, IRQD_IRQ_INPROGRESS);
+3 -1
kernel/irq/internals.h
···
  
  extern bool noirqdebug;
  
+ extern struct irqaction chained_action;
+ 
  /*
   * Bits used by threaded handlers:
   * IRQTF_RUNTHREAD - signals that the interrupt handler thread should run
···
  
  extern void init_kstat_irqs(struct irq_desc *desc, int node, int nr);
  
- irqreturn_t handle_irq_event_percpu(struct irq_desc *desc, struct irqaction *action);
+ irqreturn_t handle_irq_event_percpu(struct irq_desc *desc);
  irqreturn_t handle_irq_event(struct irq_desc *desc);
  
  /* Resending of interrupts : */
+140 -37
kernel/irq/irqdomain.c
···
  					   irq_hw_number_t hwirq, int node);
  static void irq_domain_check_hierarchy(struct irq_domain *domain);
  
+ struct irqchip_fwid {
+ 	struct fwnode_handle	fwnode;
+ 	char			*name;
+ 	void			*data;
+ };
+ 
+ /**
+  * irq_domain_alloc_fwnode - Allocate a fwnode_handle suitable for
+  *                           identifying an irq domain
+  * @data: optional user-provided data
+  *
+  * Allocate a struct irqchip_fwid, and return a pointer to the embedded
+  * fwnode_handle (or NULL on failure).
+  */
+ struct fwnode_handle *irq_domain_alloc_fwnode(void *data)
+ {
+ 	struct irqchip_fwid *fwid;
+ 	char *name;
+ 
+ 	fwid = kzalloc(sizeof(*fwid), GFP_KERNEL);
+ 	name = kasprintf(GFP_KERNEL, "irqchip@%p", data);
+ 
+ 	if (!fwid || !name) {
+ 		kfree(fwid);
+ 		kfree(name);
+ 		return NULL;
+ 	}
+ 
+ 	fwid->name = name;
+ 	fwid->data = data;
+ 	fwid->fwnode.type = FWNODE_IRQCHIP;
+ 	return &fwid->fwnode;
+ }
+ 
+ /**
+  * irq_domain_free_fwnode - Free a non-OF-backed fwnode_handle
+  *
+  * Free a fwnode_handle allocated with irq_domain_alloc_fwnode.
+  */
+ void irq_domain_free_fwnode(struct fwnode_handle *fwnode)
+ {
+ 	struct irqchip_fwid *fwid;
+ 
+ 	if (WARN_ON(fwnode->type != FWNODE_IRQCHIP))
+ 		return;
+ 
+ 	fwid = container_of(fwnode, struct irqchip_fwid, fwnode);
+ 	kfree(fwid->name);
+ 	kfree(fwid);
+ }
+ 
  /**
   * __irq_domain_add() - Allocate a new irq_domain data structure
   * @of_node: optional device-tree node of the interrupt controller
···
   * Allocates and initializes an irq_domain structure.
   * Returns pointer to IRQ domain, or NULL on failure.
   */
- struct irq_domain *__irq_domain_add(struct device_node *of_node, int size,
+ struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
  				    irq_hw_number_t hwirq_max, int direct_max,
  				    const struct irq_domain_ops *ops,
  				    void *host_data)
  {
  	struct irq_domain *domain;
+ 	struct device_node *of_node;
+ 
+ 	of_node = to_of_node(fwnode);
  
  	domain = kzalloc_node(sizeof(*domain) + (sizeof(unsigned int) * size),
  			      GFP_KERNEL, of_node_to_nid(of_node));
  	if (WARN_ON(!domain))
  		return NULL;
  
+ 	of_node_get(of_node);
+ 
  	/* Fill structure */
  	INIT_RADIX_TREE(&domain->revmap_tree, GFP_KERNEL);
  	domain->ops = ops;
  	domain->host_data = host_data;
- 	domain->of_node = of_node_get(of_node);
+ 	domain->fwnode = fwnode;
  	domain->hwirq_max = hwirq_max;
  	domain->revmap_size = size;
  	domain->revmap_direct_max_irq = direct_max;
···
  
  	pr_debug("Removed domain %s\n", domain->name);
  
- 	of_node_put(domain->of_node);
+ 	of_node_put(irq_domain_get_of_node(domain));
  	kfree(domain);
  }
  EXPORT_SYMBOL_GPL(irq_domain_remove);
···
  {
  	struct irq_domain *domain;
  
- 	domain = __irq_domain_add(of_node, size, size, 0, ops, host_data);
+ 	domain = __irq_domain_add(of_node_to_fwnode(of_node), size, size, 0, ops, host_data);
  	if (!domain)
  		return NULL;
  
···
  {
  	struct irq_domain *domain;
  
- 	domain = __irq_domain_add(of_node, first_hwirq + size,
+ 	domain = __irq_domain_add(of_node_to_fwnode(of_node), first_hwirq + size,
  				  first_hwirq + size, 0, ops, host_data);
  	if (domain)
  		irq_domain_associate_many(domain, first_irq, first_hwirq, size);
···
  EXPORT_SYMBOL_GPL(irq_domain_add_legacy);
  
  /**
-  * irq_find_matching_host() - Locates a domain for a given device node
-  * @node: device-tree node of the interrupt controller
+  * irq_find_matching_fwnode() - Locates a domain for a given fwnode
+  * @fwnode: FW descriptor of the interrupt controller
   * @bus_token: domain-specific data
   */
- struct irq_domain *irq_find_matching_host(struct device_node *node,
- 					  enum irq_domain_bus_token bus_token)
+ struct irq_domain *irq_find_matching_fwnode(struct fwnode_handle *fwnode,
+ 					    enum irq_domain_bus_token bus_token)
  {
  	struct irq_domain *h, *found = NULL;
  	int rc;
···
  	mutex_lock(&irq_domain_mutex);
  	list_for_each_entry(h, &irq_domain_list, link) {
  		if (h->ops->match)
- 			rc = h->ops->match(h, node, bus_token);
+ 			rc = h->ops->match(h, to_of_node(fwnode), bus_token);
  		else
- 			rc = ((h->of_node != NULL) && (h->of_node == node) &&
+ 			rc = ((fwnode != NULL) && (h->fwnode == fwnode) &&
  			      ((bus_token == DOMAIN_BUS_ANY) ||
  			       (h->bus_token == bus_token)));
  
···
  	mutex_unlock(&irq_domain_mutex);
  	return found;
  }
- EXPORT_SYMBOL_GPL(irq_find_matching_host);
+ EXPORT_SYMBOL_GPL(irq_find_matching_fwnode);
  
  /**
   * irq_set_default_host() - Set a "default" irq domain
···
  void irq_domain_associate_many(struct irq_domain *domain, unsigned int irq_base,
  			       irq_hw_number_t hwirq_base, int count)
  {
+ 	struct device_node *of_node;
  	int i;
  
+ 	of_node = irq_domain_get_of_node(domain);
  	pr_debug("%s(%s, irqbase=%i, hwbase=%i, count=%i)\n", __func__,
- 		of_node_full_name(domain->of_node), irq_base, (int)hwirq_base, count);
+ 		of_node_full_name(of_node), irq_base, (int)hwirq_base, count);
  
  	for (i = 0; i < count; i++) {
  		irq_domain_associate(domain, irq_base + i, hwirq_base + i);
···
   */
  unsigned int irq_create_direct_mapping(struct irq_domain *domain)
  {
+ 	struct device_node *of_node;
  	unsigned int virq;
  
  	if (domain == NULL)
  		domain = irq_default_domain;
  
- 	virq = irq_alloc_desc_from(1, of_node_to_nid(domain->of_node));
+ 	of_node = irq_domain_get_of_node(domain);
+ 	virq = irq_alloc_desc_from(1, of_node_to_nid(of_node));
  	if (!virq) {
  		pr_debug("create_direct virq allocation failed\n");
  		return 0;
···
  unsigned int irq_create_mapping(struct irq_domain *domain,
  				irq_hw_number_t hwirq)
  {
+ 	struct device_node *of_node;
  	int virq;
  
  	pr_debug("irq_create_mapping(0x%p, 0x%lx)\n", domain, hwirq);
···
  	}
  	pr_debug("-> using domain @%p\n", domain);
  
+ 	of_node = irq_domain_get_of_node(domain);
+ 
  	/* Check if mapping already exists */
  	virq = irq_find_mapping(domain, hwirq);
  	if (virq) {
···
  	}
  
  	/* Allocate a virtual interrupt number */
- 	virq = irq_domain_alloc_descs(-1, 1, hwirq,
- 				      of_node_to_nid(domain->of_node));
+ 	virq = irq_domain_alloc_descs(-1, 1, hwirq, of_node_to_nid(of_node));
  	if (virq <= 0) {
  		pr_debug("-> virq allocation failed\n");
  		return 0;
···
  	}
  
  	pr_debug("irq %lu on domain %s mapped to virtual irq %u\n",
- 		hwirq, of_node_full_name(domain->of_node), virq);
+ 		hwirq, of_node_full_name(of_node), virq);
  
  	return virq;
  }
···
  int irq_create_strict_mappings(struct irq_domain *domain, unsigned int irq_base,
  			       irq_hw_number_t hwirq_base, int count)
  {
+ 	struct device_node *of_node;
  	int ret;
  
+ 	of_node = irq_domain_get_of_node(domain);
  	ret = irq_alloc_descs(irq_base, irq_base, count,
- 			      of_node_to_nid(domain->of_node));
+ 			      of_node_to_nid(of_node));
  	if (unlikely(ret < 0))
  		return ret;
  
···
  }
  EXPORT_SYMBOL_GPL(irq_create_strict_mappings);
  
- unsigned int irq_create_of_mapping(struct of_phandle_args *irq_data)
+ static int irq_domain_translate(struct irq_domain *d,
+ 				struct irq_fwspec *fwspec,
+ 				irq_hw_number_t *hwirq, unsigned int *type)
+ {
+ #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
+ 	if (d->ops->translate)
+ 		return d->ops->translate(d, fwspec, hwirq, type);
+ #endif
+ 	if (d->ops->xlate)
+ 		return d->ops->xlate(d, to_of_node(fwspec->fwnode),
+ 				     fwspec->param, fwspec->param_count,
+ 				     hwirq, type);
+ 
+ 	/* If domain has no translation, then we assume interrupt line */
+ 	*hwirq = fwspec->param[0];
+ 	return 0;
+ }
+ 
+ static void of_phandle_args_to_fwspec(struct of_phandle_args *irq_data,
+ 				      struct irq_fwspec *fwspec)
+ {
+ 	int i;
+ 
+ 	fwspec->fwnode = irq_data->np ? &irq_data->np->fwnode : NULL;
+ 	fwspec->param_count = irq_data->args_count;
+ 
+ 	for (i = 0; i < irq_data->args_count; i++)
+ 		fwspec->param[i] = irq_data->args[i];
+ }
+ 
+ unsigned int irq_create_fwspec_mapping(struct irq_fwspec *fwspec)
  {
  	struct irq_domain *domain;
  	irq_hw_number_t hwirq;
  	unsigned int type = IRQ_TYPE_NONE;
  	int virq;
  
- 	domain = irq_data->np ? irq_find_host(irq_data->np) : irq_default_domain;
+ 	if (fwspec->fwnode)
+ 		domain = irq_find_matching_fwnode(fwspec->fwnode, DOMAIN_BUS_ANY);
+ 	else
+ 		domain = irq_default_domain;
+ 
  	if (!domain) {
  		pr_warn("no irq domain found for %s !\n",
- 			of_node_full_name(irq_data->np));
+ 			of_node_full_name(to_of_node(fwspec->fwnode)));
  		return 0;
  	}
  
- 	/* If domain has no translation, then we assume interrupt line */
- 	if (domain->ops->xlate == NULL)
- 		hwirq = irq_data->args[0];
- 	else {
- 		if (domain->ops->xlate(domain, irq_data->np, irq_data->args,
- 					irq_data->args_count, &hwirq, &type))
- 			return 0;
- 	}
+ 	if (irq_domain_translate(domain, fwspec, &hwirq, &type))
+ 		return 0;
  
  	if (irq_domain_is_hierarchy(domain)) {
  		/*
···
  		if (virq)
  			return virq;
  
- 		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, irq_data);
+ 		virq = irq_domain_alloc_irqs(domain, 1, NUMA_NO_NODE, fwspec);
  		if (virq <= 0)
  			return 0;
  	} else {
···
  	    type != irq_get_trigger_type(virq))
  		irq_set_irq_type(virq, type);
  	return virq;
+ }
+ EXPORT_SYMBOL_GPL(irq_create_fwspec_mapping);
+ 
+ unsigned int irq_create_of_mapping(struct of_phandle_args *irq_data)
+ {
+ 	struct irq_fwspec fwspec;
+ 
+ 	of_phandle_args_to_fwspec(irq_data, &fwspec);
+ 	return irq_create_fwspec_mapping(&fwspec);
  }
  EXPORT_SYMBOL_GPL(irq_create_of_mapping);
  
···
  		   "name", "mapped", "linear-max", "direct-max", "devtree-node");
  	mutex_lock(&irq_domain_mutex);
  	list_for_each_entry(domain, &irq_domain_list, link) {
+ 		struct device_node *of_node;
  		int count = 0;
+ 		of_node = irq_domain_get_of_node(domain);
  		radix_tree_for_each_slot(slot, &domain->revmap_tree, &iter, 0)
  			count++;
  		seq_printf(m, "%c%-16s %6u %10u %10u %s\n",
  			   domain == irq_default_domain ? '*' : ' ', domain->name,
  			   domain->revmap_size + count, domain->revmap_size,
  			   domain->revmap_direct_max_irq,
- 			   domain->of_node ? of_node_full_name(domain->of_node) : "");
+ 			   of_node ? of_node_full_name(of_node) : "");
  	}
  	mutex_unlock(&irq_domain_mutex);
  
···
  
  #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY
  /**
-  * irq_domain_add_hierarchy - Add a irqdomain into the hierarchy
+  * irq_domain_create_hierarchy - Add an irqdomain into the hierarchy
   * @parent:	Parent irq domain to associate with the new domain
   * @flags:	Irq domain flags associated to the domain
   * @size:	Size of the domain. See below
-  * @node:	Optional device-tree node of the interrupt controller
+  * @fwnode:	Optional fwnode of the interrupt controller
   * @ops:	Pointer to the interrupt domain callbacks
   * @host_data:	Controller private data pointer
   *
···
   * domain flags are set.
   * Returns pointer to IRQ domain, or NULL on failure.
   */
- struct irq_domain *irq_domain_add_hierarchy(struct irq_domain *parent,
+ struct irq_domain *irq_domain_create_hierarchy(struct irq_domain *parent,
  					    unsigned int flags,
  					    unsigned int size,
- 					    struct device_node *node,
+ 					    struct fwnode_handle *fwnode,
  					    const struct irq_domain_ops *ops,
  					    void *host_data)
  {
  	struct irq_domain *domain;
  
  	if (size)
- 		domain = irq_domain_add_linear(node, size, ops, host_data);
+ 		domain = irq_domain_create_linear(fwnode, size, ops, host_data);
  	else
- 		domain = irq_domain_add_tree(node, ops, host_data);
+ 		domain = irq_domain_create_tree(fwnode, ops, host_data);
  	if (domain) {
  		domain->parent = parent;
  		domain->flags |= flags;
+149 -72
kernel/irq/manage.c
···
  }
  EXPORT_SYMBOL_GPL(irq_set_affinity_hint);
  
- /**
-  * irq_set_vcpu_affinity - Set vcpu affinity for the interrupt
-  * @irq: interrupt number to set affinity
-  * @vcpu_info: vCPU specific data
-  *
-  * This function uses the vCPU specific data to set the vCPU
-  * affinity for an irq. The vCPU specific data is passed from
-  * outside, such as KVM. One example code path is as below:
-  * KVM -> IOMMU -> irq_set_vcpu_affinity().
-  */
- int irq_set_vcpu_affinity(unsigned int irq, void *vcpu_info)
- {
- 	unsigned long flags;
- 	struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0);
- 	struct irq_data *data;
- 	struct irq_chip *chip;
- 	int ret = -ENOSYS;
- 
- 	if (!desc)
- 		return -EINVAL;
- 
- 	data = irq_desc_get_irq_data(desc);
- 	chip = irq_data_get_irq_chip(data);
- 	if (chip && chip->irq_set_vcpu_affinity)
- 		ret = chip->irq_set_vcpu_affinity(data, vcpu_info);
- 	irq_put_desc_unlock(desc, flags);
- 
- 	return ret;
- }
- EXPORT_SYMBOL_GPL(irq_set_vcpu_affinity);
- 
  static void irq_affinity_notify(struct work_struct *work)
  {
  	struct irq_affinity_notify *notify =
···
  	return 0;
  }
  #endif
+ 
+ /**
+  * irq_set_vcpu_affinity - Set vcpu affinity for the interrupt
+  * @irq: interrupt number to set affinity
+  * @vcpu_info: vCPU specific data
+  *
+  * This function uses the vCPU specific data to set the vCPU
+  * affinity for an irq. The vCPU specific data is passed from
+  * outside, such as KVM. One example code path is as below:
+  * KVM -> IOMMU -> irq_set_vcpu_affinity().
+  */
+ int irq_set_vcpu_affinity(unsigned int irq, void *vcpu_info)
+ {
+ 	unsigned long flags;
+ 	struct irq_desc *desc = irq_get_desc_lock(irq, &flags, 0);
+ 	struct irq_data *data;
+ 	struct irq_chip *chip;
+ 	int ret = -ENOSYS;
+ 
+ 	if (!desc)
+ 		return -EINVAL;
+ 
+ 	data = irq_desc_get_irq_data(desc);
+ 	chip = irq_data_get_irq_chip(data);
+ 	if (chip && chip->irq_set_vcpu_affinity)
+ 		ret = chip->irq_set_vcpu_affinity(data, vcpu_info);
+ 	irq_put_desc_unlock(desc, flags);
+ 
+ 	return ret;
+ }
+ EXPORT_SYMBOL_GPL(irq_set_vcpu_affinity);
  
  void __disable_irq(struct irq_desc *desc)
  {
···
  	return IRQ_NONE;
  }
  
+ static irqreturn_t irq_forced_secondary_handler(int irq, void *dev_id)
+ {
+ 	WARN(1, "Secondary action handler called for irq %d\n", irq);
+ 	return IRQ_NONE;
+ }
+ 
  static int irq_wait_for_interrupt(struct irqaction *action)
  {
  	set_current_state(TASK_INTERRUPTIBLE);
···
  static void irq_finalize_oneshot(struct irq_desc *desc,
  				 struct irqaction *action)
  {
- 	if (!(desc->istate & IRQS_ONESHOT))
+ 	if (!(desc->istate & IRQS_ONESHOT) ||
+ 	    action->handler == irq_forced_secondary_handler)
  		return;
  again:
  	chip_bus_lock(desc);
···
  	irq_finalize_oneshot(desc, action);
  }
  
+ static void irq_wake_secondary(struct irq_desc *desc, struct irqaction *action)
+ {
+ 	struct irqaction *secondary = action->secondary;
+ 
+ 	if (WARN_ON_ONCE(!secondary))
+ 		return;
+ 
+ 	raw_spin_lock_irq(&desc->lock);
+ 	__irq_wake_thread(desc, secondary);
+ 	raw_spin_unlock_irq(&desc->lock);
+ }
+ 
  /*
   * Interrupt handler thread
   */
···
  		action_ret = handler_fn(desc, action);
  		if (action_ret == IRQ_HANDLED)
  			atomic_inc(&desc->threads_handled);
+ 		if (action_ret == IRQ_WAKE_THREAD)
+ 			irq_wake_secondary(desc, action);
  
  		wake_threads_waitq(desc);
  	}
···
  }
  EXPORT_SYMBOL_GPL(irq_wake_thread);
  
- static void irq_setup_forced_threading(struct irqaction *new)
+ static int irq_setup_forced_threading(struct irqaction *new)
  {
  	if (!force_irqthreads)
- 		return;
+ 		return 0;
  	if (new->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT))
- 		return;
+ 		return 0;
  
  	new->flags |= IRQF_ONESHOT;
  
- 	if (!new->thread_fn) {
- 		set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
- 		new->thread_fn = new->handler;
- 		new->handler = irq_default_primary_handler;
+ 	/*
+ 	 * Handle the case where we have a real primary handler and a
+ 	 * thread handler. We force thread them as well by creating a
+ 	 * secondary action.
+ 	 */
+ 	if (new->handler != irq_default_primary_handler && new->thread_fn) {
+ 		/* Allocate the secondary action */
+ 		new->secondary = kzalloc(sizeof(struct irqaction), GFP_KERNEL);
+ 		if (!new->secondary)
+ 			return -ENOMEM;
+ 		new->secondary->handler = irq_forced_secondary_handler;
+ 		new->secondary->thread_fn = new->thread_fn;
+ 		new->secondary->dev_id = new->dev_id;
+ 		new->secondary->irq = new->irq;
+ 		new->secondary->name = new->name;
  	}
+ 	/* Deal with the primary handler */
+ 	set_bit(IRQTF_FORCED_THREAD, &new->thread_flags);
+ 	new->thread_fn = new->handler;
+ 	new->handler = irq_default_primary_handler;
+ 	return 0;
  }
  
  static int irq_request_resources(struct irq_desc *desc)
···
  
  	if (c->irq_release_resources)
  		c->irq_release_resources(d);
+ }
+ 
+ static int
+ setup_irq_thread(struct irqaction *new, unsigned int irq, bool secondary)
+ {
+ 	struct task_struct *t;
+ 	struct sched_param param = {
+ 		.sched_priority = MAX_USER_RT_PRIO/2,
+ 	};
+ 
+ 	if (!secondary) {
+ 		t = kthread_create(irq_thread, new, "irq/%d-%s", irq,
+ 				   new->name);
+ 	} else {
+ 		t = kthread_create(irq_thread, new, "irq/%d-s-%s", irq,
+ 				   new->name);
+ 		param.sched_priority -= 1;
+ 	}
+ 
+ 	if (IS_ERR(t))
+ 		return PTR_ERR(t);
+ 
+ 	sched_setscheduler_nocheck(t, SCHED_FIFO, &param);
+ 
+ 	/*
+ 	 * We keep the reference to the task struct even if
+ 	 * the thread dies to avoid that the interrupt code
+ 	 * references an already freed task_struct.
+ 	 */
+ 	get_task_struct(t);
+ 	new->thread = t;
+ 	/*
+ 	 * Tell the thread to set its affinity. This is
+ 	 * important for shared interrupt handlers as we do
+ 	 * not invoke setup_affinity() for the secondary
+ 	 * handlers as everything is already set up. Even for
+ 	 * interrupts marked with IRQF_NO_BALANCE this is
+ 	 * correct as we want the thread to move to the cpu(s)
+ 	 * on which the requesting code placed the interrupt.
+ 	 */
+ 	set_bit(IRQTF_AFFINITY, &new->thread_flags);
+ 	return 0;
  }
  
  /*
···
  	if (!try_module_get(desc->owner))
  		return -ENODEV;
  
+ 	new->irq = irq;
+ 
  	/*
  	 * Check whether the interrupt nests into another interrupt
  	 * thread.
···
  		 */
  		new->handler = irq_nested_primary_handler;
  	} else {
- 		if (irq_settings_can_thread(desc))
- 			irq_setup_forced_threading(new);
+ 		if (irq_settings_can_thread(desc)) {
+ 			ret = irq_setup_forced_threading(new);
+ 			if (ret)
+ 				goto out_mput;
+ 		}
  	}
  
  	/*
···
  	 * thread.
  	 */
  	if (new->thread_fn && !nested) {
- 		struct task_struct *t;
- 		static const struct sched_param param = {
- 			.sched_priority = MAX_USER_RT_PRIO/2,
- 		};
- 
- 		t = kthread_create(irq_thread, new, "irq/%d-%s", irq,
- 				   new->name);
- 		if (IS_ERR(t)) {
- 			ret = PTR_ERR(t);
+ 		ret = setup_irq_thread(new, irq, false);
+ 		if (ret)
  			goto out_mput;
+ 		if (new->secondary) {
+ 			ret = setup_irq_thread(new->secondary, irq, true);
+ 			if (ret)
+ 				goto out_thread;
  		}
- 
- 		sched_setscheduler_nocheck(t, SCHED_FIFO, &param);
- 
- 		/*
- 		 * We keep the reference to the task struct even if
- 		 * the thread dies to avoid that the interrupt code
- 		 * references an already freed task_struct.
- 		 */
- 		get_task_struct(t);
- 		new->thread = t;
- 		/*
- 		 * Tell the thread to set its affinity. This is
- 		 * important for shared interrupt handlers as we do
- 		 * not invoke setup_affinity() for the secondary
- 		 * handlers as everything is already set up. Even for
- 		 * interrupts marked with IRQF_NO_BALANCE this is
- 		 * correct as we want the thread to move to the cpu(s)
- 		 * on which the requesting code placed the interrupt.
- 		 */
- 		set_bit(IRQTF_AFFINITY, &new->thread_flags);
  	}
  
  	if (!alloc_cpumask_var(&mask, GFP_KERNEL)) {
···
  				irq, nmsk, omsk);
  	}
  
- 	new->irq = irq;
  	*old_ptr = new;
  
  	irq_pm_install_action(desc, new);
···
  	 */
  	if (new->thread)
  		wake_up_process(new->thread);
+ 	if (new->secondary)
+ 		wake_up_process(new->secondary->thread);
  
  	register_irq_proc(irq, desc);
  	new->dir = NULL;
···
  		struct task_struct *t = new->thread;
  
  		new->thread = NULL;
+ 		kthread_stop(t);
+ 		put_task_struct(t);
+ 	}
+ 	if (new->secondary && new->secondary->thread) {
+ 		struct task_struct *t = new->secondary->thread;
+ 
+ 		new->secondary->thread = NULL;
  		kthread_stop(t);
  		put_task_struct(t);
  	}
···
  
  	/* If this was the last handler, shut down the IRQ line: */
  	if (!desc->action) {
+ 		irq_settings_clr_disable_unlazy(desc);
  		irq_shutdown(desc);
  		irq_release_resources(desc);
  	}
···
  	if (action->thread) {
  		kthread_stop(action->thread);
  		put_task_struct(action->thread);
+ 		if (action->secondary && action->secondary->thread) {
+ 			kthread_stop(action->secondary->thread);
+ 			put_task_struct(action->secondary->thread);
+ 		}
  	}
  
  	module_put(desc->owner);
+ 	kfree(action->secondary);
  	return action;
  }
···
  	retval = __setup_irq(irq, desc, action);
  	chip_bus_sync_unlock(desc);
  
- 	if (retval)
+ 	if (retval) {
+ 		kfree(action->secondary);
  		kfree(action);
+ 	}
  
  #ifdef CONFIG_DEBUG_SHIRQ_FIXME
  	if (!retval && (irqflags & IRQF_SHARED)) {
+4 -4
kernel/irq/msi.c
···
  
  /**
   * msi_create_irq_domain - Create a MSI interrupt domain
-  * @of_node:	Optional device-tree node of the interrupt controller
+  * @fwnode:	Optional fwnode of the interrupt controller
   * @info:	MSI domain info
   * @parent:	Parent irq domain
   */
- struct irq_domain *msi_create_irq_domain(struct device_node *node,
+ struct irq_domain *msi_create_irq_domain(struct fwnode_handle *fwnode,
  					 struct msi_domain_info *info,
  					 struct irq_domain *parent)
  {
···
  	if (info->flags & MSI_FLAG_USE_DEF_CHIP_OPS)
  		msi_domain_update_chip_ops(info);
  
- 	return irq_domain_add_hierarchy(parent, 0, 0, node, &msi_domain_ops,
- 					info);
+ 	return irq_domain_create_hierarchy(parent, 0, 0, fwnode,
+ 					   &msi_domain_ops, info);
  }
  
  /**
+1 -1
kernel/irq/proc.c
···
  	for_each_online_cpu(j)
  		any_count |= kstat_irqs_cpu(i, j);
  	action = desc->action;
- 	if (!action && !any_count)
+ 	if ((!action || action == &chained_action) && !any_count)
  		goto out;
  
  	seq_printf(p, "%*d: ", prec, i);
+12
kernel/irq/settings.h
···
  	_IRQ_NESTED_THREAD	= IRQ_NESTED_THREAD,
  	_IRQ_PER_CPU_DEVID	= IRQ_PER_CPU_DEVID,
  	_IRQ_IS_POLLED		= IRQ_IS_POLLED,
+ 	_IRQ_DISABLE_UNLAZY	= IRQ_DISABLE_UNLAZY,
  	_IRQF_MODIFY_MASK	= IRQF_MODIFY_MASK,
  };
  
···
  #define IRQ_NESTED_THREAD	GOT_YOU_MORON
  #define IRQ_PER_CPU_DEVID	GOT_YOU_MORON
  #define IRQ_IS_POLLED		GOT_YOU_MORON
+ #define IRQ_DISABLE_UNLAZY	GOT_YOU_MORON
  #undef IRQF_MODIFY_MASK
  #define IRQF_MODIFY_MASK	GOT_YOU_MORON
  
···
  static inline bool irq_settings_is_polled(struct irq_desc *desc)
  {
  	return desc->status_use_accessors & _IRQ_IS_POLLED;
+ }
+ 
+ static inline bool irq_settings_disable_unlazy(struct irq_desc *desc)
+ {
+ 	return desc->status_use_accessors & _IRQ_DISABLE_UNLAZY;
+ }
+ 
+ static inline void irq_settings_clr_disable_unlazy(struct irq_desc *desc)
+ {
+ 	desc->status_use_accessors &= ~_IRQ_DISABLE_UNLAZY;
+ }
+2 -2
virt/kvm/arm/vgic.c
···
  	case KVM_DEV_TYPE_ARM_VGIC_V2:
  		vgic_v2_init_emulation(kvm);
  		break;
- #ifdef CONFIG_ARM_GIC_V3
+ #ifdef CONFIG_KVM_ARM_VGIC_V3
  	case KVM_DEV_TYPE_ARM_VGIC_V3:
  		vgic_v3_init_emulation(kvm);
  		break;
···
  		block_size = KVM_VGIC_V2_CPU_SIZE;
  		alignment = SZ_4K;
  		break;
- #ifdef CONFIG_ARM_GIC_V3
+ #ifdef CONFIG_KVM_ARM_VGIC_V3
  	case KVM_VGIC_V3_ADDR_TYPE_DIST:
  		type_needed = KVM_DEV_TYPE_ARM_VGIC_V3;
  		addr_ptr = &vgic->vgic_dist_base;