Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
"The interrupt department provides:

Core updates:

- Better spreading to NUMA nodes in the affinity management

- Support for more than one set of interrupts to spread out to allow
separate queues for separate functionality of a single device.

- Decouple the non-queue interrupts from being managed. Those are
usually general interrupts for error handling etc., and those should
never be shut down. This is also a preparation to utilize the
spreading mechanism for initial spreading of non-managed interrupts
later.

- Make the single CPU target selection in the matrix allocator more
balanced so interrupts won't accumulate on single CPUs in certain
situations.

- A large spell checking patch so we don't end up fixing single typos
over and over.

Driver updates:

- A bunch of new irqchip drivers (RDA8810PL, Madera, imx-irqsteer)

- Updates for the 8MQ, F1C100s platform drivers

- A number of SPDX cleanups

- A workaround for a very broken GICv3 implementation on msm8996
which sports a botched register set.

- A platform-msi fix to prevent memory leakage

- Various cleanups"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (37 commits)
genirq/affinity: Add is_managed to struct irq_affinity_desc
genirq/core: Introduce struct irq_affinity_desc
genirq/affinity: Remove excess indentation
irqchip/stm32: protect configuration registers with hwspinlock
dt-bindings: interrupt-controller: stm32: Document hwlock properties
irqchip: Add driver for imx-irqsteer controller
dt-bindings/irq: Add binding for Freescale IRQSTEER multiplexer
irqchip: Add driver for Cirrus Logic Madera codecs
genirq: Fix various typos in comments
irqchip/irq-imx-gpcv2: Add IRQCHIP_DECLARE for i.MX8MQ compatible
irqchip/irq-rda-intc: Fix return value check in rda8810_intc_init()
irqchip/irq-imx-gpcv2: Silence "fall through" warning
irqchip/gic-v3: Add quirk for msm8996 broken registers
irqchip/gic: Add support to device tree based quirks
dt-bindings/gic-v3: Add msm8996 compatible string
irqchip/sun4i: Add support for Allwinner ARMv5 F1C100s
irqchip/sun4i: Move IC specific register offsets to struct
irqchip/sun4i: Add a struct to hold global variables
dt-bindings: interrupt-controller: Add suniv interrupt-controller
irqchip: Add RDA8810PL interrupt driver
...

+1421 -238
+3 -1
Documentation/devicetree/bindings/interrupt-controller/allwinner,sun4i-ic.txt
···
 
 Required properties:
 
-- compatible : should be "allwinner,sun4i-a10-ic"
+- compatible : should be one of the following:
+  "allwinner,sun4i-a10-ic"
+  "allwinner,suniv-f1c100s-ic"
 - reg : Specifies base physical address and size of the registers.
 - interrupt-controller : Identifies the node as an interrupt controller
 - #interrupt-cells : Specifies the number of cells needed to encode an
+3 -1
Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
···
 
 Main node required properties:
 
-- compatible : should at least contain "arm,gic-v3".
+- compatible : should at least contain "arm,gic-v3" or either
+  "qcom,msm8996-gic-v3", "arm,gic-v3" for msm8996 SoCs
+  to address SoC specific bugs/quirks
 - interrupt-controller : Identifies the node as an interrupt controller
 - #interrupt-cells : Specifies the number of cells needed to encode an
   interrupt source. Must be a single cell with a value of at least 3.
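Per the binding text above, the SoC-specific string is listed in front of the generic "arm,gic-v3" so older kernels still match the generic entry while patched kernels can key the quirk off the first string. A minimal DTS fragment sketching that usage (node name, unit address, register sizes, and interrupt specifier here are illustrative, not taken from the binding):

```dts
intc: interrupt-controller@9bc0000 {
	compatible = "qcom,msm8996-gic-v3", "arm,gic-v3";
	#interrupt-cells = <3>;
	interrupt-controller;
	reg = <0x9bc0000 0x10000>,	/* GICD */
	      <0x9c00000 0x100000>;	/* GICR */
	interrupts = <GIC_PPI 9 IRQ_TYPE_LEVEL_HIGH>;
};
```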
+34
Documentation/devicetree/bindings/interrupt-controller/fsl,irqsteer.txt
···
Freescale IRQSTEER Interrupt multiplexer

Required properties:

- compatible: should be:
	- "fsl,imx8m-irqsteer"
	- "fsl,imx-irqsteer"
- reg: Physical base address and size of registers.
- interrupts: Should contain the parent interrupt line used to multiplex the
  input interrupts.
- clocks: Should contain one clock for entry in clock-names
  see Documentation/devicetree/bindings/clock/clock-bindings.txt
- clock-names:
	- "ipg": main logic clock
- interrupt-controller: Identifies the node as an interrupt controller.
- #interrupt-cells: Specifies the number of cells needed to encode an
  interrupt source. The value must be 1.
- fsl,channel: The output channel that all input IRQs should be steered into.
- fsl,irq-groups: Number of IRQ groups managed by this controller instance.
  Each group manages 64 input interrupts.

Example:

	interrupt-controller@32e2d000 {
		compatible = "fsl,imx8m-irqsteer", "fsl,imx-irqsteer";
		reg = <0x32e2d000 0x1000>;
		interrupts = <GIC_SPI 18 IRQ_TYPE_LEVEL_HIGH>;
		clocks = <&clk IMX8MQ_CLK_DISP_APB_ROOT>;
		clock-names = "ipg";
		fsl,channel = <0>;
		fsl,irq-groups = <1>;
		interrupt-controller;
		#interrupt-cells = <1>;
	};
+61
Documentation/devicetree/bindings/interrupt-controller/rda,8810pl-intc.txt
···
RDA Micro RDA8810PL Interrupt Controller

The interrupt controller in RDA8810PL SoC is a custom interrupt controller
which supports up to 32 interrupts.

Required properties:

- compatible: Should be "rda,8810pl-intc".
- reg: Specifies base physical address of the registers set.
- interrupt-controller: Identifies the node as an interrupt controller.
- #interrupt-cells: Specifies the number of cells needed to encode an
  interrupt source. The value shall be 2.

The interrupt sources are as follows:

ID	Name
------------
0:	PULSE_DUMMY
1:	I2C
2:	NAND_NFSC
3:	SDMMC1
4:	SDMMC2
5:	SDMMC3
6:	SPI1
7:	SPI2
8:	SPI3
9:	UART1
10:	UART2
11:	UART3
12:	GPIO1
13:	GPIO2
14:	GPIO3
15:	KEYPAD
16:	TIMER
17:	TIMEROS
18:	COMREG0
19:	COMREG1
20:	USB
21:	DMC
22:	DMA
23:	CAMERA
24:	GOUDA
25:	GPU
26:	VPU_JPG
27:	VPU_HOST
28:	VOC
29:	AUIFC0
30:	AUIFC1
31:	L2CC

Example:
	apb@20800000 {
		compatible = "simple-bus";
		...
		intc: interrupt-controller@0 {
			compatible = "rda,8810pl-intc";
			reg = <0x0 0x1000>;
			interrupt-controller;
			#interrupt-cells = <2>;
		};
	};
+4
Documentation/devicetree/bindings/interrupt-controller/st,stm32-exti.txt
···
   (only needed for exti controller with multiple exti under
   same parent interrupt: st,stm32-exti and st,stm32h7-exti)
 
+Optional properties:
+
+- hwlocks: reference to a phandle of a hardware spinlock provider node.
+
 Example:
 
 exti: interrupt-controller@40013c00 {
+2
MAINTAINERS
···
 S:	Supported
 F:	Documentation/devicetree/bindings/mfd/madera.txt
 F:	Documentation/devicetree/bindings/pinctrl/cirrus,madera-pinctrl.txt
+F:	include/linux/irqchip/irq-madera*
 F:	include/linux/mfd/madera/*
 F:	drivers/gpio/gpio-madera*
+F:	drivers/irqchip/irq-madera*
 F:	drivers/mfd/madera*
 F:	drivers/mfd/cs47l*
 F:	drivers/pinctrl/cirrus/*
+4 -2
drivers/base/platform-msi.c
···
 			      unsigned int nvec)
 {
 	struct platform_msi_priv_data *data = domain->host_data;
-	struct msi_desc *desc;
-	for_each_msi_entry(desc, data->dev) {
+	struct msi_desc *desc, *tmp;
+	for_each_msi_entry_safe(desc, tmp, data->dev) {
 		if (WARN_ON(!desc->irq || desc->nvec_used != 1))
 			return;
 		if (!(desc->irq >= virq && desc->irq < (virq + nvec)))
 			continue;
 
 		irq_domain_free_irqs_common(domain, desc->irq, 1);
+		list_del(&desc->list);
+		free_msi_entry(desc);
 	}
 }
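The platform-msi leak fix above switches to the `_safe` iterator because the loop body now unlinks and frees the descriptor it is visiting, so the plain iterator would dereference freed memory to find the next entry. A stand-alone C model of that pattern (the list type, `push`, and `free_range` are illustrative names, not kernel API):

```c
#include <assert.h>
#include <stdlib.h>

/* Minimal stand-in for a kernel-style linked list, to illustrate why a
 * loop that frees the current node must cache the successor up front. */
struct node {
	int irq;
	struct node *next;
};

/* safe iteration: 'tmp' holds the successor before 'pos' may be freed */
#define for_each_node_safe(pos, tmp, head) \
	for ((pos) = (head); (pos) && ((tmp) = (pos)->next, 1); (pos) = (tmp))

static struct node *push(struct node *head, int irq)
{
	struct node *n = malloc(sizeof(*n));

	n->irq = irq;
	n->next = head;
	return n;
}

/* free every node whose irq lies in [virq, virq + nvec), like the fix */
static struct node *free_range(struct node *head, int virq, int nvec)
{
	struct node *pos, *tmp, **link = &head;

	for_each_node_safe(pos, tmp, head) {
		if (pos->irq >= virq && pos->irq < virq + nvec) {
			*link = tmp;
			free(pos);	/* touching pos->next here would be use-after-free */
		} else {
			link = &pos->next;
		}
	}
	return head;
}
```

The same reasoning applies to any list walk that calls `list_del()` plus a free on the current entry.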
+15
drivers/irqchip/Kconfig
···
 	select GENERIC_IRQ_CHIP
 	select IRQ_DOMAIN
 
+config MADERA_IRQ
+	tristate
+
 config IRQ_MIPS_CPU
 	bool
 	select GENERIC_IRQ_CHIP
···
 	select IRQ_DOMAIN
 	help
 	  Support for the J-Core integrated AIC.
+
+config RDA_INTC
+	bool
+	select IRQ_DOMAIN
 
 config RENESAS_INTC_IRQPIN
 	bool
···
 	  Say yes here to enable C-SKY APB interrupt controller driver used
 	  by C-SKY single core SOC system. It use mmio map apb-bus to visit
 	  the controller's register.
+
+config IMX_IRQSTEER
+	bool "i.MX IRQSTEER support"
+	depends on ARCH_MXC || COMPILE_TEST
+	default ARCH_MXC
+	select IRQ_DOMAIN
+	help
+	  Support for the i.MX IRQSTEER interrupt multiplexer/remapper.
 
 endmenu
+3
drivers/irqchip/Makefile
···
 obj-$(CONFIG_IRQ_MIPS_CPU)		+= irq-mips-cpu.o
 obj-$(CONFIG_SIRF_IRQ)			+= irq-sirfsoc.o
 obj-$(CONFIG_JCORE_AIC)			+= irq-jcore-aic.o
+obj-$(CONFIG_RDA_INTC)			+= irq-rda-intc.o
 obj-$(CONFIG_RENESAS_INTC_IRQPIN)	+= irq-renesas-intc-irqpin.o
 obj-$(CONFIG_RENESAS_IRQC)		+= irq-renesas-irqc.o
 obj-$(CONFIG_VERSATILE_FPGA_IRQ)	+= irq-versatile-fpga.o
···
 obj-$(CONFIG_CSKY_MPINTC)		+= irq-csky-mpintc.o
 obj-$(CONFIG_CSKY_APB_INTC)		+= irq-csky-apb-intc.o
 obj-$(CONFIG_SIFIVE_PLIC)		+= irq-sifive-plic.o
+obj-$(CONFIG_IMX_IRQSTEER)		+= irq-imx-irqsteer.o
+obj-$(CONFIG_MADERA_IRQ)		+= irq-madera.o
+1 -10
drivers/irqchip/irq-bcm2835.c
···
+// SPDX-License-Identifier: GPL-2.0+
 /*
  * Copyright 2010 Broadcom
  * Copyright 2012 Simon Arlott, Chris Boot, Stephen Warren
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
  *
  * Quirk 1: Shortcut interrupts don't set the bank 1/2 register pending bits
  *
+1 -10
drivers/irqchip/irq-bcm2836.c
···
+// SPDX-License-Identifier: GPL-2.0+
 /*
  * Root interrupt controller for the BCM2836 (Raspberry Pi 2).
  *
  * Copyright 2015 Broadcom
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
  */
 
 #include <linux/cpu.h>
+1 -1
drivers/irqchip/irq-dw-apb-ictl.c
···
 	 * DW IP can be configured to allow 2-64 irqs. We can determine
 	 * the number of irqs supported by writing into enable register
 	 * and look for bits not set, as corresponding flip-flops will
-	 * have been removed by sythesis tool.
+	 * have been removed by synthesis tool.
 	 */
 
 	/* mask and enable all interrupts */
+12
drivers/irqchip/irq-gic-common.c
···
 	gic_kvm_info = info;
 }
 
+void gic_enable_of_quirks(const struct device_node *np,
+			  const struct gic_quirk *quirks, void *data)
+{
+	for (; quirks->desc; quirks++) {
+		if (!of_device_is_compatible(np, quirks->compatible))
+			continue;
+		if (quirks->init(data))
+			pr_info("GIC: enabling workaround for %s\n",
+				quirks->desc);
+	}
+}
+
 void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
 		void *data)
 {
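The new helper walks a quirk table terminated by a NULL `.desc` and fires `.init` for every entry whose compatible string matches the device-tree node. A stand-alone C model of that loop, with `strcmp` standing in for `of_device_is_compatible()` (all names here are illustrative, not the kernel's):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* table entry mirroring struct gic_quirk's shape */
struct quirk {
	const char *desc;
	const char *compatible;
	bool (*init)(void *data);
};

/* example init hook: record the quirk in a flags word */
static bool set_flag(void *data)
{
	*(unsigned long *)data |= 1UL << 0;
	return true;
}

static const struct quirk quirks[] = {
	{ .desc = "MSM8996 workaround", .compatible = "qcom,msm8996-gic-v3",
	  .init = set_flag },
	{ /* sentinel: NULL .desc terminates the walk */ }
};

/* enable every quirk whose compatible matches; returns number enabled */
static int enable_of_quirks(const char *compatible,
			    const struct quirk *q, void *data)
{
	int n = 0;

	for (; q->desc; q++) {
		if (strcmp(compatible, q->compatible))
			continue;
		if (q->init(data))
			n++;
	}
	return n;
}
```

The sentinel-terminated table lets new quirks be added by appending one initializer, which is exactly how the msm8996 entry plugs in below.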
+3
drivers/irqchip/irq-gic-common.h
···
 
 struct gic_quirk {
 	const char *desc;
+	const char *compatible;
 	bool (*init)(void *data);
 	u32 iidr;
 	u32 mask;
···
 void gic_cpu_config(void __iomem *base, void (*sync_access)(void));
 void gic_enable_quirks(u32 iidr, const struct gic_quirk *quirks,
 		void *data);
+void gic_enable_of_quirks(const struct device_node *np,
+			  const struct gic_quirk *quirks, void *data);
 
 void gic_set_kvm_info(const struct gic_kvm_info *info);
+27
drivers/irqchip/irq-gic-v3.c
···
 
 #include "irq-gic-common.h"
 
+#define FLAGS_WORKAROUND_GICR_WAKER_MSM8996	(1ULL << 0)
+
 struct redist_region {
 	void __iomem		*redist_base;
 	phys_addr_t		phys_base;
···
 	struct irq_domain	*domain;
 	u64			redist_stride;
 	u32			nr_redist_regions;
+	u64			flags;
 	bool			has_rss;
 	unsigned int		irq_nr;
 	struct partition_desc	*ppi_descs[16];
···
 	void __iomem *rbase;
 	u32 count = 1000000;	/* 1s! */
 	u32 val;
+
+	if (gic_data.flags & FLAGS_WORKAROUND_GICR_WAKER_MSM8996)
+		return;
 
 	rbase = gic_data_rdist_rd_base();
 
···
 	.select = gic_irq_domain_select,
 };
 
+static bool gic_enable_quirk_msm8996(void *data)
+{
+	struct gic_chip_data *d = data;
+
+	d->flags |= FLAGS_WORKAROUND_GICR_WAKER_MSM8996;
+
+	return true;
+}
+
 static int __init gic_init_bases(void __iomem *dist_base,
 				 struct redist_region *rdist_regs,
 				 u32 nr_redist_regions,
···
 	gic_set_kvm_info(&gic_v3_kvm_info);
 }
 
+static const struct gic_quirk gic_quirks[] = {
+	{
+		.desc	= "GICv3: Qualcomm MSM8996 broken firmware",
+		.compatible = "qcom,msm8996-gic-v3",
+		.init	= gic_enable_quirk_msm8996,
+	},
+	{
+	}
+};
+
 static int __init gic_of_init(struct device_node *node, struct device_node *parent)
 {
 	void __iomem *dist_base;
···
 
 	if (of_property_read_u64(node, "redistributor-stride", &redist_stride))
 		redist_stride = 0;
+
+	gic_enable_of_quirks(node, gic_quirks, &gic_data);
 
 	err = gic_init_bases(dist_base, rdist_regs, nr_redist_regions,
 			     redist_stride, &node->fwnode);
+3 -3
drivers/irqchip/irq-gic.c
···
 /*
  * Restores the GIC distributor registers during resume or when coming out of
  * idle. Must be called before enabling interrupts. If a level interrupt
- * that occured while the GIC was suspended is still present, it will be
- * handled normally, but any edge interrupts that occured will not be seen by
+ * that occurred while the GIC was suspended is still present, it will be
+ * handled normally, but any edge interrupts that occurred will not be seen by
  * the GIC and need to be handled by the platform-specific wakeup source.
  */
 void gic_dist_restore(struct gic_chip_data *gic)
···
 	gic_cpu_map[cpu] = 1 << new_cpu_id;
 
 	/*
-	 * Find all the peripheral interrupts targetting the current
+	 * Find all the peripheral interrupts targeting the current
 	 * CPU interface and migrate them to the new CPU interface.
 	 * We skip DIST_TARGET 0 to 7 as they are read-only.
 	 */
+47 -18
drivers/irqchip/irq-imx-gpcv2.c
···
 
 #define GPC_IMR1_CORE0		0x30
 #define GPC_IMR1_CORE1		0x40
+#define GPC_IMR1_CORE2		0x1c0
+#define GPC_IMR1_CORE3		0x1d0
 
 struct gpcv2_irqchip_data {
 	struct raw_spinlock	rlock;
···
 };
 
 static struct gpcv2_irqchip_data *imx_gpcv2_instance;
+
+static void __iomem *gpcv2_idx_to_reg(struct gpcv2_irqchip_data *cd, int i)
+{
+	return cd->gpc_base + cd->cpu2wakeup + i * 4;
+}
 
 static int gpcv2_wakeup_source_save(void)
 {
···
 		return 0;
 
 	for (i = 0; i < IMR_NUM; i++) {
-		reg = cd->gpc_base + cd->cpu2wakeup + i * 4;
+		reg = gpcv2_idx_to_reg(cd, i);
 		cd->saved_irq_mask[i] = readl_relaxed(reg);
 		writel_relaxed(cd->wakeup_sources[i], reg);
 	}
···
 static void gpcv2_wakeup_source_restore(void)
 {
 	struct gpcv2_irqchip_data *cd;
-	void __iomem *reg;
 	int i;
 
 	cd = imx_gpcv2_instance;
 	if (!cd)
 		return;
 
-	for (i = 0; i < IMR_NUM; i++) {
-		reg = cd->gpc_base + cd->cpu2wakeup + i * 4;
-		writel_relaxed(cd->saved_irq_mask[i], reg);
-	}
+	for (i = 0; i < IMR_NUM; i++)
+		writel_relaxed(cd->saved_irq_mask[i], gpcv2_idx_to_reg(cd, i));
 }
 
 static struct syscore_ops imx_gpcv2_syscore_ops = {
···
 	struct gpcv2_irqchip_data *cd = d->chip_data;
 	unsigned int idx = d->hwirq / 32;
 	unsigned long flags;
-	void __iomem *reg;
 	u32 mask, val;
 
 	raw_spin_lock_irqsave(&cd->rlock, flags);
-	reg = cd->gpc_base + cd->cpu2wakeup + idx * 4;
-	mask = 1 << d->hwirq % 32;
+	mask = BIT(d->hwirq % 32);
 	val = cd->wakeup_sources[idx];
 
 	cd->wakeup_sources[idx] = on ? (val & ~mask) : (val | mask);
···
 	u32 val;
 
 	raw_spin_lock(&cd->rlock);
-	reg = cd->gpc_base + cd->cpu2wakeup + d->hwirq / 32 * 4;
+	reg = gpcv2_idx_to_reg(cd, d->hwirq / 32);
 	val = readl_relaxed(reg);
-	val &= ~(1 << d->hwirq % 32);
+	val &= ~BIT(d->hwirq % 32);
 	writel_relaxed(val, reg);
 	raw_spin_unlock(&cd->rlock);
 
···
 	u32 val;
 
 	raw_spin_lock(&cd->rlock);
-	reg = cd->gpc_base + cd->cpu2wakeup + d->hwirq / 32 * 4;
+	reg = gpcv2_idx_to_reg(cd, d->hwirq / 32);
 	val = readl_relaxed(reg);
-	val |= 1 << (d->hwirq % 32);
+	val |= BIT(d->hwirq % 32);
 	writel_relaxed(val, reg);
 	raw_spin_unlock(&cd->rlock);
 
···
 	.free		= irq_domain_free_irqs_common,
 };
 
+static const struct of_device_id gpcv2_of_match[] = {
+	{ .compatible = "fsl,imx7d-gpc",  .data = (const void *) 2 },
+	{ .compatible = "fsl,imx8mq-gpc", .data = (const void *) 4 },
+	{ /* END */ }
+};
+
 static int __init imx_gpcv2_irqchip_init(struct device_node *node,
 					 struct device_node *parent)
 {
 	struct irq_domain *parent_domain, *domain;
 	struct gpcv2_irqchip_data *cd;
+	const struct of_device_id *id;
+	unsigned long core_num;
 	int i;
 
 	if (!parent) {
 		pr_err("%pOF: no parent, giving up\n", node);
 		return -ENODEV;
 	}
+
+	id = of_match_node(gpcv2_of_match, node);
+	if (!id) {
+		pr_err("%pOF: unknown compatibility string\n", node);
+		return -ENODEV;
+	}
+
+	core_num = (unsigned long)id->data;
 
 	parent_domain = irq_find_host(parent);
 	if (!parent_domain) {
···
 
 	cd = kzalloc(sizeof(struct gpcv2_irqchip_data), GFP_KERNEL);
 	if (!cd) {
-		pr_err("kzalloc failed!\n");
+		pr_err("%pOF: kzalloc failed!\n", node);
 		return -ENOMEM;
 	}
 
···
 	cd->gpc_base = of_iomap(node, 0);
 	if (!cd->gpc_base) {
-		pr_err("fsl-gpcv2: unable to map gpc registers\n");
+		pr_err("%pOF: unable to map gpc registers\n", node);
 		kfree(cd);
 		return -ENOMEM;
 	}
···
 
 	/* Initially mask all interrupts */
 	for (i = 0; i < IMR_NUM; i++) {
-		writel_relaxed(~0, cd->gpc_base + GPC_IMR1_CORE0 + i * 4);
-		writel_relaxed(~0, cd->gpc_base + GPC_IMR1_CORE1 + i * 4);
+		void __iomem *reg = cd->gpc_base + i * 4;
+
+		switch (core_num) {
+		case 4:
+			writel_relaxed(~0, reg + GPC_IMR1_CORE2);
+			writel_relaxed(~0, reg + GPC_IMR1_CORE3);
+			/* fall through */
+		case 2:
+			writel_relaxed(~0, reg + GPC_IMR1_CORE0);
+			writel_relaxed(~0, reg + GPC_IMR1_CORE1);
+		}
 		cd->wakeup_sources[i] = ~0;
 	}
···
 	return 0;
 }
 
-IRQCHIP_DECLARE(imx_gpcv2, "fsl,imx7d-gpc", imx_gpcv2_irqchip_init);
+IRQCHIP_DECLARE(imx_gpcv2_imx7d, "fsl,imx7d-gpc", imx_gpcv2_irqchip_init);
+IRQCHIP_DECLARE(imx_gpcv2_imx8mq, "fsl,imx8mq-gpc", imx_gpcv2_irqchip_init);
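The GPCv2 driver above addresses its wakeup mask registers as one 32-bit IMR word per 32 hwirqs, at a 4-byte stride, and the cleanup replaces open-coded `1 << (hwirq % 32)` with `BIT()`. A stand-alone sketch of that offset/mask arithmetic, reusing the driver's `GPC_IMR1_CORE0` offset as the base (function names here are illustrative, not the driver's):

```c
#include <assert.h>
#include <stdint.h>

/* per-core IMR1 base offset, as in the driver */
#define GPC_IMR1_CORE0	0x30
/* stand-in for the kernel's BIT() macro */
#define GPCV2_BIT(n)	(1U << (n))

/* byte offset of the IMR word covering 'hwirq' (32 hwirqs per word) */
static uint32_t gpcv2_imr_offset(unsigned int hwirq)
{
	return GPC_IMR1_CORE0 + (hwirq / 32) * 4;
}

/* mask selecting 'hwirq' within its IMR word */
static uint32_t gpcv2_imr_mask(unsigned int hwirq)
{
	return GPCV2_BIT(hwirq % 32);
}
```

Setting a bit masks the interrupt and clearing it unmasks, which is why unmask does `val &= ~BIT(...)` and mask does `val |= BIT(...)` in the diff.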
+261
drivers/irqchip/irq-imx-irqsteer.c
···
// SPDX-License-Identifier: GPL-2.0+
/*
 * Copyright 2017 NXP
 * Copyright (C) 2018 Pengutronix, Lucas Stach <kernel@pengutronix.de>
 */

#include <linux/clk.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqchip/chained_irq.h>
#include <linux/irqdomain.h>
#include <linux/kernel.h>
#include <linux/of_platform.h>
#include <linux/spinlock.h>

#define CTRL_STRIDE_OFF(_t, _r)	(_t * 8 * _r)
#define CHANCTRL		0x0
#define CHANMASK(n, t)		(CTRL_STRIDE_OFF(t, 0) + 0x4 * (n) + 0x4)
#define CHANSET(n, t)		(CTRL_STRIDE_OFF(t, 1) + 0x4 * (n) + 0x4)
#define CHANSTATUS(n, t)	(CTRL_STRIDE_OFF(t, 2) + 0x4 * (n) + 0x4)
#define CHAN_MINTDIS(t)		(CTRL_STRIDE_OFF(t, 3) + 0x4)
#define CHAN_MASTRSTAT(t)	(CTRL_STRIDE_OFF(t, 3) + 0x8)

struct irqsteer_data {
	void __iomem		*regs;
	struct clk		*ipg_clk;
	int			irq;
	raw_spinlock_t		lock;
	int			irq_groups;
	int			channel;
	struct irq_domain	*domain;
	u32			*saved_reg;
};

static int imx_irqsteer_get_reg_index(struct irqsteer_data *data,
				      unsigned long irqnum)
{
	return (data->irq_groups * 2 - irqnum / 32 - 1);
}

static void imx_irqsteer_irq_unmask(struct irq_data *d)
{
	struct irqsteer_data *data = d->chip_data;
	int idx = imx_irqsteer_get_reg_index(data, d->hwirq);
	unsigned long flags;
	u32 val;

	raw_spin_lock_irqsave(&data->lock, flags);
	val = readl_relaxed(data->regs + CHANMASK(idx, data->irq_groups));
	val |= BIT(d->hwirq % 32);
	writel_relaxed(val, data->regs + CHANMASK(idx, data->irq_groups));
	raw_spin_unlock_irqrestore(&data->lock, flags);
}

static void imx_irqsteer_irq_mask(struct irq_data *d)
{
	struct irqsteer_data *data = d->chip_data;
	int idx = imx_irqsteer_get_reg_index(data, d->hwirq);
	unsigned long flags;
	u32 val;

	raw_spin_lock_irqsave(&data->lock, flags);
	val = readl_relaxed(data->regs + CHANMASK(idx, data->irq_groups));
	val &= ~BIT(d->hwirq % 32);
	writel_relaxed(val, data->regs + CHANMASK(idx, data->irq_groups));
	raw_spin_unlock_irqrestore(&data->lock, flags);
}

static struct irq_chip imx_irqsteer_irq_chip = {
	.name		= "irqsteer",
	.irq_mask	= imx_irqsteer_irq_mask,
	.irq_unmask	= imx_irqsteer_irq_unmask,
};

static int imx_irqsteer_irq_map(struct irq_domain *h, unsigned int irq,
				irq_hw_number_t hwirq)
{
	irq_set_status_flags(irq, IRQ_LEVEL);
	irq_set_chip_data(irq, h->host_data);
	irq_set_chip_and_handler(irq, &imx_irqsteer_irq_chip, handle_level_irq);

	return 0;
}

static const struct irq_domain_ops imx_irqsteer_domain_ops = {
	.map		= imx_irqsteer_irq_map,
	.xlate		= irq_domain_xlate_onecell,
};

static void imx_irqsteer_irq_handler(struct irq_desc *desc)
{
	struct irqsteer_data *data = irq_desc_get_handler_data(desc);
	int i;

	chained_irq_enter(irq_desc_get_chip(desc), desc);

	for (i = 0; i < data->irq_groups * 64; i += 32) {
		int idx = imx_irqsteer_get_reg_index(data, i);
		unsigned long irqmap;
		int pos, virq;

		irqmap = readl_relaxed(data->regs +
				       CHANSTATUS(idx, data->irq_groups));

		for_each_set_bit(pos, &irqmap, 32) {
			virq = irq_find_mapping(data->domain, pos + i);
			if (virq)
				generic_handle_irq(virq);
		}
	}

	chained_irq_exit(irq_desc_get_chip(desc), desc);
}

static int imx_irqsteer_probe(struct platform_device *pdev)
{
	struct device_node *np = pdev->dev.of_node;
	struct irqsteer_data *data;
	struct resource *res;
	int ret;

	data = devm_kzalloc(&pdev->dev, sizeof(*data), GFP_KERNEL);
	if (!data)
		return -ENOMEM;

	res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
	data->regs = devm_ioremap_resource(&pdev->dev, res);
	if (IS_ERR(data->regs)) {
		dev_err(&pdev->dev, "failed to initialize reg\n");
		return PTR_ERR(data->regs);
	}

	data->irq = platform_get_irq(pdev, 0);
	if (data->irq <= 0) {
		dev_err(&pdev->dev, "failed to get irq\n");
		return -ENODEV;
	}

	data->ipg_clk = devm_clk_get(&pdev->dev, "ipg");
	if (IS_ERR(data->ipg_clk)) {
		ret = PTR_ERR(data->ipg_clk);
		if (ret != -EPROBE_DEFER)
			dev_err(&pdev->dev, "failed to get ipg clk: %d\n", ret);
		return ret;
	}

	raw_spin_lock_init(&data->lock);

	of_property_read_u32(np, "fsl,irq-groups", &data->irq_groups);
	of_property_read_u32(np, "fsl,channel", &data->channel);

	if (IS_ENABLED(CONFIG_PM_SLEEP)) {
		data->saved_reg = devm_kzalloc(&pdev->dev,
					sizeof(u32) * data->irq_groups * 2,
					GFP_KERNEL);
		if (!data->saved_reg)
			return -ENOMEM;
	}

	ret = clk_prepare_enable(data->ipg_clk);
	if (ret) {
		dev_err(&pdev->dev, "failed to enable ipg clk: %d\n", ret);
		return ret;
	}

	/* steer all IRQs into configured channel */
	writel_relaxed(BIT(data->channel), data->regs + CHANCTRL);

	data->domain = irq_domain_add_linear(np, data->irq_groups * 64,
					     &imx_irqsteer_domain_ops, data);
	if (!data->domain) {
		dev_err(&pdev->dev, "failed to create IRQ domain\n");
		clk_disable_unprepare(data->ipg_clk);
		return -ENOMEM;
	}

	irq_set_chained_handler_and_data(data->irq, imx_irqsteer_irq_handler,
					 data);

	platform_set_drvdata(pdev, data);

	return 0;
}

static int imx_irqsteer_remove(struct platform_device *pdev)
{
	struct irqsteer_data *irqsteer_data = platform_get_drvdata(pdev);

	irq_set_chained_handler_and_data(irqsteer_data->irq, NULL, NULL);
	irq_domain_remove(irqsteer_data->domain);

	clk_disable_unprepare(irqsteer_data->ipg_clk);

	return 0;
}

#ifdef CONFIG_PM_SLEEP
static void imx_irqsteer_save_regs(struct irqsteer_data *data)
{
	int i;

	for (i = 0; i < data->irq_groups * 2; i++)
		data->saved_reg[i] = readl_relaxed(data->regs +
						CHANMASK(i, data->irq_groups));
}

static void imx_irqsteer_restore_regs(struct irqsteer_data *data)
{
	int i;

	writel_relaxed(BIT(data->channel), data->regs + CHANCTRL);
	for (i = 0; i < data->irq_groups * 2; i++)
		writel_relaxed(data->saved_reg[i],
			       data->regs + CHANMASK(i, data->irq_groups));
}

static int imx_irqsteer_suspend(struct device *dev)
{
	struct irqsteer_data *irqsteer_data = dev_get_drvdata(dev);

	imx_irqsteer_save_regs(irqsteer_data);
	clk_disable_unprepare(irqsteer_data->ipg_clk);

	return 0;
}

static int imx_irqsteer_resume(struct device *dev)
{
	struct irqsteer_data *irqsteer_data = dev_get_drvdata(dev);
	int ret;

	ret = clk_prepare_enable(irqsteer_data->ipg_clk);
	if (ret) {
		dev_err(dev, "failed to enable ipg clk: %d\n", ret);
		return ret;
	}
	imx_irqsteer_restore_regs(irqsteer_data);

	return 0;
}
#endif

static const struct dev_pm_ops imx_irqsteer_pm_ops = {
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(imx_irqsteer_suspend, imx_irqsteer_resume)
};

static const struct of_device_id imx_irqsteer_dt_ids[] = {
	{ .compatible = "fsl,imx-irqsteer", },
	{},
};

static struct platform_driver imx_irqsteer_driver = {
	.driver = {
		.name = "imx-irqsteer",
		.of_match_table = imx_irqsteer_dt_ids,
		.pm = &imx_irqsteer_pm_ops,
	},
	.probe = imx_irqsteer_probe,
	.remove = imx_irqsteer_remove,
};
builtin_platform_driver(imx_irqsteer_driver);
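One non-obvious detail in the driver above is `imx_irqsteer_get_reg_index()`: with `fsl,irq-groups` groups of 64 inputs there are `irq_groups * 2` status/mask words, and word index 0 holds the highest-numbered inputs, so the index counts down as the hwirq number grows. A stand-alone copy of that arithmetic (the free function form is illustrative; the driver takes its `irqsteer_data` struct instead):

```c
#include <assert.h>

/* index of the 32-bit status/mask word covering 'irqnum', given
 * 'irq_groups' groups of 64 inputs: registers are laid out with the
 * highest-numbered inputs in word 0 */
static int irqsteer_get_reg_index(int irq_groups, unsigned long irqnum)
{
	return irq_groups * 2 - irqnum / 32 - 1;
}
```

This is why the chained handler walks hwirqs in steps of 32 and recomputes the index each iteration rather than walking register indices directly.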
+256
drivers/irqchip/irq-madera.c
···
// SPDX-License-Identifier: GPL-2.0
/*
 * Interrupt support for Cirrus Logic Madera codecs
 *
 * Copyright (C) 2015-2018 Cirrus Logic, Inc. and
 *                         Cirrus Logic International Semiconductor Ltd.
 */

#include <linux/module.h>
#include <linux/gpio.h>
#include <linux/interrupt.h>
#include <linux/irq.h>
#include <linux/irqdomain.h>
#include <linux/pm_runtime.h>
#include <linux/regmap.h>
#include <linux/slab.h>
#include <linux/of.h>
#include <linux/of_device.h>
#include <linux/of_gpio.h>
#include <linux/of_irq.h>
#include <linux/irqchip/irq-madera.h>
#include <linux/mfd/madera/core.h>
#include <linux/mfd/madera/pdata.h>
#include <linux/mfd/madera/registers.h>

#define MADERA_IRQ(_irq, _reg)					\
	[MADERA_IRQ_ ## _irq] = {				\
		.reg_offset = (_reg) - MADERA_IRQ1_STATUS_2,	\
		.mask = MADERA_ ## _irq ## _EINT1		\
	}

/* Mappings are the same for all Madera codecs */
static const struct regmap_irq madera_irqs[MADERA_NUM_IRQ] = {
	MADERA_IRQ(FLL1_LOCK, MADERA_IRQ1_STATUS_2),
	MADERA_IRQ(FLL2_LOCK, MADERA_IRQ1_STATUS_2),
	MADERA_IRQ(FLL3_LOCK, MADERA_IRQ1_STATUS_2),
	MADERA_IRQ(FLLAO_LOCK, MADERA_IRQ1_STATUS_2),

	MADERA_IRQ(MICDET1, MADERA_IRQ1_STATUS_6),
	MADERA_IRQ(MICDET2, MADERA_IRQ1_STATUS_6),
	MADERA_IRQ(HPDET, MADERA_IRQ1_STATUS_6),

	MADERA_IRQ(MICD_CLAMP_RISE, MADERA_IRQ1_STATUS_7),
	MADERA_IRQ(MICD_CLAMP_FALL, MADERA_IRQ1_STATUS_7),
	MADERA_IRQ(JD1_RISE, MADERA_IRQ1_STATUS_7),
	MADERA_IRQ(JD1_FALL, MADERA_IRQ1_STATUS_7),

	MADERA_IRQ(ASRC2_IN1_LOCK, MADERA_IRQ1_STATUS_9),
	MADERA_IRQ(ASRC2_IN2_LOCK, MADERA_IRQ1_STATUS_9),
	MADERA_IRQ(ASRC1_IN1_LOCK, MADERA_IRQ1_STATUS_9),
	MADERA_IRQ(ASRC1_IN2_LOCK, MADERA_IRQ1_STATUS_9),
	MADERA_IRQ(DRC2_SIG_DET, MADERA_IRQ1_STATUS_9),
	MADERA_IRQ(DRC1_SIG_DET, MADERA_IRQ1_STATUS_9),

	MADERA_IRQ(DSP_IRQ1, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ2, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ3, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ4, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ5, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ6, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ7, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ8, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ9, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ10, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ11, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ12, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ13, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ14, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ15, MADERA_IRQ1_STATUS_11),
	MADERA_IRQ(DSP_IRQ16, MADERA_IRQ1_STATUS_11),

	MADERA_IRQ(HP3R_SC, MADERA_IRQ1_STATUS_12),
	MADERA_IRQ(HP3L_SC, MADERA_IRQ1_STATUS_12),
	MADERA_IRQ(HP2R_SC, MADERA_IRQ1_STATUS_12),
	MADERA_IRQ(HP2L_SC, MADERA_IRQ1_STATUS_12),
	MADERA_IRQ(HP1R_SC, MADERA_IRQ1_STATUS_12),
	MADERA_IRQ(HP1L_SC, MADERA_IRQ1_STATUS_12),

	MADERA_IRQ(SPK_OVERHEAT_WARN, MADERA_IRQ1_STATUS_15),
	MADERA_IRQ(SPK_OVERHEAT, MADERA_IRQ1_STATUS_15),

	MADERA_IRQ(DSP1_BUS_ERR, MADERA_IRQ1_STATUS_33),
	MADERA_IRQ(DSP2_BUS_ERR, MADERA_IRQ1_STATUS_33),
	MADERA_IRQ(DSP3_BUS_ERR, MADERA_IRQ1_STATUS_33),
	MADERA_IRQ(DSP4_BUS_ERR, MADERA_IRQ1_STATUS_33),
	MADERA_IRQ(DSP5_BUS_ERR, MADERA_IRQ1_STATUS_33),
	MADERA_IRQ(DSP6_BUS_ERR, MADERA_IRQ1_STATUS_33),
	MADERA_IRQ(DSP7_BUS_ERR, MADERA_IRQ1_STATUS_33),
};

static const struct regmap_irq_chip madera_irq_chip = {
	.name		= "madera IRQ",
	.status_base	= MADERA_IRQ1_STATUS_2,
	.mask_base	= MADERA_IRQ1_MASK_2,
	.ack_base	= MADERA_IRQ1_STATUS_2,
	.runtime_pm	= true,
	.num_regs	= 32,
	.irqs		= madera_irqs,
	.num_irqs	= ARRAY_SIZE(madera_irqs),
};

#ifdef CONFIG_PM_SLEEP
static int madera_suspend(struct device *dev)
{
	struct madera *madera = dev_get_drvdata(dev->parent);

	dev_dbg(madera->irq_dev, "Suspend, disabling IRQ\n");

	/*
	 * A runtime resume would be needed to access the chip interrupt
	 * controller but runtime pm doesn't function during suspend.
	 * Temporarily disable interrupts until we reach suspend_noirq state.
	 */
	disable_irq(madera->irq);

	return 0;
}

static int madera_suspend_noirq(struct device *dev)
{
	struct madera *madera = dev_get_drvdata(dev->parent);

	dev_dbg(madera->irq_dev, "No IRQ suspend, reenabling IRQ\n");

	/* Re-enable interrupts to service wakeup interrupts from the chip */
	enable_irq(madera->irq);

	return 0;
}

static int madera_resume_noirq(struct device *dev)
{
	struct madera *madera = dev_get_drvdata(dev->parent);

	dev_dbg(madera->irq_dev, "No IRQ resume, disabling IRQ\n");

	/*
	 * We can't handle interrupts until runtime pm is available again.
	 * Disable them temporarily.
	 */
	disable_irq(madera->irq);

	return 0;
}

static int madera_resume(struct device *dev)
{
	struct madera *madera = dev_get_drvdata(dev->parent);

	dev_dbg(madera->irq_dev, "Resume, reenabling IRQ\n");

	/* Interrupts can now be handled */
	enable_irq(madera->irq);

	return 0;
}
#endif

static const struct dev_pm_ops madera_irq_pm_ops = {
	SET_SYSTEM_SLEEP_PM_OPS(madera_suspend, madera_resume)
	SET_NOIRQ_SYSTEM_SLEEP_PM_OPS(madera_suspend_noirq,
				      madera_resume_noirq)
};

static int madera_irq_probe(struct platform_device *pdev)
{
	struct madera *madera = dev_get_drvdata(pdev->dev.parent);
	struct irq_data *irq_data;
	unsigned int irq_flags = 0;
	int ret;

	dev_dbg(&pdev->dev, "probe\n");

	/*
	 * Read the flags from the interrupt controller if not specified
	 * by pdata
	 */
	irq_flags = madera->pdata.irq_flags;
	if (!irq_flags) {
		irq_data = irq_get_irq_data(madera->irq);
		if (!irq_data) {
			dev_err(&pdev->dev, "Invalid IRQ: %d\n", madera->irq);
			return -EINVAL;
		}

		irq_flags = irqd_get_trigger_type(irq_data);

		/* Codec defaults to trigger low, use this if no flags given */
		if (irq_flags == IRQ_TYPE_NONE)
			irq_flags = IRQF_TRIGGER_LOW;
	}

	if (irq_flags & (IRQF_TRIGGER_RISING | IRQF_TRIGGER_FALLING)) {
		dev_err(&pdev->dev, "Host interrupt not level-triggered\n");
		return -EINVAL;
	}

	/*
	 * The silicon always starts at active-low, check if we need to
	 * switch to active-high.
201 + */ 202 + if (irq_flags & IRQF_TRIGGER_HIGH) { 203 + ret = regmap_update_bits(madera->regmap, MADERA_IRQ1_CTRL, 204 + MADERA_IRQ_POL_MASK, 0); 205 + if (ret) { 206 + dev_err(&pdev->dev, 207 + "Failed to set IRQ polarity: %d\n", ret); 208 + return ret; 209 + } 210 + } 211 + 212 + /* 213 + * NOTE: regmap registers this against the OF node of the parent of 214 + * the regmap - that is, against the mfd driver 215 + */ 216 + ret = regmap_add_irq_chip(madera->regmap, madera->irq, IRQF_ONESHOT, 0, 217 + &madera_irq_chip, &madera->irq_data); 218 + if (ret) { 219 + dev_err(&pdev->dev, "add_irq_chip failed: %d\n", ret); 220 + return ret; 221 + } 222 + 223 + /* Save dev in parent MFD struct so it is accessible to siblings */ 224 + madera->irq_dev = &pdev->dev; 225 + 226 + return 0; 227 + } 228 + 229 + static int madera_irq_remove(struct platform_device *pdev) 230 + { 231 + struct madera *madera = dev_get_drvdata(pdev->dev.parent); 232 + 233 + /* 234 + * The IRQ is disabled by the parent MFD driver before 235 + * it starts cleaning up all child drivers 236 + */ 237 + madera->irq_dev = NULL; 238 + regmap_del_irq_chip(madera->irq, madera->irq_data); 239 + 240 + return 0; 241 + } 242 + 243 + static struct platform_driver madera_irq_driver = { 244 + .probe = &madera_irq_probe, 245 + .remove = &madera_irq_remove, 246 + .driver = { 247 + .name = "madera-irq", 248 + .pm = &madera_irq_pm_ops, 249 + } 250 + }; 251 + module_platform_driver(madera_irq_driver); 252 + 253 + MODULE_SOFTDEP("pre: madera"); 254 + MODULE_DESCRIPTION("Madera IRQ driver"); 255 + MODULE_AUTHOR("Richard Fitzgerald <rf@opensource.cirrus.com>"); 256 + MODULE_LICENSE("GPL v2");
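The `MADERA_IRQ()` macro above builds each `regmap_irq` entry by subtracting the base status register (`MADERA_IRQ1_STATUS_2`) from the interrupt's own status register to get a `reg_offset`. A standalone sketch of that offset arithmetic, using hypothetical register addresses in place of the real values from `<linux/mfd/madera/registers.h>`:

```c
#include <assert.h>
#include <stdio.h>

/*
 * Hypothetical register addresses standing in for the real
 * MADERA_IRQ1_STATUS_x defines; the actual values live in
 * <linux/mfd/madera/registers.h>. What matters for the macro is
 * only that the status registers are consecutive.
 */
#define IRQ1_STATUS_2	0x1801
#define IRQ1_STATUS_6	0x1805
#define IRQ1_STATUS_11	0x180A

/* Mirrors the reg_offset computation inside MADERA_IRQ(). */
static unsigned int reg_offset(unsigned int status_reg)
{
	return status_reg - IRQ1_STATUS_2;
}
```

With consecutive status registers, `reg_offset(IRQ1_STATUS_2)` is 0, so `MADERA_IRQ1_STATUS_2` doubles as the `status_base` of the `regmap_irq_chip` and every other entry indexes relative to it.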
+3 -3
drivers/irqchip/irq-mscc-ocelot.c
··· 72 72 domain = irq_domain_add_linear(node, OCELOT_NR_IRQ, 73 73 &irq_generic_chip_ops, NULL); 74 74 if (!domain) { 75 - pr_err("%s: unable to add irq domain\n", node->name); 75 + pr_err("%pOFn: unable to add irq domain\n", node); 76 76 return -ENOMEM; 77 77 } 78 78 ··· 80 80 "icpu", handle_level_irq, 81 81 0, 0, 0); 82 82 if (ret) { 83 - pr_err("%s: unable to alloc irq domain gc\n", node->name); 83 + pr_err("%pOFn: unable to alloc irq domain gc\n", node); 84 84 goto err_domain_remove; 85 85 } 86 86 87 87 gc = irq_get_domain_generic_chip(domain, 0); 88 88 gc->reg_base = of_iomap(node, 0); 89 89 if (!gc->reg_base) { 90 - pr_err("%s: unable to map resource\n", node->name); 90 + pr_err("%pOFn: unable to map resource\n", node); 91 91 ret = -ENOMEM; 92 92 goto err_gc_free; 93 93 }
+107
drivers/irqchip/irq-rda-intc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0+ 2 + /* 3 + * RDA8810PL SoC irqchip driver 4 + * 5 + * Copyright RDA Microelectronics Company Limited 6 + * Copyright (c) 2017 Andreas Färber 7 + * Copyright (c) 2018 Manivannan Sadhasivam 8 + */ 9 + 10 + #include <linux/init.h> 11 + #include <linux/interrupt.h> 12 + #include <linux/irq.h> 13 + #include <linux/irqchip.h> 14 + #include <linux/irqdomain.h> 15 + #include <linux/of_address.h> 16 + 17 + #include <asm/exception.h> 18 + 19 + #define RDA_INTC_FINALSTATUS 0x00 20 + #define RDA_INTC_MASK_SET 0x08 21 + #define RDA_INTC_MASK_CLR 0x0c 22 + 23 + #define RDA_IRQ_MASK_ALL 0xFFFFFFFF 24 + 25 + #define RDA_NR_IRQS 32 26 + 27 + static void __iomem *rda_intc_base; 28 + static struct irq_domain *rda_irq_domain; 29 + 30 + static void rda_intc_mask_irq(struct irq_data *d) 31 + { 32 + writel_relaxed(BIT(d->hwirq), rda_intc_base + RDA_INTC_MASK_CLR); 33 + } 34 + 35 + static void rda_intc_unmask_irq(struct irq_data *d) 36 + { 37 + writel_relaxed(BIT(d->hwirq), rda_intc_base + RDA_INTC_MASK_SET); 38 + } 39 + 40 + static int rda_intc_set_type(struct irq_data *data, unsigned int flow_type) 41 + { 42 + /* Hardware supports only level triggered interrupts */ 43 + if ((flow_type & (IRQF_TRIGGER_HIGH | IRQF_TRIGGER_LOW)) == flow_type) 44 + return 0; 45 + 46 + return -EINVAL; 47 + } 48 + 49 + static void __exception_irq_entry rda_handle_irq(struct pt_regs *regs) 50 + { 51 + u32 stat = readl_relaxed(rda_intc_base + RDA_INTC_FINALSTATUS); 52 + u32 hwirq; 53 + 54 + while (stat) { 55 + hwirq = __fls(stat); 56 + handle_domain_irq(rda_irq_domain, hwirq, regs); 57 + stat &= ~BIT(hwirq); 58 + } 59 + } 60 + 61 + static struct irq_chip rda_irq_chip = { 62 + .name = "rda-intc", 63 + .irq_mask = rda_intc_mask_irq, 64 + .irq_unmask = rda_intc_unmask_irq, 65 + .irq_set_type = rda_intc_set_type, 66 + }; 67 + 68 + static int rda_irq_map(struct irq_domain *d, 69 + unsigned int virq, irq_hw_number_t hw) 70 + { 71 + irq_set_status_flags(virq, IRQ_LEVEL); 72 
+ irq_set_chip_and_handler(virq, &rda_irq_chip, handle_level_irq); 73 + irq_set_chip_data(virq, d->host_data); 74 + irq_set_probe(virq); 75 + 76 + return 0; 77 + } 78 + 79 + static const struct irq_domain_ops rda_irq_domain_ops = { 80 + .map = rda_irq_map, 81 + .xlate = irq_domain_xlate_onecell, 82 + }; 83 + 84 + static int __init rda8810_intc_init(struct device_node *node, 85 + struct device_node *parent) 86 + { 87 + rda_intc_base = of_io_request_and_map(node, 0, "rda-intc"); 88 + if (IS_ERR(rda_intc_base)) 89 + return PTR_ERR(rda_intc_base); 90 + 91 + /* Mask all interrupt sources */ 92 + writel_relaxed(RDA_IRQ_MASK_ALL, rda_intc_base + RDA_INTC_MASK_CLR); 93 + 94 + rda_irq_domain = irq_domain_create_linear(&node->fwnode, RDA_NR_IRQS, 95 + &rda_irq_domain_ops, 96 + rda_intc_base); 97 + if (!rda_irq_domain) { 98 + iounmap(rda_intc_base); 99 + return -ENOMEM; 100 + } 101 + 102 + set_handle_irq(rda_handle_irq); 103 + 104 + return 0; 105 + } 106 + 107 + IRQCHIP_DECLARE(rda_intc, "rda,8810pl-intc", rda8810_intc_init);
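`rda_handle_irq()` above drains the FINALSTATUS word by repeatedly taking the highest set bit with `__fls()`, dispatching that hwirq, and clearing its bit. A userspace sketch of the same loop (with a portable stand-in for `__fls()` and an array recording dispatch order in place of `handle_domain_irq()`):

```c
#include <assert.h>
#include <stdint.h>

/* Portable stand-in for the kernel's __fls(): index of highest set bit.
 * Caller must pass a non-zero value, as the driver's loop guarantees. */
static unsigned int top_bit(uint32_t x)
{
	unsigned int i = 31;

	while (!(x & (1u << i)))
		i--;
	return i;
}

/*
 * Mirrors rda_handle_irq()'s dispatch loop: service the highest pending
 * hwirq first, clear its bit, repeat until nothing is pending. Records
 * the dispatch order (where the driver calls handle_domain_irq()) and
 * returns how many interrupts were handled.
 */
static int dispatch_pending(uint32_t stat, unsigned int *order)
{
	int n = 0;

	while (stat) {
		unsigned int hwirq = top_bit(stat);

		order[n++] = hwirq;	/* handle_domain_irq() in the driver */
		stat &= ~(1u << hwirq);
	}
	return n;
}
```

For a status word with bits 17, 3 and 0 set, the loop dispatches hwirqs 17, 3, 0 in that order and terminates once the word is empty.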
+1 -1
drivers/irqchip/irq-renesas-h8s.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * H8S interrupt contoller driver 3 + * H8S interrupt controller driver 4 4 * 5 5 * Copyright 2015 Yoshinori Sato <ysato@users.sourceforge.jp> 6 6 */
+1 -13
drivers/irqchip/irq-renesas-intc-irqpin.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas INTC External IRQ Pin Driver 3 4 * 4 5 * Copyright (C) 2013 Magnus Damm 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License 9 - * 10 - * This program is distributed in the hope that it will be useful, 11 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 - * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 6 */ 19 7 20 8 #include <linux/init.h>
+1 -13
drivers/irqchip/irq-renesas-irqc.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 1 2 /* 2 3 * Renesas IRQC Driver 3 4 * 4 5 * Copyright (C) 2013 Magnus Damm 5 - * 6 - * This program is free software; you can redistribute it and/or modify 7 - * it under the terms of the GNU General Public License as published by 8 - * the Free Software Foundation; either version 2 of the License 9 - * 10 - * This program is distributed in the hope that it will be useful, 11 - * but WITHOUT ANY WARRANTY; without even the implied warranty of 12 - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the 13 - * GNU General Public License for more details. 14 - * 15 - * You should have received a copy of the GNU General Public License 16 - * along with this program; if not, write to the Free Software 17 - * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA 18 6 */ 19 7 20 8 #include <linux/init.h>
+1 -1
drivers/irqchip/irq-s3c24xx.c
··· 58 58 }; 59 59 60 60 /* 61 - * Sructure holding the controller data 61 + * Structure holding the controller data 62 62 * @reg_pending register holding pending irqs 63 63 * @reg_intpnd special register intpnd in main intc 64 64 * @reg_mask mask register
+103 -19
drivers/irqchip/irq-stm32-exti.c
··· 6 6 */ 7 7 8 8 #include <linux/bitops.h> 9 + #include <linux/delay.h> 10 + #include <linux/hwspinlock.h> 9 11 #include <linux/interrupt.h> 10 12 #include <linux/io.h> 11 13 #include <linux/irq.h> ··· 22 20 23 21 #define IRQS_PER_BANK 32 24 22 23 + #define HWSPNLCK_TIMEOUT 1000 /* usec */ 24 + #define HWSPNLCK_RETRY_DELAY 100 /* usec */ 25 + 25 26 struct stm32_exti_bank { 26 27 u32 imr_ofst; 27 28 u32 emr_ofst; ··· 36 31 }; 37 32 38 33 #define UNDEF_REG ~0 34 + 35 + enum stm32_exti_hwspinlock { 36 + HWSPINLOCK_UNKNOWN, 37 + HWSPINLOCK_NONE, 38 + HWSPINLOCK_READY, 39 + }; 39 40 40 41 struct stm32_desc_irq { 41 42 u32 exti; ··· 69 58 void __iomem *base; 70 59 struct stm32_exti_chip_data *chips_data; 71 60 const struct stm32_exti_drv_data *drv_data; 61 + struct device_node *node; 62 + enum stm32_exti_hwspinlock hwlock_state; 63 + struct hwspinlock *hwlock; 72 64 }; 73 65 74 66 static struct stm32_exti_host_data *stm32_host_data; ··· 283 269 return 0; 284 270 } 285 271 272 + static int stm32_exti_hwspin_lock(struct stm32_exti_chip_data *chip_data) 273 + { 274 + struct stm32_exti_host_data *host_data = chip_data->host_data; 275 + struct hwspinlock *hwlock; 276 + int id, ret = 0, timeout = 0; 277 + 278 + /* first time, check for hwspinlock availability */ 279 + if (unlikely(host_data->hwlock_state == HWSPINLOCK_UNKNOWN)) { 280 + id = of_hwspin_lock_get_id(host_data->node, 0); 281 + if (id >= 0) { 282 + hwlock = hwspin_lock_request_specific(id); 283 + if (hwlock) { 284 + /* found valid hwspinlock */ 285 + host_data->hwlock_state = HWSPINLOCK_READY; 286 + host_data->hwlock = hwlock; 287 + pr_debug("%s hwspinlock = %d\n", __func__, id); 288 + } else { 289 + host_data->hwlock_state = HWSPINLOCK_NONE; 290 + } 291 + } else if (id != -EPROBE_DEFER) { 292 + host_data->hwlock_state = HWSPINLOCK_NONE; 293 + } else { 294 + /* hwspinlock driver shall be ready at that stage */ 295 + ret = -EPROBE_DEFER; 296 + } 297 + } 298 + 299 + if (likely(host_data->hwlock_state == 
HWSPINLOCK_READY)) { 300 + /* 301 + * Use the x_raw API since we are under spin_lock protection. 302 + * Do not use the x_timeout API because we are under irq_disable 303 + * mode (see __setup_irq()) 304 + */ 305 + do { 306 + ret = hwspin_trylock_raw(host_data->hwlock); 307 + if (!ret) 308 + return 0; 309 + 310 + udelay(HWSPNLCK_RETRY_DELAY); 311 + timeout += HWSPNLCK_RETRY_DELAY; 312 + } while (timeout < HWSPNLCK_TIMEOUT); 313 + 314 + if (ret == -EBUSY) 315 + ret = -ETIMEDOUT; 316 + } 317 + 318 + if (ret) 319 + pr_err("%s can't get hwspinlock (%d)\n", __func__, ret); 320 + 321 + return ret; 322 + } 323 + 324 + static void stm32_exti_hwspin_unlock(struct stm32_exti_chip_data *chip_data) 325 + { 326 + if (likely(chip_data->host_data->hwlock_state == HWSPINLOCK_READY)) 327 + hwspin_unlock_raw(chip_data->host_data->hwlock); 328 + } 329 + 286 330 static int stm32_irq_set_type(struct irq_data *d, unsigned int type) 287 331 { 288 332 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); ··· 351 279 352 280 irq_gc_lock(gc); 353 281 282 + err = stm32_exti_hwspin_lock(chip_data); 283 + if (err) 284 + goto unlock; 285 + 354 286 rtsr = irq_reg_readl(gc, stm32_bank->rtsr_ofst); 355 287 ftsr = irq_reg_readl(gc, stm32_bank->ftsr_ofst); 356 288 357 289 err = stm32_exti_set_type(d, type, &rtsr, &ftsr); 358 - if (err) { 359 - irq_gc_unlock(gc); 360 - return err; 361 - } 290 + if (err) 291 + goto unspinlock; 362 292 363 293 irq_reg_writel(gc, rtsr, stm32_bank->rtsr_ofst); 364 294 irq_reg_writel(gc, ftsr, stm32_bank->ftsr_ofst); 365 295 296 + unspinlock: 297 + stm32_exti_hwspin_unlock(chip_data); 298 + unlock: 366 299 irq_gc_unlock(gc); 367 300 368 - return 0; 301 + return err; 369 302 } 370 303 371 304 static void stm32_chip_suspend(struct stm32_exti_chip_data *chip_data, ··· 537 460 int err; 538 461 539 462 raw_spin_lock(&chip_data->rlock); 463 + 464 + err = stm32_exti_hwspin_lock(chip_data); 465 + if (err) 466 + goto unlock; 467 + 540 468 rtsr = readl_relaxed(base + 
stm32_bank->rtsr_ofst); 541 469 ftsr = readl_relaxed(base + stm32_bank->ftsr_ofst); 542 470 543 471 err = stm32_exti_set_type(d, type, &rtsr, &ftsr); 544 - if (err) { 545 - raw_spin_unlock(&chip_data->rlock); 546 - return err; 547 - } 472 + if (err) 473 + goto unspinlock; 548 474 549 475 writel_relaxed(rtsr, base + stm32_bank->rtsr_ofst); 550 476 writel_relaxed(ftsr, base + stm32_bank->ftsr_ofst); 477 + 478 + unspinlock: 479 + stm32_exti_hwspin_unlock(chip_data); 480 + unlock: 551 481 raw_spin_unlock(&chip_data->rlock); 552 482 553 - return 0; 483 + return err; 554 484 } 555 485 556 486 static int stm32_exti_h_set_wake(struct irq_data *d, unsigned int on) ··· 683 599 return NULL; 684 600 685 601 host_data->drv_data = dd; 602 + host_data->node = node; 603 + host_data->hwlock_state = HWSPINLOCK_UNKNOWN; 686 604 host_data->chips_data = kcalloc(dd->bank_nr, 687 605 sizeof(struct stm32_exti_chip_data), 688 606 GFP_KERNEL); ··· 711 625 712 626 static struct 713 627 stm32_exti_chip_data *stm32_exti_chip_init(struct stm32_exti_host_data *h_data, 714 - u32 bank_idx, 715 - struct device_node *node) 628 + u32 bank_idx) 716 629 { 717 630 const struct stm32_exti_bank *stm32_bank; 718 631 struct stm32_exti_chip_data *chip_data; ··· 741 656 if (stm32_bank->fpr_ofst != UNDEF_REG) 742 657 writel_relaxed(~0UL, base + stm32_bank->fpr_ofst); 743 658 744 - pr_info("%s: bank%d, External IRQs available:%#x\n", 745 - node->full_name, bank_idx, irqs_mask); 659 + pr_info("%pOF: bank%d\n", h_data->node, bank_idx); 746 660 747 661 return chip_data; 748 662 } ··· 762 678 domain = irq_domain_add_linear(node, drv_data->bank_nr * IRQS_PER_BANK, 763 679 &irq_exti_domain_ops, NULL); 764 680 if (!domain) { 765 - pr_err("%s: Could not register interrupt domain.\n", 766 - node->name); 681 + pr_err("%pOFn: Could not register interrupt domain.\n", 682 + node); 767 683 ret = -ENOMEM; 768 684 goto out_unmap; 769 685 } ··· 781 697 struct stm32_exti_chip_data *chip_data; 782 698 783 699 stm32_bank = 
drv_data->exti_banks[i]; 784 - chip_data = stm32_exti_chip_init(host_data, i, node); 700 + chip_data = stm32_exti_chip_init(host_data, i); 785 701 786 702 gc = irq_get_domain_generic_chip(domain, i * IRQS_PER_BANK); 787 703 ··· 844 760 return -ENOMEM; 845 761 846 762 for (i = 0; i < drv_data->bank_nr; i++) 847 - stm32_exti_chip_init(host_data, i, node); 763 + stm32_exti_chip_init(host_data, i); 848 764 849 765 domain = irq_domain_add_hierarchy(parent_domain, 0, 850 766 drv_data->bank_nr * IRQS_PER_BANK, ··· 852 768 host_data); 853 769 854 770 if (!domain) { 855 - pr_err("%s: Could not register exti domain.\n", node->name); 771 + pr_err("%pOFn: Could not register exti domain.\n", node); 856 772 ret = -ENOMEM; 857 773 goto out_unmap; 858 774 }
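The stm32-exti change above cannot sleep on the hwspinlock (it runs under `irq_gc_lock()`/`raw_spin_lock()` with interrupts disabled), so it spins on `hwspin_trylock_raw()` with a bounded `udelay()` backoff instead of using the timeout API. A sketch of that retry pattern with function-pointer stand-ins for the hwspinlock and delay primitives:

```c
#include <assert.h>
#include <errno.h>

#define HWSPNLCK_TIMEOUT	1000	/* usec */
#define HWSPNLCK_RETRY_DELAY	100	/* usec */

/*
 * trylock() stands in for hwspin_trylock_raw() (0 on success, -EBUSY
 * when contended); delay_us() stands in for udelay(). Mirrors the
 * bounded-retry loop in stm32_exti_hwspin_lock().
 */
static int lock_with_retry(int (*trylock)(void *), void *lock,
			   void (*delay_us)(int))
{
	int timeout = 0, ret;

	do {
		ret = trylock(lock);
		if (!ret)
			return 0;

		delay_us(HWSPNLCK_RETRY_DELAY);
		timeout += HWSPNLCK_RETRY_DELAY;
	} while (timeout < HWSPNLCK_TIMEOUT);

	/* report persistent contention as a timeout, like the driver */
	return ret == -EBUSY ? -ETIMEDOUT : ret;
}

/* Fakes for exercising the loop: one lock frees up on the third try,
 * one never does. */
static int fake_attempts, fake_delays;

static int fake_trylock_third_try(void *lock)
{
	(void)lock;
	return ++fake_attempts >= 3 ? 0 : -EBUSY;
}

static int fake_trylock_never(void *lock)
{
	(void)lock;
	return -EBUSY;
}

static void fake_delay(int us)
{
	(void)us;
	fake_delays++;
}
```

With the third-try fake the loop delays twice and then succeeds; with the never-succeeding fake it gives up after `HWSPNLCK_TIMEOUT / HWSPNLCK_RETRY_DELAY` attempts and returns `-ETIMEDOUT`.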
+77 -29
drivers/irqchip/irq-sun4i.c
··· 28 28 #define SUN4I_IRQ_NMI_CTRL_REG 0x0c 29 29 #define SUN4I_IRQ_PENDING_REG(x) (0x10 + 0x4 * x) 30 30 #define SUN4I_IRQ_FIQ_PENDING_REG(x) (0x20 + 0x4 * x) 31 - #define SUN4I_IRQ_ENABLE_REG(x) (0x40 + 0x4 * x) 32 - #define SUN4I_IRQ_MASK_REG(x) (0x50 + 0x4 * x) 31 + #define SUN4I_IRQ_ENABLE_REG(data, x) ((data)->enable_reg_offset + 0x4 * x) 32 + #define SUN4I_IRQ_MASK_REG(data, x) ((data)->mask_reg_offset + 0x4 * x) 33 + #define SUN4I_IRQ_ENABLE_REG_OFFSET 0x40 34 + #define SUN4I_IRQ_MASK_REG_OFFSET 0x50 35 + #define SUNIV_IRQ_ENABLE_REG_OFFSET 0x20 36 + #define SUNIV_IRQ_MASK_REG_OFFSET 0x30 33 37 34 - static void __iomem *sun4i_irq_base; 35 - static struct irq_domain *sun4i_irq_domain; 38 + struct sun4i_irq_chip_data { 39 + void __iomem *irq_base; 40 + struct irq_domain *irq_domain; 41 + u32 enable_reg_offset; 42 + u32 mask_reg_offset; 43 + }; 44 + 45 + static struct sun4i_irq_chip_data *irq_ic_data; 36 46 37 47 static void __exception_irq_entry sun4i_handle_irq(struct pt_regs *regs); 38 48 ··· 53 43 if (irq != 0) 54 44 return; /* Only IRQ 0 / the ENMI needs to be acked */ 55 45 56 - writel(BIT(0), sun4i_irq_base + SUN4I_IRQ_PENDING_REG(0)); 46 + writel(BIT(0), irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(0)); 57 47 } 58 48 59 49 static void sun4i_irq_mask(struct irq_data *irqd) ··· 63 53 int reg = irq / 32; 64 54 u32 val; 65 55 66 - val = readl(sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(reg)); 56 + val = readl(irq_ic_data->irq_base + 57 + SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg)); 67 58 writel(val & ~(1 << irq_off), 68 - sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(reg)); 59 + irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg)); 69 60 } 70 61 71 62 static void sun4i_irq_unmask(struct irq_data *irqd) ··· 76 65 int reg = irq / 32; 77 66 u32 val; 78 67 79 - val = readl(sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(reg)); 68 + val = readl(irq_ic_data->irq_base + 69 + SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg)); 80 70 writel(val | (1 << irq_off), 81 - sun4i_irq_base + 
SUN4I_IRQ_ENABLE_REG(reg)); 71 + irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, reg)); 82 72 } 83 73 84 74 static struct irq_chip sun4i_irq_chip = { ··· 107 95 static int __init sun4i_of_init(struct device_node *node, 108 96 struct device_node *parent) 109 97 { 110 - sun4i_irq_base = of_iomap(node, 0); 111 - if (!sun4i_irq_base) 98 + irq_ic_data->irq_base = of_iomap(node, 0); 99 + if (!irq_ic_data->irq_base) 112 100 panic("%pOF: unable to map IC registers\n", 113 101 node); 114 102 115 103 /* Disable all interrupts */ 116 - writel(0, sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(0)); 117 - writel(0, sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(1)); 118 - writel(0, sun4i_irq_base + SUN4I_IRQ_ENABLE_REG(2)); 104 + writel(0, irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, 0)); 105 + writel(0, irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, 1)); 106 + writel(0, irq_ic_data->irq_base + SUN4I_IRQ_ENABLE_REG(irq_ic_data, 2)); 119 107 120 108 /* Unmask all the interrupts, ENABLE_REG(x) is used for masking */ 121 - writel(0, sun4i_irq_base + SUN4I_IRQ_MASK_REG(0)); 122 - writel(0, sun4i_irq_base + SUN4I_IRQ_MASK_REG(1)); 123 - writel(0, sun4i_irq_base + SUN4I_IRQ_MASK_REG(2)); 109 + writel(0, irq_ic_data->irq_base + SUN4I_IRQ_MASK_REG(irq_ic_data, 0)); 110 + writel(0, irq_ic_data->irq_base + SUN4I_IRQ_MASK_REG(irq_ic_data, 1)); 111 + writel(0, irq_ic_data->irq_base + SUN4I_IRQ_MASK_REG(irq_ic_data, 2)); 124 112 125 113 /* Clear all the pending interrupts */ 126 - writel(0xffffffff, sun4i_irq_base + SUN4I_IRQ_PENDING_REG(0)); 127 - writel(0xffffffff, sun4i_irq_base + SUN4I_IRQ_PENDING_REG(1)); 128 - writel(0xffffffff, sun4i_irq_base + SUN4I_IRQ_PENDING_REG(2)); 114 + writel(0xffffffff, irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(0)); 115 + writel(0xffffffff, irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(1)); 116 + writel(0xffffffff, irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(2)); 129 117 130 118 /* Enable protection mode */ 131 - writel(0x01, 
sun4i_irq_base + SUN4I_IRQ_PROTECTION_REG); 119 + writel(0x01, irq_ic_data->irq_base + SUN4I_IRQ_PROTECTION_REG); 132 120 133 121 /* Configure the external interrupt source type */ 134 - writel(0x00, sun4i_irq_base + SUN4I_IRQ_NMI_CTRL_REG); 122 + writel(0x00, irq_ic_data->irq_base + SUN4I_IRQ_NMI_CTRL_REG); 135 123 136 - sun4i_irq_domain = irq_domain_add_linear(node, 3 * 32, 124 + irq_ic_data->irq_domain = irq_domain_add_linear(node, 3 * 32, 137 125 &sun4i_irq_ops, NULL); 138 - if (!sun4i_irq_domain) 126 + if (!irq_ic_data->irq_domain) 139 127 panic("%pOF: unable to create IRQ domain\n", node); 140 128 141 129 set_handle_irq(sun4i_handle_irq); 142 130 143 131 return 0; 144 132 } 145 - IRQCHIP_DECLARE(allwinner_sun4i_ic, "allwinner,sun4i-a10-ic", sun4i_of_init); 133 + 134 + static int __init sun4i_ic_of_init(struct device_node *node, 135 + struct device_node *parent) 136 + { 137 + irq_ic_data = kzalloc(sizeof(struct sun4i_irq_chip_data), GFP_KERNEL); 138 + if (!irq_ic_data) { 139 + pr_err("kzalloc failed!\n"); 140 + return -ENOMEM; 141 + } 142 + 143 + irq_ic_data->enable_reg_offset = SUN4I_IRQ_ENABLE_REG_OFFSET; 144 + irq_ic_data->mask_reg_offset = SUN4I_IRQ_MASK_REG_OFFSET; 145 + 146 + return sun4i_of_init(node, parent); 147 + } 148 + 149 + IRQCHIP_DECLARE(allwinner_sun4i_ic, "allwinner,sun4i-a10-ic", sun4i_ic_of_init); 150 + 151 + static int __init suniv_ic_of_init(struct device_node *node, 152 + struct device_node *parent) 153 + { 154 + irq_ic_data = kzalloc(sizeof(struct sun4i_irq_chip_data), GFP_KERNEL); 155 + if (!irq_ic_data) { 156 + pr_err("kzalloc failed!\n"); 157 + return -ENOMEM; 158 + } 159 + 160 + irq_ic_data->enable_reg_offset = SUNIV_IRQ_ENABLE_REG_OFFSET; 161 + irq_ic_data->mask_reg_offset = SUNIV_IRQ_MASK_REG_OFFSET; 162 + 163 + return sun4i_of_init(node, parent); 164 + } 165 + 166 + IRQCHIP_DECLARE(allwinner_sunvi_ic, "allwinner,suniv-f1c100s-ic", 167 + suniv_ic_of_init); 146 168 147 169 static void __exception_irq_entry sun4i_handle_irq(struct 
pt_regs *regs) 148 170 { ··· 192 146 * the extra check in the common case of 1 happening after having 193 147 * read the vector-reg once. 194 148 */ 195 - hwirq = readl(sun4i_irq_base + SUN4I_IRQ_VECTOR_REG) >> 2; 149 + hwirq = readl(irq_ic_data->irq_base + SUN4I_IRQ_VECTOR_REG) >> 2; 196 150 if (hwirq == 0 && 197 - !(readl(sun4i_irq_base + SUN4I_IRQ_PENDING_REG(0)) & BIT(0))) 151 + !(readl(irq_ic_data->irq_base + SUN4I_IRQ_PENDING_REG(0)) & 152 + BIT(0))) 198 153 return; 199 154 200 155 do { 201 - handle_domain_irq(sun4i_irq_domain, hwirq, regs); 202 - hwirq = readl(sun4i_irq_base + SUN4I_IRQ_VECTOR_REG) >> 2; 156 + handle_domain_irq(irq_ic_data->irq_domain, hwirq, regs); 157 + hwirq = readl(irq_ic_data->irq_base + 158 + SUN4I_IRQ_VECTOR_REG) >> 2; 203 159 } while (hwirq != 0); 204 160 }
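The sun4i rework above replaces the fixed `ENABLE`/`MASK` register offsets with per-variant values carried in `struct sun4i_irq_chip_data`, so the same driver serves both the A10 (`0x40`/`0x50`) and the F1C100s (`0x20`/`0x30`) layouts. The offset arithmetic of the reworked `SUN4I_IRQ_ENABLE_REG()`/`SUN4I_IRQ_MASK_REG()` macros, sketched standalone:

```c
#include <assert.h>
#include <stdint.h>

/* Per-variant layout, mirroring the relevant fields of
 * struct sun4i_irq_chip_data in the driver. */
struct ic_layout {
	uint32_t enable_reg_offset;
	uint32_t mask_reg_offset;
};

static const struct ic_layout sun4i_layout = { 0x40, 0x50 };	/* A10 */
static const struct ic_layout suniv_layout = { 0x20, 0x30 };	/* F1C100s */

/* Mirrors SUN4I_IRQ_ENABLE_REG(data, x): one 32-bit register per
 * bank of 32 interrupts. */
static uint32_t enable_reg(const struct ic_layout *d, int bank)
{
	return d->enable_reg_offset + 0x4 * bank;
}

/* Mirrors SUN4I_IRQ_MASK_REG(data, x). */
static uint32_t mask_reg(const struct ic_layout *d, int bank)
{
	return d->mask_reg_offset + 0x4 * bank;
}
```

Selecting a layout at init time (as the two `IRQCHIP_DECLARE` entry points do) keeps every register access in the shared code path identical across variants.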
+5 -5
drivers/irqchip/irq-tango.c
··· 184 184 185 185 irq = irq_of_parse_and_map(node, 0); 186 186 if (!irq) 187 - panic("%s: failed to get IRQ", node->name); 187 + panic("%pOFn: failed to get IRQ", node); 188 188 189 189 err = of_address_to_resource(node, 0, &res); 190 190 if (err) 191 - panic("%s: failed to get address", node->name); 191 + panic("%pOFn: failed to get address", node); 192 192 193 193 chip = kzalloc(sizeof(*chip), GFP_KERNEL); 194 194 chip->ctl = res.start - baseres->start; ··· 196 196 197 197 dom = irq_domain_add_linear(node, 64, &irq_generic_chip_ops, chip); 198 198 if (!dom) 199 - panic("%s: failed to create irqdomain", node->name); 199 + panic("%pOFn: failed to create irqdomain", node); 200 200 201 201 err = irq_alloc_domain_generic_chips(dom, 32, 2, node->name, 202 202 handle_level_irq, 0, 0, 0); 203 203 if (err) 204 - panic("%s: failed to allocate irqchip", node->name); 204 + panic("%pOFn: failed to allocate irqchip", node); 205 205 206 206 tangox_irq_domain_init(dom); 207 207 ··· 219 219 220 220 base = of_iomap(node, 0); 221 221 if (!base) 222 - panic("%s: of_iomap failed", node->name); 222 + panic("%pOFn: of_iomap failed", node); 223 223 224 224 of_address_to_resource(node, 0, &res); 225 225
+18 -5
drivers/pci/msi.c
··· 534 534 static struct msi_desc * 535 535 msi_setup_entry(struct pci_dev *dev, int nvec, const struct irq_affinity *affd) 536 536 { 537 - struct cpumask *masks = NULL; 537 + struct irq_affinity_desc *masks = NULL; 538 538 struct msi_desc *entry; 539 539 u16 control; 540 540 541 541 if (affd) 542 542 masks = irq_create_affinity_masks(nvec, affd); 543 - 544 543 545 544 /* MSI Entry Initialization */ 546 545 entry = alloc_msi_entry(&dev->dev, nvec, masks); ··· 671 672 struct msix_entry *entries, int nvec, 672 673 const struct irq_affinity *affd) 673 674 { 674 - struct cpumask *curmsk, *masks = NULL; 675 + struct irq_affinity_desc *curmsk, *masks = NULL; 675 676 struct msi_desc *entry; 676 677 int ret, i; 677 678 ··· 1035 1036 if (maxvec < minvec) 1036 1037 return -ERANGE; 1037 1038 1039 + /* 1040 + * If the caller is passing in sets, we can't support a range of 1041 + * vectors. The caller needs to handle that. 1042 + */ 1043 + if (affd && affd->nr_sets && minvec != maxvec) 1044 + return -EINVAL; 1045 + 1038 1046 if (WARN_ON_ONCE(dev->msi_enabled)) 1039 1047 return -EINVAL; 1040 1048 ··· 1092 1086 1093 1087 if (maxvec < minvec) 1094 1088 return -ERANGE; 1089 + 1090 + /* 1091 + * If the caller is passing in sets, we can't support a range of 1092 + * supported vectors. The caller needs to handle that. 1093 + */ 1094 + if (affd && affd->nr_sets && minvec != maxvec) 1095 + return -EINVAL; 1095 1096 1096 1097 if (WARN_ON_ONCE(dev->msix_enabled)) 1097 1098 return -EINVAL; ··· 1263 1250 1264 1251 for_each_pci_msi_entry(entry, dev) { 1265 1252 if (i == nr) 1266 - return entry->affinity; 1253 + return &entry->affinity->mask; 1267 1254 i++; 1268 1255 } 1269 1256 WARN_ON_ONCE(1); ··· 1275 1262 nr >= entry->nvec_used)) 1276 1263 return NULL; 1277 1264 1278 - return &entry->affinity[nr]; 1265 + return &entry->affinity[nr].mask; 1279 1266 } else { 1280 1267 return cpu_possible_mask; 1281 1268 }
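The new check in the MSI/MSI-X enable paths above rejects a vector *range* whenever the caller passes affinity sets: with fixed-size sets the spreading code cannot shrink the allocation, so `minvec` must equal `maxvec`. The validation logic in isolation (a sketch, not the kernel function):

```c
#include <assert.h>
#include <errno.h>

/*
 * Mirrors the range checks added to __pci_enable_msi_range() and
 * __pci_enable_msix_range(): nr_sets stands in for affd->nr_sets.
 */
static int check_msi_range(int minvec, int maxvec, int nr_sets)
{
	if (maxvec < minvec)
		return -ERANGE;

	/* Sets fix the vector count, so a range cannot be honoured */
	if (nr_sets && minvec != maxvec)
		return -EINVAL;

	return 0;
}
```

A caller using sets must therefore compute the exact vector count up front (for example, queue vectors plus non-queue vectors) and request it with `minvec == maxvec`.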
+17 -2
include/linux/interrupt.h
··· 247 247 * the MSI(-X) vector space 248 248 * @post_vectors: Don't apply affinity to @post_vectors at end of 249 249 * the MSI(-X) vector space 250 + * @nr_sets: Length of passed in *sets array 251 + * @sets: Number of affinitized sets 250 252 */ 251 253 struct irq_affinity { 252 254 int pre_vectors; 253 255 int post_vectors; 256 + int nr_sets; 257 + int *sets; 258 + }; 259 + 260 + /** 261 + * struct irq_affinity_desc - Interrupt affinity descriptor 262 + * @mask: cpumask to hold the affinity assignment 263 + */ 264 + struct irq_affinity_desc { 265 + struct cpumask mask; 266 + unsigned int is_managed : 1; 254 267 }; 255 268 256 269 #if defined(CONFIG_SMP) ··· 312 299 extern int 313 300 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify); 314 301 315 - struct cpumask *irq_create_affinity_masks(int nvec, const struct irq_affinity *affd); 302 + struct irq_affinity_desc * 303 + irq_create_affinity_masks(int nvec, const struct irq_affinity *affd); 304 + 316 305 int irq_calc_affinity_vectors(int minvec, int maxvec, const struct irq_affinity *affd); 317 306 318 307 #else /* CONFIG_SMP */ ··· 348 333 return 0; 349 334 } 350 335 351 - static inline struct cpumask * 336 + static inline struct irq_affinity_desc * 352 337 irq_create_affinity_masks(int nvec, const struct irq_affinity *affd) 353 338 { 354 339 return NULL;
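In the extended `struct irq_affinity` above, `pre_vectors` and `post_vectors` are excluded from spreading, and when `nr_sets` is non-zero the remaining vectors are partitioned into independently spread sets whose sizes are listed in `sets`. A sketch of the bookkeeping implied by those fields (hypothetical helpers, not the kernel's `irq_create_affinity_masks()`):

```c
#include <assert.h>

/* Cut-down mirror of the kernel's struct irq_affinity. */
struct irq_affinity_sketch {
	int pre_vectors;	/* reserved at the start, not spread */
	int post_vectors;	/* reserved at the end, not spread */
	int nr_sets;		/* length of sets[] */
	const int *sets;	/* vectors per independently spread set */
};

/* Vectors that actually receive an affinity spread. */
static int spreadable_vectors(int nvec, const struct irq_affinity_sketch *a)
{
	return nvec - a->pre_vectors - a->post_vectors;
}

/* The set sizes must account for every spreadable vector exactly. */
static int sets_consistent(int nvec, const struct irq_affinity_sketch *a)
{
	int i, total = 0;

	for (i = 0; i < a->nr_sets; i++)
		total += a->sets[i];

	return total == spreadable_vectors(nvec, a);
}
```

For example, a device wanting 10 vectors with one pre-vector for error handling, one post-vector, and two queue sets of 6 and 2 has 8 spreadable vectors, which the sets cover exactly.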
+4 -2
include/linux/irq.h
··· 27 27 struct seq_file; 28 28 struct module; 29 29 struct msi_msg; 30 + struct irq_affinity_desc; 30 31 enum irqchip_irq_state; 31 32 32 33 /* ··· 835 834 unsigned int arch_dynirq_lower_bound(unsigned int from); 836 835 837 836 int __irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node, 838 - struct module *owner, const struct cpumask *affinity); 837 + struct module *owner, 838 + const struct irq_affinity_desc *affinity); 839 839 840 840 int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from, 841 841 unsigned int cnt, int node, struct module *owner, 842 - const struct cpumask *affinity); 842 + const struct irq_affinity_desc *affinity); 843 843 844 844 /* use macros to avoid needing export.h for THIS_MODULE */ 845 845 #define irq_alloc_descs(irq, from, cnt, node) \
+1 -1
include/linux/irq_sim.h
··· 16 16 17 17 struct irq_sim_work_ctx { 18 18 struct irq_work work; 19 - int irq; 19 + unsigned long *pending; 20 20 }; 21 21 22 22 struct irq_sim_irq_ctx {
+2 -2
include/linux/irqchip.h
··· 19 19 * the association between their DT compatible string and their 20 20 * initialization function. 21 21 * 22 - * @name: name that must be unique accross all IRQCHIP_DECLARE of the 22 + * @name: name that must be unique across all IRQCHIP_DECLARE of the 23 23 * same file. 24 24 * @compstr: compatible string of the irqchip driver 25 25 * @fn: initialization function ··· 30 30 * This macro must be used by the different irqchip drivers to declare 31 31 * the association between their version and their initialization function. 32 32 * 33 - * @name: name that must be unique accross all IRQCHIP_ACPI_DECLARE of the 33 + * @name: name that must be unique across all IRQCHIP_ACPI_DECLARE of the 34 34 * same file. 35 35 * @subtable: Subtable to be identified in MADT 36 36 * @validate: Function to be called on that subtable to check its validity.
+132
include/linux/irqchip/irq-madera.h
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Interrupt support for Cirrus Logic Madera codecs
+ *
+ * Copyright (C) 2016-2018 Cirrus Logic, Inc. and
+ *                         Cirrus Logic International Semiconductor Ltd.
+ */
+
+#ifndef IRQCHIP_MADERA_H
+#define IRQCHIP_MADERA_H
+
+#include <linux/interrupt.h>
+#include <linux/mfd/madera/core.h>
+
+#define MADERA_IRQ_FLL1_LOCK		0
+#define MADERA_IRQ_FLL2_LOCK		1
+#define MADERA_IRQ_FLL3_LOCK		2
+#define MADERA_IRQ_FLLAO_LOCK		3
+#define MADERA_IRQ_CLK_SYS_ERR		4
+#define MADERA_IRQ_CLK_ASYNC_ERR	5
+#define MADERA_IRQ_CLK_DSP_ERR		6
+#define MADERA_IRQ_HPDET		7
+#define MADERA_IRQ_MICDET1		8
+#define MADERA_IRQ_MICDET2		9
+#define MADERA_IRQ_JD1_RISE		10
+#define MADERA_IRQ_JD1_FALL		11
+#define MADERA_IRQ_JD2_RISE		12
+#define MADERA_IRQ_JD2_FALL		13
+#define MADERA_IRQ_MICD_CLAMP_RISE	14
+#define MADERA_IRQ_MICD_CLAMP_FALL	15
+#define MADERA_IRQ_DRC2_SIG_DET		16
+#define MADERA_IRQ_DRC1_SIG_DET		17
+#define MADERA_IRQ_ASRC1_IN1_LOCK	18
+#define MADERA_IRQ_ASRC1_IN2_LOCK	19
+#define MADERA_IRQ_ASRC2_IN1_LOCK	20
+#define MADERA_IRQ_ASRC2_IN2_LOCK	21
+#define MADERA_IRQ_DSP_IRQ1		22
+#define MADERA_IRQ_DSP_IRQ2		23
+#define MADERA_IRQ_DSP_IRQ3		24
+#define MADERA_IRQ_DSP_IRQ4		25
+#define MADERA_IRQ_DSP_IRQ5		26
+#define MADERA_IRQ_DSP_IRQ6		27
+#define MADERA_IRQ_DSP_IRQ7		28
+#define MADERA_IRQ_DSP_IRQ8		29
+#define MADERA_IRQ_DSP_IRQ9		30
+#define MADERA_IRQ_DSP_IRQ10		31
+#define MADERA_IRQ_DSP_IRQ11		32
+#define MADERA_IRQ_DSP_IRQ12		33
+#define MADERA_IRQ_DSP_IRQ13		34
+#define MADERA_IRQ_DSP_IRQ14		35
+#define MADERA_IRQ_DSP_IRQ15		36
+#define MADERA_IRQ_DSP_IRQ16		37
+#define MADERA_IRQ_HP1L_SC		38
+#define MADERA_IRQ_HP1R_SC		39
+#define MADERA_IRQ_HP2L_SC		40
+#define MADERA_IRQ_HP2R_SC		41
+#define MADERA_IRQ_HP3L_SC		42
+#define MADERA_IRQ_HP3R_SC		43
+#define MADERA_IRQ_SPKOUTL_SC		44
+#define MADERA_IRQ_SPKOUTR_SC		45
+#define MADERA_IRQ_HP1L_ENABLE_DONE	46
+#define MADERA_IRQ_HP1R_ENABLE_DONE	47
+#define MADERA_IRQ_HP2L_ENABLE_DONE	48
+#define MADERA_IRQ_HP2R_ENABLE_DONE	49
+#define MADERA_IRQ_HP3L_ENABLE_DONE	50
+#define MADERA_IRQ_HP3R_ENABLE_DONE	51
+#define MADERA_IRQ_SPKOUTL_ENABLE_DONE	52
+#define MADERA_IRQ_SPKOUTR_ENABLE_DONE	53
+#define MADERA_IRQ_SPK_SHUTDOWN		54
+#define MADERA_IRQ_SPK_OVERHEAT		55
+#define MADERA_IRQ_SPK_OVERHEAT_WARN	56
+#define MADERA_IRQ_GPIO1		57
+#define MADERA_IRQ_GPIO2		58
+#define MADERA_IRQ_GPIO3		59
+#define MADERA_IRQ_GPIO4		60
+#define MADERA_IRQ_GPIO5		61
+#define MADERA_IRQ_GPIO6		62
+#define MADERA_IRQ_GPIO7		63
+#define MADERA_IRQ_GPIO8		64
+#define MADERA_IRQ_DSP1_BUS_ERR		65
+#define MADERA_IRQ_DSP2_BUS_ERR		66
+#define MADERA_IRQ_DSP3_BUS_ERR		67
+#define MADERA_IRQ_DSP4_BUS_ERR		68
+#define MADERA_IRQ_DSP5_BUS_ERR		69
+#define MADERA_IRQ_DSP6_BUS_ERR		70
+#define MADERA_IRQ_DSP7_BUS_ERR		71
+
+#define MADERA_NUM_IRQ			72
+
+/*
+ * These wrapper functions are for use by other child drivers of the
+ * same parent MFD.
+ */
+static inline int madera_get_irq_mapping(struct madera *madera, int irq)
+{
+	if (!madera->irq_dev)
+		return -ENODEV;
+
+	return regmap_irq_get_virq(madera->irq_data, irq);
+}
+
+static inline int madera_request_irq(struct madera *madera, int irq,
+				     const char *name,
+				     irq_handler_t handler, void *data)
+{
+	irq = madera_get_irq_mapping(madera, irq);
+	if (irq < 0)
+		return irq;
+
+	return request_threaded_irq(irq, NULL, handler, IRQF_ONESHOT, name,
+				    data);
+}
+
+static inline void madera_free_irq(struct madera *madera, int irq, void *data)
+{
+	irq = madera_get_irq_mapping(madera, irq);
+	if (irq < 0)
+		return;
+
+	free_irq(irq, data);
+}
+
+static inline int madera_set_irq_wake(struct madera *madera, int irq, int on)
+{
+	irq = madera_get_irq_mapping(madera, irq);
+	if (irq < 0)
+		return irq;
+
+	return irq_set_irq_wake(irq, on);
+}
+
+#endif
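The Madera wrappers all follow one pattern: resolve the parent MFD's virtual IRQ number first, and propagate any negative errno unchanged to the caller. A minimal userspace sketch of that pattern, using a toy struct and function names (not the kernel API), assuming the mapping step can fail with -ENODEV:

```c
#include <errno.h>

/* Toy stand-in for the parent MFD state; hypothetical, for illustration only */
struct toy_chip {
	int have_irq_dev;	/* analogous to madera->irq_dev being set */
	int virq_base;		/* pretend base of the mapped virq range */
};

/* Analog of madera_get_irq_mapping(): translate a local index to a virq */
static int toy_get_irq_mapping(const struct toy_chip *c, int irq)
{
	if (!c->have_irq_dev)
		return -ENODEV;
	return c->virq_base + irq;	/* stand-in for regmap_irq_get_virq() */
}

/* Analog of madera_request_irq(): map first, bail out on negative errno */
static int toy_request_irq(const struct toy_chip *c, int irq)
{
	irq = toy_get_irq_mapping(c, irq);
	if (irq < 0)
		return irq;	/* propagate the errno, as the wrappers do */
	return irq;		/* a real wrapper would request_threaded_irq() here */
}
```

The design point is that child drivers never see raw hardware indices fail silently; every wrapper funnels through the single mapping helper.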
+4 -2
include/linux/irqdomain.h
···
 struct irq_data;
 struct cpumask;
 struct seq_file;
+struct irq_affinity_desc;
 
 /* Number of irqs reserved for a legacy isa controller */
 #define NUM_ISA_INTERRUPTS	16
···
 extern void irq_set_default_host(struct irq_domain *host);
 extern int irq_domain_alloc_descs(int virq, unsigned int nr_irqs,
 				  irq_hw_number_t hwirq, int node,
-				  const struct cpumask *affinity);
+				  const struct irq_affinity_desc *affinity);
 
 static inline struct fwnode_handle *of_node_to_fwnode(struct device_node *node)
 {
···
 extern int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 				   unsigned int nr_irqs, int node, void *arg,
-				   bool realloc, const struct cpumask *affinity);
+				   bool realloc,
+				   const struct irq_affinity_desc *affinity);
 extern void irq_domain_free_irqs(unsigned int virq, unsigned int nr_irqs);
 extern int irq_domain_activate_irq(struct irq_data *irq_data, bool early);
 extern void irq_domain_deactivate_irq(struct irq_data *irq_data);
+4 -2
include/linux/msi.h
···
 	unsigned int nvec_used;
 	struct device *dev;
 	struct msi_msg msg;
-	struct cpumask *affinity;
+	struct irq_affinity_desc *affinity;
 
 	union {
 		/* PCI MSI/X specific data */
···
 	list_first_entry(dev_to_msi_list((dev)), struct msi_desc, list)
 #define for_each_msi_entry(desc, dev)	\
 	list_for_each_entry((desc), dev_to_msi_list((dev)), list)
+#define for_each_msi_entry_safe(desc, tmp, dev)	\
+	list_for_each_entry_safe((desc), (tmp), dev_to_msi_list((dev)), list)
 
 #ifdef CONFIG_PCI_MSI
 #define first_pci_msi_entry(pdev)	first_msi_entry(&(pdev)->dev)
···
 #endif /* CONFIG_PCI_MSI */
 
 struct msi_desc *alloc_msi_entry(struct device *dev, int nvec,
-				 const struct cpumask *affinity);
+				 const struct irq_affinity_desc *affinity);
 void free_msi_entry(struct msi_desc *entry);
 void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
 void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
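The new `for_each_msi_entry_safe()` exists for the platform-msi leak fix mentioned in the merge message: when the loop body frees the current entry, the iterator must cache the successor before the node disappears. A self-contained userspace sketch of the same idiom on a plain singly linked list (toy names, not the kernel list API):

```c
#include <assert.h>
#include <stdlib.h>

struct entry {
	int val;
	struct entry *next;
};

/* Prepend a node; simple helper for building a test list */
static struct entry *push(struct entry *head, int val)
{
	struct entry *e = malloc(sizeof(*e));

	e->val = val;
	e->next = head;
	return e;
}

/*
 * Free every node while walking the list. The "safe" part is caching
 * e->next in tmp *before* free(e) — reading e->next afterwards would be
 * use-after-free, which is exactly why the _safe iterator variant exists.
 */
static int drain(struct entry *head)
{
	struct entry *e, *tmp;
	int freed = 0;

	for (e = head; e; e = tmp) {
		tmp = e->next;	/* grab the successor before e is freed */
		free(e);
		freed++;
	}
	return freed;
}
```

The non-safe `for_each_msi_entry()` remains the right choice when the body only reads entries, since it avoids the extra cursor variable.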
+117 -59
kernel/irq/affinity.c
···
 	return nodes;
 }
 
-static int irq_build_affinity_masks(const struct irq_affinity *affd,
-				    int startvec, int numvecs,
-				    cpumask_var_t *node_to_cpumask,
-				    const struct cpumask *cpu_mask,
-				    struct cpumask *nmsk,
-				    struct cpumask *masks)
+static int __irq_build_affinity_masks(const struct irq_affinity *affd,
+				      int startvec, int numvecs, int firstvec,
+				      cpumask_var_t *node_to_cpumask,
+				      const struct cpumask *cpu_mask,
+				      struct cpumask *nmsk,
+				      struct irq_affinity_desc *masks)
 {
 	int n, nodes, cpus_per_vec, extra_vecs, done = 0;
-	int last_affv = affd->pre_vectors + numvecs;
+	int last_affv = firstvec + numvecs;
 	int curvec = startvec;
 	nodemask_t nodemsk = NODE_MASK_NONE;
···
 	 */
 	if (numvecs <= nodes) {
 		for_each_node_mask(n, nodemsk) {
-			cpumask_copy(masks + curvec, node_to_cpumask[n]);
-			if (++done == numvecs)
-				break;
+			cpumask_or(&masks[curvec].mask,
+				   &masks[curvec].mask,
+				   node_to_cpumask[n]);
 			if (++curvec == last_affv)
-				curvec = affd->pre_vectors;
+				curvec = firstvec;
 		}
+		done = numvecs;
 		goto out;
 	}
···
 		int ncpus, v, vecs_to_assign, vecs_per_node;
 
 		/* Spread the vectors per node */
-		vecs_per_node = (numvecs - (curvec - affd->pre_vectors)) / nodes;
+		vecs_per_node = (numvecs - (curvec - firstvec)) / nodes;
 
 		/* Get the cpus on this node which are in the mask */
 		cpumask_and(nmsk, cpu_mask, node_to_cpumask[n]);
···
 				cpus_per_vec++;
 				--extra_vecs;
 			}
-			irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec);
+			irq_spread_init_one(&masks[curvec].mask, nmsk,
+					    cpus_per_vec);
 		}
 
 		done += v;
 		if (done >= numvecs)
 			break;
 		if (curvec >= last_affv)
-			curvec = affd->pre_vectors;
+			curvec = firstvec;
 		--nodes;
 	}
···
 	return done;
 }
 
+/*
+ * build affinity in two stages:
+ *	1) spread present CPU on these vectors
+ *	2) spread other possible CPUs on these vectors
+ */
+static int irq_build_affinity_masks(const struct irq_affinity *affd,
+				    int startvec, int numvecs, int firstvec,
+				    cpumask_var_t *node_to_cpumask,
+				    struct irq_affinity_desc *masks)
+{
+	int curvec = startvec, nr_present, nr_others;
+	int ret = -ENOMEM;
+	cpumask_var_t nmsk, npresmsk;
+
+	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
+		return ret;
+
+	if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
+		goto fail;
+
+	ret = 0;
+	/* Stabilize the cpumasks */
+	get_online_cpus();
+	build_node_to_cpumask(node_to_cpumask);
+
+	/* Spread on present CPUs starting from affd->pre_vectors */
+	nr_present = __irq_build_affinity_masks(affd, curvec, numvecs,
+						firstvec, node_to_cpumask,
+						cpu_present_mask, nmsk, masks);
+
+	/*
+	 * Spread on non present CPUs starting from the next vector to be
+	 * handled. If the spreading of present CPUs already exhausted the
+	 * vector space, assign the non present CPUs to the already spread
+	 * out vectors.
+	 */
+	if (nr_present >= numvecs)
+		curvec = firstvec;
+	else
+		curvec = firstvec + nr_present;
+	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
+	nr_others = __irq_build_affinity_masks(affd, curvec, numvecs,
+					       firstvec, node_to_cpumask,
+					       npresmsk, nmsk, masks);
+	put_online_cpus();
+
+	if (nr_present < numvecs)
+		WARN_ON(nr_present + nr_others < numvecs);
+
+	free_cpumask_var(npresmsk);
+
+ fail:
+	free_cpumask_var(nmsk);
+	return ret;
+}
+
 /**
  * irq_create_affinity_masks - Create affinity masks for multiqueue spreading
  * @nvecs:	The total number of vectors
  * @affd:	Description of the affinity requirements
  *
- * Returns the masks pointer or NULL if allocation failed.
+ * Returns the irq_affinity_desc pointer or NULL if allocation failed.
  */
-struct cpumask *
+struct irq_affinity_desc *
 irq_create_affinity_masks(int nvecs, const struct irq_affinity *affd)
 {
 	int affvecs = nvecs - affd->pre_vectors - affd->post_vectors;
 	int curvec, usedvecs;
-	cpumask_var_t nmsk, npresmsk, *node_to_cpumask;
-	struct cpumask *masks = NULL;
+	cpumask_var_t *node_to_cpumask;
+	struct irq_affinity_desc *masks = NULL;
+	int i, nr_sets;
 
 	/*
 	 * If there aren't any vectors left after applying the pre/post
···
 	if (nvecs == affd->pre_vectors + affd->post_vectors)
 		return NULL;
 
-	if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL))
-		return NULL;
-
-	if (!zalloc_cpumask_var(&npresmsk, GFP_KERNEL))
-		goto outcpumsk;
-
 	node_to_cpumask = alloc_node_to_cpumask();
 	if (!node_to_cpumask)
-		goto outnpresmsk;
+		return NULL;
 
 	masks = kcalloc(nvecs, sizeof(*masks), GFP_KERNEL);
 	if (!masks)
···
 	/* Fill out vectors at the beginning that don't need affinity */
 	for (curvec = 0; curvec < affd->pre_vectors; curvec++)
-		cpumask_copy(masks + curvec, irq_default_affinity);
-
-	/* Stabilize the cpumasks */
-	get_online_cpus();
-	build_node_to_cpumask(node_to_cpumask);
-
-	/* Spread on present CPUs starting from affd->pre_vectors */
-	usedvecs = irq_build_affinity_masks(affd, curvec, affvecs,
-					    node_to_cpumask, cpu_present_mask,
-					    nmsk, masks);
-
+		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
 	/*
-	 * Spread on non present CPUs starting from the next vector to be
-	 * handled. If the spreading of present CPUs already exhausted the
-	 * vector space, assign the non present CPUs to the already spread
-	 * out vectors.
+	 * Spread on present CPUs starting from affd->pre_vectors. If we
+	 * have multiple sets, build each sets affinity mask separately.
 	 */
-	if (usedvecs >= affvecs)
-		curvec = affd->pre_vectors;
-	else
-		curvec = affd->pre_vectors + usedvecs;
-	cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
-	usedvecs += irq_build_affinity_masks(affd, curvec, affvecs,
-					     node_to_cpumask, npresmsk,
-					     nmsk, masks);
-	put_online_cpus();
+	nr_sets = affd->nr_sets;
+	if (!nr_sets)
+		nr_sets = 1;
+
+	for (i = 0, usedvecs = 0; i < nr_sets; i++) {
+		int this_vecs = affd->sets ? affd->sets[i] : affvecs;
+		int ret;
+
+		ret = irq_build_affinity_masks(affd, curvec, this_vecs,
+					       curvec, node_to_cpumask, masks);
+		if (ret) {
+			kfree(masks);
+			masks = NULL;
+			goto outnodemsk;
+		}
+		curvec += this_vecs;
+		usedvecs += this_vecs;
+	}
 
 	/* Fill out vectors at the end that don't need affinity */
 	if (usedvecs >= affvecs)
···
 	else
 		curvec = affd->pre_vectors + usedvecs;
 	for (; curvec < nvecs; curvec++)
-		cpumask_copy(masks + curvec, irq_default_affinity);
+		cpumask_copy(&masks[curvec].mask, irq_default_affinity);
+
+	/* Mark the managed interrupts */
+	for (i = affd->pre_vectors; i < nvecs - affd->post_vectors; i++)
+		masks[i].is_managed = 1;
 
 outnodemsk:
 	free_node_to_cpumask(node_to_cpumask);
-outnpresmsk:
-	free_cpumask_var(npresmsk);
-outcpumsk:
-	free_cpumask_var(nmsk);
 	return masks;
 }
···
 {
 	int resv = affd->pre_vectors + affd->post_vectors;
 	int vecs = maxvec - resv;
-	int ret;
+	int set_vecs;
 
 	if (resv > minvec)
 		return 0;
 
-	get_online_cpus();
-	ret = min_t(int, cpumask_weight(cpu_possible_mask), vecs) + resv;
-	put_online_cpus();
-	return ret;
+	if (affd->nr_sets) {
+		int i;
+
+		for (i = 0, set_vecs = 0; i < affd->nr_sets; i++)
+			set_vecs += affd->sets[i];
+	} else {
+		get_online_cpus();
+		set_vecs = cpumask_weight(cpu_possible_mask);
+		put_online_cpus();
+	}
+
+	return resv + min(set_vecs, vecs);
}
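The set-aware accounting at the end of this hunk (`irq_calc_affinity_vectors`) is easy to model outside the kernel: reserved pre/post vectors come off the top, and the spreadable budget is capped either by the sum of the requested sets or by the CPU count when no sets are given. A minimal userspace sketch of that arithmetic, with made-up parameter names standing in for `struct irq_affinity` fields and the cpumask weight:

```c
#include <stddef.h>

/*
 * Simplified model of the vector accounting: pre/post are the reserved
 * (non-managed) vectors, sets/nr_sets the optional per-functionality
 * queue sets, ncpus stands in for cpumask_weight(cpu_possible_mask).
 */
static int calc_affinity_vectors(int minvec, int maxvec, int pre, int post,
				 const int *sets, int nr_sets, int ncpus)
{
	int resv = pre + post;
	int vecs = maxvec - resv;
	int set_vecs = 0;

	/* Not even the reserved vectors fit below minvec: nothing to spread */
	if (resv > minvec)
		return 0;

	if (nr_sets) {
		/* Budget is the total of all requested interrupt sets */
		for (int i = 0; i < nr_sets; i++)
			set_vecs += sets[i];
	} else {
		/* No sets: one vector per possible CPU is the natural cap */
		set_vecs = ncpus;
	}

	return resv + (set_vecs < vecs ? set_vecs : vecs);
}
```

For example, two sets of 8 and 4 queues with one pre and one post vector yield 2 + min(12, 30) = 14 usable vectors on a 32-vector, 16-CPU configuration.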
+1 -1
kernel/irq/chip.c
···
 			break;
 		/*
 		 * Bail out if the outer chip is not set up
-		 * and the interrrupt supposed to be started
+		 * and the interrupt supposed to be started
 		 * right away.
 		 */
 		if (WARN_ON(is_chained))
+2 -2
kernel/irq/devres.c
···
 * @cnt:	Number of consecutive irqs to allocate
 * @node:	Preferred node on which the irq descriptor should be allocated
 * @owner:	Owning module (can be NULL)
- * @affinity:	Optional pointer to an affinity mask array of size @cnt
+ * @affinity:	Optional pointer to an irq_affinity_desc array of size @cnt
 *		which hints where the irq descriptors should be allocated
 *		and which default affinities to use
 *
···
 */
int __devm_irq_alloc_descs(struct device *dev, int irq, unsigned int from,
			   unsigned int cnt, int node, struct module *owner,
-			   const struct cpumask *affinity)
+			   const struct irq_affinity_desc *affinity)
{
	struct irq_desc_devres *dr;
	int base;
+2 -2
kernel/irq/ipi.c
···
 	unsigned int next;
 
 	/*
-	 * The IPI requires a seperate HW irq on each CPU. We require
+	 * The IPI requires a separate HW irq on each CPU. We require
 	 * that the destination mask is consecutive. If an
 	 * implementation needs to support holes, it can reserve
 	 * several IPI ranges.
···
 
 	/*
 	 * Get the real hardware irq number if the underlying implementation
-	 * uses a seperate irq per cpu. If the underlying implementation uses
+	 * uses a separate irq per cpu. If the underlying implementation uses
 	 * a single hardware irq for all cpus then the IPI send mechanism
 	 * needs to take care of the cpu destinations.
 	 */
+21 -2
kernel/irq/irq_sim.c
···
 static void irq_sim_handle_irq(struct irq_work *work)
 {
 	struct irq_sim_work_ctx *work_ctx;
+	unsigned int offset = 0;
+	struct irq_sim *sim;
+	int irqnum;
 
 	work_ctx = container_of(work, struct irq_sim_work_ctx, work);
-	handle_simple_irq(irq_to_desc(work_ctx->irq));
+	sim = container_of(work_ctx, struct irq_sim, work_ctx);
+
+	while (!bitmap_empty(work_ctx->pending, sim->irq_count)) {
+		offset = find_next_bit(work_ctx->pending,
+				       sim->irq_count, offset);
+		clear_bit(offset, work_ctx->pending);
+		irqnum = irq_sim_irqnum(sim, offset);
+		handle_simple_irq(irq_to_desc(irqnum));
+	}
 }
 
 /**
···
 	if (sim->irq_base < 0) {
 		kfree(sim->irqs);
 		return sim->irq_base;
+	}
+
+	sim->work_ctx.pending = bitmap_zalloc(num_irqs, GFP_KERNEL);
+	if (!sim->work_ctx.pending) {
+		kfree(sim->irqs);
+		irq_free_descs(sim->irq_base, num_irqs);
+		return -ENOMEM;
 	}
 
 	for (i = 0; i < num_irqs; i++) {
···
 void irq_sim_fini(struct irq_sim *sim)
 {
 	irq_work_sync(&sim->work_ctx.work);
+	bitmap_free(sim->work_ctx.pending);
 	irq_free_descs(sim->irq_base, sim->irq_count);
 	kfree(sim->irqs);
 }
···
 void irq_sim_fire(struct irq_sim *sim, unsigned int offset)
 {
 	if (sim->irqs[offset].enabled) {
-		sim->work_ctx.irq = irq_sim_irqnum(sim, offset);
+		set_bit(offset, sim->work_ctx.pending);
 		irq_work_queue(&sim->work_ctx.work);
 	}
 }
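The irq_sim change replaces a single stored IRQ number (which lost events when two simulated interrupts fired before the work ran) with a pending bitmap that the handler drains. The same producer/consumer idiom can be shown in a self-contained userspace sketch, using a single `unsigned long` word in place of the kernel bitmap API (toy names, single-threaded for clarity):

```c
#include <limits.h>

#define TOY_NBITS (sizeof(unsigned long) * CHAR_BIT)

static unsigned long pending;		/* one bit per simulated IRQ offset */
static int handled[TOY_NBITS];		/* order in which offsets were handled */

/* Producer side: analogous to irq_sim_fire() setting a bit and queueing work */
static void fire(int offset)
{
	pending |= 1UL << offset;	/* firing twice before draining coalesces */
}

/*
 * Consumer side: analogous to the new irq_sim_handle_irq() loop — keep
 * pulling the lowest pending bit, clear it, and "handle" that offset.
 * Returns how many distinct interrupts were delivered.
 */
static int handle_pending(void)
{
	int count = 0;

	while (pending) {
		int offset = __builtin_ctzl(pending);	/* lowest set bit */

		pending &= ~(1UL << offset);		/* clear_bit() analog */
		handled[count++] = offset;		/* handle_simple_irq() stand-in */
	}
	return count;
}
```

Note the coalescing semantics this buys: firing the same offset twice before the handler runs delivers it once, while distinct offsets are never dropped, which is exactly the property the old single `work_ctx->irq` field lacked.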
+16 -12
kernel/irq/irqdesc.c
···
 }
 
 static int alloc_descs(unsigned int start, unsigned int cnt, int node,
-		       const struct cpumask *affinity, struct module *owner)
+		       const struct irq_affinity_desc *affinity,
+		       struct module *owner)
 {
-	const struct cpumask *mask = NULL;
 	struct irq_desc *desc;
-	unsigned int flags;
 	int i;
 
 	/* Validate affinity mask(s) */
 	if (affinity) {
-		for (i = 0, mask = affinity; i < cnt; i++, mask++) {
-			if (cpumask_empty(mask))
+		for (i = 0; i < cnt; i++) {
+			if (cpumask_empty(&affinity[i].mask))
 				return -EINVAL;
 		}
 	}
 
-	flags = affinity ? IRQD_AFFINITY_MANAGED | IRQD_MANAGED_SHUTDOWN : 0;
-	mask = NULL;
-
 	for (i = 0; i < cnt; i++) {
+		const struct cpumask *mask = NULL;
+		unsigned int flags = 0;
+
 		if (affinity) {
-			node = cpu_to_node(cpumask_first(affinity));
-			mask = affinity;
+			if (affinity->is_managed) {
+				flags = IRQD_AFFINITY_MANAGED |
+					IRQD_MANAGED_SHUTDOWN;
+			}
+			mask = &affinity->mask;
+			node = cpu_to_node(cpumask_first(mask));
 			affinity++;
 		}
+
 		desc = alloc_desc(start + i, node, flags, mask, owner);
 		if (!desc)
 			goto err;
···
 }
 
 static inline int alloc_descs(unsigned int start, unsigned int cnt, int node,
-			      const struct cpumask *affinity,
+			      const struct irq_affinity_desc *affinity,
 			      struct module *owner)
 {
 	u32 i;
···
 */
int __ref
__irq_alloc_descs(int irq, unsigned int from, unsigned int cnt, int node,
-		  struct module *owner, const struct cpumask *affinity)
+		  struct module *owner, const struct irq_affinity_desc *affinity)
{
	int start, ret;
+2 -2
kernel/irq/irqdomain.c
···
 EXPORT_SYMBOL_GPL(irq_domain_simple_ops);
 
 int irq_domain_alloc_descs(int virq, unsigned int cnt, irq_hw_number_t hwirq,
-			   int node, const struct cpumask *affinity)
+			   int node, const struct irq_affinity_desc *affinity)
 {
 	unsigned int hint;
···
 */
 int __irq_domain_alloc_irqs(struct irq_domain *domain, int irq_base,
 			    unsigned int nr_irqs, int node, void *arg,
-			    bool realloc, const struct cpumask *affinity)
+			    bool realloc, const struct irq_affinity_desc *affinity)
 {
 	int i, ret, virq;
+1 -1
kernel/irq/manage.c
···
 #endif
 
 	/*
-	 * Interrupts which are not explicitely requested as threaded
+	 * Interrupts which are not explicitly requested as threaded
 	 * interrupts rely on the implicit bh/preempt disable of the hard irq
 	 * context. So we need to disable bh here to avoid deadlocks and other
 	 * side effects.
+30 -4
kernel/irq/matrix.c
···
 	unsigned int		available;
 	unsigned int		allocated;
 	unsigned int		managed;
+	unsigned int		managed_allocated;
 	bool			initialized;
 	bool			online;
 	unsigned long		alloc_map[IRQ_MATRIX_SIZE];
···
 	return best_cpu;
 }
 
+/* Find the best CPU which has the lowest number of managed IRQs allocated */
+static unsigned int matrix_find_best_cpu_managed(struct irq_matrix *m,
+						 const struct cpumask *msk)
+{
+	unsigned int cpu, best_cpu, allocated = UINT_MAX;
+	struct cpumap *cm;
+
+	best_cpu = UINT_MAX;
+
+	for_each_cpu(cpu, msk) {
+		cm = per_cpu_ptr(m->maps, cpu);
+
+		if (!cm->online || cm->managed_allocated > allocated)
+			continue;
+
+		best_cpu = cpu;
+		allocated = cm->managed_allocated;
+	}
+	return best_cpu;
+}
+
 /**
  * irq_matrix_assign_system - Assign system wide entry in the matrix
  * @m:		Matrix pointer
···
 	if (cpumask_empty(msk))
 		return -EINVAL;
 
-	cpu = matrix_find_best_cpu(m, msk);
+	cpu = matrix_find_best_cpu_managed(m, msk);
 	if (cpu == UINT_MAX)
 		return -ENOSPC;
···
 		return -ENOSPC;
 	set_bit(bit, cm->alloc_map);
 	cm->allocated++;
+	cm->managed_allocated++;
 	m->total_allocated++;
 	*mapped_cpu = cpu;
 	trace_irq_matrix_alloc_managed(bit, cpu, m, cm);
···
 	clear_bit(bit, cm->alloc_map);
 	cm->allocated--;
+	if (managed)
+		cm->managed_allocated--;
 
 	if (cm->online)
 		m->total_allocated--;
···
 	seq_printf(sf, "Total allocated:  %6u\n", m->total_allocated);
 	seq_printf(sf, "System: %u: %*pbl\n", nsys, m->matrix_bits,
 		   m->system_map);
-	seq_printf(sf, "%*s| CPU | avl | man | act | vectors\n", ind, " ");
+	seq_printf(sf, "%*s| CPU | avl | man | mac | act | vectors\n", ind, " ");
 	cpus_read_lock();
 	for_each_online_cpu(cpu) {
 		struct cpumap *cm = per_cpu_ptr(m->maps, cpu);
 
-		seq_printf(sf, "%*s %4d  %4u  %4u  %4u  %*pbl\n", ind, " ",
-			   cpu, cm->available, cm->managed, cm->allocated,
+		seq_printf(sf, "%*s %4d  %4u  %4u  %4u  %4u  %*pbl\n", ind, " ",
+			   cpu, cm->available, cm->managed,
+			   cm->managed_allocated, cm->allocated,
 			   m->matrix_bits, cm->alloc_map);
 	}
 	cpus_read_unlock();
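The matrix allocator change above implements the "more balanced single CPU target selection" from the merge message: instead of picking the CPU with the most available vectors, managed allocations now go to the CPU with the fewest managed IRQs already allocated. The selection loop is self-contained enough to sketch in userspace (toy struct, no per-cpu or cpumask machinery):

```c
#include <limits.h>

/* Toy per-CPU state; stand-in for the kernel's struct cpumap */
struct toy_cpumap {
	int online;
	unsigned int managed_allocated;
};

/*
 * Mirror of matrix_find_best_cpu_managed(): scan the candidate CPUs and
 * keep the one with the lowest managed_allocated count. Because the
 * comparison skips only when strictly greater, a tie goes to the last
 * CPU scanned, matching the kernel loop. Returns -1 if none is online.
 */
static int find_best_cpu_managed(const struct toy_cpumap *maps, int ncpus)
{
	unsigned int allocated = UINT_MAX;
	int best_cpu = -1;

	for (int cpu = 0; cpu < ncpus; cpu++) {
		if (!maps[cpu].online || maps[cpu].managed_allocated > allocated)
			continue;

		best_cpu = cpu;
		allocated = maps[cpu].managed_allocated;
	}
	return best_cpu;
}
```

This is why interrupts no longer pile up on one CPU: each managed allocation shifts the minimum, so the next allocation naturally lands elsewhere.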
+4 -4
kernel/irq/msi.c
···
 * @nvec:	The number of vectors used in this entry
 * @affinity:	Optional pointer to an affinity mask array size of @nvec
 *
- * If @affinity is not NULL then a an affinity array[@nvec] is allocated
- * and the affinity masks from @affinity are copied.
+ * If @affinity is not NULL then an affinity array[@nvec] is allocated
+ * and the affinity masks and flags from @affinity are copied.
 */
-struct msi_desc *
-alloc_msi_entry(struct device *dev, int nvec, const struct cpumask *affinity)
+struct msi_desc *alloc_msi_entry(struct device *dev, int nvec,
+				 const struct irq_affinity_desc *affinity)
{
	struct msi_desc *desc;
+3 -3
kernel/irq/spurious.c
···
 	raw_spin_lock(&desc->lock);
 
 	/*
-	 * PER_CPU, nested thread interrupts and interrupts explicitely
+	 * PER_CPU, nested thread interrupts and interrupts explicitly
 	 * marked polled are excluded from polling.
 	 */
 	if (irq_settings_is_per_cpu(desc) ||
···
 
 	/*
 	 * Do not poll disabled interrupts unless the spurious
-	 * disabled poller asks explicitely.
+	 * disabled poller asks explicitly.
 	 */
 	if (irqd_irq_disabled(&desc->irq_data) && !force)
 		goto out;
···
 	 * So in case a thread is woken, we just note the fact and
 	 * defer the analysis to the next hardware interrupt.
 	 *
-	 * The threaded handlers store whether they sucessfully
+	 * The threaded handlers store whether they successfully
 	 * handled an interrupt and we check whether that number
 	 * changed versus the last invocation.
 	 *