Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq core updates from Thomas Gleixner:
"A rather large update for the interrupt core code and the irq chip drivers:

 - Add a new bitmap matrix allocator and supporting changes. It is used
   to replace the x86 vector allocator, which comes with a separate pull
   request. This makes it possible to replace the convoluted nested-loop
   allocation function in x86 with a facility that properly supports the
   recently added managed-interrupts feature, and to switch to a
   best-effort vector reservation scheme, which addresses problems with
   vector exhaustion.
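The core idea of a bitmap matrix allocator — one bitmap row per CPU, with the allocator picking the least loaded CPU — can be sketched in plain userspace C. All names and sizes below are invented for illustration; the real implementation is the irq_matrix_*() API in kernel/irq/matrix.c, which additionally distinguishes reserved from allocated bits to implement the best-effort reservation scheme.

```c
#include <stdint.h>

/* Toy model of the bitmap matrix: one row per CPU, one bit per vector.
 * NCPUS/NVECS and all identifiers are illustrative only. */
#define NCPUS 4
#define NVECS 32

static uint32_t matrix[NCPUS];	/* set bit => vector in use on that CPU */

/* Best-effort policy: pick the CPU with the most free vectors, then the
 * lowest free bit on that CPU. Returns the vector number and stores the
 * chosen CPU in *cpu_out, or returns -1 on vector exhaustion. */
static int matrix_alloc(unsigned int *cpu_out)
{
	int best_cpu = -1, best_free = 0;

	for (int cpu = 0; cpu < NCPUS; cpu++) {
		int nfree = NVECS - __builtin_popcount(matrix[cpu]);
		if (nfree > best_free) {
			best_free = nfree;
			best_cpu = cpu;
		}
	}
	if (best_cpu < 0)
		return -1;	/* every vector on every CPU is taken */

	int vec = __builtin_ctz(~matrix[best_cpu]);	/* lowest clear bit */
	matrix[best_cpu] |= 1u << vec;
	*cpu_out = (unsigned int)best_cpu;
	return vec;
}
```

Successive allocations naturally spread across CPUs (cpu 0, 1, 2, 3, then back to 0), replacing the nested per-CPU/per-vector search loop with two cheap bitmap operations per row.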

- A large update to the ARM GIC-V3-ITS driver adding support for
range selectors.

- New interrupt controllers:
- Meson and Meson8 GPIO
- BCM7271 L2
- Socionext EXIU

     If you expected this to stop at some point, I have to
     disappoint you. There are new ones posted already. Sigh!

- STM32 interrupt controller support for new platforms.

 - A pile of fixes, cleanups and updates to the MIPS GIC driver.

- The usual small fixes, cleanups and updates all over the place.
   The most visible one is moving the irq chip drivers' Kconfig switches
   into a separate Kconfig menu"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (70 commits)
genirq: Fix type of shifting literal 1 in __setup_irq()
irqdomain: Drop pointless NULL check in virq_debug_show_one
genirq/proc: Return proper error code when irq_set_affinity() fails
irq/work: Use llist_for_each_entry_safe
irqchip: mips-gic: Print warning if inherited GIC base is used
irqchip/mips-gic: Add pr_fmt and reword pr_* messages
irqchip/stm32: Move the wakeup on interrupt mask
irqchip/stm32: Fix initial values
irqchip/stm32: Add stm32h7 support
dt-bindings/interrupt-controllers: Add compatible string for stm32h7
irqchip/stm32: Add multi-bank management
irqchip/stm32: Select GENERIC_IRQ_CHIP
irqchip/exiu: Add support for Socionext Synquacer EXIU controller
dt-bindings: Add description of Socionext EXIU interrupt controller
irqchip/gic-v3-its: Fix VPE activate callback return value
irqchip: mips-gic: Make IPI bitmaps static
irqchip: mips-gic: Share register writes in gic_set_type()
irqchip: mips-gic: Remove gic_vpes variable
irqchip: mips-gic: Use num_possible_cpus() to reserve IPIs
irqchip: mips-gic: Configure EIC when CPUs come online
...

+2490 -408
+7
Documentation/admin-guide/kernel-parameters.txt
··· 1716 1716 irqaffinity= [SMP] Set the default irq affinity mask 1717 1717 The argument is a cpu list, as described above. 1718 1718 1719 + irqchip.gicv2_force_probe= 1720 + [ARM, ARM64] 1721 + Format: <bool> 1722 + Force the kernel to look for the second 4kB page 1723 + of a GICv2 controller even if the memory range 1724 + exposed by the device tree is too small. 1725 + 1719 1726 irqfixup [HW] 1720 1727 When an interrupt is not handled search all handlers 1721 1728 for it. Intended to get systems with badly broken
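For instance, the new parameter can be passed at boot by appending it to the kernel command line. A sketch for a GRUB-based system (file location and variable name are the common defaults and vary by distribution):

```shell
# /etc/default/grub (typical location; adjust for your distribution)
GRUB_CMDLINE_LINUX="quiet irqchip.gicv2_force_probe=true"

# Then regenerate the GRUB config and reboot, e.g.:
#   update-grub                                  # Debian/Ubuntu
#   grub2-mkconfig -o /boot/grub2/grub.cfg       # Fedora/RHEL
```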
+1
Documentation/arm64/silicon-errata.txt
··· 70 70 | | | | | 71 71 | Hisilicon | Hip0{5,6,7} | #161010101 | HISILICON_ERRATUM_161010101 | 72 72 | Hisilicon | Hip0{6,7} | #161010701 | N/A | 73 + | Hisilicon | Hip07 | #161600802 | HISILICON_ERRATUM_161600802 | 73 74 | | | | | 74 75 | Qualcomm Tech. | Falkor v1 | E1003 | QCOM_FALKOR_ERRATUM_1003 | 75 76 | Qualcomm Tech. | Falkor v1 | E1009 | QCOM_FALKOR_ERRATUM_1009 |
+36
Documentation/devicetree/bindings/interrupt-controller/amlogic,meson-gpio-intc.txt
··· 1 + Amlogic meson GPIO interrupt controller 2 + 3 + Meson SoCs contain an interrupt controller which is able to watch the SoC 4 + pads and generate an interrupt on edge or level. The controller is essentially 5 + a 256 pads to 8 GIC interrupt multiplexer, with a filter block to select edge 6 + or level and polarity. It does not expose all 256 mux inputs because the 7 + documentation shows that the upper part is not mapped to any pad. The actual 8 + number of interrupts exposed depends on the SoC. 9 + 10 + Required properties: 11 + 12 + - compatible : must have "amlogic,meson-gpio-intc" and either 13 + "amlogic,meson8-gpio-intc" for meson8 SoCs (S802) or 14 + "amlogic,meson8b-gpio-intc" for meson8b SoCs (S805) or 15 + "amlogic,meson-gxbb-gpio-intc" for GXBB SoCs (S905) or 16 + "amlogic,meson-gxl-gpio-intc" for GXL SoCs (S905X, S912) 17 + - interrupt-parent : a phandle to the GIC the interrupts are routed to. 18 + Usually this is provided at the root level of the device tree as it is 19 + common to most of the SoC. 20 + - reg : Specifies base physical address and size of the registers. 21 + - interrupt-controller : Identifies the node as an interrupt controller. 22 + - #interrupt-cells : Specifies the number of cells needed to encode an 23 + interrupt source. The value must be 2. 24 + - meson,channel-interrupts: Array with the 8 upstream hwirq numbers. These 25 + are the hwirqs used on the parent interrupt controller. 26 + 27 + Example: 28 + 29 + gpio_interrupt: interrupt-controller@9880 { 30 + compatible = "amlogic,meson-gxbb-gpio-intc", 31 + "amlogic,meson-gpio-intc"; 32 + reg = <0x0 0x9880 0x0 0x10>; 33 + interrupt-controller; 34 + #interrupt-cells = <2>; 35 + meson,channel-interrupts = <64 65 66 67 68 69 70 71>; 36 + };
+4
Documentation/devicetree/bindings/interrupt-controller/arm,gic-v3.txt
··· 75 75 - reg: Specifies the base physical address and size of the ITS 76 76 registers. 77 77 78 + Optional: 79 + - socionext,synquacer-pre-its: (u32, u32) tuple describing the untranslated 80 + address and size of the pre-ITS window. 81 + 78 82 The main GIC node must contain the appropriate #address-cells, 79 83 #size-cells and ranges properties for the reg property of all ITS 80 84 nodes.
+2 -1
Documentation/devicetree/bindings/interrupt-controller/brcm,l2-intc.txt
··· 2 2 3 3 Required properties: 4 4 5 - - compatible: should be "brcm,l2-intc" 5 + - compatible: should be "brcm,l2-intc" for latched interrupt controllers 6 + should be "brcm,bcm7271-l2-intc" for level interrupt controllers 6 7 - reg: specifies the base physical address and size of the registers 7 8 - interrupt-controller: identifies the node as an interrupt controller 8 9 - #interrupt-cells: specifies the number of cells needed to encode an
+3
Documentation/devicetree/bindings/interrupt-controller/renesas,irqc.txt
··· 13 13 - "renesas,irqc-r8a7793" (R-Car M2-N) 14 14 - "renesas,irqc-r8a7794" (R-Car E2) 15 15 - "renesas,intc-ex-r8a7795" (R-Car H3) 16 + - "renesas,intc-ex-r8a7796" (R-Car M3-W) 17 + - "renesas,intc-ex-r8a77970" (R-Car V3M) 18 + - "renesas,intc-ex-r8a77995" (R-Car D3) 16 19 - #interrupt-cells: has to be <2>: an interrupt index and flags, as defined in 17 20 interrupts.txt in this directory 18 21 - clocks: Must contain a reference to the functional clock.
+32
Documentation/devicetree/bindings/interrupt-controller/socionext,synquacer-exiu.txt
··· 1 + Socionext SynQuacer External Interrupt Unit (EXIU) 2 + 3 + The Socionext Synquacer SoC has an external interrupt unit (EXIU) 4 + that forwards a block of 32 configurable input lines to 32 adjacent 5 + level-high type GICv3 SPIs. 6 + 7 + Required properties: 8 + 9 + - compatible : Should be "socionext,synquacer-exiu". 10 + - reg : Specifies base physical address and size of the 11 + control registers. 12 + - interrupt-controller : Identifies the node as an interrupt controller. 13 + - #interrupt-cells : Specifies the number of cells needed to encode an 14 + interrupt source. The value must be 3. 15 + - interrupt-parent : phandle of the GIC these interrupts are routed to. 16 + - socionext,spi-base : The SPI number of the first SPI of the 32 adjacent 17 + ones the EXIU forwards its interrupts to. 18 + 19 + Notes: 20 + 21 + - Only SPIs can use the EXIU as an interrupt parent. 22 + 23 + Example: 24 + 25 + exiu: interrupt-controller@510c0000 { 26 + compatible = "socionext,synquacer-exiu"; 27 + reg = <0x0 0x510c0000 0x0 0x20>; 28 + interrupt-controller; 29 + interrupt-parent = <&gic>; 30 + #interrupt-cells = <3>; 31 + socionext,spi-base = <112>; 32 + };
+3 -1
Documentation/devicetree/bindings/interrupt-controller/st,stm32-exti.txt
··· 2 2 3 3 Required properties: 4 4 5 - - compatible: Should be "st,stm32-exti" 5 + - compatible: Should be: 6 + "st,stm32-exti" 7 + "st,stm32h7-exti" 6 8 - reg: Specifies base physical address and size of the registers 7 9 - interrupt-controller: Identifies the node as an interrupt controller 8 10 - #interrupt-cells: Specifies the number of cells to encode an interrupt
+5
arch/arm/include/asm/arch_gicv3.h
··· 196 196 isb(); 197 197 } 198 198 199 + static inline u32 gic_read_ctlr(void) 200 + { 201 + return read_sysreg(ICC_CTLR); 202 + } 203 + 199 204 static inline void gic_write_grpen1(u32 val) 200 205 { 201 206 write_sysreg(val, ICC_IGRPEN1);
+19
arch/arm64/Kconfig
··· 556 556 557 557 If unsure, say Y. 558 558 559 + 560 + config SOCIONEXT_SYNQUACER_PREITS 561 + bool "Socionext Synquacer: Workaround for GICv3 pre-ITS" 562 + default y 563 + help 564 + Socionext Synquacer SoCs implement a separate h/w block to generate 565 + MSI doorbell writes with non-zero values for the device ID. 566 + 567 + If unsure, say Y. 568 + 569 + config HISILICON_ERRATUM_161600802 570 + bool "Hip07 161600802: Erroneous redistributor VLPI base" 571 + default y 572 + help 573 + The HiSilicon Hip07 SoC uses the wrong redistributor base 574 + when issued ITS commands such as VMOVP and VMAPP, and requires 575 + a 128kB offset to be applied to the target address in these commands. 576 + 577 + If unsure, say Y. 559 578 endmenu 560 579 561 580
+3
arch/arm64/Kconfig.platforms
··· 161 161 config ARCH_SHMOBILE 162 162 bool 163 163 164 + config ARCH_SYNQUACER 165 + bool "Socionext SynQuacer SoC Family" 166 + 164 167 config ARCH_RENESAS 165 168 bool "Renesas SoC Platforms" 166 169 select ARCH_SHMOBILE
+5
arch/arm64/include/asm/arch_gicv3.h
··· 87 87 isb(); 88 88 } 89 89 90 + static inline u32 gic_read_ctlr(void) 91 + { 92 + return read_sysreg_s(SYS_ICC_CTLR_EL1); 93 + } 94 + 90 95 static inline void gic_write_grpen1(u32 val) 91 96 { 92 97 write_sysreg_s(val, SYS_ICC_IGRPEN1_EL1);
+2 -2
arch/x86/include/asm/irqdomain.h
··· 42 42 unsigned int nr_irqs, void *arg); 43 43 extern void mp_irqdomain_free(struct irq_domain *domain, unsigned int virq, 44 44 unsigned int nr_irqs); 45 - extern void mp_irqdomain_activate(struct irq_domain *domain, 46 - struct irq_data *irq_data); 45 + extern int mp_irqdomain_activate(struct irq_domain *domain, 46 + struct irq_data *irq_data, bool early); 47 47 extern void mp_irqdomain_deactivate(struct irq_domain *domain, 48 48 struct irq_data *irq_data); 49 49 extern int mp_irqdomain_ioapic_idx(struct irq_domain *domain);
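The prototype change above (void to int, plus an early flag) recurs in every hunk that follows: activation becomes fallible, which is what a reservation scheme needs when vectors can run out at activation time. A standalone sketch of the new callback shape, using stand-in struct definitions so it compiles outside the kernel tree (the real types live in <linux/irqdomain.h>; everything here is illustrative):

```c
#include <errno.h>
#include <stdbool.h>

/* Stand-in types: stubs so this sketch compiles standalone. */
struct irq_domain { const char *name; };
struct irq_data { unsigned long hwirq; bool hw_resource_free; };

/* New-style activate callback: returns 0 on success or a negative errno,
 * so the caller can propagate failure to whoever requested the interrupt.
 * "early" indicates early activation, before the interrupt can actually
 * be used, so hardware programming may be deferred. */
static int demo_domain_activate(struct irq_domain *domain,
				struct irq_data *irqd, bool early)
{
	(void)domain;

	if (!irqd->hw_resource_free)
		return -ENOSPC;	/* e.g. vector exhaustion */

	if (!early) {
		/* program the hardware for irqd->hwirq here */
	}
	return 0;
}
```

The x86, IOMMU, and GPIO hunks below are the mechanical fallout of this change: each callback gains the early parameter and a `return 0;` (or a real error, as in gpio-xgene-sb.c).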
+3 -2
arch/x86/kernel/apic/htirq.c
··· 112 112 irq_domain_free_irqs_top(domain, virq, nr_irqs); 113 113 } 114 114 115 - static void htirq_domain_activate(struct irq_domain *domain, 116 - struct irq_data *irq_data) 115 + static int htirq_domain_activate(struct irq_domain *domain, 116 + struct irq_data *irq_data, bool early) 117 117 { 118 118 struct ht_irq_msg msg; 119 119 struct irq_cfg *cfg = irqd_cfg(irq_data); ··· 132 132 HT_IRQ_LOW_MT_ARBITRATED) | 133 133 HT_IRQ_LOW_IRQ_MASKED; 134 134 write_ht_irq_msg(irq_data->irq, &msg); 135 + return 0; 135 136 } 136 137 137 138 static void htirq_domain_deactivate(struct irq_domain *domain,
+5 -4
arch/x86/kernel/apic/io_apic.c
··· 2097 2097 unmask_ioapic_irq(irq_get_irq_data(0)); 2098 2098 } 2099 2099 irq_domain_deactivate_irq(irq_data); 2100 - irq_domain_activate_irq(irq_data); 2100 + irq_domain_activate_irq(irq_data, false); 2101 2101 if (timer_irq_works()) { 2102 2102 if (disable_timer_pin_1 > 0) 2103 2103 clear_IO_APIC_pin(0, pin1); ··· 2119 2119 */ 2120 2120 replace_pin_at_irq_node(data, node, apic1, pin1, apic2, pin2); 2121 2121 irq_domain_deactivate_irq(irq_data); 2122 - irq_domain_activate_irq(irq_data); 2122 + irq_domain_activate_irq(irq_data, false); 2123 2123 legacy_pic->unmask(0); 2124 2124 if (timer_irq_works()) { 2125 2125 apic_printk(APIC_QUIET, KERN_INFO "....... works.\n"); ··· 2978 2978 irq_domain_free_irqs_top(domain, virq, nr_irqs); 2979 2979 } 2980 2980 2981 - void mp_irqdomain_activate(struct irq_domain *domain, 2982 - struct irq_data *irq_data) 2981 + int mp_irqdomain_activate(struct irq_domain *domain, 2982 + struct irq_data *irq_data, bool early) 2983 2983 { 2984 2984 unsigned long flags; 2985 2985 struct irq_pin_list *entry; ··· 2989 2989 for_each_irq_pin(entry, data->irq_2_pin) 2990 2990 __ioapic_write_entry(entry->apic, entry->pin, data->entry); 2991 2991 raw_spin_unlock_irqrestore(&ioapic_lock, flags); 2992 + return 0; 2992 2993 } 2993 2994 2994 2995 void mp_irqdomain_deactivate(struct irq_domain *domain,
+3 -2
arch/x86/platform/uv/uv_irq.c
··· 127 127 * Re-target the irq to the specified CPU and enable the specified MMR located 128 128 * on the specified blade to allow the sending of MSIs to the specified CPU. 129 129 */ 130 - static void uv_domain_activate(struct irq_domain *domain, 131 - struct irq_data *irq_data) 130 + static int uv_domain_activate(struct irq_domain *domain, 131 + struct irq_data *irq_data, bool early) 132 132 { 133 133 uv_program_mmr(irqd_cfg(irq_data), irq_data->chip_data); 134 + return 0; 134 135 } 135 136 136 137 /*
+5 -3
drivers/gpio/gpio-xgene-sb.c
··· 140 140 return irq_create_fwspec_mapping(&fwspec); 141 141 } 142 142 143 - static void xgene_gpio_sb_domain_activate(struct irq_domain *d, 144 - struct irq_data *irq_data) 143 + static int xgene_gpio_sb_domain_activate(struct irq_domain *d, 144 + struct irq_data *irq_data, 145 + bool early) 145 146 { 146 147 struct xgene_gpio_sb *priv = d->host_data; 147 148 u32 gpio = HWIRQ_TO_GPIO(priv, irq_data->hwirq); ··· 151 150 dev_err(priv->gc.parent, 152 151 "Unable to configure XGene GPIO standby pin %d as IRQ\n", 153 152 gpio); 154 - return; 153 + return -ENOSPC; 155 154 } 156 155 157 156 xgene_gpio_set_bit(&priv->gc, priv->regs + MPA_GPIO_SEL_LO, 158 157 gpio * 2, 1); 158 + return 0; 159 159 } 160 160 161 161 static void xgene_gpio_sb_domain_deactivate(struct irq_domain *d,
+3 -2
drivers/iommu/amd_iommu.c
··· 4173 4173 irq_domain_free_irqs_common(domain, virq, nr_irqs); 4174 4174 } 4175 4175 4176 - static void irq_remapping_activate(struct irq_domain *domain, 4177 - struct irq_data *irq_data) 4176 + static int irq_remapping_activate(struct irq_domain *domain, 4177 + struct irq_data *irq_data, bool early) 4178 4178 { 4179 4179 struct amd_ir_data *data = irq_data->chip_data; 4180 4180 struct irq_2_irte *irte_info = &data->irq_2_irte; ··· 4183 4183 if (iommu) 4184 4184 iommu->irte_ops->activate(data->entry, irte_info->devid, 4185 4185 irte_info->index); 4186 + return 0; 4186 4187 } 4187 4188 4188 4189 static void irq_remapping_deactivate(struct irq_domain *domain,
+3 -2
drivers/iommu/intel_irq_remapping.c
··· 1390 1390 irq_domain_free_irqs_common(domain, virq, nr_irqs); 1391 1391 } 1392 1392 1393 - static void intel_irq_remapping_activate(struct irq_domain *domain, 1394 - struct irq_data *irq_data) 1393 + static int intel_irq_remapping_activate(struct irq_domain *domain, 1394 + struct irq_data *irq_data, bool early) 1395 1395 { 1396 1396 struct intel_ir_data *data = irq_data->chip_data; 1397 1397 1398 1398 modify_irte(&data->irq_2_iommu, &data->irte_entry); 1399 + return 0; 1399 1400 } 1400 1401 1401 1402 static void intel_irq_remapping_deactivate(struct irq_domain *domain,
+13
drivers/irqchip/Kconfig
··· 1 + menu "IRQ chip support" 2 + 1 3 config IRQCHIP 2 4 def_bool y 3 5 depends on OF_IRQ ··· 309 307 config STM32_EXTI 310 308 bool 311 309 select IRQ_DOMAIN 310 + select GENERIC_IRQ_CHIP 312 311 313 312 config QCOM_IRQ_COMBINER 314 313 bool "QCOM IRQ combiner support" ··· 327 324 select IRQ_DOMAIN_HIERARCHY 328 325 help 329 326 Support for the UniPhier AIDET (ARM Interrupt Detector). 327 + 328 + config MESON_IRQ_GPIO 329 + bool "Meson GPIO Interrupt Multiplexer" 330 + depends on ARCH_MESON 331 + select IRQ_DOMAIN 332 + select IRQ_DOMAIN_HIERARCHY 333 + help 334 + Support Meson SoC Family GPIO Interrupt Multiplexer 335 + 336 + endmenu
+2
drivers/irqchip/Makefile
··· 81 81 obj-$(CONFIG_STM32_EXTI) += irq-stm32-exti.o 82 82 obj-$(CONFIG_QCOM_IRQ_COMBINER) += qcom-irq-combiner.o 83 83 obj-$(CONFIG_IRQ_UNIPHIER_AIDET) += irq-uniphier-aidet.o 84 + obj-$(CONFIG_ARCH_SYNQUACER) += irq-sni-exiu.o 85 + obj-$(CONFIG_MESON_IRQ_GPIO) += irq-meson-gpio.o
+2 -2
drivers/irqchip/irq-aspeed-i2c-ic.c
··· 76 76 return -ENOMEM; 77 77 78 78 i2c_ic->base = of_iomap(node, 0); 79 - if (IS_ERR(i2c_ic->base)) { 80 - ret = PTR_ERR(i2c_ic->base); 79 + if (!i2c_ic->base) { 80 + ret = -ENOMEM; 81 81 goto err_free_ic; 82 82 } 83 83
+123 -46
drivers/irqchip/irq-brcmstb-l2.c
··· 1 1 /* 2 2 * Generic Broadcom Set Top Box Level 2 Interrupt controller driver 3 3 * 4 - * Copyright (C) 2014 Broadcom Corporation 4 + * Copyright (C) 2014-2017 Broadcom 5 5 * 6 6 * This program is free software; you can redistribute it and/or modify 7 7 * it under the terms of the GNU General Public License version 2 as ··· 31 31 #include <linux/irqchip.h> 32 32 #include <linux/irqchip/chained_irq.h> 33 33 34 - /* Register offsets in the L2 interrupt controller */ 35 - #define CPU_STATUS 0x00 36 - #define CPU_SET 0x04 37 - #define CPU_CLEAR 0x08 38 - #define CPU_MASK_STATUS 0x0c 39 - #define CPU_MASK_SET 0x10 40 - #define CPU_MASK_CLEAR 0x14 34 + struct brcmstb_intc_init_params { 35 + irq_flow_handler_t handler; 36 + int cpu_status; 37 + int cpu_clear; 38 + int cpu_mask_status; 39 + int cpu_mask_set; 40 + int cpu_mask_clear; 41 + }; 42 + 43 + /* Register offsets in the L2 latched interrupt controller */ 44 + static const struct brcmstb_intc_init_params l2_edge_intc_init = { 45 + .handler = handle_edge_irq, 46 + .cpu_status = 0x00, 47 + .cpu_clear = 0x08, 48 + .cpu_mask_status = 0x0c, 49 + .cpu_mask_set = 0x10, 50 + .cpu_mask_clear = 0x14 51 + }; 52 + 53 + /* Register offsets in the L2 level interrupt controller */ 54 + static const struct brcmstb_intc_init_params l2_lvl_intc_init = { 55 + .handler = handle_level_irq, 56 + .cpu_status = 0x00, 57 + .cpu_clear = -1, /* Register not present */ 58 + .cpu_mask_status = 0x04, 59 + .cpu_mask_set = 0x08, 60 + .cpu_mask_clear = 0x0C 61 + }; 41 62 42 63 /* L2 intc private data structure */ 43 64 struct brcmstb_l2_intc_data { 44 - int parent_irq; 45 - void __iomem *base; 46 65 struct irq_domain *domain; 66 + struct irq_chip_generic *gc; 67 + int status_offset; 68 + int mask_offset; 47 69 bool can_wake; 48 70 u32 saved_mask; /* for suspend/resume */ 49 71 }; 50 72 73 + /** 74 + * brcmstb_l2_mask_and_ack - Mask and ack pending interrupt 75 + * @d: irq_data 76 + * 77 + * Chip has separate enable/disable registers instead of a 
single mask 78 + * register and pending interrupt is acknowledged by setting a bit. 79 + * 80 + * Note: This function is generic and could easily be added to the 81 + * generic irqchip implementation if there ever becomes a will to do so. 82 + * Perhaps with a name like irq_gc_mask_disable_and_ack_set(). 83 + * 84 + * e.g.: https://patchwork.kernel.org/patch/9831047/ 85 + */ 86 + static void brcmstb_l2_mask_and_ack(struct irq_data *d) 87 + { 88 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 89 + struct irq_chip_type *ct = irq_data_get_chip_type(d); 90 + u32 mask = d->mask; 91 + 92 + irq_gc_lock(gc); 93 + irq_reg_writel(gc, mask, ct->regs.disable); 94 + *ct->mask_cache &= ~mask; 95 + irq_reg_writel(gc, mask, ct->regs.ack); 96 + irq_gc_unlock(gc); 97 + } 98 + 51 99 static void brcmstb_l2_intc_irq_handle(struct irq_desc *desc) 52 100 { 53 101 struct brcmstb_l2_intc_data *b = irq_desc_get_handler_data(desc); 54 - struct irq_chip_generic *gc = irq_get_domain_generic_chip(b->domain, 0); 55 102 struct irq_chip *chip = irq_desc_get_chip(desc); 56 103 unsigned int irq; 57 104 u32 status; 58 105 59 106 chained_irq_enter(chip, desc); 60 107 61 - status = irq_reg_readl(gc, CPU_STATUS) & 62 - ~(irq_reg_readl(gc, CPU_MASK_STATUS)); 108 + status = irq_reg_readl(b->gc, b->status_offset) & 109 + ~(irq_reg_readl(b->gc, b->mask_offset)); 63 110 64 111 if (status == 0) { 65 112 raw_spin_lock(&desc->lock); ··· 117 70 118 71 do { 119 72 irq = ffs(status) - 1; 120 - /* ack at our level */ 121 - irq_reg_writel(gc, 1 << irq, CPU_CLEAR); 122 73 status &= ~(1 << irq); 123 - generic_handle_irq(irq_find_mapping(b->domain, irq)); 74 + generic_handle_irq(irq_linear_revmap(b->domain, irq)); 124 75 } while (status); 125 76 out: 126 77 chained_irq_exit(chip, desc); ··· 127 82 static void brcmstb_l2_intc_suspend(struct irq_data *d) 128 83 { 129 84 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 85 + struct irq_chip_type *ct = irq_data_get_chip_type(d); 130 86 struct 
brcmstb_l2_intc_data *b = gc->private; 131 87 132 88 irq_gc_lock(gc); 133 89 /* Save the current mask */ 134 - b->saved_mask = irq_reg_readl(gc, CPU_MASK_STATUS); 90 + b->saved_mask = irq_reg_readl(gc, ct->regs.mask); 135 91 136 92 if (b->can_wake) { 137 93 /* Program the wakeup mask */ 138 - irq_reg_writel(gc, ~gc->wake_active, CPU_MASK_SET); 139 - irq_reg_writel(gc, gc->wake_active, CPU_MASK_CLEAR); 94 + irq_reg_writel(gc, ~gc->wake_active, ct->regs.disable); 95 + irq_reg_writel(gc, gc->wake_active, ct->regs.enable); 140 96 } 141 97 irq_gc_unlock(gc); 142 98 } ··· 145 99 static void brcmstb_l2_intc_resume(struct irq_data *d) 146 100 { 147 101 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 102 + struct irq_chip_type *ct = irq_data_get_chip_type(d); 148 103 struct brcmstb_l2_intc_data *b = gc->private; 149 104 150 105 irq_gc_lock(gc); 151 - /* Clear unmasked non-wakeup interrupts */ 152 - irq_reg_writel(gc, ~b->saved_mask & ~gc->wake_active, CPU_CLEAR); 106 + if (ct->chip.irq_ack) { 107 + /* Clear unmasked non-wakeup interrupts */ 108 + irq_reg_writel(gc, ~b->saved_mask & ~gc->wake_active, 109 + ct->regs.ack); 110 + } 153 111 154 112 /* Restore the saved mask */ 155 - irq_reg_writel(gc, b->saved_mask, CPU_MASK_SET); 156 - irq_reg_writel(gc, ~b->saved_mask, CPU_MASK_CLEAR); 113 + irq_reg_writel(gc, b->saved_mask, ct->regs.disable); 114 + irq_reg_writel(gc, ~b->saved_mask, ct->regs.enable); 157 115 irq_gc_unlock(gc); 158 116 } 159 117 160 118 static int __init brcmstb_l2_intc_of_init(struct device_node *np, 161 - struct device_node *parent) 119 + struct device_node *parent, 120 + const struct brcmstb_intc_init_params 121 + *init_params) 162 122 { 163 123 unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN; 164 124 struct brcmstb_l2_intc_data *data; 165 - struct irq_chip_generic *gc; 166 125 struct irq_chip_type *ct; 167 126 int ret; 168 127 unsigned int flags; 128 + int parent_irq; 129 + void __iomem *base; 169 130 170 131 data = 
kzalloc(sizeof(*data), GFP_KERNEL); 171 132 if (!data) 172 133 return -ENOMEM; 173 134 174 - data->base = of_iomap(np, 0); 175 - if (!data->base) { 135 + base = of_iomap(np, 0); 136 + if (!base) { 176 137 pr_err("failed to remap intc L2 registers\n"); 177 138 ret = -ENOMEM; 178 139 goto out_free; 179 140 } 180 141 181 142 /* Disable all interrupts by default */ 182 - writel(0xffffffff, data->base + CPU_MASK_SET); 143 + writel(0xffffffff, base + init_params->cpu_mask_set); 183 144 184 145 /* Wakeup interrupts may be retained from S5 (cold boot) */ 185 146 data->can_wake = of_property_read_bool(np, "brcm,irq-can-wake"); 186 - if (!data->can_wake) 187 - writel(0xffffffff, data->base + CPU_CLEAR); 147 + if (!data->can_wake && (init_params->cpu_clear >= 0)) 148 + writel(0xffffffff, base + init_params->cpu_clear); 188 149 189 - data->parent_irq = irq_of_parse_and_map(np, 0); 190 - if (!data->parent_irq) { 150 + parent_irq = irq_of_parse_and_map(np, 0); 151 + if (!parent_irq) { 191 152 pr_err("failed to find parent interrupt\n"); 192 153 ret = -EINVAL; 193 154 goto out_unmap; ··· 216 163 217 164 /* Allocate a single Generic IRQ chip for this node */ 218 165 ret = irq_alloc_domain_generic_chips(data->domain, 32, 1, 219 - np->full_name, handle_edge_irq, clr, 0, flags); 166 + np->full_name, init_params->handler, clr, 0, flags); 220 167 if (ret) { 221 168 pr_err("failed to allocate generic irq chip\n"); 222 169 goto out_free_domain; 223 170 } 224 171 225 172 /* Set the IRQ chaining logic */ 226 - irq_set_chained_handler_and_data(data->parent_irq, 173 + irq_set_chained_handler_and_data(parent_irq, 227 174 brcmstb_l2_intc_irq_handle, data); 228 175 229 - gc = irq_get_domain_generic_chip(data->domain, 0); 230 - gc->reg_base = data->base; 231 - gc->private = data; 232 - ct = gc->chip_types; 176 + data->gc = irq_get_domain_generic_chip(data->domain, 0); 177 + data->gc->reg_base = base; 178 + data->gc->private = data; 179 + data->status_offset = init_params->cpu_status; 180 + 
data->mask_offset = init_params->cpu_mask_status; 233 181 234 - ct->chip.irq_ack = irq_gc_ack_set_bit; 235 - ct->regs.ack = CPU_CLEAR; 182 + ct = data->gc->chip_types; 183 + 184 + if (init_params->cpu_clear >= 0) { 185 + ct->regs.ack = init_params->cpu_clear; 186 + ct->chip.irq_ack = irq_gc_ack_set_bit; 187 + ct->chip.irq_mask_ack = brcmstb_l2_mask_and_ack; 188 + } else { 189 + /* No Ack - but still slightly more efficient to define this */ 190 + ct->chip.irq_mask_ack = irq_gc_mask_disable_reg; 191 + } 236 192 237 193 ct->chip.irq_mask = irq_gc_mask_disable_reg; 238 - ct->regs.disable = CPU_MASK_SET; 194 + ct->regs.disable = init_params->cpu_mask_set; 195 + ct->regs.mask = init_params->cpu_mask_status; 239 196 240 197 ct->chip.irq_unmask = irq_gc_unmask_enable_reg; 241 - ct->regs.enable = CPU_MASK_CLEAR; 198 + ct->regs.enable = init_params->cpu_mask_clear; 242 199 243 200 ct->chip.irq_suspend = brcmstb_l2_intc_suspend; 244 201 ct->chip.irq_resume = brcmstb_l2_intc_resume; ··· 258 195 /* This IRQ chip can wake the system, set all child interrupts 259 196 * in wake_enabled mask 260 197 */ 261 - gc->wake_enabled = 0xffffffff; 198 + data->gc->wake_enabled = 0xffffffff; 262 199 ct->chip.irq_set_wake = irq_gc_set_wake; 263 200 } 264 201 265 202 pr_info("registered L2 intc (mem: 0x%p, parent irq: %d)\n", 266 - data->base, data->parent_irq); 203 + base, parent_irq); 267 204 268 205 return 0; 269 206 270 207 out_free_domain: 271 208 irq_domain_remove(data->domain); 272 209 out_unmap: 273 - iounmap(data->base); 210 + iounmap(base); 274 211 out_free: 275 212 kfree(data); 276 213 return ret; 277 214 } 278 - IRQCHIP_DECLARE(brcmstb_l2_intc, "brcm,l2-intc", brcmstb_l2_intc_of_init); 215 + 216 + int __init brcmstb_l2_edge_intc_of_init(struct device_node *np, 217 + struct device_node *parent) 218 + { 219 + return brcmstb_l2_intc_of_init(np, parent, &l2_edge_intc_init); 220 + } 221 + IRQCHIP_DECLARE(brcmstb_l2_intc, "brcm,l2-intc", brcmstb_l2_edge_intc_of_init); 222 + 223 + int 
__init brcmstb_l2_lvl_intc_of_init(struct device_node *np, 224 + struct device_node *parent) 225 + { 226 + return brcmstb_l2_intc_of_init(np, parent, &l2_lvl_intc_init); 227 + } 228 + IRQCHIP_DECLARE(bcm7271_l2_intc, "brcm,bcm7271-l2-intc", 229 + brcmstb_l2_lvl_intc_of_init);
+3 -2
drivers/irqchip/irq-gic-common.c
··· 40 40 for (; quirks->desc; quirks++) { 41 41 if (quirks->iidr != (quirks->mask & iidr)) 42 42 continue; 43 - quirks->init(data); 44 - pr_info("GIC: enabling workaround for %s\n", quirks->desc); 43 + if (quirks->init(data)) 44 + pr_info("GIC: enabling workaround for %s\n", 45 + quirks->desc); 45 46 } 46 47 } 47 48
+1 -1
drivers/irqchip/irq-gic-common.h
··· 23 23 24 24 struct gic_quirk { 25 25 const char *desc; 26 - void (*init)(void *data); 26 + bool (*init)(void *data); 27 27 u32 iidr; 28 28 u32 mask; 29 29 };
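With init now returning bool, a quirk entry can decline to apply itself, and the "enabling workaround" message is printed only when it actually did. A standalone sketch of the table-matching pattern (all names here are invented; the real walk is gic_enable_quirks() in drivers/irqchip/irq-gic-common.c):

```c
#include <stdbool.h>
#include <stdio.h>

/* Mirrors struct gic_quirk from the hunk above, standalone. */
struct demo_quirk {
	const char *desc;
	bool (*init)(void *data);	/* true => workaround was applied */
	unsigned int iidr;
	unsigned int mask;
};

static bool quirk_always(void *data)  { (void)data; return true;  }
static bool quirk_decline(void *data) { (void)data; return false; }

static const struct demo_quirk quirks[] = {
	{ "always-on workaround",   quirk_always,  0x00010000, 0xffff0000 },
	{ "conditional workaround", quirk_decline, 0x00010000, 0xffff0000 },
	{ NULL, NULL, 0, 0 },
};

/* Match the hardware IIDR against each entry's masked IIDR and report
 * only the quirks whose init actually fired. Returns the applied count. */
static int demo_enable_quirks(unsigned int iidr,
			      const struct demo_quirk *q, void *data)
{
	int applied = 0;

	for (; q->desc; q++) {
		if (q->iidr != (q->mask & iidr))
			continue;
		if (q->init(data)) {
			printf("enabling workaround for %s\n", q->desc);
			applied++;
		}
	}
	return applied;
}
```

This matters for quirks like the Synquacer pre-ITS one, whose init can fail (e.g. missing DT properties) and must not be reported as active.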
+310 -80
drivers/irqchip/irq-gic-v3-its.c
··· 83 83 u32 psz; 84 84 }; 85 85 86 + struct its_device; 87 + 86 88 /* 87 89 * The ITS structure - contains most of the infrastructure, with the 88 90 * top-level MSI domain, the command queue, the collections, and the ··· 99 97 struct its_cmd_block *cmd_write; 100 98 struct its_baser tables[GITS_BASER_NR_REGS]; 101 99 struct its_collection *collections; 100 + struct fwnode_handle *fwnode_handle; 101 + u64 (*get_msi_base)(struct its_device *its_dev); 102 102 struct list_head its_device_list; 103 103 u64 flags; 104 + unsigned long list_nr; 104 105 u32 ite_size; 105 106 u32 device_ids; 106 107 int numa_node; 108 + unsigned int msi_domain_flags; 109 + u32 pre_its_base; /* for Socionext Synquacer */ 107 110 bool is_v4; 111 + int vlpi_redist_offset; 108 112 }; 109 113 110 114 #define ITS_ITT_ALIGN SZ_256 ··· 159 151 static DEFINE_SPINLOCK(its_lock); 160 152 static struct rdists *gic_rdists; 161 153 static struct irq_domain *its_parent; 162 - 163 - /* 164 - * We have a maximum number of 16 ITSs in the whole system if we're 165 - * using the ITSList mechanism 166 - */ 167 - #define ITS_LIST_MAX 16 168 154 169 155 static unsigned long its_list_map; 170 156 static u16 vmovp_seq_num; ··· 274 272 #define ITS_CMD_QUEUE_SZ SZ_64K 275 273 #define ITS_CMD_QUEUE_NR_ENTRIES (ITS_CMD_QUEUE_SZ / sizeof(struct its_cmd_block)) 276 274 277 - typedef struct its_collection *(*its_cmd_builder_t)(struct its_cmd_block *, 275 + typedef struct its_collection *(*its_cmd_builder_t)(struct its_node *, 276 + struct its_cmd_block *, 278 277 struct its_cmd_desc *); 279 278 280 - typedef struct its_vpe *(*its_cmd_vbuilder_t)(struct its_cmd_block *, 279 + typedef struct its_vpe *(*its_cmd_vbuilder_t)(struct its_node *, 280 + struct its_cmd_block *, 281 281 struct its_cmd_desc *); 282 282 283 283 static void its_mask_encode(u64 *raw_cmd, u64 val, int h, int l) ··· 383 379 cmd->raw_cmd[3] = cpu_to_le64(cmd->raw_cmd[3]); 384 380 } 385 381 386 - static struct its_collection *its_build_mapd_cmd(struct 
its_cmd_block *cmd, 382 + static struct its_collection *its_build_mapd_cmd(struct its_node *its, 383 + struct its_cmd_block *cmd, 387 384 struct its_cmd_desc *desc) 388 385 { 389 386 unsigned long itt_addr; ··· 404 399 return NULL; 405 400 } 406 401 407 - static struct its_collection *its_build_mapc_cmd(struct its_cmd_block *cmd, 402 + static struct its_collection *its_build_mapc_cmd(struct its_node *its, 403 + struct its_cmd_block *cmd, 408 404 struct its_cmd_desc *desc) 409 405 { 410 406 its_encode_cmd(cmd, GITS_CMD_MAPC); ··· 418 412 return desc->its_mapc_cmd.col; 419 413 } 420 414 421 - static struct its_collection *its_build_mapti_cmd(struct its_cmd_block *cmd, 415 + static struct its_collection *its_build_mapti_cmd(struct its_node *its, 416 + struct its_cmd_block *cmd, 422 417 struct its_cmd_desc *desc) 423 418 { 424 419 struct its_collection *col; ··· 438 431 return col; 439 432 } 440 433 441 - static struct its_collection *its_build_movi_cmd(struct its_cmd_block *cmd, 434 + static struct its_collection *its_build_movi_cmd(struct its_node *its, 435 + struct its_cmd_block *cmd, 442 436 struct its_cmd_desc *desc) 443 437 { 444 438 struct its_collection *col; ··· 457 449 return col; 458 450 } 459 451 460 - static struct its_collection *its_build_discard_cmd(struct its_cmd_block *cmd, 452 + static struct its_collection *its_build_discard_cmd(struct its_node *its, 453 + struct its_cmd_block *cmd, 461 454 struct its_cmd_desc *desc) 462 455 { 463 456 struct its_collection *col; ··· 475 466 return col; 476 467 } 477 468 478 - static struct its_collection *its_build_inv_cmd(struct its_cmd_block *cmd, 469 + static struct its_collection *its_build_inv_cmd(struct its_node *its, 470 + struct its_cmd_block *cmd, 479 471 struct its_cmd_desc *desc) 480 472 { 481 473 struct its_collection *col; ··· 493 483 return col; 494 484 } 495 485 496 - static struct its_collection *its_build_int_cmd(struct its_cmd_block *cmd, 486 + static struct its_collection *its_build_int_cmd(struct 
its_node *its,
+					       struct its_cmd_block *cmd,
					       struct its_cmd_desc *desc)
{
	struct its_collection *col;
···
	return col;
}

- static struct its_collection *its_build_clear_cmd(struct its_cmd_block *cmd,
+ static struct its_collection *its_build_clear_cmd(struct its_node *its,
+						   struct its_cmd_block *cmd,
						   struct its_cmd_desc *desc)
{
	struct its_collection *col;
···
	return col;
}

- static struct its_collection *its_build_invall_cmd(struct its_cmd_block *cmd,
+ static struct its_collection *its_build_invall_cmd(struct its_node *its,
+						    struct its_cmd_block *cmd,
						    struct its_cmd_desc *desc)
{
	its_encode_cmd(cmd, GITS_CMD_INVALL);
···
	return NULL;
}

- static struct its_vpe *its_build_vinvall_cmd(struct its_cmd_block *cmd,
+ static struct its_vpe *its_build_vinvall_cmd(struct its_node *its,
+					      struct its_cmd_block *cmd,
					      struct its_cmd_desc *desc)
{
	its_encode_cmd(cmd, GITS_CMD_VINVALL);
···
	return desc->its_vinvall_cmd.vpe;
}

- static struct its_vpe *its_build_vmapp_cmd(struct its_cmd_block *cmd,
+ static struct its_vpe *its_build_vmapp_cmd(struct its_node *its,
+					    struct its_cmd_block *cmd,
					    struct its_cmd_desc *desc)
{
	unsigned long vpt_addr;
+	u64 target;

	vpt_addr = virt_to_phys(page_address(desc->its_vmapp_cmd.vpe->vpt_page));
+	target = desc->its_vmapp_cmd.col->target_address + its->vlpi_redist_offset;

	its_encode_cmd(cmd, GITS_CMD_VMAPP);
	its_encode_vpeid(cmd, desc->its_vmapp_cmd.vpe->vpe_id);
	its_encode_valid(cmd, desc->its_vmapp_cmd.valid);
-	its_encode_target(cmd, desc->its_vmapp_cmd.col->target_address);
+	its_encode_target(cmd, target);
	its_encode_vpt_addr(cmd, vpt_addr);
	its_encode_vpt_size(cmd, LPI_NRBITS - 1);
···
	return desc->its_vmapp_cmd.vpe;
}

- static struct its_vpe *its_build_vmapti_cmd(struct its_cmd_block *cmd,
+ static struct its_vpe *its_build_vmapti_cmd(struct its_node *its,
+					     struct its_cmd_block *cmd,
					     struct its_cmd_desc *desc)
{
	u32 db;
···
	return desc->its_vmapti_cmd.vpe;
}

- static struct its_vpe *its_build_vmovi_cmd(struct its_cmd_block *cmd,
+ static struct its_vpe *its_build_vmovi_cmd(struct its_node *its,
+					    struct its_cmd_block *cmd,
					    struct its_cmd_desc *desc)
{
	u32 db;
···
	return desc->its_vmovi_cmd.vpe;
}

- static struct its_vpe *its_build_vmovp_cmd(struct its_cmd_block *cmd,
+ static struct its_vpe *its_build_vmovp_cmd(struct its_node *its,
+					    struct its_cmd_block *cmd,
					    struct its_cmd_desc *desc)
{
+	u64 target;
+
+	target = desc->its_vmovp_cmd.col->target_address + its->vlpi_redist_offset;
	its_encode_cmd(cmd, GITS_CMD_VMOVP);
	its_encode_seq_num(cmd, desc->its_vmovp_cmd.seq_num);
	its_encode_its_list(cmd, desc->its_vmovp_cmd.its_list);
	its_encode_vpeid(cmd, desc->its_vmovp_cmd.vpe->vpe_id);
-	its_encode_target(cmd, desc->its_vmovp_cmd.col->target_address);
+	its_encode_target(cmd, target);

	its_fixup_cmd(cmd);
···
	dsb(ishst);
}

- static void its_wait_for_range_completion(struct its_node *its,
-					   struct its_cmd_block *from,
-					   struct its_cmd_block *to)
+ static int its_wait_for_range_completion(struct its_node *its,
+					  struct its_cmd_block *from,
+					  struct its_cmd_block *to)
{
	u64 rd_idx, from_idx, to_idx;
	u32 count = 1000000;	/* 1s! */
···
		count--;
		if (!count) {
-			pr_err_ratelimited("ITS queue timeout\n");
-			return;
+			pr_err_ratelimited("ITS queue timeout (%llu %llu %llu)\n",
+					   from_idx, to_idx, rd_idx);
+			return -1;
		}
		cpu_relax();
		udelay(1);
	}
+
+	return 0;
}

/* Warning, macro hell follows */
···
		raw_spin_unlock_irqrestore(&its->lock, flags);	\
		return;						\
	}							\
-	sync_obj = builder(cmd, desc);				\
+	sync_obj = builder(its, cmd, desc);			\
	its_flush_cmd(its, cmd);				\
								\
	if (sync_obj) {						\
···
		if (!sync_cmd)					\
			goto post;				\
								\
-		buildfn(sync_cmd, sync_obj);			\
+		buildfn(its, sync_cmd, sync_obj);		\
		its_flush_cmd(its, sync_cmd);			\
	}							\
								\
···
	next_cmd = its_post_commands(its);			\
	raw_spin_unlock_irqrestore(&its->lock, flags);		\
								\
-	its_wait_for_range_completion(its, cmd, next_cmd);	\
+	if (its_wait_for_range_completion(its, cmd, next_cmd))	\
+		pr_err_ratelimited("ITS cmd %ps failed\n", builder); \
}

- static void its_build_sync_cmd(struct its_cmd_block *sync_cmd,
+ static void its_build_sync_cmd(struct its_node *its,
+				struct its_cmd_block *sync_cmd,
				struct its_collection *sync_col)
{
	its_encode_cmd(sync_cmd, GITS_CMD_SYNC);
···
static BUILD_SINGLE_CMD_FUNC(its_send_single_command, its_cmd_builder_t,
			     struct its_collection, its_build_sync_cmd)

- static void its_build_vsync_cmd(struct its_cmd_block *sync_cmd,
+ static void its_build_vsync_cmd(struct its_node *its,
+				 struct its_cmd_block *sync_cmd,
				 struct its_vpe *sync_vpe)
{
	its_encode_cmd(sync_cmd, GITS_CMD_VSYNC);
···
	its_send_single_vcommand(dev->its, its_build_vmovi_cmd, &desc);
}

- static void its_send_vmapp(struct its_vpe *vpe, bool valid)
+ static void its_send_vmapp(struct its_node *its,
+			    struct its_vpe *vpe, bool valid)
{
	struct its_cmd_desc desc;
-	struct its_node *its;

	desc.its_vmapp_cmd.vpe = vpe;
	desc.its_vmapp_cmd.valid = valid;
+	desc.its_vmapp_cmd.col = &its->collections[vpe->col_idx];

-	list_for_each_entry(its, &its_nodes, entry) {
-		if (!its->is_v4)
-			continue;
-
-		desc.its_vmapp_cmd.col = &its->collections[vpe->col_idx];
-		its_send_single_vcommand(its, its_build_vmapp_cmd, &desc);
-	}
+	its_send_single_vcommand(its, its_build_vmapp_cmd, &desc);
}

static void its_send_vmovp(struct its_vpe *vpe)
···
		if (!its->is_v4)
			continue;

+		if (!vpe->its_vm->vlpi_count[its->list_nr])
+			continue;
+
		desc.its_vmovp_cmd.col = &its->collections[col_id];
		its_send_single_vcommand(its, its_build_vmovp_cmd, &desc);
	}
···
	raw_spin_unlock_irqrestore(&vmovp_lock, flags);
}

- static void its_send_vinvall(struct its_vpe *vpe)
+ static void its_send_vinvall(struct its_node *its, struct its_vpe *vpe)
{
	struct its_cmd_desc desc;
-	struct its_node *its;

	desc.its_vinvall_cmd.vpe = vpe;
-
-	list_for_each_entry(its, &its_nodes, entry) {
-		if (!its->is_v4)
-			continue;
-		its_send_single_vcommand(its, its_build_vinvall_cmd, &desc);
-	}
+	its_send_single_vcommand(its, its_build_vinvall_cmd, &desc);
}

/*
···
	if (irqd_is_forwarded_to_vcpu(d)) {
		struct its_device *its_dev = irq_data_get_irq_chip_data(d);
		u32 event = its_get_event_id(d);
+		struct its_vlpi_map *map;

		prop_page = its_dev->event_map.vm->vprop_page;
-		hwirq = its_dev->event_map.vlpi_maps[event].vintid;
+		map = &its_dev->event_map.vlpi_maps[event];
+		hwirq = map->vintid;
+
+		/* Remember the updated property */
+		map->properties &= ~clr;
+		map->properties |= set | LPI_PROP_GROUP1;
	} else {
		prop_page = gic_rdists->prop_page;
		hwirq = d->hwirq;
···
	return IRQ_SET_MASK_OK_DONE;
}

+ static u64 its_irq_get_msi_base(struct its_device *its_dev)
+ {
+	struct its_node *its = its_dev->its;
+
+	return its->phys_base + GITS_TRANSLATER;
+ }
+
static void its_irq_compose_msi_msg(struct irq_data *d, struct msi_msg *msg)
{
	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
···
	u64 addr;

	its = its_dev->its;
-	addr = its->phys_base + GITS_TRANSLATER;
+	addr = its->get_msi_base(its_dev);

	msg->address_lo = lower_32_bits(addr);
	msg->address_hi = upper_32_bits(addr);
···
	its_send_clear(its_dev, event);

	return 0;
+ }
+
+ static void its_map_vm(struct its_node *its, struct its_vm *vm)
+ {
+	unsigned long flags;
+
+	/* Not using the ITS list? Everything is always mapped. */
+	if (!its_list_map)
+		return;
+
+	raw_spin_lock_irqsave(&vmovp_lock, flags);
+
+	/*
+	 * If the VM wasn't mapped yet, iterate over the vpes and get
+	 * them mapped now.
+	 */
+	vm->vlpi_count[its->list_nr]++;
+
+	if (vm->vlpi_count[its->list_nr] == 1) {
+		int i;
+
+		for (i = 0; i < vm->nr_vpes; i++) {
+			struct its_vpe *vpe = vm->vpes[i];
+			struct irq_data *d = irq_get_irq_data(vpe->irq);
+
+			/* Map the VPE to the first possible CPU */
+			vpe->col_idx = cpumask_first(cpu_online_mask);
+			its_send_vmapp(its, vpe, true);
+			its_send_vinvall(its, vpe);
+			irq_data_update_effective_affinity(d, cpumask_of(vpe->col_idx));
+		}
+	}
+
+	raw_spin_unlock_irqrestore(&vmovp_lock, flags);
+ }
+
+ static void its_unmap_vm(struct its_node *its, struct its_vm *vm)
+ {
+	unsigned long flags;
+
+	/* Not using the ITS list? Everything is always mapped. */
+	if (!its_list_map)
+		return;
+
+	raw_spin_lock_irqsave(&vmovp_lock, flags);
+
+	if (!--vm->vlpi_count[its->list_nr]) {
+		int i;
+
+		for (i = 0; i < vm->nr_vpes; i++)
+			its_send_vmapp(its, vm->vpes[i], false);
+	}
+
+	raw_spin_unlock_irqrestore(&vmovp_lock, flags);
}

static int its_vlpi_map(struct irq_data *d, struct its_cmd_info *info)
···
		/* Already mapped, move it around */
		its_send_vmovi(its_dev, event);
	} else {
+		/* Ensure all the VPEs are mapped on this ITS */
+		its_map_vm(its_dev->its, info->map->vm);
+
+		/*
+		 * Flag the interrupt as forwarded so that we can
+		 * start poking the virtual property table.
+		 */
+		irqd_set_forwarded_to_vcpu(d);
+
+		/* Write out the property to the prop table */
+		lpi_write_config(d, 0xff, info->map->properties);
+
		/* Drop the physical mapping */
		its_send_discard(its_dev, event);

		/* and install the virtual one */
		its_send_vmapti(its_dev, event);
-		irqd_set_forwarded_to_vcpu(d);

		/* Increment the number of VLPIs */
		its_dev->event_map.nr_vlpis++;
···
	lpi_update_config(d, 0xff, (LPI_PROP_DEFAULT_PRIO |
				    LPI_PROP_ENABLED |
				    LPI_PROP_GROUP1));
+
+	/* Potentially unmap the VM from this ITS */
+	its_unmap_vm(its_dev->its, its_dev->event_map.vm);

	/*
	 * Drop the refcount and make the device available again if
···

static int its_alloc_tables(struct its_node *its)
{
-	u64 typer = gic_read_typer(its->base + GITS_TYPER);
-	u32 ids = GITS_TYPER_DEVBITS(typer);
	u64 shr = GITS_BASER_InnerShareable;
	u64 cache = GITS_BASER_RaWaWb;
	u32 psz = SZ_64K;
	int err, i;

-	if (its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_22375) {
-		/*
-		 * erratum 22375: only alloc 8MB table size
-		 * erratum 24313: ignore memory access type
-		 */
-		cache = GITS_BASER_nCnB;
-		ids = 0x14;	/* 20 bits, 8MB */
-	}
-
-	its->device_ids = ids;
+	if (its->flags & ITS_FLAGS_WORKAROUND_CAVIUM_22375)
+		/* erratum 24313: ignore memory access type */
+		cache = GITS_BASER_nCnB;

	for (i = 0; i < GITS_BASER_NR_REGS; i++) {
		struct its_baser *baser = its->tables + i;
···
	return 0;
}

- static void its_irq_domain_activate(struct irq_domain *domain,
-				     struct irq_data *d)
+ static int its_irq_domain_activate(struct irq_domain *domain,
+				    struct irq_data *d, bool early)
{
	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
	u32 event = its_get_event_id(d);
···

	/* Map the GIC IRQ and event to the device */
	its_send_mapti(its_dev, d->hwirq, event);
+	return 0;
}

static void its_irq_domain_deactivate(struct irq_domain *domain,
···
		its_vpe_db_proxy_move(vpe, from, cpu);
	}

+	irq_data_update_effective_affinity(d, cpumask_of(cpu));
+
	return IRQ_SET_MASK_OK_DONE;
}
···
	}
}

+ static void its_vpe_invall(struct its_vpe *vpe)
+ {
+	struct its_node *its;
+
+	list_for_each_entry(its, &its_nodes, entry) {
+		if (!its->is_v4)
+			continue;
+
+		if (its_list_map && !vpe->its_vm->vlpi_count[its->list_nr])
+			continue;
+
+		/*
+		 * Sending a VINVALL to a single ITS is enough, as all
+		 * we need is to reach the redistributors.
+		 */
+		its_send_vinvall(its, vpe);
+		return;
+	}
+ }
+
static int its_vpe_set_vcpu_affinity(struct irq_data *d, void *vcpu_info)
{
	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
···
		return 0;

	case INVALL_VPE:
-		its_send_vinvall(vpe);
+		its_vpe_invall(vpe);
		return 0;

	default:
···
	return err;
}

- static void its_vpe_irq_domain_activate(struct irq_domain *domain,
-					 struct irq_data *d)
+ static int its_vpe_irq_domain_activate(struct irq_domain *domain,
+					struct irq_data *d, bool early)
{
	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+	struct its_node *its;
+
+	/* If we use the list map, we issue VMAPP on demand... */
+	if (its_list_map)
+		return 0;

	/* Map the VPE to the first possible CPU */
	vpe->col_idx = cpumask_first(cpu_online_mask);
-	its_send_vmapp(vpe, true);
-	its_send_vinvall(vpe);
+
+	list_for_each_entry(its, &its_nodes, entry) {
+		if (!its->is_v4)
+			continue;
+
+		its_send_vmapp(its, vpe, true);
+		its_send_vinvall(its, vpe);
+	}
+
+	irq_data_update_effective_affinity(d, cpumask_of(vpe->col_idx));
+
+	return 0;
}

static void its_vpe_irq_domain_deactivate(struct irq_domain *domain,
					  struct irq_data *d)
{
	struct its_vpe *vpe = irq_data_get_irq_chip_data(d);
+	struct its_node *its;

-	its_send_vmapp(vpe, false);
+	/*
+	 * If we use the list map, we unmap the VPE once no VLPIs are
+	 * associated with the VM.
+	 */
+	if (its_list_map)
+		return;
+
+	list_for_each_entry(its, &its_nodes, entry) {
+		if (!its->is_v4)
+			continue;
+
+		its_send_vmapp(its, vpe, false);
+	}
}

static const struct irq_domain_ops its_vpe_domain_ops = {
···
	}
}

- static void __maybe_unused its_enable_quirk_cavium_22375(void *data)
+ static bool __maybe_unused its_enable_quirk_cavium_22375(void *data)
{
	struct its_node *its = data;

+	/* erratum 22375: only alloc 8MB table size */
+	its->device_ids = 0x14;		/* 20 bits, 8MB */
	its->flags |= ITS_FLAGS_WORKAROUND_CAVIUM_22375;
+
+	return true;
}

- static void __maybe_unused its_enable_quirk_cavium_23144(void *data)
+ static bool __maybe_unused its_enable_quirk_cavium_23144(void *data)
{
	struct its_node *its = data;

	its->flags |= ITS_FLAGS_WORKAROUND_CAVIUM_23144;
+
+	return true;
}

- static void __maybe_unused its_enable_quirk_qdf2400_e0065(void *data)
+ static bool __maybe_unused its_enable_quirk_qdf2400_e0065(void *data)
{
	struct its_node *its = data;

	/* On QDF2400, the size of the ITE is 16Bytes */
	its->ite_size = 16;
+
+	return true;
+ }
+
+ static u64 its_irq_get_msi_base_pre_its(struct its_device *its_dev)
+ {
+	struct its_node *its = its_dev->its;
+
+	/*
+	 * The Socionext Synquacer SoC has a so-called 'pre-ITS',
+	 * which maps 32-bit writes targeted at a separate window of
+	 * size '4 << device_id_bits' onto writes to GITS_TRANSLATER
+	 * with device ID taken from bits [device_id_bits + 1:2] of
+	 * the window offset.
+	 */
+	return its->pre_its_base + (its_dev->device_id << 2);
+ }
+
+ static bool __maybe_unused its_enable_quirk_socionext_synquacer(void *data)
+ {
+	struct its_node *its = data;
+	u32 pre_its_window[2];
+	u32 ids;
+
+	if (!fwnode_property_read_u32_array(its->fwnode_handle,
+					    "socionext,synquacer-pre-its",
+					    pre_its_window,
+					    ARRAY_SIZE(pre_its_window))) {
+
+		its->pre_its_base = pre_its_window[0];
+		its->get_msi_base = its_irq_get_msi_base_pre_its;
+
+		ids = ilog2(pre_its_window[1]) - 2;
+		if (its->device_ids > ids)
+			its->device_ids = ids;
+
+		/* the pre-ITS breaks isolation, so disable MSI remapping */
+		its->msi_domain_flags &= ~IRQ_DOMAIN_FLAG_MSI_REMAP;
+		return true;
+	}
+	return false;
+ }
+
+ static bool __maybe_unused its_enable_quirk_hip07_161600802(void *data)
+ {
+	struct its_node *its = data;
+
+	/*
+	 * Hip07 insists on using the wrong address for the VLPI
+	 * page. Trick it into doing the right thing...
+	 */
+	its->vlpi_redist_offset = SZ_128K;
+	return true;
}

static const struct gic_quirk its_quirks[] = {
···
		.iidr	= 0x00001070,	/* QDF2400 ITS rev 1.x */
		.mask	= 0xffffffff,
		.init	= its_enable_quirk_qdf2400_e0065,
	},
#endif
+ #ifdef CONFIG_SOCIONEXT_SYNQUACER_PREITS
+	{
+		/*
+		 * The Socionext Synquacer SoC incorporates ARM's own GIC-500
+		 * implementation, but with a 'pre-ITS' added that requires
+		 * special handling in software.
+		 */
+		.desc	= "ITS: Socionext Synquacer pre-ITS",
+		.iidr	= 0x0001143b,
+		.mask	= 0xffffffff,
+		.init	= its_enable_quirk_socionext_synquacer,
+	},
+ #endif
+ #ifdef CONFIG_HISILICON_ERRATUM_161600802
+	{
+		.desc	= "ITS: Hip07 erratum 161600802",
+		.iidr	= 0x00000004,
+		.mask	= 0xffffffff,
+		.init	= its_enable_quirk_hip07_161600802,
+	},
+ #endif
	{
···

	inner_domain->parent = its_parent;
	irq_domain_update_bus_token(inner_domain, DOMAIN_BUS_NEXUS);
-	inner_domain->flags |= IRQ_DOMAIN_FLAG_MSI_REMAP;
+	inner_domain->flags |= its->msi_domain_flags;
	info->ops = &its_msi_domain_ops;
	info->data = its;
	inner_domain->host_data = info;
···
	 * locking. Should this change, we should address
	 * this.
	 */
-	its_number = find_first_zero_bit(&its_list_map, ITS_LIST_MAX);
-	if (its_number >= ITS_LIST_MAX) {
+	its_number = find_first_zero_bit(&its_list_map, GICv4_ITS_LIST_MAX);
+	if (its_number >= GICv4_ITS_LIST_MAX) {
		pr_err("ITS@%pa: No ITSList entry available!\n",
		       &res->start);
		return -EINVAL;
···
	its->base = its_base;
	its->phys_base = res->start;
	its->ite_size = GITS_TYPER_ITT_ENTRY_SIZE(typer);
+	its->device_ids = GITS_TYPER_DEVBITS(typer);
	its->is_v4 = !!(typer & GITS_TYPER_VLPIS);
	if (its->is_v4) {
		if (!(typer & GITS_TYPER_VMOVP)) {
			err = its_compute_its_list_map(res, its_base);
			if (err < 0)
				goto out_free_its;
+
+			its->list_nr = err;

			pr_info("ITS@%pa: Using ITS number %d\n",
				&res->start, err);
···
		goto out_free_its;
	}
	its->cmd_write = its->cmd_base;
+	its->fwnode_handle = handle;
+	its->get_msi_base = its_irq_get_msi_base;
+	its->msi_domain_flags = IRQ_DOMAIN_FLAG_MSI_REMAP;

	its_enable_quirks(its);
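The Synquacer pre-ITS quirk above redirects each device's MSI doorbell into a per-device slot of a separate window, and caps the usable device ID bits by the window size. A minimal standalone sketch of that address arithmetic (the function names and the base address are illustrative, not driver code):

```c
#include <assert.h>
#include <stdint.h>

/* Each device ID gets a 4-byte doorbell slot in the pre-ITS window, so
 * the hardware can recover the device ID from bits [id_bits + 1:2] of
 * the window offset. */
static uint64_t pre_its_msi_addr(uint64_t pre_its_base, uint32_t device_id)
{
	return pre_its_base + ((uint64_t)device_id << 2);
}

/* The window can hold (window_size / 4) doorbells, i.e. the usable
 * device ID width is ilog2(window_size) - 2, matching the quirk's
 * "ids = ilog2(pre_its_window[1]) - 2" computation. */
static uint32_t pre_its_max_dev_bits(uint64_t window_size)
{
	uint32_t bits = 0;

	while (window_size >>= 1)	/* integer log2 */
		bits++;
	return bits - 2;
}
```

For example, a 4 MB window yields 20 device ID bits, which is why the quirk clamps `its->device_ids` when the window is smaller than what GITS_TYPER advertises.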
+41 -9
drivers/irqchip/irq-gic-v3.c
···
	struct irq_domain *domain;
	u64 redist_stride;
	u32 nr_redist_regions;
+	bool has_rss;
	unsigned int irq_nr;
	struct partition_desc *ppi_descs[16];
};
···
static struct static_key supports_deactivate = STATIC_KEY_INIT_TRUE;

static struct gic_kvm_info gic_v3_kvm_info;
+ static DEFINE_PER_CPU(bool, has_rss);

+ #define MPIDR_RS(mpidr)			(((mpidr) & 0xF0UL) >> 4)
#define gic_data_rdist()		(this_cpu_ptr(gic_data.rdists.rdist))
#define gic_data_rdist_rd_base()	(gic_data_rdist()->rd_base)
#define gic_data_rdist_sgi_base()	(gic_data_rdist_rd_base() + SZ_64K)
···

static void gic_cpu_sys_reg_init(void)
{
+	int i, cpu = smp_processor_id();
+	u64 mpidr = cpu_logical_map(cpu);
+	u64 need_rss = MPIDR_RS(mpidr);
+
	/*
	 * Need to check that the SRE bit has actually been set. If
	 * not, it means that SRE is disabled at EL2. We're going to
···

	/* ... and let's hit the road... */
	gic_write_grpen1(1);
+
+	/* Keep the RSS capability status in per_cpu variable */
+	per_cpu(has_rss, cpu) = !!(gic_read_ctlr() & ICC_CTLR_EL1_RSS);
+
+	/* Check that all the CPUs are capable of sending SGIs to other CPUs */
+	for_each_online_cpu(i) {
+		bool have_rss = per_cpu(has_rss, i) && per_cpu(has_rss, cpu);
+
+		need_rss |= MPIDR_RS(cpu_logical_map(i));
+		if (need_rss && (!have_rss))
+			pr_crit("CPU%d (%lx) can't SGI CPU%d (%lx), no RSS\n",
+				cpu, (unsigned long)mpidr,
+				i, (unsigned long)cpu_logical_map(i));
+	}
+
+	/*
+	 * The GIC spec says that when ICC_CTLR_EL1.RSS==1 and
+	 * GICD_TYPER.RSS==0, writing the ICC_ASGI1R_EL1 register with
+	 * RS != 0 is a CONSTRAINED UNPREDICTABLE choice of:
+	 * - The write is ignored.
+	 * - The RS field is treated as 0.
+	 */
+	if (need_rss && (!gic_data.has_rss))
+		pr_crit_once("RSS is required but GICD doesn't support it\n");
}

static int gic_dist_supports_lpis(void)
···

#ifdef CONFIG_SMP

+ #define MPIDR_TO_SGI_RS(mpidr)		(MPIDR_RS(mpidr) << ICC_SGI1R_RS_SHIFT)
+ #define MPIDR_TO_SGI_CLUSTER_ID(mpidr)	((mpidr) & ~0xFUL)
+
static int gic_starting_cpu(unsigned int cpu)
{
	gic_cpu_init();
···
	u16 tlist = 0;

	while (cpu < nr_cpu_ids) {
-		/*
-		 * If we ever get a cluster of more than 16 CPUs, just
-		 * scream and skip that CPU.
-		 */
-		if (WARN_ON((mpidr & 0xff) >= 16))
-			goto out;
-
		tlist |= 1 << (mpidr & 0xf);

		next_cpu = cpumask_next(cpu, mask);
···

		mpidr = cpu_logical_map(cpu);

-		if (cluster_id != (mpidr & ~0xffUL)) {
+		if (cluster_id != MPIDR_TO_SGI_CLUSTER_ID(mpidr)) {
			cpu--;
			goto out;
		}
···
	       MPIDR_TO_SGI_AFFINITY(cluster_id, 2)	|
	       irq << ICC_SGI1R_SGI_ID_SHIFT		|
	       MPIDR_TO_SGI_AFFINITY(cluster_id, 1)	|
+	       MPIDR_TO_SGI_RS(cluster_id)		|
	       tlist << ICC_SGI1R_TARGET_LIST_SHIFT);

	pr_debug("CPU%d: ICC_SGI1R_EL1 %llx\n", smp_processor_id(), val);
···
	smp_wmb();

	for_each_cpu(cpu, mask) {
-		unsigned long cluster_id = cpu_logical_map(cpu) & ~0xffUL;
+		u64 cluster_id = MPIDR_TO_SGI_CLUSTER_ID(cpu_logical_map(cpu));
		u16 tlist;

		tlist = gic_compute_target_list(&cpu, mask, cluster_id);
···
		err = -ENOMEM;
		goto out_free;
	}
+
+	gic_data.has_rss = !!(typer & GICD_TYPER_RSS);
+	pr_info("Distributor has %sRange Selector support\n",
+		gic_data.has_rss ? "" : "no ");

	set_handle_irq(gic_handle_irq);
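The RSS handling above hinges on splitting affinity level 0 of the MPIDR: the low nibble addresses a CPU within a 16-CPU range via the SGI target list, and the upper nibble selects the range via the ICC_SGI1R RS field. A small standalone illustration of that bit manipulation (the RS shift value of 44 is an assumption based on the GICv3 system register layout, and the macros mirror the ones added by the patch):

```c
#include <assert.h>

/* Affinity-0 bits [7:4] form the Range Selector; bits [3:0] index into
 * the 16-bit SGI target list within that range. */
#define MPIDR_RS(mpidr)			(((mpidr) & 0xF0UL) >> 4)
#define ICC_SGI1R_RS_SHIFT		44	/* assumed RS field position */
#define MPIDR_TO_SGI_RS(mpidr)		(MPIDR_RS(mpidr) << ICC_SGI1R_RS_SHIFT)

/* Only the target-list nibble is masked off the cluster ID, so CPUs in
 * different 16-CPU ranges of the same cluster get separate SGI writes. */
#define MPIDR_TO_SGI_CLUSTER_ID(mpidr)	((mpidr) & ~0xFUL)
```

This is why the old "more than 16 CPUs per cluster" WARN_ON could be dropped: CPU 0x35 of a cluster now lands in range 3, target-list bit 5, instead of being skipped.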
+62 -9
drivers/irqchip/irq-gic.c
···

#ifdef CONFIG_OF
static int gic_cnt __initdata;
+ static bool gicv2_force_probe;
+
+ static int __init gicv2_force_probe_cfg(char *buf)
+ {
+	return strtobool(buf, &gicv2_force_probe);
+ }
+ early_param("irqchip.gicv2_force_probe", gicv2_force_probe_cfg);
+
+ static bool gic_check_gicv2(void __iomem *base)
+ {
+	u32 val = readl_relaxed(base + GIC_CPU_IDENT);
+	return (val & 0xff0fff) == 0x02043B;
+ }

static bool gic_check_eoimode(struct device_node *node, void __iomem **base)
{
···

	if (!is_hyp_mode_available())
		return false;
-	if (resource_size(&cpuif_res) < SZ_8K)
-		return false;
-	if (resource_size(&cpuif_res) == SZ_128K) {
-		u32 val_low, val_high;
+	if (resource_size(&cpuif_res) < SZ_8K) {
+		void __iomem *alt;
+		/*
+		 * Check for a stupid firmware that only exposes the
+		 * first page of a GICv2.
+		 */
+		if (!gic_check_gicv2(*base))
+			return false;
+
+		if (!gicv2_force_probe) {
+			pr_warn("GIC: GICv2 detected, but range too small and irqchip.gicv2_force_probe not set\n");
+			return false;
+		}
+
+		alt = ioremap(cpuif_res.start, SZ_8K);
+		if (!alt)
+			return false;
+		if (!gic_check_gicv2(alt + SZ_4K)) {
+			/*
+			 * The first page was that of a GICv2, and
+			 * the second was *something*. Let's trust it
+			 * to be a GICv2, and update the mapping.
+			 */
+			pr_warn("GIC: GICv2 at %pa, but range is too small (broken DT?), assuming 8kB\n",
+				&cpuif_res.start);
+			iounmap(*base);
+			*base = alt;
+			return true;
+		}

		/*
-		 * Verify that we have the first 4kB of a GIC400
+		 * We detected *two* initial GICv2 pages in a
+		 * row. Could be a GICv2 aliased over two 64kB
+		 * pages. Update the resource, map the iospace, and
+		 * pray.
+		 */
+		iounmap(alt);
+		alt = ioremap(cpuif_res.start, SZ_128K);
+		if (!alt)
+			return false;
+		pr_warn("GIC: Aliased GICv2 at %pa, trying to find the canonical range over 128kB\n",
+			&cpuif_res.start);
+		cpuif_res.end = cpuif_res.start + SZ_128K - 1;
+		iounmap(*base);
+		*base = alt;
+	}
+	if (resource_size(&cpuif_res) == SZ_128K) {
+		/*
+		 * Verify that we have the first 4kB of a GICv2
		 * aliased over the first 64kB by checking the
		 * GICC_IIDR register on both ends.
		 */
-		val_low = readl_relaxed(*base + GIC_CPU_IDENT);
-		val_high = readl_relaxed(*base + GIC_CPU_IDENT + 0xf000);
-		if ((val_low & 0xffff0fff) != 0x0202043B ||
-		    val_low != val_high)
+		if (!gic_check_gicv2(*base) ||
+		    !gic_check_gicv2(*base + 0xf000))
			return false;

		/*
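The new gic_check_gicv2() helper above replaces the old exact GICC_IIDR compare with a masked one: it keeps the architecture-version field and the ARM implementer code (0x43B) while ignoring the revision and product ID, so any GICv2 revision passes. A standalone sketch of just the check (illustrative function name, operating on a raw register value rather than an iomem base):

```c
#include <assert.h>
#include <stdint.h>

/* Returns nonzero when the GICC_IIDR value identifies a GICv2: the mask
 * 0xff0fff keeps the architecture version (must be 2) and the JEP106
 * implementer code (0x43B, ARM), discarding revision and product ID. */
static int gic_check_gicv2_ident(uint32_t iidr)
{
	return (iidr & 0xff0fff) == 0x02043B;
}
```

The old code compared the full value against 0x0202043B, which only matched one specific GIC-400 product/revision.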
+419
drivers/irqchip/irq-meson-gpio.c
···
+ /*
+  * Copyright (c) 2015 Endless Mobile, Inc.
+  * Author: Carlo Caione <carlo@endlessm.com>
+  * Copyright (c) 2016 BayLibre, SAS.
+  * Author: Jerome Brunet <jbrunet@baylibre.com>
+  *
+  * This program is free software; you can redistribute it and/or modify
+  * it under the terms of version 2 of the GNU General Public License as
+  * published by the Free Software Foundation.
+  *
+  * This program is distributed in the hope that it will be useful, but
+  * WITHOUT ANY WARRANTY; without even the implied warranty of
+  * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
+  * General Public License for more details.
+  *
+  * You should have received a copy of the GNU General Public License
+  * along with this program; if not, see <http://www.gnu.org/licenses/>.
+  * The full GNU General Public License is included in this distribution
+  * in the file called COPYING.
+  */
+
+ #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+
+ #include <linux/io.h>
+ #include <linux/module.h>
+ #include <linux/irq.h>
+ #include <linux/irqdomain.h>
+ #include <linux/irqchip.h>
+ #include <linux/of.h>
+ #include <linux/of_address.h>
+
+ #define NUM_CHANNEL 8
+ #define MAX_INPUT_MUX 256
+
+ #define REG_EDGE_POL	0x00
+ #define REG_PIN_03_SEL	0x04
+ #define REG_PIN_47_SEL	0x08
+ #define REG_FILTER_SEL	0x0c
+
+ #define REG_EDGE_POL_MASK(x)	(BIT(x) | BIT(16 + (x)))
+ #define REG_EDGE_POL_EDGE(x)	BIT(x)
+ #define REG_EDGE_POL_LOW(x)	BIT(16 + (x))
+ #define REG_PIN_SEL_SHIFT(x)	(((x) % 4) * 8)
+ #define REG_FILTER_SEL_SHIFT(x)	((x) * 4)
+
+ struct meson_gpio_irq_params {
+	unsigned int nr_hwirq;
+ };
+
+ static const struct meson_gpio_irq_params meson8_params = {
+	.nr_hwirq = 134,
+ };
+
+ static const struct meson_gpio_irq_params meson8b_params = {
+	.nr_hwirq = 119,
+ };
+
+ static const struct meson_gpio_irq_params gxbb_params = {
+	.nr_hwirq = 133,
+ };
+
+ static const struct meson_gpio_irq_params gxl_params = {
+	.nr_hwirq = 110,
+ };
+
+ static const struct of_device_id meson_irq_gpio_matches[] = {
+	{ .compatible = "amlogic,meson8-gpio-intc", .data = &meson8_params },
+	{ .compatible = "amlogic,meson8b-gpio-intc", .data = &meson8b_params },
+	{ .compatible = "amlogic,meson-gxbb-gpio-intc", .data = &gxbb_params },
+	{ .compatible = "amlogic,meson-gxl-gpio-intc", .data = &gxl_params },
+	{ }
+ };
+
+ struct meson_gpio_irq_controller {
+	unsigned int nr_hwirq;
+	void __iomem *base;
+	u32 channel_irqs[NUM_CHANNEL];
+	DECLARE_BITMAP(channel_map, NUM_CHANNEL);
+	spinlock_t lock;
+ };
+
+ static void meson_gpio_irq_update_bits(struct meson_gpio_irq_controller *ctl,
+					unsigned int reg, u32 mask, u32 val)
+ {
+	u32 tmp;
+
+	tmp = readl_relaxed(ctl->base + reg);
+	tmp &= ~mask;
+	tmp |= val;
+	writel_relaxed(tmp, ctl->base + reg);
+ }
+
+ static unsigned int meson_gpio_irq_channel_to_reg(unsigned int channel)
+ {
+	return (channel < 4) ? REG_PIN_03_SEL : REG_PIN_47_SEL;
+ }
+
+ static int
+ meson_gpio_irq_request_channel(struct meson_gpio_irq_controller *ctl,
+				unsigned long hwirq,
+				u32 **channel_hwirq)
+ {
+	unsigned int reg, idx;
+
+	spin_lock(&ctl->lock);
+
+	/* Find a free channel */
+	idx = find_first_zero_bit(ctl->channel_map, NUM_CHANNEL);
+	if (idx >= NUM_CHANNEL) {
+		spin_unlock(&ctl->lock);
+		pr_err("No channel available\n");
+		return -ENOSPC;
+	}
+
+	/* Mark the channel as used */
+	set_bit(idx, ctl->channel_map);
+
+	/*
+	 * Set up the mux of the channel to route the signal of the pad
+	 * to the appropriate input of the GIC
+	 */
+	reg = meson_gpio_irq_channel_to_reg(idx);
+	meson_gpio_irq_update_bits(ctl, reg,
+				   0xff << REG_PIN_SEL_SHIFT(idx),
+				   hwirq << REG_PIN_SEL_SHIFT(idx));
+
+	/*
+	 * Get the hwirq number assigned to this channel through a
+	 * pointer into the channel_irqs table. The added benefit of
+	 * this method is that we can also retrieve the channel index
+	 * with it, using the table base.
+	 */
+	*channel_hwirq = &(ctl->channel_irqs[idx]);
+
+	spin_unlock(&ctl->lock);
+
+	pr_debug("hwirq %lu assigned to channel %d - irq %u\n",
+		 hwirq, idx, **channel_hwirq);
+
+	return 0;
+ }
+
+ static unsigned int
+ meson_gpio_irq_get_channel_idx(struct meson_gpio_irq_controller *ctl,
+				u32 *channel_hwirq)
+ {
+	return channel_hwirq - ctl->channel_irqs;
+ }
+
+ static void
+ meson_gpio_irq_release_channel(struct meson_gpio_irq_controller *ctl,
+				u32 *channel_hwirq)
+ {
+	unsigned int idx;
+
+	idx = meson_gpio_irq_get_channel_idx(ctl, channel_hwirq);
+	clear_bit(idx, ctl->channel_map);
+ }
+
+ static int meson_gpio_irq_type_setup(struct meson_gpio_irq_controller *ctl,
+				      unsigned int type,
+				      u32 *channel_hwirq)
+ {
+	u32 val = 0;
+	unsigned int idx;
+
+	idx = meson_gpio_irq_get_channel_idx(ctl, channel_hwirq);
+
+	/*
+	 * The controller has a filter block to operate in either LEVEL
+	 * or EDGE mode before the signal is sent to the GIC. To enable
+	 * LEVEL_LOW and EDGE_FALLING support (which the GIC does not
+	 * support), the filter block is also able to invert the input
+	 * signal it gets before providing it to the GIC.
+	 */
+	type &= IRQ_TYPE_SENSE_MASK;
+
+	if (type == IRQ_TYPE_EDGE_BOTH)
+		return -EINVAL;
+
+	if (type & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING))
+		val |= REG_EDGE_POL_EDGE(idx);
+
+	if (type & (IRQ_TYPE_LEVEL_LOW | IRQ_TYPE_EDGE_FALLING))
+		val |= REG_EDGE_POL_LOW(idx);
+
+	spin_lock(&ctl->lock);
+
+	meson_gpio_irq_update_bits(ctl, REG_EDGE_POL,
+				   REG_EDGE_POL_MASK(idx), val);
+
+	spin_unlock(&ctl->lock);
+
+	return 0;
+ }
+
+ static unsigned int meson_gpio_irq_type_output(unsigned int type)
+ {
+	unsigned int sense = type & IRQ_TYPE_SENSE_MASK;
+
+	type &= ~IRQ_TYPE_SENSE_MASK;
+
+	/*
+	 * The polarity of the signal provided to the GIC should always
+	 * be high.
+	 */
+	if (sense & (IRQ_TYPE_LEVEL_HIGH | IRQ_TYPE_LEVEL_LOW))
+		type |= IRQ_TYPE_LEVEL_HIGH;
+	else if (sense & (IRQ_TYPE_EDGE_RISING | IRQ_TYPE_EDGE_FALLING))
+		type |= IRQ_TYPE_EDGE_RISING;
+
+	return type;
+ }
+
+ static int meson_gpio_irq_set_type(struct irq_data *data, unsigned int type)
+ {
+	struct meson_gpio_irq_controller *ctl = data->domain->host_data;
+	u32 *channel_hwirq = irq_data_get_irq_chip_data(data);
+	int ret;
+
+	ret = meson_gpio_irq_type_setup(ctl, type, channel_hwirq);
+	if (ret)
+		return ret;
+
+	return irq_chip_set_type_parent(data,
+					meson_gpio_irq_type_output(type));
+ }
+
+ static struct irq_chip meson_gpio_irq_chip = {
+	.name			= "meson-gpio-irqchip",
+	.irq_mask		= irq_chip_mask_parent,
+	.irq_unmask		= irq_chip_unmask_parent,
+	.irq_eoi		= irq_chip_eoi_parent,
+	.irq_set_type		= meson_gpio_irq_set_type,
+	.irq_retrigger		= irq_chip_retrigger_hierarchy,
+ #ifdef CONFIG_SMP
+	.irq_set_affinity	= irq_chip_set_affinity_parent,
+ #endif
+	.flags			= IRQCHIP_SET_TYPE_MASKED,
+ };
+
+ static int meson_gpio_irq_domain_translate(struct irq_domain *domain,
+					    struct irq_fwspec *fwspec,
+					    unsigned long *hwirq,
+					    unsigned int *type)
+ {
+	if (is_of_node(fwspec->fwnode) && fwspec->param_count == 2) {
+		*hwirq = fwspec->param[0];
+		*type = fwspec->param[1];
+		return 0;
+	}
+
+	return -EINVAL;
+ }
+
+ static int meson_gpio_irq_allocate_gic_irq(struct irq_domain *domain,
+					    unsigned int virq,
+					    u32 hwirq,
+					    unsigned int type)
+ {
+	struct irq_fwspec fwspec;
+
+	fwspec.fwnode = domain->parent->fwnode;
+	fwspec.param_count = 3;
+	fwspec.param[0] = 0;	/* SPI */
+	fwspec.param[1] = hwirq;
+	fwspec.param[2] = meson_gpio_irq_type_output(type);
+
+	return irq_domain_alloc_irqs_parent(domain, virq, 1, &fwspec);
+ }
+
+ static int meson_gpio_irq_domain_alloc(struct irq_domain *domain,
+					unsigned int virq,
+					unsigned int nr_irqs,
+					void *data)
+ {
+	struct irq_fwspec *fwspec = data;
+	struct meson_gpio_irq_controller *ctl = domain->host_data;
+	unsigned long hwirq;
+	u32 *channel_hwirq;
+	unsigned int type;
+	int ret;
+
+	if (WARN_ON(nr_irqs != 1))
+		return -EINVAL;
+
+	ret = meson_gpio_irq_domain_translate(domain, fwspec, &hwirq, &type);
+	if (ret)
+		return ret;
+
+	ret = meson_gpio_irq_request_channel(ctl, hwirq, &channel_hwirq);
+	if (ret)
+		return ret;
+
+	ret = meson_gpio_irq_allocate_gic_irq(domain, virq,
+					      *channel_hwirq, type);
+	if (ret < 0) {
+		pr_err("failed to allocate gic irq %u\n", *channel_hwirq);
+		meson_gpio_irq_release_channel(ctl, channel_hwirq);
+		return ret;
+	}
+
+	irq_domain_set_hwirq_and_chip(domain, virq, hwirq,
+				      &meson_gpio_irq_chip, channel_hwirq);
+
+	return 0;
+ }
+
+ static void meson_gpio_irq_domain_free(struct irq_domain *domain,
+					unsigned int virq,
+					unsigned int nr_irqs)
312 + { 313 + struct meson_gpio_irq_controller *ctl = domain->host_data; 314 + struct irq_data *irq_data; 315 + u32 *channel_hwirq; 316 + 317 + if (WARN_ON(nr_irqs != 1)) 318 + return; 319 + 320 + irq_domain_free_irqs_parent(domain, virq, 1); 321 + 322 + irq_data = irq_domain_get_irq_data(domain, virq); 323 + channel_hwirq = irq_data_get_irq_chip_data(irq_data); 324 + 325 + meson_gpio_irq_release_channel(ctl, channel_hwirq); 326 + } 327 + 328 + static const struct irq_domain_ops meson_gpio_irq_domain_ops = { 329 + .alloc = meson_gpio_irq_domain_alloc, 330 + .free = meson_gpio_irq_domain_free, 331 + .translate = meson_gpio_irq_domain_translate, 332 + }; 333 + 334 + static int __init meson_gpio_irq_parse_dt(struct device_node *node, 335 + struct meson_gpio_irq_controller *ctl) 336 + { 337 + const struct of_device_id *match; 338 + const struct meson_gpio_irq_params *params; 339 + int ret; 340 + 341 + match = of_match_node(meson_irq_gpio_matches, node); 342 + if (!match) 343 + return -ENODEV; 344 + 345 + params = match->data; 346 + ctl->nr_hwirq = params->nr_hwirq; 347 + 348 + ret = of_property_read_variable_u32_array(node, 349 + "amlogic,channel-interrupts", 350 + ctl->channel_irqs, 351 + NUM_CHANNEL, 352 + NUM_CHANNEL); 353 + if (ret < 0) { 354 + pr_err("can't get %d channel interrupts\n", NUM_CHANNEL); 355 + return ret; 356 + } 357 + 358 + return 0; 359 + } 360 + 361 + static int __init meson_gpio_irq_of_init(struct device_node *node, 362 + struct device_node *parent) 363 + { 364 + struct irq_domain *domain, *parent_domain; 365 + struct meson_gpio_irq_controller *ctl; 366 + int ret; 367 + 368 + if (!parent) { 369 + pr_err("missing parent interrupt node\n"); 370 + return -ENODEV; 371 + } 372 + 373 + parent_domain = irq_find_host(parent); 374 + if (!parent_domain) { 375 + pr_err("unable to obtain parent domain\n"); 376 + return -ENXIO; 377 + } 378 + 379 + ctl = kzalloc(sizeof(*ctl), GFP_KERNEL); 380 + if (!ctl) 381 + return -ENOMEM; 382 + 383 + 
spin_lock_init(&ctl->lock); 384 + 385 + ctl->base = of_iomap(node, 0); 386 + if (!ctl->base) { 387 + ret = -ENOMEM; 388 + goto free_ctl; 389 + } 390 + 391 + ret = meson_gpio_irq_parse_dt(node, ctl); 392 + if (ret) 393 + goto free_channel_irqs; 394 + 395 + domain = irq_domain_create_hierarchy(parent_domain, 0, ctl->nr_hwirq, 396 + of_node_to_fwnode(node), 397 + &meson_gpio_irq_domain_ops, 398 + ctl); 399 + if (!domain) { 400 + pr_err("failed to add domain\n"); 401 + ret = -ENODEV; 402 + goto free_channel_irqs; 403 + } 404 + 405 + pr_info("%d to %d gpio interrupt mux initialized\n", 406 + ctl->nr_hwirq, NUM_CHANNEL); 407 + 408 + return 0; 409 + 410 + free_channel_irqs: 411 + iounmap(ctl->base); 412 + free_ctl: 413 + kfree(ctl); 414 + 415 + return ret; 416 + } 417 + 418 + IRQCHIP_DECLARE(meson_gpio_intc, "amlogic,meson-gpio-intc", 419 + meson_gpio_irq_of_init);
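The meson driver above multiplexes many GPIO hwirqs onto a small fixed pool of parent GIC channels: a bitmap marks busy channels, the caller receives a pointer into channel_irqs[], and meson_gpio_irq_get_channel_idx() recovers the index by pointer arithmetic when the channel is released. A minimal userspace sketch of that allocation scheme (all names hypothetical, not the kernel API):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_CHANNEL 8

struct mux_ctl {
	uint8_t channel_map;                /* one busy bit per channel */
	uint32_t channel_irqs[NUM_CHANNEL]; /* parent hwirq behind each channel */
};

/* Grab the first free channel and return a pointer into channel_irqs,
 * the same shape as the *channel_hwirq the driver hands back. */
static uint32_t *request_channel(struct mux_ctl *ctl)
{
	for (int i = 0; i < NUM_CHANNEL; i++) {
		if (!(ctl->channel_map & (1u << i))) {
			ctl->channel_map |= 1u << i;
			return &ctl->channel_irqs[i];
		}
	}
	return NULL; /* all channels busy: the driver returns -ENOSPC here */
}

/* Recover the index by pointer arithmetic, as
 * meson_gpio_irq_get_channel_idx() does, then clear the busy bit. */
static void release_channel(struct mux_ctl *ctl, uint32_t *channel_hwirq)
{
	ctl->channel_map &= ~(1u << (unsigned)(channel_hwirq - ctl->channel_irqs));
}
```

Returning a pointer rather than an index lets the driver stash the value directly as irq chip data, which is why release and type setup can work from the pointer alone.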
+123 -103
drivers/irqchip/irq-mips-gic.c
··· 6 6 * Copyright (C) 2008 Ralf Baechle (ralf@linux-mips.org) 7 7 * Copyright (C) 2012 MIPS Technologies, Inc. All rights reserved. 8 8 */ 9 + 10 + #define pr_fmt(fmt) "irq-mips-gic: " fmt 11 + 9 12 #include <linux/bitmap.h> 10 13 #include <linux/clocksource.h> 14 + #include <linux/cpuhotplug.h> 11 15 #include <linux/init.h> 12 16 #include <linux/interrupt.h> 13 17 #include <linux/irq.h> ··· 52 48 static struct irq_domain *gic_irq_domain; 53 49 static struct irq_domain *gic_ipi_domain; 54 50 static int gic_shared_intrs; 55 - static int gic_vpes; 56 51 static unsigned int gic_cpu_pin; 57 52 static unsigned int timer_cpu_pin; 58 53 static struct irq_chip gic_level_irq_controller, gic_edge_irq_controller; 59 - DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS); 60 - DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS); 54 + static DECLARE_BITMAP(ipi_resrv, GIC_MAX_INTRS); 55 + static DECLARE_BITMAP(ipi_available, GIC_MAX_INTRS); 56 + 57 + static struct gic_all_vpes_chip_data { 58 + u32 map; 59 + bool mask; 60 + } gic_all_vpes_chip_data[GIC_NUM_LOCAL_INTRS]; 61 61 62 62 static void gic_clear_pcpu_masks(unsigned int intr) 63 63 { ··· 202 194 203 195 static int gic_set_type(struct irq_data *d, unsigned int type) 204 196 { 205 - unsigned int irq = GIC_HWIRQ_TO_SHARED(d->hwirq); 197 + unsigned int irq, pol, trig, dual; 206 198 unsigned long flags; 207 - bool is_edge; 199 + 200 + irq = GIC_HWIRQ_TO_SHARED(d->hwirq); 208 201 209 202 spin_lock_irqsave(&gic_lock, flags); 210 203 switch (type & IRQ_TYPE_SENSE_MASK) { 211 204 case IRQ_TYPE_EDGE_FALLING: 212 - change_gic_pol(irq, GIC_POL_FALLING_EDGE); 213 - change_gic_trig(irq, GIC_TRIG_EDGE); 214 - change_gic_dual(irq, GIC_DUAL_SINGLE); 215 - is_edge = true; 205 + pol = GIC_POL_FALLING_EDGE; 206 + trig = GIC_TRIG_EDGE; 207 + dual = GIC_DUAL_SINGLE; 216 208 break; 217 209 case IRQ_TYPE_EDGE_RISING: 218 - change_gic_pol(irq, GIC_POL_RISING_EDGE); 219 - change_gic_trig(irq, GIC_TRIG_EDGE); 220 - change_gic_dual(irq, GIC_DUAL_SINGLE); 221 - 
is_edge = true; 210 + pol = GIC_POL_RISING_EDGE; 211 + trig = GIC_TRIG_EDGE; 212 + dual = GIC_DUAL_SINGLE; 222 213 break; 223 214 case IRQ_TYPE_EDGE_BOTH: 224 - /* polarity is irrelevant in this case */ 225 - change_gic_trig(irq, GIC_TRIG_EDGE); 226 - change_gic_dual(irq, GIC_DUAL_DUAL); 227 - is_edge = true; 215 + pol = 0; /* Doesn't matter */ 216 + trig = GIC_TRIG_EDGE; 217 + dual = GIC_DUAL_DUAL; 228 218 break; 229 219 case IRQ_TYPE_LEVEL_LOW: 230 - change_gic_pol(irq, GIC_POL_ACTIVE_LOW); 231 - change_gic_trig(irq, GIC_TRIG_LEVEL); 232 - change_gic_dual(irq, GIC_DUAL_SINGLE); 233 - is_edge = false; 220 + pol = GIC_POL_ACTIVE_LOW; 221 + trig = GIC_TRIG_LEVEL; 222 + dual = GIC_DUAL_SINGLE; 234 223 break; 235 224 case IRQ_TYPE_LEVEL_HIGH: 236 225 default: 237 - change_gic_pol(irq, GIC_POL_ACTIVE_HIGH); 238 - change_gic_trig(irq, GIC_TRIG_LEVEL); 239 - change_gic_dual(irq, GIC_DUAL_SINGLE); 240 - is_edge = false; 226 + pol = GIC_POL_ACTIVE_HIGH; 227 + trig = GIC_TRIG_LEVEL; 228 + dual = GIC_DUAL_SINGLE; 241 229 break; 242 230 } 243 231 244 - if (is_edge) 232 + change_gic_pol(irq, pol); 233 + change_gic_trig(irq, trig); 234 + change_gic_dual(irq, dual); 235 + 236 + if (trig == GIC_TRIG_EDGE) 245 237 irq_set_chip_handler_name_locked(d, &gic_edge_irq_controller, 246 238 handle_edge_irq, NULL); 247 239 else ··· 346 338 347 339 static void gic_mask_local_irq_all_vpes(struct irq_data *d) 348 340 { 349 - int intr = GIC_HWIRQ_TO_LOCAL(d->hwirq); 350 - int i; 341 + struct gic_all_vpes_chip_data *cd; 351 342 unsigned long flags; 343 + int intr, cpu; 344 + 345 + intr = GIC_HWIRQ_TO_LOCAL(d->hwirq); 346 + cd = irq_data_get_irq_chip_data(d); 347 + cd->mask = false; 352 348 353 349 spin_lock_irqsave(&gic_lock, flags); 354 - for (i = 0; i < gic_vpes; i++) { 355 - write_gic_vl_other(mips_cm_vp_id(i)); 350 + for_each_online_cpu(cpu) { 351 + write_gic_vl_other(mips_cm_vp_id(cpu)); 356 352 write_gic_vo_rmask(BIT(intr)); 357 353 } 358 354 spin_unlock_irqrestore(&gic_lock, flags); ··· 
364 352 365 353 static void gic_unmask_local_irq_all_vpes(struct irq_data *d) 366 354 { 367 - int intr = GIC_HWIRQ_TO_LOCAL(d->hwirq); 368 - int i; 355 + struct gic_all_vpes_chip_data *cd; 369 356 unsigned long flags; 357 + int intr, cpu; 358 + 359 + intr = GIC_HWIRQ_TO_LOCAL(d->hwirq); 360 + cd = irq_data_get_irq_chip_data(d); 361 + cd->mask = true; 370 362 371 363 spin_lock_irqsave(&gic_lock, flags); 372 - for (i = 0; i < gic_vpes; i++) { 373 - write_gic_vl_other(mips_cm_vp_id(i)); 364 + for_each_online_cpu(cpu) { 365 + write_gic_vl_other(mips_cm_vp_id(cpu)); 374 366 write_gic_vo_smask(BIT(intr)); 375 367 } 376 368 spin_unlock_irqrestore(&gic_lock, flags); 377 369 } 378 370 371 + static void gic_all_vpes_irq_cpu_online(struct irq_data *d) 372 + { 373 + struct gic_all_vpes_chip_data *cd; 374 + unsigned int intr; 375 + 376 + intr = GIC_HWIRQ_TO_LOCAL(d->hwirq); 377 + cd = irq_data_get_irq_chip_data(d); 378 + 379 + write_gic_vl_map(intr, cd->map); 380 + if (cd->mask) 381 + write_gic_vl_smask(BIT(intr)); 382 + } 383 + 379 384 static struct irq_chip gic_all_vpes_local_irq_controller = { 380 - .name = "MIPS GIC Local", 381 - .irq_mask = gic_mask_local_irq_all_vpes, 382 - .irq_unmask = gic_unmask_local_irq_all_vpes, 385 + .name = "MIPS GIC Local", 386 + .irq_mask = gic_mask_local_irq_all_vpes, 387 + .irq_unmask = gic_unmask_local_irq_all_vpes, 388 + .irq_cpu_online = gic_all_vpes_irq_cpu_online, 383 389 }; 384 390 385 391 static void __gic_irq_dispatch(void) ··· 410 380 { 411 381 gic_handle_local_int(true); 412 382 gic_handle_shared_int(true); 413 - } 414 - 415 - static int gic_local_irq_domain_map(struct irq_domain *d, unsigned int virq, 416 - irq_hw_number_t hw) 417 - { 418 - int intr = GIC_HWIRQ_TO_LOCAL(hw); 419 - int i; 420 - unsigned long flags; 421 - u32 val; 422 - 423 - if (!gic_local_irq_is_routable(intr)) 424 - return -EPERM; 425 - 426 - if (intr > GIC_LOCAL_INT_FDC) { 427 - pr_err("Invalid local IRQ %d\n", intr); 428 - return -EINVAL; 429 - } 430 - 431 - if 
(intr == GIC_LOCAL_INT_TIMER) { 432 - /* CONFIG_MIPS_CMP workaround (see __gic_init) */ 433 - val = GIC_MAP_PIN_MAP_TO_PIN | timer_cpu_pin; 434 - } else { 435 - val = GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin; 436 - } 437 - 438 - spin_lock_irqsave(&gic_lock, flags); 439 - for (i = 0; i < gic_vpes; i++) { 440 - write_gic_vl_other(mips_cm_vp_id(i)); 441 - write_gic_vo_map(intr, val); 442 - } 443 - spin_unlock_irqrestore(&gic_lock, flags); 444 - 445 - return 0; 446 383 } 447 384 448 385 static int gic_shared_irq_domain_map(struct irq_domain *d, unsigned int virq, ··· 454 457 static int gic_irq_domain_map(struct irq_domain *d, unsigned int virq, 455 458 irq_hw_number_t hwirq) 456 459 { 457 - int err; 460 + struct gic_all_vpes_chip_data *cd; 461 + unsigned long flags; 462 + unsigned int intr; 463 + int err, cpu; 464 + u32 map; 458 465 459 466 if (hwirq >= GIC_SHARED_HWIRQ_BASE) { 460 467 /* verify that shared irqs don't conflict with an IPI irq */ ··· 475 474 return gic_shared_irq_domain_map(d, virq, hwirq, 0); 476 475 } 477 476 478 - switch (GIC_HWIRQ_TO_LOCAL(hwirq)) { 477 + intr = GIC_HWIRQ_TO_LOCAL(hwirq); 478 + map = GIC_MAP_PIN_MAP_TO_PIN | gic_cpu_pin; 479 + 480 + switch (intr) { 479 481 case GIC_LOCAL_INT_TIMER: 482 + /* CONFIG_MIPS_CMP workaround (see __gic_init) */ 483 + map = GIC_MAP_PIN_MAP_TO_PIN | timer_cpu_pin; 484 + /* fall-through */ 480 485 case GIC_LOCAL_INT_PERFCTR: 481 486 case GIC_LOCAL_INT_FDC: 482 487 /* ··· 490 483 * the rest of the MIPS kernel code does not use the 491 484 * percpu IRQ API for them. 
492 485 */ 486 + cd = &gic_all_vpes_chip_data[intr]; 487 + cd->map = map; 493 488 err = irq_domain_set_hwirq_and_chip(d, virq, hwirq, 494 489 &gic_all_vpes_local_irq_controller, 495 - NULL); 490 + cd); 496 491 if (err) 497 492 return err; 498 493 ··· 513 504 break; 514 505 } 515 506 516 - return gic_local_irq_domain_map(d, virq, hwirq); 507 + if (!gic_local_irq_is_routable(intr)) 508 + return -EPERM; 509 + 510 + spin_lock_irqsave(&gic_lock, flags); 511 + for_each_online_cpu(cpu) { 512 + write_gic_vl_other(mips_cm_vp_id(cpu)); 513 + write_gic_vo_map(intr, map); 514 + } 515 + spin_unlock_irqrestore(&gic_lock, flags); 516 + 517 + return 0; 517 518 } 518 519 519 520 static int gic_irq_domain_alloc(struct irq_domain *d, unsigned int virq, ··· 655 636 .match = gic_ipi_domain_match, 656 637 }; 657 638 639 + static int gic_cpu_startup(unsigned int cpu) 640 + { 641 + /* Enable or disable EIC */ 642 + change_gic_vl_ctl(GIC_VX_CTL_EIC, 643 + cpu_has_veic ? GIC_VX_CTL_EIC : 0); 644 + 645 + /* Clear all local IRQ masks (ie. 
disable all local interrupts) */ 646 + write_gic_vl_rmask(~0); 647 + 648 + /* Invoke irq_cpu_online callbacks to enable desired interrupts */ 649 + irq_cpu_online(); 650 + 651 + return 0; 652 + } 658 653 659 654 static int __init gic_of_init(struct device_node *node, 660 655 struct device_node *parent) 661 656 { 662 - unsigned int cpu_vec, i, j, gicconfig, cpu, v[2]; 657 + unsigned int cpu_vec, i, gicconfig, v[2], num_ipis; 663 658 unsigned long reserved; 664 659 phys_addr_t gic_base; 665 660 struct resource res; ··· 688 655 689 656 cpu_vec = find_first_zero_bit(&reserved, hweight_long(ST0_IM)); 690 657 if (cpu_vec == hweight_long(ST0_IM)) { 691 - pr_err("No CPU vectors available for GIC\n"); 658 + pr_err("No CPU vectors available\n"); 692 659 return -ENODEV; 693 660 } 694 661 ··· 701 668 gic_base = read_gcr_gic_base() & 702 669 ~CM_GCR_GIC_BASE_GICEN; 703 670 gic_len = 0x20000; 671 + pr_warn("Using inherited base address %pa\n", 672 + &gic_base); 704 673 } else { 705 - pr_err("Failed to get GIC memory range\n"); 674 + pr_err("Failed to get memory range\n"); 706 675 return -ENODEV; 707 676 } 708 677 } else { ··· 725 690 gic_shared_intrs >>= __ffs(GIC_CONFIG_NUMINTERRUPTS); 726 691 gic_shared_intrs = (gic_shared_intrs + 1) * 8; 727 692 728 - gic_vpes = gicconfig & GIC_CONFIG_PVPS; 729 - gic_vpes >>= __ffs(GIC_CONFIG_PVPS); 730 - gic_vpes = gic_vpes + 1; 731 - 732 693 if (cpu_has_veic) { 733 - /* Set EIC mode for all VPEs */ 734 - for_each_present_cpu(cpu) { 735 - write_gic_vl_other(mips_cm_vp_id(cpu)); 736 - write_gic_vo_ctl(GIC_VX_CTL_EIC); 737 - } 738 - 739 694 /* Always use vector 1 in EIC mode */ 740 695 gic_cpu_pin = 0; 741 696 timer_cpu_pin = gic_cpu_pin; ··· 762 737 gic_shared_intrs, 0, 763 738 &gic_irq_domain_ops, NULL); 764 739 if (!gic_irq_domain) { 765 - pr_err("Failed to add GIC IRQ domain"); 740 + pr_err("Failed to add IRQ domain"); 766 741 return -ENXIO; 767 742 } 768 743 ··· 771 746 GIC_NUM_LOCAL_INTRS + gic_shared_intrs, 772 747 node, 
&gic_ipi_domain_ops, NULL); 773 748 if (!gic_ipi_domain) { 774 - pr_err("Failed to add GIC IPI domain"); 749 + pr_err("Failed to add IPI domain"); 775 750 return -ENXIO; 776 751 } 777 752 ··· 781 756 !of_property_read_u32_array(node, "mti,reserved-ipi-vectors", v, 2)) { 782 757 bitmap_set(ipi_resrv, v[0], v[1]); 783 758 } else { 784 - /* Make the last 2 * gic_vpes available for IPIs */ 785 - bitmap_set(ipi_resrv, 786 - gic_shared_intrs - 2 * gic_vpes, 787 - 2 * gic_vpes); 759 + /* 760 + * Reserve 2 interrupts per possible CPU/VP for use as IPIs, 761 + * meeting the requirements of arch/mips SMP. 762 + */ 763 + num_ipis = 2 * num_possible_cpus(); 764 + bitmap_set(ipi_resrv, gic_shared_intrs - num_ipis, num_ipis); 788 765 } 789 766 790 767 bitmap_copy(ipi_available, ipi_resrv, GIC_MAX_INTRS); ··· 800 773 write_gic_rmask(i); 801 774 } 802 775 803 - for (i = 0; i < gic_vpes; i++) { 804 - write_gic_vl_other(mips_cm_vp_id(i)); 805 - for (j = 0; j < GIC_NUM_LOCAL_INTRS; j++) { 806 - if (!gic_local_irq_is_routable(j)) 807 - continue; 808 - write_gic_vo_rmask(BIT(j)); 809 - } 810 - } 811 - 812 - return 0; 776 + return cpuhp_setup_state(CPUHP_AP_IRQ_MIPS_GIC_STARTING, 777 + "irqchip/mips/gic:starting", 778 + gic_cpu_startup, NULL); 813 779 } 814 780 IRQCHIP_DECLARE(mips_gic, "mti,gic", gic_of_init);
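The reworked gic_set_type() above first settles polarity, trigger and dual-edge settings inside the switch, then applies all three with change_gic_*() in one place, deriving the flow handler from trig instead of a separate is_edge flag. That pure mapping is easy to check in isolation; a hedged sketch with stand-in constants rather than the kernel's GIC_* definitions:

```c
#include <assert.h>

/* Stand-in values for illustration only, not the kernel's definitions */
#define IRQ_TYPE_EDGE_RISING  0x1
#define IRQ_TYPE_EDGE_FALLING 0x2
#define IRQ_TYPE_EDGE_BOTH    0x3
#define IRQ_TYPE_LEVEL_HIGH   0x4
#define IRQ_TYPE_LEVEL_LOW    0x8

enum { POL_LOW, POL_HIGH };
enum { TRIG_LEVEL, TRIG_EDGE };
enum { DUAL_SINGLE, DUAL_DUAL };

struct gic_cfg { int pol, trig, dual; };

/* Mirror of the switch in gic_set_type(): decide all three settings
 * first; the caller applies them together afterwards. */
static struct gic_cfg gic_cfg_for_type(unsigned int type)
{
	switch (type) {
	case IRQ_TYPE_EDGE_FALLING:
		return (struct gic_cfg){ POL_LOW, TRIG_EDGE, DUAL_SINGLE };
	case IRQ_TYPE_EDGE_RISING:
		return (struct gic_cfg){ POL_HIGH, TRIG_EDGE, DUAL_SINGLE };
	case IRQ_TYPE_EDGE_BOTH:
		/* polarity doesn't matter when both edges trigger */
		return (struct gic_cfg){ POL_LOW, TRIG_EDGE, DUAL_DUAL };
	case IRQ_TYPE_LEVEL_LOW:
		return (struct gic_cfg){ POL_LOW, TRIG_LEVEL, DUAL_SINGLE };
	case IRQ_TYPE_LEVEL_HIGH:
	default:
		return (struct gic_cfg){ POL_HIGH, TRIG_LEVEL, DUAL_SINGLE };
	}
}
```

Collapsing the per-case register writes into one apply step is what lets the edge/level handler choice key off trig == TRIG_EDGE rather than a bookkeeping boolean.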
+2 -14
drivers/irqchip/irq-omap-intc.c
···
 
 #include <linux/irqchip/irq-omap-intc.h>
 
-/* Define these here for now until we drop all board-files */
-#define OMAP24XX_IC_BASE	0x480fe000
-#define OMAP34XX_IC_BASE	0x48200000
-
 /* selected INTC register offsets */
 
 #define INTC_REVISION		0x0000
···
 
 static struct irq_domain *domain;
 static void __iomem *omap_irq_base;
-static int omap_nr_pending = 3;
-static int omap_nr_irqs = 96;
+static int omap_nr_pending;
+static int omap_nr_irqs;
 
 static void intc_writel(u32 reg, u32 val)
 {
···
 
 	irqnr &= ACTIVEIRQ_MASK;
 	handle_domain_irq(domain, irqnr, regs);
-}
-
-void __init omap3_init_irq(void)
-{
-	omap_nr_irqs = 96;
-	omap_nr_pending = 3;
-	omap_init_irq(OMAP34XX_IC_BASE, NULL);
-	set_handle_irq(omap_intc_handle_irq);
 }
 
 static int __init intc_of_init(struct device_node *node,
+3 -6
drivers/irqchip/irq-renesas-intc-irqpin.c
···
 
 static int intc_irqpin_probe(struct platform_device *pdev)
 {
-	const struct intc_irqpin_config *config = NULL;
+	const struct intc_irqpin_config *config;
 	struct device *dev = &pdev->dev;
-	const struct of_device_id *of_id;
 	struct intc_irqpin_priv *p;
 	struct intc_irqpin_iomem *i;
 	struct resource *io[INTC_IRQPIN_REG_NR];
···
 	p->pdev = pdev;
 	platform_set_drvdata(pdev, p);
 
-	of_id = of_match_device(intc_irqpin_dt_ids, dev);
-	if (of_id && of_id->data) {
-		config = of_id->data;
+	config = of_device_get_match_data(dev);
+	if (config)
 		p->needs_clk = config->needs_clk;
-	}
 
 	p->clk = devm_clk_get(dev, NULL);
 	if (IS_ERR(p->clk)) {
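The intc-irqpin change above collapses the of_match_device() dance, with its NULL checks on the match entry and its data field, into a single of_device_get_match_data() call. The pattern is easy to model outside the kernel: a match table whose entries carry a data pointer, and one lookup that returns it or NULL (toy table; struct names and compatible strings are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Toy stand-ins for struct of_device_id and the per-SoC config */
struct cfg { bool needs_clk; };

struct match_id {
	const char *compatible;
	const void *data;
};

static const struct cfg plain_cfg = { .needs_clk = false };
static const struct cfg clk_cfg   = { .needs_clk = true };

static const struct match_id ids[] = {
	{ "vendor,intc-irqpin-plain", &plain_cfg },
	{ "vendor,intc-irqpin-clk",   &clk_cfg },
	{ NULL, NULL },
};

/* What of_device_get_match_data() buys the caller: one call that
 * returns the matching entry's data pointer, or NULL when either the
 * device doesn't match or the entry carries no data. */
static const void *get_match_data(const char *compatible)
{
	for (const struct match_id *id = ids; id->compatible; id++)
		if (!strcmp(id->compatible, compatible))
			return id->data;
	return NULL;
}
```

The probe path then needs only `config = get_match_data(...); if (config) ...`, with no separate of_id temporary.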
+227
drivers/irqchip/irq-sni-exiu.c
··· 1 + /* 2 + * Driver for Socionext External Interrupt Unit (EXIU) 3 + * 4 + * Copyright (c) 2017 Linaro, Ltd. <ard.biesheuvel@linaro.org> 5 + * 6 + * Based on irq-tegra.c: 7 + * Copyright (C) 2011 Google, Inc. 8 + * Copyright (C) 2010,2013, NVIDIA Corporation 9 + * 10 + * This program is free software; you can redistribute it and/or modify 11 + * it under the terms of the GNU General Public License version 2 as 12 + * published by the Free Software Foundation. 13 + */ 14 + 15 + #include <linux/interrupt.h> 16 + #include <linux/io.h> 17 + #include <linux/irq.h> 18 + #include <linux/irqchip.h> 19 + #include <linux/irqdomain.h> 20 + #include <linux/of.h> 21 + #include <linux/of_address.h> 22 + #include <linux/of_irq.h> 23 + 24 + #include <dt-bindings/interrupt-controller/arm-gic.h> 25 + 26 + #define NUM_IRQS 32 27 + 28 + #define EIMASK 0x00 29 + #define EISRCSEL 0x04 30 + #define EIREQSTA 0x08 31 + #define EIRAWREQSTA 0x0C 32 + #define EIREQCLR 0x10 33 + #define EILVL 0x14 34 + #define EIEDG 0x18 35 + #define EISIR 0x1C 36 + 37 + struct exiu_irq_data { 38 + void __iomem *base; 39 + u32 spi_base; 40 + }; 41 + 42 + static void exiu_irq_eoi(struct irq_data *d) 43 + { 44 + struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); 45 + 46 + writel(BIT(d->hwirq), data->base + EIREQCLR); 47 + irq_chip_eoi_parent(d); 48 + } 49 + 50 + static void exiu_irq_mask(struct irq_data *d) 51 + { 52 + struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); 53 + u32 val; 54 + 55 + val = readl_relaxed(data->base + EIMASK) | BIT(d->hwirq); 56 + writel_relaxed(val, data->base + EIMASK); 57 + irq_chip_mask_parent(d); 58 + } 59 + 60 + static void exiu_irq_unmask(struct irq_data *d) 61 + { 62 + struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); 63 + u32 val; 64 + 65 + val = readl_relaxed(data->base + EIMASK) & ~BIT(d->hwirq); 66 + writel_relaxed(val, data->base + EIMASK); 67 + irq_chip_unmask_parent(d); 68 + } 69 + 70 + static void exiu_irq_enable(struct irq_data *d) 71 + { 
72 + struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); 73 + u32 val; 74 + 75 + /* clear interrupts that were latched while disabled */ 76 + writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR); 77 + 78 + val = readl_relaxed(data->base + EIMASK) & ~BIT(d->hwirq); 79 + writel_relaxed(val, data->base + EIMASK); 80 + irq_chip_enable_parent(d); 81 + } 82 + 83 + static int exiu_irq_set_type(struct irq_data *d, unsigned int type) 84 + { 85 + struct exiu_irq_data *data = irq_data_get_irq_chip_data(d); 86 + u32 val; 87 + 88 + val = readl_relaxed(data->base + EILVL); 89 + if (type == IRQ_TYPE_EDGE_RISING || type == IRQ_TYPE_LEVEL_HIGH) 90 + val |= BIT(d->hwirq); 91 + else 92 + val &= ~BIT(d->hwirq); 93 + writel_relaxed(val, data->base + EILVL); 94 + 95 + val = readl_relaxed(data->base + EIEDG); 96 + if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH) 97 + val &= ~BIT(d->hwirq); 98 + else 99 + val |= BIT(d->hwirq); 100 + writel_relaxed(val, data->base + EIEDG); 101 + 102 + writel_relaxed(BIT(d->hwirq), data->base + EIREQCLR); 103 + 104 + return irq_chip_set_type_parent(d, IRQ_TYPE_LEVEL_HIGH); 105 + } 106 + 107 + static struct irq_chip exiu_irq_chip = { 108 + .name = "EXIU", 109 + .irq_eoi = exiu_irq_eoi, 110 + .irq_enable = exiu_irq_enable, 111 + .irq_mask = exiu_irq_mask, 112 + .irq_unmask = exiu_irq_unmask, 113 + .irq_set_type = exiu_irq_set_type, 114 + .irq_set_affinity = irq_chip_set_affinity_parent, 115 + .flags = IRQCHIP_SET_TYPE_MASKED | 116 + IRQCHIP_SKIP_SET_WAKE | 117 + IRQCHIP_EOI_THREADED | 118 + IRQCHIP_MASK_ON_SUSPEND, 119 + }; 120 + 121 + static int exiu_domain_translate(struct irq_domain *domain, 122 + struct irq_fwspec *fwspec, 123 + unsigned long *hwirq, 124 + unsigned int *type) 125 + { 126 + struct exiu_irq_data *info = domain->host_data; 127 + 128 + if (is_of_node(fwspec->fwnode)) { 129 + if (fwspec->param_count != 3) 130 + return -EINVAL; 131 + 132 + if (fwspec->param[0] != GIC_SPI) 133 + return -EINVAL; /* No PPI should point to 
this domain */ 134 + 135 + *hwirq = fwspec->param[1] - info->spi_base; 136 + *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK; 137 + return 0; 138 + } 139 + return -EINVAL; 140 + } 141 + 142 + static int exiu_domain_alloc(struct irq_domain *dom, unsigned int virq, 143 + unsigned int nr_irqs, void *data) 144 + { 145 + struct irq_fwspec *fwspec = data; 146 + struct irq_fwspec parent_fwspec; 147 + struct exiu_irq_data *info = dom->host_data; 148 + irq_hw_number_t hwirq; 149 + 150 + if (fwspec->param_count != 3) 151 + return -EINVAL; /* Not GIC compliant */ 152 + if (fwspec->param[0] != GIC_SPI) 153 + return -EINVAL; /* No PPI should point to this domain */ 154 + 155 + WARN_ON(nr_irqs != 1); 156 + hwirq = fwspec->param[1] - info->spi_base; 157 + irq_domain_set_hwirq_and_chip(dom, virq, hwirq, &exiu_irq_chip, info); 158 + 159 + parent_fwspec = *fwspec; 160 + parent_fwspec.fwnode = dom->parent->fwnode; 161 + return irq_domain_alloc_irqs_parent(dom, virq, nr_irqs, &parent_fwspec); 162 + } 163 + 164 + static const struct irq_domain_ops exiu_domain_ops = { 165 + .translate = exiu_domain_translate, 166 + .alloc = exiu_domain_alloc, 167 + .free = irq_domain_free_irqs_common, 168 + }; 169 + 170 + static int __init exiu_init(struct device_node *node, 171 + struct device_node *parent) 172 + { 173 + struct irq_domain *parent_domain, *domain; 174 + struct exiu_irq_data *data; 175 + int err; 176 + 177 + if (!parent) { 178 + pr_err("%pOF: no parent, giving up\n", node); 179 + return -ENODEV; 180 + } 181 + 182 + parent_domain = irq_find_host(parent); 183 + if (!parent_domain) { 184 + pr_err("%pOF: unable to obtain parent domain\n", node); 185 + return -ENXIO; 186 + } 187 + 188 + data = kzalloc(sizeof(*data), GFP_KERNEL); 189 + if (!data) 190 + return -ENOMEM; 191 + 192 + if (of_property_read_u32(node, "socionext,spi-base", &data->spi_base)) { 193 + pr_err("%pOF: failed to parse 'spi-base' property\n", node); 194 + err = -ENODEV; 195 + goto out_free; 196 + } 197 + 198 + data->base = 
of_iomap(node, 0); 199 + if (IS_ERR(data->base)) { 200 + err = PTR_ERR(data->base); 201 + goto out_free; 202 + } 203 + 204 + /* clear and mask all interrupts */ 205 + writel_relaxed(0xFFFFFFFF, data->base + EIREQCLR); 206 + writel_relaxed(0xFFFFFFFF, data->base + EIMASK); 207 + 208 + domain = irq_domain_add_hierarchy(parent_domain, 0, NUM_IRQS, node, 209 + &exiu_domain_ops, data); 210 + if (!domain) { 211 + pr_err("%pOF: failed to allocate domain\n", node); 212 + err = -ENOMEM; 213 + goto out_unmap; 214 + } 215 + 216 + pr_info("%pOF: %d interrupts forwarded to %pOF\n", node, NUM_IRQS, 217 + parent); 218 + 219 + return 0; 220 + 221 + out_unmap: 222 + iounmap(data->base); 223 + out_free: 224 + kfree(data); 225 + return err; 226 + } 227 + IRQCHIP_DECLARE(exiu, "socionext,synquacer-exiu", exiu_init);
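In exiu_irq_set_type() above, two registers encode the four supported types: EILVL selects the active level or edge (bit set = high/rising) and EIEDG selects edge versus level detection (bit set = edge). A standalone sketch of that encoding, modelling the registers as plain words (hypothetical helper, not the driver's API; stand-in type constants):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in values for illustration only */
#define IRQ_TYPE_EDGE_RISING  0x1
#define IRQ_TYPE_EDGE_FALLING 0x2
#define IRQ_TYPE_LEVEL_HIGH   0x4
#define IRQ_TYPE_LEVEL_LOW    0x8

struct exiu_regs { uint32_t eilvl, eiedg; };

/* Mirrors the read-modify-write pairs in exiu_irq_set_type():
 * EILVL bit set for rising/high, EIEDG bit clear for level types. */
static void exiu_set_type(struct exiu_regs *r, unsigned int hwirq,
			  unsigned int type)
{
	uint32_t bit = UINT32_C(1) << hwirq;

	if (type == IRQ_TYPE_EDGE_RISING || type == IRQ_TYPE_LEVEL_HIGH)
		r->eilvl |= bit;
	else
		r->eilvl &= ~bit;

	if (type == IRQ_TYPE_LEVEL_LOW || type == IRQ_TYPE_LEVEL_HIGH)
		r->eiedg &= ~bit;
	else
		r->eiedg |= bit;
}
```

Because the EXIU normalizes everything before it reaches the GIC, the driver always passes IRQ_TYPE_LEVEL_HIGH to irq_chip_set_type_parent() afterwards.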
+157 -49
drivers/irqchip/irq-stm32-exti.c
··· 14 14 #include <linux/of_address.h> 15 15 #include <linux/of_irq.h> 16 16 17 - #define EXTI_IMR 0x0 18 - #define EXTI_EMR 0x4 19 - #define EXTI_RTSR 0x8 20 - #define EXTI_FTSR 0xc 21 - #define EXTI_SWIER 0x10 22 - #define EXTI_PR 0x14 17 + #define IRQS_PER_BANK 32 18 + 19 + struct stm32_exti_bank { 20 + u32 imr_ofst; 21 + u32 emr_ofst; 22 + u32 rtsr_ofst; 23 + u32 ftsr_ofst; 24 + u32 swier_ofst; 25 + u32 pr_ofst; 26 + }; 27 + 28 + static const struct stm32_exti_bank stm32f4xx_exti_b1 = { 29 + .imr_ofst = 0x00, 30 + .emr_ofst = 0x04, 31 + .rtsr_ofst = 0x08, 32 + .ftsr_ofst = 0x0C, 33 + .swier_ofst = 0x10, 34 + .pr_ofst = 0x14, 35 + }; 36 + 37 + static const struct stm32_exti_bank *stm32f4xx_exti_banks[] = { 38 + &stm32f4xx_exti_b1, 39 + }; 40 + 41 + static const struct stm32_exti_bank stm32h7xx_exti_b1 = { 42 + .imr_ofst = 0x80, 43 + .emr_ofst = 0x84, 44 + .rtsr_ofst = 0x00, 45 + .ftsr_ofst = 0x04, 46 + .swier_ofst = 0x08, 47 + .pr_ofst = 0x88, 48 + }; 49 + 50 + static const struct stm32_exti_bank stm32h7xx_exti_b2 = { 51 + .imr_ofst = 0x90, 52 + .emr_ofst = 0x94, 53 + .rtsr_ofst = 0x20, 54 + .ftsr_ofst = 0x24, 55 + .swier_ofst = 0x28, 56 + .pr_ofst = 0x98, 57 + }; 58 + 59 + static const struct stm32_exti_bank stm32h7xx_exti_b3 = { 60 + .imr_ofst = 0xA0, 61 + .emr_ofst = 0xA4, 62 + .rtsr_ofst = 0x40, 63 + .ftsr_ofst = 0x44, 64 + .swier_ofst = 0x48, 65 + .pr_ofst = 0xA8, 66 + }; 67 + 68 + static const struct stm32_exti_bank *stm32h7xx_exti_banks[] = { 69 + &stm32h7xx_exti_b1, 70 + &stm32h7xx_exti_b2, 71 + &stm32h7xx_exti_b3, 72 + }; 73 + 74 + static unsigned long stm32_exti_pending(struct irq_chip_generic *gc) 75 + { 76 + const struct stm32_exti_bank *stm32_bank = gc->private; 77 + 78 + return irq_reg_readl(gc, stm32_bank->pr_ofst); 79 + } 80 + 81 + static void stm32_exti_irq_ack(struct irq_chip_generic *gc, u32 mask) 82 + { 83 + const struct stm32_exti_bank *stm32_bank = gc->private; 84 + 85 + irq_reg_writel(gc, mask, stm32_bank->pr_ofst); 86 + } 23 87 24 88 
static void stm32_irq_handler(struct irq_desc *desc) 25 89 { 26 90 struct irq_domain *domain = irq_desc_get_handler_data(desc); 27 - struct irq_chip_generic *gc = domain->gc->gc[0]; 28 91 struct irq_chip *chip = irq_desc_get_chip(desc); 92 + unsigned int virq, nbanks = domain->gc->num_chips; 93 + struct irq_chip_generic *gc; 94 + const struct stm32_exti_bank *stm32_bank; 29 95 unsigned long pending; 30 - int n; 96 + int n, i, irq_base = 0; 31 97 32 98 chained_irq_enter(chip, desc); 33 99 34 - while ((pending = irq_reg_readl(gc, EXTI_PR))) { 35 - for_each_set_bit(n, &pending, BITS_PER_LONG) { 36 - generic_handle_irq(irq_find_mapping(domain, n)); 37 - irq_reg_writel(gc, BIT(n), EXTI_PR); 100 + for (i = 0; i < nbanks; i++, irq_base += IRQS_PER_BANK) { 101 + gc = irq_get_domain_generic_chip(domain, irq_base); 102 + stm32_bank = gc->private; 103 + 104 + while ((pending = stm32_exti_pending(gc))) { 105 + for_each_set_bit(n, &pending, IRQS_PER_BANK) { 106 + virq = irq_find_mapping(domain, irq_base + n); 107 + generic_handle_irq(virq); 108 + stm32_exti_irq_ack(gc, BIT(n)); 109 + } 38 110 } 39 111 } 40 112 ··· 116 44 static int stm32_irq_set_type(struct irq_data *data, unsigned int type) 117 45 { 118 46 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(data); 119 - int pin = data->hwirq; 47 + const struct stm32_exti_bank *stm32_bank = gc->private; 48 + int pin = data->hwirq % IRQS_PER_BANK; 120 49 u32 rtsr, ftsr; 121 50 122 51 irq_gc_lock(gc); 123 52 124 - rtsr = irq_reg_readl(gc, EXTI_RTSR); 125 - ftsr = irq_reg_readl(gc, EXTI_FTSR); 53 + rtsr = irq_reg_readl(gc, stm32_bank->rtsr_ofst); 54 + ftsr = irq_reg_readl(gc, stm32_bank->ftsr_ofst); 126 55 127 56 switch (type) { 128 57 case IRQ_TYPE_EDGE_RISING: ··· 143 70 return -EINVAL; 144 71 } 145 72 146 - irq_reg_writel(gc, rtsr, EXTI_RTSR); 147 - irq_reg_writel(gc, ftsr, EXTI_FTSR); 73 + irq_reg_writel(gc, rtsr, stm32_bank->rtsr_ofst); 74 + irq_reg_writel(gc, ftsr, stm32_bank->ftsr_ofst); 148 75 149 76 
irq_gc_unlock(gc); 150 77 ··· 154 81 static int stm32_irq_set_wake(struct irq_data *data, unsigned int on) 155 82 { 156 83 struct irq_chip_generic *gc = irq_data_get_irq_chip_data(data); 157 - int pin = data->hwirq; 158 - u32 emr; 84 + const struct stm32_exti_bank *stm32_bank = gc->private; 85 + int pin = data->hwirq % IRQS_PER_BANK; 86 + u32 imr; 159 87 160 88 irq_gc_lock(gc); 161 89 162 - emr = irq_reg_readl(gc, EXTI_EMR); 90 + imr = irq_reg_readl(gc, stm32_bank->imr_ofst); 163 91 if (on) 164 - emr |= BIT(pin); 92 + imr |= BIT(pin); 165 93 else 166 - emr &= ~BIT(pin); 167 - irq_reg_writel(gc, emr, EXTI_EMR); 94 + imr &= ~BIT(pin); 95 + irq_reg_writel(gc, imr, stm32_bank->imr_ofst); 168 96 169 97 irq_gc_unlock(gc); 170 98 ··· 175 101 static int stm32_exti_alloc(struct irq_domain *d, unsigned int virq, 176 102 unsigned int nr_irqs, void *data) 177 103 { 178 - struct irq_chip_generic *gc = d->gc->gc[0]; 104 + struct irq_chip_generic *gc; 179 105 struct irq_fwspec *fwspec = data; 180 106 irq_hw_number_t hwirq; 181 107 182 108 hwirq = fwspec->param[0]; 109 + gc = irq_get_domain_generic_chip(d, hwirq); 183 110 184 111 irq_map_generic_chip(d, virq, hwirq); 185 112 irq_domain_set_info(d, virq, hwirq, &gc->chip_types->chip, gc, ··· 204 129 .free = stm32_exti_free, 205 130 }; 206 131 207 - static int __init stm32_exti_init(struct device_node *node, 208 - struct device_node *parent) 132 + static int 133 + __init stm32_exti_init(const struct stm32_exti_bank **stm32_exti_banks, 134 + int bank_nr, struct device_node *node) 209 135 { 210 136 unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN; 211 137 int nr_irqs, nr_exti, ret, i; ··· 220 144 return -ENOMEM; 221 145 } 222 146 223 - /* Determine number of irqs supported */ 224 - writel_relaxed(~0UL, base + EXTI_RTSR); 225 - nr_exti = fls(readl_relaxed(base + EXTI_RTSR)); 226 - writel_relaxed(0, base + EXTI_RTSR); 227 - 228 - pr_info("%pOF: %d External IRQs detected\n", node, nr_exti); 229 - 230 - domain = 
-	domain = irq_domain_add_linear(node, nr_exti,
+	domain = irq_domain_add_linear(node, bank_nr * IRQS_PER_BANK,
 				       &irq_exti_domain_ops, NULL);
 	if (!domain) {
 		pr_err("%s: Could not register interrupt domain.\n",
 		       node->name);
 		ret = -ENOMEM;
 		goto out_unmap;
 	}
 
-	ret = irq_alloc_domain_generic_chips(domain, nr_exti, 1, "exti",
+	ret = irq_alloc_domain_generic_chips(domain, IRQS_PER_BANK, 1, "exti",
 					     handle_edge_irq, clr, 0, 0);
 	if (ret) {
 		pr_err("%pOF: Could not allocate generic interrupt chip.\n",
···
 		goto out_free_domain;
 	}
 
-	gc = domain->gc->gc[0];
-	gc->reg_base = base;
-	gc->chip_types->type = IRQ_TYPE_EDGE_BOTH;
-	gc->chip_types->chip.name = gc->chip_types[0].chip.name;
-	gc->chip_types->chip.irq_ack = irq_gc_ack_set_bit;
-	gc->chip_types->chip.irq_mask = irq_gc_mask_clr_bit;
-	gc->chip_types->chip.irq_unmask = irq_gc_mask_set_bit;
-	gc->chip_types->chip.irq_set_type = stm32_irq_set_type;
-	gc->chip_types->chip.irq_set_wake = stm32_irq_set_wake;
-	gc->chip_types->regs.ack = EXTI_PR;
-	gc->chip_types->regs.mask = EXTI_IMR;
-	gc->chip_types->handler = handle_edge_irq;
+	for (i = 0; i < bank_nr; i++) {
+		const struct stm32_exti_bank *stm32_bank = stm32_exti_banks[i];
+		u32 irqs_mask;
+
+		gc = irq_get_domain_generic_chip(domain, i * IRQS_PER_BANK);
+
+		gc->reg_base = base;
+		gc->chip_types->type = IRQ_TYPE_EDGE_BOTH;
+		gc->chip_types->chip.irq_ack = irq_gc_ack_set_bit;
+		gc->chip_types->chip.irq_mask = irq_gc_mask_clr_bit;
+		gc->chip_types->chip.irq_unmask = irq_gc_mask_set_bit;
+		gc->chip_types->chip.irq_set_type = stm32_irq_set_type;
+		gc->chip_types->chip.irq_set_wake = stm32_irq_set_wake;
+		gc->chip_types->regs.ack = stm32_bank->pr_ofst;
+		gc->chip_types->regs.mask = stm32_bank->imr_ofst;
+		gc->private = (void *)stm32_bank;
+
+		/* Determine number of irqs supported */
+		writel_relaxed(~0UL, base + stm32_bank->rtsr_ofst);
+		irqs_mask = readl_relaxed(base + stm32_bank->rtsr_ofst);
+		nr_exti = fls(readl_relaxed(base + stm32_bank->rtsr_ofst));
+
+		/*
+		 * This IP has no reset, so after hot reboot we should
+		 * clear registers to avoid residue
+		 */
+		writel_relaxed(0, base + stm32_bank->imr_ofst);
+		writel_relaxed(0, base + stm32_bank->emr_ofst);
+		writel_relaxed(0, base + stm32_bank->rtsr_ofst);
+		writel_relaxed(0, base + stm32_bank->ftsr_ofst);
+		writel_relaxed(~0UL, base + stm32_bank->pr_ofst);
+
+		pr_info("%s: bank%d, External IRQs available:%#x\n",
+			node->full_name, i, irqs_mask);
+	}
 
 	nr_irqs = of_irq_count(node);
 	for (i = 0; i < nr_irqs; i++) {
···
 	return ret;
 }
 
-IRQCHIP_DECLARE(stm32_exti, "st,stm32-exti", stm32_exti_init);
+static int __init stm32f4_exti_of_init(struct device_node *np,
+				       struct device_node *parent)
+{
+	return stm32_exti_init(stm32f4xx_exti_banks,
+			       ARRAY_SIZE(stm32f4xx_exti_banks), np);
+}
+
+IRQCHIP_DECLARE(stm32f4_exti, "st,stm32-exti", stm32f4_exti_of_init);
+
+static int __init stm32h7_exti_of_init(struct device_node *np,
+				       struct device_node *parent)
+{
+	return stm32_exti_init(stm32h7xx_exti_banks,
+			       ARRAY_SIZE(stm32h7xx_exti_banks), np);
+}
+
+IRQCHIP_DECLARE(stm32h7_exti, "st,stm32h7-exti", stm32h7_exti_of_init);
+3 -2  drivers/pinctrl/stm32/pinctrl-stm32.c
···
 	return 0;
 }
 
-static void stm32_gpio_domain_activate(struct irq_domain *d,
-				       struct irq_data *irq_data)
+static int stm32_gpio_domain_activate(struct irq_domain *d,
+				      struct irq_data *irq_data, bool early)
 {
 	struct stm32_gpio_bank *bank = d->host_data;
 	struct stm32_pinctrl *pctl = dev_get_drvdata(bank->gpio_chip.parent);
 
 	regmap_field_write(pctl->irqmux[irq_data->hwirq], bank->bank_nr);
+	return 0;
 }
 
 static int stm32_gpio_domain_alloc(struct irq_domain *d,
+1  include/linux/cpuhotplug.h
···
 	CPUHP_AP_IRQ_HIP04_STARTING,
 	CPUHP_AP_IRQ_ARMADA_XP_STARTING,
 	CPUHP_AP_IRQ_BCM2836_STARTING,
+	CPUHP_AP_IRQ_MIPS_GIC_STARTING,
 	CPUHP_AP_ARM_MVEBU_COHERENCY,
 	CPUHP_AP_PERF_X86_AMD_UNCORE_STARTING,
 	CPUHP_AP_PERF_X86_STARTING,
+22  include/linux/irq.h
···
 	return readl(gc->reg_base + reg_offset);
 }
 
+struct irq_matrix;
+struct irq_matrix *irq_alloc_matrix(unsigned int matrix_bits,
+				    unsigned int alloc_start,
+				    unsigned int alloc_end);
+void irq_matrix_online(struct irq_matrix *m);
+void irq_matrix_offline(struct irq_matrix *m);
+void irq_matrix_assign_system(struct irq_matrix *m, unsigned int bit, bool replace);
+int irq_matrix_reserve_managed(struct irq_matrix *m, const struct cpumask *msk);
+void irq_matrix_remove_managed(struct irq_matrix *m, const struct cpumask *msk);
+int irq_matrix_alloc_managed(struct irq_matrix *m, unsigned int cpu);
+void irq_matrix_reserve(struct irq_matrix *m);
+void irq_matrix_remove_reserved(struct irq_matrix *m);
+int irq_matrix_alloc(struct irq_matrix *m, const struct cpumask *msk,
+		     bool reserved, unsigned int *mapped_cpu);
+void irq_matrix_free(struct irq_matrix *m, unsigned int cpu,
+		     unsigned int bit, bool managed);
+void irq_matrix_assign(struct irq_matrix *m, unsigned int bit);
+unsigned int irq_matrix_available(struct irq_matrix *m, bool cpudown);
+unsigned int irq_matrix_allocated(struct irq_matrix *m);
+unsigned int irq_matrix_reserved(struct irq_matrix *m);
+void irq_matrix_debug_show(struct seq_file *sf, struct irq_matrix *m, int ind);
+
 /* Contrary to Linux irqs, for hardware irqs the irq number 0 is valid */
 #define INVALID_HWIRQ	(~0UL)
 irq_hw_number_t ipi_get_hwirq(unsigned int irq, unsigned int cpu);
+4  include/linux/irqchip/arm-gic-v3.h
···
 #define GICD_CTLR_ENABLE_SS_G1		(1U << 1)
 #define GICD_CTLR_ENABLE_SS_G0		(1U << 0)
 
+#define GICD_TYPER_RSS			(1U << 26)
 #define GICD_TYPER_LPIS			(1U << 17)
 #define GICD_TYPER_MBIS			(1U << 16)
···
 #define ICC_CTLR_EL1_SEIS_MASK		(0x1 << ICC_CTLR_EL1_SEIS_SHIFT)
 #define ICC_CTLR_EL1_A3V_SHIFT		15
 #define ICC_CTLR_EL1_A3V_MASK		(0x1 << ICC_CTLR_EL1_A3V_SHIFT)
+#define ICC_CTLR_EL1_RSS		(0x1 << 18)
 #define ICC_PMR_EL1_SHIFT		0
 #define ICC_PMR_EL1_MASK		(0xff << ICC_PMR_EL1_SHIFT)
 #define ICC_BPR0_EL1_SHIFT		0
···
 #define ICC_SGI1R_AFFINITY_2_SHIFT	32
 #define ICC_SGI1R_AFFINITY_2_MASK	(0xffULL << ICC_SGI1R_AFFINITY_2_SHIFT)
 #define ICC_SGI1R_IRQ_ROUTING_MODE_BIT	40
+#define ICC_SGI1R_RS_SHIFT		44
+#define ICC_SGI1R_RS_MASK		(0xfULL << ICC_SGI1R_RS_SHIFT)
 #define ICC_SGI1R_AFFINITY_3_SHIFT	48
 #define ICC_SGI1R_AFFINITY_3_MASK	(0xffULL << ICC_SGI1R_AFFINITY_3_SHIFT)
+9  include/linux/irqchip/arm-gic-v4.h
···
 struct its_vpe;
 
+/*
+ * Maximum number of ITTs when GITS_TYPER.VMOVP == 0, using the
+ * ITSList mechanism to perform inter-ITS synchronization.
+ */
+#define GICv4_ITS_LIST_MAX		16
+
 /* Embedded in kvm.arch */
 struct its_vm {
 	struct fwnode_handle	*fwnode;
···
 	irq_hw_number_t		db_lpi_base;
 	unsigned long		*db_bitmap;
 	int			nr_db_lpis;
+	u32			vlpi_count[GICv4_ITS_LIST_MAX];
 };
 
 /* Embedded in kvm_vcpu.arch */
···
  * @vm:		Pointer to the GICv4 notion of a VM
  * @vpe:	Pointer to the GICv4 notion of a virtual CPU (VPE)
  * @vintid:	Virtual LPI number
+ * @properties:	Priority and enable bits (as written in the prop table)
  * @db_enabled:	Is the VPE doorbell to be generated?
  */
 struct its_vlpi_map {
 	struct its_vm		*vm;
 	struct its_vpe		*vpe;
 	u32			vintid;
+	u8			properties;
 	bool			db_enabled;
 };
-2  include/linux/irqchip/irq-omap-intc.h
···
 #ifndef __INCLUDE_LINUX_IRQCHIP_IRQ_OMAP_INTC_H
 #define __INCLUDE_LINUX_IRQCHIP_IRQ_OMAP_INTC_H
 
-void omap3_init_irq(void);
-
 int omap_irq_pending(void);
 void omap_intc_save_context(void);
 void omap_intc_restore_context(void);
+1  include/linux/irqdesc.h
···
 #endif
 #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
 	struct dentry		*debugfs_file;
+	const char		*dev_name;
 #endif
 #ifdef CONFIG_SPARSE_IRQ
 	struct rcu_head		rcu;
+11 -9  include/linux/irqdomain.h
···
 #include <linux/types.h>
 #include <linux/irqhandler.h>
 #include <linux/of.h>
+#include <linux/mutex.h>
 #include <linux/radix-tree.h>
 
 struct device_node;
···
 struct irq_chip;
 struct irq_data;
 struct cpumask;
+struct seq_file;
 
 /* Number of irqs reserved for a legacy isa controller */
 #define NUM_ISA_INTERRUPTS	16
···
 	int (*xlate)(struct irq_domain *d, struct device_node *node,
 		     const u32 *intspec, unsigned int intsize,
 		     unsigned long *out_hwirq, unsigned int *out_type);
-
 #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
 	/* extended V2 interfaces to support hierarchy irq_domains */
 	int (*alloc)(struct irq_domain *d, unsigned int virq,
 		     unsigned int nr_irqs, void *arg);
 	void (*free)(struct irq_domain *d, unsigned int virq,
 		     unsigned int nr_irqs);
-	void (*activate)(struct irq_domain *d, struct irq_data *irq_data);
+	int (*activate)(struct irq_domain *d, struct irq_data *irqd, bool early);
 	void (*deactivate)(struct irq_domain *d, struct irq_data *irq_data);
 	int (*translate)(struct irq_domain *d, struct irq_fwspec *fwspec,
 			 unsigned long *out_hwirq, unsigned int *out_type);
+#endif
+#ifdef CONFIG_GENERIC_IRQ_DEBUGFS
+	void (*debug_show)(struct seq_file *m, struct irq_domain *d,
+			   struct irq_data *irqd, int ind);
 #endif
 };
···
  * @mapcount:	The number of mapped interrupts
  *
  * Optional elements
- * @of_node:	Pointer to device tree nodes associated with the irq_domain. Used
- *		when decoding device tree interrupt specifiers.
+ * @fwnode:	Pointer to firmware node associated with the irq_domain. Pretty easy
+ *		to swap it for the of_node via the irq_domain_get_of_node accessor
  * @gc:		Pointer to a list of generic chips. There is a helper function for
  *		setting up one or more generic chips for interrupt controllers
  *		drivers using the generic chip library which uses this pointer.
···
 	unsigned int revmap_direct_max_irq;
 	unsigned int revmap_size;
 	struct radix_tree_root revmap_tree;
+	struct mutex revmap_tree_mutex;
 	unsigned int linear_revmap[];
 };
···
 			unsigned int nr_irqs, int node, void *arg,
 			bool realloc, const struct cpumask *affinity);
 extern void irq_domain_free_irqs(unsigned int virq, unsigned int nr_irqs);
-extern void irq_domain_activate_irq(struct irq_data *irq_data);
+extern int irq_domain_activate_irq(struct irq_data *irq_data, bool early);
 extern void irq_domain_deactivate_irq(struct irq_data *irq_data);
 
 static inline int irq_domain_alloc_irqs(struct irq_domain *domain,
···
 extern bool irq_domain_hierarchical_is_msi_remap(struct irq_domain *domain);
 
 #else	/* CONFIG_IRQ_DOMAIN_HIERARCHY */
-static inline void irq_domain_activate_irq(struct irq_data *data) { }
-static inline void irq_domain_deactivate_irq(struct irq_data *data) { }
 static inline int irq_domain_alloc_irqs(struct irq_domain *domain,
 			unsigned int nr_irqs, int node, void *arg)
 {
···
 #else /* CONFIG_IRQ_DOMAIN */
 static inline void irq_dispose_mapping(unsigned int virq) { }
-static inline void irq_domain_activate_irq(struct irq_data *data) { }
-static inline void irq_domain_deactivate_irq(struct irq_data *data) { }
 static inline struct irq_domain *irq_find_matching_fwnode(
 	struct fwnode_handle *fwnode, enum irq_domain_bus_token bus_token)
 {
+5  include/linux/msi.h
···
 	MSI_FLAG_PCI_MSIX		= (1 << 3),
 	/* Needs early activate, required for PCI */
 	MSI_FLAG_ACTIVATE_EARLY		= (1 << 4),
+	/*
+	 * Must reactivate when irq is started even when
+	 * MSI_FLAG_ACTIVATE_EARLY has been set.
+	 */
+	MSI_FLAG_MUST_REACTIVATE	= (1 << 5),
 };
 
 int msi_domain_set_affinity(struct irq_data *data, const struct cpumask *mask,
+201  include/trace/events/irq_matrix.h
···
+#undef TRACE_SYSTEM
+#define TRACE_SYSTEM irq_matrix
+
+#if !defined(_TRACE_IRQ_MATRIX_H) || defined(TRACE_HEADER_MULTI_READ)
+#define _TRACE_IRQ_MATRIX_H
+
+#include <linux/tracepoint.h>
+
+struct irq_matrix;
+struct cpumap;
+
+DECLARE_EVENT_CLASS(irq_matrix_global,
+
+	TP_PROTO(struct irq_matrix *matrix),
+
+	TP_ARGS(matrix),
+
+	TP_STRUCT__entry(
+		__field(	unsigned int,	online_maps		)
+		__field(	unsigned int,	global_available	)
+		__field(	unsigned int,	global_reserved		)
+		__field(	unsigned int,	total_allocated		)
+	),
+
+	TP_fast_assign(
+		__entry->online_maps		= matrix->online_maps;
+		__entry->global_available	= matrix->global_available;
+		__entry->global_reserved	= matrix->global_reserved;
+		__entry->total_allocated	= matrix->total_allocated;
+	),
+
+	TP_printk("online_maps=%d global_avl=%u, global_rsvd=%u, total_alloc=%u",
+		  __entry->online_maps, __entry->global_available,
+		  __entry->global_reserved, __entry->total_allocated)
+);
+
+DECLARE_EVENT_CLASS(irq_matrix_global_update,
+
+	TP_PROTO(int bit, struct irq_matrix *matrix),
+
+	TP_ARGS(bit, matrix),
+
+	TP_STRUCT__entry(
+		__field(	int,		bit			)
+		__field(	unsigned int,	online_maps		)
+		__field(	unsigned int,	global_available	)
+		__field(	unsigned int,	global_reserved		)
+		__field(	unsigned int,	total_allocated		)
+	),
+
+	TP_fast_assign(
+		__entry->bit			= bit;
+		__entry->online_maps		= matrix->online_maps;
+		__entry->global_available	= matrix->global_available;
+		__entry->global_reserved	= matrix->global_reserved;
+		__entry->total_allocated	= matrix->total_allocated;
+	),
+
+	TP_printk("bit=%d online_maps=%d global_avl=%u, global_rsvd=%u, total_alloc=%u",
+		  __entry->bit, __entry->online_maps,
+		  __entry->global_available, __entry->global_reserved,
+		  __entry->total_allocated)
+);
+
+DECLARE_EVENT_CLASS(irq_matrix_cpu,
+
+	TP_PROTO(int bit, unsigned int cpu, struct irq_matrix *matrix,
+		 struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap),
+
+	TP_STRUCT__entry(
+		__field(	int,		bit			)
+		__field(	unsigned int,	cpu			)
+		__field(	bool,		online			)
+		__field(	unsigned int,	available		)
+		__field(	unsigned int,	allocated		)
+		__field(	unsigned int,	managed			)
+		__field(	unsigned int,	online_maps		)
+		__field(	unsigned int,	global_available	)
+		__field(	unsigned int,	global_reserved		)
+		__field(	unsigned int,	total_allocated		)
+	),
+
+	TP_fast_assign(
+		__entry->bit			= bit;
+		__entry->cpu			= cpu;
+		__entry->online			= cmap->online;
+		__entry->available		= cmap->available;
+		__entry->allocated		= cmap->allocated;
+		__entry->managed		= cmap->managed;
+		__entry->online_maps		= matrix->online_maps;
+		__entry->global_available	= matrix->global_available;
+		__entry->global_reserved	= matrix->global_reserved;
+		__entry->total_allocated	= matrix->total_allocated;
+	),
+
+	TP_printk("bit=%d cpu=%u online=%d avl=%u alloc=%u managed=%u online_maps=%u global_avl=%u, global_rsvd=%u, total_alloc=%u",
+		  __entry->bit, __entry->cpu, __entry->online,
+		  __entry->available, __entry->allocated,
+		  __entry->managed, __entry->online_maps,
+		  __entry->global_available, __entry->global_reserved,
+		  __entry->total_allocated)
+);
+
+DEFINE_EVENT(irq_matrix_global, irq_matrix_online,
+
+	TP_PROTO(struct irq_matrix *matrix),
+
+	TP_ARGS(matrix)
+);
+
+DEFINE_EVENT(irq_matrix_global, irq_matrix_offline,
+
+	TP_PROTO(struct irq_matrix *matrix),
+
+	TP_ARGS(matrix)
+);
+
+DEFINE_EVENT(irq_matrix_global, irq_matrix_reserve,
+
+	TP_PROTO(struct irq_matrix *matrix),
+
+	TP_ARGS(matrix)
+);
+
+DEFINE_EVENT(irq_matrix_global, irq_matrix_remove_reserved,
+
+	TP_PROTO(struct irq_matrix *matrix),
+
+	TP_ARGS(matrix)
+);
+
+DEFINE_EVENT(irq_matrix_global_update, irq_matrix_assign_system,
+
+	TP_PROTO(int bit, struct irq_matrix *matrix),
+
+	TP_ARGS(bit, matrix)
+);
+
+DEFINE_EVENT(irq_matrix_cpu, irq_matrix_alloc_reserved,
+
+	TP_PROTO(int bit, unsigned int cpu,
+		 struct irq_matrix *matrix, struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap)
+);
+
+DEFINE_EVENT(irq_matrix_cpu, irq_matrix_reserve_managed,
+
+	TP_PROTO(int bit, unsigned int cpu,
+		 struct irq_matrix *matrix, struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap)
+);
+
+DEFINE_EVENT(irq_matrix_cpu, irq_matrix_remove_managed,
+
+	TP_PROTO(int bit, unsigned int cpu,
+		 struct irq_matrix *matrix, struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap)
+);
+
+DEFINE_EVENT(irq_matrix_cpu, irq_matrix_alloc_managed,
+
+	TP_PROTO(int bit, unsigned int cpu,
+		 struct irq_matrix *matrix, struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap)
+);
+
+DEFINE_EVENT(irq_matrix_cpu, irq_matrix_assign,
+
+	TP_PROTO(int bit, unsigned int cpu,
+		 struct irq_matrix *matrix, struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap)
+);
+
+DEFINE_EVENT(irq_matrix_cpu, irq_matrix_alloc,
+
+	TP_PROTO(int bit, unsigned int cpu,
+		 struct irq_matrix *matrix, struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap)
+);
+
+DEFINE_EVENT(irq_matrix_cpu, irq_matrix_free,
+
+	TP_PROTO(int bit, unsigned int cpu,
+		 struct irq_matrix *matrix, struct cpumap *cmap),
+
+	TP_ARGS(bit, cpu, matrix, cmap)
+);
+
+#endif /* _TRACE_IRQ_H */
+
+/* This part must be outside protection */
+#include <trace/define_trace.h>
+3  kernel/irq/Kconfig
···
 config IRQ_TIMINGS
 	bool
 
+config GENERIC_IRQ_MATRIX_ALLOCATOR
+	bool
+
 config IRQ_DOMAIN_DEBUG
 	bool "Expose hardware/virtual IRQ mapping via debugfs"
 	depends on IRQ_DOMAIN && DEBUG_FS
+1  kernel/irq/Makefile
···
 obj-$(CONFIG_GENERIC_IRQ_IPI) += ipi.o
 obj-$(CONFIG_SMP) += affinity.o
 obj-$(CONFIG_GENERIC_IRQ_DEBUGFS) += debugfs.o
+obj-$(CONFIG_GENERIC_IRQ_MATRIX_ALLOCATOR) += matrix.o
+1 -1  kernel/irq/autoprobe.c
···
 		if (desc->irq_data.chip->irq_set_type)
 			desc->irq_data.chip->irq_set_type(&desc->irq_data,
 						 IRQ_TYPE_PROBE);
-		irq_startup(desc, IRQ_NORESEND, IRQ_START_FORCE);
+		irq_activate_and_startup(desc, IRQ_NORESEND);
 	}
 	raw_spin_unlock_irq(&desc->lock);
 }
+29 -6  kernel/irq/chip.c
···
 		 * Catch code which fiddles with enable_irq() on a managed
 		 * and potentially shutdown IRQ. Chained interrupt
 		 * installment or irq auto probing should not happen on
-		 * managed irqs either. Emit a warning, break the affinity
-		 * and start it up as a normal interrupt.
+		 * managed irqs either.
 		 */
 		if (WARN_ON_ONCE(force))
-			return IRQ_STARTUP_NORMAL;
+			return IRQ_STARTUP_ABORT;
 		/*
 		 * The interrupt was requested, but there is no online CPU
 		 * in it's affinity mask. Put it into managed shutdown
 		 * state and let the cpu hotplug mechanism start it up once
 		 * a CPU in the mask becomes available.
 		 */
-		irqd_set_managed_shutdown(d);
 		return IRQ_STARTUP_ABORT;
 	}
+	/*
+	 * Managed interrupts have reserved resources, so this should not
+	 * happen.
+	 */
+	if (WARN_ON(irq_domain_activate_irq(d, false)))
+		return IRQ_STARTUP_ABORT;
 	return IRQ_STARTUP_MANAGED;
 }
 #else
···
 	struct irq_data *d = irq_desc_get_irq_data(desc);
 	int ret = 0;
 
-	irq_domain_activate_irq(d);
+	/* Warn if this interrupt is not activated but try nevertheless */
+	WARN_ON_ONCE(!irqd_is_activated(d));
+
 	if (d->chip->irq_startup) {
 		ret = d->chip->irq_startup(d);
 		irq_state_clr_disabled(desc);
···
 			ret = __irq_startup(desc);
 			break;
 		case IRQ_STARTUP_ABORT:
+			irqd_set_managed_shutdown(d);
 			return 0;
 		}
 	}
···
 	check_irq_resend(desc);
 
 	return ret;
+}
+
+int irq_activate(struct irq_desc *desc)
+{
+	struct irq_data *d = irq_desc_get_irq_data(desc);
+
+	if (!irqd_affinity_is_managed(d))
+		return irq_domain_activate_irq(d, false);
+	return 0;
+}
+
+void irq_activate_and_startup(struct irq_desc *desc, bool resend)
+{
+	if (WARN_ON(irq_activate(desc)))
+		return;
+	irq_startup(desc, resend, IRQ_START_FORCE);
 }
 
 static void __irq_disable(struct irq_desc *desc, bool mask);
···
 		irq_settings_set_norequest(desc);
 		irq_settings_set_nothread(desc);
 		desc->action = &chained_action;
-		irq_startup(desc, IRQ_RESEND, IRQ_START_FORCE);
+		irq_activate_and_startup(desc, IRQ_RESEND);
 	}
 }
+12  kernel/irq/debugfs.c
···
 		   data->domain ? data->domain->name : "");
 	seq_printf(m, "%*shwirq: 0x%lx\n", ind + 1, "", data->hwirq);
 	irq_debug_show_chip(m, data, ind + 1);
+	if (data->domain && data->domain->ops && data->domain->ops->debug_show)
+		data->domain->ops->debug_show(m, NULL, data, ind + 1);
 #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
 	if (!data->parent_data)
 		return;
···
 	raw_spin_lock_irq(&desc->lock);
 	data = irq_desc_get_irq_data(desc);
 	seq_printf(m, "handler: %pf\n", desc->handle_irq);
+	seq_printf(m, "device:  %s\n", desc->dev_name);
 	seq_printf(m, "status:  0x%08x\n", desc->status_use_accessors);
 	irq_debug_show_bits(m, 0, desc->status_use_accessors, irqdesc_states,
 			    ARRAY_SIZE(irqdesc_states));
···
 	.llseek		= seq_lseek,
 	.release	= single_release,
 };
+
+void irq_debugfs_copy_devname(int irq, struct device *dev)
+{
+	struct irq_desc *desc = irq_to_desc(irq);
+	const char *name = dev_name(dev);
+
+	if (name)
+		desc->dev_name = kstrdup(name, GFP_KERNEL);
+}
 
 void irq_add_debugfs_entry(unsigned int irq, struct irq_desc *desc)
 {
+19  kernel/irq/internals.h
···
 #define IRQ_START_FORCE	true
 #define IRQ_START_COND	false
 
+extern int irq_activate(struct irq_desc *desc);
+extern void irq_activate_and_startup(struct irq_desc *desc, bool resend);
 extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
 
 extern void irq_shutdown(struct irq_desc *desc);
···
 }
 #endif /* !CONFIG_GENERIC_PENDING_IRQ */
 
+#if !defined(CONFIG_IRQ_DOMAIN) || !defined(CONFIG_IRQ_DOMAIN_HIERARCHY)
+static inline int irq_domain_activate_irq(struct irq_data *data, bool early)
+{
+	irqd_set_activated(data);
+	return 0;
+}
+static inline void irq_domain_deactivate_irq(struct irq_data *data)
+{
+	irqd_clr_activated(data);
+}
+#endif
+
 #ifdef CONFIG_GENERIC_IRQ_DEBUGFS
 #include <linux/debugfs.h>
 
···
 static inline void irq_remove_debugfs_entry(struct irq_desc *desc)
 {
 	debugfs_remove(desc->debugfs_file);
+	kfree(desc->dev_name);
 }
+void irq_debugfs_copy_devname(int irq, struct device *dev);
 # ifdef CONFIG_IRQ_DOMAIN
 void irq_domain_debugfs_init(struct dentry *root);
 # else
···
 {
 }
 static inline void irq_remove_debugfs_entry(struct irq_desc *d)
+{
+}
+static inline void irq_debugfs_copy_devname(int irq, struct device *dev)
 {
 }
 #endif /* CONFIG_GENERIC_IRQ_DEBUGFS */
+4 -5  kernel/irq/irqdesc.c
···
 #if defined(CONFIG_SMP)
 static int __init irq_affinity_setup(char *str)
 {
-	zalloc_cpumask_var(&irq_default_affinity, GFP_NOWAIT);
+	alloc_bootmem_cpumask_var(&irq_default_affinity);
 	cpulist_parse(str, irq_default_affinity);
 	/*
 	 * Set at least the boot cpu. We don't want to end up with
···
 
 static void __init init_irq_default_affinity(void)
 {
-#ifdef CONFIG_CPUMASK_OFFSTACK
-	if (!irq_default_affinity)
+	if (!cpumask_available(irq_default_affinity))
 		zalloc_cpumask_var(&irq_default_affinity, GFP_NOWAIT);
-#endif
 	if (cpumask_empty(irq_default_affinity))
 		cpumask_setall(irq_default_affinity);
 }
···
 		}
 	}
 
-	flags = affinity ? IRQD_AFFINITY_MANAGED : 0;
+	flags = affinity ? IRQD_AFFINITY_MANAGED | IRQD_MANAGED_SHUTDOWN : 0;
 	mask = NULL;
 
 	for (i = 0; i < cnt; i++) {
···
 			goto err;
 		irq_insert_desc(start + i, desc);
 		irq_sysfs_add(start + i, desc);
+		irq_add_debugfs_entry(start + i, desc);
 	}
 	bitmap_set(allocated_irqs, start, cnt);
 	return start;
+37 -25  kernel/irq/irqdomain.c
···
 static LIST_HEAD(irq_domain_list);
 static DEFINE_MUTEX(irq_domain_mutex);
 
-static DEFINE_MUTEX(revmap_trees_mutex);
 static struct irq_domain *irq_default_domain;
 
 static void irq_domain_check_hierarchy(struct irq_domain *domain);
···
 
 	/* Fill structure */
 	INIT_RADIX_TREE(&domain->revmap_tree, GFP_KERNEL);
+	mutex_init(&domain->revmap_tree_mutex);
 	domain->ops = ops;
 	domain->host_data = host_data;
 	domain->hwirq_max = hwirq_max;
···
 	if (hwirq < domain->revmap_size) {
 		domain->linear_revmap[hwirq] = 0;
 	} else {
-		mutex_lock(&revmap_trees_mutex);
+		mutex_lock(&domain->revmap_tree_mutex);
 		radix_tree_delete(&domain->revmap_tree, hwirq);
-		mutex_unlock(&revmap_trees_mutex);
+		mutex_unlock(&domain->revmap_tree_mutex);
 	}
 }
···
 	if (hwirq < domain->revmap_size) {
 		domain->linear_revmap[hwirq] = irq_data->irq;
 	} else {
-		mutex_lock(&revmap_trees_mutex);
+		mutex_lock(&domain->revmap_tree_mutex);
 		radix_tree_insert(&domain->revmap_tree, hwirq, irq_data);
-		mutex_unlock(&revmap_trees_mutex);
+		mutex_unlock(&domain->revmap_tree_mutex);
 	}
 }
···
 	chip = irq_data_get_irq_chip(data);
 	seq_printf(m, "%-15s  ", (chip && chip->name) ? chip->name : "none");
 
-	seq_printf(m, data ? "0x%p  " : "  %p  ",
-		   irq_data_get_irq_chip_data(data));
+	seq_printf(m, "0x%p  ", irq_data_get_irq_chip_data(data));
 
 	seq_printf(m, "   %c    ", (desc->action && desc->action->handler) ? '*' : ' ');
 	direct = (irq == hwirq) && (irq < domain->revmap_direct_max_irq);
···
 		return;	/* Not using radix tree. */
 
 	/* Fix up the revmap. */
-	mutex_lock(&revmap_trees_mutex);
+	mutex_lock(&d->domain->revmap_tree_mutex);
 	slot = radix_tree_lookup_slot(&d->domain->revmap_tree, d->hwirq);
 	if (slot)
 		radix_tree_replace_slot(&d->domain->revmap_tree, slot, d);
-	mutex_unlock(&revmap_trees_mutex);
+	mutex_unlock(&d->domain->revmap_tree_mutex);
 }
 
 /**
···
 }
 EXPORT_SYMBOL_GPL(irq_domain_free_irqs_parent);
 
-static void __irq_domain_activate_irq(struct irq_data *irq_data)
-{
-	if (irq_data && irq_data->domain) {
-		struct irq_domain *domain = irq_data->domain;
-
-		if (irq_data->parent_data)
-			__irq_domain_activate_irq(irq_data->parent_data);
-		if (domain->ops->activate)
-			domain->ops->activate(domain, irq_data);
-	}
-}
-
 static void __irq_domain_deactivate_irq(struct irq_data *irq_data)
 {
 	if (irq_data && irq_data->domain) {
···
 	}
 }
 
+static int __irq_domain_activate_irq(struct irq_data *irqd, bool early)
+{
+	int ret = 0;
+
+	if (irqd && irqd->domain) {
+		struct irq_domain *domain = irqd->domain;
+
+		if (irqd->parent_data)
+			ret = __irq_domain_activate_irq(irqd->parent_data,
+							early);
+		if (!ret && domain->ops->activate) {
+			ret = domain->ops->activate(domain, irqd, early);
+			/* Rollback in case of error */
+			if (ret && irqd->parent_data)
+				__irq_domain_deactivate_irq(irqd->parent_data);
+		}
+	}
+	return ret;
+}
+
 /**
  * irq_domain_activate_irq - Call domain_ops->activate recursively to activate
  *			     interrupt
···
  * This is the second step to call domain_ops->activate to program interrupt
  * controllers, so the interrupt could actually get delivered.
  */
-void irq_domain_activate_irq(struct irq_data *irq_data)
+int irq_domain_activate_irq(struct irq_data *irq_data, bool early)
 {
-	if (!irqd_is_activated(irq_data)) {
-		__irq_domain_activate_irq(irq_data);
+	int ret = 0;
+
+	if (!irqd_is_activated(irq_data))
+		ret = __irq_domain_activate_irq(irq_data, early);
+	if (!ret)
 		irqd_set_activated(irq_data);
-	}
+	return ret;
 }
 
 /**
···
 		   d->revmap_size + d->revmap_direct_max_irq);
 	seq_printf(m, "%*smapped: %u\n", ind + 1, "", d->mapcount);
 	seq_printf(m, "%*sflags:  0x%08x\n", ind + 1, "", d->flags);
+	if (d->ops && d->ops->debug_show)
+		d->ops->debug_show(m, d, NULL, ind + 1);
 #ifdef	CONFIG_IRQ_DOMAIN_HIERARCHY
 	if (!d->parent)
 		return;
+19 -4  kernel/irq/manage.c
···
 /**
  *	irq_set_vcpu_affinity - Set vcpu affinity for the interrupt
  *	@irq:		interrupt number to set affinity
- *	@vcpu_info:	vCPU specific data
+ *	@vcpu_info:	vCPU specific data or pointer to a percpu array of vCPU
+ *			specific data for percpu_devid interrupts
  *
  *	This function uses the vCPU specific data to set the vCPU
  *	affinity for an irq. The vCPU specific data is passed from
···
 		 * time. If it was already started up, then irq_startup()
 		 * will invoke irq_enable() under the hood.
 		 */
-		irq_startup(desc, IRQ_RESEND, IRQ_START_COND);
+		irq_startup(desc, IRQ_RESEND, IRQ_START_FORCE);
 		break;
 	}
 	default:
···
 		 * thread_mask assigned. See the loop above which or's
 		 * all existing action->thread_mask bits.
 		 */
-		new->thread_mask = 1 << ffz(thread_mask);
+		new->thread_mask = 1UL << ffz(thread_mask);
 
 	} else if (new->handler == irq_default_primary_handler &&
 		   !(desc->irq_data.chip->flags & IRQCHIP_ONESHOT_SAFE)) {
···
 			if (ret)
 				goto out_unlock;
 		}
+
+		/*
+		 * Activate the interrupt. That activation must happen
+		 * independently of IRQ_NOAUTOEN. request_irq() can fail
+		 * and the callers are supposed to handle
+		 * that. enable_irq() of an interrupt requested with
+		 * IRQ_NOAUTOEN is not supposed to fail. The activation
+		 * keeps it in shutdown mode, it merily associates
+		 * resources if necessary and if that's not possible it
+		 * fails. Interrupts which are in managed shutdown mode
+		 * will simply ignore that activation request.
+		 */
+		ret = irq_activate(desc);
+		if (ret)
+			goto out_unlock;
 
 		desc->istate &= ~(IRQS_AUTODETECT | IRQS_SPURIOUS_DISABLED | \
 				  IRQS_ONESHOT | IRQS_WAITING);
···
 		wake_up_process(new->secondary->thread);
 
 	register_irq_proc(irq, desc);
-	irq_add_debugfs_entry(irq, desc);
 	new->dir = NULL;
 	register_handler_proc(irq, new);
 	return 0;
+443
kernel/irq/matrix.c
··· 1 + /* 2 + * Copyright (C) 2017 Thomas Gleixner <tglx@linutronix.de> 3 + * 4 + * SPDX-License-Identifier: GPL-2.0 5 + */ 6 + #include <linux/spinlock.h> 7 + #include <linux/seq_file.h> 8 + #include <linux/bitmap.h> 9 + #include <linux/percpu.h> 10 + #include <linux/cpu.h> 11 + #include <linux/irq.h> 12 + 13 + #define IRQ_MATRIX_SIZE (BITS_TO_LONGS(IRQ_MATRIX_BITS) * sizeof(unsigned long)) 14 + 15 + struct cpumap { 16 + unsigned int available; 17 + unsigned int allocated; 18 + unsigned int managed; 19 + bool online; 20 + unsigned long alloc_map[IRQ_MATRIX_SIZE]; 21 + unsigned long managed_map[IRQ_MATRIX_SIZE]; 22 + }; 23 + 24 + struct irq_matrix { 25 + unsigned int matrix_bits; 26 + unsigned int alloc_start; 27 + unsigned int alloc_end; 28 + unsigned int alloc_size; 29 + unsigned int global_available; 30 + unsigned int global_reserved; 31 + unsigned int systembits_inalloc; 32 + unsigned int total_allocated; 33 + unsigned int online_maps; 34 + struct cpumap __percpu *maps; 35 + unsigned long scratch_map[IRQ_MATRIX_SIZE]; 36 + unsigned long system_map[IRQ_MATRIX_SIZE]; 37 + }; 38 + 39 + #define CREATE_TRACE_POINTS 40 + #include <trace/events/irq_matrix.h> 41 + 42 + /** 43 + * irq_alloc_matrix - Allocate a irq_matrix structure and initialize it 44 + * @matrix_bits: Number of matrix bits must be <= IRQ_MATRIX_BITS 45 + * @alloc_start: From which bit the allocation search starts 46 + * @alloc_end: At which bit the allocation search ends, i.e first 47 + * invalid bit 48 + */ 49 + __init struct irq_matrix *irq_alloc_matrix(unsigned int matrix_bits, 50 + unsigned int alloc_start, 51 + unsigned int alloc_end) 52 + { 53 + struct irq_matrix *m; 54 + 55 + if (matrix_bits > IRQ_MATRIX_BITS) 56 + return NULL; 57 + 58 + m = kzalloc(sizeof(*m), GFP_KERNEL); 59 + if (!m) 60 + return NULL; 61 + 62 + m->matrix_bits = matrix_bits; 63 + m->alloc_start = alloc_start; 64 + m->alloc_end = alloc_end; 65 + m->alloc_size = alloc_end - alloc_start; 66 + m->maps = alloc_percpu(*m->maps); 67 
+ if (!m->maps) { 68 + kfree(m); 69 + return NULL; 70 + } 71 + return m; 72 + } 73 + 74 + /** 75 + * irq_matrix_online - Bring the local CPU matrix online 76 + * @m: Matrix pointer 77 + */ 78 + void irq_matrix_online(struct irq_matrix *m) 79 + { 80 + struct cpumap *cm = this_cpu_ptr(m->maps); 81 + 82 + BUG_ON(cm->online); 83 + 84 + bitmap_zero(cm->alloc_map, m->matrix_bits); 85 + cm->available = m->alloc_size - (cm->managed + m->systembits_inalloc); 86 + cm->allocated = 0; 87 + m->global_available += cm->available; 88 + cm->online = true; 89 + m->online_maps++; 90 + trace_irq_matrix_online(m); 91 + } 92 + 93 + /** 94 + * irq_matrix_offline - Bring the local CPU matrix offline 95 + * @m: Matrix pointer 96 + */ 97 + void irq_matrix_offline(struct irq_matrix *m) 98 + { 99 + struct cpumap *cm = this_cpu_ptr(m->maps); 100 + 101 + /* Update the global available size */ 102 + m->global_available -= cm->available; 103 + cm->online = false; 104 + m->online_maps--; 105 + trace_irq_matrix_offline(m); 106 + } 107 + 108 + static unsigned int matrix_alloc_area(struct irq_matrix *m, struct cpumap *cm, 109 + unsigned int num, bool managed) 110 + { 111 + unsigned int area, start = m->alloc_start; 112 + unsigned int end = m->alloc_end; 113 + 114 + bitmap_or(m->scratch_map, cm->managed_map, m->system_map, end); 115 + bitmap_or(m->scratch_map, m->scratch_map, cm->alloc_map, end); 116 + area = bitmap_find_next_zero_area(m->scratch_map, end, start, num, 0); 117 + if (area >= end) 118 + return area; 119 + if (managed) 120 + bitmap_set(cm->managed_map, area, num); 121 + else 122 + bitmap_set(cm->alloc_map, area, num); 123 + return area; 124 + } 125 + 126 + /** 127 + * irq_matrix_assign_system - Assign system wide entry in the matrix 128 + * @m: Matrix pointer 129 + * @bit: Which bit to reserve 130 + * @replace: Replace an already allocated vector with a system 131 + * vector at the same bit position. 132 + * 133 + * The BUG_ON()s below are on purpose. 
If this goes wrong in the 134 + * early boot process, then the chance to survive is about zero. 135 + * If this happens when the system is live, it's not much better. 136 + */ 137 + void irq_matrix_assign_system(struct irq_matrix *m, unsigned int bit, 138 + bool replace) 139 + { 140 + struct cpumap *cm = this_cpu_ptr(m->maps); 141 + 142 + BUG_ON(bit > m->matrix_bits); 143 + BUG_ON(m->online_maps > 1 || (m->online_maps && !replace)); 144 + 145 + set_bit(bit, m->system_map); 146 + if (replace) { 147 + BUG_ON(!test_and_clear_bit(bit, cm->alloc_map)); 148 + cm->allocated--; 149 + m->total_allocated--; 150 + } 151 + if (bit >= m->alloc_start && bit < m->alloc_end) 152 + m->systembits_inalloc++; 153 + 154 + trace_irq_matrix_assign_system(bit, m); 155 + } 156 + 157 + /** 158 + * irq_matrix_reserve_managed - Reserve a managed interrupt in a CPU map 159 + * @m: Matrix pointer 160 + * @msk: On which CPUs the bits should be reserved. 161 + * 162 + * Can be called for offline CPUs. Note, this will only reserve one bit 163 + * on all CPUs in @msk, but it's not guaranteed that the bits are at the 164 + * same offset on all CPUs. 165 + */ 166 + int irq_matrix_reserve_managed(struct irq_matrix *m, const struct cpumask *msk) 167 + { 168 + unsigned int cpu, failed_cpu; 169 + 170 + for_each_cpu(cpu, msk) { 171 + struct cpumap *cm = per_cpu_ptr(m->maps, cpu); 172 + unsigned int bit; 173 + 174 + bit = matrix_alloc_area(m, cm, 1, true); 175 + if (bit >= m->alloc_end) 176 + goto cleanup; 177 + cm->managed++; 178 + if (cm->online) { 179 + cm->available--; 180 + m->global_available--; 181 + } 182 + trace_irq_matrix_reserve_managed(bit, cpu, m, cm); 183 + } 184 + return 0; 185 + cleanup: 186 + failed_cpu = cpu; 187 + for_each_cpu(cpu, msk) { 188 + if (cpu == failed_cpu) 189 + break; 190 + irq_matrix_remove_managed(m, cpumask_of(cpu)); 191 + } 192 + return -ENOSPC; 193 + } 194 + 195 + /** 196 + * irq_matrix_remove_managed - Remove managed interrupts in a CPU map 197 + * @m: Matrix pointer 198
+ * @msk: On which CPUs the bits should be removed 199 + * 200 + * Can be called for offline CPUs 201 + * 202 + * This removes unallocated managed interrupts from the map. It does 203 + * not matter which one because the managed interrupts free their 204 + * allocation when they shut down. If not, the accounting is screwed, 205 + * but all that can be done at this point is to warn about it. 206 + */ 207 + void irq_matrix_remove_managed(struct irq_matrix *m, const struct cpumask *msk) 208 + { 209 + unsigned int cpu; 210 + 211 + for_each_cpu(cpu, msk) { 212 + struct cpumap *cm = per_cpu_ptr(m->maps, cpu); 213 + unsigned int bit, end = m->alloc_end; 214 + 215 + if (WARN_ON_ONCE(!cm->managed)) 216 + continue; 217 + 218 + /* Get managed bits which are not allocated */ 219 + bitmap_andnot(m->scratch_map, cm->managed_map, cm->alloc_map, end); 220 + 221 + bit = find_first_bit(m->scratch_map, end); 222 + if (WARN_ON_ONCE(bit >= end)) 223 + continue; 224 + 225 + clear_bit(bit, cm->managed_map); 226 + 227 + cm->managed--; 228 + if (cm->online) { 229 + cm->available++; 230 + m->global_available++; 231 + } 232 + trace_irq_matrix_remove_managed(bit, cpu, m, cm); 233 + } 234 + } 235 + 236 + /** 237 + * irq_matrix_alloc_managed - Allocate a managed interrupt in a CPU map 238 + * @m: Matrix pointer 239 + * @cpu: On which CPU the interrupt should be allocated 240 + */ 241 + int irq_matrix_alloc_managed(struct irq_matrix *m, unsigned int cpu) 242 + { 243 + struct cpumap *cm = per_cpu_ptr(m->maps, cpu); 244 + unsigned int bit, end = m->alloc_end; 245 + 246 + /* Get managed bits which are not allocated */ 247 + bitmap_andnot(m->scratch_map, cm->managed_map, cm->alloc_map, end); 248 + bit = find_first_bit(m->scratch_map, end); 249 + if (bit >= end) 250 + return -ENOSPC; 251 + set_bit(bit, cm->alloc_map); 252 + cm->allocated++; 253 + m->total_allocated++; 254 + trace_irq_matrix_alloc_managed(bit, cpu, m, cm); 255 + return bit; 256 + } 257 + 258 + /** 259 + * irq_matrix_assign - Assign a
preallocated interrupt in the local CPU map 260 + * @m: Matrix pointer 261 + * @bit: Which bit to mark 262 + * 263 + * This should only be used to mark preallocated vectors 264 + */ 265 + void irq_matrix_assign(struct irq_matrix *m, unsigned int bit) 266 + { 267 + struct cpumap *cm = this_cpu_ptr(m->maps); 268 + 269 + if (WARN_ON_ONCE(bit < m->alloc_start || bit >= m->alloc_end)) 270 + return; 271 + if (WARN_ON_ONCE(test_and_set_bit(bit, cm->alloc_map))) 272 + return; 273 + cm->allocated++; 274 + m->total_allocated++; 275 + cm->available--; 276 + m->global_available--; 277 + trace_irq_matrix_assign(bit, smp_processor_id(), m, cm); 278 + } 279 + 280 + /** 281 + * irq_matrix_reserve - Reserve interrupts 282 + * @m: Matrix pointer 283 + * 284 + * This is merely a bookkeeping call. It increments the number of globally 285 + * reserved interrupt bits w/o actually allocating them. This allows 286 + * setting up interrupt descriptors w/o assigning low level resources to them. 287 + * The actual allocation happens when the interrupt gets activated. 288 + */ 289 + void irq_matrix_reserve(struct irq_matrix *m) 290 + { 291 + if (m->global_reserved <= m->global_available && 292 + m->global_reserved + 1 > m->global_available) 293 + pr_warn("Interrupt reservation exceeds available resources\n"); 294 + 295 + m->global_reserved++; 296 + trace_irq_matrix_reserve(m); 297 + } 298 + 299 + /** 300 + * irq_matrix_remove_reserved - Remove interrupt reservation 301 + * @m: Matrix pointer 302 + * 303 + * This is merely a bookkeeping call. It decrements the number of globally 304 + * reserved interrupt bits. This is used to undo irq_matrix_reserve() when the 305 + * interrupt was never in use, i.e. no real vector was ever allocated (an 306 + * allocation would have undone the reservation itself).
307 + */ 308 + void irq_matrix_remove_reserved(struct irq_matrix *m) 309 + { 310 + m->global_reserved--; 311 + trace_irq_matrix_remove_reserved(m); 312 + } 313 + 314 + /** 315 + * irq_matrix_alloc - Allocate a regular interrupt in a CPU map 316 + * @m: Matrix pointer 317 + * @msk: Which CPUs to search in 318 + * @reserved: Allocate previously reserved interrupts 319 + * @mapped_cpu: Pointer to store the CPU for which the irq was allocated 320 + */ 321 + int irq_matrix_alloc(struct irq_matrix *m, const struct cpumask *msk, 322 + bool reserved, unsigned int *mapped_cpu) 323 + { 324 + unsigned int cpu; 325 + 326 + for_each_cpu(cpu, msk) { 327 + struct cpumap *cm = per_cpu_ptr(m->maps, cpu); 328 + unsigned int bit; 329 + 330 + if (!cm->online) 331 + continue; 332 + 333 + bit = matrix_alloc_area(m, cm, 1, false); 334 + if (bit < m->alloc_end) { 335 + cm->allocated++; 336 + cm->available--; 337 + m->total_allocated++; 338 + m->global_available--; 339 + if (reserved) 340 + m->global_reserved--; 341 + *mapped_cpu = cpu; 342 + trace_irq_matrix_alloc(bit, cpu, m, cm); 343 + return bit; 344 + } 345 + } 346 + return -ENOSPC; 347 + } 348 + 349 + /** 350 + * irq_matrix_free - Free allocated interrupt in the matrix 351 + * @m: Matrix pointer 352 + * @cpu: Which CPU map needs to be updated 353 + * @bit: The bit to remove 354 + * @managed: If true, the interrupt is managed and not accounted 355 + * as available.
356 + */ 357 + void irq_matrix_free(struct irq_matrix *m, unsigned int cpu, 358 + unsigned int bit, bool managed) 359 + { 360 + struct cpumap *cm = per_cpu_ptr(m->maps, cpu); 361 + 362 + if (WARN_ON_ONCE(bit < m->alloc_start || bit >= m->alloc_end)) 363 + return; 364 + 365 + if (cm->online) { 366 + clear_bit(bit, cm->alloc_map); 367 + cm->allocated--; 368 + m->total_allocated--; 369 + if (!managed) { 370 + cm->available++; 371 + m->global_available++; 372 + } 373 + } 374 + trace_irq_matrix_free(bit, cpu, m, cm); 375 + } 376 + 377 + /** 378 + * irq_matrix_available - Get the number of globally available irqs 379 + * @m: Pointer to the matrix to query 380 + * @cpudown: If true, the local CPU is about to go down, adjust 381 + * the number of available irqs accordingly 382 + */ 383 + unsigned int irq_matrix_available(struct irq_matrix *m, bool cpudown) 384 + { 385 + struct cpumap *cm = this_cpu_ptr(m->maps); 386 + 387 + return m->global_available - (cpudown ? cm->available : 0); 388 + } 389 + 390 + /** 391 + * irq_matrix_reserved - Get the number of globally reserved irqs 392 + * @m: Pointer to the matrix to query 393 + */ 394 + unsigned int irq_matrix_reserved(struct irq_matrix *m) 395 + { 396 + return m->global_reserved; 397 + } 398 + 399 + /** 400 + * irq_matrix_allocated - Get the number of allocated irqs on the local cpu 401 + * @m: Pointer to the matrix to search 402 + * 403 + * This returns the number of allocated irqs 404 + */ 405 + unsigned int irq_matrix_allocated(struct irq_matrix *m) 406 + { 407 + struct cpumap *cm = this_cpu_ptr(m->maps); 408 + 409 + return cm->allocated; 410 + } 411 + 412 + #ifdef CONFIG_GENERIC_IRQ_DEBUGFS 413 + /** 414 + * irq_matrix_debug_show - Show detailed allocation information 415 + * @sf: Pointer to the seq_file to print to 416 + * @m: Pointer to the matrix allocator 417 + * @ind: Indentation for the print format 418 + * 419 + * Note, this is a lockless snapshot.
420 + */ 421 + void irq_matrix_debug_show(struct seq_file *sf, struct irq_matrix *m, int ind) 422 + { 423 + unsigned int nsys = bitmap_weight(m->system_map, m->matrix_bits); 424 + int cpu; 425 + 426 + seq_printf(sf, "Online bitmaps: %6u\n", m->online_maps); 427 + seq_printf(sf, "Global available: %6u\n", m->global_available); 428 + seq_printf(sf, "Global reserved: %6u\n", m->global_reserved); 429 + seq_printf(sf, "Total allocated: %6u\n", m->total_allocated); 430 + seq_printf(sf, "System: %u: %*pbl\n", nsys, m->matrix_bits, 431 + m->system_map); 432 + seq_printf(sf, "%*s| CPU | avl | man | act | vectors\n", ind, " "); 433 + cpus_read_lock(); 434 + for_each_online_cpu(cpu) { 435 + struct cpumap *cm = per_cpu_ptr(m->maps, cpu); 436 + 437 + seq_printf(sf, "%*s %4d %4u %4u %4u %*pbl\n", ind, " ", 438 + cpu, cm->available, cm->managed, cm->allocated, 439 + m->matrix_bits, cm->alloc_map); 440 + } 441 + cpus_read_unlock(); 442 + } 443 + #endif
+27 -5
kernel/irq/msi.c
··· 16 16 #include <linux/msi.h> 17 17 #include <linux/slab.h> 18 18 19 + #include "internals.h" 20 + 19 21 /** 20 22 * alloc_msi_entry - Allocate and initialize an msi_entry 21 23 * @dev: Pointer to the device for which this is allocated ··· 102 100 return ret; 103 101 } 104 102 105 - static void msi_domain_activate(struct irq_domain *domain, 106 - struct irq_data *irq_data) 103 + static int msi_domain_activate(struct irq_domain *domain, 104 + struct irq_data *irq_data, bool early) 107 105 { 108 106 struct msi_msg msg; 109 107 110 108 BUG_ON(irq_chip_compose_msi_msg(irq_data, &msg)); 111 109 irq_chip_write_msi_msg(irq_data, &msg); 110 + return 0; 112 111 } 113 112 114 113 static void msi_domain_deactivate(struct irq_domain *domain, ··· 376 373 return ret; 377 374 } 378 375 379 - for (i = 0; i < desc->nvec_used; i++) 376 + for (i = 0; i < desc->nvec_used; i++) { 380 377 irq_set_msi_desc_off(virq, i, desc); 378 + irq_debugfs_copy_devname(virq + i, dev); 379 + } 381 380 } 382 381 383 382 if (ops->msi_finish) ··· 401 396 struct irq_data *irq_data; 402 397 403 398 irq_data = irq_domain_get_irq_data(domain, desc->irq); 404 - irq_domain_activate_irq(irq_data); 399 + ret = irq_domain_activate_irq(irq_data, true); 400 + if (ret) 401 + goto cleanup; 402 + if (info->flags & MSI_FLAG_MUST_REACTIVATE) 403 + irqd_clr_activated(irq_data); 405 404 } 406 405 } 407 - 408 406 return 0; 407 + 408 + cleanup: 409 + for_each_msi_entry(desc, dev) { 410 + struct irq_data *irqd; 411 + 412 + if (desc->irq == virq) 413 + break; 414 + 415 + irqd = irq_domain_get_irq_data(domain, desc->irq); 416 + if (irqd_is_activated(irqd)) 417 + irq_domain_deactivate_irq(irqd); 418 + } 419 + msi_domain_free_irqs(domain, dev); 420 + return ret; 409 421 } 410 423 411 423 /**
+3 -2
kernel/irq/proc.c
··· 155 155 */ 156 156 err = irq_select_affinity_usr(irq) ? -EINVAL : count; 157 157 } else { 158 - irq_set_affinity(irq, new_value); 159 - err = count; 158 + err = irq_set_affinity(irq, new_value); 159 + if (!err) 160 + err = count; 160 161 } 161 162 162 163 free_cpumask:
+3 -7
kernel/irq_work.c
··· 131 131 132 132 static void irq_work_run_list(struct llist_head *list) 133 133 { 134 - unsigned long flags; 135 - struct irq_work *work; 134 + struct irq_work *work, *tmp; 136 135 struct llist_node *llnode; 136 + unsigned long flags; 137 137 138 138 BUG_ON(!irqs_disabled()); 139 139 ··· 141 141 return; 142 142 143 143 llnode = llist_del_all(list); 144 - while (llnode != NULL) { 145 - work = llist_entry(llnode, struct irq_work, llnode); 146 - 147 - llnode = llist_next(llnode); 148 - 144 + llist_for_each_entry_safe(work, tmp, llnode, llnode) { 149 145 /* 150 146 * Clear the PENDING bit, after this point the @work 151 147 * can be re-used.