Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
"The irq departement provides the usual mixed bag:

Core:

- Further improvements to the irq timings code which aims to predict
the next interrupt for power state selection to achieve better
latency/power balance

- Add interrupt statistics to the core NMI handlers

- The usual small fixes and cleanups

Drivers:

- Support for Renesas RZ/A1, Annapurna Labs FIC, Meson-G12A SoC and
Amazon Graviton ARM/GIC interrupt controllers.

- Rework of the Renesas INTC controller driver

- ACPI support for Socionext SoCs

- Enhancements to the CSKY interrupt controller

- The usual small fixes and cleanups"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (39 commits)
irq/irqdomain: Fix comment typo
genirq: Update irq stats from NMI handlers
irqchip/gic-pm: Remove PM_CLK dependency
irqchip/al-fic: Introduce Amazon's Annapurna Labs Fabric Interrupt Controller Driver
dt-bindings: interrupt-controller: Add Amazon's Annapurna Labs FIC
softirq: Use __this_cpu_write() in takeover_tasklets()
irqchip/mbigen: Stop printing kernel addresses
irqchip/gic: Add dependency for ARM_GIC_MAX_NR
genirq/affinity: Remove unused argument from [__]irq_build_affinity_masks()
genirq/timings: Add selftest for next event computation
genirq/timings: Add selftest for irqs circular buffer
genirq/timings: Add selftest for circular array
genirq/timings: Encapsulate storing function
genirq/timings: Encapsulate timings push
genirq/timings: Optimize the period detection speed
genirq/timings: Fix timings buffer inspection
genirq/timings: Fix next event index function
irqchip/qcom: Use struct_size() in devm_kzalloc()
irqchip/irq-csky-mpintc: Remove unnecessary loop in interrupt handler
dt-bindings: interrupt-controller: Update csky mpintc
...

+1530 -194
+29
Documentation/devicetree/bindings/interrupt-controller/amazon,al-fic.txt
··· 1 + Amazon's Annapurna Labs Fabric Interrupt Controller 2 + 3 + Required properties: 4 + 5 + - compatible: should be "amazon,al-fic" 6 + - reg: physical base address and size of the registers 7 + - interrupt-controller: identifies the node as an interrupt controller 8 + - #interrupt-cells: must be 2. 9 + First cell defines the index of the interrupt within the controller. 10 + Second cell is used to specify the trigger type and must be one of the 11 + following: 12 + - bits[3:0] trigger type and level flags 13 + 1 = low-to-high edge triggered 14 + 4 = active high level-sensitive 15 + - interrupt-parent: specifies the parent interrupt controller. 16 + - interrupts: describes which input line in the interrupt parent, this 17 + fic's output is connected to. This field property depends on the parent's 18 + binding 19 + 20 + Example: 21 + 22 + amazon_fic: interrupt-controller@0xfd8a8500 { 23 + compatible = "amazon,al-fic"; 24 + interrupt-controller; 25 + #interrupt-cells = <2>; 26 + reg = <0x0 0xfd8a8500 0x0 0x1000>; 27 + interrupt-parent = <&gic>; 28 + interrupts = <GIC_SPI 0x0 IRQ_TYPE_LEVEL_HIGH>; 29 + };
+1
Documentation/devicetree/bindings/interrupt-controller/amlogic,meson-gpio-intc.txt
··· 15 15 "amlogic,meson-gxbb-gpio-intc" for GXBB SoCs (S905) or 16 16 "amlogic,meson-gxl-gpio-intc" for GXL SoCs (S905X, S912) 17 17 "amlogic,meson-axg-gpio-intc" for AXG SoCs (A113D, A113X) 18 + "amlogic,meson-g12a-gpio-intc" for G12A SoCs (S905D2, S905X2, S905Y2) 18 19 - reg : Specifies base physical address and size of the registers. 19 20 - interrupt-controller : Identifies the node as an interrupt controller. 20 21 - #interrupt-cells : Specifies the number of cells needed to encode an
+16 -4
Documentation/devicetree/bindings/interrupt-controller/csky,mpintc.txt
··· 6 6 SMP soc, and it also could be used in non-SMP system. 7 7 8 8 Interrupt number definition: 9 - 10 9 0-15 : software irq, and we use 15 as our IPI_IRQ. 11 10 16-31 : private irq, and we use 16 as the co-processor timer. 12 11 31-1024: common irq for soc ip. 12 + 13 + Interrupt triger mode: (Defined in dt-bindings/interrupt-controller/irq.h) 14 + IRQ_TYPE_LEVEL_HIGH (default) 15 + IRQ_TYPE_LEVEL_LOW 16 + IRQ_TYPE_EDGE_RISING 17 + IRQ_TYPE_EDGE_FALLING 13 18 14 19 ============================= 15 20 intc node bindings definition ··· 31 26 - #interrupt-cells 32 27 Usage: required 33 28 Value type: <u32> 34 - Definition: must be <1> 29 + Definition: <2> 35 30 - interrupt-controller: 36 31 Usage: required 37 32 38 - Examples: 33 + Examples: ("interrupts = <irq_num IRQ_TYPE_XXX>") 39 34 --------- 35 + #include <dt-bindings/interrupt-controller/irq.h> 40 36 41 37 intc: interrupt-controller { 42 38 compatible = "csky,mpintc"; 43 - #interrupt-cells = <1>; 39 + #interrupt-cells = <2>; 44 40 interrupt-controller; 41 + }; 42 + 43 + device: device-example { 44 + ... 45 + interrupts = <34 IRQ_TYPE_EDGE_RISING>; 46 + interrupt-parent = <&intc>; 45 47 };
+43
Documentation/devicetree/bindings/interrupt-controller/renesas,rza1-irqc.txt
··· 1 + DT bindings for the Renesas RZ/A1 Interrupt Controller 2 + 3 + The RZ/A1 Interrupt Controller is a front-end for the GIC found on Renesas 4 + RZ/A1 and RZ/A2 SoCs: 5 + - IRQ sense select for 8 external interrupts, 1:1-mapped to 8 GIC SPI 6 + interrupts, 7 + - NMI edge select. 8 + 9 + Required properties: 10 + - compatible: Must be "renesas,<soctype>-irqc", and "renesas,rza1-irqc" as 11 + fallback. 12 + Examples with soctypes are: 13 + - "renesas,r7s72100-irqc" (RZ/A1H) 14 + - "renesas,r7s9210-irqc" (RZ/A2M) 15 + - #interrupt-cells: Must be 2 (an interrupt index and flags, as defined 16 + in interrupts.txt in this directory) 17 + - #address-cells: Must be zero 18 + - interrupt-controller: Marks the device as an interrupt controller 19 + - reg: Base address and length of the memory resource used by the interrupt 20 + controller 21 + - interrupt-map: Specifies the mapping from external interrupts to GIC 22 + interrupts 23 + - interrupt-map-mask: Must be <7 0> 24 + 25 + Example: 26 + 27 + irqc: interrupt-controller@fcfef800 { 28 + compatible = "renesas,r7s72100-irqc", "renesas,rza1-irqc"; 29 + #interrupt-cells = <2>; 30 + #address-cells = <0>; 31 + interrupt-controller; 32 + reg = <0xfcfef800 0x6>; 33 + interrupt-map = 34 + <0 0 &gic GIC_SPI 0 IRQ_TYPE_LEVEL_HIGH>, 35 + <1 0 &gic GIC_SPI 1 IRQ_TYPE_LEVEL_HIGH>, 36 + <2 0 &gic GIC_SPI 2 IRQ_TYPE_LEVEL_HIGH>, 37 + <3 0 &gic GIC_SPI 3 IRQ_TYPE_LEVEL_HIGH>, 38 + <4 0 &gic GIC_SPI 4 IRQ_TYPE_LEVEL_HIGH>, 39 + <5 0 &gic GIC_SPI 5 IRQ_TYPE_LEVEL_HIGH>, 40 + <6 0 &gic GIC_SPI 6 IRQ_TYPE_LEVEL_HIGH>, 41 + <7 0 &gic GIC_SPI 7 IRQ_TYPE_LEVEL_HIGH>; 42 + interrupt-map-mask = <7 0>; 43 + };
+6
MAINTAINERS
··· 1306 1306 F: Documentation/devicetree/bindings/interrupt-controller/arm,vic.txt 1307 1307 F: drivers/irqchip/irq-vic.c 1308 1308 1309 + AMAZON ANNAPURNA LABS FIC DRIVER 1310 + M: Talel Shenhar <talel@amazon.com> 1311 + S: Maintained 1312 + F: Documentation/devicetree/bindings/interrupt-controller/amazon,al-fic.txt 1313 + F: drivers/irqchip/irq-al-fic.c 1314 + 1309 1315 ARM SMMU DRIVERS 1310 1316 M: Will Deacon <will@kernel.org> 1311 1317 R: Robin Murphy <robin.murphy@arm.com>
+26
drivers/acpi/irq.c
··· 292 292 acpi_irq_model = model; 293 293 acpi_gsi_domain_id = fwnode; 294 294 } 295 + 296 + /** 297 + * acpi_irq_create_hierarchy - Create a hierarchical IRQ domain with the default 298 + * GSI domain as its parent. 299 + * @flags: Irq domain flags associated with the domain 300 + * @size: Size of the domain. 301 + * @fwnode: Optional fwnode of the interrupt controller 302 + * @ops: Pointer to the interrupt domain callbacks 303 + * @host_data: Controller private data pointer 304 + */ 305 + struct irq_domain *acpi_irq_create_hierarchy(unsigned int flags, 306 + unsigned int size, 307 + struct fwnode_handle *fwnode, 308 + const struct irq_domain_ops *ops, 309 + void *host_data) 310 + { 311 + struct irq_domain *d = irq_find_matching_fwnode(acpi_gsi_domain_id, 312 + DOMAIN_BUS_ANY); 313 + 314 + if (!d) 315 + return NULL; 316 + 317 + return irq_domain_create_hierarchy(d, flags, size, fwnode, ops, 318 + host_data); 319 + } 320 + EXPORT_SYMBOL_GPL(acpi_irq_create_hierarchy);
+44 -7
drivers/gpio/gpio-mb86s7x.c
··· 6 6 * Copyright (C) 2015 Linaro Ltd. 7 7 */ 8 8 9 + #include <linux/acpi.h> 9 10 #include <linux/io.h> 10 11 #include <linux/init.h> 11 12 #include <linux/clk.h> ··· 19 18 #include <linux/platform_device.h> 20 19 #include <linux/spinlock.h> 21 20 #include <linux/slab.h> 21 + 22 + #include "gpiolib.h" 22 23 23 24 /* 24 25 * Only first 8bits of a register correspond to each pin, ··· 138 135 spin_unlock_irqrestore(&gchip->lock, flags); 139 136 } 140 137 138 + static int mb86s70_gpio_to_irq(struct gpio_chip *gc, unsigned int offset) 139 + { 140 + int irq, index; 141 + 142 + for (index = 0;; index++) { 143 + irq = platform_get_irq(to_platform_device(gc->parent), index); 144 + if (irq <= 0) 145 + break; 146 + if (irq_get_irq_data(irq)->hwirq == offset) 147 + return irq; 148 + } 149 + return -EINVAL; 150 + } 151 + 141 152 static int mb86s70_gpio_probe(struct platform_device *pdev) 142 153 { 143 154 struct mb86s70_gpio_chip *gchip; ··· 167 150 if (IS_ERR(gchip->base)) 168 151 return PTR_ERR(gchip->base); 169 152 170 - gchip->clk = devm_clk_get(&pdev->dev, NULL); 171 - if (IS_ERR(gchip->clk)) 172 - return PTR_ERR(gchip->clk); 153 + if (!has_acpi_companion(&pdev->dev)) { 154 + gchip->clk = devm_clk_get(&pdev->dev, NULL); 155 + if (IS_ERR(gchip->clk)) 156 + return PTR_ERR(gchip->clk); 173 157 174 - ret = clk_prepare_enable(gchip->clk); 175 - if (ret) 176 - return ret; 158 + ret = clk_prepare_enable(gchip->clk); 159 + if (ret) 160 + return ret; 161 + } 177 162 178 163 spin_lock_init(&gchip->lock); 179 164 ··· 191 172 gchip->gc.parent = &pdev->dev; 192 173 gchip->gc.base = -1; 193 174 175 + if (has_acpi_companion(&pdev->dev)) 176 + gchip->gc.to_irq = mb86s70_gpio_to_irq; 177 + 194 178 ret = gpiochip_add_data(&gchip->gc, gchip); 195 179 if (ret) { 196 180 dev_err(&pdev->dev, "couldn't register gpio driver\n"); 197 181 clk_disable_unprepare(gchip->clk); 182 + return ret; 198 183 } 199 184 200 - return ret; 185 + if (has_acpi_companion(&pdev->dev)) 186 + 
acpi_gpiochip_request_interrupts(&gchip->gc); 187 + 188 + return 0; 201 189 } 202 190 203 191 static int mb86s70_gpio_remove(struct platform_device *pdev) 204 192 { 205 193 struct mb86s70_gpio_chip *gchip = platform_get_drvdata(pdev); 206 194 195 + if (has_acpi_companion(&pdev->dev)) 196 + acpi_gpiochip_free_interrupts(&gchip->gc); 207 197 gpiochip_remove(&gchip->gc); 208 198 clk_disable_unprepare(gchip->clk); 209 199 ··· 225 197 }; 226 198 MODULE_DEVICE_TABLE(of, mb86s70_gpio_dt_ids); 227 199 200 + #ifdef CONFIG_ACPI 201 + static const struct acpi_device_id mb86s70_gpio_acpi_ids[] = { 202 + { "SCX0007" }, 203 + { /* sentinel */ } 204 + }; 205 + MODULE_DEVICE_TABLE(acpi, mb86s70_gpio_acpi_ids); 206 + #endif 207 + 228 208 static struct platform_driver mb86s70_gpio_driver = { 229 209 .driver = { 230 210 .name = "mb86s70-gpio", 231 211 .of_match_table = mb86s70_gpio_dt_ids, 212 + .acpi_match_table = ACPI_PTR(mb86s70_gpio_acpi_ids), 232 213 }, 233 214 .probe = mb86s70_gpio_probe, 234 215 .remove = mb86s70_gpio_remove,
+28 -4
drivers/irqchip/Kconfig
··· 15 15 bool 16 16 depends on PM 17 17 select ARM_GIC 18 - select PM_CLK 19 18 20 19 config ARM_GIC_MAX_NR 21 20 int 21 + depends on ARM_GIC 22 22 default 2 if ARCH_REALVIEW 23 23 default 1 24 24 ··· 86 86 depends on PCI 87 87 select PCI_MSI 88 88 select GENERIC_IRQ_CHIP 89 + 90 + config AL_FIC 91 + bool "Amazon's Annapurna Labs Fabric Interrupt Controller" 92 + depends on OF || COMPILE_TEST 93 + select GENERIC_IRQ_CHIP 94 + select IRQ_DOMAIN 95 + help 96 + Support Amazon's Annapurna Labs Fabric Interrupt Controller. 89 97 90 98 config ATMEL_AIC_IRQ 91 99 bool ··· 225 217 select IRQ_DOMAIN 226 218 227 219 config RENESAS_INTC_IRQPIN 228 - bool 220 + bool "Renesas INTC External IRQ Pin Support" if COMPILE_TEST 229 221 select IRQ_DOMAIN 222 + help 223 + Enable support for the Renesas Interrupt Controller for external 224 + interrupt pins, as found on SH/R-Mobile and R-Car Gen1 SoCs. 230 225 231 226 config RENESAS_IRQC 232 - bool 227 + bool "Renesas R-Mobile APE6 and R-Car IRQC support" if COMPILE_TEST 233 228 select GENERIC_IRQ_CHIP 234 229 select IRQ_DOMAIN 230 + help 231 + Enable support for the Renesas Interrupt Controller for external 232 + devices, as found on R-Mobile APE6, R-Car Gen2, and R-Car Gen3 SoCs. 233 + 234 + config RENESAS_RZA1_IRQC 235 + bool "Renesas RZ/A1 IRQC support" if COMPILE_TEST 236 + select IRQ_DOMAIN_HIERARCHY 237 + help 238 + Enable support for the Renesas RZ/A1 Interrupt Controller, to use up 239 + to 8 external interrupts with configurable sense select. 235 240 236 241 config ST_IRQCHIP 237 242 bool ··· 320 299 select IRQ_DOMAIN 321 300 322 301 config RENESAS_H8S_INTC 323 - bool 302 + bool "Renesas H8S Interrupt Controller Support" if COMPILE_TEST 324 303 select IRQ_DOMAIN 304 + help 305 + Enable support for the Renesas H8/300 Interrupt Controller, as found 306 + on Renesas H8S SoCs. 325 307 326 308 config IMX_GPCV2 327 309 bool
+2
drivers/irqchip/Makefile
··· 1 1 # SPDX-License-Identifier: GPL-2.0 2 2 obj-$(CONFIG_IRQCHIP) += irqchip.o 3 3 4 + obj-$(CONFIG_AL_FIC) += irq-al-fic.o 4 5 obj-$(CONFIG_ALPINE_MSI) += irq-alpine-msi.o 5 6 obj-$(CONFIG_ATH79) += irq-ath79-cpu.o 6 7 obj-$(CONFIG_ATH79) += irq-ath79-misc.o ··· 50 49 obj-$(CONFIG_RDA_INTC) += irq-rda-intc.o 51 50 obj-$(CONFIG_RENESAS_INTC_IRQPIN) += irq-renesas-intc-irqpin.o 52 51 obj-$(CONFIG_RENESAS_IRQC) += irq-renesas-irqc.o 52 + obj-$(CONFIG_RENESAS_RZA1_IRQC) += irq-renesas-rza1.o 53 53 obj-$(CONFIG_VERSATILE_FPGA_IRQ) += irq-versatile-fpga.o 54 54 obj-$(CONFIG_ARCH_NSPIRE) += irq-zevio.o 55 55 obj-$(CONFIG_ARCH_VT8500) += irq-vt8500.o
+278
drivers/irqchip/irq-al-fic.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + /* 3 + * Copyright 2019 Amazon.com, Inc. or its affiliates. All Rights Reserved. 4 + */ 5 + 6 + #include <linux/bitfield.h> 7 + #include <linux/irq.h> 8 + #include <linux/irqchip.h> 9 + #include <linux/irqchip/chained_irq.h> 10 + #include <linux/irqdomain.h> 11 + #include <linux/module.h> 12 + #include <linux/of.h> 13 + #include <linux/of_address.h> 14 + #include <linux/of_irq.h> 15 + 16 + /* FIC Registers */ 17 + #define AL_FIC_CAUSE 0x00 18 + #define AL_FIC_MASK 0x10 19 + #define AL_FIC_CONTROL 0x28 20 + 21 + #define CONTROL_TRIGGER_RISING BIT(3) 22 + #define CONTROL_MASK_MSI_X BIT(5) 23 + 24 + #define NR_FIC_IRQS 32 25 + 26 + MODULE_AUTHOR("Talel Shenhar"); 27 + MODULE_DESCRIPTION("Amazon's Annapurna Labs Interrupt Controller Driver"); 28 + MODULE_LICENSE("GPL v2"); 29 + 30 + enum al_fic_state { 31 + AL_FIC_UNCONFIGURED = 0, 32 + AL_FIC_CONFIGURED_LEVEL, 33 + AL_FIC_CONFIGURED_RISING_EDGE, 34 + }; 35 + 36 + struct al_fic { 37 + void __iomem *base; 38 + struct irq_domain *domain; 39 + const char *name; 40 + unsigned int parent_irq; 41 + enum al_fic_state state; 42 + }; 43 + 44 + static void al_fic_set_trigger(struct al_fic *fic, 45 + struct irq_chip_generic *gc, 46 + enum al_fic_state new_state) 47 + { 48 + irq_flow_handler_t handler; 49 + u32 control = readl_relaxed(fic->base + AL_FIC_CONTROL); 50 + 51 + if (new_state == AL_FIC_CONFIGURED_LEVEL) { 52 + handler = handle_level_irq; 53 + control &= ~CONTROL_TRIGGER_RISING; 54 + } else { 55 + handler = handle_edge_irq; 56 + control |= CONTROL_TRIGGER_RISING; 57 + } 58 + gc->chip_types->handler = handler; 59 + fic->state = new_state; 60 + writel_relaxed(control, fic->base + AL_FIC_CONTROL); 61 + } 62 + 63 + static int al_fic_irq_set_type(struct irq_data *data, unsigned int flow_type) 64 + { 65 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(data); 66 + struct al_fic *fic = gc->private; 67 + enum al_fic_state new_state; 68 + int ret = 0; 69 + 70 + 
irq_gc_lock(gc); 71 + 72 + if (((flow_type & IRQ_TYPE_SENSE_MASK) != IRQ_TYPE_LEVEL_HIGH) && 73 + ((flow_type & IRQ_TYPE_SENSE_MASK) != IRQ_TYPE_EDGE_RISING)) { 74 + pr_debug("fic doesn't support flow type %d\n", flow_type); 75 + ret = -EINVAL; 76 + goto err; 77 + } 78 + 79 + new_state = (flow_type & IRQ_TYPE_LEVEL_HIGH) ? 80 + AL_FIC_CONFIGURED_LEVEL : AL_FIC_CONFIGURED_RISING_EDGE; 81 + 82 + /* 83 + * A given FIC instance can be either all level or all edge triggered. 84 + * This is generally fixed depending on what pieces of HW it's wired up 85 + * to. 86 + * 87 + * We configure it based on the sensitivity of the first source 88 + * being setup, and reject any subsequent attempt at configuring it in a 89 + * different way. 90 + */ 91 + if (fic->state == AL_FIC_UNCONFIGURED) { 92 + al_fic_set_trigger(fic, gc, new_state); 93 + } else if (fic->state != new_state) { 94 + pr_debug("fic %s state already configured to %d\n", 95 + fic->name, fic->state); 96 + ret = -EINVAL; 97 + goto err; 98 + } 99 + 100 + err: 101 + irq_gc_unlock(gc); 102 + 103 + return ret; 104 + } 105 + 106 + static void al_fic_irq_handler(struct irq_desc *desc) 107 + { 108 + struct al_fic *fic = irq_desc_get_handler_data(desc); 109 + struct irq_domain *domain = fic->domain; 110 + struct irq_chip *irqchip = irq_desc_get_chip(desc); 111 + struct irq_chip_generic *gc = irq_get_domain_generic_chip(domain, 0); 112 + unsigned long pending; 113 + unsigned int irq; 114 + u32 hwirq; 115 + 116 + chained_irq_enter(irqchip, desc); 117 + 118 + pending = readl_relaxed(fic->base + AL_FIC_CAUSE); 119 + pending &= ~gc->mask_cache; 120 + 121 + for_each_set_bit(hwirq, &pending, NR_FIC_IRQS) { 122 + irq = irq_find_mapping(domain, hwirq); 123 + generic_handle_irq(irq); 124 + } 125 + 126 + chained_irq_exit(irqchip, desc); 127 + } 128 + 129 + static int al_fic_register(struct device_node *node, 130 + struct al_fic *fic) 131 + { 132 + struct irq_chip_generic *gc; 133 + int ret; 134 + 135 + fic->domain = 
irq_domain_add_linear(node, 136 + NR_FIC_IRQS, 137 + &irq_generic_chip_ops, 138 + fic); 139 + if (!fic->domain) { 140 + pr_err("fail to add irq domain\n"); 141 + return -ENOMEM; 142 + } 143 + 144 + ret = irq_alloc_domain_generic_chips(fic->domain, 145 + NR_FIC_IRQS, 146 + 1, fic->name, 147 + handle_level_irq, 148 + 0, 0, IRQ_GC_INIT_MASK_CACHE); 149 + if (ret) { 150 + pr_err("fail to allocate generic chip (%d)\n", ret); 151 + goto err_domain_remove; 152 + } 153 + 154 + gc = irq_get_domain_generic_chip(fic->domain, 0); 155 + gc->reg_base = fic->base; 156 + gc->chip_types->regs.mask = AL_FIC_MASK; 157 + gc->chip_types->regs.ack = AL_FIC_CAUSE; 158 + gc->chip_types->chip.irq_mask = irq_gc_mask_set_bit; 159 + gc->chip_types->chip.irq_unmask = irq_gc_mask_clr_bit; 160 + gc->chip_types->chip.irq_ack = irq_gc_ack_clr_bit; 161 + gc->chip_types->chip.irq_set_type = al_fic_irq_set_type; 162 + gc->chip_types->chip.flags = IRQCHIP_SKIP_SET_WAKE; 163 + gc->private = fic; 164 + 165 + irq_set_chained_handler_and_data(fic->parent_irq, 166 + al_fic_irq_handler, 167 + fic); 168 + return 0; 169 + 170 + err_domain_remove: 171 + irq_domain_remove(fic->domain); 172 + 173 + return ret; 174 + } 175 + 176 + /* 177 + * al_fic_wire_init() - initialize and configure fic in wire mode 178 + * @of_node: optional pointer to interrupt controller's device tree node. 179 + * @base: mmio to fic register 180 + * @name: name of the fic 181 + * @parent_irq: interrupt of parent 182 + * 183 + * This API will configure the fic hardware to to work in wire mode. 184 + * In wire mode, fic hardware is generating a wire ("wired") interrupt. 185 + * Interrupt can be generated based on positive edge or level - configuration is 186 + * to be determined based on connected hardware to this fic. 
187 + */ 188 + static struct al_fic *al_fic_wire_init(struct device_node *node, 189 + void __iomem *base, 190 + const char *name, 191 + unsigned int parent_irq) 192 + { 193 + struct al_fic *fic; 194 + int ret; 195 + u32 control = CONTROL_MASK_MSI_X; 196 + 197 + fic = kzalloc(sizeof(*fic), GFP_KERNEL); 198 + if (!fic) 199 + return ERR_PTR(-ENOMEM); 200 + 201 + fic->base = base; 202 + fic->parent_irq = parent_irq; 203 + fic->name = name; 204 + 205 + /* mask out all interrupts */ 206 + writel_relaxed(0xFFFFFFFF, fic->base + AL_FIC_MASK); 207 + 208 + /* clear any pending interrupt */ 209 + writel_relaxed(0, fic->base + AL_FIC_CAUSE); 210 + 211 + writel_relaxed(control, fic->base + AL_FIC_CONTROL); 212 + 213 + ret = al_fic_register(node, fic); 214 + if (ret) { 215 + pr_err("fail to register irqchip\n"); 216 + goto err_free; 217 + } 218 + 219 + pr_debug("%s initialized successfully in Legacy mode (parent-irq=%u)\n", 220 + fic->name, parent_irq); 221 + 222 + return fic; 223 + 224 + err_free: 225 + kfree(fic); 226 + return ERR_PTR(ret); 227 + } 228 + 229 + static int __init al_fic_init_dt(struct device_node *node, 230 + struct device_node *parent) 231 + { 232 + int ret; 233 + void __iomem *base; 234 + unsigned int parent_irq; 235 + struct al_fic *fic; 236 + 237 + if (!parent) { 238 + pr_err("%s: unsupported - device require a parent\n", 239 + node->name); 240 + return -EINVAL; 241 + } 242 + 243 + base = of_iomap(node, 0); 244 + if (!base) { 245 + pr_err("%s: fail to map memory\n", node->name); 246 + return -ENOMEM; 247 + } 248 + 249 + parent_irq = irq_of_parse_and_map(node, 0); 250 + if (!parent_irq) { 251 + pr_err("%s: fail to map irq\n", node->name); 252 + ret = -EINVAL; 253 + goto err_unmap; 254 + } 255 + 256 + fic = al_fic_wire_init(node, 257 + base, 258 + node->name, 259 + parent_irq); 260 + if (IS_ERR(fic)) { 261 + pr_err("%s: fail to initialize irqchip (%lu)\n", 262 + node->name, 263 + PTR_ERR(fic)); 264 + ret = PTR_ERR(fic); 265 + goto err_irq_dispose; 266 + } 267 
+ 268 + return 0; 269 + 270 + err_irq_dispose: 271 + irq_dispose_mapping(parent_irq); 272 + err_unmap: 273 + iounmap(base); 274 + 275 + return ret; 276 + } 277 + 278 + IRQCHIP_DECLARE(al_fic, "amazon,al-fic", al_fic_init_dt);
+79 -7
drivers/irqchip/irq-csky-mpintc.c
··· 32 32 #define INTCG_CIDSTR 0x1000 33 33 34 34 #define INTCL_PICTLR 0x0 35 + #define INTCL_CFGR 0x14 35 36 #define INTCL_SIGR 0x60 36 - #define INTCL_HPPIR 0x68 37 37 #define INTCL_RDYIR 0x6c 38 38 #define INTCL_SENR 0xa0 39 39 #define INTCL_CENR 0xa4 ··· 41 41 42 42 static DEFINE_PER_CPU(void __iomem *, intcl_reg); 43 43 44 + static unsigned long *__trigger; 45 + 46 + #define IRQ_OFFSET(irq) ((irq < COMM_IRQ_BASE) ? irq : (irq - COMM_IRQ_BASE)) 47 + 48 + #define TRIG_BYTE_OFFSET(i) ((((i) * 2) / 32) * 4) 49 + #define TRIG_BIT_OFFSET(i) (((i) * 2) % 32) 50 + 51 + #define TRIG_VAL(trigger, irq) (trigger << TRIG_BIT_OFFSET(IRQ_OFFSET(irq))) 52 + #define TRIG_VAL_MSK(irq) (~(3 << TRIG_BIT_OFFSET(IRQ_OFFSET(irq)))) 53 + 54 + #define TRIG_BASE(irq) \ 55 + (TRIG_BYTE_OFFSET(IRQ_OFFSET(irq)) + ((irq < COMM_IRQ_BASE) ? \ 56 + (this_cpu_read(intcl_reg) + INTCL_CFGR) : (INTCG_base + INTCG_CICFGR))) 57 + 58 + static DEFINE_SPINLOCK(setup_lock); 59 + static void setup_trigger(unsigned long irq, unsigned long trigger) 60 + { 61 + unsigned int tmp; 62 + 63 + spin_lock(&setup_lock); 64 + 65 + /* setup trigger */ 66 + tmp = readl_relaxed(TRIG_BASE(irq)) & TRIG_VAL_MSK(irq); 67 + 68 + writel_relaxed(tmp | TRIG_VAL(trigger, irq), TRIG_BASE(irq)); 69 + 70 + spin_unlock(&setup_lock); 71 + } 72 + 44 73 static void csky_mpintc_handler(struct pt_regs *regs) 45 74 { 46 75 void __iomem *reg_base = this_cpu_read(intcl_reg); 47 76 48 - do { 49 - handle_domain_irq(root_domain, 50 - readl_relaxed(reg_base + INTCL_RDYIR), 51 - regs); 52 - } while (readl_relaxed(reg_base + INTCL_HPPIR) & BIT(31)); 77 + handle_domain_irq(root_domain, 78 + readl_relaxed(reg_base + INTCL_RDYIR), regs); 53 79 } 54 80 55 81 static void csky_mpintc_enable(struct irq_data *d) 56 82 { 57 83 void __iomem *reg_base = this_cpu_read(intcl_reg); 84 + 85 + setup_trigger(d->hwirq, __trigger[d->hwirq]); 58 86 59 87 writel_relaxed(d->hwirq, reg_base + INTCL_SENR); 60 88 } ··· 99 71 void __iomem *reg_base = 
this_cpu_read(intcl_reg); 100 72 101 73 writel_relaxed(d->hwirq, reg_base + INTCL_CACR); 74 + } 75 + 76 + static int csky_mpintc_set_type(struct irq_data *d, unsigned int type) 77 + { 78 + switch (type & IRQ_TYPE_SENSE_MASK) { 79 + case IRQ_TYPE_LEVEL_HIGH: 80 + __trigger[d->hwirq] = 0; 81 + break; 82 + case IRQ_TYPE_LEVEL_LOW: 83 + __trigger[d->hwirq] = 1; 84 + break; 85 + case IRQ_TYPE_EDGE_RISING: 86 + __trigger[d->hwirq] = 2; 87 + break; 88 + case IRQ_TYPE_EDGE_FALLING: 89 + __trigger[d->hwirq] = 3; 90 + break; 91 + default: 92 + return -EINVAL; 93 + } 94 + 95 + return 0; 102 96 } 103 97 104 98 #ifdef CONFIG_SMP ··· 166 116 .irq_eoi = csky_mpintc_eoi, 167 117 .irq_enable = csky_mpintc_enable, 168 118 .irq_disable = csky_mpintc_disable, 119 + .irq_set_type = csky_mpintc_set_type, 169 120 #ifdef CONFIG_SMP 170 121 .irq_set_affinity = csky_irq_set_affinity, 171 122 #endif ··· 187 136 return 0; 188 137 } 189 138 139 + static int csky_irq_domain_xlate_cells(struct irq_domain *d, 140 + struct device_node *ctrlr, const u32 *intspec, 141 + unsigned int intsize, unsigned long *out_hwirq, 142 + unsigned int *out_type) 143 + { 144 + if (WARN_ON(intsize < 1)) 145 + return -EINVAL; 146 + 147 + *out_hwirq = intspec[0]; 148 + if (intsize > 1) 149 + *out_type = intspec[1] & IRQ_TYPE_SENSE_MASK; 150 + else 151 + *out_type = IRQ_TYPE_LEVEL_HIGH; 152 + 153 + return 0; 154 + } 155 + 190 156 static const struct irq_domain_ops csky_irqdomain_ops = { 191 157 .map = csky_irqdomain_map, 192 - .xlate = irq_domain_xlate_onecell, 158 + .xlate = csky_irq_domain_xlate_cells, 193 159 }; 194 160 195 161 #ifdef CONFIG_SMP ··· 239 171 ret = of_property_read_u32(node, "csky,num-irqs", &nr_irq); 240 172 if (ret < 0) 241 173 nr_irq = INTC_IRQS; 174 + 175 + __trigger = kcalloc(nr_irq, sizeof(unsigned long), GFP_KERNEL); 176 + if (__trigger == NULL) 177 + return -ENXIO; 242 178 243 179 if (INTCG_base == NULL) { 244 180 INTCG_base = ioremap(mfcr("cr<31, 14>"),
+68 -17
drivers/irqchip/irq-gic-v2m.c
··· 53 53 54 54 /* List of flags for specific v2m implementation */ 55 55 #define GICV2M_NEEDS_SPI_OFFSET 0x00000001 56 + #define GICV2M_GRAVITON_ADDRESS_ONLY 0x00000002 56 57 57 58 static LIST_HEAD(v2m_nodes); 58 59 static DEFINE_SPINLOCK(v2m_lock); ··· 96 95 .chip = &gicv2m_msi_irq_chip, 97 96 }; 98 97 98 + static phys_addr_t gicv2m_get_msi_addr(struct v2m_data *v2m, int hwirq) 99 + { 100 + if (v2m->flags & GICV2M_GRAVITON_ADDRESS_ONLY) 101 + return v2m->res.start | ((hwirq - 32) << 3); 102 + else 103 + return v2m->res.start + V2M_MSI_SETSPI_NS; 104 + } 105 + 99 106 static void gicv2m_compose_msi_msg(struct irq_data *data, struct msi_msg *msg) 100 107 { 101 108 struct v2m_data *v2m = irq_data_get_irq_chip_data(data); 102 - phys_addr_t addr = v2m->res.start + V2M_MSI_SETSPI_NS; 109 + phys_addr_t addr = gicv2m_get_msi_addr(v2m, data->hwirq); 103 110 104 111 msg->address_hi = upper_32_bits(addr); 105 112 msg->address_lo = lower_32_bits(addr); 106 - msg->data = data->hwirq; 107 113 114 + if (v2m->flags & GICV2M_GRAVITON_ADDRESS_ONLY) 115 + msg->data = 0; 116 + else 117 + msg->data = data->hwirq; 108 118 if (v2m->flags & GICV2M_NEEDS_SPI_OFFSET) 109 119 msg->data -= v2m->spi_offset; 110 120 ··· 197 185 hwirq = v2m->spi_start + offset; 198 186 199 187 err = iommu_dma_prepare_msi(info->desc, 200 - v2m->res.start + V2M_MSI_SETSPI_NS); 188 + gicv2m_get_msi_addr(v2m, hwirq)); 201 189 if (err) 202 190 return err; 203 191 ··· 316 304 317 305 static int __init gicv2m_init_one(struct fwnode_handle *fwnode, 318 306 u32 spi_start, u32 nr_spis, 319 - struct resource *res) 307 + struct resource *res, u32 flags) 320 308 { 321 309 int ret; 322 310 struct v2m_data *v2m; ··· 329 317 330 318 INIT_LIST_HEAD(&v2m->entry); 331 319 v2m->fwnode = fwnode; 320 + v2m->flags = flags; 332 321 333 322 memcpy(&v2m->res, res, sizeof(struct resource)); 334 323 ··· 344 331 v2m->spi_start = spi_start; 345 332 v2m->nr_spis = nr_spis; 346 333 } else { 347 - u32 typer = readl_relaxed(v2m->base + 
V2M_MSI_TYPER); 334 + u32 typer; 335 + 336 + /* Graviton should always have explicit spi_start/nr_spis */ 337 + if (v2m->flags & GICV2M_GRAVITON_ADDRESS_ONLY) { 338 + ret = -EINVAL; 339 + goto err_iounmap; 340 + } 341 + typer = readl_relaxed(v2m->base + V2M_MSI_TYPER); 348 342 349 343 v2m->spi_start = V2M_MSI_TYPER_BASE_SPI(typer); 350 344 v2m->nr_spis = V2M_MSI_TYPER_NUM_SPI(typer); ··· 372 352 * 373 353 * Broadom NS2 GICv2m implementation has an erratum where the MSI data 374 354 * is 'spi_number - 32' 355 + * 356 + * Reading that register fails on the Graviton implementation 375 357 */ 376 - switch (readl_relaxed(v2m->base + V2M_MSI_IIDR)) { 377 - case XGENE_GICV2M_MSI_IIDR: 378 - v2m->flags |= GICV2M_NEEDS_SPI_OFFSET; 379 - v2m->spi_offset = v2m->spi_start; 380 - break; 381 - case BCM_NS2_GICV2M_MSI_IIDR: 382 - v2m->flags |= GICV2M_NEEDS_SPI_OFFSET; 383 - v2m->spi_offset = 32; 384 - break; 358 + if (!(v2m->flags & GICV2M_GRAVITON_ADDRESS_ONLY)) { 359 + switch (readl_relaxed(v2m->base + V2M_MSI_IIDR)) { 360 + case XGENE_GICV2M_MSI_IIDR: 361 + v2m->flags |= GICV2M_NEEDS_SPI_OFFSET; 362 + v2m->spi_offset = v2m->spi_start; 363 + break; 364 + case BCM_NS2_GICV2M_MSI_IIDR: 365 + v2m->flags |= GICV2M_NEEDS_SPI_OFFSET; 366 + v2m->spi_offset = 32; 367 + break; 368 + } 385 369 } 386 - 387 370 v2m->bm = kcalloc(BITS_TO_LONGS(v2m->nr_spis), sizeof(long), 388 371 GFP_KERNEL); 389 372 if (!v2m->bm) { ··· 439 416 pr_info("DT overriding V2M MSI_TYPER (base:%u, num:%u)\n", 440 417 spi_start, nr_spis); 441 418 442 - ret = gicv2m_init_one(&child->fwnode, spi_start, nr_spis, &res); 419 + ret = gicv2m_init_one(&child->fwnode, spi_start, nr_spis, 420 + &res, 0); 443 421 if (ret) { 444 422 of_node_put(child); 445 423 break; ··· 472 448 return data->fwnode; 473 449 } 474 450 451 + static bool acpi_check_amazon_graviton_quirks(void) 452 + { 453 + static struct acpi_table_madt *madt; 454 + acpi_status status; 455 + bool rc = false; 456 + 457 + #define ACPI_AMZN_OEM_ID "AMAZON" 458 + 459 
+ status = acpi_get_table(ACPI_SIG_MADT, 0, 460 + (struct acpi_table_header **)&madt); 461 + 462 + if (ACPI_FAILURE(status) || !madt) 463 + return rc; 464 + rc = !memcmp(madt->header.oem_id, ACPI_AMZN_OEM_ID, ACPI_OEM_ID_SIZE); 465 + acpi_put_table((struct acpi_table_header *)madt); 466 + 467 + return rc; 468 + } 469 + 475 470 static int __init 476 471 acpi_parse_madt_msi(union acpi_subtable_headers *header, 477 472 const unsigned long end) ··· 500 457 u32 spi_start = 0, nr_spis = 0; 501 458 struct acpi_madt_generic_msi_frame *m; 502 459 struct fwnode_handle *fwnode; 460 + u32 flags = 0; 503 461 504 462 m = (struct acpi_madt_generic_msi_frame *)header; 505 463 if (BAD_MADT_ENTRY(m, end)) ··· 509 465 res.start = m->base_address; 510 466 res.end = m->base_address + SZ_4K - 1; 511 467 res.flags = IORESOURCE_MEM; 468 + 469 + if (acpi_check_amazon_graviton_quirks()) { 470 + pr_info("applying Amazon Graviton quirk\n"); 471 + res.end = res.start + SZ_8K - 1; 472 + flags |= GICV2M_GRAVITON_ADDRESS_ONLY; 473 + gicv2m_msi_domain_info.flags &= ~MSI_FLAG_MULTI_PCI_MSI; 474 + } 512 475 513 476 if (m->flags & ACPI_MADT_OVERRIDE_SPI_VALUES) { 514 477 spi_start = m->spi_base; ··· 531 480 return -EINVAL; 532 481 } 533 482 534 - ret = gicv2m_init_one(fwnode, spi_start, nr_spis, &res); 483 + ret = gicv2m_init_one(fwnode, spi_start, nr_spis, &res, flags); 535 484 if (ret) 536 485 irq_domain_free_fwnode(fwnode); 537 486
+3
drivers/irqchip/irq-gic-v3.c
··· 1339 1339 if (gic_dist_supports_lpis()) { 1340 1340 its_init(handle, &gic_data.rdists, gic_data.domain); 1341 1341 its_cpu_init(); 1342 + } else { 1343 + if (IS_ENABLED(CONFIG_ARM_GIC_V2M)) 1344 + gicv2m_init(handle, gic_data.domain); 1342 1345 } 1343 1346 1344 1347 if (gic_prio_masking_enabled()) {
+1 -2
drivers/irqchip/irq-mbigen.c
··· 344 344 err = -EINVAL; 345 345 346 346 if (err) { 347 - dev_err(&pdev->dev, "Failed to create mbi-gen@%p irqdomain", 348 - mgn_chip->base); 347 + dev_err(&pdev->dev, "Failed to create mbi-gen irqdomain\n"); 349 348 return err; 350 349 } 351 350
+1
drivers/irqchip/irq-meson-gpio.c
···
     { .compatible = "amlogic,meson-gxbb-gpio-intc", .data = &gxbb_params },
     { .compatible = "amlogic,meson-gxl-gpio-intc", .data = &gxl_params },
     { .compatible = "amlogic,meson-axg-gpio-intc", .data = &axg_params },
+    { .compatible = "amlogic,meson-g12a-gpio-intc", .data = &axg_params },
     { }
 };

+2 -1
drivers/irqchip/irq-renesas-intc-irqpin.c
···
     }

     irq_chip = &p->irq_chip;
-    irq_chip->name = name;
+    irq_chip->name = "intc-irqpin";
+    irq_chip->parent_device = dev;
     irq_chip->irq_mask = disable_fn;
     irq_chip->irq_unmask = enable_fn;
     irq_chip->irq_set_type = intc_irqpin_irq_set_type;
+32 -59
drivers/irqchip/irq-renesas-irqc.c
···

 #include <linux/init.h>
 #include <linux/platform_device.h>
-#include <linux/spinlock.h>
 #include <linux/interrupt.h>
 #include <linux/ioport.h>
 #include <linux/io.h>
···
     void __iomem *cpu_int_base;
     struct irqc_irq irq[IRQC_IRQ_MAX];
     unsigned int number_of_irqs;
-    struct platform_device *pdev;
+    struct device *dev;
     struct irq_chip_generic *gc;
     struct irq_domain *irq_domain;
     atomic_t wakeup_path;
···

 static void irqc_dbg(struct irqc_irq *i, char *str)
 {
-    dev_dbg(&i->p->pdev->dev, "%s (%d:%d)\n",
-        str, i->requested_irq, i->hw_irq);
+    dev_dbg(i->p->dev, "%s (%d:%d)\n", str, i->requested_irq, i->hw_irq);
 }

 static unsigned char irqc_sense[IRQ_TYPE_SENSE_MASK + 1] = {
···

 static int irqc_probe(struct platform_device *pdev)
 {
+    struct device *dev = &pdev->dev;
+    const char *name = dev_name(dev);
     struct irqc_priv *p;
-    struct resource *io;
     struct resource *irq;
-    const char *name = dev_name(&pdev->dev);
     int ret;
     int k;

-    p = kzalloc(sizeof(*p), GFP_KERNEL);
-    if (!p) {
-        dev_err(&pdev->dev, "failed to allocate driver data\n");
-        ret = -ENOMEM;
-        goto err0;
-    }
+    p = devm_kzalloc(dev, sizeof(*p), GFP_KERNEL);
+    if (!p)
+        return -ENOMEM;

-    p->pdev = pdev;
+    p->dev = dev;
     platform_set_drvdata(pdev, p);

-    pm_runtime_enable(&pdev->dev);
-    pm_runtime_get_sync(&pdev->dev);
-
-    /* get hold of manadatory IOMEM */
-    io = platform_get_resource(pdev, IORESOURCE_MEM, 0);
-    if (!io) {
-        dev_err(&pdev->dev, "not enough IOMEM resources\n");
-        ret = -EINVAL;
-        goto err1;
-    }
+    pm_runtime_enable(dev);
+    pm_runtime_get_sync(dev);

     /* allow any number of IRQs between 1 and IRQC_IRQ_MAX */
     for (k = 0; k < IRQC_IRQ_MAX; k++) {
···

     p->number_of_irqs = k;
     if (p->number_of_irqs < 1) {
-        dev_err(&pdev->dev, "not enough IRQ resources\n");
+        dev_err(dev, "not enough IRQ resources\n");
         ret = -EINVAL;
-        goto err1;
+        goto err_runtime_pm_disable;
     }

     /* ioremap IOMEM and setup read/write callbacks */
-    p->iomem = ioremap_nocache(io->start, resource_size(io));
-    if (!p->iomem) {
-        dev_err(&pdev->dev, "failed to remap IOMEM\n");
-        ret = -ENXIO;
-        goto err2;
+    p->iomem = devm_platform_ioremap_resource(pdev, 0);
+    if (IS_ERR(p->iomem)) {
+        ret = PTR_ERR(p->iomem);
+        goto err_runtime_pm_disable;
     }

     p->cpu_int_base = p->iomem + IRQC_INT_CPU_BASE(0); /* SYS-SPI */

-    p->irq_domain = irq_domain_add_linear(pdev->dev.of_node,
-                                          p->number_of_irqs,
+    p->irq_domain = irq_domain_add_linear(dev->of_node, p->number_of_irqs,
                                           &irq_generic_chip_ops, p);
     if (!p->irq_domain) {
         ret = -ENXIO;
-        dev_err(&pdev->dev, "cannot initialize irq domain\n");
-        goto err2;
+        dev_err(dev, "cannot initialize irq domain\n");
+        goto err_runtime_pm_disable;
     }

     ret = irq_alloc_domain_generic_chips(p->irq_domain, p->number_of_irqs,
-                                         1, name, handle_level_irq,
+                                         1, "irqc", handle_level_irq,
                                          0, 0, IRQ_GC_INIT_NESTED_LOCK);
     if (ret) {
-        dev_err(&pdev->dev, "cannot allocate generic chip\n");
-        goto err3;
+        dev_err(dev, "cannot allocate generic chip\n");
+        goto err_remove_domain;
     }

     p->gc = irq_get_domain_generic_chip(p->irq_domain, 0);
     p->gc->reg_base = p->cpu_int_base;
     p->gc->chip_types[0].regs.enable = IRQC_EN_SET;
     p->gc->chip_types[0].regs.disable = IRQC_EN_STS;
+    p->gc->chip_types[0].chip.parent_device = dev;
     p->gc->chip_types[0].chip.irq_mask = irq_gc_mask_disable_reg;
     p->gc->chip_types[0].chip.irq_unmask = irq_gc_unmask_enable_reg;
     p->gc->chip_types[0].chip.irq_set_type = irqc_irq_set_type;
···

     /* request interrupts one by one */
     for (k = 0; k < p->number_of_irqs; k++) {
-        if (request_irq(p->irq[k].requested_irq, irqc_irq_handler,
-                0, name, &p->irq[k])) {
-            dev_err(&pdev->dev, "failed to request IRQ\n");
+        if (devm_request_irq(dev, p->irq[k].requested_irq,
+                             irqc_irq_handler, 0, name, &p->irq[k])) {
+            dev_err(dev, "failed to request IRQ\n");
             ret = -ENOENT;
-            goto err4;
+            goto err_remove_domain;
         }
     }

-    dev_info(&pdev->dev, "driving %d irqs\n", p->number_of_irqs);
+    dev_info(dev, "driving %d irqs\n", p->number_of_irqs);

     return 0;
-err4:
-    while (--k >= 0)
-        free_irq(p->irq[k].requested_irq, &p->irq[k]);

-err3:
+err_remove_domain:
     irq_domain_remove(p->irq_domain);
-err2:
-    iounmap(p->iomem);
-err1:
-    pm_runtime_put(&pdev->dev);
-    pm_runtime_disable(&pdev->dev);
-    kfree(p);
-err0:
+err_runtime_pm_disable:
+    pm_runtime_put(dev);
+    pm_runtime_disable(dev);
     return ret;
 }

 static int irqc_remove(struct platform_device *pdev)
 {
     struct irqc_priv *p = platform_get_drvdata(pdev);
-    int k;
-
-    for (k = 0; k < p->number_of_irqs; k++)
-        free_irq(p->irq[k].requested_irq, &p->irq[k]);

     irq_domain_remove(p->irq_domain);
-    iounmap(p->iomem);
     pm_runtime_put(&pdev->dev);
     pm_runtime_disable(&pdev->dev);
-    kfree(p);
     return 0;
 }

+283
drivers/irqchip/irq-renesas-rza1.c
···
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Renesas RZ/A1 IRQC Driver
+ *
+ * Copyright (C) 2019 Glider bvba
+ */
+
+#include <linux/err.h>
+#include <linux/init.h>
+#include <linux/interrupt.h>
+#include <linux/io.h>
+#include <linux/irqdomain.h>
+#include <linux/irq.h>
+#include <linux/module.h>
+#include <linux/of_irq.h>
+#include <linux/platform_device.h>
+#include <linux/slab.h>
+
+#include <dt-bindings/interrupt-controller/arm-gic.h>
+
+#define IRQC_NUM_IRQ 8
+
+#define ICR0 0 /* Interrupt Control Register 0 */
+
+#define ICR0_NMIL BIT(15) /* NMI Input Level (0=low, 1=high) */
+#define ICR0_NMIE BIT(8)  /* Edge Select (0=falling, 1=rising) */
+#define ICR0_NMIF BIT(1)  /* NMI Interrupt Request */
+
+#define ICR1 2 /* Interrupt Control Register 1 */
+
+#define ICR1_IRQS(n, sense) ((sense) << ((n) * 2)) /* IRQ Sense Select */
+#define ICR1_IRQS_LEVEL_LOW 0
+#define ICR1_IRQS_EDGE_FALLING 1
+#define ICR1_IRQS_EDGE_RISING 2
+#define ICR1_IRQS_EDGE_BOTH 3
+#define ICR1_IRQS_MASK(n) ICR1_IRQS((n), 3)
+
+#define IRQRR 4 /* IRQ Interrupt Request Register */
+
+
+struct rza1_irqc_priv {
+    struct device *dev;
+    void __iomem *base;
+    struct irq_chip chip;
+    struct irq_domain *irq_domain;
+    struct of_phandle_args map[IRQC_NUM_IRQ];
+};
+
+static struct rza1_irqc_priv *irq_data_to_priv(struct irq_data *data)
+{
+    return data->domain->host_data;
+}
+
+static void rza1_irqc_eoi(struct irq_data *d)
+{
+    struct rza1_irqc_priv *priv = irq_data_to_priv(d);
+    u16 bit = BIT(irqd_to_hwirq(d));
+    u16 tmp;
+
+    tmp = readw_relaxed(priv->base + IRQRR);
+    if (tmp & bit)
+        writew_relaxed(GENMASK(IRQC_NUM_IRQ - 1, 0) & ~bit,
+                       priv->base + IRQRR);
+
+    irq_chip_eoi_parent(d);
+}
+
+static int rza1_irqc_set_type(struct irq_data *d, unsigned int type)
+{
+    struct rza1_irqc_priv *priv = irq_data_to_priv(d);
+    unsigned int hw_irq = irqd_to_hwirq(d);
+    u16 sense, tmp;
+
+    switch (type & IRQ_TYPE_SENSE_MASK) {
+    case IRQ_TYPE_LEVEL_LOW:
+        sense = ICR1_IRQS_LEVEL_LOW;
+        break;
+
+    case IRQ_TYPE_EDGE_FALLING:
+        sense = ICR1_IRQS_EDGE_FALLING;
+        break;
+
+    case IRQ_TYPE_EDGE_RISING:
+        sense = ICR1_IRQS_EDGE_RISING;
+        break;
+
+    case IRQ_TYPE_EDGE_BOTH:
+        sense = ICR1_IRQS_EDGE_BOTH;
+        break;
+
+    default:
+        return -EINVAL;
+    }
+
+    tmp = readw_relaxed(priv->base + ICR1);
+    tmp &= ~ICR1_IRQS_MASK(hw_irq);
+    tmp |= ICR1_IRQS(hw_irq, sense);
+    writew_relaxed(tmp, priv->base + ICR1);
+    return 0;
+}
+
+static int rza1_irqc_alloc(struct irq_domain *domain, unsigned int virq,
+                           unsigned int nr_irqs, void *arg)
+{
+    struct rza1_irqc_priv *priv = domain->host_data;
+    struct irq_fwspec *fwspec = arg;
+    unsigned int hwirq = fwspec->param[0];
+    struct irq_fwspec spec;
+    unsigned int i;
+    int ret;
+
+    ret = irq_domain_set_hwirq_and_chip(domain, virq, hwirq, &priv->chip,
+                                        priv);
+    if (ret)
+        return ret;
+
+    spec.fwnode = &priv->dev->of_node->fwnode;
+    spec.param_count = priv->map[hwirq].args_count;
+    for (i = 0; i < spec.param_count; i++)
+        spec.param[i] = priv->map[hwirq].args[i];
+
+    return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, &spec);
+}
+
+static int rza1_irqc_translate(struct irq_domain *domain,
+                               struct irq_fwspec *fwspec, unsigned long *hwirq,
+                               unsigned int *type)
+{
+    if (fwspec->param_count != 2 || fwspec->param[0] >= IRQC_NUM_IRQ)
+        return -EINVAL;
+
+    *hwirq = fwspec->param[0];
+    *type = fwspec->param[1];
+    return 0;
+}
+
+static const struct irq_domain_ops rza1_irqc_domain_ops = {
+    .alloc = rza1_irqc_alloc,
+    .translate = rza1_irqc_translate,
+};
+
+static int rza1_irqc_parse_map(struct rza1_irqc_priv *priv,
+                               struct device_node *gic_node)
+{
+    unsigned int imaplen, i, j, ret;
+    struct device *dev = priv->dev;
+    struct device_node *ipar;
+    const __be32 *imap;
+    u32 intsize;
+
+    imap = of_get_property(dev->of_node, "interrupt-map", &imaplen);
+    if (!imap)
+        return -EINVAL;
+
+    for (i = 0; i < IRQC_NUM_IRQ; i++) {
+        if (imaplen < 3)
+            return -EINVAL;
+
+        /* Check interrupt number, ignore sense */
+        if (be32_to_cpup(imap) != i)
+            return -EINVAL;
+
+        ipar = of_find_node_by_phandle(be32_to_cpup(imap + 2));
+        if (ipar != gic_node) {
+            of_node_put(ipar);
+            return -EINVAL;
+        }
+
+        imap += 3;
+        imaplen -= 3;
+
+        ret = of_property_read_u32(ipar, "#interrupt-cells", &intsize);
+        of_node_put(ipar);
+        if (ret)
+            return ret;
+
+        if (imaplen < intsize)
+            return -EINVAL;
+
+        priv->map[i].args_count = intsize;
+        for (j = 0; j < intsize; j++)
+            priv->map[i].args[j] = be32_to_cpup(imap++);
+
+        imaplen -= intsize;
+    }
+
+    return 0;
+}
+
+static int rza1_irqc_probe(struct platform_device *pdev)
+{
+    struct device *dev = &pdev->dev;
+    struct device_node *np = dev->of_node;
+    struct irq_domain *parent = NULL;
+    struct device_node *gic_node;
+    struct rza1_irqc_priv *priv;
+    int ret;
+
+    priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
+    if (!priv)
+        return -ENOMEM;
+
+    platform_set_drvdata(pdev, priv);
+    priv->dev = dev;
+
+    priv->base = devm_platform_ioremap_resource(pdev, 0);
+    if (IS_ERR(priv->base))
+        return PTR_ERR(priv->base);
+
+    gic_node = of_irq_find_parent(np);
+    if (gic_node) {
+        parent = irq_find_host(gic_node);
+        of_node_put(gic_node);
+    }
+
+    if (!parent) {
+        dev_err(dev, "cannot find parent domain\n");
+        return -ENODEV;
+    }
+
+    ret = rza1_irqc_parse_map(priv, gic_node);
+    if (ret) {
+        dev_err(dev, "cannot parse %s: %d\n", "interrupt-map", ret);
+        return ret;
+    }
+
+    priv->chip.name = "rza1-irqc",
+    priv->chip.irq_mask = irq_chip_mask_parent,
+    priv->chip.irq_unmask = irq_chip_unmask_parent,
+    priv->chip.irq_eoi = rza1_irqc_eoi,
+    priv->chip.irq_retrigger = irq_chip_retrigger_hierarchy,
+    priv->chip.irq_set_type = rza1_irqc_set_type,
+    priv->chip.flags = IRQCHIP_MASK_ON_SUSPEND | IRQCHIP_SKIP_SET_WAKE;
+
+    priv->irq_domain = irq_domain_add_hierarchy(parent, 0, IRQC_NUM_IRQ,
+                                                np, &rza1_irqc_domain_ops,
+                                                priv);
+    if (!priv->irq_domain) {
+        dev_err(dev, "cannot initialize irq domain\n");
+        return -ENOMEM;
+    }
+
+    return 0;
+}
+
+static int rza1_irqc_remove(struct platform_device *pdev)
+{
+    struct rza1_irqc_priv *priv = platform_get_drvdata(pdev);
+
+    irq_domain_remove(priv->irq_domain);
+    return 0;
+}
+
+static const struct of_device_id rza1_irqc_dt_ids[] = {
+    { .compatible = "renesas,rza1-irqc" },
+    {},
+};
+MODULE_DEVICE_TABLE(of, rza1_irqc_dt_ids);
+
+static struct platform_driver rza1_irqc_device_driver = {
+    .probe = rza1_irqc_probe,
+    .remove = rza1_irqc_remove,
+    .driver = {
+        .name = "renesas_rza1_irqc",
+        .of_match_table = rza1_irqc_dt_ids,
+    }
+};
+
+static int __init rza1_irqc_init(void)
+{
+    return platform_driver_register(&rza1_irqc_device_driver);
+}
+postcore_initcall(rza1_irqc_init);
+
+static void __exit rza1_irqc_exit(void)
+{
+    platform_driver_unregister(&rza1_irqc_device_driver);
+}
+module_exit(rza1_irqc_exit);
+
+MODULE_AUTHOR("Geert Uytterhoeven <geert+renesas@glider.be>");
+MODULE_DESCRIPTION("Renesas RZ/A1 IRQC Driver");
+MODULE_LICENSE("GPL v2");
+113 -33
drivers/irqchip/irq-sni-exiu.c
···
 /*
  * Driver for Socionext External Interrupt Unit (EXIU)
  *
- * Copyright (c) 2017 Linaro, Ltd. <ard.biesheuvel@linaro.org>
+ * Copyright (c) 2017-2019 Linaro, Ltd. <ard.biesheuvel@linaro.org>
  *
  * Based on irq-tegra.c:
  *   Copyright (C) 2011 Google, Inc.
···
 #include <linux/of.h>
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
+#include <linux/platform_device.h>

 #include <dt-bindings/interrupt-controller/arm-gic.h>

···

         *hwirq = fwspec->param[1] - info->spi_base;
         *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
-        return 0;
+    } else {
+        if (fwspec->param_count != 2)
+            return -EINVAL;
+        *hwirq = fwspec->param[0];
+        *type = fwspec->param[2] & IRQ_TYPE_SENSE_MASK;
     }
-    return -EINVAL;
+    return 0;
 }

 static int exiu_domain_alloc(struct irq_domain *dom, unsigned int virq,
···
     struct exiu_irq_data *info = dom->host_data;
     irq_hw_number_t hwirq;

-    if (fwspec->param_count != 3)
-        return -EINVAL; /* Not GIC compliant */
-    if (fwspec->param[0] != GIC_SPI)
-        return -EINVAL; /* No PPI should point to this domain */
+    parent_fwspec = *fwspec;
+    if (is_of_node(dom->parent->fwnode)) {
+        if (fwspec->param_count != 3)
+            return -EINVAL; /* Not GIC compliant */
+        if (fwspec->param[0] != GIC_SPI)
+            return -EINVAL; /* No PPI should point to this domain */

+        hwirq = fwspec->param[1] - info->spi_base;
+    } else {
+        hwirq = fwspec->param[0];
+        parent_fwspec.param[0] = hwirq + info->spi_base + 32;
+    }
     WARN_ON(nr_irqs != 1);
-    hwirq = fwspec->param[1] - info->spi_base;
     irq_domain_set_hwirq_and_chip(dom, virq, hwirq, &exiu_irq_chip, info);

-    parent_fwspec = *fwspec;
     parent_fwspec.fwnode = dom->parent->fwnode;
     return irq_domain_alloc_irqs_parent(dom, virq, nr_irqs, &parent_fwspec);
 }
···
     .free = irq_domain_free_irqs_common,
 };

-static int __init exiu_init(struct device_node *node,
-                            struct device_node *parent)
+static struct exiu_irq_data *exiu_init(const struct fwnode_handle *fwnode,
+                                       struct resource *res)
+{
+    struct exiu_irq_data *data;
+    int err;
+
+    data = kzalloc(sizeof(*data), GFP_KERNEL);
+    if (!data)
+        return ERR_PTR(-ENOMEM);
+
+    if (fwnode_property_read_u32_array(fwnode, "socionext,spi-base",
+                                       &data->spi_base, 1)) {
+        err = -ENODEV;
+        goto out_free;
+    }
+
+    data->base = ioremap(res->start, resource_size(res));
+    if (!data->base) {
+        err = -ENODEV;
+        goto out_free;
+    }
+
+    /* clear and mask all interrupts */
+    writel_relaxed(0xFFFFFFFF, data->base + EIREQCLR);
+    writel_relaxed(0xFFFFFFFF, data->base + EIMASK);
+
+    return data;
+
+out_free:
+    kfree(data);
+    return ERR_PTR(err);
+}
+
+static int __init exiu_dt_init(struct device_node *node,
+                               struct device_node *parent)
 {
     struct irq_domain *parent_domain, *domain;
     struct exiu_irq_data *data;
-    int err;
+    struct resource res;

     if (!parent) {
         pr_err("%pOF: no parent, giving up\n", node);
···
         return -ENXIO;
     }

-    data = kzalloc(sizeof(*data), GFP_KERNEL);
-    if (!data)
-        return -ENOMEM;
-
-    if (of_property_read_u32(node, "socionext,spi-base", &data->spi_base)) {
-        pr_err("%pOF: failed to parse 'spi-base' property\n", node);
-        err = -ENODEV;
-        goto out_free;
+    if (of_address_to_resource(node, 0, &res)) {
+        pr_err("%pOF: failed to parse memory resource\n", node);
+        return -ENXIO;
     }

-    data->base = of_iomap(node, 0);
-    if (!data->base) {
-        err = -ENODEV;
-        goto out_free;
-    }
-
-    /* clear and mask all interrupts */
-    writel_relaxed(0xFFFFFFFF, data->base + EIREQCLR);
-    writel_relaxed(0xFFFFFFFF, data->base + EIMASK);
+    data = exiu_init(of_node_to_fwnode(node), &res);
+    if (IS_ERR(data))
+        return PTR_ERR(data);

     domain = irq_domain_add_hierarchy(parent_domain, 0, NUM_IRQS, node,
                                       &exiu_domain_ops, data);
     if (!domain) {
         pr_err("%pOF: failed to allocate domain\n", node);
-        err = -ENOMEM;
         goto out_unmap;
     }

···

 out_unmap:
     iounmap(data->base);
-out_free:
     kfree(data);
-    return err;
+    return -ENOMEM;
 }
-IRQCHIP_DECLARE(exiu, "socionext,synquacer-exiu", exiu_init);
+IRQCHIP_DECLARE(exiu, "socionext,synquacer-exiu", exiu_dt_init);
+
+#ifdef CONFIG_ACPI
+static int exiu_acpi_probe(struct platform_device *pdev)
+{
+    struct irq_domain *domain;
+    struct exiu_irq_data *data;
+    struct resource *res;
+
+    res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
+    if (!res) {
+        dev_err(&pdev->dev, "failed to parse memory resource\n");
+        return -ENXIO;
+    }
+
+    data = exiu_init(dev_fwnode(&pdev->dev), res);
+    if (IS_ERR(data))
+        return PTR_ERR(data);
+
+    domain = acpi_irq_create_hierarchy(0, NUM_IRQS, dev_fwnode(&pdev->dev),
+                                       &exiu_domain_ops, data);
+    if (!domain) {
+        dev_err(&pdev->dev, "failed to create IRQ domain\n");
+        goto out_unmap;
+    }
+
+    dev_info(&pdev->dev, "%d interrupts forwarded\n", NUM_IRQS);
+
+    return 0;
+
+out_unmap:
+    iounmap(data->base);
+    kfree(data);
+    return -ENOMEM;
+}
+
+static const struct acpi_device_id exiu_acpi_ids[] = {
+    { "SCX0008" },
+    { /* sentinel */ }
+};
+MODULE_DEVICE_TABLE(acpi, exiu_acpi_ids);
+
+static struct platform_driver exiu_driver = {
+    .driver = {
+        .name = "exiu",
+        .acpi_match_table = exiu_acpi_ids,
+    },
+    .probe = exiu_acpi_probe,
+};
+builtin_platform_driver(exiu_driver);
+#endif
+2 -3
drivers/irqchip/qcom-irq-combiner.c
···
 static int __init combiner_probe(struct platform_device *pdev)
 {
     struct combiner *combiner;
-    size_t alloc_sz;
     int nregs;
     int err;

···
         return -EINVAL;
     }

-    alloc_sz = sizeof(*combiner) + sizeof(struct combiner_reg) * nregs;
-    combiner = devm_kzalloc(&pdev->dev, alloc_sz, GFP_KERNEL);
+    combiner = devm_kzalloc(&pdev->dev, struct_size(combiner, regs, nregs),
+                            GFP_KERNEL);
     if (!combiner)
         return -ENOMEM;

+7
include/linux/acpi.h
···

 #include <linux/errno.h>
 #include <linux/ioport.h>  /* for struct resource */
+#include <linux/irqdomain.h>
 #include <linux/resource_ext.h>
 #include <linux/device.h>
 #include <linux/property.h>
···

 void acpi_set_irq_model(enum acpi_irq_model_id model,
                         struct fwnode_handle *fwnode);
+
+struct irq_domain *acpi_irq_create_hierarchy(unsigned int flags,
+                                             unsigned int size,
+                                             struct fwnode_handle *fwnode,
+                                             const struct irq_domain_ops *ops,
+                                             void *host_data);

 #ifdef CONFIG_X86_IO_APIC
 extern int acpi_get_override_irq(u32 gsi, int *trigger, int *polarity);
+5
include/linux/irqchip/arm-gic-common.h
···

 const struct gic_kvm_info *gic_get_kvm_info(void);

+struct irq_domain;
+struct fwnode_handle;
+int gicv2m_init(struct fwnode_handle *parent_handle,
+                struct irq_domain *parent);
+
 #endif /* __LINUX_IRQCHIP_ARM_GIC_COMMON_H */
-3
include/linux/irqchip/arm-gic.h
···
  */
 void gic_init(void __iomem *dist , void __iomem *cpu);

-int gicv2m_init(struct fwnode_handle *parent_handle,
-                struct irq_domain *parent);
-
 void gic_send_sgi(unsigned int cpu_id, unsigned int irq);
 int gic_get_cpu_id(unsigned int cpu);
 void gic_migrate_target(unsigned int new_cpu_id);
+3
kernel/irq/Makefile
···

 obj-y := irqdesc.o handle.o manage.o spurious.o resend.o chip.o dummychip.o devres.o
 obj-$(CONFIG_IRQ_TIMINGS) += timings.o
+ifeq ($(CONFIG_TEST_IRQ_TIMINGS),y)
+CFLAGS_timings.o += -DDEBUG
+endif
 obj-$(CONFIG_GENERIC_IRQ_CHIP) += generic-chip.o
 obj-$(CONFIG_GENERIC_IRQ_PROBE) += autoprobe.o
 obj-$(CONFIG_IRQ_DOMAIN) += irqdomain.o
+5 -7
kernel/irq/affinity.c
···
     return nodes;
 }

-static int __irq_build_affinity_masks(const struct irq_affinity *affd,
-                                      unsigned int startvec,
+static int __irq_build_affinity_masks(unsigned int startvec,
                                       unsigned int numvecs,
                                       unsigned int firstvec,
                                       cpumask_var_t *node_to_cpumask,
···
  * 1) spread present CPU on these vectors
  * 2) spread other possible CPUs on these vectors
  */
-static int irq_build_affinity_masks(const struct irq_affinity *affd,
-                                    unsigned int startvec, unsigned int numvecs,
+static int irq_build_affinity_masks(unsigned int startvec, unsigned int numvecs,
                                     unsigned int firstvec,
                                     struct irq_affinity_desc *masks)
 {
···
     build_node_to_cpumask(node_to_cpumask);

     /* Spread on present CPUs starting from affd->pre_vectors */
-    nr_present = __irq_build_affinity_masks(affd, curvec, numvecs,
+    nr_present = __irq_build_affinity_masks(curvec, numvecs,
                                             firstvec, node_to_cpumask,
                                             cpu_present_mask, nmsk, masks);

···
     else
         curvec = firstvec + nr_present;
     cpumask_andnot(npresmsk, cpu_possible_mask, cpu_present_mask);
-    nr_others = __irq_build_affinity_masks(affd, curvec, numvecs,
+    nr_others = __irq_build_affinity_masks(curvec, numvecs,
                                            firstvec, node_to_cpumask,
                                            npresmsk, nmsk, masks);
     put_online_cpus();
···
         unsigned int this_vecs = affd->set_size[i];
         int ret;

-        ret = irq_build_affinity_masks(affd, curvec, this_vecs,
+        ret = irq_build_affinity_masks(curvec, this_vecs,
                                        curvec, masks);
         if (ret) {
             kfree(masks);
+4
kernel/irq/chip.c
···
     unsigned int irq = irq_desc_get_irq(desc);
     irqreturn_t res;

+    __kstat_incr_irqs_this_cpu(desc);
+
     trace_irq_handler_entry(irq, action);
     /*
      * NMIs cannot be shared, there is only one action.
···
     struct irqaction *action = desc->action;
     unsigned int irq = irq_desc_get_irq(desc);
     irqreturn_t res;
+
+    __kstat_incr_irqs_this_cpu(desc);

     trace_irq_handler_entry(irq, action);
     res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
+12 -9
kernel/irq/internals.h
···
     return value & U16_MAX;
 }

+static __always_inline void irq_timings_push(u64 ts, int irq)
+{
+    struct irq_timings *timings = this_cpu_ptr(&irq_timings);
+
+    timings->values[timings->count & IRQ_TIMINGS_MASK] =
+        irq_timing_encode(ts, irq);
+
+    timings->count++;
+}
+
 /*
  * The function record_irq_time is only called in one place in the
  * interrupts handler. We want this function always inline so the code
···
     if (!static_branch_likely(&irq_timing_enabled))
         return;

-    if (desc->istate & IRQS_TIMINGS) {
-        struct irq_timings *timings = this_cpu_ptr(&irq_timings);
-
-        timings->values[timings->count & IRQ_TIMINGS_MASK] =
-            irq_timing_encode(local_clock(),
-                              irq_desc_get_irq(desc));
-
-        timings->count++;
-    }
+    if (desc->istate & IRQS_TIMINGS)
+        irq_timings_push(local_clock(), irq_desc_get_irq(desc));
 }
 #else
 static inline void irq_remove_timings(struct irq_desc *desc) {}
+7 -1
kernel/irq/irqdesc.c
···
                *per_cpu_ptr(desc->kstat_irqs, cpu) : 0;
 }

+static bool irq_is_nmi(struct irq_desc *desc)
+{
+    return desc->istate & IRQS_NMI;
+}
+
 /**
  * kstat_irqs - Get the statistics for an interrupt
  * @irq: The interrupt number
···
     if (!desc || !desc->kstat_irqs)
         return 0;
     if (!irq_settings_is_per_cpu_devid(desc) &&
-        !irq_settings_is_per_cpu(desc))
+        !irq_settings_is_per_cpu(desc) &&
+        !irq_is_nmi(desc))
         return desc->tot_count;

     for_each_possible_cpu(cpu)
+2 -2
kernel/irq/irqdomain.c
···
  * @ops: domain callbacks
  * @host_data: Controller private data pointer
  *
- * Allocates and initialize and irq_domain structure.
+ * Allocates and initializes an irq_domain structure.
  * Returns pointer to IRQ domain, or NULL on failure.
  */
 struct irq_domain *__irq_domain_add(struct fwnode_handle *fwnode, int size,
···

     domain = kzalloc_node(sizeof(*domain) + (sizeof(unsigned int) * size),
                           GFP_KERNEL, of_node_to_nid(of_node));
-    if (WARN_ON(!domain))
+    if (!domain)
         return NULL;

     if (fwnode && is_fwnode_irqchip(fwnode)) {
+419 -34
kernel/irq/timings.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 // Copyright (C) 2016, Linaro Ltd - Daniel Lezcano <daniel.lezcano@linaro.org> 3 + #define pr_fmt(fmt) "irq_timings: " fmt 3 4 4 5 #include <linux/kernel.h> 5 6 #include <linux/percpu.h> 6 7 #include <linux/slab.h> 7 8 #include <linux/static_key.h> 9 + #include <linux/init.h> 8 10 #include <linux/interrupt.h> 9 11 #include <linux/idr.h> 10 12 #include <linux/irq.h> ··· 263 261 #define EMA_ALPHA_VAL 64 264 262 #define EMA_ALPHA_SHIFT 7 265 263 266 - #define PREDICTION_PERIOD_MIN 2 264 + #define PREDICTION_PERIOD_MIN 3 267 265 #define PREDICTION_PERIOD_MAX 5 268 266 #define PREDICTION_FACTOR 4 269 267 #define PREDICTION_MAX 10 /* 2 ^ PREDICTION_MAX useconds */ 270 268 #define PREDICTION_BUFFER_SIZE 16 /* slots for EMAs, hardly more than 16 */ 269 + 270 + /* 271 + * Number of elements in the circular buffer: If it happens it was 272 + * flushed before, then the number of elements could be smaller than 273 + * IRQ_TIMINGS_SIZE, so the count is used, otherwise the array size is 274 + * used as we wrapped. The index begins from zero when we did not 275 + * wrap. That could be done in a nicer way with the proper circular 276 + * array structure type but with the cost of extra computation in the 277 + * interrupt handler hot path. We choose efficiency. 278 + */ 279 + #define for_each_irqts(i, irqts) \ 280 + for (i = irqts->count < IRQ_TIMINGS_SIZE ? \ 281 + 0 : irqts->count & IRQ_TIMINGS_MASK, \ 282 + irqts->count = min(IRQ_TIMINGS_SIZE, \ 283 + irqts->count); \ 284 + irqts->count > 0; irqts->count--, \ 285 + i = (i + 1) & IRQ_TIMINGS_MASK) 271 286 272 287 struct irqt_stat { 273 288 u64 last_ts; ··· 316 297 317 298 static int irq_timings_next_event_index(int *buffer, size_t len, int period_max) 318 299 { 319 - int i; 300 + int period; 301 + 302 + /* 303 + * Move the beginning pointer to the end minus the max period x 3. 
304 + * We are at the point we can begin searching the pattern 305 + */ 306 + buffer = &buffer[len - (period_max * 3)]; 307 + 308 + /* Adjust the length to the maximum allowed period x 3 */ 309 + len = period_max * 3; 320 310 321 311 /* 322 312 * The buffer contains the suite of intervals, in a ilog2 ··· 334 306 * period beginning at the end of the buffer. We do that for 335 307 * each suffix. 336 308 */ 337 - for (i = period_max; i >= PREDICTION_PERIOD_MIN ; i--) { 309 + for (period = period_max; period >= PREDICTION_PERIOD_MIN; period--) { 338 310 339 - int *begin = &buffer[len - (i * 3)]; 340 - int *ptr = begin; 311 + /* 312 + * The first comparison always succeed because the 313 + * suffix is deduced from the first n-period bytes of 314 + * the buffer and we compare the initial suffix with 315 + * itself, so we can skip the first iteration. 316 + */ 317 + int idx = period; 318 + size_t size = period; 341 319 342 320 /* 343 321 * We look if the suite with period 'i' repeat 344 322 * itself. If it is truncated at the end, as it 345 323 * repeats we can use the period to find out the next 346 - * element. 324 + * element with the modulo. 347 325 */ 348 - while (!memcmp(ptr, begin, i * sizeof(*ptr))) { 349 - ptr += i; 350 - if (ptr >= &buffer[len]) 351 - return begin[((i * 3) % i)]; 326 + while (!memcmp(buffer, &buffer[idx], size * sizeof(int))) { 327 + 328 + /* 329 + * Move the index in a period basis 330 + */ 331 + idx += size; 332 + 333 + /* 334 + * If this condition is reached, all previous 335 + * memcmp were successful, so the period is 336 + * found. 337 + */ 338 + if (idx == len) 339 + return buffer[len % period]; 340 + 341 + /* 342 + * If the remaining elements to compare are 343 + * smaller than the period, readjust the size 344 + * of the comparison for the last iteration. 
345 + */ 346 + if (len - idx < period) 347 + size = len - idx; 352 348 } 353 349 } 354 350 ··· 432 380 return irqs->last_ts + irqs->ema_time[index]; 433 381 } 434 382 383 + static __always_inline int irq_timings_interval_index(u64 interval) 384 + { 385 + /* 386 + * The PREDICTION_FACTOR increase the interval size for the 387 + * array of exponential average. 388 + */ 389 + u64 interval_us = (interval >> 10) / PREDICTION_FACTOR; 390 + 391 + return likely(interval_us) ? ilog2(interval_us) : 0; 392 + } 393 + 394 + static __always_inline void __irq_timings_store(int irq, struct irqt_stat *irqs, 395 + u64 interval) 396 + { 397 + int index; 398 + 399 + /* 400 + * Get the index in the ema table for this interrupt. 401 + */ 402 + index = irq_timings_interval_index(interval); 403 + 404 + /* 405 + * Store the index as an element of the pattern in another 406 + * circular array. 407 + */ 408 + irqs->circ_timings[irqs->count & IRQ_TIMINGS_MASK] = index; 409 + 410 + irqs->ema_time[index] = irq_timings_ema_new(interval, 411 + irqs->ema_time[index]); 412 + 413 + irqs->count++; 414 + } 415 + 435 416 static inline void irq_timings_store(int irq, struct irqt_stat *irqs, u64 ts) 436 417 { 437 418 u64 old_ts = irqs->last_ts; 438 419 u64 interval; 439 - int index; 440 420 441 421 /* 442 422 * The timestamps are absolute time values, we need to compute ··· 499 415 return; 500 416 } 501 417 502 - /* 503 - * Get the index in the ema table for this interrupt. The 504 - * PREDICTION_FACTOR increase the interval size for the array 505 - * of exponential average. 506 - */ 507 - index = likely(interval) ? 508 - ilog2((interval >> 10) / PREDICTION_FACTOR) : 0; 509 - 510 - /* 511 - * Store the index as an element of the pattern in another 512 - * circular array. 
-	 */
-	irqs->circ_timings[irqs->count & IRQ_TIMINGS_MASK] = index;
-
-	irqs->ema_time[index] = irq_timings_ema_new(interval,
-						    irqs->ema_time[index]);
-
-	irqs->count++;
+	__irq_timings_store(irq, irqs, interval);
 }

 /**
···
 	 * model while decrementing the counter because we consume the
 	 * data from our circular buffer.
 	 */
-
-	i = (irqts->count & IRQ_TIMINGS_MASK) - 1;
-	irqts->count = min(IRQ_TIMINGS_SIZE, irqts->count);
-
-	for (; irqts->count > 0; irqts->count--, i = (i + 1) & IRQ_TIMINGS_MASK) {
+	for_each_irqts(i, irqts) {
 		irq = irq_timing_decode(irqts->values[i], &ts);
 		s = idr_find(&irqt_stats, irq);
 		if (s)
···

 	return 0;
 }
+
+#ifdef CONFIG_TEST_IRQ_TIMINGS
+struct timings_intervals {
+	u64 *intervals;
+	size_t count;
+};
+
+/*
+ * Intervals are given in nanosecond base
+ */
+static u64 intervals0[] __initdata = {
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000, 500000,
+	10000, 50000, 200000,
+};
+
+static u64 intervals1[] __initdata = {
+	223947000, 1240000, 1384000, 1386000, 1386000,
+	217416000, 1236000, 1384000, 1386000, 1387000,
+	214719000, 1241000, 1386000, 1387000, 1384000,
+	213696000, 1234000, 1384000, 1386000, 1388000,
+	219904000, 1240000, 1385000, 1389000, 1385000,
+	212240000, 1240000, 1386000, 1386000, 1386000,
+	214415000, 1236000, 1384000, 1386000, 1387000,
+	214276000, 1234000,
+};
+
+static u64 intervals2[] __initdata = {
+	4000, 3000, 5000, 100000,
+	3000, 3000, 5000, 117000,
+	4000, 4000, 5000, 112000,
+	4000, 3000, 4000, 110000,
+	3000, 5000, 3000, 117000,
+	4000, 4000, 5000, 112000,
+	4000, 3000, 4000, 110000,
+	3000, 4000, 5000, 112000,
+	4000,
+};
+
+static u64 intervals3[] __initdata = {
+	1385000, 212240000, 1240000,
+	1386000, 214415000, 1236000,
+	1384000, 214276000, 1234000,
+	1386000, 214415000, 1236000,
+	1385000, 212240000, 1240000,
+	1386000, 214415000, 1236000,
+	1384000, 214276000, 1234000,
+	1386000, 214415000, 1236000,
+	1385000, 212240000, 1240000,
+};
+
+static u64 intervals4[] __initdata = {
+	10000, 50000, 10000, 50000,
+	10000, 50000, 10000, 50000,
+	10000, 50000, 10000, 50000,
+	10000, 50000, 10000, 50000,
+	10000, 50000, 10000, 50000,
+	10000, 50000, 10000, 50000,
+	10000, 50000, 10000, 50000,
+	10000, 50000, 10000, 50000,
+	10000,
+};
+
+static struct timings_intervals tis[] __initdata = {
+	{ intervals0, ARRAY_SIZE(intervals0) },
+	{ intervals1, ARRAY_SIZE(intervals1) },
+	{ intervals2, ARRAY_SIZE(intervals2) },
+	{ intervals3, ARRAY_SIZE(intervals3) },
+	{ intervals4, ARRAY_SIZE(intervals4) },
+};
+
+static int __init irq_timings_test_next_index(struct timings_intervals *ti)
+{
+	int _buffer[IRQ_TIMINGS_SIZE];
+	int buffer[IRQ_TIMINGS_SIZE];
+	int index, start, i, count, period_max;
+
+	count = ti->count - 1;
+
+	period_max = count > (3 * PREDICTION_PERIOD_MAX) ?
+		PREDICTION_PERIOD_MAX : count / 3;
+
+	/*
+	 * Inject all values except the last one which will be used
+	 * to compare with the next index result.
+	 */
+	pr_debug("index suite: ");
+
+	for (i = 0; i < count; i++) {
+		index = irq_timings_interval_index(ti->intervals[i]);
+		_buffer[i & IRQ_TIMINGS_MASK] = index;
+		pr_cont("%d ", index);
+	}
+
+	start = count < IRQ_TIMINGS_SIZE ? 0 :
+		count & IRQ_TIMINGS_MASK;
+
+	count = min_t(int, count, IRQ_TIMINGS_SIZE);
+
+	for (i = 0; i < count; i++) {
+		int index = (start + i) & IRQ_TIMINGS_MASK;
+		buffer[i] = _buffer[index];
+	}
+
+	index = irq_timings_next_event_index(buffer, count, period_max);
+	i = irq_timings_interval_index(ti->intervals[ti->count - 1]);
+
+	if (index != i) {
+		pr_err("Expected (%d) and computed (%d) next indexes differ\n",
+		       i, index);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int __init irq_timings_next_index_selftest(void)
+{
+	int i, ret;
+
+	for (i = 0; i < ARRAY_SIZE(tis); i++) {
+
+		pr_info("---> Injecting intervals number #%d (count=%zd)\n",
+			i, tis[i].count);
+
+		ret = irq_timings_test_next_index(&tis[i]);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+static int __init irq_timings_test_irqs(struct timings_intervals *ti)
+{
+	struct irqt_stat __percpu *s;
+	struct irqt_stat *irqs;
+	int i, index, ret, irq = 0xACE5;
+
+	ret = irq_timings_alloc(irq);
+	if (ret) {
+		pr_err("Failed to allocate irq timings\n");
+		return ret;
+	}
+
+	s = idr_find(&irqt_stats, irq);
+	if (!s) {
+		ret = -EIDRM;
+		goto out;
+	}
+
+	irqs = this_cpu_ptr(s);
+
+	for (i = 0; i < ti->count; i++) {
+
+		index = irq_timings_interval_index(ti->intervals[i]);
+		pr_debug("%d: interval=%llu ema_index=%d\n",
+			 i, ti->intervals[i], index);
+
+		__irq_timings_store(irq, irqs, ti->intervals[i]);
+		if (irqs->circ_timings[i & IRQ_TIMINGS_MASK] != index) {
+			pr_err("Failed to store in the circular buffer\n");
+			goto out;
+		}
+	}
+
+	if (irqs->count != ti->count) {
+		pr_err("Count differs\n");
+		goto out;
+	}
+
+	ret = 0;
+out:
+	irq_timings_free(irq);
+
+	return ret;
+}
+
+static int __init irq_timings_irqs_selftest(void)
+{
+	int i, ret;
+
+	for (i = 0; i < ARRAY_SIZE(tis); i++) {
+		pr_info("---> Injecting intervals number #%d (count=%zd)\n",
+			i, tis[i].count);
+		ret = irq_timings_test_irqs(&tis[i]);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+static int __init irq_timings_test_irqts(struct irq_timings *irqts,
+					 unsigned count)
+{
+	int start = count >= IRQ_TIMINGS_SIZE ? count - IRQ_TIMINGS_SIZE : 0;
+	int i, irq, oirq = 0xBEEF;
+	u64 ots = 0xDEAD, ts;
+
+	/*
+	 * Fill the circular buffer by using the dedicated function.
+	 */
+	for (i = 0; i < count; i++) {
+		pr_debug("%d: index=%d, ts=%llX irq=%X\n",
+			 i, i & IRQ_TIMINGS_MASK, ots + i, oirq + i);
+
+		irq_timings_push(ots + i, oirq + i);
+	}
+
+	/*
+	 * Compute the first elements values after the index wrapped
+	 * up or not.
+	 */
+	ots += start;
+	oirq += start;
+
+	/*
+	 * Test the circular buffer count is correct.
+	 */
+	pr_debug("---> Checking timings array count (%d) is right\n", count);
+	if (WARN_ON(irqts->count != count))
+		return -EINVAL;
+
+	/*
+	 * Test the macro allowing to browse all the irqts.
+	 */
+	pr_debug("---> Checking the for_each_irqts() macro\n");
+	for_each_irqts(i, irqts) {
+
+		irq = irq_timing_decode(irqts->values[i], &ts);
+
+		pr_debug("index=%d, ts=%llX / %llX, irq=%X / %X\n",
+			 i, ts, ots, irq, oirq);
+
+		if (WARN_ON(ts != ots || irq != oirq))
+			return -EINVAL;
+
+		ots++; oirq++;
+	}
+
+	/*
+	 * The circular buffer should have be flushed when browsed
+	 * with for_each_irqts
+	 */
+	pr_debug("---> Checking timings array is empty after browsing it\n");
+	if (WARN_ON(irqts->count))
+		return -EINVAL;
+
+	return 0;
+}
+
+static int __init irq_timings_irqts_selftest(void)
+{
+	struct irq_timings *irqts = this_cpu_ptr(&irq_timings);
+	int i, ret;
+
+	/*
+	 * Test the circular buffer with different number of
+	 * elements. The purpose is to test at the limits (empty, half
+	 * full, full, wrapped with the cursor at the boundaries,
+	 * wrapped several times, etc ...
+	 */
+	int count[] = { 0,
+			IRQ_TIMINGS_SIZE >> 1,
+			IRQ_TIMINGS_SIZE,
+			IRQ_TIMINGS_SIZE + (IRQ_TIMINGS_SIZE >> 1),
+			2 * IRQ_TIMINGS_SIZE,
+			(2 * IRQ_TIMINGS_SIZE) + 3,
+	};
+
+	for (i = 0; i < ARRAY_SIZE(count); i++) {
+
+		pr_info("---> Checking the timings with %d/%d values\n",
+			count[i], IRQ_TIMINGS_SIZE);
+
+		ret = irq_timings_test_irqts(irqts, count[i]);
+		if (ret)
+			break;
+	}
+
+	return ret;
+}
+
+static int __init irq_timings_selftest(void)
+{
+	int ret;
+
+	pr_info("------------------- selftest start -----------------\n");
+
+	/*
+	 * At this point, we don't except any subsystem to use the irq
+	 * timings but us, so it should not be enabled.
+	 */
+	if (static_branch_unlikely(&irq_timing_enabled)) {
+		pr_warn("irq timings already initialized, skipping selftest\n");
+		return 0;
+	}
+
+	ret = irq_timings_irqts_selftest();
+	if (ret)
+		goto out;
+
+	ret = irq_timings_irqs_selftest();
+	if (ret)
+		goto out;
+
+	ret = irq_timings_next_index_selftest();
+out:
+	pr_info("---------- selftest end with %s -----------\n",
+		ret ? "failure" : "success");
+
+	return ret;
+}
+early_initcall(irq_timings_selftest);
+#endif
+1 -1
kernel/softirq.c
···
 	/* Find end, append list for that CPU. */
 	if (&per_cpu(tasklet_vec, cpu).head != per_cpu(tasklet_vec, cpu).tail) {
 		*__this_cpu_read(tasklet_vec.tail) = per_cpu(tasklet_vec, cpu).head;
-		this_cpu_write(tasklet_vec.tail, per_cpu(tasklet_vec, cpu).tail);
+		__this_cpu_write(tasklet_vec.tail, per_cpu(tasklet_vec, cpu).tail);
 		per_cpu(tasklet_vec, cpu).head = NULL;
 		per_cpu(tasklet_vec, cpu).tail = &per_cpu(tasklet_vec, cpu).head;
 	}
+8
lib/Kconfig.debug
···
 	  If unsure, say N.

+config TEST_IRQ_TIMINGS
+	bool "IRQ timings selftest"
+	depends on IRQ_TIMINGS
+	help
+	  Enable this option to test the irq timings code on boot.
+
+	  If unsure, say N.
+
 config TEST_LKM
 	tristate "Test module loading with 'hello world' module"
 	depends on m