Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
"The irq department proudly presents:

- A rework of the core infrastructure to optimally spread interrupts
for multiqueue devices. The first version was a bit naive and
failed to take thread siblings and other details into account.
Developed in cooperation with Christoph and Keith.

- Proper delegation of softirqs to ksoftirqd, so if ksoftirqd is
active then no further softirq processing on interrupt return
happens. Otherwise we try to delegate and still run another batch
of network packets in the irq return path, which then tries to
delegate to ksoftirqd .....

- A proper machine-parseable, sysfs-based alternative for
/proc/interrupts.

- ACPI support for the GICV3-ITS and ARM interrupt remapping

- Two new irq chips from the ARM SoC zoo: STM32-EXTI and MVEBU-PIC

- A new irq chip for the JCore (SuperH)

- The usual pile of small fixlets in core and irqchip drivers"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (42 commits)
softirq: Let ksoftirqd do its job
genirq: Make function __irq_do_set_handler() static
ARM/dts: Add EXTI controller node to stm32f429
ARM/STM32: Select external interrupts controller
drivers/irqchip: Add STM32 external interrupts support
Documentation/dt-bindings: Document STM32 EXTI controller bindings
irqchip/mips-gic: Use for_each_set_bit to iterate over local IRQs
pci/msi: Retrieve affinity for a vector
genirq/affinity: Remove old irq spread infrastructure
genirq/msi: Switch to new irq spreading infrastructure
genirq/affinity: Provide smarter irq spreading infrastructure
genirq/msi: Add cpumask allocation to alloc_msi_entry
genirq: Expose interrupt information through sysfs
irqchip/gicv3-its: Use MADT ITS subtable to do PCI/MSI domain initialization
irqchip/gicv3-its: Factor out PCI-MSI part that might be reused for ACPI
irqchip/gicv3-its: Probe ITS in the ACPI way
irqchip/gicv3-its: Refactor ITS DT init code to prepare for ACPI
irqchip/gicv3-its: Cleanup for ITS domain initialization
PCI/MSI: Setup MSI domain on a per-device basis using IORT ACPI table
ACPI: Add new IORT functions to support MSI domain handling
...

+1926 -283
+53
Documentation/ABI/testing/sysfs-kernel-irq
What:		/sys/kernel/irq
Date:		September 2016
KernelVersion:	4.9
Contact:	Craig Gallek <kraig@google.com>
Description:	Directory containing information about the system's IRQs.
		Specifically, data from the associated struct irq_desc.
		The information here is similar to that in /proc/interrupts
		but in a more machine-friendly format.  This directory contains
		one subdirectory for each Linux IRQ number.

What:		/sys/kernel/irq/<irq>/actions
Date:		September 2016
KernelVersion:	4.9
Contact:	Craig Gallek <kraig@google.com>
Description:	The IRQ action chain.  A comma-separated list of zero or more
		device names associated with this interrupt.

What:		/sys/kernel/irq/<irq>/chip_name
Date:		September 2016
KernelVersion:	4.9
Contact:	Craig Gallek <kraig@google.com>
Description:	Human-readable chip name supplied by the associated device
		driver.

What:		/sys/kernel/irq/<irq>/hwirq
Date:		September 2016
KernelVersion:	4.9
Contact:	Craig Gallek <kraig@google.com>
Description:	When interrupt translation domains are used, this file contains
		the underlying hardware IRQ number used for this Linux IRQ.

What:		/sys/kernel/irq/<irq>/name
Date:		September 2016
KernelVersion:	4.9
Contact:	Craig Gallek <kraig@google.com>
Description:	Human-readable flow handler name as defined by the irq chip
		driver.

What:		/sys/kernel/irq/<irq>/per_cpu_count
Date:		September 2016
KernelVersion:	4.9
Contact:	Craig Gallek <kraig@google.com>
Description:	The number of times the interrupt has fired since boot.  This
		is a comma-separated list of counters; one per CPU in CPU id
		order.  NOTE: This file consistently shows counters for all
		CPU ids.  This differs from the behavior of /proc/interrupts
		which only shows counters for online CPUs.

What:		/sys/kernel/irq/<irq>/type
Date:		September 2016
KernelVersion:	4.9
Contact:	Craig Gallek <kraig@google.com>
Description:	The type of the interrupt.  Either the string 'level' or 'edge'.
+26
Documentation/devicetree/bindings/interrupt-controller/jcore,aic.txt
J-Core Advanced Interrupt Controller

Required properties:

- compatible: Should be "jcore,aic1" for the (obsolete) first-generation aic
  with 8 interrupt lines with programmable priorities, or "jcore,aic2" for
  the "aic2" core with 64 interrupts.

- reg: Memory region(s) for configuration. For SMP, there should be one
  region per cpu, indexed by the sequential, zero-based hardware cpu
  number.

- interrupt-controller: Identifies the node as an interrupt controller

- #interrupt-cells: Specifies the number of cells needed to encode an
  interrupt source. The value shall be 1.


Example:

aic: interrupt-controller@200 {
	compatible = "jcore,aic2";
	reg = < 0x200 0x30 0x500 0x30 >;
	interrupt-controller;
	#interrupt-cells = <1>;
};
+25
Documentation/devicetree/bindings/interrupt-controller/marvell,armada-8k-pic.txt
Marvell Armada 7K/8K PIC Interrupt controller
---------------------------------------------

This is the Device Tree binding for the PIC, a secondary interrupt
controller available on the Marvell Armada 7K/8K ARM64 SoCs, and
typically connected to the GIC as the primary interrupt controller.

Required properties:
- compatible: should be "marvell,armada-8k-pic"
- interrupt-controller: identifies the node as an interrupt controller
- #interrupt-cells: the number of cells to define interrupts on this
  controller. Should be 1
- reg: the register area for the PIC interrupt controller
- interrupts: the interrupt to the primary interrupt controller,
  typically the GIC

Example:

	pic: interrupt-controller@3f0100 {
		compatible = "marvell,armada-8k-pic";
		reg = <0x3f0100 0x10>;
		#interrupt-cells = <1>;
		interrupt-controller;
		interrupts = <GIC_PPI 15 IRQ_TYPE_LEVEL_HIGH>;
	};
+1 -1
Documentation/devicetree/bindings/interrupt-controller/marvell,odmi-controller.txt
 Example:

 	odmi: odmi@300000 {
-		compatible = "marvell,ap806-odm-controller",
+		compatible = "marvell,ap806-odmi-controller",
 			     "marvell,odmi-controller";
 		interrupt-controller;
 		msi-controller;
+20
Documentation/devicetree/bindings/interrupt-controller/st,stm32-exti.txt
STM32 External Interrupt Controller

Required properties:

- compatible: Should be "st,stm32-exti"
- reg: Specifies base physical address and size of the registers
- interrupt-controller: Identifies the node as an interrupt controller
- #interrupt-cells: Specifies the number of cells to encode an interrupt
  specifier, shall be 2
- interrupts: interrupt references to the primary interrupt controller

Example:

exti: interrupt-controller@40013c00 {
	compatible = "st,stm32-exti";
	interrupt-controller;
	#interrupt-cells = <2>;
	reg = <0x40013C00 0x400>;
	interrupts = <1>, <2>, <3>, <6>, <7>, <8>, <9>, <10>, <23>, <40>, <41>, <42>, <62>, <76>;
};
+1
arch/arm/Kconfig
 	select CLKSRC_STM32
 	select PINCTRL
 	select RESET_CONTROLLER
+	select STM32_EXTI
 	help
 	  Support for STMicroelectronics STM32 processors.

+8
arch/arm/boot/dts/stm32f429.dtsi
 			reg = <0x40013800 0x400>;
 		};

+		exti: interrupt-controller@40013c00 {
+			compatible = "st,stm32-exti";
+			interrupt-controller;
+			#interrupt-cells = <2>;
+			reg = <0x40013C00 0x400>;
+			interrupts = <1>, <2>, <3>, <6>, <7>, <8>, <9>, <10>, <23>, <40>, <41>, <42>, <62>, <76>;
+		};
+
 		pin-controller {
 			#address-cells = <1>;
 			#size-cells = <1>;
+6
arch/arm/include/asm/arch_gicv3.h
 #define ICC_CTLR			__ACCESS_CP15(c12, 0, c12, 4)
 #define ICC_SRE				__ACCESS_CP15(c12, 0, c12, 5)
 #define ICC_IGRPEN1			__ACCESS_CP15(c12, 0, c12, 7)
+#define ICC_BPR1			__ACCESS_CP15(c12, 0, c12, 3)

 #define ICC_HSRE			__ACCESS_CP15(c12, 4, c9, 5)
···
 {
 	asm volatile("mcr " __stringify(ICC_SRE) : : "r" (val));
 	isb();
+}
+
+static inline void gic_write_bpr1(u32 val)
+{
+	asm volatile("mcr " __stringify(ICC_BPR1) : : "r" (val));
 }

 /*
+1
arch/arm64/Kconfig.platforms
 	select ARMADA_CP110_SYSCON
 	select ARMADA_37XX_CLK
 	select MVEBU_ODMI
+	select MVEBU_PIC
 	help
 	  This enables support for Marvell EBU familly, including:
 	   - Armada 3700 SoC Family
+6
arch/arm64/include/asm/arch_gicv3.h
 #define ICC_CTLR_EL1			sys_reg(3, 0, 12, 12, 4)
 #define ICC_SRE_EL1			sys_reg(3, 0, 12, 12, 5)
 #define ICC_GRPEN1_EL1			sys_reg(3, 0, 12, 12, 7)
+#define ICC_BPR1_EL1			sys_reg(3, 0, 12, 12, 3)

 #define ICC_SRE_EL2			sys_reg(3, 4, 12, 9, 5)
···
 {
 	asm volatile("msr_s " __stringify(ICC_SRE_EL1) ", %0" : : "r" ((u64)val));
 	isb();
+}
+
+static inline void gic_write_bpr1(u32 val)
+{
+	asm volatile("msr_s " __stringify(ICC_BPR1_EL1) ", %0" : : "r" (val));
 }

 #define gic_read_typer(c)		readq_relaxed(c)
+4
drivers/acpi/Kconfig
 	  userspace. The configurable ACPI groups will be visible under
 	  /config/acpi, assuming configfs is mounted under /config.

+if ARM64
+source "drivers/acpi/arm64/Kconfig"
+endif
+
 endif	# ACPI
+2
drivers/acpi/Makefile
 video-objs			+= acpi_video.o video_detect.o
 obj-y				+= dptf/
+
+obj-$(CONFIG_ARM64)		+= arm64/
+6
drivers/acpi/arm64/Kconfig
#
# ACPI Configuration for ARM64
#

config ACPI_IORT
	bool
+1
drivers/acpi/arm64/Makefile
obj-$(CONFIG_ACPI_IORT) 	+= iort.o
+368
drivers/acpi/arm64/iort.c
··· 1 + /* 2 + * Copyright (C) 2016, Semihalf 3 + * Author: Tomasz Nowicki <tn@semihalf.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms and conditions of the GNU General Public License, 7 + * version 2, as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + * This file implements early detection/parsing of I/O mapping 15 + * reported to OS through firmware via I/O Remapping Table (IORT) 16 + * IORT document number: ARM DEN 0049A 17 + */ 18 + 19 + #define pr_fmt(fmt) "ACPI: IORT: " fmt 20 + 21 + #include <linux/acpi_iort.h> 22 + #include <linux/kernel.h> 23 + #include <linux/pci.h> 24 + 25 + struct iort_its_msi_chip { 26 + struct list_head list; 27 + struct fwnode_handle *fw_node; 28 + u32 translation_id; 29 + }; 30 + 31 + typedef acpi_status (*iort_find_node_callback) 32 + (struct acpi_iort_node *node, void *context); 33 + 34 + /* Root pointer to the mapped IORT table */ 35 + static struct acpi_table_header *iort_table; 36 + 37 + static LIST_HEAD(iort_msi_chip_list); 38 + static DEFINE_SPINLOCK(iort_msi_chip_lock); 39 + 40 + /** 41 + * iort_register_domain_token() - register domain token and related ITS ID 42 + * to the list from where we can get it back later on. 43 + * @trans_id: ITS ID. 44 + * @fw_node: Domain token. 
45 + * 46 + * Returns: 0 on success, -ENOMEM if no memory when allocating list element 47 + */ 48 + int iort_register_domain_token(int trans_id, struct fwnode_handle *fw_node) 49 + { 50 + struct iort_its_msi_chip *its_msi_chip; 51 + 52 + its_msi_chip = kzalloc(sizeof(*its_msi_chip), GFP_KERNEL); 53 + if (!its_msi_chip) 54 + return -ENOMEM; 55 + 56 + its_msi_chip->fw_node = fw_node; 57 + its_msi_chip->translation_id = trans_id; 58 + 59 + spin_lock(&iort_msi_chip_lock); 60 + list_add(&its_msi_chip->list, &iort_msi_chip_list); 61 + spin_unlock(&iort_msi_chip_lock); 62 + 63 + return 0; 64 + } 65 + 66 + /** 67 + * iort_deregister_domain_token() - Deregister domain token based on ITS ID 68 + * @trans_id: ITS ID. 69 + * 70 + * Returns: none. 71 + */ 72 + void iort_deregister_domain_token(int trans_id) 73 + { 74 + struct iort_its_msi_chip *its_msi_chip, *t; 75 + 76 + spin_lock(&iort_msi_chip_lock); 77 + list_for_each_entry_safe(its_msi_chip, t, &iort_msi_chip_list, list) { 78 + if (its_msi_chip->translation_id == trans_id) { 79 + list_del(&its_msi_chip->list); 80 + kfree(its_msi_chip); 81 + break; 82 + } 83 + } 84 + spin_unlock(&iort_msi_chip_lock); 85 + } 86 + 87 + /** 88 + * iort_find_domain_token() - Find domain token based on given ITS ID 89 + * @trans_id: ITS ID. 
90 + * 91 + * Returns: domain token when find on the list, NULL otherwise 92 + */ 93 + struct fwnode_handle *iort_find_domain_token(int trans_id) 94 + { 95 + struct fwnode_handle *fw_node = NULL; 96 + struct iort_its_msi_chip *its_msi_chip; 97 + 98 + spin_lock(&iort_msi_chip_lock); 99 + list_for_each_entry(its_msi_chip, &iort_msi_chip_list, list) { 100 + if (its_msi_chip->translation_id == trans_id) { 101 + fw_node = its_msi_chip->fw_node; 102 + break; 103 + } 104 + } 105 + spin_unlock(&iort_msi_chip_lock); 106 + 107 + return fw_node; 108 + } 109 + 110 + static struct acpi_iort_node *iort_scan_node(enum acpi_iort_node_type type, 111 + iort_find_node_callback callback, 112 + void *context) 113 + { 114 + struct acpi_iort_node *iort_node, *iort_end; 115 + struct acpi_table_iort *iort; 116 + int i; 117 + 118 + if (!iort_table) 119 + return NULL; 120 + 121 + /* Get the first IORT node */ 122 + iort = (struct acpi_table_iort *)iort_table; 123 + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort, 124 + iort->node_offset); 125 + iort_end = ACPI_ADD_PTR(struct acpi_iort_node, iort_table, 126 + iort_table->length); 127 + 128 + for (i = 0; i < iort->node_count; i++) { 129 + if (WARN_TAINT(iort_node >= iort_end, TAINT_FIRMWARE_WORKAROUND, 130 + "IORT node pointer overflows, bad table!\n")) 131 + return NULL; 132 + 133 + if (iort_node->type == type && 134 + ACPI_SUCCESS(callback(iort_node, context))) 135 + return iort_node; 136 + 137 + iort_node = ACPI_ADD_PTR(struct acpi_iort_node, iort_node, 138 + iort_node->length); 139 + } 140 + 141 + return NULL; 142 + } 143 + 144 + static acpi_status iort_match_node_callback(struct acpi_iort_node *node, 145 + void *context) 146 + { 147 + struct device *dev = context; 148 + acpi_status status; 149 + 150 + if (node->type == ACPI_IORT_NODE_NAMED_COMPONENT) { 151 + struct acpi_buffer buf = { ACPI_ALLOCATE_BUFFER, NULL }; 152 + struct acpi_device *adev = to_acpi_device_node(dev->fwnode); 153 + struct acpi_iort_named_component *ncomp; 154 + 
155 + if (!adev) { 156 + status = AE_NOT_FOUND; 157 + goto out; 158 + } 159 + 160 + status = acpi_get_name(adev->handle, ACPI_FULL_PATHNAME, &buf); 161 + if (ACPI_FAILURE(status)) { 162 + dev_warn(dev, "Can't get device full path name\n"); 163 + goto out; 164 + } 165 + 166 + ncomp = (struct acpi_iort_named_component *)node->node_data; 167 + status = !strcmp(ncomp->device_name, buf.pointer) ? 168 + AE_OK : AE_NOT_FOUND; 169 + acpi_os_free(buf.pointer); 170 + } else if (node->type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX) { 171 + struct acpi_iort_root_complex *pci_rc; 172 + struct pci_bus *bus; 173 + 174 + bus = to_pci_bus(dev); 175 + pci_rc = (struct acpi_iort_root_complex *)node->node_data; 176 + 177 + /* 178 + * It is assumed that PCI segment numbers maps one-to-one 179 + * with root complexes. Each segment number can represent only 180 + * one root complex. 181 + */ 182 + status = pci_rc->pci_segment_number == pci_domain_nr(bus) ? 183 + AE_OK : AE_NOT_FOUND; 184 + } else { 185 + status = AE_NOT_FOUND; 186 + } 187 + out: 188 + return status; 189 + } 190 + 191 + static int iort_id_map(struct acpi_iort_id_mapping *map, u8 type, u32 rid_in, 192 + u32 *rid_out) 193 + { 194 + /* Single mapping does not care for input id */ 195 + if (map->flags & ACPI_IORT_ID_SINGLE_MAPPING) { 196 + if (type == ACPI_IORT_NODE_NAMED_COMPONENT || 197 + type == ACPI_IORT_NODE_PCI_ROOT_COMPLEX) { 198 + *rid_out = map->output_base; 199 + return 0; 200 + } 201 + 202 + pr_warn(FW_BUG "[map %p] SINGLE MAPPING flag not allowed for node type %d, skipping ID map\n", 203 + map, type); 204 + return -ENXIO; 205 + } 206 + 207 + if (rid_in < map->input_base || 208 + (rid_in >= map->input_base + map->id_count)) 209 + return -ENXIO; 210 + 211 + *rid_out = map->output_base + (rid_in - map->input_base); 212 + return 0; 213 + } 214 + 215 + static struct acpi_iort_node *iort_node_map_rid(struct acpi_iort_node *node, 216 + u32 rid_in, u32 *rid_out, 217 + u8 type) 218 + { 219 + u32 rid = rid_in; 220 + 221 + /* Parse 
the ID mapping tree to find specified node type */ 222 + while (node) { 223 + struct acpi_iort_id_mapping *map; 224 + int i; 225 + 226 + if (node->type == type) { 227 + if (rid_out) 228 + *rid_out = rid; 229 + return node; 230 + } 231 + 232 + if (!node->mapping_offset || !node->mapping_count) 233 + goto fail_map; 234 + 235 + map = ACPI_ADD_PTR(struct acpi_iort_id_mapping, node, 236 + node->mapping_offset); 237 + 238 + /* Firmware bug! */ 239 + if (!map->output_reference) { 240 + pr_err(FW_BUG "[node %p type %d] ID map has NULL parent reference\n", 241 + node, node->type); 242 + goto fail_map; 243 + } 244 + 245 + /* Do the RID translation */ 246 + for (i = 0; i < node->mapping_count; i++, map++) { 247 + if (!iort_id_map(map, node->type, rid, &rid)) 248 + break; 249 + } 250 + 251 + if (i == node->mapping_count) 252 + goto fail_map; 253 + 254 + node = ACPI_ADD_PTR(struct acpi_iort_node, iort_table, 255 + map->output_reference); 256 + } 257 + 258 + fail_map: 259 + /* Map input RID to output RID unchanged on mapping failure*/ 260 + if (rid_out) 261 + *rid_out = rid_in; 262 + 263 + return NULL; 264 + } 265 + 266 + static struct acpi_iort_node *iort_find_dev_node(struct device *dev) 267 + { 268 + struct pci_bus *pbus; 269 + 270 + if (!dev_is_pci(dev)) 271 + return iort_scan_node(ACPI_IORT_NODE_NAMED_COMPONENT, 272 + iort_match_node_callback, dev); 273 + 274 + /* Find a PCI root bus */ 275 + pbus = to_pci_dev(dev)->bus; 276 + while (!pci_is_root_bus(pbus)) 277 + pbus = pbus->parent; 278 + 279 + return iort_scan_node(ACPI_IORT_NODE_PCI_ROOT_COMPLEX, 280 + iort_match_node_callback, &pbus->dev); 281 + } 282 + 283 + /** 284 + * iort_msi_map_rid() - Map a MSI requester ID for a device 285 + * @dev: The device for which the mapping is to be done. 286 + * @req_id: The device requester ID. 
287 + * 288 + * Returns: mapped MSI RID on success, input requester ID otherwise 289 + */ 290 + u32 iort_msi_map_rid(struct device *dev, u32 req_id) 291 + { 292 + struct acpi_iort_node *node; 293 + u32 dev_id; 294 + 295 + node = iort_find_dev_node(dev); 296 + if (!node) 297 + return req_id; 298 + 299 + iort_node_map_rid(node, req_id, &dev_id, ACPI_IORT_NODE_ITS_GROUP); 300 + return dev_id; 301 + } 302 + 303 + /** 304 + * iort_dev_find_its_id() - Find the ITS identifier for a device 305 + * @dev: The device. 306 + * @idx: Index of the ITS identifier list. 307 + * @its_id: ITS identifier. 308 + * 309 + * Returns: 0 on success, appropriate error value otherwise 310 + */ 311 + static int iort_dev_find_its_id(struct device *dev, u32 req_id, 312 + unsigned int idx, int *its_id) 313 + { 314 + struct acpi_iort_its_group *its; 315 + struct acpi_iort_node *node; 316 + 317 + node = iort_find_dev_node(dev); 318 + if (!node) 319 + return -ENXIO; 320 + 321 + node = iort_node_map_rid(node, req_id, NULL, ACPI_IORT_NODE_ITS_GROUP); 322 + if (!node) 323 + return -ENXIO; 324 + 325 + /* Move to ITS specific data */ 326 + its = (struct acpi_iort_its_group *)node->node_data; 327 + if (idx > its->its_count) { 328 + dev_err(dev, "requested ITS ID index [%d] is greater than available [%d]\n", 329 + idx, its->its_count); 330 + return -ENXIO; 331 + } 332 + 333 + *its_id = its->identifiers[idx]; 334 + return 0; 335 + } 336 + 337 + /** 338 + * iort_get_device_domain() - Find MSI domain related to a device 339 + * @dev: The device. 340 + * @req_id: Requester ID for the device. 
341 + * 342 + * Returns: the MSI domain for this device, NULL otherwise 343 + */ 344 + struct irq_domain *iort_get_device_domain(struct device *dev, u32 req_id) 345 + { 346 + struct fwnode_handle *handle; 347 + int its_id; 348 + 349 + if (iort_dev_find_its_id(dev, req_id, 0, &its_id)) 350 + return NULL; 351 + 352 + handle = iort_find_domain_token(its_id); 353 + if (!handle) 354 + return NULL; 355 + 356 + return irq_find_matching_fwnode(handle, DOMAIN_BUS_PCI_MSI); 357 + } 358 + 359 + void __init acpi_iort_init(void) 360 + { 361 + acpi_status status; 362 + 363 + status = acpi_get_table(ACPI_SIG_IORT, 0, &iort_table); 364 + if (ACPI_FAILURE(status) && status != AE_NOT_FOUND) { 365 + const char *msg = acpi_format_exception(status); 366 + pr_err("Failed to get table, %s\n", msg); 367 + } 368 + }
+2
drivers/acpi/bus.c
 #ifdef CONFIG_X86
 #include <asm/mpspec.h>
 #endif
+#include <linux/acpi_iort.h>
 #include <linux/pci.h>
 #include <acpi/apei.h>
 #include <linux/dmi.h>
···
 	}

 	pci_mmcfg_late_init();
+	acpi_iort_init();
 	acpi_scan_init();
 	acpi_ec_init();
 	acpi_debugfs_init();
+1 -2
drivers/base/platform-msi.c
 	}

 	for (i = 0; i < nvec; i++) {
-		desc = alloc_msi_entry(dev);
+		desc = alloc_msi_entry(dev, 1, NULL);
 		if (!desc)
 			break;

 		desc->platform.msi_priv_data = data;
 		desc->platform.msi_index = base + i;
-		desc->nvec_used = 1;
 		desc->irq = virq ? virq + i : 0;

 		list_add_tail(&desc->list, dev_to_msi_list(dev));
+15
drivers/irqchip/Kconfig
 	bool
 	depends on PCI
 	depends on PCI_MSI
+	select ACPI_IORT if ACPI

 config ARM_NVIC
 	bool
···
 	select GENERIC_IRQ_CHIP
 	select IRQ_DOMAIN

+config JCORE_AIC
+	bool "J-Core integrated AIC"
+	depends on OF && (SUPERH || COMPILE_TEST)
+	select IRQ_DOMAIN
+	help
+	  Support for the J-Core integrated AIC.
+
 config RENESAS_INTC_IRQPIN
 	bool
 	select IRQ_DOMAIN
···
 config MVEBU_ODMI
 	bool

+config MVEBU_PIC
+	bool
+
 config LS_SCFG_MSI
 	def_bool y if SOC_LS1021A || ARCH_LAYERSCAPE
 	depends on PCI && PCI_MSI
···
 	select IRQ_DOMAIN
 	help
 	  Support the EZchip NPS400 global interrupt controller
+
+config STM32_EXTI
+	bool
+	select IRQ_DOMAIN
+3
drivers/irqchip/Makefile
 obj-$(CONFIG_IMGPDC_IRQ)		+= irq-imgpdc.o
 obj-$(CONFIG_IRQ_MIPS_CPU)		+= irq-mips-cpu.o
 obj-$(CONFIG_SIRF_IRQ)			+= irq-sirfsoc.o
+obj-$(CONFIG_JCORE_AIC)			+= irq-jcore-aic.o
 obj-$(CONFIG_RENESAS_INTC_IRQPIN)	+= irq-renesas-intc-irqpin.o
 obj-$(CONFIG_RENESAS_IRQC)		+= irq-renesas-irqc.o
 obj-$(CONFIG_VERSATILE_FPGA_IRQ)	+= irq-versatile-fpga.o
···
 obj-$(CONFIG_IMX_GPCV2)			+= irq-imx-gpcv2.o
 obj-$(CONFIG_PIC32_EVIC)		+= irq-pic32-evic.o
 obj-$(CONFIG_MVEBU_ODMI)		+= irq-mvebu-odmi.o
+obj-$(CONFIG_MVEBU_PIC)			+= irq-mvebu-pic.o
 obj-$(CONFIG_LS_SCFG_MSI)		+= irq-ls-scfg-msi.o
 obj-$(CONFIG_EZNPS_GIC)			+= irq-eznps.o
 obj-$(CONFIG_ARCH_ASPEED)		+= irq-aspeed-vic.o
+obj-$(CONFIG_STM32_EXTI)		+= irq-stm32-exti.o
+5 -18
drivers/irqchip/irq-gic-pm.c
 static int gic_get_clocks(struct device *dev, const struct gic_clk_data *data)
 {
-	struct clk *clk;
 	unsigned int i;
 	int ret;
···
 		return ret;

 	for (i = 0; i < data->num_clocks; i++) {
-		clk = of_clk_get_by_name(dev->of_node, data->clocks[i]);
-		if (IS_ERR(clk)) {
-			dev_err(dev, "failed to get clock %s\n",
-				data->clocks[i]);
-			ret = PTR_ERR(clk);
-			goto error;
-		}
-
-		ret = pm_clk_add_clk(dev, clk);
+		ret = of_pm_clk_add_clk(dev, data->clocks[i]);
 		if (ret) {
-			dev_err(dev, "failed to add clock at index %d\n", i);
-			clk_put(clk);
-			goto error;
+			dev_err(dev, "failed to add clock %s\n",
+				data->clocks[i]);
+			pm_clk_destroy(dev);
+			return ret;
 		}
 	}

 	return 0;
-
-error:
-	pm_clk_destroy(dev);
-
-	return ret;
 }

 static int gic_probe(struct platform_device *pdev)
+73 -15
drivers/irqchip/irq-gic-v3-its-pci-msi.c
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

+#include <linux/acpi_iort.h>
 #include <linux/msi.h>
 #include <linux/of.h>
 #include <linux/of_irq.h>
···
 	{},
 };

-static int __init its_pci_msi_init(void)
+static int __init its_pci_msi_init_one(struct fwnode_handle *handle,
+				       const char *name)
+{
+	struct irq_domain *parent;
+
+	parent = irq_find_matching_fwnode(handle, DOMAIN_BUS_NEXUS);
+	if (!parent || !msi_get_domain_info(parent)) {
+		pr_err("%s: Unable to locate ITS domain\n", name);
+		return -ENXIO;
+	}
+
+	if (!pci_msi_create_irq_domain(handle, &its_pci_msi_domain_info,
+				       parent)) {
+		pr_err("%s: Unable to create PCI domain\n", name);
+		return -ENOMEM;
+	}
+
+	return 0;
+}
+
+static int __init its_pci_of_msi_init(void)
 {
 	struct device_node *np;
-	struct irq_domain *parent;

 	for (np = of_find_matching_node(NULL, its_device_id); np;
 	     np = of_find_matching_node(np, its_device_id)) {
 		if (!of_property_read_bool(np, "msi-controller"))
 			continue;

-		parent = irq_find_matching_host(np, DOMAIN_BUS_NEXUS);
-		if (!parent || !msi_get_domain_info(parent)) {
-			pr_err("%s: unable to locate ITS domain\n",
-			       np->full_name);
+		if (its_pci_msi_init_one(of_node_to_fwnode(np), np->full_name))
 			continue;
-		}
-
-		if (!pci_msi_create_irq_domain(of_node_to_fwnode(np),
-					       &its_pci_msi_domain_info,
-					       parent)) {
-			pr_err("%s: unable to create PCI domain\n",
-			       np->full_name);
-			continue;
-		}

 		pr_info("PCI/MSI: %s domain created\n", np->full_name);
 	}
+
+	return 0;
+}
+
+#ifdef CONFIG_ACPI
+
+static int __init
+its_pci_msi_parse_madt(struct acpi_subtable_header *header,
+		       const unsigned long end)
+{
+	struct acpi_madt_generic_translator *its_entry;
+	struct fwnode_handle *dom_handle;
+	const char *node_name;
+	int err = -ENXIO;
+
+	its_entry = (struct acpi_madt_generic_translator *)header;
+	node_name = kasprintf(GFP_KERNEL, "ITS@0x%lx",
+			      (long)its_entry->base_address);
+	dom_handle = iort_find_domain_token(its_entry->translation_id);
+	if (!dom_handle) {
+		pr_err("%s: Unable to locate ITS domain handle\n", node_name);
+		goto out;
+	}
+
+	err = its_pci_msi_init_one(dom_handle, node_name);
+	if (!err)
+		pr_info("PCI/MSI: %s domain created\n", node_name);
+
+out:
+	kfree(node_name);
+	return err;
+}
+
+static int __init its_pci_acpi_msi_init(void)
+{
+	acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_TRANSLATOR,
+			      its_pci_msi_parse_madt, 0);
+	return 0;
+}
+#else
+static int __init its_pci_acpi_msi_init(void)
+{
+	return 0;
+}
+#endif
+
+static int __init its_pci_msi_init(void)
+{
+	its_pci_of_msi_init();
+	its_pci_acpi_msi_init();

 	return 0;
 }
+126 -45
drivers/irqchip/irq-gic-v3-its.c
 * along with this program.  If not, see <http://www.gnu.org/licenses/>.
 */

+#include <linux/acpi.h>
 #include <linux/bitmap.h>
 #include <linux/cpu.h>
 #include <linux/delay.h>
 #include <linux/interrupt.h>
+#include <linux/irqdomain.h>
+#include <linux/acpi_iort.h>
 #include <linux/log2.h>
 #include <linux/mm.h>
 #include <linux/msi.h>
···
 	raw_spinlock_t		lock;
 	struct list_head	entry;
 	void __iomem		*base;
-	unsigned long		phys_base;
+	phys_addr_t		phys_base;
 	struct its_cmd_block	*cmd_base;
 	struct its_cmd_block	*cmd_write;
 	struct its_baser	tables[GITS_BASER_NR_REGS];
···
 static LIST_HEAD(its_nodes);
 static DEFINE_SPINLOCK(its_lock);
 static struct rdists *gic_rdists;
+static struct irq_domain *its_parent;

 #define gic_data_rdist()		(raw_cpu_ptr(gic_rdists->rdist))
 #define gic_data_rdist_rd_base()	(gic_data_rdist()->rd_base)
···
 		fwspec.param[0] = GIC_IRQ_TYPE_LPI;
 		fwspec.param[1] = hwirq;
 		fwspec.param[2] = IRQ_TYPE_EDGE_RISING;
+	} else if (is_fwnode_irqchip(domain->parent->fwnode)) {
+		fwspec.fwnode = domain->parent->fwnode;
+		fwspec.param_count = 2;
+		fwspec.param[0] = hwirq;
+		fwspec.param[1] = IRQ_TYPE_EDGE_RISING;
 	} else {
 		return -EINVAL;
 	}
···
 	gic_enable_quirks(iidr, its_quirks, its);
 }

-static int __init its_probe(struct device_node *node,
-			    struct irq_domain *parent)
+static int its_init_domain(struct fwnode_handle *handle, struct its_node *its)
 {
-	struct resource res;
+	struct irq_domain *inner_domain;
+	struct msi_domain_info *info;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info)
+		return -ENOMEM;
+
+	inner_domain = irq_domain_create_tree(handle, &its_domain_ops, its);
+	if (!inner_domain) {
+		kfree(info);
+		return -ENOMEM;
+	}
+
+	inner_domain->parent = its_parent;
+	inner_domain->bus_token = DOMAIN_BUS_NEXUS;
+	info->ops = &its_msi_domain_ops;
+	info->data = its;
+	inner_domain->host_data = info;
+
+	return 0;
+}
+
+static int __init its_probe_one(struct resource *res,
+				struct fwnode_handle *handle, int numa_node)
+{
 	struct its_node *its;
 	void __iomem *its_base;
-	struct irq_domain *inner_domain;
 	u32 val;
 	u64 baser, tmp;
 	int err;

-	err = of_address_to_resource(node, 0, &res);
-	if (err) {
-		pr_warn("%s: no regs?\n", node->full_name);
-		return -ENXIO;
-	}
-
-	its_base = ioremap(res.start, resource_size(&res));
+	its_base = ioremap(res->start, resource_size(res));
 	if (!its_base) {
-		pr_warn("%s: unable to map registers\n", node->full_name);
+		pr_warn("ITS@%pa: Unable to map ITS registers\n", &res->start);
 		return -ENOMEM;
 	}

 	val = readl_relaxed(its_base + GITS_PIDR2) & GIC_PIDR2_ARCH_MASK;
 	if (val != 0x30 && val != 0x40) {
-		pr_warn("%s: no ITS detected, giving up\n", node->full_name);
+		pr_warn("ITS@%pa: No ITS detected, giving up\n", &res->start);
 		err = -ENODEV;
 		goto out_unmap;
 	}

 	err = its_force_quiescent(its_base);
 	if (err) {
-		pr_warn("%s: failed to quiesce, giving up\n",
-			node->full_name);
+		pr_warn("ITS@%pa: Failed to quiesce, giving up\n", &res->start);
 		goto out_unmap;
 	}

-	pr_info("ITS: %s\n", node->full_name);
+	pr_info("ITS %pR\n", res);

 	its = kzalloc(sizeof(*its), GFP_KERNEL);
 	if (!its) {
···
 	INIT_LIST_HEAD(&its->entry);
 	INIT_LIST_HEAD(&its->its_device_list);
 	its->base = its_base;
-	its->phys_base = res.start;
+	its->phys_base = res->start;
 	its->ite_size = ((readl_relaxed(its_base + GITS_TYPER) >> 4) & 0xf) + 1;
-	its->numa_node = of_node_to_nid(node);
+	its->numa_node = numa_node;

 	its->cmd_base = kzalloc(ITS_CMD_QUEUE_SZ, GFP_KERNEL);
 	if (!its->cmd_base) {
···
 	writeq_relaxed(0, its->base + GITS_CWRITER);
 	writel_relaxed(GITS_CTLR_ENABLE, its->base + GITS_CTLR);

-	if (of_property_read_bool(node, "msi-controller")) {
-		struct msi_domain_info *info;
-
-		info = kzalloc(sizeof(*info), GFP_KERNEL);
-		if (!info) {
-			err = -ENOMEM;
-			goto out_free_tables;
-		}
-
-		inner_domain = irq_domain_add_tree(node, &its_domain_ops, its);
-		if (!inner_domain) {
-			err = -ENOMEM;
-			kfree(info);
-			goto out_free_tables;
-		}
-
-		inner_domain->parent = parent;
-		inner_domain->bus_token = DOMAIN_BUS_NEXUS;
-		info->ops = &its_msi_domain_ops;
-		info->data = its;
-		inner_domain->host_data = info;
-	}
+	err = its_init_domain(handle, its);
+	if (err)
+		goto out_free_tables;

 	spin_lock(&its_lock);
 	list_add(&its->entry, &its_nodes);
···
 	kfree(its);
 out_unmap:
 	iounmap(its_base);
-	pr_err("ITS: failed probing %s (%d)\n", node->full_name, err);
+	pr_err("ITS@%pa: failed probing (%d)\n", &res->start, err);
 	return err;
 }
···
 	{},
 };

-int __init its_init(struct device_node *node, struct rdists *rdists,
-		    struct irq_domain *parent_domain)
+static int __init its_of_probe(struct device_node *node)
 {
 	struct device_node *np;
+	struct resource res;

 	for (np = of_find_matching_node(node, its_device_id); np;
 	     np = of_find_matching_node(np, its_device_id)) {
-		its_probe(np, parent_domain);
+		if (!of_property_read_bool(np, "msi-controller")) {
+			pr_warn("%s: no msi-controller property, ITS ignored\n",
+				np->full_name);
+			continue;
+		}
+
+		if (of_address_to_resource(np, 0, &res)) {
+			pr_warn("%s: no regs?\n", np->full_name);
+			continue;
+		}
+
+		its_probe_one(&res, &np->fwnode, of_node_to_nid(np));
 	}
+	return 0;
+}
+
+#ifdef CONFIG_ACPI
+
+#define ACPI_GICV3_ITS_MEM_SIZE (SZ_128K)
+
+static int __init gic_acpi_parse_madt_its(struct acpi_subtable_header *header,
+					  const unsigned long end)
+{
+	struct acpi_madt_generic_translator *its_entry;
+	struct fwnode_handle *dom_handle;
+	struct resource res;
+	int err;
+
+	its_entry = (struct acpi_madt_generic_translator *)header;
+	memset(&res, 0, sizeof(res));
+	res.start = its_entry->base_address;
+	res.end = its_entry->base_address + ACPI_GICV3_ITS_MEM_SIZE - 1;
+	res.flags = IORESOURCE_MEM;
+
+	dom_handle = irq_domain_alloc_fwnode((void *)its_entry->base_address);
+	if (!dom_handle) {
+		pr_err("ITS@%pa: Unable to allocate GICv3 ITS domain token\n",
+		       &res.start);
+		return -ENOMEM;
+	}
+
+	err = iort_register_domain_token(its_entry->translation_id, dom_handle);
+	if (err) {
+		pr_err("ITS@%pa: Unable to register GICv3 ITS domain token (ITS ID %d) to IORT\n",
+		       &res.start, its_entry->translation_id);
+		goto dom_err;
+	}
+
+	err = its_probe_one(&res, dom_handle, NUMA_NO_NODE);
+	if (!err)
+		return 0;
+
+	iort_deregister_domain_token(its_entry->translation_id);
+dom_err:
+	irq_domain_free_fwnode(dom_handle);
+	return err;
+}
+
+static void __init its_acpi_probe(void)
+{
+	acpi_table_parse_madt(ACPI_MADT_TYPE_GENERIC_TRANSLATOR,
+			      gic_acpi_parse_madt_its, 0);
+}
+#else
+static void __init
its_acpi_probe(void) { } 1852 + #endif 1853 + 1854 + int __init its_init(struct fwnode_handle *handle, struct rdists *rdists, 1855 + struct irq_domain *parent_domain) 1856 + { 1857 + struct device_node *of_node; 1858 + 1859 + its_parent = parent_domain; 1860 + of_node = to_of_node(handle); 1861 + if (of_node) 1862 + its_of_probe(of_node); 1863 + else 1864 + its_acpi_probe(); 1794 1865 1795 1866 if (list_empty(&its_nodes)) { 1796 1867 pr_warn("ITS: No ITS available, not enabling LPIs\n");
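The refactored its_init() above takes a generic fwnode handle and dispatches to either the OF or the ACPI probe path. A minimal userspace sketch of that dispatch shape, with illustrative stand-in types (this is not the kernel's fwnode API):

```c
#include <assert.h>

/* Illustrative stand-ins: the real its_init() takes a struct fwnode_handle
 * and uses to_of_node() to decide whether it wraps a device-tree node. */
enum fwnode_type { FWNODE_OF, FWNODE_ACPI };

struct fwnode_handle {
	enum fwnode_type type;
};

static int its_of_probe(void)   { return 0; }
static int its_acpi_probe(void) { return 1; }

/* Same shape as its_init() in the diff: one entry point, two firmware paths. */
static int its_init(const struct fwnode_handle *handle)
{
	if (handle->type == FWNODE_OF)
		return its_of_probe();
	return its_acpi_probe();
}
```

The point of the refactor is that the caller (the GICv3 core) no longer needs to know which firmware described the ITS; both paths converge on its_probe_one().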
+10 -5
drivers/irqchip/irq-gic-v3.c
··· 495 495 /* Set priority mask register */ 496 496 gic_write_pmr(DEFAULT_PMR_VALUE); 497 497 498 + /* 499 + * Some firmwares hand over to the kernel with the BPR changed from 500 + * its reset value (and with a value large enough to prevent 501 + * any pre-emptive interrupts from working at all). Writing a zero 502 + * to BPR restores its reset value. 503 + */ 504 + gic_write_bpr1(0); 505 + 498 506 if (static_key_true(&supports_deactivate)) { 499 507 /* EOI drops priority only (mode 1) */ 500 508 gic_write_ctlr(ICC_CTLR_EL1_EOImode_drop); ··· 919 911 u64 redist_stride, 920 912 struct fwnode_handle *handle) 921 913 { 922 - struct device_node *node; 923 914 u32 typer; 924 915 int gic_irqs; 925 916 int err; ··· 959 952 960 953 set_handle_irq(gic_handle_irq); 961 954 962 - node = to_of_node(handle); 963 - if (IS_ENABLED(CONFIG_ARM_GIC_V3_ITS) && gic_dist_supports_lpis() && 964 - node) /* Temp hack to prevent ITS init for ACPI */ 965 - its_init(node, &gic_data.rdists, gic_data.domain); 955 + if (IS_ENABLED(CONFIG_ARM_GIC_V3_ITS) && gic_dist_supports_lpis()) 956 + its_init(handle, &gic_data.rdists, gic_data.domain); 966 957 967 958 gic_smp_init(); 968 959 gic_dist_init();
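The BPR comment above describes why a too-large binary point disables preemption: the binary point decides how many upper priority bits form the preemption group. A simplified model of that grouping (this mirrors the GICv3 behaviour conceptually, not any kernel function):

```c
#include <assert.h>
#include <stdint.h>

/* Simplified model: with binary point value bpr, only bits [7:bpr+1] of an
 * 8-bit priority form the preemption group. A large enough bpr collapses
 * every priority into one group, so nothing can preempt anything. */
static uint8_t group_priority(uint8_t prio, unsigned int bpr)
{
	uint8_t mask = (uint8_t)(0xff << (bpr + 1));

	return prio & mask;
}
```

Writing zero to BPR restores the finest grouping, which is what gic_write_bpr1(0) in the diff guarantees regardless of what firmware left behind.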
+28 -10
drivers/irqchip/irq-gic.c
··· 91 91 #endif 92 92 }; 93 93 94 - static DEFINE_RAW_SPINLOCK(irq_controller_lock); 94 + #ifdef CONFIG_BL_SWITCHER 95 + 96 + static DEFINE_RAW_SPINLOCK(cpu_map_lock); 97 + 98 + #define gic_lock_irqsave(f) \ 99 + raw_spin_lock_irqsave(&cpu_map_lock, (f)) 100 + #define gic_unlock_irqrestore(f) \ 101 + raw_spin_unlock_irqrestore(&cpu_map_lock, (f)) 102 + 103 + #define gic_lock() raw_spin_lock(&cpu_map_lock) 104 + #define gic_unlock() raw_spin_unlock(&cpu_map_lock) 105 + 106 + #else 107 + 108 + #define gic_lock_irqsave(f) do { (void)(f); } while(0) 109 + #define gic_unlock_irqrestore(f) do { (void)(f); } while(0) 110 + 111 + #define gic_lock() do { } while(0) 112 + #define gic_unlock() do { } while(0) 113 + 114 + #endif 95 115 96 116 /* 97 117 * The GIC mapping of CPU interfaces does not necessarily match ··· 337 317 if (cpu >= NR_GIC_CPU_IF || cpu >= nr_cpu_ids) 338 318 return -EINVAL; 339 319 340 - raw_spin_lock_irqsave(&irq_controller_lock, flags); 320 + gic_lock_irqsave(flags); 341 321 mask = 0xff << shift; 342 322 bit = gic_cpu_map[cpu] << shift; 343 323 val = readl_relaxed(reg) & ~mask; 344 324 writel_relaxed(val | bit, reg); 345 - raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 325 + gic_unlock_irqrestore(flags); 346 326 347 327 return IRQ_SET_MASK_OK_DONE; 348 328 } ··· 394 374 395 375 chained_irq_enter(chip, desc); 396 376 397 - raw_spin_lock(&irq_controller_lock); 398 377 status = readl_relaxed(gic_data_cpu_base(chip_data) + GIC_CPU_INTACK); 399 - raw_spin_unlock(&irq_controller_lock); 400 378 401 379 gic_irq = (status & GICC_IAR_INT_ID_MASK); 402 380 if (gic_irq == GICC_INT_SPURIOUS) ··· 794 776 return; 795 777 } 796 778 797 - raw_spin_lock_irqsave(&irq_controller_lock, flags); 779 + gic_lock_irqsave(flags); 798 780 799 781 /* Convert our logical CPU mask into a physical one. 
*/ 800 782 for_each_cpu(cpu, mask) ··· 809 791 /* this always happens on GIC0 */ 810 792 writel_relaxed(map << 16 | irq, gic_data_dist_base(&gic_data[0]) + GIC_DIST_SOFTINT); 811 793 812 - raw_spin_unlock_irqrestore(&irq_controller_lock, flags); 794 + gic_unlock_irqrestore(flags); 813 795 } 814 796 #endif 815 797 ··· 877 859 cur_target_mask = 0x01010101 << cur_cpu_id; 878 860 ror_val = (cur_cpu_id - new_cpu_id) & 31; 879 861 880 - raw_spin_lock(&irq_controller_lock); 862 + gic_lock(); 881 863 882 864 /* Update the target interface for this logical CPU */ 883 865 gic_cpu_map[cpu] = 1 << new_cpu_id; ··· 897 879 } 898 880 } 899 881 900 - raw_spin_unlock(&irq_controller_lock); 882 + gic_unlock(); 901 883 902 884 /* 903 885 * Now let's migrate and clear any potential SGIs that might be ··· 939 921 return gic_dist_physaddr + GIC_DIST_SOFTINT; 940 922 } 941 923 942 - void __init gic_init_physaddr(struct device_node *node) 924 + static void __init gic_init_physaddr(struct device_node *node) 943 925 { 944 926 struct resource res; 945 927 if (of_address_to_resource(node, 0, &res) == 0) {
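The irq-gic.c change above makes the lock a no-op unless CONFIG_BL_SWITCHER is set, since the cpu_map is otherwise immutable after boot. The pattern of compiling lock calls away can be sketched like this (FAKE_BL_SWITCHER and the counter are illustrative, not kernel code):

```c
#include <assert.h>

/* Compile-time lock elision: when the feature macro is off, the lock/unlock
 * calls expand to nothing. A counter stands in for a real spinlock here. */
static int lock_calls;

#ifdef FAKE_BL_SWITCHER
#define gic_lock()   (lock_calls++)
#define gic_unlock() (lock_calls--)
#else
#define gic_lock()   do { } while (0)
#define gic_unlock() do { } while (0)
#endif

static int set_affinity(void)
{
	gic_lock();
	/* ... register read-modify-write would go here ... */
	gic_unlock();
	return lock_calls;
}
```

With the feature disabled, the compiled function contains no locking at all, which is exactly the fast path the diff buys for non-big.LITTLE-switcher kernels.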
+95
drivers/irqchip/irq-jcore-aic.c
··· 1 + /* 2 + * J-Core SoC AIC driver 3 + * 4 + * Copyright (C) 2015-2016 Smart Energy Instruments, Inc. 5 + * 6 + * This file is subject to the terms and conditions of the GNU General Public 7 + * License. See the file "COPYING" in the main directory of this archive 8 + * for more details. 9 + */ 10 + 11 + #include <linux/irq.h> 12 + #include <linux/io.h> 13 + #include <linux/irqchip.h> 14 + #include <linux/irqdomain.h> 15 + #include <linux/cpu.h> 16 + #include <linux/of.h> 17 + #include <linux/of_address.h> 18 + #include <linux/of_irq.h> 19 + 20 + #define JCORE_AIC_MAX_HWIRQ 127 21 + #define JCORE_AIC1_MIN_HWIRQ 16 22 + #define JCORE_AIC2_MIN_HWIRQ 64 23 + 24 + #define JCORE_AIC1_INTPRI_REG 8 25 + 26 + static struct irq_chip jcore_aic; 27 + 28 + static int jcore_aic_irqdomain_map(struct irq_domain *d, unsigned int irq, 29 + irq_hw_number_t hwirq) 30 + { 31 + struct irq_chip *aic = d->host_data; 32 + 33 + irq_set_chip_and_handler(irq, aic, handle_simple_irq); 34 + 35 + return 0; 36 + } 37 + 38 + static const struct irq_domain_ops jcore_aic_irqdomain_ops = { 39 + .map = jcore_aic_irqdomain_map, 40 + .xlate = irq_domain_xlate_onecell, 41 + }; 42 + 43 + static void noop(struct irq_data *data) 44 + { 45 + } 46 + 47 + static int __init aic_irq_of_init(struct device_node *node, 48 + struct device_node *parent) 49 + { 50 + unsigned min_irq = JCORE_AIC2_MIN_HWIRQ; 51 + unsigned dom_sz = JCORE_AIC_MAX_HWIRQ+1; 52 + struct irq_domain *domain; 53 + 54 + pr_info("Initializing J-Core AIC\n"); 55 + 56 + /* AIC1 needs priority initialization to receive interrupts. 
*/ 57 + if (of_device_is_compatible(node, "jcore,aic1")) { 58 + unsigned cpu; 59 + 60 + for_each_present_cpu(cpu) { 61 + void __iomem *base = of_iomap(node, cpu); 62 + 63 + if (!base) { 64 + pr_err("Unable to map AIC for cpu %u\n", cpu); 65 + return -ENOMEM; 66 + } 67 + __raw_writel(0xffffffff, base + JCORE_AIC1_INTPRI_REG); 68 + iounmap(base); 69 + } 70 + min_irq = JCORE_AIC1_MIN_HWIRQ; 71 + } 72 + 73 + /* 74 + * The irq chip framework requires either mask/unmask or enable/disable 75 + * function pointers to be provided, but the hardware does not have any 76 + * such mechanism; the only interrupt masking is at the cpu level and 77 + * it affects all interrupts. We provide dummy mask/unmask. The hardware 78 + * handles all interrupt control and clears pending status when the cpu 79 + * accepts the interrupt. 80 + */ 81 + jcore_aic.irq_mask = noop; 82 + jcore_aic.irq_unmask = noop; 83 + jcore_aic.name = "AIC"; 84 + 85 + domain = irq_domain_add_linear(node, dom_sz, &jcore_aic_irqdomain_ops, 86 + &jcore_aic); 87 + if (!domain) 88 + return -ENOMEM; 89 + irq_create_strict_mappings(domain, min_irq, min_irq, dom_sz - min_irq); 90 + 91 + return 0; 92 + } 93 + 94 + IRQCHIP_DECLARE(jcore_aic2, "jcore,aic2", aic_irq_of_init); 95 + IRQCHIP_DECLARE(jcore_aic1, "jcore,aic1", aic_irq_of_init);
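The J-Core AIC comment above explains why the driver installs do-nothing mask/unmask callbacks: the framework requires the pointers, but the hardware has no per-irq masking. Installing a noop instead of leaving NULL lets callers invoke the ops unconditionally. A sketch with illustrative types (not the kernel's struct irq_chip):

```c
#include <assert.h>

/* Function-pointer table with noop defaults, so the dispatcher never has
 * to NULL-check before calling. Types here are illustrative only. */
struct chip_ops {
	void (*mask)(int irq);
	void (*unmask)(int irq);
};

static void noop(int irq) { (void)irq; }

static struct chip_ops jcore_ops = { .mask = noop, .unmask = noop };

static int dispatch(const struct chip_ops *ops, int irq)
{
	ops->mask(irq);		/* safe: never NULL */
	/* ... handle the interrupt ... */
	ops->unmask(irq);
	return irq;
}
```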
+1 -1
drivers/irqchip/irq-keystone.c
··· 109 109 dev_dbg(kirq->dev, "dispatch bit %d, virq %d\n", 110 110 src, virq); 111 111 if (!virq) 112 - dev_warn(kirq->dev, "sporious irq detected hwirq %d, virq %d\n", 112 + dev_warn(kirq->dev, "spurious irq detected hwirq %d, virq %d\n", 113 113 src, virq); 114 114 generic_handle_irq(virq); 115 115 }
+2 -12
drivers/irqchip/irq-mips-gic.c
··· 371 371 bitmap_and(pending, pending, intrmask, gic_shared_intrs); 372 372 bitmap_and(pending, pending, pcpu_mask, gic_shared_intrs); 373 373 374 - intr = find_first_bit(pending, gic_shared_intrs); 375 - while (intr != gic_shared_intrs) { 374 + for_each_set_bit(intr, pending, gic_shared_intrs) { 376 375 virq = irq_linear_revmap(gic_irq_domain, 377 376 GIC_SHARED_TO_HWIRQ(intr)); 378 377 if (chained) 379 378 generic_handle_irq(virq); 380 379 else 381 380 do_IRQ(virq); 382 - 383 - /* go to next pending bit */ 384 - bitmap_clear(pending, intr, 1); 385 - intr = find_first_bit(pending, gic_shared_intrs); 386 381 } 387 382 } 388 383 ··· 513 518 514 519 bitmap_and(&pending, &pending, &masked, GIC_NUM_LOCAL_INTRS); 515 520 516 - intr = find_first_bit(&pending, GIC_NUM_LOCAL_INTRS); 517 - while (intr != GIC_NUM_LOCAL_INTRS) { 521 + for_each_set_bit(intr, &pending, GIC_NUM_LOCAL_INTRS) { 518 522 virq = irq_linear_revmap(gic_irq_domain, 519 523 GIC_LOCAL_TO_HWIRQ(intr)); 520 524 if (chained) 521 525 generic_handle_irq(virq); 522 526 else 523 527 do_IRQ(virq); 524 - 525 - /* go to next pending bit */ 526 - bitmap_clear(&pending, intr, 1); 527 - intr = find_first_bit(&pending, GIC_NUM_LOCAL_INTRS); 528 528 } 529 529 } 530 530
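The mips-gic cleanup above replaces a find_first_bit()/bitmap_clear() rescan loop with a single for_each_set_bit() pass. A userspace approximation of that iteration style (a naive macro, not the kernel's ffs-based implementation):

```c
#include <assert.h>

/* Naive stand-in for the kernel's for_each_set_bit(): visit each set bit
 * of a 32-bit word exactly once, without mutating the word. */
#define for_each_set_bit(bit, word) \
	for ((bit) = 0; (bit) < 32; (bit)++) \
		if ((word) & (1u << (bit)))

static int sum_set_bits(unsigned int pending)
{
	int bit, sum = 0;

	for_each_set_bit(bit, pending)
		sum += bit;
	return sum;
}
```

Unlike the old loop, nothing clears bits in the pending copy as it goes, which is both simpler and avoids rescanning from bit 0 after every interrupt handled.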
+197
drivers/irqchip/irq-mvebu-pic.c
··· 1 + /* 2 + * Copyright (C) 2016 Marvell 3 + * 4 + * Yehuda Yitschak <yehuday@marvell.com> 5 + * Thomas Petazzoni <thomas.petazzoni@free-electrons.com> 6 + * 7 + * This file is licensed under the terms of the GNU General Public 8 + * License version 2. This program is licensed "as is" without any 9 + * warranty of any kind, whether express or implied. 10 + */ 11 + 12 + #include <linux/interrupt.h> 13 + #include <linux/io.h> 14 + #include <linux/irq.h> 15 + #include <linux/irqchip.h> 16 + #include <linux/irqchip/chained_irq.h> 17 + #include <linux/irqdomain.h> 18 + #include <linux/module.h> 19 + #include <linux/of_irq.h> 20 + #include <linux/platform_device.h> 21 + 22 + #define PIC_CAUSE 0x0 23 + #define PIC_MASK 0x4 24 + 25 + #define PIC_MAX_IRQS 32 26 + #define PIC_MAX_IRQ_MASK ((1UL << PIC_MAX_IRQS) - 1) 27 + 28 + struct mvebu_pic { 29 + void __iomem *base; 30 + u32 parent_irq; 31 + struct irq_domain *domain; 32 + struct irq_chip irq_chip; 33 + }; 34 + 35 + static void mvebu_pic_reset(struct mvebu_pic *pic) 36 + { 37 + /* ACK and mask all interrupts */ 38 + writel(0, pic->base + PIC_MASK); 39 + writel(PIC_MAX_IRQ_MASK, pic->base + PIC_CAUSE); 40 + } 41 + 42 + static void mvebu_pic_eoi_irq(struct irq_data *d) 43 + { 44 + struct mvebu_pic *pic = irq_data_get_irq_chip_data(d); 45 + 46 + writel(1 << d->hwirq, pic->base + PIC_CAUSE); 47 + } 48 + 49 + static void mvebu_pic_mask_irq(struct irq_data *d) 50 + { 51 + struct mvebu_pic *pic = irq_data_get_irq_chip_data(d); 52 + u32 reg; 53 + 54 + reg = readl(pic->base + PIC_MASK); 55 + reg |= (1 << d->hwirq); 56 + writel(reg, pic->base + PIC_MASK); 57 + } 58 + 59 + static void mvebu_pic_unmask_irq(struct irq_data *d) 60 + { 61 + struct mvebu_pic *pic = irq_data_get_irq_chip_data(d); 62 + u32 reg; 63 + 64 + reg = readl(pic->base + PIC_MASK); 65 + reg &= ~(1 << d->hwirq); 66 + writel(reg, pic->base + PIC_MASK); 67 + } 68 + 69 + static int mvebu_pic_irq_map(struct irq_domain *domain, unsigned int virq, 70 + irq_hw_number_t 
hwirq) 71 + { 72 + struct mvebu_pic *pic = domain->host_data; 73 + 74 + irq_set_percpu_devid(virq); 75 + irq_set_chip_data(virq, pic); 76 + irq_set_chip_and_handler(virq, &pic->irq_chip, 77 + handle_percpu_devid_irq); 78 + irq_set_status_flags(virq, IRQ_LEVEL); 79 + irq_set_probe(virq); 80 + 81 + return 0; 82 + } 83 + 84 + static const struct irq_domain_ops mvebu_pic_domain_ops = { 85 + .map = mvebu_pic_irq_map, 86 + .xlate = irq_domain_xlate_onecell, 87 + }; 88 + 89 + static void mvebu_pic_handle_cascade_irq(struct irq_desc *desc) 90 + { 91 + struct mvebu_pic *pic = irq_desc_get_handler_data(desc); 92 + struct irq_chip *chip = irq_desc_get_chip(desc); 93 + unsigned long irqmap, irqn; 94 + unsigned int cascade_irq; 95 + 96 + irqmap = readl_relaxed(pic->base + PIC_CAUSE); 97 + chained_irq_enter(chip, desc); 98 + 99 + for_each_set_bit(irqn, &irqmap, BITS_PER_LONG) { 100 + cascade_irq = irq_find_mapping(pic->domain, irqn); 101 + generic_handle_irq(cascade_irq); 102 + } 103 + 104 + chained_irq_exit(chip, desc); 105 + } 106 + 107 + static void mvebu_pic_enable_percpu_irq(void *data) 108 + { 109 + struct mvebu_pic *pic = data; 110 + 111 + mvebu_pic_reset(pic); 112 + enable_percpu_irq(pic->parent_irq, IRQ_TYPE_NONE); 113 + } 114 + 115 + static void mvebu_pic_disable_percpu_irq(void *data) 116 + { 117 + struct mvebu_pic *pic = data; 118 + 119 + disable_percpu_irq(pic->parent_irq); 120 + } 121 + 122 + static int mvebu_pic_probe(struct platform_device *pdev) 123 + { 124 + struct device_node *node = pdev->dev.of_node; 125 + struct mvebu_pic *pic; 126 + struct irq_chip *irq_chip; 127 + struct resource *res; 128 + 129 + pic = devm_kzalloc(&pdev->dev, sizeof(struct mvebu_pic), GFP_KERNEL); 130 + if (!pic) 131 + return -ENOMEM; 132 + 133 + res = platform_get_resource(pdev, IORESOURCE_MEM, 0); 134 + pic->base = devm_ioremap_resource(&pdev->dev, res); 135 + if (IS_ERR(pic->base)) 136 + return PTR_ERR(pic->base); 137 + 138 + irq_chip = &pic->irq_chip; 139 + irq_chip->name = 
dev_name(&pdev->dev); 140 + irq_chip->irq_mask = mvebu_pic_mask_irq; 141 + irq_chip->irq_unmask = mvebu_pic_unmask_irq; 142 + irq_chip->irq_eoi = mvebu_pic_eoi_irq; 143 + 144 + pic->parent_irq = irq_of_parse_and_map(node, 0); 145 + if (pic->parent_irq <= 0) { 146 + dev_err(&pdev->dev, "Failed to parse parent interrupt\n"); 147 + return -EINVAL; 148 + } 149 + 150 + pic->domain = irq_domain_add_linear(node, PIC_MAX_IRQS, 151 + &mvebu_pic_domain_ops, pic); 152 + if (!pic->domain) { 153 + dev_err(&pdev->dev, "Failed to allocate irq domain\n"); 154 + return -ENOMEM; 155 + } 156 + 157 + irq_set_chained_handler(pic->parent_irq, mvebu_pic_handle_cascade_irq); 158 + irq_set_handler_data(pic->parent_irq, pic); 159 + 160 + on_each_cpu(mvebu_pic_enable_percpu_irq, pic, 1); 161 + 162 + platform_set_drvdata(pdev, pic); 163 + 164 + return 0; 165 + } 166 + 167 + static int mvebu_pic_remove(struct platform_device *pdev) 168 + { 169 + struct mvebu_pic *pic = platform_get_drvdata(pdev); 170 + 171 + on_each_cpu(mvebu_pic_disable_percpu_irq, pic, 1); 172 + irq_domain_remove(pic->domain); 173 + 174 + return 0; 175 + } 176 + 177 + static const struct of_device_id mvebu_pic_of_match[] = { 178 + { .compatible = "marvell,armada-8k-pic", }, 179 + {}, 180 + }; 181 + MODULE_DEVICE_TABLE(of, mvebu_pic_of_match); 182 + 183 + static struct platform_driver mvebu_pic_driver = { 184 + .probe = mvebu_pic_probe, 185 + .remove = mvebu_pic_remove, 186 + .driver = { 187 + .name = "mvebu-pic", 188 + .of_match_table = mvebu_pic_of_match, 189 + }, 190 + }; 191 + module_platform_driver(mvebu_pic_driver); 192 + 193 + MODULE_AUTHOR("Yehuda Yitschak <yehuday@marvell.com>"); 194 + MODULE_AUTHOR("Thomas Petazzoni <thomas.petazzoni@free-electrons.com>"); 195 + MODULE_LICENSE("GPL v2"); 196 + MODULE_ALIAS("platform:mvebu_pic"); 197 +
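The mvebu-pic mask/unmask handlers above are plain read-modify-write updates of one shared PIC_MASK register. The same logic, with an ordinary variable standing in for the memory-mapped register:

```c
#include <assert.h>
#include <stdint.h>

/* Read-modify-write of a shared mask register, as in mvebu_pic_mask_irq()
 * and mvebu_pic_unmask_irq(). pic_mask stands in for the MMIO register. */
static uint32_t pic_mask;

static void mask_irq(unsigned int hwirq)
{
	pic_mask |= 1u << hwirq;	/* set bit = interrupt masked */
}

static void unmask_irq(unsigned int hwirq)
{
	pic_mask &= ~(1u << hwirq);	/* clear bit = interrupt enabled */
}
```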
+201
drivers/irqchip/irq-stm32-exti.c
··· 1 + /* 2 + * Copyright (C) Maxime Coquelin 2015 3 + * Author: Maxime Coquelin <mcoquelin.stm32@gmail.com> 4 + * License terms: GNU General Public License (GPL), version 2 5 + */ 6 + 7 + #include <linux/bitops.h> 8 + #include <linux/interrupt.h> 9 + #include <linux/io.h> 10 + #include <linux/irq.h> 11 + #include <linux/irqchip.h> 12 + #include <linux/irqchip/chained_irq.h> 13 + #include <linux/irqdomain.h> 14 + #include <linux/of_address.h> 15 + #include <linux/of_irq.h> 16 + 17 + #define EXTI_IMR 0x0 18 + #define EXTI_EMR 0x4 19 + #define EXTI_RTSR 0x8 20 + #define EXTI_FTSR 0xc 21 + #define EXTI_SWIER 0x10 22 + #define EXTI_PR 0x14 23 + 24 + static void stm32_irq_handler(struct irq_desc *desc) 25 + { 26 + struct irq_domain *domain = irq_desc_get_handler_data(desc); 27 + struct irq_chip_generic *gc = domain->gc->gc[0]; 28 + struct irq_chip *chip = irq_desc_get_chip(desc); 29 + unsigned long pending; 30 + int n; 31 + 32 + chained_irq_enter(chip, desc); 33 + 34 + while ((pending = irq_reg_readl(gc, EXTI_PR))) { 35 + for_each_set_bit(n, &pending, BITS_PER_LONG) { 36 + generic_handle_irq(irq_find_mapping(domain, n)); 37 + irq_reg_writel(gc, BIT(n), EXTI_PR); 38 + } 39 + } 40 + 41 + chained_irq_exit(chip, desc); 42 + } 43 + 44 + static int stm32_irq_set_type(struct irq_data *data, unsigned int type) 45 + { 46 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(data); 47 + int pin = data->hwirq; 48 + u32 rtsr, ftsr; 49 + 50 + irq_gc_lock(gc); 51 + 52 + rtsr = irq_reg_readl(gc, EXTI_RTSR); 53 + ftsr = irq_reg_readl(gc, EXTI_FTSR); 54 + 55 + switch (type) { 56 + case IRQ_TYPE_EDGE_RISING: 57 + rtsr |= BIT(pin); 58 + ftsr &= ~BIT(pin); 59 + break; 60 + case IRQ_TYPE_EDGE_FALLING: 61 + rtsr &= ~BIT(pin); 62 + ftsr |= BIT(pin); 63 + break; 64 + case IRQ_TYPE_EDGE_BOTH: 65 + rtsr |= BIT(pin); 66 + ftsr |= BIT(pin); 67 + break; 68 + default: 69 + irq_gc_unlock(gc); 70 + return -EINVAL; 71 + } 72 + 73 + irq_reg_writel(gc, rtsr, EXTI_RTSR); 74 + irq_reg_writel(gc, 
ftsr, EXTI_FTSR); 75 + 76 + irq_gc_unlock(gc); 77 + 78 + return 0; 79 + } 80 + 81 + static int stm32_irq_set_wake(struct irq_data *data, unsigned int on) 82 + { 83 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(data); 84 + int pin = data->hwirq; 85 + u32 emr; 86 + 87 + irq_gc_lock(gc); 88 + 89 + emr = irq_reg_readl(gc, EXTI_EMR); 90 + if (on) 91 + emr |= BIT(pin); 92 + else 93 + emr &= ~BIT(pin); 94 + irq_reg_writel(gc, emr, EXTI_EMR); 95 + 96 + irq_gc_unlock(gc); 97 + 98 + return 0; 99 + } 100 + 101 + static int stm32_exti_alloc(struct irq_domain *d, unsigned int virq, 102 + unsigned int nr_irqs, void *data) 103 + { 104 + struct irq_chip_generic *gc = d->gc->gc[0]; 105 + struct irq_fwspec *fwspec = data; 106 + irq_hw_number_t hwirq; 107 + 108 + hwirq = fwspec->param[0]; 109 + 110 + irq_map_generic_chip(d, virq, hwirq); 111 + irq_domain_set_info(d, virq, hwirq, &gc->chip_types->chip, gc, 112 + handle_simple_irq, NULL, NULL); 113 + 114 + return 0; 115 + } 116 + 117 + static void stm32_exti_free(struct irq_domain *d, unsigned int virq, 118 + unsigned int nr_irqs) 119 + { 120 + struct irq_data *data = irq_domain_get_irq_data(d, virq); 121 + 122 + irq_domain_reset_irq_data(data); 123 + } 124 + 125 + struct irq_domain_ops irq_exti_domain_ops = { 126 + .map = irq_map_generic_chip, 127 + .xlate = irq_domain_xlate_onetwocell, 128 + .alloc = stm32_exti_alloc, 129 + .free = stm32_exti_free, 130 + }; 131 + 132 + static int __init stm32_exti_init(struct device_node *node, 133 + struct device_node *parent) 134 + { 135 + unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN; 136 + int nr_irqs, nr_exti, ret, i; 137 + struct irq_chip_generic *gc; 138 + struct irq_domain *domain; 139 + void *base; 140 + 141 + base = of_iomap(node, 0); 142 + if (!base) { 143 + pr_err("%s: Unable to map registers\n", node->full_name); 144 + return -ENOMEM; 145 + } 146 + 147 + /* Determine number of irqs supported */ 148 + writel_relaxed(~0UL, base + EXTI_RTSR); 149 + nr_exti = 
fls(readl_relaxed(base + EXTI_RTSR)); 150 + writel_relaxed(0, base + EXTI_RTSR); 151 + 152 + pr_info("%s: %d External IRQs detected\n", node->full_name, nr_exti); 153 + 154 + domain = irq_domain_add_linear(node, nr_exti, 155 + &irq_exti_domain_ops, NULL); 156 + if (!domain) { 157 + pr_err("%s: Could not register interrupt domain.\n", 158 + node->name); 159 + ret = -ENOMEM; 160 + goto out_unmap; 161 + } 162 + 163 + ret = irq_alloc_domain_generic_chips(domain, nr_exti, 1, "exti", 164 + handle_edge_irq, clr, 0, 0); 165 + if (ret) { 166 + pr_err("%s: Could not allocate generic interrupt chip.\n", 167 + node->full_name); 168 + goto out_free_domain; 169 + } 170 + 171 + gc = domain->gc->gc[0]; 172 + gc->reg_base = base; 173 + gc->chip_types->type = IRQ_TYPE_EDGE_BOTH; 174 + gc->chip_types->chip.name = gc->chip_types[0].chip.name; 175 + gc->chip_types->chip.irq_ack = irq_gc_ack_set_bit; 176 + gc->chip_types->chip.irq_mask = irq_gc_mask_clr_bit; 177 + gc->chip_types->chip.irq_unmask = irq_gc_mask_set_bit; 178 + gc->chip_types->chip.irq_set_type = stm32_irq_set_type; 179 + gc->chip_types->chip.irq_set_wake = stm32_irq_set_wake; 180 + gc->chip_types->regs.ack = EXTI_PR; 181 + gc->chip_types->regs.mask = EXTI_IMR; 182 + gc->chip_types->handler = handle_edge_irq; 183 + 184 + nr_irqs = of_irq_count(node); 185 + for (i = 0; i < nr_irqs; i++) { 186 + unsigned int irq = irq_of_parse_and_map(node, i); 187 + 188 + irq_set_handler_data(irq, domain); 189 + irq_set_chained_handler(irq, stm32_irq_handler); 190 + } 191 + 192 + return 0; 193 + 194 + out_free_domain: 195 + irq_domain_remove(domain); 196 + out_unmap: 197 + iounmap(base); 198 + return ret; 199 + } 200 + 201 + IRQCHIP_DECLARE(stm32_exti, "st,stm32-exti", stm32_exti_init);
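stm32_irq_set_type() above encodes the trigger choice as per-pin bits in two registers: RTSR for rising edges, FTSR for falling, with EDGE_BOTH setting the pin in each. A minimal sketch of that encoding, with plain variables in place of the EXTI registers:

```c
#include <assert.h>
#include <stdint.h>

enum edge { EDGE_RISING, EDGE_FALLING, EDGE_BOTH };

/* Stand-ins for the EXTI rising/falling trigger selection registers. */
static uint32_t rtsr, ftsr;

static int set_type(int pin, enum edge type)
{
	switch (type) {
	case EDGE_RISING:
		rtsr |= 1u << pin;
		ftsr &= ~(1u << pin);
		break;
	case EDGE_FALLING:
		rtsr &= ~(1u << pin);
		ftsr |= 1u << pin;
		break;
	case EDGE_BOTH:
		rtsr |= 1u << pin;
		ftsr |= 1u << pin;
		break;
	default:
		return -1;	/* level triggers are not supported */
	}
	return 0;
}
```

Note that switching a pin from EDGE_BOTH back to a single edge must clear the other register's bit, which is why the driver rewrites both registers on every call.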
+110 -60
drivers/pci/msi.c
··· 19 19 #include <linux/smp.h> 20 20 #include <linux/errno.h> 21 21 #include <linux/io.h> 22 + #include <linux/acpi_iort.h> 22 23 #include <linux/slab.h> 23 24 #include <linux/irqdomain.h> 24 25 #include <linux/of_irq.h> ··· 550 549 return ret; 551 550 } 552 551 553 - static struct msi_desc *msi_setup_entry(struct pci_dev *dev, int nvec) 552 + static struct msi_desc * 553 + msi_setup_entry(struct pci_dev *dev, int nvec, bool affinity) 554 554 { 555 - u16 control; 555 + struct cpumask *masks = NULL; 556 556 struct msi_desc *entry; 557 + u16 control; 558 + 559 + if (affinity) { 560 + masks = irq_create_affinity_masks(dev->irq_affinity, nvec); 561 + if (!masks) 562 + pr_err("Unable to allocate affinity masks, ignoring\n"); 563 + } 557 564 558 565 /* MSI Entry Initialization */ 559 - entry = alloc_msi_entry(&dev->dev); 566 + entry = alloc_msi_entry(&dev->dev, nvec, masks); 560 567 if (!entry) 561 - return NULL; 568 + goto out; 562 569 563 570 pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control); 564 571 ··· 577 568 entry->msi_attrib.default_irq = dev->irq; /* Save IOAPIC IRQ */ 578 569 entry->msi_attrib.multi_cap = (control & PCI_MSI_FLAGS_QMASK) >> 1; 579 570 entry->msi_attrib.multiple = ilog2(__roundup_pow_of_two(nvec)); 580 - entry->nvec_used = nvec; 581 - entry->affinity = dev->irq_affinity; 582 571 583 572 if (control & PCI_MSI_FLAGS_64BIT) 584 573 entry->mask_pos = dev->msi_cap + PCI_MSI_MASK_64; ··· 587 580 if (entry->msi_attrib.maskbit) 588 581 pci_read_config_dword(dev, entry->mask_pos, &entry->masked); 589 582 583 + out: 584 + kfree(masks); 590 585 return entry; 591 586 } 592 587 ··· 617 608 * an error, and a positive return value indicates the number of interrupts 618 609 * which could have been allocated. 
619 610 */ 620 - static int msi_capability_init(struct pci_dev *dev, int nvec) 611 + static int msi_capability_init(struct pci_dev *dev, int nvec, bool affinity) 621 612 { 622 613 struct msi_desc *entry; 623 614 int ret; ··· 625 616 626 617 pci_msi_set_enable(dev, 0); /* Disable MSI during set up */ 627 618 628 - entry = msi_setup_entry(dev, nvec); 619 + entry = msi_setup_entry(dev, nvec, affinity); 629 620 if (!entry) 630 621 return -ENOMEM; 631 622 ··· 688 679 } 689 680 690 681 static int msix_setup_entries(struct pci_dev *dev, void __iomem *base, 691 - struct msix_entry *entries, int nvec) 682 + struct msix_entry *entries, int nvec, 683 + bool affinity) 692 684 { 693 - const struct cpumask *mask = NULL; 685 + struct cpumask *curmsk, *masks = NULL; 694 686 struct msi_desc *entry; 695 - int cpu = -1, i; 687 + int ret, i; 696 688 697 - for (i = 0; i < nvec; i++) { 698 - if (dev->irq_affinity) { 699 - cpu = cpumask_next(cpu, dev->irq_affinity); 700 - if (cpu >= nr_cpu_ids) 701 - cpu = cpumask_first(dev->irq_affinity); 702 - mask = cpumask_of(cpu); 703 - } 689 + if (affinity) { 690 + masks = irq_create_affinity_masks(dev->irq_affinity, nvec); 691 + if (!masks) 692 + pr_err("Unable to allocate affinity masks, ignoring\n"); 693 + } 704 694 705 - entry = alloc_msi_entry(&dev->dev); 695 + for (i = 0, curmsk = masks; i < nvec; i++) { 696 + entry = alloc_msi_entry(&dev->dev, 1, curmsk); 706 697 if (!entry) { 707 698 if (!i) 708 699 iounmap(base); 709 700 else 710 701 free_msi_irqs(dev); 711 702 /* No enough memory. 
Don't try again */ 712 - return -ENOMEM; 703 + ret = -ENOMEM; 704 + goto out; 713 705 } 714 706 715 707 entry->msi_attrib.is_msix = 1; ··· 721 711 entry->msi_attrib.entry_nr = i; 722 712 entry->msi_attrib.default_irq = dev->irq; 723 713 entry->mask_base = base; 724 - entry->nvec_used = 1; 725 - entry->affinity = mask; 726 714 727 715 list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); 716 + if (masks) 717 + curmsk++; 728 718 } 729 - 719 + ret = 0; 720 + out: 721 + kfree(masks); 730 722 return 0; 731 723 } 732 724 ··· 757 745 * single MSI-X irq. A return of zero indicates the successful setup of 758 746 * requested MSI-X entries with allocated irqs or non-zero for otherwise. 759 747 **/ 760 - static int msix_capability_init(struct pci_dev *dev, 761 - struct msix_entry *entries, int nvec) 748 + static int msix_capability_init(struct pci_dev *dev, struct msix_entry *entries, 749 + int nvec, bool affinity) 762 750 { 763 751 int ret; 764 752 u16 control; ··· 773 761 if (!base) 774 762 return -ENOMEM; 775 763 776 - ret = msix_setup_entries(dev, base, entries, nvec); 764 + ret = msix_setup_entries(dev, base, entries, nvec, affinity); 777 765 if (ret) 778 766 return ret; 779 767 ··· 953 941 } 954 942 EXPORT_SYMBOL(pci_msix_vec_count); 955 943 956 - /** 957 - * pci_enable_msix - configure device's MSI-X capability structure 958 - * @dev: pointer to the pci_dev data structure of MSI-X device function 959 - * @entries: pointer to an array of MSI-X entries (optional) 960 - * @nvec: number of MSI-X irqs requested for allocation by device driver 961 - * 962 - * Setup the MSI-X capability structure of device function with the number 963 - * of requested irqs upon its software driver call to request for 964 - * MSI-X mode enabled on its hardware device function. A return of zero 965 - * indicates the successful configuration of MSI-X capability structure 966 - * with new allocated MSI-X irqs. A return of < 0 indicates a failure. 
967 - * Or a return of > 0 indicates that driver request is exceeding the number 968 - * of irqs or MSI-X vectors available. Driver should use the returned value to 969 - * re-send its request. 970 - **/ 971 - int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec) 944 + static int __pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, 945 + int nvec, bool affinity) 972 946 { 973 947 int nr_entries; 974 948 int i, j; ··· 986 988 dev_info(&dev->dev, "can't enable MSI-X (MSI IRQ already assigned)\n"); 987 989 return -EINVAL; 988 990 } 989 - return msix_capability_init(dev, entries, nvec); 991 + return msix_capability_init(dev, entries, nvec, affinity); 992 + } 993 + 994 + /** 995 + * pci_enable_msix - configure device's MSI-X capability structure 996 + * @dev: pointer to the pci_dev data structure of MSI-X device function 997 + * @entries: pointer to an array of MSI-X entries (optional) 998 + * @nvec: number of MSI-X irqs requested for allocation by device driver 999 + * 1000 + * Setup the MSI-X capability structure of device function with the number 1001 + * of requested irqs upon its software driver call to request for 1002 + * MSI-X mode enabled on its hardware device function. A return of zero 1003 + * indicates the successful configuration of MSI-X capability structure 1004 + * with new allocated MSI-X irqs. A return of < 0 indicates a failure. 1005 + * Or a return of > 0 indicates that driver request is exceeding the number 1006 + * of irqs or MSI-X vectors available. Driver should use the returned value to 1007 + * re-send its request. 
1008 + **/ 1009 + int pci_enable_msix(struct pci_dev *dev, struct msix_entry *entries, int nvec) 1010 + { 1011 + return __pci_enable_msix(dev, entries, nvec, false); 990 1012 } 991 1013 EXPORT_SYMBOL(pci_enable_msix); 992 1014 ··· 1059 1041 static int __pci_enable_msi_range(struct pci_dev *dev, int minvec, int maxvec, 1060 1042 unsigned int flags) 1061 1043 { 1044 + bool affinity = flags & PCI_IRQ_AFFINITY; 1062 1045 int nvec; 1063 1046 int rc; 1064 1047 ··· 1088 1069 nvec = maxvec; 1089 1070 1090 1071 for (;;) { 1091 - if (flags & PCI_IRQ_AFFINITY) { 1092 - dev->irq_affinity = irq_create_affinity_mask(&nvec); 1072 + if (affinity) { 1073 + nvec = irq_calc_affinity_vectors(dev->irq_affinity, 1074 + nvec); 1093 1075 if (nvec < minvec) 1094 1076 return -ENOSPC; 1095 1077 } 1096 1078 1097 - rc = msi_capability_init(dev, nvec); 1079 + rc = msi_capability_init(dev, nvec, affinity); 1098 1080 if (rc == 0) 1099 1081 return nvec; 1100 - 1101 - kfree(dev->irq_affinity); 1102 - dev->irq_affinity = NULL; 1103 1082 1104 1083 if (rc < 0) 1105 1084 return rc; ··· 1130 1113 struct msix_entry *entries, int minvec, int maxvec, 1131 1114 unsigned int flags) 1132 1115 { 1133 - int nvec = maxvec; 1134 - int rc; 1116 + bool affinity = flags & PCI_IRQ_AFFINITY; 1117 + int rc, nvec = maxvec; 1135 1118 1136 1119 if (maxvec < minvec) 1137 1120 return -ERANGE; 1138 1121 1139 1122 for (;;) { 1140 - if (flags & PCI_IRQ_AFFINITY) { 1141 - dev->irq_affinity = irq_create_affinity_mask(&nvec); 1123 + if (affinity) { 1124 + nvec = irq_calc_affinity_vectors(dev->irq_affinity, 1125 + nvec); 1142 1126 if (nvec < minvec) 1143 1127 return -ENOSPC; 1144 1128 } 1145 1129 1146 - rc = pci_enable_msix(dev, entries, nvec); 1130 + rc = __pci_enable_msix(dev, entries, nvec, affinity); 1147 1131 if (rc == 0) 1148 1132 return nvec; 1149 - 1150 - kfree(dev->irq_affinity); 1151 - dev->irq_affinity = NULL; 1152 1133 1153 1134 if (rc < 0) 1154 1135 return rc; ··· 1270 1255 return dev->irq + nr; 1271 1256 } 1272 1257 
EXPORT_SYMBOL(pci_irq_vector); 1258 + 1259 + /** 1260 + * pci_irq_get_affinity - return the affinity of a particular msi vector 1261 + * @dev: PCI device to operate on 1262 + * @nr: device-relative interrupt vector index (0-based). 1263 + */ 1264 + const struct cpumask *pci_irq_get_affinity(struct pci_dev *dev, int nr) 1265 + { 1266 + if (dev->msix_enabled) { 1267 + struct msi_desc *entry; 1268 + int i = 0; 1269 + 1270 + for_each_pci_msi_entry(entry, dev) { 1271 + if (i == nr) 1272 + return entry->affinity; 1273 + i++; 1274 + } 1275 + WARN_ON_ONCE(1); 1276 + return NULL; 1277 + } else if (dev->msi_enabled) { 1278 + struct msi_desc *entry = first_pci_msi_entry(dev); 1279 + 1280 + if (WARN_ON_ONCE(!entry || nr >= entry->nvec_used)) 1281 + return NULL; 1282 + 1283 + return &entry->affinity[nr]; 1284 + } else { 1285 + return cpu_possible_mask; 1286 + } 1287 + } 1288 + EXPORT_SYMBOL(pci_irq_get_affinity); 1273 1289 1274 1290 struct pci_dev *msi_desc_to_pci_dev(struct msi_desc *desc) 1275 1291 { ··· 1548 1502 pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); 1549 1503 1550 1504 of_node = irq_domain_get_of_node(domain); 1551 - if (of_node) 1552 - rid = of_msi_map_rid(&pdev->dev, of_node, rid); 1505 + rid = of_node ? of_msi_map_rid(&pdev->dev, of_node, rid) : 1506 + iort_msi_map_rid(&pdev->dev, rid); 1553 1507 1554 1508 return rid; 1555 1509 } ··· 1565 1519 */ 1566 1520 struct irq_domain *pci_msi_get_device_domain(struct pci_dev *pdev) 1567 1521 { 1522 + struct irq_domain *dom; 1568 1523 u32 rid = 0; 1569 1524 1570 1525 pci_for_each_dma_alias(pdev, get_msi_id_cb, &rid); 1571 - return of_msi_map_get_device_domain(&pdev->dev, rid); 1526 + dom = of_msi_map_get_device_domain(&pdev->dev, rid); 1527 + if (!dom) 1528 + dom = iort_get_device_domain(&pdev->dev, rid); 1529 + return dom; 1572 1530 } 1573 1531 #endif /* CONFIG_PCI_MSI_IRQ_DOMAIN */
+1 -2
drivers/staging/fsl-mc/bus/mc-msi.c
··· 213 213 struct msi_desc *msi_desc; 214 214 215 215 for (i = 0; i < irq_count; i++) { 216 - msi_desc = alloc_msi_entry(dev); 216 + msi_desc = alloc_msi_entry(dev, 1, NULL); 217 217 if (!msi_desc) { 218 218 dev_err(dev, "Failed to allocate msi entry\n"); 219 219 error = -ENOMEM; ··· 221 221 } 222 222 223 223 msi_desc->fsl_mc.msi_index = i; 224 - msi_desc->nvec_used = 1; 225 224 INIT_LIST_HEAD(&msi_desc->list); 226 225 list_add_tail(&msi_desc->list, dev_to_msi_list(dev)); 227 226 }
+42
include/linux/acpi_iort.h
··· 1 + /* 2 + * Copyright (C) 2016, Semihalf 3 + * Author: Tomasz Nowicki <tn@semihalf.com> 4 + * 5 + * This program is free software; you can redistribute it and/or modify it 6 + * under the terms and conditions of the GNU General Public License, 7 + * version 2, as published by the Free Software Foundation. 8 + * 9 + * This program is distributed in the hope it will be useful, but WITHOUT 10 + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or 11 + * FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for 12 + * more details. 13 + * 14 + * You should have received a copy of the GNU General Public License along with 15 + * this program; if not, write to the Free Software Foundation, Inc., 59 Temple 16 + * Place - Suite 330, Boston, MA 02111-1307 USA. 17 + */ 18 + 19 + #ifndef __ACPI_IORT_H__ 20 + #define __ACPI_IORT_H__ 21 + 22 + #include <linux/acpi.h> 23 + #include <linux/fwnode.h> 24 + #include <linux/irqdomain.h> 25 + 26 + int iort_register_domain_token(int trans_id, struct fwnode_handle *fw_node); 27 + void iort_deregister_domain_token(int trans_id); 28 + struct fwnode_handle *iort_find_domain_token(int trans_id); 29 + #ifdef CONFIG_ACPI_IORT 30 + void acpi_iort_init(void); 31 + u32 iort_msi_map_rid(struct device *dev, u32 req_id); 32 + struct irq_domain *iort_get_device_domain(struct device *dev, u32 req_id); 33 + #else 34 + static inline void acpi_iort_init(void) { } 35 + static inline u32 iort_msi_map_rid(struct device *dev, u32 req_id) 36 + { return req_id; } 37 + static inline struct irq_domain *iort_get_device_domain(struct device *dev, 38 + u32 req_id) 39 + { return NULL; } 40 + #endif 41 + 42 + #endif /* __ACPI_IORT_H__ */
+11 -3
include/linux/interrupt.h
··· 278 278 extern int 279 279 irq_set_affinity_notifier(unsigned int irq, struct irq_affinity_notify *notify); 280 280 281 - struct cpumask *irq_create_affinity_mask(unsigned int *nr_vecs); 281 + struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity, int nvec); 282 + int irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec); 282 283 283 284 #else /* CONFIG_SMP */ 284 285 ··· 312 311 return 0; 313 312 } 314 313 315 - static inline struct cpumask *irq_create_affinity_mask(unsigned int *nr_vecs) 314 + static inline struct cpumask * 315 + irq_create_affinity_masks(const struct cpumask *affinity, int nvec) 316 316 { 317 - *nr_vecs = 1; 318 317 return NULL; 319 318 } 319 + 320 + static inline int 321 + irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec) 322 + { 323 + return maxvec; 324 + } 325 + 320 326 #endif /* CONFIG_SMP */ 321 327 322 328 /*
+13 -5
include/linux/irq.h
··· 916 916 unsigned int clr, unsigned int set); 917 917 918 918 struct irq_chip_generic *irq_get_domain_generic_chip(struct irq_domain *d, unsigned int hw_irq); 919 - int irq_alloc_domain_generic_chips(struct irq_domain *d, int irqs_per_chip, 920 - int num_ct, const char *name, 921 - irq_flow_handler_t handler, 922 - unsigned int clr, unsigned int set, 923 - enum irq_gc_flags flags); 924 919 920 + int __irq_alloc_domain_generic_chips(struct irq_domain *d, int irqs_per_chip, 921 + int num_ct, const char *name, 922 + irq_flow_handler_t handler, 923 + unsigned int clr, unsigned int set, 924 + enum irq_gc_flags flags); 925 + 926 + #define irq_alloc_domain_generic_chips(d, irqs_per_chip, num_ct, name, \ 927 + handler, clr, set, flags) \ 928 + ({ \ 929 + MAYBE_BUILD_BUG_ON(irqs_per_chip > 32); \ 930 + __irq_alloc_domain_generic_chips(d, irqs_per_chip, num_ct, name,\ 931 + handler, clr, set, flags); \ 932 + }) 925 933 926 934 static inline struct irq_chip_type *irq_data_get_chip_type(struct irq_data *d) 927 935 {
+2 -2
include/linux/irqchip/arm-gic-v3.h
··· 430 430 }; 431 431 432 432 struct irq_domain; 433 - struct device_node; 433 + struct fwnode_handle; 434 434 int its_cpu_init(void); 435 - int its_init(struct device_node *node, struct rdists *rdists, 435 + int its_init(struct fwnode_handle *handle, struct rdists *rdists, 436 436 struct irq_domain *domain); 437 437 438 438 static inline bool gic_enable_sre(void)
+3
include/linux/irqdesc.h
··· 2 2 #define _LINUX_IRQDESC_H 3 3 4 4 #include <linux/rcupdate.h> 5 + #include <linux/kobject.h> 5 6 6 7 /* 7 8 * Core internal functions to deal with irq descriptors ··· 44 43 * @force_resume_depth: number of irqactions on a irq descriptor with 45 44 * IRQF_FORCE_RESUME set 46 45 * @rcu: rcu head for delayed free 46 + * @kobj: kobject used to represent this struct in sysfs 47 47 * @dir: /proc/irq/ procfs entry 48 48 * @name: flow handler name for /proc/interrupts output 49 49 */ ··· 90 88 #endif 91 89 #ifdef CONFIG_SPARSE_IRQ 92 90 struct rcu_head rcu; 91 + struct kobject kobj; 93 92 #endif 94 93 int parent_irq; 95 94 struct module *owner;
+3 -2
include/linux/msi.h
··· 68 68 unsigned int nvec_used; 69 69 struct device *dev; 70 70 struct msi_msg msg; 71 - const struct cpumask *affinity; 71 + struct cpumask *affinity; 72 72 73 73 union { 74 74 /* PCI MSI/X specific data */ ··· 123 123 } 124 124 #endif /* CONFIG_PCI_MSI */ 125 125 126 - struct msi_desc *alloc_msi_entry(struct device *dev); 126 + struct msi_desc *alloc_msi_entry(struct device *dev, int nvec, 127 + const struct cpumask *affinity); 127 128 void free_msi_entry(struct msi_desc *entry); 128 129 void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg); 129 130 void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg);
+6
include/linux/pci.h
··· 1301 1301 unsigned int max_vecs, unsigned int flags); 1302 1302 void pci_free_irq_vectors(struct pci_dev *dev); 1303 1303 int pci_irq_vector(struct pci_dev *dev, unsigned int nr); 1304 + const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev, int vec); 1304 1305 1305 1306 #else 1306 1307 static inline int pci_msi_vec_count(struct pci_dev *dev) { return -ENOSYS; } ··· 1343 1342 if (WARN_ON_ONCE(nr > 0)) 1344 1343 return -EINVAL; 1345 1344 return dev->irq; 1345 + } 1346 + static inline const struct cpumask *pci_irq_get_affinity(struct pci_dev *pdev, 1347 + int vec) 1348 + { 1349 + return cpu_possible_mask; 1346 1350 } 1347 1351 #endif 1348 1352
+132 -41
kernel/irq/affinity.c
··· 4 4 #include <linux/slab.h> 5 5 #include <linux/cpu.h> 6 6 7 - static int get_first_sibling(unsigned int cpu) 7 + static void irq_spread_init_one(struct cpumask *irqmsk, struct cpumask *nmsk, 8 + int cpus_per_vec) 8 9 { 9 - unsigned int ret; 10 + const struct cpumask *siblmsk; 11 + int cpu, sibl; 10 12 11 - ret = cpumask_first(topology_sibling_cpumask(cpu)); 12 - if (ret < nr_cpu_ids) 13 - return ret; 14 - return cpu; 13 + for ( ; cpus_per_vec > 0; ) { 14 + cpu = cpumask_first(nmsk); 15 + 16 + /* Should not happen, but I'm too lazy to think about it */ 17 + if (cpu >= nr_cpu_ids) 18 + return; 19 + 20 + cpumask_clear_cpu(cpu, nmsk); 21 + cpumask_set_cpu(cpu, irqmsk); 22 + cpus_per_vec--; 23 + 24 + /* If the cpu has siblings, use them first */ 25 + siblmsk = topology_sibling_cpumask(cpu); 26 + for (sibl = -1; cpus_per_vec > 0; ) { 27 + sibl = cpumask_next(sibl, siblmsk); 28 + if (sibl >= nr_cpu_ids) 29 + break; 30 + if (!cpumask_test_and_clear_cpu(sibl, nmsk)) 31 + continue; 32 + cpumask_set_cpu(sibl, irqmsk); 33 + cpus_per_vec--; 34 + } 35 + } 15 36 } 16 37 17 - /* 18 - * Take a map of online CPUs and the number of available interrupt vectors 19 - * and generate an output cpumask suitable for spreading MSI/MSI-X vectors 20 - * so that they are distributed as good as possible around the CPUs. If 21 - * more vectors than CPUs are available we'll map one to each CPU, 22 - * otherwise we map one to the first sibling of each socket. 23 - * 24 - * If there are more vectors than CPUs we will still only have one bit 25 - * set per CPU, but interrupt code will keep on assigning the vectors from 26 - * the start of the bitmap until we run out of vectors. 
27 - */ 28 - struct cpumask *irq_create_affinity_mask(unsigned int *nr_vecs) 38 + static int get_nodes_in_cpumask(const struct cpumask *mask, nodemask_t *nodemsk) 29 39 { 30 - struct cpumask *affinity_mask; 31 - unsigned int max_vecs = *nr_vecs; 40 + int n, nodes; 32 41 33 - if (max_vecs == 1) 34 - return NULL; 35 - 36 - affinity_mask = kzalloc(cpumask_size(), GFP_KERNEL); 37 - if (!affinity_mask) { 38 - *nr_vecs = 1; 39 - return NULL; 42 + /* Calculate the number of nodes in the supplied affinity mask */ 43 + for (n = 0, nodes = 0; n < num_online_nodes(); n++) { 44 + if (cpumask_intersects(mask, cpumask_of_node(n))) { 45 + node_set(n, *nodemsk); 46 + nodes++; 47 + } 40 48 } 49 + return nodes; 50 + } 41 51 52 + /** 53 + * irq_create_affinity_masks - Create affinity masks for multiqueue spreading 54 + * @affinity: The affinity mask to spread. If NULL cpu_online_mask 55 + * is used 56 + * @nvecs: The number of vectors 57 + * 58 + * Returns the masks pointer or NULL if allocation failed. 59 + */ 60 + struct cpumask *irq_create_affinity_masks(const struct cpumask *affinity, 61 + int nvec) 62 + { 63 + int n, nodes, vecs_per_node, cpus_per_vec, extra_vecs, curvec = 0; 64 + nodemask_t nodemsk = NODE_MASK_NONE; 65 + struct cpumask *masks; 66 + cpumask_var_t nmsk; 67 + 68 + if (!zalloc_cpumask_var(&nmsk, GFP_KERNEL)) 69 + return NULL; 70 + 71 + masks = kzalloc(nvec * sizeof(*masks), GFP_KERNEL); 72 + if (!masks) 73 + goto out; 74 + 75 + /* Stabilize the cpumasks */ 42 76 get_online_cpus(); 43 - if (max_vecs >= num_online_cpus()) { 44 - cpumask_copy(affinity_mask, cpu_online_mask); 45 - *nr_vecs = num_online_cpus(); 46 - } else { 47 - unsigned int vecs = 0, cpu; 77 + /* If the supplied affinity mask is NULL, use cpu online mask */ 78 + if (!affinity) 79 + affinity = cpu_online_mask; 48 80 49 - for_each_online_cpu(cpu) { 50 - if (cpu == get_first_sibling(cpu)) { 51 - cpumask_set_cpu(cpu, affinity_mask); 52 - vecs++; 53 - } 81 + nodes = get_nodes_in_cpumask(affinity, 
&nodemsk); 54 82 55 - if (--max_vecs == 0) 83 + /* 84 + * If the number of nodes in the mask is less than or equal to the 85 + * number of vectors we just spread the vectors across the nodes. 86 + */ 87 + if (nvec <= nodes) { 88 + for_each_node_mask(n, nodemsk) { 89 + cpumask_copy(masks + curvec, cpumask_of_node(n)); 90 + if (++curvec == nvec) 56 91 break; 57 92 } 58 - *nr_vecs = vecs; 93 + goto outonl; 59 94 } 60 - put_online_cpus(); 61 95 62 - return affinity_mask; 96 + /* Spread the vectors per node */ 97 + vecs_per_node = nvec / nodes; 98 + /* Account for rounding errors */ 99 + extra_vecs = nvec - (nodes * vecs_per_node); 100 + 101 + for_each_node_mask(n, nodemsk) { 102 + int ncpus, v, vecs_to_assign = vecs_per_node; 103 + 104 + /* Get the cpus on this node which are in the mask */ 105 + cpumask_and(nmsk, affinity, cpumask_of_node(n)); 106 + 107 + /* Calculate the number of cpus per vector */ 108 + ncpus = cpumask_weight(nmsk); 109 + 110 + for (v = 0; curvec < nvec && v < vecs_to_assign; curvec++, v++) { 111 + cpus_per_vec = ncpus / vecs_to_assign; 112 + 113 + /* Account for extra vectors to compensate rounding errors */ 114 + if (extra_vecs) { 115 + cpus_per_vec++; 116 + if (!--extra_vecs) 117 + vecs_per_node++; 118 + } 119 + irq_spread_init_one(masks + curvec, nmsk, cpus_per_vec); 120 + } 121 + 122 + if (curvec >= nvec) 123 + break; 124 + } 125 + 126 + outonl: 127 + put_online_cpus(); 128 + out: 129 + free_cpumask_var(nmsk); 130 + return masks; 131 + } 132 + 133 + /** 134 + * irq_calc_affinity_vectors - Calculate the optimal number of vectors for a given affinity mask 135 + * @affinity: The affinity mask to spread. 
If NULL cpu_online_mask 136 + * is used 137 + * @maxvec: The maximum number of vectors available 138 + */ 139 + int irq_calc_affinity_vectors(const struct cpumask *affinity, int maxvec) 140 + { 141 + int cpus, ret; 142 + 143 + /* Stabilize the cpumasks */ 144 + get_online_cpus(); 145 + /* If the supplied affinity mask is NULL, use cpu online mask */ 146 + if (!affinity) 147 + affinity = cpu_online_mask; 148 + 149 + cpus = cpumask_weight(affinity); 150 + ret = (cpus < maxvec) ? cpus : maxvec; 151 + 152 + put_online_cpus(); 153 + return ret; 63 154 }
+15 -6
kernel/irq/chip.c
··· 76 76 if (!desc) 77 77 return -EINVAL; 78 78 79 - type &= IRQ_TYPE_SENSE_MASK; 80 79 ret = __irq_set_trigger(desc, type); 81 80 irq_put_desc_busunlock(desc, flags); 82 81 return ret; ··· 755 756 { 756 757 struct irq_chip *chip = irq_desc_get_chip(desc); 757 758 struct irqaction *action = desc->action; 758 - void *dev_id = raw_cpu_ptr(action->percpu_dev_id); 759 759 unsigned int irq = irq_desc_get_irq(desc); 760 760 irqreturn_t res; 761 761 ··· 763 765 if (chip->irq_ack) 764 766 chip->irq_ack(&desc->irq_data); 765 767 766 - trace_irq_handler_entry(irq, action); 767 - res = action->handler(irq, dev_id); 768 - trace_irq_handler_exit(irq, action, res); 768 + if (likely(action)) { 769 + trace_irq_handler_entry(irq, action); 770 + res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id)); 771 + trace_irq_handler_exit(irq, action, res); 772 + } else { 773 + unsigned int cpu = smp_processor_id(); 774 + bool enabled = cpumask_test_cpu(cpu, desc->percpu_enabled); 775 + 776 + if (enabled) 777 + irq_percpu_disable(desc, cpu); 778 + 779 + pr_err_once("Spurious%s percpu IRQ%u on CPU%u\n", 780 + enabled ? " and unmasked" : "", irq, cpu); 781 + } 769 782 770 783 if (chip->irq_eoi) 771 784 chip->irq_eoi(&desc->irq_data); 772 785 } 773 786 774 - void 787 + static void 775 788 __irq_do_set_handler(struct irq_desc *desc, irq_flow_handler_t handle, 776 789 int is_chained, const char *name) 777 790 {
+48 -24
kernel/irq/generic-chip.c
··· 260 260 } 261 261 262 262 /** 263 - * irq_alloc_domain_generic_chip - Allocate generic chips for an irq domain 263 + * __irq_alloc_domain_generic_chip - Allocate generic chips for an irq domain 264 264 * @d: irq domain for which to allocate chips 265 - * @irqs_per_chip: Number of interrupts each chip handles 265 + * @irqs_per_chip: Number of interrupts each chip handles (max 32) 266 266 * @num_ct: Number of irq_chip_type instances associated with this 267 267 * @name: Name of the irq chip 268 268 * @handler: Default flow handler associated with these chips ··· 270 270 * @set: IRQ_* bits to set in the mapping function 271 271 * @gcflags: Generic chip specific setup flags 272 272 */ 273 - int irq_alloc_domain_generic_chips(struct irq_domain *d, int irqs_per_chip, 274 - int num_ct, const char *name, 275 - irq_flow_handler_t handler, 276 - unsigned int clr, unsigned int set, 277 - enum irq_gc_flags gcflags) 273 + int __irq_alloc_domain_generic_chips(struct irq_domain *d, int irqs_per_chip, 274 + int num_ct, const char *name, 275 + irq_flow_handler_t handler, 276 + unsigned int clr, unsigned int set, 277 + enum irq_gc_flags gcflags) 278 278 { 279 279 struct irq_domain_chip_generic *dgc; 280 280 struct irq_chip_generic *gc; ··· 326 326 d->name = name; 327 327 return 0; 328 328 } 329 - EXPORT_SYMBOL_GPL(irq_alloc_domain_generic_chips); 329 + EXPORT_SYMBOL_GPL(__irq_alloc_domain_generic_chips); 330 + 331 + static struct irq_chip_generic * 332 + __irq_get_domain_generic_chip(struct irq_domain *d, unsigned int hw_irq) 333 + { 334 + struct irq_domain_chip_generic *dgc = d->gc; 335 + int idx; 336 + 337 + if (!dgc) 338 + return ERR_PTR(-ENODEV); 339 + idx = hw_irq / dgc->irqs_per_chip; 340 + if (idx >= dgc->num_chips) 341 + return ERR_PTR(-EINVAL); 342 + return dgc->gc[idx]; 343 + } 330 344 331 345 /** 332 346 * irq_get_domain_generic_chip - Get a pointer to the generic chip of a hw_irq ··· 350 336 struct irq_chip_generic * 351 337 irq_get_domain_generic_chip(struct 
irq_domain *d, unsigned int hw_irq) 352 338 { 353 - struct irq_domain_chip_generic *dgc = d->gc; 354 - int idx; 339 + struct irq_chip_generic *gc = __irq_get_domain_generic_chip(d, hw_irq); 355 340 356 - if (!dgc) 357 - return NULL; 358 - idx = hw_irq / dgc->irqs_per_chip; 359 - if (idx >= dgc->num_chips) 360 - return NULL; 361 - return dgc->gc[idx]; 341 + return !IS_ERR(gc) ? gc : NULL; 362 342 } 363 343 EXPORT_SYMBOL_GPL(irq_get_domain_generic_chip); 364 344 ··· 376 368 unsigned long flags; 377 369 int idx; 378 370 379 - if (!d->gc) 380 - return -ENODEV; 381 - 382 - idx = hw_irq / dgc->irqs_per_chip; 383 - if (idx >= dgc->num_chips) 384 - return -EINVAL; 385 - gc = dgc->gc[idx]; 371 + gc = __irq_get_domain_generic_chip(d, hw_irq); 372 + if (IS_ERR(gc)) 373 + return PTR_ERR(gc); 386 374 387 375 idx = hw_irq % dgc->irqs_per_chip; 388 376 ··· 413 409 irq_modify_status(virq, dgc->irq_flags_to_clear, dgc->irq_flags_to_set); 414 410 return 0; 415 411 } 416 - EXPORT_SYMBOL_GPL(irq_map_generic_chip); 412 + 413 + static void irq_unmap_generic_chip(struct irq_domain *d, unsigned int virq) 414 + { 415 + struct irq_data *data = irq_domain_get_irq_data(d, virq); 416 + struct irq_domain_chip_generic *dgc = d->gc; 417 + unsigned int hw_irq = data->hwirq; 418 + struct irq_chip_generic *gc; 419 + int irq_idx; 420 + 421 + gc = irq_get_domain_generic_chip(d, hw_irq); 422 + if (!gc) 423 + return; 424 + 425 + irq_idx = hw_irq % dgc->irqs_per_chip; 426 + 427 + clear_bit(irq_idx, &gc->installed); 428 + irq_domain_set_info(d, virq, hw_irq, &no_irq_chip, NULL, NULL, NULL, 429 + NULL); 430 + 431 + } 417 432 418 433 struct irq_domain_ops irq_generic_chip_ops = { 419 434 .map = irq_map_generic_chip, 435 + .unmap = irq_unmap_generic_chip, 420 436 .xlate = irq_domain_xlate_onetwocell, 421 437 }; 422 438 EXPORT_SYMBOL_GPL(irq_generic_chip_ops);
+206 -18
kernel/irq/irqdesc.c
··· 15 15 #include <linux/radix-tree.h> 16 16 #include <linux/bitmap.h> 17 17 #include <linux/irqdomain.h> 18 + #include <linux/sysfs.h> 18 19 19 20 #include "internals.h" 20 21 ··· 124 123 125 124 #ifdef CONFIG_SPARSE_IRQ 126 125 126 + static void irq_kobj_release(struct kobject *kobj); 127 + 128 + #ifdef CONFIG_SYSFS 129 + static struct kobject *irq_kobj_base; 130 + 131 + #define IRQ_ATTR_RO(_name) \ 132 + static struct kobj_attribute _name##_attr = __ATTR_RO(_name) 133 + 134 + static ssize_t per_cpu_count_show(struct kobject *kobj, 135 + struct kobj_attribute *attr, char *buf) 136 + { 137 + struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj); 138 + int cpu, irq = desc->irq_data.irq; 139 + ssize_t ret = 0; 140 + char *p = ""; 141 + 142 + for_each_possible_cpu(cpu) { 143 + unsigned int c = kstat_irqs_cpu(irq, cpu); 144 + 145 + ret += scnprintf(buf + ret, PAGE_SIZE - ret, "%s%u", p, c); 146 + p = ","; 147 + } 148 + 149 + ret += scnprintf(buf + ret, PAGE_SIZE - ret, "\n"); 150 + return ret; 151 + } 152 + IRQ_ATTR_RO(per_cpu_count); 153 + 154 + static ssize_t chip_name_show(struct kobject *kobj, 155 + struct kobj_attribute *attr, char *buf) 156 + { 157 + struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj); 158 + ssize_t ret = 0; 159 + 160 + raw_spin_lock_irq(&desc->lock); 161 + if (desc->irq_data.chip && desc->irq_data.chip->name) { 162 + ret = scnprintf(buf, PAGE_SIZE, "%s\n", 163 + desc->irq_data.chip->name); 164 + } 165 + raw_spin_unlock_irq(&desc->lock); 166 + 167 + return ret; 168 + } 169 + IRQ_ATTR_RO(chip_name); 170 + 171 + static ssize_t hwirq_show(struct kobject *kobj, 172 + struct kobj_attribute *attr, char *buf) 173 + { 174 + struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj); 175 + ssize_t ret = 0; 176 + 177 + raw_spin_lock_irq(&desc->lock); 178 + if (desc->irq_data.domain) 179 + ret = sprintf(buf, "%d\n", (int)desc->irq_data.hwirq); 180 + raw_spin_unlock_irq(&desc->lock); 181 + 182 + return ret; 183 + } 184 + 
IRQ_ATTR_RO(hwirq); 185 + 186 + static ssize_t type_show(struct kobject *kobj, 187 + struct kobj_attribute *attr, char *buf) 188 + { 189 + struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj); 190 + ssize_t ret = 0; 191 + 192 + raw_spin_lock_irq(&desc->lock); 193 + ret = sprintf(buf, "%s\n", 194 + irqd_is_level_type(&desc->irq_data) ? "level" : "edge"); 195 + raw_spin_unlock_irq(&desc->lock); 196 + 197 + return ret; 198 + 199 + } 200 + IRQ_ATTR_RO(type); 201 + 202 + static ssize_t name_show(struct kobject *kobj, 203 + struct kobj_attribute *attr, char *buf) 204 + { 205 + struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj); 206 + ssize_t ret = 0; 207 + 208 + raw_spin_lock_irq(&desc->lock); 209 + if (desc->name) 210 + ret = scnprintf(buf, PAGE_SIZE, "%s\n", desc->name); 211 + raw_spin_unlock_irq(&desc->lock); 212 + 213 + return ret; 214 + } 215 + IRQ_ATTR_RO(name); 216 + 217 + static ssize_t actions_show(struct kobject *kobj, 218 + struct kobj_attribute *attr, char *buf) 219 + { 220 + struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj); 221 + struct irqaction *action; 222 + ssize_t ret = 0; 223 + char *p = ""; 224 + 225 + raw_spin_lock_irq(&desc->lock); 226 + for (action = desc->action; action != NULL; action = action->next) { 227 + ret += scnprintf(buf + ret, PAGE_SIZE - ret, "%s%s", 228 + p, action->name); 229 + p = ","; 230 + } 231 + raw_spin_unlock_irq(&desc->lock); 232 + 233 + if (ret) 234 + ret += scnprintf(buf + ret, PAGE_SIZE - ret, "\n"); 235 + 236 + return ret; 237 + } 238 + IRQ_ATTR_RO(actions); 239 + 240 + static struct attribute *irq_attrs[] = { 241 + &per_cpu_count_attr.attr, 242 + &chip_name_attr.attr, 243 + &hwirq_attr.attr, 244 + &type_attr.attr, 245 + &name_attr.attr, 246 + &actions_attr.attr, 247 + NULL 248 + }; 249 + 250 + static struct kobj_type irq_kobj_type = { 251 + .release = irq_kobj_release, 252 + .sysfs_ops = &kobj_sysfs_ops, 253 + .default_attrs = irq_attrs, 254 + }; 255 + 256 + static void 
irq_sysfs_add(int irq, struct irq_desc *desc) 257 + { 258 + if (irq_kobj_base) { 259 + /* 260 + * Continue even in case of failure as this is nothing 261 + * crucial. 262 + */ 263 + if (kobject_add(&desc->kobj, irq_kobj_base, "%d", irq)) 264 + pr_warn("Failed to add kobject for irq %d\n", irq); 265 + } 266 + } 267 + 268 + static int __init irq_sysfs_init(void) 269 + { 270 + struct irq_desc *desc; 271 + int irq; 272 + 273 + /* Prevent concurrent irq alloc/free */ 274 + irq_lock_sparse(); 275 + 276 + irq_kobj_base = kobject_create_and_add("irq", kernel_kobj); 277 + if (!irq_kobj_base) { 278 + irq_unlock_sparse(); 279 + return -ENOMEM; 280 + } 281 + 282 + /* Add the already allocated interrupts */ 283 + for_each_irq_desc(irq, desc) 284 + irq_sysfs_add(irq, desc); 285 + irq_unlock_sparse(); 286 + 287 + return 0; 288 + } 289 + postcore_initcall(irq_sysfs_init); 290 + 291 + #else /* !CONFIG_SYSFS */ 292 + 293 + static struct kobj_type irq_kobj_type = { 294 + .release = irq_kobj_release, 295 + }; 296 + 297 + static void irq_sysfs_add(int irq, struct irq_desc *desc) {} 298 + 299 + #endif /* CONFIG_SYSFS */ 300 + 127 301 static RADIX_TREE(irq_desc_tree, GFP_KERNEL); 128 302 129 303 static void irq_insert_desc(unsigned int irq, struct irq_desc *desc) ··· 363 187 364 188 desc_set_defaults(irq, desc, node, affinity, owner); 365 189 irqd_set(&desc->irq_data, flags); 190 + kobject_init(&desc->kobj, &irq_kobj_type); 366 191 367 192 return desc; 368 193 ··· 374 197 return NULL; 375 198 } 376 199 377 - static void delayed_free_desc(struct rcu_head *rhp) 200 + static void irq_kobj_release(struct kobject *kobj) 378 201 { 379 - struct irq_desc *desc = container_of(rhp, struct irq_desc, rcu); 202 + struct irq_desc *desc = container_of(kobj, struct irq_desc, kobj); 380 203 381 204 free_masks(desc); 382 205 free_percpu(desc->kstat_irqs); 383 206 kfree(desc); 207 + } 208 + 209 + static void delayed_free_desc(struct rcu_head *rhp) 210 + { 211 + struct irq_desc *desc = container_of(rhp, 
struct irq_desc, rcu); 212 + 213 + kobject_put(&desc->kobj); 384 214 } 385 215 386 216 static void free_desc(unsigned int irq) ··· 401 217 * kstat_irq_usr(). Once we deleted the descriptor from the 402 218 * sparse tree we can free it. Access in proc will fail to 403 219 * lookup the descriptor. 220 + * 221 + * The sysfs entry must be serialized against a concurrent 222 + * irq_sysfs_init() as well. 404 223 */ 405 224 mutex_lock(&sparse_irq_lock); 225 + kobject_del(&desc->kobj); 406 226 delete_irq_desc(irq); 407 227 mutex_unlock(&sparse_irq_lock); 408 228 ··· 424 236 const struct cpumask *mask = NULL; 425 237 struct irq_desc *desc; 426 238 unsigned int flags; 427 - int i, cpu = -1; 239 + int i; 428 240 429 - if (affinity && cpumask_empty(affinity)) 430 - return -EINVAL; 241 + /* Validate affinity mask(s) */ 242 + if (affinity) { 243 + for (i = 0, mask = affinity; i < cnt; i++, mask++) { 244 + if (cpumask_empty(mask)) 245 + return -EINVAL; 246 + } 247 + } 431 248 432 249 flags = affinity ? IRQD_AFFINITY_MANAGED : 0; 250 + mask = NULL; 433 251 434 252 for (i = 0; i < cnt; i++) { 435 253 if (affinity) { 436 - cpu = cpumask_next(cpu, affinity); 437 - if (cpu >= nr_cpu_ids) 438 - cpu = cpumask_first(affinity); 439 - node = cpu_to_node(cpu); 440 - 441 - /* 442 - * For single allocations we use the caller provided 443 - * mask otherwise we use the mask of the target cpu 444 - */ 445 - mask = cnt == 1 ? affinity : cpumask_of(cpu); 254 + node = cpu_to_node(cpumask_first(affinity)); 255 + mask = affinity; 256 + affinity++; 446 257 } 447 258 desc = alloc_desc(start + i, node, flags, mask, owner); 448 259 if (!desc) 449 260 goto err; 450 261 mutex_lock(&sparse_irq_lock); 451 262 irq_insert_desc(start + i, desc); 263 + irq_sysfs_add(start + i, desc); 452 264 mutex_unlock(&sparse_irq_lock); 453 265 } 454 266 return start; ··· 669 481 * @cnt: Number of consecutive irqs to allocate. 
670 482 * @node: Preferred node on which the irq descriptor should be allocated 671 483 * @owner: Owning module (can be NULL) 672 - * @affinity: Optional pointer to an affinity mask which hints where the 673 - * irq descriptors should be allocated and which default 674 - * affinities to use 484 + * @affinity: Optional pointer to an affinity mask array of size @cnt which 485 + * hints where the irq descriptors should be allocated and which 486 + * default affinities to use 675 487 * 676 488 * Returns the first irq number or error code 677 489 */
+6 -5
kernel/irq/irqdomain.c
··· 80 80 81 81 /** 82 82 * __irq_domain_add() - Allocate a new irq_domain data structure 83 - * @of_node: optional device-tree node of the interrupt controller 83 + * @fwnode: firmware node for the interrupt controller 84 84 * @size: Size of linear map; 0 for radix mapping only 85 85 * @hwirq_max: Maximum number of interrupts supported by controller 86 86 * @direct_max: Maximum value of direct maps; Use ~0 for no limit; 0 for no ··· 96 96 const struct irq_domain_ops *ops, 97 97 void *host_data) 98 98 { 99 + struct device_node *of_node = to_of_node(fwnode); 99 100 struct irq_domain *domain; 100 - struct device_node *of_node; 101 - 102 - of_node = to_of_node(fwnode); 103 101 104 102 domain = kzalloc_node(sizeof(*domain) + (sizeof(unsigned int) * size), 105 103 GFP_KERNEL, of_node_to_nid(of_node)); ··· 866 868 if (WARN_ON(intsize < 1)) 867 869 return -EINVAL; 868 870 *out_hwirq = intspec[0]; 869 - *out_type = (intsize > 1) ? intspec[1] : IRQ_TYPE_NONE; 871 + if (intsize > 1) 872 + *out_type = intspec[1] & IRQ_TYPE_SENSE_MASK; 873 + else 874 + *out_type = IRQ_TYPE_NONE; 870 875 return 0; 871 876 } 872 877 EXPORT_SYMBOL_GPL(irq_domain_xlate_onetwocell);
+2 -3
kernel/irq/manage.c
··· 669 669 return 0; 670 670 } 671 671 672 - flags &= IRQ_TYPE_SENSE_MASK; 673 - 674 672 if (chip->flags & IRQCHIP_SET_TYPE_MASKED) { 675 673 if (!irqd_irq_masked(&desc->irq_data)) 676 674 mask_irq(desc); ··· 676 678 unmask = 1; 677 679 } 678 680 679 - /* caller masked out all except trigger mode flags */ 681 + /* Mask all flags except trigger mode */ 682 + flags &= IRQ_TYPE_SENSE_MASK; 680 683 ret = chip->irq_set_type(&desc->irq_data, flags); 681 684 682 685 switch (ret) {
+24 -2
kernel/irq/msi.c
··· 18 18 /* Temparory solution for building, will be removed later */ 19 19 #include <linux/pci.h> 20 20 21 - struct msi_desc *alloc_msi_entry(struct device *dev) 21 + /** 22 + * alloc_msi_entry - Allocate and initialize an msi_entry 23 + * @dev: Pointer to the device for which this is allocated 24 + * @nvec: The number of vectors used in this entry 25 + * @affinity: Optional pointer to an affinity mask array of size @nvec 26 + * 27 + * If @affinity is not NULL then an affinity array[@nvec] is allocated 28 + * and the affinity masks from @affinity are copied. 29 + */ 30 + struct msi_desc * 31 + alloc_msi_entry(struct device *dev, int nvec, const struct cpumask *affinity) 22 32 { 23 - struct msi_desc *desc = kzalloc(sizeof(*desc), GFP_KERNEL); 33 + struct msi_desc *desc; 34 + 35 + desc = kzalloc(sizeof(*desc), GFP_KERNEL); 24 36 if (!desc) 25 37 return NULL; 26 38 27 39 INIT_LIST_HEAD(&desc->list); 28 40 desc->dev = dev; 41 + desc->nvec_used = nvec; 42 + if (affinity) { 43 + desc->affinity = kmemdup(affinity, 44 + nvec * sizeof(*desc->affinity), GFP_KERNEL); 45 + if (!desc->affinity) { 46 + kfree(desc); 47 + return NULL; 48 + } 49 + } 29 50 30 51 return desc; 31 52 } 32 53 33 54 void free_msi_entry(struct msi_desc *entry) 34 55 { 56 + kfree(entry->affinity); 35 57 kfree(entry); 36 58 }
+15 -1
kernel/softirq.c
··· 78 78 } 79 79 80 80 /* 81 + * If ksoftirqd is scheduled, we do not want to process pending softirqs 82 + * right now. Let ksoftirqd handle this at its own rate, to get fairness. 83 + */ 84 + static bool ksoftirqd_running(void) 85 + { 86 + struct task_struct *tsk = __this_cpu_read(ksoftirqd); 87 + 88 + return tsk && (tsk->state == TASK_RUNNING); 89 + } 90 + 91 + /* 81 92 * preempt_count and SOFTIRQ_OFFSET usage: 82 93 * - preempt_count is changed by SOFTIRQ_OFFSET on entering or leaving 83 94 * softirq processing. ··· 324 313 325 314 pending = local_softirq_pending(); 326 315 327 - if (pending) 316 + if (pending && !ksoftirqd_running()) 328 317 do_softirq_own_stack(); 329 318 330 319 local_irq_restore(flags); ··· 351 340 352 341 static inline void invoke_softirq(void) 353 342 { 343 + if (ksoftirqd_running()) 344 + return; 345 + 354 346 if (!force_irqthreads) { 355 347 #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK 356 348 /*