Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge branch 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Ingo Molnar:
"Most of the IRQ subsystem changes in this cycle were irq-chip driver
updates:

- Qualcomm PDC wakeup interrupt support

- Layerscape external IRQ support

- Broadcom bcm7038 PM and wakeup support

- Ingenic driver cleanup and modernization

- GICv3 ITS preparation for GICv4.1 updates

- GICv4 fixes

There's also the series from Frederic Weisbecker that fixes memory
ordering bugs for the irq-work logic, whose primary fix is to turn
work->irq_work.flags into an atomic variable and then convert the
complex (and buggy) atomic_cmpxchg() loop in irq_work_claim() into a
much simpler atomic_fetch_or() call.

There are also various smaller cleanups"

* 'irq-core-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (44 commits)
pinctrl/sdm845: Add PDC wakeup interrupt map for GPIOs
pinctrl/msm: Setup GPIO chip in hierarchy
irqchip/qcom-pdc: Add irqchip set/get state calls
irqchip/qcom-pdc: Add irqdomain for wakeup capable GPIOs
irqchip/qcom-pdc: Do not toggle IRQ_ENABLE during mask/unmask
irqchip/qcom-pdc: Update max PDC interrupts
of/irq: Document properties for wakeup interrupt parent
genirq: Introduce irq_chip_get/set_parent_state calls
irqdomain: Add bus token DOMAIN_BUS_WAKEUP
genirq: Fix function documentation of __irq_alloc_descs()
irq_work: Fix IRQ_WORK_BUSY bit clearing
irqchip/ti-sci-inta: Use ERR_CAST inlined function instead of ERR_PTR(PTR_ERR(...))
irq_work: Slightly simplify IRQ_WORK_PENDING clearing
irq_work: Fix irq_work_claim() memory ordering
irq_work: Convert flags to atomic_t
irqchip: Ingenic: Add process for more than one irq at the same time.
irqchip: ingenic: Alloc generic chips from IRQ domain
irqchip: ingenic: Get virq number from IRQ domain
irqchip: ingenic: Error out if IRQ domain creation failed
irqchip: ingenic: Drop redundant irq_suspend / irq_resume functions
...

+1065 -188
+11
Documentation/devicetree/bindings/interrupt-controller/brcm,bcm7038-l1-intc.txt
@@
 - interrupts: specifies the interrupt line(s) in the interrupt-parent controller
   node; valid values depend on the type of parent interrupt controller
 
+Optional properties:
+
+- brcm,irq-can-wake: If present, this means the L1 controller can be used as a
+  wakeup source for system suspend/resume.
+
+- brcm,int-fwd-mask: if present, a bit mask to indicate which interrupts
+  have already been configured by the firmware and should be left unmanaged.
+  This should have one 32-bit word per status/set/clear/mask group.
+
 If multiple reg ranges and interrupt-parent entries are present on an SMP
 system, the driver will allow IRQ SMP affinity to be set up through the
 /proc/irq/ interface. In the simplest possible configuration, only one
+49
Documentation/devicetree/bindings/interrupt-controller/fsl,ls-extirq.txt
+* Freescale Layerscape external IRQs
+
+Some Layerscape SOCs (LS1021A, LS1043A, LS1046A) support inverting
+the polarity of certain external interrupt lines.
+
+The device node must be a child of the node representing the
+Supplemental Configuration Unit (SCFG).
+
+Required properties:
+- compatible: should be "fsl,<soc-name>-extirq", e.g. "fsl,ls1021a-extirq".
+- #interrupt-cells: Must be 2. The first element is the index of the
+  external interrupt line. The second element is the trigger type.
+- #address-cells: Must be 0.
+- interrupt-controller: Identifies the node as an interrupt controller
+- reg: Specifies the Interrupt Polarity Control Register (INTPCR) in
+  the SCFG.
+- interrupt-map: Specifies the mapping from external interrupts to GIC
+  interrupts.
+- interrupt-map-mask: Must be <0xffffffff 0>.
+
+Example:
+	scfg: scfg@1570000 {
+		compatible = "fsl,ls1021a-scfg", "syscon";
+		reg = <0x0 0x1570000 0x0 0x10000>;
+		big-endian;
+		#address-cells = <1>;
+		#size-cells = <1>;
+		ranges = <0x0 0x0 0x1570000 0x10000>;
+
+		extirq: interrupt-controller@1ac {
+			compatible = "fsl,ls1021a-extirq";
+			#interrupt-cells = <2>;
+			#address-cells = <0>;
+			interrupt-controller;
+			reg = <0x1ac 4>;
+			interrupt-map =
+				<0 0 &gic GIC_SPI 163 IRQ_TYPE_LEVEL_HIGH>,
+				<1 0 &gic GIC_SPI 164 IRQ_TYPE_LEVEL_HIGH>,
+				<2 0 &gic GIC_SPI 165 IRQ_TYPE_LEVEL_HIGH>,
+				<3 0 &gic GIC_SPI 167 IRQ_TYPE_LEVEL_HIGH>,
+				<4 0 &gic GIC_SPI 168 IRQ_TYPE_LEVEL_HIGH>,
+				<5 0 &gic GIC_SPI 169 IRQ_TYPE_LEVEL_HIGH>;
+			interrupt-map-mask = <0xffffffff 0x0>;
+		};
+	};
+
+	interrupts-extended = <&gic GIC_SPI 88 IRQ_TYPE_LEVEL_HIGH>,
+			      <&extirq 1 IRQ_TYPE_LEVEL_LOW>;
+12
Documentation/devicetree/bindings/interrupt-controller/interrupts.txt
@@
 		sensitivity = <7>;
 	};
 };
+
+3) Interrupt wakeup parent
+--------------------------
+
+Some interrupt controllers in an SoC are always powered on and have a select
+set of interrupts routed to them, so that they can wake up the SoC from
+suspend. Such a controller does not fall into the category of a parent
+interrupt controller; it is specified by the "wakeup-parent" property, which
+contains a single phandle referring to the wakeup-capable interrupt
+controller.
+
+Example:
+	wakeup-parent = <&pdc_intc>;
+2 -1
Documentation/devicetree/bindings/interrupt-controller/qcom,pdc.txt
@@
 - compatible:
 	Usage: required
 	Value type: <string>
-	Definition: Should contain "qcom,<soc>-pdc"
+	Definition: Should contain "qcom,<soc>-pdc" and "qcom,pdc"
+		    - "qcom,sc7180-pdc": For SC7180
 		    - "qcom,sdm845-pdc": For SDM845
 
 - reg:
+1 -1
arch/arm/include/asm/arch_gicv3.h
@@
  * GITS_VPENDBASER - the Valid bit must be cleared before changing
  * anything else.
  */
-static inline void gits_write_vpendbaser(u64 val, void * __iomem addr)
+static inline void gits_write_vpendbaser(u64 val, void __iomem *addr)
 {
 	u32 tmp;
 
+6 -2
drivers/irqchip/Kconfig
@@
 config MVEBU_SEI
 	bool
 
+config LS_EXTIRQ
+	def_bool y if SOC_LS1021A || ARCH_LAYERSCAPE
+	select MFD_SYSCON
+
 config LS_SCFG_MSI
 	def_bool y if SOC_LS1021A || ARCH_LAYERSCAPE
 	depends on PCI && PCI_MSI
@@
 	  If you wish to use interrupt aggregator irq resources managed by the
 	  TI System Controller, say Y here. Otherwise, say N.
 
-endmenu
-
 config SIFIVE_PLIC
 	bool "SiFive Platform-Level Interrupt Controller"
 	depends on RISCV
@@
 	  interrupt sources are subordinate to the PLIC.
 
 	  If you don't know what to do here, say Y.
+
+endmenu
+1
drivers/irqchip/Makefile
@@
 obj-$(CONFIG_MVEBU_ODMI)		+= irq-mvebu-odmi.o
 obj-$(CONFIG_MVEBU_PIC)			+= irq-mvebu-pic.o
 obj-$(CONFIG_MVEBU_SEI)			+= irq-mvebu-sei.o
+obj-$(CONFIG_LS_EXTIRQ)			+= irq-ls-extirq.o
 obj-$(CONFIG_LS_SCFG_MSI)		+= irq-ls-scfg-msi.o
 obj-$(CONFIG_EZNPS_GIC)			+= irq-eznps.o
 obj-$(CONFIG_ARCH_ASPEED)		+= irq-aspeed-vic.o irq-aspeed-i2c-ic.o
+117 -2
drivers/irqchip/irq-bcm7038-l1.c
@@
 #include <linux/types.h>
 #include <linux/irqchip.h>
 #include <linux/irqchip/chained_irq.h>
+#include <linux/syscore_ops.h>
 
 #define IRQS_PER_WORD		32
 #define REG_BYTES_PER_IRQ_WORD	(sizeof(u32) * 4)
@@
 	unsigned int n_words;
 	struct irq_domain *domain;
 	struct bcm7038_l1_cpu *cpus[NR_CPUS];
+#ifdef CONFIG_PM_SLEEP
+	struct list_head list;
+	u32 wake_mask[MAX_WORDS];
+#endif
+	u32 irq_fwd_mask[MAX_WORDS];
 	u8 affinity[MAX_WORDS * IRQS_PER_WORD];
 };
@@
 	resource_size_t sz;
 	struct bcm7038_l1_cpu *cpu;
 	unsigned int i, n_words, parent_irq;
+	int ret;
 
 	if (of_address_to_resource(dn, idx, &res))
 		return -EINVAL;
@@
 	else if (intc->n_words != n_words)
 		return -EINVAL;
 
+	ret = of_property_read_u32_array(dn, "brcm,int-fwd-mask",
+					 intc->irq_fwd_mask, n_words);
+	if (ret != 0 && ret != -EINVAL) {
+		/* property exists but has the wrong number of words */
+		pr_err("invalid brcm,int-fwd-mask property\n");
+		return -EINVAL;
+	}
+
 	cpu = intc->cpus[idx] = kzalloc(sizeof(*cpu) + n_words * sizeof(u32),
 					GFP_KERNEL);
 	if (!cpu)
@@
 		return -ENOMEM;
 
 	for (i = 0; i < n_words; i++) {
-		l1_writel(0xffffffff, cpu->map_base + reg_mask_set(intc, i));
-		cpu->mask_cache[i] = 0xffffffff;
+		l1_writel(~intc->irq_fwd_mask[i],
+			  cpu->map_base + reg_mask_set(intc, i));
+		l1_writel(intc->irq_fwd_mask[i],
+			  cpu->map_base + reg_mask_clr(intc, i));
+		cpu->mask_cache[i] = ~intc->irq_fwd_mask[i];
 	}
 
 	parent_irq = irq_of_parse_and_map(dn, idx);
@@
 		pr_err("failed to map parent interrupt %d\n", parent_irq);
 		return -EINVAL;
 	}
+
+	if (of_property_read_bool(dn, "brcm,irq-can-wake"))
+		enable_irq_wake(parent_irq);
+
 	irq_set_chained_handler_and_data(parent_irq, bcm7038_l1_irq_handle,
 					 intc);
 
 	return 0;
 }
+
+#ifdef CONFIG_PM_SLEEP
+/*
+ * We keep a list of bcm7038_l1_chip used for suspend/resume. This hack is
+ * used because the struct chip_type suspend/resume hooks are not called
+ * unless chip_type is hooked onto a generic_chip. Since this driver does
+ * not use generic_chip, we need to manually hook our resume/suspend to
+ * syscore_ops.
+ */
+static LIST_HEAD(bcm7038_l1_intcs_list);
+static DEFINE_RAW_SPINLOCK(bcm7038_l1_intcs_lock);
+
+static int bcm7038_l1_suspend(void)
+{
+	struct bcm7038_l1_chip *intc;
+	int boot_cpu, word;
+	u32 val;
+
+	/* Wakeup interrupt should only come from the boot cpu */
+	boot_cpu = cpu_logical_map(0);
+
+	list_for_each_entry(intc, &bcm7038_l1_intcs_list, list) {
+		for (word = 0; word < intc->n_words; word++) {
+			val = intc->wake_mask[word] | intc->irq_fwd_mask[word];
+			l1_writel(~val,
+				intc->cpus[boot_cpu]->map_base + reg_mask_set(intc, word));
+			l1_writel(val,
+				intc->cpus[boot_cpu]->map_base + reg_mask_clr(intc, word));
+		}
+	}
+
+	return 0;
+}
+
+static void bcm7038_l1_resume(void)
+{
+	struct bcm7038_l1_chip *intc;
+	int boot_cpu, word;
+
+	boot_cpu = cpu_logical_map(0);
+
+	list_for_each_entry(intc, &bcm7038_l1_intcs_list, list) {
+		for (word = 0; word < intc->n_words; word++) {
+			l1_writel(intc->cpus[boot_cpu]->mask_cache[word],
+				intc->cpus[boot_cpu]->map_base + reg_mask_set(intc, word));
+			l1_writel(~intc->cpus[boot_cpu]->mask_cache[word],
+				intc->cpus[boot_cpu]->map_base + reg_mask_clr(intc, word));
+		}
+	}
+}
+
+static struct syscore_ops bcm7038_l1_syscore_ops = {
+	.suspend	= bcm7038_l1_suspend,
+	.resume		= bcm7038_l1_resume,
+};
+
+static int bcm7038_l1_set_wake(struct irq_data *d, unsigned int on)
+{
+	struct bcm7038_l1_chip *intc = irq_data_get_irq_chip_data(d);
+	unsigned long flags;
+	u32 word = d->hwirq / IRQS_PER_WORD;
+	u32 mask = BIT(d->hwirq % IRQS_PER_WORD);
+
+	raw_spin_lock_irqsave(&intc->lock, flags);
+	if (on)
+		intc->wake_mask[word] |= mask;
+	else
+		intc->wake_mask[word] &= ~mask;
+	raw_spin_unlock_irqrestore(&intc->lock, flags);
+
+	return 0;
+}
+#endif
 
 static struct irq_chip bcm7038_l1_irq_chip = {
 	.name			= "bcm7038-l1",
@@
 #ifdef CONFIG_SMP
 	.irq_cpu_offline	= bcm7038_l1_cpu_offline,
 #endif
+#ifdef CONFIG_PM_SLEEP
+	.irq_set_wake		= bcm7038_l1_set_wake,
+#endif
 };
 
 static int bcm7038_l1_map(struct irq_domain *d, unsigned int virq,
 			  irq_hw_number_t hw_irq)
 {
+	struct bcm7038_l1_chip *intc = d->host_data;
+	u32 mask = BIT(hw_irq % IRQS_PER_WORD);
+	u32 word = hw_irq / IRQS_PER_WORD;
+
+	if (intc->irq_fwd_mask[word] & mask)
+		return -EPERM;
+
 	irq_set_chip_and_handler(virq, &bcm7038_l1_irq_chip, handle_level_irq);
 	irq_set_chip_data(virq, d->host_data);
 	irqd_set_single_target(irq_desc_get_irq_data(irq_to_desc(virq)));
@@
 		ret = -ENOMEM;
 		goto out_unmap;
 	}
+
+#ifdef CONFIG_PM_SLEEP
+	/* Add bcm7038_l1_chip into a list */
+	raw_spin_lock(&bcm7038_l1_intcs_lock);
+	list_add_tail(&intc->list, &bcm7038_l1_intcs_list);
+	raw_spin_unlock(&bcm7038_l1_intcs_lock);
+
+	if (list_is_singular(&bcm7038_l1_intcs_list))
+		register_syscore_ops(&bcm7038_l1_syscore_ops);
+#endif
 
 	pr_info("registered BCM7038 L1 intc (%pOF, IRQs: %d)\n",
 		dn, IRQS_PER_WORD * intc->n_words);
+230 -72
drivers/irqchip/irq-gic-v3-its.c
@@
 #include <linux/acpi.h>
 #include <linux/acpi_iort.h>
+#include <linux/bitfield.h>
 #include <linux/bitmap.h>
 #include <linux/cpu.h>
 #include <linux/crash_dump.h>
@@
 	struct its_collection	*collections;
 	struct fwnode_handle	*fwnode_handle;
 	u64			(*get_msi_base)(struct its_device *its_dev);
+	u64			typer;
 	u64			cbaser_save;
 	u32			ctlr_save;
 	struct list_head	its_device_list;
 	u64			flags;
 	unsigned long		list_nr;
-	u32			ite_size;
-	u32			device_ids;
 	int			numa_node;
 	unsigned int		msi_domain_flags;
 	u32			pre_its_base;	/* for Socionext Synquacer */
-	bool			is_v4;
 	int			vlpi_redist_offset;
 };
+
+#define is_v4(its)		(!!((its)->typer & GITS_TYPER_VLPIS))
+#define device_ids(its)		(FIELD_GET(GITS_TYPER_DEVBITS, (its)->typer) + 1)
 
 #define ITS_ITT_ALIGN		SZ_256
 
@@
 	u16			*col_map;
 	irq_hw_number_t		lpi_base;
 	int			nr_lpis;
-	struct mutex		vlpi_lock;
+	raw_spinlock_t		vlpi_lock;
 	struct its_vm		*vm;
 	struct its_vlpi_map	*vlpi_maps;
 	int			nr_vlpis;
@@
 	unsigned long its_list = 0;
 
 	list_for_each_entry(its, &its_nodes, entry) {
-		if (!its->is_v4)
+		if (!is_v4(its))
 			continue;
 
 		if (vm->vlpi_count[its->list_nr])
@@
 	return (u16)its_list;
 }
 
+static inline u32 its_get_event_id(struct irq_data *d)
+{
+	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
+	return d->hwirq - its_dev->event_map.lpi_base;
+}
+
 static struct its_collection *dev_event_to_col(struct its_device *its_dev,
 					       u32 event)
 {
 	struct its_node *its = its_dev->its;
 
 	return its->collections + its_dev->event_map.col_map[event];
+}
+
+static struct its_vlpi_map *dev_event_to_vlpi_map(struct its_device *its_dev,
+						  u32 event)
+{
+	if (WARN_ON_ONCE(event >= its_dev->event_map.nr_lpis))
+		return NULL;
+
+	return &its_dev->event_map.vlpi_maps[event];
+}
+
+static struct its_collection *irq_to_col(struct irq_data *d)
+{
+	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
+
+	return dev_event_to_col(its_dev, its_get_event_id(d));
 }
 
 static struct its_collection *valid_col(struct its_collection *col)
@@
  * The ITS command block, which is what the ITS actually parses.
  */
 struct its_cmd_block {
-	u64	raw_cmd[4];
+	union {
+		u64	raw_cmd[4];
+		__le64	raw_cmd_le[4];
+	};
 };
 
 #define ITS_CMD_QUEUE_SZ		SZ_64K
@@
 static inline void its_fixup_cmd(struct its_cmd_block *cmd)
 {
 	/* Let's fixup BE commands */
-	cmd->raw_cmd[0] = cpu_to_le64(cmd->raw_cmd[0]);
-	cmd->raw_cmd[1] = cpu_to_le64(cmd->raw_cmd[1]);
-	cmd->raw_cmd[2] = cpu_to_le64(cmd->raw_cmd[2]);
-	cmd->raw_cmd[3] = cpu_to_le64(cmd->raw_cmd[3]);
+	cmd->raw_cmd_le[0] = cpu_to_le64(cmd->raw_cmd[0]);
+	cmd->raw_cmd_le[1] = cpu_to_le64(cmd->raw_cmd[1]);
+	cmd->raw_cmd_le[2] = cpu_to_le64(cmd->raw_cmd[2]);
+	cmd->raw_cmd_le[3] = cpu_to_le64(cmd->raw_cmd[3]);
 }
 
 static struct its_collection *its_build_mapd_cmd(struct its_node *its,
@@
 	its_fixup_cmd(cmd);
 
 	return valid_vpe(its, desc->its_vmovp_cmd.vpe);
+}
+
+static struct its_vpe *its_build_vinv_cmd(struct its_node *its,
+					  struct its_cmd_block *cmd,
+					  struct its_cmd_desc *desc)
+{
+	struct its_vlpi_map *map;
+
+	map = dev_event_to_vlpi_map(desc->its_inv_cmd.dev,
+				    desc->its_inv_cmd.event_id);
+
+	its_encode_cmd(cmd, GITS_CMD_INV);
+	its_encode_devid(cmd, desc->its_inv_cmd.dev->device_id);
+	its_encode_event_id(cmd, desc->its_inv_cmd.event_id);
+
+	its_fixup_cmd(cmd);
+
+	return valid_vpe(its, map->vpe);
+}
+
+static struct its_vpe *its_build_vint_cmd(struct its_node *its,
+					  struct its_cmd_block *cmd,
+					  struct its_cmd_desc *desc)
+{
+	struct its_vlpi_map *map;
+
+	map = dev_event_to_vlpi_map(desc->its_int_cmd.dev,
+				    desc->its_int_cmd.event_id);
+
+	its_encode_cmd(cmd, GITS_CMD_INT);
+	its_encode_devid(cmd, desc->its_int_cmd.dev->device_id);
+	its_encode_event_id(cmd, desc->its_int_cmd.event_id);
+
+	its_fixup_cmd(cmd);
+
+	return valid_vpe(its, map->vpe);
+}
+
+static struct its_vpe *its_build_vclear_cmd(struct its_node *its,
+					    struct its_cmd_block *cmd,
+					    struct its_cmd_desc *desc)
+{
+	struct its_vlpi_map *map;
+
+	map = dev_event_to_vlpi_map(desc->its_clear_cmd.dev,
+				    desc->its_clear_cmd.event_id);
+
+	its_encode_cmd(cmd, GITS_CMD_CLEAR);
+	its_encode_devid(cmd, desc->its_clear_cmd.dev->device_id);
+	its_encode_event_id(cmd, desc->its_clear_cmd.event_id);
+
+	its_fixup_cmd(cmd);
+
+	return valid_vpe(its, map->vpe);
 }
 
 static u64 its_cmd_ptr_to_offset(struct its_node *its,
@@
 
 static void its_send_vmapti(struct its_device *dev, u32 id)
 {
-	struct its_vlpi_map *map = &dev->event_map.vlpi_maps[id];
+	struct its_vlpi_map *map = dev_event_to_vlpi_map(dev, id);
 	struct its_cmd_desc desc;
 
 	desc.its_vmapti_cmd.vpe = map->vpe;
@@
 
 static void its_send_vmovi(struct its_device *dev, u32 id)
 {
-	struct its_vlpi_map *map = &dev->event_map.vlpi_maps[id];
+	struct its_vlpi_map *map = dev_event_to_vlpi_map(dev, id);
 	struct its_cmd_desc desc;
 
 	desc.its_vmovi_cmd.vpe = map->vpe;
@@
 
 	/* Emit VMOVPs */
 	list_for_each_entry(its, &its_nodes, entry) {
-		if (!its->is_v4)
+		if (!is_v4(its))
 			continue;
 
 		if (!vpe->its_vm->vlpi_count[its->list_nr])
@@
 	its_send_single_vcommand(its, its_build_vinvall_cmd, &desc);
 }
 
+static void its_send_vinv(struct its_device *dev, u32 event_id)
+{
+	struct its_cmd_desc desc;
+
+	/*
+	 * There is no real VINV command. This is just a normal INV,
+	 * with a VSYNC instead of a SYNC.
+	 */
+	desc.its_inv_cmd.dev = dev;
+	desc.its_inv_cmd.event_id = event_id;
+
+	its_send_single_vcommand(dev->its, its_build_vinv_cmd, &desc);
+}
+
+static void its_send_vint(struct its_device *dev, u32 event_id)
+{
+	struct its_cmd_desc desc;
+
+	/*
+	 * There is no real VINT command. This is just a normal INT,
+	 * with a VSYNC instead of a SYNC.
+	 */
+	desc.its_int_cmd.dev = dev;
+	desc.its_int_cmd.event_id = event_id;
+
+	its_send_single_vcommand(dev->its, its_build_vint_cmd, &desc);
+}
+
+static void its_send_vclear(struct its_device *dev, u32 event_id)
+{
+	struct its_cmd_desc desc;
+
+	/*
+	 * There is no real VCLEAR command. This is just a normal CLEAR,
+	 * with a VSYNC instead of a SYNC.
+	 */
+	desc.its_clear_cmd.dev = dev;
+	desc.its_clear_cmd.event_id = event_id;
+
+	its_send_single_vcommand(dev->its, its_build_vclear_cmd, &desc);
+}
+
 /*
  * irqchip functions - assumes MSI, mostly.
  */
-
-static inline u32 its_get_event_id(struct irq_data *d)
+static struct its_vlpi_map *get_vlpi_map(struct irq_data *d)
 {
 	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
-	return d->hwirq - its_dev->event_map.lpi_base;
+	u32 event = its_get_event_id(d);
+
+	if (!irqd_is_forwarded_to_vcpu(d))
+		return NULL;
+
+	return dev_event_to_vlpi_map(its_dev, event);
 }
 
 static void lpi_write_config(struct irq_data *d, u8 clr, u8 set)
 {
+	struct its_vlpi_map *map = get_vlpi_map(d);
 	irq_hw_number_t hwirq;
 	void *va;
 	u8 *cfg;
 
-	if (irqd_is_forwarded_to_vcpu(d)) {
-		struct its_device *its_dev = irq_data_get_irq_chip_data(d);
-		u32 event = its_get_event_id(d);
-		struct its_vlpi_map *map;
-
-		va = page_address(its_dev->event_map.vm->vprop_page);
-		map = &its_dev->event_map.vlpi_maps[event];
+	if (map) {
+		va = page_address(map->vm->vprop_page);
 		hwirq = map->vintid;
 
 		/* Remember the updated property */
@@
 	dsb(ishst);
 }
 
+static void wait_for_syncr(void __iomem *rdbase)
+{
+	while (gic_read_lpir(rdbase + GICR_SYNCR) & 1)
+		cpu_relax();
+}
+
+static void direct_lpi_inv(struct irq_data *d)
+{
+	struct its_collection *col;
+	void __iomem *rdbase;
+
+	/* Target the redistributor this LPI is currently routed to */
+	col = irq_to_col(d);
+	rdbase = per_cpu_ptr(gic_rdists->rdist, col->col_id)->rd_base;
+	gic_write_lpir(d->hwirq, rdbase + GICR_INVLPIR);
+
+	wait_for_syncr(rdbase);
+}
+
 static void lpi_update_config(struct irq_data *d, u8 clr, u8 set)
 {
 	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
 
 	lpi_write_config(d, clr, set);
-	its_send_inv(its_dev, its_get_event_id(d));
+	if (gic_rdists->has_direct_lpi && !irqd_is_forwarded_to_vcpu(d))
+		direct_lpi_inv(d);
+	else if (!irqd_is_forwarded_to_vcpu(d))
+		its_send_inv(its_dev, its_get_event_id(d));
+	else
+		its_send_vinv(its_dev, its_get_event_id(d));
 }
 
 static void its_vlpi_set_doorbell(struct irq_data *d, bool enable)
 {
 	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
 	u32 event = its_get_event_id(d);
+	struct its_vlpi_map *map;
 
-	if (its_dev->event_map.vlpi_maps[event].db_enabled == enable)
+	map = dev_event_to_vlpi_map(its_dev, event);
+
+	if (map->db_enabled == enable)
 		return;
 
-	its_dev->event_map.vlpi_maps[event].db_enabled = enable;
+	map->db_enabled = enable;
 
 	/*
 	 * More fun with the architecture:
@@
 	if (which != IRQCHIP_STATE_PENDING)
 		return -EINVAL;
 
-	if (state)
-		its_send_int(its_dev, event);
-	else
-		its_send_clear(its_dev, event);
+	if (irqd_is_forwarded_to_vcpu(d)) {
+		if (state)
+			its_send_vint(its_dev, event);
+		else
+			its_send_vclear(its_dev, event);
+	} else {
+		if (state)
+			its_send_int(its_dev, event);
+		else
+			its_send_clear(its_dev, event);
+	}
 
 	return 0;
 }
@@
 	if (!info->map)
 		return -EINVAL;
 
-	mutex_lock(&its_dev->event_map.vlpi_lock);
+	raw_spin_lock(&its_dev->event_map.vlpi_lock);
 
 	if (!its_dev->event_map.vm) {
 		struct its_vlpi_map *maps;
 
 		maps = kcalloc(its_dev->event_map.nr_lpis, sizeof(*maps),
-			       GFP_KERNEL);
+			       GFP_ATOMIC);
 		if (!maps) {
 			ret = -ENOMEM;
 			goto out;
@@
 	}
 
 out:
-	mutex_unlock(&its_dev->event_map.vlpi_lock);
+	raw_spin_unlock(&its_dev->event_map.vlpi_lock);
 	return ret;
 }
 
 static int its_vlpi_get(struct irq_data *d, struct its_cmd_info *info)
 {
 	struct its_device *its_dev = irq_data_get_irq_chip_data(d);
-	u32 event = its_get_event_id(d);
+	struct its_vlpi_map *map;
 	int ret = 0;
 
-	mutex_lock(&its_dev->event_map.vlpi_lock);
+	raw_spin_lock(&its_dev->event_map.vlpi_lock);
 
-	if (!its_dev->event_map.vm ||
-	    !its_dev->event_map.vlpi_maps[event].vm) {
+	map = get_vlpi_map(d);
+
+	if (!its_dev->event_map.vm || !map) {
 		ret = -EINVAL;
 		goto out;
 	}
 
 	/* Copy our mapping information to the incoming request */
-	*info->map = its_dev->event_map.vlpi_maps[event];
+	*info->map = *map;
 
 out:
-	mutex_unlock(&its_dev->event_map.vlpi_lock);
+	raw_spin_unlock(&its_dev->event_map.vlpi_lock);
 	return ret;
 }
@@
 	u32 event = its_get_event_id(d);
 	int ret = 0;
 
-	mutex_lock(&its_dev->event_map.vlpi_lock);
+	raw_spin_lock(&its_dev->event_map.vlpi_lock);
 
 	if (!its_dev->event_map.vm || !irqd_is_forwarded_to_vcpu(d)) {
 		ret = -EINVAL;
@@
 	}
 
 out:
-	mutex_unlock(&its_dev->event_map.vlpi_lock);
+	raw_spin_unlock(&its_dev->event_map.vlpi_lock);
 	return ret;
 }
@@
 	struct its_cmd_info *info = vcpu_info;
 
 	/* Need a v4 ITS */
-	if (!its_dev->its->is_v4)
+	if (!is_v4(its_dev->its))
 		return -EINVAL;
 
 	/* Unmap request? */
@@
 	if (new_order >= MAX_ORDER) {
 		new_order = MAX_ORDER - 1;
 		ids = ilog2(PAGE_ORDER_TO_SIZE(new_order) / (int)esz);
-		pr_warn("ITS@%pa: %s Table too large, reduce ids %u->%u\n",
+		pr_warn("ITS@%pa: %s Table too large, reduce ids %llu->%u\n",
 			&its->phys_base, its_base_type_string[type],
-			its->device_ids, ids);
+			device_ids(its), ids);
 	}
 
 	*order = new_order;
@@
 	case GITS_BASER_TYPE_DEVICE:
 		indirect = its_parse_indirect_baser(its, baser,
 						    psz, &order,
-						    its->device_ids);
+						    device_ids(its));
 		break;
 
 	case GITS_BASER_TYPE_VCPU:
@@
 
 	/* Don't allow device id that exceeds ITS hardware limit */
 	if (!baser)
-		return (ilog2(dev_id) < its->device_ids);
+		return (ilog2(dev_id) < device_ids(its));
 
 	return its_alloc_table_entry(its, baser, dev_id);
 }
@@
 	list_for_each_entry(its, &its_nodes, entry) {
 		struct its_baser *baser;
 
-		if (!its->is_v4)
+		if (!is_v4(its))
 			continue;
 
 		baser = its_get_baser(its, GITS_BASER_TYPE_VCPU);
@@
 	 * sized as a power of two (and you need at least one bit...).
 	 */
 	nr_ites = max(2, nvecs);
-	sz = nr_ites * its->ite_size;
+	sz = nr_ites * (FIELD_GET(GITS_TYPER_ITT_ENTRY_SIZE, its->typer) + 1);
 	sz = max(sz, ITS_ITT_ALIGN) + ITS_ITT_ALIGN - 1;
 	itt = kzalloc_node(sz, GFP_KERNEL, its->numa_node);
 	if (alloc_lpis) {
@@
 	dev->event_map.col_map = col_map;
 	dev->event_map.lpi_base = lpi_base;
 	dev->event_map.nr_lpis = nr_lpis;
-	mutex_init(&dev->event_map.vlpi_lock);
+	raw_spin_lock_init(&dev->event_map.vlpi_lock);
 	dev->device_id = dev_id;
 	INIT_LIST_HEAD(&dev->entry);
@@
 	raw_spin_lock_irqsave(&its_dev->its->lock, flags);
 	list_del(&its_dev->entry);
 	raw_spin_unlock_irqrestore(&its_dev->its->lock, flags);
+	kfree(its_dev->event_map.col_map);
 	kfree(its_dev->itt);
 	kfree(its_dev);
 }
@@
 	its_lpi_free(its_dev->event_map.lpi_map,
 		     its_dev->event_map.lpi_base,
 		     its_dev->event_map.nr_lpis);
-	kfree(its_dev->event_map.col_map);
 
 	/* Unmap device/itt */
 	its_send_mapd(its_dev, 0);
@@
 
 		rdbase = per_cpu_ptr(gic_rdists->rdist, from)->rd_base;
 		gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_CLRLPIR);
-		while (gic_read_lpir(rdbase + GICR_SYNCR) & 1)
-			cpu_relax();
+		wait_for_syncr(rdbase);
 
 		return;
 	}
@@
 	struct its_node *its;
 
 	list_for_each_entry(its, &its_nodes, entry) {
-		if (!its->is_v4)
+		if (!is_v4(its))
 			continue;
 
 		if (its_list_map && !vpe->its_vm->vlpi_count[its->list_nr])
@@
 	if (gic_rdists->has_direct_lpi) {
 		void __iomem *rdbase;
 
+		/* Target the redistributor this VPE is currently known on */
 		rdbase = per_cpu_ptr(gic_rdists->rdist, vpe->col_idx)->rd_base;
-		gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_INVLPIR);
-		while (gic_read_lpir(rdbase + GICR_SYNCR) & 1)
-			cpu_relax();
+		gic_write_lpir(d->parent_data->hwirq, rdbase + GICR_INVLPIR);
+		wait_for_syncr(rdbase);
 	} else {
 		its_vpe_send_cmd(vpe, its_send_inv);
 	}
@@
 			gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_SETLPIR);
 		} else {
 			gic_write_lpir(vpe->vpe_db_lpi, rdbase + GICR_CLRLPIR);
-			while (gic_read_lpir(rdbase + GICR_SYNCR) & 1)
-				cpu_relax();
+			wait_for_syncr(rdbase);
 		}
 	} else {
 		if (state)
@@
 	vpe->col_idx = cpumask_first(cpu_online_mask);
 
 	list_for_each_entry(its, &its_nodes, entry) {
-		if (!its->is_v4)
+		if (!is_v4(its))
 			continue;
 
 		its_send_vmapp(its, vpe, true);
@@
 		return;
 
 	list_for_each_entry(its, &its_nodes, entry) {
-		if (!its->is_v4)
+		if (!is_v4(its))
 			continue;
 
 		its_send_vmapp(its, vpe, false);
@@
 {
 	struct its_node *its = data;
 
-	/* erratum 22375: only alloc 8MB table size */
-	its->device_ids = 0x14;		/* 20 bits, 8MB */
+	/* erratum 22375: only alloc 8MB table size (20 bits) */
+	its->typer &= ~GITS_TYPER_DEVBITS;
+	its->typer |= FIELD_PREP(GITS_TYPER_DEVBITS, 20 - 1);
 	its->flags |= ITS_FLAGS_WORKAROUND_CAVIUM_22375;
 
 	return true;
@@
 	struct its_node *its = data;
 
 	/* On QDF2400, the size of the ITE is 16Bytes */
-	its->ite_size = 16;
+	its->typer &= ~GITS_TYPER_ITT_ENTRY_SIZE;
+	its->typer |= FIELD_PREP(GITS_TYPER_ITT_ENTRY_SIZE, 16 - 1);
 
 	return true;
 }
@@
 	its->get_msi_base = its_irq_get_msi_base_pre_its;
 
 	ids = ilog2(pre_its_window[1]) - 2;
-	if (its->device_ids > ids)
-		its->device_ids = ids;
+	if (device_ids(its) > ids) {
+		its->typer &= ~GITS_TYPER_DEVBITS;
+		its->typer |= FIELD_PREP(GITS_TYPER_DEVBITS, ids - 1);
+	}
 
 	/* the pre-ITS breaks isolation, so disable MSI remapping */
 	its->msi_domain_flags &= ~IRQ_DOMAIN_FLAG_MSI_REMAP;
@@
 	}
 
 	/* Use the last possible DevID */
-	devid = GENMASK(its->device_ids - 1, 0);
+	devid = GENMASK(device_ids(its) - 1, 0);
 	vpe_proxy.dev = its_create_device(its, devid, entries, false);
 	if (!vpe_proxy.dev) {
 		kfree(vpe_proxy.vpes);
@@
 	INIT_LIST_HEAD(&its->entry);
 	INIT_LIST_HEAD(&its->its_device_list);
 	typer = gic_read_typer(its_base + GITS_TYPER);
+	its->typer = typer;
 	its->base = its_base;
 	its->phys_base = res->start;
-	its->ite_size = GITS_TYPER_ITT_ENTRY_SIZE(typer);
-	its->device_ids = GITS_TYPER_DEVBITS(typer);
-	its->is_v4 = !!(typer & GITS_TYPER_VLPIS);
-	if (its->is_v4) {
+	if (is_v4(its)) {
 		if (!(typer & GITS_TYPER_VMOVP)) {
 			err = its_compute_its_list_map(res, its_base);
 			if (err < 0)
@@
 	gits_write_cwriter(0, its->base + GITS_CWRITER);
 	ctlr = readl_relaxed(its->base + GITS_CTLR);
 	ctlr |= GITS_CTLR_ENABLE;
-	if (its->is_v4)
+	if (is_v4(its))
 		ctlr |= GITS_CTLR_ImDe;
 	writel_relaxed(ctlr, its->base + GITS_CTLR);
@@
 		return err;
 
 	list_for_each_entry(its, &its_nodes, entry)
-		has_v4 |= its->is_v4;
+		has_v4 |= is_v4(its);
 
 	if (has_v4 & rdists->has_vlpis) {
 		if (its_init_vpe_domain() ||
+2 -2
drivers/irqchip/irq-gic-v3.c
··· 183 183 } 184 184 cpu_relax(); 185 185 udelay(1); 186 - }; 186 + } 187 187 } 188 188 189 189 /* Wait for completion of a distributor change */ ··· 240 240 break; 241 241 cpu_relax(); 242 242 udelay(1); 243 - }; 243 + } 244 244 if (!count) 245 245 pr_err_ratelimited("redistributor failed to %s...\n", 246 246 enable ? "wakeup" : "sleep");
+41 -44
drivers/irqchip/irq-ingenic.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0-or-later 2 2 /* 3 3 * Copyright (C) 2009-2010, Lars-Peter Clausen <lars@metafoo.de> 4 - * JZ4740 platform IRQ support 4 + * Ingenic XBurst platform IRQ support 5 5 */ 6 6 7 7 #include <linux/errno.h> ··· 10 10 #include <linux/interrupt.h> 11 11 #include <linux/ioport.h> 12 12 #include <linux/irqchip.h> 13 - #include <linux/irqchip/ingenic.h> 14 13 #include <linux/of_address.h> 15 14 #include <linux/of_irq.h> 16 15 #include <linux/timex.h> ··· 21 22 22 23 struct ingenic_intc_data { 23 24 void __iomem *base; 25 + struct irq_domain *domain; 24 26 unsigned num_chips; 25 27 }; 26 28 ··· 35 35 static irqreturn_t intc_cascade(int irq, void *data) 36 36 { 37 37 struct ingenic_intc_data *intc = irq_get_handler_data(irq); 38 - uint32_t irq_reg; 38 + struct irq_domain *domain = intc->domain; 39 + struct irq_chip_generic *gc; 40 + uint32_t pending; 39 41 unsigned i; 40 42 41 43 for (i = 0; i < intc->num_chips; i++) { 42 - irq_reg = readl(intc->base + (i * CHIP_SIZE) + 43 - JZ_REG_INTC_PENDING); 44 - if (!irq_reg) 44 + gc = irq_get_domain_generic_chip(domain, i * 32); 45 + 46 + pending = irq_reg_readl(gc, JZ_REG_INTC_PENDING); 47 + if (!pending) 45 48 continue; 46 49 47 - generic_handle_irq(__fls(irq_reg) + (i * 32) + JZ4740_IRQ_BASE); 50 + while (pending) { 51 + int bit = __fls(pending); 52 + 53 + irq = irq_find_mapping(domain, bit + (i * 32)); 54 + generic_handle_irq(irq); 55 + pending &= ~BIT(bit); 56 + } 48 57 } 49 58 50 59 return IRQ_HANDLED; 51 - } 52 - 53 - static void intc_irq_set_mask(struct irq_chip_generic *gc, uint32_t mask) 54 - { 55 - struct irq_chip_regs *regs = &gc->chip_types->regs; 56 - 57 - writel(mask, gc->reg_base + regs->enable); 58 - writel(~mask, gc->reg_base + regs->disable); 59 - } 60 - 61 - void ingenic_intc_irq_suspend(struct irq_data *data) 62 - { 63 - struct irq_chip_generic *gc = irq_data_get_irq_chip_data(data); 64 - intc_irq_set_mask(gc, gc->wake_active); 65 - } 66 - 67 - void 
ingenic_intc_irq_resume(struct irq_data *data) 68 - { 69 - struct irq_chip_generic *gc = irq_data_get_irq_chip_data(data); 70 - intc_irq_set_mask(gc, gc->mask_cache); 71 60 } 72 61 73 62 static struct irqaction intc_cascade_action = { ··· 97 108 goto out_unmap_irq; 98 109 } 99 110 100 - for (i = 0; i < num_chips; i++) { 101 - /* Mask all irqs */ 102 - writel(0xffffffff, intc->base + (i * CHIP_SIZE) + 103 - JZ_REG_INTC_SET_MASK); 111 + domain = irq_domain_add_legacy(node, num_chips * 32, 112 + JZ4740_IRQ_BASE, 0, 113 + &irq_generic_chip_ops, NULL); 114 + if (!domain) { 115 + err = -ENOMEM; 116 + goto out_unmap_base; 117 + } 104 118 105 - gc = irq_alloc_generic_chip("INTC", 1, 106 - JZ4740_IRQ_BASE + (i * 32), 107 - intc->base + (i * CHIP_SIZE), 108 - handle_level_irq); 119 + intc->domain = domain; 120 + 121 + err = irq_alloc_domain_generic_chips(domain, 32, 1, "INTC", 122 + handle_level_irq, 0, 123 + IRQ_NOPROBE | IRQ_LEVEL, 0); 124 + if (err) 125 + goto out_domain_remove; 126 + 127 + for (i = 0; i < num_chips; i++) { 128 + gc = irq_get_domain_generic_chip(domain, i * 32); 109 129 110 130 gc->wake_enabled = IRQ_MSK(32); 131 + gc->reg_base = intc->base + (i * CHIP_SIZE); 111 132 112 133 ct = gc->chip_types; 113 134 ct->regs.enable = JZ_REG_INTC_CLEAR_MASK; ··· 126 127 ct->chip.irq_mask = irq_gc_mask_disable_reg; 127 128 ct->chip.irq_mask_ack = irq_gc_mask_disable_reg; 128 129 ct->chip.irq_set_wake = irq_gc_set_wake; 129 - ct->chip.irq_suspend = ingenic_intc_irq_suspend; 130 - ct->chip.irq_resume = ingenic_intc_irq_resume; 130 + ct->chip.flags = IRQCHIP_MASK_ON_SUSPEND; 131 131 132 - irq_setup_generic_chip(gc, IRQ_MSK(32), 0, 0, 133 - IRQ_NOPROBE | IRQ_LEVEL); 132 + /* Mask all irqs */ 133 + irq_reg_writel(gc, IRQ_MSK(32), JZ_REG_INTC_SET_MASK); 134 134 } 135 - 136 - domain = irq_domain_add_legacy(node, num_chips * 32, JZ4740_IRQ_BASE, 0, 137 - &irq_domain_simple_ops, NULL); 138 - if (!domain) 139 - pr_warn("unable to register IRQ domain\n"); 140 135 141 136 
setup_irq(parent_irq, &intc_cascade_action); 142 137 return 0; 143 138 139 + out_domain_remove: 140 + irq_domain_remove(domain); 141 + out_unmap_base: 142 + iounmap(intc->base); 144 143 out_unmap_irq: 145 144 irq_dispose_mapping(parent_irq); 146 145 out_free:
+197
drivers/irqchip/irq-ls-extirq.c
··· 1 + // SPDX-License-Identifier: GPL-2.0 2 + 3 + #define pr_fmt(fmt) "irq-ls-extirq: " fmt 4 + 5 + #include <linux/irq.h> 6 + #include <linux/irqchip.h> 7 + #include <linux/irqdomain.h> 8 + #include <linux/of.h> 9 + #include <linux/mfd/syscon.h> 10 + #include <linux/regmap.h> 11 + #include <linux/slab.h> 12 + 13 + #include <dt-bindings/interrupt-controller/arm-gic.h> 14 + 15 + #define MAXIRQ 12 16 + #define LS1021A_SCFGREVCR 0x200 17 + 18 + struct ls_extirq_data { 19 + struct regmap *syscon; 20 + u32 intpcr; 21 + bool bit_reverse; 22 + u32 nirq; 23 + struct irq_fwspec map[MAXIRQ]; 24 + }; 25 + 26 + static int 27 + ls_extirq_set_type(struct irq_data *data, unsigned int type) 28 + { 29 + struct ls_extirq_data *priv = data->chip_data; 30 + irq_hw_number_t hwirq = data->hwirq; 31 + u32 value, mask; 32 + 33 + if (priv->bit_reverse) 34 + mask = 1U << (31 - hwirq); 35 + else 36 + mask = 1U << hwirq; 37 + 38 + switch (type) { 39 + case IRQ_TYPE_LEVEL_LOW: 40 + type = IRQ_TYPE_LEVEL_HIGH; 41 + value = mask; 42 + break; 43 + case IRQ_TYPE_EDGE_FALLING: 44 + type = IRQ_TYPE_EDGE_RISING; 45 + value = mask; 46 + break; 47 + case IRQ_TYPE_LEVEL_HIGH: 48 + case IRQ_TYPE_EDGE_RISING: 49 + value = 0; 50 + break; 51 + default: 52 + return -EINVAL; 53 + } 54 + regmap_update_bits(priv->syscon, priv->intpcr, mask, value); 55 + 56 + return irq_chip_set_type_parent(data, type); 57 + } 58 + 59 + static struct irq_chip ls_extirq_chip = { 60 + .name = "ls-extirq", 61 + .irq_mask = irq_chip_mask_parent, 62 + .irq_unmask = irq_chip_unmask_parent, 63 + .irq_eoi = irq_chip_eoi_parent, 64 + .irq_set_type = ls_extirq_set_type, 65 + .irq_retrigger = irq_chip_retrigger_hierarchy, 66 + .irq_set_affinity = irq_chip_set_affinity_parent, 67 + .flags = IRQCHIP_SET_TYPE_MASKED, 68 + }; 69 + 70 + static int 71 + ls_extirq_domain_alloc(struct irq_domain *domain, unsigned int virq, 72 + unsigned int nr_irqs, void *arg) 73 + { 74 + struct ls_extirq_data *priv = domain->host_data; 75 + struct irq_fwspec 
*fwspec = arg; 76 + irq_hw_number_t hwirq; 77 + 78 + if (fwspec->param_count != 2) 79 + return -EINVAL; 80 + 81 + hwirq = fwspec->param[0]; 82 + if (hwirq >= priv->nirq) 83 + return -EINVAL; 84 + 85 + irq_domain_set_hwirq_and_chip(domain, virq, hwirq, &ls_extirq_chip, 86 + priv); 87 + 88 + return irq_domain_alloc_irqs_parent(domain, virq, 1, &priv->map[hwirq]); 89 + } 90 + 91 + static const struct irq_domain_ops extirq_domain_ops = { 92 + .xlate = irq_domain_xlate_twocell, 93 + .alloc = ls_extirq_domain_alloc, 94 + .free = irq_domain_free_irqs_common, 95 + }; 96 + 97 + static int 98 + ls_extirq_parse_map(struct ls_extirq_data *priv, struct device_node *node) 99 + { 100 + const __be32 *map; 101 + u32 mapsize; 102 + int ret; 103 + 104 + map = of_get_property(node, "interrupt-map", &mapsize); 105 + if (!map) 106 + return -ENOENT; 107 + if (mapsize % sizeof(*map)) 108 + return -EINVAL; 109 + mapsize /= sizeof(*map); 110 + 111 + while (mapsize) { 112 + struct device_node *ipar; 113 + u32 hwirq, intsize, j; 114 + 115 + if (mapsize < 3) 116 + return -EINVAL; 117 + hwirq = be32_to_cpup(map); 118 + if (hwirq >= MAXIRQ) 119 + return -EINVAL; 120 + priv->nirq = max(priv->nirq, hwirq + 1); 121 + 122 + ipar = of_find_node_by_phandle(be32_to_cpup(map + 2)); 123 + map += 3; 124 + mapsize -= 3; 125 + if (!ipar) 126 + return -EINVAL; 127 + priv->map[hwirq].fwnode = &ipar->fwnode; 128 + ret = of_property_read_u32(ipar, "#interrupt-cells", &intsize); 129 + if (ret) 130 + return ret; 131 + 132 + if (intsize > mapsize) 133 + return -EINVAL; 134 + 135 + priv->map[hwirq].param_count = intsize; 136 + for (j = 0; j < intsize; ++j) 137 + priv->map[hwirq].param[j] = be32_to_cpup(map++); 138 + mapsize -= intsize; 139 + } 140 + return 0; 141 + } 142 + 143 + static int __init 144 + ls_extirq_of_init(struct device_node *node, struct device_node *parent) 145 + { 146 + 147 + struct irq_domain *domain, *parent_domain; 148 + struct ls_extirq_data *priv; 149 + int ret; 150 + 151 + parent_domain = 
irq_find_host(parent); 152 + if (!parent_domain) { 153 + pr_err("Cannot find parent domain\n"); 154 + return -ENODEV; 155 + } 156 + 157 + priv = kzalloc(sizeof(*priv), GFP_KERNEL); 158 + if (!priv) 159 + return -ENOMEM; 160 + 161 + priv->syscon = syscon_node_to_regmap(node->parent); 162 + if (IS_ERR(priv->syscon)) { 163 + ret = PTR_ERR(priv->syscon); 164 + pr_err("Failed to lookup parent regmap\n"); 165 + goto out; 166 + } 167 + ret = of_property_read_u32(node, "reg", &priv->intpcr); 168 + if (ret) { 169 + pr_err("Missing INTPCR offset value\n"); 170 + goto out; 171 + } 172 + 173 + ret = ls_extirq_parse_map(priv, node); 174 + if (ret) 175 + goto out; 176 + 177 + if (of_device_is_compatible(node, "fsl,ls1021a-extirq")) { 178 + u32 revcr; 179 + 180 + ret = regmap_read(priv->syscon, LS1021A_SCFGREVCR, &revcr); 181 + if (ret) 182 + goto out; 183 + priv->bit_reverse = (revcr != 0); 184 + } 185 + 186 + domain = irq_domain_add_hierarchy(parent_domain, 0, priv->nirq, node, 187 + &extirq_domain_ops, priv); 188 + if (!domain) 189 + ret = -ENOMEM; 190 + 191 + out: 192 + if (ret) 193 + kfree(priv); 194 + return ret; 195 + } 196 + 197 + IRQCHIP_DECLARE(ls1021a_extirq, "fsl,ls1021a-extirq", ls_extirq_of_init);
+3 -2
drivers/irqchip/irq-ti-sci-inta.c
··· 246 246 /* No free bits available. Allocate a new vint */ 247 247 vint_desc = ti_sci_inta_alloc_parent_irq(domain); 248 248 if (IS_ERR(vint_desc)) { 249 - mutex_unlock(&inta->vint_mutex); 250 - return ERR_PTR(PTR_ERR(vint_desc)); 249 + event_desc = ERR_CAST(vint_desc); 250 + goto unlock; 251 251 } 252 252 253 253 free_bit = find_first_zero_bit(vint_desc->event_map, ··· 259 259 if (IS_ERR(event_desc)) 260 260 clear_bit(free_bit, vint_desc->event_map); 261 261 262 + unlock: 262 263 mutex_unlock(&inta->vint_mutex); 263 264 return event_desc; 264 265 }
+1 -1
drivers/irqchip/irq-zevio.c
··· 51 51 while (readl(zevio_irq_io + IO_STATUS)) { 52 52 irqnr = readl(zevio_irq_io + IO_CURRENT); 53 53 handle_domain_irq(zevio_irq_domain, irqnr, regs); 54 - }; 54 + } 55 55 } 56 56 57 57 static void __init zevio_init_irq_base(void __iomem *base)
+135 -14
drivers/irqchip/qcom-pdc.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright (c) 2017-2018, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2017-2019, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/err.h> 7 7 #include <linux/init.h> 8 + #include <linux/interrupt.h> 8 9 #include <linux/irq.h> 9 10 #include <linux/irqchip.h> 10 11 #include <linux/irqdomain.h> ··· 14 13 #include <linux/of.h> 15 14 #include <linux/of_address.h> 16 15 #include <linux/of_device.h> 16 + #include <linux/soc/qcom/irq.h> 17 17 #include <linux/spinlock.h> 18 - #include <linux/platform_device.h> 19 18 #include <linux/slab.h> 20 19 #include <linux/types.h> 21 20 22 - #define PDC_MAX_IRQS 126 21 + #define PDC_MAX_IRQS 168 22 + #define PDC_MAX_GPIO_IRQS 256 23 23 24 24 #define CLEAR_INTR(reg, intr) (reg & ~(1 << intr)) 25 25 #define ENABLE_INTR(reg, intr) (reg | (1 << intr)) 26 26 27 27 #define IRQ_ENABLE_BANK 0x10 28 28 #define IRQ_i_CFG 0x110 29 + 30 + #define PDC_NO_PARENT_IRQ ~0UL 29 31 30 32 struct pdc_pin_region { 31 33 u32 pin_base; ··· 51 47 return readl_relaxed(pdc_base + reg + i * sizeof(u32)); 52 48 } 53 49 50 + static int qcom_pdc_gic_get_irqchip_state(struct irq_data *d, 51 + enum irqchip_irq_state which, 52 + bool *state) 53 + { 54 + if (d->hwirq == GPIO_NO_WAKE_IRQ) 55 + return 0; 56 + 57 + return irq_chip_get_parent_state(d, which, state); 58 + } 59 + 60 + static int qcom_pdc_gic_set_irqchip_state(struct irq_data *d, 61 + enum irqchip_irq_state which, 62 + bool value) 63 + { 64 + if (d->hwirq == GPIO_NO_WAKE_IRQ) 65 + return 0; 66 + 67 + return irq_chip_set_parent_state(d, which, value); 68 + } 69 + 54 70 static void pdc_enable_intr(struct irq_data *d, bool on) 55 71 { 56 72 int pin_out = d->hwirq; ··· 87 63 raw_spin_unlock(&pdc_lock); 88 64 } 89 65 66 + static void qcom_pdc_gic_disable(struct irq_data *d) 67 + { 68 + if (d->hwirq == GPIO_NO_WAKE_IRQ) 69 + return; 70 + 71 + pdc_enable_intr(d, false); 72 + irq_chip_disable_parent(d); 73 + } 74 + 75 + 
static void qcom_pdc_gic_enable(struct irq_data *d) 76 + { 77 + if (d->hwirq == GPIO_NO_WAKE_IRQ) 78 + return; 79 + 80 + pdc_enable_intr(d, true); 81 + irq_chip_enable_parent(d); 82 + } 83 + 90 84 static void qcom_pdc_gic_mask(struct irq_data *d) 91 85 { 92 - pdc_enable_intr(d, false); 86 + if (d->hwirq == GPIO_NO_WAKE_IRQ) 87 + return; 88 + 93 89 irq_chip_mask_parent(d); 94 90 } 95 91 96 92 static void qcom_pdc_gic_unmask(struct irq_data *d) 97 93 { 98 - pdc_enable_intr(d, true); 94 + if (d->hwirq == GPIO_NO_WAKE_IRQ) 95 + return; 96 + 99 97 irq_chip_unmask_parent(d); 100 98 } 101 99 ··· 160 114 int pin_out = d->hwirq; 161 115 enum pdc_irq_config_bits pdc_type; 162 116 117 + if (pin_out == GPIO_NO_WAKE_IRQ) 118 + return 0; 119 + 163 120 switch (type) { 164 121 case IRQ_TYPE_EDGE_RISING: 165 122 pdc_type = PDC_EDGE_RISING; ··· 197 148 .irq_eoi = irq_chip_eoi_parent, 198 149 .irq_mask = qcom_pdc_gic_mask, 199 150 .irq_unmask = qcom_pdc_gic_unmask, 151 + .irq_disable = qcom_pdc_gic_disable, 152 + .irq_enable = qcom_pdc_gic_enable, 153 + .irq_get_irqchip_state = qcom_pdc_gic_get_irqchip_state, 154 + .irq_set_irqchip_state = qcom_pdc_gic_set_irqchip_state, 200 155 .irq_retrigger = irq_chip_retrigger_hierarchy, 201 156 .irq_set_type = qcom_pdc_gic_set_type, 202 157 .flags = IRQCHIP_MASK_ON_SUSPEND | ··· 222 169 return (region->parent_base + pin - region->pin_base); 223 170 } 224 171 225 - WARN_ON(1); 226 - return ~0UL; 172 + return PDC_NO_PARENT_IRQ; 227 173 } 228 174 229 175 static int qcom_pdc_translate(struct irq_domain *d, struct irq_fwspec *fwspec, ··· 251 199 252 200 ret = qcom_pdc_translate(domain, fwspec, &hwirq, &type); 253 201 if (ret) 254 - return -EINVAL; 255 - 256 - parent_hwirq = get_parent_hwirq(hwirq); 257 - if (parent_hwirq == ~0UL) 258 - return -EINVAL; 202 + return ret; 259 203 260 204 ret = irq_domain_set_hwirq_and_chip(domain, virq, hwirq, 261 205 &qcom_pdc_gic_chip, NULL); 262 206 if (ret) 263 207 return ret; 208 + 209 + parent_hwirq = 
get_parent_hwirq(hwirq); 210 + if (parent_hwirq == PDC_NO_PARENT_IRQ) 211 + return 0; 264 212 265 213 if (type & IRQ_TYPE_EDGE_BOTH) 266 214 type = IRQ_TYPE_EDGE_RISING; ··· 281 229 static const struct irq_domain_ops qcom_pdc_ops = { 282 230 .translate = qcom_pdc_translate, 283 231 .alloc = qcom_pdc_alloc, 232 + .free = irq_domain_free_irqs_common, 233 + }; 234 + 235 + static int qcom_pdc_gpio_alloc(struct irq_domain *domain, unsigned int virq, 236 + unsigned int nr_irqs, void *data) 237 + { 238 + struct irq_fwspec *fwspec = data; 239 + struct irq_fwspec parent_fwspec; 240 + irq_hw_number_t hwirq, parent_hwirq; 241 + unsigned int type; 242 + int ret; 243 + 244 + ret = qcom_pdc_translate(domain, fwspec, &hwirq, &type); 245 + if (ret) 246 + return ret; 247 + 248 + ret = irq_domain_set_hwirq_and_chip(domain, virq, hwirq, 249 + &qcom_pdc_gic_chip, NULL); 250 + if (ret) 251 + return ret; 252 + 253 + if (hwirq == GPIO_NO_WAKE_IRQ) 254 + return 0; 255 + 256 + parent_hwirq = get_parent_hwirq(hwirq); 257 + if (parent_hwirq == PDC_NO_PARENT_IRQ) 258 + return 0; 259 + 260 + if (type & IRQ_TYPE_EDGE_BOTH) 261 + type = IRQ_TYPE_EDGE_RISING; 262 + 263 + if (type & IRQ_TYPE_LEVEL_MASK) 264 + type = IRQ_TYPE_LEVEL_HIGH; 265 + 266 + parent_fwspec.fwnode = domain->parent->fwnode; 267 + parent_fwspec.param_count = 3; 268 + parent_fwspec.param[0] = 0; 269 + parent_fwspec.param[1] = parent_hwirq; 270 + parent_fwspec.param[2] = type; 271 + 272 + return irq_domain_alloc_irqs_parent(domain, virq, nr_irqs, 273 + &parent_fwspec); 274 + } 275 + 276 + static int qcom_pdc_gpio_domain_select(struct irq_domain *d, 277 + struct irq_fwspec *fwspec, 278 + enum irq_domain_bus_token bus_token) 279 + { 280 + return bus_token == DOMAIN_BUS_WAKEUP; 281 + } 282 + 283 + static const struct irq_domain_ops qcom_pdc_gpio_ops = { 284 + .select = qcom_pdc_gpio_domain_select, 285 + .alloc = qcom_pdc_gpio_alloc, 284 286 .free = irq_domain_free_irqs_common, 285 287 }; 286 288 ··· 376 270 377 271 static int 
qcom_pdc_init(struct device_node *node, struct device_node *parent) 378 272 { 379 - struct irq_domain *parent_domain, *pdc_domain; 273 + struct irq_domain *parent_domain, *pdc_domain, *pdc_gpio_domain; 380 274 int ret; 381 275 382 276 pdc_base = of_iomap(node, 0); ··· 407 301 goto fail; 408 302 } 409 303 304 + pdc_gpio_domain = irq_domain_create_hierarchy(parent_domain, 305 + IRQ_DOMAIN_FLAG_QCOM_PDC_WAKEUP, 306 + PDC_MAX_GPIO_IRQS, 307 + of_fwnode_handle(node), 308 + &qcom_pdc_gpio_ops, NULL); 309 + if (!pdc_gpio_domain) { 310 + pr_err("%pOF: PDC domain add failed for GPIO domain\n", node); 311 + ret = -ENOMEM; 312 + goto remove; 313 + } 314 + 315 + irq_domain_update_bus_token(pdc_gpio_domain, DOMAIN_BUS_WAKEUP); 316 + 410 317 return 0; 411 318 319 + remove: 320 + irq_domain_remove(pdc_domain); 412 321 fail: 413 322 kfree(pdc_region); 414 323 iounmap(pdc_base); 415 324 return ret; 416 325 } 417 326 418 - IRQCHIP_DECLARE(pdc_sdm845, "qcom,sdm845-pdc", qcom_pdc_init); 327 + IRQCHIP_DECLARE(qcom_pdc, "qcom,pdc", qcom_pdc_init);
+110 -2
drivers/pinctrl/qcom/pinctrl-msm.c
··· 23 23 #include <linux/pm.h> 24 24 #include <linux/log2.h> 25 25 26 + #include <linux/soc/qcom/irq.h> 27 + 26 28 #include "../core.h" 27 29 #include "../pinconf.h" 28 30 #include "pinctrl-msm.h" ··· 46 44 * @enabled_irqs: Bitmap of currently enabled irqs. 47 45 * @dual_edge_irqs: Bitmap of irqs that need sw emulated dual edge 48 46 * detection. 47 + * @skip_wake_irqs: Skip IRQs that are handled by wakeup interrupt controller 49 48 * @soc; Reference to soc_data of platform specific data. 50 49 * @regs: Base addresses for the TLMM tiles. 51 50 */ ··· 64 61 65 62 DECLARE_BITMAP(dual_edge_irqs, MAX_NR_GPIO); 66 63 DECLARE_BITMAP(enabled_irqs, MAX_NR_GPIO); 64 + DECLARE_BITMAP(skip_wake_irqs, MAX_NR_GPIO); 67 65 68 66 const struct msm_pinctrl_soc_data *soc; 69 67 void __iomem *regs[MAX_NR_TILES]; ··· 711 707 unsigned long flags; 712 708 u32 val; 713 709 710 + if (d->parent_data) 711 + irq_chip_mask_parent(d); 712 + 713 + if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) 714 + return; 715 + 714 716 g = &pctrl->soc->groups[d->hwirq]; 715 717 716 718 raw_spin_lock_irqsave(&pctrl->lock, flags); ··· 761 751 unsigned long flags; 762 752 u32 val; 763 753 754 + if (d->parent_data) 755 + irq_chip_unmask_parent(d); 756 + 757 + if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) 758 + return; 759 + 764 760 g = &pctrl->soc->groups[d->hwirq]; 765 761 766 762 raw_spin_lock_irqsave(&pctrl->lock, flags); ··· 794 778 795 779 static void msm_gpio_irq_enable(struct irq_data *d) 796 780 { 781 + /* 782 + * Clear the interrupt that may be pending before we enable 783 + * the line. 784 + * This is especially a problem with the GPIOs routed to the 785 + * PDC. These GPIOs are direct-connect interrupts to the GIC. 786 + * Disabling the interrupt line at the PDC does not prevent 787 + * the interrupt from being latched at the GIC. The state at 788 + * GIC needs to be cleared before enabling. 
789 + */ 790 + if (d->parent_data) { 791 + irq_chip_set_parent_state(d, IRQCHIP_STATE_PENDING, 0); 792 + irq_chip_enable_parent(d); 793 + } 797 794 798 795 msm_gpio_irq_clear_unmask(d, true); 796 + } 797 + 798 + static void msm_gpio_irq_disable(struct irq_data *d) 799 + { 800 + struct gpio_chip *gc = irq_data_get_irq_chip_data(d); 801 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 802 + 803 + if (d->parent_data) 804 + irq_chip_disable_parent(d); 805 + 806 + if (!test_bit(d->hwirq, pctrl->skip_wake_irqs)) 807 + msm_gpio_irq_mask(d); 799 808 } 800 809 801 810 static void msm_gpio_irq_unmask(struct irq_data *d) ··· 835 794 const struct msm_pingroup *g; 836 795 unsigned long flags; 837 796 u32 val; 797 + 798 + if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) 799 + return; 838 800 839 801 g = &pctrl->soc->groups[d->hwirq]; 840 802 ··· 863 819 const struct msm_pingroup *g; 864 820 unsigned long flags; 865 821 u32 val; 822 + 823 + if (d->parent_data) 824 + irq_chip_set_type_parent(d, type); 825 + 826 + if (test_bit(d->hwirq, pctrl->skip_wake_irqs)) 827 + return 0; 866 828 867 829 g = &pctrl->soc->groups[d->hwirq]; 868 830 ··· 962 912 struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 963 913 unsigned long flags; 964 914 915 + /* 916 + * While they may not wake up when the TLMM is powered off, 917 + * some GPIOs would like to wakeup the system from suspend 918 + * when TLMM is powered on. To allow that, enable the GPIO 919 + * summary line to be wakeup capable at GIC. 
920 + */ 921 + if (d->parent_data) 922 + irq_chip_set_wake_parent(d, on); 923 + 965 924 raw_spin_lock_irqsave(&pctrl->lock, flags); 966 925 967 926 irq_set_irq_wake(pctrl->irq, on); ··· 1049 990 chained_irq_exit(chip, desc); 1050 991 } 1051 992 993 + static int msm_gpio_wakeirq(struct gpio_chip *gc, 994 + unsigned int child, 995 + unsigned int child_type, 996 + unsigned int *parent, 997 + unsigned int *parent_type) 998 + { 999 + struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 1000 + const struct msm_gpio_wakeirq_map *map; 1001 + int i; 1002 + 1003 + *parent = GPIO_NO_WAKE_IRQ; 1004 + *parent_type = IRQ_TYPE_EDGE_RISING; 1005 + 1006 + for (i = 0; i < pctrl->soc->nwakeirq_map; i++) { 1007 + map = &pctrl->soc->wakeirq_map[i]; 1008 + if (map->gpio == child) { 1009 + *parent = map->wakeirq; 1010 + break; 1011 + } 1012 + } 1013 + 1014 + return 0; 1015 + } 1016 + 1052 1017 static bool msm_gpio_needs_valid_mask(struct msm_pinctrl *pctrl) 1053 1018 { 1054 1019 if (pctrl->soc->reserved_gpios) ··· 1085 1002 { 1086 1003 struct gpio_chip *chip; 1087 1004 struct gpio_irq_chip *girq; 1088 - int ret; 1089 - unsigned ngpio = pctrl->soc->ngpios; 1005 + int i, ret; 1006 + unsigned gpio, ngpio = pctrl->soc->ngpios; 1007 + struct device_node *np; 1008 + bool skip; 1090 1009 1091 1010 if (WARN_ON(ngpio > MAX_NR_GPIO)) 1092 1011 return -EINVAL; ··· 1105 1020 1106 1021 pctrl->irq_chip.name = "msmgpio"; 1107 1022 pctrl->irq_chip.irq_enable = msm_gpio_irq_enable; 1023 + pctrl->irq_chip.irq_disable = msm_gpio_irq_disable; 1108 1024 pctrl->irq_chip.irq_mask = msm_gpio_irq_mask; 1109 1025 pctrl->irq_chip.irq_unmask = msm_gpio_irq_unmask; 1110 1026 pctrl->irq_chip.irq_ack = msm_gpio_irq_ack; 1027 + pctrl->irq_chip.irq_eoi = irq_chip_eoi_parent; 1111 1028 pctrl->irq_chip.irq_set_type = msm_gpio_irq_set_type; 1112 1029 pctrl->irq_chip.irq_set_wake = msm_gpio_irq_set_wake; 1113 1030 pctrl->irq_chip.irq_request_resources = msm_gpio_irq_reqres; 1114 1031 pctrl->irq_chip.irq_release_resources = 
msm_gpio_irq_relres; 1115 1032 1033 + np = of_parse_phandle(pctrl->dev->of_node, "wakeup-parent", 0); 1034 + if (np) { 1035 + chip->irq.parent_domain = irq_find_matching_host(np, 1036 + DOMAIN_BUS_WAKEUP); 1037 + of_node_put(np); 1038 + if (!chip->irq.parent_domain) 1039 + return -EPROBE_DEFER; 1040 + chip->irq.child_to_parent_hwirq = msm_gpio_wakeirq; 1041 + 1042 + /* 1043 + * Let's skip handling the GPIOs, if the parent irqchip 1044 + * is handling the direct connect IRQ of the GPIO. 1045 + */ 1046 + skip = irq_domain_qcom_handle_wakeup(chip->irq.parent_domain); 1047 + for (i = 0; skip && i < pctrl->soc->nwakeirq_map; i++) { 1048 + gpio = pctrl->soc->wakeirq_map[i].gpio; 1049 + set_bit(gpio, pctrl->skip_wake_irqs); 1050 + } 1051 + } 1052 + 1116 1053 girq = &chip->irq; 1117 1054 girq->chip = &pctrl->irq_chip; 1118 1055 girq->parent_handler = msm_gpio_irq_handler; 1056 + girq->fwnode = pctrl->dev->fwnode; 1119 1057 girq->num_parents = 1; 1120 1058 girq->parents = devm_kcalloc(pctrl->dev, 1, sizeof(*girq->parents), 1121 1059 GFP_KERNEL);
+14
drivers/pinctrl/qcom/pinctrl-msm.h
··· 92 92 }; 93 93 94 94 /** 95 + * struct msm_gpio_wakeirq_map - Map of GPIOs and their wakeup pins 96 + * @gpio: The GPIOs that are wakeup capable 97 + * @wakeirq: The interrupt at the always-on interrupt controller 98 + */ 99 + struct msm_gpio_wakeirq_map { 100 + unsigned int gpio; 101 + unsigned int wakeirq; 102 + }; 103 + 104 + /** 95 105 * struct msm_pinctrl_soc_data - Qualcomm pin controller driver configuration 96 106 * @pins: An array describing all pins the pin controller affects. 97 107 * @npins: The number of entries in @pins. ··· 111 101 * @ngroups: The numbmer of entries in @groups. 112 102 * @ngpio: The number of pingroups the driver should expose as GPIOs. 113 103 * @pull_no_keeper: The SoC does not support keeper bias. 104 + * @wakeirq_map: The map of wakeup capable GPIOs and the pin at PDC/MPM 105 + * @nwakeirq_map: The number of entries in @wakeirq_map 114 106 */ 115 107 struct msm_pinctrl_soc_data { 116 108 const struct pinctrl_pin_desc *pins; ··· 126 114 const char *const *tiles; 127 115 unsigned int ntiles; 128 116 const int *reserved_gpios; 117 + const struct msm_gpio_wakeirq_map *wakeirq_map; 118 + unsigned int nwakeirq_map; 129 119 }; 130 120 131 121 extern const struct dev_pm_ops msm_pinctrl_dev_pm_ops;
+22 -1
drivers/pinctrl/qcom/pinctrl-sdm845.c
··· 1 1 // SPDX-License-Identifier: GPL-2.0 2 2 /* 3 - * Copyright (c) 2016-2018, The Linux Foundation. All rights reserved. 3 + * Copyright (c) 2016-2019, The Linux Foundation. All rights reserved. 4 4 */ 5 5 6 6 #include <linux/acpi.h> ··· 1282 1282 0, 1, 2, 3, 81, 82, 83, 84, -1 1283 1283 }; 1284 1284 1285 + static const struct msm_gpio_wakeirq_map sdm845_pdc_map[] = { 1286 + { 1, 30 }, { 3, 31 }, { 5, 32 }, { 10, 33 }, { 11, 34 }, 1287 + { 20, 35 }, { 22, 36 }, { 24, 37 }, { 26, 38 }, { 30, 39 }, 1288 + { 31, 117 }, { 32, 41 }, { 34, 42 }, { 36, 43 }, { 37, 44 }, 1289 + { 38, 45 }, { 39, 46 }, { 40, 47 }, { 41, 115 }, { 43, 49 }, 1290 + { 44, 50 }, { 46, 51 }, { 48, 52 }, { 49, 118 }, { 52, 54 }, 1291 + { 53, 55 }, { 54, 56 }, { 56, 57 }, { 57, 58 }, { 58, 59 }, 1292 + { 59, 60 }, { 60, 61 }, { 61, 62 }, { 62, 63 }, { 63, 64 }, 1293 + { 64, 65 }, { 66, 66 }, { 68, 67 }, { 71, 68 }, { 73, 69 }, 1294 + { 77, 70 }, { 78, 71 }, { 79, 72 }, { 80, 73 }, { 84, 74 }, 1295 + { 85, 75 }, { 86, 76 }, { 88, 77 }, { 89, 116 }, { 91, 79 }, 1296 + { 92, 80 }, { 95, 81 }, { 96, 82 }, { 97, 83 }, { 101, 84 }, 1297 + { 103, 85 }, { 104, 86 }, { 115, 90 }, { 116, 91 }, { 117, 92 }, 1298 + { 118, 93 }, { 119, 94 }, { 120, 95 }, { 121, 96 }, { 122, 97 }, 1299 + { 123, 98 }, { 124, 99 }, { 125, 100 }, { 127, 102 }, { 128, 103 }, 1300 + { 129, 104 }, { 130, 105 }, { 132, 106 }, { 133, 107 }, { 145, 108 }, 1301 + }; 1302 + 1285 1303 static const struct msm_pinctrl_soc_data sdm845_pinctrl = { 1286 1304 .pins = sdm845_pins, 1287 1305 .npins = ARRAY_SIZE(sdm845_pins), ··· 1308 1290 .groups = sdm845_groups, 1309 1291 .ngroups = ARRAY_SIZE(sdm845_groups), 1310 1292 .ngpios = 151, 1293 + .wakeirq_map = sdm845_pdc_map, 1294 + .nwakeirq_map = ARRAY_SIZE(sdm845_pdc_map), 1295 + 1311 1296 }; 1312 1297 1313 1298 static const struct msm_pinctrl_soc_data sdm845_acpi_pinctrl = {
+6
include/linux/irq.h
··· 610 610 #ifdef CONFIG_IRQ_DOMAIN_HIERARCHY 611 611 extern void handle_fasteoi_ack_irq(struct irq_desc *desc); 612 612 extern void handle_fasteoi_mask_irq(struct irq_desc *desc); 613 + extern int irq_chip_set_parent_state(struct irq_data *data, 614 + enum irqchip_irq_state which, 615 + bool val); 616 + extern int irq_chip_get_parent_state(struct irq_data *data, 617 + enum irqchip_irq_state which, 618 + bool *state); 613 619 extern void irq_chip_enable_parent(struct irq_data *data); 614 620 extern void irq_chip_disable_parent(struct irq_data *data); 615 621 extern void irq_chip_ack_parent(struct irq_data *data);
+7 -3
include/linux/irq_work.h
··· 22 22 #define IRQ_WORK_CLAIMED (IRQ_WORK_PENDING | IRQ_WORK_BUSY) 23 23 24 24 struct irq_work { 25 - unsigned long flags; 25 + atomic_t flags; 26 26 struct llist_node llnode; 27 27 void (*func)(struct irq_work *); 28 28 }; ··· 30 30 static inline 31 31 void init_irq_work(struct irq_work *work, void (*func)(struct irq_work *)) 32 32 { 33 - work->flags = 0; 33 + atomic_set(&work->flags, 0); 34 34 work->func = func; 35 35 } 36 36 37 - #define DEFINE_IRQ_WORK(name, _f) struct irq_work name = { .func = (_f), } 37 + #define DEFINE_IRQ_WORK(name, _f) struct irq_work name = { \ 38 + .flags = ATOMIC_INIT(0), \ 39 + .func = (_f) \ 40 + } 41 + 38 42 39 43 bool irq_work_queue(struct irq_work *work); 40 44 bool irq_work_queue_on(struct irq_work *work, int cpu);
+2 -2
include/linux/irqchip/arm-gic-v3.h
··· 334 334 #define GITS_TYPER_PLPIS (1UL << 0) 335 335 #define GITS_TYPER_VLPIS (1UL << 1) 336 336 #define GITS_TYPER_ITT_ENTRY_SIZE_SHIFT 4 337 - #define GITS_TYPER_ITT_ENTRY_SIZE(r) ((((r) >> GITS_TYPER_ITT_ENTRY_SIZE_SHIFT) & 0xf) + 1) 337 + #define GITS_TYPER_ITT_ENTRY_SIZE GENMASK_ULL(7, 4) 338 338 #define GITS_TYPER_IDBITS_SHIFT 8 339 339 #define GITS_TYPER_DEVBITS_SHIFT 13 340 - #define GITS_TYPER_DEVBITS(r) ((((r) >> GITS_TYPER_DEVBITS_SHIFT) & 0x1f) + 1) 340 + #define GITS_TYPER_DEVBITS GENMASK_ULL(17, 13) 341 341 #define GITS_TYPER_PTA (1UL << 19) 342 342 #define GITS_TYPER_HCC_SHIFT 24 343 343 #define GITS_TYPER_HCC(r) (((r) >> GITS_TYPER_HCC_SHIFT) & 0xff)
-14
include/linux/irqchip/ingenic.h
··· 1 - /* SPDX-License-Identifier: GPL-2.0-or-later */ 2 - /* 3 - * Copyright (C) 2010, Lars-Peter Clausen <lars@metafoo.de> 4 - */ 5 - 6 - #ifndef __LINUX_IRQCHIP_INGENIC_H__ 7 - #define __LINUX_IRQCHIP_INGENIC_H__ 8 - 9 - #include <linux/irq.h> 10 - 11 - extern void ingenic_intc_irq_suspend(struct irq_data *data); 12 - extern void ingenic_intc_irq_resume(struct irq_data *data); 13 - 14 - #endif
+1
include/linux/irqdomain.h
··· 83 83 DOMAIN_BUS_IPI, 84 84 DOMAIN_BUS_FSL_MC_MSI, 85 85 DOMAIN_BUS_TI_SCI_INTA_MSI, 86 + DOMAIN_BUS_WAKEUP, 86 87 }; 87 88 88 89 /**
+34
include/linux/soc/qcom/irq.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + 3 + #ifndef __QCOM_IRQ_H 4 + #define __QCOM_IRQ_H 5 + 6 + #include <linux/irqdomain.h> 7 + 8 + #define GPIO_NO_WAKE_IRQ ~0U 9 + 10 + /** 11 + * QCOM specific IRQ domain flags that distinguishes the handling of wakeup 12 + * capable interrupts by different interrupt controllers. 13 + * 14 + * IRQ_DOMAIN_FLAG_QCOM_PDC_WAKEUP: Line must be masked at TLMM and the 15 + * interrupt configuration is done at PDC 16 + * IRQ_DOMAIN_FLAG_QCOM_MPM_WAKEUP: Interrupt configuration is handled at TLMM 17 + */ 18 + #define IRQ_DOMAIN_FLAG_QCOM_PDC_WAKEUP (IRQ_DOMAIN_FLAG_NONCORE << 0) 19 + #define IRQ_DOMAIN_FLAG_QCOM_MPM_WAKEUP (IRQ_DOMAIN_FLAG_NONCORE << 1) 20 + 21 + /** 22 + * irq_domain_qcom_handle_wakeup: Return if the domain handles interrupt 23 + * configuration 24 + * @d: irq domain 25 + * 26 + * This QCOM specific irq domain call returns if the interrupt controller 27 + * requires the interrupt be masked at the child interrupt controller. 28 + */ 29 + static inline bool irq_domain_qcom_handle_wakeup(const struct irq_domain *d) 30 + { 31 + return (d->flags & IRQ_DOMAIN_FLAG_QCOM_PDC_WAKEUP); 32 + } 33 + 34 + #endif
+1 -1
kernel/bpf/stackmap.c
··· 289 289 290 290 if (irqs_disabled()) { 291 291 work = this_cpu_ptr(&up_read_work); 292 - if (work->irq_work.flags & IRQ_WORK_BUSY) 292 + if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY) 293 293 /* cannot queue more up_read, fallback */ 294 294 irq_work_busy = true; 295 295 }
+44
kernel/irq/chip.c
··· 1298 1298 #endif /* CONFIG_IRQ_FASTEOI_HIERARCHY_HANDLERS */ 1299 1299 1300 1300 /** 1301 + * irq_chip_set_parent_state - set the state of a parent interrupt. 1302 + * 1303 + * @data: Pointer to interrupt specific data 1304 + * @which: State to be restored (one of IRQCHIP_STATE_*) 1305 + * @val: Value corresponding to @which 1306 + * 1307 + * Conditional success, if the underlying irqchip does not implement it. 1308 + */ 1309 + int irq_chip_set_parent_state(struct irq_data *data, 1310 + enum irqchip_irq_state which, 1311 + bool val) 1312 + { 1313 + data = data->parent_data; 1314 + 1315 + if (!data || !data->chip->irq_set_irqchip_state) 1316 + return 0; 1317 + 1318 + return data->chip->irq_set_irqchip_state(data, which, val); 1319 + } 1320 + EXPORT_SYMBOL_GPL(irq_chip_set_parent_state); 1321 + 1322 + /** 1323 + * irq_chip_get_parent_state - get the state of a parent interrupt. 1324 + * 1325 + * @data: Pointer to interrupt specific data 1326 + * @which: one of IRQCHIP_STATE_* the caller wants to know 1327 + * @state: a pointer to a boolean where the state is to be stored 1328 + * 1329 + * Conditional success, if the underlying irqchip does not implement it. 1330 + */ 1331 + int irq_chip_get_parent_state(struct irq_data *data, 1332 + enum irqchip_irq_state which, 1333 + bool *state) 1334 + { 1335 + data = data->parent_data; 1336 + 1337 + if (!data || !data->chip->irq_get_irqchip_state) 1338 + return 0; 1339 + 1340 + return data->chip->irq_get_irqchip_state(data, which, state); 1341 + } 1342 + EXPORT_SYMBOL_GPL(irq_chip_get_parent_state); 1343 + 1344 + /** 1301 1345 * irq_chip_enable_parent - Enable the parent interrupt (defaults to unmask if 1302 1346 * NULL) 1303 1347 * @data: Pointer to interrupt specific data
kernel/irq/irqdesc.c (+1 -1)
···
 EXPORT_SYMBOL_GPL(irq_free_descs);
 
 /**
- * irq_alloc_descs - allocate and initialize a range of irq descriptors
+ * __irq_alloc_descs - allocate and initialize a range of irq descriptors
  * @irq:	Allocate for specific irq number if irq >= 0
  * @from:	Start the search from this irq number
  * @cnt:	Number of consecutive irqs to allocate.
kernel/irq_work.c (+13 -21)
···
  */
 static bool irq_work_claim(struct irq_work *work)
 {
-	unsigned long flags, oflags, nflags;
+	int oflags;
 
+	oflags = atomic_fetch_or(IRQ_WORK_CLAIMED, &work->flags);
 	/*
-	 * Start with our best wish as a premise but only trust any
-	 * flag value after cmpxchg() result.
+	 * If the work is already pending, no need to raise the IPI.
+	 * The pairing atomic_fetch_andnot() in irq_work_run() makes sure
+	 * everything we did before is visible.
 	 */
-	flags = work->flags & ~IRQ_WORK_PENDING;
-	for (;;) {
-		nflags = flags | IRQ_WORK_CLAIMED;
-		oflags = cmpxchg(&work->flags, flags, nflags);
-		if (oflags == flags)
-			break;
-		if (oflags & IRQ_WORK_PENDING)
-			return false;
-		flags = oflags;
-		cpu_relax();
-	}
-
+	if (oflags & IRQ_WORK_PENDING)
+		return false;
 	return true;
 }
···
 static void __irq_work_queue_local(struct irq_work *work)
 {
 	/* If the work is "lazy", handle it from next tick if any */
-	if (work->flags & IRQ_WORK_LAZY) {
+	if (atomic_read(&work->flags) & IRQ_WORK_LAZY) {
 		if (llist_add(&work->llnode, this_cpu_ptr(&lazy_list)) &&
 		    tick_nohz_tick_stopped())
 			arch_irq_work_raise();
···
 {
 	struct irq_work *work, *tmp;
 	struct llist_node *llnode;
-	unsigned long flags;
 
 	BUG_ON(!irqs_disabled());
···
 	llnode = llist_del_all(list);
 	llist_for_each_entry_safe(work, tmp, llnode, llnode) {
+		int flags;
 		/*
 		 * Clear the PENDING bit, after this point the @work
 		 * can be re-used.
···
 		 * to claim that work don't rely on us to handle their data
 		 * while we are in the middle of the func.
 		 */
-		flags = work->flags & ~IRQ_WORK_PENDING;
-		xchg(&work->flags, flags);
+		flags = atomic_fetch_andnot(IRQ_WORK_PENDING, &work->flags);
 
 		work->func(work);
 		/*
 		 * Clear the BUSY bit and return to the free state if
 		 * no-one else claimed it meanwhile.
 		 */
-		(void)cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
+		flags &= ~IRQ_WORK_PENDING;
+		(void)atomic_cmpxchg(&work->flags, flags, flags & ~IRQ_WORK_BUSY);
 	}
 }
···
 {
 	lockdep_assert_irqs_enabled();
 
-	while (work->flags & IRQ_WORK_BUSY)
+	while (atomic_read(&work->flags) & IRQ_WORK_BUSY)
 		cpu_relax();
 }
 EXPORT_SYMBOL_GPL(irq_work_sync);
kernel/printk/printk.c (+1 -1)
···
 static DEFINE_PER_CPU(struct irq_work, wake_up_klogd_work) = {
 	.func = wake_up_klogd_work_func,
-	.flags = IRQ_WORK_LAZY,
+	.flags = ATOMIC_INIT(IRQ_WORK_LAZY),
 };
 
 void wake_up_klogd(void)
kernel/trace/bpf_trace.c (+1 -1)
···
 		return -EINVAL;
 
 	work = this_cpu_ptr(&send_signal_work);
-	if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY)
+	if (atomic_read(&work->irq_work.flags) & IRQ_WORK_BUSY)
 		return -EBUSY;
 
 	/* Add the current task, which is the target of sending signal,