Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

Merge tag 'irq-core-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip

Pull irq updates from Thomas Gleixner:
"Updates to the interrupt core and driver subsystems:

Core changes:

- The usual set of small fixes and improvements all over the place,
but nothing stands out

MSI changes:

- Further consolidation of the PCI/MSI interrupt chip code

- Make MSI sysfs code independent of PCI/MSI and expose the MSI
interrupts of platform devices in the same way as PCI exposes them.

Driver changes:

- Support for ARM GICv3 EPPI partitions

- Treewide conversion to generic_handle_domain_irq() for all chained
interrupt controllers

- Conversion to bitmap_zalloc() throughout the irq chip drivers

- The usual set of small fixes and improvements"

* tag 'irq-core-2021-08-30' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (57 commits)
platform-msi: Add ABI to show msi_irqs of platform devices
genirq/msi: Move MSI sysfs handling from PCI to MSI core
genirq/cpuhotplug: Demote debug printk to KERN_DEBUG
irqchip/qcom-pdc: Trim unused levels of the interrupt hierarchy
irqdomain: Export irq_domain_disconnect_hierarchy()
irqchip/gic-v3: Fix priority comparison when non-secure priorities are used
irqchip/apple-aic: Fix irq_disable from within irq handlers
pinctrl/rockchip: drop the gpio related codes
gpio/rockchip: drop irq_gc_lock/irq_gc_unlock for irq set type
gpio/rockchip: support next version gpio controller
gpio/rockchip: use struct rockchip_gpio_regs for gpio controller
gpio/rockchip: add driver for rockchip gpio
dt-bindings: gpio: change items restriction of clock for rockchip,gpio-bank
pinctrl/rockchip: add pinctrl device to gpio bank struct
pinctrl/rockchip: separate struct rockchip_pin_bank to a head file
pinctrl/rockchip: always enable clock for gpio controller
genirq: Fix kernel doc indentation
EDAC/altera: Convert to generic_handle_domain_irq()
powerpc: Bulk conversion to generic_handle_domain_irq()
nios2: Bulk conversion to generic_handle_domain_irq()
...

+1828 -1679
+14
Documentation/ABI/testing/sysfs-bus-platform
···
 		value comes from an ACPI _PXM method or a similar firmware
 		source. Initial users for this file would be devices like
 		arm smmu which are populated by arm64 acpi_iort.
+
+What:		/sys/bus/platform/devices/.../msi_irqs/
+Date:		August 2021
+Contact:	Barry Song <song.bao.hua@hisilicon.com>
+Description:
+		The /sys/devices/.../msi_irqs directory contains a variable set
+		of files, with each file being named after a corresponding msi
+		irq vector allocated to that device.
+
+What:		/sys/bus/platform/devices/.../msi_irqs/<N>
+Date:		August 2021
+Contact:	Barry Song <song.bao.hua@hisilicon.com>
+Description:
+		This attribute will show "msi" if <N> is a valid msi irq
+25 -3
Documentation/core-api/irq/irq-domain.rst
···
 the hwirq, and call the .map() callback so the driver can perform any
 required hardware setup.
 
-When an interrupt is received, irq_find_mapping() function should
-be used to find the Linux IRQ number from the hwirq number.
+Once a mapping has been established, it can be retrieved or used via a
+variety of methods:
+
+- irq_resolve_mapping() returns a pointer to the irq_desc structure
+  for a given domain and hwirq number, and NULL if there was no
+  mapping.
+- irq_find_mapping() returns a Linux IRQ number for a given domain and
+  hwirq number, and 0 if there was no mapping
+- irq_linear_revmap() is now identical to irq_find_mapping(), and is
+  deprecated
+- generic_handle_domain_irq() handles an interrupt described by a
+  domain and a hwirq number
+- handle_domain_irq() does the same thing for root interrupt
+  controllers and deals with the set_irq_reg()/irq_enter() sequences
+  that most architecture requires
+
+Note that irq domain lookups must happen in contexts that are
+compatible with a RCU read-side critical section.
 
 The irq_create_mapping() function must be called *atleast once*
 before any call to irq_find_mapping(), lest the descriptor will not
···
 IRQ number and call the .map() callback so that driver can program the
 Linux IRQ number into the hardware.
 
-Most drivers cannot use this mapping.
+Most drivers cannot use this mapping, and it is now gated on the
+CONFIG_IRQ_DOMAIN_NOMAP option. Please refrain from introducing new
+users of this API.
 
 Legacy
 ------
···
 for IRQ numbers that are passed to struct device registrations. In that
 case the Linux IRQ numbers cannot be dynamically assigned and the legacy
 mapping should be used.
+
+As the name implies, the *_legacy() functions are deprecated and only
+exist to ease the support of ancient platforms. No new users should be
+added.
 
 The legacy map assumes a contiguous range of IRQ numbers has already
 been allocated for the controller and that the IRQ number can be
+4 -1
Documentation/devicetree/bindings/gpio/rockchip,gpio-bank.yaml
···
     maxItems: 1
 
   clocks:
-    maxItems: 1
+    minItems: 1
+    items:
+      - description: APB interface clock source
+      - description: GPIO debounce reference clock source
 
   gpio-controller: true
 
+1 -1
arch/arc/kernel/mcip.c
···
 	irq_hw_number_t idu_hwirq = core_hwirq - FIRST_EXT_IRQ;
 
 	chained_irq_enter(core_chip, desc);
-	generic_handle_irq(irq_find_mapping(idu_domain, idu_hwirq));
+	generic_handle_domain_irq(idu_domain, idu_hwirq);
 	chained_irq_exit(core_chip, desc);
 }
 
+2 -10
arch/arm/common/sa1111.c
···
 	return irq_create_mapping(sachip->irqdomain, hwirq);
 }
 
-static void sa1111_handle_irqdomain(struct irq_domain *irqdomain, int irq)
-{
-	struct irq_desc *d = irq_to_desc(irq_linear_revmap(irqdomain, irq));
-
-	if (d)
-		generic_handle_irq_desc(d);
-}
-
 /*
  * SA1111 interrupt support.  Since clearing an IRQ while there are
  * active IRQs causes the interrupt output to pulse, the upper levels
···
 
 	for (i = 0; stat0; i++, stat0 >>= 1)
 		if (stat0 & 1)
-			sa1111_handle_irqdomain(irqdomain, i);
+			generic_handle_domain_irq(irqdomain, i);
 
 	for (i = 32; stat1; i++, stat1 >>= 1)
 		if (stat1 & 1)
-			sa1111_handle_irqdomain(irqdomain, i);
+			generic_handle_domain_irq(irqdomain, i);
 
 	/* For level-based interrupts */
 	desc->irq_data.chip->irq_unmask(&desc->irq_data);
+2 -4
arch/arm/mach-pxa/pxa_cplds_irqs.c
···
 
 	do {
 		pending = readl(fpga->base + FPGA_IRQ_SET_CLR) & fpga->irq_mask;
-		for_each_set_bit(bit, &pending, CPLDS_NB_IRQ) {
-			generic_handle_irq(irq_find_mapping(fpga->irqdomain,
-							    bit));
-		}
+		for_each_set_bit(bit, &pending, CPLDS_NB_IRQ)
+			generic_handle_domain_irq(fpga->irqdomain, bit);
 	} while (pending);
 
 	return IRQ_HANDLED;
+2 -3
arch/arm/mach-s3c/irq-s3c24xx.c
···
 	struct s3c_irq_data *irq_data = irq_desc_get_chip_data(desc);
 	struct s3c_irq_intc *intc = irq_data->intc;
 	struct s3c_irq_intc *sub_intc = irq_data->sub_intc;
-	unsigned int n, offset, irq;
+	unsigned int n, offset;
 	unsigned long src, msk;
 
 	/* we're using individual domains for the non-dt case
···
 	while (src) {
 		n = __ffs(src);
 		src &= ~(1 << n);
-		irq = irq_find_mapping(sub_intc->domain, offset + n);
-		generic_handle_irq(irq);
+		generic_handle_domain_irq(sub_intc->domain, offset + n);
 	}
 
 	chained_irq_exit(chip, desc);
+7 -7
arch/mips/ath25/ar2315.c
···
 {
 	u32 pending = ar2315_rst_reg_read(AR2315_ISR) &
 		      ar2315_rst_reg_read(AR2315_IMR);
-	unsigned nr, misc_irq = 0;
+	unsigned nr;
+	int ret = 0;
 
 	if (pending) {
 		struct irq_domain *domain = irq_desc_get_handler_data(desc);
 
 		nr = __ffs(pending);
-		misc_irq = irq_find_mapping(domain, nr);
-	}
 
-	if (misc_irq) {
 		if (nr == AR2315_MISC_IRQ_GPIO)
 			ar2315_rst_reg_write(AR2315_ISR, AR2315_ISR_GPIO);
 		else if (nr == AR2315_MISC_IRQ_WATCHDOG)
 			ar2315_rst_reg_write(AR2315_ISR, AR2315_ISR_WD);
-		generic_handle_irq(misc_irq);
-	} else {
-		spurious_interrupt();
+
+		ret = generic_handle_domain_irq(domain, nr);
 	}
+
+	if (!pending || ret)
+		spurious_interrupt();
 }
 
 static void ar2315_misc_irq_unmask(struct irq_data *d)
+6 -7
arch/mips/ath25/ar5312.c
···
 {
 	u32 pending = ar5312_rst_reg_read(AR5312_ISR) &
 		      ar5312_rst_reg_read(AR5312_IMR);
-	unsigned nr, misc_irq = 0;
+	unsigned nr;
+	int ret = 0;
 
 	if (pending) {
 		struct irq_domain *domain = irq_desc_get_handler_data(desc);
 
 		nr = __ffs(pending);
-		misc_irq = irq_find_mapping(domain, nr);
-	}
 
-	if (misc_irq) {
-		generic_handle_irq(misc_irq);
+		ret = generic_handle_domain_irq(domain, nr);
 		if (nr == AR5312_MISC_IRQ_TIMER)
 			ar5312_rst_reg_read(AR5312_TIMER);
-	} else {
-		spurious_interrupt();
 	}
+
+	if (!pending || ret)
+		spurious_interrupt();
 }
 
 /* Enable the specified AR5312_MISC_IRQ interrupt */
+1 -1
arch/mips/lantiq/irq.c
···
 	 */
 	irq = __fls(irq);
 	hwirq = irq + MIPS_CPU_IRQ_CASCADE + (INT_NUM_IM_OFFSET * module);
-	generic_handle_irq(irq_linear_revmap(ltq_domain, hwirq));
+	generic_handle_domain_irq(ltq_domain, hwirq);
 
 	/* if this is a EBU irq, we need to ack it or get a deadlock */
 	if (irq == LTQ_ICU_EBU_IRQ && !module && LTQ_EBU_PCC_ISTAT != 0)
+3 -5
arch/mips/pci/pci-ar2315.c
···
 	struct ar2315_pci_ctrl *apc = irq_desc_get_handler_data(desc);
 	u32 pending = ar2315_pci_reg_read(apc, AR2315_PCI_ISR) &
 		      ar2315_pci_reg_read(apc, AR2315_PCI_IMR);
-	unsigned pci_irq = 0;
+	int ret = 0;
 
 	if (pending)
-		pci_irq = irq_find_mapping(apc->domain, __ffs(pending));
+		ret = generic_handle_domain_irq(apc->domain, __ffs(pending));
 
-	if (pci_irq)
-		generic_handle_irq(pci_irq);
-	else
+	if (!pending || ret)
 		spurious_interrupt();
 }
 
+2 -3
arch/mips/pci/pci-rt3883.c
···
 	}
 
 	while (pending) {
-		unsigned irq, bit = __ffs(pending);
+		unsigned bit = __ffs(pending);
 
-		irq = irq_find_mapping(rpc->irq_domain, bit);
-		generic_handle_irq(irq);
+		generic_handle_domain_irq(rpc->irq_domain, bit);
 
 		pending &= ~BIT(bit);
 	}
+1 -1
arch/mips/ralink/irq.c
···
 
 	if (pending) {
 		struct irq_domain *domain = irq_desc_get_handler_data(desc);
-		generic_handle_irq(irq_find_mapping(domain, __ffs(pending)));
+		generic_handle_domain_irq(domain, __ffs(pending));
 	} else {
 		spurious_interrupt();
 	}
+6 -10
arch/mips/sgi-ip27/ip27-irq.c
···
 	unsigned long *mask = per_cpu(irq_enable_mask, cpu);
 	struct irq_domain *domain;
 	u64 pend0;
-	int irq;
+	int ret;
 
 	/* copied from Irix intpend0() */
 	pend0 = LOCAL_HUB_L(PI_INT_PEND0);
···
 #endif
 	{
 		domain = irq_desc_get_handler_data(desc);
-		irq = irq_linear_revmap(domain, __ffs(pend0));
-		if (irq)
-			generic_handle_irq(irq);
-		else
+		ret = generic_handle_domain_irq(domain, __ffs(pend0));
+		if (ret)
 			spurious_interrupt();
 	}
 
···
 	unsigned long *mask = per_cpu(irq_enable_mask, cpu);
 	struct irq_domain *domain;
 	u64 pend1;
-	int irq;
+	int ret;
 
 	/* copied from Irix intpend0() */
 	pend1 = LOCAL_HUB_L(PI_INT_PEND1);
···
 		return;
 
 	domain = irq_desc_get_handler_data(desc);
-	irq = irq_linear_revmap(domain, __ffs(pend1) + 64);
-	if (irq)
-		generic_handle_irq(irq);
-	else
+	ret = generic_handle_domain_irq(domain, __ffs(pend1) + 64);
+	if (ret)
 		spurious_interrupt();
 
 	LOCAL_HUB_L(PI_INT_PEND1);
+3 -5
arch/mips/sgi-ip30/ip30-irq.c
···
 	int cpu = smp_processor_id();
 	struct irq_domain *domain;
 	u64 pend, mask;
-	int irq;
+	int ret;
 
 	pend = heart_read(&heart_regs->isr);
 	mask = (heart_read(&heart_regs->imr[cpu]) &
···
 #endif
 	{
 		domain = irq_desc_get_handler_data(desc);
-		irq = irq_linear_revmap(domain, __ffs(pend));
-		if (irq)
-			generic_handle_irq(irq);
-		else
+		ret = generic_handle_domain_irq(domain, __ffs(pend));
+		if (ret)
 			spurious_interrupt();
 	}
 }
+1 -3
arch/nios2/kernel/irq.c
···
 asmlinkage void do_IRQ(int hwirq, struct pt_regs *regs)
 {
 	struct pt_regs *oldregs = set_irq_regs(regs);
-	int irq;
 
 	irq_enter();
-	irq = irq_find_mapping(NULL, hwirq);
-	generic_handle_irq(irq);
+	generic_handle_domain_irq(NULL, hwirq);
 	irq_exit();
 
 	set_irq_regs(oldregs);
+1 -3
arch/powerpc/platforms/4xx/uic.c
···
 	struct uic *uic = irq_desc_get_handler_data(desc);
 	u32 msr;
 	int src;
-	int subvirq;
 
 	raw_spin_lock(&desc->lock);
 	if (irqd_is_level_type(idata))
···
 
 	src = 32 - ffs(msr);
 
-	subvirq = irq_linear_revmap(uic->irqhost, src);
-	generic_handle_irq(subvirq);
+	generic_handle_domain_irq(uic->irqhost, src);
 
 uic_irq_ret:
 	raw_spin_lock(&desc->lock);
+10 -13
arch/powerpc/platforms/512x/mpc5121_ads_cpld.c
···
 	.irq_unmask = cpld_unmask_irq,
 };
 
-static int
+static unsigned int
 cpld_pic_get_irq(int offset, u8 ignore, u8 __iomem *statusp,
 		 u8 __iomem *maskp)
 {
-	int cpld_irq;
 	u8 status = in_8(statusp);
 	u8 mask = in_8(maskp);
 
···
 	status |= (ignore | mask);
 
 	if (status == 0xff)
-		return 0;
+		return ~0;
 
-	cpld_irq = ffz(status) + offset;
-
-	return irq_linear_revmap(cpld_pic_host, cpld_irq);
+	return ffz(status) + offset;
 }
 
 static void cpld_pic_cascade(struct irq_desc *desc)
 {
-	unsigned int irq;
+	unsigned int hwirq;
 
-	irq = cpld_pic_get_irq(0, PCI_IGNORE, &cpld_regs->pci_status,
+	hwirq = cpld_pic_get_irq(0, PCI_IGNORE, &cpld_regs->pci_status,
 			       &cpld_regs->pci_mask);
-	if (irq) {
-		generic_handle_irq(irq);
+	if (hwirq != ~0) {
+		generic_handle_domain_irq(cpld_pic_host, hwirq);
 		return;
 	}
 
-	irq = cpld_pic_get_irq(8, MISC_IGNORE, &cpld_regs->misc_status,
+	hwirq = cpld_pic_get_irq(8, MISC_IGNORE, &cpld_regs->misc_status,
 			       &cpld_regs->misc_mask);
-	if (irq) {
-		generic_handle_irq(irq);
+	if (hwirq != ~0) {
+		generic_handle_domain_irq(cpld_pic_host, hwirq);
 		return;
 	}
 }
+4 -5
arch/powerpc/platforms/52xx/media5200.c
···
 static void media5200_irq_cascade(struct irq_desc *desc)
 {
 	struct irq_chip *chip = irq_desc_get_chip(desc);
-	int sub_virq, val;
+	int val;
 	u32 status, enable;
 
 	/* Mask off the cascaded IRQ */
···
 	enable = in_be32(media5200_irq.regs + MEDIA5200_IRQ_STATUS);
 	val = ffs((status & enable) >> MEDIA5200_IRQ_SHIFT);
 	if (val) {
-		sub_virq = irq_linear_revmap(media5200_irq.irqhost, val - 1);
-		/* pr_debug("%s: virq=%i s=%.8x e=%.8x hwirq=%i subvirq=%i\n",
-		 *          __func__, virq, status, enable, val - 1, sub_virq);
+		generic_handle_domain_irq(media5200_irq.irqhost, val - 1);
+		/* pr_debug("%s: virq=%i s=%.8x e=%.8x hwirq=%i\n",
+		 *          __func__, virq, status, enable, val - 1);
 		 */
-		generic_handle_irq(sub_virq);
 	}
 
 	/* Processing done; can reenable the cascade now */
+2 -5
arch/powerpc/platforms/52xx/mpc52xx_gpt.c
···
 static void mpc52xx_gpt_irq_cascade(struct irq_desc *desc)
 {
 	struct mpc52xx_gpt_priv *gpt = irq_desc_get_handler_data(desc);
-	int sub_virq;
 	u32 status;
 
 	status = in_be32(&gpt->regs->status) & MPC52xx_GPT_STATUS_IRQMASK;
-	if (status) {
-		sub_virq = irq_linear_revmap(gpt->irqhost, 0);
-		generic_handle_irq(sub_virq);
-	}
+	if (status)
+		generic_handle_domain_irq(gpt->irqhost, 0);
 }
 
 static int mpc52xx_gpt_irq_map(struct irq_domain *h, unsigned int virq,
+2 -4
arch/powerpc/platforms/82xx/pq2ads-pci-pic.c
···
 			break;
 
 		for (bit = 0; pend != 0; ++bit, pend <<= 1) {
-			if (pend & 0x80000000) {
-				int virq = irq_linear_revmap(priv->host, bit);
-				generic_handle_irq(virq);
-			}
+			if (pend & 0x80000000)
+				generic_handle_domain_irq(priv->host, bit);
 		}
 	}
 }
+2 -6
arch/powerpc/platforms/cell/interrupt.c
···
 	out_be64(&node_iic->iic_is, ack);
 	/* handle them */
 	for (cascade = 63; cascade >= 0; cascade--)
-		if (bits & (0x8000000000000000UL >> cascade)) {
-			unsigned int cirq =
-				irq_linear_revmap(iic_host,
+		if (bits & (0x8000000000000000UL >> cascade))
+			generic_handle_domain_irq(iic_host,
 						  base | cascade);
-			if (cirq)
-				generic_handle_irq(cirq);
-		}
 	/* post-ack level interrupts */
 	ack = bits & ~IIC_ISR_EDGE_MASK;
 	if (ack)
+3 -8
arch/powerpc/platforms/cell/spider-pic.c
···
 {
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	struct spider_pic *pic = irq_desc_get_handler_data(desc);
-	unsigned int cs, virq;
+	unsigned int cs;
 
 	cs = in_be32(pic->regs + TIR_CS) >> 24;
-	if (cs == SPIDER_IRQ_INVALID)
-		virq = 0;
-	else
-		virq = irq_linear_revmap(pic->host, cs);
-
-	if (virq)
-		generic_handle_irq(virq);
+	if (cs != SPIDER_IRQ_INVALID)
+		generic_handle_domain_irq(pic->host, cs);
 
 	chip->irq_eoi(&desc->irq_data);
 }
+7 -8
arch/powerpc/platforms/embedded6xx/hlwd-pic.c
···
 static unsigned int __hlwd_pic_get_irq(struct irq_domain *h)
 {
 	void __iomem *io_base = h->host_data;
-	int irq;
 	u32 irq_status;
 
 	irq_status = in_be32(io_base + HW_BROADWAY_ICR) &
···
 	if (irq_status == 0)
 		return 0;	/* no more IRQs pending */
 
-	irq = __ffs(irq_status);
-	return irq_linear_revmap(h, irq);
+	return __ffs(irq_status);
 }
 
 static void hlwd_pic_irq_cascade(struct irq_desc *desc)
 {
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	struct irq_domain *irq_domain = irq_desc_get_handler_data(desc);
-	unsigned int virq;
+	unsigned int hwirq;
 
 	raw_spin_lock(&desc->lock);
 	chip->irq_mask(&desc->irq_data); /* IRQ_LEVEL */
 	raw_spin_unlock(&desc->lock);
 
-	virq = __hlwd_pic_get_irq(irq_domain);
-	if (virq)
-		generic_handle_irq(virq);
+	hwirq = __hlwd_pic_get_irq(irq_domain);
+	if (hwirq)
+		generic_handle_domain_irq(irq_domain, hwirq);
 	else
 		pr_err("spurious interrupt!\n");
 
···
 
 unsigned int hlwd_pic_get_irq(void)
 {
-	return __hlwd_pic_get_irq(hlwd_irq_host);
+	unsigned int hwirq = __hlwd_pic_get_irq(hlwd_irq_host);
+	return hwirq ? irq_linear_revmap(hlwd_irq_host, hwirq) : 0;
 }
 
 /*
+4 -7
arch/powerpc/platforms/powernv/opal-irqchip.c
···
 	e = READ_ONCE(last_outstanding_events) & opal_event_irqchip.mask;
 again:
 	while (e) {
-		int virq, hwirq;
+		int hwirq;
 
 		hwirq = fls64(e) - 1;
 		e &= ~BIT_ULL(hwirq);
 
 		local_irq_disable();
-		virq = irq_find_mapping(opal_event_irqchip.domain, hwirq);
-		if (virq) {
-			irq_enter();
-			generic_handle_irq(virq);
-			irq_exit();
-		}
+		irq_enter();
+		generic_handle_domain_irq(opal_event_irqchip.domain, hwirq);
+		irq_exit();
 		local_irq_enable();
 
 		cond_resched();
+4 -7
arch/powerpc/sysdev/fsl_mpic_err.c
···
 	struct mpic *mpic = (struct mpic *) data;
 	u32 eisr, eimr;
 	int errint;
-	unsigned int cascade_irq;
 
 	eisr = mpic_fsl_err_read(mpic->err_regs, MPIC_ERR_INT_EISR);
 	eimr = mpic_fsl_err_read(mpic->err_regs, MPIC_ERR_INT_EIMR);
···
 		return IRQ_NONE;
 
 	while (eisr) {
+		int ret;
 		errint = __builtin_clz(eisr);
-		cascade_irq = irq_linear_revmap(mpic->irqhost,
-						mpic->err_int_vecs[errint]);
-		WARN_ON(!cascade_irq);
-		if (cascade_irq) {
-			generic_handle_irq(cascade_irq);
-		} else {
+		ret = generic_handle_domain_irq(mpic->irqhost,
+						mpic->err_int_vecs[errint]);
+		if (WARN_ON(ret)) {
 			eimr |= 1 << (31 - errint);
 			mpic_fsl_err_write(mpic->err_regs, eimr);
 		}
+4 -8
arch/powerpc/sysdev/fsl_msi.c
···
 
 static irqreturn_t fsl_msi_cascade(int irq, void *data)
 {
-	unsigned int cascade_irq;
 	struct fsl_msi *msi_data;
 	int msir_index = -1;
 	u32 msir_value = 0;
···
 	msi_data = cascade_data->msi_data;
 
 	msir_index = cascade_data->index;
-
-	if (msir_index >= NR_MSI_REG_MAX)
-		cascade_irq = 0;
 
 	switch (msi_data->feature & FSL_PIC_IP_MASK) {
 	case FSL_PIC_IP_MPIC:
···
 	}
 
 	while (msir_value) {
+		int err;
 		intr_index = ffs(msir_value) - 1;
 
-		cascade_irq = irq_linear_revmap(msi_data->irqhost,
+		err = generic_handle_domain_irq(msi_data->irqhost,
 				msi_hwirq(msi_data, msir_index,
 					  intr_index + have_shift));
-		if (cascade_irq) {
-			generic_handle_irq(cascade_irq);
+		if (!err)
 			ret = IRQ_HANDLED;
-		}
+
 		have_shift += intr_index + 1;
 		msir_value = msir_value >> (intr_index + 1);
 	}
-4
arch/s390/pci/pci_irq.c
···
 	for_each_pci_msi_entry(msi, pdev) {
 		if (!msi->irq)
 			continue;
-		if (msi->msi_attrib.is_msix)
-			__pci_msix_desc_mask_irq(msi, 1);
-		else
-			__pci_msi_desc_mask_irq(msi, 1, 1);
 		irq_set_msi_desc(msi->irq, NULL);
 		irq_free_desc(msi->irq);
 		msi->msg.address_lo = 0;
+1 -1
arch/sh/boards/mach-se/7343/irq.c
···
 	mask = ioread16(se7343_irq_regs + PA_CPLD_ST_REG);
 
 	for_each_set_bit(bit, &mask, SE7343_FPGA_IRQ_NR)
-		generic_handle_irq(irq_linear_revmap(se7343_irq_domain, bit));
+		generic_handle_domain_irq(se7343_irq_domain, bit);
 
 	chip->irq_unmask(data);
 }
+1 -1
arch/sh/boards/mach-se/7722/irq.c
···
 	mask = ioread16(se7722_irq_regs + IRQ01_STS_REG);
 
 	for_each_set_bit(bit, &mask, SE7722_FPGA_IRQ_NR)
-		generic_handle_irq(irq_linear_revmap(se7722_irq_domain, bit));
+		generic_handle_domain_irq(se7722_irq_domain, bit);
 
 	chip->irq_unmask(data);
 }
+1 -1
arch/sh/boards/mach-x3proto/gpio.c
···
 
 	mask = __raw_readw(KEYDETR);
 	for_each_set_bit(pin, &mask, NR_BASEBOARD_GPIOS)
-		generic_handle_irq(irq_linear_revmap(x3proto_irq_domain, pin));
+		generic_handle_domain_irq(x3proto_irq_domain, pin);
 
 	chip->irq_unmask(data);
 }
+1 -3
arch/xtensa/kernel/irq.c
···
 
 asmlinkage void do_IRQ(int hwirq, struct pt_regs *regs)
 {
-	int irq = irq_find_mapping(NULL, hwirq);
-
 #ifdef CONFIG_DEBUG_STACKOVERFLOW
 	/* Debugging check for stack overflow: is there less than 1KB free? */
 	{
···
 			       sp - sizeof(struct thread_info));
 	}
 #endif
-	generic_handle_irq(irq);
+	generic_handle_domain_irq(NULL, hwirq);
 }
 
 int arch_show_interrupts(struct seq_file *p, int prec)
+1 -1
block/blk-mq.c
···
 	 * This is probably worse than completing the request on a different
 	 * cache domain.
 	 */
-	if (force_irqthreads)
+	if (force_irqthreads())
 		return false;
 
 	/* same CPU or cache domain? Complete locally */
+15 -5
drivers/base/platform-msi.c
···
  * and the callback to write the MSI message.
  */
 struct platform_msi_priv_data {
-	struct device		*dev;
-	void			*host_data;
-	msi_alloc_info_t	arg;
-	irq_write_msi_msg_t	write_msg;
-	int			devid;
+	struct device			*dev;
+	void				*host_data;
+	const struct attribute_group	**msi_irq_groups;
+	msi_alloc_info_t		arg;
+	irq_write_msi_msg_t		write_msg;
+	int				devid;
 };
 
 /* The devid allocator */
···
 	if (err)
 		goto out_free_desc;
 
+	priv_data->msi_irq_groups = msi_populate_sysfs(dev);
+	if (IS_ERR(priv_data->msi_irq_groups)) {
+		err = PTR_ERR(priv_data->msi_irq_groups);
+		goto out_free_irqs;
+	}
+
 	return 0;
 
+out_free_irqs:
+	msi_domain_free_irqs(dev->msi_domain, dev);
 out_free_desc:
 	platform_msi_free_descs(dev, 0, nvec);
 out_free_priv_data:
···
 	struct msi_desc *desc;
 
 	desc = first_msi_entry(dev);
+	msi_destroy_sysfs(dev, desc->platform.msi_priv_data->msi_irq_groups);
 	platform_msi_free_priv_data(desc->platform.msi_priv_data);
 }
 
+2 -5
drivers/edac/altera_edac.c
···
 	regmap_read(edac->ecc_mgr_map, sm_offset, &irq_status);
 
 	bits = irq_status;
-	for_each_set_bit(bit, &bits, 32) {
-		irq = irq_linear_revmap(edac->domain, dberr * 32 + bit);
-		if (irq)
-			generic_handle_irq(irq);
-	}
+	for_each_set_bit(bit, &bits, 32)
+		generic_handle_domain_irq(edac->domain, dberr * 32 + bit);
 
 	chained_irq_exit(chip, desc);
 }
+8
drivers/gpio/Kconfig
···
 	  A 32-bit single register GPIO fixed in/out implementation.  This
 	  can be used to represent any register as a set of GPIO signals.
 
+config GPIO_ROCKCHIP
+	tristate "Rockchip GPIO support"
+	depends on ARCH_ROCKCHIP || COMPILE_TEST
+	select GPIOLIB_IRQCHIP
+	default ARCH_ROCKCHIP
+	help
+	  Say yes here to support GPIO on Rockchip SoCs.
+
 config GPIO_SAMA5D2_PIOBU
 	tristate "SAMA5D2 PIOBU GPIO support"
 	depends on MFD_SYSCON
+1
drivers/gpio/Makefile
···
 obj-$(CONFIG_GPIO_RDC321X)		+= gpio-rdc321x.o
 obj-$(CONFIG_GPIO_REALTEK_OTTO)		+= gpio-realtek-otto.o
 obj-$(CONFIG_GPIO_REG)			+= gpio-reg.o
+obj-$(CONFIG_GPIO_ROCKCHIP)		+= gpio-rockchip.o
 obj-$(CONFIG_ARCH_SA1100)		+= gpio-sa1100.o
 obj-$(CONFIG_GPIO_SAMA5D2_PIOBU)	+= gpio-sama5d2-piobu.o
 obj-$(CONFIG_GPIO_SCH311X)		+= gpio-sch311x.o
+2 -2
drivers/gpio/gpio-104-dio-48e.c
···
 	unsigned long gpio;
 
 	for_each_set_bit(gpio, &irq_mask, 2)
-		generic_handle_irq(irq_find_mapping(chip->irq.domain,
-						    19 + gpio*24));
+		generic_handle_domain_irq(chip->irq.domain,
+					  19 + gpio*24);
 
 	raw_spin_lock(&dio48egpio->lock);
 
+2 -2
drivers/gpio/gpio-104-idi-48.c
···
 		for_each_set_bit(bit_num, &irq_mask, 8) {
 			gpio = bit_num + boundary * 8;
 
-			generic_handle_irq(irq_find_mapping(chip->irq.domain,
-							    gpio));
+			generic_handle_domain_irq(chip->irq.domain,
+						  gpio);
 		}
 	}
 
+1 -1
drivers/gpio/gpio-104-idio-16.c
···
 	int gpio;
 
 	for_each_set_bit(gpio, &idio16gpio->irq_mask, chip->ngpio)
-		generic_handle_irq(irq_find_mapping(chip->irq.domain, gpio));
+		generic_handle_domain_irq(chip->irq.domain, gpio);
 
 	raw_spin_lock(&idio16gpio->lock);
 
+5 -6
drivers/gpio/gpio-altera.c
···
 	    (readl(mm_gc->regs + ALTERA_GPIO_EDGE_CAP) &
 	    readl(mm_gc->regs + ALTERA_GPIO_IRQ_MASK)))) {
 		writel(status, mm_gc->regs + ALTERA_GPIO_EDGE_CAP);
-		for_each_set_bit(i, &status, mm_gc->gc.ngpio) {
-			generic_handle_irq(irq_find_mapping(irqdomain, i));
-		}
+		for_each_set_bit(i, &status, mm_gc->gc.ngpio)
+			generic_handle_domain_irq(irqdomain, i);
 	}
 
 	chained_irq_exit(chip, desc);
···
 	status = readl(mm_gc->regs + ALTERA_GPIO_DATA);
 	status &= readl(mm_gc->regs + ALTERA_GPIO_IRQ_MASK);
 
-	for_each_set_bit(i, &status, mm_gc->gc.ngpio) {
-		generic_handle_irq(irq_find_mapping(irqdomain, i));
-	}
+	for_each_set_bit(i, &status, mm_gc->gc.ngpio)
+		generic_handle_domain_irq(irqdomain, i);
+
 	chained_irq_exit(chip, desc);
 }
 
+3 -6
drivers/gpio/gpio-aspeed-sgpio.c
···
 	struct gpio_chip *gc = irq_desc_get_handler_data(desc);
 	struct irq_chip *ic = irq_desc_get_chip(desc);
 	struct aspeed_sgpio *data = gpiochip_get_data(gc);
-	unsigned int i, p, girq;
+	unsigned int i, p;
 	unsigned long reg;
 
 	chained_irq_enter(ic, desc);
···
 
 		reg = ioread32(bank_reg(data, bank, reg_irq_status));
 
-		for_each_set_bit(p, &reg, 32) {
-			girq = irq_find_mapping(gc->irq.domain, i * 32 + p);
-			generic_handle_irq(girq);
-		}
-
+		for_each_set_bit(p, &reg, 32)
+			generic_handle_domain_irq(gc->irq.domain, i * 32 + p);
 	}
 
 	chained_irq_exit(ic, desc);
+3 -6
drivers/gpio/gpio-aspeed.c
···
 	struct gpio_chip *gc = irq_desc_get_handler_data(desc);
 	struct irq_chip *ic = irq_desc_get_chip(desc);
 	struct aspeed_gpio *data = gpiochip_get_data(gc);
-	unsigned int i, p, girq, banks;
+	unsigned int i, p, banks;
 	unsigned long reg;
 	struct aspeed_gpio *gpio = gpiochip_get_data(gc);
 
···
 
 		reg = ioread32(bank_reg(data, bank, reg_irq_status));
 
-		for_each_set_bit(p, &reg, 32) {
-			girq = irq_find_mapping(gc->irq.domain, i * 32 + p);
-			generic_handle_irq(girq);
-		}
-
+		for_each_set_bit(p, &reg, 32)
+			generic_handle_domain_irq(gc->irq.domain, i * 32 + p);
 	}
 
 	chained_irq_exit(ic, desc);
+2 -5
drivers/gpio/gpio-ath79.c
···
 
 	raw_spin_unlock_irqrestore(&ctrl->lock, flags);
 
-	if (pending) {
-		for_each_set_bit(irq, &pending, gc->ngpio)
-			generic_handle_irq(
-				irq_linear_revmap(gc->irq.domain, irq));
-	}
+	for_each_set_bit(irq, &pending, gc->ngpio)
+		generic_handle_domain_irq(gc->irq.domain, irq);
 
 	chained_irq_exit(irqchip, desc);
 }
+2 -4
drivers/gpio/gpio-bcm-kona.c
···
 	       (~(readl(reg_base + GPIO_INT_MASK(bank_id)))))) {
 		for_each_set_bit(bit, &sta, 32) {
 			int hwirq = GPIO_PER_BANK * bank_id + bit;
-			int child_irq =
-				irq_find_mapping(bank->kona_gpio->irq_domain,
-						 hwirq);
 			/*
 			 * Clear interrupt before handler is called so we don't
 			 * miss any interrupt occurred during executing them.
···
 			writel(readl(reg_base + GPIO_INT_STATUS(bank_id)) |
 			       BIT(bit), reg_base + GPIO_INT_STATUS(bank_id));
 			/* Invoke interrupt handler */
-			generic_handle_irq(child_irq);
+			generic_handle_domain_irq(bank->kona_gpio->irq_domain,
+						  hwirq);
 		}
 	}
 
+2 -3
drivers/gpio/gpio-brcmstb.c
···
 	unsigned long status;
 
 	while ((status = brcmstb_gpio_get_active_irqs(bank))) {
-		unsigned int irq, offset;
+		unsigned int offset;
 
 		for_each_set_bit(offset, &status, 32) {
 			if (offset >= bank->width)
 				dev_warn(&priv->pdev->dev,
 					 "IRQ for invalid GPIO (bank=%d, offset=%d)\n",
 					 bank->id, offset);
-			irq = irq_linear_revmap(domain, hwbase + offset);
-			generic_handle_irq(irq);
+			generic_handle_domain_irq(domain, hwbase + offset);
 		}
 	}
 }
+1 -1
drivers/gpio/gpio-cadence.c
···
 		~ioread32(cgpio->regs + CDNS_GPIO_IRQ_MASK);
 
 	for_each_set_bit(hwirq, &status, chip->ngpio)
-		generic_handle_irq(irq_find_mapping(chip->irq.domain, hwirq));
+		generic_handle_domain_irq(chip->irq.domain, hwirq);
 
 	chained_irq_exit(irqchip, desc);
 }
+1 -2
drivers/gpio/gpio-davinci.c
··· 369 369 */ 370 370 hw_irq = (bank_num / 2) * 32 + bit; 371 371 372 - generic_handle_irq( 373 - irq_find_mapping(d->irq_domain, hw_irq)); 372 + generic_handle_domain_irq(d->irq_domain, hw_irq); 374 373 } 375 374 } 376 375 chained_irq_exit(irq_desc_get_chip(desc), desc);
+9 -13
drivers/gpio/gpio-dln2.c
··· 395 395 static void dln2_gpio_event(struct platform_device *pdev, u16 echo, 396 396 const void *data, int len) 397 397 { 398 - int pin, irq; 398 + int pin, ret; 399 399 400 400 const struct { 401 401 __le16 count; ··· 416 416 return; 417 417 } 418 418 419 - irq = irq_find_mapping(dln2->gpio.irq.domain, pin); 420 - if (!irq) { 421 - dev_err(dln2->gpio.parent, "pin %d not mapped to IRQ\n", pin); 422 - return; 423 - } 424 - 425 419 switch (dln2->irq_type[pin]) { 426 420 case DLN2_GPIO_EVENT_CHANGE_RISING: 427 - if (event->value) 428 - generic_handle_irq(irq); 421 + if (!event->value) 422 + return; 429 423 break; 430 424 case DLN2_GPIO_EVENT_CHANGE_FALLING: 431 - if (!event->value) 432 - generic_handle_irq(irq); 425 + if (event->value) 426 + return; 433 427 break; 434 - default: 435 - generic_handle_irq(irq); 436 428 } 429 + 430 + ret = generic_handle_domain_irq(dln2->gpio.irq.domain, pin); 431 + if (unlikely(ret)) 432 + dev_err(dln2->gpio.parent, "pin %d not mapped to IRQ\n", pin); 437 433 } 438 434 439 435 static int dln2_gpio_probe(struct platform_device *pdev)
+1 -1
drivers/gpio/gpio-em.c
··· 173 173 while ((pending = em_gio_read(p, GIO_MST))) { 174 174 offset = __ffs(pending); 175 175 em_gio_write(p, GIO_IIR, BIT(offset)); 176 - generic_handle_irq(irq_find_mapping(p->irq_domain, offset)); 176 + generic_handle_domain_irq(p->irq_domain, offset); 177 177 irqs_handled++; 178 178 } 179 179
+4 -4
drivers/gpio/gpio-ep93xx.c
··· 128 128 */ 129 129 stat = readb(epg->base + EP93XX_GPIO_A_INT_STATUS); 130 130 for_each_set_bit(offset, &stat, 8) 131 - generic_handle_irq(irq_find_mapping(epg->gc[0].gc.irq.domain, 132 - offset)); 131 + generic_handle_domain_irq(epg->gc[0].gc.irq.domain, 132 + offset); 133 133 134 134 stat = readb(epg->base + EP93XX_GPIO_B_INT_STATUS); 135 135 for_each_set_bit(offset, &stat, 8) 136 - generic_handle_irq(irq_find_mapping(epg->gc[1].gc.irq.domain, 137 - offset)); 136 + generic_handle_domain_irq(epg->gc[1].gc.irq.domain, 137 + offset); 138 138 139 139 chained_irq_exit(irqchip, desc); 140 140 }
+1 -2
drivers/gpio/gpio-ftgpio010.c
··· 149 149 stat = readl(g->base + GPIO_INT_STAT_RAW); 150 150 if (stat) 151 151 for_each_set_bit(offset, &stat, gc->ngpio) 152 - generic_handle_irq(irq_find_mapping(gc->irq.domain, 153 - offset)); 152 + generic_handle_domain_irq(gc->irq.domain, offset); 154 153 155 154 chained_irq_exit(irqchip, desc); 156 155 }
+2 -2
drivers/gpio/gpio-hisi.c
··· 186 186 187 187 chained_irq_enter(irq_c, desc); 188 188 for_each_set_bit(hwirq, &irq_msk, HISI_GPIO_LINE_NUM_MAX) 189 - generic_handle_irq(irq_find_mapping(hisi_gpio->chip.irq.domain, 190 - hwirq)); 189 + generic_handle_domain_irq(hisi_gpio->chip.irq.domain, 190 + hwirq); 191 191 chained_irq_exit(irq_c, desc); 192 192 } 193 193
+2 -5
drivers/gpio/gpio-hlwd.c
··· 97 97 98 98 chained_irq_enter(chip, desc); 99 99 100 - for_each_set_bit(hwirq, &pending, 32) { 101 - int irq = irq_find_mapping(hlwd->gpioc.irq.domain, hwirq); 102 - 103 - generic_handle_irq(irq); 104 - } 100 + for_each_set_bit(hwirq, &pending, 32) 101 + generic_handle_domain_irq(hlwd->gpioc.irq.domain, hwirq); 105 102 106 103 chained_irq_exit(chip, desc); 107 104 }
+2 -6
drivers/gpio/gpio-merrifield.c
··· 359 359 /* Only interrupts that are enabled */ 360 360 pending &= enabled; 361 361 362 - for_each_set_bit(gpio, &pending, 32) { 363 - unsigned int irq; 364 - 365 - irq = irq_find_mapping(gc->irq.domain, base + gpio); 366 - generic_handle_irq(irq); 367 - } 362 + for_each_set_bit(gpio, &pending, 32) 363 + generic_handle_domain_irq(gc->irq.domain, base + gpio); 368 364 } 369 365 370 366 chained_irq_exit(irqchip, desc);
+1 -1
drivers/gpio/gpio-mpc8xxx.c
··· 120 120 mask = gc->read_reg(mpc8xxx_gc->regs + GPIO_IER) 121 121 & gc->read_reg(mpc8xxx_gc->regs + GPIO_IMR); 122 122 for_each_set_bit(i, &mask, 32) 123 - generic_handle_irq(irq_linear_revmap(mpc8xxx_gc->irq, 31 - i)); 123 + generic_handle_domain_irq(mpc8xxx_gc->irq, 31 - i); 124 124 125 125 return IRQ_HANDLED; 126 126 }
+1 -3
drivers/gpio/gpio-mt7621.c
··· 95 95 pending = mtk_gpio_r32(rg, GPIO_REG_STAT); 96 96 97 97 for_each_set_bit(bit, &pending, MTK_BANK_WIDTH) { 98 - u32 map = irq_find_mapping(gc->irq.domain, bit); 99 - 100 - generic_handle_irq(map); 98 + generic_handle_domain_irq(gc->irq.domain, bit); 101 99 mtk_gpio_w32(rg, GPIO_REG_STAT, BIT(bit)); 102 100 ret |= IRQ_HANDLED; 103 101 }
+1 -1
drivers/gpio/gpio-mxc.c
··· 241 241 if (port->both_edges & (1 << irqoffset)) 242 242 mxc_flip_edge(port, irqoffset); 243 243 244 - generic_handle_irq(irq_find_mapping(port->domain, irqoffset)); 244 + generic_handle_domain_irq(port->domain, irqoffset); 245 245 246 246 irq_stat &= ~(1 << irqoffset); 247 247 }
+1 -1
drivers/gpio/gpio-mxs.c
··· 157 157 if (port->both_edges & (1 << irqoffset)) 158 158 mxs_flip_edge(port, irqoffset); 159 159 160 - generic_handle_irq(irq_find_mapping(port->domain, irqoffset)); 160 + generic_handle_domain_irq(port->domain, irqoffset); 161 161 irq_stat &= ~(1 << irqoffset); 162 162 } 163 163 }
+1 -2
drivers/gpio/gpio-omap.c
··· 611 611 612 612 raw_spin_lock_irqsave(&bank->wa_lock, wa_lock_flags); 613 613 614 - generic_handle_irq(irq_find_mapping(bank->chip.irq.domain, 615 - bit)); 614 + generic_handle_domain_irq(bank->chip.irq.domain, bit); 616 615 617 616 raw_spin_unlock_irqrestore(&bank->wa_lock, 618 617 wa_lock_flags);
+1 -1
drivers/gpio/gpio-pci-idio-16.c
··· 260 260 return IRQ_NONE; 261 261 262 262 for_each_set_bit(gpio, &idio16gpio->irq_mask, chip->ngpio) 263 - generic_handle_irq(irq_find_mapping(chip->irq.domain, gpio)); 263 + generic_handle_domain_irq(chip->irq.domain, gpio); 264 264 265 265 raw_spin_lock(&idio16gpio->lock); 266 266
+1 -2
drivers/gpio/gpio-pcie-idio-24.c
··· 468 468 irq_mask = idio24gpio->irq_mask & irq_status; 469 469 470 470 for_each_set_bit(gpio, &irq_mask, chip->ngpio - 24) 471 - generic_handle_irq(irq_find_mapping(chip->irq.domain, 472 - gpio + 24)); 471 + generic_handle_domain_irq(chip->irq.domain, gpio + 24); 473 472 474 473 raw_spin_lock(&idio24gpio->lock); 475 474
+2 -2
drivers/gpio/gpio-pl061.c
··· 223 223 pending = readb(pl061->base + GPIOMIS); 224 224 if (pending) { 225 225 for_each_set_bit(offset, &pending, PL061_GPIO_NR) 226 - generic_handle_irq(irq_find_mapping(gc->irq.domain, 227 - offset)); 226 + generic_handle_domain_irq(gc->irq.domain, 227 + offset); 228 228 } 229 229 230 230 chained_irq_exit(irqchip, desc);
+4 -5
drivers/gpio/gpio-pxa.c
··· 455 455 for_each_set_bit(n, &gedr, BITS_PER_LONG) { 456 456 loop = 1; 457 457 458 - generic_handle_irq( 459 - irq_find_mapping(pchip->irqdomain, 460 - gpio + n)); 458 + generic_handle_domain_irq(pchip->irqdomain, 459 + gpio + n); 461 460 } 462 461 } 463 462 handled += loop; ··· 470 471 struct pxa_gpio_chip *pchip = d; 471 472 472 473 if (in_irq == pchip->irq0) { 473 - generic_handle_irq(irq_find_mapping(pchip->irqdomain, 0)); 474 + generic_handle_domain_irq(pchip->irqdomain, 0); 474 475 } else if (in_irq == pchip->irq1) { 475 - generic_handle_irq(irq_find_mapping(pchip->irqdomain, 1)); 476 + generic_handle_domain_irq(pchip->irqdomain, 1); 476 477 } else { 477 478 pr_err("%s() unknown irq %d\n", __func__, in_irq); 478 479 return IRQ_NONE;
+2 -2
drivers/gpio/gpio-rcar.c
··· 213 213 gpio_rcar_read(p, INTMSK))) { 214 214 offset = __ffs(pending); 215 215 gpio_rcar_write(p, INTCLR, BIT(offset)); 216 - generic_handle_irq(irq_find_mapping(p->gpio_chip.irq.domain, 217 - offset)); 216 + generic_handle_domain_irq(p->gpio_chip.irq.domain, 217 + offset); 218 218 irqs_handled++; 219 219 } 220 220
+3 -5
drivers/gpio/gpio-rda.c
··· 181 181 struct irq_chip *ic = irq_desc_get_chip(desc); 182 182 struct rda_gpio *rda_gpio = gpiochip_get_data(chip); 183 183 unsigned long status; 184 - u32 n, girq; 184 + u32 n; 185 185 186 186 chained_irq_enter(ic, desc); 187 187 ··· 189 189 /* Only lower 8 bits are capable of generating interrupts */ 190 190 status &= RDA_GPIO_IRQ_MASK; 191 191 192 - for_each_set_bit(n, &status, RDA_GPIO_BANK_NR) { 193 - girq = irq_find_mapping(chip->irq.domain, n); 194 - generic_handle_irq(girq); 195 - } 192 + for_each_set_bit(n, &status, RDA_GPIO_BANK_NR) 193 + generic_handle_domain_irq(chip->irq.domain, n); 196 194 197 195 chained_irq_exit(ic, desc); 198 196 }
+2 -5
drivers/gpio/gpio-realtek-otto.c
··· 196 196 struct irq_chip *irq_chip = irq_desc_get_chip(desc); 197 197 unsigned int lines_done; 198 198 unsigned int port_pin_count; 199 - unsigned int irq; 200 199 unsigned long status; 201 200 int offset; 202 201 ··· 204 205 for (lines_done = 0; lines_done < gc->ngpio; lines_done += 8) { 205 206 status = realtek_gpio_read_isr(ctrl, lines_done / 8); 206 207 port_pin_count = min(gc->ngpio - lines_done, 8U); 207 - for_each_set_bit(offset, &status, port_pin_count) { 208 - irq = irq_find_mapping(gc->irq.domain, offset); 209 - generic_handle_irq(irq); 210 - } 208 + for_each_set_bit(offset, &status, port_pin_count) 209 + generic_handle_domain_irq(gc->irq.domain, offset); 211 210 } 212 211 213 212 chained_irq_exit(irq_chip, desc);
+771
drivers/gpio/gpio-rockchip.c
··· 1 + // SPDX-License-Identifier: GPL-2.0-only 2 + /* 3 + * Copyright (c) 2013 MundoReader S.L. 4 + * Author: Heiko Stuebner <heiko@sntech.de> 5 + * 6 + * Copyright (c) 2021 Rockchip Electronics Co. Ltd. 7 + */ 8 + 9 + #include <linux/bitops.h> 10 + #include <linux/clk.h> 11 + #include <linux/device.h> 12 + #include <linux/err.h> 13 + #include <linux/gpio/driver.h> 14 + #include <linux/init.h> 15 + #include <linux/interrupt.h> 16 + #include <linux/io.h> 17 + #include <linux/module.h> 18 + #include <linux/of.h> 19 + #include <linux/of_address.h> 20 + #include <linux/of_device.h> 21 + #include <linux/of_irq.h> 22 + #include <linux/regmap.h> 23 + 24 + #include "../pinctrl/core.h" 25 + #include "../pinctrl/pinctrl-rockchip.h" 26 + 27 + #define GPIO_TYPE_V1 (0) /* GPIO Version ID reserved */ 28 + #define GPIO_TYPE_V2 (0x01000C2B) /* GPIO Version ID 0x01000C2B */ 29 + 30 + static const struct rockchip_gpio_regs gpio_regs_v1 = { 31 + .port_dr = 0x00, 32 + .port_ddr = 0x04, 33 + .int_en = 0x30, 34 + .int_mask = 0x34, 35 + .int_type = 0x38, 36 + .int_polarity = 0x3c, 37 + .int_status = 0x40, 38 + .int_rawstatus = 0x44, 39 + .debounce = 0x48, 40 + .port_eoi = 0x4c, 41 + .ext_port = 0x50, 42 + }; 43 + 44 + static const struct rockchip_gpio_regs gpio_regs_v2 = { 45 + .port_dr = 0x00, 46 + .port_ddr = 0x08, 47 + .int_en = 0x10, 48 + .int_mask = 0x18, 49 + .int_type = 0x20, 50 + .int_polarity = 0x28, 51 + .int_bothedge = 0x30, 52 + .int_status = 0x50, 53 + .int_rawstatus = 0x58, 54 + .debounce = 0x38, 55 + .dbclk_div_en = 0x40, 56 + .dbclk_div_con = 0x48, 57 + .port_eoi = 0x60, 58 + .ext_port = 0x70, 59 + .version_id = 0x78, 60 + }; 61 + 62 + static inline void gpio_writel_v2(u32 val, void __iomem *reg) 63 + { 64 + writel((val & 0xffff) | 0xffff0000, reg); 65 + writel((val >> 16) | 0xffff0000, reg + 0x4); 66 + } 67 + 68 + static inline u32 gpio_readl_v2(void __iomem *reg) 69 + { 70 + return readl(reg + 0x4) << 16 | readl(reg); 71 + } 72 + 73 + static inline void 
rockchip_gpio_writel(struct rockchip_pin_bank *bank, 74 + u32 value, unsigned int offset) 75 + { 76 + void __iomem *reg = bank->reg_base + offset; 77 + 78 + if (bank->gpio_type == GPIO_TYPE_V2) 79 + gpio_writel_v2(value, reg); 80 + else 81 + writel(value, reg); 82 + } 83 + 84 + static inline u32 rockchip_gpio_readl(struct rockchip_pin_bank *bank, 85 + unsigned int offset) 86 + { 87 + void __iomem *reg = bank->reg_base + offset; 88 + u32 value; 89 + 90 + if (bank->gpio_type == GPIO_TYPE_V2) 91 + value = gpio_readl_v2(reg); 92 + else 93 + value = readl(reg); 94 + 95 + return value; 96 + } 97 + 98 + static inline void rockchip_gpio_writel_bit(struct rockchip_pin_bank *bank, 99 + u32 bit, u32 value, 100 + unsigned int offset) 101 + { 102 + void __iomem *reg = bank->reg_base + offset; 103 + u32 data; 104 + 105 + if (bank->gpio_type == GPIO_TYPE_V2) { 106 + if (value) 107 + data = BIT(bit % 16) | BIT(bit % 16 + 16); 108 + else 109 + data = BIT(bit % 16 + 16); 110 + writel(data, bit >= 16 ? reg + 0x4 : reg); 111 + } else { 112 + data = readl(reg); 113 + data &= ~BIT(bit); 114 + if (value) 115 + data |= BIT(bit); 116 + writel(data, reg); 117 + } 118 + } 119 + 120 + static inline u32 rockchip_gpio_readl_bit(struct rockchip_pin_bank *bank, 121 + u32 bit, unsigned int offset) 122 + { 123 + void __iomem *reg = bank->reg_base + offset; 124 + u32 data; 125 + 126 + if (bank->gpio_type == GPIO_TYPE_V2) { 127 + data = readl(bit >= 16 ? 
reg + 0x4 : reg); 128 + data >>= bit % 16; 129 + } else { 130 + data = readl(reg); 131 + data >>= bit; 132 + } 133 + 134 + return data & (0x1); 135 + } 136 + 137 + static int rockchip_gpio_get_direction(struct gpio_chip *chip, 138 + unsigned int offset) 139 + { 140 + struct rockchip_pin_bank *bank = gpiochip_get_data(chip); 141 + u32 data; 142 + 143 + data = rockchip_gpio_readl_bit(bank, offset, bank->gpio_regs->port_ddr); 144 + if (data) 145 + return GPIO_LINE_DIRECTION_OUT; 146 + 147 + return GPIO_LINE_DIRECTION_IN; 148 + } 149 + 150 + static int rockchip_gpio_set_direction(struct gpio_chip *chip, 151 + unsigned int offset, bool input) 152 + { 153 + struct rockchip_pin_bank *bank = gpiochip_get_data(chip); 154 + unsigned long flags; 155 + u32 data = input ? 0 : 1; 156 + 157 + raw_spin_lock_irqsave(&bank->slock, flags); 158 + rockchip_gpio_writel_bit(bank, offset, data, bank->gpio_regs->port_ddr); 159 + raw_spin_unlock_irqrestore(&bank->slock, flags); 160 + 161 + return 0; 162 + } 163 + 164 + static void rockchip_gpio_set(struct gpio_chip *gc, unsigned int offset, 165 + int value) 166 + { 167 + struct rockchip_pin_bank *bank = gpiochip_get_data(gc); 168 + unsigned long flags; 169 + 170 + raw_spin_lock_irqsave(&bank->slock, flags); 171 + rockchip_gpio_writel_bit(bank, offset, value, bank->gpio_regs->port_dr); 172 + raw_spin_unlock_irqrestore(&bank->slock, flags); 173 + } 174 + 175 + static int rockchip_gpio_get(struct gpio_chip *gc, unsigned int offset) 176 + { 177 + struct rockchip_pin_bank *bank = gpiochip_get_data(gc); 178 + u32 data; 179 + 180 + data = readl(bank->reg_base + bank->gpio_regs->ext_port); 181 + data >>= offset; 182 + data &= 1; 183 + 184 + return data; 185 + } 186 + 187 + static int rockchip_gpio_set_debounce(struct gpio_chip *gc, 188 + unsigned int offset, 189 + unsigned int debounce) 190 + { 191 + struct rockchip_pin_bank *bank = gpiochip_get_data(gc); 192 + const struct rockchip_gpio_regs *reg = bank->gpio_regs; 193 + unsigned
long flags, div_reg, freq, max_debounce; 194 + bool div_debounce_support; 195 + unsigned int cur_div_reg; 196 + u64 div; 197 + 198 + if (!IS_ERR(bank->db_clk)) { 199 + div_debounce_support = true; 200 + freq = clk_get_rate(bank->db_clk); 201 + max_debounce = (GENMASK(23, 0) + 1) * 2 * 1000000 / freq; 202 + if (debounce > max_debounce) 203 + return -EINVAL; 204 + 205 + div = debounce * freq; 206 + div_reg = DIV_ROUND_CLOSEST_ULL(div, 2 * USEC_PER_SEC) - 1; 207 + } else { 208 + div_debounce_support = false; 209 + } 210 + 211 + raw_spin_lock_irqsave(&bank->slock, flags); 212 + 213 + /* Only the v1 needs to configure div_en and div_con for dbclk */ 214 + if (debounce) { 215 + if (div_debounce_support) { 216 + /* Configure the max debounce from consumers */ 217 + cur_div_reg = readl(bank->reg_base + 218 + reg->dbclk_div_con); 219 + if (cur_div_reg < div_reg) 220 + writel(div_reg, bank->reg_base + 221 + reg->dbclk_div_con); 222 + rockchip_gpio_writel_bit(bank, offset, 1, 223 + reg->dbclk_div_en); 224 + } 225 + 226 + rockchip_gpio_writel_bit(bank, offset, 1, reg->debounce); 227 + } else { 228 + if (div_debounce_support) 229 + rockchip_gpio_writel_bit(bank, offset, 0, 230 + reg->dbclk_div_en); 231 + 232 + rockchip_gpio_writel_bit(bank, offset, 0, reg->debounce); 233 + } 234 + 235 + raw_spin_unlock_irqrestore(&bank->slock, flags); 236 + 237 + /* Enable or disable dbclk at last */ 238 + if (div_debounce_support) { 239 + if (debounce) 240 + clk_prepare_enable(bank->db_clk); 241 + else 242 + clk_disable_unprepare(bank->db_clk); 243 + } 244 + 245 + return 0; 246 + } 247 + 248 + static int rockchip_gpio_direction_input(struct gpio_chip *gc, 249 + unsigned int offset) 250 + { 251 + return rockchip_gpio_set_direction(gc, offset, true); 252 + } 253 + 254 + static int rockchip_gpio_direction_output(struct gpio_chip *gc, 255 + unsigned int offset, int value) 256 + { 257 + rockchip_gpio_set(gc, offset, value); 258 + 259 + return rockchip_gpio_set_direction(gc, offset, false); 260 + } 
261 + 262 + /* 263 + * gpiolib set_config callback function. The setting of the pin 264 + * mux function as 'gpio output' will be handled by the pinctrl subsystem 265 + * interface. 266 + */ 267 + static int rockchip_gpio_set_config(struct gpio_chip *gc, unsigned int offset, 268 + unsigned long config) 269 + { 270 + enum pin_config_param param = pinconf_to_config_param(config); 271 + 272 + switch (param) { 273 + case PIN_CONFIG_INPUT_DEBOUNCE: 274 + rockchip_gpio_set_debounce(gc, offset, true); 275 + /* 276 + * Rockchip's gpio can only support up to one period 277 + * of the debounce clock (pclk), which is far from 278 + * satisfying the requirement, as pclk is usually near 279 + * 100MHz and shared by all peripherals. So in fact this 280 + * crippled debounce capability is only useful to prevent 281 + * any spurious glitches from waking up the system 282 + * if the gpio is configured as a wakeup interrupt source. Let's 283 + * still return -ENOTSUPP as before, to make sure the caller 284 + * of gpiod_set_debounce won't change its behaviour. 285 + */ 286 + return -ENOTSUPP; 287 + default: 288 + return -ENOTSUPP; 289 + } 290 + } 291 + 292 + /* 293 + * gpiolib gpio_to_irq callback function. Creates a mapping between a GPIO pin 294 + * and a virtual IRQ, if not already present. 295 + */ 296 + static int rockchip_gpio_to_irq(struct gpio_chip *gc, unsigned int offset) 297 + { 298 + struct rockchip_pin_bank *bank = gpiochip_get_data(gc); 299 + unsigned int virq; 300 + 301 + if (!bank->domain) 302 + return -ENXIO; 303 + 304 + virq = irq_create_mapping(bank->domain, offset); 305 + 306 + return (virq) ?
: -ENXIO; 307 + } 308 + 309 + static const struct gpio_chip rockchip_gpiolib_chip = { 310 + .request = gpiochip_generic_request, 311 + .free = gpiochip_generic_free, 312 + .set = rockchip_gpio_set, 313 + .get = rockchip_gpio_get, 314 + .get_direction = rockchip_gpio_get_direction, 315 + .direction_input = rockchip_gpio_direction_input, 316 + .direction_output = rockchip_gpio_direction_output, 317 + .set_config = rockchip_gpio_set_config, 318 + .to_irq = rockchip_gpio_to_irq, 319 + .owner = THIS_MODULE, 320 + }; 321 + 322 + static void rockchip_irq_demux(struct irq_desc *desc) 323 + { 324 + struct irq_chip *chip = irq_desc_get_chip(desc); 325 + struct rockchip_pin_bank *bank = irq_desc_get_handler_data(desc); 326 + u32 pend; 327 + 328 + dev_dbg(bank->dev, "got irq for bank %s\n", bank->name); 329 + 330 + chained_irq_enter(chip, desc); 331 + 332 + pend = readl_relaxed(bank->reg_base + bank->gpio_regs->int_status); 333 + 334 + while (pend) { 335 + unsigned int irq, virq; 336 + 337 + irq = __ffs(pend); 338 + pend &= ~BIT(irq); 339 + virq = irq_find_mapping(bank->domain, irq); 340 + 341 + if (!virq) { 342 + dev_err(bank->dev, "unmapped irq %d\n", irq); 343 + continue; 344 + } 345 + 346 + dev_dbg(bank->dev, "handling irq %d\n", irq); 347 + 348 + /* 349 + * Triggering IRQ on both rising and falling edge 350 + * needs manual intervention. 
351 + */ 352 + if (bank->toggle_edge_mode & BIT(irq)) { 353 + u32 data, data_old, polarity; 354 + unsigned long flags; 355 + 356 + data = readl_relaxed(bank->reg_base + 357 + bank->gpio_regs->ext_port); 358 + do { 359 + raw_spin_lock_irqsave(&bank->slock, flags); 360 + 361 + polarity = readl_relaxed(bank->reg_base + 362 + bank->gpio_regs->int_polarity); 363 + if (data & BIT(irq)) 364 + polarity &= ~BIT(irq); 365 + else 366 + polarity |= BIT(irq); 367 + writel(polarity, 368 + bank->reg_base + 369 + bank->gpio_regs->int_polarity); 370 + 371 + raw_spin_unlock_irqrestore(&bank->slock, flags); 372 + 373 + data_old = data; 374 + data = readl_relaxed(bank->reg_base + 375 + bank->gpio_regs->ext_port); 376 + } while ((data & BIT(irq)) != (data_old & BIT(irq))); 377 + } 378 + 379 + generic_handle_irq(virq); 380 + } 381 + 382 + chained_irq_exit(chip, desc); 383 + } 384 + 385 + static int rockchip_irq_set_type(struct irq_data *d, unsigned int type) 386 + { 387 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 388 + struct rockchip_pin_bank *bank = gc->private; 389 + u32 mask = BIT(d->hwirq); 390 + u32 polarity; 391 + u32 level; 392 + u32 data; 393 + unsigned long flags; 394 + int ret = 0; 395 + 396 + raw_spin_lock_irqsave(&bank->slock, flags); 397 + 398 + rockchip_gpio_writel_bit(bank, d->hwirq, 0, 399 + bank->gpio_regs->port_ddr); 400 + 401 + raw_spin_unlock_irqrestore(&bank->slock, flags); 402 + 403 + if (type & IRQ_TYPE_EDGE_BOTH) 404 + irq_set_handler_locked(d, handle_edge_irq); 405 + else 406 + irq_set_handler_locked(d, handle_level_irq); 407 + 408 + raw_spin_lock_irqsave(&bank->slock, flags); 409 + 410 + level = rockchip_gpio_readl(bank, bank->gpio_regs->int_type); 411 + polarity = rockchip_gpio_readl(bank, bank->gpio_regs->int_polarity); 412 + 413 + switch (type) { 414 + case IRQ_TYPE_EDGE_BOTH: 415 + if (bank->gpio_type == GPIO_TYPE_V2) { 416 + bank->toggle_edge_mode &= ~mask; 417 + rockchip_gpio_writel_bit(bank, d->hwirq, 1, 418 + 
bank->gpio_regs->int_bothedge); 419 + goto out; 420 + } else { 421 + bank->toggle_edge_mode |= mask; 422 + level |= mask; 423 + 424 + /* 425 + * Determine gpio state. If 1 next interrupt should be 426 + * falling otherwise rising. 427 + */ 428 + data = readl(bank->reg_base + bank->gpio_regs->ext_port); 429 + if (data & mask) 430 + polarity &= ~mask; 431 + else 432 + polarity |= mask; 433 + } 434 + break; 435 + case IRQ_TYPE_EDGE_RISING: 436 + bank->toggle_edge_mode &= ~mask; 437 + level |= mask; 438 + polarity |= mask; 439 + break; 440 + case IRQ_TYPE_EDGE_FALLING: 441 + bank->toggle_edge_mode &= ~mask; 442 + level |= mask; 443 + polarity &= ~mask; 444 + break; 445 + case IRQ_TYPE_LEVEL_HIGH: 446 + bank->toggle_edge_mode &= ~mask; 447 + level &= ~mask; 448 + polarity |= mask; 449 + break; 450 + case IRQ_TYPE_LEVEL_LOW: 451 + bank->toggle_edge_mode &= ~mask; 452 + level &= ~mask; 453 + polarity &= ~mask; 454 + break; 455 + default: 456 + ret = -EINVAL; 457 + goto out; 458 + } 459 + 460 + rockchip_gpio_writel(bank, level, bank->gpio_regs->int_type); 461 + rockchip_gpio_writel(bank, polarity, bank->gpio_regs->int_polarity); 462 + out: 463 + raw_spin_unlock_irqrestore(&bank->slock, flags); 464 + 465 + return ret; 466 + } 467 + 468 + static void rockchip_irq_suspend(struct irq_data *d) 469 + { 470 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 471 + struct rockchip_pin_bank *bank = gc->private; 472 + 473 + bank->saved_masks = irq_reg_readl(gc, bank->gpio_regs->int_mask); 474 + irq_reg_writel(gc, ~gc->wake_active, bank->gpio_regs->int_mask); 475 + } 476 + 477 + static void rockchip_irq_resume(struct irq_data *d) 478 + { 479 + struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 480 + struct rockchip_pin_bank *bank = gc->private; 481 + 482 + irq_reg_writel(gc, bank->saved_masks, bank->gpio_regs->int_mask); 483 + } 484 + 485 + static void rockchip_irq_enable(struct irq_data *d) 486 + { 487 + irq_gc_mask_clr_bit(d); 488 + } 489 + 490 + static void 
rockchip_irq_disable(struct irq_data *d) 491 + { 492 + irq_gc_mask_set_bit(d); 493 + } 494 + 495 + static int rockchip_interrupts_register(struct rockchip_pin_bank *bank) 496 + { 497 + unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN; 498 + struct irq_chip_generic *gc; 499 + int ret; 500 + 501 + bank->domain = irq_domain_add_linear(bank->of_node, 32, 502 + &irq_generic_chip_ops, NULL); 503 + if (!bank->domain) { 504 + dev_warn(bank->dev, "could not init irq domain for bank %s\n", 505 + bank->name); 506 + return -EINVAL; 507 + } 508 + 509 + ret = irq_alloc_domain_generic_chips(bank->domain, 32, 1, 510 + "rockchip_gpio_irq", 511 + handle_level_irq, 512 + clr, 0, 0); 513 + if (ret) { 514 + dev_err(bank->dev, "could not alloc generic chips for bank %s\n", 515 + bank->name); 516 + irq_domain_remove(bank->domain); 517 + return -EINVAL; 518 + } 519 + 520 + gc = irq_get_domain_generic_chip(bank->domain, 0); 521 + if (bank->gpio_type == GPIO_TYPE_V2) { 522 + gc->reg_writel = gpio_writel_v2; 523 + gc->reg_readl = gpio_readl_v2; 524 + } 525 + 526 + gc->reg_base = bank->reg_base; 527 + gc->private = bank; 528 + gc->chip_types[0].regs.mask = bank->gpio_regs->int_mask; 529 + gc->chip_types[0].regs.ack = bank->gpio_regs->port_eoi; 530 + gc->chip_types[0].chip.irq_ack = irq_gc_ack_set_bit; 531 + gc->chip_types[0].chip.irq_mask = irq_gc_mask_set_bit; 532 + gc->chip_types[0].chip.irq_unmask = irq_gc_mask_clr_bit; 533 + gc->chip_types[0].chip.irq_enable = rockchip_irq_enable; 534 + gc->chip_types[0].chip.irq_disable = rockchip_irq_disable; 535 + gc->chip_types[0].chip.irq_set_wake = irq_gc_set_wake; 536 + gc->chip_types[0].chip.irq_suspend = rockchip_irq_suspend; 537 + gc->chip_types[0].chip.irq_resume = rockchip_irq_resume; 538 + gc->chip_types[0].chip.irq_set_type = rockchip_irq_set_type; 539 + gc->wake_enabled = IRQ_MSK(bank->nr_pins); 540 + 541 + /* 542 + * Linux assumes that all interrupts start out disabled/masked. 
543 + * Our driver only uses the concept of masked and always keeps 544 + * things enabled, so for us that's all masked and all enabled. 545 + */ 546 + rockchip_gpio_writel(bank, 0xffffffff, bank->gpio_regs->int_mask); 547 + rockchip_gpio_writel(bank, 0xffffffff, bank->gpio_regs->port_eoi); 548 + rockchip_gpio_writel(bank, 0xffffffff, bank->gpio_regs->int_en); 549 + gc->mask_cache = 0xffffffff; 550 + 551 + irq_set_chained_handler_and_data(bank->irq, 552 + rockchip_irq_demux, bank); 553 + 554 + return 0; 555 + } 556 + 557 + static int rockchip_gpiolib_register(struct rockchip_pin_bank *bank) 558 + { 559 + struct gpio_chip *gc; 560 + int ret; 561 + 562 + bank->gpio_chip = rockchip_gpiolib_chip; 563 + 564 + gc = &bank->gpio_chip; 565 + gc->base = bank->pin_base; 566 + gc->ngpio = bank->nr_pins; 567 + gc->label = bank->name; 568 + gc->parent = bank->dev; 569 + #ifdef CONFIG_OF_GPIO 570 + gc->of_node = of_node_get(bank->of_node); 571 + #endif 572 + 573 + ret = gpiochip_add_data(gc, bank); 574 + if (ret) { 575 + dev_err(bank->dev, "failed to add gpiochip %s, %d\n", 576 + gc->label, ret); 577 + return ret; 578 + } 579 + 580 + /* 581 + * For DeviceTree-supported systems, the gpio core checks the 582 + * pinctrl's device node for the "gpio-ranges" property. 583 + * If it is present, it takes care of adding the pin ranges 584 + * for the driver. In this case the driver can skip ahead. 585 + * 586 + * In order to remain compatible with older, existing DeviceTree 587 + * files which don't set the "gpio-ranges" property or systems that 588 + * utilize ACPI the driver has to call gpiochip_add_pin_range(). 
589 + */ 590 + if (!of_property_read_bool(bank->of_node, "gpio-ranges")) { 591 + struct device_node *pctlnp = of_get_parent(bank->of_node); 592 + struct pinctrl_dev *pctldev = NULL; 593 + 594 + if (!pctlnp) 595 + return -ENODATA; 596 + 597 + pctldev = of_pinctrl_get(pctlnp); 598 + if (!pctldev) 599 + return -ENODEV; 600 + 601 + ret = gpiochip_add_pin_range(gc, dev_name(pctldev->dev), 0, 602 + gc->base, gc->ngpio); 603 + if (ret) { 604 + dev_err(bank->dev, "Failed to add pin range\n"); 605 + goto fail; 606 + } 607 + } 608 + 609 + ret = rockchip_interrupts_register(bank); 610 + if (ret) { 611 + dev_err(bank->dev, "failed to register interrupt, %d\n", ret); 612 + goto fail; 613 + } 614 + 615 + return 0; 616 + 617 + fail: 618 + gpiochip_remove(&bank->gpio_chip); 619 + 620 + return ret; 621 + } 622 + 623 + static int rockchip_get_bank_data(struct rockchip_pin_bank *bank) 624 + { 625 + struct resource res; 626 + int id = 0; 627 + 628 + if (of_address_to_resource(bank->of_node, 0, &res)) { 629 + dev_err(bank->dev, "cannot find IO resource for bank\n"); 630 + return -ENOENT; 631 + } 632 + 633 + bank->reg_base = devm_ioremap_resource(bank->dev, &res); 634 + if (IS_ERR(bank->reg_base)) 635 + return PTR_ERR(bank->reg_base); 636 + 637 + bank->irq = irq_of_parse_and_map(bank->of_node, 0); 638 + if (!bank->irq) 639 + return -EINVAL; 640 + 641 + bank->clk = of_clk_get(bank->of_node, 0); 642 + if (IS_ERR(bank->clk)) 643 + return PTR_ERR(bank->clk); 644 + 645 + clk_prepare_enable(bank->clk); 646 + id = readl(bank->reg_base + gpio_regs_v2.version_id); 647 + 648 + /* If not gpio v2, that is default to v1. 
*/ 649 + if (id == GPIO_TYPE_V2) { 650 + bank->gpio_regs = &gpio_regs_v2; 651 + bank->gpio_type = GPIO_TYPE_V2; 652 + bank->db_clk = of_clk_get(bank->of_node, 1); 653 + if (IS_ERR(bank->db_clk)) { 654 + dev_err(bank->dev, "cannot find debounce clk\n"); 655 + clk_disable_unprepare(bank->clk); 656 + return -EINVAL; 657 + } 658 + } else { 659 + bank->gpio_regs = &gpio_regs_v1; 660 + bank->gpio_type = GPIO_TYPE_V1; 661 + } 662 + 663 + return 0; 664 + } 665 + 666 + static struct rockchip_pin_bank * 667 + rockchip_gpio_find_bank(struct pinctrl_dev *pctldev, int id) 668 + { 669 + struct rockchip_pinctrl *info; 670 + struct rockchip_pin_bank *bank; 671 + int i, found = 0; 672 + 673 + info = pinctrl_dev_get_drvdata(pctldev); 674 + bank = info->ctrl->pin_banks; 675 + for (i = 0; i < info->ctrl->nr_banks; i++, bank++) { 676 + if (bank->bank_num == id) { 677 + found = 1; 678 + break; 679 + } 680 + } 681 + 682 + return found ? bank : NULL; 683 + } 684 + 685 + static int rockchip_gpio_probe(struct platform_device *pdev) 686 + { 687 + struct device *dev = &pdev->dev; 688 + struct device_node *np = dev->of_node; 689 + struct device_node *pctlnp = of_get_parent(np); 690 + struct pinctrl_dev *pctldev = NULL; 691 + struct rockchip_pin_bank *bank = NULL; 692 + static int gpio; 693 + int id, ret; 694 + 695 + if (!np || !pctlnp) 696 + return -ENODEV; 697 + 698 + pctldev = of_pinctrl_get(pctlnp); 699 + if (!pctldev) 700 + return -EPROBE_DEFER; 701 + 702 + id = of_alias_get_id(np, "gpio"); 703 + if (id < 0) 704 + id = gpio++; 705 + 706 + bank = rockchip_gpio_find_bank(pctldev, id); 707 + if (!bank) 708 + return -EINVAL; 709 + 710 + bank->dev = dev; 711 + bank->of_node = np; 712 + 713 + raw_spin_lock_init(&bank->slock); 714 + 715 + ret = rockchip_get_bank_data(bank); 716 + if (ret) 717 + return ret; 718 + 719 + ret = rockchip_gpiolib_register(bank); 720 + if (ret) { 721 + clk_disable_unprepare(bank->clk); 722 + return ret; 723 + } 724 + 725 + platform_set_drvdata(pdev, bank); 726 + 
dev_info(dev, "probed %pOF\n", np); 727 + 728 + return 0; 729 + } 730 + 731 + static int rockchip_gpio_remove(struct platform_device *pdev) 732 + { 733 + struct rockchip_pin_bank *bank = platform_get_drvdata(pdev); 734 + 735 + clk_disable_unprepare(bank->clk); 736 + gpiochip_remove(&bank->gpio_chip); 737 + 738 + return 0; 739 + } 740 + 741 + static const struct of_device_id rockchip_gpio_match[] = { 742 + { .compatible = "rockchip,gpio-bank", }, 743 + { .compatible = "rockchip,rk3188-gpio-bank0" }, 744 + { }, 745 + }; 746 + 747 + static struct platform_driver rockchip_gpio_driver = { 748 + .probe = rockchip_gpio_probe, 749 + .remove = rockchip_gpio_remove, 750 + .driver = { 751 + .name = "rockchip-gpio", 752 + .of_match_table = rockchip_gpio_match, 753 + }, 754 + }; 755 + 756 + static int __init rockchip_gpio_init(void) 757 + { 758 + return platform_driver_register(&rockchip_gpio_driver); 759 + } 760 + postcore_initcall(rockchip_gpio_init); 761 + 762 + static void __exit rockchip_gpio_exit(void) 763 + { 764 + platform_driver_unregister(&rockchip_gpio_driver); 765 + } 766 + module_exit(rockchip_gpio_exit); 767 + 768 + MODULE_DESCRIPTION("Rockchip gpio driver"); 769 + MODULE_ALIAS("platform:rockchip-gpio"); 770 + MODULE_LICENSE("GPL v2"); 771 + MODULE_DEVICE_TABLE(of, rockchip_gpio_match);
+1 -1
drivers/gpio/gpio-sch.c
··· 259 259 260 260 pending = (resume_status << sch->resume_base) | core_status; 261 261 for_each_set_bit(offset, &pending, sch->chip.ngpio) 262 - generic_handle_irq(irq_find_mapping(gc->irq.domain, offset)); 262 + generic_handle_domain_irq(gc->irq.domain, offset); 263 263 264 264 /* Set returning value depending on whether we handled an interrupt */ 265 265 ret = pending ? ACPI_INTERRUPT_HANDLED : ACPI_INTERRUPT_NOT_HANDLED;
+1 -1
drivers/gpio/gpio-sodaville.c
··· 84 84 return IRQ_NONE; 85 85 86 86 for_each_set_bit(irq_bit, &irq_stat, 32) 87 - generic_handle_irq(irq_find_mapping(sd->id, irq_bit)); 87 + generic_handle_domain_irq(sd->id, irq_bit); 88 88 89 89 return IRQ_HANDLED; 90 90 }
+4 -8
drivers/gpio/gpio-sprd.c
··· 189 189 struct gpio_chip *chip = irq_desc_get_handler_data(desc); 190 190 struct irq_chip *ic = irq_desc_get_chip(desc); 191 191 struct sprd_gpio *sprd_gpio = gpiochip_get_data(chip); 192 - u32 bank, n, girq; 192 + u32 bank, n; 193 193 194 194 chained_irq_enter(ic, desc); 195 195 ··· 198 198 unsigned long reg = readl_relaxed(base + SPRD_GPIO_MIS) & 199 199 SPRD_GPIO_BANK_MASK; 200 200 201 - for_each_set_bit(n, &reg, SPRD_GPIO_BANK_NR) { 202 - girq = irq_find_mapping(chip->irq.domain, 203 - bank * SPRD_GPIO_BANK_NR + n); 204 - 205 - generic_handle_irq(girq); 206 - } 207 - 201 + for_each_set_bit(n, &reg, SPRD_GPIO_BANK_NR) 202 + generic_handle_domain_irq(chip->irq.domain, 203 + bank * SPRD_GPIO_BANK_NR + n); 208 204 } 209 205 chained_irq_exit(ic, desc); 210 206 }
+1 -1
drivers/gpio/gpio-tb10x.c
··· 100 100 int i; 101 101 102 102 for_each_set_bit(i, &bits, 32) 103 - generic_handle_irq(irq_find_mapping(tb10x_gpio->domain, i)); 103 + generic_handle_domain_irq(tb10x_gpio->domain, i); 104 104 105 105 return IRQ_HANDLED; 106 106 }
+4 -5
drivers/gpio/gpio-tegra.c
··· 408 408 lvl = tegra_gpio_readl(tgi, GPIO_INT_LVL(tgi, gpio)); 409 409 410 410 for_each_set_bit(pin, &sta, 8) { 411 + int ret; 412 + 411 413 tegra_gpio_writel(tgi, 1 << pin, 412 414 GPIO_INT_CLR(tgi, gpio)); 413 415 ··· 422 420 chained_irq_exit(chip, desc); 423 421 } 424 422 425 - irq = irq_find_mapping(domain, gpio + pin); 426 - if (WARN_ON(irq == 0)) 427 - continue; 428 - 429 - generic_handle_irq(irq); 423 + ret = generic_handle_domain_irq(domain, gpio + pin); 424 + WARN_RATELIMIT(ret, "hwirq = %d", gpio + pin); 430 425 } 431 426 } 432 427
+3 -6
drivers/gpio/gpio-tegra186.c
··· 456 456 457 457 for (i = 0; i < gpio->soc->num_ports; i++) { 458 458 const struct tegra_gpio_port *port = &gpio->soc->ports[i]; 459 - unsigned int pin, irq; 459 + unsigned int pin; 460 460 unsigned long value; 461 461 void __iomem *base; 462 462 ··· 469 469 value = readl(base + TEGRA186_GPIO_INTERRUPT_STATUS(1)); 470 470 471 471 for_each_set_bit(pin, &value, port->pins) { 472 - irq = irq_find_mapping(domain, offset + pin); 473 - if (WARN_ON(irq == 0)) 474 - continue; 475 - 476 - generic_handle_irq(irq); 472 + int ret = generic_handle_domain_irq(domain, offset + pin); 473 + WARN_RATELIMIT(ret, "hwirq = %d", offset + pin); 477 474 } 478 475 479 476 skip:
+4 -6
drivers/gpio/gpio-tqmx86.c
··· 183 183 struct tqmx86_gpio_data *gpio = gpiochip_get_data(chip); 184 184 struct irq_chip *irq_chip = irq_desc_get_chip(desc); 185 185 unsigned long irq_bits; 186 - int i = 0, child_irq; 186 + int i = 0; 187 187 u8 irq_status; 188 188 189 189 chained_irq_enter(irq_chip, desc); ··· 192 192 tqmx86_gpio_write(gpio, irq_status, TQMX86_GPIIS); 193 193 194 194 irq_bits = irq_status; 195 - for_each_set_bit(i, &irq_bits, TQMX86_NGPI) { 196 - child_irq = irq_find_mapping(gpio->chip.irq.domain, 197 - i + TQMX86_NGPO); 198 - generic_handle_irq(child_irq); 199 - } 195 + for_each_set_bit(i, &irq_bits, TQMX86_NGPI) 196 + generic_handle_domain_irq(gpio->chip.irq.domain, 197 + i + TQMX86_NGPO); 200 198 201 199 chained_irq_exit(irq_chip, desc); 202 200 }
+1 -1
drivers/gpio/gpio-vf610.c
··· 149 149 for_each_set_bit(pin, &irq_isfr, VF610_GPIO_PER_PORT) { 150 150 vf610_gpio_writel(BIT(pin), port->base + PORT_ISFR); 151 151 152 - generic_handle_irq(irq_find_mapping(port->gc.irq.domain, pin)); 152 + generic_handle_domain_irq(port->gc.irq.domain, pin); 153 153 } 154 154 155 155 chained_irq_exit(chip, desc);
+2 -2
drivers/gpio/gpio-ws16c48.c
··· 339 339 for_each_set_bit(port, &int_pending, 3) { 340 340 int_id = inb(ws16c48gpio->base + 8 + port); 341 341 for_each_set_bit(gpio, &int_id, 8) 342 - generic_handle_irq(irq_find_mapping( 343 - chip->irq.domain, gpio + 8*port)); 342 + generic_handle_domain_irq(chip->irq.domain, 343 + gpio + 8*port); 344 344 } 345 345 346 346 int_pending = inb(ws16c48gpio->base + 6) & 0x7;
+1 -1
drivers/gpio/gpio-xgs-iproc.c
··· 185 185 int_bits = level | event; 186 186 187 187 for_each_set_bit(bit, &int_bits, gc->ngpio) 188 - generic_handle_irq(irq_linear_revmap(gc->irq.domain, bit)); 188 + generic_handle_domain_irq(gc->irq.domain, bit); 189 189 } 190 190 191 191 return int_bits ? IRQ_HANDLED : IRQ_NONE;
+1 -1
drivers/gpio/gpio-xilinx.c
··· 538 538 539 539 for_each_set_bit(bit, all, 64) { 540 540 irq_offset = xgpio_from_bit(chip, bit); 541 - generic_handle_irq(irq_find_mapping(gc->irq.domain, irq_offset)); 541 + generic_handle_domain_irq(gc->irq.domain, irq_offset); 542 542 } 543 543 544 544 chained_irq_exit(irqchip, desc);
+1 -2
drivers/gpio/gpio-xlp.c
··· 216 216 } 217 217 218 218 if (gpio_stat & BIT(gpio % XLP_GPIO_REGSZ)) 219 - generic_handle_irq(irq_find_mapping( 220 - priv->chip.irq.domain, gpio)); 219 + generic_handle_domain_irq(priv->chip.irq.domain, gpio); 221 220 } 222 221 chained_irq_exit(irqchip, desc); 223 222 }
+2 -6
drivers/gpio/gpio-zynq.c
··· 628 628 if (!pending) 629 629 return; 630 630 631 - for_each_set_bit(offset, &pending, 32) { 632 - unsigned int gpio_irq; 633 - 634 - gpio_irq = irq_find_mapping(irqdomain, offset + bank_offset); 635 - generic_handle_irq(gpio_irq); 636 - } 631 + for_each_set_bit(offset, &pending, 32) 632 + generic_handle_domain_irq(irqdomain, offset + bank_offset); 637 633 } 638 634 639 635 /**
+1 -1
drivers/gpu/drm/amd/amdgpu/amdgpu_irq.c
··· 502 502 503 503 } else if ((client_id == AMDGPU_IRQ_CLIENTID_LEGACY) && 504 504 adev->irq.virq[src_id]) { 505 - generic_handle_irq(irq_find_mapping(adev->irq.domain, src_id)); 505 + generic_handle_domain_irq(adev->irq.domain, src_id); 506 506 507 507 } else if (!adev->irq.client[client_id].sources) { 508 508 DRM_DEBUG("Unregistered interrupt client_id: %d src_id: %d\n",
+4 -11
drivers/gpu/drm/msm/disp/dpu1/dpu_mdss.c
··· 45 45 46 46 while (interrupts) { 47 47 irq_hw_number_t hwirq = fls(interrupts) - 1; 48 - unsigned int mapping; 49 48 int rc; 50 49 51 - mapping = irq_find_mapping(dpu_mdss->irq_controller.domain, 52 - hwirq); 53 - if (mapping == 0) { 54 - DRM_ERROR("couldn't find irq mapping for %lu\n", hwirq); 55 - break; 56 - } 57 - 58 - rc = generic_handle_irq(mapping); 50 + rc = generic_handle_domain_irq(dpu_mdss->irq_controller.domain, 51 + hwirq); 59 52 if (rc < 0) { 60 - DRM_ERROR("handle irq fail: irq=%lu mapping=%u rc=%d\n", 61 - hwirq, mapping, rc); 53 + DRM_ERROR("handle irq fail: irq=%lu rc=%d\n", 54 + hwirq, rc); 62 55 break; 63 56 } 64 57
+1 -2
drivers/gpu/drm/msm/disp/mdp5/mdp5_mdss.c
··· 50 50 while (intr) { 51 51 irq_hw_number_t hwirq = fls(intr) - 1; 52 52 53 - generic_handle_irq(irq_find_mapping( 54 - mdp5_mdss->irqcontroller.domain, hwirq)); 53 + generic_handle_domain_irq(mdp5_mdss->irqcontroller.domain, hwirq); 55 54 intr &= ~(1 << hwirq); 56 55 } 57 56
+4 -7
drivers/gpu/ipu-v3/ipu-common.c
··· 1003 1003 static void ipu_irq_handle(struct ipu_soc *ipu, const int *regs, int num_regs) 1004 1004 { 1005 1005 unsigned long status; 1006 - int i, bit, irq; 1006 + int i, bit; 1007 1007 1008 1008 for (i = 0; i < num_regs; i++) { 1009 1009 1010 1010 status = ipu_cm_read(ipu, IPU_INT_STAT(regs[i])); 1011 1011 status &= ipu_cm_read(ipu, IPU_INT_CTRL(regs[i])); 1012 1012 1013 - for_each_set_bit(bit, &status, 32) { 1014 - irq = irq_linear_revmap(ipu->domain, 1015 - regs[i] * 32 + bit); 1016 - if (irq) 1017 - generic_handle_irq(irq); 1018 - } 1013 + for_each_set_bit(bit, &status, 32) 1014 + generic_handle_domain_irq(ipu->domain, 1015 + regs[i] * 32 + bit); 1019 1016 } 1020 1017 } 1021 1018
+2 -4
drivers/irqchip/irq-alpine-msi.c
··· 267 267 goto err_priv; 268 268 } 269 269 270 - priv->msi_map = kcalloc(BITS_TO_LONGS(priv->num_spis), 271 - sizeof(*priv->msi_map), 272 - GFP_KERNEL); 270 + priv->msi_map = bitmap_zalloc(priv->num_spis, GFP_KERNEL); 273 271 if (!priv->msi_map) { 274 272 ret = -ENOMEM; 275 273 goto err_priv; ··· 283 285 return 0; 284 286 285 287 err_map: 286 - kfree(priv->msi_map); 288 + bitmap_free(priv->msi_map); 287 289 err_priv: 288 290 kfree(priv); 289 291 return ret;
+1 -1
drivers/irqchip/irq-apple-aic.c
··· 226 226 * Reading the interrupt reason automatically acknowledges and masks 227 227 * the IRQ, so we just unmask it here if needed. 228 228 */ 229 - if (!irqd_irq_disabled(d) && !irqd_irq_masked(d)) 229 + if (!irqd_irq_masked(d)) 230 230 aic_irq_unmask(d); 231 231 } 232 232
+2 -3
drivers/irqchip/irq-gic-v2m.c
··· 269 269 270 270 list_for_each_entry_safe(v2m, tmp, &v2m_nodes, entry) { 271 271 list_del(&v2m->entry); 272 - kfree(v2m->bm); 272 + bitmap_free(v2m->bm); 273 273 iounmap(v2m->base); 274 274 of_node_put(to_of_node(v2m->fwnode)); 275 275 if (is_fwnode_irqchip(v2m->fwnode)) ··· 386 386 break; 387 387 } 388 388 } 389 - v2m->bm = kcalloc(BITS_TO_LONGS(v2m->nr_spis), sizeof(long), 390 - GFP_KERNEL); 389 + v2m->bm = bitmap_zalloc(v2m->nr_spis, GFP_KERNEL); 391 390 if (!v2m->bm) { 392 391 ret = -ENOMEM; 393 392 goto err_iounmap;
+3 -3
drivers/irqchip/irq-gic-v3-its.c
··· 2140 2140 if (err) 2141 2141 goto out; 2142 2142 2143 - bitmap = kcalloc(BITS_TO_LONGS(nr_irqs), sizeof (long), GFP_ATOMIC); 2143 + bitmap = bitmap_zalloc(nr_irqs, GFP_ATOMIC); 2144 2144 if (!bitmap) 2145 2145 goto out; 2146 2146 ··· 2156 2156 static void its_lpi_free(unsigned long *bitmap, u32 base, u32 nr_ids) 2157 2157 { 2158 2158 WARN_ON(free_lpi_range(base, nr_ids)); 2159 - kfree(bitmap); 2159 + bitmap_free(bitmap); 2160 2160 } 2161 2161 2162 2162 static void gic_reset_prop_table(void *va) ··· 3387 3387 if (!dev || !itt || !col_map || (!lpi_map && alloc_lpis)) { 3388 3388 kfree(dev); 3389 3389 kfree(itt); 3390 - kfree(lpi_map); 3390 + bitmap_free(lpi_map); 3391 3391 kfree(col_map); 3392 3392 return NULL; 3393 3393 }
+2 -3
drivers/irqchip/irq-gic-v3-mbi.c
··· 290 290 if (ret) 291 291 goto err_free_mbi; 292 292 293 - mbi_ranges[n].bm = kcalloc(BITS_TO_LONGS(mbi_ranges[n].nr_spis), 294 - sizeof(long), GFP_KERNEL); 293 + mbi_ranges[n].bm = bitmap_zalloc(mbi_ranges[n].nr_spis, GFP_KERNEL); 295 294 if (!mbi_ranges[n].bm) { 296 295 ret = -ENOMEM; 297 296 goto err_free_mbi; ··· 328 329 err_free_mbi: 329 330 if (mbi_ranges) { 330 331 for (n = 0; n < mbi_range_nr; n++) 331 - kfree(mbi_ranges[n].bm); 332 + bitmap_free(mbi_ranges[n].bm); 332 333 kfree(mbi_ranges); 333 334 } 334 335
+72 -12
drivers/irqchip/irq-gic-v3.c
···
 DEFINE_STATIC_KEY_FALSE(gic_nonsecure_priorities);
 EXPORT_SYMBOL(gic_nonsecure_priorities);
 
+/*
+ * When the Non-secure world has access to group 0 interrupts (as a
+ * consequence of SCR_EL3.FIQ == 0), reading the ICC_RPR_EL1 register will
+ * return the Distributor's view of the interrupt priority.
+ *
+ * When GIC security is enabled (GICD_CTLR.DS == 0), the interrupt priority
+ * written by software is moved to the Non-secure range by the Distributor.
+ *
+ * If both are true (which is when gic_nonsecure_priorities gets enabled),
+ * we need to shift down the priority programmed by software to match it
+ * against the value returned by ICC_RPR_EL1.
+ */
+#define GICD_INT_RPR_PRI(priority)					\
+	({								\
+		u32 __priority = (priority);				\
+		if (static_branch_unlikely(&gic_nonsecure_priorities))	\
+			__priority = 0x80 | (__priority >> 1);		\
+									\
+		__priority;						\
+	})
+
 /* ppi_nmi_refs[n] == number of cpus having ppi[n + 16] set as NMI */
 static refcount_t *ppi_nmi_refs;
 
···
 	writeb_relaxed(prio, base + offset + index);
 }
 
-static u32 gic_get_ppi_index(struct irq_data *d)
+static u32 __gic_get_ppi_index(irq_hw_number_t hwirq)
 {
-	switch (get_intid_range(d)) {
+	switch (__get_intid_range(hwirq)) {
 	case PPI_RANGE:
-		return d->hwirq - 16;
+		return hwirq - 16;
 	case EPPI_RANGE:
-		return d->hwirq - EPPI_BASE_INTID + 16;
+		return hwirq - EPPI_BASE_INTID + 16;
 	default:
 		unreachable();
 	}
+}
+
+static u32 gic_get_ppi_index(struct irq_data *d)
+{
+	return __gic_get_ppi_index(d->hwirq);
 }
 
 static int gic_irq_nmi_setup(struct irq_data *d)
···
 		return;
 
 	if (gic_supports_nmi() &&
-	    unlikely(gic_read_rpr() == GICD_INT_NMI_PRI)) {
+	    unlikely(gic_read_rpr() == GICD_INT_RPR_PRI(GICD_INT_NMI_PRI))) {
 		gic_handle_nmi(irqnr, regs);
 		return;
 	}
···
 	}
 }
 
+static bool fwspec_is_partitioned_ppi(struct irq_fwspec *fwspec,
+				      irq_hw_number_t hwirq)
+{
+	enum gic_intid_range range;
+
+	if (!gic_data.ppi_descs)
+		return false;
+
+	if (!is_of_node(fwspec->fwnode))
+		return false;
+
+	if (fwspec->param_count < 4 || !fwspec->param[3])
+		return false;
+
+	range = __get_intid_range(hwirq);
+	if (range != PPI_RANGE && range != EPPI_RANGE)
+		return false;
+
+	return true;
+}
+
 static int gic_irq_domain_select(struct irq_domain *d,
 				 struct irq_fwspec *fwspec,
 				 enum irq_domain_bus_token bus_token)
 {
+	unsigned int type, ret, ppi_idx;
+	irq_hw_number_t hwirq;
+
 	/* Not for us */
 	if (fwspec->fwnode != d->fwnode)
 		return 0;
···
 	if (!is_of_node(fwspec->fwnode))
 		return 1;
 
+	ret = gic_irq_domain_translate(d, fwspec, &hwirq, &type);
+	if (WARN_ON_ONCE(ret))
+		return 0;
+
+	if (!fwspec_is_partitioned_ppi(fwspec, hwirq))
+		return d == gic_data.domain;
+
 	/*
 	 * If this is a PPI and we have a 4th (non-null) parameter,
 	 * then we need to match the partition domain.
 	 */
-	if (fwspec->param_count >= 4 &&
-	    fwspec->param[0] == 1 && fwspec->param[3] != 0 &&
-	    gic_data.ppi_descs)
-		return d == partition_get_domain(gic_data.ppi_descs[fwspec->param[1]]);
-
-	return d == gic_data.domain;
+	ppi_idx = __gic_get_ppi_index(hwirq);
+	return d == partition_get_domain(gic_data.ppi_descs[ppi_idx]);
 }
 
 static const struct irq_domain_ops gic_irq_domain_ops = {
···
 				  unsigned long *hwirq,
 				  unsigned int *type)
 {
+	unsigned long ppi_intid;
 	struct device_node *np;
+	unsigned int ppi_idx;
 	int ret;
 
 	if (!gic_data.ppi_descs)
···
 	if (WARN_ON(!np))
 		return -EINVAL;
 
-	ret = partition_translate_id(gic_data.ppi_descs[fwspec->param[1]],
+	ret = gic_irq_domain_translate(d, fwspec, &ppi_intid, type);
+	if (WARN_ON_ONCE(ret))
+		return 0;
+
+	ppi_idx = __gic_get_ppi_index(ppi_intid);
+	ret = partition_translate_id(gic_data.ppi_descs[ppi_idx],
 				     of_node_to_fwnode(np));
 	if (ret < 0)
 		return ret;
+18 -1
drivers/irqchip/irq-loongson-pch-pic.c
··· 92 92 case IRQ_TYPE_EDGE_RISING: 93 93 pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq); 94 94 pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq); 95 + irq_set_handler_locked(d, handle_edge_irq); 95 96 break; 96 97 case IRQ_TYPE_EDGE_FALLING: 97 98 pch_pic_bitset(priv, PCH_PIC_EDGE, d->hwirq); 98 99 pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq); 100 + irq_set_handler_locked(d, handle_edge_irq); 99 101 break; 100 102 case IRQ_TYPE_LEVEL_HIGH: 101 103 pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq); 102 104 pch_pic_bitclr(priv, PCH_PIC_POL, d->hwirq); 105 + irq_set_handler_locked(d, handle_level_irq); 103 106 break; 104 107 case IRQ_TYPE_LEVEL_LOW: 105 108 pch_pic_bitclr(priv, PCH_PIC_EDGE, d->hwirq); 106 109 pch_pic_bitset(priv, PCH_PIC_POL, d->hwirq); 110 + irq_set_handler_locked(d, handle_level_irq); 107 111 break; 108 112 default: 109 113 ret = -EINVAL; ··· 117 113 return ret; 118 114 } 119 115 116 + static void pch_pic_ack_irq(struct irq_data *d) 117 + { 118 + unsigned int reg; 119 + struct pch_pic *priv = irq_data_get_irq_chip_data(d); 120 + 121 + reg = readl(priv->base + PCH_PIC_EDGE + PIC_REG_IDX(d->hwirq) * 4); 122 + if (reg & BIT(PIC_REG_BIT(d->hwirq))) { 123 + writel(BIT(PIC_REG_BIT(d->hwirq)), 124 + priv->base + PCH_PIC_CLR + PIC_REG_IDX(d->hwirq) * 4); 125 + } 126 + irq_chip_ack_parent(d); 127 + } 128 + 120 129 static struct irq_chip pch_pic_irq_chip = { 121 130 .name = "PCH PIC", 122 131 .irq_mask = pch_pic_mask_irq, 123 132 .irq_unmask = pch_pic_unmask_irq, 124 - .irq_ack = irq_chip_ack_parent, 133 + .irq_ack = pch_pic_ack_irq, 125 134 .irq_set_affinity = irq_chip_set_affinity_parent, 126 135 .irq_set_type = pch_pic_set_type, 127 136 };
+1 -4
drivers/irqchip/irq-ls-scfg-msi.c
··· 362 362 363 363 msi_data->irqs_num = MSI_IRQS_PER_MSIR * 364 364 (1 << msi_data->cfg->ibs_shift); 365 - msi_data->used = devm_kcalloc(&pdev->dev, 366 - BITS_TO_LONGS(msi_data->irqs_num), 367 - sizeof(*msi_data->used), 368 - GFP_KERNEL); 365 + msi_data->used = devm_bitmap_zalloc(&pdev->dev, msi_data->irqs_num, GFP_KERNEL); 369 366 if (!msi_data->used) 370 367 return -ENOMEM; 371 368 /*
+1
drivers/irqchip/irq-mtk-sysirq.c
··· 65 65 .irq_set_type = mtk_sysirq_set_type, 66 66 .irq_retrigger = irq_chip_retrigger_hierarchy, 67 67 .irq_set_affinity = irq_chip_set_affinity_parent, 68 + .flags = IRQCHIP_SKIP_SET_WAKE, 68 69 }; 69 70 70 71 static int mtk_sysirq_domain_translate(struct irq_domain *d,
+1 -3
drivers/irqchip/irq-mvebu-gicp.c
··· 210 210 gicp->spi_cnt += gicp->spi_ranges[i].count; 211 211 } 212 212 213 - gicp->spi_bitmap = devm_kcalloc(&pdev->dev, 214 - BITS_TO_LONGS(gicp->spi_cnt), sizeof(long), 215 - GFP_KERNEL); 213 + gicp->spi_bitmap = devm_bitmap_zalloc(&pdev->dev, gicp->spi_cnt, GFP_KERNEL); 216 214 if (!gicp->spi_bitmap) 217 215 return -ENOMEM; 218 216
+2 -3
drivers/irqchip/irq-mvebu-odmi.c
··· 171 171 if (!odmis) 172 172 return -ENOMEM; 173 173 174 - odmis_bm = kcalloc(BITS_TO_LONGS(odmis_count * NODMIS_PER_FRAME), 175 - sizeof(long), GFP_KERNEL); 174 + odmis_bm = bitmap_zalloc(odmis_count * NODMIS_PER_FRAME, GFP_KERNEL); 176 175 if (!odmis_bm) { 177 176 ret = -ENOMEM; 178 177 goto err_alloc; ··· 226 227 if (odmi->base && !IS_ERR(odmi->base)) 227 228 iounmap(odmis[i].base); 228 229 } 229 - kfree(odmis_bm); 230 + bitmap_free(odmis_bm); 230 231 err_alloc: 231 232 kfree(odmis); 232 233 return ret;
+1 -2
drivers/irqchip/irq-partition-percpu.c
··· 215 215 goto out; 216 216 desc->domain = d; 217 217 218 - desc->bitmap = kcalloc(BITS_TO_LONGS(nr_parts), sizeof(long), 219 - GFP_KERNEL); 218 + desc->bitmap = bitmap_zalloc(nr_parts, GFP_KERNEL); 220 219 if (WARN_ON(!desc->bitmap)) 221 220 goto out; 222 221
+11 -57
drivers/irqchip/qcom-pdc.c
···
 	return readl_relaxed(pdc_base + reg + i * sizeof(u32));
 }
 
-static int qcom_pdc_gic_get_irqchip_state(struct irq_data *d,
-					  enum irqchip_irq_state which,
-					  bool *state)
-{
-	if (d->hwirq == GPIO_NO_WAKE_IRQ)
-		return 0;
-
-	return irq_chip_get_parent_state(d, which, state);
-}
-
-static int qcom_pdc_gic_set_irqchip_state(struct irq_data *d,
-					  enum irqchip_irq_state which,
-					  bool value)
-{
-	if (d->hwirq == GPIO_NO_WAKE_IRQ)
-		return 0;
-
-	return irq_chip_set_parent_state(d, which, value);
-}
-
 static void pdc_enable_intr(struct irq_data *d, bool on)
 {
 	int pin_out = d->hwirq;
···
 
 static void qcom_pdc_gic_disable(struct irq_data *d)
 {
-	if (d->hwirq == GPIO_NO_WAKE_IRQ)
-		return;
-
 	pdc_enable_intr(d, false);
 	irq_chip_disable_parent(d);
 }
 
 static void qcom_pdc_gic_enable(struct irq_data *d)
 {
-	if (d->hwirq == GPIO_NO_WAKE_IRQ)
-		return;
-
 	pdc_enable_intr(d, true);
 	irq_chip_enable_parent(d);
-}
-
-static void qcom_pdc_gic_mask(struct irq_data *d)
-{
-	if (d->hwirq == GPIO_NO_WAKE_IRQ)
-		return;
-
-	irq_chip_mask_parent(d);
-}
-
-static void qcom_pdc_gic_unmask(struct irq_data *d)
-{
-	if (d->hwirq == GPIO_NO_WAKE_IRQ)
-		return;
-
-	irq_chip_unmask_parent(d);
 }
 
 /*
···
 */
 static int qcom_pdc_gic_set_type(struct irq_data *d, unsigned int type)
 {
-	int pin_out = d->hwirq;
 	enum pdc_irq_config_bits pdc_type;
 	enum pdc_irq_config_bits old_pdc_type;
 	int ret;
-
-	if (pin_out == GPIO_NO_WAKE_IRQ)
-		return 0;
 
 	switch (type) {
 	case IRQ_TYPE_EDGE_RISING:
···
 		return -EINVAL;
 	}
 
-	old_pdc_type = pdc_reg_read(IRQ_i_CFG, pin_out);
-	pdc_reg_write(IRQ_i_CFG, pin_out, pdc_type);
+	old_pdc_type = pdc_reg_read(IRQ_i_CFG, d->hwirq);
+	pdc_reg_write(IRQ_i_CFG, d->hwirq, pdc_type);
 
 	ret = irq_chip_set_type_parent(d, type);
 	if (ret)
···
 static struct irq_chip qcom_pdc_gic_chip = {
 	.name = "PDC",
 	.irq_eoi = irq_chip_eoi_parent,
-	.irq_mask = qcom_pdc_gic_mask,
-	.irq_unmask = qcom_pdc_gic_unmask,
+	.irq_mask = irq_chip_mask_parent,
+	.irq_unmask = irq_chip_unmask_parent,
 	.irq_disable = qcom_pdc_gic_disable,
 	.irq_enable = qcom_pdc_gic_enable,
-	.irq_get_irqchip_state = qcom_pdc_gic_get_irqchip_state,
-	.irq_set_irqchip_state = qcom_pdc_gic_set_irqchip_state,
+	.irq_get_irqchip_state = irq_chip_get_parent_state,
+	.irq_set_irqchip_state = irq_chip_set_parent_state,
 	.irq_retrigger = irq_chip_retrigger_hierarchy,
 	.irq_set_type = qcom_pdc_gic_set_type,
 	.flags = IRQCHIP_MASK_ON_SUSPEND |
···
 
 	parent_hwirq = get_parent_hwirq(hwirq);
 	if (parent_hwirq == PDC_NO_PARENT_IRQ)
-		return 0;
+		return irq_domain_disconnect_hierarchy(domain->parent, virq);
 
 	if (type & IRQ_TYPE_EDGE_BOTH)
 		type = IRQ_TYPE_EDGE_RISING;
···
 	if (ret)
 		return ret;
 
+	if (hwirq == GPIO_NO_WAKE_IRQ)
+		return irq_domain_disconnect_hierarchy(domain, virq);
+
 	ret = irq_domain_set_hwirq_and_chip(domain, virq, hwirq,
 					    &qcom_pdc_gic_chip, NULL);
 	if (ret)
 		return ret;
 
-	if (hwirq == GPIO_NO_WAKE_IRQ)
-		return 0;
-
 	parent_hwirq = get_parent_hwirq(hwirq);
 	if (parent_hwirq == PDC_NO_PARENT_IRQ)
-		return 0;
+		return irq_domain_disconnect_hierarchy(domain->parent, virq);
 
 	if (type & IRQ_TYPE_EDGE_BOTH)
 		type = IRQ_TYPE_EDGE_RISING;
+1 -1
drivers/mfd/db8500-prcmu.c
··· 2364 2364 2365 2365 for (n = 0; n < NUM_PRCMU_WAKEUPS; n++) { 2366 2366 if (ev & prcmu_irq_bit[n]) 2367 - generic_handle_irq(irq_find_mapping(db8500_irq_domain, n)); 2367 + generic_handle_domain_irq(db8500_irq_domain, n); 2368 2368 } 2369 2369 r = true; 2370 2370 break;
+2 -2
drivers/mfd/fsl-imx25-tsadc.c
··· 35 35 regmap_read(tsadc->regs, MX25_TSC_TGSR, &status); 36 36 37 37 if (status & MX25_TGSR_GCQ_INT) 38 - generic_handle_irq(irq_find_mapping(tsadc->domain, 1)); 38 + generic_handle_domain_irq(tsadc->domain, 1); 39 39 40 40 if (status & MX25_TGSR_TCQ_INT) 41 - generic_handle_irq(irq_find_mapping(tsadc->domain, 0)); 41 + generic_handle_domain_irq(tsadc->domain, 0); 42 42 43 43 chained_irq_exit(chip, desc); 44 44 }
+3 -7
drivers/mfd/ioc3.c
··· 105 105 struct ioc3_priv_data *ipd = domain->host_data; 106 106 struct ioc3 __iomem *regs = ipd->regs; 107 107 u32 pending, mask; 108 - unsigned int irq; 109 108 110 109 pending = readl(&regs->sio_ir); 111 110 mask = readl(&regs->sio_ies); 112 111 pending &= mask; /* Mask off not enabled interrupts */ 113 112 114 - if (pending) { 115 - irq = irq_find_mapping(domain, __ffs(pending)); 116 - if (irq) 117 - generic_handle_irq(irq); 118 - } else { 113 + if (pending) 114 + generic_handle_domain_irq(domain, __ffs(pending)); 115 + else 119 116 spurious_interrupt(); 120 - } 121 117 } 122 118 123 119 /*
+4 -6
drivers/mfd/qcom-pm8xxx.c
··· 122 122 123 123 static int pm8xxx_irq_block_handler(struct pm_irq_chip *chip, int block) 124 124 { 125 - int pmirq, irq, i, ret = 0; 125 + int pmirq, i, ret = 0; 126 126 unsigned int bits; 127 127 128 128 ret = pm8xxx_read_block_irq(chip, block, &bits); ··· 139 139 for (i = 0; i < 8; i++) { 140 140 if (bits & (1 << i)) { 141 141 pmirq = block * 8 + i; 142 - irq = irq_find_mapping(chip->irqdomain, pmirq); 143 - generic_handle_irq(irq); 142 + generic_handle_domain_irq(chip->irqdomain, pmirq); 144 143 } 145 144 } 146 145 return 0; ··· 198 199 static void pm8821_irq_block_handler(struct pm_irq_chip *chip, 199 200 int master, int block) 200 201 { 201 - int pmirq, irq, i, ret; 202 + int pmirq, i, ret; 202 203 unsigned int bits; 203 204 204 205 ret = regmap_read(chip->regmap, ··· 215 216 for (i = 0; i < 8; i++) { 216 217 if (bits & BIT(i)) { 217 218 pmirq = block * 8 + i; 218 - irq = irq_find_mapping(chip->irqdomain, pmirq); 219 - generic_handle_irq(irq); 219 + generic_handle_domain_irq(chip->irqdomain, pmirq); 220 220 } 221 221 } 222 222 }
+101 -207
drivers/pci/msi.c
···
 		return default_restore_msi_irqs(dev);
 }
 
-static inline __attribute_const__ u32 msi_mask(unsigned x)
-{
-	/* Don't shift by >= width of type */
-	if (x >= 5)
-		return 0xffffffff;
-	return (1 << (1 << x)) - 1;
-}
-
 /*
  * PCI 2.3 does not specify mask bits for each MSI interrupt. Attempting to
  * mask all MSI interrupts by clearing the MSI enable bit does not work
  * reliably as devices without an INTx disable bit will then generate a
  * level IRQ which will never be cleared.
  */
-void __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
+static inline __attribute_const__ u32 msi_multi_mask(struct msi_desc *desc)
+{
+	/* Don't shift by >= width of type */
+	if (desc->msi_attrib.multi_cap >= 5)
+		return 0xffffffff;
+	return (1 << (1 << desc->msi_attrib.multi_cap)) - 1;
+}
+
+static noinline void pci_msi_update_mask(struct msi_desc *desc, u32 clear, u32 set)
 {
 	raw_spinlock_t *lock = &desc->dev->msi_lock;
 	unsigned long flags;
 
-	if (pci_msi_ignore_mask || !desc->msi_attrib.maskbit)
-		return;
-
 	raw_spin_lock_irqsave(lock, flags);
-	desc->masked &= ~mask;
-	desc->masked |= flag;
+	desc->msi_mask &= ~clear;
+	desc->msi_mask |= set;
 	pci_write_config_dword(msi_desc_to_pci_dev(desc), desc->mask_pos,
-			       desc->masked);
+			       desc->msi_mask);
 	raw_spin_unlock_irqrestore(lock, flags);
 }
 
-static void msi_mask_irq(struct msi_desc *desc, u32 mask, u32 flag)
+static inline void pci_msi_mask(struct msi_desc *desc, u32 mask)
 {
-	__pci_msi_desc_mask_irq(desc, mask, flag);
+	pci_msi_update_mask(desc, 0, mask);
 }
 
-static void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
+static inline void pci_msi_unmask(struct msi_desc *desc, u32 mask)
 {
-	if (desc->msi_attrib.is_virtual)
-		return NULL;
+	pci_msi_update_mask(desc, mask, 0);
+}
 
-	return desc->mask_base +
-		desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
+static inline void __iomem *pci_msix_desc_addr(struct msi_desc *desc)
+{
+	return desc->mask_base + desc->msi_attrib.entry_nr * PCI_MSIX_ENTRY_SIZE;
 }
 
 /*
- * This internal function does not flush PCI writes to the device.
- * All users must ensure that they read from the device before either
- * assuming that the device state is up to date, or returning out of this
- * file. This saves a few milliseconds when initialising devices with lots
- * of MSI-X interrupts.
+ * This internal function does not flush PCI writes to the device. All
+ * users must ensure that they read from the device before either assuming
+ * that the device state is up to date, or returning out of this file.
+ * It does not affect the msi_desc::msix_ctrl cache either. Use with care!
  */
-u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag)
+static void pci_msix_write_vector_ctrl(struct msi_desc *desc, u32 ctrl)
 {
-	u32 mask_bits = desc->masked;
-	void __iomem *desc_addr;
+	void __iomem *desc_addr = pci_msix_desc_addr(desc);
 
-	if (pci_msi_ignore_mask)
-		return 0;
-
-	desc_addr = pci_msix_desc_addr(desc);
-	if (!desc_addr)
-		return 0;
-
-	mask_bits &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
-	if (flag & PCI_MSIX_ENTRY_CTRL_MASKBIT)
-		mask_bits |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
-
-	writel(mask_bits, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
-
-	return mask_bits;
+	writel(ctrl, desc_addr + PCI_MSIX_ENTRY_VECTOR_CTRL);
 }
 
-static void msix_mask_irq(struct msi_desc *desc, u32 flag)
+static inline void pci_msix_mask(struct msi_desc *desc)
 {
-	desc->masked = __pci_msix_desc_mask_irq(desc, flag);
+	desc->msix_ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
+	pci_msix_write_vector_ctrl(desc, desc->msix_ctrl);
+	/* Flush write to device */
+	readl(desc->mask_base);
 }
 
-static void msi_set_mask_bit(struct irq_data *data, u32 flag)
+static inline void pci_msix_unmask(struct msi_desc *desc)
 {
-	struct msi_desc *desc = irq_data_get_msi_desc(data);
+	desc->msix_ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
+	pci_msix_write_vector_ctrl(desc, desc->msix_ctrl);
+}
 
-	if (desc->msi_attrib.is_msix) {
-		msix_mask_irq(desc, flag);
-		readl(desc->mask_base);		/* Flush write to device */
-	} else {
-		unsigned offset = data->irq - desc->irq;
-		msi_mask_irq(desc, 1 << offset, flag << offset);
-	}
+static void __pci_msi_mask_desc(struct msi_desc *desc, u32 mask)
+{
+	if (pci_msi_ignore_mask || desc->msi_attrib.is_virtual)
+		return;
+
+	if (desc->msi_attrib.is_msix)
+		pci_msix_mask(desc);
+	else if (desc->msi_attrib.maskbit)
+		pci_msi_mask(desc, mask);
+}
+
+static void __pci_msi_unmask_desc(struct msi_desc *desc, u32 mask)
+{
+	if (pci_msi_ignore_mask || desc->msi_attrib.is_virtual)
+		return;
+
+	if (desc->msi_attrib.is_msix)
+		pci_msix_unmask(desc);
+	else if (desc->msi_attrib.maskbit)
+		pci_msi_unmask(desc, mask);
 }
 
 /**
···
  */
 void pci_msi_mask_irq(struct irq_data *data)
 {
-	msi_set_mask_bit(data, 1);
+	struct msi_desc *desc = irq_data_get_msi_desc(data);
+
+	__pci_msi_mask_desc(desc, BIT(data->irq - desc->irq));
 }
 EXPORT_SYMBOL_GPL(pci_msi_mask_irq);
 
···
  */
 void pci_msi_unmask_irq(struct irq_data *data)
 {
-	msi_set_mask_bit(data, 0);
+	struct msi_desc *desc = irq_data_get_msi_desc(data);
+
+	__pci_msi_unmask_desc(desc, BIT(data->irq - desc->irq));
 }
 EXPORT_SYMBOL_GPL(pci_msi_unmask_irq);
 
···
 	if (entry->msi_attrib.is_msix) {
 		void __iomem *base = pci_msix_desc_addr(entry);
 
-		if (!base) {
-			WARN_ON(1);
+		if (WARN_ON_ONCE(entry->msi_attrib.is_virtual))
 			return;
-		}
 
 		msg->address_lo = readl(base + PCI_MSIX_ENTRY_LOWER_ADDR);
 		msg->address_hi = readl(base + PCI_MSIX_ENTRY_UPPER_ADDR);
···
 		/* Don't touch the hardware now */
 	} else if (entry->msi_attrib.is_msix) {
 		void __iomem *base = pci_msix_desc_addr(entry);
-		bool unmasked = !(entry->masked & PCI_MSIX_ENTRY_CTRL_MASKBIT);
+		u32 ctrl = entry->msix_ctrl;
+		bool unmasked = !(ctrl & PCI_MSIX_ENTRY_CTRL_MASKBIT);
 
-		if (!base)
+		if (entry->msi_attrib.is_virtual)
 			goto skip;
 
 		/*
···
 		 * undefined."
 		 */
 		if (unmasked)
-			__pci_msix_desc_mask_irq(entry, PCI_MSIX_ENTRY_CTRL_MASKBIT);
+			pci_msix_write_vector_ctrl(entry, ctrl | PCI_MSIX_ENTRY_CTRL_MASKBIT);
 
 		writel(msg->address_lo, base + PCI_MSIX_ENTRY_LOWER_ADDR);
 		writel(msg->address_hi, base + PCI_MSIX_ENTRY_UPPER_ADDR);
 		writel(msg->data, base + PCI_MSIX_ENTRY_DATA);
 
 		if (unmasked)
-			__pci_msix_desc_mask_irq(entry, 0);
+			pci_msix_write_vector_ctrl(entry, ctrl);
 
 		/* Ensure that the writes are visible in the device */
 		readl(base + PCI_MSIX_ENTRY_DATA);
···
 {
 	struct list_head *msi_list = dev_to_msi_list(&dev->dev);
 	struct msi_desc *entry, *tmp;
-	struct attribute **msi_attrs;
-	struct device_attribute *dev_attr;
-	int i, count = 0;
+	int i;
 
 	for_each_pci_msi_entry(entry, dev)
 		if (entry->irq)
···
 	}
 
 	if (dev->msi_irq_groups) {
-		sysfs_remove_groups(&dev->dev.kobj, dev->msi_irq_groups);
-		msi_attrs = dev->msi_irq_groups[0]->attrs;
-		while (msi_attrs[count]) {
-			dev_attr = container_of(msi_attrs[count],
-						struct device_attribute, attr);
-			kfree(dev_attr->attr.name);
-			kfree(dev_attr);
-			++count;
-		}
-		kfree(msi_attrs);
-		kfree(dev->msi_irq_groups[0]);
-		kfree(dev->msi_irq_groups);
+		msi_destroy_sysfs(&dev->dev, dev->msi_irq_groups);
 		dev->msi_irq_groups = NULL;
 	}
 }
···
 	arch_restore_msi_irqs(dev);
 
 	pci_read_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, &control);
-	msi_mask_irq(entry, msi_mask(entry->msi_attrib.multi_cap),
-		     entry->masked);
+	pci_msi_update_mask(entry, 0, 0);
 	control &= ~PCI_MSI_FLAGS_QSIZE;
 	control |= (entry->msi_attrib.multiple << 4) | PCI_MSI_FLAGS_ENABLE;
 	pci_write_config_word(dev, dev->msi_cap + PCI_MSI_FLAGS, control);
···
 
 	arch_restore_msi_irqs(dev);
 
 	for_each_pci_msi_entry(entry, dev)
-		msix_mask_irq(entry, entry->masked);
+		pci_msix_write_vector_ctrl(entry, entry->msix_ctrl);
 
 	pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_MASKALL, 0);
 }
···
 	__pci_restore_msix_state(dev);
 }
 EXPORT_SYMBOL_GPL(pci_restore_msi_state);
-
-static ssize_t msi_mode_show(struct device *dev, struct device_attribute *attr,
-			     char *buf)
-{
-	struct msi_desc *entry;
-	unsigned long irq;
-	int retval;
-
-	retval = kstrtoul(attr->attr.name, 10, &irq);
-	if (retval)
-		return retval;
-
-	entry = irq_get_msi_desc(irq);
-	if (!entry)
-		return -ENODEV;
-
-	return sysfs_emit(buf, "%s\n",
-			  entry->msi_attrib.is_msix ? "msix" : "msi");
-}
-
-static int populate_msi_sysfs(struct pci_dev *pdev)
-{
-	struct attribute **msi_attrs;
-	struct attribute *msi_attr;
-	struct device_attribute *msi_dev_attr;
-	struct attribute_group *msi_irq_group;
-	const struct attribute_group **msi_irq_groups;
-	struct msi_desc *entry;
-	int ret = -ENOMEM;
-	int num_msi = 0;
-	int count = 0;
-	int i;
-
-	/* Determine how many msi entries we have */
-	for_each_pci_msi_entry(entry, pdev)
-		num_msi += entry->nvec_used;
-	if (!num_msi)
-		return 0;
-
-	/* Dynamically create the MSI attributes for the PCI device */
-	msi_attrs = kcalloc(num_msi + 1, sizeof(void *), GFP_KERNEL);
-	if (!msi_attrs)
-		return -ENOMEM;
-	for_each_pci_msi_entry(entry, pdev) {
-		for (i = 0; i < entry->nvec_used; i++) {
-			msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL);
-			if (!msi_dev_attr)
-				goto error_attrs;
-			msi_attrs[count] = &msi_dev_attr->attr;
-
-			sysfs_attr_init(&msi_dev_attr->attr);
-			msi_dev_attr->attr.name = kasprintf(GFP_KERNEL, "%d",
-							    entry->irq + i);
-			if (!msi_dev_attr->attr.name)
-				goto error_attrs;
520 - msi_dev_attr->attr.mode = S_IRUGO; 521 - msi_dev_attr->show = msi_mode_show; 522 - ++count; 523 - } 524 - } 525 - 526 - msi_irq_group = kzalloc(sizeof(*msi_irq_group), GFP_KERNEL); 527 - if (!msi_irq_group) 528 - goto error_attrs; 529 - msi_irq_group->name = "msi_irqs"; 530 - msi_irq_group->attrs = msi_attrs; 531 - 532 - msi_irq_groups = kcalloc(2, sizeof(void *), GFP_KERNEL); 533 - if (!msi_irq_groups) 534 - goto error_irq_group; 535 - msi_irq_groups[0] = msi_irq_group; 536 - 537 - ret = sysfs_create_groups(&pdev->dev.kobj, msi_irq_groups); 538 - if (ret) 539 - goto error_irq_groups; 540 - pdev->msi_irq_groups = msi_irq_groups; 541 - 542 - return 0; 543 - 544 - error_irq_groups: 545 - kfree(msi_irq_groups); 546 - error_irq_group: 547 - kfree(msi_irq_group); 548 - error_attrs: 549 - count = 0; 550 - msi_attr = msi_attrs[count]; 551 - while (msi_attr) { 552 - msi_dev_attr = container_of(msi_attr, struct device_attribute, attr); 553 - kfree(msi_attr->name); 554 - kfree(msi_dev_attr); 555 - ++count; 556 - msi_attr = msi_attrs[count]; 557 - } 558 - kfree(msi_attrs); 559 - return ret; 560 - } 561 475 562 476 static struct msi_desc * 563 477 msi_setup_entry(struct pci_dev *dev, int nvec, struct irq_affinity *affd) ··· 496 602 497 603 /* Save the initial mask status */ 498 604 if (entry->msi_attrib.maskbit) 499 - pci_read_config_dword(dev, entry->mask_pos, &entry->masked); 605 + pci_read_config_dword(dev, entry->mask_pos, &entry->msi_mask); 500 606 501 607 out: 502 608 kfree(masks); ··· 507 613 { 508 614 struct msi_desc *entry; 509 615 616 + if (!dev->no_64bit_msi) 617 + return 0; 618 + 510 619 for_each_pci_msi_entry(entry, dev) { 511 - if (entry->msg.address_hi && dev->no_64bit_msi) { 620 + if (entry->msg.address_hi) { 512 621 pci_err(dev, "arch assigned 64-bit MSI address %#x%08x but device only supports 32 bits\n", 513 622 entry->msg.address_hi, entry->msg.address_lo); 514 623 return -EIO; ··· 537 640 { 538 641 struct msi_desc *entry; 539 642 int ret; 540 - 
unsigned mask; 541 643 542 644 pci_msi_set_enable(dev, 0); /* Disable MSI during set up */ 543 645 ··· 545 649 return -ENOMEM; 546 650 547 651 /* All MSIs are unmasked by default; mask them all */ 548 - mask = msi_mask(entry->msi_attrib.multi_cap); 549 - msi_mask_irq(entry, mask, mask); 652 + pci_msi_mask(entry, msi_multi_mask(entry)); 550 653 551 654 list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); 552 655 553 656 /* Configure MSI capability structure */ 554 657 ret = pci_msi_setup_msi_irqs(dev, nvec, PCI_CAP_ID_MSI); 555 - if (ret) { 556 - msi_mask_irq(entry, mask, 0); 557 - free_msi_irqs(dev); 558 - return ret; 559 - } 658 + if (ret) 659 + goto err; 560 660 561 661 ret = msi_verify_entries(dev); 562 - if (ret) { 563 - msi_mask_irq(entry, mask, 0); 564 - free_msi_irqs(dev); 565 - return ret; 566 - } 662 + if (ret) 663 + goto err; 567 664 568 - ret = populate_msi_sysfs(dev); 569 - if (ret) { 570 - msi_mask_irq(entry, mask, 0); 571 - free_msi_irqs(dev); 572 - return ret; 665 + dev->msi_irq_groups = msi_populate_sysfs(&dev->dev); 666 + if (IS_ERR(dev->msi_irq_groups)) { 667 + ret = PTR_ERR(dev->msi_irq_groups); 668 + goto err; 573 669 } 574 670 575 671 /* Set MSI enabled bits */ ··· 572 684 pcibios_free_irq(dev); 573 685 dev->irq = entry->irq; 574 686 return 0; 687 + 688 + err: 689 + pci_msi_unmask(entry, msi_multi_mask(entry)); 690 + free_msi_irqs(dev); 691 + return ret; 575 692 } 576 693 577 694 static void __iomem *msix_map_region(struct pci_dev *dev, unsigned nr_entries) ··· 638 745 entry->msi_attrib.default_irq = dev->irq; 639 746 entry->mask_base = base; 640 747 641 - addr = pci_msix_desc_addr(entry); 642 - if (addr) 643 - entry->masked = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL); 748 + if (!entry->msi_attrib.is_virtual) { 749 + addr = pci_msix_desc_addr(entry); 750 + entry->msix_ctrl = readl(addr + PCI_MSIX_ENTRY_VECTOR_CTRL); 751 + } 644 752 645 753 list_add_tail(&entry->list, dev_to_msi_list(&dev->dev)); 646 754 if (masks) ··· 730 836 731 837 
msix_update_entries(dev, entries); 732 838 733 - ret = populate_msi_sysfs(dev); 734 - if (ret) 839 + dev->msi_irq_groups = msi_populate_sysfs(&dev->dev); 840 + if (IS_ERR(dev->msi_irq_groups)) { 841 + ret = PTR_ERR(dev->msi_irq_groups); 735 842 goto out_free; 843 + } 736 844 737 845 /* Set MSI-X enabled bits and unmask the function */ 738 846 pci_intx_for_msi(dev, 0); ··· 847 951 static void pci_msi_shutdown(struct pci_dev *dev) 848 952 { 849 953 struct msi_desc *desc; 850 - u32 mask; 851 954 852 955 if (!pci_msi_enable || !dev || !dev->msi_enabled) 853 956 return; ··· 859 964 dev->msi_enabled = 0; 860 965 861 966 /* Return the device with MSI unmasked as initial states */ 862 - mask = msi_mask(desc->msi_attrib.multi_cap); 863 - msi_mask_irq(desc, mask, 0); 967 + pci_msi_unmask(desc, msi_multi_mask(desc)); 864 968 865 969 /* Restore dev->irq to its default pin-assertion IRQ */ 866 970 dev->irq = desc->msi_attrib.default_irq; ··· 945 1051 946 1052 /* Return the device with MSI-X masked as initial states */ 947 1053 for_each_pci_msi_entry(entry, dev) 948 - __pci_msix_desc_mask_irq(entry, 1); 1054 + pci_msix_mask(entry); 949 1055 950 1056 pci_msix_clear_and_set_ctrl(dev, PCI_MSIX_FLAGS_ENABLE, 0); 951 1057 pci_intx_for_msi(dev, 1);
+2 -3
drivers/pinctrl/actions/pinctrl-owl.c
···
  	unsigned int parent = irq_desc_get_irq(desc);
  	const struct owl_gpio_port *port;
  	void __iomem *base;
 -	unsigned int pin, irq, offset = 0, i;
 +	unsigned int pin, offset = 0, i;
  	unsigned long pending_irq;
 
  	chained_irq_enter(chip, desc);
···
  	pending_irq = readl_relaxed(base + port->intc_pd);
 
  	for_each_set_bit(pin, &pending_irq, port->pins) {
 -		irq = irq_find_mapping(domain, offset + pin);
 -		generic_handle_irq(irq);
 +		generic_handle_domain_irq(domain, offset + pin);
 
  		/* clear pending interrupt */
  		owl_gpio_update_reg(base + port->intc_pd, pin, true);
+2 -2
drivers/pinctrl/bcm/pinctrl-bcm2835.c
···
  	events &= pc->enabled_irq_map[bank];
  	for_each_set_bit(offset, &events, 32) {
  		gpio = (32 * bank) + offset;
 -		generic_handle_irq(irq_linear_revmap(pc->gpio_chip.irq.domain,
 -						     gpio));
 +		generic_handle_domain_irq(pc->gpio_chip.irq.domain,
 +					  gpio);
  	}
  }
+1 -2
drivers/pinctrl/bcm/pinctrl-iproc-gpio.c
···
 
  	for_each_set_bit(bit, &val, NGPIOS_PER_BANK) {
  		unsigned pin = NGPIOS_PER_BANK * i + bit;
 -		int child_irq = irq_find_mapping(gc->irq.domain, pin);
 
  		/*
  		 * Clear the interrupt before invoking the
···
  		writel(BIT(bit), chip->base + (i * GPIO_BANK_SIZE) +
  		       IPROC_GPIO_INT_CLR_OFFSET);
 
 -		generic_handle_irq(child_irq);
 +		generic_handle_domain_irq(gc->irq.domain, pin);
  	}
  }
+1 -2
drivers/pinctrl/bcm/pinctrl-nsp-gpio.c
···
  		int_bits = level | event;
 
  		for_each_set_bit(bit, &int_bits, gc->ngpio)
 -			generic_handle_irq(
 -				irq_linear_revmap(gc->irq.domain, bit));
 +			generic_handle_domain_irq(gc->irq.domain, bit);
  	}
 
  	return int_bits ? IRQ_HANDLED : IRQ_NONE;
+2 -5
drivers/pinctrl/intel/pinctrl-baytrail.c
···
  	u32 base, pin;
  	void __iomem *reg;
  	unsigned long pending;
 -	unsigned int virq;
 
  	/* check from GPIO controller which pin triggered the interrupt */
  	for (base = 0; base < vg->chip.ngpio; base += 32) {
···
  		raw_spin_lock(&byt_lock);
  		pending = readl(reg);
  		raw_spin_unlock(&byt_lock);
 -		for_each_set_bit(pin, &pending, 32) {
 -			virq = irq_find_mapping(vg->chip.irq.domain, base + pin);
 -			generic_handle_irq(virq);
 -		}
 +		for_each_set_bit(pin, &pending, 32)
 +			generic_handle_domain_irq(vg->chip.irq.domain, base + pin);
  	}
  	chip->irq_eoi(data);
  }
+2 -3
drivers/pinctrl/intel/pinctrl-cherryview.c
···
  	raw_spin_unlock_irqrestore(&chv_lock, flags);
 
  	for_each_set_bit(intr_line, &pending, community->nirqs) {
 -		unsigned int irq, offset;
 +		unsigned int offset;
 
  		offset = cctx->intr_lines[intr_line];
 -		irq = irq_find_mapping(gc->irq.domain, offset);
 -		generic_handle_irq(irq);
 +		generic_handle_domain_irq(gc->irq.domain, offset);
  	}
 
  	chained_irq_exit(chip, desc);
+2 -6
drivers/pinctrl/intel/pinctrl-lynxpoint.c
···
  		/* Only interrupts that are enabled */
  		pending = ioread32(reg) & ioread32(ena);
 
 -		for_each_set_bit(pin, &pending, 32) {
 -			unsigned int irq;
 -
 -			irq = irq_find_mapping(lg->chip.irq.domain, base + pin);
 -			generic_handle_irq(irq);
 -		}
 +		for_each_set_bit(pin, &pending, 32)
 +			generic_handle_domain_irq(lg->chip.irq.domain, base + pin);
  	}
  	chip->irq_eoi(data);
  }
+2 -3
drivers/pinctrl/mediatek/mtk-eint.c
···
  	struct irq_chip *chip = irq_desc_get_chip(desc);
  	struct mtk_eint *eint = irq_desc_get_handler_data(desc);
  	unsigned int status, eint_num;
 -	int offset, mask_offset, index, virq;
 +	int offset, mask_offset, index;
  	void __iomem *reg = mtk_eint_get_offset(eint, 0, eint->regs->stat);
  	int dual_edge, start_level, curr_level;
 
···
  			offset = __ffs(status);
  			mask_offset = eint_num >> 5;
  			index = eint_num + offset;
 -			virq = irq_find_mapping(eint->domain, index);
  			status &= ~BIT(offset);
 
  			/*
···
  					      index);
  			}
 
 -			generic_handle_irq(virq);
 +			generic_handle_domain_irq(eint->domain, index);
 
  			if (dual_edge) {
  				curr_level = mtk_eint_flip_edge(eint, index);
+1 -1
drivers/pinctrl/nomadik/pinctrl-nomadik.c
···
  	while (status) {
  		int bit = __ffs(status);
 
 -		generic_handle_irq(irq_find_mapping(chip->irq.domain, bit));
 +		generic_handle_domain_irq(chip->irq.domain, bit);
  		status &= ~BIT(bit);
  	}
 
+1 -1
drivers/pinctrl/nuvoton/pinctrl-npcm7xx.c
···
 
  		sts &= en;
  		for_each_set_bit(bit, (const void *)&sts, NPCM7XX_GPIO_PER_BANK)
 -			generic_handle_irq(irq_linear_revmap(gc->irq.domain, bit));
 +			generic_handle_domain_irq(gc->irq.domain, bit);
  	chained_irq_exit(chip, desc);
  }
+2 -4
drivers/pinctrl/pinctrl-amd.c
···
  		if (!(regval & PIN_IRQ_PENDING) ||
  		    !(regval & BIT(INTERRUPT_MASK_OFF)))
  			continue;
 -		irq = irq_find_mapping(gc->irq.domain, irqnr + i);
 -		if (irq != 0)
 -			generic_handle_irq(irq);
 +		generic_handle_domain_irq(gc->irq.domain, irqnr + i);
 
  		/* Clear interrupt.
  		 * We must read the pin register again, in case the
  		 * value was changed while executing
 -		 * generic_handle_irq() above.
 +		 * generic_handle_domain_irq() above.
  		 * If we didn't find a mapping for the interrupt,
  		 * disable it in order to avoid a system hang caused
  		 * by an interrupt storm.
+2 -4
drivers/pinctrl/pinctrl-at91.c
···
  			continue;
  		}
 
 -		for_each_set_bit(n, &isr, BITS_PER_LONG) {
 -			generic_handle_irq(irq_find_mapping(
 -					   gpio_chip->irq.domain, n));
 -		}
 +		for_each_set_bit(n, &isr, BITS_PER_LONG)
 +			generic_handle_domain_irq(gpio_chip->irq.domain, n);
  	}
  	chained_irq_exit(chip, desc);
  	/* now it may re-trigger */
+1 -1
drivers/pinctrl/pinctrl-equilibrium.c
···
  	pins = readl(gctrl->membase + GPIO_IRNCR);
 
  	for_each_set_bit(offset, &pins, gc->ngpio)
 -		generic_handle_irq(irq_find_mapping(gc->irq.domain, offset));
 +		generic_handle_domain_irq(gc->irq.domain, offset);
 
  	chained_irq_exit(ic, desc);
  }
+1 -1
drivers/pinctrl/pinctrl-ingenic.c
···
  		flag = ingenic_gpio_read_reg(jzgc, JZ4730_GPIO_GPFR);
 
  	for_each_set_bit(i, &flag, 32)
 -		generic_handle_irq(irq_linear_revmap(gc->irq.domain, i));
 +		generic_handle_domain_irq(gc->irq.domain, i);
  	chained_irq_exit(irq_chip, desc);
  }
+1 -1
drivers/pinctrl/pinctrl-microchip-sgpio.c
···
 
  	for_each_set_bit(port, &val, SGPIO_BITS_PER_WORD) {
  		gpio = sgpio_addr_to_pin(priv, port, bit);
 -		generic_handle_irq(irq_linear_revmap(chip->irq.domain, gpio));
 +		generic_handle_domain_irq(chip->irq.domain, gpio);
  	}
 
  	chained_irq_exit(parent_chip, desc);
+1 -2
drivers/pinctrl/pinctrl-ocelot.c
···
 
  	for_each_set_bit(irq, &irqs,
  			 min(32U, info->desc->npins - 32 * i))
 -		generic_handle_irq(irq_linear_revmap(chip->irq.domain,
 -						     irq + 32 * i));
 +		generic_handle_domain_irq(chip->irq.domain, irq + 32 * i);
 
  	chained_irq_exit(parent_chip, desc);
  }
+1 -1
drivers/pinctrl/pinctrl-oxnas.c
···
  	stat = readl(bank->reg_base + IRQ_PENDING);
 
  	for_each_set_bit(pin, &stat, BITS_PER_LONG)
 -		generic_handle_irq(irq_linear_revmap(gc->irq.domain, pin));
 +		generic_handle_domain_irq(gc->irq.domain, pin);
 
  	chained_irq_exit(chip, desc);
  }
+1 -1
drivers/pinctrl/pinctrl-pic32.c
···
  	pending = pic32_gpio_get_pending(gc, stat);
 
  	for_each_set_bit(pin, &pending, BITS_PER_LONG)
 -		generic_handle_irq(irq_linear_revmap(gc->irq.domain, pin));
 +		generic_handle_domain_irq(gc->irq.domain, pin);
 
  	chained_irq_exit(chip, desc);
  }
+1 -1
drivers/pinctrl/pinctrl-pistachio.c
···
  	pending = gpio_readl(bank, GPIO_INTERRUPT_STATUS) &
  		  gpio_readl(bank, GPIO_INTERRUPT_EN);
  	for_each_set_bit(pin, &pending, 16)
 -		generic_handle_irq(irq_linear_revmap(gc->irq.domain, pin));
 +		generic_handle_domain_irq(gc->irq.domain, pin);
  	chained_irq_exit(chip, desc);
  }
+18 -891
drivers/pinctrl/pinctrl-rockchip.c
···
  #include <linux/io.h>
  #include <linux/bitops.h>
  #include <linux/gpio/driver.h>
 -#include <linux/of_device.h>
  #include <linux/of_address.h>
 +#include <linux/of_device.h>
  #include <linux/of_irq.h>
  #include <linux/pinctrl/machine.h>
  #include <linux/pinctrl/pinconf.h>
···
 
  #include "core.h"
  #include "pinconf.h"
 -
 -/* GPIO control registers */
 -#define GPIO_SWPORT_DR		0x00
 -#define GPIO_SWPORT_DDR		0x04
 -#define GPIO_INTEN		0x30
 -#define GPIO_INTMASK		0x34
 -#define GPIO_INTTYPE_LEVEL	0x38
 -#define GPIO_INT_POLARITY	0x3c
 -#define GPIO_INT_STATUS		0x40
 -#define GPIO_INT_RAWSTATUS	0x44
 -#define GPIO_DEBOUNCE		0x48
 -#define GPIO_PORTS_EOI		0x4c
 -#define GPIO_EXT_PORT		0x50
 -#define GPIO_LS_SYNC		0x60
 -
 -enum rockchip_pinctrl_type {
 -	PX30,
 -	RV1108,
 -	RK2928,
 -	RK3066B,
 -	RK3128,
 -	RK3188,
 -	RK3288,
 -	RK3308,
 -	RK3368,
 -	RK3399,
 -	RK3568,
 -};
 -
 +#include "pinctrl-rockchip.h"
 
  /**
   * Generate a bitmask for setting a value (v) with a write mask bit in hiword
···
  #define IOMUX_UNROUTED		BIT(3)
  #define IOMUX_WIDTH_3BIT	BIT(4)
  #define IOMUX_WIDTH_2BIT	BIT(5)
 -
 -/**
 - * struct rockchip_iomux
 - * @type: iomux variant using IOMUX_* constants
 - * @offset: if initialized to -1 it will be autocalculated, by specifying
 - *	    an initial offset value the relevant source offset can be reset
 - *	    to a new value for autocalculating the following iomux registers.
 - */
 -struct rockchip_iomux {
 -	int type;
 -	int offset;
 -};
 -
 -/*
 - * enum type index corresponding to rockchip_perpin_drv_list arrays index.
 - */
 -enum rockchip_pin_drv_type {
 -	DRV_TYPE_IO_DEFAULT = 0,
 -	DRV_TYPE_IO_1V8_OR_3V0,
 -	DRV_TYPE_IO_1V8_ONLY,
 -	DRV_TYPE_IO_1V8_3V0_AUTO,
 -	DRV_TYPE_IO_3V3_ONLY,
 -	DRV_TYPE_MAX
 -};
 -
 -/*
 - * enum type index corresponding to rockchip_pull_list arrays index.
 - */
 -enum rockchip_pin_pull_type {
 -	PULL_TYPE_IO_DEFAULT = 0,
 -	PULL_TYPE_IO_1V8_ONLY,
 -	PULL_TYPE_MAX
 -};
 -
 -/**
 - * struct rockchip_drv
 - * @drv_type: drive strength variant using rockchip_perpin_drv_type
 - * @offset: if initialized to -1 it will be autocalculated, by specifying
 - *	    an initial offset value the relevant source offset can be reset
 - *	    to a new value for autocalculating the following drive strength
 - *	    registers. if used chips own cal_drv func instead to calculate
 - *	    registers offset, the variant could be ignored.
 - */
 -struct rockchip_drv {
 -	enum rockchip_pin_drv_type drv_type;
 -	int offset;
 -};
 -
 -/**
 - * struct rockchip_pin_bank
 - * @reg_base: register base of the gpio bank
 - * @regmap_pull: optional separate register for additional pull settings
 - * @clk: clock of the gpio bank
 - * @irq: interrupt of the gpio bank
 - * @saved_masks: Saved content of GPIO_INTEN at suspend time.
 - * @pin_base: first pin number
 - * @nr_pins: number of pins in this bank
 - * @name: name of the bank
 - * @bank_num: number of the bank, to account for holes
 - * @iomux: array describing the 4 iomux sources of the bank
 - * @drv: array describing the 4 drive strength sources of the bank
 - * @pull_type: array describing the 4 pull type sources of the bank
 - * @valid: is all necessary information present
 - * @of_node: dt node of this bank
 - * @drvdata: common pinctrl basedata
 - * @domain: irqdomain of the gpio bank
 - * @gpio_chip: gpiolib chip
 - * @grange: gpio range
 - * @slock: spinlock for the gpio bank
 - * @toggle_edge_mode: bit mask to toggle (falling/rising) edge mode
 - * @recalced_mask: bit mask to indicate a need to recalulate the mask
 - * @route_mask: bits describing the routing pins of per bank
 - */
 -struct rockchip_pin_bank {
 -	void __iomem *reg_base;
 -	struct regmap *regmap_pull;
 -	struct clk *clk;
 -	int irq;
 -	u32 saved_masks;
 -	u32 pin_base;
 -	u8 nr_pins;
 -	char *name;
 -	u8 bank_num;
 -	struct rockchip_iomux iomux[4];
 -	struct rockchip_drv drv[4];
 -	enum rockchip_pin_pull_type pull_type[4];
 -	bool valid;
 -	struct device_node *of_node;
 -	struct rockchip_pinctrl *drvdata;
 -	struct irq_domain *domain;
 -	struct gpio_chip gpio_chip;
 -	struct pinctrl_gpio_range grange;
 -	raw_spinlock_t slock;
 -	u32 toggle_edge_mode;
 -	u32 recalced_mask;
 -	u32 route_mask;
 -};
 
  #define PIN_BANK(id, pins, label)			\
  	{						\
···
 
  #define RK_MUXROUTE_PMU(ID, PIN, FUNC, REG, VAL)	\
  	PIN_BANK_MUX_ROUTE_FLAGS(ID, PIN, FUNC, REG, VAL, ROCKCHIP_ROUTE_PMU)
 -
 -/**
 - * struct rockchip_mux_recalced_data: represent a pin iomux data.
 - * @num: bank number.
 - * @pin: pin number.
 - * @bit: index at register.
 - * @reg: register offset.
 - * @mask: mask bit
 - */
 -struct rockchip_mux_recalced_data {
 -	u8 num;
 -	u8 pin;
 -	u32 reg;
 -	u8 bit;
 -	u8 mask;
 -};
 -
 -enum rockchip_mux_route_location {
 -	ROCKCHIP_ROUTE_SAME = 0,
 -	ROCKCHIP_ROUTE_PMU,
 -	ROCKCHIP_ROUTE_GRF,
 -};
 -
 -/**
 - * struct rockchip_mux_recalced_data: represent a pin iomux data.
 - * @bank_num: bank number.
 - * @pin: index at register or used to calc index.
 - * @func: the min pin.
 - * @route_location: the mux route location (same, pmu, grf).
 - * @route_offset: the max pin.
 - * @route_val: the register offset.
 - */
 -struct rockchip_mux_route_data {
 -	u8 bank_num;
 -	u8 pin;
 -	u8 func;
 -	enum rockchip_mux_route_location route_location;
 -	u32 route_offset;
 -	u32 route_val;
 -};
 -
 -struct rockchip_pin_ctrl {
 -	struct rockchip_pin_bank	*pin_banks;
 -	u32				nr_banks;
 -	u32				nr_pins;
 -	char				*label;
 -	enum rockchip_pinctrl_type	type;
 -	int				grf_mux_offset;
 -	int				pmu_mux_offset;
 -	int				grf_drv_offset;
 -	int				pmu_drv_offset;
 -	struct rockchip_mux_recalced_data *iomux_recalced;
 -	u32 niomux_recalced;
 -	struct rockchip_mux_route_data *iomux_routes;
 -	u32 niomux_routes;
 -
 -	void	(*pull_calc_reg)(struct rockchip_pin_bank *bank,
 -				 int pin_num, struct regmap **regmap,
 -				 int *reg, u8 *bit);
 -	void	(*drv_calc_reg)(struct rockchip_pin_bank *bank,
 -				 int pin_num, struct regmap **regmap,
 -				 int *reg, u8 *bit);
 -	int	(*schmitt_calc_reg)(struct rockchip_pin_bank *bank,
 -				    int pin_num, struct regmap **regmap,
 -				    int *reg, u8 *bit);
 -};
 -
 -struct rockchip_pin_config {
 -	unsigned int		func;
 -	unsigned long		*configs;
 -	unsigned int		nconfigs;
 -};
 -
 -/**
 - * struct rockchip_pin_group: represent group of pins of a pinmux function.
 - * @name: name of the pin group, used to lookup the group.
 - * @pins: the pins included in this group.
 - * @npins: number of pins included in this group.
 - * @data: local pin configuration
 - */
 -struct rockchip_pin_group {
 -	const char			*name;
 -	unsigned int			npins;
 -	unsigned int			*pins;
 -	struct rockchip_pin_config	*data;
 -};
 -
 -/**
 - * struct rockchip_pmx_func: represent a pin function.
 - * @name: name of the pin function, used to lookup the function.
 - * @groups: one or more names of pin groups that provide this function.
 - * @ngroups: number of groups included in @groups.
 - */
 -struct rockchip_pmx_func {
 -	const char		*name;
 -	const char		**groups;
 -	u8			ngroups;
 -};
 -
 -struct rockchip_pinctrl {
 -	struct regmap			*regmap_base;
 -	int				reg_size;
 -	struct regmap			*regmap_pull;
 -	struct regmap			*regmap_pmu;
 -	struct device			*dev;
 -	struct rockchip_pin_ctrl	*ctrl;
 -	struct pinctrl_desc		pctl;
 -	struct pinctrl_dev		*pctl_dev;
 -	struct rockchip_pin_group	*groups;
 -	unsigned int			ngroups;
 -	struct rockchip_pmx_func	*functions;
 -	unsigned int			nfunctions;
 -};
 
  static struct regmap_config rockchip_regmap_config = {
  	.reg_bits = 32,
···
  	return 0;
  }
 
 -static int rockchip_gpio_get_direction(struct gpio_chip *chip, unsigned offset)
 -{
 -	struct rockchip_pin_bank *bank = gpiochip_get_data(chip);
 -	u32 data;
 -	int ret;
 -
 -	ret = clk_enable(bank->clk);
 -	if (ret < 0) {
 -		dev_err(bank->drvdata->dev,
 -			"failed to enable clock for bank %s\n", bank->name);
 -		return ret;
 -	}
 -	data = readl_relaxed(bank->reg_base + GPIO_SWPORT_DDR);
 -	clk_disable(bank->clk);
 -
 -	if (data & BIT(offset))
 -		return GPIO_LINE_DIRECTION_OUT;
 -
 -	return GPIO_LINE_DIRECTION_IN;
 -}
 -
 -/*
 - * The calls to gpio_direction_output() and gpio_direction_input()
 - * leads to this function call (via the pinctrl_gpio_direction_{input|output}()
 - * function called from the gpiolib interface).
 - */
 -static int _rockchip_pmx_gpio_set_direction(struct gpio_chip *chip,
 -					    int pin, bool input)
 -{
 -	struct rockchip_pin_bank *bank;
 -	int ret;
 -	unsigned long flags;
 -	u32 data;
 -
 -	bank = gpiochip_get_data(chip);
 -
 -	ret = rockchip_set_mux(bank, pin, RK_FUNC_GPIO);
 -	if (ret < 0)
 -		return ret;
 -
 -	clk_enable(bank->clk);
 -	raw_spin_lock_irqsave(&bank->slock, flags);
 -
 -	data = readl_relaxed(bank->reg_base + GPIO_SWPORT_DDR);
 -	/* set bit to 1 for output, 0 for input */
 -	if (!input)
 -		data |= BIT(pin);
 -	else
 -		data &= ~BIT(pin);
 -	writel_relaxed(data, bank->reg_base + GPIO_SWPORT_DDR);
 -
 -	raw_spin_unlock_irqrestore(&bank->slock, flags);
 -	clk_disable(bank->clk);
 -
 -	return 0;
 -}
 -
 -static int rockchip_pmx_gpio_set_direction(struct pinctrl_dev *pctldev,
 -					   struct pinctrl_gpio_range *range,
 -					   unsigned offset, bool input)
 -{
 -	struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
 -	struct gpio_chip *chip;
 -	int pin;
 -
 -	chip = range->gc;
 -	pin = offset - chip->base;
 -	dev_dbg(info->dev, "gpio_direction for pin %u as %s-%d to %s\n",
 -		offset, range->name, pin, input ? "input" : "output");
 -
 -	return _rockchip_pmx_gpio_set_direction(chip, offset - chip->base,
 -						input);
 -}
 -
  static const struct pinmux_ops rockchip_pmx_ops = {
  	.get_functions_count	= rockchip_pmx_get_funcs_count,
  	.get_function_name	= rockchip_pmx_get_func_name,
  	.get_function_groups	= rockchip_pmx_get_groups,
  	.set_mux		= rockchip_pmx_set,
 -	.gpio_set_direction	= rockchip_pmx_gpio_set_direction,
  };
 
  /*
···
  	return false;
  }
 
 -static void rockchip_gpio_set(struct gpio_chip *gc, unsigned offset, int value);
 -static int rockchip_gpio_get(struct gpio_chip *gc, unsigned offset);
 -
  /* set the pin config settings for a specified pin */
  static int rockchip_pinconf_set(struct pinctrl_dev *pctldev, unsigned int pin,
  				unsigned long *configs, unsigned num_configs)
  {
  	struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
  	struct rockchip_pin_bank *bank = pin_to_bank(info, pin);
 +	struct gpio_chip *gpio = &bank->gpio_chip;
  	enum pin_config_param param;
  	u32 arg;
  	int i;
···
  			return rc;
  		break;
  	case PIN_CONFIG_OUTPUT:
 -		rockchip_gpio_set(&bank->gpio_chip,
 -				  pin - bank->pin_base, arg);
 -		rc = _rockchip_pmx_gpio_set_direction(&bank->gpio_chip,
 -					  pin - bank->pin_base, false);
 +		rc = rockchip_set_mux(bank, pin - bank->pin_base,
 +				      RK_FUNC_GPIO);
 +		if (rc != RK_FUNC_GPIO)
 +			return -EINVAL;
 +
 +		rc = gpio->direction_output(gpio, pin - bank->pin_base,
 +					    arg);
  		if (rc)
  			return rc;
  		break;
···
  {
  	struct rockchip_pinctrl *info = pinctrl_dev_get_drvdata(pctldev);
  	struct rockchip_pin_bank *bank = pin_to_bank(info, pin);
 +	struct gpio_chip *gpio = &bank->gpio_chip;
  	enum pin_config_param param = pinconf_to_config_param(*config);
  	u16 arg;
  	int rc;
···
  		if (rc != RK_FUNC_GPIO)
  			return -EINVAL;
 
 -		rc = rockchip_gpio_get(&bank->gpio_chip, pin - bank->pin_base);
 +		rc = gpio->get(gpio, pin - bank->pin_base);
  		if (rc < 0)
  			return rc;
···
  	ctrldesc->npins = info->ctrl->nr_pins;
 
  	pdesc = pindesc;
 -	for (bank = 0 , k = 0; bank < info->ctrl->nr_banks; bank++) {
 +	for (bank = 0, k = 0; bank < info->ctrl->nr_banks; bank++) {
  		pin_bank = &info->ctrl->pin_banks[bank];
  		for (pin = 0; pin < pin_bank->nr_pins; pin++, k++) {
  			pdesc->number = k;
···
  		return PTR_ERR(info->pctl_dev);
  	}
 
 -	for (bank = 0; bank < info->ctrl->nr_banks; ++bank) {
 -		pin_bank = &info->ctrl->pin_banks[bank];
 -		pin_bank->grange.name = pin_bank->name;
 -		pin_bank->grange.id = bank;
 -		pin_bank->grange.pin_base = pin_bank->pin_base;
 -		pin_bank->grange.base = pin_bank->gpio_chip.base;
 -		pin_bank->grange.npins = pin_bank->gpio_chip.ngpio;
 -		pin_bank->grange.gc = &pin_bank->gpio_chip;
 -		pinctrl_add_gpio_range(info->pctl_dev, &pin_bank->grange);
 -	}
 -
  	return 0;
 -}
 -
 -/*
 - * GPIO handling
 - */
 -
 -static void rockchip_gpio_set(struct gpio_chip *gc, unsigned offset, int value)
 -{
 -	struct rockchip_pin_bank *bank = gpiochip_get_data(gc);
 -	void __iomem *reg = bank->reg_base + GPIO_SWPORT_DR;
 -	unsigned long flags;
 -	u32 data;
 -
 -	clk_enable(bank->clk);
 -	raw_spin_lock_irqsave(&bank->slock, flags);
 -
 -	data = readl(reg);
 -	data &= ~BIT(offset);
 -	if (value)
 -		data |= BIT(offset);
 -	writel(data, reg);
 -
 -	raw_spin_unlock_irqrestore(&bank->slock, flags);
 -	clk_disable(bank->clk);
 -}
 -
 -/*
 - * Returns the level of the pin for input direction and setting of the DR
 - * register for output gpios.
 - */
 -static int rockchip_gpio_get(struct gpio_chip *gc, unsigned offset)
 -{
 -	struct rockchip_pin_bank *bank = gpiochip_get_data(gc);
 -	u32 data;
 -
 -	clk_enable(bank->clk);
 -	data = readl(bank->reg_base + GPIO_EXT_PORT);
 -	clk_disable(bank->clk);
 -	data >>= offset;
 -	data &= 1;
 -	return data;
 -}
 -
 -/*
 - * gpiolib gpio_direction_input callback function. The setting of the pin
 - * mux function as 'gpio input' will be handled by the pinctrl subsystem
 - * interface.
 - */
 -static int rockchip_gpio_direction_input(struct gpio_chip *gc, unsigned offset)
 -{
 -	return pinctrl_gpio_direction_input(gc->base + offset);
 -}
 -
 -/*
 - * gpiolib gpio_direction_output callback function. The setting of the pin
 - * mux function as 'gpio output' will be handled by the pinctrl subsystem
 - * interface.
 - */
 -static int rockchip_gpio_direction_output(struct gpio_chip *gc,
 -					  unsigned offset, int value)
 -{
 -	rockchip_gpio_set(gc, offset, value);
 -	return pinctrl_gpio_direction_output(gc->base + offset);
 -}
 -
 -static void rockchip_gpio_set_debounce(struct gpio_chip *gc,
 -				       unsigned int offset, bool enable)
 -{
 -	struct rockchip_pin_bank *bank = gpiochip_get_data(gc);
 -	void __iomem *reg = bank->reg_base + GPIO_DEBOUNCE;
 -	unsigned long flags;
 -	u32 data;
 -
 -	clk_enable(bank->clk);
 -	raw_spin_lock_irqsave(&bank->slock, flags);
 -
 -	data = readl(reg);
 -	if (enable)
 -		data |= BIT(offset);
 -	else
 -		data &= ~BIT(offset);
 -	writel(data, reg);
 -
 -	raw_spin_unlock_irqrestore(&bank->slock, flags);
 -	clk_disable(bank->clk);
 -}
 -
 -/*
 - * gpiolib set_config callback function. The setting of the pin
 - * mux function as 'gpio output' will be handled by the pinctrl subsystem
 - * interface.
 - */
 -static int rockchip_gpio_set_config(struct gpio_chip *gc, unsigned int offset,
 -				    unsigned long config)
 -{
 -	enum pin_config_param param = pinconf_to_config_param(config);
 -
 -	switch (param) {
 -	case PIN_CONFIG_INPUT_DEBOUNCE:
 -		rockchip_gpio_set_debounce(gc, offset, true);
 -		/*
 -		 * Rockchip's gpio could only support up to one period
 -		 * of the debounce clock(pclk), which is far away from
 -		 * satisftying the requirement, as pclk is usually near
 -		 * 100MHz shared by all peripherals. So the fact is it
 -		 * has crippled debounce capability could only be useful
 -		 * to prevent any spurious glitches from waking up the system
 -		 * if the gpio is conguired as wakeup interrupt source. Let's
 -		 * still return -ENOTSUPP as before, to make sure the caller
 -		 * of gpiod_set_debounce won't change its behaviour.
 -		 */
 -		return -ENOTSUPP;
 -	default:
 -		return -ENOTSUPP;
 -	}
 -}
 -
 -/*
 - * gpiolib gpio_to_irq callback function. Creates a mapping between a GPIO pin
 - * and a virtual IRQ, if not already present.
 - */
 -static int rockchip_gpio_to_irq(struct gpio_chip *gc, unsigned offset)
 -{
 -	struct rockchip_pin_bank *bank = gpiochip_get_data(gc);
 -	unsigned int virq;
 -
 -	if (!bank->domain)
 -		return -ENXIO;
 -
 -	clk_enable(bank->clk);
 -	virq = irq_create_mapping(bank->domain, offset);
 -	clk_disable(bank->clk);
 -
 -	return (virq) ?
: -ENXIO; 2611 - } 2612 - 2613 - static const struct gpio_chip rockchip_gpiolib_chip = { 2614 - .request = gpiochip_generic_request, 2615 - .free = gpiochip_generic_free, 2616 - .set = rockchip_gpio_set, 2617 - .get = rockchip_gpio_get, 2618 - .get_direction = rockchip_gpio_get_direction, 2619 - .direction_input = rockchip_gpio_direction_input, 2620 - .direction_output = rockchip_gpio_direction_output, 2621 - .set_config = rockchip_gpio_set_config, 2622 - .to_irq = rockchip_gpio_to_irq, 2623 - .owner = THIS_MODULE, 2624 - }; 2625 - 2626 - /* 2627 - * Interrupt handling 2628 - */ 2629 - 2630 - static void rockchip_irq_demux(struct irq_desc *desc) 2631 - { 2632 - struct irq_chip *chip = irq_desc_get_chip(desc); 2633 - struct rockchip_pin_bank *bank = irq_desc_get_handler_data(desc); 2634 - u32 pend; 2635 - 2636 - dev_dbg(bank->drvdata->dev, "got irq for bank %s\n", bank->name); 2637 - 2638 - chained_irq_enter(chip, desc); 2639 - 2640 - pend = readl_relaxed(bank->reg_base + GPIO_INT_STATUS); 2641 - 2642 - while (pend) { 2643 - unsigned int irq, virq; 2644 - 2645 - irq = __ffs(pend); 2646 - pend &= ~BIT(irq); 2647 - virq = irq_find_mapping(bank->domain, irq); 2648 - 2649 - if (!virq) { 2650 - dev_err(bank->drvdata->dev, "unmapped irq %d\n", irq); 2651 - continue; 2652 - } 2653 - 2654 - dev_dbg(bank->drvdata->dev, "handling irq %d\n", irq); 2655 - 2656 - /* 2657 - * Triggering IRQ on both rising and falling edge 2658 - * needs manual intervention. 
2659 - */ 2660 - if (bank->toggle_edge_mode & BIT(irq)) { 2661 - u32 data, data_old, polarity; 2662 - unsigned long flags; 2663 - 2664 - data = readl_relaxed(bank->reg_base + GPIO_EXT_PORT); 2665 - do { 2666 - raw_spin_lock_irqsave(&bank->slock, flags); 2667 - 2668 - polarity = readl_relaxed(bank->reg_base + 2669 - GPIO_INT_POLARITY); 2670 - if (data & BIT(irq)) 2671 - polarity &= ~BIT(irq); 2672 - else 2673 - polarity |= BIT(irq); 2674 - writel(polarity, 2675 - bank->reg_base + GPIO_INT_POLARITY); 2676 - 2677 - raw_spin_unlock_irqrestore(&bank->slock, flags); 2678 - 2679 - data_old = data; 2680 - data = readl_relaxed(bank->reg_base + 2681 - GPIO_EXT_PORT); 2682 - } while ((data & BIT(irq)) != (data_old & BIT(irq))); 2683 - } 2684 - 2685 - generic_handle_irq(virq); 2686 - } 2687 - 2688 - chained_irq_exit(chip, desc); 2689 - } 2690 - 2691 - static int rockchip_irq_set_type(struct irq_data *d, unsigned int type) 2692 - { 2693 - struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 2694 - struct rockchip_pin_bank *bank = gc->private; 2695 - u32 mask = BIT(d->hwirq); 2696 - u32 polarity; 2697 - u32 level; 2698 - u32 data; 2699 - unsigned long flags; 2700 - int ret; 2701 - 2702 - /* make sure the pin is configured as gpio input */ 2703 - ret = rockchip_set_mux(bank, d->hwirq, RK_FUNC_GPIO); 2704 - if (ret < 0) 2705 - return ret; 2706 - 2707 - clk_enable(bank->clk); 2708 - raw_spin_lock_irqsave(&bank->slock, flags); 2709 - 2710 - data = readl_relaxed(bank->reg_base + GPIO_SWPORT_DDR); 2711 - data &= ~mask; 2712 - writel_relaxed(data, bank->reg_base + GPIO_SWPORT_DDR); 2713 - 2714 - raw_spin_unlock_irqrestore(&bank->slock, flags); 2715 - 2716 - if (type & IRQ_TYPE_EDGE_BOTH) 2717 - irq_set_handler_locked(d, handle_edge_irq); 2718 - else 2719 - irq_set_handler_locked(d, handle_level_irq); 2720 - 2721 - raw_spin_lock_irqsave(&bank->slock, flags); 2722 - irq_gc_lock(gc); 2723 - 2724 - level = readl_relaxed(gc->reg_base + GPIO_INTTYPE_LEVEL); 2725 - polarity = 
readl_relaxed(gc->reg_base + GPIO_INT_POLARITY); 2726 - 2727 - switch (type) { 2728 - case IRQ_TYPE_EDGE_BOTH: 2729 - bank->toggle_edge_mode |= mask; 2730 - level |= mask; 2731 - 2732 - /* 2733 - * Determine gpio state. If 1 next interrupt should be falling 2734 - * otherwise rising. 2735 - */ 2736 - data = readl(bank->reg_base + GPIO_EXT_PORT); 2737 - if (data & mask) 2738 - polarity &= ~mask; 2739 - else 2740 - polarity |= mask; 2741 - break; 2742 - case IRQ_TYPE_EDGE_RISING: 2743 - bank->toggle_edge_mode &= ~mask; 2744 - level |= mask; 2745 - polarity |= mask; 2746 - break; 2747 - case IRQ_TYPE_EDGE_FALLING: 2748 - bank->toggle_edge_mode &= ~mask; 2749 - level |= mask; 2750 - polarity &= ~mask; 2751 - break; 2752 - case IRQ_TYPE_LEVEL_HIGH: 2753 - bank->toggle_edge_mode &= ~mask; 2754 - level &= ~mask; 2755 - polarity |= mask; 2756 - break; 2757 - case IRQ_TYPE_LEVEL_LOW: 2758 - bank->toggle_edge_mode &= ~mask; 2759 - level &= ~mask; 2760 - polarity &= ~mask; 2761 - break; 2762 - default: 2763 - irq_gc_unlock(gc); 2764 - raw_spin_unlock_irqrestore(&bank->slock, flags); 2765 - clk_disable(bank->clk); 2766 - return -EINVAL; 2767 - } 2768 - 2769 - writel_relaxed(level, gc->reg_base + GPIO_INTTYPE_LEVEL); 2770 - writel_relaxed(polarity, gc->reg_base + GPIO_INT_POLARITY); 2771 - 2772 - irq_gc_unlock(gc); 2773 - raw_spin_unlock_irqrestore(&bank->slock, flags); 2774 - clk_disable(bank->clk); 2775 - 2776 - return 0; 2777 - } 2778 - 2779 - static void rockchip_irq_suspend(struct irq_data *d) 2780 - { 2781 - struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 2782 - struct rockchip_pin_bank *bank = gc->private; 2783 - 2784 - clk_enable(bank->clk); 2785 - bank->saved_masks = irq_reg_readl(gc, GPIO_INTMASK); 2786 - irq_reg_writel(gc, ~gc->wake_active, GPIO_INTMASK); 2787 - clk_disable(bank->clk); 2788 - } 2789 - 2790 - static void rockchip_irq_resume(struct irq_data *d) 2791 - { 2792 - struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 2793 - struct 
rockchip_pin_bank *bank = gc->private; 2794 - 2795 - clk_enable(bank->clk); 2796 - irq_reg_writel(gc, bank->saved_masks, GPIO_INTMASK); 2797 - clk_disable(bank->clk); 2798 - } 2799 - 2800 - static void rockchip_irq_enable(struct irq_data *d) 2801 - { 2802 - struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 2803 - struct rockchip_pin_bank *bank = gc->private; 2804 - 2805 - clk_enable(bank->clk); 2806 - irq_gc_mask_clr_bit(d); 2807 - } 2808 - 2809 - static void rockchip_irq_disable(struct irq_data *d) 2810 - { 2811 - struct irq_chip_generic *gc = irq_data_get_irq_chip_data(d); 2812 - struct rockchip_pin_bank *bank = gc->private; 2813 - 2814 - irq_gc_mask_set_bit(d); 2815 - clk_disable(bank->clk); 2816 - } 2817 - 2818 - static int rockchip_interrupts_register(struct platform_device *pdev, 2819 - struct rockchip_pinctrl *info) 2820 - { 2821 - struct rockchip_pin_ctrl *ctrl = info->ctrl; 2822 - struct rockchip_pin_bank *bank = ctrl->pin_banks; 2823 - unsigned int clr = IRQ_NOREQUEST | IRQ_NOPROBE | IRQ_NOAUTOEN; 2824 - struct irq_chip_generic *gc; 2825 - int ret; 2826 - int i; 2827 - 2828 - for (i = 0; i < ctrl->nr_banks; ++i, ++bank) { 2829 - if (!bank->valid) { 2830 - dev_warn(&pdev->dev, "bank %s is not valid\n", 2831 - bank->name); 2832 - continue; 2833 - } 2834 - 2835 - ret = clk_enable(bank->clk); 2836 - if (ret) { 2837 - dev_err(&pdev->dev, "failed to enable clock for bank %s\n", 2838 - bank->name); 2839 - continue; 2840 - } 2841 - 2842 - bank->domain = irq_domain_add_linear(bank->of_node, 32, 2843 - &irq_generic_chip_ops, NULL); 2844 - if (!bank->domain) { 2845 - dev_warn(&pdev->dev, "could not initialize irq domain for bank %s\n", 2846 - bank->name); 2847 - clk_disable(bank->clk); 2848 - continue; 2849 - } 2850 - 2851 - ret = irq_alloc_domain_generic_chips(bank->domain, 32, 1, 2852 - "rockchip_gpio_irq", handle_level_irq, 2853 - clr, 0, 0); 2854 - if (ret) { 2855 - dev_err(&pdev->dev, "could not alloc generic chips for bank %s\n", 2856 - bank->name); 
2857 - irq_domain_remove(bank->domain); 2858 - clk_disable(bank->clk); 2859 - continue; 2860 - } 2861 - 2862 - gc = irq_get_domain_generic_chip(bank->domain, 0); 2863 - gc->reg_base = bank->reg_base; 2864 - gc->private = bank; 2865 - gc->chip_types[0].regs.mask = GPIO_INTMASK; 2866 - gc->chip_types[0].regs.ack = GPIO_PORTS_EOI; 2867 - gc->chip_types[0].chip.irq_ack = irq_gc_ack_set_bit; 2868 - gc->chip_types[0].chip.irq_mask = irq_gc_mask_set_bit; 2869 - gc->chip_types[0].chip.irq_unmask = irq_gc_mask_clr_bit; 2870 - gc->chip_types[0].chip.irq_enable = rockchip_irq_enable; 2871 - gc->chip_types[0].chip.irq_disable = rockchip_irq_disable; 2872 - gc->chip_types[0].chip.irq_set_wake = irq_gc_set_wake; 2873 - gc->chip_types[0].chip.irq_suspend = rockchip_irq_suspend; 2874 - gc->chip_types[0].chip.irq_resume = rockchip_irq_resume; 2875 - gc->chip_types[0].chip.irq_set_type = rockchip_irq_set_type; 2876 - gc->wake_enabled = IRQ_MSK(bank->nr_pins); 2877 - 2878 - /* 2879 - * Linux assumes that all interrupts start out disabled/masked. 2880 - * Our driver only uses the concept of masked and always keeps 2881 - * things enabled, so for us that's all masked and all enabled. 
2882 - */ 2883 - writel_relaxed(0xffffffff, bank->reg_base + GPIO_INTMASK); 2884 - writel_relaxed(0xffffffff, bank->reg_base + GPIO_PORTS_EOI); 2885 - writel_relaxed(0xffffffff, bank->reg_base + GPIO_INTEN); 2886 - gc->mask_cache = 0xffffffff; 2887 - 2888 - irq_set_chained_handler_and_data(bank->irq, 2889 - rockchip_irq_demux, bank); 2890 - clk_disable(bank->clk); 2891 - } 2892 - 2893 - return 0; 2894 - } 2895 - 2896 - static int rockchip_gpiolib_register(struct platform_device *pdev, 2897 - struct rockchip_pinctrl *info) 2898 - { 2899 - struct rockchip_pin_ctrl *ctrl = info->ctrl; 2900 - struct rockchip_pin_bank *bank = ctrl->pin_banks; 2901 - struct gpio_chip *gc; 2902 - int ret; 2903 - int i; 2904 - 2905 - for (i = 0; i < ctrl->nr_banks; ++i, ++bank) { 2906 - if (!bank->valid) { 2907 - dev_warn(&pdev->dev, "bank %s is not valid\n", 2908 - bank->name); 2909 - continue; 2910 - } 2911 - 2912 - bank->gpio_chip = rockchip_gpiolib_chip; 2913 - 2914 - gc = &bank->gpio_chip; 2915 - gc->base = bank->pin_base; 2916 - gc->ngpio = bank->nr_pins; 2917 - gc->parent = &pdev->dev; 2918 - gc->of_node = bank->of_node; 2919 - gc->label = bank->name; 2920 - 2921 - ret = gpiochip_add_data(gc, bank); 2922 - if (ret) { 2923 - dev_err(&pdev->dev, "failed to register gpio_chip %s, error code: %d\n", 2924 - gc->label, ret); 2925 - goto fail; 2926 - } 2927 - } 2928 - 2929 - rockchip_interrupts_register(pdev, info); 2930 - 2931 - return 0; 2932 - 2933 - fail: 2934 - for (--i, --bank; i >= 0; --i, --bank) { 2935 - if (!bank->valid) 2936 - continue; 2937 - gpiochip_remove(&bank->gpio_chip); 2938 - } 2939 - return ret; 2940 - } 2941 - 2942 - static int rockchip_gpiolib_unregister(struct platform_device *pdev, 2943 - struct rockchip_pinctrl *info) 2944 - { 2945 - struct rockchip_pin_ctrl *ctrl = info->ctrl; 2946 - struct rockchip_pin_bank *bank = ctrl->pin_banks; 2947 - int i; 2948 - 2949 - for (i = 0; i < ctrl->nr_banks; ++i, ++bank) { 2950 - if (!bank->valid) 2951 - continue; 2952 - 
gpiochip_remove(&bank->gpio_chip); 2953 - } 2954 - 2955 - return 0; 2956 - } 2957 - 2958 - static int rockchip_get_bank_data(struct rockchip_pin_bank *bank, 2959 - struct rockchip_pinctrl *info) 2960 - { 2961 - struct resource res; 2962 - void __iomem *base; 2963 - 2964 - if (of_address_to_resource(bank->of_node, 0, &res)) { 2965 - dev_err(info->dev, "cannot find IO resource for bank\n"); 2966 - return -ENOENT; 2967 - } 2968 - 2969 - bank->reg_base = devm_ioremap_resource(info->dev, &res); 2970 - if (IS_ERR(bank->reg_base)) 2971 - return PTR_ERR(bank->reg_base); 2972 - 2973 - /* 2974 - * special case, where parts of the pull setting-registers are 2975 - * part of the PMU register space 2976 - */ 2977 - if (of_device_is_compatible(bank->of_node, 2978 - "rockchip,rk3188-gpio-bank0")) { 2979 - struct device_node *node; 2980 - 2981 - node = of_parse_phandle(bank->of_node->parent, 2982 - "rockchip,pmu", 0); 2983 - if (!node) { 2984 - if (of_address_to_resource(bank->of_node, 1, &res)) { 2985 - dev_err(info->dev, "cannot find IO resource for bank\n"); 2986 - return -ENOENT; 2987 - } 2988 - 2989 - base = devm_ioremap_resource(info->dev, &res); 2990 - if (IS_ERR(base)) 2991 - return PTR_ERR(base); 2992 - rockchip_regmap_config.max_register = 2993 - resource_size(&res) - 4; 2994 - rockchip_regmap_config.name = 2995 - "rockchip,rk3188-gpio-bank0-pull"; 2996 - bank->regmap_pull = devm_regmap_init_mmio(info->dev, 2997 - base, 2998 - &rockchip_regmap_config); 2999 - } 3000 - of_node_put(node); 3001 - } 3002 - 3003 - bank->irq = irq_of_parse_and_map(bank->of_node, 0); 3004 - 3005 - bank->clk = of_clk_get(bank->of_node, 0); 3006 - if (IS_ERR(bank->clk)) 3007 - return PTR_ERR(bank->clk); 3008 - 3009 - return clk_prepare(bank->clk); 3010 2777 } 3011 2778 3012 2779 static const struct of_device_id rockchip_pinctrl_dt_match[]; ··· 2474 3329 { 2475 3330 const struct of_device_id *match; 2476 3331 struct device_node *node = pdev->dev.of_node; 2477 - struct device_node *np; 2478 3332 
struct rockchip_pin_ctrl *ctrl; 2479 3333 struct rockchip_pin_bank *bank; 2480 3334 int grf_offs, pmu_offs, drv_grf_offs, drv_pmu_offs, i, j; 2481 3335 2482 3336 match = of_match_node(rockchip_pinctrl_dt_match, node); 2483 3337 ctrl = (struct rockchip_pin_ctrl *)match->data; 2484 - 2485 - for_each_child_of_node(node, np) { 2486 - if (!of_find_property(np, "gpio-controller", NULL)) 2487 - continue; 2488 - 2489 - bank = ctrl->pin_banks; 2490 - for (i = 0; i < ctrl->nr_banks; ++i, ++bank) { 2491 - if (!strcmp(bank->name, np->name)) { 2492 - bank->of_node = np; 2493 - 2494 - if (!rockchip_get_bank_data(bank, d)) 2495 - bank->valid = true; 2496 - 2497 - break; 2498 - } 2499 - } 2500 - } 2501 3338 2502 3339 grf_offs = ctrl->grf_mux_offset; 2503 3340 pmu_offs = ctrl->pmu_mux_offset; ··· 2701 3574 return PTR_ERR(info->regmap_pmu); 2702 3575 } 2703 3576 2704 - ret = rockchip_gpiolib_register(pdev, info); 3577 + ret = rockchip_pinctrl_register(pdev, info); 2705 3578 if (ret) 2706 3579 return ret; 2707 3580 2708 - ret = rockchip_pinctrl_register(pdev, info); 3581 + platform_set_drvdata(pdev, info); 3582 + 3583 + ret = of_platform_populate(np, rockchip_bank_match, NULL, NULL); 2709 3584 if (ret) { 2710 - rockchip_gpiolib_unregister(pdev, info); 3585 + dev_err(&pdev->dev, "failed to register gpio device\n"); 2711 3586 return ret; 2712 3587 } 2713 - 2714 - platform_set_drvdata(pdev, info); 2715 3588 2716 3589 return 0; 2717 3590 }
+287
drivers/pinctrl/pinctrl-rockchip.h
··· 1 + /* SPDX-License-Identifier: GPL-2.0-only */ 2 + /* 3 + * Copyright (c) 2020-2021 Rockchip Electronics Co. Ltd. 4 + * 5 + * Copyright (c) 2013 MundoReader S.L. 6 + * Author: Heiko Stuebner <heiko@sntech.de> 7 + * 8 + * With some ideas taken from pinctrl-samsung: 9 + * Copyright (c) 2012 Samsung Electronics Co., Ltd. 10 + * http://www.samsung.com 11 + * Copyright (c) 2012 Linaro Ltd 12 + * https://www.linaro.org 13 + * 14 + * and pinctrl-at91: 15 + * Copyright (C) 2011-2012 Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> 16 + */ 17 + 18 + #ifndef _PINCTRL_ROCKCHIP_H 19 + #define _PINCTRL_ROCKCHIP_H 20 + 21 + enum rockchip_pinctrl_type { 22 + PX30, 23 + RV1108, 24 + RK2928, 25 + RK3066B, 26 + RK3128, 27 + RK3188, 28 + RK3288, 29 + RK3308, 30 + RK3368, 31 + RK3399, 32 + RK3568, 33 + }; 34 + 35 + /** 36 + * struct rockchip_gpio_regs 37 + * @port_dr: data register 38 + * @port_ddr: data direction register 39 + * @int_en: interrupt enable 40 + * @int_mask: interrupt mask 41 + * @int_type: interrupt trigger type, such as high, low, edge trriger type. 
42 + * @int_polarity: interrupt polarity enable register 43 + * @int_bothedge: interrupt bothedge enable register 44 + * @int_status: interrupt status register 45 + * @int_rawstatus: int_status = int_rawstatus & int_mask 46 + * @debounce: enable debounce for interrupt signal 47 + * @dbclk_div_en: enable divider for debounce clock 48 + * @dbclk_div_con: setting for divider of debounce clock 49 + * @port_eoi: end of interrupt of the port 50 + * @ext_port: port data from external 51 + * @version_id: controller version register 52 + */ 53 + struct rockchip_gpio_regs { 54 + u32 port_dr; 55 + u32 port_ddr; 56 + u32 int_en; 57 + u32 int_mask; 58 + u32 int_type; 59 + u32 int_polarity; 60 + u32 int_bothedge; 61 + u32 int_status; 62 + u32 int_rawstatus; 63 + u32 debounce; 64 + u32 dbclk_div_en; 65 + u32 dbclk_div_con; 66 + u32 port_eoi; 67 + u32 ext_port; 68 + u32 version_id; 69 + }; 70 + 71 + /** 72 + * struct rockchip_iomux 73 + * @type: iomux variant using IOMUX_* constants 74 + * @offset: if initialized to -1 it will be autocalculated, by specifying 75 + * an initial offset value the relevant source offset can be reset 76 + * to a new value for autocalculating the following iomux registers. 77 + */ 78 + struct rockchip_iomux { 79 + int type; 80 + int offset; 81 + }; 82 + 83 + /* 84 + * enum type index corresponding to rockchip_perpin_drv_list arrays index. 85 + */ 86 + enum rockchip_pin_drv_type { 87 + DRV_TYPE_IO_DEFAULT = 0, 88 + DRV_TYPE_IO_1V8_OR_3V0, 89 + DRV_TYPE_IO_1V8_ONLY, 90 + DRV_TYPE_IO_1V8_3V0_AUTO, 91 + DRV_TYPE_IO_3V3_ONLY, 92 + DRV_TYPE_MAX 93 + }; 94 + 95 + /* 96 + * enum type index corresponding to rockchip_pull_list arrays index. 
97 + */ 98 + enum rockchip_pin_pull_type { 99 + PULL_TYPE_IO_DEFAULT = 0, 100 + PULL_TYPE_IO_1V8_ONLY, 101 + PULL_TYPE_MAX 102 + }; 103 + 104 + /** 105 + * struct rockchip_drv 106 + * @drv_type: drive strength variant using rockchip_perpin_drv_type 107 + * @offset: if initialized to -1 it will be autocalculated, by specifying 108 + * an initial offset value the relevant source offset can be reset 109 + * to a new value for autocalculating the following drive strength 110 + * registers. if used chips own cal_drv func instead to calculate 111 + * registers offset, the variant could be ignored. 112 + */ 113 + struct rockchip_drv { 114 + enum rockchip_pin_drv_type drv_type; 115 + int offset; 116 + }; 117 + 118 + /** 119 + * struct rockchip_pin_bank 120 + * @dev: the pinctrl device bind to the bank 121 + * @reg_base: register base of the gpio bank 122 + * @regmap_pull: optional separate register for additional pull settings 123 + * @clk: clock of the gpio bank 124 + * @db_clk: clock of the gpio debounce 125 + * @irq: interrupt of the gpio bank 126 + * @saved_masks: Saved content of GPIO_INTEN at suspend time. 
127 + * @pin_base: first pin number 128 + * @nr_pins: number of pins in this bank 129 + * @name: name of the bank 130 + * @bank_num: number of the bank, to account for holes 131 + * @iomux: array describing the 4 iomux sources of the bank 132 + * @drv: array describing the 4 drive strength sources of the bank 133 + * @pull_type: array describing the 4 pull type sources of the bank 134 + * @valid: is all necessary information present 135 + * @of_node: dt node of this bank 136 + * @drvdata: common pinctrl basedata 137 + * @domain: irqdomain of the gpio bank 138 + * @gpio_chip: gpiolib chip 139 + * @grange: gpio range 140 + * @slock: spinlock for the gpio bank 141 + * @toggle_edge_mode: bit mask to toggle (falling/rising) edge mode 142 + * @recalced_mask: bit mask to indicate a need to recalulate the mask 143 + * @route_mask: bits describing the routing pins of per bank 144 + */ 145 + struct rockchip_pin_bank { 146 + struct device *dev; 147 + void __iomem *reg_base; 148 + struct regmap *regmap_pull; 149 + struct clk *clk; 150 + struct clk *db_clk; 151 + int irq; 152 + u32 saved_masks; 153 + u32 pin_base; 154 + u8 nr_pins; 155 + char *name; 156 + u8 bank_num; 157 + struct rockchip_iomux iomux[4]; 158 + struct rockchip_drv drv[4]; 159 + enum rockchip_pin_pull_type pull_type[4]; 160 + bool valid; 161 + struct device_node *of_node; 162 + struct rockchip_pinctrl *drvdata; 163 + struct irq_domain *domain; 164 + struct gpio_chip gpio_chip; 165 + struct pinctrl_gpio_range grange; 166 + raw_spinlock_t slock; 167 + const struct rockchip_gpio_regs *gpio_regs; 168 + u32 gpio_type; 169 + u32 toggle_edge_mode; 170 + u32 recalced_mask; 171 + u32 route_mask; 172 + }; 173 + 174 + /** 175 + * struct rockchip_mux_recalced_data: represent a pin iomux data. 176 + * @num: bank number. 177 + * @pin: pin number. 178 + * @bit: index at register. 179 + * @reg: register offset. 
180 + * @mask: mask bit 181 + */ 182 + struct rockchip_mux_recalced_data { 183 + u8 num; 184 + u8 pin; 185 + u32 reg; 186 + u8 bit; 187 + u8 mask; 188 + }; 189 + 190 + enum rockchip_mux_route_location { 191 + ROCKCHIP_ROUTE_SAME = 0, 192 + ROCKCHIP_ROUTE_PMU, 193 + ROCKCHIP_ROUTE_GRF, 194 + }; 195 + 196 + /** 197 + * struct rockchip_mux_recalced_data: represent a pin iomux data. 198 + * @bank_num: bank number. 199 + * @pin: index at register or used to calc index. 200 + * @func: the min pin. 201 + * @route_location: the mux route location (same, pmu, grf). 202 + * @route_offset: the max pin. 203 + * @route_val: the register offset. 204 + */ 205 + struct rockchip_mux_route_data { 206 + u8 bank_num; 207 + u8 pin; 208 + u8 func; 209 + enum rockchip_mux_route_location route_location; 210 + u32 route_offset; 211 + u32 route_val; 212 + }; 213 + 214 + struct rockchip_pin_ctrl { 215 + struct rockchip_pin_bank *pin_banks; 216 + u32 nr_banks; 217 + u32 nr_pins; 218 + char *label; 219 + enum rockchip_pinctrl_type type; 220 + int grf_mux_offset; 221 + int pmu_mux_offset; 222 + int grf_drv_offset; 223 + int pmu_drv_offset; 224 + struct rockchip_mux_recalced_data *iomux_recalced; 225 + u32 niomux_recalced; 226 + struct rockchip_mux_route_data *iomux_routes; 227 + u32 niomux_routes; 228 + 229 + void (*pull_calc_reg)(struct rockchip_pin_bank *bank, 230 + int pin_num, struct regmap **regmap, 231 + int *reg, u8 *bit); 232 + void (*drv_calc_reg)(struct rockchip_pin_bank *bank, 233 + int pin_num, struct regmap **regmap, 234 + int *reg, u8 *bit); 235 + int (*schmitt_calc_reg)(struct rockchip_pin_bank *bank, 236 + int pin_num, struct regmap **regmap, 237 + int *reg, u8 *bit); 238 + }; 239 + 240 + struct rockchip_pin_config { 241 + unsigned int func; 242 + unsigned long *configs; 243 + unsigned int nconfigs; 244 + }; 245 + 246 + /** 247 + * struct rockchip_pin_group: represent group of pins of a pinmux function. 248 + * @name: name of the pin group, used to lookup the group. 
249 + * @pins: the pins included in this group. 250 + * @npins: number of pins included in this group. 251 + * @data: local pin configuration 252 + */ 253 + struct rockchip_pin_group { 254 + const char *name; 255 + unsigned int npins; 256 + unsigned int *pins; 257 + struct rockchip_pin_config *data; 258 + }; 259 + 260 + /** 261 + * struct rockchip_pmx_func: represent a pin function. 262 + * @name: name of the pin function, used to lookup the function. 263 + * @groups: one or more names of pin groups that provide this function. 264 + * @ngroups: number of groups included in @groups. 265 + */ 266 + struct rockchip_pmx_func { 267 + const char *name; 268 + const char **groups; 269 + u8 ngroups; 270 + }; 271 + 272 + struct rockchip_pinctrl { 273 + struct regmap *regmap_base; 274 + int reg_size; 275 + struct regmap *regmap_pull; 276 + struct regmap *regmap_pmu; 277 + struct device *dev; 278 + struct rockchip_pin_ctrl *ctrl; 279 + struct pinctrl_desc pctl; 280 + struct pinctrl_dev *pctl_dev; 281 + struct rockchip_pin_group *groups; 282 + unsigned int ngroups; 283 + struct rockchip_pmx_func *functions; 284 + unsigned int nfunctions; 285 + }; 286 + 287 + #endif
+2 -2
drivers/pinctrl/pinctrl-single.c
··· 1491 1491 mask = pcs->read(pcswi->reg); 1492 1492 raw_spin_unlock(&pcs->lock); 1493 1493 if (mask & pcs_soc->irq_status_mask) { 1494 - generic_handle_irq(irq_find_mapping(pcs->domain, 1495 - pcswi->hwirq)); 1494 + generic_handle_domain_irq(pcs->domain, 1495 + pcswi->hwirq); 1496 1496 count++; 1497 1497 } 1498 1498 }
+1 -1
drivers/pinctrl/pinctrl-st.c
··· 1420 1420 continue; 1421 1421 } 1422 1422 1423 - generic_handle_irq(irq_find_mapping(bank->gpio_chip.irq.domain, n)); 1423 + generic_handle_domain_irq(bank->gpio_chip.irq.domain, n); 1424 1424 } 1425 1425 } 1426 1426 }
+1 -3
drivers/pinctrl/qcom/pinctrl-msm.c
··· 1177 1177 const struct msm_pingroup *g; 1178 1178 struct msm_pinctrl *pctrl = gpiochip_get_data(gc); 1179 1179 struct irq_chip *chip = irq_desc_get_chip(desc); 1180 - int irq_pin; 1181 1180 int handled = 0; 1182 1181 u32 val; 1183 1182 int i; ··· 1191 1192 g = &pctrl->soc->groups[i]; 1192 1193 val = msm_readl_intr_status(pctrl, g); 1193 1194 if (val & BIT(g->intr_status_bit)) { 1194 - irq_pin = irq_find_mapping(gc->irq.domain, i); 1195 - generic_handle_irq(irq_pin); 1195 + generic_handle_domain_irq(gc->irq.domain, i); 1196 1196 handled++; 1197 1197 } 1198 1198 }
+7 -8
drivers/pinctrl/samsung/pinctrl-exynos.c
··· 246 246 { 247 247 struct samsung_pinctrl_drv_data *d = data; 248 248 struct samsung_pin_bank *bank = d->pin_banks; 249 - unsigned int svc, group, pin, virq; 249 + unsigned int svc, group, pin; 250 + int ret; 250 251 251 252 svc = readl(bank->eint_base + EXYNOS_SVC_OFFSET); 252 253 group = EXYNOS_SVC_GROUP(svc); ··· 257 256 return IRQ_HANDLED; 258 257 bank += (group - 1); 259 258 260 - virq = irq_linear_revmap(bank->irq_domain, pin); 261 - if (!virq) 259 + ret = generic_handle_domain_irq(bank->irq_domain, pin); 260 + if (ret) 262 261 return IRQ_NONE; 263 - generic_handle_irq(virq); 262 + 264 263 return IRQ_HANDLED; 265 264 } 266 265 ··· 474 473 struct exynos_weint_data *eintd = irq_desc_get_handler_data(desc); 475 474 struct samsung_pin_bank *bank = eintd->bank; 476 475 struct irq_chip *chip = irq_desc_get_chip(desc); 477 - int eint_irq; 478 476 479 477 chained_irq_enter(chip, desc); 480 478 481 - eint_irq = irq_linear_revmap(bank->irq_domain, eintd->irq); 482 - generic_handle_irq(eint_irq); 479 + generic_handle_domain_irq(bank->irq_domain, eintd->irq); 483 480 484 481 chained_irq_exit(chip, desc); 485 482 } ··· 489 490 490 491 while (pend) { 491 492 irq = fls(pend) - 1; 492 - generic_handle_irq(irq_find_mapping(domain, irq)); 493 + generic_handle_domain_irq(domain, irq); 493 494 pend &= ~(1 << irq); 494 495 } 495 496 }
+10 -15
drivers/pinctrl/samsung/pinctrl-s3c24xx.c
··· 234 234 { 235 235 struct irq_data *data = irq_desc_get_irq_data(desc); 236 236 struct s3c24xx_eint_data *eint_data = irq_desc_get_handler_data(desc); 237 - unsigned int virq; 237 + int ret; 238 238 239 239 /* the first 4 eints have a simple 1 to 1 mapping */ 240 - virq = irq_linear_revmap(eint_data->domains[data->hwirq], data->hwirq); 240 + ret = generic_handle_domain_irq(eint_data->domains[data->hwirq], data->hwirq); 241 241 /* Something must be really wrong if an unmapped EINT is unmasked */ 242 - BUG_ON(!virq); 243 - 244 - generic_handle_irq(virq); 242 + BUG_ON(ret); 245 243 } 246 244 247 245 /* Handling of EINTs 0-3 on S3C2412 and S3C2413 */ ··· 288 290 struct s3c24xx_eint_data *eint_data = irq_desc_get_handler_data(desc); 289 291 struct irq_data *data = irq_desc_get_irq_data(desc); 290 292 struct irq_chip *chip = irq_data_get_irq_chip(data); 291 - unsigned int virq; 293 + int ret; 292 294 293 295 chained_irq_enter(chip, desc); 294 296 295 297 /* the first 4 eints have a simple 1 to 1 mapping */ 296 - virq = irq_linear_revmap(eint_data->domains[data->hwirq], data->hwirq); 298 + ret = generic_handle_domain_irq(eint_data->domains[data->hwirq], data->hwirq); 297 299 /* Something must be really wrong if an unmapped EINT is unmasked */ 298 - BUG_ON(!virq); 299 - 300 - generic_handle_irq(virq); 300 + BUG_ON(ret); 301 301 302 302 chained_irq_exit(chip, desc); 303 303 } ··· 360 364 pend &= range; 361 365 362 366 while (pend) { 363 - unsigned int virq, irq; 367 + unsigned int irq; 368 + int ret; 364 369 365 370 irq = __ffs(pend); 366 371 pend &= ~(1 << irq); 367 - virq = irq_linear_revmap(data->domains[irq], irq - offset); 372 + ret = generic_handle_domain_irq(data->domains[irq], irq - offset); 368 373 /* Something is really wrong if an unmapped EINT is unmasked */ 369 - BUG_ON(!virq); 370 - 371 - generic_handle_irq(virq); 374 + BUG_ON(ret); 372 375 } 373 376 374 377 chained_irq_exit(chip, desc);
+7 -10
drivers/pinctrl/samsung/pinctrl-s3c64xx.c
··· 414 414 unsigned int svc; 415 415 unsigned int group; 416 416 unsigned int pin; 417 - unsigned int virq; 417 + int ret; 418 418 419 419 svc = readl(drvdata->virt_base + SERVICE_REG); 420 420 group = SVC_GROUP(svc); ··· 431 431 pin -= 8; 432 432 } 433 433 434 - virq = irq_linear_revmap(data->domains[group], pin); 434 + ret = generic_handle_domain_irq(data->domains[group], pin); 435 435 /* 436 436 * Something must be really wrong if an unmapped EINT 437 437 * was unmasked... 438 438 */ 439 - BUG_ON(!virq); 440 - 441 - generic_handle_irq(virq); 439 + BUG_ON(ret); 442 440 } while (1); 443 441 444 442 chained_irq_exit(chip, desc); ··· 605 607 pend &= range; 606 608 607 609 while (pend) { 608 - unsigned int virq, irq; 610 + unsigned int irq; 611 + int ret; 609 612 610 613 irq = fls(pend) - 1; 611 614 pend &= ~(1 << irq); 612 - virq = irq_linear_revmap(data->domains[irq], data->pins[irq]); 615 + ret = generic_handle_domain_irq(data->domains[irq], data->pins[irq]); 613 616 /* 614 617 * Something must be really wrong if an unmapped EINT 615 618 * was unmasked... 616 619 */ 617 - BUG_ON(!virq); 618 - 619 - generic_handle_irq(virq); 620 + BUG_ON(ret); 620 621 } 621 622 622 623 chained_irq_exit(chip, desc);
+1 -2
drivers/pinctrl/spear/pinctrl-plgpio.c
··· 400 400 401 401 /* get correct irq line number */ 402 402 pin = i * MAX_GPIO_PER_REG + pin; 403 - generic_handle_irq( 404 - irq_find_mapping(gc->irq.domain, pin)); 403 + generic_handle_domain_irq(gc->irq.domain, pin); 405 404 } 406 405 } 407 406 chained_irq_exit(irqchip, desc);
+3 -5
drivers/pinctrl/sunxi/pinctrl-sunxi.c
··· 1149 1149 if (val) { 1150 1150 int irqoffset; 1151 1151 1152 - for_each_set_bit(irqoffset, &val, IRQ_PER_BANK) { 1153 - int pin_irq = irq_find_mapping(pctl->domain, 1154 - bank * IRQ_PER_BANK + irqoffset); 1155 - generic_handle_irq(pin_irq); 1156 - } 1152 + for_each_set_bit(irqoffset, &val, IRQ_PER_BANK) 1153 + generic_handle_domain_irq(pctl->domain, 1154 + bank * IRQ_PER_BANK + irqoffset); 1157 1155 } 1158 1156 1159 1157 chained_irq_exit(chip, desc);
+5 -3
include/linux/interrupt.h
··· 13 13 #include <linux/hrtimer.h> 14 14 #include <linux/kref.h> 15 15 #include <linux/workqueue.h> 16 + #include <linux/jump_label.h> 16 17 17 18 #include <linux/atomic.h> 18 19 #include <asm/ptrace.h> ··· 475 474 476 475 #ifdef CONFIG_IRQ_FORCED_THREADING 477 476 # ifdef CONFIG_PREEMPT_RT 478 - # define force_irqthreads (true) 477 + # define force_irqthreads() (true) 479 478 # else 480 - extern bool force_irqthreads; 479 + DECLARE_STATIC_KEY_FALSE(force_irqthreads_key); 480 + # define force_irqthreads() (static_branch_unlikely(&force_irqthreads_key)) 481 481 # endif 482 482 #else 483 - #define force_irqthreads (0) 483 + #define force_irqthreads() (false) 484 484 #endif 485 485 486 486 #ifndef local_softirq_pending
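The point of turning `force_irqthreads` into the function-like macro `force_irqthreads()` is that all three configurations keep identical call syntax while the !PREEMPT_RT case compiles down to a jump-label static branch instead of a memory load. A plain-C sketch of the same shape, with the static key replaced by an ordinary bool (names here are invented for illustration):

```c
#include <stdbool.h>

/* Sketch of the force_irqthreads() pattern: one function-like macro per
 * configuration, so callers never care whether the value is a
 * compile-time constant or a runtime flag. In the kernel the runtime
 * flag is a jump-label static key, not a plain bool. */
#ifdef SKETCH_PREEMPT_RT
# define force_irqthreads_sketch() (true)
#else
static bool force_irqthreads_flag;

/* Boot-time setup, analogous to the "threadirqs" early_param handler. */
static void enable_forced_threading(void)
{
	force_irqthreads_flag = true;
}
# define force_irqthreads_sketch() (force_irqthreads_flag)
#endif
```

With the old plain-variable form, PREEMPT_RT's `#define force_irqthreads (true)` and the extern bool had to be referenced without parentheses; the macro form is what lets the later hunks in manage.c and softirq.c write `force_irqthreads()` uniformly.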
+10 -4
include/linux/msi.h
··· 107 107 * address or data changes 108 108 * @write_msi_msg_data: Data parameter for the callback. 109 109 * 110 - * @masked: [PCI MSI/X] Mask bits 110 + * @msi_mask: [PCI MSI] MSI cached mask bits 111 + * @msix_ctrl: [PCI MSI-X] MSI-X cached per vector control bits 111 112 * @is_msix: [PCI MSI/X] True if MSI-X 112 113 * @multiple: [PCI MSI/X] log2 num of messages allocated 113 114 * @multi_cap: [PCI MSI/X] log2 num of messages supported ··· 140 139 union { 141 140 /* PCI MSI/X specific data */ 142 141 struct { 143 - u32 masked; 142 + union { 143 + u32 msi_mask; 144 + u32 msix_ctrl; 145 + }; 144 146 struct { 145 147 u8 is_msix : 1; 146 148 u8 multiple : 3; ··· 236 232 void __pci_read_msi_msg(struct msi_desc *entry, struct msi_msg *msg); 237 233 void __pci_write_msi_msg(struct msi_desc *entry, struct msi_msg *msg); 238 234 239 - u32 __pci_msix_desc_mask_irq(struct msi_desc *desc, u32 flag); 240 - void __pci_msi_desc_mask_irq(struct msi_desc *desc, u32 mask, u32 flag); 241 235 void pci_msi_mask_irq(struct irq_data *data); 242 236 void pci_msi_unmask_irq(struct irq_data *data); 237 + 238 + const struct attribute_group **msi_populate_sysfs(struct device *dev); 239 + void msi_destroy_sysfs(struct device *dev, 240 + const struct attribute_group **msi_irq_groups); 243 241 244 242 /* 245 243 * The arch hooks to setup up msi irqs. Default functions are implemented
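The `masked` → anonymous-union change above gives the one cached 32-bit register two accurate names without growing `msi_desc`: a descriptor is either MSI (mask bits) or MSI-X (per-vector control), never both, so the two fields can share storage. A minimal standalone sketch of the layout (struct name hypothetical):

```c
#include <stdint.h>

/* Sketch of the msi_desc change: one cached 32-bit register value,
 * two names, selected by the is_msix flag. Anonymous unions like this
 * need C11 or the GNU extension the kernel builds with. */
struct pci_msi_sketch {
	union {
		uint32_t msi_mask;	/* [PCI MSI]   cached mask bits */
		uint32_t msix_ctrl;	/* [PCI MSI-X] cached vector control */
	};
	uint8_t is_msix : 1;		/* which name is meaningful */
};
```

Because the fields alias, writing through one name is visible through the other; the flag, not the storage, says which interpretation applies.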
+4 -4
kernel/irq/affinity.c
··· 355 355 goto fail_npresmsk; 356 356 357 357 /* Stabilize the cpumasks */ 358 - get_online_cpus(); 358 + cpus_read_lock(); 359 359 build_node_to_cpumask(node_to_cpumask); 360 360 361 361 /* Spread on present CPUs starting from affd->pre_vectors */ ··· 384 384 nr_others = ret; 385 385 386 386 fail_build_affinity: 387 - put_online_cpus(); 387 + cpus_read_unlock(); 388 388 389 389 if (ret >= 0) 390 390 WARN_ON(nr_present + nr_others < numvecs); ··· 505 505 if (affd->calc_sets) { 506 506 set_vecs = maxvec - resv; 507 507 } else { 508 - get_online_cpus(); 508 + cpus_read_lock(); 509 509 set_vecs = cpumask_weight(cpu_possible_mask); 510 - put_online_cpus(); 510 + cpus_read_unlock(); 511 511 } 512 512 513 513 return resv + min(set_vecs, maxvec - resv);
+1 -1
kernel/irq/cpuhotplug.c
··· 166 166 raw_spin_unlock(&desc->lock); 167 167 168 168 if (affinity_broken) { 169 - pr_warn_ratelimited("IRQ %u: no longer affine to CPU%u\n", 169 + pr_debug_ratelimited("IRQ %u: no longer affine to CPU%u\n", 170 170 irq, smp_processor_id()); 171 171 } 172 172 }
+10 -7
kernel/irq/generic-chip.c
··· 240 240 void __iomem *reg_base, irq_flow_handler_t handler) 241 241 { 242 242 struct irq_chip_generic *gc; 243 - unsigned long sz = sizeof(*gc) + num_ct * sizeof(struct irq_chip_type); 244 243 245 - gc = kzalloc(sz, GFP_KERNEL); 244 + gc = kzalloc(struct_size(gc, chip_types, num_ct), GFP_KERNEL); 246 245 if (gc) { 247 246 irq_init_generic_chip(gc, name, num_ct, irq_base, reg_base, 248 247 handler); ··· 287 288 { 288 289 struct irq_domain_chip_generic *dgc; 289 290 struct irq_chip_generic *gc; 290 - int numchips, sz, i; 291 291 unsigned long flags; 292 + int numchips, i; 293 + size_t dgc_sz; 294 + size_t gc_sz; 295 + size_t sz; 292 296 void *tmp; 293 297 294 298 if (d->gc) ··· 302 300 return -EINVAL; 303 301 304 302 /* Allocate a pointer, generic chip and chiptypes for each chip */ 305 - sz = sizeof(*dgc) + numchips * sizeof(gc); 306 - sz += numchips * (sizeof(*gc) + num_ct * sizeof(struct irq_chip_type)); 303 + gc_sz = struct_size(gc, chip_types, num_ct); 304 + dgc_sz = struct_size(dgc, gc, numchips); 305 + sz = dgc_sz + numchips * gc_sz; 307 306 308 307 tmp = dgc = kzalloc(sz, GFP_KERNEL); 309 308 if (!dgc) ··· 317 314 d->gc = dgc; 318 315 319 316 /* Calc pointer to the first generic chip */ 320 - tmp += sizeof(*dgc) + numchips * sizeof(gc); 317 + tmp += dgc_sz; 321 318 for (i = 0; i < numchips; i++) { 322 319 /* Store the pointer to the generic chip */ 323 320 dgc->gc[i] = gc = tmp; ··· 334 331 list_add_tail(&gc->list, &gc_list); 335 332 raw_spin_unlock_irqrestore(&gc_lock, flags); 336 333 /* Calc pointer to the next generic chip */ 337 - tmp += sizeof(*gc) + num_ct * sizeof(struct irq_chip_type); 334 + tmp += gc_sz; 338 335 } 339 336 return 0; 340 337 }
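Both hunks above replace open-coded `sizeof(*gc) + num_ct * sizeof(struct irq_chip_type)` arithmetic with `struct_size()`, which computes the size of a struct with a trailing flexible array and saturates on overflow so a subsequent allocation fails instead of being undersized. A self-contained sketch of what the helper does (types and function name invented; the real kernel version lives in `overflow.h` and uses checked-arithmetic builtins):

```c
#include <stddef.h>
#include <stdint.h>

/* Trailing flexible array, like chip_types[] in irq_chip_generic. */
struct chip_type_sketch { int regs[4]; };

struct gc_sketch {
	const char *name;
	struct chip_type_sketch chip_types[];	/* flexible array member */
};

/* Minimal struct_size() sketch: offset of the flexible array plus n
 * trailing elements, saturating to SIZE_MAX on multiplication
 * overflow so kzalloc(SIZE_MAX) fails rather than under-allocating. */
static size_t struct_size_sketch(size_t n)
{
	size_t base = offsetof(struct gc_sketch, chip_types);
	size_t elem = sizeof(struct chip_type_sketch);

	if (n && elem > (SIZE_MAX - base) / n)
		return SIZE_MAX;
	return base + n * elem;
}
```

That saturation is the safety the conversion buys: with the old open-coded `int sz` arithmetic, a huge `num_ct` could wrap and silently allocate too little.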
+16 -16
kernel/irq/ipi.c
··· 14 14 /** 15 15 * irq_reserve_ipi() - Setup an IPI to destination cpumask 16 16 * @domain: IPI domain 17 - * @dest: cpumask of cpus which can receive the IPI 17 + * @dest: cpumask of CPUs which can receive the IPI 18 18 * 19 19 * Allocate a virq that can be used to send IPI to any CPU in dest mask. 20 20 * 21 - * On success it'll return linux irq number and error code on failure 21 + * Return: Linux IRQ number on success or error code on failure 22 22 */ 23 23 int irq_reserve_ipi(struct irq_domain *domain, 24 24 const struct cpumask *dest) ··· 104 104 105 105 /** 106 106 * irq_destroy_ipi() - unreserve an IPI that was previously allocated 107 - * @irq: linux irq number to be destroyed 108 - * @dest: cpumask of cpus which should have the IPI removed 107 + * @irq: Linux IRQ number to be destroyed 108 + * @dest: cpumask of CPUs which should have the IPI removed 109 109 * 110 110 * The IPIs allocated with irq_reserve_ipi() are returned to the system 111 111 * destroying all virqs associated with them. 112 112 * 113 - * Return 0 on success or error code on failure. 113 + * Return: %0 on success or error code on failure. 114 114 */ 115 115 int irq_destroy_ipi(unsigned int irq, const struct cpumask *dest) 116 116 { ··· 150 150 } 151 151 152 152 /** 153 - * ipi_get_hwirq - Get the hwirq associated with an IPI to a cpu 154 - * @irq: linux irq number 155 - * @cpu: the target cpu 153 + * ipi_get_hwirq - Get the hwirq associated with an IPI to a CPU 154 + * @irq: Linux IRQ number 155 + * @cpu: the target CPU 156 156 * 157 157 * When dealing with coprocessors IPI, we need to inform the coprocessor of 158 158 * the hwirq it needs to use to receive and send IPIs. 159 159 * 160 - * Returns hwirq value on success and INVALID_HWIRQ on failure. 160 + * Return: hwirq value on success or INVALID_HWIRQ on failure. 
161 161 */ 162 162 irq_hw_number_t ipi_get_hwirq(unsigned int irq, unsigned int cpu) 163 163 { ··· 216 216 * This function is for architecture or core code to speed up IPI sending. Not 217 217 * usable from driver code. 218 218 * 219 - * Returns zero on success and negative error number on failure. 219 + * Return: %0 on success or negative error number on failure. 220 220 */ 221 221 int __ipi_send_single(struct irq_desc *desc, unsigned int cpu) 222 222 { ··· 250 250 } 251 251 252 252 /** 253 - * ipi_send_mask - send an IPI to target Linux SMP CPU(s) 253 + * __ipi_send_mask - send an IPI to target Linux SMP CPU(s) 254 254 * @desc: pointer to irq_desc of the IRQ 255 255 * @dest: dest CPU(s), must be a subset of the mask passed to 256 256 * irq_reserve_ipi() ··· 258 258 * This function is for architecture or core code to speed up IPI sending. Not 259 259 * usable from driver code. 260 260 * 261 - * Returns zero on success and negative error number on failure. 261 + * Return: %0 on success or negative error number on failure. 262 262 */ 263 263 int __ipi_send_mask(struct irq_desc *desc, const struct cpumask *dest) 264 264 { ··· 298 298 299 299 /** 300 300 * ipi_send_single - Send an IPI to a single CPU 301 - * @virq: linux irq number from irq_reserve_ipi() 301 + * @virq: Linux IRQ number from irq_reserve_ipi() 302 302 * @cpu: destination CPU, must in the destination mask passed to 303 303 * irq_reserve_ipi() 304 304 * 305 - * Returns zero on success and negative error number on failure. 305 + * Return: %0 on success or negative error number on failure. 
306 306 */ 307 307 int ipi_send_single(unsigned int virq, unsigned int cpu) 308 308 { ··· 319 319 320 320 /** 321 321 * ipi_send_mask - Send an IPI to target CPU(s) 322 - * @virq: linux irq number from irq_reserve_ipi() 322 + * @virq: Linux IRQ number from irq_reserve_ipi() 323 323 * @dest: dest CPU(s), must be a subset of the mask passed to 324 324 * irq_reserve_ipi() 325 325 * 326 - * Returns zero on success and negative error number on failure. 326 + * Return: %0 on success or negative error number on failure. 327 327 */ 328 328 int ipi_send_mask(unsigned int virq, const struct cpumask *dest) 329 329 {
+1 -1
kernel/irq/irqdesc.c
··· 188 188 189 189 raw_spin_lock_irq(&desc->lock); 190 190 if (desc->irq_data.domain) 191 - ret = sprintf(buf, "%d\n", (int)desc->irq_data.hwirq); 191 + ret = sprintf(buf, "%lu\n", desc->irq_data.hwirq); 192 192 raw_spin_unlock_irq(&desc->lock); 193 193 194 194 return ret;
+1
kernel/irq/irqdomain.c
··· 1215 1215 irqd->chip = ERR_PTR(-ENOTCONN); 1216 1216 return 0; 1217 1217 } 1218 + EXPORT_SYMBOL_GPL(irq_domain_disconnect_hierarchy); 1218 1219 1219 1220 static int irq_domain_trim_hierarchy(unsigned int virq) 1220 1221 {
+9 -10
kernel/irq/manage.c
··· 25 25 #include "internals.h" 26 26 27 27 #if defined(CONFIG_IRQ_FORCED_THREADING) && !defined(CONFIG_PREEMPT_RT) 28 - __read_mostly bool force_irqthreads; 29 - EXPORT_SYMBOL_GPL(force_irqthreads); 28 + DEFINE_STATIC_KEY_FALSE(force_irqthreads_key); 30 29 31 30 static int __init setup_forced_irqthreads(char *arg) 32 31 { 33 - force_irqthreads = true; 32 + static_branch_enable(&force_irqthreads_key); 34 33 return 0; 35 34 } 36 35 early_param("threadirqs", setup_forced_irqthreads); ··· 1259 1260 irqreturn_t (*handler_fn)(struct irq_desc *desc, 1260 1261 struct irqaction *action); 1261 1262 1262 - if (force_irqthreads && test_bit(IRQTF_FORCED_THREAD, 1263 - &action->thread_flags)) 1263 + if (force_irqthreads() && test_bit(IRQTF_FORCED_THREAD, 1264 + &action->thread_flags)) 1264 1265 handler_fn = irq_forced_thread_fn; 1265 1266 else 1266 1267 handler_fn = irq_thread_fn; ··· 1321 1322 1322 1323 static int irq_setup_forced_threading(struct irqaction *new) 1323 1324 { 1324 - if (!force_irqthreads) 1325 + if (!force_irqthreads()) 1325 1326 return 0; 1326 1327 if (new->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT)) 1327 1328 return 0; ··· 2071 2072 * request_threaded_irq - allocate an interrupt line 2072 2073 * @irq: Interrupt line to allocate 2073 2074 * @handler: Function to be called when the IRQ occurs. 2074 - * Primary handler for threaded interrupts 2075 - * If NULL and thread_fn != NULL the default 2076 - * primary handler is installed 2075 + * Primary handler for threaded interrupts. 2076 + * If handler is NULL and thread_fn != NULL 2077 + * the default primary handler is installed. 
2077 2078 * @thread_fn: Function called from the irq handler thread 2078 2079 * If NULL, no irq thread is created 2079 2080 * @irqflags: Interrupt type flags ··· 2107 2108 * 2108 2109 * IRQF_SHARED Interrupt is shared 2109 2110 * IRQF_TRIGGER_* Specify active edge(s) or level 2110 - * 2111 + * IRQF_ONESHOT Run thread_fn with interrupt line masked 2111 2112 */ 2112 2113 int request_threaded_irq(unsigned int irq, irq_handler_t handler, 2113 2114 irq_handler_t thread_fn, unsigned long irqflags,
+2 -1
kernel/irq/matrix.c
··· 280 280 /** 281 281 * irq_matrix_alloc_managed - Allocate a managed interrupt in a CPU map 282 282 * @m: Matrix pointer 283 - * @cpu: On which CPU the interrupt should be allocated 283 + * @msk: Which CPUs to search in 284 + * @mapped_cpu: Pointer to store the CPU for which the irq was allocated 284 285 */ 285 286 int irq_matrix_alloc_managed(struct irq_matrix *m, const struct cpumask *msk, 286 287 unsigned int *mapped_cpu)
+146 -7
kernel/irq/msi.c
··· 14 14 #include <linux/irqdomain.h> 15 15 #include <linux/msi.h> 16 16 #include <linux/slab.h> 17 + #include <linux/pci.h> 17 18 18 19 #include "internals.h" 19 20 20 21 /** 21 - * alloc_msi_entry - Allocate an initialize msi_entry 22 + * alloc_msi_entry - Allocate an initialized msi_desc 22 23 * @dev: Pointer to the device for which this is allocated 23 24 * @nvec: The number of vectors used in this entry 24 25 * @affinity: Optional pointer to an affinity mask array size of @nvec 25 26 * 26 - * If @affinity is not NULL then an affinity array[@nvec] is allocated 27 + * If @affinity is not %NULL then an affinity array[@nvec] is allocated 27 28 * and the affinity masks and flags from @affinity are copied. 29 + * 30 + * Return: pointer to allocated &msi_desc on success or %NULL on failure 28 31 */ 29 32 struct msi_desc *alloc_msi_entry(struct device *dev, int nvec, 30 33 const struct irq_affinity_desc *affinity) ··· 72 69 } 73 70 EXPORT_SYMBOL_GPL(get_cached_msi_msg); 74 71 72 + static ssize_t msi_mode_show(struct device *dev, struct device_attribute *attr, 73 + char *buf) 74 + { 75 + struct msi_desc *entry; 76 + bool is_msix = false; 77 + unsigned long irq; 78 + int retval; 79 + 80 + retval = kstrtoul(attr->attr.name, 10, &irq); 81 + if (retval) 82 + return retval; 83 + 84 + entry = irq_get_msi_desc(irq); 85 + if (!entry) 86 + return -ENODEV; 87 + 88 + if (dev_is_pci(dev)) 89 + is_msix = entry->msi_attrib.is_msix; 90 + 91 + return sysfs_emit(buf, "%s\n", is_msix ? "msix" : "msi"); 92 + } 93 + 94 + /** 95 + * msi_populate_sysfs - Populate msi_irqs sysfs entries for devices 96 + * @dev: The device(PCI, platform etc) who will get sysfs entries 97 + * 98 + * Return attribute_group ** so that specific bus MSI can save it to 99 + * somewhere during initilizing msi irqs. 
If devices has no MSI irq, 100 + * return NULL; if it fails to populate sysfs, return ERR_PTR 101 + */ 102 + const struct attribute_group **msi_populate_sysfs(struct device *dev) 103 + { 104 + const struct attribute_group **msi_irq_groups; 105 + struct attribute **msi_attrs, *msi_attr; 106 + struct device_attribute *msi_dev_attr; 107 + struct attribute_group *msi_irq_group; 108 + struct msi_desc *entry; 109 + int ret = -ENOMEM; 110 + int num_msi = 0; 111 + int count = 0; 112 + int i; 113 + 114 + /* Determine how many msi entries we have */ 115 + for_each_msi_entry(entry, dev) 116 + num_msi += entry->nvec_used; 117 + if (!num_msi) 118 + return NULL; 119 + 120 + /* Dynamically create the MSI attributes for the device */ 121 + msi_attrs = kcalloc(num_msi + 1, sizeof(void *), GFP_KERNEL); 122 + if (!msi_attrs) 123 + return ERR_PTR(-ENOMEM); 124 + 125 + for_each_msi_entry(entry, dev) { 126 + for (i = 0; i < entry->nvec_used; i++) { 127 + msi_dev_attr = kzalloc(sizeof(*msi_dev_attr), GFP_KERNEL); 128 + if (!msi_dev_attr) 129 + goto error_attrs; 130 + msi_attrs[count] = &msi_dev_attr->attr; 131 + 132 + sysfs_attr_init(&msi_dev_attr->attr); 133 + msi_dev_attr->attr.name = kasprintf(GFP_KERNEL, "%d", 134 + entry->irq + i); 135 + if (!msi_dev_attr->attr.name) 136 + goto error_attrs; 137 + msi_dev_attr->attr.mode = 0444; 138 + msi_dev_attr->show = msi_mode_show; 139 + ++count; 140 + } 141 + } 142 + 143 + msi_irq_group = kzalloc(sizeof(*msi_irq_group), GFP_KERNEL); 144 + if (!msi_irq_group) 145 + goto error_attrs; 146 + msi_irq_group->name = "msi_irqs"; 147 + msi_irq_group->attrs = msi_attrs; 148 + 149 + msi_irq_groups = kcalloc(2, sizeof(void *), GFP_KERNEL); 150 + if (!msi_irq_groups) 151 + goto error_irq_group; 152 + msi_irq_groups[0] = msi_irq_group; 153 + 154 + ret = sysfs_create_groups(&dev->kobj, msi_irq_groups); 155 + if (ret) 156 + goto error_irq_groups; 157 + 158 + return msi_irq_groups; 159 + 160 + error_irq_groups: 161 + kfree(msi_irq_groups); 162 + 
error_irq_group: 163 + kfree(msi_irq_group); 164 + error_attrs: 165 + count = 0; 166 + msi_attr = msi_attrs[count]; 167 + while (msi_attr) { 168 + msi_dev_attr = container_of(msi_attr, struct device_attribute, attr); 169 + kfree(msi_attr->name); 170 + kfree(msi_dev_attr); 171 + ++count; 172 + msi_attr = msi_attrs[count]; 173 + } 174 + kfree(msi_attrs); 175 + return ERR_PTR(ret); 176 + } 177 + 178 + /** 179 + * msi_destroy_sysfs - Destroy msi_irqs sysfs entries for devices 180 + * @dev: The device(PCI, platform etc) who will remove sysfs entries 181 + * @msi_irq_groups: attribute_group for device msi_irqs entries 182 + */ 183 + void msi_destroy_sysfs(struct device *dev, const struct attribute_group **msi_irq_groups) 184 + { 185 + struct device_attribute *dev_attr; 186 + struct attribute **msi_attrs; 187 + int count = 0; 188 + 189 + if (msi_irq_groups) { 190 + sysfs_remove_groups(&dev->kobj, msi_irq_groups); 191 + msi_attrs = msi_irq_groups[0]->attrs; 192 + while (msi_attrs[count]) { 193 + dev_attr = container_of(msi_attrs[count], 194 + struct device_attribute, attr); 195 + kfree(dev_attr->attr.name); 196 + kfree(dev_attr); 197 + ++count; 198 + } 199 + kfree(msi_attrs); 200 + kfree(msi_irq_groups[0]); 201 + kfree(msi_irq_groups); 202 + } 203 + } 204 + 75 205 #ifdef CONFIG_GENERIC_MSI_IRQ_DOMAIN 76 206 static inline void irq_chip_write_msi_msg(struct irq_data *data, 77 207 struct msi_msg *msg) ··· 233 97 * 234 98 * Intended to be used by MSI interrupt controllers which are 235 99 * implemented with hierarchical domains. 
100 + * 101 + * Return: IRQ_SET_MASK_* result code 236 102 */ 237 103 int msi_domain_set_affinity(struct irq_data *irq_data, 238 104 const struct cpumask *mask, bool force) ··· 415 277 } 416 278 417 279 /** 418 - * msi_create_irq_domain - Create a MSI interrupt domain 280 + * msi_create_irq_domain - Create an MSI interrupt domain 419 281 * @fwnode: Optional fwnode of the interrupt controller 420 282 * @info: MSI domain info 421 283 * @parent: Parent irq domain 284 + * 285 + * Return: pointer to the created &struct irq_domain or %NULL on failure 422 286 */ 423 287 struct irq_domain *msi_create_irq_domain(struct fwnode_handle *fwnode, 424 288 struct msi_domain_info *info, ··· 627 487 * are allocated 628 488 * @nvec: The number of interrupts to allocate 629 489 * 630 - * Returns 0 on success or an error code. 490 + * Return: %0 on success or an error code. 631 491 */ 632 492 int msi_domain_alloc_irqs(struct irq_domain *domain, struct device *dev, 633 493 int nvec) ··· 664 524 } 665 525 666 526 /** 667 - * __msi_domain_free_irqs - Free interrupts from a MSI interrupt @domain associated tp @dev 527 + * msi_domain_free_irqs - Free interrupts from a MSI interrupt @domain associated to @dev 668 528 * @domain: The domain to managing the interrupts 669 529 * @dev: Pointer to device struct of the device for which the interrupts 670 530 * are free ··· 681 541 * msi_get_domain_info - Get the MSI interrupt domain info for @domain 682 542 * @domain: The interrupt domain to retrieve data from 683 543 * 684 - * Returns the pointer to the msi_domain_info stored in 685 - * @domain->host_data. 544 + * Return: the pointer to the msi_domain_info stored in @domain->host_data. 686 545 */ 687 546 struct msi_domain_info *msi_get_domain_info(struct irq_domain *domain) 688 547 {
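The sysfs code moved into the MSI core above creates one attribute per vector, named after its Linux IRQ number (`kasprintf(GFP_KERNEL, "%d", entry->irq + i)`), in a NULL-terminated array so the error path can walk the entries and free each name. A userspace sketch of just that naming-and-cleanup shape (function name and error handling are illustrative, not the kernel code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Sketch of the msi_populate_sysfs() naming scheme: a NULL-terminated
 * array of heap-allocated strings, one per vector, named by Linux IRQ
 * number. On failure, free exactly the entries allocated so far. */
static char **make_irq_names(unsigned int first_irq, unsigned int nvec)
{
	char **names = calloc(nvec + 1, sizeof(*names)); /* +1: NULL end */
	unsigned int i;

	if (!names)
		return NULL;
	for (i = 0; i < nvec; i++) {
		char buf[16];
		size_t len;

		snprintf(buf, sizeof(buf), "%u", first_irq + i);
		len = strlen(buf) + 1;
		names[i] = malloc(len);
		if (!names[i])
			goto err;
		memcpy(names[i], buf, len);
	}
	return names;
err:
	while (i--)		/* unwind only what was allocated */
		free(names[i]);
	free(names);
	return NULL;
}
```

The NULL sentinel is what makes `msi_destroy_sysfs()` possible without a stored count: it simply iterates until the first NULL attribute, freeing each name and wrapper as it goes.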
+1 -1
kernel/irq/pm.c
··· 227 227 } 228 228 229 229 /** 230 - * irq_pm_syscore_ops - enable interrupt lines early 230 + * irq_pm_syscore_resume - enable interrupt lines early 231 231 * 232 232 * Enable all interrupt lines with %IRQF_EARLY_RESUME set. 233 233 */
+1 -1
kernel/irq/proc.c
··· 513 513 seq_printf(p, " %8s", "None"); 514 514 } 515 515 if (desc->irq_data.domain) 516 - seq_printf(p, " %*d", prec, (int) desc->irq_data.hwirq); 516 + seq_printf(p, " %*lu", prec, desc->irq_data.hwirq); 517 517 else 518 518 seq_printf(p, " %*s", prec, ""); 519 519 #ifdef CONFIG_GENERIC_IRQ_SHOW_LEVEL
+2
kernel/irq/timings.c
··· 799 799 800 800 __irq_timings_store(irq, irqs, ti->intervals[i]); 801 801 if (irqs->circ_timings[i & IRQ_TIMINGS_MASK] != index) { 802 + ret = -EBADSLT; 802 803 pr_err("Failed to store in the circular buffer\n"); 803 804 goto out; 804 805 } 805 806 } 806 807 807 808 if (irqs->count != ti->count) { 809 + ret = -ERANGE; 808 810 pr_err("Count differs\n"); 809 811 goto out; 810 812 }
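The two one-line additions above fix a classic goto-cleanup bug: each failure branch jumped to `out` without assigning `ret`, so the self-test could report the stale (success) value. A minimal sketch of the bug class, with -77 and -34 standing in for -EBADSLT and -ERANGE (the function itself is invented for illustration):

```c
/* Sketch of the error path fixed in the timings self-test: the error
 * code must be assigned *before* the goto, otherwise the function
 * falls through to "out" still holding the earlier success value. */
static int selftest_sketch(int stored_ok, int count_ok)
{
	int ret = 0;

	if (!stored_ok) {
		ret = -77;	/* set the code before jumping */
		goto out;
	}
	if (!count_ok) {
		ret = -34;
		goto out;
	}
out:
	return ret;
}
```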
+1 -1
kernel/softirq.c
··· 422 422 if (ksoftirqd_running(local_softirq_pending())) 423 423 return; 424 424 425 - if (!force_irqthreads || !__this_cpu_read(ksoftirqd)) { 425 + if (!force_irqthreads() || !__this_cpu_read(ksoftirqd)) { 426 426 #ifdef CONFIG_HAVE_IRQ_EXIT_ON_IRQ_STACK 427 427 /* 428 428 * We can safely execute softirq on the current stack if