Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

genirq: Allow the affinity of a percpu interrupt to be set/retrieved

In order to prepare the genirq layer for the concept of partitioned
percpu interrupts, let's allow an affinity to be associated with
such an interrupt. We introduce:

- irq_set_percpu_devid_partition: flag an interrupt as a percpu-devid
interrupt, and associate it with an affinity
- irq_get_percpu_devid_partition: allow the affinity of that interrupt
to be retrieved.

This will allow a driver to discover which CPUs the per-cpu interrupt
can actually fire on.

Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: devicetree@vger.kernel.org
Cc: Jason Cooper <jason@lakedaemon.net>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Rob Herring <robh+dt@kernel.org>
Link: http://lkml.kernel.org/r/1460365075-7316-3-git-send-email-marc.zyngier@arm.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>

Authored by Marc Zyngier, committed by Thomas Gleixner
222df54f 651e8b54

3 files changed, 30 insertions(+), 1 deletion(-)
include/linux/irq.h (+4)

@@ -530,6 +530,10 @@
 }
 
 extern int irq_set_percpu_devid(unsigned int irq);
+extern int irq_set_percpu_devid_partition(unsigned int irq,
+					  const struct cpumask *affinity);
+extern int irq_get_percpu_devid_partition(unsigned int irq,
+					  struct cpumask *affinity);
 
 extern void
 __irq_set_handler(unsigned int irq, irq_flow_handler_t handle, int is_chained,
include/linux/irqdesc.h (+1)

@@ -66,6 +66,7 @@
 	int threads_handled_last;
 	raw_spinlock_t lock;
 	struct cpumask *percpu_enabled;
+	const struct cpumask *percpu_affinity;
 #ifdef CONFIG_SMP
 	const struct cpumask *affinity_hint;
 	struct irq_affinity_notify *affinity_notify;
kernel/irq/irqdesc.c (+25 -1)

@@ -595,7 +595,8 @@
 	chip_bus_sync_unlock(desc);
 }
 
-int irq_set_percpu_devid(unsigned int irq)
+int irq_set_percpu_devid_partition(unsigned int irq,
+				   const struct cpumask *affinity)
 {
 	struct irq_desc *desc = irq_to_desc(irq);
 
@@ -611,7 +612,30 @@
 	if (!desc->percpu_enabled)
 		return -ENOMEM;
 
+	if (affinity)
+		desc->percpu_affinity = affinity;
+	else
+		desc->percpu_affinity = cpu_possible_mask;
+
 	irq_set_percpu_devid_flags(irq);
+	return 0;
+}
+
+int irq_set_percpu_devid(unsigned int irq)
+{
+	return irq_set_percpu_devid_partition(irq, NULL);
+}
+
+int irq_get_percpu_devid_partition(unsigned int irq, struct cpumask *affinity)
+{
+	struct irq_desc *desc = irq_to_desc(irq);
+
+	if (!desc || !desc->percpu_enabled)
+		return -EINVAL;
+
+	if (affinity)
+		cpumask_copy(affinity, desc->percpu_affinity);
+
 	return 0;
 }
 