Linux kernel mirror (for testing) git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git

drivers/perf: arm-pmu: Handle per-interrupt affinity mask

On a big.LITTLE system, PMUs can be wired to CPUs using per-CPU
interrupts (PPIs). In this case, it is important to make sure that
enabling/disabling happens on the right set of CPUs.

So instead of relying on the interrupt-affinity property, we can
use the actual percpu affinity that DT exposes as part of the
interrupt specifier. The DT binding is also updated to reflect
the fact that the interrupt-affinity property shouldn't be used
in that case.

Acked-by: Rob Herring <robh@kernel.org>
Tested-by: Caesar Wang <wxt@rock-chips.com>
Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Authored by Marc Zyngier, committed by Catalin Marinas (19a469a5, 90f777be)

+25 -6 overall

Documentation/devicetree/bindings/arm/pmu.txt (+3 -1)
@@ -39,7 +39,9 @@
 		   When using a PPI, specifies a list of phandles to CPU
 		   nodes corresponding to the set of CPUs which have
 		   a PMU of this type signalling the PPI listed in the
-		   interrupts property.
+		   interrupts property, unless this is already specified
+		   by the PPI interrupt specifier itself (in which case
+		   the interrupt-affinity property shouldn't be present).
 
 		   This property should be present when there is more than
 		   a single SPI.
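For illustration, a hypothetical DT fragment showing the case the binding text now describes: on a GICv3 with PPI partitions, the PPI interrupt specifier itself names the set of CPUs via a partition phandle, so interrupt-affinity is redundant. Node names, labels and the CPU split below are made up; the partition syntax follows the arm,gic-v3 binding:

```dts
gic: interrupt-controller {
	compatible = "arm,gic-v3";
	/* other GIC properties omitted */

	ppi-partitions {
		part0: interrupt-partition-0 {
			affinity = <&cpu0 &cpu1>;	/* little cluster */
		};
	};
};

pmu {
	compatible = "arm,cortex-a53-pmu";
	/* affinity carried by the specifier itself ... */
	interrupts = <GIC_PPI 7 IRQ_TYPE_LEVEL_LOW &part0>;
	/* ... so no interrupt-affinity property here */
};
```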
drivers/perf/arm_pmu.c (+22 -5)
@@ -603,7 +603,8 @@
 
 	irq = platform_get_irq(pmu_device, 0);
 	if (irq >= 0 && irq_is_percpu(irq)) {
-		on_each_cpu(cpu_pmu_disable_percpu_irq, &irq, 1);
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_disable_percpu_irq, &irq, 1);
 		free_percpu_irq(irq, &hw_events->percpu_pmu);
 	} else {
 		for (i = 0; i < irqs; ++i) {
@@ -646,7 +645,9 @@
 				irq);
 			return err;
 		}
-		on_each_cpu(cpu_pmu_enable_percpu_irq, &irq, 1);
+
+		on_each_cpu_mask(&cpu_pmu->supported_cpus,
+				 cpu_pmu_enable_percpu_irq, &irq, 1);
 	} else {
 		for (i = 0; i < irqs; ++i) {
 			int cpu = i;
@@ -964,9 +961,23 @@
 		i++;
 	} while (1);
 
-	/* If we didn't manage to parse anything, claim to support all CPUs */
-	if (cpumask_weight(&pmu->supported_cpus) == 0)
-		cpumask_setall(&pmu->supported_cpus);
+	/* If we didn't manage to parse anything, try the interrupt affinity */
+	if (cpumask_weight(&pmu->supported_cpus) == 0) {
+		if (!using_spi) {
+			/* If using PPIs, check the affinity of the partition */
+			int ret, irq;
+
+			irq = platform_get_irq(pdev, 0);
+			ret = irq_get_percpu_devid_partition(irq, &pmu->supported_cpus);
+			if (ret) {
+				kfree(irqs);
+				return ret;
+			}
+		} else {
+			/* Otherwise default to all CPUs */
+			cpumask_setall(&pmu->supported_cpus);
+		}
+	}
 
 	/* If we matched up the IRQ affinities, use them to route the SPIs */
 	if (using_spi && i == pdev->num_resources)